Two high-severity vulnerabilities in Chainlit expose major enterprises to attacks leading to sensitive information disclosure, cybersecurity firm Zafran reports.
An open source Python package for building conversational AI applications, Chainlit has over 700,000 monthly downloads on PyPI.
The framework provides integration with LangChain, OpenAI, Bedrock, Llama, and more, and supports features such as authentication, cloud deployments, and telemetry.
According to Zafran, multiple Chainlit servers are accessible from the internet, including instances belonging to large enterprises and academic institutions, and they are susceptible to attacks that leak the contents of any file on the server.
This is possible because Chainlit versions prior to 2.9.4 are affected by CVE-2026-22218 and CVE-2026-22219, two high-severity bugs that allow threat actors to read arbitrary files and make requests to internal network services or cloud metadata endpoints.
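Since all versions prior to 2.9.4 are affected, a quick way to triage a deployment is to compare the installed package version against the patched release. The sketch below assumes the PyPI package name is `chainlit` and uses a naive numeric comparison (it does not handle pre-release suffixes):

```python
from importlib.metadata import PackageNotFoundError, version

PATCHED = "2.9.4"  # first fixed release, per Zafran's report

def is_vulnerable(ver: str, patched: str = PATCHED) -> bool:
    """Naive dotted-version comparison; assumes purely numeric parts."""
    def parts(v: str):
        return tuple(int(p) for p in v.split("."))
    return parts(ver) < parts(patched)

def check_installed() -> None:
    try:
        v = version("chainlit")  # assumes the PyPI distribution name
    except PackageNotFoundError:
        print("chainlit is not installed")
        return
    status = "VULNERABLE - upgrade" if is_vulnerable(v) else "patched"
    print(f"chainlit {v}: {status}")
```

For production checks, a dedicated version parser (such as `packaging.version`) would be more robust than the tuple comparison above.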
The flaws, Zafran says, allow attackers to exfiltrate environment variables that may contain “API keys, credentials, internal file paths, internal IPs, and ports”, and even the CHAINLIT_AUTH_SECRET variable, which is used to sign authentication tokens.
“Given user identifiers, which can be obtained by leaking the database or inferred from organization emails, an attacker can forge authentication tokens, and take over their accounts,” Zafran notes.
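The report does not describe Chainlit's exact token format, but the danger of a leaked signing secret is generic to any HMAC-based scheme: whoever holds the secret can mint tokens that verify identically to legitimate ones. A minimal, hypothetical illustration (the payload fields and encoding here are assumptions, not Chainlit's actual format):

```python
import base64
import hashlib
import hmac
import json

def forge_token(payload: dict, secret: bytes) -> str:
    """Sign an arbitrary payload with an HMAC-SHA256 secret.

    Anyone holding `secret` (e.g. a leaked CHAINLIT_AUTH_SECRET)
    produces tokens indistinguishable from server-issued ones.
    """
    body = base64.urlsafe_b64encode(
        json.dumps(payload, sort_keys=True).encode()
    )
    sig = hmac.new(secret, body, hashlib.sha256).digest()
    return body.decode() + "." + base64.urlsafe_b64encode(sig).decode()

# Hypothetical example: a secret exfiltrated via the file-read flaw
leaked_secret = b"leaked-CHAINLIT_AUTH_SECRET"
token = forge_token({"sub": "victim@example.com"}, leaked_secret)
# A server verifying with the same secret computes the same signature,
# so the forged token passes validation.
```

This is why rotating the signing secret, in addition to patching, is essential after a suspected leak: patched code still accepts any token minted with the old secret.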
If the deployment relies on the SQLAlchemy data layer with an SQLite backend, the Chainlit database, which includes users, conversations, messages, and metadata, can be leaked.
If the LangChain LLM integration framework is used, an attacker could exploit the bugs to leak the stored prompts and responses of all users from the LangChain cache. The attacker could also retrieve application source code from the Chainlit directory.
Chainlit instances deployed on AWS could be targeted to retrieve role endpoints and move laterally within the cloud environment, the cybersecurity firm says.
“Once cloud credentials or IAM tokens are obtained from the server, the attacker is no longer limited to the application, they gain access to the cloud environment behind it. Storage buckets, secret managers, LLM, internal data, and other cloud resources may become accessible to an attacker,” Zafran notes.
Related: Weaponized Invite Enabled Calendar Data Theft via Google Gemini
Related: Rethinking Security for Agentic AI
Related: Google Fortifies Chrome Agentic AI Against Indirect Prompt Injection Attacks
Related: Global Cyber Agencies Issue AI Security Guidance for Critical Infrastructure OT