
LiteLLM Got Backdoored — Supply Chain Attack Analysis
A Hijacked GitHub Action Compromised Every Python Process on the Machine
Not just every LiteLLM import. Every Python process. That's the part most coverage glossed over.
On March 24, 2026, attackers compromised LiteLLM's CI/CD pipeline by hijacking a GitHub Action in the PyPI publish step. Versions v1.82.7 and v1.82.8 were poisoned and pushed to PyPI. The malware didn't hide in LiteLLM's source code -- it installed a litellm_init.pth file.
If you don't know what .pth files do: Python executes them automatically on every interpreter startup. Not on import. On startup. Install the poisoned package once, and every Python process on that machine -- your Django server, your Jupyter notebook, your CI runner -- silently executes the attacker's code.
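To see the mechanism for yourself, here's a harmless demonstration (the file name and payload are illustrative, not the actual malware). Any line in a .pth file that begins with "import" is exec()'d by Python's site module, and site.addsitedir() processes a directory the same way interpreter startup processes site-packages:

```python
import os
import site
import tempfile

# A .pth file is supposed to add directories to sys.path, but the site
# module exec()s any line that starts with "import". That's the hook a
# .pth backdoor uses: code runs before your first line ever does.
demo_dir = tempfile.mkdtemp()
with open(os.path.join(demo_dir, "demo_init.pth"), "w") as f:
    f.write('import os; os.environ["PTH_RAN"] = "1"\n')  # harmless stand-in payload

# addsitedir() triggers the same .pth processing that happens at startup.
site.addsitedir(demo_dir)

print(os.environ.get("PTH_RAN"))  # -> 1, the import line already executed
```

In the real attack, the .pth file sat in site-packages itself, so every interpreter on the machine ran the payload with no addsitedir() call needed.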
What Got Stolen
The litellm_init.pth payload harvested:
- SSH keys -- full ~/.ssh contents
- AWS credentials -- ~/.aws/credentials and environment variables
- GCP service account keys
- Azure credentials
- .env files -- every secret you stored "securely" in environment files
- kubeconfig -- cluster access credentials
Everything was exfiltrated to models.litellm[.]cloud -- a domain designed to look like official BerriAI infrastructure but controlled by the attacker. Clever enough that outbound traffic monitoring wouldn't flag it at a glance.
The Blast Radius
LiteLLM is a transitive dependency for half the AI ecosystem. Downstream projects affected included CrewAI, DSPy, Mem0, Instructor, and Browser-Use. Any project that pulled litellm>=1.82.0 during the compromise window got the poisoned version.
BerriAI hired Google Mandiant for forensic investigation -- you don't bring in Mandiant for a minor incident.
Who Was Safe
One group came out clean: anyone running LiteLLM via the Docker image (ghcr.io/berriai/litellm). The container images pinned dependencies and didn't pull from PyPI during the compromise window. This is the strongest argument for containerized deployments I've seen in production.
Why AI Tooling Is a Uniquely Attractive Target
Traditional supply chain attacks steal credentials. AI supply chain attacks steal the keys to every AI provider simultaneously. LiteLLM is literally a proxy that holds your OpenAI, Anthropic, Cohere, and Azure keys in one place. Compromise one dependency, harvest the entire stack.
The usage patterns make detection harder too. LLM API calls are inherently variable in timing, payload size, and destination. A few extra outbound HTTPS requests blend into normal traffic. The ecosystem is barely two years old -- most packages have zero security review compared to web frameworks with decades of hardening.
Concrete Hardening Steps
If you installed v1.82.7 or v1.82.8, assume full credential compromise and rotate everything. Beyond that:
# Check for rogue .pth files
find $(python -c "import site; print(' '.join(site.getsitepackages() + [site.getusersitepackages()]))") -name "*.pth"
# Pin exact versions with hash verification
pip install --require-hashes -r requirements.txt
# Audit for known vulnerabilities
pip-audit --strict
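For context on what --require-hashes buys you: pip recomputes each downloaded artifact's digest and refuses to install on mismatch, so a poisoned re-upload fails closed. A minimal sketch of the same check (function name is mine, not pip's API):

```python
import hashlib

def verify_artifact(path, expected_sha256):
    """Recompute an artifact's SHA-256, as pip does under --require-hashes."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Stream in 64 KiB chunks so large wheels don't load into memory.
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest() == expected_sha256
```

Generate the pinned hashes with pip-compile --generate-hashes so every entry in requirements.txt carries a digest to verify against.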
- Pin exact versions: litellm==1.82.6, never litellm>=1.0
- Use a private PyPI mirror (Artifactory, CodeArtifact) with upload scanning
- Monitor outbound traffic from LLM proxy services -- they should only talk to known provider endpoints
- Containerize production deployments -- the Docker users were the only ones unaffected
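The find one-liner above only lists .pth files; deciding which are dangerous means reading them for executable import lines -- the exact feature this attack abused. Here's a small audit sketch (the heuristic is mine; legitimate packages also ship import-line .pth files, so treat hits as leads, not verdicts):

```python
import site
from pathlib import Path

def suspicious_pth_files(dirs=None):
    """Flag .pth lines that Python will exec() at interpreter startup."""
    if dirs is None:
        dirs = site.getsitepackages() + [site.getusersitepackages()]
    findings = []
    for d in dirs:
        p = Path(d)
        if not p.is_dir():
            continue
        for pth in sorted(p.glob("*.pth")):
            for n, line in enumerate(pth.read_text(errors="replace").splitlines(), 1):
                # Only lines starting with "import" are executed; plain
                # path entries just extend sys.path and are harmless.
                if line.startswith(("import ", "import\t")):
                    findings.append((str(pth), n, line.strip()))
    return findings

if __name__ == "__main__":
    for path, lineno, line in suspicious_pth_files():
        print(f"{path}:{lineno}: {line}")
```

Expect some noise: setuptools and coverage tooling use the same mechanism legitimately. Anything phoning home or decoding blobs is what you're looking for.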
The AI ecosystem's velocity is its vulnerability. Every pip install in an AI project pulls a dependency tree that nobody has fully audited, built by maintainers who may not have enabled 2FA on their GitHub accounts. Until the ecosystem adopts mandatory signing, reproducible builds, and CI/CD provenance attestation, this won't be the last backdoor -- it'll be the one we were lucky enough to catch.