LiteLLM Supply Chain Attack: What Small Businesses Should Know
A major AI tool library was just compromised
On March 24, a supply chain attack hit LiteLLM, a widely used Python library that helps developers connect to AI models from OpenAI, Anthropic, Google, and dozens of other providers. The compromised package had 97 million monthly downloads and was embedded in over 600 public GitHub projects. If your business uses AI tools built on Python, this is worth paying attention to right now.
The attack was discovered and contained within about three hours, but the damage window was real. Malicious code in versions 1.82.7 and 1.82.8 was designed to steal SSH keys, cloud credentials, API keys, and cryptocurrency wallet data from any machine that installed the update.
What happened
The attack chain
A threat actor known as “TeamPCP” did not hack LiteLLM directly. Instead, they compromised Trivy, a popular open-source security scanner used in LiteLLM’s CI/CD pipeline. By injecting malicious code into the build process, the attacker was able to publish tainted versions of LiteLLM to PyPI, the main Python package repository.
This is what makes supply chain attacks so dangerous. The LiteLLM maintainers did nothing wrong. A tool they trusted — one specifically designed to catch security issues — was itself compromised.
Key facts
- Affected versions: LiteLLM 1.82.7 and 1.82.8 on PyPI
- Exposure window: Approximately 3 hours before discovery and quarantine
- What was stolen: SSH keys, cloud provider credentials, API keys, crypto wallets
- Detection: Security firm Wiz identified the compromise and flagged it
- Current status: All LiteLLM packages on PyPI are quarantined pending a clean release
- Scope: 600+ public GitHub projects had unpinned dependencies that could have auto-updated
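The "unpinned dependencies" point is worth making concrete. A dependency declared as `litellm>=1.80` (or with no version at all) lets automated pipelines pull whatever release is newest, including a tainted one, while an exact `==` pin does not. Here is a minimal sketch of an audit that flags unpinned entries in a `requirements.txt`; the regex assumes the simple `name==version` style and ignores extras and hash markers:

```python
# Sketch: flag requirements.txt entries that are not pinned to an exact
# version. Assumes the simple "name==version" pin style; extras, hashes,
# and environment markers are omitted for brevity.
import re

PINNED = re.compile(r"^[A-Za-z0-9._-]+==\S+")

def unpinned_lines(requirements_text: str) -> list[str]:
    """Return requirement lines that are not pinned to an exact version."""
    flagged = []
    for line in requirements_text.splitlines():
        stripped = line.split("#")[0].strip()  # drop inline comments
        if not stripped:
            continue  # skip blanks and comment-only lines
        if not PINNED.match(stripped):
            flagged.append(stripped)
    return flagged

sample = """\
litellm>=1.80      # unpinned: would have auto-pulled 1.82.7
requests==2.31.0   # pinned: unaffected by new releases
openai
"""
print(unpinned_lines(sample))  # -> ['litellm>=1.80', 'openai']
```

This is the kind of check a developer (or a vendor you are evaluating) can run in seconds; tools like `pip-compile` automate the pinning itself.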
Why this matters for small businesses
You might not have heard of LiteLLM. You probably have not installed it yourself. But here is the thing: your AI tools might depend on it.
LiteLLM is infrastructure software. It sits between an application and the AI models it uses. Developers use it because it provides a single interface to dozens of AI providers. If the chatbot on your website, the AI scheduling tool your team uses, or the content generator in your workflow was built by a developer who used LiteLLM, your business could be indirectly affected.
The broader supply chain problem
This is not an isolated incident. Supply chain attacks have been rising steadily across the software industry. The pattern is consistent: attackers do not target the final product. They target a dependency buried three or four layers deep — something the end user never sees and cannot audit.
For small businesses, this creates a genuine blind spot. You evaluate the tool you buy. You check the vendor’s reputation, read reviews, maybe even test a trial version. But you have no visibility into the hundreds of open-source libraries that tool depends on. A single compromised dependency can expose your credentials, customer data, and business accounts.
If you want a practical framework for vetting AI tools before you commit, our guide on how to evaluate AI tools covers the key questions to ask — including questions about a vendor’s security practices.
Our take
What we think
This attack reinforces something we have been saying: the security of your AI tools is only as strong as the weakest link in their supply chain. And for most small businesses, that supply chain is completely opaque.
The bottom line: You do not need to understand supply chain security at a technical level. But you need to ask your AI vendors whether they do.
The good news is that the attack was caught quickly. The bad news is that the three-hour window was enough to compromise any system that auto-updated during that period. And the attacker — TeamPCP — has been linked to previous supply chain campaigns, meaning this is a persistent threat, not a one-off.
What is missing from the conversation
- Most coverage focuses on developers, not the businesses that use the tools developers build. If a SaaS product you rely on was affected, the developer’s problem becomes your problem.
- “It was only three hours” downplays the risk. Automated CI/CD pipelines can pull and install a compromised package within minutes of its release. Three hours is more than enough.
Questions that remain
- How many private (non-public) projects pulled the compromised versions?
- Which commercial AI tools depend on LiteLLM, and have they confirmed they were unaffected?
- What additional security measures will PyPI implement to prevent similar attacks?
What you should do
Immediate actions
- Ask your AI tool vendors — If you use any AI-powered SaaS product, send a quick email: “Are you affected by the LiteLLM supply chain attack disclosed on March 24?” A responsible vendor will have already checked and can give you a clear answer.
- Check your own systems — If your team runs any Python-based AI tools internally (even scripts), check whether LiteLLM is in your dependency tree. Run `pip list | grep litellm` on any relevant machine. If you find version 1.82.7 or 1.82.8, rotate all credentials on that system immediately.
- Rotate exposed credentials — If there is any chance a system pulled the compromised package, change your API keys, cloud credentials, and SSH keys now. Do not wait for confirmation of theft.
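For a more direct check than eyeballing `pip list` output, a short script can query the installed version and flag the known-bad releases. This is a sketch using the standard library's `importlib.metadata`; the version strings come from the advisory above:

```python
# Sketch: report whether litellm is installed locally and whether the
# installed version is one of the compromised releases (1.82.7 / 1.82.8,
# per the disclosure). Run on each machine that uses Python AI tooling.
from importlib.metadata import version, PackageNotFoundError

COMPROMISED = {"1.82.7", "1.82.8"}

def litellm_status() -> str:
    """Return a human-readable status for the local litellm install."""
    try:
        installed = version("litellm")
    except PackageNotFoundError:
        return "litellm is not installed on this machine."
    if installed in COMPROMISED:
        return f"WARNING: compromised litellm {installed} found - rotate credentials now."
    return f"litellm {installed} installed - not one of the known-bad versions."

print(litellm_status())
```

Note that a clean result here only covers this machine and this Python environment; virtual environments, containers, and CI runners each need their own check.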
Watch for
- LiteLLM’s clean release — The package is currently quarantined on PyPI. Wait for an official announcement before updating.
- Vendor disclosures — Watch for statements from AI tool providers about whether their products were affected.
- Follow-on attacks — Stolen credentials are often used days or weeks after initial theft. Monitor your accounts for unusual activity.
Resources
- LiteLLM official security update
- Wiz technical analysis of the attack
- 81% of small businesses were breached last year — broader context on the cybersecurity landscape
Staying ahead of AI security threats
Supply chain attacks are one of the hardest threats to defend against because they exploit trust — trust in legitimate tools, maintained by legitimate developers, distributed through legitimate channels. The LiteLLM incident is a reminder that adopting AI tools means inheriting their entire dependency tree, including its vulnerabilities.
The practical response is not to avoid AI tools. It is to choose vendors who take supply chain security seriously, to keep credentials rotatable and segmented, and to ask the uncomfortable questions before an incident forces you to.
If you are evaluating AI tools for your business and want to understand the security implications, get in touch — we help small businesses adopt AI without taking on unnecessary risk.