Blog · March 25, 2026

LiteLLM Supply Chain Attack: What Happened and How to Protect Your Systems

On March 24, 2026, the TeamPCP hacking group compromised LiteLLM, one of the most popular Python packages for unified LLM API access. The attack affected versions 1.82.7 and 1.82.8, and the group claims to have stolen data from hundreds of thousands of devices.

If you're running LiteLLM in production, you need to check your systems immediately. Here's what happened, what was stolen, and how to protect yourself.

What is LiteLLM?

LiteLLM is an open-source Python library that provides a unified API to call multiple LLM providers — OpenAI, Anthropic, Google, Azure, and others — through a single interface. It's massively popular: over 3.4 million downloads per day and 95 million in the past month.

That popularity made it a prime target.

How the LiteLLM PyPI package was compromised

According to research by Endor Labs, the attackers injected malicious code into LiteLLM's proxy_server.py file. The payload was base64-encoded to evade simple keyword and signature scans.

When users imported the compromised package, the malicious code executed automatically. Version 1.82.8 was even more aggressive — it installed a .pth file that runs every time Python starts, even if LiteLLM isn't explicitly used.
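To illustrate the mechanism (with a harmless stand-in, not the actual payload): Python executes module-level code the moment a package is imported, and base64 encoding hides that code from naive plain-text scans. A minimal sketch:

```python
import base64

# Harmless stand-in for an injected payload. Real malware hides its
# source as base64 so text scans for keywords or URLs find nothing.
encoded = base64.b64encode(b"result = 2 + 2")

# Module-level code like this runs as a side effect of "import" --
# no function in the package ever needs to be called.
namespace = {}
exec(base64.b64decode(encoded), namespace)
print(namespace["result"])  # → 4
```

The .pth trick goes one step further: Python's site module executes lines beginning with "import " in any .pth file it finds in site-packages at interpreter startup, so the payload no longer even needs the package to be imported.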

The attack deployed TeamPCP's "Cloud Stealer" malware, which harvests:

  • SSH keys and configuration files
  • Cloud credentials for AWS, GCP, and Azure
  • Kubernetes service account tokens and cluster secrets
  • Environment files (.env variants)
  • Database credentials
  • TLS private keys and CI/CD secrets
  • Cryptocurrency wallet data

The stolen data was bundled into an encrypted archive and exfiltrated to attacker-controlled infrastructure. The malware also installed a persistent systemd backdoor that periodically downloads additional payloads.

How many systems were affected?

TeamPCP claims approximately 500,000 data exfiltrations, though many may be duplicates. VX-Underground reports a similar number of infected devices.

Given LiteLLM's 3.4 million daily downloads, even a brief window of exposure could compromise enormous numbers of systems. The attack is particularly dangerous because LiteLLM is often used in backend services that handle API keys for multiple LLM providers.

Who is TeamPCP?

TeamPCP is the same group behind the recent Aqua Security Trivy breach, which cascaded into compromises of Docker images, Checkmarx KICS, and now LiteLLM. They've also been deploying wipers targeting Kubernetes clusters configured for Iranian infrastructure.

This group is sophisticated and persistent. They're specifically targeting developer tooling and security infrastructure.

How to check if you're affected by the LiteLLM breach

If you use LiteLLM, do this now:

1. Check your installed version. Run pip show litellm and verify you're not on 1.82.7 or 1.82.8. The last known-good release is 1.82.6.
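If you'd rather check programmatically (useful across a fleet), the standard library can do it; the package name and bad versions below come from this advisory, and the rest is a sketch you can adapt:

```python
from importlib import metadata

# Releases identified as compromised in this incident.
COMPROMISED_VERSIONS = {"1.82.7", "1.82.8"}

def is_compromised(version: str) -> bool:
    return version in COMPROMISED_VERSIONS

def check_installed(package: str = "litellm") -> str:
    """Report whether the installed release of a package is known-bad."""
    try:
        installed = metadata.version(package)
    except metadata.PackageNotFoundError:
        return f"{package} is not installed"
    if is_compromised(installed):
        return f"WARNING: {package} {installed} is a compromised release"
    return f"{package} {installed} is not a known-bad release"

print(check_installed())
```

Run it inside each virtual environment you care about — importlib.metadata only sees the environment it executes in.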

2. Rotate everything. If you were compromised, assume all credentials on affected systems are exposed: API keys, cloud credentials, SSH keys, database passwords, Kubernetes tokens. Rotate them immediately.

3. Hunt for persistence. Check for:

  • ~/.config/sysmon/sysmon.py and related systemd services
  • Suspicious files at /tmp/pglog and /tmp/.pg_state
  • Unauthorized pods in the kube-system namespace
  • Any .pth files in your Python site-packages you didn't install
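A starting point for that hunt, sketched with the standard library. The paths are the IoCs listed above; note that the .pth heuristic will also flag legitimate files (setuptools and editable installs use import lines too), so review matches by hand rather than deleting blindly:

```python
import site
from pathlib import Path

# Filesystem indicators of compromise reported for this incident.
IOC_PATHS = [
    Path.home() / ".config" / "sysmon" / "sysmon.py",
    Path("/tmp/pglog"),
    Path("/tmp/.pg_state"),
]

def find_iocs(site_dirs=None):
    """Return IoC paths that exist, plus .pth files containing
    executable import lines (run at every interpreter startup)."""
    hits = [p for p in IOC_PATHS if p.exists()]
    dirs = site_dirs if site_dirs is not None else site.getsitepackages()
    for d in dirs:
        for pth in Path(d).glob("*.pth"):
            lines = pth.read_text(errors="ignore").splitlines()
            if any(line.startswith("import ") for line in lines):
                hits.append(pth)
    return hits

for hit in find_iocs():
    print(f"review: {hit}")
```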

4. Monitor outbound traffic. The malware phones home to models.litellm[.]cloud and checkmarx[.]zone. Block these domains and check your logs for any prior connections.
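If you keep DNS, proxy, or firewall logs as text, a simple grep for those domains (de-fanged above) is a reasonable first pass. This sketch just scans a string — swap in whatever log source your environment actually has:

```python
import re

# Exfiltration domains reported for this incident.
IOC_DOMAINS = ["models.litellm.cloud", "checkmarx.zone"]

_PATTERN = re.compile("|".join(re.escape(d) for d in IOC_DOMAINS))

def scan_log(log_text: str):
    """Return log lines that mention a known exfiltration domain."""
    return [line for line in log_text.splitlines() if _PATTERN.search(line)]

sample = "10:01 connect api.openai.com\n10:02 connect models.litellm.cloud"
print(scan_log(sample))  # flags only the second line
```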

Why PyPI supply chain attacks are so dangerous

This attack highlights a fundamental risk in modern development: any package you install has access to your entire runtime environment. LiteLLM wasn't compromised because of how it works — it was compromised because attackers got malicious code into a package that millions of developers install and trust.

Once that code runs in your environment, it can read SSH keys, cloud credentials, environment variables, Kubernetes secrets — anything your process can access. The attack surface isn't the package's functionality; it's the package's presence.

LiteLLM happens to be a high-value target because it's ubiquitous in AI infrastructure. But the same attack vector applies to any popular package with deep transitive dependencies.

How to reduce your Python dependency attack surface

The LiteLLM compromise is a reminder to audit your dependency tree, especially for packages that run in sensitive environments.

At MarginDash, we built our SDKs with zero required dependencies. The core npm and PyPI packages have no transitive dependency tree — there's no hidden package that could be compromised without us knowing. (Text estimation is available as an optional add-on for teams that need it.)

For organizations with strict supply chain policies, we also offer a REST API that requires no package installation at all. You can track costs and enforce budgets with plain HTTP calls. Your app keeps calling providers directly — we only see usage metadata (model names, token counts).

The real defense is fewer dependencies, pinned versions, and treating every package as a potential attack vector.

How to prevent supply chain attacks on AI infrastructure

1. Pin your dependencies. If you'd pinned litellm==1.82.6 instead of accepting any 1.82.x release, this attack wouldn't have affected you. Use lockfiles and version pinning for anything touching credentials.
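In practice that means exact pins, not version ranges. A minimal example (the hash-checking step uses pip's standard --require-hashes mode; pip-tools' pip-compile --generate-hashes can produce the hash entries for you):

```
# requirements.txt -- exact pin, no floating 1.82.x range
litellm==1.82.6

# Stronger: add --hash entries for each package, then install with
#   pip install --require-hashes -r requirements.txt
# so a re-published or tampered artifact fails to install at all.
```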

2. Audit what touches your secrets. Map out every package that has access to API keys, cloud credentials, or environment variables. That's your blast radius for a supply chain attack.
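Enumerating that blast radius can start with the standard library: every distribution installed into the environment that holds your secrets is import-time code with the same access as your application. A sketch:

```python
from importlib import metadata

def installed_distributions():
    """List every distribution in the current environment -- each one
    can run code at import time with your process's privileges."""
    names = {d.metadata["Name"] for d in metadata.distributions()}
    return sorted(n for n in names if n)  # drop broken metadata entries

print(len(installed_distributions()), "packages in the blast radius")
```

From there, a tool like pipdeptree can show which of those arrived transitively — dependencies you never chose but trust anyway.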

3. Prefer architecture over trust. Don't just trust that popular packages won't be compromised. Design your systems so that a compromise has limited impact. Least privilege applies to dependencies too.

4. Have a rotation plan. When (not if) you need to rotate all credentials, how long will it take? If you don't have an answer, figure it out before the next breach.

Summary: LiteLLM breach response checklist

The LiteLLM compromise is one of the largest supply chain attacks targeting AI infrastructure to date. If you're affected, prioritize credential rotation over everything else. Assume the attackers have everything your Python process could access.

  • Check: Run pip show litellm — if version is 1.82.7 or 1.82.8, you're affected
  • Downgrade: pip install litellm==1.82.6
  • Rotate: All AWS, GCP, Azure credentials, SSH keys, API keys, database passwords
  • Hunt: Check for ~/.config/sysmon/, /tmp/pglog, unauthorized Kubernetes pods
  • Block: Outbound traffic to models.litellm[.]cloud and checkmarx[.]zone

For those building new AI systems: consider the security implications of your architecture choices. Convenience features that require deep runtime access are also deep runtime risks. Sometimes the boring, decoupled approach is the right one.

Monitor your AI costs without third-party SDKs in your runtime

MarginDash tracks AI spend with lightweight instrumentation that never touches your prompts, completions, or credentials. An SDK-optional architecture means a smaller attack surface.

See My Margin Data

No credit card required