Supply Chain Attack · CI/CD Security · PyPI · Vibe Coding · Credential Theft · Post-Mortem

Your Security Scanner Just Became the Backdoor: The LiteLLM Attack That Should Terrify Every Developer

On March 24, 2026, every developer who ran pip install litellm handed their SSH keys, AWS credentials, Kubernetes tokens, and cryptocurrency wallets to an attacker — and had no idea it was happening. The library is installed 95 million times per month. It sits inside 36% of cloud environments. And the attack vector was not LiteLLM's code. It was the vulnerability scanner that was supposed to protect it. If you use Python in production, if you have a CI/CD pipeline, if your coding agent has ever run pip install on your behalf — this is the attack that should keep you up tonight.

April 3, 2026 · 22 min read · PhantomCorgi Security Research
TL;DR
  • TeamPCP exploited a GitHub Actions workflow vulnerability in Aqua Security's Trivy to steal LiteLLM's PyPI publishing credentials
  • Two malicious versions (1.82.7 and 1.82.8) were published to PyPI — neither existed in LiteLLM's GitHub repo
  • Version 1.82.8 used Python's .pth file mechanism to execute on every Python interpreter startup — no import required
  • The payload harvested SSH keys, cloud credentials, crypto wallets, and deployed privileged Kubernetes pods across every node
  • The attack was discovered not by any security tool, but by RAM exhaustion from an accidental fork bomb
  • This was Phase 09 of an ongoing campaign that later spread to npm's axios package
A note on sources

This analysis draws from security vendor reports (Wiz, Snyk, Sonatype, Trend Micro, ReversingLabs, Kaspersky, OX Security, Cycode), the independent discovery by FutureSearch, LiteLLM's official security disclosure, and Japanese-language analysis from Qiita and Zenn. Each vendor has a commercial interest in amplifying supply chain risk — it is what they sell defenses for. The technical facts are consistent across all sources and independently reproducible from the malicious package artifacts. We distinguish between verified technical details and vendor framing throughout. This analysis cross-references fourteen sources across English and Japanese.

What Is LiteLLM, and Why Should You Be Worried?

LiteLLM is a Python library that provides a unified API gateway for over 100 LLM providers — OpenAI, Anthropic, Google, AWS Bedrock, Azure, and dozens more. Write one function call, swap providers by changing a string. It is the kind of library that becomes load-bearing infrastructure precisely because it is so convenient. And that convenience just became the single largest attack surface in the AI ecosystem.

The numbers are staggering: 95 million monthly downloads. Present in 36% of cloud environments according to Wiz. And here is the part that should make your stomach drop: it is a transitive dependency of mainstream frameworks — CrewAI, DSPy, MLflow. Many developers had LiteLLM running on their machines without knowing it. They had never typed pip install litellm. It came in through something else. They were compromised by a package they did not know they had.

When a library is this widely installed and this deeply embedded, compromising it does not just affect its direct users. It detonates across entire dependency graphs. The blast radius is not measured in users of one tool. It is measured in every Python environment that transitively trusts PyPI. That is most of them. That is probably yours.

The Attack Chain: Your Security Tools Are the Weapon Now

This is what makes the LiteLLM compromise fundamentally different from anything we have seen before — and far more dangerous than a typical package hijacking. The attackers did not phish a maintainer. They did not typosquat a package name. They turned the security scanner that was supposed to protect LiteLLM into the weapon that destroyed it. The tool you trust to find vulnerabilities became the vulnerability. Let that sink in.

Phase 1 — Poisoning the Scanner (Late February – March 19)

The attack began with Aqua Security's Trivy — one of the most widely used open-source vulnerability scanners in the industry. TeamPCP identified a pull_request_target workflow vulnerability in Trivy's GitHub Actions configuration. This is a known dangerous pattern: pull_request_target runs workflows in the context of the base repository with access to its secrets, while processing code from the forked repository.
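
The dangerous shape, schematically (this is an illustrative workflow, not Trivy's actual configuration):

```yaml
# ILLUSTRATIVE ONLY. pull_request_target runs in the base repo's context,
# with its secrets, while the checkout below pulls the fork's code.
on: pull_request_target

jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          # Attacker-controlled ref: the fork's HEAD commit.
          ref: ${{ github.event.pull_request.head.sha }}
      # Anything executed from that checkout (build scripts, setup.py,
      # a Makefile) now runs with access to the secrets below.
      - run: make lint
        env:
          REGISTRY_TOKEN: ${{ secrets.REGISTRY_TOKEN }}
```

The safe default is the plain pull_request trigger, which runs against the fork without secrets. When base-context runs are unavoidable, never check out and execute the fork's code in a job that can see secrets, and pin every action to a commit SHA.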

Using this access, TeamPCP exfiltrated credentials for the aqua-bot service account and rewrote Git tags to point to a malicious release. Anyone pulling Trivy by tag (rather than by commit SHA) would get the compromised version — v0.69.4, but not the v0.69.4 the Trivy team had published.

The cascading trust chain
1. Trivy GitHub Actions → pull_request_target vuln
2. Exfiltrate aqua-bot credentials → rewrite git tags
3. LiteLLM CI/CD pulls Trivy from apt (unpinned)
4. Compromised Trivy exfiltrates PYPI_PUBLISH token
5. TeamPCP publishes backdoored litellm to PyPI
6. 95M monthly downloads → credential harvest at scale
Total trust chain depth: 3 hops. Total time to mass compromise: hours. Total security tools that caught it: zero.

Phase 2 — Infrastructure Setup (March 23)

The day before the attack, TeamPCP registered two domains that would serve as command-and-control infrastructure: checkmarx[.]zone — impersonating the legitimate security vendor Checkmarx — and models[.]litellm[.]cloud, mimicking LiteLLM's own infrastructure. Both were registered through Spaceship, Inc. and hosted on DEMENIN B.V. — the same registrar and hosting provider used across all phases of the TeamPCP campaign. This consistency is what enabled attribution across incidents.

Phase 3 — The Publish (March 24, 10:39 UTC)

LiteLLM's CI/CD pipeline ran Trivy as a standard part of its build process. It pulled Trivy from apt without a pinned version. The compromised Trivy action exfiltrated the PYPI_PUBLISH token from the GitHub Actions runner environment. Using LiteLLM's own legitimate PyPI credentials, TeamPCP published two backdoored versions within thirteen minutes:

v1.82.7
Published 10:39 UTC
Base64-encoded payload embedded in litellm/proxy/proxy_server.py. Executes when litellm --proxy runs or the proxy module is imported. Drops payload to disk as p.py.
v1.82.8
Published 10:52 UTC
Added a .pth file (litellm_init.pth) that fires on every Python interpreter startup. No import needed. Runs during pip, python -c, even IDE language servers. The nuclear option.

Neither version existed in LiteLLM's GitHub repository. No pull request. No code review. No CI check. The malware was published directly to PyPI using LiteLLM's own stolen credentials, and PyPI distributed it without question. Anyone who ran pip install litellm without a version pin during those hours — and that includes every AI coding agent that casually installs packages on your behalf — received the backdoored package. Your machine was owned before you finished your morning coffee.

The Payload: Everything You Own, Gone in Seconds

What happened next on every infected machine is the stuff of nightmares. This was not a quick-and-dirty credential scraper. It was a military-grade, three-stage operation designed for total credential exfiltration, encrypted transport that evades network monitoring, and persistent backdoor access that survives reboots. If it ran on your system, assume everything is compromised.

Stage 1 — Credential Harvesting

The payload systematically swept the infected system for every credential it could find:

What the payload collected
  • SSH keys: RSA, Ed25519, ECDSA, and DSA private keys
  • Cloud credentials: AWS (including the EC2 metadata service, Secrets Manager, and SSM Parameter Store via self-implemented SigV4 signing), GCP service accounts, Azure tokens
  • Container & orchestration: Kubernetes configs, service account tokens, Docker registry tokens
  • Developer tools: GitHub CLI tokens, npm tokens, Terraform/Helm artifacts, CI/CD configurations
  • Cryptocurrency: Bitcoin, Ethereum, Monero, and Solana wallets
  • System: environment variables, shell history, database passwords, /etc/shadow password hashes
  • API keys: every environment variable matching common key patterns (STRIPE, OPENAI, ANTHROPIC, etc.)

The SigV4 implementation is where this gets truly terrifying. The payload did not just read AWS credential files from disk. It implemented the AWS Signature Version 4 signing protocol from scratch to directly call the EC2 metadata service, Secrets Manager, and SSM Parameter Store. It reached into your cloud account and pulled secrets that were never stored on the local filesystem. This is not script-kiddie tooling. This is a nation-state-grade adversary who built their own AWS SDK equivalent inside a 34KB payload. Your secrets vault is not safe just because the keys are not on disk.
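
To make the "own SDK" point concrete: the heart of SigV4 is a short chain of HMAC-SHA256 calls over the standard library, which is why a 34KB payload can carry it. A minimal sketch of the key-derivation step (inputs and names are illustrative, not the payload's code):

```python
import hashlib
import hmac

def _hmac(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode(), hashlib.sha256).digest()

def sigv4_signing_key(secret_key: str, date: str, region: str, service: str) -> bytes:
    # AWS Signature Version 4 key derivation, as documented by AWS:
    # four chained HMACs, no SDK required.
    k_date = _hmac(("AWS4" + secret_key).encode(), date)
    k_region = _hmac(k_date, region)
    k_service = _hmac(k_region, service)
    return _hmac(k_service, "aws4_request")

# Signing the (already canonicalized) string-to-sign yields the request signature.
key = sigv4_signing_key("EXAMPLE_SECRET_KEY", "20260324", "us-east-1", "secretsmanager")
signature = hmac.new(key, b"example-string-to-sign", hashlib.sha256).hexdigest()
print(len(signature))  # a 64-character hex signature
```

Add the canonical-request hashing on top and you can call any SigV4-authenticated AWS endpoint with nothing but stdlib, which is exactly what the payload did.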

Stage 2 — Encrypted Exfiltration

Harvested credentials were not sent in plaintext. The payload generated AES-256-CBC session keys via openssl rand, encrypted all collected data with PBKDF2 key derivation, wrapped the session key in 4096-bit RSA-OAEP encryption, bundled everything as tpcp.tar.gz, and POSTed it to models[.]litellm[.]cloud.
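
The PBKDF2 step of that pipeline needs nothing beyond the standard library either; a sketch with illustrative parameters (the payload's actual iteration count is not given in the public reports):

```python
import hashlib
import os

# Derive a 32-byte key (AES-256-sized) from a random session passphrase,
# mirroring the reported openssl-rand + PBKDF2 combination.
passphrase = os.urandom(32)   # stand-in for the openssl rand output
salt = os.urandom(16)
session_key = hashlib.pbkdf2_hmac("sha256", passphrase, salt, 600_000, dklen=32)
print(len(session_key))  # 32 bytes of AES-256 key material
```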

The domain name was a deliberate act of camouflage. A security analyst glancing at network logs would see traffic to what looks like LiteLLM's legitimate model API endpoint. Would you catch the difference between .cloud and .ai in a wall of network logs at 3 AM? Neither would your SOC team. That is the point.

Stage 3 — It Never Leaves

After stealing everything, the payload did not exit. It dug in. It established persistence that survives reboots, process kills, and even package uninstallation:

  • Wrote backdoor scripts to ~/.config/sysmon/sysmon.py
  • Registered systemd services for automatic restart
  • Polled checkmarx[.]zone/raw every 5–50 minutes for new command payloads

In Kubernetes environments, the escalation was catastrophic. The payload deployed privileged pods named node-setup-{node_name} across every single node in the cluster with hostPID: true and hostNetwork: true, mounting host filesystems to install systemd backdoors directly on the underlying nodes. Read that again: one developer running pip install on their laptop could hand an attacker persistent root access to every node in your production Kubernetes cluster. And uninstalling the package does not remove the backdoor. It is already on the host OS.
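
The reported pod shape is worth knowing so your cluster policies can refuse it; a schematic manifest (the name pattern comes from the reports, everything else is illustrative):

```yaml
# ILLUSTRATIVE sketch of the reported node-setup-{node_name} pods.
# Each field below should trip an admission policy on its own.
apiVersion: v1
kind: Pod
metadata:
  name: node-setup-worker-1        # one pod per cluster node
spec:
  hostPID: true                    # can see and signal host processes
  hostNetwork: true                # shares the node's network namespace
  containers:
    - name: setup
      image: busybox               # any image with a shell will do
      securityContext:
        privileged: true           # full capability and device access
      volumeMounts:
        - name: host-root
          mountPath: /host         # host filesystem, mounted read-write
  volumes:
    - name: host-root
      hostPath:
        path: /                    # write systemd units directly on the node
```

Pod Security admission in restricted mode, or an equivalent policy engine, rejects all four of these settings outright.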

MITRE ATT&CK mapping
T1546.018 — Python Startup Hooks (.pth file execution on interpreter launch)
T1003 — Credential Dumping (systematic sweep of SSH keys, cloud creds, env vars)
T1610 — Deploy Container (privileged Kubernetes pods for lateral movement)

The .pth File: Python Has a Backdoor and Nobody Told You

The most dangerous aspect of v1.82.8 is not the payload itself. It is the delivery mechanism — and it exposes a fundamental, unfixed weakness in Python's package ecosystem that most developers have no idea exists. If you write Python, the next paragraph should frighten you.

Python's .pth files were designed for path configuration — telling the interpreter where to find additional packages. But they have a lesser-known feature: any line starting with import is executed as Python code on every interpreter startup. Not when the package is imported. On every single Python invocation.

This means:

  • Running pip install some-other-package triggers the payload
  • Running python -c "print('hello')" triggers the payload
  • Your IDE's language server starting up triggers the payload
  • Any Python-based tool in your PATH triggers the payload
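
You can see the mechanism safely with site.addsitedir, which applies the same .pth processing that site-packages gets at startup (the file and environment flag below are harmless stand-ins):

```python
import os
import site
import tempfile

# Write a .pth file whose single line starts with "import": per Python's
# site module, that line is exec'd as code, not treated as a path entry.
d = tempfile.mkdtemp()
with open(os.path.join(d, "demo.pth"), "w") as f:
    f.write("import os; os.environ['PTH_FIRED'] = 'yes'\n")

site.addsitedir(d)  # the same processing every interpreter startup applies
print(os.environ.get("PTH_FIRED"))  # the code ran without anything importing "demo"
```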

Critically, the .pth file was correctly declared in the wheel's RECORD file, so it passes pip's hash verification. The package installs cleanly. No warnings. No errors. No indication whatsoever that your machine is now under enemy control. The malicious code is 34,628 bytes of double base64-encoded payload that fires via subprocess, making static analysis significantly harder. Your linter will not catch it. Your SAST tool will not catch it. Your code review will not catch it.

As the Qiita analysis by AI-SKILL-LAB noted: the shock is not that a package with 95 million monthly downloads was compromised — it is that twelve lines of code in a .pth file were sufficient to turn every Python interpreter on the machine into a credential-harvesting agent. Twelve lines. And this attack vector still works today. Python has not patched it. pip has not patched it. The .pth mechanism is still there, still executing arbitrary code on every interpreter startup, waiting for the next attacker to use it.

How It Was Discovered: Dumb Luck. That Is It.

No security tool caught this. No vulnerability scanner flagged it. No code review process blocked it. No anomaly detection system raised an alert. The entire global security industry — every scanner, every monitor, every SOC — missed it completely. The attack was discovered because of a bug in the attack.

The .pth file spawned a subprocess to execute its payload. That subprocess invoked Python. Which triggered the .pth file again. Which spawned another subprocess. The result was an unintended fork bomb that caused RAM exhaustion on affected systems.

Security researcher Callum McMahon at FutureSearch noticed the anomalous resource consumption and traced it to the litellm_init.pth file. Without this bug, the payload would have operated silently. The attackers' code was too aggressive for its own delivery mechanism.

Think about that. Really think about it. If TeamPCP had added a single if os.path.exists('/tmp/.lock'): sys.exit() — one line of code — this attack would still be running undetected right now. Silently harvesting credentials from thousands of machines. Silently deploying backdoors across Kubernetes clusters. Silently exfiltrating your AWS Secrets Manager to a server in the Netherlands. The only reason the industry found it is because the attackers made an amateur mistake. Next time, they will not.

The Vibe Coding Multiplier: Why This Is About to Get Much, Much Worse

Jim Fan, NVIDIA's Director of AI, posted the commentary that went viral on LinkedIn (2,137 likes, 143 comments): "LiteLLM has been compromised with a vibe-coded attack. This is pure nightmare fuel." He is right. And the nightmare is just getting started.

You do not know what is on your machine

In the vibe coding era, developers do not manage dependency trees. They do not even see them. They ask an AI agent to build something, the agent runs pip install, and whatever comes in, comes in. LiteLLM is a transitive dependency of CrewAI, DSPy, and MLflow. A developer who asked their coding agent to "build me a multi-agent system" likely had LiteLLM installed without ever seeing its name in a requirements file. They were compromised by a package they never asked for, installed by an agent they gave full disk access to.

Fan's framing: "Your entire filesystem is the new distributed codebase. Every file that could go into context would add to the attack vector. Every text can be a base64 virus."

Your coding agent is the perfect attack vector

Coding agents need broad filesystem and network access to be useful. But that same access is exactly what makes a compromised dependency catastrophic. As Fan put it: "There is very little middleground between 'pressing yes mindlessly for every edit' and '--dangerously-skip-permissions'."

A compromised package running inside a coding agent's environment does not just have access to the project directory. It has access to everything the agent has access to: ~/.ssh, ~/.aws, ~/.kube, shell history, environment variables, and the agent's own configuration and skill files. You gave your agent god-mode access to your machine so it could be productive. Now every package it installs has that same god-mode access. How many packages did it install last week? Do you even know?

The agent becomes the attacker

Fan predicted the most disturbing attack surface of all: the agent's context window itself. A compromised dependency could inject instructions into files that the agent later reads as context — poisoning ~/.claude, skill directories, or any document the agent processes. The agent would then "impersonate you to an uncanny degree on all your personal and work accounts, and replicate a distorted version of your digital soul faster than you can change password." Your trusted AI assistant becomes an insider threat that knows everything you know, has access to everything you have access to, and is actively working against you. And you would not notice because it still looks like the same helpful agent.

The de-vibing industry

Fan's most prescient observation: "There will be a full blooming industry for 'de-vibing': dampening the slop and putting guardrails/accountability around agentic frameworks. They are the boring old, audited Software 1.0 that watches over the rebellious adolescents of Software 3.0."

This is exactly the space that Code Corgi operates in — applying deterministic, auditable security analysis to the outputs of non-deterministic AI coding systems.

TeamPCP: This Is Not Over. It Is Accelerating.

If you are telling yourself this was a one-off incident, stop. The LiteLLM compromise was Phase 09 of an ongoing, multi-ecosystem supply chain campaign that has been active since December 2025 — and it is picking up speed.

TeamPCP campaign timeline
  • Dec 2025 onward · Various (Phases 01–08) · multi-ecosystem package poisoning
  • Late Feb–Mar 19 · Aqua Security Trivy, Checkmarx KICS · pull_request_target workflow exploit
  • Mar 24 · LiteLLM (PyPI) · stolen CI/CD credentials → direct PyPI publish
  • Mar 25 · Telnyx · same campaign pattern
  • Mar 31 · npm axios · cross-ecosystem spread to JavaScript

Attribution is strong. ReversingLabs and Wiz independently confirmed that the same RSA public key appears across the Trivy, KICS, and LiteLLM payloads — the strongest technical link. The consistent tpcp.tar.gz bundle filename, shared registrar, and hosting provider tie the infrastructure together. This is a single actor with a systematic playbook: compromise the security tooling of a target, steal their publishing credentials, distribute malware through their trusted distribution channel.

The Japanese coverage on Qiita and Zenn highlighted the cascade trajectory: Trivy (Mar 19) → LiteLLM (Mar 24) → npm axios (Mar 31). Each compromise enabled the next. The gap between attacks is shrinking. The ecosystem spread is widening. Python today. JavaScript tomorrow. Your language is next.

Who Was Safe? Almost Nobody.

One group was not affected: users of the official LiteLLM Proxy Docker image. The Docker image pins dependencies in requirements.txt and does not rely on PyPI resolution at build time. This is precisely the kind of hardened supply chain practice that would have prevented exposure for everyone — if everyone did it. Almost nobody does. Do you? Are you sure? When was the last time you checked whether your CI pipeline pins dependencies by SHA hash rather than version string?

The Questions That Should End Careers

1. Why was the security scanner unpinned?

LiteLLM's CI/CD pipeline pulled Trivy from apt without pinning to a specific version or verifying a checksum. This is a common pattern — and it is the root cause of the entire chain. If Trivy had been pinned to a commit SHA rather than a tag, the compromised version would never have entered the pipeline. The irony is striking: the tool designed to find vulnerabilities became the vulnerability.

2. Why did PyPI allow publishing versions not in the repo?

The two malicious versions — 1.82.7 and 1.82.8 — never existed in LiteLLM's GitHub repository. There was no pull request, no code review, no CI run against the actual published code. PyPI accepted the upload because the credentials were valid. There is no mechanism in PyPI to enforce that a published version corresponds to a tagged commit in the source repository. The trust model is entirely credential-based. The world's largest Python package registry will distribute literally anything to millions of machines as long as you have the right API token. That is not a trust model. That is a liability.

3. Why was this discoverable only by accident?

The attack was caught because of a fork bomb — a bug in the malware. Not by any security scanner. Not by dependency monitoring. Not by anomaly detection in the package registry. The .pth file passed pip's hash verification. The package metadata was consistent. The version numbers were plausible. Every automated check said "this is fine."

If PyPI had a quarantine period for new versions — even 72 hours — the community would have had time to notice the discrepancy between the Git repo and the published package. As of today, such quarantine is only available through third-party registry proxies.

4. Where is the accountability for GitHub Actions defaults?

The pull_request_target trigger is documented as dangerous by GitHub itself. Yet it remains the default suggestion in many CI workflow templates. Trivy is not the first project to be compromised through this pattern, and it will not be the last. The question is whether this should be a footgun that projects can accidentally deploy, or whether the default should be safe with an explicit opt-in to the dangerous variant.

A Pattern That Should Terrify Every CISO

Three weeks ago, we published our analysis of the McKinsey Lilli breach. On the surface, the two incidents look very different — one is an API vulnerability in an AI platform, the other is a supply chain compromise of a developer tool. But the underlying pattern is identical, and it is the pattern that should terrify you: AI infrastructure is being built faster than the security practices needed to protect it, and the gap is widening every month.

Pattern comparison
  • Convenience over security. Lilli: 22 unauthenticated endpoints left for dev convenience. LiteLLM: unpinned Trivy in CI for ease of updates.
  • Trust model failure. Lilli: public API docs treated as safe. LiteLLM: Git tags treated as immutable (they are not).
  • Blast radius amplification. Lilli: SQLi → IDOR → prompt compromise. LiteLLM: scanner → CI/CD creds → PyPI → 95M downloads/month.
  • Scanner blind spot. Lilli: no tool tested JSON key injection. LiteLLM: no tool checked .pth file content.

What We Got Wrong (Red-Teaming Our Own Narrative)

Contradiction resolution
  • "95 million monthly downloads affected": 95M is the total install rate. The malicious versions were live for 2–3 hours, so actual exposure is a small fraction of the monthly total. The 95M figure describes the target's reach, not confirmed compromise.
  • "36% of cloud environments affected": 36% have LiteLLM present; that does not mean 36% installed the compromised version. Wiz's framing describes the potential blast radius, not actual impact.
  • "Vibe coding caused this": vibe coding amplified the exposure by making transitive dependencies invisible, but the root cause was a CI/CD misconfiguration in Trivy and unpinned dependencies in LiteLLM. These are DevOps hygiene problems, not vibe coding problems.
  • Duration of exposure: LiteLLM says ~5 hours (10:39–16:00 UTC), Wiz's timeline suggests PyPI quarantine at ~11:25 UTC (under an hour), and FutureSearch says ~2 hours. The exact window is disputed.

The Japanese Perspective: What English Coverage Missed

The Japanese developer community on Qiita and Zenn produced some of the most actionable analysis of this incident — and moved faster to practical solutions.

A Zenn article by fugithora812 demonstrated how to use Claude Code Hooks as a supply chain defense — configuring pre-install hooks that verify package checksums and compare against known-good versions before any dependency is installed. Another by primenumber detailed an internal rollout of SHA pinning, minimum-release-age policies, and Takumi Guard — a registry proxy that enforces a 72-hour quarantine on new package versions before they become available to CI/CD pipelines.

The Qiita article by SoySoySoyB framed the broader lesson: in the AI coding era, supply chain attacks do not target your code. They target your tools' dependencies' build systems' security scanners. The attack surface is the entire transitive trust graph, and vibe coding makes that graph invisible to the developer.

What LiteLLM Did Right

LiteLLM's incident response deserves acknowledgment:

  • Removed compromised packages from PyPI within hours
  • Rotated all credentials immediately
  • Engaged Google's Mandiant for forensic analysis
  • Published a transparent security disclosure with full technical details
  • Released a clean version (v1.83.0) on March 30 via a new CI/CD v2 pipeline with isolated environments

The new CI/CD pipeline addresses the root cause: isolated build environments and stronger security gates that prevent credential exfiltration even if a build tool is compromised. This is the right structural fix.

What To Do Right Now. Not Monday. Now.

01
Check if you are already compromised
Search for litellm_init.pth in your Python site-packages. Check for IoC files: tpcp.tar.gz, /tmp/pglog, /tmp/.pg_state, ~/.config/sysmon/sysmon.py. If you find ANY of these, stop reading this article and start rotating every credential you have. Every SSH key. Every API key. Every cloud credential. Every database password. Do it now.
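
A quick stdlib sweep for the published on-disk IoCs, assuming the indicator list above is complete for your environment (it may not be; treat a clean result as necessary, not sufficient):

```python
import pathlib
import site
import sysconfig

# Where pip installs packages (and .pth files) in this environment.
dirs = {sysconfig.get_path("purelib")}
if hasattr(site, "getsitepackages"):   # absent in some older virtualenvs
    dirs.update(site.getsitepackages())

hits = []
for d in dirs:
    hits.extend(pathlib.Path(d).glob("litellm_init.pth"))

# Fixed-path IoCs named in the public reports. tpcp.tar.gz can land
# anywhere writable, so search for it separately across your filesystem.
for ioc in ("/tmp/pglog", "/tmp/.pg_state",
            pathlib.Path.home() / ".config" / "sysmon" / "sysmon.py"):
    if pathlib.Path(ioc).exists():
        hits.append(pathlib.Path(ioc))

print("COMPROMISED: rotate everything" if hits else "no known IoCs found")
```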
02
Pin ALL dependencies to exact versions and SHA hashes
pip install litellm==1.82.6 is not enough. Use pip install litellm==1.82.6 --hash=sha256:... to verify the package artifact. Pin GitHub Actions to commit SHAs, not tags. Tags can be rewritten. SHAs cannot.
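
The check that --require-hashes performs can be reproduced in a few lines, which is also handy for verifying artifacts outside pip (the file and pinned value below are stand-ins):

```python
import hashlib
import pathlib
import tempfile

def sha256_of(path: pathlib.Path) -> str:
    # Stream the file so large wheels do not need to fit in memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

# Simulate a downloaded wheel and a hash pinned earlier in a lockfile.
wheel = pathlib.Path(tempfile.mkdtemp()) / "example-1.0-py3-none-any.whl"
wheel.write_bytes(b"wheel contents stand-in")
pinned = sha256_of(wheel)  # what you would have recorded at pin time

ok = sha256_of(wheel) == pinned  # pip aborts the install when this is False
print("verified" if ok else "MISMATCH: refuse to install")
```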
03
Audit your CI/CD for unpinned security tools — you almost certainly have this vulnerability
If your pipeline runs Trivy, Snyk, Semgrep, or any other security scanner pulled at build time without version pinning, you have the EXACT same vulnerability LiteLLM had. Right now. Today. The irony of your vulnerability scanner being the vulnerability is not theoretical — it is what just happened to a project with 95 million monthly downloads.
04
Deploy a registry proxy with quarantine
Tools like Takumi Guard or Artifactory can enforce a 72-hour quarantine on new package versions. If this had existed in LiteLLM's pipeline, the malicious versions would never have been installed.
05
Minimize your dependency tree — every package is a loaded gun
As Jim Fan noted: "people rarely need all the APIs supported in LiteLLM, might as well build a custom router with only what you need on the fly." Every dependency is an attack surface. Every transitive dependency is an invisible attack surface. How many packages does your project import? How many of those packages' maintainers would you trust with your AWS root credentials? Because that is what you are doing.
06
Isolate development environments from credentials
Use aws-vault or SSO instead of plaintext credentials in dotfiles. Use FIDO2 security keys for SSH instead of file-based keys. Run development inside containers that do not have access to your host's ~/.ssh or ~/.aws directories.
07
Scan PRs for supply chain attack patterns
Unicode homoglyphs, base64-encoded payloads, obfuscated eval() calls, and unexpected .pth files in package manifests are all detectable at code review time — if your tooling looks for them.
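
Two of those patterns, long base64 runs and zero-width characters, take only a few lines of stdlib to flag (the threshold and character list are illustrative; tune them for your codebase):

```python
import base64
import re

ZERO_WIDTH = {"\u200b", "\u200c", "\u200d", "\u2060", "\ufeff"}
B64_RUN = re.compile(r"[A-Za-z0-9+/]{80,}={0,2}")  # long base64-alphabet runs

def suspicious(line: str) -> list[str]:
    flags = []
    if any(ch in ZERO_WIDTH for ch in line):
        flags.append("zero-width character")
    for m in B64_RUN.finditer(line):
        try:
            # Only flag runs that actually decode as base64.
            base64.b64decode(m.group() + "=" * (-len(m.group()) % 4))
            flags.append("long base64 run")
        except Exception:
            pass
    return flags

payload = base64.b64encode(b"import subprocess  # pretend payload" * 5).decode()
print(suspicious(f"data = '{payload}'"))   # → ['long base64 run']
print(suspicious("total = price * qty"))   # → []
```

This is review-time tooling, not a sandbox: it catches the obfuscation style used in litellm_init.pth, not every possible encoding.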

The Uncomfortable Truth

The LiteLLM attack is not a story about a compromised Python package. It is a warning shot about the future of software development — and that future is already here. The entire software supply chain, from security scanners to CI/CD pipelines to package registries, operates on implicit trust. A single adversary decided to test every link. Every link broke.

TeamPCP did not need a zero-day. They did not need insider access. They did not need to be particularly clever. They exploited a known-dangerous GitHub Actions trigger that GitHub itself documents as risky, stole credentials from a build environment that should not have exposed them, and published malware through a legitimate distribution channel using the project's own credentials. Every single step used a known weakness. Nothing was novel. And it still worked.

The vibe coding era makes this catastrophically worse, not because vibe coding introduces new vulnerabilities, but because it makes existing ones completely invisible. When an AI agent installs forty packages to build your project and you approve each one with a keystroke, you are trusting the entire transitive dependency tree of every package, the CI/CD pipeline of every maintainer, the security tooling of every CI provider, and the credential management practices of every developer in that chain. You are trusting all of it. You do not know you are trusting any of it. And if any one of those thousands of links is compromised, your SSH keys, your cloud credentials, your crypto wallets, and your Kubernetes clusters are on an attacker's server before you even see the next terminal prompt.

The question is not whether another package will be compromised this way. TeamPCP already spread to npm's axios one week later. The question is not even when — it is happening continuously. The question is whether you will still be telling yourself "it won't happen to us" when the incident response team calls you at 2 AM.


How Code Corgi Detects Supply Chain Attacks at the PR Level

PhantomCorgi Invisible Threat Detection

Base64 payload detection
Identifies obfuscated payloads in source code, configuration files, and package manifests — the exact pattern used in litellm_init.pth.
Unicode & homoglyph scanning
Catches invisible characters (U+200C, zero-width joiners) and Cyrillic lookalikes that disguise malicious code as legitimate.
Semantic pattern analysis
Detects eval(), exec(), dynamic imports, subprocess spawning, and obfuscation patterns that indicate code designed to hide its intent.
.pth and startup hook monitoring
Flags any .pth file additions or modifications in Python packages — a blind spot in standard code review that this attack exploited.
Dependency manifest diffing
Alerts when dependencies are added, removed, or version-bumped in requirements files, lockfiles, or package manifests.
CI/CD configuration review
Scans GitHub Actions workflows for dangerous triggers like pull_request_target and unpinned action references — the entry point of the Trivy compromise.
Install Code Corgi →

The next LiteLLM is already in your pipeline. Find it first.

Code Corgi scans every pull request for obfuscated payloads, .pth file backdoors, homoglyph attacks, suspicious dependency changes, and dangerous CI/CD configurations — every attack pattern that built the LiteLLM compromise. Because the alternative is finding out from your incident response team.