
AI Development Tooling: The Supply Chain Attack Your Security Team Is Not Watching

By Dritan Saliovski

On March 31, 2026, a trojanized version of the axios npm package was published to the public registry, containing a Remote Access Trojan. The malicious versions (1.14.1 and 0.30.4) were live for approximately three hours. In that same window, Anthropic's Claude Code accidentally shipped its complete source code in a routine update. SentinelOne's AI-powered EDR detected and killed a separate trojanized AI-adjacent package in 44 seconds the same week. These are three distinct events that converged on the same risk: the software supply chain serving AI development tools is now a primary attack vector, and most security teams are not monitoring it as one.

Key Takeaways

  • A malicious axios npm package (versions 1.14.1, 0.30.4) deployed a Remote Access Trojan during a 3-hour window on March 31, 2026, coinciding with the Claude Code source leak
  • SentinelOne's EDR detected and terminated a trojanized AI-adjacent package in 44 seconds, demonstrating the speed gap between automated and manual detection
  • AI coding assistants that can both depend on and autonomously install npm packages create a bidirectional supply chain risk that traditional dependency scanning does not cover
  • Fewer than 30% of organizations have governance controls specifically addressing AI coding tool permissions, package installation rights, and code execution boundaries
  • The LiteLLM supply chain attack earlier in Q1 2026 demonstrated that AI tooling libraries are being specifically targeted as high-value supply chain entry points
  • 3 hrs — axios trojan exposure window on npm (npm Registry incident report, March 31, 2026)
  • 44 sec — SentinelOne EDR detection time for the trojanized package (SentinelOne, March 2026)
  • <30% — of organizations with AI coding tool governance controls (industry survey data, Q1 2026)

How AI Coding Tools Change the Supply Chain Risk Model

Traditional supply chain attacks target dependencies that developers knowingly add to their projects. A compromised package enters the codebase through a pull request, a lockfile update, or a direct installation command. Security teams monitor this through dependency scanning, lockfile auditing, and registry integrity checks.

AI coding assistants introduce a new vector. Tools like Claude Code, GitHub Copilot, and Cursor do not just depend on npm packages for their own functionality. They also recommend, generate, and in some configurations directly install packages on behalf of the developer. This creates a bidirectional supply chain risk.

[Figure: AI coding tools create bidirectional supply chain risk. Direction 1, upstream: a trojanized package on the npm registry is inherited through an AI tool update, compromising the developer machine and enabling exfiltration of keys, source code, and data. Direction 2, downstream: a developer prompt leads the AI to recommend and auto-install a package, placing a trojan in the codebase. At default configurations there is no human review step in either direction; traditional dependency scanning covers Direction 1 only.]

Source: Innovaiden analysis based on documented npm supply chain incidents, March-April 2026.

In the first direction, the AI tool itself has dependencies that can be compromised. When Claude Code depends on axios, and axios is trojanized, every developer who updates Claude Code during the compromise window inherits the malware through no action of their own. This is a standard supply chain attack, but the blast radius is amplified because AI coding tools have rapid adoption curves and frequent update cycles.

In the second direction, the AI tool acts as a package installation agent. When a developer asks their AI assistant to add HTTP request functionality, and the tool suggests and installs a package, the developer is trusting the model's judgment about which package is legitimate. If the model recommends a typosquatted or trojanized package, or if the tool has been configured with permissions that allow it to install packages without explicit approval, the compromise enters the codebase through a channel that no human directly reviewed.
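One concrete check a tool could apply before trusting a model-recommended package name is a typosquat screen. The sketch below flags names that are not known packages but sit one edit away from a popular one; the popular-package list is illustrative, not a real registry feed.

```python
# Sketch: flag AI-recommended package names that are one edit away from a
# popular package -- a common typosquatting pattern. POPULAR_PACKAGES is an
# illustrative stand-in for a real popularity feed.

POPULAR_PACKAGES = {"axios", "express", "lodash", "react", "requests"}

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def looks_like_typosquat(name: str) -> bool:
    """True if `name` is unknown but one edit from a known package."""
    if name in POPULAR_PACKAGES:
        return False
    return any(edit_distance(name, known) == 1 for known in POPULAR_PACKAGES)
```

A check like this would run in the policy layer between the model's suggestion and the install command, blocking or escalating near-miss names for human review.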

We analyzed the broader category of AI assistant attack surfaces in March, with a focus on browser-based AI tools. The development tooling vector is distinct because it operates with higher system privileges. Browser assistants can access web content and DOM elements. AI coding tools can read and write files, execute shell commands, and modify the build pipeline itself.

The LiteLLM Precedent

The March 31 incidents were not the first time AI development tooling was specifically targeted. Earlier in Q1 2026, the LiteLLM package, a popular library for routing API calls across multiple AI model providers, was the subject of a supply chain attack. A trojanized version was published with code designed to exfiltrate API keys and environment variables from development machines.

SentinelOne documented the incident in detail. Their AI-powered EDR platform detected the malicious behavior and terminated the process in 44 seconds. The detection worked because the behavioral anomaly, an outbound network connection to an unknown endpoint immediately after package installation, triggered automated analysis before the exfiltration could complete.

The lesson is not that EDR solved the problem. The lesson is that without automated detection capable of sub-minute response, the exfiltration would have succeeded. Manual review of package installations, even in organizations with dedicated AppSec teams, operates on timescales of hours or days. The attack completes in seconds. As we detailed in our analysis of agentic attackers and 27-second breakout times, automation is no longer optional for the initial containment phase.
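The correlation that made sub-minute detection possible, an outbound connection to an unknown endpoint shortly after a package install, can be sketched as a simple event rule. The event shapes, the 60-second window, and the endpoint allowlist below are illustrative assumptions, not any vendor's telemetry format.

```python
# Sketch: correlate a package-install event with a subsequent outbound
# network event, and flag connections to unknown endpoints within a short
# window after install. All names and thresholds here are illustrative.

from dataclasses import dataclass

KNOWN_ENDPOINTS = {"registry.npmjs.org", "api.github.com"}
SUSPICION_WINDOW_SECONDS = 60

@dataclass
class InstallEvent:
    package: str
    timestamp: float  # epoch seconds

@dataclass
class NetworkEvent:
    endpoint: str
    timestamp: float

def flag_suspicious(install: InstallEvent, net: NetworkEvent) -> bool:
    """Flag an outbound connection to an unknown endpoint soon after install."""
    delta = net.timestamp - install.timestamp
    return (0 <= delta <= SUSPICION_WINDOW_SECONDS
            and net.endpoint not in KNOWN_ENDPOINTS)
```

The point of the rule is speed: it fires on the first anomalous connection, before exfiltration completes, which manual review cannot do.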

What Makes AI-Adjacent Packages High-Value Targets

AI development libraries have characteristics that make them attractive supply chain targets.

First, they are installed broadly and rapidly. When a new AI framework or SDK gains traction, adoption follows a steep curve as development teams rush to integrate it. The speed of adoption often outpaces security review. A trojanized version published during an adoption surge reaches a large number of machines quickly.

Second, they handle sensitive data by design. AI development tools routinely interact with API keys, model endpoints, training data, and in the case of coding assistants, the entire source code of the project being worked on. A compromised AI package has immediate access to high-value assets without needing lateral movement.

Third, they often request broad permissions. AI coding assistants need file system access, network access, and shell execution to function. These permissions are granted at installation and rarely revisited. A compromised tool operating within those existing permissions does not trigger the access control alerts that a newly privileged process would.

Governance Controls That Most Organizations Lack

The AI agent deployment security framework we published in March addresses governance for AI agents broadly. For AI development tools specifically, most organizations lack controls in three areas.

Package installation governance. Who approves the packages that an AI coding assistant recommends or installs? In most configurations, the tool can suggest and the developer can approve with a single keystroke. There is no policy layer between the model's recommendation and the installation command. Organizations should implement allowlists for AI-recommended packages, require lockfile review before committing AI-suggested dependencies, and restrict the tool's ability to install packages in production-adjacent environments.
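The allowlist idea above can be sketched as a minimal policy gate: an install proceeds only if the exact package and version pair has been reviewed. The package names and pinned versions below are illustrative.

```python
# Sketch of a policy layer between an AI assistant's package suggestion and
# the actual install: only reviewed, pinned package@version pairs pass.
# The allowlist contents are illustrative assumptions.

ORG_ALLOWLIST = {
    "axios": {"1.6.8"},       # hypothetical reviewed, pinned version
    "lodash": {"4.17.21"},
}

def approve_install(package: str, version: str) -> bool:
    """Return True only if the exact package@version pair has been reviewed."""
    return version in ORG_ALLOWLIST.get(package, set())
```

Note that the gate is exact-match on version: a trojanized 1.14.1 fails even though axios itself is on the list, which is precisely the property that dependency-name allowlists alone lack.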

Execution boundary controls. AI coding tools that can execute shell commands need explicit boundaries on what they can run, where they can connect, and what system resources they can access. The default configuration for most tools is permissive. Claude Code's hooks system, now fully documented through the source leak, allows pre- and post-execution scripts that run automatically. Security teams should audit these configurations and restrict execution to sandboxed environments where possible.
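A minimal sketch of such a boundary check, assuming a simple command allowlist (the allowed commands and blocked subcommands are illustrative, and this is not Claude Code's hooks API):

```python
# Sketch of an execution-boundary check for shell commands an AI coding tool
# proposes to run: only allowlisted commands pass, and package installation
# is forced back to human approval. A real deployment would also sandbox
# the process itself; the lists here are illustrative.

import shlex

ALLOWED_COMMANDS = {"ls", "cat", "git", "npm"}
BLOCKED_SUBCOMMANDS = {("npm", "install"), ("npm", "i")}  # require human approval

def command_permitted(command_line: str) -> bool:
    try:
        tokens = shlex.split(command_line)
    except ValueError:
        return False  # unbalanced quotes etc.
    if not tokens or tokens[0] not in ALLOWED_COMMANDS:
        return False
    if tuple(tokens[:2]) in BLOCKED_SUBCOMMANDS:
        return False
    return True
```

Because `shlex` does not treat shell metacharacters as separators, a chained command like `ls; rm -rf /` tokenizes to an unrecognized first token and is rejected, though a sandbox remains the real enforcement layer.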

Update and version pinning. AI development tools should not auto-update from public registries without verification. The March 31 axios compromise was effective precisely because it targeted a dependency that updates frequently and is rarely pinned to exact versions. Organizations should pin AI tool versions, verify checksums before updates, and implement a cooling-off period before adopting new releases.
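Checksum verification before update can be sketched against npm's own integrity format: lockfiles record a Subresource Integrity string of the form `sha512-<base64 digest>`, which can be recomputed over the downloaded tarball and compared. The tarball bytes in the usage below are illustrative.

```python
# Sketch: verify a downloaded package tarball against a pinned npm-style
# integrity string (sha512-<base64 digest>) before installing it.

import base64
import hashlib

def npm_integrity(data: bytes) -> str:
    """Compute an npm-style SRI string for the given bytes."""
    digest = hashlib.sha512(data).digest()
    return "sha512-" + base64.b64encode(digest).decode("ascii")

def verify_tarball(data: bytes, pinned_integrity: str) -> bool:
    """True only if the tarball matches the pinned integrity value."""
    return npm_integrity(data) == pinned_integrity
```

Pinning the integrity string, not just the version number, is what defeats an attacker who republishes a trojanized artifact under an existing version tag.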

Integration with Existing Security Programs

For organizations already operating under ISO 27001, NIS2, or DORA requirements, AI development tooling fits within existing supply chain security obligations. NIS2's Article 21(d) requires risk management for direct suppliers and service providers, including contractual security requirements. An AI coding tool vendor qualifies as a direct supplier to your development process.

DORA's ICT third-party risk management requirements apply to AI tools used within financial services development environments. If your engineers use an AI coding assistant to build or maintain systems that support financial services, that tool is an ICT third-party provider under DORA's definition.

The four-framework regulatory alignment analysis we published this week maps how NIS2, DORA, CRA, and the revised CSA each evaluate vendors across different dimensions. AI coding tool vendors should be assessed against all applicable frameworks, not just the one your compliance team happens to be focused on. For organizations managing AI data governance across multiple tools, the supply chain dimension adds urgency to vendor assessment.

What to Do Now

  • Inventory all AI coding tools in use across your development teams, including tools installed individually by developers without central IT approval.
  • Audit the permissions each tool has: file system access, shell execution, network access, and package installation rights.
  • Review npm lockfiles for compromised axios versions from March 31.
  • Establish a vendor risk assessment process specifically for AI development tools that includes build pipeline practices and dependency management.
  • Integrate AI tool governance into your existing SDLC security controls rather than treating it as a separate workstream.
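The lockfile review step can be automated with a short scan. The sketch below walks the `packages` map of a package-lock.json (v2/v3 layout) and reports any entry matching the compromised axios versions named in this article; the lockfile content in the test is an inline illustration.

```python
# Sketch: scan a package-lock.json (v2/v3 "packages" layout) for the
# compromised axios versions from March 31, 2026.

import json

COMPROMISED = {("axios", "1.14.1"), ("axios", "0.30.4")}

def find_compromised(lockfile_text: str) -> list[str]:
    """Return a list of compromised package@version entries in the lockfile."""
    lock = json.loads(lockfile_text)
    hits = []
    for path, meta in lock.get("packages", {}).items():
        # Entry keys look like "node_modules/axios"; the root entry is "".
        name = path.rsplit("node_modules/", 1)[-1] if path else lock.get("name", "")
        if (name, meta.get("version")) in COMPROMISED:
            hits.append(f"{name}@{meta['version']}")
    return hits
```

Run against every repository's lockfile, this turns the "review npm lockfiles" action item into a one-pass check rather than a manual diff.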

The full Intelligence Brief covers the complete AI coding tool governance control matrix, a dependency chain risk assessment template, a comparison of default security postures across major AI coding tools, and an SDLC integration checklist for AI tool security controls.

Work With Us

Assess Your AI Development Tool Supply Chain

Innovaiden works with leadership teams deploying AI agents across their organizations - from initial setup and training to security framework alignment and governance readiness. Reach out to discuss how we can help your team.

Get in Touch

Frequently Asked Questions

What is bidirectional supply chain risk in AI coding tools?

AI coding tools create risk in two directions. In the first direction, the tool itself has dependencies that can be compromised - when Claude Code depends on axios and axios is trojanized, every developer who updates inherits the malware. In the second direction, the AI tool acts as a package installation agent - it recommends and in some configurations directly installs packages, creating a channel no human directly reviewed.

What happened with the axios npm supply chain attack?

On March 31, 2026, trojanized versions of the axios npm package (1.14.1 and 0.30.4) were published containing a Remote Access Trojan. The malicious versions were live for approximately three hours, between 00:21 and 03:29 UTC. This coincided with Anthropic's Claude Code accidentally shipping its complete source code in the same window.

Why are AI development libraries high-value supply chain targets?

Three characteristics make them attractive: they are installed broadly and rapidly during adoption surges, they handle sensitive data by design (API keys, model endpoints, source code), and they often request broad permissions (file system access, network access, shell execution) that are rarely revisited after initial setup.

What governance controls do most organizations lack for AI coding tools?

Most organizations lack controls in three areas: package installation governance (no policy layer between AI recommendation and installation), execution boundary controls (default configurations are permissive for shell commands), and update and version pinning (AI tools auto-update from public registries without verification or cooling-off periods).

How does AI tool supply chain risk fit under NIS2 and DORA?

NIS2's Article 21(d) requires risk management for direct suppliers and service providers, including contractual security requirements - an AI coding tool vendor qualifies. DORA's ICT third-party risk management requirements apply to AI tools used within financial services development environments. Both frameworks extend to AI development tooling.

Sources

  1. npm Registry. axios versions 1.14.1 and 0.30.4 incident report. npmjs.com. 2026.
  2. SentinelOne. Trojanized AI package detection and LiteLLM supply chain attack report. sentinelone.com. 2026.
  3. Anthropic. Claude Code source map exposure incident statement. anthropic.com. 2026.
  4. Zscaler ThreatLabz. Claude Code exposure and AI coding tool threat analysis. zscaler.com. 2026.
  5. CrowdStrike. 2026 Global Threat Report - supply chain and AI-enabled attack data. crowdstrike.com. 2026.
  6. Industry survey data on AI coding tool governance adoption rates, Q1 2026. Synthesized from multiple analyst reports.