*Illustration: the Claude Code source leak caused by an npm packaging error*
A missing configuration file turned one of AI’s most closely guarded codebases into a public download for several hours on March 31, 2026.
Anthropic accidentally shipped a 59.8 MB JavaScript source map inside Claude Code’s npm package (version 2.1.88), exposing 512,000 lines of unobfuscated TypeScript across 1,906 files. The map also pointed to a ZIP archive hosted on Anthropic’s own Cloudflare R2 storage. Security researcher Chaofan Shou spotted the exposure within hours and posted it on X. The company confirmed the incident was a packaging mistake caused by human error, with no customer data or model weights involved.
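For context on why a shipped `.map` file is so damaging: a v3 source map can embed the complete original source in its `sourcesContent` field, so anyone holding the map effectively holds the unminified code. A minimal sketch of the format (file names and contents here are hypothetical, not from the leak):

```json
{
  "version": 3,
  "file": "cli.min.js",
  "sources": ["src/agent.ts", "src/tools.ts"],
  "sourcesContent": [
    "// full original TypeScript of src/agent.ts ...",
    "// full original TypeScript of src/tools.ts ..."
  ],
  "names": [],
  "mappings": "AAAA,..."
}
```

Even when `sourcesContent` is omitted, the `sources` array still reveals the original file layout, and any URL it references (as in this incident) hands over the rest.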
Why It Matters
The code spread faster than Anthropic could contain it. Within hours of discovery, mirrors of the leaked code had been forked more than 82,000 times, and a clean-room Python rewrite reached 50,000 GitHub stars in roughly two hours, a pace observers described as the fastest-growing repository in GitHub history. Anthropic’s subsequent DMCA takedown campaign generated its own controversy when it accidentally removed more than 8,000 unrelated repositories before the company retracted most of the notices.
The exposed source revealed 44 unreleased feature flags covering more than 20 unshipped capabilities. Among the most discussed: KAIROS, an autonomous daemon mode in which Claude operates as a persistent background agent; Undercover Mode, which instructs the model to conceal that it is an AI when contributing to open-source projects; and an anti-distillation system that injects fake tool definitions to degrade competitors’ training data. Internal comments also referenced a 29–30% false claims rate in the current model, a regression from 16.7% in the previous version. Codenames for undisclosed model variants (Capybara, Fennec) surfaced alongside details of a self-healing memory architecture.
The leak was Anthropic’s second significant data exposure in five days. A CMS misconfiguration days earlier had exposed roughly 3,000 internal files, including references to an unreleased model called Claude Mythos. Zscaler’s ThreatLabz team separately identified malicious actors exploiting the incident, creating fake “leaked source” repositories on GitHub to distribute Vidar Stealer malware and GhostSocks proxy tools.
What’s Next
Anthropic has patched the npm packaging process and published a post-mortem attributing the incident to a missing .npmignore file. The company stated it is reviewing its release pipeline to prevent recurrence.
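A common safeguard against exactly this failure mode, sketched here as an illustration rather than as Anthropic’s confirmed fix, is to replace a deny-list (`.npmignore`) with an explicit allowlist via the `files` field in `package.json`, so a stray build artifact is excluded by default (package name and paths below are hypothetical):

```json
{
  "name": "example-cli",
  "version": "1.0.0",
  "bin": { "example-cli": "cli.js" },
  "files": ["cli.js", "vendor/"]
}
```

With an allowlist, a 59.8 MB `.map` file never enters the tarball unless it is listed. Running `npm pack --dry-run` before `npm publish` prints the exact contents and size of the tarball that would ship, which makes an oversized or unexpected file easy to catch in CI.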
The business consequences are still developing. Enterprise customers evaluating Claude Code for sensitive environments are now aware of internal quality metrics, including the false claims regression, that Anthropic had not planned to disclose. Legal exposure from the DMCA overcorrection remains an open question, as affected repository owners have begun documenting wrongful takedowns.
For the AI business sector broadly, the incident illustrates a structural tension: the companies building the most capable AI systems are scaling their engineering organizations at a pace that outstrips operational controls. When a single missing file can expose half a million lines of proprietary code, the security posture of even well-funded AI labs warrants scrutiny.
Anthropic’s annualized revenue recently crossed $19 billion. The source code that was briefly public represents the engineering foundation of its fastest-growing product line.
Sources: CNBC · The Hacker News · VentureBeat · Layer5
