Neural Network World

Independent AI News & Analysis

Anthropic Leaks 512,000 Lines of Claude Code Source via npm Error

Neural Network World Editorial Team · April 6, 2026 (Last updated: April 6, 2026) · 3-minute read
Illustration of the Claude Code source leak caused by an npm packaging error

A missing configuration file turned one of AI’s most closely guarded codebases into a public download for several hours on March 31, 2026.

Anthropic accidentally shipped a 59.8 MB JavaScript source map inside Claude Code’s npm package (version 2.1.88), exposing 512,000 lines of unobfuscated TypeScript across 1,906 files. Security researcher Chaofan Shou spotted the exposure within hours and posted it on X. The source map pointed to a ZIP archive hosted on Anthropic’s own Cloudflare R2 storage. The company confirmed the incident was a packaging mistake caused by human error, with no customer data or model weights involved.
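This is why a shipped source map is so dangerous: the format’s optional `sourcesContent` array can embed the complete original code for every file listed in `sources`. A minimal sketch of recovering embedded sources from a `.map` file (file names here are hypothetical, not taken from the Claude Code package):

```python
import json
from pathlib import Path


def extract_embedded_sources(map_path: str, out_dir: str) -> list[str]:
    """Write out any original sources embedded in a JavaScript source map.

    A source map's optional `sourcesContent` array holds the full original
    code for each entry in `sources` -- which is how a single shipped .map
    file can expose an entire unobfuscated codebase.
    """
    data = json.loads(Path(map_path).read_text())
    sources = data.get("sources", [])
    contents = data.get("sourcesContent") or []
    written = []
    for name, content in zip(sources, contents):
        if content is None:
            continue  # this entry was not embedded
        # Drop "../" segments so writes stay inside out_dir.
        dest = Path(out_dir) / name.replace("../", "")
        dest.parent.mkdir(parents=True, exist_ok=True)
        dest.write_text(content)
        written.append(str(dest))
    return written


# Hypothetical usage: recover sources from a bundled CLI's map file.
# extract_embedded_sources("cli.js.map", "recovered/")
```

Bundlers embed `sourcesContent` by default in many configurations, so the exposure requires no reverse engineering at all: the TypeScript comes back byte-for-byte.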

Why It Matters

The code spread faster than Anthropic could contain it. Within hours of discovery the repository had been forked more than 82,000 times, and a clean-room Python rewrite reached 50,000 GitHub stars in roughly two hours – a pace that observers described as the fastest-growing repository in GitHub history. Anthropic’s subsequent DMCA takedown campaign generated its own controversy when it accidentally removed more than 8,000 unrelated repositories before the company retracted most notices.

The exposed source revealed 44 unreleased feature flags covering more than 20 unshipped capabilities. Among the most discussed: KAIROS, an autonomous daemon mode where Claude operates as a persistent background agent; Undercover Mode, which instructs the model to conceal that it is an AI when contributing to open-source projects; and an anti-distillation system that injects fake tool definitions to degrade competitors’ training data. Internal comments also referenced a 29–30% false claims rate in the current model – a regression from 16.7% in the previous version. Codenames for undisclosed model variants (Capybara, Fennec) surfaced alongside details of a self-healing memory architecture.

The leak was Anthropic’s second significant data exposure in five days. A CMS misconfiguration days earlier had exposed roughly 3,000 internal files, including references to an unreleased model called Claude Mythos. Zscaler’s ThreatLabz team separately identified malicious actors exploiting the incident, creating fake “leaked source” repositories on GitHub to distribute Vidar Stealer malware and GhostSocks proxy tools.

What’s Next

Anthropic has patched the npm packaging process and published a post-mortem attributing the incident to a missing .npmignore file. The company stated it is reviewing its release pipeline to prevent recurrence.
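For context on the class of fix involved (the entries below are illustrative, not Anthropic’s actual configuration): npm excludes files matched by a package’s `.npmignore`, and a `files` allowlist in `package.json` takes precedence over ignore rules, which makes it the more fail-safe option.

```
# .npmignore -- keep build artifacts out of the published tarball
*.map
src/
tsconfig.json
```

Either way, `npm pack --dry-run` lists exactly what would be published, so a release pipeline can audit tarball contents before `npm publish` runs.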

The business consequences are still developing. Enterprise customers evaluating Claude Code for sensitive environments are now aware of internal quality metrics – including the false claims regression – that Anthropic had not planned to disclose. Legal exposure from the DMCA overcorrection remains an open question, as affected repository owners have begun documenting wrongful takedowns.

For the AI business sector broadly, the incident illustrates a structural tension: the companies building the most capable AI systems are scaling their engineering organizations at a pace that outstrips operational controls. When a single missing file can expose half a million lines of proprietary code, the security posture of even well-funded AI labs warrants scrutiny.

Anthropic’s annualized revenue recently crossed $19 billion. The source code that was briefly public represents the engineering foundation of its fastest-growing product line.

Sources: CNBC · The Hacker News · VentureBeat · Layer5

