Neural Network World

Independent AI News & Analysis
Hackers Steal 4TB from AI Data Firm Mercor in Supply Chain Attack

Neural Network World Editorial Team · April 5, 2026 (Last updated: April 5, 2026) · 3 minute read
Illustration of a cybersecurity breach inside an AI training company, with hackers extracting data through a poisoned open-source package

A poisoned version of LiteLLM – an open-source Python library with 97 million monthly downloads – gave hackers access to Mercor’s systems for roughly 40 minutes in late March, long enough to exfiltrate an estimated 4 terabytes of data. The $10 billion AI training startup confirmed the breach on April 2, acknowledging it was among thousands of companies hit by the same supply chain attack.

The stolen data includes 939 GB of platform source code, a user database containing names and Social Security numbers of more than 40,000 contractors, approximately 3 TB of video interview recordings, and proprietary AI training methodologies belonging to Mercor’s clients – which include OpenAI, Anthropic, and Meta. The extortion group Lapsus$ has claimed responsibility and is demanding payment.

Why It Matters

Meta has indefinitely suspended all work with Mercor. Contractors assigned to Meta AI projects can no longer log hours, and Meta has declined to comment on specifics. OpenAI confirmed it is investigating but has not paused its Mercor projects. Anthropic has not publicly responded.

The exposure of training methodologies – how frontier labs select, label, and weight training data – makes this breach different from a standard corporate hack. That information represents closely guarded competitive intelligence. If it reaches rival AI programs, including those backed by nation-state actors, it could accelerate the development of competing models in ways that are difficult to detect or quantify.

The wider blast radius of the LiteLLM attack is still being measured. Cybersecurity firm Mandiant has identified over 1,000 compromised SaaS environments so far and expects that number to rise significantly. The research group Vx-underground estimates that data was pulled from more than 500,000 machines globally.

What’s Next

At least four class action lawsuits have already been filed against Mercor, with the lead case – Gill v. Mercor.io Corporation – lodged in the Northern District of California on April 1. The suits allege Mercor failed to implement multi-factor authentication, encrypt sensitive data, or monitor systems for unusual activity.

Expect AI labs to respond by tightening vendor security requirements across the board. The model of large AI companies outsourcing training data operations to lightly audited startups is now under direct scrutiny. Mandatory software bill of materials disclosures and code-signing requirements for high-download open-source packages are likely to gain momentum in policy discussions.
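None of the public filings describe Mercor's exact controls, but the kind of integrity check that hash-pinning and code-signing policies point toward is simple to sketch. The illustrative Python snippet below (the function name and data are hypothetical, not from any real package) shows the core idea: record a cryptographic digest of a dependency at pin time, and refuse any artifact whose digest no longer matches.

```python
import hashlib

def verify_artifact(data: bytes, expected_sha256: str) -> bool:
    """Compare an artifact's SHA-256 digest against a pinned value."""
    return hashlib.sha256(data).hexdigest() == expected_sha256

# In-memory stand-in for a downloaded package archive.
artifact = b"example package contents"

# Digest recorded when the dependency was originally pinned.
pinned = hashlib.sha256(artifact).hexdigest()

print(verify_artifact(artifact, pinned))                 # unmodified artifact -> True
print(verify_artifact(artifact + b"tampered", pinned))   # altered artifact -> False
```

This is essentially what pip's hash-checking mode (`pip install --require-hashes`) automates when digests are recorded in a requirements file: a poisoned release of a pinned package would fail the digest comparison before it ever executed.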

Mercor’s future is uncertain. With its largest client frozen, four active lawsuits, and its reputation damaged, the company faces serious pressure despite its headline valuation. The broader industry is watching to see whether this breach triggers a lasting structural change in how AI research supply chains are managed.

Sources: Fortune · The Next Web · The Register

About the Author

Neural Network World Editorial Team

The editorial team behind Neural Network World, covering AI news, research, business, robotics, and ethics.


