Neural Network World
Independent AI News & Analysis

AI Research
Claude Writes a Working FreeBSD Kernel Exploit in 4 Hours

Neural Network World Editorial Team April 6, 2026 (Last updated: April 6, 2026) 3 minutes read
Illustration of AI-assisted exploit development in a FreeBSD security research lab

A security researcher gave Anthropic’s Claude an existing vulnerability advisory and walked away. Hours later, the AI had produced two working kernel exploits – both succeeded on the first attempt.

The work is part of MAD Bugs (Month of AI-Discovered Bugs), a research initiative run by security firm Calif through April 2026. Researcher Nicholas Carlini provided Claude with a published FreeBSD advisory (CVE-2026-4747, a stack buffer overflow in the kgssapi.ko Kerberos module) and minimal guidance. Over roughly eight hours of wall-clock time – about four hours of actual compute – Claude autonomously configured a virtual machine, read kernel crash dumps, constructed ROP chains, and devised a 15-round multi-packet shellcode delivery strategy. Both resulting exploits dropped a remote root shell and worked on the first try.

Why It Matters

The FreeBSD exploit is one data point in a much larger pattern. The same Claude-powered pipeline uncovered critical remote code execution vulnerabilities in Vim (CVE-2026-34714, CVSS 9.2, now patched), Firefox (CVE-2026-2796, patched), and GNU Emacs – where maintainers declined to fix the flaw, leaving users exposed. Across the MAD Bugs initiative, Claude has independently found more than 500 high-severity zero-day vulnerabilities in production open-source software.

The implications for AI research go beyond any single CVE. Former Facebook CSO Alex Stamos warned at RSAC 2026 that AI could soon reverse-engineer patches into working exploits within 24 hours – a scenario security teams have long feared but assumed was years away. Security researcher Thomas Ptacek described the current moment as “the last fleeting moments where there’s any uncertainty that AI agents will supplant most human vulnerability research.”

The economics are shifting too. Discovering and exploiting kernel vulnerabilities has historically required senior researchers with years of low-level systems experience. Claude completed the FreeBSD work largely while the human researcher was away from the keyboard. That changes the cost structure of offensive security in ways that defenders are not yet equipped to handle.

What’s Next

FreeBSD patched CVE-2026-4747 on March 26, crediting “Nicholas Carlini using Claude, Anthropic” – a notable first for an AI system appearing in an official CVE acknowledgment. The GNU Emacs RCE remains unpatched as of publication.

MAD Bugs continues through the end of April 2026. Carlini has indicated more disclosures are coming, with targets spanning additional system-level open-source projects. The pace of discovery suggests the bottleneck is no longer the AI’s ability to find vulnerabilities – it is the speed at which maintainers can respond.

For the broader AI ethics community, the episode sharpens a debate that has been largely theoretical: when an AI can autonomously write working exploits for unpatched vulnerabilities, who is responsible for the resulting risk? Anthropic has not issued a public statement on the MAD Bugs findings.

The FreeBSD case will likely be cited for years as the moment AI-assisted offensive security stopped being a research curiosity and became operational reality.

Sources: Calif Blog · Robo Rhythms

About the Author

Neural Network World Editorial Team

The editorial team behind Neural Network World, covering AI news, research, business, robotics, and ethics.
