Linux 7.0 Ships First AI Code Policy, Holds Humans Liable

By the Neural Network World Editorial Team · April 15, 2026

Linux kernel 7.0 introduces the project’s first formal policy for AI-assisted code contributions, requiring disclosure, human accountability, and manual code review.

The Linux kernel project shipped version 7.0 on April 13, 2026, with the first formal policy governing AI-assisted code contributions in its 34-year history. The rules, codified in a 59-line document merged into Linus Torvalds’ mainline repository, make Linux the first major infrastructure project to establish binding AI code governance. The reach of that decision is hard to overstate: Linux runs on all of the world’s top 500 supercomputers and forms the foundation of Android, most cloud infrastructure, and the majority of enterprise servers globally.

The policy was finalized after months of debate on the Linux Kernel Mailing List stretching from late 2025 through early 2026, driven by a specific incident: kernel maintainer Sasha Levin submitted an undisclosed LLM-written patch to kernel 6.15 that caused a performance regression, triggering a community reckoning over how to handle AI-generated contributions at scale.

Why It Matters

The policy establishes five binding rules:

  • AI agents cannot add “Signed-off-by” tags; only humans may legally certify the Developer Certificate of Origin.
  • A new “Assisted-by” tag must specify the AI tool name, model version, and toolchain used whenever AI assists in development.
  • Humans bear full legal and technical responsibility for all AI-generated code, including bugs, security flaws, and license compliance.
  • AI is classified as a tool, equivalent to a compiler or static analyzer, not an author.
  • AI cannot substitute for human code review at any stage.
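In practice, a disclosed patch under these rules would carry commit trailers along these lines (the tool name, toolchain, and contributor here are illustrative, not taken from the policy text):

```
Assisted-by: ExampleLLM v3 (via example-toolchain 1.2)
Signed-off-by: Jane Developer <jane@example.com>
```

The Signed-off-by line must name a human, who thereby certifies the Developer Certificate of Origin and answers for the patch.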

The commercial stakes are significant. The Linux kernel is embedded in an estimated 3 billion Android devices, powers the majority of public cloud infrastructure at AWS, Azure, and Google Cloud, and underpins the financial systems of most Fortune 500 companies. A vulnerability or license compliance failure in AI-generated kernel code could propagate across all of those surfaces. Stable branch maintainer Greg Kroah-Hartman noted in mailing list discussions that AI ethics concerns around kernel contributions centered on patches that appeared superficially correct but carried subtle logic errors – precisely the failure mode that human review is designed to catch.

What’s Next

Torvalds himself acknowledged the practical limits of the policy. Writing on the mailing list, he noted there is no way to force bad actors to disclose AI tool use – the rules apply only to contributors acting in good faith. In his 7.0 release notes, he described AI-generated bug reports as potentially the “new normal” for an indeterminate period, suggesting the kernel community is preparing for sustained AI-related maintenance overhead rather than a short adjustment phase.

The policy is already influencing adjacent projects. Gentoo Linux and NetBSD had previously imposed outright bans on AI-generated contributions; the Linux approach – permitting AI assistance with mandatory disclosure and human accountability – offers a middle path that several other major open-source foundations are now reported to be evaluating. If adopted widely, the “Assisted-by” tag framework could become a de facto standard for AI attribution in open-source software, with implications for how enterprises track and audit AI involvement in their own software supply chains.
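If the “Assisted-by” trailer does spread, one reason it would suit supply-chain auditing is that stock git can already filter on commit trailers. A minimal sketch, using a throwaway repository and a hypothetical tool name purely for demonstration:

```shell
# Illustrative setup only: a throwaway repo with one disclosed commit.
repo=$(mktemp -d) && cd "$repo" && git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty \
    -m "sched: fix wakeup race" \
    -m "Assisted-by: ExampleLLM v3 (via example-toolchain 1.2)
Signed-off-by: Jane Developer <jane@example.com>"

# Audit step: list every commit that discloses AI assistance,
# showing the hash, subject, and the disclosed tool details.
git log --grep='Assisted-by:' \
    --format='%h %s%n  %(trailers:key=Assisted-by)'
```

Because trailers are machine-readable, the same filter scales from one repository to an entire dependency tree, which is what makes the tag attractive as an audit primitive.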

Whether the policy will satisfy regulators is an open question. The EU AI Act’s provisions on high-risk AI systems in critical infrastructure have not yet been interpreted to cover open-source kernel contributions specifically, but legal analysts expect that gap to close as AI-generated code becomes pervasive in software that falls under the Act’s scope.

Sources: Tom’s Hardware · The Register · XDA Developers



