Linux kernel 7.0 introduces the project’s first formal policy for AI-assisted code contributions, requiring disclosure, human accountability, and manual code review.
The Linux kernel project shipped version 7.0 on April 13, 2026, with the first formal policy governing AI-assisted code contributions in its 34-year history. The rules, codified in a 59-line document merged into Linus Torvalds’ mainline repository, make Linux the first major infrastructure project to establish binding AI code governance. The kernel runs on all 500 of the world’s top supercomputers and forms the foundation of Android, most cloud infrastructure, and the majority of enterprise servers worldwide.
The policy was finalized after months of debate on the Linux Kernel Mailing List stretching from late 2025 through early 2026, driven by a specific incident: kernel maintainer Sasha Levin submitted an undisclosed LLM-written patch to kernel 6.15 that caused a performance regression, triggering a community reckoning over how to handle AI-generated contributions at scale.
Why It Matters
The policy establishes five binding rules. AI agents cannot add “Signed-off-by” tags; only humans may legally certify the Developer Certificate of Origin. A new “Assisted-by” tag must specify the AI tool name, model version, and toolchain used whenever AI assists in development. Humans bear full legal and technical responsibility for all AI-generated code, including bugs, security flaws, and license compliance. AI is classified as a tool, equivalent to a compiler or static analyzer, not an author. And AI cannot substitute for human code review at any stage.
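As an illustration, a compliant AI-assisted patch under these rules might carry commit trailers along these lines. The tool, model, and contributor names below are hypothetical placeholders, not examples drawn from the policy document itself:

```
subsystem: one-line summary of the change

Longer description of what the patch does and why.

Assisted-by: ExampleAssistant (model example-model-2, via example-cli 1.4)
Signed-off-by: Jane Developer <jane@example.org>
```

Note that the Signed-off-by line still names a human, who thereby certifies the Developer Certificate of Origin and accepts responsibility for the change; the Assisted-by line only discloses the tooling involved.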
The commercial stakes are significant. The Linux kernel is embedded in an estimated 3 billion Android devices, powers the majority of public cloud infrastructure at AWS, Azure, and Google Cloud, and underpins the financial systems of most Fortune 500 companies. A vulnerability or license compliance failure in AI-generated kernel code could propagate across all of those surfaces. Stable branch maintainer Greg Kroah-Hartman noted in mailing list discussions that concerns about AI-assisted contributions centered on patches that appeared superficially correct but carried subtle logic errors, precisely the failure mode that human review is designed to catch.
What’s Next
Torvalds himself acknowledged the practical limits of the policy. Writing on the mailing list, he noted there is no way to force bad actors to disclose AI tool use; the rules apply only to contributors acting in good faith. In his 7.0 release notes, he described AI-generated bug reports as potentially the “new normal” for an indeterminate period, suggesting the kernel community is preparing for sustained AI-related maintenance overhead rather than a short adjustment phase.
The policy is already influencing adjacent projects. Gentoo Linux and NetBSD had previously imposed outright bans on AI-generated contributions; the Linux approach, which permits AI assistance with mandatory disclosure and human accountability, offers a middle path that several other major open-source foundations are now reported to be evaluating. If adopted widely, the “Assisted-by” tag framework could become a de facto standard for AI attribution in open-source software, with implications for how enterprises track and audit AI involvement in their own software supply chains.
Whether the policy will satisfy regulators is an open question. The EU AI Act’s provisions on high-risk AI systems in critical infrastructure have not yet been interpreted to cover open-source kernel contributions specifically, but legal analysts expect that gap to close as AI-generated code becomes pervasive in software that falls under the Act’s scope.
Sources: Tom’s Hardware · The Register · XDA Developers
