# Linux Kernel Adopts Formal AI Policy Requiring Disclosure of Assisted Code

_Tuesday, April 14, 2026 at 12:12 AM EDT · AI, Infrastructure, Tech & Business · Latest · Tier 2 — Notable_

![Linux Kernel Adopts Formal AI Policy Requiring Disclosure of Assisted Code — Primary](https://www.zdnet.com/a/img/resize/ba07fdb8ab352aa44893bbedd0f0f6a1008b6742/2026/04/13/2c6d1c54-b521-48ee-ba3d-1d15505de481/gettyimages-1443552838.jpg?auto=webp&fit=crop&height=675&width=1200)

The Linux kernel project has established its first formal policy governing AI-assisted code contributions, mandating disclosure when developers use artificial intelligence tools while maintaining that humans bear ultimate responsibility for all submissions.

After months of deliberation, Linus Torvalds and kernel maintainers finalized guidelines requiring developers to tag any AI-assisted patches with an "Assisted-by" attribution. The policy reflects a pragmatic approach that treats AI as a development tool rather than a co-author while addressing licensing and quality concerns.

Under the new rules, AI agents cannot add the legally required "Signed-off-by" tags that certify compliance with the kernel's Developer Certificate of Origin. Only human contributors can assume legal responsibility for code licensing. Developers must include an "Assisted-by" tag identifying the specific model and tools used, such as "Assisted-by: Claude:claude-3-opus coccinelle sparse."
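The trailer layout described above can be sketched in a short shell example. The "Assisted-by" value is taken from the article; the commit subject, the contributor name, and the maintainer-side check are illustrative assumptions, not part of the kernel's actual tooling:

```shell
# Hypothetical commit message showing the trailer layout: a human
# Signed-off-by (DCO certification) plus an Assisted-by disclosure.
msg='mm: fix off-by-one in page accounting

Signed-off-by: Jane Developer <jane@example.com>
Assisted-by: Claude:claude-3-opus coccinelle sparse'

# A sketch of a maintainer-side sanity check: reject any patch that
# discloses AI assistance but lacks a human Signed-off-by trailer.
if printf '%s\n' "$msg" | grep -q '^Assisted-by:' && \
   ! printf '%s\n' "$msg" | grep -q '^Signed-off-by:'; then
    echo "reject: AI-assisted patch without human Signed-off-by"
else
    echo "ok"
fi
```

In day-to-day use a contributor could attach both trailers with `git commit -s --trailer "Assisted-by: ..."`, since `git commit` has supported `--trailer` since Git 2.32.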

The policy establishes that human submitters bear full liability for reviewing AI-generated code, ensuring license compliance, and addressing any bugs or security flaws. This maintains the kernel's rigorous quality standards while acknowledging the growing use of AI in software development.

The transparency requirement emerged from controversy earlier this year, when Nvidia engineer Sasha Levin submitted an AI-generated patch without disclosing its origin. While Levin had reviewed and tested the code, the lack of disclosure prompted discussion among maintainers about the need for formal guidelines.

Maintainers selected "Assisted-by" over alternatives like "Generated-by" to emphasize AI's role as a development aid rather than an autonomous author. The tag format aligns with existing metadata conventions like "Reviewed-by" and "Tested-by" while signaling that AI-assisted patches may warrant additional scrutiny.

Torvalds has emphasized treating AI as "just a tool" rather than making ideological statements about its role in software engineering. The policy aims to avoid both alarmist and revolutionary narratives about AI's impact on development practices.

Despite the disclosure requirements, kernel maintainers are not implementing AI-detection software; they continue to rely on technical expertise and code review to identify problematic submissions. As Torvalds noted, the real challenge lies in credible-looking patches that meet specifications but contain subtle bugs, rather than in obviously low-quality submissions.

The policy's enforcement relies on severe consequences for violations rather than comprehensive detection. Developers who attempt to conceal AI assistance risk permanent exclusion from kernel development and other open-source projects.

## Sources

- [ZDNet](https://www.zdnet.com/article/linus-torvalds-and-maintainers-finalize-ai-policy-for-linux-kernel-developers/)

---
Canonical: https://techandbusiness.org/newswire/2kR9nJxTrWAUbGttswnceh
Retrieved: 2026-04-21T10:31:51.884Z
Publisher: Tech & Business (techandbusiness.org)
