Analysis: Should AI implement core features in critical software?
The community is debating whether to reject AI contributions in open-source development. That is neither realistic nor productive, says Sebastian Springer.
(Image: sommart sombutwanitkul / Shutterstock.com)
- Sebastian Springer
The open-source community and members of Node.js's Technical Steering Committee (TSC) are currently discussing whether to reject pull requests to Node.js if they were created using AI. This is unrealistic for several reasons: AI is already part of the development process, maintainers have no reliable way to verify the origin of code, and there are no objective criteria to clearly distinguish AI assistance from human work. Furthermore, a ban makes little sense, as it would further strain the community's already scarce resources without replacing responsible governance.
Debate about Node.js
Criticism of AI-generated pull requests is not new, but the current case has reached a new dimension and concerns a critical piece of software infrastructure: the Node.js platform forms the technical basis for numerous applications and serves as the foundation for many build and development tools.
The trigger for the current discussion is an extensive pull request with over 21,000 lines of code. The author, Matteo Collina, an experienced Node.js contributor, has disclosed that he partially developed the feature with Claude Code but thoroughly reviewed the generated code. In his blog post, he explains that he focused on the architecture, API design, and code review, while leaving the tedious writing work, such as implementing method variations, tests, and documentation, to the AI.
The feature is not a cosmetic change or additional tests, but a wholly new core feature: VFS, a virtual file system that allows files and modules to be loaded directly from memory instead of the real file system. This feature deeply affects the file system module and the module system of Node.js itself. To put it bluntly, the question arises: Is AI allowed to implement core features of a critical open-source project?
Handling the AI Flood and its Risks
There is no simple answer to this. However, the case clearly shows one of the fundamental problems that many open-source projects are currently facing. Many projects are being flooded with AI-generated contributions. These range from small spelling corrections in documentation to profound architectural features. Maintainers are increasingly faced with the decision of whether to allow AI-generated code in principle or to ban it. Besides the obvious advantages such as increased efficiency and a lower barrier to entry for new contributors, there are several serious concerns.
The first concerns copyright: purely AI-generated code is generally not eligible for copyright protection, whereas for AI-assisted code it depends on the individual case. There is also a risk that models accidentally reproduce proprietary code, which can lead to copyright infringements. This uncertainty leads some maintainers to want to reject AI contributions outright.
Secondly, the quality of the code is viewed critically. AI agents tend to produce a lot of code. Both authors and maintainers must ensure that the code meets the project's quality and architectural standards. Static analysis can only cover part of this. Aspects related to architecture usually need to be checked manually. Especially with extensive contributions, the risk increases that suboptimal or difficult-to-maintain solutions will enter the code.
Thirdly, the effort for maintainers increases. Due to the volume of generated code, the focus of many maintainers shifts from writing code to reviewing it; for some, this has become so overwhelming that they do little else but code reviews. Many open-source projects also face contributions that are submitted automatically, with hardly any human involvement. In this flood of AI-generated contributions, valuable bug fixes or feature implementations can easily get lost.
Despite these risks, a general ban on AI-generated code is hardly enforceable in practice. Many developers use Copilot, Cursor, and other tools, and to varying degrees, from simple inline completions to agent-based coding. This raises the question of where the line runs between a smart feature of the development environment and AI-assisted coding, between "human" and "AI-assisted", and how a violation should even be detected. Especially with smaller contributions, it is practically impossible to determine whether the code originates from an AI.
The Linux Foundation takes a pragmatic approach here. According to its policy, AI-generated code is generally permitted. However, the terms of use of the AI tool must not contain any restrictions that contradict the project's license. The AI-generated code must also not cause copyright infringements; the use of the generated code must be permitted. The principle is that AI can assist, but the human remains the responsible author.
Transparency is one of the fundamental principles in the responsible use of generative AI. This applies not only to the development of such systems but also to their use: Developers should disclose if they have developed code using AI tools. Some open-source projects have included corresponding disclosure requirements in their contribution guidelines, for example, the terminal emulator Ghostty. The web framework Django has integrated an AI Assistance Disclosure into the official pull request template.
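Such a disclosure requirement can take the form of a short section in a project's pull request template. The following fragment is a hypothetical illustration of the practice; the wording is not taken from Django's or Ghostty's actual templates:

```markdown
<!-- Hypothetical "AI Assistance Disclosure" section for a pull request
     template; wording is illustrative, not quoted from any project. -->
## AI Assistance Disclosure

- [ ] No AI tools were used for this contribution.
- [ ] AI tools were used (describe below).

**Tools used (e.g. Copilot, Claude Code) and scope of use:**

**I have reviewed and tested all generated code and take responsibility
for it as its author.**
```

The point of such a section is not to gatekeep but to make the author's responsibility explicit and give reviewers context for how closely to scrutinize the change.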
Reliable Guidelines Instead of AI Ban
The discussion about how to deal with AI is important for every project, including in open source, and it will not end. Instead, projects must continuously engage with the topic, because the tools and models keep evolving: their quality is constantly improving, and so is the community's experience with them and with how to use them.
Reliable guidelines are needed to protect projects and their maintainers. A general ban on AI is unrealistic and counterproductive; it reads more like a reflexive rejection of new technology. It is more sensible to clearly define responsibility, demand transparency, and strengthen quality assurance. This, however, is not only the task of the maintainers, who are already suffering under the load, but of every developer who wants to contribute to open-source projects.
(olb)