AI Slop or better code: GCC working group for AI guidelines launched
The Working Group for GCC AI Policy is to determine to what extent contributors may use AI tools when developing the GNU Compiler Collection.
The GNU Project has launched a working group for the GNU Compiler Collection (GCC) to establish guidelines for the use of AI in the further development of the project.
Some contributors want to know whether they are allowed to use AI to create new code for the project. The Working Group for GCC AI Policy is led by Jonathan Wakely, who works at Red Hat.
The "AI" in the group's name stands for any form of AI support, including Large Language Models (LLMs) and Small Language Models (SLMs).
Between useful AI programming assistance and AI Slop
Numerous open-source and free-software projects complain about a growing burden of AI Slop. AI coding tools such as Claude Code or GitHub Copilot provide good support for experienced developers, but they also enable people without programming knowledge to quickly produce seemingly working code. This code, however, often fails to meet a project's requirements or even contains vulnerabilities.
In March, the Linux Foundation raised 12.5 million US dollars from Anthropic, AWS, GitHub, Google, Google DeepMind, Microsoft, and OpenAI to help maintainers of open-source projects cope with the sharply increasing volume of AI-generated code.
According to the wiki page of the Working Group for GCC AI Policy, the GCC project has already received some code submissions that were created partly or entirely with LLMs. The working group now aims to produce a first preliminary draft of an AI policy within three months at most, which the GCC Steering Committee will then review and adopt for the project.
(rme)