AI regulation: Code of Practice for generative AI models presented

After almost a year, work on the Code of Practice for generative AI models has been completed. Providers are not obliged to follow its rules.

(Image: European flag with map of Europe, -strizh-/Shutterstock.com)


The EU already acknowledged in its AI regulation, the AI Act, that generative AI models can cause problems. However, the Commission, European Parliament and member states have largely outsourced the question of how to deal with these problems to the developers and deploying organizations: training, testing and evaluation must be documented, and risks must be adequately addressed.

What exactly these risks are and how they are to be mitigated was deliberately left open: the legal text was considered unsuitable for such detail, given the variety of models and problems. Instead, codes of practice, or providers' compliance with comparable standards, are meant to contain the biggest concerns around generative AI models. Now, after a good six months of intensive discussions, the central Code of Practice has been presented, a few weeks behind schedule.

The Code of Practice is divided into three areas: transparency, copyright and, as a third part, safety and security. In a Word document with multiple-choice and free-text fields, providers are to explain in a straightforward manner how, when and with which methods, data sets and resource consumption they created their models. Such documentation is a mandatory requirement for operating in the EU from the second half of August 2025. The copyright chapter, in turn, contains voluntary commitments that GPAI providers would accept with their signature. These include not crawling piracy sites for training data, respecting rights reservations within the meaning of the DSM Copyright Directive, and preventing rights-infringing outputs as far as possible.


The most difficult part to implement in practice, however, concerns mitigating the systemic risks that depend on the respective model, its training data and its capabilities and restrictions, as required by Article 55 of the AI Regulation. The Code of Practice now provides relatively concrete guidance on what types of risks may be involved and how they can be handled professionally. The range is wide: from artificially generated nude images or pornography based on photos of real people, to threats to national security, discriminatory content, inaccurate health advice and radicalization chatbots, anything is conceivable. The possible use of models to develop NBC weapons or digital attack tools must also be examined. The Code of Practice primarily offers a reference model for dealing with all of this.

The Code of Practice is not formally binding on any provider. At the same time, it does not effectively protect any signatory from one day being held liable if the measures turn out to have been inadequate. The code, which was developed by independent scientists on behalf of the EU Commission in an extensive participation process, has recently been the subject of intense debate.

Susanne Dehmel, member of the management board of the German IT association Bitkom, is only partially satisfied with the result. She is particularly critical of the fact that providers must constantly be on the lookout for new risks: "Together with ambiguously defined fundamental rights risks and social risks, for which there are often hardly any established methods for identification and assessment, this creates new legal uncertainty for European AI providers."

Given the two options, either signing the code and thereby at least demonstrating good faith, or developing everything from scratch and having to defend it as legally compliant, some operators of AI models can nevertheless be expected to agree to the variant now presented.

EU Commission Vice-President Henna Virkkunen sees the Code of Practice as an "important step towards making available in Europe the most advanced AI models, which are not only innovative, but also safe and transparent". She accordingly called on companies to voluntarily sign up to the code. However, an important piece of work from the EU Commission is still missing: the so-called guidelines, which are intended to define what counts as AI with a general or unspecified purpose, i.e. GPAI within the meaning of the AI Regulation. These, too, are expected to be published in the near future.

Nevertheless, companies that do not yet comply with the rules have nothing to fear for now. With or without the code, sanctions for these AI Act obligations are not planned until August 2026 at the earliest. And in many member states, including Germany, the accompanying legislation needed for the national supervisory structures has still not been passed. It is already clear that the German accompanying law will not be in place in time for the next set of AI Act rules taking effect in August 2025. The Ministry of Digital Affairs, which is now responsible, wants to continue working on it, but recently announced that after the change of government it can no longer be finished in time.

Europe has delivered its part on time, and now the German government must also pick up the pace, says Green MP Rebecca Lehnhardt: "Delays or even a watering-down of the obligations would only fuel legal uncertainty and mistrust." There has recently been repeated speculation about a partial postponement, which some member states and companies would have welcomed. With today's publication of the Code of Practice, however, a general postponement has become even less likely; at best, individual special provisions that only become relevant later could now be discussed again.

(nie)


This article was originally published in German. It was translated with technical assistance and editorially reviewed before publication.