Swiss media industry adopts AI code
In Switzerland, media associations, companies, and public broadcasters have agreed on binding rules for dealing with AI.
The Swiss media industry is adopting a binding code of conduct for the responsible use of Artificial Intelligence (AI). A broad alliance of associations and companies presented the self-regulation instrument on Thursday as part of the Swiss Media Forum in Lucerne. The new rules are to be implemented by the end of the year.
The code was developed by the Swiss Media Publishers Association (VSM) together with the Swiss Broadcasting Corporation (SRG SSR) and the news agency Keystone-SDA. It is intended to strengthen public trust in the media. In parallel, the Institute for Research in Advertising and Media (WEMF) is introducing a “Responsible AI” audit and a corresponding certificate for compliance with the standards.
“Trust is the most valuable asset”
“Trust is the most valuable asset of the media,” explains VSM President Andrea Masüger. “The rapid development of AI presents the media industry with major challenges and at the same time opens up opportunities.” The companies are setting up “AI reporting offices” where anyone can report a violation. In addition, there will be an independent ombudsman's office, which is to publish a report annually.
The code is based on the Council of Europe's Convention on Artificial Intelligence, which is pending ratification by the Federal Council, and is designed as an instrument of self-regulation. The code focuses on four principles: “The AI code is based on user knowledge, protection of democratic processes, data protection, and transparency,” explains Keystone CEO Hanspeter Kellermüller.
This means, among other things, that employees of media companies who use AI systems or process their results will be trained. Editorial content and confidential data are to be particularly protected when using AI tools. Media companies must inform the public about how they use AI systems, for example on an information page on their website.
Furthermore, the AI code introduces binding labeling requirements: content (text, images, audio) that is entirely AI-generated or published without human review must, as a rule, be recognizable as such to the public. Content created or processed with AI systems must be appropriately checked for accuracy and labeled where necessary.
In addition, there is a labeling requirement for all AI systems (such as chatbots) that interact with users and could be mistaken for humans.
First approaches in Germany
In Germany, there is no such industry-wide code yet. In a rare show of unity, however, several media organizations and television broadcasters are calling for clear rules on AI and copyright. In a declaration dated April 21, ARD and ZDF, together with the Federal Association of Digital Publishers and Newspaper Publishers (BDZV), the Media Association of the Free Press (MVFP), and the Association of Private Media (Vaunet), urged politicians to impose stricter rules on AI providers and large technology platforms.
In January, the German public broadcasters agreed on a joint AI code, intended, they stated, to reconcile the possibilities of AI with the public service mandate and shared values. ARD, ZDF, and Deutschlandradio are relying on a "human in the loop" approach: a human always bears editorial responsibility. In addition, the code commits the broadcasters to transparency and clear labeling of AI content.
AI Fiasco at ZDF
That implementation is not yet running smoothly was demonstrated by ZDF just a few weeks later: the scandal surrounding an AI-generated film snippet in a heute journal report triggered a nationwide debate and led to the dismissal of the US correspondent, whom ZDF held responsible for the error.
As early as November 2023, the German Journalists' Association (DJV) signed the "Paris Charter on AI and Journalism," whose ten principles commit signatories to responsibly safeguarding "trustworthy news and media in the age of AI."
In November 2025, the members of the European Federation of Journalists (EFJ), which also includes German journalists' associations, decided to “advocate for an AI future” that secures journalistic ethics and the rights of authors, and guarantees editorial independence.
At least one essential component of the various self-regulation approaches, the labeling requirement for AI content, will soon apply across Europe: from August 2, 2026, when the transparency obligations of the EU Artificial Intelligence Act (EU AI Act) take effect, such content must be clearly identified. China has already gone further.
(nen)