AI Transparency: EU Commission Specifies Rules Against Digital Deception
The EU Commission has published guidelines for the AI Act that spell out labeling requirements for chatbots and AI-generated content.
By publishing detailed draft guidelines on Article 50 of the AI Act on Friday, the EU Commission aims to bring transparency to automated interactions and artificially generated content. The guidelines distinguish four central categories, each subject to specific transparency obligations.
The first category covers interactive AI systems such as voice assistants and chatbots. According to the Commission's plans, these must be designed so that users are unambiguously informed that they are dealing with an artificial counterpart. Providers may decide for themselves how to do this, as long as children and other particularly vulnerable groups are also effectively protected.
Labeling Obligation for AI Content
A second pillar concerns the creation of artificial images, videos, or texts. In the future, these must carry a machine-readable marking and also be recognizably labeled as artificially generated or manipulated. The third category subjects the use of emotion recognition and biometric categorization to strict obligations to inform those affected.
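Article 50 does not prescribe a particular marking technique; industry standards such as C2PA content credentials are one candidate for the machine-readable layer. As a purely illustrative sketch, the field names and schema below are hypothetical and not taken from any standard, a provenance manifest could declare content as AI-generated like this:

```python
import json


def make_ai_label(generator: str, manipulated: bool) -> str:
    """Build a minimal machine-readable provenance manifest.

    The schema here is hypothetical and for illustration only;
    real deployments would follow a standard such as C2PA.
    """
    manifest = {
        "ai_generated": True,       # the core Article 50 disclosure
        "manipulated": manipulated, # e.g. a deepfake edit of real footage
        "generator": generator,     # which system produced the content
    }
    return json.dumps(manifest, sort_keys=True)


def is_labeled_ai_content(manifest_json: str) -> bool:
    """Check whether a manifest declares content as AI-generated."""
    try:
        data = json.loads(manifest_json)
    except ValueError:
        return False
    return data.get("ai_generated") is True


label = make_ai_label("example-model", manipulated=False)
print(is_labeled_ai_content(label))  # → True
```

The point of a machine-readable marking is the second function: platforms and tools that did not create the content can still detect and surface the label automatically.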
The fourth category is currently drawing particular attention: deepfakes and AI-generated texts on topics of public interest must, according to the draft, be clearly declared as such. An exception applies when they obviously serve artistic or satirical purposes. EU lawmakers already agreed last week on a ban on AI applications that produce sexualized deepfakes ("nudifier apps").
The Commission also plans practical exemptions so as not to unnecessarily restrict innovation and private freedoms. Purely assistive functions such as automatic grammar correction, for example, remain exempt from the strict rules as long as they do not "substantially alter" the content.
A “purely personal, non-professional activity” is to remain exempt from the obligations. Anyone who merely sends an AI-generated Christmas card within their private circle does not have to label it. However, as soon as privately created content that “can influence political opinion” is distributed on social platforms, the labeling obligation applies.
Deadlines and Industry Cooperation
The guidelines are intended to help affected companies and authorities comply with the AI regulation in a "coherent, effective and uniform manner." Since the transparency obligations only become binding on August 2, 2026, those affected still have some time for technical adjustments.
The Commission emphasizes shared responsibility in the information ecosystem: it encourages even platforms that do not create AI content themselves to preserve existing labels, so that users are not deceived.
Interested parties have the option until June 3 to express their views on the proposal as part of a consultation. The final version of the guidelines is expected shortly thereafter.
The Commission intends to establish an effective instrument against the erosion of truth in the digital space. In the future, citizens should always know whether they are communicating with an algorithm and whether a spectacular video actually corresponds to reality. By tying the initiative closely to the Code of Conduct for AI providers, the Commission also underscores the holistic approach of European regulation, which is intended to deliver both legal clarity and technological feasibility.
(wpl)