Expert: EU Commission wants an "unlimited special legal zone" for AI
In a legal analysis for consumer advocates, experts warn of massive protection gaps due to the planned "digital EU omnibus". Big Tech is favored.
With the planned "digital omnibus" package, the EU Commission promises a breakthrough against bureaucracy. Resistance to the initiative, however, is steadily growing. In an expert opinion commissioned by the Federation of German Consumer Organisations (vzbv), legal experts from the law firm Spirit Legal are now urgently warning that the draft represents a systematic break with the principles of the General Data Protection Regulation (GDPR) and endangers the privacy of hundreds of millions of consumers.
At the center of the criticism is the planned Article 88c, which provides for special relief for data processing in the context of Artificial Intelligence (AI). The experts Peter Hense and David Wagner warn here of an "unlimited special legal zone". Since the term "AI system" is extremely broadly defined, companies could in the future declare almost any automated data processing as AI-relevant in order to evade strict data protection rules. This would replace the technology-neutral logic of the GDPR with a technology-specific privilege that primarily benefits service providers.
The lawyers consider the planned relaxation in the handling of sensitive data such as health information or political views to be alarming. The draft suggests that processing such data becomes more justifiable the larger the data volumes are. This reverses the principle of data minimization: mass data extraction is rewarded as long as it serves to train AI models. The experts see this as a dangerous privilege for Big Tech corporations.
Digital sovereignty of the young generation
The authors also criticize that essential protective mechanisms are merely shifted to the legally non-binding recitals. One example is technical opt-out procedures with which users can object to the use of their data. Without anchoring in the binding legal text, supervisory authorities lack the means to effectively sanction violations. Especially with web scraping, data of individuals would be collected who often have no opportunity to even become aware of their right of objection.
To counteract these negative developments, the experts propose a specific legal basis for AI training. Companies should only be allowed to access personal information if they can prove that their goal cannot be achieved with anonymized or synthetic data. Furthermore, it must be ensured that AI models do not reproduce personal information in their answers ("Data Leakage"). Strict technical standards are therefore necessary already in the training process.
A chapter is dedicated to the protection of vulnerable groups. Since minors frequently cannot grasp the implications of data processing for AI, the authors advocate for explicit parental consent. In addition, individuals reaching the age of majority should receive an unconditional right to prohibit the further use of their data in existing models. Without such guardrails, the digital sovereignty of the next generation risks being permanently lost.
Loss of trust as an economic risk
According to vzbv board member Ramona Pop, the political dimension of these findings is enormous. She warns that the Commission, under the guise of innovation, primarily wants to issue a free pass to US platforms. Big Tech could easily exploit legal grey areas, while European companies and consumers would be left behind. True legal certainty can only be achieved through clear rules. Brussels, on the other hand, is proposing vague exceptions that would have to be judicially clarified over years.
Current results of a representative online survey for the vzbv demonstrate that data protection is not a hindrance but an economic factor. According to the survey, trust is the fundamental prerequisite for using digital services for 87 percent of consumers. The GDPR serves as an important anchor: over 60 percent of respondents are more likely to trust companies if they demonstrably comply with European regulations. A dilution of these standards thus also risks the social acceptance of new technologies.
The digital omnibus will now be discussed in the EU Council and Parliament, where the objections from civil society can hardly be ignored. The package is suspected of being primarily the result of massive pressure from the US government rather than representing European citizen and economic interests. Critics say the rights of data subjects are being weakened under the guise of "simplifications," and that the Commission wants to issue AI companies a blank check to extract European data.
(nie)