EU data protection guidance on AI: ban on ChatGPT and the like not off the table
Data protection authorities have their work cut out for them with AI models. Activists are stepping up the pressure, arguing that none of the major AI players comply with the GDPR
Large AI models are trained on masses of personal information without the consent of those affected, which is a major problem under data protection law. The European Data Protection Board (EDPB) has now issued an opinion on artificial intelligence (AI) models in light of the General Data Protection Regulation (GDPR). This marks the start of the actual review work by the national supervisory authorities.
The EU data protection commissioners have set out a framework and developed a three-step test for lawful AI solutions. Civil society organizations and various industry associations are left wondering how the European Economic Area will now proceed with large language models trained on masses of personal information, and with the assistants and bots built on them. Given the many vague passages in the paper, it remains to be seen how the data protection authorities will decide. The EDPB does not rule out bans on unlawfully created AI models or applications. At the same time, however, it has raised the possibility of remedial measures for their use, which could be technical or organizational in nature.
Data protection activists are now increasing the pressure on the supervisory authorities. "Essentially, the EDPB is saying: if you comply with the law, everything is fine," the civil rights organization Noyb (none of your business), founded by Max Schrems, told Euractiv. "But as far as we know, none of the big players in the AI scene are complying with the GDPR." Privacy International made a submission to the EDPB last week, stating that models such as GPT, Gemini or Claude have been "trained without sufficient legal basis" using personal information and are unable to protect the rights of data subjects.
The Italian data protection authority Garante has already had ChatGPT temporarily blocked once. One of the reasons it gave was that the mass storage and use of personal data for "training purposes" was not transparent and did not comply with the GDPR. The Garante is now likely to reopen the case in line with the EDPB requirements. Its French counterpart, the CNIL, is already endeavouring to "finalize the EDPB recommendations and ensure the coherence of its work with this first harmonized European position". The main focus will be on web scraping, i.e. the mass extraction of data from more or less open online sources. The EDPB itself also wants to continue working on this point.
Zuckerberg saddened by the EU's AI backlog
The data protection commissioners of Baden-Württemberg and Rhineland-Palatinate, Tobias Keber and Dieter Kugelmann, stated in an initial reaction to the EDPB decision: "The opinion does not make any statements on the permissibility of specific AI models that are already on the market." Rather, the board has established "guidelines for a data protection review of AI systems in individual cases and for their design". In principle, this is an important "step towards legal certainty for developers and users of AI systems, as well as for people whose data is processed in this context".
Deputy Federal Data Protection Commissioner Andreas Hartl emphasized that the EDPB opinion enables "responsible AI". Lawmakers are also called upon: "We would also like to see the clearest possible legal regulations on when training data may be processed." The German Digital Industry Association (BVDW) was hardly enthusiastic: the EDPB had created "little clarity and orientation". Interpreting the opinion and drawing the necessary distinctions is complex and difficult. "Across more than 36 pages, essential questions remain unanswered, which creates more legal uncertainty for developers and users of AI." There is a lack of appropriate and technically feasible measures.
Meta CEO Mark Zuckerberg is "saddened that at this point I basically have to tell our teams to roll out our new AI advances everywhere except the EU". He was responding to a comment by Meta chief lobbyist Nick Clegg that the work of EU regulators was "frustratingly" slow. Clegg appealed to the national supervisory authorities to apply the new principles "quickly, pragmatically and transparently", otherwise the hoped-for AI boost in the EU would not materialize.
(ds)