AI and the GDPR: EU data protection authorities agree on a common line
The EU data protection authorities have published their long-awaited position on the use of personal data for the development and use of AI models.
The European Data Protection Board (EDPB) is not placing any major obstacles in the way of the development and use of artificial intelligence (AI) models. This is according to an opinion the board published on Wednesday on the regulation of AI with regard to the General Data Protection Regulation (GDPR).
According to the data protection experts, Meta, Google, OpenAI & Co. can in principle invoke a "legitimate interest" as the legal basis for the processing of personal data by AI models. However, the EDPB links this approval to a number of conditions.
Three-step test
The national data protection authorities are to use a three-step test to assess whether a legitimate interest exists. The first step is to check whether the interest pursued by the data processing is legitimate. This is followed by a necessity test to determine whether the processing is actually required to achieve that purpose. Finally, the fundamental rights of the data subjects must be weighed against the interests of the AI providers.
With regard to the balancing of fundamental rights, the EDPB emphasizes that "specific risks" to civil rights could arise in the development or deployment phase of AI models. In order to assess such effects, supervisory authorities should take into account "the nature of the data processed by the models", the context and "the possible further consequences of the processing". In principle, "the specific circumstances of the individual case" must be taken into account.
As an example, the opinion cites a voice assistant designed to help users improve their cyber security. Such services could be beneficial to individuals and may rely on a legitimate interest. However, this only applies if the processing is strictly necessary and the rights of all parties involved are properly balanced.
Clarifications on anonymization
If unlawfully processed personal information was used in the development of an AI model, the EDPB could prohibit its use altogether. The exception: the data has been properly anonymized. The board also sets benchmarks for this: for anonymization to hold, it must be very unlikely that individuals can be "directly or indirectly identified". It must also be ensured that such personal information cannot be extracted from the model through queries.
At the urging of the Irish supervisory authority (DPC), the EDPB set up a task force on ChatGPT in mid-2023 in response to a brief ban on the system by the Italian data protection authority. With the joint statement, the data protection authorities want to ensure uniform enforcement of the law across the EU.
"We must ensure that these innovations are carried out ethically and securely and that everyone benefits from them," emphasized EDSA Chairwoman Anu Talus. The IT association CCIA welcomed the explanations on legitimate interest. They are "an important step towards greater legal certainty".
(vbr)