Berlin Data Protection Authority: AI chatbots drive sharp increase in complaints

AI chatbots contributed to a massive wave of complaints to the Berlin Commissioner for Data Protection in 2025 – in many cases, they helped complainants formulate their submissions.

Young man with an AI device (Image: Fabio Principe/Shutterstock.com)


The case numbers for 2025 at the office of the Berlin Commissioner for Data Protection, Meike Kamp, are alarmingly high. "From January to November 2025 inclusive, the authority received 8,436 submissions. These include 2,644 formal complaints and 5,772 requests for advice from affected individuals, for example, on how to assert their rights to access or deletion of data," the authority stated. The figures for December are not yet included. "Compared to 2024, this represents an increase of around 50 percent," says Kamp.

The main areas of complaint are the banking and financial sector, debt collection agencies, the compulsory use of mobile apps, as well as video surveillance and the consequences of identity theft. Kamp attributes the sharp increase partly to growing public awareness and advancing digitalization. A main driver, however, is AI chatbots: "We are receiving more and more submissions that were evidently drafted with the help of AI chatbots. When people ask who can help with data protection issues, AI makes our authority's services more visible," explains the data protection officer.

However, the use of AI in drafting complaints also has a dark side. Kamp warns of false expectations raised by chatbot answers. "We have already seen that the statements and, above all, the assessments of the legal situation are often incomplete or simply wrong," says Kamp. "In some cases, we have even been confronted with court rulings invented by AI or legal literature that does not exist." Kamp therefore advises users to always examine the output of AI applications critically.


While AI can help citizens as a tool for enforcing their rights, it is itself the subject of data protection concerns. Philosopher Rainer MĂĽhlhoff sees AI not so much as a tool but primarily as an instrument of power that establishes a new form of "predictive power." This ability to predict unknown information about individuals goes far beyond classic data protection concepts.

Concrete examples, such as Meta's practice of using personal information for AI training, illustrate the problem. According to junior professor Paulina Jo Pesch, Meta deliberately makes it complicated and opaque to object to data usage. Even children and adolescents are affected, as age information in social networks is hardly effectively controlled and their data can thus flow into AI training.

(mack)


This article was originally published in German. It was translated with technical assistance and editorially reviewed before publication.