EU backs away from chat control

In its new draft for combating child abuse online, the EU is abandoning chat control. Instead: risk assessments and voluntary measures.

[Image: WhatsApp chat on a smartphone (Tatiana Diuvbanova/Shutterstock.com)]


The European Union is backing away from its efforts to combat child abuse online by means of chat control. A draft proposal presented by the Council of the European Union for further negotiations no longer provides for the highly controversial scanning of the devices of messenger and cloud service users. The outcome of the negotiations had already leaked beforehand; now it has been officially confirmed. Security experts, however, also view the voluntary measures critically.

After months of wrangling, the Danish Council Presidency had to abandon its plans in November to force messenger services such as WhatsApp, Signal, or Threema to automatically scan private communications. Civil rights activists and privacy advocates protested against the plans, fearing the de facto end of end-to-end encryption and seeing them as a backdoor for further surveillance. Messenger services such as Signal even announced that they would withdraw from the EU if the law passed.

The position now agreed upon focuses instead on risk assessments for online services, voluntary protective measures by providers, and a new EU agency. The transitional regulation permitting voluntary scanning for abuse material, currently limited until April 2026, is to be made permanent.


With the agreement, the trilogue with the EU Parliament can begin; the Parliament had already adopted a significantly more restrictive position in November 2023. MEPs fundamentally rejected mandatory measures and want encrypted communication excluded entirely. Experience shows that trilogues can drag on for years, especially in controversial cases.

In the risk analysis, providers must assess whether their services can be misused to disseminate abuse material or to contact children. Three categories are planned: high, medium, and low risk. Providers in the highest category could be obliged to participate in the development of risk mitigation technologies. In general, providers are to offer reporting functions for users and controls over shared content, and to introduce default privacy settings for children, as some already do.

National authorities are to be empowered to oblige companies to remove and block content. Search engines could also be required to remove objectionable entries from their results. A new EU agency is to support this work: it will process reports from providers, maintain databases, and assist national authorities, while also serving as the interface to Europol and national law enforcement agencies. The draft further provides that companies help victims have material removed, with the agency's support.

(mki)


This article was originally published in German. It was translated with technical assistance and editorially reviewed before publication.