Cybersecurity Experts: Voluntary Chat Control Also Endangers Fundamental Rights

EU states have agreed to retain the option for voluntary scanning of digital messages. Scientists also warn against this approach.


(Image: Tero Vesalainen/Shutterstock.com)


Leading cybersecurity experts, in an open letter to the EU Council of Ministers, are voicing concern about the EU Presidency's new proposal for a regulation against the dissemination of child sexual abuse material, which enshrines the member states' common compromise line on voluntary chat control. The agreement is intended to permanently allow online services such as messenger operators to voluntarily scan private communications.

The scientists explicitly welcome that the mandatory on-device detection of abuse material has been removed from the new draft. This improves the balance between child protection, IT security, and privacy. Nevertheless, they are sounding the alarm that other aspects of the proposal continue to pose significant societal risks without clear added value for child protection.

The central point of criticism is the expanded scope of detection. By referring to the existing voluntary activities of providers such as Facebook, Google, or Microsoft under the ePrivacy Directive, the proposal would reintroduce the option to analyze content beyond images and URLs, the researchers complain. This applies particularly to text and video. It would also become possible to search for newly generated abuse material.

Citing earlier warnings, the experts emphasize that current AI technology is far from precise enough to perform these tasks with the necessary accuracy. False reports are unavoidable due to the inherent limitations of the technology and the context-dependent nature of the behaviors to be identified. Expanding detection to text and video formats will further increase the already very high number of false alarms.

As an example, the signatories point out that the surveillance of text messages could lead to harmless interactions between relatives, close friends, or teenagers being misjudged. Such mass surveillance risks flooding investigators with false accusations and diverting them from pursuing real cases, which would reduce overall protection.

Similarly, the scientists strongly oppose the planned mandatory age verification for services classified as high-risk, such as end-to-end encrypted communication services and app stores. They point out that adding age controls does not necessarily mean additional security, especially if content detection is ineffective.

With currently available technologies, age verification cannot be carried out in a data-protection-compliant manner, as it relies on biometric, behavioral, or contextual information such as browser history. The increasingly used AI methods exhibit high error rates and are biased against certain minorities, posing a disproportionate risk of serious data protection violations and discrimination.


According to the authors, age verification with official identification documents is also disproportionate, as it reveals significantly more information than just the age. Privacy-friendly solutions based on cryptography, in turn, could create dependencies on specific hardware or software and thus discriminate against users who lack the latest technology. Moreover, age controls can easily be circumvented.

Finally, the experts emphasize that even the voluntary application of surveillance technologies on end devices is not a justifiable means of risk mitigation. The potential harm and the potential for abuse are enormous – the benefit unproven. Reporting scan results to third parties, such as law enforcement agencies, could lead to the service provider no longer being able to claim end-to-end encryption. Any communication whose content can be scanned and reported is no longer considered secure or private, thus undermining the basis of a resilient digital society.

The 18 international signatories include prominent representatives of IT security and cryptography research. From Germany, Cas Cremers from the CISPA Helmholtz Center for Information Security, Anja Lehmann from the University of Potsdam, Kai Rannenberg from the University of Frankfurt, and Carmela Troncoso from the Max Planck Institute for Security and Privacy are involved.

Last week, Italy also questioned whether the right to user privacy could be sufficiently protected within the framework of voluntary chat control, according to a leaked Council protocol. The government in Rome fears that the instrument could be extended to other offenses. Poland also reserved the right to further examination.

The latest draft from the Danish Council Presidency explicitly states: "No provision of this Regulation shall be interpreted as imposing detection obligations on providers." The Permanent Representatives of the EU states are to approve the proposal next week, with the Justice and Interior Ministers following in December.

Former MEP and civil rights activist Patrick Breyer speaks of a partial success: "We have prevented mandatory chat control through the back door. But anonymization-destroying age controls and 'voluntary' mass scans are still planned." The fight will therefore continue next year.

Together with an abuse victim, Breyer is suing against voluntary chat control. In Germany, such a measure would likely not be applicable, as messaging services are subject to telecommunications secrecy: providers may not obtain knowledge of the content or the detailed circumstances of telecommunications beyond what is technically necessary.

(wpl)


This article was originally published in German. It was translated with technical assistance and editorially reviewed before publication.