Missing Link: Prevention at the Source – Chat Control and Upload Filters
EU plans for "chat control" are highly controversial. The new "compromise": upload filters. Who are the proponents and what do they propose?
In Brussels and the EU member states, a lobbying battle is raging over the draft EU Commission regulation for mass online surveillance, fought over for years under the guise of combating child sexual abuse, and it has split Europe's most powerful interest groups into two camps. On one side, tech companies, large and small, are resisting demands that would endanger their services or saddle them with new liability obligations. Together with civil rights organizations and some political representatives, they are fighting to preserve the confidentiality of communications. On the other side stands an alliance of investigative authorities and organizations that champion children's rights.
Ultimately, this ever more complex dispute is about a new chapter in the Crypto Wars. Alongside data protection advocates, IT associations, the German Child Protection Association, and press organizations are mobilizing against the planned scrutiny of private communication known as chat control. For a while it almost seemed as if Germany's black-red federal government of CDU/CSU and SPD wanted to abandon the previous "Ampel" coalition's no. That would have cleared the way for the adoption of the EU Council's position on the dossier. But then the CDU/CSU and SPD, too, came out against at least "unwarranted" chat control.
The opposing side has ramped up its efforts in parallel. It includes organizations such as the Internet Watch Foundation (IWF), the Canadian Centre for Child Protection (C3P), the International Justice Mission (IJM), ECPAT, the Children's Rights Network, World Vision, Terre des Hommes, Innocence in Danger, the World Childhood Foundation, the Stiftung Digitale Chancen, the Children's Rights Network Germany, and the SafeToNet Foundation. This loose alliance is now trying to inject a supposedly new approach into the "stalemated debate." It advocates a "compromise" that is also supposed to work in end-to-end encrypted (E2EE) services such as WhatsApp, Signal, and Threema.
The Internet Watch Foundation makes its case
The proposal aims to detect and block child sexual abuse material (CSAM) without exporting content from communication applications. "Modern on-device and in-app detection checks content locally without transferring data or breaking encryption," says the IJM, for example. Apple, Meta, and Google already use such methods to protect users from nude images and to detect links to harmful content.
The argument is based on a recently published IWF report. Its authors describe the instrument they propose as "upload prevention": a technically feasible and privacy-friendly way, they state, to block known CSAM before it can be distributed in end-to-end encrypted environments. The term is reminiscent of the notorious upload filters for which the EU legislator, after long debates, cleared the way in the fight against copyright infringement, albeit in restricted form. Those filters, however, operate at the platform level, on services such as YouTube, not on users' end devices.
E2EE is essential for privacy, the IWF concedes. However, criminals are now deliberately choosing end-to-end encrypted platforms because the risk of prosecution there is lower. The introduction of E2EE without accompanying safety measures has led to a drastic decline in CSAM reports, the report argues. For victims, the ongoing circulation of their abuse images means a constant threat and recurring psychological harm.
Direct hit with client-side scanning
The IWF is marketing upload prevention as a "security feature." The process is based on digital fingerprints: a unique hash of the file (image or video) is created on the sender's device and compared with a secure database containing hashes of CSAM already confirmed as illegal by experts. If there is a match, the upload is blocked at the source; otherwise, the file is released.
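What such a check could look like in code is sketched below. This is a minimal illustration rather than the IWF's actual implementation: the hash list, the function names, and the use of a cryptographic hash in place of a PhotoDNA-style perceptual hash are all assumptions made for the sake of the example.

```python
import hashlib  # stand-in; deployed systems use perceptual hashes such as PhotoDNA

# Hypothetical list of fingerprints of already-confirmed CSAM, distributed to the
# device by a trusted curator of a verified hash list.
KNOWN_ILLEGAL_HASHES: set[str] = {
    "3f6c2a...",  # placeholder entry, not a real hash
}

def fingerprint(file_bytes: bytes) -> str:
    """Compute a fingerprint of the file locally on the sender's device.

    SHA-256 keeps the sketch self-contained; real systems rely on perceptual
    hashes that also match re-encoded or slightly edited copies of an image.
    """
    return hashlib.sha256(file_bytes).hexdigest()

def upload_allowed(file_bytes: bytes) -> bool:
    """Return False (block at the source) if the fingerprint is on the list."""
    return fingerprint(file_bytes) not in KNOWN_ILLEGAL_HASHES
```

Whether such a list would be stored on the device or queried via a privacy-preserving protocol is a separate design question; the sketch simply assumes a local copy.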
The hash lists must be managed by trusted organizations to ensure their integrity, the paper states. Organizations such as the IWF, the National Center for Missing and Exploited Children (NCMEC), and the C3P maintain such carefully verified directories. In the EU, the planned center for combating CSAM would also be suitable for this purpose. Kerry Smith, CEO of the IWF, warned last week that the current negotiations at Council level are the last chance for political decision-makers in Europe to “integrate these protective measures into everyday life.”
A central point of the IWF's argument is the timing: the file is checked locally on the sender's device before it is encrypted. This is not really new, however. In principle, upload prevention is nothing other than client-side scanning (CSS). Civil rights activists and researchers have long criticized that this amounts to searching every encrypted chat on the end device and intervening in cases of suspected violations.
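The decisive detail is the order of operations: the scan sees the plaintext before the messenger's E2EE layer ever touches it. A hedged sketch of that ordering, reusing the hypothetical upload_allowed check from above, with encrypt and transmit standing in for the real encryption and network layers:

```python
from typing import Callable

def send_attachment(
    file_bytes: bytes,
    recipient_key: bytes,
    encrypt: Callable[[bytes, bytes], bytes],
    transmit: Callable[[bytes], None],
) -> bool:
    """Illustrative send path with client-side scanning.

    The check runs on the plaintext before encryption; only ciphertext ever
    leaves the device, which is why proponents argue encryption is "not broken."
    """
    if not upload_allowed(file_bytes):  # local check on the sender's device
        return False                    # upload blocked "at the source"
    ciphertext = encrypt(file_bytes, recipient_key)
    transmit(ciphertext)
    return True
```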
Fear of general surveillance infrastructure
According to CSS opponents, once a local scanning tool has been implemented, a technical infrastructure for surveillance exists. Even if it were initially used "only" for CSAM, governments could extend the function to other content in the future. A mechanism installed on the devices of billions of users is seen as a "backdoor for everyone" and a potential censorship tool.
True end-to-end encryption also rests on the trust that the app on the user's device never scans content for the service provider without the user's knowledge or consent. Mandatory scrutiny undermines this reliability and the integrity of the system. Cryptographers warn that a scanned message can no longer be considered truly end-to-end encrypted.
Furthermore, there is always a risk of false positives with hash lists. Such a false positive could lead to the blocking of a legitimate image and, in the worst case, to the unfounded prosecution of a user. In addition, there is the objection that the hash lists themselves could become targets of state or criminal manipulation. A technology for mandatory content verification on the user's device is equivalent to a “surveillance key” that could be used universally.
Unsubstantiated promises
Upload prevention "sounds good at first," Bremen-based information law expert Dennis-Kenji Kipker tells heise online. However, it presupposes that the content in question is already on certain "reporting lists"; the spread of new material is not prevented. Especially at a time when more and more AI-generated material is circulating, its effectiveness is questionable. Moreover, such a method does not involve law enforcement, the professor points out. It would be more important to "initiate real alternatives for digital child protection" instead of "following unsubstantiated technical promises." In general, CSS can compromise protected digital communication.
But the other side is not letting up and points to successes based on reports received so far: "IJM has already been able to bring over 1,300 victims to safety," a spokesperson for the protection organization emphasizes to heise online. "The cases clearly document how important it is for safety-by-design to be implemented on devices and platforms."
AI systems such as Thorn's Safer and Google's Content Safety API achieve over 90 percent accuracy with only a 0.1 percent false alarm rate, the IJM spokesperson highlights. Even that would be unacceptable if every false positive led to a report. The situation is different, however, if "only" the transmission is prevented. Even the "imperfect" latest proposal from the Danish Council presidency contained "detailed mechanisms" to prevent governments from extending the approach to other undesirable content.
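To put that rate into perspective, a back-of-the-envelope calculation is sketched below; the daily volume of scanned items is purely an assumption for illustration, not a figure from the article or from the IJM.

```python
# Illustrative base-rate arithmetic; the volume is an assumed round number.
daily_scanned_items = 10_000_000_000  # assumed images/videos checked per day across platforms
false_alarm_rate = 0.001              # 0.1 percent, as cited by the IJM spokesperson

false_alarms_per_day = daily_scanned_items * false_alarm_rate
print(f"{false_alarms_per_day:,.0f} potential false alarms per day")  # 10,000,000 in this scenario
```

At that scale, the difference between merely blocking a transmission and automatically generating a report to authorities carries considerable weight.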
The Commissioner for Child Sexual Abuse Issues weighs in
The organization Thorn, co-founded by Hollywood star Ashton Kutcher, was long considered a driving lobby force behind the draft regulation. The Commission insisted that this US foundation “merely provided expertise and did not attempt” to influence the legislative initiative. That is not true, the EU Ombudsman countered last year. The “business strategy for the use of Thorn products” such as the “Safer” filter was also discussed.
Based on Microsoft's PhotoDNA, the program is designed to detect depictions of child sexual abuse by comparing hash values of images and videos with databases of known material. The Brussels executive simply relied on the claims of Thorn and other major US manufacturers of filter solutions about the alleged hit rates, claims which proved to be unsubstantiated.
Meanwhile, Kerstin Claus, the German federal government's Independent Commissioner for Child Sexual Abuse Issues, is outraged by the "framing of the so-called unwarranted chat control." It is a "battle cry" that "stifles any factual debate at its root," she criticized in Politico. The point, she argues, is not to read all citizens' chats but, with judicial authorization, to carry out an automated comparison with known depictions of abuse. At most, this would affect "specific chat groups or partial services that were previously identified as risk areas." And Hany Farid, co-developer of PhotoDNA, continues to strongly advocate the widespread use of the solution.
(emw)