AI Regulation: EU states in favor of biometric surveillance in public spaces

The EU Council has staked out its position for the planned rules on AI. Law enforcement should be able to use automated facial recognition.

(Image: metamorworks / Shutterstock.com)

Law enforcement and border guards should be allowed to use facial recognition and other forms of biometric surveillance in public spaces in many cases. This is the position the EU Council of Ministers adopted on Tuesday on the planned Artificial Intelligence (AI) Regulation.

The "general approach" of the governments of the EU states maintains the fundamental ban on "biometric real-time remote identification in public spaces," as the EU Commission had brought into play a good year and a half ago. According to the Council, however, it has now clarified "for which purposes such use is absolutely necessary for law enforcement purposes and for which law enforcement authorities should therefore be allowed to use such systems by way of exception."

Even in the Commission's original proposal, the actual ban was riddled with various exceptions. According to the Brussels-based government institution, automated facial recognition should be permissible for purposes such as the targeted search for potential crime victims or missing children, the prevention of an imminent terrorist attack, or the recognition and identification of persons who have committed "serious crimes."

The EU Council wants to further expand this list of exceptions. In particular, law enforcement, border control, immigration or asylum authorities are to be allowed to use relevant systems, in accordance with EU or national law, to identify a person, even against his or her will, "who either refuses to be identified during an identity check or is unable to state or prove his or her identity." In addition, according to the member states' position, prisons and border control areas do not fall under the definition of public spaces. Even there, real-time biometric remote identification by officers would thus be allowed.

In the case of manipulated image, audio or video content such as deepfakes, the Commission called for appropriate labeling. However, according to the Council, this should not apply "if the use is authorized by law for the detection, prevention, investigation and prosecution of criminal offenses or if the content is part of an obviously creative, satirical, artistic or fictional work or program."

Furthermore, the EU countries advocate that the areas of national security and defense as well as general military purposes be excluded from the scope of the envisaged AI law. This should also apply to relevant applications and their results used exclusively for research and development purposes.

To ensure that the definition of an AI system provides sufficiently clear criteria for distinguishing it from simpler software systems, the Council text narrows the definition to those techniques developed using machine learning and "logic- and knowledge-based approaches."

The proposal generally follows a risk-based approach. The aim is to establish a single, horizontal legal framework for AI. Regarding high-risk practices, member states want to extend the ban on the use of AI for social scoring to private actors. In addition, the provision prohibiting the use of AI systems that exploit weaknesses of a certain group of people is also intended to apply to people who are vulnerable due to their social or economic situation.

The Council also wants to ensure that AI systems with many possible purposes such as image or speech recognition ("general purpose AI") are covered, especially if they are integrated into a high-risk application. The Commission is to draft a delegated act on this, but first conduct a consultation and impact assessment. This would take into account the "specific characteristics of these systems and the associated value chain, technical feasibility and market and technological developments."

The EU countries also want innovations in the field of AI systems to be promoted more strongly. For example, national bodies should be able to set up so-called "regulatory sandboxes" more easily, in which participants can try out technologies largely without restriction. Where sanctions are imminent, more proportionate upper limits for fines are envisaged for small and medium-sized enterprises and start-ups. With this position, the Council can now enter into negotiations with the EU Parliament once the latter has defined its own stance.

The majority of member states remained unmoved by calls for a general ban on biometric mass surveillance, for example from civil society, research, the German government or data protection supervisory authorities. At the beginning of November, more than 20 civil society organizations such as AlgorithmWatch, Amnesty, Chaos Computer Club and Digitalcourage called on the German government to keep the promise made in its coalition agreement.

Patrick Breyer, Pirate Party MEP, warned that the text would open the door to mass biometric surveillance in public spaces: The proposal "would justify the permanent and widespread use of facial surveillance to search for thousands of 'victims', 'threats' and suspects of 'serious crimes' who are being sought at any given time." The IT association Bitkom, on the other hand, sees decisive improvements in the Council paper compared to the Commission's proposal. However, there is still a danger "that too strong a focus on risks will slow down AI development in Europe."

(olb)