AI Act: countdown to the implementation of the AI regulation has started

The AI Regulation has been published in the Official Journal of the EU. It will enter into force on August 1, triggering a series of implementation deadlines.

This article was originally published in German and has been automatically translated.

The draft of the regulation on artificial intelligence (AI) adopted by the EU Parliament in mid-March ran to more than 500 pages. The official final version published in the Official Journal of the EU on Friday, however, is considerably more compact at 144 pages including annexes. It is now also clear that the AI Act will enter into force twenty days after publication, on August 1, 2024.

This marks the start of the implementation period for the regulation, with which the EU aims to promote investment in safe and trustworthy AI systems. From February 2, 2025, for example, the bans on certain AI practices will apply, such as social scoring, which automatically assesses social behavior and can result in exclusion from public services.

From February, the use of biometric remote surveillance systems will also be prohibited, for example automated facial recognition in public spaces for law enforcement purposes. However, the compromise reached in December already opened back doors for the police to use such technologies. The EU Council also removed the previously agreed list of criminal offenses and the requirement for judicial authorization. Germany's traffic light coalition, which opposes biometric mass surveillance, nevertheless does not intend to change course.

By August 2, 2025, the member states must enact implementing laws in which, among other things, they designate general market surveillance authorities to enforce the regulation. A dispute has broken out over this in Germany. The Conference of Federal and State Data Protection Authorities (DSK) considers itself predestined for the task. At a recent hearing in the Bundestag, however, experts such as Robert Kilian from the Federal Association of Artificial Intelligence Companies argued in favor of the Federal Network Agency and, in the medium term, a separate higher federal authority for digital matters. Only this, they argued, would ensure a "uniform supervisory density".

In general, market surveillance authorities must be independent, impartial and unbiased according to the AI Act to guarantee the objectivity of their activities and ensure the application and implementation of the regulations. Adequate technical, financial and human resources are also required, as well as an appropriate infrastructure to carry out the tasks effectively. Competent authorities must have a sufficient number of employees at all times whose skills and expertise include a comprehensive understanding of AI technologies and, in particular, the relevant data and product security requirements.

The data protection commissioners of Hamburg and Baden-Württemberg pointed out on Friday that "the data protection supervisory authorities are already responsible for market surveillance for large parts of the high-risk catalog of AI systems". In the areas of law enforcement, judicial administration and migration control, as well as for AI that influences elections, they are already designated as the competent authorities. This also applies to software companies and cloud providers that develop corresponding solutions for these areas.

The telecommunications association VATM emphasized that the industry needs a supervisory authority as a point of contact "whose goal must be to support German companies in international competition". What is needed is "a central authority that not only places data protection at the center of its work, but also focuses on the benefits of innovative AI applications" for citizens. Under no circumstances should AI expertise be scattered across 16 federal states.

(vbr)