FAQ – AI Act comes into force: what you need to know about the AI Regulation

The AI Act comes into force on August 1, 2024, and with it AI becomes regulated in the EU. However, the roadmap for the rules provides for further deadlines.


(Image: Shutterstock/Phonlamai Photo)

This article was originally published in German and has been automatically translated.

There was a long back-and-forth before the AI Act was finalized. Following its publication in the EU Official Journal on July 12, the AI Regulation comes into force today, August 1, 2024. However, this does not mean that it immediately applies in full to everyone: a further timetable determines who has to comply with which obligations and regulations, and when.

The law applies directly in all EU member states and in principle covers all AI systems and models. A risk-based approach determines the exact obligations: AI systems are sorted into risk classes, for which different deadlines and rules apply. The currently much-discussed generative AI falls under the term GPAI – General Purpose AI. There are separate regulations for it, since ChatGPT and the like did not yet exist when the AI Act was first drafted; how these applications should be covered was therefore only negotiated later.

Six months after the AI Act comes into force, on February 2, 2025, the first applications – those posing an unacceptable risk – will be banned. These include social scoring, i.e. systems that rate and monitor people's behavior, such as those used in China. Real-time remote biometric identification in publicly accessible areas by law enforcement agencies is also prohibited in principle.

However, there are exceptions, which civil rights activists in particular criticize. Predictive policing targeting individuals is also prohibited, though with distinctions: what is forbidden is acting on the basis of such data. Emotion recognition is likewise banned in the workplace and in educational institutions, with exceptions such as systems that detect fatigue in aircraft pilots. Facial images may not simply be analyzed and evaluated at will.

AI applications that could have a negative impact on people's safety and fundamental rights are classified as high-risk. There is a list of such applications, which can be expanded over time.

High-risk AI systems must adhere to a number of requirements. These relate to the robustness and accuracy of the data on which the systems are based, documentation and transparency obligations, and the need for human oversight. These rules take effect from August 2, 2026, i.e. in two years' time.

Before a high-risk AI system can be placed on the market, it must undergo a conformity assessment that demonstrates it meets these requirements. Providers must also introduce quality and risk management systems.

Most AI systems can be developed and used without being subject to additional obligations – an estimated 80 percent of all AI systems. However, this figure dates back to before ChatGPT and the like; since regulations now apply to GPAI, it could be lower today. Nevertheless, most AI applications will simply continue to be offered, including spam filters in mail programs and algorithms used for search functions. AI has long been used in countless services; a distinction must therefore be made between the currently hyped generative-AI applications and other AI systems.

General-purpose AI, or AI models with a general purpose, is the category to which the large language models belong. Applications based on these models can harbor so-called systemic risks. In an overview of the AI Act, the EU Commission writes that "such powerful models could, for example, cause serious accidents or be misused for far-reaching cyberattacks." Biases produced by an AI model could also negatively affect many people. GPAI will be regulated under the AI Act from August 2, 2025.

Transparency obligations: AI applications that can be used to manipulate people must comply with certain transparency obligations. Users must know when they are communicating with a chatbot, for example.

Providers of large AI models are obliged to provide all the information downstream system providers need to ensure that they comply with the law. In other words, anyone who builds a service on OpenAI's models and thereby becomes a provider themselves must ensure that their service complies with the AI Regulation; OpenAI, in turn, must supply the necessary information.

GPAI providers must ensure that they do not infringe copyright when training their AI models. However, it is not yet entirely clear when that would be the case. Opinions differ on whether training with copyrighted material is permissible; copyright law simply does not yet cover this situation.

In principle, the risk classification follows the intended purpose of an AI system. There will also be product safety requirements. A number of use cases are annexed to the regulation, against which providers can compare their own systems. The Commission is responsible for maintaining this list.

There is also a defined threshold for GPAI models: those trained with a total computing power of more than 10^25 FLOPs are generally deemed to pose systemic risks. This threshold is controversial, however, because a model's size does not necessarily correspond to the risk emanating from it. So far, it probably only covers models on the scale of GPT-4 and presumably Gemini.
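As a rough illustration of how that threshold works, the following sketch checks a model's cumulative training compute against the 10^25 FLOP mark; the model names and compute figures are purely assumed example values, not official or published numbers.

```python
# Minimal sketch of the AI Act's systemic-risk presumption for GPAI models:
# models trained with more than 10^25 FLOPs of cumulative compute are
# generally presumed to pose systemic risk. The model names and compute
# figures below are illustrative assumptions only.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # threshold named in the AI Act

def presumed_systemic_risk(training_flops: float) -> bool:
    """Return True if cumulative training compute exceeds the AI Act threshold."""
    return training_flops > SYSTEMIC_RISK_THRESHOLD_FLOPS

# Hypothetical models with assumed training-compute budgets.
examples = {
    "small open model (assumed)": 3e23,
    "frontier model (assumed)": 2e25,
}

for name, flops in examples.items():
    status = "presumed systemic risk" if presumed_systemic_risk(flops) else "below threshold"
    print(f"{name}: {flops:.0e} FLOPs -> {status}")
```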

Providers of models with systemic risk are required to assess risks, report serious incidents, carry out tests to evaluate the models, and ensure cybersecurity. They must also provide information on the energy consumption of their models. So far, OpenAI, Google and co. have said rather little about this, although it is clear that AI is extremely resource-hungry.

The AI Regulation states that there is a fundamental right to a high level of environmental protection, and providers must work on improving resource efficiency. The Commission monitors whether enough is being done overall. Providers of GPAI models must disclose their energy requirements.

In principle, facial recognition is prohibited for criminal prosecution. However, there are 16 defined criminal offenses for which it is permitted. The targeted search for victims of kidnapping and human trafficking allows the use of biometric recognition software, as do imminent terrorist attacks, illegal trafficking in drugs and weapons, grievous bodily harm, murder, rape and environmental crime.

However, such use requires prior approval from a judicial or administrative authority, and there are exceptions here too. A fundamental rights impact assessment must also be carried out.

Each EU member state must designate a national authority. In Germany, the state data protection authorities would like to take on this role, but the Federal Network Agency could also be given the task; this has not yet been decided. Each member state is to send a representative of this authority to the European Artificial Intelligence Board.

There will also be an advisory forum and a European AI Office. The AI Office is attached to DG Connect, the Directorate-General for Communications Networks, Content and Technology in Brussels, and will be responsible for supervising the GPAI models. It is supported by a scientific panel of independent experts and will also contribute its own expertise: "In addition to drawing up codes of conduct to clarify the regulations, this also includes its role in classifying models with systemic risks and monitoring the effective implementation of and compliance with the regulations."

There are draconian penalties for violations: up to 35 million euros or 7 percent of total worldwide turnover from the previous financial year (whichever is higher) for prohibited practices and violations of data requirements; up to 15 million euros or 3 percent for other violations, including of the GPAI requirements; and up to 7.5 million euros or 1.5 percent for supplying false or misleading information to the competent authorities. Any person can lodge a complaint with the national authority.
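To make the scaling of the top tier concrete, here is a minimal sketch (not legal advice) that computes the upper fine limit as the higher of the fixed cap and the turnover share; the turnover figures are made-up example values.

```python
# Simplified illustration of the upper fine limit for the most serious tier:
# up to 35 million euros or 7 percent of the previous year's total worldwide
# turnover, whichever is higher. Turnover figures are made-up examples.

FIXED_CAP_EUR = 35_000_000
TURNOVER_SHARE = 0.07

def fine_cap_prohibited_practices(annual_turnover_eur: float) -> float:
    """Upper limit of the fine for prohibited practices and data violations."""
    return max(FIXED_CAP_EUR, TURNOVER_SHARE * annual_turnover_eur)

# Illustrative turnovers: 100 million, 1 billion, 50 billion euros.
for turnover in (100e6, 1e9, 50e9):
    cap = fine_cap_prohibited_practices(turnover)
    print(f"turnover {turnover:>15,.0f} EUR -> fine cap {cap:>14,.0f} EUR")
```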

(emw)