Omnibus AI Act: Deadline extension and deepfake ban
High-risk AI systems in the EU will be regulated later than planned. A ban on certain deepfakes is newly on the table.
(Image: Wirestock Images / Shutterstock.com)
As part of the so-called Omnibus package, the EU Commission had proposed regulating high-risk AI systems 16 months later than initially planned. The European Council has now approved the amendment to the AI Act and set out its position. The change is part of the simplification agenda, under which the EU is currently reviewing several existing laws to determine whether they are up to date and workable. In this case, the main issue is the standards and tools for high-risk AI that are still missing and have yet to be developed.
Furthermore, the Council wants to prohibit the generation of content depicting child abuse, intimate situations, and non-consensual sexual acts. "Non-consensual" here refers not only to the depicted act itself, but also to the lack of consent to having such images artificially created.
The trigger for this addition was a flood of images with highly questionable content that users had created with Grok's image generator and posted en masse on X. Grok is the image generator from Elon Musk's company xAI, which also owns X. Beyond that, numerous organizations such as HateAid have long been calling for a ban on so-called face-swap apps, which make it easy to replace the faces of people in pornographic images with those of other people.
Later regulation of high-risk AI
Two further adjustments to the AI regulation concern high-risk AI systems. High-risk means that these AI systems can endanger fundamental rights or even lives. Examples include remote biometric identification, commonly known as facial recognition, as well as the use of AI in critical infrastructure, the justice system, and education. In each case, it is not the use of AI in general that is at issue, but specific tasks: in education, for example, exams may not be graded exclusively by an AI.
The new deadline for high-risk AI systems is August 2, 2028. From then on, rules and obligations will apply to high-risk AI that is integrated into other products. For standalone high-risk AI systems – those that are not part of a larger product – the new deadline is December 2, 2027.
Anyone operating a high-risk AI system must register it in an EU database. The European Council demands that this always be done – even if operators believe their system does not belong there. There has recently been debate over how far operators may decide for themselves how their systems are classified. The Council also reaffirms the principle of strict necessity for the processing of personal data.
The AI Office will remain responsible for the fundamental oversight of general-purpose AI models. Exceptions for which national authorities are responsible are to be listed by the EU office.
The Council calls on the Commission to develop guidelines that minimize the administrative burden for companies using AI systems. Exceptions that have so far applied only to small and medium-sized enterprises should also be extended to small mid-cap companies.
The Council's demands will now be discussed with the EU Parliament.
(emw)