AI regulation: many areas of conflict with existing regulations such as the GDPR
EU states and companies face the challenge of implementing the AI Act in a timely manner. Inconsistencies and overlaps with existing law make this difficult.
" As a horizontal legal framework, the AI Act complements sectoral regulations and other digital laws, but is insufficiently coordinated with them." This is the key point of a recently published study carried out by law professor Philipp Hacker (Frankfurt/Oder) for the Bertelsmann Foundation. According to the study, many AI applications that fall under the comprehensive requirements of the regulation for systems with artificial intelligence (AI) are already subject to other regulations at the same time.
Hacker cites the General Data Protection Regulation (GDPR) and the Digital Services Act (DSA) as examples. At the same time, the AI Act is in "tension" with sector-specific rules for finance, medicine and the automotive industry, for instance in AI-based credit scoring, diagnostic systems or autonomous driving functions.
According to the legal scholar, the AI Act's far-reaching, risk-based approach categorizes AI applications by their risk potential and imposes strict requirements on potentially particularly dangerous systems. It is not limited to companies in the EU: it also applies to companies based outside the EU whose AI systems are offered in the EU or whose services are used there. Businesses and member states must put the AI Regulation into practice step by step by August 2026. However, the individual parts do not yet fit together, the paper finds: inconsistencies, overlaps and ambiguities could hinder smooth implementation and create legal uncertainty.
Implementing regulations and guidelines as a solution
Specifically, Hacker sees conflicts between the risk-analysis obligations of the DSA and the AI Act. These mainly affect platforms that integrate generative AI technologies such as large language models; the challenge is to reconcile platform-specific and AI-related risks. There are as yet no clear rules on the reuse of personal data for AI training, the legal expert points out, which makes it difficult to comply with the GDPR and the AI Act at the same time. The civil rights organization Privacy International recently concluded that models such as GPT, Gemini and Claude have been "trained without sufficient legal basis" on personal information and cannot protect the rights of data subjects under the GDPR.
In the financial sector, diverging data protection requirements could complicate AI-supported risk analyses, Hacker notes. In the automotive industry, integrating driver assistance systems into existing product safety and liability rules poses a double regulatory challenge. In healthcare, where approval capacities are already scarce, contradictory regulations could slow the spread of AI-based medical applications such as cancer detection tools or systems for drafting doctor's letters.
In the short term, the author recommends dovetailing existing regulations more closely in order to avoid duplication and increase efficiency. The AI Act already achieves this to some extent, for example with regard to quality management systems at financial institutions. The EU Commission could encourage similar interplay through implementing regulations, and national supervisory authorities should issue guidelines on applying the AI Regulation in specific sectoral contexts. In the long term, national and European approaches are needed to harmonize AI regulation with other legal acts and eliminate inconsistencies. The frameworks should also be reviewed regularly to take appropriate account of technological and social developments.
(vbr)