FAQ: AI Act requires AI expertise – What companies and authorities can expect

From February 2, 2025, companies and public authorities must ensure that their employees have sufficient AI skills. What does this entail?


With the AI Act, the EU legislature has created a large body of legislation that is becoming increasingly important and influential for the use of artificial intelligence, at least in Europe. Companies and authorities are obliged to keep track of which parts are relevant for them. One obligation that should not be underestimated is the requirement to ensure that their own staff have sufficient AI expertise.

The AI literacy requirement applies from February 2, 2025, as part of the first stage of the AI Act, when the provisions of Chapters I and II come into effect. Chapter I contains general provisions such as the subject matter of the regulation, its scope of application, definitions and, in Article 4, the provisions on AI competence. Chapter II consists solely of Article 5, which specifies "prohibited practices in the field of AI"; it bans the use of AI for social scoring systems, for example.

Who is affected by the AI competence obligation?

Article 4 of the AI Act states: "Providers and operators of AI systems shall take measures to ensure, to the best of their ability, that their personnel and other persons involved in the operation and use of AI systems on their behalf have a sufficient level of AI competence, taking into account their technical knowledge, experience, education and training, the context in which the AI systems are intended to be used and the persons or groups of persons on whom the AI systems are intended to be used."

The addressees of the AI competence obligation are therefore providers and operators of AI systems. A provider is a natural or legal person, public authority, institution or other body that "develops or has developed an AI system and places it on the market under its own name or trademark or puts the AI system into operation under its own name or trademark, whether for payment or free of charge" (Article 3 No. 3 AI Act). Although it is usually easy to determine who the "AI developer" is, borderline cases can be tricky.

Article 3 No. 4 of the AI Act defines the "operator" of an AI system: "a natural or legal person, public authority, agency or other body that uses an AI system under its own responsibility, unless the AI system is used in the course of a personal and non-professional activity". Given the extremely broad scope of the AI Act, it becomes clear that sooner or later almost every company and every public authority will be regarded as an operator of AI systems. This means that they are all subject to the obligation to ensure sufficient AI competence among their staff and among those who use AI systems on their behalf.

What is AI competence?

AI competence includes, in particular, the ability to critically scrutinize AI technologies and to use them effectively in different areas of life. This covers understanding how AI systems work technically, how they are applied and designed, and how AI-driven decisions affect the people concerned. What counts as sufficient depends entirely on the specific application, which makes it almost impossible to specify a standardized scheme for implementing this obligation in a company or public authority. Accordingly, the requirements remain rather unspecific.

In addition, the AI competence obligation is framed flexibly: only a "sufficient level" of competence is required, and providers and operators need only ensure it "to the best of their ability".

A European body for artificial intelligence is to support the EU Commission in promoting AI competence tools and raising public awareness. The EU Commission and the member states are also to develop voluntary codes of conduct in cooperation with stakeholders in order to promote AI skills. When concrete results such as these codes of conduct can be expected is, however, impossible to predict. Until then, affected companies and authorities will largely have to rely on outside support if they cannot define and communicate sufficient AI competence themselves.

In the absence of precise guidelines on what AI competence must include in detail, the consulting industry has already jumped on the bandwagon and is offering courses to train AI officers or AI managers. Law firms describe what they consider to be the relevant aspects. As always, there is no one-size-fits-all solution for building sufficient AI skills; answers and concepts must ultimately be tailored to the specific company or authority. In any case, efforts to achieve AI competence should be well documented.

What do the risk classes mean in terms of AI competence?

The AI Act classifies AI systems into four categories: prohibited AI practices, high-risk AI systems, AI systems with limited risk, and those with minimal or no risk. High-risk AI systems are those that the EU considers to pose significant risks to the health, safety or fundamental rights of EU citizens, but whose major socio-economic benefits are deemed to outweigh those risks. This category is deliberately drawn very broadly.

Operators of such high-risk AI systems are subject to heightened AI competence requirements. The persons entrusted with their supervision must be able to make informed decisions about the AI and have the authority to shut the system down in an emergency.

Article 26(2) of the AI Act clearly describes the obligations of operators of high-risk AI systems with regard to AI competence: "Operators shall entrust human supervision to natural persons who have the necessary competence, training and authority, and provide them with the necessary support." Conversely, the wording makes it clear that persons who lack the specific AI expertise required may not be entrusted with the supervision of high-risk AI systems.

High-risk AI systems must therefore be developed in such a way that natural persons can monitor their functioning and ensure that they are used as intended. Before placing such systems on the market, providers must define human supervision measures, including operational restrictions and the system's ability to respond to the human operator. For the use of high-risk AI systems, the legislator thus provides for an interplay between human AI competence on the one hand and technical intervention, control and stop mechanisms on the other.

What are the consequences of breaching the AI competence obligation?

Breaches of the AI competence obligation can have considerable consequences. Since insufficient AI competence can constitute a breach of the duty of care, there is a risk of liability claims and claims for damages. As with other regulations, managing directors may also be personally liable under the AI Act if they have failed to properly comply with their duties, including the statutory requirements on AI competence.

Fines and other sanctions are also likely, based on the national sanctions catalogs currently being developed. It is only a matter of time before member states spell out what fines or other sanctions companies and authorities will face if they fail to comply with the AI competence obligation.

(axk)

This article was originally published in German. It was translated with technical assistance and editorially reviewed before publication.