AI guidelines: Public sector AI use must adhere to fundamental rights
The government has published guidelines for the "responsible and safe" use of AI in the federal administration. Everything should be verifiable.
In view of the "rapid technological developments and the high level of interest within the federal administration in the secure use" of generative artificial intelligence (AI) models such as ChatGPT, Gemini and Claude, Germany's Federal Ministry of the Interior (BMI) has developed and published guidelines on the subject. The recommendations for action they contain are intended to serve as a guide for public sector employees. One of the many maxims: "AI is introduced and used in accordance with fundamental rights."
The personal rights of employees and citizens, such as the right to informational self-determination, must be respected, the department explains. The aim is to "expand human scope for action" with AI. The use of the technology should be fair and non-discriminatory, with regard to characteristics such as gender, sexual orientation, disability or age. AI systems should, adapted to their respective purposes, be based on "suitable and representative training data". For users, the rule of thumb is to assume "the most restrictive case of data input".
Another requirement, in line with legal frameworks such as the AI Act: "there must be clear responsibilities when using AI". The use of the technology "shall be under appropriate human supervision, with appropriateness to be assessed on a case-by-case basis". One note is likely to prove particularly challenging: "In some cases, it may be necessary to be able to understand and evaluate each result with all underlying factors." Operators of such systems often find themselves unable to live up to this principle of explainable AI.
Maximize benefits, minimize risks
Implementing the paper's general requirement of responsible technology use is likely to be challenging in other respects as well. "A common value-based approach to the provision and use of AI" should "maximize the benefits of AI and minimize the associated risks", it says, for example. Applications should "always be applied for the common good". Authorities should also use AI in an opportunity-oriented manner wherever it adds value to the fulfillment of their tasks.
The topic of sustainability also comes up: AI systems and their use should be "as resource-conserving and energy-efficient as possible". The efficiency of the applications can be taken into account during development and procurement. At the hardware level, the energy consumption and material resources needed for AI computing power should be minimized. At the same time, the technology could also be actively used for purposes that serve ecological sustainability. To avoid lock-in effects and preserve digital sovereignty, the selection of an IT provider should take into account the option to switch at a later date, the authority's own ability to shape and influence the provider, and the provider's market position. In addition, there is an appeal to give preference to models with a transparent training process and freely available parameters.
(nie)