Invisible Revolution: Federal Government Flooding Administration with AI

The era of manageable AI pilot projects in government agencies is over. The federal government is building a "marketplace of possibilities."

Illustration of a German flag on a circuit board

(Image: LongQuattro/Shutterstock.com)

Anyone asking the federal government in 2025 where artificial intelligence (AI) is being used will no longer receive a simple list. They will get a bundle of tables, references to databases, and – above all – a fundamental refusal to quantify. The recently published response from the lead Federal Ministry for Digital and State Modernization (Bundesministerium für Digitales und Staatsmodernisierung, BMDS) to an inquiry from the Left Party in the Bundestag marks a turning point in the federal government's digital agenda: AI is no longer the exotic "Project X" in a ministry's basement, but is diffusing into the capillaries of German bureaucracy.

While in July 2024 there was still a comparatively clear picture – over 200 AI applications, plus an impenetrable classified domain – the federal government is now all but surrendering to the sheer volume. A "clear distinction" is no longer possible, according to the BMDS. The reason: AI is now integrated as a component in firewalls, word processors, and standard office software. The technology has essentially gone from lighthouse to light bulb – it is simply there.

But what is driving the government, where are the risks, and which projects stand out from the mass of administrative processes? The Left Party's inquiry was driven by deep skepticism. In their preliminary remarks, the members of parliament paint a picture of an executive branch that uses AI in "areas sensitive to fundamental rights" without having established the necessary protective mechanisms. They warn of discrimination through algorithmic bias – for example, the disadvantaging of women or people with a migration background through prejudices already embedded in the training data.

The questioners are particularly critical of the black-red coalition's plans to grant security authorities far-reaching powers for automated data analysis and even to train AI with real data. Another sore point is energy consumption: the Left Party complains that the massive electricity demand for training complex models plays hardly any role in public debate and government planning. The parliamentary group demands mandatory manufacturer information on the CO₂ footprint as a procurement criterion.

The government indirectly confirms that AI has long since become a tool of hard security policy – by remaining silent on the matter. For the Federal Intelligence Service (Bundesnachrichtendienst, BND), the Federal Office for the Protection of the Constitution (Bundesamt für Verfassungsschutz, BfV), and the Military Counterintelligence Service (Militärischer Abschirmdienst, MAD), it refuses to provide information across the board. Not even a classified answer is possible, it says.

The justification is technologically revealing: if the services' AI methods, such as "text recognition," were disclosed in combination with specific data sources, adversaries could draw conclusions about technical capabilities. Moreover, if it became known which databases the spies' AI is trained on, adversaries could deliberately "poison" this data (data poisoning) to manipulate the AI or falsify results.
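
What such an attack looks like in principle can be illustrated with a toy example. The following Python sketch is a deliberately minimal illustration of the attack class – not any service's actual pipeline; the data and the nearest-centroid "model" are invented for the demonstration. An attacker who can inject mislabeled samples into a training source quietly shifts what the model learns:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training data: two well-separated clusters, standing in for
# "harmless" (0) vs. "relevant" (1) items in some screening task.
n = 500
X = np.vstack([rng.normal(-2.0, 1.0, (n, 2)), rng.normal(2.0, 1.0, (n, 2))])
y = np.array([0] * n + [1] * n)

def train(X, y):
    # Nearest-centroid "model": one mean vector per class.
    return {c: X[y == c].mean(axis=0) for c in (0, 1)}

def accuracy(model, X, y):
    pred = [min(model, key=lambda c: np.linalg.norm(x - model[c])) for x in X]
    return float(np.mean(np.array(pred) == y))

print("clean model accuracy:    %.3f" % accuracy(train(X, y), X, y))

# Data poisoning: the attacker slips 200 crafted points into the
# training source, labeled "harmless" but placed far inside and beyond
# the "relevant" region. The class-0 centroid drifts toward class 1.
X_poison = rng.normal(8.0, 0.5, (200, 2))
X_train = np.vstack([X, X_poison])
y_train = np.concatenate([y, np.zeros(200, dtype=int)])

print("poisoned model accuracy: %.3f" % accuracy(train(X_train, y_train), X, y))
```

The poisoned model misclassifies a substantial share of genuinely "relevant" items as harmless – without the training code ever changing, which is exactly why knowledge of the training sources is considered sensitive.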

The Federal Ministry of Defense also remains tight-lipped – as it did last year: information about AI capabilities could allow conclusions to be drawn about the troops' combat strength. Here, the "administrative revolution" ends and raison d'état begins. Critics, such as the Left Party, complain that parliamentary control over the use of AI is lacking precisely where fundamental rights are most at risk.

The public part of the response shows a strategic shift compared with the situation at the beginning of 2024. Apparently, departments are no longer each building their own chatbot. Instead, the federal government is increasingly focusing on centralization and platform economics.

Two terms dominate this new phase: MaKI and Kipitz. The "Marketplace of AI Possibilities" (MaKI) is the new central transparency register – albeit one without an official right to information. It is intended to prevent every authority from reinventing the wheel. Instead of isolated lists, MaKI serves as a "matching platform" where authorities can see what others have already developed. Since November 2024, federal states and municipalities have also had access – an attempt to bridge the federal patchwork at least technologically.

The operational core, however, is the planned AI Platform for the Federal Administration (Kipitz). Operated by the federal IT service center ITZBund, this portal is the answer to the administration's "ChatGPT dilemma." Kipitz is intended to provide generative AI models, such as large language models (LLMs), via a secure interface. The key feature: it is a proprietary in-house development that uses open-source models, designed to ensure that no sensitive government data ends up on the servers of US tech giants. For 2026, budget funds of 1.7 million euros are planned for Kipitz, with hardware costs estimated at 40 million euros.
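
What access to such a platform could look like for an authority can only be sketched hypothetically – Kipitz's actual interface has not been published in this detail. Many self-hosted open-source model servers expose an OpenAI-compatible chat endpoint, however, and the following Python sketch assumes exactly that; the URL, token, and model name are placeholders:

```python
import requests

# Hypothetical internal gateway - a placeholder, not Kipitz's real endpoint.
BASE_URL = "https://ki-plattform.internal.example/v1"
API_KEY = "agency-service-token"  # placeholder credential

def ask(prompt: str, model: str = "llama-3.1-70b-instruct") -> str:
    """Send a chat request to a self-hosted open-source model.

    Because the model runs on the administration's own hardware,
    the prompt never leaves government infrastructure.
    """
    resp = requests.post(
        f"{BASE_URL}/chat/completions",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
            "temperature": 0.2,
        },
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

print(ask("Summarize the attached decree in three sentences."))
```

The appeal of such a model-agnostic interface: the open model behind it can be replaced without touching any client code.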

According to a new Fraunhofer analysis, federal authorities have a choice of many interchangeable, open-source-based LLM solutions beyond ChatGPT – which the researchers recommend in the interest of digital sovereignty. The federal administration currently relies predominantly on non-European open-source models, operated within the administration's own infrastructure. Major players in this market include Meta's Llama, Google's Gemma, and offerings from the Chinese newcomer DeepSeek.

According to the study, this strengthens the ability to switch: because the models are hosted on the administration's own infrastructure, they can be replaced if necessary. However, a strategic gap remains: given the shifting understanding of open source in the AI context, the authors float the development of dedicated, openly provided European LLMs.

Anyone who delves into the hundreds of lines of attachments in the response will see: AI in the federal government has long been about more than text summarization. Some projects stand out due to their social relevance or technical sophistication. These include, for example, image recognition software for identifying war victims in Ukraine (BIKO-UA), which is intended to show how AI is used in forensic and humanitarian work to help clarify the fates of victims. The Federal Office for Migration and Refugees (BAMF) uses AI models to better assess migration movements.

The Federal Ministry of Transport and its subordinate authorities rely heavily on AI, including for environmental monitoring. KIResQ, for example, is an initiative for evaluating thermal images to find missing persons faster – for instance, during search operations in difficult terrain. In Silva, AI-controlled drones and aircraft automatically scan for sources of forest fires from the air.

Missing Link

What's missing: in the fast-moving world of technology, often the time to re-sort the flood of news and background stories. On the weekend, we want to take that time, follow the side paths away from the day's events, try out different perspectives, and make the nuances audible.

The Federal Institute of Hydrology (BfG) relies on AI to detect plastic in rivers and oil spills at sea. The German Weather Service (DWD) is establishing an entire "AI Center." The goal is not only to improve weather forecasting but also to deliver precise warnings of extreme weather events through "nowcasting" (very short-term forecasting). In climate protection, the technology is used, for example, to investigate rock formations or to predict and give early warning of groundwater depletion and salinization.
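
"Nowcasting" usually means extrapolating the latest observations minutes to a few hours ahead rather than running a full numerical weather model. A classic baseline is Lagrangian persistence: estimate how the radar precipitation field moved between the last two frames, then shift it forward by the same displacement. The following sketch shows this textbook baseline on synthetic data – it is not the DWD's actual system:

```python
import numpy as np

def estimate_shift(prev_frame, curr_frame):
    """Estimate the global (dy, dx) displacement between two radar
    frames via phase correlation."""
    cross = np.fft.fft2(curr_frame) * np.conj(np.fft.fft2(prev_frame))
    cross /= np.abs(cross) + 1e-12
    corr = np.abs(np.fft.ifft2(cross))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = corr.shape
    # Map wrap-around indices to signed shifts.
    dy = dy - h if dy > h // 2 else dy
    dx = dx - w if dx > w // 2 else dx
    return dy, dx

def nowcast(prev_frame, curr_frame):
    """Lagrangian persistence: advect the latest frame by the
    displacement observed over the last time step."""
    dy, dx = estimate_shift(prev_frame, curr_frame)
    return np.roll(curr_frame, shift=(dy, dx), axis=(0, 1))

# Synthetic demo: a rain cell moving 3 pixels down and 2 to the right per step.
field = np.zeros((64, 64))
field[10:20, 10:20] = 1.0
t0 = field
t1 = np.roll(field, (3, 2), axis=(0, 1))
t2_true = np.roll(field, (6, 4), axis=(0, 1))

print("forecast matches truth:", np.allclose(nowcast(t0, t1), t2_true))
```

Real systems replace the single global motion vector with dense motion fields and, increasingly, with learned models – but the principle of projecting the latest radar image forward is the same.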

In the fight against disinformation in the age of deepfakes, the federal government is also arming itself digitally. FACTSBot is a system for detecting and validating machine-generated content in order to identify misinformation. Nebula is presented by the government as a user-centered initiative for recognizing fake news. SpeechTrust+ is a tool specifically designed to detect AI-based speech synthesis and voice manipulation – it could be deployed against the "grandchild scam 2.0" or political manipulation.


This article was originally published in German. It was translated with technical assistance and editorially reviewed before publication.