Artificial Intelligence: The Struggle for Rules on AI Use in War

In Spain, experts met for the third time to discuss "Responsible AI in the Military Domain". Binding rules remain a distant prospect.


In A Coruña, international experts and military personnel met for the third time to discuss the military use of AI.

(Image: REAIM 2026)

By Monika Ermert

The third "Responsible AI in the Military Domain" conference (REAIM) concluded on Thursday in A Coruña, Spain, with a list of recommendations for the political regulation of the military use of artificial intelligence that failed to win majority support.

According to the recommendations, which were signed by only 35 of the 80 states represented, Germany among them, governments must assess the risks of military AI use in greater detail. Decision-making processes must be documented in such a way that accountability for the use of automated weapon systems remains traceable. Major powers such as the USA and China did not sign the final document.

The rule that, as a matter of principle, a human rather than a machine must trigger a weapon ("human in the loop") is outdated, emphasized Jeroen van der Vlugt, CIO of the Dutch Ministry of Defence: "This is no longer true even for conventional weapon systems today." Regulations must instead clearly define responsibility across the entire chain of command and the lifecycle of AI weapon systems.

As part of the well-attended conference, with 1,200 participants and more than 80 delegations, UN representatives gave the go-ahead for the development of a catalog of voluntary measures for the AI industry that supplies the military.

Last year, the "Global Commission REAIM," founded on a Dutch initiative, developed guidelines for states, militaries, and industry. Its efforts to formulate concrete steps for the military use of AI are partly reflected in the final report of the 2026 conference ("Pathways to Action").

One recommendation holds that states' national policies must comply with and uphold international law. Militaries should document decision-making processes when deploying automated weapons so that accountability can be established unequivocally afterwards. In addition, systems must undergo risk assessments before operational deployment.

The commission also considers databases of the possible risks and side effects of such systems useful. A representative of the Spanish National Cryptologic Center pointed in A Coruña to considerable manipulation risks, such as the "poisoning" of data sets, which can lead to false intelligence results.

Remarkable in the commission's report is the long list of AI systems already in use by the armed forces of various countries. It ranges from FELIX, an AI-assisted decision-support system at NATO, and research programs to shorten development times for fighter aircraft, to operationally deployed AI systems: for example, Palantir's Maven Smart System (NATO), the Llama-based LLM ChatBIT (China), and Israel's autonomous Harpy loitering munitions.


The long list of examples illustrates how far diplomatic attempts to create common norms lag behind the reality of military AI applications. Rather than pinning its hopes on international agreements, REAIM therefore focuses on common definitions and risk assessment. One hope voiced was that agreement might at least be reached on a database of incidents involving military AI systems.

One hard red line must nevertheless be drawn, warned Denise Garcia, a representative of the Global Commission REAIM: the use of AI in decisions on the deployment of nuclear weapons must be outlawed, and there must be binding international rules for automated lethal weapon systems. The researcher hopes for a "common position" from the EU in upcoming talks at the UN on a possible international treaty.

"New international laws on autonomous weapon systems are definitely feasible," explains a spokesperson for the initiative "Stop Killer Robots" in response to a query from heise online. The United Nations could build on the results of the UN working group on lethal autonomous weapons. "We just have to take the next step now."

(nie)


This article was originally published in German. It was translated with technical assistance and editorially reviewed before publication.