Meta's Llama is used for military purposes – in China and the USA

Llama is an open AI model family. Although the terms of use prohibit it, China, for example, uses the model for military purposes.

Meta logo and the head of a female robot (Image: Below the Sky/Shutterstock.com)


Meta has made its own AI models openly available under the name Llama. While Meta itself speaks of open source, under the Open Source Initiative's (OSI) new definition of open-source AI, Llama does not qualify, because the training data has not been disclosed. Meta also sets conditions on use: above a certain number of users, permission must be obtained from Meta, and Llama models may not be used for military purposes. China is not bothered by this. And the USA, too, has recently been permitted to use Llama for defense purposes.

Meta has explicitly announced that the US government may use Llama for national security purposes, as may private partners supporting that work. Meta is collaborating with major tech companies such as Amazon, Microsoft, IBM, Palantir and Oracle. Nick Clegg, Head of Global Affairs at Meta, writes in the blog post that the company believes AI can help stabilize global security.


Oracle, for example, is reportedly working on specializing Llama with aircraft maintenance documentation, so that the documents can be analyzed more quickly and accurately with AI and aircraft repaired where necessary. The collaboration with Amazon, or rather Amazon Web Services, and Microsoft concerns data hosting, among other things. Clegg also writes that, regardless of the OSI definition, open-source technology has helped the USA build the most technologically advanced military in the world for decades: "Open source systems have helped accelerate defense research and high-end computing, identify security vulnerabilities, and improve communications between disparate systems."

Nevertheless, Reuters reports that the Chinese military is also relying on Llama. The AI models are freely available via Hugging Face and GitHub, for example. That is a potential downside of openly available models: in addition to researchers, developers and companies, malicious actors can also build AI tools on them for their own purposes. According to Reuters, Chinese scientists themselves describe this use in several papers. The authors are said to have close ties to the People's Liberation Army, i.e. the Chinese military.

According to one of the papers, the researchers have developed an AI chatbot called ChatBIT. It is said to be based on Llama 2, which is no longer the latest model. For ChatBIT, Llama underwent military fine-tuning, which is intended to help the chatbot analyze military data and support operational decisions. At first glance, this does not sound particularly powerful; other companies, such as Palantir, offer far more comprehensive warfare software.

Meta's terms of use do in fact prohibit military use, with the exception of US authorities and their partners. However, Meta has no way of preventing such use, whether by China or by other malicious actors.

(emw)


This article was originally published in German. It was translated with technical assistance and editorially reviewed before publication.