After dispute: Pentagon classifies Anthropic as a risk

Due to restrictions on AI use for surveillance and weapons, the US military now classifies Claude developer Anthropic as a security risk.

[Image: US flag in the hands of US soldiers. Credit: Michele Ursi/Shutterstock.com]


The Pentagon has followed through on its threat: the US start-up Anthropic, provider of the AI model Claude, is now officially considered a "supply chain risk." After the Pentagon's ultimatum demanding unrestricted use of the company's AI expired on Friday, Anthropic was classified under Section 3252 of US military law as a threat to the military's supply chain, a designation that until now had only been applied to foreign companies. In an official statement, Anthropic announced a lawsuit against the classification, which CEO Dario Amodei described as a retaliatory measure. Amodei expressed doubts that the actions of the US Department of Defense, which the Trump administration refers to as the War Department, are legally sound.

The dispute centers on Anthropic's insistence on prohibiting certain government uses of its AI. In negotiations, Anthropic rejected the Pentagon's ultimatum and insisted on ethical guardrails: mass surveillance of US citizens and deployment in autonomous weapon systems were to be contractually excluded. The Pentagon refused and threatened countermeasures if Anthropic held to its demands. Other companies, such as OpenAI and xAI, by contrast agreed to unrestricted use of their AI models, according to media reports.

For Anthropic, the classification immediately affects a 200-million-US-dollar contract with the Pentagon. It is also likely to make business with other government agencies, or with defense contractors working for the Pentagon, more difficult or even impossible. Anthropic CEO Amodei, however, stated that the law only affects Pentagon-specific contracts. Microsoft confirmed that it can continue to work with Anthropic on civilian projects.


As a first consequence, Claude's use in the "Maven Smart System" developed by Palantir must be terminated. The software was used in the current US military operation against Iran. However, according to a report by the US news agency Bloomberg, Claude itself continues to be used by the Pentagon in the Iran conflict. Secretary of Defense Pete Hegseth informed leading members of Congress about Anthropic's classification.

According to Bloomberg, the conflict has not harmed Anthropic's valuation: the company is valued at 380 billion US dollars, and its annual revenue for this year is forecast to reach up to 18 billion US dollars.

(mki)


This article was originally published in German. It was translated with technical assistance and editorially reviewed before publication.