Anthropic CEO calls Pentagon's actions "retaliatory and punitive"

In an interview, Anthropic CEO Amodei defends his company against its classification as a security risk, invoking fundamental American values.


Anthropic founder Dario Amodei

(Image: Anthropic)


Anthropic founder Dario Amodei has now publicly responded to the US Department of Defense's classification of his company as a security risk. In an interview with CBS, he spoke of an unprecedented process: "This designation has never happened before with an American company. And I think it was made very clear in some of their statements, in some of their language, that this was retaliatory and punitive," said Amodei.

In the interview, excerpts of which are available on YouTube, the Anthropic CEO framed the conflict as a question of fundamental American values. Everything the company did, he said, was for the good of the country and in support of US national security. The red lines Anthropic drew were likewise an expression of those values.

When the Pentagon threatened Anthropic with the supply-chain classification and the Defense Production Act, the company was merely exercising its right to free expression, he argued. "Disagreeing with the government is the most American thing in the world," Amodei told CBS.

In a statement, the company announced that it would legally challenge the classification as a supply-chain risk. Such a classification would bar companies that want to do business with the Pentagon from entering into contracts with Anthropic. According to the statement on the company website, the measure is legally untenable and sets a dangerous precedent for any American company negotiating with the government.

In July 2025, the Pentagon had awarded Anthropic a $200 million contract for the development of agentic AI workflows. In subsequent negotiations, however, Anthropic demanded safeguards for two specific areas of application. The dispute escalated publicly when it emerged that Anthropic technology had been used in the US military operation to capture Venezuelan leader Nicolás Maduro; in exactly what form was not disclosed.


As Amodei explains in a parallel blog post on the Anthropic website, the company rejects the use of Claude for mass domestic surveillance and for fully autonomous weapons. Regarding surveillance, Anthropic argues that AI can automatically combine scattered, individually harmless data into comprehensive personality profiles – to an extent that existing legislation cannot keep up with.

Regarding autonomous weapons, the company points out that today's AI systems are not reliable enough to select and attack targets without human oversight. Anthropic offered to research improvements to this reliability together with the Pentagon, but the offer was rejected, according to Amodei.

The Pentagon sees it differently. According to CBS News, Emil Michael, the Pentagon's chief technology officer, stated that the military had made significant concessions to Anthropic. At some point, however, the military must be trusted to act responsibly, he said.

Meanwhile, OpenAI CEO Sam Altman announced on X that an agreement had been reached with the Pentagon. The company is apparently stepping into Anthropic's shoes.

Altman explained that two of OpenAI's most important safety principles are the prohibition of domestic mass surveillance and human responsibility for the use of force, including autonomous weapon systems. According to Altman, the Department of Defense agrees with these principles and intends to incorporate them into laws and policies.

However, what exactly was agreed upon between OpenAI and the Pentagon remains unclear. (ssi)


This article was originally published in German. It was translated with technical assistance and editorially reviewed before publication.