AI technology for the US military: Anthropic rejects the Pentagon's ultimatum
In a dispute with the US Department of Defense, Anthropic CEO Dario Amodei insists on two red lines for the use of his company's AI. The decision now rests with the Pentagon.
(Image: Ivan Cholakov/Shutterstock.com)
In the dispute between Anthropic and the US Department of Defense, the head of the AI company has rejected the Pentagon's ultimatum and declared that the red lines will not be abandoned. His company cannot allow its AI to be used for mass surveillance in the US or for the operation of fully autonomous weapon systems, writes Dario Amodei: AI systems are not yet reliable enough for fully autonomous weapons, and mass surveillance contradicts democratic values. These exceptions, he adds, have so far not been an obstacle to the US military's use of the AI. He also criticizes the threats leveled at his company, namely being classified as a risk to supply chains or being compelled under wartime law to hand over its technology, and points out that the two contradict each other. If the Pentagon wants to remove Anthropic's AI from its systems, he promises, the company will help.
Contradictory Threats
With the blog post, Amodei is now putting the decision to the Pentagon. Earlier this week, the department had threatened to force the provision of a version of the AI via the Defense Production Act (DPA), a law actually intended to let the US government order the production of goods vital to the war effort. Before that, the US government had already threatened to classify Anthropic as a security risk, which would present other companies with the choice of doing business either with the US military or with the AI firm. Amodei argues that this is contradictory: his company cannot be both a security risk and indispensable to national security. Either way, its stance remains unchanged.
By publicly defending its red lines, Anthropic is taking a clear stance: it remains ready to support US national security, but it will not give up its conditions. US Secretary of Defense Pete Hegseth must now decide whether to follow through on any of his department's threats. That could be easier said than done, as his department has repeatedly pointed out how valuable Anthropic's Claude AI is. It is the only AI system used in the Pentagon's secret, shielded networks. "The problem for these guys is that they are that good," the US magazine Axios quoted an official as saying. Anthropic's AI is reportedly far ahead of the competition and would be difficult to remove from those systems.
(mho)