US Government vs. Anthropic: Appeal Against Sanctions Halt
AI developer Anthropic initially defended itself successfully against sanctions from the Trump administration. Now the US government is striking back.
(Image: Michele Ursi/Shutterstock.com)
Because the US government was not allowed to do as it pleased with Anthropic's technology, the AI developer was placed on a sanctions list and has been in a dispute with its former client ever since. After an initial success for the company, the US government is now countering.
The Trump administration had classified Anthropic as a "supply chain risk" and thus a threat to national security because the government and the US military were not allowed to use Anthropic's AI for mass surveillance of the US population or for fully autonomous warfare. Anthropic fought the classification by suing the US government, initially with success: at the end of March, a US district judge issued a preliminary injunction blocking the government's classification of Anthropic.
Judge: US government's arguments questionable
Judge Rita F. Lin of the US District Court for Northern California (case number 3:26-cv-01996-RFL) justified her decision on the grounds that the government's stated reason for the ban – national security – was questionable. In her view, the move was intended more to punish Anthropic for refusing to follow the US government's will unconditionally. However, she also gave the US government the opportunity to take legal action by letting the injunction take effect only after seven days.
That is exactly what the government did this week, appealing Lin's injunction. On Thursday, the US Department of Justice filed a notice of appeal with the court – a filing that, for now, reveals nothing beyond the intention to appeal.
Wave of solidarity in the AI industry
Beyond the reputational damage of being classified by the Pentagon as a national security risk, Anthropic is, for example, no longer allowed to supply companies that are themselves contractors of the US military. And for Anthropic itself, government contracts of any kind are of course off-limits. The US government's move drew widespread public criticism and expressions of solidarity. Tech giants backed Anthropic and assured their customers of continued cooperation.
Microsoft, as well as some employees of Google and OpenAI, addressed the responsible court directly with amicus briefs – statements on a legal dispute that parties not involved in the proceedings can submit. In her preliminary injunction, Lin also quoted from these briefs, which largely appear to share her view.
OpenAI promptly stepped in as a new contractor for the US government, while disgruntled users flocked to Anthropic's Claude AI. OpenAI's own 200 million US dollar contract with the US government, however, is likely less significant financially than strategically. The main motivation is probably state support in general – both regulatory backing and the prospect of the government acting as a major funder should OpenAI, as a supplier for defense purposes, ever run into serious financial trouble.
(nen)