Lawsuit against entire US government: Anthropic defends itself against Pentagon
In a dispute with the US government, Anthropic is now going to court. A lawsuit aims to reverse the classification as a security risk.
(Image: jackpress/Shutterstock.com)
In a dispute with the Pentagon, Anthropic has now filed a lawsuit against the US government to prevent its placement on a sanctions list meant to protect national security. In the suit against numerous US departments and their secretaries, the AI company describes their measures as “unprecedented and unlawful.” The US Constitution, it argues, does not permit the government to use its enormous power to “punish a company for its protected speech.” Anthropic says it is turning to the judiciary as a last resort “to vindicate its rights and halt the Executive’s unlawful campaign of retaliation.” The company has immediately received support from numerous employees of competitors OpenAI and Google.
Support from the Competition
The dispute, which has been playing out publicly for weeks, concerns the US Department of Defense's attempt to secure unrestricted access to Anthropic's AI technology for itself and the entire military. The Pentagon reportedly considers the technology vastly superior to competing offerings, and it is already in use in various areas. Anthropic has no fundamental objection to this, but has drawn two red lines: it will not allow the US government to use its AI for mass surveillance of the US population or for the development of fully autonomous weapons. The US government was unwilling to accept these conditions and, instead of merely terminating the contracts, took drastic measures.
The classification as a “Supply-Chain Risk to National Security” means that no company doing business with the US military may cooperate with Anthropic, the AI company explains in the lawsuit. The US government demands that this take effect immediately, a step Anthropic considers unlawful. Shortly after the lawsuit was filed, Anthropic received support from a number of employees at competing AI firms, who, as uninvolved third parties, submitted an amicus curiae brief to the court. In it, they describe Anthropic's arguments as “legitimate” and the US government's actions as an “improper and arbitrary use of power that has serious ramifications for our industry.”
Anthropic's AI software Claude is a direct competitor to OpenAI's ChatGPT, and during the dispute the US Department of Defense repeatedly stated that it is superior to the competition. Anthropic CEO Dario Amodei laid out the red lines in a blog post, explaining that AI can automatically compile scattered data about individuals from across the internet into a detailed picture of their lives, and do so at scale. Furthermore, he argued, the technology is not yet reliable enough to be used in fully autonomous weapons. The amicus brief now adds that the mere knowledge of such surveillance capabilities could have a chilling effect on democratic participation.
Criticism of OpenAI's Actions
Unlike Anthropic, OpenAI has reached an agreement with the Pentagon. Although Sam Altman accepted the terms, he later asserted that technical hurdles would prevent the technology's use for mass surveillance. Nevertheless, OpenAI's head of robotics left the company over the matter at the weekend, criticizing that OpenAI should have addressed the risks more thoroughly beforehand. In addition, Amazon, Google, and Microsoft have backed Anthropic: the corporations assured customers that they can continue to use Claude and other Anthropic tools within their platforms.
(mho)