Anthropic to sue Trump administration after AI lab is labelled security risk
Anthropic promised a legal challenge after it was labelled a security risk by the Pentagon and banned from government contracts, escalating a battle between the AI lab and the Trump administration.
US President Donald Trump on Friday gave Anthropic six months before it is cut from government contracts, saying the AI start-up made a “disastrous mistake” in challenging the Pentagon over the military use of its technology.
The president said in a Truth Social post that he would not “ALLOW A RADICAL LEFT, WOKE COMPANY TO DICTATE HOW OUR GREAT MILITARY FIGHTS AND WINS WARS!”
The Pentagon then designated Anthropic a supply-chain risk, an unprecedented action against an American company.
After a deadline to agree to terms lapsed on Friday, defence secretary Pete Hegseth said the company had “delivered a masterclass in arrogance and betrayal”. Effective immediately, no Pentagon contractors or suppliers “may conduct any commercial activity with Anthropic”, he added.
Anthropic said it intends to challenge that decision in court.
“No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons. We will challenge any supply chain risk designation in court,” the company said on Friday evening.
Anthropic added Hegseth lacked “the statutory authority to back up this statement. Legally, a supply chain risk designation . . . can only extend to the use of Claude as part of Department of War contracts — it cannot affect how contractors use Claude to serve other customers.”
Alan Rozenshtein, associate professor at the University of Minnesota Law School, said “the supply chain risk designation is legally tenuous . . . I wouldn’t be surprised if all sides, including the government, understand this”.
Anthropic chief executive Dario Amodei has clashed with Hegseth over how the military uses its AI models, maintaining red lines against their use in lethal autonomous weapons and mass domestic surveillance.
Trump in his post said he was “directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic’s technology”.
But the president added he would allow a “Six Month phase out period for Agencies like the Department of War who are using Anthropic’s products, at various levels”, potentially allowing time for the two sides to cut a deal.
The Pentagon was negotiating with Anthropic up to the moment the president posted, said two people familiar with the matter.
Anthropic’s Claude is the only AI model deployed in classified operations, having been used in the seizure of Venezuelan leader Nicolás Maduro last month.
An administration official told the FT that it remained the best model for military use. Mark Warner, the top Democrat on the US Senate’s intelligence committee, said Trump’s move poses “an enormous risk to US defence readiness”.
OpenAI later on Friday announced it had reached an agreement with the Pentagon to “deploy our models in their classified network”.
“Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems,” the company’s chief executive Sam Altman said on X.
“The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement.”
Altman on Thursday told staff the ChatGPT maker would permit “any use except those which are unlawful or unsuited to cloud deployments, such as domestic surveillance and autonomous offensive weapons”.
Elon Musk’s xAI is also close to a deal, said people with knowledge of the matter.
Josh Gruenbaum, the US procurement chief who agreed a deal with Anthropic last August to supply government agencies, said the agreement had been terminated immediately because it would be “dangerous to our nation” to maintain a business relationship with the company.
In cutting Anthropic from its supply chain, the Pentagon is implementing a process that up to now has only been used against foreign companies located in countries such as China and Russia.
Groups that have previously been deemed a supply-chain risk include China’s Huawei and ZTE Corporation, and Russia’s Kaspersky Lab.
Anthropic has argued AI models are not reliable enough for humans to be removed from the “kill chain” and that existing surveillance laws are inadequate to prevent mass surveillance.
The Pentagon has disputed that, saying it had no intention of using AI in the ways Anthropic feared and accusing the company of derailing the talks by pushing for excessive control over military operations.
Amodei was summoned to Washington for talks on Tuesday, which failed to produce a resolution. On Thursday, he said his company “cannot in good conscience” agree to the US government’s terms.
The Pentagon “decided to make Dario a non-patriotic villain, to make an example and intimidate the other companies”, said a former senior defence official. “It’s deeply dangerous.”
Additional reporting by Joe Miller in Washington
