Military AI: The Pentagon Issues an Ultimatum to Anthropic

Update – Dario Amodei did not wait for the February 27 ultimatum deadline set by Pete Hegseth, Secretary of Defense.

In a lengthy statement, the Anthropic CEO delivered his answer yesterday afternoon (Pacific time): no. “These threats do not alter our position: we cannot, in good conscience, comply with their request,” writes Dario Amodei.

And he continues: “I profoundly believe in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries. That is why Anthropic has worked proactively to deploy its models with the Department of War [the new name of the U.S. Department of Defense] and the intelligence services,” he writes, but “in certain precise cases, we believe AI can undermine, rather than defend, democratic values.”

Read also: Anthropic vs. the Pentagon: a multi-billion-dollar lawsuit

To clarify its refusal, Dario Amodei reiterates his doctrine on military AI: “We support the use of AI for legitimate foreign intelligence and counter-espionage missions. But using these systems for mass domestic surveillance is incompatible with democratic values,” he writes, estimating that this practice “poses a grave and unprecedented risk to our fundamental freedoms.”

Another area of disagreement: autonomous weapons. The CEO notes, “Without proper controls, one cannot rely on fully autonomous weapons to demonstrate the same discernment as our professional and highly skilled troops. Their deployment must be bounded by safeguards that do not currently exist.” He adds that “none of these issues had, until now, constituted an obstacle to accelerating the adoption and use of our models within our armed forces.”

He also observes that “These two threats are inconsistent: one labels us a security risk, the other makes Claude an essential element of national security.”

With Anthropic’s defiance of Pete Hegseth’s threat to designate the company as a “risk to the supply chain” now a fait accompli, will the Defense Department carry out its threat?

——————————————————————

The unprecedented battle now unfolding between the Pentagon and Anthropic takes a new turn. According to exclusive Axios reporting, the Department of Defense contacted two defense industry giants on Wednesday to assess their dependence on the Claude AI model: Boeing and Lockheed Martin. This marks a first step toward possibly designating the company as a “risk to the supply chain.”

Such a sanction, typically reserved for companies from adversarial countries, like China’s Huawei, has never been imposed on a leading American technology firm. Applying it to Anthropic would set a historic precedent.

Claude at the Heart of Classified Systems

The stakes are high. Claude is currently the only AI model operating within the U.S. military’s classified systems. According to Axios, it was deployed in the operation to capture Venezuelan President Nicolás Maduro, through Anthropic’s partnership with Palantir, and could eventually be used in potential military operations in Iran.

Read also: Military AI: Anthropic officially barred from the Pentagon

While the Pentagon praises Claude’s performance, it is reportedly “furious” that Anthropic refuses to drop its guardrails and allow use for “all legal missions.” The startup is determined to block the use of its AI for mass surveillance of American citizens or for the development of autonomous weapons capable of firing without human intervention.

An Ultimatum Set for February 27

Tensions culminated on February 24 during a particularly tense meeting. Defense Secretary Pete Hegseth reportedly set an ultimatum for Anthropic’s CEO, Dario Amodei: to comply with the Pentagon’s demands by February 27. Failing that, the administration would either resort to the Defense Production Act, a law that enables the government to compel private companies to serve national interests, or designate the company as a “risk to the supply chain.”

“That will be a real headache to untangle, and we will ensure they pay the price for forcing our hand,” a senior Defense official told Axios, referring to this possible designation.

On Anthropic’s side, tensions are being downplayed. A spokesperson described the meeting as “a continuation of good-faith discussions about our usage policy, to ensure that Anthropic can continue supporting the government’s national security mission within what our models can reliably and responsibly accomplish.” No comment was issued on the potential risk designation.

Competitors Rush Into the Breach

Meanwhile, competitors are positioning themselves. xAI, Elon Musk’s AI company, has just signed an agreement to integrate classified military systems under the clause of “full lawful use” that Anthropic explicitly refused. Google and OpenAI, whose models are already present in non-classified systems, are reportedly negotiating to take the same step. According to a source cited by Axios, Google’s Gemini already represents “a solid alternative” to Claude in several military-use cases.

A risk designation would deal Anthropic a heavy blow if it forced government partners to remove Claude from their infrastructure. But the startup could also reap an unforeseen benefit: being seen, by its customers and its talent, as the company that stood firm against pressure in the race to militarize AI. The deadline is approaching.

Article updated February 27 with Anthropic’s refusal position

Dawn Liphardt

I'm Dawn Liphardt, the founder and lead writer of this publication. With a background in philosophy and a deep interest in the social impact of technology, I started this platform to explore how innovation shapes — and sometimes disrupts — the world we live in. My work focuses on critical, human-centered storytelling at the frontier of artificial intelligence and emerging tech.