
Anthropic to the Pentagon: "We would rather remove Claude than let you use it for mass surveillance"

Anthropic refuses Pentagon: "We cannot in good conscience accept their demands"
Anthropic, the company behind the AI tool Claude, has publicly pushed back against the U.S. Department of Defense (DoD) after the department demanded that it remove safeguards governing how Claude may be used. CEO Dario Amodei is clear: mass surveillance and fully autonomous weapons systems are non-negotiable.
What happened between Anthropic and the Pentagon?
The conflict escalated after a meeting between Amodei and Defense Secretary Pete Hegseth, where the Pentagon demanded that Anthropic accept what they called "all legal uses" of Claude. According to Anthropic, this phrase concealed two specific and controversial areas of use:
Mass surveillance of American citizens
Fully autonomous weapons systems – meaning AI-controlled weapons without human oversight
The meeting ended with a threat from the Pentagon to exclude Anthropic from the defense supply chain.
Amodei's response was direct: "These threats do not change our position: we cannot in good conscience accept their request."
Anthropic's red lines regarding Claude
Anthropic has long built its business around the concept of "responsible AI" – and this conflict with the Pentagon is essentially a test of how seriously that promise is taken.
Amodei argued on several points:
On mass surveillance: AI systems like Claude, if used without limitations, can compile scattered and seemingly innocent data into a detailed picture of a person's entire life – automatically and on a massive scale. Anthropic supports intelligence work within the framework of the law but believes that mass surveillance of the domestic population is incompatible with democratic values.
On autonomous weapons systems: Today's AI models, no matter how advanced, are, according to Amodei, simply not reliable enough to control weapons without human involvement. Without proper control mechanisms – which do not yet exist – such systems risk putting both soldiers and civilians in danger.
Anthropic also stated that it had offered to collaborate with the Pentagon on research and development to improve the reliability of AI systems in defense contexts. The offer was declined.
Pentagon's reaction and threat
The Department of Defense was quick to respond. Emil Michael, the U.S. Under Secretary of Defense for Research and Engineering, launched a personal attack on Amodei on X, claiming that he wants "to personally control the American military."
In an interview with CBS News, Michael argued that the uses Anthropic opposes are already prohibited by law and by the Pentagon's own guidelines – and that contractual language spelling this out should therefore be unnecessary.
The Pentagon is also said to have threatened to invoke the Defense Production Act, a law that gives the president the power to compel companies to meet national defense needs. Additionally, it threatened to label Anthropic a "supply chain risk" – a designation for companies deemed a security liability – which would effectively shut the company out of government contracts.
An anonymous former DoD official told the BBC that Hegseth's grounds for these actions were "extremely weak."
Background: Claude and Venezuela
Tensions were said to have been ongoing for months – even before it became known that Claude was used as part of a U.S. operation to capture Venezuela's President Nicolás Maduro. This suggests that the Pentagon's ambitions with the AI tool extend further than what has been publicly known.
What's next?
Anthropic stated that updated contract language from the Pentagon – received the day after the meeting – brought no real progress. The new wording, presented as a compromise, contained legal text that would still allow the safeguards to be overridden.
"Despite [the Department of Defense's] recent public statements, these narrow safeguards have been at the core of our negotiations for months," said a spokesperson for Anthropic.
If the Pentagon chooses to end the collaboration, Anthropic is prepared: "We will work for a smooth transition to another supplier."
What does this mean for the AI industry?
This is not just a battle between a tech company and a government agency. It is a crossroads for the entire AI industry.
Anthropic's stance is one of the most concrete examples to date of a leading AI company actually being willing to forgo revenue and government contracts to defend its ethical guidelines. It puts pressure on other players – and creates an industry debate about where the line is drawn for what AI tools should do.
For Swedish and European companies working with AI tools, this raises a directly relevant question: what demands should they place on the AI providers on which they build their systems?
Frequently asked questions about the Anthropic and Pentagon conflict
What is Anthropic? Anthropic is an American AI safety company founded in 2021, known for the AI model Claude. The company focuses on developing AI systems that are safe, reliable, and transparent.
What is Claude? Claude is Anthropic's AI assistant and competes with tools like ChatGPT and Gemini. Claude is used by businesses and individuals for everything from writing assistance to complex analysis.
Why is Anthropic refusing the Pentagon? Anthropic believes that using Claude for mass surveillance of citizens or for fully autonomous weapons systems goes against democratic values and the company's ethical guidelines – regardless of whether it is legal or not.
What is the Defense Production Act? It is a U.S. law that grants the president the authority to require private companies to deliver goods or services to the country's defense if deemed necessary for national security.
Can the Pentagon force Anthropic to cooperate? In theory, the Defense Production Act could give the president that authority. Independent legal experts, however, consider its application in this case to rest on weak legal footing.
Written by Aival.se
Date: 2026-03-05



