TL;DR
- Anthropic received an immediate “supply chain risk” classification from the Pentagon.
- This classification prohibits defense contractors from deploying Claude AI in Pentagon projects.
- Reports indicate Claude was deployed in U.S. military activities related to Iran and Venezuela.
- Anthropic’s CEO Dario Amodei plans to contest the classification through legal channels.
- Such designations are usually applied to foreign threats, such as Huawei from China.
The Pentagon has formally classified Anthropic as a supply chain risk, effectively preventing defense contractors from integrating the company’s Claude AI system into Department of Defense projects.
This immediate classification places Anthropic alongside a select group typically composed of foreign adversaries. Most notably, China’s Huawei received similar treatment in previous years.
Anthropic CEO Dario Amodei addressed the classification publicly, clarifying that its scope remains limited. According to Amodei, the restriction specifically targets Claude’s use in direct Pentagon contract work rather than all applications by companies holding military agreements.
“It plainly applies only to the use of Claude by customers as a direct part of contracts with the Department of War,” Amodei stated.
Yet Claude’s integration within military infrastructure runs deep. Sources with knowledge of the situation reveal that Claude has supported U.S. military activities in Iran and Venezuela, providing intelligence analysis and operational planning assistance.
Extricating the technology won’t be straightforward. Industry observers characterize the removal process as “painful” due to Claude’s extensive implementation across military systems.
Why the Dispute Happened
Tensions between Anthropic and the Pentagon have intensified over recent months, centered on a fundamental disagreement about safety protocols.
Anthropic maintains strict policies against permitting Claude to control autonomous weapon systems or conduct widespread domestic surveillance. Pentagon officials countered that technology usage should be permitted wherever U.S. law allows.
The disagreement became public knowledge earlier this year before reaching a critical point this week. An internal company communication, drafted last Friday and leaked Wednesday through The Information, intensified the situation. In the memo, Amodei implied Pentagon leadership held negative views of Anthropic partially because “we haven’t given dictator-style praise to Trump.”
Amodei issued an apology regarding the memo’s release. Company investors reportedly attempted damage control following the revelation.
Pentagon Chief Technology Officer Emil Michael posted Thursday evening on X, declaring that no active negotiations exist between the Department of Defense and Anthropic.
What Happens Next
According to Amodei’s statement, Anthropic and Pentagon officials had explored potential arrangements allowing continued military collaboration while maintaining safety protocols. Those discussions have not yielded an agreement.
Amodei announced Anthropic’s intention to pursue legal action contesting the classification.
Microsoft conducted its own review and determined that Claude remains accessible to customers via Microsoft 365, GitHub, and AI Foundry platforms — with the exception of Department of War contract work.
Palantir’s Maven Smart System, which delivers intelligence analysis and weapons-targeting capabilities to military clients, has developed numerous workflows utilizing Anthropic’s Claude technology.
Amazon, holding substantial investment in Anthropic, had not issued a statement as of publication.