Key Takeaways
- San Francisco federal judge granted preliminary injunction halting Pentagon’s prohibition of Anthropic’s Claude AI system
- Judge Rita Lin characterized the government’s action as “classic illegal First Amendment retaliation”
- Conflict emerged after Anthropic rejected Pentagon demands to permit Claude’s use in lethal autonomous weapons and mass surveillance operations
- Anthropic commanded 32% of enterprise AI market share in 2025, surpassing OpenAI’s 25%
- Seven-day stay of the injunction gives the federal government an opportunity to appeal
A federal court has intervened to stop the Trump administration from enforcing its prohibition on government agencies using Anthropic’s artificial intelligence platform, temporarily halting actions that threatened to eliminate billions in potential company revenue.
US District Court Judge Rita Lin, presiding in the Northern District of California, granted the preliminary injunction Thursday. The order includes a seven-day suspension period allowing federal authorities to pursue an appeal.
The legal battle originated from a July 2025 agreement between Anthropic and the Department of Defense. Under the proposed arrangement, Claude would have become the inaugural frontier AI system authorized for deployment on classified government networks.
By February 2026, negotiations had collapsed. Pentagon officials sought to restructure the agreement, insisting Anthropic permit military deployment of Claude “for all lawful purposes” without operational limitations.
Anthropic declined these terms. Company leadership maintained their technology must not support lethal autonomous weapons systems or enable widespread domestic surveillance of American citizens.
President Trump issued an executive directive on February 27 prohibiting all federal departments from utilizing Anthropic’s products. In a Truth Social post, he declared the company committed a “DISASTROUS MISTAKE trying to STRONG-ARM the Department of War.”
The Defense Department subsequently classified Anthropic as a national security supply chain threat. Anthropic responded by filing a lawsuit in Washington, DC federal court on March 9, contending Defense Secretary Pete Hegseth exceeded his legal authority.
Court Scrutinizes Government’s Justification
A 90-minute judicial hearing convened in San Francisco on March 24. Judge Lin pressed government attorneys regarding whether Anthropic faced punishment for publicly challenging Pentagon positions.
In her written decision, Lin determined the prohibition appeared unrelated to legitimate national security objectives. “If the concern is the integrity of the operational chain of command, the Department of War could just stop using Claude,” her ruling stated.
She further concluded the government’s actions seemed “designed to punish Anthropic” and represented conduct that was “arbitrary, capricious, and an abuse of discretion.”
During proceedings, an Anthropic legal representative emphasized that Pentagon officials retain the authority to evaluate any AI system before operational deployment. The representative added that Anthropic lacks the technical capability to remotely disable a deployed model, alter its functionality, or monitor military applications.
Arguments Presented by Each Party
A Department of Justice attorney contended that Anthropic undermined trust throughout contract discussions by attempting to influence Pentagon operational policies. Government counsel expressed concern about potential “future sabotage” risks from Anthropic.
Judge Lin dismissed this argument. She ruled the Justice Department lacked a “legitimate basis” for concluding Anthropic’s position on usage restrictions could transform the company into a security threat.
According to Menlo Ventures data, Anthropic controlled 32% of the enterprise AI marketplace in 2025, exceeding OpenAI’s 25% share. A comprehensive federal ban threatened to severely undermine that market leadership.
Anthropic said it was “grateful to the court for moving swiftly.” The company simultaneously filed a separate challenge in a Washington, DC appellate court alleging procurement law violations.
The case identifier is Anthropic v. US Department of War, 26-cv-01996, US District Court, Northern District of California.