TLDR
- Pentagon officials stated that Claude AI’s constitutional framework contains policy preferences incompatible with defense operations.
- The Defense Department designated Anthropic as a supply chain threat, marking the first time an American tech firm received this classification.
- Pentagon vendors and contractors are now required to verify they’re not utilizing Claude for defense-related projects.
- The AI company filed litigation against the Trump administration on Monday, describing the designation as “unprecedented and unlawful” and saying hundreds of millions of dollars in contracts are at risk.
- Palantir’s CEO Alex Karp revealed his firm continues deploying Claude for American military applications despite the prohibition.
Earlier this month, the Defense Department took the historic step of labeling Anthropic a supply chain security concern, representing the first instance of an American enterprise receiving such a designation. Previously, this classification had been reserved exclusively for foreign threats.
During a Thursday appearance on CNBC’s “Squawk Box,” Defense Department Chief Technology Officer Emil Michael defended the controversial decision. He pointed to Claude’s foundational “constitution” — Anthropic’s proprietary framework for governing AI behavior — as creating inherent policy biases that could undermine military applications.
“We can’t have a company that has a different policy preference that is baked into the model through its constitution, its soul, its policy preferences, pollute the supply chain so our war fighters are getting ineffective weapons, ineffective body armor, ineffective protection,” Michael said.
Anthropic released the latest iteration of Claude’s constitutional document in January 2026. According to the company, this framework serves a “crucial role” in model development and “directly shapes Claude’s behavior.”
The supply chain classification requires all defense industry partners and suppliers to confirm they’re excluding Claude from Pentagon-related operations.
Michael characterized the action as “not meant to be punitive.” He also noted that government contracts represent only a “tiny fraction” of Anthropic’s total business.
Anthropic emerged in 2021 when a group of researchers departed from OpenAI. The company has successfully developed significant enterprise relationships, including initial Defense Department agreements.
Anthropic responded aggressively to the Pentagon’s action. The company launched legal proceedings Monday against the Trump administration, characterizing the supply chain classification as “unprecedented and unlawful.”
According to court documents, Anthropic argues it faces “irreparable” damage with hundreds of millions of dollars worth of business now uncertain.
Pentagon Denies Active Outreach to Companies
Michael disputed Anthropic’s allegations that government representatives were proactively contacting businesses to discourage Claude usage, dismissing them as “rumors.”
“The Department of War is not reaching out to companies to tell them what to do, so long as it’s not in our supply chain,” Michael said.
He also acknowledged that phasing out Claude won’t happen overnight: the Pentagon cannot “just rip out” deeply embedded AI systems the way it would uninstall standard software, he explained, and has established a structured transition plan.
Claude Still in Use for Military Operations
Notwithstanding the official designation, Claude remains operational in certain military capacities. CNBC has documented that the AI system supported American military actions in Iran.
On Thursday, Alex Karp, CEO of Palantir — among America’s most significant defense contractors — verified his organization continues leveraging Claude.