The U.S. Defense Department has formally designated Anthropic a “supply-chain risk,” a move that bars defense contractors from working with the government if their products incorporate Claude, Anthropic’s AI system. The designation, first reported by The Wall Street Journal, marks the first time an American company has publicly received a label typically reserved for foreign firms with ties to adversarial governments.
Anthropic CEO Dario Amodei confirmed in a company blog post Thursday evening that the Pentagon had delivered the official notification on Wednesday. “As we wrote on Friday, we do not believe this action is legally sound, and we see no choice but to challenge it in court,” Amodei wrote.
What Triggered the Designation
The conflict centers on two specific use cases Anthropic has refused to permit: autonomous lethal weapons operating without human oversight, and mass surveillance. The Pentagon argued that Anthropic's insistence on controlling how the government uses Claude places excessive authority in private hands. Anthropic, for its part, did not trust the government to honor those limits.
Negotiations deteriorated over several weeks, with Defense Secretary Pete Hegseth repeatedly threatening the supply-chain risk label if Anthropic did not comply. When Anthropic formally announced last Thursday that it would not relax its acceptable use policy, the Pentagon followed through.
Scope of Enforcement Remains Unclear
When Hegseth announced his intent to apply the designation on Friday, he stated that any company engaged in “any commercial activity” with Anthropic, even outside Pentagon work, could have its defense contracts cancelled. Anthropic immediately pushed back, calling that scope of enforcement illegal.
How broadly the Pentagon will actually apply the designation remains an open question. Both Hegseth and President Donald Trump have set a six-month deadline for Anthropic to remove Claude from government systems.
Complicating the Exit
That six-month window may prove difficult to meet. Following the U.S. military strike on Iran over the weekend that killed Supreme Leader Ayatollah Ali Khamenei, reports indicated that Claude-powered intelligence tools played a significant role in the mission’s execution. Removing a deeply integrated system from active military operations carries its own operational risks, adding a layer of practical complexity to what has become a legal and political confrontation.
Defense contractors have already begun distancing themselves from Claude in anticipation of broader enforcement, according to separate reports. The speed of that industry response suggests the designation carries immediate commercial weight, regardless of how courts ultimately rule.
Anthropic’s decision to litigate rather than concede sets up what could become a precedent-setting case on the limits of government authority over private AI companies and their usage policies. No court date has been announced.