Pentagon Designates Anthropic a Supply Chain Risk Over Claude

By alex2404

The Pentagon’s push to build an “AI-first” warfighting force has collided directly with the usage limits some AI developers consider non-negotiable, producing the sharpest public confrontation yet between the U.S. military and a major AI company.

Anthropic confirmed Friday that Secretary of Defense Pete Hegseth has directed the Pentagon to designate it a “supply chain risk to national security,” a move the company called “legally unsound.” The company says the standoff traces to two specific exceptions it sought during months of contract negotiations: a prohibition on using its model, Claude, for mass domestic surveillance of Americans, and a prohibition on its use in fully autonomous weapons systems.

On Truth Social, President Donald Trump ordered all federal agencies to phase out Anthropic technology within six months. Hegseth followed with an X post directing all Pentagon contractors, suppliers, and partners to cease “commercial activity with Anthropic” effective immediately. The company responded that a designation under 10 USC 3252 is legally limited in scope — it can only affect Claude’s use within Department of War contracts, and cannot extend to other customers.

Where the Two Sides Draw Their Lines

The Pentagon’s position, articulated by chief spokesperson Sean Parnell, is that the department is simply asking to use Anthropic’s model “for all lawful purposes” and that its interest in conducting mass domestic surveillance or deploying autonomous weapons without human involvement is a “fake” narrative. A Pentagon memorandum issued last month stated that the department “must utilize models free from usage policy constraints that may limit lawful military applications,” framing safety guardrails as ideological interference.

Anthropic drew a different boundary. The company said it supports AI for “lawful foreign intelligence and counterintelligence missions” but argued that mass domestic surveillance is “incompatible with democratic values” and that AI-driven surveillance presents “serious, novel risks to our fundamental liberties.” On autonomous weapons, the company cited technical grounds — that current systems are not capable enough to support such applications safely and reliably.

The company also warned that the designation sets a dangerous precedent for any American company that negotiates with the government over contract terms.

Industry Fracture Lines

The dispute has split the technology sector. Hundreds of employees at Google and OpenAI signed an open letter urging their employers to stand with Anthropic. xAI CEO Elon Musk sided with the administration, posting that “Anthropic hates Western Civilization.”

The contrast with OpenAI’s posture is stark. CEO Sam Altman announced that OpenAI reached an agreement with the U.S. Department of Defense to deploy its models on a classified network, and asked the DoD to extend those terms to all AI companies. Altman did, however, align with Anthropic on principle, stating that prohibitions on “domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems” are among OpenAI’s most important safety commitments.

Anthropic closed with a pointed framing of the stakes: “No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons.”

Photo by Ian Hutchinson on Unsplash

This article is a curated summary based on third-party sources.
