Enterprise contracts now account for roughly 80 percent of Anthropic's revenue, and the company just closed a $30 billion funding round valuing it at $380 billion. The pressure to serve institutional clients — including defense clients — has never been greater, which makes the standoff now taking shape with the Pentagon unusually consequential.
The conflict centers on a potential designation: the U.S. Department of Defense has signaled it may label Anthropic a “supply chain risk” — a classification typically reserved for foreign adversaries — unless the company removes its restrictions on military use of its AI models. According to the report, such a designation could effectively compel Pentagon contractors to remove Claude from sensitive work.
The friction traces back to a specific incident. On January 3, U.S. special operations forces raided Venezuela and captured Nicolás Maduro. Reporting from The Wall Street Journal indicated that forces used Claude during the operation through Anthropic's partnership with defense contractor Palantir. When an Anthropic executive contacted Palantir to ask whether the technology had been used in the raid, the inquiry triggered immediate concern inside the Pentagon. The company has disputed that the outreach was meant to signal disapproval of any specific operation.
Secretary of Defense Pete Hegseth is described as “close” to severing the relationship entirely. A senior administration official told Axios: “We are going to make sure they pay a price for forcing our hand like this.”
Where the lines are drawn
Anthropic has established two explicit limits: no mass surveillance of Americans, and no fully autonomous weapons. CEO Dario Amodei has stated the company will support “national defense in all ways except those which would make us more like our autocratic adversaries.” The Pentagon, for its part, has demanded that AI be available for what it calls “all lawful purposes.”
That gap is not easily bridged. Anthropic was founded in 2021 by former OpenAI executives who believed the broader industry was not treating safety with sufficient seriousness, and the company positioned Claude explicitly as the ethical alternative in a competitive field. Other major labs — OpenAI, Google, and xAI — have agreed to loosen safeguards for use in the Pentagon's unclassified systems, though their tools are not yet running inside classified military networks.
The capability gap that makes this urgent
The dispute arrives as Anthropic's models have advanced sharply. In late 2024, models capable of controlling computers could barely operate a browser. Sonnet 4.6, released 12 days after the February 5 launch of Claude Opus 4.6, can now navigate web applications and fill out forms at what the company describes as human-level capability. Both models carry working memory large enough to hold a small library. Opus 4.6 adds the ability to coordinate teams of autonomous agents — multiple AIs dividing and executing tasks in parallel.
Those capabilities — reasoning, planning, acting independently at scale — are precisely what the military wants running inside classified networks. Whether a company built on safety constraints can maintain those constraints once its most powerful systems are embedded in that environment is the question the current standoff has forced into the open.
According to the report, Anthropic made Claude available on a Palantir platform with a classified cloud security clearance in late 2024, a step that placed the technology deeper inside defense infrastructure than it had previously been.
This article is a curated summary based on third-party sources.