Debates over military use of artificial intelligence and the ethical limits of tech companies have been building for months. The first half of 2025 brought those tensions to a head in ways few anticipated.
The year’s defining story so far centers on Anthropic and its refusal to bend to the Pentagon’s contract demands. CEO Dario Amodei drew firm boundaries: the company’s AI would not be used for mass surveillance of Americans or to power autonomous weapons capable of striking without human oversight. The Department of Defense — rebranded by the Trump administration as the Department of War — pushed back, arguing it should have access to Anthropic’s models for any “lawful use.”
“Anthropic understands that the Department of War, not private companies, makes military decisions,” Amodei wrote in a public statement. “However, in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values.”
The Pentagon set a deadline. It passed without agreement.
Escalation on Both Sides
The fallout was swift. President Trump directed federal agencies to phase out Anthropic tools over a six-month transition period, publicly describing the company — valued at $380 billion — as a “radical left, woke company” in an all-caps social media post. The Pentagon then moved to designate the firm a “supply-chain risk,” a label typically reserved for foreign adversaries, which would bar any company working with Anthropic from doing business with the U.S. military. The company has since sued to challenge that designation.
Hundreds of employees at Google and OpenAI signed an open letter urging their own leadership to hold the same red lines on autonomous weapons and domestic surveillance. The gesture reflected how broadly the dispute had rippled through the industry.
Then OpenAI moved. The company announced it had reached a separate agreement permitting its models to be deployed in classified military contexts — a decision that surprised the tech community, given prior reports suggesting OpenAI would align with Anthropic’s position. The announcement spelled out the agreement’s own limits: “no autonomous weapons and no autonomous surveillance.” Hardware executive Caitlin Kalinowski resigned in response, stating the deal was “rushed without the guardrails defined.”
The Public Reacts
User behavior offered a pointed verdict. The day after OpenAI announced its military deal, ChatGPT uninstalls jumped 295% day-over-day. Anthropic’s Claude app climbed to the No. 1 spot in the App Store.
A separate but connected story also defined February. A vibe-coded AI assistant app called OpenClaw went viral, spawned multiple spinoffs, and accelerated the broader industry conversation around agentic AI — software that can take actions autonomously on a user’s behalf. Its rise added another layer to an already dense month of AI developments.
The immediate next step in the Anthropic-Pentagon dispute is the company’s ongoing legal challenge to the supply-chain risk designation.
This article is a curated summary based on third-party sources.