Anthropic Denies It Can Sabotage Claude in Military Use

By alex2404

Anthropic says it cannot interfere with its AI model Claude once the US military has deployed it — and a senior executive put that claim in writing in a federal court filing.

Thiyagu Ramasamy, Anthropic’s head of public sector, wrote that the company “has never had the ability to cause Claude to stop working, alter its functionality, shut off access, or otherwise influence or imperil military operations.” He added that no back door or remote kill switch exists, and that Anthropic personnel cannot log into a Department of War system to modify or disable models during an operation.

The statement was filed in response to a government argument that Anthropic could disrupt active military operations by cutting off access to Claude or pushing harmful updates if the company disapproved of how the military used it.

According to the filing, updates to deployed systems would require approval from both the government and its cloud provider before Anthropic could implement them. The company also states it cannot access the prompts or data that military users enter into Claude.

How the Dispute Reached Court

Defense Secretary Pete Hegseth designated Anthropic a supply-chain risk earlier this month, blocking the Department of Defense from using the company’s software — including through contractors — for the coming months. Other federal agencies have also moved away from Claude. Anthropic filed two lawsuits challenging the constitutionality of the ban and is seeking an emergency order to reverse it. Customers have already begun canceling deals. A hearing is scheduled for March 24 in federal district court in San Francisco.

The Pentagon has been using Claude to analyze data, write memos, and help generate battle plans, according to the report. Government attorneys wrote this week that the department “is not required to tolerate the risk that critical military systems will be jeopardized at pivotal moments for national defense and active military operations.” As a short-term measure, the Defense Department says it is working with third-party cloud providers to ensure Anthropic leadership cannot make unilateral changes to the Claude systems currently in place.

The Negotiation That Collapsed

Sarah Heck, Anthropic’s head of policy, filed a separate declaration stating that the company had proposed a contract on March 4 that explicitly waived any right to veto military operational decisions. The proposal included language stating that the license does not “grant or confer any right to control or veto lawful Department of War operational decision-making.”

Anthropic also offered to accept contract language addressing its concern about Claude being used to carry out deadly strikes without human supervision, Heck claimed.

Negotiations broke down anyway.


This article is a curated summary based on third-party sources.
