OpenAI’s Pentagon Deal Falls Short of Anthropic’s Hard Line

By alex2404

OpenAI announced on February 28 that it had secured a deal allowing the US military to use its technologies in classified settings, following months of tension that began when the Pentagon publicly reprimanded Anthropic for refusing similar terms. CEO Sam Altman acknowledged the negotiations were “definitely rushed.”

The company was quick to draw distinctions between its agreement and the terms Anthropic rejected. In a blog post, OpenAI stated that the deal prohibits use for autonomous weapons and mass domestic surveillance, and Altman wrote that OpenAI did not simply accept the same contract language Anthropic turned down. On the surface, it reads as a win on both business and ethical grounds.

The details, though, tell a more complicated story.

The core difference between the two companies’ approaches was not the outcome but the method. Altman wrote that “Anthropic seemed more focused on specific prohibitions in the contract, rather than citing applicable laws, which we felt comfortable with.” OpenAI’s deal essentially rests on an assumption that the Pentagon will comply with existing law, rather than building in contractual red lines that independently restrict behavior.

Jessica Tillipman, associate dean for government procurement law studies at George Washington University's law school, put it plainly: the published contract excerpt "does not give OpenAI an Anthropic-style, free-standing right to prohibit otherwise-lawful government use." The agreement only bars the Pentagon from using OpenAI's technology to violate laws and policies as they exist today, which is a narrower protection than it might appear.

The laws OpenAI cites range from a 2023 Pentagon directive on autonomous weapons, which sets guidelines rather than outright prohibitions, to the Fourth Amendment. Neither guarantees the outcomes OpenAI implies.

The Snowden Problem

The reason Anthropic attracted wide support, including from some OpenAI employees, is that many people do not believe current laws are adequate to prevent AI-enabled mass surveillance or autonomous weapons development. That concern has historical grounding. The surveillance programs exposed by Edward Snowden had been deemed legal by the agencies' own internal reviews and were ruled unlawful only after protracted legal battles. Relying on existing law as a safeguard is a thin promise to anyone who lived through that episode.

OpenAI has offered a counter-argument: if you doubt the government will follow the law, you should also doubt it would honor the specific contract prohibitions Anthropic was pushing for. That logic has some internal consistency, but it sidesteps a key point. Imperfect enforcement does not make contractual constraints worthless. Written terms still shape behavior, create oversight mechanisms, and carry political consequences when violated.

The Model-Level Defense

OpenAI claims a second layer of protection. Boaz Barak, an OpenAI employee Altman appointed to speak on the matter, wrote that the company can “embed our red lines — no mass surveillance and no directing weapons systems without human involvement — directly into model behavior.” The argument is that safety controls live in the model itself, not just on paper.
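
OpenAI has not published what "embedding red lines into model behavior" looks like in practice. As a rough, purely hypothetical sketch of the general pattern (a policy layer that screens requests before the model responds), the idea resembles the code below; every category, rule, and function name here is invented for illustration and is not drawn from OpenAI's actual systems.

```python
# Hypothetical illustration only: OpenAI has not disclosed how its
# model-level controls work. This sketches the general pattern of a
# policy layer that checks requests against prohibited-use categories
# before any model output is returned. All names here are invented.

PROHIBITED_CATEGORIES = {
    "mass_surveillance": ["bulk intercept", "monitor all civilians"],
    "autonomous_weapons": ["engage target autonomously", "fire without human"],
}

def classify_request(prompt: str) -> str | None:
    """Return the violated category, or None if the request looks permissible.

    A real system would use a trained classifier (or the model itself)
    rather than keyword matching; keywords keep this sketch self-contained.
    """
    lowered = prompt.lower()
    for category, markers in PROHIBITED_CATEGORIES.items():
        if any(marker in lowered for marker in markers):
            return category
    return None

def guarded_completion(prompt: str, model_call) -> str:
    """Refuse prohibited requests before the underlying model ever runs."""
    violation = classify_request(prompt)
    if violation is not None:
        return f"Request refused: conflicts with the '{violation}' red line."
    return model_call(prompt)

if __name__ == "__main__":
    echo_model = lambda p: f"[model answer to: {p}]"
    print(guarded_completion("Summarize this logistics report.", echo_model))
    print(guarded_completion("Bulk intercept all civilian phone traffic.", echo_model))
```

A production system would rely on trained classifiers and refusal training rather than keyword lists; the sketch only shows where the control lives, in the serving stack rather than in the contract.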

The company has not specified how these military-facing safety rules differ from those applied to regular users. It is also deploying these protections in a classified environment for the first time, with limited external visibility into how enforcement actually functions.

The net result is that the US military retains access to advanced AI for any lawful use, operating within boundaries set largely by laws that critics already consider insufficient. OpenAI secured the contract; whether it secured the principles it claims remains an open question.

