Tensions between OpenAI employees and company leadership over military partnerships had already been building for months before this week’s Pentagon deal drew public attention. What has received less scrutiny is how the US military gained access to OpenAI’s models even while the company’s own policies explicitly prohibited it.
In 2023, OpenAI’s usage policy contained a blanket ban on military access to its AI models. Yet that same year, some employees discovered the Pentagon had already begun experimenting with Azure OpenAI — a version of OpenAI’s models distributed through Microsoft — according to two sources familiar with the matter who spoke on condition of anonymity. Pentagon officials were also seen walking through the company’s San Francisco offices that year, the sources said.
The arrangement exploited an ambiguity that most employees at the time did not know how to resolve: did OpenAI’s usage policy apply to Microsoft? According to Microsoft, it did not. “Microsoft has a product called the Azure OpenAI Service that became available to the US Government in 2023 and is subject to Microsoft terms of service,” said Microsoft spokesperson Frank Shaw. The company declined to specify when the Pentagon was granted access, but noted the service was not approved for “top secret” government workloads until 2025.
Microsoft had been contracting with the Department of Defense for decades and was also OpenAI’s largest investor, holding a broad license to commercialize the startup’s technology. The relationship created a channel the public-facing policy did not account for.
A Policy Quietly Rewritten
In January 2024, OpenAI removed the blanket military ban from its usage policies. Several employees learned of the change not through internal communication but through a news article, according to the sources. Company leadership later addressed it at an all-hands meeting, describing how the firm would approach military work with care going forward.
The internal reaction was not uniform. Some employees believed the models were too unreliable to handle sensitive battlefield applications. Others felt the company was managing its military exposure responsibly. A current OpenAI researcher described the company’s approach to broad classified deployments as “measure twice, cut once.”
Anduril, Palantir, and Where Lines Were Drawn
In December 2024, OpenAI announced a partnership with Anduril to develop AI systems for “national security missions.” Before the announcement, employees were told the partnership would remain narrow in scope and cover only unclassified workloads — a deliberate contrast, the sources say, to Anthropic’s deal with Palantir, which extended to classified military work.
Palantir had approached OpenAI in the fall of 2024 about its “FedStart” program. OpenAI confirmed it turned the offer down, telling employees the arrangement would have been too high-risk. The company nonetheless works with Palantir in other capacities.
Around the time of the Anduril announcement, a few dozen employees created a public internal Slack channel to discuss concerns about the company’s military direction; an OpenAI spokesperson confirmed the channel’s existence. CEO Sam Altman acknowledged this week in a social media post that the most recent Pentagon deal looked “sloppy,” as employees publicly called on him to release more information about it. The Department of Defense did not respond to a request for comment.
This article is a curated summary based on third-party sources.