Generative AI has been moving steadily into military operations for several years, but the deployment of large language models for real-time targeting decisions represents a qualitatively different step.
OpenAI struck a deal roughly two weeks ago allowing the Pentagon to use its AI in classified environments. According to the report, Sam Altman stated the military cannot use the technology to build autonomous weapons — though the agreement effectively defers to the military’s own guidelines on such weapons, which are described as “quite permissive.” The company’s separate claim that the deal prevents domestic surveillance use appears, by the same account, equally dubious.
The timing matters. The agreement arrived as the US escalates strikes against Iran, a conflict where AI already plays a larger role than in any prior engagement. The operative question the source poses is not whether OpenAI's technology will enter this environment, but where and in what form.
Targeting Workflows and the Human-in-the-Loop Problem
A defense official described to the source how the technology might function in practice: an analyst loads a list of potential targets into the model, which then processes text, image, and video inputs to prioritize which targets to strike first, factoring in logistics data such as aircraft and supply locations. A human is meant to verify the outputs before action is taken.
That human-oversight framing, however, creates a tension the source identifies directly: if outputs require thorough manual review, the speed advantage disappears. The more likely use case involves layering a conversational interface on top of existing systems like Maven, the military's long-running AI platform for analyzing drone footage and flagging possible targets. OpenAI's models would allow analysts to query that underlying intelligence and receive prioritized recommendations in natural language, a capability being tested in an active conflict for the first time.
The integration timeline remains uncertain. The technology must be embedded within existing classified infrastructure before deployment, a process Elon Musk's xAI, which recently struck its own Pentagon deal, is expected to go through as well with its Grok model. Pressure to accelerate that process has grown since Anthropic refused to permit its AI to be used for "any lawful use," prompting President Trump to order the military to stop using it and the Pentagon to designate the company a supply chain risk, a designation Anthropic is contesting in court.
Counter-Drone Operations
A second, more concrete application stems from OpenAI's partnership with defense firm Anduril, announced at the end of 2024. Under that agreement, OpenAI would support time-sensitive analysis of drones attacking US forces to help neutralize them. A company spokesperson characterized this as consistent with OpenAI's policies against systems "designed to harm others," on the basis that the technology targets drones rather than people. Anduril supplies counter-drone systems to military bases globally, though the firm declined to confirm whether its systems are currently deployed near Iran. Neither company has provided updates on the project's development since the announcement.
As for OpenAI's broader motivations, the source notes two plausible explanations without resolving the question: financial pressure, given the company's substantial AI training costs and active search for new revenue streams including advertising, or genuine ideological conviction; Altman has repeatedly argued that liberal democracies must control the most powerful AI systems to remain competitive with China. The two explanations are not mutually exclusive.
This article is a curated summary based on third-party sources.