Pentagon Plans AI Model Training on Classified Military Data

By alex2404

The U.S. military’s use of commercial AI models in classified settings is already established practice — Anthropic’s Claude is currently deployed to analyze targets in Iran. The Pentagon is now considering a material expansion of that relationship.

According to a U.S. defense official who spoke on background, the Department of Defense is discussing plans to create secure, accredited data center environments where AI companies can train military-specific versions of their models directly on classified data. The distinction matters: existing deployments allow models to answer questions about classified material, but training on that data would embed sensitive intelligence — surveillance reports, battlefield assessments — into the models themselves.

The official says training on classified data is expected to improve model accuracy and effectiveness on specific military tasks. Before proceeding, however, the Pentagon first intends to assess model performance when trained on non-classified material, such as commercially available satellite imagery.

How the Architecture Would Work

Under the envisioned structure, a copy of an AI model would be paired with classified data inside a secure facility accredited to host government projects. The Department of Defense would retain ownership of the data. Personnel from AI companies holding appropriate security clearances could access the data, but only in rare circumstances, according to the official.

The Pentagon has already reached agreements with OpenAI and Elon Musk’s xAI to operate their models in classified environments. Infrastructure for secure AI querying exists through Palantir, which has won contracts to build systems allowing officials to query AI models on classified topics without routing data back to the AI companies. Applying that infrastructure to training, however, is a distinct and newer challenge.

The Security Risk That Concerns Experts

Aalok Mehta, director of the Wadhwani AI Center at the Center for Strategic and International Studies and a former AI policy lead at both Google and OpenAI, identifies the primary risk as internal rather than external. Classified information absorbed during training could be resurfaced to any user of the model — a serious problem when multiple military departments operating at different classification levels share the same system.

“You can imagine, for example, a model that has access to some sort of sensitive human intelligence — like the name of an operative — leaking that information to a part of the Defense Department that isn’t supposed to have access to that information,” Mehta says. That scenario is difficult to fully contain when a single model serves multiple groups within the military.

The risk of classified data escaping to the broader internet or back to the AI companies is more manageable, he says: “If you set this up right, you will have very little risk of that data being surfaced on the general internet or back to OpenAI.”

The push toward classified AI training sits within a broader acceleration: a January memo from Defense Secretary Pete Hegseth directed the Pentagon to become an “AI-first warfighting force.” Current applications already range from generative AI that ranks and recommends airstrike targets to AI that drafts administrative contracts and reports — with the classified training initiative representing the next step in that trajectory.


This article is a curated summary based on third-party sources.
