Pentagon Plans Classified AI Training for Military Models

By alex2404

The U.S. military has been expanding its use of artificial intelligence in classified operations, with models like Anthropic's Claude already deployed to answer questions in sensitive settings, including analysis related to targets in Iran. Now, according to the announcement, the Pentagon is planning to take that relationship considerably further.

Defense officials say the department intends to establish secure environments where generative AI companies can train military-specific versions of their models directly on classified data. That is a meaningful step beyond current use. Where AI has until now operated as a tool that queries sensitive information, training on it would embed that information into the models themselves — surveillance reports, battlefield assessments, and other sensitive intelligence becoming part of how the models think and respond.

The security implications are distinct from anything the industry has faced before.

Allowing private AI firms access to classified training data would bring them closer to national security material than any commercial arrangement previously has. The risk is not simply that data could be exposed during training — it is that once intelligence is absorbed into a model’s weights, it cannot be cleanly extracted or contained. A model trained on classified battlefield assessments carries those assessments in a form that is difficult to audit, recall, or isolate if something goes wrong.

That concern sits alongside a separate but related dispute already unfolding in Washington. U.S. officials, according to a separate report, are pushing to remove Anthropic from all government agencies over questions of trustworthiness in warfighting contexts, a disagreement that OpenAI has moved to take advantage of. The Pentagon's new training plan would expand classified AI access at precisely the moment that confidence in one of its leading AI partners is under scrutiny.

Elsewhere, the Pentagon is also moving to mass-produce a kamikaze drone called Lucas. The design is modeled on Iran's Shahed UAV, which has proven effective in active conflict, and follows its use in strikes against Iran.

On a separate front, DeepSeek appears to be quietly testing a next-generation AI model, with an official launch potentially imminent. Nvidia, meanwhile, has launched NemoClaw, an AI agent platform with built-in privacy and security features, and has received Beijing's approval to resume sales of H200 chips in China. Chinese AI stocks surged on the news.

The Pentagon's classified training program has no confirmed timeline in the source material. What is stated is that planning is underway and that the arrangement would represent a new category of access for commercial AI firms to national security data.


This article is a curated summary based on third-party sources.
