The Pentagon’s conflict with Anthropic over battlefield AI has exposed a legal gap at the center of American surveillance law: no one can say with certainty whether the US government is permitted to use AI tools to conduct mass surveillance on its own citizens.
The dispute began when the Department of Defense sought to deploy Anthropic’s Claude to analyze bulk commercial data on Americans. Anthropic refused to allow its AI to be used for mass domestic surveillance or autonomous weapons systems. Negotiations collapsed, and the Pentagon designated the company a supply chain risk — a label typically applied to foreign entities that threaten national security.
Its rival, OpenAI, initially signed a deal permitting the Pentagon to use its AI for “all lawful purposes,” language critics said left domestic surveillance on the table. Users uninstalled ChatGPT in significant numbers. Protesters chalked messages around the company’s San Francisco headquarters. By Monday, OpenAI announced a revised agreement explicitly barring domestic surveillance and excluding intelligence agencies such as the NSA.
CEO Sam Altman argued the original contract simply failed to cite existing law that already prohibits such surveillance by the military. “The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement,” he wrote on X.
Anthropic CEO Dario Amodei took the opposite position. “To the extent that such surveillance is currently legal, this is only because the law has not yet caught up with the rapidly growing capabilities of AI,” he wrote in a policy statement.
What the Law Actually Covers
Both executives may be describing the same problem from different angles, according to legal experts. The answer to whether surveillance is lawful depends heavily on how surveillance itself is defined.
“A lot of stuff that normal people would consider a search or surveillance … is not actually considered a search or surveillance by the law,” says Alan Rozenshtein, a law professor at the University of Minnesota Law School.
Under current legal interpretations, publicly available information — social media posts, surveillance camera footage, voter registration records — is freely accessible to government agencies. So is data on Americans collected incidentally during surveillance of foreign nationals. The government can also purchase commercial data directly from private companies, including mobile location records and web browsing histories, bypassing the warrant requirements that would otherwise apply.
Agencies including ICE, the IRS, the FBI, and the NSA have all drawn on this commercial data marketplace. “There’s a huge amount of information that the government can collect on Americans that is not itself regulated either by the Constitution, which is the Fourth Amendment, or statute,” Rozenshtein says.
Laws Written Before the Internet
The legal framework governing surveillance was built for a different technological era. The Fourth Amendment was drafted when collecting information meant physically entering homes. The Foreign Intelligence Surveillance Act passed in 1978. The Electronic Communications Privacy Act followed in 1986. Most of the statutory architecture predates the internet economy that now generates continuous, detailed data trails on virtually every American.
AI changes the calculus not by creating new data, but by making previously unmanageable volumes of it actionable. Individually, a location ping or a browsing record may seem trivial. Aggregated and processed at scale, they can build detailed profiles of behavior, association, and belief — without triggering any existing legal threshold that would require a warrant.
The law, as it stands, has no clear answer for that.
This article is a curated summary based on third-party sources.