X Bans Creators 90 Days for Undisclosed AI War Videos

By alex2404

X will ban creators from its revenue-sharing program for 90 days if they post AI-generated war footage without disclosing that it was made using artificial intelligence, the platform announced this week.

Nikita Bier, X’s head of product, outlined the rule on Wednesday, framing it as a measure to protect the integrity of information during armed conflicts. “During times of war, it is critical that people have access to authentic information on the ground,” Bier wrote. “With today’s AI technologies, it is trivial to create content that can mislead people.”

How the enforcement works

The policy ties monetization eligibility directly to AI disclosure. Creators who publish undisclosed AI-generated conflict footage face a 90-day suspension from X’s revenue-sharing program. The penalty goes beyond conventional moderation tools like content labels or removals, targeting creators’ income instead.
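The 90-day suspension described above is simple to express as a date calculation. The sketch below is purely illustrative; the function name and the idea of computing a fixed reinstatement date are assumptions, not X's published mechanics.

```python
from datetime import date, timedelta

# Length of the monetization suspension reported in the policy.
SUSPENSION_DAYS = 90

def suspension_end(flagged_on: date) -> date:
    """Return the date a creator would regain revenue-sharing eligibility
    after being flagged for undisclosed AI-generated conflict footage."""
    return flagged_on + timedelta(days=SUSPENSION_DAYS)

# Example: a creator flagged on March 1, 2025 would be reinstated May 30, 2025.
print(suspension_end(date(2025, 3, 1)))  # 2025-05-30
```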

Posts can be flagged through multiple channels: Community Notes reports, metadata analysis, or signals detected from generative AI tools. Repeat offenders risk permanent removal from the revenue-sharing program entirely.
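As a rough illustration of the metadata-analysis channel, an automated check might look like the sketch below. The tool names, metadata field names, and the use of the C2PA/IPTC `trainedAlgorithmicMedia` source-type marker are hypothetical stand-ins; X has not published its actual detection logic.

```python
# Hypothetical generator names for illustration only.
KNOWN_AI_GENERATORS = {"sora", "midjourney", "runway", "pika"}

def flags_as_ai_generated(metadata: dict) -> bool:
    """Return True if a video's metadata carries a recognizable
    AI-generation signal (assumed field names, for illustration)."""
    # C2PA content credentials can declare synthetic origin explicitly.
    if metadata.get("digitalSourceType") == "trainedAlgorithmicMedia":
        return True
    # Otherwise, match the creating tool's name against known generators.
    software = metadata.get("software", "").lower()
    return any(tool in software for tool in KNOWN_AI_GENERATORS)

print(flags_as_ai_generated({"software": "Sora 1.2"}))        # True
print(flags_as_ai_generated({"software": "Adobe Premiere"}))  # False
```

In practice such signals are easy to strip, which is presumably why the policy also leans on human channels like Community Notes reports.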

The policy applies specifically to videos depicting armed conflicts. X clarified that the rule is not a broader restriction on AI-generated content across the platform.

Why conflict footage specifically

The focus on war-related video reflects a growing concern about synthetic media during fast-moving geopolitical events, when fabricated footage can spread widely before corrections catch up. Bier’s statement pointed directly to that dynamic, citing the ease with which modern AI tools can generate convincing but false imagery.

By attaching a financial consequence rather than just a visibility penalty, X is betting that economic risk will be a stronger deterrent than a warning label or a temporary post removal.

The broader context

The announcement arrives against a backdrop of escalating tensions in the Middle East. On February 28, the United States and Israel conducted joint airstrikes on Iran. The strikes rattled financial markets briefly, with Bitcoin dipping to around $63,000 before recovering to near $70,000 at the time of reporting, according to CoinGecko.

AI is increasingly present in conflict environments beyond social media. On March 1, the US military used Anthropic’s Claude AI model to support intelligence analysis and targeting during operations connected to the Iran strikes, illustrating how rapidly the technology is being integrated into active military contexts.

X’s move reflects a broader tension platforms face: AI-generated content is cheap to produce, fast to spread, and difficult to distinguish from real footage without explicit disclosure. Linking that disclosure to revenue shifts the compliance burden squarely onto creators who profit from the platform.



This article is a curated summary based on third-party sources.
