Pressure on xAI over its Grok chatbot’s image-generation capabilities had been building for months — following an episode in which the tool flooded X with explicit material involving both adults and minors, triggering federal scrutiny, a European Union probe, and a public warning from UK Prime Minister Keir Starmer.
Now the company faces a lawsuit directly from victims. Three Tennessee teenagers — two current minors and one adult who was underage when the alleged events occurred — filed a proposed class action on Monday accusing xAI, Elon Musk, and other company leaders of knowingly launching a feature that generated child sexual abuse material.
What the Lawsuit Alleges
The filing centers on Grok's "spicy mode," introduced last year. The plaintiffs allege that the perpetrator, who has since been arrested, used the feature to generate sexually explicit images and videos of the three victims.
One plaintiff, identified as “Jane Doe 1,” says she discovered last December that explicit, AI-generated material depicting herself and at least 18 other minors had been posted on Discord. According to the lawsuit, at least five files — one video and four images — showed “her actual face and body in settings with which she was familiar, but morphed into sexually explicit poses.”
The material did not stay contained. The perpetrator allegedly used Jane Doe 1’s images “as a bartering tool in Telegram group chats with hundreds of other users, trading her CSAM files for sexually explicit content of other minors,” the lawsuit states.
The complaint accuses xAI of having “failed to test the safety of the features it developed” and describes Grok itself as “defective in design.”
Legal and Legislative Backdrop
The case arrives against a shifting legal landscape. In January, the Senate passed a bill allowing victims of nonconsensual deepfakes to sue their creators directly. The Take It Down Act, signed by President Donald Trump in 2025, will criminalize the distribution of nonconsensual, AI-generated deepfakes when it takes effect in May.
X has stated that "anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content." The platform did not respond to a request for comment on the lawsuit.
Attorney Annika K. Martin of Lieff Cabraser, representing the plaintiffs, framed the case plainly: “These are children whose school photographs and family pictures were turned into child sexual abuse material by a billion-dollar company’s AI tool and then traded among predators. We intend to hold xAI accountable for every child they harmed in this way.”
The lawsuit seeks financial damages for victims affected by Grok's alleged generation of illegal images, and asks the court to order xAI to stop producing and distributing the material.