Meta faces a lawsuit in the United States over its Ray-Ban smart glasses, after an investigation by Swedish newspapers revealed that workers at a Kenya-based subcontractor were reviewing footage from customers’ devices, including images of nudity, sex, and people using the toilet.
The complaint was filed by plaintiffs Gina Bartone of New Jersey and Mateo Canu of California, represented by Clarkson Law Firm. They allege that Meta violated privacy laws and engaged in false advertising by marketing the glasses with phrases like “designed for privacy, controlled by you” and “built for your privacy,” while users’ footage was being reviewed by overseas contractors without clear disclosure.
What the Lawsuit Alleges
The plaintiffs say they bought the glasses based on Meta’s marketing and encountered no disclaimer or information that contradicted the advertised privacy protections. The suit names both Meta and its glasses manufacturing partner, Luxottica of America, accusing them of conduct that violates consumer protection laws.
Clarkson Law Firm, which has previously filed major cases against Apple, Google, and OpenAI, points to the scale of the issue. More than 7 million people purchased Meta’s smart glasses in 2025. According to the complaint, footage from those devices feeds into a data pipeline for review, and users cannot opt out.
The complaint focuses heavily on Meta’s advertising materials, showing examples of ads that promoted privacy settings and described an “added layer of security.” One ad read: “You’re in control of your data and content,” explaining that glasses owners could choose which content was shared with others.
Meta’s Position and the Disclosure Question
Meta told the BBC that it uses contractors to review content shared with Meta AI in order to improve user experience, and that this practice is explained in its privacy policy. The company pointed to its Supplemental Meta Platforms Terms of Service but did not specify exactly where that disclosure appears.
Reporters found that a reference to human review appeared in Meta’s U.K. AI terms of service. The equivalent U.S. policy states: “In some cases, Meta will review your interactions with AIs, including the content of your conversations with or messages to AIs, and this review may be automated or manual (human).”
Meta spokesperson Christopher Sgro issued a statement saying, “Unless users choose to share media they’ve captured with Meta or others, that media stays on the user’s device. When people share content with Meta AI, we sometimes use contractors to review this data for the purpose of improving people’s experience, as many other companies do. We take steps to filter this data to protect people’s privacy and to help prevent identifying information from being reviewed.”
Sources cited in the original investigation disputed that the face-blurring Meta described worked consistently.
Broader Regulatory Attention
The revelations prompted the U.K.’s Information Commissioner’s Office to open its own investigation into the matter. The lawsuit in the U.S. adds legal pressure on top of that regulatory scrutiny.
The controversy fits into a wider backlash against always-on consumer devices. One developer has already published an app capable of detecting when smart glasses are nearby, reflecting growing public unease about ambient surveillance embedded in everyday wearables.
Meta did not comment on the litigation itself.
This article is a curated summary based on third-party sources.