Meta’s Smart Glasses Capture Videos Without Owner Knowledge

The things you record with your AI-powered Meta Ray-Ban glasses — yes, even those intimate moments where you think you’re alone — are probably being seen by strangers.

An investigation by Swedish outlets Svenska Dagbladet and Göteborgs-Posten found that offshore Meta workers in Kenya were asked to analyze intimate and even “disturbing” videos taken by glasses wearers, including videos taken in bathrooms, footage featuring nudity and sexual content, and images showing personal information like bank accounts. It’s part of a process known as data labeling, used to train AI models with footage first reviewed and annotated by humans so that the AI can understand what it’s “looking” at.

Workers told the publications that many of the videos appear to capture moments when users weren’t aware they were being recorded. The group works under Sama, the same Meta contractor facing a class action lawsuit on behalf of content moderators who allege they were exploited and forced to review traumatic content without proper working conditions.

— DiBenedetto, Chase. “Meta Workers Forced to Review Intimate Videos Taken by Ray-Ban Smart Glasses.” Mashable, 4 Mar. 2026.

At this point it would probably be safest to avoid anything produced by Meta, or any audio/video product from Silicon Valley. These companies have proven time and again to be untrustworthy, greedy, exploitative, unconcerned with individual privacy and worker protection, contemptuous of user non-consent, servile to power, and lacking any accountability. This behavior demands a boycott or regulation.

You Cannot Hide On The Internet

Burner accounts on social media sites can increasingly be analyzed to identify the pseudonymous users who post to them using AI in research that has far-reaching consequences for privacy on the Internet, researchers said.

The finding, from a recently published research paper, is based on results of experiments correlating specific individuals with accounts or posts across more than one social media platform. The success rate was far greater than existing classical deanonymization work that relied on humans assembling structured data sets suitable for algorithmic matching or manual work by skilled investigators. Recall—that is, how many users were successfully deanonymized—was as high as 68 percent. Precision—meaning the rate of guesses that correctly identify the user—was up to 90 percent.

— Goodin, Dan. “LLMs Can Unmask Pseudonymous Users at Scale with Surprising Accuracy.” Ars Technica, 3 Mar. 2026.

Your weekly reminder to always practice operational security and keep critical communications offline.
