Meta Ray-Ban Smart Glasses Privacy Scandal: Workers in Kenya Allegedly Viewing Sensitive Footage
A major investigation has raised global privacy concerns surrounding the popular smart eyewear created by Meta in partnership with Ray-Ban. Reports suggest that footage captured by the company’s AI-powered smart glasses may be reviewed by contractors in Kenya—sometimes including highly sensitive and intimate moments.
The revelation has triggered scrutiny from regulators and privacy advocates, who question whether users and bystanders fully understand how their data is being used to train artificial intelligence systems.
Investigation Reveals Sensitive Footage Reviewed by Contractors
According to a joint investigation by Swedish newspapers, footage recorded through the smart glasses can be routed to contractors working for the Nairobi-based data annotation firm Sama.
These workers reportedly review and label video clips to help improve the AI assistant integrated into the glasses. The device allows users to activate recording or AI analysis using voice commands like “Hey Meta.”
However, some workers claimed they encountered extremely private footage during the review process. These clips allegedly included people changing clothes, handling sensitive personal information, or engaging in intimate activities.
Contractors told investigators that the individuals being recorded often appeared unaware their images were being captured, let alone analyzed by human reviewers overseas.
Meta has defended its practices, saying the review process is disclosed in its privacy policy and helps improve the user experience.
Meta states that footage is filtered before reaching human reviewers and that privacy protections—such as automatic face blurring—are applied to sensitive content. The company also says that recordings are reviewed only when users intentionally share them with Meta’s AI system.
Despite these safeguards, some former employees and contractors reportedly told investigators that automated filtering tools do not always work as intended, especially in poor lighting or complex environments.
Meta has also emphasized that contractors operate in controlled environments where personal devices are prohibited, reducing the risk of data leaks.
Regulators and Privacy Experts Raise Legal Questions
The investigation has prompted concerns among regulators. The UK’s data watchdog, the Information Commissioner’s Office, said it plans to seek clarification from Meta regarding the company’s data-handling practices.
Privacy experts point out that many individuals captured by the glasses are bystanders who never consented to their data being recorded or used for AI training.
European privacy rules under the General Data Protection Regulation generally require a lawful basis—often explicit consent—before personal data can be collected and processed. This raises questions about whether recordings involving bystanders meet those legal standards.
Another issue involves cross-border data transfers. Kenya does not currently hold an EU adequacy decision recognizing its data protection regime as equivalent to European standards, potentially complicating the legality of transferring European user data to overseas contractors.
The controversy arrives at a time when AI-powered wearables are expanding rapidly. Sales of the Meta Ray-Ban glasses have surged in recent years, with millions of units reportedly sold worldwide.
The glasses resemble regular sunglasses but include built-in cameras, microphones, and an AI assistant that can answer questions about the user’s surroundings. This design helped avoid the social stigma that once affected devices like Google Glass.
While these innovations promise convenience and accessibility—especially for visually impaired users—they also highlight the growing tension between technological advancement and personal privacy.
As regulators investigate and public debate intensifies, the future of AI-enabled wearable technology may hinge on how companies address transparency, consent, and data protection.