Meta is facing a class-action lawsuit over alleged "false advertising and misleading privacy statements," stemming from a recent foreign media report that Meta's outsourced reviewers in Kenya had seen images captured by users' smart glasses during their work. The reviewed material reportedly included footage of extremely private moments, such as using the toilet and engaging in sexual activity. The revelation not only shatters the tech giant's promises to protect AI privacy but also raises serious concerns about data security in wearable devices.
Outsourced review causes trouble: Kenyan employees saw "inappropriate" footage
The trigger for this controversy was an investigative report by the Swedish newspaper Svenska Dagbladet.
The report indicates that employees of Meta's local contractor in Kenya raised serious concerns. Their job was to "label objects" in videos captured by the smart glasses to help train AI models. However, the employees said they witnessed a large amount of extremely private material during the review process, including users' bathroom visits, sexual activity, and other private details that were never meant to be public.
Class action lawsuit launched: accusing Meta of concealing "human censorship" mechanisms.
Following revelations from outsourced employees, the law firm Clarkson officially filed a nationwide class-action lawsuit in San Francisco federal court on Wednesday.
The lawsuit names as plaintiffs two Ray-Ban Meta glasses users, residing in California and New Jersey respectively. They argue that they purchased the glasses based on their "trust" in Meta's marketing claims about privacy protection; had they known their field of vision would be reviewed by a human outsourcing company, they would never have bought the product.
The plaintiffs' complaint bluntly alleges that "this undisclosed human vetting process not only renders the privacy features of the Meta AI glasses substantially misleading, but also turns this personal device into a surveillance conduit." They further emphasize that this exposes consumers to unreasonable risks, including emotional distress, stalking, extortion, and identity theft.
The gray area of privacy terms: To use AI features, you have to give up your field of vision.
In response to the lawsuit, a Meta spokesperson confirmed that data from the smart glasses is indeed shared with human contractors "in certain circumstances," but declined to comment on the details of the lawsuit.
Meta's defense is that media files remain on the device unless users actively choose to share them. However, when people share content with Meta AI to "answer questions about the world around them," the company sometimes uses contractors to review this data to improve the experience, and claims to have implemented filtering measures to protect privacy.
However, there is a major catch here.
In fact, users cannot use the glasses' main "multimodal" features, such as Live AI real-time question answering, without transmitting images of their surroundings to Meta. When these AI features are enabled, the images the glasses capture are not saved to the phone's photo album; they are transmitted to the cloud, where human reviewers may see them as part of training Meta's AI models.
Worse still, while Meta's privacy policy mentions that data may be used for training, it avoids any explicit mention of "human contractors." The lawsuit also argues that Meta's so-called "anonymization safeguards" are unreliable in practice: reviewers could still see credit card numbers and clearly identifiable faces.