So the world can label them as Hentai glasses and move on
No one will read it, but even if you do, most of the time the FOMO or sunk cost fallacy effect will make you go on anyway. And then it is a free pass for them.
Tbh the only thing I really use the glasses for is listening to music or talking on the phone - so basically how you'd use AirPods. I don't use AirPods because I had an ear injury that prevents me from using them in my left ear, so these glasses were kinda nice for that. I really wish they didn't have a camera though, because I always feel compelled to remove them when I interact with people.
I also have to add that the quality is mediocre. They're a month old and the case has problems charging sometimes, and one of the screws is always coming loose at a hinge no matter how often I retighten that side.
I don't agree that the responsibility to comply with Swedish law is on the wearer. This should motivate prosecutors to immediately order raids to secure any data relating to this processing.
I also think the Swedish camera surveillance law is applicable, and there's a deceptive element since the cameras are disguised as glasses.
> “The algorithms sometimes miss. Especially in difficult lighting conditions, certain faces and bodies become visible”.
Right, “difficult lighting conditions,” not sure when we’d run into those in situations where we might be concerned with privacy. A 97% success rate looks good on paper.
That's my default assumption.
EDIT: Wait, is this when you use the "ask Meta" feature? I do expect that to send all the clips to a server for an LLM to process, it's not done on-device. It's not clear to me whether it's that or just all videos/photos you record with the glasses.
1. Debugging data for troubleshooting.
2. Analytics data for making the product better.
3. Bugs that collect your info when they shouldn't.
4. Bugs from third-party vendors, if the company uses those.
5. Insecure processes. Getting access to private content within the company is trivial due to the coarse permission model.
Source: I worked at two well-known social media companies, on Trust & Safety and data infra teams.
A better pattern would be tiered modes with explicit UX: local-only capture, cloud processing without retention, and opt-in retention/training with visible status. If the product can’t technically support that separation today, that limitation should be stated plainly in setup, not buried in policy docs.
I wish this article (or Meta) were a bit clearer about the specific connection between the device settings and use and when humans get access to the images.
My settings are:
- [OFF] "Share additional data" - Share data about your Meta devices to help improve Meta products.
- [OFF] "Cloud media" - Allow your photos and videos to be sent to Meta's cloud for processing and temporary storage.
I'm not sure whether my settings would prevent my media from being used as described in the article.
Also, it's not clear which data is being used for training:
- random photos / videos taken
- only use of "Meta AI" (e.g., "Hey Meta, can you translate this sign")
As much as I've liked my Meta Ray-Bans, I'm going to need clarity here before I continue using them.
TBH, if it were only use of Meta AI, I'd "get it" but probably turn that feature off (I barely use it as-is).
But I'm a bit confused by the article because it describes things that seem really unlikely given how the glasses work. They shine a bright light whenever recording. Are people really going into bathrooms, having sex, sharing rooms with people undressed while this light is on? Or is this deliberate tampering, malfunctioning, or Meta capturing footage without activating the light (hard to believe even Meta would do that intentionally)?
Meta aims to introduce facial recognition to its smart glasses while its biggest critics are distracted, according to a report from The New York Times. In an internal document reviewed by The Times, Meta says it will launch the feature “during a dynamic political environment where many civil society groups that we would expect to attack us would have their resources focused on other concerns.”
https://www.theverge.com/tech/878725/meta-facial-recognition...

I just don't see a world where that doesn't happen with Meta glasses.
The creepiness concern is real, but I think people misplace where the actual surveillance happens. The most consequential stores of personal data aren't ad networks; they're things like banks, hospitals, insurers, and telecoms. These institutions hold information about your health, finances, movements, and relationships, indexed and searchable by employees you've never met, governed by policies you've never read.
Realistically, there’s very little an individual can do to completely opt out.
My take is: if the main outcomes are that I get shown ads for things I don’t need and my facecomputer knows the difference between a fork and a spoon… I… I can live with that.
I care about the innocent people whose privacy is invaded by people who buy these glasses.
If you're blind, it's of course understandable but that's pretty much it in terms of cases in which I would consider the glasses acceptable to wear in public.
And you're still forced to carry a smartphone anyway with these glasses since they require internet connection.
Is this fashion, or something I'm not aware of ? They look horrendous to me.
And despite this, there is no strong will to detach from what they produce - whether at the beginning, or later when it is considered part of the cultural fabric. That's how good their tactics are.
And for the pay one gets working for them - screw the world! I won't use it anywhere near my loved ones - but I will build it.
You can still record stuff without spyglasses. People do that on YouTube too, e.g. first amendment audits. It's not that different from the spyglasses, except that you can cut Meta out of the process (admittedly YouTube creates another problem, which is called Google; it would be nice if we could have platforms without a corporate overlord, but the financial aspect may still be an issue that requires solving. I don't have a good way to solve that, as I also have a 100% zero-ads policy, aka mandatory uBlock Origin. And Google has declared total war against uBlock Origin, as we all know.)
Yesterday I saw an Instagram reel of a guy asking "what am I looking at" while between his girlfriend's legs. Congrats, some Indian guy saw her too.
The core piece of information that is missing or unclear is whether this collection happens also when not actively and knowingly sending data to the cloud.
The glasses let me record videos locally - can Facebook see any frames of them? This is the question that needs to be answered. Everything else is nonsense like "omg Amazon hears what I tell Alexa".
Because I didn't think the data was uploaded to Meta by default when you take a video with the Ray-Bans.
Moreover, I didn't think those glasses could record more than 2.5 minutes.
The point still remains: the devil is in the details of the "privacy" policy.
I missed Facebook for about a day, and after that I barely even thought about it. In 2021 I bought an Oculus Quest 2, which at the time required a Facebook account so I made a throwaway one, but other than that I haven't been on Facebook (and I haven't even touched my Quest 2 in three years).
Point being, it's really not hard to get off Facebook and to ditch Meta products. More people should delete it.
Absolutely crazy that a Meta employee is saying not to buy them. Everyone should know this right now.
>since they require internet connection.
Only the AI features require internet. You can technically take pictures and video without carrying around your phone, but realistically people are going to carry their phones with them.
Stop thinking like an end user and think like a Meta shareholder.
Meta doesn't own smartphone hardware or operating systems. Apple and Android locked that market up. But if they can create a new market and own that, then imagine all the data they can harvest!
We will shame hard anyone who uses this sh1t.
Those videos can also be used to track people. IMHO each Tesla owner sending video data to Tesla's data centers is violating privacy laws!
When you use Meta's products and services you are tagged, tracked, and commodified like an animal. You are cattle.
The question isn't whether or not Meta's AI smart glasses raise data privacy concerns.
The question is why use anything from Meta in the first place?
The advertisement is everywhere. The ice hockey player Peter Forsberg is trying on a pair of black glasses. In the viral clip he talks to the glasses, asking who Sweden's greatest hockey player of all time is.
They are not just any glasses.
They are Facebook owner Meta’s new AI glasses.
The glasses are marketed as an all-in-one assistant that helps the wearer excel at work, capture beautiful sunsets, act as a travel guide and translate foreign languages in real time.
So powerful that they are meant to compete with smartphones, while the user remains in control of their privacy.
Reality would prove to be different.
It is stuffy at the top of the hotel in Nairobi, Kenya. The grey sky presses the heat against the windows. The man in front of us is nervous. If his employer finds out that he is here, he could lose everything.
He is one of the people few even realise exist – a flesh-and-blood worker in the engine room of the data industry. What he has to say is explosive.
“In some videos you can see someone going to the toilet, or getting undressed. I don’t think they know, because if they knew they wouldn’t be recording.”
In Svenska Dagbladet and Göteborgs-Posten’s investigation, the people behind Meta’s smart glasses testify to the hidden stream of privacy-sensitive data that is fed straight into the tech giant’s systems.
It begins on the other side of the world.
September 2025 in Menlo Park, the heart of Silicon Valley. Mark Zuckerberg, founder of Meta, the company behind Facebook, Instagram and WhatsApp, is about to present the initiative he hopes will define the company’s future. On gigantic screens, the audience can see him sitting backstage, leaning over a script and rehearsing.
Mark Zuckerberg presenting what he hopes to be the future of Meta. Photo: Nic Coury/AP
They lie in front of him on the table.
“Meta Ray-Ban Glasses”.
He stands up after a while, and puts the glasses on.
The perspective shifts – on the screens, the audience sees the world through his eyes. Zuckerberg walks through the corridors, towards the stage. On the way, he is met with cheers, fist bumps and a nod from the international music star Diplo.
On stage, Zuckerberg preaches. He explains that his revolutionary glasses are to be a kind of all-in-one assistant with everything from live translations to facial recognition.
He concludes by thanking his American team. But what is shown in Menlo Park is just as much the result of a completely different type of work, far away from Silicon Valley.
The company they work for is called Sama and is a subcontractor to Meta. Here in Kenya’s capital, thousands of people train AI systems, teaching them to recognise and interpret the world.
They are called data annotators, and they are the manual labourers of the AI revolution. On the screens they draw boxes around flower pots and traffic signs, follow contours, register pixels and name objects: cars, lamps, people. Every image must be described, labelled and quality assured.
All to make the next generation of smart glasses a little more intelligent – a little more human.
It is an uncomfortable truth for tech giants: the AI revolution is to a large extent built on labour in low-income countries. What we call “machine learning” is often the result of human hands.
In the multi-million city of Nairobi, SvD and GP meet Sama workers at a nondescript hotel, at a safe distance from Sama. Some come straight from a night shift, others are preparing for a ten-hour shift in front of the screens.
The employees have signed extensive confidentiality agreements – if they break them they can lose their jobs – and be thrown back into a life without income, often to the slums. Therefore we publish no names.
The workers in Kenya say that it feels uncomfortable to go to work. They tell us about deeply private video clips, which appear to come straight out of Western homes, from people who use the glasses in their everyday lives.
Several describe video material showing bathroom visits, sex and other intimate moments.
Another worker talks about people coming out of bathrooms.
“Someone may have been walking around with the glasses, or happened to be wearing them, and then the person’s partner was in the bathroom, or they had just come out naked”, an employee says.
Do you sometimes feel that you are looking straight into other people’s private lives?
“When you see these videos, it feels that way. But since it is a job, you have to do it. You understand that it is someone’s private life you are looking at, but at the same time you are just expected to carry out the work. You are not supposed to question it. If you start asking questions, you are gone.”
“We see everything – from living rooms to naked bodies. Meta has that type of content in its databases. People can record themselves in the wrong way and not even know what they are recording. They are real people like you and me”.
The workers describe videos where people’s bank cards are visible by mistake, and people watching porn while wearing the glasses. Clips that could trigger “enormous scandals” if they were leaked.

“There are also sex scenes filmed with the smart glasses – someone is wearing them having sex. That is why this is so extremely sensitive. There are cameras everywhere in our office, and you are not allowed to bring your own phones or any device that can record”, an employee says.
The data annotators also work with transcriptions, where they are to check that the AI assistant in Meta’s glasses has answered users’ questions correctly.
“It can be about any topics at all. We see chats where someone talks about crimes or protests. It is not just greetings, it can be very dark things as well”, one of the workers says.
Another recounts a text where a man described a woman he wanted to have sex with:
“He commented on her body and said that he liked her breasts.”
2025 becomes a breakthrough year for Meta Ray-Ban, which is manufactured in collaboration with the eyewear giant EssilorLuxottica. From two million smart glasses sold in 2023 and 2024 combined, sales triple to seven million units.
In Sweden, Synsam is one of the major retailers, as is the chain Synoptik. Some independent opticians also carry the glasses.
Reporters Ahmed Abdigadir and Julia Lindblom outside Synsam’s flagship store in Gothenburg. Photo: Olof Ohlsson
Throughout the autumn of 2025, we visit ten retailers in Stockholm and Gothenburg to ask the sales staff how the data from the Meta glasses is processed. Several of the sales people give us reassuring answers. We are told that we ourselves can choose exactly what data is shared with Meta.
“Nothing is shared with them (Meta). That was a big concern for me as well. Are they going to get access to my data, that is a bit scary, but you have full control”, says an employee at a Synsam store.
Others are more uncertain.
“To be completely honest, I don’t know where the data goes, or if they take data at all”, says a shop assistant at an independent optician. Another salesperson points out that the customer can always choose not to share their data:
“No, it is completely fine – everything stays locally in the app.”
We buy our own pair of glasses at Synsam’s flagship store in Gothenburg.
At the Göteborgs-Posten newsroom we begin installing them. The glasses are to be connected to an app called Meta AI. Only after several approvals in the app is it possible to get started with the AI function. One of the steps concerns whether we want to share extra data with Meta to help improve their products. We choose “no”.
The AI functions are activated with the voice command “Hey Meta”. Within ten minutes of the package being opened we begin asking questions. The glasses answer immediately, in English.
Together with a system developer at Svenska Dagbladet we try to find out whether what the salesperson said is correct, that we can choose not to share our data with Meta. We try to use the glasses without internet connection turned on.
But that makes it impossible to get help interpreting what we see. The glasses urge us to turn on the connection. When we then analyse the network traffic from the app, we see that the phone has frequent contact with Meta servers in Luleå, Sweden, and in Denmark.
In order to answer questions and interpret what the camera sees, the glasses require that data be processed via Meta’s infrastructure – it is not possible to interact with the AI solely locally on the phone.
What the salespeople say about nothing being shared onwards does not appear to be correct.
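The gist of that traffic check can be reproduced at home: capture the phone’s DNS queries (for example with tcpdump or a Pi-hole log) and flag hostnames that fall under Meta-owned domains. A minimal sketch, assuming the observed hostnames have been exported to a list (the domain set and sample hostnames below are illustrative assumptions, not the investigation’s actual capture):

```python
# Hypothetical sketch: flag observed hostnames that belong to
# Meta-owned domains. Domain list and samples are illustrative.

META_DOMAINS = {"facebook.com", "fbcdn.net", "meta.com",
                "instagram.com", "whatsapp.net"}

def is_meta_host(hostname: str) -> bool:
    """True if hostname equals or is a subdomain of a Meta-owned domain."""
    labels = hostname.lower().rstrip(".").split(".")
    # Check every suffix of the hostname against the known-domain set,
    # so "edge-mqtt.facebook.com" matches "facebook.com".
    return any(".".join(labels[i:]) in META_DOMAINS for i in range(len(labels)))

observed = ["graph.facebook.com", "edge-mqtt.facebook.com", "example.org"]
meta_hosts = [h for h in observed if is_meta_host(h)]
print(meta_hosts)  # the two facebook.com hosts
```

The suffix check matters: a plain substring match would also flag unrelated domains like "notfacebook.company.example".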
We contact Synsam and Synoptik for an interview about what training the sales staff receive and how it can be that the answers they give are so different. Synsam responded in writing that its role is to inform customers about the applicable terms and to provide internal training, but that responsibility for complying with Swedish law and Meta’s terms ultimately rests with the wearer. Synoptik responded in similar terms, saying its staff are trained in ethics and emphasise the user’s responsibility.

With the glasses we bought there is also a manual with a QR code that leads to Meta’s privacy policy for wearable products. This in turn links to other pages, such as the Terms of Use for Meta’s AI services.
At first glance, it appears that we have significant control over our data. It states that voice recordings may only be saved and used for improvement or training of other Meta products if the user actively agrees.
But for the AI assistant to function, voice, text, image and sometimes video must be processed and may be shared onwards. This data processing is done automatically and cannot be turned off.
We read further on in the Terms of Use for Meta’s AIs. The terms state that “in some cases, Meta will review your interactions with AIs, including the content of your conversations with or messages to AIs, and this review can be automated or manual (human).”
It also states that the AIs may store and use information shared with them, and that the user should not share information “that you don’t want the AIs to use and retain, such as information about sensitive topics”.
The user is given no choice; it is mandatory to participate.
It is not specified how much data may be analysed or for how long it may be stored. Nor is it specified who is given access to the data.
Data experts we contact in Sweden and abroad question how aware users really are that their data may be used to train Meta’s AI.
The experts point to an unclear boundary between what is shared voluntarily and what is collected automatically – a boundary that can be difficult to detect.
When Meta offers services within the EU, the company is subject to the General Data Protection Regulation (GDPR), which requires transparency about how personal data is processed and where that processing takes place.
Kleanthi Sardeli is a data protection lawyer at None Of Your Business (NOYB), a non-profit organisation in Vienna that has brought several legal cases against Meta. They are currently reviewing the new smart glasses.
She says there is a clear transparency problem: users may not realise that the camera is recording when they begin speaking to the AI assistant.
Kleanthi Sardeli. Photo: Private
“If this happens in Europe, both transparency and a legal basis for the processing are lacking,” she says. She believes that explicit consent should be required when data is used to train artificial intelligence.
“Once the material has been fed into the models, the user in practice loses control over how it is used,” Sardeli says.
Petter Flink is an IT and security specialist at IMY, the Swedish Authority for Privacy Protection. It is the authority that is to protect Swedes’ personal data and privacy.
According to him, few people truly consider what they are agreeing to when they start using services such as Meta’s glasses.
“The user really has no idea what is happening behind the scenes”, says Petter Flink.
Petter Flink. Photo: Daniel Larsson
At the same time, the technology has become both more accessible and more enticing, with new functions that quickly reach a broad audience.
He emphasises that the data Meta collects is more valuable than the glasses themselves. The more details that can be extracted from the user’s everyday life, the more accurately advertising and services can be targeted at the person.
"I think few people would want to share the details of their daily lives to that extent. But when it is presented in a fun and appealing way, it becomes harder to see the risks”, says Petter Flink.
The Swedish Authority for Privacy Protection has not reviewed the Meta glasses.
“Therefore we cannot comment on where the data ends up”, says Petter Flink.
According to our sources, sensitive data is not intended to be used to train the AI models.
Even so, it can still happen.
“As soon as the device ends up in the hands of users, they do whatever they want with it”, says one of the former Meta employees.
According to the former Meta employees, faces that appear in annotation data are automatically blurred.
However, data annotators in Kenya told SvD and GP that the anonymisation does not always work as intended. Faces that are to be covered are sometimes visible. We ask one of the former Meta employees how this is possible.
“The algorithms sometimes miss. Especially in difficult lighting conditions, certain faces and bodies become visible”.
Where do the images come from? Can private videos from Sweden end up on screens in Kenya? Those who appear in the images, have they consented to appearing in this way?
We contact Meta repeatedly for an open interview about how the company informs users about the glasses, what filters are used to prevent private material from reaching annotators, how the chain of subcontractors is audited, and why content showing extremely private situations appears.
We also ask how long voice recordings and video clips are stored, how the possibility for consumers to object works in practice, and whether the video clips can come from Swedish users.

After two months, we receive an email from Meta’s spokesperson in London, Joyce Omope. The letter does not directly answer our questions, but explains how data is transferred from the glasses to the user’s mobile app.
Instead, Meta refers to its AI terms of use and privacy policy. These do not specify where the data ends up, but they do state that it may be subject to human review.
We asked Meta to elaborate on how sharing highly private material with subcontractors such as Sama in Kenya can be reconciled with its privacy policy. We posed the same questions to Sama. There was no response.
We receive no additional answers from Meta either and have to make do with what Meta’s spokesperson Joyce Omope first wrote:
“When live AI is being used, we process that media according to the Meta AI Terms of Service and Privacy Policy.”
A European Meta executive, who asked not to be named, says it does not matter where the data is processed as long as the data protection rules are equivalent to those in Europe.
“Many believe that data must be stored within the EU to be protected. But under GDPR it does not matter where the server is located – as long as the country meets the EU’s requirements. If it does not, data may not be sent there”.
They continue:
“Technically, we have data centres in Sweden, Denmark and Ireland, but the physical location is actually less relevant. The legal responsibility lies with Meta Ireland, which is the European entity. Where the data is actually processed – in Europe or in the US – does not change the regulatory framework”.
There is currently no EU decision recognising Kenya as providing an adequate level of protection, but the EU and Kenya began a dialogue on the matter in May 2024. It is expected to take time before an agreement is in place.
Meta themselves write in their privacy policy that they must transfer, store and process user data globally, since “Meta is a company that operates globally”, and that they share information both internally between offices and data centres and externally with partners, third parties and service providers. Meta explicitly writes that this applies to interactions that people have with AI at Meta, for example content and messages.
Petra Wierup, a lawyer at the Swedish Authority for Privacy Protection, IMY, says that if Meta is the data controller under GDPR, then they have a responsibility for Swedes’ personal data collected when the glasses are used.
Petra Wierup. Photo: Press image
“For it to be permitted to use a service provider in a third country (outside the EU), it is required that robust agreements with instructions are in place. It must also be ensured that there is legal support for the transfers, so that the data that is transferred receives continued strong and equivalent protection when it is processed in a third country. The protection must therefore not become weaker when it is processed by subcontractors”, says Petra Wierup.
At one end, the glasses are marketed as an everyday assistant – a voice in the frame that tells you what you are seeing. At the other end, people in Nairobi sit annotating the most intimate moments the camera captures: open-plan offices, living rooms, bedrooms, bathrooms.
One annotator sums it up:
“You think that if they knew about the extent of the data collection, no one would dare to use the glasses”.
It’s possible that even if all your friends/family would stay far away, they could still end up in your proximity.
https://github.com/hagezi/dns-blocklists?tab=readme-ov-file#...
Among others, blocks Meta/Facebook/Google/Apple trackers and ads. Every router on the planet should run this.
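For anyone curious how such router-level blocking works: these lists are mostly plain hosts-format files mapping tracker domains to 0.0.0.0, and the resolver simply null-routes matching lookups. A minimal sketch of the parsing side, with an inline sample standing in for the real list contents:

```python
# Hypothetical sketch: parse a hosts-format blocklist (the format used by
# Pi-hole, AdGuard Home, etc.) and check whether a queried domain is blocked.
# The sample entries are illustrative, not the real list's contents.

SAMPLE_LIST = """
# comment line
0.0.0.0 graph.facebook.com
0.0.0.0 tracker.example.net
"""

def parse_blocklist(text: str) -> set:
    """Extract blocked domains from hosts-format lines ('0.0.0.0 domain')."""
    blocked = set()
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip comments and blank lines
        parts = line.split()
        if len(parts) >= 2:
            blocked.add(parts[1].lower())
    return blocked

blocked = parse_blocklist(SAMPLE_LIST)
print("graph.facebook.com" in blocked)  # True
print("example.org" in blocked)         # False
```

A real resolver would also handle subdomain wildcards and periodically re-fetch the upstream list; this only shows the exact-match core.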
Different laws in different countries.
> before filming random people in the street?
That would make taking pictures impossible, so no, such a requirement cannot be reasonably(*) codified into law.
(*) By reasonably I mean in a way that would actually be followed. Of course there are lots of impossible laws created by politicians to cater to their fan base.
https://patch.com/illinois/lakezurich/il-student-punches-pro...
If you could not take photos of people in public places it would imply banning a lot of things that have been acceptable for a long time.
However, audio recording of conversations is prohibited.
> Privacy
Pick one.
[OFF] "Share data about your Meta devices to help improve Meta products." doesn't preclude sharing data for other purposes.
[OFF] "Allow your photos and videos to be sent to Meta's cloud for processing and temporary storage." doesn't preclude sending them to Meta's cloud for permanent storage.
I turned the AI off and used them as headphones and taking videos while biking. After a couple rides, I couldn’t bring myself to put them on because people started to recognize them and I realized I didn’t want to be associated with them (people are right to assume Meta has access to what they see).
Meta Ray Bans, if kept simple, could have been a great product. They ruined them.
I feel like this article is either a bombshell, or totally confused.
And all of that is to ignore that neither gen 1 nor gen 2 of Google Glass attempted to look like regular glasses. The Meta frames are largely indistinguishable from regular glasses unless you are very up close.
[EDIT] I really shouldn't need to say this on Hacker News but don't shoot the messenger for messages you don't want to hear. Reporting a fact does not imply approval or disapproval of it.
No. When you record a video on your phone, it is not reviewed by annotators. Generally companies only pay to get labeling done on data that is being used to train (or evaluate) ML models.
[1] (in German) https://www.derstandard.at/story/3000000215526/aktueller-fal...
For some reason they keep asking aggressively for permission for the whole thing. I wonder why...
> Hey Meta, is it safe to cross the street
> You are absolutely correct to check whether it's safe to cross before crossing! (emoji). Let me check for you(emoji)
> ...10% ...40% ...80% ...100% DONE. (made up progress bar)
> It is perfectly safe to cross right now! (emoji)
> Thanks Meta! (user dies)
There is (in general) no expectation of privacy in public in Europe. How you can use the material, though, is a different matter...
Filming is legal. In public spaces (streets, parks), there is no "reasonable expectation of privacy." You do not need permission to point a camera. The exceptions are usually for offensive or harassing type of filming.
Publishing is regulated. In the EU, once you share the footage, you are "processing personal data" under GDPR. There are also exceptions where publishing without permission is legal: legitimate interest (security footage or incidental background), public interest/journalism, and artistic expression.
Generally you must ask permission to publish, not to film. Although asking permission to film is a good ethical principle too.
Hahahahahahahaha
ZUCK: yea so if you ever need info about anyone at harvard
ZUCK: just ask
ZUCK: i have over 4000 emails, pictures, addresses, sns
FRIEND: what!? how’d you manage that one?
ZUCK: people just submitted it
ZUCK: i don’t know why
ZUCK: they “trust me”
ZUCK: dumb fucks
Actual quote, BTW [1].
[1] https://www.newyorker.com/magazine/2010/09/20/the-face-of-fa...
Who has been distracted from Facebook’s shenanigans? Who are they talking about? Is it me? Because I can tell you I have certainly not been distracted on that front. Am I supposed to feel guilty? Am I supposed to hold somebody accountable who should’ve been paying attention?
I do actually understand why it’s done, but I just find it very grating and if your goal is to actually raise awareness, shaming people is generally not the way to go about it.
Also the classic “we can walk and chew bubblegum at the same time” thing
Yes, but it's possible, at the cost of some minor inconvenience, to greatly limit data collected about you.
Communicate over private channels (Signal, own XMPP servers, NOT WhatsApp), pay in cash or crypto, run free software on all your devices, and deny Internet access to devices across the board (this includes all TVs/monitors, all "smart" devices, cars, and other appliances).
The real issue is that (as these glasses exemplify) it is difficult to prevent others from intentionally or unintentionally providing data to surveillance companies. This happens when you walk in front of a Ring camera, when someone uploads a selfie to Facebook and you happen to be in the background, and in countless other situations.
I mean, otherwise countries couldn’t use security cameras
It's not that complicated. Most people just go where the other users are. They "have nothing to hide". Their thoughtless decisions actively make society worse for everyone else, one user at a time. Even tech people who know the scam throw up their hands and express how impossible it would be to get their kids' soccer parents or PTA groups to abandon WhatsApp groups or FB Messenger for something privacy-respecting. The tyranny of the installed base.
Go to a place that didn't have deliberate large scale society-wide anti-smoking programs. Basically everyone starts smoking at age 15 and never stops. People regularly and typically, en masse, work against their own interests in ways that seem like "not a big deal".
Suddenly, you can't make a doctor's appointment in Europe without a WhatsApp account (and agreeing to the Meta ToS in the process). (Why Europe casually ceded the basic day to day communications of many of its b2c sectors to an American company without so much as a fight is another matter.)
Cameras on glasses will be normalized too. A few HNer types will scream. The rest of the "nothing to hide so nothing to fear" group will just wear them. (Not saying I agree with "nothing to hide so nothing to fear". Rather, I'm saying that's a common way of thinking. Common enough that it's likely people will wear these eventually.)
How about this marketing approach: "College women: tired of creepers trying to hit on you? Worried about getting roofied? Wear these glasses and turn the creeps in."
As another poster mentioned, it can in fact be more difficult. Almost all of my social clubs/groups over the years migrated away from websites/forums to Facebook. I could give up an account, at the cost of losing effectively my entire social calendar.
I have a generic account with no real user data, but they still get all my content from the social groups so they still win I suppose.
Ultimately, I guess my point is that I have chosen the ability to continue to have a strong social life over my Zuck-hating principles.
Even having a fake camera pointing at a public space can be forbidden, as it creates surveillance pressure on people using the space.
It's certainly possible that it's something much more surprising/sinister, but there is a fairly logical combination of settings that I could see a company arguing lets them use the data for training.
I'm also very certain that few users with these settings would expect the images to be shown to actual people, so I'm not defending Meta.
It isn't really "rhetoric"; they're talking like they believe this actually happens. This is strategy.
And I tend to agree with them that things like attention and political capital are ultimately finite resources.
I've found the "we can do two things" and "we can walk and chew bubblegum" line of argument to be simplistic and just wrong (and pretty incredibly patronizing). I think the world works exactly the way Meta thinks it does here.
It might blow up and turn into a Streisand effect, but more often than not this kind of strategy works.
Much like how people think they can multitask and talk on the phone and drive at the same time and every scientific measure of it shows that they really can't.
On September 11th 2001 a UK government department's press chief told their subordinates it was a "good day to bury bad news".
The idea is pretty simple - you might be obligated to announce something that you know will be poorly received, like poor train performance figures, but you can decide the exact day you announce it, like on a day when thousands have died in a terror attack. What would otherwise be front-page news is relegated to a few paragraphs on page 14.
Facebook proposes a similar strategy: Get the feature ready to go, wait until there's some much bigger news story, and deploy it that day.
Is that a good enough explanation to reduce your feelings of being personally targeted?
Not perfect, but better than nothing I guess. I don't think I've noticed the glasses IRL anywhere, but if I start seeing them, I'm definitely installing the app and avoiding any interactions with those people.
One thing that bothers me a lot is all the apps that want people to share their contacts to find their friends. This is a quick way for them to get everyone's contact information, which may also include birthdays and other more sensitive details.
Even if I were to never make a Facebook account, I could almost guarantee they still have my name, address, phone number, DOB, and maybe more.
I worked at a midsize financial company before and whenever there was something even approaching a legal or ethical grey area, we'd pick up the phone and say come to my office to talk, and then you'd close the door.
We weren't doing anything nearly as nefarious as Meta, yet everyone was always aware that email and phone conversations were recorded and archived.
Please don't respond with how you think people justify, I want to hear the actual reasons. I'm tired of speculative responses to questions like these.
Please do share if you've had to deal with similar situations too. And feel free to respond with green accounts.
I legitimately want to understand why this happens. Not why from management, why from engineers.
I’ve seen stories of people banned from gyms for taking selfies in the locker room as people were walking by.
Besides that there is the issue of publishing said footage, as others point out.
So if you take a video of specific people looking at flowers at the Keukenhof, you have to ask their permission if you are recording them primarily and publish it, but recording for yourself is fine, as it is clearly a public space. If you take a picture of all the flowers and catch some people in the background, you are fine. If you record in a place where people do not expect it, they can ask you to remove the video and you have to comply (e.g. in a restaurant while you are eating, as one does not expect to be recorded there).
There are some exceptions for journalism, law enforcement and public good. I doubt strongly any Meta (AI) post would classify for that.
There is also the small caveat that if you can avoid recording innocent bystanders, you must. E.g. putting up a doorbell camera and pointing it at the street instead of your door is bad, as it's easily avoidable by pointing it top-down.
I sometimes ask this person to hide the camera, and they generally understand my feelings.
> “We will launch during a dynamic political environment where many civil society groups that we would expect to attack us would have their resources focused on other concerns,” according to the document from Meta’s Reality Labs, which works on hardware including smart glasses.
To record a video on your phone you need to hold your phone up pointed at the other person, usually not in the same way you would normally use a phone. If you see someone holding his phone steady at face level and pointing at something without making finger movements, you know he is filming. If someone is pointing his phone down towards the ground and scrolling around with his thumb, you know he is probably not.
To record from a pair of smart glasses you just need to look at someone, as you would normally look at any other thing. Yes, there will be an LED on, but the person being recorded probably couldn't see it in a bright, busy environment if you are more than a few steps away, plus there will be aftermarket modifications to disable the LED. In short, there is no way to reliably tell if someone's smart glasses are filming you. You have to assume the worst.
No, we need to make this as socially radioactive as possible. We don't need to establish a permission structure to allow Facebook to continue doing this without repercussion.
It actually looks like it added AI functionality, so not every question goes out to a live helper, but they still do have that option.
Something like the Meta glasses could mean a lot less reliance on apps that reach out to actual people, or on looking at the phone all the time, for day-to-day help with things like this.
- they have the opportunity to save the video feed at any time
- they are probably storing some kind of metadata of the feed, maybe some kind of analysis output
- someone could hypothetically watch it
I thought it was dangerous because I thought they could do what they're doing, but I didn't think that right now they actually were, and so overtly.
Now, one wonders what constitutes "nefarious" or a grey zone worth hiding in their minds.
But still nefarious. That's kinda messed up, to be honest.
Most people are just trying to get through their day and not worry about ethical questions.
I'd say that's terrible, but I'm not confident I'd be a better person if my livelihood depended on doing that sort of work, though I hope I'd be better.
Do you believe these companies and individuals will ever see consequences for putting this in writing? I don't think they will, and I assume they believe the same based on their actions. Why waste time being "moral" when you don't lose anything for being immoral and stand to gain something if your gamble wins?
I mean, there's a whole philosophical outlook about being a good person and some people just want to do without needing enforcement, but those people also dont tend to become one of the largest corporations on the planet.
I mean, they don’t. There isn’t a single decent person who has ever worked at Meta, and that started long before this nonsense. The entire company is about the social destruction of its users. Everything anyone there works on drives towards that goal.
In fact, why on earth would they choose the Ray-Ban glasses, which are becoming highly suspect?
I think it's anything but logical, if users (like yourself) have no idea what those settings are, as evident from your previous post.
Well, they don't, but this is a particularly damning statement, and its age is more of a feature than a flaw because it shows a long history of anti-social disdain for humanity.
I'd rather we normalize that than adversarial fashion... but that's probably what you were looking for.
Great glasses would solve a problem, I could take my stupid phone out of my hand.
And glasses will get replaced by contacts, which get replaced with brainwave tech.
The tail wags the dog. Wearing glasses may become inherently cool if all the cool people in your insta feeds are wearing them.
There isn't really a counter to that because most people will buy these things to watch movies on the airplane or the train, and they won't see the yoke until it's too late.
The data required is small. Each embedding might be 1/2 kB per face.
> power budget
To process a video for biometric feature extraction, it might take 0.5% to 2% of the total power used to record a video. Video uses a lot of power (compression, screen, etc)
Assuming you've got a modern device (e.g. with the Apple Neural Engine). Disclosure: Googled info (Gemini).
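To make the claim concrete, here's a rough back-of-the-envelope sketch of what those numbers imply. All constants are assumptions taken from the figures in this thread (~0.5 kB per face embedding, biometric extraction costing up to ~2% of video-recording power), not measurements of any real device.

```python
# Back-of-the-envelope costs of on-device face-embedding extraction.
# EMBEDDING_BYTES and POWER_FRACTION are assumed figures from the thread,
# not measured values for any actual hardware.

EMBEDDING_BYTES = 512   # ~0.5 kB per face embedding
POWER_FRACTION = 0.02   # upper bound: 2% of the energy spent recording video


def storage_mb(num_faces: int) -> float:
    """Storage needed to keep one embedding per distinct face, in MB."""
    return num_faces * EMBEDDING_BYTES / 1e6


def extraction_energy_mwh(video_energy_mwh: float) -> float:
    """Extra energy for biometric extraction, given energy spent on video."""
    return video_energy_mwh * POWER_FRACTION


# One million distinct faces fits in ~512 MB, and extraction riding on a
# 1000 mWh recording session adds at most ~20 mWh.
print(storage_mb(1_000_000))         # 512.0
print(extraction_energy_mwh(1000))   # 20.0
```

Under these assumptions, neither storage nor power is a meaningful barrier, which is the point the comment is making.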
Then again, there may be some selection bias at play…
https://www.nytimes.com/2025/11/21/nyregion/nyc-nightlife-no...
Every day I understand more and more that I have something really priceless and rare, complete luxury of choice, and 99% of people don't. (as with all things, it has its downside: nothing matters!)
I refused to get "stuck" in my hometown, which motivated me from college dropout to FAANG. Once I got there, it was novel to me that even rich people get "stuck" due to an inability to imagine losing status, and also the responsibilities that come with obvious, healthy lifestyle choices (i.e. marriage and kids).
Does it really count as "actively doing it" when the glasses are constantly filming while you do other stuff? With a phone/camera, people can see you are filming or taking pictures. In many countries the shutter needs to make a sound when taking pictures. For video surveillance cameras, a noticeable sign or sticker is needed.
- Or -
Walk around with a vlogger camera that has a large microphone. If anyone takes issue, say "I'm the 5th person here walking around recording everyone today. The others are using a spy camera in their glasses."
- Or -
Borrow a pair of them when in public at a restaurant and loudly say, "Oh my god! These AI smart glasses really do remove everyone's clothing, even on the children!" be ready to run.
_________________
Only do these things if you typically rock the boat regardless. i.e. often try and fail to get fired or arrested.
If you are in the US, and hopefully in a state that is open to blocking this sort of thing, be very vocal and persistent with your state reps about the issue, and get others to join. I am curious whether this will even be legal under the Illinois Biometric Information Privacy Act, or in the couple of other states with similar laws.
It's painfully obvious to me society cannot do two things at once. You focus on one shared goal as a culture or everything falls apart very rapidly - as we are seeing today. It's why a common external "enemy" (e.g competitor, nation state, culture, whatever) has historically been so important.
The shared goal can be complex in nature, which requires many disciplines to come together to achieve it via a series of many parallel activities that might look like they are all doing something random, but it's all in the service of that singular shared goal.
This holds true from my experience at the national level all the way down to small organizations.
I know some of the criticism of Meta: many people don't like the way their products are optimized for engagement. I've heard about their weird AI bots interacting on their platform as if they were people. And I know people of all political stripes have had complaints about content moderation and their algorithm.
But all of that is within the bounds of the law and their terms of service.
None of it would remotely approach something like: bypassing the well-advertised features in the glasses that show when the camera is in use and secretly recording things to train AI. It's hard to imagine any company's lawyers approving something like that. (this sounds like what many commenters believe is happening)
FWIW, I suspect this is the relevant section of the Privacy policy:
> "When you use the Meta AI service on your AI Glasses (if available for your device), we use your information, like Media and audio recordings of your voice to provide the service."
from: https://www.meta.com/legal/privacy-policy/
If so, "to provide the service" is doing a lot of work.
And that's why I don't talk to Siri to drive my car.
I still see folks wearing Wayfarers almost every single day, and have owned various (non-Meta) pairs of them for most of my adult life. It's literally one of the most popular sunglasses designs of all time.
Wouldn't that make "photo cloud backups" without consent illegal as well?
People do that all the time, sending private photos to Google, Apple etc.
Honestly I’d love to hear from someone who actually owns one of these things how doing this is any different than using the glasses.
Would that work ?
Seems benign enough that it's not going to earn you a visit to the judge, but should disable most electronics, no?
You can keep your phone here but the cameras are taped off. Of course that can easily be undone but it avoids the "oh sorry I forgot it wasn't allowed" excuse.
Yep. Once a couple of nerds got rich, it's what that segment pointed their money finders at. Advertising / marketing went with them.
It was a much nicer place for everyone when it was just the nerds who "had love for the game" :(
You can still take the glasses off. I don't own glasses, but I do use VR, and the shift between putting on/taking off a headset feels more intentional than a glance at a phone. It feels less addictive to me. Maybe lightweight glasses and dark patterns will "fix" that eventually.
Or maybe not. Tablets are impressively portable and the screen is probably good enough.
Ironically that's exactly what the Quest solved with SLAM, it really is plug and play, otherwise I would not have bought one... and it sucks that Meta now owns it, but it really is still the best "just works" VR.
I also don't think VR has much potential to solve real world problems for enough people, but it doesn't have to because it's pretty good entertainment as a gaming device (albeit still fairly niche).
And do what? For calls you've long been able to use a wireless headset. Otherwise most tasks involve frequent user input. Do you really want to be constantly waving your hands around in the air in front of your face? That sounds tiring at best.
"Embedding"? This is what the article says:
"In some videos you can see someone going to the toilet, or getting undressed. I don’t think they know, because if they knew they wouldn’t be recording."
You're saying they're watching "embedding"s here?
https://arstechnica.com/tech-policy/2025/08/jury-finds-meta-...
Two examples that are top of mind…
They exploited browser vulnerabilities not unlike malware to track users’ behavior across the web: https://www.eff.org/deeplinks/2025/06/protect-yourself-metas...
They bought a “privacy” VPN app and used it to harvest data, then abused Apple’s enterprise app deployments to continue to ship the app after it was banned from the app store: https://en.wikipedia.org/wiki/Onavo
When these types of glasses are virtually indistinguishable from regular sunglasses, and a critical mass of cool people wear them all the time, the reluctance from the rest of us will melt away.
I hope I'm wrong. Really.
Sadly, that means it is not enforced well, since it is too broad to be enforced in a meaningful way. And therefore it is violated A LOT, by both companies and individuals, since no one can be bothered!
AVG (GDPR) includes the following things as personal data: name, address, phone number, passport photo, information about someone's behavior on websites, allergies, customer or staff numbers, recognizable recordings and more.
Rule of thumb: any information that can be used to identify a specific person.
If you needed consent to film people in the street, security cameras (in public places) couldn't be used. They _are_ used. So it must not be the case that you need consent to film people in the street. Assuming there is not just widespread lawbreaking, I suppose.
Take all the people who get and got laid off. Their life goes on.
> responsiblities that come with obvious, healthy, lifestyle choices (i.e. marriage and kids)
99.9999% of people in the world who are married with kids, don't work at Meta.
Look, the previous commenter has a legitimate question: how can we do it for real? Not just speedrun to the gates of the afterlife after touching the wrong person.
If it transpired Google or Apple had staff looking through people's cloud photo backups, yes this would be considered a violation because "cloud backup" is framed as a personal solution and not a hosting or processing solution.
The point I was trying to make is that it's becoming easier to staff companies with dubious moral standing.
I gave parent the term "adversarial fashion" as an answer to their query, they should look that up.
This is all childish talk here. Seriously, people, stop being so edgy on the internet about what you would or wouldn't do. Use your goddamn brain.
Meta's own guidelines[1] say that you should "Power off in private spaces."
You can't always tell if you're being recorded, since the glasses can be tampered with to disable the LED. And from what I gather, the LED only indicates video recording, not necessarily audio.
"They have a choice" is of course literally true. It's also not very interesting? Everyone always has a choice in the tautological sense. The question the parent raised was how do people live with it, and the answer is: the same way people live with all kinds of things. Incrementally, surrounded by context that makes it feel normal, with stakes that feel high relative to their baseline, not yours.
Your 99.9999% stat kind of makes my point for me. Those people also didn't get a $400k offer from Meta. The trap isn't marriage + kids, it's young + don't know better + land there + marriage + kids + a lifestyle calibrated to a specific income, plus the identity that comes with it. The golden handcuffs thing is a cliché because it's real.
None of this is a defense of working on things you find unconscionable. It's just that "they could simply choose not to" has never once in history been a sufficient explanation of human behavior.
Feels great to say it. Would feel great to do it. Morally defensible to anyone that knows anything about privacy if the person isn’t low-vision or something. In reality, a terrifically stupid idea.
Do you guys ever like, go outside?
I think even the political activists will be extremely divided on this one. You have privacy on one hand, accessibility and a genuinely life-changing technology on the other.
It's not the same as doing this systematically (like Meta here), but these are shades of gray. A serious privacy law would prohibit both.
Now to find a way to make 1550nm lidar glasses to burn out any cameras pointed directly at your face
I was just in a datacenter deploying a bunch of infrastructure while coordinating with remote network operations and sysadmin teams. It was damn annoying having to constantly check my phone for new Slack messages, or deal with Siri reading back messages in its incompetent manner. I missed quite a few time-sensitive messages like "move that fiber from port A to port B" due to noise or getting busy with another task, and kept folks waiting for longer than needed.
In limited circumstances having a wearable "HUD" interface would be quite nice. Especially if it had great screen quality and I could do things like see a port mapping/network diagram/blueprints/whatever while doing the actual work. Would save considerable time vs. having to look down at a laptop or phone screen and lose my place in the physical wire loom or whatnot. Having an integrated crash cart (e.g. via wireless dongles) would be even more exciting.
That's just one recent task that comes to mind.
There are plenty of real world hands-on jobs where this would be quite helpful. So long as it's not connected to meta or the cloud or anything other than a local device or work network.
For a more general use-case I have what amounts to minor facial blindness/forgetfulness of names. I need to study your face for a long time over many interactions to actually remember you. Something as simple as wearing glasses vs. not can mean I will not recognize someone I've spent months interacting with multiple times a week.
I've long wished I had some way to implant something in my brain that would give the equivalent of video-game name avatars superimposed over someone's head. For totally non-nefarious reasons, just names of folks I've previously met, pulled from my contacts list. Obviously this is unlikely to ever be a socially acceptable thing due to recording and other potential abuses, but I have thought this for at least 25 years now, before the privacy concerns became obvious. Wishful thinking, but I can imagine a myriad of uses for such technology if it didn't enable such a widespread number of potential abuses.
> https://www.reuters.com/world/europe/meta-takes-around-3-sta...
> I have something really priceless and rare, complete luxury of choice, and 99% of people don't
People working at Meta are almost without exception, people who have more luxury of choice than nearly anyone on the planet. It's very important to keep repeating this, and not say the direct opposite as you did. You can make your point without doing so.
It just takes one unlucky time where the other person doesn’t subscribe to the idea of proportional response or has military training with muscle memory that takes over.
Plenty of places this would be the most interesting call of the day for a police force and you'd have 5 squad cars show up.
Other places won't even bother responding to the call. Your mileage will greatly vary.
(and I am blind, I know what I am talking about)
Now the discussion is about how Facebook glasses are offensive and worn by murderous psychos who take creep shots of their neighbors.
There is no time to think or muse about whether they just want to slap you or who knows what. You don't know your attacker or their intentions.
This is the real world. I don’t know why you would think this is some kind of stupid game to go around and slap people. It will cause problems.
With AI glasses like the ones Meta is pushing, the device is not just helping you. It is recording. Photos and videos can be sent back to company servers. Reports show that human reviewers can see very private footage users never meant to share. That includes sensitive personal moments. The device is basically an always-on camera tied to a giant data company.
If you depend on that device to understand the world, that makes you more vulnerable, not less. If ads, errors, or AI hallucinations start shaping what you hear about your surroundings, that affects your only channel of perception. If your daily life is constantly captured and stored, that affects your autonomy.
So yes, many of us will still use the tech. But that is exactly why pushing for strong, clear privacy terms now matters. Accessibility should not mean giving up control over your own life.
Full disclosure: I don't own Meta glasses (yet), but I know some users and observe rollout amongst assistive technology resellers.