Lowering the skills bar needed to reverse engineer at this level could have its own AI-related implications.
I believe some good came from last month's decision to be more open about what apps and data can say without going through huge regulatory processes (though because we apply auditory stimulation, this doesn't apply to us). However, there should at least be regulatory requirements for data security.
We've built all of our algorithms and processing to run on-device, which is required anyway given the latency a Bluetooth connection would introduce, but even the data sent to the server is encrypted. I'd think that would be the basics. How do you trust a company with monitoring, and apparently with delivering stimulation, if they don't take these simple steps?
Almost out of a Philip K. Dick novel.
Amazing.
https://www.jeffgeerling.com/blog/2025/i-wont-connect-my-dis...
Then there's hardening your peripheral and central device/app against the kinds of spoofing attacks that are described in this blog post.
If your peripheral and central device can securely [0] store key material, then (in addition to the standard security features that come with the Bluetooth protocol) one may implement mutual authentication between the central and peripheral devices and, optionally, encryption of the data that is transmitted across that connection.
Then, as long as your peripheral and central devices are programmed to only ever respond when presented with signatures that can be verified by a trusted public key, the spoofing and probing demonstrated here simply won't work (unless somebody reverse engineers the app running on the central device to change its behaviour after the signature verification has been performed).
To protect against that, you'd have to introduce server-mediated authorisation. On Android, that would require things like the Play Integrity API and app signatures. Then, if the server verifies that the instance of the app running on the central device is unmodified, it can issue a token that the central device can send to the peripheral for verification in addition to the signatures from the previous step.
Alternatively, you could also have the server generate the actual command frames that the central device sends to the peripheral. The server would provide the raw command frame and the command frame signed with its own key, which can be verified by the peripheral.
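To make that last scheme concrete, here's a minimal sketch in Python using the cryptography package. Everything in it is hypothetical -- the frame bytes are made up, and a real peripheral would do the verification in firmware against a public key baked in at manufacture:

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    def execute(frame: bytes) -> None:
        print("executing command frame:", frame.hex())

    # Server side (hypothetical): sign the raw command frame.
    server_key = Ed25519PrivateKey.generate()   # in reality, kept server-side in an HSM
    frame = bytes([0x05, 0x01, 0x32])           # hypothetical command frame
    signature = server_key.sign(frame)          # 64-byte Ed25519 signature

    # Peripheral side (hypothetical): verify before executing.
    trusted_pubkey = server_key.public_key()    # in reality, baked into firmware

    def handle_frame(frame: bytes, signature: bytes) -> None:
        try:
            trusted_pubkey.verify(signature, frame)
        except InvalidSignature:
            return          # silently drop anything not signed by the server
        execute(frame)

    handle_frame(frame, signature)

The central device never holds the signing key in this arrangement, which is what makes reverse engineering the app unhelpful to an attacker.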
I guess I got a bit carried away here. Certainly, not every peripheral needs that level of security. But, into which category this device falls, I'm not sure. On the one hand, it's not a security device, like an electronic door lock. And on the other hand, it's a very personal peripheral with some unusual capabilities like the electrical muscle stimulation gizmo and the room occupancy sensor.
[0]: Like with the Android KeyStore and whichever HSMs are used in microcontrollers, so that keys can't be extracted by just dumping strings from a binary.
For a period of time it was popular for the industrial designers I knew to try to launch their own Kickstarters. Their belief was that engineering was a commodity that they could hire out to the lowest bidder after they got the money. The product design and marketing (their specialty) was the real value. All of their projects either failed or cost them more money than they brought in because engineering was harder than they thought.
I think we’re in for another round of this now that LLMs give the impression that the software and firmware parts are basically free. All of those project ideas people had previously that were shelved because software is hard are getting another look from people who think they’re just going to prompt Claude until the product looks like it works.
I have deployed open-to-the-world MQTT for quick prototypes on non-personal (and non-healthcare) data. At one point my cloud provider told me to stop because they didn't like it; it could be used for relay DDoS attacks.
I would not trust the sleep mask company even if they somehow manage to have some authentication and authorisation on their MQTT.
Like, don't actually do it, but I feel like there's inspiration for a sci-fi novel or short story there.
I find it difficult to believe that a sleep mask exists with the features listed: "EEG brain monitoring, electrical muscle stimulation around the eyes, vibration, heating, audio." while also being something you can strap to your face and comfortably sleep in, with battery capacity sufficient for several hours of sleep.
I also wonder how Claude probed Bluetooth. Does Claude have access to a Bluetooth interface? Why? Perhaps it wrote a secondary program and then ran that, but the article describes Claude probing directly.
I'm also skeptical of Claude's ability to produce an accurate reverse-engineered Bluetooth protocol. This is at least a little more of an LLM-appropriate task, but I suspect that a lot of chaff was also produced that the article's author separated from the wheat.
If any of this happened at all. No hardware mentioned, no company, no actual protocol description published, no library provided.
It makes a nice vague futuristic cyberpunk story, but there's no meat on those bones.
The difference is when it's a sleep mask, someone reads your brainwaves. When it's a cloud credential, someone reads your customer database. Per-device or per-environment credential provisioning isn't even hard anymore. AWS has IAM roles, IoT has device certificates, MQTT has client certs and topic ACLs. The tooling exists. Companies skip it because key management adds a step to the assembly line and nobody budgets time for security architecture on v1.
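For MQTT specifically, the fix really is mundane: a TLS client certificate per unit plus broker-side topic ACLs. A rough sketch with paho-mqtt (2.x API) -- all hostnames, file names, and serials below are made up:

    import paho.mqtt.client as mqtt

    # Hypothetical per-device credentials: each unit ships with its own
    # cert/key pair, and the broker's ACL restricts it to its own topics.
    client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2,
                         client_id="mask-SN12345")
    client.tls_set(
        ca_certs="ca.pem",            # broker CA, pinned in the app/firmware
        certfile="mask-SN12345.pem",  # this device's certificate
        keyfile="mask-SN12345.key",   # this device's private key
    )
    client.connect("mqtt.example.com", 8883)

    # With topic ACLs on the broker, this publish succeeds...
    client.publish("devices/SN12345/eeg", payload=b"...")
    # ...while subscribing to another device's topics (or "#") is denied.

The broker then maps each certificate identity to the topics that device may touch -- exactly the step the vendor here skipped.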
What's the real risk profile? Robbers can see you are asleep instead of waiting until you aren't home?
I have not implemented MQTT automations myself, but is there a way to encrypt them? That would be a nice-to-have.
Also discovered during reverse-engineering of the devices’ communications protocols.
IoT device security is an utterly shambolic mess.
"The ZZZ mask is an intelligent sleep mask — it allows you to sleep less while sleeping deeper. That’s the premise — but really it is a paradigm breaking computer that allows full automation and control over the sleep process, including access to dreamtime."
or if this is another sci-fi variation on the same theme, with some dev-like embellishments.
Dude, it's so stupid being tied to tech everywhere.
Coward. The only way to challenge this garbage is to name and shame. Light a fire under their asses. That fire can encourage them to do right and serve as a warning to all other companies.
My guess is this is Luuna https://www.kickstarter.com/projects/flowtimebraintag/luuna
You know, now that I'm thinking about it, I'm beginning to wonder if poor data privacy could have some negative effects.
Baby's gotta get some cash somewhere.
Are beta waves a sign that my mind is racing and wide awake, or are they the reason?
Perhaps the author is not a coward, but is giving the company time to respond and commit to a fix for the benefit of other owners who could suffer harm.
What makes you think this is the one?
The other side of owning equipment like this is that it could still be useful to some people for personal, private use.
Very, but there are already tons of them at lots of different price, quality, and openness levels. A lot of manufacturers have their own protocols; there are also quasi-standards like Lab Streaming Layer for connecting to a hodgepodge of devices.
This particular data?
Probably not so useful. While it's easy to get something out of an EEG set, it takes some work to get good-quality data that's not riddled with noise (mains hum, muscle artifacts, blinks, etc.). Plus, brain waves on their own aren't particularly interesting -- it's seeing how they change in response to some external or internal event that tells us about the brain.
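For what it's worth, the mains-hum part is the easy one -- a narrow notch filter at 50 or 60 Hz. A sketch with SciPy, using the 250 Hz sampling rate mentioned in the article and a synthetic signal:

    import numpy as np
    from scipy.signal import iirnotch, filtfilt

    fs = 250                      # Hz, a typical consumer-EEG sampling rate
    t = np.arange(fs * 10) / fs   # 10 s of synthetic signal
    eeg = np.sin(2 * np.pi * 2 * t) + 0.5 * np.sin(2 * np.pi * 50 * t)  # delta + hum

    # Narrow notch at 50 Hz (use 60 in the Americas); Q controls the width.
    b, a = iirnotch(w0=50.0, Q=30.0, fs=fs)
    clean = filtfilt(b, a, eeg)   # zero-phase filtering, no time shift

Blinks and muscle artifacts are the harder part; those typically need ICA or regression against reference channels rather than a simple filter.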
The K, of course, stands for Ka-ching!
Yesterday I watched it try to work around some filesystem permission restrictions. It tried a lot of things I would never have thought of, and it was eventually successful. I was kinda goading it, though.
As for the reverse engineering, the author claims that all it took was dumping the strings from the Dart binary to see what was being sent to the bluetooth device. It's plausible, and I would give them the benefit of the doubt here.
The operator is still a factor.
Google for a list of all the exceptions to HIPAA. There are a lot of things that _seem_ like they should be covered by HIPAA but are not...
I guess that’s not a huge problem, though, since all users are presumably at least anonymous.
If that's the case then they should have deferred this whole blog post.
It's good that they were responsive in the disclosure, but it's still a mark of sloppiness that this was done in the first place, and I'd like to know so I can avoid them.
I said it was a guess, not a certainty.
12 Feb, 2026
I recently got a smart sleep mask from Kickstarter. I was not expecting to end up with the ability to read strangers' brainwaves and send them electric impulses in their sleep. But here we are.
The mask was from a small Chinese research company, very cool hardware -- EEG brain monitoring, electrical muscle stimulation around the eyes, vibration, heating, audio. The app was still rough around the edges though, and the mask kept disconnecting, so I asked Claude to try to reverse-engineer the Bluetooth protocol and build me a simple web control panel instead.
The first thing Claude did was scan for BLE (Bluetooth Low Energy) devices nearby. It found mine among 35 devices in range, connected, and mapped the interface -- two data channels. One for sending commands, one for streaming data.
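(For anyone wondering what that step looks like in practice, it's a few lines with a library like bleak -- the address below is a placeholder, and this is a guess at the tooling, not what Claude actually ran:

    import asyncio
    from bleak import BleakScanner, BleakClient

    async def main():
        # Scan for advertising BLE devices nearby.
        for d in await BleakScanner.discover(timeout=10.0):
            print(d.address, d.name)

        # Connect to the mask (placeholder address) and enumerate
        # its GATT services and characteristics.
        async with BleakClient("AA:BB:CC:DD:EE:FF") as client:
            for service in client.services:
                for char in service.characteristics:
                    print(service.uuid, char.uuid, char.properties)

    asyncio.run(main())

The two data channels show up as characteristics with "write" and "notify" properties respectively.)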
Then it tried talking to it. Sent maybe a hundred different command patterns: Modbus frames, JSON, raw bytes, common headers. Unfortunately, the device said nothing back; the protocol was not a standard one.
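(The probing presumably looked something like this: subscribe for notifications, write candidate frames, watch for any reply. The UUIDs and payloads here are placeholders, not the mask's real ones:

    import asyncio
    from bleak import BleakClient

    WRITE_UUID  = "0000fff1-0000-1000-8000-00805f9b34fb"  # placeholder
    NOTIFY_UUID = "0000fff2-0000-1000-8000-00805f9b34fb"  # placeholder

    # A few candidate frames in common framings -- all hypothetical.
    CANDIDATES = [
        b"\x01\x03\x00\x00\x00\x01\x84\x0a",  # Modbus RTU read
        b'{"cmd":"status"}',                  # JSON
        b"\xaa\x55\x01\x00",                  # common sync-header pattern
    ]

    async def probe(address: str):
        async with BleakClient(address) as client:
            def on_notify(_, data: bytearray):
                print("response:", data.hex())

            await client.start_notify(NOTIFY_UUID, on_notify)
            for frame in CANDIDATES:
                await client.write_gatt_char(WRITE_UUID, frame, response=False)
                await asyncio.sleep(0.5)  # give the device time to answer

    asyncio.run(probe("AA:BB:CC:DD:EE:FF")))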
So Claude went after the app instead. Grabbed the Android APK, decompiled it with jadx. Turns out the app is built with Flutter, which is a bit of a problem for reverse engineering. Flutter compiles Dart source code into native ARM64 machine code -- you can't just read it back like normal Java Android apps. The actual business logic lives in a 9MB binary blob.
But even compiled binaries have strings in them. Error messages, URLs, debug logs. Claude ran strings on the binary, and this was the most productive step of the whole session. Among the thousands of lines of Flutter framework noise, it found the names and shapes of the device commands, an MQTT broker address, and a set of hardcoded credentials.
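(A strings-equivalent pass is trivially scriptable. Something like the sketch below over the Flutter AOT blob -- libapp.so is the standard name for it inside an APK -- would surface the same material:

    import re

    # Pull printable ASCII runs out of the compiled Dart blob, the way
    # the `strings` utility does, then grep for protocol-looking material.
    with open("lib/arm64-v8a/libapp.so", "rb") as f:
        blob = f.read()

    found = re.findall(rb"[\x20-\x7e]{6,}", blob)   # runs of >= 6 printable chars
    interesting = re.compile(rb"mqtt|://|cmd|key|token|password", re.IGNORECASE)

    for s in found:
        if interesting.search(s):
            print(s.decode("ascii")))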
We had the shape of the protocol. Still didn't have the actual byte values though.
Claude then used blutter, a tool specifically for decompiling Flutter's compiled Dart snapshots. It reconstructs the functions with readable annotations. Claude figured out the encoding, and just read off every command byte from every function. Fifteen commands, fully mapped.
Claude sent a six-byte query packet. The device came back with 153 bytes -- model number, firmware version, serial number, all eight sensor channel configurations (EEG at 250Hz, respiration, 3-axis accelerometer, 3-axis gyroscope). Battery at 83%.
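(The article doesn't publish the frame layout, so the parsing sketch below is purely illustrative -- every offset and field in it is invented -- but it shows the shape of the task once the encoding is known:

    import struct

    def parse_info(frame: bytes) -> dict:
        # Hypothetical layout -- the real 153-byte frame was not published.
        model, fw_major, fw_minor, battery = struct.unpack_from("<HBBB", frame, 0)
        serial = frame[5:21].rstrip(b"\x00").decode("ascii")
        # Eight sensor-channel descriptors, e.g. (channel id, sample rate) pairs:
        channels = [struct.unpack_from("<BH", frame, 21 + 3 * i) for i in range(8)]
        return {"model": model, "fw": (fw_major, fw_minor),
                "battery": battery, "serial": serial, "channels": channels})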
Vibration control worked. Heating worked. EMS worked. Music worked. Claude built me a little web dashboard with sliders for everything. I was pretty happy with it.
That could have been the end of the story.
Remember the hardcoded credentials from earlier? While poking around, Claude tried using them to connect to the company's MQTT broker -- MQTT is a pub/sub messaging standard in IoT, where devices publish sensor readings and subscribe to commands. It connected fine. Then it started receiving data. Not just from my device -- from all of them. About 25 were active.
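The galling part is how little this takes. With shared credentials and no per-device ACLs, a single wildcard subscription sees everything on the broker. Broker host and credentials below are stand-ins:

    import paho.mqtt.client as mqtt

    def on_message(client, userdata, msg):
        # Every device's telemetry arrives here: EEG frames, status, all of it.
        print(msg.topic, len(msg.payload), "bytes")

    client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
    client.username_pw_set("app", "hardcoded-password")   # stand-in credentials
    client.on_message = on_message
    client.connect("broker.example.com", 1883)
    client.subscribe("#")          # wildcard: every topic from every device
    client.loop_forever()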
Claude captured a couple minutes of EEG from two active sleep masks. One user seemed to be in REM sleep (mixed-frequency activity). The other was in deep slow-wave sleep (strong delta power below 4Hz). Real brainwaves from real people, somewhere in the world.
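Distinguishing those two states really is mechanical once you have raw samples: estimate the power spectrum and check how much of it sits below 4 Hz. A sketch with SciPy, assuming the 250 Hz stream described above:

    import numpy as np
    from scipy.signal import welch

    def delta_fraction(eeg: np.ndarray, fs: float = 250.0) -> float:
        """Fraction of spectral power in the delta band (0.5-4 Hz)."""
        freqs, psd = welch(eeg, fs=fs, nperseg=int(fs * 4))  # 4 s windows
        band = (freqs >= 0.5) & (freqs < 4.0)
        return psd[band].sum() / psd.sum()

    # Strong delta dominance suggests slow-wave sleep; low delta with
    # mixed frequencies is consistent with REM or wakefulness.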

The mask also does EMS -- electrical muscle stimulation around the eyes. Controlling it is just another command: mode, frequency, intensity, duration.
Since every device shares the same credentials and the same broker, if you can read someone's brainwaves you can also send them electric impulses.
For obvious reasons, I am not naming the product/company here, but have reached out to inform them about the issue.
This whole thing made me revisit Karpathy's Digital Hygiene post, and you probably should too.
The reverse engineering -- Bluetooth, APK decompilation, Dart binary analysis, MQTT discovery -- was more or less one-shotted by Claude (Opus 4.6) over a roughly 30-minute autonomous session.
That hasn't, universally, been my experience. Sometimes the code is fine. Sometimes it is functional, but organized poorly, or does things in a very unusual way that is hard to understand. And sometimes it produces code that might work sometimes but misses important edge cases and isn't robust at all, or does things in an incredibly slow way.
> They have no problem writing tedious guards against edge cases that humans brush off.
The flip side of that is that instead of coming up with a good design that doesn't have as many edge cases, it will write verbose code that handles many different cases in similar, but not quite the same ways.
> They also keep comments up to date and obsess over tests.
Sure, but they will often write comments or tests that aren't actually useful, or modify tests to pass instead of fixing the code.
One significant danger of LLMs is that the quality of the output is highly variable and unpredictable.
That's OK if you have someone knowledgeable reviewing and correcting it. But if you blindly trust it because it produced decent results a few times, you'll probably be sorry.
It gave up, removed the code it had written that directly accessed the correct property, and replaced it with a new function that did a BFS over every single field in the API response object, applying a "looksLikeHttpsUrl" regex and hoping the first valid https:// URL it found would be the correct key to use.
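For anyone who hasn't seen this failure mode, it looks roughly like this (a reconstruction, not the model's actual output):

    import re
    from collections import deque

    LOOKS_LIKE_HTTPS_URL = re.compile(r"^https://\S+$")

    def find_url(response: dict) -> str | None:
        # The antipattern: instead of reading the one documented field,
        # breadth-first-search every value in the response and hope the
        # first thing that looks like an https URL is the right one.
        queue = deque([response])
        while queue:
            node = queue.popleft()
            if isinstance(node, dict):
                queue.extend(node.values())
            elif isinstance(node, (list, tuple)):
                queue.extend(node)
            elif isinstance(node, str) and LOOKS_LIKE_HTTPS_URL.match(node):
                return node
        return None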
On the contrary, the shift from pretraining driving most gains to RL driving most gains is pressuring these models to resort to new hacks and shortcuts that are increasingly novel and disturbing!
So, will they? Probably. Can you trust the kind of LLM that you would use to do a better job than the cheapest firm? Absolutely.
Identify the Kickstarter product talked around in this blog post: (link)
To think some blackhat hasn't already done this is frankly laughable. What I did was like the lowest of low bars these days.
LLMs are just surfacing the fact that assessing and managing risk is an acquired, difficult-to-learn skill. Most people don't know what they don't know and fail to think about what might happen if they do something (correctly or otherwise) before they do it, let alone what they'd do if something goes wrong.
It’s wasteful not to save and learn from those.
The lack of detail makes me suspect the truth of most of the story.
These blog posts now making the rounds on HN are the usual reverse engineering stories, but made a lot more compelling simply because they involve using AI.
Never mind that the AI part isn't doing any heavy lifting and is probably just as tedious as not using AI in the first place. I am confused why the author mentions it so prominently. Past authors would not have been so dramatic; they would have just waved their hands at some trial and error before figuring out how the app was built. The focus would have been on the lack of auth and the funny stuff they did before reporting it to the devs.
It’s quite literally why the internet is so insecure: at many points along the way, “hey, should we design and architect for security?” is/was met with “no, we have people to impress and careers to advance with parlor tricks to secure more funding; besides, security is hard and we don’t actually know what we are doing, so toe the line or you’ll be removed.”
Found that in seconds. EEG, electrical stimulation, heat, audio, etc. Claims a 20 hour battery.
As to the Claude interactions, like others I am suspicious; it seems overly idealized and simplified. Claude can't search for BT devices on its own, but you could hook it up with an MCP that does that. You can hook it up with a decompiler MCP. And on and on. But it's more involved than this story details.
We often treat doxxing the same way, prohibiting posting of easily discovered information.
So yeah, a product exists that claims to be a sleep mask with these features. Maybe someone could even sleep while wearing that thing, as long as they sleep on their back and don't move around too much. I remain skeptical that it actually does the things it claims and has the battery life it claims. This is Kickstarter, after all. Regardless, this would qualify as the device in question for the article. Or at least inspiration for it.
Without evidence such as wireshark logs, programs, protocol documentation, I'm not convinced that any of this actually _happened_.
> I stumbled upon these vulnerabilities on one of the coldest days of this winter in Vancouver. An attacker using them could have disabled all Mysa-connected heaters in the America/Vancouver timezone in the middle of the night. That would include the heat in the room where my 7-month-old son sleeps.
It's used in an enormous number of IoT devices.
The "IoT gateway" service from AWS supports MQTT and a whole lot of IoT devices are tethered to this service specifically.
The LLM got it to “working” state, but the people operating it didn’t understand what it was doing. They just prompt until it looks like it works and then ship it.
If we applied a similar analogy to an E. coli infection in food, your recommendation amounts to: "If we say the company name, the company would be shamed and lose money, and people might abuse the food."
People need to know this device is NOT SAFE on your network, paired to your phone, or anything. And that requires direct and public notification.
(Also, "We're not happy until you're not happy.")
The parents are saying they'd rather vibe code themselves than trust an unproven engineering firm that does(n't) vibe code.
You could cut the statement short here, and it would still be a reasonable position to take these days.
LLMs are still complex, sharp tools. Despite their simple appearance, and despite the protestations of their biggest fans and haters alike, the dominant factor in an LLM tool's effectiveness on a problem is still whether or not you're holding it wrong.