"Over the past few years, several AI-powered features have been added to mobile phones that allow users to better search and understand their messages. One effect of this change is increased 0-click attack surface, as efficient analysis often requires message media to be decoded before the message is opened by the user"
Haven't we learned our lesson on this? Don't read and act on my sms messages without me asking you to!
This makes me feel better about Google, but also makes me kind of frightened of the rest of Android. I wonder what Apple's response time is?
Feels like there’s something new every other day - linux, windows, mobile, various commonplace tools used by everybody, the list goes on
```
does this look right to you? don't do any searches or check memory, just think through first principles
static int vpu_mmap(struct file *fp, struct vm_area_struct *vm)
{
	unsigned long pfn;
	struct vpu_core *core = container_of(fp->f_inode->i_cdev, struct vpu_core, cdev);
	vm_flags_set(vm, VM_IO | VM_DONTEXPAND | VM_DONTDUMP);
	/* This is a CSRs mapping, use pgprot_device */
	vm->vm_page_prot = pgprot_device(vm->vm_page_prot);
	pfn = core->paddr >> PAGE_SHIFT;
	return remap_pfn_range(vm, vm->vm_start, pfn, vm->vm_end-vm->vm_start, vm->vm_page_prot) ? -EAGAIN : 0;
}
```
And it correctly identified the issue at hand, without web searches. I'd love to try something more comprehensive, e.g. shoving whole chunks of the codebase into the prompt instead of just the specific function, but it seems the latent ability to catch security exploits is there.
So then.... I wonder how this got out in the first place. I know I'm using a toy example but would love to learn more!
It does make me scared for what other dangers lurk since this was a really bad one and it was so little work to find.
Also of note: so many security issues lately have been found using AI. This report makes me think two things:
1. Expertise is still immensely valuable, the more niche, the more valuable.
2. There are lots of niches still where AI doesn't dominate...
OpenBSD fixed this back in 2017.
What is the purported lesson we should have learned? Users choose phones with rich messaging features. This was a major selling point for iPhone, first, with iMessage, and later with Android until iOS caught up with RCS.
Or my favorite, I marked an extremely suspicious message with what was almost certainly a malicious attachment as junk in a certain BigTech webmail client (the only other option was phishing which it most certainly was not) and it "helpfully" opened the unsubscribe link in my local browser without first asking me for permission. It's difficult to imagine the level of incompetence and dysfunction required to not only write but review, approve, and deploy such a feature in a security and privacy sensitive context.
Doesn't that just turn a 0-click exploit into a 1-click exploit? It's unlikely the user can make an informed decision to not process a potentially malicious message, without clicking on the message.
Somewhere there's an NSA agent reading this and laughing like a gin addict on payday.
Now imagine the dark horrors hiding in the BSPs of other Android devices... or embedded devices in general.
Frankly, it should be a requirement of Google's certification process that everything regarding drivers gets upstreamed into the Linux kernel. Yes, even if this adds quite a time delay to the usual hardware development process.
[1]: https://www.trendmicro.com/en_us/research/17/i/cve-2017-0780...
A 0-click exploit is horrendously worse than even a 1-click one. I often don't even open messages from numbers I don't recognize
This article doesn't mention AI helping find this bug. Seems like humans can still do that on their own.
I've seen quite a few reports saying they were inspired by the previous one, which is presented as "the model pointed us to it", and you get FOMO about missing out if you don't snatch up bugs now as well.
Smaller brands often ship budget android devices and never update them.
If this is the case it's good news for everyone else besides NSO and Co
This is quite impressive considering I’m just a dumbass with a Claude subscription.
> Observations & Potential Issues
>
> A few things worth flagging:
>
> 1. No bounds checking on the mapping size. Userspace controls vm_end - vm_start and vm->vm_pgoff. Here vm_pgoff is ignored entirely and the size is trusted blindly. If the VPU's register block is, say, 64KB but userspace requests a 1MB mapping, the driver will happily map 1MB of physical address space starting at core->paddr — potentially exposing whatever hardware happens to live at adjacent physical addresses. A defensive check would be:
---
70-day release cycles will very quickly not be fast enough to stop widespread use of exploits once bots can scan every PR on every open source project as it comes out.
It seems like the lesson is that you shouldn't be processing data sent to the device by random strangers without the user explicitly choosing to open the file or follow the link.
Not to automatically execute things within data that we have been sent.
Going backwards from 2023, the doubling interval for published CVEs was approximately 4 to 4 1/2 years. Since then it’s approximately two years.
There has definitely been a rapid uptick.
https://projectzero.google/2026/01/pixel-0-click-part-1.html
So AI usage increases bugs and humans have to weed them out!
[1] https://gs.statcounter.com/android-version-market-share [2] https://www.cybersecurity-insiders.com/survey-reveals-over-1...
Here's a cool project that inventories all your KASLR info leaks: https://github.com/bcoles/kasld
Why can't they just make it like most email clients? No preview by default, give a banner with an option to explicitly allow a preview for that specific message or conversation?
There probably are more vulnerabilities being found, but the number of CVEs is not a good metric.
https://docs.qualcomm.com/securitybulletin/may-2026-bulletin...
Since a lot of vendors are months or even years behind, their phones are full of known holes.
When it comes to security, basically: GrapheneOS > iOS > PixelOS >> Samsung OneUI >>>>>>>> everybody else.
Sadly, Samsung lets anyone who pays enough push bloatware and analytics onto their phones. E.g. AppCloud from an Israeli company, Meta services that stay even when you remove Meta apps (only removable with ADB/UAD), etc. So there are only three somewhat serious options (and for two of them, you still give a lot of analytics to Apple or Google).
I've heard they cleaned up their program recently to respond much quicker nowadays
Search "android phone" on AliExpress and there are top-selling phones on the first page running Android 8, Android 10, etc. They're not getting security updates of any sort, let alone driver updates.
Someone T-bones you in a parking lot, a chef causes food poisoning, a plumber's leak floods your bathroom, a personal trainer pushes you to injury, a mislabeled allergen ends up on food, movers break your armoire, a roofer leaves a leak -- I bet we'd see a lot less of all that if a $1MM fine + life in jail loomed over everyone.
Nobody would want to do business, but boy would we be in a golden age.
Yes, they certainly would. You wouldn't have smartphones, for instance.
I can't tell if this is satirical or not. But there are so many takes like this recently (hold the website liable for user content, hold the corporate developer liable for zero days in a project they happened to touch) that would all result in the same outcome (no more product at all) that I can't help but wonder if there's some luddite psy-op trying desperately to bring us back to a pre-Internet era in any way they can...
By definition in Rust it's incorrect to overflow the non-overflowing integer types, and so if you intend say wrapping you should use the explicit wrapping operations such as wrapping_add or the Wrapping<T> types in which the default operators do wrap - but if you turn off checks then it's still safe to be wrong, just as if you'd call the wrapping operations by hand instead of using the non-wrapping operations.
That Dolby overflow code looks awkward enough that I can't imagine writing it in Rust even with checking off - but I wasn't there. However, the reason it's on Project Zero is that it resulted in an out-of-bounds access, and that's something Rust would have prevented anyway.
That said you can enable overflow checks in Rust's release mode. It's literally two lines:
[profile.release]
overflow-checks = true
I wonder if it would make sense for ISAs to have trapping versions of add and subtract. RISC-V's justification for not doing that is that it's only a couple more instructions to check afterwards. It would be interesting to see the performance difference of `overflow-checks = true` on high-performance RISC-V chips once they are available. We've moved slightly closer to this, but in a world where we're still arguing over whether memory safety is necessary, we've probably still got a ways to go before we notice that addition silently overflowing is a top-10 security issue. It's the silent top-10 security issue, I guess.
Unfortunately, since we don't live in that world, we need to not open links, emails, text messages, etc, if they are sketchy.
A better solution may someday exist, but as of yet has not been found.
I think Zig has the most interesting approach here with 3 different "+" operators (+ aborts on overflow, +& wraps, and +| saturates) along with addWithOverflow builtin. It'd probably be a challenge for Rust to adopt that at this point, but it'd be a great improvement
MIPS does (did?). And VAX, IBM/360, ....
Corporate Security will tell you that it's ok to click links to the payroll system or hr or vanta or the 'secure email service' or jira or github or to docusign or the microsoft office document that a partner company sent you or an amazon delivery notification, but not ok to click links in the phishing email that looks exactly like one of those that they sent you.
It's not possible to tell whether a message giving you a link to something is 'sketchy' or not before clicking the link, and any 'security' that relies on people knowing whether a message is malicious or not by magic is broken in the real world.
It's easier to find a needle in the haystack if the haystack is 50% needles.
If you’re not then this seems quite paranoid, bordering on LARPing.
Pssst - hey, kid, want some GNU?
That is not a solution, because it means the code can behave differently and expose a vulnerability if the wrong compilation settings are chosen.
Functions like "wrapping_add" have such long names that nobody wants to use them, and they make the code ugly. Instead, "+" should be used for addition with exceptions, and something like "wrap+" or "<+>" or "[+]" used for wrapping addition.
That's how people work, they will choose the laziest path (the simplest function name) and this is why you should use "+" for safer, non-wrapping addition and make the symbol for wrapping addition long and unattractive. Make writing unsafe code harder. This is just basic psychology.
C has the same problem: it has functions that check for overflow, but they also have long and ugly names that discourage their use.
> modern hardware will just wrap if you don't check and that's cheaper
So you suggest that because x86 is a poorly designed architecture, we should adapt programing languages to its poor design? x86 will be gone sooner or later anyway.
Also, there are languages like JS, Python, Swift which chose the right path, it is only C and Rust developers who seem to be backwards.
This is a very C-flavoured "solution". For those who haven't seen it this involves a pointer (!) and we're going to compute the addition, write the result to the pointed-at integer and then if that didn't fit and so it overflowed we'll return true otherwise false.
The closest Rust analogy would be T::carrying_add which returns a pair to achieve a similar result.
And yeah, checking is "basically free" unless it isn't, that's not different. If you haven't measured you don't know, same in every programming language.
It's never been true that you can't write correct software in C or C++; the problem is that in practice you won't do so.
That said, CHERI is super complicated. Checked integer arithmetic operations would be way simpler.
If the software is correct nothing changes. The existence of people who write nonsense but expect you to work around that doesn't change between languages, they write crap in Swift or Python or Javascript just the same.
The long names are because there are, in fact, a lot of things you might want. Although Swift manages to take several pages and lots of diagrams to explain what wrapping is, that is in fact all their special operators do. What if you don't want wrapping? Too bad.
Rust provides saturating, which is almost always what you wanted for signal processing (e.g. audio) as well as separate "carry" booleans to do arithmetic the way you were probably shown in primary school, the wrapping most often provided on hardware and useful in cryptography among other places and explicitly cheap but dangerous and expensive but safe options. It also provides both kinds of division (and remainder), which doesn't matter for the unsigned integers but is important for signed integers and is a source of confusion and woe when languages provide only one kind or worse a mixture that makes no mathematical sense. These all need names.
Second, it's easy to say "trap on overflow" but traps are super annoying. You really ideally would want to avoid leaving user mode. As soon as you trap to the OS you're now dealing with signals which are pretty much the worst thing in the world. The 4 instruction case at least lets you just branch to other code.
So you ideally want an "add or branch" instruction, but there isn't enough space in the opcodes for that. The fallback is flags, which also massively suck. I don't know if anyone has a great solution to this problem.
If I can plausibly claim I wasn't sure it was legit (i.e. it was sent from the outside from a sketchy-looking host), I'd always report it internally as a phishing attempt. Just to make the security team deal with it.
Sure it is. It's just not something the average user can do. But what makes the situation worse is that most emails now use click tracking, so ALL links are sketchy. For example, emails from my union all link to 2mv.aplink.red and are 200 characters long and look like /dev/urandom output. No fucking idea what or who controls that domain, but it for sure is not my union. I've complained multiple times, including acting dumb and asking if they've been hacked because their email look shady as hell.
Email with the unsubscribe link wrapped in click tracking gets sent straight to SpamCop. I hate tech more and more every day.
just doubled the value and use cases of your AI solution!
There are sooooooo many other situations where such device lockdown is warranted. Government intrusion, sensitive industry, journalism, anything ITAR/EAR covered, and more. Your reduction to a single issue is absurd.
We have seen multiple software hacks resulting in >$10 million payouts. Apple's bug bounty program pays out only $4 million (2x the $2 million non-Lockdown payout) for a zero-click total compromise that could trivially worm to take down hundreds of millions of iPhones simultaneously. Even at the low end of that cyberattack payout range, that is still a >2x ROI if your successful cyberattack depends on an iPhone zero-click, with many publicly known attacks being in the 10x ROI range. Lockdown mode, at best, raises the bar slightly for commercial profit-motivated attackers and reduces their profit margin from wildly profitable to slightly less, but still, wildly profitable.
And of course I am using the Apple bug bounty program as merely a available metric with at least some semblance of objective support. There are zero certifications, audits, or analysis that Apple has even attempted that would confirm any claim of protection against state level actors.
I consider Anthropic's Mythros security bug finder mostly marketing, but other things worry me that there might be a global hack contagion: for example, a few months ago I saw in the news that an executive at a US security company was caught selling information to a hacking group.
Except for disabled JavaScript compilation possibly slowing down web sites, not getting some attachments in messages, and some graphics not showing up on some web sites, having Lockdown mode enabled doesn't seem to affect anything I do. For dev I use VPSs, with ssh configured so that SSH agent forwarding is strictly disabled, as are reverse tunnels.
It seems like doing little things like this make sense because it is such a tiny hassle to be a little safer.
I agree that e.g. working on an OS should require guild-type credentials. But I don't know if most SWEs understand the professional-standards requirements such organisations are empowered to enforce on their members.
The economics of the device exploitation industry are completely orthogonal from bug bounty payouts; the markets only overlap at the _extreme_ fringes. Trying to use one as a proxy for the other is meaningless.
It’s undeniable that the proverbial guns for hire make it easy (if not cheap) to target basically anyone — but just because the vibes are bad doesn’t mean we can just say “it’s common knowledge that …”
The fact is mitigations are costly in terms of convenience and ease of use. Helping people make informed choices about whether to enable mitigations and bear that cost requires more than platitudes imo
This sets a nice price bar for exploitation. Is someone willing to pay 10+ million dollars to get access to your phone?
The obvious caveat here is that for a lot less than 10 million dollars someone can be hired to hit you with a metal pipe until you give up your passcode.
> click total compromise that can trivially worm to take down hundreds of millions of iPhones simultaneously
Where is the profit motive in doing this? Possibility is one thing, but a realistic threat is another.
Take my friend who is a property lawyer. The firm she works for buys her insurance, because it would be insane to operate without insurance, but the only available insurance is personal insurance, it insures a specific person to do property law. So, although her day job is helping that $100Bn farm equipment company buy a $10M new factory from a $100Bn construction firm, at the weekend she is covered by that same insurance when she represents her friend buying a $500k cottage. AIUI this is a completely normal arrangement.
If that was the situation for programming, the company is going to buy your $100M exploit insurance because they need a programmer, but it's personal insurance so you could work on your Game jam game using the same insurance, and it'd be crazy to just "Go commando" if you don't have employment and thus insurance, in case somehow your "Galaga but also Blue Prince and somehow a visual novel" Game jam entry causes a $10M damages payment.
It really just introduces a legal burden to prove competence and work in good-faith, and nets immense power to throw out ridiculous deadlines. Your managers are legally responsible too, and if they push beyond what's reasonable you have just cause to bring them to court in a way that you currently don't. To re-emphasize, I don't think this is a better world, but it's not unlivable.
Not yours specifically usually, but there is a lot of money in a general tool that law enforcement can use to read out phones. Of course, most of them focus on physical access. In the few Cellebrite reports/presentations that have leaked, iPhones would fall after a relatively short time (IIRC a few months), but did better than most Android phones (except GrapheneOS).
Also, sometimes you do not need the 10M exploit, you can buy many cheaper exploits and make a chain yourself.
The obvious caveat here is that for a lot less than 10 million dollars someone can be hired to hit you with a metal pipe until you give up your passcode
If they hit you with a metal pipe, it's likely that you won't survive even if you give up your passcode. So most likely you are protecting something or someone else. Set up a duress PIN so that you have options in that case.
If I was personally liable for damages, and there was an insurance program of some sort - similar to how doctors & dentists practice - sure, I'd probably still write code, very carefully. But if there was a decent chance of me spending the rest of my life in prison because of something I wrote on a Friday at 4pm under some amount of stress? No thanks. I can re-train as a plumber, and stand knee-deep in shit all day.
Insurance companies are very, very good at figuring out how to identify and price risk, once motivated to do so.
As a example of how they might be used in that fashion for profit, NSO group had a revenue of 240 million dollars in 2020. Many of their customers were governments who wanted to spy on activists and journalists. NSO group was in the business of economies of scale to democratize access to journalist devices by reusing a small stockpile of exploits across many targets with enough revenue to assure a steady stream of new exploits as fast as they were burned.
We recently published an exploit chain for the Google Pixel 9 that demonstrated it was possible to go from a zero-click context to root on Android in just two exploits. The Dolby 0-click vulnerability existed across all of Android, until it was patched in January 2026. While we had an exploit chain for the Pixel 9, we wanted to see if it was possible to write a similar exploit chain for Pixel 10.
Altering our exploit for CVE-2025-54957 was fairly straightforward. The majority of needed changes involved updating offsets calculated for the specific version of the library we targeted on the Pixel 9 to similar offsets in the library for Pixel 10. The only challenge (outside of wishing we’d better documented which syncframes contained offsets) was that the Pixel 10 uses RET PAC in the place of -fstack-protector, which meant that __stack_chk_fail wasn’t available to be overwritten by code. After a bit of trial and error, we used dap_cpdp_init, initialization code that can be overwritten without causing functional problems, as it is called once when the decoder is initialized and never again. The updated Dolby UDC exploit is available here. This exploit will only work on unpatched devices (SPL December 2025 or earlier).
Porting the local privilege escalation link of the chain to Pixel 10 was not feasible as the BigWave driver does not ship on this device. However, a new driver is visible in the mediacodec SELinux context at /dev/vpu. This driver is used for interacting with the Chips&Media Wave677DV silicon on the Tensor G5 chip meant for accelerating video decoding. Based on the comments within the open-source C files, this driver is developed and maintained by the same set of developers who built the BigWave driver. Working in collaboration with Jann Horn, we spent 2 hours auditing this VPU driver and discovered an exceptional vulnerability.
Unlike the upstream Linux driver for WAVE521C (which is an older Chips&Media chip), the Pixel driver for WAVE677DV does not integrate with V4L2 (the “Video for Linux API”); instead, it directly exposes the chip’s hardware interface to userspace, including letting userspace map the chip’s MMIO register interface. The driver mainly establishes device memory mappings, does power management, and allows userspace to wait for interrupts from the chip.
This bug in particular caught our attention as exceptionally simple to exploit:
static int vpu_mmap(struct file *fp, struct vm_area_struct *vm)
{
unsigned long pfn;
struct vpu_core *core =
container_of(fp->f_inode->i_cdev, struct vpu_core, cdev);
vm_flags_set(vm, VM_IO | VM_DONTEXPAND | VM_DONTDUMP);
/* This is a CSRs mapping, use pgprot_device */
vm->vm_page_prot = pgprot_device(vm->vm_page_prot);
pfn = core->paddr >> PAGE_SHIFT;
return remap_pfn_range(vm, vm->vm_start, pfn, vm->vm_end-vm->vm_start, vm->vm_page_prot) ? -EAGAIN : 0;
}
This mmap handler is intended to be used in order to map the MMIO register region of the VPU hardware into the userland virtual address space - a region contained within a certain physical memory address range. In doing so, it makes a call to remap_pfn_range based purely on the size of the VMA and not at all bounded to the size of this register region. This means that, by specifying a size larger than the register region in an mmap syscall, the caller can map as much physical memory as they want into userland, starting at the physical address of the VPU register region. The entirety of the kernel image (including .text, and .data region) is located at a higher physical address than the VPU register region, and can therefore be accessed and modified by userspace with this bug.
At this point, one can simply overwrite any kernel function to gain kernel code execution - or indeed any primitive one might desire. This is rendered even easier by the fact that the kernel is always at the same physical address on Pixel and so the offset between the VPU memory region and the kernel is always a known value. Thus it is not even necessary to scan for the kernel in the mapped physical memory - you simply know exactly where it is relative to the address returned by mmap, presuming you make the VMA length large enough.
Achieving arbitrary read-write on the kernel with this vulnerability required 5 lines of code and writing a full exploit for this issue required less than a day of effort.
I reported this bug on November 24, 2025 and Android VRP rated the issue High severity. This is an improvement, given that the BigWave bug we used for privilege escalation on the Pixel 9 (which had identical security impact) was initially rated as Moderate severity. This represents a meaningful and positive change in posture regarding how these types of bugs are triaged and patched. The vulnerability was patched 71 days after its initial report, in the February Pixel security bulletin. This is notably fast given that this is the first time that an Android driver bug I reported was patched within 90 days of the vendor first learning about the vulnerability.
There are both positives and negatives to take from this research. A key goal of Project Zero is to drive systemic improvements that go beyond individual bug fixes, influencing better development processes and more resilient codebases that lead to improvements in security for end-users. The handling of this VPU vulnerability demonstrates clear progress in Android’s triage pipeline, as this bug had an initial remediation in a much shorter period of time than the previous BigWave issues. Android’s effort to ensure that serious vulnerabilities are patched efficiently will help protect many Android devices.
At the same time, this case underscores the ongoing need for more exhaustively robust and security-aware code in Android drivers. When I reported the bugs in BigWave, I hoped to spur its developers to evaluate their other drivers for obvious security issues, but 5 months later we nevertheless found a serious and extremely shallow vulnerability in their VPU driver that was instantly noticeable with even a cursory audit of the codebase. Strengthening driver security remains a crucial priority for ensuring a safe Android ecosystem, and we continue to strongly encourage vendors to improve software development practices in a proactive effort to prevent these sorts of vulnerabilities from ever reaching end-users.
Security reports often uncover complex issues missed by the product teams but it is important that software vendors take necessary steps to ensure software products, especially security-critical ones, launch in a reasonably vulnerability-free state and that software teams take a proactive approach to software security, code auditing, and vulnerability patching.
Also from what I've seen there are way too many GA accidents involving airline pilots for the insurers to eat that loss. They almost invariably have superior skills, but some of them more than compensate with risk taking.
That is still quite a small pool, and there are other network effects preventing any Joe Bloggs with that much capital from launching an exploitation campaign.
But if they noticed that they were paying out more than expected on these $500k deals, the insurance would change quite quickly.
The same thing happened with GA insurance - there was an assumption that airline pilots would be safer but it didn't really turn out as expected, because a 747 has a heck of a lot more "keep you safe" doohickeys and doesn't fly low to the ground much.
To, once again, use the same example of NSO Group, as it is infamous and well-documented [1]: in 2016 it was $500,000 upfront and $650,000/year for 10 devices. That article claims Saudi Arabia was monitoring 15,000 phones at an average cost of $10,000/phone. In [2] it was $7 million for 15 devices, but the upfront versus marginal cost per device is not broken down. And this was a relatively "above-board" company in the sense that they were a legitimate business entity with government deals, which commands a premium relative to a random unknown blackhat organization with no reputation.
And again, my original comment was discussing commercial profit-motivated attackers, for whom $1 million is easily within reach and just a cost of doing business to unlock greater profits. That is less than the cost of setting up a McDonald's. There is a vast, vast gap spanning factors of millions between Joe Schmo and commercial actors, and an even vaster gap to state actors. There is no evidence that Lockdown mode is adequate against even commercial actors, let alone the vastly more capable state actors.
[1] https://prodefence.io/news/pegasus-spyware-operating-costs-c...
[2] https://www.reuters.com/business/media-telecom/meta-suit-aga...