Delete Chrome's silent 4 GB AI model file and disable its AI features
In Chrome, go to: chrome://flags
Search for and disable these flags:
- Enables optimization guide on device
- Prompt API for Gemini Nano
- AI Mode
Open DevTools (F12 or Ctrl+Shift+I). Click the Settings (gear icon).
Go to AI Innovations and uncheck Enable AI assistance.
For Linux, in a bash shell, the following should prevent Chrome from downloading the file again, because the root user, instead of your user, will own the placeholder files:

sudo rm -rf ~/.config/google-chrome/OptGuideOnDeviceModel
sudo rm -rf ~/.config/google-chrome/Default/OptGuideOnDeviceModel
sudo touch ~/.config/google-chrome/OptGuideOnDeviceModel
sudo chmod 400 ~/.config/google-chrome/OptGuideOnDeviceModel
sudo touch ~/.config/google-chrome/Default/OptGuideOnDeviceModel
sudo chmod 400 ~/.config/google-chrome/Default/OptGuideOnDeviceModel

In case the placeholders already exist from doing the above previously, make sure root owns them:

sudo chown root:root ~/.config/google-chrome/OptGuideOnDeviceModel
sudo chown root:root ~/.config/google-chrome/Default/OptGuideOnDeviceModel

List them to check:

ls -l ~/.config/google-chrome/OptGuideOnDeviceModel
ls -l ~/.config/google-chrome/Default/OptGuideOnDeviceModel

It reminds me of the "dialup warnings" common two decades ago on huge pages (often containing many images). Yes, bandwidth and storage have gotten cheaper, but the unwanted waste should still be called out. I'm not even anti-AI - I recently waited several hours to download some local models to experiment with - but that's because I wanted to and made the decision to use that bandwidth.
https://chromeenterprise.google/policies/#GenAILocalFoundati...
The prompt API can be tested here: https://chrome.dev/web-ai-demos/prompt-api-playground/
It would be really helpful if there was a way to download the model to a central location, so multiple users on a single system could easily share it.
Works without Javascript, no CAPTCHA, no DDoS, no geoblocking, etc.
https://web.archive.org/web/20260504192142if_/https://www.th...
Now we can argue whether or not it's an appropriate amount of disk space or bandwidth to use, but that's just a reasonable practical discussion to have. Framing it around consent is unnecessarily inflammatory and makes it harder to have a discussion, not easier.
https://developer.chrome.com/docs/ai/prompt-api
When Chrome 148 releases tomorrow, this will be the default behaviour on desktop.
To download, Chrome should check for 22 GiB of free disk space on the volume holding your Chrome data dir, and at least double the model size in free space in your tmp dir.
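A minimal sketch of that pre-flight check, assuming the thresholds quoted above (22 GiB for the data volume, twice the 4 GiB model for tmp); the function names and default paths are illustrative, not Chrome's actual logic:

```shell
# free_gib: report free space (in GiB) on the filesystem containing a path.
# df -Pk prints portable 1024-byte blocks; field 4 of line 2 is "Available".
free_gib() {
  df -Pk "$1" | awk 'NR==2 { printf "%d\n", $4 / (1024 * 1024) }'
}

# check_model_preconditions: mirror the thresholds described above.
# Arguments (both optional): data dir, tmp dir.
check_model_preconditions() {
  data_dir="${1:-$HOME/.config/google-chrome}"
  tmp_dir="${2:-${TMPDIR:-/tmp}}"
  if [ "$(free_gib "$data_dir")" -lt 22 ]; then
    echo "insufficient space in data dir"; return 1
  fi
  # Double the ~4 GiB model size for the unpack step.
  if [ "$(free_gib "$tmp_dir")" -lt 8 ]; then
    echo "insufficient space in tmp dir"; return 1
  fi
  echo "ok"
}
```

Useful mainly for checking, before an update lands, whether your machine is even "eligible" for the push.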
As much as I’m against unexpected 4GB bloat for an AI model, I’d much prefer it to install one copy, system-wide. 4GB per Windows or Linux lab machine, rather than a 4TB minimum load on our NFS server and 4GB downloads per user, per machine on our Windows labs.
2018? An estimate from 8 years ago is going to be off by a factor of 10 or so.
Not sure you'd get far with the legal arguments unless you're actually a lawyer. Too easy to misunderstand the jargon (i.e. the same reason why it's dangerous to use an LLM as your lawyer).
(As an aside, the whole thing reads to me like the style LLMs use; not saying for sure it was, just giving me those vibes).
Quitting Chrome these days is the easiest thing to do. The writing is on the wall. You don't control the browser on your network, Google does. And for better or worse, Google's priority is AI at this time.
Sysadmins should take notice.
If the network is ~65% chrome and thus deemed painful, take the gradual approach. Do not push chrome on new devices or users. Watch that problem slowly go away.
Gemma 4 E4B is a much better model, but it's too large to simply download and run everywhere.
IMHO, this is jumping the gun. Google's going through a lot of effort to release a model that will give everyone a very poor first impression of what on-device models are capable of, souring it for everyone for a long time afterwards. It would be better to wait until a smaller, better model ships before doing this.
# From one's $HOME dir:
rm -fr ./.config/google-chrome/OptGuideOnDeviceModel
mkdir -p ./.config/google-chrome/OptGuideOnDeviceModel
touch ./.config/google-chrome/OptGuideOnDeviceModel/weights.bin
chmod 0400 ./.config/google-chrome/OptGuideOnDeviceModel/weights.bin
chmod 0500 ./.config/google-chrome/OptGuideOnDeviceModel
Adapt as appropriate for your OS. For "Chrome Unstable" installs, the dir name is google-chrome-unstable.

This has, so far, kept Chrome from (re)installing that file on my system.
Hypothetically the parts involving weights.bin aren't needed so long as the containing directory is not writable.
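A small hypothetical helper to verify the decoy layout above is still doing its job - i.e. that the placeholder weights.bin is still the empty file you created, not a real multi-gigabyte model Chrome has re-created (function name and output strings are mine, not from any tool):

```shell
# guard_intact: check that OptGuideOnDeviceModel still holds only the
# zero-byte placeholder weights.bin, not a re-downloaded real model.
guard_intact() {
  dir="$1"
  [ -e "$dir/weights.bin" ] || { echo "missing placeholder: $dir"; return 1; }
  # tr strips the leading spaces BSD wc emits.
  size=$(wc -c < "$dir/weights.bin" | tr -d ' ')
  if [ "$size" -eq 0 ]; then
    echo "guard intact: $dir"
  else
    # A real model download would be ~4 GB.
    echo "real model present (~$((size / 1024 / 1024)) MB): $dir"
    return 1
  fi
}
```

Run it occasionally, e.g. guard_intact ~/.config/google-chrome/OptGuideOnDeviceModel, especially after Chrome updates.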
Environmental analysis for operations? Not a fan of thinking in such terms.
> For users on capped mobile data plans, particularly in regions where smartphone-as-only-internet is dominant (much of Africa, much of South and Southeast Asia, most of Latin America), 4 GB of unrequested download is on the order of a month's data allowance, vapourised by Chrome on the user's behalf. Google has not, to my knowledge, published any analysis of the welfare impact of this on the populations whose internet access is metered.
THIS is a valid concern. Otherwise I'm not buying into "ask for consent because of dependency X". Users don't like questions/consents.
However, the OS (at least Windows) has a way to mark a network connection as metered so software can make informed decisions. Android also has a "Data Saver" function, which software should likewise honor.
https://web.archive.org/web/20260505052217/https://www.thatp...
https://archive.ph/sM7O5 (missing images and styling, but the content all seems to be there)
The good point in this article is about how the "AI" features in Chrome all use Google's cloud API and not a local model. That's true and some of it should be local. ("AI mode" uses the Web index, so it fundamentally cannot be local, but there are features that could be.)
Or Firefox of course.
The language model should be an OS service.
Brave has always just worked for me and seems light on memory usage. Dunno why anyone would use chrome.
So if you see this as just a new feature that provides some on-device AI, it's a bit of a "so what?" A new feature? The last GT7 or Flight Sim patch was bigger than this. What's the big deal, etc.
However, that's not really what's going on. In theory Chrome gives you a local LLM that can provide local AI-powered features. In practice, everything gets sent to the cloud anyway, so the local LLM seems mostly to exist as a disguise for that, which is shady AF.
As others have pointed out, the solution is https://www.firefox.com/. And whilst it's been trendy on HN for several years to slag off Firefox and Mozilla, I went back to Firefox as my daily driver several years ago, and Chrome's high-handed enforcement of Manifest V3 extensions (meaning no full fat uBlock Origin) has only served to cement that decision.
It's mostly been great. The only downside is that some sites don't work properly on Firefox, and I'm 99.999% sure that's not Firefox's fault.
For example, Paypal's post-login verification step breaks so every time I want to buy something using Paypal I have to switch to Chrome. And, no, disabling uBlock Origin and other extensions on Paypal doesn't help - I've done this already. Seriously, Paypal, it's been months: will you please just fix signing in and paying on Firefox, please?
And many sites will assume you're a bot first and ask questions later if you hit them with anything other than Chrome or Safari... which is also extremely lame and scummy.
So where is the line we draw where bait-and-switch goes from being acceptable to unacceptable?
MA Chapter 93A for example clearly says that businesses are prohibited from "unfair or deceptive practices" including misrepresentation or concealing defects. Where do you think the line should be?
If you market a product as a browser and its codebase is 10% browser-related and 90% some other program... should Google have to correctly represent that product?
Related; If you didn't like when Apple forced you to use Siri on your phone, why did you purchase a Mac? Did you not expect them to continue disrespecting your sovereignty after you let them get away with it the first couple dozen times?
What isn't part of the software? Can they just install as much garbage they want to, as long as they claim it is part of the "browser"?
Also, scale absolutely matters. If I pull up in front of your house and say "hey, mind if I park here?" and you say yes, then I park, walk away, and 10 minutes later park a fleet of 18 wheelers in front of your house, you're going to feel like I wasn't...entirely forthcoming about what I intended.
You're not even the customer when it comes to Google.
They simply read your mails, how would you expect there to be anything resembling decency in a company like that? It is the ad business.
Bad thing is that people still use gmail.
Not everyone has access to the same infrastructure you have.
So make your own judgement, but this seem pretty significant to me.
[1]: https://www.aboutchromebooks.com/global-chrome-user-base/ [2]: https://www.iea.org/commentaries/the-carbon-footprint-of-str... [3]: https://www.anthesisgroup.com/insights/what-exactly-is-1-ton...
2.5 million downloads of 4 GB are 10 PB of traffic.
I think there are a lot more than 2.5 million Chrome users in the world.
Sorry folks, your low bandwidth situation is not, in fact, a climate change emergency.
Curious if Google plans to allow other browsers doing that too.
Why not? It's about 60 000 London - New York City flights by the way (https://www.theguardian.com/environment/ng-interactive/2019/...). And what's the benefit again?
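The thread's per-user and fleet-wide CO2 figures can be reproduced with back-of-envelope arithmetic. Everything below is an assumption for illustration - the transfer intensity (kWh/GB), grid carbon factor, and device count are all contested inputs, not measurements, and the function name is mine:

```shell
# estimate_co2: users x 4 GB x network intensity x grid carbon factor.
# All coefficients are illustrative assumptions, not measured values.
estimate_co2() {
  awk -v users="$1" 'BEGIN {
    gb         = 4      # model size, GB
    kwh_per_gb = 0.03   # assumed network transfer intensity, kWh/GB
    g_per_kwh  = 400    # assumed grid carbon intensity, g CO2e/kWh
    g_per_user = gb * kwh_per_gb * g_per_kwh
    printf "per user: %.0f g CO2e, total: %.0f tonnes CO2e\n",
           g_per_user, users * g_per_user / 1.0e6
  }'
}
estimate_co2 1000000000   # assumed 1e9 devices receiving the push
```

With these inputs it lands near the ~60 g/user and tens-of-thousands-of-tonnes figures quoted elsewhere in the thread; halve or tenth the coefficients and you get the lower end of the article's 6,000-60,000 t range.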
Unfortunately, that automation is unreliable. It doesn't work across operating systems - Windows laptops won't enable data-saver mode when connected to iPhones and macOS laptops won't when connected to Android phones, and neither will enable it when connected to, say, public transport wifi.
And even if the OS has the information, websites can't reliably use it either. Firefox and Safari both don't implement the NetworkInformation API [1].
[1] https://developer.mozilla.org/en-US/docs/Web/API/NetworkInfo...
Honestly, for most features you could justifiably say it's fine. I mean, honestly, how large is an English dictionary? 100 KiB? That is a far cry from 4 GiB. Just taking up 4 GiB of disk space without even asking is a shit move no matter how you shake it. If Microsoft Word updated and suddenly took up 4 GiB more for something like a dictionary, it might not cause as much uproar as something many people are tired of hearing about and not interested in, but I doubt you would find a single soul who would find it acceptable - tolerated at most, probably partly because a lot of people simply wouldn't know better.
Shipping an AI model with a browser is starting to look like sticking cameras on ALL glasses, not just smart glasses, regardless of whether anyone wants that. Saying this is fine and not unusual is clearly motivated reasoning and just normalizes the surveillance state. It's very obvious the way this ends. Browser-based models will eventually be using your computer at the edge to save corporate money in the cloud while they do ever more expensive and invasive stuff to profile you.
For me the most significant problem is the lack of consent. I assume that's just not how you want to frame the problem. But ignoring the problematic parts of some behavior is a common pattern in modern software, and it's exactly what the article is complaining about.
You're right in the sense that practicality and consent are orthogonal issues. There are probably stronger arguments to complain about a feature than the disk use.
Chrome is not entitled to my disk space just because I installed it and Microsoft has been excoriated for the exact same behaviour with AI.
[1] Used since forever by the Tobacco & Pharmaceutical, Fossil Fuels & Climate, Food & Diet Industries.
That is about 60 grams of CO2 per user?
For one, not everyone in this world lives on high bandwidth unmetered connections. In Germany, you got a lot of people still running on 16 MBit/s ADSL, that's half an hour worth of full load just for AI garbage. With the average 50 MBit/s, it's still 10 minutes. For those running on hotspots - be it their phone with often enough 10 GB or less on your average data plan or train hotspots that cut you off after 200MB - the situation is similarly dire.
The other thing is storage. I got a nominally 256GB MacBook Air. Of these 256 GB, easily 50GB are already gone for macOS itself, swap, Recovery and everything that macOS doesn't store as part of the immutable partition (such as, you guessed it, its own AI models). Taking up 2% of the disk space without consent is definitely Not Cool.
You mean like Siri? It does the exact same thing and no one asked for it, either. That shit barely works too.
You know, I never thought about it like that, but it is true. The bloat and spyware is a core part of the OS now.
It's just more efficient that way!
A local Gemini Nano might be useful!
I wonder what that will do for the competition between hosted genai and local models...
An Xbox game can be 50+ gigs. Millions of gamers. Fire up the presses!
I'm not at all saying nothing matters so we shouldn't care. I just disagree about the utility of calling out specific things out of proportion to their place in the climate crisis. Tackle AI, yes, and fast fashion and cars, and ... that one change to Chrome? I guess if that's where you want to put your energy, Sisyphus.
When Firefox does it, it sparks outrage across the internet, with entire forums filled with people vowing to leave Firefox forever and switching to something like Waterfox or Ilp/Zorp/Floop instead.
As a result, searching for experiences other people had with Firefox makes it sound like hell on earth, while people have little more to say about Chrome other than "Google gonna Google, but it's fast at least".
It's how you get things like "Browser monocultures are an issue, so don't use Chrome (Blink), use Brave (Chromium (Blink)) instead!" said in earnest.
You just described 95% of the parts of all software, especially in this era. And think of the Web - how many gigabytes of terrible adtech and tracking code does the average user download in a month of web browsing without an adblocker? Remember, each one probably packages in a couple hundred NPM dependencies into its bundle.
I don't have even a single use for Siri on my Mac. It's useless AND redundant with the Siri that I have to have on my phone, yet Apple downloaded and installed "Siri" on there. If I install GarageBand which is the only first-party way to do basic audio manipulation, Apple installs at least 4GB of audio samples on my Mac.
None of this is to say "I approve of this exact thing Google is doing" - just that I agree with GP that this is exactly the same as what every big company (and many small ones) do every day.
The only "consent" we ever get is basically the all-or-nothing EULA we have to click Agree to in order to log in for the first time - the relevant terms are "Want computer? Accept that we will be shipping you all kinds of code constantly, for 'reasons.'"
If tomorrow Google were to include a blockchain miner in Google Chrome, would you still say you consented to it by using their software?
Because I'm pretty sure that this LLM is also going to be used by Google to gather data on the user and feed it back to Google - so, just like the blockchain miner, using our computers' resources (storage and performance) to pad Google's yearly profits.
Question from last November, even referring to macOS, by @paulirish: https://superuser.com/q/1930445/can-i-delete-the-chromes-opt...
With policy setting, debug url, docs in the answers.
One search away.
One option I'm leaving as default is "Use LiteRT-LM runtime for on-device model service inference." Any comment on that?
Google should know better. Chrome has local administrator permissions anyway (w/ its updater) so they should have installed a single copy for the entire machine.
It's not cool to give a damn about the people who keep mundane stuff like desktop infrastructure, file servers, etc, working, I guess. The wanton disregard to even talk to a single in-the-trenches corporate sysadmin seems like malice.
The tactic used to work even as prevention to common RPC exploits (viruses/worms) on windows as well (in the early 2000s).
There’s nothing stopping Google Chrome from doing something similar except, I suspect, Google knows or feels it will result in many fewer installs of its bloatware.
A 4 GiB model has nothing to do with the functionality of a web browser. It is something forced on users without their consent.
Of course that's what we get for giving the benefit of doubt to the company that insisted on learning the wrong things from the Google Buzz fiasco.
It's good to have something to work with if these Web APIs are going to be part of a standard. I suppose this means that ALL the browser vendors are likely to implement something
Or this summary on its status:
> Mozilla: Opposed
> WebKit: Opposed
> Microsoft: Several concerns
> W3C TAG: Several concerns
> Developers: Mostly negative
From https://mastodon.social/@jaffathecake/116527007495775507
He even boasts about it on twitter.
Meanwhile car batteries are draining faster. Costing the consumer electricity & battery cycle life.
#omnibox-ml-url-scoring-model
#omnibox-on-device-tail-suggestions
#optimization-guide-on-device-model
#text-safety-classifier
#prompt-api-for-gemini-nano
#writer-api-for-gemini-nano
#rewriter-api-for-gemini-nano
#proofreader-api-for-gemini-nano
#summarizer-api-for-gemini-nano
#on-device-model-litert-lm-backend
Then there are Gemini-related flags not caught by searching for "model": #skills (maybe? I think this one is implied by "Gemini in Chrome"?)
edit: I don't see a carte blanche AI-disabling option. As much as I dislike Mozilla's growing obsession with AI, at least they give me a top-level option to disable all AI stuff. I only keep Chrome around for occasional testing reasons.
I'm not even sure that is actually true for most people. If you mainly work in the browser, which many do, then you can change the OS under it without impacting the user too much but change the browser and there will need to be much more to adapt/relearn.
If Chrome shipped a crypto miner and used the resulting coins generated on my device to let me automatically bypass paywalls with micropayments that would be way better than if they shipped the same and just took the coins.
https://adsm.dev/posts/prompt-api/#which-browsers-support-th...
That said, you might be surprised to learn that some of the models from 3b-9b could probably replace 80% of the things nonvibe coders use chatgpt for.
It's a good idea to run small models locally, if your computer can host them, for privacy and cash-saving reasons. But how can you trust Google to auto-install one on your machine in 2026? I just couldn't do it.
I find it works fine for simple classification, translation, interpretation of images & audio. It can write longer prose, but it's pretty bad.
It can also write text in the format of a JSON schema or regexp for anything you might want to do with structured data.
Not happy about that as I would like to see more local models but that's the current state of things.
https://sendcheckit.com/blog/ai-powered-subject-line-alterna...
"optimization-guide-on-device-model"
- Enables optimization guide on device
"prompt-api-for-gemini-nano"
- Prompt API for Gemini Nano
- Prompt API for Gemini Nano with Multimodal Input
and deleted weights.bin and the 2025.x folder in "OptGuideOnDeviceModel"
Will report if Chrome 148 downloads the model again.
That other flag is for using a different open-source inference engine to the (from what I can tell) closed-source one that's used by default.
Additionally, the cited number also conflates wired internet (low power consumption) with mobile internet (higher), even though this model is only being downloaded to Chrome Desktop AFAICT.
If Chrome had installed 4GB for some other tooling that most people don't need, would anyone care? My operating system installs with a million default packages that I don't need. Users install applications with optional features all the time. Applications install additional tooling so that they'll function all the time.
To the other point: of course Claude Desktop modifies the browser--that's how it works. Most apps install integrations with existing apps. Often apps install a whole collection of plugins, even for things the user doesn't use, so they're available if the user does start using the other apps.
The fact that this happens to be AI-related is a moot point. The environment concern is utter nonsense. They're not using everyone's browser to power AI for others as some kind of shared collective resource. 4GB is not a lot of data in the grand scheme of things (beyond general application bloat). I have more than 4GB worth of ads shoved in my face every month.
The legal argument is facile as well. When you install any application, its terms of service cover functional updates and additions. You don't have to explicitly consent to all of them.
Other than the size of it, I don't have any problem with anything this article is mentioning.
This is a huge nothingburger that only caught peoples' attention because of the irrelevant mention of AI.
Summarizer.create()
[0]: https://developer.chrome.com/docs/ai/summarizer-api#model-do...

I think this is a distinct model from the Prompt API, since the other shipped AI APIs use fine-tuned models.
Because my Chrome stable has been updated to v148 now, and I don't see any AI models in my user profile folder. My profile size is only 328 MB, with the Code Cache subfolder occupying the most space (135 MB).
With LibreWolf I can get proper WebGL, full uBlock Origin - with the AI blocklist too, to avoid all the slop - and Bypass Paywall Clean from GitFlic or wherever it was hosted. Yeah, my local newspaper won't get much advert money, but the rest of the local company ads show up fine even with uBO/BPC, so they get some money after all.
On RAM usage, LibreWolf is far lighter over the long term and it doesn't phone home like Firefox, and it uses many times less than Chrom*-based browsers - where, I repeat, Chrome-based browsers don't allow uBO any more even if it's installed from its GitHub repo by forcing some about:flags variables related to legacy extension support.
The web today without uBO is unmanageable. Popups, more than the ones from 2003, malware disguised as ads even on mainstream, safe sites, and all of it running zillions of cookies and trackers, turning your - otherwise perfectly usable - old amd64 Celeron machine with 2 GB of RAM into some crawling Pentium III with 256 MB of RAM. With LibreWolf and uBO I could even test Yandex Maps with Prypiat and the like, and InstantStreetView too. No slowdowns, no OpenGL >= 3.3/Vulkan video card required, and no need to own an 8 GB machine.
HN developers without uBO who depend on the web for documentation are a bit screwed if they use Chrom*-based browsers, sorry. Half of the resources of their machines couldn't be used - you know, for IDEs, compilers, virtual machines/containers and whatnot. And, yes, I know about ZRAM under GNU/Linux; just imagine how many tasks anyone could accomplish with a ZRAM-compressed chunk (~1/3 of the physical RAM), a light desktop environment such as Lumina/LXQt and a non-Chrom* browser blocking all pests. Up to 3x more tasks on the same machine. No need to waste money on upgrades, and compiling cycles are cut down for the good.
I doubt anyone would appreciate software bloat purely because of how widespread it is[1] - it just hasn't risen to the level where it's so noticeable for such a contemporarily controversial topic yet.
1. As an aside - ubisoft game sizes are absolutely bonkers. I didn't realize that each Assassin's Creed had twelve different operating systems crammed into it but I can't see how else they're clocking in where they do.
> You just described 95% of the parts of all software, especially in this era. And think of the Web - how many gigabytes of terrible adtech and tracking code does the average user download in a month of web browsing without an adblocker? Remember, each one probably packages in a couple hundred NPM dependencies into its bundle.
So what are you saying? Don't be mad over this becoming the norm, just shut up and sit down and accept it?
The alternative is sending the data to Google.
Two weeks ago I wrote about Anthropic silently registering a Native Messaging bridge in seven Chromium-based browsers on every machine where Claude Desktop was installed [1]. The pattern was: install on user launch of product A, write configuration into the user's installs of products B, C, D, E, F, G, H without asking. Reach across vendor trust boundaries. No consent dialog. No opt-out UI. Re-installs itself if the user removes it manually, every time Claude Desktop is launched.
This week I discovered the same pattern, executed by Google. Google Chrome is reaching into users' machines and writing a 4 GB on-device AI model file to disk without asking. The file is named weights.bin. It lives in OptGuideOnDeviceModel. It is the weights for Gemini Nano, Google's on-device LLM. Chrome did not ask. Chrome does not surface it. If the user deletes it, Chrome re-downloads it.
The legal analysis is the same one I gave for the Anthropic case. The environmental analysis is new. At Chrome's scale, the climate bill for one model push, paid in atmospheric CO2 by the entire planet, is between six thousand and sixty thousand tonnes of CO2-equivalent emissions, depending on how many devices receive the push. That is the environmental cost of one company unilaterally deciding that two billion people's default browser will mass-distribute a 4 GB binary they did not request.
This is, in my professional opinion, a direct breach of Article 5(3) of Directive 2002/58/EC (the ePrivacy Directive) [2], a breach of the Article 5(1) GDPR principles of lawfulness, fairness, and transparency [3], a breach of Article 25 GDPR's data-protection-by-design obligation [3], and an environmental harm of a magnitude that would be a notifiable event under the Corporate Sustainability Reporting Directive (CSRD) for any in-scope undertaking [4].
On any machine that has Chrome installed, in the user profile, sits a directory whose name is OptGuideOnDeviceModel. Inside it is a file called weights.bin. The file is approximately 4 GB. It is the weights file for Gemini Nano. Chrome uses it to power features Google has marketed under names like "Help me write", on-device scam detection, and other AI-assisted browser functions.
The file appeared with no consent prompt. There is no checkbox in Chrome Settings labelled "download a 4 GB AI model". The download triggers when Chrome's AI features are active, and those features are active by default in recent Chrome versions. On any machine that meets the hardware requirements, Chrome treats the user's hardware as a delivery target and writes the model.
The cycle of deletion and re-download has been documented across multiple independent reports on Windows installations [5][6][7][8] - the user deletes, Chrome re-downloads, the user deletes again, Chrome re-downloads again. The only ways to make the deletion stick are to disable Chrome's AI features through chrome://flags or enterprise policy tooling that home users do not generally have, or to uninstall Chrome entirely [5]. On macOS the file lands as mode 600 owned by the user (so it is deletable in principle) but Chrome holds the install state in Local State after the bytes are written, and as soon as the variations server next tells Chrome the profile is eligible, the download fires again - the architecture is the same, only the file permissions differ.
Most of the existing reporting on this behaviour is from Windows users who noticed their disk filling up - useful, but Google could (and probably will) try to characterise those reports as anecdotes from non-representative configurations. So I went looking for a clean witness on a different platform.
The witness I found is macOS itself. The kernel keeps a filesystem event log called .fseventsd - it records every file create, modify and delete at the OS level, independent of any application logging. Chrome cannot edit it, Google cannot remotely reach it, and the page files that record the events survive the deletion of the files they reference.
I created a Chrome user-data directory on 23 April 2026 to run an automated audit (one of the WebSentinel 100-site privacy sweeps). The audit driver is fully Chrome DevTools Protocol - it loads a page, dwells for five minutes with no input, captures events, closes Chrome between sites - and the profile had received zero keyboard or mouse input from a human at any point in its existence. Every "AI mode" surface in Chrome was untouched - in fact every UI surface in Chrome was untouched, the audit driver only interacts with the document via CDP and the omnibox is never reached. By 29 April the profile contained 4 GB of OptGuideOnDeviceModel weights - and I knew it because a routine du -sh of the audit-profile directory caught it during a cleanup pass.
I went back to .fseventsd to ask exactly when those 4 GB landed. macOS gave me the answer, byte-precise, in three sequential page files:
1. The OptGuideOnDeviceModel directory is created in the audit profile (page file 0000000003f7f339).
2. Three temp directories appear under /private/var/folders/.../com.google.Chrome.chrome_chrome_Unpacker_BeginUnzipping.*/. One of them (5xzqPo) writes weights.bin, manifest.json, _metadata/verified_contents.json and on_device_model_execution_config.pb. The second writes a Certificate Revocation List update. The third writes a browser preload-data update. Chrome batched a security update, a preload refresh and a 4 GB AI model into the same idle window, as if they were equivalent (page file 00000000040c8855).
3. weights.bin is moved to its final location at OptGuideOnDeviceModel/2025.8.8.1141/weights.bin along with adapter_cache.bin, encoder_cache.bin, _metadata/verified_contents.json and the execution config. Concurrently, four additional model targets (numbered 40, 49, 51 and 59 in Chrome's optimization-guide enum) register fresh entries in optimization_guide_model_store - these are the smaller text-safety and prompt-routing models that pair with the LLM. None of these targets existed in the profile before this moment (page file 00000000040d0f9c).
Total install time, from directory creation to final move: 14 minutes and 28 seconds. Total human action against the profile during that window: none. The audit driver was either dwelling on a third-party home page or transitioning between sites - the unpacker fired in the background while a tab waited for a five-minute timer to expire.
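For readers who want to repeat this check on their own Mac: the .fseventsd page files are gzip-compressed, and event paths survive inside them as embedded strings, so a binary-safe grep over the decompressed bytes is enough to find which page files mention a path. This is a hedged sketch, not the exact tooling used for this post - reading /.fseventsd requires root, and the function name and parametrised directory are mine:

```shell
# scan_fseventsd: list page files whose decompressed contents mention a
# given string (e.g. a path component). First arg: fseventsd directory
# (defaults to /.fseventsd); second arg: the string to look for.
scan_fseventsd() {
  dir="${1:-/.fseventsd}"
  needle="$2"
  for page in "$dir"/0*; do
    # grep -a treats the decompressed binary record stream as text.
    if gunzip -c "$page" 2>/dev/null | grep -aq -- "$needle"; then
      echo "$page"
    fi
  done
}
# Example: sudo bash -c '... scan_fseventsd /.fseventsd OptGuideOnDeviceModel'
```

The page files that match can then be examined record-by-record with proper fseventsd parsing tools; the grep only tells you where to look.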
The naming inside that fseventsd record is, if anything, the most damning detail. The temp directory is com.google.Chrome.chrome_chrome_Unpacker_BeginUnzipping.5xzqPo - that prefix com.google.Chrome.chrome_chrome_* is the bundle ID and subprocess naming convention Google Chrome itself uses. It is not com.google.GoogleUpdater.* and it is not com.google.GoogleSoftwareUpdate.*. The writer is Chrome - the browser process the user has installed and trusts to load web pages - reaching into the user's filesystem on its own initiative and laying down a 4 GB ML binary while the foreground tab does something completely unrelated.
Three further pieces of corroborating evidence sit elsewhere on the same machine:
Chrome's own Local State JSON for the audit profile contains an optimization_guide.on_device block with model_validation_result: { attempt_count: 1, result: 2, component_version: "2025.8.8.1141" }. Chrome ran the model. The component_version matches the version string the fseventsd events recorded as the path component. Two independent witnesses, same artefact. The same block reports performance_class: 6, vram_mb: "36864" - Chrome characterised my hardware (read the GPU, read the unified memory total) to decide whether I was eligible for the model push, before any user-facing AI feature surfaced.
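Checking your own machine for this block is straightforward, since Local State is plain JSON. A sketch, assuming the structure quoted above (the key names are as reported in this post; the helper name and macOS path in the comment are illustrative, and python3 is assumed to be available):

```shell
# on_device_state: print the optimization_guide.on_device block from a
# Chrome "Local State" JSON file, or {} if the block is absent.
on_device_state() {
  python3 - "$1" <<'PY'
import json, sys

with open(sys.argv[1]) as f:
    state = json.load(f)

# Walk optimization_guide -> on_device, tolerating either key being absent.
block = state.get("optimization_guide", {}).get("on_device", {})
print(json.dumps(block, indent=2, sort_keys=True))
PY
}
# Example (macOS path): on_device_state \
#   "$HOME/Library/Application Support/Google/Chrome/Local State"
```

If the model has been pushed to your profile, the output should include model_validation_result and the performance_class/vram_mb hardware characterisation described above.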
Chrome's ChromeFeatureState for the audit profile lists OnDeviceModelBackgroundDownload<OnDeviceModelBackgroundDownload and ShowOnDeviceAiSettings<OnDeviceModelBackgroundDownload in the enable-features block. The first flag is what triggers the silent download. The second flag is what reveals the on-device AI section in chrome://settings. Both are gated by the same rollout flag - which means that by Chrome's own architecture, the install begins before the user has any settings UI in which to refuse it. The settings page that would let you discover the feature exists is enabled in lockstep with the install - it is design, not oversight.
The GoogleUpdater logs record the on-device-model control component (appid {44fc7fe2-65ce-487c-93f4-edee46eeaaab}) being downloaded from http://edgedl.me.gvt1.com/edgedl/diffgen-puffin/%7B44fc7fe2-65ce-487c-93f4-edee46eeaaab%7D/... - a 7 MB compressed control file that arrived on 20 April 2026, three days before the audit profile in question was created. That is the upstream control plane: it is profile-independent, it is launched automatically by a LaunchAgent that fires every hour, and the URL is plain HTTP (the integrity is verified by the CRX-3 signature inside the package, not by transport security). The control component gives Chrome the manifest pointing at the actual weights, and Chrome's in-process OnDeviceModelComponentInstaller - a separate code path from GoogleUpdater - then fetches the multi-GB weights direct from Google's CDN.
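If you want to check your own updater logs for that control component, a small sketch follows. The appid is the one quoted above; the surrounding log format is hypothetical, so this simply searches for the identifier in whatever log text you feed it:

```python
import re

# Appid of the on-device-model control component, as quoted in this
# article's GoogleUpdater log excerpt.
ON_DEVICE_MODEL_APPID = "44fc7fe2-65ce-487c-93f4-edee46eeaaab"

def find_control_component_lines(log_text):
    """Return log lines mentioning the control component's appid.

    Works whether the log wraps the appid in braces or URL-encodes
    them as %7B...%7D, since we match only the GUID itself.
    """
    pattern = re.compile(ON_DEVICE_MODEL_APPID, re.IGNORECASE)
    return [line for line in log_text.splitlines() if pattern.search(line)]
```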
So we now have a four-way evidence chain - macOS kernel filesystem events, Chrome's own per-profile state, Chrome's runtime feature flags, and Google's component-updater logs - all four agreeing on the same conduct, and the conduct is: a 4 GB AI model arrived on this user's disk without consent, without notice, on a profile that received zero human input, in a window of 14 minutes and 28 seconds, on a Tuesday afternoon.
Reports of the OptGuideOnDeviceModel directory and the weights.bin file have been circulating in community forums for over a year - what is new in 2026 is the scale and the verifiability. Chrome's market share has held above 64% globally [9][10], Chrome's user base is between 3.45 billion and 3.83 billion individuals worldwide depending on which 2026 estimate you trust [9][11], and Google has been rolling Gemini features into Chrome with increasing aggression. The behaviour is no longer affecting a minority of power users on a minority of platforms - it is affecting hundreds of millions of devices, on every desktop OS Chrome ships against.
The same dark-pattern playbook. I am repeating my categorisation from the Claude Desktop article [1] because the patterns are identical and that is the point.
1. Forced bundling across trust boundaries. Anthropic installed Claude Desktop, then wrote into Brave, Edge, Arc, Vivaldi, Opera, and Chromium. Google installs Chrome, then writes a 4 GB AI model under the user's profile directory without authorisation. The binary is not Chrome. It is a separately-trained machine-learning model, with a separate purpose, a separate data-protection profile, and a separate consent footprint.
2. Invisible default, no opt-in. No dialogue at first launch. No checkbox in Settings. The model is downloaded; the user finds out about it months later when their disk fills up [5][6][7].
3. More difficult to remove than install. Adding the file took zero clicks. Removing it requires (a) discovering the file exists, (b) understanding what it is, (c) navigating into a hidden user profile path, (d) deleting it (and on Windows, also clearing the read-only attribute first), and (e) accepting that Chrome will silently re-download it on next eligible window unless the user also navigates chrome://flags, enterprise policy, or platform-specific configuration tooling to disable the underlying Chrome AI feature [5]. None of those steps is documented in the place a normal user looks - none of them is even hinted at in default Chrome.
4. Pre-staging of capability the user has not requested. The Nano model exists on the user's disk so that Chrome features that use it can run instantly when the user invokes them. The user has not invoked any of those features. The model still sits there, taking 4 GB.
5. Scope inflation through generic naming. OptGuideOnDeviceModel is internal Chrome jargon for "OptimizationGuide on-device model storage". A user looking at their disk usage, even one who knows roughly what they are looking at, would not match OptGuideOnDeviceModel/weights.bin to "Gemini Nano LLM weights". Accurate naming would be GeminiNanoLLM/weights.bin. Google chose to obfuscate the name.
6. Registration into resources the user has not configured. A user who has not opened Chrome's AI features still gets the model. A user who has opened them once and decided they were not interested still gets the model. The file's presence is decoupled from the user's actual use of any feature it powers.
7. Documentation gap. Google's user-facing documentation about Chrome's AI features does not, with the prominence proportionate to a 4 GB silent download, tell the user that the cost of the feature being available is a 4 GB file appearing on their device. The behaviour is documented in places a curious admin will find. It is not documented in the place a regular user looks before installing Chrome or before Chrome decides to begin pushing the model.
8. Automatic re-install on every run. Same as Claude Desktop. Delete the file, Chrome re-creates it. The user's deletion is treated as a transient state to be corrected, not as a directive to be respected.
9. Retroactive survival of any future user consent. If Google in future starts asking users "would you like Chrome to download a 4 GB AI model", that prompt does not retro-actively legitimise the silent installs that have already happened on hundreds of millions of devices. The damage to the trust relationship is done. The bytes have moved. The atmosphere has been written to.
10. Code-signed, shipped through the normal release channel. This is not test build behaviour. It is Chrome stable.
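Point 5's naming complaint is easy to verify on your own machine: locate the OptGuideOnDeviceModel directory and total its size. The candidate paths below are drawn from the community reports cited above and may differ on your install:

```python
import os

def dir_size_bytes(root):
    """Total on-disk size of a directory tree, skipping symlinks."""
    total = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            fp = os.path.join(dirpath, name)
            if not os.path.islink(fp):
                total += os.path.getsize(fp)
    return total

# Candidate locations for the model store; paths assumed from
# community reports - adjust for your install and profile.
CANDIDATES = [
    "~/.config/google-chrome/OptGuideOnDeviceModel",                      # Linux
    "~/Library/Application Support/Google/Chrome/OptGuideOnDeviceModel",  # macOS
    r"%LOCALAPPDATA%\Google\Chrome\User Data\OptGuideOnDeviceModel",      # Windows
]

def report(candidates=CANDIDATES):
    for c in candidates:
        p = os.path.expandvars(os.path.expanduser(c))
        if os.path.isdir(p):
            print(f"{p}: {dir_size_bytes(p) / 1e9:.2f} GB")
```

On an affected machine, report() should show a figure in the region of 4 GB - a number no ordinary user would connect to "OptGuide".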
Here is the part that should make every privacy lawyer in the audience put their coffee down. When Chrome 147 launches against an eligible profile, the omnibox - the address bar at the top of the window, the most visible piece of real estate in the entire browser - renders an "AI Mode" pill to the right of the URL field. A reasonable user, seeing "AI Mode" sitting in their browser's most prominent UI element in 2026, with the well-publicised existence of on-device LLMs in Chrome and a 4 GB Gemini Nano binary already silently installed on their disk, is going to draw what feels like an obvious inference - that the visible AI Mode is using the on-device model, that their queries stay on the device, that the local model is what powers the local-looking surface.
Every part of that inference is wrong. The AI Mode pill in the Chrome 147 omnibox is a cloud-backed Search Generative Experience surface - every query the user types into it is sent over the network to Google's servers for processing by Google's hosted models. The on-device Nano model is not invoked by the AI Mode UI flow at all. They are entirely separate code paths - the most visible AI affordance in the browser does not use the local model the user has been silently given, and the features that do use the local model (Help-Me-Write in <textarea>, tab-group AI suggestions, smart paste, page summary) are buried in textarea-context menus and tab-group right-click menus that the average user will discover, on average, never.
Think about what that arrangement actually is. The user pays the storage cost of the silent install (4 GB on disk, plus the bandwidth of the silent download). The user's most visible AI experience - the pill they actually see and click - delivers no on-device benefit at all because it routes to Google's servers regardless. The on-device model is therefore a sunk cost imposed on the user, with no offsetting transparency benefit at the surface where transparency would matter most. To put it another way - if the on-device install had given the user a clear "your AI Mode queries stay on your device" property, the install would have a defensible privacy framing (worse storage, better data flow). It does not - the install gives Google a future-options resource (the model can be invoked by other Chrome subsystems without further server round-trips) at the user's disk-and-bandwidth expense, while the headline AI surface continues to send the user's queries to Google as before. The local model is a Google-side asset positioned on the user's device - it is not a user-side asset, and one could argue the arrangement is nothing but sleight-of-hand to hide the fact that the visible AI Mode is NOT using the local model.
That arrangement, on its own, engages at least three of the deceptive design pattern families catalogued in EDPB Guidelines 03/2022 [20]. It is misleading information because the visible label "AI Mode" creates a false impression about where processing occurs - the label does not say "cloud-backed" or "queries sent to Google", and a reasonable user with knowledge of on-device AI will infer locality from the proximity of an on-device 4 GB model on their disk. It is skipping because the user is not given a moment to choose between local-only and cloud-backed AI surfaces - both are switched on by the same upstream rollout, with no per-feature consent. And it is hindering because turning AI Mode off does not also remove the on-device install, and removing the on-device install does not turn AI Mode off - the two are separately controlled, and discovering both controls requires knowing about both chrome://flags and chrome://settings/ai, neither of which is obvious in default Chrome.
So: not just a non-consented install, but a non-consented install that doubles as cover for a parallel cloud-backed surface that misrepresents to the user where their typing is being processed. Both layers compound the consent problem.
Article 5(3) of Directive 2002/58/EC (the ePrivacy Directive) prohibits the storing of information, or the gaining of access to information already stored, in the terminal equipment of a subscriber or user, without the user's prior, freely-given, specific, informed, and unambiguous consent, except where strictly necessary for the provision of an information-society service explicitly requested by the user [2]. The 4 GB Gemini Nano weights file is information stored in the user's terminal equipment. The user did not consent. The user has not requested any service that strictly requires a 4 GB on-device LLM. Chrome is functional without the file. The Article 5(3) breach is direct.
Article 5(1) GDPR requires processing of personal data to be lawful, fair, and transparent to the data subject [3]. Where the user's hardware is profiled to determine eligibility for the model push, where the install events are logged on Google's servers, and where the on-device features the model powers process user prompts (whether or not those prompts leave the device), the lawfulness, fairness, and transparency of all of that processing depend on the user being told, in plain language, what is happening. They are not.
Article 25 GDPR requires the controller to implement appropriate technical and organisational measures to ensure that, by default, only personal data that are necessary for each specific purpose are processed [3]. Pre-staging a 4 GB AI model on a user's disk, against the contingency that the user might one day invoke an AI feature, is the architectural opposite of by-default minimisation. And the device profiling used to decide whether to push the model is no different from the profiling used to track users online: that profile contains personal data, and the AI model, if used, will process personal data - so the GDPR arguments are in scope and valid.
Under the UK GDPR and the Privacy and Electronic Communications Regulations 2003, the analysis is the same. Under the California Consumer Privacy Act, the absence of a notice-at-collection covering this specific category of pre-staged software puts Google's CCPA notice posture in question [12].
Then there are the criminal-law violations under various national computer-misuse statutes - the seriousness of which, again, cannot be overstated.
The Anthropic case I wrote about was a desktop application installing a 350-byte JSON manifest in seven directories. The bandwidth and energy cost of that, summed across all Claude Desktop users, was negligible. The Chrome case is different. Chrome is pushing a 4 GB binary across hundreds of millions of devices. That has a measurable, quantifiable, and frankly alarming environmental footprint.
I am calculating this using the same methodology our WebSentinel audit platform applies to website environmental analysis [13]:
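A minimal sketch of the per-device arithmetic, assuming ~0.06 kWh per GB delivered end-to-end and ~250 g CO2e per kWh - intensity factors consistent with the aggregate band table later in this section, not published Google or WebSentinel figures:

```python
# Per-device delivery cost of one 4 GB push. Both intensity factors
# are assumptions layered on the cited methodology [13][14].
GB_PER_PUSH = 4
KWH_PER_GB = 0.06      # end-to-end network delivery energy (assumed)
G_CO2E_PER_KWH = 250   # grid carbon intensity (assumed)

energy_kwh = GB_PER_PUSH * KWH_PER_GB    # ~0.24 kWh per device
co2e_g = energy_kwh * G_CO2E_PER_KWH     # ~60 g CO2e per device
print(f"{energy_kwh:.2f} kWh, {co2e_g:.0f} g CO2e per device")
```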
That is per device, per push. A single download of the model. It does not include re-downloads triggered by the user trying and failing to delete the file. It does not include subsequent updates to the model. It does not include the on-device inference energy when the model is actually used. It is just the one-time delivery cost to one device.
Google does not publish how many devices receive the Nano push. The eligibility criteria gating the push (a hardware "performance class" that Chrome computes from CPU class, GPU class, system RAM and available VRAM - typically ~16 GB unified memory or better on Apple Silicon, ~16 GB RAM and a discrete or integrated GPU with sufficient VRAM on Windows and Linux) carve out the very low end of the consumer install base, but the qualifying population is still enormous. I will use three illustrative deployment bands so the reader can pick whichever they consider closest to reality. None of these bands is implausibly large for a feature that ships in default-on Chrome.
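For illustration only - this is not Google's actual algorithm - the gating described above amounts to something like the following toy check, with the ~16 GB cutoff taken from the text and the VRAM floor a pure guess:

```python
# Toy eligibility check mirroring the "performance class" gating
# described above. The 16 GB cutoff comes from this article's text;
# the 4 GB VRAM floor is a hypothetical placeholder.
def eligible_for_model_push(ram_gb, vram_gb, unified_memory=False):
    if unified_memory:                       # e.g. Apple Silicon
        return ram_gb >= 16
    return ram_gb >= 16 and vram_gb >= 4     # discrete/integrated GPU path

print(eligible_for_model_push(36, 36, unified_memory=True))  # True
print(eligible_for_model_push(8, 2))                         # False
```

The point of the sketch: the cutoff carves out only the low end, so the qualifying population remains in the hundreds of millions.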
| Devices receiving the push | Total bytes pushed | Total energy | Total CO2e |
|---|---|---|---|
| 100 million (low band: ~3% of Chrome users) | 400 petabytes | 24 GWh | 6,000 tonnes CO2e |
| 500 million (mid band: ~15% of Chrome users) | 2 exabytes | 120 GWh | 30,000 tonnes CO2e |
| 1 billion (high band: ~30% of Chrome users) | 4 exabytes | 240 GWh | 60,000 tonnes CO2e |
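The table can be reproduced from two intensity factors consistent with its totals - roughly 0.06 kWh per GB delivered end-to-end and 250 g CO2e per kWh. Both are assumptions layered on the cited methodology [13][14], not published Google figures:

```python
# Reproduce the deployment-band table from assumed intensities.
KWH_PER_GB = 0.06       # end-to-end delivery energy per GB (assumed)
T_CO2E_PER_GWH = 250    # 250 g CO2e/kWh == 250 t CO2e/GWh (assumed)

def band(devices, gb_per_device=4):
    total_gb = devices * gb_per_device
    energy_gwh = total_gb * KWH_PER_GB / 1e6   # kWh -> GWh
    co2e_t = energy_gwh * T_CO2E_PER_GWH
    return total_gb, energy_gwh, co2e_t

for devices, label in ((100_000_000, "low"), (500_000_000, "mid"), (1_000_000_000, "high")):
    total_gb, gwh, t = band(devices)
    print(f"{label}: {total_gb / 1e6:.0f} PB, {gwh:.0f} GWh, {t:.0f} t CO2e")
```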
To put those numbers in terms an ESG report might use:
24 GWh (low band) is roughly the annual electricity consumption of about 7,000 average UK households [16].
120 GWh (mid band) is roughly the annual electricity consumption of about 36,000 average UK households, or the annual output of a 14 MW wind turbine running at typical UK capacity factor.
240 GWh (high band) is roughly the annual electricity consumption of about 72,000 average UK households, or the annual output of about 28 MW of installed wind capacity.
6,000 tonnes CO2e (low band) is roughly the annual emissions of 1,300 average passenger cars in the EU [17].
30,000 tonnes CO2e (mid band) is roughly the annual emissions of 6,500 cars, or one return flight from London to Sydney for about 8,000 passengers in economy.
60,000 tonnes CO2e (high band) is roughly the annual emissions of 13,000 cars.
These are the delivery-only numbers. They count the bytes traversing the network exactly once. They do not count re-downloads triggered by users deleting the file, subsequent model updates, on-device inference energy, or the embodied carbon of the storage the model occupies [18].
In ESG-reporting language, the one-time push of the current model is a Scope 3 Category 11 ("use of sold products") emission against Google, attributable to the user-side delivery of a binary the user did not request, in the operation of a free product Google distributes [4].
In addition to the carbon cost, the network-bandwidth cost is paid by ISPs, by mobile network operators, by users on metered connections, and by every piece of network infrastructure that has to carry an unwanted 4 GB payload to a destination that did not ask for it. Per the Pärssinen reference, around 50% of that delivery energy is in the access network and CDN edge, around 30% is in user-side equipment (router, modem, NIC), and the remainder is in the core. None of that infrastructure exists for free. Every byte Chrome pushes is a byte that competes with bytes the user actually wanted.
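Applying that split to a single device's delivery energy - assuming ~0.24 kWh per device, i.e. 4 GB at the ~0.06 kWh/GB implied by the band table above - looks like this:

```python
# Rough allocation of one device's delivery energy across network
# segments, using the shares quoted above from Pärssinen et al. [14].
SPLIT = {
    "access network + CDN edge": 0.50,
    "user-side equipment": 0.30,   # router, modem, NIC
    "core network": 0.20,
}

def split_energy(total_kwh):
    return {segment: round(total_kwh * share, 3) for segment, share in SPLIT.items()}

print(split_energy(0.24))
# e.g. {'access network + CDN edge': 0.12, 'user-side equipment': 0.072, 'core network': 0.048}
```

Notably, around a third of the cost lands on equipment the user owns and powers - the user pays for the delivery of the payload they did not ask for.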
For users on capped mobile data plans, particularly in regions where smartphone-as-only-internet is dominant (much of Africa, much of South and Southeast Asia, most of Latin America), 4 GB of unrequested download is on the order of a month's data allowance, vapourised by Chrome on the user's behalf. Google has not, to my knowledge, published any analysis of the welfare impact of this on the populations whose internet access is metered.
Keep in mind that mobile data plans (4G and 5G) are used by many households with no access to fibre, cable, or ADSL, and serve desktop devices as well as mobile ones - so the argument that Google won't push this to mobile devices (for which I have found nothing official anyway) will not fly.
This is not a hard list. It is the same list I gave Anthropic in the Claude Desktop article, applied to Google.
Ask. First time Chrome is about to download the Nano model, pop a dialogue. "Chrome would like to download a 4 GB AI model file to your device to power the following features. Allow, or skip and decide later." Two buttons. Done.
Pull, not push. Trigger the download as a downstream consequence of the user invoking an AI feature for the first time. Let the feature itself be the consent event. Do not pre-stage on a contingency.
Surface it. In chrome://settings/, list the AI model files Chrome has downloaded, their size, the features they power, and a "Remove and stop downloading" button per model. Make removal persistent, not a transient state Chrome corrects on next launch.
Document it. Tell the user, plainly, in the Chrome description on the Microsoft Store, in the Chrome installer, on the Google Chrome download page, that Chrome will download additional model files of substantial size on supported hardware. Currently, this is essentially undocumented to a normal user.
Respect deletion. If the user deletes weights.bin, do not re-create it. If the user has a strong preference about what is on their disk, the application is not in a position to override that preference because the application thinks it knows better.
Disclose at scale. Publish, in Google's annual ESG report, the aggregate bandwidth and carbon footprint of all AI-feature model pushes to user devices, broken down by region. Treat it as the Scope 3 Category 11 emission it is. Account for it.
Notify retrospectively. Users who already received the model without consent should, on next Chrome launch, be told what happened, shown the file, and offered a one-click revoke + uninstall. This is the same retrospective-consent step Anthropic should also have taken.
Both of these episodes, the Anthropic Claude Desktop manifest install I wrote about two weeks ago and the Google Chrome Gemini Nano push I am writing about today, share the same underlying decision. An engineering team at a large AI vendor decided that the user's machine is a deployment surface to be optimised for the vendor's product roadmap, not a personal device whose owner is the legal authority on what runs there.
The Anthropic case put a pre-authorisation for browser automation on around three million Claude Desktop user devices [19]. The Google case puts 4 GB of AI weights on, by my mid-band estimate, around 500 million Chrome user devices, with proportionally larger ePrivacy, GDPR, and environmental exposure.
Both companies have a public posture of caring about safety, ethics, and responsible AI. Both companies, in the silent installation behaviours documented here, have undermined the foundational consent on which the legitimacy of any of those positions depends. The fact that the bytes are AI bytes does not exempt them from the law that governs every other byte that gets written to a user's device without permission. The fact that the bytes are "small" relative to the user's disk does not exempt the cumulative carbon footprint from being a real, measurable, ongoing harm to the climate.
If Google's next Chrome update silently removes the unconsented installs and replaces the behaviour with an explicit opt-in, we will know the company can read the room. If it does not, we will know what the company's published positions on responsible AI and sustainability are actually worth.
In light of what is increasingly becoming default behaviour, one has to ask a very simple question. When will the Regulators and Public Prosecutors start to enforce the law which has been in place since 2002 - or are global tech corporations exempt from criminal and civil statutes?
[1] Hanff, A. "Anthropic secretly installs spyware when you install Claude Desktop", That Privacy Guy!, 18 April 2026. https://www.thatprivacyguy.com/blog/anthropic-spyware
[2] European Parliament and Council. Directive 2002/58/EC on privacy and electronic communications (ePrivacy Directive), Article 5(3). https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:02002L0058-20091219
[3] European Parliament and Council. Regulation (EU) 2016/679 (GDPR), Articles 5(1), 25. https://eur-lex.europa.eu/eli/reg/2016/679/oj
[4] European Parliament and Council. Directive (EU) 2022/2464 amending Regulation (EU) No 537/2014, Directive 2004/109/EC, Directive 2006/43/EC and Directive 2013/34/EU as regards corporate sustainability reporting (CSRD). https://eur-lex.europa.eu/eli/dir/2022/2464/oj
[5] Pure Infotech. "Stop Chrome from silently downloading Gemini Nano AI model on Windows 11". https://pureinfotech.com/stop-chrome-gemini-nano-download-windows-11/
[6] Dhavale, V. "Chrome Installed a 4GB LLM on My Machine. Here's What I Found Out." https://www.vishwamdhavale.com/blog/chrome-gemini-nano-on-device
[7] WinAero. "Google Chrome Secretly Downloads Huge Local AI Models". https://winaero.com/google-chrome-secretly-downloads-huge-local-ai-models/
[8] AIBase. "Google Chrome Exposed for Forcing 4GB AI Model Installation". https://www.aibase.com/news/25955
[9] StatCounter. "Browser Market Share Worldwide". https://gs.statcounter.com/browser-market-share
[10] Wikipedia. "Usage share of web browsers". https://en.wikipedia.org/wiki/Usage_share_of_web_browsers
[11] DemandSage. "How Many People Use Google Chrome (Updated 2026 Data)". https://www.demandsage.com/chrome-statistics/
[12] State of California. California Consumer Privacy Act of 2018, Cal. Civ. Code § 1798.100 et seq. https://oag.ca.gov/privacy/ccpa
[13] Hanff, A. "WebSentinel ESG Considerations chapter methodology". WebSentinel report template, Chapter 08. (Source: this article's author's audit platform, code at /backend/lib/transparency/esg-calculator.js.)
[14] Pärssinen, M., Kotila, M., Cuevas, R., Phansalkar, A., Manner, J. "Environmental impact assessment of online advertising", Science of The Total Environment, 2018. https://www.sciencedirect.com/science/article/pii/S0195925517303505
[15] European Environment Agency. "Greenhouse gas emission intensity of electricity generation". https://www.eea.europa.eu/en/analysis/indicators/greenhouse-gas-emission-intensity-of-1
[16] Ofgem. "Average gas and electricity usage". (UK average household electricity consumption: ~2,700 kWh/year, "low" TDCV 2024.) https://www.ofgem.gov.uk/
[17] European Environment Agency. "Average CO2 emissions from new passenger cars" (EU-27, 2024 reporting baseline ~109 g/km × ~12,000 km/year ≈ 1.3 t/year per average car). https://www.eea.europa.eu/
[18] Tannu, S., Nair, P. J. "The dirty secret of SSDs: embodied carbon", ACM SIGENERGY Energy Informatics Review, 2023. https://dl.acm.org/doi/10.1145/3630614.3630618
[19] Anthropic. Reported Claude Desktop install base estimates from Q1 2026 disclosures. (Estimate; Anthropic does not publish exact figures.) https://www.anthropic.com/
[20] European Data Protection Board. "Guidelines 03/2022 on deceptive design patterns in social media platform interfaces", version 2.0, adopted 14 February 2023. https://www.edpb.europa.eu/our-work-tools/our-documents/guidelines/guidelines-032022-deceptive-design-patterns-social-media_en
It's the fact that they were forcing it into MY computer, using MY bandwidth for THEIR profit goals. The lack of consent was the final nail in the coffin for me, no computer in my house uses Windows now, and it will at best be a long time before that changes.
I got rid of Chrome ages ago as well. Chrome's only redeeming feature is its user base. It's slower, uses more system resources, ugly as a browser, and now it's an AI rapist too.
you would think google is not stupid enough to mess with gcp account holders
Silicon Valley is not the world.
The current state can be viewed at the internal debug URL chrome://on-device-internals/
Since January there's a user-facing toggle in Settings > System > On-device AI in Chrome Canary (https://www.bleepingcomputer.com/news/artificial-intelligenc...), described in the Chrome Help Center at https://support.google.com/chrome/answer/16961953
Precedent for shipping models alongside consumer software.
Potentially without consent if it truly is a silent install.
now I'm working on upgrading my computer lol
About equal to a major iOS update at 8 GB x 1.5B.
Netflix and YouTube together are perhaps around 200EB/month.
It's called a PUP, or Potentially Unwanted Program and most anti-viruses offer to remove them. They can be legitimately installed, but often aren't. (Usually they were shipped in the installers of legitimate software downloaded from sketchy distributors.)
Random AI models being shipped with Chrome is very much a PUP. The user wanted to browse the internet, not use a model. They'd install an extension if they wanted that.
The Ask toolbar was seen as a virus. Mozilla had massive user bleed in Firefox due to installing sponsored extensions in the browser. The only reason this shit isn't regarded the same way is because it's both done by Google and because it's labeled with AI, so all AI bros have to retroactively find an excuse to justify it.
This is also GOOGLE chrome, it serves their ends, in the past that was to render internet unimpeded (they saw a need then), needs change. I'd rather models serve most requests locally anyway, so long as it's not destroying my battery life.
Remember the whole chrome-RAM-gate saga? This shouldn't be shocking to anyone. PC's shipping 8gb ram, Google removing ad blocker extensions, these should be the real rally points.
yea your analogy doesn't even remotely make sense
If yes, it's an interesting API to call when an AI crawler hits your website.
The browser would then have a configuration option of which JS interpreter to use.
When does code become software?
If the onboard LLM means no data sending and you get your own little service wholly subservient to you like a good little program. That's nice!
If the onboard LLM means better data filtering, possibly even exploration of the local system, to send information to Google while lessening their datacentre bills running LLM services. That seems a little underhanded to just bake into things without notification.
Pick your assumption, you get your outcome. What are your assumptions?
We can be positive the entire motivation of Chrome is user behavior surveillance. There's not a nano-chance in all the multiverses that Chrome model is doing anything privately. They've gone to extraordinary length to accomplish this. It's not for free.
There are only three major browser rendering engines. One is Gecko, by Mozilla. One is Webkit, currently tended to by Apple. And one is Blink, which is Google/Microsoft. Of those, Blink is the most featureful. That's why.
99.99% of the time I do not need Chromium, but when I do, it's worth the ~200MB of used space.
but to answer your question: one of the services that uses a small model is PermissionsAIv4:
""" Use the Permission Predictions Service and the AIv4 model to surface permission notification requests using a quieter UI when the likelihood of the user granting the permission is predicted to be low. Requires `Make Searches and Browsing Better` to be enabled. – Mac, Windows, Linux, ChromeOS, Android """
The oceans are boiling [0], marine life is dying [1]. Land close to the water will be land under water soon [2]. The ice caps are melting and setting free all sorts of diseases. [3]
Large parts of our planet on fire all the time now, here's one from Australia from this year [4], but I'm sure you've read about wildfires in Australia last year, California every year, Greece last year etc etc.
What you're proposing is nothing short of a death cult. It's either degrowth or we all die, sacrificed at the altar of capitalism.
[0] https://www.theguardian.com/environment/2026/jan/09/profound...
[1] https://www.nature.com/articles/s41559-026-03013-5
[2] https://www.nature.com/articles/s43247-025-02299-w
[3] https://www.unep.org/news-and-stories/story/could-microbes-l...
[4] https://phys.org/news/2026-01-australia-declares-state-disas...?
Are they against washing machines too? Or are they just grandfathered in?
Related... to the functionality of feeding the same profit and loss account, right?
The idea that the model is local is just Privacy Washing. What's the chance they aren't capturing your prompts somehow? For "Telemetry" so they can triage bugs of course!
29 grams for something that takes most folks less than 20 seconds to download? How many watts (neglecting the machinery was going to be running regardless of whether you are transferring something!) do you think it takes to transfer data?
https://www.eia.gov/tools/faqs/faq.php?id=74&t=11
Coal, the absolute worst of all, represents 18 grams over 60 full seconds to produce 1000 watts of power.
What's that number? How did you arrive at it and why?
My Chrome binaries are about 700MB on Mac and 500MB on Windows. Is this below or above your line, or are they actually in trouble as soon as they're extracted?
My point is just that it seems there may be an arbitrary limit here that may not be the same for everyone (and 90% of users are nontechnical and thus couldn't give an answer whether 4GB is "worth it" for whatever the features are). Rather than add another whole ecosystem of "Cancel or Allow?" dialogs I'd rather operating systems did a better job of letting users put piggish applications on a strict space budget. Most of the apps on my phone are storing half a gig of "stuff" (called "Documents & Data" but not itemized, and even apps that have none of my 'data' such as browsers), which I can't force them to dump even in an extreme emergency. I can only delete the whole app.
I'm talking about Apple platforms as examples because I use those a lot and with their epic stinginess of SSD, anyone who doesn't pay $400 more than the base model will exhaust their storage within hours to months.
I'm defaulting to Firefox ever since I moved my desktop to CachyOS, but I need to either reacquaint myself with its add-on situation after a long arc of using "chrome alternatives", or migrate to something else niche. Vivaldi was what I was sold on before Arc caught my attention through its wonderful UX/UI.
Of course I would. It’s already the largest application on my computer, and I only keep it around for when a site doesn’t render right in Firefox.
This kind of size increase clearly pushes it over the line for me and it’s getting uninstalled.
Have you seen SSD prices recently?
Oh, the horror!!!
Wait, let me pay my HVAC guy $500 he deserved because he came all the way from his home to replace a fuse
A lighter Brave.
If you were to install Chrome fresh, what if it was a 4GB+ download from their website? I would at least pause. For reference, a regular offline installer is 140MB.
EDIT: got the maths very wrong with some other estimates, deleted them.
I don't see that any one gigabyte of software I don't want is especially more noteworthy than any other gigabyte of software I don't want.
I feel like you're being intentionally naive here. There's a difference between a forum using up a gig here or there, and one of the biggest software makers in the world shipping 4GB to all of its millions of users (if not billions at this point).

I certainly never activated it willfully. I use Chrome only as a fallback testing platform for web dev - a handful of times per month - yet both Chrome Stable and Chrome Unstable had installed this 4GB monstrosity in my home dir. 8GB of junk I'd never used. Both have since been uninstalled and replaced with Chromium.
If you google OptGuideOnDeviceModel, there are already a lot of results of people asking what it is and how they can delete it. It’s not some kind of obscure niche feature.
I wonder when the first crypto miner-like malware appears that offloads model usage to the client computers.
Q: Does <company> understand consent?
A: No / Maybe Later
but the Google version is:

Q: Does <company> understand consent?
A: No / Maybe Later / We did it anyway; you'll need to search to find out how to turn it off. Maybe ask the new AI model we've just back-door installed?

It’s probably a business misplay to tell the other 99% of users about something they weren’t going to think about. But if by chance it goes awry and there’s outcry, just apologize and commit to do better.
1. Yes
2. Ask me later
Man, so many things could be better if people cared.
Policy GenAILocalFoundationalModelSettings has disabled and removed the local model without any flag hacks since 2024. In Canary since January it's behind Settings > System > On-device AI.

The article doesn't mention the Chrome version, release channel, whether it was a fresh or existing install, or whether any settings were altered.
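For managed Linux machines, a minimal sketch of deploying that policy. Assumptions to verify against the Chrome Enterprise policy list before use: stable Chrome reads managed policies from `/etc/opt/chrome/policies/managed/`, and the value `1` means "do not download the model".

```shell
#!/bin/sh
# Sketch: write a managed policy that tells Chrome not to download the
# on-device foundational model. Path and value meaning are assumptions
# taken from the Chrome Enterprise policy docs; verify for your channel.
write_model_policy() {
  # $1: policy directory (the real /etc path requires root)
  mkdir -p "$1"
  # 1 = "Do not download model" (0 would allow automatic download)
  printf '{"GenAILocalFoundationalModelSettings": 1}\n' \
    > "$1/disable-on-device-model.json"
}

# Typical invocation (as root):
#   write_model_policy /etc/opt/chrome/policies/managed
```

Chrome picks up JSON files in that directory on restart; `chrome://policy` should then list the policy as applied.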
i.e. when firefox does it, people wonder why they aren't using chrome. That's the entire point. The only thing that makes firefox attractive is if they don't do what google does, and they do almost everything google does.
Even if it results in extended campaigns of complaints and hostility from their most devoted users, and the loss of 95% of their installs. As far as I can tell the only thing they backed down on was destroying ublock, and that's because they recognized that it was an existential threat to firefox. The 3% market share that they have now would have become 0.3%, no matter what google did to prop them up.
I certainly don't recommend firefox any more. The amount of effort I have to go through to get the standard 2010 experience quality is absurd and I can't expect anyone else to think it's worth it. It's not worth it to dodge any of this bad behavior anymore, it's industry standard. Going through the effort of dodging it makes you stand out more, and makes you more trackable and targetable. For me it's just compulsive, and my values don't change when the values of the crowd changes. But I can't expect anyone else to download and maintain a git repo that allows you to have basic control over your UI, or to fill out captchas after every pageload.
If you're going to use plain firefox, you might as well use plain chrome. Both of them have the same degree of respect for you, and both of them are owned by the same company. Using plain firefox for freedom is like using an Android phone for freedom. It's amusing that google gets to play the "bad guy" in one of those stories (browser wars) and gets to play the "good guy" in the other (mobile wars.) It's all keyfabe. None of these companies are competing with each other.
> while people have little more to say about Chrome other than "Google gonna Google, but it's fast at least".
Wise words.
I mean this should be (and is being) tackled at the source: zero/low-emission energy generation, not consumers having to think about these decisions. Sustainable data centers using renewables, etc. It's not that companies should associate/evaluate/consider bytes downloaded with environmental impact.
Technological progress is also societal progress. If we embraced degrowth in the 1800's (there was a ton of pollution back then, and a Malthusian belief in disaster!) we might not see slavery being abolished or women being able to vote.
It's never a binary thing. "Is using energy good or bad?" is a stupid question which can only provide stupid answers. It has to be placed in the context of whether it's proportionate to benefit.
Things which burn a lot of energy for little benefit - and in the case of AI, often negative benefit - end up more towards the "bad".
That's kind-of the point though, right? An application that has been, say, under 700 MB for decades suddenly deciding it'll take a multiple of its size without asking seems pretty unreasonable. I think it's pretty fair to say the expectations for Chrome were already set.
It'd be similarly unreasonable for a video game that once took 50 GB, to suddenly decide to take 400 GB.
Just because there is a gradual spectrum between two states doesn't mean we can't draw distinctions. For example, just because we cannot define the exact, precise color when blue turns into green, it does not mean that blue and green are the same color for any normal person discussing an issue publicly in good faith.
When someone says "X and Y are on a spectrum, X is good and Y is bad", the point is to highlight the differences. Pointing out that the spectrum or continuum might not have a precise boundary has literally zero weight towards the validity of the ultimate conclusion a person is making here and really is just a complete derail done by people who have no substantive points to make.
- software company decides to release a new version and auto installs it for everyone who has the old version (like Google Chrome)
- software company decides to release a new version. The Debian package maintainer checks whether the update is fine and compatible with Debian policies, then includes it in the package repositories.
In the first, there are no checks. In the second, there are.
It's miles ahead of something like OsmAnd, which I really, really tried to like for a year, but it's a UX disaster (I could never figure it out).
Mozilla has taken a strong stand against the prompt api.
Also, someone installing Steam is going to expect large downloads, hell, the platform tells you the size as you're about to start the download.
I don't think anyone expects a browser to suddenly download 4GB, let alone behind their backs!
The word you're looking for is "respect". They understand consent, the same as JBS* understands animal rights.
You wouldn’t throw the same fit if [insert dictator you don’t have high expectations of here] shot a hundred random civilians compared to if your government did, no?
I'm in my 40s I have no desire for this new technology unless we get the kind of AI from Japanese anime.
Because this is something expected from Google. Google has never committed to security, but Mozilla did.
EDIT: I meant privacy, not security.
He’s the founder of Brave, by the way.
... Mozilla absolutely did this to themselves. Come to think of it, they really remind me of what Microsoft's been doing with Windows.
Why are AI models something I'd be uniquely unable to trust Google to install, compared to all the other code included in Chrome updates? Is your point just that you shouldn't trust Chrome in general?
Really? I'm a total amateur when it comes to doing anything with local models but I tried a few in this range using ollama at this point, and they didn't seem to know much about anything, and I couldn't figure out how to get them to search the web or run other tools, so that was where the experiment ended.
A small local model that can use bash would be a bit of a game-changer for me.
It’s like how people are outraged that electricity is being used in data centers to power AI models. When you do the math, the power consumption is far, far less than all the other things you do all day without thinking twice. But again, anti-AI double standard
So no, I don't think it's a weird trend at all that people start describing software as "silently" doing things when trust in automatic updates of software (a thing that software silently does) has deservedly gone down the drain in the last few years.
I wanted a browser, not an LLM.
This does not happen. The model is not downloaded unless the user intentionally uses the feature that requires it. Then it's downloaded at that point.
In my experience a game worth playing never exceeded 1 (one) gig in size.
It is only incompetent creators that feel the need to bury their incompetence under gigabytes of irrelevance.
Our users interact with a huge array of internal and external sites and web apps, virtually all of which will be tested on Chrome. Our LMS, collaboration tools, internal apps, SIEM tooling, HR systems, ERP, knowledge exchange partner portals - it's all been tested on, and works with, Chrome. And we're not in a position to force thousands of vendors to make sure their applications are standards compliant and work in less popular browsers (as much as we might like to). Not to mention the deluge of tickets we'd be dealing with when incompatibilities arise; banning Chrome would cripple us.
Google have backed us into a corner with this one by making a careless default choice that takes advantage of their market dominance and forces us to work around their decision.
They're going to get blasted with cellular data charges when they fire up their computer in the field.
I'm more inclined to think it's cost unloading. Move their cloud GPU costs to your desktop.
The OSMand UX is clearly not made for casual use, but Comaps is basically the main user-friendly application. It is missing a couple of commonly-used features though, most notably traffic information, which of course Google bases on data collected from its users.
https://github.com/mozilla/standards-positions/issues/1213#i...
Mozilla makes great points. Even if the API is model agnostic, which it ought to be designed as from the very beginning to even be considered a spec, models can act vastly different.
Mozilla didn't say this but the user should at least be presented an option to choose which model (at least once) starting from day one, even if your browser only has one option available. That's assuming a universe where Google plans on actually being concerned about standards adoption.
(I wanted to write something far snarkier and sarcastic but getting annoyed at google is like getting annoyed at a lawnmower/Oracle. That plus HN guidelines.)
If people read the release notes instead of the comment sections, not only would they have a lot more specific knowledge of the work going into the browser but they wouldn't be locked in this cycle of outrage and escalation that normally you only see in YouTube comment sections.
This is better than my current solution of an actual human with master's-degree-level intelligence performing all my cognitive tasks for free how? I mean, I'm the first to admit I'm extremely lazy, and even I'm over here like "really??"
Would you be able to compare this to other local models, in its class and above, that would fit on consumer-grade hardware?
"Even if you're paying for the product, you're still the product: Incentives matter, but impunity matters more."
https://pluralistic.net/2022/11/14/luxury-surveillance/#liar...
Mozilla doesn't care about your grievances. It collects lots of telemetry about you by default, and has recently officially removed the obligation not to sell your personal data to third parties etc. It also plans to "introduce AI" into its browser.
> And I would expect that most Firefox users are of the kind who have strong opinions about how their computers work.
On the contrary. Those people have moved on, or are in the process of moving on, from Firefox itself to more privacy-minded forks, like Pale Moon, LibreWolf and maybe Mullvad.
- It failed? This must be a mistake, I’ll try it again. It failed? This must be a mistake, I’ll try it again because then I will complete the task (repeat about every six seconds until you rescue it).
- You know, the best way to deal with a permissions problem is to erase the entire system. That’ll definitely solve those pesky permissions and I’ll complete the task.
If you wanted to point to the year where they've been the best financed they've ever been and where they've had the most resources invested into browser development they ever have, that year would be 2026. Only to be exceeded by 2027 and then 2028, 2029 and beyond.
At a bare minimum, their endowment gives them probably a two to three year firewall in the event that their funding is cut off, which it hasn't been. I also thought the accusation was supposed to be the other way around, namely that we all knew they were going to get funded into perpetuity as controlled opposition.
i’m not anti-ai by any stretch, but to pretend like their personal choices don’t matter is a bit too dismissive. it’s their choice, we probably shouldn’t imply other people having their own personal taste is hysterical or whatever it is you’re dancing around.
The 'internet' is not an entity. Outrage and engagement drive ads. Beyond that 'AI' has very little benefit for most people and it's straight loss if you look at consumer electronics (getting price out of PCs) or energy prices.
The Avalanche Has Already Started. It is Too Late for the Pebbles to Vote. -- Ambassador Kosh Naranek
The funny thing about "AI Data Centers!!1!" is that they're unsurprising to anyone who knows the progression of this. First there were gigantic computers. Then telecom closets and machine rooms. Those machine rooms and closets got big and hungry! But they were hidden inside drab office space and far inside security perimeters and nobody really paid them mind, because it was part of doing business for the businesses.
Then came the cloud mania and corporations began gutting their machine rooms and migrating to the clouds. So if the consumption and demand for resources ramped up, who knows, but it was transferred from a very distributed, scattered model to centralized in a few big datacenters.
And now those datacenters are becoming an end unto themselves and everyone's gotta get one. Yeah, the scale and consumption of computing increases, but this has been evolutionary and it's only alarming because now, you can drive around a big city and pass several obvious data centers (and a few non-obvious ones) on your way. Did people freak out over AT&T constructing central offices? Dunno, those meant a lot of jobs. We all needed to reach out and touch someone.
But kinda wary about that Death Star.
Just fyi, this is not a temporary phenomenon, not a phase. People dont like spam, robocalls, persistent advertising, even as we use the tools that enable them. They definitely wont like massive job losses, if that actually comes to fruition. Constant surveillance, "slop" news and entertainment, significantly reduced human contact - not popular. Like most technologies, AI benefits a small group - those who control the means of production - but everyone else loses out.
That we as a society are beholden to corporations is a myth those corporations want you to believe but its not how things actually work. If we come together to say no then those corporations either comply or will cease to exist.
Will people's lives really be better once they're drowning or choking on wildfire smoke? But hey, at least they had cheap junk!
It's possible to have better lives as well as societal progress without endless growth. Technological progress, too, doesn't have to mean burning our oceans. We just gotta actually think about the costs and consequences of our actions.
Not every technological development is inherently good. Sometimes the cost is not worth the result. I posit the cost of AI so far has been astronomical, higher than anything else in living memory. The results on the other hand have been rather middling.
This is my issue. A cost/benefit analysis, not a strict no to progress.
The emissions per kWh of energy used in providing internet downloads probably is similar to that per kWh of energy used for washing clothes.
Local storage and cache only have limits relative to available disk space in Chrome, IIRC, and can easily bloat to 100 GB without intervention. Personally I think that's a design flaw and they need customizable hard limits as well, but web browsers wasting space without asking is not a new or sudden development.
Excuse me while I go count the hairs on my chin to see if they are >= MIN_BEARD_THRESHOLD.
This is infuriating behavior.
Silicon Valley must wake up and understand the entire world does not live like them.
No, this is not true. The large requirement comes after a user wants to use the feature, not as a part of the normal upgrade. If the user never engages with the feature, it's not downloaded.
But I would consider us users to be more like an asset on their balance sheet. Not something they would care about the opinion of.
Chrome may be a privacy nightmare, but in terms of security it beats Mozilla.
He’s the founder of Brave, by the way.
You mean that Chrome browser re-skin that mines crypto without your consent?

> a private political donation from 10 years earlier
Yeah, he was only a bigot 10 years ago! I'm sure it's changed now.

For me Firefox is (slightly) better than it used to be; not by a wide margin, but it hasn't gotten worse either.
I've been running it since it was Phoenix so I think my experience is at least somewhat valid, which is why I'm so confused by these comments.
I'm really tired of such overinflated, shrill ridiculousness against Google. Yes, there are very real tensions with this company, and their ads business is scary as heck.

But folks don't seem capable of processing duality; they don't seem to be able to do much but ad-hominem until they pass out. It's really so exhausting having such empty energy charging in every single time, and it keeps obstructing any ability to think straight or assess.
That the feature (a) requires a local LLM and (b) installs a multi-GB download without telling the user all happens without any explicit user consent.
(I miss Arc, such a shame it only gets security/chromium updates now ...)
And I think Codex's desktop client has a built-in browser now? At least I've seen someone using something like that. Nevermind Atlas is a thing now too. https://openai.com/index/introducing-chatgpt-atlas/
(Tell me if I'm misunderstanding you?)
Stardew Valley, universally acclaimed and not graphically intensive at all, still takes up nearly 2 GB of space.
Your view on games is not grounded in the reality of modern gaming.
Firefox does debatable thing, expressed outrage is at Level 11
but when
Chrome does far worse version of thing, expressed outrage is at Level Shrug.

Now I can't see it anymore, but shouldn't the model be under chrome://on-device-internals/ -> model-status?
Maybe you can uninstall there too.
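Checking from the shell works too. A hedged sketch, assuming the default Linux profile locations mentioned upthread (adjust `CONFIG_DIR` for Chromium, Beta/Dev channels, or non-default profiles):

```shell
#!/bin/sh
# Sketch: report whether the on-device model directory exists and its size.
# The two paths below are the default google-chrome locations; both the
# per-install and per-profile locations have been observed in the wild.
check_model_dir() {
  if [ -d "$1" ]; then
    du -sh "$1"          # print size plus path
  else
    echo "not present: $1"
  fi
}

CONFIG_DIR="${CONFIG_DIR:-$HOME/.config/google-chrome}"
check_model_dir "$CONFIG_DIR/OptGuideOnDeviceModel"
check_model_dir "$CONFIG_DIR/Default/OptGuideOnDeviceModel"
```

If either line reports multiple gigabytes, the model was downloaded at some point, whatever the UI currently shows.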
Consumers vote and advocate for what they want and don't want. There are many who say it's not an individual problem and should be dealt with broadly through regulation, then also oppose any attempts at regulation.
Until we're at that point though, the 'winners' in this market society (that wield unimaginable amounts of money = resources) such as Google could certainly think about consequences of their choices. And they usually do to some extent, I'm not saying they don't, just that electric supply and demand has two sides to it
Not everyone wants this at the cost of others. It's not as simple as that / not a necessary consequence of our desire to find clever solutions to solve everyday inconveniences
I hadn't considered that societies rightfully impose standards on these things.
I consider it too early to judge the cost-benefit, but it's fair that others might have already evaluated that. I rescind my comment.
--------
[1] Follow one of them around the way they track us online, or let out a bit of information about, for example, their tax affairs, and see how fast lawyers or law enforcement arrive on your doorstep…
Having said that, I keep a copy of Ungoogled Chromium for those websites that refuse to test against FF.
The feature that didn't say it would cost you 4 GB, right?
It's pretty terrible how much this kind of dynamic rules over tech. I'm not a 'capitalist', but god damn if competition isn't the most important thing to prevent total enshittification.
Ooh, this is interesting. There's nothing stopping them from sending jobs down to local machines. That's some 3 billion nodes. We went through this with coin mining and spam botting.
Nothing stopping it except your ire if it's discovered.
Also, as it turned out, Windows wasn't much more secure than Linux, and I guess we'll find this with Chrome as well. In fact I wonder if this isn't obvious already now that uBlock Origin doesn't work on Chrome any longer?
Besides, isn't Chrome approaching 20 years now and I still cannot have tree style tabs on it so it is still a toy browser meant for causual browsing, not work ;-)
This is anecdata, of course, take with a pinch of your preferred flavouring powder.
> Chrome also came in at slightly lower memory consumption across all the benchmarks with total memory usage on average at 4.67GB to Firefox at 4.83GB.
More people "legitimately" using Tor makes it less likely that its exit nodes are outright blocked, or that all traffic from them is assumed to be malicious.
Is your position really that any feature that “many” users failed to ask for must require additional consent to install?
And where is this registry of features that a sufficient number of users asked for, such that they're allowed to be installed silently?
I had no idea chrome had this feature. Wish Apple had something like this honestly. https://blog.google/innovation-and-ai/technology/safety-secu...
Yes, it's also that it's AI, and mostly that Chrome is foisting off all the cost of that AI model onto me and other users, without warning and without explaining what this model is. Is my workplace's power cost going to be up 10% because of whatever they want to run it for? Who knows.
There'd be a lot less complaining if they'd actually warned and less still if they asked.
(I used to dislike this "GNU/Linux" term, it seemed unimportant - Android showed me why the GNU part of it is)
Me individually not doing something is going to be absolutely drowned out by the scale of many other people not thinking of it or being incentivized against it.
This is a systemic issue. A systemic issue needs a systemic solution, not a blame shift to the individual.
We didn't get rid of lead in gas or asbestos in walls by telling people it was bad for them. We did so by banning it.
EDIT: whoops, should've scrolled down a bit on the website, looks like Waterfox has vertical tabs as well. damn, probably going to try to migrate to it sometime soon...
EDIT2: of course supports firefox extensions as well, perfect.
What did we expect when they dropped "don't be evil" from their company values?
Using the 3 regularly, no, Firefox is not "10 times better than Safari". Though, yes, Chrome(ium) is a resource hog.
The action of performing real-life drastic sanctions against people you don’t tolerate is extremism.
And it is the general opinion of most Mozilla idealists. Mozilla is a political project, and is dangerous to our democracy.
Doesn't that make it worse? They forced everyone to download 4GB of crap for nothing. They could have done one of two things:
(1) bundle the model with the application so you can tell ahead of time you're signing up for 4GB of bandwidth usage or
(2) make downloading the model some kind of opt-in thing.
Either of those would have worked. Just because you can easily tolerate 4GB of unplanned bandwidth usage doesn't mean everyone who can't is wrong.
Also, average doesn’t mean 50% lower and 50% higher.
It's the tech company's problem to convince me they are trying to do something useful to me. Come to think of it, it's their problem to convince me they still understand "useful to the customer" first.
We are probably on the brink of very bad consequences for a significant fraction of all humans (up to and including all of them, to some extent), which is a huge problem that needs to be addressed.
But what do you gain by incorrectly labeling that as "extinction"? Because you do definitely lose credibility for it, similarly to everybody using hyperbolic language such as "boiling the oceans" etc.
Wouldn't it be easier for an email provider to classify the emails already?
Other than that, if the tool provides utility, that's good. Personally, I'd not touch it; everyone in the family uses Firefox everywhere (incl. phones).
I always end up coming back to Safari for personal use. It seems to do the best job getting out of my way. I am annoyed by how Safari now handles browser extensions. I’d like them to take a page out of Orion’s book and support both Firefox and Chrome extensions. However, I generally have very few extensions, as they tend to slow things down, so this has been a relatively minor issue. The main things I’ve wanted extensions for in other browsers (like word lookup) have come out of the box in Safari (or Apple platforms as a whole) for quite a long time.
https://davidgerard.co.uk/blockchain/2020/06/06/the-brave-we...
People have problems with what they choose to program, not the quality of their code. I too have used FF since the beginning, but switched to Waterfox last year (it took me about two years to make that decision - I didn't make it lightly). I chose WF in large part because its profile remains compatible with FF so I can switch back if they calm the F down and start acting normal again for long enough to rebuild some trust.
https://en.wikipedia.org/wiki/Criticism_of_Mozilla_Corporati... - start at the end for most recent.
Also go to the website of any one of the FF forks and read their reasons for existing. For example:
https://developer.chrome.com/docs/ai/prompt-api
>With the Prompt API, you can send natural language requests to Gemini Nano in the browser.
But I think reading it as sarcasm is also wrong.
Windows even runs (semi-playably) 2020's shooters in this condition, though you need to kill any windows close to the tab limit that are full of recently opened tabs.
[Yes, I know, the horror]
Also, the next version of Gemini Nano will be based directly on Gemma 4 (so not distilled, not Gemini at all except for the name)[2].
So no, it's not a frontier model. Those don't run on your phone or in your browser.
[1]: https://developer.android.com/blog/posts/ml-kit-s-prompt-api...
[2]: https://android-developers.googleblog.com/2026/04/AI-Core-De...
The goal was to offer folks a means of supporting the development of a privacy-preserving browser, at no cost to them. We blogged about the feature at https://brave.com/blog/referral-codes-in-suggested-sites/, and ultimately disabled it by default. But there was never any "hijacking of links," or "swapping of affiliate codes".
The truth is less exciting, I know.
Is bigotry always a permanent condition?
Yes, people famously change more as they get older. Eich was already a man in his 40s at that point in time. He also doubled down instead of acknowledging any wrongdoing.

Which ones are you talking about? I'm talking about Firefox, not the Mozilla Corp, to be clear.
Yes, and it's none of your business how other people spend their electricity.
Also, distillation is how most of these smaller models are made from the biggest models. That process largely defines the frontier along most of the curve.
Making Mozilla a politically-extremist organization intolerant to other opinions than theirs, and thus incompatible with being a steward of the global web.
Do you use a translation program to play browser games?
I'm not going to keep arguing with you. If you want to keep arguing, go to https://gemini.google.com/. Gemini knows what a frontier model is and it knows that Gemini Nano is fundamentally different from the other Gemini models. For one, it uses the Gemma architecture. And the next version of Gemini Nano is built directly on Gemma 4.
As for your original claim that I quoted, there are other "open American frontier-level models" by your definition. Like Gemma 4.
This is like saying that the part of driving where you wash dishes is why you, personally, need a dishwasher in your car. There is no feature that would fail the challenge if you can always claim that you need it to render a web page.
Local disk cache is a standard and reasonable feature expected by the vast majority of browser users. You are being obtuse.
Until that's resolved, I don't wish that debt incurred for frivolous uses.
Instead of trying to control other people, why can't you start with yourself? Throw away your phone/computer. Go live in a small hut. Practice what you preach.
I understand that it is difficult for me to shun (which is basically what I'm talking about) so many people, or to even know if they should be shunned, but it would definitely be my preference.
You may pay for it, but I and the rest of the planet incur the cost.
I can go live the life of a hermit and the above will still be true.
Your electricity use puts more pollution into our air. It burns our forests. It kills species we all depend on.
No man is an island. Your actions affect others. Just paying your indulgences does not make that basic fact go away.
You seem to have no problem whatsoever with using electricity yourself. So when do you get to tell me (or anyone else) how to live? And when does it stop? Btw, this is all bizarrely dramatic since we were talking about small local models anyway.
>future generations
Yeah, and some will also say (using the same arguments) that having children is harmful to the planet and we need "measures" to limit that too.
That does not follow logically for me. As humans we disagree about many things, but we generally agree that things that we do often affect others, so one way or another, we need to come together and decide which things are agreed to be acceptable and which things are not.