When Gemini came around, rather than that service being disabled by default for those keys, it was enabled, allowing attackers to easily use them (e.g., a "public" key stored in an APK file).
There are no "leaked" keys if Google hasn't been calling them secrets.
They should ideally prevent all keys created before Gemini from accessing Gemini. It would be funny (though not surprising) if their leaked-key "discovery" has false positives and starts blocking legitimate keys from Gemini.
A slew of recommendations, one of them being:
Disable Dormant Keys: Audit your active keys and decommission any that show no activity over the last 30 days.
(Although I don't think this even addresses the underlying issue)
Changing the semantics of existing non-secret keys, making them actual secrets after the fact, is horrendous.
Even if you have a key that you use for Maps (not secret), someone could add the generative AI scope to it and make it necessarily secret (even though it's probably already publicly available)?
This whole Gemini roll-out reminds me of the Google+ days, when they thought they were going to die if they didn't do social.
This destroys Google's right to pursue an unpaid "AI" bill as a debt.
Can't you just run up a huge bill for a developer by spamming requests with their key? I don't see how this wasn't always an issue?
It feels like something that would happen if you outsourced planning to an LLM.
Malpractice/I can't believe they're just rolling forward
I’m very careful with Google and co since they’re so intent on infinite scaling access to your wallet
Making AI utilization appear to go up is the only thing that matters right now if you're in the boardroom at one of these companies. Whether or not that utilization was actually intended by the customer is entirely irrelevant. From here, the only remaining concern is mitigating legal issues which google seems to be immune to.
It would be more interesting if they scanned GitHub code instead. The number terrified me, though I am not sure how many of those are live.
I mean, I get that authentication to the service is performed via other means, but what's the use of the key then?
I'm guessing it's just a matter of binding service invocations to the GCP Project to be billed, by first making sure that the authenticated principal has rights on that project, in order to protect from exfiltration. That would still be a strange use case for what gets called an "API key".
- 6 weeks ago Google said they would fix it
- 3 weeks ago Google said they were working on it
...but we're publishing the info anyway, so everyone can go nuts with it.
That said, I'd actually argue there's an evolutionary explanation behind this: at a certain size, and more importantly complexity, an oversight like this becomes more likely, not less.
Which is what makes this so notable. Did the security review not catch this, or did they choose to launch anyways because it was too hard to fix and speed was of the essence?
The problem described here is that developer X creates an API key intended for Maps or something, developer Y turns on Gemini, and now X's key can access Gemini without either X or Y realizing that this is the case.
The solution is to not reuse GCP projects for multiple purposes, especially in prod.
This is going to break so many applications. No wonder they don't want to admit this is a problem. This is, like, a whole-number-percentage-of-Gemini-traffic level of fuck-up.
Jesus, and the keys leak cached context and Gemini uploads. This might be the worst security vulnerability Google has ever pushed to prod.
But the fact that permissions are not hardened at time of creation is bonkers to me.
The problem that you, and many people in this thread, are having is that you are typing "API key" but, in your head, thinking "private API key". API keys can be secret or public, and many services have matching pairs of secret and public keys (Stripe, Chargify, etc.).
But there's a second insight that seems tough for a security review to catch. You have to realize that even though you can't do anything obviously malicious with the API, there is a billing problem.
1. You never know how much a single API request will cost, or did cost, for the Gemini API
2. It takes anywhere between 12 and 24 hours for them to tell you how much they will charge you for past aggregate requests
3. There is no simple way to set payment limits anywhere in Google Cloud
4. Either they are charging for the batch API before even returning a result, or their "minimal" thinking mode is burning through 15k tokens for a simple image-description task with <200 output tokens. I have no way of knowing which of the two it is. The tokens in the UI are not adding up to the costs, so I can only assume it's the first.
5. Incomplete batch requests can't be retrieved if they expire, despite being charged for.
6. A truly labyrinthine UI experience that would make modern gacha game developers blush
All I have learned here is to never, ever use a google product.
https://docs.cloud.google.com/billing/docs/how-to/budgets
They are still not a spending cap of course.
I think this was much less likely to happen without the needless obfuscation. If the only purpose is to identify what project the data is for, and you're trusting the client to report that value, and counseling the client to use that value in a way that trivially exposes it to everyone... what is the point of making it look like cryptic garbage? Just use the account signup name or something, and don't call it a "key" in your query parameters. Keys are supposed to unlock stuff. A name tag is not a key.
Imagine for a moment that there is no oversight. Every intern can ship prod code with their own homemade crypto.
How do you, in a retail business, agree to accept credentials that anyone can mint for free?
I mean obviously it happened. But… this doesn’t even seem like a compliance mistake. It’s a business-level mistake.
Isn't that squarely at odds with Google's supposed AI prowess? Is the rot really so severe that their advances in AI (including things they've yet to make public) are insufficient to overcome it? Or are the capabilities of Gemini and AI systems in general being oversold?
Google Maps has one, even. And Stripe.
Not perfect protection of course - an attacker could spam requests with all the right headers if they wanted to - but it removes one of the big motivations for copying someone else's API key.
[1] https://docs.cloud.google.com/api-keys/docs/add-restrictions...
Distributed “shared nothing” API handling should make usage available to accounting, and the API handling orchestrator should have a hook that allows accounting to revoke or flag a key.
This gets the accounting transactions and key availability management out of the request handling.
Then I saw the disclosure at the end and didn't get the sense that the flaw was fixed, so then I was still thinking... Is it responsible for them to be sharing this?
I'm glad that they did, because I can audit my own projects, but a bad actor may also be glad that they did.
The fact that we're hearing this first from a third-party and not from Google themselves is extremely problematic.
It shouldn't be enabled by default on either one.
Of course, Google is full of smart anti-fraud experts, they just handle 80% of this shit on the back-end, so they don't care about the front-end pain.
Do you have a link?
If a company like Google, with its ability to attract the best of the best, cannot handle the complexity of security and safety with SaaS/PaaS products, at what point do we say that perhaps this sector needs much more oversight?
There's a long stretch from over optimizing a UI to something that is very clearly an error like what has happened here.
You are also wrong in saying there are no projects that could reasonably have a safe api key made unsafe by this exploit.
One example: a service that uses Firebase Auth must publish the key (as Google's docs recommend). Later, you add gen AI to that service, managing access using IAM/service accounts (the proper way). You've now elevated the Firebase Auth key to a Gemini key. Really, undeniably poor from Google.
The problem here is that people create an API key for use X, then enable Gemini on the same project to do something else, not realizing that the old key now allows access to Gemini as well.
Takeaway: GCP projects are free and provide strong security boundaries, so use them liberally and never reuse them for anything public-facing.
How did this get past any kind of security review at all? It’s like using usernames as passwords.
Of course, I bring this up because they could just version their API keys, completely solving this problem and preventing future ones like it.
Versioning data formats is wrongthink over there, so I’m guessing they just… won’t.
But also, I don’t think even Google would claim that their LLM stuff can solve problems like this.
SSNs were a good potential identifier, until the people that needed security cheaped out and started using SSNs as a bad implementation of security. Now they're bad at both purposes!
The only purpose of the keys Maps/Stripe encourage you to publicly put into your website is to guarantee it is talking to _your_ Google/Stripe account not someone else's. Obviously once you put them in your client they are of zero value in helping Google/Stripe identify you. The fact that Google allows you to use the same type of key they also use elsewhere to identify _you_ not _them_ was always incredibly bad design. Google already have the 'Project ID' which would have been the best thing to use.
An oversimplified version is this: there are two critical components to the mid/late-phase tech-megacorp strategy. You protect the core money-printing product at all costs and sustain it fiercely over a long period (a decade or more), then use any and all profits to find and fund the next cash cow, looking for optionality. While doing that, you grow the market or consume a larger share of it. Google benefited mainly from the latter two, all while the internet blew up globally, funneling even more money into the machine.
It’s no secret that nearly every Google product that wasn’t search, lost them money. They were searching for the next big thing. They likely were some of the first to see AI as exactly that but moved too slowly to commercialize. Likely because of bureaucracy risk and also perhaps some sense of altruism in knowing the cataclysmic impacts AI could have. There have been plenty of former Google employees confirming this.
They also used to do things just to be cool, but those days have been long gone since Larry Page tapped out (and probably a few years before that, about a decade). Since then they’ve almost completely lost sight of what made them so successful that nobody even knows their vision or identity as a company today. These don’t correlate to market cap but they do silently lead to stagnation.
Their brand protects them from quite a lot but it’s not invincible.
I'm pretty sure that if anyone had asked Gemini "Is it a good idea to retroactively opt old API keys into new services?" it would have said it's a bad idea. The problem is that no one asked.
I like that. Easy to tell if you should keep the key a secret or not.
[Edit: It's likely that you intended to reply to this comment: https://news.ycombinator.com/item?id=47163147 ]
It is entirely believable to me that a company like Google would do the same with AI use numbers. I suspect that all these AI use factors in corporate performance reviews are about the same thing.
This could be a standard oversight too, I find Google’s documentation on this stuff to be Byzantine.
It sent me to a url: https://console.cloud.google.com/google/maps-apis/onboard;fl...
which auto-generated an API key for me to paste into things ASAP.
---
Get Started on Google Maps Platform You're all set to develop! Here's the API key you would need for your implementation. API key can be referenced in the Credentials section.
It's pretty much a daily occurrence in all three of the big cloud subs that people who are still learning get wiped out because the clouds refuse to provide appropriate safeguards.
Also, for APIs with quotas you have to be careful not to use multiple GCP projects for a single logical application, since those quotas are tracked per application, not per account. It is definitely not Google's intent that you should have one GCP project per service within a single logical application.
I can somewhat follow this line of thinking, it’s pretty intentional and clear what you’re doing when you flip on APIs in the Google cloud site.
But I can’t wrap my mind around what is an API key. All the Google cloud stuff I’ve done the last couple years involves a lot of security stuff and permissions (namely, using Gemini, of all things. The irony…).
Somewhat infamously, there’s a separate Gemini API specifically to get the easy API key based experience. I don’t understand how the concept of an easy API key leaked into Google Cloud, especially if it is coupled to Gemini access. Why not use that to make the easy dev experience? This must be some sort of overlooked fuckup. You’d either ship this and API keys for Gemini, or neither. Doing it and not using it for an easier dev experience is a head scratcher.
This resonates so well and I love it. I'm stealing this
Things get stupid for sure. But I have never once seen “hey let’s do away with access controls for high-COGS services”.
(Although `pk` always freaks me out. Public or private?! Oh, right, the other one's "secret".)
The extensive experience with Enterprise Authentication that the decades of use of Active Directory has given Microsoft may mean that their SSO and Enterprise Authentication stuff is the best out of those on offer. I wouldn't know about that... I just made (and destroyed) VMs and was often driven to frustration whenever Azure failed to reliably perform that simple task.
(/s, of course)
Artificial intelligence service design and lack of human intelligence are highly correlated. Who'd have guessed??
At $DAYJOB, we had a (not very special) special arrangement with GCP, and I never heard of anyone who was unable to create a project in our company's orgs [0].
Given how Google never, ever wants to have a human do customer support, I expect a robot will quickly auto-approve requests for "number of projects" quota increases. I know that's how it worked at work.
[0] ...with the exception of errors caused by GCP flakiness and other malfunction, of course.
To this day I am unable to access the models they say I should be able to.
I still get 2.5 only, despite enabling previews in the google cloud config etc etc.
The access seems to randomly turn on and off and swaps depending on the auth used (Oauth, api-key, etc)
The entire gemini-cli repo looks like it is full of slop with 1000 devs trying to be the first to pump every issue into claude and claim some sort of clout.
It is an absolute shit show and not a good look.
Apps Script creates projects as well, but Maps just generates API keys in the current project.
--- Get Started on Google Maps Platform You're all set to develop! Here's the API key you would need for your implementation. API key can be referenced in the Credentials section.
There's usually a small handful of people that care more than they should, keeping the company afloat, but it's despite the company's policies, not because of them.
You can't maliciously embed it in a site you control to either steal map usage or run up their bill because other people's web browsers will send the correct host header.
That means you can use a botnet or similar to request it using a script. But if you are botnetting, Google will detect you very quickly.
Re: the host header — that seems an odd way for Google to do it; surely they would have fixed that by now? I guess it's not a huge problem, as attackers would have to proxy traffic or something to obscure the host headers sent by real clients. Any links on how people exploit this?
(Or at least didn't at the time I've tried to use it. That may have changed, but we don't know when the GP tried it either.)
You can do what you're describing but it's not the model Google is expecting you to use, and you shouldn't have to do that.
It seems what happened here is that some extremely overzealous PM, probably fueled by Google's insane push to maximize Gemini's usage, decided that the Gemini API on GCP should be default enabled to make it easier for people to deploy, either being unaware or intentionally overlooking the obvious security implications of doing so. It's a huge mistake.
So many organizations have the IAM "Project creator" role assigned to everyone at the org level. I think it's even a default.
They don't do anything against that.
Something that can be abused is if the key also has other Maps APIs enabled, like the Places API, Routes API, or Static APIs, especially for scraping, because those produce valuable info beyond just embedding a map.
The only suggestions I have are:
- If you want to totally hide the key, proxy all the requests through some server.
- Restrict the key to your website.
- Don't enable any API that you don't use, if you only use the Maps Javascript API to embed a map then don't enable any other Maps API for that key.
Like deciding ATM cabinets should be default open to make it easier for people to withdraw cash.
No, there must be more behind this than overzealotry.
The only suggestion I see there from a quick skim that would avoid the above is for customers to set up a Google Maps proxy server for every usage, which adds security and hides the key. That is a completely impractical suggestion for the majority of users of embedded Google Maps.
tl;dr Google spent over a decade telling developers that Google API keys (like those used in Maps, Firebase, etc.) are not secrets. But that's no longer true: Gemini accepts the same keys to access your private data. We scanned millions of websites and found nearly 3,000 Google API keys, originally deployed for public services like Google Maps, that now also authenticate to Gemini even though they were never intended for it. With a valid key, an attacker can access uploaded files, cached data, and charge LLM-usage to your account. Even Google themselves had old public API keys, which they thought were non-sensitive, that we could use to access Google’s internal Gemini.

Google Cloud uses a single API key format (AIza...) for two fundamentally different purposes: public identification and sensitive authentication.
For years, Google has explicitly told developers that API keys are safe to embed in client-side code. Firebase's own security checklist states that API keys are not secrets.
Note: these are distinctly different from Service Account JSON keys used to power GCP.

Source: https://firebase.google.com/support/guides/security-checklist#api-keys-not-secret
Google's Maps JavaScript documentation instructs developers to paste their key directly into HTML.

This makes sense. These keys were designed as project identifiers for billing, and can be further restricted with (bypassable) controls like HTTP referer allow-listing. They were not designed as authentication credentials.
Then Gemini arrived.

When you enable the Gemini API (Generative Language API) on a Google Cloud project, existing API keys in that project (including the ones sitting in public JavaScript on your website) can silently gain access to sensitive Gemini endpoints. No warning. No confirmation dialog. No email notification.
This creates two distinct problems:
Retroactive Privilege Expansion. You created a Maps key three years ago and embedded it in your website's source code, exactly as Google instructed. Last month, a developer on your team enabled the Gemini API for an internal prototype. Your public Maps key is now a Gemini credential. Anyone who scrapes it can access your uploaded files, cached content, and rack up your AI bill. Nobody told you.
Insecure Defaults. When you create a new API key in Google Cloud, it defaults to "Unrestricted," meaning it's immediately valid for every enabled API in the project, including Gemini. The UI shows a warning about "unauthorized use," but the architectural default is wide open.

The result: thousands of API keys that were deployed as benign billing tokens are now live Gemini credentials sitting on the public internet.
What makes this a privilege escalation rather than a misconfiguration is the sequence of events.
A developer creates an API key and embeds it in a website for Maps. (At that point, the key is harmless.)
The Gemini API gets enabled on the same project. (Now that same key can access sensitive Gemini endpoints.)
The developer is never warned that the key's privileges changed underneath it. (The key went from public identifier to secret credential.)
While users can restrict Google API keys (by API service and application), the vulnerability lies in the insecure default posture (CWE-1188) and improper privilege management (CWE-269):
Implicit Trust Upgrade: Google retroactively applied sensitive privileges to existing keys that were already rightfully deployed in public environments (e.g., JavaScript bundles).
Lack of Key Separation: Secure API design requires distinct keys for each environment (Publishable vs. Secret Keys). By relying on a single key format for both, the system invites compromise and confusion.
Failure of Safe Defaults: The default state of a generated key via the GCP API panel permits access to the sensitive Gemini API (assuming it’s enabled). A user creating a key for a map widget is unknowingly generating a credential capable of administrative actions.
The attack is trivial. An attacker visits your website, views the page source, and copies your AIza... key from the Maps embed. Then they run:
curl "https://generativelanguage.googleapis.com/v1beta/files?key=$API_KEY"
Instead of a 403 Forbidden, they get a 200 OK. From here, the attacker can:
Access private data. The /files/ and /cachedContents/ endpoints can contain uploaded datasets, documents, and cached context. Anything the project owner stored through the Gemini API is accessible.
Run up your bill. Gemini API usage isn't free. Depending on the model and context window, a threat actor maxing out API calls could generate thousands of dollars in charges per day on a single victim account.
Exhaust your quotas. This could shut down your legitimate Gemini services entirely.
The attacker never touches your infrastructure. They just scrape a key from a public webpage.
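To make the "thousands of dollars per day" figure concrete, a rough back-of-the-envelope calculation helps. The prices and request rates below are illustrative assumptions, not Google's actual numbers:

```python
# Illustrative assumptions only -- NOT actual Gemini pricing or rate limits.
price_per_million_output_tokens = 10.00  # USD, assumed
tokens_per_request = 8_000               # assumed: large generated responses
requests_per_second = 5                  # assumed sustained abuse rate

seconds_per_day = 86_400
daily_tokens = tokens_per_request * requests_per_second * seconds_per_day
daily_cost = daily_tokens / 1_000_000 * price_per_million_output_tokens
print(f"${daily_cost:,.0f} per day")  # $34,560 per day under these assumptions
```

Even at a fraction of this rate, a scraped key left live over a weekend becomes an expensive mistake.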
To understand the scale of this issue, we scanned the November 2025 Common Crawl dataset, a massive (~700 TiB) archive of publicly scraped webpages containing HTML, JavaScript, and CSS from across the internet. We identified 2,863 live Google API keys vulnerable to this privilege-escalation vector.
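A scan like this ultimately reduces to pattern matching over page bodies. A minimal sketch of the extraction step, assuming the standard AIza key format (39 characters total); the sample HTML and function name are ours, and candidates still need live verification before they count as vulnerable:

```python
import re

# Google API keys begin with "AIza" followed by 35 URL-safe characters.
KEY_RE = re.compile(r"AIza[0-9A-Za-z_\-]{35}")

def extract_candidate_keys(page_source: str) -> set[str]:
    """Return the unique AIza-style key candidates found in a page body."""
    return set(KEY_RE.findall(page_source))

html = ('<script src="https://maps.googleapis.com/maps/api/js'
        '?key=AIzaSyA1234567890abcdefghijklmnopqrstuv"></script>')
print(extract_candidate_keys(html))
```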

Example: a Google API key in front-end source code, used for Google Maps but also able to access Gemini
These aren't just hobbyist side projects. The victims included major financial institutions, security companies, global recruiting firms, and, notably, Google itself. If the vendor's own engineering teams can't avoid this trap, expecting every developer to navigate it correctly is unrealistic.
We provided Google with concrete examples from their own infrastructure to demonstrate the issue. One of the keys we tested was embedded in the page source of a Google product's public-facing website. By checking the Internet Archive, we confirmed this key had been publicly deployed since at least February 2023, well before the Gemini API existed. There was no client-side logic on the page attempting to access any Gen AI endpoints. It was used solely as a public project identifier, which is standard for Google services.

We tested the key by hitting the Gemini API's /models endpoint (which Google confirmed was in-scope) and got a 200 OK response listing available models. A key that was deployed years ago for a completely benign purpose had silently gained full access to a sensitive API without any developer intervention.
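The same probe is easy to reproduce against your own keys. A sketch, assuming only the public /v1beta/models endpoint mentioned above; the function names are ours, and a 200 OK means the key can reach Gemini while a 403 means it cannot:

```python
import urllib.error
import urllib.request

GEMINI_MODELS_URL = "https://generativelanguage.googleapis.com/v1beta/models"

def probe_url(api_key: str) -> str:
    """Build the read-only /models probe URL for an API key."""
    return f"{GEMINI_MODELS_URL}?key={api_key}"

def key_can_reach_gemini(api_key: str) -> bool:
    """True if the key gets a 200 OK from the Gemini /models endpoint."""
    try:
        with urllib.request.urlopen(probe_url(api_key), timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False  # e.g., 403 Forbidden: the key cannot reach Gemini
```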
We reported this to Google through their Vulnerability Disclosure Program on November 21, 2025.
Nov 21, 2025: We submitted the report to Google's VDP.
Nov 25, 2025: Google initially determined this behavior was intended. We pushed back.
Dec 1, 2025: After we provided examples from Google's own infrastructure (including keys on Google product websites), the issue gained traction internally.
Dec 2, 2025: Google reclassified the report from "Customer Issue" to "Bug," upgraded the severity, and confirmed the product team was evaluating a fix. They requested the full list of 2,863 exposed keys, which we provided.
Dec 12, 2025: Google shared their remediation plan. They confirmed an internal pipeline to discover leaked keys, began restricting exposed keys from accessing the Gemini API, and committed to addressing the root cause before our disclosure date.
Jan 13, 2026: Google classified the vulnerability as "Single-Service Privilege Escalation, READ" (Tier 1).
Feb 2, 2026: Google confirmed the team was still working on the root-cause fix.
Feb 19, 2026: 90 Day Disclosure Window End.
Transparently, the initial triage was frustrating; the report was dismissed as "Intended Behavior". But after providing concrete evidence from Google's own infrastructure, the GCP VDP team took the issue seriously.
They expanded their leaked-credential detection pipeline to cover the keys we reported, thereby proactively protecting real Google customers from threat actors exploiting their Gemini API keys. They also committed to fixing the root cause, though we haven't seen a concrete outcome yet.
Building software at Google's scale is extraordinarily difficult, and the Gemini API inherited a key management architecture built for a different era. Google recognized the problem we reported and took meaningful steps. The open questions are whether Google will inform customers of the security risks associated with their existing keys and whether Gemini will eventually adopt a different authentication architecture.
Google publicly documented its roadmap. This is what it says:
Scoped defaults. New keys created through AI Studio will default to Gemini-only access, preventing unintended cross-service usage.
Leaked key blocking. They are defaulting to blocking API keys that are discovered as leaked and used with the Gemini API.
Proactive notification. They plan to communicate proactively when they identify leaked keys, prompting immediate action.
These are meaningful improvements, and some are clearly already underway. We'd love to see Google go further and retroactively audit existing impacted keys and notify project owners who may be unknowingly exposed, but honestly, that is a monumental task.
If you use Google Cloud (or any of its services like Maps, Firebase, YouTube, etc), the first thing to do is figure out whether you're exposed. Here's how.
Step 1: Check every GCP project for the Generative Language API.
Go to the GCP console, navigate to APIs & Services > Enabled APIs & Services, and look for the "Generative Language API." Do this for every project in your organization. If it's not enabled, you're not affected by this specific issue.
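With many projects, clicking through the console doesn't scale. One way to script the check is to feed each project's enabled-service list (e.g., from `gcloud services list --enabled --format='value(config.name)'`) through a filter like this sketch; the sample data below is illustrative:

```python
GENERATIVE_LANGUAGE_API = "generativelanguage.googleapis.com"

def project_is_affected(enabled_services: list[str]) -> bool:
    """True if the Generative Language API is enabled in the project."""
    return GENERATIVE_LANGUAGE_API in enabled_services

# Illustrative output for one project.
sample = ["maps-backend.googleapis.com", "generativelanguage.googleapis.com"]
print(project_is_affected(sample))  # True: this project needs a key audit
```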

Step 2: If the Generative Language API is enabled, audit your API keys.
Navigate to APIs & Services > Credentials. Check each API key's configuration. You're looking for two types of keys:
Keys that have a warning icon, meaning they are set to unrestricted
Keys that explicitly list the Generative Language API in their allowed services
Either configuration allows the key to access Gemini.
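This audit can also be scripted. `gcloud services api-keys list --format=json` returns each key's `restrictions` block, and in the API Keys API the allowed services live under `restrictions.apiTargets[].service`; treat the field names in this sketch as our reading of that schema and verify them against your own output:

```python
GEMINI_SERVICE = "generativelanguage.googleapis.com"

def key_can_access_gemini(key: dict) -> bool:
    """Flag keys that are unrestricted or explicitly allow the Gemini API."""
    targets = key.get("restrictions", {}).get("apiTargets")
    if not targets:  # no API restriction at all: every enabled API is reachable
        return True
    return any(t.get("service") == GEMINI_SERVICE for t in targets)

unrestricted = {"displayName": "old maps key"}  # no restrictions block
maps_only = {"restrictions": {"apiTargets": [{"service": "maps-backend.googleapis.com"}]}}
print(key_can_access_gemini(unrestricted), key_can_access_gemini(maps_only))
```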

Step 3: Verify none of those keys are public.
This is the critical step. If a key with Gemini access is embedded in client-side JavaScript, checked into a public repository, or otherwise exposed on the internet, you have a problem. Start with your oldest keys first. Those are the most likely to have been deployed publicly under the old guidance that API keys are safe to share, and then retroactively gained Gemini privileges when someone on your team enabled the API.
If you find an exposed key, rotate it.
Bonus: Scan with TruffleHog.
You can also use TruffleHog to scan your code, CI/CD pipelines, and web assets for leaked Google API keys. TruffleHog will verify whether discovered keys are live and have Gemini access, so you'll know exactly which keys are exposed and active, not just which ones match a regular expression.
trufflehog filesystem /path/to/your/code --only-verified
The pattern we uncovered here (public identifiers quietly gaining sensitive privileges) isn't unique to Google. As more organizations bolt AI capabilities onto existing platforms, the attack surface for legacy credentials expands in ways nobody anticipated.
Webinar: Google API Keys Weren't Secrets. But then Gemini Changed the Rules.
tl;dr Google spent over a decade telling developers that Google API keys (like those used in Maps, Firebase, etc.) are not secrets. But that's no longer true: Gemini accepts the same keys to access your private data. We scanned millions of websites and found nearly 3,000 Google API keys, originally deployed for public services like Google Maps, that now also authenticate to Gemini even though they were never intended for it. With a valid key, an attacker can access uploaded files, cached data, and charge LLM-usage to your account. Even Google themselves had old public API keys, which they thought were non-sensitive, that we could use to access Google’s internal Gemini.

Google Cloud uses a single API key format (AIza...) for two fundamentally different purposes: public identification and sensitive authentication.
For years, Google has explicitly told developers that API keys are safe to embed in client-side code. Firebase's own security checklist states that API keys are not secrets.
Note: these are distinctly different from Service Account JSON keys used to power GCP.

Source: https://firebase.google.com/support/guides/security-checklist#api-keys-not-secret
Google's Maps JavaScript documentation instructs developers to paste their key directly into HTML.

This makes sense. These keys were designed as project identifiers for billing, and can be further restricted with (bypassable) controls like HTTP referer allow-listing. They were not designed as authentication credentials.
Then Gemini arrived.

When you enable the Gemini API (Generative Language API) on a Google Cloud project, existing API keys in that project (including the ones sitting in public JavaScript on your website) can silently gain access to sensitive Gemini endpoints. No warning. No confirmation dialog. No email notification.
This creates two distinct problems:
Retroactive Privilege Expansion. You created a Maps key three years ago and embedded it in your website's source code, exactly as Google instructed. Last month, a developer on your team enabled the Gemini API for an internal prototype. Your public Maps key is now a Gemini credential. Anyone who scrapes it can access your uploaded files, cached content, and rack up your AI bill. Nobody told you.
Insecure Defaults. When you create a new API key in Google Cloud, it defaults to "Unrestricted," meaning it's immediately valid for every enabled API in the project, including Gemini. The UI shows a warning about "unauthorized use," but the architectural default is wide open.

The result: thousands of API keys that were deployed as benign billing tokens are now live Gemini credentials sitting on the public internet.
What makes this a privilege escalation rather than a misconfiguration is the sequence of events.
A developer creates an API key and embeds it in a website for Maps. (At that point, the key is harmless.)
The Gemini API gets enabled on the same project. (Now that same key can access sensitive Gemini endpoints.)
The developer is never warned that the key's privileges changed underneath it. (The key went from public identifier to secret credential.)
While users can restrict Google API keys (by API service and application), the vulnerability lies in the insecure default posture (CWE-1188) and improper privilege management (CWE-269):
Implicit Trust Upgrade: Google retroactively applied sensitive privileges to existing keys that were already rightfully deployed in public environments (e.g., JavaScript bundles).
Lack of Key Separation: Secure API design requires distinct keys for each environment (Publishable vs. Secret Keys). By relying on a single key format for both, the system invites compromise and confusion.
Failure of Safe Defaults: A key generated through the GCP console defaults to unrestricted access, which permits calls to the sensitive Gemini API whenever that API is enabled. A user creating a key for a map widget is unknowingly generating a credential capable of sensitive actions.
The attack is trivial. An attacker visits your website, views the page source, and copies your AIza... key from the Maps embed. Then they run:
curl "https://generativelanguage.googleapis.com/v1beta/files?key=$API_KEY"
Instead of a 403 Forbidden, they get a 200 OK. From here, the attacker can:
Access private data. The /files/ and /cachedContents/ endpoints can contain uploaded datasets, documents, and cached context. Anything the project owner stored through the Gemini API is accessible.
Run up your bill. Gemini API usage isn't free. Depending on the model and context window, a threat actor maxing out API calls could generate thousands of dollars in charges per day on a single victim account.
Exhaust your quotas. This could shut down your legitimate Gemini services entirely.
The attacker never touches your infrastructure. They just scrape a key from a public webpage.
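The check is easy to script. Below is a minimal sketch; the `classify_status` helper and the commented-out curl invocation are our illustrative naming, not an official tool:

```shell
# Classify a key by the HTTP status the Gemini files endpoint returns for it.
classify_status() {
  case "$1" in
    200) echo "VULNERABLE: key is a live Gemini credential" ;;
    403) echo "RESTRICTED: key cannot reach Gemini" ;;
    *)   echo "UNKNOWN: HTTP $1" ;;
  esac
}

# The probe itself requires network access and a real key, so it is shown
# commented out here:
# STATUS=$(curl -s -o /dev/null -w '%{http_code}' \
#   "https://generativelanguage.googleapis.com/v1beta/files?key=$API_KEY")
# classify_status "$STATUS"

classify_status 200   # prints the VULNERABLE line
```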
To understand the scale of this issue, we scanned the November 2025 Common Crawl dataset, a massive (~700 TiB) archive of publicly scraped webpages containing HTML, JavaScript, and CSS from across the internet. We identified 2,863 live Google API keys vulnerable to this privilege-escalation vector.

Example: a Google API key embedded in front-end source code for Google Maps that can also access Gemini
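The extraction step of a scan like ours can be reproduced in miniature. The sketch below assumes crawl HTML has been downloaded to a local directory (the paths and the embedded key are fake), and relies on the standard Google API key format: `AIza` followed by 35 URL-safe characters:

```shell
# Stand-in for a downloaded Common Crawl page (the key below is fake).
mkdir -p crawl-html
cat > crawl-html/page.html <<'EOF'
<script src="https://maps.googleapis.com/maps/api/js?key=AIzaSyFAKEFAKEFAKEFAKEFAKEFAKEFAKEFAKE0"></script>
EOF

# Extract candidate Google API keys (AIza + 35 URL-safe chars) and dedupe.
grep -rhoE 'AIza[0-9A-Za-z_-]{35}' crawl-html/ | sort -u > candidate_keys.txt
cat candidate_keys.txt
```

Matching the pattern is only the first step; each candidate still has to be verified against a live endpoint before it counts as vulnerable.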
These aren't just hobbyist side projects. The victims included major financial institutions, security companies, global recruiting firms, and, notably, Google itself. If the vendor's own engineering teams can't avoid this trap, expecting every developer to navigate it correctly is unrealistic.
We provided Google with concrete examples from their own infrastructure to demonstrate the issue. One of the keys we tested was embedded in the page source of a Google product's public-facing website. By checking the Internet Archive, we confirmed this key had been publicly deployed since at least February 2023, well before the Gemini API existed. There was no client-side logic on the page attempting to access any Gen AI endpoints. It was used solely as a public project identifier, which is standard for Google services.

We tested the key by hitting the Gemini API's /models endpoint (which Google confirmed was in-scope) and got a 200 OK response listing available models. A key that was deployed years ago for a completely benign purpose had silently gained full access to a sensitive API without any developer intervention.
We reported this to Google through their Vulnerability Disclosure Program on November 21, 2025.
Nov 21, 2025: We submitted the report to Google's VDP.
Nov 25, 2025: Google initially determined this behavior was intended. We pushed back.
Dec 1, 2025: After we provided examples from Google's own infrastructure (including keys on Google product websites), the issue gained traction internally.
Dec 2, 2025: Google reclassified the report from "Customer Issue" to "Bug," upgraded the severity, and confirmed the product team was evaluating a fix. They requested the full list of 2,863 exposed keys, which we provided.
Dec 12, 2025: Google shared their remediation plan. They confirmed an internal pipeline to discover leaked keys, began restricting exposed keys from accessing the Gemini API, and committed to addressing the root cause before our disclosure date.
Jan 13, 2026: Google classified the vulnerability as "Single-Service Privilege Escalation, READ" (Tier 1).
Feb 2, 2026: Google confirmed the team was still working on the root-cause fix.
Feb 19, 2026: End of the 90-day disclosure window.
To be transparent, the initial triage was frustrating; the report was dismissed as "Intended Behavior." But after we provided concrete evidence from Google's own infrastructure, the GCP VDP team took the issue seriously.
They expanded their leaked-credential detection pipeline to cover the keys we reported, thereby proactively protecting real Google customers from threat actors exploiting their Gemini API keys. They also committed to fixing the root cause, though we haven't seen a concrete outcome yet.
Building software at Google's scale is extraordinarily difficult, and the Gemini API inherited a key management architecture built for a different era. Google recognized the problem we reported and took meaningful steps. The open questions are whether Google will inform customers of the security risks associated with their existing keys and whether Gemini will eventually adopt a different authentication architecture.
Google publicly documented its roadmap. This is what it says:
Scoped defaults. New keys created through AI Studio will default to Gemini-only access, preventing unintended cross-service usage.
Leaked key blocking. API keys that are discovered to be leaked will be blocked by default from use with the Gemini API.
Proactive notification. They plan to communicate proactively when they identify leaked keys, prompting immediate action.
These are meaningful improvements, and some are clearly already underway. We'd love to see Google go further and retroactively audit existing impacted keys and notify project owners who may be unknowingly exposed, but honestly, that is a monumental task.
If you use Google Cloud (or any of its services, like Maps, Firebase, or YouTube), the first thing to do is figure out whether you're exposed. Here's how.
Step 1: Check every GCP project for the Generative Language API.
Go to the GCP console, navigate to APIs & Services > Enabled APIs & Services, and look for the "Generative Language API." Do this for every project in your organization. If it's not enabled, you're not affected by this specific issue.

Step 2: If the Generative Language API is enabled, audit your API keys.
Navigate to APIs & Services > Credentials. Check each API key's configuration. You're looking for two types of keys:
Keys that have a warning icon, meaning they are set to unrestricted
Keys that explicitly list the Generative Language API in their allowed services
Either configuration allows the key to access Gemini.

Step 3: Verify none of those keys are public.
This is the critical step. If a key with Gemini access is embedded in client-side JavaScript, checked into a public repository, or otherwise exposed on the internet, you have a problem. Start with your oldest keys first. Those are the most likely to have been deployed publicly under the old guidance that API keys are safe to share, and then retroactively gained Gemini privileges when someone on your team enabled the API.
If you find an exposed key, rotate it.
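The three steps above can also be driven from the gcloud CLI. A minimal sketch follows; the `needs_review` helper is our illustrative naming, and the commented-out commands assume an authenticated gcloud installation:

```shell
# A key needs review if its API-target restriction list is empty (unrestricted)
# or explicitly includes the Generative Language API.
needs_review() {
  case "$1" in
    "") echo yes ;;                                   # unrestricted key
    *generativelanguage.googleapis.com*) echo yes ;;  # Gemini explicitly allowed
    *) echo no ;;
  esac
}

# Per project (requires network access and gcloud auth, so shown commented):
# gcloud services list --enabled --project "$PROJECT" | grep generativelanguage
# gcloud services api-keys list --project "$PROJECT" --format=json

needs_review ""                             # prints: yes
needs_review "maps-backend.googleapis.com"  # prints: no
```

Run the service check first: if the Generative Language API isn't enabled anywhere in the project, none of its keys can reach Gemini regardless of their restrictions.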
Bonus: Scan with TruffleHog.
You can also use TruffleHog to scan your code, CI/CD pipelines, and web assets for leaked Google API keys. TruffleHog will verify whether discovered keys are live and have Gemini access, so you'll know exactly which keys are exposed and active, not just which ones match a regular expression.
trufflehog filesystem /path/to/your/code --only-verified
The pattern we uncovered here (public identifiers quietly gaining sensitive privileges) isn't unique to Google. As more organizations bolt AI capabilities onto existing platforms, the attack surface for legacy credentials expands in ways nobody anticipated.