One of the advantages of Trusted Publishing [0] is that we no longer need long-lived tokens with publish rights. Instead, tokens are generated on the CI VM and are valid for only 15 minutes.
This has already been implemented in several ecosystems (PyPI, npm, Cargo, Homebrew), and I encourage everyone to use it; it actually makes publishing a bit _easier_.
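For a GitHub Actions setup, a minimal workflow can look something like the sketch below. This assumes the package on npmjs.com has already been configured to trust this specific repository and workflow file, and that the runner has a recent npm CLI (trusted publishing needs a fairly new one); all names are illustrative:

```yaml
# Minimal sketch of an npm Trusted Publishing (OIDC) workflow.
name: release
on:
  push:
    tags: ['v*']
permissions:
  contents: read
  id-token: write          # lets the job mint a short-lived OIDC token
jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 22
          registry-url: 'https://registry.npmjs.org'
      - run: npm install -g npm@latest   # trusted publishing requires a recent npm CLI
      - run: npm ci
      - run: npm publish                 # no NPM_TOKEN; npm exchanges the OIDC token itself
```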
More importantly, if the documentation around this still feels unclear, don’t hesitate to ask for help. Ecosystem maintainers are usually eager to see wider adoption of this feature.
And then there are other nonsensical proposals, like spelunking deep into projects, some of which could be over a decade old, and ripping out all the dependencies until nothing but the standard library is left. Look, I'm all for a better std lib, and I think reducing the number of dependencies we have is good. But just saying "you should reduce dependencies" will do nothing concrete to fix the problem that already exists, because it's much easier said than done.
So either tens or hundreds of thousands of developers stop using npm and everyone refactors their projects to add more code and strip dependencies, or npm starts enforcing things like 2FA and OIDC for developers of packages with over X weekly downloads, and blocks publishing for those that don't follow the new security rules. I think it's clear which solution is more practical to implement. The only other option is for npm to completely lose its reputation, and then we wind up with XKCD 927 again.
I've got no problem with doing an MFA prompt to confirm a publish by a CI workflow - but last I looked, this was a convoluted process of opening an HTTPS tunnel out (using a third-party solution) so that you could provide the code.
I'd love to see either npm or GitHub provide an easy, out-of-the-box way for me to provide/confirm a code during CI.
> A new Shai-Hulud branch was force-pushed to angulartics2 with a malicious GitHub Actions workflow by a collaborator. The workflow ran immediately on push (it did not need review, since the collaborator is an admin) and stole the npm token. With the stolen token, the attacker published malicious versions of 20 packages. Many of them are not widely used; however, the @ctrl/tinycolor package is downloaded about 2 million times a week.
I still don't get it. An admin on angulartics2 gets hacked, and his GitHub access is used to push a malicious workflow that extracts an npm token. But why would an npm token in angulartics2 have publication rights to tinycolor?
Why is local 2FA unsustainable?! The real problem here is automated publishing workflows. The overwhelming majority of NPM packages do not publish often enough or have complicated enough release steps to justify tokens with the power to publish without human intervention.
What is so fucking difficult about running `npm publish` manually with 2FA? If maintainers are unwilling to do this for their packages, they should reconsider the number of packages they maintain.
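For what it's worth, the manual flow really is a one-liner once 2FA is enabled on the account:

```sh
# Publish with a one-time password from your authenticator app;
# without --otp, npm will prompt for the code interactively.
npm publish --otp=123456
```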
I freaking HATE tokens. I hate them.
There should be a better way to do authentication than a glorified static password.
An example of how to do it correctly: GitHub as a token provider for AWS: https://aws.amazon.com/blogs/security/use-iam-roles-to-conne... But this is the exception rather than the rule.
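The pattern in that AWS post is roughly the following sketch (the role ARN is a placeholder, and the IAM role's trust policy must be set up to accept this repo's OIDC claims):

```yaml
# GitHub→AWS OIDC federation: no stored AWS secret anywhere.
permissions:
  id-token: write
  contents: read
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::123456789012:role/example-ci-role  # placeholder
          aws-region: us-east-1
      - run: aws sts get-caller-identity   # proves we got short-lived credentials
```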
I think the right way to approach this is to unbundle uploading packages from publishing them (making them available to end users).
CI systems should be able to build & upload packages in a fully automated manner.
Publishing the uploaded packages should require a human to log into npmjs's website & manually publish the package and go through MFA.
This is how Go works: you import by URL, e.g. "example.com/whatever/pkgname", which is presumed to be a VCS repo (git, mercurial, subversion, etc.) Versioning is done by VCS tags and branches. You "publish" by adding a tag.
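Concretely, the whole release process is just VCS operations:

```sh
# "Publishing" a Go module: tag the repo and push the tag.
git tag v1.2.3
git push origin v1.2.3

# Consumers fetch straight from the VCS host (usually via the module proxy):
go get example.com/whatever/pkgname@v1.2.3
```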
While VCS repos can be and have been compromised, this removes an entire attack surface from the equation. If you read every commit, or a diff between two tags, then you've seen it all. No need to also diff the .tar.gz packages. I believe this would have prevented this entire incident, and I believe also the one from a few weeks ago (AFAIK that also relied only on compromised npm accounts, not VCS?)
The main downside is that moving a repo is a bit harder, since the import path will change from "host1.com/pkgname" to "otherhost.com/pkgname", or "github.com/oneuser/repo" to "github.com/otheruser/repo". Arguably, this is a feature – opinions are divided.
Other than that, I can't really think of any advantages a "publish package" step adds? Maybe I'm missing something? But to me it seems like a relic from the old "upload tar archive to FTP" days, before VCS became ubiquitous (or nigh-ubiquitous, anyway).
I had just about convinced myself that we should be using a GitHub action to publish packages, because with publishing directly via 2FA there was always the possibility that one (or, specifically, I) could fuck up and publish something that wasn't a snapshot of trunk.
But I worried about stuff like this and procrastinated on forcing the issue with the other admins. And it looks like the universe has again rewarded my procrastination. I don’t know what the answer is but giving your credentials to a third party clearly isn’t it.
Imo, this is one of the most classic ways organizations get pwned: that one sin from your youth, years ago, comes back to bite you in the butt.
We also had one of these years ago. It wasn't the modern stack (the one everyone was working to scan and optimize and keep secure) that allowed someone to upload stuff to our servers. It was the editor that had been replaced years and years ago, whose replacement had itself also been replaced; the way it was packaged wasn't seen by the build-time security scans, but eventually someone found it with a URL scan. Whoopsie.
I also think it makes sense for GitHub to implement the ability to mark a workflow as sensitive and requiring "sudo mode" (MFA prompt) to run. It's not miles away from what they already do around requiring maintainer approval to run workflows on PRs.
Ideally both of these would exist, as not every npm package is published via GitHub actions (or any CI system), and not every GitHub workflow taking a sensitive action is publishing an npm package.
I wonder if someday we'll find there's also a more active process, which resembles "remove old shit because it may contain security vulnerabilities."
The OP gave the GH repo overly broad permissions. There is no good reason for the repo's CI workflow to have full access to everything under their account.
Solutions exist: generating tokens live with a short lifetime, OAuth with proper scopes, Biscuits that limit in fine detail what a token can do, and so on. They are just rarely used.
I once heard from a sysadmin who didn't want to automate certificate renewal and other things, because he believed that doing so would take away useful skills or some inner knowledge of how the system works. Given the risk of human error, I thought that was stupid; but when it comes to approval processes, I think it makes sense. Pushing code doesn't necessarily mean the same thing as such an approval, and if the main device you push code from gets compromised, using your phone as 2FA could save you.
Then again, maybe I'm also stupid, and the way we build our software is messed up on a fundamental level, with all of the dependencies and nobody being able to practically audit all of the code they import, given deadlines, limited skills and resources, and so on. Maybe it's all just tilting at windmills.
Packages which don't have approval and review by a reliable third party shouldn't be visible by default in a package manager.
In the case of this worm, the OIDC flow wouldn’t even help. The GitHub workflow was compromised. If the workflow was using an OIDC credential like this to publish to npm, the only difference would be the npm publish command wouldn’t use any credential because the GitHub workflow would inject some temporary identity into the environment. But the root problem would remain: an untrusted user shouldn’t be able to execute a workflow with secret parameters. Maybe OIDC would limit the impact to be more fine-grained, but so would changing the token permissions.
Ongoing downstream review of all dependency code is practical for only a tiny fraction of projects; for most projects using publisher reputation as a proxy for package safety is reasonable.
What’s not working is the low-standards package managers where inconveniencing authors is never acceptable because the whole enterprise is built on popularity with authors — you can’t trust that what those package managers give you reflects author intent.
(right now I don't know the answer to that for the stuff I'm responsible for, but I'm in the process of researching and setting up and configuring the sort of tools needed to automate that.)
Speaking knowingly reductionistically and with an indeterminate amount of sarcasm, one of the hardest problems in security is how to know something without knowing something. The first "knowing something" is being able to convince a security system to let you do something, and the second is the kind that an attacker can steal.
We do a lot of work trying to separate those two but it's a really, really hard problem, right down at its very deepest core.
I know I was amused 5-10 years ago as we went through a lot of gymnastics. "We have an SSH password here that we use to log in to this system over there and run this process." "That's not secure, because an attacker can get the password. Move that to an SSH key." "That's not secure, an attacker can get the key. Move the key into this secret manager." "That's not secure, an attacker can get into the secret manager. Move it to this 2FA system." "That's not secure, an attacker can get the 2FA token material, move it to...."
There are improvements you can make; if nothing else a well-done 2FA system means an attacker has to compromise 2 systems to get in, and if they are non-correlated that's a legit step up. But I don't think there's a full solution to "the attacker could" in the end. Just improvements.
If I control the issuing and governance of these short-lived secrets, they very much help against many attacks. Go ahead and extract an upload token for one project that lives for 60 seconds; be my guest. Once I lose control of how these tokens are created, most of these advantages go away - you can just create a token every minute, for any project this infrastructure might be responsible for.
If I maintain control of my pipeline definition, I can again do a lot of work to limit damage. For example, I can make sure the stages running untrusted code have as little access to secrets as possible, possibly isolate them in bubblewrap, VMs, ..., and minimize the code with access to publishing rights. Once I lose control of the pipeline structure, all that goes away. Just add a build step to push all information and secrets to Mastodon in individual toots, yey.
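As a sketch of that idea in GitHub Actions terms (job, artifact, and secret names are illustrative): run the untrusted code in a job that has no secrets at all, pass the artifact forward, and let only a separate job see the publish credential.

```yaml
# Untrusted code (builds, tests, postinstall scripts) runs with no secrets;
# only the publish job ever sees the token.
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci && npm run build          # untrusted code, secretless environment
      - uses: actions/upload-artifact@v4
        with:
          name: package
          path: ./
  publish:
    needs: build
    runs-on: ubuntu-latest
    environment: release                      # protection rules can gate this job
    steps:
      - uses: actions/download-artifact@v4
        with:
          name: package
      - uses: actions/setup-node@v4
        with:
          registry-url: 'https://registry.npmjs.org'
      - run: npm publish
        env:
          NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }}   # exposed to this job only
```

Of course, this only holds while the attacker can't edit the workflow file itself, which is exactly the point about losing control of the pipeline structure.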
To me, this has very much raised questions about keeping pipeline definitions and code in one repository. Or at least, to keep a publishing/release process in there. I don't have a simple solution there, especially for OSS software with little infrastructure - it's not an easy topic. But with these supply chain attacks coming hot and fast every 2 weeks, it's something to think about.
You won't be able to exfiltrate a token that allows you to publish an npm package outside of a workflow; the infection has to happen during a build on GH.
It would have made little difference if the environment variable was NPM_WEBIDENTITY instead of NPM_TOKEN. The workflow was still compromised.
I can look into that.
- [GitHub - safedep/vet: Protect against malicious open source packages](https://github.com/safedep/vet)
- [GitHub - AikidoSec/safe-chain](https://github.com/AikidoSec/safe-chain)
- npm audit (built into the npm CLI; a minimal example follows below)
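A hedged note on that last one: npm audit only flags *known* advisories, so it would not catch a fresh worm before it's reported, but it is trivial to add as a CI gate:

```sh
# Fail the build when advisories at or above "high" severity exist.
npm audit --audit-level=high
```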
In any case, if the choice is “frequent supply chain compromise, take it or leave it”, the answer is of course “leave it”.
If we need to pay for curated packages because the problems with NPM are endemic, that’s not unreasonable.
In the meantime, I'm trying to do my part through occasional random spot inspections when there's an update to a package, and encourage others to do the same for swarm coverage.
Yeah, there's that insane entitlement. More demands for others' time and labor, plus the conflation between you demanding labor vs if people don't agree to your free labor demands, they're pro supply chain compromise.
A malicious GitHub Actions workflow was pushed to a shared repo and exfiltrated an npm token with broad publish rights. The attacker then used that token to publish malicious versions of 20 packages, including @ctrl/tinycolor.
Neither my GitHub account nor the @ctrl/tinycolor repository was directly compromised. There was no phishing involved, and no malicious packages were installed on my machine; I already use pnpm to avoid unapproved postinstall scripts. There was no pull request involved, because a repo admin does not need a pull request to add new GitHub Actions workflows.
GitHub/npm security responded quickly, unpublishing the malicious versions. I followed by releasing clean versions to flush caches, as advised.
For broader context, see Socket’s write-up or StepSecurity’s analysis. For community discussion, see this Hacker News post, which spent 24 hours on the front page. I’m also finding this wiz.io post helpful.
On September 15 around 4:30 PM PT, Wes Todd DM’d me on Bluesky and looped me into the OpenJS Foundation Slack. By that point, Wes had already alerted GitHub/npm security, who were compiling lists of affected packages and rapidly unpublishing compromised versions.
Early guidance (attributed to Daniel Pereira) was to look for suspicious Shai-Hulud repos or branches. I wasn't able to find any on my own personal repos. The mystery was: how was I impacted at all?
Shai-Hulud was the Fremen term for the sandworm of Arrakis. - Dune wiki
A while ago, I collaborated on angulartics2, a shared repository where multiple people still had admin rights. That repo still contained a GitHub Actions secret: an npm token with broad publish rights. This collaborator also had access to projects shared with other people, which I believe explains some of the other 40 initially affected packages.
A new Shai-Hulud branch was force-pushed to angulartics2 with a malicious GitHub Actions workflow by a collaborator. The workflow ran immediately on push (it did not need review, since the collaborator is an admin) and stole the npm token. With the stolen token, the attacker published malicious versions of 20 packages. Many of them are not widely used; however, the @ctrl/tinycolor package is downloaded about 2 million times a week.
GitHub and npm security teams moved quickly to unpublish the malicious versions. I then re-published fresh, verified versions of the packages I maintain to flush caches and restore trust.
Malicious versions of several packages — including @ctrl/tinycolor — were briefly available on npm before removal. Installing those compromised versions would have triggered a postinstall payload, which is documented in detail by StepSecurity.
What should you do if you've installed a compromised version of a package? See StepSecurity's immediate actions.
I currently use semantic-release with GitHub Actions to handle publishing. The automation is convenient and predictable. I also have npm provenance enabled on many packages, which provides attestations of how they were built. Unfortunately, provenance didn’t prevent this attack because the attacker had a valid token.
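For anyone replicating this part of the setup: provenance can be attached per publish with a flag, or enabled permanently via `publishConfig.provenance` in package.json.

```sh
# Attach a provenance attestation to this publish (requires a supported
# CI environment, such as GitHub Actions).
npm publish --provenance
```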
My goal is to move to npm’s Trusted Publishing (OIDC) to eliminate static tokens altogether. However, semantic-release integration is still in progress: npm/cli#8525.
For the foreseeable future, @ctrl/tinycolor requires 2FA for publishing, and all tokens have been revoked. I'm not expecting to merge any new changes anytime soon.
For smaller packages, I’ll continue using semantic-release but under stricter controls: no new contributors will be added, and each repo will use a granular npm token limited to publish-only rights for that specific package.
Local 2FA based publishing isn’t sustainable, so I’m watching OIDC/Trusted Publishing closely and will adopt it as soon as it fits the workflow.
I plan to continue using pnpm, which prevents unapproved postinstall scripts from running, and I'll look into adding pnpm's new minimumReleaseAge setting.
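As I understand the setting, it goes in pnpm-workspace.yaml and is expressed in minutes, so a value like the one below means a freshly published (possibly malicious) version can't be pulled in for three days:

```yaml
# pnpm-workspace.yaml: refuse to install versions published less than N minutes ago.
minimumReleaseAge: 4320   # 3 days
```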
If I could wave a magic wand and design my ideal setup, npm would allow me to require Trusted Publishing (OIDC) with a single toggle for all of my packages. That same toggle would block any release missing provenance, enforcing security at the account level. I’d also want first-class semantic-release support with OIDC and provenance so no static tokens are ever needed.
On top of that, I'd like a secure, human-approved publishing option directly in the GitHub UI: a protected workflow_dispatch flow that uses GitHub's 2FA approval to satisfy npm's 2FA requirement, without requiring me to publish from my laptop.
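The closest approximation I can see with today's features is a manually triggered workflow gated by a protected environment, where a required reviewer has to approve the run in the GitHub UI (behind their normal GitHub login and 2FA). It's not true 2FA-on-publish, but it puts a human back in the loop; names below are illustrative:

```yaml
# Sketch: manual trigger + environment approval (the reviewer list is
# configured in the repo's environment settings, not in this file).
on: workflow_dispatch
jobs:
  publish:
    runs-on: ubuntu-latest
    environment: npm-publish    # waits for a required reviewer to approve
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          registry-url: 'https://registry.npmjs.org'
      - run: npm ci && npm publish
        env:
          NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }}
```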
GitHub Environments — or equivalent workflow protections — should be available without a Pro subscription, or else integrated directly into Trusted Publishing so that security doesn’t depend on the pricing tier.
It would be really nice if npm also had a more visible mark on the package details page to indicate whether the package has a postinstall script. Also, once packages are pulled, it's not clear which versions were removed and why.
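Until then, that information is at least one command away:

```sh
# Show a package's lifecycle scripts (postinstall, etc.) before installing it.
npm view @ctrl/tinycolor scripts
```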
Thanks to Wes Todd, the OpenJS Foundation, and the GitHub/npm security teams for their rapid and coordinated response. Everyone was incredibly fast, helpful, and knowledgeable.