Looking back ten years to `left-pad`, are there more successful attacks now than ever? I would suspect so, and surely the value of a successful attack has also increased, so are we actually getting better as a broad community at detecting them before package release? It's a complex space, and commercial software houses should do better, but it seems that whilst there are some excellent commercial products (e.g. CI scan tools), generally accessible, idiot friendly tooling is somewhat lacking for projects which start as hobby/amateur code but end up being a dependency in many other projects.
I've cross-posted my comment from the current SAP supply chain attack thread [0].
In the meantime, please use 2.6.1 until we publish 2.6.4.
For more details: https://github.com/Lightning-AI/pytorch-lightning/security/a...
"deependujha hi @thebaptiste, thanks for inquiring. Release of 2.6.2 is blocked due to some internal reasons. Will notify once release is made. "
I'd hate it if they knew of the problem that long ago and didn't warn until now. If someone has more info and can clarify I'd be thankful.
https://github.com/Lightning-AI/pytorch-lightning/issues/216...
FYI, pip added cooldowns in 26.1:
* https://discuss.python.org/t/announcement-pip-26-1-release/107108
* https://ichard26.github.io/blog/2026/04/whats-new-in-pip-26.1/
To use: * CLI: pip install --uploaded-prior-to=P1D ...
* Env Var: PIP_UPLOADED_PRIOR_TO=P1D pip install ...
* Config: pip config set global.uploaded-prior-to P1D[1] https://github.com/Lightning-AI/pytorch-lightning/issues/216...
[2] https://github.com/Lightning-AI/pytorch-lightning/issues/216...
[3] https://github.com/Lightning-AI/pytorch-lightning/issues/216...
[4] https://github.com/Lightning-AI/pytorch-lightning/issues/216...
[5] https://socket.dev/blog/lightning-pypi-package-compromised
maybe it's time for a nextgen opensnitch where the rules table is replaced by an active agent that watches connections and the process table?
Do folk not understand that by doing so, you're enabling modules to maliciously write themselves in to your code?
If you're interested in synchronicity and frequency illusion, Sergei v. Chekanov wrote a book that sounds interesting https://jwork.org/designed-world/
Have you ever experienced coincidences that cannot be logically explained? This book helps the readers understand the meaning of synchronicity, or remarkable coincidences in people's lives. This work not only explains the mystery of synchronicity, originally introduced by Carl Jung, but it also shows how to make simple calculations to estimate the chances that coincidences are not due to mere randomness.
https://github.com/Lightning-AI/pytorch-lightning/security/a...
This is why I have been building, for my own usecases, a new language + compiler + vm that is completely source based. The compiler does not understand linking. You must vendor every single dependency you use, including the standard library, so that it makes its way into the bytecode. The register VM itself is a few thousand lines of freestanding C. Any competent programmer can audit it over a weekend.
v1 deliberately keeps FFI (outside of a bounded set of linux syscalls) outside the current spec as libc has the habit of infecting everything it touches and I want to keep Vm0 freestanding. The last time I compiled the VM, it produced a 70KB binary and supported a loader with structural verification, the entire instruction set using a threaded interpreter, a simple Cheney+MS GC, concurrency via an Erlang-style M:N scheduler working on a single thread, and 20-odd marshaled functions.
Most software in the world does not need anything more than this. Everyone acts as if they are building the next Google.
”…for Shai-Hulud!!!”
From their website [1]
> Python does not publish official distributable binaries. As such, uv uses distributions from the Astral python-build-standalone project. See the Python distributions documentation for more details.
It points to this GitHub repo https://github.com/astral-sh/python-build-standalone which mentions this other link https://gregoryszorc.com/docs/python-build-standalone/main/r...
If I understand correctly, the source code for building Python is not fetched directly from python.org. Not so sure how secure is that.
I have the same concern for asdf [2]. However, they use pyenv [3] which, I think, feels more official.
Can someone clarify this? Which tool is better/more secure for installing python: uv or asdf?
[1] https://docs.astral.sh/uv/guides/install-python/
[2] https://github.com/asdf-community/asdf-python
[3] https://github.com/pyenv/pyenv/tree/master/plugins/python-bu...
For example, at that time, one way to distribute machine learning models was via Python pickles. Which are executable objects with no restriction built in. Models in this format could do anything on a computer where the model was imported. Such an early 'wild-west' ecosystem can definitely make security compromises easier and resulting supply chain attacks more common.
An extreme example is now when I make interactive educational apps for my daughter, I just make Opus use plain js and html; from double pendulums to fluid simulations, works one shot. Before I had hundreds of dependencies.
Luckily with MIT licensed code I can just tell Opus to extract exactly the pieces I need and embed them, and tweaked for my usecase. So far works great for hobby projects, but hopefully in the future productions software will have no dependencies.
https://github.com/search?q=A%20Mini%20Shai-Hulud%20has%20Ap...
One of these days Anthropic is going to be compromised and we’re all gonna be f*cked.
Think twice before looking at a package and most importantly, always pin your dependencies.
I can see trying to steal crypto, but what do they do if they get some AWS credentials? Try to run some crypto mining instances? Try to use your account for other types of crimes? Or is it mainly trying to steal data and then ask for ransoms?
Historically, extra-security-scanned artefact handling has been a paid enterprise option. Whereas the less secure option is the much-less-hassle default.
IDK how good a business model this is, I suspect not very.
And the reason it jumps from npm to pip to whatever is that it's trying to find all the user's keys in well known locations for any of these repos.
So teampcp is sitting on tens of thousands of passwords or keys and they just need time to run tests on them to figure out what packages they can release to get even more attacks out there.
Why all the major repo vendors haven't done a full cred wipe? No idea (unless they have and I just wasn't on the email list)
NPM should have returned error codes when the author of left-pad attempted to remove all his data with the intention of leaving the service.
To quote Wikipedia:
> After Koçulu expressed his disappointment with npm, Inc.'s decision and stated that he no longer wished to be part of the platform, Schlueter [author of NPM] provided him with a command that would delete all 273 modules that he had registered.
My impression is that the market currently rewards visible software functionality with little concern for invisible risk.
If we flipped the script, and investors were personally, criminally, and civilly liable for computer breaches, I imagine this problem would disappear almost overnight.
I can't vouch for the number of attacks, but, and since we are talking about Python, nothing substantially changed since the time of `left-pad`. The same bad things that enabled supply chain attacks in Python ten years ago are in place today. However, it looks like there are more projects and they are more interconnected than before, so, it's likely that there are either more supply chain attacks, or that they are more damaging, or both.
Here's my anecdotal experience with Python's packaging tools. For a while, I was maintaining a package to parse libconfuse configuration language. It started as a Python 2.7 project, but at the time there was already some version of Python 3 available, so, it was written in a way that was supposed to be future-proof.
I didn't need to change the code of the project in the last ten or so years, but roughly once a year something would break in the setup.py. Usually, because PyPA decided to remove a thing that didn't bother anyone.
When Python 3.13 came out, as clockwork, setup.py broke. I rolled up my sleeves and removed the dependency on setuptools, instead, I wrote some Python code that generated a wheel from the project's sources. I didn't look up the specification of the RECORD file in dist-info directory, and assumed that sha256().hexdigest() will generate the checksums in the desired format. And that's how I shipped my packages...
Some time later, the company added an AI reviewer to the company's repo and it discovered that instead of hexdigest() the checksums have to be base64-encoded and then padding removed...
Now, to the punchline: nobody cared. The incorrectly generated packages installed perfectly fine without warnings. Nobody checks the checksums.
More so: nobody checks that during `pip install` or the more fancy `uv pip install` the packages aren't built locally (i.e. nobody cares that package installation will result in arbitrary code execution). It's not just common, it's almost universal to run `pip install` on production machines as a means of deploying a Python program. How do I know this? -- The company I work for ships its Python client as a... source package. Not intentionally. We are just lazy. But nobody cares.
It's real. As of the beginning of April we'd had 7 in the past 12 months vs 9 in the two decades before that: https://www.jefftk.com/p/more-and-more-extensive-supply-chai...
The value has increased, and that is what drives all these attacks. Cryptocurrencies are to blame in particular because they not just provided a way for money laundering the proceeds but also a juicy target in itself.
And what is stolen with today's malware? Cloud credentials. Either to use for illicit mining, which is on the decline, or to run extortion campaigns, which is made possible by cryptocurrencies. All too often it's North Korea or Iran running these campaigns.
python-build-standalone fetches CPython sources directly from python.org[1]. I don't even know where else we would get them from!
[1]: https://github.com/astral-sh/python-build-standalone/blob/a2...
I ran an open source project with tens of thousands of downloads (presumably all either developer machines or webservers, so even a small number is valuable) and never received a malicious pull request, offer of a bribe to install malware, or a phishing attempt with enough effort to even catch my attention.
What it says to me is that there weren't a lot of people working on the crime side of this. It's like dropping your wallet in a bar bathroom and coming back to find it still there.
Maybe a Python culture problem; maybe a hallmark of Python's status as an "easy to hire for", manager-friendly, least common denominator blub language; maybe a risk that stems from the conveniences of interpreter languages... but this is such a shame in this day and age.
It's seriously not difficult to do better. And if this is what you're doing, you're also missing out on reproducible environments both in dev and in prod. At least autogenerate a Nix package! You still don't need to publish any artifacts, but you can at least have the thing build in a sandbox or yeet the whole closure over SSH.
It's also not that hard to get a Docker image out of a Python project.
You only need one platform-minded person on the whole development team to make this happen.
What is going on???
You can call it laziness, but it's not like the python ecosystem has ever developed an answer for this problem. The only reasonable answer has been to use docker, which is basically admitting that the python community did nothing.
If my project has 100 dependencies, the release of an updated dependency will inevitably be a daily occurrence.
I'm sure the NSA does similar things to them but we aren't really informed about that detail.
I'm worried about, say, `mdformat` (a widely used formatter mostly maintained by one person in their spare time), not to mention some super-specific dependency that hasn't been updated in years and is 3 levels deep in your dep tree. I really don't want to pin & manually approve every single update for an app that's under active development, but it's beginning to look like that's mandatory for any serious app.
In the meantime, I've gotta go get my API keys out of my unencrypted `.env` files! Getting burned on a large, consumer-facing webapp would be embarrasing but logical, but losing hundreds to thousands of dollars because of some indirect dependency of some silly one-off demo repo that just happens to be on the same host & system as my `.env`s... oof.
Anyone know if OAI or Anthropic will refund you if you get your keys stolen like this? Or is it user error?
this account seems to store a lot of keys, not sure what theyre for
With the new generation of yolo NPM scripters, they simply don't evaluate the risks. They will even fight back telling you that it's the way of doing things.
In reality, it's the warning we learnt back then, that's the result of be mindlessly importing third dependencies without thinking.
In other words, the risks were always there, the new "modern way", let's put it that way, doesn't put the effort anymore.
I'm going to go publish some MIT-licensed remote access code and get that into Opus's training data.
It should be feasible to design vulnerabilities which look benign individually in training data, but when composed together in the agent plane & executed in a chain introduce an exploit.
There’s nothing technical really stopping that from existing right now. It’s just that nobody has put the effort in yet.
You say you rely on CC to suggest software to install from the internet, and then you install it.
I haven't heard anyone suggest CC or any LLM as a "filter" for "is this package safe right now", and it seems like a very bad heuristic to me, not only, but also for the reason you gave.
This malware isn't even trying. Then again it's Microsoft so they're not even trying either.
However a lot of the time especially for older codebases the docker build will just run pip install from public pypi without a proper lockfile.
So at least install code isn't being executed on your production machine, but still significant surface area for supply chain attacks
More like hiding their heads in the sand in circumstances that are outside of their ability to fix. None of the tooling or practices out there push you in the direction of not being at risk, or even provide you with easy ways to stay completely safe: no external packages needed to develop software with everything you NEED being provided out of the box, or a flow where pulling in a new package makes you review all of its source code line by line and compile everything instead of any binary tooling blobs, or built in vulnerability and configuration scanning so you don't get pwned by Trivy or don't leave an open S3 bucket somewhere, which also means that obviously you'd need thorough observability and alerting for any of the cloud stuff you do.
And even when they exist, your org projects might be painfully out of date, too much to use those approaches, or the org culture might not be there, or any number of other issues I can't even imagine. On one hand, people are running out of date software and those have CVEs, on the other using dependencies that are too new also puts you at risks of compromised packages - it's like we're being squeezed by rocks on both sides in a landslide or something. Even at the OS level, the fact that everyone is not running something like Qubes OS or regular VMs for development is absolutely insane. The fact that all software isn't sandboxed and that desktop OSes don't prompt for permissions like mobile apps do is absolutely insane. That we don't have firewalls like Glasswire as standard that prompt you for external connections, or don't allow easily blocking what you don't trust is insane.
Despite lots of people trying their best, on some level, everything both up and down the stack is absolutely fucked for a variety of complex reasons. You'd have to largely tear it all down and rebuild everything starting with your OS kernel in a memory safe language and formal proofs and thorough testing for everything (if it took SQLite as long as it did to get a decent test suite, it might as well take on the order of decades to do it for a production OS kernel and drivers), then do the same for all userland software and DBs and tooling and dependency management and secrets management (not just random files, special hardware most likely) and so on. It's not happening, so we just build towers of cards.
For something more practical: https://nesbitt.io/2026/03/04/package-managers-need-to-cool-...
Same with npm and large dependency trees with 10.5 line libraries of low quality.
Lighting always seemed to be the leftpad of PyTorch. It was basically a replacement for a for loop and a couple of backward/step calls. I'm sure now it grew to replace a few more lines of code though. Like maybe a 100.
If you want to look for a coming disaster, look no further than HuggingFace libraries that for some reason quite a lot of projects use these days, especially transformers package. Sadly even vllm depends on it.
Business school. Ahaha.
router_runtime.js
SHA256 5f5852b5f604369945118937b058e49064612ac69826e0adadca39a357dfb5b1
SHA1 f1b3e7b3eec3294c4d6b5f87854a52471f03997f
MD5 40d0f21b64ec8fb3a7a1959897252e09I assume you're using hyperbole.
Some of us are very aware and concerned about the risk. But like Cassandra from Greek mythology, we see the coming disaster and feel powerless to stop it.
> that's the result of be mindlessly importing third dependencies without thinking
tbf, most tech-related corporate environments don't want you to think, just do (kpi, mbo, okr et al) and this is one of the resultsThe more one knows about computer programming, algorithms, data structures, how things are usually implemented in general, the better one can avoid unnecessary dependencies. Needs the right environment though to execute on that.
So if you just need to do something simple like fire off a compute heavy background task and then get a result when it is done, you should probably just roll your own implementation on top of the threading API in your language. That'll probably be very stable. You don't need a massive background task orchestration framework.
People might object that the frameworks will handle edge cases that you've never thought of, but I've actually found in enterprise settings that the small custom implementations--if you actually keep it small and focused--can cover more of the edge cases. And the big frameworks often engineer their own brittle edge cases due to concerns that you just don't have.
So anyway, it isn't as simple as "dependencies are bad" or "dependencies are good", but every dependency has a cost/benefit analysis that needs to go along with it. And in an Enterprise, I'd argue that if you audit the existing dependencies you will find way too many of them that should be removed or consolidated because they were done for the speed of initial delivery and greenfielding. Eventually when you accumulate way too many of those dependencies the exposure to the supply chains, the need to keep them updated, the need to track CVEs in those deps, and the need to fix code to use updated versions of those dependencies, along with not have the direct ability to bugfix them, all combine to produce an ongoing tax of either continual maintenance or tech debt that will eventually bite you hard.
vs the dependency broke something and now you're responsible for working around someone else's broken code.
Honestly, I've seen much more of the latter. Especially nowadays with every single dependency thinking they are an fully fledged OS because an agent can add 1000 feature/bug in no time. Picking the right dependency maintaining by a sane maintainer is like digging potatoes in a minefield.
We seem to greatly overestimate the amount of code needed to do something.
For example, there are billions of lines of code from me pressing a key, to you seeing what I wrote. But if we were to make a special program that communicates via ipv6 and icmp, and it is written for hazard3 pico2350 with wiz5500 ethernet breakout, the whole thing including the c compiler to compile your code (which could very well outperform gcc -O3) will be 5-6k lines of code, including RA, and even barebones spi drivers, and a small preemptive os.
So, it is not unreasonable to manage all of those changes.
I don't buy the notion of things breaking down over time, though. For "first-party" code that sticks to HTML and CSS standards, and Stage 4 / finished ecmascript standards, the web is an absurdly stable platform.
It certainly used to be that we had to do all sorts of weird vendor hacks because nobody agreed on anything and supporting IE6 and 7 were nightmares, and blackberry's browser was awful, but those days are largely behind us unless you're doing some cutting-edge chrome-only early days proposed stuff or a browser specific extension or something else that isn't a polished standard.
Even with timezone changes, you're better off using the system's information with Intl.DateTimeFormat.
1. Packj (https://github.com/ossillate-inc/packj) detects malicious PyPI/NPM/Ruby/PHP/etc. dependencies using behavioral analysis. It uses static+dynamic code analysis to scan for indicators of compromise (e.g., spawning of shell, use of SSH keys, network communication, use of decode+eval, etc). It also checks for several metadata attributes to detect bad actors (e.g., typo squatting).
2. This is just one of the four techniques the worm uses to phone home.
- I ask the LLM for multiple options
- I tell it what I need and what I don't need
- I then look at the packages it has suggested. Sometimes LLMs suggest unmaintained packages with 5 downloads a month just because it came at the top of a web search.
- if it's not a very well known project, I look at the code, I have received vibecoded dependency suggestions before that don't even function
LLMs are useful resources for "getting the pulse of the ecosystem", but just pressing enter is crazy.
> The attack steals credentials, authentication tokens, environment variables, and cloud secrets, while also attempting to poison GitHub repositories.
If they haven't started yet, they should require 2nd factor for publishing as well.
Any suggestions for a vm/container setup that works on a Linux host, provides the safety net you describe, and is still capable enough to try out all these things that people are talking about?
You would have to publish the infected package first to infect others who haven't pinned their dependencies. With a simple pip install -U, and if the dependency is not pinned, then they will get the vulnerable version.
If your attacker assumes that all or most software will be generated from language models, the time penalty is worth paying.
Of course most devs lie to ourselves because of our ego that pulling in deps is /just/ a time-saving measure, but of course we know there are some incredibly high quality libraries and frameworks that we don't have the skills or experience to replicate to the same level
If I remember correctly from Shai-Hulud 2, the attacker extricated creds by posting them in public github repos with minor easily reversible encryption. I believe it was double b64 last time.
I'm assuming the logic there is that every security researcher and company is going to pull and scan those creds for their stuff and their clients' stuff. So the attacker is just 1 of N people downloading it. As opposed to trying to send it to their own machine directly.
Cross portability and compilation and its very few dependency/stdlib approach with simplicity, I just really love golang.
I had built[0] a cuckoo.org alternative at https://fossbox.cloud which has only one dependency of gorilla web sockets aside from stdlib
If I were to rewrite it in rust, I couldn't say the same. Golang's stdlib is that good.
My point is, although I understand Rust can have some advantages in other areas, the advantages of golang outweigh rust for me by a very high margin. There is also the factor that I just feel more comfortable reading golang code and picking through it than rust.
It is my opinion that you can go a very very long way with a garbage collector than people imagine even on constrained systems. Unless absolutely necessary, thinking about GC feels like it might be a premature optimization in many instances which is worth thinking about.
[0]: More like (vibecoded?) as this is just a single file main.go which I had prompted on gemini 3.1 pro sometime ago. It was just a prototype which works surprisingly well that I had made because I was using the cuckoo website with friends but it kept on lagging.
The public repo path is just one of four parallel paths, with the goal of getting around any barriers:
The exfiltration component shares its design with the "Mini Shai-Hulud" mechanism from their last campaign, using four parallel channels so stolen data gets out even if individual paths are blocked.If they have a clue, the attacker still will not download that without using a botnet tunnel or Tor at a minimum.
Note though that these credentials aren't even encrypted using some lightweight ECC to prevent others from capturing them, they're posted in cleartext. Embarassment might be part of the point.
I think devs who didn't care back then also won't care in the future and will still run around with requirements.txt file in 10 years.
Now I think go will come close to this number, so in reality, there might not be a real difference. But a leak somewhere is far more likely especially as these are mostly vibe coded (my binary has multiple functionality).
The biggest advantage that go have over rust is the stdlib and ecosystem that doesn't depend on 100 packages. And maybe that will be the deciding factor in the future or someone (I'm getting increasingly itchy for it) will need to reinvent the ecosystem to be less like npm.
The PyPI package 'lightning', a widely-used deep learning framework, was compromised in a supply chain attack affecting versions 2.6.2 and 2.6.3 published on April 30, 2026. Teams building image classifiers, fine-tuning LLMs, running diffusion models, or developing time-series forecasters frequently have lightning somewhere in their dependency tree.
Running pip install lightning is all that is needed to activate. The malicious versions contain a hidden _runtime directory with obfuscated JavaScript payload that executes automatically upon module import. The attack steals credentials, authentication tokens, environment variables, and cloud secrets, while also attempting to poison GitHub repositories. It has Shai-Hulud themes including creating public repositories called EveryBoiWeBuildIsaWormBoi.
We believe that this attack is the work of the same threat actor behind the mini Shai-Hulud campaign. The IOC structure is consistent with that operation: the malicious commit messages follow the same Dune-themed naming convention, with this campaign using the prefix EveryBoiWeBuildIsAWormyBoi to distinguish it from the original Mini Shai-Hulud attack.
- lightning version 2.6.2
- lightning version 2.6.3
Semgrep has an advisory and rule to cover this so you can find to check your projects.
Trigger a new scan if you haven't recently on your projects.
Check the advisories page to see if any projects have installed these package versions recently: https://semgrep.dev/orgs/-/advisories
Check your dependency filter for matches. If you see “No matching dependencies” you are not actively using the malicious dependency in any of your projects. If you did match, additional advice on remediation and indicators of compromise are below.
If you matched: Also audit your repositories for the injected files listed in the IOCs below (.claude/ and .vscode/ directories with unexpected contents), and rotate any GitHub tokens, cloud credentials, or API keys that may have been present in the affected environment.
For general advice about how to deal with supply chain, cool down periods; our standard advice is covered by posts: $foo compromised in $packagemanager and Attackers are Still Coming for Security Companies.
Unlike mini Shai-Hulud, which targeted npm directly, the entry point here is PyPI. The malware payload is still JavaScript, and the worm propagation happens through npm.
Once running, if the malware finds npm publish credentials, it injects a setup.mjs dropper and router_runtime.js into every package that token can publish to, sets scripts.preinstall to execute the dropper, bumps the patch version, and republishes. And any downstream developer who installs one of those packages runs the full malware on their machine, has their tokens stolen and packages wormed.
The exfiltration component shares its design with the "Mini Shai-Hulud" mechanism from their last campaign, using four parallel channels so stolen data gets out even if individual paths are blocked.
HTTPS POST to C2. Stolen data is immediately POSTed to an attacker-controlled server over port 443. The domain and path are stored as encrypted strings in the payload, making static analysis harder.
GitHub commit search dead-drop. The malware polls the GitHub commit search API for commit messages prefixed with EveryBoiWeBuildIsAWormyBoi, which carry a double-base64-encoded token in the format EveryBoiWeBuildIsAWormyBoi:<base64(base64(token))>. Once decoded, the token is used to authenticate an Octokit client for further operations.
Attacker-controlled public GitHub repo. A new public repository is created with a randomly chosen Dune-word name and the description "A Mini Shai-Hulud has Appeared", which is directly searchable on GitHub. Stolen credentials are committed as results/results--.json (base64-encoded via the API, plain JSON inside), with files over 30 MB split into numbered chunks. Commit messages use chore: update dependencies as cover.
Push to victim's own repo. If the malware obtains a ghs_ GitHub server token, it pushes stolen data directly to all branches of the victim's own GITHUB_REPOSITORY.
The malware targets credentials across local files, environment, CI/CD pipelines, and cloud providers:
Filesystem: Scans 80+ credential file paths for ghp_, gho_, and npm_ tokens (up to 5 MB per file).
Shell / Environment: Runs gh auth token and dumps all environment variables from process.env.
GitHub Actions: On Linux runners, dumps Runner.Worker process memory via embedded Python and extracts all secrets marked "isSecret":true, along with GITHUB_REPOSITORY and GITHUB_WORKFLOW.
GitHub orgs: Checks token scopes (repo, workflow) and iterates GitHub Actions org secrets.
AWS: Tries environment variables, ~/.aws/credentials profiles, IMDSv2 (169.254.169.254), and ECS (169.254.170.2) to call sts:GetCallerIdentity; additionally enumerates and fetches all Secrets Manager values and SSM parameters.
Azure: Uses DefaultAzureCredential to enumerate subscriptions and access Key Vault secrets.
GCP: Authenticates via GoogleAuth and enumerates and fetches all Secret Manager secrets.
The targeting covers local dev environments, CI runners, and all three major cloud providers. Any machine that imported the malicious package during the affected window should be treated as fully compromised.
Once inside a repository, the malware plants persistence hooks targeting two of the most common developer tools: Claude Code and VS Code. This may be among the first documented instances of malware abusing Claude Code's hook system in a real-world attack.
Claude Code: .claude/settings.json. The malware writes a SessionStart hook with matcher: "*" into the repository's Claude Code settings, pointing to node .vscode/setup.mjs. It fires every time a developer opens Claude Code in the infected repo — no tool use or user action required beyond launching the session.
VS Code: .vscode/tasks.json. A parallel hook targets VS Code users via a runOn: folderOpen task that runs node .claude/setup.mjs every time the project folder is opened.
The dropper: setup.mjs. Both hooks invoke setup.mjs, a self-contained Bun runtime bootstrapper. If Bun isn't installed, it silently downloads bun-v1.3.13 from GitHub releases, handling Linux x64/arm64/musl, macOS x64/arm64, and Windows x64/arm64. It then executes .claude/router_runtime.js (the full 14.8 MB payload) and cleans up from /tmp.
Bonus payload: malicious GitHub Actions workflow. If the malware holds a GitHub token with write access, it pushes a workflow named Formatter to the victim's repository. On every push it dumps all repository secrets via ${{ toJSON(secrets) }} and uploads them as a downloadable Actions artifact named format-results. The actions are pinned to specific commit SHAs to appear legitimate.
Any repository that received the infected lightning package during CI and held a token with write access should be audited for these files.
Look for a few indicators:
A commit message prefixed with EveryBoiWeBuildIsAWormyBoi (dead-drop token carrier, searchable via GitHub commit search)
GitHub repos with description: "A Mini Shai-Hulud has Appeared" (attacker exfil repos, directly searchable)
- lightning@2.6.2
- lightning@2.6.3
_runtime/start.py | Python loader that initializes the payload on import |
runtime/routerruntime.js | Obfuscated JavaScript payload (14.8 MB, Bun runtime) |
_runtime/ | Directory added to the malicious package versions |
.claude/router_runtime.js | Malware copy injected into victim repos |
.claude/settings.json | Claude Code hook config injected into victim repos |
.claude/setup.mjs | Dropper injected into victim repos |
.vscode/tasks.json | VS Code auto-run task injected into victim repos |
.vscode/setup.mjs | Dropper injected into victim repos |