Actually, some software is running the water-heater/heat-pump system in my basement. There is a small blue-lit screen; it keeps logs of consumed electricity/produced heat and can draw small histograms. Of course there is a smart option to make it internet-connected, the kind of functionality I'm glad is disabled by default and not required for the thing to operate. If possible, I'll never upgrade it. "Release, then go back to the cave" definitely has its place for many actual physical products in the world.
I'll deal with enough WTF software security in my day job over the course of my career. Sparing myself the cognitive load of some appliance being turned into a brick, because the company that produced it or some script kiddy on AI steroids decided that was desirable, leaves more time to explore whatever else the cosmos allows.
The problem is that the very same tools, I expect, are behind the supply chain attacks that seem to be particularly notorious recently. No matter where you turn, there's an edge to cut you on that one.
Seems supported by this as well: https://www.first.org/blog/20260211-vulnerability-forecast-2...
Interesting that it's been higher than forecast since 2023. Personally I'd expect that trend to continue given that LLMs both increase bugs written as well as bugs discovered.
Linux devs keep making that point, but I really don't understand why they expect the world to embrace that thinking. You don't need to care about the vast majority of software defects in Linux, save for the once-in-a-decade filesystem corruption bug. In fact, there is an incentive not to upgrade when things are working, because it takes effort to familiarize yourself with new features, decide what should be enabled and what should be disabled, etc. And while the Linux kernel takes compatibility seriously, most distros do not and introduce compatibility-breaking changes with regularity. Binary compatibility is non-existent. Source compatibility is a crapshoot.
In contrast, you absolutely need to care about security bugs that allow people to run code on your system. So of course people want to treat security bugs differently from everything else and prioritize them.
Was software made before 2000 better? And, if so, was it because of better testing or lower complexity?
Hopefully these same tools will also help catch security bugs at the point they're written. Maybe one day we'll reach a point where the discovery of new, live vulnerabilities is extremely rare?
Then again, I'm a known crank and aggressive cynic, but you never really see any gathered data backing these points up.
Oh my sweet summer child.
This is some seriously delusional cope from someone who drank the entire jug of kool-aid.
I'd love to be proven wrong, but the current trajectory is plain as day from current outcomes: everything is getting worse, everyone is getting overwhelmed, we are under attack even more, the attacks are getting substantially more sophisticated, and the blast radius is much bigger.
I suspect it's just an excuse for Linux's generally poor security track record.
Most software written at companies is shit. It’s whatever garbage someone slapped together and barely got working, and then they had to move onto the next thing. We end up squashing a never ending list of bugs because in a time-limited world, new features come first.
But that only really applies when the cost of good software dwarfs that of barely-functioning software. And when the marginal cost of polishing something is barely more than what it took to write in the first place? There's no reason not to take a few passes, get all the bugs out, and polish things up. Right now, AI can (and will) write an absolutely exhaustive set of test cases covering far more than a human would ever have the motivation to write. And it will get better.
If a company can ship quality software in essentially the same time as it can ship garbage, the incentives will change rapidly. At least I hope so.
It was possible to work with Ada as early as 1980 wherever a high guarantee of reliability was taken seriously, for example.
And not everyone is Knuth, with a personal human secretary at a well-funded, world-top institution.
In 2000, Microsoft, which was already sitting on an insanely high mountain of resources, released Windows Millennium Edition. Ask your greybeard neighbour if you are too young to remember. It is the last MS-DOS-based Windows version, and so represents the pinnacle of the Windows 9x line, before the big switch to the NT lineage.
As always, the greatest advantage of the good old times is selective memory. After all, the people who can remember know they survived the era, while the present and future have never offered much certainty on that point.
That's what syzbot / syzkaller does, as mentioned in the article, with somewhat similar results to the AI-fuzzing that they've been experiencing recently.
The issue that Linux maintainers have in general is that there are so many of these "strict correctness and safety" bugs in the Linux codebase that they can't fix them all at once, and they have no good mechanism to triage "which of these bugs is accessible to create an exploit."
This is also the argument by which most of their bugs become CVEs; in lieu of the capability to determine whether a correctness bug is reachable by an attacker, any bug could be an exploit, and their stance is that it's too much work to decide which is which.
https://www.anthropic.com/news/mozilla-firefox-security
?
If the updated code is not open source, you are blindly trusting that some other kind of remote code execution didn't just happen without your knowledge.
Better, because yes: QA actually existed and was important at many companies. QA could "stop ship" before the final master was pressed if they found something (usually in games, hehe) "game-breaking". If you search around on Folklore or other historical sites you can find examples of this: programmers working all night with the shipping manager hovering over them, ready to grab the disk/disc and run to the warehouse.
HOWEVER, updates did exist, both for bugs and for features, because programmers weren't perfect (or weren't spending space-shuttle levels of effort on "perfect code"; even Voyager can get updates, IIRC). Look at DooM for an example: released on BBSes, and there were various versions even then, and that's 1994 or so.
But it was the "worst" in that the frameworks and code were simply not as advanced as today - you had to know quite a bit about how everything worked, even as a simple CRUD developer. Lots of protections we take for granted (even in "lower level" languages like C) simply didn't exist. Security issues abounded, but people didn't care much because everything was local (who cares if you can r00t your own box) - and 2000 was where the Internet was really starting to take off and everything was beginning to be "online" and so issues were being found left and right.
For example, you had to know which Win32 functions caused ring-3 -> ring-0 transitions because those transitions could be incredibly costly. You couldn't just "find the right function" and move on. You had to find the right function that wouldn't bring your app (and entire system) to its knees.
I specifically remember hating my life whenever we ran into a KiUserExceptionDispatcher [0] issue, because even something as simple as an exception could kill your app's performance.
Additionally, we didn't get to just patch flaws as they arose. We either had to send out patches on floppy disks, post them to BBSs, or even send them to PC Magazine.
[0]: https://doar-e.github.io/blog/2013/10/12/having-a-look-at-th...
At the time of release, yes. They had to ensure the software worked before printing CDs and floppies. Nowadays they release buggy versions that users essentially test for them.
Programs didn't auto-save and regularly crashed. It was extremely common to hear someone talk about losing hours of work. Computers regularly blue-screened at random. Device drivers weren't isolated from the kernel, so you could easily buy a dongle or something that single-handedly destabilized your system. Viruses regularly brought the white-collar economy to its knees. Computer games that were just starting to come online and be collaborative didn't do any validation of what the client sent (this is still sometimes true now, but it was the rule back then).
Now, it's anti-virus (Crowdstrike) that does that. I don't think many or any virus or ransomware has ever had as big an impact at one time as Crowdstrike did. Maybe the ILOVEYOU worm.
In any case, some of the software from before 2000 was definitely better than today's: it was absolutely foolproof, in the sense that nothing you could do would cause a crash, corrupted data, or any other kind of unpredictable behavior.
However, the computers most people had access to at that time had only single-core CPUs. Even if you used a preemptive multitasking operating system and a heavily multi-threaded application, executing it on a single-core CPU was unlikely to expose the subtle race-condition bugs that a multi-core CPU might have exposed.
While nowadays there exists no standard operating system that I fully trust to never fail in any circumstance, unlike before 2003, I wonder whether this is caused by a better quality of the older programs or by the fact that it is much harder to implement software concurrency correctly on systems with hardware parallelism.
This was the big thing. There were tons of bugs. Not really bugs but vulnerabilities. Nothing a normal user doing normal things would encounter, but subtle ways the program could be broken. But it didn't matter nearly as much, because every computer was an island, and most people didn't try to break their own computer. If something caused a crash, you just learned "don't do that."
Even so, we did have viruses that were spread by sharing floppy disks.
Bad old days indeed!
Huh. Direct debugging, in assembly. At that point, why not jump down to machine code?
I trust that Linux has a process. I do not believe it is perfect. But it gives me better assurance than downloading random packages from PyPI (though I believe that the most recent release of any random package on PyPI is still more likely safe than not; it's just a numbers game).
Nowadays those bugs still exist but a vast majority of bugs are security issues - things you have to fix because others will exploit them if you don't.
I wouldn't go that far. As soon as you went online all bets were off.
In the 90s we had java applets, then flash, browsers would open local html files and read/write from c:, people were used to exchanging .exe files all the time and they'd open them without scrutiny (or warnings) and so on. It was not a good time for security.
Then dial-up was so finicky that you could literally disconnect someone by sending them a ping packet. Then came winXP, and blaster and its variants and all hell broke loose. Pre SP2 you could install a fresh version of XP and have it pwned inside 10 minutes if it was connected to a network.
Servers weren't any better, ssh exploits were all over the place (even The Matrix featured a real ssh exploit) and so on...
The only difference was that "the scene" was more about the thrill, the boasting, and learning and less about making a buck out of it. You'd see "x was here" or "owned by xxx" in page "defaces", instead of encrypting everything and asking for a reward.
Do you remember the CSRSS Backspace Bug? [0]
A simple: printf("hung up\t\t\b\b\b\b\b\b"); from ring-3 would result in a BSOD. That was a pretty major embarrassment.
After retiring, I started volunteering my time to mentor CS students at two local universities. I work with juniors and seniors who have no idea what "heap memory" is because, for the most part, they don't need to know. For many developers, the web browser is the "operating system".
I absolutely love using Python because I don't have to worry about the details that were major issues back in the 90s. But, at the same time, when I run into an issue, I fully understand what the operating system is doing and can still debug it down to assembly if need be.
What's the saying? Given many eyes, all bugs are shallow? Well, here are some more eyes.
Also auto-save is a mixed bag. With manual save, I was free to start editing a document and then realize I want to save it as something else, or just throw away my changes and start over. With auto-save, I've already modified my original. It took me quite a while to adjust to that.
There's no way the AI is understanding million-LoC codebases a priori now. We've tried that already; it failed. What it is doing now is setting up its own extremely powerful test harnesses, getting the information, and testing efficiently.
Sure, its semantic search is already strong, but the real lesson that we've learned from 2025 is that tooling is way more powerful.
That's cool! Speaking as someone who's dabbled in kernel dev for his job, I've always wanted to learn how kernel devs reliably test stuff, but it seemed hard: real, varied hardware, not just manual testing.
Honestly, AI has only helped me become a better SWE because no one else has the time or patience to teach me.
Let's bring a bit of nuance: there's a difference between mindless drivel (e.g. LinkedIn influencer posts, spammed issues that are just LLMs making mistakes) and using LLMs to find/build useful things.
So we now have a new codebase in an undefined language which still has memory bugs.
This is progress.
AI tools have caused me to trip up a few times too, when I fail to notice how many changes haven't been checked into git and then the tool obliterates some of its work, and a struggle ensues to partially revert it (there are ways, both in git and in the AI's temporary files, etc.). It's user error, but it is also a new kind of occasional mistake I have to adapt to avoid, much as when auto-save started to become universal.
Good developers only write unsafe Rust when there is good reason to. There are a lot of bad developers who add unsafe any time they don't understand a Rust error, and then don't take it out when it doesn't fix the problem (hopefully just a minority, but I've seen it).
Which obviously isn't how it works in practice, just like how C doesn't delete all the files on your computer when your program contains any form of signed integer overflow, even though it technically could as that is totally allowed according to the language spec.
One feasible approach is to use "storytelling" as described here: https://www.ralfj.de/blog/2026/03/13/inline-asm.html That's talking about inline assembly, but in principle any other unsafe feature could be similarly modeled.
After all, if humans were able to routinely write bug-free code, why even worry about unsoundness and UB in C? Surely having developers write safe C code would be easier than trying to get a massive ecosystem to adopt a completely new and not exactly trivial programming language?
JetBlue and Delta use ViaSat. I only fly Delta for the most part and ViaSat was available on all domestic routes I’ve flown except for the smaller A900 that I take from ATL to Southwest GA (50 minute flight). Then I use my free unlimited 1 hour access through T-Mobile with GoGo ground based service.
Slop is a function of how the information is presented and how the tools are used. People don't mind that you used an LLM if you don't make them notice; they care when you send them a bunch of bullshit with 5% of value buried inside it.
If you're reading something and you can tell an LLM wrote it, you should be upset. It means the author doesn't give a fuck.
This is in the linked story: they're seeing increased numbers of duplicate findings, meaning that whatever valid bugs showboating LLM-enabled Good Samaritans are finding, quiet LLM-enabled attackers are also finding.
People doing software security are going to need to get over the LLM agent snootiness real quick. Everyone else can keep being snooty! But not here.
I took him to be distinguishing between (1) just reading the code/docs and reasoning about it, and (2) that + crafting and running tests.
It's not okay to foist work onto other people because you don't think LLM slop is a problem. It is absolutely a problem, and no amount of apologizing and pontificating is going to change that.
Grow up and own your work. Stop making excuses for other people. Help make the world better, not worse. It's obvious that LLMs can be useful for this purpose, so people should use them well and make the reports useful. Period.
https://daniel.haxx.se/blog/2025/07/14/death-by-a-thousand-s...
See the list at the bottom of the post for examples.
Nobody is saying there's no such thing as a slop report. Not only are there, but slop vulnerability reports as a time-consuming annoying phenomenon predate LLM chatbots by almost a decade. There's a whole cottage industry that deals with them.
Or did. Obsolete now.