While the cause is noble, the medical detection of child abuse faces serious issues with undetected and unacknowledged false positives [2], since ground truth is almost never knowable. The prevailing idea is that certain medical findings are considered proof beyond reasonable doubt of violent abuse, even without witnesses or confessions (denials are extremely common). These beliefs rest on decades of medical literature regarded by many as low quality because of methodological flaws, especially circular reasoning (patients are classified as abuse victims because they show certain medical findings, and then the same findings are found in nearly all those patients—which hardly proves anything [3]).
I raise this point because, while not exactly software bugs, we are now seeing black-box AIs claiming to detect child abuse with supposedly very high accuracy, trained on decades of this flawed data [4, 5]. Flawed data can only produce flawed predictions (garbage in, garbage out). I am deeply concerned that misplaced confidence in medical software will reinforce wrongful determinations of child abuse, including both false positives (unjust allegations potentially leading to termination of parental rights, foster care placements, imprisonment of parents and caretakers) and false negatives (children who remain unprotected from ongoing abuse).
[1] https://hs.memberclicks.net/executive-committee
[2] https://news.ycombinator.com/item?id=37650402
[3] https://pubmed.ncbi.nlm.nih.gov/30146789/
[5] https://www.sciencedirect.com/science/article/pii/S002234682...
> Throughout the 80s and 90s there was just a feeling in medicine that computers were dangerous <snip> This is why, when I was a resident in 2002-2006 we still were writing all of our orders and notes on paper.
I was briefly part of an experiment with electronic patient records in an ICU in the early 2000s. My job was to basically babysit the server processing the records in the ICU.
The entire staff hated the system. They hated having to switch to computers (this was many years pre-ipad and similarly sleek tablets) to check and update records. They were very much used to writing medications (what, when, which dose, etc) onto bedside charts, which were very easy to consult and very easy to update. Any kind of dataloss in those records could have fatal consequences. Any delay in getting to the information could be bad.
This was *not* just a case of doctors having unfounded "feelings" that computers were dangerous. Computers were very much more dangerous than pen and paper.
I haven't been involved in that industry since then, and I imagine things have gotten better since, but still worth keeping in mind.
If you only take one thing away from this article, it should be this one! The Therac-25 incident is a horrifying and important part of software history, it's really easy to think type-systems, unit-testing and defensive-coding can solve all software problems. They definitely can help a lot, but the real failure in the story of the Therac-25 from my understanding, is that it took far too long for incidents to be reported, investigated and fixed.
There was a great Cautionary Tales podcast about the device recently[0], one thing mentioned was that, even aside from the catasrophic accidents, Therac-25 machines were routinely seen by users to show unexplained errors, but these issues never made it to the desk of someone who might fix it.
[0] https://timharford.com/2025/07/cautionary-tales-captain-kirk...
Personally, I've found even the latest batch of agents fairly poor at embedded systems, and I shudder at the thought of giving them the keys to the kingdom to say... a radiation machine.
I bet some readers are thinking that the developer that caused this tragedy retired with the millions he earned, maybe sailed his yacht to his Caribbean mansion. But the $300K FAANG salaries and multi-million stock options for senior developers represents the last decade or two. In the 1980's, developers were paid poorly and commanded little respect. The heroes in tech companies that sold expensive devices were the salesmen back then. The commission on the sale of a single Therac-25 probably exceeded the developer's salary.
All of the following would indicate that this developer, no matter how senior or capable, was still a low-paid schlub:
- It's Canada, so automatically 20% lower salaries than in the U.S. (AECL is in Canada, so it's a good bet that the developer was Canadian.)
- It's the 1980s, so pre-web, pre-smartphones, pre-Google/Amazon, and developers had little recognition and low demand.
- It's government, known to pay poorly for developers. (AECL is a government-owned corporation.)
- It's mostly embedded software. Even though embedded software can be incredibly complex and life-critical, it's the least visible, so it's among the lower paid areas of software engineering (even today).
For 1986, I would put his salary at $30-50K Canadian, or converted to U.S. dollars at that time would be $26-43K U.S., and inflation adjusted would be $78-129K U.S. today. And no stock options.
The previous devices had hardware interlocks. So if the software glitched, it was just an annoying glitch - nobody got zapped. But mature software gets trusted, so they removed the hardware interlock as redundant. And then the annoying glitches became fatal. Total miscommunication. The people cost-reducing the hardware interlock only saw mature, trustworthy software. The people living with the glitches only saw them as annoying, but harmless. And then, disaster.
That gets actively dangerous; a lot of more recent safety mishaps are more of the variety of "processes were followed, but things went hilariously off the rails and no one noticed and spoke up".
Culture and expertise matter just as much if not more, especially today now that we all (in theory) should understand source control, testing, safer languages, etc.
I think Admiral Rickover's methods apply just as much today, and applying that kind of thinking would fill major gaps in a lot of organizations - he emphasized good communication, a sense of responsibility, and thinking on your feet, and his safety record is unmatched.
I think aviation also approaches process a bit better - by having much of it be more informal, less rigid checklists, it doesn't encourage people to suspend judgement so much.
There's also the Tankship Tromedy, which really emphasizes the engineering legwork of just chasing down, understanding and fixing every last failure mode you can find.
https://www.dieselduck.info/library/08%20policies/2006%20The...
> That standard [IEC 62304] is surrounded by other technical reports and guidances recognized by the FDA, on software risk management, safety cases, software validation. And I can tell you that the FDA is very picky, when they review your software design and testing documentation. For the first version and for every design change.
> That’s good news for all of us. An adverse event like the Therac 25 is very unlikely today.
This is a case where regulation is a good thing. Unfortunately I see a trend lately where almost any regulation is seen as something stopping innovation and business growth. There are room for improvements and some areas are over regulated, but we don't want a "DOGE" chainsaw to regulations without knowing what the consequences are.
[1]: N. G. Leveson and C. S. Turner, "An investigation of the Therac-25 accidents," in Computer, vol. 26, no. 7, pp. 18-41, July 1993.
[2]: Nancy Leveson. Safeware: System Safety and Computers. Addison-Wesley, 1995.
[3]: http://sunnyday.mit.edu/papers/therac.pdf
[4]: https://web.mit.edu/6.033/2014/wwwdocs/papers/therac.pdf
Which makes me very nervous about AI generated code and people who don’t clam human authorship. The bug that creeps in where we scapegoat the AI isn’t gonna cut it in a safety situation.
They can't. There was a single developer, he left, no tests existed, no one understood the mess to confidently make changes. At this point you can either lie your way through the regulators or scrap the product altogether.
I've seen this kind of devs and companies running their software in regulated industries like in the therac incident, just now we are in the year 2025. I left because i understood that it's a criminal charge waiting to happen.
Critical issues happen with customers, blame gets shifted, a useless fix is proposed in the post mortem and implemented (add another alert to the waterfall of useless alerts we get on call), and we continue to do ineffective testing. Procedural improvements are rejected by the original authors who were then promoted and want to keep feeling like they made something good and are now in a position to enforce that fiction.
So IMO the lesson here isn't that everyone should focus on culture and process, it's that you won't have the right culture and process and (apparently) laws and regulation can overcome the lack of culture and process.
Some of the controls were 'born' in a world of hardware interlocks, and so the engineers used the frame of mind where hardware interlocks exist.
Some time later, the interlocks were replaced with software controls. Since everything had worked before, all the software had to do was what worked before.
But it is VERY difficult to challenge all of your assumptions about what "working" means.
---
This is also a good reminder that work is done by people and teams, not corporations. That is - just because somebody knows the fine details, that does not mean that the corporation knows the fine details.
Having written and validated both FDA and CLIA software, I'd suggest that process is never sufficient.
Plenty of well-meaning people will create and follow incomplete plans and hand-wave away issues when they sign off -- particularly people who gravitate towards rule-based, formulaic work in a hierarchy.
You need people both capable of and willing to seriously question whether proof is really proof, and who will stand up for some random patient in the distant future over their boss and colleagues on a deadline -- and yet they cannot be oppositional or egotistical, and must have deep insight into the subject matter.
It's really, really hard to find those people.
Can't imagine that radiation might be a factor here...
Supposedly there are mechanical switches that prevent that, but evidently "modern" microwaves can control the gun through the logic board.
The engineering failures that led to this, from conceptual to design to internal control, boggle my mind. I'm not even sure where to send a complaint or if it would result in any kind of compensation. Because billion dollar corporations know that they'll never have to face any kind of corporate death penalty because they're protected by limited liability. So we'll just buy another $150 microwave instead.
Are smaller companies better at engineering safety? Evidently not.
>One failure occurred when a particular sequence of keystrokes was entered on the VT100 terminal that controlled the PDP-11 computer: If the operator were to press "X" to (erroneously) select 25 MeV photon mode, then use "cursor up" to edit the input to "E" to (correctly) select 25 MeV Electron mode, then "Enter", all within eight seconds of the first keypress and well within the capability of an experienced user of the machine, the edit would not be processed and an overdose could be administered. These edits were not noticed as it would take 8 seconds for startup, so it would go with the default setup
Kinda reminds me how everything is touchscreen nowadays from car interfaces to industry critical software
I was taught about this in engineering school, as part of a general engineering course also covering things like bathtub reliability curves and how to calculate the number of redundant cooling pumps a nuclear power plant needs. But it's a long time since I was in college.
Is this sort of thing still taught to engineers and developers in college these days?
> A commission attributed the primary cause to generally poor software design and development practices, rather than singling out specific coding errors.
Which to me reads as "this entire codebase was so awful that it was bound to fail in some or other way".
> Taking a couple of programming courses or programming a home computer does not qualify anyone to produce safety-critical software. Although certification of software engineers is not yet required, more events like those associated with the Therac-25 will make such certification inevitable. There is activity in Britain to specify required courses for those working on critical software. Any engineer is not automatically qualified to be a software engineer — an extensive program of study and experience is required. Safety-critical software engineering requires training and experience in addition to that required for noncritical software.
After 32 years, this didn't go the way the report's authors expected, right?
If not, why not hardware limit the power input to the machine, so even if the software completely failed, it would not be physically capable of delivering a fatal dose like this?
In my experience, hardware people really dis software. It's hard to get them to take it seriously.
When something like this happens, they tend to double down on shading software.
I have found it very, very difficult to get hardware people to understand that software has a different ruleset and workflow, from hardware. They interpret this as "cowboy software," and think we're trying to weasel out of structure.
The Therac-25 was a radiation therapy machine built by Atomic Energy Canada Limited in the 1980s. It was the first to rely entirely on software for safety controls, with no hardware interlocks. Between 1985 and 1987, at least six patients received massive overdoses of radiation, some fatally, due to software flaws.
One major case in March 1986 at the East Texas Cancer Center involved a technician who mistyped the treatment type, corrected it quickly, and started the beam. Because of a race condition, the correction didn’t fully register. Instead of the prescribed 180 rads, the patient was hit with up to 25,000 rads. The machine reported an underdose, so staff didn’t realize the harm until later.
Other hospitals reported similar incidents, but AECL denied overdoses were possible. Their safety analysis assumed software could not fail. When the FDA investigated, AECL couldn’t produce proper test plans and issued crude fixes like telling hospitals to disable the “up arrow” key.
The root problem was not a single bug but the absence of a rigorous process for safety-critical software. AECL relied on old code written by one developer and never built proper testing practices. The scandal eventually pushed regulators to tighten standards. The Therac-25 remains a case study of how poor software processes and organizational blind spots can kill—a warning echoed decades later by failures like the Boeing 737 MAX.
Take this post-mortem here [1] as a great warning and which also highlights exactly what could go horribly wrong if the LLM misreads comments.
What's even more scarier is each time I stumble across a freshly minted project on GitHub with a considerable amount of attention, not only it is 99% vibe-coded (very easy to detect) but it completely lacks any tests written for it.
Makes me question the ability of the user prompting the code in the first place if they even understand how to write robust and battle-tested software.
[0] https://news.ycombinator.com/item?id=44764689
[1] https://sketch.dev/blog/our-first-outage-from-llm-written-co...
https://www.medicaleconomics.com/view/what-if-emrs-were-clas...
Another interesting fact mentioned in the podcast is that the earlier (manually operated) version of the machine did have the same fault. But it also had a failsafe fuse that blew so the fault never materialized. Excellent demonstration of the Swiss Cheese Model: https://en.wikipedia.org/wiki/Swiss_cheese_model
I am a developer and whatever software system I touch breaks horribly. When my family wants to use an ATM, they tell me to stand at a distance, so that my aura doesn't break things. This is why I will not get into a self-driving car in the foreseeable future — I think we place far too much confidence in these complex software systems. And yet I see that the overwhelming majority of HN readers are not only happy to be beta-testers for this software as participants in road traffic, but also are happy to get in those cars. They are OK with trusting their life to new, complex, poorly understood and poorly tested software systems, in spite of every other software system breaking and falling apart around them.
[anticipating immediate common responses: 1) yes, I know that self-driving car companies claim that their cars are statistically safer than human drivers, this is beyond the point here. One, they are "safer" largely because they drive so badly that other road participants pay extra attention and accommodate their weirdness, and two, they are still new, complex and poorly understood systems. 2) "you already trust your life to software systems" — again, beyond the point, not quite true as many software systems are built to have human supervision and override capability (think airplanes), and others are built to strict engineering requirements (think brakes in cars) while self-driving cars are not built that way.]
I have years of experience at Boeing designing aircraft parts. The guiding principle is that no single failure should cause an accident.
The way to accomplish this is not "write quality software", nor is it "test the software thoroughly". The idea is "assume the software does the worst possible thing. Then make sure that there's an independent system that will prevent that worst case."
For the Therac-25, that means a detector of the amount of radiation being generated, which will cut it off if it exceeds a safe value. I'd also add that the radiation generator be physically incapable of generating excessive radiation.
As far as I know, the Therac-25 incidents were reasonably honest mistakes.
I think in this case, the thought process was based on the experience with older, electro-mechanical machines where the most common failure modern was parts wearing out.
Since software can, indeed, not "wear out", someone made the assumption that it was therefore inherently more reliable.
You can't teach people to care.
Which is not to say that software hasn't killed people before (Horizon, Boeing, probably loads of industrial accidents and indirect process control failures leading to dangerous products, etc, etc). Hell, there's a suspicion that austerity is at least partly predicated on a buggy Excel spreadsheet, and with about 200k excess deaths in a decade (a decade not including Covid) in one country, even a small fraction of those being laid at the door of software is a lot of Theracs.
AI will probably often skate away from responsibility in the same way that Horizon does: by being far enough removed and with enough murky causality that they can say "well, sure, it was a bug, but them killing themselves isn't our fault"
I also find AI copilot things do not work well with embedded software. Again, people YOLOing embedded isn't new, but it might be about to get worse.
https://en.wikipedia.org/wiki/Maneuvering_Characteristics_Au...
I mean even simple crud web apps where the data models are more complex, and where the same data has multiple structures, the LLMs get confused after the second data transformation (at the most).
E.g. You take in data with field created_at, store it as created_on, and send it out to another system as last_modified.
Analog systems do not behave like computers.
By focusing on particular errors, there's the possibility you'll think "problem solved".
By focusing on process, you hope to catch mistakes as early as possible.
Two decades ago there was a lot of talk about turning software development into a structured engineering discipline, but that plan seems to have largely been abandoned.
It's worth noting that there was one developer who wrote all of this code. They left AECL in 1986, and thankfully for them, no one has ever revealed their identity. And while it may be tempting to lay the blame at their feet—they made every technical choice, they coded every bug—it would be wildly unfair to do that.
> It's the end result of a process
In my experience, it's even more than that. It's a culture.
Wrong, any software failure can have huge consequences in someone's life, or company, by preventing some critical flow to take place, corrupting data related to someone's life, professional or medical record, preventing a payment on some specific goods that had to be acquired on that moment or never,....
Good product cultures are ones where natural communication between the field and engineering would mean issues get reported back up and make their way to the right people. No process will compensate for people not giving a shit.
The core takeaway developers should have from Therac-25 is not that this happens just on "really important" software, but that all software is important, and all software can kill, and you need to always care.
i am pretty confident they wont let claude touch if it they dont even let deterministic automations run...
that being said, maybe there are places. but this is always the sentiment i got. no automating, no scanning, no patching. device is delivered certified and any modifications will invalidate that. any changes need to be validated and certified.
its a different world that makin apps thats for sure.
not to say mistakes arent made and change doesnt happen, but i dont think people designing medical devices will be going yolo mode on their dev cycle anytime soon... give the folks in safety critical system engineering some credit..
Try quickly typing 1+ 2 + 3 into the iOS 11 Calculator (reddit.com)
886 points by danso on Oct 24, 2017 | hide | past | favorite | 480 comments
https://news.ycombinator.com/item?id=15538666...this _exact_ same failure mode in a "less" critical domain (eg: literally your most frequently used "pocket calculator"), unless you're using the calculator for Important Things(tm).
The mechanical interlock essentially functioned as a limit outside of the control system. So you should build an ai system the same way- enforcing restrictions on the security agency from outside the control of the ai itself. Of course that doesn’t happen and devs naively trust that the ai can make its own security decisions.
Another lesson from that era we are re learning- in-band signaling. Our 2025 version of the “blue box” is in full swing. Prompt injection is just a side effect of the fact that there is no out of band instruction mechanism for llms.
Good news is - it’s not hard to learn the new technology when it’s just a matter of rediscovering the same security issues with a new name!
Because the alternative isn't bug-free driving -- it's a human being. Who maybe didn't sleep last night, who might have a heart attack while their foot is on the accelerator, who might pull over and try to sexually assault you.
You don't need to "place confidence in these complex software systems" -- you just need to look at their safety stats vs e.g. regular Uber. It's not a matter of trust; it's literally just a matter of statistics, and choosing the less risky option.
But this opens up a can of worms, as suddenly you have to deal with every edge case, test for every possible input, etc. This was before fuzz testing, too. Each line of defensive coding, every carefully crafted comment, etc all added to the maintenance burden; I'd even go as far as claim it increased uncertainty, because what if I forgot something?
15 years later and it feels like I'm doing far less advanced stuff (although in hindsight what I did then wasn't all that, but I made it advanced). One issue came up recently; a generic button component would render really tall if no label was given, which happened when a CMS editor did not fill in a label in an attempt to hide it. The knee-jerk response would be to add a check that disallows empty labels, or to not render the button if no label is given, or to use a default button label.
But now I think I'll look at the rendering bug and just... leave the rest. A button with an empty label isn't catastrophic. Writing rules for every possible edge case (empty label, whitespaces, UTF-8 characters escaping the bounds, too long text, too short text, non-text, the list goes on) just adds maintenance and complexity. And it's just a button.
The Therac-25 was meant to have a detector of radiation levels to cut things off if a safe value was exceeded, but it didn't work. It could obviously have been improved, but you always have the possibility that "what if our check doesn't work?".
In the case of the Therac-25, if the first initial failures had been reported and investigated, my understanding is (I should make clear I'm not an expert here) it would have made the issues apparent, and it could have been recalled before any of the fatal incidents happened.
In a swiss cheese model of risk, you always want as many layers as possible, so your point about a detector fits in there, but the final layer should always be if an incident does happen, and something gets past all our checks, how can we make it likely that it gets investigated fully by the right person.
I will say that me pretending to know how to best design medical equipment as a web developer is pretty full of myself haha. Highly doubt whatever I'm spouting is a new idea. The idea of working on this sort of high-reliability + high-recoverability systems seems really interesting though!
Bureaucracy being (per Graeber 2006) something like the ritual where by means of a set of pre-fashioned artifacts for each other's sake we all operate at 2% of our normal mental capacities and that's how modern data-driven, conflict-averse societies organize work and distribute resources without anyone being able to have any complaints listened to.
>Bureaucracies public and private appear—for whatever historical reasons—to be organized in such a way as to guarantee that a significant proportion of actors will not be able to perform their tasks as expected. It also exemplifies what I have come to think of the defining feature of a utopian form of practice, in that, on discovering this, those maintaining the system conclude that the problem is not with the system itself but with the inadequacy of the human beings involved.
Most places where a computer system is involved in the administration of a public service or something of the caliber, has that been a grassroots effort, hey computers are cool and awesome let's see what they change? No, it's something that's been imposed in the definitive top-down manner of XX century bureaucracies. Remember the cohort of people who used to become stupid the moment a "thinking machine" was powered within line of sight (before the last uncomputed generation retired and got their excuse to act dumb for the rest of it)? Consider them in view the literally incomprehensible number of layers that any "serious" piece of software consists of; layers which we're stuck producing more of, when any software professional knows the best kind of software is less of it.
But at least it saves time and the forest, right? Ironically, getting things done in a bureaucratic context with less overhead than filling out paper forms or speaking to human beings, makes them even easier to fuck up. And then there's the useful fiction of "the software did it" that e.g. "AI agents" thing is trying to productize. How about they just give people a liability slider in the spinup form, eh, but nah.
Wanna see a miracle? A miracle is when people hype each other into pretending something impossible happened. To the extent user-operated software is involved in most big-time human activities, the daily miracle is how it seems to work well enough, for people to be able to pretend it works any good at all. Many more than 3 such cases. But of course remembering the catastrophal mistakes of the past can be turned into a quaint fun-time activity. Building things that empower people to make less mistakes, meanwhile, is a little different from building artifacts for non-stop "2% time".
Unfortunately Computer Science is still in its too-cool-for-school phase, see OpenAI being sued over recently encouraging a suicidal teenager to kill themself. You'd think it would be common sense for that to be a hard stop outside of the LLM processing the moment a conversation turns to subjects like that, but nope.
http://www0.cs.ucl.ac.uk/staff/a.finkelstein/papers/lascase....
Killing 20 innocents and one Hamas member is not a bug - it is callous, but that's a policy decision and the software working as intended. But when it is a false positive (10% of the time), due to inadequate / outdated data and inadequate models, that could reasonably classified as a bug - so all 21 deaths for each of those bombings would count as deaths caused by a bug. Apparently (at least earlier versions) of Gospel were trained on positive examples that mean someone is a member of Hamas, but not on negative examples; other problems could be due to, for example, insufficient data, and interpolation outside the valid range (e.g. using pre-war data about, e.g. how quickly cell phones are traded, or people movements, when behaviour is different post-war).
I'd therefore estimate that deaths due to classification errors from those systems is likely in the thousands (out of the 60k+ Palestinian deaths in the conflict). Therac-25's bugs caused 6 deaths for comparison.
https://www.androidauthority.com/psa-google-pixel-911-emerge...
The patriot missile system used floating point for time, so as uptime extended the clock became more and more granular, eventually to the point where time skipped so far that the range gate was tripped.
The fix was being deployed earlier that year but this unit hadn't been updated yet.
https://www.cs.unc.edu/~smp/COMP205/LECTURES/ERROR/lec23/nod...
Engineers in other fields need to sign off on designs, and can be held liable if something goes wrong. Software hasn't caught up to that yet.
This is the kind of mistake that fails people out of CS101; It's obvious that the student is just manipulating symbols they don't really "get" rather than modifying code. Throwing the chinese room thought experiment at your code base is bad engineering.
When I worked at Cerner years ago (now owned by Oracle), there were rumors that the Cerner EMR still could barely handle DST* spring forward, but could not handle DST fall back (where the 01:00 hour is repeated) -- you had do preemptively switch to pen-and-paper for the hours around the switch. I assume this was because someone back in the initial database design used local time instead of UTC for some critical patient-care timestamp fields in the database, and then had a bear of a time getting reliable times out of the database during the witching hour.
* Daylight Saving Time in the USA. And yes, everyone in the USA changes non-networked clocks twice a year because of some "brilliant idea" someone shoved through Congress in 1974.
EDIT: I wonder if Cerner finally fixed it?
The other theory is there are soo many bureaucratic hoops to jump through in order to make anything in the medical space, that no one does it willingly.
> the earlier (manually operated) version of the machine did have the same fault. But it also had a failsafe fuse that blew so the fault never materialized.
#1 virtue of electromechanical failsafes is that their conception, design, implementation, and failure modes tend to be orthogonal to those of the software. One of the biggest shortcomings of Swiss Cheese safety thinking is that you too-often end up using "neighbor slices from the same wheel of cheese".
#2 virtue of electromechanical failsafes is that running into them (the fuse blew, or whatever) is usually more difficult for humans to ignore. Or at least it's easier to create processes and do training that actually gets the errors reported up the chain. (Compared to software - where the worker bees all know you gotta "ignore, click 'OK', retry, reboot" all the time, if you actually want to get anything done):
But, sadly, electromechanical failsafes are far more expensive then "we'll just add some code to check that" optimism. And PHB's all know that picking up nickles in front of the steamroller is how you get to the C-suite.
I agree with the previous poster that the feedback from the field is lacking a lot. A lot of doctors don’t report problems back because they are used to bad interfaces. And then the feedback gets filtered through several layers of sales reps and product management. So a lot of info gets lost and fixes that could be simple won’t get done.
In general when you work in medical you are so overwhelmed by documentation and regulation that there isn’t much time left to do proper engineering. The FDA mostly looks at documentation done right and less at product done right.
One of the biggest things I see in junior engineers that I mentor (working in backend high throughput, low latency, distributed systems) is not working out all of the various failure modes your system will likely encounter.
Network partitions, primary database outage, caching layer outage, increased latency ... all of these things can throw a spanner in the works, but until you've experienced them (or had a strong mentor guide you) it's all abstract and difficult to see when the happy path is right there.
I've recently entirely re-architected a critical component, and part of this was defense in depth. Stuff is going to go wrong, so having a second or even third line of defense is important.
BTW: Relevant XKCD: https://xkcd.com/2347/
https://www.theguardian.com/uk-news/2024/jan/09/how-the-post...
One member of the development team, David McDonnell, who had worked on the Epos system side of the project, told the inquiry that “of eight [people] in the development team, two were very good, another two were mediocre but we could work with them, and then there were probably three or four who just weren’t up to it and weren’t capable of producing professional code”.
What sort of bugs resulted?
As early as 2001, McDonnell’s team had found “hundreds” of bugs. A full list has never been produced, but successive vindications of post office operators have revealed the sort of problems that arose. One, named the “Dalmellington Bug”, after the village in Scotland where a post office operator first fell prey to it, would see the screen freeze as the user was attempting to confirm receipt of cash. Each time the user pressed “enter” on the frozen screen, it would silently update the record. In Dalmellington, that bug created a £24,000 discrepancy, which the Post Office tried to hold the post office operator responsible for.
Another bug, called the Callendar Square bug – again named after the first branch found to have been affected by it – created duplicate transactions due to an error in the database underpinning the system: despite being clear duplicates, the post office operator was again held responsible for the errors.
Not a "bug" per se, but texting while driving kills ~400 people per year in the US. It's a bug at some level of granularity.
To be tongue in cheek a bit, buggy JIRA latency has probably wasted 10,000 human years. Those are many whole human lives if you count them up.
I suspect that few organizations that do all that, have a process/culture of ignoring bugs in the wild -- and those that do have such complicated domains that explaining the error is hard.
Software best practices today would probably also involve sending metrics, logs, error reports, etc.
That said, it's still extremely easy get embrace a culture were unexplainable errors are ignored. Especially in a cloud environment.
KC lost over $400 million in less than an hour due to an old feature toggle and a problem with their deployment process.
I think the bit I quoted, especially if you read in the context of the article, is talking about culture. I.e. it's talking about a process that informs software development, management and sales. Things like formal proofing and type systems are the exact kind of processes that aren't what it's talking about.
I kind of agree with you though about the process/culture distinction - ultimately, if you don't have a culture where people actively care about improving reliability, any process is just gonna become a tick-box exercise to appease management.
[1]: https://en.wikipedia.org/wiki/Ariane_5#Notable_launches
I don't have the same faith in corporate leadership as you, at least not when they see potentially huge savings by firing some of the expensive developers and using AI to write more of the code.
It's not that great developers aren't necessary for software quality, more that they aren't sufficient.
It is business who requests features ASAP to cut costs and and then there are customers who don’t want to pay for „ideal software” but rather have every software for free.
Most devs and QA workers I know want to deliver best quality software and usually are gold plating stuff anyway.
Despite all the procedures and tests, the software still managed to endanger the lives of the passengers.
In the Therac-25 case, the killing was quite immediate and it would have happened even if the correct radiation dose was recorded.
Those who want to escape the office altogether, can hop on one of the company’s 600 cow-print bikes to take meetings from a treehouse, slide down a rabbit hole or grab lunch in a train car.
https://www.cnbc.com/2024/09/01/inside-epic-systems-mythical...
About a decade ago a rep from Videojet straight up lied to us about their 30W CO2 marking laser having a hardware interlock. We found out when - in true Therac-25 fashion - the laser kept triggering despite the external e-stop being active due to a bug in their HMI touch panel. No one noticed until it eventually burned through the lens cap. In reality the interlock was a separate kit, and they left it out to reduce the cost for their bid to the customer. That whole incident really soured my opinion of them and reminded me of just how bad software "safety" can get.
"The safety interlocks don't work when the operator intentionally goes out of his way to defeat them." isn't a concern. There's only so much you can do to prevent someone who's dedicated to disabling them.
"The safety interlocks fail dangerous because of an unexpected power cut." is a huge concern. What else did the manufacturer skimp on, or -worse- simply fail to understand was important to do for the safety of the operator of the device?
Blaming it on PHB's is a mistake. There were no engineering classes in my degree program about failsafe design. I've known too many engineers who were insulted by my insinuations that their design had unacceptable failure modes. They thought they could write software that couldn't possibly fail. They'd also tell me that they could safely recover and continue executing a crashed program.
This is why I never, ever trust software switches to disable a microphone, software switches that disable disk writes, etc. The world is full of software bugs that enable overriding of their soft protections.
BTW, this is why airliners, despite their advanced computerized cockpit, still have an old fashioned turn-and-bank indicator that is independent of all that software.
I've gone off Kyle Hill after a lot of people pointed out that he was promoting a scam (BetterHelp) on his video about fraud and his response was just to tell people to deal with it
It's interesting, because all the older works in its training data will default to the masculine singular, and that has to be a massive number of books too. But maybe the modern writing, including lots of online sources, simply overwhelms that. Or it's one of the guardrails written into the AIs to avoid offending people.
It's an archetypal example of 'one law for the connected, another law for the proles'.
The official QA organization was very powerful, and had no compunctions about stopping an entire product line, for one bug.
When that happened, the department responsible for the bug would find themselves against the wall.
As a result, all the software departments had pretty big teams of testers, who would validate the software, before it was released to the purview of the QA organization.
It could be pretty restricting, but we always felt confident that what we shipped, worked.
These kind of calculations always make me wonder...say someone wasted one minute of everybody's life, is the cost ~250 lives? One minute? Somewhere in between?
The Therac-25 incident was a radiation overdose in Texas.
I'd say this includes all of us all the time; a good developer never trusts their own work blindly, and spends more time gathering requirements and verifying their and others' work than writing code.
Does a construction engineer blame an architect's wacky designs if a building collapses? No, they either engineer it so it doesn't collapse, convince the architect that it will collapse because physics, or they refuse.
People want to be able to use a bridge for free too, doesn't mean there's no money in it.
As for gold plating, is that really improving software quality, or is that yak shaving / bike shedding?
He had finished undergrad 5+ years prior and had continuous industry experience.
That law is irrelevant to this situation, except in that the lawyers for Fujitsu / Royal Mail used it to imply their code was infallable.
That’s a horrible take. There is no amount of reviews, guidelines and documentation that can compensate for low quality devs. You can’t throw garbage into the pipeline and then somehow process it to gold.
But one key component is that IF a failsafe is triggered, it needs to be investigated as if it killed someone; because it should NEVER have triggered.
Without that part of the cycle, eventually the failsafe is removed or bypassed or otherwise ineffective, and the next incident will get you.
I'm a product manager for an old (and if I'm being honest somewhat crusty) system of software, the software is buggy, all of it is, but its also self healing and resilient, so while yes, it fails with somewhat alarming regularity, with very lots and lots concerning looking error messages in the logs, but it never causes an outage because it self heals.
Good systems design isn't making bug free software or a bug free system, but rather a system where a total outage requires N+1 (maybe even N+N) things to fail before the end user notices. Failures should be driven by at most, edge cases - basically where the system is being operated outside of its design parameters, and those parameters need to reflect the real world and be known by most stakeholders in the system.
My gripe with software engineers sometimes, they're often too divorced from real users and real use cases, and too devoted to the written spec over what their users actually need to do with the software - I've seen some very elegant (and on paper, well designed) systems fall apart because if simple things like intermittent packet jitter, or latency swings (say between 10ms and 70ms) - these are real world conditions, often encountered by real world systems, but these spec driven systems fall apart once confronted with reality.
> software quality doesn't appear because you have good developers
If this industry wants to be respected, it should start trying to be actual engineers. There should be tons and tons of standards which are enforced legally, but this is not often the case. Imagine if there were no real legal guardrails in, say, bridge building!
edit: and imagine if any time you brought up this issue, bridge builders cockily responded with "well stuff seems to work fine so..."
Also, speaking out when the train is visibly going against a wall.
> Provenance and proper traceability would have allowed
But there wasn't those things, so they couldn't, so they were driven to suicide.
Bad software killed people. It being slow or fast doesn't seem to matter.
If software "engineers" want to be taken seriously, then they should also have the obligation to report unsafe/broken software and refuse to ship unsafe/broken software. The developers are just as much to blame as the post office:
> Fujitsu was aware that Horizon contained software bugs as early as 1999 [2]
[1] https://engineerscanada.ca/news-and-events/news/the-duty-to-...
[2] https://en.wikipedia.org/wiki/British_Post_Office_scandal
That may seem a bit hypothetical but it can easily happen if you have a company that systematically underpays, which I'm sure many of us don't need to think hard to imagine, in which case they will systematically hire poor developers (because those are the only ones that ever applied).
Worryingly, e2e / full integration testing was also the main cause of other Boeing blunders, like the Starliner capsule.
It's almost never just software. It's almost never just one cause.
There's a chart here that shows it clearly for Toyota's rollout:
https://www.embedded.com/unintended-acceleration-and-other-e...
IMO this should be the standard - software engineer should be a protected title, and everyone else would be titled some flavour of software developer or similar.
When it comes to safety stuff (like bridge building), there are (and should be) strict licensing requirements. I would have no problem requiring such for work on things like medical equipment, We already require security clearance for things like defense information (unless you're a DOGE bro, I guess). That's a bit different from engineering creds, but it's an example of imposed structure.
But I think that it would be ridiculous to require it for someone that writes a fart app (unless it's a weaponized fart app).
What is in those requirements then becomes a hot potato. There are folks that would insist that any "engineer" be required to know how to use a slide rule, and would ignore modern constructs like LLMs and ML.
I'm not kidding. I know people exactly like that. If they get authority, watch out. They'll only "approve" stuff that they are good at.
On the other hand, if the requirements are LeetCode, then it's useless. A lot of very unqualified people would easily pass, and wreak havoc.
From what I can see, the IEEE seems to have a fairly good grasp on mixing classic structure and current tech. There's some very good people, there, and they are used to working in a structured manner.
But software has developed a YOLO culture. People are used to having almost no structure, and they flit between organizations so rapidly, that it's almost impossible to keep track of who is working on what.
The entire engineering culture needs to be changed. I don't see that being something that will come easily.
I'm big on Structure and Discipline. A lot of it has to do with almost 27 years at a corporation with so much structure that a lot of folks here, would be whimpering under their standing desks.
That structure was required, in order to develop equipment of the Quality they are famous for, but would be total overkill for a lot of stuff.
I do think that we need to impose structure on software supply chains, though. That's not something that will be a popular stance.
Structure is also not cheap. Someone needs to pay for it, and that's when you become a real skunk at the picnic.
Now I'm not an engineer nor at all aware of what these standards actually mean, I'm sure they're pretty common sense and nowhere near as detailed as bridge building standards.
I'm taking OP at their word, in part because I don't feel like doing free-of-charge remote diagnostics on a microwave over a communications channel with such an absurdly high RTT. Based on your poor choice of counterexample, I was assuming that you didn't understand the difference between "as safe as we can reasonably make it" design and "it only appears to be safe" design (which is the worst kind).
> Don't put any limbs inside when it's on, and it really isn't that dangerous...
If true, then I'd wonder why they bother with any shielding at all. They'd save a bunch of money by putting a clear glass or plastic plate in front of the door, rather than all that metal, don'tcha think?
> Again, your microwave isn't a demon core.
Lots and lots of dangerous things are less dangerous than plutonium cores that are just raring to fizz a little. That doesn't mean that the safety mechanisms mandated to be incorporated in their design are obviously superfluous.
But same with code itself, a junior will have code that is "theirs", a medior/senior will (likely) work at scales where they can't keep it all in their heads. And that's when all the software development best practices come into play.
Good developers are a necessary ingredient of a much larger recipe.
People think that a good process means you can toss in crap developers, or that great developers mean that you can have a bad process.
In my experience, I worked for a 100-year-old Japanese engineering company that had a decades-long culture of Quality. People stayed at that company for their entire career, and most of them were top-shelf people. They had entire business units, dedicated to process improvement and QA.
It was a combination of good talent, good process, and good culture. If any one of them sucks, so does the product.
[citation needed]
They don't need to be especially talented engineers, but, in my experience (and I actually have quite a bit of it, in this area), they need to be dedicated to a culture of Quality.
And it is entirely possible for very talented engineers to produce shite. I've seen exactly that.
https://edition.cnn.com/2025/08/27/us/alaska-f-35-crash-acci...
That’s not to say talent is unimportant, however, I’d need to see some real examples of high talent, no process, teams compared to low talent, high process, teams, then some mixture of the groups to make a fair statement. Even then, how do you measure talent? I think I’m talented but I wouldn’t be surprised to learn others think I’m an imbecile who only knows Python!
Fast killing software is too fast for that.
It used to be that the poor performers (dangerous hip-shootin' code commitin' cowpokes) were limited in the amount of code that they could produce per time unit, leaving enough time for others to correct course. Now the cowpokes are producing ridiculous amount of code that you just can't keep up with.
Cheers
They deliberately designed it to only look at one of the Pitot tubes, because if they had designed it to look at both, then they would have had to implement a warning message for conflicting data.
And if they had implemented a warning message, they would have had to tell the pilots about the new system, and train them how to deal with it.
It wasn't a mistake in logic either. This design went through their internal safety certification, and passed.
As far as I'm aware, MCAS functioned exactly as designed, zero bugs. It's just that the design was very bad.
The Camry, the Solara, and the RAV4 are all the exact same engine hardware and software, at nearly all times. Especially the Solara, which is just a Camry with a shinier body. It uses an identical engine, throttle body, and ECU flash and is even considered a "Camry Solara".
That Camry "Unintended accelerations" jumped while Solara didn't means that it isn't the hardware. Instead, they all started at the same time, about 2002. Similarly, the Scion TC is also literally the exact same hardware, software, and throttle body as a Camry. The entire Scion line is just Toyota software and hardware in a different body shell.
Indeed, if you look at the Corolla, the jump in unintended acceleration cases start with mechanical throttle bodies still the norm, and do not change with the switch to electronic throttle control.
IMO this graphic handily shows how the media affects average people. The media went on a large blitz about how Toyota was unsafe now that they had electronic throttle bodies, and so owners of those cars complained, but the average consumer does not realize that the Solara, Camry, and Scion TC are all identical vehicles under the body shell and don't realize that they "should" also be complaining about those cars if the problem was actually caused by the electronic throttle body code or design.
Important note: People who report unintended acceleration events like this almost always say that the brakes didn't stop it. That seems.... hard to believe. The brakes on all Toyotas are fully hydraulic and cannot fail electronically. All toyota vehicles in that chart (maybe excluding some Tacomas and the top line Lexus model) have brake capability far exceeding their engine power. A V6 Camry can be at wide open throttle but hard application of the brakes would still overpower that engine with no problem.
Interestingly the NHTSA's opinion is that the Toyota models in the graphic ARE defective: They allow faulty or improperly installed weather mats to interfere with the pedals. Toyota also believed this take, as they kept their handling of fixing this defect off the books, and changed the pedal positioning in their newer models. They eventually fined Toyota over a billion dollars for their handling of this situation, and claimed there was another "sticky pedals" problem that they were covering up.
Conflict resolution in redundant systems seems to be one of the weakest spots in modern aircraft software.
[edit as I can't reply to the child comment]: The FAA and EASA both looked into the stall characteristics afterwards and concluded that the plane was stable enough to be certified without MCAS and while it did have more of a tenancy to pitch up at high angles of attack it was still an acceptable amount.
Those governing bodies didn't form by magic. If you look at how hostile people on this site are to the idea of unionization or any kind of collective organisation, I'd say a large part of the problem with software is individual developers' attitudes.
My job hasn't significantly changed with AI, as AI generated code still has to pass all the hurdles I've set up while setting up this project.
A lot of times that is boring meetings to discuss the simplification.
I can extend the same analogy to all the gen ai bs that’s floating around right now as well.
But average construction worker is also average and average doctor as well.
World cannot be running on „best of the best” - just wrap your head around the fact whole economy and human activities are run by average people doing average stuff.
That’s in fact the thesis for the entire Deming management philosophy, and in line with what I’m saying (you can produce high quality with a good process or a good culture, you don’t necessarily need high caliber individuals)
If you want professional quality, we're the first line of actually making it happen, blaming others won't change anything.
It sounds like you're saying that you shouldn't care as much about the quality of "slow killing software" because in theory it can be made better in the future?
But... it wasn't though? Horizon is a real software system that real developers like you and me built that really killed people. The absolutely terrible quality of it was known about. It was downplayed and covered up, including by the developers who were involved, not just the suits.
I don't understand how a possible solution absolves the reality of what was built.
What I've been saying is methodology is mostly irrelevant, not that waterfall is specifically better than agile. Talent wins over the process but I can see how this idea is controversial.
I’d need to see some real examples of high talent, no process, teams compared to low talent, high process, teams, then some mixture of the groups to make a fair statement. Even then, how do you measure talent?
Yep, even if I made it my life's mission to run a formal study on programmer productivity (which I clearly won't) that wouldn't save the argument from nitpicking.
No, most are hydraulic with vacuum boost.
If you aren't expecting it, the increased force required to drive the brakes hydraulically from the pedal without boost assist is significant and can be surprising. I assume most folks haven't had an engine fail going downhill, but for a large pickup I was standing on the brake pedal and had to push my leg down by pulling on the steering wheel to stop it.
> That seems.... hard to believe
Anyhow, the vacuum booster is driven from the engine airflow. At wide open throttle the vacuum available to the booster is minimal because the restriction is as open as it can be.
You can test how it feels by rolling at a medium speed in a parking lot, shifting into neutral, and killing the engine. The vacuum reservoir may provide you one or two brake pumps and then you're on your own.
Another test: after shutting down your car after a regular drive, try depressing the brake to the floor a few times. You'll soon exhaust the boost reservoir and the brake pedal will become very stiff now that it's fallen back to full hydraulic operation. In this condition if you hold the pedal halfway down when you start the car you'll feel the brake boost kick in soon as the engine starts.
Inputs were averaged, but supposedly there’s at least a warning: Confused, Bonin exclaimed, "I don't have control of the airplane any more now", and two seconds later, "I don't have control of the airplane at all!"[42] Robert responded to this by saying, "controls to the left", and took over control of the aircraft.[84][44] He pushed his side-stick forward to lower the nose and recover from the stall; however, Bonin was still pulling his side-stick back. The inputs cancelled each other out and triggered an audible "dual input" warning.
No one is working on quality, everyone works on new features. There is usually no incentive to increase quality, to improve speed, performance, etc.
In my case, the company produced absolutely top-shelf stuff, but even relatively mediocre companies did well, using Deming’s techniques. It required that everyone be on board, wrt the culture, though.
But I have found that a “good” engineer is one that takes their vocation seriously. They may not be that accomplished or skilled, but they have self-discipline, humility, and structure.
I’ve met quite a few highly-skilled “not-good” engineers, in my day. I’m embarrassed to say that I’ve hired some of them.
https://ncees.org/ncees-discontinuing-pe-software-engineerin...
In slowly killing software the audit trail might be faster than the killing. In fast killing software, the audit trail isn't.
> Yep, even if I made it my life's mission to run a formal study on programmer productivity (which I clearly won't) that wouldn't save the argument from nitpicking.
I didn't ask for this, I just asked for sensible examples, either from your experience or from publicly available information.
- Programming errors have been reduced by extensive testing on a hardware simulator, and under field conditions on teletherapy units. Any residual software errors are not included in the analysis. 2) Program software does not decay due to wear, fatigue, or reproduction errors. 3) Computer software errors are caused by faulty hardware components, and "soft" (random) errors induced by alpha particles or electromagnetic noise.