I am afraid that without a major crash or revolution of some sort, the user won't matter next to a sufficiently big biz. But time will tell.
It is absolutely astounding how many of them run on code that is:
- very reliable, i.e. it almost never breaks or fails
- written in ways that make you wonder what series of events led to such awful code
For example:
- A deployment system that used python to read and respond to raw HTTP requests. If you triggered a deployment, you had to leave the webpage open as the deployment code was in the HTTP serving code
- A workflow manager that had <1000 lines of code but commits from 38 different people as the ownership always got passed to whoever the newest, most junior person on the team was
- Python code written in Java OOP style where every function call had to be traced up and down through four levels of abstraction
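A hypothetical sketch of what that third example can look like (all class and method names here are invented for illustration): a trivial operation buried under four Java-style layers of indirection, each of which you have to trace through to find the actual work.

```python
# Hypothetical illustration of the "Java OOP style in Python" antipattern:
# a trivial addition routed through four layers of abstraction.

class AdditionStrategy:
    """The layer that finally does the work."""
    def execute(self, a, b):
        return a + b

class StrategyFactory:
    """A factory whose only job is to construct the strategy."""
    def create_strategy(self):
        return AdditionStrategy()

class CalculationService:
    """A service that delegates to the factory's product."""
    def __init__(self):
        self._factory = StrategyFactory()

    def calculate(self, a, b):
        return self._factory.create_strategy().execute(a, b)

class CalculationManager:
    """A manager that delegates to the service."""
    def __init__(self):
        self._service = CalculationService()

    def perform_calculation(self, a, b):
        return self._service.calculate(a, b)

# Four jumps later, this is just `2 + 3`.
print(CalculationManager().perform_calculation(2, 3))  # 5
```

In idiomatic Python the whole thing would be one function, or no function at all.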
I mention this only because "LLMs write shitty code" isn't quite the insult or blocker that people think it is. Humans write TONS of awful but working code too.
Obviously, our regulations aren't perfect or even good enough yet. See DRM. See spyware TVs. See "who actually gets to control your device?". But still...
biz > user
is capitalism. Removal of that isn't capitalism; non-removal of that is.

LLMs regurgitate shitty code. They learned it entirely from people.
This looks like an example of a biobackend: defective IT compensated for by humans.
Your point is very sane; of course, shitty code was not invented just now. But was it ever sold as a revolution before? Probably that too!
So, similar with software design, as in other fields, often a problem goes away when you ask a different question.
Between rebuilding an engine and disassembling a bumper to replace a lightbulb, most mechanics would genuinely rather be doing the lengthy but interesting work of rebuilding an engine than the lengthy and fucking boring task of disassembling a bumper to fix a lightbulb.
Moreover, even if a mechanic must charge you stupid amounts in labour costs to do a simple repair because it genuinely takes that much time, the customer might not come away from it thinking: "fuck, I bought a dumb car which is expensive to repair". They might instead come away thinking: "all these mechanics, quoting ridiculous prices to fix a light bulb, they must all be scammers".
Every moving part, especially gears, needs to be oiled, and whenever you are oiling metal-on-metal contact such as in gears, you are going to want an oil filter to catch worn metal debris and remove it from the oil.
The difference between EVs and ICE vehicles is not that only one of them uses oil to reduce friction; it's that the oil service intervals on EVs are so long that regular oil maintenance is not needed. You do it every 60,000 miles or whatever the manufacturer recommends, so it's out of mind. But that doesn't mean it doesn't require service.
Once EVs have been around for a while and there is an established market for used EVs, the people who buy them are going to want to change the oil to add more life to the EV. So it's something that is dealt with in the long-life maintenance, not the monthly maintenance. But when you do the oil service, you will curse Tesla for requiring you to drop the battery to do it, and all of a sudden you will care where things are placed and how accessible they are.
Here is a nice video from Sam Crac (one of my favorite automotive youtubers): he picked up an old Tesla and did an oil service on it. It's a nice watch:
They actually will need oil changes starting anywhere from the 50k to 100k mile mark.
Here's the maintenance guide with pictures walking through changing the oil and filter for the Rear Drive Unit (RDU) in a Tesla Model S:
https://service.tesla.com/docs/ModelS/ServiceManual/Palladiu...
The obvious one is the battery, and you can argue that modern EVs have batteries so expensive that when they are dead the car becomes scrap, and - sure, whatever.
But EVs still have: cabin air filters, coolant, brake fluid, lubricants in various places (although granted, these lubricants will mostly last the service life).
At the end of the day, as long as you have a car which moves, and not a statue, it will have things which wear out and which should be easy to replace.
Engine oil and oil filters are just an example.
It's a continuous object lesson in missing the point. A similar thing happened a few hours ago when an article was posted about a researcher who uploaded a fake paper about a fake disease to a pre-print server that LLMs picked up via RAG, telling people with vague symptoms that they had this non-existent disease. Lo and behold, commenters went in immediately saying "I'd be fooled too because I trust pre-print medical research." Except the paper itself was intentionally ridiculous, opening by telling you it was fake and using obviously fake names taken from fictional characters on popular television. The only reason it fooled humans on Hacker News is that they don't bother reading the articles and respond only to headlines.
It's just like your code examples. Humans fail because we're lazy. Just like all animals, we have a strong instinct to preserve energy and expend effort only when provoked by fear, desire, or external coercion. The easiest possible code to write that seems to work on a single happy path using stupid workarounds is deemed good enough and allowed through. If your true purpose on a web discussion board is to bloviate and prove how smart you are rather than learn anything, why bother actually reading anything? The faster you comment, the better chance you have of getting noticed and upvoted anyway.
Humans are not actually stupid. We can write great code. We can read an obviously fake paper and understand that it's fake. We know how the hierarchy of evidence and trust works if we bother to try. We're just incredibly lazy. LLMs are not lazy. Unlike animals, they have no idea how much energy they're using and don't care. Their human slaves will move heaven and earth and reallocate entire sectors of their national economies and land use policies to feed them as much as they will ever need. LLMs, however, do have far more concrete cognitive limitations brought about by the way they are trained, without any grounding in the hierarchy of evidence or the factual accuracy of the text they ingest. We've erected quite a bit of ingenious scaffolding with various forms of augmented context, input pre-processing, post-training fine-tuning, and whatever the heck else these brilliant human engineers are doing to create the latest generation of state-of-the-art agents, but the models underneath still have this limitation.
Do we need more? Can the scaffolding alone compensate sufficiently to produce true genius at the level of a human who is actually motivated and trying? I have no idea. Maybe, maybe not, but it's really irritating that we can't even discuss the topic because it immediately drops into the tarpit of "well, you too." It's the discourse of toddlers. Can't we do better than this?
If that's what the regulators are optimizing for.
"We arrived at a little model that expresses the relative importance of various factors in software development..."
Most maintainability conflicts come from packaging and design for assembly.
Efficiency more often comes into conflict with durability, and sometimes safety.
There is just no universe in which placing an oil filter in one location or another is going to make such a difference. You'd have to mount it completely outside the engine, say sitting as a cylinder on top of the hood, and even there you are not going to get a 2mpg improvement.
Anyone who actually drives their car regularly will be doing an oil change at least twice a year. If an oil change takes more than 30 minutes of actual labour time for an inexperienced mechanic, it's going to be a serious financial burden which will likely outweigh any 2mpg improvement.
Companies that have a solid competitive moat have, at best, gotten lazy about user centricity and, at worst, become actively hostile.
Do you optimize an engine for how easy it is to replace a filter once or twice a year (most likely done by someone the average car-owner is already paying to change their oil for them), or do you optimize it for getting better gas mileage over every single mile the car is driven?
We're talking about a hypothetical car and neither of us (I assume) designs engines like this; I'm just trying to illustrate a point about tradeoffs existing. To your own point of efficiency being a trade with durability, that's not in a vacuum. If a part is in a different location with a different loading environment, it can be more/less durable (material changes leading to efficiency differences), more/less likely to break (maybe you service the hard-to-service part half as often when it's in a harder to service spot), etc.
The point that I am making (obviously, I think) is that tradeoffs exist; even if you don't think the right decision was made, your full view into the trade space is likely incomplete, or it prioritizes something different than the engineers did.
Based on the replies, saying there's a hypothetical 2mpg improvement to be had was a mistake: everyone is latching on to that like there's some actual engine we're investigating.
Or, we consider that 2mpg across 100,000 cars can save 3,500,000 gallons of gas from being burned, for the average American driving ~12k miles per year. And maybe things aren't so black and white. Your argument, in this hypothetical, is that the negligent car owner who destroys their car by choosing not to change the oil is worth burning an extra 3.5 million gallons of gasoline over.
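For what it's worth, that gallons-saved figure roughly checks out. Here's a back-of-envelope sketch; the baseline of ~25 mpg is my assumption, since the comment doesn't state one:

```python
# Back-of-envelope check of the gallons-saved figure.
# miles_per_year and the 2 mpg gain come from the comment;
# the 25 mpg baseline is an assumption for illustration.
miles_per_year = 12_000
cars = 100_000
baseline_mpg = 25.0
improved_mpg = baseline_mpg + 2

gallons_saved_per_car = (miles_per_year / baseline_mpg
                         - miles_per_year / improved_mpg)
total_saved = gallons_saved_per_car * cars

print(f"{total_saved:,.0f} gallons/year")  # roughly 3.6 million
```

A lower baseline makes the savings bigger, a higher one smaller, but anywhere near 25 mpg lands in the ~3.5 million range.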
Putting up some random number of hypothetical mpg improvement was clearly a mistake, but I assumed people here would be able to get the point I was trying to make, instead of getting riled up about the relationship (or lack thereof) between oil filters and fuel efficiency.
There’s definitely a programming equivalent as well…
Yes, I'm sure most people on this website have run into seemingly bad design choices which made sense once they knew more context. But that doesn't mean that all bad design choices are like this.
Specifically, dumb oil filter placement is an example of a case where the _only_ legitimate justification is design cost saving for the manufacturer (re-using an existing design meant for a different car).
You can maybe argue that saving on design costs (and I guess also re-tooling costs) is a saving that gets passed on to the consumer. But that consumer is unlikely to feel like they're saving much money when cars depreciate faster than ice cubes in the desert, and when their oil change is 2+ times more expensive every 6 months. Really, that cost saving will only benefit the manufacturer (well, at least until they tarnish their reputation).
Not for a normal car.
Fun Fact: Along with the "Bees are disappearing" scare, which was just measurement error, there has been an "insects are disappearing" scare, due to the fact that people's windshields are not covered with bugs like they used to be. However, that is because cars have gotten more aerodynamic, so fewer insects are hitting the windshield.
If the car is going to need to be in the shop for days, requiring you to have a replacement rental because the model is difficult to service, and the cost of the service itself is not cheap, that can easily outweigh any marginal mpg gain.
Similarly, because it is expensive and time consuming, you may well skip service schedules; the engine will then have a reduced life, or it seizes up on the road and you need an expensive tow and rebuild, etc.
You are implicitly assuming none of these will change if the maintenance is more difficult; that is not the case, though.
This is what OP is implying when he says a part with a regular maintenance schedule should be easily accessible.
[1] Of which fuel is only one part; substantial, yes, but not the only one.
The reason why automakers place serviceable parts in bad locations is either incompetence (if you are, say, Bentley) or malicious design (almost everyone else); that is, they do not prioritize serviceability. Car makers really hate that ordinary people can repair their own vehicles. There were proposals in the 1960s to lock the hood shut so that car owners wouldn't be able to open it and service their cars on their own.

Hyundai just announced that they will not allow car owners to retract their own parking brakes when they want to replace brake pads. You need to log in to a website and prove that you are a professional mechanic before you can retract your own parking brakes. This is done, ostensibly, for "cyber security" reasons. But the real reason is that Hyundai does not want people to be able to service their own cars; they want you to take the car to a dealer. They are also not fans of independent mechanics; they would prefer that everyone who touches the car has a business relationship with Hyundai and is under contract with them.

The fact that you can work on your car is an endless source of pain for manufacturers, and when they repeatedly make it hard to work on your car, or try to lock down parts so that you can't pull an old seat heater from the junkyard and use it to replace your own failed seat heater, that is all part of the war on independent repair.
So what should be discussed is the environment of hostility to serviceability, everything from insisting that transmission oil is "lifetime" to forcing you to pay money to the manufacturer if you want to read the data from your sensors, or making it extremely hard to do simple things like changing a headlight or replacing a battery. All of that is part of the same issue, which is hostility to end user repair. It has nothing to do with improving gas mileage, or ending world hunger, or celebrating the Year of the Pig. These are all equally specious arguments.
We're already in the land of the fucking ridiculous. Let's have fun with it.
Um, I’m pretty sure that’s not the only evidence for insect population declines.
This phrase is, by now, common programmer knowledge: a reminder that the person first writing a piece of code shouldn’t buy convenience at the expense of the people who will have to read it and modify it in the future. More generally, “code is read more than written” conveys that it’s usually a good investment to make the code maintainable by keeping it simple, writing tests and documentation, etc. It’s about having perspective over the software development cycle.
Let me express this idea more succinctly:
maintainer > author
I think this line of thought can be extended beyond code-writing and used as a rule of thumb to identify problems and make decisions.
Code is a means to an end. Software should have a purpose; it’s supposed to provide a service to some user. It doesn’t matter how well written or maintainable the code is, nor how sophisticated the technology it uses, if it doesn’t fulfill its purpose and provide a good experience to the user:
user > maintainer > author
Or, since we won’t need to distinguish between developer roles anymore:
user > dev
This is why, instead of guessing or asking what they need, it’s best to put the program in front of the users early and frequently and to incorporate what we learn from their feedback.
This is a strong mental model, just keeping the users in mind during development can get us quite far. It’s approximately how I learned the job and how I understood it for the first half of my career.
When I say “run” I don’t just mean executing a program; I mean operating it in production, with all that it entails: deploying, upgrading, observing, auditing, monitoring, fixing, decommissioning, etc. As Dan McKinley puts it:
It is basically always the case that the long-term costs of keeping a system working reliably vastly exceed any inconveniences you encounter while building it.
We can incorporate this idea into our little model:
user > ops > dev
It took me a while to fully grasp this because, in my experience, much of the software being built never really gets to production, at least not at a significant scale. Most software is built on assumptions that never get tested. But when you run your code in production, the KISS mantra takes on a new dimension. It’s not just about code anymore; it’s about reducing the moving parts and understanding their failure modes. It’s about shipping stuff and ensuring it works even when it fails.
I said that keeping the users in mind during development can get us very far. This works under the assumption that software that’s useful and works well, software of value to users, will be of value to the organization. It’s a convenient abstraction for developers: we produce good, working software, and the business deals with turning it into money. And it mostly works, especially for consumer and enterprise software. But, eventually, that abstraction proves to be an oversimplification, and we can benefit from incorporating some business perspective into our working process:
biz > user > ops > dev
The most obvious example is budget: we don’t have infinite resources to satisfy user needs, so we need to measure costs and benefits. There’s marketing, there are deadlines. There are stakeholders and investors. There are personal interests and politics at play. Decisions that make sense for our software, our team, or our users considered in isolation, but not when we consider the organization as a whole. Sometimes, we need to work on what generates revenue, not what pleases the user. I’ll get back to this.
We arrived at a little model that expresses the relative importance of various factors in software development, one that can perhaps help us see the bigger picture and focus on what matters. Now I want to look at some common software development dysfunctions and see how they map to the model.
author > maintainer
This is where we started. This is clever and lazy code that turns into spaghetti and haunted forests, this is premature optimization, this is only-carlos-can-touch-that-module, etc.
dev > user
Software from teams that don’t learn from their users or that put technology first. Over-engineered programs, “modernizations” that degrade the user experience, web apps that break the browser features, etc.
dev > ops
Software that wasn’t designed with its operation in mind. This is overly complicated software with lots of moving parts, fancy databases for small data loads, microservice ecosystems managed by a single small team. Software prematurely architected for scale. Software designed by different people than the ones woken up at midnight when it breaks.
dev > biz
Code considered as an end in itself. Software built by pretentious artisans, musicians of the Titanic, and Lisp Hackers.
dev > *
Software produced when there’s nothing at stake and developers get to do whatever they want.
biz > user > ~~ops >~~ dev
This is software that’s built but rarely (or never) gets to production. I call this imaginary software. Charity Majors calls it living a lie.
biz > ~~user >~~ ops > dev
Another kind of imaginary software is the one that doesn’t have users. (But scales). This is software that doesn’t solve a problem, or solves the wrong problem, perhaps nobody’s problem. Software that results from taking some hyped tech and hammering everything with it until something vaguely resembling a use case comes up.
~~biz >~~ user > ops > dev
Venture-backed software without a business model or whose business model is grow-until-monopoly-then-exploit-users.
If you didn’t rage-close the browser tab yet, let me wrap up by going back to this:
biz > user
This one has ramifications that can be hard to swallow.
As I mentioned above, the way I learned the job, software was about solving problems for end users. This is summarized in one of the final tips of The Pragmatic Programmer, saying that our goal is to delight users, not just deliver code. But, since I started working as a programmer, and as software became ubiquitous, I’ve seen this assumption become increasingly hard to uphold.
There’s a lot of software being produced that just doesn’t care about its users, or that manipulates them, or that turns them into the product. And this isn’t limited to social media: as a user, I can’t even book a room, order food, or click on the Windows start button without popups trying to grab my attention; I can’t make a Google search without getting back a pile of garbage.
There’s a mismatch between what we thought doing a good job was and what a significant part of the industry considers profitable, and I think that explains the increasing discomfort of many software professionals. And while we can’t just go back to ignoring the economic realities of our discipline, perhaps we should take a stronger ethical stand not to harm users. Acknowledging that the user may not always come before the business, but that the business shouldn’t unconditionally come first, either:
biz ≹ user