(Confession: "good code will still win" was my suggestion; IIRC they originally had "Is AI slop the future?". You win some, you lose some.)
Did the best processor win? No, x86 is trash.
Did the best computer language win? No (not that you can pick a best).
The same is true pretty much everywhere else outside computers, with rare exception.
We've already seen a large-scale AWS outage because of this. It could get much worse. In a few years, we could have major infrastructure outages that the AI can't fix, and no human left understands the code.
AI coders, as currently implemented, don't have a design-level representation of what they're doing other than the prompt history and the code itself. That inherently leads to complexity growth. This isn't fundamental to AI. It's just a property of the way AI-driven coding is done now.
Is anybody working on useful design representations as intermediate forms used in AI-driven coding projects?
"The mending apparatus is itself in need of mending" - "The Machine Stops", by E.M. Forster, 1909.
Economic forces are completely irrelevant to the code quality of AI.
> I believe that economic incentives will start to take effect and AI models will be forced to generate good code to stay competitive amongst software developers and companies
Wherever AI succeeds, it will be because a dev is spending time on a process that requires a lot of babysitting. That time is about the same as writing it by hand. Language models reduce the need to manually type something because that's what they are designed to do, but it doesn't mean faster or better code.
AI is rubber duck that can talk back. It's also a natural language search tool. It's training wheels for devs to learn how to plan better and write half-decent code. What we have is an accessibility tool being sold as anything and everything else because investors completely misunderstand how software development works and are still in denial about it.
Code quality starts and ends with business needs being met, not technical capability. There is no way to provide that to AI as "context" or automate it away. AI is the wrong tool when those needs can be met by ideas already familiar to an experienced developer. They can write that stuff in their sleep (or while sitting in the meetings) and quickly move on.
And just to be clear: AI continues to progress. There are already rumors about the next Anthropic model coming out, and we are now in the phase of the biggest centralized reinforcement loop that has ever existed: everyone using AI for writing and giving it feedback.
We are, thanks to LLMs, now able to codify humans, and while it's not clear how fast this is going, I no longer believe that my skills are unique.
A small hobby application cost me 11 dollars over the weekend and took me 3h to 'build', while I would probably have needed 2-3 days for it.
And we are still limited by resources and normal human progress. The Claude team setup is still experimental. Things like gastown or orchestrator architectures/structures are not that established and consumed quite a lot of tokens.
We have not even had time yet to build optimized models. Claude Code still understands A LOT of languages (human languages and programming languages).
I don't think anyone really cares about code quality. I do, but I'm a software engineer. Everyone around me doesn't. Business doesn't. Even fellow co-workers don't care, or don't understand good code.
Even glaring things like the GTA 5 Online startup code went unfixed for ages (there was accidental algorithmic complexity in loading a config file that made startup take forever, until someone outside Rockstar found it and Rockstar fixed it).
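That bug was a textbook accidental-quadratic pattern: every parse step re-scanned what remained of the input. A minimal Python sketch of the same shape (illustrative only, not Rockstar's actual code, which was C-style `sscanf`/`strlen`):

```python
def parse_quadratic(data: str) -> list[str]:
    """Accidentally O(n^2): each step copies and re-scans the remaining
    input, the way repeated sscanf() calls re-run strlen() in C."""
    items, pos = [], 0
    while pos < len(data):
        rest = data[pos:]               # copies the whole tail every step
        end = rest.find(",")
        end = len(rest) if end == -1 else end
        items.append(rest[:end])
        pos += end + 1
    return items

def parse_linear(data: str) -> list[str]:
    """O(n): one pass over the input, no repeated copying."""
    return data.split(",")

# Same answer either way; only the cost curve differs as the input grows.
data = ",".join(str(i) for i in range(5_000))
assert parse_quadratic(data) == parse_linear(data)
```

Both functions pass the same correctness tests, which is exactly why this class of bug can sit unnoticed in shipped software for years.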
We also have plenty of code where it doesn't matter as long as it works: offline apps, scripts, research code, etc.
The difference is that over the years, while tooling and process have dramatically improved, SDEs have not improved much; junior engineers still make the same mistakes. The assumption (not yet proven, but the whole bubble is based on it) is that models will continue to improve, eventually leaving behind human SDEs (or other domain people: lawyers, doctors, etc.). If this happens, these arguments I keep seeing on HN about AI slop will all be moot.
Assuming AI continues to improve, the cost and speed of software development will dramatically drop. I saw a comment yesterday that predicted that AI will just plateau and everyone will go back to vim and Makefiles (paraphrasing).
Maybe, I don't know, but all these people saying "AI is slop, Ra Ra Humans" are just engaging in wishful thinking. Let's admit it, we don't know how it will play out. There are people like Dario and Sam who naturally are cheerleading for AI, then there's the HN collective who hate every new release of macOS and every AI model, just on principle! I understand the fear; anyone who's ever read Flora Thompson's Lark Rise to Candleford will see the parallels. Things are changing: AI is the plough, the railway, the transistor...
I'm tired of the debate. My experience is that AI (Gemini for me) is awesome; we all have gaps in our knowledge/skills (but not Gemini). AI helps hardcore backend engineers throw together a Gradio demo in minutes to make their point, helps junior devs review their code before making a PR, helps Product put together presentations. I could go on and on; those that don't see value in AI are doing it wrong.
As Taylor Swift said "It's me, hi, I'm the problem, it's me" - take that to heart and learn to leverage the tools, stop whining please, it's embarrassing to the whole software industry.
There is an abundance of mediocre and even awful code in products that are not failing because of it.
The worst thing about poorly designed software architecture is that it tends to freeze and accumulate more and more technical debt. This is not always a competitive issue, and with enough money you can maintain pretty much any codebase.
Why build each new airplane with the care and precision of a Rolls-Royce? In the early 1970s, Kelly Johnson and I [Ben Rich] had dinner in Los Angeles with the great Soviet aerodynamicist Alexander Tupolev, designer of their backfire Bear bomber. 'You Americans build airplanes like a Rolex watch,' he told us. 'Knock it off the night table and it stops ticking. We build airplanes like a cheap alarm clock. But knock it off the table and still it wakes you up.'...The Soviets, he explained, built brute-force machines that could withstand awful weather and primitive landing fields. Everything was ruthlessly sacrificed to cut costs, including pilot safety.
We don't need to be ruthless to save costs, but why build the luxury model when the Chevy would do just as well? Build it right the first time, but don't build it to last forever. - Ben Rich in Skunk Works

1. You treat your code as a means to an end to make a product for a user.
2. You treat the code itself as your craft, with the product being a vector for your craft.
The people who typically have the most negative things to say about AI fall into camp #2 where AI is automating a large part of what they considered their art while enabling people in group #1 to iterate on their product faster.
Personally, I fall into the first camp.
No one has ever made a purchasing decision based on how good your code is.
The general public does not care about anything other than the capabilities and limitations of your product. Sure, if you vibe code a massive bug into your product then that'll manifest as an outcome that impacts the user negatively.
With that said, I do have respect for people in the latter camp. But they're generally best suited to projects where that level of craftsmanship is actually useful (think: mission-critical software, libraries that other devs depend on, etc.).
I just feel like it's hard to talk about this stuff if we're not clear on which types of projects we're talking about.
The pattern was always: ship fast, fix/document later, but when "later" comes "don't touch what is working".
To date nothing has changed, and I bet it won't change in the future either.
We are at the point where a single class can be dirty, but the API of the classes should be clean. There's no point reviewing the internals of a class anymore; I'm more or less sure they would work as intended.
The next step is the microservice itself: the API of that microservice should be clean, but the internals can be whatever. We are about 10% of the way there.
But I don't think the models are going to get there on their own. AI will generate a working mess all day long if you let it. The pressure to write good code has to come from the developer actually reviewing what comes out and pushing back. The incentive is there but it only matters if someone acts on it.
So yeah, good code might win among a small group of principled people, but the majority will not care. And more importantly, management won't care. And as long as management doesn't care, you have two choices: "embrace" slop, or risk staying jobless in a tough market.
Edit: Also, good code = expensive code. In an economy where people struggle to afford a living, nobody is going to pay for good code when they can get "good enough" code for $200 a month with Claude.
The slop we're seeing today comes primarily from the fact that LLMs are writing code with tools meant for human users.
People forget that good engineering isn't "the strongest bridge", but the cheapest bridge that just barely won't fail under conditions.
absolutely false.
> The general public does not care about anything other than the capabilities and limitations of your product.
also false.
People may not know that the reason they like your product is because the code is so good, but everyone likes software that is mostly free from bugs, performs extremely well, helps them do their work quickly, and is obviously created by people who care deeply about the quality of the product they produce (you know, the kind that actually read bug reports, and fix problems quickly).
The longer your product exists the more important the quality of the code will be. This obsession so many have with "get it out the door in 5 seconds" is only going to continue the parade of garbage software that is slow as a dog, and uses gigabytes of memory to perform simple tasks.
You don't have to pick one camp over the other. In my opinion, if you want to make a good product for a user, you should also treat the code you produce for them as your craft. There is no substitute for high quality work.
There are some types of software (e.g. websites especially) where a bit of jank is generally acceptable. Sessions are relatively short, and your users can reload the webpage if things stop working. The technical rigor of these codebases tends to be poor, but it's generally fine.
Then there's software which is very sensitive to issues (e.g. a multi-player game server, a driver, or anything that's highly concurrent). The technical rigor here needs to be very high, because a single mistake can be devastating. This type of software attracts people who want to take pride in their code, because the quality really does matter.
I think these people are feeling threatened by LLMs. Not so much because an LLM is going to outperform them, but because an LLM will (currently) make poor technical design decisions that will eventually add up to the ruin of high-rigor software.
If a technology to build airplanes quickly and cheaply existed and was made available to everyone, even to people with no aeronautical engineering experience, flying would be a much scarier ordeal than it already is.
There are good reasons for the strict safety and maintenance standards of the aviation industry. We've seen what can happen if they're not followed.
The fact that the software industry doesn't have similar guardrails is not something to celebrate. Unleashing technology that allows anyone to create software without understanding or even caring about good development practices and conventions is fundamentally a bad idea.
I've worked on a project that went over the complexity cliff before LLM coding even existed. It can get pretty hairy when you already have well-established customers with long-term use-cases that absolutely cannot be broken, but their use-cases are supported by a Gordian Knot of tech debt that practically cannot be improved without breaking something. It's not about a single bug that an LLM (or human) might introduce. It's about a complete breakdown in velocity and/or reliability, but the product is very mature and still makes money; so abandoning it and starting over is not considered realistic. Eager uptake of tech debt helped fuel the product's rise to popularity, but ultimately turned it into a dead end. It's a tough balancing act. I think a lot of LLM-generated platforms will fall eventually into this trap, but it will take many years.
To me, the entire point of crafting good code is building a product with care in the detail. They're inseparable.
I don't think I've ever in my life met someone who cared a lot about code and technology who didn't also care immensely about detail, and design, and craft in what they were building. The two are different expressions of the same quality in a person, from what I've seen.
People who care about code quality are not artists who want to paint on the company's dime. They are people who care about shipping a product deeply enough to make sure that doing so is a pleasant experience both for themselves and their colleagues, and who also have the maturity to do a little bit more thinking today, so that next week they can make better decisions without thinking, and so that they don't get called at 4 AM the night after launch for some emergency debugging of an issue that really should have been impossible if it was properly designed.
> No one has ever made a purchasing decision based on how good your code is.
Usually they don't get to see the internals of the product, but they can make inferences based on its externals. You've heard plenty of products called a "vibe-coded piece of crap" this year, even if they're not open source.
But also, this is just not true. Code quality is a factor in lots of purchasing decisions.
When buying open source products, having your own team check out the repo is incredibly common. If there are glaring signs in the first 5 minutes that it was hacked together, your chances of getting the sale have gone way down. In the largest deals, inspecting the source code can even be part of due diligence.
It was for an investment decision rather than for a purchase, but I've been personally hired to do some "emergency API design" so a company can show that it both has the thing being designed, and that their design is good.
Craft often inspires a quasi-religious adherence to fight the ever-present temptation to just cut this one corner here real quick, because is anything really going to go wrong? The problems that come from ignoring craft are often very far-removed from the decisions that cause them, and because of this craft instills a sense of always doing the right thing all the time.
This can definitely go too far, but I think it's a complete misunderstanding to think that craft exists for reasons other than ensuring you produce high-quality products for users. Adherents to craft will often end up caring about the code as end-goal, but that's because this ends up producing better products, in aggregate.
… but lately, the rate at which some dev with an LLM can just churn out new bad code has just shot through the roof. I can still be struggling to pick apart the last piece of slop, trying to figure out "okay, if someone with a brain had written this, what would the inputs & outputs be?" and "what is it that production actually needs and relies on, and what causes problems, and how can we get the code from point A to point B without more outages"; but in the meantime, someone has spit out 8 more modules of the same "quality".
So sure, the basic tenets haven't changed, but these days I feel like I'm drowning in outages & bugs.
It might be fine if your HR software isn't approving holiday requests, but if your checkout breaks and there's no human who can pick apart the mess, you lose your entire income for a week, and that might be the end of the business.
There are products that are made better when the code itself is better. I would argue that the vast majority of products are expected to be reliable, so it would make sense that reliable code makes for better product. That's not being a code craftsman, it's being a good product designer and depending on your industry, sometimes even being a good businessman. Or, again, depending on your industry, not being callous about destroying people's lives in the various ways that bad code can.
And at the same time I hope that you will some day be forced to maintain a project written by someone else with that mindset. Cruel, yes. But unfortunately schadenfreude is a real thing - I must be honest too.
I have gotten too old for ship-now, ask-questions-later projects.
Competition is essentially dead for that segment given there is always outward growth.
With that being said, AI enables smaller players to implement their visions with enough completeness to be viable. And with a hands off approach to code, the underlying technology mindshare does not matter as much.
... I'll see myself out
What if your AI uses an O(n) algorithm in a function when an O(log n) implementation exists? The output would still be "correct"
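A concrete illustration of the point (my sketch, not from the thread): a linear scan over a sorted list returns exactly the same answers as a binary search, so ordinary tests pass either way, while the costs diverge as the data grows.

```python
import bisect

def contains_linear(sorted_items: list[int], target: int) -> bool:
    # O(n): correct, but scans every element in the worst case.
    return any(x == target for x in sorted_items)

def contains_binary(sorted_items: list[int], target: int) -> bool:
    # O(log n): exploits the fact that the input is sorted.
    i = bisect.bisect_left(sorted_items, target)
    return i < len(sorted_items) and sorted_items[i] == target

data = list(range(0, 1_000_000, 2))   # sorted even numbers
assert contains_linear(data, 999_998) == contains_binary(data, 999_998) == True
assert contains_linear(data, 7) == contains_binary(data, 7) == False
```

Nothing in the observable output distinguishes the two; only profiling, or a reviewer who knows the input is sorted, would catch the difference.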
What if we built things that are meant to last? Would the world be better for it?
Good engineering is building the strongest bridge within budget and time.
Exactly, thank you for putting it like that.
So far it’s been my observation that it’s only the people who think like the OP who put the situation in the terms they did. It’s a false dichotomy which has become a talking point. By framing it as “there are two camps, it’s just different, none of them is better”, it lends legitimacy to their position.
For an exaggerated, non-comparable example meant only to illustrate the power of such framing devices, one could say: “there are people who think guns should be regulated, and there are people who like freedom”. It puts the matter into an either/or situation. It’s a strategy to frame the conversation on one’s terms.
From working on many, many old and important code bases, I can say the code quality is absolutely trash.
That can't be right? What about safety factors?
Left to their own devices, engineers would build the cheapest bridge they could sell that hopefully won't collapse. And no care for the impact on any stakeholder other than the one paying them.
This obviously doesn't represent all of the billions of dollars spent on software like Salesforce, SAP, Realpage, Booking.com, etc. etc. (all notoriously buggy, slow, and complex software). You can't tell me with a straight face that all of the thousands of developers who develop these products/services care deeply about the quality of the product. They get real nice paychecks, benefits and put dinner on the table for their families. That's the market.
> There is no substitute for high quality work.
You're right because there really isn't a consistent definition of what "high quality" software work looks like.
Exactly. A lot of devs are optimizing for whether the feature is going to take a day or an hour, but not contemplating that it's going to be out in the wild for 10 years either way. Maybe do it well once.
I would classify all of those as "capabilities and limitations of your product"
I read OPs "good code" to mean "highly aesthetic code" (well laid out, good abstractions, good comments, etc. etc.), and in that sense I agree no customer who's just using the product actually cares about that.
Another definition of "good code" is probably "code that meets the requirements without unexpected behavior" and in that sense of course end users care about good code, but you could give me two black boxes that act the same externally, one written as a single line , single character variables, etc. etc. etc. and another written to be readable, and I wouldn't care so long as I wasn't expected to maintain it.
No, unfortunately. In a past life, in response to an uptime crisis, I drove a multi-quarter company-wide initiative to optimize performance and efficiency, and we still did not manage to change the company culture regarding performance.
If it does not move any metrics that execs care about, it doesn't matter.
The industry adage has been "engineer time is much more expensive than machine time," which has been used to excuse way too much bloated and non-performant code shipped to production. However, I think AI can actually change things for the better. Firstly, IME it tends to generate algorithmically efficient code by default, and generally only fails to do so if it lacks the necessary context (e.g. not knowing that an input is sorted).
More importantly though, now engineer time is machine time. There is now very little excuse to avoid extensive refactoring to do things "the right way."
If it's harder to work with, it's harder to work with, it's not the end of the world. At least it exists, which it probably wouldn't have if developed with "camp 2" tendencies.
I think camp 2 would rather see one beautiful thing than ten useful things.
We only recently figured out how to reproduce Roman concrete.
We’d have more but a lot were blown up during WWII.
You'd have a better bridge, at the expense of other things, like hospitals or roads. If people choose good-enough bridges, that shows there is something else they value more.
Performance can be a direct target in a feedback loop and optimised away. That's the easy part. Taking an idea and poof-ing a working implementation is the hard part.
In the end, software is a means to an end. And if you do not get to the end because the software is crap, it will be replaced, hopefully by someone else.
If there are people who, on principle, demand the superior product then those people simply aren't numerous enough to matter in the long run. I might be one of those people myself, I think.
In other words, I would, when possible, absolutely make a purchasing decision based on how good the code is (or based on how good I estimate the code to be), among other things.
[0] The concept of design is often misunderstood. First, obviously, when it’s classified as “how the thing looks”; then, perhaps less obviously, when it’s classified as “how the thing works”. A classification I am arriving at is, roughly, “how the thing works over time”.
That's the minimalism that's been lost.
That's why I find the group 2 arguments disingenuous. Emotional appeal to conservatism, which conveniently also props up their career.
Why all those parsers and package systems when what's really needed is dials min-max geometric functions from grand theft auto geometry to tax returns?
Optimization can be (and will be) engineered into the machine through power regulation.
There's way too many appeals to nostalgia emanating from the high tech crowd. Laundering economic anxiety through appeals to conservatism.
Give me an etch a sketch to shape the geometry of. Not another syntax art parser.
If you build a bridge that is rated to carry 100k lbs of weight, and you build it to hold 100k lbs, you didn't build it to barely meet spec -- you under built it -- because overloading is a known condition that does happen to bridges.
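Put as arithmetic (an illustrative sketch; the 1.5 factor is made up, real design codes specify their own factors per load type):

```python
def design_capacity(rated_load_lbs: float, safety_factor: float = 1.5) -> float:
    """A structure rated for a given load is built to hold more than that,
    so known overloads and material variation don't reach the failure point."""
    return rated_load_lbs * safety_factor

rated = 100_000
capacity = design_capacity(rated)     # 150,000 lbs of actual strength
overload = 120_000                    # an overload that is known to happen
assert capacity >= overload           # the margin absorbs it
```

Building to exactly 100k lbs corresponds to a safety factor of 1.0, i.e. no margin at all for the overloads the rating is supposed to survive.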
> I wouldn't care so long as I wasn't expected to maintain it.
But, if you’re the one putting out that software, of course you will have to maintain it! When your users come back with a bug or a “this flow is too slow,” you will have to wade into the innards (at least until AI can do that without mistakes).
And most users are not consuming your code. They’re consuming some compiled, transpiled, or minified version of it. But they do have expectations and it’s easier to amend the product if the source code is maintainable.
The reality of software products is that they are in nearly all cases developed/maintained over time, though--and whenever that's the case, the black box metaphor fails. It's an idealization that only works for single moments of time, and yet software development typically extends through the entire period during which a product has users.
> I read OPs "good code" to mean "highly aesthetic code" (well laid out, good abstractions, good comments, etc. etc.)
The above is also why these properties you've mentioned shouldn't be considered aesthetic only: the software's likelihood of having tractable bugs, manageable performance concerns, or to adapt quickly to the demands of its users and the changing ecosystem it's embedded in are all affected by matters of abstraction selection, code organization, and documentation.
I don't know any real (i.e. non-software) engineers, but I would love to ask them whether what you said is true. For years now, I've been convinced that we should've stuck with calling ourselves "software developers", rather than trying to crib the respectability of engineering without understanding what makes that discipline respectable.
Our toxic little industry would benefit a lot from looking at other fields, like medicine, and taking steps to become more responsible for the outcomes of our work.
Look through the list of top apps in mobile app stores, most used desktop apps, websites, SaaS, and all other popular/profitable software in general and tell me where you see users rewarding quality over features and speed of execution.
If this level of quality/rigor does matter for something like a game, do you think the market will enforce this? If low rigor leads to a poor product, won't it sell less than a good product in this market? Shouldn't the market just naturally weed out the AI slop over time, assuming it's true that "quality really does matter"?
Or were you thinking about "matter" in some other sense than business/product success?
I think there are a lot of developers working in repos where it's almost guaranteed that their code will _not_ still be there in 10 years, or 5 years, or even 1 year.
Those first three are "enterprise" or B2B applications, where the person buying the software is almost never one of the people actually using the software. This disconnect means that the person making the buying decision cannot meaningfully judge the quality of any given piece of software they are evaluating beyond a surface level (where slick demos can paper over huge quality issues) since they do not know how it is actually used or what problems the actual users regularly encounter.
A couple of years ago, "slop" became the popular shorthand for unwanted, mindlessly generated AI content flooding the internet including images, text, and spam. Simon Willison helped popularize the term, though it had been circulating in engineering communities in the years prior.
At Greptile, we spend a lot of time thinking about questions like: Is slop the future? Are programming best practices now a thing of the past? Will there be any reason at all for AI coding tools to write what we call good code going forward?
I want to argue that AI models will write good code because of economic incentives. Good code is cheaper to generate and maintain. Competition is high between the AI models right now, and the ones that win will help developers ship reliable features fastest, which requires simple, maintainable code. Good code will prevail, not only because we want it to (though we do!), but because economic forces demand it. Markets will not reward slop in coding, in the long-term.
Software development is changing fast. A prominent recent example comes from Ryan Dahl, creator of Node.js, who wrote, "The era of humans writing code is over. Disturbing for those of us who identify as SWEs, but no less true."
Meanwhile, the complexity of the average piece of software is drastically increasing. Theo [1] pointed out this trend. He notes that this increased complexity was driven partly by AI making it easier to ship more code faster, and partly by economic pressure for companies to keep up with competitors. Theo points out that the number of PRs is going up, which is what we've noticed at Greptile as well. As we covered in our State of AI Coding report [2], published a couple of months ago, lines of code per developer grew from 4,450 to 7,839 as AI coding tools became standard practice. Median PR size increased 33% from March to November 2025, rising from 57 to 76 lines changed. Individual file changes became 20% larger and "denser."
The stats suggest that devs are shipping more code with coding agents. The consequences may already be visible: analysis of vendor status pages [3] shows outages have steadily increased since 2022, suggesting software is becoming more brittle. Andrej Karpathy [4] describes: "agents bloat abstractions, have poor code aesthetics, are very prone to copy pasting code blocks and it's a mess, but at this point I stopped fighting it too hard and just moved on."
Collectively, software engineers are cranking out code at a high quantity. The approach driving much of this is brute force: generate code fast, iterate until it works, worry about simplicity and quality later (if at all).
In A Philosophy of Software Design [5], John Ousterhout argues that complexity is the #1 enemy of well-designed software. Bad code needs lots of context to understand. Good code is easy to understand, modify, and extend; it hides implementation details and creates deep modules behind narrow interfaces. This simplicity also has practical implications.
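A toy sketch of that distinction (our example, not Ousterhout's): a deep module exposes a narrow interface and hides substantial implementation detail, so callers need almost no context; a shallow one leaks its internals into every call site.

```python
class ShallowCache:
    """Shallow: the interface leaks internals (slots, eviction policy),
    forcing every caller to understand the implementation."""
    def __init__(self):
        self.slots, self.order = {}, []

    def put_with_eviction(self, key, value, max_slots, evict_oldest):
        if len(self.slots) >= max_slots and evict_oldest:
            self.slots.pop(self.order.pop(0))
        self.slots[key] = value
        self.order.append(key)

class DeepCache:
    """Deep: a narrow get/put interface; the eviction policy is hidden."""
    def __init__(self, capacity: int = 128):
        self._capacity, self._slots, self._order = capacity, {}, []

    def put(self, key, value):
        if key not in self._slots and len(self._slots) >= self._capacity:
            self._slots.pop(self._order.pop(0))   # evict oldest, internally
        if key in self._slots:
            self._order.remove(key)
        self._slots[key] = value
        self._order.append(key)

    def get(self, key, default=None):
        return self._slots.get(key, default)

cache = DeepCache(capacity=2)
cache.put("a", 1); cache.put("b", 2); cache.put("c", 3)   # "a" is evicted
assert cache.get("a") is None and cache.get("c") == 3
```

The deep version needs less context to call correctly, which, by the argument above, is exactly the property that makes code cheaper for both humans and models to work with.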

Good code requires upfront thinking about architecture, design, edge cases, and clean abstractions. By Ousterhout's definition, good code will also be easier to understand and modify, because it requires less context, which makes it dramatically cheaper overall. We don't actually know the exact trade-off yet, but for any software that lives longer than a weekend, it will be cheaper overall to generate good code.
By contrast, complex code doesn't scale. It requires a lot of tokens and compute, and as codebases grow, it gets exponentially more expensive.
Economic pressure will drive AI models to generate good code because getting the architecture right upfront is cheaper than fixing it later. That pressure is already changing what AI-powered development looks like. Good code needs less context to understand, fewer changes for maintenance, and therefore fewer input and output tokens over the life of the codebase.
We're still early in the AI coding adoption curve. As the technology matures, economic forces will drive AI models toward generating good, simpler, code because it will be cheaper overall.
The world right now is focused on getting AI to work in the first place, not on optimizing its abilities. We are going through a particularly messy phase of innovation. Once AI code generation becomes ubiquitous, I believe that economic incentives will start to take effect and AI models will be forced to generate good code to stay competitive amongst software developers and companies.
References

[1] Theo
[2] State of AI Coding report
[3] Analysis of vendor status pages
[4] Andrej Karpathy
[5] A Philosophy of Software Design, John Ousterhout
> I think camp 2 would rather see one beautiful thing than ten useful things.
Both beautiful and useful are subjective (imo). Steve Jobs adding calligraphy to computer fonts could be considered a thing of beauty, derived from his personal relation to calligraphy, but it is also a really useful thing.
It's my personal opinion that some of the most valuable innovations are both useful and beautiful (elegant).
Of course, there are rough hacks sometimes, but those are beautiful in their own way as well. Once again, both beauty and usefulness are subjective.
(If you measure Usefulness with the profit earned within a purely capitalistic lens, what happens is that you might do layoffs and you might degrade customer service to get to that measure, which ultimately reduces the usefulness. profit is a very lousy measure of usefulness in my opinion. We all need profit though but doing solely everything for profit also feels a bit greedy to me.)
Ah yes, if you aren't shitting code out the door as fast as possible, you're probably not shipping anything at all.
The difference is that they didn't have rebar. And so they built gravity stable structures. Heavy and costly as fuck.
A modern steel and concrete structure is much lighter and much cheaper to produce.
It does mean a modern structure doesn't last as long, but also the Roman stuff we see is what survived the test of time, not what crumbled.
Don't we end up just spending the same? Just now we're left with a crappy bridge.
If your goal is to break into the market with software that is dogshit from day 1, you're just going to be one of millions of people failing at their get-rich-quick scheme.
This obscures things in favour of the “quality/performance doesn’t matter argument”.
I am, for example, forced to use a variety of microslop and zoom products. They are unequivocally garbage. Given the option, I would not use them. However, my employer has saddled us with them for reasons, and we must now deal with it.
Look at Windows. It's objectively not been a good product for a long time. Its usage is almost entirely down to its moat.
As someone who also falls into camp one, and absolutely loves that we have thinking computers now, I can also recognize that we're angling towards a world of hurt over the next few years while a bunch of people in power have to learn hard lessons we'll all suffer for.
Such a world still has room for unlicensed developers too -- I'd certainly be among them.
And in almost all of those cases, they'd be wrong.
I've seen mismatch in each direction..
On the whole it is an entirely reasonable optimisation problem: what is the best lifespan of a single bridge over the desired total lifespan?
Roman concrete is special because it is much more self-healing than modern concrete, and thus more durable.
However, that comes at the cost of being much less strong, setting much slower, and requiring rare ingredients. Roman concrete also doesn't play nice with steel reinforcement.
Modern concrete is more uniform in mix, and thus it doesn't leave uncured portions.
Obviously, there's a way to do both poorly too. We can make expensive things that don't last. I think a large chunk of gripes about things that don't last are really about (1) not getting the upside of the tradeoff, cheaper (in both senses) more flexible solutions, and (2) just getting bad quality period.
The moment you remove one of these factors, the free market becomes dangerous for the people living in it.
Engineers (real ones, not software) face consequences when their work falls apart prematurely. Doubly so when it kills someone. They lose their job, their license, and they can never work in the field again.
That's why it's rare for buildings to collapse. But software collapsing is just another Monday. At best the software firm will get fined when they kill someone, but the ICs will never be held responsible.
Without a safety factor, that uncertainty means that, some of the time, some of your bridges will fall down.
When it comes to professional development, I've almost never worked on a codebase less than 10 years old, and it was always [either silently or overtly] understood that the software we are writing is a project that's going to effectively live forever. Or at least until the company is no longer recognizable from what it is today. It just seems wild and unbelievable to me, to go to work at a company and know that your code is going to be compiled, sent off to customers, and then nobody is ever going to touch it again. Where the product is so throwaway that you're going to work on it for about a year and then start another greenfield codebase. Yet there are companies that operate that way!