What’s really happening is that we are all forgetting how to think
Automation is the exact opposite of tying knowledge to people. It's extracting knowledge from people and transferring it to a machine that can continue to produce the goods.
Yes, AI can lead to problems and some of these problems will be related to gaps in knowledge that was thought to be obsolete when it really wasn't. But that's a totally different problem on a totally different scale from what happened with defense production after the end of the cold war.
Nobody is shutting down or reducing software production. On the contrary, we're going to be making a lot more of it.
Well then train them, instead of selecting 0.18% of applicants and calling it a day.
It's not some innate, immutable property - people can be taught even in adulthood.
Also it's not like they'll work for a year and switch jobs - not in the current market.
Probably we are going to be fine with the AI abstraction too. People will use it, get stuck on problems, dig deeper, learn, and improve, the same as we did with frameworks and their source code.
It’s an 85/15 rule. These big companies hire hundreds, possibly thousands, of developers, but most of them cannot code. Some of them struggle to write emails. About 15% of those people provide 85% of the value.
Here is where it all went wrong. The goal of software, the only goal, is automation. That means eliminating human labor. The goal of these big companies is hiring, which is mostly the opposite of eliminating labor. That conflict results in people who cannot do the jobs they are hired to perform and whose goals are to retain employment in preference to automating anything.
Worse still, you can’t talk about it when 85% of the people doing that work find this very subject completely hostile.
"It is difficult to get a man to understand something, when his salary depends on his not understanding it." - Upton Sinclair
I love these articles that all the coders read but none of the management.
If possible, be a mercenary and put a high number on your expertise, so we can solve this management blind spot faster.
If you can't, let your life/work's passion be "not starving to death", and try to change things on the politics side.
At the end of the day, Russia burnt through their entire Soviet stocks in roughly 2-2.5 years, while the US spent a very small proportion of theirs and Europe maybe about half. And now consumption on both sides is similar, with expenses on the Western side to feed that machine being almost invisibly small. Nothing bad happened.
But civilisations have always forgotten things and then had to re-engineer them. We only recently recreated Roman-equivalent concrete; knowledge required to create the Saturn V rockets had to be re-engineered; we can't recreate medieval stained glass exactly, or Viking Ulfberht Swords; we would struggle to create Betamax tape today.
Many of the examples I found (as expected) relate to military or commercially sensitive technology that did not get written down (for obvious reasons).
It also reminded me of when I read Thomas Thwaites' "The Toaster Project: Or a Heroic Attempt to Build a Simple Electric Appliance from Scratch", where to make a smelter from scratch he relied on a 450-year-old book ("De re metallica" by Georgius Agricola), as well as a friendly metallurgist.
We already lost the widespread ability to write assembler in an artisanal way. Now that we have AI, we will also be lazy about how we write individual bits of artisanal code. So what? Yes, it will cost more (in time and money) when we need to re-engineer, but how much would it cost to keep alive all the knowledge and skills we might possibly need in the future?
We had better make sure we write down and preserve the recorded data though :)
The problem is a management pattern: removing people and organizational slack because they don’t generate immediate profit, and then expecting the knowledge to still be there when it’s needed.
Short-term cost cutting leads to less junior hiring, and removes the slack that experienced engineers need in order to teach. As a result, tacit knowledge stops being transferred.
What remains is documentation and automation.
But documentation is not the same as field experience. Automation is not the same as judgment. Without people who have actually worked with the system, you end up with a loss of tacit knowledge—and eventually, declining productivity.
AI is following the same pattern.
What AI is being sold as right now is not really productivity. In many domains, productivity is already sufficient. What’s being sold is workforce reduction.
The West has seen this before, especially in the case of General Electric.
GE pursued aggressive short-term financial optimization, cutting costs, focusing on quarterly results, and maximizing shareholder returns. In the process, it hollowed out its own long-term capabilities. It effectively traded its future for short-term gains.
The same mindset is visible today.
The core problem is that decision-makers, often far removed from actual engineering work, believe that tacit knowledge can be replaced with documentation, tools, and processes. It cannot.
Tacit knowledge comes from direct experience with real systems over time. If you remove the people and the learning pipeline, that knowledge does not stay in the organization. It disappears.
My main point against using AI is that I do not want to depend on basically anything when I'm in front of the screen (obviously not including documentation, books, SO, and the like).
I closely see people who are 100% dependent on AI for literally everything, even the most trivial daily tasks, and I find that truly scary because it means that brain effort drops dramatically to a minimum level. Having your mental effort stolen is not a minor thing.
Giving that away, at least for me, means becoming a dependent zombie. Knowledge comes basically from manual trial and error, almost daily.
Technology being technology, it has shown us, if anything, that we can be pushed and manipulated in every single conceivable way. And in my opinion, depending on AI is the ultimate way for companies to penetrate and manipulate a very delicate ability of a human being: to think and wonder about things.
The irony is how difficult it is to read this obviously AI-generated article due to its unnatural prose and choppy flow full of LLM-isms. The ability to write is also a skill that atrophies.
Even when AI is understandably used due to language fluency, I’d prefer to read an AI translation over a generated article.
If you don’t care enough to write it, why should I care enough to read it?
AI code generators are trolls. They confidently produce plausible content which is partly wrong. Then humans try to find their errors.
This is not fun. It has no flow.
The distinction between junior, mid, senior, lead is a facade. It is a soft gradient that spans multiple areas, but is tainted and skewed by the technology du jour.
Technically you don't have to be an employed developer to become a senior developer. It boils down to your personal willingness to learn and invest time building.
What companies seek these days are people with experience of (dysfunctional) organizational structures and of working around the shortcomings of an organization's communication and funding patterns, nothing more.
Does that really make you senior or just politically versed?
The pattern shows up the most whenever failing software pokes holes in perception.
>In defense, the substitute was the peace dividend. In software, it’s AI.
Before it was AI, the cheaper alternative was remote contract dev teams in Eastern Europe, right?
They did not properly prepare and as a result lost 20% of their territory in days.
Days after that I was back in Austria and could not stop thinking about some of the people I spoke with being dead.
Since then I have also been in Dubai and Saudi Arabia as an entrepreneur and engineer. "What are you going to do when drones are used against your infrastructure?" If you followed the Russian war and the first Iranian strike, it was obvious that drones were going to be used against them. "Not going to happen", again.
They have lost tens of billions for lack of proper preparation. They could have been protected by spending just hundreds of millions of dollars over the years.
It is about humans, not AI.
The opening paragraph is ridiculous. The FIM-92 Stinger is obsolete. It was replaced by FGM-148 Javelin. DACH (Germany, Austria, Switzerland) didn't forget how to make things. They are still world class for manufacturing. (Northern Italy is also economically part of that manufacturing mega-hub.)
There are plenty of NLAWs (much cheaper than Javelin, and only slightly less capable) in EU/NATO stocks to satisfy Ukraine's needs against heavily armed Russian main battle tanks. For everything else, you can use one or two suicide drones to kill anything with a motor.
And now to give credit where credit is due:
Looking at his (assumed) LinkedIn profile: https://www.linkedin.com/in/denjkestetskov/
It looks like he was educated in Ukraine, so he is likely a Ukrainian national. If I were Ukrainian, then I too would be publishing rage bait like this in an attempt to pressure allies to provide more funding, weapons, and gear.
As a final suggestion, the writer can visually spice up his blog post with one of my all time favourite military photos from Wiki: https://commons.wikimedia.org/wiki/File%3AFIM-92_Stinger_USM...
If you REALLY need something long-forgotten, then you have to lazy-load it back into being at significant cost. That's the price of constant progress.
This kind of forgetting is normal. It's how things work when time and resources are finite. The only problem here is the belief that you can keep capacity to do something without actively exercising it, and thus the expectation that you can "just" resume doing things after a long break, without paying up a cold-start cost.
But you can't, and there's no reason to be surprised. I bet the Pentagon and the EU weren't. They didn't need those Stingers and shells for decades and didn't expect to need them soon - they knew they could get them if they really needed them, it was just going to be costly.
I don't get why people think this is unusual or surprising, or somehow outrageous and proves something about society or "mindsets of elites" - other than positive aspects like adaptability and resilience.
This is true at all scales. Your body and brain optimize aggressively, too. An individual saying "I need to warm up", or "I need to hit the gym a few times and then I'll be able to", or "yes, I can, but I haven't done it for years so I need an hour with a book/documentation..." - all that is exactly the same as the EU going "yes, we can make artillery shells... though we haven't in a while, so we need some time and some millions of EUR to get our supply chain sorted out first".
And the premise makes no sense anyway. The only risk of forgetting how to make shells is when other countries are making shells more efficiently. Non-western countries are not going to reject AI-coding, nor are they going to make software more efficiently by hand.
The junior hiring collapse compounds this. Senior engineers develop judgment partly by watching juniors make mistakes and correcting them. Remove that loop and you don't just lose future seniors — you quietly degrade the current ones.
The 0.18% recruiting conversion rate mentioned here tracks with what I see in compliance and security engineering too. "Can you tell when the AI is confidently wrong?" is now the most important interview question, and almost nobody can answer it well.
Another reason is that LLMs train on the existing code we already know - don't expect new programming languages or frameworks. This means that the software engineering skills that exist today will be relevant for a long time.
You mean the world?
DeepSeek was being glazed here; I'm sure Chinese programmers use it like CC.
LLMs are a magnificent tool if you use them correctly. They enable deep work like nothing before.
The problem is the education system, focused on passivity (obedience), memorization, and standardized testing. And worst of all, aiming for the lowest common denominator. So most people are mentally lazy and go for the easy win, almost cheating. You get school and interview cheating and vibecoders.
But it's not the only way to use LLMs.
Similarly, in Wikipedia you can spend hours reading banal pop-slop content or instead spend that time reading amazing articles about history, literature, arts, and science.
The history of technology is the replacement of manual processes with automated ones.
Consider a very basic process: checkout at a restaurant.
Writing the price of each item on a sheet of paper, manually adding them and writing the total was replaced with typing in the prices and eventually with just pushing the button for the item. Paper still exists for jotting down your order but within seconds of leaving the table it’s transitioned to computer.
This has enabled lots of desirable advances: speed, accuracy, new payment rails, and, increasingly, elimination of the server at checkout - you tap a credit card on a tabletop device.
Did we “forget” how to do checkout? No. We purposely changed it.
But if the internet connection goes down or the backend server powering the cash register app goes down, there is an atrophied and not-regularly exercised skill set (maybe not even trained, IDK) that has to be implemented on-the-fly and it’s slow and frustrating for everyone.
Businesses don’t exercise (or perhaps even train) this process because it’s just not needed enough to warrant the cost.
Military procurement of weapons systems is hardly the place to point to as a technological tradition. There are lots of cases where no one pays the money to keep a production process in place; the reasons are all related to shortsighted “cost savings” or failing to anticipate changing needs.
With coding today, we are seeing the same kind of shift in priorities as my restaurant example. Having humans write code in the 2020 (pre-GPT) tradition was extremely inefficient in terms of time-from-idea-to-implementation.
We’ve found a new way to do the mundane part of that task (the mechanics of translating spec to implementation).
We are figuring out how to do that while preserving quality (and a lot of it is learning how to specify appropriately).
Will we “forget” how to “build” code?
No, but the skills to generate source code by hand will atrophy just as the skills to draw blueprints by hand atrophied with the advent of CAD.
Will we find examples where someone prematurely optimized away knowledge of a skill or process, incorrectly thinking it was no longer needed? Of course.
But the productivity gains we get will be so great on average that no one will go back to doing things the old way.
There will be old-timers and hobbyists who will preserve some of that knowledge; for most it will just be a curiosity.
It doesn’t seem much like defense industry problems.
But now that the time has come for us to automate and change, we’re all up in arms and using ridiculous arguments like this post to fight it.
The hypocrisy is mind blowing
I see a talent pipeline collapse in the next 5 years. "Software engineering is over, coding is a solved problem", as chanted by semi-literate media and the AI grifters' marketing departments, will further scare human capital away from software engineering, which will then easily command a 3x rise in salaries due to the resource shortage.
I'm going to steal that one and add it to Stross': "Efficiency is the reciprocal of resilience."
It's minor, but this is just wrong. If you're going to hire 4 candidates, there could be 2,253 perfectly qualified candidates even if only 0.18% get hired. The conversion rate is meaningless; it just tells us how many jobs were on offer. There is no way that the skills this fellow wanted were so rare and difficult that only 1 in 500 candidates could possibly handle the job. Humans even at the 1-in-20 mark are pretty competent if you're willing to train them, and legitimate geniuses crop up at around 1 in 200.
In ideal world (where we don't live):
* Corporation - optimizes for mid-to-short term profits (remove slack, run everything thin)
* Government - optimizes for long term profits (introduce regulations to keep the slack time, keep and attract the talent so state gets better)
* Individual - optimizes for their life time (career, family and tries to leverage market conditions to learn skills and get more opportunities from existing pool)
In the west, government is optimizing for "loads and loads of moooney", because of lobby groups and MBAs controlling the corporations which are pushing these ideas through lobbies
The Problem is wider than management, it is understanding the extended ramifications of action, understanding the larger systems one is a member and then identifying with them, protecting them, because you and all your peers understand their extended foundational need.
That type of critical analysis, and the tacit knowledge of secondary considerations, is developed through effective communications training, which is an entire perspective, a way of seeing the world. This can be gained by reading a wide diversity of literature of Nobel quality; the reason being that such literature is first-person accounts of institutions crushing individuals, and of individuals finding the power within themselves to defeat the institutions. That personal transformation is practically a Nobel trope, but it teaches the reader how to have such insight and perseverance. Read a half dozen or more such novels, and you are materially a different person. A better, deeper-considering person with a longer perspective horizon. We need this civilization-wide.
Helps me keep sane tbh. And keeps the edge sharp.
Basically the same Taylorism-derived industrial management has imposed itself as the "default dogma" in private and public administration.
With LLMs this is no longer true - the thing can vibe a great deal before anyone notices that they have 100,000 lines of code doing what a focused, human-reviewed and tested 10,000 lines can do. And as this goes on, it becomes increasingly more difficult for anyone to actually dig into and fix things in the 100,000 without the help of LLMs (thus adding even more slop onto the pile).
Ukraine has been preparing since 2014. Without preparation there would be a Russian talking head right now in Kyiv.
Take millions playing the lottery. To each of them, I can confidently say "you won't win, not gonna happen". For almost all of them I'll be right. There will be one who wins, where I was wrong, and they will say "see, told you so". That doesn't mean my prediction was wrong. It means you have a reporting bias.
They did, though. While nobody actually believed Putin would be dumb enough, the Ukrainian army was still, just in case, extremely busy preparing defences, organising stockpiles, and preparing defensive tactics.
Why would we listen to anything related to right or wrong from you then if you don't care?
COBOL is a bad example, but higher-level languages vs. assembly is not. If you write a lot of C you really don't need to know assembly.... until you stumble across a weird gcc bug and have no clue where to look. If you write a lot of C# you don't really need to know anything about C... until your app is unusably slow because you were fuzzy on the whole stack / heap concept. Likewise with high-level SSGs and design frameworks when you don't know HTML/CSS fundamentals.
As the author says maybe AI is different. But with manufacturing we were absolutely confusing "comfortable development" with "progress." In Ukraine the bill came due, and the EU was not actually able to manufacture weapons on schedule. So people really should have read to the end of "building a C compiler with a team of Claudes":
The resulting compiler has nearly reached the limits of Opus’s abilities. I tried (hard!) to fix several of the above limitations but wasn’t fully successful. New features and bugfixes frequently broke existing functionality.
At least with Opus 4.6, a human cannot give up "the old ways" and embrace agentic development. The bill comes due. https://www.anthropic.com/engineering/building-c-compiler
Just as shifts in power and the rise and fall of nations are normal.
If they are smart, they will. And I think they are smart.
They may keep taking the longer and harder route of a mixture of AI and hand coding.
It feels a lot like someone has a cursory understanding of American politics, and thinks the US is somehow representative. It's not, it is an outlier by every statistical measure. If you want to understand the world, you need to start by forgetting everything you know about the US.
I thought I'd go back for a Masters/PhD but then Trump mercurially defunded lots of STEM grad programs. Ngl, I found myself stuck. Zero job openings, zero PhD program openings. It's all so frustrating.
I think engineering skills will still remain relevant, due to taste and proper judgement. A model trained on everything and the kitchen sink probably does not have the right bias for the specific problems in my project. Accepting too much AI-generated code without steering the ship will result in some drift of taste and ultimately produce a mediocre project, like one done by people without good domain knowledge and without good taste. It might even work as a business short term, but it lacks the long-term excellence that sets projects with good judgement apart from the common rabble.
Even "First/Third world" has been fraying at the edges for decades since it was originally about political alignment.
I also remember that EEs for a while stopped using the term "jellybean parts". Turns out that most jellybeans are produced in Asia.
Even if you are the absolute unicorn who gets paid to "code much harder problems" and "learning", the rest of the industry exists to deliver actual products and services.
So unless you nurture some type of https://xkcd.com/208/ fantasy, this is not just about you. The industry as a whole needs to find a way to work with LLMs without automating programming away entirely, and the industry as a whole needs to find a way to ensure that newcomers are able to be productive even if code-generation tools are taken away from them.
I'm not saying you're personally doing anything wrong, but there's a parallel here, when smart and curious people read articles about history and literature and art and science, rather than engaging directly with the real thing.
Or, the next level down, where creating amazing work in all of those domains depends on enough "slack" in the system for people to pursue deep work that will not be immediately profitable.
Do you see where I'm going with that? We (and I'm very much including myself: here I am on HN, instead of reading something more substantial) skim the (Wikipedia) surface, instead of diving truly deep. AIs (right now) are the ultimate surface-skimmers, and our fascination with and growing reliance on them reflects something in our current surface-skimming cultural mindset.
I agree; as with everything in 2026, the reality lands somewhere in the middle of the online discourse. But pretending this is in practice anything like the checkout example is wrong.
> Businesses don’t exercise (or perhaps even train) this process because it’s just not needed enough to warrant the cost.
Until a crisis hits. Covid and supply chain failures. The Iran war and the Strait of Hormuz. A prolonged war in Europe with no production pipeline available. Banks collapsing after unsustainable overleveraging in supposedly "safe" mortgages.
For every optimization and cost-saving measure that is deployed, there should be a backup plan in place. MBA types and "technologists" keep missing this. What is the backup plan for the case where most economic activity is built on software produced by businesses that overleveraged on LLMs for code generation?
CAD still requires you know what to do, and without CAD you can still draw blueprints by hand because you know what the result should be. Checkout is basic arithmetic you can do on a paper or even your personal phone. In both cases it is clear what the process is and what the output should be, and it doesn’t replace knowledge and training and certification.
With coding, none of that is true. By and large, there is a trend of people who don’t know what they’re doing shitting out software, or people who should know better not verifying the very flawed output they get. That is already having negative consequences in people’s lives.
The other thing that really resonated was something I read before, along the lines of: we think that once humanity learns something, that knowledge stays and we build on it. But it’s not true; knowledge is lost all the time. We need to actively work to keep knowledge alive.
That’s why libraries and the internet archive are so important. Wikipedia, too
I am not so certain:
For example, I think that a lot of my knowledge about the system that I work on could be documented, and based on this documentation someone new could take over the system.
The problem rather is: the volume of documentation that I would have to write would be insane; I'd consider tens of thousands of dense DIN A4 pages to be realistic - and this is a rather small system.
So, a new person who could take over this system would have to cram and understand basically all the details of this documentation insanely well.
This insane effort (write the documentation; new workers on the project then have to cram and understand every detail of this incredibly bulky documentation) is something that no employer wants to spend money on: this is in my experience the real reason why it isn't done.
You are spot on w.r.t every assertion you've made. When bean-counters took over the ecosystem they optimised immediate profitability over everything else. Which in turn means, in their mind, every part of the system needs to be firing at 100% all the time. There's no room for experimentation, repair, or anything else.
I've commented about lack of slack several times here on HN, because when I notice a broken system nowadays, 90% of the time it is due to a lack of slack in the system to absorb short-term shocks.
This is a blind spot for many. People working on entrepreneurial projects need to build a lot. They start with nothing. They need (for example) features. There's a lot to do.
Most firms are not that. Visa, Salesforce, LinkedIn or whatnot. They have a product. They have features. They have been at it for a while. They also have resources. They are very often in a position of finding nails for a "write more software" hammer.
It's unintuitive because they all have big wishlists and to-do lists and A/B testing systems for pouring software into, but...
If there were known "make more software, make more money" opportunities available, they would have already done them.
Actual growth and new demand need to come from arenas outside of this. E.g. companies that suck at software (either making or acquiring it) might be able to get the job done.
The Problem, bringing this back to the article, is fungibility. A lot of this "human capital" stuff cannot be easily repackaged. It's a "living" thing. Talent and skills pipelines can be cut off, and vanish.
A danger in AI coding (and other fields) is that it leverages preexisting human capital and doesn't generate any for later.
The thought crossed my mind the other day — if I’m asking the AI a question, that’s replacing a human interaction I would have had with a coworker.
It’s not just in coding, it’s everything. With ChatGPT always available in your pocket, what social interactions is it replacing?
The thing that gets me is, we are meant to fundamentally be social creatures, yet we have come to streamline away socialisation any chance we get.
I’m guilty of this too — I much prefer Doordash to having to call up the restaurant like in the old days, for example.
It’s called The Beer Game[1].
One of the funny things about it is even people that have played and discussed it before _still_ make the same fundamental mistakes next time.
Short-termism is the death of companies.
Also, when companies grow big enough, "business" becomes the main business of the company. By that I mean everything unrelated to the actual original domain, such as playing in the financial markets, doing stock buybacks, lobbying, cheating, etc. When your CEO is an MBA and your real market is Wall Street, any actual product R&D and support is an annoying cost that just cuts into the profits and thus into the exec compensation.
This is only fair, because they themselves are firing at 100% all the time IYKWIM ;)
Importantly, you're removing a signal: if I'm not asked things anymore, I don't know which aspects of our domain are causing the most confusion and misunderstandings and would as such benefit most from having their boundaries simplified.
It wasn’t one bottleneck. It was all of them.
Not the nuclear material. The pattern.
Money was never the constraint. Knowledge was.
...
There's the kind that, when given a problem, will jump in, learn what they need to learn to solve the parts they don't fully understand yet, deliver meaningful iterative results, talk to people as needed, keep you posted on their progress, loop in other team members and offer/request help to/from them, take initiative on the obvious missing parts that would benefit the project as a whole, etc.
And then there's the rest.
Within the first few years of someone's career, you can quickly tell which kind they are. It's almost impossible to turn someone from the latter group into the former.
Yes, everything else is a façade. You can be a "senior" developer with 30 years of experience and still be in the latter group. And you can be fresh out of college and be in the former.
Now some people are extremely good at other skills (politics, interpersonal communication, bullshit, whatever you want to call it) and will be able to seem to be in the first group to the people who matter (managers, execs, etc) while actually being in the second group. But then we're not talking about actual software-making skills anymore.
You can also totally be in the first group and be underpaid, never promoted, etc. There's little correlation with actual career success.
This is depressing and seems right. And yet this is something I desperately want to be ignorant of. I don’t want to peel apart my brain for anyone. Working within these kinds of problems is pure pain.
Also over here, east of 15°E, we were fired all the same.
I believe the plan is to quite simply "do less overall unless it's about AI", but everyone was waiting for others to start layoffs first.
I spent six months working part time and the decision makers made it clear that this is preferable for them long term. Beats getting fired, but I couldn't sustain this lifestyle - I'm frugal but not that frugal.
I'm not sure why you'd say nobody thought they would invade. To me it was clear in December the year before, when the Russian navy began sailing the long way around Europe, getting in the way of Irish fishermen, and it was confirmed days before the invasion when they had stockpiled medical personnel and blood on the front lines.
Anyway, when it comes to "this is normal" I think we should take care to distinguish between interpretations of:
1. "This specific case should not have taken certain people by surprise."
2. "This is a manifestation of a broader phenomenon."
3. "This is natural and therefore cannot or should not be solved." [Naturalistic fallacy.]
But they will still rely on assembly, C, Rust, Linux, HTML, TCP/IP... Doesn't matter how up to date they are, they rely on existing code they have been trained on, they can't just create new languages without the training data.
Few care if you have a lifetime warranty and excellent service or replacement parts if the majority will upgrade in a few years! Mature technologies increasingly become cheaply available as services, eg. laundry, food, transportation. That further reduces demand on production, as many can get by with the bare minimum and don't need the highest quality, longest lasting appliances. Software is even more ephemeral and specialized.
Developing education and training pipelines is wasting money if the skills you need are constantly changing! There is plenty of "slack" in the workforce so this works just fine in most cases - somebody will learn what they need to get paid. There are very few fields where qualified worker shortages are a real problem.
R&D can be outsourced or bought and subsidized by the government in universities, so why do everything yourself? Open source software has even further muddied the waters. Applications have only a limited lifetime before being replicated and becoming free products (this has only been intensified by the introduction of AI), so companies develop services instead.
Technology and knowledge deepening and rapidly becoming more specialized makes the monolithic corporation much less practical, so companies also need to specialize in order to effectively compete. Going too far in the name of efficiency can destroy core competencies, but moving away from the old model was necessary and rational.
The beancounters have cut all the corners on physical products that they could find. Now even design and manufacturing is outsourced to the lowest bidder, a bunch of monkeys paid peanuts to do a job they're woefully unqualified for.
And the end result is just a market for lemons. Nobody trusts products to be good anymore, so they just buy the cheapest garbage.
Which, inevitably, is the stuff sold directly by Chinese manufacturers. And so the beancounters are hoisted by their own petard.
We've seen it happen to small electronics and general goods.
We're seeing it happen right now to cars. Manufacturers are clinging to combustion engines and cutting corners. Why spend twice the money on a western brand whose quality is rapidly declining to meet BYD models at half the price?
---
And we're seeing it happen to software. It was already kind of happening before AI; So much of software was enshittifying rapidly. But AI is just taking a sledgehammer to quality. (Setting aside whether this is an AI problem or a "beancounters push everyone into vibecoding" problem)
E.g. Desktop Linux has always been kind of a joke. It hasn't gotten better, the problems are all still there. Windows is just going down in flames. People are jumping ship now.
SaaS is quickly going that way as well. If it's all garbage, why pay for it. Either stop using it or just slop something together yourself.
---
And in the background of this something ominous: Companies can't just pivot back to higher quality after they've destroyed all their inhouse knowledge. So much manufacturing knowledge is just gone, starting a new manufacturing firm in the west is a staffing nightmare. Same story with cars, China has the EV knowledge. And software's going the same way. These beancounters are all chomping at the bit to fire all their devs and replace them with teenagers in the developing world spitting out prompts. They can't move back upmarket after that's done.
Even when the knowledge still lives, when the people with the skills required have simply moved to other industries and jobs, who's going to come back? Why leave your established job for the former field, when all it takes is the management or executive in charge being replaced by another dipshit bean-counter for everyone to be laid off again?
Sometimes they're available, but not palatable, when the opportunity could threaten their existing investments or patterns. That might mean "self-cannibalism", or changing the ecology so that the main product niche is threatened.
Then those opportunities are ignored, or actively worked-against via lobbying, embrace-extend-extinguish, etc.
Increasingly we have people join who tell us they've been struggling with a problem "for days". Per routine, we ask for their configuration, and it turns out they've been asking ChatGPT, Claude or some other LLM for assistance and their configuration is a total mess.
Something about this feels really broken, when a channel full of domain experts are willing to lend a hand (within reason) for free. But instead, people increasingly turn to the machines which are well-known to hallucinate. They just don't think it will hallucinate for them.
In fact I see this pattern a lot. People use LLMs for stuff within their domain of expertise, or just ask them questions about washing cars, and they laugh at how incompetent and illogical they are. Then, hours later, they will happily query ChatGPT for mortgage advice, or whatever. If they don't have the knowledge to verify it themselves then they seem more willing to believe it is accurate, where in fact they should be even more careful.
At the end of the day chatgpt won't be there to hold our hands in the hospital, have a laugh over failing to pick up a date, get invited to a bbq, groan over the state of the code in utils.c, or recommend us for our next job/promotion. They say software is social for a different reason than most of these examples.
It's good to be efficient, whatever that means, but there are no metrics on the gains that get made by talking to people. In a lot of ways those gains are what life is about.
I find myself thinking more and my thinking is of higher quality. Now I have 30 years of fucked up projects experience, so I know all the rakes I could step into.
It doesn't seem to me a thing that I could suddenly forget?
Without AI I will feel frustrated that I'm now much slower, but ultimately it's just describing logic. So I'm a bit skeptical of the claim.
My brain effort is also on other things now, such as how to orchestrate guardrails, how to build pipelines to enable multiple agents work on the same thing at the same time, how to understand their weaknesses and strengths, how to automate all of that. So there's definitely a lot of mental effort going into those things.
Note: My comment is not specific to this comment. I just wanted to express myself at somewhere and this is where I think it may be suitable.
That's incredibly unlikely. Do you need to be an employed surgeon to become a senior (or whatever they call it) surgeon??
I very much doubt you can be senior without having actually spent years doing it professionally. The experience is everything; no book will give you the sort of understanding you need. That's unfortunately human nature: we are not capable of learning and internalizing things simply from reading or watching others do them; we absolutely need to do them ourselves to truly learn. Didactic books always have exercises for this reason.
You can learn facts and techniques from books, obviously. But it's not the case that just because you've read a book about Michelin restaurants you can now be a Michelin chef.
They really, really do not want to spend money. Especially not on Americans and their health insurance.
It's really strange how we're just letting them get away with this. They're on a fast trajectory toward putting Americans completely out of work and without aid, even though they're American companies first and foremost.
Yeah. Companies didn't want to train new employees any more as that costs money (both for paying the trainees and the teachers) so they shifted to requiring academic degrees. That in turn shifted the cost to students (via student loans) and governments.
People call it a red flag for scams if you are supposed to pay your employer for training or whatever as a condition of getting employed... but the degree mill system is conveniently ignored.
My current pet peeve is using periods instead of commas, as in:
> My people lived the other side of this equation. Not the factory floor. The receiving end.
Ostensibly this is supposed to add gravitas, but it's very often done in places where that gravitas isn't needed, and it comes off as if I'm reading the script for an action movie trailer.
The text has few of the obvious AI tells. The only thing that, to me, looks characteristic of LLM-generated text is the short and terse sentence structure, but this has been a "prestigious" way to write in English since Hemingway.
Worse, it might not generate a return. If you have enough profits, you just buy anyone who successfully produced something innovative. Let them take the risks. As Cisco used to say, "Silicon Valley is our R&D lab."
It is a very difficult mindset to argue against.
The only purpose of the written word is to be read.
What you read here are bots and those invested in AI and an occasional retired person who uses AI as a crutch.
That’s the problem.
I would not be surprised if many open source projects outright stop taking PRs. I have had the same feeling several times: if I'm communicating with an LLM through the GitHub PR interface, I'd rather just talk to an LLM directly myself.
But ending PRs is going to be painful for acquiring new contributors and training more junior people. Hopefully the tooling will evolve. E.g. I'd love to have a system where someone has to open an issue with a plan first, and by approving it you could give them a 'ticket' to open a single PR for that issue. Though I would be surprised if GitHub and others created features that are essentially there to rein in Copilot etc.
Most people don't spend nearly enough time going through a code review. They certainly don't think as hard as needed to question the implementation or come up with all the edge cases. It's active vs passive thinking.
I, for one, have found numerous issues in other people's code that makes me wonder, "would they have ever made such a mistake if they hand coded this?"
btw, a side effect is that nobody really understands the codebase. People just leave it to AI to explain what code does. Which is of course helpful for onboarding but concerning for complex issues or long term maintenance.
There’s plenty of people in this world who are expert programmers without following any traditional path.
“Oh yeah, like who”, you say.
Con Kolivas, anaesthetist, worked on kernel schedulers, including the Staircase Deadline (RSDL) scheduler, which was a precursor to the Completely Fair Scheduler in Linux, as well as the Brain Fuck Scheduler and the ck patchset.
Choosing to pay less is what almost all people do, and it is consistent with almost all of human history.
> They're on a fast trajectory toward putting Americans completely out of work and without aid, even though they're American companies first and foremost.
When push comes to shove, i.e. paying lower prices to consume more goods and services or paying higher prices to ensure your countrymen can buy more goods and services, almost everyone will choose to pay lower prices. See political unpopularity of sufficient tariffs to stop imports.
“American” is a nebulous term, and Americans had been choosing lower prices for many decades before the current crop of employees at the global big tech companies chose lower prices. It is no different than when someone picks up lower-priced workers waiting outside Home Depot, who are there because they do not have legal work authorization in the US.
What does make a difference is the company they work for. Large hourly "body shops" give you coders whose quality tends to be lower, regardless of whether we are talking about an Indian firm or an American firm. Direct hires of independent individuals tend to be higher. But there is always individual variation.
You see people from India more, sure. There are more of them. Over a billion of them, to be precise. Anyone who dismisses a billion people as "always the same" is not being clever, they are being racist. And you know that, otherwise you wouldn't have pre-empted this response with "everyone who is ready to accept it."
Say that there are communication gaps to overcome. Say there are cultural differences. Say that those cultural differences change the assumed business expectations and the mechanisms by which people express their thoughts and opinions. Those things are all true. My recommendation to anyone who has an urge to dismiss an entire population is to instead get to know them: Step up and learn how your teammates think and work. It will make for a better team, better communication, and better results.
Quite paradoxical: when it's a person's native language, we can spot it a mile away, but there's no shortage of engineers who claim how good the code output is.
Whatever the reason for the default tone of AI in English, it's still there when generating code. It makes me think that the senior engineers who claim it produces awesome output just don't understand the specific programming language the way someone who thinks in it almost natively does.
No lender would have been stupid enough to give 18 to 22 year olds $200k for bullshit degrees and sports facilities.
The onus would have remained on employers and government to pay for education, rather than a certification, because they would have been the ones paying.
This article is clearly LLM-generated, even the title. A key indicator is that it almost makes sense: we forgot how to manufacture because that got sent to a different nation. The coding thing isn’t getting sent anywhere, so humanity is forgetting how to code. The distinction undermines a lot of the emotional baggage about offshoring that the article wants you to bring along.
Hemingway writes simple sentences with a kind of detachment to make the emotional flow of his stories as transparent as possible.
LLM slop reads more like slide bullet points extrapolated to prose-length text
The most obvious patterns here are: antithesis constructions, word choices and distribution, attempts at profundity in every paragraph that are instead runs of text that don't say anything, and even the perfect use of compound hyphenation. I think, and can appreciate, that there is definitely an attempt at personalization and guidance to make it less LLM-y and not just a default prompt, but it's still kind of obvious. You could use a detector tool too, of course.
The point of the beer game is that buffering in the supply chain makes the bullwhip effect worse.
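For anyone who hasn't played it, a toy simulation makes the dynamic easy to see. Below is only a sketch with made-up numbers - four stages, instant replenishment, and a naive "cover demand plus refill the buffer back to a target" ordering rule, not the actual game's rules - but it shows how each stage defending its own buffer amplifies a small change in end-customer demand:

    // Toy bullwhip sketch (hypothetical parameters, not the real game rules):
    // four stages - retailer, wholesaler, distributor, factory - each ships what
    // it can, then orders enough to cover current demand plus the gap back to a
    // target inventory. Replenishment arrives instantly, a big simplification.
    function simulate(demand: number[], stages = 4, target = 12): number[][] {
      // start each stage at its steady-state inventory for the initial demand,
      // so the demand change itself is the only disturbance in the run
      const inventory = Array(stages).fill(target + demand[0]);
      const orders: number[][] = [];
      for (const customerDemand of demand) {
        let incoming = customerDemand;        // end-customer demand hits the retailer
        const placedThisWeek: number[] = [];
        for (let s = 0; s < stages; s++) {
          inventory[s] -= Math.min(inventory[s], incoming);              // ship what we have
          const order = Math.max(0, incoming + (target - inventory[s])); // refill the buffer
          placedThisWeek.push(order);
          inventory[s] += order;
          incoming = order;                   // this stage's order is the next stage's demand
        }
        orders.push(placedThisWeek);
      }
      return orders;
    }

    // end-customer demand steps from 4 to 8 cases per week and stays there
    const weeklyOrders = simulate([4, 4, 4, 8, 8, 8, 8, 8]);
    console.log(weeklyOrders.map(w => w.join("\t")).join("\n"));

In that run the retailer's orders bump briefly to 12 and settle at 8, while the factory's orders spike to 44 and then sit at 0 for the rest of the run, even though real end demand only doubled once.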
Vesting schedules, conditional grants, contractual equity ownership requirements
What amazing breakthroughs were achieved thanks to brain juice freed by AI usage? What great works of art were created?
I can do that too. Most programmers can.
That's because it requires less skill! Critiquing something is always easier than doing it.
I can literally keep an LLM fixing things forever by just saying things like "this is not scalable", or "this is not maintainable", or "this is not flexible", or "this is not robust", ... etc., ad nauseam.
That doesn't take skill at the level to actually write the software. For the market which is hoping to switch to mostly LLM coding, the prize they are eyeing is skill devaluation and not just, as many think, productivity gains.
They have no reason to double output, but they'd sure love to first halve the people employed, and then halve the salaries of those people (supply/demand + a glut of programmers in the market), and then halve salaries again because almost no skill necessary...
I find the real way to review other people's code is to program with it and then I start seeing where the problems are much more clearly. I would do a review and spot nothing important then start working on my own follow-on change and immediately run into issues.
That is, and has always been, true. Currently, however, the narrative that is sold (and unfortunately accepted by so many of the senior developers who post here) is that the experience of telling someone else to do something is just as valuable.
Bell Labs' greatest work came out when AT&T was a monopoly. Once they were broken up (1984?) they started feeling the pain.
When the Lucent spinoff took place, the new entities had no Monopoly money to fund unconstrained research while management's behaviour never changed.
I don't know how BL fared under Alcatel and now Nokia, but haven't heard of anything interesting for years.
Because some problems that many companies in very specialized industries work on are so special that outside of this industry, nearly all people won't even have heard about them.
Additionally, many problems companies have where research would make sense are not the kind of problems that are a good fit for universities.
Desktop Linux has gotten better, though much of the improvement happened decades ago. I believe the first person to prematurely declare "the year of Linux on the desktop" was Dirk Hohndel in 1999: https://www.linux.com/news/23-years-terrible-linux-predictio...
And speaking as someone who was running desktop Linux in 1999, I remember just how bad it was. Xfce, XFree86 config files, and endless messing around with everything. The most impressive Linux video game of 2000 was Tux Racer.
But over the next 10 years, Gnome and KDE matured, X learned how to auto-detect most hardware, and more-and-more installs started working out of the box.
By the mid-2010s, I could go to Dell's Ubuntu Linux page and buy a Linux laptop that Just Worked, and that came with next day on-site support. I went through a couple of those machines, and they were nearly hassle free over their entire operational life. (I think one needed an afternoon of work after an Ubuntu LTS upgrade.)
The big recent improvement has been largely thanks to Valve, and especially the Steam Deck. Valve has been pushing Proton, and they're encouraging Steam Deck support. So the big change in recent years is that more and more new game releases Just Work on Linux.
Is it perfect? No. Desktop Linux is still kind of shit. For examples, Chrome sometimes loses the ability to use hardware acceleration for WebGPU-style features. But I also have a Mac sitting on my desk, and that Mac also has plenty of weird interactions with Chrome, ones where audio or video just stops working. The Mac is slightly less shit, but not magically so.
And yet I run it every day, and it's by FAR the most enjoyable platform and tooling to use (for me).
Whether the reason is strategic (like your example), internal politics, or insufficient knowledge... the point is that there is a local equilibrium, and most mature firms are at this equilibrium.
More resources via AI, at first order, go after that diminishing-returns part of the curve... which is a cliff, especially for the highly resourced firms topping the S&P 500.
A lot of AI optimists' "mental models" of the economy do not account for this stuff at all.
"Save time/money" outcomes are not similar at all to "make more stuff" outcomes. Firing employees does freeze up labour... but reutilizing this labour is non-trivial... as this article demonstrates quite well.
In those filthy low-margin industries that HN loves to regulate across the oceans, out of sight and out of mind, capital investments have service lives measured in decades.
As in: every little thing that used to be too much effort before, I can now just easily get the info and the data for with a prompt. The data analysis of something, which otherwise might have taken hours to figure out - I can just have AI write scripts for everything, which lets me see more data about everything that previously was out of reach. Now you will probably ask, of course, "how do I know the data is accurate?" - I can still cross-reference things, and it is still far faster, because even if I had spent hours before trying to access that data, there wouldn't have been similar guarantees that it was accurate.
I am thinking so much more about the things now that I couldn't have possibly time to think about before because they were so far out of reach, or even unimaginable to do in my lifetime. Now I'm thinking about automating everything, having perfect visualizations, data about everything, being able to study/learn everything quickly etc.
But if I didn't need those things, and there was a simple pseudolang syntax which acted exactly the same in all versions, didn't have any breaking changes, I would argue I'd be much better at it now.
The internet, search, etc. are needed to understand how to set up libs/frameworks/APIs, but logic itself isn't something that I could possibly forget. AI will help get those setups done quicker without me having to search, but arguably it's all useless information that will get out of date and that I really don't even need to know. I don't need to know off the top of my head what the perfect modern tsconfig setup should look like, or what the best monorepo framework is and how to set it up so it would scalably support all different coding languages for different purposes.
I think it becomes a chore when there are too many trivial mistakes, and you feel like your time would have been better spent writing it yourself. As models and agent frameworks improve I see this happening less and less.
Find some pre 2020 that are, and you'd have a point.
That's not to say the country wasn't prepared, though. If the GP did talk to people on the ground days before it started, saying it won't happen would match the public propaganda at the time coming out of the Ukrainian government and their allies. They knew it was coming and seemed to decide it was better to feint like they weren't ready and avoid public panic before it started.
I don't think so: the problem is that there exist lots of parts in the system that are quite complicated but which one very rarely has to touch - except in the rare (but happening) case that something deep in such a part goes wrong or a requirement for this part pops up.
If you "learned by doing" instead of reading, you are suddenly confronted with a very subtle and complicated subsystem.
In other words: there mostly exist two kinds of tasks:
- easy, regular adjustments
- deep changes that require a really good understanding of the system
A lot of the passersby nowadays feel like trolls. They come in copy-pasting ChatGPT responses, spamming that they need help, instead of chit-chatting and asking questions. We fix their problems, and they don't trust us or understand at all. Or worse, we tell them their situation is unreasonably bad and they should start over, and they scream at us about how some unimaginably bad code passes tests and compiles just fine and how we are dumb.
They tell us we don't need to exist anymore, in one way or another. They try to show off terrible code; we try to offer real suggestions to improve it; they don't care. Then they leave the community once their vibe/agentic coding moves on from that part of their code base. Complete waste of time: they learned nothing, contributed nothing, no fun was had, no ah-has, just grimy interactions.
No, it was always the other way around. Mediocre programmers always wanted to rewrite everything because reading and understanding an existing codebase was always harder than writing some greenfield thing with a “modern language” or “modern libraries” or “modern idioms.” So they’d go and do that and end up with 100x the bugs.
You could forget maybe how a certain lib or framework worked or things like that, or more so how you wouldn't have been up to date with all the new ones, but ultimately code can be represented as just functions with input and output, and that's all there is to it.
As in how could I possibly forget what loops, conditionals or functions are?
I haven't written code myself for 1+ year (because AI does it), but I feel like I have forgotten absolutely nothing; in fact I feel like I have learned more about coding, because I see what patterns AI uses vs. what I or other people did, and I am able to witness different patterns either work out or not work out much faster, right in front of my eyes.
This is a whole different discussion, but I just see it as part of the job that I'm getting paid for, I don't need to enjoy it to do it.
Functional testing is a must now that writing tests is also automated away by LLMs as you can get a better understanding if it does what it says on the box, but there will still be a lot of hidden gotchas if you're not even looking at the code.
Plenty of LLM-written code runs excellent until it doesn't, though we see this with human written code too, so it's more about investing more time in the hopes of spotting problems before they become problems.
Even in the Before Times, it was much cognitively cheaper to write code than it is to read someone else's code closely, or manage lots of independent code across a team, or to make a serious change to existing code. It's so much easier to just let everyone slap some slop on the pile and check off their user stories. I think it will take years to figure out exactly what the impact of LLMS on software is. But my hunch is that it'll do a lot of damage for incremental benefit.
With the sole exception of "LLMs are good at identifying C footguns," I have yet to see AI solve any real problems I've personally identified with the long-term development and maintenance of software. I only see them making things far worse in exchange for convenience. And I am not even slightly reassured by how often I've seen a GitHub project advertise thousands of test cases, then I read a sample of those test cases and 98% of them are either redundant or useless. Or the studies which suggest software engineers consistently overestimate the productivity benefits of AI, and psychologically are increasingly unable to handle manual programming. Or the chardet maintainer seemingly vibe-benchmarking his vibe-coded 7.0 rewrite when it was in reality a lot slower than the 6.0, and he's still digging through regression bugs. It feels like dozens of alarms are going off.
In any case, AI is great for traversing a codebase and producing at least a draft of such documentation.
You are comparing writing something with rewriting something. You don't know what the difference is?
There is a very valid reason why the creator of Erlang said, back in the day, something along the lines of "you need to iteratively remake your software, improving it each time."
As your knowledge about a topic grows, your initial mistaken implementation may become more and more obvious, and it may even mean a full rewrite.
But yes, a person who instantly says "rewrite" before they understand the software is likely very inexperienced and has only worked on greenfield projects with few contributors (likely only themselves).
Well, there you go. Letting AI write the tests is a mistake IMO. When I'm working with other people I write tests too and when I see their tests I know what they're missing out because I know the system and the existing tests. Sometimes I see the problem in their tests when I'm working on some of my own. If you absent yourself from that process then ....
// spec: add(a, b) adds two numbers
// the only test: add(1, 2) === 3
// a "passing" implementation:
function add(a, b) { return 3; }
So when you have enough tests (and we do), it will deliver quality. Having AI write the tests is mostly useless. But me writing the code is not necessarily better and certainly not faster for most cases our clients bring us.
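To make that concrete (a hypothetical snippet, not anyone's real suite): the hard-coded implementation above survives its single test, and one extra assertion written by someone who knows the system is all it takes to expose it.
function add(a, b) { return 3; } // the shortcut from above
console.assert(add(1, 2) === 3); // green: passes the only test
console.assert(add(2, 2) === 4); // red: one more human-chosen case catches the shortcut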
Per Wikipedia: IBM employees have garnered six Nobel Prizes, seven Turing Awards, 20 inductees into the U.S. National Inventors Hall of Fame, 19 National Medals of Technology, five National Medals of Science and three Kavli Prizes. As of 2018, the company had generated more patents than any other business in each of 25 consecutive years.

So you may remember all your high school math, but not doing it every day means you are slower than some of the students. Your knowledge of programming will still be there, but you will be slower because you no longer have the reflexes that come with doing things over and over.
Patents do, but in most cases they're trivial patents or patents for a "mutually assured destruction" portfolio (i.e., you keep them in hand should someone ever decide to sue you).
That's a fundamental problem with how the Western sphere prioritizes and funds R&D. Either it has direct and massive ROI promises (that's how most pharma R&D works), some sort of government backing (that's how we got mRNA, which pharma corps weren't interested in, and how we got the Internet, lasers, radar and microwaves), or some uber-wealthy billionaire (that's how we got Tesla and SpaceX, although government aid certainly helped).
And while we cut back government R&D funding in the pursuit of "austerity," China just floods the system with money. And they are winning that war.
A Nobel in 2026 doesn't carry the same weight as a Nobel in 1955.
There are many more ways to evaluate a writer's skill in terms of what they are actually doing, and it isn't the same thing as coding. Coding can be creative, but in most cases you are not evaluating coding as writing, unless it's technical writing, which is still different from coding.
There are also plenty of things I've kept for life just by having practiced them as a child. Everyone keeps bicycling, I think, but there's also the handstand, walking on hands, etc., which I learned as a kid over a few years, and I can still do them even if I only do them once a year. In my view code is exactly the same, and maybe even more straightforward: it's easier than obscure math, since you don't have to memorize any formulas to solve it. Although I think a lot of math is great precisely because you don't have to memorize formulas in the first place; you just have to internalize or figure out the logic or idea behind it, and then you have it. Repetition in math is specifically the wrong way to go about it; it's about understanding, not repetition.
In 2023, Raytheon’s president stood at the Paris Air Show and described what it took to restart Stinger missile production. They brought back engineers in their 70s to teach younger workers how to build a missile from paper schematics drawn during the Carter administration. Test equipment had been sitting in warehouses for years. The nose cone still had to be attached by hand, exactly as it was forty years ago.
The Pentagon hadn’t bought a new Stinger in twenty years. Then Russia invaded Ukraine, and suddenly everyone needed them. The production line was shut down. The electronics were obsolete. The seeker component was out of production. An order placed in May 2022 wouldn’t deliver until 2026. Four years. Not because of money. Because the people who knew how to build them retired a decade earlier and nobody replaced them.
I run engineering teams in Ukraine. My people lived the other side of this equation. Not the factory floor. The receiving end. While Raytheon was struggling to restart production from forty-year-old blueprints, the US was shipping thousands of Stingers to Ukraine. RTX CEO Greg Hayes: ten months of war burned through thirteen years’ worth of Stinger production. I’ve seen this pattern before. It’s happening in my industry right now.
In March 2023, the EU promised Ukraine one million artillery shells within twelve months. European production capacity sat at 230,000 shells per year. Ukraine was consuming 5,000 to 7,000 rounds per day. Anyone with a calculator could see this wouldn’t work.
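To spell out the calculator math, using only the figures above and ignoring stockpiles and imports (a rough sketch, not an official estimate):
const euCapacityPerYear = 230_000;          // EU production, shells per year
console.log(5_000 * 365);                   // ~1.83 million shells/year at the low end of Ukraine's usage
console.log(7_000 * 365);                   // ~2.56 million shells/year at the high end
console.log(1_000_000 / euCapacityPerYear); // even the one-million-shell pledge was ~4.3x annual capacity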
By the deadline, Europe delivered about half. Macron called the original promise reckless. An investigation by eleven media outlets across nine countries found actual production capacity was roughly one-third of official EU claims. The million-shell mark wasn’t hit until December 2024, nine months late.
It wasn’t one bottleneck. It was all of them. France had halted domestic propellant production in 2007. Seventeen years of nothing. Europe’s single major TNT producer was in Poland. Germany had two days of ammunition stored. A Nammo plant in Denmark was shut down in 2020 and had to be restarted from scratch. The entire continent’s defense industry had been optimized for making small batches of expensive custom products. Nobody planned for volume. Nobody planned for crisis.
The U.S. wasn’t much better. One plant in Scranton, one facility in Iowa for explosive fill, no domestic TNT production since 1986. Billions of investment later, production still hadn’t hit half the target.
This wasn’t an accident. In 1993, the Pentagon told defense CEOs to consolidate or die. Fifty-one major defense contractors collapsed into five. Tactical missile suppliers went from thirteen to three. Shipbuilders from eight to two. The workforce fell from 3.2 million to 1.1 million. A 65% cut.
The ammunition supply chain had single points of failure everywhere. One manufacturer for 155mm shell casings, sitting in Coachella, California, on the San Andreas Fault. One facility in Canada for propellant charges. Optimized for minimum cost with zero margin for surge. On paper, efficient. In practice, one bad day away from collapse.
Then there’s Fogbank. A classified material used in nuclear warheads. Produced from 1975 to 1989, then the facility was shut down. When the government needed to reproduce it for a warhead life extension program, they discovered they couldn’t. A GAO report found that almost all staff with production expertise had retired, died, or left the agency. Few records existed.
After $69 million in cost overruns and years of failed attempts, they finally produced viable Fogbank. Then discovered the new batch was too pure. The original process had relied on an unintentional impurity that was critical to the material’s function. Nobody knew. Not the engineers trying to reproduce it. Not even the original workers who made it decades earlier. Los Alamos called it an unknowing dependency in the original process.
A nuclear weapons program lost the ability to make a material it invented. The knowledge didn’t just leave with people. It was never fully understood by anyone.
(Correction: the original version stated that the workers who made Fogbank knew about the impurity. They didn’t. The dependency was unwitting, which makes the knowledge-loss argument stronger, not weaker. Thanks to John F. in the comments for catching this.)
I read the Fogbank story and recognized it immediately. Not the nuclear material. The pattern. Build capability over decades. Find a cheaper substitute. Let the human pipeline atrophy. Enjoy the savings. Then watch it all collapse when a crisis demands what you optimized away.
In defense, the substitute was the peace dividend. In software, it’s AI.
I wrote about the talent pipeline collapse before. The hiring numbers and the junior-to-senior problem are documented. So is the comprehension crisis. What I didn’t have was the right historical parallel. Now I do.
And it tells you something the hiring data doesn’t: how long rebuilding actually takes.
Every major defense production ramp-up took three to five years for simple systems. Five to ten for complex ones. Stinger: thirty months minimum from order to delivery. Javelin: four and a half years to less than double production. 155mm shells: four years and still not at target despite five billion dollars invested. France only restarted propellant production in 2024, seventeen years after shutting it down.
Money was never the constraint. Knowledge was. RAND found that 10% of technical skills for submarine design need ten years of on-the-job experience to develop, sometimes following a PhD. Apprenticeships in defense trades take two to four years, with five to eight years to reach supervisory competence.
Now map that onto software. A junior developer needs three to five years to become a competent mid-level engineer. Five to eight years to become senior. Ten or more to become a principal or architect. That timeline can’t be compressed by throwing money at it. It can’t be compressed by AI either.
A METR randomized controlled trial found that experienced developers using AI coding tools actually took 19% longer on real-world open source tasks. Before starting, they predicted AI would make them 24% faster. The gap between prediction and reality was 43 percentage points. When researchers tried to run a follow-up, a significant share of developers refused to participate if it meant working without AI. They couldn’t imagine going back.
The software industry is in year three of the same optimization. Salesforce said it won’t hire more software engineers in 2025. A LeadDev survey found 54% of engineering leaders believe AI copilots will reduce junior hiring long-term. A CRA survey of university computing departments found 62% reported declining enrollment this year.
I see it in code review. Review is now the bottleneck. AI generates code fast. Humans review it slow. The industry’s answer is predictable: let AI review AI’s code. I’m not doing that. I’ve reworked our pull request templates instead. Every PR now has to explain what changed, why, what type of change it is, screenshots of before and after. Structured context so the reviewer isn’t guessing. I’m adding dedicated reviewers per project. More eyes, more chances to catch what the model missed.
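For illustration, that kind of structured-context requirement can also be enforced mechanically. Here is a minimal sketch using Danger (an assumption on my part, not necessarily our setup; the section names are hypothetical):
// dangerfile.js: fail any PR whose description omits a required section
import { danger, fail } from "danger";

const body = danger.github.pr.body || "";
const requiredSections = ["What changed", "Why", "Type of change", "Before / after"];
for (const section of requiredSections) {
  if (!body.toLowerCase().includes(section.toLowerCase())) {
    fail(`PR description is missing the "${section}" section.`);
  }
}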
But even that doesn’t solve the deeper problem. The skills you need to be effective now are different. Technical expertise alone isn’t enough anymore. You need people who can take ownership, communicate tradeoffs, push back on bad suggestions from a machine that sounds very confident. Leadership qualities. Our last hiring round tells you how rare that is: 2,253 candidates, 2,069 disqualified, 4 hired. A 0.18% conversion rate. The combination of technical skill and the judgment to know when the AI is wrong barely exists in the market anymore.
We document everything. Site Books, SDDs, RVS reports, boilerplate modules with full coverage. It works today, because the people reading those docs have the engineering expertise to act on them. What happens when they don’t? Honestly, I don’t know. Maybe AI in five years is good enough that it won’t matter. Maybe the problem stays manageable. I can’t predict the capabilities of models in 2031.
But crises don’t send calendar invites. Nobody expected a full-scale land war in Europe in 2022. The defense industry had thirty years to prepare and didn’t. Even Fogbank had records. There weren't enough. The original workers didn't fully understand their own process.
Five to ten years from now, we’ll need senior engineers. People who understand systems end to end, who can debug distributed failures at 2 AM, who carry institutional knowledge that exists nowhere in the codebase. Those engineers don’t exist yet because we’re not creating them. The juniors who should be learning right now are either not being hired or developing what a DoD-funded workforce study calls “AI-mediated competence.” They can prompt an AI. They can’t tell you what the AI got wrong.
It’s Fogbank for code. When juniors skip debugging and skip the formative mistakes, they don’t build the tacit expertise. And when my generation of engineers retires, that knowledge doesn’t transfer to the AI.
It just disappears.
The West already made this mistake once. The bill came due in Ukraine.
I know how this sounds. I know I’ve written about the talent pipeline before. The defense example isn’t about repeating the argument. It’s about showing what happens if the industry’s expectations don’t work out. Stinger, Javelin, Fogbank, a million shells nobody could make. That’s the cost of betting wrong on optimization. We’re making the same bet with software engineering right now.
Maybe AI gets good enough, and the bet pays off. Maybe it doesn’t. The defense industry thought peace would last forever, too.
A couple of things about those patents, from a former IBMer who racked up quite a few in his time there.
First, not all patents are created equal. Most of those IBM patents are software-related, and for pretty trivial stuff.
Second, most of those patents are generated by the rank and file employees, not research scientists. The IBM patent process is a well-oiled machine but they ain't exactly patenting transistor-level breakthroughs thousands of times a year.
Now writing is something totally different. In some cases writing ability is not about writing, it's about your thoughts and understanding of life and human nature.
You could simply become a better writer without writing anything, just by observing.
If you are using an LLM to write, what is the purpose of that? Are you writing news articles, or are you writing a story reflecting your observations of human nature with novel insights? In the latter case you couldn't use AI in the first place, as you'd have to convey what you are trying to say in your own words; AI would just "average" your prompt or meaning, which takes away from the initial point.
With code, predictability is the whole point; good writing is supposed to be unexpectedly insightful. They are completely different.
To become a better X, you must do more of X. There are few worthwhile shortcuts.
Although we were discussing the decay of a skill. While in some things the decay is super clear (as with running: pace, not technique), I think there are many areas where there's no clear decay, where other activities will actually significantly boost it, and where whatever decay there is disappears after just a few days of practice or remembering.
We did that at Meta and Amazon too (for polycarbonate puzzle pieces, with no monetary award at all!). Every now and then something meaningful came out of it.