- Software engineering is a cost center; engineers are middlemen between C-level ideas and a finished product.
- Software engineering is about figuring out how to automate a problem: exploring the domain, defining context, weighing tradeoffs, and unlocking new capabilities in the process.
The only reason I remember this encounter so clearly was because he got rather annoyed, to the point of being aggressive, when I pointed out that most of the computing landscape was built on C and this wasn’t going to change any time soon.
Multiple decades later, and C-derived languages still rule the world. I do sometimes wonder if his opinion mellowed with time.
> "The name derived from the idea that The Last One was the last program that would ever need writing, as it could be used to generate all subsequent software."
That was released in 1981. Spoiler alert: it was not, in fact, the last one.
In every recession with mass layoffs of programmers (not every recession hits programmers hard), there were many articles saying that whatever the latest thing was [see article] had caused it, and that the industry was getting rid of programmers it would never need again.
In every case, of course, "it's the economy, stupid." The tools made little difference to the need for programmers. The tools that worked actually increased demand, because things you wouldn't even attempt without them became worth hiring extra people to do.
But, the modal programmer at this point is some person who attended a front-end coding bootcamp for a few months and basically just knows how to chain together CSS selectors and React components. I do think these people are in big trouble.
So while the core, say, 10% of people should, I think, remain in the system, this 90% periphery of pretty bad programmers will probably need to move on to other jobs.
I see an analog with AI-generated code: the disciplined among us know we are programming and consider error and edge cases, the rest don't.
Will the AIs get good enough so they/we won't have to? Or will people realize they are programming and discipline up?
To ignore that pattern and say everything's going to be automated and humanity will be irrelevant seems to me to be... more of a death wish against human agency, than a prediction based on reality.
Which includes this excellent line:
> Unfortunately, the winds of change are sometimes irreversible. The continuing drop in cost of computers has now passed the point at which computers have become cheaper than people. The number of programmers available per computer is shrinking so fast that most computers in the future will have to work at least in part without programmers.
Will something come along some day that will actually drastically reduce the need for programmers/developers/software engineers? Maybe. Are we there yet? My LLM experience makes me seriously doubt it.
Don’t facilitate losing your job.
Arguably programming is as much learning as it is writing code. This is part of the reason some people copy an entire API and don't realise they're not so much building useful code as building an understanding.
Only issue I saw after a month of building something complex from scratch with Opus 4.6 is poor adherence to high-level design principles and consistency. This can be solved with expert guardrails, I believe.
It won’t be long before AI employees join the daily standup and deliver work alongside the team, with other users in the org not even realizing or caring that it’s an AI “staff member”.
It won’t be much longer after that before they start to tech-lead those same teams.
All the other attempts failed because they were just mindless conversions of formal languages to formal languages. Basically glorified compilers. Either the formal language wasn't capable enough to express all situations, or it was capable and thus it was as complex as the one thing it was designed to replace.
AI is different. You tell it in natural language, which can be ambiguous and not cover all the bases. And people are familiar with natural language. And it can fill in the missing details and disambiguate the others.
This has been known to be possible for decades: simplifying a bit, a (non-technical) manager can tell the engineer what to do in natural, ambiguous language, and the engineer will do it. Now the AI takes the place of the engineer.
Also, I personally never believed before AI that programming will disappear, so the argument that "this has been hyped before" doesn't touch my soul.
I have no idea why this is so hard to understand. I'd like people to reply to me in addition to downvoting.
I don't take issue with this, except that it's a false comfort when you consider that demand will naturally ebb and individual workload will naturally escalate. In that light, I find it downright dishonest, because the rewards for attaining deep knowledge will continue to evaporate, necessitating AI assistance.
The reason it is different this time around is that the capabilities of LLMs have incentivized the professional class to betray the institutions that enabled their specializations. I am talking about the amazing minds at Adobe, Figma, and the FAANGs who are bridging agentic reasoners and diffusion models with the domain-specific needs of their respective professional users.
Humans are a class of beings, and the humans accelerating the advance of AI in creative tools are the reason that things are different this time. We have class traitors among us this time, and they're "just doing their jobs". For most, willful disbelief isn't even a factor. They think they're helping, while each PR just brings them closer to unemployment.
You could argue that coding with LLMs is a form of software reuse, one that removes some of its disadvantages.
``` My own eyes spent countless nights observing, with curiosity and wonder and delight, the responses of a computer, as I commanded it with code, like a sorcerer casting spells. I could not have known, that this obedient machine, this silicon golem, was also, slowly and imperceptibly, enchanting me, and changing how my eyes would see.
At the time^21, I was a mere fifteen years old, young enough, so that the gravity of life was weak enough, and the mind nimble enough, to allow me to explore without any material justification.
The computer was the believed and I was the believer.
A consequence of becoming obsessed^22 with computer programming, is that one starts to see new metaphors, algorithmic metaphors, everywhere one looks. This new metaphorical lens, belongs entirely to the third eye. Without this lens, I would look at a traffic jam, and see a traffic jam. With the lens, I would look at a traffic jam, and wonder if, and to what extent, the latency-throughput trade-off^23 was true for highways. Without the lens, I would read about social theory, and simply see the words. With the lens, I would ask if society was, a tree^24, a graph^25, a tree of graphs, or a graph of trees^26.
To generalize, the computer programmer looks at something, and asks, _is this thing an algorithm, and if so, what kind_? The entire _trade_ of computer programming, it revolves around this question, around the discovery of metaphors that fit^27[13][14].
It is thus little surprise, when a computer programmer asks if (or sometimes asserts that) a certain kind of algorithm^28 is intelligence^29, consciousness, or both.
The entire ritual of computer programming, is similar to the trade, in that it involves discovering metaphors, not as a means to an end, but as their own end. This ritual is difficult to explain to someone who has never practiced it. Imagine, instead of trying to find metaphors that bridge the real to the algorithmic, one tries to find metaphors that bridge the algorithmic to itself.
It is very similar to what mathematicians do, but it requires writing programs in a very principled and abstract way^30.
This ritual, unlike the ritual of writing, and unlike the ritual of mathematics, has a dominant material component (the computer) which can make your code, in addition to an _imaginary_ experience, a _material_ experience^31. This makes the computer a medium — an artificial oracle or artificial hallucinogen — that can safely imagine the unimaginable. And like the oracle, the computer exists to provide insight^32.
Without the ritual of programming, there would be no field of chaos theory, nor complex systems (very important for economics and environmental sciences), and _certainly_ no elaborate fractals. Pure mathematics could only scratch the surface, because the mathematical ideas, of the mid 20th century, that our imaginations could access, were insufficient for exploring these systems. Computers allow us, not unlike microscopes and telescopes, to magnify the informational dimension of nature [17].
Computers, and the arcane programming languages that make them obey, are magic machines, that created a new interaction between, two elements of the human psychic triad, the immaterial and material.
What is this triad, and what is its third element? The concept of the triad appears so frequently, in recorded human thought, and in the structure of language, that it is either some kind of adaptive ideal^33, or a consequence of language itself^34, if not both. Pythagoras called _three_ perfection itself. Plato divided the world into three parts. And, even today, our modern shamans and sages, use triads to discuss the universe.
Roger Penrose has a triad consisting of physical, platonic, and mind. Lacan has a triad consisting of real, symbolic, and imaginary. Plato has a triad of good, truth, and beauty. Of the three, Lacan’s naming is the most self-explanatory.
In this essay, the _material_ is the real, and the _immaterial_ is the other two.
The _trade_ of programming is driven by the _real_, while the _ritual_ of programming is driven by the _imaginary_. A trade is pursued because of real, material concerns (such as covering the cost of living), while a ritual is pursued because of imaginary concerns — concerns that can, more precisely, be called _aesthetic_. ```
To them, an LLM is indistinguishable from a programmer. From the point of view of authority, progress happens one meeting at a time. The reality is that there is a pyramid of experts beneath the authorities, that keep everything running smoothly, in spite of the best attempts of the authorities to demolish the foundation of the pyramid by "helping".
EDIT: to end on a positive note, it does not have to be this way. We just have to be willing to understand _how_ the organization we are a part of actually functions. And that means actually being curious instead of merely authoritative. I understand that curiosity is hard to maintain when you swim with sharks, so maybe don't swim with sharks.
Over two more weeks I can work with those same five to ten people (who often disagree or have different goals) and get a first draft of a feature or a small, targeted product together. In those latter two weeks, writing code isn’t what takes time; working through what people think they mean versus what they are actually saying, and mediating between groups when they disagree (or mostly agree), is the work. And then, after that, we introduce a customer. Along the way I learn to become something of an expert in whatever the thing is and continue to grow the product, handing chunks of responsibility to other developers, at which point it turns into a real thing.
I work with AI tooling and leverage AI as part of products, where it makes sense. There are parts of this cycle where it is helpful and time saving, but it certainly can’t replace me. It can speed up coding in the first version but, today, I end up going back and rewriting chunks and, so far, that eats up the wins. The middle bit it clearly can’t do, and even at the end when changes are more directed it tends toward weirdly complicated solutions that aren’t really practical.
Just one example of how this has happened again and again.
The Mythical Man Month is just over half a century old, yet still reads like it was written yesterday.
Worse, they were doing functional programming just by chaining formulas without side effects, surpassing the skills of most self-proclaimed programmers out there.
In my opinion, attempting to hold the hand of the LLM via prompts in English for the 'last mile' to production ready code runs into the fundamental problem of ambiguity of natural languages.
From my experience, those developers that believe LLMs are good enough for production are either building systems that are not critical (e.g. 80% is correct enough), or they do not have the experience to be able to detect how LLM generated code would fail in production beyond the 'happy path'.
That’s a bit… handwavy…!
For someone who is able to design an end to end system by themselves these tools offer a big time saving, but they come with dangers too.
Yesterday a mid-level dev on my team proudly presented a web tool he "wrote" in Python (to be run on localhost) that runs kubectl in the background and presents things like the versions of images running in various namespaces. It looked very slick; I can already imagine the product managers asking for it to be put on the network.
So what's the problem? For one, no threading whatsoever, no auth, all queries run in a single thread, and on and on. A maintenance nightmare waiting to happen. That is the risk of a person who knows something, but not enough, building tools by themselves.
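To make the concern concrete, here is a minimal stdlib-only sketch (not the tool in question; the endpoint, header name, token scheme, and kubectl flags are illustrative assumptions) of what addressing those two gaps might look like: a threaded server so one slow kubectl call doesn't block every other request, and at least a shared-token check standing in for real auth.

```python
# A hedged sketch, assuming a localhost admin tool shelling out to kubectl.
import json
import subprocess
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

API_TOKEN = "change-me"  # placeholder; real deployments need real auth

def image_versions(pods_json):
    """Map pod name -> container images from `kubectl get pods -o json` output."""
    data = json.loads(pods_json)
    return {
        item["metadata"]["name"]: [c["image"] for c in item["spec"]["containers"]]
        for item in data.get("items", [])
    }

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.headers.get("X-Token") != API_TOKEN:
            self.send_error(401)  # reject unauthenticated callers
            return
        out = subprocess.run(
            ["kubectl", "get", "pods", "-A", "-o", "json"],
            capture_output=True, text=True, check=True,
        ).stdout
        body = json.dumps(image_versions(out)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

def serve(port=8080):
    # ThreadingHTTPServer handles each request in its own thread.
    ThreadingHTTPServer(("127.0.0.1", port), Handler).serve_forever()
```

Even a sketch like this is still nowhere near production-ready, which is rather the point: the distance between "slick demo" and "safe on the network" is exactly what the mid-level dev didn't see.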
How far AI will succeed in replacing programmers remains to be seen. Personally I think many jobs will disappear, especially in the largest domains (web). But I think this will only be a fraction and not a majority. For now, AI is simply most useful when paired with a programmer.
Does he know about SQL injection? XSS?
Maybe he knows a little about security and asks the LLM to make a secure site with all the protection needed. But how does the manager know it works at all? If you only find out there's an issue in a critical part of the software after your users' data has been stolen, how bad is the fallout going to be?
How good a tool is also depends on who's using it. Managers are obviously not engineers, unless they were engineers before becoming managers; but you are saying engineers are not needed. So where is the engineering manager going to come from? I'm sure we're not growing them on engineering trees.
Somehow civilization continues to function!
Makes me a bit less terrified that untested vibe coded slop will sink the economy. It's not that different from how things work already.
Using Excel in the traditional sense isn't the same as programming, unless they were doing some VBA or something like that, which the vast majority of Excel/spreadsheet users don't.
> spreadsheet formulae
formulas. We aren't speaking Latin here.
> I see an analog with AI-generated code: the disciplined among us know we are programming and consider error and edge cases, the rest don't.
Programming isn't really about edge cases or errors.
Democracy is about governance, not access.
A "democratized" LLM would be one in which its users collectively made decisions about how it was managed. Or if the companies that owned LLMs were ran democratically.
What you say about big tech is true at the same time, though. I worry about what happens when China takes the lead and no longer feels the need to release open models. The first hints are already showing: advance access to ds4 only for Chinese hardware makers.
This is just categorically false.
No-code tools didn't fail because they were "mindless conversions of formal languages to formal languages". They failed because the people who were supposed to benefit the most (non-developers) neither had the time nor desire to build stuff in the first place.
I have certainly never met anyone who works in "loom engineering" in my entire life.
The difference this time is that the thing they're trying to automate is intelligence. The goal is a machine that's as smart as a Nobel Prize winner or a good CEO, across all fields of human intellectual endeavor, and which works for dollars an hour. The goal is also for this machine to be infinitely copyable for the cost of some GPUs and hard drives.
The next goal after that will be to give that machine hands, so that it can do any physical labor or troubleshooting a human can do. And again, the goal is for the hands to be cheaper to produce and cheaper to automate than humans.
You may ask yourself, who would need humans in a future where all intellectual and physical tasks can be done better and cheaper by a machine? You may also ask yourself, who would control the machines? You may ask yourself, what leverage would ordinary humans have in a future that no longer needed them for anything? Or perhaps you would not ask those questions.
But this is the future investors are dreaming of, and the future that they're investing trillions of dollars to reach. That's the dream.
Are you suggesting “And Claude, make no mistakes” works?
Because otherwise you need an expert operating the thing. Yes, it can answer questions, but you need to know what exactly to ask.
> This has been known to be possible for decades, as (simplifying a bit) the (non-technical) manager can order the engineer in natural, ambiguous language what to do and they will do it
I have yet to see vibe coding work like this. Even expert devs with LLMs get incorrect output. Any time you have to correct your prompt, your argument fails.
The only thing that we can do is to not make it worth their time in the long run. Don't let greed and fear slide. Don't hate someone for choosing their family and comfort over your own, hate the system that forces them to make that choice. Hold them accountable, but attack the system, instead of its hostages and victims.
A couple of years later, Microsoft came out with Visual Basic, and I thought, OMG, I'm toast. Secretaries are going to be writing code. I was a developer by this time, writing code in FoxPro and getting into PowerBuilder.
All this to say, "I've been in IT for many years, and companies promise a lot but rarely deliver completely on their promises." Do programmers and others in the tech field need to adapt? Yes. Is AI going to be disruptive to some extent? Yes. Are all jobs going away? No.
Back in the '80s there were ads for tools to replace the "dinosaurs", yet those dinosaurs were exactly whom everyone looked to when their 4GL failed to solve the problem.
In the real world, the materials are visible, so people have a partial understanding of how things get done. But most of the software world is invisible and has no material constraints other than the hardware (you can't use RAM that isn't there). If the hardware is like a blank canvas, a standard web framework is like a paint-by-numbers book (but one with lines drawn in pencil so you can erase them easily). Asking a user to code with an LLM is like asking a blind person to paint the Mona Lisa with a brick.
The difference is those spreadsheets were buried on a company internal fileshare and the blast radius would be contained to that organization.
Today vibe coders can type a prompt, click a button, and their thing is exposed directly to the internet and ready to suck up any data someone uploads.
Define "here", please! Perhaps your "here" and mine differ, but the view from my here is that while all three plurals are generally acceptable, formulae is the correcter double plus good spelling for this context.
(Or alternatively, it's getting harder to stamp out "shadow IT" and all the risks and headaches it causes.)
Experienced through old-school (pre-LLM) practice.
I don't clearly see a good endgame for this.
This is not the case:
- Before the 90s, programming was rather a job for people who were insanely passionate about technology, and working as a programmer was not that well-regarded (so no "growing opportunities").
- After the burst of the first dotcom bubble, a lot of programmers were unemployed.
- Every older programmer can tell you how fast the skills they have can become, and have become, irrelevant.
Over the last decades, the stability and opportunities for programmers were more like a series of boom-bust cycles.
I started coding 8 months ago at 45 with zero experience. I now have a production app processing real payments. That was genuinely impossible for someone like me before AI assistance. Not because I lacked the ability to think through problems, but because the skill floor was too high to clear while also being a parent with no spare years to invest.
The spreadsheet analogy is apt. Most of those amateur spreadsheets aren't replacing finance teams; they're solving small problems that would otherwise go unsolved. That's closer to what's happening with AI-assisted development, I feel, than the "eliminate programmers" framing suggests.
It can be about both meanings. The additional sense of "democratize" as "make more accessible" is documented in the Oxford and Merriam-Webster dictionaries:
https://www.encyclopedia.com/humanities/dictionaries-thesaur...
https://www.merriam-webster.com/dictionary/democratic#:~:tex...
The article is a good summary of the major movements through the decades, without so much detail that the whole point is lost. I would have put in a slightly different set of things if I were writing that article, but the point would still stand, and I would leave out many things that could be included but would be too much noise.
This confused my teacher, as he knew this guy wasn’t super technical, and he asked him more about it. I may not have the details exactly right, but the man said something like “I use Lotus Notes every day!”
The word programmer had a very different meaning 40 years ago.
In my opinion these discussions should include MREs (minimal reproducible examples) in the form of prompts to ground the discussion.
For example, take this prompt and put it into Claude Code, can you see the problematic ways it is handling transactions?
---
The invoicing system is being merged into the core system that uses Postgres as its database. The core system has a table for users with columns user_id, username, creation_date . The invoicing data is available in a json file with columns user_id, invoice_id, amount, description.
The data is too big to fit in memory.
Your role is to create a Python program that creates a table for the invoices in Postgres and then inserts the data from the json file. Users will be accessing the system while the invoices are being inserted.
---
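For reference, here is a hedged sketch of the shape of answer I would want an LLM to produce for that prompt (assuming a psycopg2-style connection and a newline-delimited JSON file; the table and column names come from the prompt above). The points the prompt probes: stream the file instead of loading it, and commit in small batches so concurrent users aren't blocked behind one giant transaction.

```python
# Sketch only: psycopg2-style API assumed; error handling deliberately omitted.
import json
from itertools import islice

def batches(iterable, size):
    """Yield lists of up to `size` items from `iterable`."""
    it = iter(iterable)
    while True:
        chunk = list(islice(it, size))
        if not chunk:
            return
        yield chunk

def iter_invoices(path):
    """Stream one record per line so the whole file never sits in memory."""
    with open(path) as f:
        for line in f:
            if line.strip():
                yield json.loads(line)

def load_invoices(conn, path, batch_size=1000):
    with conn.cursor() as cur:
        cur.execute("""
            CREATE TABLE IF NOT EXISTS invoices (
                invoice_id  bigint PRIMARY KEY,
                user_id     bigint REFERENCES users (user_id),
                amount      numeric NOT NULL,
                description text
            )
        """)
    conn.commit()
    for chunk in batches(iter_invoices(path), batch_size):
        with conn.cursor() as cur:
            cur.executemany(
                "INSERT INTO invoices (invoice_id, user_id, amount, description)"
                " VALUES (%s, %s, %s, %s) ON CONFLICT (invoice_id) DO NOTHING",
                [(r["invoice_id"], r["user_id"], r["amount"], r["description"])
                 for r in chunk],
            )
        conn.commit()  # short transactions keep locks brief for live users
```

In my experience, LLM first attempts tend to read the whole file with one `json.load` and insert everything in a single transaction, which is exactly the failure mode the prompt is designed to surface.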
Some will dig into obscurities that LLMs don't or can't touch, others will orchestrate the tools, Gastown-style, into some as-yet-unknown form.
People will vibe themselves into a corner and either start learning or flame out.
The current crop of LLMs is subsidised enough to make this learning less expensive for those short on both time and money. That's what's meant by democratised.
If all the frontier models disappear into autocratic dark holes then, yeah, we have a problem, but the fundamental gain in freedom, that individuals can make tools without knowing how to code, isn't going anywhere.
What do you make of AI?
It turns out that in many of these cases, code is an effective way of doing it, but there may be other options. For a storefront, there are website builders that let you do it very effectively if your needs match one of their templates, there are game engines that require no code, and a lot of accounting can be done in Excel.
What I wanted to say is that maybe you could have done without code, but thanks to LLMs making code a viable option even for beginners, that's what you went for. In fact, vibe coding is barely even coding in the strictest sense of writing something in a programming language, since you are using natural language and code is just an intermediate step that you can see.
The reason programmers use programming languages is not gatekeeping, unlike what many people who want to "eliminate programmers" think. It is that programming languages are very good at what they do, they are precise, unambiguous, concise and expressive. Alternatives like natural languages or graphical tools lack some of these attributes and therefore may not work as well. Like with many advanced tools, there is a learning curve, but once you reach a certain point, like when you intend to make it your job, it is worth it.
It was an institutional failure, and the software involved had hundreds of millions of pounds spent on it and was built by supposed professionals.
Writing software has always been a skill with no ceiling. Writing software can be literally equivalent to doing research level mathematics. It can also be changing colors on a webpage. This is why I have never been worried about LLMs taking software jobs, but it is possible they will require the level of skill to be employable to spike.
I believe that full automation of the mundanities of human life is coming in the fullness of time. But for that insight to be helpful to me, I have to get the timing right, and the data suggests I should be extremely skeptical about excitable tech guys predicting big things in short time frames.
Paradoxically, this may mean there are more jobs for programmers and programmer-likes alike as new cottage industries are born. AI for dentists is coming.
Those massively and widely available benefits will continue to deflate the value of human intelligence until even most of innovators currently working on them lose their seats at the table too.
All the sheets we saw in that factory, and in our hotels, were noticeably thicker and stiffer than American sheets, somewhere between American sheets and denim. When we asked about that, they seemed to feel sorry that we only had thin, flimsy sheets.
And while these tools can be invaluable in some cases, I still don't know how we get from "Hazy requirements where the user doesn't know what they even want" to "Production-ready apps built at the finger-tips of the PM".
Another really important detail people keep missing is that we have to make thousands of micro-decisions along the way to build up a cohesive experience for the user. LLMs haven't really shown they're good at keeping assumptions out of code. In fact, they're really bad at it.
Lastly, do people not realize how easy it is to convince an LLM of something that isn't true, or vice versa? I love these tools, but even I find myself trying to steer them in the direction that makes sense to me, not the direction that makes sense generally.
Where you fall depends on where you work and what you work on.
You make a great point about the chain of accountability. But, in my opinion, working professionals are the only agents in the system with the potential to realize their own culpability and divert their actions.
Perhaps, it isn't fair to point to them and call them traitors. Still, they are the only ones with enough agency to potentially organize and collectively push for the kind of ethics that could save us all.
[1]: what you see is the first 5% of the essay, based on the notes that never made it in. Many topics are untouched, such as cults, caves, imagination, conspiracy, paranoia, fear, wakefulness, blindness, hallucinations, altering consciousness, notations, etc. And other topics are mentioned but not explored deeply (taoism, buddhism, prophecy, trust+belief[2], mnemonics, dreams, metaphors, etc). So it's mostly setup, with planned payoffs and epiphanies in the latter unwritten parts[3]. And some of the transitions between topics are in need of deburring.
[2]: note that many languages use the same word for trust and belief. In Indo-European languages, the root is the same root as tree and true. Relevant to the unwritten parts of the essay.
[3]: so you'll just have to imagine the unwritten parts, until I actually get around to writing them ;)
Two years ago, one former exec at my place was perfectly happy to throw "resources" (his word) from India at a problem, while being unwilling to pay the vendor for the same thing. I voiced my objection once, but after it was dismissed I just watched the thing blow up.
I am not saying the current situation is the same. It is not. But it is the same hubris, which means miscalculations will happen (like Dorsey's mass firing at Block).
Now I've come to realize the error of my ways: this is probably not going to happen. What will happen instead is that the ones doing the "shuffling of shit" will themselves be agents, prompted by a more senior slop-grammer who specializes in orchestrating the "shuffling of shit".
Part of me thinks that we're already reaching peak stuff/employment/the current system.
We are currently churning out graduates who work in coffee shops. More and more employment is make-work. The issue is whether we can carry on requiring work, making it a moral requirement.
I suspect it'll be like the industrial revolution, when the average labourer moved to a factory in the city living in a slum, they were worse off. It took time for the conditions of the working class to improve.
Basic income is touted as the solution, but globalisation means workers are moving around much more, and I'm not sure the two are compatible. Not that I have a better idea.
I do think we need a cultural change decoupling work from self worth. It's becoming less and less defensible to require everyone to work to be 'deserving'.
All that being said, there will still be jobs; there will always be demand for the handmade, or for something that isn't soulless corporatism. Although I'm starting to sound like Star Trek's view of the future, which may not be achievable.
Let me put it this way: I do have my opinion on this topic, but the topic is insanely multi-faceted, and some claims that I am rather certain about are closer to the boundary of the Overton window of HN, so I won't post them here.
But the article which the whole discussion is about
> https://www.ivanturkovic.com/2026/01/22/history-software-sim...
offers in my opinion a rather balanced perspective regarding using AI for coding (which does not mean that this article is near to my opinion).
I will just give some less controversial thoughts and advice concerning AI:
- A huge problem when discussing AI is that the whole topic is a hodgepodge of various very diverse topics.
- The (current) AI industry has invested a lot of marketing effort to redefine what AI stood for in the past (it has basically convinced the mass of people that "AI = what we are offering").
- I cannot say whether AI will be capable of replacing lots of people in office jobs or not (I have serious doubts). Media loves to disseminate this topic, but in my opinion it does not really matter: the agenda is rather to spread fear among employees to make them more obedient.
- Even if AI turns out to be capable of replacing only a few office workers (the scenario I find more plausible), that does not mean management will not use "AI"/"replaced by AI" as a very convenient excuse to get rid of lots of employees. The dismissed workers will then mostly vent their spleen on the AI companies instead of the management; in other words, AI is a very convenient scapegoat for inconvenient management decisions. And yes, I consider it possible that some event leading to mass layoffs might happen in a few years (but this is speculative).
- While I cannot say how much quality improvement is possible for current AI models (i.e. I don't know whether there exists a technological barrier), the signs are clear that as of today AI companies have hit some soft "cost barriers". I don't know whether these are easily solvable or not, but be aware of their existence.
- So, my advice is: if an AI model is of use for some project that you have (e.g. generating graphics/content for your web platform, or using it as a tool for developing the next scientific breakthrough), do it now. Don't assume that the models will keep doing this nearly for free in the future (it may be that they will, but be cautious).
And flattening is already being seen, no? Recent advancements are mostly from RL, which has its own limitations (and tradeoffs). Are there more tricks after that?
If you walk to the kitchen and fry up an egg are you now a master chef? What's the difference between a surgeon and a butcher ...they both cut things?
Most shops never really needed development expertise in-house, as there's no shortage of decent tools equally suitable as code for getting machines to do most business things.
In some ways this is worse, because while it's functionally the same black-box intermediary as the alternative-to-code tools, there's an illusion of control and more sunk cost. Do you want your sales team selling, or learning JavaScript and churning out goofy knock-offs of a well-solved problem?
I could learn plumbing skills and do the plumbing around my house. I've chosen not to.
If I build a web app I still need to pay for a domain, for a server, for egress.
We are just renting. I wouldn't be surprised if in the future this gets even more depressing.
Or, if you want, compare vibe coding with any technology, like electricity. Sure, that one person got electrocuted or their house burned down. But it's just so useful, and "somehow civilization continues to function". I guess they should've known better.
I'm personally not comfortable hyping up the benefits whilst ignoring the risks, especially for lay people.
The mere concept of people "making their own tools" is just comical in this bleak timeline.
May I suggest two or three other directions to glance at?
- The Strugatsky brothers: in "Snail on the Slope" [1], chapter 3 (about page 11 in the original), Peretz talks about understanding: "Проще поверить, чем понять. Проще разочароваться, чем понять. Проще плюнуть, чем понять." In my flaky translation: "It's simpler to believe than to understand. It's simpler to get disappointed than to understand. It's simpler to spit than to understand." Have a look.
- Pirsig's MoQ, the Metaphysics of Quality
- the mean-ings of the word "mean"
LLMs need to get better at asking clarifying questions and at flagging when the initial solution might not work. Even when they do get better at that, this article argues that managers who cannot think through the answers well enough will still fall short, and that is the space developers live in.
The hardware offers so few guarantees that the OS's whole job is to provide them. All layers are formal, but usefulness doesn't come from that. Usefulness comes from a consistent model that embodies a domain. So you have the hardware, which has capabilities but no model. Then you add the OS kernel, which imposes a model on the hardware; then you have the system libraries, which further restrict it to certain domains. Then you have the general libraries, which are more useful because they present another perspective. And then you have the applications that use this last model according to a certain need.
A good example: you go from the sound card, to the sound subsystem, to the ALSA libraries, to PipeWire, to an audio player or a media framework like the one in the browser. Dozens of engineers have contributed to this particular tower, and most developers only deal with the last layers, but the lesson is that the perspective of a user differs from the building blocks we have in hand. Software engineering is about reconciling the two.
So people may know how things should look or behave on their end, but they have no idea what the building blocks are on the other end. It's all abstract. The only things that are real are the hardware and the energy powering it. Everything else needs to be specified with code. And in that world that forms the middle layer, there are a lot of rules to follow to make something good, but few laws that prevent something bad. It's not like physical engineering, where there are things you just cannot do.
Just as on a canvas you can draw anything as long as it's inside the boundary of the canvas, you can do anything in software as long as it's inside the boundary of the hardware. The OS on personal computers adds a few more restrictions, but not many. It's basically Fantasia in there.
Also worth noting that even in Star Trek, which is viewed as a utopian vision of the future, the sort of societal changes you are talking about only came after humanity almost wiped itself out in a third world war (which coincidentally happened to start in 2026)
They actually were better off, which illustrates how bad rural poverty was at that time.
This task was famously difficult back when we had people producing unmaintainable mountains of millions of lines of code, to the point where shipping anything sizable in a working state, on time, without last-minute scope reductions was nearly unheard of.
I can't imagine that using AI to add another one or two zeroes to the lines-of-code counter would help us reach that goalpost.
I mean, maybe you can just keep an eye on what people are using the tools for and then monkey-patch your way to a sufficient AGI. I'll believe it when we're all begging outside the data centers for bread.
[Based on the rest of the history of science and technology since the Stone Age, I would place AGI at 200-500 years out at least. You have to wait decades after a new toy is released for everyone to realize everything they knew was wrong; then the academics get to work, then everyone gets complacent, then an accidental discovery produces a new toy, and so on.]
I mean... I am OK with you saying yes. In a sense, I half expect it. I will be very subtle: I don't believe the issue lies with the tooling (AI or not).
We are not ignoring it. It is just not an example of a load-bearing Excel sheet.
LLMs can write a lot of code. They can even write a comprehensive test suite for that code. However, they can't tell you if it doesn't work because of some interaction with something else you didn't think about. They can't tell you that all the race conditions are really fixed (despite being somewhat good at tracking them down once known). They can't tell you that the program doesn't work because it fails to do something critical that nobody thought to write into the requirements until you noticed it was missing.
My impression is that the main reason most people have so many meetings is because meetings are equated to work. If you are in a meeting, you are at work and you need to work. This is because, in a meeting, everyone is looking at everyone else with the expectation that they are working. But if you are not in a meeting, this expectation doesn't exist, so you are basically not at work and you don't need to work.
In particular, thinking only occurs during meetings. And if it didn't happen during a meeting, it didn't happen.
Call me cynical, but it explains immediately why the vast majority of companies don't tolerate remote work unless they're forced to by a pandemic. Office work means someone could be watching you outside meetings, which causes some work to happen outside of meetings and raises productivity.
When I look back at the history of software, one pattern emerges with remarkable consistency: the promise to simplify software creation, to make it cheaper, and ultimately to eliminate the need for programmers altogether. This is not a new idea. It has been the driving ambition of our industry since the 1960s. And while each generation believes they are witnessing something unprecedented, they are actually participating in a cycle that has repeated itself for over six decades.
Today, as large language models generate code and AI assistants pair-program with developers, we hear familiar refrains: programming as we know it is ending, software development will be democratized, and soon anyone will be able to build complex systems without writing a single line of code. These claims deserve scrutiny, not because they are entirely wrong, but because they echo promises made in 1959, in 1973, in 1985, and in 2015. Understanding this history is essential for anyone trying to make sense of where we actually are and where we might be going.
The story begins in the late 1950s, when programming was genuinely arcane. Programmers wrote in assembly language or machine code, manipulating registers and memory addresses directly. The work required deep technical knowledge and was painfully slow. Businesses needed software, but the people who understood business problems rarely understood computers, and the people who understood computers rarely understood business problems.
Enter Grace Hopper and the CODASYL committee. In 1959, they created COBOL, the Common Business-Oriented Language. The explicit goal was revolutionary: create a programming language so close to English that business managers could read it, understand it, and eventually write it themselves. The syntax was deliberately verbose. Instead of cryptic symbols, COBOL used words like MOVE, ADD, MULTIPLY, and PERFORM. A program read almost like a bureaucratic memo.
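A short, entirely hypothetical fragment (invented here for illustration, not taken from any real system) gives the flavor of that memo-like verbosity:

```cobol
      * Hypothetical payroll fragment illustrating COBOL's
      * English-like verbs; all names are invented for this sketch.
       PROCEDURE DIVISION.
       COMPUTE-PAY.
           MOVE HOURS-WORKED TO WORK-HOURS.
           MULTIPLY WORK-HOURS BY HOURLY-RATE GIVING GROSS-PAY.
           ADD OVERTIME-BONUS TO GROSS-PAY.
           PERFORM PRINT-PAYCHECK.
```

Even a non-programmer can follow what each line does; the catch is that reading such code is far easier than writing it correctly.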
The marketing was clear: COBOL would eliminate the bottleneck of specialized programmers. Business analysts would write their own programs. The priesthood of technical experts would be disbanded. Software creation would be democratized.
It did not work out that way.
COBOL succeeded spectacularly as a programming language. It became the backbone of banking, insurance, and government systems worldwide. Billions of lines of COBOL code still run today, processing trillions of dollars in transactions. But COBOL did not eliminate programmers. Instead, it created a new profession of COBOL programmers. The language was readable, but writing correct, efficient, maintainable COBOL still required specialized skills, deep understanding of the underlying systems, and years of experience.
The irony is profound. A language created to eliminate the need for programmers became one of the most enduring job creators in the history of computing. Today, COBOL programmers are in high demand precisely because so few people learned the language, and the systems they maintain are too critical to replace.
Most people discussing AI today seem unaware that we have been here before. The 1960s and early 1970s saw an explosion of optimism about artificial intelligence that makes current predictions seem modest by comparison.
In 1965, Herbert Simon, one of the founders of AI research, predicted that within twenty years, machines would be capable of doing any work a man can do. In 1967, Marvin Minsky, another AI pioneer, predicted that within a generation, the problem of creating artificial intelligence would be substantially solved. The funding poured in. Government agencies, particularly DARPA in the United States, invested heavily in AI research.
The implications for software development were considered revolutionary. If machines could think, surely they could program themselves. Researchers worked on automatic programming, systems that would translate natural language specifications into working code. Expert systems promised to capture human expertise in rule-based engines that could make decisions and solve problems without human intervention.
The vision was compelling. Describe what you want in plain English, and the computer would figure out how to do it. Programming would become specification, and specification would become conversation.
By the mid-1970s, reality had intervened. The Lighthill Report of 1973 devastated AI funding in the United Kingdom by systematically exposing the gap between AI promises and AI achievements. The report noted that early successes in narrow domains had created unrealistic expectations. Systems that worked in toy problems failed in real-world complexity. The combinatorial explosion of possibilities overwhelmed the computational resources of the time.
What followed was the first AI winter. Funding collapsed. Research programs were canceled. The phrase artificial intelligence became an embarrassment in some circles. Researchers rebranded their work as expert systems, knowledge engineering, or computational intelligence to distance themselves from the discredited hype.
The lesson should have been clear: translating human intent into working software is fundamentally difficult. The gap between natural language specification and correct implementation is not a technical problem to be solved but a conceptual challenge that reflects deep truths about human communication and software complexity.
By the early 1980s, a new hope emerged: fourth-generation languages, or 4GLs. The reasoning was seductive. Machine code was the first generation. Assembly language was the second. High-level languages like FORTRAN, COBOL, Pascal, and C were the third. The fourth generation would abstract away even more complexity, allowing non-programmers to create applications through high-level declarations rather than procedural code.
Products like FOCUS, NOMAD, RAMIS, and later PowerBuilder and Microsoft Access promised to revolutionize software development. Instead of writing code, users would define data structures, specify business rules, and design screens through graphical interfaces. The underlying code would be generated automatically.
The marketing materials were remarkably similar to what we hear about no-code platforms today. End users would build their own applications. IT departments would become enablers rather than bottlenecks. Development time would be reduced by factors of ten or more.
4GLs achieved genuine success in certain domains. Report generation, simple database applications, and departmental tools were built faster with these technologies. Microsoft Access, in particular, empowered millions of users to create functional applications without traditional programming.
But the fundamental promise once again went unfulfilled. Complex applications still required complex thinking. When business logic became intricate, when performance mattered, when systems needed to integrate with other systems, 4GLs revealed their limitations. Generated code was often inefficient. Customization was difficult. And the most sophisticated users of 4GLs were not business analysts but specialized developers who understood both the tools and the underlying principles.
The pattern repeated: tools that promised to eliminate programmers instead created new categories of programming work.
The late 1980s and early 1990s saw the rise of Computer-Aided Software Engineering, or CASE tools. The vision was comprehensive: model your entire system using diagrams and specifications, and the tools would generate complete, working applications.
Companies invested millions in CASE tool suites. Conferences were held. Methodologies were developed. The promise was that software engineering would finally become real engineering, with predictable outcomes, repeatable processes, and automatic generation of code from specifications.
I remember this era vividly. Consultants would arrive with massive data dictionaries and entity-relationship diagrams. Projects would spend months in the modeling phase, producing elaborate documentation that was supposed to capture every requirement and design decision. The code generation phase would then transform these models into working systems.
The reality was disappointing. Generated code was often bloated and inefficient. The models were difficult to maintain as requirements changed. And the fundamental problem remained: getting the specification right was at least as hard as writing the code directly. The diagrams and models were just another programming language, one that happened to use boxes and arrows instead of text.
By the mid-1990s, CASE tools had largely faded from prominence. Some concepts survived in object-oriented analysis and design tools like Rational Rose, but the grand vision of automatic code generation from specifications had proven illusory.
The 1980s also saw a second wave of AI enthusiasm, focused on expert systems. Companies like Inference Corporation, Intellicorp, and Teknowledge promised to capture human expertise in rule-based systems that could make decisions as well as human experts.
The applications were compelling: medical diagnosis, financial analysis, equipment configuration, and yes, automatic programming. Systems like R1/XCON at Digital Equipment Corporation achieved genuine success, saving the company millions by automatically configuring computer systems.
The Japanese Fifth Generation Computer Project, launched in 1982 with massive government investment, aimed to create computers that could reason using logic programming. The explicit goal was to leapfrog Western technology and create machines capable of natural language understanding, automatic programming, and artificial intelligence.
By 1990, the Fifth Generation Project had largely failed to achieve its goals. Expert systems proved brittle, failing in unexpected ways when confronted with situations outside their narrow expertise. Maintaining the rule bases became increasingly difficult as systems grew. And the fundamental challenge of knowledge acquisition, getting human experts to articulate their expertise in a form machines could use, proved far harder than anticipated.
The second AI winter arrived in the early 1990s. Again, funding collapsed. Again, researchers rebranded their work. Again, the fundamental promise of machines that could program themselves receded into the future.
The mid-1990s brought the World Wide Web, and with it a new hope for simplified software creation. HTML was supposed to be so simple that anyone could build web pages. And indeed, millions learned to create basic websites with nothing more than a text editor and some determination.
But the web quickly became more complex. JavaScript added interactivity. CSS added styling. Server-side programming became essential. Databases backed the content. Security became critical. Performance became competitive.
The response was predictable: new tools promising to simplify web development. Dreamweaver, FrontPage, and later WordPress and Wix promised that anyone could build professional websites without coding. Content management systems abstracted away the complexity.
These tools succeeded in making basic web presence accessible to non-developers. But professional web development became more complex, not simpler. The rise of single-page applications, mobile-first design, progressive web apps, and the modern JavaScript ecosystem created new layers of complexity that required new categories of specialists.
The pattern continued: tools that simplified basic tasks enabled more ambitious projects, which required more sophisticated skills, which created demand for more specialized developers.
The early 2000s saw another attempt at automatic code generation through Model-Driven Architecture, or MDA. The Object Management Group proposed that platform-independent models could be transformed into platform-specific implementations through automated tooling.
The Unified Modeling Language (UML) became the standard notation. Tools like Rational Rose, Together, and Enterprise Architect promised to generate complete applications from UML diagrams. The vision was familiar: model your system at a high level of abstraction, and let the tools handle the implementation details.
MDA achieved some success in specialized domains, particularly where the mapping from models to code was well-understood and the generated code did not need extensive customization. But for most applications, the approach proved cumbersome. Maintaining synchronization between models and code was difficult. The models themselves became complex, requiring specialized skills to create and maintain.
By the 2010s, MDA had faded from mainstream practice, though its concepts influenced modern code generation and domain-specific languages.
Starting around 2015, a new generation of no-code and low-code platforms emerged. Names like Bubble, Webflow, Airtable, Zapier, and Microsoft Power Platform promised to enable citizen developers to build applications through visual interfaces and drag-and-drop components.
The marketing echoed the promises of every previous generation. Business users would build their own applications. IT backlogs would disappear. Digital transformation would accelerate. Programmers would become obsolete, or at least far less necessary.
No-code tools have achieved genuine success in their domains. Simple applications, workflow automation, basic websites, and internal tools can be built faster with these platforms than with traditional development. For certain categories of problems, they represent a genuine advancement.
But the pattern holds. Complex applications still require complex thinking. When requirements exceed the platforms’ capabilities, users hit walls. Integration with existing systems is often difficult. Performance at scale is challenging. And the most sophisticated users of no-code platforms are often former developers who understand the underlying principles.
More importantly, no-code has not reduced the demand for traditional developers. If anything, the explosion of digital applications has increased demand. The market for software developers continues to grow, even as no-code platforms proliferate.
Which brings us to the present moment. Large language models like GPT-4, Claude, and Gemini can generate functional code from natural language descriptions. GitHub Copilot and similar tools assist developers in real-time, suggesting completions and generating boilerplate. The improvements are genuine and impressive.
The predictions are predictably bold. Programming jobs will be eliminated. Software development will be democratized. Anyone will be able to build complex systems by describing what they want in plain English.
We have heard this before. In 1959, with COBOL. In the 1970s, with automatic programming. In the 1980s, with 4GLs and expert systems. In the 1990s, with CASE tools. In 2015, with no-code.
This does not mean the current wave is identical to previous waves. Large language models represent a genuine capability breakthrough. They can perform tasks that previous technologies could not. The code they generate is often correct and useful. The productivity improvements for skilled developers are real.
But the fundamental challenge remains unchanged: translating human intent into correct, efficient, maintainable, secure software is hard. Not because the tools are inadequate, but because the problem is inherently complex.
Understanding why each wave of simplification tools falls short of its promises requires examining the nature of software development itself.
Software is not just code. It is a precise specification of behavior under all possible conditions. When you say you want a simple e-commerce application, you are implicitly specifying thousands of decisions about user authentication, payment processing, inventory management, shipping calculations, tax handling, error recovery, accessibility, performance under load, and security against attacks. Most of these decisions are not obvious from a high-level description.
The hard part of software development has never been typing code. It has always been figuring out exactly what the software should do, and ensuring it actually does that under all circumstances. This is why specification languages and automatic code generation repeatedly fail to eliminate programmers: they simply move the complexity from code to specification, and specification is at least as difficult.
Furthermore, software exists in an ecosystem. It must integrate with other software, adapt to changing requirements, run on evolving platforms, and serve users whose needs constantly shift. Maintaining and evolving software over time requires understanding that transcends any specification language or generation tool.
Finally, software reflects human decisions about tradeoffs. Performance versus maintainability. Security versus usability. Flexibility versus simplicity. These decisions require judgment that cannot be automated because they depend on context, priorities, and values that differ across situations.
Each wave of simplification tools does change something real, just not what the marketers promise.
COBOL did not eliminate programmers, but it did make business programming more accessible and created an enormous industry. 4GLs did not replace traditional languages, but they did enable rapid development of certain categories of applications. No-code platforms do not make developers obsolete, but they do empower non-developers to create useful tools.
The pattern is consistent: new tools lower the barrier to entry for simple tasks, which increases the total amount of software being created, which creates demand for more sophisticated software, which creates demand for more sophisticated developers.
This is not a failure. It is how technology evolves. Each abstraction layer enables new capabilities. Each simplification enables new ambitions. The overall effect is positive, even if the specific predictions prove wrong.
What should we take from this history as we navigate the current wave of AI-powered development tools?
First, skepticism about extreme predictions is warranted. Claims that programming will be eliminated within a few years echo claims made in every previous decade. The pattern suggests that such predictions consistently overestimate the pace of change and underestimate the complexity of the challenge.
Second, genuine improvements are real. Large language models do provide productivity gains for developers. They do make certain tasks easier. Dismissing these improvements because previous hype cycles failed would be as mistaken as accepting the most extreme predictions uncritically.
Third, the nature of programming work will continue to evolve. Just as COBOL programmers do different work than assembly language programmers, and web developers do different work than COBOL programmers, future developers will do different work than current developers. The work will change, but it will not disappear.
Fourth, human skills remain essential. Understanding requirements, making design decisions, debugging unexpected behaviors, maintaining systems over time, and communicating with stakeholders are skills that every wave of automation has failed to replace. These skills may become more important, not less, as tools handle more of the routine work.
Fifth, the developers who thrive will be those who learn to use new tools effectively while maintaining deep understanding of underlying principles. Tools come and go, but fundamental knowledge about algorithms, data structures, system design, and software architecture remains valuable across generations.
Perhaps the most important lesson from six decades of promised simplification is this: there is no substitute for understanding.
Every successful use of COBOL required understanding of business processes and data management. Every successful use of 4GLs required understanding of databases and application architecture. Every successful use of no-code platforms requires understanding of workflow design and system integration. And every successful use of AI code generation requires understanding of software principles, code quality, and system design.
The tools change. The languages change. The platforms change. But the need for people who deeply understand what they are building, and why, remains constant.
This is not a bug in the system. It reflects something fundamental about the nature of software and the nature of problem-solving. Software is crystallized thought. Creating good software requires good thinking. No tool can substitute for that.
As we look to the future, the history of software simplification offers a valuable perspective. We should expect AI tools to become more capable. We should expect new promises of revolutionary change. We should expect some of those promises to be fulfilled in unexpected ways, and others to fall short of the hype.
Most importantly, we should expect the fundamental challenge to remain: building software that does what people actually need, reliably, efficiently, and securely, in a world where requirements constantly change and systems must work together in complex ways.
That challenge has never been primarily about the difficulty of typing code. It has always been about the difficulty of thinking clearly, communicating precisely, and making good decisions under uncertainty. Those are human skills, and they will remain valuable for as long as software matters.
The programmers of the future may write less code directly. They may work at higher levels of abstraction. They may use tools we cannot currently imagine. But they will still be doing the essential work of translating human intent into software that works. And that work will continue to require skill, judgment, and understanding that no tool has yet replaced.
History suggests that reports of programming’s death have been greatly exaggerated, repeatedly, for over sixty years. There is no reason to believe this time is fundamentally different. There is every reason to believe that those who invest in deep understanding will continue to be valuable, regardless of what tools emerge.
If you found this perspective valuable, I encourage you to follow my work. I write regularly about the intersection of software engineering, technology history, and the evolving role of AI in our industry. My goal is always to cut through the hype and offer calm, analytical perspectives grounded in real experience.
You can find me on LinkedIn where I share shorter insights and engage with the broader technology community. For longer explorations like this one, this blog remains my primary home.
I would love to hear your thoughts on this topic. Have you lived through any of these hype cycles? Do you see patterns in your own experience that echo this history? What do you think makes the current AI wave different, or similar, to what came before?
Leave a comment below, reach out via the contact page, or connect with me on social media. The best insights often come from the community, from people who are building, leading, and thinking about these questions every day.
Until next time, keep building wisely.
I recall PowerBuilder in particular; it was all the rage.