> The valuable engineer is the one who sees the hidden constraint before it causes an outage. The one who notices that the team is solving the wrong problem. The one who reduces a vague debate into crisp tradeoffs. The one who identifies the missing abstraction. The one who can debug reality, not just read code. The one who can create clarity where everyone else sees noise.
How do you think engineers in the second half got there? By writing tons and tons of code to "build those reps" and gain that experience.
The author tries to answer this:
> That process is not optional. It is how engineers acquire and elevate their competency. If early-career engineers use A.I. to remove all struggle from the learning loop, they are hurting their development.
but in a world wherein writing code by hand (the "struggle") is "artisanal" and "outdated", calling this process non-optional (which I agree it is) is contradictory.
How juniors and fresh grads do that with AI that is designed to give you whatever answer you need in a given moment is unclear to me. I don't see how that's possible, but maybe I'm thinking too myopically.
After 5 hours or so of doing this planning, I'm EXHAUSTED. I never was exhausted in this manner from programming alone. Am I learning something new? Feels like management. :)
On the other hand I have been in debates where someone asks ChatGPT to draft a list of possible approaches and pros and cons - and after reading through the list we were all in alignment on the best approach.
The latter I think is a constructive use of AI to elevate thinking, while the former has me thinking it may be time for a career change.
Edit: 9 babies → 9 mothers
Or without the ability to use a library from GitHub / their package manager.
It doesn't feel THAT much different to me.
"Engineer" as a term might drift. There are "web developers" that can only use webflow / wordpress.
I started getting that "I'm reading another AI-written blog post" feeling around 1/3 of the way through, but I don't consider myself super calibrated on this.
Pangram seems pretty confident it's AI (https://www.pangram.com/history/e9f6eb77-86f9-46d0-a6c1-e57c...). But I know these tools aren't perfect. I'd love to hear from the author what their process was in writing this piece!
Related question (I'm trying to work this out for myself):
If you believe using AI to write an email or blog post for you isn't okay, but using AI to write code for you is... what's the difference?
Right now my instinct is something like:
- Code can be verifiably correct (especially w/ good tests) so it's less of a purely-creative act than writing.
- But always, always double-check the tests!
- I still wouldn't submit a PR where I can't vouch for every line of code.
- AI-written documentation and specs are mostly still bad and should be looked down upon. But mostly because the quality, at least today, is poor. (Lots of duplication, lack of a clear understanding of the reader's intent and needs, no thoughtful curation, etc.)
- Be psychologically ready to update these priors as models change.
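A toy sketch of the "double-check the tests" point (function name and numbers are mine, purely illustrative): a test can pass for coincidental reasons while the code is wrong.

```python
def apply_discount(price, pct):
    """Intended behavior: subtract pct PERCENT from price."""
    return price - pct  # bug: subtracts pct as an absolute amount

# A generated-looking test that passes only because 10% of 100 happens to be 10:
assert apply_discount(100, 10) == 90

# A second input exposes the bug: 20% off 50 should be 40, not 30...
assert apply_discount(50, 20) == 30  # ...yet this assertion also "passes"
```

Green checkmarks only verify what the assertions encode, which is why vouching for the tests is part of vouching for the code.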
I'd love to hear from anyone who's thought more about this.
"AI suggested we do it that way"
And we've been degrading our systems rapidly for the last several weeks. We've decided to pause, reflect, and change how we use AI on tasks that are not dead simple.
For junior engineers the distinction matters most. The reps are not just about getting the right answer, they are about building the intuition for when the answer is wrong. That's the hardest thing to transfer between people, and the thing AI is currently worst at self-verifying.
1) perfect is the enemy of good
2) fake it till you make it
The analogies imagine difficult scenarios where the habit of taking shortcuts doesn't help. But most people most of the time don't run into those scenarios at all.
I can't imagine telling them now to stop and use the Ersatz Intelligence instead of Actual Intelligence.
> This is the part that some people may not want to hear --
> There is no generated explanation that transfers mastery into your brain without you doing the work.
> There is no way to outsource reasoning for long enough that you still end up strong at reasoning.
This is in relation to early-career engineers, but I wonder why people think this won't apply to mid- and late-career engineers. Are they not also constantly learning things on the job? Are they not thus shortcutting their own understanding of what they are learning day-to-day?
I have been an ardent opponent of AI since it came up a few years back. I refuse to vibe code and I refuse to let AI think for me. I won't be an AI controller.
However, two days ago I found a nice, personal use case for AI: Advanced writing checks (grammar checks, mostly, and some rewordings) in Word using a rather expensive app.
I write a lot of US English, despite it not being my native language, and AI is now helping me write much better than I did before. I also discovered that I am much worse at writing Danish than I believed. In fact, I think I am better at writing US English than Danish, which is a bit surprising, as I am a Dane.
No AI was used during the writing of this entry, but I dearly love the writing tool already! I have heard similar stories from friends who say that AI is very good at summarizing long documents and stuff like that.
So, I personally think that AI CAN elevate one's thinking. I am learning more about Danish and US English grammar every day, now, than I did during a decade before. Writing is suddenly so fun because it involves growing my skills.
"Coding in the Red-Queen Era" https://corecursive.com/red-queen-coding/
IMO, teams need to agree on a set of principles for AI usage, with concrete examples of where and how to use it. Perhaps it's much more useful in parts of your system that are faster-evolving and don't have too much core logic, like testing frameworks etc.
Simply discarding it as 'yet another tool' is part of the problem.
If the brain is like a muscle, it won't work.
It shows both groups using AI, just differently. It's hard to continue reading an article that excludes your group entirely.
Well this is true, but that doesn't mean that there isn't any other way to acquire this knowledge. Until now, this way of gaining deeper understanding was simply the most practical one, since you needed to write lots of code when starting out as a software engineer.
But it's just as possible to gain knowledge about useful abstractions and clean code by using AI to do the work. You'll find out after a while which codebases get you stuck and which code abstractions leverage your AI, because it needs fewer tokens to read and extend your codebase.
But maybe pacing/procrastination might be relief valves?
Nobody is going to pay you for your artisanally crafted CSS code or whatever you were coding manually until last year. If you can do it faster/better than the AI, good for you. But it's not a contest and possibly your days of maintaining that lead might be numbered.
In the end, as long as the UI is styled alright, nobody will care that you pieced it together manually for hours and hours. More importantly, people are not going to pay you more for it than they'll pay the next guy getting a similar result in an hour of prompting AIs. They'll want you to move faster and do more.
That's what better tools do, they just cause people to expect more, better, and faster. And their expectations expand until they match the limitations of the new tools.
People seem to have this mental block where somehow the amount of stuff we ship is going to be a constant in the universe and we'll all be out of work and descend in despair. That's something that in the history of our species inventing tools has never really happened. I don't see any reason why AI would change that. Sure, there's a lot more we can do now. And it's a lot cheaper now. So we can now have a bit more of our proverbial cake and eat it. People will push this as far as they can and will want more and more of the good stuff.
And they'll need help getting all that stuff built. One way is a painful process of slowly prompting things together. Most people lack the skills to do that, don't know what to ask for and are in any case busy doing other things. That job, building stuff using tools, is still a job that needs doing. I'm quite busy currently doing that.
Yet now suddenly everyone is supposed to want to become a team lead of sorts (i.e. the agents become your team). I don't want to do that; I treat an AI agent as a pair in a pair-programming unit. Nothing more, nothing less. If someone wants to treat it differently, good on them, but they have no place insisting that what works for them works for me.
Personally, I really enjoy using AI. I have created my own cascade workflow to stop myself from "asking one more question". Every session is planned. Claude and Codex can be annoying as hell (for different reasons). Neither is sufficiently smart for me to trust them. I treat them as junior devs who never get tired, know a lot of facts but not necessarily how to build.
I think the evidence for this is quite clear: humans are NOT going to expend any energy, even mental energy, to think about something if they don't have to.
I am doing it again using an LLM. Legitimately, things that would have taken weeks are now done overnight. I still have to look at the code and at the generated C output, and I still have control over the architecture to make it easy for me and the LLM to work with in the future, etc.
Is this replacing my thinking? I am not sure. I suppose I would have learnt a lot more about compilers/transpilers had I persevered through it for months of manual writes and rewrites, but I would have been working solely on this. Instead, I also had some time to write custom NFS server support for a custom filesystem in Golang.
Let's say a person has 10 units of learning per week. Is the author actually claiming that that person must not deliver any results beyond their 10 units?
It makes some sense to have say 20 units of results and prioritize which ones to fully comprehend.
I suspect APIs / libraries / languages / platforms will have more churn due to AI. New platform, new system, need to learn. Once every 5 years might become every year or even more frequent. That would be a sort of inflation of knowledge and skills. It would affect the decision making about how to spend one's 10 units per week.
It's not by writing syntax that you get there. It's by creating software and gaining the experience of seeing it go wrong.
"Good judgement comes from experience. Experience comes from bad judgement."
AI just shortens the cycle without needing to type out syntax, so you get even more iterations, faster, and learn the lessons more quickly.
Some do not learn from that experience. They were never going to learn without AI either.
Socrates warned about what was being lost as philosophy was becoming written rather than oral... and he was right.
We can't even understand what was lost. Many methods of learning and thinking became entirely lost. You could say they were redundant, and they were. But... writing largely replaced oral traditions. It didn't just augment them.
He was that old school coder who had the skills to do philosophy and be an intellectual without writing. Writing was an augmentation for him. But for the new cohort... it was a new paradigm and old paradigm skills became absent.
It is very hard to imagine skilled coders becoming skilled without necessity pressing that skill acquisition. The diligent student will acquire some basic "manual coding" skill, but mostly the skill development will happen wherever the hard work is.
Study of senior drafter "red lines": what and why they changed the initial drawing, RFI response etc. Reverse engineering good work. Failed design studies etc.
SWE equivalents: PRs, code review, studying high quality codebases (guess what: LLMs are amazing at helping here), pair programming (learning why what the LLM did was wrong, how to improve it, etc), customer support, debugging prod incidents, studying post mortems etc
We don't hire juniors and throw them boilerplate and tiny bugs while expecting them to learn along the way ad hoc through some pair programming and the occasional deep end. We give them specific tasks and studies that develop their domain understanding and taste, actively support and mentor them, and expect them to drive some LLMs on the side to solve simple issues that still need human eyes on it.
i don't understand all this fear projected as if people won't have agency of learning just because LLMs make it easier to do certain things. i don't think it's contradictory at all. half the people here will never have to wrangle the bullshit i dealt with 20 years ago and i'm sure when i was dealing with it there was another 20 years of bullshit before me lol.
if you vibe code your app with no regard for the underlying code you will pay the price for it at some point in the future, anybody worth their salt will slow down enough to figure it out the "artisanal" way.
1) you use it to help write code that you still "own" and fully understand.
2) you use it as an abstraction layer to write and maintain the code for you. The code becomes a compile target in a sense. You would feel like it's someone else's code if you were asked to make changes without AI.
I think 2) is fine for things like prototypes, examples, references. Things that are short-lived, where the quality of the code or your understanding of it doesn't matter.
I think people get into trouble when they fool themselves and others by using 2) for work that requires 1), because it's quicker and easier. But it's a lie. They're mortgaging the codebase. And I think the atrophy sets in when people do this.
The strange sorts of errors and reasoning issues LLMs have also require a vigilance that is very draining to maintain. Likewise with parsing the inhuman communication styles of these things…
There's way too much money on this hype train now to point out that the emperor isn't wearing any clothes, and way too many people who always did think that "boilerplate spew" (the one thing AI really does well) is a valid form of programming rather than a shortcut to tech debt.
Even in a world with a lot of AI-generated code, there can still be people who get enough exposure to doing hard things. Definitely at this point in time, when AI can't really do all those hard things anyway, but even later, once it can.
you are thinking too myopically.
We have people who can still do maths well after the introduction of the calculator. We have people who can still spell after the introduction of spell check.
The junior only needs to train without using AI to gain the skills needed; that's called education. If they choose to rely solely on AI, and gimp their own education, that's on them.
The problem is that it was coined so early that we are way past the aphorism stage now.
Also could be shortened to "IA, not AI", and gets even more fun when you translate it to Spanish: "AI, no IA".
we learn by doing
It's "9 women can't make a baby in one month".
Will you have AI at the cost of a Slack subscription? At the cost of a teammate? Will it not be available, and you'll have to hire Anthropic workers with AI access?
If you asked 100 Americans what this aphorism means, I strongly doubt a single one could capture McLuhan's original meaning.
Engineers are accredited and in some countries even come with a title.
"Couldn't", or "wouldn't"? Early in my career I'd be happy doing anything basically, not much I "couldn't" do, given enough time. But nowadays, there is a long list of things I wouldn't do, even if I know I could, just because it's not fun.
I'm sure you can see the difference between a garbage collector and a nondeterministic slop generator
But it feels good to equivocate, so here we are.
Also the entire framing around "judgment" and "taste" is what LLMs love to parrot about the topic.
There are fair arguments in the post, but I totally agree that "writing is thinking", and I also hold myself to "if you didn't bother to write it, why would I bother to read it?"
The one thing I can tell you is that pangram is confidently wrong in this instance. And I now worry about how many may have relied on such assessments blindly in consequential places (school essays?). Which ties back to the thesis of my piece - where do you rely on AI and where do you rely on your own intelligence.
On a lighter note, decades ago, in middle school, we had an exercise to summarize a book we read. My school's librarian wrote ambiguously "write this in your own words". I asked her what she had meant by that. She had thought I'd copied it from somewhere even though it was all my own words. I went on to become the school topper in my final year for English (and one spot shy of being the school topper for Computer Science).
Arithmetic is a very, very small subset of math.
I think a lot of people are getting caught up in the discussion about how we, generally as technologists, are going to use AI. And it is looking like the industry is moving towards what used to be programmers now being team leads or project managers of AI teams.
So it's probably best for you to try to not get involved in those discussions, and when someone says "you" assume they mean "you (generally)"?
In the middle ground:
I'm putting together exercises for a C/Systems programming class I'm teaching in the fall.
Partway through this, for some reason [cough procrastination cough], I thought it would be fun to implement them in Scheme. My Scheme was already poor, and what meager skills I had are completely rusty. I used Claude to great effect as a tutor for that, but didn't have it code any of the solutions at all, of course. I could tell I was leveling up fast as I coded the things up.
Gotta use it in the right way if one wants to sharpen one's skills.
I also enjoy using AI. It makes it easier to get mundane work done quickly. Junior devs who never get tired is a great analogy. It's a force multiplier and for people with limited time (meetings, people management, planning etc.) they enable doing a lot in limited time. I can relate to more junior people being worried and/or some senior people concerns of quality though. I get a task done, review it, get another task done. I won't let it build something large on auto-pilot.
One thing that should be noted is that life was simpler back then. You could know the syntax of C or Pascal. You knew all the DOS calls or the standard libraries. You knew BIOS and the PC architecture. I still used reference manuals to look up some details I didn't have in my head.
Today software stacks tend to be a lot more complicated.
I have found myself going out and actually reading code less and less over the past year. I would be lying if I said that there are not fairly regular moments where I question the comfort level I have obtained with the system that I have built. I've seen it work with such a high accuracy and success rate so many times that my instinct at this point is to not question it. I keep waiting for this to really bite me in the ass somehow, but it just keeps not happening. Sure, there have been minor issues that have slipped through the cracks that caused me to backtrack, but that is nothing new. The difference is that with the previous way, I had painstakingly written that code and had a much more personal relationship with it. The code was the problem. Now whenever that does happen, I'm going back to the system and figuring out why it didn't get the answer right on its own, or why it didn't surface the whole thing in the plan to me prior to implementation.
I'm extremely confident the answer is yes.
But we have to judge how much value that particular thinking has.
As an instructor, I've implemented linked list functionality a zillion times. I'm on the long tail of skills-gain from each reimplementation. But every time I implement it, I'm gaining a little more.
Now, is it worth it? Probably not. The time spent on that marginal gain would be better spent implementing something more novel by hand. So punting to an LLM, while it costs me, might be a net gain in that case. But implementing another compiler? Hell yeah, that would be replacing my thinking. I've only ever made one PL/0 compiler plus that one yacc thing in compiler theory class, and those were a long time ago.
We should quantify the loss of thinking when we decide how much to punt the code creation to someone or something else.
That's why they're relaxed: it's just switching from one sort of unreliability to a slightly different flavour.
I wonder if this sort of trend will continue?
Also, if you need to control performance, you still need to know how the CPU cache and branch prediction work, both of which exist at the abstraction level of assembly.
If not the tool, then who's to blame? It's very clear that people who rely on LLMs for coding lose their skills. Just because you have a lot of parallel tasks going at once doesn't mean you're producing quality work. Who's reviewing it? Are you just blindly trusting it?
Beyond that, if that's all you do, you are basically proving you're replaceable. If you're smart, you'll reallocate intellectual capacity that was freed up by A.I. onto something A.I. can't do today.
When cars first appeared it took quite some knowledge and experience to even get the things started, let alone to keep them running. Modern cars are far better in all respects, and as a result modern drivers often don't have a clue what to do when the 'Check Engine' light appears. More recent cars actively resist attempts by their owners to fix problems since this is considered 'too dangerous', which can be true in the case of electric cars. That's the cost of progress. It is often worth it, but it does make sense to realise what it would take to go back in time to the days when we coded our software outside in the rain, uphill both ways, with only a cup of water to quench our thirst. In the dark. With wolves howling in the woods. OK, you get my drift.
Will there be something like 'software preppers' who prepare for the 'AIpocalypse' by keeping their laptops in shielded containers while studiously chugging along without any artificial assistance. Probably. As a hobby, at least, just like there are 'survivalist preppers' who make surviving some physical apocalypse their goal in some way or other.
Managers simply cannot know all of the details of what their reports write. They have to build abstractions.
It IS a waste of time if your only goal is the creation of the plan. However, one must be very self-aware of their goals because if one of the unacknowledged ones is to retain the ability to create plans, then you must continue creating plans yourself.
This is… not how humans work? If you have the time and energy to learn ten things, and then spend time babysitting a random number generator to produce evidence of 10 more units of work, you're paying an opportunity cost compared to someone who spends that time learning an eleventh thing. You can argue about who has more short-term value to a company… but who is the wiser person after a thirty-year career?
GenAI is like a non-deterministic compiler. Just like your manager's reports except with less logical thinking skill. I'd argue this is still problematic.
Writing syntax is still an important part of the experience. It is valuable because it requires you to spend time immersed in the nuts and bolts that hold software together. I'd compare it to cooking, if you have an assistant or a machine do everything and you never actually touch a knife or stir a pot, you'll lose your touch. But there is also something valuable about covering more ground and the additional experience that brings.
Is that generally the case though? I'm about two years into my first job in the industry and that's exactly my experience, and certainly frustrating...
I figured out some patterns in the way it behaves and could put more guard-rails in place so they hopefully won't bite me in the future (spelled out decision trees with specific triggers, standing orders, etc.), but some I can't categorize right now.
I assume by "do maths" you mean doing simple calculations, like adding a bunch of small numbers, in one's head. That's because in many situations it's more convenient to do so, than using a calculator. So the skill is preserved / practiced, because a calculator is too cumbersome to use. The skills of most people settle at the equilibrium where it takes the same effort to take out the calculator and focus on typing, as it would to strain the brain doing it without a calculator.
> We have people who can still spell after the introduction of spell check.
When using spell check to fix your document, you automatically learn to spell. Your skills improve by using the tool. A better analogy to AI would be an email client with a "Fix all and send"-button, where you never look at the output of the spell checker.
Even my colleagues who cheated their way through uni still needed critical thinking to do that and get away with cheating without being caught.
People might hate this but being a good cheat requires a lot of critical thinking.
the tool works better than stackoverflow, and i expect it eventually will improve enough that such people become as "productive" as the intelligent and conscientious engineer today.
That's a very bold claim. As a small example let's look at calculators - I remember a lot of claims that having access to calculators would make people's brains atrophy and they'll never be able to do actual math, but what I'm seeing in myself and most people around me is that we're using calculators (and more mathematical software) to tackle significantly more complex problems than people would be able to do if they rejected calculators.
To be clear, I'm not arguing that kids should be using a calculator from the first day of pre-school, but I do absolutely think that using them as later on as augmentation is clearly beneficial.
> In talking to engineering management across tech industry heavy-weights, it's apparent that software engineering is starting to split people into two nebulous groups:
> The first group will use A.I. to remove drudgery, move faster, and spend more time on the parts of the job that actually matter i.e. framing problems, making tradeoffs, spotting risks, creating clarity, and producing original insight.
It's the feeling of having done a lot of thinking for themselves without having actually done so.
I don't know, I don't doubt you're more productive. Broadly so. But the depth and rigor I think may be missing, as the article suggests.
As an aside, I suppose it's a good time for those nearing the end of their careers, those who no longer need to learn, to cash out and go all in on AI.
There is already research literally showing that on average it is a net loss on focus, learning and critical thinking skills.
It's only your opinion that is provably false.
First, there are still people who don't like high level languages and don't use them, because they find assembly better.
Second, I personally work in a field where I need to consult the source of truth, the actual binary, and not the high level source code - precisely because the high level of abstraction is obscuring the real mechanics of software and someone needs to debug and clean up the mess done by "high level thinkers".
High level programming languages are only an illusion (albeit a good one) but good engineers remember that illusion is an illusion.
(A competent assembly programmer can go miles around a competent high-level programmer, that's still true in 2026...)
Software code, on the other hand, is extremely formal: either it works perfectly as intended, it works crappily and keeps breaking in various edge cases, or it just doesn't work (the last two are just variants of the same dysfunctionality; technically it's a binary state). There is no scenario where broken code somehow ends up working and delivering, or maybe 1 in a trillion.
Also, the change is so fast that the failure is immediately obvious to everybody; it's not a gradual change of thinking over a few decades/generations.
LLMs are getting impressive, but anybody claiming there is no massive long-term harm in getting to what we now call proper seniority is... I don't know: delusional, a junior who never walked that long and hard-won path, doing PR for LLMs at all costs, or some other similar type. Or they simply have some narrow use case working great for them long term which definitely can't be transferred to the whole industry, like 1-man indie game dev.
Dr. Steven Skultety & Dr. Gad Saad discussed this in a recent video / podcast.
This link is time stamped to the topic https://youtu.be/7mcQf9E3YRo?t=1058
1) Day job 2) Side project
It would be unprofessional to treat the first like the second.
For instance, in the old world, if you wanted to change an interface, you might have to edit 5 or 6 files to add your new function to the implementations. This is pretty routine, and you won't need to concentrate that much if you're used to it, so you can spend that low-effort time thinking about the bigger picture.
You can't figure this out instantly unless you review everything the LLM produces, which I am not doing. So the round-trip time is pretty long, but I can trace it back to the intent now because I commit every architecture decision as an ADR, which is where I pour most of my energy. These are part of the repo.
Using these ADRs helped a lot because most of the assumptions of the LLM get surfaced early on, and you restrict the implementation leeway.
To maintain relevance, we must find common ground. There is no true objectivity, because every sign must be built up from an arbitrary ground. At the very least, there will be a conflict of aesthetics.
The problem with LLMs is that they avoid the ground entirely, making them entirely ignorant to meaning. The only intention an LLM has is to preserve the familiarity of expression.
So yes, this kind of AI will not accomplish any epistemology; unless of course, it is truly able to facilitate a functional system of logic, and to ground that system near the user. I'm not going to hold my breath.
I think the great mistake of "good ole fashioned AI" was to build it from a perspective of objectivity. This constrains every grammar to the "context-free" category, and situates every expression to a singular fixed ground. Nothing can be ambiguous: therefore nothing can express (or interpret) uncertainty or metaphor.
What we really need is to recreate software from a subjective perspective. That's what I've been working on for the last few years... So far, it's harder than I expected; but it feels so close.
If you're not coding anymore, but using AI tools, you're developing skills in using those AI tools, and your code abilities will atrophy unless exercised elsewhere.
I do think that these pieces sometimes smuggle in a nostalgic picture of how engineers "really" learn which has only ever been partly true.
Lots of people use firebase, supabase etc.
Many people's jobs are centered around using Salesforce
It all makes me uncomfortable: I want to be able to work without internet. But it's getting more difficult to do.
It wasn't that much different from SWE - mostly looking up catalogs, connecting certain pre-made pieces together with custom parts and lots of testing of the final plan to make sure there are no collisions and every movement is constrained properly.
95% of the time no load or sizing calculations were necessary - we just oversized everything based on tacit knowledge (the greybeards reviewing the plans) since these machines were not mass produced and choosing somewhat bigger parts was not expensive given that these machines would operate and produce value 24/7 for years.
(I hope the analogy to software engineering is visible!)
What I'm saying is that the level of "engineering rigor" heavily depends on the field where engineers are operating within. Even certain SWE fields (healthcare, finance, aviation etc.) have more regulation and require more rigor than others.
Where I work, there are plenty of non licensed engineers, but we pay a 3rd party agency for regulatory approval. The people who work for that agency are licensed engineers. Their expertise is knowing the regulations backwards and forwards.
Here's what I think is happening within industry. More and more work done by people with engineering job titles consists of organizing and arranging things, fitting things together, troubleshooting, dealing with vendors, etc. The reason is the complexity of products. As the number of "things" in a product increases by O(n), the number of relationships increases by O(n^2), so the majority of work has to do with relationships. A small fraction of engineers engages in traditional quantitative engineering. In my observation, the average age of those people is around 60, with a few in their 70s.
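The quadratic growth of relationships is easy to check with a quick sketch (the component counts here are purely illustrative, not from the comment above):

```python
# Minimal sketch: pairwise relationships among n components grow quadratically.
def pairwise_relationships(n: int) -> int:
    # each unordered pair of components is a potential interface/relationship
    return n * (n - 1) // 2

for n in (10, 100, 1000):
    print(f"{n:>5} components -> {pairwise_relationships(n):>7} potential relationships")
```

A 10x increase in parts yields a roughly 100x increase in potential relationships, which is why integration and troubleshooting work comes to dominate.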
This is not a binary.
To say nothing of em dashes which I loved using before LLMs. Now every time I use it I'm expecting to see a comment like this calling me out.
Soon it will be bad taste to simply use proper grammer -- I'll leave this typo intentionally :)
> "if you didn't bother to write it, why would I bother to read it"?
The author said he wrote it himself, and used LLM to suggest what sentences can be reworded.
(We obviously live in a more nuanced world than most social media interactions might make you think :P)
> On a lighter note, decades ago, in middle school, we had an exercise to summarize a book we read.
My first experience with plagiarism was in first grade, when we were told to write a book report about a subject during our library time. I diligently took my book on the musk ox and copied three pages word-for-word into my notebook as my report. I can't remember when or how we learned this wasn't "right", but I still think back on that and laugh.
Anyway, there are a lot of people producing mediocre software (with or without AI). That's pretty much a constant. I remember people using Visual Basic. Exact same thing. The problem isn't the tools but the people using them. There's a learning curve and most people are still behind that curve.
I would say that today's graduates are IMO a bit better than a few decades ago but there are still many graduating who are just not good at writing computer software and don't really have the aptitude for that (or maybe the interest in getting good). That's what happens when the pipeline of people coming in are people who want to make money and the institution is mostly a degree factory.
--
A lot of students (and developers out there too) are able to follow instructions and pass the test.
A smaller portion of them are able to divide up a task into the "this is what I need to do to accomplish that task".
Even fewer of them are able to work through the process of identifying the cause of a problem they haven't seen before and work through to figure out what the solution for that problem is.
--
... There are also a lot of people out there that aren't even able to fall into the first group without copying and pasting from another source. I've seen the "stack sort" (https://xkcd.com/1185/ https://gkoberger.github.io/stacksort/) at work professionally. People copying and pasting from Stack Overflow (back in the day) without understanding what they're writing.
Now, they do it with AI. Take the contents of the Jira description, paste it into some text box, submit the new code as a PR, take the feedback from the PR and paste it back into the box and repeat that a few times. I've seen PRs with "you're absolutely correct, here are the updates you requested" be sent back to me for review again.
This is not a new thing. AI didn't cause it, but AI is exacerbating the issue in professional programming by making the people who are not much more than some meat between one text box and another (yes, I'm being a bit harsh there), and the people who need instructions but don't understand design, more "productive" while overwhelming the more senior developers.
... And this also becomes a set of permanent training wheels on developers who might be able to learn more if they had to do it. That applies at all levels. One needs to practice without training wheels and learn from mistakes to get better.
So what does that tell me?
Better yet, for about 30% of them, having the LLM produce the slop directly would have yielded better outcomes; having them slop something nets terrible slop. But at least I can reshape it, because even the LLM won't do something that stupid.
Daily.
I think only twice have I agreed with it.
Like the way it will always give you code if you ask, even if the code is crap, it will always give you a design if you ask. Won't be a good design, though.
But I can juggle 2 workstreams in a day easily, and I can trivially swap projects in and out of the "hot path" as demanded by prioritization or blockers; before LLM coding both of those were a lot harder.
Nearly certainly. Just turns out that depth and rigour matters a lot less than I would've hoped. Depressing, really.
Because the easier path seemingly delivers what's expected of them. Sigh, they may even be required to take the faster path.
I've seen many junior unable to walk that necessary path before LLMs were a thing.
BUT, BUT! I keep the index.
My favourite quote from Donald Rumsfeld (a very bad human being, but this is still good)
> Reports that say that something hasn't happened are always interesting to me, because as we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns - the ones we don't know we don't know. And if one looks throughout the history of our country and other free countries, it is the latter category that tends to be the difficult ones.
What I optimise for is to have as many "known unknowns" as possible. I know a concept, process or a tool exists, but don't understand it or know how to do it. But because I know it exists, I won't start inventing it again from scratch when I need it.
Like if one needs to do some esoteric task, they might start figuring it out from scratch. But because the index in my brain contains a link ("known unknown") to a tool/process that makes that specific thing a LOT easier, I can start looking into it more.
Or I might need to do something common like plumbing or some electrical work at home. Do I know how to do that? No. But I Know A Guy I can call, again externalising the knowledge. Either they come over and help me do it or talk me through the process of adjusting the thermostat in my shower faucet (you need to use WAY more force than I was comfortable with without an expert on the phone btw... there are no hidden screws, you just rip the bits off :D)
The idea that there will be less to think about seems a bit short-sighted. Humans are very good at moving to higher levels of abstraction, often with more complexity to deal with, not less.
In a way, this is less of a cost issue than the fact that some/many engineers do not seem to be willing or able to host things themselves anymore and will happily outsource every part of their stack to managed services, be it CDN, hosting, databases, etc. I don't know why that's not more alarming than the LLMs.
What? I've heard many takes on what AI lacks, but never this one. We had ChatGPT being able to solve an Erdős problem on its own yesterday [0]; how could you explain that if it cannot do logic?
I think it means something like we're trapped in the constraints of the medium. Tweets say more about the environment of twitter than whatever message happened to be sent.
but I think I'm off on that; I'll look this person up and find out!
128 GB unified memory, Nvidia chip and ARM CPU for just around 3k€ net. They easily push ~400 input and ~100 output tokens per second per device on, say, gpt-oss-120b. With two devices in a cluster, that's enough performance for >20 concurrent RAG users or >3 "AI augmented" developers.
And they don't even pull that much power.
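A back-of-envelope check of that concurrency figure is straightforward. All the numbers besides the 100 tokens/second per device are my own assumptions (average answer length, user think time), not measurements:

```python
# Back-of-envelope check of the concurrency claim above.
# Assumed: ~300-token average RAG answer, ~60 s of reading/typing per user
# between requests; 100 output tok/s per device comes from the comment above.
DEVICES = 2
OUTPUT_TPS = 100          # tokens/second generated per device
ANSWER_TOKENS = 300       # assumed average answer length
THINK_TIME_S = 60         # assumed idle time per user between requests

cluster_tps = DEVICES * OUTPUT_TPS
seconds_per_answer = ANSWER_TOKENS / cluster_tps   # 1.5 s of generation time
concurrent_users = (THINK_TIME_S + seconds_per_answer) / seconds_per_answer
print(f"~{concurrent_users:.0f} concurrent users")
```

Under these assumptions the cluster sustains roughly 40 concurrent users, so ">20" looks comfortably conservative.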
This is a pet peeve of mine, so while I understand what you mean, I will challenge you to come up with a strict definition that excludes software engineering!
And since I've had this discussion before, I'll pre-emptively hazard a guess that the argument boils down to "rigor", and point out that a) economic feasibility is a key part of engineering, b) the level of rigor applied to any project is a function of economics, and c) the economics of software projects is a very wide range.
Put another way, statistically most devs work on projects where the blast radius of failure is some minor inconvenience to like, 5 users. We really don't need rigor there, so I can see where you're coming from. But on the other extreme like aviation software, an appropriately extreme level of rigor is applied.
Ollama/llamafile/vllm/llama.cpp are free. Qwen/kimi/deepseek are free. Pi.dev/OpenCode are free. If you're using a SaaS AI subscription that's fine, but that's hardly the only option.
It's not really that hard to get a degree in engineering if your only goal is the degree itself.
Both require manual "labor" which leads to learning.
You are, of course, right that the idea that someone could finish a serious engineering degree without being able to think is ridiculous.
Why would you as a worker bother doing everything pristinely? There's no reward for you. The management of the company will fire you the day they see fit anyway. Not to mention companies tend to give higher salary raises to those who leave and later return - a true slap in the face of 'loyalty'.
I can tell you this, the person you're replying to comes from the overwhelming majority/generality. You, on the other hand, are that one guy.
Of course even my comment is a bit general. You're not "one" guy literally. But you are an extreme minority that is small enough such that common English vernacular in software does not refer to you.
Interestingly, he placed a lot of importance on memory... where you emphasize manipulation of concepts.
And we don't need words to think; cognitive problem solving and language processing are separate processes [1]
We will shift the problems we need to think about. Same as always; humanity isn't still working out how to build stone pyramids. Did we stop thinking? No, we just thought about a different todo list.
[1] https://www.scientificamerican.com/article/you-dont-need-wor...
What does "subjective" mean here? Are you talking about just-in-time software? That is, software that users get mold on the fly?
I'm reminded immediately of the Enochian language which purportedly had the remarkable property of having a direct, unambiguous, 1-to-1 correspondence with the things being signified. To utter, and hear, any expression in Enochian is to directly transfer the author's intent into the listener's mind, wholly intact and unmodified:
Every Letter signifieth the member of the substance whereof it speaketh.
Every word signifieth the quiddity of the substance.
- John Dee, "A true & faithful relation of what passed for many yeers between Dr. John Dee ... and some spirits," 1659 [0].
The Tower of Babel is an allegory for the weak correspondence between human natural language and the things it attempts to signify (as opposed to the supposedly strong 1-to-1 correspondence of Enochian). The tongues are confused, people use the same words to signify different referents entirely, or cannot agree on which term should be used to signify a single concept, and the society collapses. This is similar to what Orwell wrote about, and we have already implemented Orwell's vision, sociopolitically, in the early 21st century, through the culture war (nobody can define "man" or "woman" any more, sometimes the word "man" is used to refer to a "woman," etc.).

LLMs just accelerate this process of severing any connection whatsoever between signified and signifier. In some ways they are maximally Babelian, in that they maximize confusion by increasing the quantity of signifiers produced while minimizing the amount of time spent ensuring that the things we want signified are being accurately represented.
Speaking more broadly, I think there is much confusion in the spheres of both psychology and religion/spirituality/mysticism in their mutual inability to "come to terms" and agree upon which words should be used to refer to particular phenomenological experiences, or come to a mutual understanding of what those words even mean (try, for instance, to faithfully recreate, in your own mind, someone's written recollection of a psychedelic experience on erowid).
[0] https://archive.org/details/truefaithfulrela00deej/page/92/m...
For "the medium is the message", "medium" refers to any tool that acts as an extension of yourself. TV is an extension of your community, even things like light bulbs (extends your vision) are included in his meaning.
McLuhan argued that all forms of media like that carry a message that's more than just their content. "The message" in that argument refers to the message the medium itself brings rather than its content. For example, the airplane is "used for" speeding up travel over long distance, but the message of its medium itself is to "dissolve the railway form of city, politics, and association, quite independently of what the airplane is used for."
You can see it happening via online media that extend ourselves across the internet. Think of how, once easy video creation via YouTube became widespread, web comics stopped being a popular medium for comedy online. It's not like the web comics faded because they got worse; it's that they faded into a niche format because people didn't want to communicate via static images anymore. Or how, once short form videos on TikTok got big, you saw other platforms shift to copy the paradigm. McLuhan's point is that it's not just the content of those short form videos that matters; it's the message of the format itself. People's attention spans grow shorter because of the format, and before too long, we saw the tastes and expectations of the masses change. Reddit's monosite-with-subcommunities format and dopamine-triggering voting feedback mechanism were its message more than any actual content posted there, and it's why traditional forums are niche and dwindling.
If you want to get a pretty good understanding of it, just read the first chapter of his book Understanding Media. It's short and relatively straightforward.
Firstly, Twitter has an upper bound on the complexity of thoughts it can carry due to its character limit (historically 140, now 280 but still too short).
Secondly, a biased or partial platform constrains and filters the messages that are allowed to be carried on it. This was Chomsky's basic observation in Manufacturing Consent where he discussed his propaganda model and the four "filters" in front of the mass media.
Finally, social media has turned "show business [into] an ordinary daily way of survival. It's called role-playing." [0] The content and messages disseminated by online personas and influencers are not authentic; they do not even originate from a real person, but a "hyperreal" identity (to take language from Baudrillard) [0]:
> You are just an image on the air. When you don't have a physical body, you're a _discarnate being_ [...] and this has been one of the big effects of the electric age. It has deprived people of their public identity.
Emphasis mine. Influencers have been sepia-tinted by the profit orientation of the medium and their messages do not correspond to a position authentically held. You must now look and act a certain way to appease the algorithm, and by extension the audience.

If nothing else, one should at least recognize that people primarily identify through audiovisual media now, when historically due to lack of bandwidth, lack of computing and technology, etc. it was far more common for one to represent themselves through literate media - even as recently as IRC. You can come to your own conclusions on the relative merits and differences between textual vs. audiovisual media, I will not waffle on about this at length here.
The medium itself is reshaping the ways people represent, think about, and negotiate their own self-concept and identity. This is beyond whatever banal tweets (messages) about what McSandwich™ your favourite influencer ate for lunch, and it's this phenomenon that is important and worth examining - not the sandwich.
[0] Marshall McLuhan in Conversation with Mike McManus, 1977. https://www.tvo.org/transcript/155847
Of course I want the best of the best who are top notch and rigorously trained working on mission critical software.
is doing a lot of work to avoid engaging with the actual argument.
[1] Depending on the topic and the level of knowledge of it.
It still takes roughly nine months to make a human baby, regardless of how many women or babies are involved!
WRT logic, there are multiple occasions of LLMs answering incorrectly on trivial logic puzzles. Of course, with each occasion becoming public they are added to training data and overfitted on, but if you embed them in a more subtle way LLMs will fail again.
"Structured, mature, legally enforced, physically grounded standards based approach to the construction of repeatable, reliable, verifiable, artifacts under stable (to the degree that matters) external constraints".
Some niche software development (e.g. NASA/JPL coding projects with special rules, practices, MISRA etc) can look like that.
99.9% of the time though, software "engineering" is an ad hoc, mix and match, semi-random, always-changing-requirements-and-environments, half-art, half-guess process, by unlicensed practitioners, that is only regulated in some minor aspects of its operation (like GDPR, or accessibility requirements), if that.
I think this extends to other parts of life, too. I still remember fondly playing a game over and over again back in high school, when I did not have the Internet and had to borrow CDs from my friends - but when I went to university and had access to pretty much every game freely on the intranet, I rarely did that anymore. That's why I always think an abundance of X may not be the best option for me. That probably includes money, too.
Engineers sucked then as much as they suck now
I do have to say I was appalled by some of the tests I had as an exchange student in the US (I will not name the uni in question, but it ranked around 60 in the US rankings). I remember a computer graphics test where a lot of questions were of the type "Which companies created the consortium maintaining the OpenGL specification?"... it was fully possible to obtain a passing grade by rote memorization of facts alone. So I have no trouble believing that in the US it's possible at some universities to get a software engineering degree without understanding or critical thinking.
(Take home) projects are easier than ever thanks to AI. In the past, you at least had to track down some person to do the work for you.
Also of note: calculators merely solve intermediary steps. LLMs are increasingly designed to do full one-shot work: longer context, deep thinking, agentic loops.
That is a bold and frankly unsupportable claim.
The only thing worth asking people is: what have you produced? Within this one question is so much detail that any other artifact is moot.
And putting aside the vanishing skill, there is also an issue of volume.
In talking to engineering management across tech industry heavy-weights, it's apparent that software engineering is starting to split people into two nebulous groups:
The software engineers who will be most valuable in the future are not the ones who do everything themselves. They are the ones who refuse to spend time on work that A.I. can do for them, while still understanding everything that is done on their behalf. They use the time savings to operate at a higher level. They elevate their thought process through rigor rather than outsourcing it.
That distinction matters more than people think.
In this post:
A.I. can already generate code, summarize meetings, explain concepts, produce design drafts, and write status updates in seconds. That is useful but also dangerous.
The danger is not that A.I. will make people lazy in some vague moral sense. It is that it makes it easy to simulate competence without building competence.
There is now a very real temptation to hand a model a problem, receive a plausible answer, and then repeat that answer as if it reflects your own understanding. That is close to plagiarism, but in some ways worse. At least when a student copies from another person, there is still a real human source behind the answer. Here, people can present machine-produced reasoning they do not understand, cannot defend, and could not reproduce on their own.
That is intellectual dependency being labeled as leverage.
And that dependency has a cost. Every time you substitute generated output for your own comprehension, you are skipping the exercises / reps that build judgment. You are trading long-term capability for short-term appearance.
I'm going to share some analogies to make this line of thought more concrete and approachable.
The best engineers will absolutely use A.I. more, not less. But they will use it with a very different posture.
They will let A.I. draft boilerplate, summarize docs, generate test scaffolding, propose refactorings, surface possible failure modes, accelerate investigation, and compress routine work. They will happily offload the mechanical parts of the job. But they will also:
Then they will take the reclaimed time and invest it where it matters most.
For years, people have confused software engineering with code production. That confusion is now getting exposed.
If the job were mainly about producing syntactically valid code, then of course A.I. would be on a direct path to replacing large parts of the profession. But that was never the highest-value part of the work. The value was always in judgment.
The valuable engineer is the one who sees the hidden constraint before it causes an outage. The one who notices that the team is solving the wrong problem. The one who reduces a vague debate into crisp tradeoffs. The one who identifies the missing abstraction. The one who can debug reality, not just read code. The one who can create clarity where everyone else sees noise.
A.I. can support that work. It cannot own it.
In fact, the engineers who produce the most value in the future will often be the ones generating the knowledge that makes A.I. more useful in the first place. They will create the design principles, domain understanding, patterns, context, and decision frameworks that improve the machineβs effectiveness. They will feed the system with better questions, better constraints, and better corrections.
In that world, the engineer is not replaced by A.I. The engineer becomes more leveraged because they are operating above the level of raw output.
This issue is especially important for people early in their careers.
Early years matter because that is when foundational skills are formed. Debugging instinct. System intuition. Precision. Taste. Skepticism. The ability to decompose a problem. The ability to explain why something works, not just that it appears to work.
Those skills are built through friction. Through struggle. Through getting things wrong and fixing them. Through tracing failures back to root cause. Through writing something and realizing it does not survive contact with reality.
That process is not optional. It is how engineers acquire and elevate their competency. If early-career engineers use A.I. to remove all struggle from the learning loop, they are hurting their development.
Someone who uses A.I. to answer every hard question may look efficient for a quarter or two. But they may also be quietly failing to build the very capabilities their future depends on. They are skipping the stage where understanding is forged.
Going back to the analogies: This is like copying answers through university and then showing up to a job that requires independent thought. It is like using a calculator for every arithmetic task and never developing number sense. It is like relying on self-driving features before learning how to actually drive. The support system may make you look functional, but it does not make you capable.
And eventually raw capability is the main thing that matters. There is no substitute.
This is the part that some people may not want to hear --
You can outsource mechanics, accelerate research and compress routine tasks. You can remove enormous amounts of low-value labor. All of that is good and should happen.
But you cannot skip the formation of skill and expect to possess it anyway.
That is the central mistake behind the most naive uses of A.I. People think they are saving time, when in reality they are often deferring a bill that will come due later in the form of weak judgment, shallow understanding, and limited adaptability.
The dividing line is simple:
One path compounds, while the other hollows you out and sets you up for irrelevance.
That is why the future does not belong to the engineers who merely use A.I. It belongs to the engineers who know exactly what to delegate, exactly what to own, and exactly how to turn time savings into better thinking.
If you haven't already, it's time to make informed choices about how you shape your future in the industry.
Engineering management will face the same dividing line.
Some leaders will recognize the difference between engineers who use A.I. to accelerate understanding and engineers who use it to simulate understanding. Others will not. That gap will matter more than many organizations realize.
One of the defining traits of strong engineering leadership in the A.I. era will be the ability to distinguish polished output from real judgment. Leaders who cannot tell the difference may reward speed, fluency, and presentation while missing the deeper signals of technical depth: originality, rigor, sound tradeoff analysis, and the ability to reason clearly about unfamiliar problems.
That creates organizational risk.
The most capable engineers are often the ones producing the insight, context, design judgment, and corrective feedback that make both teams and A.I. systems more effective. If an organization allows low-understanding, high-fluency work to spread unchecked, it does not just lower the quality of individual output. It starts to degrade the knowledge environment itself. Reviews get weaker. Design discussions get shallower. Documents become more polished and less useful. Over time, the organization becomes worse at generating the very clarity and technical judgment it depends on.
This is why leadership matters so much here. The challenge is not merely adopting A.I. tools. It is protecting the conditions under which real thinking, learning, and craftsmanship continue to thrive.
That starts with hiring. Organizations will need better ways to detect genuine understanding rather than surface-level fluency. They will need interview loops that test reasoning, not just polished answers. They will need evaluation systems that reward clarity, depth, sound judgment, and durable technical contribution rather than sheer output volume.
It also affects team design and culture. Strong engineers should not spend disproportionate amounts of time cleaning up plausible but shallow work generated by people who have outsourced their thinking. If leadership does not actively guard against that, high performers become force multipliers for everyone except themselves. That is a fast path to frustration, lowered standards, and eventual attrition.
The organizations that handle this well will not be the ones that simply push A.I. adoption hardest. They will be the ones that learn to separate leverage from dependency, acceleration from imitation, and genuine capability from convincing output.
In the A.I. era, organizational quality will increasingly depend on whether leadership can still recognize the difference.
Editorial note: Like all content on this site, the views expressed here are my own and do not necessarily reflect the views of my employer.
There are multiple occasions of me answering incorrectly on trivial logic puzzles. Is that enough for you to deduce that I "lack" logic?
Humans make mistakes all the time, and indeed we say "To err is human"; why should we expect AI not to?
Even most of the projects I personally have worked on simply did not need "engineering" as such, but other projects where uptime was critical and the cost of failure was high, there was a much higher level of rigor.
Also, software engineering is ahead of a few other disciplines of engineering on rigor in some dimensions. I feel like most software engineers don't understand how good software tools are at change management compared to pretty much anything else. (and that having good change management is a baseline, as opposed to a decent chance of not having any at all).
What you'd take is irrelevant if the HR/recruiter doing the initial screening of resumes is looking at an oversupply of candidates with degrees.
Hiring is broken in many ways. Candidates without degrees are faring even worse at the initial recruiter screening stage due to the poor market.
In my EU country, academic inflation is so bad, due to free education and psyopping everyone onto the path of academia, that not having an MSc is basically a red flag to companies when applying for a SW job; most candidates have one, which means you're expected to have one too if you want to get hired.
In practice, what this means is that you can read some subject many times, but you would still struggle to reproduce the content by yourself. That is why, when learning, it is not sufficient to just read the material several times.
All that LLMs and other generative models have done is enable an order of magnitude more stuff to be created cheaply. This then puts the onus and cost on the consumer of that output, hence why everyone is exhausted after a day of work that just involves looking over output. This volume of output will cause people to stop looking at all of the output and just trust the randomly generated code, and in time the quality will suffer.
With any new technology, subsequent drudgery depends on the technology, its concomitant economics, and the imagination of the people using it.
On paper your CPU can execute at least one instruction per core per cycle, but that's throughput, not latency: if you actually only have one instruction to run, it takes several cycles before its result is ready.
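The latency-vs-throughput distinction can be sketched with a toy pipeline model. The cycle counts here are assumed for illustration, not taken from any real CPU:

```python
# Toy pipeline model; LATENCY and issue rate are assumed, not real CPU figures.
LATENCY = 4   # cycles until a single instruction's result is ready
ISSUE = 1     # independent instructions the core can start each cycle

def cycles_for(n_independent: int) -> int:
    # first result lands after LATENCY cycles; with a full pipeline,
    # one more instruction completes every cycle after that
    return LATENCY + (n_independent - 1) // ISSUE

print(cycles_for(1))    # latency dominates a lone instruction
print(cycles_for(100))  # ~1 instruction/cycle once the pipeline fills
```

One instruction alone takes the full latency (4 cycles in this toy model), while a long stream of independent instructions averages close to one per cycle.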
Take juggling for example - something that was on the HN homepage last week. You can learn everything you need to know about juggling through a post or a book or an educational video. But can you juggle after all that book learning? Not at all - to be able to juggle one has to spend time practicing, and no amount of reading can meaningfully compress that process.
Muscle memory required for juggling is not a 1:1 correlation to experience, but I feel like it's close enough to it.
Also, you can get a baby tonight if you steal one from the maternity ward.
The real question is how LLMs turn the mythical man-month on its head. If we accept AI-generated code, can an agentic AI swarm make software faster simply by parallelizing, in a way that nine women can't make a baby in one month, because they're an AI, not human, and communicate in a different way?
The pitfall of AI coding is that previously every shiny tangent that was a distraction, is now a rabbit hole to be leaped into for an afternoon, if you feel like it. It's like that ancient Chinese curse, may you live in interesting times. Everybody can recreate an MVP of Twitter in a weekend now when previously that was just a claim a certain type of people made.
> "This one is a bit different because people did look at it, and the humans that looked at it just collectively made a slight wrong turn at move one," says Terence Tao, a mathematician at the University of California, Los Angeles, who has become a prominent scorekeeper for AI's push into his field. "What's beginning to emerge is that the problem was maybe easier than expected, and it was like there was some kind of mental block."
> "There was kind of a standard sequence of moves that everyone who worked on the problem previously started by doing," Tao says. The LLM took an entirely different route, using a formula that was well known in related parts of math, but which no one had thought to apply to this type of question.
> "The raw output of ChatGPT's proof was actually quite poor. So it required an expert to kind of sift through and actually understand what it was trying to say," Lichtman says. But now he and Tao have shortened the proof so that it better distills the LLM's key insight.
> More importantly, they already see other potential applications of the AI's cognitive leap. "We have discovered a new way to think about large numbers and their anatomy," Tao says. "It's a nice achievement. I think the jury is still out on the long-term significance."
You can debate whether the LLM used logic or not. I don't think you can debate that the LLM has in this case elevated human thinking, by leading us to a solution that had eluded world-class mathematicians for 60 years. And a new way to think "about large numbers and their anatomy".
And if it works for Terence Tao and Erdős problems, then I'm certainly not above using AI to help brainstorm solutions for my little app at work.
Which is to say, engineer the job title is distinct from engineering the activity is distinct from engineer the accreditation.
The definition I always saw used was this one, I think:
> Engineering is the profession in which a knowledge of the mathematical and natural sciences gained by study, experience, and practice is applied with judgment to develop ways to utilize, economically, the materials and forces of nature for the benefit of mankind.
This sounds like it should exclude software design and development. Except it doesn't need to, and it's not really useful to exclude it simply because the definition isn't broad enough. The definition isn't engineering. The definition is trying to describe and encapsulate the reality of engineering. Nuclear and modern electrical engineers frequently never create anything physical in their careers whatsoever. Nuclear engineers manage power generation at facilities that others designed and built, while electrical engineers are frequently just dealing with signal processing. They are not less rigorous in their methodology.
The reality is that engineering is the methodical application of constraints to solve a problem. And it is the methodology that is the valuable aspect. The knowledge is necessary for each discipline, but it is itself fundamentally a prerequisite. There is a reason engineering is a single school of many disciplines.
Meanwhile, the reason that software engineering looks like half-art and half-guess has a lot more to do with software as a non-theoretical field of study only being about 60 years old in practical terms. The fundamental works of the field like The Art of Computer Programming haven't even been written yet.
Whatever happens to software development and operational systems administration in the next 50 years, though, both roles would almost certainly benefit society by becoming actual professions. Their responsibility to society as a whole has been allowed to be understated, and we're well past the days when it sounded unreasonable that a computer bug could cause the kinds of deaths and damages we'd see from a civil-works failure or an automotive design flaw. Indeed, that actually sounds fortunate given some of the software catastrophes that have occurred.
Other than that part (most countries in the world do not have regulations or licensing requirements for most engineering disciplines) I would agree. But I would also point out the set of software projects that meet that definition is much larger than those you listed.
As mentioned, it's a matter of economics, so the rigor scales with the pain a failure can cause. Hence any software with a high blast radius is built that rigorously, probably even more so. There are entire categories (not just individual examples!) of such projects. An obvious category is platforms that run or build other applications: OS kernels, databases, compilers, frameworks, cloud platforms (yes, those nines are an industry standard), and so on.
Then there are those regulated ones like automotive, aviation and medical software. There is even a case to be made for critical financial software.
Another less obvious category is any large software-services company with on-call engineers: the cost of paging expensive engineers climbs quickly, so quality processes get installed quickly, and those processes basically amount to the criteria you listed.
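For reference, those availability "nines" translate into concrete downtime budgets; a quick back-of-envelope calculation:

```python
# Allowed downtime per year for each availability "nine".
MINUTES_PER_YEAR = 365 * 24 * 60  # ignoring leap years for simplicity

def downtime_minutes_per_year(availability: float) -> float:
    """Minutes of downtime per year permitted at a given availability."""
    return MINUTES_PER_YEAR * (1.0 - availability)

for a in (0.99, 0.999, 0.9999, 0.99999):
    print(f"{a:.5f} -> {downtime_minutes_per_year(a):7.1f} min/yr")
```

Going from three nines (~8.8 hours a year) to five nines (~5 minutes a year) is exactly the kind of jump that forces the rigor described above.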
That internal LoB app with 5 users? That level of rigor simply does not make economic sense. Which is probably what you mean by:
> 99.9% of the time though, software "engineering" is an ad hoc, mix and match, semi-random, always changing requirements and environments, half-art half-guess process, by unlicensed practitioners, that is only regulated in some minor aspects of its operation (like GDPR, or accessibility requirements), if that.
To that I'll say, as someone whose first site outage as an intern took down an actual industrial manufacturing factory (not an AbstractFactoryFactory!), a surprisingly large fraction of projects in other engineering disciplines match that description ;-)
I can live a happy life without struggling for basic needs and without playing golf all day long. If you strip off every obligation from life, then you exist, not live.
Facing challenges and overcoming obstacles, together with friends and family, is what makes me happy. When you're rich, most people only care about your money, not the person you are. And I think that's exactly what a happy life is about.
> The nearest related Chinese expression translates as "Better to be a dog in times of tranquility than a human in times of chaos."
https://en.wikipedia.org/wiki/May_you_live_in_interesting_ti...
There's a good point in here along the lines of "if you need X in a month, and someone else has something that's 90% of what you want X to be, can you buy it from them before starting any crazy internal death marches instead?"
> The real question is: how do LLMs turn The Mythical Man-Month on its head? If we accept AI-generated code, can an agentic AI swarm make software faster simply by parallelizing, in a way that nine women can't make a baby in one month, because the agents are AI, not human, and communicate differently?
This is quite possibly only a one-time shift in the baseline, though. Give it a few years and "the fastest way an LLM tool can do it" will be what gets tossed out as an estimate, and stakeholders will still want you to do it in a tenth the time...
And they weren't. They were craftsmen and tradesmen, e.g. stonemasons.
As far as I know, all women everywhere start out not pregnant.
That's the subject, the only word that is NOT doing any work there (since both regular and software engineering produce artifacts).
Words that do the heavy work in that phrase are:
structured, mature, legally enforced, standards-based approach, for repeatable, reliable, verifiable artifacts, under stable external constraints
Software can sometimes appear to touch those.
E.g. there are "standards", like HTML or ARIA, so it's "standards-based" too! But those standards are loosely enforced, usually not mandated, loosely defined, and implemented ad hoc with all kinds of divergence.
Or e.g. software can sometimes be repeatable, as with reproducible builds (to touch on one aspect). But that's again left to the implementor and seldom followed (almost never for most software work, only in niche industries).
In general, software is not engineering (in the strict sense) because it's anything-goes: any of the above conditions may or may not be met (in any random combination), the final work is a moving target, and verification is fuzzy, if it happens at all.
>The reality is that engineering is the methodical application of constraints to solve a problem.
In that case, following specific constraints to solve a math problem, or to draw an artwork (e.g. using perspective), is also "engineering". That's too loose a term to be of any use.
Even accepting that, the degree of the "methodical" in software "engineering" versus e.g. civil or aviation engineering is orders of magnitude less.
Well, then in those countries those disciplines aren't treated as engineering.
Any country worth its name and with a rule of law would have regulations and licensing requirements for electricians, civil engineers, structural engineers, aviation engineers, chemical engineers, etc.
I mean, they had building rules at the time of Babylon:
https://talk.build/construct-iq/ancient-babylon-and-the-firs...
And even in medieval times, working in certain fields that we'd call engineering today, was legally restricted to specific guilds.
I can imagine being perfectly happy with a life full of challenges of that kind, instead of being forced to work at scheduled times (which often means spending less time with my son than I'd like), including days I don't feel like it and including boring tasks (I love my job, but like almost every job it has its paperwork, pointless meetings, etc.), all while knowing I depend on that work to live.
In short, I think we all do need the challenge, the struggle, the successes and the failures, otherwise life would just be boring and pointless. But I don't think we (or at least I) need the obligation component and the high stakes.
What you mention about the rich attracting people focused on money rings true, but it would be moot if AI led us all to lead lives more similar to the rich, which was the point here. (Of course, there's also the issue of whether there is widespread or unequal access to AI, but that's another story...).
You can learn to understand the patterns that compilers spit out and there are many tools out there to aid in that understanding. You can't learn to understand what an LLM spits out because by design it is non-deterministic and will vary in form and function for each pull of the lever.
You can learn to understand how high level concepts in code map down to assembly language and how compilers transform constructs in one language to another. You can't know that about LLMs because they generate non-deterministic output based on processing of huge low-precision tables.
It's not even a close comparison.
It's worrying how much trust is being put in those systems. And my worry is not about the job anymore, but our future in general.
Our futures are safe in this sense; in fact, it's even beneficial, as we may be the last generation to have these skills. Humanity's future, on the other hand, is another open question.
So, on one hand, I'm also kinda sad and how quickly we've thrown the guardrails away, but on the other -- it's... Well. It's just work.
Turns out, no one ever really cared how elegant or robust our code was and how clever we were to think up some design or other, or that we had an eye on the future; just that it worked well enough to enable X business process / sale / whatever.
And now we're basically commoditised: even if the quality isn't great, more people can solve these problems. So, being honest, I think a lot of my pushback is just a kind of internal rebellion against admitting that actually, we're not all that special after all.
I'm just glad I got to spend 20 years doing my hobby professionally, got paid really well for it, and often times was forced to solve complicated problems no one else could -- that kept me from boredom.
I think the shift we're seeing now, for us "former" knowledge workers, is that work becomes a lot more like manual labour than what we've really been doing up until now. When there's no "I don't know" anymore, you're not really doing knowledge work, right?
I guess I'll just ride the wave, spew out LLM crap at work, and save the craft for some personal projects; I'll certainly have the capacity now that work is a no-op.
I was self-taught since I was 15, so most of these classes were just review for me. I met lots of people that didn't know how to code as seniors (and never ended up getting a job in their field).
Most of the "Software Engineering" curricula I've seen are catered towards "getting a job as a programmer", and are mostly focused on languages, frameworks and outdated processes.
As an engineer in another discipline, there's no engineering there.
I would rank like this: Computer Science > Self Taught > Software Engineering.
In the corporate world, we are typically detached from real-world consequences, and looking at the people around me, they really don't think about such things - but I do. And I really care, because "relaxed" standards might result in errors that amount to stuff like identity theft or stolen money, shit like this, even at the smallest scale.
Obviously we can't prevent everything, but it seems like we, as an industry, have collectively decided to YOLO and stop giving a shit at all. And personally I don't like that it's me who is losing sleep over this, while people who happily delegate all their thinking to LLMs sleep better than ever now.
Keep it simple, right? In everything you do, make things a bit better than you found them. It's enough. You're never going to win the fight to get everyone (or maybe even ANYONE, depending on how messed up your org is) to care, so why lose sleep over things you can't change?
At least, that's what I started doing some years ago, having by then lost lots of those fights, and I'm sleeping fine again.