I think the real divide we're seeing is between people who saw software as something that is, fundamentally, improvable and understandable; and people who saw it as a mysterious roadblock foisted upon them by others, that cannot really be reasoned about or changed. And oddly, many of the people in the second category use terminology from the first, but fundamentally do not believe that the first category really exists. (Fair enough; I was surprised at the second category.) It's not about intelligence or whatever, it's a mindset or perspective thing.
These seem more related than is stated. Shifting creation from individual people to corporate-owned AI tools is another step away from being able to write it yourself on the common venue of the web, toward being forced to either submit to owned tools or be relegated out of the mainstream.
We're still in a sort of purgatory between the two, but the DIY web is creeping into the rear view, and every moment of proprietary generative AI adoption accelerates it.
I would argue that the split existed before AI and these camps were not the same.
There were always "quality first" people and "get the shit done ASAP" people. The former would go for better-considered designs and a more careful attitude towards dependencies. The latter would write dirty POC code and move on, add huge 3rd-party libs for one small function, and so on.
Both have pros and cons. The former are better in envs like aerospace or medtech; the latter thrive in product companies and the web. The second category are the people who are happiest about AI and who would usually delegate the whole thing to agents from start to finish, including review and deployment.
> Before AI, both camps were doing the same thing every day. Writing code by hand. Using the same editors, the same languages, the same pull request workflows. The craft-lovers and the make-it-go people sat next to each other, shipped the same products, looked indistinguishable. The motivation behind the work was invisible because the process was identical.
Helps explain why some people are delighted to have AI write code for them while others are unhappy that the part they enjoyed so much has been greatly reduced.
Similar note from Kellan (a clear member of the make-it-go group) in https://laughingmeme.org/2026/02/09/code-has-always-been-the... :
> That feeling of loss though can be hard to understand emotionally for people my age who entered tech because we were addicted to feeling of agency it gave us. The web was objectively awful as a technology, and genuinely amazing, and nobody got into it because programming in Perl was somehow aesthetically delightful.
It doesn’t resonate with me because I am a result chaser. I like woodworking because I like building something that never existed before. I don’t mind using a CNC router or a 3D printer to help me out. I don’t care about the process, I care about the result. But I care deeply about the quality of the result.
I don’t care about the beauty of the code, but I do care that nearly every app I load takes longer than it did 15 years ago. I do care that my HomePod tells my wife it’s having trouble connecting to iPhone every 5th time she adds something to the grocery list. I care that my brokerage website is so broken that I actually had to call tech support who told me that they know it’s broken and you have to add a parameter to go back to the old version to get it to work.
I care that when I use the Claude desktop app it sometimes gives me a pop up with buttons that I can’t click on.
I’ve used Claude and Cursor enough to have what I think are valid opinions on AI-assisted coding. Coding is not the bottleneck to producing a quality product. Understanding the problem is the biggest bottleneck: knowing what to build and what not to build. The next big one is convincing everyone around you of that (sometimes this takes even more time). After that, it’s obsessively spending time iterating on something until it’s flawless. Sometimes that’s tweaking an easing value until the animation feels just right. Sometimes that’s obsessing over performance, and sometimes it’s freezing progress until you can make the existing app bulletproof.
AI doesn’t help me with these. At least not much. Mostly because the time I spend coding is time I spend understanding, diagnosing, and perfecting. Not the code. The product.
It does help crank out one off tools. It does help me work in unfamiliar code bases, or code bases where for whatever reason I care more about velocity than quality. It helps me with search. It helps me rubber duck.
All of those things do boost my productivity, I think, but maybe somewhere on the order of 10% all in.
I'll argue it. Technically, there's no loss IMO, only gain. Craft lovers can still lovingly craft away, even if they have to do it on their own time instead of on their now-AI-dominated day job, just like in ye olde days. Nothing's stopping them.
But now result chasers can get results faster in their chasing. Or get results at all. I'm a proud result chaser now making serious progress on various projects that I've had on ice anywhere from months to years and occasionally lamented not having time/energy for them. And I also note my stable of tools, for both AI-related dev and other things, has grown greatly in a short period of time. And I'm loving it.
1. Weaker than expected AI.
Great Depression 2.0. Widespread poverty and suffering as the enormous investments already made fail to pay off.
2. AI works as expected.
Dystopia. A few trillionaires gain absolute control of the entire world, and everyone else is enslaved or killed.
3. Stronger than expected AI.
Hard take-off singularity scenario. Extinction of all biological life.
It's probably hopeless to resist at this point, but we should at least try.
The thing about AI is that you don't have to use it for everything. Like any other tool you can use it as much as you'd like. Even though I like the craft, I find myself really enjoying the use of AI to do things like boilerplate code and simple tests. I hate crafting verbose grunt work, so I have AI do that. This in turn leaves me more time to do the interesting work.
I also enjoy using AI to audit, look for bugs, brainstorm, and iterate on ideas. When an idea is solid and fleshed out I'll craft the hard and interesting parts and AI-generate the boring parts.
About a decade ago, STEM education was trendy and everyone was getting Lego, Raspberry Pi, etc. to build robots and write Python in the name of STEM. You can ask an LLM what STEM stands for.
The Maker movement is not about consuming, or producing for consumption. Some people might have an incentive to become influencers and profit off it. But the majority of the kids who went through this process have become adults and moved on to being producers/consumers, and are playing with AI now. I believe in their curiosity and creativity.
Don’t worry, life will find its way.
"Writing code by hand" - more result oriented engineers became managers.
Editors - I don't believe Vim/Emacs were used by both camps evenly.
Languages - in my field, most make-it-go people use Python and when large data is involved Python/Java with Hadoop. The craft-lovers prefer bespoke solutions when feasible, running it in a single box.
Version control - maybe, but only after it got popularized by GitHub.
Products - would make-it-go people ever have created 'git' in the form that made it a success?
I love hand crafting things, yet I’m waking up like a kid on Christmas every day, running to my computer to use Claude code. For my critical apps I review every line. For 1-off things, I’ve had Claude build single-serving applications.
If I had to guess the split is more between folks who have curiosity about the new technology and folks who fear things changing. With a decent center on that Venn Diagram of folks who feel both.
- one group loves to work independently and gets you the results, they are fast and they figure things out
- second group needs direction, they can be creative in their space but check-ins and course corrections are needed.
AI feels like group 1 but it's actually group 2. In essence, it doesn't fully fit in either group. I am still figuring out this third group.
Throughout college I would see a pretty stark divide, where most people would use VS Code on Mac or on Windows + WSL. But there was a small minority who would spend a lot of time 'tinkering' (e.g., experimenting with OSes like Nix/Gentoo, or tweaking their dev environment). Maybe I'm misunderstanding what a 'craft lover' means here, but it seemed to me at the time that the latter camp had more technical depth just based on conversation. Can't speak to the result in terms of test scores, though it would be interesting to see any data on that if it exists.
Back in February, I also wrote a piece on the recurring mourning/sense of grief we are seeing for 'craftsmanship' coding:
- https://thomasvilhena.com/2026/02/craftsmanship-coding-five-...
Hell no. I, a craftsman, was going out of my way to use things like Haskell. I was very aware of the divide the entire time. The present is a relief.
And that once you fully lean into this part of software engineering, all the other stuff becomes interesting again. I highly recommend, if anyone is demotivated from software engineering, writing your own orchestrator or Ralph loop or sub-agent system, whatever it is that makes the LLM actually work for you, exactly in the way you want it to work. And then going back to your projects again. Because now the LLM is a part of you. It's no longer a vending machine, it's a Gundam suit.
I love shipping tangible products because it makes others happy and makes me money.
Do what you love for work and you'll never love anything again.
Do what you love for a hobby and keep it pure.
Don't let either be your identity, you only diminish yourself and grow old in the doing.
The grief isn't really about losing the craft—it's about losing the context where that craft made sense. When I started, "good code" meant something specific: elegant abstractions, clever patterns, the kind of stuff you'd show off in a code review. Now? The best code might be the prompt that gets an agent to write 500 lines of solid boilerplate in 30 seconds.
What's weird is I'm not even sad about it. I'm more... untethered? Like the identity I built around "being a good programmer" is dissolving, and underneath there's just... someone who likes making things work.
Maybe that's the real split: people who tied their identity to how they worked vs. people who tied it to what they built.
Taking time to solve a problem myself is pleasurable and I make no apologies for that.
Horses for courses.
I sometimes wonder if modularity will become even more important (as it has in physical construction, e.g. with the move from artisanal, temperamental plaster to cheap, efficient drywall), so that systems that AI is not able to reliably modify can easily be replaced.
My shift in perspective is really: not all code deserves to be hand-crafted. Some stuff can be wonky as long as it does its job.
(And I think the wonkiness will reduce in vibe-coding as harnesses improve)
Engineers be loving the craft.
It's a dance, but AI is unfortunately looking at us like we're dancing, and meanwhile it's built a factory.
As an old school Perl coder, not true. Lots of people had a taste for Perl. TIMTOWTDI was sold as an actual advantage.
Perl caters to things almost nobody else does, like the way you have a negative "if" in "unless" and placing conditions after the code. So you can do things like:
frobnicate() unless ($skip_frobnicating);
Which is, sure, identical function-wise to: if (!$skip_frobnicating) { frobnicate(); }
But it's arguably a bit nicer to read. The first way, you're laying out the normal flow of the program first of all, and then tacking on a "we can skip this if we're in a rare special mode" afterwards. Used judiciously, I do think there's a certain something in it.
The bigger problem with Perl IMO is that it started as a great idea and didn't evolve far enough -- a bunch of things had to be tacked on, and everyone tacked them on slightly differently, resulting in codebases that can be terribly fragile for no good reason and no benefit.
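For readers who don't write Perl: Ruby borrowed these statement modifiers from Perl almost verbatim, so the same style can be sketched in runnable Ruby (frobnicate and the skip flag are just the hypothetical names from the comment above):

```ruby
# Ruby inherited Perl's postfix "unless": the normal flow reads first,
# and the rare exception trails behind it.
def frobnicate(log)
  log << :frobnicated
end

log = []

skip_frobnicating = false
frobnicate(log) unless skip_frobnicating  # runs: the skip flag is off

skip_frobnicating = true
frobnicate(log) unless skip_frobnicating  # skipped this time

puts log.inspect  # prints [:frobnicated]
```

Same tradeoff applies: the happy path stays visually uncluttered, at the cost of the condition being easy to miss at the end of a long line.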
Would they get the same satisfaction from cloning a public repo? Probably not. It's too clear to their brain that they didn't have anything to do with it. What about building the project with cmake? That requires more effort, yes, but the underlying process is still very obviously something that someone else architected, and so the feeling of satisfaction remains elusive.
AI, however, adds a layer of obfuscation. For some, that's enough to mask the underlying process and make it feel as if they're still wielding a tool. For others, not so much.
Twelve years ago I had the bright idea: why not make a little, just a tiny little (what I would call now) preprocessor for Java which does the same thing in fewer characters and is clearer? Everyone would love it. Of course no one loved it. Well, I never implemented it, because I got some sense: you can’t just make tiny little preprocessors, a little code generation here and there, just code-generate this and tweak after the fact. Right? It’s not principled.
You can cook up a dichotomy. Good for you. I think the approach is just space age technology meets Stone Age mindset. It’s Flintstone Engineering. It’s barely even serious.
I am not offended that you took my craft. I am offended that you smear paint on the wall with three hundred parallel walls and painters and pick the best one. Or whatever Rube Goldberg setup is the thing that will take over the world as of thirty minutes ago.
Make something rock solid like formal verification with LLM assist (or LLM with formal verification assist?). Something that a “human” can understand (at this point maybe only the CEO is left). Something that is understandable, deterministic.
I might be out of a job. But I will not be offended. And I will respect it.
For instance, the ones that look at it from an economics perspective, security perspective, long term maintainability perspective and so on. For each of these there are pros and cons.
I still love to code by hand for a fun afternoon. But in the long term, I think you are going to be left behind if you refuse to use AI at all.
The blog post is all about being clear-eyed about the source of grief, but doesn't seem to articulate that it's the livelihood that's gone, not the craft. There's never been a better time to practice the craft itself.
I went for a job in AI in the late 1980s and realised from the bonkers spin of the company founders that it really wasn't the 5 to 10 years away I was being told. I went looking for something that was going to deliver a result.
I came back to it maybe 6 years ago while on the bench at a consultancy. I got into trying various Kaggle challenges. Then the boss got the bug and wanted to predict the answers to weird, spurious money-making questions. I tried, but even when there was good data, I didn't know how to do better than anyone else. When there wasn't good data it just produced complete shit.
Since then the world has changed. Everything I touch has AI built in. And it's really good. When you don't know your way around something or you've got stuck it really gets you moving again. Yeah, if it regurgitates a stupid negative example from the documentation as if it is "the way to do it", you just ignore it because you have already read that.
Now, every week I'm subjected to lectures by people who don't know how to code about how productive AI is going to make me. Working in the financial sector, every Californian pipe dream seems to be an imperative, but all must be verified by an adult. My IDE tries to insert all sorts of crap into my production code as I type, and then I'm supposed to allow it to generate my unit tests.
I know it will get better, but will it be another 5 to 10 years?
Are we 80% of the way there yet?
The commerce camp understands the tradeoffs needed in a competitive environment: cutting corners where possible, not being dogmatic about unit tests and clean code, and so on.
If you notice - the craft people rarely think about commerce because coding is an artistic expression.
Personally, I have noticed that I still produce substantially more and better code than the people at my company spending all day writing prompts, so I'm not too worried yet, but it seems plausible at some point that a machine that stole every piece of software ever written will be able to reliably turn a few hundred watt-hours of electricity into a hallucination-free PR.
I did some "trad coding" to see how much I'd atrophied, and I was startled at how difficult and unpleasant it was. I was just stuck and frustrated almost the whole time! But I persisted for 7 hours and was able to solve the problem.
Then I remembered, actually it was always like that! At least when doing something unfamiliar. That's just what programming feels like, but I had stopped being used to it because of the instant gratification of the magic "just fix my problem now" button.
In reality I had spent 7 hours in "learning mode", where the whole point is that you don't understand yet. (I was moving almost the whole time, but each new situation was also unfamiliar!)
But if I had used AI, it would have eliminated the struggle, and given me the superficial feeling of understanding, like when you skim a textbook and think "yeah I know this part" because you recognize the page. But can you produce it? That's the only question that matters.
I think that's going to become a very important question going forward. Sure, you don't need to produce it right now. But it's mostly not for right now.
Just like you don't "need" to run and lift weights. But what happens if you stop?
We all have different thresholds for what is acceptable, and our roles as engineers typically reflect that preference. I can grind on a single piece of code for hours, iterating over and over until I like the way it works, the parameter names, etc.
Other people do not see the value in that whatsoever, and something that works is good enough. We both are valuable in different ways.
Also, there’s the pace of advancement of the models. Many people formed their opinions last year, and the landscape has changed a lot. There’s also some effort required in honing your skill using them. The “default” output is average quality, but with some coaxing, higher-quality output is easily attained.
I’m happy people are skeptical though, there are a lot of things that do require deep thought, connecting ideas in new ways, etc., and LLMs aren’t good at that in my experience.
Definitely not. Based on my observations from a career as an open source and agency developer it was obvious at a glance which of these camps any given developer lived in by their code quality. With few exceptions make-it-go types tended to produce brittle, hacky, short-sighted work that had a tendency to produce as many or more problems than it solved, and on more than one occasion I've seen developers of this stripe lose commit access to FOSS projects as a result of the low quality of their contributions.
"nobody got into it because programming in Perl was somehow aesthetically delightful."
Compared to trying to get stuff accomplished in C, Perl was an absolute dream to work with, and many devs I knew gravitated to web development specifically for their love of the added flexibility and expressiveness that Perl gave them compared to other commonly used languages at the time. Fortunately for us all, language design and tooling progressed.
To this day I remember being delighted by Perl on a regular basis. I wasn't concerned with the aesthetics of it, though I was aware it was considered inscrutable, and the fact that I could read & write it filled me with pride. So yeah, programming Perl was delightful.
If I reflect for a moment about why I personally got into tech, I can find at least a few different reasons:
- because I like solving problems. It's sad that the specific types of problems I used to do are gone, but it's exciting that there are new ones.
- because I like using my skills to help other people. It's sad that one specific way I could do that is now less effective, but it's exciting that I can use my knowledge in new ways.
- because I like doing something where I can personally make a difference. Again, it cuts both ways.
I'm sure most people would cite similar examples.
I love the exciting ideation phase, I love putting together the puzzle that makes the product work, and I also take pride in the craft.
The two emotions I personally feel are fear and excitement. Fear that the machines will soon replace me. Excitement about the things I can build now and the opportunities I’m racing towards. I can’t say it’s the most enjoyable experience. The combo is hellish on sleep. But the excitement balances things out a bit.
Maybe I’d feel a sense of sadness if I didn’t feel such urgency to try and ride this tsunami instead of being totally swept away by it.
The "make-it-go" people couldn't make anything go back then either. They build ridiculous unmaintainable code with or without AI. Since they are cowboys that don't know what they're doing, they play the blame game and kiss a ton of ass.
The "craft-lovers" got in the way just as much with their endless yak shaving. They now embrace AI because they were just using "craft" as an excuse for why they didn't know what they were doing. They might be slightly more productive now only because they can argue with themselves instead of the rest of the team.
The more experienced and pragmatic people have always been forced to pick up the slack. If they have a say, they will keep scope narrow for the other two groups so they don't cause much damage. Their use of AI is largely limited to google searches like it always was.
Perhaps one of the secondary effects of AI replacing developers will be mobilising a group of smart, motivated people to the left
(It's always interesting to think of the secondary effects which kick in past a certain point of growth. High-multiple stock valuations often fail to take these into account. For the East India Company, for example — your company can keep growing until it's the size of a country. But suddenly other countries treat you as a foreign power rather than a pet.)
I agree with everything except this last sentence. What you wrote looks highly intelligent and I would suspect a lot of people in the second camp are not up to par with this.
I’ve always heard this mantra when coders were thinking they’re untouchable, not so much now.
That does not mean you are correct. This mindset is useful only in serious reusable libraries and open source tools. Most enterprise code involves lots of exploring and fast iteration. Code quality doesn’t matter that much. No one else is going to see it.
When the craft coders bring their ideology to this set up, it starts slowing things down because they are optimising for the wrong target.
It’s all just a backdrop for hammering home the same inevitabilism: GenAI, GenAI, GenAI. Just slap on whatever excuse to hammer this over, and over, and over. Grief, self-identity, some other pseudo-humanistic angle.
Now is the time that programmers talk about their feelings. Give me a break.
Because the AI hype machine isn’t content with just eventually taking your job or your craft. It can’t just quietly get exponentially better until it sweeps your legs effortlessly. No, because there’s also a market out there, and a hype needs to be built. So now you need to see it all day in your tech news aggregator. Just push all the interesting stuff out. Replace with autopilot.
No, really. Even if AI worked perfectly right now you would still need to have a constant churn of content about how to babysit this thing that speaks English already and is more capable than you. I guess it’s kind of paradoxical.
Can you please come train the product people at my job? Maybe some of your love of the game will rub off on them.
People want to keep doing what they enjoy as their fulltime profession, making a (good) living out of it. Perfectly understandable. But neither AI, power tools, nor image generation is preventing someone from doing craft coding, hand woodworking, or drawing/painting in their spare time and enjoying it.
I love drawing but since it went out of business as a viable career long before I was born I have never felt the grief of losing it. With craft coding we happen to exist at the point in time where it suddenly gets significantly de-crafted.
Clearly you don’t use Amazon’s Alexa.
Without being exposed to this, you’re not gonna be a good problem solver in the long run.
(I'll admit, though, that this also smells to me a bit too much like introvert/extrovert, or INTP/INTJ/etc so maybe I'm being reflexively rejective)
The craft can move up a level too. You still can make decisions about the implementation, which algorithms to use, how to combine them, how and what to test -- essentially crafting the system at a higher level. In a similar sense, we lost the hand-crafting of assembly code as compilers took over, and now we're losing the crafting of classes and algorithms to some extent, but we still craft the system -- what and how it does its thing, and most importantly, why.
Now? I'm waiting for the inevitable reality check. AI doesn't go away (I personally don't want it to; it's a power tool for an experienced dev), but imo, the market is not too far from correcting the hype. Reality can only shoulder so much bs before the rubber has to hit the road.
The promises being made over the last few years are not being fulfilled and big money likes results, not talk. So, unless we get another (significant) rabbit coming out of the hat in 2026, the momentum (again, imo) won't be there to sustain the necessary long-term growth (i.e., in terms of mass-adoption, this era of AI is closer to AOL than Facebook).
what a coincidence
But there are also the third kind: who like to design the systems and let them be built by someone else..
If there are no such applications, I don't have a choice but to write it myself. This could take some time, especially if an MVP is all I'm interested in. LLMs are a novel tool in building an MVP. If time is a constraint, I can use an LLM, which should excel since xyz features are in its training set.
I suppose your analogy follows for developers who write applications that support abc features even though there are already applications out there that support abc features. Yes, I don't think that is very interesting. Your umpteenth clone of Snake is not interesting.
You can tell your AI to read their code, and create a new requirements document for a clean-room implementation.
Then you have your AI implement using your own requirements document.
It’s not the same as writing code, but it’s fun.
If your coworkers can’t outpace your code output they’re either not using opus4.6 or they aren’t really trying.
It’s pretty easy to slam 20 PRs a day with some of them being complex changes. The hardest part is testing the diffs, but people are figuring that out too.
I think there are multiple dimensions that people fall on regarding the issue and it's leading to a divide based on where everyone falls on those dimensions.
Quality and standards are probably in there, but I think risk tolerance/aversion could be behind some of how you look at quality and standards. If you're high on risk-taking, you might be more likely to forgo verifying all LLM-generated code, whereas if you're very risk-averse, you're going to want to go over every line of code to make sure it works just right, for fear of anything blowing up.
Desire for control is probably related, too. If you desire more control in how something is achieved, you probably aren't going to like a machine doing a lot of the thinking for you.
Sometimes you need something to be extremely robust and fool-proof, and iterating for hours/days/weeks and even months might make sense. Things that are related to security or money are good examples.
Other times, it's much more preferable to put something in front of users that works so that they start getting value from it quickly and provide feedback that can inform the iterative improvements.
And sometimes you don't need to iterate at all. Good enough is good enough. Ship it and forget about it.
I don't buy that AI users favor any particular approach. You can use AI to ship fast, or you can use it to test, critique, refactor and optimize your code to hell and back until it meets the required quality and standards.
People have been saying this every year for the last 3 years. It hasn't been true before, and it isn't true now. The models haven't actually gotten smarter, they still don't actually understand a thing, and they still routinely make basic syntax and logic errors. Yes, even (insert your model of choice here).
The truth is that there just isn't any juice to squeeze in this tech. There are a lot of people eagerly trying to get on board the hype train, but the tech doesn't work and there's no sign in sight that it ever will.
Yeah, this is a concern. I remember when I took a break from coding to work as a video game producer for a couple of years and I felt like my ability to code was atrophying and that drove me nuts.
Now I'm not so sure. There's just so much dumb garbage that's accumulated around coding nowadays, especially with web dev. So much time just gluing together or puzzling out how to get APIs or libraries to work and play nice together.
It's not like back in the days where it was relatively simple with PHP and HTML, when I first started. Much less you could do back then, sure, but expectations were a lot lower back then as well.
I might just content myself with doing Sudoku or playing/designing board games to help keep that brain plasticity going, and stop fighting so hard to understand all this junk. Or finally figure out how to get half-decent at shader math or something, at least that seems less trivial and ephemeral.
…except time, which sadly is limited. I’m sad about the real potential that I might not get to be paid to do something I enjoy so much anymore. I care about end products for sure, but that’s not why I’m in this career.
I do this because a large part of the work engages me in a pleasant way. I like TDDing in a tight loop. I like how it forces me to think one step at a time, how I get to stop myself from jumping ahead, and how I get to verify my thoughts or theories within seconds. I find efficiently manipulating text in my editor satisfying. I love the feeling of being validated that my architectural choice was right when a spec changes and the required code change is obvious, minimal, and clearly expressed. I enjoy the feeling of obtaining mastery for mastery’s sake rather than because it lets me create a product.
I’ve felt incredibly lucky for over a decade that my work gave me the opportunity to chase that. I may find enjoyment in wrangling AI, but I’m skeptical it’ll scratch that itch. If it doesn’t and I wanted to still scratch it, I’d have to do it on my own time. That would mean sacrificing time I’ve previously spent on other interests, and I don’t have a ton of time to begin with.
Gen-AI has not enabled me to develop anything I wasn't able to do before, and I consider myself to be an average developer.
Probably AI has come a little bit earlier for type A, but type B will follow soon anyway. In the meanwhile, they're enjoying the ride a bit more since AI takes care of all the tedious but essential details.
When I build with AI I build things I never would have built before, and in doing so I’m exposed to technologies, designs, tools I wasn’t aware of before. I ask questions about them. Sure I don’t understand the tools as deeply as the person who wasted like 10 hours going down rabbit holes to answer a simple question, but I don’t really see that as particularly valuable.
Yes, those things should have been automated long ago, but they weren't, and now with coding agents much of them are.
Naturally, I disagree.
Then with AWS our infra was moving to proprietary platforms. Now our dev tools are moving to expensive proprietary platforms.
Combined with widespread enshittification, we've handed nearly everything to the tech bros now.
I am on this site because it is one of the less shitty places on the Internet (in terms of usability, privacy etc.) to have some form of discussion, but I never identified as a "hacker", "techie", "entrepreneur" or "temporarily embarrassed billionaire". AI didn't change my view on anything, except it has shown me how blind and naive people can be.
Of course I tend to focus on aspects that are being discussed here (context of software engineering).
My personal bet would be that in the medium term, there will be a reversal of the idiotic belief that you can immediately just lay off developers because of LLMs. If your developers are more productive because of LLMs, you still have an advantage by having more developers than the competition. There's also a lot of institutional knowledge that's just not documented. You fire key people, you can cripple your organization.
In the longer term, I think AI will eventually take jobs, and unfortunately, it will have major negative societal impact. I doubt that our governments will be proactive in trying to anticipate this. They will just play damage control. There's probably going to be an anti-AI social movement. You'll have the confluence of more and more disinformation and AI slop online along with more and more job loss. There are probably going to be riots. Some people think UBI is inevitable. I think the problem is that if the government puts UBI in place, they will only give you the minimum necessary so that you don't starve. Just enough to afford to rent a bedroom, eat processed food and stay online all day.
It might take intelligence to notice the problem, but we all know people who haven't and some who might never. My previous job had the oldest such case I've ever met, and I wanted to strangle him at least once a week. I haven't added anyone to my Do Not Hire list in over a decade. Except him. He made himself indispensable at every opportunity and had some of the most convoluted code (and vocabulary) I've encountered in a long time. I spent way too much time extracting his claws from code I'd written that he made an absolute hash of.
Engineering is the task of making things behave predictably and reliably. Because software is malleable and rarely “finished”, this applies to changing software as well.
I’m pretty sure that there is more than one divide regarding AI among developers, but one of the dividing lines concerns the predictability and reason-ability of the tools and the code. AI fundamentally lacks these qualities.
And when the code base is 250,000 lines of garbage all the way down, this is impossible. The projects where I’ve had free rein to kill tech debt and SDLC footguns with impunity have all been the ones where velocity maintained or increased over time. All the rest have ground to a halt.
There’s value in customers believing you’ll get there soon and the code will actually work. They can be patient if you have an air of competence. But they will switch to some other clown car if it’s cheaper than your clown car.
You seem to be claiming that enterprise shops never adopted code reviews. Interesting if true.
It can actually help a lot here too.
In fact I rarely have AI author or edit code, but I have it researching all the time: finding edge cases I didn't think about, digging into dependencies' code, finding ideas or alternative approaches. My usage is 90% of the time assisting with information gathering, criticizing (I have multiple reviewer skills with different personas, and I have multiple LLMs run them), refining, reviewing.
Even when it comes to product stuff, many of my clients have complicated business logic. Talking multi-tenant-company warehouse software where each process is drastically different and complexity balloons fast even for a single one of them. It helps to connect the dots between different sources of information (old Jira task, discord dumps, confluence, codebase, etc).
And it can iteratively test and find edge cases in applications too, same as you would do manually by taking control of the browser and testing the most uncommon paths.
I would do much less without this assistance.
I really don't get why people focus so much on the least empowering part (code), where it actually tends to balloon complexity quickly, or overwhelm you with so much content and so many edits that you don't have the energy to follow along while maintaining quality.
Why not outsource it to someone else? That way you do none of the work.
I've been saying this too. The 10x engineer stuff simply cannot make sense unless previously you were spending 90%+ of your day just writing code and now that's dropped to single digits because AI can generate it. If you spent 20-30% of your day coding before and the rest thinking about the problem, thinking about good UX, etc, then AI coding assistants mathematically cannot make you a 10x engineer. At a push they might make you a 2x engineer.
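The arithmetic here is just Amdahl's law. A quick sketch using the parent comment's own numbers (the function name and structure are mine, for illustration):

```python
def overall_speedup(coding_fraction, coding_speedup):
    # Amdahl's law: overall speedup when only the coding portion
    # of the workday is accelerated by `coding_speedup`.
    return 1 / ((1 - coding_fraction) + coding_fraction / coding_speedup)

# If coding was 25% of the day and AI made it effectively instant:
print(overall_speedup(0.25, float("inf")))  # ~1.33x, nowhere near 10x

# To get a 10x overall speedup, coding must have been 90% of the day:
print(overall_speedup(0.90, float("inf")))  # ~10x
```

Even an infinitely fast code generator caps the gain at the share of time coding actually took.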
Given this I think I realised something earlier about my own output... I'm probably just an unusually good coder. I've been doing this since I was a kid so writing and reading code is basically second nature to me. When I was a young teen I would literally take my laptop on holiday with me just so I could write code - I was just that kind of person.
So I've basically always been the strongest or one of the strongest coders on any team I've been on. I very rarely have to think about how to do something in code. It's hard to think back to a time when code was a significant bottleneck for me.
However, my output was never really faster than anyone else's when it came to shipping, but the quality of my output has always been wayyy higher. And I think that was because I always spent a lot more time thinking and iterating to get the best result, while other people I work with spent far more time writing code and just trying to get something they could PR.
My problem now is that the people I work with, some of whom can't even read code, are able to spit out thousands of lines of code a day. So this is forcing me to cut corners just to keep up with the rest of the team.
6-12 months ago I'd get at least 2-3 calls a day from people on my team asking for help to write some code. Now they just ask the AI. I haven't had someone ask me a coding related question in months at this point.
I find this frustrating to be honest. I'm seeing bad decisions everywhere in the code. For example, often a change is hard because it's a bad idea. Perhaps a page on a website doesn't really look great on mobile or desktop. Previously you would have had to think about how you could come up with a good responsive design and implement the right breakpoints. But now people can just ask Claude Code to build a completely different page for mobile, so they do. For a human that would be a huge effort; even someone stupid enough to think that was a good idea would probably be forced to do something that's easier to maintain and implement. But an AI? Who cares. It works. The AI isn't going to tell you no.
I know the quality of code is dropping. I see the random bugs from people clearly not understanding what Claude is writing, but if they can just ask the AI to fix it, does it even matter?
> All of those things does boost my productivity I think, but maybe somewhere in the order of 10% all in.
I'm very much like you. AI doesn't really boost my productivity at all but that's because I care about what I build and don't find coding hard. So AI doesn't really offer me anything. All it's doing is making people who don't care what they're building and don't care about the quality of their code more productive. And putting me under pressure to trade quality for velocity.
All those problems are caused by business decisions, not the developers. You do make a good point though that AI may enable more people to build their own when they can.
The real split are between those that support science and technology, and those that hate science and technology and want to see more children die.
I like doing puzzles.
I like it more than planning.
At the end of the day, I'll do whatever builds the best thing, but I'll enjoy it more or less depending on what that involves.
Unless we muster the political will to stop AI development, internationally, until we can be certain of our ability to durably imbue it with the intrinsic desire to keep humans around, doing human things.
What opportunities? Anything you spend effort on, like PMF and discovery, I can now clone with a few bucks of Claude Code and charge less than you for the same product, at the same quality level :-/
Where is the opportunity here? Technology and knowledge used to be the moat a startup or bootstrapped individual could use to produce a sustainable business.
Why exactly are you excited about producing something that can be cloned for less cost than it took you? Especially as the quality will be almost exactly the same?
This is just false for anyone who has worked in the industry for any meaningful amount of time. Have you seriously never encountered a situation where a change was supposedly easy on the surface, but some stupid SoB before you wrote it so badly that you want to pull your hair out trying to make it work without rewriting the crap codebase from scratch?
Maybe the stages of grief are not aligned yet. Or maybe it's as your post says there are two types of people.
We're so screwed
I, for example, would claim to be rather risk-tolerant, but I (typically) don't like AI-generated code.
The solution to the paradox this creates if one considers the model of your post is simple:
- I deeply love highly elegant code, which the AI models do not generate.
- I cannot stand people (and AIs) bullshitting me; this makes me furious. I thus have an insanely low tolerance for conmen (and conwomen and conAIs).
Now I can ask an agent to code a full feature and it has been handling it more often than not, often getting almost all of the way there with just a few paragraphs of description.
And then it changes every 6 months or goes in circles - and I suppose now we just gave up and are letting LLMs YOLO code out the door with whatever works.
Like I remember learning all about Redux Sagas, and then suddenly the whole concept is gone, but also I'm not actually particularly clear on what replaced it (might be time to go back to that well since I need to write a web interface soon again).
I'm afraid my inherent laziness will lead to something like that if I don't have to do it for my job.
A handful of well intentioned slop piles might be manageable, but AI enables spewing garbage at an unprecedented scale. When there's a limited amount of resources to expend on discussing, reviewing, fixing, and then finally accepting contributions a ton of AI generated contributions from random people could bring development to a halt.
And don’t make me laugh about “good online resources”. SO went downhill and is just a graveyard at this point where everything is frozen in time. It has some good discussions (that LLMs ingested), but that’s it.
You can hate LLMs all you want, but they’re godsend for interactive discussion with the material.
TL;DR: AI-assisted coding is revealing a split among developers that was always there but invisible when we all worked the same way. I've felt the grief too—but mine resolved differently than I expected, and I think that says something about what kind of developer I've been all along.

A few months ago, I wrote about how making computers do things is fun. The gist: I've never been in it for the elegance of code. I've been in it for the result. I learned BASIC on a Commodore 64 at age 7 not because BASIC was beautiful (it wasn't) but because I wanted to make things happen on screen. Then I learned 6502 assembly because BASIC was too slow for what I wanted to do.
That post was my attempt to explain why AI coding tools felt like a natural fit for me. But since then, I've been reading other people's reactions to this moment, and I want to come back to it.
James Randall wrote a piece that hit a nerve. He's been programming since age 7 in the 1980s, like me. But his experience of this moment is subtly different from mine:
The wonder is harder to access. The sense of discovery, of figuring something out through sheer persistence and ingenuity — that’s been compressed. Not eliminated, but compressed. And something is lost in the compression, even if something is gained.
Nolan Lawson put it more starkly in "We Mourn Our Craft":
We’ll miss the feeling of holding code in our hands and molding it like clay in the caress of a master sculptor. We’ll miss the sleepless wrangling of some odd bug that eventually relents to the debugger at 2 AM. We’ll miss creating something we feel proud of, something true and right and good. We’ll miss the satisfaction of the artist’s signature at the bottom of the oil painting, the GitHub repo saying “I made this.”
These are real feelings about real losses. I'm not here to argue otherwise. But reading these posts, I kept having this nagging sense that we were mourning different things.
Here's what I think is happening: AI-assisted coding is exposing a divide among developers that was always there but maybe less visible.
Before AI, both camps were doing the same thing every day. Writing code by hand. Using the same editors, the same languages, the same pull request workflows. The craft-lovers and the make-it-go people sat next to each other, shipped the same products, looked indistinguishable. The motivation behind the work was invisible because the process was identical.
Now there's a fork in the road. You can let the machine write the code and focus on directing what gets built, or you can insist on hand-crafting it. And suddenly the reason you got into this in the first place becomes visible, because the two camps are making different choices at that fork.
I wrote about this split before in terms of my college math and CS classes: some of us loved proofs and theorems for their own sake, some of us only clicked with the material when we could apply it to something. I tended to do better in classes focused on application and did not-so-well in pure math classes. (I still want to take another crack at Calculus someday, but that's a different story.)
I want to be clear: I've felt grief too. I've gone through a real adjustment period over the past 18-24 months.
I was afraid I wouldn't be able to understand the new tools. But, it seems I have. I was worried I wouldn't understand the output, that I'd lose my ability to judge whether the code was actually right. But it turns out decades of reading and reviewing code doesn't evaporate. I can still tell when something's wrong and I still have taste.
I was afraid the puzzle-solving was over. But it wasn't. It just moved up a level. Which, when I think about it, is exactly what's happened at every other transition in my career. I went from placing bytes in memory on a C64 to writing functions to designing systems. The puzzle got more abstract each time, but it never stopped being a puzzle. Now the puzzle is architecture, composition, directing the assistant. It's different. It's still satisfying to me, anyway.
So most of my fears got tested against reality and thankfully didn't hold up. But some grief stuck.
I grieve for the web I've known. Not for writing HTML by hand, but I grieve for the open web as an ecosystem. It's threatened in ways that have nothing to do with whether I personally type code or not. AI training on the commons, the further consolidation of who gets to shape how people experience the internet. That's a real loss, and it doesn't go away because I'm personally more productive.
I grieve for the career landscape shifting under me. Webdev's been my thing for over three decades. But, it's not a hot field anymore, whether folks like it or not: mobile apps took some big bites, but AI engineering is really ruling the roost now. I've been worried whether I could make that shift. I think I have been, but the anxiety is genuine, and I don't think it's finished.
Here's what I notice about my grief: none of it is about missing the act of writing code. It's about the world around the code changing. The ecosystem, the economy, the culture. I think that's a different kind of loss than what Randall and Lawson are describing. Theirs is about the craft itself. Mine is about the context and the reasons why we're doing any of this.
Kevin Lawver wrote a response to Lawson that I mostly agree with. He argues for redirecting craft and passion rather than clinging to how things were. But I'd go further than framing it as nostalgia vs. pragmatism. (And you know I've got plenty of nostalgia, but that doesn't pay my mortgage.)
I think recognizing which kind of grief you're feeling is the actually useful thing here. If you're mourning the loss of the craft itself—the texture of writing code, the satisfaction of an elegant solution—that's real, and no amount of "just adapt" addresses it. You might need to find that satisfaction somewhere else, or accept that work is going to feel different. Frankly, we've been lucky there's been a livelihood in craft up to now.
If you're mourning the context—the changing web, the shifting career landscape, the uncertainty—that's real too, but it's more actionable. You can learn new tools. You can push for the web you want, even if it's a small web. You can grieve and adapt at the same time.
I've been trying to do a lot of both. I'm not entirely sure how it's going. I do really feel for what Nolan Lawson said here:
I don’t celebrate the new world, but I also don’t resist it. The sun rises, the sun sets, I orbit helplessly around it, and my protests can’t stop it. It doesn’t care; it continues its arc across the sky regardless, moving but unmoved.
But, you know, I'd be lying if I said it didn't excite me a bit, amidst the grief & terror.
I started programming in the 80s. Every language I've learned since then has been a means to an end. That is, a new way to make computers do things I wanted them to do. AI-assisted coding feels like the latest in that progression. Not a discontinuity per se, just another rung on the ladder.
But I'm trying to hold that lightly. Because the ladder itself is changing, the building it's leaning against is changing, and I'd be lying if I said I knew exactly where it's going.
What I do know is this: I still get the same hit of satisfaction when something I thought up and built actually works. The code got there differently than it used to, but the moment it runs and does the thing? That hasn't changed in my over 40 years at it.
If I had access to a factory of apprentices that I had total control of, I probably would outsource more of it. So on the surface it seems like I’d love AI, but these particular apprentices aren’t up to my standards, there are severe limitations on how much I can train them, and I get no joy from teaching them.
A lot of developers' identities are tied to their ability to create quality solutions as well as having control over the means of production (for lack of a better term). An employer mandating that they start using AI more and change their quality standards is naturally going to lead to a sense of injustice about it all.
For me specifically it means two products, one that is something I have been working on for a long time, well before the Claude Code era, and another that is more of a passion project in the music space. Both have been vastly accelerated by these tools. The reason I say “racing” is because I suspect there are competitors in both spaces who are also making great progress because of these tools, so I feel this intense pressure to get to launch day, especially for the first project.
And yes it is very frenetic, and it’s certainly taking a toll on me. I’m self-employed, with a family to support, and I’m deeply worried about where this is all going, which is also fuelling this intense drive.
A few years ago I felt secure in my expertise and confident of my economic future. Not any more. In all honesty, I would happily trade the fear and excitement I feel now for the confidence and contentment I felt then. I certainly slept better. But that’s not the world we live in. I don’t know if my attempts to create a more secure future will work, but at least I will be able to say I tried as hard as I was able.
In fact, with it being easier to get them out there, I might care less about whether they're marketable and have a chance to make serious money, as opposed to when I was sinking hundreds of hours into them and constantly second-guessing what direction I should take the games to make them better.
The art wasn't the problem (the art wasn't great, but I could make functional art at least), it was finding the time and energy and focus to see them through to completion (focus has always been a problem for me, but it's been even worse now that I'm an adult with other responsibilities).
And that hasn't always been the issue, I did release about a dozen games back in the day (although I haven't in quite a few years at this point).
Of course someone may say 'well that's slop then', and yeah, maybe by your standards, sure. These games aren't and never were going to be the next Slay The Spire or Balatro. But people can and do enjoy playing them, and not every game needs to be the next big hit to be worth putting out into the world, just like not every book needs to be the next 1984 or Great Gatsby.
Money, opportunity, status. It is all status games. Think of it as a nuclear war on the old order, with new players trying to take the niche. Or maybe the commies killing the Whites and taking over Russia?
That's why readers end up reacting to the LLM imprints rather than the content.
I don't mean to be critical because it's a good article! But I bet if you shared the version before it was "tightened up", most of us would prefer it.
(I suppose I'd better add that this isn't a criticism of LLMs either - it's about figuring out how to use them well: https://news.ycombinator.com/item?id=47342045)
Some examples.
> I've felt the grief too—but mine resolved differently than I expected, and I think that says something about what kind of developer I've been all along.
> I kept having this nagging sense that we were mourning different things.
> Here's what I notice about my grief: none of it is about missing the act of writing code. It's about the world around the code changing.
> If you're mourning the context—the changing web, the shifting career landscape, the uncertainty—that's real too, but it's more actionable.
It uses these kinds of patterns so often that it becomes obvious. Just go on LinkedIn.
"It's not X, it's Y"
But an application that combines xyz features is novel in this scenario. There is inherent value in that.
The more experience I get the harder the job seems tbh
A.I. is now often doing in 5-10 minutes what would take me hours on my own for any given task (well based on the last couple of weeks at least, I wasn't doing much agent based A.I. coding before that).
I was pretty much having a real-time conversation with my superiors, showing them updates just a couple of minutes after they suggested them, for a feature the other day, getting feedback on how something should look.
Something that would have taken me an hour or more each time they wanted a change or something new added.
Now that cuts both ways, as it started to seem like they were expecting that to be the new normal and I started to feel like I had to keep doing that or make it seem like I'm not actually working. And it gets exhausting keeping up that pace, and I started worrying when anything did take me extra time.
Before moving to agentic AI, I thought I'd miss the craftsmanship aspect. Not at all. I get great satisfaction out of having AI write readable, maintainable code with me in the driver's seat.
It can help if you write poor code without it, probably
High unit test coverage only means something if you carefully check those tests, and if everything was tested
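A toy illustration of the point, with made-up names: a test can execute every line of a function, yielding 100% line coverage, while asserting nothing that would catch a bug.

```python
def divide(a, b):
    # Buggy on purpose: should raise on b == 0, silently returns 0 instead.
    if b == 0:
        return 0
    return a / b

def test_divide():
    # Executes both branches of divide(), so line coverage is 100%,
    # but never checks that the b == 0 behavior is actually correct.
    divide(4, 2)
    divide(1, 0)

test_divide()  # "passes", coverage looks great, the bug survives
```

The coverage number only tells you which lines ran, not whether the tests checked anything about them.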
I'd say this is the crux of the matter. Having competing interests and choosing what to do and how much of it is a balancing act, but you can still get that desired satisfaction. You could perhaps even start your own company if it's that important to you.
For me, projects just keep accumulating regardless of how much time I dedicate to them (outside of the mandatory things). Maybe I just have too many things I'd like to build. Definitely thinking about starting a company myself now there's all this capability available.
AI is leaps and bounds better than google at searching.
“You don’t need google for this, we have had public libraries for decades” energy.
I’m also nervous about the inevitable cognitive decline of relying on AI to explain everything to me.
But yes many of those problems were caused by business decisions. But engineers are perfectly capable of creating those problems on their own. If an engineer doesn’t realize that the function they called buffers messages in memory because someone made a wrapper function around sendAsync() and called it send(), that’s a code quality issue not a business issue (except in the broader sense where every problem is ultimately a business issue).
Or if an engineer writes a naive implementation of some algorithm and adds a spinner so that an operation takes 5s to finish when it could be instantaneous if they’d thought about the problem more.
Meaning you like to put the pieces in, or you like to figure out where they should go? To me that’s the crux of the article.
We witnessed the same thing with looms and other automation in the Industrial Revolution. Capital that helps you produce more. But owners faced with increased competition under commoditized production see their profit margins fall. Thus they will turn to squeezing workers - the source of value - for profit in the newly commoditized landscape - exactly what happened during the Industrial Revolution. It was only when workers got their act together and organized that this decline was stopped and reversed.
It's not the AI you have to convince, it's your government and the people running tech companies. Dario Amodei was cheering for AI to take all programming jobs (along with the others). If that happened, it would be an unmitigated disaster for millions of people. Imagine a student who comes out of a CS major with tons of student debt. How much sympathy does Dario feel for this person? Getting him to STFU would be a good first step.
> the political will to stop AI development
The reason that's not likely is that it's an arms race. You stop AI research here, but how can you trust that China and Russia are doing the same? Unlike nuclear bombs, the potential harms are less tangible.
That's been tried several times now and has a tendency to end very badly for capital. You'd think folks with even a grade school level of historical literacy would know better than to stick a fork in that outlet.
I think we’re destined to be #2 for a while. If it gets too bad, my plan is to move into a part of the industry where quality and reliability are non-negotiable. Or start my own company and compete against established players for the smaller customer base that’s willing to pay for quality.
I go out of my way to pay for quality projects even if (and often because) they have fewer features. I think there are probably enough of us to support a lifestyle business in many niches.
I also suspect as vibe coding introduces more bugs (we’ve certainly seen this at my current company) the people willing to pay for alternatives will grow.
In my experience very few projects were serious enough to require such scrutiny in code.
I'm curious about this discrepancy too. I assume that you're being facetious and the discrepancy is with people's perceptions of AI’s capabilities or usefulness or whatever subjective metric. Some, myself included, seem to perceive it as basically useless, while others, yourself included, seem to imply that it's at a level where it genuinely replaces competent coders.
If the discrepancy were small, it could just be chalked up to the metric being subjective. But it seems to be like night and day. A difference of orders of magnitude. I wanna know what's going on there.
That's why my opening paragraph and final paragraphs were devoted to it :)
I feel like a lot of the things I hear from other folks is that I'm missing the possibility of programming the way I used to, rather than being sad that it might no longer be practical within the constraints of my life. Even if I did start doing it on the side, going from eight hours a day programming to squeezing in one or two would be quite the change.
> You could perhaps even start your own company if it's that important to you.
I have no interest in ever starting or running a company. Every person I've known who does it has it consume their life. I'd rather spend time with my family and do stuff which doesn't involve a boatload of stress and additional responsibility. Not to mention that in the case where I'm no longer able to find a job programming the way I used to then I don't think it's likely a company where people did that would be competitive enough to survive, at least not normally. Also, one of the reasons I rarely program on the side is that I don't often have ideas for things to make. I just don't feel the need for much software in my personal life, and certainly don't think of the sorts of things where there's a market for them.
Having a lucrative job where you do something where you find great satisfaction in your daily work isn't the norm and is the privilege of a lucky few, but that doesn't mean they aren't justified in feeling sad at the prospect of it going away.
Dogmatism sometimes seems like a better thing compared to mind so open that wind blows through it without obstacles.
Not having to know the lower levels means you can free your mind for things at higher levels of abstraction. It doesn't leave a void in your brain; you fill it with other things.
But I don't know. I'm new to this.
However, that's not useful in predicting what capital owners will do, because they follow their local incentives. "If everyone keeps doing X, we will all be worse off" does not help unless you can create local incentives that point toward an equilibrium where everyone stops doing X.
In this case, no capital owner is individually better off by unilaterally refusing to chase more efficient returns on their capital. We would need an international agreement, with enforcement mechanisms, like I mentioned above.
I don't need to imagine this student, I'm friends with some who are going through this right now. They graduated almost a year ago and haven't found work yet. One of them jokes about suicide often and I don't know how to help him
The social contract between labour and capital has been frayed for a long time, but it is near breaking now. It's going to get worse, maybe a lot worse, before it gets better. If it ever does get better
Good luck eating that.
Seems like a nightmare.
How do you deal with that feeling?
Code coverage means nothing if you didn't carefully check every test? "and if everything was tested" do you know what code coverage is?
not gonna engage the trolling
I am a high quality/craftsmanship person. I like coding and puzzling. I am highly skilled in functional leaning object oriented deconstruction and systems design. I'm also pretty risk averse.
I also have always believed that you should always be "sharpening your axe". For things like Java development, or anywhere I couldn't use a concise syntax, I would make extensive use of dynamic templating in my IDE. Want a builder pattern, bam, auto-generated.
Now when LLMs came out they really took this to another level. I'm still working on the problems.. even when I'm not writing the lines of code. I'm decomposing the problems.. I'm looking at (or now debating with the AI) what is the best algorithm for something.
It is incredibly powerful.. and I still care about the structure.. I still care about the "flow" of the code.. how the seams line up. I still care about how extensible and flexible it is for extension (based on where I think the business or problem is going).
At the same time.. I definitely can tell you, I don't like migrating projects from Tensorflow v.X to Tensorflow v.Y.
Having said that, I’ve changed the way I work too: more focused chunks of work with tight descriptions and sample data, and it’s like having a second brain.
I don't see it that way.
I'm solving the math problem, but after coming up with a solution I start asking for alternative approaches or formulas I don't even know about.
In fact, calculus is full of such gotcha tricks and formulas; think of integrals or limits. It took whoever found those decades or centuries to find them.
It's not a black/white divide.
It's the same in other industries too. Someone designs and implements something properly and then it gets into the hands of product people who want to rip half of it out. The business then wants some much cheaper contractors to quickly make those changes without the original engineers involved. The result is a mess.
Will LLMs, "the forklifts of the mind" make us less mentally fit?
Seems like a pretty likely outcome to me
Of course the structure exists because we allow it, that's the easy part. Hard part is - why do we allow it?
What could be considered to be a nightmare, perhaps, is suddenly feeling like 'uh oh, is this going to be the new normal. Will I have to keep doing this all the time now, or else they think I'm not getting any work done?'
If I say "doing $FOO is a losing proposition", and I believe what I say, why on earth would I then move on to actually doing $FOO?
I am pointing out that there is nothing to be gained by joining this recursive race - anything you produce using LLMs I can clone using LLMs, but anything I produce using LLMs can be cloned by someone else, using LLMs.
Why would you assume that I want to insert myself into this recursively descending race to the bottom?
I experimented with it a bit a couple weeks before that on my own personal projects as well, but I don't feel that same push when I'm doing my own projects, obviously (well, if I do, it's because I choose to).
I can come up with many examples that would take you ages to search in Kagi vs one prompt in ChatGPT.
You really should be updating.
However there’s not a lot of overlap I’ve noticed between the craftsman crowd and that one.
No? You assert that it writes better code than the average software developer?
> Code coverage means nothing if you didn't carefully check every test? "and if everything was tested" do you know what code coverage is?
Do you know?
Code coverage only tells what amount of the code gets *touched* by the tests.
To achieve code coverage it's enough to CALL the code, it doesn't tell you anything about the correctness of the tests: they could all end with a return true, and a code coverage tool would be perfectly happy.
So, yes, if you don't carefully check the test suite that the agent writes, it might well be worthless (or simply much less useful than you assume it to be, more realistically).
With "if everything was tested" I meant that you also need to check if the agent wrote all the tests that are needed, besides verifying that the ones it wrote are correct.
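A minimal sketch of that failure mode (hypothetical function and test names): a test can execute every branch, so a coverage tool reports 100%, while never checking a single result.

```python
# Hypothetical function under test.
def apply_discount(price, rate):
    if rate < 0 or rate > 1:
        raise ValueError("rate must be between 0 and 1")
    return price * (1 - rate)

# This "test" touches both the happy path and the error path, so line
# coverage is 100% -- yet it would still pass if apply_discount
# returned a completely wrong number.
def test_apply_discount():
    apply_discount(100, 0.2)      # result computed, never asserted
    try:
        apply_discount(100, 2)    # exception raised, then swallowed
    except ValueError:
        pass

test_apply_discount()  # runs green, verifies nothing
```

This is why a coverage number from an agent-written suite tells you how much code was *called*, not how much behaviour was *checked*.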
That line always makes me laugh. There are only two points to an algorithm: domain correctness and technical performance. For the first, you need to step out of the code. And for the second you need proofs. Not sure what there is to debate.
In practice this currently means voting for political options who can correctly identify concentration of power as the root cause of most of our current and future problems, and who pledge to actually do something about it.
Sure, they could have come up with those optimizations without AI... but they didn't. What's your theory for why that is?
ironically it is your camp that advises not to use microservices but to start with a monolith. that's what i'm suggesting here.
You can't trust any new hires if you have middle managers with no technical experience who only care about business concerns, execs who only care about money, and all your good devs have left the company because they are not allowed to change anything.
The blame game is a massive red flag that drives everyone actually worth a damn to leave. Complacency, intolerance of disagreement, hyperpragmatism, obsessive focus on measurable productivity, etc. all kill a business by a thousand papercuts. A business needs room to breathe, and the time and desire to think, in order to thrive.
If you want new engineers who do good work you have to recognize that existing problems have become intertwined with the way the business currently works. They cannot fix what they cannot discuss. They cannot create when their hands are forced to repeat the motions of the ones they're replacing. All the while, someone rotten in the middle is definitely benefitting from throwing people under the bus and picking up a paycheck.
Surely you can just tell me: what is, in your opinion, the best project to showcase what AI coding methods can achieve? Preferably a project that is polished enough that it can actually be applied somewhere.
I never get answers when I ask _what_ people actually build using AI. Please tell me.
There may be other considerations as well -- licensing terms, resources, etc.
Absolutely. It contains a lot, if not the majority, of all the code available at our hands right now, and it can reason about it (whatever it means for LLMs to reason). It absolutely demolishes the average software developer, and it’s not even close.
> To achieve code coverage it's enough to CALL the code, it doesn't tell you anything about the correctness of the tests: they could all end with a return true, and a code coverage tool would be perfectly happy.
> So, yes, if you don't carefully check the test suite that the agent writes, it might well be worthless (or simply much less useful than you assume it to be, more realistically).
That’s like saying that if you don’t check every line your coworker writes it becomes worthless.
Is that any different from doing work we can do, but way faster than we could realistically do it?
Change the analogy to an excavator then. I could move a pile of dirt with a shovel, or I could write code with my brain.
Or I could move a pile of dirt with an excavator, or write code with an LLM.
I haven't seen evidence of that. I see evidence of rapid advances in task length, general capabilities, and research and development capabilities in AI, and generality, price, and autonomy in robotics.
How much headroom in these capabilities do you believe we have, before a data center can protect and maintain itself and an on-site power plant? Before robots can run a robot factory?
Everyone who says this has never been the one who had to fix the code later. They have already moved on to the next job (or been fired). Engineers do know the tradeoff between quality and speed, and they can hack if that’s what’s needed to get the project to the finish line. But good ones will note down the hack and resolve it later. Bad ones will pat themselves on the back and add more hacks on top.
People seem to think that technical debt doesn't need to be paid back for ages. In my experience bad code starts to cost more than it saved after about three months. So if you have to get a demo ready right now that will save the company then hack it in. But that's not the case for most technical debt. In most cases the management just want the perception of speed so they pile debt upon debt. Then they can't figure out why delivery gets slower and slower.
> ironically it is your camp that advises not to use microservices but to start with a monolith. that's what i'm suggesting here.
I agree with this. But there's a difference between over-engineering and hacking in bad quality code. So to be clear, I am talking about the latter.
> That does not mean you are correct. This mindset is useful only in serious reusable libraries and open source tools. Most enterprise code involves lots of exploring and fast iteration. Code quality doesn’t matter that much. No one else is going to see it.
Here? Most of those that I’ve listed IS boring enterprise code. Unless we’re talking medical/military grade.
> > So, yes, if you don't carefully check the test suite that the agent writes, it might well be worthless (or simply much less useful than you assume it to be, more realistically).
> That’s like saying that if you don’t check every line your coworker writes it becomes worthless
A coworker is (supposedly) a *competent person*; placing some trust in them is not stupid.
You'd usually want to have a review of everything everyone does, but even if you don't do it, a reasonably competent and honest developer is never going to be as misleading as LLMs can be.
Furthermore, in traditional development, even without code reviews you will look at the test suite every now and then (or rather, always, when you touch the code that it tests).
If you're the size of Shopify that represents a huge saving in server costs and improved customer-facing latency.
For example there exist "applications"/"demos" that exist "to show the customer what could be possible if they hire 'us'". These demos just have to survive a, say, intense two-hour marketing pitch and some inconvenient questions/tests that someone in the audience might come up with during these two hours.
In other words: applications for "pitching possibilities" to a potential customer, where everything is allowed to be smoke and mirrors if necessary (once the customer has been convinced with all tricks to hire the respective company for the project, the requirements will completely change anyway ...).
but most people aren't writing code in those places. it's usually CRUD, advertisement, startups, ecommerce.
also there are two things going on here:
- quality of code
- correctness of code
in serious reusable libraries and open-source tools, quality of code matters: the interfaces, redundancy, etc.
but that's not exactly equal to correctness. one can prioritise correctness without dogmatism in craft like clean code etc.
in most of these commercial contexts like ecommerce, ads - you don't need the dogmatism that the craft camp brings. that's the category error.
If you showed a conversation between Terry Tao or Steve Yegge and their AI collaborators to someone from 2021, they would consider it beyond obvious that it's AGI. Today, we know they still have some shortcomings; but in another 5 years, what looks to us today like it's beyond obviously ASI may well be enough for catastrophic, irrecoverable outcomes.
This percentage is meaningless on its own. It’s 4 ms shaved off a 7 ms process. You would need to time a whole flow (and I believe databases would add a lot to it, especially with network latency) and figure out how significant the performance improvement actually is. And that’s without considering whether the code change conflicts with some architectural change that is being planned.
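To make the point concrete with entirely made-up numbers: a ~57% speedup of one step can shrink to a ~4% end-to-end improvement once the rest of the flow (network, database) dominates.

```python
# Illustrative, invented timings: one step drops from 7 ms to 3 ms,
# but the full request also pays network and database costs.
step_before, step_after = 7.0, 3.0    # ms, the optimized step
network, database = 40.0, 50.0        # ms, assumed costs of the rest of the flow

total_before = step_before + network + database   # 97 ms
total_after = step_after + network + database     # 93 ms

step_speedup = (step_before - step_after) / step_before           # ~57%
end_to_end_speedup = (total_before - total_after) / total_before  # ~4%

print(f"step speedup: {step_speedup:.0%}")
print(f"end-to-end speedup: {end_to_end_speedup:.1%}")
```

The network and database figures here are assumptions for the sake of the arithmetic; the shape of the conclusion (a big local win diluted by the surrounding flow) is the point.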
Even in web software, you can write good code without compromising in delivery speed. That just requires you to be good at what you’re doing. But the web is more forgiving of mistakes and a lot of frameworks have no taste at all.