09 Feb, 2026
In the last couple of weeks, various folks have written about feeling an inflection point in using agentic AI to build software. The consensus seems to be that these tools have crossed some threshold into being genuinely game-changing, and maybe there's no turning back. The piece Competence As Tragedy especially got me thinking. It's not long, so I'd recommend reading the whole thing, but this feels like the heart of it:
[W]hat does it mean to keep practicing a craft you love when you suspect it's dying?
The skills might be depreciating, but the attention, the way of seeing systems, the pleasure of making something work, that's not separable from who I am. I didn't become an engineer because the market demanded it. I became one because my brain works this way, because I like the puzzle, because there's something satisfying about building things that function.
All this has pushed me to reconsider my (so far) lack of use of agentic AI as a professional software engineer. This article is me writing out my thoughts on the matter, as they stand right now, and my decision for how to proceed in this current moment.
First off, I am tired of people writing things like this without properly acknowledging the ethical issues of generative AI. These models are:
It sucks a little of my soul each time I hear someone at work talk about AI tools on a purely technical basis, without any consideration of these ethical issues. These are cool and powerful tools, with lots of technical nuance that's interesting to discuss and engineer, but don't get distracted. We as engineers cannot choose to ignore ethics in our professional decision-making.
All that being said, I don't feel that I have any notable impact on the harms being done. I know this is a doomer take, but that's where my head is right now. I'm not going to lead some luddite rebellion against AI at my workplace. My employer already has floating licenses for these tools, so what marginal impact will I have by trying them out? It's not zero, but it seems like it's small enough that I'm willing to begrudgingly continue.
There's a chance that if I start using these tools regularly, the ethics will end up making me queasy enough that I choose to disengage even if I find them otherwise useful and/or enjoyable. We'll see.
I've used AI for some one-off personal and professional tasks. These were via chatbots, where I ask a specific question and I'm looking for a clear answer. I have been relatively unimpressed with the results, though I haven't put much effort into changing my usage to prompt better responses.
For a while I told myself that once my job had a nice plug-and-play setup for agentic workflows, I would give it a try. But that happened a couple of months ago and I haven't touched it. I told myself that if a task came up that felt like a good fit for the AI, I'd give it a go, but that never happened -- each task felt either too small to warrant learning a new system, or too large to entrust to the AI on my first try.
My plan is to spend a week trying my best to learn some of these tools by using them in my day-to-day work.
This article seems like a reasonable guide for how to gradually grow experience around using these tools. On the plus side, I should be able to reuse a lot of the "harness engineering" that my coworkers have been working on as I do this.
My main goal from this is to learn. I think I already have a solid understanding of the technology and frameworks surrounding LLMs, and of their strengths and weaknesses. However, none of that is from practical experience. I can talk about what the AIs are good at and where they struggle, and I think I'm right (my opinions are at least formed by reading and by talking to smart, informed people). But any time I have a conversation about this stuff, I worry that I might be full of shit. After this week is done, I will hopefully have enough experience to feel confident I'm not full of shit (at least until there's some other paradigm shift in how programming is done...)
After this week-long trial, I'll reassess and see how I'm feeling.
This all scares me. I don't like admitting that, but it does.
I'm afraid in so many ways for our world and how we're all going to make it through these unprecedented times.
I'm afraid that the joy I've always found in programming work will be lost, if programming turns into managing AI agents. Maybe non-AI programming will continue to be a viable option, or maybe I can find similar joy in AI-based workflows, but neither seem like a sure bet.
I'm afraid for the long-term prospects of being a professional software engineer. I'm 27; I've been programming for almost half my life now. I love it, and for better or worse it's part of my identity. I think I'd struggle if AI significantly cuts down the need for this kind of work and I have to redirect my career path.
And I'm afraid for the craft of software itself. If AI tools become the norm for writing software, does that mean perpetual stagnation in language and framework design?
I recognize that I may be blowing things out of proportion, but this is the baggage that I've got. It seems likely at this point that AI-assisted software engineering is not just a fad. I hope I've made it clear that I'm not doing this just because "AI is the future, don't get left behind". But I'm also afraid that lacking this experience will leave me at a disadvantage compared to other developers if and when I end up job-searching in the future.
I feel like I'm supposed to have some big smart conclusion for all this, but I don't. I'm giving in and trying agentic AI, despite remaining unhappy with the ethical ramifications of the decision. We'll see how it goes.