But yeah, my hunch is that "the old way" (although I'm not sure we can even call it that) is likely still on par with an "agentic" workflow if you view it through a wider lens. You retain much better knowledge of the codebase. You improve your understanding of coding concepts (active recall is far stronger than passive recognition).
You mean the way that the majority of code is still written by professionals?
I saw this quote when looking at the Recurse Center website. How does one usually go about something like this if they work full time? Does this mainly target those who are just entering the industry or between jobs?
I know the article is mostly about what the author built at the coding retreat, but now he has me interested in trying to attend one!
I am seeing non-technical people getting involved in building apps with Claude. After the Openclaw and other agentic obsession trends, I just don't see it as pragmatic to continue down the road of AI obsession.
In most other aspects of life, my skills were valued because of my ability to care about the details under the hood and to get my hands dirty on new problems.
Curious to see how the market adapts and how people find ways to communicate this ability for nuance.
I remember writing BASIC on the Apple II back when it wasn't retro to do so!
But when it comes to the final act I find myself unwilling to let an LLM write the actual code - I still do it myself.
Perhaps because my main project at the moment is a game I've been working on for four years, so the codebase is sizable, non-trivial, and all written by me. My strong sense even since coding LLMs showed up has been that continuing to write the code is important for keeping it coherent and manageable as a whole, including my mental model of it.
And also: for keeping myself happy working on it. The enjoyment would be gone if I leaned that far into LLMs.
I still keep hoping there'll be a surge of demand for traditional software engineers once the bibbi in the babka goes boom in production systems in a big way:
https://m.youtube.com/watch?v=J1W1CHhxDSk
But agentic workflows are so good now—and bound to get better with things like Claude Mythos—that programming without LLMs looks more and more cooked as a professional technique (rather than a curiosity or exercise) with each passing day. Human software engineers may well end up out of the loop completely except for the endpoints in a few years.
> There were 2 or 3 bugs that stumped me, and after 20 min or so of debugging I asked Claude for some advice. But most of the debugging was by hand!
Twenty whole minutes. Us old-timers (I am 39) are chortling.
I am not trying to knock the author specifically. But he was doing this for education, not for work. He should have spent more like 6 hours before desperately reaching for the LLM. I imagine after 1 hour he would have figured it out on his own.
Now, they are programming a chip from the seventies using an editor/assembler that was written in 1983 and has a line editor, not a full-screen one.
We had a total of 10 hours of class + lab where I taught them about assembly language and told them about the registers, instructions, and addressing modes of the chip, memory map and monitor routines of the Apple, and after that we went and wrote a few programs together, mostly using the low-resolution graphics mode (40x40): a drawing program, a bouncing ball, culminating in hand-rolled sprites with simple collision detection.
Their assignment is to write a simple program (I suggested a low-res game like Snake or Tetris but they can do whatever they want provided they tell me about it and I okay it), demo their program, and then explain to the class how it works.
At first they hated the line editor. But then a very interesting thing happened. They started thinking about their code before writing it. Planning. Discussing things in advance. Everything we told them they should do before coding in previous classes, but they didn't do because a powerful editor was right there so why not use it?...
And then they started to get used to the line editor. They told me they didn't need to really see the code on the screen, it was in their head.
They will of course go back to modern tools after class is finished, but I think it's good for them to have this kind of experience.
I do the former for fun. The latter to provide for my family.
There is a reason old men take on hobbies like woodworking and fixing old cars and other stuff that has been replaced by technology.
Then, when the credits run out, it's show time! The code is neatly organized, the abstractions make sense, and the comments are helpful, so I have solid ground for some good old organic human coding. I make sure that when I'm approaching the limits, I'm asking the AI to set the stage.
I used to get frustrated when credits ran out, because the AI was making something I would need to study to comprehend. Now I'm eager for the next “brain time hand-out”.
It sounds weird but it’s a form of teamwork. I have the means to pay for a larger plan but i’d rather keep my brain active.
> 15 years of Clojure experience
My God I’m old.
What scares the shit out of me are all these new CS grads who admit they have never coded anything more complex than basic class assignments by hand, who just let LLMs push straight to main for everything, and who get hired as senior engineers.
It is like hiring an army of accountants that have never done math on paper and exclusively let turbotax do all the work.
If you have never written and maintained a complex project by hand, you should not be allowed to be involved in the development of production bound code.
But also, I felt this way about the industry long before LLMs. If you are not confident enough to run Linux on the computer in front of you, no senior sysadmin will hire you to go near their production systems.
Job one of everyone I mentor is to build Linux from scratch, and if you want an LLM build all the tools to run one locally for yourself. You will be way more capable and employable if you do not skip straight to using magic you do not understand.
* Ask someone to come over and look
* Come back the next day, work on something else
* Add comment # KNOWN-ISSUE: ...., and move on and forget about it.
But yeah, I've spent days on a bug at work before, ha ha!
Though a lot of the time this is more an inefficiency of the documentation and Google rather than something only LLMs could do.
But just today a bug was reported by a customer (we are still in testing, so not a production bug). I implemented this project myself from an empty git repo and an empty AWS account, including 3 weeks of pre-implementation discovery.
I reproduced the issue and threw the problem at Claude with nothing but two pieces of information: the ID of the event showing the bug, and the description.
It worked backwards looking at the event stream in the database, looking at the code that stored the event stream, looking at the code that generated the event stream (separate Lambda), looking at the actual config table and found the root cause in 3 minutes.
After looking at the code locally, it even looked at the cached artifacts of my build and verified that what was deployed was the same thing that I had locally (same lambda deployment version in AWS as my artifacts). I had it document the debug steps it took in an md file.
Why make life harder on myself? Even if it were something I was doing as a hobby, I have a wife who I want to spend time with, I’m a gym rat and I’m learning Spanish. Why would I waste 6 hours doing something that a computer could do for me in 5 minutes?
Assuming he has a day job and gets off at 6, he would be spending all of his off time chasing down a bug, time he could be using for something else.
That has been exactly the situation for years. Once graduated, accountants are not doing maths; they are using software (Excel, Xero, etc.). They do need to know some basic formulas, e.g. NPV.
What they need to know is the law, current business practices etc.
If that's true, then you likely used to produce slop for code. :-(
> I did things the old way for 25 years and my carpal tunnels are wearing out.
You wrote so much code as to wear out your carpal tunnel? Are you sure it isn't the documentation and the online chatter with your peers? :-(
... anyway, I know it's corny to say, but you should have improved, and should now improve, the ergonomics of your setup. Play with things like the depth of your keyboard on your desk, the height of the chair and the desk, with/without chair armrests, keyboard angle, etc.
> Job one of everyone I mentor is to build Linux from scratch
"from scratch" can mean any number of things.
Why would you think that? The landscape is fast-moving. Prompting tricks and "AI skills" of yesterday are already dated and sometimes actively counterproductive. The explicit goal of the companies working on the tech is to lower the barriers to entry and make it easier to use, building harnesses and doing refinement that align LLMs to an intuitive mode of interaction.
Do you think they'll fail? Do you think we've plateaued in terms of what using a computer looks like and your learnings for wrangling the agents of this year will be relevant for whatever the new hotness is next year? It's a strong claim that demands similarly strong argument to support.
It is hard indeed. I find it really quite exhausting.
Personally, I feel like I have always been a very competent programmer. I'm embracing the new way of working, but it seems like quite a different skillset. I somewhat believe that it will be relevant for a long time, because there is an incredibly large gap in outcomes between members of my team using AI. I've had good results so far, but I'm keen to improve.
For the good stuff, there’s no alternative but to know and to have taste. Llms change nothing.
Citation needed.
This is a tried and true way of working on puzzles and other hard problems.
I generally have 2-4 important things in flight, so I find myself doing this a lot when I get stuck.
If you want to solve the problem quickly, then just use the resources you have. If you want to become someone who can solve problems quickly, then you need to spend hundreds of hours banging your head against a wall.
This is exactly how you learn to create better abstractions and write clear code that future you will understand.
It's not though. It's fundamentally different because TurboTax will still work with clear deterministic algorithms. We need to see that the jump to AI is not a jump from hand written math to calculators. It's a jump from understanding how the math works to another world of depending on magic machines that spit out numbers that sort of work 90% of the time.
If we assume that there are 50 weeks per year, this gives us about 400-500 lines of code per week. Even at a long average of 65 chars per line, that comes to no more than about 33K bytes per week. Your comment is about 1,250 bytes long; if you wrote four such comments per day for a whole week, you would exceed that 33K-byte limit.
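The arithmetic above can be sanity-checked in a few lines of Python (every figure is the comment's own assumption, not a measurement):

```python
# Back-of-envelope check of the estimate above; all numbers are the
# comment's assumptions, not measurements.
lines_per_week = 500            # upper end of the 400-500 lines/week range
chars_per_line = 65             # "long average" line length
code_bytes = lines_per_week * chars_per_line   # weekly code output in bytes

comment_bytes = 1250            # rough size of the comment itself
prose_bytes = comment_bytes * 4 * 7            # four such comments/day for a week

print(code_bytes, prose_bytes)  # the prose output exceeds the code output
```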
I find this amusing.
So only the old hands allowed from now on, or how are we going to provide these learning opportunities at scale for new developers?
Serious question.
How? I just open multiple terminal panes, use git tree, and then basically it’s good old software dev practices. What am I missing?
Having a tool that instantly searches through the first 50 pages of google and comes up with a reasonable solution is just speeding up what I would have done manually anyways.
Would I have learned more about (and around) the system I‘m building? Absolutely. I just prefer making my system work over anything else, so I don’t mind losing that.
They probably wouldn't think that the calculator makes them faster either.
Claude Opus is going to give zero fucks about your attempts to manage it.
But he was doing this for education, not for work.
That's why he should spend 6 hours on it, and not give up and run to the gym. That's like saying "I shouldn't spend an hour at the gym this week, lifting weights is hard and I want to watch TV. I'll just get my forklift to lift the weights for me!"

Always happy to mentor people at stagex and hashbang (orgs I founded).
Also being a maintainer of an influential open source project goes on a resume, and helps you get seen in a crowded market while boosting your skills and making the world better. Win/win all around.
I don't think SWE is a promising career to get started in today.
The euphoria I felt after fixing bugs that I stayed up late working on is like nothing else.
The time wasted thinking our craft matters more than solving real world problems?
The amount of ceremony we're giving bugs here is insane.
Paraphrasing some of y'all,
> "I don't have to spend a day stepping through with a debugger hoping to repro"
THAT IS NOT A PROBLEM!
We're turning sand into magic, making the universe come alive. It's as if we just got electricity and the internet and some of us are still reminiscing about whale blubber smells and chemical extraction of kerosene.
The job is to deliver value. Not miss how hard it used to be and how much time we wasted finding obscure cache invalidation bugs.
Only algorithms and data structures are pure. Your business logic does not deserve the same reverence. It will not live forever - it's ephemeral, to solve a problem for now. In a hundred years, we'll have all new code. So stop worrying and embrace the tools and the speed up.
If you can't fix the bug, just slop some code over it so it's more hidden.
This is all gonna be fascinating in 5-10 years.
If you’re experienced as you are, you’re not learning the same way a junior assigned this might learn from it.
My software engineering experience now spans almost 37 years (this December will be the anniversary), six to seven years more than the median age of Earth's human population. I had two burnouts in that time, but no carpal tunnel syndrome symptoms at all. When I code, I prefer to factor out subproblems; it reduces typing and support costs.
But pro-AI posts never seem to pin themselves down on whether code checked in will be read and understood by a human. Perhaps a lot of engineers work in “vibe-codeable” domains, but a huge amount of domains deal with money, health, financial reporting, etc. Then there are domains those domains use as infrastructure (OS, cloud, databases, networking, etc.)
Even where it is non-critical, such as a social media site, whether that site runs and serves ads (and bills for them correctly) is critical for that company.
We have a completely broken internet with almost nothing using memory encryption, deterministic builds, full source bootstrapping, secure enclaves, end to end encryption, remote attestation, hardware security auth, or proper code review.
Decades of human cognitive work to be done here even with LLM help because the LLMs were trained to keep doing things the old way unless we direct them to do otherwise from our own base of experience on cutting edge security research no models are trained on sufficiently.
You don't notice it when you are only looking at your own harness results, but the LLM bakes so very much of your own skills and opinions into what it does.
LLMs still regurgitate a ton.
I suppose it's like bandwidth cost in the 90s. At some point, it becomes a commodity.
Just so many confusing things go wrong in real-world software, and it is asinine to think that Mythos finding a ton of convoluted memory errors in legacy native code means we've solved debugging. People should pay more attention to the conclusion of "Claude builds a C compiler" - eventually it wasn't able to make further progress, the code was too convoluted and the AI wasn't smart enough. What if that happens at your company in 2027, and all the devs are too atrophied to solve the problem themselves?
I don't think we're "doomed" like some anti-AI folks. But I think a lot of companies - potentially even Anthropic! - are going to collapse very quickly under LLM-assisted technical debt.
This is both a strawman and a false dichotomy.
But for juniors, it's invaluable experience. And as a field we're already seeing problems resulting from the new generations of juniors being taught with modern web development, whose complexity is very obstructing of debugging.
Employers were already refusing to hire juniors, even when 0.5-1 years' salary for a junior would be cheaper than spending the same on hiring a senior.
They'll never accept intentionally "slower" development for the greater good.
I also used Codex and asked questions about how the codebase worked to refresh my own memory. Why wouldn’t a junior developer do the same?
I mentioned that I had Codex describe in detail how it debugged the issue. It walked through each query it ran, the lines of code it looked at, and the IaC. It jogged my memory about code I wrote a year ago, after having been on other projects since then.
That comes post-Chernobyl.
My last summer intern did everything the manual way, except for a chunk where I wanted him to get something done fast without having to learn all the underlying pieces.
Too many of our engineering conversations are dominated by veneration of the old. Let me be hyperbolic so that I can interrupt your train of thought and say this:
We're starting to live in the future.
Let go of your old assumptions. Maybe they still matter, but it's also likely some of them will change.
The old ways of doing things should be put under scrutiny.
In ten years we might be writing in new languages that are better suited for LLMs to manipulate. Frameworks and libraries and languages we use today might get tossed out the door.
All energy devoted to the old way of doing things is perhaps malinvested into a temporary state of affairs. Don't over-index on that.
I worked on a project that depended on an open source but deprecated/unmaintained Linux kernel module that we used for customers running RHEL[1]. There were a number of serious bugs causing panics that we encountered, but only for certain customers with high VFS workloads. I spent days to a week+ on each one, reading kernel code, writing userland utilities to repro the problem, and finally committing fixes to the module. I was the only one on the team up to the task.
We couldn't tell the customers to upgrade, we couldn't write an alternative module in a reasonable timeframe, and they paid us a lot of money, so I did what I had to do.
I'm sure there are lots of other examples like this out there.
[1] Known for its use of ancient kernels with 10000 patches hand-picked by Red Hat. At least at the time (5-10 years ago).
Brooklyn, New York. March 2026.
I decided to move to Brooklyn for a coding retreat.
There were some personal reasons that brought me back to the US. But rather than heading immediately back to work, I wanted to take some time to focus on coding things mostly without AI — at precisely the time when many successful programmers are saying programming is a solved problem.
Given that I’m now six weeks through this retreat, I’ll also take some time to explain what I’ve been doing in that time.
For the past two years, I’ve been building AI agents at Aily Labs in Barcelona alongside some super talented engineers. One of my first projects was building a web search agent we could use internally in early 2024… almost 6 months before Anthropic’s Building Effective AI Agents article came out and a year before OpenAI’s DeepResearch came out! We were also early on Cursor, early on using LLMs to make knowledge graphs, and constantly testing out new approaches for our use cases.
One of my favorite parts of working at Aily was leading a weekly journal club. I chose to present papers that described how open source LLMs were built, including DeepSeek R1, Ai2’s Olmo 3, and Meta’s Llama 3 paper. All of these helped us understand the evolving tradeoffs between training models internally or building workflows around SOTA closed models. I had been hooked on LLMs since the first time I tried them in 2023,1 but I found my curiosity kept bringing me back to learning about how they worked and how to apply them.
At the same time as I was learning about LLMs and agents, I was also using them to code. I learned that when writing code “by hand” I was actually doing two things: writing what I wanted and learning the code base. When I used a coding agent however, I would get exactly what I specified in my prompt, for better or worse. By this I mean that if I didn’t know what I wanted exactly, coding agents would be happy to make many assumptions for me. This almost always meant that I didn’t learn as much, and that I wouldn’t have a good grasp of the codebase.
At the exact same time, coding agents helped me iterate quickly and ship software that worked well (after some dutiful testing, of course). They were also, I found, excellent tutors.
Cal Newport, a computer science professor and the author of Deep Work and other popular productivity books, recently wrote about this tradeoff in a way that resonated with me. In the article, he makes an analogy between the relationship of exercise to health and the relationship of thinking to craft:
Your writing should be your own. The strain required to craft a clear memo or report is the mental equivalent of a gym workout by an athlete; it’s not an annoyance to be eliminated but a key element of your craft.
I think the same applies to writing code. At Aily, the people I worked with who were amazing programmers were in most cases also amazing users of AI. Their deeper knowledge simply gave them more leverage over this tool. In the day to day of shipping agents into production, I didn’t stop learning. But I did have a growing list of coding and computer concepts that I was always too busy to learn about.
So when I needed to head back to the US, I realized it was the perfect time to focus on this at the Recurse Center.
Recurse Center (RC) is a self-directed, full-time programming retreat in Brooklyn. After an application and a coding interview, Recursers arrive with ideas for what they want to program, and then spend 6 or 12 weeks programming. One of the highlights of RC is that it is collaborative: you enter with a cohort of other programmers, many with decades of experience, and with radically different expertises. Another highlight: it’s free!
Coming into RC, my goals were the following:
Train an LLM from scratch. This includes pre- and post-training, and I want to do this mostly from scratch; not just fork a premade codebase but write a Transformer myself.
Get better at writing Python by hand. I’ve been working in Python for a few years now but I know there’s still so much for me to learn. I want to get to the point where I need to reference documentation or ask LLMs as little as possible, and have good intuition for how to set up various projects.
Understand computers better. This is admittedly a broad goal: computers are extremely complicated machines that operate at many levels of abstraction. Given that I never had a formal Computer Science education, I want to build a better mental model of these layers and how they work together. I don’t have a super concrete plan here, but I think RC will be the perfect place for this.
So how is it going?
I’ve done the first assignment from Stanford’s CS336: Language Modeling from Scratch course, without coding help from an LLM.2 For context, it was a 50-page assignment, but working with another Recurser, we wrote an optimized tokenizer in Python, and then built out an upgraded GPT-2 style architecture in PyTorch. We ran multiple ablations to tune hyperparameters on the Tiny Stories dataset, and then used those hyperparameters on the ~9 billion tokens of the OpenWebText dataset.
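The tokenizer half of that assignment centers on byte-pair encoding. As a rough illustration (this is not our optimized version, and the example string is arbitrary), the core training loop repeatedly finds the most frequent adjacent pair of tokens and merges it into a new token:

```python
# Minimal sketch of one byte-pair-encoding merge step; illustrative only,
# not the optimized tokenizer from the assignment.
from collections import Counter

def most_frequent_pair(tokens):
    """Count adjacent token pairs and return the most common one."""
    pairs = Counter(zip(tokens, tokens[1:]))
    return max(pairs, key=pairs.get)

def merge_pair(tokens, pair, new_token):
    """Replace every occurrence of `pair` with `new_token`."""
    out, i = [], 0
    while i < len(tokens):
        if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) == pair:
            out.append(new_token)
            i += 2
        else:
            out.append(tokens[i])
            i += 1
    return out

tokens = list("aaabdaaabac")
pair = most_frequent_pair(tokens)        # ('a', 'a') is the most frequent
tokens = merge_pair(tokens, pair, "aa")  # one merge step of BPE training
```

Real BPE training just repeats this loop (with byte-level tokens and a target vocabulary size), recording each merge so the same sequence can be replayed at encoding time.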
Parameter sweep of different learning rates for the 17M parameter model we wrote by hand; high learning rates lead to instability. This was on the Tiny Stories dataset, and took about an hour to train on an A100.
My plan is to do the other assignments in CS336 as well: optimizing our language model, estimating and computing scaling laws, converting raw text data into pre-training data, and finally post-training a model. I’ve already started the second assignment which involves profiling GPUs and implementing FlashAttention2 in Triton. There’s a lot to do, but ideally I can run through the meat of these assignments and then post-train my own model.
I’ve been writing a lot of small agents and neural networks in Python or PyTorch to practice. But by far the most helpful thing was pair programming with people who have been working in Python for 10+ years, and just watching them work or having them watch me work.
For example, a nice thing I picked up from someone I pair programmed with: when this guy was writing code and didn’t quite remember the syntax or operations, he would often just quickly open up a terminal and type a super simple example to rapidly iterate. He was usually able to work it out and verify if it worked correctly in less than a minute, and he didn’t have to google anything and comb through search results or ask an LLM. This technique might seem obvious to some, but making this process muscle memory has helped me become unstuck much faster.
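As a concrete (and purely illustrative) example of the kind of ten-second check this replaces: instead of searching for how `str.split` treats repeated separators, you just type a tiny case into the interpreter and read the answer.

```python
# The sort of throwaway REPL experiment described above: type a tiny
# example, look at the output, move on. The question chosen here is
# just an illustration.

# Does no-argument split() collapse runs of whitespace?
print("a  b".split())      # ['a', 'b']

# What about an explicit separator?
print("a  b".split(" "))   # ['a', '', 'b'] -- empty strings are kept
```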
I want to keep moving in this direction, doing simple projects or even just problems like Advent of Code while pair programming. Working with someone else live was initially a bit nerve-racking, but precisely because of this I’ve noticed a lot of progress.
Here are a few examples of things I’ve done which I’d classify as helping me understand computers better:
I wrote the classic programming function fizzbuzz in BASIC on an Apple IIe computer from 1983. It was cool seeing how differently computers worked back then, for example how manual the code editing and execution process was, but also how much was basically the same.
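For readers who haven't met it: FizzBuzz prints the numbers from 1 to N, substituting "Fizz" for multiples of 3, "Buzz" for multiples of 5, and "FizzBuzz" for multiples of both. A modern Python version, for comparison with the line-numbered BASIC one:

```python
def fizzbuzz(n: int) -> str:
    """Return the FizzBuzz word for n, or n itself as a string."""
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

for i in range(1, 16):
    print(fizzbuzz(i))
```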
Tinkering with an Apple IIe.
One thing I’ve always felt a bit self-conscious about is my Unix/terminal skills. So I joined CTF Fridays, a weekly session devoted to working through Bandit and other “war games.” These are Unix and computer security related challenges played through the terminal, with the objective of collecting passwords and leveling up. Now I have a pretty good sense for what Claude Code is trying to run on my computer!
One day I hand-coded a single layer perceptron I saw when flipping through an AI textbook… completely in Vim. It was especially tedious at first, but I got some pro tips from another Recurser and learned a few shortcuts. This has actually been incredibly useful now when I’m running training jobs on cloud GPUs and I need to last-minute edit files.
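A single-layer perceptron fits in a couple dozen lines of plain Python. This sketch is not the textbook's version; the data (logical AND) and hyperparameters are illustrative, but the update rule is the classic one:

```python
# Minimal single-layer perceptron with the classic update rule.
# The task (logical AND), learning rate, and epoch count are illustrative.
def train_perceptron(data, epochs=10, lr=0.1):
    """Train weights and bias; each sample is (inputs, 0-or-1 target)."""
    w = [0.0] * len(data[0][0])
    b = 0.0
    for _ in range(epochs):
        for x, target in data:
            pred = predict(w, b, x)
            error = target - pred
            # Weights move only when the prediction is wrong
            w = [wi + lr * error * xi for wi, xi in zip(w, x)]
            b += lr * error
    return w, b

def predict(w, b, x):
    """Step activation: fire (1) if the weighted sum exceeds zero."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Learn logical AND, which is linearly separable
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_perceptron(data)
```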
I joined a Clojure workshop given by someone who has 15+ years of experience using Clojure. The topic itself was interesting because Clojure is a functional programming language and I don’t have much experience with functional languages. The teaching methodology was also great: after a brief intro we did a round of mob programming, where we solved a problem collectively, going around the table with each person getting a minute or two to advance the solution.
The weekly technical presentations are great exposure to an incredible array of topics. These are a set of 5-minute talks, so they are short enough that you don’t get bored but substantial enough that you can learn something meaningful. A sample of titles: “Running Rust Code”, “GPUs for Dummies”, “Typesafe APIs for Type B Personalities”, “Some Useless Agents” (this one was mine!), and more. I’ve given two so far: one on simple agent architectures, one on scaling MCP tools efficiently; and will give another this week on different ways to optimize GPUs.
Even just hearing from people about their projects and careers has been incredibly valuable in helping me understand the space of problems computers can solve.
Soon I’ll be shipping agents to prod and running evals with a whole new bag of tricks and skills. But for now I’ve got 6 more weeks left at RC, which I’m beginning to worry is not enough time to finish everything on my list. And it won’t be. But that’s what makes RC so great: it’s not as much about crossing everything off my list but about spending time coding.
Not sure if I described this before, but I think the reason I was so taken aback was that a few years prior I had been living in Japan studying Japanese full time, and it was really, really hard. And here was a computer model that had managed to figure it out! Even if it hallucinated or couldn’t do math correctly at the time, that was absolutely incredible to me.
There were 2 or 3 bugs that stumped me, and after 20 min or so of debugging I asked Claude for some advice. But most of the debugging was by hand!