There are other properties, such as maintainability, scalability, reliability, resilience, anti-fragility, extensibility, versatility, durability, and composability. Not all apply.
Being able to talk about tradeoffs in terms of solution spaces, not just along a single dimension, is one of the things I consider a differentiator between a senior and a staff+ developer.
Many vendors seem to be learning (or not learning, but just throwing their weight against it anyway) that adding hastily-generated AI features is causing customer dissatisfaction, as more people brand the features "slop".
In the best case, the users give the company more chances. Infinitely more chances.
In a worse case, the users assume the new feature will always be bad, given their first impression. It's hard for a vendor to make people reconsider a first impression.
The absolute worst case is that AI enables a new market, but the first attempts are so poor that the first movers make people write that market off as a dead end, leading to a lost opportunity.
> “Do we really need that?”
> “What happens if we don’t do this?”
> “Can we make do for now? Maybe come back to this later when it becomes more important?”
as with experimenters. Every system is different, every product is different. If I were building firmware for a CT scanner, my approach towards trying out new things would be different than for a CRUD SaaS with 100 clients in a field that could benefit from a fresh perspective.
There are definitely ways for eager/very open seniors to drive systems into hard-to-escape corners. But then there are people who claim PHP5 is all you need.
A rewrite?
I recall a few times everyone promised, if this gets promoted then we will rewrite it from zero. Never happened.
The article touches on responsibility and accountability. There is none for the risk taker. By definition. You have a crazy idea, you rush it out, you hope clients bite. You profit. It's not even your problem how to make it work, scale, or not cost more to run than we sell it for.
The loop on the right: there are companies, two of them very popular these days, that took it to an extreme. You ship something fast, and since it only scales linearly, you go raise money. Successful companies, countless users, some of whom even pay. Who's to blame? The senior developer, or simply someone reasonable who asks: how's that sustainable, what's the way out of this? Those people are fired, so whoever's left is a believer.
As this kind of person, it can be alienating in some teams / companies.
What I've found works best is to convey how the added complexity will affect non-engineers. You have to understand the incentives and trade offs though, and sometimes it's better to take the loss.
If you have the fortune of sticking around with the same leaders for a while, a few rounds of being vocal, but compromising, will work in your favor. When that complexity comes back around to bite them in the way you described, you will earn some trust.
In my experience the solution proposed will rarely result in a less complex solution. Quick MVPs have the tendency to stick around. As soon as a customer starts using some product or feature, the cost of pivoting goes up. If you wish to experiment, do it on a segment.
Old quote: "There is nothing so permanent as a temporary hack."
> Ah, baby, this is my senior developer. The avoider, the reducer, the recycler. They want to avoid development as much as they can.
There are times when this is good, and there are times when actively trying to introduce an improvement is the best way forward. A good senior is able to recognise when those times are.
> This will allow for us to deploy the feature in only X days supporting Y use case with Client W who has been complaining about this shit for Q months now.
Arguments like:
> We should do Z because it would provide future extensibility.
> Z could eventually enable some novel platform capabilities.
> Z is easier to unit test.
are much less likely to succeed in the business contexts that I have experienced so far.
I bet there's money to be made for building a drop-in to either of those two that requires less memory, would save companies a bundle, and make other companies a bundle as well.
Every codebase includes parts that are more experimental, and parts that are more core. My sense is that AI can help on both of these fronts (i.e., building rapid prototypes on the fringes and hardening the core with better test coverage).
FWIW though, the idea of a "speed" product and a "stability" product isn't new. We used to call it "prototyping". I don't know when/how that disappeared from the collective consciousness. "Have a space where we can build things fast with horrible practices" isn't some AI-era innovation; it's what smart companies have done for decades.
> What if we had one system just for speed?
Like a beta? It would take incredible discipline from the business and customers not to consider that production software and demand 99.99% uptime and bug-free behavior.
I guess the author has never worked on a dog shit system with no tests at all and constant downtime.
I have worked with “complexity averse” engineers who would rather fix the edges over and over again than roll up their sleeves and just get the job done.
I just don’t believe that using new tools is at odds with avoiding complexity.
Sometimes you have to take it to the chin, and get to use the new shiny thing along the way to move much faster.
And push an insurmountable pile of technical debt onto the successor.
Well, yeah, I understand the idea and I'm all for it: the less code the better, the less changes the better.
However, in certain industries it is no longer the right approach for the job. In modern frontend development, if you don't update your codebase for a couple of months, it falls so far behind that pushing an upgrade becomes way more expensive than daily minor updates of packages. Yeah, I hate this as much as you do, but this is the pace frontend is moving at, and if you don't follow, you will accumulate technical debt.
It’s not difficult for seasoned engineers with deep technical backgrounds to whiteboard a distributed system in twenty minutes. It takes hundreds of customer discussions, invalid hypotheses, and years of experience building judgment about whether this is the right solution at the right time.
The engineers who compound quickly have usually built their skills in both areas concurrently. Communication of the latter is more challenging due to the judgment-based foundation beneath it.
I saw this yesterday
https://trinkle23897.github.io/learning-beyond-gradients/
They are very remotely related yet somehow very close.
People don't want to be judged in the introduction of an article, based on how they like to approach their literal dayjob. It's a weird jab.
Honest question: does high velocity / first-mover advantage ever really pay off these days?
I don't feel like getting the first AI slop to market has actually paid off for anyone. Am I wrong? Am I missing something? Am I out of touch?
The way I see it, first movers do a lot of work proving the idea works, and everyone else swoops in with better product or at least at a cheaper rate.
Beyond that, let's take the company I work for, for example. We have an ingrained and actually relatively happy customer base on a subscription model. I feel like the only thing increased velocity can do is rapidly ruin their experience.
Remember that the first half of this statement, the part listed here, is great. I love playing with new tools.
The only bad part is the implicit bit after the dots: "we should use this in our product." You don't want cool things anywhere near your product, unless the cool thing is that they remove complexity.
The thing breaks, the salesperson says "can you check this out?" then disappears and we're back to where we started.
I don't even find this very new: many companies I've been at have tried to spin-off a "fast" team to sell stuff.
Maps to what we believe on our team - functional vs non-functional. AI ships functional features fast, but developers are more important than ever in making sure the non-functional aspects are taken care of.
The message that hits for me is that of AI being a destabilizer while simultaneously being an accelerator. The Speed/Scale suggestion won't address this. A codebase no one understands, growing at machine speed won't go away just because you drew a box around it. The fix is likely more mundane stuff like process and role shifts, smaller PRs, tests, tooling, ownership principles.
Fine, then, I'll keep the experience to myself.
>> “AI agents are the future of software development. We won’t need developers anymore to slow down the progress of a business.”
> And so, to me, a copywriter, what’s happening here is that the same message is meaning two different things to two different audiences.
I couldn't tell whether to parse this as "We will be faster without those slow developers", or more cynically as "We don't need developers to slow us down; We can now be slow with ai agents". I suspect that with creeping complexity the latter reading will hold up better for large projects.
I think the framing started in the right path and then took a slightly wrong turn.
Both loops presented benefit from being tighter, faster. One to take a system to a “stable” (maintainable) setpoint quickly. The other to handle uncertainty.
And the additional insight about splitting the systems to better adapt to AI… we’ve described spikes for years, well before AI went mainstream.
AI is actually quite awful for prototyping, because it makes it far too easy to add random crap to your "prototype" without any specific intention. This quickly transforms the prototyping process from something that's high-level and geared towards building the mental model of the real system into something akin to copy-editing a random piece of software without any coherent mental model involved. Moreover, prompting allows you to gloss over some essential complexity of the task without getting any notion of the scope of the effort actually involved. In other words, people end up failing to make necessary decisions while simultaneously getting bogged down with unnecessary ones.
In short, fast feedback loops are only useful if there is actual feedback involved.
Bro & I would not get along well =)))) But the article IS good stuff.
The "speed" loop reminds me a lot of RAD. In fact, AI might be _the_ thing that helps us deliver on RAD's promises from decades ago.
https://www.geeksforgeeks.org/software-engineering/software-...
I am really skeptical of arguments based around "I can do things the model can't" because that space of things is not very large and is getting smaller every day.
The opportunity to not merely cling on to what we have another year but to grow is to say "together, the model can manage so much more complexity than before that we can do things that were not previously possible."
We haven't identified too many of those things yet, but I am certain they are coming.
The safest answer an engineer can give is "no".
I feel like this is shooting from the hip from a single point of view from some semi-large corpo.
> Need to build a whole new feature to test it? Have you tried putting a button in the existing UI and seeing if people click it?
Pretty much word for word. It feels like engineers are collectively feeling the pain now that product has decided that engagement of mental faculties is no longer necessary on their behalf: just build it and figure out the user persona and utility later... if ever. What used to be a process of taking the time to understand the domain, the user, and how the product fits into some process has been tossed out the window; just ship whatever we think some imaginary user wants and experiment until we succeed.
It creates the exact problem that OP talks about: every random feature that gets vibe-coded becomes a source of instability and risk; something that can then only be maintained via more vibe coding because no one has a working mental model of the thing.
As you might imagine, a lot of these ideas fell by the wayside but we had to develop them in full.
So it's not like I have nothing to share after 30 years of experience in the industry, I just have nobody to share it with.
But senior devs are also expected to have a compounding effect even pre-AI. Writing a single doc, refactoring legacy code to make it extensible, building security frameworks specific to the project, and many more. All of these compound across the dev team.
I think the same will happen with agents working on an org-specific paved path set by senior devs.
None of the things I can think of have anything to do with avoiding problems.
To some degree, having 5+ agents working on different projects is similar to leading a team of 5+ people. The skills translate well.
The senior is also able to understand what the agents do, review and challenge it. Juniors often can't.
And finally, the senior has a deeper understanding of what the business and problem domain are, and can therefore guide the AI more effectively towards building the right thing.
The company provides (offer | service) to the (market | user) and receives (feedback | payment).
The service IS the offer, the userbase IS the market, and payment IS the feedback signal.
Right?
EDIT - expanded on original comment to add:
The author's point might be lost on me but seems to be that framing things with one of those sets of labels vs the other may correspond to use of "complexity" vs "uncertainty" as the element targeted for reduction, and choosing those labels carefully in turn correlates to "senior" devs' persuasiveness in prioritization battles with product owners. To which my response would be, "maybe?". (shrug)
I'm not a copywriter by trade but I care about words and may have just been nerd-sniped.
An average unaware person believes that anything can be put into words, and that once the words are said, they mean to the reader what the sayer meant, the only difficulty coming from not knowing the words or mistaking ambiguities. The request to take a dev and "communicate" their expertise to another is based on this belief. And because this belief is wrong, the attempt to communicate expertise never fully succeeds.
Factual knowledge can be transferred via words well; that's why there is always at least partial success at communicating expertise. But a solidified, interconnected world model of what all your knowledge adds up to cannot. AI can blow you out of the water at knowing more facts, but it doesn't yet utilize them in a way that yields, surprisingly often, surprisingly correct insights into what the missing knowledge probably is. That mysterious ability to be right more often comes out of the "world model"; that is what "expertise" is. That part cannot be communicated; one can only help others acquire the same expertise.
Communicating expertise is a hint where to go and what to learn, the reader still needs to put effort to internalize it and they need to have the right project that provides the opportunity to learn what needs to be learnt. It is not an act of transfer.
“I found this new tool and it’s pretty cool ...”
yup “This company <company totally unlike the one we’re in> does things this way, so …”
agreed “Here, look at this HackerNews post that says this is best practice, we should probably …”
sir/m'lady, we're at war from now on. This is the only reason I come here. Of course I don't take everything carelessly, but the amount of experts on this forum is damn high and this is the only forum in the last 10 years that has helped me grow so much.
There exist people whose jobs depend entirely on rolling out new features, or apps of some sort, and having them show up in some form of company metric. If the senior developer says it's a bad idea, those people won't listen, or won't care. Their job is on the line.
There's ways to navigate it.
You see the problem immediately. Sales/marketing didn't do their job sussing out what X is, so they waste dev time with Ys. And worst of all: write once, support forever. Each one-off Y has to be maintained for the special snowflake customer that uses it. None of the Ys actually work well for all the customers with problem X, so you end up drowning in "technical debt" spent to create them all.
If your marketing department leads the company, I've discovered the best option is to just quit. Go find a job at an engineering company.
Companies have outlandish hiring practices. They want juniors who already know everything. That's why admitting that you don't know something is seen as showing weakness to the company in the eyes of a junior. Also, not knowing things will actively keep you from getting promoted.
I'm sure it's not like that everywhere but it's juniors playing the corpo game.
A less experienced dev suggested using "AI magic" to replace a URL validator. I protested, suggesting a cached fuzzy-match solution (prepopulated by AI)... and no one cared. Now the AI model has been suddenly turned down, and our system is broken. We're going to have to re-validate the whole system.
A younger developer who got promoted over me tried to write a doc on possible ways to fix it. He said "hey Dan, can you help me with this?" He got promoted over me because the way to get ahead is to write docs and have meetings, not do things sensibly. Now he's trying to use my work to demonstrate his leadership.
No one cares. The more I offer better solutions, the more it's a threat to less experienced developers. Things mostly work so my manager doesn't care. There's probably better ways for me to have handled things, but it's so exhausting fighting the nonsense and I just want to write good code.
I'd love to talk more live. I think I have some ideas you'd be interested in. Find me in my profile.
seriously. it kills me to have so much knowledge and expertise that few people appear to care about, if they don't downright hate me for wanting to pass it on to others, as it appears institutional knowledge doesn't have any value these days
I've been a mentor off and on for the last few decades, and I've been really lucky to have some strong mentees. Some I've followed for a better part of a decade and are crushing it out there. All I can really say is that they're out there, sorry I don't have any more helpful to say around how to find them etc. I'll mull on that for a bit..
Of course, he turned in his notice shortly after I arrived, because he had found his successor. So, that didn't work out so well for me.
To me, young people just don't seem to know, or want to know, that information and knowledge can be gained from a person. It's the arrogance of youth x100
They have a supercomputer in their pocket/on their desk, and an AI that knows 'everything'. I can't imagine what it's like being a teacher right now.
How's your AI going to explain the office politics? The CTO's opinion on things? Talk about recent outages and learnings (details of which are not often on blogs)?
They think all they need is knowledge and facts, and none of the history, politics, communication, etc.
I think a lot of it is that an AI or Google search won't challenge them, push them, disagree with them - and that's comforting to them, and more desirable than the learning that could happen
Whereas juniors are eager to chat, have lunch with you , and share what they’re working on, the seniors are guarded and solitary.
Maybe that’s just my workplace though!
And yes, the office is important.
I also believe that some of seniors' experience is flesh-level resilience. I'm no smarter than when I joined the industry, I just got used to being in the trenches: how to handle my own psychology, how all the easy-looking things are not easy and how the horrible-looking ones aren't horrible either. I could explain this in detail to any junior, but until they're on the minefield it won't mean much.
The company is offering potential new services to current and potential users in the market and getting feedback on how valuable those new services might be.
"Transmissionism" is a term I've seen to describe this
Most smart juniors have no problem with learning. Perceptual exposure and deliberate practice work almost mechanically. However, if someone can't tell you what examples you should be exposed to, you'll learn crap.
There’s a speed limit, because the faster you go the less room for error you have. It’s the same as being heavily leveraged with debt. If you have a cash investment and it drops by 50% you can just wait. If you’re leveraged 100-to-1, a 1% drop forced liquidation and wipes you out.
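The leverage arithmetic behind this can be made concrete with a toy sketch (my own illustrative helper, not from the comment; assume equity changes by leverage × price move):

```typescript
// Toy model: fraction of equity remaining after a price drop,
// given a leverage ratio. With leverage L, a price move of d
// changes equity by L * d; at or below zero you're liquidated.
function equityAfterDrop(leverage: number, drop: number): number {
  const remaining = 1 - leverage * drop;
  return Math.max(0, remaining);
}

// Unleveraged: a 50% drop still leaves half your equity; you can wait it out.
console.log(equityAfterDrop(1, 0.5));
// Leveraged 100-to-1: a mere 1% drop wipes you out completely.
console.log(equityAfterDrop(100, 0.01));
```

The same shape applies to velocity: the more "leveraged" your pace, the smaller the error you can absorb before being forced to stop and pay.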
I have personally noticed a lot how multiple people can work on the same problem, but the more senior developers get way more mileage out of AI compared to those early in their careers.
Another difference I've noticed is how many agents one can keep running without losing awareness.
It generally just raised the bar on what management will expect from developers, which will result in a shrinking workforce. The only ones who will benefit are AI companies and upper management, since fewer employees means less management, so lower management will get screwed too.
It's especially noticeable when teaching functional programming to people trained in OO: Some people's model just breaks, while others quickly see the similarities, and how one can translate from a world of vars to a world of monads with relative ease. The bones of how computation works aren't changing, just how one puts together the pieces.
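A tiny illustration of that translation, using my own toy example (not from the comment): the same computation expressed once in the world of vars and once as a pipeline of pure expressions.

```typescript
// Imperative/OO style: a mutable accumulator updated step by step.
function sumOfSquaresImperative(xs: number[]): number {
  let total = 0; // the "world of vars"
  for (const x of xs) {
    total += x * x;
  }
  return total;
}

// Functional style: the same bones of the computation,
// but as composed pure expressions with no mutation.
const sumOfSquaresFunctional = (xs: number[]): number =>
  xs.map(x => x * x).reduce((acc, x) => acc + x, 0);

console.log(sumOfSquaresImperative([1, 2, 3])); // 14
console.log(sumOfSquaresFunctional([1, 2, 3])); // 14
```

How computation works doesn't change between the two; only how the pieces are put together.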
I try to train and mentor those that are junior to me. I try to show them what is possible, and patterns that result in failure. This training is often piecemeal and incomplete. As much as I can, I communicate why I do the things I do, but there are very few things I tell them not to do.
I am often surprised at the way people I have trained solve problems, and frequently I learn things myself.
Training is less successful for those who aren’t interested in their own contributions, and who view the job only as a means to get paid. I am not saying those people are wrong to think that way, but building a world view of work based on disinterest isn’t going to let people internalize training.
"Seeing the work reveals what matters. Even if the master were a good teacher, apprenticeship in the context of on-going work is the most effective way to learn. People are not aware of everything they do. Each step of doing a task reminds them of the next step; each action taken reminds them of the last time they had to take such an action and what happened then. Some actions are the result of years of experience and have subtle reasons; other actions are habit and no longer have a good justification. Nobody can talk better about what they do and why they do it than they can while in the middle of doing it."
complexity is
not what you believe it is
please try listening
I've been doing this for coming up on thirty years now, mostly at one large company, and I spent a significant number of hours every week fielding questions from people who are newer at it who are having trouble with one thing or another. Often I can tell immediately from the question that the root of the problem is that their world model (Naur would call it their Theory) is incomplete or distorted in some way that makes it difficult for them to reason about fixing the problem. Often they will complain that documentation is inadequate or missing, or that we don't do it the way everyone else does, or whatever, and there's almost always some truth to that.
The challenge then is to find a way to represent your own theory of whatever the thing is into some kind of symbolic representation, usually some combination of text and diagrams which, shown to a person of reasonable experience and intelligence, would conjure up a mental model in the reader which is similar to your own. In other words you want to install your theory into the mind of another person.
A theory of the type Naur describes can't be transplanted directly, but I think my job as a senior developer is to draw upon my experience, whether it was in the lecture hall or on the job, to figure out a way of reproducing those theories. That's one of the reasons why communication skills are so critical, but it's not just that; a person also needs to experience this process of receiving a theory of operation from another person many times over to develop instincts about how to do it effectively. Then we have to refine those intuitions into repeatable processes, whether it's writing documents, holding classes, etc.
This has become the most rewarding part of my work, and a large part of why I'm not eager to retire yet, as long as I feel I'm performing this function in a meaningful way. I still have a great deal to learn about it, but I think that Naur's conception of what is actually going on here makes it a lot clearer what role senior engineers can play in the long-term function of software companies, if it's something they enjoy doing.
Looking deeper into it: these people don't understand the underlying foundations anymore. Just keep building fast, without building proper mental models (that would take time).
Our work is largely very difficult to understand to outsiders, we need to write docs and have meetings to show what we have done. It's part of the job, and yes, if you don't do that, it doesn't matter how fantastic the software is that you wrote (sadly).
It's just basic game theory, and you see it everywhere. However, it's so annoying in the workplace when your two options seem to come down to try to dominate or be dominated. Especially if you care about quality code and don't care for meetings.
As far as I'm concerned, I think I have to make peace with the fact that if I don't play the game, I am going to be managed by people who don't know what they're doing. But neither option seems particularly good. Should I try to bury my ego and influence from below? Should I work harder and try to climb the corporate ladder? I'm still not sure.
Orgs get what they measure for. If your team values that sort of interactivity and support, it will ... observe it, measure it, and hire for that sort of person. I've seen groups evolve towards that, and they've been great, but it doesn't seem to be a default - most groups/orgs have to work towards it and keep working at it.
That said, I completely agree. I learned most of what I know from being in the same room with senior developers and asking questions. Something that just isn't happening these days.
Honestly I have the feeling that this is often insecurity. It's easy to feel uncomfortable if you think you don't follow along.
Another issue is that juniors usually experience culture shock on their first jobs. So they more or less isolate and do things the way they learned them.
If anything, quite often it's introducing more problems, because we know we'll run into them and they need to be addressed.
AI is sometimes quite lazy and refuses to solve the hard problems (sometimes making funny excuses like it would take weeks) until you make it explicit that it's important they are dealt with.
Besides OO -> Functional this applies everywhere else in Computer Science. If you understood the fundamentals no new framework, language or paradigm can shock you. The similarities are clear once you have a fitting world model.
I think it becomes difficult to train the next layer up though, which is a sum-total of life experience. And I think this is what the parent poster was referring to.
For example, I read a lot of Agatha Christie growing up. At school I participated in problem-solving groups, focusing on ways to "think" about problems. And I read Mark Clifton's "Eight keys to Eden".
All of that means I approach bug-fixing in a specific mental way. I approach it less as "where is the bug" and more like "how would I get this effect if I was wanting to do it". It's part detective novel, part change in perspective, part logical progression.
So yes, training is good, and I agree it needs to be done. But I can not really teach "the way I think". That's the product of a misspent youth, life experience, and ingrained mental patterns.
Jevons paradox is already rearing its head, I've seen data suggesting open roles in tech are at their highest since the post-pandemic slump [1]. If you're a senior leader at a company and your engineers are now capable of multiple-times more productivity, is the logical choice to fire half, or set way more ambitious goals? One assumes engineers are hired because their outputs are worth more than their cost. If outputs, at least for those capable of wielding new tools, are higher, so is the value of that employee to you.
The universal thing I'm hearing from friends at small-mid-size tech companies, and experiencing myself, is that there is way more work and demand for it from senior leaders than they're capable of with their current teams.
1: https://www.ciodive.com/news/tech-job-postings-hit-3-year-hi...
However, I've seen developers who have been in this field for decades, and they still just followed recipes without understanding them.
So I'm not entirely sure the distinction is this clear. But of course, it depends on how we define "senior". Senior can mean developers who try to understand the underlying reasons and have coded for a while. But companies seem to disagree.
Btw, regarding functional programming: when I first coded in Haskell, I remember coding in it like in a standard imperative language. Funnily, nowadays it's the opposite: when I code in imperative languages, it looks like functional programming. I don't know when my mental model switched. But one thing's for sure: when I refactor something, my first todo is to make the data flow as "functional" as possible, then do the real refactoring. It helps a lot to prevent bugs.
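A minimal sketch of what "make the data flow functional first" can look like (illustrative names, not the commenter's actual code): pull the pure transformation out of the effectful loop, then refactor the pure core safely.

```typescript
type User = { name: string; active: boolean };

// Before: filtering, formatting and I/O tangled in one loop.
function printActiveUsersBefore(users: User[]): void {
  for (const u of users) {
    if (u.active) {
      console.log(u.name.toUpperCase());
    }
  }
}

// After: the data flow is a pure function, and the side effect
// sits at the edge. The pure core is easy to test and refactor.
const formatActiveUsers = (users: User[]): string[] =>
  users.filter(u => u.active).map(u => u.name.toUpperCase());

function printActiveUsersAfter(users: User[]): void {
  for (const line of formatActiveUsers(users)) {
    console.log(line);
  }
}

printActiveUsersBefore([{ name: "ada", active: true }]);
printActiveUsersAfter([{ name: "ada", active: true }]);
```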
What really broke my mind was Prolog. It took me a long time to be able to do anything more than simple Hello World level things, at least compared to Haskell, for example.
Then, I met software and computer science abstractions, they all seemed so arbitrary to me, I often didn't even understand what the recipe was supposed to cook. And though I have gotten better over time (and can now write good solutions in certain domains), to this day I did not develop a "physics" level understanding of software or computer science.
It feels really strange and messes with your sense of intelligence. Wondering if anyone here has a similar experience and was able to resolve it.
Or maybe I'm just a little bit insane. Or both.
Of course the model is incomplete compared to reality. That's in the definition of a model, isn't it? And what is deemed a problem in one perspective might be conceived as a non problem in an other, and be unrepresentable in an other.
Thesis A is something like: the value of the programmer comes from their practical ability to keep developing the codebase. This ability is specific to the codebase. It can only be obtained through practice with that codebase, and can't be transferred through artefacts, for the same reason you can't learn to play tennis by reading about it (a "Mary's Room" argument).
This ability is what Naur calls "theory". I think the term is a bit confusing (to me, the word is associated with "theoretical" and therefore to things that can be written down). I feel like in modern discourse we would usually refer to this as a "mental model", a "capability", or "tacit knowledge".
Then there's Thesis B, which comes more from a DDD lineage, and which is something like: the development of a codebase requires accumulation of specific insights, specific clarifying perspectives about problem-domain knowledge. The ability for programmers to build understanding is tied to how well these insights are expressed as artefacts (codebase structure, documentation, communication documents).
I feel like some disagreements in SWE discourse come from not balancing these two perspectives. They're actually not contradictory at all and the result of them is pretty common-sensical. Thesis A explains the actual mechanism for Thesis B, which is that providing scaffolding for someone learning the codebase obviously helps, and vice-versa, because the learned mental model is an internally structured representation that can, with work, be externalised (this work is what "communication skills" are).
(Second, albeit more theoretical, would be A Critique of Cybernetics by Jonas)
Everyone should subscribe to the Future of Coding (recently renamed to the Feeling of Computing) podcast if you haven't already: https://feelingof.com/
On the internet you can learn from and sometimes interact with the best of the best, so the bar for what constitutes an "expert" is raised much higher.
No real value in this comment, I'm just happy to share a moment over the brain-fuck that is Prolog (ironically, Brainfuck made a whole lot more sense).
Read about why programming languages have the structures they have. Challenge them. They are full of mistakes. One infamous example is the "final" keyword in Java. Or, for example, Python's list comprehension. There are better solutions to these. Be annoyed by them, and search for solutions. Read also about why these mistakes were made. Figure out your own version which doesn't have any of the known mistakes and problems.
The same goes for "principles" or rules of thumb. Read about the reasons, and break them when the reasons don't apply.
And use a ton of programming languages and frameworks. And not just at Hello World level, but really dig deep into them for months. Reach their limits, and ask why those limits are there. As you encounter more and more, you will be able to reach those limits quicker and quicker.
One very good language for this, I think, is TypeScript. Compared to most other languages, its type inference is magic. Ask why. The good thing about it is that its documentation explains why other languages cannot do the same. Its inference routinely breaks on edge cases, and they are well documented.
Effective C++ and Effective Modern C++ were also eye-openers for me more than a decade ago. I can recommend them for these purposes. They definitely helped me lose my "junior" flavor. They explain the reasons quite well, as far as I remember.
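As a tiny, hedged illustration of what that inference looks like (a sketch of well-known TypeScript behaviour, not an excerpt from its docs):

```typescript
// Contextual typing: `s` is inferred as `string` from the array's
// element type, so `.length` type-checks with no annotation anywhere.
const lengths = ["a", "bb", "ccc"].map(s => s.length); // inferred as number[]

// Literal inference: `as const` keeps the exact literal types instead
// of widening the object to `{ x: number; y: number }`.
const point = { x: 1, y: 2 } as const; // { readonly x: 1; readonly y: 2 }
```

Asking *why* most other languages can't flow types around like this (contextual typing in both directions, literal widening rules) is exactly the kind of question the parent is recommending.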
Math and logic are closer to a basis for software abstraction - but they were scary to business people, so a "fake language" was invented atop them - you have "objects" that don't actually exist as objects, they are just "type-based dispatch/selection mechanisms for functions", and "classes" that are firstly "producers of things and holders of common implementation" and only secondarily also work to "group together classes of objects".
I've always had trouble internalizing the "physics" of physics or chemistry, as if it were all super arbitrary and there was no order to it.
Computation and maths on the other hand just click with me. Philosophy as well btw.
I guess I deal better with handling completely abstract information and processes and when they clash with the real world I have a harder time reconciling.
Even the most verbose specifications too often have glaring ambiguities that are only found during implementation (or worse, interoperability testing!)
To keep it all in a clump
Than spread it about
In practice, it isn't.
Very cool
Who wrote emails in haiku
It got old quickly
....
Sorry, I couldn't resist!!
My guy LeCun believes in deterministic systems describing reality even more than LLMs do. He is literally a symbolic-logic die-hard.
I agree so much with this. It's why I feel so stifled when an e.g. product manager tries to insulate and isolate me from the people who I'm trying to serve -- you (or a collective of yous) need to have access to both expertise in the domain you're serving, and expertise in the method of service, in order to develop an appropriate and satisfactory solution. Unnecessary games of telephone make it much harder for anyone to build an internal theory of the domain, which is absolutely essential for applying your engineering skills appropriately.
Product designers have to intuit the entire world model of the customer. Product managers have to intuit the business model that bridges both. And on and on.
Why do engineers constantly have these laughably mind-blowing moments where they think they are the center of the universe?
I still vividly remember reading a z80 instruction set manual on a rainy day during summer vacation by a lake as a kid (maybe 14?)--writing my own assembly by hand in the margins for fun. TBH I probably still have that exact manual in storage somewhere. Had a green stripe down the front edge/binding iirc.
Back then I easily met folks like myself out there on the net, including many kids younger and smarter than me. It was awesome.
I do hope that some form of that 'net lives on in spirit somehow, given that the Internet I knew has largely fallen to corporate interests.
Now that I have my own kids, it's been painful to watch them have such an utterly different experience than I did.
Their Internet is based entirely on consumption and dark patterns designed to capture their attention, while providing nothing (to them) in return besides dopamine addiction and body dysmorphia.
The problem is, as is evident by this article and thread, it's difficult to measure (and thus communicate) expertise, but it's really easy to measure years of experience.
§01
When I join a team there are two kinds of senior developers I meet.
The first kind says things like:
“I found this new tool and it’s pretty cool ...”
“This company <company totally unlike the one we’re in> does things this way, so …”
“Here, look at this HackerNews post that says this is best practice, we should probably …”
I don’t like this kind of senior developer. A little self-protective, lots of time spent in the industry, probably a good people person.
But not my wavelength.
Then there’s also this kind of senior developer:
“Do we really need that?”
“What happens if we don’t do this?”
“Can we make do for now? Maybe come back to this later when it becomes more important?”
Ah, baby, this is my senior developer. The avoider, the reducer, the recycler. They want to avoid development as much as they can.
Why? Because they hunt a singular monster in professional software development: complexity.
Special cases, if conditions, new database tables, new components. All yuck yucks. The senior developer wants as little of this as possible, spending lots of time making sure they absolutely need to add more code.
Because adding to a system is risking more complexity.
Yes, yes, of course this is simplistic. There are senior developers who excel at taking on unsolved problems and finding new creative designs.
But eventually, if you’re taking responsibility for a working system, you’re scared of complexity.
Now, why is that? What’s the downside of complexity? And why doesn’t anybody else get it?
§02
We’re going to be simplifying what a business is using two loops.
This is the first loop; marketers, salespeople, product managers, the CEO, they all live here:

The main goal of this loop is to try and learn. The business wants to take things to market and then get feedback on whether they’ve got something valuable or not.
The monster, for people in this loop, is uncertainty.
And uncertainty is cruel because no strategy is guaranteed to work. When combined with time (compensation for marketing/sales, or payroll for founders, or data for product managers) it can feel like taking things to market as fast as possible is the only way to reduce uncertainty before a deadline. The more you can take to the market, the more you can get feedback from it, the more you can (potentially) reduce uncertainty.
This loop, and all companies start with this loop, is about pure, raw, speed.
But what happens when a business gets customers?
§03
Ah, now, here’s our second loop. People paying for a service.

This is the loop where a lot of senior developers find themselves. The main goal in this loop is the continuation and guarantee of service.
Keep things working, keep things understandable, keep things debuggable, keep things fixable, keep things teachable, keep things stable.
Senior developers worry about stability because they take responsibility for the business to continue serving customers.
And what risks all of that?
Complexity.
It makes a system less understandable, less debuggable, less fixable, less teachable, and ultimately, less stable.
Rising complexity = lowering stability = senior developer failing responsibility = bad bad not nice, payments interrupted, everybody sad.
So, if the first loop’s goal was uncertainty reduction, the second loop’s goal is complexity management.
But why does this lead to communication failure?
Because once you have customers, both loops are running simultaneously. A business needs to both explore possibilities and serve customers at the same time.

Ok, now you might be able to spot my answer to the question in the title of this post.
Depending on which loop you spend your time on, your problem is framed differently (which is why I think developers get split in their opinions on AI; some work more on one loop than the other).

This was the story of the people in the first loop:

But this was the story of the senior developer in the second loop:

The stories don’t match.
The more requests to build and add to the system the senior developer gets, the more the senior developer wants to respond with “uhhh, no complexity … maintenance costs … understandability … speed of continuing development … productivity over time …”.
But that does nothing to address the rest of the business’s need for reducing uncertainty.
The copywriter’s diagnosis: You can’t explain away someone else’s problem using your own problems.
And the copywriter’s prescription: You need to describe your solution as a solution to their problem as well.
Senior developers fail to communicate because they express their problems in terms of complexity management when they should be expressing their solutions in terms of uncertainty reduction.
By acknowledging that what the rest of the company is seeking is uncertainty reduction, the senior developer can use their expertise to help.
And what’s the most useful skill a senior developer has? The reluctance to build what’s not necessary; the ability to spot an opportunity to re-use something already built.
Need to collect survey data? Google forms, baby.
Need to build a whole new feature to test it? Have you tried putting a button in the existing UI and seeing if people click it?
Need new analytics service? What’s the most important decision we need analytics for? Can we start with one decision, one chart, one metric?
You want to bake me a whole birthday cake? Just put a candle on my sandwich.
This is what senior developers learn to do: they learn how to give people what they want by being resourceful with existing software.
But how do you communicate this without sending people whole essays?
Copywriters love boiling down multiple signals into singular phrases. And so, here’s the magical phrase every senior developer must learn: ‘Can we try something quicker?’
The use of ‘quicker’ acknowledges what they’re really looking for; ‘something’ implies another way of achieving it; ‘try’ implies imperfection, but also the possibility of it being good enough.
It perfectly cuts down to the requirement of the rest of the company, speed to reduce uncertainty, while allowing the senior developer to exercise their expertise: reduce, re-use, and if life is truly a blessing, avoid.
That’s it. That’s my answer to the title of the post: senior developers talk in terms of complexity when everyone else is worried about uncertainty.
But! Big but!
AI now seems to make all of this pointless, doesn’t it? Why reduce? Why re-use? Why avoid? The AI can build so much in so little time.
Ah, well, it can’t yet do the one thing senior developers still do.
Take responsibility.
§04
Senior developers care a lot about understanding the system because understanding allows fixing it when things go wrong. It allows extending it intelligently when the system needs to grow. It allows, more than anything, the continued, reliable servicing of paying customers.
AI threatens this understandability. It is incredible at improving the speed of taking things to the market, but it also affects the other loop, the one the senior developers are responsible for.
If you have a bunch of AI agents, junior developers, non-developers, and your investors and their mothers adding code into the system, you get a system that overcompensates for speed by giving up stability.
This was the business in two loops:

And this is how AI affects the two loops:

Forget maintaining stability, AI is a downright destabilizer. It worsens understandability, fixability, debuggability, teachability, guaranteability, all the bloody bilities.
AI does this and takes no responsibility.
Not nice. This is the senior developer’s main worry that’s being brushed away.
Luckily, senior developers have a few tricks up their sleeve.
Namely: decoupling.
For the longest time, software developers were the only ones who could build software. They were responsible for both loops.

That’s one system supporting two goals.
What if we had two systems, one for each goal?
An analogy: a fiction writer rushes to complete a first draft (often called a vomit draft) and later extracts what’s working and gets rid of what’s not. There’s an editing process after the first initial rapid write. The editor’s job is to take the bits that are working well and shape it all into a cohesive whole.
What if we had one system just for speed? Everyone focused on bringing things to life could work here. AI agents, our own generated and unreviewed code, junior devs, marketing etc.
We could call this the ‘Speed’ version of the system. It’s not meant to be understandable, the goal is getting things good enough to take it to the market for feedback.
And then what if we had a second system focused on stability?
We could call this the ‘Scale’ version of the system. It’s designed by senior developers to be stable, understandable, and scalable.
The ‘Speed’ version allows the rest of the business to continue learning from the market, as the senior developers build a trailing version of the system that’s well-reviewed and understandable.
Plus, the design of the 'Scale' version is influenced by what works and what doesn't in the 'Speed' version of the system.

Features get built on ‘Speed’ but then stabilized on ‘Scale’.
What this looks like in practice might be unclear, but the idea is to have a well-communicated de-coupling that explains that there’s a difference between going for speed and going for stability.
Imagine you get asked to build something ambitious, and you say:
“Sure, I’ll have the Speed version ready in 3 days. Then the Scale version in about 6 weeks.”
They get what they want, speed and momentum. You get what you want, observation and design.
Maybe?
Your thoughts, senior software developer?
Or should I say, senior software editor?
I do not think OOP ever really worked out well, as evidenced by it no longer being as popular and people having almost entirely abandoned "Cat > Animal > Object" inheritance hierarchies.
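For what it's worth, the direction most codebases went instead is composition over deep hierarchies. A minimal TypeScript sketch (names are hypothetical, purely for illustration) of a `Cat` modelled as composed capabilities rather than an inheritance chain:

```typescript
// Small capability interfaces instead of a Cat > Animal > Object chain.
interface Named { name: string }
interface Vocal { speak(): string }

// A cat is just an object composed of the capabilities it needs;
// the intersection type Named & Vocal replaces the class hierarchy.
function makeCat(name: string): Named & Vocal {
  return { name, speak: () => `${name} says meow` };
}

const felix = makeCat("Felix");
// felix.speak() returns "Felix says meow"
```

Anything else that is `Named & Vocal` (a dog, a parrot, a smart speaker) slots into the same code paths without sharing an ancestor class.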
Software people do what they do better than anyone else. I mean obviously! Just listening to a non-software person discuss software is embarrassing. As it should be.
There's something close to mathematics that SWEs do, and yet it's so much more useful and economically relevant than mathematics, and I believe that's the bulk of how the "center of the universe" mindset develops. But they don't care that they're outclassed by mathematicians in matters of abstract reasoning, because they're doers and builders, and they don't care that they're outclassed by people in effective but less intellectual careers, because they're decoding the fundamental invariants of the universe.
I don't know. I guess I care so much because I can feel myself infected by the same arrogance when I finally succeed in getting my silicon golems to carry out my whims. It's exhilarating.
Another facet of this is my annoyance at other developers when they are persistently incurious about the domain. (Thankfully, this has not been too common.)
I don't just mean when there are tight deadlines, or there's a customer-from-heck who insists they always know best, but as their default mode of operation. I imagine it's like a gardener who cares only about the catalogue of tools, and just wants the bare-minimum knowledge to deal with any particular set of green thingies in the dirt.
If the programmer gets to intimately understand the user's experience, the software will be easier to use. That's why I support the idea of engineers taking support calls on rotation to understand the user.
Both can be true at the same time, a product manager who retains the big picture of the business and product, and engineers who understand tiny but important details of how the product is being used.
If there were indeed perfect product managers, there would be no need for product support.
Similarly, by siloing the world model in one or two heads, you disable the team dynamics from contributing to building a better solution: eg. a product manager/designer might think the right solution is an "offline mode" for a privacy need without communicating the need, the engineering might decide to build it with an eventual consistency model — sync-when-reconnected — as that might be easier in the incumbent architecture, and the whole privacy angle goes out the window. As with everything, assuming non-perfection from anyone leads to better outcomes.
Finally, many software engineers are the creative type who like solving customer problems in innovative ways, and taking that away in a very specialized org actually demotivates them. Many have worked in environments where this was not just accepted but appreciated, and I've seen it lead to better products built _faster_.
Edit: The main role of the PM is to decide which features to build, not how those features should be built or how they should work. Someone has to decide what to build, and that is the PM, but most PMs are not very good at figuring out the best way for those features to work, so it's better if the programmers can talk to users directly there. Of course a PM could do that work if they are skilled at it, but most PMs won't be.
So that we're on the same page, what I think should be PM responsibilities:
If I have a user story: "As a customer I want to purchase a product so that I can receive it at my address" - PM defines this user story as they have insight to decide if such feature is needed.
The PM should then define acceptance criteria: "Given customer is logged in, When they view the Product page, Then the 'Add product to basket' button should appear", "Given the 'Add product to basket' button, When customers click on it, Then the Product information modal should appear", etc. The PM should know what users actually want, i.e. whether modals should appear or not; whether this feature should be available to logged-in users only, or not.
How this will work shouldn't matter to PM; these are AC they've defined.
Of course the process of defining AC should involve developers (and QA), because the AC should be exhaustive for delivering the given feature.