I said I’d move them to Google Sheets. There was about five minutes of awkwardness after that, as I was interviewing for a software developer position. I was supposed to talk about what kind of tool I’d build.
I found it kind of eye opening but I’m still not sure what the right lesson to learn was.
So now you get Engineer B's output even faster, with even more impressive-sounding abstractions, and the promotion packet writes itself in minutes too. Meanwhile the actual cost - debugging, onboarding, incident response at 3am - stays exactly the same or gets worse, because now nobody fully understands what was generated.
The real test for simplicity has always been: can the next person who touches this code understand it without asking you? AI-generated complexity fails that test spectacularly.
In many ways, the Door Desk award was for simplicity. I remember, one time, someone got an award for getting rid of some dumb operations room with some big unused LCD TVs. When you won these awards, you rarely got any kind of reward. It was just acknowledgement at the meeting. But that time, they literally gave the guy the TVs.
"Reduced incidents by 80%", "Decreased costs by 40%", "Increased performance by 33% while decreasing server footprint by 25%"
Simplicity for its own sake is not valued. The results of simplicity are highly valued.
I guess it may be important to underscore the value that simplicity provides over needless complexity, or to sell people on the value of problems prevented rather than preventable catastrophes that were heroically dealt with.
Maybe we could encourage a culture of patting people on the back for maintaining reliable "boring" systems.
Some people also crave "drama", so there might be a way to frame "boring reliability" as some kind of "epic daily maintenance struggle that was successfully navigated".
Instead you talk about how you complete all your tasks and have so much bandwidth remaining compared to all your peers: the beneficial results of simplicity. Being severely underused while demonstrating the ability to do 2x-10x more work than everybody else is what gets you promoted.
In this vein, simplicity is like hard work. Nobody gives a shit about hard work either. Actually, if all you have to show is how hard you work, you are a liability. Instead it's all about how little you work, provided that you accomplish the same as, or more than, everybody else.
In fact, simplicity often is the best future-proofing. Complex designs come with maintenance costs, so simple designs are inherently more robust to reorgs, shifted priorities, and team downsizing.
It's not just the most "elaborate system". The same thing happens in so many other ways. For example, a good/simple solution is one and done, whereas a complex one will be an interminable cause of indirect issues down the road, with the second engineer being the one fixing them.
Then there's another pattern of the 10x engineer (not the case with all 10x-ers) seeding or being asked to "fix" other projects, then moving on to the next, leaving all the debt to the team.
It's really an amazing dynamic that can be studied from a game-theoretic perspective. It's perhaps one of the adjacent behaviors that support the Gervais Principle.
It's also likely going to be over soon, now that AI is normalizing a lot of this work.
As a consultant/contractor I always evangelise simplification and modelling problems from first principles. I jump between companies every 6-12 months, cleaning up after years of complexity-driven development, or outright designing robust systems that anybody (not just the author) can maintain and extend.
This level of honesty helps you build a reputation. I am never short for work. I also bill more than I could ever as a full-time engineer based in Europe.
I built a showback model at a prior org. Re-used shelfware for the POC, did the research on granular costs for storage, compute, real estate, electricity, HVAC maintenance, hardware amortization, the whole deal. Could tell you down to the penny how much a given estate cost on-prem.
Simple. Elegant. $0 in spend to get running in production, modest spend to expand into public cloud (licensing, mainly). Went absolutely nowhere.
Got RIFed. On the way out the door, I hear a whole-ass team did the same thing, using additional budget, with lower confidence results. The biggest difference of all? My model gave you the actual cost in local currency, theirs gave you an imagined score.
The complexity (cost of a team, unnecessary scoring) was rewarded, not the simplicity.
At core, complexity is derived from discovery of demand within those pesky complex humans.
Simplicity is the mechanism of finding common pathways within the mess of complexity of a product.
The tragedy is that simplicity is very expensive and beyond most organizations' ability to support (especially since it can slow down demand discovery), and this is one of the allures of big tech for me. I was greatly rewarded and promoted for achieving simplicity within infrastructure.
More than once I have seen the same project yield two separate promotions, for creating it and deleting it. In particular this happens when the timescale of projects is longer than a single engineer's tenure on a given team.
But yes, avoiding complexity is rarely rewarded. The only way in which it helps you get promoted is that each simple thing takes less time and breaks less often, so you actually get more done.
Most of my engagements consisted of replacing politics-driven complications with simple solutions.
The bigger problem was first quietly showing all the affected people other interesting things that needed doing so they would let go.
And TBH, the simple stuff lasted the longest, because it was harder to misunderstand or misrepresent.
Software dev's tendency to build castles is great for technical managers who want to own complex systems to gain organizational leverage. Worse is better in this context. Even when it makes people who understand cringe.
You would think that things not breaking should be career-positive for SysAdmins, SREs, and DevOps engineers in a way it cannot be for software devs. But even there simplicity is hard and not really rewarded.
Unix philosophy got this right 50 years ago — small tools, composability, do one thing well. Unix reimagined for AI is my attempt to change that.
It's often simpler to build something you know than to integrate a 3rd party service, but it's highly frowned upon by a lot of devs and management.
Auth and analytics are things I'm thinking of - we have good tools to build these in-house. Also just running a database - never seen so many people afraid of installing postgres and a cronjob to back it up.
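For what it's worth, the "cronjob to back it up" half of that really is small. Here's a minimal sketch in Python, meant to be invoked nightly from cron; the database name, backup directory, and retention count are all placeholder assumptions:

```python
# Minimal nightly pg_dump wrapper, e.g. run from cron with:
#   0 3 * * * /usr/bin/python3 /opt/scripts/pg_backup.py
# DATABASE, BACKUP_DIR and KEEP are illustrative placeholders.
import subprocess
from datetime import datetime, timezone
from pathlib import Path

BACKUP_DIR = Path("/var/backups/postgres")
DATABASE = "app"
KEEP = 14  # retain two weeks of daily dumps

def dump_path(now: datetime) -> Path:
    # One dump per day, named by date so sorting by name sorts by age.
    return BACKUP_DIR / f"{DATABASE}-{now:%Y%m%d}.dump"

def run_backup(now: datetime) -> None:
    BACKUP_DIR.mkdir(parents=True, exist_ok=True)
    # -Fc: compressed custom format, restorable with pg_restore.
    subprocess.run(
        ["pg_dump", "-Fc", "-f", str(dump_path(now)), DATABASE],
        check=True,
    )
    # Prune everything beyond the newest KEEP dumps.
    for old in sorted(BACKUP_DIR.glob(f"{DATABASE}-*.dump"))[:-KEEP]:
        old.unlink()
```

That's the entire "backup strategy" for a lot of systems, and it's auditable in one screenful.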
I'm currently building a full-blown OpenAPI toolchain at work, where the OpenAPI document itself is the AST. It contains passes for reference inlining, document merging, JSON Schema validation, C++ code generation and has further plans for data model bindings, HTML5 UI...
Why? Because I'm working on a new embedded system which has a data model so complex, it blew past 10k lines of OpenAPI specifications with no end in sight. I said "ain't no way we're implementing this by hand" and embarked on the mother of all yak shavings.
I want all of the boilerplate/glue code derived from a single source of truth: base C++ data classes, data model bindings, configuration management, change notifications, REST API, state replication for device twins and more. That way we can focus on the domain logic instead, which is already plenty complex on its own.
I'm not designing all of this to be simple to develop. I'm designing it so that it's simple for the developers. Even with the incomplete prototype I have currently, the team is already sold ("you mean I just write the REST API specification and it generates all of the C++ classes for me to inherit?"). The roadmap of features for that toolchain is defined, clear and purposeful: to delete mountains of menial, bug-prone source code before it is ever written by hand.
Sometimes, it takes complexity to deliver simplicity. The trick is to nail the abstractions in-between.
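To make the "OpenAPI document itself is the AST" idea concrete, here is a toy version of one such pass (reference inlining) in Python. The real toolchain described above is far more involved, and the schema names here are made up; cycles and external refs are out of scope for the sketch:

```python
# Toy "reference inlining" pass over an OpenAPI document: local "$ref"
# pointers like "#/components/schemas/Pet" are replaced by the schema they
# point to, so later passes (e.g. code generation) see a self-contained tree.

def resolve_pointer(doc: dict, ref: str):
    """Follow a local JSON pointer like '#/components/schemas/Pet'."""
    assert ref.startswith("#/"), f"only local refs supported: {ref}"
    node = doc
    for part in ref[2:].split("/"):
        node = node[part]
    return node

def inline_refs(node, doc):
    """Recursively replace {"$ref": ...} nodes with their targets."""
    if isinstance(node, dict):
        if set(node) == {"$ref"}:
            return inline_refs(resolve_pointer(doc, node["$ref"]), doc)
        return {k: inline_refs(v, doc) for k, v in node.items()}
    if isinstance(node, list):
        return [inline_refs(item, doc) for item in node]
    return node

spec = {
    "components": {"schemas": {"Pet": {
        "type": "object",
        "properties": {"name": {"type": "string"}},
    }}},
    "paths": {"/pets": {"get": {"responses": {"200": {
        "content": {"application/json": {
            "schema": {"$ref": "#/components/schemas/Pet"},
        }},
    }}}}},
}
inlined = inline_refs(spec, spec)
```

Each pass stays small and testable on its own; the complexity lives in the composition, which is exactly the point of the comment above.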
Also survivorship bias is a very real thing (problem prevented is ignored, while problem solved is appreciated regardless of who and why caused it).
The general way to handle this as an interviewer is really simple: acknowledge that the interviewee gave a good answer, but ask that for the purposes of evaluating their technical design skills that you'd like for them to design a new system/code a new implementation to solve this problem.
If the candidate isn't willing to suspend disbelief for the exercise, then you can consider that alongside all of the other signals your interview team gets about the candidate. I generally take it as a negative signal, not because I need conformance, but because I need someone who can work through honest technical disagreements.
As a candidate, what's worked for me before was to ask the interviewer if they'd prefer that I pretend ____ doesn't exist and come up with a new design, but it makes me question whether I want to join that team. IMO it's the systems design equivalent of the interviewer arguing with you about your valid algorithm because it's not the one the interviewer expects.
He got the prompt, asked questions about throughput requirements (etc.), and said, “okay, I’d put it all in Postgres.” He was correct! Postgres could more than handle the load.
He gets a call from Patrick Collison saying that he failed the interview and asking what happened. He explained himself, to which Patrick said, okay, well yes you might be right but you also understand what the point of the interview is.
They made him do it again and he passed.
I get that "communicate your thought process" or "play along with the exercise" gets offered as the fix here. But that framing bothers me too. Why should simplicity require more justification than complexity? Google Sheets is the design. The fact that it doesn't sound like engineering is the whole point.
I'm just not convinced the solution is learning to package simplicity in a more impressive wrapper. I realize this is part of an interview game, but perhaps the best response is still to ask why this is a problem in the first place before you launch into a technical masterpiece. Force the counterparty to provide specific (contrived) reasons that this practice is not working for the business. Then, go about it like you otherwise would have.
In the hands of an experienced developer/designer, AI will help them achieve a good result faster.
In the hands of someone inexperienced, out of their depth, AI will just help them create a mess faster, and without the skill to assess what's been generated they may not even know it.
Too often the smallest changeset is, yes, simple, but totally unaware of the surrounding context, breaks expectations and conventions, causes race conditions, etc.
The good bit in TFA is near the end:
> when someone asks “shouldn’t we future-proof this?”, don’t just cave and go add layers. Try: “Here’s what it would take to add that later if we need it, and here’s what it costs us to add it now. I think we wait.” You’re not pushing back, but showing you’ve done your homework. You considered the complexity and chose not to take it on.
The longest-lived projects and solutions I've worked on have always been the simplest, easiest-to-replace solutions. Often derived from simple test scenarios, or solutions that just work and get shifted over without much re-work.
What's somewhat funny is that AI code assistants have actually helped a lot with continuing this approach for me... I can create a standalone solution to work on a library or component, work through the problems, then copy the component/library into the actual work project it was for. I'm in a really locked-down internal environment, so using AI for component dev happens on my own hardware... but the resulting work is a piece that can be brought in as-is. No exposure of internal data/resources.
I don't think I'll have a level of trust to "one-shot" or vibe code solutions from AI, but leveraging the ability to spin up a project as a sample to test a component/library is pretty great to say the least.
In the FAANGs I've worked at, engineers who come from scrappy companies and implement hacks (like the example of emailing spreadsheets around) undermine the business and cost the productivity of thousands of people.
However, at the startups I've worked at, when folks from big companies try to implement a super complex thing (e.g. exotic databases, overly ambitious infrastructure), the results are equally catastrophic for a company attempting to bootstrap, because the complexity is so far removed from their core business.
What makes an experienced engineer is recognizing both states, understanding what works where, and making the right trade-offs, usually from experience you can't fake your way through. I've seen a lot of projects that took 10-20 engineers 18 months so we could sell something that landed a $100M contract with a customer. You see that enough times and you won't bias as hard against complexity. But of course it's situation dependent, like anything.
Slightly related: I've noticed that there are lots of "ideas guys" (yes, guys) in our field who love to bloviate, and maybe even accomplish some stuff that looks really good. I have made a career out of just putting my head down and getting shit done. I may not have grand design ideas, and in fact have had to unlearn the "fact" that you need to come up with, and implement, Big Ideas. In my experience, people who "get shit done" may not get fancy awards, but their work is recognized and rewarded.
I had to stop trying to prove myself to the company. I have already done that when y'all interviewed me. Now I do what's best for everyone, and I want the company to prove to me that it deserves people who do the right thing despite the processes not valuing it. If it does not, I have enough resources to spend some time on the projects I cared about most.
This mentality gave me peace of mind and helped many people in partner teams go faster and with higher quality.
Management still does not openly appreciate it, but it shows in how they interact with me. Like when you learn to talk to your parents as equals. It's unexpected for them, but they quickly embrace the new interaction and they love it much more than the one before.
BUT, at least very occasionally I have seen people get promoted for simplicity, I've even successfully made the case myself. With a problem that was itself so complex that it was causing fires on a regular basis, and staff & principal engineers didn't want to touch it with a ten foot pole. When a senior eng spent a couple of weeks thinking about the problem and eventually figured out a way to reframe it and simplify the solution, melting away months of work, making the promo case was actually quite easy.
The problem is, the opportunities to burn down complexity like that don't present themselves nearly as often as the opportunities to overcomplicate a thing, which are pretty much unbounded.
The sad thing is that it is common to get fired when you make things better, because then your work is perceived as "done" and your skills are no longer necessary. There are countless IT job stories where someone delivers a technical solution that saves a company a ton of money or generates revenue, then gets fired because the solution has been delivered and an offshore team has been hired to maintain the work.
Big corps suck.
Edit -- reading some of the responses here on this topic, and they are... eye-opening and depressing.
Rarely have I seen an actually straightforward, simple feature that can be done in a day used as the basis to spin up a 3-week mini-project with a complicated new architecture.
What happens is a complicated-sounding feature is requested, and some devs will just take it literally and implement it, maybe along with some amount of "future-proofing" that logically follows because the spec is poorly scoped.
Other devs will spend some time thinking about it, realize that there is really a simple requirement at the core of the request, and that it only sounds complex because it was vaguely specified (or maybe these days an LLM was used to write the spec, and its vomit was copy-pasted verbatim). They just implement the simple thing that is the actual need.
This is one place where the agile/scrum practice of planning poker can help. You get a few smart people in a room, and discuss the story and its requirements. Hopefully someone will throw out a low number of points and say "isn't this simply asking for..."
Over my career, most of the complicated code I have written is no longer running. What is still running after 10 years? Postgres (or more broadly, a relational database). Fad frameworks or architectures come and go pretty quickly, they don't end up working as advertised, and it's on to the next thing. I no longer want to spend time in this hamster wheel churning out complicated code that will only end up as next year's tech debt.
Instead I went to the hardware store across the street and bought the biggest (and cheapest) screwdriver I could find and attached it with some cord to the HSM. They never lost it afterwards.
— C. A. R. Hoare
The irony is that the elaborate design usually handles those hypotheticals incorrectly anyway, because you can't predict real requirements from imagination. The simple version gets modified when real feedback arrives, and the modifications are cheaper because there's less architecture to work around.
It could be something overbuilt, or large organization structures. Brittle solutions that are highly performant until they break. Or products/offerings that don't grow, for similar reasons: simpler-is-better, don't compete with yourself. Or those that grow the wrong way: too many offerings, too much to manage, frailty through complexity, SKU confusion.
Alternatively, things that are allowed to grow with some leeway, some caution, and then pruned back.
There's failure modes in any of these but the one I see most often is overreaching concern for any single one.
> But for Engineer A’s work, there’s almost nothing to say. “Implemented feature X.” Three words. Her work was better. But it’s invisible because of how simple she made it look. You can’t write a compelling narrative about the thing you didn’t build. Nobody gets promoted for the complexity they avoided.
Well, Engineer A's manager should help her write a better version of her output. It's not easy, but it's their job. And if this simpler solution was actually better for the company, it should be highlighted how, in terms that make sense for the business. I might be naive and too optimistic, but good engineers with decent enough managers will stand out in the long run. That doesn't exclude that a few "bad" engineers can game their way up at the same time, even in functional organizations, though.
The environment where the over-engineer tends to be promoted is one where the engineering department is (too) far separated from the end users. Think of very large organizations with walled departments, or organizations where there simply is not enough to do, so engineers start to build stuff to fight non-existent issues.
Sometimes we'll ask market sizing questions. We will say it's a case question, it's to see their thought process, they're supposed to ask questions, state assumptions, etc.
Occasionally we'd get a candidate that just doesn't get it. They respond "oh I'd Google that". And I'll coach them back but they just can't get past that. It's about seeing how you approach problems when you can't just Google the answer, and we use general topics that are easily accessible to do so. But the downside is yes, you can google the answer.
But given how poorly bought software tends to fit the use case of the person it was bought for... eventually generate-something-custom will start making more and more sense.
If you end up generating something that nobody understands, then when you quit and get a new job, somebody else will probably use your project as context for generating something that suits the way they want to solve that problem. Time will have passed, so the needs will have changed, and they'll end up with something different. They'll also only partially understand it, but the gaps will be in different places this time around. Overall I think it'll be an improvement, because there will be less distance (both in time and along the social graph) between the software's user and its creator -- most of the time, they'll be the same person.
(amzn 94-96)
(when there is a simpler design, in contrast to a more complex "big ball of mud" abomination)
I've noticed the incentive breaks down even further when you consider that simplicity often means saying no upfront, which requires correctly predicting future maintenance costs that nobody has experienced yet. Complexity requires no such foresight. You just build the thing someone asked for.
The orgs where I've seen this actually work had explicit "deleted lines" culture, where removing code was celebrated as loudly as shipping features. Not many places do that. Most treat a 2000-line deletion PR with the same energy as taking out the trash.
One thing engineers can do to fight this, and I think it's mentioned in the article, is to write extensive documentation. Bosses in these companies are too lazy to dig into solutions and figure out for themselves; so they resort to proxies like the number of lines of code, number of pages in the design doc, etc.
Unfortunately, some of us who aim for simplicity are also averse to writing long docs; but with the advent of LLMs, there is some relief in sight.
My career has suffered a lot in terms of promos, etc. because I hate complexity.
As an interviewee it’s important to try to identify whether the group you’re interviewing with operates this way, literally: how will they get the money to pay for your salary? That way you avoid giving non-starter answers to interview questions.
That's regardless of the lip service they pay to cost cutting or risk reduction. It will only get worse, in the AI economy it's all about growth.
I feel/am way more productive using chatgpt codex and it especially helps me getting stuff done I didn't want to get started with before. But the amount of literal slop where people post about their new vim plugin that's entirely vibecoded without any in-depth thinking about the problem domain etc. is a horrible trend.
The answer to this is almost always "NO" in my experience, because no one ever actually has good suggestions when it comes up. It's never "should we choose a scalable compute/database platform?" It's always "should we build a complex abstraction layer in case we want to use multiple blob storage systems that will only contain the lowest common denominator of features of both AND require constant maintenance AND have weird bugs and performance issues because I think I'm smarter than AWS/Google AND ALSO we have no plans to actually DO that?"
/I'm not bitter...
If every company I know does this, how am I supposed to make money?
There are reasons for "unnecessary" complexity. Mainly cost and time.
Rather than trying to anticipate all the different failure modes, one at first tries just to handle the fact of failure itself, assuming there's no remediation.
If there's a way to make sure the worst case isn't terrible in some simple way, then you do that first -- like making a backup file, or trying to keep APIs idempotent so you can recover from issues, and so on.
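A sketch of that worst-case-first instinct in Python: instead of anticipating every way a write can fail, write to a temporary file and atomically rename it into place, so a crash mid-write can never leave a half-written file behind (the function name is mine, not from the thread):

```python
import os
import tempfile

def safe_write(path: str, data: str) -> None:
    """Replace the file at `path` with `data`, all-or-nothing."""
    dirname = os.path.dirname(path) or "."
    # Temp file in the same directory, so the rename stays on one filesystem.
    fd, tmp = tempfile.mkstemp(dir=dirname, prefix=".tmp-")
    try:
        with os.fdopen(fd, "w") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())   # data on disk before the rename
        os.replace(tmp, path)      # atomic: readers see old or new, never half
    except BaseException:
        os.unlink(tmp)             # worst case: we leave the old file intact
        raise
```

One small function, and the worst case is now "the update didn't happen", which is usually recoverable without any elaborate failure-mode taxonomy.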
And complexity doesn't always sell better. A lot of times it might look like the whole thing is messy, or too hard to maintain, tech burden nightmare. Things that are simple might look complex and vice versa too, I think anyone who had to implement requirements from people who don't understand the implementation complexities will know this very well.
So the way we write software piecemeal today is fundamentally broken. Rather than starting with frameworks and adding individual packages, we should be starting with everything and let the compiler do tree shaking/dead code elimination.
Of course nobody does it that way, so we don't know what we're missing out on. But I can tell you that early in my programming journey, I started with stuff like HyperCard that gave you everything, and I was never more productive than that down the road. Also early C/C++ projects in the 80s and 90s often used a globals.h header that gave you everything so you rarely had to write glue code. Contrast that with today, where nearly everything is just glue code (a REST API is basically a collection of headers).
A good middle ground is to write all necessary scaffolding up front, waterfall style, which is just exactly what the article argues against doing. Because it's 10 times harder to add it to an existing codebase. And 100 times harder to add it once customers start asking for use cases that should have been found during discovery and planning. This is the 1-10-100 rule of the cost of bugs, applied to conceptual flaws in a program's design.
I do miss seeing articles with clarity like this on HN though, even if I slightly disagree with this one's conclusions after working in the field for quite some time.
This was the model at my last job. The "director" of software had strong opinions about the most random topics and talked about them like they would be revolutionary. His team was so far from the product teams they would just build random crap that was unhelpful but technically demo'ed well. Never put into practice. Promoted for 4 years, then fired.
There's a significant asymmetry though, it's not just a bit more work. I'm a bit cynical here, but often it's easier to just overengineer and be safe than to defend a simple solution; obviously depending on the organization and its culture.
When you have a complex solution and an alternative is stacked up against it, everything usually boils down to a few tradeoffs. The simple solution is generally the one with the most tradeoffs to explain: why no HA, why no queuing, why not horizontally scalable, why no persistence, why no redundancy, why no retry, etc. Obviously not all of them will apply, but just as obviously, the optics of the extensive questioning will hinder any promotion even if you successfully justify everything.
Simpler than what? The reason this phenomenon is so pervasive in the first place is that people can’t know the alternatives. To a bystander (ie managers), a complex solution is proof of a complex problem. And a simple solution, well anyone could have done that! Right?
If we want to reward simplicity we have to switch reference frame from output (the solution), to input (the problem).
These days if I were interviewing someone and they said, "I'd use the simple solution that is fairly ubiquitous", I'd say, "yes! you've now saved yourself tons of engineering hours - and you've saved the company eng money".
To be fair, a lot of the on call people being pulled in at 3am before LLMs existed didn't understand the systems they were supporting very well, either. This will definitely make it worse, though.
I think part of charting a safe career path now involves evaluating how strong any given org's culture of understanding the code and stack is. I definitely do not ever want to be in a position again where no one in the whole place knows how something works while the higher-ups are having a meltdown because something critical broke.
The biggest problem is that the next person is me, six months later :) But even when it's not a next-person problem, it's a question of how much of the design I can keep in my mind at a given time. Ironically, AI has the exact same problem, aka the context window.
I like to have something like the following in AGENTS.md:
## Guiding Principles
- Optimise for long-term maintainability
- KISS
- YAGNI
Avoid hands-on tech/team lead positions like hell.
Building a system that's fast on day one will not usually be rewarded as well as building a slow system and making it 80% faster.
My experience is no one really gets promoted/rewarded for these types of things or at least not beyond an initial one-off pat on the back. All anyone cares about is feature release velocity.
If it's even possible to reduce incidents by 80% then either your org had a very high tolerance for basically daily issues which you've now reduced to weekly, or they were already infrequent enough that 80% less takes you from 4/year to 1/year.. which is imperceptible to management and users.
Don't even get me started on the resume-driven development that came along with it.
And maybe I'm completely wrong. This is a perspective of one.
I've only worked at my startup so I can't comment on scale elsewhere, but if our simple architecture can handle 30k requests per second, I think it can handle most other companies scale too.
There's a short film making the rounds that captures this perfectly -- an employee uses AI to generate the quarterly results his whole department was working on, and instead of being promoted, he gets fired: https://youtu.be/O5FFkHUdKyE
The simplicity penalty is even worse when the simplification comes from AI. It's not just "you made us look bad" -- it's "you made our entire team look unnecessary."
Of course, over-simplification is the wrong decision some times, the same as abstraction and complexity is the wrong decision some times...
Your shortcut for promotion is generally building value for the company, but people need to remember that promotions support the business and they aren't free to the company.
Why? We learn all these cool patterns and techniques to address existing complexity. We get to fight T-Rexes... and so we get paid good money (compared to other jobs). No one is gonna pay me 120K in Europe to build simple stuff that can work with a single SQLite db and a PHP frontend.
It was an 8-bit embedded application in something like 10k of code. When I left I generated a short and clear explanation of why what I had done was awesome in terms of their future business... because that is what you have to do if you work contracts. Which is the real message of the article: you have to write things up.
I often end up saying, "I can build this, and I will build this if product insists on it, but first let me suggest an alternative ordering of deliverables that starts with a simple implementation and moves towards this one." In almost every case, that simple implementation is still what's in production years later.
I had this happen in a Google interview. I did back of the envelope math on data size and request volume and everything (4 million daily events, spread across a similar number of buckets) and very little was required beyond that to meet performance, reliability, and time/space complexity requirements. Most of the interview was the interviewer asking "what about" questions, me explaining how the simple design handled that, and the interviewer agreeing. I passed, but with "leans" vs. "strong" feedback.
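For scale, that back-of-the-envelope math is tiny (the 10x peak factor below is my own illustrative assumption, not from the interview):

```python
# 4 million daily events sounds big, but averages out to ~46 per second.
events_per_day = 4_000_000
seconds_per_day = 24 * 60 * 60            # 86,400
avg_rps = events_per_day / seconds_per_day
peak_rps = avg_rps * 10                   # assume a generous 10x peak-to-average
```

At roughly 46 events/second average, almost any single-node design meets the load, which is exactly why the simple answer kept surviving the "what about" questions.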
And yes, I have done this on a second Google interview about 15 years ago.
I do dislike interviews where a candidate can fail simply by not giving a specific, pre-canned answer. It suggests a work culture that is very top-down and one that isn’t particularly interested in actually getting to the truth.
His response: "I can sell a good looking car and then charge them for a better running engine"...
https://www.youtube.com/watch?v=T4Upf_B9RLQ hits a little too close to home.
And at the same time it's impossible to convince tech illiterate people that reducing complexity likely increases velocity.
Seemingly we only get budget to add, never to remove. Also for silver bullets, if Big Tech promises a [thing] you can pay for that magically resolves all your issues, management seems enchanted and throws money at it.
Being obviously pedantic here, I agree with what you meant.
One common example I cite is at one job I owned Kafka and RabbitMQ clusters. Zero consideration was given to message size recommendations and we had incidents on the regular because some application was shoving multi-hundred megabyte messages into RMQ. They'd do other stupid shit like not ack their messages which would cause them to never be removed from local disk. This was a huge org, public company, hiring "only the best and brightest".
Management endlessly just threw more hardware at it rather than make the engineers fix their obviously bad architecture. What a headache. Some companies take the "prioritize engineer happiness" thing right off a cliff.
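The never-acked failure mode described above is easy to see in a toy model. This is an illustrative sketch of the behavior, not real RabbitMQ internals: a delivered-but-unacknowledged message stays in the broker's store (on disk, in the real broker) until something acks it.

```python
# Toy model of a broker's unacked-message store (illustration only,
# not actual RabbitMQ internals).
class ToyBroker:
    def __init__(self):
        self.ready = []      # messages awaiting delivery
        self.unacked = {}    # delivery_tag -> message still held by the broker
        self._next_tag = 0

    def publish(self, msg):
        self.ready.append(msg)

    def deliver(self):
        self._next_tag += 1
        self.unacked[self._next_tag] = self.ready.pop(0)
        return self._next_tag, self.unacked[self._next_tag]

    def ack(self, tag):
        del self.unacked[tag]  # only an ack frees the broker's copy

broker = ToyBroker()
broker.publish(b"x" * 1_000)   # stand-in for a multi-hundred-MB payload
tag, msg = broker.deliver()
# A consumer that processes msg but never acks leaves the copy behind
# forever, which is the disk-bloat incident described above:
assert tag in broker.unacked
broker.ack(tag)                # the fix: ack once processing succeeds
assert not broker.unacked
```

The real fix, of course, is consumers that ack after processing (and sane message-size limits), not more hardware under the broker.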
“Simplicity is a great virtue, but it requires hard work to achieve and education to appreciate. And to make matters worse, complexity sells better.” — Edsger Dijkstra
I think there’s something quietly screwing up a lot of engineering teams. In interviews, in promotion packets, in design reviews: the engineer who overbuilds gets a compelling narrative, but the one who ships the simplest thing that works gets… nothing.
This isn’t intentional, of course. Nobody sits down and says, “let’s make sure the people who over-engineer things get promoted!” But that’s what can happen (and it has, over and over again) when companies evaluate work incorrectly.
Picture two engineers on the same team. Engineer A gets assigned a feature. She looks at the problem, considers a few options, and picks the simplest. A straightforward implementation, maybe 50 lines of code. Easy to read, easy to test, easy for the next person to pick up. It works. She ships it in a couple of days and moves on.
Engineer B gets a similar feature. He also looks at the problem, but he sees an opportunity to build something more “robust.” He introduces a new abstraction layer, creates a pub/sub system for communication between components, adds a configuration framework so the feature is “extensible” for future use cases. It takes three weeks. There are multiple PRs. Lots of excited emojis when he shares the document explaining all of this.
Now, promotion time comes around. Engineer B’s work practically writes itself into a promotion packet: “Designed and implemented a scalable event-driven architecture, introduced a reusable abstraction layer adopted by multiple teams, and built a configuration framework enabling future extensibility.” That practically screams Staff+.
But for Engineer A’s work, there’s almost nothing to say. “Implemented feature X.” Three words. Her work was better. But it’s invisible because of how simple she made it look. You can’t write a compelling narrative about the thing you didn’t build. Nobody gets promoted for the complexity they avoided.
Complexity looks smart. Not because it is, but because our systems are set up to reward it. And the incentive problem doesn’t start at promotion time. It starts before you even get the job.
Think about interviews. You’re in a system design round, and you propose a simple solution. A single database, a straightforward API, maybe a caching layer. The interviewer is like: “What about scalability? What if you have ten million users?” So you add services. You add queues. You add sharding. You draw more boxes on the whiteboard. Only then does the interviewer seem satisfied.
What you just learned is that complexity impresses people. The simple answer wasn’t wrong. It just wasn’t interesting enough. And you might carry that lesson with you into your career. To be fair, interviewers sometimes have good reasons to push on scale; they want to see how you think under pressure and whether you understand distributed systems. But when the takeaway for the candidate is “simple wasn’t enough,” something’s off.
It also shows up in design reviews. An engineer proposes a clean, simple approach and gets hit with “shouldn’t we future-proof this?” So they go back and add layers they don’t need yet, abstractions for problems that might never materialize, flexibility for requirements nobody has asked for. Not because the problem demanded it, but because the room expected it.
I’ve seen engineers (and have been one myself) create abstractions to avoid duplicating a few lines of code, only to end up with something far harder to understand and maintain than the duplication ever was. Every time, it felt like the right thing to do. The code looked more “professional.” More engineered. But the users didn’t get their feature any faster, and the next engineer to touch it had to spend half a day understanding the abstraction before they could make any changes.
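A small, hypothetical Python illustration of that trade: the “engineered” version hides two f-strings behind a configurable helper that every future reader must decode first, while the “duplicated” version is immediately readable. All names here are invented for the example.

```python
# Over-abstracted (hypothetical): a generic formatter every caller
# must learn before they can change anything.
def format_field(value, *, prefix="", suffix="", transform=str, pad=0):
    return f"{prefix}{transform(value).rjust(pad)}{suffix}"

def over_abstracted(user):
    return (format_field(user["name"], prefix="Name: ")
            + ", " + format_field(user["age"], prefix="Age: "))

# "Duplicated" (a few repeated f-strings): longer in aggregate, but
# trivially readable and trivially changeable.
def plain(user):
    return f"Name: {user['name']}, Age: {user['age']}"

user = {"name": "Ada", "age": 36}
assert over_abstracted(user) == plain(user) == "Name: Ada, Age: 36"
```

Both produce identical output today; the difference only shows up when the next engineer has to modify one of them.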
Now, let me be clear: complexity is sometimes the right call. If you’re processing millions of transactions, you might need distributed systems. If you have 10 teams working on the same product, you probably need service boundaries. When the problem is complex, the solution (probably) should be too!
The issue isn’t complexity itself. It’s unearned complexity. There’s a difference between “we’re hitting database limits and need to shard” and “we might hit database limits in three years, so let’s shard now.”
Some engineers understand this. And when you look at their code (and architecture), you think “well, yeah, of course.” There’s no magic, no cleverness, nothing that makes you feel stupid for not understanding it. And that’s exactly the point.
The actual path to seniority isn’t learning more tools and patterns, but learning when not to use them. Anyone can add complexity. It takes experience and confidence to leave it out.
So what do we actually do about this? Because saying “keep it simple” is easy. Changing incentive structures is harder.
If you’re an engineer, learn that simplicity needs to be made visible. The work doesn’t speak for itself; not because it’s not good, but because most systems aren’t designed to hear it.
Start with how you talk about your own work. “Implemented feature X” doesn’t mean much. But “evaluated three approaches including an event-driven architecture and a custom abstraction layer, determined that a straightforward implementation met all current and projected requirements, and shipped in two days with zero incidents over six months.” That’s the same simple work, just described in a way that captures the judgment behind it. The decision not to build something is still a decision, an important one! Document it accordingly.
In design reviews, when someone asks “shouldn’t we future-proof this?”, don’t just cave and go add layers. Try: “Here’s what it would take to add that later if we need it, and here’s what it costs us to add it now. I think we wait.” You’re not just pushing back; you’re showing you’ve done your homework. You considered the complexity and chose not to take it on.
And yes, bring this up with your manager. Something like: “I want to make sure the way I document my work reflects the decisions I’m making, not just the code I’m writing. Can we talk about how to frame that for my next review?” Most managers will appreciate this because you’re making their job easier. You’re giving them language they can use to advocate for you.
Now, if you do all of this and your team still only promotes the person who builds the most elaborate system… that’s useful information too. It tells you something about where you work. Some cultures genuinely value simplicity. Others say they do, but reward the opposite. If you’re in the second kind, you can either play the game or find a place where good judgment is actually recognized. But at least you’ll know which one you’re in.
If you’re an engineering leader, this one’s on you more than anyone else. You set the incentives, whether you realize it or not. And the problem is that most promotion criteria are basically designed to reward complexity, even when they don’t intend to. “Impact” gets measured by the size and scope of what someone built, and scope often does matter. But what someone avoided building should matter just as much.
So start by changing the questions you ask. In design reviews, instead of “have we thought about scale?”, try “what’s the simplest version we could ship, and what specific signals would tell us we need something more complex?” That one question changes the game: it makes simplicity the default and puts the burden of proof on complexity, not the other way around!
In promotion discussions, push back when someone’s packet is basically a list of impressive-sounding systems. Ask: “Was all of that necessary? Did we actually need a pub/sub system here, or did it just look good on paper?” And when an engineer on your team ships something clean and simple, help them write the narrative. “Evaluated multiple approaches and chose the simplest one that solved the problem” is a compelling promotion case, but only if you actually treat it like one.
One more thing: pay attention to what you celebrate publicly. If every shout-out in your team channel is for the big, complex project, that’s what people will optimize for. Start recognizing the engineer who deleted code. The one who said “we don’t need this yet” and was right.
At the end of the day, if we keep rewarding complexity and ignoring simplicity, we shouldn’t be surprised when that’s exactly what we get. But the fix isn’t complicated. Which, I guess, is kind of the point.
It's good at code generation, feature wise, it can scaffold and cobble together shit, but when it comes down to code structure, architecture, you essentially have to argue with it over what is better, and it just doesn't seem to get it, at which point it's better to just take the reins and do it yourself. If there's any code smells in your code already, it will just repeat them. Sometimes it will just output shit that's overly confusing for no reason.
There's always going to be some overlap, wanting to use a new skill/library in a production system, but maybe in general it's best to think of learning and writing/generating production code as two separate things. AI is great for learning and exploration, but you don't want to be submitting your experiments as PRs!
A good rule of thumb might be can you explain any AI-generated design and code as well as if you had written it yourself? If you don't fully understand it, then you are not in a good position to own it and take responsibility for it (bugs, performance, edge case behavior, ease of debugging, flexibility for future enhancement, etc).
The honest opinion no one wants to hear is that programmers do not deserve the money they are paid, because MOST of the time what's really needed is a "single sqlite db with a php frontend".
There's a ton of incredibly talented neurodivergent people in our ecosystem who would trip up on that question just because of how it's framed
Because how is the interviewee to know if you're testing for the technically sophisticated answer no one in their right mind would ever write or the pragmatic one?
I'd assume that if he got a call from Patrick himself and a second opportunity to get interviewed, that's already a cue for interviewers to pass him regardless of what he says?
Exactly, it's also a test of ability to conform. Especially useful to weed out rogue behavior picked up in startups.
So true... I've failed interviews because the interviewer saw using library functions as a sign of weakness.
So, I asked it to modify its instructions.md file to not repeat that mistake. The result was the new line "Avoid single-use wrapper functions; inline logic at the call site unless reused"
instructions.md is the new intern.
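The rule that generated line encodes can be shown with a tiny hypothetical example: a wrapper called exactly once adds a name the reader must chase down without adding any meaning.

```python
# Before: a single-use wrapper, only ever called from one place.
def _normalize_email(email):
    return email.strip().lower()

def register_before(email):
    return _normalize_email(email)

# After: the same logic inlined at its only call site.
def register_after(email):
    return email.strip().lower()

assert (register_before("  Ada@Example.COM ")
        == register_after("  Ada@Example.COM ")
        == "ada@example.com")
```

If a second call site ever appears, that's the moment to extract the helper, not before.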
If there's no competent front-line technical management who can successfully make this simple comparison, then, sure, in that case the team may be fucked.
Not if management is moving at the speed of the more complex solution.
> yes, guys
I thought that blatant sexism wasn't a part of this website.
> In my experience, people who "get shit done" may not get fancy awards, but their work is recognized and rewarded.
top kek
Even just taking fault incidence rates, assuming constant injection per dev hour...
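Under that assumption (a constant, purely hypothetical fault-injection rate per dev-hour; the numbers below are made up for illustration), the comparison is just arithmetic:

```python
# Hypothetical constant fault-injection rate; its exact value cancels
# out of the comparison, so only the dev-hour ratio matters.
faults_per_dev_hour = 0.05

simple_hours = 2 * 8     # shipped in a couple of days
complex_hours = 15 * 8   # three weeks of work

simple_faults = faults_per_dev_hour * simple_hours
complex_faults = faults_per_dev_hour * complex_hours

# Same team, same rate: the complex build carries 7.5x the latent faults.
assert complex_hours / simple_hours == 7.5
assert complex_faults > simple_faults
```

And that's before counting the faults introduced later by maintainers who don't fully understand the complex system.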
As a hiring manager I've had situations like this arise because there was a gap in my plan and I didn't realize it. When those come up, we thank them for their cleverness, apologize to the candidate, reframe the situation, and give them another shot.
But also sometimes I leave intentional ambiguity in the plan. Part of the goal is to see if they have a degree of common sense commensurate to their level. If they're interviewing for a high level position, I'd expect them to be able to spot silly flaws and push back that perhaps the whole problem needs rethinking. And of course, I also expect them to know the brute force solution as well. Do they only know one? Both? Let's find out.
On one hand, we call ourselves problem solvers; on the other, we're not satisfied with simple solutions to those problems. If I'm interviewing for a job, I should be expected to behave and solve hypothetical problems the way I'd do it on the job. If that screws up your script, you probably suck at hiring and at communicating your expectations.
On the other hand, interviewees can give poor answers with no explanation. The interviewer should press for an explanation in those cases, of course, but perhaps some are trying to see if the interviewee instinctively provides at least some basic rationale behind the answer without having to be prodded each time, in which case it is a matter of communication habits and skills. Communication is essential, and it is under-emphasized and under-taught in so-called 'STEM' curricula.
In general, I agree that you can and should judge (not necessarily measure) things like simplicity and good design. The problem is that business does want the "increased this by 80%, decreased that by 27%" stuff, and simplicity does not lend itself to this approach.
> You have to write things up.
...and someone has to read it who understands the value you brought by deleting complexity.
The purpose of the interview is for the candidate to demonstrate their thought process and ability to communicate it. “Just use Postgres” doesn’t do that.
This would be more obvious if it was a LeetCode problem and the candidate just regurgitated an algorithm from memory without explaining anything about it. Yeah it’s technically the right answer but the interviewer can’t tell if you actually understand what you’re talking about or if you just happened to memorize and answer that works.
Interviews are about communication and demonstrating thought process.
If a valid answer was “just use Postgres” then it just wasn’t a very good interview question.
In real life, the answer almost certainly would be “just use Postgres” and everyone would be happy.
They want a conversation to see how you think, not an actual answer.
Which is stupid, because they asked a question that the person didn't need to think to answer. So they didn't get to see them think.
That being said, it's also on the ones giving the interviews to push the candidates and ensure that they really are receiving the applicants best. The interviewers don't want to miss potentially great candidates (interviews are hard and nerve-wracking, and engineers aren't known for their social performance), and thus sometimes need to help nudge the candidates in the right direction.
If the interview wants you to think about stuff that never happens in the role, I think it's a sign that in that role you'll be expected to solve problems the way the interview frames them.
Acting like people can't be good at their job is frankly dehumanizing and says a lot about your mindset with how you view other fellow devs.
Maybe that's "the manager's job", but that's just passing the buck and getting a worse solution. Every level of management should be looking for the best solution.
They take less effort to review and maintain, etc., but the credit for those positives isn't assigned to the original engineer. Which is the point of the article.