I doubt that the author even read the result, as its readability is subpar. In general, AI slop is more readable than this soup of bullet points.
This feels like eternal September, but powered by LLMs.
While we talk about maintainability, we all admire the Fast Inverse Square Root algorithm.
Optimize for what best serves your purpose. If you have high team fluctuation, optimize for readability. If you develop a spacecraft, optimize for safety. If you ship audio gear, optimize for latency.
- Optimizing for change is basically the key principle of agility. Too often it is confused with being fast in delivery by default, just because you apply agile patterns. That is not true. You can be faster than with, say, waterfall, but most of the time you will be slower. But that is not the point. The point is that you can adapt the plan very quickly. So instead of strictly following a six-month plan, you can change plans on a daily basis and go in a completely different direction if business demands it.
- Application performance is actually not a "tech" thing, so I don't understand why so many developers pre-optimize for it without being asked to. Application performance is part of UX (user experience). There are studies out there showing that it is sometimes even beneficial to be slow and show a loading indicator, because it can increase user trust: they think "Hey look... the application is calculating something to fulfill my needs", instead of seeing the answer instantly. In any case, application performance should be driven by business and user needs, not by engineers who feel a personal obligation to optimize. And furthermore, application performance should never be optimized blindly. Always benchmark the application and work on the bottleneck only.
Users being susceptible to dark patterns doesn't mean that dark patterns are something an engineer should see as acceptable.
> Always benchmark the application and work on the bottleneck only.
That's how you end up with software that's slow due to a million abstractions. Easily benchmarked bottlenecks can give you quick wins, but that doesn't mean you should stop there or have no foresight to optimize things ahead of time where it makes sense. Your cost-benefit calculation also needs to take into account that optimization decisions (both architecture and lower-level implementation details) are much more costly to make after the code has already been written, which is why with today's YOLO software they often don't get made at all.
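For what it's worth, "benchmark first" can be as simple as timing the suspect code paths before committing to a rewrite. A minimal Node sketch (the `dedupe*` functions are made-up examples, not from either comment):

```typescript
import { performance } from "node:perf_hooks";

// Hypothetical optimization candidate: naive duplicate removal, O(n^2).
function dedupeNaive(items: number[]): number[] {
  const out: number[] = [];
  for (const item of items) {
    if (!out.includes(item)) out.push(item);
  }
  return out;
}

// The obvious replacement: a Set-based O(n) version.
function dedupeSet(items: number[]): number[] {
  return [...new Set(items)];
}

// Time both before deciding anything: the "slow" version may be
// irrelevant if it is never on the hot path.
function time(label: string, fn: () => void): number {
  const start = performance.now();
  fn();
  const ms = performance.now() - start;
  console.log(`${label}: ${ms.toFixed(1)} ms`);
  return ms;
}

const data = Array.from({ length: 20_000 }, () =>
  Math.floor(Math.random() * 1000)
);
time("naive", () => dedupeNaive(data));
time("set", () => dedupeSet(data));
```

The point of the parent comment still stands, though: a micro-timer like this only catches localized hot spots, not slowness that is smeared across a million abstractions.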
Or better: If you have high team fluctuation, optimize that first so your team is actually effective.
Teams spend weeks improving rendering performance by a couple of milliseconds while engineers are afraid to touch the codebase.
Engineering discussions often focus heavily on:
Performance matters.
But many teams optimize the wrong dimension.
A technically faster system does not automatically make the organization faster.
Especially when:
In many organizations, the real bottleneck is not application performance.
It is engineering throughput.
I do not care if a framework rerenders slightly faster when:
Testing browser-heavy frontend architectures often requires increasingly complicated environment simulation.
I wrote more about this problem in Why you should not access browser globals directly.
A system that is difficult to work with becomes slower overall.
The CPU might be faster.
The organization often is not.
Modern engineering marketing loves the phrase “blazing fast”.
Usually measured in runtime benchmarks.
Rarely measured in debugging experience, maintainability or engineering confidence.
Performance is not only measured in milliseconds.
It is also measured in engineering throughput.
It is measured in:
Long-term product quality usually reflects the engineering experience behind it.
Good developer experience is not only about “nice tooling”.
It directly affects:
And these things compound over time.
A codebase that is easy to understand becomes easier to maintain.
A codebase that is easy to maintain becomes easier to optimize.
A codebase engineers are not afraid to touch evolves faster.
That matters more long-term than winning synthetic rendering benchmarks.
I have seen teams spend enormous effort optimizing runtime performance while:
The rendering performance was not the bottleneck.
Engineering confidence was.
Fear slows everything down:
Teams rarely optimize systems they are afraid to touch.
I wrote more about this in Why your unit tests feel fragile.
Ironically, good developer experience often improves application performance anyway.
Because engineers are more willing to optimize systems they understand.
Clear architecture. Deterministic behavior. Reliable tooling. Understandable state management. Good observability. Fast feedback loops.
All of these improve confidence.
And confidence enables improvement.
A system that is easier to reason about is usually easier to optimize later.
Many optimizations introduce hidden complexity.
Additional build steps. Compiler magic. Aggressive memoization. Custom caching layers. Framework-specific behavior.
Sometimes the performance gains are worth it.
Often they are not.
Especially when the tradeoff is:
A system that is theoretically faster but harder to evolve often becomes slower for the business overall.
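To make the memoization point concrete, here is a small sketch (my own example, not from the article) of how a cache that speeds up repeated calls also introduces hidden state that must be invalidated correctly:

```typescript
// A naive memoizer: repeated calls get faster, but correctness now
// depends on the cache being cleared whenever the underlying data changes.
function memoize<A, R>(
  fn: (arg: A) => R
): { call: (arg: A) => R; invalidate: () => void } {
  const cache = new Map<A, R>();
  return {
    call(arg: A): R {
      if (!cache.has(arg)) cache.set(arg, fn(arg));
      return cache.get(arg)!;
    },
    invalidate() {
      cache.clear();
    },
  };
}

let taxRate = 0.2;
const priceWithTax = memoize((price: number) => price * (1 + taxRate));

console.log(priceWithTax.call(100)); // → 120
taxRate = 0.25;
console.log(priceWithTax.call(100)); // → still 120: stale, nobody invalidated
priceWithTax.invalidate();
console.log(priceWithTax.call(100)); // → 125
```

The speedup is real, but every caller now has to know when invalidation is required. That is exactly the kind of complexity that makes a "faster" system harder to evolve.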
This sounds simplistic, but in my experience it is true.
Happy engineers usually:
Good developer experience reduces frustration.
And less frustration means more energy for solving actual customer problems instead of fighting the development environment.
Or in simpler words:
happy engineer = happy customer
At least in my experience.
As engineers, we should absolutely care about performance.
But we should optimize holistically.
Not only for runtime performance.
Also for:
Because the fastest codebase is often the one engineers are not afraid to change.
And fast engineers usually build fast applications anyway.