Confucius treats learning as cultivation: you do not really know something just because you were instructed in it. You know it by practicing, reflecting, making mistakes, and gradually developing judgment.
Laozi gives the complementary warning: “In pursuing learning, every day something is added. In pursuing the Tao, every day something is dropped.” Mastery is not only accumulation. It is also subtraction: removing unnecessary abstraction, ceremony, cleverness, and control.
Software architecture seems to need both. You learn it in a Confucian sense, by doing real work and living with the consequences. You improve it in a Taoist sense, by noticing when the system has accumulated structure that no longer serves the people, incentives, and constraints that actually shape it.
That is why the article’s point about incentives resonates. Architecture is not just what you design on paper. It is what survives contact with the organization that produces and maintains it.
For that, I would recommend the classic texts, such as Software Architecture: Perspectives on an Emerging Discipline (Shaw/Garlan), and really anything you can find by Mary Shaw, including more recent papers that explore why the field of software architecture did not go the way they foresaw, for example Myths and Mythconceptions: What Does It Mean to Be a Programming Language, Anyhow? and Revisiting Abstractions for Software Architecture and Tools to Support Them.
More practically: look at why Unix pipes and filters and REST are successful, and where they fall down and why. Hexagonal architecture is also key.
And a plug for my own contribution, linking software architecture with metaobject protocols as a new foundation for programming languages and programming: Beyond Procedure Calls as Component Glue: Connectors Deserve Metaclass Status. An answer to Mary Shaw's Procedure Calls Are the Assembly Language of Software Interconnection: Connectors Deserve First-Class Status.
It tries to answer the question: if procedure calls are the assembly language, what might a high-level language look like? And to suggest that software architecture might have a brighter and more practical future ahead of it.
Reading tip: Simplify IT - The art and science towards simpler IT solution https://nocomplexity.com/documents/reports/SimplifyIT.pdf
Not all chapters are equally good or equally interesting, that's the curse of a multi-author book, and all of them are dated, but I think the book is worth reading nonetheless.
It's heavily dependent on the project, but I feel like working as a "fullstack dev" kind of removes the fun of programming. I'm already spending 40 hrs a week looking at the dullest project I can imagine.
You don't have to call a sequence of transformations a compiler. You can say your AST is an algebraic data type, and your transformations are folds (or structural recursions; same thing). Now you have an abstract model that isn't tied to a particular application, and you can more easily find uses for it.
If you know a bit of maths you might wonder about duals. You will find that codata (objects) is the dual of algebraic data. OK, now we're programming to interfaces. That's also useful in the right context. What's the dual of a fold? An unfold! So now we have another way of looking at transformations, from the point of view of what they produce instead of what they consume. At this point we've basically reinvented reactive programming. And on and on it goes.
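To make the fold/unfold pair concrete, here is a minimal sketch in Rust. The toy arithmetic AST (`Expr`, `eval`, `sum_tree`) is invented for this example and not taken from any particular codebase: `eval` is a fold (it consumes the tree bottom-up), and `sum_tree` is an unfold (it grows a tree from a seed, defined by what it produces).

```rust
// A toy arithmetic AST as an algebraic data type.
#[derive(Debug)]
enum Expr {
    Lit(i64),
    Add(Box<Expr>, Box<Expr>),
    Mul(Box<Expr>, Box<Expr>),
}

// A fold: structural recursion that consumes the tree bottom-up.
fn eval(e: &Expr) -> i64 {
    match e {
        Expr::Lit(n) => *n,
        Expr::Add(a, b) => eval(a) + eval(b),
        Expr::Mul(a, b) => eval(a) * eval(b),
    }
}

// An unfold: builds a structure from a seed, defined by what it
// produces. Here: a balanced Add-tree over a range of literals.
fn sum_tree(lo: i64, hi: i64) -> Expr {
    if lo == hi {
        Expr::Lit(lo)
    } else {
        let mid = (lo + hi) / 2;
        Expr::Add(Box::new(sum_tree(lo, mid)), Box::new(sum_tree(mid + 1, hi)))
    }
}

fn main() {
    // (1 + 2) * 3
    let e = Expr::Mul(
        Box::new(Expr::Add(Box::new(Expr::Lit(1)), Box::new(Expr::Lit(2)))),
        Box::new(Expr::Lit(3)),
    );
    assert_eq!(eval(&e), 9);
    assert_eq!(eval(&sum_tree(1, 4)), 10); // 1 + 2 + 3 + 4
}
```

Any other transformation over `Expr` (pretty-printing, constant folding, lowering) has the same fold shape, which is the point: the abstract model transfers.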
You can find most of this in the literature, just not usually presented in a compact and easy to understand form.
(Note, the above description is a very quick sketch, and I'm not expecting anyone to understand all the details from it alone.)
Shameless self promotion: the book I'm writing is about all these concepts. You can find it here: https://functionalprogrammingstrategies.com/
I’m an NP. A lot of my learning came in clinical rotations, where you see real-life situations and how they are addressed. I want something like this for software architecture.
The closest I’ve seen are the open source case study books referenced previously, but those are older.
I’d like to be able to see explanations at various layers of abstraction about why certain decisions are made or not.
And when you try to prevent that IoC from leaking into the domain too much, the design often starts to look like hexagonal architecture.
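For illustration, a minimal ports-and-adapters sketch in Rust (the `OrderRepository` trait and all the names here are invented for this example): the domain owns the trait it depends on, and the framework-facing adapter implements it, so the inversion of control stops at the boundary instead of leaking inward.

```rust
use std::collections::HashMap;

// The domain owns the port: a trait describing what it needs,
// with no reference to any framework or database type.
trait OrderRepository {
    fn save(&mut self, order_id: u64, total: u32);
    fn total(&self, order_id: u64) -> Option<u32>;
}

// Domain logic is written against the port only.
fn place_order(repo: &mut dyn OrderRepository, order_id: u64, total: u32) {
    repo.save(order_id, total);
}

// An adapter on the outside implements the port: an in-memory
// map here, but it could just as well be SQL- or framework-backed.
#[derive(Default)]
struct InMemoryOrders {
    orders: HashMap<u64, u32>,
}

impl OrderRepository for InMemoryOrders {
    fn save(&mut self, order_id: u64, total: u32) {
        self.orders.insert(order_id, total);
    }
    fn total(&self, order_id: u64) -> Option<u32> {
        self.orders.get(&order_id).copied()
    }
}

fn main() {
    let mut repo = InMemoryOrders::default();
    place_order(&mut repo, 42, 1_999);
    assert_eq!(repo.total(42), Some(1_999));
}
```

The direction of the dependency is the whole trick: the adapter depends on the domain's trait, never the other way around.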
Programming often feels like inventing a new form, but in the end we tend to converge on the shapes that previous programmers already discovered.
When I was learning and being formally educated, I regularly had only about 50 bucks to my name; I couldn't even afford the cheapest VPS I could find. So learning the architectural ideas of AWS services, where you have to set up credit card information even to register, was not feasible for me. After all, I already had a computer, so why not learn how to deploy from there? Especially after reading horror stories of people racking up huge cloud bills due to some slip-up. So AWS was out of the window. Next I got myself a book about microservices, which were popular at the time, but I quickly learned that they are about organizational structure rather than software architecture, so I never had a reason to try that architecture.
I still have no idea how to choose the right architecture or make the "correct" decisions about it. I do whatever works, and for some reason it still pays my bills. Currently I just use Laravel monoliths for everything, and I am pretty sure this is good enough for most web services out there.
I know the original post wasn't about webapps; it's just where I find myself having the most issues.
But I would say that just because your preferred mental model is an algebraic one, where you build an abstract model that can apply to multiple situations, doesn’t mean that such an architecture is best for every situation.
The article talks very clearly about the system and social constraints that the architecture is optimizing for, and ‘turning everything into a fold’ doesn’t immediately strike me as helping to meet the fast-build-feedback needs of the deep contributors or the easy-and-safe-to-hack-in-modules needs of the weekend warriors, which are what the article describes as the goals of the architecture.
That said, it also doesn’t seem clearly ruled out that the architecture already has some of the features you’re describing.
It feels rather like you have a pet mental model which you think all architecture should subscribe to, and… I’m sorry but that seems naive.
- That's why X works.
- Not X but Y.
And some moron will be in the replies, saying "LLM comment". I hate this world. But probably yeah. llm comment.
I am trying to show 1) software architectures are useful, 2) if you abstract them you can find principles and relationships that allow you to transfer them to different domains, and transform them into different models, and 3) there is a lot of depth in software architecture and utility in learning it.
The article spends most of its time discussing the social context in which architecture is developed (I agree it is important, but it is not everything) and in general downplays the utility of learning about software architecture (e.g. the claim that "software design" is something best learned by doing, and the later suggestion that there is little useful writing on software architecture).
Nevertheless, I often deceive myself into thinking that I am inventing a new design. In reality, I am usually just being shaped by the IoC model imposed by the framework and by the pressure of business requirements.
Only the scale changes. Similar problems tend to leave similar structures behind.
Sometimes it feels as if earlier generations of programmers have already solved so many of the important problems that all that remains for me is rediscovery.
But I do not want mere rediscovery. I want to create a new kind of problem. Still, in front of the solidity of established engineering, my small mind sometimes feels as if there is no place left for me.
May 12, 2026
In reply to an email asking about learning software design skills as a researcher physicist:
I was attached to a bioinformatics lab early in my career, so I think I understand what you are talking about, the phenomenon of “scientific code”! My thoughts:
First meta observation is that “software design” is something best learned by doing. While I had some formal “design” courses at the University, and I was even “an architect” for our course project, that stuff was mostly make-believe, kindergarteners playing fire-fighters. What really taught me how to do stuff was an accident of my career, where my second real project (IntelliJ Rust) propelled me to a position of software leadership, and made design my problem. I did make a few mistakes in IJ Rust, but nothing too horrible, and I learned a lot. So that’s good news — software engineering is simple enough that an inquisitive mind can figure it out from first principles (and reading random blog posts).
Second meta observation, the bad news: Conway’s law is important. Softwaregenesis repeats the social architecture of the organization producing software. Or, as put eloquently by neugierig,
If I were to summarize what I learned in a single sentence, it would be this: we talk about programming like it is about writing code, but the code ends up being less important than the architecture, and the architecture ends up being less important than social issues.
I suspect that the difference you perceive between industrial and scientific software is not so much about software-building knowledge, but rather about the field of incentives that compels people to produce the software. Something like “my PhD needs to publish a paper in three months” is perhaps a significant explainer?
Two things you can do here. One, at times you get a chance to design or nudge an incentive structure for a project. This happens once in a blue moon, but is very impactful. This is the secret sauce behind TIGER_STYLE, not the set of rules per se, but the social context that makes this set of rules a good idea.
Two, you can speedrun the four stages of grief to acceptance. Incentive structure is almost never what you want it to be, but, if you can’t change it, you can adapt to it. This is also true about most industrial software projects — there’s never a time to do a thing properly, you must do the best you can, given constraints.
Let me use rust-analyzer as an example. The physical reality of the project is that it’s simultaneously very deep (it’s a compiler! Yay!) and very wide (opposite to an LLM, a classical IDE is a lot of purpose-built special features). The social reality is that “deep compiler” can attract a few brilliant dedicated contributors, and that the “breadth features” can be a good fit for an army of weekend warriors, people who learn Rust, who don’t have sustained capacity to participate in the project, but who can sink an hour or two to scratch their own itch.
My insistence that rust-analyzer doesn’t require building rustc, that it builds on stable, that it doesn’t have any C dependencies, and that the entire test suite takes seconds, was in the service of the goal of attracting high-impact contributors. I was wrangling the build system to make sure people can work on the borrow checker without thinking about anything else.
To attract weekend warriors, the internals of rust-analyzer are split into multiple independent features, where each feature is guarded by catch_unwind at runtime. The thinking was that I explicitly don’t want to care too much about quality there, and that the bar for getting a feature PR in is “happy path works & tested”. It’s fine if the code crashes, it will only attract further contributors, provided that the crash stays contained within that one feature.
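A rough sketch of that catch_unwind pattern (the `Feature` type and all the names here are invented for illustration, not rust-analyzer’s actual internals): each feature runs behind a panic boundary, so a crash in one hack-friendly feature degrades to “no result” instead of taking down the process.

```rust
use std::panic::{self, AssertUnwindSafe};

// A "feature" is just a function from input to a result.
// These names are illustrative; rust-analyzer's real internals differ.
type Feature = fn(&str) -> String;

fn flaky_feature(_input: &str) -> String {
    panic!("happy path only: this input is not handled yet");
}

fn solid_feature(input: &str) -> String {
    format!("len = {}", input.len())
}

// Run a feature behind a panic boundary: a panic in one feature
// becomes None instead of crashing the whole process.
fn run_feature(f: Feature, input: &str) -> Option<String> {
    panic::catch_unwind(AssertUnwindSafe(|| f(input))).ok()
}

fn main() {
    // Silence the default panic message for the demo.
    panic::set_hook(Box::new(|_| {}));
    assert_eq!(run_feature(flaky_feature, "abc"), None);
    assert_eq!(run_feature(solid_feature, "abc"), Some("len = 3".to_string()));
}
```

The boundary is what makes “happy path works & tested” an acceptable quality bar: a half-finished feature can fail without taxing anyone else.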
In contrast, when working on the core spine that supports the features, I was considerably more pedantic about quality.
A word of caution about adapting to, rather than fixing, the incentive structure — the future is uncertain, and tends to happen in the least convenient manner. The original motivation behind the rust-analyzer experiment was to avoid the need to maintain a parallel compiler (the one in IntelliJ Rust), and to prototype a better architecture for LSP, so that the learnings could be backported to rustc. So, even in core (especially in core), the code was very experimental. Oh well. Stuck with one more compiler now, I guess?
I might hazard a guess that something similar happened to the uutils project, which started as a destination for people learning Rust, and ended up as the Ubuntu coreutils implementation.
Third, now to some concrete recommendations. Sadly, I don’t know of a single book I can recommend which contains the truths. I suspect one can only find such a book in an apocryphal short story by Borges: practice seems to be an essential element here. But here are some things worth paying attention to:
The Boundaries talk by Gary Bernhardt is an all-time favorite. It contains solid object-level advice, and, for me, it triggered the meta inquiry.
How to Test is something I wish I had. I immediately understood the importance of testing, but it took me a long time to grow arrogant enough to admit that most widely-cited testing advice is shamanistic snake-oil, and to conceptualize what actually works.
The ØMQ guide and, more generally, writings by Pieter Hintjens introduced me to Conway’s Law thinking. That “feature development” architecture of rust-analyzer? Optimistic merging, applied.
Reflections on a decade of coding by Jamii is excellent, goes very meta. It is intentionally the first of my links.
Ted Kaminski blog is the closest there is to a coherent theory of software development, appropriately framed as a set of notes to a non-existing book!
As for the actual books, Software Engineering at Google and Ousterhout’s The Philosophy of Software Design are often recommended. They are good. SWE, in particular, helped me with a couple of important names. But they weren’t groundbreaking for me.