The anon291 comment about interface stability is exactly right. The reason you don't need to understand CPU microarchitecture is that x86 instructions from 1990 still work. Your React component library from 2023 might not survive the next major version. The "nobody knows how the whole system works" problem is manageable when the interfaces are stable and well-documented. It becomes genuinely dangerous when the interfaces themselves are churning.
What I've noticed is that teams don't even track which of their dependencies are approaching EOL or have known vulnerabilities at the version they're pinned to. The knowledge gap isn't just "how does this work" - it's "is this thing I depend on still actively maintained, and what changed in the last 3 releases that I skipped?" That's the operational version of this problem that bites people every week.
CPU instructions, caches, memory access, etc. are debated, tested, hardened, and documented to a degree that's orders of magnitude greater than the LLM-generated code we're deploying these days. Those fundamental computing abstractions aren't nearly as leaky or nearly as in need of refactoring tomorrow.
Being an AI skeptic more than not, I don't think the article's conclusion is true.
What LLMs can potentially do for us is exactly the opposite: because they are trained on pretty much everything there is, if you ask the AI how the telephone works, or what happens when you enter a URL in the browser, they can actually answer and break it down for you nicely (and that would be a dissertation-sized text). Accuracy and hallucinations aside, it's already better than a human who has no clue how the telephone works, or where to even begin if said human wanted to understand it.
Human brains have a pretty serious gap in the "I don't know what I don't know" area, whereas language models have such a vast scope of knowledge that it makes them somewhat superior, albeit at the price of being literally quite expensive and power hungry. But those are technical details.
LLMs are knowledge machines that are good at precisely that: knowing everything about everything on all levels as long as it is described in human language somewhere on the Internet.
LLMs consolidate our knowledge in ways that were impossible before. They are pretty bad at reasoning or e.g. generating code, but where they excel so far is answering arbitrary questions about pretty much anything.
1. The ability and interest to investigate things and find out how they work, when needed or desired. They are interested in how things work. They are probably competent in things that are "glue" in their disciplines, such as math and physics in my case.
2. The ability to improvise an answer when needed, by interpolating across gaps in knowledge, well enough to get past whatever problem is being solved. And to decide when something doesn't need to be understood.
That doesn’t make it OK. This is like being stuck in a room whose pillars are starting to deteriorate, then someone comes along with a sledgehammer and starts hitting them and your reaction is to shrug and say “ah, well, the situation is bad and will only get worse, but the roof hasn’t fallen on our heads yet so let’s do nothing”.
If the situation is untenable, the right course of action is to try to correct it, not shrug it off.
Systems include people, who make their own decisions that affect how the systems work, and we don’t go down to biology and chemistry to understand how they make choices. But that doesn’t mean people’s decisions should be fully ignored in our analysis, just that there is a right abstraction level for that.
And sometimes a side or abstracted component deserves to be seen or understood in more detail, because some of its subcomponents or its fine-grained behavior make a difference for what we are solving. Can we do that?
Does anyone on the planet actually know all of the subtleties and idiosyncrasies of the entire tax code? Perhaps the one inhabitant of Sealand and the Sentinelese but no-one in any western society.
https://youtu.be/36myc8wQhLo (USENIX ATC '21/OSDI '21 Joint Keynote Address-It's Time for Operating Systems to Rediscover Hardware)
"It's not slop. It's not forgetting first principles. It's a shift in how the craft work, and it's already happened."
It actually really is slop. He may wish to ignore it but that does not change anything. AI comes with slop - that is undeniable. You only need to look at the content generated via AI.
He may wish to focus merely on "AI for use in software engineering", but even there he is wrong, since AI makes mistakes too and not everything it creates is great. People often have no clue how that AI reaches any decision, so they also lose being able to reason about the code or code changes. I think people have a hard time trying to sell AI as "only good things, the craft will become better". It seems everyone is on the AI hype train - eventually it'll either crash or slow down massively.
So nothing new under the sun: often the practices come first, and only then can some theory emerge, from which point it can be leveraged to go further than present practice, and so on. Sometimes practice and theory are more entangled in how they are created on the go, obviously.
The issue with frameworks is not the magic. We feel like it's magic because the interfaces are not stable. If the interfaces were stable we'd consider them just a real component of building whatever.
You don't need to know anything about hardware to properly use a CPU ISA.
The difference is the CPU ISA is documented, well tested, and stable. We can build systems that offer stability and are formally verified as an industry. We just choose not to.
The argument seems to be, we should float on a thin lubricant of "that's someone else's concern" (either the AI or the PMs) gliding blissfully from one ticket to another. Neither grasping our goal nor our outcome. If the tests are green and the buttons submit, mission accomplished!
Using Claude I can feel my situational awareness slipping from my grasp. It's increasingly clear that this style of development pushes you to stop looking at any of the code at all. My English instructions do not leave any residual growth. I learn nothing to send back up the chain, and I know nothing of what's below. Why should I exist?
But someone designed the abstraction (e.g. the Wifi driver, the processor, the transistor), and they made sure it works and provides an interface to the layers above.
Now you could say a piece of software completely written by a coding agent is just another abstraction, but the article does not really make that point, so I don't see what message it tries to convey. "I don't understand my wifi driver, so I don't need to understand my code" does not sound like a valid argument.
I think the concern is not that "people don't know how everything works" - people never needed to know how to "make their own food" by understanding all the cellular mechanisms and all the intricacies of the chemistry & physics involved in cooking. BUT, when you stop understanding the basics - when you no longer know how to fry an egg because you just get it already prepared from the shop/ from delivery - that's a whole different level of ignorance, that's much more dangerous.
Yes, it may be fine & completely non-concerning if agricultural corporations produce your wheat and your meat; but if the corporation starts producing standardized cooked food for everyone, is it really the same - is it a good evolution, or not? That's the debate here.
I am no CS major, nor do I fully understand the inner workings of a computer beyond "we tricked a rock into thinking by shocking it."
I'd love to better understand it, and I hope that through my journey of working with computers, I'll learn more about these underlying concepts: registers, buses, memory, assembly, etc.
Practically however, I write scripts that solve real world problems, be that from automating the coffee machine, to managing infrastructure at scale.
I'm not waiting to pick up a book on x86 assembly first before I write some python however. (I wish it were that easy.)
To the greybeards that do have a grasp of these concepts though? It's your responsibility to share that wealth of knowledge. It's a bitter ask, I know.
I'll hold up my end of the bargain by doing the same when I get to your position and everywhere in between.
Three minute video by Milton Friedman: https://youtu.be/67tHtpac5ws?si=nFOLok7o87b8UXxY
True.
But in all systems up to now, for each part of the system, somebody knew how it worked.
That paradigm is slowly eroding. Maybe that's ok, maybe not, hard to say.
One can continue to perfect and exercise their craft the old school way, and that’s totally fine, but don’t count on that to put food on the table. Some genius probably can, but I certainly am not one.
This new arrangement would be perfectly fine if they weren't responsible when it breaks.
Adam Jacob
It’s not slop. It’s not forgetting first principles. It’s a shift in how the craft works, and it’s already happened.
This post just doubled down without presenting any kind of argument.
Bruce Perens
Do not underestimate the degree to which mostly-competent programmers are unaware of what goes on inside the compiler and the hardware.
Now take the median dev, compress his lack of knowledge into a lossy model, and rent that out as everyone's new source of truth.
The lack of comprehensive, practical, multi-disciplinary knowledge creates a DEEP DEPENDENCY on the few multinational companies and countries that UNDERSTAND things and can BUILD things. If you don't understand it, if you can't build it, they OWN you.
In the prior 30 years of my programming life, so much time was spent "yak shaving"... setting up all the boilerplate, adding basic functionality you always have to do, setting up support systems, etc. With Claude, all of those things are so quick to complete that I can stay focused on what I am actually trying to do, and can therefore keep more of the core functionality I am caring about in my head. I don't have to push the core, novel, parts of my work aside to do the parts that are the same across other projects.
The big problem is that there is now an actual risk that most will never be able to MAKE abstractions. Sure, let's stand on the shoulders of giants, but before AI most had to do some extra work and flex their brains.
Everyone makes abstractions, and hiding the "accidental complexity" of my current task is good, but I should still deal with the "necessary complexity" to say I have actually done a job.
If not, I'm only being a dumb pipe...
It takes curiosity on your part though. Handwaving about practical concerns taking priority is a path to never getting around to it. "Pragmatism" towards skills is how managers wind up with an overspecialized team and then tell themselves it was inevitable. The same can happen to you.
I can’t make anyone want to know how things work, and it’s getting tiring being continuously told “no” when I ask.
This particular example can be misinterpreted though. It's true that no single person knows how to make that exact pencil that he is holding. But it's not true that no single individual exists who can make a pencil by themselves. If the criteria is just that it works as a pencil, then many people could make or find something that fills that criteria.
This is an important distinction because there are things like microprocessors, which no single person knows how to make. But also: no single person could alone build something that has anywhere near the same capability. It's conceivable that a civilization could forget how to do something like that because it requires so many people with non-overlapping knowledge to create anything close. We aren't going to forget how to make pencils because it is such a simple problem, that many individuals are capable of figuring out workable solutions alone.
The hard part is finding graphite (somewhere in Wales? looks like lead, but softer and leaves traces on sheep's wool). Then suitable clay to make the lead. Then some kind of glue to glue the two parts of the pencil (boil some bones and cartilages?).
If the project is legacy or the people just left the company that’s just not true.
"I don't need to know about software engineering, I'm writing code."
"I don't need to know how to design tests, ____ vibe-coded it for me."
sometimes that trust is proven wrong. I have had to understand my compiler output to prove there was a bug in the optimizer (once I understood the bug I was able to find it was already fixed in a release I hadn't updated to yet). Despite that compilers have earned my trust: It is months of debugging before I think maybe the compiler is wrong.
I am not convinced that AI writes code I can trust - too often I have caught it doing things that are wrong (recently I told it to write some code using TDD - and it put the business logic it was testing in the mock - the tests passed, but manual testing showed the production code didn't have that logic and so didn't work). Until AI code proves it is worth trusting I'm not going to trust it and so I will spend the time needed to understand the code it writes - at great cost to my ability to quickly write code.
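To make that failure mode concrete, here is a minimal sketch (the names and the discount rule are hypothetical, not from the actual project): the behavior the feature was supposed to add lives only in the mock, so the test passes while the production code never implements it.

```python
import unittest
from unittest.mock import MagicMock


class PricingService:
    """Production collaborator: the bulk discount was never actually written."""

    def price_for(self, quantity: int, unit_price: float) -> float:
        return quantity * unit_price


def checkout_total(pricing_service, quantity: int, unit_price: float) -> float:
    """Production code under test: just delegates to the pricing service."""
    return pricing_service.price_for(quantity, unit_price)


class CheckoutTest(unittest.TestCase):
    def test_bulk_discount(self):
        mock_pricing = MagicMock()
        # The discount rule the feature was supposed to add ended up here,
        # inside the mock, instead of in PricingService.
        mock_pricing.price_for.side_effect = (
            lambda qty, price: qty * price * (0.9 if qty >= 10 else 1.0)
        )
        # Passes, even though the real PricingService would return 50.0.
        self.assertAlmostEqual(checkout_total(mock_pricing, 10, 5.0), 45.0)


if __name__ == "__main__":
    unittest.main()
```

The real PricingService returns 50.0 for the same inputs, which is exactly the kind of gap that only shows up in manual or integration testing.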
With all hands on deck scrambling HARD, a week later we still didn’t have everything back up, because we didn’t know how. A ton of it had never been down since the 60s.
A mess indeed.
Like you're sitting in your IDE, select a few rows, press (for example) caps lock to activate speech, and then just say a short line about what it should adjust or similar - which is then staged for the next adjustments to be done with the same UX.
Like saying "okay, I need a new use case here, let's start by making a function to do y. [Function appears] Great, we need to wire this object into it [point at class] [LLM backtracking code path via language server until it finds it and passes things through]"
The main blocking issue to that UX would likely be the speed of the response, as the transcription would be pretty much instant, but the coding prompt after would still take a few moments to be good... And such an interactive approach would feel a lot better with speed.
Too bad nobody seems to target the combined mouse+voice control for LLMs yet. It would even double as a fantastic accessibility tool for people suffering from various typing related issues
Granted one person can't know/do everything, but large companies in particular seem allergic to granting you any visibility whatsoever. It's particularly annoying when you're given a deadline, bust your ass working overtime to make it, only to discover that said deadline got extended at a meeting you weren't invited to and nobody thought to tell you about it. Or worse, they were doing some dark management technique of "well he's really hauling ass right now, if he makes the original deadline we'll be ahead of schedule, and if he doesn't we have the spare capacity".
If the expectation is I'm a tool for management to use, then I'll perform my duties to the letter and no further. If the expectation is ownership, then I need to at least sit at the cool kids' table and maybe even occasionally speak when I have something relevant to contribute.
> My English instructions do not leave any residual growth. I learn nothing to send back up the chain, and I know nothing of what's below. Why should I exist?
When you use Claude Code, tell it to keep a markdown file updated with the what and the why. Instead of just “Do $y”, say “Because of $x I need to do $y”. If that is recorded in the markdown file, sometimes the agent will come up with code and make changes that are correct, but for use cases you didn’t think about. You can then even ask it why it did $x that you weren’t expecting - and oh yeah, it was right.
> Why should I exist?
That’s the wrong question, the correct question is “why is my employer paying me?”. Your employer is paying you to turn well defined requirements into working code to either make them money or to save them money if (the royal) you are a mid level ticket taker. If someone is working at that level, that’s what they are regardless of title.
No one cares if either you or the LLM decided to use a for loop or a while loop.
At higher levels you are responsible for taking your $n number of years of experience to turn more ambiguous, more impactful, larger scoped projects into working implementations that are done on time, on budget and meets requirements. Before LLMs, that meant a combination of my own coding, putting a team together and delegating and telling my director/CTO that this isn’t something we should be doing in house (ie a Salesforce or Workday integration) at all.
Now add a coding agent to the mix of all those resources. In either case, as anything above ticket taker, I probably haven’t looked at a line of code first. I test for whether it meets the functional and non-functional requirements, and then mostly look at the hot spots - concurrency issues, security issues, and any scalability issues that are obvious before I hammer it with real-world-like traffic - web requests, or transactions for an ETL job.
And before the pearl clutching starts, I started programming as a hobby in the 80s in assembly and spent the first decade and a half of my career doing C bit twiddling on multiple mainframes, PCs, and later Windows CE devices.
That depends on your definition of “knows how to make.” I worked at Samsung Austin Semiconductor for a while, and there are some insanely smart and knowledgeable people there (and, I’m sure, at every other semiconductor company). It was actually a really good life experience for me, because it grounded and humbled me in the way that only working around borderline genius can.
I can describe to you all the steps that go into manufacturing a silicon wafer, with more detail in my particular area (wet cleans) than others, but I certainly can’t answer any and all questions about the process. However, I am nearly certain that there existed at least one person at SAS who could describe every step of every process in such excruciating detail that, given enough time and skilled workers (you said “know,” not “do” - I am under no delusion that a single person could ever hope to build a fab), they could bootstrap a fab.
I recently watched this: https://www.youtube.com/watch?v=MiUHjLxm3V0
The levels of advanced _whatever_ that we've reached is absurdly bonkers.
It seems to me that at some point in the last 50 or so years the world went from "given a lot of time I can make a _crude_ but reasonably functional version of whatever XYZ in my garage" to "it requires the structural backbone of a whole civilization to achieve XYZ".
Of course it's sort of a delusion. Maybe it's more about the ramp appearing more exponential than ever.
Is this not a job for LLMs, though?
It’s like if you are building a production line. You need to use a certain type of steel because it has certain heat properties. You don’t need to know exactly how they make that type of steel. But you need to know to use that steel. AI slop is basically just using whatever steel.
At every layer of abstraction in complexity, the experts at that layer need to have a deep understanding of their layer of complexity. The whole point is that you can rely on certain contracts made by lower layers to build yours.
So no, just slopping your way through the application layer isn’t just on theme with “we have never known how the whole system works”. It’s ignoring that you still have a responsibility to understand the current layer where you’re at, which is the business logic layer. If you don’t understand that, you can’t build reliable software because you aren’t using the system we have in place to predictably and deterministically specify outputs. Which is code.
A few comments on that. First off, the best programmers I've worked with recognized when their abstractions were leaky, and made efforts to understand the thing that was being abstracted. That's a huge part of what made them good! I have worked with programmers that looked at the disassembly, and cared about it. Not everyone needs to do that, but acting like it's a completely pointless exercise does not track with reality.
The other thing I've noticed personally for myself is my biggest growth as a programmer has almost always come from moving down the stack and understanding things at a lower level, not moving up the stack. Even though I rarely use it, learning assembler was VERY important for my development as a programmer, it helped me understand decisions made in the design of C for instance. I also learned VHDL to program FPGAs and took an embedded systems course that talked about building logic out of NAND gates. I had to write a game for an FPGA in C that had to use a wonky VGA driver that had to treat an 800x600 screen as a series of tiles because there wasn't nearly enough RAM to store that framebuffer. None of this is something I use daily, some of it I may never use again, but it shaped how I think and work with computers. In my experience, the guys that only focus on the highest levels of abstractions because the rest of the stuff "doesn't matter" easily get themselves stuck in corners they can't get out of.
My takeaway is that modern system complexity can only be achieved via advanced specialization and trade. No one human brain can master all of the complexity needed for the wonders of modern tech. So we need to figure out how to cooperate if we want to continue to advance technology.
My views on the topic were influenced by Kling's book (it's a light read): https://www.libertarianism.org/books/specialization-trade
However, there is a fundamental flaw in this analogy: compilers are deterministic, AI is not. You get high-level code and compile it twice, you get exactly the same output. You get specs and generate high-level code through AI twice, you get two different outputs (hopefully with equivalent behaviour).
If you don't understand that deterministic vs. non-deterministic is a fundamental and potentially dangerous change in the way we produce work, then you definitely fail at first principles.
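As a rough sketch of what that determinism means in practice (the file name, compiler, and flags below are placeholders, and some toolchains embed timestamps or paths unless you ask for reproducible builds): compile the same source twice and the outputs hash identically, which is exactly the check you cannot make for "same prompt in, same code out" with a sampling LLM.

```python
import hashlib
import subprocess


def object_hash(source: str, out: str) -> str:
    """Compile `source` with fixed flags and return a SHA-256 of the object file."""
    subprocess.run(["cc", "-O2", "-c", source, "-o", out], check=True)
    with open(out, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()


if __name__ == "__main__":
    # Same input, same flags: the two hashes are expected to match.
    print(object_hash("main.c", "a.o") == object_hash("main.c", "b.o"))
```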
The whole point of society is that you don’t need to know how the whole thing works. You just use it.
How does the water system maintain pressure so water actually comes out when you turn on the tap? That’s entirely the wrong question. You should be asking why you never needed to think about that until now, because that answer is way more mind-expanding and fascinating. Humans invented entire economic systems just so you don’t need to know everything, so you can wash your hands and go back to your work doing your thing in the giant machine. Maybe your job is to make software that tap-water engineers use everyday. Is it a crisis if they don’t understand everything about what you do? Not bloody likely - their heads are full of water engineering knowledge already.
It is not the end of the world to not know everything - it’s actually a miracle of modern society!
Humans aren’t without flaws; prior to coding assistants, I’ve lost count of the times my PM told me to rush things at the expense of engineering rigor. We validate or falsify the need for a feature sooner and move on to other things. Sometimes it works, sometimes a bug blows up in our faces, but things still chug along.
This point will become increasingly moot as AI gets better at generating good code, and faster, too.
The cost is you lose those layers of abstractions you get at the higher software levels, and there's only so much complexity I can handle.
(the funny part is that even HW registers and stuff are just an API that the hardware chooses to expose. As Alan Kay said: "Hardware is really just software crystallized early")
But even now it’s struggling on a project to understand the correlation between “It is creating Lambda code to do $x, meaning it needs to change the corresponding IAM role in CloudFormation to give it the permissions it needs”
The problem is education, and maybe ironically AI can assist in improving that
I've read a lot about programming and it all feels pretty disorganized; the post about programmers being ignorant about how compilers work doesn't sound surprising (go to a bunch of educational programming resources and see if they cover any of that)
It sounds like we need more comprehensive and detailed lists
For example, with objections to "vibe coding", couldn't we just make a list of people's concerns and then work at improving AI's outputs to reflect the concerns people raise? (Things like security, designs to minimize tech debt, outputting for readability if someone does need to manually review the code in the future, etc.)
Incidentally this also reminds me of political or religious stances against technology, like the Amish take for example, as the kind of ignorance of and dependence on processes out of our control discussed seem to be inherent qualities of technological systems as they grow and become more complex.
Off the top of my head? Most of them. Did you need me to understand some level in particular? I can dedicate time to that if you like. My experience and education will make that a very simple task.
The better question is: is there any _advantage_ to understanding "all the levels"? If not, then what outcome did you actually expect? A lot of this work is done in exchange for money and not out of personal pride or desirous craftsmanship.
You can try to be the "Wizard of Oz" if you want. The problem is anyone can do that job. It's not particularly interesting is it?
To a reasonable degree, yes, I can. I am also probably an outlier, and the product of various careers, with a small dose of autism sprinkled in. My first career was as a Submarine Nuclear Electronics Technician / Reactor Operator in the U.S. Navy. As part of that training curriculum, I was taught electronics theory, troubleshooting, and repair, which begins with "these are electrons" and ends with "you can now troubleshoot a VMEbus [0] Motorola 68000-based system down to the component level." I also later went back to teach at that school, and rewrote the 68000 training curriculum to use the Intel 386 (progress, eh?).
Additionally, all submariners are required to undergo an oral board before being qualified, and analogous questions like that are extremely common, e.g. "I am a drop of seawater. How do I turn the light on in your rack?" To answer that question, you end up drawing (from memory) an enormous amount of systems and connecting them together, replete with the correct valve numbers and electrical buses, as well as explaining how all of them work, and going down various rabbit holes as the board members see fit, like the throttling characteristics of a gate valve (sub-optimal). If it's written down somewhere, or can be derived, it's fair game. And like TFA's discussion about Brendan Gregg's practice of finding someone's knowledge limit, the board members will not stop until they find something you don't know - at which point you are required to find it out, and get back to them.
When I got into tech, I applied this same mindset. If I don't know something, I find out. I read docs, I read man pages, I test assumptions, I tinker, I experiment. This has served me well over the years, with seemingly random knowledge surfacing during an incident, or when troubleshooting. I usually don't remember all of it, but I remember enough to find the source docs again and refresh my memory.
But I understand how my code works. There's a huge difference between not understanding the layer below and not understanding the layer that I am responsible for.
There is a difference in qualia between "it happens to work" and "it was made for a purpose."
Business logic will strive more for "it happens to work" as good enough.
This is going to be a big problem. How do people using Claude-like code generation systems do this? What artifacts other than the generated code are left behind for reuse when modifications are needed? Comments in the code? The entire history of the inputs and outputs to the LLM? Is there any record of the design?
We already don't know how everything works, AI is steering us towards a destination where there is more of the everything.
I would also add that it's possible it will reduce the number of people that are _capable_ of understanding the parts it is responsible for.
Why? Is it more dangerous to not know how to fry an egg in a teflon pan, or on a stone over a wood fire? Is it acceptable to know the former but not the latter? Do I need to understand materials science so I can understand how to make something nonstick so I’m not dependant on teflon vendors?
Yeah, that's why I said "knew" instead of "knows".
One of the surprising (at least to me) consequences of the fall of Twitter is the rise of LinkedIn as a social media site. I saw some interesting posts I wanted to call attention to:
First, Simon Wardley on building things without understanding how they work:
Here’s Adam Jacob in response:
And here’s Bruce Perens, whose post is very much in conversation with them, even though he’s not explicitly responding to either of them.
Finally, here’s the MIT engineering professor Louis Bucciarelli from his book Designing Engineers, written back in 1994. Here I’m just copying and pasting the quotes from my previous post on active knowledge.
A few years ago, I attended a national conference on technological literacy… One of the main speakers, a sociologist, presented data he had gathered in the form of responses to a questionnaire. After a detailed statistical analysis, he had concluded that we are a nation of technological illiterates. As an example, he noted how few of us (less than 20 percent) know how our telephone works.
This statement brought me up short. I found my mind drifting and filling with anxiety. Did I know how my telephone works?
I squirmed in my seat, doodled some, then asked myself, What does it mean to know how a telephone works? Does it mean knowing how to dial a local or long-distance number? Certainly I knew that much, but this does not seem to be the issue here.
No, I suspected the question to be understood at another level, as probing the respondent’s knowledge of what we might call the “physics of the device.” I called to mind an image of a diaphragm, excited by the pressure variations of speaking, vibrating and driving a coil back and forth within a magnetic field… If this was what the speaker meant, then he was right: Most of us don’t know how our telephone works.
Indeed, I wondered, does [the speaker] know how his telephone works? Does he know about the heuristics used to achieve optimum routing for long distance calls? Does he know about the intricacies of the algorithms used for echo and noise suppression? Does he know how a signal is transmitted to and retrieved from a satellite in orbit? Does he know how AT&T, MCI, and the local phone companies are able to use the same network simultaneously? Does he know how many operators are needed to keep this system working, or what those repair people actually do when they climb a telephone pole? Does he know about corporate financing, capital investment strategies, or the role of regulation in the functioning of this expansive and sophisticated communication system?
Does anyone know how their telephone works?
There’s a technical interview question that goes along the lines of: “What happens when you type a URL into your browser’s address bar and hit enter?” You can talk about what happens at all sorts of different levels (e.g., HTTP, DNS, TCP, IP, …). But does anybody really understand all of the levels? Do you know about the interrupts that fire inside of your operating system when you actually strike the enter key? Do you know which modulation scheme is being used by the 802.11ax Wi-Fi protocol in your laptop right now? Could you explain the difference between quadrature amplitude modulation (QAM) and quadrature phase shift keying (QPSK), and could you determine which one your laptop is currently using? Are you familiar with the relaxed memory model of the ARM processor? How garbage collection works inside of the JVM? Do you understand how the field effect transistors inside the chip implement digital logic?
I remember talking to Brendan Gregg about how he conducted technical interviews, back when we both worked at Netflix. He told me that he was interested in identifying the limits of a candidate’s knowledge, and how they reacted when they reached that limit. So, he’d keep asking deeper questions about their area of knowledge until they reached a point where they didn’t know anymore. And then he’d see whether they would actually admit “I don’t know the answer to that”, or whether they would bluff. He knew that nobody understood the system all of the way down.
In their own ways, Wardley, Jacob, Perens, and Bucciarelli are all correct.
Wardley’s right that it’s dangerous to build things where we don’t understand the underlying mechanism of how they actually work. This is precisely why magic is used as an epithet in our industry. Magic refers to frameworks that deliberately obscure the underlying mechanisms in service of making it easier to build within that framework. Ruby on Rails is the canonical example of a framework that uses magic.
Jacob is right that AI is changing the way that normal software development work gets done. It’s a new capability that has proven itself to be so useful that it clearly isn’t going away. Yes, it represents a significant shift in how we build software, it moves us further away from how the underlying stuff actually works, but the benefits exceed the risks.
Perens is right that the scenario that Wardley fears has, in some sense, already come to pass. Modern CPU architectures and operating systems contain significant complexity, and many software developers are blissfully unaware of how these things really work. Yes, they have mental models of how the system below them works, but those mental models are incorrect in fundamental ways.
Finally, Bucciarelli is right that systems like telephony are so inherently complex, have been built on top of so many different layers in so many different places, that no one person can ever actually understand how the whole thing works. This is the fundamental nature of complex technologies: our knowledge of these systems will always be partial, at best. Yes, AI will make this situation worse. But it’s a situation that we’ve been in for a long time.
I have prompting in AGENTS.md that instructs the agent to update the relevant parts of the project documentation for a given change. The project has a spec, and as features get added or reworked the spec gets updated. If you commit after each session then the git history of the spec captures how the design evolves. I do read the spec, and the errors I've seen so far are pretty minor.
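Roughly, the instruction is of this shape (the wording and the SPEC.md name here are illustrative, not lifted from any particular project):

```markdown
## Documentation

- After adding or reworking a feature, update the relevant section of SPEC.md so it
  describes the current behaviour and the reason for the change.
- SPEC.md describes what the system does now; the git history of SPEC.md is the
  record of how the design evolved.
- If a change contradicts something already in SPEC.md, flag it in your summary
  rather than silently rewriting the spec.
```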
That was my point, really - that you probably don't need to know "materials science" to declare yourself competent enough in cooking so that you can make your own food. Even if you only cooked eggs in teflon pans, you will likely be able to improvise if need arises. But once you become so ignorant that you don't even know what food is unless you see it on a plate in a restaurant, already prepared - then you're in a lot poorer position to survive, should your access to restaurants be suddenly restricted. But perhaps more importantly - you lose the ability to evaluate food by anything other than aspect & taste, and have to completely rely on others to understand what food might be good or bad for you(*).
(*) even now, you can't really "do your own research", that's not how the world works. We stand on shoulders of giants - the reason we have so much is because we trust/take for granted a lot of knowledge that ancestors built up for us. But it's one thing to know/prove everything in detail up until the basic axioms/atoms/etc; nobody does that. And it's a completely different thing to have your "thoughts" and "conclusions" already delivered to you in final form by something (be it Fox News, ChatGPT, New York Times or anything really) and just take them for granted, without having a framework that allows some minimal "understanding" and "critical thinking" of your own.
But I hate not knowing how things work, and I have a pretty good memory, so I’m probably an outlier.
I fail to see how this isn't a problem? Grid failures happen? So do wars and natural disasters which can cause grids and supply chains to fail.
I doubt people would starve. It's trivial to figure out the hunting and fire part in enough time that that won't happen. That said, I think a lot of people will die, but it will be as a result of competition for resources.
I am sure engineers collectively understand how the entire stack works.
With LLM generated output, nobody understands how anything works, including the very model you just interacted with -- evident in "you are absolutely correct"
That's a bizarre claim, confidently stated.
Of course I can make a fire and cook my own food. You can, too. When it comes to hunting, skinning, and cutting up animals, that takes a bit more practice, but anyone can manage something even if the result isn't pretty.
If stores ran out of food we would have devastating problems, but because of specialization - and because we live in cities now - you simply can't go out hunting even if you wanted to. Plus there are probably much more pressing problems to take care of, such as the lack of water and fuel.
If most people actually couldn't cook their own food, should they need to, that would be a huge problem. Which makes the comparison with IT apt.
What problem does this solve? In the event of breakdown of society there is nowhere near enough game or arable land near, for example, New York City to prevent mass starvation if the supply chain breaks down totally.
This is a common prepper trope, but it doesn't make any sense.
The actual valuable skill is trade connections and community. A group of people you know and trust, and the ability to reach out and form mini supply chains.
To be fair - humans also fail at that. Just look at the GTK documentation as an example. When you point that out, ebassi may ignore you because criticism is unwanted; and the documentation will never improve, meaning they don't want new developers.
If it's at large scale then millions die of starvation.
It’s just not possible to feed 8 billion people without the industrial system of agriculture and food distribution. There aren’t enough wild animals to hunt.
They're not saying people can't learn those things either, but that's the practice you're talking about here. The real question is, can you learn to do it before you starve or freeze to death? Or perhaps poison yourself because you ate something you shouldn't or cooked it badly.
In case the supply chain breaks, preppers don't want to be the ones that starve. They don't claim they can prevent mass starvation.
(Very off topic from the article)
Will it kill you faster than you can birth and raise the next generation?
If it's something that kills you at 50 or 60, then really it doesn't matter that much as evolution expects you to be a grandparent by then.
In fact it says "This isn't a problem in practice though"
I have a general idea of how those things work, but successfully hunting an animal isn't something I have ever done or have the tools (and training on those tools) to accomplish.
Which crops can I grow in my climate zone to actually feed my family, and where would I get seeds and supplies to do so? Again I might have some general ideas here but not specifics about how to be successful given short notice.
I might successfully get a squirrel or two, or get a few plants to grow, but the result is still likely starvation for myself and my family if we were to attempt full self-reliance in those areas without preparation.
In the same way that I have a general idea of how CPU registers, cache, and instructions work but couldn't actually produce a working assembly program without reference materials.
You can eat some real terrible stuff and like 99.999% of the time only get the shits, which isn't really a concern if you have good access to clean drinking water and can stay hydrated.
The overwhelming majority of people probably would figure it out even if they wind up eating a lot of questionable stuff in the first month and productivity in other areas would dedicate more resources to it.
Maybe if you end up alone and lost in a huge forest or the Outback, but this is a highly unlikely scenario.
If society falls apart cooking isn’t something you need to be that worried about unless you survive the first few weeks. Getting people to work together with different skills is going to be far more beneficial.
I also wasn't putting the focus on cooking; the ability to hunt/gather/grow enough food and keep yourself warm is far more important.
And you are far more optimistic about people than me if you think people working together is the likely scenario here.
These are very important when you're alone. Like deep in the woods with a tiny group maybe.
The kinds of problems you'll actually see are something going bad and there being a lot of people around trying to survive on ever decreasing resources. A single person out of 100 can teach people how to cook, or hunt, or grow crops.
If things are that bad then there is nearly a zero percent chance that any of those, other than maybe clean water, are going to be your biggest issue. People that do form groups and don't care about committing acts of violence are going to take everything you have and leave you for dead if not just outright kill you. You will have to have a big enough group to defend your holdings 24/7 with the ability to take some losses.
Simply put, there is not enough room on the planet for hunter-gatherers and 8 billion people. That number has to fall down to the 1 billion or so range pretty quickly, like we saw around the 1900s.
https://www.scribd.com/document/110974061/Selco-s-Survival
From a real situation, only alluding to the true horrors of the situation.