The author argues that overreliance on AI will degrade the overall intelligence of human society, creating a negative feedback loop where future models train on increasingly degraded human data. I agree with this perspective to some extent. However, to definitively claim that human intelligence will only decline is overly simplistic. Rather, we might be about to witness a different facet—or the flip side—of what we have traditionally defined as intelligence.
Socrates once argued that the invention of writing would degrade the essence of human thought and memory. It is true that our capacity for raw memorization declined, but the act of recording enabled knowledge to be transmitted across generations. Couldn't LLMs represent a similar evolutionary trajectory?
It is undeniably true that LLMs atrophy certain cognitive muscles. However, I believe they catalyze development in other areas. In modern society, human discovery and knowledge are effectively monopolized by specific cliques. Without access to prestigious Western journals or incumbent tech giants, the barrier to entry is immense. The open-source community is no exception. For non-native English speakers, breaking into the open-source culture to access shared knowledge is notoriously difficult. But now, by spending a few dollars on an LLM, I can access the collective knowledge of that open-source ecosystem, translated seamlessly into my native language.
There is an old adage in the Korean Windows community: 'Linux is open, but it is not free.' And it’s true. To use Linux, you had to memorize arcane commands, and due to the lack of proper Korean documentation, the learning curve was vastly steeper than Windows. That very learning curve acted as a gatekeeping wall. LLMs explicitly dismantle that wall.
But this dismantling is a two-way street, and it exposes a fatal flaw in the author’s reliance on Shumailov’s 'Model Collapse' theory. The author claims AI compresses the tails of the data distribution, erasing minority viewpoints. What this ignores is that LLMs act as a conduit for cognitive diversity from the non-Western periphery. When a developer in South Korea or Brazil uses an LLM to translate their culturally embedded logic and problem-solving approaches into fluent English, they are injecting entirely new cognitive patterns into the global corpus. This does not compress the tails of the distribution; it actively thickens and extends them by capturing the 'social mind' of populations previously locked out of the internet's primary, English-dominated datasets.
Furthermore, LLMs function as a tool to re-evaluate things we've historically taken for granted—especially in areas that are too complexly intertwined, socio-politically loaded, or vast for the human mind to fully map. Take DeepMind's AlphaDev discovering a faster sorting algorithm as an example; it was a breakthrough achieved precisely because it reasoned from an alien, non-human perspective.
Human learning is fundamentally bottlenecked by environment and bias. Anyone who has interacted with academia knows it is riddled with pervasive prejudices and systemic inefficiencies. In South Korea, for instance, there is an entrenched bias that only researchers with US pedigrees are legitimate, and only papers in specific Western journals matter. This prejudice has prematurely killed countless promising research initiatives. It makes you wonder if the metrics we have long held up as 'superior' or 'correct' are actually deeply flawed. Modern society is too complex for the 'lone genius' model; paradigm shifts now require the intertwined research of multiple collectives. Yet, during this process, political interests often cause dominant groups to gatekeep and exclude others, completely regardless of scientific efficiency. In this context, an AI that lacks our inherent socio-political biases and optimizes purely based on probabilities can actually drive true breakthroughs.
Given all this, the absolute claim that AI unconditionally degrades human intelligence feels flawed. I seriously question whether the 'total sum' of human intelligence is actually experiencing a meaningful decline. Before making such claims, we desperately need to define what 'intelligence' actually means in this new context. The fatal flaw in current AI discourse is the complete lack of nuance—there is no middle ground. Everything is framed as a binary: either purely utopian or purely apocalyptic.
Speaking from personal experience, my cognitive muscle for writing raw code has atrophied because of AI. However, as a non-native English speaker, I used to struggle immensely with naming conventions. Now, my variable naming and overall architectural design capabilities have vastly improved. Conversely, I acutely feel my skills in manual memory layout management and granular code implementation degrading. The trade-off point will be wildly different for every individual.
Whenever I read doom-saying articles like the author's, I can't shake the feeling that they are simply projecting their own subjective anxieties and trying to pass them off as a universal conclusion.
Of course, the models are not intelligent. Their generated output reflects the statistical average. And in averaging more and more, you lose a lot of information.
The reason is that when people are taught how to disagree effectively, all these counterfactual concepts that AI loses become manifest; they are logically necessary. But if people are not taught how to explore the landscape of ideas, they become "fascists for the common" and literally create the hellscape civilization we are all trapped within.
I fully expect our future to involve PhD factories where doctorates label AI output for the most competitive rates possible.
The majority of us will have to contend with an information environment that is polluted and overrun.
I’ll argue that the pre-social-media internet was the “healthiest” in terms of our digital commons.
It assumes that the outputs are lacking because of a limit of ability.
I think there is a strong case to be made that many of their limitations come from them doing what we have told them to do. Hallucinations are the standout example of this. If you train a model to give answers to questions, it will answer questions, but it might have to make up the answer to do so. That isn't a failure to know that it does not know. It is doing the task given to it regardless of whether it knows or not.
Suppose you were given the task of writing the script for a TV show with the criterion that it offend no one whatsoever. You are told to make something as likeable as you can without anyone disliking it at all. The options for what you can do are reduced to something that is okay-ish but rather bland.
That's what AI is giving us. OK but rather bland. It's giving it to us because that's what we've told it we want.
For example, it can easily do moderately advanced calculus, which is way better than the average human.
Are you asserting that an LLM could be trained to NOT answer when it knows it doesn’t know the answer, or, if that’s not possible, be trained to answer "I don't know" when it knows it doesn’t know the answer?
If so, I would believe your thinking, but for some reason I have not yet seen a single LLM that behaves with that kind of self-knowledge.
I see students obsess every day over their SAT scores, which to some is a measure of individual intelligence. But what SAT score would a pair of students working together on a single test get? Or a dozen students working together? Would it be higher or lower? What sort of strategies would maximize their ability to collaborate? What would be the effect of giving/removing access to a calculator on a student's score? Access to scratch paper? Access to textbooks? Access to a dictionary? Access to unlimited time?
If we want to claim to understand intelligence, these are the sort of questions we should be able to answer. Can we?
Once again, LLMs will have to be bound to a source of entropy or feedback of some sort as a limit. Sure, you might be able to throw terawatts of compute at, say, music production, but without examples of what people already like, or test audiences, you cannot answer the question of whether it is any good.
If I read thousands of books that explain the details of another civilization in another galaxy, very thoroughly and consistently, but it just happens to be all made up: did I gain knowledge? More importantly, does what I have in my brain now flip from being fiction to being knowledge if that civilization flipped from not existing to existing? How so, if nothing in my brain, or how I live out the rest of my life, changes in the least, if not a single atom in this galaxy changes (let's ignore that gravity has infinite reach and all that, for the sake of argument)?
If yes, how? What in your definition of knowledge makes that possible?
Internet started it, hopefully LLMs will finish it.
This is not a problem in the ability of the system, it is a problem of how to construct training for such a task.
The challenge is constructing training examples where the model says it does not know only when it genuinely lacks the knowledge. You need examples where it says "I don't know" when it does not contain that knowledge, but provides an answer whenever it does know the answer.
To create such an example, you need to know in advance what the model knows and what the model does not know. You can't just have a database of facts that it knows, because you also need to count things that it can readily infer.
Any model that can reliably give the sum of any two 10-digit integers should be able to answer such questions, yet you can't list every possible pair of numbers the model knows how to add. And that is just the tiniest subset of the task, because you would have to determine every inferrable fact, not just integers. Adding to the problem, training on questions like this can itself expand the model's knowledge base, either from the question itself or because the model inductively figures out the answer from the combination of the question and the fact that it was not expected to know it.
A completely different training system would have to be implemented. There is research on categorising patterns of activations that can determine a form of 'mental state' of a model. Through this mechanism, a dynamic training approach could be achieved, where the answer the model is expected to give (and is rewarded for giving) depends in part on the model's own state.
I don't say this because I know how, but because I see no reason why we will be unable to crack that problem. If our brains can do it, so will AI one day.
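The dynamic-labeling idea above can be sketched in a few lines. Everything here is illustrative, not a real training API: the "model" is a lookup table, and `probe` stands in for querying the current model's own state before deciding what target to train it toward.

```python
# Minimal sketch of dynamic labeling: the target for each training
# example depends on what the model itself currently knows.

def probe(model_knowledge, question):
    """Stand-in for querying the current model: returns its answer,
    or None when the question is outside what it can answer."""
    return model_knowledge.get(question)

def build_selective_targets(model_knowledge, questions):
    """Label each question with the model's own answer when it knows,
    and with an explicit refusal when it does not."""
    targets = {}
    for q in questions:
        answer = probe(model_knowledge, q)
        targets[q] = answer if answer is not None else "I don't know."
    return targets

# Toy "model": a lookup table standing in for facts the model can
# recall or infer.
knowledge = {"capital of France": "Paris", "2+2": "4"}
targets = build_selective_targets(
    knowledge, ["capital of France", "capital of Atlantis", "2+2"])
```

The point of the sketch is the dependency: the expected answer cannot be fixed in advance, because it is a function of the model's current knowledge, which is exactly why a static database of facts is insufficient.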
This in itself is negative, but the ramifications are profound: the landscape of ideas is never realized by a material percentage of the students. And those who could have contributed worthwhile insights have been taught to not contribute.
Now with LLMs lowering the average through cognitive offloading and skill atrophy, prepare for it to get a whole lot worse.
At that point, we can only define it as something that causes this observation. And that is not very useful.
Edit: for "not in the training data" yes, humans generally can't know what they can't know.
I didn't mind that there was a typo.

A room of secretaries at typewriters circa 1925. © Underwood Archives/Getty
AI doesn’t really “think.” Rather, it remembers how we thought together. And we’re about to stop giving it anything worth remembering.
We are on the verge of the age of human redundancy. In 2023, IBM’s chief executive told Bloomberg that soon some 7,800 roles might be replaced by AI. The following year, Duolingo cut a tenth of its contractor workforce; it needed to free up desks for AI. Atlassian followed. Klarna announced that its AI assistant was performing work equivalent to 700 customer-service employees and that reducing the size of its workforce to under 2000 is now its North Star. And Jack Dorsey has been forthright about wanting to hold Block’s headcount flat while AI shoulders the growth.
The trajectory has a compelling internal logic. Routine cognitive work gets automated; junior roles thin out; productivity gains compound year on year. For boards reviewing cost structures, it is the cleanest investment proposition since the internal combustion engine retired the horse, topped up with a kind of moral momentum. Hesitate, the thinking goes, and fall behind.
But the research results of a team in the UK should give us pause. In the spring of 2024, they asked around 300 writers to produce short fiction. Some were aided by GPT-4 and others worked alone. Which stories, the researchers wanted to know, would be more creative? On average, the writers with AI help produced stories that independent judges rated as more creative than those written without it.
So far, so on message: a familiar story about the inevitable takeover by intelligent machines. But when the researchers examined the full body of stories rather than individual ones, the picture became murky. The AI-assisted stories were more similar to each other. Each writer had been individually elevated; collectively, they had converged. Anil R Doshi and Oliver Hauser, who published the study in Science Advances, reached for a phrase from ecology to explain this: a tragedy of the commons.
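The "individually elevated, collectively converged" result is, in principle, measurable on any set of texts. Here is a hedged sketch of one way to do it, using a crude bag-of-words cosine rather than the embedding-based similarity measures a study like this would actually use; the two toy corpora are invented for illustration.

```python
from collections import Counter
from itertools import combinations
from math import sqrt

def cosine(a, b):
    """Bag-of-words cosine similarity between two texts."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[w] * cb[w] for w in ca)
    na = sqrt(sum(v * v for v in ca.values()))
    nb = sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def mean_pairwise_similarity(texts):
    """Average similarity over all pairs of texts: a higher value
    means the collection has converged on shared wording."""
    pairs = list(combinations(texts, 2))
    return sum(cosine(a, b) for a, b in pairs) / len(pairs)

# Made-up story openings: the "assisted" pair shares far more
# wording, so its mean pairwise similarity comes out higher.
human = ["the fox crossed the frozen river at dawn",
         "a clockmaker buried his tools in the garden"]
assisted = ["the old man walked into the quiet town at dusk",
            "the old woman walked into the quiet town at dawn"]
```

The individual quality of each story can rise while this collective statistic rises too, which is exactly the shape of the finding: the interesting signal lives in the pairwise comparisons, not in any single text.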
Hold that result in mind: individual gain, collective loss. It describes something far more consequential than a writing experiment—it describes the hidden logic of our entire relationship with artificial intelligence. And it suggests that the most successful organizations of the coming decade will be the ones that do something profoundly counterintuitive: instead of using AI to eliminate human interaction by firing droves of workers, they will use it to create more human interaction. IBM has reversed course on its earlier human redundancy fantasies. I bet more will in due course.
Suppose you could travel to Egypt in 3000 BC and copy, in flawless hieroglyphics, the contents of every temple library, every architectural plan, every priestly manual, every commercial ledger. Then suppose you travelled to Mesopotamia and did the same in cuneiform. Consolidate everything you could find in the languages of that era, and then proceed to train a large language model on it. Full transformer architecture, self-attention, the whole enchilada.
The result would be a system capable of a certain kind of intelligence. It could predict floods from astronomical cycles. It could draft administrative correspondence. It could generate plausible religious commentary. But it would have no capacity for what the Greeks would later call the syllogism. It would carry no trace of Roman legal abstraction, and have no conception of the empirical method that wouldn’t emerge for another four millennia.
Now, let’s extend the experiment. Train a new model on the written output of 300 BC Athens: Aristotle, Euclid, Hippocrates, the commercial records of Mediterranean trade, etc. Another on 300 AD Rome, another on 1000 AD Baghdad, another on 1500 AD Florence, and finally one on the full internet-scale text production of the modern world.
Each model in this chain would be qualitatively smarter than the last. But it wouldn’t be smarter because you changed the architecture of the underlying technology (you didn’t). The reason would be that the civilization feeding the tech had changed. The 300 BC model would demonstrate logical inference that its Egyptian predecessor couldn’t approach. The 1500 AD model would handle probability and navigational calculation. And the 2025 model would exhibit the argumentative density, cross-domain reasoning, and multi-perspectival sophistication that characterize today’s frontier systems.

Figure 1. Civilizational substrates of intelligence
What the chain reveals is a dependency the AI industry has largely declined to examine. The underlying intelligence of a large language model isn’t a function of its architecture, its parameter count, or the volume of compute thrown at its training. It is not even about the training data. It is a function of the social complexity of the civilization whose language it digested.
Each epoch advanced the cognitive frontier through something far richer and more complex than the isolated genius of an individual guru or machine. It did so through new forms of collective problem-solving. Think new institutions (the Greek agora, the Roman lex, the medieval university, the scientific society, the modern corporation, and the social internet) that demanded and rewarded ever more sophisticated uses of language.
The cognitive anthropologist Edwin Hutchins studied how Navy navigation teams actually think. In his 1995 book Cognition in the Wild, he wrote something that reads today like an accidental prophecy. The physical symbol system, he observed, is “a model of the operation of the sociocultural system from which the human actor has been removed.”
That is, with eerie precision, a description of what a large language model (LLM) really is, stripped of all the unapproachable jargon and mathematical wizardry. An LLM like ChatGPT is a model of human social reasoning with the human wrangled out. And the question nobody in Silicon Valley is asking with sufficient urgency is: What happens to the model when the social reasoning that produced its training data begins to thin?
In 2024, Ilia Shumailov and colleagues published a paper in Nature with a straight-talking title: “AI models collapse when trained on recursively generated data.” They demonstrated, with alarming mathematical precision, that language models trained on text generated by other language models start to degenerate partly because the distribution of the output narrows over successive generations. Minority viewpoints, rare knowledge, unusual formulations, and edge-case perspectives gradually vanish. The model converges on a kind of statistical average—fluent, plausible, and hollow. The tails of the distribution disappear first.
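The tail-loss dynamic can be caricatured in a few lines. This is a deterministic cartoon of the mechanism, not Shumailov et al.'s actual setup: pretend each generation of synthetic data simply fails to reproduce outcomes below some probability floor, then renormalizes what remains.

```python
def next_generation(dist, floor=0.05):
    """Drop outcomes whose probability falls below `floor` -- the
    tail a finite synthetic sample tends to miss -- and renormalize.
    A cartoon of recursive-training loss, not the paper's model."""
    kept = {t: p for t, p in dist.items() if p >= floor}
    total = sum(kept.values())
    return {t: p / total for t, p in kept.items()}

# A long-tailed "viewpoint" distribution (made-up numbers): the rare
# formulations vanish first, and the head absorbs their mass.
corpus = {"consensus": 0.55, "mainstream": 0.30, "dissent": 0.11,
          "heresy": 0.03, "cassandra": 0.01}
gen1 = next_generation(corpus)
```

After a single generation the two rarest viewpoints are gone and the surviving mass is redistributed toward the majority view, which is the compression the essay describes: the statistical average grows more dominant precisely because the tails never get re-emitted.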
Consider what those tails represent. They are the traces of intellectual disagreement, of minority expertise, of Cassandra warnings, of institutional friction, and of the awkward and valuable fact that different people know different things and express them differently. They are, in other words, the signature of social complexity. Model collapse is social mind compression presented as a technical phenomenon.
Around the same time, the AI researcher Andrew Peterson analyzed what he called “knowledge collapse”: the harmful effect on public knowledge due to widespread reliance on AI-generated content. Even with a modest discount on AI-sourced information, public beliefs deviated 2.3 times further from ground truth. When people and organizations rely on AI summaries rather than engaging with primary sources, the diversity of available perspectives narrows.
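Peterson's mechanism can be toy-modelled. The sketch below is not his actual model (which involves discount rates and a distributional distance to ground truth); it only illustrates how summary-only access shrinks the perceived range of views. The opinion scores and the truncation rule are made up.

```python
def perceived_spread(values, central_only):
    """Range of views a reader comes away with. `central_only`
    mimics an AI summary that relays only the central portion of
    opinion, dropping roughly the top and bottom quartiles."""
    vals = sorted(values)
    if central_only:
        k = len(vals) // 4
        vals = vals[k:len(vals) - k]  # discard the extremes
    return max(vals) - min(vals)

# Made-up opinion scores on some contested question.
opinions = [-9, -3, -1, 0, 0, 1, 2, 3, 10]
full = perceived_spread(opinions, central_only=False)
summarized = perceived_spread(opinions, central_only=True)
```

The reader of the summary sees a debate a fraction as wide as the one that actually exists, and a population of such readers converges on beliefs clustered around the center: a crude version of the narrowing Peterson measures.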
Similarly, there is a variant of the Dunning–Kruger effect, I suggest, that is found in those conversing with service chatbots that spare them the social bruises of hard conversations. When people choose to “ask Grok” to settle messy debates on Twitter/X, they are spreading this syndrome of overconfidence in one’s understanding of complex issues. It is easier to inflate your knowledge and understanding when you don’t have to deal with the social-regulatory feedback of ego-bruising disagreement. Blind spots grow bigger when one is cocooned in a machine-harem of pampering bots. What emerges over time is subtler than the militant ignorance of pre-AI social media. It is a confident, fluent, and remarkably homogeneous form of shallow knowing.
Anthropic, the maker of Claude, another LLM platform, has research results showing that in only 8.7% of user interactions with its platform do users pause to double-check what the bots spew out. That number reinforces something bigger than mere cognitive offloading and delegation. It enables systemic overconfidence, which in turn diminishes curiosity, exploration, and knowledge-frontier defiance.

Figure 2. Cognitive offloading, overconfidence, underexploration & frontier AI regression
Meanwhile, a team at Epoch AI estimated that the total stock of quality-adjusted human-generated text available for training is roughly 300 trillion tokens, projected to be exhausted between 2026 and 2032. This is typically framed as a resource depletion problem as though we’re running out of data the way we might run out of water. But that framing misses the deeper point. The reservoir is not just being drained—the springs feeding it are starting to dry up.
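The depletion framing reduces to simple arithmetic. The 300-trillion-token stock is from the Epoch estimate quoted above; the starting annual consumption and growth rates below are illustrative assumptions of mine, not Epoch's figures.

```python
def exhaustion_year(stock, start_year, first_year_use, growth):
    """Year in which cumulative training-data consumption would
    exceed the available stock, given annual use growing by a
    constant factor. All inputs except `stock` are assumptions."""
    used, year, use = 0.0, start_year, first_year_use
    while used + use < stock:
        used += use
        use *= growth
        year += 1
    return year

# Stock from the Epoch estimate; consumption assumptions are mine.
fast = exhaustion_year(stock=300e12, start_year=2024,
                       first_year_use=15e12, growth=2.0)
slow = exhaustion_year(stock=300e12, start_year=2024,
                       first_year_use=15e12, growth=1.3)
```

Under these assumed consumption paths the stock runs out between the late 2020s and early 2030s, consistent with the 2026–2032 window quoted above. But as the essay argues, the arithmetic treats the stock as fixed; the deeper problem is that the replenishment rate is itself declining.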
In early 2025, researchers at Microsoft and Carnegie Mellon studied 319 knowledge workers across 936 real-world tasks that involved AI assistance. What they found was not what the productivity narrative had promised.
In 40% of AI-assisted tasks, participants reported exercising no critical thinking whatsoever. The higher their confidence in the AI’s output, the less cognitive effort they invested. The researchers didn’t use this phrase, but I will: they documented the automation of thought, where the frictions of exploration are missing and mental outputs emerge as if from an assembly line.
Technology is interesting, but human behavior is more so. We have observed for decades that people will offload cognitive work even when they’re not overwhelmed, and do so preemptively, the moment a reliable-seeming assistant is available. Lisanne Bainbridge identified this pattern in 1983, in a paper whose title has become something of a dark proverb among automation researchers: “Ironies of Automation.” In Bainbridge’s view, the more reliable the automation, the less practiced its human operators become. It is a profound paradox I have grappled with in other forms: the more complex automated systems become, the higher the skill level required to intervene when (not if) they malfunction.
Nicholas Carr is famous for railing against such effects of automation in the digital age. In the same line of inquiry, Sparrow, Liu, and Wegner showed experimentally that when people know information is available through Google, they invest less effort in memorizing it. Now imagine that effect amplified by a system that doesn’t merely retrieve information but generates analysis, drafts arguments, and produces plausible reasoning on demand. The result is a world in which individual productivity rises while the collective pace of human thought starts to fall. The horse-trooper gallops harder but the cavalry covers less ground.
We can sharpen the argument further.
If AI capability depends on the social complexity of human language production—and if AI deployment systematically reduces that complexity through cognitive offloading, homogenization of creative output, and the elimination of interaction-dense work—then the technology is gradually undermining the conditions for its own advancement. Its successes, rather than failures, create a spiral: a slow attenuation of the very substrate it feeds on, spelling doom.
This is the Social Edge Paradox, and the intellectual tradition it draws from is older and more interdisciplinary than most AI commentary acknowledges.
Michael Tomasello’s evolutionary research establishes that human cognition diverged from other primates by a process other than superior individual processing power. The real impetus came through the capacity for collaborative activity with shared goals and complementary roles. He argues that even private thought is “fundamentally dialogic and social” in structure—an internalization of interaction patterns. Autonomous neural capacity is far from enough to account for the abilities of human thought.
Robin Dunbar’s social brain hypothesis quantifies the link: neocortex ratios predict social group size across primates; language evolved as a mechanism for managing relationships at scales too large for grooming. Two-thirds of conversation is social, relational, reputational. Language is often mistaken as an information pipe, but it is really a social coordination technology.
My own position is that collective intent engineering, found in forms as familiar as simple brainstorming, accounts for most frontier cognitive expansion. The intelligent algorithms of today have not been built with this critical function in mind.
I agree with Lev Vygotsky’s characterization that thinking is emergent from words as opposed to words being a mere vessel for conveying thought. I also align with the arguments of Andy Clark and David Chalmers about how cognition extends constitutively into social and material environments. But my own claim is stronger: It is collective intent specification that creates the most formidable cognitive burden in doing anything worthwhile. Getting intelligent minds to sync around an issue and work towards a common cause has always been the hallmark of human mental effort, whether it is raising giant pyramids or landing on the moon. A complex vision must radiate into the hive-mind to generate an interconnected consciousness that takes us from the solitary genius of apples falling on scientific heads to finally defying earthly gravity en route to Mars.
A constitutional convention, for example, is not just a group of people exchanging opinions; it is a very definite process of thinking that no individual could replicate alone and which AI today can undermine if it breaks the hive-mind.
What all this implies for AI is straightforward. Every token in a training corpus is a fossil of social interaction—a trace of negotiation, argumentation, institutional meaning-making, or cultural transmission. The intelligence that AI systems exhibit was never individual to begin with. It was forged in the spaces between people.
And if those spaces are allowed to shrink due to over-dependence on human-machine interaction, we have trouble. If the interactions that generate rich language become rarer, shallower, or more homogeneous, then the intelligence that depends on them will slowly degrade. We will not hear any bangs, true, but we will notice a gentle, almost imperceptible narrowing over time. The machines to which we are fast entrusting the future of discovery will slow down when it matters most.
Dario Amodei, the CEO of Anthropic, published a long essay in October 2024 called “Machines of Loving Grace.” He imagined what a super-intelligent AI might accomplish across science, governance, and human welfare. His piece reinforces the thinking of Leopold Aschenbrenner, a former OpenAI researcher (as Amodei himself is). Leopold’s “Situational Awareness,” written a few months earlier, argues with great conviction that scaling compute and data would be sufficient to produce artificial general intelligence within years.
But these papers exhibit significant strategic blind spots despite their eloquence and the technical eminence of their authors. Amodei identifies limiting factors—safety, alignment, institutional readiness—but does not consider the possibility that the quality of future training data is itself socially determined and potentially degrading. Aschenbrenner’s entire framework assumes that scaling is the primary constraint, with zero discussion of the social origins of the data being scaled.
Sam Altman’s “Intelligence Age” essay comes closest to acknowledging the social aspect of AI when he writes that “Society itself is a form of advanced intelligence.” But it is unclear what his true commitment to the consequence of that claim is. I suspect mere rhetorical flourish. He does not even ask whether AI deployment might erode the social processes that constitute this intelligence.
The Social Edge Framework outlined here is a direct counterpoint to Amodei, Aschenbrenner, and Altman. It is a program of action to counter the human redundancy fantasy. It challenges the self-fulfilling doom-spirals created by the premature reallocation of material resources to a vision of AI. I speak of the philosophy that underestimates the sheer amount of human priming needed to support the Great Recode of legacy infrastructure before our current civilization can even benefit substantially from AI advances.
By “Great Recode,” I am paying homage to the simple but widely ignored fact that the overwhelming majority of tools and services that advanced AI models still depend on to produce useful outputs are not themselves AI-like, and most were built before the era of high-intensity computing that AI ushered in. In the unsexy but critical field of PDF parsing—one of the ways in which AI consumes large amounts of historical data to get smart—studies show that only a very small proportion of tools were created using techniques like deep learning that characterize the AI age. And in some important cases, the older tools remain indispensable. Vast investments are thus required to upgrade all or most of these tools—from PDF parsers to database schemas—to align with the pace of high-intensity computing driven by the power-thirst of AI. We are not yet at the point where AI can simply create its own dependencies.

Figure 3. A great amount of money is being spent on AI yet the vast surrounding infrastructure and human operators have not been primed for an AI-first world
Indeed, the so-called “legacy tech debt” supposedly hampering the faster adoption of AI has in many instances been revealed as a problem of mediation and translation. AI companies are learning that they need to hire people who deeply understand legacy systems to guide this Recoding effort. A whole new “digital archaeology” field is emerging where cutting-edge tools like ArgonSense are deployed to try to excavate the latent intelligence in legacy systems and code often after rushed modernization efforts have failed. In many cases, swashbuckling new-age AI adventurers have found that mainframe specialists of a bygone age remain critical, and multidisciplinary dialogues and contentions are essential to progress on the frontier. Hence the strange phenomenon of the COBOL hiring boom. New knowledge must keep feeding on old.
The Social Edge Framework says: yes, scaling matters, architecture matters, and compute matters. But none of these will continue to deliver if the social substrate—the complex, argumentative, institutionally diverse, perspectivally rich fabric of human interaction—is allowed to thin. And thinning is very possible.
Now for the part that will strike many as paradoxical.
The prevailing wisdom is that AI enables organizations to do more with less—automate routine work, thin the headcount, and redirect savings toward strategy. The Social Edge Framework suggests this calculus can be self-defeating in ways that compound slowly but surely.
Let’s look at the evidence. Brynjolfsson, Chandar, and Chen tracked employment outcomes for early-career workers aged 22-25 in AI-exposed fields. Since 2022, this cohort has experienced a 13% relative decline in employment. It seems that AI replaces codified knowledge (the primary asset of junior workers) while complementing tacit knowledge (the hard-won asset of experienced professionals). The problem is that tacit knowledge does not arise from thin air. It comes from doing the work: absorbing institutional norms, developing judgement through supervised practice, making costly mistakes at manageable scale. Entry-level positions may look like just a cost line. But they actually function as the on-ramp through which the next generation of domain experts is produced. In that sense they are part of the organization’s core strategic investments. Remove the on-ramp and you save money today. But you also starve the pipeline of domain-specific, socially embedded knowledge on which future AI systems will depend.
Dell’Acqua and colleagues at Harvard Business School found that management consultants using GPT-4 improved quality by 40% within the AI’s competence frontier. Outside that frontier—on problems requiring contextual judgement the AI could not supply—their performance dropped by 19%. The technology had made them overconfident precisely where confidence was least warranted—a pattern that should give pause to any board treating AI adoption as a straightforward substitution exercise.
The Social Edge prescription is that organizations that hire more people into AI-enriched, high-interaction, trans-mediary roles—where AI scaffolds learning rather than substituting for it—will derive greater long-term advantage than those that treat the technology as a headcount-reduction device. In a world where raw cognitive throughput has been commodified, value shifts to something considerably harder to replicate: the capacity to coordinate human intent with precision, speed, and genuine depth. That edge lies in trans-mediation and high human interaction.
The AI industry is telling a story about the future of work that goes roughly like this: automate what can be automated, augment what remains, and trust that the productivity gains will compound into a wealthier, more efficient world.
The Social Edge Framework tells a different story. It says: the intelligence we are automating was never ours alone. It was forged in conversation, argument, institutional friction, and collaborative struggle. It lives in the spaces between people, and it shows up in AI capabilities only because those spaces were rich enough to leave linguistic traces worth learning from.
Every time a company automates an entry-level role, it saves a salary and loses a learning curve, unless it compensates. Every time a knowledge worker delegates a draft to an AI without engaging critically, the statistical thinning of the organizational record advances by an imperceptible increment. Every time an organization mistakes polished output for strategic progress, it consumes cognitive surplus without generating new knowledge.
None of these individual acts is catastrophic. However, their compound effect may be.
The organizations that will thrive in the next decade are not those with the highest AI utilization rates. They are those that understand something the epoch-chaining thought experiment makes vivid: that AI’s capabilities are an inheritance from the complexity of human social life. And inheritances, if consumed without reinvestment, eventually run out. This becomes particularly critical as AI is heavily customized to particular organizational cultures.
Making the right strategic choices about AI is going to become a defining trait of leadership. Bloom et al.’s cross-country research has long established that management quality explains a substantial share of the productivity variance between teams, organizations, and even countries.
In the AI age, small differences in leadership quality can generate large differences in outcomes—a non-linear payoff I call convex leadership. The term is borrowed from options mathematics, where a convex payoff gains more from a favorable move than it loses from an equally sized unfavorable one. Convex leaders convert cognitive abundance into structural ambition and thus avoid turning their creative and discovery pipelines into stagnant pools of polished busywork. Conversely, in organizations led by what we might call concave leaders—cautious, procedurally anchored, optimizing for error-avoidance—AI tends to produce more noise than signal. Because leadership is such a major shaper of all our lives, it is in our interest to pay serious attention to its evolution in this new age.
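The options analogy can be made precise with a small sketch (the symbols below are illustrative, not from the source). The standard call-option payoff is convex, so by Jensen's inequality the average payoff across an equal-sized up-move and down-move exceeds the payoff at the starting point—volatility benefits the holder of a convex position:

```latex
% Call-option payoff: strike K, underlying S. Convex in S.
P(S) = \max(S - K,\, 0)

% Jensen's inequality for a convex P, with symmetric moves of size \delta:
\frac{P(S_0 + \delta) + P(S_0 - \delta)}{2} \;\ge\; P(S_0)
\qquad \text{for all } \delta > 0
```

On this reading, convex leadership is the organizational analogue: the same volatility of cognitive abundance raises expected outcomes for leaders positioned to capture the upside, while for concave postures—whose payoff flips the inequality—the same volatility lowers them.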
The Social Edge is more than a metaphor. It is the literal boundary between what AI can do well and what it will keep struggling with due to fundamental internal contradictions. Furthermore, the framework asks us all to pay attention to how the very investment thesis behind AI also contains the seeds of its own failure. And it reminds leaders that AI’s frontier today is set by the richness of the social world that produced the data it learned from.
Bright Simons is a researcher, activist, and writer whose work sits at the intersection of global value chains, technology strategy, and institutional governance & design. He is the founder of the mPedigree Network, and affiliated with IMANI and ODI Global, both think tanks.