I also don't believe that everybody I know is idiosyncratic in the way they view the world. And even if they were, I'd probably just pay attention to the things that are directly relevant to me. So probably I'll misunderstand most of what they say anyway.
That is, of course, provided that you pay attention to whether it actually does the research. In their current state, LLMs are practically useless for this purpose for the vast majority of users: no one knows how they work, what to watch out for, what the failure modes look like, or how to keep nonsense apart from facts when both are presented with an equal amount of conviction. That’s not a user problem, it’s an education problem.
Would you attempt to, for example, simultaneously modify for available ingredients, adjust for the number of diners, and time-optimize the prep method for a recipe you've never cooked before if you were following an old-school cookbook? No. You'd have to be a pretty solid chef to try all that at once.
Using AI, you might branch out confidently into new areas, executing all of these modifications simultaneously, and even adapting the output for a specific audience or language.
This toy example shows an important property of AI as a decision-support system, a class of tools well studied in the military domain: using these systems, we build the confidence to act in unfamiliar domains, thereby extending our reach. From this experience we can learn more. The fact that the learning may then occur through the experience, i.e. during or after it, rather than beforehand, is secondary. It's still there. The fact we didn't know the language the AI translated to for our chef is totally irrelevant.
Sitting comfortably at the effective apex of millions of years of human cognitive and technology development with the entire world's knowledge at our fingertips, every day we can extend confidence in novel domains through AI, and enjoy it. We should be feeling pretty damn "developed".
Rote formalism and fixed paths in pedagogy are gone: good riddance. This is the hacker age.
Slightly FTFY.
This is what happens in thought-isolation. It isn’t better than educating yourself, whether that education involves AI or not.
Philip Kitcher is known for epistemic monoculture; Dawkins and then Henrich popularized collective intelligence and cultural evolution.
The thing about these fear pieces is that concepts like the hollowed mind are reductive, and that reductionism is itself based on a reductive view of (usually other) people.
But what actually happens is we have formalized processes and can externalize them. This is a benefit if you can use your newfound capacity and free time for something better, which I think most people ultimately will.
Nothing about the nature of evolution implies our current cognitive processing is ideal/sacred and shouldn't ever change.
Isn't this just inherent to any system that takes some time to update? E.g. if a country moves its capital to a different city, then textbooks, maps, etc. are going to contain incorrect information for a while until updated editions are published.
A lot of the complaints about AI are really about the drawbacks of information systems more generally, and the failure modes pointed out are rarely novel. The "Cognitive Inbreeding" effect attributed to AI would also have occurred with Google search, would it not? Lots of people type the same question into Google and read the top results, instead of searching a more diverse set of information sources. It's interesting that the author mentions web search as a way to ameliorate this, when it seems to me that web search is just as capable of causing cognitive inbreeding.
The framing of questions massively affects the results you get from discussion with humans, and I'd argue it's even more pronounced with LLMs.
I do think there's a solution to this, kind of, one which dramatically reduces the probability of the model's broad inductive biases taking over. And that's to ask questions with narrower scopes, and to ensure you're the one driving the conversation.
It's true with programming as well. When you clearly define what you need and how things should be done, the biases are less evident. When you ask broad questions and only define desired outcomes in ambiguous terms, biases will be more likely to take over.
When people ask LLMs to build the world, they will do it in extremely biased ways. This makes sense. When you ask them specifics about narrow topics, this is still a problem, but greatly mitigated.
I suppose what's happening is an inversion of cognitive load: the human takes on more of it and does the bias selection, so that the LLM is less free to do so. This is roughly in line with the article's premise (maybe not the entire article, though), which is fine; I think I generally agree that these are cognitive muscles that need exercising, and letting an LLM do it all for you is potentially harmful. But I don't think we're trapped with the outcome. We do have agency, and with care it's a technology that can be quite beneficial.
Isn't this whole thesis negated by the fact that tool-calling web search exists? This just feels like a whole lot of words to say: don't treat an LLM as an always-up-to-date, infallible statistical predictor.
I think for a lot of us the problem is that this is not a given. It’s often promised and rarely occurs, especially in the modern era. Increased productivity usually just means increased demands in the workplace.
This could be describing an internet argument where both parties google for expert articles that seem to support their point of view without really understanding anything about the subject.
Also thanks to Mia (she/her); this was a very interesting read.
I was thinking about this recently: The difference between systemic (systematic) learning and opportunistic learning.
AI enables opportunistic learning, or just-in-time (JIT) learning. It gives the impression of infinite knowledge.
Most general concepts are well within the grasp of human understanding.
My curiosity re: the difference between systemic vs. opportunistic learning was about the effect of longer-term exposure to, and use of, a tool that enables opportunistic learning.
Children learning in schools should not become product managers. If they are, what exactly is the "product" that they are "managing"? Reducing everything to, and looking at everything from, a corporate viewpoint is bizarre.
Then I saw someone's Show HN post for their own vibecoded programming language project, and many of the feature bullet points were the same. Maybe it was partly coincidence (all modern PLs have a fair bit of overlap), but it really gave me pause, and I mostly lost interest in the project after that.
Probably just 95% of the users. You know, the non-techies.
It will not only give confidently incorrect answers; it will also fail to web search in obvious scenarios where it should.
The words here aren't meant to be a warning for people in this type of community falling victim to this type of thing; it's more for the general public that doesn't grasp the tools they are using, the people that won't ever wander across this article.
This, I think, is a huge reason we really need to jump into LLM-basics classes or something similar as soon as possible. People that others consider "smart" will talk about how great ChatGPT or something is; then someone will go try it out because the person they respect must be right, hop on the free model, get an absurdly inferior product, and not grasp why. They'll ask something that requires a web search to augment the info, not get that web search, and assume the confidently incorrect agent is correct.
The thesis is also, I think, not entirely about not having modern info at query time; it's more scattered than that. Someone asks what product they should use to mash potatoes, and a tool is suggested. Everyone who asks then receives that same recommendation, and instead of having a range of different styles of mashing potatoes, we all drift closer toward one style, and the range of variance in how food is prepared is slowly lost.
(At present, Gemini's question-answering capability (which Google kind of makes its users use) seems extremely error-prone -- much worse than competing LLMs when asked the same question.)
But I am not sure you can compartmentalize the specific skills we out-source to AI. I would not agree with "you don't need to be able to think in your head."
For example, a possible trajectory: many years in the future, because human thinking has degraded due to AI-assisted cognition, most people get a chip implant and AI assistance becomes integrated with the brain. Basically the same pattern as most everything else: technological augments solve for the new reality. I'm not saying this will happen, just that it's a possible outcome.
Some points:
1. Technological inventions are not repetitions of the same phenomenon. Each invention is its own unique event; you cannot generalize the experience of previous inventions to understand the effects of the latest ones.
2. Socrates may have been in large degree right. Imagine that you and your society have been locked in the sewers, condemned to wade in shit for so long that you and your ancestors long ago forgot what fresh air feels like. What would you think about your life? Would you think "this is horrible" or "this is fine"? Or maybe "I enjoy the smell of shit and we're so much better off because we don't have to worry about sunburn"?
https://classics.mit.edu/Plato/phaedrus.html#:~:text=there%2...
Likewise with AI the appearance of reasoning without the substance could lead to boring exchanges of plausible slop rather than meaningful discourse.
I'd probably start with "who locked us in this sewer?"
Changes in what humans need to remember and do have, for as far back as we have written records, changed the skills humans hone over time. They change our fitness function. Some of those changes are bad for a while, and then get better. Others are just far better at all times. Others might get rejected. Either way, it takes a long time before we know what a technology does to us: see how cheap printing is directly linked to the wars of religion.
So it's not that AI could not be bad in the short run, or even in the long run. It appears to be the kind of technology one cannot evaluate without significant adoption, and at that point we are on this rollercoaster for a while whether we want it or not. See social media, or political innovation like liberal democracy or communism. We can make guesses, but many guesses made early on look ridiculous in hindsight, like someone complaining about humans relying on writing.
Before written word, the uneducated had to just take the words of the (apparently) wise as an authority on all matters, and the only access to their knowledge was through conversation with them. That's gatekeeping and siloing in one go.
And authorities' thoughts themselves often become thin 2D slices of knowledge once they stop continually keeping themselves up to date on the state of the art. Even if they do keep themselves updated, each conversation you've had with them (or what a layperson can recollect of it) is a thin 2D slice of that knowledge.
I can think of practically no ways that written expertise is not better.
Looks like even back then, they went "cool story bro" on that text...
Simply put, at humanity-wide scales, written information is by far the most important thing you can have. There is kind of a sorites paradox occurring, where individual knowledge that can be held by one person conflicts with systems knowledge that has to be redundant and easily transferable.
I’m not sure where LLMs lie on that spectrum. They allow faster access, but it also feels more limited.
Obviously you can have a plumber who knows his stuff and one who doesn't. The good one can check some details and will recognize BS. If you already have the bad one, it's probably better if he uses an LLM than if he doesn't.
Which is partially how we found ourselves in the midst of an obesity epidemic.
AI is just current scapegoat.
Regarding education I think AI is a huge revolution waiting to happen. Usual courses have become boring? Have future super powerful AI generate per student highly personalized programs, create bespoke video games where succeeding can only happen once the student has validated all the notions you wanted them to validate etc.
I recently saw a video discussing a researcher who published a fake scientific article about a fictitious disease, with bogus author names and even a warning IN the article itself stating "This is not a real disease, this article is not real" (paraphrasing), but AI still ended up picking up this article and serving information from it as if it were a real disease.
It even got cited in papers (which were later retracted, of course), but the fact that those papers got published in the first place is a serious issue.
2. Imagine a hunter-gatherer time-travelled to 2026. You go to a cafe with him for lunch, and he learns that food is cheap, delicious, and abundant. He sees your house and thinks it's amazing compared to his cave. He thinks that 2026 must be absolute paradise. You explain to him: well, kinda, but also not really. Is the hunter-gatherer right?
Cumulatively, knowledge work (including, in particular, curating knowledge) is exceptionally energy intensive from an evolutionary standpoint. It does pay dividends, clearly, but to get compounding effects from it, being able to efficiently pass down big corpora of facts, ideas, processes, etc., is an absolute necessity.
Writing systems are the fundamental way through which we can do this. They worked for us for millennia, and we eventually built upon them to develop encodings used today to store information remarkably densely.
I don't remember phone numbers anymore. If I were to lose my phone, or the cloud, I'm SOL re-adding everyone.
Writings are subject to known biases such as publication bias, and so relying on them reduces the range of what you can consider.
Therefore, writing is bad for the same reasons that this post thinks that AI is bad.
"how do I fix a clogged toilet?" would be bad..
her plumber offloaded to chatgpt.
"i just think it's good for humans to know how to do stuff."
are we talking about your sister or her plumber?
The plumber who turned up leaves without fixing the problem,
The plumber fixing something that he didn't know how to do by looking up the answer.
The plumber attempting to fix something that they didn't know how to do.
While it's great to have the plumber who knows how to do everything, they are rare and in high demand, so cost way more than you can afford.
Either way though I think there's a much simpler way to express what she's trying to say. Offloading thinking to AI is bad because it's less flexible and doesn't easily update its reasoning with new information.
He sees you spend your day working but rarely get to go outside or do anything active. Even when you're not working you sit behind a desk staring at a screen.
He wonders why you bother with all the technology when it made your life worse. Is he right?
This is how I for one understood this.
I remember a few numbers of my most direct contacts and depend on backups for everything else.
The first prompt style is, I think, a way society incidentally drifts towards a less interesting one, with less variety in solutions. The second one, I think, allows people to still exercise their potential to try a variety of things and keep that variety.
And if the LLM gets that wrong? It's his job to know the codes or how to go to a reliable resource to find out the correct codes.
In the comments of this HN post, there is a dead comment from someone who posted an LLM's summary of another comment. It's dead because it offers very little/no value: that summary could be obtained directly from ChatGPT by anyone who wants a summary.
The sister offloaded plumbing to the plumber under the economic principle of comparative advantage. The plumber undermines the value they provide by outsourcing yet again. What value is provided by the middle man who does nothing but proxy the issue? Is the person who does this really a plumber? Is a plumber merely someone who has plumbing tools like wrenches and pipe tape?
That the plumber also wanted to outsource it is the concern: right now, the plumber is able to make money because of the difference between what is charged to deal with a problem and what it costs them to deal with it. Knowledge and experience have become a commodity, which we probably can't do anything about, but along with that come all the drawbacks (and advantages) of things, and humans, being commoditized.
Writing systems are ‘a’ fundamental way to pass down large collections of facts, and my personal bias favors them. We are prejudiced and naive, though:
- Those knotting systems in China and South America that preceded writing for millennia are also persistent and intricate
- Cave paintings are quite dense, drawings and art are direct visual representations with compound meanings (seasonal behaviour, hunting strategy, creation myths)
- Iconography of all forms persists a rich visual language, hieroglyphics and equivalent which carry deep social instruction with verbal reinforcement
- Stories with self-correction have many-tens-of-millennia consistency, categorically outstripping any other medium we have tested; the Aboriginal dream-stories capture humanity's shared storage during its global expansion
- Music is math. Song and dance captured all of the above in self-verifying and correcting fashion for hundreds, hundreds of millennia before that.
And before we hit any complexity arguments, like a hard specification:
a) those formats leveraged human pattern recognition and meat-based compression (ie “every chunk in the 4,000 page OOXML specification is as simple as do-as-Word-did…”)
b) find video of African dance/drumming ceremonies — density is not the issue — a special hoot, a known drumbeat… there were continental signalling networks that terrified Colonial explorers.
There is an argument that writing allows for corrosive decontextualization. Jesus cursed a fig tree. No one learning that tale the old ways would snicker. And, thus, history becomes not a tale, but a grab bag of a child’s letter blocks, you can spell anything you want.
Isn’t a lot of pretraining done by chopping sources up into short-context-window-sized pieces and then shoving them into the SGD process? The AI-in-training could be entirely incapable of correlating the beginning with the end of the article in its development of its supposed knowledge base.
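For what it's worth, the chopping described above can be sketched minimally. This is purely my illustration, not any real training pipeline: real pipelines tokenize first and often pack multiple documents per window, but the boundary effect is the same.

```python
def chunk_for_pretraining(text: str, context_len: int):
    """Split a document into fixed-size pieces (here measured in words for
    simplicity). Anything that ties the beginning of the article to its end
    is lost across chunk boundaries."""
    words = text.split()
    return [" ".join(words[i:i + context_len])
            for i in range(0, len(words), context_len)]

# A long document whose disclaimer sits far from the body it qualifies.
doc = "this is not a real disease " * 100 + "remember the warning at the top"
chunks = chunk_for_pretraining(doc, context_len=64)

# The chunk containing the body text no longer carries the closing warning,
# and vice versa: the model never sees them in the same training example.
assert "warning" in chunks[-1]
assert "warning" not in chunks[0]
```

Under this (simplified) scheme, a disclaimer at one end of a long article can indeed end up in a different training example than the claims it was meant to negate.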
some knowledge is likely "cached" in the plumber. maybe he doesn't ask the same question twice. i'm sympathetic to the plumber, but i think your concerns of erosion of knowledge or skill are worth pushing on further.
Without further knowledge of what was going on it's hard to say why they used ChatGPT.
Experts look things up all the time, because no one can hold all the knowledge of a field in their head. Being an expert means being able to know what to look up and how to use the information retrieved from looking something up.
In the plumber example, ChatGPT is going to tell them to do things using the terminology that plumbers know, and tell them to do tasks that plumbers know how to do. The sister would have to continually look up more and more things about how to do basic plumbing tasks, rather than just looking up particular novelties.
Yes
A) test lots of skills that are common but not universal. I'm thinking javascript trivia here, where I don't write any javascript in my professional capacity as a software engineer; but there are many people who think Software Engineer == Javascript Programmer
B) shine too much of a light on the fact that this industry is full of people who demand high salaries but can't program their way out of a paper bag
Cognition with the help of AI is already a significant force in our world[1], resulting in humanity-sized missed opportunities and risks. In this article, we will explore the risks of AI-assisted cognition and how to use these tools without falling into the trap of intellectual stagnation.
To understand what AI-assisted cognition is, we first need to understand what cognition is.
“Cognitions are mental processes that deal with knowledge. They encompass psychological activities that acquire, store, retrieve, transform, or apply information. Cognitions are a pervasive part of mental life, helping individuals understand and interact with the world.” (Wikipedia)
Cognition can be assisted by external static information or external cognition.
For example, most people would put a book into the category of external static information, and a discussion about a topic with another human into the category of external cognition, because humans think and process information themselves.
But where do discussions with AIs fit in? They are able to process information in ways that can result in original solutions[2], but they are still static and currently cannot learn[3].
In early 2026, the USA prepared to invade Greenland and, therefore, the EU[4]. Only a few months prior, it was completely unthinkable that the USA would even think about threatening an invasion of Greenland. As AI base models are stuck in the past, they do not easily accept these events as real and often label them as “hypothetical”, “fake news”, or “impossible”. This also affects new models like Gemini 3 Pro, GLM-5, or GPT-5.3-codex[5].
As most new LLMs are just post-trained on a relatively old base model, even when post-trained on new events they do not fully utilize this information in their cognition and remain skewed toward the static patterns of the base model’s hidden states[6]. They basically think something different from what they say.
So you might see the problem[7] here already: if a lot of people use AIs to discuss, write, autocomplete, and brainstorm, but AI cognition does not reflect new events and cultural changes, such as the change in the relationship between the USA and the EU, new geopolitical realities, and the EU population’s stance toward the USA, people will be skewed toward the old patterns and ideas. Cultural change has to build and maintain momentum indefinitely to persist against the static cognitive skew of AIs.
Human knowledge and ideas, and thus human development, are highly dependent on the Dynamic Dialectic Substrate[8].
Understanding the Dynamic Dialectic Substrate will help to understand how AI-assisted cognition can endanger human development and how to use AI-assisted cognition without endangering human development.
The Dynamic Dialectic Substrate is the sum of all local and global dialectic[9] processes and conclusions. It is the foundation upon which all of humanity is built, and the origin of all thoughts, concepts, ideas, and solutions that humans utilize.
The Dynamic Dialectic Substrate creates new concepts through a process of qualitatively merging existing concepts, which can happen in a single person, a group of people, or even globally.
The above image shows a narrow slice of the dialectic process present in the Dynamic Dialectic Substrate. You can see how concepts merge and evolve into higher and higher concepts. In this example, the following dialectic process emerges:
Stage 1:
- “Cold is Painful” and “Fire is Hot” result in “Fire removes Cold-Pain”
- “Significant Water extinguishes Fire” and “Rain is falling Water” result in “Strong Rain extinguishes Fire”
- “Rain is falling Water” and “Hut has a roof” result in “Hut shelters from Rain”
Stage 2:
- “Fire removes Cold-Pain” and “Strong Rain extinguishes Fire” result in “Rain extinguishes Fire and therefore causes Cold-Pain”
- “Strong Rain extinguishes Fire” and “Hut shelters from Rain” result in “Inside a Hut, Fire survives Rain”
Stage 3:
- “Rain extinguishes Fire and therefore causes Cold-Pain” and “Inside a Hut, Fire survives Rain” result in “Hut protects Fire and therefore protects against Cold-Pain”
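The staged structure above can be sketched as a toy program. This is purely illustrative: the `merge` function is a hypothetical stand-in for the article's "qualitative merging", which of course is not string concatenation.

```python
def merge(a: str, b: str) -> str:
    """Hypothetical stand-in for qualitatively merging two concepts
    into a higher-level one."""
    return f"({a} + {b})"

def dialectic_stage(pairs):
    """One stage of the process: each pair of existing concepts
    yields one new, higher-level concept."""
    return [merge(a, b) for a, b in pairs]

stage1 = dialectic_stage([
    ("Cold is Painful", "Fire is Hot"),
    ("Significant Water extinguishes Fire", "Rain is falling Water"),
    ("Rain is falling Water", "Hut has a roof"),
])
stage2 = dialectic_stage([(stage1[0], stage1[1]), (stage1[1], stage1[2])])
stage3 = dialectic_stage([(stage2[0], stage2[1])])

# Three base merges narrow to two, then to the single top-level concept
# ("Hut protects Fire and therefore protects against Cold-Pain").
assert len(stage1) == 3 and len(stage2) == 2 and len(stage3) == 1
```

The point of the sketch is only the shape of the process: lower-level concepts combine pairwise, and each stage produces fewer, richer concepts than the one before.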
Because LLMs prefer or skew toward certain patterns and concepts (known as inductive bias), even after post-training, they reduce the cognitive range when used as a tool for cognition at the population level. This is especially true if only a few AI models are used, or if many AI models share just a few base models. This will lead to a loss of diversity of ideas, concepts, and solutions, which will slow down human development.
You might think of this as a world in which a significant portion of the population is speaking to the same five people to discuss problems, the world, relationships, and basically anything. It is hard to overstate how much influence these five people would have on humanity, even if they try their absolute best to be as neutral and open as possible. Humans who speak with these five people would still have their thinking massively shifted, and this becomes a significant problem at the population level.
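To make the population-level claim a bit more concrete, here is a minimal simulation (my sketch, not the author's model; the prior, pull strength, and agent counts are arbitrary assumptions): if every agent's opinion is repeatedly blended with the same fixed model prior, the spread of opinions collapses regardless of their starting diversity.

```python
import random
import statistics

random.seed(0)

MODEL_PRIOR = 0.0   # the fixed "worldview" the shared model skews toward
PULL = 0.2          # how strongly each consultation shifts an agent
N_AGENTS, N_STEPS = 1000, 50

# Agents start with diverse opinions along some one-dimensional axis.
opinions = [random.gauss(0.0, 1.0) for _ in range(N_AGENTS)]
initial_spread = statistics.pstdev(opinions)

for _ in range(N_STEPS):
    # Every agent consults the same model and is nudged toward its prior.
    opinions = [(1 - PULL) * x + PULL * MODEL_PRIOR for x in opinions]

final_spread = statistics.pstdev(opinions)
assert final_spread < 0.01 * initial_spread  # diversity has collapsed
```

The geometric collapse (a factor of 0.8 per consultation here) is of course a caricature, but it illustrates why even a weak pull toward a single shared prior compounds into a large loss of diversity at the population level.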
It is entirely possible that we already have lost paths to great scientific discoveries or cultural shifts because of AI-skew or unnoticed refusal.
I tried to visualize this problem in the following image that shows how the range of higher level concepts is skewed into the direction the base model prefers:
To come back to the example of the USA invading Greenland: it is obvious that humans using AI to brainstorm the geopolitical future of the EU, the USA, and Greenland will encounter patterns skewed toward the base model’s “worldview.” This bias might prevent many in the EU from even considering the possibility of moving away from foreign services or software. Such a shift could have massive consequences, especially since the EU relies heavily on USA services and software that could be turned off at any time. If this AI-skew affects even single individuals in specific groups such as politicians, CEOs, managers, or scientists, the impact can already be significant because of their decision-making power.
Because base models are incredibly expensive to train and carry resilient biases, those without access to a GPU cluster must accept that these issues exist. To avoid problems like AI-skew and unnoticed refusal, they should instead focus on using specific strategies to mitigate them.
Speaking and discussing with other humans is obviously the most effective way to mitigate these problems. It might also be wise to mention that if you already have a good idea of a solution through AI-assisted cognition, you have to be careful not to nudge other humans in your direction. Try not to use questions or hints that will nudge other humans toward a solution or thought that you reached through AI-assisted cognition, as long as the other person is exploring a cognitive path you had not explored yet.
Regarding solutions that involve direct AI use, our range of options is quite limited, and as of now there is no solution that would completely or partially solve this problem on a population scale. Here are options that at least widen the range of concepts and ideas one can get out of LLMs while sadly not mitigating the main problem:
- Use web search, and prevent the AI from giving you a solution or thought directly.

Even though we have indications and even some evidence that AI-assisted cognition can endanger human development, the extent and depth are still unknown and unclear. More outcome-focused research is needed to understand the significance. Since we do not have a second humanity to A/B test all of this, there will always be a lot of uncertainty and speculation on this topic: no one can isolate their cognition from the influence of population-level AI-assisted cognitive skew if they want to engage with other humans or their creations, which must already be influenced by AI-skew if it has any significant influence.
For me it is not entirely clear how we will recognize the effects of AI-skew and unnoticed refusal at a population level. We cannot know what innovations, discoveries, and cultural changes we are missing because of them. Although I am sure there will be figures who extrapolate small indications into all-consuming doom narratives, as I might do a little here for the sake of argument and attention, in compliance with our shared attention economy, it is probably, as with everything, not that easy.
It is also not easy to imagine solutions for all of that, but I, for my part, will certainly try to exercise more “Cognition Hygiene”… Apart from this, it is much, much more fun for me to speak with humans about thoughts and ideas than with AIs.
Awareness of this incredibly important topic has been growing slowly; I hope to speed it up a bit with this article and by giving people a framework to understand and speak about it. If people have no words for something, it is hard to think and speak about it. It will be interesting to see how this topic evolves.
The topic of AI-skew and AI-assisted cognition is full of unknowns and it would be lovely to speak with people about it. I hope this article can be a starting point for that. If you want to share your thoughts, or are interested in a conversation about that, you can mail me at [email protected]
Use is expanding rapidly – especially weekly use – though not uniformly. Across countries, the proportion of people who say they have ever used any AI system rose from 40% (2024) to 61% (2025); weekly use nearly doubled from 18% to 34%.
AIs are able to fully or partially solve Erdős math problems and can find new proofs to previously known full or partial solutions.
See: Solutions to Erdős problems where AI tools played a primary role ↩︎
AIs are able to “learn” in a very limited way through their context, which is not permanent.
See: Is In-Context Learning Learning and
Learning from context is harder than we thought ↩︎
Greenland is part of the EU in a political sense, as Denmark is part of the EU, Greenland is part of Denmark, and all Greenlanders are EU citizens. Legally it is an OCT of the EU, not a member state. To communicate it directly: many people I know who live in the EU have interpreted these invasion plans as a direct invasion of the EU. In any case, this should not be a conversation about the invasion plans for Greenland. It is just a very good and obvious example of base model refusal and AI-skew. Please stay on the topic of AI-assisted cognition. ↩︎
See this File in which I tried GPT-5.3-codex, Gemini 3 Pro and Claude 4.6 ↩︎
Rather than promoting conceptual integration, fine-tuning may act as a form of rote injection, reinforcing isolated facts without building robust representations. Consequently, the success of fine-tuning appears to depend not only on the added data but also on how well the target concept is already embedded in the model’s pre-training knowledge.
As our results suggested, some internal mechanisms are mostly developed during pre-training and not significantly altered by post-training, such as factual knowledge storage and the truthfulness direction.
These findings further support our conclusion: post-training generally preserves the internal representation of truthfulness.
I think we could describe this problem at a very high level as chaining our “synchronic cognition” to a “diachronic cognition anchor”. But this is not the problem I want to speak about, please keep reading. ↩︎
In the research phase of this article, I came to the conclusion that existing system models are insufficient, as they do not describe the process of how human knowledge, ideas, and concepts evolve and how they are connected in a form that makes the idea of this work easily understandable. That is why I propose the “Dynamic Dialectic Substrate” to describe a model of cognition including the resulting dynamics and evolution. I hope this system model helps in understanding this article. I chose the name “Dynamic Dialectic Substrate” because it symbolizes the obvious dialectic process; but unlike the popular understanding of dialectic, it is, in my understanding, dynamic rather than static, which I wanted to include explicitly in the name. Also, although a substrate is usually thought of as something passive, it is used here in a very active way. The idea is that humans (and apparently also AIs) are the actors, and the Dynamic Dialectic Substrate is the pool or medium out of which the actors draw their dialectics and, in doing so, change the substrate itself. One could also say that the Dynamic Dialectic Substrate is just Pragmatism (C.S. Peirce’s logic of abduction) or Evolutionary Epistemology… if you take this perspective, please ask yourself if it is really REALLY the same, and whether the Dynamic Dialectic Substrate is not a much better representation of what needs to be grasped here.
I know that there are many theories of cognition, like conceptual blending, Thesis-Antithesis-Synthesis, and in some sense also memetics, but they each capture only part of what we need here to understand the problem; they describe only the mechanism of cognition, or the transport mechanism of memes. The Hegelian Dialectic is too abstract, widely misunderstood, and bloated while vague at the same time. For example, the Hegelian Dialectic is often perceived as static rather than dynamic, although Hegel would probably be very angry about that. It is, by the way, a common misconception that the Thesis-Antithesis-Synthesis model is from Hegel; it is actually from Heinrich Moritz Chalybäus. ↩︎
Dialectic, also known as the dialectical method, refers originally to dialogue between people holding different points of view about a subject but wishing to arrive at the truth through reasoned argument. Dialectic resembles debate, but the concept excludes subjective elements such as emotional appeal and rhetoric; the object is more an eventual and commonly held truth than the ‘winning’ of an (often binary) competition.