The point that always comes to mind is: correlation does not imply causation. I guess the main contribution would be a better mapping of the areas of the brain associated with speech production, but jumping from "these two things correlate" to "these two things are essentially the same" seems to me a bit of a stretch.
Ok. I buy it. The sequencing necessary to translate thought into words necessarily imposes a serialisation, which in consequence marshals activity into a sequence, which in turn matches the observed statistically derived LLM sequences.
I tend to say the same things. I often say "this AGI is bullshit", and the occurrence of "bullshit" after the acronym AGI is high. I would be totally unsurprised if the linear sequence of neuronal signalling to both think and emote as speech, or even "charades" physical movements, to say "AGI is bullshit" in some way mimics that of an LLM, or vice versa.
Responses here, however, seem not commensurate with the evidence presented. Two of the papers[0][1] that provide the sources for the illustration in the blog post describe research conducted on a very small group of subjects. They measured neural activity while participants listened to a 30-minute podcast (5,000 words) and tried to guess the next words. All the talk about "brain embeddings" is derived from interpreting neuronal activity and sensor data geometrically. It is all very contrived.
Very interesting stuff from a neuroscience, linguistics and machine learning perspective. But I will quote from the conclusion of one of the papers[1]: "Unlike humans, DLMs (deep language models) cannot think, understand or generate new meaningful ideas by integrating prior knowledge. They simply echo the statistics of their input"
[0] Alignment of brain embeddings and artificial contextual embeddings in natural language points to common geometric patterns (https://www.nature.com/articles/s41467-024-46631-y)
[1] Shared computational principles for language processing in humans and deep language models (https://www.nature.com/articles/s41593-022-01026-4)
Is there some theorem stating something like random few-hot vectors can always be combined linearly to match any signal with a low p-value?
I think I've encountered this in my own experiments, and I suspect it might be happening in this LLM x neuroscience trend of matching LLM internals to brain signals.
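A minimal sketch of what I mean (numpy/scikit-learn assumed; all sizes are made up): with more random dimensions than samples, an ordinary linear fit matches any signal exactly in-sample, and only held-out evaluation reveals that nothing was learned.

    # Random "embeddings" vs. a random "neural signal": the in-sample fit looks
    # perfect, the cross-validated fit is at or below zero. All sizes arbitrary.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_words, n_dims = 300, 1024                   # a short stimulus vs. a big embedding
    X = rng.standard_normal((n_words, n_dims))    # "embeddings" that are pure noise
    y = rng.standard_normal(n_words)              # "neural signal" that is pure noise

    ols = LinearRegression().fit(X, y)
    in_sample_r = np.corrcoef(ols.predict(X), y)[0, 1]
    held_out_r2 = cross_val_score(LinearRegression(), X, y, cv=5, scoring="r2").mean()

    print(f"in-sample correlation: {in_sample_r:.2f}")  # ~1.00: more dims than samples
    print(f"held-out R^2:          {held_out_r2:.2f}")  # <= 0: nothing was learned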
* A linear transformation of a speech encoder's embeddings closely aligns them with patterns of neural activity in the brain's speech areas in response to the same speech sample.
* A linear transformation of a language decoder's embeddings closely aligns them with patterns of neural activity in the brain's language areas in response to the same language sample.
They mention a profound difference in the opening paragraph, "Large language models do not depend on symbolic parts of speech or syntactic rules". Human language models very obviously and evidently do. On that basis alone, it can't be valid to just assume that a human "embedding" is equivalent to an LLM "embedding", for input or output.
We have a closed system that we designed to operate in a way that resembles our limited understanding of how a portion of the brain works, based on how we would model that part of the brain if it had to traverse an n-dimensional array. We have loosely observed it working in a way that could roughly be called similar to our limited understanding of how that portion of the brain works, given a limitation that we know does not hold for the human brain, and with fairly low confidence.
Even if you put an extreme level of faith into those very subpar conclusions and take them to be rigid... That does not make it actually similar to the human brain, or any kind of brain at all.
[Citation needed]. Actually, the paper does give a citation (G. F. Marcus, The Algebraic Mind), which their citation list dates to 2019 (i.e. before GPT-3) but which actually seems to be from the early 2000s.
In effect: any kind of llm activation could be correlated to brain signals even though it's just a sophisticated p mapping and would not correspond to anything useful.
IIRC, Jean-Rémi King's team at Meta AI showed that even randomly initialized LLMs could be fitted.
Going a bit further, I'll speculate that the actions made by a human brain are simply a function of the "input" from our ~5 senses combined with our memory (obviously there are complications such as spinal reflexes, but I don't think those affect my main point). Neural nets are universal function approximators, so can't a sufficiently large neural net approximate a full human brain? In that case, is there any merit to saying that a human "understands" something in a way that a neural net doesn't? There's obviously a huge gap between the two right now, but I don't see any fundamental difference besides "consciousness" which is not well defined to begin with.
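For reference, the classical universal approximation theorem (Cybenko/Hornik, one-hidden-layer form) is a narrower statement than it may sound: it guarantees uniform approximation of continuous functions on a compact set, and says nothing about how to find the weights, how large the network must be, or about stateful, recurrent systems. Roughly:

    % Universal approximation (one-hidden-layer form): for any continuous
    % f : K -> R on a compact K \subset \mathbb{R}^n, any non-polynomial
    % activation \sigma, and any \varepsilon > 0, there exist N, weights w_i,
    % vectors a_i, and biases b_i such that
    \sup_{x \in K} \Bigl| f(x) - \sum_{i=1}^{N} w_i \, \sigma\!\left(a_i^{\top} x + b_i\right) \Bigr| < \varepsilon .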
I believe the same, and also I'm willing to accept that the human brain can intentionally operate in a stochastic parrot mode.
Some people have the ability to fluently speak non-stop, completely impromptu. I wonder if it's similar to an LLM pipeline, where there's a constant stream of thoughts being generated based on very recent context, which are then passed through various output filters.
So many reasons: absorb information faster; improve spatial visualization; motivation and intrinsic-motivation hacking; simulations; etc.
Give me the code to my brain and let me edit it, with version control please :D
What is your basis for this? Do you have any evidence or expertise in neuroscience to be able to make this claim?
> Neural nets are universal function approximators, so can't a sufficiently large neural net approximate a full human brain?
We do not understand the brain well enough to make this claim.
> but I don't see any fundamental difference besides "consciousness" which is not well defined to begin with.
Yeah besides the gaping hole in our current understanding of neuroscience, you have some good points I guess.
More evidence against "stochastic parrots"
- zero shot translation, where LLMs can translate between unseen pairs of languages
- repeated sampling of responses from the same prompt - which shows diversity of expression with convergence of semantics
- reasoning models solving problems
But my main critique is that they are better seen as pianos, not parrots. Pianos don't make music, but we do. And we play the LLMs on the keyboard like regular pianos.
I think any reasonable scientist would a-priori react the same way to these claims as claims that neural networks alone can possibly crack human intuition: “that sounds like sci-fi speculation at best”. But that’s the crazy world we live in…
Nootropics may help with stimulation, as well as memory and cognition in general. There is a whole community dedicated to it with effective stacks.
If there were no such structure, then their methods based on aligning neural embeddings with brain "embeddings" (really just vectors of electrode values or voxel activations) would not work.
> They mention a profound difference in the opening paragraph, "Large language models do not depend on symbolic parts of speech or syntactic rules". Human language models very obviously and evidently do. On that basis alone, it can't be valid to just assume that a human "embedding" is equivalent to an LLM "embedding", for input or output.
This feels like "it doesn't work the way I thought it would, so it must be wrong."
I think actually their point here is mistaken for another reason: there's good reason to think that LLMs do end up implicitly representing abstract parts of speech and syntactic rules in their embedding spaces.
Make it simple. Stare at a clock with a big second hand. Take one breath every 15 seconds. Then, after a minute or so, push it out to 20 seconds, then one every 30 seconds.
For the 30, my pattern tends to stabilize on inhale for 5-7, hold for 5-7, and then a slow exhale. I find that after the first exhale, if I give a little push I can get more air out of my lungs.
Do this once a day, in a 7-10 minute session, for a week, and see if things aren't a little different.
I see this as: maybe it's not a statistical parrot, but it's still only some kind of parrot. Maybe a sleep-deprived one.
Honestly, do they? To me, they clearly don't. Grammar is not how language works; it's a useful fiction. Language, even in humans, seems to be a very statistical process.
(My utterly uninformed knee-jerk reaction here, but even if I was a true believer I don't think I'd reach for "compelling".)
I philosophically reject the notion that consciousness is an important factor here. The question of whether or not you have a consciousness doesn't affect what I take away from this conversation, and similarly the question of whether an AI has a consciousness doesn't affect what I take away from my actions with it. If the (non-)existence of others' consciousnesses doesn't materially affect my life—and we assume that it's a fundamentally unanswerable question—why should I care other than curiosity?
For example, in difficult perceptual tasks ("can you taste which of these three biscuits is different" [one biscuit is made with slightly less sugar]), a correlation of 0.3 is commonplace and considered an appropriate amount of annotator agreement to make decisions.
This is something that Chomsky got very wrong, and the statistical/ML crowd got very right.
But still, grammars are a very useful model.
If you add random rolls, you get a Gaussian, thanks to the central limit theorem.
If you multiply them, you get a lognormal distribution, which approximates a power law up to a cutoff.
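A quick numpy check of both claims (just a sketch; the dice and sample sizes are arbitrary):

    # Sums of i.i.d. variables tend toward a Gaussian (central limit theorem);
    # products tend toward a lognormal, since log(product) = sum of logs.
    # A lognormal's tail can look like a power law over a limited range.
    import numpy as np

    rng = np.random.default_rng(0)
    rolls = rng.integers(1, 7, size=(100_000, 30)).astype(float)  # 100k runs of 30 dice

    sums = rolls.sum(axis=1)       # ~ Gaussian: mean ~105, std ~9.4
    products = rolls.prod(axis=1)  # ~ lognormal: log(products) is ~ Gaussian

    print(sums.mean(), sums.std())
    print(np.log(products).mean(), np.log(products).std())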
That doesn't remotely follow from the UAT and is also almost certainly false.
Not here though, where you are trying to prove a (near) equivalence.
Statistics are just another way to record a grammar, all the way down to the detail of how one talks about bicycles, or the Dirty War in Argentina.
If a grammar is defined as a book that enumerates the rules of a language, then of course language doesn't require following a grammar. If a grammar is defined as a set of rules for communicating reasonably well with another person who knows those same rules, then language follows grammars.
In my view, this phrase is only unambiguous to those who have a feel for the preposition conventions, and all the heavy lifting is done here by “mit” (and “durch” in the opposite case, if one wants to make it clear). Articles are irrelevant and are dictated by the verb and the preposition, whose requirements are sort of arbitrary (sehen + Akk., mit + Dat.) and fixed. There’s no article-controlled variation that could change the meaning; to my knowledge it would simply be incorrect.
I’m also quite rusty on Deutsch, aber habe es nicht völlig vergessen ("but I haven't completely forgotten it"), it seems.
The first 5 minutes of this video do a good job of explaining what I'm getting at - https://www.youtube.com/watch?v=YNJDH0eogAw
My German is pretty rusty, why exactly is it unambiguous?
I don't see how changing the noun would make a difference. "Ich sehe" followed by any of these: "den Mann mit dem Fernglas", "die Frau mit dem Fernglas", "das Mädchen mit dem Fernglas" sounds equally ambiguous to me.
Language processing is thinking as far as the brain is concerned, and there's no evidence that these are two cleanly separated processes, whether you 'think' in words or not.
But it's the other way around! Grammars follow languages. Or, more precisely, grammars are (very lossy) language models.
They describe typical expectations of an average language speaker. Grammars try to provide a generalized system describing an average case.
I prefer to think of languages as a set of typical idioms used by most language users. A given grammar is an attempt to catch similarities between idioms within the set and turn 'em into a formal description.
A grammar might help with studying a language, and speed up the process of internalizing idioms, but the final learning stage is a set of things students use in certain situations aka idioms. And that's it.
> Statistics are just another way to record a grammar
I almost agree.
But it should be "record a language". These are two approaches to the problem of modeling human languages.
Grammars are an OK model. Statistical models are less useful to us humans, but given the right amount of compute they perform much better (see LLMs).
You would also think emphasizing grammar's usefulness would make it plain that I do not think it is a waste of time.
Grammars, the way I understand them, are a family of human language models. Typically discrete in nature. The approach was born out of Chomsky's research culminating in the Universal Grammar idea.
Language is a lot more than parsing syntax; whatever your thoughts are on the matter, even LLMs are clearly doing more than that. Are there any experiments where subjects had severe cognitive deficiencies and language in its full breadth (or maybe I should say communication?) came out unscathed?
The Chris experiments don't seem to go into much detail on that front.
When a group of birds flies, each bird discovers/knows that flying just a little behind another reduces the number of flaps it needs. When you have nearly every bird doing this, the flock forms an interesting shape.
'Birds fly in a V shape' is essentially what grammar is here - a useful fiction of the underlying reality. There is structure. There is meaning but there is no rule the birds are following to get there. No invisible V shape in the sky constraining bird flight.
Sure, LLMs are also lossy but also much more scalable.
I've spent quite a lot of time with 90s/2000s papers on the topic, and I don't remember any model useful in generating human language better than "stochastic parrots" do.
> That doesn't contradict the argument that “thinking” and “language processing” are not two sequential or clearly separated modes in the brain but deeply intertwined.
It's not an argument; it's an assertion that is, in fact, contradicted by the experimental evidence I described (Moro and "Chris"). Of course they are "deeply intertwined", but because of the evidence it's probably an interface between two distinct systems rather than one general system doing two tasks.
Second, I didn't say that, in language, structure implies deterministic rules; I said that there is a deterministic rule that involves the structure of a sentence. Specifically, sentences are interpreted according to their parse tree, not the linear order of words.
As for the birds analogy, the "rules" the birds follow actually do explain the V-shape that the flock forms. You make an observation ("V-shaped flock"), ask the question "why a V-shape and not some other shape?", and try to find an explanation (the relative bird positions make it easier to fly [because of XYZ]). In the case of language you observe that there is structure dependence, you ask why it's that way and not another way (like linear order), and you try to come up with an explanation. You are trying to suggest that the observation that language has structure dependence is like seeing an image of an object in a cloud formation: an imagined mental projection that doesn't have any meaningful underlying explanation. You could make the same argument for pretty much anything (e.g. the double-slit experiment is just projecting mental patterns onto random behavior) and I don't think it's a serious argument in this case either.
The fact that statistical models are better predictors than the-"true"-characterization-that-we-haven't-figured-out-yet is completely irrelevant, just as it would be irrelevant if your deep-learning net was a better predictor of the weather: it wouldn't imply that the weather doesn't follow rules in physics, regardless of whether we knew what those rules were.
There's no contradiction because i never argued/asserted the brain didn't have parts tuned for language, which is really all this experiment demonstrates.
There is plenty of evidence to suggest this:
https://pubmed.ncbi.nlm.nih.gov/27135040/
https://pubmed.ncbi.nlm.nih.gov/25644408/
https://www.degruyter.com/document/doi/10.1515/9783110346916...
And research on syntactic surprisal—where more predictable syntactic structures are processed faster—shows a strong correlation between the probability of a syntactic continuation and reading times.
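For illustration, the per-word surprisal used in that literature can be read off any pretrained causal LM; a rough sketch below (the transformers library is assumed and GPT-2 is chosen arbitrarily), whose output would then be correlated with per-word reading times:

    # Rough sketch: per-word surprisal (-log p of each token given its left
    # context) from a pretrained causal LM. In reading-time studies these
    # values are then correlated with per-word reading times.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

    ids = tok("The man who is tall is happy", return_tensors="pt").input_ids

    with torch.no_grad():
        logits = model(ids).logits                          # [1, seq_len, vocab]

    log_probs = torch.log_softmax(logits[:, :-1], dim=-1)   # prediction for token t+1
    surprisal = -log_probs.gather(-1, ids[:, 1:].unsqueeze(-1)).squeeze()

    for token, s in zip(tok.convert_ids_to_tokens(ids[0, 1:].tolist()), surprisal):
        print(f"{token:>10s}  {s.item():5.2f} nats")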
>In the case of language you observe that there is structure dependence, you ask why it's that way and not another (like linear order) and try to come up with an explanation. You are trying to suggest that the observation that language has structure dependence is like seeing an image of an object in a cloud formation: an imagined mental projection that doesn't have any meaningful underlying explanation.
No I'm suggesting that all you're doing here is cooking up some very nice fiction like Newton did when he proposed his model of gravity. Grammar does not even fit into rule based hierarchies all that well. That's why there are a million strange exceptions to almost every 'rule'. Exceptions that have no sensible explanations beyond, 'well this is just how it's used' because of course that's what happens when you try to break down an inherently probabilistic process into rigid rules.
GP didn't say anything about grammars being arbitrary. In fact, his claim that grammars are models of languages would mean the complete opposite.
> There's no contradiction because i never argued/asserted the brain didn't have centers tuned for language, which is really all this experiment demonstrates.
I don't know what you are trying to say then.
I'm not sure what this is supposed to show. If I can predict what you are going to say, so what? I can predict you are going to pick something up too if you are looking at it and start moving your arm. So what?
The third paper looks like a similar argument. As far as I can tell, neither paper 1 nor paper 2 proposes a probabilistic model for language. Paper 1 talks about how certain language features are acquired faster with more exposure (which isn't inconsistent with a deterministic grammar). I believe paper 2 is the same.
> No I'm suggesting that all you're doing here is cooking up some very nice fiction like Newton did when he proposed his model of gravity.
Absolutely bonkers to describe Newton's model of gravity as "fiction". In that sense every scientific breakthrough is fiction: Bohr's model of the atom is fiction (because it didn't use quantum effects), Einstein's gravity will be fiction too when physics is unified with quantum gravity. No sane person uses the word "fiction" to describe any of this, it's just scientific refinement: we go from good models to better ones, patching up holes in our understanding, which is an unceasing process. It would be great if we could have a Newton-level "fictitious" breakthrough in language.
> Grammar does not even fit into rule based hierarchies all that well. That's why there are a million strange exceptions to almost every 'rule'. Exceptions that have no sensible explanations beyond, 'well this is just how it's used' because of course that's what happens when you try to break down an inherently probabilistic process into rigid rules.
No one is saying grammar has been solved, people are trying to figure out all the things that we don't understand.
The main point of contention is their statement that "grammar follows language" which, in the Chomsky sense, is false: (universal) grammar/syntax describes the human language faculty (the internal language system) from which external languages (English, French, sign language) are derived, so (external) languages follow grammar.
I don't see how you can say his language ability is much higher than his general problem solving ability if you don't know what proficiency of language he is capable of reaching.
When you are learning say English as a second language, there are proficiency tiers you get assigned when you get tested - A1, A2 etc
If he's learning all these languages but maxing out at A2 then his language ability is only slightly better than his general problem solving ability.
This is the point I'm trying to drive home. Maybe it's because I've been learning a second language for a couple of years and so I see it more clearly, but saying 'he learned x language' says absolutely nothing. People say that to mean anything from 'well, he can ask for the toilet' to 'could be mistaken for a native'.
>I don't know what you are trying to say then.
The brain has over millions of years been tuned to speak languages with certain structures. Deviating from these structures is more taxing for the brain. True statement. But how on earth does that imply the brain isn't 'thinking' for the structures it is used to? Do you say you did not think for question 1 just because question 2 was more difficult?
If the speed of your understanding varies with how frequent and predictable syntactic structures are then your understanding of syntax is a probabilistic process. A strictly non-probabilistic process would have a fixed, deterministic way of processing syntax, independent of how often a structure appears or how predictable it is.
>I can predict you are going to pick something up too if you are looking at it and start moving your arm. So what?
Ok? This is very interesting. Do you seriously think this prediction right now isn't probabilistic? You estimate, not from rigid rules but from past experience, that it's likely I will pick it up. What if I push it off the table? You think that isn't possible? What if I grab the knife in my bag while you're distracted and stab you instead? Probability is the reason you picked that option instead of the myriad of other options.
>Absolutely bonkers to describe Newton's model of gravity as "fiction". In that sense every scientific breakthrough is fiction: Bohr's model of the atom is fiction (because it didn't use quantum effects), Einstein's gravity will be fiction too when physics is unified with quantum gravity. No sane person uses the word "fiction" to describe any of this, it's just scientific refinement: we go from good models to better ones, patching up holes in our understanding, which is an unceasing process. It would be great if we could have a Newton-level "fictitious" breakthrough in language.
"All models are wrong. Some are useful" - George Box. There's nothing insane with calling a spade a spade. It is fiction and many academics do view it in such a light. It's useful fiction, but fiction none the less. And yes, Einstein's theory is more useful fiction. Grammar is a model of language. It is not language.
All I am saying is that grammars (as per Chomsky) or even high-school rule-based stuff are imperfect and narrow models of human languages. They might work locally, for a given sentence, but fall apart when applied to the problem at scale. They also (by definition) fail to capture both more subtle and more general complexities of languages.
And the universal grammar hypothesis is just that - a hypothesis. It might be convenient at times to think about languages in this way in certain contexts but that's about it.
Also, remember, this is Hacker News, and I am just a programmer who loves his programming/natural languages so I look at everything from a computational point of view.
> But how on earth does that imply the brain isn't 'thinking' for the structures it is used to ? Do you say you did not think for question 1 just because question 2 was more difficult ?
The point isn't to define the word "thinking" it is to show that the language capacity is a distinct faculty from other cognitive capacities.
In what sense? I don't see how it tells you anything if you have the sentence "The cat ___ " and then you expect a verb like "went" but you could get a relative clause like "that caught the mouse". The sentence is interpreted deterministically: not by what the continuation of a fragment might contain, but by what it does contain. If you are more "surprised" by the latter it doesn't tell you that the process is not deterministic.
> Ok ? This is very interesting. Do you seriously think this prediction right now isn't probabilistic ? You estimate not from rigid rules but past experience that it's likely I will pick it up. What if i push it off the table ? You think that isn't possible. What if i grab the gun in my bag while you're distracted and shoot you instead?
I think you are confusing multiple things. I can predict actions and words, that doesn't mean sentence parsing/production is probabilistic (I'm not even sure exactly what a person might mean by that, especially with respect to production) nor does it mean arm movement is.
> "All models are wrong. Some are useful" - George Box. There's nothing insane with calling a spade a spade. It is fiction and many academics do view it in such a light. It's useful fiction, but fiction none the less. And yes, Einstein's theory is more useful fiction. Grammar is a model of language. It is not language.
I have no idea what you are saying: calling grammar a "fiction" was supposed to be a way to undermine it but now you are saying that it was some completely trivial statement that applies to the best science?
The fact that a deep-neural-net can predict the weather better than a physics-based model does not mean that the weather is not physics-based. Furthermore deep-neural-nets predict but don't explain while a physics-based model tries to explain (and consequently predict).
The claim isn't about whether the ultimate interpretation is deterministic; it's about the process of parsing and expectation-building as the sentence unfolds.
The idea is that language processing (at least in humans and many computational models) involves predictions about what structures are likely to come next. If the brain (or a model) processes common structures more quickly and experiences more difficulty and higher processing times with less frequent ones, then the process of parsing sentences is very clearly probabilistic.
Being "surprised" isn't just a subjective experience here - it manifests as measurable processing costs that scale with the degree of unexpectedness. This graded response to probability is not explainable with purely deterministic models that would parse every sentence with the same algorithm and fixed steps.
>I have no idea what you are saying: calling grammar a "fiction" was supposed to be a way to undermine it but now you are saying that it was some completely trivial statement that applies to the best science?
None of my comments undermine grammar beyond saying it is not how language works. I preface 'fiction' with the word useful multiple times and make comparisons to Newton.
This isn't true. For one, more common sentences are probably structurally simpler, and structurally simpler sentences are faster to process. You also get into bizarre territory when you can predict what someone is going to say before they say it: obviously no "parsing" has occurred there, so the fact that you predicted it cannot be evidence that parsing is probabilistic. If that is the case then a similar argument holds if you have only a sentence fragment. The probabilistic prediction is some ancillary process, just as being able to predict that a cup is going to fall doesn't make my vision a probabilistic process in any meaningful sense. If for some reason I couldn't predict, I could still see and I could still parse sentences.
Furthermore, you can obviously parse sentences and word sequences you have never seen before (and sentences can be arbitrarily complex/nested, at least up to your limits on memory). You can also parse sentences with invented terms.
Most importantly it's not clear how sentences are produced in the mind in this model. Is the claim that you somehow start with a word and produce some random most-likely next word? Do you not believe in syntax parse trees?
Finally, (as Chomsky points out in the video I linked) this model doesn't account for structure dependence. For example why is the question form of the sentence "The man who is tall is happy" "Is the man who is tall happy?" and not "is the man who tall is happy?". Why not move the first "is" that you come across?
> In a strictly deterministic model, both continuations ("went" or "that caught the mouse") would be processed through the same fixed algorithm with the same computational steps, regardless of frequency. The parsing mechanism wouldn't be influenced by prior expectations
Correct. You seem to imply that is somehow unreasonable. Computer parsers work this way.
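For concreteness, a toy deterministic parser of the kind I mean: it runs exactly the same fixed steps whether the continuation is a bare verb or a relative clause, and has no notion of how frequent either construction is (the grammar and lexicon here are invented for the example):

    # Toy deterministic grammar: S -> NP VP, NP -> Det N ("that" VP)?, VP -> V (NP)?.
    # The same fixed rules apply whether the continuation after "the cat" is a
    # verb ("went") or a relative clause ("that caught the mouse"); construction
    # frequency plays no role. Grammar and lexicon are made up for illustration.
    DET, N, V = {"the"}, {"cat", "mouse"}, {"went", "caught"}

    def parse_np(tokens, i):
        assert tokens[i] in DET and tokens[i + 1] in N
        i += 2
        if i < len(tokens) and tokens[i] == "that":      # optional relative clause
            i = parse_vp(tokens, i + 1)
        return i

    def parse_vp(tokens, i):
        assert tokens[i] in V
        i += 1
        if i < len(tokens) and tokens[i] in DET:         # optional object NP
            i = parse_np(tokens, i)
        return i

    def parse_sentence(sentence):
        tokens = sentence.split()
        return parse_vp(tokens, parse_np(tokens, 0)) == len(tokens)

    print(parse_sentence("the cat went"))                        # True
    print(parse_sentence("the cat that caught the mouse went"))  # True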
> Being "surprised" isn't just a subjective experience here - it manifests as measurable processing costs that scale with the degree of unexpectedness. This graded response to probability is not explainable with purely deterministic models.
Again, there are two orthogonal concepts: Do I know what you are going to say next or how you are going to finish your sentence (and possibly something like strain or slowed processing when faced with an unusual concept) and what process do I use to interpret the thing you actually said.
> None of my comments undermine grammar beyond saying it is not how language works. I preface 'fiction' with the word useful multiple times and make comparisons to Newton.
Again, I have no idea what the point of describing universal grammar as fiction is if you say the term applies to all other great scientific theories.
How does the human brain process natural language during everyday conversations? Theoretically, large language models (LLMs) and symbolic psycholinguistic models of human language provide a fundamentally different computational framework for coding natural language. Large language models do not depend on symbolic parts of speech or syntactic rules. Instead, they utilize simple self-supervised objectives, such as next-word prediction and generation enhanced by reinforcement learning. This allows them to produce context-specific linguistic outputs drawn from real-world text corpora, effectively encoding the statistical structure of natural speech (sounds) and language (words) into a multidimensional embedding space.
Inspired by the success of LLMs, our team at Google Research, in collaboration with Princeton University, NYU, and HUJI, sought to explore the similarities and differences in how the human brain and deep language models process natural language to achieve their remarkable capabilities. Through a series of studies over the past five years, we explored the similarity between the internal representations (embeddings) of specific deep learning models and human brain neural activity during natural free-flowing conversations, demonstrating the power of deep language models’ embeddings to act as a framework for understanding how the human brain processes language. We demonstrate that the word-level internal embeddings generated by deep language models align with the neural activity patterns in established brain regions associated with speech comprehension and production in the human brain.
Our most recent study, published in Nature Human Behaviour, investigated the alignment between the internal representations in a Transformer-based speech-to-text model and the neural processing sequence in the human brain during real-life conversations. In the study, we analyzed neural activity recorded using intracranial electrodes during spontaneous conversations. We compared patterns of neural activity with the internal representations — embeddings — generated by the Whisper speech-to-text model, focusing on how the model's linguistic features aligned with the brain's natural speech processing.
For every word heard (during speech comprehension) or spoken (during speech production), two types of embeddings were extracted from the speech-to-text model — speech embeddings from the model’s speech encoder and word-based language embeddings from the model's decoder. A linear transformation was estimated to predict the brain’s neural signals from the speech-to-text embeddings for each word in each conversation. The study revealed a remarkable alignment between the neural activity in the human brain's speech areas and the model's speech embeddings and between the neural activity in the brain’s language area and the model's language embeddings. The alignment is illustrated in the following animation, modeling the sequence of the brain’s neural responses to subjects’ language comprehension:
As the listener processes the incoming spoken words, we observe a sequence of neural responses: Initially, as each word is articulated, speech embeddings enable us to predict cortical activity in speech areas along the superior temporal gyrus (STG). A few hundred milliseconds later, when the listener starts to decode the meaning of the words, language embeddings predict cortical activity in Broca’s area (located in the inferior frontal gyrus; IFG).
Turning to participants' production, we observe a different (reversed!) sequence of neural responses:
Looking at this alignment more closely, about 500 milliseconds before articulating the word (as the subject prepares to articulate the next word), language embeddings (depicted in blue) predict cortical activity in Broca’s area. A few hundred milliseconds later (still before word onset), speech embeddings (depicted in red) predict neural activity in the motor cortex (MC) as the speaker plans the articulatory speech sequence. Finally, after the speaker articulates the word, speech embeddings predict the neural activity in the STG auditory areas as the listener listens to their own voice. This dynamic reflects the sequence of neural processing, starting with planning what to say in language areas, then how to articulate it in motor areas, and finally monitoring what was spoken in perceptual speech areas.
The quantitative results of the whole-brain analysis are illustrated in the figure below: for each word, given its speech embeddings (red) and language embeddings (blue), we predicted the neural response in each electrode at time lags ranging from -2 seconds before to +2 seconds after the word onset (x-axis value of 0 in the figure). This was done during speech production (left panel) and speech comprehension (right panel). The related graphs illustrate the accuracy of our predictions of neural activity (correlation) for all words as a function of the lag in the electrodes across various brain regions.
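In outline, this per-electrode, per-lag encoding analysis can be sketched as follows (an illustrative approximation rather than the exact analysis code; the array names, sizes, and regression settings here are placeholders):

    # Illustrative sketch of a per-electrode, per-lag linear encoding model of
    # the kind described above (placeholder names and settings).
    # embeddings: one speech or language embedding per word, shape [n_words, dim]
    # neural:     neural signal per electrode sampled around each word onset,
    #             shape [n_words, n_electrodes, n_lags] for lags of -2 s ... +2 s
    import numpy as np
    from sklearn.linear_model import RidgeCV
    from sklearn.model_selection import KFold

    def encoding_correlation(embeddings, neural):
        n_words, n_electrodes, n_lags = neural.shape
        scores = np.zeros((n_electrodes, n_lags))
        for e in range(n_electrodes):
            for lag in range(n_lags):
                y = neural[:, e, lag]
                preds = np.zeros(n_words)
                for train, test in KFold(n_splits=10).split(embeddings):
                    model = RidgeCV(alphas=np.logspace(-2, 5, 15))
                    model.fit(embeddings[train], y[train])
                    preds[test] = model.predict(embeddings[test])
                # encoding performance: correlation between predicted and
                # recorded activity for this electrode at this lag
                scores[e, lag] = np.corrcoef(preds, y)[0, 1]
        return scores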
During speech production, it is evident that language embeddings (blue) in the IFG peaked before speech embeddings (red) peaked in the sensorimotor area, followed by the peak of speech encoding in the STG. In contrast, during speech comprehension, the peak encoding shifted to after the word onset, with speech embeddings (red) in the STG peaking significantly before language encoding (blue) in the IFG.
All in all, our findings suggest that the speech-to-text model embeddings provide a cohesive framework for understanding the neural basis of processing language during natural conversations. Surprisingly, while Whisper was developed solely for speech recognition, without considering how the brain processes language, we found that its internal representations align with neural activity during natural conversations. This alignment was not guaranteed — a negative result would have shown little to no correspondence between the embeddings and neural signals, indicating that the model's representations did not capture the brain's language processing mechanisms.
A particularly intriguing concept revealed by the alignment between LLMs and the human brain is the notion of a "soft hierarchy" in neural processing. Although regions of the brain involved in language, such as the IFG, tend to prioritize word-level semantic and syntactic information — as indicated by stronger alignment with language embeddings (blue) — they also capture lower-level auditory features, which is evident from the lower yet significant alignment with speech embeddings (red). Conversely, although lower-order speech areas such as the STG tend to prioritize acoustic and phonemic processing — as indicated by stronger alignment with speech embeddings (red) — they also capture word-level information, as is evident from the lower yet significant alignment with language embeddings (blue).
LLMs are trained to process natural language by using a simple objective: predicting the next word in a sequence. In a paper published in Nature Neuroscience, we discovered that, similar to LLMs, the language areas of a listener’s brain attempt to predict the next word before it is spoken. Furthermore, like LLMs, listeners' confidence in their predictions before the word’s onset modifies their surprise level (prediction error) after the word is articulated. These findings provide compelling new evidence for fundamental computational principles of pre-onset prediction, post-onset surprise, and embedding-based contextual representation shared by autoregressive LLMs and the human brain. In another paper published in Nature Communications, the team also discovered that the relation among words in natural language, as captured by the geometry of the embedding space of an LLM, is aligned with the geometry of the representation induced by the brain (i.e., brain embeddings) in language areas.
While the human brain and Transformer-based LLMs share fundamental computational principles in processing natural language, their underlying neural circuit architectures are markedly different. For example, in a follow-up study, we investigated how information is processed across layers in Transformer-based LLMs compared to the human brain. The team found that while the non-linear transformations across layers are similar in LLMs and language areas in the human brain, the implementations differ significantly. Unlike the Transformer architecture, which processes hundreds to thousands of words simultaneously, the language areas appear to analyze language serially, word by word, recurrently, and temporally.
The accumulated evidence from the team’s work uncovered several shared computational principles between how the human brain and deep learning models process natural language. These findings indicate that deep learning models could offer a new computational framework for understanding the brain's neural code for processing natural language based on principles of statistical learning, blind optimization, and a direct fit to nature. At the same time, there are significant differences between the neural architecture, the types and scale of linguistic data, and the training protocols of Transformer-based language models, and the biological structure and developmental stages through which the human brain naturally acquires language in social settings. Moving forward, our goal is to create innovative, biologically inspired artificial neural networks that have improved capabilities for processing information and functioning in the real world. We plan to achieve this by adapting neural architecture, learning protocols, and training data that better match human experiences.
The work described is the result of Google Research's long-term collaboration with the Hasson Lab at the Neuroscience Institute and the Psychology Department at Princeton University, the DeepCognitionLab at the Hebrew University Business School and Cognitive Department, and researchers from the NYU Langone Comprehensive Epilepsy Center.
Common sentences are not necessarily structurally simpler, and those still get processed faster, so yes, it's pretty true.
>You also get in bizarre territory when you can predict what someone is going to say before they say it: Obviously no "parsing" has occurred there so the fact that you predicted it cannot be evidence that parsing is probabilistic.
Of course parsing has occurred: your history with this person (and people in general), what you know he likes to say, his mood and body language. Still probabilistic.
>Furthermore, you can obviously parse sentences and word sequences you have never seen before (and sentences can be arbitrarily complex/nested, at least up to your limits on memory). You can also parse sentences with invented terms.
So? LLMs can do this. I'm not even sure why you would think probabilistic predictors couldn't.
>Most importantly it's not clear how sentences are produced in the mind in this model. Is the claim that you somehow start with a word and produce some random most-likely next word? Do you not believe in syntax parse trees?
That's one way to do it, yeah. Why would I 'believe in it'? Computers that rely on it don't work anywhere near as well as those that don't. What evidence is there that it's anything more than a nice simplification?
>Finally, (as Chomsky points out in the video I linked) this model doesn't account for structure dependence. For example why is the question form of the sentence "The man who is tall is happy" "Is the man who is tall happy?" and not "is the man who tall is happy?". Why not move the first "is" that you come across?
Why does an LLM that encounters a novel form of that sentence generate the question form correctly?
You are giving examples that probabilistic approaches are clearly handling as if they are examples that probabilistic approaches cannot explain. It's bizarre.
>Correct. You seem to imply that is somehow unreasonable. Computer parsers work this way.
I'm not implying it's unreasonable. I'm telling you the brain clearly does not process language this way because even structurally simple but uncommon syntax is processed slower.
>Again, I have no idea what the point of describing universal grammar as fiction is if you say the term applies to all other great scientific theories
What's the point of describing Newton's model as fiction if I still teach it in high schools and Universities? Because erroneous models can still be useful.
>Again, there are two orthogonal concepts: Do I know what you are going to say next or how you are going to finish your sentence (and possibly something like strain or slowed processing when faced with an unusual concept) and what process do I use to interpret the thing you actually said.
The brain does not comprehend a sentence without trying to predict its meaning. They aren't orthogonal. They're intrinsically linked
This is just redefining terms to be so vague as to make rational inquiry or discussion impossible. I don't know what redefinition of parsing you could be using that would still be in any way useful, or what "probabilistic" in that case is supposed to apply to.
If you are saying that the brain is constantly predicting various things so that it automatically imbues some process that doesn't involve prediction as probabilistic then that is just useless.
> Common sentences are not necessarily structurally simpler and those still get processed faster so yes it's pretty true.
Well, I'll have to take your word for it as you haven't cited the paper but I would point to the reasonable explanation of different processing times that has nothing to do with parsing I gave further below. But I will repeat the vision analogy: If I had an experiment that showed that I took longer to react to an unusual visual sequence we would not immediately conclude that the visual system was probabilistic. The more parsimonious explanation is that the visual system is deterministic and some other part of cognition takes longer (or is recomputed) because of the "surprise".
> So? LLMs can do this. I'm not even sure why you would think probabilistic predictors couldn't.
It's not about capturing it in statistics or having an LLM produce it; it's about explaining why that rule occurs and not some other. That's the difference between explanation and description.
> That's one way to do it yeah. Why would I 'believe in it' ? Computers that rely on it don't work anywhere near as well as those that don't. What evidence is there to it being anything more than a nice simplification ?
Because producing one token at a time cannot produce arbitrary recursive structures the way sentences can? Because no language uses linear order? Because when we express a thought it usually can't be reduced to a single start word and statistically most-likely next-word continuations? It's also irrelevant what computers do; we are talking about what humans do.
> Why does a LLM that encounters a novel form of that sentence generate the question form correctly ?
That isn't the question. The question is why it's that way and not another. It's as if I ask why do the planets move in a certain pattern and you respond with "well why does my deep-neural-net predict it so well?". It's just nonsense.
> You are giving examples that probalistic approaches are clearly handling as if they are examples that probalistic approaches cannot explain. It's bizarre
No probabilistic model has explained anything. You are confusing predicting with explaining.
> I'm not implying it's unreasonable. I'm telling you the brain clearly does not process language this way because even structurally simple but uncommon syntax is processed slower.
I explained why you would expect that to be the case even with deterministic processing.
> What's the point of describing Newton's model as fiction if I still teach it in high schools and Universities? Because erroneous models can still be useful.
Well as I said this is also true of Einstein's theory of gravity and you presumably brought up the point to contrast universal grammar with that theory rather than point out the similarities.
> The brain does not comprehend a sentence without trying to predict its meaning. They aren't orthogonal. They're intrinsically linked
The brain is doing lots of things, we are talking about the language system. Again, if instead we were talking about the visual system no one would dispute that the visual system is doing the "seeing" and other parts of the brain are doing predicting.
In fact they must be orthogonal because once you get to the end of the sentence, where there are no next words to predict, you can still parse it even if all your predictions were wrong. So the main deterministic processing bit (universal grammar) still needs to be explained and the ancillary next-word-prediction "probabilistic" part is not relevant to its explanation.