I don’t see any particular contradiction. Moving fabs onshore is absolutely in the interest of both parties.
Can you more explicitly describe what the X and Y points you allude to are?
There was an Ezra Klein podcast a while back where Swisher (a journalist) made it sound like she'd been buddy-buddy with Elon Musk for some time. What is/was her relationship to Altman?
Altman is in bed with Trump. He isn’t all in. But Tuesday’s evidence of an electoral shift makes Friar’s comments exceptionally ill-timed.
Also, it's clear that the $1.4T figure includes some accounting for spend that does not come directly from OpenAI (grid/power/data infrastructure, for example). Obviously some government involvement is needed, but more at the EPA/state/local level to fast-track construction permits than financial help from the Treasury.
I'm confused why this generates such sensational headlines.
Read: https://www.paulgraham.com/fundraising.html
" Sam Altman has it. You could parachute him into an island full of cannibals and come back in 5 years and he'd be the king. "
Meanwhile America is having No Kings protests.
https://entropytown.com/articles/2025-11-06-openai-cfo/
"If you want to sell your shares, I'll find you a buyer." OpenAI and Microsoft Detail Landmark Partnership, Navigate Future of AI and Compute
https://founderboat.com/interviews/2025-11-01-openai-sam-sat...
Crazy sequences in a week...
Somebody please tell me how to short this. I'm going all in.
The AI bubble must really be about to pop.
Someone I know who has been an absolutely staunch advocate of ChatGPT for the best part of the past 1.5 years suddenly changed their tune earlier this week. That is my signal that things are turning south.
There will be a reprieve for some time, with increasing revenue. But eventually that will grind to a halt and all the doubts will intensify.
The article as a whole just seems libelous? Almost personal?
Last time I checked, Nouriel was partying it up in the Hamptons, so being a permabear is lucrative.
Gary called it a long time ago but I think the Dwarkesh and Karpathy podcast is when the shift started.
The gist is Altman was trying to be slick about requesting a government bailout, got called out on his bullshit, and then decided to lie and say he never wanted a bailout.
The implication being that the bubble is getting closer and closer to popping, since even Altman is thinking about how to survive the pop.
So: Altman did not ask for OpenAI loans to be guaranteed, nor did the CFO. It was on behalf of others drawn into the needs of the industry the AMIC grant was supposed to support. Self-interested by OpenAI? Sure! And also not about to make the top-10k leaderboard for "sleazy things companies do".
They were talking about backstops for chips, to get fabs constructed in the US. If anyone else had said this, it would have been considered a great idea; both Republicans and Democrats talked about the same thing to get TSMC production into the US. But everyone pretended that OpenAI was talking about data centers in this context. They weren't.
Altman's $1.4T isn't like that - it's a proposed new investment in stuff that doesn't exist yet, and there would be no job losses or the like if it never materializes. They have been talking about potential government support for the new ventures, partly to keep up with China, which uses similar government support. I'm not sure if it's a good idea, but it would not be a bailout, more a subsidy.
They were bank bailouts.
Unsecured government loans are either bailouts, entitlements in disguise, or (usually misguided) attempts at broad economic stimulus. This definitely isn't either of the latter two.
> No substance, no skill, just wheeling dealing
That's their only real skill. There's an entire class of people who specialize in exploiting society's hard work.
We truly live in the dumbest timeline.
I’m not holding my breath for hot takes, but I got what I came for: sama said some stuff, the thread.
One of the major beliefs of this view is that LLMs are essentially impossible because there's not enough information in language to learn it unless you have a special purpose language-learning module built into the brain by evolution. This is Chomsky's "poverty of the stimulus" argument.
Marcus still defends this view and because of this bias is committed to trying to prove to everyone that LLMs are not possible or at least that they're some kind of illusion. There's a sense in which they threaten his fundamental concept of how the brain works.
In proposing and defending these views he appears to me and others to be having a sort of internal crisis that's playing out publicly where his need to be right about this is coloring his judgment and objectivity.
He called for general bailouts, which would benefit OAI far, far more than others since it's spending the most; then, when he got backlash, he lied and said that OAI doesn't want bailouts at all.
You can't be so literalist as to just take the man at his exact word and not try to determine his motivations. Especially this guy, who's made a career out of dancing around Silicon Valley's "no assholes" policy.
A "bailout" is what happened in 2009, in the sense the banks would literally have collapsed without it (and they probably should have).
OpenAI is not going to collapse without these loans. Huge difference.
Also for the record, not rationalizing, because I'm not in favor of either handouts or bailouts.
I felt like those were the kind of folks I could get on board with; not someone with an MBA that needs head count to reach their KPIs. Someone that cared about solving a problem and was genuinely smart.
I don't buy that they can create AGI by investing trillions in training models and infrastructure.
If you ask me, this is just more money spent on pollution.
The need to replace humans to be profitable just sounds like the end goal is to destroy the planet with datacenters and hurt people generally.
Sounds like a net negative for the planet.
https://arstechnica.com/information-technology/2024/02/repor...
ChatGPT is also building other products/brands, like Sora, to capture more mindshare.
Linux is free and yet people use Windows.
Maybe I’m just old, but I don’t see the appeal?
If you’re trying to convince people, then you should probably have a convincing argument. Otherwise it feels like kiwifarms-posting with a megaphone.
Capital denial to competitors.
The problem with Sam Altman is that he is a shyster and leech feeding off the hard work of thousands of programmers and engineers.
The $1.4T amounts to a broad, nearly decade-long capex plan, not liabilities.
The loans and backstops etc. were a request, not for OpenAI, but on behalf of manufacturers of grid equipment, manufacturers that OpenAI would like the government to consider as eligible for money already carved out by the AMIC national investment in chips and AI (and probably more money as well). It's a separate group of tangential industries that weren't initially considered, so why not ask? Sure, it would help keep the wheels moving at OpenAI and in the broader AI and US semiconductor industry, but it's far from an ask by Altman for a bailout of his company.
- Greed is blinding even to intelligent people, especially greedy people in groups.
- Society is incredibly vulnerable to lying; we mostly rely on the disincentive that people usually feel bad about it, but the ones who don’t can get away with anything.
- There’s really only a subtle difference between many successful startups and Ponzi schemes. Sam Altman’s grift is only one level more sophisticated than SBF’s or Elizabeth Holmes’s.
Edit: lol at the downvotes. You're just proving my point, goons.
> trying to prove to everyone that LLMs are not possible or at least that they're some kind of illusion
This is such poor phrasing I can't help but wonder if it was intentional. The argument is over whether LLMs are capable of AGI, not whether "LLMs are possible".
You also 100% do not have to buy into Chomsky's theory of the brain to believe LLMs won't achieve AGI.
This is the worst misrepresentation of Marcus' argument I've seen so far.
The argument is that there is not enough information available to a child to do this. So even if we grant the dubious premise that LLMs have learned to speak languages in a manner analogous to humans, they are not a counterexample to Chomsky’s poverty of the stimulus argument because they have been trained on a vast array of linguistic data that is not available within a single human childhood.
If you want to better understand Chomsky’s position, it’s easiest to do so in relation to other animals. Why are other intelligent animals not able to learn human languages? The rather unsurprising answer, in Chomsky’s view, is that humans have a built-in linguistic capacity, rather in the way that e.g. bats have a built in capacity for echolocation. The claim that bats have a built-in capacity for echolocation is not refuted by the existence of sonar. Likewise, our ability to construct machines that mimic some aspects of human linguistic capacity does not automatically refute the hypothesis that this is a specific human capacity absent in other animals.
Imagine if sonar engineers were constantly shitting on chiropterologists because their silly theory of bats having evolved a capacity for echolocation has now been refuted by human-constructed sonar arrays. The argument makes so little sense that it’s difficult to even imagine the scenario. But the argument against Chomsky from LLMs doesn’t really make any more sense, on reflection.
Chomsky hasn’t helped his case in recent years by tacking his name on some dumb articles about LLMs that he didn’t actually write. (A warning to us all that retirement is a good idea.) So I don’t blame people who are excited about LLMs for seeing him as a bit of rube, but the supposed conflict between Chomsky and LLMs is entirely artificial. Chomsky is (was) trying to do cognitive science. People experimenting with LLMs are not, on the other hand, making any serious effort to study how humans acquire language, and so have very little of substance to argue with Chomsky about. They are just taking opportunistic pot shots at a Big Name because it’s a good way to get attention.
For the record, Chomsky himself has never made any very specific claims about a dedicated module in the brain or about the evolutionary origins of human linguistic capacity (except for some skeptical comments on it being a gradual adaptation).
Guaranteed loans are a completely different thing and are still debatable, as I expand here -- https://news.ycombinator.com/item?id=45839060 -- but that's not the debate I'm seeing anywhere.
Yes, people use Windows. Go look up the history of how that came to be; it had nothing to do with their brand. Sam is looking for his IBM.
If you just take those Chinese models and slap them on some decent-looking website, nobody will know the difference. People take brand recognition far too seriously.
If Linux was actually indistinguishable from Windows, you bet your ass nobody in their right mind would install Windows. But they're actually different things.
This is still an unknown.
> Linux is free and yet people use Windows.
The moat isn't even 1% of the one that Windows has - and even Windows has been losing ground on the consumer side. While slow, the acceleration is definitely there: compare Oct 2020-Oct 2025 vs the preceding 5-year period.
https://www.pon.harvard.edu/daily/negotiation-skills-daily/w...
Google has stayed on top for 25 years because they're better and free. LLM providers will have to compete on price doing expensive inference.
Isn't that basically the history of the internet?
I just wish the bozos actually replied to the post instead of hiding behind a button.
It will ruin some lives. I hate that the American economy runs on these speculative cycles. If we tempered our expectations to something sane, but still bullish, we'd have far less distance to fall.
And my phrasing was wonderful and perfect.
Also, Gary Marcus doesn't make an "argument" in the singular. He's a researcher with decades of public and private work.
There's too much to hash out here on HN. You can try to save the LAD argument by a strategic retreat, but it's been in retreat for decades now and keeps losing ground. It's clear that neural networks can learn the rules of grammar without specifically baking grammatical hierarchies into the network. You can retreat to saying it's about setting hyperparameters or priors, but even the evidence for that is marginal.
There are certainly features of the brain that make language learning easier (such as size) but POS doesn't really provide anything to guide research and is mostly of historical interest now. It's a claim that something is impossible, which is a strong claim. And the evidence for it is poor. It's not clear it would have any adherents if it were proposed anew today. And this is all before LLMs enter the picture.
Research from neuroscience, learning theory, machine learning, etc. has all pointed toward a view of the brain significantly different from the psychological nativism view. When many prominent results in the nativist camp failed to replicate during the replicability crisis, most big-name researchers pivoted to other fields. Marcus is one of the remaining vocal holdouts for nativism. And his beliefs about AI align very closely with all the old debates about statistical learning models vs. symbolic manipulation, etc.
> Why are other intelligent animals not able to learn human languages?
Animals and plants do communicate with each other in structured ways. Animals can learn to communicate with humans. This is one of those areas where you can choose to try to see the continuities with communication or you can try to define a vision of language that isolates human language as completely set apart. I think human language is more like an outlier in complexity to the communication animals do rather than a fundamentally different thing. In that sense there's not much of a mystery given brain size, number of neurons, sociality etc.
> The argument is that there is not enough information available to a child to do this
Yes, but children are the humans who learn language in the typical case. So you can replace "child" with "human", especially with all the hedging I did in my first post (e.g. "essentially"). As I said above, Chomsky is known to have underestimated the amount of input babies receive. Babies hear language from the moment they're born until they learn to speak. Also, as a parent, I often correct grammatical and other mistakes as toddlers learn to talk. Other parents do the same. Part of the POS argument is based on the premise that children don't get their grammar corrected often.
ChatGPT can become an ad company just like Google, probably bigger than Google.
They don’t care about the wake of destruction, and likely believe that it is virtuous to bring it on.
If it turns out like this, it's extremely cynical, and one would wonder how Altman could stay out of prison (I am sure he will not go to prison).
In spite of this, I think LLMs display intelligence and for me that is more useful than their understanding of language. I haven’t read anything from Chomsky tbh.
The utility of LLMs comes from their intelligence and the price point at which it is achieved. Ideally the discussion should focus on that. The deeper discussion of AGI should not worry the policy makers or the general public. But unfortunately, business seems intent on diving into the philosophical arguments of how to achieve AGI because that is the logic they have chosen to convince people into giving them more and more capital. And that is what makes Marcus’ and his friends’ critique relevant.
One can’t really critique people like Marcus for being academic and pedantic about LLM capabilities (are they real, are they not, etc.) when the money is relentlessly chasing those un-achieved capabilities.
So even though you’re saying we aren’t talking about AGI and this isn’t the topic, everything kind of circles back into AGI and the amount of money being poured into chasing that.
I don't know whether he's right or wrong, but deep down I feel the amount of money funneled into the Transformer architecture might have blocked other approaches, some potentially promising and others doomed to failure, just because of LLMs' quick wins.
Do you use LLMs to post on HN? I'm asking seriously.
You mention 'neural networks' learning rules of grammar. Again, this is relevant to Chomsky's argument only to the extent that such devices do so on the basis of the kind of data available to a child. Here you implicitly reference a body of research that's largely non-existent. Where are the papers showing that neural networks can learn, say, ECP effects, ACD, restrictions on possible scope interpretations, etc. etc., on the basis of a realistic child linguistic corpus?
Your 'continuities' argument cuts both ways. There are continuities between human perception and bat perception and between bat communication and human communication; but we still can't echolocate, and bats still can't hold conversations. The specifics matter here. Is bat echolocation just a more complex variant of my very slight ability to sense whether I'm in an enclosed location or an outdoor space when I have my eyes closed? And is the explanation for why bats but not humans have this ability that bat cognition is just more sophisticated than human cognition? I'm sure neural networks can be trained to do echolocation too. "Humans can train an artificial network to do echolocation, therefore it can't be a species-specific capacity of bats." This seems like a terrible argument, no?
Poverty of the stimulus arguments don't really depend at all on the assumption that parents don't correct children, or that children ignore such corrections. If you look at specific examples of the kind of grammatical rules that tend to interest generative linguists (e.g. ACD, ECP effects, ...) then parents don't even know about any of these, and certainly aren't correcting their children on them.
Chomsky has never made any specific estimate of the 'amount' of input that babies receive, so he certainly can't be known to have underestimated it. Poverty of the stimulus arguments are at heart not quantitative but rather are based on the assumption that certain specific kinds of data are not likely to be available in a child's input. This assumption has been validated by experimental and corpus studies (e.g. https://sites.socsci.uci.edu/~lpearl/courses/readings/LidzWa...)
> Babies hear language from the moment they're born until they learn to speak
I can assure you that this insight is not lost on anyone who works on child language acquisition :)
I would appreciate it if you and the GP did not personally insult me when you have a question, though. You may feel that you know Marcus to be into one particular thing, but some of us have been familiar with his work long before he pivoted to AI.
Some LMs are specifically trained on child-focused small corpora in the 10-million-word range, e.g. BabyLM: https://babylm.github.io.
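To give a sense of that scale, here's a minimal sketch in plain Python of fitting a toy count-based trigram model to a child-scale corpus. It's an illustration only, not the transformer setups BabyLM entries actually use, and the filename is a hypothetical stand-in for a local copy of the data:

    # Toy trigram LM over a ~10M-word, child-directed corpus.
    # "babylm_10M.txt" is a hypothetical placeholder; the real data
    # is distributed via babylm.github.io.
    from collections import Counter, defaultdict

    counts = defaultdict(Counter)
    with open("babylm_10M.txt", encoding="utf-8") as f:
        for line in f:
            toks = ["<s>", "<s>"] + line.split() + ["</s>"]
            for a, b, c in zip(toks, toks[1:], toks[2:]):
                counts[(a, b)][c] += 1  # tally next-word continuations

    def prob(a, b, c):
        # Maximum-likelihood estimate of P(c | a, b); 0 for unseen contexts.
        ctx = counts.get((a, b))
        return ctx[c] / sum(ctx.values()) if ctx else 0.0

    print(prob("we", "went", "fishing"))

Even at 10M words, most trigram contexts occur only a handful of times, which gives a small taste of the sparsity issues this whole debate turns on.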
Keep in mind that before age 2, children are using individual words and getting much richer feedback than LMs are.
Humans can and do echolocate: https://en.wikipedia.org/wiki/Human_echolocation. There are also anatomical differences that are not cognitive that affect abilities like echolocation. For example, the positioning and frequency response of sensors (e.g. ears) can affect echolocation performance.
The first is that your phrasing "that LLMs are not possible or at least that they're some kind of illusion" collapses the claim being made to the point where it looks as if you're saying Marcus believes people are just deluded that something called an "LLM" exists in the world. But even allowing for some inference as to what you actually meant, it remains ambiguous whether you are talking about language acquisition (which you are in your 2nd paragraph) or the genuine understanding and reasoning / robust world-model induction necessary for AGI, which is the focus of Marcus' recent discussion of LLMs, and why we're even talking about Marcus here in the first place.
You seem more familiar with Marcus' thinking on language acquisition than I, so I can only assume that his thinking on language acquisition and LLMs is somewhat related to his thinking on understanding and reasoning / world model induction and LLMs. But it doesn't appear to me, based on what I've read of Marcus, that his claims about the latter really depend on Chomsky. Which brings me to the 2nd problem with your post, where you make the uncharitable claim that "he appears to me and others to be having a sort of internal crisis that's playing out publicly", as if it were simply impossible to believe that LLMs are not capable of genuine understanding / robust world model induction otherwise.
I didn't mean to attack you personally and I'm really sorry if it sounded this way. I appreciate the generally positive atmosphere on HN and I believe it more important than the actual argument, whatever it may be.
Children don't get 'rich feedback' at all on the grammatical structure of their sentences. I think this idea is probably based on a misconception of what 'grammar' is from a generative linguistics perspective. When was the last time that a child got rich feedback on their misinterpretation of an ACD construction? https://www.bu.edu/bucld/files/2011/05/29-SyrettBUCLD2004.pd...
LLMs trained on small datasets don't perform that well from the point of view of language acquisition – even up to 100 million tokens. There's not a very large literature on this because, as I said, there are many more people interested in making a drive-by critique of generative linguistics than there are people who are genuinely interested in investigating different models of child language acquisition. But here is one suggestive result: https://aclanthology.org/2025.emnlp-main.761.pdf See also the last paragraph of p.6 onwards of https://arxiv.org/pdf/2308.03228
The other point that's often missed in evaluations of these models is their capacity for learning completely non-human-like languages. Thus, the BabyLM models have some limited success in learning (for example) some island constraints, but could just as easily have acquired languages without island constraints. That leaves the question of why we do not see human languages without such constraints.
They probably do get parents and the like correcting them or giving an example. The kid says "we goed fish"; the adult says "yeah, we went fishing." I taught English as a foreign language a bit, and people learn almost entirely from examples like that rather than from talk about ellipsis or any sort of grammar jargon.
It seems brains / neurons / LLMs are good at pattern recognition. Brains are probably quicker on the uptake than LLM backpropagation, though.
See above for some examples of the kinds of grammatical principles that can form the basis of a poverty of the stimulus argument. They’re not generally the kind of thing that parental corrections could conceivably help with, for two reasons:
1) (The main reason) Poverty of the stimulus arguments relate to features of grammatical constructions that are rarely exemplified. As examples are rarely uttered, deviant instances are rarely corrected, even assuming the presence of superlatively wise and attentive caregivers.
2) (The reason that you mention) Explicit instruction on grammatical rules has almost no effect on most people, especially young children. So corrections at most add a few more examples of bad sentences to the child’s dataset, which they can probably obtain anyway via more indirect cues.
If corrections were really effective, someone should be able to do a killer experiment where they show an improved (i.e. more adult-like) handling of, say, quantifier scope in four year olds after giving them lots of relevant corrections. I am open minded about the outcome of such an experiment, but I’d bet a fairly large amount of money that it would go nowhere.
The fact that Sam Altman is a liar is no longer news. As I argued here in late 2023, Altman truly was fired for being “not completely candid” — just like the board said. Recent books by Karen Hao and Keach Hagey pretty much confirm this. I dissected his 2023 Senate testimony here.
But just in case anyone was seriously still in doubt, a just-released 62-page deposition from Ilya Sutskever ought to seal the deal:
[screenshot]

A recent lawsuit furthers that sense that employees no longer trust Altman:
[screenshot]

But it is no longer about lying to employees. It is about directly lying to the American public. Events of the last couple of days surrounding OpenAI CFO Sarah Friar’s Wednesday call for loan guarantees have brought things to a new level.
The very idea – of having the US government bail out OpenAI from their reckless spending — is outrageous, as I explained here:
[screenshot]

And I was far, far from alone in my fury. To fully understand what I am about to reveal, you need to know that anger rapidly ricocheted across Washington and the entire nation. Here are two examples, one from a prominent Republican governor:
[screenshot]

And here is another, from the White House AI Czar:
[screenshot]

Altman, sensing that he had massively blundered, wrote a meandering fifteen-paragraph reply on X, written in full, capitalized paragraphs (unlike his usual style of short, cryptic remarks in lowercase), that started like this:
[screenshot]

Nobody believed it. There were dozens of hostile, skeptical replies like this:
[screenshot]

Even ChatGPT wasn’t buying it.
[screenshot]

But that’s not the kicker.
§
The kicker is this: Sam was, once again, lying his ass off. What he meant by “we do not have or want government guarantees for OpenAI data centers” was actually that … OpenAI had explicitly asked the White House Office of Science and Technology Policy (OSTP) to consider federal loan guarantees just a week earlier:
[screenshot]

In a podcast that was probably recorded in the last several days, Altman also appears to have been laying the groundwork for loan guarantees:
[screenshot]

In short, Altman was—likely in conjunction with Nvidia, which also just seemed to be laying groundwork for a bailout—launching a full-court press for loan guarantees when he got caught with his hands in the cookie jar.
And then Altman lied about the whole thing to the entire world. Even David Sacks at the White House may have been conned.
Nobody should ever trust this man. Ever.
That’s what Ilya saw.
Gary Marcus was blocked on X by Kara Swisher in November 2023 for saying that the board did not trust Sam Altman. Marcus remains blocked by Ms. Swisher to this day.