1. Since AI has captured the imagination of capitalists and they think this is the next industrial revolution, they gotta be in it to win it. Combined with the fact that I believe most people here are wealthy, or at least aspirationally so, that explains half of it.
2. The other half is that AI as a tech is interesting from a mathematical and compsci point of view, though certainly not interesting enough to justify the proportion of topics about it here.
I guess I should add a 3rd reason.
3. ycomb has a financial stake in spreading the news about how wonderful this tech is!
lolol
Jessica Livingston's personal stake in OpenAI is maybe 0.1% at most, and Paul Graham's, afaik, is 0.
So the bias doesn't seem as large as OP thinks.
*https://xcancel.com/paulg/status/2041366050693173393
And "toughness, adaptability, and determination" >>> "ambition", frankly
I'd go as far as to say that it's impossible at this point to form an AI company without YCombinator investing in it.
So what does "AGI" actually mean? It depends on who you ask.
To many it appears to mean "A Great IPO" or "A Gigantic IPO" at this point, rather than "Artificial General Intelligence"; the term has clearly been hijacked to mean something else.
It's letting me build stuff, way more cheaply, that I probably wouldn't be able to build by myself without raising lots of money, at least until GitHub Copilot gets incredibly nerfed next month.
I think it’s mostly the above, rather than a capitalist conspiracy or AI’s relevance as a scientific curiosity.
Also: nothing gets sustained attention on HN unless good hackers find it interesting. Our entire objective is to be the website that attracts the best hackers, serves them the most interesting content and facilitates the most interesting discussions. That can’t happen if we’re nefariously pushing a commercial agenda.
At the same time, the current version of HN is still usable; you just need to mentally filter out LLM-related stuff. It was similar with cryptocurrencies, TBH.
AGI - Automatically Generating Income
No worries, there will be a startup creating "AGI Bench": score >=80% and you're AGI. They will be valued at $50B.
When Greg Brockman makes a lot of money from the deal.
"Your stake is > $30 billion" seems like a more reasonable and realistic criterion to me.
It’s a sobering reminder and worthy of being on the front page on that basis alone, but I don’t see much of a discussion to be had. “Unusually quiet for a front-page post” is probably where this post is meant to be.
I mean, the goalposts shifted. The game Go used to be considered to require true AI. So did passing the Turing test. Scanning, analyzing, and improving complex codebases largely on their own would have been considered some sort of AGI by me 6 years ago.
Now sure, we all know they lack true understanding. But what that means gets blurry at times.
But I don't buy that there will be a magic point where self-improving AGI explodes towards a singularity. The current approach is very, very energy- and compute-intensive, and that is unlikely to change.
- Sam Altman: ~9%
- Y Combinator: <5%
- Steve Huffman: ~3% (though ~4% voting power via Class B shares)
- Alexis Ohanian: minimal
- Advance Publications: ~30%
- Tencent: ~11%
The original founders (Steve Huffman and Alexis Ohanian) were massively diluted when they sold Reddit to Advance Publications in 2006 for $10-20 million.
The numbers above are only roughly accurate. See https://www.untaylored.com/post/who-owns-reddit
One key thing I've heard about AGI, which I think would be the most determining factor for me, is a model that learns on the fly. That could be done one way or another, but when you consider that LLMs basically run like "ROM" files (the weights are frozen at inference time), it gets a little complicated.
I think we need to re-imagine how LLMs are built, trained, and run, and also figure out how to drastically lower the cost of running them.
Paul Graham of Y Combinator responded by tweeting some positive things about Altman, emphasising that YC didn't fire him as CEO (though not going as far as declaring him trustworthy).
Now John Gruber of Daring Fireball (an Apple blog) has added context by claiming that YC owns a 0.6% stake in OpenAI, worth around $5bn, which might colour Graham's judgement.
As far as I know this is the first time anyone has publicly claimed to know, quoting insider sources, what YC's actual stake in OpenAI is.
"Less" doesn't mean "not at all", of course—that would be too big a loophole. But it does mean strictly less, and we stick to that, despite its various downsides, because the upside is bigger.
In the present case, it means we haven't applied any moderation downweights to this post, even though it's obviously the sort of thing we would downweight under other circumstances, since it's neither particularly substantive nor intellectually interesting (though it could be some other kind of interesting, at least to some readers).
https://www.scmp.com/news/china/science/article/3351721/chin...
But in general I do believe AI has the potential to be a great positive for humanity on its own, if the open models stay strong and not only a few people control them.
And yes, humans as a whole weren't even ready for cars or nuclear weapons. We built and used them anyway.
But my brain is still pretty busy, and I don't think the younger generation is getting dumber because of LLMs; rather, from mindlessly consuming TikTok and co.
LLMs are also a great learning tool, and anyone using them should get to know their limits quickly. Not all do, though. That is obvious.
Speaking of companies with valuable minority stakes in AI companies, there’s one thing that stuck in my craw about the blockbuster Ronan Farrow / Andrew Marantz investigative piece on Sam Altman and OpenAI last month for The New Yorker. It didn’t come up during Nilay Patel’s excellent interview with Farrow on Decoder, either.
Sam Altman was the president of Y Combinator for several years, and left to become the full-time CEO of OpenAI. The New Yorker quotes Y Combinator co-founder Paul Graham multiple times, in the context of Altman’s trustworthiness. (Some of those quotes are firsthand, others secondhand.) Graham’s role in the story — particularly his public remarks after publication — comprised an entire section in my own take on the New Yorker piece, wherein I concluded:
I would characterize Graham’s tweets re: Altman this week as emphasizing only that Altman was not fired or otherwise forced from YC, and could have stayed as CEO at YC if he’d found another CEO for OpenAI. But for all of Graham’s elucidating engagement on Twitter/X this week regarding this story, he’s dancing around the core question of the Farrow/Marantz investigation, the one right there in The New Yorker’s headline: Can Sam Altman be trusted? “We didn’t ‘remove’ Sam Altman” and “We didn’t want him to leave” are not the same things as saying, say, “I think Sam Altman is honest and trustworthy” or “Sam Altman is a man of integrity”. If Paul Graham were to say such things, clearly and unambiguously, those remarks would carry tremendous weight. But — rather conspicuously to my eyes — he’s not saying such things.
The thing that stuck in my craw is this: Does Y Combinator own a stake in OpenAI? And if they do, given OpenAI’s sky-high valuation, isn’t that stake worth billions of dollars?
OpenAI was seeded by an offshoot of Y Combinator called YC Research in 2016 — when Altman was running YC. In December 2023, the well-known AI expert (and AI-hype skeptic) Gary Marcus wrote the following, in a piece on Altman’s trustworthiness in the wake of the OpenAI board saga that saw Altman fired, re-hired, and the board purged in the course of a tumultuous week:
After poking around, I found out that “I have no equity in OpenAI” was only half the truth; while Altman to my knowledge holds no direct equity in OpenAI, he does have an indirect stake in OpenAI, and that fact should have been disclosed.
In particular, he owns a stake in Y Combinator, and Y Combinator owns a stake in OpenAI. It may well be worth tens of millions of dollars; even for Altman, that’s not trivial. Since he was President of Y Combinator and CEO of OpenAI, he surely was aware of this.
So it’s well known that Y Combinator owns some stake in OpenAI. But how big is that stake? This seems like devilishly difficult information to obtain. I asked around and a little birdie who knows several OpenAI investors came back with an answer: Y Combinator owns about 0.6 percent of OpenAI. At OpenAI’s current $852 billion valuation, that’s worth over $5 billion.
Graham and his wife Jessica Livingston are two of Y Combinator’s four founding partners. The fact that Paul Graham personally has billions of dollars at stake with OpenAI doesn’t mean that his public opinion on Sam Altman’s trustworthiness and leadership is invalid. But it certainly seems like the sort of thing that ought to be disclosed when quoting Graham as an Altman character reference. A billion dollars here, a billion there — that adds up to the sort of money that might skew a fellow’s opinion.
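As a quick sanity check on the stake arithmetic in the quoted piece, here is a minimal sketch; the 0.6% stake and the $852 billion valuation are the article's figures, not independently verified:

```python
# Sanity check of the quoted figures: a 0.6% stake at an $852B valuation.
# Both numbers come from the quoted piece and are not independently verified.

def stake_value(valuation: float, stake_fraction: float) -> float:
    """Dollar value of a fractional equity stake at a given valuation."""
    return valuation * stake_fraction

openai_valuation = 852e9  # $852 billion, per the piece
yc_stake = 0.006          # 0.6 percent

value = stake_value(openai_valuation, yc_stake)
print(f"${value / 1e9:.2f}B")  # prints "$5.11B", i.e. "over $5 billion"
```

The result, about $5.1 billion, matches the piece's "worth over $5 billion" claim.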