Or is this "we said we are going to invest $X"? What about the circular agreements?
~$6.5 trillion
I was reading geohot's musings about building a data center and doing so cost-effectively, and solar is _the_ way to get low energy costs. The problem is covering off-peak hours, but even with that... you might come out ahead.
And that dude is anything but a green fanatic; he's a pragmatist.
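The "you might come out ahead" claim is easy to sanity-check with a back-of-envelope sketch. All figures below are illustrative assumptions (not from geohot or any source): an assumed industrial grid rate, an assumed utility-scale solar LCOE, and an assumed storage cost for the hours solar can't serve directly.

```python
# Back-of-envelope: 24/7 grid power vs. solar + batteries for a data center.
# Every number here is an assumption for illustration, not sourced data.

GRID_PRICE = 0.08          # $/kWh, assumed industrial grid rate
SOLAR_LCOE = 0.035         # $/kWh, assumed utility-scale solar
BATTERY_CYCLE_COST = 0.05  # $/kWh discharged, assumed storage add-on
SOLAR_FRACTION = 0.4       # assumed share of load served directly by solar

load_mw = 100              # hypothetical data center load
hours_per_year = 8760
annual_kwh = load_mw * 1000 * hours_per_year

# Grid: every kWh at the grid rate.
grid_cost = annual_kwh * GRID_PRICE

# Solar: direct solar for part of the load, solar + battery for the rest.
solar_cost = annual_kwh * (
    SOLAR_FRACTION * SOLAR_LCOE
    + (1 - SOLAR_FRACTION) * (SOLAR_LCOE + BATTERY_CYCLE_COST)
)

print(f"grid:          ${grid_cost / 1e6:.1f}M/yr")
print(f"solar+storage: ${solar_cost / 1e6:.1f}M/yr")
```

With these made-up but directionally plausible numbers, solar plus storage lands cheaper than grid power; the conclusion flips easily with different storage costs, which is exactly the "problem is off-peak energy" caveat above.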
edit - sorry, it is in fact adjusted, text is kinda hard to see
I would love to hear about the economic value being generated by these LLMs. I think a couple years is enough time for us to start putting some actual numbers to the value provided.
We're seeing exactly the same thing with AI: massive investment creating a bubble without a payoff. We know that the value will fall over time because software and hardware both get more efficient and cheaper. And so far there's no evidence that all this investment has generated more profit for the users of AI. It's just a matter of time until people realize that and the bubble bursts.
And when the bubble does burst, what's going to happen? Most of the investment is from private capital, not banks. We don't know where all that private capital is coming from, so we don't know what the externalities will be when it bursts. (As just one possibility: if it takes out the balance sheets of hyperscalers and tech unicorns, and they collapse, who's standing on top of them that collapses next? About half the S&P 500 - so 30% of US households' wealth - but also every business built on top of those mega-corps, and all the people they employ) Since it's not banks failing, they probably won't be bailed out, so the fallout will be immediate and uncushioned.
There’s a loop where everyone is saying stuff because everyone else is saying stuff, and it turns into a sort of reality-inspired fan fiction.
It’s not just that it’s wrong or imprecise (that I expect); it’s that the folklore takes on a life of its own.
Only if they were laid on a sensible route, completed on budget and on time, and savvily operated. Many railroads went bust.
But what I see are two big costs for America:

1) Less money being invested into risky AI projects in general, in both public (via cash flows from operations) and private markets

2) The large tech firms who participated in large AI-related capex spend won't be trusted with their cash balances - i.e., they'll have to return more cash, and therefore have less money for reinvestment
All the hype and fanfare that draw in investment come with a cost - you gotta deliver. People have an asymmetric relationship between gains and losses.
...
And so far there's no evidence that all this investment has generated more profit for the users of AI.
If you look around a bit, you will find evidence for both. Recent data finds pretty high success in GenAI adoption even as "formal ROI measurement" -- i.e. not based on "vibes" -- becomes common: https://knowledge.wharton.upenn.edu/special-report/2025-ai-a... (tl;dr: about 75% report positive ROI.)
The trustworthiness, salience, and nuances of this report are worth discussing, but unfortunately reports like this get no airtime in the HN and media echo chambers.
It's preliminary evidence, but given that this weird, entirely unprecedented technology is about 3+ years old and people are still figuring it out (something the report calls out), this is significant.
We aren't even getting infrastructure out of it; they're just powering it with gas turbines.
I’m getting my popcorn ready for the bubble pop.
The only problem is, if AI doesn’t solve cold fusion, we’re back to square one. And a few trillion dollars in the hole.
I certainly think it was a mistake.
And what is the ROI on either of those right now?
It honestly just isn't that interesting. (It's most notable for people misunderstanding and misrepresenting the chart on page 46 of the report as "ROI" rather than "ROI measurement".)
In terms of ROI figures, it's really just a survey with the question "Based on internal conversations with colleagues and senior leadership, what has been the return on investment (ROI) from your organization's Gen AI initiatives to date?".
This doesn't mean much. It's not even dubiously measured ROI data; it's not ROI data at all, just what the leadership thinks is true.
And that's a worrying thing to rely on, as it's well documented (and measured by the report's next question) that there's a significant discrepancy between how high-level leadership and low-level leadership/ICs rate AI "ROI".
One of the main explanations for that discrepancy is Goodhart's law. A large number of companies are simply demanding AI productivity as a "target" now, with accusations of "worker sabotage" being thrown around readily. That makes good economy-wide data on AI ROI very hard to get.
I would love to see another report that isn't a year old with actual ROI figures...
Got it.