The problem appears to be that Oracle is building today's DCs... Tomorrow. And by the time they come online, Vera Rubins will be out, with 5x efficiency gains. And Oracle is unlikely to want to drop the price of Blackwells 5x, despite them being 5x less efficient.
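To make the arithmetic concrete, here's a minimal sketch (only the 5x performance ratio comes from the reporting; the $10/GPU-hour rental rate is a made-up illustration):

```python
# Back-of-envelope on the "drop the price 5x" point. The 5x inference
# ratio is from the reporting quoted below; the $10/GPU-hour rental
# rate is a hypothetical illustration.

rubin_perf_vs_blackwell = 5.0   # Rubin inference perf, Blackwell = 1
rubin_price = 10.0              # hypothetical $/GPU-hour for Rubin

# Cost per unit of inference work on Rubin:
rubin_cost_per_work = rubin_price / rubin_perf_vs_blackwell

# A Blackwell (perf = 1) only matches that if it rents at the same
# cost per work unit, i.e. a 5x lower hourly price:
blackwell_breakeven = rubin_cost_per_work * 1.0
print(f"Blackwell break-even: ${blackwell_breakeven:.2f}/hr "
      f"vs Rubin at ${rubin_price:.2f}/hr")
# And that ignores power: if both racks draw similar wattage, the
# older part looks even worse per unit of work.
```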
It's a little unclear to me how bad this is. Nvidia's "rack scale" machines like GB200-NVL72s and GB300-NVL72s are basically a fully built rack you roll into a DC and plug into power and network. In that case, Oracle should probably just buy the rack-scale Vera Rubins when they come out instead of Blackwells and roll them into their new DCs. Tada! Tomorrow's DCs, tomorrow.
OTOH it's possible someone at Oracle screwed up and committed to buying Blackwells at today's prices, delivered tomorrow. Or maybe construction of the physical DCs is behind schedule, so today's Blackwells are sitting around unused, waiting for power and networking tomorrow. Then they're in a bit of trouble.
Regardless, CNBC's reporting seems pretty unclear on what actually happened and whether this is actually bad or not.
If it's built in stages, each stage will have newer variants of hardware, I imagine.
https://www.msn.com/en-us/money/general/as-oracle-plans-thou...
David Ellison is fueling his buying spree with debt guaranteed by his dad's Oracle shares. The various assets David has bought are already suffering viewership losses because viewers are turned off by their new ideological slant.
Usually debt investors are not worried if the stock price is high. Debt has priority over equity, so if the stock price is riding high, the CEO can always be convinced to print more shares to service the debt. The Oracle stock price has not been doing that hot lately, however; as the article says, it is down 50%. Still, ORCL has a $430 billion market cap against $130 billion of debt, which seems manageable. But stock prices can move very fast. Ironically, the war in Iran, which David's new news outlets keep supporting, is causing ORCL stock to go down, which could bring down David's new media empire.
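A quick sketch of that cushion, using the figures just mentioned:

```python
# Quick cushion math, in billions of dollars, using the figures above.
market_cap = 430.0
debt = 130.0

# How much further the stock can fall before the market cap merely
# equals the debt load:
cushion = 1 - debt / market_cap
print(f"~{cushion:.0%} further downside to cap == debt")  # ~70%
```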
David just purchased Warner Bros for about $110 billion. A lot of that ($40 billion) is also guaranteed by daddy's ORCL shares. Warner Bros owns Comedy Central, which sadly has been one of America's most dependable news sources.
The house of cards is still standing, but it's getting awfully wobbly.
I could see Nvidia adding terms of sale requiring disposal rather than resale.
Stargate is backed by the US government, which is why they're comfortable putting it under debt financing.
There has to be some theory that makes the story consistent with this comment.

Artificial intelligence chips are getting upgraded more quickly than data centers can be built, a market reality that exposes a key risk to the AI trade and Oracle's debt-fueled expansion.
OpenAI is no longer planning to expand its partnership with Oracle in Abilene, Texas, home to the Stargate data center, because it wants clusters with newer generations of Nvidia graphics processing units, according to a person familiar with the matter.
The current Abilene site is expected to use Nvidia's Blackwell processors, and the power isn't projected to come online for a year. By then, OpenAI is hoping to have expanded access to Nvidia's next-generation chips in bigger clusters elsewhere, said the person, who asked not to be named due to confidentiality.
Bloomberg was first to report on the companies ending their plans for expansion in Abilene. In a post on X on Sunday, Oracle called news reports about the activity "false and incorrect," but the post only said existing projects are on track and didn't address expansion plans.
Oracle secured the site, ordered the hardware, and spent billions of dollars on construction and staff, with the expectation of going bigger.
An Oracle spokesperson declined to comment.
It's a logical decision for OpenAI, which doesn't want older chips. Nvidia used to release a new generation of data center processors every two years. Now, CEO Jensen Huang has the company shipping one every year, and each generation offers a leap in capability. Vera Rubin, unveiled at CES in January and already in production, delivers five times the inference performance of Blackwell.
For the companies building frontier models, the smallest improvement in performance could equate to huge gaps in model benchmarks and rankings, which are closely followed by developers and translate directly to usage, revenue, and valuation.
That all points to a bigger problem at play. For infrastructure companies, securing a site, connecting power and standing up a facility takes 12 to 24 months at minimum. But customers want the latest and greatest, and they're tracking the yearly chip upgrades.
Oracle's added challenge is that it's the only hyperscaler funding its buildout primarily with debt, to the tune of $100 billion and counting. Google, Amazon and Microsoft, by contrast, are leaning on their enormous cash-generating businesses.
Meanwhile, Oracle partner Blue Owl is declining to fund an additional facility, and plans to cut up to 30,000 jobs.
Oracle reports fiscal third-quarter results on Tuesday, and investors will be paying close attention to how the company addresses a $50 billion capital expenditure plan with negative free cash flow, and whether the financing pipeline can hold up.
The stock is down 23% so far this year and has lost over half its value since peaking in September.
Beyond Oracle, GPU depreciation is a risk for the broader market and could have ramifications across the AI landscape. Every infrastructure deal signed today may result in a commitment to outdated hardware before the power is even connected.
In order to take advantage of that, someone needs to be positioned to process all that material economically, and to make the logistics achievable for the big players. If it costs Facebook $10 million to store and transport phased-out GPUs vs. just sending them to a landfill, they're not going to do it. If they get $100k for recycling - probably not going to do it. If they pocket $5 million, they will definitely contract that out, especially if it costs $50 million to build out the infrastructure to handle it.
Probably a good company idea: transport, disposal, and refurbishment of out-of-cycle GPUs and datacenter assets. Creating a massive recycling pipeline for recapturing all the valuable elements is a pretty good niche.
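A rough sketch of the recycler-side break-even implied by those numbers (the 30% margin split is my assumption; the $5M and $50M figures are from above):

```python
# Recycler-side break-even. The 30% margin split is an assumption;
# $5M per deal and $50M of buildout come from the comment above.

infra_buildout = 50e6     # cost to stand up the recycling pipeline
owner_net_per_deal = 5e6  # what the hyperscaler pockets per contract
recycler_margin = 0.30    # assumed share of gross recovered value

# If the owner nets $5M as 70% of gross, gross is ~$7.1M and the
# recycler keeps ~$2.1M per deal.
gross_value = owner_net_per_deal / (1 - recycler_margin)
recycler_take = gross_value * recycler_margin

deals_to_break_even = infra_buildout / recycler_take
print(f"Recycler take per deal: ${recycler_take/1e6:.1f}M")
print(f"Deals to recoup the buildout: {deals_to_break_even:.0f}")  # ~23
```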
I bought a used NEC SX Aurora TSUBASA (PCIe x16 board that looks like a GPU board) and realized it has no fans. The server case it is designed to fit into is pressurized by fans forcing air through eight cards on a special 4 + 4 slot motherboard. I have to stack and mount three 40mm fans on the back.
My last employer is still running a bunch of otherwise discontinued g3 instances with 2015-era GPUs.
This site apparently sources ex-enterprise(-only) systems and puts them into desktop style enclosures.
Are they?
OpenAI's current revenue is $25 billion a year. They are expected to spend $600 billion on infrastructure in the next 4 years to sustain and grow that revenue.
The story is the same across the industry. Amazon, Google, Microsoft and Meta are spending a combined $650 billion on infrastructure in 2026 alone.
None of these investments are immediately profitable. And it remains to be seen whether they eventually will be or not.
Don't forget the possibility that it's AI slop.
That sounds about right.
> People at openai are lying to cnbc?
Remove "to cnbc" and that's a yes.
> cnbc are fabricating stories while drunk?
Maybe not drunk but likely high.
To use an analogy from the hit HBO show Silicon Valley, it is far more likely that "the bear is sticky with honey" will happen at Oracle than at OpenAI. Some kind of game of telephone gone wrong at some point, and now the people responsible at Oracle must double down in order to kick the can to the next quarter and not appear clueless.
Statutory disclaimer: I am not affiliated with either OpenAI or Oracle and have no insider information. All of this is mere conjecture and has no basis in reality.
I also don't think companies are going to have mandatory replacement cycles for GPU hardware the same way they do for everything else, because:
1. It is an order of magnitude (or more) more expensive.
2. It isn't clear whether Moore's law will apply to the AI GPU space the same way it has for everything else.
Unless Nvidia can launch a new chip every 2-3 years with massively improved performance-per-watt at a lower price, no one is going to rush to recycle the old one.
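A minimal sketch of that keep-vs-replace trade-off (every number here is an illustrative assumption, not a real price):

```python
# Keep-vs-replace sketch. Every number is an illustrative assumption;
# the point is the shape of the trade-off, not the specific values.

def yearly_cost_per_perf(perf, watts, capex_per_year,
                         dollars_per_kwh=0.10, hours=8760):
    """Yearly cost divided by delivered performance (normalized units)."""
    power_cost = watts / 1000 * hours * dollars_per_kwh
    return (capex_per_year + power_cost) / perf

# Old GPU: fully depreciated (capex ~ 0), 700 W, perf normalized to 1.
old = yearly_cost_per_perf(perf=1.0, watts=700, capex_per_year=0)

# New GPU: 2x perf at the same 700 W, amortized at $8k/year (assumed).
new = yearly_cost_per_perf(perf=2.0, watts=700, capex_per_year=8000)

print(f"old: ${old:,.0f} per perf-unit-year")   # ~$613
print(f"new: ${new:,.0f} per perf-unit-year")   # ~$4,307
# The paid-off card wins easily here: electricity is only ~$600/year,
# so a 2x perf/watt gain can't offset $8k/year of new capex.
# Replacement only pencils out with a much bigger perf jump, a lower
# price, or when power/rack space is the binding constraint.
```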
Would be interested to know if others have takes on this.
Why would they sell it cheaper on the secondhand market?
It would hurt the sales of new ones. This is the way even with food, let alone technology. Don't expect to buy a cheap secondhand GPU any century soon.
That's exactly the point.
Performance/watt is increasing so much gen-to-gen that it no longer makes sense to run older hardware.
Not my words, Jensen's.
There are PCIe versions of these, right? And another comment says there are PCIe adapters too. It "only" requires 600 to 700 W. It's not out of reach for everybody.
If the used regular server market is any indication, after a few years you can find a lot of enterprise gear at deeply discounted prices: a CPU costing $4K brand new going for $100, stuff like that.
A friend has a 42U rack, and so do some homelabbers. People have been running GPU farms mining cryptocurrencies or doing "transcoding" (for money).
It's not just CPUs at 1/40th of their brand-new price: network gear too, and ECC RAM (before the recent RAM craze).
I'm pretty sure that if H200s begin to flood the used market, people will quickly adapt.
> Unless Nvidia can launch a new chip every 2-3 years with massively improved performance-per-watt at a lower price no one is going to rush to recycle the old one.
I agree with that. But if they resell old H200s, people are resourceful and will find a way to run them.
That stopped being true many years ago, though, and the divergence has only accelerated with the advent of AI datacenter usage. The form factor is now fundamentally different (SXM instead of PCIe); you can adapt an SXM card to PCIe with some effort [1], but that may not even be worthwhile because 1. the power and cooling requirements for the SXM cards are radically different from those of a desktop part, and more importantly 2. the dies are no longer even close to being the same. IIRC, Blackwell AI chips straight up don't have rasterization hardware onboard at all; internally they look like a moderate number of general SMs attached to a huge number of tensor cores. Modern AI GPUs are fundamentally optimized for, well, matmuls, which is not at all what you want for gaming or really any non-AI application.
[1] https://l4rz.net/running-nvidia-sxm-gpus-in-consumer-pcs/
A couple real world points:
1. They generally don't just fail. More likely a repairable component on a board fails and you can send it out to be repaired.
2. For my current stuff, I have a 3 year pro support contract that can be extended. Anything happens, Dell goes and fixes it. We also haven't had someone in our cage at the DC in over 6 months now.
So they have to hope they're part of the future AI capacity, because their SaaS business is going to take a big hit.
YTD performance didn't fully bake this reality in. The stock was seen as having 2 huge revenue streams; now the market is realizing that AI is a threat to SaaS and baking that into stonks.
https://www.youtube.com/watch?v=1H3xQaf7BFI&t=1577s
in the States.
New stuff is all liquid cooled by default and that's a paradigm shift for your average home lab.
I'm less aware of exactly what's happening on the power side of things, but I think some of the architectures are now moving to relatively high-voltage DC throughout, down-converting it to low voltage right before it's used. So it's not exactly plug-and-play with your average NEMA 5-15 outlet.