If Nvidia did try to exert any pressure to scrap ARC, that would be both a huge financial and geopolitical scandal. It's in the best interest of the US to not only support Intel's local manufacturing, but also its GPU tech.
I wonder what this means for the ARC line of GPUs?
This time around Nvidia should HODL the stock
Probably government-mandated.
https://www.intc.com/news-events/press-releases/detail/1750/...
What’s old is new again: back in 2017, Intel tried something similar with AMD (Kaby Lake-G). They paired a Kaby Lake CPU with a Vega GPU and HBM, but the product flopped: https://www.tomshardware.com/news/intel-discontinue-kaby-lak...
1) This year, Intel, TSMC, and Samsung announced their latest factories' yields. Intel was the earliest, with 18A, while Samsung was the most recent. TSMC yielded above 60%, Intel below 60%, and Samsung around 50% (though Samsung's tech is basically a generation ahead and technically more precise), and Samsung has the most room to improve its yields because of how it set up its processes, where 70% is the target. Until last year, Samsung was in second place; with Intel catching up so fast and taking Samsung's position, at least for this year, Nvidia bought Intel's stock, which has been getting cheaper since COVID.
2) It's just generally good to diversify into your competitors. Every company does this, especially when the price is cheap.
> Nvidia will also have Intel build custom x86 data center CPUs for its AI products for hyperscale and enterprise customers.
Hell has frozen over at Intel. Actually listening to people that want to buy your stuff, whatever next? Presumably someone over there doesn't want the AI wave to turn into a repeat of their famous success with mobile.
In the event Intel ever does get US-based fabrication semi-competitive again (and the national security motivation for doing so is intense), nVidia will likely have to be a major customer, so this does make sense. I remain doubtful that Intel can pull it off, in which case it will have to come from someone else.
https://www.intc.com/news-events/press-releases/detail/1748/...
Erm, a rather important point to bury so far down the story. The first question on anyone’s lips will be: is this $5bn to build new chip technology, or $5bn for employees to spend on yachts?
5 Billion is just a start, but this is a gift for nVidia to eventually acquire Intel.
Looks like using GPU IP to take over other brands' product lines is now officially an nVidia strategy.
I guess the obvious worry here is whether Intel will continue development of their own dGPUs, which have a lovely open driver stack.
[0]: <https://www.fudzilla.com/6882-nvidia-continues-comic-campaig...>
Why/how is INTC premarket up from $24.90 around 30% (to $32), when Nvidia is buying the stock at $23.28 ? Who is selling the stock?
I suppose the Intel board decided this? Why did they sell under the current market price? Didn't the Intel board have fiduciary duty to get as good a price from Nvidia as possible? If Nvidia buying stock moves it up so much, it seems like a bad deal to sell the stock for so little.
Why would it matter if not? This is a nice partnership. Each gets something the other lacks.
And it strengthens domestic manufacturing. Taiwan is going to be subsumed soon, and we need more domestic production now.
They will be dominating AMD now on both fronts if things go smoothly for them.
> Intel stock experienced dilution because the U.S. government converted CHIPS Act grants into an equity stake, acquiring a significant ownership percentage at a discounted price, which increased the total number of outstanding shares and reduced existing shareholders' ownership percentage, according to The Motley Fool and Investing.com. This led to roughly 11% dilution for existing shareholders
Intel has never been so cheap relative to the kinds of IP assets that Nvidia values and probably will not be ever again if this and other investments keep it afloat.
Trump's FTC would not block.
You write with proper case-sensitivity for their titles which suggests some historic knowledge of the two. They have been very close partners on CPU+GPU for decades. This investment is not fundamentally changing that.
The current CEO is more like a CFO--cutting costs and eliminating waste. There are two exits from that: sell off, as you say, and re-investment in the products of most likely future profit. This could be a signal that the latter is the plan and that the competitive aspects of the nVidia-intel partnership will be sidelined for a while.
So long as the AI craze is hanging in there it feels like having that expertise and IP is going to have high potential upside.
They wanted to launch DGX Spark in early summer and it's nowhere to be seen, while Strix Halo is shipping in over 30 SKUs from all major manufacturers.
In a top-down oligarchy, their best interests are served by focusing on the desires of the great leader, in contrast to a competitive bottom-up market economy, where they would focus on the desires of customers and shareholders.
The reason the stock surged up past $30 is the general market's reaction to the news, and subsequent buying pressure, not the stock transaction itself. It seems likely that once the exuberance cools down, the SP will pull back, where to I can't say. Somewhere between $25 and $30 would be my bet, but this is not financial advice, I'm just spitballing here.
Nvidia's GPUs are theoretically fast in initial benchmarks. But that’s mostly optimization by others for Nvidia? That’s it.
Everything Nvidia has done is a pain. Closed-source drivers (old pain), out of tree-drivers (new pain), ignoring (or actively harming) Wayland (everyone handles implicit sync well, except Nvidia which required explicit sync[1]), and awkward driver bugs declared as “it is not a bug, it is a feature”. The infamous bug:
This extension provides a way for applications to discover when video
memory content has been lost, so that the application can re-populate
the video memory content as necessary.
https://registry.khronos.org/OpenGL/extensions/NV/NV_robustn...
This extension will soon be ten years old. At least they intend to fix it? They just didn’t in the past 9 years! Basically, video memory could be gone after suspend/resume, a VT switch and so on. The good news is, after years someone figured that out and implemented a workaround. For X11 with GNOME:
https://www.phoronix.com/news/NVIDIA-Ubuntu-2025-SnR
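For what it's worth, the application-side check with that extension is small. A minimal, untested sketch, assuming the GL context was created with robust access plus NVIDIA's purge-notification attribute, and that a loader such as glad or GLEW exposes glGetGraphicsResetStatus (core since GL 4.5):

```c
/* Sketch: detect purged video memory via NV_robustness_video_memory_purge.
 * Assumes a robust GL context created with the NV purge-notification attribute
 * and a loader that has resolved glGetGraphicsResetStatus. Untested. */
#include <stdio.h>
#include <GL/gl.h>
#include <GL/glext.h>

#ifndef GL_PURGED_CONTEXT_RESET_NV
#define GL_PURGED_CONTEXT_RESET_NV 0x92BB  /* value per glext.h; verify against your headers */
#endif

/* Call after resume / VT switch (or once per frame) and re-upload if needed. */
void check_vram_purged(void (*reupload_textures_and_buffers)(void))
{
    GLenum status = glGetGraphicsResetStatus();
    if (status == GL_PURGED_CONTEXT_RESET_NV) {
        fprintf(stderr, "VRAM contents were purged; re-populating GPU resources\n");
        reupload_textures_and_buffers();
    }
}
```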
I hope in the meantime somebody implemented a patch for Wayland.
What we need? Reliability. And Linux support. That’s why I purchase AMD. And previously Intel.
[1] I don’t judge whether implicit sync or explicit are better.
That would seem weird to me. Intel’s iGPUs are an incredibly good solution for their (non-glamorous) niche.
Intel’s dGPUs might be in a risky spot, though. (So… what’s new?)
Messing up Intel’s iGPUs would be a huge practical loss for, like, everyday desktop Linux folks. Tossing out their dGPUs, I don’t know if it is such a huge loss.
- The datacenter GPU market is 10x larger than the consumer GPU market for Nvidia (and it's still growing). Winning an extra few percentage points in consumer is not a priority anymore.
- Nvidia doesn't have a CPU offering for the datacenter market and they were blocked from acquiring ARM. It's in their interest to have a friend on the CPU side.
- Nvidia is fabless and has concentrated supplier and geopolitical risk with TSMC. Intel is one of the only other leading fabs onshoring, which significantly improves Nvidia's supplier negotiation position and hedges geopolitical risk.
American competition isn't zero-sum, and it's in Nvidia's best interest to keep the market healthy.
Nvidia is contributing to Nova, the new Nvidia driver for GSP based hardware.
https://rust-for-linux.com/nova-gpu-driver
Alexandre Courbot, an Nvidia dev, is comaintainer.
For context, I highly recommend the old Stratechery articles on the history of Intel foundry.
Had Apple failed, Microsoft would probably have been found to have a clear monopolistic position. And Microsoft was already in hot water due to Internet Explorer IIRC.
They probably assume Intel is more similar to them in skill and execution than what’s reality…
I don't like the idea of using Intel given their lack of disclosure for Spectre/Meltdown and some of their practices (towards AMD)
https://markets.businessinsider.com/news/stocks/warren-buffe...
What I mean is that whenever NVIDIA removed features from their "consumer" GPUs in order to reduce production costs and increase profits, AMD immediately followed them, instead of attempting to offer GPUs that have something that NVIDIA does not have.
Intel at least tries to be a real competitor, e.g. by offering much, much better FP64 performance or by offering more memory.
If Intel's discrete GPUs disappear, there will be no competition in consumer GPUs, as AMD tries to compete only in "datacenter" GPUs. I have ancient AMD GPUs that I cannot upgrade to newer AMD GPUs, because the newer GPUs are worse, not better (for computational applications; I do not care about games), while Intel offers acceptable substitutes, due to excellent performance per $.
Moreover, NVIDIA also had excellent Linux driver support for more than 2 decades, not only for games, but also for professional graphics applications (i.e. much better OpenGL support than AMD) and for GPU computing applications (i.e. CUDA). AMD gets bonus points for open-source drivers and much more complete documentation, but the quality of their drivers has been typically significantly worse.
NVIDIA always had good support even for FreeBSD, where I had to buy discrete NVIDIA GPU cards for computers with AMD APUs that were not supported for any other OS except Windows and Linux.
AMD "consumer" GPUs are a great choice for those who are interested only in games, but not for those interested in any other GPU applications. AMD "datacenter" GPUs are good, but they are far too expensive to be worthwhile for small businesses or for individuals.
/ steps down from soap box /
I have been looking at high-end laptops with dedicated AMD Graphics chip, but can't find many... So I will probably go with AMD+NVidia with MUX switch, let's see how it goes... Unless someone else has other suggestions?
They were felt at an IC level.
5% by about any accounting makes you a very, very influential stockholder in a publicly traded company with a widely distributed set of owners.
Any down the road repercussions be damned from their perspective.
Intel can no longer fund new process nodes by itself, and no customers want to take the business risk to build their product on a (very difficult) new node when tsmc exists. They're in a chicken and egg situation. (see also https://stratechery.com/2025/u-s-intel/ )
/me picturing Khaby Lame gesturing his hands at an obvious workaround.
Are you comparing Samsung against Intel here specifically, or also TSMC?
I am also thinking that it may just be more of a "everyone scratching each other's backs." Intel avoids anti-trust/monopoly investigations, Intel is saved for all the institutional and political stakeholders and Nvidia floats an artificial competitor to make them look less like a monopoly, Intel stays alive, etc.
It's pretty clear AMD and Nvidia are gatekeeping memory so they can iterate over time and protect their datacenter cards.
Intel had a prime opportunity to blow this up.
Also, since this Intel deal makes no sense for NVIDIA, a good observer would notice that lately he seems to spend more time on Air Force One than with NVIDIA teams. The leak of any evidence showing this was an investment ordered by the White House would make his company hostage to future demands from the current corrupt administration. The timing is already incredibly suspicious.
We will know for sure he has become a hostage if the next NVIDIA investment is in World Liberty Financial.
"Anatomy of Two Giant Deals: The U.A.E. Got Chips. The Trump Team Got Crypto Riches." - https://www.nytimes.com/2025/09/15/us/politics/trump-uae-chi...
On the other hand, my Steam Deck has been exceedingly stable.
So I guess I would say: Buy AMD but understand that they don't have the resources to truly support all of their hardware on any platform, so they have to prioritize.
Both of which NVidia does a lot better in practice! I'm all for open-source in-tree drivers, but in practice, 15 years on, AMD is still buggy on Linux, whereas NVidia works well (not just on Linux but on FreeBSD too).
> I don’t judge whether implicit sync or explicit are better.
Maybe you should.
Funnily, AMD's in-tree drivers are kind of a pain in the ass. For up to a year after a new GPU is released, you have to deal with using Mesa and kernel packages from outside your distro. Whereas if you buy a brand new nVidia card, you just install the latest release of the proprietary drivers and it'll work.
Linux's driver model really is not kind to new hardware releases.
Of course, I still buy AMD because Nvidia's drivers really aren't very good. But that first half a year was not pleasant last time I got a relatively recently released (as in, released half a year earlier) AMD card.
Linux support has been excellent on AMD for less than 15 years though. It got really good around 10 years ago, not before.
Those who want to run Linux seriously will buy AMD. Intel will be slowly phased out, and this will reduce maintenance and increase the quality of anything that previously had to support both Intel and AMD.
However, if Microsoft or Apple scoop up AMD, all hell will break loose. I don’t think either would have interest in Linux support.
Even before the Pascal GTs, most of the GT 7xx cards, which you would assume were Maxwell or Kepler from the numbering, were rebadged Fermi cards (4xx and 5xx)! That generation was just a dumping ground for all the old chips they had laying about, and given the prominence of halfway decent iGPUs by that point, I can't say I blame them for investing so little in the lineup.
That said, the dGPUs are definitely somewhat at risk, but I think the risk is only slightly elevated by this investment, given that it isn't exactly a cash cow and Intel has been doing all sorts of cost-cutting lately.
But now, are they really going to undermine this partnership for that? Their GPUs probably aren't going to become a cash cow anytime soon, but this thing probably will. The mindset among American business leaders of the past two decades has been to prioritize short-term profits above all else.
I feel bad for gamers - I’ve been considering buying a B580 - but honestly the consumer welfare of that market is a complete sidenote.
IMHO we will soon see more small/quiet PCs without a slot for a graphics card, relying on integrated graphics. nVidia has no place in that future. But now, by dropping $5B on Intel they can get into some of these SoCs and not become irrelevant.
The nice thing for Intel is that they might be able to claim graphics superiority in SoC land since they are currently lagging in CPU.
- a bigger R&D budget for their main competitor in the GPU market
- since Nvidia doesn't have their own CPUs, they risk becoming more dependent on their main competitor for total system performance.
I was recently looking into 2nm myself, and based on the Wikipedia article on 2nm, TSMC 2nm is about 50% more dense than the Samsung and Intel equivalents. They aren’t remotely the same thing. Samsung 2nm and Intel 18A are about as dense as TSMC 3nm, which has been in production for years.
This definitely isn't a thing that every company does (or even close to every company).
This is very likely the new culture that LBT is bringing in. This can only be good.
In a surprise announcement that finds two long-time rivals working together, Nvidia and Intel announced today that the companies will jointly develop multiple new generations of x86 products together — a seismic shift with profound implications for the entire world of technology. Before the news broke, Tom's Hardware spoke with Nvidia representatives to learn more details about the company’s plans.
The products include x86 Intel CPUs tightly fused with an Nvidia RTX graphics chiplet for the consumer gaming PC market, named the ‘Intel x86 RTX SOCs.’ Nvidia will also have Intel build custom x86 data center CPUs for its AI products for hyperscale and enterprise customers. Additionally, Nvidia will buy $5 billion in Intel common stock at $23.28 per share, representing a roughly 5% ownership stake in Intel. (Intel stock is now up 33% in premarket trading.)
Nvidia emphasized that the companies are committed to multi-generation roadmaps for the co-developed products, which represents a strong investment in the x86 ecosystem. But Nvidia representatives tell us it also remains fully committed to other announced product roadmaps and architectures, including the company's Arm-based GB10 Grace Blackwell processors for workstations and the Nvidia Grace CPUs for data centers, as well as the next-gen Vera CPUs. Nvidia says it also remains committed to products on its internal roadmaps that haven’t been publicly disclosed yet, indicating that the new roadmap with Intel will merely be additive to existing initiatives.
Nvidia hasn’t disclosed whether it will use Intel Foundry to produce any of these products yet. However, while Intel has used TSMC to manufacture some of its own recent products, its goal is to bring production of most high-performance products back into its own foundries.
Some products never left. For instance, Intel’s existing Granite Rapids data center processors use the ‘Intel 3’ node, and the upcoming Clearwater Forest Xeons will use Intel’s own 18A process node for compute. This suggests that at least some of the Nvidia-custom x86 silicon, particularly for the data center, could be fabbed on Intel nodes. Intel also uses TSMC to fabricate many of its client x86 processors, however, so we won’t know for sure until official announcements are made — particularly for the RTX GPU chiplet.
While the two companies have engaged in heated competition in some market segments, Intel and Nvidia have partnered for decades, ensuring interoperability between their hardware and software for products spanning both the client and data center markets. The PCIe interface has long been used to connect Intel CPUs and Nvidia GPUs. The new partnership will find tighter integration using the NVLink interface for CPU-to-GPU communication, which affords up to 14 times more bandwidth along with lower latency than PCIe, thus granting the new x86 products access to the highest performance possible when paired with GPUs. That's a strategic advantage. Let’s dive into the details we’ve learned so far.
For the PC market, the Intel x86 RTX SoC chips will come with an x86 CPU chiplet tightly connected with an Nvidia RTX GPU chiplet via the NVLink interface. This type of processor will have both CPU and GPU units merged into one compact chip package that externally looks much like a standard CPU, rivaling AMD’s competing APU products.
Intel's new x86 RTX CPUs will compete directly with AMD's APUs. For AMD, that means it faces intensifying competition from a company with the leading market share in notebook CPUs (Intel ships ~79% of laptop chips worldwide) that's now armed with GPU tech from Nvidia, which ships 92% of the world's gaming GPUs.
This type of tight integration packs all the gaming prowess into one package without an external discrete GPU, providing power and footprint advantages. As such, we're told these chips will be heavily focused on thin-and-light gaming laptops and small form-factor PCs, much like today’s APUs from AMD. However, it’s possible the new Nvidia/Intel chips could come in multiple flavors and permeate further into the Intel stack over time.
Intel has worked on a similar type of chip before with AMD; there is at least one significant technical difference between these initiatives, however. Intel launched its Kaby Lake-G chip in 2017 with an Intel processor fused into the same package as an AMD Radeon GPU chiplet, much the same as the description of the new Nvidia/Intel chips. You can see an image of the Intel/AMD chip below.
(Image 1 of 5: An RTX GPU chiplet connected to an Intel CPU chiplet via the fast and efficient NVLink interface.)
This SoC had a CPU at one end connected via a PCIe connection to the separate AMD GPU chiplet, which is flanked by a small, dedicated memory package. This separate memory package was only usable by the GPU. The Nvidia/Intel products will have an RTX GPU chiplet connected to the CPU chiplet via the faster and more efficient NVLink interface, and we’re told it will have uniform memory access (UMA), meaning both the CPU and GPU will be able to access the same pool of memory. Given the particulars of Nvidia's NVLink Fusion architecture, we can expect the chips to communicate via a refined interface, but it is unlikely that it will leverage Nvidia's C2C (Chip-to-Chip) technology, an inter-die/inter-chip interconnect that's based on Arm protocols that aren't likely optimized for x86.
Intel notoriously axed the Kaby Lake-G products in 2019, and the existing systems were left without proper driver support for quite some time, in part because Intel was responsible for validating the drivers, and then finger-pointing ensued. We’re told that both Intel and Nvidia will be responsible for their respective drivers for the new models, with Nvidia naturally providing its own GPU drivers. However, Intel will build and sell the consumer processors.
We haven’t spoken with Intel yet, but the limited scope of this project means that Intel’s proprietary Xe graphics architecture will most assuredly live on as the primary integrated GPU (iGPU) for its mass-market products.
Intel will fabricate custom x86 data center CPUs for Nvidia, which Nvidia will then sell as its own products to enterprise and data center customers. However, the entirety and extent of the modifications are currently unknown. We know that Nvidia will employ its NVLink interface, which suggests that the chips could leverage Nvidia’s new NVLink Fusion technology for custom CPUs and accelerators, enabling faster and more efficient communication with Nvidia’s GPUs than is possible with the PCIe interface.
Intel has long offered custom Xeons to its customers, primarily hyperscalers, often with relatively minor tweaks to clock rates, cache capacities, and other specifications. In fact, these mostly slightly-modified custom Xeon models once comprised more than 50% of Intel’s Xeon shipments. Intel has endured several years of market share erosion due to AMD’s advances, most acutely in the hyperscale market. Therefore, it is unclear if the 50% number still holds true, as hyperscalers were the primary customers for custom models.
Intel has said that it will design completely custom x86 chips for customers as part of its IDM 2.0 strategy. However, aside from a recent announcement of custom AWS chips that sound like the slightly modified Xeons mentioned above, we haven’t heard of any large-scale uptake for significantly modified custom x86 processors. Intel announced a new custom chip design unit just two weeks ago, so it will be interesting to learn the extent of the customization for Nvidia’s x86 data center CPUs.
Nvidia already uses Intel’s Xeons in several of its systems, like the Nvidia DGX B300, but these systems still use the PCIe interface to communicate with the CPU. Intel’s new collaboration with Nvidia will obviously open up new opportunities, given the tighter integration with NVLink and all the advantages it brings with it.
The likelihood of AMD adopting NVLink Fusion is somewhere around zero, as the company is heavily invested in its own Infinity Fabric (XGMI) and Ultra Accelerator Link (UALink) initiatives, which aim to provide an open-standard interconnect to rival NVLink and democratize rack-scale interconnect technologies. Intel is also a member of UALink, which uses AMD’s Infinity Fabric protocol as the foundation.
Nvidia’s $5 billion purchase of Intel common stock will come at $23.28 a share, roughly 6% below the current market value, but several aspects of this investment remain unclear. Nvidia hasn’t stated whether it will have a seat on the board (which is unlikely) or how it will vote on matters requiring shareholder approval. It is also unclear if Intel will issue new stock (primary issuance) for Nvidia to purchase, as it did when the U.S. government recently became an Intel shareholder (that is likely). Naturally, the investment is subject to approval from regulators.
Nvidia’s buy-in comes on the heels of the U.S. government buying $10 billion of newly-created Intel stock, granting the country a 9.9% ownership stake at $20.47 per share. The U.S. government won’t have a seat on the board and agreed to vote with Intel’s board on matters requiring shareholder approval “with limited exceptions.” Softbank has also recently purchased $2 billion worth of primary issuance of Intel stock at $23 per share.
Investor | Total | Share Price | Stake in Intel |
Nvidia | $5 Billion | $23.28 | ~5% |
U.S. Government | $9 Billion | $20.47 | ~9.9% |
Softbank | $2 Billion | $23 | |
The U.S. government says it invested in Intel with the goal of bolstering US technology, manufacturing, and national security, and the investments from the private sector also help solidify the struggling Intel. Altogether, these investments represent a significant cash influx for Intel as it attempts to maintain the heavy cap-ex investments required to compete with TSMC, all while struggling with negative free cash flow.
“AI is powering a new industrial revolution and reinventing every layer of the computing stack — from silicon to systems to software. At the heart of this reinvention is Nvidia’s CUDA architecture,” said Nvidia CEO Jensen Huang. “This historic collaboration tightly couples NVIDIA’s AI and accelerated computing stack with Intel’s CPUs and the vast x86 ecosystem—a fusion of two world-class platforms. Together, we will expand our ecosystems and lay the foundation for the next era of computing.”
“Intel’s x86 architecture has been foundational to modern computing for decades – and we are innovating across our portfolio to enable the workloads of the future,” said Intel CEO Lip-Bu Tan. “Intel’s leading data center and client computing platforms, combined with our process technology, manufacturing and advanced packaging capabilities, will complement Nvidia's AI and accelerated computing leadership to enable new breakthroughs for the industry. We appreciate the confidence Jensen and the Nvidia team have placed in us with their investment and look forward to the work ahead as we innovate for customers and grow our business.”
We’ll learn more details of the new partnership later today when Nvidia CEO Jensen Huang and Intel CEO Lip-Bu Tan hold a webcast press conference at 10 am PT. {EDIT: you can read our further coverage of that press event here.}
Not sure it flopped, because the only machine with that CPU I could find is the Intel NUC.
Let's go back even further.. I get strong nForce vibes from that extract!
Took me almost 5 min to drill through enough Wikipedia pages to find the Radeon VII string.
https://en.wikipedia.org/wiki/List_of_AMD_graphics_processin... https://en.wikipedia.org/wiki/Radeon_RX_Vega_series
Contrast that with the earlier R9 285 that I used for nearly 10 years until I was finally able to get a 9070XT that I'm very happy with. They're still refining support for that aged GCN 1.2 driver even today, even if things are a lower priority to backport.
Overall, the ONLY things I'm unhappy about with this GPU generation are:
* Too damned expensive
* Not enough VRAM (and no ECC off of workstation cards?)
* Too hard for average consumers to just buy direct and cut out the scalpers
The only way I could get my hands on a card was to buy through a friend that lives within range of a Microcenter. The only true saints of computer hardware in the whole USA.
Less competition is NOT good news for AMD users. Their CPUs are already a lot less competitively priced now that they've beaten Intel in market share.
That action may cease to exist soon, especially after Vance is POTUS and the courts stacked with Peter Thiel loyalists that back his vision of anti-competition. Bet on it.
Nvidia is leaning more into data centres, but lacks a CPU architecture or expertise. Intel is struggling financially, but has knowledge in iGPUs and a vast number of patents.
They could have a lot to give one another, and it's a massive win if it keeps Intel afloat.
Never imagined politics so obviously manipulating the talking heads with nary a care about perception.
I think the assumption there is that the strategic partnership that is part of the deal would in effect preclude Intel from aggressively competing with NVIDIA in that market, perhaps with the belief that the US government's financial stake in Intel would also lead to reduced anti-trust scrutiny of such an agreement not to compete.
I don't think so:
> The chip giant hasn’t disclosed whether it will use Intel Foundry to produce any of these products yet.
It seems pretty likely this is an x86 licensing strategy for nvidia. I doubt they're going to be manufacturing anything on intel fabs. I even wonder if this is a play to get an in with Trump by "supporting" his nationalizing intel strategy.
...and yet Nvidia is not gambling with the odds. Intel could have challenged Nvidia on performance-per-dollar or per watt, even if they failed to match performance in absolute terms (see AMD's Zen 1 vs Intel)
Preempting a (potential) future competitor from entering a market is also an antitrust issue.
This was all for naught, as AMD purchased ATi, shutting out all other chipsets, and Intel did the same. Things actually looked pretty grim for Nvidia at this point in time. AMD was making moves that suggested APUs were the future, and Intel started releasing platforms with very little PCIe connectivity, prompting Nvidia to build things like the Ion platform that could operate over an anemic PCIe 1x link. These really were the beginnings of strategic moves to lock Nvidia out of their own market.
Fortunately, Nvidia won a lawsuit against Intel that required them to have PCIe 16x connectivity on their main platforms for 10 years or so, and AMD put out non-competitive offerings in the CPU space such that the APU takeoff never happened. If Intel had actually developed their integrated GPUs or won that lawsuit, or if AMD had actually executed, Nvidia might well be an also-ran right around now.
To their credit, Nvidia really took advantage of their competitors' inability to press their huge strategic advantage during that time. I think we're in a different landscape at the moment. Neither AMD nor Intel can afford to boot Nvidia, since consumers would likely abandon them for whoever could still slot in an Nvidia card. High performance graphics is the domain of add-in boards now and will be for a while. Process node shrinks aren't as easy and cooling solutions are getting crazy.
But Nvidia has been shut out of the new handheld market and hasn't been a good total package for consoles, as SoCs rule the day in both those spaces, so I'm not super surprised at the desire for this pairing. But I did think Nvidia had given up these ambitions and was planning to try to build an adjacent ARM-based platform as a potential escape hatch.
Apart from that APU (the 395+) from AMD, Intel's iGPUs are on par with AMD's right now.
The 395+ is more like a dGPU and CPU on the same die.
Doesn't feel the same because the 1997 investment was arranged by Apple co-founder Steve Jobs. He had a long personal relationship with Bill Gates so could just call him to drop the outstanding lawsuits and get a commitment for future Office versions on the Mac. Basically, Steve Jobs at relatively young age of 42 was back at Apple in "founder mode" and made bold moves that the prior CEO Gil Amelio couldn't do.
Intel doesn't have the same type of leadership. Their new CEO is a career finance/investor instead of a "new products new innovation" type of leader. This $5 billion investment feels more like the result of back-channel discussions with the US government where they "politely" ask NVIDIA to help out Intel in exchange for less restrictions selling chips to China.
They were one of the first to actually support open source drivers, with the r128 and original radeon (r100) drivers. Then went radio silence for the next few years, though the community used that as a baseline to support the next few generations (r100 to r500).
Then they reemerged with actually providing documentation for their Radeon HD series (r600 and r700), and some development resources but limited - and often at odds with the community-run equivalents at the time (lots of parallel development with things like the "radeonhd" driver and disagreements on how much they should rely on their "atombios" card firmware).
That "moderate" level of involvement continued for years, releasing documentation and some initial code for the GCN cards, but it felt like beyond the initial code drops most of the continuing work was more community-run.
Then only relatively recently (the last ~10 years) have they started putting actual engineering effort into things again, with AMDGPU and the majority of mesa changes now being paid for by AMD (or Valve, which is "AMD by proxy" really as you can guarantee every $ they spend on an engineer is $ less they pay to AMD).
So hopefully that's a trend you can actually rely on now, but I've been watching too long to think that can't change on a dime.
Intel was well on its way to be a considerable threat to NVIDIA with their Arc line of GPUs, which are getting better and cheaper with each generation. Perhaps not in the enterprise and AI markets yet, but certainly on the consumer side.
This news muddies this approach, and I see it as a misstep for both Intel and for consumers. Intel is only helping NVIDIA, which puts them further away from unseating them than they were before.
Competition is always a net positive for consumers, while mergers are always a net negative. This news will only benefit shareholders of both companies, and Intel shareholders only in the short-term. In the long-term, it's making NVIDIA more powerful.
Someone should tell nvidia that. They sure seem to think they have a datacenter CPU.
https://www.nvidia.com/en-us/data-center/grace-cpu-superchip...
> Nvidia is fabless and has concentrated supplier and geopolitical risk with TSMC. Intel is one of the only other leading fabs onshor[e]
TSMC is building state of the art fabs in Arizona, USA. Samsung in Texas, USA. I assume these are being done to reduce geopolitical risk on all sides. Something that I never read about: why can't NVidia use Samsung fabs? They are very close to TSMC's state of the art.
The East India Company conducted continental wars on its own. A modern company with a $4T valuation, country-GDP-sized revenue, and possession of the key military technology of today's and tomorrow's wars (AI software and hardware, including robotics) can successfully wage such a continental war through a suitable proxy, say an oversized private military contractor (especially if it is massively armed with drones and robots), and in particular is capable of defending an island like Taiwan. (Or thinking backwards: an attack on Taiwan would cause a trillion or two drop in NVDA's valuation. What options get on the table when there is a threat of a trillion-dollar loss? To compare, 20 years of Iraq cost 3 trillion, i.e. $150B/year buys you a lot of military hardware and action, and an efficient defense of Taiwan would cost much less than that.)
QuickTime got stolen by an ex-Apple employee, and in return Apple had Microsoft commit money and promise to keep the Office suite available on macOS/OS X.
Correction - if they care. And they don't care to do it on Linux, so you get them dragging feet for decades for something like Wayland support, PRIME, you name it.
Basically, the result is that in practice they offer abysmally bad support, otherwise they'd have upstream kernel drivers and no userspace blobs. Linux users should never buy Nvidia.
This actually makes sense: for example, a new task has swapped out previous task's data, or host and guest are sharing the GPU and pushing each others data away. I don't understand why this is not a part of GPU-related standards.
As for a solution, wouldn't discarding all the GPU data after resume help? Or keeping the data in system RAM?
Apple's demise would've nailed the case.
Sure it makes sense for NVidia to propagate proprietary interfaces like NVLink, as well as to do anything that helps drive their own GPU sales, but I'm not sure how you so confidently conclude from that that propping up Intel is in NVidia's best interest ?
Not all deals are made in cash; they can borrow money against their market cap.
AMD's actual commitment to open innovation over the past ~20 years has been game changing in a lot of segments. It is the aspect of AMD that makes it so much more appealing than intel from a hacker/consumer perspective.
I mean that also applies to Intel and Nvidia. Intel does make GPUs but their market impact is basically zero.
But yeah, it's probably easier to just use cash on hand.
However, I do imagine Intel GPUs, which were never great to start with, might be doomed long term.
Another possibility is that there goes oneAPI, which I doubt many people would care about, given how many rebrands SYCL has already gone through.
This has been somewhat improved-- some mainboards will have HDMI and DisplayPort plumbed to the iGPU, but the classic "trader desk" with 4-6 screens hardly needs a 5090.
They could theoretically sell the same 7xx and 1030 chips indefinitely. I figure it's a static market like those strange 8/16Mb VGA chipsets that you sometimes see on server mainboards, just enough hardware to run diagnostics on a normally headless box.
I think this partnership will damage nvidia. It might damage intel, but given they're circling the drain already, it's hard to make matters worse.
It's probably bad for consumers in every dimension.
Or to take the opposite, if nvidia rolled over intel and fired essentially everyone in the management chain and started trying to run the fabs themselves, good chance they'd turn the ship around and become even more powerful than they already are.
Intel isn’t at that point, but the company's trajectory isn’t looking good. I’d happily sacrifice ARC to keep a duopoly in CPUs.
Consumers still have AMD as an alternative for very decent and price attractive GPUs (and CPUs).
But there's always TSMC being a pretty hard bottleneck - maybe they just can't get enough (and can't charge close to their GPU offerings per wafer), and pairing with Intel themselves is preferable to just using Intel's Foundry services?
To be fair from what I hear someone really should tell at least half of nvidia that.
The Taiwanese government prevents them from doing it. The leading node has to be on Taiwanese soil.
They're not. Most have tried at one point. Apple had a release split between TSMC and Samsung, and users spotted a difference. There was quite a bit of negativity.
[0] https://thisdayintechhistory.com/12/06/apple-sues-over-quick...
Right now if the US wants to go to war with China, or anyone China really really likes, they can expect with high probability to very quickly encounter major problems getting the best chips. AIUI the world has other fab capacity that isn't in Taiwan, and some of it is even in the US, but they're all on much older processes. Some things it's not a problem that maybe you end up with an older 500MHz processor, but some things it's just a non-starter, like high-end AI.
Sibling commenters discussing profits are on the wrong track. Intel's 2024 revenue, not profits, was $53.1 billion. The Federal Government in 2024 spent $6,800 billion. No entity doing $1.8 trillion in 2024 in deficit spending gives a rat's ass about "profits". The US Federal government just spends what it wants to spend, it doesn't have any need to generate any sort of "profits" first. Thinking the Federal government cares about profits is being nowhere near cynical enough.
The AI hype train was built on the premise that AI will progress linearly and eventually end up replacing a lot of well paid white collar work, but it failed to deliver on that promise by now, and progress has flatlined or sometimes even gone backwards (see GPT-5 vs 4o).
FAANG companies can only absorb these losses for so long before shareholders pull out.
Yes.
> Other than the market segmentation over RAM amounts, I don't see very much difference.
The difference between CDNA and RDNA is pretty much how fast it can crunch FP64 and SR-IOV. Prior to RDNA, AMD GPUs were jacks of all trades with compute bias. Which made them bad for gaming unless the game is specifically written around async compute. Vega64 has more FP64 compute than the 4080 for context.
I think if AMD was able to get a solid market share of datacenter GPUs, they wouldn't have unified. This feels like CDNA team couldn't justify its existence.
This feels like a 'brand new sentence' to me because I've never met an ALi chipset that I liked. Every one I ever used had some shitty quirk that made VIA or SiS somehow more palatable [0] [1].
> Intel started releasing platforms with very little PCIe connectivity,
This is also a semi-weird statement to me, in that it was nothing new; Intel already had an established history of chipsets like the i810, 845GV, 865GV, etc which all lacked AGP. [2]
[0] - Aladdin V with its AGP instabilities, MAGiK 1 with its poor handling of more than 2 or 3 'rows' of DDR (i.e. two double-sided sticks of DDR turned it into a shitshow no matter what you did to timings; 3 was usually 'ok-ish' and 2 was stable.)
[1] - SIS 730 and 735 were great chipsets for the money and TBH the closest to the AMD760 for stability.
[2] - If I had a dollar for every time I got to break the news to someone that there was no real way to put a Geforce or 'Radon' [3] in their eMachine, I could have had a then-decent down payment for a car.
[3] - Although, in an odd sort of foreshadowing, most people who called it a 'Radon', would specifically call it an AMD Radon... and now here we are. Oddly prescient.
I realize the AGX is more of a low power solution and it's possible that nvidia is still technically limited when building SOCs but this is just speculation.
Does anybody know actual ground truth reasoning why Nvidia is buying Intel despite the fact that nvidia can make their own SOCs?
I don't understand what you're saying here. I've used NVidia on Linux and FreeBSD a lot. They work great.
If your argument is they don't implement some particular feature that matters to you, fair enough. But that's not an argument that they don't offer stability or Linux support. They do.
Both Mutter and KWin have really good Nvidia Wayland sessions nowadays.
Looking at Google's recent antitrust settlement, I'm not sure this is true at present.
https://www.theregister.com/1998/10/29/microsoft_paid_apple_...
> handwritten note by Fred Anderson, Apple's CFO, in which Anderson wrote that "the [QuickTime] patent dispute was resolved with cross-licence and significant payment to Apple." The payment was $150 million
The days of Nvidia ignoring Linux is over.
It would be an enormous loss to the consumer/enthusiast GPU buyer, as a third major competitor is improving the market from what feels like years and years of dreadful price/perf ratio.
To get money from the outside, you either have to take on debt or you have to give someone a share in the business. In this case, the board of directors concluded the latter is better. I don't understand why you think it is gross.
I'm sorry that's just not correct. Intel is literally just getting started in the GPU market, and their last several releases have been nearly exactly what people are asking for. Saying "they've lost" when the newest cards have been on the market for less than a month is ridiculous.
If they are even mediocre at marketing, the Arc Pro B50 has a chance to be an absolute game changer for devs who don't have a large budget:
https://www.servethehome.com/intel-arc-pro-b50-review-a-16gb...
I have absolutely no doubt Nvidia sees that list of "coming features" and will do everything they can to kill that roadmap.
Intel is up 30% pre market on this news so I think the existing shareholders will be fine.
Out of 12 mobile GT 7xx cards, only 3 were Fermi (and 2 of those were M parts, not GT); the rest were Kepler.
This is why they built the Grace CPU - noting that they're using Arm's Neoverse V2 cores rather than their own design.
I did the math on TSMC N2 vs Intel 18A, and the former is 30% denser according to TSMC
Microsoft once owned a decent amount of Apple & Facebook for example.
The fact that Google pays Firefox annually means, the judge says, that it's in Google's best interest that there is no monopoly.
The "F you Nvidia" Linus Torvalds moment in 2012 is a meme that will not die.
But with the state of the courts today... who knows..
Would be foolish to throw that away now that they're finally getting closer to "a product someone may want to buy" with things like B50 and B60.
In 2009 he said no to an Intel-compatible x86 CPU.
In 2022 his stance was that if x86 already exists, they'll just use it or partner: “One of the rules of our company is not to squander resources on something that already exists. If something already exists, for example, an x86 CPU, we’ll just use it… we’ll partner with them.”
No. They were ordered to make this investment...
But also, does this amount of ownership even give them the ability to kill anything on Intel's roadmap without broad shareholder consensus (not that that's even how roadmaps are handled anyway)?
https://www.ft.com/content/12adf92d-3e34-428a-8d61-c91695119...
Investing in Apple and Borland was a counter-antitrust legal move, keeping the competitors alive, but on life support. That way they could say to the government, "yes, there is competition."
Google does the same these days by keeping Firefox alive.
I have since switched from Ubuntu to Fedora, maybe Fedora ships mesa and kernel updates within a week or two from release, I don't know. But being unable to use the preferred distro is a serious downside for many people.
Intel's foundry costs are probably competitive with nvidia too - nvidia has too much opportunity cost if nothing else.
While it doesn't quite compete at performance and power consumption, it does at price/performance and overall value. It is a $250 card, compared to the $300 of the 4060 at launch. You can still get it at that price, if there's stock, while the 4060 hovers around $400 now. It's also a 12GB card vs the 8GB of the 4060.
So, sure, this is not competitive at the high-end segment, but it's remarkable what they've accomplished in just a few years, compared to the decades that AMD and NVIDIA have on them. It's definitely not far fetched to assume that the gap would only continue to close.
Besides, Intel is not only competing at GPUs, but APUs, and CPUs. Their APU products are more performant and efficient than AMD's (e.g. 140V vs 890M).
How was Intel "circling the drain"?
They have a very competitive offering of CPUs, APUs, and GPUs, and the upcoming Panther Lake and Nova Lake architectures are very promising. Their products compete with AMD, NVIDIA, and ARM SoCs from the likes of Apple.
Intel may have been in a rut years ago, but they've recovered incredibly well.
This is why I'm puzzled by this decision, and as a consumer, I would rather use a fully Intel system than some bastardized version that also involves NVIDIA. We've seen how well that works with Optimus.
AMD has always closely followed NVIDIA in crippling their cheap GPUs for any other applications.
After many years of continuously decreasing performance in "consumer" GPUs, only Intel's Battlemage GPUs offer FP64 performance comparable with what could easily be obtained 10 years ago, but no longer today.
Therefore, if the Intel GPUs disappear, then the choices in GPUs will certainly become much more restricted than today. AMD has almost never attempted to compete with NVIDIA in features, but whenever NVIDIA dropped some feature, so did AMD.
An example: the Starlink antenna, sub-$500, a phased array that is actually like a half or a third of such an array on a modern fighter jet, where it costs several million. Musk naturally couldn't go the way of a million per antenna, so he had to develop and source it on his own. The same with anti-missile defense: if/when NVDA gets to it to defend the TSMC fabs, NVDA would produce such defense systems orders of magnitude cheaper, and that defense would work much better than modern military systems.
China's economy shuts down in a month, their population starves in another month
The US is desperate to not have that war, because they spent so long in denial about how sophisticated China has become that it would be a total humiliation. What you see as the US wanting war is them simply playing catch up.
The US government always ought to have the interest of US companies in mind, their job is to work in the interest of the voters and a lot of us work for US companies.
They don’t make AI chips really, they make the best high-throughput, high-latency chips. When the AI bubble pops, there’ll be a next thing (unless we’re really screwed). They’ve got as good chance of owning that next thing as anybody else does. Even better odds if there are a bunch of unemployed CUDA programmers to work on it.
This is comically premature.
Also never forget that in technology moreso than any other industry showing a loss while actually secretly making a profit is a high art form. There is a lot of land grabbing happening right now, but even so it would be a bit silly to take the profit/loss public figures at face value.
I had a ULi M1695 board (ASRock 939SLI32-eSATA2) and it was unusual for the era in that it was a $90 motherboard with two full x16 slots. Even most of the nForce boards at the time had it set up as x8/x8. For like 10 minutes you could run SLI with it until nVidia deliberately crippled the GeForce drivers to not permit it, but I was using it with a pretty unambitious (but fanless-- remember fanless GPUs?) 7600GS.
They also did another chipset pairing that offered a PCI-Ex16 slot and a fairly compatible AGP-ish slot for people who had bought an expensive (which then meant $300 for a 256MB card) graphics card and wanted to carry it over. There were a few other boards using other chipsets (maybe VIA) that tried to glue together something like that, but the support was much more hit-or-miss.
OTOH, I did have an Aladdin IV ("TXpro") board back in the day, and it was nice because it supported 83MHz bus speeds when a "better" Intel TX board wouldn't. A K6-233 overclocked to 250 (3x83) was detectably faster than at 262 (3.5x75)
I remember a lot of disappointed people on forums who couldn't upgrade their cheap PCs as well, but there were still motherboards available with AGP to slot Intel's best products into. Intel couldn't just remove it from the landscape altogether (assuming they wanted to) because they weren't the only company making Intel-supporting chipsets. IIRC Intel/AMD/Nvidia were not interested in making AGP+PCIe supporting chipsets at all, but VIA/ALi and maybe SiS made them instead because it was a free-for-all space still. Once that went away, Nvidia couldn't control their own destiny.
I.e. if anything new will need something implemented tomorrow, Nvidia will make their users wait another decade again. Which I consider an unacceptable level of support and something that flies in the face of those who claim that Nvidia supports Linux well.
I saw this move more as setting up a worthy competitor to Snapdragon X Elite, and it could also probably crush AMD APUs if these RTX things are powerful.
Since "nm" is meaningless these days, the transistor count/mm2 is below.
As reference: TSMC 3nm is ~290 million transistors/mm2 (MTr/mm2).
Node | IBM | TSMC | Intel | Samsung
22nm | | | 16.50 |
16nm/14nm | | 28.88 | 44.67 | 33.32
10nm | | 52.51 | 100.76 | 51.82
7nm | | 91.20 | 237.18 | 95.08
5nm | | 171.30 | |
3nm | | 292.21 | |
2nm | 333.33 | | |
https://news.ycombinator.com/item?id=27063034
https://www.techradar.com/news/ibm-unveils-worlds-first-2nm-...
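For a sense of scale, the node-to-node jumps implied by those numbers are easy to compute. A rough sketch using only the figures from the table above, treated as approximate:

```c
/* Rough density scaling factors computed from the table above (MTr/mm2). */
#include <stdio.h>

int main(void)
{
    double tsmc_7nm = 91.20, tsmc_5nm = 171.30, tsmc_3nm = 292.21;
    double intel_10nm = 100.76;

    printf("TSMC 7nm -> 5nm: %.2fx density\n", tsmc_5nm / tsmc_7nm);   /* ~1.88x */
    printf("TSMC 5nm -> 3nm: %.2fx density\n", tsmc_3nm / tsmc_5nm);   /* ~1.71x */
    printf("Intel 10nm vs TSMC 7nm: %.2fx\n", intel_10nm / tsmc_7nm);  /* ~1.10x, roughly comparable */
    return 0;
}
```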
They turned down Acorn about the 286, which led to Acorn creating the Arm, they have turned down various console makers, they turned down Apple on the iPhone, and so on. In all cases they thought the opportunities were beneath them.
Intel has always been too much about what they want to sell you, not what you need. That worked for them when the two aligned over backwards compat.
Clearly the threat of an Arm or RISC-V finding itself fused to a GPU running AI inference workloads has woken someone up, at last.
I agree that Intel would be better served to spin off its fab division, a potential buyer could be the US government for military and national security relevant projects.
So yes. That's how American competition works.
It isn't a zero sum game. We try to create a market environment that is competitive and dynamic.
Monopolies are threat to both the company and a free open dynamic market. If Nvidia feels it could face an antitrust suit, which is reasonable, it is in its best interest to fund the future of Intel.
That's American capitalism.
Literally a previous gen card.
To that point, they've been "just getting started" in practically every chip market other than x86/x64 CPUs for over 20 years now, and have failed miserably every time.
If you think Nvidia is doing this because they're afraid of losing market share, you're way off base.
> 224 GB/s
> 128-bit
The monkey's paw curls... I love GPU differentiation, but this is one of those areas where Nvidia is justified in shipping less VRAM. With less VRAM, you can use fewer memory controllers to push higher speeds on the same memory!
For instance, both the B50 and the RTX 2060 use GDDR6 memory. But the 2060 has a 192-bit memory bus, and enjoys ~336 GB/s bandwidth because of it.
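For what it's worth, the gap follows directly from bus width times per-pin data rate. A quick back-of-the-envelope sketch (the 14 Gbps figure is an assumption inferred from the quoted 224 GB/s over a 128-bit bus, not something from the review):

```c
/* Peak GDDR bandwidth = (bus width in bits / 8) * per-pin data rate in Gbps. */
#include <stdio.h>

static double gddr_bw_gbytes(double bus_bits, double gbps_per_pin)
{
    return bus_bits / 8.0 * gbps_per_pin;
}

int main(void)
{
    printf("Arc Pro B50 (128-bit): %.0f GB/s\n", gddr_bw_gbytes(128, 14.0)); /* ~224 */
    printf("RTX 2060   (192-bit): %.0f GB/s\n", gddr_bw_gbytes(192, 14.0)); /* ~336 */
    return 0;
}
```

Same memory type, 1.5x the bus width, 1.5x the bandwidth.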
My RTX 5090 is about 10x faster (measured by FP32 TFLOPS) and I still don't find it to be fast enough. I can't imagine using something so slow for AI/ML. Only 2.2 tokens/sec on an 8B parameter Llama model? That's slower than someone typing.
I get that it's a budget card, but budget cards are supposed to at least win on a pure price/performance ratio, even with a lower baseline performance. The 5090 is 10x faster but only 6-8x the price, depending on where in the $2-3,000 price range you can find one at.
According to Wikipedia, Intel 7nm density is ~62 MTr/mm2. I cannot find the source wikichip page mentioned in the post you referenced.
FWIW, I am not in the semi industry and all my info is from Wikipedia: https://en.m.wikipedia.org/wiki/7_nm_process https://en.m.wikipedia.org/wiki/2_nm_process
The business is looking for additional capital. You can only do that by either selling new shares or raising debt.
> in the business like everyone else, not increasing the total share count or causing dilution. They chose not to do this because it would have been more expensive due to properly compensating existing shareholders. So it's spiritually just theft.
Shareholder dilution isn't inherently theft. Specific circumstances, motivations, and terms of issuance have a bearing on whether the dilution is harmful or whether it is necessary for the business.
For instance, it can be harmful if: minority shareholders are oppressed, shares are issued at a deeply discounted price with no legitimate business need or to benefit insiders at the expense of other shareholders, or if the raised capital isn't used effective to grow the company.
Dilution can be beneficial, such as when the raised capital is used for growth, employee compensation via employee stock options, etc.
Nah, nobody cares about that. Even in their heyday, SLI and CrossFire barely made sense technologically. That market is basically non-existent. There's more people now wanting to run multiple GPUs for inference than there ever were who were interested in SLI, and those people can mix and match GPUs as they like.
Also their network cards no longer work properly which is deeply aggravating as that used to be something I could rely on, just bought some realtek ones to work around the intel ones falling over.
Some would say that's circling the drain.
Don't mistake talking about a thing as advocating for that thing. It leaves you completely unable to process international politics, and frankly, a lot of other news and discussion as well. If you can only think about things you approve of, your model of the world is worse than useless.
The countries wouldn't fire nukes against each other's mainlands, but maybe against each other's fleets. Pretty likely.
Destroy the other country?
Take it over?
Be in a 1984-style "fake" war forever?
There will undoubtedly still be a market for Nvidia chips but it won’t be enough to keep things going as they are.
A new market opening up with the same demand as AI just at the point that AI pops would be a miracle. Something like being an unsecured bond holder in 2010.
And what is that post-AI bubble "next big thing" exactly?
If there were, you'd already see people putting their money towards it.
When you follow the progress in the last 12 months, it really isn't. Big AI companies spent "hella' stacks" of cash, but delivered next to no progress.
Progress has flatlined. The "rocket to the moon" phase has already passed us by now.
Anyway, I'd just point out that users don't even need to depend on the bots for increased productivity; they just need to BELIEVE it increases their productivity. Exhibit A is the recent study which found that experienced programmers were actually less productive when they used an LLM, even though they self-reported productivity gains.
This may not be the first time the tech industry has tricked us into thinking it makes us more productive, when in reality it's just figuring out ways to consume more of our attention. In Deep Work, Cal Newport made the argument that interruptive "network tools" in general decrease focus and therefore productivity, while making you think that you're doing something valuable by staying constantly connected. There was a study on this one too. They looked at consultants who felt that replying as quickly as possible to their clients, even outside of work hours, was important to their job performance. But then when they took the interruptive technologies away, spent more time focusing on their real jobs, and replied to the clients less often, they started producing better work and client feedback scores actually went up.
Now personally I haven't stopped using an LLM when I code but I'm certainly thinking twice about how I use it these days. I actually have cut out most interruptive technology when I work, i.e. email notifications disabled, not keeping Slack open, phone on silent in a drawer, etc. and it has improved my focus and probably my work quality.
Numbers prove we aren't. Sales figures show very few customers are willing to pay $200 per month for the top AI chatbots, and even at $200/month, OpenAI is still taking a loss on that plan, so they're losing money even with top-dollar customers.
I think you're unaware of just how unprofitable the big AI products are. This can only go on for so long. We're not in the ZIRP era anymore, where SV VC-funded unicorns could be unprofitable indefinitely and endlessly burn cash on the idea that once they eventually beat all competitors in the race to the bottom and become monopolies, they could finally turn a profit by squeezing users with higher real-world prices. That ship has sailed.
Stinks of Mussolini-style Corporatism to me.
It was definitely going to upset the market. Now I understand the radio silence on a card that was supposed to have been coming by Xmas.
It leads to mistakes like you mention, where a new market segment or new entrant is not a sure thing. And then it leads to mistakes like Larrabee and Optane where they talk themselves into overconfidence (“obviously this is a great product, we wouldn’t be doing it if it wasn’t guaranteed to make $1B in the first year”).
It is very hard to grow a business with zero risk appetite. You can’t take risky high return bets, and you can’t acknowledge the real risk in “safe” bets.
The problem is, console manufacturers know precisely how much of their product they anticipate selling, and it's usually a lot. The PlayStation 5 is at 80 million units so far.
And at that scale, the console manufacturers want to squeeze every vendor as hard as they can... and Intel didn't see the need to engage in a bidding war with AMD that would have given them a sizable revenue but very little profit margin compared to selling Xeon CPUs to hyperscalers where Intel has much more leverage to command higher prices and thus higher margins.
> they turned down Apple on the iPhone
Intel just was (and frankly, still is) unable to compete with ARM on the power envelope; that's why you never saw x86 take off on Android either, despite quite a few attempts at it.
Apple only chose to go with Intel for its MacBook line because PowerPC was practically dead and offered no way to extract more performance, and they dropped Intel as soon as their own CPUs were competitive. To get Intel CPUs to the same level of power efficiency that M-series CPUs have would require a full rework of the entire CPU infrastructure and external stack, which would require money that even Intel at its best frankly did not have. And getting x86 power efficient enough for a phone? Just forget it.
> Clearly the threat of an Arm or RISC-V finding itself fused to a GPU running AI inference workloads has woken someone up, at last.
Actually, that is surprising to me as well. NVIDIA's Tegra should easily be powerful enough to run the OS for training or inference workloads. If I were to guess, NVIDIA wants to avoid getting caught too hard on the "selling AI shovels" train.
They just need to separate business units.
Looking at Google's recent antitrust settlement, I'm not sure this is true at present.
It's also orders of magnitude slower than what I normally see cited by people using 5090s; heck, it's even much slower than what I see on my own 3080 Ti laptop card for 8B models, though I usually won't use more than an 8 bpw quant for a model that size.
> The 5090 is 10x faster but only 6-8x the price
I don't buy into this argument. A B580 can be bought at MSRP for $250. An RTX 5090 from my local Microcenter is around $3,250. That puts it at around 1/13th the price.
Power costs can also be a significant factor if you choose to self-host, and I wouldn't want to risk system integrity for 3x the power draw, 13x the price, a melting connector, and Nvidia's terrible driver support.
EDIT: You can get an RTX 5090 for around $2,500. I doubt it will ever reach MSRP though.
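For what it's worth, even taking the 10x FP32 figure quoted above at face value, the raw price/performance math still favors the budget card at these street prices (which obviously move around):

```python
# Relative performance per $1000, using the FP32 ratio and street prices quoted in this thread.
cards = {
    "B580":     {"price": 250,  "relative_speed": 1.0},
    "RTX 5090": {"price": 3250, "relative_speed": 10.0},  # ~10x faster per the FP32 claim above
}
for name, c in cards.items():
    print(f"{name}: {c['relative_speed'] / c['price'] * 1000:.2f} perf per $1000")
# B580: 4.00, RTX 5090: 3.08 -> at 13x the price, 10x the speed still loses on the pure ratio
```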
The Intel card is great for 1080p gaming. Especially if you're just playing Counter-Strike, indie games, etc., you don't need a beast.
Very few people are trying to play 4K Tomb Raider on ultra at a high refresh rate.
It also happened under G. W. Bush with banks and auto manufacturers, but the worst offense was under Nixon with his nationalization of passenger rail.
At least with the bank and car manufacturer bailouts the government eventually sold off its stock, and with the Intel investment the government has non-voting shares, but the government completely controls the National Railroad Passenger Corporation (the NRPC, aka Amtrak), with its board members appointed by the President of the United States.
We lost 20 independent railroads overnight, and created a conglomerate that can barely function.
If you fiddle and concentrate only on the top performers, the bottom falls out. Most of the US economy is still in small companies.
Wait, RDR2 is badly optimized? When I played it on my Intel Arc B580 and Ryzen 7 5800X, it seemed to work pretty well! Way better than almost any UE5 title, like The Forever Winter (really cool concept, but couldn't get past 20-30 FPS, even dropping down to 10% render scale on a 1080p monitor). Or with the Borderlands 4 controversy, I thought there'd be way bigger fish to fry.
Sounds like they will someday soon.
There will always be giant, faraway GPU supercomputer clusters to train models. But the future of inference (where the model fits) is local to the CPU.
It's typical corporate venturing and reporting to a CFO. Google is not much better with them cutting their small(er) projects.
Intel, at one of its lowest lows, still came up with Lunar Lake, which is not as efficient as Apple's M series, but is still quite impressive.
I bet if they had focused on mobile when they were at their peak, they could have come up with something similar to Apple's M series.
Google search is genuinely being threatened.
Google is not a monopoly, not entirely.
If AI usage also starts accruing to Google then there should be a new antitrust suit.
Correct, but said technology needs to be commercially self-sustaining. The cost the white-collar worker pays needs to be enough to cover the cost of running the AI plus profit.
It seems like we are a long way off that yet, but maybe we expect an AI to solve that problem, à la Kurzweil.
We must live in different universes, then.
Intel's 140V competes with and often outperforms AMD's 890M, at around half the power consumption.[1]
Intel's B580 competes with AMD's RX 7600 and NVIDIA's RTX 4060, at a fraction of the price of the 4060.[2]
They're not doing so well with desktop and laptop CPUs, although their Lunar Lake and Arrow Lake CPUs are still decent performers within their segments. The upcoming Panther Lake architecture is promising to improve this.
If these aren't the signs of competitive products that are far from "circling the drain", then I don't know what is.
FWIW, I'm not familiar with the health of their business, and what it takes to produce these products. But from a consumer's standpoint, Intel hasn't been this strong since... the early 00s?
[1]: https://www.notebookcheck.net/Radeon-890M-vs-Arc-140V_12524_...
[2]: https://www.notebookcheck.net/Intel-Arc-B580-Benchmarks-and-...
Moreover, there were claims that the memory errors on GTX Titan were quite frequent. On graphics applications memory errors seldom matter, but if you have to do a computation twice to be certain that there were no memory errors affecting the results, that removes much of the performance advantage of a GPU.
A shocking surprise needs to be a surprise for it to work. Call it strategic naivety if you want.
I don’t think geographically restricting a war is even possible, really. The US’s typical game plan involves hitting the enemy’s decision-making capabilities faster than they can react. That goes out the window if we can’t hit each other’s mainlands. A war where we don’t get to use our strongest trick and China keeps their massive industrial base is an absurd losing one that the US would be totally nuts to sign up for.
Anyway, we and China can be perfectly good peaceful competitors.
To save others a search, here is a blog post and the paper:
https://metr.org/blog/2025-07-10-early-2025-ai-experienced-o...
https://arxiv.org/abs/2507.09089
Thanks for mentioning it.
[1] https://www.axios.com/2025/08/15/sam-altman-gpt5-launch-chat...
Make no mistake - there is no reason to do this besides shortening the hardware lifespan with Box86. But it is possible, most certainly.
Let's assume Trump admin pressured Nvidia to invest in intel.
The CHIPS Act (voted for by Democrats / Biden) gave Intel up to $7.8 billion of YOUR money (taxes) in the form of direct grants.
Was it more of "Mussolini-style corporatism" to you or not?
I just meant their integrated GPUs are what's completely safe here.
(And also, I'm pretending Macs don't exist for this statement. They aren't even PCs anymore anyway, just giant iPhones, from a silicon perspective.)
This relates to the Intel problem because they see the world the way you just described, and completely failed to grasp the importance of SoC development where you are suddenly free to consider the world without the preexisting buses and peripherals of the PC universe and to imagine something better. CPU cores are a means to an end, and represent an ever shrinking part of modern systems.
Maybe this changed with the AI race but there are plenty of people buying older chips by the millions for all sorts of products.
80 million in 5 years is a nothing burger as far as volume.
They've been making discrete GPUs on and off since the 80s, and this is at least their 3rd major attempt at it as a company, depending on how you define "major".
They haven't "just started" on this iteration either, as the Arc line has been out since 2022.
The main thing I learned from this submission is how much people hate Nvidia.
I've been using Mistral 7B, and I can get 45 tokens/sec, which is PLENTY fast, but to save VRAM so I can game while doing inference (I run an IRC bot that allows people to talk to Mistral), I quantize to 8 bits, which then brings my inference speed down to ~8 tokens/sec.
For gaming, I absolutely love this card. I can play Cyberpunk 2077 with all the graphics settings set to the maximum and get 120+ fps. Though when playing a much more graphically intense game like that, I certainly need to kill the bot to free up the VRAM. But I can play something simpler like League of Legends and have inference happening while I play with zero impact on game performance.
I also have 128 GB of system RAM. I've thought about loading the model in both 8-bit and 16-bit into system RAM and just swap which one is in VRAM based on if I'm playing a game so that if I'm not playing something, the bot runs significantly faster.
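If it helps, here's a minimal sketch of how that swap could look with llama-cpp-python and GGUF quants. The filename and layer counts are placeholders, not what the parent actually runs:

```python
# Minimal sketch: offload everything to the GPU while idle, keep most layers on the CPU while gaming.
# Model path and n_gpu_layers values are placeholders; tune them to your own VRAM budget.
from llama_cpp import Llama

def load_model(gaming: bool) -> Llama:
    n_gpu_layers = 8 if gaming else -1   # -1 offloads every layer to the GPU
    return Llama(model_path="mistral-7b-instruct.Q8_0.gguf",
                 n_gpu_layers=n_gpu_layers, n_ctx=4096)

llm = load_model(gaming=False)
out = llm("Say hello to the IRC channel.", max_tokens=32)
print(out["choices"][0]["text"])
```

Reloading the model on a game launch/exit event is slower than flipping a pointer between two copies in system RAM, but it keeps VRAM usage predictable.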
And so that gave AMD an opening, and with that opening they got to experiment with designs, tailor a product, and gain experience and industrial market share, and they were able to continue to offer more and better products. Intel didn't just miss a mediocre business opportunity; they missed out on becoming a trusted partner for multiple generations, and they handed market share to AMD that AMD used to become a better market competitor.
2010-2011 was also the time that AMD was starting to moan a bit about DX11 and the higher-level APIs not being sufficient to get the most out of GPUs, which led to Mantle/Vulkan/DX12 a few years down the road. Intel did a bit regarding massively parallel software rendering, with the flexibility to run on anything x86 and implement features as you liked, as did AMD with its "Fusion" efforts (CPU+GPU APUs, after recently acquiring ATi) and HSA, which I seem to recall was about dispatching different types of computing to the best-suited processor(s) in the system. However, I got the impression that a lot of development effort was more interested in progressing what they already had instead of starting in a new direction, and game studios want to ship a finished and stable/predictable product, which is where support from Intel would have helped.
But certainly Intel wasn’t willing to wait for the market. Didn’t make $1 billion instantly; killed.
NVDA sold 153 million Tegra units to Nintendo in 8 years, so 1.5M units a month. That's just as comparable.
[1] https://www.servethehome.com/on-ice-lake-intel-xeon-volumes-...
I think there's a lot of frustration with Nvidia as of late. Their monopoly was mostly won on the merits of their technology but now that they are a monopoly they have shifted focus from building the best technology to building the most lucrative technology.
They've demonstrated that they no longer have any interest in producing the best gaming GPUs, because those might cannibalize their server technology. Instead they seem to focus on crypto and AI while shipping kneecapped cards at outrageous prices.
People are upset because they fear this deal will somehow influence Intel's GPU ambitions. Unfortunately I'm not sure these folks want to buy Intel GPUs, they just want Nvidia to be scared into competing again so they can buy a good Nvidia card.
People just need to draw a line in the sand and stop supporting Nvidia.
AMD isn't precisely a market competitor. The server and business compute market is still firmly Intel, and there isn't much evidence of that changing unless Apple drops M-series SoCs into the wide-open market, which Apple won't do. Intel could probably release a raging dumpster fire and still go strong. Oh wait, that's what they've been doing for the last few years.
AMD is only a competitor in the lower end of the market, a market Intel has zero issue handing to AMD outright - partially because a viable AMD keeps the antitrust enforcers from breathing down their neck, but more because it drags down per-unit profit margins to engage in consoles and the lower rungs and niches.
The GPUs might be competitive on price, but that's about it. It's pretty much a hardware open beta.
Donald announces tariffs and the markets react. He postpones tariffs and the markets react again. Only Donald and his friends know what he will announce next.
A cheap GPU ten-plus years ago was $200-300. That GPU either had no FP64 units at all, or had them "crippled" just like today. What happened between then and now is that the $1k+ market segment became the $10k+ market segment (and the $200+ market segment became the $500+ market segment). That sucks, and nVidia and AMD are absolutely milking their customers for all they're worth, but nothing really got newly "crippled" along the way.
Ermmm, dude, they are competing with Google. They have to keep reinvesting; otherwise Google captures the users OAI currently has.
Free cash flows matter, not accounting earnings. On an FCFF basis they're largely in the red, which means they have to keep raising money, and at some point somebody will turn around and ask the difficult questions. This cannot go on forever.
And before someone mentions Amazon... Amazon raised enough money to sustain their reinvestment before they eventually got to the place where their EBIT(1-t) was greater than reinvestment.
This is not at all what's going on with OAI.
If you're gonna buy at face value whatever Scam Altman claims, then I have some Theranos shares you might be interested in.
[0] https://en.wikipedia.org/wiki/Corporatism#Neo-corporatism
[1] https://en.wikipedia.org/wiki/Corporatism#Fascist_corporatis...
Also, Arc wasn't in jeopardy; the Arc cards have been improving with every release, and the latest one got pretty rave reviews.
To anyone actually paying attention, iGPUs have come a long way. They are no longer an 'I can play Minecraft' thing.
They did push hard on their UMPC x86 SoCs (Paulsbo and derivatives) to Sony, Nokia, etc. These were never competitive on heat or battery life.
I don't know exactly how the scaling works here, but considering how LLM inference is memory-bandwidth-limited, you should go beyond 100 tokens/sec with the same model and an 8-bit quantization.
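As a rough ceiling, assuming decode is purely weight-streaming bound (the bandwidth figure is the 5090's approximate spec-sheet number; real throughput lands well below it):

```python
# Each generated token streams the full weight set, so tokens/sec <= bandwidth / model size.
bandwidth_gb_s  = 1792   # approximate RTX 5090 memory bandwidth, GB/s
model_params_b  = 7      # Mistral 7B
bytes_per_param = 1      # 8-bit quantization
print(f"<= ~{bandwidth_gb_s / (model_params_b * bytes_per_param):.0f} tokens/sec (theoretical ceiling)")
```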
There was not that much democracy in the French post-WW2 technocratic establishment, but I agree that they were not technically fascist (nor otherwise).
You probably meant Poulsbo (US15W) chipset
I hadn't realized that "Arc" and "Integrated" overlapped, I thought that brand and that level of power was only being used on discrete cards.
I do think that integrated Arc will probably be killed by this deal though, not for being bad, as it's obviously great, but rather for being a way for Intel to cut costs with no downsides for Intel. If they can make RTX iGPUs now, and with the Nvidia and RTX brands being the strongest in the gaming space, Intel isn't going to invest the money in continuing to develop Arc. Even if Nvidia made it clear that they don't care, it just doesn't make any business sense now.
That is a loss for the cause of gaming competition. Although having Nvidia prop up Intel may prove to be a win for competition in silicon generally, versus Intel being sold off in parts, which it seems could be a real possibility.
This is not true anymore, as it IS changing, and very rapidly. AMD has shot up to 27.3% of the server market share, which they haven't had since the Opteron days 20 years ago. Five years ago their server market share was very small single digits. They're half of desktops, too. https://www.pcguide.com/news/no-amd-and-intel-arent-50-50-in...
Like I said, Intel may not be market leader in some segments, but they certainly have very competitive products. The fact they've managed to penetrate the dGPU duopoly, while also making huge strides with their iGPUs, is remarkable on its own. They're not leaders on desktops and servers, but still have respectable offerings there.
None of this points to a company that's struggling, but to a healthy market where the consumer benefits. News of two rivals collaborating like this is not positive for consumers.
This feels like a misreading of what I wrote. The discovery that he is using tariffs to make a personal profit should be surprising.
> Donald anounces tariffs and the markets react. He postpones tariffs and the markets react again. Only Donald and his friends know what he will announce next.
That wouldn’t surprise me at all, I just don’t think a hypothesis about how he could abuse his power will be very compelling to anybody who doesn’t already think he’s prone to corruption. If anything, I think it starts inoculating people to the idea.
Clearly I'm doing something wrong if it's a net loss in performance for me. I might have to look more into this.
This is one reason Intel and Samsung are both hesitant about going to the next node- Intel has put out official statements that they are only going to 14A if they can get Foundry up with a significant partner, and Samsung is hedging their bets and being cagey about their own 1.4nm node (at least in English, I haven't seen any direct demand for a major foundry customer from Samsung, just statements saying that they were going to be delaying and might not be building it at all).
If you're using llama.cpp run the benchmark in the link I posted earlier and see what you get; I think there's something like it for vllm as well.
It isn't the "method of communication". It's legislation vs. coercion (in the speculative scenario from the parent comment).
>a company that's struggling, but to a healthy market where the consumer benefits
I would argue that the market is only marginally healthier than, say, 2018. Intel is absolutely struggling. The 13th and 14th generation were marred by degradation issues and the 15th generation is just "eh", with no real reason to pick it over Zen. The tables have simply flipped compared to seven years ago; AMD at least is not forcing consumers to change motherboards every two years.
And Intel doesn't even seem to care too much that they're losing relevance. One thing they could do is enable ECC on consumer chips like AMD did for the entire Ryzen lineup, but instead they prefer to keep their shitty market segmentation. Granted, I don't think it would move too many units, but it would at least be a sign of good will to enthusiasts.