* Identify the workloads that haven't scaled in a year. Your ERPs, your HRIS, your dev/stage/test environments, DBs, Microsoft estate, core infrastructure, etc. (EDIT, from zbentley: also identify any cross-system processing where data will transfer from the cloud back to your private estate to be excluded, so you don't get murdered with egress charges)
* Run the cost analysis of reserved instances in AWS/Azure/GCP for those workloads over three years
* Do the same for one of these high-core "pizza boxes", but amortized over seven years
* Realize the savings to be had moving "fixed infra" back on-premises or into a colo versus sticking with a public cloud provider
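The comparison in those steps is simple enough to sketch in a few lines. All figures below are made-up placeholders, not quotes — plug in your own numbers:

```python
HOURS_PER_YEAR = 24 * 365  # 8760, ignoring leap years

# Hypothetical inputs -- substitute your own vendor quotes.
cloud_hourly_reserved = 2.50    # $/hr for a 3-yr reserved instance
box_capex = 35_000.0            # high-core 2U "pizza box", amortized over 7 yrs
colo_monthly_opex = 400.0       # power, space, remote hands

# Compare the same 3-year window for both options.
cloud_3yr = cloud_hourly_reserved * HOURS_PER_YEAR * 3
onprem_3yr = box_capex * (3 / 7) + colo_monthly_opex * 36

print(f"cloud, 3 yrs:   ${cloud_3yr:,.0f}")    # $65,700
print(f"on-prem, 3 yrs: ${onprem_3yr:,.0f}")   # $29,400
```

The placeholder numbers land roughly 2x in favor of on-prem, which is the shape of result the steps above are fishing for; real quotes obviously vary.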
Seriously, what took a full rack or two of 2U dual-socket servers just a decade ago can be replaced with three 2U boxes with full HA/clustering. It's insane.
Back in the late '10s, I made a case to my org at the time that a global hypervisor hardware refresh and accompanying VMware licenses would have an ROI of 2.5yrs versus comparable AWS infrastructure, even assuming a 50% YoY rate of license inflation (this was pre-Broadcom; nowadays, I'd be eyeballing Nutanix, Virtuozzo, Apache Cloudstack, or yes, even Proxmox, assuming we weren't already a Microsoft shop w/ Hyper-V) - and give us an additional 20% headroom to boot. The only thing giving me pause on that argument today is the current RAM/NAND shortage, but even that's (hopefully) temporary - and doesn't hurt the orgs who built around a longer timeline with the option for an additional support runway (like the three-year extended support contracts available through VARs).
If we can't bill a customer for it, and it's not scaling regularly, then it shouldn't be in the public cloud. That's my take, anyway. It sucks the wind from the sails of folks gung-ho on the "fringe benefits" of public cloud spend (box seats, junkets, conference tickets, etc...), but the finance teams tend to love such clear numbers.
> From a cache hierarchy standpoint, the design groups cores into four-core blocks that share approximately 4 MB of L2 cache per block. As a result, the aggregate last-level cache across the full package surpasses 1 GB, roughly 1,152 MB in total.
If cores are grouped into four-core blocks, and each block has 4MB of cache… isn’t that just 1MB per core? So 288MB total?
HotHardware reports
https://hothardware.com/news/intel-clearwater-forest-xeon-6-...
> these processors pack in up to 288 of the little guys as well as 576MB of last-level cache, 96 PCIe 5.0 lanes, and 12-channel DDR5-8000.
> The Xeon 6+ processors each have up to 12 compute tiles fabbed on 18A, all of which have six quad-core modules for a total of 24 cores per tile. There are also three 'active' base tiles on Intel 3, so-called because the base tiles include 192MB of last-level cache, which is so-called because each compute tile has 48MB of L3 cache.
So maybe 1MB per core L2, then 192MB of basically-L4 per base tile, then 48MB of L3 per compute tile? 192*3+48*12 gets me to the 1152, maybe that’s it.
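For what it's worth, that reading checks out arithmetically:

```python
l2_total = 288 * 1       # 4 MB shared per four-core block -> 1 MB/core, 288 MB of L2
l3_total = 12 * 48       # 48 MB of L3 on each of 12 compute tiles
l4ish    = 3 * 192       # 192 MB on each of 3 base tiles
print(l3_total + l4ish)  # 1152 -- the "roughly 1,152 MB" headline number
```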
Anyway, apparently these things will have “AMX” matrix extensions. I wonder if they’ll be good number crunchers.
Because if this one isn't a hit, Intel will default. I promise you. Heard it from some YouTuber as well, trust me.
I wonder whether the next bottleneck becomes software scheduling rather than silicon - OS/runtimes weren’t really designed with hundreds of cores and complex interconnect topologies in mind.
And it's clearly an IFS play too. Intel Foundry needs a proof point — you can publish PDKs all day, but nothing sells foundry credibility like eating your own cooking in a 288-core server part at 450W. If Foveros Direct works here, it's the best ad Intel could run for potential foundry customers.
The chiplet sizing is smart for another reason nobody's mentioned: yield. 18A is brand new, yields are probably rough. But 24 cores per die is small enough that even bad yields give you enough good chiplets. Basically AMD's Zen playbook but with a 3D twist.
Also — 64 CXL 2.0 lanes! Several comments here are complaining about DDR5 prices, which is fair. But CXL memory pooling across a rack could change that math completely. I wonder if Intel is betting the real value isn't the cores but being the best CXL hub in the datacenter.
The ARM competition is still the elephant in the room though. "Many efficient cores" is what ARM has always done natively, and 17% IPC uplift on Darkmont doesn't close that gap by itself.
But pricing hardware right is hard if you're a small shop. My mind is hard-locked onto Epyc processors without a second thought. The 9755 on eBay is cheap as balls. Infinity cores!
The problem with hardware is lead time, etc.; cloud can spin up immediately. Great for experimentation. Organizationally useful. If your teams have to go through IT to provision a machine, and IT has to go through finance so that spend is reliable, everybody slows down too much. You can't just spin up the next product.
But if you're a small shop, having some Kubernetes on a rack is maybe $15k one time and $1.2k ongoing per month. Very cheap, and you get lots and lots of compute!
Previously, a skillset was required. These days you plug in the Ethernet port, fire up Claude Code with --dangerously-skip-permissions: "write a bash script that is idempotent that configures my Mikrotik CCR, it's on IP $x on interface $y". Hotspot on. Cold air blowing on your face from the overhead coolers. 5 minutes later, run the script without looking. Everything comes up.
Still, it's perhaps foolish to do on-prem by default (now that I think about it): if you have cloud egress you're dead, and the compliance story requires the interconnect to be well designed. It's more complicated than just the basics — you need to know a little before it makes sense.
I feel like a reasoning LLM: I now hold the opposite position.
Also, about "make-or-break": they've been saying this about every Intel product since at least 2022 *yawn*
I’ve had success with this approach by keeping it to only the business process management stacks (CRMs, AD, and so on—examples just like the ones you listed). But as soon as there’s any need for bridging cloud/onprem for any data rate beyond “cronned sync” or “metadata only”, it starts to hurt a lot sooner than you’d expect, I’ve found.
As I understand things, it would be extremely unusual to ship a chip that was bound by floating point throughput, not uncached memory access, especially in the desktop/laptop space.
I haven't been following the Intel server space too carefully, so it's an honest question: Was the old thing compute and not bandwidth limited, or is this going to be running inference at the same throughput (though maybe with lower power consumption)?
AMD has had these sorts of densities available for a minute.
> Identify the workloads that haven't scaled in a year.
I have done this math recently, and you need to stop cherry picking and move everything. And build a redundant data center to boot.
Compute is NOT the major issue for this sort of move:
Switching and bandwidth will be major costs. 400Gb is a minimum for interconnects, and most orgs are going to need at least that much bandwidth at top of rack.
Storage remains problematic. You might be able to amortize compute over this time scale, but not storage. 5 years would be pushing it (depending on use). And data center storage at scale was expensive before the recent price spike. Spinning rust is viable for some tasks (backup) but will not cut it for others.
Human capital: Figuring out how to support the hardware you own is going to be far more expensive than you think. You need to expect failures and staff accordingly, that means resources who are going to be, for the most part, idle.
So on the OS side we might already have the needed tools for these CoC (cluster on chip ;))
But that's just one piece of the puzzle, I guess.
I mean....
IMO Erlang/Elixir is a not-terrible benchmark for how things should work at that scale... Hell, while not a runtime, I'd argue Akka/Pekko on the JVM or Akka.NET on the .NET side would be able to do some good with it... [0] Similar for Go and channels (at least hypothetically...)
[0] - Of course, you can write good scaling code on JVM or CLR without these, but they at least give some decent guardrails for getting a good bit of the Erlang 'progress guaranteed' sauce.
By that point we'll be desiring the new 1000 core count CPUs though.
Stuffed an 8480+ ES with 192GB of memory across 8 channels and it's actually not too bad.
Let's not get carried away here
It's hard drive and SSD space prices that stagger me on the cloud. Where one of the server CPUs might only be about 2x the price of buying a CPU outright over a few years in a small system (albeit usually with lower clock speeds on the cloud), the drive space is at least 10-100x the price of doing it locally. It's got a bit more potential redundancy, but for that overhead you could replicate the data a lot of times yourself.
As time has gone on, the cloud deal has gotten worse as the hardware gained more cores.
If that's you, then the Granite Rapids-AP platform that launched prior to this can hit similar thread counts (256 for the 6980P). There are a couple of caveats though - firstly, there are "only" 128 physical cores, and if you're using VMs you probably don't want to share a physical core across VMs; secondly, it has a 500W TDP and retails north of $17,000, if you can even find one for sale.
Overall once you're really comparing like to like, especially when you start trying to have 100+GbE networking and so on, it gets a lot harder to beat cloud providers - yes they have a nice fat markup but they're also paying a lot less for the hardware than you will be.
Most of the time when I see takes like this it's because the org has all these fast, modern CPUs for applications that get barely any real load, and the machines are mostly sitting idle on networks that can never handle 1/100th of the traffic the machine is capable of delivering. Solving that is largely a non-technical problem not a "cloud is bad" problem.
- request some HW to run $service
- the "IT dept" (really, self-interested gatekeepers) might give you something now, or in two weeks, or, god help you, if they need to order new hardware it's two months, best case
- there will be various weird rules on how the on-prem HW is run, who has access etc, hindering developer productivity even further
- the hardware might get insanely oversubscribed so your service gets half a cpu core with 1GB RAM, because perverse incentives mean the "IT dept" gets rewarded for minimizing cost, while the price is paid by someone else
- and so on...
The cloud is a way around this political minefield.
For those that do, your scaling example works against you. If today you can merge three services into one, then why do you need full time infrastructure staff to manage so few servers? And remember, you want 24/7 monitoring, replication for disaster recovery, etc. Most businesses do not have IT infrastructure as a core skill or differentiator, and so they want to farm it out.
We had a massive performance issue a few years ago that we fixed by mapping our processes to the NUMA zone topology. The default design of our software would otherwise effectively route all memory accesses to the same NUMA zone, and performance went down the drain.
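For the curious, the zone topology you'd map processes onto is visible in Linux sysfs. A minimal sketch (assumes Linux; returns an empty dict where that sysfs tree doesn't exist):

```python
import glob
import os

def numa_topology():
    """Return {numa_node: set_of_cpu_ids} by reading /sys/devices/system/node.
    Empty dict on systems without that sysfs tree (non-Linux, some containers)."""
    topo = {}
    for node_dir in glob.glob("/sys/devices/system/node/node[0-9]*"):
        node = int(os.path.basename(node_dir)[len("node"):])
        # Each node directory contains cpuN symlinks for its local CPUs.
        cpus = {int(os.path.basename(c)[len("cpu"):])
                for c in glob.glob(os.path.join(node_dir, "cpu[0-9]*"))}
        topo[node] = cpus
    return topo

# On a two-socket box this prints something like {0: {0, 1, ...}, 1: {...}};
# processes can then be pinned so they allocate from their local zone.
print(numa_topology())
```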
At my job we use Hyper-V, and finding someone who actually knows Hyper-V is difficult and expensive. Throw in Cisco networking, storage appliances, etc., to make it 99.99% uptime...
Also that means you have just one person, you need at least two if you don't want gaps in staffing, more likely three.
Then you still need all the cloud folks to run that.
We have a hybrid setup like this, and you do get a bit of best of both worlds, but ultimately managing onprem or colo infra is a huge pain in the ass. We only do it due to our business environment.
The bottlenecks are pretty much hardware-related - thermal, power, memory and other I/O. Because of this, you presumably never get true "288 core" performance out of this - as in, it's not going to mine Bitcoin 288x as fast as a single core. Instead, you get less context-switching overhead with 288 tasks that need to do stuff intermittently, which is how most hardware ends up being used anyway.
Yep, scheduling has been a problem for a while. There was an amazing article a few years ago about how the Linux kernel was accidentally hardcoded to 8 cores; you can probably google and find it.
IMO the most interesting problem right now is the cache: you get a cache miss every time a task moves cores. The problem is that with thousands of threads switching between hundreds of cores every few milliseconds, we're dangerously approaching the point where all the time is spent thrashing and reloading the CPU cache.
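One blunt mitigation is plain old CPU pinning, so a hot task never migrates and its working set stays in one core's private cache. A minimal sketch (the affinity calls are Linux-only, hence the guards):

```python
import os

def pin_current_process(core: int) -> None:
    """Restrict the calling process to a single core so the scheduler
    can no longer migrate it (pid 0 means 'the calling process')."""
    if hasattr(os, "sched_setaffinity"):   # Linux only; no-op elsewhere
        os.sched_setaffinity(0, {core})

pin_current_process(0)
if hasattr(os, "sched_getaffinity"):
    print(os.sched_getaffinity(0))        # {0} -- the task is now stuck on core 0
```

Real services do this per worker thread/queue, which is also what NIC receive-side scaling setups pair with.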
I still regret not buying 1TB of RAM back in ~October...
But I am at a loss to how Intel are really going to get any traction with IFS. How can anyone trust Intel as a long-term foundry partner. Even if they priced it more aggressively, the opportunity cost in picking a supplier who decides to quit next year would be catastrophic for many. The only way this works is if they practically give their services away to someone big, who can afford to take that risk and can also make it worth Intel's continued investment. Any ideas who that would be, I've got nothing.
That makes a lot of sense. Cloud providers are selling compute, and as cores get faster, the single core gets more expensive.
Darkmont is a slightly improved variant of the Skymont cores used in Arrow Lake/Lunar Lake and it has a performance very similar to the Arm Neoverse V3 cores used in Graviton5, the latest generation of custom AWS CPUs.
However, a Clearwater Forest Xeon CPU has many more cores per socket than Graviton5, and it also supports dual-socket motherboards.
Darkmont also has greater performance than the older big Intel cores, like all the Skylake derivatives, including for AVX-using programs, so it is no longer comparable to the Atom series of cores from which it evolved.
Darkmont is not competitive in absolute performance with AMD Zen 5, but for the programs that do not use AVX-512 it has better performance per watt.
However, since AMD has started to offer AVX-512 for the masses, the number of programs that have been updated to be able to benefit from AVX-512 is increasing steadily, and among them are also applications where it was not obvious that using array operations may enhance performance.
Because of this pressure from AMD, it seems that this Clearwater Forest Xeon is the final product from Intel that does not support AVX-512. Both of the next two Intel CPUs support it: the Diamond Rapids Xeon, which might be launched before the end of the year, and the desktop/laptop CPU Nova Lake, whose launch has been delayed to next year (together with the desktop Zen 6, presumably due to the memory shortage and production allocations at TSMC).
Until the bills _really_ start skyrocketing...
That's partially true; managing cloud also takes skill - most people forget that, with the end result being "well, we saved on hiring sysadmins, but had to have more devops guys". Hell, I manage mostly physical infrastructure (a few racks, a few hundred VMs), and a good 80% of my work is completely unrelated to that; it's just the devops gluing of stuff together and helping developers set their stuff up, which isn't all that different from what it would be in the cloud.
> And remember, you want 24/7 monitoring, replication for disaster recovery, etc.
And remember, you need that for cloud too. Plenty of cloud disaster stories to see where they copy pasted some tutorial thinking that's enough then surprise.
There is also the halfway option of just getting some dedicated servers from, say, OVH and running your infra on that; you cut a bit of the hardware management out of the required skillset and you don't have the CAPEX to deal with.
But yes, if it's less than at least a rack, it's probably not worth looking at on-prem unless you have a really specific use case that is much cheaper there (I mean less than half the usual cost).
On top of that no one really knows what the fuck they are doing in AWS anyway.
Not quite. If you hire bad talent to manage your 'cloud gear', you'll find that mistakes which would cost you nothing on-premises can cost you a lot in the cloud.
No they don't. They are horribly wasteful and inefficient compared to kernel TCP. Also they are useless because they sit on top of a kernel network interface anyways.
Unless you're doing specific tricks to minimize latency (HFT, I guess?) then there is no point.
I'm pretty sure a box like this could run our whole startup - hosting PG, k8s, our backend APIs, etc. It would be way easier to set up, and wouldn't cost 2 devops and $40,000 a month.
All of the complexity of onprem, especially when you need to worry about failover/etc can get tricky, especially if you are in a wintel env like a lot of shops are.
i.e. lots of companies are doing sloppy 'just move the box to an EC2 instance' migrations because of how VMware jacked their pricing up, and suddenly EC2/EBS/etc. costing looks so cheap it's a no-brainer.
I think the knowledge base to set up a minimal cost solution is too tricky to find a benefit vs all the layers (as you almost touched on, all the licensing at every layer vs a cloud provider managing...)
That said, rug pulls are still a risk; I try to push for 'agnostic' workloads in architecture, if nothing else because I've seen too many cases where SaaS/PaaS/etc decide to jack up the price of a service that was cheap, and sure you could have done your own thing agnostically, but now you're there, and migrating away has a new cost.
IOW, I agree; I don't think the human capital is there as far as infra folks who know how to properly set up such environments, especially hitting the 'secure+productive' side of the triangle.
This comes up again and again. It was the original sales pitch from cloud vendors.
Often the very same companies repeating this messaging are recruiting and paying large teams of platform developers to manage their cloud…and pay for them to be on call.
This is really the core problem. Every time I’ve done the math on a sizable cloud vs on-prem deployment, there is so much money left on the table that the orgs can afford to pay FAANG-level salaries for several good SREs but never have we been able to find people to fill the roles or even know if we had found them.
The numbers are so much worse now with GPUs. The cost of reserved instances (let alone on-demand) for an 8x H100 pod, even with NVIDIA Enterprise licenses included, leaves tens of thousands per pod for the salary of the employees managing it. Assuming one SRE can manage at least four racks, the hardware pays for itself - if you can find even a single qualified person.
The company needed the same exact people to manage AWS anyway. And the cost difference was so large that it could have paid for 5 more people, who weren't even needed.
Not only the cost: not needing to worry about going over the bandwidth limit, and having so much extra compute power, made a very big difference.
Imo the cloud stuff is just too full of itself if you are trying to solve a problem that requires compute like hosting databases or similar. Just renting a machine from a provider like Hetzner and starting from there is the best option by far.
Is it still a problem in 2026 when unemployment in IT is rising? Reasons can be argued (the end of ZIRP or AI) but hiring should be easier than it was at any time during the last 10 years.
> At my job we use Hyper-V, and finding someone who actually knows Hyper-V is difficult and expensive...
Try offering significantly higher pay.
Your memory only has so much bandwidth, and now it's shared by even more cores.
https://arstechnica.com/gadgets/2024/09/hacker-boots-linux-o...
Getting the performance to scale can be hard, of course. The less inter-core communication the better. Things that tend to work well are either workloads where a bunch of data comes in and a single thread works on it for a significant amount of time before shipping the result, or ones where you can rely on the NIC(s) to split traffic so you can process the network queue for a connection on the same core that handles the userspace stuff (see Receive Side Scaling) - but you need a fancy NIC to have 288 network queues.
Also, there have been so many hyperthreading vulnerabilities of late - it's often disabled outright on data center boards - that I'd imagine this design de-risks that entirely.
Last time I tried to do anything networking with Claude it set up route preference in opposite order (it thought lower number means more preferred, while it was opposite), fucking it up completely, and then invented config commands that do not exist in BIRD (routing software suite).
Then I looked at 2 different AIs, and they both hallucinated the same nonexistent BIRD config commands. And by "same" I mean they hallucinated the existence of the same feature.
> If your teams have to go through IT to provision machine and IT have to go through finance so that spend is reliable, everybody slows down too much. You can’t just spin up next product.
The time of having to order a bunch of servers for a new project is long over. We just spun up a k8s cluster for devs to self-service, and the prod clusters have a bit of an accounting shim, so every new namespace has to be assigned to a project and we can bill the client for it.
Also you're allowed to use cloud services while you have on-prem infrastructure. You get best of both, with some cognition cost involved.
Also have seen it disabled in academic settings where they want consistent performance when benchmarking stuff.
That the CPU cores are low frequency cores probably helps with yield as well.
Of course, having fewer faster cores does have the benefit that you require less RAM... Not a big deal before, you could get 512GB or 1TB of RAM fairly cheap, but these days it might actually matter? But then at the same time, if two E-cores are more powerful than one hyperthreaded P-core, maybe you actually save RAM by using E-cores? Hyperthreading is, after all, only a benefit if you spawn one compiler process per CPU thread rather than per core.
EDIT: Why in the world would someone downvote this perspective? I'm not even mad, just confused
They cite a very specific use case in the linked story: virtualized RAN. This means using COTS hardware and software for the control plane of a 5G+ cell network. A large number of fast, low-power cores would indeed suit such an application, where large numbers of network nodes are coordinated in near real time.
It's entirely possible that this is the key use case for this device: 5G networks are huge money makers and integrators will pay full retail for bulk quantities of such devices fresh out of the foundry.
Putting in more cores is just another desperate move to game the benchmarks. Power is roughly quadratic with frequency, so every time you fall behind the competition, you can double the number of cores and reduce the frequency by 1.414x to compensate.
Repeat a few times and you get a CPU with hundreds of cores, where each core is so slow it can hardly do any work.
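The trade described, under that rough power ∝ f² model, works out like this:

```python
# Baseline: one core at frequency f, power modeled as k * f**2 (k = 1 here).
f, cores = 1.0, 1
power = cores * f ** 2          # 1.0 -- baseline power budget
throughput = cores * f          # 1.0 -- baseline aggregate throughput

# Double the cores, cut frequency by sqrt(2) ~= 1.414:
cores2, f2 = 2, f / 2 ** 0.5
power2 = cores2 * f2 ** 2       # ~1.0  -- same power budget
throughput2 = cores2 * f2       # ~1.414 -- more aggregate, but slower per core
print(round(power2, 6), round(throughput2, 3))  # 1.0 1.414
```

Which is exactly why the aggregate numbers look great on benchmarks while single-thread latency keeps falling behind.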
Folks wanting one or the other miss savings had by effectively leveraging both.
Here is the quote:
"The company says operators deploying 5G Advanced and future 6G networks increasingly rely on server CPUs for virtualized RAN and edge AI inference, as they do not want to re-architect their data centers in a bid to accommodate AI accelerators."
Edge AI usually means very small models that run fine on CPUs.
If you're doing regular inference for a product with very flat throughput requirements (and you're doing on-prem already), on-prem GPUs can make a lot of sense.
But if you're doing a lot of training, you have very bursty requirements. And the H100s are specifically for training.
If your H100 fleet is <38% utilized across time, you're losing money.
If you have batch throughput you can run on the H100s when you're not training, you're probably closer to wanting on-prem.
But the other thing to keep in mind is that AWS is not the only provider. It is a particularly expensive provider, and you can buy capacity from other neoclouds if you are cost-sensitive.
* sign the papers for the server colo
* get a quote and order servers (which might take a few weeks to deliver!), nearly always plus a pair of switches
* set them up, install OSes, set up basic services inside the network (DNS, often netboot/DHCP if you want install over network, and often a few others like an image repository, monitoring, etc.)
It's a "we have product and cashflow, let's give someone the task to do it" thing, not a "we're a startup, we barely have a PoC" thing.
It's unfortunately not so cut and dry
The first is that SRE team size primarily scales with the number of applications and the level of support. It does scale with hardware, but sublinearly, whereas the number of applications usually scales superlinearly. It takes far less effort to manage 100 instances of a single app than 1 instance each of 100 separate apps (presuming SRE has any support responsibilities for the app). Talking purely in terms of hardware would make me concerned that I'm looking at an impossible task.
The second (which you probably know, but interacts with my next point) is that you never have single person SRE teams because of oncall. Three is basically the minimum, four if you want to avoid oncall burnout.
The last is that I don’t know many SREs (maybe none at all) that are well-versed enough in all the hardware disciplines to manage a footprint the size we’re talking. If each SRE is 4 racks and a minimum team size is 4, that’s 16 racks. You’d need each SRE to be comfortable enough with networking, storage, operating system, compute scheduling (k8s, VMWare, etc) to manage each of those aspects for a 16 rack system. In reality, it’s probably 3 teams, each of them needs 4 members for oncall, so a floor of like 48 racks. Depending on how many applications you run on 48 racks, it might be more SREs that split into more specialized roles (a team for databases, a team for load balancers, etc).
Numbers obviously vary by level of application support. If support ends at the compute layer with not a ton of app-specific config/features, that’s fewer folks. If you want SRE to be able to trace why a particular endpoint is slow right now, that’s more folks.
You wanted sysadmins / IT / data center technicians.
AWS charges $55/hour for an EC2 p5.48xlarge instance, which goes down with 1- or 3-year commitments.
With 1 year commitment, it costs ~$30/hour => $262k per year.
3-year commitment brings price down to $24/hour => $210k per year.
This price does NOT include egress, and other fees.
So, yeah, there is a $120k-$175k difference that can pay for a full-time on-site SRE, even if you only need one 8xH100 server.
Numbers get better if you need more than one server like that.
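The arithmetic above, spelled out (hourly prices are the comment's figures, not current AWS list prices):

```python
HOURS_PER_YEAR = 24 * 365            # 8760
print(55 * HOURS_PER_YEAR)           # 481800 -- on-demand, ~$482k/yr
print(30 * HOURS_PER_YEAR)           # 262800 -- 1-yr commit, the ~$262k figure
print(24 * HOURS_PER_YEAR)           # 210240 -- 3-yr commit, the ~$210k figure
```

The quoted $120k-$175k gap is what's left after subtracting whatever the amortized cost of an owned 8xH100 server works out to, which the comment doesn't spell out.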
Before I drop 5 figures on a single server, I'd like to have some confidence in the performance numbers I'm likely to see. I'd expect folk who are experienced with on-prem have a good intuition about this - after a decade of cloud-only work, I don't.
Also, cloud networking offers a bunch of really nice primitives which I'm not clear how I'd replicate on-prem.
I've estimated our IT workload would roughly double if we were to add physically racking machines, replacing failed disks, monitoring backups/SMART errors etc. That's... not cheap in staff time.
Moving things on-prem starts making financial sense around the point your cloud bills hit the cost of one engineer's salary.
That is incorrect. On AWS you need a couple of DevOps people who will string together the already existing services.
With on-premise, you need someone who will install racks, change disks, set up highly available block or object storage, etc. Those are not DevOps people.
On-prem wins for a stable organization every time though.
If you have a separate physical NIC for each namespace you probably won't have any contention.
Given current trends I think we're eventually going to be forced to adopt new programming paradigms. At some point it will probably make sense to treat on-die HBM distinctly from local RAM and that's in addition to the increasing number of NUMA nodes.
Not to mention potential customers who would prefer a US based foundry regardless. My guess is that there's a pretty large part of the market that would be perfectly fine with using Intel.
Though... these days, getting enough RAM to support builds across 80 cores would be twice the price of the whole rest of the system I'm guessing.
Gaming CPUs and some EPYCs are the best
That said, I'll point to the Intel Atom - the first version and its refresh were 'in-order' designs where hyper-threading was the cheapest option (both silicon- and power-wise) to add performance; with Silvermont, however, they switched to OoO execution and ditched hyper-threading.
https://news.ycombinator.com/item?id=38260935
> This article is clickbait and in no way has the kernel been hardcoded to a maximum of 8 cores.
Companies decommission hardware on a schedule after all, not when it stops working.
EDIT: Though looking for similar deals now, I can only find ones up to 128GB RAM and they're near twice the price I paid. I got 7F72 + motherboard + 512GB DDR4 for $1488 (uh, I swear that's what I paid, $1488.03. Didn't notice the 1488 before.) The closest I can find now is 7F72 + motherboard + 128GB DDR4 for over $2500. That's awful
With the standard form of business trust: a contract.
I imagine that means less C++/Rust than most, which means much less time spent serialized on the linker / cross compilation unit optimizer.
Which is why I used AMD in my last desktop computer build
So, I wonder if this is going to be any faster than the previous generation for edge AI.
The Panther Lake vs Ryzen laptop performance comparisons show that Panther Lake does well, basically trading blows with top-end Ryzen AI laptop chips in both absolute performance and performance per watt.
(For various reasons, I just care about VPS/bare metal, and S3-compatiblity.)
I'm looking at those because I'm having difficulty forecasting bandwidth usage, and the pessimistic scenarios seem to have me inside the acceptable use policies of the small providers while still predicting AWS would cost 5-10x more for the same workload.
That's why nowadays one would use a managed collocation service, not hosting a rack in the office basement.
No shortage of IT talent in 2026 - the market is literally overflowing with resumes and wages are dropping. Huge gluts of fairly generic online-degree holders.
they can use AI to write basic Ansible just as well as my Seniors
The kernel tries to guess as well as it can, though - many years ago I hit a fun bug in the kernel scheduler triggered by NUMA process migration, i.e. the kernel moving processes to the core closest to the RAM. In some cases the migrated processes never got scheduled and got stuck forever.
Disabling NUMA migration removed the problem. I figured out the issue thanks to the excellent 'A Decade of Wasted Cores' paper, which essentially said that on 'big' machines like ours funky things could happen scheduling-wise, so I started looking at scheduling settings.
The main NUMA-pinning performance issue I was describing was different, though, and as you said, it came from us needing to change how the code was written to account for the distance to the RAM stick. Modern servers will usually let you choose anything from fully managed (hope and pray, single zone) to many zones, and then, depending on what you've chosen to expose, use it in your code. As always: benchmark, benchmark.
That's vastly overstating it. You hit the nail on the head in the previous paragraphs: it's the number of apps (or more generally, environments) that you manage; everything else is secondary.
And that is especially true with modern automation tools. Doubling rack count is big chunk of initial time spent moving hardware of course, but after that there is almost no difference in time spent maintaining them.
In general, time spent per server will shrink, because the bigger you grow, the more automation you will use, and some tasks can be grouped together better.
Like, at my previous job, servers were installed manually, because it was rare.
At my current job it's just "boot from network, pick the install option, enter the hostname, press enter". (Re)installing a whole rack would take you maybe an hour; everything else in the install is automated. You write a manifest for one type/role once, test it, and then it doesn't matter whether it's 2 or 20 servers.
If we grew our server fleet, say, 5-fold, we'd hire... one extra person to a team of 3. If the number of different applications grew 5-fold, we'd probably have to triple the team size - because there are still some things that can be made more streamlined.
Tasks like "go replace a failed drive" might be more common, but we usually do it once a week (enough redundancy) for all the servers that might've died. With 5x the number of servers the time would be nearly the same, because getting there dominates the 30s needed to replace one.
Most companies I have seen have never updated the BIOS of their servers, nor the firmware on their switches. Some of those still have production applications on Windows XP or older, and you can still see VMware ESXi < 6.5 in the wild. The same goes for all kinds of other systems, including Oracle Linux 5.5 with some ancient Oracle DB like 10g - that was the case like 5 years ago, but I don't think the company has migrated away completely to this day.
Any sufficiently old company will accrete systems and approaches of various vintages over time only very slowly ripping out some of those systems. Usually what happens is that parts of old systems or old workarounds will live on for decades after they have been supposedly decommissioned. I had a colleague who was using CRT monitors in 2020 with computers of similar vintage, probably with Pentium III or early Pentium IV, because he had everything set up there and it just worked for what he was doing. I don't admire it, yet that stuff works and I do respect that people don't want to replace expensive systems just because they are out of support, when they do actually work and they have people taking care of them.
Never been an SRE but interact with them all the time…
My own personal experience is there is commonly a division between App SREs that look after the app layer and Infra SREs that look after the infrastructure layer (K8s, storage, network, etc.)
The App SRE role absolutely scales with the number of distinct apps. The extent to which the Infra SRE role does depends on how diverse the apps are in terms of their infrastructure demands
Hiring 1 person to run the infrastructure means that 1 person is on-call 24/7 forever.
If there's an issue with the server while they're sick or on vacation, you just stop and wait.
If they take a new job, you need to find someone to take over or very quickly hire a replacement.
There's a second bus factor: What happens when that 8xH100 starts to get flakey? You can't move the jobs to another server because you only have one. You can start diagnosing things and replacing parts and hope it gets to the root issue, but that's more downtime.
Going on-prem like this is highly risky. It works well until the hardware starts developing problems or the person in charge gets a new job. The weeks and months lost to dealing with the server start to become a problem. The SRE team starts to get tired of having to do all of their work on weekends because they can't block active use during the week. Teams start complaining that they need to use cloud to keep their project moving forward.
we have 7 racks and 3 people. The things you mentioned aren't even 5% of the workload.
There are things you figure out once, bake into automation, and just use.
You install a server once and remove it after 5-10 years, depending on how you want to depreciate it. Drives die rarely enough that it's like a once-every-2-months event at our size.
The biggest expense is setting up automation (if I was re-doing our core infrastructure from scratch I'd probably need a good 2 months of grind), but after that it's smooth sailing. The biggest disadvantage is "we need a bunch of compute, now", but depending on the business that might never be a problem, and you have enough savings to overbuild a little and still be ahead. Or just get the temporary compute from the cloud.
Real Devops people are competent from physical layer to software layer.
Signed,
Aerospace Devop
Like what?
So 4 sockets per chassis, up to 8 chassis in a complete system. AFAIK the OS sees it as a single huge system - that is kinda their special sauce here.
The bug made it to the kernel mailing list, where some Intel people looked into it and confirmed there is a bug. The problem was that the kernel allocation logic was capped at 8 cores, which leaves a few percent of performance on the table as the number of cores increases and the allocation becomes less and less optimal.
It's a classic tragedy of the commons. CPUs have gotten so complicated that there may only be a handful of people in the world who could comprehend and work on a bug like this.
That said, there are sequential steps in Yocto builds too, notably installing packages into the rootfs (it uses dpkg, opkg or rpm, all of which are sequential) and any code you have in the rootfs postprocessing step. These steps usually aren't a significant part of a clean build, but can be a quite substantial part of incremental builds.

(Image credit: Intel)
Intel this week formally introduced its Xeon 6+ processors, codenamed 'Clearwater Forest', which pack up to 288 energy-efficient Darkmont cores and are the first data center CPUs made on the company's 18A fabrication process (1.8nm-class). Intel aims its Xeon 6+ 'Clearwater Forest' processors primarily at telecom, cloud, and edge AI workloads, as they feature Advanced Matrix Extensions (AMX), QuickAssist Technology (QAT), and Intel vRAN Boost technologies.

Intel's 'Darkmont' efficiency cores have received rather meaningful microarchitectural upgrades. Each core integrates a 64 KB L1 instruction cache, a broader fetch and decode pipeline, and a deeper out-of-order engine capable of tracking more in-flight operations. The number of execution ports has also been increased in a bid to improve both scalar and vector throughput under heavily threaded server workloads.
From a cache hierarchy standpoint, the design groups cores into four-core blocks that share approximately 4 MB of L2 cache per block. At the top of the hierarchy, the aggregate last-level cache across the full package surpasses 1 GB, roughly 1,152 MB in total. This unusually large pool is intended to keep data close to hundreds of active cores and reduce dependence on external memory bandwidth, which in turn is meant to both increase performance and lower power consumption.
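A quick back-of-envelope using only the figures quoted above: note that the aggregate L2 (72 blocks of ~4 MB) is a separate, smaller pool than the ~1,152 MB last-level number, which works out to 4 MB of last-level cache per core:

```python
cores = 288
cores_per_block = 4
l2_per_block_mb = 4                    # ~4 MB shared per four-core block

blocks = cores // cores_per_block            # 72 blocks
aggregate_l2_mb = blocks * l2_per_block_mb   # 288 MB of L2 in total
llc_total_mb = 1152                          # aggregate last-level figure above

print(blocks, aggregate_l2_mb, llc_total_mb / cores)  # 72 288 4.0
```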
Platform-wise, the processor remains drop-in compatible with the current Xeon server socket, so the CPU gets 12 memory channels supporting DDR5-8000 and 96 PCIe 5.0 lanes, 64 of which support CXL 2.0.
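As a sanity check on what those 12 channels buy, the theoretical peak bandwidth is simple arithmetic (assuming the usual 64-bit channel width; sustained bandwidth will be lower in practice):

```python
channels = 12
transfers_per_sec = 8000e6   # DDR5-8000 = 8000 MT/s
bytes_per_transfer = 8       # 64 bits per channel

peak_gbs = channels * transfers_per_sec * bytes_per_transfer / 1e9
print(f"{peak_gbs:.0f} GB/s theoretical peak")  # 768 GB/s
```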

Intel positions Clearwater Forest for telecom and cloud workloads. The company says operators deploying 5G Advanced and future 6G networks increasingly rely on server CPUs for virtualized RAN and edge AI inference, as they do not want to re-architect their data centers in a bid to accommodate AI accelerators. By combining matrix/vector acceleration, vRAN offloads (using the vRAN Boost), large caches, and broad I/O in one platform, the CPU can perform jobs that are normally reserved for various accelerators that consume more power and take up space.
Then there is the extreme core count of Xeon 6+ 'Clearwater Forest' CPUs: up to 288 cores in uniprocessor configurations and 576 cores in dual-socket configurations, enabling a single server to host dozens or even hundreds of virtual machines while maintaining power efficiency and low latency.
Systems based on Intel's Xeon 6+ processors will be available later this year.
Anton Shilov is a contributing writer at Tom’s Hardware. Over the past couple of decades, he has covered everything from CPUs and GPUs to supercomputers and from modern process technologies and latest fab tools to high-tech industry trends.
Sorry, not sure I am following your question.
When I was looking in October, I hadn't bought hardware for the better part of a decade, and I saw all these older posts on forums for DDR4 at $1/GB, but the lowest I could find was at least $2/GB used. These days? HAH!
If I had a decent sales channel I might be speculating on DDR4/DDR5 RAM and holding it because I expect prices to climb even higher in the coming months.
I hope it was wrong, but it seems at least plausible to me. I'm sure fixes could probably be made for all these issues, but the reason the current paradigm works is that, other than the motherboard and CPU, everything else you need is standard, consumer-grade equipment, which is therefore cheap. If you need to start buying custom (new) power supplies etc. to go along with it, then the price may not make as much sense anymore.
I personally feel like I will downscale my homelab hardware to reduce its power draw. My HW is rather old (and leagues below yours), and more recent HW tends to be more efficient, but I have no idea how well these high-end server boards can lower their idle power consumption.
https://www.lenovo.com/us/en/p/laptops/ideapad/ideapad-slim-...
EDIT: Changed the lenovo link to a non-preorder laptop (was https://www.hp.com/us-en/shop/pdp/hp-omnibook-ultra-laptop-n...)
GPU and CPU manufacturing is the same thing - same node, same result. GPUs always maximize the perf/power ratio because the work is embarrassingly parallel, leaving no room to game the benchmark. CPUs can be gamed by having a single fast core that drops performance in half as soon as you use another core.
You make products for well capitalized wireless operators that can afford the prevailing cost of the hardware they need. For these operations, the increase in RAM prices is not a major factor in their plans: it's a marginal cost increase on some of the COTS components necessary for their wireless system. The specialized hardware they acquire in bulk is at least an order of magnitude more expensive than server RAM.
Intel will sell every one of these CPUs and the CPUs will end up in dual CPU SMP systems fully populated with 1-2 TB of DDR5-8000 (2-4GB/core, at least) as fast as they can make them.
You could have SRE do it, but most places don’t because you can get someone to swap a dead drive for way cheaper (it’s not really a complicated operation).
That growth of SRE teams comes from wanting reliability further up the stack. If you’re not on AWS, there’s no Aurora so someone has to be DBA to do backups, performance monitoring, configuring failovers for when a disk dies and RAID needs to rebuild, etc. Same for network, networked storage, yada yada
It sort of comes back to support levels. Your Infra SRE teams stay small if either a) an app SRE team owns application specific stuff, or b) SRE just doesn’t support application specific stuff. Eg if a particular query is slow but the DB is normal, who owns root causing that? Whoever does needs headcount, whether it’s app SRE, infra SRE or the devs.
They come with a warranty, often with a technician guaranteed to arrive within a few hours or at most a day. Also, if SHTF, just getting cloud capacity to cover the current shortfall isn't hard.
You can ask AI to troubleshoot and fix the issue.
And depending on the problem set in question, one can also potentially leverage "the cloud" for the big bursty compute needs and have the cheap colo for the day to day stuff.
For instance, in a past life the team I worked on needed to run some big ML jobs while having most things on extremely cheap colo infra. Extract the datasets, upload the extracted and well-formatted data to $cloud_provider, have VPN connectivity for the small amount of other database traffic, and we can burst to have whatever compute needed to get the computations done really quick. Copy the results artifact back down, deploy to cheap boxes back at the datacenter to host for clients stupid-cheap.
This is usually not the case, because DevOps folks are often people who have mostly worked on cloud services and Kubernetes clusters and not real hardware, since most companies do not have on-premise hardware anymore.
Flash is still extremely slow compared to RAM, including modern flash - especially in a world where RAM is already very slow and your CPU already spends its time waiting for it.
That being said, you should consider RAM/flash/spinning disks to all be part of a storage hierarchy with different constants and tradeoffs (volatile or not, big or small, fast or slow, etc.), and knowing these tradeoffs will help you design simpler and better systems.
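One way to see why the constants matter: the expected access time of a tiered read path is just the hit-rate-weighted sum of tier latencies. The latencies below are rough orders of magnitude, not measurements:

```python
# Rough order-of-magnitude latencies in nanoseconds (assumed, not measured):
LATENCY_NS = {"ram": 100, "flash": 100_000, "spinning": 10_000_000}

def effective_latency_ns(hit_rates: dict[str, float]) -> float:
    """Hit-rate-weighted average access time across the tiers."""
    assert abs(sum(hit_rates.values()) - 1.0) < 1e-9
    return sum(LATENCY_NS[tier] * p for tier, p in hit_rates.items())

# 95% of reads served from RAM, 4.9% from flash, 0.1% from disk:
avg = effective_latency_ns({"ram": 0.95, "flash": 0.049, "spinning": 0.001})
print(f"{avg:,.0f} ns average")  # the rare disk reads dominate
```

Even at a 0.1% miss rate to spinning disk, that tier contributes two thirds of the average - which is exactly the kind of tradeoff knowledge that leads to simpler designs.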
These all have real hardware coherency going over the external cables, same protocol. Here is a Power10 server picture, https://www.engineering.com/ibm-introduces-power-e1080-serve... the cables attach right to headers brought out of the chip package right off the phy, there's no ->PCI->ethernet-> or anything like that.
These HP systems are similar. These are actually descendants of SGI Altix / SGI Origin systems which HP acquired, and they still use some of the same terminology (NUMAlink for the interconnect fabric). HP did make their own distinct line of big iron systems when they had PA-RISC and later Itanium but ended up acquiring and going with SGI's technology for whatever reasons.
These HP/SGI systems are slightly different from IBM mini/mainframes because they use "commodity" CPUs from Intel that don't support glueless multi socket that large or have signaling that can get across boards, so these have their own chipset that has some special coherency directories and a bunch of NUMAlink PHYs.
SGI systems came from HPC so they were actually much bigger before that, the biggest ones were something around 1024 sockets, back when you only had 1 CPU per socket. The interconnect topology used to be some tree thing that had like 10 hops between the farthest nodes. It did run Linux and wasn't technically cheating, but you really had to program it like a cluster because resource contention would quickly kill you if there was much cacheline transfer between nodes. Quite amazing machines, but not suitable for "enterprise" so IIRC they have cut it down and gone with all-to-all interconnect. It would be interesting to know what they did with coherency protocol, the SGI systems used a full directory scheme which is simple and great at scaling to huge sizes but not the best for performance. IBM systems use extremely complex broadcast source snooping designs (highly scoped and filtered) to avoid full directory overhead. Would be interesting to know if HPE finally went that way with NUMAlink too.
Found this diagram from an old HP product which was an SGI derivative https://support.hpe.com/hpesc/public/docDisplay?docId=a00062... 2 QPI busses and 16 NUMAlink ports!
Aha, it's still a directory protocol. https://support.hpe.com/hpesc/public/docDisplay?docId=sd0000...
Cheating IMO would be an actual cluster of systems using software (firmware/hypervisor) to present a single system image, using the MMU and IB/ethernet adapters to provide coherency.
You can still use cloud for excess capacity when needed. E.g. use on-prem for base load, and spin up cloud instances for peaks in load.
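A toy cost model of that split - all prices invented for illustration, so swap in your own figures:

```python
# Toy model: base load on owned hardware, peaks burst to the cloud.
# Every price here is an assumption for illustration only.
ONPREM_MONTHLY_PER_SERVER = 150.0   # amortized hardware + colo + power
CLOUD_HOURLY_PER_SERVER = 1.20      # comparable on-demand instance
HOURS_PER_MONTH = 730

def hybrid_monthly(base_servers: int, burst_server_hours: float) -> float:
    return (base_servers * ONPREM_MONTHLY_PER_SERVER
            + burst_server_hours * CLOUD_HOURLY_PER_SERVER)

def all_cloud_monthly(base_servers: int, burst_server_hours: float) -> float:
    return (base_servers * HOURS_PER_MONTH + burst_server_hours) * CLOUD_HOURLY_PER_SERVER

# 20 servers of base load plus 500 server-hours of monthly peak:
print(hybrid_monthly(20, 500))     # hybrid
print(all_cloud_monthly(20, 500))  # everything on-demand
```

Under these made-up numbers the hybrid setup is roughly a fifth of the all-cloud bill; the point is that steady base load is where on-prem economics shine, while bursts stay elastic.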
> Hiring 1 person to run the infrastructure means that 1 person is on-call 24/7 forever.
> If there's an issue with the server while they're sick or on vacation, you just stop and wait.
Very much depends on what you're doing, of course, but "you just stop and wait" for sickness/vacation sometimes is actually good enough uptime -- especially if it keeps costs down. I've had that role before... That said, it's usually better to have two or three people who know the systems (even if they're not full-time dedicated to them) to reduce the bus factor.
> $120K isn't going to cover the fully loaded costs of an SRE who can set up and run that.
Literally this. I can do SRE on-prem and cloud, and my 50/30/20 budget break-even point (as in, needs and savings but no wants - so 70%) is $170k before taxes. Rent is astonishingly high right now, and the sort of mid-career professional you want to handle SRE for your single DC is going to take $150k in this market before fucking off to the first $200k job they get.
Know your market, and pay accordingly. You cannot fuck around with SREs.
> Hiring 1 person to run the infrastructure means that 1 person is on-call 24/7 forever.
This is less of an issue than you might think, but strongly dependent upon the quality of talent you’ve retained and the budget you’ve given them. Shitbox hardware or cheap-ass talent means you’ll need to double or triple up locally, but a quality candidate with discretion can easily be supported by a counterpart at another office or site, at least short-term. Ideally though, yeah, you’ll need two engineers to manage this stack, but AWS savings on even a modest (~700 VMs) estate will cover their TC inside of six months, generally.
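To make the six-month claim concrete, here's the arithmetic - the per-VM savings and fully loaded comp figures are assumptions for illustration, not quotes:

```python
# Assumed, illustrative numbers - swap in your own estate and market data.
vms = 700
monthly_saving_per_vm = 120.0        # assumed cloud-vs-on-prem delta, $/VM/month
engineers = 2
annual_tc_per_engineer = 250_000.0   # assumed fully loaded comp

monthly_savings = vms * monthly_saving_per_vm            # $84,000/month
breakeven_months = engineers * annual_tc_per_engineer / monthly_savings
print(f"{breakeven_months:.1f} months to cover a year of TC")  # ~6.0 months
```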
> There's a second bus factor: What happens when that 8xH100 starts to get flakey? You can't move the jobs to another server because you only have one. You can start diagnosing things and replacing parts and hope it gets to the root issue, but that's more downtime.
This strikes at another workload I neglected to mention, and one I highly recommend keeping in the public cloud: GPUs.
GPUs on-prem suck. Drivers are finicky, firmware is flaky, vendor support is inconsistent, and SR-IOV is a pain in the ass to manage at scale. They suck harder than HBAs, which I didn't think was possible.
If you’re consuming GPUs 24x7 and can afford to support them on-prem, you’re definitely not here on HN killing time. For everyone else, tune your scaling controls on your cloud provider of choice to use what you need, when you need it, and accept the reality that hyperscalers are better suited for GPU workloads - for now.
> Going on-prem like this is highly risky.
Every transaction is risky, but the risk calculus for “static” (ADDS) or “stable” (ERP, HRIS, dev/test) work makes on-prem uniquely appealing when done right. Segment out your resources (resist the urge for HPC or HCI), build sensible redundancies (on-prem or in the cloud), and lean on workhorse products over newer, fancier platforms (bulletproof hypervisors instead of fragile K8s clusters), and you can make the move successful and sensible. The more cowboy you go with GPUs, K8s, or local Terraform, the more delicate your infra becomes on-prem - and thus the riskier it is to keep there.
Keep it simple, silly.
These come in a non-flakey variant?
S3 has excellent legal-hold and audit controls for data, as well as automatic data retention policies.
KMS is a very secure and well done service. I dare you to find an equivalent on-prem solution that offers as much security.
And then there's the whole DR idea. Failing over to another AWS region is largely trivial if you set it up correctly - on prem is typically custom to each organization, so you need to train new staff with your organizations workflows. Whereas in AWS, Route53 fail-over routing (for example) is the same across every organization. This reduces cost in training and hiring.
In AWS, it's straightforward to say e.g. "permit traffic on port X from instances holding IAM role Y".
You can easily e.g. get the firewall rules for all your ec2 instances in a structured format.
I really would not look forward to building something even 1/10th as functional as that.
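For example, `boto3`'s `ec2_client.describe_security_groups()` hands back the whole rule set as plain JSON; once you have that shape, filtering it is a couple of lines of ordinary code. The sample below is made-up data in the shape of the API's "SecurityGroups" list:

```python
def open_ports(security_groups: list[dict]) -> set[tuple[str, int]]:
    """Collect (group name, from-port) pairs from describe-security-groups data."""
    out = set()
    for sg in security_groups:
        for rule in sg.get("IpPermissions", []):
            out.add((sg["GroupName"], rule["FromPort"]))
    return out

# Made-up sample mirroring the API response shape:
sample = [{
    "GroupName": "web",
    "IpPermissions": [{"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443}],
}]
print(open_ports(sample))  # {('web', 443)}
```

Getting the same structured view out of a fleet of appliance firewalls usually means scraping vendor CLIs or paying for a management product.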
The troublesome hardware is the stuff with custom backplanes and multiple daughterboards each hosting a node. Also AMD CPUs that lock themselves to a single motherboard.
If anyone has the time and knowledge to help with AVX512 support then it would be most welcome. Fair warning, even with the initial work already done this is still a huge project.
God bless capitalism.
None of those fit in 4MB of cache (the per-core share on this part), or 1GB (the aggregate cache).
What AI models are you actually talking about? Do you mean old-school ML stuff, like decision trees or high dimensional indexes? No one I know calls those "AI", which is generally reserved for big-ish neural networks.
Companies following consultant reports will usually end up offering 50% ranges, which for SRE/SIE roles in major metros comes to around $163k. If they study BLS/FRED/CPI data and aim to pay someone enough for a 50/30/20 budget in a major metro at median rent, they’ll offer $175k to $200k+. If they want someone to stick around, buy an average home, lay roots, it’s $210k+, minimum.
“Six figures” doesn’t cover essentials anymore for almost every major city in the USA, and the last thing you can afford to cheap out on is the labor supporting your IT infra. Every corner you cut today on TC (outsourcing, offshoring, consulting) is just letting fires rage until you either parachute out or everything burns down, and that’s not a game you can afford to play with critical business technologies.
If a business can’t afford a properly staffed crew with enough allowance to cover a rotation of on call duties and allow for vacations, they should prefer the managed cloud services.
You’re paying more but you’re buying freedom and flexibility.
And the other argument: every company I've ever know to do AWS has an AWS sysadmin (sorry "devops"), same for Azure. Even for small deployments. And departments want their own person/team.
By doing this, you're guaranteeing a bus factor of 1. I can't think of any business that wouldn't see that as a completely unacceptable risk.
>> $120K isn't going to cover the fully loaded costs of an SRE who can set up and run that.
> Literally this. I can do SRE on-prem and cloud, and my 50/30/20 budget break-even point (as in, needs and savings but no wants - so 70%) is $170k before taxes. Rent is astonishingly high right now, and the sort of mid-career professional you want to handle SRE for your single DC is going to take $150k in this market before fucking off to the first $200k job they get.
That's $120k per pod. Four pods per rack at 50kW.
What universe are we living in that a single SRE can't manage even a single rack for less than half a million in total comp?
And somehow I have this impression that GPUs on Slurm/PBS could not be simpler.
You can use a VM for the head node - you don't even need the clustering, really, if you can accept taking 20 min to restore a VM. And the rest of the hardware is homogeneous: set up 1 right and the rest are identical.
And it's a cluster with a job queue; 1 node going down is not the end of the world.
OK, if you have PCIe GPUs, sometimes you have to re-seat them and it's a pain. Otherwise, if your H200s or disks fail, you just replace them, under warranty or not...
My favorite are the responses from people saying the warranty will have someone show up in “hours” and fix it. Best of luck to you.
This system has SMP ASICs on the motherboards that talk to a couple of Intel processor sockets using their coherency protocol over QPI and they basically present themselves as a coherency agent and memory provider (similarly to the way that processors themselves have caches and DDR controllers). The Intel CPUs basically talk to them the same way they would another processor. But out the other side these ASICS connect to a bunch of others all doing the same thing, and they use their own coherency protocol among themselves.
For my hacking purposes it would have been perfect. It's hard to justify the project at even $2/GB though.
https://www.lenovo.com/us/en/p/laptops/ideapad/ideapad-slim-...
I let you know that you were uninformed and even suggested a very low-effort way that you might look into the matter. So why didn't you do that?
A couple fairly arbitrary examples. A high performance zero shot TTS model can weigh in at well under 150 MiB. You can solve MNIST (ie perform OCR of handwritten english) to better than 99% accuracy with a sub-100 KiB model. Your LLM of choice will be able to provide you with plenty of others.
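To see how a model fits that budget, just count parameters. The 784 -> 32 -> 10 MLP below is a hypothetical architecture chosen purely to show the weight class (the parent's 99% figure would come from a comparably small but better-tuned model); at fp32 it lands just under 100 KiB:

```python
# Hypothetical 784 -> 32 -> 10 MLP, sized for MNIST's 28x28 inputs.
layers = [(784, 32), (32, 10)]
params = sum(i * o + o for i, o in layers)   # weights + biases per layer
bytes_fp32 = params * 4                      # 4 bytes per fp32 parameter
print(params, f"{bytes_fp32 / 1024:.1f} KiB")  # 25450 params, ~99.4 KiB
```

Quantize to int8 and the same network drops under 25 KiB - tiny models really are tiny.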
I never understand the drive to stay away from cloud services for small scale operations. It's not your money that's being spent on the cloud, but it is your free time on the line when you're asked to be on call after encouraging your company to self-host!
The kind where TC isn’t measured by pod managed, but by person hired. Also the world where median rent in major metros is $3500 a month.
If you think $120k is rich, you’re either operating in the boonies, outside the USA/Canada, or incredibly out of touch with the cost of living today and need to seriously go study BLS/FRED/CPI data sets to understand how expensive it is to live right now.
Meanwhile, lots of enterprise firewalls barely even have a concept of "zones". It's practically not even close to comparable for most deployments. Maybe with extremely fancy firewall stacks with $MAX_INT service contracts one can do something similar. But I guess with on-prem stuff things are often less ephemeral, so there's slightly less need.
So it's not CXL, instead it's proprietary ASICs masquerading as NUMA nodes but actually forwarding to their counterparts in the other chassis? Are they proprietary to HP or is this some new standard?
I’ve seen this before. It turns into restrictions on when you can schedule vacation times.
Not fun when your family wants to go on a trip but you can’t get the time off because it’s not one of the allowed vacation times.
Indeed, there's no reason for a company to host this kind of batch compute in North America. You can get very good people in Eastern Europe at 1/3 the cost.
Network engineers do network engineering :)
Kubernetes is running on bare metal quite a lot of places.