Wish it was Blender though ;)
> Testing conducted by Apple in January 2026 using preproduction 13-inch and 15-inch MacBook Air systems with Apple M5, 10-core CPU, 10-core GPU, 32GB of unified memory, and 4TB SSD, and production 13-inch and 15-inch MacBook Air systems with Apple M4, 10-core CPU, 10-core GPU, 32GB of unified memory, and 2TB SSD. Time to first token measured with an 8K-token prompt using a 14-billion parameter model with 4-bit quantization, and LM Studio 0.4.1 (Build 1). Performance tests are conducted using specific computer systems and reflect the approximate performance of MacBook Air.
Are they doubling down on local LLMs then?
I still think Apple has a huge opportunity in privacy first LLMs but so far I'm not seeing much execution. Wondering if that will change with the overhaul of Siri this spring.
> Even More Value for Upgraders
> The new 14- and 16-inch MacBook Pro with M5 Pro and M5 Max mark a major leap for pro users. There’s never been a better time for customers to upgrade from a previous generation of MacBook Pro with Apple silicon or an Intel-based Mac.
I read as "Whoops we made the M1 Macbook Pro too good, please upgrade!"
I think I will get another 2-5 years out my mine.
Apple: If you document the hardware enough for the Asahi team to deliver a polished Linux experience, I'll buy one this year!
Interestingly, the 36-128GB models are showing as "currently unavailable" on the store page, and you can't even place an order for them right now. For anyone curious, it's quoting $5,099 for the 128GB RAM 14" MacBook Pro model.
Also, the mix of cores has changed drastically.
- 6 "Super cores"
- 12 "Performance cores"
I'm guessing these are just renamed performance and efficiency cores from previous generations.
This is a massive change from the M4 Max:
- 12 performance cores
- 4 efficiency cores
This seems like a downgrade (in core configuration, though maybe not in actual multithreaded performance), assuming super = performance and performance = efficiency cores.
I have not once felt the need to upgrade in years, and that’s with doing pretty demanding 3D and LLM work.
For those of us with astigmatism it's really night and day experience.
> MacBook Pro with M5 Pro now comes standard with 1TB of storage, while MacBook Pro with M5 Max now comes standard with 2TB. And the 14-inch MacBook Pro with M5 now comes standard with 1TB of storage.

This is the important statement. 614GB/s is quite decent; however, an NVIDIA RTX 5090 already offers 1,792 GB/s (roughly 3x) of memory bandwidth, for comparison.
How is that different from the silicon interposer they were using before?
The big change is that the two dies don't have to be fabbed next to each other on a single wafer, which is fantastic for costs and yields. But would this affect the interconnect speed somehow?
How would the two be wired together?
Could this mean the Ultra comes back in M6 since it would be easier to fab?
Neural Accelerators (aka NAX) accelerate matmuls with tile sizes >= 32. From a very high-level perspective, LLM inference has two phases: (chunked) prefill and decode. The former is matmuls (GEMM) and the latter is matrix-vector mults (GEMV). Neural Accelerators make the former (prefill) faster and have no impact on the latter.
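A rough numpy sketch of that split (sizes are made up for illustration, much smaller than a real model): prefill reuses each weight across every prompt token, so it has high arithmetic intensity and extra matmul hardware helps; decode streams the whole weight matrix to produce a single token, so it's bandwidth-bound and tensor units don't help.

```python
import numpy as np

# Hypothetical sizes for illustration only (real models are far larger).
d_model, n_prompt = 1024, 2048
W = np.random.randn(d_model, d_model).astype(np.float32)  # one weight matrix

# Prefill: every prompt token hits the weights at once -> GEMM (compute-bound).
X = np.random.randn(n_prompt, d_model).astype(np.float32)
prefill_out = X @ W        # (2048, 1024) @ (1024, 1024): a true matmul

# Decode: one new token per step -> GEMV (memory-bandwidth-bound).
x = np.random.randn(d_model).astype(np.float32)
decode_out = x @ W         # (1024,) @ (1024, 1024): matrix-vector product

# Arithmetic intensity (FLOPs per byte of weights streamed) explains the split:
ai_prefill = (2 * n_prompt * d_model * d_model) / W.nbytes   # 1024.0
ai_decode = (2 * d_model * d_model) / W.nbytes               # 0.5
print(ai_prefill, ai_decode)
```

The decode side does half a FLOP per byte of weights read no matter how fast the matmul units are, which is why only faster memory speeds up token generation.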
Apple is in the hardware business.
They want you to buy their hardware.
People using Cloud for compute is essentially competitive to their core business.
Doubt
I imagine you basically use online models exclusively, and occasionally try out local stuff.
Source: My fortune 20 company tried with M whatever, and the local llms were unusable.
The high memory Macs have been great for being able to run LLMs, but the prompt processing has always been on the slow side. The new AI acceleration in these should help with that.
There are also workloads like compiling code where I’ll take all the extra speed I can get. Every little bit of reduced cycle time helps me finish earlier in the day.
And then there’s gaming. I don’t game much, but the M1 and M2 era Apple Silicon feels sluggish relative to what I have on the nVidia side.
and that’s with doing pretty demanding 3D and LLM work.
It definitely chokes with larger models that can fit in the 192GB of RAM. Prompt processing is a big bottleneck before M5.

It's the first time I've ever been so repulsed by a design that I actively avoid it just... out of sheer preference.
I think I read somewhere a long time ago that Capture One also uses Qt for its GUI, though I cannot find this anymore, so it's probably not true.
Now it starts at $1,699, a $100 bump, but comes with a 1TB SSD. Previously it would have cost $1,799 with the 1TB SSD, so it's a $100 bump on the base price, but you're also getting the 1TB SSD for $100 less than before.
The prompt processing sped up.
Not the output generation.
M4 was notoriously slow at this compared to DGX etc.
You can buy two m5 pro base model for the same price as a single 5090...
They seem to market it as a technological advancement, which it is, but rather than being excited I'm actually worried about hidden latencies that could come with that approach. Have you found any interesting info on that yet?
I think at this point Apple will just release new versions of laptops whenever new CPU revisions and yields allow. M5 Pro wasn't ready for October so delayed until now.
And another rumor said these are going to be updated again this fall but I’m not sure about that. With OLED screens and M6 (supposedly).
I don't mind it, I own Apple stock. But I'm def not buying into their rebranding of an integrated GPU under the guise of Unified Memory.
Now, extrapolating in line with how Sun servers around the year 2000 cost a fortune and can be emulated by a $5 VPS today, Apple is seeing that they can maybe grab the local LLM workloads if they act now with their integrated chip development.
But to grab that, they need developers to rely less on CUDA via Python, or have other proper hardware support for those environments, and that won't happen without the hardware being there first and the machines being able to be built with enough memory (refreshing to see Apple support 128GB, even if it'll probably bleed you dry).
This correlation of Apple and privacy needs to rest. They have consistently proven to be otherwise - despite heavily marketing themselves as "privacy-first"
https://www.theguardian.com/technology/2019/jul/26/apple-con...
I assume they have a moderate bet on on-device SLMs in addition to other ML models, but not much planned for LLMs, which at that scale might be good as generalists but very poor at guaranteeing success for each specific minute task you want done.
In short: 8GB storing tens of very small, fast, purpose-specific models is much better than a single 8GB LLM trying to do everything.
> The tech giant says the chips are engineered around its new Fusion Architecture, an advanced design that merges two dies into a single, high-performance system on a chip (SoC), which includes a powerful CPU, scalable GPU, Media Engine, unified memory controller, Neural Engine, and Thunderbolt 5 capabilities.
https://techcrunch.com/2026/03/03/apple-unveils-m5-pro-and-m...
They also replaced the efficiency cores on the CPU chiplet with a new higher performance design.
> The CPU now features six “super cores,” which is Apple’s term for its highest-performance cores, alongside 12 all-new performance cores. Collectively, the CPU boosts performance by up to 30% for pro workloads.
Which roughly translates to a 30B Q8 LLM at 10 t/s on the M5 Pro and a 60B Q8 LLM at 10 t/s on the M5 Max.
For reference, RTX 3090 24GB has a memory bandwidth of approx. 936.2 GB/s, DGX Spark 128GB features a unified memory bandwidth of up to 273 GB/s
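Those bandwidth figures map directly onto decode speed: each generated token has to stream the full model weights from memory once, so tokens/s is bounded by bandwidth divided by model size. A back-of-envelope sketch using the numbers quoted in this thread (these are illustrative, not official specs):

```python
# Decode-speed ceiling: tokens/s <= memory bandwidth / model size,
# because every generated token streams all weights once. Illustration only.

def max_decode_tps(bandwidth_gbs: float, params_billion: float,
                   bytes_per_param: float = 1.0) -> float:
    """Upper bound on decode tokens/s for a fully memory-bound model."""
    model_gb = params_billion * bytes_per_param  # Q8 is roughly 1 byte/param
    return bandwidth_gbs / model_gb

print(max_decode_tps(614, 60))   # ~10 t/s: 60B Q8 on 614 GB/s
print(max_decode_tps(273, 30))   # ~9 t/s:  30B Q8 on DGX Spark's 273 GB/s
print(max_decode_tps(936, 24))   # ~39 t/s: a 24B Q8 that fits an RTX 3090
```

Real-world numbers come in below these ceilings (attention, KV-cache reads, and scheduling overhead all cost extra), but the ratios between machines track the bandwidth ratios closely.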
Are they doubling down on local LLMs then?
The Neural Accelerator was already present in the iPhone 17 and the M5 chip. This is not new for M5 Pro/Max.

Apple's stated AI strategy is local where it can and cloud where it needs. So "doubling down"? Probably not. But it fits their strategy.
Nothing has broken and I consistently get 4-6 hours of heavy work time while on battery. An amazing machine for the price I paid.
My M3 Pro from a few years ago for the same price had 18GB.
Incidentally, I just switched to Asahi Linux, but that was for software quality and openness reasons, rather than anything to do with performance.
Unfortunately, number always must go up (and the rate at which the number goes up, also must go up).
This seems even more likely as the memory bandwidth hasn't increased enough for those kinds of speedups, and I'd guess prefill is more likely to be compute-bound (vs. memory-bandwidth-bound).
The base M5 has super/efficiency cores.
The Pro and Max have super/performance cores.
I still have an Intel-based 2019 MacBook Pro and I have NEVER in its lifetime gotten even half of what they are claiming here. These days if I run it from battery I might get 90 mins.
That said I had a maxed out Macbook Pro M4 Max on order but just cancelled it right now and will get this new M5 Max one for basically the same price. Once I saw that they didn't up the price of memory (I don't know how it doesn't affect them) I canceled my order.
It doesn't even look like they added cellular as an option with their own C1X chip (getting around the licensing / cost issues since it's their own chip now).
Aren't the OpenClaw enjoyers buying Mac Minis because it's the cheapest thing which runs macOS, the only platform which can programmatically interface with iMessage and other Apple ecosystem stuff? It has nothing to do with the hardware really.
Still, buying a brand new Mac Mini for that purpose seems kind of pointless when a used M1 model would achieve the same thing.
Latency to the first token is not like a web page where first paint already has useful things to show. The first token is "The ", and you'll be very happy it's there in 50ms instead of 200ms... but then what you really want to know is how quickly you'll get the rest of the sentence (throughput)
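A toy calculation of why throughput dominates the end-to-end wait (all the numbers below are invented for illustration):

```python
# End-to-end wait = time to first token + tokens generated / throughput.
# TTFT savings vanish against generation time for any non-trivial answer.

def total_wait(ttft_s: float, n_tokens: int, tokens_per_s: float) -> float:
    return ttft_s + n_tokens / tokens_per_s

fast_first = total_wait(0.05, 500, 10)   # 50 ms TTFT, 10 tok/s
slow_first = total_wait(0.20, 500, 20)   # 200 ms TTFT, 20 tok/s
print(fast_first)   # 50.05 s
print(slow_first)   # 25.2 s: 4x worse TTFT, half the total wait
```

For a 500-token answer, quadrupling TTFT while doubling throughput still halves the wait, which is why generation speed is the number that matters for long outputs.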
Remains to be seen how capable it actually is. But they're certainly trying to sell the privacy aspect.
No change from the previous models then, 16GB->32GB was already $400. They're cutting into their previously enormous margins to keep the prices stable, rather than hiking the prices to maintain their margins.
Isn't this it?
Interesting that this hasn't budged since the memory shortages appeared.
But I think this predates Tahoe.
The new tensor cores, sorry, "Neural Accelerator" only really help with prompt preprocessing aka prefill, and not with token generation. Token generation is memory bound.
Hopefully the Ultra version (if it exists) has a bigger jump in memory bandwidth and maximum RAM.
Wondering if local LLM (for coding) is a realistic option, otherwise I wouldn't have to max out the RAM.
You mean on your first token. What's the performance after 500 and 3,000 tokens?
I genuinely don't understand why people post stuff like this. People are not informed enough to know you mean first tokens. They are going to make a mistake and buy one thinking they will get 100tk/s.
Are you working for Apple marketing? Do you have post purchase regret? I cannot imagine deliberately misleading people. Maybe you are hoping more buyers build up your ecosystem?
I think this is a new design, with Apple having three tiers of cores now, similar to what Qualcomm has been doing for a while.
I think how it breaks down is:
- "Super" are the old "P" cores, and the top tier cores now
- "Performance" cores are a new tier and seen for the first time here, slotting between "old" P and E in performance
- "Efficiency" / "E" are still going to be around; but maybe not in desktop/Pro/Max anymore.
I believe they lower the clock speed, limit how much work is done in parallel on each core, and limit how aggressive the speculative execution is so less work is wasted.
PRESS RELEASE March 3, 2026
CUPERTINO, CALIFORNIA Apple today announced the latest 14- and 16-inch MacBook Pro with the all-new M5 Pro and M5 Max, bringing game-changing performance and AI capabilities to the world’s best pro laptop. With M5 Pro and M5 Max, MacBook Pro features a new CPU with the world’s fastest CPU core,1 a next-generation GPU with a Neural Accelerator in each core, and higher unified memory bandwidth, altogether delivering up to 4x AI performance compared to the previous generation, and up to 8x AI performance compared to M1 models.2 This allows developers, researchers, business professionals, and creatives to unlock new AI-enabled workflows right on MacBook Pro. It now comes with up to 2x faster SSD performance2 and starts at 1TB of storage for M5 Pro and 2TB for M5 Max. The new MacBook Pro includes N1, an Apple-designed wireless networking chip that enables Wi-Fi 7 and Bluetooth 6, bringing improved performance and reliability to wireless connections. It also offers up to 24 hours of battery life; a gorgeous Liquid Retina XDR display with a nano-texture option; a wide array of connectivity, including Thunderbolt 5; a 12MP Center Stage camera; studio-quality mics; an immersive six-speaker sound system; Apple Intelligence features; and the power of macOS Tahoe. The new MacBook Pro comes in space black and silver, and is available to pre-order starting tomorrow, March 4, with availability beginning Wednesday, March 11.
Up to 7.8x faster AI image generation performance when compared to MacBook Pro with M1 Pro, and up to 3.7x faster than MacBook Pro with M4 Pro.
Up to 6.9x faster LLM prompt processing when compared to MacBook Pro with M1 Pro, and up to 3.9x faster than MacBook Pro with M4 Pro.
Up to 5.2x faster 3D rendering in Maxon Redshift when compared to MacBook Pro with M1 Pro, and up to 1.4x faster than MacBook Pro with M4 Pro.
Up to 1.6x faster gaming performance with ray tracing in games like Cyberpunk 2077: Ultimate Edition when compared to MacBook Pro with M4 Pro.
Up to 8x faster AI image generation performance when compared to MacBook Pro with M1 Max, and up to 3.8x faster than MacBook Pro with M4 Max.
Up to 6.7x faster LLM prompt processing when compared to MacBook Pro with M1 Max, and up to 4x faster than MacBook Pro with M4 Max.
Up to 5.4x faster video effects rendering performance in Blackmagic DaVinci Resolve Studio when compared to MacBook Pro with M1 Max, and up to 3x faster than MacBook Pro with M4 Max.
Up to 3.5x faster AI video-enhancing performance in Topaz Video when compared to MacBook Pro with M4 Max.
Enhanced AI performance with Neural Accelerators in the GPU: Users upgrading from M1 models will experience up to 8x faster AI performance.2
Exceptional battery life: The new MacBook Pro gets up to 24 hours of battery life, giving Intel-based upgraders up to 13 additional hours, and users coming from M1 models will get up to three more hours, so they can get more done on a single charge.2 And unlike many PC laptops, MacBook Pro delivers the same incredible performance whether plugged in or on battery. Users will be able to fast-charge up to 50 percent in just 30 minutes using a 96W or higher USB-C power adapter.2
Best display in a pro laptop: Upgraders will enjoy the Liquid Retina XDR display, which features 1600 nits peak HDR brightness and up to 1000 nits for SDR content, and offers a nano-texture option.
Comprehensive connectivity: The new MacBook Pro has a wide array of connectivity options, including three Thunderbolt 5 ports for high-speed data transfer, HDMI that supports up to 8K resolution, an SDXC card slot for quick media import, and MagSafe 3 with fast-charge capability. Upgraders can also drive up to two high-resolution external displays with M5 Pro, and up to four high-resolution displays with M5 Max, providing the flexibility to create expansive workspaces.
Wi-Fi 7 and Bluetooth 6: With the Apple N1 chip, Wi-Fi 7 and Bluetooth 6 bring improved performance and reliability to wireless connections.
Advanced camera, mics, and speakers: Featuring a 12MP Center Stage camera with Desk View support and studio-quality mics, the new MacBook Pro will allow users to look and sound their best while taking calls. They will also experience an immersive six-speaker sound system with support for Spatial Audio.
macOS Tahoe transforms the MacBook Pro experience with powerful capabilities that turbocharge productivity.6 Major updates to Spotlight make it easier to find relevant apps and files and immediately take action right from the search bar. Apple Intelligence is even more capable while protecting users’ privacy at every step.7 Shortcuts get even more powerful with intelligent actions and the ability to tap directly in to Apple Intelligence models. Integrated into Messages, FaceTime, and the Phone app, Live Translation helps users easily communicate across languages, translating text and audio.7 Additionally, developers can bring Apple Intelligence capabilities into their applications or tap in to the Foundation Models framework for specialized on-device intelligence tasks. Continuity features include the Phone app on Mac, which lets users relay cellular calls from their nearby iPhone, and with Live Activities from iPhone, they can stay on top of things happening in real time.6 macOS Tahoe also features a beautiful new design with Liquid Glass, and users can personalize their Mac in even more ways with an updated Control Center, in addition to new color options for folders, app icons, and widgets.
Customers can pre-order the new 14- and 16-inch MacBook Pro models with M5 Pro and M5 Max starting tomorrow, March 4, on apple.com/store and in the Apple Store app in 33 countries and regions, including the U.S. All models will begin arriving to customers, and will be in Apple Store locations and Apple Authorized Resellers, starting Wednesday, March 11.
The 14‑inch MacBook Pro with M5 Pro starts at $2,199 (U.S.) and $2,049 (U.S.) for education; and the 16‑inch MacBook Pro with M5 Pro starts at $2,699 (U.S.) and $2,499 (U.S.) for education.
The 14‑inch MacBook Pro with M5 Max starts at $3,599 (U.S.) and $3,299 (U.S.) for education; and the 16‑inch MacBook Pro with M5 Max starts at $3,899 (U.S.) and $3,599 (U.S.) for education. All models are available in space black and silver.
Additional technical specifications, configure-to-order options, and accessories are available at apple.com/mac.
The 14-inch MacBook Pro with M5 now comes standard with 1TB of storage, and is available in space black and silver, starting at $1,699 (U.S.) and $1,599 (U.S.) for education.
With Apple Trade In, customers can trade in their current computer and get credit toward a new Mac. Customers can visit apple.com/shop/trade-in to see what their device is worth.
AppleCare delivers exceptional service and support, with flexible options for Apple users. Customers can choose AppleCare+ to cover their new Mac, or in the U.S., AppleCare One to protect multiple products in one simple plan. Both plans include coverage for accidents like drops and spills, theft and loss protection on eligible products, battery replacement service, and 24/7 support from Apple Experts. For more information, visit apple.com/applecare.
Every customer who buys directly from Apple Retail gets access to Personal Setup. In these guided online sessions, a Specialist can walk them through setup or focus on features that will help them make the most of their new device. Customers can also learn more about getting started and going further with their new device with a Today at Apple session at their nearest Apple Store.
Customers in the U.S. who shop at Apple using Apple Card can pay monthly at 0 percent APR when they choose to check out with Apple Card Monthly Installments, and they’ll get 3 percent Daily Cash back — all up front. More information — including details on eligibility, exclusions, and Apple Card terms — is available at apple.com/apple-card/monthly-installments.
It's the best. We all turned it off. 100% privacy.
For example, let's say the new OS depends on the M5's exclusive thumbnail-generator accelerator, and let's say it improves speed by 20%.
Now, your M1 notebook, which on previous OSes used standard GPU acceleration for thumbnails, will not have this specialized hardware acceleration; it will have a software fallback that is 90% slower.
You won't notice it at first because it's still fast, but it eats a bit of the processor.
Multiply this by 1,000 features and you have a slow machine.
I don't know how else to explain how an iPad Pro cannot even scroll a menu without stuttering; it's insane how fast these things were on release.
I still don't have a strong urge to upgrade. I could probably get by on 32GB (like my work-issued machine is) but 64GB is the right amount of headroom for me.
~9 years later, there are a lot of people still using it as their main machine, waiting until we get kicked off the corp network for lack of software support.
Apple has had a big enough war chest in the past to buy the entirety of TSMC's new capacity years in advance.
If I were to guess, Apple locked in their entire BOM and production capacity two years ago. That's something even the large players cannot replicate because they run cash-lean and have too many different SKUs, and the small players (Framework, System76, even Steam) are entirely left to the forces of the markets.
I use my laptop for development. I don't actually use most of the built in applications. My browser is Firefox, I use codex, vs code, intellij, iterm2, etc. Most of that works just fine just as it did on previous versions of the OS. I actually on purpose keep my tool chains portable as I like to have the option to switch back to Linux when I want to. I've done that a few times. I come back for the hardware, not the OS.
In my experience, if you don't like Apple's OS changes that is unfortunate but they don't seem to generally respond to a lot of the criticism. Your choices are to get further and further out of date, switch to something else, or just swallow your pride. Been there done that. Windows is a "Hell No" for me at this point. I'll take the UX, with all the pastel colors that came and went and all the other crap that got unleashed on macs over the last ten years. Definitely a case of the grass not being greener on Windows. Even with the tele tubby default desktop in XP back in the day.
I can deal with Linux (and use that on and off on one of my laptops). However, that just doesn't run that well on mac hardware. And any other hardware seems like a big downgrade to me. Both Windows and Linux are arguably a lot worse in terms of UX (or lack thereof). Linux you can tweak. And you kind of have to. But it just never adds up to consistent and delightful. Windows, well, at this point liking that is probably a form of Stockholm Syndrome. If that doesn't bother you, good for you.
So, Mac OS it is for me as everything else is worse. I've in the past deferred updates to new versions of Mac OS as well. Generally you can do that for a while but eventually it becomes annoying when things like homebrew and other development toys start assuming you run something more recent. And of course for security reasons you might just not drag your feet too long. Just my personal, pragmatic take.
Literally unusable
Most stuff ends up running Metal -> GPU I thought
That's actually the biggest growth area in LLMs; it is no longer about smart, it is about context windows (usable ones, not spec-sheet hypotheticals). Smart enough is mostly solved; tackling larger problems is slowly improving with every major release (but there is no ceiling).
I think the truth is somewhere in the middle. Many people don't realize just how performant some of these models have become on Mac hardware (especially with MLX), and just how powerful the shared memory architecture they've built is, but there is also a lot of hype and misinformation on performance when compared to dedicated GPUs. It's a tradeoff between available memory and performance, but often it makes sense.
For example, 6 super, 8 performance, and 4 efficiency.
The M5 performance cores can be scaled down to match efficiency cores in performance and power usage.
Source for this?

> The industry-leading super core was first introduced as performance cores in M5, which also adopts the super core name for all M5-based products
But the new "performance" core is claimed to be a new design (i.e., not just an overclocked efficiency core from the M5?):
> M5 Pro and M5 Max also introduce an all-new performance core that is optimized to deliver greater power-efficient, multithreaded performance for pro workloads.
quotes from https://www.apple.com/newsroom/2026/03/apple-debuts-m5-pro-a...
M5 Max maxes out at 128GB, so that will have to wait for the eventual M5 Ultra anyways.
CoreImage - GPU accelerated image processing out of the box;
ML/GPU frameworks - built-in ML algorithms running on the device's GPU, or general GPU compute;
Accelerate - CPU vector computations;
Doing such things probably will force you to have platform specific implementations anyway. Though as you said - makes sense only in some niches.
They also probably had RAM contracts in place far enough in advance to avoid the worst of the price spikes.
Basically, too many choices to "focus on" makes none a winner except the incumbent.
For example, grab yourself an Omen Transcend 14, spec it to 64GB RAM and the RTX 5070. You’re under $2000 and getting better graphics performance for anything that isn’t AI, and you’ve got an upgradable 1TB SSD and removable WiFi card.
You’re also getting an OLED screen which most people would prefer.
This model in particular I’ve chosen because it’s just as quiet as the M4 MacBook Pro models within 3dB during high intensity usage and gets very similar battery life, actually better battery life than the M4 Pro/Max models for light tasks.
Liquid Glass, for example, is probably not so great when it comes to resources. It probably works better with the latest Metal and the hardware blocks on the M5's GPU, as opposed to using GPU cores and unified memory on an 8GB M1, making the latest macOS work not so great. I have the 8GB M1 Air and it is really slow on Tahoe. It was snappy just a couple of years ago on a fresh install.
It's so bad I switched back to Chrome. I had thought Chrome had a major battery life penalty compared to Safari on Macs, but I checked more up-to-date info and apparently that's outdated.
https://creativestrategies.com/research/m5-apple-silicon-its...
For example, up until the M2 generation, the base MacBook Pro came with the M2 Pro chip.
However, starting with M3, Apple lowered the MacBook Pro MSRP to $1,599, but its base configuration was downgraded from the M3 Pro to the M3 chip. To get the M3 Pro, you had to pay $1,999. There's a substantial performance difference between the two.
Same with M4. To get the M4 Pro chip, you had to pay $1,999.
Now to get the M5 Pro chip, it's $2,199. Still a good value, but just saying it's a deviation from the trend.
That's likely only part of the reason. The Mac mini is now "cheap" because everything else exploded in price. RAM, SSDs, etc. have all gone up massively. Not to mention the Mac mini is an easy out-of-the-box experience.
For the same price in API calls, you could fund AI driven development across a small team for quite a long while.
Whether that remains the case once those models are no longer subsidized, TBD. But as of today the comparison isn't even close.
I certainly only use Macs when assigned to a project, and there are plenty of developers out there whose job has nothing to do with what Apple offers.
Also, while Metal is a very cool API, I'd rather play with Vulkan, CUDA, and DirectX, as would the large majority of game developers.
https://survey.stackoverflow.co/2025/technology/#1-computer-...
Yes, I'm sure by then there will be better models on offer via cloud providers, but idk if I'll even care. I'm not doing science / research or complex mathematical proofs, I just want a model good enough to vibe code personal projects for fun. So I think at that point I'll stop being a OpenAI / Anthropic customer.
Phones have less configurability, they sell more, and colors seem more important.
I considered the mac mini at the time, but the mac mini only makes sense if you need the local processing power or the apple ecosystem integration. It's certainly not cheaper if you just need a small box to make API calls and do minimal local processing.
And while it is stupid slow, you can run models off hard drive or swap space. You wouldn't do it normally, but it can be done to check an answer from one model against another.
The only groups of developers more tied to Windows that I can think of are probably embedded people stuck with weird hardware SDKs, and Windows Active Directory-dependent enterprise people.
Outside of that almost everyone hip seems to want a Mac.
I already left the beta train on my iPhone because I had too many issues getting my grocery apps to allow me to place orders without going to my laptop and doing it in a web browser.
Even if a new device is a small upgrade from last year's model, it can be a giant upgrade for other people.
In Europe I can get a 128GB Mac Studio M4 Max for 300 euros more than a 5090 (for which you still need to buy a power supply, motherboard, CPU, etc.).
Yeah, because Mac upgrade prices were already sky high before the component shortage. 32GB of DDR5-6000 for a PC rocketed from $100 to $500, while the cost of adding 16GB to a Mac was and still is $400.
If you just need "a small box to make API calls and do minimal local processing" you can also just buy an RPi for a fraction of the price of the GMKtec G10.
All 3 serve a different purpose; just because you can buy a slower machine for less doesn't mean the price:performance of the M1 Mac Mini changes.
Do you really need OpenClaw now? And not Claude Code + Zapier, or Claude Code + cron?
That's the point. If you have worse CPU and GPU Windows will be sluggish (it's bloated).
The US ones? Is that why we have DeepSeek and other non-US open-source LLMs catching up rapidly?
World view please. The developer community is not US only.
It wouldn’t surprise me if the deepseek people were primarily using Mac’s. Maybe Alibaba might be using PCs? I’m not sure.