> Even More Value for Upgraders
> The new 14- and 16-inch MacBook Pro with M5 Pro and M5 Max mark a major leap for pro users. There’s never been a better time for customers to upgrade from a previous generation of MacBook Pro with Apple silicon or an Intel-based Mac.
I read that as "Whoops, we made the M1 MacBook Pro too good, please upgrade!"
I think I will get another 2-5 years out of mine.
Apple: If you document the hardware enough for the Asahi team to deliver a polished Linux experience, I'll buy one this year!
Actually, I can think of one hardware want: have they gotten it to where you can do external GPUs and the like more easily?
Would still buy one over any other laptop on the market today for what I use them for.
It's still shrugging off everything I throw at it, including Windows-only games. I've yet to have a moment where I wished it was faster. I was hoping for a newer display or body before I upgraded. The only "essential" features seem to be WiFi 7 and Bluetooth 6, if they make much of a difference in everyday life.
Which roughly translates to a 30B Q8-size LLM at ~10 t/s for the M5 Pro and a 60B Q8-size LLM at ~10 t/s for the M5 Max.
For reference, the RTX 3090 24GB has a memory bandwidth of approx. 936 GB/s, while the DGX Spark 128GB features a unified memory bandwidth of up to 273 GB/s.
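Back-of-envelope of where those numbers come from (a rough sketch, assuming batch-1 decoding of a dense model is purely bandwidth-bound and the whole weight set is streamed once per token):

```python
# Back-of-envelope only: batch-1 token generation on a dense model streams
# (roughly) the whole weight set per token, so t/s ~= bandwidth / model bytes.

def approx_tokens_per_sec(bandwidth_gb_s: float, params_b: float, bytes_per_param: float = 1.0) -> float:
    model_gb = params_b * bytes_per_param  # GB of weights read per token (Q8 ~ 1 byte/param)
    return bandwidth_gb_s / model_gb

print(approx_tokens_per_sec(300, 30))   # ~10 t/s for a 30B Q8 model on ~300 GB/s (M5 Pro class)
print(approx_tokens_per_sec(600, 60))   # ~10 t/s for a 60B Q8 model on ~600 GB/s (M5 Max class)
print(approx_tokens_per_sec(936, 30))   # ~31 t/s on a 3090's 936 GB/s, ignoring the 24GB limit
```

It ignores KV-cache traffic and MoE sparsity, so treat it as a rule of thumb, not a benchmark.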
It doesn't even look like they added cellular as an option with their own C1X chip (getting around the licensing / cost issues since it's their own chip now).
I am having to use M4 at work and it is the worst piece of equipment I have used. Knowing how Apple releases more of the same, M5 won't be different.
It has 24GB and it is slow asf, takes forever to open apps and macOS somehow managed to be worse than Windows.
I am a Linux user, and on macOS you CANNOT use:
- Ctrl + ABCVYXZ
- Shift/Ctrl + Insert: Copy/paste for terminal
- F5
- Home/End
- Backspace?? Fn + Del like WTF!!
- Select with touchpad?? You must physically press its button like WTF
- Mouse with backward/forward button?? Good luck!!
macOS feels like it was built for people who depend heavily on the mouse. If you are used to Linux, where you can get a lot done with keyboard shortcuts that even work on Windows, mind you, you are lost.
The amount of time wasted fighting macOS is insane.
The temptation of running a local LLM on my gaming PC's GPU finally gave me the incentive I needed to set up Tailscale & Mosh, and there's no going back. My 15" M2 Macbook air is my ideal travel form-factor, and I'd much rather "upgrade" by adding a power-sipping homelab box I can remote into from anywhere.
MacBook Pro with M5 Pro now comes standard with 1TB of storage, while MacBook Pro with M5 Max now comes standard with 2TB. And the 14-inch MacBook Pro with M5 now comes standard with 1TB of storage.

How is that different from the silicon interposer they were using before?
The big change is the two dies don't have to be fabbed next to each other on a single wafer, which is fantastic for costs and yields. But would this affect the interconnect speed somehow?
How would the two be wired together?
Could this mean the Ultra comes back in M6 since it would be easier to fab?
Are they doubling down on local LLMs then?
I still think Apple has a huge opportunity in privacy first LLMs but so far I'm not seeing much execution. Wondering if that will change with the overhaul of Siri this spring.
Also, the mix of cores has changed drastically.
- 6 "Super cores"
- 12 "Performance cores"
I'm guessing these are just renamed performance and efficiency cores from previous generations.
This is a massive change from the M4 Max:
- 12 performance cores
- 4 efficiency cores
This seems like a downgrade (in core config but may not be in actual MT) assuming super = performance and performance = efficiency cores.
I really want to build proper local-first AI workflows at home, and I think Apple has an opportunity to make that possible in a way other companies aren't really focused on. But we need significantly larger memory capacities to do it, which I know is tough in the current memory market, but should be available for a price.
> Testing conducted by Apple in January 2026 using preproduction 13-inch and 15-inch MacBook Air systems with Apple M5, 10-core CPU, 10-core GPU, 32GB of unified memory, and 4TB SSD, and production 13-inch and 15-inch MacBook Air systems with Apple M4, 10-core CPU, 10-core GPU, 32GB of unified memory, and 2TB SSD. Time to first token measured with an 8K-token prompt using a 14-billion parameter model with 4-bit quantization, and LM Studio 0.4.1 (Build 1). Performance tests are conducted using specific computer systems and reflect the approximate performance of MacBook Air.
> The tech giant says the chips are engineered around its new Fusion Architecture, an advanced design that merges two dies into a single, high-performance system on a chip (SoC), which includes a powerful CPU, scalable GPU, Media Engine, unified memory controller, Neural Engine, and Thunderbolt 5 capabilities.
https://techcrunch.com/2026/03/03/apple-unveils-m5-pro-and-m...
They also replaced the efficiency cores on the CPU chiplet with a new higher performance design.
> The CPU now features six “super cores,” which is Apple’s term for its highest-performance cores, alongside 12 all-new performance cores. Collectively, the CPU boosts performance by up to 30% for pro workloads.
I have not once felt the need to upgrade in years, and that’s with doing pretty demanding 3D and LLM work.
For those of us with astigmatism it's really night and day experience.
I still have an Intel-based 2019 MacBook Pro and I have NEVER in its lifetime gotten even half of what they are claiming here. These days if I run it from battery I might get 90 mins.
That said, I had a maxed-out MacBook Pro M4 Max on order but just cancelled it right now and will get this new M5 Max one for basically the same price. Once I saw that they didn't up the price of memory (I don't know how it doesn't affect them), I cancelled my order.
This is the important statement. 614GB/s is quite decent, however a NVIDIA RTX 5090 already offers 1,792 GB/s (roughly 3x) of memory bandwidth, for comparison.
It's one of those things: yes, if I'm spending that much on a laptop I can afford to spend $80 on the adapter too, but does it feel good as a customer to do that? Or are you souring the experience of buying from you just to earn a few more dollars?
Interestingly, 36-128GB models are showing as “currently unavailable” on the store page, and you can’t even place an order for them right now? But for anyone curious, it’s quoting $5099 for the 128GB RAM 14” MacBook Pro model.
As their target for that marketing, I can report it hits home!
But objectively, there is nothing wrong with my current experience at all.
I have never had that experience over many generations and types of machines. The M1 keeps looking better and better in hindsight.
---
Looking forward, either the M5 is the next M1, a bump of good that will last. Or Apple will be really firing on all cylinders if it can “obsolete” the M5 anytime soon.
Of all the stupid things Apple has said lately, this is the most obtuse, pro-market-insulting nonsense. Intel-based Macs were knee-deep in issues Apple wasn’t fixing, and then along came the snappy and always-cool M1.
That was the best time for customers to upgrade. The new Silicon generations can be quite good, but they’re not worlds ahead in anything.
I’ll upgrade my M1 when Apple releases a macOS worthy of being used by its pro customers.
For example, let's say the new OS depends on the M5's exclusive thumbnail-generator accelerator, and let's say it improves speed by 20%.
Now, your M1 notebook, which on previous OSes used standard GPU acceleration for thumbnails, will not have this specialized hardware acceleration; it will have a software fallback that is 90% slower.
You won't notice it at first because it's small, fast stuff, but it eats a bit of the processor.
Multiply this by 1000 features and you have a slow machine.
I don't know how else to explain how an iPad Pro cannot even scroll a menu without stuttering; it's insane how fast these things were on release.
My next MBP will have 128GB memory, but these prices just make me want to wait longer.
I thought the buyer was insane to buy it at that price. But, of course mine had a decent spec and still had the Apple care warranty with very low battery cycle count. After the sale, the buyer told me the truth: The M1 is the best chip Apple ever made and I wouldn't see much of a difference in real world between the M1 Pro and an M3 Pro unless it was the Max version of the chip.
I didn't believe him then. But, after a year of being on the M3 Pro, I gotta say he was spot on. Don't get me wrong, the M3 Pro is definitely faster at a lot of things. But not 3x or 2x faster like Apple always likes to market. I can open a few extra tabs without slowing down, and compile times (Elixir) did get somewhat faster. But definitely not faster to the point where there were two generations' worth of performance improvements like Apple claimed.
The M1 chip series is vastly underrated.
Nothing has broken and I consistently get 4-6 hours of heavy work time while on battery. An amazing machine for the price I paid.
Also, my wife's still using the older touch bar MBP, and well, it works fine for her too.
I'm not sure who needs the newer pros.
It's chiplets just like GB10, Strix Halo, etc. One die has the CPU and the other die has the GPU.
> How is that different from the silicon [bridge] they were using before?
It's probably similar.
> the two dies don't have to be fabbed next to each other
They never were; this is a widespread misunderstanding.
> But would this affect the interconnect speed somehow?
Apple never documented the internal interconnect for the M4 Pro/Max and now they don't document it for the M5 Pro/Max so we don't know. It's probably better to read reviews and avoid theorycrafting and backseat driving.
Unfortunately, number always must go up (and the rate at which the number goes up, also must go up).
That G4 was a dog in Mac OS X 10.1. I installed Yellow Dog, and it lit a rocket under its ass.
I wish the new MBP had 256GB of RAM :(
I don't see why I need a new computer at the moment. In the past, I always got to a stage where the machine felt sluggish.
(Can at least replace them via the self-service repair store. Fiddly job but worth it)
They seem to market it as a technological advancement, which it is, but rather than being excited I'm actually worried about hidden latencies that could come with that approach. Have you found any interesting info on that yet?
Incidentally, I just switched to Asahi Linux, but that was for software quality and openness reasons, rather than anything to do with performance.
This is the exact opposite of my experience. My M3 Max from 2 years ago now has <2 hrs battery life at best. Wondering if any experts here can help me figure out what is going on? What should I be expecting?
The new tensor cores, sorry, "Neural Accelerator" only really help with prompt preprocessing aka prefill, and not with token generation. Token generation is memory bound.
Hopefully the Ultra version (if it exists) has a bigger jump in memory bandwidth and maximum RAM.
This seems even more likely, as the memory bandwidth hasn't increased enough for those kinds of speedups, and I guess prefill is more likely to be compute-bound (vs. memory-bandwidth-bound).
Wondering if local LLM (for coding) is a realistic option, otherwise I wouldn't have to max out the RAM.
> I think I will get another 2-5 years out of mine.
I only own a M4 because the M1 had a hardware fault and I needed a replacement ASAP. (I sold the M1 after repair.)
Although I'm glad to have a newer machine with longer future support, I have yet to notice any meaningful performance difference.
The general case is hardly a "tinfoil hat theory". They openly do that, and the major reason is to tie to new hardware adoption.
That said, it doesn't usually work the way you describe. It's not about adding new features that depend on HW optimizations to slow older machines down (after all, one could just not use those features on an older machine, or toggle them off).
It's rather: you want to get these shiny new features, which is all we advertise for iOS/macOS N+1, and the main new changes? The big ones will only work if you have a newer machine, even though we could trivially enable them on older machines (and some don't even need special hardware, as there are third-party hacks that unlock them and they work fine).
Liquid Glass, for example, probably is not so great when it comes to resources. It probably works better with the latest Metal and hardware blocks on the GPU in M5, as opposed to using GPU cores and unified memory on an 8GB M1, making the latest macOS work not so great. I have the 8GB M1 Air and it is really slow on Tahoe. It was snappy just a couple of years ago on a fresh install.
Yes, different computers have different keyboards. Macs have had Mac conventions since 1984, and if you're not used to it you're not used to it. Instead of sticking to "the thing I'm not used to is wrong", I would suggest trying to be objective about which is the better UX.
I spent the first 25 years of my computer use being a Linux and Windows user and barely ever touching a Mac, so I've had to adjust, but to be honest, the truth is that Apple was always right and Microsoft made the wrong call.
Ctrl is meant to send control characters. This has been well defined since the birth of ASCII. In macOS, cmd-C is always copy, cmd-V is always paste. It does not matter what mode or program you are in.
Windows was designed for an IBM PC where the keyboard only had Ctrl and Alt, and when they copied Apple conventions (like cmd-X, C, V for cut, copy, paste) they made the wrong decision in using Ctrl for it. We've paid for this debt ever since. GNOME, KDE, XFCE, [...] devs continued this travesty by copying Windows, and so now on Linux Ctrl-C is copy in GUI apps, unless it's a terminal, in which case Ctrl-C will break out of your process, and copy is Ctrl-Shift-C. This is insane and bad UX.
The correct choice for a Linux DE would have been to use the "super" key ("Windows logo") for copy/paste etc, but of course they couldn't do that because 1) not everyone had that key and 2) it would confuse Windows users.
> - Select with touchpad?? You must physically press its button like WTF
System Settings -> Trackpad -> Tap to click
> - Mouse with backward/forward button?? Good luck!!
This is stupid. I will agree that Apple's insistence to not really support anything other than their own severely limited input devices is boneheaded. I recommend installing SensibleSideButtons.
I think at this point Apple will just release new versions of laptops whenever new CPU revisions and yields allow. M5 Pro wasn't ready for October so delayed until now.
And another rumor said these are going to be updated again this fall but I’m not sure about that. With OLED screens and M6 (supposedly).
While it’s workable, anything less than 24GB to me feels rather constrained. I definitely am not efficient though - leaving way too many browser tabs open I never actually get back to, running a few chrome profiles for work/side hustle/personal, etc.
I don’t think I’ve ever been CPU constrained for many years now. The few times I need to do something that maxes out the CPU, it just isn’t worth the upgrade vs. taking a break to grab a cup of coffee.
I recently swapped out my work PC (a beefy workstation laptop) for an M4 Pro and it’s an amazing upgrade.
Also, can you run batched inference effectively, like vLLM on CUDA?
Enough to run multiple agents at the same time with throughput?
Neural Accelerators (aka NAX) accelerate matmuls with tile sizes >= 32. From a very high-level perspective, LLM inference has two phases: (chunked) prefill and decode. The former is matmuls (GEMM) and the latter is matrix-vector multiplies (GEMV). Neural Accelerators make the former (prefill) faster and have no impact on the latter.
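A toy way to see the difference (illustrative shapes only, not Apple's actual tile or layer sizes):

```python
# Arithmetic intensity of one weight matrix: prefill does [m x d] @ [d x d] (GEMM),
# decode does [1 x d] @ [d x d] (GEMV). Weight traffic dominates at batch 1.

def flops_per_byte(m: int, d: int, bytes_per_weight: float = 1.0) -> float:
    flops = 2 * m * d * d                    # multiply-adds for [m x d] @ [d x d]
    weight_bytes = d * d * bytes_per_weight  # the weights must be read either way
    return flops / weight_bytes

d = 4096
print(flops_per_byte(512, d))  # prefill chunk of 512 tokens: ~1024 FLOPs/byte -> compute-bound
print(flops_per_byte(1, d))    # single decode step:          ~2 FLOPs/byte    -> bandwidth-bound
```

That gap in FLOPs per byte is why a matmul accelerator speeds up pp512-style prefill but leaves tg untouched.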
• Studios with Ultra Mx, now 4-way RDMA over Thunderbolt 5, and enormous RAM and SSD options, suggest a strong focus. I don't know what else that RAM would be intended for. Four Studio Ultras (total of 360 GPU cores with M5 Ultras?) with 2TB of unified RAM is a local model beast.
• They refashioned their GPU cores to better support both graphics and neural processing, despite already having dedicated NPU cores.
I would say they have been leaning into local models for several years.
I expect we will see more models being optimized for smaller sizes, as demand for them increases. With hardware performance and neural focus trending up, and model requirements/quality trending down, the next few years will be interesting times.
What would make me happy: Ultra x 2 (i.e. 2xUltra, 4xMax, 8xPro, 16xM5) packaging in the Studio. With 8-way RDMA. Mac Kong. Perhaps Apple will start making server cards again.
> Are they doubling down on local LLMs then?
The Neural Accelerator was already present in the iPhone 17 and the M5 chip. This is not new for M5 Pro/Max. Apple's stated AI strategy is local where it can and cloud where it needs to be. So "doubling down"? Probably not. But it fits in their strategy.
The base M5 has super/efficiency cores.
The Pro and Max have super/performance cores.
Whoah, both the Pro and Max CPUs feature 18 cores. This hasn't happened since M1 Pro/Max. This is a surprise.
Replying to my own post. In hindsight, this shouldn't be any surprise because these chips are now chiplets. Apple is connecting a CPU die with a GPU die. This means they're designing just one CPU die rather than two. An Ultra would just be two of these CPU dies.

I think I read somewhere a long time ago that Capture One is also using Qt for the GUI, though I cannot find this anymore, so it's probably not true.
Batch-1 token generation, that is often quoted, does not benefit from this. It's purely RAM bandwidth-limited.
For example, grab yourself an Omen Transcend 14, spec it to 64GB RAM and the RTX 5070. You’re under $2000 and getting better graphics performance for anything that isn’t AI, and you’ve got an upgradable 1TB SSD and removable WiFi card.
You’re also getting an OLED screen which most people would prefer.
This model in particular I’ve chosen because it’s just as quiet as the M4 MacBook Pro models within 3dB during high intensity usage and gets very similar battery life, actually better battery life than the M4 Pro/Max models for light tasks.
Phones have less configurability, they sell more, and colors seem more important.
not too annoying to set up if the first thing you install is claude-cli
You start having problems when a heavy compilation is required, e.g. Android / iOS builds.
Now extrapolating in line with how Sun servers around the year 2000 cost a fortune and can be emulated by a $5 VPS today, Apple is seeing that they can maybe grab the local LLM workloads if they act now with their integrated chip development.
But to grab that, they need developers to rely less on CUDA via Python or have other proper hardware support for those environments, and that won't happen without the hardware being there first and the machines being able to be built with enough memory (refreshing to see Apple support 128GB even if it'll probably bleed you dry).
Apple is in the hardware business.
They want you to buy their hardware.
People using Cloud for compute is essentially competitive to their core business.
I assume they have a moderate bet on on-device SLMs in addition to other ML models, but not much planned for LLMs, which at that scale, might be good as generalists but very poor at guaranteeing success for each specific minute tasks you want done.
In short: 8GB to store tens of very small and fast purpose-specific models is much better than a single 8GB LLM trying to do everything.
I don't mind it; I own Apple stock. But I'm def not buying into their rebranding of the integrated GPU under the guise of Unified Memory.
Remains to be seen how capable it actually is. But they're certainly trying to sell the privacy aspect.
So as most people in or adjacent to the AI space know, NVidia gatekeeps their best GPUs with the most memory by making them eye-wateringly expensive. It's a form of market segmentation. So consumer GPUs top out at 16GB (5090 currently) while the best AI GPUs (H200?) are at 141GB (I just had to search)? I think the previous gen was 80GB.
But these GPUs are north of $30k.
Now the Mac Studio tops out currently at 512GB of SHARED memory. That means you can potentially run a much larger model locally without distributing it across machines. Currently that retails at $9500 but that's relatively cheap, in comparison.
But, as it stands now, the best Apple chips have significantly lower memory bandwidth than NVidia GPUs and that really impacts tokens/second.
So I've been waiting to see if Apple will realize this and address it in the next generation of Mac Studios (and, to a lesser extent, MacBook Pros). The H200 seems to be 4.8TB/s. IIRC the 5090 is ~1.8TB/s. The best Apple is (IIRC) 819GB/s on the M3 Ultra.
Apple could really make a dent in NVidia's monopoly here if they address some of these technical limitations.
So I just checked the memory bandwidth of these new chips and it seems like the M5 is 153GB/s, M5 Pro is ~300 and M5 Max is ~600. I was hoping for higher. This isn't a big jump from the M4 generation. I suspect the new Studios will probably barely break 1TB/s. I had been hoping for higher.
I think this is a new design, with Apple having three tiers of cores now, similar to what Qualcomm has been doing for a while.
I think how it breaks down is:
- "Super" are the old "P" cores, and the top tier cores now
- "Performance" cores are a new tier and seen for the first time here, slotting between "old" P and E in performance
- "Efficiency" / "E" are still going to be around; but maybe not in desktop/Pro/Max anymore.
I believe they lower the clock speed, limit how much work is done in parallel on each core, and limit how aggressive the speculative execution is so less work is wasted.
128 GB maximum.
Sigh.
Oh dear, 14B and a 4-bit quant? There are going to be a lot of embarrassed programmers who need to explain to their engineering managers why their MacBook can't reasonably run LLMs like they said it could. (This already happened at my Fortune 20 company lol)
M5 128GB RAM with 614GB/s memory transfer
This is a huge step over M4 32GB 153GB/s memory transfer
For local LLM this makes it a replacement for a DGX Spark, which offers a third of the transfer speed and is not something you toss in your backpack as your laptop. It’s practically useful for a lot of local use cases and that I think is the 4x factor (memory xfer) - but the 128GB unified headroom tremendously improves the models you can run and training you can do.
Before:
"We have 6 performance cores and 12 efficiency cores"
After:
"We have 6 super cores and 12 performance cores"
"Wow, how did you achieve this?"
"We changed the names."
PRESS RELEASE March 3, 2026
CUPERTINO, CALIFORNIA Apple today announced the latest 14- and 16-inch MacBook Pro with the all-new M5 Pro and M5 Max, bringing game-changing performance and AI capabilities to the world’s best pro laptop. With M5 Pro and M5 Max, MacBook Pro features a new CPU with the world’s fastest CPU core,1 a next-generation GPU with a Neural Accelerator in each core, and higher unified memory bandwidth, altogether delivering up to 4x AI performance compared to the previous generation, and up to 8x AI performance compared to M1 models.2 This allows developers, researchers, business professionals, and creatives to unlock new AI-enabled workflows right on MacBook Pro. It now comes with up to 2x faster SSD performance2 and starts at 1TB of storage for M5 Pro and 2TB for M5 Max. The new MacBook Pro includes N1, an Apple-designed wireless networking chip that enables Wi-Fi 7 and Bluetooth 6, bringing improved performance and reliability to wireless connections. It also offers up to 24 hours of battery life; a gorgeous Liquid Retina XDR display with a nano-texture option; a wide array of connectivity, including Thunderbolt 5; a 12MP Center Stage camera; studio-quality mics; an immersive six-speaker sound system; Apple Intelligence features; and the power of macOS Tahoe. The new MacBook Pro comes in space black and silver, and is available to pre-order starting tomorrow, March 4, with availability beginning Wednesday, March 11.
Up to 7.8x faster AI image generation performance when compared to MacBook Pro with M1 Pro, and up to 3.7x faster than MacBook Pro with M4 Pro.
Up to 6.9x faster LLM prompt processing when compared to MacBook Pro with M1 Pro, and up to 3.9x faster than MacBook Pro with M4 Pro.
Up to 5.2x faster 3D rendering in Maxon Redshift when compared to MacBook Pro with M1 Pro, and up to 1.4x faster than MacBook Pro with M4 Pro.
Up to 1.6x faster gaming performance with ray tracing in games like Cyberpunk 2077: Ultimate Edition when compared to MacBook Pro with M4 Pro.
Up to 8x faster AI image generation performance when compared to MacBook Pro with M1 Max, and up to 3.8x faster than MacBook Pro with M4 Max.
Up to 6.7x faster LLM prompt processing when compared to MacBook Pro with M1 Max, and up to 4x faster than MacBook Pro with M4 Max.
Up to 5.4x faster video effects rendering performance in Blackmagic DaVinci Resolve Studio when compared to MacBook Pro with M1 Max, and up to 3x faster than MacBook Pro with M4 Max.
Up to 3.5x faster AI video-enhancing performance in Topaz Video when compared to MacBook Pro with M4 Max.
Enhanced AI performance with Neural Accelerators in the GPU: Users upgrading from M1 models will experience up to 8x faster AI performance.2
Exceptional battery life: The new MacBook Pro gets up to 24 hours of battery life, giving Intel-based upgraders up to 13 additional hours, and users coming from M1 models will get up to three more hours, so they can get more done on a single charge.2 And unlike many PC laptops, MacBook Pro delivers the same incredible performance whether plugged in or on battery. Users will be able to fast-charge up to 50 percent in just 30 minutes using a 96W or higher USB-C power adapter.2
Best display in a pro laptop: Upgraders will enjoy the Liquid Retina XDR display, which features 1600 nits peak HDR brightness and up to 1000 nits for SDR content, and offers a nano-texture option.
Comprehensive connectivity: The new MacBook Pro has a wide array of connectivity options, including three Thunderbolt 5 ports for high-speed data transfer, HDMI that supports up to 8K resolution, an SDXC card slot for quick media import, and MagSafe 3 with fast-charge capability. Upgraders can also drive up to two high-resolution external displays with M5 Pro, and up to four high-resolution displays with M5 Max, providing the flexibility to create expansive workspaces.
Wi-Fi 7 and Bluetooth 6: With the Apple N1 chip, Wi-Fi 7 and Bluetooth 6 bring improved performance and reliability to wireless connections.
Advanced camera, mics, and speakers: Featuring a 12MP Center Stage camera with Desk View support and studio-quality mics, the new MacBook Pro will allow users to look and sound their best while taking calls. They will also experience an immersive six-speaker sound system with support for Spatial Audio.
macOS Tahoe transforms the MacBook Pro experience with powerful capabilities that turbocharge productivity.6 Major updates to Spotlight make it easier to find relevant apps and files and immediately take action right from the search bar. Apple Intelligence is even more capable while protecting users’ privacy at every step.7 Shortcuts get even more powerful with intelligent actions and the ability to tap directly in to Apple Intelligence models. Integrated into Messages, FaceTime, and the Phone app, Live Translation helps users easily communicate across languages, translating text and audio.7 Additionally, developers can bring Apple Intelligence capabilities into their applications or tap in to the Foundation Models framework for specialized on-device intelligence tasks. Continuity features include the Phone app on Mac, which lets users relay cellular calls from their nearby iPhone, and with Live Activities from iPhone, they can stay on top of things happening in real time.6 macOS Tahoe also features a beautiful new design with Liquid Glass, and users can personalize their Mac in even more ways with an updated Control Center, in addition to new color options for folders, app icons, and widgets.
Customers can pre-order the new 14- and 16-inch MacBook Pro models with M5 Pro and M5 Max starting tomorrow, March 4, on apple.com/store and in the Apple Store app in 33 countries and regions, including the U.S. All models will begin arriving to customers, and will be in Apple Store locations and Apple Authorized Resellers, starting Wednesday, March 11.
The 14‑inch MacBook Pro with M5 Pro starts at $2,199 (U.S.) and $2,049 (U.S.) for education; and the 16‑inch MacBook Pro with M5 Pro starts at $2,699 (U.S.) and $2,499 (U.S.) for education.
The 14‑inch MacBook Pro with M5 Max starts at $3,599 (U.S.) and $3,299 (U.S.) for education; and the 16‑inch MacBook Pro with M5 Max starts at $3,899 (U.S.) and $3,599 (U.S.) for education. All models are available in space black and silver.
Additional technical specifications, configure-to-order options, and accessories are available at apple.com/mac.
The 14-inch MacBook Pro with M5 now comes standard with 1TB of storage, and is available in space black and silver, starting at $1,699 (U.S.) and $1,599 (U.S.) for education.
With Apple Trade In, customers can trade in their current computer and get credit toward a new Mac. Customers can visit apple.com/shop/trade-in to see what their device is worth.
AppleCare delivers exceptional service and support, with flexible options for Apple users. Customers can choose AppleCare+ to cover their new Mac, or in the U.S., AppleCare One to protect multiple products in one simple plan. Both plans include coverage for accidents like drops and spills, theft and loss protection on eligible products, battery replacement service, and 24/7 support from Apple Experts. For more information, visit apple.com/applecare.
Every customer who buys directly from Apple Retail gets access to Personal Setup. In these guided online sessions, a Specialist can walk them through setup or focus on features that will help them make the most of their new device. Customers can also learn more about getting started and going further with their new device with a Today at Apple session at their nearest Apple Store.
Customers in the U.S. who shop at Apple using Apple Card can pay monthly at 0 percent APR when they choose to check out with Apple Card Monthly Installments, and they’ll get 3 percent Daily Cash back — all up front. More information — including details on eligibility, exclusions, and Apple Card terms — is available at apple.com/apple-card/monthly-installments.
We’ve really had it good with these Apple Silicon Macs.
I’m not even sure how it got installed, possibly when I installed Zoom for an interview once but I don’t know. Point is, at least in one case, AI can help track down battery hogs.
Apple will replace the battery for $249 if you choose to. https://support.apple.com/mac-laptops/repair?services=servic...
Which is fine, I use Firefox usually, but any time I open Chrome it just seems to drain the battery super fast.
Unless of course you're doing something that truly sucks down your battery! If I spin up a few Docker instances doing 100% CPU then obviously battery will go down much quicker.
They also probably had RAM contracts in place far enough in advance to avoid the worst of the price spikes.
https://www.macrumors.com/2026/03/05/mac-studio-no-512gb-ram...
I type this from an M3 Max 2023 MBP that still has 98% battery health. But admittedly it's only gone through 102 charge cycles in ~2 years.
(use `pmset -g rawbatt` to get cycle count or `system_profiler SPPowerDataType | grep -A3 'Health'` to get health and cycles)
I'm typing this on an M3 Max; its max battery capacity is 88%. I've got some things running (laptop average temp is 50-55C, fans off), screen is half brightness, and it's projected to go from 90% to 0% in five hours. I don't usually baby it enough to test this, but 8-10 hours should be achievable.
The only plausible answers are either: something you’re running is eating CPU/GPU cycles like crazy (browser tabs gone amok, background processes) or you have a defective battery. Use Activity Monitor to look for energy usage and that will give you a pretty good idea.
Hot take: people should get used to, and expect to, replace device batteries 1 or 2 times during the device lifetime. They're the main limiting factor on portable device longevity, and engineers make all kinds of design tradeoffs just to make that 1 battery that the device ships with last long enough to not annoy users. If we could get people used to taking their device in for a battery once every couple of years, we could dramatically reduce device waste, and also unlock functionality that's hidden behind battery-preserving mechanisms.
And some apps are really inefficient. New Codex app drains my battery. If you are using Codex I recommend minimizing it, since it’s the UI that uses most power.
Most stuff ends up running Metal -> GPU I thought
Or they value things differently than you do.
Like screen brightness. Or external IO. Or more than 64GB of memory. Or not being stuck with Windows. Or an SSD larger than 2TB.
> removable WiFi card
I could stick my hand into a wood chipper and still use the stump to count the number of people I've ever seen mention much less desire a removable wifi card in the decision making process about a laptop.
No government mandates the lack of a charger.
I've never owned an iPhone, but if I did, it would be a sweet luxury to be able to colour match the phone to the mac. Orange on orange, orange on purple, purple on green. iPads can do it and they're practically useless e-waste
The high memory Macs have been great for being able to run LLMs, but the prompt processing has always been on the slow side. The new AI acceleration in these should help with that.
There are also workloads like compiling code where I’ll take all the extra speed I can get. Every little bit of reduced cycle time helps me finish earlier in the day.
And then there’s gaming. I don’t game much, but the M1 and M2 era Apple Silicon feels sluggish relative to what I have on the nVidia side.
You sadly just missed the window or cancelled too soon.
Normally if your current order is in progress they swap it out for the best closest spec for the exact same price you ordered the M4.
I am on a Macbook Pro M1 Pro running Asahi and a 28 inch external display via USB-C / dp alt mode as of typing this comment. They have a `fairydust` branch in their kernel repo which is meant for devs to test and hack on dp alt mode support, but it just works for me without problems.
See https://www.reddit.com/r/AsahiLinux/comments/1pzht74/dpaltmo...
People always overlook that CUDA is a polyglot ecosystem: the IDE and graphical debugging experience, where one can even single-step GPU code, and the libraries ecosystem.
And as of last year, NVidia has started to take Python seriously, and now with cuTile-based JIT it is possible to write CUDA kernels in pure Python, not having Python generate C++ code that other tools then ingest.
They are getting ahead of Modular, with Python.
It's the best. We all turned it off. 100% privacy.
> But I'm def not buying into their rebranding of integrated GPU under the guise of Unified Memory.
But it is unified memory? Thanks to Intel iGPUs, the term has been tainted for a long time.
The M5 performance cores can be scaled down to match efficiency cores in performance and power usage.
Source for this?

> The industry-leading super core was first introduced as performance cores in M5, which also adopts the super core name for all M5-based products
But new "performance" is claimed to be new design (= not just overclocked efficiency core from M5?):
> M5 Pro and M5 Max also introduce an all-new performance core that is optimized to deliver greater power-efficient, multithreaded performance for pro workloads.
quotes from https://www.apple.com/newsroom/2026/03/apple-debuts-m5-pro-a...
CoreImage - GPU accelerated image processing out of the box;
ML/GPU frameworks - built-in ML algorithms running on the device's GPU, or general computation on the GPU;
Accelerate - CPU vector computations;
Doing such things probably will force you to have platform specific implementations anyway. Though as you said - makes sense only in some niches.
The only things that are more expensive in the EU vs. the US are AppleCare+ and taxes.
In the US it looks like you pay yearly for AppleCare+, while in the EU it has to be for a fixed number of years.
So, if you want one of mine, you can have one. On me. Because I'm fucking drowning in the things and appreciate not having to deal with another one.
The EU requires that users must be able to buy a device without a charger. It's a huge supply chain challenge to add two variants of every single SKU, one with a charger and one without. So the obvious solution is to sell the charger separately, since you need that regardless, and always sell the device without a charger. You avoid having two variants of everything that way.
Now, you could maybe argue that Apple should default to bundle a charger with your laptop, so that you'd have to uncheck a "bundle charger" checkbox on their website. But do you really care whether your laptop costs $2200 and you can buy a charger for $60 or your laptop costs $2260 and you can save $60 by removing the charger?
You can make an argument that doing it Apple's way hides a price increase. And yeah, that's probably fair. But it's not like Apple is afraid of non-hidden price increases either.
My M3 Pro from a few years ago for the same price had 18GB.
The prompt processing sped up.
Not the output generation.
M4 was notoriously slow at this compared to DGX etc.
That's how they make loot on their 128GB MacBook Pros. By kneecapping the cheap stuff. Don't think for a second that the specs weren't chosen so that professional developers would have to shell out the 8 grand for the legit machine. They're only gonna let us do the bare minimum on a MacBook Air.
I'd take that tradeoff. On my M3 Ultra, the inference is surprisingly fast, but the prompt processing speed makes it painful except as a fallback or experimentation, especially with agentic coding tools.
That's actually the biggest growth area in LLMs: it is no longer about smarts, it is about context windows (usable ones, not spec-sheet hypotheticals). Smart enough is mostly solved; combating larger problems is slowly improving with every major release (but there is no ceiling).
But I think this predates Tahoe.
> and that’s with doing pretty demanding 3D and LLM work.
It definitely chokes with larger models that can fit in the 192GB of RAM. Prompt processing is a big bottleneck before M5.

Now it starts at $1699, a $100 bump, but comes with a 1TB SSD. Previously it would have cost $1799 for the 1TB SSD, so it's a $100 bump on the base price but you are also getting the 1TB SSD for $100 less than before.
Oh really, it's universally better?
> For those of us with astigmatism it's really night and day experience.
Oh. So it's better for someone else with a specific eye condition, who is practically guaranteed to never use a MacBook that I buy?
Aren't the OpenClaw enjoyers buying Mac Minis because it's the cheapest thing which runs macOS, the only platform which can programmatically interface with iMessage and other Apple ecosystem stuff? It has nothing to do with the hardware really.
Still, buying a brand new Mac Mini for that purpose seems kind of pointless when a used M1 model would achieve the same thing.
5090 has 32GB, and the 4090 and 3090 both have 24GB.
For example, 6 super, 8 performance, and 4 efficiency.
You can buy two M5 Pro base models for the same price as a single 5090...
https://appleinsider.com/articles/25/10/15/eu-gets-what-it-a...
In the US they provide one in the box free of charge.
Interesting that this hasn't budged since the memory shortages appeared.
No change from the previous models then, 16GB->32GB was already $400. They're cutting into their previously enormous margins to keep the prices stable, rather than hiking the prices to maintain their margins.
Isn't this it?
It's the first time I've ever been so repulsed by a design that I actively avoid it just... out of sheer preference.
If you move your home directory to a different disk partition, you can even share it between two different macOS versions!
Yes, I'm sure by then there will be better models on offer via cloud providers, but idk if I'll even care. I'm not doing science / research or complex mathematical proofs, I just want a model good enough to vibe code personal projects for fun. So I think at that point I'll stop being a OpenAI / Anthropic customer.
And not even diehard Apple fanboys deny this.
I genuinely feel bad for people who fall for their marketing thinking they will run LLMs. Oh well, I got scammed on runescape as a child when someone said they could trim my armor... Everyone needs to learn.
~9 years later, there are a lot of people still using it as their main machine, waiting until we get kicked off the corp network for lack of software support.
But, I don’t feel totally corrected since that laptop only has one binary choice besides color. It’s 256GB w/o Touch ID or 512GB w/ Touch ID.
M5 supports up to two external displays over any combination of Thunderbolt and HDMI ports:
- Two displays up to a native resolution of 6K at 60Hz or 4K at 144Hz, or
- One display up to a native resolution of 8K at 60Hz or 5K at 120Hz or 4K at 240Hz

M5 Pro supports up to three external displays over any combination of Thunderbolt and HDMI ports:
- Three displays up to a native resolution of 6K at 60Hz or 4K at 144Hz, or
- One display up to a native resolution of 8K at 60Hz or 5K at 120Hz or 4K at 240Hz, plus a second display up to a native resolution of 5K at 120Hz or 4K at 200Hz
Underneath it all is Wine which is the open source compatibility layer project which Crossover contributes to.
M5 Max maxes out at 128GB, so that will have to wait for the eventual M5 Ultra anyways.
Basically, too many choices to "focus on" makes none a winner except the incumbent.
This arb you’re talking about doesn’t exist. An M1 Studio with 64GB was $1300 prior to OpenClaw. You’re not getting that today.
I would have preferred that too since I could Asahi it later. It’s just not cheap any more. The M4 is a flat $500 at Micro Center.
Liquid Glass is really killing my love for Apple products. I'll probably get a Framework and an Android phone for my next device purchases.
They really need to just admit it was a bad move and make like Sonic.
They mandate that the charger be optional. The charger is indeed optional, and you don't pay for the charger if you don't opt-in.
What exactly else do you want?
SSD larger than 2TB? That's not a differentiating feature of a Mac. As a completely random example, an HP Omen Transcend 14 has DUAL M.2 SSD slots, and that's not even a high-end PC laptop. Macs are the only systems on the market where you can't upgrade the storage after purchase, and buying it up front comes at insane markups.
More than 64GB of memory, yeah, that's also available on other PCs. Numerous PCs. I found multiple models from multiple manufacturers that support the same 128GB maximum. My Framework 13 supports 96GB and it's socketed DDR5 so I can just buy it at a store after purchase, or you can look at the new Lenovo T series Gen 7 (10/10 iFixit Repair Score) which has LPCAMM2 memory, allowing for BOTH repairability/upgradability AND high memory speed.
External IO, again, Apple isn't the gatekeeper of Thunderbolt 5. Where is the MacBook Pro with Oculink for the best external GPU performance? My Framework 13 has four fully customizable ports, I can literally put whatever ports I want on the machine and switch them out. Apple can't be bothered to put a USB-A port on a device despite the fact that it's still widely used and it would be convenient to just have one on there.
Apple doesn't make their own display panels. You can get a PC laptop with a wide variety of panels including the same mini-LED technology. Where is the MacBook Pro with a tandem OLED panel? You don't really get a choice with a Mac, you are stuck with the two different panels that they sell.
On Mac you're stuck with macOS. On a PC laptop you have more choices, Windows or a wide variety of Linux and BSD derivatives. Linux on Mac hardware is not fully functional and compatible with the hardware. Being stuck with Mac means you are unable to run a far wider array of software than being stuck with Windows or Linux. Imagine it this way: you just bought a top of the line MacBook Pro with the M5 Max chip, you've got a beast of a machine! You just spent your day crushing high intensity productivity tasks. Now you'd like to leverage the insane speed of your MacBook Pro playing some AAA video games. Oops! The macOS game library is minuscule, and CrossOver ($ annual license fee) is not ideal compared to Steam on Windows or Steam/Proton on Linux. I guess I can just play Cyberpunk 2077 or Rise of the Tomb Raider for the 10th time on my Mac!
Nobody mentions a removable WiFi card until their WiFi/Bluetooth stops working and they're stuck with an astronomical repair bill and now because they're Apple customers they are buying an entirely new system and "recycling" their old laptop. On a system like a Framework or a Lenovo T series 7th gen, components like USB-C ports are removable in case they are physically damaged, and you can buy parts directly from the manufacturer. Apple's strategy is to upsell you on an extended warranty/insurance plan to try and avoid astronomical repair bills.
For example, up until the M2 generation, the base MacBook Pro came with the M2 Pro chip.
However, starting with M3, Apple lowered the MacBook Pro MSRP to $1599, but its base configuration was downgraded from the M3 Pro to the plain M3 chip. To get the M3 Pro, you had to pay $1999. There's a substantial performance difference between the two.
Same with M4. To get the M4 Pro chip, you had to pay $1999.
Now to get M5 Pro chip, it's $2199. Still a good value, but just saying it's a deviation from the trend.
https://survey.stackoverflow.co/2025/technology/#1-computer-...
I certainly only use Macs when being project assigned, then there are plenty of developers out there whose job has nothing to do with what Apple offers.
Also while Metal is a very cool API, I rather play with Vulkan, CUDA and DirectX, as do the large majority of game developers.
That's likely only part of the reason. The Mac mini is now "cheap" because everything else exploded in price. RAM and SSD etc. have all gone up massively. Not to mention the Mac mini is an easy out-of-the-box experience.
For the same price in API calls, you could fund AI driven development across a small team for quite a long while.
Whether that remains the case once those models are no longer subsidized, TBD. But as of today the comparison isn't even close.
Apple has had a big enough war chest to buy the entirety of TSMC's new capacity years in advance in the past.
If I were to guess, Apple locked in their entire BOM and production capacity two years ago. That's something even the large players cannot replicate because they run cash-lean and have too many different SKUs, and the small players (Framework, System76, even Steam) are entirely left to the forces of the markets.
There definitely are some who fit into this category, but if they're buying the latest and greatest on a whim then they've likely got money to burn and you probably don't need to feel bad for them.
Reminds me of the saying: "A fool and his money are soon parted".
For reference:
| model | size | params | backend | threads | test | t/s |
| ------------------------------ | ---------: | ---------: | ---------- | ------: | --------------: | -------------------: |
| qwen35 ?B Q5_K - Medium | 6.12 GiB | 8.95 B | MTL,BLAS | 6 | pp512 | 288.90 ± 0.67 |
| qwen35 ?B Q5_K - Medium | 6.12 GiB | 8.95 B | MTL,BLAS | 6 | tg128 | 16.58 ± 0.05 |
| model | size | params | backend | threads | test | t/s |
| ------------------------------ | ---------: | ---------: | ---------- | ------: | --------------: | -------------------: |
| gpt-oss 20B MXFP4 MoE | 11.27 GiB | 20.91 B | MTL,BLAS | 6 | pp512 | 615.94 ± 2.23 |
| gpt-oss 20B MXFP4 MoE | 11.27 GiB | 20.91 B | MTL,BLAS | 6 | tg128 | 42.85 ± 0.61 |
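Quick sanity check on those tg128 numbers (a rough sketch that assumes decode is purely weight-bandwidth-bound):

```python
# If decode were purely bandwidth-bound and the model dense, effective bandwidth
# would be roughly model size * tokens/s.
GIB_TO_GB = 1.073741824

def implied_bandwidth_gb_s(model_gib: float, tg_tokens_per_s: float) -> float:
    return model_gib * GIB_TO_GB * tg_tokens_per_s

print(implied_bandwidth_gb_s(6.12, 16.58))   # ~109 GB/s for the dense-ish Q5 model
print(implied_bandwidth_gb_s(11.27, 42.85))  # ~518 GB/s *if* gpt-oss 20B were dense --
# it isn't: as an MoE it only activates a fraction of its weights per token,
# which is why it decodes so much faster than its on-disk size would suggest.
```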
Klein 4B completes a 1024px generation in 72 seconds.

They could have just made some layout improvements without trashing everything visually; that's sad, really.
The contour they put around the icon is really, really bad. How the fuck did they approve that?
Booting a 15 year old Mac a while ago had me surprised how clean the interface actually is. The Dock/Desktop look a lot better in the old versions, and the age is mostly showing in apps like Finder which do look a bit dated.
I really hope someone at Apple is going to make the call to drastically reduce the Liquid Glass design and start complying with their own UX guidelines again.
TL;DW: 2010s Intel-era Mac laptops saw, at the very best, a 35% single-core CPU performance gain over 5 years! That now happens almost every year with the M-line Macs.
Rant:
Retina Macs were great and had a better form factor than the unibody Macs. Touch Bar Macs in the mid-2010s were IMHO a disaster: terrible keyboard, poorer thermal capacity, missing essential ports, adapters galore.
But when it comes to performance - early 2010s macbooks with dedicated gpus had serious overheating issues.
Retina macbooks were decent, both form factor and performance.
Touch Bar Macs were totally abysmal; all performance gains over previous generations came from pumping out more heat. CPUs were constantly pegged at 90°C+, you couldn't have the laptop on your lap, and Apple was planning and delaying release schedules around Intel fumbling its tick-tock cycles (as far as I remember, some Macs did not get any improvements for 2+ years, if not way more). Upgrades were sometimes total jokes; because of thermal throttling there was no point putting in more hardware than the chassis could work with. From reviews, buying a higher-level CPU sometimes didn't give noticeable real-life gains because, again, thermal throttling kicked in instantly. The 2020 Intel MacBook Pro has its fans spinning almost all the time. Take a remote call and your battery is dead in 2h max (essentially 1% per minute).
The M1 Macs gave an insane perceived performance boost - no noticeable throttling. MacBook Airs are fully passively cooled, and I've never heard an M-series MacBook Pro with its fans screeching.
Also real full work day battery doing real work without power adapter at full performance. Cool to touch most of the time.
I did homework for a job in 2020 on a personal 2013 MacBook. Apart from the memory footprint, I could not feel a noticeable difference in the development experience. Editing images was frustrating on both. With M Macs it's silent, smooth, fast.
The number of parallel cores matches the best Intel CPUs on base models, and the GPU blows any mobile GPU in the price range out of the water, with the thermal capacity to peg it at 100% no problem. Unified memory lets those GPUs do what you could only imagine doing on GPUs that cost 3 times more than the MacBook.
It’s such an excellent architecture that, yeah, it’s “boring”: you can nitpick about M69 Ultra Pro Max performance, but take a base MBP of any M line and it blows almost any laptop out of the water, even to this day.
https://9to5mac.com/2025/10/16/no-the-eu-didnt-ban-apple-fro...
Voice -> Speech to Text -> LLM to determine intent -> JSON -> API call -> response -> LLM -> text to speech.
TTFT is irrelevant; you have to process everything through the pipeline before you can generate a response. A fast model is more important than a good model.
Source: I do this kind of stuff for call centers. Yes, I know modern LLMs don’t go through the voice -> text -> LLM -> text -> voice pipeline anymore. But that only works when you don’t have to call external sources.
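Roughly what that pipeline looks like as code (a minimal sketch; every function here is a hypothetical stub standing in for a real STT / LLM / TTS service):

```python
# Minimal sketch of the voice pipeline described above. All names are hypothetical stubs.

def transcribe(audio: bytes) -> str:        # speech -> text (stub)
    return "what's my account balance"

def classify_intent(text: str) -> dict:     # LLM -> structured intent as JSON-ish dict (stub)
    return {"intent": "get_balance", "account": "checking"}

def call_backend(intent: dict) -> dict:     # external API call (stub)
    return {"balance": 1234.56}

def draft_reply(data: dict) -> str:         # LLM turns raw data into a sentence (stub)
    return f"Your balance is ${data['balance']:.2f}."

def synthesize(text: str) -> bytes:         # text -> speech (stub)
    return text.encode()

def handle_turn(audio: bytes) -> bytes:
    # Each stage must finish before the next starts, so end-to-end latency is the
    # sum of all stages -- which is why a fast model beats a marginally smarter one.
    return synthesize(draft_reply(call_backend(classify_intent(transcribe(audio)))))

print(handle_turn(b"..."))
```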
For example just now from the front page: https://news.ycombinator.com/item?id=47242637 "Speculative Speculative Decoding"
Or this: https://openreview.net/forum?id=960Ny6IjEr "Low-Rank Compression of Language Models Via Differentiable Rank Selection"
https://creativestrategies.com/research/m5-apple-silicon-its...
What is "it" and what didn't it do?
Barring the removal of the Esc key, I think the Touch Bar was useful because it showed contextual actions. But not every app used it, so it didn't really get a chance to shine.
I think people who do tasks where a touch screen makes sense are probably doing most of their work on an iPhone or an iPad anyway.
Now gesture control on VR/AR setups? Sure, that feels like a new human/computer interaction system that makes sense. Jabbing at my laptop screen with one hand on my keyboard, not so much.
Sonnet is so fast too. GPT-5.2 needs reasoning tuned up to get tool calling reliable and Qwen3 Coder Next wasn’t close. I haven’t tried Qwen3.5-A3B. Hearing rave reviews though.
If you’re successfully using some model, knowing that alone is very helpful to me.
And while it is stupid slow, you can run models off hard drive or swap space. You wouldn’t do it normally, but it can be done to check an answer in one model versus another.
Part of this has to be blamed on Apple, as the chassis designer and system integrator. Intel did not force them to put an i9 in the 16" MBP, Apple made that decision. Even now, Apple refuses to use the old Touchbar chassis for anything other than passively-cooled base model chips. It's a tacit admission that they know the design failed; it probably would still suck with Pro and Max chips inside them.
The paper-thin unibody, Butterfly keyboard and Touchbar were all unpopular features, but Apple shipped them anyways. It really shouldn't take 4+ years to respond to critical design flaws, especially if you're a trillion-dollar business.
Regardless, point still stands, they probably ordered this several years ago.
Most people can totally live with 16gigs but it is kind of a waste for the horsepower. They know what they are doing. Apple is a master in upselling.
Though personally I don't mind the aggressive upselling as long as the quality is there. Problem is, the hardware quality is great but the software side is severely lacking and getting worse.
Which, I mean, I love unified memory, as one of those weirdos that does do local LLM stuff and am contemplating if it's time to upgrade my m2 max.
But if you needed 32gb then you still need at least 32gb now. Unless swap on nvme disks is enough for you - and it isn't for me.
The M4 Max has 546 GB/s compared to 614GB/s for the M5 Max. Which is like 12% faster not 4x.
I use my laptop for development. I don't actually use most of the built in applications. My browser is Firefox, I use codex, vs code, intellij, iterm2, etc. Most of that works just fine just as it did on previous versions of the OS. I actually on purpose keep my tool chains portable as I like to have the option to switch back to Linux when I want to. I've done that a few times. I come back for the hardware, not the OS.
In my experience, if you don't like Apple's OS changes that is unfortunate but they don't seem to generally respond to a lot of the criticism. Your choices are to get further and further out of date, switch to something else, or just swallow your pride. Been there done that. Windows is a "Hell No" for me at this point. I'll take the UX, with all the pastel colors that came and went and all the other crap that got unleashed on macs over the last ten years. Definitely a case of the grass not being greener on Windows. Even with the tele tubby default desktop in XP back in the day.
I can deal with Linux (and use that on and off on one of my laptops). However, that just doesn't run that well on mac hardware. And any other hardware seems like a big downgrade to me. Both Windows and Linux are arguably a lot worse in terms of UX (or lack thereof). Linux you can tweak. And you kind of have to. But it just never adds up to consistent and delightful. Windows, well, at this point liking that is probably a form of Stockholm Syndrome. If that doesn't bother you, good for you.
So, Mac OS it is for me as everything else is worse. I've in the past deferred updates to new versions of Mac OS as well. Generally you can do that for a while but eventually it becomes annoying when things like homebrew and other development toys start assuming you run something more recent. And of course for security reasons you might just not drag your feet too long. Just my personal, pragmatic take.
Latency to the first token is not like a web page where first paint already has useful things to show. The first token is "The ", and you'll be very happy it's there in 50ms instead of 200ms... but then what you really want to know is how quickly you'll get the rest of the sentence (throughput)
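To make the distinction concrete, here is a tiny measurement sketch in Python (token_stream is a stand-in for whatever streaming API you use; nothing here is specific to any particular model, and it assumes the stream yields at least one token):

    import time

    def measure(token_stream):
        # report time-to-first-token (latency) and sustained tokens/sec (throughput)
        t0 = time.perf_counter()
        ttft = None
        count = 0
        for _ in token_stream:
            count += 1
            if ttft is None:
                ttft = time.perf_counter() - t0   # latency to the first token
        total = time.perf_counter() - t0
        print(f"TTFT: {ttft * 1000:.0f} ms")
        print(f"throughput: {count / total:.1f} tok/s over {count} tokens")

    # e.g. measure(llm.stream(prompt)); here is a fake 50 tok/s stream for testing:
    measure((time.sleep(0.02) or "tok" for _ in range(100)))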
Not upgrading any of my Macs ever again. For 13 years I was a fanboy looking forward to every new update like a present; not anymore. It took one Tahoe to burn all that trust. Never upgrading major OS versions on hardware from Apple again.
Literally unusable
That's amazing. I have an early 2023 M2 Max MBP that mostly charges in desktop mode, which limits to 80%. I just looked in battery health and it says 82%. Damn! :(
For giggles, earlier today I asked Apple how much they'd give me for this machine if I traded it in on a brand new $5K M5 Max equivalent. $825. Ouch. I think I will keep it for a few more years. 96GB is enough memory to do anything I want, and it's been such a great performer that it's easily my favorite MacBook ever. I do wish the battery weren't so degraded though.
For anec-science, here goes:
% pmset -g rawbatt
03/03/2026 18:29:51
AC; Not Charging; 76%; Cap=76: FCC=100; Design=6075; Time=1092:15; 0mA; Cycles=63/1000; Location=0;
Polled boot=02/09/2026 07:24:50; Full=03/03/2026 18:24:52; User visible=03/03/2026 18:28:52
% system_profiler SPPowerDataType | grep -A3 'Health'
Health Information:
Cycle Count: 63
Condition: Normal
Maximum Capacity: 82%
What in the world is an idle Claude Desktop doing that uses so much power?
> Hot take: people should get used to, and expect to, replace device batteries 1 or 2 times during the device lifetime.
I agree that people should get used to replacing device batteries, but if you accept that then you should just stop worrying about charge habits. An MBP that doesn't have a defective or extreme-heat-damaged battery should stay above 80% battery capacity for at least 600 charge cycles without any special care at all. That's many years of regular charging, and 80% capacity is still good for all day usage.
Replies like "but, but other laptops" are very weak attempts at deflection.
I won’t try to claim that Electron and friends have no place in software development, but we absolutely should be pushing back harder against stuffing it everywhere it possibly can be.
> Anyone who cares about value isn’t getting a non-base model Mac.
And then proceeded to suggest a very specific model which has several limitations compared to what's available in a Mac.
Of course there are alternative options. The point is that each person has different priorities, and each available option has different trade offs.
I didn't really read the rest of all that.
On a laptop I would imagine it's actually more likely that fingertips would touch the screen while opening it.
Only groups of developers more tied to Windows that I can think of are probably embedded people tied due to weird hardware SDK's and Windows Active Directory dependent enterprise people.
Outside of that almost everyone hip seems to want a Mac.
I considered the mac mini at the time, but the mac mini only makes sense if you need the local processing power or the apple ecosystem integration. It's certainly not cheaper if you just need a small box to make API calls and do minimal local processing.
Suddenly that $40k is quite reasonable because you’ll never pay another dollar for at least 2-3 years.
Maybe hold back on the attitude
In Europe I can get a 128GB Mac Studio M4 Max for 300 euros more than a 5090 (for which you still need to buy a power supply, motherboard, CPU, etc.).
I wonder if there's a fab time secondary market where Wall Street types are making millions off speculating fab time.
But low rank compression isn't trading off compute for memory - it's just compressing the model. And critically, that's lossy compression. That's primarily a trade-off of quality for speed/size, with a little bit of added compute. Same goals as quantization. If there was some compute-intensive lossless compression of parameters, lots of people would be happy. But those floating point values look a lot like gaussian noise, making them extremely difficult to compress.
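To see how lossy it is, here is a toy NumPy sketch (sizes and rank are arbitrary): truncating an SVD cuts the parameter count by 4x, but on noise-like weights the reconstruction error stays large.

    import numpy as np

    W = np.random.randn(1024, 1024).astype(np.float32)   # stand-in "weight matrix"
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    r = 128                                               # keep only rank-128 factors
    A = U[:, :r] * S[:r]                                  # 1024 x 128
    B = Vt[:r, :]                                         # 128 x 1024
    compression = W.size / (A.size + B.size)              # ~4x fewer parameters
    rel_err = np.linalg.norm(W - A @ B) / np.linalg.norm(W)
    print(f"{compression:.1f}x smaller, relative error {rel_err:.2f}")  # error stays large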
Generally, the fewer parameters, the less knowledge they have.
Or just because if somebody who knows the code inside out doesn't shoot down most new stupid feature requests, the product would end up a slow overcomplicated mess of random features and technical debt.
I've been holding out as you do for as long as I can but in 1-2 years the apps just stop working (some of them).
It's so bad I switched back to Chrome. I had thought Chrome had a major battery life penalty compared to Safari on Macs, but I checked more up-to-date info and apparently that's outdated.
Also, FWIW: Hector/Lina is no longer associated with Asahi.
[1]: https://developer.apple.com/documentation/virtualization/usi...
https://cua.ai/docs/lume https://docs.openclaw.ai/channels/bluebubbles
Everyone hip, alright, or at least those who dream of earning a salary big enough to afford Apple taxes.
Remember there are world regions where developers barely make 1 000 euros per month.
Do you really need Openclaw now? And not Claude Code + Zapier or Claude Code + cron?
That's the point. If you have a worse CPU and GPU, Windows will be sluggish (it's bloated).
The 5090 is at 1,792GB/sec and potential M5 Ultra would be 1,230GB/sec and 512GB RAM. Maybe 1TB. Not 32.
It feels really stupid to have to throw away a perfectly capable machine with 64GB of RAM in 2026.
Searching for Chat yields "Ask ChatGPT", "ChatGPT Atlas", "ChatGPT Atlas" the website, and chatgpt.com. Does not yield the actual ChatGPT.app which I have currently open lol.
Win 11 is bad compared to Win 10 as well. I'm fairly new to Linux so I can't really form an opinion there.
It keeps nagging me to update to Tahoe.
Oh ... I just checked, and I could update to 14.8.4. Maybe that's safe.
Every modern desktop uses webviews in some capacity. macOS renders many apps with webviews, GNOME uses gjs to script half the desktop. The time to push back was 10-20 years ago, it's too late to revert now.
I responded to that with ample evidence that PCs deliver all of that and more.
Your response is to dismiss and insult my comment as being lengthy musings of a crazy person. You’re not reading “all that” just like Fox News isn’t going to put a socialist on the air without talking over them.
Dismissal is the stage you get to when logic and reasoning can’t sufficiently defend your point.
This is what Mac zealots do: they put blindfolds on and pretend like Mac hardware is above and beyond the laws of physics and PC laptops can’t possibly satisfy the same priorities that Macs do. Customer-hostile design is just a manifestation of consumer priority.
The US ones? Is that why we have DeepSeek and then other non-US open-source LLMs catching up rapidly?
World view please. The developer community is not US only.
I’ve had an M1 MacBook Pro with the Touch Bar since it came out. It’s crap. I remember the keynote where they introduced it and a DJ mixed music using it. It was ridiculous that it got approved.
Yeah because Mac upgrade prices were already sky high, long before the component shortage. 32GB of DDR5-6000 for a PC rocketed from $100 to $500, while the cost of adding 16GB to a Mac was and still is $400.
If you just need "a small box to make API calls and do minimal local processing", you can also just buy an RPi for a fraction of the price of the GMKtec G10.
All 3 serve a different purpose; just because you can buy a slower machine for less doesn't mean the price:performance of the M1 Mac Mini changes.
The Mac mini strangely is and has been a very good deal for years now.
I've been working my way up from a 3090 system and I've been surprised by how underwhelming even the finetunes are for complex coding tasks, once you've worked with Opus. Does it get better? As in, noticeably and not just "hallucinates a few minutes later than usual"?
2-3 years ago people were fantasizing on running local models on a consumer nvidia RTX GPU.
2) If there is a market to hedge/speculate, it will exist. Maybe not 'fab futures' but there is definitely one.
Assuming, of course, that your legal team signs off on their assurance not to train on or store your data with said Enterprise plans.
To be clear, I totally get the idea of running local LLMs for toy reasons. But in a business context the sell on a stack of Mac Pros seems misguided at best.
RDMA is the bare minimum we should expect from a system that doesn't support eGPUs and treats PCI like a foreign language. It's not a long-term solution and even Apple themselves cannot deny this: https://9to5mac.com/2026/03/02/some-apple-ai-servers-are-rep...
(It didn't help that they couldn't point to a single user facing feature.)
Or that the App Store lock-in is for our safety, when anyone who wanted that particular safety could choose to continue using their store exclusively.
Etc.
He just does not have it. No field. No spiraling eyes. Perhaps he should grow a beard and wave around a tobacco pipe. Works for some.
Because I have the problem on 7+ Macs (as in all mine, my kids', my sister's and my dad's (all of which I am primary tech support on)) where if I press ⌘+ to increase the font size on a website, it increases — and then immediately reverts back to the previous size.
Every single time. But only the first time. I just did it on this site to be sure it still happens.
Do it again, and it works.
It's been happening for at least one or two years, across more than one major OS upgrade. ¯\_(ಠ_ಠ)_/¯
I already left the beta train on my iPhone because I had too many issues getting my grocery apps to allow me to place orders without going to my laptop and doing it in a web browser.
Meanwhile on Windows major features like the Start menu are written in React.
Worth noting that WebKit webviews also tend to be more lightweight than their Chromium brethren.
I don't think gjs is a webview. It uses JavaScript, granted, but binds to a native toolkit, not to DOM and CSS.
Fortunately I just keep my laptop closed and use an attached display and keyboard and mouse, so I don't even remember if my M1 has a touch bar.
Also minor nit: it's seldom, not seldomly. Seldom certainly doesn't seem like an adverb, but it is.
Ok, actually you're right, that's a use case where I'll agree it's probably useful. If you're writing iOS applications it might be nice to run it in Simulator and be able to do gestures without having to offload to your physical device for testing.
A “slot” is the variable part of an intent. For instance, “I want directions to 555 MockingBird Lane” would trigger a Directions intent that requires where you are coming from and where you are going. Of course, in that case it would assume your current location as the starting point.
Back in the pre LLM days and the way that Siri still works, someone had to manually list all of the different “utterances” that should trigger the intent - “Take me to {x}”,”I want to go to {x}” in every supported language and then had to have follow up phrases if someone just said something like “I need directions” to ask them something like “Where are you trying to go”.
Now you can do that with an LLM and some prompting: the LLM will keep going back and forth until all of the slots are filled, then you tell it to create a JSON response once it has all of the information your API needs, and you call your API.
This is what a prompt would look like to use a book-a-flight tool.
https://chatgpt.com/share/69a7d19f-494c-8010-8e9e-4e450f0bf0...
You also get the benefit that this works in any language, not just English.
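Not the linked prompt, but a rough sketch of the general shape in Python: a tool definition plus a system prompt that tells the model to keep asking until every slot is filled. The tool name, fields, and wording are made up for illustration.

    # hypothetical "book a flight" tool definition for an LLM API with tool calling;
    # the field names and descriptions are illustrative, not any vendor's real schema
    book_flight_tool = {
        "name": "book_flight",
        "description": "Book a flight once every slot below is known.",
        "input_schema": {
            "type": "object",
            "properties": {
                "origin":      {"type": "string", "description": "departure city"},
                "destination": {"type": "string", "description": "arrival city"},
                "date":        {"type": "string", "description": "ISO 8601 date"},
            },
            "required": ["origin", "destination", "date"],
        },
    }

    SYSTEM_PROMPT = (
        "You are a booking assistant. Ask follow-up questions until origin, "
        "destination, and date are all known, then call book_flight exactly once "
        "with a JSON object containing those slots."
    )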
Musk is probably closest, but he’s become so involved in partisan politics it makes his field far less effective at distorting reality.
Absolutely. Why are all the buttons centred on the taskbar in Windows 11? It violates so many design rules. They took literally the worst part of macOS, and it contradicts the other reasons for the design. Throwing the mouse into the corner for the Start button no longer works. I could go on.
> I'm fairly new to Linux so I can't really form an opinion there.
Gnome is great if you want something that gets out of your way. Some folks lament that it's not as feature-rich in the UI as KDE, but for me that's a bonus: the minimal UI, combined with a focus on things like better mixed-monitor scaling. Love it.
KDE is extremely flexible, and featureful. You don't like the Windows default look and feel, make it a dock. Make it similar to Windows 8. Go wild. Not my thing these days but I can completely understand the draw to not be beholden to other peoples design choices if they don't fit your style.
I haven't used XFCE for a long time, as it didn't keep up with my high resolution monitors. But it was fast and flexible, and I hear that they are addressing this stuff now.
i3 was great. I drifted away during the great Wayland migration when I had to upgrade my laptop, found a bunch of neat updates to Gnome for my hardware, and just haven't found the time to return.
But the main point is that you are not forced into any one person/corporate point of view.
The same trend is visible in GPUs: for example, my RTX 2070 (GDDR6) has the same memory bandwidth as a 3070 and only a little bit less than a 4070 (GDDR6X). However, a 5070 does get significantly more bandwidth due to the jump to GDDR7. Lower-end cards like the 4060 even stuck to GDDR6, which gave them a bandwidth deficit compared to a 3060 due to the narrower memory buses on the 40 series.
It wouldn’t surprise me if the DeepSeek people were primarily using Macs. Maybe Alibaba might be using PCs? I’m not sure.
And this is what people with some kind of irrational obsession with hating Apple do: they work themselves up into some kind of fever pitch because other people have different priorities and choose a different computer.
Enjoy your HP. Or your Lenovo. It's a bit hard to keep up with which one it is you want.
But if the contract was for a specific amount of RAM and then people start coming to Apple more for high RAM machines, they're going to exhaust their contract sooner than usual and run out of cheap memory to buy. Then they have to decide if they want to lower their margins or raise the already-high price up to nosebleed levels.
lol. you need to look at rpi 5 prices again. they are insane.
With Anthropic you're paying for "more tokens than the free plan" which has no meaning
With some custom tooling, we built our own local enterprise setup:
- Support ticketing system
- Custom chat support powered by our trained software-support model
- Resolved repository with detailed step-by-step instructions
- User-created reports and queries
- Natural language-driven report generation (my favorite — no more dragging filters into the builder; our (Secret) local model handles it for clients)
- In-application tools (C#/SQL/ASP.NET) to support users directly, since our software runs on-site and offline due to PPI
- A cool repair tool: import/export “support file packet patcher” that lets us push fixes live to all clients or target niche cases
Qwen3 with LoRA fine-tuning is also incredible — we’re already seeing great results training our own models.
There’s a growing group pushing K2.5s to run on consumer PCs (with 32GB RAM + at least 9GB VRAM) — and it’s looking very promising. If this works, we’ll be retooling everything: our apps and in-house programs. Exciting times ahead!
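For anyone wanting a starting point on the LoRA side, a minimal setup with Hugging Face PEFT looks roughly like this (the model id and hyperparameters are placeholders, not what the parent poster used):

    # minimal sketch: attach LoRA adapters to a Qwen3 checkpoint with PEFT
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import LoraConfig, get_peft_model

    base = "Qwen/Qwen3-8B"                       # placeholder model id
    tokenizer = AutoTokenizer.from_pretrained(base)
    model = AutoModelForCausalLM.from_pretrained(base)

    lora = LoraConfig(
        r=16,                                    # adapter rank
        lora_alpha=32,
        lora_dropout=0.05,
        target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, lora)
    model.print_trainable_parameters()           # tiny fraction of the base model
    # ...then train with your usual Trainer / SFT loop on the support data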
Sadly not really. The Pi 5 8gb canakit starter set, which feels like a more true price since it's including power supply, MicroSD card, and case, is now $210. The pi5 8gb by itself is $135.
A 16gb pi5 kit, to match just the RAM capacity to say nothing of the difference in storage {size, speed, quality} and networking, is then also an eye watering $300
It is the first local model I've tried that could reason properly, similar to Gemini 2.5 or Sonnet 3.5. I gave it some tools to call and asked Claude to order it around (download quotes, print charts, set up a GNOME extension); even Claude was sort of impressed that it could get the job done.
Point is, it is really close. It isn't Opus 4.5 yet, but it's very promising given the size. Local is definitely getting there, even without GPUs.
But you're right, I see no reason to spend right now.
He can do and say a lot of shit because he will still be viewed as real-life Iron Man, because in some ways he kind of is.
He doesn't have a RDF but has Kardashev Scale Intent (KSI).
The lobbyists in the political fray are out to steal his value for money lunch despite his demonstrated effectiveness, over and over again.
Jobs couldn't even engage the politicians to give away or at discount the Apple ][ to education.
I should clarify that by small I mean in the 3-8B range. I haven't tested the 14-30B ones, my experience is only about the smaller ones.
In my experience, small models are not good for coding (except very basic tasks), they're not good for general knowledge. So the only purpose I could see for them would be, when they're given the information, i.e. summarization or RAG.
But in my summarization experiments, they consistently misunderstood the information given to them. They constantly made basic errors and failed to understand the text.
So having eliminated programming, general knowledge, summarization and (by extension, RAG, because if you can't understand the information, then you can't do RAG either, by definition) -- I have eliminated all the use cases that I had in mind!
That would leave very basic tasks like classification or keywords, but I think there they would be in the awkward middle ground of being disappointing relative to big LLMs for many tasks, and cumbersome relative to small specialized models which can run fast and cheap and be fine tuned.
My portfolio appreciates you.
Apple has accepted a 100% price increase for Samsung's LPDDR5X memory, with DRAM supply commitments secured only through the first half of 2026. Tim Cook acknowledged during the Q1 FY2026 earnings call that storage price increases would significantly impact Q2 gross margins. Apple is evaluating ChangXin Memory Technologies (CXMT) and Yangtze Memory Technologies (YMTC) as new supply sources, attempting to rebuild pricing leverage through supply chain diversification.
Unfortunately, I don't know that Linux handles the bespoke 5K graphics. Moreover, our corp Linux distribution is only certified for particular devices. Even if the screen worked, you wouldn't be allowed on the network, which is the whole problem with Intel support being dropped in the first place.
For a different opinion, please see https://woltman.com/gnome-bad/
GNOME is extremely opinionated.
At that point buy a used macbook air m1.
I have a Firefox window with 3500 tabs right now.
Using LLMs for voice assistants is relatively new at scale; that’s the difference between Alexa and Alexa+, the Gemini-powered Google Assistant, and what Apple has been trying to do with Siri for two years.
It’s really just using LLMs for tool calling. It’s just that call centers were mostly built before the age of LLMs and companies are slow to update.
oh no
(tbh surprisingly few references to Apple otherwise)
The session so far is stored in a file like /tmp/s.json as a messages array. Claude reads that file, appends its response/query, sends it to the API and reads the response.
I simply wrapped this process in a python script and added tool calling as well. Tools run on the client side. If you have Claude, just paste this in :-)
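A stripped-down sketch of that kind of wrapper, minus the tool calling, might look like this (the session path follows the comment above; the model id is a placeholder):

    # minimal sketch of the /tmp/s.json loop described above
    import json, os, anthropic

    SESSION = "/tmp/s.json"
    client = anthropic.Anthropic()               # uses ANTHROPIC_API_KEY from the env

    def chat(user_text: str) -> str:
        messages = json.load(open(SESSION)) if os.path.exists(SESSION) else []
        messages.append({"role": "user", "content": user_text})
        reply = client.messages.create(
            model="claude-sonnet-4-5",           # placeholder model id
            max_tokens=1024,
            messages=messages,
        )
        text = "".join(b.text for b in reply.content if b.type == "text")
        messages.append({"role": "assistant", "content": text})
        with open(SESSION, "w") as f:
            json.dump(messages, f, indent=2)     # persist the running session
        return text

    if __name__ == "__main__":
        import sys
        print(chat(" ".join(sys.argv[1:])))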
Yep, I know it is opinionated and I really like a lot of their decisions. Most of what he says in that piece is "it doesn't clone Windows, therefore it breaks my muscle memory". That's his opinion, and it isn't the same as mine.
But the best part is that it's optional.
For cables, my iMac had an opening in the back for RAM sticks, which I popped out and wired all cables through. I mounted the driver board on a piece of plexiglass so that most of the ports are accessible directly to the RAM opening. For power, I use a regular third party power brick I had laying around, though some people have reused the iMac’s original power cable with an internal power supply.
Honestly, the hardest parts were identifying the correct driver board and gluing the front glass back on after assembly.
Strange, I've had the GNOME conversation with three people and all of them brought up that it's "optional." Strangely coincidental.
The network connection isn't the main problem, it's every access to a protected system that would no longer trust the device.
I mean, I can understand defense in depth and not wanting a possibly unsafe device connected to the corp network anyway, since it might still expose some unwanted data (e.g. I imagine a trusted device on the corporate LAN might relax some local firewall rules to make development easier? I'm just guessing, no real idea).