They suck for large sequential file access, but incredible for small random access: databases.
And if no shrink was possible, is that because it was (a) possible but too hard; (b) known blocks to a die shrink; or (c) execs didn't want to pay to find out?
In an era of RAM shortages and quarterly price increases, Optane remains viable for swap and CPU/GPU cache.
https://pcper.com/2017/06/how-3d-xpoint-phase-change-memory-...
Looking at those charts, besides the DWPD it feels like normal NVMe has mostly caught up. I occasionally wonder where a gen 7/8(?) Optane would be today if it had caught on; it'd probably be nuts.
It seems like there's a very small window, commercially, for new persistent memories. Flash throughput scales really cost-efficiently, and a lot is already built around dealing with the tens-of-microseconds latencies (or worse--networked block storage!). Read latencies you can cache your way out of, and writers can either accept commit latency or play it a little fast and loose (count a replicated write as safe enough or...just not be safe). You have to improve on Flash by enough to make it worth the leap while remaining cheaper than other approaches to the same problem, and you have to be confident enough in pulling it off to invest a ton up front. Not easy!
https://goughlui.com/2024/07/28/tech-flashback-intel-optane-...
The SSD form factor wasn’t any faster at writes than NAND + capacitor-backed power loss protection. The read path was faster, but only in time to first byte. NAND had comparable / better throughput. I forget where the cutoff was, but I think it was less than 4-16KB, which are typical database read sizes.
So, the DIMMs were unprogrammable, and the SSDs had a “sometimes faster, but it depends” performance story.
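The cutoff the parent describes can be sketched as simple transfer-time arithmetic: total time is time-to-first-byte plus size divided by bandwidth. The latency and bandwidth figures below are illustrative assumptions, not measured values.

```python
# Model: total transfer time = time-to-first-byte + size / bandwidth.
# All numbers are rough assumptions for illustration only.
def xfer_us(size_bytes, ttfb_us, bw_bytes_per_us):
    return ttfb_us + size_bytes / bw_bytes_per_us

def optane_us(size):  # assume ~10 us TTFB, ~2.5 GB/s (2500 bytes/us)
    return xfer_us(size, 10, 2500)

def nand_us(size):    # assume ~80 us TTFB, ~7 GB/s (7000 bytes/us)
    return xfer_us(size, 80, 7000)

for size in (4096, 16384, 1 << 20):
    winner = "optane" if optane_us(size) < nand_us(size) else "nand"
    print(size, winner)
```

With these assumed numbers, Optane's latency edge wins for small reads while NAND's bandwidth wins past a crossover of a couple hundred KB, which matches the "cutoff" intuition above.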
Worth noting though that Optane is also power-hungry for writes compared to NAND. Even when it was current, people noticed this. It's a blocker for many otherwise-plausible use cases, especially re: modern large-scale AI where power is a key consideration.
1. “Optane” in DIMM form factor. This targeted (I think) two markets. First, use as slower but cheaper and higher density volatile RAM. There was actual demand — various caching workloads, for example, wanted hundreds of GB or even multiple TB in one server, and Optane was a route to get there. But the machines and DIMMs never really became available. Then there was the idea of using Optane DIMMs as persistent storage. This was always tricky because the DDR interface wasn’t meant for this, and Intel also seems to have a lot of legacy tech in the way (their caching system and memory controller) and, for whatever reason, they seem to be barely capable of improving their own technology. They had multiple serious false starts in the space (a power-supply-early-warning scheme using NMI or MCE to idle the system, a horrible platform-specific register to poke to ask the memory controller to kindly flush itself, and the stillborn PCOMMIT instruction).
2. Very nice NVMe devices. I think this was more of a failure of marketing. If they had marketed a line of SSDs that, coupled with an appropriate filesystem, could give 99th-percentile fsync latency of 5 microseconds, I bet people would have paid. But they did nothing of the sort — instead they just threw around the term “Optane” inconsistently.
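A sketch of how one might measure that tail-latency number: time a stream of tiny durable appends and take the 99th percentile. The payload size, iteration count, and file location are arbitrary choices for illustration.

```python
import os
import tempfile
import time

# Hypothetical micro-benchmark for p99 durable-write latency.
N = 500
payload = b"x" * 64  # a tiny commit record; size is arbitrary
fd, path = tempfile.mkstemp()
samples = []
for _ in range(N):
    t0 = time.perf_counter()
    os.write(fd, payload)
    os.fsync(fd)  # force the write to stable media (or PLP-backed cache)
    samples.append((time.perf_counter() - t0) * 1e6)  # microseconds
os.close(fd)
os.remove(path)
samples.sort()
p99_us = samples[int(0.99 * N) - 1]
print(f"p99 fsync latency: {p99_us:.1f} us")
```

On a NAND drive without power-loss protection this number is typically hundreds of microseconds to milliseconds; the claim above is that Optane could have sold itself on single-digit results here.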
These days one could build a PCM-backed CXL-connected memory mapped drive, and the performance might be awesome. Heck, I bet it wouldn’t be too hard to get a GPU to stream weights directly off such a device at NVLink-like speeds. Maybe Intel should try it.
It isn't weird at all. I would have been surprised if it had ever succeeded in the first place.
Cost was way too high. Intel didn't share the tech with anyone other than Micron, and Micron wasn't committed to it either; since Intel paid for unused capacity at the fab regardless, Micron didn't care. There was no long-term solution or strategy to bring cost down, and neither Intel nor Micron had a vision for it. No one wanted another Intel-only tech lock-in. And despite the high price, it barely made any profit per unit compared to NAND and DRAM, which were at the time making historically high profits. Once the NAND and DRAM cycle turned down again, Optane's cost/performance wasn't as attractive. Samsung even made a form of SLC NAND that performed similarly to Optane but cheaper, and even they ended up stopping development due to lack of interest.
The SSDs were never going to be dominant at straight read or write workloads, but they were absolutely king of the hill at mixed workloads because, as you note, time to first byte was so low that they switched between read and write faster than anything short of DRAM. This was really, really useful for a lot of workloads, but benchmarkers rarely bothered to look at this corner... despite it being, say, the exact workload of an OS boot drive.
For years there was nothing that could touch them in that corner (OS drive, swap drive, etc) and to this day it's unclear if the best modern drives still can or can't compete.
You're looking at the entirely wrong kind of shrinking. Hard drives are still (gradually) improving storage density: the physical size of a byte on a platter does go down over time.
Optane's memory cells had little or no room for shrinking, and Optane lacked 3D NAND's ability to add more layers with only a small cost increase.
There are very few applications that benefit from such low latency, and if using it means leaving the standard path (easy, but slow, expensive, and automatically backed up), people will pick the ease.
Having the best technology performance is not enough to achieve product-market fit. The execution required from Intel's executives was far beyond their capability. They developed a platform and wanted others to do the work of building all the applications. Without that starting killer app, there wasn't enough adoption to build an ecosystem.
The read path is sort of a wash, but writes are still unequalled. NAND writes feel like you're mailing a letter to the floating gate...
So what you mean is that on the most important metric of them all for many workloads, Flash-based NVMe has not caught up at all. When you run a write heavy workload on storage with a limited DWPD (including heavy swapping from RAM) higher performance actually hurts your durability.
Flash is no bueno for write-heavy workloads, and the random-access R/W performance is meh compared to Optane. MLC and SLC have better durability and performance, but still very mid.
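A rough sense of why limited DWPD bites write-heavy workloads: the rated daily write budget can be exhausted in minutes at full drive speed. All figures below are illustrative assumptions, not any particular drive's specs.

```python
# Back-of-envelope: how fast a sustained write stream exhausts a
# drive's rated daily write budget. All numbers are illustrative.
capacity_tb = 1.0
dwpd = 0.3                    # a typical TLC consumer-class rating
budget_gb_per_day = dwpd * capacity_tb * 1000   # 300 GB/day
write_rate_gb_s = 1.0         # a fast NVMe can sustain roughly this
seconds_of_full_speed = budget_gb_per_day / write_rate_gb_s
print(seconds_of_full_speed)  # 300.0: five minutes of flat-out writing per day
```

So a drive that can absorb a gigabyte per second is only rated to do so for a few minutes a day, which is why faster flash can paradoxically mean worse durability under swap or log-heavy loads.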
Power failure can happen in between any of the "1 byte updates with crazy latencies." However small the latency is, power failure is still faster. Usually there is a write-ahead or some other log that alleviates the problem; this log is usually written in streaming fashion.
What is good, though, is that the "blast radius" [1] of failure is smaller than usual - a failed one-byte write rarely corrupts more than one byte or cache line. SQLite has to deal with possible corruptions 512 bytes (or even more) long on most disks; with Optane that is not necessarily so. So, less data to copy, scan, etc.
"Just make it a faster SSD" was never a business. The DIMMs were weird, sure, but the bigger issue was that Optane made the most sense when software treated storage and memory as one tier, and almost nobody was going to rewrite kernels, DBs, and apps for a product that cost more than flash and solved pain most buyers barely felt.
This showed up as amazing numbers on a 50%-read, 50%-write mix. Which, guess what, a lot of real workloads have, but benchmarks don't often cover well. This is why it's a great OS boot drive: there's so much cruddy logging going on (writes) at the same time as reads to actually load the OS. So Optane was king there.
For databases, where you do lots of small scattered writes, and lots of small overwrites to the tail of the log, modern SSDs coalesce writes in that buffer, greatly reducing write wear, and allowing the effective write bandwidth to exceed the media write bandwidth.
These schemes are much less expensive than optane.
That was never going to work out. Adding an entirely new kind of memory to your storage stack was never going to be easier or cheaper than adding a few large capacitors to the drive so it could save the contents of the DRAM that the SSD still needed whether or not there was Optane in the picture.
Late last year I switched from a 1.5tb Optane 905P to a 4tb WD Blue SN5000 NVMe drive in a gaming machine and saw improved load times, which makes sense given the read and write speeds are ~double. No observable difference otherwise.
I'm sure that's not the use case you were looking for. I could probably tease out the difference in latency with benchmarks but that's not how I use the computer.
The 905P is now in service as an SSD cache for a large media server and that came with a big performance boost but the baseline I'm comparing to is just spinning drives.
I believe Optane retained a performance advantage (and I think even today it's still faster than the best SSDs) but SSDs remain good enough and fast enough while being a lot cheaper.
The ideal usage of optane was as a ZIL in ZFS.
Basically any RDBMS? MySQL and Postgres both benefit from high performance storage, but too many customers have moved into the cloud where you can’t get NVMe-like performance for durable storage for anything remotely close to a worthwhile price.
If CXL was around at the time it would have been such a nice fit, allowing for much lower latency access.
It also seems like, in spite of the bad fit, there were enough regular Optane drives, and they were indeed pretty incredible. Good endurance, reasonable price (and cheap as dirt if you consider the endurance/lifecycle cost!), some just fantastic performance figures. My conclusion is that alas there just aren't many people in the world who are serious about storage performance.
It's hard to tell you, because it's subjective; I don't swap back and forth between an SSD and the Optane drives. My old system has a 2TB Samsung 980 Pro NVMe drive (PCIe 4.0 x4, or 8GB/s max) as root and a 4TB Sabrent Rocket 4 Plus as secondary (also PCIe 4.0), so I ran sysbench on both systems so I could share the differences. (Old system 5950X, new system 9950X3D.)
It feels snappier, especially when doing compilations...
Sequential reads: I started with a 150GB fileset, but it was being served by the kernel cache on my newer system (256GB RAM vs 128GB on the old), so I switched to use 300GB of data, and the optanes gave me 5000 MiB/s for sequential read as opposed to 2800 MiB/s for the 980 Pro, and 4340 MiB/s for the Rocket 4 Plus.
Random writes alone (no read workload): the Optane system gets 2184 MiB/s, the 980 Pro gets 32 MiB/s, and the Rocket 4 Plus gets 53 MiB/s.
Mixed workload (random read/write): the Optanes get 725/483 MiB/s as opposed to 9/6 for the 980 Pro, and 42/28 for the Rocket 4 Plus.
2x1.5TB Optane Raid0:
Prep time:
`sysbench fileio --file-total-size=150G prepare`
161061273600 bytes written in 50.41 seconds (3047.27 MiB/sec).
Benchmark:
`sysbench fileio --file-total-size=150G --file-test-mode=rndrw --max-time=60 --max-requests=0 run`
WARNING: --max-time is deprecated, use --time instead
sysbench 1.0.20 (using system LuaJIT 2.1.1741730670)
Running the test with following options:
Number of threads: 1
Initializing random number generator from current time
Extra file open flags: (none)
128 files, 1.1719GiB each
150GiB total file size
Block size 16KiB
Number of IO requests: 0
Read/Write ratio for combined random IO test: 1.50
Periodic FSYNC enabled, calling fsync() each 100 requests.
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random r/w test
Initializing worker threads...
Threads started!
File operations:
reads/s: 46421.95
writes/s: 30947.96
fsyncs/s: 99034.84
Throughput:
read, MiB/s: 725.34
written, MiB/s: 483.56
General statistics:
total time: 60.0005s
total number of events: 10584397
Latency (ms):
min: 0.00
avg: 0.01
max: 1.32
95th percentile: 0.03
sum: 58687.09
Threads fairness:
events (avg/stddev): 10584397.0000/0.00
execution time (avg/stddev): 58.6871/0.00
2TB Nand Samsung 980 Pro:
Prep time:
`sysbench fileio --file-total-size=150G prepare`
161061273600 bytes written in 87.15 seconds (1762.53 MiB/sec).
Benchmark:
`sysbench fileio --file-total-size=150G --file-test-mode=rndrw --max-time=60 --max-requests=0 run`
WARNING: --max-time is deprecated, use --time instead
sysbench 1.0.20 (using system LuaJIT 2.1.1741730670)
Running the test with following options:
Number of threads: 1
Initializing random number generator from current time
Extra file open flags: (none)
128 files, 1.1719GiB each
150GiB total file size
Block size 16KiB
Number of IO requests: 0
Read/Write ratio for combined random IO test: 1.50
Periodic FSYNC enabled, calling fsync() each 100 requests.
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random r/w test
Initializing worker threads...
Threads started!
File operations:
reads/s: 594.34
writes/s: 396.23
fsyncs/s: 1268.87
Throughput:
read, MiB/s: 9.29
written, MiB/s: 6.19
General statistics:
total time: 60.0662s
total number of events: 135589
Latency (ms):
min: 0.00
avg: 0.44
max: 15.35
95th percentile: 1.73
sum: 59972.76
Threads fairness:
events (avg/stddev): 135589.0000/0.00
execution time (avg/stddev): 59.9728/0.00
4TB Sabrent Rocket 4 Plus:
Prep time:
`sysbench fileio --file-total-size=300G prepare`
322122547200 bytes written in 152.39 seconds (2015.92 MiB/sec).
Benchmark:
`sysbench fileio --file-total-size=300G --file-test-mode=rndrw --max-time=60 --max-requests=0 run`
WARNING: --max-time is deprecated, use --time instead
sysbench 1.0.20 (using system LuaJIT 2.1.1741730670)
Running the test with following options:
Number of threads: 1
Initializing random number generator from current time
Extra file open flags: (none)
128 files, 2.3438GiB each
300GiB total file size
Block size 16KiB
Number of IO requests: 0
Read/Write ratio for combined random IO test: 1.50
Periodic FSYNC enabled, calling fsync() each 100 requests.
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random r/w test
Initializing worker threads...
Threads started!
File operations:
reads/s: 2690.28
writes/s: 1793.52
fsyncs/s: 5740.92
Throughput:
read, MiB/s: 42.04
written, MiB/s: 28.02
General statistics:
total time: 60.0155s
total number of events: 613520
Latency (ms):
min: 0.00
avg: 0.10
max: 8.22
95th percentile: 0.32
sum: 59887.69
Threads fairness:
events (avg/stddev): 613520.0000/0.00
execution time (avg/stddev): 59.8877/0.00

Somewhere I still have some actual battery-backed DIMMs (DRAM plus FPGA interposer plus awkward little supercapacitor bundle) in a drawer. They were not made by Intel, but Intel was clearly using them as a stepping stone toward the broader NVDIMM ecosystem. They worked on exactly one SuperMicro board, kind of, and not at all if you booted using UEFI. Rebooting without doing the magic handshake over SMBUS [0] first took something like 15 minutes, which was not good for those nines of availability.
[0] You can find my SMBUS host driver for exactly this purpose on the LKML archives. It was never merged, in part, because no one could ever get all the teams involved in the Xeon memory controller to reach any sort of agreement as to who owned the bus or how the OS was supposed to communicate without, say, defeating platform thermal management or causing the refresh interval to get out of sync with the DIMM temperature, thus causing corruption.
I’m suspicious that everything involved in Optane development was like this.
It's so incredibly fast and responsive that the LuCI interface completely loads the moment I hit enter on the login form.
https://forums.servethehome.com/index.php?threads/so-i-teste...
There was certainly a time when it seemed they were shopping for engineers' opinions on what to do with it, but I think they quickly determined it would be a much smaller market than SSDs anyway and didn't end up pushing on it too hard. I could be wrong though; it's a big company and my corner was manufacturing, not product development.
For a lot of bulk storage, yes, you don't have frequently changing data. But for databases or caches under heavy load, Optane was not only far faster but, looking at life-cycle costs, way cheaper.
It was also the best boot drive money could buy. Still is, I think, though other comments in the thread ask how it compares against today's best, which I'd also love to see.
Isn't that actually crazy good, even insane value for the performance and DWPD you get with Optane, especially with DRAM being ~$15/GB or so? I don't think ~$1/GB NAND is anywhere that good on durability, even if the raw performance is quite possibly higher.
I have to wonder if it isn't usable for some kind of specialized AI workflow that would benefit from extremely low-latency reads but which isn't written often, at this point. Perhaps integrated on a GPU board.
The niche that could actually make use of Optane's endurance was small and shrinking, and Intel had no roadmap to significantly improve Optane's $/GB which was unquestionably the technology's biggest weakness.
https://pcpartpicker.com/forums/topic/425127-benchmarking-op...
You can compare their benchmarks with the other almost 400 SSDs we've benchmarked. Most impressive is that three years later they are still the top random read QD1 performers, with no traditional flash SSD coming anywhere close:
https://pcpartpicker.com/products/internal-hard-drive/benchm...
They are amazing for how consistent and boring their performance is. Bit level access means no need for TRIM or garbage collection, performance doesn't degrade over time, latency is great, and random IO is not problematic.
Intel's got an amazing record of axing projects as soon as they've done the hard work of building an ecosystem.
There were/are often projects that come down from management that nobody thinks are worth pursuing. When I say nobody, it might not be just engineers but even, say, 1 or 2 people in management who just do a shit rollout. There are a lot of layers at Intel, and if even one layer in the Intel sandwich drags their feet it can kill an entire project. I saw it happen a few times in my time there. That one specific node that Intel dropped the ball on kind of came back to 2-3 people in one specific department, as an example.
Optane was a minute before I got there, but having been excited about it at the time and somewhat following it, that's the vibe I get from Optane. It had a lot of potential but someone screwed it up and it killed the momentum.
Which “Optane memory”? The NVMe product always worked on non-Intel. The NVDIMM products that I played with only ever worked on a very small set of rather specialized Intel platforms. I bet AMD could have supported them about as easily as Intel, and Intel barely ever managed to support them.
And their whole deal was making RAM persistent anyway, which isn't exactly what I want.
Once in a while new hardware is released that makes a difference. Such a device is the Intel Optane series of high-performance SSDs for professional use, released in late 2017. In this case I'm talking about the Intel Optane P4800X and P5800X and their consumer counterparts (900P and 905P). All drives are based on the 3D XPoint technology that Intel co-developed with Micron.
In contrast to regular SSDs, Optane drives like the P5800X bring ultra-low latency, high durability and high performance to the table. Effectively, Optane is a technology that has aspects of both DRAM and regular NAND based flash. The downsides of Optane are the high cost and relatively low capacity. Combined with the high innovation rate of NAND SSDs and Compute eXpress Link (CXL) around the corner, there was little reason for most companies to switch to this technology.
Finally, Intel decided to stop developing this technology in July 2022 as part of its IDM 2.0 company strategy. In fact, Intel stopped all of its flash storage based activities. This does not mean the drives aren't for sale anymore.
Current Optane based products (in SSD and DIMM form) are still being sold. At the beginning of this year, a new Optane Persistent Memory NV-DIMM series 300 (also called PMEM) was even released. This new Optane release was needed for the 4th generation of Intel Scalable CPUs, code name Sapphire Rapids, released in January 2023.
ServeTheHome made a great video about Optane Persistent Memory if you’re not familiar with it.
As part of the VMware vExpert program, I had the opportunity to test a couple of Intel Optane P4800X drives, which I had wanted to get my hands on for quite some time, being a (hardware) techie. Many thanks to Corey Romero, Matt Mancini, Simon Todd and the Intel Business and Storage BU for making this possible.
| | Optane P4800X (1st Gen) | Optane P5800X (2nd Gen) |
| --- | --- | --- |
| Capacity | 375 GB – 1.5 TB | 400 GB – 3.2 TB |
| Release date | Q3 2017 | Q4 2020 |
| PCIe version | PCIe 3.0 (NVMe) | PCIe 4.0 (NVMe) |
| Sequential Read | 2500 MB/s | 7200 MB/s |
| Sequential Write | 2200 MB/s | 6200 MB/s |
| Read IOPS (4K) | 550,000 | 1,500,000 |
| Write IOPS (4K) | 500,000 | 1,500,000 |
| Durability (DWPD) | 30 (Write Intensive) | 100 (Write Intensive) |
The question arises: what makes Optane drives special compared to NAND based SSDs? For those not familiar with the topic, NAND refers to the type of flash chips used on an SSD. Pretty much all (non-Optane) SSDs are NAND based.
The answer to why Optane has advantages over NAND based SSDs can be explained by a couple of its qualities.
In general, the durability of an SSD is an important rating, because it shows how much data can be written to the device during the warranty period. As a rule, the cheaper the SSD, the less data can be written.
For example, when an SSD only has 3 instead of 5 years of warranty, 40% fewer writes are supported. That's the easiest and cheapest way for manufacturers to "raise" durability.
Compared to NAND based SSDs, durability is where an Optane drive really shines. So, how does it compare to other professional and consumer drives?
| SSD Type | Power-loss Protection | Durability (DWPD) | Warranty |
| --- | --- | --- | --- |
| QLC based consumer | No | 0.1 | 3y |
| TLC based consumer | No | 0.2 – 0.35 | 3-5y |
| TLC based prosumer | Yes | 0.3 – 0.35 | 5y |
| TLC based professional (Read Intensive) | Yes | 1 | 5y |
| TLC based professional (Mixed Use) | Yes | 3 | 5y |
| MLC / TLC based professional (Write Intensive) | Yes | 10 | 5y |
| Optane P4800X (1st Gen.) | Yes | 30 | 5y |
| Optane P5800X (2nd Gen.) | Yes | 100 | 5y |
As can be seen in the table above, Optane is the SSD of choice for high-write environments.
For a more detailed explanation of SSD durability, check “Comparing Wear Figures on SSDs” blog by Jim Handy (a.k.a. The SSD Guy) to learn more about the numbers that manufacturers throw at you regarding the durability of their drives.
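DWPD converts to total bytes written (TBW) with simple arithmetic: multiply the daily writes by capacity, days per year, and warranty years. The sketch below uses the spec-sheet figures quoted above plus an illustrative consumer-drive comparison.

```python
def tbw_tb(dwpd, capacity_tb, warranty_years):
    """Total terabytes writable over the warranty period."""
    return dwpd * capacity_tb * 365 * warranty_years

# P4800X, 1.5 TB at 30 DWPD over its 5-year warranty:
print(tbw_tb(30, 1.5, 5))   # 82125.0 TB, i.e. roughly 82 PB
# An illustrative 1 TB TLC consumer drive at 0.3 DWPD over 3 years:
print(tbw_tb(0.3, 1.0, 3))
```

Roughly 82 PB of rated writes for the 1.5 TB P4800X versus a few hundred TB for a typical consumer drive: a difference of more than two orders of magnitude.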
Relevant durability terms are:
In the end all the terms relate to each other, as the diagram below nicely shows.
A second aspect that contributes to durability is data consistency. Professional SSDs (and some consumer ones) have power-loss protection (PLP). PLP is an often overlooked feature of an SSD which ensures that pending data is safely stored before the device completely loses power:
PLP protects against:
PLP can be implemented in hardware, by adding an array of capacitors to the device, or in firmware; the hardware implementation is preferred. For more info see the link "A Closer Look At SSD Power Loss Protection" from Kingston Memory.
All professional grade SSDs I've worked with (including Optane) have PLP. Be sure to check your drives for hardware PLP. On the **Intel ARK product specification site**, hardware PLP for Optane drives translates to the "Enhanced Power Loss Data Protection" feature.
Relevant section of P5800X Spec sheet.
Performance of an SSD has 2 important aspects: latency and write consistency.
Latency is where Optane has a clear advantage over NAND SSDs. Latency is the time the device needs to pull the requested data off its flash chips and send it to the CPU for processing.
Let's compare Optane drives with current-gen professional NAND based ones. It does not make sense to perform the test myself, since the pros at Storagereview.com have already done that.
The tables above show it all. IOPS-wise, Optane is up to par with the competition, but the drive shines when it comes down to latency. It's just astonishing: around 25 microseconds (us) per 4K random read for Optane versus 90 to 110 us for NAND based drives. Putting it in perspective, every small read on an Optane drive is more than 300% faster. Even up to 1.3 million IOPS on a single drive.
This means that data reaches the CPU faster, but it also lowers the load on the CPU because the CPU simply spends less time waiting.
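At queue depth 1 the link between latency and IOPS is direct: throughput is just the reciprocal of per-request latency. A tiny sketch using the approximate latencies above makes that concrete.

```python
# QD1 IOPS ceiling implied by per-request latency.
def qd1_iops(latency_us):
    return 1_000_000 / latency_us

print(qd1_iops(25))   # 40000.0  (~25 us Optane random read)
print(qd1_iops(100))  # 10000.0  (~100 us NAND random read)
```

This is why latency, not peak parallel IOPS, dominates for single-threaded or lightly-queued workloads: the drive with the lower latency wins regardless of how high the other drive's QD32 numbers are.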
An Optane drive delivers its maximum write performance in a consistent way; you could say it's a form of QoS. That's not the case for NAND based drives.
A normal NAND based SSD often has a DRAM cache or a small pool of extra fast flash. This is because NAND SSDs can only write data to empty 4K pages, so empty pages must be available for optimal write performance.
Due to its nature, erasing a NAND page takes some time. When most pages are in use or have not been emptied yet (by the garbage collection process), writes are cached in the DRAM and/or the small pool of extra fast flash. When that also fills up, write performance degrades so the drive can keep up with garbage collection. In other words, writes are throttled until enough empty pages are available.
Optane does not work this way, since it is byte addressable rather than written per 4K page. Secondly, data can be overwritten directly on Optane devices. Due to these two characteristics, writes always proceed at maximum performance, even under continuous heavy write loads.
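The byte-addressability difference can be expressed as write amplification: NAND must program a whole page for any update, while byte-addressable media touches only the changed bytes. The 64-byte update size below is an arbitrary example.

```python
# Write amplification for a small in-place update (illustrative sizes).
update_bytes = 64
nand_page_bytes = 4096   # NAND programs whole pages

nand_amplification = nand_page_bytes / update_bytes        # 64.0x
byte_addressable_amplification = update_bytes / update_bytes  # 1.0x

print(nand_amplification, byte_addressable_amplification)  # 64.0 1.0
```

And this understates the NAND side, since garbage collection later rewrites live data from partially-stale blocks, adding further amplification that byte-addressable media avoids entirely.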
For the reasons above (low latency, consistent write performance, high IOPS and high endurance), the Optane drives are especially suited for:
The Intel Optane drives are well suited for many use cases, especially those that require (consistent) low latency and high IOPS combined with high endurance. On the other hand, NAND type SSDs are still getting better, and a lot of development is taking place in that space. Secondly, prices for NAND drives are still dropping, to an all-time low nowadays.
Unfortunately, in a couple of years Optane SSDs will reach end of sale, since Intel decided to end its involvement in the memory space. Nonetheless, it is still available for a couple of years, so we can still enjoy it in SSD or NV-DIMM form factor.
By reading this post, I hope you've learned about Optane's key architectural differences, SSD endurance, SSD power-loss protection, write consistency, and why those qualities matter. Again, many thanks to Corey Romero, Matt Mancini, Simon Todd and the Intel Business and Storage BU for the opportunity.
Cheers, Daniël
ServeTheHome: Optane Persistent Memory (NV-DIMMs, also called PMEM)
ServeTheHome: Compute eXpress Link (CXL)
Western Digital: Understanding SSD Endurance
TheSSDGuy: Comparing Wear Figures on SSDs
If they had been cheaper, I think they'd have been really, really popular.
The newest fully E-core based Xeon CPUs have reached that figure by now, at least in dual-socket configs.
Yes, the pure-Optane consumer "Optane memory" products were at a hardware level just small, fast NVMe drives that could be used anywhere, but they were never marketed that way.
Of course it works exceptionally well when the instinct turns out to be right. But can end companies if it isn’t.
That uncertainty couldn't have done the market any favors.
Also… were those weird hybrid SSDs even implemented by actual hardware, or were they part of the giant series of massive kludges in the “Rapid Storage” family where some secret sauce in the PCIe host lied to the OS about what was actually connected so an Intel driver could replace the OS’s native storage driver (NVMe, AHCI, or perhaps something worse depending on generation) to implement all the actual logic in software?
It didn’t help Intel that some major storage companies started selling very, very nice flash SSDs in the mean time.
They were definitely part of the series of massive kludges. But aside from the Intel platforms they were marketed for, I never found a PCIe host that could see both of the NVMe devices on the drive. Some hosts would bring up the x2 link to the Optane half of the drive, some hosts would bring up the x2 link to the QLC half of the drive, but I couldn't find any way to get both links active even when the drive was connected downstream of a PCIe switch that definitely had hardware support for bifurcation down to x2 links. I suspect that with appropriate firmware hacking on the host side, it may have been possible to get those drives fully operational on a non-Intel host.