Intel and Micron Put Their Shared Past Behind Them...

In 2018, Intel and Micron decided to end their long-running memory partnership, going their separate ways first for NAND flash development after their 96-layer node, and then for 3D XPoint memory after completing development of its second generation. The split was completed in 2019, and in 2020 we saw the first post-divorce developments from both players. Intel has introduced 144-layer 3D NAND as their follow-up to the 96L generation, with a variety of consumer and enterprise SSDs announced earlier this month.

Micron used the breakup as an opportunity to make more significant changes, switching their 3D NAND design from a floating gate memory cell to a charge trap cell design with a replacement gate manufacturing process, much closer to what everyone other than Intel has been doing since the transition from planar to 3D NAND. Micron initially made this switch with a 128L generation that served as a low-volume trial, and quickly followed up with their recent announcement of a 176L generation. The 176L NAND has ostensibly already started shipping in some products, though Micron won't identify which, and it won't really begin to have an impact on the SSD market until well into 2021.

... And Intel Makes a New Deal

After ending the Micron partnership, Intel's NAND and SSD business was the odd man out in the industry. They're the only player focused on 3D floating gate designs, and their product line is very strongly focused on the enterprise/datacenter SSD market, with consumer SSDs as more of an afterthought. They are also putting far more of an emphasis on QLC NAND than any other player in the SSD market: in May, they even confirmed that they would not be releasing any more consumer SSDs with TLC NAND. Some of this QLC focus is an attempt to promote their 3D XPoint-based Optane products for fast tier storage, but the fact remains that Intel's newly independent 3D NAND business is relatively small and specialized within the NAND industry.

Given its new station in the market, it isn't too surprising that Intel's NAND business became the next acquisition target. In October, SK hynix and Intel announced a deal for SK hynix to purchase Intel's 3D NAND business and their NAND-based SSD product lines, leaving Intel with their Optane products as their only storage offerings. This acquisition will be an unusually long, drawn-out process. Once the deal gets regulatory approval (expected sometime in 2021, probably later in the year), SK hynix will take ownership of Intel's NAND fab and the SSD business and IP (e.g., Intel's NVMe SSD controller designs). Meanwhile, Intel will retain their 3D NAND IP and the employees operating the NAND fab until both are transferred in the final part of the deal, planned for March 2025.

Neither company has commented on how this acquisition will affect their long-term technology roadmaps, but it's easy to imagine that by 2025 Intel's 3D floating gate may be a dead end, and SK hynix would switch the Intel fab in Dalian, China over to whatever charge trap based process the rest of the SK hynix NAND fabs are using at the time. Even though the differing 3D NAND technologies may not be practical to maintain in the long run, in the short term the SK hynix and Intel SSD businesses are quite complementary. They're two of the smaller players in the NAND market, but with relatively little overlap. SK hynix sells twice as many SSDs as Intel but lower total capacity in GB, meaning Intel's drives are on average much larger, so SK hynix will be acquiring a much larger presence in the high-capacity enterprise SSD market, which is also the most profitable market segment. Meanwhile, SK hynix has recently become much more competitive in consumer SSDs, after their first several generations of 3D NAND made them a perennial also-ran in the SSD market and left their 3D NAND mostly relegated to mobile (smartphone) storage and memory cards. The combined SK hynix + Intel NAND business is on track to have the second-largest market share in the NAND market.

Related Reading: SK Hynix to Buy Intel’s NAND Memory Business For $9 Billion

More Layers And Faster Interfaces

As the major flash memory manufacturers roll out new generations of 3D NAND flash memory, we have seen both diversification and convergence of the technologies implemented by the various manufacturers. The layer counts for 3D NAND are starting to get a bit confusing: At the 96-layer node, Samsung stood out with just 92 layers, a slightly lower count that was the trade-off for avoiding string stacking in their manufacturing process. Beyond 96 layers, the roadmaps for layer count are all over the place. Kioxia and Western Digital announced 112L NAND at the beginning of 2020, but we haven't seen it hit the market yet. Intel jumped from 96L to 144L. Micron made a short stop at 128L before continuing on to 176L. SK hynix started shipping 128L and has also recently announced 176L. Samsung's 128L debuted with the 980 PRO. It takes anywhere from several months to over a year to get from a NAND manufacturer announcing a new node to sampling that NAND to ramping up production enough for it to appear in mass-market consumer products, so most of the consumer SSDs on the market today are still 96L models, with some of the newer drives using 128L NAND. In 2021 we should see someone announce 200+ layer 3D NAND, but new products hitting the shelves will feature anywhere from 112 to 176 layers.

The layer count for 3D NAND is a decent indicator of each generation's density and cost per GB, but another metric is becoming important: IO performance per die. The speed of IO between the NAND flash and the SSD controller can be a limiting factor for the drive's overall performance. For a PCIe 3.0 x4 SSD with an 8-channel controller, running each channel at 533MT/s with TLC NAND was sufficient to provide the ~3.5GB/s sequential read performance that saturates the PCIe link. Now that drives are moving to PCIe 4.0, both NAND and SSDs need to support higher IO speeds. There's also a trend toward using four-channel controllers for mainstream consumer SSDs and reserving 8-channel controllers for high-end consumer and entry-level datacenter drives, which further drives the need for higher IO speeds. We're now seeing most PCIe 4.0 SSD controllers supporting 1.2GT/s to 1.6GT/s IO speeds, with the latest NAND generations offering similar speeds. This big jump in per-die performance is how the SK hynix Gold P31 with a 4-channel controller was able to match the performance of other PCIe 3.0 SSDs with 8-channel controllers, while using far less power. NAND IO speeds will probably stall again for a generation or two after PCIe 4.0 goes mainstream, until PCIe 5.0 or later starts to catch on in the storage market.
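As a rough sanity check of those numbers, the per-channel math works out as a simple multiplication (this sketch assumes the typical 8-bit NAND IO bus, so MT/s maps directly to MB/s; the function name and figures are illustrative, not from any vendor datasheet):

```python
# Back-of-the-envelope NAND-to-controller bandwidth estimate.
def aggregate_bandwidth_mbps(channels: int, mt_per_s: int, bus_bits: int = 8) -> int:
    """Raw aggregate bandwidth in MB/s across all NAND channels."""
    return channels * mt_per_s * bus_bits // 8

PCIE3_X4_USABLE = 3500  # ~3.5 GB/s practical sequential read over PCIe 3.0 x4

# 8 channels at 533 MT/s: 4264 MB/s raw, enough to saturate PCIe 3.0 x4.
print(aggregate_bandwidth_mbps(8, 533))

# A 4-channel controller needs roughly 1200+ MT/s NAND to reach the same
# raw bandwidth, which is why PCIe 4.0-era NAND jumped to 1.2-1.6 GT/s.
print(aggregate_bandwidth_mbps(4, 1200))
```

Real drives lose some of that raw bandwidth to ECC, protocol overhead, and die-level busy time, which is why the usable figure sits below the raw channel total.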


3D XPoint and SLC NAND Remain Niche Technologies

At the very high end of the SSD market, and almost exclusively for top-of-the-line enterprise SSDs, there are memory technology options aside from the QLC and TLC NAND used in mainstream SSDs. 3D XPoint memory is an alternative to NAND flash memory, and was developed by Intel and Micron. Micron has technically commercialized 3D XPoint with their limited-release X100 SSD, but otherwise Intel's Optane products are almost synonymous with 3D XPoint memory. Intel's first product to use second-generation 3D XPoint memory was formally announced earlier this month: the Optane SSD DC P5800X, replacing the first-generation P4800X. Intel has actually been sampling this drive for months, and they have let some independent performance numbers slip out that back up their impressive performance claims—but we still don't have a good idea when to expect widespread availability, or what pricing will look like. Intel's other Optane products are still using the first-gen 3D XPoint memory, including the second generation of Optane DC Persistent Memory DIMMs that are usable with Cascade Lake and Ice Lake Xeons.

After Intel launched 3D XPoint memory, Samsung responded by bringing back SLC NAND flash memory in the form of their Z-NAND, a latency-optimized 48L 3D SLC NAND. We haven't heard anything new on that front since the 983 ZET two years ago, but Toshiba (now Kioxia) eventually announced their own low-latency 3D SLC memory: XL-FLASH. That memory began sampling this year and is starting to find its way into niche high-end enterprise SSDs from several brands. We have a sample of the DapuStor Haishen3-XL H3900 on our enterprise testbed.

These high-end memories may sound like they offer exciting possibilities for high-end storage, but their impact on the market has been limited and very little has changed in the past year. Everyone is finding 3D TLC NAND to be fast enough for almost every purpose, and the niche for faster-than-TLC storage options will remain tiny. There are no revolutionary new memory technologies poised to shake up the market in the near future.


NVMe 2.0: Coming Soon, And Already Partly Here

Work has been underway for over a year on version 2.0 of the NVMe specification, but despite the major version number bump it won't be bringing much revolutionary new technology. Instead, the spec is being reorganized to split it into more digestible components with separate documents for PCIe transports and NVMe over Fabrics transports, and different documents for each command set. The NVMe Working Group has been trying to ratify and publish new features without waiting to bundle them all up in the big spec releases, so we already have specs for two new command sets aside from the familiar hard drive-like block storage command set: Zoned Namespaces (ZNS) and Key-Value Namespaces. We covered the ZNS spec in depth earlier this year. It represents a significant change in the abstraction layer between flash memory and application software and allows software to structure its IO patterns in a much more flash-friendly manner, enabling better endurance and performance and lower hardware costs. The SSD industry has been exploring options along these lines for quite a while, including with several proposals for Open-Channel SSDs. With the ZNS spec, it looks like they have finally settled on the right balance for exposing the underlying hardware details without requiring too much software complexity, and lots of companies in the enterprise storage space are on board. Western Digital was the first to announce a ZNS SSD as a commercial product with their Ultrastar DC ZN540.
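The core constraint ZNS imposes can be captured in a few lines: within a zone, writes must land exactly at the zone's write pointer, and space is reclaimed only by resetting the whole zone. The sketch below is a hypothetical Python model of that behavior, not the actual NVMe command set; the `Zone` class and its method names are invented for illustration.

```python
# Toy model of a ZNS-style zone: strictly sequential writes at the write
# pointer, whole-zone reset instead of per-block overwrite. This sequential
# discipline is what lets the drive map a zone directly onto flash erase
# blocks and skip most garbage collection.
class Zone:
    def __init__(self, size_blocks: int):
        self.size = size_blocks
        self.write_pointer = 0   # next writable block offset within the zone
        self.data = {}

    def write(self, offset: int, block: bytes) -> None:
        # ZNS rejects any write that is not at the current write pointer.
        if offset != self.write_pointer:
            raise IOError("ZNS: writes must be sequential at the write pointer")
        if self.write_pointer >= self.size:
            raise IOError("ZNS: zone is full")
        self.data[offset] = block
        self.write_pointer += 1

    def read(self, offset: int) -> bytes:
        # Reads are random-access, as on a conventional drive.
        return self.data[offset]

    def reset(self) -> None:
        # Reclaiming space means invalidating the entire zone at once.
        self.data.clear()
        self.write_pointer = 0
```

Host software that already writes data in large sequential runs (log-structured databases, object stores) maps naturally onto this model; software that expects in-place overwrites does not, which is why ZNS requires application-level changes.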

The other new command set—for Key-Value SSDs—is a subject we haven't covered in as much detail. Conceptually, it's pretty simple: instead of providing the traditional block storage interface with 512-byte or 4kB sectors, the SSD stores and retrieves variable-length values identified by keys rather than logical block addresses (LBAs). This adds only a tiny bit more complexity to the flash translation layer of a typical enterprise SSD, but allows the drive to offload almost all of the functionality of a key-value store like RocksDB. As with ZNS SSDs, KV SSDs require significant changes to host software, but in turn offer performance and endurance benefits. The KV SSD concept doesn't seem to be getting as much interest across the industry, but having it standardized opens the door to wider adoption.
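From the host's point of view, the interface collapses to a handful of verbs. The following is a hypothetical sketch of that namespace model in Python; the class and method names are invented for illustration, though they loosely mirror the kinds of operations a KV command set defines (store, retrieve, delete, existence check).

```python
# Toy key-value namespace: variable-length values addressed by arbitrary
# byte-string keys, with no logical block addresses in sight. On a real KV
# SSD this mapping lives in the drive's flash translation layer.
class KVNamespace:
    def __init__(self, max_value_len: int = 1 << 20):
        self.max_value_len = max_value_len  # per-value size limit (assumed)
        self._store = {}

    def store(self, key: bytes, value: bytes) -> None:
        # Storing to an existing key overwrites the previous value.
        if len(value) > self.max_value_len:
            raise ValueError("value exceeds device limit")
        self._store[key] = value

    def retrieve(self, key: bytes) -> bytes:
        if key not in self._store:
            raise KeyError("key does not exist")
        return self._store[key]

    def delete(self, key: bytes) -> None:
        self._store.pop(key, None)

    def exist(self, key: bytes) -> bool:
        return key in self._store
```

The appeal is that an application like a key-value database no longer needs to serialize its records into fixed-size sectors and maintain its own index-to-LBA mapping; the drive's existing translation machinery does double duty.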

Both of these major new NVMe features are at least initially going to be almost exclusively used by the major hyperscale cloud computing companies, because of the scope of software modifications required. As the necessary modifications roll out to common server application software, these storage features may catch on more broadly in the server market, but there's no easy path for them to make their way into the consumer storage market.



AnandTech Year In Review 2020: Solid State Drives



  • Tomatotech - Wednesday, December 30, 2020 - link

It's been impressive to see how quickly NVMe has maxed out PCIe 3.0. It looks like PCIe 4.0 will also be maxed out quite quickly. Will be good to see PCIe 5.0 drives coming out sooner rather than later, though I strongly suspect it will be from the likes of AMD or Apple, not Intel.

    If Intel could pull a rabbit out of the hat and produce PCIe 5.0 support before everyone else, that would go a long way towards bringing them back to the top again. Will they? Probably not.

Hopefully NVMe 2.0 / Zoned Namespaces will also help with the other corner of SSD performance - random I/O. That's nowhere near maxing PCIe 3.0, and has a long way to go. Apple said as much when they gave their M1 chips huge registers and SSD bandwidth - there is a lot more SSDs can do to shovel random data into the CPU in the huge quantities needed.
  • dullard - Wednesday, December 30, 2020 - link

    Intel and AMD will both likely have PCIe 5.0 in 2022. Intel might just squeak it in during the end of 2021. The first PCIe 5.0 hardware was recently successfully tested and it used Intel chips.
  • Great_Scott - Sunday, January 3, 2021 - link

3D XPoint already excels at random low-queue-depth IO without any of those optimizations. Such a shame that a technology that would excel as a small drive for normal users is an expensive niche solution for businesses.
  • James5mith - Wednesday, December 30, 2020 - link

Where is the ADATA drive based on the Rainier SSD controller?
  • Billy Tallis - Wednesday, December 30, 2020 - link

My review sample of the ADATA Gammix S70 literally arrived without warning while I was writing this article. I had to go back and edit a statement about Innogrit controllers still being MIA. So far, it appears the only place to order one is ADATA's own online store, but they claim to be in stock.
  • lmcd - Wednesday, December 30, 2020 - link

Imo we'll still see SATA SSDs for a while. They'll just have lower-binned NAND than NVMe. Very few systems support more than 2 NVMe disks and that's just not very much storage.
  • name99 - Wednesday, December 30, 2020 - link

    Billy, might I suggest that at some point, for interest, you run the standard AnandTech SSD benchmarks on an M1 Mac?
    The reason this would be interesting is this:

    The author is not interested in dick-measuring "proofs" of how great the SSDs are, rather he just wants a feel for how they behave under *his* standard workloads (and using standard macOS API calls, no playing around with low level calls, queue depths, etc). There are also three or four earlier articles that explain his methodology and reasoning.

    Even so, the numbers he sees are fairly impressive when compared to other "laptop" SSD results; simplifying down to essentially 2.8GB/s for a range of "realistic" sizes and use patterns.
    This means it would be interesting to see how those numbers compare with the drives you are benchmarking every month when running essentially the same sort of test code.
  • ozzuneoj86 - Wednesday, December 30, 2020 - link

    "there aren't many PCs left that need a SATA SSD for an aftermarket upgrade"
    I seriously disagree with this statement. There are hundreds of thousands (millions?) of computers that exist in offices, homes or for sale on the used market that do not use M.2 drives at all, let alone NVMe. Even a bottom of the barrel DRAM-less SATA SSD will make these systems run perfectly for the average non-gamer, and will likely run acceptably for YEARS until something outright fails. I would bet that the majority of those users would notice very little difference in every day tasks (web, email, etc) if you had them upgrade to a brand new system with an NVMe SSD.

    When Windows 10 no longer functions on anything older than Kaby Lake, then maybe we'll see a real decline in the need for SATA SSDs, but for now, I think demand for the drives themselves is quite high. Just because manufacturers no longer need to invest R&D money into SATA SSDs to satisfy consumers doesn't mean there is no market for the existing drives.

    Take this with a grain of salt, but compare the number of user ratings of popular SATA SSDs on Amazon to the number on popular NVMe models. Most of the SATA drives have 4-8x the number of ratings, despite only being a year or two older. This isn't a guaranteed metric (too many variables) but saying that SATA drives are already becoming obsolete isn't true.

    New PCs already have SSDs in them. Old PCs that don't feel fast enough (or need larger SSDs) are the ones people are upgrading... and by far the vast majority don't use NVMe.
  • FunBunny2 - Wednesday, December 30, 2020 - link

    "Everyone is finding 3D TLC NAND to be fast enough for almost every purpose, and the niche for faster-than-TLC storage options will remain tiny. There are no revolutionary new memory technologies poised to shake up the market in the near future."

    well, I remain convinced that at some point, solid state storage will be fast enough to support memory-to-memory ISA, i.e. directly in storage. just like in the early 80s. once that happens, lots and lots of code will be ripped out and simplified. OS writers will start pulling their hair out, of course. cpu design will be greatly simplified, what with jettisoning all those levels of cache/buffer, and transaction control (both in the RDBMS sense and general data control) will need be re-thought. if that comes about, cpu chips get really, really tiny. have you looked at how much of a current chip is some form of memory?
  • npz - Wednesday, December 30, 2020 - link

That's what Optane memory is designed for, and it's still nowhere near as fast, in both throughput and latency, as the latest memory, even though Optane itself is much, much faster than a regular SSD for memory-type access workloads. I knew people in the same division who had started working on code for it a few years ago, and it was up in the air whether or not it was worth the effort. You see, because of the differences in performance characteristics and its unique suitability for storage, it was really to be treated as yet another tiered storage device or cache rather than local memory.

One downside of all solid state storage compared to memory is how writes are handled. Memory is byte-addressable and optimized for comparatively tiny row/column accesses, amounting to word- or double-word-sized transfers (8 bytes, etc.), while SSDs are designed to be addressed at a minimum of 4K pages at the h/w level for erase-write cycles. Erasing is not "free" at all like it is in memory. Because of that, there is always another logical translation layer sitting on top to fetch a new page to write to while the old is marked for erasure. DRAM just does not have that problem.
