Micron Samples 256 GB DDR5-8800 MCR DIMMs: Massive Modules for Massive Servers
by Anton Shilov on March 22, 2024 4:00 PM EST

Micron this week announced that it has begun sampling its 256 GB multiplexer combined ranks (MCR) DIMMs, the company's highest-capacity memory modules to date. These brand-new DDR5-based MCRDIMMs are aimed at next-generation servers, particularly those powered by Intel's Xeon Scalable 'Granite Rapids' processors, which are set to support 12 or 24 memory slots per socket. Using these modules enables datacenter machines with 3 TB or 6 TB of memory, with the combined ranks allowing for effective data rates of DDR5-8800.
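As a quick sanity check on those capacity figures, here is a minimal back-of-the-envelope sketch in Python; it only uses the slot counts and module capacity quoted above and is purely illustrative:

    # Per-socket capacity with 256 GB MCRDIMMs, using the slot counts cited above
    module_capacity_gb = 256
    for slots_per_socket in (12, 24):   # Granite Rapids configurations per the article
        total_tb = slots_per_socket * module_capacity_gb / 1024
        print(f"{slots_per_socket} slots x {module_capacity_gb} GB = {total_tb:.0f} TB")
    # 12 slots -> 3 TB, 24 slots -> 6 TB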
"We also started sampling our 256 GB MCRDIMM module, which further enhances performance and increases DRAM content per server," said Sanjay Mehrotra, chief executive of Micron, in prepared remarks for the company's earnings call this week.
In addition to announcing sampling of these modules, Micron also demonstrated them at NVIDIA's GTC conference, where server vendors and customers alike are abuzz about building new servers for the next generation of AI accelerators. Our colleagues at Tom's Hardware managed to grab a couple of pictures of Micron's 256 GB DDR5-8800 MCR DIMMs.
Image Credit: Tom's Hardware
Apparently, Micron's 256 GB DDR5-8800 MCRDIMMs come in two variants: a taller module with 80 DRAM chips distributed across both sides, and a standard-height module using 2Hi stacked packages. Both are based on monolithic 32 Gb DDR5 ICs and are engineered to cater to different server configurations, with the standard-height MCRDIMM addressing 1U servers. The taller version consumes about 20W of power, which is in line with expectations, as a 128 GB DDR5-8000 RDIMM consumes around 10W in DDR5-4800 mode. I have no idea about the power consumption of the version that uses 2Hi packages, though I expect it to run a little hotter and be harder to cool.
Image Credit: Tom's Hardware
Multiplexer Combined Ranks (MCR) DIMMs are dual-rank memory modules featuring a specialized buffer that allows both ranks to operate simultaneously. This buffer enables the two physical ranks to operate as though they were separate modules working in parallel, which allows for concurrent retrieval of 128 bytes of data from both ranks per clock cycle (compared to 64 bytes per cycle for regular memory modules), effectively doubling the performance of a single module. Of course, since the module retains the physical interface of standard DDR5 modules (i.e., 72 bits), the buffer works with the host at a very high data transfer rate to deliver that fetched data to the host CPU. These speeds exceed the standard DDR5 specifications, reaching 8800 MT/s in this case.
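As a rough illustration of the arithmetic, the sketch below assumes each physical rank runs at an ordinary DDR5-4400 rate (the article only gives the 8800 MT/s host-side figure, so the per-rank rate is an assumption) and shows how multiplexing the two ranks doubles both the data rate presented to the CPU and the per-module peak bandwidth over a 64-bit data payload:

    # Illustrative model of MCR DIMM bandwidth; the 4400 MT/s per-rank rate is assumed
    DATA_WIDTH_BITS = 64                 # data payload per module (ECC bits excluded)
    BYTES_PER_BEAT = DATA_WIDTH_BITS // 8

    def peak_bandwidth_gbs(data_rate_mts: float) -> float:
        """Peak per-module bandwidth in GB/s for a given data rate in MT/s."""
        return data_rate_mts * 1e6 * BYTES_PER_BEAT / 1e9

    per_rank_rate = 4400                 # assumed standard DDR5 rate for each rank
    host_rate = per_rank_rate * 2        # buffer multiplexes both ranks -> DDR5-8800

    print(peak_bandwidth_gbs(per_rank_rate))  # ~35.2 GB/s from a single rank
    print(peak_bandwidth_gbs(host_rate))      # ~70.4 GB/s from the MCR DIMM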
While MCR DIMMs make memory modules slightly more complex than regular RDIMMs, they increase the performance and capacity of the memory subsystem without increasing the number of memory modules involved, which makes it easier to build server motherboards. These modules are poised to play a crucial role in enabling the next generation of servers to handle increasingly demanding applications, particularly in the AI field.
Sources: Tom's Hardware, Micron
4 Comments
Threska - Friday, March 22, 2024
Soon memory will be as big as the computer it's plugged into.

schujj07 - Friday, March 22, 2024
I would love these in my data center.

lorribot - Friday, March 22, 2024
Licensing coming to a datacenter near you soon, courtesy of Oracle/Broadcom. You asked for an alternative to per core, so now we charge the same per GB of RAM. You have 6 TB? That will be $500K per server, please.
Kevin G - Monday, March 25, 2024
This already exists on the IBM side for mainframes, as you order the hardware and then need licensing to enable the memory you paid for physically.

Intel tried to do something similar in the Skylake/Cascade Lake generations by limiting memory capacity on lower-end chips and requiring otherwise identical but more expensive models to leverage the full capabilities of the hardware.
With CXL and being able to leverage the PCIe bus for additional memory, it will be interesting to see how software companies license that since it will appear as another NUMA node but without any compute attached to it.