JEDEC made two important announcements last week about the future of DRAM and non-volatile DIMMs for servers. Development of both is proceeding as planned, and JEDEC intends to preview them in the middle of this year and publish the final specifications sometime in 2018.

Traditionally, each successive DRAM memory standard aims for consistent jumps: doubling the bandwidth per pin, reducing power consumption by lowering the Vdd/Vddq voltage, and increasing the maximum capacity of memory ICs (integrated circuits). DDR5 will follow this trend: JEDEC confirmed last week that it will double bandwidth and density over DDR4 while improving performance and power efficiency.

Given that the official DDR4 standard covers chips with capacities of up to 16 Gb and data rates of 2133-3200 MT/s per pin, doubling that means 32 Gb ICs running at 4266-6400 MT/s per pin. If DDR5 retains the 64-bit interface for memory modules, the DDR5 era could bring single-sided 32 GB DDR5-6400 DIMMs with 51.2 GB/s of bandwidth. Speaking of modules, it is interesting to note that among other things DDR5 promises “a more user-friendly interface”, which probably means a new retention mechanism or increased design configurability.
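
As a quick sanity check on the numbers above, peak module bandwidth is simply the per-pin data rate multiplied by the bus width in bytes. Here is a minimal Python sketch of that arithmetic; the function name and the fixed 64-bit channel are our own illustrative assumptions, not anything defined by JEDEC:

```python
# Back-of-the-envelope peak bandwidth: transfers per second * bytes per transfer.
def peak_bandwidth_gbps(data_rate_mtps: int, bus_width_bits: int = 64) -> float:
    """Theoretical peak bandwidth in GB/s for a given data rate in MT/s."""
    return data_rate_mtps * 1e6 * (bus_width_bits / 8) / 1e9

for rate in (3200, 4266, 6400):
    print(f"DDR-{rate}: {peak_bandwidth_gbps(rate):.1f} GB/s per 64-bit channel")
# DDR-6400 works out to 51.2 GB/s, matching the figure quoted above.
```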


Samsung's DDR4 memory modules. Image for illustrative purposes only.

Part of the DDR5 specification will cover improved channel use and efficiency. Virtually all modern random access memory sub-systems are single-, dual- or multi-channel, but the actual memory bandwidth of such systems does not increase linearly with the number of channels (i.e., channel utilization decreases). Part of the problem is that host cores compete for DRAM bandwidth, which makes memory scheduling a challenge for CPU and SoC developers. Right now we do not know how the DRAM developers at JEDEC plan to address the channel efficiency problem at the specification level, but even a partial solution would be good news: cores will still compete for bandwidth and scheduling will still matter, yet higher channel utilization could bring both performance and power advantages. Keep in mind that additional memory channels mean additional DRAM ICs and a significant increase in power consumption, which matters for mobile DRAM subsystems and is even more important for servers.
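
To illustrate the effect described above, the toy model below scales a single channel's bandwidth by an assumed utilization factor that falls as channels are added. The per-channel figure and the utilization values are placeholders chosen purely for illustration, not measurements or JEDEC numbers:

```python
# Toy model: effective bandwidth grows sub-linearly with channel count
# because utilization drops as more host cores contend for memory.
def effective_bandwidth_gbps(channels: int, per_channel_gbps: float,
                             utilization: float) -> float:
    """Effective bandwidth under an assumed per-channel utilization factor."""
    return channels * per_channel_gbps * utilization

PER_CHANNEL = 25.6  # e.g. DDR4-3200 over a 64-bit channel
for channels, utilization in [(1, 0.90), (2, 0.80), (4, 0.70)]:
    effective = effective_bandwidth_gbps(channels, PER_CHANNEL, utilization)
    print(f"{channels} channel(s): {effective:.1f} GB/s effective "
          f"vs {channels * PER_CHANNEL:.1f} GB/s theoretical")
```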

JEDEC plans to disclose more information about the DDR5 specification at its Server Forum event in Santa Clara on June 19, 2017, and then publish the spec in 2018. It is noteworthy that JEDEC published the DDR4 specification in September 2012, whereas large DRAM makers released samples of their DDR4 chips/modules a little before that. Eventually, Intel launched the world’s first DDR4-supporting platforms in 2014, two years after the standard was finalized. If DDR5 follows the same path, we will see systems using the new type of DRAM in 2020 or 2021.

Another specification that JEDEC plans to finalize in 2018 is NVDIMM-P, which will enable high-capacity memory modules that combine persistent memory (flash, 3D XPoint, new types of storage-class memory, etc.) with DRAM. The capacity of today's NVDIMM-Ns is limited to that of regular server DRAM modules, but NVDIMM-P promises to change that and increase module capacities to hundreds of GBs or even TBs. NVDIMM-P is currently a work in progress, and we are going to learn more about the technology in June.


Sources of images: SNIA, Samsung

Source: JEDEC


38 Comments


  • MikeMurphy - Monday, April 3, 2017

    I suspect HBM and external DRAM will co-exist early on, with external memory eventually going the way of the Dodo, especially for consumer devices.
  • Xajel - Tuesday, April 4, 2017

    HBM is still more expensive than DDRx... and upgradability is a big issue if it is embedded. While Ultrabook makers might be happy, the majority of the rest of the PC market will not be.

    HBM on the CPU package has a lot of potential, but the next step might simply be using it as vRAM for the iGPU on an APU.

    Maybe later we might see CPUs with HBM as a high-speed cache (something like an L4) with the rest as DDRx... similar to what AMD is doing with Vega's HBCC; it might be a good idea for CPUs as well.
  • helvete - Thursday, June 15, 2017

    What a crazy world. GPU and memory within a CPU and storage in DRAMs. The cases shall be much flatter!
  • SalemF - Monday, February 26, 2018

    Ironically, I was asking for the same thing on the AMD sub and everyone disagreed with me:
    https://www.reddit.com/r/Amd/comments/8088i5/could...
    Being cheap all the time won't save you in the long run; AMD should have learned that lesson already.
  • grant3 - Monday, April 3, 2017

    Optane is a manufacturing technology and DDR5 is an interface standard, so I don't know why you claim they are "against" each other.

    Intel could put regular transistors, Optane transistors, or even miniature elves on the memory chips, and as long as they meet the spec, they're still DDR5.
  • MrSpadge - Monday, April 3, 2017

    He was talking about the NVDIMMs.
  • BrokenCrayons - Monday, April 3, 2017

    That's some good news all around coming from JEDEC. iGPUs could use the extra bandwidth to system memory that DDR5 would offer, and the idea of NVDIMM-P allowing a DIMM or two to act as a complete storage package replacing system memory and SSDs (am I understanding the intent of that correctly?) would reduce the number of system components and connectors.
  • Yojimbo - Monday, April 3, 2017

    I think the idea with NVDIMMs isn't so much to replace system memory or storage, but rather to have a fourth tier: DRAM, NVRAM, SSD, HD. I guess you could consider a fifth tier of HBM in a CPU-GPU system. The NVRAM will allow them to have a large pool of reasonably fast memory to do in-memory operations on data sets that can't fit into the DRAM pool at a reasonable cost, but it won't be able to replace mass storage in most cases. These are server concerns. I am not sure if anyone is planning much NVDIMM usage on PCs. Most PC systems don't really need that much RAM. Giving up some performance for several times the capacity at the same price doesn't seem like a good trade off for PCs. On the other hand, 500GB of NVDIMMs would probably be prohibitively expensive for consumers or an office PC.
  • BrokenCrayons - Monday, April 3, 2017

    Ah thanks for clearing that up!
  • danjw - Wednesday, April 5, 2017

    My understanding is that the idea is eventually to merge them, but that won't happen any time soon. The problem is that no existing technology that I am aware of beats DRAM for speed and flash memory for capacity. Those two things need to happen before this is viable. For now, NVRAM is mostly targeted as a cache for hard drives.
