Micron, Rambus, & Others Team Up To Spur GDDR6 Adoption in Non-GPU Products
by Ryan Smith on January 23, 2018 9:00 AM EST
Posted in: Self-Driving Cars
For regular AnandTech readers, the drums of GDDR6 have been beating loudly for most of the last year now. The new memory standard replaces the venerable GDDR5 memory, which, to make long-time readers feel old, launched 10 years ago. While GDDR5 has evolved well beyond its initially planned lifecycle to meet the needs of the industry, it’s finally begun to reach its apex, and a new memory standard has been needed to take its place. GDDR6 then promises to be a big deal, offering a significant jump in memory bandwidth over GDDR5 – and even GDDR5X – giving processors of all sorts a much-needed boost.
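To put that generational jump in rough numbers: peak bandwidth is simply bus width times per-pin data rate. The per-pin rates below are the commonly cited launch-era figures for each standard, and the 256-bit bus is an illustrative assumption (typical of high-end consumer cards), so treat this as a back-of-the-envelope sketch rather than a spec sheet:

```python
# Launch-era effective per-pin data rates, in Gbps (illustrative figures)
RATES_GBPS = {"GDDR5": 8.0, "GDDR5X": 12.0, "GDDR6": 14.0}

def peak_bandwidth_gbs(bus_width_bits: int, rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: pins * per-pin rate / 8 bits per byte."""
    return bus_width_bits * rate_gbps / 8

# Assume a 256-bit bus, common on high-end consumer GPUs
for std, rate in RATES_GBPS.items():
    print(f"{std}: {peak_bandwidth_gbs(256, rate):.0f} GB/s")
```

On those assumptions, a 256-bit GDDR6 card at 14 Gbps lands at 448 GB/s versus 256 GB/s for 8 Gbps GDDR5 on the same bus, which is the "significant jump" in concrete terms.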
And while the focus on any GDDR technology is understandably first and foremost on the Graphics aspect of GDDR, the technology itself is not inherently limited to just video cards. Rather, GDDR is fundamentally a product built for the other end of the capacity/bandwidth continuum, focusing on high memory bandwidth and smaller capacities as opposed to traditional, high-density DRAM. Video cards in turn are the most obvious use case given their bandwidth requirements, but they're not the only high-bandwidth devices out there.
A long-term goal of the DRAM industry has been to spur the adoption of GDDR memory in non-graphics products in order to grow the overall market for the memory and provide higher bandwidth options for certain customers. GDDR IP vendors have long seen product categories such as networking gear as being the perfect ancillary market for this type of memory, given the bandwidth needs. However, while this has been an ongoing effort since the GDDR5 days (if not before), any actual market penetration for non-graphics use of GDDR5 has been extremely limited, essentially preserving the status quo as we know it.
As a result, for the launch of GDDR6, Micron is taking a different, more organized path to spurring GDDR6 adoption. Being announced today, Micron, Rambus, Northwest Logic, and Avery Design are banding together to develop a complete toolkit solution for chip designers to implement GDDR6 support on their products. The development of this common ecosystem is intended to allow designers to more easily adopt GDDR6 by offering a full suite of compatible GDDR6 IP, and the means to validate all of it.
By bringing together a group that supplies everything from the memory to the memory controller to validation tools, the group is looking to solve what Micron saw as the biggest roadblock to GDDR5 adoption: the lack of easily licensed IP. In practice, if a vendor wanted to implement GDDR5, there was little in the way of prefabricated designs to work with; vendors would need to implement their own GDDR5 memory controller and do all the tough work that comes with a high speed memory interface. Large players like NVIDIA and AMD could of course pull this off, but it made GDDR5 inaccessible to mid-size players. These are the kinds of firms that may specialize in designing a specific aspect of a chip, and then license and integrate any remaining technology they may need.
The group isn’t giving this collaboration a specific name, but each member supplies a different piece of the puzzle. Micron of course supplies the GDDR6 itself, while the memory controller IP is from Northwest Logic. Meanwhile the PHY for the memory controller – an especially nasty bit since it’s a mixed analog/digital circuit – comes from Rambus. Finally, Avery Design is supplying validation tools for the effort, giving chip designers the means to validate their designs after integrating the various bits of IP. While the complete toolkit isn’t being offered in a one-stop-shopping fashion – interested firms will need to reach out to each member to license the relevant IP bits rather than licensing all of it at once – when assembled the toolkit should greatly streamline the implementation of GDDR6 in new chips.
| GDDR6 IP Group | |
|---|---|
| Micron | GDDR6 Memory |
| Rambus | PHY IP |
| Northwest Logic | Memory Controller IP |
| Avery Design | Verification IP |
As for what markets the group will be targeting, this GDDR6 IP effort is at least initially focused on supporting both ASICs and FPGAs for the networking and automotive markets. The networking market is somewhat self-explanatory here – high-end switches and routers process vast amounts of data and need the memory bandwidth to keep up – and GDDR memory has always been a good potential fit here. This is where speed/capacity tradeoffs become a factor, as even a 512-bit GDDR6 implementation only offers as much memory capacity as one good RDIMM, but for products that can work in those constraints, GDDR6 would offer better bandwidth at lower energy consumption – and with fewer total components – than DDR4.
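The capacity side of that tradeoff can be sketched with the same back-of-the-envelope math. The figures below are illustrative assumptions (8 Gb GDDR6 chips in x32 mode at 14 Gbps, and a single 64-bit DDR4-3200 channel for comparison), not a product spec:

```python
# Capacity/bandwidth tradeoff sketch for a 512-bit GDDR6 implementation.
# Chip density and data rates are launch-era assumptions for illustration.

def gddr6_config(bus_width_bits=512, pins_per_chip=32,
                 chip_density_gbit=8, rate_gbps=14.0):
    """Return (capacity in GB, peak bandwidth in GB/s) for a GDDR6 setup."""
    chips = bus_width_bits // pins_per_chip          # 16 chips on a 512-bit bus
    capacity_gb = chips * chip_density_gbit / 8      # gigabits -> gigabytes
    bandwidth_gbs = bus_width_bits * rate_gbps / 8   # pins * rate / 8
    return capacity_gb, bandwidth_gbs

cap, bw = gddr6_config()
print(f"512-bit GDDR6: {cap:.0f} GB capacity, {bw:.0f} GB/s peak")

# One 64-bit DDR4-3200 channel for comparison: 64 * 3.2 Gbps / 8
ddr4_channel_gbs = 64 * 3.2 / 8
print(f"DDR4-3200 channel: {ddr4_channel_gbs:.1f} GB/s peak")
```

Under these assumptions, the 512-bit GDDR6 array tops out at 16 GB – in the same range as a single decent RDIMM – while delivering 896 GB/s, versus 25.6 GB/s for one DDR4-3200 channel, which is the tradeoff the article describes.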
The other big aim for the group is the rapidly expanding autonomous car market. This market has a lot in common with the graphics market in as much as it involves a lot of visual processing, though with the situation reversed: the data is incoming rather than outgoing. More advanced cars, particularly level 5 fully autonomous designs, ingest a massive amount of sensor data and accordingly require a great deal of memory bandwidth to carry that data. In this respect the group is looking to grab a foothold in a new market, as this market is expected to boom over the coming years, and there’s ample opportunity to sell memory here.
Ultimately, driving GDDR6 adoption outside of the graphics market remains an uphill battle, both for inertia reasons and because it’s not the only high-bandwidth memory technology vying for a piece of the market. However, compared to the fledgling efforts to get GDDR5 adopted in this fashion, Micron’s effort to bring together IP providers is a lot more organized than before, thanks in large part to the fact that it significantly reduces the barrier to adding GDDR support on the logic side of matters. Micron for their part is already sampling their GDDR6, with mass production set to begin this quarter, so if Micron’s efforts make headway, then potential customers should be able to get started very soon on integrating GDDR6 IP into their designs.
Comments
r3loaded - Tuesday, January 23, 2018 - linkHow does GDDR6 compare with HBM for these "non-graphics" applications? Suppose I was designing a high-end router, why would I pick GDDR6 over HBM and vice versa?
Kevin G - Tuesday, January 23, 2018 - linkFor a high end switch/router, HBM would be better suited due to lower latency. While HBM is more expensive due to interposer/EMIB, the high end market carries such a premium this isn't much of an issue.
A5 - Tuesday, January 23, 2018 - linkEh, networking vendors are looking to keep margins high whenever possible too. GDDR6 will have the key benefit of being cheaper than HBM.
Pork@III - Tuesday, January 23, 2018 - linkI want GDDR6 will use for PC memory? Like GDDR5 is PS4... GDDR6 is just from several to several dozen times faster than ordinary DDR4/5.
Pork@III - Tuesday, January 23, 2018 - linkOuch! :D I hope GDDR6 will be used for PC memory!
DanNeely - Tuesday, January 23, 2018 - linkI'd say it's unlikely except perhaps with future generations of Intel+Vega type products (and for most of those applications HBM's smaller footprint is probably a design winner). On the PS4 it was done because of sharing RAM between the CPU and GPU vs separate address spaces for PC. Between the architectural differences that the OS would need, it being a soldered-only product, and not being needed for general CPU use (CPUs haven't been RAM speed bottlenecked since the P4) I don't see it happening.
(Ryzen/Epyc are faster with high speed ram, but only because an internal bus runs at ram speed; the faster ram itself isn't doing anything direct for it.)
willis936 - Tuesday, January 23, 2018 - linkWell we're always limited by memory throughput. Just as many transistors are spent on cache as are spent on the core. If main memory were faster and lower latency that wouldn't be necessary. This is particularly useful for many-core designs where keeping memory synchronized adds a lot of control complexity.
Lolimaster - Tuesday, January 23, 2018 - linkIt's faster in BW but with higher latency, which is not that important for GPUs.
Lolimaster - Tuesday, January 23, 2018 - linkFor actual PCs, low latency and density are more important than BW.
nathanddrews - Tuesday, January 23, 2018 - linkWe'll never be able to buy GPUs again at this rate.