We have previously explored the importance of memory scaling on AMD's Ryzen CPUs; the question being answered today is how much of an effect memory frequency has on performance when Zen is paired with AMD's own Vega graphics. We ran a complete suite of tests on AMD's Ryzen 3 2200G ($99) and Ryzen 5 2400G ($169) APUs at memory speeds from DDR4-2133 to DDR4-3466, using a kit of G.Skill Ripjaws V.

Memory Scaling on AMD Ryzen APUs

While adding Vega to Zen may be a new concept, the premise of an APU combining compute and graphics on the same chip remains the same. Graphics is often a memory-bound workload: the speed at which the graphics hardware can access data is directly tied to frame rate, and we have seen on past chips that faster memory (or an interim cache) can vastly accelerate graphics performance. Graphics is usually the focus here, as faster memory only assists CPU workloads that are memory limited.

One of the main issues with memory right now is pricing. The price of DDR4 rose over the course of 2017 with no signs of slowing in 2018, making a new desktop build look more expensive over the last couple of years; the inflation of GPU pricing has certainly contributed to those woes. The general outlook on the current DDR4 DRAM market is that a user wanting extra speed must spend more money, which makes the question of how that money equates into actual performance more relevant than ever. On pricing, for example, here is a Corsair Vengeance LPX 2x8 GB DDR4-2666 memory kit over on Amazon:

The price of this memory at launch was $142, which dropped to as low as $57 on sale and averaged $75 during early 2016. Over the course of 2017 and 2018, this very popular memory kit has traded at $179, having reached a high of $200. To put that in perspective, this kit launched at a cost of $8.88 per GB, went as low as $3.56 per GB, and is now at $11.19 per GB. This is almost certainly a seller's market, not a buyer's market. People are often spending money on capacity over speed. The goal of this article is to determine how much speed actually matters, especially when we look at lower-cost processors like the AMD Ryzen APUs.
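The per-gigabyte figures above are simple arithmetic; as a quick sanity check, here is a minimal sketch, with the 16 GB capacity taken from the kit's 2x8 GB configuration:

```python
# Price per gigabyte for a 2x8 GB (16 GB) memory kit, in USD.
KIT_CAPACITY_GB = 16

def price_per_gb(kit_price_usd: float) -> float:
    """Kit price divided by kit capacity, in $/GB."""
    return kit_price_usd / KIT_CAPACITY_GB

# The prices quoted above for the Corsair Vengeance LPX kit.
for label, price in [("launch", 142), ("sale low", 57), ("2018", 179)]:
    print(f"{label}: ${price_per_gb(price):.2f}/GB")
```

Running this reproduces the $8.88, $3.56, and $11.19 per-GB figures quoted above.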

Our APU with some other G.Skill TridentZ DRAM in an SFF test

For a user looking to build a budget system without focusing too much on high-end applications such as CAD or content creation, the Ryzen 3 2200G and Ryzen 5 2400G APUs have a lot to offer, especially when money is a highly limiting factor on purchasing decisions. As we concluded in our Ryzen 5 2400G review, AMD's Ryzen 2000 series pairing offers the best value and performance of anything currently on offer with an integrated GPU on chip, on either side of the APU/CPU marketplace (Intel or AMD).

Memory Scaling on APUs: More Infinity Fabric

Most of the following analysis in this section was taken from our previous Memory Scaling on Ryzen 7 article.

We already know from our previous Ryzen testing what effect memory frequency has on the Zen cores, and AMD added a new element to this when it equipped the Ryzen 3 2200G and Ryzen 5 2400G with Vega. As with the rest of AMD's Ryzen processor range, each chip combines multiple technologies, but relatively speaking, the one with the greatest ability to affect memory performance on the Ryzen 2000 series is called the Infinity Fabric.

The Infinity Fabric (hereafter shortened to IF) consists of two fabric planes: the Scalable Control Fabric (SCF) and the Scalable Data Fabric (SDF). The SCF is all about control: power management, remote management, security, and IO. Essentially, when data has to flow to elements of the processor other than main memory, the SCF is in control. The SDF is where main memory access comes into play. There is still management here - organizing buffers and queues in order of priority helps with latency, and that organization in turn relies on a speedy implementation. The slide below is aimed more towards the IF implementation in AMD's server products, such as power control on individual memory channels, but is still relevant to accelerating consumer workflows.

AMD's goal with IF was to develop an interconnect that could scale beyond CPUs, groups of CPUs, and GPUs. In the EPYC server product line, IF connects not only cores within the same piece of silicon, but silicon within the same processor and also processor to processor. Two important factors come into the design here: power (usually measured in energy per bit transferred) and bandwidth.

The bandwidth of the IF is designed to match the bandwidth of each channel of main memory, creating a solution that should potentially be unified without resorting to large buffers or delays. Discussing IF in the server context is a bit beyond the scope of what we are testing in this article, but the point we're trying to get across is that IF was built with a wide scope of products in mind. On the consumer platform, while IF isn't necessarily used to such a large degree as in server, the potential for the speed of IF to affect performance is just as high.
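To put rough numbers on that matching: a single 64-bit DDR4 channel transfers 8 bytes per beat, so peak theoretical bandwidth scales linearly with data rate. A minimal back-of-the-envelope sketch (our own illustration, not AMD's figures):

```python
# Peak theoretical bandwidth of one 64-bit DDR4 channel:
# data rate in MT/s multiplied by 8 bytes per transfer.
def channel_bandwidth_gbs(data_rate_mts: int, bus_width_bits: int = 64) -> float:
    """Peak bandwidth in GB/s for a single memory channel."""
    return data_rate_mts * (bus_width_bits // 8) / 1000.0

# The data rates tested in this review.
for rate in (2133, 2400, 2667, 2866, 3333, 3466):
    print(f"DDR4-{rate}: {channel_bandwidth_gbs(rate):.1f} GB/s per channel")
```

Going from DDR4-2133 to DDR4-3466 takes one channel from roughly 17.1 GB/s to 27.7 GB/s of theoretical peak, a 62% uplift, which is the headroom the IF and the Vega cores stand to benefit from.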

Test Bed and Hardware

As per our testing policy, we take a premium category motherboard suitable for the socket, and equip the system with a suitable amount of memory. With this test setup, we are using the BIOS to set the CPU core frequency using the provided straps on the MSI B350I Pro AC motherboard. The memory is set to the range of speeds as given for our testing.

Test Setup
Processors:       AMD Ryzen 3 2200G, AMD Ryzen 5 2400G
Motherboard:      MSI B350I Pro AC
Cooling:          Thermaltake Floe Riing RGB 360
Power Supply:     Thermaltake Toughpower Grand 1200 W Gold
Memory:           G.Skill Ripjaws V, 2x8 GB DDR4-3600 17-18-18, 1.35 V
Integrated GPU:   Vega 8 (1100 MHz) / Vega 11 (1250 MHz)
Discrete GPU:     ASUS Strix GTX 1060 6 GB (1620 MHz base, 1847 MHz boost)
Hard Drive:       Crucial MX300 1 TB
Case:             Open Test Bed
Operating System: Windows 10 Pro

With the aim of producing a consistent set of results, the G.Skill Ripjaws V DDR4-3600 kit was set to latencies of 17-18-18-38 at each of the different straps tested. Because our MSI B350I Pro AC motherboard does not support 100 MHz straps, the kit's XMP profile was enabled in the BIOS and the latency timings were adjusted to 17-18-18-38 manually, keeping the timings constant across straps for continuity in the frequency scaling tests.


A side note on our previous experience with memory scaling. In the past we introduced the concept of a Performance Index (PI) for each memory kit, to give a rough performance comparison metric between memory kits. This PI is defined as the data rate (such as DDR4-2400) divided by the CAS latency (such as the 17 in 17-18-18), rounded to the nearest whole number. In previous articles like this, the memory with the highest PI typically scored the best overall, especially in gaming, although between combinations with similar PIs, the one with the higher frequency was often ahead. We will revisit this concept later in the review.
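The PI calculation itself is trivial; a minimal sketch, using round-to-nearest so the results match the values in the table of tested configurations:

```python
# Performance Index (PI): data rate divided by CAS latency,
# rounded to the nearest whole number.
def performance_index(data_rate: int, cas_latency: int) -> int:
    return round(data_rate / cas_latency)

print(performance_index(2133, 17))  # lowest data rate tested
print(performance_index(3466, 17))  # highest data rate tested
```

Since every kit in this review runs at CL17, the PI here scales directly with data rate; the metric only becomes interesting when comparing kits with different CAS latencies.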

In this review, we will be testing the following combinations of data rate and latencies:

Data Rate     Sub-Timings   Performance Index   Voltage
DDR4-2133     17-18-18      2133 / 17 = 125     1.35 V
DDR4-2400     17-18-18      2400 / 17 = 141     1.35 V
DDR4-2667     17-18-18      2667 / 17 = 157     1.35 V
DDR4-2866     17-18-18      2866 / 17 = 169     1.35 V
DDR4-3333*    17-18-18      3333 / 17 = 196     1.35 V
DDR4-3466     17-18-18      3466 / 17 = 204     1.35 V

*Corresponds to XMP Profile 1 on this memory kit

AGESA and Memory Support

At the time of the launch of Ryzen, a number of industry sources privately disclosed to us that the platform side of the product line was rushed. There was little time to build full DRAM compatibility lists, even with standard memory kits in the marketplace, and this led to a few issues for early adopters trying to find kits that worked well without tweaking. Within a few weeks this was ironed out, once the memory vendors and motherboard vendors had time to test and adjust their firmware.

Compounding this was a lower-than-expected level of DRAM frequency support. At launch, AMD had promised that Ryzen would be compatible with high-speed memory, yet reviewers and customers were having issues with higher-speed memory kits (DDR4-3200 and above). These issues have been addressed via a wave of motherboard BIOS updates built upon updated versions of the AGESA (AMD Generic Encapsulated Software Architecture) microcode, AMD's unusual version numbering system notwithstanding.

While the maturity of the Ryzen platform is generally no longer an issue, the AGESA microcode specifically focused on supporting the new Raven Ridge Ryzen 3 2200G and Ryzen 5 2400G APUs was announced before launch; we covered these BIOS updates for AMD's Ryzen APUs back in February at launch.

The whole purpose of today's testing is to evaluate memory scalability on AMD's Zen architecture and to see whether performance can be influenced by increasing the DRAM frequency. Given that previous generations of AMD APUs have been reported to benefit, it would be foolish not to establish whether memory frequency has a direct impact on frame rates.

This Review

In this article we cover:

  1. Overview and Test Bed (this page)
  2. CPU Performance
  3. Integrated Graphics Performance
  4. Discrete Graphics Performance with a GTX 1060
  5. Conclusions
Comments

  • GreenReaper - Saturday, June 30, 2018 - link

    Not sure CISC vs. RISC is right here - SIMD, sure, since that operates on large blocks of memory and so should be more suitable for GDDR's larger bus size.
  • peevee - Tuesday, July 3, 2018 - link

    Type of memory does not determine bus size.
    128-bit GDDR5 is exactly as wide as 2-channel DDR4 in all the cheap CPUs.
    But it is a little bit smarter - for example, it contains hardware clear operation - no need to write a whole lot of zeros...
  • close - Saturday, June 30, 2018 - link

    DDR3 has been in use since 2007. Adoption rate aside, the cycle reached a peak with DDR3's 7 year reign and it might come back down if DDR5 comes soon.

    DDR1 was announced in 2000, DDR2 in 2003, DDR3 in 2007, DDR4 in 2014. DDR5 is rumored for next year.
  • peevee - Tuesday, July 3, 2018 - link

    " DDR is optimized for cisc operations while GDDR is optimized for risc operations"

    What a load of BS... Learn, people, before writing.
  • niva - Tuesday, July 3, 2018 - link

    I always thought it was that GDDR was faster memory that can't be mass produced in quantities to satisfy the DRAM market, not that there was something fundamentally different about the memory. I also questioned that RISC vs. CISC statement but simple google searching reveals this: https://www.quora.com/What-is-the-difference-betwe...

    So perhaps that wasn't way off base.
  • Dragonstongue - Tuesday, July 3, 2018 - link

    G for GDDR means GRAPHICS, DDR and GDDR "same thing" in theory, "however"
    GDDR is not the same as DDR. Overall, GDDR is built for much higher bandwidth, thanks to a wider memory bus.
    GDDR has lower power and heat dispersal requirements compared to DDR, allowing for higher performance modules, with simpler cooling systems.
    DDR1, DDR2, and DDR3 have a 64 bit bus (or 128 bit in dual channel). GDDR3, comparatively, commonly uses between a 256 bit bus and 512 bit bus, or interface (across 4-8 channels).
    GDDR3 has a 4 bit prefetch and GDDR5 has an 8 bit prefetch, making GDDR5 twice as fast as GDDR3 in apples to apples comparisons.
    GDDR can request and receive data on the same clock cycle, where DDR cannot.
    DDR1 chips sends 8 data bits for every cycle of the clock, GDDR1 sends 16 data bits.

    things get extra "confusing" when GDDR5 came out because whatever the "rating is" for example GDDR5 900 "clock" you take this number and quadruple it which is the "effective speed" so this 900 becomes 3600 as it has a wider bus available to it A and B GDDR can send and receive data at the same time on the same clock cycle (normal DDR cannot, from what I have read)

    also GDDR is a chunk more expensive then "normal" DDR ram, though it does have multiple benefits.

    I suppose one can look at "DDR SDRAM is optimised to handle data packets from various sources in small bits with very low latency e.g browsers, programs, anti-virus scans, messengers.

    GDDR, on the other hand, can send and receive massive chunks of data on the same clock cycle.

    (source) http://www.dignited.com/27670/gddr-vs-ddr-sdram-gd...
  • bananaforscale - Saturday, June 30, 2018 - link

    You are complaining about DDR4 because APUs struggle with it? You're barking up the wrong tree. The issue is in using shared memory.
  • peevee - Tuesday, July 3, 2018 - link

    Or they could have supported 4 channels, given that they support 4 DIMMS anyway. Would be useful for CPU operations too, given that they run 8 threads in parallel...
  • Dragonstongue - Tuesday, July 3, 2018 - link

    AMD memory controller for desktop purposes is NOT built nor designed for quad channel usage, the cost is "not worth it" there is no way you can keep costs down for a "simple" APU for those looking for a computer on a budget and have access to quad channel memory A and B very very few things the average everyday consumer does with their computer needs or can effectively use beyond what AMD CPU have provided with their HT (hypertransport) or IF (infinity fabric for Ryzen) are able to provide with dual channel.

    More is not always better, most of the time it becomes chasing unicorns vs actually "needing it", you know, for those who have a massive wallet and buy it just to say they have it AH HAHA
  • Dragonstongue - Tuesday, July 3, 2018 - link

    we have not had DDR4 "that long" compared to say DDR3 or DDR2 which were and have been out far far longer
    DDR (2000) DDR2 (2003) DDR3 (2007) DDR4 (2014)....if you are "bad at maths" ^..^
    18+ years........15+ years.....11+ years.....4+ years
    DDR5 should be towards end of 2018 though JEDEC is saying 2020 for end consumer (me and you) purchase.

    It is not the raw "speed" holding things back FYI, latencies, cycle speed, bandwidth available, power required to keep them running, all the subtimings ALL matter in their own fashion (depending on the task they are being used for) I remember many DDR2 sticks that you could heavily overclock and they ran crazy fast but also got crazy hot and died early deaths (suicide runs)

    I do not ever hear of this happening with DDR3 or DDR4 (lower volts and the chip makers such as Intel do their damndest to monitor/control the memory controller speeds and volts to avoid killing things, back in the day these same safeguards were not in place)

    (max JEDEC certified specs best I can tell)
    DDR 400, DDR2 1066, DDR3 2133, DDR4 3200
    best "jump" on a percentage basis seems to have been from DDR to DDR2 (166.5%) DDR2 to DDR3 (100.1%) DDR3 to 4 (50%)

    current spec (not finalized as of yet) for DDR5 are "up to" approximately double what the fastest current modules of DDR4 are rated for 4266-6400 (vs 3200) (33.3% "gain" or at the "best" (100%)

    I hardly call either of those "double" but I am a simple man ^.^

    SOOOOOO the major jump absolutely was ddr to ddr2 when comparing official "spec" of the fastest rated memory, obviously there is even faster that one can manage when you overclock or whatever, but this is not a guarantee either, ratings and specs are ratings and specs.

    Now as far as "when are we going to get faster memory" that depends, can your cpu or motherboard "handle it" IMHO, nope, not at this point anyways, very few can "handle" say DDR4 4700 (G.Skill) and generally speaking the extreme "fastest" also suffer from far looser timings and subtimings and a marked increase in power required to "make it happen"

    RAM is not a "simple" thing to crank up the speeds with everything getting a "nitrous boosT" like you could with a car engine ^.^
