CPU Tests: Core-to-Core and Cache Latency, DDR4 vs DDR5

Starting off with the first of our synthetic tests, we’re looking into the memory subsystem of Alder Lake-S, as Intel has made a great deal of changes to the microarchitecture here, both at the chip level as well as at the platform level due to the new DDR5 memory compatibility.

In our core-to-core latency test, which showcases the physical topology of the chip, there are a few things to note. Beginning with the P-cores, which are logically enumerated in the operating system as cores 0 to 15, we can see that latencies are about the same as what we’ve seen on Rocket Lake, albeit with a few nanoseconds of difference in the results. The latencies appear to be a bit more topologically uniform, which might indicate that Intel has finally swapped its uni-directional coherency snoop ring for a bi-directional one.
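A core-to-core latency harness typically pins two threads to specific physical cores and bounces ownership of a single cache line between them with atomic operations. Python cannot pin threads or resolve nanoseconds, so the sketch below only mirrors the structure of such a test, timing thread-to-thread round-trip handoffs (in microseconds) with a pair of events; a real harness would do this with native atomics and CPU affinity:

```python
import threading
import time

def pingpong(iters=2000):
    """Structural sketch of a core-to-core test: two threads hand a
    token back and forth, and we time the average round trip."""
    a, b = threading.Event(), threading.Event()

    def responder():
        for _ in range(iters):
            a.wait()   # wait for the "ping"
            a.clear()
            b.set()    # send the "pong" back

    t = threading.Thread(target=responder)
    t.start()
    start = time.perf_counter()
    for _ in range(iters):
        a.set()        # ping
        b.wait()       # wait for pong
        b.clear()
    elapsed = time.perf_counter() - start
    t.join()
    return elapsed / iters  # seconds per round trip

print(f"{pingpong() * 1e6:.1f} us per round trip")
```

The interpreter's thread scheduling makes the absolute numbers meaningless for nanosecond-scale topology mapping; the point is only the ping-pong pattern that such tests are built on.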

Latencies between the SMT siblings are also interesting, as they decrease from 5.2ns on Rocket Lake’s Cypress Cove cores to 4.3ns on the new Golden Cove cores. The actual L1 access latencies haven’t changed between the two microarchitectures, which suggests that Intel has improved the cycle latency of the LOCK instruction.

Between the Golden Cove cores and the smaller Gracemont cores we see higher latencies, which is to be expected given the E-cores’ lower clock speeds and the likely higher overhead of the Gracemont cluster’s shared L2.

What’s a bit perplexing, however, is that core-to-core latencies between Gracemont cores are extremely high, which is unintuitive, as one would have expected coherency between them to be resolved purely within their local L2 cluster. Instead, what seems to be happening is that even between two cores in the same cluster, requests have to travel out to the L3 ring and come back along the very same pathway. That’s quite odd, and we don’t have a good explanation as to why Intel would do this.

Cache Latencies and DDR5 vs DDR4


Next up, let’s take a look at the new cache hierarchy of Alder Lake, both from the view of the Golden Cove cores as well as the Gracemont cores, in DDR5 as well as DDR4.

Alder Lake changes up the big cores’ caches quite considerably. First off, the L1D remains identical – so not much to report there.

On the L2 side of things, compared to Rocket Lake’s Cypress Cove cores, Alder Lake’s Golden Cove cores considerably increase the L2 cache, from 512KB to 1.25MB. This does come with a 15% latency degradation for this cache; however, given the 2.5x increase in size and thus higher hit rates, it’s a good compromise to make.
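The trade-off can be sanity-checked with a back-of-the-envelope average memory access time (AMAT) calculation. Only the +15% latency change and the 512KB-to-1.25MB size change come from the text; the hit rates and absolute latencies below are made-up figures purely for illustration:

```python
def amat(hit_ns, hit_rate, miss_penalty_ns):
    """Average memory access time: hit latency plus the
    miss-rate-weighted penalty of going to the next level."""
    return hit_ns + (1 - hit_rate) * miss_penalty_ns

# Hypothetical numbers: a 512KB L2 at 3.0ns with an 80% hit rate,
# vs a 1.25MB L2 that is 15% slower but hits 90% of the time.
old_l2 = amat(3.0, 0.80, 14.0)
new_l2 = amat(3.0 * 1.15, 0.90, 14.0)
print(f"old L2 AMAT: {old_l2:.2f} ns, new L2 AMAT: {new_l2:.2f} ns")
```

With these assumed rates the larger-but-slower L2 still wins on average access time, which is the compromise the paragraph describes.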

The Gracemont E-cores have a large 2MB L2 which is shared amongst the 4 cores in a cluster, so things do look quite different in terms of hierarchy. Here, latencies after 192KB increase for some patterns as the test exceeds the 48-page L1 TLB of the cores. The same thing happens at 8MB, as the 2048-page L2 TLB is exceeded.

The L3 cache of the chip increases vastly, from 16MB in RKL to 30MB in ADL. This increase also comes with a latency increase – at equal test depth, up from 11.59ns to 14.24ns. Intel’s ring and cache slice approach remains considerably slower than AMD’s CCX, which at a similar L3 size of 32MB comes in at 10.34ns for equivalent random-access patterns.
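Latency curves like these are produced by chasing pointers through a randomly shuffled cyclic chain, so every load depends on the previous one and hardware prefetchers cannot help. A minimal sketch of that pattern, with Python list indices standing in for pointers (interpreter timings are not comparable to the article's native figures, but the access pattern is the same):

```python
import random
import time

def make_chain(n):
    """Build a random single-cycle permutation: nxt[i] is the next
    index to visit, touching all n slots in a random order."""
    order = list(range(n))
    random.shuffle(order)
    nxt = [0] * n
    for i in range(n):
        nxt[order[i]] = order[(i + 1) % n]
    return nxt

def chase(nxt, steps):
    """Serially chase the chain; each lookup depends on the last."""
    i = 0
    t0 = time.perf_counter()
    for _ in range(steps):
        i = nxt[i]
    return (time.perf_counter() - t0) / steps, i

nxt = make_chain(1 << 14)
per_step, _ = chase(nxt, 1 << 16)
print(f"{per_step * 1e9:.1f} ns per dependent access")
```

Varying the chain's footprint from a few KB up through hundreds of MB is what traces out the L1/L2/L3/DRAM steps in the latency graphs.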

On the DRAM side of things, we can start off with the RKL DDR4 to ADL DDR4 results. The memory latency at 160MB goes up from 85ns to 90ns – generally expected given the larger memory subsystem of the new chip.

Shifting over from DDR4 to the DDR5 results on Alder Lake at JEDEC speeds – comparing DDR4-3200 CL20 to DDR5-4800 CL40, the officially supported speeds of the chip – we see memory latency only go up to 92.8ns, which is actually below our expectations. In other, prefetcher-friendly patterns, latency goes up by a larger 5ns, but that’s still within reasonable figures, and it means the DDR5 latency regressions we feared are overblown: the chip is able to take advantage of the new memory type without any larger issues.
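The raw CAS latencies of the two JEDEC configurations help explain why the full-path regression is so small. CAS latency in nanoseconds is the CL cycle count divided by the memory clock, which for DDR is half the transfer rate:

```python
def cas_ns(transfer_rate_mt, cl):
    """First-word CAS latency in ns: CL cycles at the memory clock,
    which is half the MT/s transfer rate for double data rate DRAM."""
    clock_mhz = transfer_rate_mt / 2
    return cl / clock_mhz * 1000

print(cas_ns(3200, 20))  # DDR4-3200 CL20 -> 12.5 ns
print(cas_ns(4800, 40))  # DDR5-4800 CL40 -> ~16.7 ns
```

The raw CAS delta is only about 4ns, so a full-path regression of a few nanoseconds on top of the DDR4 results is roughly what the timings alone would predict.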


We only ever whip out our memory level parallelism test when there’s a brand-new microarchitecture which changes things quite considerably in regards to how it handles MLP. Alder Lake and its Golden Cove and Gracemont cores are such designs.

Memory-level parallelism is the ability of a CPU to have multiple memory accesses pending at once – instead of doing things serially, out-of-order CPUs are able to fetch data from multiple memory locations at the same time. How many outstanding accesses this translates to depends on structures within the core, such as the MSHRs (miss status holding registers), but also on the actual parallelism of the various caches as well as the fabric itself. Our test here compares the relative speedup of doing parallel random pointer-chain chasing – a speedup of 2x means that the core is able to access two chains simultaneously with no degradation of per-element access times. At some point, we hit the bottlenecks of the various memory elements of the core and memory subsystem. A higher MLP speedup allows for faster execution in workloads which have data-level parallelism, and also improves the core’s ability to hide latency.
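The measurement described above can be sketched as follows: chase one chain, then k independent chains in lockstep, and report the relative per-element speedup. In an interpreter the per-access cost is dominated by bytecode dispatch rather than cache misses, so this only illustrates the bookkeeping of the test, not real hardware MLP; the actual test relies on dependent native loads:

```python
import random
import time

def make_chain(n):
    """Random single-cycle permutation for dependent pointer chasing."""
    order = list(range(n))
    random.shuffle(order)
    nxt = [0] * n
    for i in range(n):
        nxt[order[i]] = order[(i + 1) % n]
    return nxt

def mlp_speedup(n=1 << 16, steps=1 << 15, k=4):
    """Speedup of chasing k independent chains vs one chain.
    speedup = k * t(1 chain) / t(k chains); k means perfect overlap."""
    one = make_chain(n)
    many = [make_chain(n) for _ in range(k)]

    p = 0
    t0 = time.perf_counter()
    for _ in range(steps):
        p = one[p]           # one dependent chain
    t1 = time.perf_counter() - t0

    ptrs = [0] * k
    t0 = time.perf_counter()
    for _ in range(steps):
        # k independent chains advanced in lockstep; their loads
        # have no mutual dependencies and could overlap in hardware
        ptrs = [c[q] for c, q in zip(many, ptrs)]
    tk = time.perf_counter() - t0

    return k * t1 / tk

print(f"speedup with 4 chains: {mlp_speedup():.2f}x")
```

On real hardware with native code, sweeping k from 1 up to ~30 chains at various footprints is what produces the MLP curves discussed here.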

Intel’s Golden Cove core shows a massive uplift here in terms of its MLP capabilities. Because the chip’s L2 cache is so much larger, it likely also has a lot more physical banks, allowing more parallel accesses.

On the L3 cache, Intel also notably mentioned that the new design is able to handle more outstanding transfers, and we immediately see this in the Golden Cove results. Our test here only tracked up to 30 parallel accesses and we didn’t have time to run a more extended test, but it does seem the core would be able to hit higher figures – at least until it hits TLB limits, where things slow down. The MLP capabilities here are similar to, if not greater than, what AMD showcases in its Zen CPUs, something we had noted as being a strength of that microarchitecture.

MLP at deeper DRAM regions is essentially double that of Rocket Lake – at least on the DDR5 variant of Alder Lake. The DDR4 results reduce the MLP advantage, likely because the chip then only has 2 memory channels to work with rather than the 4 channels of the DDR5 configuration, which allow more parallel sparse accesses to the DRAM banks. Interestingly, Intel still doesn’t do as well as AMD even with DDR5 – I’m not sure where exactly the differences stem from, but it must be further down on the fabric and memory controller side of things.

Results from the Gracemont E-cores also look good, albeit with lower apparent L3 parallelism – maybe that’s a limit on the outstanding requests from the L2 cluster of the GRM cores, or maybe some interface limitation.

I think the MLP improvements of Alder Lake here are massive, and represent a major jump in the memory performance of the design – something which undoubtedly contributed to the larger IPC gains of the new microarchitecture.

474 Comments

  • michael2k - Thursday, November 4, 2021 - link

    One is a bellwether for the other.

    Mobile parts will have cores and clocks slashed to hit mobile power levels; 7W-45W with 2p2e - 6p8e

    However, given that a single P core in the desktop variant can burn 78W in POV Ray, and they want 6 of them in a mobile part under 45W, that means a lot of restrictions apply.

    Even 8 E cores, per this review, clock in at 48W!

    That suggests a 6p8e part can't be anywhere near the desktop part's 5.2GHz/3.9GHz Turbo clocks. If there were a linear power-clock relationship (no change in voltage), then 8 E cores at 3GHz would be the norm. And if 6 P cores on POV-Ray burn 197W, then hitting 45W would mean throttling all 6 cores to 1.2GHz.

    https://hothardware.com/news/intel-alder-lake-p-mo...
  • siuol11 - Thursday, November 4, 2021 - link

    Except that we know that the power-clock ratio is not linear and never has been. You can drop a few hundred MHz off of any Intel chip for the past 5 generations and get a much better performance per watt ratio. This is why mobile chips don't lose a lot of MHz compared to desktop chips.
  • michael2k - Thursday, November 4, 2021 - link

    We already know their existing Ice Lake 10nm 4C mobile parts are capped at 1.2GHz to hit 10W:
    https://www.anandtech.com/show/15657/intels-new-si...

    A 6p8e part might not clock that low, but I'm certain that they will have to for the theoretical 7W parts.

    Here's a better 10nm data point showing off their 15W-28W designs:
    https://www.anandtech.com/show/14664/testing-intel...

    4C 2.3GHz 28W TDP

    Suggests that a 4pNe part might be similar while the 6p8e part would probably be a 2.3GHz part that could turbo up to a single core to 4GHz or all cores to 3.6GHz
  • TheinsanegamerN - Thursday, November 4, 2021 - link

    Yes, once it gets in the way of performance, and Intel's horrible efficiency means you need high-end water cooling to keep it running, whereas AMD does not. Intel's inefficiency is going to be an issue for those who like air cooling, which is a lot of the market.
  • Wrs - Thursday, November 4, 2021 - link

    Trouble is I'm not seeing "horrible efficiency" in these benchmarks. The 12900k is merely pushed far up the curve in some of these benches - if the Zen3 parts could be pushed that far up, efficiency would likewise drop quite a bit faster than performance goes up. Some people already do that. PBO on the 5900x does up to about 220W (varies on the cooler).
  • jerrylzy - Friday, November 5, 2021 - link

    PBO is garbage. You can restrict EDC to 140A, let loose other restrictions and achieve a better performance than setting EDC to 220A.
  • Spunjji - Friday, November 5, 2021 - link

    "if the Zen3 parts could be pushed that far up"
    But you wouldn't, because you'd get barely any more performance for increased power draw. This is a decision Intel made for the default shipping configuration and it needs to be acknowledged as such.
  • Wrs - Saturday, November 6, 2021 - link

    As a typical purchaser of K chips the default shipping configuration holds rather little weight. A single BIOS switch (PBO on AMD, MTP on Intel), or one slight change to Windows power settings, is pretty much all the efficiency difference between 5950x and 12900k. It pains me every time I see a reviewer or reader fail to realize that. The chips trade blows on the various benches because they're so similar in efficiency, yet each by their design has strong advantages in certain commonplace scenarios.
  • Spunjji - Friday, November 5, 2021 - link

    If the competition are able to offer similar performance and you don't have to shell out the cash and space for a 360mm AIO to get it, that's a relevant advantage. If those things don't bother you then it's fine, though - but we're in a situation where AMD's best is much more power efficient than Intel's at full load, albeit Intel appears to reverse that at lower loads.
  • geoxile - Thursday, November 4, 2021 - link

    Clock/power scales geometrically. The 5900HS retains ~85% of the 5800X's performance while using 35-40W sustained power vs 110-120W for the 5800X. That's almost 3x more efficient. Intel is clocking desktop ADL to the moon, but that doesn't mean ADL is going to scale down poorly. If anything I expect it to scale down very well, since the E-cores are very performant while using a fraction of the power and, according to Intel, can operate at lower voltages than the P-cores, so they can scale down even lower than big cores like the ADL P-cores and Zen 3. ADL mobile should be way more interesting than ADL desktop.
