Power: P-Core vs E-Core, Win10 vs Win11

For Alder Lake, Intel brings two new things into the mix when we start talking about power.

First is what we’ve already talked about, the new P-core and E-core, each with different levels of performance per watt and targeted at different sorts of workloads. While the P-cores are expected to mimic previous generations of Intel processors, the E-cores should offer an interesting look into how low power operation might work on these systems and in future mobile systems.

The second element is how Intel is describing power. Rather than simply quoting a ‘TDP’, or Thermal Design Power, Intel has decided (with much rejoicing) to start putting two numbers next to each processor: one for base processor power and one for maximum turbo processor power, which we’ll call Base and Turbo. The idea is that the Base power mimics the TDP value we had before – it’s the power at which the all-core base frequency is guaranteed. The Turbo power indicates the highest power level that should be observed in a normal power virus situation (usually defined as something causing 90-95% of the CPU to continually switch). There is usually a weighted time factor that limits how long a processor can remain in its Turbo state before slowly reeling back, but for the K processors Intel has made that time factor effectively infinite – with the right cooling, these processors should be able to use their Turbo power all day, all week, and all year.
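As a rough mental model of that weighted time factor – not Intel’s actual firmware algorithm, just a sketch of the commonly described behaviour where an exponentially weighted average of package power is compared against the Base limit over a time window (often called Tau) – the snippet below shows how a chip can draw Turbo power until the average catches up, then fall back to Base. The 125 W / 241 W / 56 s values are illustrative assumptions, not measurements from this review:

```python
def turbo_budget(requested_w, pl1=125.0, pl2=241.0, tau=56.0, dt=1.0):
    """Toy model of a turbo power budget.

    The chip may draw up to pl2 (Turbo) while an exponentially weighted
    moving average of package power stays below pl1 (Base); once the
    average catches up, draw is clamped back to pl1. Making tau very
    large models the K-series behaviour described above, where Turbo
    power can be sustained indefinitely.
    """
    alpha = dt / tau
    ewma, drawn = 0.0, []
    for want in requested_w:
        cap = pl2 if ewma < pl1 else pl1   # budget left? allow Turbo, else Base
        p = min(want, cap)
        ewma += alpha * (p - ewma)          # update the weighted average
        drawn.append(p)
    return drawn

# A sustained 241 W request holds Turbo for a while, then settles at Base:
trace = turbo_budget([241.0] * 300)
print(trace[:3], "...", trace[-3:])
```

With these illustrative numbers the model holds 241 W for roughly 40 seconds before settling at 125 W; set tau to something enormous and it simply never settles, which is the behaviour described for the K parts.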

So with that in mind, let’s start simply looking at the individual P-cores and E-cores.

Listed in red is this test with all 8P+8E cores fully loaded (on DDR5), where we get a CPU package power of 259 W. The progression from idle to full load is steady, although there is a big jump from idle to a single loaded core: when one core is loaded, we go from 7 W to 78 W, a jump of 71 W. Because this is package power (the output for core power had some issues), it also includes firing up the ring, the L3 cache, and the DRAM controller, but even if those account for 20% of the difference, we’re still looking at ~55-60 W for a single core. By comparison, in our single-threaded SPEC power testing on Linux, we see a more modest 25-30 W per core, which we put down to POV-Ray’s instruction density.

By contrast, in green, the E-cores only jump from 5 W to 15 W when a single core is active, which is the same number we see in SPEC power testing. Using all the E-cores at 3.9 GHz brings the package power up to 48 W total.
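For readers who want to reproduce this kind of per-core measurement, package power can be read on Linux from the RAPL powercap interface. The sketch below is illustrative only: the path assumes the intel_rapl driver exposes the package domain as intel-rapl:0, reading usually requires root, and the wrap-around of the energy counter is ignored for brevity:

```python
import time

RAPL = "/sys/class/powercap/intel-rapl:0/energy_uj"  # cumulative package energy, microjoules

def read_uj():
    with open(RAPL) as f:
        return int(f.read())

def package_power(seconds=5.0):
    """Average package power over an interval, in watts."""
    e0, t0 = read_uj(), time.time()
    time.sleep(seconds)
    e1, t1 = read_uj(), time.time()
    return (e1 - e0) / 1e6 / (t1 - t0)

if __name__ == "__main__":
    print(f"{package_power():.1f} W")
```

Running it once at idle and once with a single core pinned at 100% (for example `taskset -c 0 stress-ng --cpu 1`) gives the kind of idle-to-one-core delta discussed above, with the caveat that the package number always includes the ring, L3, and memory controller.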

It is worth noting that there are differences between the blue bars (P-cores only) and the red bars (all cores, with E-cores loaded all the time), and that sometimes the blue bar consumes more power than the red bar. Our blue-bar tests were done with the E-cores disabled in the BIOS, which means there may be more leeway in balancing a workload across a smaller number of cores, allowing for higher power. However, as everything ramps up, the advantage seems to swing the other way. It’s a bit odd to see this behavior.

Moving on to individual testing, and here’s a look at a power trace of POV-Ray in Windows 11:

Here we’re seeing a higher spike in power, up to 272 W, with the system at 4.9 GHz all-core. Interestingly enough, we then see power decrease down through the 241 W Turbo Power limit and settle around 225 W, with the reported frequency dropping to between 4.7-4.8 GHz instead. Technically this all-core frequency is meant to take some of the E-cores into account, so this might be a case of the workload distributing itself and finding the best performance/power point for its mix of instructions, cache behavior, and IO requirements. However, if that’s the case, it takes a good 3-5 minutes to get there.
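Our traces here were captured with our own tooling on Windows; purely as an illustration of how the same kind of power-and-frequency trace can be assembled on Linux, here is a minimal sketch built on the RAPL counter used earlier (paths are assumptions for a typical system, and counter wrap-around and multi-socket domains are ignored):

```python
import csv, sys, time

ENERGY = "/sys/class/powercap/intel-rapl:0/energy_uj"              # package energy, microjoules
FREQ   = "/sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq"   # core 0 frequency, kHz

def read_int(path):
    with open(path) as f:
        return int(f.read())

def trace(duration_s=600, dt=1.0, out="power_trace.csv"):
    """Log package power (W) and core 0 frequency (MHz) once per second --
    coarse, but enough resolution to see a slow droop like 272 W -> ~225 W."""
    with open(out, "w", newline="") as f:
        w = csv.writer(f)
        w.writerow(["t_s", "package_w", "cpu0_mhz"])
        t0 = time.time()
        e_prev, t_prev = read_int(ENERGY), t0
        while time.time() - t0 < duration_s:
            time.sleep(dt)
            e, t = read_int(ENERGY), time.time()
            w.writerow([round(t - t0, 1),
                        round((e - e_prev) / 1e6 / (t - t_prev), 1),
                        read_int(FREQ) // 1000])
            e_prev, t_prev = e, t

if __name__ == "__main__":
    trace(duration_s=int(sys.argv[1]) if len(sys.argv) > 1 else 600)
```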

Intrigued by this, I looked at how some of our other tests did between different operating systems. Enter Agisoft:

Between Windows 10 and Windows 11, the traces look near identical. The actual run time was 5 seconds faster on Windows 11 out of 20 minutes, so 0.4% faster, which we would consider run-to-run variation. The peaks and spikes look barely higher in Windows 11, and the frequency trace in Windows 11 looks a little more consistent, but overall they’re practically the same.

For our usual power graphs, we get something like this, and we’ll also add in the AVX-512 numbers from that page:

[Chart: (0-0) Peak Power]

Compared to Intel’s previous 11th Generation processor, the Alder Lake Core i9 uses more power in AVX2, but is actually lower in AVX-512. The difficulty in presenting this graph going forward comes down to those E-cores; they’re more efficient, as you’ll see in the results later. Even in AVX-512, Alder Lake pulls out a performance lead while using 50 W less than 11th Gen.

When we compare it to AMD, however, with AMD’s 142 W PPT limit, Intel often trails at a 20-70 W deficit when we’re looking at full-load efficiency. That being said, Intel will likely argue that in mixed workloads, such as two programs running with one of them on the E-cores, it is the more efficient design.

Comments

  • Spunjji - Friday, November 5, 2021 - link

    "Cry more. LOL"
    Who put 50p in the dickhead?

    Seriously though, the thread's packed full of fanbots determined to exaggerate and posture.
  • Bagheera - Wednesday, November 10, 2021 - link

you must be the loser from wccftech named "Clown Sh*tter" hahahaha
  • opinali - Thursday, November 4, 2021 - link

    What a pathetic attempt at trolling. Not sure if you noticed but Ryzen CPUs actually win lots of the game benchmarks, ties lots more; and many of the ADL wins are only with the very top CPU with DDR5. In several games even the 5800X beats ADL (even against DDR5). Zen3 is now a full year old, no v-cache yet, the next refresh which is coming soon will probably beat ADL across the board (still without DDR5). Granted, Intel still dominates anything that makes heavy use of AVX-512, which is... almost nothing, you can count'em on one hand's fingers.

    Considering the current price of DDR5, even for a brand-new system where you have to buy everything including the RAM, a top-end ADL system is a pretty bad value right now. But thanks to this release the price of Zen3 CPUs is going further down, I can now find a 4900X for $480 on stockx, that's a good discount below MSRP (thanks Intel! since I've been waiting that to upgrade from my 5600X). That's also the same street price I find today for the 12700K; the 12900K is through the roof, it's all out of stock in places like newegg, or $1.5K where I found stock although the KF is much less bad.

    Also thanks to all the Intel fans that will burn cash in the first generation of DDR5 (overpriced and also with poor timings) so when Zen4 ships, 1y from today, DDR5 should be affordable and more mature, idem for PCIE5, so we Ryzen users can upgrade smoothly.
  • opinali - Thursday, November 4, 2021 - link

    (I meant 5900X above, damn typo.)
  • DannyH246 - Thursday, November 4, 2021 - link

    Don't waste your time responding, you can't account for abject stupidity. This is the absolute best CPU Intel could possibly build. Ok, it beats AMD by a couple percent in single threaded, but loses by a higher margin in multithreaded while consuming twice the power. Shortly, AMD will easily regain the performance crown with v-cache, while we wait for Zen 4. Sadly another poor review by www.IntelTech.com. Nobody wants a room heater for a CPU.
  • EnglishMike - Thursday, November 4, 2021 - link

    Last I looked, the vast majority of Anandtech readers don't run long-lasting 100% CPU multithreaded workloads, which is the only scenario where this one CPU falls a long way behind in power consumption.

    Competition is good, and Intel has a competitive CPU on its hands, after a long time (for them) without one, and the reviews reflect that fact.
  • Spunjji - Friday, November 5, 2021 - link

    ^ This.
  • mode_13h - Friday, November 5, 2021 - link

    > the vast majority of Anandtech readers don't run
    > long-lasting 100% CPU multithreaded workloads

    How many of us are software developers? My project currently takes about 90 minutes to build on a Skylake i7, and the build is fully multithreaded. I'm looking forward to an upgrade!
  • Wrs - Thursday, November 4, 2021 - link

    I'll point out that the Anand review uses JEDEC standard RAM timings. For DDR5 that's not terrible today, but for DDR4 it is. I mean, DDR4-3200 CL20?? A sister site (Toms) used commonplace DDR4 latencies (3200 CL14) and found it superior to DDR5 (using JEDEC B latencies) for gaming and most tasks, as well as putting ADL comfortably ahead of Zen3 in games. A further BIOS setting they made sure of was to allow ADL to sustain turbo power. Not sure how much that affected results. To be fair I did not hear them enabling PBO on Zen 3, which would be the comparable feature.

    But for now I wouldn't be assuming that Ryzen CPUs win even the majority of games, and I absolutely wouldn't assume ADL needs DDR5 to reach its game potential. Most of these reviews out are preliminary, given a short window of time between product sample and NDA lifting.
  • Oxford Guy - Friday, November 5, 2021 - link

    CL22 I think I read, not 20.

    Regardless, it’s ridiculously high.
