Instruction Changes

Both of the processor cores inside Alder Lake are brand new – they build on the previous generation Core and Atom designs in multiple ways. As always, Intel gives us a high-level overview of the microarchitecture changes, which we covered in our Architecture Day article.

At the highest level, the P-core supports a 6-wide decode (up from 4-wide), and has split the execution ports to allow more operations to execute at once, enabling higher IPC and ILP from workloads that can take advantage. Usually a wider decode consumes a lot more power, but Intel says that its micro-op cache (now 4K entries) and front-end are improved enough that the decode engine spends 80% of its time power gated.

The E-core similarly has a 6-wide decode, although split as 2x3-wide. It has 17 execution ports, fed by double the load/store support of the previous generation Atom core. Beyond this, Gracemont is the first Atom core to support AVX2 instructions.

As part of our analysis into new microarchitectures, we also do an instruction sweep to see what other benefits have been added. The following is a raw list of changes, which we are still in the process of going through – please forgive the unpolished data. Big thanks to our industry friends who help with this analysis.

Any of the following listed as A|B means A in latency (in clocks) and B in reciprocal throughput (clocks per instruction).
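
For readers unfamiliar with the notation, here is a minimal sketch (our own illustration, not the harness behind these numbers) of how the two quantities are usually measured: latency with a chain of dependent instructions, reciprocal throughput with independent ones. The use of IMUL and the iteration count are arbitrary choices, and RDTSC counts reference clocks, so the printed values need scaling by the core-to-reference clock ratio.

    /* Latency vs. reciprocal-throughput sketch (assumes GCC/Clang on x86-64). */
    #include <stdint.h>
    #include <stdio.h>
    #include <x86intrin.h>              /* __rdtsc() */

    #define N 100000000ULL

    int main(void)
    {
        uint64_t t0, t1;

        /* Latency: each IMUL consumes the previous IMUL's result. */
        t0 = __rdtsc();
        __asm__ __volatile__(
            "mov  %[n], %%rcx\n\t"
            "mov  $1, %%rax\n\t"
            "1: imul %%rax, %%rax\n\t"   /* dependent chain */
            "   dec  %%rcx\n\t"
            "   jnz  1b\n\t"
            :: [n] "r" (N) : "rax", "rcx", "cc");
        t1 = __rdtsc();
        printf("IMUL latency      ~%.2f ref clocks\n", (double)(t1 - t0) / N);

        /* Reciprocal throughput: four independent IMULs per iteration. */
        t0 = __rdtsc();
        __asm__ __volatile__(
            "mov  %[n], %%rcx\n\t"
            "1: imul %%rax, %%rax\n\t"
            "   imul %%rbx, %%rbx\n\t"
            "   imul %%rdx, %%rdx\n\t"
            "   imul %%rsi, %%rsi\n\t"
            "   dec  %%rcx\n\t"
            "   jnz  1b\n\t"
            :: [n] "r" (N / 4) : "rax", "rbx", "rcx", "rdx", "rsi", "cc");
        t1 = __rdtsc();
        printf("IMUL 1/throughput ~%.2f ref clocks\n", (double)(t1 - t0) / N);

        return 0;
    }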

 

P-core: Golden Cove vs Cypress Cove

Microarchitecture Changes:

  • 6-wide decoder with a 32-byte window: code size becomes much less important, e.g. 3 MOV imm64 per clock (the last similar 50% jump was Pentium -> Pentium Pro in 1995; Conroe in 2006 was only a 3->4 jump)
  • Triple load: (almost) universal
    • every GPR, SSE, VEX, EVEX load benefits (only MMX loads miss out)
    • BROADCAST*, GATHER*, PREFETCH* also benefit
  • Decoupled double FADD units
    • every single and double SIMD VADD/VSUB (and AVX VADDSUB* and VHADD*/VHSUB*) has latency gains
    • Another ADD/SUB means 4->2 clks
    • Another MUL means 4->3 clks
    • AVX512 support: 512b ADD/SUB rec. throughput 0.5, as in server!
    • exception: half precision ADD/SUB handled by FMAs
    • exception: x87 FADD remained 3 clks
  • Some forms of GPR (general purpose register) immediate additions are treated as NOPs (removed at the "allocate/rename/move elimination/zeroing idioms" step)
    • LEA r64, [r64+imm8]
    • ADD r64, imm8
    • ADD r64, imm32
    • INC r64
    • it is unclear whether this applies only to 64-bit GPR additions
  • eliminated instructions (see the compiler-output sketch after this list):
    • MOV r32/r64
    • (V)MOV(A/U)(PS/PD/DQ) xmm, ymm
    • 0-5 0x66 NOP
    • LNOP3-7
    • CLC/STC
  • zeroing idioms:
    • (V)XORPS/PD, (V)PXOR xmm, ymm
    • (V)PSUB(U)B/W/D/Q xmm
    • (V)PCMPGTB/W/D/Q xmm
    • (V)PXOR xmm
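
To make the eliminations and zeroing idioms concrete, the two hypothetical functions below (our illustration, not Intel's or AnandTech's code) are the kind of thing compilers emit constantly; with a recent gcc -O2 -mavx2 they typically compile to exactly the patterns listed above – a bare register-to-register MOV and a vpxor zeroing idiom – which Golden Cove can now resolve at rename without occupying an execution port.

    /* Illustrative only: typical compiler output for these trivial functions. */
    #include <immintrin.h>

    long pass_through(long x)      /* -> mov rax, rdi ; ret  (eliminated MOV)         */
    {
        return x;
    }

    __m256i zero_vec(void)         /* -> vpxor xmm0, xmm0, xmm0 ; ret  (zeroing idiom) */
    {
        return _mm256_setzero_si256();
    }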

Faster GPR instructions (vs Cypress Cove):

  • LOCK latency 20->18 clks
  • LEA with scale throughput 2->3/clk
  • (I)MUL r8 latency 4->3 clks
  • LAHF latency 3->1 clks
  • CMPS* latency 5->4 clks
  • REP CMPSB 1->3.7 bytes/clock (see the repe cmpsb sketch after this list)
  • REP SCASB 0.5->1.85 bytes/clock
  • REP MOVS* 115->122 bytes/clock
  • CMPXCHG16B 20|20 -> 16|14
  • PREFETCH* throughput 1->3/clk
  • ANDN/BLSI/BLSMSK/BLSR throughput 2->3/clock
  • SHA1RNDS4 latency 6->4
  • SHA1MSG2 throughput 0.2->0.25/clock
  • SHA256MSG2 11|5->6|2
  • ADC/SBB (r/e)ax 2|2 -> 1|1
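
For the REP string lines, this is a sketch of the primitive being measured – a repe cmpsb block compare. It is our own illustration rather than the actual test harness, and the buffers_equal wrapper is purely hypothetical.

    /* Hedged sketch of what "REP CMPSB" measures (GCC/Clang inline asm, x86-64). */
    #include <stddef.h>

    static int buffers_equal(const unsigned char *a, const unsigned char *b,
                             size_t n)             /* n must be non-zero */
    {
        int zf;
        __asm__ __volatile__(
            "repe cmpsb"                            /* compare bytes while equal */
            : "+S"(a), "+D"(b), "+c"(n), "=@ccz"(zf)
            :
            : "memory");
        return zf;                                  /* ZF=1 -> all n bytes matched */
    }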

Faster SIMD instructions (vs Cypress Cove):

  • *FADD xmm/ymm latency 4->3 clks (after MUL)
  • *FADD xmm/ymm latency 4->2 clks (after ADD)
  • * means (V)(ADD/SUB/ADDSUB/HADD/HSUB)(PS/PD) affected
  • VADD/SUB PS/PD zmm 4|1 -> 3.3|0.5
  • CLMUL xmm 6|1 -> 3|1
  • CLMUL ymm, zmm 8|2 -> 3|1
  • VPGATHERDQ xmm, [xm32], xmm 22|1.67 -> 20|1.5 clks
  • VPGATHERDD ymm, [ym32], ymm throughput 0.2 -> 0.33/clock
  • VPGATHERQQ ymm, [ym64], ymm throughput 0.33 -> 0.50/clock (gather intrinsic sketch after this list)
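
The VPGATHER entries correspond to the AVX2 gather intrinsics; below is a minimal, hypothetical example (the gather8 name is ours) of the vpgatherdd form whose throughput improved.

    /* Hedged illustration of the instruction behind the VPGATHERDD numbers. */
    #include <immintrin.h>

    __m256i gather8(const int *base, __m256i idx)
    {
        /* Loads base[idx[0]] ... base[idx[7]]; scale = 4 bytes per element. */
        return _mm256_i32gather_epi32(base, idx, 4);   /* -> vpgatherdd */
    }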

Regressions, Slower instructions (vs Cypress Cove):

  • Store-to-Load-Forward 128b 5->7, 256b 6->7 clocks (see the sketch after this list)
  • PAUSE latency 140->160 clocks
  • LEA with scale latency 2->3 clocks
  • (I)DIV r8 latency 15->17 clocks
  • FXCH throughput 2->1/clock
  • LFENCE latency 6->12 clocks
  • VBLENDV(B/PS/PD) xmm, ymm 2->3 clocks
  • (V)AESKEYGENASSIST latency 12->13 clocks
  • VCVTPS2PH/PH2PS latency 5->6 clocks
  • BZHI throughput 2->1/clock
  • VPGATHERDD ymm, [ym32], ymm latency 22->24 clocks
  • VPGATHERQQ ymm, [ym64], ymm latency 21->23 clocks
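
To make the first regression concrete: store-to-load forwarding latency is what a load pays when it reads back data that was just stored to the same address. A minimal, hypothetical 128-bit example follows; in a real measurement the sequence is kept in assembly, since a compiler is free to optimise the reload away.

    /* Hedged sketch of the 128b store-to-load-forwarding case (now 7 clocks). */
    #include <emmintrin.h>

    __m128i store_then_reload(__m128i v, __m128i *slot)
    {
        _mm_store_si128(slot, v);      /* 128b store                        */
        return _mm_load_si128(slot);   /* dependent load from same address  */
    }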

 

E-core: Gracemont vs Tremont

Microarchitecture Changes:

  • Dual 128b store port (works with every GPR, PUSH, MMX, SSE, AVX, non-temporal m32, m64, m128)
  • Zen2-like memory renaming with GPRs
  • New zeroing idioms
    • SUB r32, r32
    • SUB r64, r64
    • CDQ, CQO
    • (V)PSUBB/W/D/Q/SB/SW/USB/USW
    • (V)PCMPGTB/W/D/Q
  • New ones idiom: (V)PCMPEQB/W/D/Q (see the sketch after this list)
  • MOV elimination: MOV; MOVZX; MOVSX r32, r64
  • NOP elimination: NOP, 1-4 0x66 NOP throughput 3->5/clock, LNOP 3, LNOP 4, LNOP 5
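
As a small, hypothetical illustration of the new ones idiom: comparing a register with itself sets every bit, and it is the standard way compilers materialise an all-ones mask, so recognising it at rename – with no real dependency on the input – is a cheap win.

    /* Hedged sketch: PCMPEQD of a register with itself yields all-ones. */
    #include <immintrin.h>

    __m128i all_ones(void)
    {
        __m128i x = _mm_undefined_si128();   /* value irrelevant */
        return _mm_cmpeq_epi32(x, x);        /* -> pcmpeqd xmm0, xmm0 */
    }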

Faster GPR instructions (vs Tremont):

  • PAUSE latency 158->62 clocks
  • MOVSX; SHL/R r, 1; SHL/R r, imm8 throughput 1->0.25
  • ADD; SUB; CMP; AND; OR; XOR; NEG; NOT; TEST; MOVZX; BSWAP; LEA [r+r]; LEA [r+disp8/32] throughput 3->4 per clock
  • CMOV* throughput 1->2 per clock
  • RCR r, 1 10|10 -> 2|2
  • RCR/RCL r, imm/cl 13|13->11|11
  • SHLD/SHRD r1_32, r1_32, imm8 2|2 -> 2|0.5
  • MOVBE reciprocal throughput 1->0.5 clocks
  • (I)MUL r32 3|1 -> 3|0.5
  • (I)MUL r64 5|2 -> 5|0.5
  • REP STOSB/STOSW/STOSD/STOSQ 15/8/12/11 bytes/clock -> 15/15/15/15 bytes/clock (see the rep stosb sketch after this list)
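
The REP STOS figures refer to a rep stosb style fill – the primitive compilers and libc memset implementations may use on recent CPUs. Below is a minimal, hypothetical sketch of that primitive (the rep_stosb_fill name is ours), not the actual harness.

    /* Hedged sketch of the fill primitive behind the REP STOSB numbers. */
    #include <stddef.h>

    static void rep_stosb_fill(void *dst, unsigned char val, size_t len)
    {
        __asm__ __volatile__(
            "rep stosb"                    /* store AL to [RDI], RCX times */
            : "+D"(dst), "+c"(len)
            : "a"(val)
            : "memory");
    }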

Faster SIMD instructions (vs Tremont):

  • A lot of xmm SIMD throughput is 4/clock instead of the theoretical maximum(?) of 3/clock; not sure how this is possible
  • MASKMOVQ throughput 1 per 104 clocks -> 1 per clock
  • PADDB/W/D; PSUBB/W/D; PAVGB/PAVGW 1|0.5 -> 1|0.33
  • PADDQ/PSUBQ/PCMPEQQ mm, xmm 2|1 -> 1|0.33
  • PShift (x)mm, (x)mm 2|1 -> 1|0.33
  • PMUL*, PSADBW mm, xmm 4|1 -> 3|1
  • ADD/SUB/CMP/MAX/MIN PS/PD 3|1 -> 3|0.5
  • MULPS/PD 4|1 -> 4|0.5
  • CVT*, ROUND xmm, xmm 4|1 -> 3|1
  • BLENDV* xmm, xmm 3|2 -> 3|0.88
  • AES, GF2P8AFFINEQB, GF2P8AFFINEINVQB xmm 4|1 -> 3|1
  • SHA256RNDS2 5|2 -> 4|1
  • PHADD/PHSUB* 6|6 -> 5|5

Regressions, Slower (vs Tremont):

  • m8, m16 load latency 4->5 clocks
  • ADD/MOVBE load latency 4->5 clocks
  • LOCK ADD 16|16->18|18
  • XCHG mem 17|17->18|18
  • (I)DIV +1 clock
  • DPPS 10|1.5 -> 18|6
  • DPPD 6|1 -> 10|3.5
  • FSIN/FCOS +12% slower

 

474 Comments

  • mode_13h - Saturday, November 6, 2021 - link

    > the only way I think they can remedy this is by designing a new core from scratch

    I'm not sure I buy this narrative. In the interview with AMD's Mike Clark, he said AMD takes a fresh view of each new generation of Zen and then only reuses what old parts still fit. As Intel is much bigger and better-resourced, I don't see why their approach would fundamentally differ.

    > or scaling Gracemont to target Zen 4 or 5.

    I don't understand this. The E-cores are efficiency-oriented (and also minimize area, I'd expect). If you tried to optimize them for performance, they'd just end up looking & behaving like the P-cores.
  • GeoffreyA - Sunday, November 7, 2021 - link

    I stand by my view that designing a CPU from scratch will bring benefit, while setting them back temporarily. Of course, am no expert, but it's reasonable to guess that, no matter how much they change things, they're still being restricted by choices made in the Pentium Pro era. In the large, sweeping points of the design, it's similar, and that is exerting an effect. Start from scratch, and when you reach Golden Cove IPC, it'll be at lower power I think. Had AMD gone on with K10, I do not doubt it would never have achieved Zen's perf/watt. Sometimes it's best to demolish the edifice and raise it again, not going to the opposite extreme of a radical departure.

    As for the E-cores, if I'm not mistaken, they're at greater perf/watt than Skylake, reaching the same IPC more frugally. If that's the case, why not scale it up a bit more, and by the time it reaches GC/Zen 3 IPC, it may well end up doing so with less power. Remember the Pentium M.

    What I'm trying to say is, you've got a destination: IPC. These three architectures are taking different routes of power and area to get there. GC has taken a road with heavy toll fees. Zen 3, much cheaper. Gracemont appears to be on an even more economical road. The toll, even on this path, will go up but it'll still be lower than GC's. Zen, in general, is proof of that, surpassing Intel's IPC at a lower point of power.
  • GeoffreyA - Sunday, November 7, 2021 - link

    Anyhow, this is just a generic comment by a layman who's got a passion for these things, and doesn't mean to talk as if he knows better than the engineers who built it.
  • Wrs - Sunday, November 7, 2021 - link

    It's not trivial to design a core from scratch without defining an instruction set from scratch, i.e., breaking all backward compatibility. x86 has a tremendous amount of legacy. ARM has quite a bit as well, and growing each year.

    Can they redo Golden Cove or Gracemont for more efficiency at same perf/more perf at same efficiency? Absolutely, nothing is perfect and there's no defined tradeoff between performance and efficiency that constitutes perfect. But simply enlarging Gracemont to near Golden Cove IPC (a la Pentium M to Conroe) is not it. By doing so you gradually sacrifice the efficiency advantage in Gracemont, and might get something worse than Golden Cove if not optimized well.

    The big.LITTLE concept has proven advantages in mobile and definitely has merit with tweaks/support on desktop/server. The misconception you may have is that Golden Cove isn't an inherently inefficient core like Prescott (P4) or Bulldozer. It's just sometimes driven at high turbo/high power, making it look inefficient when that's really more a process capability than a liability.
  • GeoffreyA - Monday, November 8, 2021 - link

    Putting together a new core doesn't necessarily mean a new ISA. It could still be x86.

    Certainly, Golden Cove isn't of Prescott's or Bulldozer's nature and the deplorable efficiency that results from that; but I think it's pretty clear that it's below Zen 3's perf/watt. Now, Gracemont is seemingly of Zen's calibre but at an earlier point of its history. So, if they were to scale this up slowly, while scrupulously maintaining its Atom philosophy, it would reach Zen 3 at similar or less power. (If that statement seems laughable, remember that Skylake > Zen 1, and Gracemont is roughly equal to Skylake.) Zen 3 is right on Golden Cove's tail. So why couldn't Gracemont's descendant reach this class using less power? Its design is sufficiently different from Core to suggest this isn't entirely fantasy.

    And the fashionable big/little does have advantages; but question is, do those outweigh the added complexity? I would venture to say, no.
  • mode_13h - Monday, November 8, 2021 - link

    > they're still being restricted by choices made in the Pentium Pro era.

    No way. There's no possible way they're still beholden to any decisions made that far back. For one thing, their toolchain has probably changed at least a couple times, since then. But there's also no way they're going to carry baggage that's either not pulling its weight or is otherwise a bottleneck for *that* long. Anything that's an impediment is going to get dropped, sooner or later.

    > As for the E-cores, if I'm not mistaken, they're at greater perf/watt than Skylake

    Gracemont is made on a different node than Skylake. If you backported it to the original 14 nm node that was Skylake's design target, they wouldn't be as fast or efficient.

    > why not scale it up a bit more, and by the time it reaches GC/Zen 3 IPC,
    > it may well end up doing so with less power.

    Okay, so even if you make everything bigger and it can even reach Golden Cove's IPC without requiring major parts being redesigned, it's not going to clock as high. Plus, you're going to lose some efficiency, because things like OoO structures scale nonlinearly in perf/W. And once you pipeline it and do the other things needed for it to reach Golden Cove's clock speeds, it's going to lose yet more efficiency, probably converging on Golden Cove's perf/W.

    There are ways you design for power-efficiency that are fundamentally different from designing for outright performance. You don't get a high-performance core by just scaling up an efficiency-optimized core.
  • GeoffreyA - Monday, November 8, 2021 - link

    Well, you've stumped me on most points. Nonetheless, old choices can survive pretty long. I've got two examples. Can't find any more at present. The instruction fetch bandwidth of 16 bytes, finally doubled in Golden Cove, goes all the way back to Pentium Pro. That could've been more related to the limitations of x86 decoding, though. Then, register reads were limited to two or three per clock cycle, going back to Pentium Pro, and only fixed in Sandy Bridge. Those are small ones, but it goes to show.

    I would say, Gracemont is different enough for it to diverge from Golden Cove in terms of perf/watt. One basic difference is that it's using a distributed scheduler design (following in the footsteps of the Athlon, Zen, and I believe the Pentium 4), compared to Pentium Pro-Golden Cove's unified scheduler. Then, it's got 17 execution ports, more than Zen 3's 14 and GC's 12. It's ROB is 256 entries, equal to Zen 3. Instruction boundaries are being marked, etc., etc. It's clock speed is lower? Well, that's all right if its IPC is higher than frequency-obsessed peers. I think descendants of this core could baffle both their elder brothers and the AMD competition.
  • GeoffreyA - Monday, November 8, 2021 - link

    Sorry for all the it's! Curse that SwiftKey!
  • mode_13h - Tuesday, November 9, 2021 - link

    > it's got 17 execution ports

    That's for simplicity, not by necessity. Most CPUs map multiple different sorts of operations per port, but Gracemont is probably designed in some way that made it cheaper for them just to have dedicated ports for each. I believe its issue bandwidth is 5 ops/cycle.

    > It's clock speed is lower? Well, that's all right if its IPC is higher than frequency-obsessed peers.

    It would have to be waaay higher, in order to compensate. It's not clear if that's feasible or the most efficient route to deliver that level of performance.

    > I think descendants of this core could baffle both their elder brothers and the AMD competition.

    In server CPUs? Quite possibly. Performance per Watt and per mm^2 (which directly correlates with perf/$) could be extremely competitive. Just don't expect it to outperform anyone's P-cores.
  • GeoffreyA - Wednesday, November 10, 2021 - link

    I'm out of answers. I suppose we'll have to wait and see how the battle goes. In any case, what is needed is some new paradigm that changes how CPUs operate. Clearly, they're reaching the end of the road. Perhaps the answer will come from new physics. But I wouldn't be surprised if there's some fundamental limit to computation. That's a thought.
