Fundamental Windows 10 Issues: Priority and Focus

In the usual model of running software on a computer, all cores are equal: any thread can go anywhere and expect the same performance. As we’ve already discussed, the new Alder Lake design of performance cores and efficiency cores means that not everything is equal, and the system has to know where to put each workload for maximum effect.

To this end, Intel created Thread Director, which acts as the ultimate information depot for what is happening on the CPU. It knows which threads are where, what each of the cores can do, how compute-heavy or memory-heavy each thread is, and where the thermal hot spots and voltages sit. With that information, it sends data to the operating system about how the threads are operating, with suggestions of actions to perform, or of which threads can be promoted or demoted when something new comes in. The operating system scheduler is then the ringmaster: it combines the Thread Director data with what it knows about the user, such as which software is in the foreground and which threads are tagged as low priority, and it is the operating system that actually orchestrates the whole process.

Intel has said that Windows 11 does all of this. The only thing Windows 10 doesn’t have is insight into the efficiency of the cores on the CPU. It assumes the efficiency is equal, but the performance differs – so instead of ‘performance vs efficiency’ cores, Windows 10 sees it more as ‘high performance vs low performance’. Intel says the net result of this will be seen only in run-to-run variation: there’s more of a chance of a thread spending some time on the low performance cores before being moved to high performance, and so anyone benchmarking multiple runs will see more variation on Windows 10 than Windows 11. But ultimately, the peak performance should be identical.

However, there are a couple of flaws.

At Intel’s Innovation event last week, we learned that the operating system will de-emphasise any workload that is not in user focus. For an office workload, or a mobile workload, this makes sense: if you’re in Excel, for example, you want Excel on the performance cores, while those 60 Chrome tabs you have open are all considered background tasks for the efficiency cores. The same goes for email, Netflix, or video games: what you are using there and then matters most, and everything else doesn’t really need the CPU.

However, this breaks down for more professional workflows. Intel gave the example of a content creator who starts a video export and then edits some images while it processes. This puts the video export on the efficiency cores, while the image editor gets the performance cores. In my experience, the limiting factor in that scenario is the video export, not the image editor: what should take one unit of time on the P-cores now suddenly takes 2-3x as long on the E-cores while I’m doing something else. The same applies to anyone who multi-tasks during a heavy workload, such as programmers waiting on the latest compile. Under this philosophy, the user would have to keep the important window in focus at all times. Beyond this, any software that spawns heavy compute threads in the background, with no window to focus, would also be placed on the E-cores.

Personally, I think this is a crazy way to do things, especially on a desktop. Intel tells me there are three ways to stop this behaviour:

  1. Running dual monitors stops it
  2. Changing Windows Power Plan from Balanced to High Performance stops it
  3. There’s a BIOS option that, when enabled, lets the Scroll Lock key disable/park the E-cores, so nothing is scheduled on them while Scroll Lock is active.

(For those interested: Alder Lake confuses some DRM packages such as Denuvo, and #3 can also be used in that instance to play older games.)
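The power plan workaround can be scripted rather than clicked through. A sketch using Windows’ built-in `powercfg` tool, assuming the stock High Performance scheme is still installed (`SCHEME_MIN` is its built-in alias):

```shell
:: Show the installed power schemes; the active one is marked with *
powercfg /list

:: Switch from Balanced to High Performance
:: (SCHEME_MIN is the built-in alias for the High Performance plan)
powercfg /setactive SCHEME_MIN
```

This needs to run in an elevated prompt on some systems, and OEM images occasionally remove the stock schemes, so check the `/list` output first.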

Users that only have one window open at a time, or that aren’t relying on any serious all-core, time-critical workload, won’t really be affected. For anyone else, though, it’s a bit of a problem. And the problems don’t stop there, at least for Windows 10.

Knowing my luck by the time this review goes out it might be fixed, but:

Windows 10 also uses a thread’s in-OS priority as a guide for core scheduling. Any user that has played around with Task Manager will know there is an option to give a program a priority: Realtime, High, Above Normal, Normal, Below Normal, or Idle. The default is Normal. Behind the scenes this is actually a number from 0 to 31, where Normal is 8.
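The mapping from Task Manager’s named classes onto that 0-31 scale is documented Windows behaviour; a minimal reference table in Python, with an illustrative helper (the `is_background_candidate` heuristic is my own shorthand for the article’s observation, not a Windows API):

```python
# Documented base priorities for each Windows priority class,
# on the 0-31 scale mentioned above; Normal maps to 8.
PRIORITY_CLASS_BASE = {
    "Idle": 4,
    "Below Normal": 6,
    "Normal": 8,
    "Above Normal": 10,
    "High": 13,
    "Realtime": 24,
}

def is_background_candidate(base_priority: int) -> bool:
    """Illustrative heuristic: on Windows 10 + Alder Lake, anything
    below Normal (8) is treated as a hint to land on the E-cores."""
    return base_priority < PRIORITY_CLASS_BASE["Normal"]
```

So a program that politely marks itself at 7, one notch below Normal, falls on the wrong side of that line.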

Some software will naturally give itself a lower priority, usually a 7 (below normal), as an indication to the operating system of either ‘I’m not important’ or ‘I’m a heavy workload and I want the user to still have a responsive system’. The second reason is an issue on Windows 10 with Alder Lake, as the scheduler will put that workload on the E-cores. So even for a heavy workload, moving to the E-cores slows it down, compared to simply running across all cores at a lower priority. This happens regardless of whether the program is in focus or not.

Of the normal benchmarks we run, this issue flared up mainly with rendering tasks like Cinebench, Corona, and POV-Ray, but it also happened with y-cruncher and KeyShot (a visualization tool). In speaking to others, it appears that sometimes Chrome has a similar issue. The only way to fix these programs was to go into Task Manager and either (a) change the process priority to Normal or higher, or (b) change the processor affinity to only the P-cores. Software such as Process Lasso can be used to make sure that every time these programs load, the priority is bumped up to Normal.
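Pinning a process to the P-cores means handing it an affinity bitmask covering only those logical processors. A sketch of building that mask, assuming Alder Lake’s stock enumeration order of P-core hyperthread pairs first, then E-cores (worth double-checking on your own system via Task Manager or a topology tool):

```python
def pcore_affinity_mask(p_cores: int, smt: bool = True) -> int:
    """Build an affinity bitmask covering only the P-core logical CPUs.

    Assumes the OS enumerates the P-cores (and their hyperthread
    siblings) before the E-cores, which is how Alder Lake presents
    itself on stock firmware.
    """
    logical = p_cores * (2 if smt else 1)
    return (1 << logical) - 1

# i9-12900K: 8 P-cores with HT -> logical CPUs 0-15 -> mask 0xFFFF.
# On Windows that mask goes to SetProcessAffinityMask (tools like
# psutil take the equivalent list of CPU indices instead); Process
# Lasso effectively re-applies it on every launch.
```

Raising the priority class back to Normal (workaround (a) above) is usually the lighter-touch fix, since affinity masks have to be updated if the core layout changes.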

Comments

  • GeoffreyA - Saturday, November 6, 2021 - link

    Looking at PC World's review just now, I saw that the 12900K uses less power than the 5950X when the load is lighter but more as it scales to maximum.
  • Zoolook - Saturday, November 6, 2021 - link

    To be expected; the fabric on the 5950X is a big consumer and it's lit up all the time even when only a few cores have work, which makes its performance/power ratio worse under low load.
  • Oxford Guy - Saturday, November 6, 2021 - link

    How exciting that one can pay a lot for a powerful CPU in order to celebrate how it performs doing the tasks a far cheaper CPU would be more suitable for.

    This is really droll, this new marketing angle for Alder Lake.
  • GeoffreyA - Sunday, November 7, 2021 - link

    Conspicuous consumption?
  • Wrs - Saturday, November 6, 2021 - link

    They don't have to redesign Golden Cove. On lightly threaded stuff the 6-wide core is clearly ahead. That's a big plus for many consumers over Zen 3. The smaller competing core is expectedly more efficient and easier to pack for multicore but doesn't have the oomph. That Intel can pack both bigger snappy cores and smaller efficient cores is what should keep Su wide awake.

    Notice the ease in manufacturing, too. ADL is a simple monolithic slab. Ryzen is using two CCDs and one IOD on interposer. That's one reason Zen3 was in short supply a good 6-8 months after release. It wasn't because TSMC had limited capacity for 88mm2 chips on N7. Intel can spam the market with ADL, the main limit being factory yields of the 208 mm2 chip on Intel 7.
  • mode_13h - Saturday, November 6, 2021 - link

    > On lightly threaded stuff the 6-wide core is clearly ahead.

    Why do people keep calling it 6-wide? It's not. The decoder is 3 + 3. It can't decode 6 instructions per cycle from the same branch target.

    From the article covering the Architecture Day presentation:

    "the allocation stage feeding into the reservation stations can only process five instructions per cycle. On the return path, each core can retire eight instructions per cycle."

    > That's one reason Zen3 was in short supply a good 6-8 months after release.
    > It wasn't because TSMC had limited capacity for 88mm2 chips on N7.

    Source?

    Shortage of Ryzens was due *in part* to the fact that Epyc and Threadrippers draw from the same chiplet supply as the non-APU desktop chips. And if you tried to buy a Milan Epyc, you'd know those were even harder to find than desktop Ryzen 5000's.

    AMD seems to be moving towards a monolithic approach, in Zen 4. Reportedly, all of their desktop CPUs will then be APUs.
  • mode_13h - Saturday, November 6, 2021 - link

    BTW, the arch day quote was meant to show that it's not 6-wide anywhere else, either.
  • GeoffreyA - Sunday, November 7, 2021 - link

    6-wide might not be the idiomatic term, but Golden Cove supposedly has 6 decoders, up from 5 on Sunny Cove.
  • Wrs - Sunday, November 7, 2021 - link

    Oh we can definitely agree to disagree on how wide to call Golden Cove, but it's objectively bigger than Zen 3 and performs like a bigger core on just about every lightly threaded benchmark.

    One of many sources suggesting the cause of Ryzen shortage: https://www.tomshardware.com/news/amd-chip-shortag...

    The theory that TSMC was simply running that short on Zen3 CCDs never made much sense to me. Covid didn't stop any of TSMC's fabs, almost all of which run fully automated. For over a year they'd been churning out Zen2's on N7 for desktop/laptop and then server, so yields on 85 mm2 of the newer Zen3 on the same N7 should have been fantastic, and they weren't going to server, not till much more recently.

    But Covid impacts on the other fabs that make IOD/interposer, and the technical packaging steps, and transporting the various parts in time? Far, far more likely.
  • mode_13h - Monday, November 8, 2021 - link

    > Oh we can definitely agree to disagree on how wide to call Golden Cove

    Sorry, I thought you were talking about Gracemont. The arch day article indeed says it's 6-wide and not much else about the decode stage.

    > Covid didn't stop any of TSMC's fabs, almost all of which run fully automated.

    It triggered a demand spike, as all the kids and many office workers needed computers for school/work from home. Plus, people needing to do more recreation at home seems to have triggered an increased demand for gaming PCs.

    It's well known that TSMC is way backlogged. So, it's not as if AMD could simply order up more wafers to address the demand spikes.

    > they weren't going to server, not till much more recently.

    Not true. We know Intel and AMD ship CPUs to special customers, even before their public release. By the time Ice Lake SP launched, Intel reported having already shipped a couple hundred thousand of them. Also, AMD needs to build up inventory before they can do a public release. So, the chiplet supply will be getting tapped for server CPUs long before the public launch date.
