Fundamental Windows 10 Issues: Priority and Focus

In a normal scenario, software expects every core in a computer to be equal: any thread can go anywhere and expect the same performance. As we’ve already discussed, the new Alder Lake design of performance cores and efficiency cores means that not everything is equal, and the system has to know where to put each workload for maximum effect.

To this end, Intel created Thread Director, which acts as the ultimate information depot for what is happening on the CPU. It knows which threads are where, what each of the cores can do, how compute-heavy or memory-heavy each thread is, and where the thermal hot spots and voltage limits are. With that information, it sends data to the operating system about how the threads are behaving, with suggestions of actions to perform, or which threads can be promoted or demoted when something new comes in. The operating system scheduler is then the ringmaster: it combines the Thread Director information with what it knows about the user – which software is in the foreground, which threads are tagged as low priority – and it is ultimately the operating system that orchestrates the whole process.

Intel has said that Windows 11 does all of this. The only thing Windows 10 lacks is insight into the efficiency of the cores on the CPU. It assumes the efficiency is equal but the performance differs – so instead of ‘performance vs efficiency’ cores, Windows 10 sees it more as ‘high performance vs low performance’. Intel says the net result of this will be seen only in run-to-run variation: there’s more of a chance of a thread spending some time on the low-performance cores before being moved to high performance, and so anyone benchmarking multiple runs will see more variation on Windows 10 than on Windows 11. But ultimately, the peak performance should be identical.

However, there are a couple of flaws.

At Intel’s Innovation event last week, we learned that the operating system will de-emphasise any workload that is not in user focus. For an office workload, or a mobile workload, this makes sense – if you’re in Excel, for example, you want Excel on the performance cores, while those 60 Chrome tabs you have open are all considered background tasks for the efficiency cores. The same goes for email, Netflix, or video games – what you are using there and then matters most, and everything else doesn’t really need the CPU.

However, this breaks down when it comes to more professional workflows. Intel gave an example of a content creator exporting a video and, while that was processing, going to edit some images. This puts the video export on the efficiency cores, while the image editor gets the performance cores. In my experience, the limiting factor in that scenario is the video export, not the image editor – what should take a unit of time on the P-cores now suddenly takes 2-3x as long on the E-cores while I’m doing something else. This extends to anyone who multi-tasks during a heavy workload, such as programmers waiting for the latest compile. Under this philosophy, the user would have to keep the important window in focus at all times. Beyond this, any software that spawns heavy compute threads in the background, without the potential for focus, would also be placed on the E-cores.

Personally, I think this is a crazy way to do things, especially on a desktop. Intel tells me there are three ways to stop this behaviour:

  1. Running dual monitors stops it.
  2. Changing the Windows power plan from Balanced to High Performance stops it.
  3. There’s an option in the BIOS that, when enabled, lets Scroll Lock be used to disable/park the E-cores, meaning nothing will be scheduled on them while Scroll Lock is active.

(For those interested: Alder Lake confuses some DRM packages like Denuvo, and #3 can also be used in that instance to play older games.)

Users that only have one window open at a time, or who aren’t relying on any serious all-core time-critical workload, won’t really be affected. For anyone else, it’s a bit of a problem. But the problems don’t stop there, at least for Windows 10.

Knowing my luck, by the time this review goes out it might be fixed, but:

Windows 10 also uses a thread’s in-OS priority as a guide for core scheduling. For any user that has played around with Task Manager, there is an option to give a program a priority: Realtime, High, Above Normal, Normal, Below Normal, or Idle. The default is Normal. Behind the scenes this is actually a number from 0 to 31, where Normal is 8.
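For reference, Task Manager’s named classes map onto that 0-31 scale as follows (values taken from Microsoft’s scheduling-priorities table for a thread at normal thread priority; shown here as a quick Python lookup):

```python
# Base priority a normal-priority thread receives in each Windows
# priority class, per Microsoft's scheduling-priorities documentation.
PRIORITY_CLASS_BASE = {
    "Realtime": 24,
    "High": 13,
    "Above Normal": 10,
    "Normal": 8,
    "Below Normal": 6,
    "Idle": 4,
}

print(PRIORITY_CLASS_BASE["Normal"])  # 8
```

Note the classes are not evenly spaced on the 0-31 scale; the jump from High (13) to Realtime (24) is deliberate, as Realtime threads compete with parts of the OS itself.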

Some software will naturally give itself a lower priority, usually a 7 (below normal), as an indication to the operating system of either ‘I’m not important’ or ‘I’m a heavy workload and I want the user to still have a responsive system’. That second reason is an issue on Windows 10, as with Alder Lake it will schedule the workload on the E-cores. So even for a heavy workload, moving to the E-cores slows it down compared to simply running across all cores at a lower priority. This happens regardless of whether the program is in focus or not.
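The placement behaviour described above can be sketched as a toy model – this is my own simplification for illustration, not Intel’s actual Thread Director logic, and the threshold of 8 (Normal) is taken from the priority scale discussed earlier:

```python
from dataclasses import dataclass

@dataclass
class Thread:
    name: str
    priority: int   # 0-31 Windows scale, Normal = 8
    in_focus: bool  # does the owning window have user focus?

def assign_core(t: Thread) -> str:
    """Toy model of the Windows 10 placement described above:
    self-demoted or out-of-focus work gets steered to E-cores."""
    if t.priority < 8:   # 'heavy but polite' workloads that lowered themselves
        return "E-core"
    if not t.in_focus:   # de-emphasised because the window lost focus
        return "E-core"
    return "P-core"

# The content-creator example: the export loses focus and drops to E-cores.
export = Thread("video export", priority=8, in_focus=False)
editor = Thread("image editor", priority=8, in_focus=True)
print(assign_core(export), assign_core(editor))  # E-core P-core
```

Note that in this model a below-normal thread lands on the E-cores even when its window is in focus, which matches the benchmark behaviour described below.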

Of the normal benchmarks we run, this issue flared up mainly with rendering tasks like Cinebench, Corona, and POV-Ray, but it also happened with y-cruncher and KeyShot (a visualization tool). In speaking to others, it appears that Chrome sometimes has a similar issue. The only way to fix these programs was to go into Task Manager and either (a) change the priority to Normal or higher, or (b) change the affinity to only the P-cores. Software such as Process Lasso can be used to make sure that every time these programs are loaded, the priority is bumped up to Normal.

474 Comments

  • EnglishMike - Thursday, November 4, 2021 - link

    Apple's CPU performance and performance/watt are impressive, but it's going to take a lot more than that to make Intel/AMD start quaking in their boots, and that's not going to happen as long as Apple remains solely a vertical integrator of premium-priced computers.

    If anything, Apple's recent advances will only galvanize AMD and Intel's CPU designers now that they can see what can be achieved, and how.
  • michael2k - Thursday, November 4, 2021 - link

    As long as Apple monopolizes TSMC’s leading-edge nodes, it really doesn’t matter how much Intel tries until they can get Intel 4 online.

    Right now Intel can’t beat TSMC’s N5 or N5P process, and AMD can’t afford either. On the flip side, that means AMD can’t afford to design a better CPU because they’re stuck on N7 and N7P.
  • Ppietra - Friday, November 5, 2021 - link

    People are focusing too much on nodes! Apple’s node advantage over AMD isn’t that big in terms of the efficiency you get out of it. AMD is already using the N7+ node in some of its processors, and that puts it just around 10% behind the N5 node used by the M1 Max in performance per watt.
  • michael2k - Thursday, November 4, 2021 - link

    For now. Apple has more desktops using the M1P/M1M incoming.

    It's astounding to consider that a 60W part is competitive at all with a 300W part:
    https://www.anandtech.com/show/17047/the-intel-12t...

    vs

    https://www.anandtech.com/show/17024/apple-m1-max-...

    Go from 8p2e to 16p4e and power only goes up to 120W, while the M1 scores could double to 106 SPECint2017_r and 162 SPECfp2017_r, barring complexity due to memory bus overhead, CPU bus/fabric communication overhead, etc., since it's clear that the rate-n test performs far better when paired with DDR5 vs DDR4.
  • Ppietra - Thursday, November 4, 2021 - link

    Actually the M1 Max is a 43W part at CPU peak power, not 60W (60W was for the whole machine).
    So when Apple doubles the cores it would be closer to 85W, and 170W when using 4 times the cores, which will almost certainly happen.
    That would mean Apple could easily have more than double the performance at almost half the power consumption.
  • roknonce - Thursday, November 4, 2021 - link

    It's not true that the 12900K must use 300W; in fact, it can get over 90% of the performance at 150W. If you set the voltage manually, you can get P-cores @ 3.2GHz + E-cores @ 2.4GHz within 35W (source: GeekerWan). Its Cinebench R23 score is ST 1350, MT 14k. What about the M1 Max? ST 1500, MT 12k. In addition, TSMC N5P is 30% better than 10nm ESF. Consider again whether a 60W part is competitive at all with a 300W part.
  • roknonce - Friday, November 5, 2021 - link

    Edit: It's 6×P-core @ 3.2GHz + 8×E-core @ 2.4GHz within 35W, to roughly simulate an H35/H45 mobile chip.
  • Ppietra - Friday, November 5, 2021 - link

    The thing with Cinebench is that it takes a lot of advantage of hyperthreading, which is good of course when you have it – something the M1 doesn’t have.
    The problem is, because of this and many other differences between CPUs, Cinebench is only a good benchmark against the M1 for a small set of tasks. Not exactly a general definition of competition.
    As for power consumption, consider that the M1 Max CPU has a peak power of 43W, while other high-end laptop CPUs have a typical peak power of around 75-80W, even if they say 45W TDP.
  • roknonce - Sunday, November 7, 2021 - link

    I'm literally talking about peak power during the test – 6×P-core @ 0.75V, not the BS TDP, my friend. I totally agree that Cinebench can't tell everything. But consider the enormous gap between N5P and 10nm ESF: the result is reasonable and, for Intel fans, good enough to call inspiring.
  • charlesg - Thursday, November 4, 2021 - link

    I think there's an error in the AVX2 Peak Power graph on the last page – one of the two 5900s listed is supposed to be a 5950?
