Fundamental Windows 10 Issues: Priority and Focus

In a normal scenario, software expects every core in a computer to be equal: any thread can run anywhere and see the same performance. As we’ve already discussed, the new Alder Lake design of performance cores and efficiency cores means that not everything is equal, and the system has to know where to put each workload for maximum effect.

To this end, Intel created Thread Director, which acts as the ultimate information depot for what is happening on the CPU. It knows which threads are where, what each of the cores can do, how compute-heavy or memory-heavy each thread is, and where all the thermal hot spots and voltages sit. With that information, it sends data to the operating system about how the threads are operating, with suggestions of actions to perform, or which threads can be promoted/demoted in the event of something new coming in. The operating system scheduler is then the ringmaster: it combines the Thread Director information with what it knows about the user – which software is in the foreground, which threads are tagged as low priority – and actually orchestrates the whole process.

Intel has said that Windows 11 does all of this. The only thing Windows 10 doesn’t have is insight into the efficiency of the cores on the CPU. It assumes the efficiency is equal, but the performance differs – so instead of ‘performance vs efficiency’ cores, Windows 10 sees it more as ‘high performance vs low performance’. Intel says the net result of this will be seen only in run-to-run variation: there’s more of a chance of a thread spending some time on the low performance cores before being moved to high performance, and so anyone benchmarking multiple runs will see more variation on Windows 10 than Windows 11. But ultimately, the peak performance should be identical.

However, there are a couple of flaws.

At Intel’s Innovation event last week, we learned that the operating system will de-emphasise any workload that is not in user focus. For an office workload, or a mobile workload, this makes sense – if you’re in Excel, for example, you want Excel to be on the performance cores and those 60 chrome tabs you have open are all considered background tasks for the efficiency cores. The same with email, Netflix, or video games – what you are using there and then matters most, and everything else doesn’t really need the CPU.

However, this breaks down when it comes to more professional workflows. Intel gave an example of a content creator, exporting a video, and while that was processing going to edit some images. This puts the video export on the efficiency cores, while the image editor gets the performance cores. In my experience, the limiting factor in that scenario is the video export, not the image editor – what should take a unit of time on the P-cores now suddenly takes 2-3x on the E-cores while I’m doing something else. This extends to anyone who multi-tasks during a heavy workload, such as programmers waiting for the latest compile. Under this philosophy, the user would have to keep the important window in focus at all times. Beyond this, any software that spawns heavy compute threads in the background, without the potential for focus, would also be placed on the E-cores.

Personally, I think this is a crazy way to do things, especially on a desktop. Intel tells me there are three ways to stop this behaviour:

  1. Running dual monitors stops it
  2. Changing Windows Power Plan from Balanced to High Performance stops it
  3. There’s an option in the BIOS that, when enabled, means the Scroll Lock can be used to disable/park the E-cores, meaning nothing will be scheduled on them when the Scroll Lock is active.

(For those that are interested in Alder Lake confusing some DRM packages like Denuvo, #3 can also be used in that instance to play older games.)

Users that only have one window open at a time, or that aren’t relying on any serious all-core time-critical workload, won’t really be affected. For anyone else, it’s a bit of a problem. And the problems don’t stop there, at least for Windows 10.

Knowing my luck by the time this review goes out it might be fixed, but:

Windows 10 also uses a thread’s in-OS priority as a guide for core scheduling. For any users that have played around with the task manager, there is an option to give a program a priority: Realtime, High, Above Normal, Normal, Below Normal, or Idle. The default is Normal. Behind the scenes this is actually a number from 0 to 31, where Normal is 8.
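The 0-to-31 number is composed of a process priority class plus a per-thread offset. A minimal sketch of that arithmetic, using the documented Windows base values (and ignoring special cases like the Realtime class and dynamic priority boosting):

```python
# Simplified model of how Windows combines a process priority class with a
# per-thread priority level to get the 0-31 scheduling priority described
# above. The base values are the documented class bases; this ignores
# REALTIME's special thread levels and dynamic priority boosting.

PRIORITY_CLASS_BASE = {
    "Idle": 4,
    "Below Normal": 6,
    "Normal": 8,
    "Above Normal": 10,
    "High": 13,
    "Realtime": 24,
}

THREAD_OFFSET = {
    "Lowest": -2,
    "Below Normal": -1,
    "Normal": 0,
    "Above Normal": 1,
    "Highest": 2,
}

def effective_priority(priority_class, thread_priority="Normal"):
    """Base priority of a thread: class base plus per-thread offset."""
    return PRIORITY_CLASS_BASE[priority_class] + THREAD_OFFSET[thread_priority]

# A 'Normal' process whose thread drops itself one notch lands on 7,
# the value mentioned below for self-deprioritising software.
print(effective_priority("Normal", "Below Normal"))  # 7
```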

Some software will naturally give itself a lower priority, usually a 7 (below normal), as an indication to the operating system of either ‘I’m not important’ or ‘I’m a heavy workload and I want the user to still have a responsive system’. This second reason is an issue on Windows 10, as with Alder Lake it will schedule the workload on the E-cores. So even if it is a heavy workload, moving to the E-cores will slow it down, compared to simply being across all cores but at a lower priority. This is regardless of whether the program is in focus or not.
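This “I’m heavy, keep the system responsive” pattern is a one-liner for developers, which is why it is so common. On Windows it would be `SetPriorityClass` with `BELOW_NORMAL_PRIORITY_CLASS`; a sketch using the POSIX analogue (raising the process’s nice value, which only works on Unix-like systems) looks like this:

```python
import os

# Sketch of a heavy workload politely lowering its own priority so the user
# keeps a responsive system. On Windows this would be
# SetPriorityClass(..., BELOW_NORMAL_PRIORITY_CLASS); the POSIX analogue
# shown here raises the nice value (higher nice = lower priority). On
# Alder Lake + Windows 10, this same courtesy is what strands the work
# on the E-cores.

def deprioritise_self(increment=5):
    """Lower our own scheduling priority; returns the new nice value."""
    return os.nice(increment)

new_nice = deprioritise_self()
print(new_nice)
```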

Of the normal benchmarks we run, this issue flared up mainly with rendering tasks such as Cinebench, Corona, and POV-Ray, but it also happened with y-cruncher and KeyShot (a visualization tool). In speaking to others, it appears that Chrome sometimes has a similar issue. The only way to fix these programs was to go into the task manager and either (a) change the priority to Normal or higher, or (b) change the affinity to the P-cores only. Software such as Process Lasso can be used to make sure that every time these programs are loaded, the priority is bumped up to Normal.
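The affinity workaround can also be scripted. On Windows the relevant call is `SetProcessAffinityMask` (which tools like Process Lasso drive for you); a sketch using the Linux analogue `os.sched_setaffinity` is below. The assumption that the P-core hyperthreads enumerate as the first logical CPUs matches common Alder Lake layouts but is not guaranteed, so check your topology first:

```python
import os

# Sketch of the affinity workaround: pin a process to a chosen set of
# logical CPUs. os.sched_setaffinity is Linux-only; on Windows the
# equivalent is SetProcessAffinityMask. Which CPU indices correspond to
# P-cores is an assumption about your particular topology.

def pin_to_cores(pid, cores):
    """Restrict pid to the given logical CPUs and return the applied mask."""
    allowed = cores & os.sched_getaffinity(pid)  # don't ask for CPUs we lack
    os.sched_setaffinity(pid, allowed)
    return os.sched_getaffinity(pid)

# Demo: pin ourselves (pid 0 = calling process) to one available CPU.
first_cpu = min(os.sched_getaffinity(0))
print(pin_to_cores(0, {first_cpu}))
```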

Comments

  • mode_13h - Friday, November 5, 2021 - link

    It basically comes down to a context-switch. And those take a couple of microseconds (i.e. many thousands of CPU cycles), last I checked. And that assumes there's a P-core available to run the thread. If not, you're potentially going to have to wait a few timeslices (often 1–10 ms).

    Now, consider the case of some software that assumes all cores are AVX-512 capable. This would be basically all AVX-512 software written to date, because we've never had a hybrid one, or even the suggestion from Intel that we might need to worry about such a thing. So, the software spawns 1 thread per hyperthread (i.e. 24 threads on the i9-12900K) but can only run 16 of them at any time. That's going to result in a performance slowdown, especially when you account for all the fault-handling and context-switching that happens whenever any of these threads tries to run on an E-core. You'd basically end up thrashing the E-cores, burning a lot of power and getting no real work done on them.
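The 24-threads-on-16-runnable-cores scenario above can be put into a back-of-envelope model. This toy calculation is my own construction and ignores the fault-handling and context-switch costs the comment mentions, so it is a lower bound on the slowdown:

```python
import math

# Toy model of oversubscription: an app spawns one thread per hyperthread
# (24 on an i9-12900K) but only 16 P-core threads can execute AVX-512.
# The work is divided 24 ways, yet runs in waves of at most 16 threads.
# Trap and context-switch costs are ignored, so real slowdown is worse.

def makespan_ratio(threads_spawned, runnable_slots):
    """Completion time relative to sizing the pool to the runnable slots."""
    # Each thread holds 1/threads_spawned of the work; threads run in
    # waves of at most runnable_slots.
    waves = math.ceil(threads_spawned / runnable_slots)
    return waves * runnable_slots / threads_spawned

print(makespan_ratio(24, 16))  # ~1.33: a third slower before trap costs
```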
  • mode_13h - Friday, November 5, 2021 - link

    Forgot to address the case where the OS blocks the thread from running on the E-core, again.

    So, if we think about how worker threads are used to split up bigger tasks, you really want to have no more worker threads than actual CPU resources that can execute them. You don't want a bunch of worker threads all fighting to run on a smaller number of cores.

    So, even the solution of having the OS block those threads from running on the E-cores would yield lower performance than if the app knew how many AVX-512 capable cores there were and spawned only that many worker threads. However, you have to keep in mind that whether some function uses AVX-512 is not always apparent to a software developer. It might even do this dynamically, based on whether AVX-512 is detected, but this detection often happens at startup and then the hardware support is presumed to be invariant. So, it's problematic to dump the problem in the application developer's lap.
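The detect-once-at-startup pattern described above can be sketched as follows. `supports_avx512` is a hypothetical stand-in (real code would use CPUID; the `/proc/cpuinfo` read shown is a Linux-only shortcut), and the caching is exactly the invariance assumption that breaks on a hybrid CPU, where the answer depends on which core you ask from:

```python
import os
from concurrent.futures import ThreadPoolExecutor
from functools import lru_cache

# Sketch of the pattern criticised above: detect a CPU feature once at
# startup, cache the answer as invariant, and size the worker pool to the
# cores believed capable. supports_avx512 is an illustrative stand-in.

@lru_cache(maxsize=1)  # detected once, then presumed invariant forever
def supports_avx512():
    try:
        with open("/proc/cpuinfo") as f:
            return "avx512f" in f.read()
    except OSError:
        return False

def worker_count(capable_cores=None):
    """Pool size: the capable cores if we know them, else every logical CPU."""
    if supports_avx512() and capable_cores is not None:
        return capable_cores
    return os.cpu_count() or 1

with ThreadPoolExecutor(max_workers=worker_count()) as pool:
    squares = list(pool.map(lambda x: x * x, range(8)))
print(squares)
```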
  • eastcoast_pete - Thursday, November 4, 2021 - link

    Plus, enabling AVX-512 on the big cores would have meant having it on the E (Gracemont) cores also; otherwise switching workloads from P to E cores on the fly won't "fly". And having AVX-512 in Gracemont would have interfered with the whole idea of Gracemont being low-power and small footprint on the die. I actually find what Ian and Andrei did here quite interesting: if AVX-512 can really speed up whatever you want to do, disable the Gracemonts and run AL on the Coves only. If that could be a supported option with a quick restart, it might be worthwhile under the right circumstances.
  • AntonErtl - Friday, November 5, 2021 - link

    There is no relevant AVX-512 state before the first AVX-512 instruction is executed. So trapping and switching to a P-core is entirely doable. Switching back would probably be a bigger problem, but one probably does not want to do that anyway.
  • Spunjji - Friday, November 5, 2021 - link

    Possible problem: how would you account for a scenario where the gain from AVX-512 is smaller than the gain from running additional threads on E cores? Especially when some processors have a greater proportion of E cores to P cores than others. That could get quite complicated.
  • TeXWiller - Friday, November 5, 2021 - link

    If you look carefully at Intel's prerelease presentation about Thread Director, you see they are indeed talking about moving the integer (likely control) sections of AVX threads to E-cores and back as needed.
  • kobblestown - Friday, November 5, 2021 - link

    I'll reply to my comment because it seems the original one was not understood.

    When you have an AVX512-using thread on a P core, it might happen that it needs to be suspended, say, because the CPU is overloaded. Then the whole CPU state is saved to memory so the execution can later be resumed as if nothing had happened. In particular, the thread may be rescheduled on another core when it's time for it to run again. If that new core is a P core, then we're safe. But if it's an E core, it might happen that we hit an AVX512 instruction. Obviously, the core cannot execute it, so it traps into the OS. The OS can check the offending instruction and determine that the problem is not the instruction but the core. So it moves the thread back to a P core, stores a flag that this thread should not be rescheduled on an E-core, and keeps chugging.
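The proposed trap-and-migrate policy can be illustrated as a toy simulation. This is entirely hypothetical OS behaviour, with illustrative names throughout, not anything Windows or Linux actually implements:

```python
# Toy simulation of the trap-and-migrate scheme proposed above: a thread
# runs wherever the scheduler puts it until it faults on an AVX-512
# instruction on an E-core; the fault handler migrates it to a P-core and
# flags it so it is never rescheduled on an E-core again. All names are
# illustrative, not real OS APIs.

P_CORES = {"P0", "P1"}
E_CORES = {"E0", "E1"}

class Thread:
    def __init__(self):
        self.core = None
        self.p_only = False  # sticky flag set after the first AVX-512 trap

def schedule(thread, core):
    if thread.p_only and core in E_CORES:
        core = next(iter(P_CORES))  # flagged threads divert to a P-core
    thread.core = core

def execute_avx512(thread):
    if thread.core in E_CORES:
        # Illegal-instruction trap: the OS inspects the opcode, migrates
        # the thread to a P-core, and pins it there from now on.
        thread.p_only = True
        thread.core = next(iter(P_CORES))

t = Thread()
schedule(t, "E0")       # scheduler naively places the thread on an E-core
execute_avx512(t)       # first AVX-512 instruction traps and migrates it
print(t.core in P_CORES, t.p_only)  # True True
schedule(t, "E1")       # later reschedules now avoid E-cores
print(t.core in P_CORES)  # True
```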

    Now, someone suggested that there might be a problem with the CPU state. And, indeed, you cannot restore the AVX512 part of the state on an E core. But it cannot get changed by an E core either, because the first attempt to do so will trap. So the AVX512 part of the state that was saved on a P core is still correct.

    Since this isn't being done, there might be (but not "must be" – Intel, like AMD, will only do what is good for them, not what is good for us) some problem. One being that an AVX512 thread will never be rescheduled on an E core, even if it executes only a single AVX512 instruction. But it's still better than the current situation, which postpones the wider adoption of AVX512 yet again. I mean, the transistors are already there!
  • factual - Thursday, November 4, 2021 - link

    Great win for consumers! AMD will need to cut prices dramatically to be competitive otherwise Intel will dominate until Zen4 comes out!
  • kobblestown - Friday, November 5, 2021 - link

    Let's first see Zen3D early next year. It will let me keep my investment into the AM4 platform yet offer top notch performance.
  • Spunjji - Friday, November 5, 2021 - link

    "AMD will need to cut prices dramatically"
    Not until Intel's platform costs drop. Nobody's buying an ADL CPU by itself.
