Intel Architecture Day 2021: Alder Lake, Golden Cove, and Gracemont Detailed
by Dr. Ian Cutress & Andrei Frumusanu on August 19, 2021 9:00 AM EST

Intel Thread Director
One of the biggest criticisms that I’ve levelled at the feet of Intel since it started talking about its hybrid processor architecture designs has been the ability to manage threads in an intelligent way. When you have two cores of different performance and efficiency points, either the processor or the operating system has to be cognizant of what goes where to get the best result for the end-user. This requires doing additional analysis on what is going on with each thread, especially new work that the system has never seen before.
To date, most desktop operating systems have operated on the assumption that all cores, and the performance of everything in the system, are equal. This changed slightly with simultaneous multithreading (SMT, or in Intel speak, HyperThreading), because the system now had double the threads, and these threads offered anywhere from zero to an extra 100% performance depending on the workload. Schedulers were hacked a bit to identify primary and secondary threads on a core and schedule new work on separate cores. In mobile situations, the concept of an Energy Aware Scheduler (EAS) would look at the workload characteristics of a thread and, based on the battery life/settings, try to schedule a workload where it made sense, particularly if it was a latency-sensitive workload.
Mobile processors with Arm architecture designs have been tackling this topic for over a decade. Modern mobile processors now have three types of core inside – a super high performance core, regular high performance cores, and efficiency cores, normally in a 1+3+4 or 2+4+4 configuration. Each set of cores has its own optimal window for performance and power, and so it relies on the scheduler to absorb as much information as possible to determine the best way to do things.
Such an arrangement is rare in the desktop space - but now with Alder Lake, Intel has an SoC that pairs SMT-enabled performance cores with non-SMT efficiency cores. Managing that mix gets a bit more complex, and so the company has built a technology called Thread Director.
That’s Intel Thread Director. Not Intel Threat Detector, which is what I keep calling it all day, or Intel Threadripper, which I have also heard. Intel will use the acronym ITD or ITDT (Intel Thread Director Technology) in its marketing. Not to be confused with TDT, Intel’s Threat Detection Technology, of course.
Intel Threadripper Thread Director Technology
This new technology is a combined hardware/software solution that Intel has engineered with Microsoft focused on Windows 11. It all boils down to having the right functionality to help the operating system make decisions about where to put threads that require low latency vs threads that require high efficiency but are not time critical.
First you need a software scheduler that knows what it is doing. Intel stated that it has worked extensively with Microsoft to get what they want into Windows 11, and that Microsoft have gone above and beyond what Intel needed. This fundamental change is one reason why Windows 11 exists.
So it’s easy enough (now) to tell an operating system that different types of cores exist. Each one can have a respective performance and efficiency rating, and the operating system can migrate threads around as required. However, the difference between Windows 10 and Windows 11 is how much information is available to the scheduler about what is running.
In previous versions of Windows, the scheduler had to rely on analysing programs on its own, inferring the performance requirements of a thread with no real underlying understanding of what was happening. Windows 11 leverages the new technology to understand the different performance modes and instruction sets in play, and it also gets hints about which threads rate higher and which are worth demoting if a higher-priority thread needs the performance.
Intel classifies the performance levels on Alder Lake in the following order:
- One thread per core on P-cores
- Only thread on E-cores
- SMT threads on P-cores
That means the system will load up one thread per P-core and all the E-cores before moving to the hyperthreads on the P-cores.
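As a quick illustration of that ordering, here is a minimal C sketch - not Intel's implementation, just the priority list above turned into code, with the 8P+8E core counts of the desktop die assumed:

```c
/* A minimal sketch of the placement order described above, not
 * Intel's implementation. Core counts match the 8P+8E desktop die. */
#include <stdio.h>

#define P_CORES 8
#define E_CORES 8

/* Where does the nth new demanding thread (0-based) land? */
static const char *place_thread(int n)
{
    if (n < P_CORES)
        return "P-core (primary thread)";   /* threads 0-7   */
    if (n < P_CORES + E_CORES)
        return "E-core";                    /* threads 8-15  */
    if (n < 2 * P_CORES + E_CORES)
        return "P-core (SMT sibling)";      /* threads 16-23 */
    return "oversubscribed";
}

int main(void)
{
    for (int n = 0; n < 24; n++)
        printf("thread %2d -> %s\n", n, place_thread(n));
    return 0;
}
```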
Intel’s Thread Director places an embedded microcontroller inside the processor that can monitor what each thread is doing and what it needs out of its performance metrics. It will look at the ratio of loads, stores, branches, average memory access times, patterns, and types of instructions. It then provides suggested hints back to the Windows 11 OS scheduler about what the thread is doing and whether it is important or not, and it is up to the OS scheduler to combine that with other information about the system to decide where that thread should go. Ultimately the OS is both topologically aware and now workload aware to a much higher degree.
As part of Thread Director, the microcontroller monitors which instructions are power hungry, such as AVX-VNNI (for machine learning) or other AVX2 commands that often draw high power, and puts a big flag on those for the OS to prioritize. It also looks at the other threads in the system, and if a thread needs to be demoted, either due to not having enough free P-cores or for power/thermal reasons, it will give hints to the OS as to which thread is best to move. Intel states that it can profile a thread in as little as 30 microseconds, whereas a traditional OS scheduler may take hundreds of milliseconds to reach the same conclusion (or the wrong one).
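To make that concrete, here is a hypothetical sketch of the kind of instruction-mix heuristic such a controller might apply. The counter names, thresholds, and hint classes are illustrative assumptions on my part, not Intel's actual classification scheme:

```c
/* Hypothetical instruction-mix heuristic; names and thresholds are
 * illustrative assumptions, not Intel's actual classifier. */
#include <stdio.h>

struct thread_sample {
    unsigned long long instructions;  /* instructions retired in window */
    unsigned long long vector_ops;    /* AVX2 / AVX-VNNI ops retired */
    unsigned long long mem_stall_cyc; /* cycles stalled on memory */
    unsigned long long cycles;        /* total cycles in window */
};

enum hint { HINT_DEMOTE_OK, HINT_NORMAL, HINT_PRIORITIZE };

static enum hint classify(const struct thread_sample *s)
{
    /* Heavy vector usage: flag it as worth a P-core. */
    if (s->vector_ops * 10 > s->instructions)
        return HINT_PRIORITIZE;
    /* Mostly waiting on memory: little is lost moving it to an E-core. */
    if (s->mem_stall_cyc * 2 > s->cycles)
        return HINT_DEMOTE_OK;
    return HINT_NORMAL;
}

int main(void)
{
    struct thread_sample s = { 1000000, 250000, 100000, 800000 };
    printf("hint = %d\n", classify(&s)); /* prints 2: HINT_PRIORITIZE */
    return 0;
}
```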
On top of this, Intel says that Thread Director can also optimize for frequency. If a thread is limited by something other than frequency, it can detect this and reduce frequency, voltage, and power. This will help the mobile processors in particular, and when asked, Intel stated that it can now change frequency in microseconds rather than milliseconds.
We asked Intel where an initial thread will go before the scheduling kicks in. I was told that a thread will initially get scheduled on a P-core unless the P-cores are full, in which case it goes to an E-core until the scheduler determines what the thread needs, after which the OS can be guided to upgrade the thread. In power-limited scenarios, such as being on battery, a thread may start on an E-core anyway, even if the P-cores are free.
For users looking for more information about Thread Director on a technical level, I suggest reading this document and going to page 185, reading about EHFI - the Enhanced Hardware Feedback Interface. It outlines the different classes of performance as part of the hardware side of Thread Director.
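For a rough sense of what that interface exposes, here is a speculative sketch of a per-CPU feedback table entry: for each workload class, a relative performance capability and an energy-efficiency capability. The layout, class count, and 0-255 scale here are assumptions of mine; check the document referenced above for the real format:

```c
/* Speculative sketch of a per-CPU feedback table entry; layout,
 * class count, and 0-255 scale are assumptions, not the real format. */
#include <stdint.h>
#include <stdio.h>

#define EHFI_CLASSES 4  /* assumed: a small number of workload classes */

struct ehfi_cpu_entry {
    uint8_t perf_cap[EHFI_CLASSES]; /* relative performance per class */
    uint8_t ee_cap[EHFI_CLASSES];   /* relative energy efficiency per class */
};

int main(void)
{
    /* Illustrative values: a P-core rates high on performance for a
     * vector-heavy class, an E-core rates high on efficiency. */
    struct ehfi_cpu_entry p_core = { {196, 255, 128, 64}, {96, 64, 128, 160} };
    struct ehfi_cpu_entry e_core = { {110, 100, 100, 90}, {230, 220, 200, 180} };
    printf("class 1: P perf=%u, E ee=%u\n",
           (unsigned)p_core.perf_cap[1], (unsigned)e_core.ee_cap[1]);
    return 0;
}
```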
It’s important to understand that for the desktop processor with 8 P-cores and 8 E-cores, if there is a 16-thread workload, it will be scheduled with 8 threads across all 8 P-cores and the other 8 threads across all 8 E-cores. This affords more performance than enabling the hyperthreads on the P-cores, and so software that compares thread-to-thread loading (such as the latest 3DMark CPU Profile test) may be testing something different compared to processors without E-cores.
On the question of Linux, Intel only went as far as to say that Windows 11 is the priority, and that it is working on upstreaming a variety of features into the Linux kernel, but that will take time. An Intel spokesperson promised more details closer to product launch; however, these things will take a while, perhaps months or years, to reach feature parity with Windows 11.
One of the biggest questions users will ask is about the difference in performance or battery life between Windows 10 and Windows 11. Windows 10 does not get Thread Director, but relies on a more basic version of Intel’s Hardware Guided Scheduling (HGS). In our conversations with Intel, they were cagey about putting any exact performance differential between the two; however, based on our understanding of the technology, we should expect to see better frequency efficiency in Windows 11. Intel stated that even though the new technology in Windows 11 will mean threads move more often than in Windows 10, potentially adding latency, in their testing it wasn’t in any way humanly perceptible. Ultimately, because the Windows 11 configuration can also optimize for power and efficiency, especially in mobile, Intel puts the win on Windows 11.
The only question is if Windows 11 will launch in time for Alder Lake.
223 Comments
mode_13h - Friday, August 20, 2021 - link
> they can't exactly prevent open source software from developing

Well, they don't have to release all of the information that would be needed for Linux to do the same thing.
> Linux will probably have better support than Windows itself
Intel is one of the largest contributors to the Linux kernel. No doubt, any support for their Thread Director will be developed by them.
Obi-Wan_ - Thursday, August 19, 2021 - link
Is it likely that Alder Lake will consume noticeably less power when near idle or during video playback/streaming, or are existing CPUs already quite efficient in these cases?

I'm thinking an HTPC that should be as silent as possible when idling and streaming, but also have a high power budget (effectively noise budget) when gaming for example.
mode_13h - Friday, August 20, 2021 - link
If you really care about minimizing idle power, then I think you probably need to use LPDDR memory. Integrated graphics should also be a priority.

Another thing people miss is that the PSU should not only be a high-efficiency model, but also not heavily over-spec'd. Power supplies lose a lot of efficiency, when you run them well below peak load.
mode_13h - Thursday, August 19, 2021 - link
> The desktop processor will have sixteen lanes of PCIe 5.0

I'll believe it when I see it. Let's not forget that it took Intel 2 generations to get PCIe 4.0 working! They had to reverse course on enabling it in Comet Lake, and that was years after POWER, Ryzen, and some ARM CPUs had it.
I also don't see the value of having it now, given that we know DG2 is going to be only PCIe 4, nor are we aware of any other upcoming GPUs that will support 5.0.
Bp_968 - Sunday, August 22, 2021 - link
Pcie 5 is twice as fast and backwards compatible. Why would you *not* want it in your system if possible? It's not like you can add it later.

Pcie4 is mostly useless for gpus already (regardless of whether they "support" it or not) so pcie5 isn't going to improve anything gpu wise, just like it didn't improve with pcie4.
But where it *will* be an improvement is in peripherals (the x4 channel to the chipset just got twice as fast) and support for pcie5 storage. Oh and easier support for high speed interconnects like USB 3-4 and 10gb ethernet.
Personally I'd prefer the layout be different. x16 or x8/x8 (plus 2 x4 slots for NVMe, I think) for the pcie5 is ok, but on the pcie4 side I'd like to see x16 or x8/x8 as an option as well. That way you could use the pcie5 slots for other stuff and put the gpu in a x16 or x8 pcie4 slot (and it would perform just as well). Support for 4+ nvme drives will be nice in the future: one or two pcie5 high speed units and then spots for slower SSDs for mass storage (x2 or x4 pcie4 being the "slow" slots).
mode_13h - Monday, August 23, 2021 - link
> Pcie 5 is twice as fast and backwards compatible.
> Why would you *not* want it in your system if possible?
Mainly due to board and peripheral costs, I think. Beyond that, power dissipation should be well above PCIe 4.0 and it could also be a source of stability issues.
> But where it *will* be an improvement is in peripherals
> (the x4 channel to the chipset just got twice as fast)
This is actually the one place where it makes sense to me. The chipset can be located next to the CPU, so that hopefully no retimers will be needed. And the additional power needed to run a short x4 link @ 5.0 speeds hopefully shouldn't be too bad. When leaks first emerged about Alder Lake having PCIe 5.0, I suspected it was just for the chipset link.
> support for pcie5 storage.
By the time there are any consumer SSDs that exceed PCIe 4.0 x4 speeds, we'll already be on a new platform. It took over a year for PCIe 4.0 SSDs to finally surpass PCIe 3.0 x4 speeds, and many still don't.
> easier support for high speed interconnects like USB 3-4
The highest-rated speed for USB4 is PCIe 3.0 x4. However, even a chipset link of PCIe 4.0 x4 will mean you can support it with bandwidth to spare. That said, I think the highest-speed USB links are typically CPU-direct, in recent generations.
> 10gb ethernet.
You can already do that with a PCIe 4.0 x1 link.
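[Ed: for reference, a quick back-of-the-envelope check of that claim, using the published PCIe 4.0 line rate and encoding; protocol overhead is glossed over in a comment:]

```c
/* Back-of-the-envelope: one PCIe 4.0 lane vs 10GbE. */
#include <stdio.h>

int main(void)
{
    double line_rate = 16.0;          /* PCIe 4.0: 16 GT/s per lane */
    double encoding  = 128.0 / 130.0; /* 128b/130b line code */
    double lanes     = 1.0;           /* x1 link */
    double gbps = line_rate * encoding * lanes;
    /* ~15.75 Gb/s raw; comfortably above 10 Gb/s even after
     * TLP/protocol overheads. */
    printf("PCIe 4.0 x1 raw bandwidth = %.2f Gb/s\n", gbps);
    return 0;
}
```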
Spunjji - Monday, August 23, 2021 - link
> But where it *will* be an improvement is in peripherals
> (the x4 channel to the chipset just got twice as fast)
It doesn't use PCIe 5.0 for the chipset link, so it doesn't even have that advantage. I genuinely think it's premature. I guess we'll have to see what motherboard costs look like to know whether it was worth it for future-proofing, or whether it's just spec wankery.
mode_13h - Tuesday, August 24, 2021 - link
> It doesn't use PCIe 5.0 for the chipset link

I'm pretty sure they didn't specify that, one way or another. I'm pessimistic, though. Then again, didn't Rocket Lake have a PCIe 4.0 x8 link to the chipset? If so, moving up to PCIe 5.0 x4 is plausible.
> or whether it's just spec wankery.
It's definitely wankery. I'm just waiting for them either to walk it back, a la Comet Lake's PCIe 4.0 support, or for users to encounter a raft of issues, once some PCIe 5.0 GPU is finally released and people try to actually *use* the capability.
Spunjji - Friday, August 27, 2021 - link
All the resources I'm finding online say it's a DMI 4.0 x8 link to the chipset, so the same as Rocket Lake. Personally I think that's going to be plenty for the vast majority of their users, assuming they follow up at some point in the not-too-distant future with an up-to-date HEDT platform for the users who need more.

mode_13h - Saturday, August 28, 2021 - link
That's a shame, because the DMI link is the one place where Intel could've gotten practical benefits from using PCIe 5.0, right away.