AMD SmartShift

We’ve covered AMD’s SmartShift before, when it was announced at CES, as a technology that allows a system management controller to interact with both the mobile APU and an AMD graphics card in the same system in order to shift power to where it is needed. The solution still uses separate power rails for the two devices; however, the scalable control fabric (SCF) portion of Infinity Fabric means that parts of the APU and parts of the GPU can coordinate in this way. Ultimately AMD believes it can score 10-12% better on heavy CPU workloads like Cinebench or gaming workloads like The Division.

The solution is firmware based, but requires interaction between qualified hardware. AMD states that in these sorts of CPU+GPU designs, while the chassis has a total design TDP, if one element of the system is idle, the other can’t take advantage of the extra turbo headroom available. SmartShift aims to fix that.
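To make the idea concrete, here is a toy sketch (not AMD’s firmware, and all wattage figures are illustrative assumptions) of how a shared chassis budget might be split between a CPU and a GPU: each rail has a floor and a ceiling, and when one side is idle the other can claim the spare headroom.

```python
# Toy model of a SmartShift-style shared power budget.
# CHASSIS_TDP and the per-rail floors/ceilings are hypothetical numbers.

CHASSIS_TDP = 50.0           # total sustained chassis budget in watts
CPU_MIN, CPU_MAX = 10.0, 35.0
GPU_MIN, GPU_MAX = 10.0, 40.0

def smartshift(cpu_demand: float, gpu_demand: float) -> tuple[float, float]:
    """Split CHASSIS_TDP between CPU and GPU according to demand,
    clamped to each rail's floor and ceiling."""
    cpu = min(max(cpu_demand, CPU_MIN), CPU_MAX)
    gpu = min(max(gpu_demand, GPU_MIN), GPU_MAX)
    total = cpu + gpu
    if total > CHASSIS_TDP:
        # Over budget: scale both rails down proportionally,
        # keeping each above its floor.
        scale = (CHASSIS_TDP - CPU_MIN - GPU_MIN) / (total - CPU_MIN - GPU_MIN)
        cpu = CPU_MIN + (cpu - CPU_MIN) * scale
        gpu = GPU_MIN + (gpu - GPU_MIN) * scale
    return cpu, gpu

# GPU nearly idle: the CPU can turbo well beyond an even 25 W split.
print(smartshift(cpu_demand=35.0, gpu_demand=10.0))   # (35.0, 10.0)
# Both fully loaded: the 50 W budget is shared proportionally.
print(smartshift(cpu_demand=35.0, gpu_demand=40.0))
```

The point of the sketch is the first call: without a shared budget, the CPU in a fixed 25 W + 25 W design could never reach 35 W even with the GPU idle.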

What’s new here is that AMD’s focus will primarily be on Ryzen Mobile 4000 + Vega 10 style systems. The first system with SmartShift enabled will be the Dell G5 SE. The Dell G5 SE is being labelled as ‘the ultimate mobile gaming experience’, and will feature a new H-series processor, the Radeon RX 5600M, SmartShift, FreeSync, and a 15-inch display. The unit will be coming out in Q2.

System Temperature Tracking (Version 2)

SmartShift is also part of a new System Temperature Tracking paradigm that AMD is implementing in its new APUs. Even if there is power headroom, a system can’t turbo if there isn’t thermal headroom. System Temperature Tracking v2 (or STTv2) is designed to help a system boost for longer by knowing more about the thermal profile of the device.

By placing additional thermal probes inside the system, such as on hot controllers or discrete GPUs, their readings can be passed through the Infinity Fabric to an embedded management controller. By learning how the system thermals interact when different elements are loaded, the controller can determine whether the system still has headroom to stay in turbo for longer than the current methodology (AMD’s Skin Temperature Aware Power Management) allows. This means that rather than a small number of sensors producing a single number for the temperature of the system, AMD takes in many more values to build a thermal profile of which areas of the system are affected at what point.
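As a rough illustration (an assumption on our part, not AMD’s actual STTv2 algorithm), the difference can be thought of as gating turbo on the worst-case margin across many probes rather than on a single skin-temperature reading. The probe names and limits below are hypothetical.

```python
# Toy multi-sensor thermal tracking: turbo headroom is the smallest
# remaining margin across all probes, each with its own limit.
# Probe names and limits (in degrees C) are illustrative assumptions.

SENSOR_LIMITS = {
    "skin": 48.0,   # chassis skin temperature the user touches
    "apu":  95.0,   # APU junction temperature
    "dgpu": 90.0,   # discrete GPU temperature
    "vrm": 100.0,   # voltage regulator temperature
}

def thermal_headroom(readings: dict[str, float]) -> float:
    """Smallest remaining margin across all probes; while it stays
    positive, the controller may hold turbo a little longer."""
    return min(SENSOR_LIMITS[name] - temp for name, temp in readings.items())

readings = {"skin": 41.0, "apu": 78.0, "dgpu": 71.0, "vrm": 80.0}
print(thermal_headroom(readings))   # 7.0 -> skin temperature is the binding limit
```

With a single skin sensor the controller only ever sees that one margin; with a profile of probes it can tell which component is actually the binding constraint at any moment.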

What STTv2 does, at the end of the day, is potentially extend the boost time for a given system, depending on its thermal capabilities. For example, the Lenovo Yoga Slim 7, which we are expecting for review, only has a 15 W processor inside, but the chassis has been built for a 25 W TDP design, which means that STTv2 should kick in and provide the user with peak performance for longer.

Power and Battery Life Conclusions, and the Special HS Processors
Comments

  • eek2121 - Monday, March 16, 2020 - link

This is an intriguing part. I am hoping for laptop designs with a 4800U and 5600M, but also desktop APUs. Hopefully AMD can bring some of the new stuff forward to desktop Zen 3 as well.
  • heffeque - Monday, March 16, 2020 - link

    It would be interesting to see these in fan and fanless AMD versions of Surface Pro versus fan and fanless Intel versions of Surface Pro.

I'm especially interested in battery life, since the AMD 3780U Surface Pro has horrible battery life compared to its Intel counterpart.
  • The_Assimilator - Monday, March 16, 2020 - link

    The fact that OEMs are willing to make custom designs for AMD is already a good sign that they're confident in the product. Lisa Su certainly has the right stuff.
  • Khenglish - Monday, March 16, 2020 - link

I'm pretty unimpressed by the GPU vs the Vega 11 in APU desktops. The only major advantage Renoir has is higher clocks on the GPU core and higher officially supported memory speeds. They likely got the 56% performance per core improvement by comparing to a Zen+ with Vega 11, which will be severely clock constrained on 12nm with a bigger core, where Renoir gets an even higher clock advantage not just from the nominal clock, but also from Picasso APUs hitting their TDP limit hard in a 25W or 35W environment.

    On desktop with much higher TDPs I expect Renoir to slightly beat the 3400g at stock clocks, but lose when comparing overclocked results. Picasso easily overclocks up to 1700-1800 MHz from the measly 1240 MHz stock clock. I would guess Renoir would hit around 2000, not enough to compensate for the smaller core.
  • eek2121 - Tuesday, March 17, 2020 - link

    There are a lot of problems with your comment, but let’s start with the obvious: The TDP of the part you mentioned is at least triple that of the 4800U. Depending on how the chip is configured it is quadruple.

    These are laptop parts, we haven’t seen desktop APUs. AMD could add 3X as many Vega cores and still hit a 45-65 watt TDP or they can go aggressive on the CPU clocks like they did the 4900H.
  • Spunjji - Tuesday, March 17, 2020 - link

    I'm pretty sure the desktop APU won't have more Vega CUs.
  • tygrus - Tuesday, March 17, 2020 - link

    These days doubling the GPU cores/units and running half the speed is more energy efficient. Uses more die space but I don't understand the focus on GPU MHz over energy efficiency.
  • Spunjji - Tuesday, March 17, 2020 - link

1) Not sure the evidence I've seen bears out that a 1700-1800MHz GPU overclock is "easy". That sounds like the higher end of what you can expect. Would welcome evidence to the contrary, as I'm still considering picking one up.
    2) RAM speed is the big difference here. The desktop APU should get much higher memory speeds than the 3400G due to the improved Zen 2 memory controller, which ought to relieve a significant bottleneck. GPU core overclocks weren't actually the best route to wringing performance out of the 3400G.
  • Fataliity - Monday, March 16, 2020 - link

    @Ian Cuttress, did they say what version of N7 they used for this? The density looks like either a HPC + variant, or a N7 mobile variant from what I can tell?

    Thank you!
  • abufrejoval - Monday, March 16, 2020 - link

    What I want is choice. And flexibility to enable it.

    15 Watt TDP typically isn’t a hard limit nor is 35 or 45 Watt for that matter: It’s mostly about what can be *sustained* for more than a second or two. Vendors have allowed bursting at twice or more TDP because that’s what often defines ‘user experience’ and sells like hotcakes on mobile i7’s.

We all know the silicon is the same. Yes, there is binning, but a 15 Watt part sure won’t die at 35, 45 or even 65 or 95 Watts for that matter: It will just need more juice and cooling. And of course, a design for comfortable cooling of 15 Watts won’t take 25 or 35 Watts without a bit of ‘screaming’.

    But why not give a choice, when noise matters less than a deadline and you don’t want to buy a distinct machine for a temporary project?

I admit to having run machine-learning on Nvidia equipped 15.4” slim-line notebooks for days if not weeks, and having to hide them in a closet, because nobody in the office could tolerate the noise they produced at >100 Watts of CPU and GPU power consumption: That’s fine, really, when you can choose what to do where and when.

    Renoir has a huge range of load vs. power consumption: Please, please, PLEASE ensure that in all form factors users can make a choice of power consumption vs. battery life or cooling by setting max and sustained Wattage preferably at run-time and not hard-wiring this into distinct SKUs. I’d want a 15 Watt ultrabook to sustain a 35 Watt workload screaming its head off, just like I’d like a 90 Watt desktop or a 60 Watt NUC to calm down to 45/35/25 Watt sustained for night-long batches in the living room or bed-side—if that’s what suits my needs: It’s not a matter of technology, just a matter of ‘product placement’.
