Hexagon 780: A Whole New IP for AI & DSP

Every year Qualcomm likes to talk about its new Hexagon DSPs, with the last few generations also bringing new Tensor Accelerators dedicated to ML inferencing to the table. This year’s Snapdragon 888 design also hypes up the new Hexagon 780, with the difference being that this time around the generational improvements are truly humongous.

The new Hexagon 780 accelerator IP truly deserves a large increment in its IP numbering scheme, as it’s essentially a ground-up redesign of the company’s existing DSP, with its scalar and vector execution engines, and of the recent Tensor Accelerators. Previously, all of these execution engines acted as discrete, independent blocks within the Hexagon 600 series family, but that’s now changed in the new IP design.

The new IP block fuses together all of the scalar, tensor, and vector capabilities into a single monolithic IP, vastly increasing the performance and power efficiency of workloads that make use of all the mixed capabilities of the design.

In terms of performance uplifts, scalar execution capabilities are said to be increased by 50%, while tensor execution throughput has doubled. The vector extension units seem to have remained the same this generation, but actual workload performance should still increase thanks to the new memory architecture of the IP block.

Qualcomm states that it has increased the on-chip SRAM dedicated to the block 16-fold, allowing larger machine-learning inference models to fit within the block’s memory and greatly accelerating their performance. This larger memory pool also allows for coherency between the scalar, vector, and tensor units, vastly reducing workload handoff time between the different execution engines. I asked about the actual size of this new memory, but the company wouldn’t disclose any further details, stating only that it’s significant.

The company’s engineers were extremely hyped up about the new design, stating that its performance and flexibility are well beyond what other companies can achieve through disaggregated DSP and ML inference engines, sometimes sourced from different IP vendors.

The most important figure for the new design is the 3x performance-per-watt claim, a massive generational improvement of the kind you rarely see in the industry.

As is usual for Qualcomm, the company doesn’t actually state the per-block performance increases, but instead opts to showcase an aggregate computational throughput figure shared amongst all of the SoC’s IP blocks, including the CPU, GPU, and the new Hexagon accelerator block. This new figure lands at 26 TOPS for the Snapdragon 888, which is 73% higher than the 15 TOPS figure of the Snapdragon 865. Given that we’ve seen significant changes in all IP blocks this generation, I won’t attempt a breakdown estimate as it’s likely to be wrong and off the mark anyhow.
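For reference, the advertised uplift follows directly from the two aggregate figures Qualcomm quotes:

```python
# Qualcomm's quoted aggregate AI throughput figures.
sd888_tops = 26.0  # Snapdragon 888
sd865_tops = 15.0  # Snapdragon 865

uplift = sd888_tops / sd865_tops - 1
print(f"{uplift:.0%}")  # → 73%
```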

The Adreno 660 - A 35% faster GPU

Amongst the improvements leading up to that 26 TOPS figure is a new, vastly improved GPU in the form of the new Adreno 660.

Qualcomm still holds the architectural details of its GPUs very close to its chest and thus doesn’t disclose very much about the new GPU design or what has actually changed, but one thing the company did talk about is the addition of new mixed-precision dot product instructions as well as FP16/FP32 wave matrix-multiply instructions, which allow the new GPU to increase AI performance by up to 43%.
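To illustrate why mixed-precision instructions matter for AI workloads, here's a minimal NumPy sketch of the general idea (purely an illustration of the numerical scheme, not of how Adreno actually exposes these instructions): operands are stored in FP16 for throughput, but sums are accumulated in a wider FP32 format, avoiding the error an all-FP16 pipeline would build up.

```python
import numpy as np

np.random.seed(0)
a = np.random.rand(1024).astype(np.float16)  # FP16 storage operands
b = np.random.rand(1024).astype(np.float16)

# Mixed precision: FP16 inputs, FP32 accumulation.
mixed = np.sum(a.astype(np.float32) * b.astype(np.float32), dtype=np.float32)

# Naive all-FP16 accumulation for comparison.
naive = np.float16(0)
for x, y in zip(a, b):
    naive = np.float16(naive + x * y)

# FP64 reference over the same FP16 inputs.
ref = np.dot(a.astype(np.float64), b.astype(np.float64))
print(abs(mixed - ref), abs(naive - ref))  # FP32 accumulation lands far closer
```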

We’re also seeing the addition of variable rate shading (VRS) to the Adreno GPU architecture, allowing coarser shading across larger pixel blocks for objects and screen areas that don’t require as much detail, where the loss of native-resolution shading wouldn’t be noticeable. This is also a major feature being introduced in the new consoles and new-generation PC graphics cards and GPUs, and it should bring greater performance uplifts in new gaming titles that take advantage of it. It’s great to see Qualcomm bringing this to the mobile space along with the rest of the industry.
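Conceptually, and this is just a toy sketch rather than Adreno's actual implementation, VRS trades shading detail for work: a low-detail region is shaded once per block and the result is broadcast to every pixel in that block, cutting shader invocations by the block size.

```python
import numpy as np

H, W = 8, 8
invocations = {"n": 0}

def shade(x, y):
    # Stand-in for an expensive per-pixel fragment shader.
    invocations["n"] += 1
    return (x * 31 + y * 17) % 255

# Full-rate shading: one shader invocation per pixel.
full = np.array([[shade(x, y) for x in range(W)] for y in range(H)])
full_calls = invocations["n"]

# Coarse 2x2 shading: one invocation per block, broadcast to 4 pixels.
invocations["n"] = 0
coarse = np.zeros((H, W), dtype=int)
for by in range(0, H, 2):
    for bx in range(0, W, 2):
        coarse[by:by + 2, bx:bx + 2] = shade(bx, by)
coarse_calls = invocations["n"]

print(full_calls, coarse_calls)  # 64 vs 16 shader invocations
```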

For graphics workloads, the new GPU is advertised as being able to increase performance by up to 35%, which is a very major generational performance leap.

Such a performance jump would actually signify that Qualcomm may very well regain the gaming performance crown this generation, having lost it to Apple’s SoCs over the last two generations. Apple’s latest A14 has seen rather conservative gains on the GPU side this year, so a 35% performance gain over the Snapdragon 865 should very much allow the new Snapdragon 888 to retake the leadership position.

A 35% performance increase alongside a 20% power efficiency increase indicates that the new SoC achieves its higher performance at the cost of somewhat higher power consumption, but given the Snapdragon 865’s excellent power characteristics of below 4W, Qualcomm does have a little leeway to increase power this generation.
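Taken together, the two claims pin down the implied power increase, since power is simply performance divided by performance-per-watt:

```python
perf_gain = 1.35  # +35% peak performance claim
eff_gain = 1.20   # +20% performance-per-watt claim

# power = performance / (performance per watt)
power_ratio = perf_gain / eff_gain
print(f"{power_ratio:.3f}")  # ≈ 1.125, i.e. roughly 12.5% higher peak power
```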

3200MHz LPDDR5

The new Snapdragon 888 moves from a hybrid memory controller to one that focuses on LPDDR5, and also increases the supported LPDDR5 frequency to 3200MHz (or LPDDR5-6400).
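For context, that data rate translates into a straightforward theoretical peak bandwidth figure. Note that the 64-bit (4×16-bit channel) interface width below is my assumption based on typical mobile SoC configurations, not a disclosed spec:

```python
transfers_per_sec = 6400e6  # LPDDR5-6400: 3200MHz clock, double data rate
bus_width_bytes = 64 // 8   # assumed 64-bit bus -> 8 bytes per transfer

peak_gb_s = transfers_per_sec * bus_width_bytes / 1e9
print(peak_gb_s)  # 51.2 GB/s theoretical peak
```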

For the Snapdragon 865, Qualcomm was rather unenthusiastic about the LPDDR5 switch, saying that it didn’t bring all too great improvements to performance or power efficiency – something we actually did test out, coming to the same conclusion in our review of the two OnePlus 8 phones, where the LPDDR4X variant ended up being no slower and seemingly more efficient. Apple this year also kept using LPDDR4X on its A14 and M1 SoCs – pointing out that the benefits aren’t all that great.

For the Snapdragon 888, however, Qualcomm’s engineers seemed more upbeat about LPDDR5, with the new SoC actually being able to utilise the increased memory bandwidth this generation. Without going into details, the company also stated that it has improved the overall design of the memory subsystem, improving aspects such as latency.

On the memory subsystem side, Qualcomm still employs a 3MB system-level cache in front of the memory controllers, which all of the SoC’s IP blocks are able to take advantage of.

123 Comments

  • RSAUser - Monday, December 7, 2020

    It shouldn't be, it's about 10% denser than TSMC 7FFP (127 vs 114), a far cry from TSMC N5 though (173).
  • Zeratul56 - Wednesday, December 2, 2020

    Any word if/when this could end up in a 8cx platform. While probably short of the M1 it could make a windows on arm pc much more compelling
  • domboy - Wednesday, December 2, 2020

    I'm interested to see what the 8cx replacement based on the 888 will be like...
  • colinisation - Wednesday, December 2, 2020

    Andrei/Ian - Any thoughts on the suitability of a 1+3+4 core CPU configuration as against a 2+2+4 config. I recall you guys did a deep dive on the state of threading in Android and it seemed that you had one or two main threads and the rest were low performance.

    Do you think this is still the case or do you think SoC vendors should be looking at increasing the big core count, given more multitasking on phones.
  • spaceship9876 - Wednesday, December 2, 2020

    No AV1 video hardware decoding support? You have to be kidding!
  • ikjadoon - Wednesday, December 2, 2020

    Qualcomm has long loved the MPEG Industry. In 2017, Qualcomm was already hating on AV1: https://web.archive.org/web/20170611163031/https:/...

    In 2020, Qualcomm & Samsung are pushing for MPEG 5-EVC (presumably H.267?): https://www.qualcomm.com/media/documents/files/mpe...

    So the upcoming codec wars (again): AV1, VVC (H.266), and EVC (H.267?).

    However, most of the "big" hardware & software companies are backing AV1:

    NVIDIA
    AMD
    Intel
    Apple
    Arm
    Facebook
    Cisco
    Google
    Microsoft
    Netflix
    Samsung (many fingers in many pies)
    Adobe
    Hulu
    BBC
    Alibaba
    Broadcom
    Realtek
    Vimeo
    Xilinx

    Qualcomm already looking like chumps here. They literally use Arm's CPU cores on Google's operating system: Qualcomm's only serious work today is non-CPU IP: cameras, AI, etc. They're a little Texas Instruments?
  • Raqia - Wednesday, December 2, 2020

    CPUs, even high performance ones are now just much more commoditized; Qualcomm still does a lot of substantial work for the cache hierarchies and pre-fetchers which matter just as much as the core uArch. I'm always curious about CPU uArch and performance, but for a phone the other units matter more for the experience despite being less testable and prominent in most tech headlines.
  • brucethemoose - Wednesday, December 2, 2020

    "Apple"

    Not so sure about that... the M1 skipped AV1 too.
  • RSAUser - Wednesday, December 2, 2020

    No, the M1 has AV1 decode, it's a larger A14.
  • halcyon - Wednesday, December 2, 2020

    Yes, the lack of AV1 decode feels odd.
