Disclaimer June 25th: The benchmark figures in this review have been superseded by our second follow-up Milan review article, where we observe improved performance figures on a production platform compared to AMD’s reference system in this piece.

SPEC - Multi-Threaded Performance

Picking up from the power efficiency discussion, let’s dive directly into the multi-threaded SPEC results. As usual, because these are not officially submitted scores to SPEC, we’re labelling the results as “estimates” as per the SPEC rules and license.

We compile the binaries with GCC 10.2 on their respective platforms, with simple -Ofast optimisation flags and the relevant architecture and machine tuning flags (-march/-mtune=Neoverse-n1 ; -march/-mtune=skylake-avx512 ; -march/-mtune=znver2). For the new Zen3 Milan parts, as GCC 10.2 at the time didn’t have support for the new microarchitecture, we’re using the same Zen2 binaries as on Rome, as otherwise we’d have to rerun all numbers on all platforms with a newer GCC 11 baseline – something we will do in the future, but it is out of the scope of this piece.
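As a quick sanity check that such architecture flags actually took effect in a given binary, one can compile a tiny probe with the same flags as the benchmarks. The following is a minimal sketch relying on GCC's predefined feature-test macros; the specific macros checked here are purely illustrative and not part of our benchmark harness:

    /* flagcheck.c - illustrative only: report which ISA level the -march
       flags enabled. Build it with the same flags as the benchmark
       binaries, e.g. "gcc -Ofast -march=znver2 flagcheck.c". */
    #include <stdio.h>

    int main(void)
    {
    #if defined(__AVX512F__)
        puts("x86-64 with AVX-512 (e.g. skylake-avx512 target)");
    #elif defined(__AVX2__)
        puts("x86-64 with AVX2 (e.g. znver2 target)");
    #elif defined(__ARM_NEON)
        puts("AArch64 with NEON (e.g. Neoverse N1 target)");
    #else
        puts("baseline ISA only - architecture flags likely missing");
    #endif
        return 0;
    }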

In terms of data-points, we’re comparing the new 7763 against a 7742, as well as the top socketed SKUs from the competition, including a Xeon 8280 (equivalent to a Xeon 6258R), and Ampere’s Arm-based Altra Q80-33. It’s to be noted that the 7763 is a 280W part and thus comes in higher than the other three chips, but we wanted to compare the top of AMD’s stack against the best parts we had available for testing.

SPECint2017 Rate-N Estimated Scores (1 Socket)

Generationally, the new 7763 does outperform the 7742 across the board, but generally the magnitude isn’t quite as large as you’d expect given the 40W higher TDP of the chip – a performance delta that gets even tighter when configuring the 7742 to a 240W cTDP.

Against the competition, AMD’s traditional adversary Intel doesn’t really stand a chance as the Milan chip is posting well over double the performance in almost all workloads.

The newer competition AMD should worry about is Ampere’s new Altra system – which currently still outperforms the top-end Milan-based 7763 in several compute-heavy benchmarks by notable margins. In more memory-heavy workloads, the EPYC beats the Altra more easily thanks to having essentially 8x the total cache per chip, at 256MB vs 32MB.

SPECfp2017 Rate-N Estimated Scores (1 Socket)

We’re seeing a similar story in SPECfp2017, which is more oriented towards HPC workloads. The one result that stands out here is 511.povray, in which the 7763 loses out to the previous generation 7742 due to the workload being more core-bound, and the Milan chip having less of its power budget effectively available for the cores, even at the higher 280W TDP.

Intel’s 8280 again really isn’t viable competition for AMD’s chips, with the Ampere Altra being a closer match for AMD’s EPYC, winning some and losing some.

SPECint2017 Base Rate-N Estimated Performance 

With Milan, AMD is now retaking the performance lead in SPECint2017, although by only a small margin.

Generationally, the EPYC 7763 is 12.8% faster than the 7742 – unfortunately we don’t have figures for the corresponding 280W 7H12 Rome part.

Comparing the 7713 against the 7742, things aren’t looking as great, as the new Milan part sees a 4% performance regression. It’s to be noted again that AMD had positioned the 7713 as a direct successor to the 7662, against which it does fare 10% better; however, that is a $900 cheaper part.

Amongst the rather lukewarm results of the top-stack Milan SKUs, one result that stands out is the 32-core frequency-optimised 75F3. Featuring only half the cores, the part still manages to compete easily amongst its 64-core siblings, showcasing 82% of the performance of a 7713. This has rather large implications for the per-thread performance of this part, which we’ll cover on a later page.

Although AMD’s presentation slides use quite different SPEC result numbers due to very different compiler and optimisation settings, the relative positioning we’re getting in our internal results actually exceeds what AMD is presenting, with the 7763 coming in with a +128% advantage over the Intel 8280, a part whose performance is equivalent to the 6258R.
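For reference, the relative advantage figures quoted here are simply the ratio of the estimated aggregate scores expressed as a percentage gain:

\[ \text{advantage} = \left(\frac{S_{\text{7763}}}{S_{\text{8280}}} - 1\right) \times 100\% \]

so a +128% advantage corresponds to the 7763 posting roughly 2.28x the 8280’s aggregate score.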

SPECfp2017 Base Rate-N Estimated Performance

In the SPECfp2017 suite, which is more representative of HPC workloads and leans more heavily on memory performance, AMD had always retained its performance leadership, and it has now widened with the new Milan generation. The 7763 performs 14.4% better than the Rome 7742, while the 7713 essentially matches it, within margin of error.

It’s again the 75F3 that actually steals the show, as it manages 97.8% of a 7713 and 85.8% of a 7763, even though it has only half the cores.

Against the competition and the figures AMD is showing, we’re measuring the 7763 outperforming the 8280 by 108%, close to the 106% the presentation material is showcasing. Intel should be able to make a larger leap with the next-generation Ice Lake-SP server chips as the company moves from 6-channel to 8-channel memory, though we’ll have to see if that’s actually enough to catch up to AMD.

Comments

  • mkbosmans - Tuesday, March 23, 2021 - link

    Even if you have a nice two-tiered approach implemented in your software, let's say MPI for the distributed memory parallelization on top of OpenMP for the shared memory parallelization, it often turns out to be faster to limit the shared memory threads to a single socket or NUMA domain. So in the case of a 2P EPYC configured as NPS4, you would have 8 MPI ranks per compute node.

    But of course there's plenty of software that has parallelization implemented using MPI only, so you would need a separate process for each core. This is often for legacy reasons, with software that was originally targeting only a couple of cores. But with the MPI 3.0 shared memory extension, this can even today be a valid approach to well-performing hybrid (shared/distributed memory) code.
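    A minimal sketch of the hybrid layout described above (illustrative only, not anyone's production code; the launcher flags assume Open MPI) could look like this:

        /* hybrid.c - one MPI rank per NUMA domain, OpenMP threads filling the
           cores of that domain. On a 2P EPYC in NPS4 (8 NUMA domains per node)
           one might launch it as:
             mpirun -np 8 --map-by numa --bind-to numa ./hybrid
           Build: mpicc -fopenmp hybrid.c -o hybrid */
        #include <mpi.h>
        #include <omp.h>
        #include <stdio.h>

        int main(int argc, char **argv)
        {
            int provided, rank, nranks;

            /* Request a threading level that allows OpenMP inside each rank. */
            MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &nranks);

            /* Shared-memory parallel region: the threads stay within the NUMA
               domain the launcher bound this rank to. */
            #pragma omp parallel
            {
                #pragma omp single
                printf("rank %d/%d running %d OpenMP threads\n",
                       rank, nranks, omp_get_num_threads());
            }

            MPI_Finalize();
            return 0;
        }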
  • mode_13h - Tuesday, March 23, 2021 - link

    Nice explanation. Thanks for following up!
  • Andrei Frumusanu - Saturday, March 20, 2021 - link

    This is vastly incorrect and misleading.

    The fact that I'm using a cache line allocated on a third, main thread which does nothing with it is irrelevant to the real-world comparison, because from the hardware perspective the CPU doesn't know which thread owns it - in the test the hardware just sees two cores using that cache line; the third, main thread becomes completely irrelevant to the discussion.

    The thing that is guaranteed by having the main starter thread allocate the synchronisation cache line is that it remains static across the measurements. One doesn't actually have control over where this cache line ends up within the coherent domain of the whole CPU; it's going to end up in a specific L3 cache slice depending on the CPU's address hash positioning. The method here simply keeps that positioning the same across all measurements.

    There is no such thing as core-to-core latency, because cores do not snoop each other directly; they go over the coherency domain, which is the L3 or the interconnect. It's always core-to-cacheline-to-core, as anything else doesn't even exist from the hardware perspective.
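    A stripped-down sketch of such a core-to-cacheline-to-core ping-pong (illustrative only, not the article's actual benchmark code) could look like this:

        /* pingpong.c - illustrative only: two threads pinned to chosen cores
           bounce ownership of one cache line that the main thread allocated
           and touched first. Build: gcc -O2 -pthread pingpong.c */
        #define _GNU_SOURCE
        #include <pthread.h>
        #include <sched.h>
        #include <stdatomic.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <time.h>

        #define ITERS 1000000

        /* The synchronisation cache line; first touched by the main thread. */
        static _Alignas(64) atomic_int line;

        struct arg { int core; int parity; };

        static void pin_to_core(int core)
        {
            cpu_set_t set;
            CPU_ZERO(&set);
            CPU_SET(core, &set);
            pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
        }

        static void *worker(void *p)
        {
            struct arg *a = p;
            pin_to_core(a->core);
            for (int i = 0; i < ITERS; i++) {
                /* Spin until it's our turn, then hand the line to the other core. */
                while ((atomic_load_explicit(&line, memory_order_acquire) & 1) != a->parity)
                    ;
                atomic_fetch_add_explicit(&line, 1, memory_order_acq_rel);
            }
            return NULL;
        }

        int main(int argc, char **argv)
        {
            int core_a = argc > 1 ? atoi(argv[1]) : 0;
            int core_b = argc > 2 ? atoi(argv[2]) : 1;
            struct arg a0 = { core_a, 0 }, a1 = { core_b, 1 };
            pthread_t t0, t1;
            struct timespec ts0, ts1;

            atomic_store(&line, 0);            /* main thread owns the line first */
            clock_gettime(CLOCK_MONOTONIC, &ts0);
            pthread_create(&t0, NULL, worker, &a0);
            pthread_create(&t1, NULL, worker, &a1);
            pthread_join(t0, NULL);
            pthread_join(t1, NULL);
            clock_gettime(CLOCK_MONOTONIC, &ts1);

            double ns = (ts1.tv_sec - ts0.tv_sec) * 1e9 + (ts1.tv_nsec - ts0.tv_nsec);
            /* One iteration = two ownership transfers; thread setup overhead
               makes this a rough figure, not a rigorous measurement. */
            printf("cores %d<->%d: ~%.1f ns per transfer\n",
                   core_a, core_b, ns / (2.0 * ITERS));
            return 0;
        }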
  • mkbosmans - Saturday, March 20, 2021 - link

    The original thread may have nothing to do with it, but the NUMA domain where the cache line was originally allocated certainly does. How would you otherwise explain the difference between the first quadrant for socket 1 to socket 1 communication and the fourth quadrant for socket 2 to socket 2 communication?

    Your explanation about address hashing determining the L3 cache slice maybe makes sense when talking about fixing the initial thread within an L3 domain, but not why you would want that L3 domain fixed to the first one in the system, regardless of the placement of the two threads doing the ping-ponging.

    And about core-core latency, you are of course right, that is sloppy wording on my part. What I meant to convey is that the round-trip core-cacheline-core latency is more relevant (at least for HPC applications) when the cache line is local to one of the cores and not remote, possibly even on another socket than the two threads.
  • Andrei Frumusanu - Saturday, March 20, 2021 - link

    I don't get your point - don't look at the intra-remote socket figures then if that doesn't interest you - these systems are still able to work in a single NUMA node across both sockets, so it's still pretty valid in terms of how things work.

    I'm not fixing it to a given L3 in the system (except for that socket); binding a thread doesn't tell the hardware to somehow stick that cache line there forever, software has zero say in that. As you see in the results, it's able to move around between the different L3s and CCXs. Intel moves (or mirrors) it around between sockets and NUMA domains, so your premise there also isn't correct in that case; AMD currently can't, probably because they don't have a way to decide most-recent ownership between two remote CCXs.

    People may want to just look at the local-socket numbers if they prioritise that; the test method here merely exposes further, more complicated scenarios, which I find interesting as they showcase fundamental cache coherency differences between the platforms.
  • mkbosmans - Tuesday, March 23, 2021 - link

    For a quick overview of how cores are related to each other (with an allocation local to one of the cores), I like this way of visualizing it more:
    http://bosmans.ch/share/naples-core-latency.png
    Here you can for example clearly see how the four dies of the two sockets are connected pairwise.

    The plots from the article are interesting in that they show the vast difference between the cache coherency protocols of AMD and Intel. And the numbers from the Naples plot I've linked can mostly be obtained from the more elaborate plots in the article, although it is not entirely clear to me exactly how to extend the data to form my style of plots. That's why I prefer to measure the data I'm interested in directly and plot that.
  • imaskar - Monday, March 29, 2021 - link

    Looking at the shares sinking, this pricing was a miss...
  • mode_13h - Tuesday, March 30, 2021 - link

    Prices are a lot easier to lower than to raise. And as long as they can sell all their production allocation, the price won't have been too high.
  • Zone98 - Friday, April 23, 2021 - link

    Great work! However I'm not getting why in the c2c matrix cores 62 and 74 wouldn't have a ~90ns latency as in the NW socket. Could you clarify how the test works?
  • node55 - Tuesday, April 27, 2021 - link

    Why are the CPUs not consistent?

    Why do you switch between 7713 and 7763 on Milan and 7662 and 7742 on Rome?

    Why do you not have results for all the server CPUs? This confuses the comparison of e.g. 7662 vs 7713 (my current buying decision).
