High performance computing has reached the point where being number one requires very powerful, very efficient hardware, lots of it, and the capability to deploy it at scale. Deploying a single rack of servers totaling a couple of thousand cores isn’t going to cut it. The former #1 supercomputer, Summit, is built from 22-core IBM Power9 CPUs paired with NVIDIA GV100 accelerators, totaling 2.4 million cores and consuming 10 megawatts of power. The new Fugaku supercomputer, built at Riken in partnership with Fujitsu, takes the top spot on the June 2020 TOP500 list, with 7.3 million cores and a power draw of 28 megawatts.

The new Fugaku supercomputer is bigger than Summit in practically every way. It has 3.05x the cores, scores 2.8x higher in the official LINPACK benchmark, and consumes 2.8x the power. It also marks the first time that an Arm-based system sits at number one on the TOP500 list.

Due to the onset of the coronavirus pandemic, Riken accelerated the deployment of Fugaku in recent months. On May 13th, Riken announced that more than 400 racks, each featuring multiple 48-core A64FX CPUs per server, had been deployed. Installation had started back in December, but Riken was so keen on getting the supercomputer up and running to assist with pandemic R&D as soon as possible that the server racks didn't even have their official front panels fitted when they started working. There are still additional resources to add, with full operation scheduled to begin in Riken's Fiscal 2021, suggesting that Fugaku's compute values on the TOP500 list are set to rise even higher.

Alongside taking #1 in the TOP500, Fugaku enters the Green500 list at #9, just behind Summit and below the Fugaku prototype installation, which sits at #4.

At the heart of Fugaku is the A64FX, a custom Armv8.2-A CPU optimised for compute. The total configuration uses 158,976 of these 48+4-core CPUs, running at a 2.2 GHz peak clock (48 cores for compute, four assistant cores for OS and I/O duties). This allows for some substantial Rpeak numbers, such as 537 PetaFLOPs of FP64, the usual TOP500 metric. But A64FX also supports lower-precision data types, which is where we get into some fun numbers for Fugaku:

  • FP64: 0.54 ExaFLOPs
  • FP32: 1.07 ExaFLOPs
  • FP16: 2.15 ExaFLOPs
  • INT8: 4.30 ExaOPs

Due to the design of the A64FX, the system also offers a total memory bandwidth of 163 PetaBytes per second.
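
For the curious, these headline figures can be reproduced from the per-chip specifications with simple arithmetic. The sketch below is our own back-of-envelope check, not an official benchmark; the 1,024 GB/s of HBM2 bandwidth per CPU (four stacks at 256 GB/s each) is an assumption on our part.

```c
#include <stdio.h>

int main(void)
{
    /* Machine-level figures quoted above: 158,976 A64FX CPUs, 48 compute
       cores each, two 512-bit SVE pipes per core with fused multiply-add,
       at the 2.2 GHz peak clock. */
    const double cpus  = 158976.0;
    const double cores = 48.0;       /* compute cores per CPU               */
    const double pipes = 2.0;        /* 512-bit SVE pipes per core          */
    const double fma   = 2.0;        /* a fused multiply-add = 2 operations */
    const double clock = 2.2e9;      /* Hz                                  */

    /* 512-bit vectors hold 8 FP64 lanes; halving the element size doubles
       the lane count and therefore the throughput. */
    const double fp64_lanes = 512.0 / 64.0;
    const double fp64_peak  = cpus * cores * pipes * fma * fp64_lanes * clock;

    printf("FP64 peak: %.2f ExaFLOPs\n", fp64_peak / 1e18);       /* ~0.54 */
    printf("FP32 peak: %.2f ExaFLOPs\n", fp64_peak * 2 / 1e18);   /* ~1.07 */
    printf("FP16 peak: %.2f ExaFLOPs\n", fp64_peak * 4 / 1e18);   /* ~2.15 */
    printf("INT8 peak: %.2f ExaOPs\n",   fp64_peak * 8 / 1e18);   /* ~4.30 */

    /* Aggregate memory bandwidth, assuming 1,024 GB/s of HBM2 per CPU. */
    printf("Memory bandwidth: %.0f PB/s\n", cpus * 1.024e12 / 1e15); /* ~163 */
    return 0;
}
```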

To date, the A64FX compute card is the only implementation of Arm’s v8.2-A Scalable Vector Extensions (SVE). The goal of SVE is to allow Arm’s customers to build hardware with vector units ranging from 128-bit to 2048-bit, such that any software that is built to run on SVE will automatically scale regardless of the SVE execution unit size. A64FX uses two 512-bit wide pipes per core, with 48 compute cores per chip, and also adds in four 8 GiB HBM2 links per chip in order to feed the units for 1 TiB/s of total bandwidth into the chip.
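
As an illustration of what vector-length-agnostic code looks like, here is a minimal sketch of a scaled vector-add loop written with the Arm C Language Extensions (ACLE) for SVE. The function and loop are our own example, not Fujitsu's code; the point is that the source never hard-codes a vector width, so the same binary processes 8 doubles per iteration on A64FX's 512-bit pipes and 2 doubles per iteration on a minimal 128-bit implementation.

```c
#include <arm_sve.h>   /* compile with, e.g., -march=armv8.2-a+sve */
#include <stdint.h>

/* y[i] += a * x[i], written without any hard-coded vector width. */
void daxpy_sve(double a, const double *x, double *y, int64_t n)
{
    for (int64_t i = 0; i < n; i += svcntd()) {   /* svcntd(): doubles per vector */
        svbool_t pg = svwhilelt_b64(i, n);        /* predicate covers the loop tail */
        svfloat64_t vx = svld1_f64(pg, &x[i]);    /* contiguous masked load */
        svfloat64_t vy = svld1_f64(pg, &y[i]);
        vy = svmla_n_f64_m(pg, vy, vx, a);        /* vy += vx * a (fused multiply-add) */
        svst1_f64(pg, &y[i], vy);                 /* masked store */
    }
}
```

Each A64FX core can issue two such 512-bit operations per cycle, which is where the 32 FP64 operations per core per cycle in the peak calculation above come from.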

As listed above, the chip supports data types from INT8 through FP64, and it has an on-board custom Tofu interconnect supporting up to 560 Gbps of bandwidth to other A64FX nodes. The chip is built on TSMC's N7 process and comes in at 8.79 billion transistors. Fujitsu claims 90% execution efficiency for DGEMM-type workloads, and additional mechanisms such as combined gather and unaligned SIMD loads help keep throughput high. There is also additional tuning that can be done at the power level for optimization, and extensive internal RAS (over 128k error checkers in silicon) helps ensure accuracy.

Details on the A64FX chip were disclosed at Hot Chips in 2018, and we saw wafers and chips at Supercomputing in 2019. This chip is expected to be the first in a series of chips from Fujitsu along a similar HPC theme.

Work done on Fugaku to date includes simulations around Japan's COVID-19 track-and-trace app. According to Professor Satoshi Matsuoka, predictions calculated on Fugaku suggested that the app needs around 60% uptake in the population in order to be successful. Droplet simulations of virus transmission have also been performed. Deployment of A64FX is set to go beyond Riken, with Sandia Labs also set to have an A64FX-based system in the US.

Source: TOP500


Comments

  • TeXWiller - Monday, June 22, 2020 - link

    I wouldn't call these accelerator cards. The PCIe lanes are there to connect to a local IO and management for example, not to a host CPU.
  • SarahKerrigan - Monday, June 22, 2020 - link

    Indeed. There's no host - the entire software stack runs on the A64FX nodes themselves. This is a 100% CPU-only, accelerator-free system.
  • jeremyshaw - Monday, June 22, 2020 - link

    I think the writer may have confused these with the NEC Aurora cards.
  • jeremyshaw - Monday, June 22, 2020 - link

    Not overall, but in that moment. I'm assuming it's now edited away (to perfection)!
  • Ian Cutress - Monday, June 22, 2020 - link

    Anything that isn't instantly called a CPU I auto default to an accelerator card. My fault, it's been updated.
  • mode_13h - Monday, June 22, 2020 - link

    It sounds like each chip has 4 cores that act kind of like a host.
  • eastcoast_pete - Monday, June 22, 2020 - link

    Thanks Ian! Isn't one of the key differences of this new supercomputer to most others in the top 5 that it doesn't rely on GPU-like accelerators for its speed? That makes setups like Fugaku more broadly usable, at least as far as I know or was told.
  • mode_13h - Monday, June 22, 2020 - link

    If SVE doesn't have inter-lane operations, then I don't see it being materially different than a GPU. To get good performance out of it, you're going to have to program it much like one.
  • eastcoast_pete - Monday, June 22, 2020 - link

    Don't disagree, except that this setup can actually run programs that aren't specifically written for GPU (or wide SVE), where accelerator-based systems often simply cannot. And, at least according to people who know this much better than I do, the need to have a program (and a problem) that is suited to limited routines (GPU-type or SVE) can be a real monkey wrench when you want a solution for a problem really quickly. Also, how long does it take to customize a computational approach to an accelerator even if it can be done? I can see how the time savings from the use of a supercomputer could be eaten up by the delay in getting the program ready rather frequently.
  • jeremyshaw - Monday, June 22, 2020 - link

    The main advantage of this sort of layout is the FPUs should have access to the CPU registers. Same with the cache, at the same speeds as the rest of the execution resources.

    Also, hopefully less context switching penalty (or failed branch penalty) than a traditional GPU.
