Gaming Performance Benchmarks: DDR5-4800

To show the performance of DDR5 memory in different configurations, we've opted for a shorter, more selective set of benchmarks from our test suite: Civilization VI, Grand Theft Auto V, and Strange Brigade (DirectX 12).

All of the tests were run with the memory at its default (JEDEC) settings, which means DDR5-4800 CL40 regardless of the configuration, e.g., 2x16 GB, 2x32 GB, and 4x16 GB.
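For reference, those JEDEC numbers map directly onto theoretical peak bandwidth and first-word latency. A quick back-of-the-envelope sketch of that arithmetic (in Python, purely illustrative; the article itself includes no code):

```python
# Theoretical peak bandwidth and CAS latency for JEDEC DDR5-4800 CL40.
transfers_per_sec = 4800e6   # 4800 MT/s
bytes_per_transfer = 8       # 64 bits per DIMM per transfer

bandwidth_per_dimm = transfers_per_sec * bytes_per_transfer   # bytes/s
dual_channel = 2 * bandwidth_per_dimm

# CAS latency in nanoseconds: CL cycles at the memory clock, which is
# half the transfer rate since DDR moves data twice per clock.
cl = 40
memory_clock_hz = transfers_per_sec / 2    # 2400 MHz
cas_ns = cl / memory_clock_hz * 1e9

print(f"Per DIMM    : {bandwidth_per_dimm / 1e9:.1f} GB/s")  # 38.4 GB/s
print(f"Dual channel: {dual_channel / 1e9:.1f} GB/s")        # 76.8 GB/s
print(f"CL40 latency: {cas_ns:.2f} ns")                      # 16.67 ns
```

The takeaway is that every kit here starts from the same 16.67 ns CAS latency and 38.4 GB/s per-DIMM ceiling; any differences in the results below come from rank count and DIMMs-per-channel topology, not from the rated speed.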

Civilization VI

Originally penned by Sid Meier and his team, the Civilization series of turn-based strategy games is a cult classic, and the excuse for many an all-nighter spent trying to get Gandhi to declare war on you thanks to an integer underflow. Truth be told, I never actually played the first version, but I have played every edition from the second to the sixth, including the fourth as voiced by the late Leonard Nimoy. It is a game that is easy to pick up, but hard to master.

Benchmarking Civilization has always been somewhat of a contradiction: for a turn-based strategy game, the frame rate is not necessarily the important thing, and in the right mood something as low as 5 frames per second can be enough. With Civilization VI, however, Firaxis went hardcore on visual fidelity, trying to pull you into the game. As a result, Civilization can be taxing on both the GPU and the CPU as we crank up the details, especially in DirectX 12.

[Charts: Civilization VI at 1080p Max and 4K Min, Average FPS and 95th Percentile]

Despite games traditionally being GPU-bottlenecked rather than CPU/memory-bottlenecked, in our Civilization VI testing we do find some small but statistically meaningful differences in our results. The 2 x 32 GB kits were the best of the bunch, with the Samsung 2 x 16 GB kit running slightly slower. The Samsung 4 x 16 GB kit, however, performed a couple of frames per second behind the rest, coming in a bit over 3% slower than the 2 x 32 GB Samsung kit.

Grand Theft Auto V

This highly anticipated iteration of the Grand Theft Auto franchise hit the shelves on April 14th, 2015, with both AMD and NVIDIA helping to optimize the title. At this point GTA V is super old, but still super useful as a benchmark: it is a complicated test with many features that modern titles still struggle with. With rumors of GTA 6 on the horizon, I hope Rockstar makes that benchmark as easy to use as this one.

GTA V doesn't provide graphical presets, instead opening all of the options up to users and pushing even the hardiest systems to their limits with Rockstar's Advanced Game Engine under DirectX 11. Whether the user is flying high in the mountains with long draw distances or dealing with assorted trash in the city, cranking the settings up to maximum creates stunning visuals but hard work for both the CPU and the GPU.

The in-game benchmark consists of five scenarios: four short panning shots with varying lighting and weather effects, and a fifth action sequence that lasts around 90 seconds. We use only the final part of the benchmark, which combines a flight scene in a jet, an inner-city drive-by through several intersections, and finally ramming a tanker that explodes, setting off other cars as well. This is a mix of distance rendering followed by a detailed near-rendering action sequence, and the title thankfully spits out frame time data. The benchmark can also be called from the command line, making it very easy to use.
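Since the title spits out frame time data, turning a run into the average and 95th percentile figures charted below is a simple reduction. A minimal sketch, assuming a plain one-value-per-line dump of frame times in milliseconds (the frametimes.csv name and layout are hypothetical, not Rockstar's actual output format):

```python
import numpy as np

# Load per-frame times in milliseconds, one value per line.
# The file name and format here are assumptions for illustration.
frame_times_ms = np.loadtxt("frametimes.csv")

# Average FPS over the run: total frames divided by total elapsed time.
avg_fps = len(frame_times_ms) / (frame_times_ms.sum() / 1000.0)

# "95th percentile" in game benchmarking usually means the frame rate
# that 95% of frames meet or exceed, i.e. the 95th-percentile slowest
# frame time converted back to FPS.
p95_frame_time = np.percentile(frame_times_ms, 95)
p95_fps = 1000.0 / p95_frame_time

print(f"Average FPS: {avg_fps:.1f}")
print(f"95th percentile FPS: {p95_fps:.1f}")
```

Working from frame times rather than an FPS counter is what makes the percentile figures meaningful: a handful of long frames barely moves the average but shows up immediately in the 95th percentile.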

[Charts: Grand Theft Auto V at 1080p Max and 4K Low, Average FPS and 95th Percentile]

Using Grand Theft Auto V's built-in benchmark at 1080p, all of the JEDEC DDR5-4800B kits performed competitively with each other, albeit with a higher degree of variability than usual due to the nature of the game. In our 4K testing, however, the Samsung 4 x 16 GB kit once again brings up the rear, this time falling behind the 2 x 32 GB kit by 7%.

Strange Brigade (DX12)

Strange Brigade is set in 1930s Egypt and follows a story very similar to that of the Mummy film franchise. This third-person shooter is developed by Rebellion Developments, which is more widely known for the Sniper Elite and Alien vs Predator series. The game follows the hunt for Seteki the Witch Queen, who has arisen once again, and the only 'troop' that can ultimately stop her. Gameplay is cooperative-centric, with a wide variety of levels and many puzzles that need solving by the British colonial Secret Service agents sent to put an end to her reign of barbarism and brutality.

The game supports both the DirectX 12 and Vulkan APIs and houses its own built-in benchmark, which offers various options for customization, including textures, anti-aliasing, reflections, and draw distance, and even allows users to enable or disable motion blur, ambient occlusion, and tessellation, among others. AMD has previously boasted about Strange Brigade's Vulkan implementation, which offers scalability for AMD multi-graphics card configurations. For our testing, we use the DirectX 12 benchmark.

[Charts: Strange Brigade DX12 at 1080p Ultra and 4K Low, Average FPS and 95th Percentile]

There wasn't much difference between the 2 x 32 GB kits in our Strange Brigade DirectX 12 testing. At 4K, the Samsung 4 x 16 GB kit once again performed slightly slower than the rest, although Samsung's 2 x 16 GB configuration performed in line with the 2 x 32 GB kits.

Comments

  • Oxford Guy - Thursday, April 14, 2022

    Anyone?
  • WhatYaWant - Friday, April 8, 2022

    How would 1x32 compare to 2x16?
  • MDD1963 - Friday, April 8, 2022

    Seems like only a few years ago that even 32 GB was the 'you'll NEVER need that much RAM!' line in the silicon sand, and I bought it and... still don't really need it; now it seems 2x 32 GB sticks will be the new norm? :) (Break out the wallet if you need 64 GB of DDR5-6400 for some 'future proofing'! :)
  • Leeea - Friday, April 8, 2022

    Interesting you say that, because it seems like 16 GB is still enough.

    The only advantage seems to be Windows RAM caching, which is nice, but not all that noticeable with NVMe SSDs.
  • Icehawk - Sunday, April 10, 2022

    Agreed, 16 GB is still plenty for almost all users, particularly at home where you won't be running 23 agents like my work PC (which uses ~6 GB of RAM at idle!). I went 32 GB on my current machine because DDR4 was cheap and it was like $100 more, but even with lots of multitasking, gaming, and encoding I don't get close to the limits of 16 GB, let alone 32. Now if I wanted to run some VMs that would be a different story, but that's not what most people do.
  • RSAUser - Tuesday, April 19, 2022

    Define your use-case: the average user will still not hit 8 GB nowadays, and 16 GB will cover 90-95% of users. For those who need more than that, you start talking about people running DBs, VMs, etc., and there it will constantly be whatever amount of RAM is affordable/enough of a gain at that price point. For me right now it's 64 GB, but for someone else that can still be 16 GB.
  • lightningz71 - Friday, April 8, 2022

    Thank you for this test, I appreciate the work that went into it.

    I see a potential limitation in the applicability of this test to a variety of use cases. Choosing to only do the tests with the 12900K drastically limits the effects of memory latency on most of your tests because the 12900K has a significant amount of L3 cache. That large amount of cache will serve to significantly reduce the penalty of the higher latency of various RAM configurations. I would like to see the same tests run with a 12400 or 12100 to see how the much lower cache levels of those processors affects the test results.
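
    To illustrate the commenter's point about cache masking DRAM latency, a back-of-the-envelope average memory access time (AMAT) sketch; every number below is an assumption for illustration, not a measurement of any specific CPU:

```python
# AMAT = L3 hit time + miss rate * DRAM latency. The larger the L3
# hit rate, the less the DRAM latency difference between kits matters.
def amat(l3_hit_ns, dram_ns, l3_hit_rate):
    return l3_hit_ns + (1 - l3_hit_rate) * dram_ns

dram_fast, dram_slow = 70.0, 85.0   # hypothetical DRAM latencies (ns)

for hit_rate in (0.50, 0.90, 0.98):
    delta = amat(12.0, dram_slow, hit_rate) - amat(12.0, dram_fast, hit_rate)
    print(f"L3 hit rate {hit_rate:.0%}: slower RAM costs {delta:.1f} ns per access")
# At a 98% hit rate the 15 ns DRAM gap shrinks to 0.3 ns per access;
# at 50% it is still 7.5 ns, which is why parts with smaller caches
# could plausibly show larger spreads between these kits.
```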
  • The_Assimilator - Friday, April 8, 2022

    Very interesting that none of these DIMMs are half-height, seemingly it's cheaper to use a single full-height PCB even if you only populate half of it.
  • Hamm Burger - Saturday, April 9, 2022

    Belatedly … I think those throughput rates from AIDA should be in GB/s, not MB/s.
  • IBM760XL - Saturday, April 9, 2022

    I believe they are correct; 74 thousand MBps = 74 GBps, which sounds reasonable; 74 TBps would be excessive.

    (Although it would look incorrect if viewed from a country that uses the , as the decimal separator, rather than the thousands separator)
