Demonstrating their commitment to keep improving the AM4 platform, AMD has just published a suite of details about their upcoming AGESA firmware. Of particular interest, the latest firmware will enhance memory overclocking and compatibility, as well as add a much-needed virtualization-related feature.

AGESA is an acronym for “AMD Generic Encapsulated System Architecture”, and it is essentially the foundational code on which BIOS files for AM4 motherboards are built. When the Ryzen AM4 platform launched back in March, the early AGESA versions lacked many of the core capabilities and settings that we have come to expect from a modern platform. As a result, motherboard manufacturers did not have a lot to work with when it came to creating feature-rich custom BIOSes for their motherboards. Since then, AMD has been quite vocal and proactive about fixing bugs, opening up new BIOS features, and improving overclocking.

With this new AGESA version, AMD has added 26 new memory-related parameters. The most dramatic improvement is the significant expansion of memory speed options. If we exclude base clock (BCLK) overclocking - which relatively few motherboards support - the AM4 platform has thus far been effectively limited to memory speeds of DDR4-3200. Not only that, but the supported options from DDR4-1866 to DDR4-3200 came in large 266 MT/s increments. With the new AGESA, memory frequencies have not only been expanded all the way up to DDR4-4000, but between DDR4-2667 and DDR4-4000 the increments have been reduced to 133 MT/s. Not only does this mean that more memory kits can run at their rated speed - rather than getting kicked down to the nearest supported speed - but it also significantly narrows the high-speed memory gap between the AM4 platform and Intel's mainstream LGA1151 platform.
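As a rough sketch of the arithmetic (assuming the conventional 133.33 MT/s granularity of DDR4 speed grades; AMD's actual divider table is not published here and may differ slightly), the new range works out to eleven steps:

```python
# Illustrative arithmetic only, not AMD's actual divider table: the
# DDR4-2667 .. DDR4-4000 range in ~133 MT/s increments corresponds to
# multipliers 20..30 of a 200/1.5 MT/s step (a 66.67 MHz memory clock step).
STEP_MTS = 200 / 1.5  # ~133.33 MT/s per step

speeds = [n * STEP_MTS for n in range(20, 31)]
for mts in speeds:
    print(f"DDR4-{round(mts)}")
```

Compare that to the old DDR4-1866 to DDR4-3200 range, which offered only six settings at 266 MT/s spacing.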

The other important announcement is the unlocking of about two dozen memory timings. Up until now, only five primary memory timings have been adjustable, and there wasn't even a command rate option; it was locked to the most aggressive 1T setting. All of this should help improve overclocking and, most importantly, compatibility with the large swathe of DDR4 memory kits that have largely been engineered with Intel platforms in mind.
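For context on what tuning these timings buys you, the first-word latency contribution of the CAS timing follows directly from the data rate. A minimal sketch (the two example kits below are hypothetical, chosen only to show typical numbers):

```python
def cas_latency_ns(data_rate_mts: float, cas_cycles: int) -> float:
    """First-word CAS latency in nanoseconds.

    DDR transfers twice per clock, so one memory-clock period in ns
    is 2000 / data_rate_mts.
    """
    return cas_cycles * 2000 / data_rate_mts

# A hypothetical DDR4-3200 CL14 kit vs. a DDR4-4000 CL19 kit: despite
# the 800 MT/s gap, their CAS latencies land within ~0.75 ns of each other.
print(cas_latency_ns(3200, 14))  # 8.75 ns
print(cas_latency_ns(4000, 19))  # 9.5 ns
```

This is why loosening timings to reach a higher data rate is often close to a wash on latency, while the bandwidth gain remains.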

The last addition should excite those interested in virtualization. AMD has announced "fresh support" for PCI Express Access Control Services (ACS), which makes it possible to manually assign PCIe graphics cards to IOMMU groups. This should be a breath of fresh air for those who have previously tried to dedicate a GPU to a virtual machine on a Ryzen system, a task that has thus far been fraught with difficulties.
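To see why ACS matters for passthrough: a VM can generally only be handed an entire IOMMU group at once, so a GPU that shares a group with unrelated devices cannot be isolated. A toy sketch of that check (the group layout and device names below are invented for illustration; on Linux the real layout lives under /sys/kernel/iommu_groups):

```python
# Hypothetical IOMMU group layout, loosely modeled on what Linux exposes;
# real group numbers and PCI addresses will differ per system.
iommu_groups = {
    13: ["01:00.0 VGA: Radeon RX 580", "01:00.1 Audio: HDMI"],
    14: ["02:00.0 VGA: GeForce GTX 1060", "02:00.1 Audio: HDMI",
         "03:00.0 Ethernet controller"],  # GPU stuck behind a shared group
}

def passthrough_safe(group):
    """A group is cleanly assignable to a VM when every function in it
    belongs to the one card being passed through (here: same bus number)."""
    buses = {dev.split(":")[0] for dev in group}
    return len(buses) == 1

for num, devices in iommu_groups.items():
    status = "OK to pass through" if passthrough_safe(devices) else "shared group"
    print(f"group {num}: {status}")
```

Without ACS, chipset-attached slots tend to collapse into one big group like group 14 above; with it, devices can be split into their own groups and assigned individually.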

AMD has already distributed the new AGESA to its motherboard partners, so BIOS updates should be available starting in mid to late June. Having said that, there are apparently beta versions currently available for the ASUS Crosshair VI and GIGABYTE GA-AX370-Gaming 5.


Source: AMD


  • TiberiusJonez - Saturday, May 27, 2017 - link

    Same as below... Get some knowledge and perspective. Go back and read some history on the first year of the Intel x99 platform. WAAAAAY more problems than this.
  • ddriver - Friday, May 26, 2017 - link

    So they finally hacked it.
  • Samus - Friday, May 26, 2017 - link

    So is this a microcode update?
  • edlee - Friday, May 26, 2017 - link

    Wow, is it just me, or did it feel like amd dropped the mic and left the room. They absolutely crushed it with this one.
  • TristanSDX - Friday, May 26, 2017 - link

Do CPUs use memory channel interleaving mode to reduce access time? I mean copy some data to two DIMMs that work with different phases, so access time is reduced two times (ideally).
  • willis936 - Friday, May 26, 2017 - link

No. DRAM has a fixed latency (around 50 ns, hasn't changed much in the past 30 years). After that it can bulk transfer the addresses listed (sequential at the starting address). So if you expect you'll need data you can load it into cache early (prefetch, done by programmer or compiler). Using two channels just doubles the throughput and also doubles the number of concurrent addresses you can access. Interleaving channels does effectively nothing for latency and throughput. It has been measured to increase latency very slightly, likely from the added work the CPU memory controller has to do.
  • emn13 - Friday, May 26, 2017 - link

    Most memory has a latency of below 15ns. Even museum pieces such as pre-DDR PC100 SDR dram had a 20ns latency. Fast normal kit is around 9ns. Sub 5ns memory seems to exist, but I've never used or seen any, so I'm not sure if there's some gotcha. That's just the memory; the processor imposes latency too. See e.g. for a few CPU-level benchmarks using DDR3-1966 CL9 (9.6ns) memory; But the TL;DR is that the processor observes around 20ns of latency. I don't believe this varies hugely depending on processor, but I'm not sure. Using slower memory might add a few ns, but it's going to be faster than 50ns by a large margin.

    But you're right that this number hasn't changed much in years.
  • Dolda2000 - Friday, May 26, 2017 - link

    15 ns would be the CAS latency. He most likely meant the row-cycle time, which is often the relevant timing for truly random accesses.
  • Dolda2000 - Friday, May 26, 2017 - link

    Oops, meant to reply to emn13, not to willis936.
  • CaedenV - Friday, May 26, 2017 - link

    Pretty much any time you have interleaved storage (be it RAM in Dual/Quad DDR, or HDDs in RAID) there is still a single host clock, which means all latency related issues remain the same as working with a single device. The advantage is typically in bulk transfer after the latency, so RAM in Dual DDR gets a ~30% file speed increase, and RAID is typically ~60% per drive... but that is only on file transfer speed. If you are doing lots of little stuff then the latency issue is going to make those transfer speed advantages disappear.
    In fact, having RAID (not so sure about Dual/Quad DDR) generally adds extra complexity to the system which will increase latency. So for bulk transfers (say... video editing) you end up with huge speed increases, but for lots of small transfers (such as a hammered database) you will actually get lower performance.

    Just goes to show that not everything is cut-and-dried 'better'. What is 'best' can change rather dramatically depending on what kinds of workloads you are doing.
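The latency-versus-throughput distinction the commenters describe can be sketched with a toy access-pattern comparison. Caveat: in Python, interpreter overhead dwarfs cache effects, so this only illustrates the two patterns; a rigorous version would be written in C against a working set far larger than the caches.

```python
import random
import time

def sum_sequential(buf):
    # In-order walk: on real hardware the prefetcher streams this in,
    # hiding most of the DRAM latency behind bulk throughput.
    return sum(buf)

def sum_shuffled(buf, order):
    # Same elements visited in a shuffled order: at large working sets,
    # each access pays close to the full memory latency.
    return sum(buf[i] for i in order)

n = 1_000_000
buf = list(range(n))
order = list(range(n))
random.Random(0).shuffle(order)

for name, call in [("sequential", lambda: sum_sequential(buf)),
                   ("shuffled", lambda: sum_shuffled(buf, order))]:
    t0 = time.perf_counter()
    total = call()
    ms = (time.perf_counter() - t0) * 1e3
    print(f"{name}: sum={total} in {ms:.1f} ms")
```

Both calls compute the same sum; only the access pattern differs, which is exactly the distinction between bulk-transfer workloads and "lots of little stuff".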
