Engineers from the Micron Development Center in Munich (also known as the Graphics DRAM Design Center) are well known around the industry for their contributions to the development of multiple graphics memory standards, including GDDR4 and GDDR5. The MDC engineers also played a key role in the development of GDDR5X memory, which is expected to be used on some of the upcoming video cards. Micron disclosed the first details about GDDR5X in September of last year, publicizing the existence of the standard ahead of its later JEDEC ratification and offering a brief summary of what to expect. Since then the company has been quiet about its progress with GDDR5X, but in a new blog post published this week it is touting the results from its first samples and offering an outline of when it expects to go into volume production.

The GDDR5X standard, as you might recall, is largely based on GDDR5 technology, but it features three important improvements: considerably higher data-rates (up to 14 Gbps per pin, or potentially even higher), substantially higher capacities (up to 16 Gb per chip), and improved energy efficiency (bandwidth per watt) thanks to its 1.35V supply and I/O voltages. To increase performance, GDDR5X uses new quad data rate (QDR) signaling to increase the amount of data transferred per clock, which in turn allows it to use a wider 16n prefetch architecture, enabling up to 512 bits (64 bytes) per array read or write access. Consequently, GDDR5X promises to double the performance of GDDR5 while consuming a similar amount of power, which is a very ambitious goal.
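To put those numbers in perspective, here is a quick back-of-the-envelope sketch in Python. It is illustrative arithmetic only: the 32-bit per-chip data interface is the standard width for GDDR5-class ICs, and the 7 Gbps GDDR5 figure is a typical mature data-rate we are using for comparison, not something Micron quotes.

```python
# Illustrative arithmetic only: peak per-chip bandwidth from per-pin data rate.
# GDDR5 and GDDR5X ICs both expose a 32-bit data interface per chip.
CHIP_IO_WIDTH_BITS = 32

def chip_bandwidth_gbs(data_rate_gbps_per_pin: float) -> float:
    """Peak bandwidth of a single memory IC in GB/s."""
    return data_rate_gbps_per_pin * CHIP_IO_WIDTH_BITS / 8

print(chip_bandwidth_gbs(7.0))   # mature GDDR5 (7 Gbps):   28.0 GB/s per chip
print(chip_bandwidth_gbs(14.0))  # GDDR5X target (14 Gbps): 56.0 GB/s per chip

# The wider prefetch is what feeds those rates: 16n x 32 bits = 512 bits
# (64 bytes) per array access, versus 8n x 32 bits = 256 bits for GDDR5.
print(16 * 32, "bits per GDDR5X array access")  # 512
```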

In its blog post, Micron reports that it already has its first samples back from the fab, earlier than expected, and that these samples are operating at data-rates above 13 Gbps in the lab. At present, the company is in the middle of testing its GDDR5X production line, and it will be sending samples to its partners this spring.

Thanks to the 10% reduction of Vdd/Vddq (to 1.35V) as well as new features such as per-bank self refresh, hibernate self refresh, and partial array self refresh, Micron's 13 Gbps GDDR5X chips do not consume more energy than GDDR5 ICs (integrated circuits): 2–2.5W per component (i.e., 10–30W per graphics card), just as the company promised several weeks ago. And since not all applications need maximum bandwidth, in certain cases using GDDR5X instead of its predecessor will help to reduce power consumption.

GDDR5X memory chips will come in new packages, which will be slightly smaller (14×10mm vs. 14×12mm) than GDDR5 ICs despite an increase in ball count (190-ball BGA vs. 170-ball BGA). According to Micron, the denser ball placement, reduced ball diameter (0.4mm vs. 0.47mm) and smaller ball pitch (0.65mm vs. 0.8mm) make PCB traces slightly shorter, which should ultimately improve electrical performance and system signal integrity. Keeping in mind the higher data-rates of the GDDR5X interface, improved signal integrity is just what the doctor ordered. The GDDR5X package maintains the same 1.1mm height as its predecessor.

Micron is using its 20 nm memory manufacturing process to make the first-generation 8 Gb GDDR5X chips. The company has been using the technology to make commercial DRAM products for several quarters now. As the company refines its fabrication process and IC designs, yields and data-rate potential will increase. Micron remains optimistic about eventually hitting 16 Gbps data-rates with its GDDR5X chips, but does not disclose when it expects that to happen.

All of that said, at this time the company has not yet finalized its GDDR5X product lineup, and nobody knows for sure whether commercial chips will hit 14 Gbps this year with first-generation GDDR5X controllers. Typically, early adopters of new memory technologies tend to be rather conservative. For example, AMD's Radeon HD 4870 (the world's first video card to use GDDR5) was equipped with 512 MB of memory running at a 3.6 Gbps data-rate, whereas Qimonda (the company whose graphics DRAM team went on to form Micron's Graphics DRAM Design Center) offered chips rated for 4.5 Gbps at the time.

The first-gen GDDR5X memory chips from Micron have an 8 Gb capacity, and hence they will cost more than the 4 Gb chips used on graphics cards today. Moreover, due to the increased pin count, the implementation cost of GDDR5X could be a little higher than that of GDDR5 (i.e., PCBs will get more complex and more expensive). That said, we don't expect to see GDDR5X showing up in value cards right away, as this is a high-performance technology that will have a roll-out similar to GDDR5's. At the higher end, however, a video card featuring a 256-bit memory bus would be able to boast 8 GB of memory and 352 GB/s of bandwidth.
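For what it's worth, those numbers line up if we assume 11 Gbps first-generation chips, a per-pin rate we are inferring from the quoted 352 GB/s rather than anything Micron has stated:

```python
# Sanity check of the 256-bit example above, assuming 11 Gbps chips
# (the per-pin rate implied by 352 GB/s; not confirmed by Micron).
bus_width_bits = 256
data_rate_gbps = 11.0

chips = bus_width_bits // 32            # 8 ICs with 32-bit interfaces each
capacity_gbytes = chips * 8 // 8        # 8 chips x 8 Gb = 64 Gb = 8 GB
bandwidth_gbs = data_rate_gbps * bus_width_bits / 8

print(f"{chips} chips, {capacity_gbytes} GB, {bandwidth_gbs:.0f} GB/s")
# -> 8 chips, 8 GB, 352 GB/s
```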

Finally, Micron also announced in its blog post that it intends to commence high-volume production of GDDR5X chips in mid-2016, sometime in the summer. It is unknown precisely when the first graphics cards featuring the new type of memory will hit the market, but given the timing, it looks like this will happen in 2016.

Source: Micron

Comments

  • testbug00 - Tuesday, February 9, 2016 - link

    And any rumors of GDDR5X cards before Q4 are now almost 100% false.
  • extide - Tuesday, February 9, 2016 - link

    Well, Micron isn't the only manufacturer. You have Hynix, Samsung, etc, as well.
  • Samus - Tuesday, February 9, 2016 - link

    The other concern is that this memory technology is more complex to implement than Micron originally led the public to believe. I am a big fan of updating the GDDR5 standard as opposed to relying on HBM, because HBM has too many drawbacks, specifically cost and implementation complexity.

    But now it is obvious GDDR5X is nearly as complex to implement as HBM (sans the interposer and die packaging) because the pin count, package size and PCB requirements are all different.

    After what Samsung revealed about HBM's memory controller requirements, it is basically drop-in compatible with any GPU memory controller that supports GDDR5, and just comes down to video BIOS support (further confirmed by AMD's Polaris shipping in both GDDR5 and HBM cards).

    That isn't to say old GPU architectures will be optimized for HBM, but it's just another knock against GDDR5X's supposed flexibility.

    What I'm getting at is... GDDR5X isn't what we have been led to believe. If it's as radically different from GDDR5 as we're now being told, from being QDR to having an entirely different pinout and package size... this isn't just GDDR5 with a frequency bump and a voltage drop. It will not just "drop in" on existing video cards. Entirely new reference designs and complete OEM cooperation are going to be needed to implement it, which sucks because that means it isn't going to be cheap, and it probably won't show up on any current-gen GPU architectures because AMD and nVidia are just going to wait for their next GPUs (Polaris and Pascal) instead of creating new reference designs for Fiji and Maxwell 2.

    And I doubt any OEM partners are going to make the R&D investment into fitting GDDR5X to older GCN or Maxwell products without AMD or nVidia telling them how to do it. Where do those extra 20 pins for each memory die even go? If the voltage is lower, then that would theoretically mean a reduction in pins.
  • close - Wednesday, February 10, 2016 - link

    Well, the name says GDDR5 but it's effectively "GQDR". So it may be a larger departure from GDDR than the name suggests. Maybe at some point this QDR technology will be integrated into HBMx, given that the channel interface for HBM still uses DDR buses.
  • BurntMyBacon - Wednesday, February 10, 2016 - link

    @close: "Maybe at some point this QDR technology will be integrated into HBMx given that the channel interface for HBM still has DDR buses."

    Entirely reasonable. Given the stricter control over line lengths, the power distribution advantage afforded by using a single piece of silicon (interposer) vs. multiple discrete chips, and the higher number of "pins" that can be supported by the interposer, I'd say there is less of a barrier to entry here. It probably won't happen immediately due to lack of need, but once GPU designers figure out how to make good use of HBM's throughput, they may start considering it.
  • BurntMyBacon - Wednesday, February 10, 2016 - link

    @Samus: "After what was revealed by Samsung about HBM's memory controller requirements, it is basically drop-in compatible with any GPU's memory controller that supports GDDR5 and just comes down to video bios support (further confirmed by AMD's Polaris shipping in GDDR5 and HBM cards)"

    To be fair, I think this is also being oversimplified, but your point still stands. There does not appear to be as much of an advantage to GDDR5X implementation over HBM as we were led to believe.

    @Samus: "That isn't to say old GPU architectures will be optimized for HBM, but it's just another knock against GDDR5X's supposed flexibility."

    I think the key here will be the optimization. There is no reason to incur the cost of HBM if your chip is designed with a 256-bit wide bus. If your bus is wider than 512-bit, then GDDR isn't practical. Higher bit-width buses are usually reserved for higher-end parts anyway due to complexity and cost.

    I imagine we'll see GDDR5X used to reduce bus width while maintaining the same performance, or used to boost performance where bus width is constrained. One example might be bringing previous high-end/mainstream designs down to the mainstream/entry level. Another could be squeezing more performance or battery life out of laptop chips. If the power requirements are the same for higher bandwidth, then reducing the bus width while maintaining the same bandwidth would reduce power requirements, extending battery life (or potentially freeing up more power for GPU performance).
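
    Back-of-the-envelope, with hypothetical (but plausible) data rates just to illustrate the trade-off:

    ```python
    # Hypothetical numbers: matching bandwidth at half the bus width.
    def bandwidth_gbs(rate_gbps: float, bus_bits: int) -> float:
        return rate_gbps * bus_bits / 8

    print(bandwidth_gbs(8.0, 256))   # GDDR5,  256-bit: 256.0 GB/s
    print(bandwidth_gbs(16.0, 128))  # GDDR5X, 128-bit: 256.0 GB/s, half the traces
    ```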
  • Samus - Wednesday, February 10, 2016 - link

    Good point on the bus bandwidth. Considering most nVidia cards are 128-bit and historically top out between 192-bit and 256-bit (there are some 320-bit and 384-bit anomalies), they will probably stick with GDDR5X for Pascal unless it ends up having a wide 512-bit bus like some AMD GCN designs. Obviously Fiji has a 4096-bit HBM bus (1024 bits per stack), but it was built from the ground up around HBM, and this GPU would be crippled by a 256-bit GDDR5X implementation.
  • xdesire - Tuesday, February 9, 2016 - link

    Seems like HBM2 for high-end gaming and probably the pro segment, GDDR5X for the mid-range (maybe some mid-range cards could see HBM1/2?), GDDR5 for the lower segment, and DDR3 for the bottom line then? I'm waiting to see an HBM2 vs. GDDR5X comparison soon (price/performance/capacity/power consumption, etc.). Nonetheless, 2016 will be the year of great battles in the tech realm :)
  • extide - Tuesday, February 9, 2016 - link

    I am willing to bet that GDDR5 will be phased out relatively quickly, in favor of GDDR5X (or HBM on the top tier cards). Bottom end will still be DDR3 (ugh!). There is little reason for GDDR5 to stick around as I bet GDDR5X will be very cost competitive relatively quickly.

    I have said this before, but I bet we will see one GPU from AMD with HBM, and one or two from nVidia. GP100 for sure, and GP104 maybe GDDR5X or HBM.
  • DanNeely - Tuesday, February 9, 2016 - link

    I'd expect the bottom end to switch to DDR4 fairly quickly. At Newegg, a 4GB DIMM (the smallest offered size of DDR4) is only $20 vs. $17 for DDR3, so the price gap is almost gone. On the technical side, the higher throughput will be a pure win, while the increased random access latency that limits real-world gains on the CPU is largely irrelevant, since gaming GPUs stream large sequential chunks of memory instead of hammering the system with random accesses.

    I'd be shocked if GP104 doesn't offer HBM; even if GDDR5X is "good enough", they need to offer it on top-end cards for marketing reasons. GP100 doesn't really count here, since after skipping the high-performance market with Maxwell, initial production is going to be sucked up by customers willing to pay more per card than even the most tricked-out gaming PC costs. Titan might be GP100, but the 1080/1070 will almost certainly be GP104.
