AMD Discusses 2016 Radeon Visual Technologies Roadmap
by Ryan Smith on December 8, 2015 9:00 AM EST - Posted in
- GPUs
- Displays
- AMD
- Radeon
- DisplayPort
- HDMI
- Radeon Technologies Group
FreeSync Over HDMI to Hit Retail in Q1’16
After pushing DisplayPort FreeSync out the door earlier this year, AMD began demonstrating a further FreeSync proof of concept back at Computex 2015: FreeSync over HDMI.
Implemented over a customized version of HDMI 1.4a and utilizing a prototype Realtek timing controller (TCON), AMD was able to demonstrate variable refresh rate technology running over HDMI. At the time AMD was very clear that the purpose of the demonstration was to shop the concept around and influence the various members of the HDMI consortium, but they were also clear that bringing variable refresh rate technology to HDMI was something the company wanted to do sooner rather than later.
Sooner, as it turns out, was the operative word there. As part of their presentation last week, RTG has announced that FreeSync over HDMI will be heading to retail, and that it will be doing so very soon: Q1’16. This is just a year after the first DisplayPort adaptive sync monitors hit retail, which for a display technology is a rather speedy turnaround from proof of concept to retail product.
Now there are some key technical differences from FreeSync over DisplayPort (FS-DP) that should be noted here. Unlike FS-DP, which was just AMD’s implementation of DisplayPort adaptive sync on their GPUs and software stack, FS-HDMI is not an open standard, at least not at this time. HDMI does not have a variable refresh rate standard, and while RTG is pushing to have one included in a future version of HDMI, the HDMI consortium moves too slowly for RTG’s tastes. As a result RTG is looking to go it alone, and will be implementing FS-HDMI by creating a vendor specific extension for HDMI.
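To make the vendor specific extension idea a bit more concrete, the sketch below packs a generic HDMI Vendor-Specific InfoFrame (VSIF), the container the HDMI spec provides for exactly this kind of proprietary signaling. The container layout (packet type 0x81, a 24-bit IEEE OUI sent LSB-first, and a checksum) comes from the HDMI/CEA-861 specifications; the OUI value and the refresh-range payload are purely hypothetical, as RTG has not published the FS-HDMI wire format.

    # Hypothetical sketch: packing an HDMI Vendor-Specific InfoFrame.
    # The VSIF container is standard HDMI; the payload below is invented.

    def build_vsif(oui: int, payload: bytes) -> bytes:
        """Build a VSIF with a valid checksum (all bytes sum to 0 mod 256)."""
        assert len(payload) <= 24, "VSIF body tops out at 27 bytes (incl. 3-byte OUI)"
        body = oui.to_bytes(3, "little") + payload   # OUI goes LSB-first
        header = bytes([0x81, 0x01, len(body)])      # type, version, length
        checksum = (-(sum(header) + sum(body))) & 0xFF
        return header + bytes([checksum]) + body

    # Illustrative payload: advertise a 40-144Hz variable refresh range.
    HYPOTHETICAL_OUI = 0x00001A  # placeholder, not AMD's actual identifier
    frame = build_vsif(HYPOTHETICAL_OUI, bytes([40, 144]))
    print(frame.hex())

Monitor-side firmware on a cooperating TCON would recognize the OUI and interpret the payload, while any other sink would simply ignore the unknown InfoFrame, which is what makes vendor extensions safe within the standard.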
The use of vendor specific extensions is perfectly legal within the HDMI standard, but it does mean that FS-HDMI is proprietary, at least until such time as the HDMI standard adopts a common variable refresh rate technology. This means that FS-HDMI monitors will need to support RTG’s proprietary extensions, which in turn requires TCON/monitor vendors to work a bit more closely with RTG than was necessary with FS-DP. Meanwhile RTG for their part hasn’t yet decided what to do about the proprietary nature of their implementation – they are open to sharing it, but they also want to retain control and avoid any scenario that results in outright balkanization of HDMI variable refresh rate technology. The fact that it’s an RTG-controlled specification calls into question whether any other GPU vendor would want to implement it in the first place – so concerns about openness may prove to be moot – but it does mean that it’s going to be up to RTG to make or break FS-HDMI.
Perhaps more surprising, and certainly a feather in RTG’s cap, is that RTG has brought so many TCON vendors on-board so early. Along with Realtek, Novatek and Mstar will all be producing TCONs that support FS-HDMI, so TCONs will be available from multiple vendors relatively quickly. With variable refresh rate tech it’s the TCONs that really decide whether the tech is supported, so this is an important set of partnerships for RTG to lock in so soon. Meanwhile traditional AMD/RTG display partners such as Acer, LG, and Samsung will be producing retail monitors with FS-HDMI capabilities.
Meanwhile at this point RTG isn’t talking about GPU compatibility in great detail; however, it sounds like FS-HDMI support will be brought over to some of RTG’s current GPUs. Most likely these are the GCN 1.1+ Radeon 300 series cards, with GCN 1.1 also being the minimum requirement for FS-DP. AMD’s Carrizo APU should also support the technology, and RTG is specifically promoting that notebooks implementing an APU + dGPU Radeon dual graphics configuration will also support FS-HDMI – an important development given that DisplayPort support is non-existent on consumer AMD laptops.
In fact the lack of DisplayPort availability in displays overall is a big part of why RTG has pursued this. According to numbers from RTG, only about 30% of all monitors sold include DisplayPort, while the other 70% implement only HDMI or HDMI + DVI. Consequently FS-DP addresses an inherently limited market, and the majority of monitor buyers will never be able to use it. Meanwhile, from what I hear the actual cost of implementing variable refresh rate support on a TCON is very low, which means that RTG could get far greater penetration for FreeSync by extending it to HDMI, not to mention bring down the overall cost of entry-level FreeSync monitors. We’re still talking about a highly price sensitive commodity market – after all, there’s a reason that most monitors don’t ship with DisplayPort – but if the costs of adding FreeSync are as low as RTG hints, then there is a market among consumers who would spend a bit more on a variable refresh rate monitor but don’t know anything about display I/O standards beyond HDMI.
Finally, along those lines, it should be no surprise that the first FS-HDMI monitors that have been announced all focus on lower cost and lower resolution displays. That FS-HDMI is being implemented over HDMI 1.4 immediately rules out 4K monitors, so the announced monitors are all 1080p or ultra-wide 21:9 models at 2560x1080 and 3440x1440 (some quick bandwidth math below illustrates the constraint). Otherwise there are a few more unknowns here that I expect we’ll see addressed ahead of the Q1 launch, particularly which monitors will support a wide enough range of refresh rates for low framerate compensation to work.
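As a quick sanity check on that HDMI 1.4 limitation, a bit of back-of-the-envelope math shows which of the announced resolutions fit. The 340 MHz TMDS clock ceiling is part of the HDMI 1.4 spec; the ~12% blanking overhead is an assumed, CVT-R2-style round number rather than the exact timing of any shipping monitor.

    # Rough check of which modes fit under HDMI 1.4's 340 MHz TMDS clock.
    HDMI_14_MAX_TMDS_MHZ = 340.0

    def approx_pixel_clock_mhz(width, height, refresh_hz, blank=1.12):
        """Estimate the pixel clock; `blank` is an assumed ~12% blanking overhead."""
        return width * height * refresh_hz * blank / 1e6

    for w, h, hz in [(1920, 1080, 144), (2560, 1080, 75),
                     (3440, 1440, 60), (3840, 2160, 60)]:
        clk = approx_pixel_clock_mhz(w, h, hz)
        verdict = "fits" if clk <= HDMI_14_MAX_TMDS_MHZ else "exceeds HDMI 1.4"
        print(f"{w}x{h}@{hz}Hz: ~{clk:.0f} MHz ({verdict})")

By this estimate 4K60 needs roughly 560 MHz, far past what HDMI 1.4 can carry, while 1080p and the announced ultra-wides squeeze in under the limit.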
FreeSync Laptops: Shipping Now
Along with the FreeSync over HDMI announcement, RTG also used their event to announce the first FreeSync-capable laptop, the Lenovo Y700. One model of the laptop ships with a variable refresh rate capable 15.6” 1080p IPS panel, and when paired up with a Carrizo APU and R9 M380 GPU, it can utilize FreeSync to control the refresh rate. The one notable limitation here is that while this is otherwise a rather typical DisplayPort adaptive sync setup within a laptop, the specific panel being used only supports a range of 40Hz to 60Hz, so the first FreeSync laptop has a narrow effective range and can’t support LFC.
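For illustration, here's a minimal sketch of why a 40Hz-60Hz range rules out LFC. LFC works by repeating frames when the game's frame rate drops below the panel's minimum refresh; the commonly cited requirement is that the maximum refresh be at least ~2.5x the minimum, and that exact ratio is an assumption here, as is the simplified frame-repetition logic.

    # Minimal sketch of low framerate compensation (LFC) eligibility.
    LFC_RATIO = 2.5  # assumed max/min ratio needed for LFC to engage

    def supports_lfc(min_hz: float, max_hz: float) -> bool:
        return max_hz >= LFC_RATIO * min_hz

    def refresh_for(fps: float, min_hz: float, max_hz: float) -> float:
        """Repeat each frame until the effective rate lands inside the range."""
        repeats = 1
        while fps * repeats < min_hz:
            repeats += 1
        return min(fps * repeats, max_hz)

    print(supports_lfc(40, 60))      # False: 60/40 is only 1.5x. E.g. 39 fps
                                     # doubled would be 78Hz, past the 60Hz cap.
    print(supports_lfc(30, 144))     # True: a typical desktop FreeSync range
    print(refresh_for(25, 30, 144))  # 25 fps -> each frame shown twice at 50Hz

With the Y700's panel the multiplied ranges don't overlap: frame rates between 30 and 40 fps can't be doubled without overshooting 60Hz, which is why a wide ratio between minimum and maximum refresh is needed for LFC to engage reliably.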
Comments
BurntMyBacon - Thursday, December 10, 2015 - link
@Samus: "GCN scales well, but not for performance. Fury is their future."

Fury is GCN. Their issue isn't GCN, as GCN is actually a relatively loose specification that allows for plenty of architectural leeway in its implementation. Also note that GCN 1.0, GCN 1.1, and GCN 1.2 are significantly different from each other and should not be considered a single architecture, as you seem to take it.
ATi's current issue is the fact that they are putting out a third generation of products on the same manufacturing node. My guess is that many of the architectural improvements they were working on for the 20nm chips can't effectively be brought to the 28nm node. You see a bunch of rebadges because they decided they would rather wait for the next node than spend cash they probably didn't have on top-to-bottom architecture updates for a node they can't wait to get off of and probably wouldn't recoup the expense from. They opted to update the high end, where the expenses could be better covered and where they needed a test vehicle for HBM anyway.
On the other hand nVidia, with deeper pockets and greater market share, decided that it was worth the cost. Though even they took their sweet time in bringing the Maxwell 2.0 chips down to the lower end.
slickr - Friday, December 11, 2015 - link
Nvidia's products are based on pretty much slight improvements over their 600 series graphics architecture. They haven't had any significant architectural improvements since basically their 500 series. This is because both companies have been stuck on 28nm for the past 5 years! Maxwell is pretty much a small update to the same technology that Nvidia has already been using since the 600 series.
Budburnicus - Wednesday, November 16, 2016 - link
That is TOTALLY INCORRECT! Maxwell is a MASSIVE departure from Kepler! Not only does it achieve FAR higher clock speeds, but it does more with less! A GTX 780 Ti is effectively SLOWER than a GTX 970, even at 1080p where the extra memory makes no difference - and where the 780 Ti has 2880 CUDA cores, the 970 has just 1664!
There are FAR too many differences to list, and that is WHY Kepler has not been seeing ANY performance gains with newer drivers! Because the programming for Kepler is totally different from Maxwell or Pascal!
Also, now that Polaris and Pascal are released: LMFAO! The RX 480 cannot even GET CLOSE to the 1503 MHz I have my 980 Ti running at on air! And if you DO get it to 1400 it throws off insane amounts of heat!
GCN is largely THE SAME ARCHITECTURE IT HAS ALWAYS BEEN! It has seen incremental updates such as memory compression, better branch prediction, and stuff like the Primitive Discard Accelerator - but otherwise is TOTALLY unchanged on a functional level!
Kind of like how Pascal is an incremental update to Maxwell, adding further memory compression, Simultaneous Multi Projection, better branch prediction and so on. Simultaneous Multi Projection adds an extra 40% to 60% performance for VR and surround monitor setups, when Maxwell - particularly the GTX 980 and 980 Ti - is already FAR better at VR than even the Fury X! Don't take my word for it, go check the Steam Benchmark results on LTT forums! https://linustechtips.com/main/topic/558807-post-y...
See UNLIKE Kepler to Maxwell, Pascal is BASICALLY just Maxwell on Speed, a higher clocked Maxwell chip! And it sucks FAR less power, creates FAR less heat and provides FAR more performance, as the RX 480 is basically tied with a GTX 970 running 1400 core! And FAR behind a 980 at the same or higher!
Meanwhile the GTX 1060 beats it with ease, while the GTX 1070 (which even at 2100 MHz is just a LITTLE less powerful than the 980 Ti at 1500 MHz), the 1080, and the Pascal Titan SHIT ALL OVER THE FURY X!
Hell the GTX 980 regular at 1500 MHZ kicks the ASS of the Fury X in almost every game at almost every resolution!
Oh and Maxwell as well as Pascal are both HDR capable.
Furzeydown - Tuesday, December 8, 2015 - link
Both companies have been rather limited by the same manufacturing node for the past four years as well though. It limits things to tweaks, efficiency improvements, and minor features. As far as performance goes, both companies are neck and neck with monster 600mm² dies.

ImSpartacus - Tuesday, December 8, 2015 - link
But Nvidia's monster die is generally considered superior to AMD's monster die despite using older memory tech. Furthermore, AMD's monster die only maintains efficiency because it's being kept very chilly with a special water cooler. It's not neck and neck.
Asomething - Wednesday, December 9, 2015 - link
That is down to transistor density; AMD are putting more into the same space, which drives minimum requirements for the cooler up.

Dirk_Funk - Wednesday, December 9, 2015 - link
Neck and neck as in there's hardly a difference in how many frames are rendered per second. It's not like either one has any big advantages over the other, and they are made almost exclusively for gaming, so if fps is the same then yes, it is neck and neck as far as most people are concerned.

OrphanageExplosion - Thursday, December 10, 2015 - link
Not at 1080p and 1440p they aren't...

RussianSensation - Wednesday, December 23, 2015 - link
The reference cards are very close:
1080p - 980 Ti leads by 6.5%
1440p - 980 Ti leads by just 1.1%
4K - Fury X leads by 4.5%
Neither card is fast enough for 4K, while both are a waste of money for 1080p without using VSR/DSR/Super-sampling. That leaves 1440p resolution where they are practically tied.
http://www.techpowerup.com/reviews/ASUS/R9_380X_St...
The only reason 980Ti is better is due to its overclocking headroom. As far as reference performance goes, they are practically neck and neck as users above noted.
zodiacsoulmate - Tuesday, December 8, 2015 - link
What do you mean Nvidia is moving to a GCN-like architecture?