AMD Radeon HD 7970 GHz Edition Review: Battling For The Performance Crown
by Ryan Smith on June 22, 2012 12:01 AM EST - Posted in
- GPUs
- AMD
- GCN
- Radeon HD 7000
Three months ago NVIDIA launched their GeForce GTX 680 to rave reviews and a boatload of editor recommendations, reclaiming their crown for the fastest single-GPU video card in the process. And for the first time in many years NVIDIA didn’t just beat AMD on raw performance, but they achieved the complete holy trifecta of video card competition – higher gaming performance, lower power consumption, and a lower price.
Consequently, for AMD this launch marked both the closest and the farthest they’ve ever been from outright beating NVIDIA in modern times. On the one hand, NVIDIA beat them by more than usual by achieving the holy trifecta rather than focusing on performance alone. And yet on the other hand, when it comes to raw performance AMD has never been this close. Where the GTX 580 beat the 6970 by 15%, the GTX 680 led by just 10%, and even then it lost to the 7970 in some games. With such a close gap an obvious question arises: could AMD meet or beat NVIDIA with a higher clocked 7970 and take a real shot at the performance crown?
Today AMD is putting that idea to the test with the launch of the Radeon HD 7970 GHz Edition. Although AMD is not officially calling the 7970 GHz Edition a response to the GTX 680 – instead choosing to focus on Tahiti’s 6 month birthday – that is for all intents and purposes what it is: a higher clocked 7970 with AMD’s take on GPU turbo, intended to make a run at the GTX 680 and the performance crown. So how does AMD fare? As we’ll see, after today it will no longer be clear who holds the performance crown.
AMD GPU Specification Comparison

| | AMD Radeon HD 7970 GHz Edition | AMD Radeon HD 7970 | AMD Radeon HD 7950 | AMD Radeon HD 6970 |
|---|---|---|---|---|
| Stream Processors | 2048 | 2048 | 1792 | 1536 |
| Texture Units | 128 | 128 | 112 | 96 |
| ROPs | 32 | 32 | 32 | 32 |
| Core Clock | 1000MHz | 925MHz | 800MHz | 880MHz |
| Boost Clock | 1050MHz | N/A | N/A | N/A |
| Memory Clock | 6GHz GDDR5 | 5.5GHz GDDR5 | 5GHz GDDR5 | 5.5GHz GDDR5 |
| Memory Bus Width | 384-bit | 384-bit | 384-bit | 256-bit |
| VRAM | 3GB | 3GB | 3GB | 2GB |
| FP64 | 1/4 | 1/4 | 1/4 | 1/4 |
| Transistor Count | 4.31B | 4.31B | 4.31B | 2.64B |
| PowerTune Limit | 250W+ | 250W | 200W | 250W |
| Manufacturing Process | TSMC 28nm | TSMC 28nm | TSMC 28nm | TSMC 40nm |
| Architecture | GCN | GCN | GCN | VLIW4 |
| Launch Date | 06/22/2012 | 01/09/2012 | 01/31/2012 | 12/15/2010 |
| Launch Price | $499 | $549 | $449 | $350 |
As far as performance and functionality go, the Radeon HD 7970 GHz Edition (7970GE) is a rather straightforward upgrade to the existing Radeon HD 7970. In fact the hardware is absolutely identical right down to the GPU – there have been no changes to the PCB, the cooling, or the VRMs, and even the Tahiti GPU is the same revision that has been shipping in the 7970 since the beginning. Everything the 7970GE adds to the 7970 is accomplished through chip binning and new Catalyst and BIOS features specific to the 7970GE. So in many ways this is the 7970 we’ve already become familiar with, but with more pep in its step.
With identical hardware, the real difference is in clockspeeds. The 7970 shipped at a rather conservative 925MHz core clock, which as we’ve seen in our 7970 overclocking adventures is a good 175MHz below what even our worst 7970 sample could hit without overvolting. At the time AMD left a lot on the table in order to maximize yields and to give their partners headroom to launch a range of factory overclocked cards, and now AMD has come to take that headroom back for themselves.
The 7970GE will ship at 1GHz, 75MHz faster than the 7970. Furthermore the 7970GE introduces PowerTune Technology with Boost, AMD’s name for GPU turbo, similar to the turbo feature already found on AMD’s APUs. The 7970GE can boost a further 50MHz up to 1050MHz, which means the 7970GE’s core clock increase over the 7970 is anywhere between 8% and 13.5% depending on how high it can go under a specific workload. We’ve seen that AMD’s performance scales very well with clockspeed – which is to say Tahiti is typically not memory bandwidth bottlenecked – so this bodes well for its performance. At the same time AMD has also boosted the memory clock from 5.5GHz to 6GHz, which gives the card 9% more memory bandwidth when it needs it. AMD hasn’t provided any specific performance guidance, but overall you can expect around 10% better performance than the 7970 in GPU-bound situations, which is exactly what AMD needs to close the gap with the GTX 680.
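For readers who want to check those percentages, here is the back-of-the-envelope math, using only the figures from the spec table above; a quick illustrative sketch, not anything supplied by AMD:

```python
# Clockspeed uplift of the 7970 GHz Edition over the reference 7970,
# plus the memory bandwidth gain from the faster GDDR5.

base_7970 = 925    # MHz, reference 7970 core clock
base_ge   = 1000   # MHz, 7970GE base clock
boost_ge  = 1050   # MHz, 7970GE maximum boost clock

print(f"Base clock uplift:  {base_ge / base_7970 - 1:.1%}")   # ~8.1%
print(f"Boost clock uplift: {boost_ge / base_7970 - 1:.1%}")  # ~13.5%

# Bandwidth = bus width (bits) / 8 bits-per-byte * effective data rate (GT/s)
bus_width_bits = 384
for card, data_rate in (("7970", 5.5), ("7970GE", 6.0)):
    gbps = bus_width_bits / 8 * data_rate
    print(f"{card}: {gbps:.0f} GB/s")   # 264 GB/s vs. 288 GB/s, a ~9% gain
```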
The higher clockspeeds and the introduction of PowerTune Technology with Boost sum up the changes for the 7970GE. There are no board changes and it’s the same Tahiti GPU, meaning 2048 stream processors paired with 128 texture units and 32 ROPs, all on a 4.31B transistor die measuring 365mm2. The clockspeed increase over the 7970 pushes AMD’s theoretical double precision (FP64) compute performance past 1 TFLOPS to 1.08 TFLOPS, which AMD is in no way shy about mentioning since they’re the first GPU vendor to get there. On the memory side of things, AMD is using the same 3GB of GDDR5 we’ve previously seen, just clocked higher.
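As an aside, that 1.08 TFLOPS figure falls straight out of the specifications above via the standard peak-throughput formula, counting a fused multiply-add as two floating point operations; a quick sketch:

```python
# Theoretical peak compute throughput for the 7970GE at its 1050MHz boost clock.
stream_processors = 2048
boost_clock_ghz   = 1.05
flops_per_clock   = 2      # one fused multiply-add counts as two FLOPs
fp64_rate         = 1 / 4  # Tahiti executes FP64 at 1/4 of its FP32 rate

fp32_tflops = stream_processors * flops_per_clock * boost_clock_ghz / 1000
fp64_tflops = fp32_tflops * fp64_rate
print(f"FP32 peak: {fp32_tflops:.2f} TFLOPS")   # ~4.30 TFLOPS
print(f"FP64 peak: {fp64_tflops:.2f} TFLOPS")   # ~1.08 TFLOPS
```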
Identical Twins: Radeon HD 7970 GHz Edition & Radeon HD 7970
On that note, because AMD hasn’t made any hardware changes, the 7970GE’s TDP/PowerTune limit is equally unchanged: 250W, identical to that of the 7970. With 6 months between the launch of the 7970 and the 7970GE, that’s 6 months of 28nm process improvements over at TSMC, which AMD is using as the basis of their binning for the 7970GE. With that said, there’s no such thing as a free lunch, and in practice the 7970GE’s power consumption has still increased relative to the 7970, as we’ll see in our benchmarks.
Finally, we would be remiss not to point out that AMD has once again added confusion to their product naming in the name of simplicity. We have always pushed for clear naming schemes where parts with different specifications have different names, and for good reason. AMD’s decision to name their new card the 7970 GHz Edition is unfortunate; while it’s true it has the same Tahiti GPU, its performance and feature set (PowerTune Boost) are different from the 7970’s. What’s the point of a 4 digit model number if AMD is only ever going to use a fraction of them? In a rational universe this card would be the 7975 and that would be the end of that.
Our primary concern here is that a potential customer is going to read this review and then go out and buy a vanilla 7970 thinking they got the GHz Edition, which is exactly the kind of misleading situation we want product names to avoid. At this point if AMD is going to continue producing multiple products under the same model number – and I can’t believe I’m saying this – they need to bring back proper suffixes. Suffixes were easier to put up with than “GHz Edition”, which is just long enough to be ignored. At the end of the day a clockspeed is not a proper product name.
Anyhow, with clocks and hardware settled, let’s talk about competitive positioning, pricing, and availability. As we alluded to in the introduction, the 7970GE is a clear swipe at the GeForce GTX 680. NVIDIA had a smaller than usual 10% lead with the GTX 680, and as a result AMD is making a run at it with a higher clocked Tahiti part. Realistically speaking, on average AMD can’t beat the GTX 680 with the 7970GE, but with good performance scaling they can tie.
Seeing as how it’s a GTX 680 competitor then, it should come as no surprise that AMD has put the MSRP on the 7970GE at $499, the exact same price as the GTX 680. It’s a slugfest for sure. At the same time it’s no secret that Tahiti cards are relatively expensive to manufacture – thanks to the larger-than-GK104 GPU and 3GB of GDDR5 – so AMD is keen on not just challenging NVIDIA for the crown but also bringing their margins back up to where they were prior to the GTX 680’s launch.
While the prices of the 7970 and 7950 aren’t officially changing in the wake of the 7970GE’s launch, the launch of the GTX 600 series has already pushed pricing down below even AMD’s April MSRPs. Reference clocked 7970s are down to around $430 after rebate, and the 7950 (having been pushed out of the picture by the GTX 670) is down to about $360 after rebate. Barring a move from NVIDIA, we expect AMD’s stack to settle here for the time being. As an aside, it looks like AMD will be continuing their Three For Free game bundle promotion on existing 7900 series cards for some time to come, but they will not be extending it to the 7970GE. So while the 7970 will come with free games the 7970GE will not, which further tilts the value equation between the two cards.
Finally, while general card availability should be good – we’ve already seen that most 7970s can overclock to 7970GE speeds – AMD has pushed the launch ahead of when cards will actually ship. The 7970GE will not appear in stores until next week, and widespread availability isn’t expected until July. But once cards do start flowing we don’t see any reason AMD won’t be able to keep them in stock.
Summer 2012 GPU Pricing Comparison

| AMD | Price | NVIDIA |
|---|---|---|
| Radeon HD 7970 GHz Edition | $499 | GeForce GTX 680 |
| Radeon HD 7970 | $429 | |
| | $399 | GeForce GTX 670 |
| Radeon HD 7950 | $359 | |
| Radeon HD 7870 | $319 | |
| | $279 | GeForce GTX 570 |
| Radeon HD 7850 | $239 | |
110 Comments
piroroadkill - Friday, June 22, 2012
While the noise is bad, the manufacturers are going to spew out non-reference, quiet designs in moments, so I don't think it's an issue.

silverblue - Friday, June 22, 2012
Tom's added a custom cooler (Gelid Icy Vision-A) to theirs which reduced noise and heat noticeably (about 6 degrees C and 7-8 dB). Still, it would be cheaper to get the vanilla 7970, add the same cooling solution, and clock it to the same levels; that way, you'd end up with a GHz Edition clocked card which is cooler and quieter for about the same price as the real thing, albeit lacking the new boost feature.

ZoZo - Friday, June 22, 2012
Would it be possible to drop the 1920x1200 resolution for tests? 16:10 is dead; 1080p has been the standard for high definition on PC monitors for at least 4 years now, and it's more than time to catch up with reality... Sorry for the rant, I'm probably nitpicking anyway...

Reikon - Friday, June 22, 2012
Uh, no. 16:10 at 1920x1200 is still the standard for high quality 24" IPS monitors, which are a fairly typical choice for enthusiasts.

paraffin - Saturday, June 23, 2012
I haven't been seeing many 16:10 monitors around these days. Besides, since AT even tests iGPU performance at ANYTHING BUT 1080p, your "enthusiast choice" argument is invalid. 16:10 is simply a l33t factor in a market dominated by 16:9. I'll take my cheap 27" 1080p TN's spaciousness and HD content nativeness over your pricey 24" 1200p IPS' "quality" any day.

CeriseCogburn - Saturday, June 23, 2012
I went over this already with the amd fanboys. For literally YEARS they have had harpy fits over five and ten dollar card pricing differences, declaring amd the price/perf queen.
Then I pointed out nVidia wins in 1920x1080 by 17+% and only by 10+% in 1920x1200 - so all of a sudden they ALL had 1920x1200 monitors, they were not rare, and they have hundreds of extra dollars of cash to blow on it, and have done so, at no extra cost to themselves and everyone else (who also has those), who of course also chooses such monitors because they all love them the mostest...
Then I gave them egg counts, might as well call it 100 to 1 on availability if we are to keep to their own hyperactive price perf harpying, and the lowest available higher rez was $50 more, which COST NOTHING because it helps amd, of course....
I pointed out Anand pointed out in the then prior article it's an ~11% pixel difference, so they were told to calculate the frame rate difference... (that keeps amd up there in scores and winning a few they wouldn't otherwise).
Dude, MKultra, Svengali, Jim Wand, and mass media, could not, combined, do a better job brainwashing the amd fan boy.
Here's the link, since I know a thousand red-winged harpies are ready to descend en masse and caw loudly in protest...
http://translate.google.pl/translate?hl=pl&sl=...
1920x1080: " GeForce GTX680 is on average 17.61% more efficient than the Radeon 7970.
Here, the performance difference in favor of the GTX680 are even greater"
So they ALL have a 1920x1200, and they are easily available, the most common, cheap, and they look great, and most of them have like 2 or 3 of those, and it was no expense, or if it was, they are happy to pay it for the red harpy from hades card.
silverblue - Monday, June 25, 2012
Your comparison article is more than a bit flawed. The PCLab results, in particular, have been massively updated since that article. Looks like they've edited the original article, which is a bit odd. Still, AMD goes from losing badly in a few cases to not losing so badly after all, as the results in this article go to show. They don't displace the 680 as the best gaming card of the moment, but it certainly narrows the gap (even if the GHz Edition didn't exist).

Also, without a clear idea of specs and settings, how can you just grab results for a given resolution from four or five different sites for each card, add them up and proclaim a winner? I could run a comparison between a 680 and a 7970 in a given title with the former using FXAA and the latter using 8xMSAA; that doesn't mean it's a good comparison. I could run Crysis 2 without any AA or AF at all at a given resolution on one card and then put every bell and whistle on for the other - without the playing field being even, it's simply invalid. Take each review on its own merits, because at least then you can be sure of the test environment.
As for 1200p monitors... sure, they're more expensive, but it doesn't mean people don't have them. You're just bitter because you got the wrong end of the stick by saying nobody owned 1200p monitors then got slapped down by a bunch of 1200p monitor owners. Regardless, if you're upset that NVIDIA suddenly loses performance as you ramp up the vertical resolution, how is that AMD's fault? Did it also occur to you that people with money to blow on $500 graphics cards might actually own good monitors as well? I bet there are some people here with 680s who are rocking on 1200p monitors - are you going to rag (or shall I say "rage"?) on them, too?
If you play on a 1080p panel then that's your prerogative, but considering the power of the 670/680/7970, I'd consider that a waste.
FMinus - Friday, June 22, 2012
Simply put: no! 1080p is the second worst thing that happened to the computer market in recent years, the first being the phasing out of 4:3 monitors.
Tegeril - Friday, June 22, 2012
Yeah seriously, keep your 16:9, bad color reproduction away from these benchmarks.

kyuu - Friday, June 22, 2012
16:10 snobs are seriously getting out of touch when they start claiming that their aspect ratio gives better color reproduction. There are plenty of high-quality 1080p IPS monitors on the market -- I'm using one.

That being said, it's not really important whether it's benchmarked at x1080 or x1200. There is a negligible difference in the number of pixels being drawn (one of the reasons I roll my eyes at 16:10 snobs). If you're using a 1080p monitor, just add anywhere from 0.5 to 2 FPS to the average FPS results at x1200.
Disclaimer: I have nothing *against* 16:10. All other things being equal, I'd choose 16:10 over 16:9. However, with 16:9 monitors being so much cheaper, I can't justify paying a huge premium for a measly 120 lines of vertical resolution. If you're willing to pay for it, great, but kindly don't pretend that doing so somehow makes you superior.