Previewing GeForce RTX 2080 Ti

Turing and NVIDIA’s focus on hybrid rendering aside, let’s take a look at the individual GeForce RTX cards.

Before getting too far here, it’s important to point out that NVIDIA has offered little in the way of information on the cards’ performance besides their formal specifications. Essentially the entirety of the NVIDIA Gamescom presentation – and even most of the SIGGRAPH presentation – was focused on ray tracing/hybrid rendering and the Turing architecture’s unique hardware capabilities to support those features. As a result we don’t have a good frame of reference for how these specifications will translate into real-world performance. Which is also why we’re disappointed that NVIDIA has already started pre-orders, as it pushes consumers into blindly buying cards.

At any rate, with NVIDIA having changed the SM for Turing as much as they have versus Pascal, I don’t believe FLOPS alone is an accurate proxy for performance in current games. It’s almost certain that NVIDIA has been able to improve their SM efficiency, especially judging from what we’ve seen thus far with the Titan V. So in that respect this launch is similar to the Maxwell launch, in that the raw specifications can be deceiving and it’s possible to lose FLOPS yet still gain performance.

In any case, at the top of the GeForce RTX 20 series stack will be the GeForce RTX 2080 Ti. In a major departure from the GeForce 700/900/10 series, NVIDIA is not holding the Ti card back as a mid-generation kicker; instead they’re launching with it right away. This means that the high-end of the RTX family is a 3 card stack from the start, rather than the 2 card stack it has previously been.

NVIDIA has not commented on this change in particular, and this is one of those things that I expect we’ll know more about once we reach the actual hardware launch. But there’s good reason to suspect that, since NVIDIA is using the relatively mature TSMC 12nm “FFN” process – itself an optimized version of 16nm – yields are in a better place than usual at this point in a generation. Normally NVIDIA would be using a more bleeding-edge process, where it would make sense to hold back the largest chip for another year or so to let yields improve.

NVIDIA GeForce x80 Ti Specification Comparison

                         RTX 2080 Ti FE     RTX 2080 Ti        GTX 1080 Ti              GTX 980 Ti
CUDA Cores               4352               4352               3584                     2816
ROPs                     88?                88?                88                       96
Core Clock               1350MHz            1350MHz            1481MHz                  1000MHz
Boost Clock              1635MHz            1545MHz            1582MHz                  1075MHz
Memory Clock             14Gbps GDDR6       14Gbps GDDR6       11Gbps GDDR5X            7Gbps GDDR5
Memory Bus Width         352-bit            352-bit            352-bit                  384-bit
VRAM                     11GB               11GB               11GB                     6GB
Single Precision Perf.   14.2 TFLOPs        13.4 TFLOPs        11.3 TFLOPs              6.1 TFLOPs
"RTX-OPS"                78T                78T                N/A                      N/A
TDP                      260W               250W               250W                     250W
GPU                      Big Turing         Big Turing         GP102                    GM200
Architecture             Turing             Turing             Pascal                   Maxwell
Manufacturing Process    TSMC 12nm "FFN"    TSMC 12nm "FFN"    TSMC 16nm                TSMC 28nm
Launch Date              09/20/2018         09/20/2018         03/10/2017               06/01/2015
Launch Price             $1199              $999               $699 (MSRP & Founders)   $649

The king of NVIDIA’s new product stack, the GeForce RTX 2080 Ti is without a doubt an interesting card. And if we’re being honest, it’s not a card I was expecting. Based on these specifications, it’s clearly built around a cut-down version of NVIDIA’s “Big Turing” GPU, which the company just unveiled last week at SIGGRAPH. And as the name suggests, Big Turing is big: 18.6B transistors in a 754mm2 die. This is closer in size to GV100 (Volta/Titan V) than it is to any past x80 Ti GPU, so I am surprised that, even as a cut-down chip, NVIDIA can economically offer it for sale. Nonetheless, here we are, with Big Turing coming to consumer cards.

Even though it’s a cut-down part, RTX 2080 Ti is still a beast, with 4352 Turing CUDA cores and what I estimate to be 544 tensor cores. Like its Quadro counterpart, this card is rated for 10 GigaRays/second, and for traditional compute we’re looking at 13.4 TFLOPS based on these specifications. Note that this is only 19% higher than GTX 1080 Ti, which is all the more reason why I want to learn more about Turing’s architectural changes before predicting what this means for performance in current-generation rasterization games.
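
For those keeping score, the single precision figures in the spec table are straight arithmetic: CUDA cores, times 2 FLOPs per clock (one fused multiply-add), times the boost clock. A quick sketch of that math, using the official specifications:

```python
# Peak FP32 throughput = CUDA cores x 2 FLOPs per clock (one FMA) x boost clock
def peak_tflops(cuda_cores, boost_clock_mhz):
    return cuda_cores * 2 * boost_clock_mhz * 1e6 / 1e12

rtx_2080_ti    = peak_tflops(4352, 1545)  # ~13.4 TFLOPS (reference boost clock)
rtx_2080_ti_fe = peak_tflops(4352, 1635)  # ~14.2 TFLOPS (Founders Edition boost clock)
gtx_1080_ti    = peak_tflops(3584, 1582)  # ~11.3 TFLOPS

print(f"Gen-on-gen gain: {rtx_2080_ti / gtx_1080_ti - 1:.0%}")  # ~19%
```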

Clockspeeds have actually dropped from generation to generation here. Whereas the GTX 1080 Ti started at 1.48GHz and had an official boost clock rating of 1.58GHz (and in practice boosted higher still), the RTX 2080 Ti starts at 1.35GHz and boosts to 1.55GHz, and we don’t yet know anything about its practical boost limits. So assuming NVIDIA is being just as conservative as they were last generation, average clockspeeds have dropped slightly. Which in turn means that whatever performance gains we see from the RTX 2080 Ti are going to ride entirely on the increased CUDA core count and any architectural efficiency improvements.

Meanwhile the ROP count is unknown, but as it needs to match the memory bus width, we’re almost certainly looking at 88 ROPs. Even more so than with the core compute architecture, I’m curious as to whether there are any architectural improvements here; otherwise, with an identical ROP count and slightly lower clockspeeds, the maximum pixel throughput (on paper) is actually ever so slightly lower than it was on the GTX 1080 Ti.
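
To illustrate that point, here’s the on-paper fillrate math, assuming the RTX 2080 Ti does indeed ship with 88 ROPs and using the official boost clocks:

```python
# Peak pixel fillrate = ROPs x 1 pixel per clock x boost clock
# Assumes 88 ROPs for the RTX 2080 Ti, which NVIDIA has not yet confirmed
def gpixels_per_sec(rops, boost_clock_mhz):
    return rops * boost_clock_mhz * 1e6 / 1e9

print(f"RTX 2080 Ti: {gpixels_per_sec(88, 1545):.0f} Gpix/s")  # ~136 Gpix/s
print(f"GTX 1080 Ti: {gpixels_per_sec(88, 1582):.0f} Gpix/s")  # ~139 Gpix/s
```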

Speaking of the memory bus, this is another area that is seeing a significant improvement. NVIDIA has moved from GDDR5X to GDDR6, so memory clockspeeds have increased accordingly, from 11Gbps to 14Gbps, a 27% increase. And since the memory bus width itself remains identical at 352-bits wide, this means the final memory bandwidth increase is also 27%. Memory bandwidth has long been the Achilles heel of GPUs, so even if NVIDIA’s theoretical ROP throughput has not changed this generation, the fact of the matter is that having more memory bandwidth is going to remove bottlenecks and improve performance throughout the rendering pipeline, from the texture units and CUDA cores straight out to the ROPs. Of course, the tensor cores and RT cores are going to be prolific bandwidth consumers as well, so in workloads where they’re in play, NVIDIA is once again going to have to do more with (relatively) less.
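
The bandwidth math itself is simple: per-pin data rate times bus width. Putting numbers to the 27% figure:

```python
# Memory bandwidth = per-pin data rate x bus width / 8 bits per byte
def bandwidth_gb_per_sec(data_rate_gbps, bus_width_bits):
    return data_rate_gbps * bus_width_bits / 8

gtx_1080_ti = bandwidth_gb_per_sec(11, 352)  # 484 GB/s (GDDR5X)
rtx_2080_ti = bandwidth_gb_per_sec(14, 352)  # 616 GB/s (GDDR6)
print(f"{gtx_1080_ti:.0f} GB/s -> {rtx_2080_ti:.0f} GB/s "
      f"(+{rtx_2080_ti / gtx_1080_ti - 1:.0%})")
```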

Past this, things start diverging a bit. NVIDIA is once again offering their reference-grade Founders Edition cards, and unlike with the GeForce 10 series, the 20 series FE cards have slightly different specifications than their base-spec compatriots. Specifically, NVIDIA has cranked up the clockspeeds a bit, giving the 2080 Ti FE an on-paper 6% performance advantage at the cost of a 10W higher TDP. The standard cards thus stick to the x80 Ti-traditional 250W, while the FE card moves to 260W.
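
That ~6% figure falls straight out of the boost clock ratio, since the two cards otherwise share identical specifications:

```python
# The FE's on-paper advantage is purely the boost clock ratio; core counts are identical
fe_boost_mhz, base_boost_mhz = 1635, 1545
print(f"2080 Ti FE advantage: {fe_boost_mhz / base_boost_mhz - 1:.1%}")  # ~5.8%, i.e. ~6%
```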

Meanwhile, starting with the GeForce 20 series cards, NVIDIA is rolling out a new design for their reference/Founders Edition cards, the first such redesign since the original GeForce GTX Titan back in 2013. Up until now NVIDIA has focused on a conservative but highly effective blower design, pairing the best blower in the industry with a grey & black metal shroud. The end result is that these reference/FE cards could be dropped into virtually any system and work, thanks to the self-exhausting nature of blowers.

However for the GeForce 20 series, NVIDIA has blown off the blower, instead opting to design their cards around the industry’s other favorite cooler design: the dual-fan open air cooler. Combined with NVIDIA’s metallic aesthetics, which they have retained, the resulting product looks pretty much exactly like you’d expect a high-end open air cooled NVIDIA card to look: two fans buried inside a meticulous metal shroud. And while we’ll see where performance stands once we review the card, it’s clear that NVIDIA is at the very least aiming to lead the pack in industrial design once again.

The switch to an open air cooler has three particular ramifications versus NVIDIA’s traditional blower, which regular AnandTech readers will know we’ve discussed before.

  1. Cooling capacity goes up
  2. Noise levels go down
  3. A card can no longer guarantee that it can cool itself

In an open air design, hot air is circulated back into the chassis via the fans, as the shroud is not fully closed and the design doesn’t force hot air out of the back of the case. Essentially in an open air design a card will push the hottest air away from itself, but it’s up to the chassis to actually get rid of that hot air. Which a well-designed case will do, but not without first circulating it through the CPU cooler, which is typically located above the GPU.

GPU cooler design is such that there is no one right answer. Because open air designs can rely on large axial fans with little air resistance, they can be very quiet, but overall cooling becomes the chassis’ job. Blowers, on the other hand, are fully exhausting and work in practically any chassis – no matter how bad the chassis cooling is – but they are noisier thanks to their high-RPM radial fans. NVIDIA for their part has long favored blowers, but that era appears to be at an end. It does make me wonder what this means for their OEM customers (whose designs often count on the video card being a blower), but that’s a deeper discussion for another time.

At any rate, from NVIDIA’s press release we know that each fan features 13 blades, and that the shroud itself is once again made out of die-cast aluminum. Also buried in the press release is the fact that NVIDIA is once again using a vapor chamber to transfer heat between the GPU and the heatsink, and that it’s being called a “full length” vapor chamber, which would make it notably larger than the vapor chambers in NVIDIA’s past cards. Unfortunately this is the limit of what we know right now about the cooler, and I expect there’s more to find out in the coming days and weeks. In the meantime NVIDIA has disclosed that the resulting card is the standard size for a high-end NVIDIA reference card: dual-slot width and 10.5 inches long.

Diving down, we also have a few tidbits about the reference PCB, including the power delivery system. NVIDIA’s press release specifically calls out a 13 phase power delivery system, which matches the low-resolution PCB render they’ve posted to their site. NVIDIA has always been somewhat frugal on VRMs – their cards have more than enough capacity for stock operation, but not much excess capacity for power-intensive overclocking – so it sounds like they are trying to meet overclockers half-way here. Though once we get to fully custom partner cards, I still expect the MSIs and ASUSes of the world to go nuts and try to outdo NVIDIA.

NVIDIA’s photos also make it clear that in order to feed that 250W+ TDP, we’re looking at an 8pin + 8pin configuration for the PCIe power connectors. On paper such a setup is good for 375W, and while I don’t expect NVIDIA to go quite that far, a 250W card would more typically come with a 300W 6pin + 8pin setup. So NVIDIA is clearly planning on drawing more power, and they’re provisioning connectors to match. Thankfully 8pin power connectors are fairly common on 500W+ PSUs these days; however, owners of older PSUs may get pinched by the need for two 8pin cables.
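
For reference, here’s how those connector budgets break down under the standard PCIe power ratings (75W from the slot, 75W per 6pin, 150W per 8pin):

```python
# Standard PCIe power budgets: 75W from the slot, 75W per 6-pin, 150W per 8-pin
SLOT, SIX_PIN, EIGHT_PIN = 75, 75, 150

dual_8pin  = SLOT + 2 * EIGHT_PIN        # 375W ceiling for the 2080 Ti's 8pin + 8pin layout
six_plus_8 = SLOT + SIX_PIN + EIGHT_PIN  # 300W, the more typical setup for a 250W card
print(f"8pin + 8pin: {dual_8pin}W, 6pin + 8pin: {six_plus_8}W")
```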

Finally, for display outputs, NVIDIA has confirmed that their latest generation flagship once again supports up to 4 displays. However there are actually 5 display outputs on the card: the traditional 3 DisplayPorts and a sole HDMI port, plus a new USB Type-C port offering VirtualLink support for VR headsets. As a result, users can pick any 4 of the 5 ports, with the Type-C port serving as a DisplayPort when it’s not hooked up to a VR headset. Though this does mean that the final DisplayPort has been somewhat oddly shoved into the second row in order to make room for the USB Type-C port.

Wrapping up the GeForce RTX 2080 Ti, NVIDIA’s new flagship has been priced to match. In fact it is seeing the greatest price hike of them all. Stock cards will start at $999, $300 above the GTX 1080 Ti. Meanwhile NVIDIA’s own Founders Edition card carries a $200 premium on top of that, retailing for $1199, the same price as the last-generation Titan Xp. The Ti/Titan dichotomy has always been a bit odd in recent years, so it would seem that NVIDIA has simply replaced the Titan with the Ti, and priced it to match.

Comments

  • GraXXoR - Tuesday, August 21, 2018 - link

    It's Nvidia's blind pre-purchase scheme that is really pissing me off. The fact that fanboiz are willing to empty their wallets of $1k before a single review has even been written means that this sort of prepayment will start even earlier next year...

    ... and before you know it, we'll be pledging for "perks" for the next generation of GPUs instead of purchasing.
  • HammerStrike - Monday, August 20, 2018 - link

    Pricing issues aside (although, IMO I'm not too shocked or put out by the prices; obviously lower is better, but I'm not angry or losing sleep), I'm surprised that there is less enthusiasm on the forums for the first GPU to highlight real time ray tracing as a fully supported feature set designed to be used in game at playable frame rates. That's pretty revolutionary. Ever since I started following PC graphics (dual Voodoo2 SLI 8MBs were my first GPUs, still remember them fondly) real time ray tracing was considered the holy grail of rendering - the milestone that was always talked about as "you'll know we've arrived at nirvana when it comes."

    Well, it's here! And, for all the talk about Nvidia's next gen of cards over the last 6 months, with the exception of the week since the Quadro announcement at SIGGRAPH, it's been kept pretty much under wraps. As of a month ago I would not have guessed that the new gen of cards would have such a focus on ray tracing, so the news is somewhat surprising and, from a pure techno-nerd standpoint, I think it's awesome! Not saying that as an Nvidia fanboy (I've owned many cards from many different OEMs over the years), but just as an avid gamer and technology enthusiast this is a pretty seminal moment. Regardless of the price point it's at today, this tech is going to filter out into all of Nvidia's (and presumably AMD's) product stack over the next 1-2 years, and that's extremely exciting news! Assuming AMD has been active with MS in developing DXR, their next gen of GPUs should support ray tracing as well, which means there is a decent chance that a DXR/ray tracing feature set is included in the next gen of consoles due in the next couple of years. This is really ground breaking tech - 6 months ago the common wisdom was that "real time ray tracing" was still years away, yet it's launching Sept 20th.

    Granted, it's a hybrid approach, but given the 20+ year investment in rasterized 3D rendering, and the fact that every game currently released and in development is designed for a rasterized pipeline, it's not surprising, and frankly that's probably the smartest way to deploy it - rasterization has many good qualities to it, and if you can improve one of its biggest weaknesses (lighting) through a hybrid approach with ray tracing, that is the best of both worlds.

    Anyway, I'm rambling. As a long time gamer I'm very excited to see some reviews of these cards and how devs integrate ray tracing into their engines. Great time to be a gamer! Great job Nvidia! Say what you will about the price of the cards, that's a (relatively) short term phenomenon, while the direction this is pushing the industry in will be felt over the next decade, if not beyond.
  • schen8 - Monday, August 20, 2018 - link

    Very well said HammerStrike!
    I'm an old school gamer too, grew up on the Voodoo 2 SLIs w/ Riva 128.
    Those were the days! Nvidia's keynote today really hit it right out of the park!
    It is propelling us into the next decade of gaming graphics. I'm truly excited and can't wait to pick up a new card!
  • gerz1219 - Monday, August 20, 2018 - link

    I think you’re not seeing more enthusiasm for real-time ray-tracing because we won’t see any AAA titles that take advantage of it until the average GPU supports it. And that’s not going to happen when the price point is $1200. It’s not going to happen until there are new Xbox and PlayStation consoles that have ray-tracing capabilities. And by the time that happens the 20** series will be obsolete. It’s exciting that they’re going in this direction, but these cards will never actually take us there.
  • Dug - Monday, August 20, 2018 - link

    You didn't watch the video, did you?
  • wyatterp - Tuesday, August 21, 2018 - link

    They literally showed 20+ upcoming games, including 2 near term triple-A launches (granted, it's unknown if those games will support RTX at launch). BFV and Tomb Raider looked insanely good.
  • eva02langley - Tuesday, August 21, 2018 - link

    Games that they paid the developers to support. It is all about the perception, but this is nothing more than PhysX 2.0.

    Nobody developing their games on consoles is going to bother, because AMD is the sole GPU provider there.

    The money is not worth it when maybe 4% of the PC market will be purchasing these cards.
  • wyatterp - Tuesday, August 21, 2018 - link

    I'm with you - reading these comments is depressing. Gamers and tech lovers are some salty people of late - a jaded, skeptical lot here at AT! I know it's fair to be somewhat reserved, but the IQ increase from ray tracing looked like more than a gimmick - it fundamentally changes how real time games look, at least judging by the demos. I know most people here want it, they are just pissed they can't afford it, so they want to dump on the capability here to justify their anger. I get it.

    I decided to preorder the 2080. I'm all in on this... assuming increased performance over the 1080 for "old rendered" games (from 8 to 10 TFlops?) - and hopefully it can run BFV at 60 FPS with RTX on. I know I'll be buying 3 of the RTX enabled games anyways. The way lighting and shadow worked in Shadow of the Tomb Raider was impressive. BFV looked like offline renders.
  • bogda - Tuesday, August 21, 2018 - link

    You cannot put pricing aside. In order to get useful ray tracing performance at 1080p one will need a 2080 Ti. From that I can conclude that the RTX 2070 and 2080 will not be powerful enough to run realtime ray tracing in games.
    I think for a gamer, spending $1200 just to get realistic reflections and shadows is ridiculous.
    BTW, there was no mention of VR in the keynote. What VR needed was more rendering power for less money, so these cards probably spell the end of VR on PC.
  • HammerStrike - Tuesday, August 21, 2018 - link

    1. I never said don't take pricing as a point of data, I just said there is a LOT more to the RTX series announcement than just the price, which is what the majority of comments seemed focused on. Real time ray tracing is the holy grail of rendering - the fact that it's showing up in any form in 2018 is pretty exciting.

    2. Where are you getting your info that you'll need a 2080 Ti to run "useful" frames at 1080p with RT enabled? While I agree that we should all reserve a BUYING decision until trusted reviews are out there, this is just pure conjecture that is apparently driven by your disappointment / saltiness / anger over the price of the cards.

    3. We could argue that RT, even as it's implemented in the RTX series, is a lot more than just "realistic shadows", but that's beside the point. Regardless of which aspects of the image quality are improved, that's the whole reason to upgrade a GPU right now. If image quality is of minor or no importance, you can drop your IQ settings to low and get 60+ FPS on a GTX 1050 Ti right now for $150. I get confused by all this "it's not useful, it just makes the image quality better" line of argument. People weren't drooling over the Cyberpunk 2077 demo at E3 because it was running at a super high frame rate.

    4. While I don't disagree with your point on VR, I wasn't really debating that. However, there is still the question of what Nvidia will position for their cards that cost less than $500. Regardless of what they are pricing the 2070+ cards at, my (semi) educated guess is that 95%+ of GPUs sold are sub $500. I'd bet that 80%+ are less than $300. While we can talk about "market" pricing, and whether Nvidia is good or evil for pricing these cards where they did, it doesn't change the fact that the mainstream of the market is at $300 or less, and no amount of halo product / pricing is going to change that. Say what you will of Nvidia, they are not stupid, and I'd be surprised if they didn't refresh their product stack in that price range, and I'd be equally surprised if they didn't provide a meaningful performance boost in comparable price tiers.

    It's also worth noting that the dies on the RTX series are FREAKING HUGE. There's been lots of consternation about "price inflation" based on model series from the 7th gen to the 9th to the 10th to the 20th, but at the end of the day those are just names. The primary driver of the cost of production in any semiconductor is the size of the die. The 2080 Ti has a die size of ~757mm^2, where the 1080 Ti has a 471mm^2 die. That's basically a 60% increase in die size. Even assuming the yields are the same, they still use 300mm wafers, and the cost of manufacturing is per wafer, so everything else being equal their production costs went up by 60%. DRAM prices are all higher than they were 2 years ago. If you take the $700 launch price of the 1080 Ti, add an additional 60% to it and throw in $30-$50 more in GDDR pricing, you end up with a card in the $1200 range.

    Not saying that makes the pricing "ok", but to suggest that there's no reason for it other than "greed" (I know you didn't make that specific argument, but it's pretty common on the forums) is pretty short sighted.

    This is a freakin' monster chip with revolutionary tech that is enabling, for the first time, the holy grail of rendering: real time ray tracing. Whether or not blowing out the silicon die area to support that, and charging the prices necessitated by it, turns out to be a good business decision remains to be seen. But from a pure tech standpoint it's pretty awesome. It would be like Ford announcing a Mustang with a 1500 HP engine in it that costs $150K. Maybe there is enough of a market there, maybe there isn't, but I'm not going to get so hung up on the price that I can't appreciate the pure ridiculousness of what they made. Same here.
