NVIDIA SIGGRAPH 2018 Keynote Live Blog (4pm Pacific)
by Ryan Smith on August 13, 2018 6:56 PM EST
07:00PM EDT - Kicking off momentarily will be NVIDIA's 2018 SIGGRAPH keynote
07:00PM EDT - This is a bit of an unusual event for NVIDIA, as while they present at SIGGRAPH every year, it's normally a lower-key affair
07:01PM EDT - Instead, this year the man himself, NVIDIA CEO Jensen "the more you buy, the more you save" Huang is giving a full keynote address at the show
07:03PM EDT - This keynote is scheduled to last for roughly 2 hours, which means we're expecting a fairly packed presentation
07:04PM EDT - Meanwhile if you'd like to watch the keynote yourself, NVIDIA is streaming it over on their Ustream channel: http://www.ustream.tv/nvidia
07:04PM EDT - Which is also how we're covering this, as AnandTech is not present at SIGGRAPH this year
07:04PM EDT - And of course, the stream, already struggling, has just died
07:05PM EDT - We're back. And here we go
07:06PM EDT - Starting with a traditional video roll
07:08PM EDT - This is basically a historical recap video, showing major graphics moments since the beginning of computer graphics
07:10PM EDT - (The stream is once again acting up)
07:10PM EDT - This is the 30th anniversary of Pixar's Renderman software
07:13PM EDT - Jensen is talking about the various Pixar rendering innovations over the years
07:14PM EDT - "4000 times more computation later"
07:15PM EDT - Artists will use as much computing resources as they are provided. The real limitation is how long they're willing to wait
07:17PM EDT - Now recapping NVIDIA's own GPU history. Riva TNT, GeForce 1, GeForce 3, etc
07:17PM EDT - Jensen is on another one of his Moore's Law spiels
07:18PM EDT - GPU performance has been growing faster than Moore's Law. And Jensen wants to keep it that way
07:19PM EDT - Now talking about photorealistic real-time rendering, physics simulations, and other tasks that need cutting-edge GPU performance
07:21PM EDT - Recounting all of the various graphical hacks used over the eras to reduce graphics rendering to something that contemporary processors can handle
07:21PM EDT - This leading into Jensen's other spiel as of late, ray tracing and how it's the holy grail of computer graphics
07:23PM EDT - Ray tracing is incredibly computationally expensive. Pretty. But expensive
07:24PM EDT - Now recapping NVIDIA's RTX ray tracing technology, which was first introduced at GDC 2018
07:25PM EDT - Rolling the Star Wars RTX demo
07:26PM EDT - 5 rays per pixel in that demo
07:27PM EDT - Surprise #1: What NV just demoed was running on just 1 card
07:27PM EDT - World's first ray tracing GPU
07:28PM EDT - Jensen is harassing the cameraman with the reflections off of the card
07:29PM EDT - The text on the card reads "Quadro RTX"
07:29PM EDT - "Up to 10 GigaRays per second"
07:30PM EDT - "New computer architecture"
07:30PM EDT - "Up to 16 TFLOPs"
07:31PM EDT - (Jensen is still messing with the cameraman. He seems to be genuinely enjoying it)
07:31PM EDT - Comes with NVLink
07:31PM EDT - 100GB/sec
07:32PM EDT - "500 trillion tensor ops per second"
07:32PM EDT - Brand new processor called the "RT Core"
07:33PM EDT - "Greatest leap since 2006", which was the launch of the G80 GPU (8800 GTX)
07:34PM EDT - "Motion adaptive shading"
07:34PM EDT - Architecture is confirmed. Say hello to Turing
07:36PM EDT - Jensen is now talking a bit more about the RT core
07:36PM EDT - It accelerates ray-triangle intersection and processing the Bounding Volume Hierarchy (BVH)
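07:36PM EDT - (For the curious, ray-triangle intersection is the inner loop being offloaded here. Purely as an illustration of the math, and not NVIDIA's hardware implementation, the standard Möller-Trumbore test looks like this in C++:)

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3  sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3  cross(Vec3 a, Vec3 b) { return {a.y * b.z - a.z * b.y,
                                             a.z * b.x - a.x * b.z,
                                             a.x * b.y - a.y * b.x}; }
static float dot(Vec3 a, Vec3 b)   { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Möller-Trumbore: does a ray (origin, dir) hit the triangle (v0, v1, v2)?
// On a hit, t is the distance along the ray. The RT Core runs tests like
// this, plus the BVH traversal that culls most triangles, in fixed-function
// hardware instead of shader code.
bool rayTriangle(Vec3 origin, Vec3 dir, Vec3 v0, Vec3 v1, Vec3 v2, float& t)
{
    const float EPS = 1e-7f;
    Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    Vec3 p = cross(dir, e2);
    float det = dot(e1, p);
    if (std::fabs(det) < EPS) return false;  // ray is parallel to the triangle
    float inv = 1.0f / det;
    Vec3 s = sub(origin, v0);
    float u = dot(s, p) * inv;               // first barycentric coordinate
    if (u < 0.0f || u > 1.0f) return false;
    Vec3 q = cross(s, e1);
    float v = dot(dir, q) * inv;             // second barycentric coordinate
    if (v < 0.0f || u + v > 1.0f) return false;
    t = dot(e2, q) * inv;                    // hit distance along the ray
    return t > EPS;                          // ignore hits behind the origin
}
```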
07:37PM EDT - Also talking about the various precision modes, from FP32 down to INT4
07:37PM EDT - Supports 8K displays
07:37PM EDT - 8K video encoding on the NVENC block as well
07:38PM EDT - 754mm² GPU, 18.6B transistors
07:38PM EDT - 14Gbps memory clock
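07:38PM EDT - (If the top Turing part keeps a 384-bit memory bus, and that bus width is my assumption rather than anything stated on stage, 14Gbps GDDR6 works out to 14 × 384 / 8 = 672GB/sec of memory bandwidth)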
07:39PM EDT - This is a big chip. Not quite GV100 Volta big, but still very big by any kind of GPU standard
07:40PM EDT - Now diving into hybrid rendering, and the value of the tensor core
07:40PM EDT - NVIDIA will also be open sourcing their Material Definition Language (MDL), which they've been developing for the past couple of years
07:41PM EDT - NVIDIA will also be supporting Pixar's Universal Scene Description (USD) language
07:41PM EDT - (Was that someone mooing?)
07:43PM EDT - Now moving on to demos, starting with comparing different rendering types (rasterization, ray tracing, etc)
07:46PM EDT - Given the extreme die size, I'm going to be a bit surprised if we see this specific Turing GPU in consumer products
07:46PM EDT - Though anything is possible
07:50PM EDT - Compute is being used here to denoise the ray tracing output
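07:50PM EDT - (For context on why denoising matters: a real-time budget only allows a handful of rays per pixel, so the raw ray traced image comes out speckled with noise, and a denoiser reconstructs a clean frame by averaging neighboring pixels while preserving edges. NVIDIA's denoiser is AI-based; the toy edge-aware blur below is my own conceptual stand-in, not their algorithm:)

```cpp
#include <cmath>
#include <vector>

// Toy edge-aware denoiser: average each pixel with its neighbors, but give
// less weight to neighbors whose brightness differs a lot (likely an edge).
// Purely illustrative; NVIDIA's actual denoiser is a trained neural network.
std::vector<float> denoise(const std::vector<float>& img, int w, int h,
                           int radius = 2, float sigma = 0.1f)
{
    std::vector<float> out(img.size());
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            float center = img[y * w + x];
            float sum = 0.0f, wsum = 0.0f;
            for (int dy = -radius; dy <= radius; ++dy) {
                for (int dx = -radius; dx <= radius; ++dx) {
                    int nx = x + dx, ny = y + dy;
                    if (nx < 0 || nx >= w || ny < 0 || ny >= h) continue;
                    float v = img[ny * w + nx];
                    float d = v - center;            // brightness difference
                    float wgt = std::exp(-(d * d) / (2.0f * sigma * sigma));
                    sum  += v * wgt;
                    wsum += wgt;
                }
            }
            out[y * w + x] = sum / wsum;  // wsum always includes the center pixel
        }
    }
    return out;
}
```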
07:52PM EDT - Now running another video clip. Everything here is being done in real time
07:55PM EDT - NVIDIA wants a bigger piece of the pie in the visual effects industry
07:58PM EDT - "Large render farms". Now imagine if they were full of Quadros instead of Xeons...
07:58PM EDT - Rolling another video, this time from Porsche
08:00PM EDT - It was also rendered in real-time
08:02PM EDT - (This is a surprisingly laid-back presentation for a Jensen Huang keynote. He's largely content gawking at graphics)
08:06PM EDT - A 6x speed-up in graphics over Pascal
08:07PM EDT - Using AI to get away with lower resolution rendering, then essentially upsampling/anti-aliasing to recover image quality
08:07PM EDT - Now discussing Deep Learning Anti-Aliasing
08:09PM EDT - As a result of the combination of ray tracing, faster shading, and DLAA, NVIDIA gets the 6x speed improvement
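08:09PM EDT - (The underlying idea: render fewer pixels, then reconstruct the full-resolution frame. NVIDIA's reconstruction step is a trained network; as a baseline illustration of my own, and definitely not their model, a plain bilinear upscale of a single-channel image looks like this:)

```cpp
#include <vector>

// Naive bilinear upscale from (sw x sh) to (dw x dh). DLAA replaces this
// kind of dumb interpolation with a network trained to infer detail and
// anti-alias at the same time; this sketch only shows the baseline idea.
std::vector<float> upscale(const std::vector<float>& src, int sw, int sh,
                           int dw, int dh)
{
    std::vector<float> dst(static_cast<size_t>(dw) * dh);
    for (int y = 0; y < dh; ++y) {
        for (int x = 0; x < dw; ++x) {
            // Map the destination pixel back into source coordinates.
            float fx = (x + 0.5f) * sw / dw - 0.5f;
            float fy = (y + 0.5f) * sh / dh - 0.5f;
            int x0 = fx < 0.0f ? 0 : static_cast<int>(fx);
            int y0 = fy < 0.0f ? 0 : static_cast<int>(fy);
            int x1 = x0 + 1 < sw ? x0 + 1 : x0;
            int y1 = y0 + 1 < sh ? y0 + 1 : y0;
            float tx = fx - x0;
            float ty = fy - y0;
            if (tx < 0.0f) tx = 0.0f;
            if (ty < 0.0f) ty = 0.0f;
            // Blend the four surrounding source pixels.
            float top = src[y0 * sw + x0] * (1 - tx) + src[y0 * sw + x1] * tx;
            float bot = src[y1 * sw + x0] * (1 - tx) + src[y1 * sw + x1] * tx;
            dst[static_cast<size_t>(y) * dw + x] = top * (1 - ty) + bot * ty;
        }
    }
    return dst;
}
```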
08:11PM EDT - Now demonstrating real-time scene editing with ray tracing, using RTX
08:13PM EDT - Using RTX for architectural engineering and design
08:15PM EDT - Now demonstrating film-quality rendering. It's moving at a couple of frames per second, which is incredibly fast for a single machine
08:16PM EDT - Now announcing the NVIDIA RTX Server
08:17PM EDT - 8 Quadro RTX 8000s in a single box
08:17PM EDT - General availability in Q1'2019
08:18PM EDT - $125,000
08:18PM EDT - 3250W power consumption
08:20PM EDT - NVIDIA has been working hard behind the scenes to get software and tool developers to adopt RTX, and it looks like they've had a fair bit of success
08:21PM EDT - Quadro RTX 5000: $2300
08:21PM EDT - RTX 6000: $6300. 24GB VRAM. 10 GigaRays/second
08:22PM EDT - RTX 8000: $10000. 48GB VRAM. 10 GigaRays/second
08:22PM EDT - "It's a steal"
08:22PM EDT - And there it is "the more you buy, the more you save"
08:24PM EDT - Now recapping the keynote thus far
08:24PM EDT - Turing and RTX. The most important NVIDIA GPU since G80/Tesla
08:24PM EDT - One more surprise
08:25PM EDT - NVIDIA Reveals Next-Gen Turing GPU Architecture: NVIDIA Doubles-Down on Ray Tracing, GDDR6, & More - https://www.anandtech.com/show/13214/nvidia-reveals-next-gen-turing-gpu-architecture
08:26PM EDT - (I'm getting the distinct impression that something was cut from this keynote at the last moment)
08:26PM EDT - Jensen is now thanking everyone, including the audience
08:29PM EDT - And that's a wrap. Be sure to check out our complete Turing article: https://www.anandtech.com/show/13214/nvidia-reveals-next-gen-turing-gpu-architecture
16 Comments
ikjadoon - Monday, August 13, 2018 - link
But to clarify... RTX isn't working as a fully independent silo. AFAIK. RTX runs through DirectX Ray Tracing, recently included into DirectX 12: https://www.anandtech.com/show/12547/expanding-dir...
> For today’s reveal, NVIDIA is simultaneously announcing that they will support hardware acceleration of DXR through their new RTX Technology. RTX in turn combines previously-unannounced Volta architecture ray tracing features with optimized software routines to provide a complete DXR backend, while pre-Volta cards will use the DXR shader-based fallback option
smilingcrow - Monday, August 13, 2018 - link
Did you actually watch the video stream? This is aimed at a wide range of industries now, with gaming not the focus at an event for Pros.
A clue, if you missed the live stream, is that the Quadro cards start at over $2k.
smilingcrow - Monday, August 13, 2018 - link
Interesting timing with Nvidia releasing radical new tech which may well expand multiple industries on the same day that AMD release their chopped down server CPUs for HEDT/Workstation. Why? Because Nvidia are looking at replacing even more CPU workloads with GPUs, so there's a battle with AMD fighting Intel for the current market share whilst Nvidia aim to reduce that market size.
Not a good day for Intel overall.
mode_13h - Tuesday, August 14, 2018 - link
You're silly. AMD is going after the same market as Nvidia, but using their (sadly inferior) Vega GPUs. The fact that they didn't add AVX-512 to Ryzen can be seen as evidence that they "got it" sooner than Intel, who will also be joining the party with their own GPUs (not Xeon Phi, as they previously hoped).
smilingcrow - Tuesday, August 14, 2018 - link
Sure, AMD would like to compete with NV in the data centre, but that may be harder than catching up with Intel, who have been asleep at the wheel for a while now, whereas NV seem very perky. So it may not be silly at all; time will tell.
mode_13h - Tuesday, August 14, 2018 - link
Agreed. Nvidia seems to be a formidable competitor. AMD's Vega looked competitive when it was announced, but was behind when it launched. By the time Vega 2 was announced, it was already behind. That is not a promising trend.
AMD's best hope is that their next GPU redesign will be the "Zen" of GPUs. If they can't get it right then, they will have to settle for being the kind of also-ran in the GPU space that their CPUs have been for much of the past decade.