Power Consumption

There's a lot of uncertainty around whether Kepler is suitable for ultra low power operation, especially given that we've only seen it in PCs with TDPs far higher than anything in a tablet or smartphone. NVIDIA hoped to put those concerns to rest with a quick GLBenchmark 2.7 demo at Siggraph. The demo pitted an iPad 4 against a Logan development platform, with Logan's Kepler GPU clocked low enough to equal the performance of the iPad 4. The low clock speed does put Kepler at an advantage, as it can run at a lower voltage as well, so the comparison is definitely one you'd expect NVIDIA to win.
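To see why the low clock works in NVIDIA's favor, consider the classic CMOS dynamic power approximation, P = C x V^2 x f. A minimal sketch in Python, with purely illustrative numbers rather than measured Logan or iPad 4 values:

    # Dynamic power approximation: P = C * V^2 * f. All numbers below are
    # illustrative assumptions, not measured values from either chip.
    def dynamic_power(cap_farads, voltage, freq_hz):
        return cap_farads * voltage ** 2 * freq_hz

    C = 1e-9                                 # assumed effective switched capacitance
    full = dynamic_power(C, 1.0, 600e6)      # hypothetical full-speed operating point
    slow = dynamic_power(C, 0.8, 300e6)      # half the clock at a reduced voltage

    print(f"Half clock at lower voltage: {slow / full:.0%} of full-speed power")
    # -> 32%, i.e. well under the 50% a frequency cut alone would deliver

Because voltage enters the equation squared, cutting frequency and voltage together reduces power super-linearly, which is exactly what an equal-performance comparison exploits.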

Unlike Tegra 3, Logan includes a single voltage rail that feeds just the GPU. NVIDIA instrumented this voltage rail and measured power consumption while running the offscreen 1080p T-Rex HD test in GLB2.7. Isolating GPU power alone, NVIDIA measured around 900mW for Logan's Kepler implementation running at iPad 4 performance levels (potentially as little as 1/5 of Logan's peak performance). NVIDIA also attempted to find and isolate the GPU power rail going into Apple's A6X (using a similar approach to what we documented here), and came up with an average GPU power value of around 2.6W. 
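The measurement itself reduces to averaging instantaneous power (voltage times current) over the benchmark run. A hypothetical sketch of that arithmetic; the sample values are invented and this is not NVIDIA's actual tooling:

    # Average rail power from paired voltage (V) and current (A) samples,
    # e.g. logged while looping the offscreen T-Rex HD test. Sample values
    # here are assumptions for illustration only.
    def average_rail_power(volts, amps):
        samples = [v * i for v, i in zip(volts, amps)]
        return sum(samples) / len(samples)

    volts = [0.82, 0.83, 0.82, 0.84]   # assumed GPU rail voltage under load
    amps = [1.10, 1.08, 1.12, 1.05]    # assumed current from a sense resistor
    print(f"Average GPU power: {average_rail_power(volts, amps):.2f} W")  # ~0.90 W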

I won't focus too much on the GPU power comparison as I don't know what else (if anything) Apple hangs off of its GPU power rail, but the most important takeaway here is that Kepler seems capable of scaling down below 1W. In reality NVIDIA wouldn't ship Logan with a < 1W Kepler implementation, so we'll likely see higher performance (and power consumption) in shipping devices. If these numbers hold, you could see roughly 2x the performance of an iPad 4 in a Logan based smartphone, and 4 - 5x the performance of an iPad 4 in a Logan tablet - in as little as 12 months from now if NVIDIA can ship this thing on time.
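The arithmetic behind those multiples is simple if you assume roughly linear performance-per-watt scaling up from the 900mW data point (optimistic, since voltage usually has to rise along with clock speed). The power budgets below are my guesses for illustration, not NVIDIA figures:

    # Back-of-envelope projection from the ~0.9 W, iPad 4-equivalent data point,
    # assuming linear perf/W scaling. Budgets are assumptions, not NVIDIA numbers.
    BASE_POWER_W = 0.9
    for budget_w, device in ((1.8, "smartphone"), (4.0, "tablet")):
        perf = budget_w / BASE_POWER_W
        print(f"{device}: ~{perf:.1f}x iPad 4 at roughly {budget_w} W GPU power")
    # -> smartphone: ~2.0x, tablet: ~4.4x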

If NVIDIA's A6X power comparison is truly apples-to-apples, then it would be a huge testament to the power efficiency of NVIDIA's mobile Kepler architecture. Given the recent announcement of NVIDIA's willingness to license Kepler IP to any company that wants it, this demo seems very well planned.

NVIDIA did some work to make Kepler suitable for low power operation, but it's my understanding that the underlying architecture isn't vastly different from what we have in notebooks and desktops today. Mobile Kepler retains all of the graphics features of its bigger counterparts, although I'm guessing things like FP64 CUDA cores are gone.

Final Words

For the past couple of years we've been talking about a point in the future when it'll be possible to start playing console class games (Xbox 360/PS3) on mobile devices. We're almost there. The move to Kepler with Logan is a big deal for NVIDIA. It finally modernizes NVIDIA's ultra mobile GPU, bringing graphics API parity to everything from smartphones to high-end desktop PCs. This is a huge step for game developers looking to target multiple platforms. It's also a big deal for mobile OS vendors and device makers looking to capitalize on gaming as a way of encouraging future smartphone and tablet upgrades. As smartphone and tablet upgrade cycles slow down, pushing high-end gaming to customers will become a more attractive option for device makers.

Logan is expected to ship in the first half of 2014. With early silicon back now, I think 10 - 12 months from now is a reasonable estimate. There is the unavoidable fact that we haven't even seen Tegra 4 devices on the market yet and NVIDIA is already talking about Logan. Everything I've heard points to Tegra 4 being on the schedule for a bunch of device wins, but delays on NVIDIA's part forced it to be designed out. Other than drumming up IP licensing business, I wonder if that's another reason why we're seeing a very public demo of Logan now - to show the health of early silicon. There's also a concern about process node. Logan will likely ship at 28nm next year, just before the transition to 20nm. If NVIDIA is late with Logan, we could have another Tegra 3 situation where NVIDIA is shipping on an older process technology.

Regardless of process tech, however, Kepler's power story in ultra mobile seems great. I really didn't believe the GLBenchmark data when I first saw it. I showed it to Ryan Smith, our Senior GPU Editor, and even he didn't believe it. If NVIDIA is indeed able to get iPad 4 levels of graphics performance at less than 1W (and presumably much more performance in the 2.5 - 5W range), it looks like Kepler will do extremely well in mobile.

Whatever NVIDIA's reasons for showing off Logan now, the result is something that I'm very excited about. A mobile SoC with NVIDIA's latest GPU architecture is exactly what we've been waiting for. 

Comments

  • Refuge - Thursday, July 25, 2013 - link

    Give it a tick and a tock and you will be surprised.

    Haswell is Intel's actual attempt at creating a mobile product that meets the expectations of having the Intel moniker with it.

    It is doing much better than Ivy did, and the graphics options are better, but the whole thing is still relatively young and juvenile. The next round I think we will see some very impressive results. Like I keep telling people, the Atom of tomorrow isn't going to be the Atom of yesterday's netbooks.
  • rwei - Wednesday, July 24, 2013 - link

    Serious question, why do mobile GPUs matter? I'm something of a declining gamer who probably last played a serious game around when ME3 came out, and I guess SC2:HotS briefly - and nothing on mobile platforms has excited me. On the other hand, I've accumulated a fat stack of games to play on consoles - the above, and Heavy Rain, Uncharted 3, The Last of Us - but I wouldn't actually play those on, say, a tablet (Heavy Rain maybe?), and even less so a phone.

    Infinity Blade was impressive for its time, but I would hardly buy a device to play it, and even in my reduced-passion state I still care more about games than most people.
  • randomhkkid - Wednesday, July 24, 2013 - link

    I think it will become more of a need as phones become the one device that does everything, i.e. when docked it becomes your desktop and undocked it's a smartphone. Check out the Ubuntu Edge to see what I mean.
  • rwei - Wednesday, July 24, 2013 - link

    As things stand, I wouldn't even do that with an Ultrabook-class laptop, never mind a typical (non-Win8 convertible) tablet - and phones are still on a whole other plane entirely...!

    Particularly if high-DPI catches on (and I hope it does), my understanding is chips of this size won't have anywhere near the bandwidth to support that use case.
  • blacks329 - Wednesday, July 24, 2013 - link

    I had never thought of that but Heavy Rain on a tablet would actually be kind of awesome! Too bad that studio is Sony owned (ie only PS games) and the director is a pretentious douche. Nonetheless, they make interesting 'games' and I look forward to playing Beyond Two Souls.
  • Refuge - Thursday, July 25, 2013 - link

    There is always that slim chance it will pop up in the PlayStation store on some "Approved" HTC devices. I know my HTC One X+ got access to it because of the Tegra 3 in it, but the selection is a joke if you ask me.
  • name99 - Wednesday, July 24, 2013 - link

    You're right --- the population that cares about games is tiny, meaningless to Apple, Samsung, Nokia et al.

    The GPU is relevant on iOS, however, because the entire UI is built around "layers" which are essentially the backing store for every view (think controls like buttons, status bars, text windows, etc). These layers are composited together by the GPU to generate the final image you see. For this to give fluid scrolling, that compositing engine needs to be fast (and remember it is driving retina displays, so lots of pixels). Even today (and a lot more so in iOS) each of these views can be subject to frame by frame transformations (scaling, translation, becoming darker, lighter or more or less transparent) to provide the animations that one takes for granted in iOS, and once again we want those to run glitch free.

    All this stuff basically pushes the backend (texture) part of the GPU, not geometry and lighting. However something which DOES push geometry (I don't know about lighting) is Apple's flyover view in Maps. [Yeah, yeah, if you feel the need to make some adolescent comment about Apple Maps, please, for the love of god, go act like a child somewhere else.] The flyovers in Maps as of today (for the cities that have them) are, truth be told, PRETTY FREAKING INCREDIBLE. They combine the best features of Google Earth and StreetView, in a UI which is easier to use than either of those predecessors, and which runs a lot smoother than those predecessors. But I am guessing that the Maps 3D views pushes the GPU HW to its limits. They are smooth, yes, but they seem to do a careful job of limiting quality to keep smoothness going. There is no anti-aliasing in play, for example, and the tessellation of irregular objects (most obviously trees) is clearly too coarse. All of which means that if a GPU 2x as fast were available, Apple could make the 3D view in Maps look just that much better.

    Finally I suspect (without proof) that Apple also does a large amount of its rendering (ie the stroking and filling of paths, the construction of glyphs, and so on) on the GPU. They've wanted to do it that way for years on OSX, but were always hindered by bwd compatibility concerns. With the chance to start over on iOS, I'd expect they made sure this was a feasible path.
  • tviceman - Wednesday, July 24, 2013 - link

    It's nice to see Nvidia make comparisons to their own products. In this case, outperforming an 8800GTX puts things into good perspective when looking at anand's mobile GPU benchmarks.

    If Nvidia can deliver Logan "on time" then it truly will be a very, very great SoC. The biggest issue they'll still have to deal with is A15's power hungry design. Wayne's (Tegra 6) custom cores will hopefully be more power conscious like the Krait cores are.
  • xype - Wednesday, July 24, 2013 - link

    Oh, wow, I am sure this time around their outlandish performance claims will actually come true and Apple, Samsung, Qualcomm, et al will be totally outclassed.

    Especially since we all know companies like Apple (whose A6X the "sometime next year" chip is compared against) just decided to stop developing their mobile CPUs and will ship the next 4 iterations of each product with an A6X variant.
  • cdripper2 - Wednesday, July 24, 2013 - link

    Forgotten about Lucid Virtu have we? It would seem we SHOULD ignore your further posts ;)
