Ever since its arrival in the ultra mobile space, NVIDIA hasn't really flexed its GPU muscle. The Tegra GPUs we've seen thus far have been okay at best, and in serious need of improvement at worst. NVIDIA often blamed an immature OEM ecosystem unwilling to pay for the sort of large-die SoCs necessary to bring a high-performance GPU to market. Thankfully, that's all changing. Earlier this year NVIDIA laid out its mobile SoC roadmap through 2015, including the 2014 release of Project Logan - the first NVIDIA ultra mobile SoC to feature a Kepler GPU. Yesterday, at a private event at SIGGRAPH, NVIDIA demonstrated functional Logan silicon for the very first time.

NVIDIA got Logan silicon back from the fabs around 3 weeks ago, making it almost certain that we're dealing with some form of 28nm silicon here and not early 20nm samples.

NVIDIA isn't talking about CPU cores, but it's safe to assume that Logan will be another 4+1 arrangement of cores - likely still based on ARM's Cortex A15 IP (but perhaps a newer revision of the core). On the GPU front, NVIDIA confirmed our earlier speculation that Logan includes a single Kepler SMX:

One Kepler SMX features 192 CUDA cores. NVIDIA isn't talking about shipping GPU frequencies either, but it did provide this chart to put Logan's GPU capabilities into perspective:

Don't get too excited as we're looking at a comparison of GFLOPS and not game performance, but the peak theoretical ALU-bound performance of mobile Kepler should exceed that of a PlayStation 3 or GeForce 8800 GTX (memory bandwidth is another story, however). If we look closely at NVIDIA's chart and compare mobile Kepler to the iPad 4, we get a better idea of what sort of clock speeds NVIDIA would need to attain this level of performance. Doing some quick Photoshop estimation, it looks like NVIDIA is claiming mobile Kepler has somewhere around 5.2x the FP power of the PowerVR SGX 554MP4 in the iPad 4 (76.8 GFLOPS). That works out to right around 400 GFLOPS. With a 192-core implementation of Kepler you get 2 FLOPS per core per clock, or 384 FLOPS per cycle. To hit 400 GFLOPS you'd need to clock the mobile Kepler GPU at roughly 1GHz. That's certainly doable from an architectural standpoint (although we've never seen it done on any low power 28nm process), but it's probably a bit too high for something like a smartphone.
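
If you want to sanity check that math yourself, here's a quick back-of-the-envelope sketch in C. The 5.2x multiplier is our own estimate from eyeballing NVIDIA's chart rather than an official figure, and peak throughput assumes every CUDA core retires a fused multiply-add each clock:

```c
/* Back-of-the-envelope math behind the ~400 GFLOPS figure above.
 * The 5.2x multiplier over the iPad 4 is our own chart estimate,
 * not an NVIDIA-provided number. */
#include <stdio.h>

int main(void)
{
    const double ipad4_gflops  = 76.8;  /* PowerVR SGX 554MP4 peak FP32 */
    const double chart_ratio   = 5.2;   /* eyeballed from NVIDIA's chart */
    const double cuda_cores    = 192.0; /* one Kepler SMX */
    const double flops_per_clk = 2.0;   /* FMA: multiply + add per core per clock */

    double target_gflops  = ipad4_gflops * chart_ratio;     /* ~399 GFLOPS */
    double gflops_per_ghz = cuda_cores * flops_per_clk;     /* 384 */
    double required_ghz   = target_gflops / gflops_per_ghz; /* ~1.04 GHz */

    printf("target: %.1f GFLOPS -> required GPU clock: ~%.2f GHz\n",
           target_gflops, required_ghz);
    return 0;
}
```

Halve the clock and you halve the result, which is why the tablet vs. smartphone distinction below matters.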

NVIDIA didn't want to talk frequencies, but it did tell me that we might see something this fast in some sort of a tablet. I suspect that most implementations will be clocked significantly lower. Even at half the frequency though, we're still talking about roughly PlayStation 3 levels of FP power out of a mobile SoC. We know nothing of Logan's memory subsystem, which obviously plays a major role in real world gaming performance, but there's no getting around the fact that Logan's Kepler implementation means serious business. For years we've lamented NVIDIA's mobile GPUs; Logan looks like it's finally going to change that.

API Support and Live Demos

Unlike previous Tegra GPUs, Kepler is a fully unified architecture, and it is OpenGL ES 3.0, OpenGL 4.4 and DirectX 11 compliant. The API compliance alone is a huge step forward for NVIDIA. It's also a big one for game developers looking to move more seriously into mobile. Epic's Tim Sweeney even did a blog post for NVIDIA talking about Logan's implementation of Kepler and how it brings feature parity between PCs, next-gen consoles and mobile platforms. NVIDIA responded in kind by running some Unreal Engine 4 demos on Android on a Logan test platform. That's really the big story behind all of this. With Logan, NVIDIA will bring its mobile GPUs up to feature parity with what it's shipping in the PC market. Game developers looking to port games between console, PC, tablet and smartphone should have an easier job of it if all platforms support the same APIs. Logan will take NVIDIA from being very behind in API support (with no OpenGL ES 3.0 support) to the head of the class.
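
To give a concrete (if hypothetical) sense of what API parity means for developers, the sketch below shows the sort of runtime capability probe a cross-platform engine might run on an Android device. It assumes an OpenGL ES 3.0 context has already been made current via EGL; the calls are standard GLES, nothing Logan-specific:

```c
/* Hypothetical capability probe a game engine might run at startup.
 * Assumes an OpenGL ES 3.0 context is already current on the calling thread. */
#include <stdio.h>
#include <GLES3/gl3.h>

void print_gl_capabilities(void)
{
    printf("Vendor:   %s\n", (const char *)glGetString(GL_VENDOR));
    printf("Renderer: %s\n", (const char *)glGetString(GL_RENDERER));
    printf("Version:  %s\n", (const char *)glGetString(GL_VERSION));
    printf("GLSL:     %s\n", (const char *)glGetString(GL_SHADING_LANGUAGE_VERSION));

    /* ES 3.0 exposes the extension list as indexed strings */
    GLint num_extensions = 0;
    glGetIntegerv(GL_NUM_EXTENSIONS, &num_extensions);
    for (GLint i = 0; i < num_extensions; i++)
        printf("  %s\n", (const char *)glGetStringi(GL_EXTENSIONS, i));
}
```

On shipping Tegra parts that version query tops out at OpenGL ES 2.0; on Logan the same code should report ES 3.0, with the remaining desktop-class functionality exposed through full OpenGL and extensions.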

NVIDIA took its Ira demo, originally run on a Titan at GTC 2013, and got it up and running on a Logan development board. Ira did need some work to make the transition to mobile: the skin shaders were simplified, smaller textures were used, and the rendering resolution was dropped to 1080p. NVIDIA claims this demo was done in a 2 - 3W power envelope.

The next demo is called Island and was originally shown on a Fermi desktop part. Running on Logan/mobile Kepler, this demo shows OpenGL 4.3 and hardware tessellation working.

The development board does feature a large heatspreader, but that's not too unusual for early silicon just out of bring-up. Logan's package size should be comparable to Tegra 4, although the die size will clearly be larger. The dev board is running Android and is connected to a 10.1-inch 1920 x 1200 touchscreen.

Comments

  • ollienightly - Thursday, July 25, 2013 - link

    And what do we need OpenGL 4.4 for? More wasted silicon or power? You do realize NO ONE would ever develop OpenGL 4.4 games for the Tegra 5 EVER.
  • djgandy - Thursday, July 25, 2013 - link

    Nvidia haven't even implemented OpenGL ES 3.0 on anything that ships or is near to shipping! And now they are going completely to the other end of the spectrum and doing a fully DX11 compliant chip!? Oh, and the reason there is no OpenGL 4.4 on other mobile GPUs is because there is no point in doing it. Why burn area for an API you cannot use?

    Keep smoking that Nvidia marketing bs
  • ltcommanderdata - Wednesday, July 24, 2013 - link

    The expectation is that this year is Apple's "S" refresh, which last time (iPad 2/iPhone 4S) brought Apple claims of 9x (iPad 2) and 7x (iPhone 4S) GPU performance increases over the previous generation, which I believe translated into ~5x real world performance improvements. As such, a 5x theoretical GPU performance increase by nVidia in 2014 is not out of line with what Apple could be delivering later in 2013. As michael2k points out, PowerVR 6 Rogue is certainly scalable to those performance levels. We'll have to see them implemented in actual devices within realistic power constraints to really compare their effectiveness, of course.
  • mmrezaie - Wednesday, July 24, 2013 - link

    I also think the end result will be like what we have with AMD. AMD offers very good hardware, but nvidia pushes more on the API/driver level. So if PowerVR and Kepler become comparable, I wonder what kind of competition we will see on the driver/API/game engine front.
  • ltcommanderdata - Wednesday, July 24, 2013 - link

    http://gfxbench.com/compare.jsp?D1=Apple+iPhone+5&...

    Apple actually writes their own drivers for their PowerVR GPUs, which seem to be very efficient considering the iPhone 5's SGX543MP3 actually achieves double the triangle throughput of the Galaxy S4's SGX544MP3 despite the Galaxy S4 having a higher clock speed and more memory bandwidth. So there is the comparison between PowerVR reference drivers (which are probably what most device manufacturers are using) vs Kepler, and Apple's PowerVR drivers vs Kepler. It's a safe bet Apple will continue to make gaming a focus on iOS given that's a major part of the App Store, so they'll continue putting effort into GPU driver performance optimization. nVidia of course has a long track record with driver optimization, so we'll definitely see lots of competition in this area.

    In terms of features, Apple continues to expose new features in PowerVR GPUs through OpenGL ES extensions. They've already implemented, or are going to implement in iOS 7, a number of OpenGL ES 3.0 features on existing Series 5/5XT GPUs, including sync objects, instanced rendering, and additional texture formats. The major untapped feature is multiple render target support, which should be coming since the EXT extension has now been finalized. Series6/Rogue is DX10 compliant, so I expect we'll be seeing those additional features, like geometry shaders, exposed through OpenGL ES 3.0 extensions. So it'll be a comparison between OpenGL 3.3-class features exposed through OpenGL ES 3.0 extensions and OpenGL 4.4.

    One advantage Apple does have is that they can regularly release performance optimizations and new features in regular iOS updates which see rapid adoption throughout the userbase so developers can count on the features and use them. nVidia likely has more trouble pushing out new drivers broadly on Android. As such, it's good that nVidia is aiming high to begin with in terms of performance and features since they have less opportunity to gradually increase them over time through driver updates.
  • name99 - Wednesday, July 24, 2013 - link

    You also have to remember that most of Apple's customers (like most customers of all phones and tablets) don't give a damn about the most demanding games. Apple is happy to help out game developers, happy even to boast about them occasionally, but games are not central to what Apple cares about in a GPU.

    What Apple DOES care about is enhancing the entire UI. This means that (especially as they get more control over their entire SoC) they're going to be doing more and more things that will be invisible if you just look at specs and traditional benchmarks. For example, it means that they will pick and choose whatever features are valuable in OpenGL 4.4 and integrate those into their devices (maybe, maybe not, exposing them as API).

    In this context, for example, the shared buffers of OpenGL 4.4 would be extremely valuable to Layer Manager, and they have enough control over the CPU (and I assume could negotiate enough control over the GPU) to implement what's necessary to get this working on their systems. It could be there, invisible to external programmers, but making Layer Manager operations (i.e. the entire damn UI) run 20% faster and with 20% less energy.

    A second example. Apple, on both OS X and iOS, has constantly tried to push image processing ever closer to what is "theoretically" correct rather than what is computationally easy. So they would like all image processing to be done in something like Lab space, and only converted to gamma-corrected RGB at the very last stage of display. They do this on OS X; on iOS, with less CPU power, they do as much as they can in sRGB space. But if you had HW that ran the transformations from RGB (with various profiles) or YUV (again with various profiles) to Lab and back, you could expand this for usage everywhere. It's the kind of small thing (like kerning or ligatures) that many people won't notice, but it will make all images (and especially manipulated/processed images) on iOS look just that much better --- and to really pull it off requires dedicated HW.

    Point is: I am sure Apple will continue to throw more transistors at the GPU part of their SOC. But they won't necessarily be throwing those transistors at the same things nVidia is throwing them at. They may even seem to lag nVidia in benchmarks, but it would be foolish to map that onto assuming the devices feel slower than nVidia devices.
  • happycamperjack - Wednesday, July 24, 2013 - link

    I think your opinion about Apple's position on gaming is 4 years old. 75% of App Store revenue comes from games. The iPhone 3GS, iPhone 4S, iPhone 5 and iPads all featured GPUs, if not necessarily CPUs, that were well ahead of the competition when they came out. So Apple doesn't just CARE about games, they effing love games/money! It didn't start out this way of course, but they learned that's what their customers wanted through app downloads and sales, so that's what they are focusing on now.
  • lmcd - Wednesday, July 24, 2013 - link

    Apple's mentality there is accurate for Macs but not iOS devices.
  • happycamperjack - Wednesday, July 24, 2013 - link

    Well I can't blame them. Most computer games are written and optimized in DirectX. With the Mac market still a small fraction of the PC market, it's hard to see developers change their DirectX stance. Thus it's hard to see Apple change their stance on computer games as well. It's more likely for them to make a console based on iOS than push for Mac games.
  • TheJian - Monday, August 5, 2013 - link

    The App Store doesn't make a ton of money, and neither does Google's currently. Revenue and profit are two different things as well. They are both really just designed to push the platforms and devices. Maybe they will be huge money makers in the end as games get more evolved and rise in price, but I doubt Apple makes $1B on the store vs something like $39B from the hardware. In 2011 Piper Jaffray said they made ~$239 mil, but Apple divulges nothing, so there's no real proof of what they pay in costs to run the store or what they make after that. Apple has only claimed to run it at break even (Oppenheimer, 2010). If the numbers are correct, they make ~$3 bil in revenue on downloads, but how much is left for profit after server costs, maintenance, bandwidth, etc.?

    Apple will show it cares about games when developers start OPTIMIZING for its products. Currently they are inspiring nobody to do this. I see nobody but Nvidia doing anything to make things look better on their hardware. The fact that Apple's store gets 75% of its revenue from games just says game devs own most of the revenue from the store, and that consoles will suffer because many people game elsewhere (which shows in Wii U/Vita sales). It says nothing about APPLE themselves promoting games development.

    Links to proof of Apple investing in games, please. So far all I see is "you should be thankful we let you put your game on iOS" rather than "Here, please make it BETTER on iOS & A6 and we'll help you". With NV they send help to the game dev to optimize for their hardware (literally send people to help out). Check out the Ouya comments etc. If Apple is doing this I'm not aware of it, and I have seen no articles from any dev saying "Apple was great to work with, gave tons of help to optimize for A6" etc... AMD/NV have courted game devs forever (ok, 20 yrs). I have seen nothing from Apple, and devs didn't even jump on Mac conversions until Macs hit 10% share (which is dropping now, so I expect it to drop again for devs). Apple appears to be learning nothing. If they had made games a HUGE priority 3 yrs ago, Android would have never taken off. They played the enterprise card properly, but so far they have wasted the game card. Enterprise brought down RIM (everyone gaining Exchange), but only Google seems to be helping to court devs (with NV helping). Shield wasn't on display in main view for all at Google I/O for nothing. Google seems to understand gaming is needed to kill Windows, take over some PC sales and kill DirectX. All of these are done with games on Android. Couple that strategy with a free OS and a free office package at some point for home users (in a decent package) and WINTEL isn't needed. Qcom/NV/Samsung will provide the SoC power and Google provides the platform to run on them. All of us win in games if we get off DirectX. Devs make more money because of easy porting, thus having more money for risk on better games since they have so many devices to target. Any decent game should make money just due to the sheer # of devices to sell to at that point.
