NVIDIA Tegra X1 Preview & Architecture Analysis
by Joshua Ho & Ryan Smith on January 5, 2015 1:00 AM EST
GPU Performance Benchmarks
As part of today’s announcement of the Tegra X1, NVIDIA also gave us a short opportunity to benchmark the X1 reference platform under controlled circumstances. In this case NVIDIA had several reference platforms plugged in and running, pre-loaded with various benchmark applications. The reference platforms themselves had a simple heatspreader mounted on them, intended to replicate the ~5W heat dissipation capabilities of a tablet.
The purpose of this demonstration was two-fold: first, to show that X1 was up and running and capable of NVIDIA’s promised features; second, to showcase the platform’s strong GPU performance. Meanwhile NVIDIA also had an iPad Air 2 on hand for power testing, running Apple’s latest and greatest SoC, the A8X. NVIDIA has made it clear that they consider Apple the SoC manufacturer to beat right now, as the A8X’s PowerVR GX6850 is the fastest GPU among currently shipping SoCs.
It goes without saying that these results should be taken with an appropriate grain of salt until we can get Tegra X1 back to our labs. However, we saw all of the testing first-hand, and as best we can tell NVIDIA’s tests were on the level.
NVIDIA Tegra X1 Controlled Benchmarks

| Benchmark | A8X (AT) | K1 (AT) | X1 (NV) |
|---|---|---|---|
| BaseMark X 1.1 Dunes (Offscreen) | 40.2 fps | 36.3 fps | 56.9 fps |
| 3DMark 1.2 Unlimited (Graphics Score) | 31781 | 36688 | 58448 |
| GFXBench 3.0 Manhattan 1080p (Offscreen) | 32.6 fps | 31.7 fps | 63.6 fps |
For benchmarking, NVIDIA had BaseMark X 1.1, 3DMark 1.2 Unlimited, and GFXBench 3.0 up and running. Our X1 numbers come from the benchmarks we ran as part of NVIDIA’s controlled test, while the A8X and K1 numbers come from our Mobile Bench results.
NVIDIA’s stated goal with X1 is to (roughly) double K1’s GPU performance, and while these controlled benchmarks for the most part don’t quite get there, X1 is still a significant improvement over K1. NVIDIA does meet their goal under Manhattan, where performance is almost exactly doubled, while the 3DMark and BaseMark X scores increased by 59% and 57% respectively.
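For readers who want to sanity-check the scaling claims, the generational gains fall straight out of the table above. A quick sketch (our own arithmetic, not an NVIDIA tool):

```python
# Computing the X1-over-K1 gains quoted above from the controlled-benchmark table.

K1_SCORES = {  # Tegra K1 results (AnandTech Mobile Bench)
    "BaseMark X 1.1 Dunes (Offscreen)": 36.3,
    "3DMark 1.2 Unlimited (Graphics Score)": 36688,
    "GFXBench 3.0 Manhattan 1080p (Offscreen)": 31.7,
}
X1_SCORES = {  # Tegra X1 results (NVIDIA controlled test)
    "BaseMark X 1.1 Dunes (Offscreen)": 56.9,
    "3DMark 1.2 Unlimited (Graphics Score)": 58448,
    "GFXBench 3.0 Manhattan 1080p (Offscreen)": 63.6,
}

for bench, k1 in K1_SCORES.items():
    x1 = X1_SCORES[bench]
    print(f"{bench}: {x1 / k1:.2f}x ({(x1 / k1 - 1) * 100:.0f}% faster)")

# Output:
# BaseMark X 1.1 Dunes (Offscreen): 1.57x (57% faster)
# 3DMark 1.2 Unlimited (Graphics Score): 1.59x (59% faster)
# GFXBench 3.0 Manhattan 1080p (Offscreen): 2.01x (101% faster)
```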
Finally, for power testing NVIDIA had an X1 reference platform and an iPad Air 2 rigged to measure power consumption from the devices’ respective GPU power rails. The purpose of this test was to showcase that, thanks to its energy optimizations, X1 is capable of delivering the same GPU performance as the A8X while drawing significantly less power; in other words, that X1’s GPU is more efficient than A8X’s GX6850. To be clear, these are GPU power measurements only and not total platform power measurements, so they won’t account for CPU differences (e.g. A57 versus Enhanced Cyclone) or the power impact of LPDDR4.
Top: Tegra X1 Reference Platform. Bottom: iPad Air 2
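NVIDIA did not disclose the instrumentation behind these rail measurements, but conceptually the math reduces to averaging instantaneous voltage × current samples taken on the GPU rail over the benchmark run. A minimal sketch with made-up sample values, purely for illustration:

```python
# Hedged sketch of rail-power averaging; the actual instrumentation and
# sample rates NVIDIA used were not disclosed.

def average_rail_power(volts: list[float], amps: list[float]) -> float:
    """Mean power in watts from paired voltage/current samples."""
    assert len(volts) == len(amps)
    return sum(v * i for v, i in zip(volts, amps)) / len(volts)

# Hypothetical samples for illustration only:
watts = average_rail_power([1.0, 1.0, 1.0], [1.45, 1.55, 1.53])
print(f"{watts:.2f} W")  # 1.51 W
```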
For power testing NVIDIA ran Manhattan 1080p (offscreen) with X1’s GPU underclocked to match the performance of the A8X at roughly 33fps. Pictured below is the average power consumption (in watts) for the X1 and A8X respectively.
NVIDIA’s tools show the X1’s GPU averages 1.51W over the run of Manhattan. Meanwhile the A8X’s GPU averages 2.67W, over a watt more for otherwise equal performance. This test is especially notable since both SoCs are manufactured on the same TSMC 20nm SoC process, which means that any efficiency differences between the two GPUs come down to architecture and implementation rather than the manufacturing process.
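Since performance was held constant at roughly 33fps, the efficiency gap is easy to express as performance per watt. A quick sketch using the figures above (the ~33fps match point is approximate):

```python
# Perf-per-watt from the matched-performance Manhattan run described above.

MATCHED_FPS = 33.0    # X1 underclocked to match the A8X (approximate)
X1_GPU_WATTS = 1.51   # average GPU-rail power, Tegra X1
A8X_GPU_WATTS = 2.67  # average GPU-rail power, A8X (GX6850)

x1_eff = MATCHED_FPS / X1_GPU_WATTS    # ~21.9 fps/W
a8x_eff = MATCHED_FPS / A8X_GPU_WATTS  # ~12.4 fps/W

print(f"X1:  {x1_eff:.1f} fps/W")
print(f"A8X: {a8x_eff:.1f} fps/W")
print(f"X1 advantage: {x1_eff / a8x_eff:.2f}x")  # ~1.77x
```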
There are a number of other variables we’ll ultimately need to take into account here, including clockspeeds, relative die area of the GPU, and total platform power consumption. But assuming NVIDIA’s numbers hold up in final devices, X1’s GPU is looking very good out of the gate – at least when tuned for power over performance.
194 Comments
Aenean144 - Monday, January 5, 2015
Honestly, why? What does doing that really benefit Apple?
chizow - Monday, January 5, 2015
They'll gain access to better GPU IP immediately; if they're going to pay a licensing fee, they might as well pay (presumably) more for better IP, I suppose. From a software support standpoint, they'd gain 100% compatibility and portability with all the latest OpenGL specs, so you'd have a shared graphics platform with their iOS platforms. Then they wouldn't have to muck around with Metal and the iOS API so much, which would make it easier for them to merge the two platforms as the rumors have suggested.
Aenean144 - Monday, January 5, 2015
None of what you state is a material advantage to Apple. The essence of your belief is that Nvidia will have a more powerful GPU solution than what Apple can come up with on their own or through ImgTec. A lot of people don't think that will be true.
On top of that, the Tegra X1 does not appear suitable for phones; moreover, Nvidia appears to be concentrating on higher-TDP embedded applications with it. This would pose a problem for Apple, as the iPhone is currently their most important product.
chizow - Monday, January 5, 2015
Not having to support two different graphics APIs, one of which is being developed from scratch, isn't a material advantage to Apple? And who exactly are these people who don't think what Nvidia has is already better than what ImgTec can produce? Realists already know Nvidia's last-gen Kepler is better than PowerVR Rogue. Remember when PowerVR released roadmaps and those same people expected a 6-core PowerVR Rogue XT to match Tegra K1 based on paper-spec FLOP counts alone? What happened in reality? Apple needed to add ANOTHER two cores in a custom 8-core Rogue XT configuration to trade blows with K1.
The whole point of licensing is that even if the Tegra SoC as a whole doesn't fit their needs, they can license the IP and integrate it however they like into their own SoC.....
Yojimbo - Monday, January 5, 2015
He is talking about Apple licensing a GPU from NVIDIA instead of from IMG. The Tegra X1 is not a GPU, it is an SoC targeted at a specific market segment. The target of the Tegra X1 has been set by the market NVIDIA thinks it can penetrate more than by the abilities of the GPU it contains. NVIDIA was not able to penetrate the smartphone market because of the lack of a good modem option. They have since stopped trying, and this is well known. Apple has access to a modem, so this is not a concern for them. All they need to consider is whether licensing a GPU, or GPU IP, from NVIDIA helps them more than licensing from IMG. I think on the same process technology, NVIDIA's offering would be superior, no matter if used in a phone or tablet.
lucam - Wednesday, January 7, 2015
Really? Then why doesn't NV do any phone products with its GPUs anymore?
name99 - Monday, January 5, 2015
Who says nV is WILLING to license? Do they license to anyone else?
pSupaNova - Tuesday, January 6, 2015
http://www.anandtech.com/show/7083/nvidia-to-licen...
chizow - Tuesday, January 6, 2015
And they also generate roughly $266 million per year in revenue as a result of their cross-licensing agreement/settlement with Intel. That obviously makes Intel the most likely suitor to be the first licensee of Nvidia SoC GPU IP, since they already have Nvidia GPU IP in their desktop CPU IGPs but still license ImgTec IP for their mobile SoCs. But my bet is on Apple being the first major licensee, largely dependent on the outcome of the Samsung/Qualcomm IP litigation.
chitownbarber - Tuesday, January 6, 2015
Per Samsung's track record, the legal eagles will drag things on for years to come. Not sure what Qualcomm will do. If you think Apple is in negotiations with Nvidia, which I agree with, then they should be coming to an agreement sometime soon so the GPU engineers Apple has hired can work their mojo. I'm sure Apple would like nothing better than to leave Samsung in the dust by exploiting the advanced gaming market they aspire to, along with a rumored do-all living-room console.