The DirectX 12 Performance Preview: AMD, NVIDIA, & Star Swarm
by Ryan Smith on February 6, 2015 2:00 PM EST - Posted in GPUs, AMD, Microsoft, NVIDIA, DirectX 12
Star Swarm & The Test
For today’s DirectX 12 preview, Microsoft and Oxide Games have supplied us with a newer version of Oxide’s Star Swarm demo. Originally released in early 2014 as a demonstration of Oxide’s Nitrous engine and the capabilities of Mantle, Star Swarm is a massive space combat demo that is designed to push the limits of high-level APIs and demonstrate the performance advantages of low-level APIs. Due to its use of thousands of units and other effects that generate a high number of draw calls, Star Swarm can push over 100K draw calls, a massive workload that causes high-level APIs to simply crumble.
Because Star Swarm generates so many draw calls, it is essentially a best-case scenario test for low-level APIs, exploiting the fact that high-level APIs can’t effectively spread the draw call workload over several CPU threads. As a result the performance gains from DirectX 12 in Star Swarm are going to be much greater than in most (if not all) actual games, but nonetheless it’s an effective tool to demonstrate the performance capabilities of DirectX 12 and to showcase how it is capable of better distributing work over multiple CPU threads.
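The core idea can be sketched in plain C++: in DirectX 12, each worker thread records draw calls into its own command list, so the CPU-side submission cost scales across cores, whereas DirectX 11 funnels nearly everything through one thread. This is a conceptual sketch only; `CommandList` and `record_in_parallel` are illustrative stand-ins, not the actual D3D12 API.

```cpp
#include <cstddef>
#include <thread>
#include <vector>

// Stand-in for recording one draw call into a command list. In D3D12 each
// worker thread records into its own ID3D12GraphicsCommandList; here we just
// count the calls to model the per-thread CPU-side work.
struct CommandList {
    std::size_t recorded = 0;
    void record_draw() { ++recorded; }
};

// Split `total_draws` across `workers` threads, each recording into its own
// command list -- the pattern DirectX 12 enables and DirectX 11 largely cannot.
std::vector<CommandList> record_in_parallel(std::size_t total_draws,
                                            unsigned workers) {
    std::vector<CommandList> lists(workers);
    std::vector<std::thread> threads;
    const std::size_t per_worker = total_draws / workers;
    const std::size_t remainder  = total_draws % workers;
    for (unsigned w = 0; w < workers; ++w) {
        // Each thread touches only its own list element, so no locking needed.
        const std::size_t n = per_worker + (w < remainder ? 1 : 0);
        threads.emplace_back([&lists, w, n] {
            for (std::size_t i = 0; i < n; ++i) lists[w].record_draw();
        });
    }
    for (auto& t : threads) t.join();
    return lists;
}
```

With a 100K-draw-call workload like Star Swarm's, the point is that the work divides cleanly across however many cores are available, instead of bottlenecking on one.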
It should be noted that while Star Swarm itself is a synthetic benchmark, the underlying Nitrous engine is relevant and is being used in multiple upcoming games. Stardock is using the Nitrous engine for their forthcoming Star Control game, and Oxide is using the engine for their own game, set to be announced at GDC 2015. So although Star Swarm is still a best case scenario, many of its lessons will be applicable to these future games.
As for the benchmark itself, we should also note that Star Swarm is a non-deterministic simulation. The benchmark is based on having two AI fleets fight each other, and as a result the outcome can differ from run to run. The good news is that although it’s not deterministic, the benchmark’s RTS mode keeps run-to-run variation low enough to produce reasonably consistent results. Individual runs will still fluctuate somewhat, but the benchmark reliably demonstrates the larger performance trends.
The Test
For today’s preview Microsoft, NVIDIA, and AMD have provided us with the necessary WDDM 2.0 drivers to enable DirectX 12 under Windows 10. The NVIDIA driver is 349.56 and the AMD driver is 15.200. At this time we do not know when these early WDDM 2.0 drivers will be released to the public, though we would be surprised not to see them released by the time of GDC in early March.
In terms of bugs and other known issues, Microsoft has informed us that there are some known memory and performance regressions in the current WDDM 2.0 path that have since been fixed in interim builds of Windows. In particular the WDDM 2.0 path may see slightly lower performance than the WDDM 1.3 path for older drivers, and there is an issue with memory exhaustion. For this reason Microsoft has suggested that a 3GB card is required to use the Star Swarm DirectX 12 binary, although in our tests we have been able to run it on 2GB cards seemingly without issue. Meanwhile DirectX 11 deferred context support is currently broken in the combination of Star Swarm and NVIDIA's drivers, causing Star Swarm to immediately crash, so these results are with D3D 11 deferred contexts disabled.
For today’s article we are looking at a small range of cards from both AMD and NVIDIA to showcase both performance and compatibility. For NVIDIA we are looking at the GTX 980 (Maxwell 2), GTX 750 Ti (Maxwell 1), and GTX 680 (Kepler). For AMD we are looking at the R9 290X (GCN 1.1), R9 285 (GCN 1.2), and R7 260X (GCN 1.1). As we mentioned earlier, support for Fermi and GCN 1.0 cards will be forthcoming in future drivers.
Meanwhile on the CPU front, to showcase the performance scaling of Direct3D we are running the bulk of our tests on our GPU testbed with 3 different settings to roughly emulate high-end Core i7 (6 cores), i5 (4 cores), and i3 (2 cores) processors. Unfortunately we cannot control for our 4960X’s L3 cache size, however that should not be a significant factor in these benchmarks.
DirectX 12 Preview CPU Configurations (i7-4960X)
Configuration | Emulating
6C/12T @ 4.2GHz | Overclocked Core i7
4C/4T @ 3.8GHz | Core i5-4670K
2C/4T @ 3.8GHz | Core i3-4370
Though not included in this preview, AMD’s recent APUs should slot between the 2 and 4 core options thanks to the design of AMD’s CPU modules.
CPU: | Intel Core i7-4960X @ 4.2GHz
Motherboard: | ASRock Fatal1ty X79 Professional
Power Supply: | Corsair AX1200i
Hard Disk: | Samsung SSD 840 EVO (750GB)
Memory: | G.Skill RipjawZ DDR3-1866 4 x 8GB (9-10-9-26)
Case: | NZXT Phantom 630 Windowed Edition
Monitor: | Asus PQ321
Video Cards: | AMD Radeon R9 290X, AMD Radeon R9 285, AMD Radeon R7 260X, NVIDIA GeForce GTX 980, NVIDIA GeForce GTX 750 Ti, NVIDIA GeForce GTX 680
Video Drivers: | NVIDIA Release 349.56 Beta, AMD Catalyst 15.200 Beta
OS: | Windows 10 Technical Preview 2 (Build 9926)
Finally, while we’re going to take a systematic look at DirectX 12 from both a CPU standpoint and a GPU standpoint, we may as well answer the first question on everyone’s mind: does DirectX 12 work as advertised? The short answer: a resounding yes.
245 Comments
Ryan Smith - Friday, February 6, 2015 - link
Gen 7.5 graphics and up will support DX12. So that's Haswell/4th Gen Core and newer.
Murloc - Saturday, February 7, 2015 - link
As you said, it's unlikely to bring much advantage. Just the usual new features from one version to the next.
inighthawki - Sunday, February 8, 2015 - link
Remember that lower CPU requirements means that there is more power available for the integrated GPU. There was an article that sort of described the behavior a while back: http://www.extremetech.com/gaming/187970-directx-1...
tipoo - Friday, February 6, 2015 - link
I'm confused at this: "What we find is that Star Swarm and DirectX 12 are so efficient that only our most powerful card, the GTX 980, is not CPU bound even with 2 cores."
I'm not sure how the first part of that proves the second. Wouldn't more CPU efficiency more likely be shown in being GPU bound, not CPU bound?
tipoo - Friday, February 6, 2015 - link
Yeah, having read the next few pages I think that should either say "is CPU bound" rather than "not CPU bound", as the rest of the cards can be fed with just a 2 core CPU, while the 980 has headroom for more performance.
OrphanageExplosion - Friday, February 6, 2015 - link
There are some pretty big differences in the CPU utilisation of the DX11 NVIDIA and AMD drivers. Maybe reviewing all GPUs with a high-end i7 isn't such a good idea, particularly on the lower-end and mainstream cards which aren't likely to be paired with top-end processors?yannigr2 - Friday, February 6, 2015 - link
That's a very interesting article and a big victory for the Maxwell architecture. I hope AMD's 300 series will be more mature under DX12 and Mantle and perform much better than the 200 series.
It will be extremely interesting to see an AMD FX in this test. Maybe the ugly duckling could transform into a swan?
200380051 - Friday, February 6, 2015 - link
Your comment sums it up well. FX test, great idea.
zmeul - Friday, February 6, 2015 - link
Quick question: why hasn't VRAM usage been taken into account?
Ryan Smith - Saturday, February 7, 2015 - link
The short answer is that all of these cards have enough VRAM that it's not a real issue.