The AMD Radeon R9 290X Review
by Ryan Smith on October 24, 2013 12:01 AM EST
PowerTune: Improved Flexibility & Fan Speed Throttling
The final new technology being introduced with Hawaii and the 290X is the latest iteration of AMD’s PowerTune technology. Although AMD has not given it a formal name to differentiate it from previous incarnations of PowerTune, the latest iteration brings with it a number of important changes that will significantly alter how the 290X and future cards behave, and how those behaviors can be adjusted.
In a nutshell, with the latest iteration of PowerTune AMD is gaining the necessary hardware monitoring and adjustment abilities to modernize PowerTune, bringing it functionally up to par with NVIDIA’s GPU Boost 2.0, which itself was introduced earlier this year. This includes not only fine grained clockspeed/voltage stepping, which by itself is a major improvement over what Tahiti could do, but also far more flexible control over the video card, allowing it to be governed by power consumption, temperature, or even fan speed/noise.
Diving right into matters, to once again use Tahiti as a baseline for comparison, PowerTune as implemented on pre-GCN 1.1 cards like Tahiti has 3 (non-boost) or 4 (boost) power management clockspeed/voltage states. These are idle, intermediate (low-3D), high (full-3D), and for the cards that use it, boost. When for whatever reason PowerTune needed to clamp down on power usage to stay within the card’s designated limits, it could either jump states or merely turn down the clockspeed within a state, depending on how far over the throttle point the card was operating. In practice state jumps were rare – it’s a big gap between high and intermediate – so for non-boost cards it would merely turn down the GPU clockspeed within the high state until power consumption was where it needed to be, while boost cards would either do the same within the boost state, or less frequently drop to the high state and then modulate.
Power States Available In Tahiti & Other GCN 1.0 GPUs
Modulating clockspeeds in such a manner is a relatively easy thing to implement, but it’s not without its drawbacks, the big one being that semiconductor power consumption scales at a far greater rate with voltage than it does with clockspeed. So although turning down clockspeeds does reduce power consumption, it doesn’t do so by a large degree. If you want big power savings, you need to turn down the voltage too, and to do so in a fine grained manner.
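To put rough numbers on that relationship, the sketch below uses the classic dynamic power approximation (power scales with frequency and with the square of voltage). The specific clockspeeds, voltages, and capacitance constant are illustrative assumptions of our own, not Hawaii’s actual figures.

```python
# Illustrative sketch of why voltage matters more than clockspeed for power.
# Uses the standard dynamic power approximation P ~ C * f * V^2.
# All numbers below are made-up illustrative values, not actual Hawaii figures.

def dynamic_power(capacitance, freq_mhz, voltage):
    """Relative dynamic power: P = C * f * V^2 (arbitrary units)."""
    return capacitance * freq_mhz * voltage ** 2

C = 1.0                                       # arbitrary switched-capacitance constant
base = dynamic_power(C, 1000, 1.20)           # 1000MHz @ 1.20V baseline

clock_only = dynamic_power(C, 900, 1.20)      # -10% clock, voltage unchanged
clock_and_v = dynamic_power(C, 900, 1.10)     # -10% clock plus a voltage step down

print(f"Clock-only throttle:      {clock_only / base:.0%} of baseline power")
print(f"Clock + voltage throttle: {clock_and_v / base:.0%} of baseline power")
# Clock-only lands around 90% of baseline; clock plus voltage around 76% --
# dropping voltage saves far more power per unit of lost clockspeed.
```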
Now given the limitations of Tahiti and other pre-GCN 1.1 cards, in order to implement fine grained power states significant changes needed to be made to both the GPU and the card, which is why AMD has not been able to bring this about until Hawaii and Bonaire. As power management is primarily handled by an external controller, the GPU needs to have a telemetry interface to provide the necessary data to the external controller and the ability/programming to quickly jump between states. Meanwhile the external controller needs to be capable enough to handle the telemetry data (it’s a lot of data) and able to quickly switch between states (the faster the better).
With that in mind, for GCN 1.1 AMD set out to solve those problems by giving GCN 1.1 parts the necessary telemetry interface to be paired with equally capable 3rd party voltage controllers. Dubbed the Serial VID Interface (SVI2), the interface is the lynchpin of AMD’s latest iteration of PowerTune. In short, by adding this interface and thereby providing the necessary data to the external controller AMD finally has the ability to support a large number of states and to rapidly switch between them.
For the 290X and 260X, when combined with the IR 3567B controller AMD is currently using, this translates into the ability to switch voltages as frequently as every 10 microseconds, and to do so across upwards of 255 voltage steps. This massive increase in flexibility in turn allows AMD to control for power consumption, temperature, and even noise in ways that weren’t practical with the coarse grained power management features of GCN 1.0 cards.
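To put the 10 microsecond figure in perspective, here’s a back-of-the-envelope sketch of how many voltage adjustments that cadence allows within a single rendered frame. The 60fps frame time is our own illustrative assumption; the switching interval and step count are AMD’s stated figures.

```python
# Back-of-the-envelope: how often can PowerTune adjust voltage per frame?
# The 10us switching interval and 255 steps come from AMD's stated figures;
# the 60fps frame time is an illustrative assumption.

switch_interval_us = 10          # minimum time between voltage changes
vid_steps = 255                  # number of discrete voltage steps available

frame_time_us = 1_000_000 / 60   # ~16,667us per frame at 60fps
adjustments_per_frame = frame_time_us / switch_interval_us

print(f"Voltage adjustments possible per 60fps frame: ~{adjustments_per_frame:,.0f}")
# Roughly 1,667 possible adjustments per frame, across up to 255 voltage
# steps -- effectively continuous control compared to Tahiti's handful of
# fixed states.
```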
With this level of flexibility in hand, AMD has significantly overhauled PowerTune, both with respect to how PowerTune operates and how the user can manipulate it. Starting under the hood, the inferred states used by Tahiti and other GCN 1.0 GPUs are gone, replaced with a wide number of real power states, thereby giving AMD the ability to reduce power consumption in a fine grained manner with real voltage changes as opposed to resorting to ineffective clockspeed modulation. Coupled with that is a new, relaxed (“fuzzy”) fan control scheme, built around slowing down the fan speed response time in order to avoid rapid changes in noise and pitch, and thereby avoid drawing attention to the card (this being very similar to NVIDIA’s adaptive fan controller).
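The “fuzzy” behavior is essentially a slow, smoothed response to temperature rather than an instantaneous one. The sketch below shows one way such a relaxed controller could work, using simple exponential smoothing; this is our own illustration of the concept, not AMD’s actual fan control algorithm, and the fan curve values are made up.

```python
# Illustration of a "relaxed" fan controller: instead of jumping straight to
# the fan speed the temperature calls for, the controller moves toward it a
# little at a time, smoothing out short load spikes so the noise level stays
# steady. Conceptual sketch only, not AMD's implementation.

def target_fan_speed(temp_c, temp_target_c=95, min_pct=20, max_pct=40):
    """Naive fan curve: ramp from min to max as temperature nears the target."""
    if temp_c >= temp_target_c:
        return max_pct
    ramp = max(0.0, (temp_c - 60) / (temp_target_c - 60))   # start ramping at 60C
    return min_pct + ramp * (max_pct - min_pct)

def relaxed_fan_speed(current_pct, temp_c, smoothing=0.05):
    """Move only a small fraction of the way toward the target each update."""
    target = target_fan_speed(temp_c)
    return current_pct + smoothing * (target - current_pct)

fan = 20.0
for temp in [70, 85, 94, 95, 95, 90, 95]:    # a bursty load profile
    fan = relaxed_fan_speed(fan, temp)
    print(f"temp={temp}C -> fan={fan:.1f}%")
# The fan creeps toward its target rather than oscillating with every
# temperature spike, avoiding attention-grabbing changes in pitch.
```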
Equally significant however are the changes to the actual system management algorithms used by PowerTune. Taking a page from GPU Boost 2.0, now that AMD can properly step between a large number of voltage states they’re also giving 290X cards the ability to throttle based on a larger number of conditions. On top of traditional power limit throttling, the 290X in particular gains the ability to throttle based on explicit temperature limits, and even explicit fan speed limits.
Bringing this all together, for the first card to feature the full suite of these new capabilities AMD has set some very interesting throttle points that are unlike anything they or NVIDIA have quite done before. Out of the box, in the card’s default “quiet” mode (more on modes later), the 290X has a 95C temperature throttle, a 40% fan speed throttle, and an unofficially estimated 300W power throttle. Meanwhile in the card’s alternative “uber” mode, those throttle points are 95C, 55% fan speed, and 300W respectively.
AMD Radeon R9 290X Throttle Points
Card | Quiet Mode (Default) | Uber Mode
Temperature | 95C | 95C
Fan Speed | 40% (~2100 RPM) | 55% (~3050 RPM)
Power (Estimated) | 300W | 300W
The addition of the fan speed throttle in turn is very much an X factor that changes how the entire system operates. Whereas previous AMD cards were throttled primarily by power and implicit temperature limits, and more recent NVIDIA cards are throttled by power and explicit temperature limits (with temperature serving as a proxy for fan speed and noise), AMD takes this one step further by making the fan speed its own throttle, creating a new relationship between temperature and fan speed that doesn’t exist in the old power management paradigms.
The end result of having the fan speed throttle is that for the 290X (and presumably future cards) the temperature throttle becomes part of a joint condition, where both conditions have to be met to trigger throttling. So long as power limits are being met (you can never violate the power limit), a 290X will not throttle unless both the fan speed throttle point and the temperature throttle point are reached. And even then, the temperature throttle point has a direct impact on the behavior of the fan, with the GPU temperature (relative to the throttle point) being used as one of the principal inputs to fan speed. In that sense the temperature throttle point becomes a simple abstraction for the underlying fan curve itself.
Boost Throttle Priority: Power = Fan Speed + Temperature
Now there is one exception to this that’s worth pointing out. The above is applicable to the 290X’s boost states, which is where it should be spending all of its time under load. However if for whatever reason the card has to drop out of the boost states and revert to the base clockspeed state of 727MHz, then the relationship between fan speed and temperature becomes reversed, and the card will outright violate fan speed throttles in order to maintain the target temperature while also staying at the base clockspeed.
Base Throttle Priority: Power = Temperature > Fan Speed
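Putting the two priority orderings together, the decision logic looks roughly like the sketch below. This is our own reading of AMD’s description rather than published firmware logic, and the threshold values are simply the Quiet mode defaults from the table above.

```python
# Rough sketch of the 290X throttle decision rules as described above.
# Quiet-mode defaults; this models the described behavior, not AMD firmware.

POWER_LIMIT_W = 300        # estimated board power limit (never violated)
TEMP_LIMIT_C  = 95         # temperature throttle point
FAN_LIMIT_PCT = 40         # fan speed throttle point (Quiet mode)

def should_throttle(power_w, temp_c, fan_pct, in_boost_state):
    # The power limit always applies, boost state or not.
    if power_w >= POWER_LIMIT_W:
        return True
    if in_boost_state:
        # Boost: temperature and fan speed form a joint condition --
        # both limits must be hit before clockspeeds are reduced.
        return temp_c >= TEMP_LIMIT_C and fan_pct >= FAN_LIMIT_PCT
    # Base clock (727MHz): temperature takes priority, and the fan is
    # allowed to exceed its throttle point to hold the temperature target.
    return temp_c >= TEMP_LIMIT_C

print(should_throttle(250, 95, 38, in_boost_state=True))   # False: fan still below 40%
print(should_throttle(250, 95, 40, in_boost_state=True))   # True: both limits reached
```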
The end result of this scheme is that for the bulk of gaming scenarios the 290X will be throttled not on power consumption or even by temperature alone (since you will eventually always hit 95C in Quiet mode), but rather on fan speed/noise, a method unlike anything NVIDIA or AMD have done previously. By doing this AMD has established a direct, simple relationship between performance and noise. If a card is too loud, the fan speed limit can be turned down at the cost of performance. Or if more performance is needed, the limit can be raised (to a point) at the cost of noise. And as noise is going to be the most visible aspect of the power/temp/noise triumvirate to the end user, this in turn gives the end user a high level of control over what’s usually the biggest drawback to running a high power, high performance video card. It really is that much better than any of the management paradigms that have come before it, and it is something we’d fully expect NVIDIA to copy in due time.
Before moving on from the subject of throttling however, let’s briefly touch on what’s undoubtedly going to prove to be a controversial element of the 290X’s PowerTune implementation: AMD’s 95C temperature throttle. Simply put, no desktop 28nm card thus far has been designed/intended to operate at such a high sustained temperature by default. NVIDIA’s explicit throttle point for the 700 series is 80C, and AMD’s implicit throttle point for Tahiti cards is also in the 80C range, putting both vendors’ cards well below 95C under regular operation. Now to be clear, both are spec’ed to allow temperatures up to 95C (i.e. TjMax); however, 95C is not the point either party has previously designed their equilibrium behavior around.
So why the sudden change on AMD’s behalf? There are a few reasons for it. But first and foremost, let’s talk about the physical costs of higher temperatures. All other elements being held equal, temperature affects silicon devices in 3 important ways: longevity, power consumption (leakage), and attainable clockspeeds. For longevity there’s a direct relationship between temperature and electromigration, with higher temperatures causing electromigration, and ultimately ASIC failure, to occur sooner. For power consumption there is a direct relationship between temperature and power consumption, such that higher temperatures increase static transistor leakage and therefore increase power consumption, even under identical workloads. And finally, there is a weaker relationship between temperature and attainable clockspeeds, such that the switching performance of silicon transistors drops as they get warmer, making it harder to attain high clockspeeds (which is part of the reason why record setting overclocks are achieved with GPUs cooled to well into the negative Celsius range).
An example of the temperature versus power consumption principle on an Intel Core i7-2600K. Image Credit: AT Forums User "Idontcare"
The important part to take away from all of this however is that these relationships occur across the entire range of temperatures a product is rated to operate under, and more importantly that all of these factors are taken into consideration in product planning. The 95C maximum operating temperature that most 28nm devices operate under is well understood by engineering teams, along with the impact to longevity, power consumption, and clockspeeds when operating both far from it and near it. In other words, there’s nothing inherently wrong with letting an ASIC go up to 95C so long as it’s appropriately planned for. And this, more than anything else, is what has changed for 290X and Hawaii.
As a second-generation 28nm product, one of the biggest low-level changes AMD has made to Hawaii relative to Tahiti is that they have been able to significantly clamp down on leakage. Not that Tahiti was a particularly leaky chip (nor a particularly leak-free one either), but as the first GPU to roll out of TSMC’s 28nm process it was very conservatively designed and had to be able to deal with the leakage and other nagging issues that come with an immature fabrication process. Hawaii in turn is designed against a very mature 28nm process, and designed in such a way that AMD doesn’t have to be so conservative. As a result Hawaii’s leakage, though not quantified, is said to be notably reduced versus Tahiti.
What this means for 290X in turn is that one of the biggest reasons for keeping temperatures below 95C has been brought under control. AMD no longer needs to keep temperatures below 95C in order to avoid losing significant amounts of performance to leakage. From a performance perspective it has become “safe” to operate at 95C.
Meanwhile from a longevity perspective, while the underlying silicon hasn’t necessarily changed AMD’s understanding of ASIC longevity on TSMC’s 28nm process has. Nearly two years of experience in shipping 28nm GPUs means that AMD has hard evidence for how long a GPU can last at various temperatures, and the maturation of the 28nm process in turn has extended that longevity by improving both the quality and consistency of the GPUs that come out of it. Ultimately there is always going to be a longevity cost to increasing temperatures – and only AMD knows what that cost is – but as the entity ultimately responsible for warrantying their GPUs, at this point AMD is telling us that Hawaii will meet all of their longevity requirements even with the higher operating temperatures.
With that in mind, why would AMD even want to increase their operating temperatures to 95C? In short, to take full advantage of Newton’s Law of Cooling. Newton’s Law of Cooling dictates that the greater the temperature gradient between a heat source and its environment, the more heat energy is transferred. Or in other words, AMD is able to remove more heat energy from the GPU with the same cooling apparatus simply by operating at a higher temperature. Ergo a 290X operating at 95C can consume more power (operate at greater performance levels) while requiring no increase in cooling (noise) over a 290X that operates at a lower temperature.
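To see how much difference the gradient makes, the short sketch below compares relative heat transfer at a 95C versus an 80C die temperature for a fixed cooler. The ~35C case air temperature is our own illustrative assumption.

```python
# Newton's Law of Cooling: heat removed is proportional to the temperature
# gradient between the heat source and the air moving over it (Q = h*A*dT).
# The 35C case air temperature is an illustrative assumption.

def relative_heat_transfer(die_temp_c, air_temp_c=35.0):
    """Relative heat removal for a fixed cooler (h*A held constant)."""
    return die_temp_c - air_temp_c

q_95 = relative_heat_transfer(95)
q_80 = relative_heat_transfer(80)

print(f"95C vs 80C heat removal, same cooler and fan speed: {q_95 / q_80:.2f}x")
# ~1.33x -- under these assumptions, running the GPU hotter lets the same
# cooler, at the same noise level, dissipate roughly a third more power.
```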
Now admittedly none of this makes 95C any less unsettling when first looking at temperatures, as we have become accustomed to 80C range temperatures over the years. But so long as the longevity of Hawaii matches AMD’s claims then this ultimately won’t be an issue. 95C will just be a number, and high ASIC temperatures will be another tool to maximize cooling performance. With that in mind, it will be interesting to see what AMD’s board partners do with their eventual custom Hawaii designs, assuming that they follow the same cooling paradigm as AMD. How much quieter would a Gigabyte Windforce or Asus DirectCU II based Hawaii card be able to operate if it was allowed to (and capable of) operating at 95C sustained? The answer to that, we expect, should prove to be a lot of fun.
Having established in detail how the latest iteration of PowerTune works, let’s finally talk about how this iteration of PowerTune will affect end-user tweaking and overclocking.
As to be expected, AMD has opted to expose all of their new PowerTune power controls via their Overdrive control panel, and as such users have full control over both overclocking and throttle controls. On the throttle side this includes both the traditional power limit controls and new controls to set the target GPU temperature and the maximum fan speed. These follow the rules we noted earlier, so lowering the GPU temperature target, for example, causes the fan speed to ramp up more quickly, while bringing down the maximum fan speed results in a greater throttle on overall performance.
Meanwhile the overclocking controls have also received a facelift, and unlike the throttle controls, we’re having a harder time getting behind these changes. In short, Overdrive now adjusts the GPU and memory clockspeeds on a relative percentage basis rather than an absolute frequency basis. On the one hand this brings consistency with how power adjustments have always worked, and yet on the other hand we can’t help but feel that percentage based overclocking is decidedly unhelpful and unintuitive. 10% is far less meaningful than 100MHz in this context, and it’s going to get even worse once we see factory overclocked cards and multiple tiers of Hawaii cards. Consequently we’d really rather have the original absolute frequency controls back. AMD is simply abstracting clockspeeds by too much.
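To illustrate why percentage-based controls get murky, the sketch below shows how the same +10% offset translates into different absolute frequencies depending on a card’s shipping clocks. The factory-overclocked and lower-tier figures are hypothetical examples of our own, not real SKUs.

```python
# Why percentage-based overclocking is unintuitive: the same +10% slider
# position means a different MHz bump on every card. The 1030MHz and 947MHz
# entries are hypothetical examples, not announced SKUs.

cards = {
    "R9 290X (reference, 1000MHz boost)":      1000,
    "Hypothetical factory OC (1030MHz)":       1030,
    "Hypothetical lower-tier Hawaii (947MHz)":  947,
}

overclock_pct = 10
for name, base_mhz in cards.items():
    final_mhz = base_mhz * (1 + overclock_pct / 100)
    print(f"{name}: +{overclock_pct}% -> {final_mhz:.0f}MHz (+{final_mhz - base_mhz:.0f}MHz)")
# The same slider setting yields roughly +100MHz, +103MHz, and +95MHz
# respectively -- so "+10%" tells you far less than an absolute frequency.
```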
Finally, along with the traditional sliders and settings boxes, AMD has introduced one final graphical element into Overdrive, and that is a 2D heatmap for overclocking. Placing the power limit on the X axis and the GPU clockspeed on the Y axis, the heatmap provides a simple graphical representation of the impact of adjusting those values. The heatmap is a bit imprecise, and I suspect most seasoned overclockers will stick to punching in numbers directly, but otherwise it’s a nifty simplification of overclocking.
With the above in mind, the last factor we’re waiting to see play out is how 3rd party utilities such as MSI’s Afterburner choose to implement these new controls. AMD meets and exceeds GPU Boost 2.0 with respect to flexibility, but monitoring/reporting was never a strong suit for Overdrive. Just based on our own experiences in putting this article together, an equivalent to NVIDIA’s “reason” throttling flags would be incredibly helpful as it’s not always obvious why the 290X is throttling, especially if it’s throttling for power reasons. If AMD can provide that data to 3rd party utilities, then combined with the rest of the functionality we’ve seen they would have an unquestionable claim to bragging rights on whose power management technology is better.