NVIDIA's GeForce GTX 560 Ti w/448 Cores: GTX 570 On A Budget
by Ryan Smith on November 29, 2011 9:00 AM EST

A little more than a year ago NVIDIA introduced the GF110 GPU, the power-optimized version of their Fermi patriarch, GF100. The first product was their flagship GTX 580, followed eventually by the GTX 570. Traditionally NVIDIA would follow this up with a 3rd product: the GTX 200 series had the GTX 285/275/260, and the GTX 400 series had the GTX 480/470/465. In the past year, however, we never saw a 3rd tier GF110 card… until now.
Today NVIDIA will be launching the GeForce GTX 560 Ti With 448 Cores (and yes, that’s the complete name), a limited edition product that will serve as the 3rd tier product, at least for a time. And while NVIDIA won't win any fans with the name, the performance is another matter entirely. If you've ever wanted a GTX 570 but didn't want to pay the $300+ price tag, as we'll see NVIDIA has made a very convincing argument that this is the card for you.
| | GTX 580 | GTX 570 | GTX 560 Ti w/448 Cores | GTX 560 Ti |
|---|---|---|---|---|
| Stream Processors | 512 | 480 | 448 | 384 |
| Texture Address / Filtering | 64/64 | 60/60 | 56/56 | 64/64 |
| ROPs | 48 | 40 | 40 | 32 |
| Core Clock | 772MHz | 732MHz | 732MHz | 822MHz |
| Shader Clock | 1544MHz | 1464MHz | 1464MHz | 1644MHz |
| Memory Clock | 1002MHz (4008MHz data rate) GDDR5 | 950MHz (3800MHz data rate) GDDR5 | 900MHz (3600MHz data rate) GDDR5 | 1002MHz (4008MHz data rate) GDDR5 |
| Memory Bus Width | 384-bit | 320-bit | 320-bit | 256-bit |
| Frame Buffer | 1.5GB | 1.25GB | 1.25GB | 1GB |
| FP64 | 1/8 FP32 | 1/8 FP32 | 1/8 FP32 | 1/12 FP32 |
| Transistor Count | 3B | 3B | 3B | 1.95B |
| Manufacturing Process | TSMC 40nm | TSMC 40nm | TSMC 40nm | TSMC 40nm |
| Price Point | $489 | $329 | $289 | $229 |
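For reference, the specs in the table translate into theoretical throughput figures in the usual way: single-precision FLOPS scale with the shader count and shader clock (counting one fused multiply-add as 2 FLOPs per CUDA core per clock, the standard convention for Fermi), and memory bandwidth scales with the bus width and data rate. A minimal sketch of that arithmetic, using the table's numbers:

```python
# Theoretical FP32 throughput and memory bandwidth from the table above.
# Fermi convention: each CUDA core retires one FMA (2 FLOPs) per shader clock.
cards = {
    # name:             (cores, shader MHz, bus bits, data rate MHz)
    "GTX 580":          (512, 1544, 384, 4008),
    "GTX 570":          (480, 1464, 320, 3800),
    "GTX 560 Ti w/448": (448, 1464, 320, 3600),
    "GTX 560 Ti":       (384, 1644, 256, 4008),
}

for name, (cores, shader_mhz, bus_bits, rate_mhz) in cards.items():
    gflops = cores * 2 * shader_mhz / 1000    # FP32 GFLOPS
    gbps = (bus_bits / 8) * rate_mhz / 1000   # GB/s
    print(f"{name:>17}: {gflops:6.0f} GFLOPS, {gbps:5.1f} GB/s")
```

On paper this puts the GTX 560-448 at roughly 1312 GFLOPS and 144GB/s, within a few percent of the GTX 570's 1405 GFLOPS and 152GB/s.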
The GTX 560 Ti With 448 Cores is based on the same GF110 GPU as the GTX 580 and GTX 570. Where the GTX 580 is a fully enabled GF110 product and the GTX 570 is a partially disabled bin, the GTX 560 Ti With 448 Cores – which we’ll refer to as the GTX 560-448 for simplicity’s sake – is a further-binned GF110 intended to take the position of the traditional 3rd tier product, putting it below the GTX 570.
Looking at the organization of the GF110 used in the GTX 560-448, the difference from the GTX 570 is that NVIDIA has disabled one further SM unit, cutting compute/shading, texturing, and geometry performance by 7%. ROP performance remains untouched, as does the number of memory controllers. The core clock is the same as the GTX 570’s at 732MHz, while the memory clock has been reduced slightly from 950MHz (3800MHz data rate) to 900MHz (3600MHz data rate). Altogether, compared to the GTX 570 the GTX 560-448 has 93% of the compute/shader performance, 100% of the ROP performance, and 95% of the memory bandwidth. In practice this puts it much closer to the GTX 570 than the wider spacing we’re used to seeing between product tiers.
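Those percentages fall straight out of the spec table; a minimal sketch of the ratios:

```python
# Resources of the GTX 560-448 relative to the GTX 570 (same 732MHz core clock).
shaders   = 448 / 480    # 14 of 15 SMs enabled -> ~93% compute/texture/geometry
rops      = 40 / 40      # ROP count untouched -> 100%
bandwidth = 3600 / 3800  # 900MHz vs 950MHz GDDR5 on the same 320-bit bus -> ~95%
print(f"shaders: {shaders:.0%}, ROPs: {rops:.0%}, bandwidth: {bandwidth:.0%}")
```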
Power and cooling are also very similar to the GTX 570. NVIDIA has put the TDP at 210W, versus 219W for the GTX 570. As always NVIDIA does not supply an idle TDP, but it should be practically identical to the GTX 570. The end result is that the GTX 560-448 should have slightly lower performance than the GTX 570 with similar power consumption.
Now if we’re making all of these comparisons to the GTX 570, why is the GTX 560-448 a GTX 560? That’s a good question, and not one that we’ll get a completely satisfactory answer to. NVIDIA is well aware of what they’ve done, and they’ve already prepared a response:
Question: Why is the “GTX 560 Ti” designation used for this product instead of “565” or “570 LE”?
The designation is meant to reflect the fact that this is not an addition to our 500 series line-up, but rather a limited edition product.
This is a completely truthful answer – and we’ll get to the limited edition aspect in a moment – but it’s not a real answer to the question. Ultimately NVIDIA has to balance OEM, consumer, and regional concerns since not every market will be getting this product, but more practically the GTX 560 Ti is a well-received and strong-selling card whose success NVIDIA wants to extend. The result is that NVIDIA can (and will) call it whatever they want, and this time they’re calling it a GTX 560 Ti. Thus we have a GF110 product launching as a GTX 560 Ti even though, spec-wise, it has more in common with the GTX 470 than anything else. It’s that kind of a launch.
As far as being a limited edition product goes, that’s not particularly complex. NVIDIA bins GF110 GPUs for a number of products, not just GeForce but Tesla and Quadro too. The best chips go into the most expensive products, while chips with several bad SMs go into products like low-end Quadros and NVIDIA’s 4th tier OEM-only card – which, confusingly, is also sold as a GTX 560 Ti. Over the past year of production NVIDIA has built up a supply of mid-tier chips: chips that aren’t good enough for the GTX 570, but better than what the lower-end markets need. Rather than taking a revenue hit by shipping these chips in those lower-end products, NVIDIA has decided to mint a new GeForce product instead, and that’s the GTX 560-448.
The reason the GTX 560-448 is a limited edition product is that NVIDIA does not accumulate suitably flawed chips at anywhere near the pace they accumulate chips for their other product lines. As a result they only have a small, largely fixed number of chips with which to produce GTX 560-448s. With this limited supply NVIDIA will only be chasing particularly affluent markets with a limited number of cards: the US and Canada, the UK, France, Germany, the Nordic countries, and Russia. South America and the Asia Pacific region (APAC) are notably absent. Furthermore, for those markets that will be getting the GTX 560-448, it’s essentially a seasonal product specifically for Christmas: NVIDIA only expects the supply of cards to last 1-2 months, after which NVIDIA’s product lineup reverts to the 580/570/560 stack we’re already accustomed to. So while a limited edition product is nothing new, we haven’t seen a coordinated launch for an LE product quite like this in recent years.
Given the hardware similarities to the GTX 570, it should come as no surprise that NVIDIA is forgoing a reference design; their partners will instead be launching cards based on their existing GTX 570 designs. At this point all of them have custom GTX 570 designs, and as such the GTX 560-448 cards will be using those custom designs. Our sample, Zotac’s GeForce GTX 560 Ti 448 Cores Limited Edition, is one such card, based on their custom GTX 570 design. Furthermore, as was the case with many proper GTX 560 Ti cards, the GTX 560-448 will be launching in overclocked designs, such as Zotac’s, which ships at 765MHz instead of 732MHz. So the performance of individual GTX 560-448 products can vary by upwards of several percent.
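Zotac’s factory overclock gives a feel for that spread; a back-of-the-envelope sketch, keeping in mind that clock scaling is only an upper bound on the real-world gain:

```python
# Upper bound on the gain from Zotac's factory overclock; actual performance
# rarely scales perfectly with core clock, so real gains will be a bit smaller.
reference_mhz = 732
zotac_mhz = 765
print(f"Core clock uplift: {zotac_mhz / reference_mhz - 1:.1%}")  # ~4.5%
```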
The MSRP on the GTX 560-448 will be $289, however launch partners will be free to price it higher to match any factory overclocks they do. At $289 the GTX 560-448 is priced extremely close to the cheapest GTX 570s, and depending on clockspeeds and sales a GTX 570 could end up being the same price or cheaper, so it will be prudent to check prices. Meanwhile the GTX 560-448’s closest competition from AMD will be the Radeon HD 6950, which trends around $250 after rebate while the Radeon HD 6970 is still closer to $340. Overall NVIDIA’s pricing may be a bit high compared to their other products, but compared to AMD’s products it’s consistent with the performance.
Comments
ericore - Tuesday, November 29, 2011 - link
This card is the most perfect example of a corporation trying to milk the consumer. The new GeForce cards are just after Christmas, so what does NVIDIA do? Release a limited edition crap product, versus what's around the corner, and with a crappy name. The limited edition angle is ingenious, but I must wholeheartedly agree with Anand on the naming issue.
Intelligent people will forget this card and wait till after Christmas. NVIDIA will have no choice but to release graphics cards in Q1 because AMD is going to deliver a serious can of whoop ass thanks to their ingenious decision to go with low-power process silicon versus high-performance. You see, they've managed to keep the performance but at half the power; then add that it's 28nm versus 40nm, and what a nerdy orgasm that is. NVIDIA will be on their knees, and we may finally see them offer much lower priced cards; so do you buy from the pegger or from the provider? That's a rhetorical question haha.
Revdarian - Tuesday, November 29, 2011 - link
Actually, what you can expect after Christmas is a 7800 from AMD (that is the mid-range of the new generation; think around or better than the current 6900), one month later with luck the high-end AMD parts, and you shouldn't expect the green camp to have a counter until March at the earliest.

Now, that was said on a Hard website by the owner directly, so I would take it as being very accurate all in all.
ericore - Tuesday, November 29, 2011 - link
Haha, so same performance at half the power + 28nm vs 40nm + potentially Rambus memory which is twice as fast; all in all we are looking at -- at least -- double the frame rates. NVIDIA was an uber fail with their Fermi hype. AMD has not hyped the product at all, but rest assured it will be a bomb, and in fact it is the exact opposite story to Fermi. Clever AMD, you do me justice with your intelligent business decisions, worthy of my purchase.

HStanford1 - Wednesday, December 7, 2011 - link
Can't say the same about their CPU lineup

Roflmao
granulated - Tuesday, November 29, 2011 - link
The ad placement under the headline is for the old 384 pipe card! If that isn't an accident I will be seriously annoyed.
DanNeely - Tuesday, November 29, 2011 - link
"It’s quite interesting to find that idle system power consumption is several watts lower than it is with the GTX 570. Truth be told we don’t have a great explanation for this; there’s the obvious difference in coolers, but it’s rare to see a single fan have this kind of an impact."I think it's more likely that Zotak used marginally more efficient power circuitry than on the 570 you're comparing against. 1W there is a 0.6% efficiency edge, 1W on a fan at idle speed is probably at least a 30% difference.
LordSojar - Tuesday, November 29, 2011 - link
Look at all the angry anti-nVidia comments, particularly those about them releasing this card before the GTX 600 series.

nVidia is a company. They are here to make money. If you're an uninformed consumer, then you are a company's (no matter what type it is) bread and butter, PERIOD. You people seem to forget companies aren't in the charity business...
As for this card, it's an admirable performer, and a good alternative to the GTX 570. That's all it is.
As for AMD... driver issues aside, their control panel is absolutely god awful (and I use a system with a fully updated CCC daily). CCC is a totally hilarious joke and should be gutted and redone completely; it's clunky, ad-ridden, and filled with overlapping/redundant options. Total garbage... if you even attempt to defend that, you are the very definition of a fanboy.
As for microstutter, AMD's Crossfire is generally worse at first simply because of the lack of frequent CFX profile updates. Once those updates are in place, it's a non-issue between the two companies; they both have it in some capacity with dual/tri/quad GPU solutions. Stop jumping around with your red or green pompoms like children.
AMD has fewer overall features at a lower overall price. nVidia has more overall features at a higher overall price. Gee... who saw that coming...? Both companies make respectable GPUs and both have decent drivers, but it's a fact that nVidia tend to have the edge in the driver category while AMD have an edge in the actual hardware design category. One is focused on very streamlined, gaming-centric graphics cards while the other is focused on more robust, computing-centric graphics cards. Get a clue...
...and let's not even discuss CUDA vs Stream... Stream is total rubbish, and if you don't program, you have no say in countering that point, so please don't even attempt to. Any programmer worth their weight will tell you, quite simply, that for massively parallel workloads where GPU computing has an advantage that CUDA is vastly superior to ANYTHING AMD offers by several orders of magnitude and that nVidia offers far better support in the professional market when compared to AMD.
I'm a user of both products, and personally, I do prefer nVidia, but I try not to condemn people for using AMD products until the moment they try to assert that they got a better deal or condemn me for slightly preferring nVidia due to feature sets. People will choose what they want; power users generally go with nVidia, which does carry a price premium for the premium feature sets. Mainstream and gaming enthusiasts go with AMD, because they are more affordable for every fps you get. Welcome to Graphics 101. Class dismissed.
marklahn - Wednesday, November 30, 2011 - link
Simply put, nVidia has CUDA and PhysX, AMD has higher ALU performance which can be beneficial in some scenarios - gogo OpenCL for not being vendor specific though!

marklahn - Wednesday, November 30, 2011 - link
Oh, and Close to the Metal, Brook and Stream are all mainly things of the past, so don't bring that up please. ;)

Revdarian - Wednesday, November 30, 2011 - link
Such a long post does not make you right. In the part about "CUDA vs Stream" you actually mean "CUDA vs OpenCL and DirectCompute", for example, as those are the two vendor-agnostic standards, so that just shows that what is really "rubbish" is your attempt to pose as an authority on the subject.