At the beginning of April, Jarred walked us through the refresh of the bottom two-thirds of NVIDIA's 700M series of GeForce GPUs, all under the GeForce GT heading. At the time, we still didn't know what NVIDIA's top-end solutions would look like. Today we have those details with the launch of the GTX 700M series.

Model - GPU, CUDA cores, base clock, memory clock (effective), bus width, memory config

GTX 660M  - GK107, 384 cores, 835MHz, 4GHz GDDR5, 128-bit, 2GB
GTX 760M  - GK106, 768 cores, 657MHz+, 4GHz GDDR5, 128-bit, 2GB
GTX 670MX - GK104 or GK106, 960 cores, 600MHz, 2.8GHz GDDR5, 192-bit, 3GB
GTX 765M  - GK106, 768 cores, 850MHz+, 4GHz GDDR5, 128-bit, 2GB
GTX 675MX - GK104 or GK106, 960 cores, 600MHz, 3.6GHz GDDR5, 256-bit, 2GB/4GB
GTX 770M  - GK106, 960 cores, 811MHz+, 4GHz GDDR5, 192-bit, 3GB
GTX 680M  - GK104, 1344 cores, 720MHz, 3.6GHz GDDR5, 256-bit, 2GB/4GB
GTX 780M  - GK104, 1536 cores, 823MHz+, 5GHz GDDR5, 256-bit, 4GB

Note that the 680MX (not listed) exceeded the 100W TDP and was never made available in notebooks; it was an iMac-exclusive GPU. Spec-wise it was identical to the 780M, only with a lower core clock of 720MHz. That means that in the 700M series, NVIDIA finally has a fully-enabled GK104 chip at the top of the notebook stack, and I can confirm you'll be seeing it in notebooks: I have one of those notebooks in for testing as I write this.

NVIDIA has made some incredibly impressive gains in potential performance with the GTX 700M series. Core clocks are up across the board, memory clocks even more so, and NVIDIA is bringing GPU Boost 2.0 to all of these parts, allowing as much as 15% higher clocks during gaming. Unfortunately I don't yet have all the benchmark results for the new 780M to compare against the outgoing 680M, but you'll be able to see them in a full review before too long.

As far as specs go, the big jumps are easily visible in the above table. At the top, the GTX 780M gets 14% more shaders than the GTX 680M, courtesy of the fully-enabled GK104 chip, and it also gets a base clock that's at least 14% higher; combined, that works out to a theoretical compute increase of roughly 30%. The memory clock receives an even larger bump, this time to the tune of 39%, so we should definitely see a large jump in performance at the top of the product stack.

Stepping down to the GTX 770M, the changes are a bit more difficult to gauge. We have the same number of cores as the GTX 675MX at a higher clock, giving 35% more compute power, but memory bandwidth has dropped 17% thanks to the 192-bit bus. In situations where games are compute-constrained we'll see much improved performance, but bandwidth-hungry games may end up slower. We see a similar situation with the GTX 765M vs. the GTX 670MX, with a 128-bit bus on the new model and fewer cores at higher clocks. On paper, the GTX 765M has 13% more compute and 5% less bandwidth than the 670MX.

Rounding things out, at the bottom we finally have an x60M part that's worthy of the GTX designation. The GTX 660M was really just a moderate clock speed increase over the GT 650M. This time, the GTX 760M gets twice as many cores as the GT 750M, and while clock speeds are slightly lower it should still deliver compute performance that's 35% higher than the closest non-GTX part; the GTX 760M also has 57% more compute than the previous GTX 660M. The bandwidth story is harder to call: the GTX 760M's 4GHz GDDR5 on a 128-bit bus works out to an effective 64GB/s, the same as the GTX 660M, while the GT 750M can be configured with up to 5GHz GDDR5 and thus slightly more bandwidth. Sigh. Can't we get rid of the give and take for a change and have a clear winner? At least there are no DDR3 options listed for the GTX parts, but that's been the case for a few generations now.
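If you want to sanity-check the back-of-the-envelope comparisons above, here's a minimal Python sketch that derives the deltas from cores times base clock and effective memory clock times bus width. The spec values are simply copied from the table; the compute figure is a rough proxy that ignores GPU Boost and any architectural differences, so treat this as quick math rather than a performance prediction.

```python
# Rough sanity check of the percentage comparisons above. "Compute" is
# CUDA cores x base clock (arbitrary units); bandwidth is effective
# memory clock x bus width / 8, expressed in GB/s.

specs = {
    # model: (CUDA cores, base clock MHz, effective memory clock MHz, bus width bits)
    "GTX 670MX": (960, 600, 2800, 192),
    "GTX 675MX": (960, 600, 3600, 256),
    "GTX 680M":  (1344, 720, 3600, 256),
    "GTX 765M":  (768, 850, 4000, 128),
    "GTX 770M":  (960, 811, 4000, 192),
    "GTX 780M":  (1536, 823, 5000, 256),
}

def compute(cores, core_mhz):
    """Relative shader throughput: cores x clock (arbitrary units)."""
    return cores * core_mhz

def bandwidth(mem_mhz, bus_bits):
    """Theoretical memory bandwidth in GB/s."""
    return mem_mhz * (bus_bits // 8) / 1000

def pct(new, old):
    """Percentage change from old to new."""
    return (new / old - 1) * 100

for new, old in [("GTX 780M", "GTX 680M"),
                 ("GTX 770M", "GTX 675MX"),
                 ("GTX 765M", "GTX 670MX")]:
    nc, ncl, nmem, nbus = specs[new]
    oc, ocl, omem, obus = specs[old]
    print(f"{new} vs {old}: "
          f"compute {pct(compute(nc, ncl), compute(oc, ocl)):+.0f}%, "
          f"bandwidth {pct(bandwidth(nmem, nbus), bandwidth(omem, obus)):+.0f}%")
```

Running it gives roughly +31%/+39% for the GTX 780M vs. 680M, +35%/-17% for the GTX 770M vs. 675MX, and +13%/-5% for the GTX 765M vs. 670MX, which lines up with the figures quoted above.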

We can't talk about Haswell yet, so you'll have to wait a bit longer for our first GTX 780M notebook review, but overall we're looking at some relatively sizeable gains over the 600M series, with GPU Boost 2.0 potentially increasing performance even more. Considering Intel's GT3e parts are expected to at least flirt with GT 650M levels of performance (depending on the game, and at substantially lower power draw), this sort of performance upgrade isn't particularly surprising. Now we just need to wait for hardware to start shipping.

Update: BioShock Infinite manages 41.5 fps on the GTX 780M at our Enthusiast settings, meaning 1080p with all the settings dialed up. The outgoing GTX 675MX produced only 35.6 fps, while the HD 7970M currently gets 45.3 fps.

Comments

  • nixtish - Friday, May 31, 2013 - link

    Do you think this will make it into the next-gen rMBP?
  • shompa - Friday, May 31, 2013 - link

    No. Apple has never used a high-powered GPU in their laptops. It's not "elegant". They prefer thin designs and long battery life. (And the rMBP really needs an SLI 780. Even outside of gaming, the 15-inch rMBP is sluggish when you use a non-native resolution, since OS X renders the picture at 4 times the resolution before converting it to the non-native resolution. We are talking upwards of 8 megapixels for a 1920x1200 setting.)

    This is why I beg Alienware to release a Retina display laptop with SLI.
  • xinthius - Saturday, June 1, 2013 - link

    Do you own an rMBP? Because I do, and it is not sluggish at non-native resolutions. Fact of the day for you.
  • CadentOrange - Friday, May 31, 2013 - link

    If Haswell's GT3e performance is of a similar level to the nVidia 650m, it's little wonder that nVidia needed to dramatically improve the performance of its midrange parts. That 660m -> 760m jump should provide a substantial boost. What I'd like to see are benchmarks between the GT3e and the 750m, as the 750m appears to be the 650m with a minor clock speed bump.
  • shompa - Friday, May 31, 2013 - link

    But at what power is GT3e? I read 65 watts. Not too many ultrabooks can use that.
  • Flunk - Friday, May 31, 2013 - link

    These new Nvidia mobile chips are fantastic, but I can't shake the feeling that Nvidia purposely underclocked their previous generation so that they could release new chips with the same architecture this year. I've gotten my 650m up to 1150/1400 (GDDR5, bios hack) and it runs quite cool and stable.
  • Khenglish - Friday, May 31, 2013 - link

    Even the 680m will overclock well over 30% on stock voltage. The most ridiculous overclocker is the 675mx. The default is 600MHz and people have clocked them over 1100MHz with moderate voltage increases.

    What I find interesting is that the ES 780ms people have gotten can clock far higher on the memory than a 680m can, even though they have the same memory chips, and even with the 680m's memory raised to the 780m's memory voltage. The highest 680m memory I've seen is around 5350 effective, while someone has already run benchmarks on a 780m over 6000.

    http://forum.techinferno.com/alienware-m18x/3596-a...
  • shompa - Friday, May 31, 2013 - link

    Actually Nvidia was very lucky that their midrange GPU was so fast. Remember that Titan was the real GTX 680. It was released almost one year after its intended release date.
    Nvidia didn't "underclock" their previous generation. They even had to fuse off parts since yields were not high enough. With a year's time and a chip redesign (I think this is the 5th stepping/spin of the GK104) they could enable all clusters + undervolt a bit + GPU Boost 2 to get upwards of 30% gains.

    If you want to talk about a company that has held back: Intel. Since 2006 they have had no competition. Every single chip for the last 5 years has clocked 4GHz+. Still you can't buy an official 4GHz chip. Why? Intel has no competition.

    Nvidia has competition. That's why they can't hold back (and as I wrote: Nvidia was very lucky that the GK104 could clock so high. A mid-range chip that could be sold as a GTX 680. GK110 takes almost double the die area = costs almost twice as much for Nvidia to produce. That's why Titan/GTX 780 are so expensive. This will be solved with the next TSMC shrink to 20nm).
  • jasonelmore - Saturday, June 1, 2013 - link

    and then nvidia will make a bigger gpu with more cores and do the same thing all over again. 480 was supposed to be 580, titan was supposed to be gtx 680 and so on.
  • cstring625 - Friday, May 31, 2013 - link

    Is the table wrong? Earlier it was said that the gtx 780 would use the gk110 http://www.anandtech.com/show/6973/nvidia-geforce-...
