17 Comments

  • jjj - Tuesday, March 19, 2013 - link

    Pretty sure you messed up on Volta.
    The DRAM is stacked and uses TSVs, so you have DRAM on DRAM, and they should be able to put quite a bit around the GPU (they might as well use Micron's Hybrid Memory Cube, since that's advertised at 1TB/s too, though that one is a bit different, having the logic layer at the base).
    The GPU should be connected to the RAM through the silicon substrate. I don't think they ever said "silicon interposer", but I imagine that's what they are doing.
    Wish they would use it in mobile for NAND and RAM; maybe they'll go there when the industry moves to RRAM. That shouldn't be far off: we should get 3D NAND in 2015-2016 and RRAM not too long after.
  • MrSpadge - Wednesday, March 20, 2013 - link

    You're right: from nVidia's picture it looks like the die-stacked DRAM will be placed on-package. This solves the capacity issue directly :)

    Stacking GPU and DRAM would create serious cooling problems: the heat would have to flow through all the DRAM dies before it could reach the cooler, so the chip would run much hotter for a given cooler. I guess that's why Intel reserves the tier-3 GPU with embedded DRAM for mobile Haswell, where there's not as much heat to get rid of.
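    A rough back-of-envelope for that stacking penalty, as a minimal sketch: every number below (power draw, cooler resistance, per-die resistance) is an illustrative assumption, not a measured value. The point is only that thermal resistances in series add up fast:

        /* Back-of-envelope estimate of the thermal cost of die stacking.
         * All values are illustrative assumptions, not measured data. */
        #include <stdio.h>

        int main(void) {
            double power_w      = 200.0; /* assumed GPU power draw, W */
            double r_cooler_kw  = 0.15;  /* assumed cooler resistance, K/W */
            double r_per_die_kw = 0.10;  /* assumed resistance per thinned DRAM die, K/W */
            double t_ambient_c  = 25.0;

            for (int dram_dies = 0; dram_dies <= 4; dram_dies++) {
                /* Series resistances: heat crosses every DRAM die
                 * before it reaches the cooler. */
                double r_total = r_cooler_kw + dram_dies * r_per_die_kw;
                printf("%d DRAM dies on top: ~%.0f C at the GPU junction\n",
                       dram_dies, t_ambient_c + power_w * r_total);
            }
            return 0;
        }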
  • warezme - Monday, April 1, 2013 - link

    If you 3D-stack the die with DRAM on one side and the core on the other (visualize a board design with an opening that exposes the die to both the front and back of the card), you could then cool both the DRAM side and the GPU side independently, even with the same cooler, if you offset the board to center it on your slot better. It would be a new design, but not that exotic really.
  • Kevin G - Tuesday, March 19, 2013 - link

    The question for Maxwell is which architecture its virtual memory scheme is tuned for. The obvious answer for the desktop is x86, but if nVidia wants to scale this technology into the mobile arena they'll also have to support ARM's memory management.

    The other thing worth pondering in that context is how closely tied Project Denver is to Maxwell.
  • spigzone - Tuesday, March 19, 2013 - link

    So when Nvidia finally releases its first HSA-style architecture, AMD will be releasing its 3rd generation, massively supported, HSA APUs?
  • TheJian - Wednesday, March 20, 2013 - link

    Considering they lost $1.18 billion last year and are on track for another loss this year, I'm not sure what your point is. NV made $725 million over the same period and added ~$300 million to their cash, while buying back $100 million in stock AND paying a dividend. Until AMD makes money or tops the supercomputer list (where NV dominates now with Titan), bragging rights over who beat whom to HSA mean nothing, correct? It wouldn't matter if AMD were on their 10th gen if they still can't top their competition with it. NV hired ~800 people over the last ~2 years; AMD laid off 30% of their workforce. How does that help you vault to #1 in the future? What's worse? No ARM SoC from AMD until next year, and probably late next year for any revenue. Even worse, NV will be on rev 5 of its SoCs by then, which, as with T4, will have a modem (and T5 will surely include CDMA etc., thus serving everyone, where T4 is limited). Being in the SoC race NOW is more important than the HSA race, or all the numbers would be in AMD's favor, right?
    http://arstechnica.com/gadgets/2013/03/nvidias-nex...
    Then again, their HSA isn't really due to be used until 2014, when memory is fully shared and there's a unified address space (Kaveri) :) Currently, copying the data twice, to and from the CPU/GPU, makes HSA useless. They will get nothing from this until 2014 at the earliest. Sometimes being 1st isn't really being ahead at all ;) We still have die shrinks working too, which slows this progression to HSA-type stuff (we haven't hit the nm wall yet), and you need software to take advantage of all this anyway (that's a few years off).
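    To put that double copy in concrete terms, a minimal CUDA sketch: scale_kernel is a hypothetical example kernel, and the managed-allocation path below is only a sketch of the kind of single-allocation model a unified address space enables, not a claim about how Kaveri or Maxwell will actually implement it:

        /* Contrast: explicit round-trip copies vs. one shared allocation. */
        #include <cuda_runtime.h>

        __global__ void scale_kernel(float *data, int n) {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n) data[i] *= 2.0f;
        }

        /* Today's model: the data crosses the bus twice. */
        void with_copies(float *host_data, int n) {
            float *dev_data;
            size_t bytes = n * sizeof(float);
            cudaMalloc((void **)&dev_data, bytes);
            cudaMemcpy(dev_data, host_data, bytes, cudaMemcpyHostToDevice); /* copy #1 */
            scale_kernel<<<(n + 255) / 256, 256>>>(dev_data, n);
            cudaMemcpy(host_data, dev_data, bytes, cudaMemcpyDeviceToHost); /* copy #2 */
            cudaFree(dev_data);
        }

        /* Unified model: one allocation visible to both CPU and GPU. */
        void with_unified_memory(int n) {
            float *data;
            cudaMallocManaged((void **)&data, n * sizeof(float));
            for (int i = 0; i < n; i++) data[i] = (float)i;      /* CPU writes */
            scale_kernel<<<(n + 255) / 256, 256>>>(data, n);     /* GPU works in place */
            cudaDeviceSynchronize();
            cudaFree(data);
        }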

    If consoles don't pay off in a big way, AMD will go further into debt, as their recent $164 million deal on their Austin land won't do squat against another billion-dollar loss this year. I expect consoles to suck this time (dev support already dropping for Vita, and the Wii U/3DS sucking wind shows this), and when sales at Christmas suck they'll just cancel projects and head further into mobile. They need to quit giving away top games, which directly steals from their GPU profits when a card sells. Not to mention the idea that if you have to give away that much stuff to sell your cards, you must suck (no matter what the perf REALLY is; it looks desperate to a lot of people, or they'd own more market share, right?) versus the competition, which gives away practically nothing (free-to-play stuff, costing them next to nothing). Discrete GPU market share shows this, as NV controls ~65% of it and has for years (up from 53% in 2011).

    AMD is currently funding R&D with more debt and by selling or renting everything they have, while NV funds it with profits and Intel payments (the last Intel payment comes in 2016). So NV will have some free money for R&D for the next few years on top of whatever they profit (which shows no sign of slowing down; they just set records for revenue, margins, and cash). The cost of the layoffs shows in how long it took AMD to ship a decent driver to compete in GPUs (the Never Settle driver; before it they got smacked all year, as HardOCP recently discussed). Another sign is the likely slip of Kaveri into 2014, which of course lets NV push Maxwell later (both look like Q1 now). They couldn't afford R&D for consoles and discrete GPU updates at the same time, unfortunately. You can't outfox your enemy on a Visa card forever, if at all. Having no answer to Titan doesn't look good either (no matter how overpriced someone thinks it is).

    Finally, AMD isn't in the top ten supercomputers (feeding the K20X in Titan doesn't count; AMD isn't doing the work there). NV is in TWO of them, and I'm sure more will follow as others add K20X machines and Tegra starts to feed the GPUs for a green machine. Italy just claimed the perf/watt lead with K20X, supposedly, and Piz Daint was just announced:
    http://blogs.nvidia.com/2013/01/just-mix-water-wit...

    So again, what evidence is there that being early on HSA is working so far? Your comment insinuates NV is behind; I think it's quite the opposite based on all the data. This might have been a totally different story if AMD had taken the time/money they spent on consoles and devoted it instead to SoCs and discrete GPUs. Not much money will be made on 35 million console units this year (if they even sell that many combined) versus a billion-unit market in tablets/phones that grows every year. NV already moves ~20% of its revenue via Tegra and will grow that this year with T4/T4i. AMD spent their wad on a dying market (consoles/handhelds) and are losing badly in the two markets they have now (CPUs lost ~$325 million, and GPUs only made ~$22 million... OUCH). They seem to keep repeating this losing formula, and it doesn't look to change for FY2013.
  • Kevin G - Wednesday, March 20, 2013 - link

    AMD's lead in HSA is going to be measured in months, not full generations of hardware. It'll be arriving in 2013 with the PS4, if nothing else; dev kits with early prototype hardware are already in developers' hands. Kaveri and Maxwell are both 2014 parts, and without further details it is difficult to determine which will arrive first. The other wild card for 2013/2014 is discrete HSA GPUs from AMD. All indications point toward AMD arriving first with a unified address space, but it won't ultimately be a significant lead.

    As for AMD's position in the Top500, they've been there before: Tianhe-1A, the other nVidia-based system referenced, used AMD GPUs prior to an upgrade. Though I wouldn't use the Top500 as a good metric, since many don't see either nVidia's or AMD's GPGPU boards as viable options due to the lack of key RAS features (ECC etc.). Performance of these cards is good given an extremely parallel workload, but if scientists cannot trust the integrity of the results, they're worthless.
  • Ktracho - Wednesday, March 20, 2013 - link

    NVIDIA's Tesla boards have had ECC since at least the last generation (Fermi). That's one reason why Cray is willing to use these boards in their supercomputers.
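    For what it's worth, whether ECC is actually enabled on a board can be queried from the CUDA runtime; ECCEnabled has been a cudaDeviceProp field since the Fermi generation. A minimal sketch:

        /* Report the ECC state of CUDA device 0. */
        #include <cuda_runtime.h>
        #include <stdio.h>

        int main(void) {
            cudaDeviceProp prop;
            if (cudaGetDeviceProperties(&prop, 0) != cudaSuccess) {
                fprintf(stderr, "no CUDA device found\n");
                return 1;
            }
            printf("%s: ECC %s\n", prop.name,
                   prop.ECCEnabled ? "enabled" : "disabled");
            return 0;
        }

    The mode itself is typically toggled with nvidia-smi (-e 1 / -e 0) and takes effect after a reboot.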
  • Kevin G - Thursday, March 21, 2013 - link

    EDC, not traditional ECC. The memory bus is not explicitly widened to carry the extra parity bits, as it is with ECC. Conceptually, nVidia's EDC uses a technique similar to RAID5: it utilizes the existing memory channels and reduces overall memory capacity by spreading parts of the parity across all of them. This does allow protection from common soft errors, like ECC does.
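    A toy model of that RAID5-like scheme, with the loud caveat that the real EDC implementation isn't public: one channel's worth of each stripe holds XOR parity, so usable capacity drops by 1/N and a single lost chunk is recoverable. (Real RAID5 also rotates the parity chunk across channels; it is fixed here for brevity.)

        /* Toy RAID5-style parity across memory channels. Illustrative only. */
        #include <stdio.h>
        #include <stdint.h>

        #define CHANNELS 4

        /* Write one stripe: data fills CHANNELS-1 chunks, parity fills the last. */
        void write_stripe(uint8_t chunks[CHANNELS], const uint8_t data[CHANNELS - 1]) {
            uint8_t parity = 0;
            for (int c = 0; c < CHANNELS - 1; c++) {
                chunks[c] = data[c];
                parity ^= data[c];
            }
            chunks[CHANNELS - 1] = parity;
        }

        /* Rebuild the chunk lost on one failed channel by XORing the others. */
        uint8_t recover(const uint8_t chunks[CHANNELS], int failed) {
            uint8_t v = 0;
            for (int c = 0; c < CHANNELS; c++)
                if (c != failed) v ^= chunks[c];
            return v;
        }

        int main(void) {
            uint8_t data[CHANNELS - 1] = {0xDE, 0xAD, 0xBE};
            uint8_t stripe[CHANNELS];
            write_stripe(stripe, data);
            printf("recovered chunk 1: 0x%02X (expected 0xAD)\n", recover(stripe, 1));
            return 0;
        }

    Note how recovery has to read every surviving channel, which mirrors the bandwidth downside described below.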

    The downside is that memory transactions involve the entire width of the memory bus, whereas previously each channel could operate autonomously depending on requests. nVidia tacked EDC onto Fermi with a hardware/software mix that significantly decreases memory bandwidth. EDC on GK110 and Tahiti has a bit more hardware support, so the bandwidth impact isn't as dire, but it is still present.

    EDC with hard errors is an interesting case for GPUs. Modern ECC implementations can work around a single DRAM chip failure on a DIMM and safely move the data to another DIMM (a few high-end servers will even allow memory hot-swapping). Fermi had no means of recovering from a hard memory error due to its partial use of software. I'm unsure whether GK110 and Tahiti can migrate data off the card in the event of a hard memory failure, and I'm certain that they cannot continue to operate normally.

    Still, GPUs have relatively weak RAS compared to traditional server hardware, and that is a reason some are currently waiting for GPUs to mature a bit further.
  • Winterblade - Tuesday, March 19, 2013 - link

    I wonder what kind of CPU Denver will be. A customized ARM A57/A53 big.LITTLE setup, maybe? Will they finally get an x86 license? Or maybe a fully new architecture?

    Honestly, I cannot stop marvelling every time I see roadmaps like this. We live in great times for tech junkies :D
  • ImSpartacus - Wednesday, March 20, 2013 - link

    I think Nvidia is licensing the ARMv8 ISA, so Denver is their own design, but I'm not certain.
  • Kevin G - Wednesday, March 20, 2013 - link

    Denver being nVidia's own custom CPU design has been known in rumor circles for a while, though nVidia has yet to officially confirm it.

    One of the odd rumors about Project Denver was that it was nVidia's response to Larrabee, but from the ARM side: nVidia was going to tack a wide vector unit onto its own custom ARM cores and use them as fully programmable shader hardware. Seeing Maxwell and Denver together as parts of the Parker SoC ends that rumor chain.
  • chizow - Sunday, March 24, 2013 - link

    Tesla is already Nvidia's pre-emptive response, keeping Larrabee at arm's length for the last few years. Project Denver, IMO, is Nvidia's attempt to enter the server market with 64-bit ARM and to remove x86 from the equation in these supercomputers. Basically, it would be the other half of the "heterogeneous computing model" they've been evangelizing for years. Since they can't get an x86 license, the smartest move for them is to marginalize x86, which is why you see them pushing Tegra, Android, and Project Denver on the server side. I'm surprised they haven't been a bigger backer of Windows RT.
  • mayankleoboy1 - Tuesday, March 19, 2013 - link

    At this point, we know more about Volta than we know about Maxwell. Rather ridiculous.
  • vFunct - Tuesday, March 19, 2013 - link

    You'll also be seeing 3D stacked-chip technology using through-silicon vias in many other upcoming devices. It has been a big trend in EDA software over the last few years, so we'll be seeing more of it as designers take advantage of the software that makes it possible.

    On a side note, the contrast of the article text is way too low. I actually had to edit the CSS using the web inspector to read the article; the text color is way too light.
  • MrSpadge - Wednesday, March 20, 2013 - link

    Not sure if they changed anything in between, but on an IPS screen of normal resolution it looks absolutely fine/normal in FF 19.
  • watersb - Tuesday, March 19, 2013 - link

    Great coverage, very much appreciated!

    Not many details yet, but how does Volta compare with JEDEC-standard Wide I/O? Isn't that in the same time-frame?
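    As a hedged back-of-envelope, taking the commonly cited first-generation Wide I/O figure (a 512-bit SDR interface at 200 MHz) and nVidia's ~1 TB/s Volta claim at face value, with both figures being assumptions here, the gap per stack is nearly two orders of magnitude:

        /* Rough bandwidth comparison; both inputs are assumed figures. */
        #include <stdio.h>

        static double gb_per_s(double bus_bits, double mtransfers_per_s) {
            /* bytes per transfer * Mtransfers/s / 1000 = GB/s */
            return bus_bits / 8.0 * mtransfers_per_s / 1000.0;
        }

        int main(void) {
            double wide_io_gbs = gb_per_s(512.0, 200.0); /* ~12.8 GB/s per stack */
            double volta_gbs   = 1000.0;                 /* claimed ~1 TB/s */
            printf("Wide I/O gen1: ~%.1f GB/s per stack\n", wide_io_gbs);
            printf("Volta claim:   ~%.0f GB/s (~%.0fx)\n",
                   volta_gbs, volta_gbs / wide_io_gbs);
            return 0;
        }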
