In the past couple of weeks, we have been re-testing and re-analysing data for our second-generation Ryzen 2000-series review. The extra time spent looking at the results and the state of the market led me down some interesting lines of thought about how the competitive landscape is set to look over the next 12-18 months.

Based on our Ryzen 2000-series review, it was clear that Intel’s 8-core Skylake-X product is not up to the task. The Core i7-7820X wins in memory-bandwidth-limited tests because it offers quad-channel memory against dual-channel competition, but it falls behind in almost every other test, and it costs almost double the competing chips in benchmarks where the results are equal. It also has only 28 PCIe lanes, rather than the 40 that this class of chip had two generations ago, or the 60 that AMD puts on its HEDT Threadripper processors.
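
As a back-of-the-envelope illustration of that bandwidth gap, the sketch below computes theoretical peak DDR4 bandwidth from channel count and transfer rate. The DDR4-2666 speed is an illustrative assumption, not necessarily the exact memory used in our testing.

```python
# Theoretical peak DDR4 bandwidth: channels x transfer rate x bus width.
# DDR4-2666 is an assumed speed for illustration only.

def peak_bandwidth_gb_s(channels: int, mt_per_s: int, bus_bytes: int = 8) -> float:
    """Peak bandwidth in GB/s: each 64-bit (8-byte) channel moves bus_bytes per transfer."""
    return channels * mt_per_s * bus_bytes / 1000

dual = peak_bandwidth_gb_s(channels=2, mt_per_s=2666)  # mainstream (Ryzen, Coffee Lake)
quad = peak_bandwidth_gb_s(channels=4, mt_per_s=2666)  # HEDT (Core i7-7820X)

print(f"Dual-channel DDR4-2666: {dual:.1f} GB/s")  # ~42.7 GB/s
print(f"Quad-channel DDR4-2666: {quad:.1f} GB/s")  # ~85.3 GB/s
```

Double the channels means double the theoretical peak, which is why the 7820X only pulls ahead where raw memory bandwidth dominates.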

Intel uses its monolithic low-core-count (LCC) Xeon design, which has 10 cores in its silicon floor plan, for the 6, 8, and 10-core Skylake-X processors. AMD is currently highly competitive at 8 cores, at a much lower price point in the consumer space, making it hard for Intel to justify its 8-core Skylake-X design. Intel is also set to launch 8-core mainstream processors later this year, and is expected to extend its consumer ring-bus design from six cores to eight cores to do so, rather than transpose the 8-core LCC design onto the latest Coffee Lake microarchitecture updates.

Because of all this, I am starting to be of the opinion that we will not see Intel release another LCC Xeon in the high-end desktop space. AMD’s Threadripper HEDT processors run mainly at 12 and 16 cores, and we saw Intel ‘have to’* release its mid-range core-count silicon design (called the high core count, or HCC, design) to compete.

*Officially, Intel doesn’t consider its launch of 12-18 core Core i7/Core i9 processors a ‘response’ to AMD launching 16-core Threadripper processors. Many in the industry disagree, given the piecemeal way the information came to light, without a unified message.

In the high-end desktop space, looking to the future, AMD is only ever going to push higher and harder, and it has room to grow. The Infinity Fabric between different dies on the same package is now a tried and tested technology, allowing AMD to scale out its designs in future products. The next product on the block is Threadripper 2: a minor update over Threadripper, built on 12nm, and presumably with higher frequencies and better latencies as well. We expect a similar 3-10% uplift over the last generation, as we saw with the Ryzen 2000-series, with up to 16 cores in a single package when it launches later this year.


A der8auer delid photo

With AMD turning the screw, especially given rumors of more high-performance cores in the future, Intel is going to have to do several things to compete:

  1. We will only see HCC processors for HEDT to begin with,
  2. The base LCC design will be relegated to low-end Xeons,
  3. Intel will design its next big microarchitecture update with EMIB* in mind, and
  4. Intel will have to put at least two dies on a single package to compete.

*EMIB: Embedded Multi-Die Interconnect Bridge, essentially a way to connect two chips in the same package at high bidirectional speed without a bulky silicon interposer, by embedding a micro-interposer into the package substrate. We currently see this technology on Intel’s Core with Radeon RX Vega (‘Kaby Lake-G’) processors in the latest Intel NUC.

For the next generation of server-class Xeon processors, called Cascade Lake-SP and expected either this year or early next (Intel hasn’t said), we believe the design will be a minor update over the current Skylake-SP, so options (1) and (2) could happen then. If Intel wants the mainstream Coffee Lake platform to go up to 8 cores, the high-end desktop platform is likely to only see 10 cores and up. The simple way to do this is to take the HCC die design (which could be up to 18 cores) and cut it down as necessary for each processor. Unless Intel updates the LCC design to 12 cores (not really feasible given the way the new inter-core mesh interconnect works, shown in the image below), Intel should leave the LCC design for low-core-count Xeons and put only HCC chips in the high-end desktop space.


Representation of Intel's Mesh topology for its SP-class processors
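
To see why a ring bus stops scaling where a mesh keeps going, consider average hop counts between cores. The sketch below is a rough illustration only: the bidirectional ring, the 4x7 grid, and the node counts are simplifying assumptions, not Intel’s actual floor plans.

```python
# Illustrative average hop counts: bidirectional ring vs. 2D mesh.
# A rough latency proxy only; real Skylake-SP mesh details differ.

def ring_avg_hops(n: int) -> float:
    """Average shortest-path hops between distinct nodes on a bidirectional ring."""
    return sum(min(d, n - d) for d in range(1, n)) / (n - 1)

def mesh_avg_hops(rows: int, cols: int) -> float:
    """Average Manhattan distance between distinct nodes on a rows x cols mesh."""
    nodes = [(r, c) for r in range(rows) for c in range(cols)]
    total = sum(abs(a[0] - b[0]) + abs(a[1] - b[1]) for a in nodes for b in nodes)
    n = len(nodes)
    return total / (n * (n - 1))

for cores in (8, 16, 28):
    print(f"{cores}-node ring: {ring_avg_hops(cores):.2f} avg hops")
# ring distance grows linearly (~n/4); mesh distance grows roughly with sqrt(n)
print(f"4x7 mesh (28 nodes): {mesh_avg_hops(4, 7):.2f} avg hops")
```

At 8 cores a ring averages about 2.3 hops, which is hard to beat; stretched to 28 cores it would average over 7, while a 4x7 mesh averages under 4. That is consistent with keeping the ring for small consumer dies and the mesh for the big ones.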

Beyond Cascade Lake-SP, for future generations, options (3) and (4) are the smarter paths to take. EMIB adds packaging expense, but using two smaller dies should have a knock-on effect: better yields and a more cost-effective implementation. Intel could also leave out EMIB and use a standard intra-package connection, as AMD does.
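
To illustrate the yield argument, here is a minimal back-of-the-envelope sketch using a simple Poisson defect model. The die areas and defect density are illustrative assumptions, not figures from Intel or AMD.

```python
# Back-of-the-envelope die yield comparison using a Poisson defect model.
# All numbers are assumptions for illustration, not vendor data.
import math

def die_yield(area_mm2: float, defects_per_mm2: float) -> float:
    """Poisson model: probability that a die of the given area has zero defects."""
    return math.exp(-area_mm2 * defects_per_mm2)

D0 = 0.002         # assumed defect density in defects/mm^2
big_die = 480.0    # one large monolithic die, mm^2 (assumed)
small_die = 240.0  # each of two half-size dies, mm^2

y_big = die_yield(big_die, D0)
y_small = die_yield(small_die, D0)

print(f"Monolithic {big_die:.0f} mm^2 die yield: {y_big:.1%}")    # ~38.3%
print(f"Half-size {small_die:.0f} mm^2 die yield: {y_small:.1%}")  # ~61.9%
# Good small dies can be paired per package after test, so the metric
# that matters is good silicon per unit of wafer area:
print(f"Good silicon per wafer area, small vs. big: {y_small / y_big:.2f}x")
```

Under these assumed numbers, two half-size dies deliver roughly 60% more good silicon per wafer area than one monolithic die, which is the knock-on effect mentioned above.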

But one question is whether Intel’s current library of interconnects, i.e. the competitors or analogues to AMD’s Infinity Fabric, is up to the task. Intel currently uses its UPI technology to connect two-socket, four-socket, and eight-socket platforms. Intel also uses it in the upcoming Xeon+FPGA products to combine two chips in a single package via an intra-package connection, but that comes at the expense of limiting those Xeon Gold processors to two sockets rather than four (a consequence of the Xeon Gold design having only three UPI links). We will have to see whether Intel can appropriately migrate UPI (or other technologies) across EMIB and over multiple dies in the same package. In Intel’s case, those dies might not need to be identical, unlike AMD’s, but as mentioned, AMD already has its Infinity Fabric in the market and selling today.

The question is whether Intel has had this in mind. We have seen ‘leaks’ in the past of Intel combining two reasonably high-core-count chips into a single package, but we have never seen such products in the market. If these designs are floating around Intel, and I’m sure they are, are they only for 10nm? Given the delays to 10nm, is Intel still waiting it out, or will it back-port the design to 14nm as those delays grow?

Intel’s Dr. Murthy Renduchintala, on a recent JP Morgan investment call, was clear that 10nm high-volume manufacturing is set for 2019 (he didn’t say when in 2019), but also that Intel is learning how to get more design wins within a node rather than waiting for new ones. I would not be surprised if this is one project that gets pulled back to 14nm in order to stay competitive.

If Intel hasn’t done it by the time AMD launches Zen 2 on 7nm, the one-sided seesaw of the past 10-15 years will tip the other way in the HEDT market.

Based on previous discussions with one member of the industry, I do not doubt that Intel might still win in absolute, money-is-no-object performance with its best high-end $10k+ parts. Intel is very good at that, and it has the money and expertise for these super-halo, super-high-bin monolithic parts. But if AMD makes the jump to Zen 2 and 7nm before Intel comes to market with a post-Cascade Lake product on 10nm, then AMD is likely to have the better, more aggressive, and more investment-friendly product portfolio.

Competition is good.

31 Comments

  • euskalzabe - Friday, June 1, 2018

    I find your choice of the exculpatory paragraph "*Officially Intel doesn’t consider..." quite interesting. This is an editorial; like any article published on AnandTech, it's your opinion and analysis. So the fact that you felt you had to placate Intel, by saying that this "response" isn't official to them, tells me a lot about how vulnerable you feel against them.

    What does any reader here care about what Intel considers official or not? We care about their hardware, not what they want their PR to look like. You would never say that AMD's Vega was not officially a "response" to the GTX 1080, because it's obvious that it is: it's competition. The same situation applies to Intel vs AMD, so your explanatory paragraph comes across as plainly bizarre.

    If anything, it points to Intel's menacing position in the market if it makes you feel like you can't state your opinion on your own website without clarifying things for them and walking on eggshells. Further, it reinvigorates my preference to buy AMD (and I say that while still annoyed I had to buy an i5-8400 for my last build that I can't wait to replace, and feeling stupid that I traded my 480 for a 1060 - although I did make money thanks to the mining craze).
  • MrSpadge - Friday, June 1, 2018

    To me this read like "they are trying to hide behind such BS, better judge them by what they actually do (and don't do)". I don't think it has to be more obvious than this.
  • bortiz - Friday, June 1, 2018

    You mentioned the issue and then skipped right over it. Traditionally, Intel has had higher yields on large dies than AMD (a lower defect rate). This is not a design flaw on anyone's side, but a matter of prioritization: Intel sacrifices performance to get better yields (lower cost). So the cost savings AMD gets with smaller dies are greater than what Intel would get on a comparable process technology (say, a 14nm node).
    The big advantage for Intel is not in single-socket systems, but in multi-socket systems (the high end). AMD has already used up its socket count by connecting all the chips on a single substrate, while Intel can still offer higher socket counts. With this, Intel can scale to build larger systems.
    Either way, this is good competition - which is great for everyone!!!
  • FreckledTrout - Friday, June 1, 2018

    bortiz, that post is confusing. What the heck does "AMD has already used up its socket count" mean? The Intel and AMD server chips you speak of all take up one socket. AMD connected a few chips together, but it's still a single socket; you can plug multiple EPYC CPUs into multi-socket motherboards just like you can with Intel's. Sure, I can see Intel's large monolithic CPUs outperforming AMD's approach at the high end, but for none of the reasons you stated.
  • MrSpadge - Friday, June 1, 2018

    More reasons to drop the LCC die, or at least increase its core count significantly:

    - at 6-10 cores the ring bus is obviously doing well, whereas Skylake-X is losing performance and power efficiency. The mesh probably only starts to shine at over 20 cores or so; otherwise they would have introduced it sooner.

    - the HCC die at moderate core counts features a lower power density, so it can be cooled more easily and hence driven harder for workstation use.
  • zodiacfml - Friday, June 1, 2018

    Competition is not good. I still remember the days of AMD putting out good CPU designs but always coming up short due to Intel's superior node, with AMD behind by two generations. Lately, Intel is stubbornly waiting it out for 10nm to save pennies, which makes it appear that AMD is competing nicely. No wonder AMD and its CEO have been reserved with their comments on Ryzen: only positive statements that Ryzen will keep improving.
  • jjj - Saturday, June 2, 2018

    Lisa Su said recently that Zen 2 comes first in server, and unless she misspoke, that means Zen 2 desktop won't be the same die as server anymore.
  • serendip - Sunday, June 3, 2018

    I'm curious as to the use cases for 16 cores on HEDT. If you're doing stuff like video encoding, wouldn't that be better served by GPUs? What desktop tasks are suitable for such high core counts?
  • Vatharian - Sunday, June 3, 2018

    I strongly disagree on Intel's need to switch to a mesh, AMD-like topology. What's more, they would NOW probably do anything to avoid such a step. They do have access to something much more aggressive: Intel has invested billions in a joint venture with Micron, and they have access to highly efficient die-stacking technology. Dropping a massive, say 2k/4k/8k-wide, ring bus under the whole chip would allow Intel to basically rescale current designs without regard to the surface area it takes. Literally no one else is capable of this approach.

    Also, the top-tier XCC Xeons could be further boosted by sticking a few gigabytes of HBM beside them as LLC.

    I highly doubt that the FPGA Xeons will hit the mass market (or any market, even): first, their TDP is obnoxiously high, like 270W+ per socket, and second, there is a very high cost of entry - experience, tools, etc. - so they will probably end up in direct customers' systems only. I guess FPGAs will instead trickle down to other products, like network infrastructure, specialized accelerators, and, to some small degree, ML systems.
  • zepi - Monday, June 4, 2018

    I think there are some very lucrative FPGA markets available, for example in the financial industry. Investment banks have the money and the willingness to pay for both hardware and programming talent if they smell money.

    These kinds of chips would most likely give super-fat margins for Intel, and I'm 110% sure that they will want to get them out sooner rather than later.
