Another snippet of information from Intel today relates to the company's future mobile platform CPU. We know it's called Ice Lake-U, that it is built on Intel's 10nm process, that it has Sunny Cove cores, and that it has beefy Gen11 integrated graphics. We're still waiting on finer details about where it's going to be headed, but today Intel is sharing some of its integrated graphics performance data for Ice Lake-U.

It should be noted that this testing was performed by Intel, and we have had no ability to verify it in any way. Intel shared this information with a number of press outlets in order to set expectations. We've been told that this is Intel's first 1 TeraFLOP graphics implementation, and that it performs as such. The presentation was given by Ryan Shrout, ex-owner and editor-in-chief of PC Perspective, and the testing was performed by his team inside Intel.

Ryan first showed us a direct comparison of the Gen9 graphics found in Intel's latest and best Whiskey Lake platform at 15W against a 15W Ice Lake-U product. The results make for pleasant reading. In the game demo scenes that Intel showed us, we saw upwards of a 40% gain in average frame rates. Percentile numbers were not shown.

When comparing to an equivalent AMD product, Intel stated that it was almost impossible to find one of AMD’s latest 15W APUs actually running at 15W in a device – they stated that every device they could find was actually running one of AMD’s higher performance modes. To make the test fair, Intel pushed one of its Ice Lake-U processors to the equivalent of a 25W TDP and did a direct comparison. This is essentially AMD’s Vega 10 vs Intel’s Gen 11.

For all the games in Intel's test methodology, Intel's hardware scored anywhere from a 6% loss to a 16% gain, with the average somewhere around a 4-5% gain. The goal here is to show that Intel can focus on graphics and gaming performance in ultra-light designs, with the aim of providing a smooth 1080p experience in popular eSports titles.

Update: As our readers were quick to pick up on from Intel's full press release, Intel is using faster LPDDR4X on their Ice Lake-U system. This is something that was not disclosed directly by Intel during their pre-Computex presentation.

Intel Test Systems Spec Comparison

            Ice Lake-U           Core i7-8565U        Ryzen 7 3700U
CPU Cores   4                    4                    4
GPU         Gen 11               UHD Graphics 620     Vega 10
            (<=64 EUs?)          (24 EUs)             (10 CUs)
Memory      8GB                  8GB                  8GB
Storage     Intel SSD 760P       Intel SSD 760P       SK Hynix BC501

For some background context, LPDDR4X support is new to Ice Lake-U, and long overdue from Intel as a consequence of Intel's 10nm and Cannon Lake woes. It offers significant density improvements and even greater bandwidth improvements over LPDDR3. Most 7th/8th/9th Gen Core U systems implemented LPDDR3 for power reasons, and OEMs have been champing at the bit for LPDDR4(X) so that they don't have to trade off between capacity and power consumption.

That Intel used LPDDR4X in Ice Lake-U versus DDR4 in the AMD system means that Intel had a significant memory bandwidth advantage – around 56% on paper – as well as a latency advantage. This sort of differential matters most in integrated graphics performance, suggesting that this is one angle that Intel will readily leverage when it comes to comparisons between the two products.
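As a rough sanity check, the ~56% figure falls straight out of the memory transfer-rate ratio if we assume LPDDR4X-3733 on the Ice Lake-U system and DDR4-2400 on the Ryzen system, both dual-channel. Intel did not confirm the exact speed grades in the briefing, so these are our assumptions:

```python
# Back-of-the-envelope check of the ~56% bandwidth gap, assuming
# dual-channel (128-bit) LPDDR4X-3733 on the Intel system versus
# dual-channel DDR4-2400 on the AMD system. The speed grades are
# our assumption; Intel did not spell them out.

BUS_WIDTH_BYTES = 16  # 128-bit dual-channel bus

def bandwidth_gbps(megatransfers_per_sec):
    """Peak theoretical bandwidth in GB/s for a given MT/s rating."""
    return megatransfers_per_sec * BUS_WIDTH_BYTES / 1000

lpddr4x = bandwidth_gbps(3733)  # ~59.7 GB/s
ddr4 = bandwidth_gbps(2400)     # ~38.4 GB/s

advantage = (lpddr4x / ddr4 - 1) * 100
print(f"LPDDR4X-3733: {lpddr4x:.1f} GB/s")
print(f"DDR4-2400:    {ddr4:.1f} GB/s")
print(f"Advantage:    {advantage:.0f}%")  # ~56%
```

Note that this is peak theoretical bandwidth only; effective bandwidth and latency depend on timings and the memory controller.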

Moving on, the last set of data comes from Intel's implementation of Variable Rate Shading (VRS), a feature recently introduced in DirectX 12. VRS is a technique that allows a game developer to change the shading resolution of an area of the screen on the fly, reducing the amount of pixel shading work in order to boost performance, ideally with little-to-no impact on image quality. It is newly supported on Gen11, but it requires the game to support the feature as well: the settings are tuned by the game, not by the driver or the GPU.
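As a toy illustration of why VRS saves work, consider how many pixel-shader invocations a frame needs when part of the screen is shaded at a coarse 2x2 rate instead of per-pixel. The screen fraction and shading rate below are purely illustrative and are not Intel's numbers:

```python
# Toy model of Variable Rate Shading: instead of running the pixel
# shader once per pixel, low-detail screen tiles can be shaded at a
# coarser rate (e.g. one shader invocation per 2x2 pixel block).
# The fractions and rates here are illustrative only.

def shader_invocations(width, height, coarse_fraction, rate=(2, 2)):
    """Pixel-shader invocations for a frame where `coarse_fraction`
    of the screen is shaded at `rate` (e.g. 2x2 = one invocation
    per 4 pixels) and the rest at full 1x1 rate."""
    total_pixels = width * height
    coarse_pixels = int(total_pixels * coarse_fraction)
    fine_pixels = total_pixels - coarse_pixels
    return fine_pixels + coarse_pixels // (rate[0] * rate[1])

full = shader_invocations(1920, 1080, coarse_fraction=0.0)
vrs = shader_invocations(1920, 1080, coarse_fraction=0.5)

saving = (1 - vrs / full) * 100
print(f"Full-rate shading: {full} invocations")
print(f"With VRS (half the screen at 2x2): {vrs} invocations")
print(f"Pixel-shading work saved: {saving:.1f}%")
```

Saved shading work does not translate one-to-one into frame rate, which is why Intel's quoted uplift comes from an ideal synthetic test.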

Intel showed that in an ideal synthetic test, they scored a 40% uplift with VRS enabled, and in the synthetic test comparing VRS on and off, that extra performance put it above an equivalent AMD Ryzen system. AMD’s GPU does not support this feature at this time.

Intel is also keen to promote Ice Lake as an AI CPU due to its AVX-512 implementation: any software that can take advantage of AI can be equipped with accelerated algorithms to speed it up.
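For context, the pattern being accelerated here is the int8 multiply-accumulate at the heart of quantized neural-network inference, which Ice Lake's AVX-512 VNNI instructions (part of Intel's DL Boost branding) perform across many lanes at once. The scalar sketch below only illustrates the arithmetic, with made-up weights and inputs:

```python
# Sketch of the int8 multiply-accumulate pattern that AVX-512 VNNI
# accelerates in hardware. This scalar Python version illustrates the
# arithmetic only; the silicon performs many such fused operations per
# clock. The weight/input values are made up for illustration.

def vnni_style_dot(a, b, acc=0):
    """Accumulate 8-bit products into a 32-bit accumulator -- the
    core loop of quantized neural-network inference."""
    for x, y in zip(a, b):
        # In the VPDPBUSD instruction, four such 8-bit products are
        # summed into a single 32-bit lane in one step.
        acc += x * y
    return acc

weights = [127, -128, 64, 3]   # int8 weights (illustrative)
inputs = [2, 1, -4, 100]       # int8 activations (illustrative)

print(vnni_style_dot(weights, inputs))  # 254 - 128 - 256 + 300 = 170
```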

We expect to hear more about Ice Lake this week at Computex, given Intel’s keynote on Tuesday, but we also expect to see some vendors showing off their Ice Lake-U designs.


  • npz - Sunday, May 26, 2019 - link

    So why did Intel overclock their GPU and CPU then? That TDP won't be available to consumers on their end either.
  • tipoo - Sunday, May 26, 2019 - link

    If an OEM put in a 25W TDP cooler, wouldn't it perform like that, as it would turbo and use all that room?
  • HStewart - Sunday, May 26, 2019 - link

    So is turbo mode considered overclocking nowadays? My Dell XPS 15 2-in-1 has a turbo speed of 4.1GHz, so there is no reason that a new generation processor on 10nm will not be similar.
  • ajc9988 - Tuesday, May 28, 2019 - link

    What counts as stock versus overclock performance is a weird thing nowadays. It depends on if it is the hardware's default behavior versus it being modified by another component or software.

    Take, for example, AMD's automatic scaling overclock. That is CPU behavior, not based on other factors except for it monitoring available voltage, heat output, etc. This is NOT considered an overclock.

    Then there is the multi-core enhancement on motherboards that run Intel CPUs all-core at the single-core boost. This IS overclocking because it is not based on the CPU hardware boosting naturally; rather, it is a motherboard setting.

    Intel's single core and all core boost IS a feature and is NOT considered overclocking. Motherboard manufacturers ignoring Intel's specs on boost duration and TDP can be argued to be overclocking.

    So, turbo mode, or rather Intel's built in boost speeds, are not overclocking, UNLESS the BIOS/UEFI has been modified to ignore Intel's TDP and boost duration specs, which then IS overclocking. If Intel wants to fix that, either make their partners comply with spec OR change the spec. It really is that simple.

    We could also get into the problems surrounding the calculation of TDP, etc., for both AMD and Intel, but that would take more time and nuance.
  • AshlayW - Sunday, May 26, 2019 - link

    Don't believe a word this scumbag company puts out when comparing products, especially to AMD. Do I need to remind everyone of the "9900K 50% faster than 2700X" 'study' they commissioned?
    Intel would have done everything in its power, skirting the boundary of deceit, to make the Intel CPU have an advantage.
  • Klimax - Monday, May 27, 2019 - link

    As if there is any difference between AMD and Intel. (AMD is forced to sort of behave, for now)
  • CBeddoe - Sunday, May 26, 2019 - link

    They are benchmarking the Intel system with high-frequency, high-capacity memory,
    and the AMD system with half the capacity and lower-frequency RAM.
    They must have given the marketing department lots of leeway on this one.
    Does Intel make allowances for their sieve-like security and the performance losses from patches?
  • PeachNCream - Sunday, May 26, 2019 - link

    Benchmarks from the manufacturer of a product are never biased. Never. Not at all. I'll believe the numbers when I see a credible independent third party like notebookcheck post benchmarks.

    With that said, that applies to the Ryzen vs Intel information. I doubt they would be as untrustworthy when comparing their own GPUs to one another. Still, grains of salt are being taken until someone gets their hands on retail hardware.
  • Krayzieka - Sunday, May 26, 2019 - link

    Now Intel is getting to the point of marketing. I suggest people go support AMD all the way.
  • abufrejoval - Monday, May 27, 2019 - link

    Looking at the numbers and the discussion there seems to be some consensus that it puts the new Ice Lake standard iGPU on a similar performance level as the GT3 variants from previous generations.

    I have always been fascinated by these chips, because they are oddly priced.

    They are extremely hard to get outside a Mac, where their end-user price is obviously insane.

    The only other form where you can get them easily is a NUC, where they conform to a classic Intel rule: don't charge for the iGPU, no matter what size!

    So even if common sense would dictate that the extra 64/128MB of eDRAM as well as the double-sized GT3 (or quad GT4) iGPU should cost extra money, end-user pricing on NUCs doesn't reflect that: those are solely priced on Pentium/i3/i5/i7 or CPU power "merits", even if the GPU in these configurations takes up much more die space than the CPU.

    But Intel doesn't seem to sell them to anyone but Apple.

    There is one single other instance where I have ever seen an Iris Pro/Plus outside a Mac or a NUC and that was a Medion notebook sold via Aldi in Germany, based on the i5-5257U and sold at €600, quite an ordinary price for an ordinary (HD520) i5 Skylake at the time, and an obvious bargain at double GPU power for free. So I grabbed one, especially because the dGPUs at the time were all still 28nm and very clunky.

    Alas, while double GPU power turned out to be true and the machine is fine and remains in good shape with great Linux compatibility, it doesn't turn the notebook into a viable gaming device, nor very likely into a viable AI inferencing monster.

    At least not when you have desktops with Nvidia dGPUs running next door or somewhere in a cloud close by.

    So to all the hot-headed discussion that's been going on in this thread I say: it doesn't really matter if Intel is cheating here or has made radical improvements, because every machine with either generation (or iGPU configuration) essentially remains a 2D device. It still takes GDDR RAM and at least 50 Watts of pure GPU power to make most of my games playable at the full resolution of the screen. So an Ultrabook simply isn't going to cut it if it's PC gaming you're after (Android games work, but are rarely attractive).

    But what also works just as well in both configurations is Steam streaming. A GTX 1060 is good enough for 1920x1080 on the server side and will give you performance that no APU or beefed-up iGPU will give you for a long time to come on a 15 or even 10 Watt ultrabook, without the ultrabook even running hot or running short on battery.

    So that's what I do. I show people my ultrabook and impress them with the most demanding games running at full tilt, seemingly without even breaking a sweat, on battery power.

    Some actually figure out that I must be cheating, but most people actually believe in both magic and advertisement.
