
  • BrokenCrayons - Friday, November 6, 2015 - link

    What bothers me is this idea:

    "...due to the high voltage, this is the least efficient way to run a processor and wasteful in terms of energy used, which for mobile devices means a shorter battery life or thermal throttling."

    Admittedly, it's a bit unrelated to the article, but it's somewhat annoying that mobile devices are built with cooling that's insufficient to sustain full load/maximum heat generating potential scenarios of modern processors without the CPU having to reduce its speed in order to avoid overheating. That's stupid. Sure the CPU is capable of responding to a situation where cooling isn't good enough, but I dislike the idea that some mobile systems are built so that thermal throttling happens pretty regularly under load. Sorry, that's just bad design.
  • DanNeely - Friday, November 6, 2015 - link

    No, it's optimizing for bursty loads and lightness; they could put a heavier heatsink in and either make the laptop significantly bigger or have a significantly smaller battery instead. Not that it would matter much since a sustained max CPU load would nuke your battery in an extremely short period of time.

    If you go back and look at mobile clock speeds before and after large turbo modes were added you'll see that its main effect was to allow bursty loads to run at higher speeds while the sustained speeds more or less stayed where they were. The net result was that for the non-CPU bound workloads that most users have a laptop could go from being much slower than a desktop to almost as fast as one.
  • beginner99 - Monday, November 9, 2015 - link

    Agree with DanNeely. That's why inherently you can't use laptops the same as desktops. They will throttle more. Explain that to the stupid IT department in my company that offers exactly 2 models of laptop you can choose from. Both are Ultrabooks with 15 W CPUs. And I'm supposed to work with that crap when I would need at least something like an i7-6700K.
  • jospoortvliet - Monday, February 27, 2017 - link

    Note that an i5-6500 will already run circles around any ultrabook CPU... no need for an i7 in that regard. Perhaps you could point out that it's both cheaper and faster?
  • neonspark - Friday, November 6, 2015 - link

    It depends. It is only "bad" design if you run a sustained load. But it is not bad design if you're doing quick ramp-ups and -downs, as many people do on mobile devices. Sacrificing the weight and size of the device to achieve the theoretical maximum and sustain it for hours is foolish on a mobile device. Yes, you CAN have this, but at what cost? In mobility, weight and size matter as much as performance; to imply that sustained performance is the ONLY metric that matters is narrow-minded about the application of mobility. The UX of a mobile device that is able to deliver performance in bursts instead of over long periods will be superior to one that is bulkier and needs to dissipate more heat to achieve the same. Don't be fooled, design matters a lot, and in the race for thinner and lighter, consumers will not really care about giving up a bit of performance to stay within thermal safety.
  • Darkstone - Saturday, November 7, 2015 - link

    I think you missed his point. It's fine if the processor throttles because the TDP of the chip isn't high enough. What isn't fine is manufacturers combining 45 W CPUs with cooling that can really only handle 25 W.

    Sustained performance does matter for some people, but the number of laptops you can buy with crappy cooling is large.
  • Nagorak - Saturday, November 7, 2015 - link

    There are a lot of apologists for crappy laptop design. I agree that lackluster cooling is one of the biggest issues with laptops. If you have a gaming laptop you will game plugged in most of the time, so battery life isn't as much of an issue. But when your whole system can only run at 80% speed due to being thermally throttled, that is a real kick in the nuts.

    I wish review sites would spend more time, or actually any time, measuring performance under sustained loads, because with gaming laptops that is the thing that matters most. I really couldn't care less what numbers are spit out on a 2 minute benchmark run when the laptop doesn't even have time to fully heat up.
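The sustained-load measurement this comment asks for is easy to sketch. Below is a minimal, Linux-only sketch (the /proc/cpuinfo format and the five-minute heat-soak duration are assumptions): pin every core at 100% load, then sample the average reported clock over time; a throttling machine shows the average decaying below base clock as it heat-soaks.

```python
# Sketch of a sustained-load throttling check. Linux-specific: relies on
# the per-core "cpu MHz : NNNN.NNN" lines in /proc/cpuinfo.
import multiprocessing
import re
import time

def burn():
    while True:  # busy loop to keep one logical core pinned at 100%
        pass

def parse_avg_mhz(cpuinfo_text):
    # Average the "cpu MHz" values Linux reports, one per logical core.
    mhz = [float(m.group(1))
           for m in re.finditer(r"cpu MHz\s*:\s*([\d.]+)", cpuinfo_text)]
    return sum(mhz) / len(mhz)

def monitor(duration_s=300, interval_s=10):
    # Load every core, then watch the average clock as the machine heat-soaks.
    workers = [multiprocessing.Process(target=burn, daemon=True)
               for _ in range(multiprocessing.cpu_count())]
    for w in workers:
        w.start()
    for t in range(0, duration_s, interval_s):
        with open("/proc/cpuinfo") as f:
            print(f"t={t:3d}s  avg clock: {parse_avg_mhz(f.read()):.0f} MHz")
        time.sleep(interval_s)

# Example (not run by default): monitor(duration_s=300)
```

On a laptop with adequate cooling the reported average should settle near the base clock rather than continuing to fall.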
  • naris - Monday, November 9, 2015 - link

    This is why "Gaming Laptop" is an oxymoron and why I would never buy one. Besides, what is the point of a laptop that has to be plugged in all the time? If you want performance, don't buy a laptop! If you want mobility, buy a Laptop, but don't expect performance!
  • MapleWheels - Saturday, November 14, 2015 - link

    You guys are confusing categories and falling for marketing tricks; regular laptops are in no way designed to be used under sustained load. "Gaming laptops" that don't just lean on marketing lines like "...play games on the go powered by X" actually have sufficient cooling (although the margin before throttling is low, which needs to be improved).

    Most laptops are designed to be marketed to normal users and casual gaming means something different to the general consumer.

    Also, more importantly to your point, some people need laptops because they are on the move but would like an actual gaming laptop since when they will be using it for gaming they will be plugged in. It isn't as simple as "get a desktop" unless you can figure out an easy and economical way to haul and use a desktop everywhere you go.
  • danwat1234 - Saturday, October 15, 2016 - link

    Most laptops I have used, whether gaming/engineering laptops or cheapo ones, can sustain their base clock speeds at 100% load on all cores. Some do have crappy cooling, but most are OK. Be sure to test it out when you get a new laptop so you can return it if it falls short.
    About gaming/engineering laptops: they have better battery life than you may think if they have Nvidia Optimus. The dedicated GPU can stay powered off, giving quite decent battery life.
  • willis936 - Friday, November 6, 2015 - link

    Sorry? You should be. Bad design would be saying "welp. I can dissipate 20W of heat sustained. Better make the absolute maximum heat output 20W." Nice attempt though.
  • xthetenth - Friday, November 6, 2015 - link

    Exactly, you can't make use of the full capability of the thermal solution in non-sustained workloads unless you can burst above the power use of a sustained workload.
  • xthetenth - Friday, November 6, 2015 - link

    Let's turn this on its head. Let's say you've got a mobile device where the cooling is sufficient to sustain full load without throttling. That means that if you're running a bursty workload, you're leaving performance on the table because it won't be using the maximum the chip is capable of all the time, so by definition the device is going to be underperforming whenever the load isn't constantly the maximum.

    In order to attain maximum performance in both bursty and sustained workloads, the device needs to dissipate a given amount of heat on average. Both kinds of workload should average out to that limit, and therefore bursty workloads need to be able to spike above the sustained maximum power.

    So not only is it not bad design to make a device that can clock higher in short bursts than in sustained workloads, it is bad design to make one that cannot, because it is leaving performance on the table.

    And this is why we shifted toward clocks that can be dynamically increased over the designed maximum sustained clock.
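The averaging argument above can be put into a toy calculation. All numbers here are hypothetical: a 15 W thermal budget and a workload that is active 25% of the time.

```python
# Toy model of the burst-vs-sustained argument. All numbers hypothetical.
# Work done is treated as proportional to power, which flatters turbo a
# little (real chips get less work per watt at high clocks).
THERMAL_BUDGET_W = 15.0   # average power the cooling can dissipate
DUTY_CYCLE = 0.25         # fraction of time the bursty workload is active

# Design A: clock capped so even a sustained load stays inside the budget.
sustained_power_w = THERMAL_BUDGET_W                # 15 W while active
# Design B: turbo sized so the *average* power still meets the budget.
burst_power_w = THERMAL_BUDGET_W / DUTY_CYCLE       # 60 W while active

avg_a = sustained_power_w * DUTY_CYCLE              # 3.75 W average
avg_b = burst_power_w * DUTY_CYCLE                  # 15.0 W average

print(f"Design A: {sustained_power_w:.0f} W bursts, {avg_a:.2f} W average")
print(f"Design B: {burst_power_w:.0f} W bursts, {avg_b:.2f} W average")
# Design A leaves most of the thermal budget unused on a bursty load.
```

On a fully sustained load both designs end up at the same 15 W; the difference only appears when the load is bursty.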
  • PaulHoule - Saturday, November 7, 2015 - link

    It depends on what kind of device you are making.

    I have a desktop replacement laptop that got its fan full of dust and it was getting hot and going into thermal throttling. I took it apart and puffed some freon into the fan and when I put the machine back together the CPU started running above 3GHz again.

    For tablet applications you are going to go fanless, so the ultimate limit on heat removal is how much you can dissipate through the exterior surface, and there isn't a lot of room for "good engineering". (Put the fan back in and you might as well put in a spinning HDD, open up more holes in the case, and pretty soon you have one of these convertible computers that nobody wants to buy and that exist just to confuse people like flight attendants.)
  • TomWomack - Sunday, November 8, 2015 - link

    If your mobile computer is regularly under load, you're running inappropriate software. For a more realistic situation - say, you apply a complicated Photoshop filter or MSVC compile, which loads the processor to 100% for twenty seconds, then look at the result for twenty seconds to see what to do next - then a cooling solution which lets the chip get hot over the twenty seconds and cool again during the thinking time is a pretty neat idea.

    Leave video transcoding to the farms of non-mobile processors at Netflix, who only have to do it once.
  • emn13 - Sunday, November 8, 2015 - link

    In principle you're right, but your examples are too heavy a load (i.e. more like Kraken than WebXPRT). If your workload takes significantly longer than 100 ms (the time needed to increase the clock rate without Speed Shift, apparently), then the approximately 65 ms Speed Shift saves isn't going to matter. MSVC compiles and even complicated Photoshop filters likely fall in that territory.

    It's going to matter during a web-page load, however - that takes a fraction of a second on a modern PC, so saving 65 ms might matter. It'll matter for a simple Photoshop filter.

    But anything that takes a second or more? You're not going to notice the (at best) 65ms improvement.
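The arithmetic behind this is simple. Assuming the roughly 65 ms ramp saving quoted above (the task lengths below are illustrative), the saving as a fraction of runtime falls off quickly:

```python
# How much a ~65 ms faster clock ramp matters, as a share of task runtime.
# 65 ms is the approximate saving discussed above; task lengths are
# illustrative.
RAMP_SAVING_S = 0.065

for task_s in (0.1, 0.25, 1.0, 10.0, 60.0):
    share = RAMP_SAVING_S / task_s * 100
    print(f"{task_s:6.2f} s task: ramp saving is {share:5.2f}% of runtime")
```

A 100 ms page load sees a large relative gain; anything that runs for seconds sees well under 1%.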
  • danwat1234 - Saturday, October 15, 2016 - link

    You calling Rosetta@home and Folding@home inappropriate software? lol
  • FalcomPSX - Monday, November 9, 2015 - link

    It's not so much bad design as poor system choice if your workloads exceed what the machine can realistically handle. A thin-and-light ultrabook shouldn't be expected to sit at full turbo frequency for extended periods of time; it doesn't have the cooling capacity for it. The burst performance is there when needed, but if you are running tasks that will max the CPU for extended periods of time, perhaps a different choice of system is more appropriate than complaining that the physics of power consumption and cooling requirements don't match your desire for something thin and light. Mobile stuff is getting better all the time, but it will ALWAYS lag behind a machine with better cooling, be it a desktop or a more substantial laptop.
  • Shadowmaster625 - Friday, November 6, 2015 - link

    So if I disable speed step on my machine and force it to run at max clock all the time, I should see massively increased performance in WebXPRT because I am simulating what speed shift is doing, and then some.
  • Ian Cutress - Friday, November 6, 2015 - link

    You'll also lose a lot of power by virtue of remaining at high clocks when idle, as well as producing extra heat which limits your sustained performance time when at load. In a desktop, that might not matter much if you have sufficient cooling, but it becomes important in a mobile device.
  • Sixonetwo - Friday, November 6, 2015 - link

    By running at max clock speed you would shave off the 35 thousandths of a second that it takes to ramp up to full speed with Speed Shift. Any performance gains would be negligible. In fact, you may score even lower: by running at max frequency all the time, your CPU will be at a higher resting temperature and will have to thermally throttle sooner.
  • TeXWiller - Friday, November 6, 2015 - link

    You would probably also need to disable C-states to see any benefit. P-state transitions have been "traditionally" faster in comparison.
  • zodiacfml - Saturday, November 7, 2015 - link

    Just a bit better as Intel's implementation is not that far from a CPU with a constant max frequency. Still, this is a welcome improvement as I noticed that the latest mobile CPUs still don't compare to older desktop CPUs such as a Sandy Bridge in terms of responsiveness.
  • Pissedoffyouth - Sunday, November 8, 2015 - link

    Wooooooosh
  • Wolfpup - Friday, November 6, 2015 - link

    Another great article! It's a neat technology, maybe not a huge change, but every bit helps, and it works as advertised!
  • Wolfpup - Friday, November 6, 2015 - link

    Oh, and another reason to upgrade to Windows 10...
  • sheh - Sunday, November 8, 2015 - link

    Or wait for the update that adds support to Win8 and possibly Win7 (if Microsoft won't, Intel probably will).
  • Gigaplex - Monday, November 9, 2015 - link

    It definitely won't come to Windows 7, and isn't likely to come to Windows 8.1. I'd be surprised if any significant features get backported to 8.1, Microsoft expects you to switch to 10.
  • danwat1234 - Saturday, October 15, 2016 - link

    Yep, oh well, I'll stick with 7/8 on my future Kaby or Coffee Lake laptop.
  • StrangerGuy - Friday, November 6, 2015 - link

    You mean how it's a complete yawnfest compared to the ARM competition?
  • Michael Bay - Sunday, November 8, 2015 - link

    Endless little tweaks of basic design coming out of ARM by mediateks of this world is just as much a yawnfest.
  • MrSpadge - Tuesday, November 10, 2015 - link

    +1
  • III-V - Friday, November 6, 2015 - link

    Wow, I can't wait to ditch Nehalem for Skylake.
  • maketas - Friday, November 6, 2015 - link

    Any idea if the latest iMacs with Skylake chips have this Speed Shift enabled?
  • Brett Howse - Friday, November 6, 2015 - link

    That's a great question. At the moment, it's Windows 10 only, but let me check with Intel.
  • Brett Howse - Friday, November 6, 2015 - link

    The official answer is that it's up to Apple to enable this. We'll see what we can find out.
  • maketas - Saturday, November 7, 2015 - link

    Great, thank you ;) Hope they enable it, and as other says, in Linux too.
  • extide - Friday, November 6, 2015 - link

    Hopefully they will get this into the Linux kernel soon
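In the meantime, one way to check whether a given Linux machine exposes the hardware side of Speed Shift (Intel calls it HWP, hardware-controlled P-states) is to look for the hwp flag in /proc/cpuinfo. The path and flag name are those reported by recent kernels, so treat this as a sketch:

```python
# Check for Hardware P-states (HWP), the hardware mechanism behind Speed
# Shift, via the CPU flags Linux reports in /proc/cpuinfo. Whether the
# kernel actually *uses* HWP depends on the intel_pstate driver version.
def cpu_flags(cpuinfo_text):
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            return set(line.split(":", 1)[1].split())
    return set()

def has_hwp(cpuinfo_text):
    return "hwp" in cpu_flags(cpuinfo_text)

if __name__ == "__main__":
    try:
        with open("/proc/cpuinfo") as f:
            print("HWP supported:", has_hwp(f.read()))
    except FileNotFoundError:
        print("Not a Linux system; /proc/cpuinfo unavailable")
```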
  • ikjadoon - Friday, November 6, 2015 - link

    Nice. Free performance.

    So, I heard the other "power-related" improvement in Skylake was duty cycling. Has that always been enabled? Err, is that 400 MHz the actual clock speed, or is that like 800 MHz with 50% duty cycling?
  • Ian Cutress - Saturday, November 7, 2015 - link

    Correct me if I'm wrong, but I was under the impression they mentioned at IDF that it was at least a Broadwell technology, but they didn't promote it as much. Or it may have been limited in some way (PCH only or something).
  • Ryan Smith - Sunday, November 8, 2015 - link

    DCC was implemented on the GPU of BDW. For SKL it's available for the CPU as well.
  • Samus - Saturday, November 7, 2015 - link

    Another reason to upgrade to Windows 10.
  • lilmoe - Saturday, November 7, 2015 - link

    Great. Now, Intel, would you please work harder on your HD Graphics drivers for Windows 10? The current drivers are buggy as heck.
  • MattMe - Monday, November 9, 2015 - link

    I can just imagine Intel trawling through the Anandtech comments looking for what jobs to do next.
    "Oh, look, this random internet forum user asks us politely to work on the HD drivers for Windows 10. We'll get on it straight away."

    ;)
  • jeffry - Saturday, November 7, 2015 - link

    I'm kind of disappointed. It's not that much of a deal actually. Battery efficiency went 10.64 -> 10.73. That's barely measurable. Overall performance benefit is less than 10 percent, and that's in benchmarks, not real-life applications... not at all a reason to use Windows 10.
  • markbanang - Saturday, November 7, 2015 - link

    Battery *life* went from 10.64 to 10.73 (around 1%), so coupled with the performance gains, battery *efficiency* gains are the same as the performance gains.

    Normally performance and battery life are a trade-off, so getting a performance increase without a battery life decrease is impressive enough on its own.
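That trade-off can be made concrete. Taking the review's battery-life numbers (10.64 -> 10.73, presumably hours) and an assumed, purely illustrative 5% performance gain, efficiency as work per charge improves by roughly the product of the two:

```python
# Illustrative arithmetic: battery life is from the review; the 5%
# performance gain is an assumption for the sake of the calculation.
life_old_h, life_new_h = 10.64, 10.73
perf_gain = 1.05                         # hypothetical: 5% more work per unit time

life_gain = life_new_h / life_old_h      # ~1.008, i.e. about +0.8%
efficiency_gain = perf_gain * life_gain  # work per charge scales with both

print(f"battery life: +{(life_gain - 1) * 100:.1f}%")
print(f"efficiency:   +{(efficiency_gain - 1) * 100:.1f}%")
```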
  • Kevin G - Saturday, November 7, 2015 - link

    Getting a small performance gain along with an equally small battery life gain isn't a bad thing. Both are moving in the right direction. This helps Skylake distinguish itself further from Haswell/Broadwell, which is something it really needs to do.
  • Nagorak - Saturday, November 7, 2015 - link

    It still doesn't distinguish itself very much.
  • Denithor - Monday, November 9, 2015 - link

    In your first two paragraphs you misspelled noticeable as "noticable" - lost an "e" somehow.
  • MrSpadge - Tuesday, November 10, 2015 - link

    The first two frequency steps are VERY quick with Speed Shift. The CPU seems to go from 1.0 GHz to ~2.5 GHz in about 2 ms. That's encouraging. When doing number crunching on GPUs with BOINC we always have the problem that idle CPUs are very slow to feed the GPU with new work. It's not much work required, but it's got to happen quickly - otherwise the GPU runs dry. That's why partly loading the CPU with other tasks is currently a good "work-around" for that.
  • danwat1234 - Saturday, October 15, 2016 - link

    Wait... the time it takes for a CPU core to ramp to full Turbo Boost is way less than a second. I run Seti@home on my laptop's GPU and it takes maybe 20 seconds or so of CPU crunching for the GPU to start working. During that time, the CPU core is at full clock speed, I assume. I don't think that is your issue.
    Maybe the CPU power is just a limitation in your case, or the priority of the process needs to be increased. What project are you running?
