NVMe Recap

Before diving into our results, I want to spend a bit of time talking about NVMe (Non-Volatile Memory Express), the command set and host interface for PCIe-based storage. NVMe has been in the OEM and enterprise space for over a year, but it's still very much a new thing for end-user system builders because NVMe drives were not available through regular retail channels until now. So let's start by spending a moment recapping what NVMe is, how it works, and why it is such an important improvement over AHCI for SSDs.

Traditional SATA drives, whether mechanical hard drives or SSDs, are connected to the system through a controller sometimes referred to as a Host Bus Adapter (HBA) or host controller. This controller usually resides in the system's chipset (the southbridge), which handles I/O and communicates with the main processor. On the "upstream" side of the HBA, toward the main processor, is a PCI Express-like connection; on the downstream side, from the chipset to the drives, are SATA links. Almost all SATA controllers, including the ones built into motherboard chipsets, adhere to the Advanced Host Controller Interface (AHCI) standard, which allows them all to work with the same class driver. SATA drives connected through such controllers are accessed using the ATA command set, which provides a standard language for all drives.

Early PCIe SSDs, on the other hand, either implemented proprietary interfaces requiring custom drivers, or they implemented the older AHCI and ATA command sets and as such appeared to the OS to be a SATA drive in every way (except for the 6Gb/s speed limit). As a transition mechanism to help smooth out the rollout of PCIe SSDs, using AHCI over PCIe was a reasonable short-term solution; however, over time AHCI itself became a bottleneck to what the PCIe interface and newer SSDs were capable of.

                      NVMe                              AHCI
Latency               2.8 µs                            6.0 µs
Maximum Queue Depth   Up to 64K queues with             Up to 1 queue with
                      64K commands each                 32 commands each
Multicore Support     Yes                               Limited
4KB Efficiency        One 64B fetch                     Two serialized host DRAM
                                                        fetches required

To allow SSDs to better take advantage of the performance possible through PCI Express, a new host controller interface and command set called NVMe was developed and standardized. NVMe's chief advantages are lower latency communication between the SSD and the CPU, and lower CPU usage when communicating with the SSD (though the latter usually only matters in enterprise scenarios). The big downside is that it's not backwards-compatible: in order to use an NVMe SSD you need an NVMe driver, and in order to boot from an NVMe drive your motherboard's firmware needs NVMe support. The NVMe standard has now been around long enough that virtually every consumer device with an M.2 slot providing PCIe lanes should have NVMe support or a firmware update available to add it, so booting off the 950 Pro poses no particular trouble (as odd as it sounds, some older enterprise/workstation systems may not have an NVMe update, and users should check with their system manufacturer).
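
To make the latency and efficiency numbers in the table above a bit more concrete, here is a minimal sketch of what an NVMe command looks like in host memory. The layout and field names are paraphrased from the NVMe specification rather than taken from any particular driver, but the key point is accurate: every command is a fixed 64-byte submission queue entry, so the controller can pull a complete command from host memory in a single read.

    #include <stdint.h>

    /* Simplified 64-byte NVMe submission queue entry. Field names are
     * paraphrased from the NVMe specification, not taken from a real
     * driver's headers. */
    struct nvme_sq_entry {
        uint8_t  opcode;        /* command opcode, e.g. read or write     */
        uint8_t  flags;         /* fused operation, PRP vs SGL selection  */
        uint16_t command_id;    /* matched against the completion entry   */
        uint32_t nsid;          /* namespace the command targets          */
        uint64_t reserved;
        uint64_t metadata_ptr;  /* pointer to a metadata buffer, if any   */
        uint64_t prp1;          /* first data pointer                     */
        uint64_t prp2;          /* second data pointer or PRP list        */
        uint32_t cdw10;         /* command-specific dwords 10 through 15  */
        uint32_t cdw11;
        uint32_t cdw12;
        uint32_t cdw13;
        uint32_t cdw14;
        uint32_t cdw15;
    };

    /* The whole command fits in a single cache-line-sized fetch. */
    _Static_assert(sizeof(struct nvme_sq_entry) == 64,
                   "submission queue entries are exactly 64 bytes");

An AHCI host, by contrast, has to chase a pointer from a command list entry to a separate command table in host DRAM, which is where the "two serialized host DRAM fetches" in the table come from.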

Meanwhile, on the software side of matters, Windows 8.1 and Windows 10 include a built-in NVMe driver that implements all the basic functionality necessary for everyday use. But basic it is: the default NVMe class driver is missing features needed for things like updating drive firmware and accessing some diagnostic information. For that reason, and to accommodate users who can't move to a version of Windows that has a Microsoft NVMe driver, most vendors are providing a custom NVMe driver. Samsung provided a beta version of their driver, as well as a beta of their SSD Magician software that now supports the 950 Pro (but not their previous OEM-only M.2 drives). Almost all of SSD Magician's features require Samsung's NVMe driver when used with the 950 Pro.

Finally, some of our benchmarking tools are affected by the switch to NVMe. Between most of our tests, we wipe the drive back to a clean state. For SATA drives and PCIe drives using AHCI, this is accomplished using the ATA security features. The NVMe command set instead has a Format command that can be instructed to perform a secure erase, producing the same result but requiring a different tool to issue it. Likewise, our usual tools for recording drive performance during the AnandTech Storage Bench tests don't work with NVMe, so we're using a different tool to capture that data, but the same tools to process and analyze it.
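
For illustration, here's roughly what that format-with-secure-erase looks like at the command level. This is a minimal sketch using the Linux NVMe passthrough ioctl, not the tool we actually use on the testbed; the device path and helper name are hypothetical, but the opcode and Secure Erase Setting field come from the NVMe spec.

    #include <fcntl.h>
    #include <linux/nvme_ioctl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    /* Hypothetical helper: issue a Format NVM admin command with the
     * Secure Erase Setting (SES) field set to 1 (user data erase). */
    static int secure_erase(const char *dev, unsigned int nsid)
    {
        int fd = open(dev, O_RDWR);
        if (fd < 0) {
            perror("open");
            return -1;
        }

        struct nvme_admin_cmd cmd;
        memset(&cmd, 0, sizeof(cmd));
        cmd.opcode = 0x80;      /* Format NVM (admin opcode)          */
        cmd.nsid   = nsid;      /* namespace to format                */
        cmd.cdw10  = 1u << 9;   /* SES = 001b (user data erase),      */
                                /* LBA format 0, no metadata          */

        int ret = ioctl(fd, NVME_IOCTL_ADMIN_CMD, &cmd);
        if (ret != 0)
            perror("NVME_IOCTL_ADMIN_CMD");

        close(fd);
        return ret;
    }

    int main(void)
    {
        /* Example only: erases all user data on namespace 1 of nvme0. */
        return secure_erase("/dev/nvme0", 1) ? 1 : 0;
    }

Utilities that offer a secure erase for NVMe drives are generally wrapping this same admin command under the hood.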

Measuring PCIe SSD Power Consumption

Our SSD testbed is now equipped to measure the power consumption of PCIe cards, and we're using an adapter to extend that capability to PCIe M.2 drives. M.2 drives run on 3.3V where most SATA devices use 5V, but we account for that difference by reporting power draw in watts or milliwatts rather than current. This review is our first look at the power draw behavior of a PCIe drive and our first opportunity to explore the power management capabilities of PCIe drives. To offer some points of comparison, we're re-testing our samples of the Samsung SM951 and XP941, earlier PCIe M.2 drives that were sold to OEMs but not offered in the retail channel.
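
As a quick worked example of why reporting in watts keeps the comparison fair, consider the hypothetical readings below: the M.2 drive draws noticeably more current than the SATA drive, yet once the rail voltage is factored in their power consumption is essentially identical.

    #include <stdio.h>

    /* Hypothetical readings: an M.2 drive on its 3.3 V rail and a SATA
     * drive on its 5 V rail. Comparing amps alone would make the M.2
     * drive look much hungrier than it really is; in watts it's a wash. */
    int main(void)
    {
        double m2_rail_v   = 3.3, m2_current_a   = 1.40;
        double sata_rail_v = 5.0, sata_current_a = 0.92;

        printf("M.2 drive:  %.2f W\n", m2_rail_v * m2_current_a);     /* 4.62 W */
        printf("SATA drive: %.2f W\n", sata_rail_v * sata_current_a); /* 4.60 W */
        return 0;
    }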

This analysis has turned up some surprises. For starters, the 950 Pro's power consumption noticeably increases as it heats up - indicating that the heat/leakage effect is enough of a factor to be measurable here - and I've seen its idle power climb by as much as 4.5% from power on to equilibrium. Pointing a fan at the drive quickly brings the power back down. For this review, I made no special effort to cool the 950 Pro. Samsung ships it without a heatsink and assures us that it has built-in thermal management capabilities, so I tested it as-is in our standard case scenario with the side panel removed.

It appears that Samsung's NVMe drives have much higher idle power consumption than their AHCI drives, even when using the same UBX controller. It's clear that our system configuration is not putting the 950 Pro or the NVMe version of the SM951 into a low-power state when idle, but the cause is not clear. The power levels reported in the graph below are all reached even before the operating system has loaded, and they don't improve once Samsung's NVMe driver loads, which points to an issue with either the drive firmware or the motherboard firmware.

M.2 PCIe Drives Idle Power

Furthermore, we have a clear indication of at least one motherboard bug. PCI Express Active State Power Management (ASPM) is a feature that allows an idle PCIe link to be put into a low-power state, something that is quite useful for an SSD that experiences long idle periods. ASPM can be activated in just the downstream direction (CPU to device) or in both directions; the latter is what offers significant power savings for an SSD. Our testbed motherboard offers options to configure ASPM, but when the more aggressive bidirectional ASPM level is enabled, the system locks up very frequently. I tried to test ASPM on my personal Haswell-based machine with a different motherboard from a different vendor, but it didn't offer any option to enable ASPM.
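
For readers who want to check what their own platform negotiated, here is a minimal sketch that reads a PCIe device's Link Control register from its config space on Linux and decodes the ASPM control bits. The device address is a hypothetical example, and reading the full config space generally requires root.

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* Hypothetical address; substitute your drive's PCI address. */
        const char *path = "/sys/bus/pci/devices/0000:01:00.0/config";
        FILE *f = fopen(path, "rb");
        if (!f) {
            perror("fopen");
            return 1;
        }

        uint8_t cfg[256];
        if (fread(cfg, 1, sizeof(cfg), f) != sizeof(cfg)) {
            fprintf(stderr, "short read (run as root for full config space)\n");
            fclose(f);
            return 1;
        }
        fclose(f);

        /* Walk the capability list (pointer at offset 0x34) to find the
         * PCI Express capability (ID 0x10). */
        uint8_t off = cfg[0x34];
        while (off != 0 && cfg[off] != 0x10)
            off = cfg[off + 1];
        if (off == 0) {
            fprintf(stderr, "no PCIe capability found\n");
            return 1;
        }

        /* The Link Control register sits at offset 0x10 in that capability;
         * bits 1:0 are ASPM control: 00 off, 01 L0s, 10 L1, 11 both. */
        uint16_t lnkctl = (uint16_t)(cfg[off + 0x10] | (cfg[off + 0x11] << 8));
        static const char *state[] = { "disabled", "L0s only", "L1 only", "L0s and L1" };
        printf("ASPM control: %s\n", state[lnkctl & 0x3]);
        return 0;
    }

Note that these bits only show what the firmware or OS has enabled for the link; whether the low-power states actually get used depends on the device and, as we found, on the motherboard behaving itself.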

Using a slightly older Ivy Bridge machine with an Intel motherboard, I was able to confirm that the 950 Pro doesn't have any issues with ASPM, and that it does offer significant power savings. However, I wasn't able to dig for further power savings on that system, and all of the power measurements reported with the performance benchmarks in this review were performed on our usual testbed with ASPM off, as it has been for all previous reviews.

Motherboard power management bugs are tragically common in the desktop space, and devices that incorrectly implement ASPM are common enough that it is seldom enabled by default. As PCIe peripherals of all kinds become more common, the industry is going to have to shape up in this department, but for now consumers should not assume that ASPM will work correctly out of the box.


142 Comments


  • MHz Tweaker - Monday, November 16, 2015 - link

    I have an IT service company and have been doing IT for a few decades. I am with you on how incredibly annoying it is to work on some of the hodge podge of hardware that comes in for service. Specifically there are those that refuse to upgrade early P4's or Sempron single core systems with 512 or 256 megs of RAM. Certainly these should have failed by now. They barely run XP let alone modern security needed to protect them. Internally all our workstations are Z97, X99, x79, Z170's with 4690K's, 4790K's, 5930K, 3930K and 6700K CPUs. Every machine has a Samsung or Intel SSD for a boot drive and a Hitachi 4, 5 or 6TB drive for work and storage. All systems also have 16 or 32GB RAM. I am just amazed at how much time the average person wastes waiting on an entry level PC to do things. I mean many of us spend in excess of 40 hours a week at a keyboard. I get much more done when I keep my equipment in tip top shape and fresh. Nothing has a chance to break down. Many companies throw tons of money at "image" but internally the infrastructure is held together with bubble gum and band aids.
    Oh BTW, I got our first 950 Pro 512GB yesterday :-)
  • Deders - Thursday, October 22, 2015 - link

    Until Samsung announce again that SSD's are not meant to use sleep, their excuse for computers with 830/840's freezing for 30 seconds after waking up.
  • BurntMyBacon - Friday, October 23, 2015 - link

    @Deders: "Until Samsung announce again that SSD's are not meant to use sleep, their excuse for computers with 830/840's freezing for 30 seconds after waking up."

    Is this a known issue or are you just speculating about what might (probably?) occur in the future?
    I am using quite a few 830s and have deployed quite a few more all (up to this point) without this issue. In my experience, they've been some of the most reliable drives I've used. I know of a few 840s without issues as well, but I've tried to avoid them as much as practical as I'm still not convinced the industry fully understands the long term consequences of simultaneously increasing leakage and cutting down the margin of error to get that third (fourth, fifth, etc.) bit in. I suspect it'll be fine on some (larger) process nodes, but the 840EVO issues suggests that there is a lower limit to how small you can get while trying to get that extra bit (and storing long term). In any case, it would be nice to know if there is something I should be looking out for here and whether practical mitigations exist.
  • bill.rookard - Sunday, October 25, 2015 - link

    Agreed. I have several systems with the 830s in them and have zero problems with them. I avoided the 840s because those went with the TLC and I was iffy about them, and the 830s are rock solid and very fast.
  • Chaser - Tuesday, October 27, 2015 - link

    Boot times matter to me too. Makaveli must have a remote control while he's making his bagels.
  • carl0ski - Monday, May 16, 2016 - link

    I'd say the most annoying part is the Windows boot logo is a fixed time to complete.
    So even if Windows 7 8 10 boot up faster it still waits for the pretty video.

    Turn it off from msconfig and windows 7 on 850 will boot in 3 seconds instead of 10 seconds
  • beginner99 - Thursday, October 22, 2015 - link

    Most people that buy such a product usually are enthusiast and play around like over clocking. 10 sec more per boot is then pretty annoying.
  • Makaveli - Thursday, October 22, 2015 - link

    I'm from the era of overclocking with jumpers on a motherboard and far longer post times. If waiting 10 seconds for a machine to boot is too long while trying to get a stable overclock then do it in windows. Or does everyone in this generation have ADD?
  • AnnonymousCoward - Thursday, October 22, 2015 - link

    Oh, so time is only valuable to those with ADD? You're a fool.
  • Makaveli - Thursday, October 22, 2015 - link

    Go troll else where tool!
