Burst IO Performance

Our burst IO tests operate at queue depth 1 and perform several short data transfers interspersed with idle time. The random read and write tests consist of 32 bursts of up to 64MB each. The sequential read and write tests use eight bursts of up to 128MB each. For more details, please see the overview of our 2021 Consumer SSD Benchmark Suite.
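To make the burst pattern concrete, here is a hypothetical Python sketch of a QD1 burst random read loop: one outstanding request at a time, short bursts of transfers separated by idle periods. This is only an illustration of the access pattern described above, not the actual test harness, and the burst sizes and idle times shown are illustrative placeholders rather than the suite's exact values.

```python
# Hypothetical sketch of a QD1 burst random read test (illustrative parameters).
import os
import random
import time

def burst_random_read(path, bursts=4, burst_bytes=1 << 20, block=4096, idle_s=0.0):
    """Issue random reads at queue depth 1 in short bursts with idle gaps.

    Returns the throughput (bytes/s) measured for each burst.
    """
    size = os.path.getsize(path)
    fd = os.open(path, os.O_RDONLY)
    speeds = []
    try:
        for _ in range(bursts):
            start = time.perf_counter()
            done = 0
            while done < burst_bytes:
                # QD1: each read completes before the next one is issued.
                off = random.randrange(0, size - block) // block * block
                done += len(os.pread(fd, block, off))
            speeds.append(done / (time.perf_counter() - start))
            # Idle time between bursts lets the drive drop to idle power states.
            time.sleep(idle_s)
    finally:
        os.close(fd)
    return speeds
```

Because each burst is short and followed by idle time, a drive's SLC cache and idle-to-active latency dominate the result, which is exactly what a burst test is meant to capture.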

[Charts: QD1 Burst IO Performance — Random Read, Random Write, Sequential Read, Sequential Write]

For quite a while, NVMe SSDs with Silicon Motion controllers have been delivering some of the best QD1 burst random read performance scores. The Intel SSD 670p pushes this even further when the test only hits the SLC cache. When testing an 80% full drive, its burst random read performance is faster than most other QLC drives but slower than any good TLC drive.

For QD1 random writes, the 670p is actually slightly slower than the 660p when testing a mostly-full drive, though it is again competitive with high-end TLC drives when writing to the SLC cache. For both sequential reads and sequential writes, the 670p offers very good QD1 throughput for a PCIe 3.0 drive, and is a big improvement over the 660p, which is seriously bottlenecked by its low-end controller.

Sustained IO Performance

Our sustained IO tests exercise a range of queue depths and transfer more data than the burst IO tests, but still have limits to keep the duration somewhat realistic. The primary scores we report are focused on the low queue depths that make up the bulk of consumer storage workloads. For more details, please see the overview of our 2021 Consumer SSD Benchmark Suite.
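One common way to exercise a range of queue depths, as described above, is to run one QD1 worker per outstanding request and scale the worker count. The hypothetical sketch below illustrates that idea; the depths and durations are illustrative placeholders and this is not the actual test harness.

```python
# Hypothetical sketch: approximating a queue depth sweep with one QD1 worker
# thread per outstanding request (illustrative parameters).
import os
import random
import time
from concurrent.futures import ThreadPoolExecutor

def read_worker(path, stop_at, block=4096):
    """One worker keeps a single read outstanding until the deadline."""
    size = os.path.getsize(path)
    fd = os.open(path, os.O_RDONLY)
    done = 0
    try:
        while time.perf_counter() < stop_at:
            off = random.randrange(0, size - block) // block * block
            done += len(os.pread(fd, block, off))
    finally:
        os.close(fd)
    return done

def qd_sweep(path, depths=(1, 2, 4), seconds=0.2):
    """Measure aggregate throughput (bytes/s) at each queue depth."""
    results = {}
    for qd in depths:
        stop_at = time.perf_counter() + seconds
        with ThreadPoolExecutor(max_workers=qd) as pool:
            totals = pool.map(read_worker, [path] * qd, [stop_at] * qd)
        results[qd] = sum(totals) / seconds
    return results
```

Plotting throughput against queue depth from a sweep like this is what reveals where a drive saturates, which is the behavior discussed for the 670p below.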

[Charts: Sustained IO Performance — Random Read, Random Write, Sequential Read, Sequential Write; throughput and power efficiency for each]

As with the burst IO scores, the longer sustained IO tests show the Intel 670p doing very well with sequential reads or writes: the performance doesn't betray the fact that it's using QLC NAND, and the power efficiency is typical of a last-generation controller. For random reads or writes, the performance at low queue depths is similarly great when testing the SLC cache, but testing across an 80% full drive knocks performance down to typical entry-level NVMe and mainstream SATA territory. Random writes in particular are disappointing on the mostly-full drive: it's slower than the 660p and the Phison E12-based Corsair MP400, though still several times faster than the DRAMless Mushkin Helix-L.

[Charts: Queue depth scaling — Random Read, Random Write, Sequential Read, Sequential Write]

The Intel 670p is fairly well-behaved through the sustained IO tests as the queue depth ramps up. Random reads saturate around QD32, random writes around QD8, and sequential transfers at QD2. Performance is very consistent after the drive reaches its full speed; the only big drop comes at the very end of the sequential write test on a mostly-full drive, when the SLC cache finally runs out while testing at QD128. This is pretty much never going to happen during ordinary consumer workloads.

Random Read Latency

This test illustrates how drives with higher throughput don't always offer better IO latency and Quality of Service (QoS), and that latency often gets much worse when a drive is pushed to its limits. This test is more intense than real-world consumer workloads and the results can be a bit noisy, but large differences that show up clearly on a log scale plot are meaningful. For more details, please see the overview of our 2021 Consumer SSD Benchmark Suite.
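Latency QoS is usually reported as high percentiles (e.g. 99th and 99.9th) of per-IO completion times rather than a simple average, since a few slow outliers are what users actually notice. The hypothetical sketch below shows the basic mechanics: time each read individually, then summarize the distribution. It is an illustration, not the suite's measurement code.

```python
# Hypothetical sketch: per-IO latency sampling and percentile (QoS) reporting.
import os
import random
import statistics
import time

def read_latencies(path, n_ios=256, block=4096):
    """Time each QD1 random read individually; return latencies in seconds."""
    size = os.path.getsize(path)
    fd = os.open(path, os.O_RDONLY)
    lat = []
    try:
        for _ in range(n_ios):
            off = random.randrange(0, size - block) // block * block
            t0 = time.perf_counter()
            os.pread(fd, block, off)
            lat.append(time.perf_counter() - t0)
    finally:
        os.close(fd)
    return lat

def qos(lat):
    """Summarize a latency distribution: mean plus tail percentiles."""
    lat = sorted(lat)
    pct = lambda p: lat[min(len(lat) - 1, int(p / 100 * len(lat)))]
    return {"mean": statistics.mean(lat), "p99": pct(99), "p99.9": pct(99.9)}
```

A drive pushed past its saturation point shows the gap between mean and tail percentiles widening sharply, which is why the plots below use a log scale.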

Taking a closer look at random read throughput and latency, the Intel 670p fares better than most of the other QLC drives, save for the 8TB Sabrent Rocket Q. The 670p has slightly worse performance than the DRAMless TLC Mushkin Helix-L. The throughput achieved by the 670p is similar to mainstream TLC SATA drives, but the latency is considerably higher: the SATA drives are bottlenecked by the SATA link itself, while the 670p's bottleneck is on the NAND side, which causes latency to spike as the drive is pushed to its limit.


72 Comments


  • bananaforscale - Wednesday, March 3, 2021 - link

    Fascinating. I have a Netgear GS810EMX connected to an Aquantia AQC-108, and the NIC has issues in Linux when it's receiving lots of data, but works fine in Windows. This requires further research.
  • justaviking - Monday, March 1, 2021 - link

    A MATTER OF PERSPECTIVE...

    Billy wrote: "More importantly, at 0.2 DWPD Intel's QLC SSDs aren't that far behind the 0.3 DWPD that most consumer TLC SSDs are rated for."

    A 0.1 DWPD difference might not sound like it is "that far behind," but on the other hand that is 33% behind, and 33% *is* significant.
  • Billy Tallis - Monday, March 1, 2021 - link

    My thinking is that the 33% difference on paper is a lot less significant than it looks at first glance, because most consumers won't come close to crossing either limit. If 0.1 DWPD is probably sufficient for your usage and 0.2 DWPD definitely is, then 0.3 DWPD doesn't really have much added benefit.
  • frbeckenbauer - Monday, March 1, 2021 - link

    I bought a Samsung PM9A1 for 115€. What is intel doing with these prices? A 1TB QLC SSD should be the price they're offering here for the 512GB version.
  • Machinus - Monday, March 1, 2021 - link

    You can still run linux on an X-25E RAID for the next 100 years.
  • MDD1963 - Tuesday, March 2, 2021 - link

    Intel does not Eff around when you have used up your allotted writes....; good or bad still, you are damn well done writing once you've used them up!
  • Hifihedgehog - Monday, March 1, 2021 - link

    Hey Billy. What is the best 240-256GB NVMe today? I am looking for something under $50 that is the fastest there is currently for system boot times and mixed I/O.
  • Tomatotech - Tuesday, March 2, 2021 - link

    To start with I wouldn't buy a 256GB NVME. Speed scales with size quite well for NVME, and the difference from 256 -> 512 -> 1TB is astounding. Go for a 1TB. This is going to be the fastest drive on your system by far, and more fast space is always useful.

    The next thing is to make sure you get a drive that folds *all* (or almost all) unused space into SLC cache. This means that with an empty 1TB TLC drive, you get 330GB of high-speed SLC space. Smaller drives give you far less cache space. My 1TB is about 500GB full, which means I still have around 150GB of SLC cache left. (It's a 2018 Adata SX8200 1TB, non-pro.)

    Beyond that, eh, from a user perspective they're all roughly equal; look at the table on the last page of the article. Used 1TB NVMe drives are a good buy too, there's not much that can go wrong with them, and if there is, you'll find out on first boot. The only things I would check for in a used working NVMe drive are a) total writes, but it's extremely rare for that to be excessively high; and b) a speed test - if that seems slow, then do a full secure erase and the SSD should be back to full performance, but even that is rarely needed with modern OSes.
  • Hifihedgehog - Tuesday, March 2, 2021 - link

    My price point was $50 and under so you ignored a key point from the very beginning.
  • abufrejoval - Tuesday, March 2, 2021 - link

    USB sticks are used all over the place for booting.

    And you get relatively fast µSD-cards which you could combine with a USB reader-stick.
    A "class 10/A2" rating card can be had at many capacity points where NVMe no longer goes.
