Sequential Read Performance

Our first test of sequential read performance uses short bursts of 128MB, issued as 128kB operations with no queuing. The test averages performance across eight bursts for a total of 1GB of data transferred from a drive containing 16GB of data. Between each burst the drive is given enough idle time to keep the overall duty cycle at 20%.
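The idle time implied by that duty cycle is easy to derive: at 20%, the drive gets four seconds of idle for every second of activity. A minimal sketch of the arithmetic (the 2 GB/s burst speed is an illustrative assumption, not a measured result from this review):

```python
# Sketch: idle time needed between bursts to hold a target duty cycle.

def idle_time(burst_bytes: int, throughput_bps: float, duty_cycle: float) -> float:
    """Idle seconds after each burst so that busy / (busy + idle) == duty_cycle."""
    busy = burst_bytes / throughput_bps
    return busy * (1.0 - duty_cycle) / duty_cycle

# One 128 MB burst at an assumed 2 GB/s, targeting a 20% duty cycle:
print(idle_time(128 * 2**20, 2e9, 0.20))  # 4x the busy time
```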

Burst 128kB Sequential Read (Queue Depth 1)

Both of the 8TB QLC SSDs provide burst sequential read performance that is on par for their respective market segments. The Sabrent Rocket Q performs similarly to both the Mushkin Helix DRAMless TLC and Intel 660p QLC SSDs. The 8TB Samsung 870 QVO is just a bit slower than the other Samsung SATA SSDs.

Our test of sustained sequential reads uses queue depths from 1 to 32, with the performance and power scores computed as the average of QD1, QD2 and QD4. Each queue depth is tested for up to one minute or 32GB transferred, from a drive containing 64GB of data. This test is run twice: once with the drive prepared by sequentially writing the test data, and again after the random write test has mixed things up, causing fragmentation inside the SSD that isn't visible to the OS. These two scores represent the two extremes of how the drive would perform under real-world usage, where wear leveling and modifications to some existing data will create some internal fragmentation that degrades performance, but usually not to the extent shown here.
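The headline score described above reduces to a simple average over the three lowest queue depths, even though the sweep runs all the way to QD32. A minimal sketch (the per-QD throughput numbers are placeholders, not results from this review):

```python
# Sketch: combine per-queue-depth results into the headline score as
# described in the text: average of QD1, QD2 and QD4 only.

def sustained_score(results_mbps: dict) -> float:
    """Average throughput (MB/s) across QD1, QD2 and QD4."""
    return sum(results_mbps[qd] for qd in (1, 2, 4)) / 3

# Placeholder per-QD throughputs in MB/s for a hypothetical drive:
sweep = {1: 800.0, 2: 1400.0, 4: 2100.0, 8: 2900.0, 16: 3300.0, 32: 3400.0}
print(sustained_score(sweep))  # higher queue depths do not contribute
```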

Sustained 128kB Sequential Read

On the longer sequential read tests, the Sabrent Rocket Q starts to fall behind the other low-end NVMe drives, though it still offers competitive performance reading data that was written with random writes. The Samsung 870 QVO holds on to its status as only slightly slower than the other Samsung SATA drives, but due to the SATA bottleneck this is still far slower than any of the NVMe drives.

Sustained 128kB Sequential Read (Power Efficiency in MB/s/W; Average Power in W)

The Sabrent Rocket Q is clearly the least efficient consumer NVMe drive in this bunch for sequential reads of contiguous data; the DRAMless TLC drives outperform it while using much less power, and the more power-hungry high-end TLC SSDs have higher performance to match. The 8TB Samsung 870 QVO again scores just a bit worse than its lower-capacity siblings, because the 8TB model is slightly slower and draws slightly more power.
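The efficiency metric here is just throughput divided by average power draw. A toy illustration with made-up numbers, showing how a slower drive that sips power can out-score a faster one that draws much more:

```python
# Sketch: power efficiency as used in these charts.

def efficiency(throughput_mbps: float, avg_power_w: float) -> float:
    """Power efficiency in MB/s per watt."""
    return throughput_mbps / avg_power_w

# Hypothetical drives (placeholder figures, not measurements):
print(efficiency(1500.0, 3.0))  # slower but frugal: 500.0 MB/s/W
print(efficiency(2500.0, 6.0))  # faster but power-hungry: ~417 MB/s/W
```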

Like many Phison-based NVMe SSDs, the Sabrent Rocket Q's sequential read performance doesn't really begin to scale up until queue depths go beyond 4, explaining its poor low-QD scores above. By QD16 it is basically saturating the PCIe 3 x4 interface. The Samsung 870 QVO saturates the SATA interface starting at QD2.
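The interface ceilings mentioned above can be estimated from line rates and encoding overhead; real drives fall a little short of these theoretical figures due to protocol overhead:

```python
# Sketch: theoretical host-interface bandwidth before protocol overhead.
# PCIe 3.0 runs 8 GT/s per lane with 128b/130b encoding; SATA 6 Gb/s
# uses 8b/10b encoding.

def pcie3_bw_gbps(lanes: int) -> float:
    """Usable PCIe 3.0 bandwidth in GB/s for a given lane count."""
    return 8e9 * (128 / 130) * lanes / 8 / 1e9

def sata3_bw_mbps() -> float:
    """Usable SATA 6 Gb/s bandwidth in MB/s."""
    return 6e9 * (8 / 10) / 8 / 1e6

print(round(pcie3_bw_gbps(4), 2))  # ~3.94 GB/s for an x4 link
print(round(sata3_bw_mbps()))      # 600 MB/s
```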

Sequential Reads - All Drives

While both 8TB drives saturate their respective host interfaces with sequential reads when the queue depths are sufficiently high, they also both draw more power than average among our entire collection of test results. However, neither is power-hungry enough to stand out as an outlier from that crowd.

Sequential Write Performance

Our test of sequential write burst performance is structured identically to the sequential read burst performance test save for the direction of the data transfer. Each burst writes 128MB as 128kB operations issued at QD1, for a total of 1GB of data written to a drive containing 16GB of data.

Burst 128kB Sequential Write (Queue Depth 1)

The burst sequential write test primarily illustrates SLC cache performance, and the Sabrent Rocket Q does quite well here, outperforming the rest of the NVMe drives in this bunch. The 8TB Samsung 870 QVO is the slowest drive, but is only slightly slower than the other SATA drives.

Our test of sustained sequential writes is structured identically to our sustained sequential read test, save for the direction of the data transfers. Queue depths range from 1 to 32 and each queue depth is tested for up to one minute or 32GB, followed by up to one minute of idle time for the drive to cool off and perform garbage collection. The test is confined to a 64GB span of the drive.

Sustained 128kB Sequential Write

On the longer sequential write test, the Rocket Q falls behind the high-end consumer NVMe drives but remains clearly faster than the other budget NVMe drives. Meanwhile, the 8TB 870 QVO stays in last place, but is not actually meaningfully slower than the other SATA drives.

Sustained 128kB Sequential Write (Power Efficiency in MB/s/W; Average Power in W)

The Sabrent Rocket Q has the worst power efficiency among the consumer NVMe drives during the sequential write test, but it still offers better performance per Watt than the SATA drives. The 8TB 870 QVO has a lower efficiency score than the other consumer SATA drives, but the enterprise drives are even worse.

Both of the 8TB QLC drives hit their full sequential write speed at QD2 and maintain it for the rest of the test without the SLC cache running out. However, the Rocket Q's performance is somewhat variable, probably indicating that it is affected by background work as the controller flushes the SLC cache.

Sequential Writes - All Drives

Plotted against the full set of results from all the SATA SSDs we've tested, the performance and power consumption of the 8TB 870 QVO on the sequential write test appears to be good but not pushing any limits. The Rocket Q's performance is higher than most entry-level NVMe drives, but its power consumption creeps up to unusually high levels (over 6W).

Comments

  • Oxford Guy - Monday, December 7, 2020 - link

    I have three OCZ 240 GB Vertex 2 drives. They're all bricked. Two of them were replacements for bricked drives. One of them bricked within 24 hours of being used. They bricked in four different machines.

    Pure garbage. OCZ pulled a bait and switch, where it substituted 64-bit NAND for the 32-bit the drives were reviewed/tested with and rated for on the box. The horrendously bad Sandforce controller choked on 64-bit NAND and OCZ never stabilized it with its plethora of firmware spew. The company also didn't include the 240 GB model in its later exchange program even though it was the most expensive in the lineup. Sandforce was more interested in protecting the secrets of its garbage design than protecting users from data loss so the drives would brick as soon as the tiniest problem was encountered and no tool was ever released to the public to retrieve the data. It was designed to make that impossible for anyone who wasn't in spycraft/forensics or working for a costly drive recovery service. I think there was even an announced partnership between OCZ and a drive recovery company for Sandforce drives which isn't at all suspicious.
  • Oxford Guy - Monday, December 7, 2020 - link

    The Sandforce controller also was apparently incompatible with the TRIM command but customers were never warned about that. So, TRIM didn't cause performance to rebound as it should.
  • UltraWide - Saturday, December 5, 2020 - link

    AMEN for silence. I have a 6 x 8TB NAS and even with 5,400rpm hdds it's quite loud.
  • TheinsanegamerN - Saturday, December 5, 2020 - link

    I really want to like the slim, and would love one that I could load up with 2TB SATA SSDs in RAID, but they've dragged their feet on a 10G version. 1G or even 2.5G is totally pointless for SSD NASes.
  • bsd228 - Friday, December 4, 2020 - link

    sequential transfer speed isn't all that matters.

    two mirrored SSDs on a 10G connection can get you better read performance than any SATA ssd locally. But it can be shared across all of the home network.
  • david87600 - Friday, December 4, 2020 - link

    My thoughts exactly. SSD rarely makes sense for NAS.
  • Hulk - Friday, December 4, 2020 - link

    What do we know about the long term data retention of these QLC storage devices?
  • Oxford Guy - Friday, December 4, 2020 - link

    16 voltage states to deal with for QLC. 8 voltage states for TLC. 4 for 2-bit MLC. 2 for SLC.

    More voltage states = bad. The only good thing about QLC is density. Everything else is worse.
  • Spunjji - Monday, December 7, 2020 - link

    It's not, entirely. More voltage states are more difficult to read, for sure, but they've also begun implementing more robust ECC systems with each new variant of NAND to counteract that.

    I'd trust one of these QLC drives more than I'd trust my old 120GB 840 drive in that regard.
  • Oxford Guy - Tuesday, December 8, 2020 - link

    Apples and oranges. More robust things to try to work around shortcomings are not the shortcomings not existing.
