Mixed Random Performance

Our test of mixed random reads and writes covers mixes varying from pure reads to pure writes at 10% increments. Each mix is tested for up to 1 minute or 32GB of data transferred. The test is conducted with a queue depth of 4, and is limited to a 64GB span of the drive. In between each mix, the drive is given idle time of up to one minute so that the overall duty cycle is 50%.
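
For readers who want to approximate this workload themselves, the loop below is a minimal sketch of the methodology using fio. The target device, exact fio options, and duty-cycle bookkeeping are assumptions on our part, not the actual test scripts:

```python
import subprocess
import time

DRIVE = "/dev/nvme0n1"  # hypothetical test target; note this workload is destructive

# Step from pure reads (rwmixread=100) down to pure writes (rwmixread=0) in 10% increments.
for read_pct in range(100, -1, -10):
    start = time.time()
    subprocess.run([
        "fio", "--name=mixed-random",
        f"--filename={DRIVE}",
        "--rw=randrw", f"--rwmixread={read_pct}",
        "--bs=4k",        # 4kB random accesses
        "--iodepth=4",    # queue depth 4
        "--direct=1",     # bypass the page cache
        "--size=64g",     # limit the test to a 64GB span of the drive
        "--io_size=32g",  # stop after 32GB transferred...
        "--runtime=60",   # ...or after one minute, whichever comes first
    ], check=True)
    # Idle for as long as the mix ran (capped at one minute) for a 50% duty cycle.
    time.sleep(min(time.time() - start, 60))
```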

Mixed 4kB Random Read/Write

The 8TB Sabrent Rocket Q's performance on the mixed random IO test is much better than any of the other low-end NVMe drives; the DRAMless TLC drives are the slowest in this bunch, and the Intel 660p with its four-channel controller cannot keep up with the Rocket Q's 8-channel Phison E12. The 8TB Samsung 870 QVO is slower than most of the other SATA drives in this bunch, but still has a clear advantage over the 1TB model.

Sustained 4kB Mixed Random Read/Write (Power Efficiency)

The high-end consumer NVMe drives and the Samsung 860 EVO TLC SATA drive top the power efficiency chart for the mixed random IO test. The Sabrent Rocket Q's efficiency is a significant step down from there, but still a bit better than any of the other low-end drives. The 8TB 870 QVO's efficiency score is worse than the 4TB model's, but clearly better than the 1TB model or either of the DRAMless TLC NVMe drives.

Both of the 8TB QLC drives show fairly typical performance curves for the mixed random IO test: little or no performance drop when writes are first added to the mix, and then increasing performance that accelerates toward the end of the test as write caching becomes more effective. The 8TB 870 QVO doesn't show the signs of a filled SLC cache that we see from the 1TB model, and neither 8TB QLC drive shows the nearly-flat performance exhibited by the two DRAMless TLC drives.

Mixed Sequential Performance

Our test of mixed sequential reads and writes differs from the mixed random I/O test by performing 128kB sequential accesses rather than 4kB accesses at random locations, and the sequential test is conducted at queue depth 1. The range of mixes tested is the same, and the timing and limits on data transfers are also the same as above.
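
In terms of the sketch on the previous section, only three fio parameters change for the sequential variant (again an assumption about how such a test could be scripted, not the actual methodology):

```python
# Swap these into the fio invocation from the earlier sketch:
sequential_overrides = [
    "--rw=rw",        # mixed sequential reads/writes instead of randrw
    "--bs=128k",      # 128kB transfers instead of 4kB
    "--iodepth=1",    # queue depth 1 instead of 4
]
```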

Mixed 128kB Sequential Read/Write

The Sabrent Rocket Q's performance on the mixed sequential IO test is competitive with the high-end consumer TLC drives, and far better than the other low-end NVMe options. The 8TB Samsung 870 QVO has distinctly lower performance than the smaller capacities, but isn't quite the worst overall performer.

Sustained 128kB Mixed Sequential Read/Write (Power Efficiency)

The good performance of the Rocket Q on the mixed sequential IO test comes at the cost of worse power efficiency than the DRAMless TLC competition, but its efficiency scores are still decent. The 8TB 870 QVO's efficiency scores are worse than any of the other consumer SSDs in this bunch.

As with several other synthetic tests in our suite, the mixed sequential IO test has the Sabrent Rocket Q showing rather variable performance, though fortunately without any severe drops. It performs a bit better during the more write-heavy half of the test.

The Samsung 870 QVO shows relatively flat and consistent performance throughout this test, but as is common for Samsung drives there's a bit of a decreasing performance trend during the read-heavy half of the test.

Comments

  • heffeque - Friday, December 4, 2020

    No worries on a NAS: BTRFS will take care of that in the background.
  • Billy Tallis - Friday, December 4, 2020

    Not sure if that's a joke about BTRFS RAID5/6 ensuring you lose your data.

    A BTRFS scrub isn't automatic; you need a cron job or similar to automate periodic scrubbing. But assuming you do that and stay away from the more dangerous/less tested RAID modes, you shouldn't have to worry about silent data loss. I've been using BTRFS RAID1 with various SSDs as my primary NAS ever since I amassed enough 1 and 2TB models, and it's worked well so far. ZFS would also work reasonably well, but it is less convenient when you're using a pile of mismatched drives.

    Getting back to the question of data retention of QLC itself: the write endurance rating of a drive is supposed to be chosen so that at the end of the rated write endurance the NAND is still healthy enough to provide 1 year unpowered data retention. (For client/consumer drives; for enterprise drives the standard is just 3 months, so they can afford to wear out the NAND a bit further, and that's part of why enterprise drives have higher TBW ratings.)
  • heffeque - Wednesday, December 9, 2020

    BTRFS background self-healing is automatic in Synology as of DSM 6.1 and above.
  • TheinsanegamerN - Saturday, December 5, 2020

    Long-term cold storage of any flash memory is terrible. QLC won't be any better than TLC in this regard.
  • Oxford Guy - Sunday, December 6, 2020

    How could it possibly be better (than 3D TLC)?

    It can only be worse unless the TLC is really shoddy quality. This is because it has 16 voltage states rather than 8.
  • TheinsanegamerN - Monday, December 7, 2020

    Hence why I said it won't be any better, because it can't be. That leaves the door open for it to be worse.

    Reeding iz hard.
  • Oxford Guy - Monday, December 7, 2020

    But your comment obviously wasn't clear enough, was it?

    QLC is worse than TLC. Next time write that since that's the clear truth, not that QLC and TLC are somehow equivalent.
  • joesiv - Friday, December 4, 2020

    I love the idea of 8TB SSD drives, it's the perfect size for a local data drive, I could finally be rid of my spinning rust! Just need the price to drop a bit, maybe next year!

    Thank you for the review. Though I wish SSD reviews were clearer with consumers about what endurance really means for the end user. "DWPD" and TB/day get mentioned, but no one seems to highlight the fact that it's not the end user's writes that matter in these specifications, it's writes to NAND, which can be totally different from user/OS writes. It depends on the firmware, and some firmwares do wonky things for data collection or speed, or even have bugs, which drastically reduce the endurance of a drive.

    Of course I would love an exhaustive endurance test in the review, but at the bare minimum, if AnandTech could check the SMART data after the benchmark suite is done and verify two things, it would give you some useful information.

    Check:
    - NAND writes (average block erase count is usually available)
    - OS writes (sometimes not easily available, but since you run a standardized bench suite, you probably have an idea of how many GBs you typically run through your drives anyway)

    You might need to do a bit of math on the block erase count to get it back to GBs, and you might need to contact the manufacturer for documentation of their SMART attributes; if they don't have good SMART attributes, or no documentation available, perhaps that's something to highlight in the review.
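
    The math itself is simple once the attributes are decoded. A minimal sketch in Python, using entirely hypothetical values (real attribute IDs and units are vendor-specific and must be checked against the manufacturer's SMART documentation):

    ```python
    # Hypothetical numbers pulled from `smartctl -A` output; attribute numbering
    # and raw-value units vary by vendor.
    avg_block_erases = 12          # average program/erase cycles per NAND block
    physical_capacity_gb = 8192    # raw NAND capacity, including overprovisioning
    host_writes_gb = 50_000        # host/OS writes reported by the drive

    # One full average erase cycle means the whole NAND array was rewritten once.
    nand_writes_gb = avg_block_erases * physical_capacity_gb

    write_amplification = nand_writes_gb / host_writes_gb
    print(f"NAND writes: {nand_writes_gb} GB, WAF: {write_amplification:.2f}x")
    ```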

    But then you could weed out, and flag for consumers, drives whose firmware has outrageously inefficient NAND write patterns.

    My company has had several failures, and because of that we've had to test potential drives for our products in this way, and we've had to outright skip drives whose specs were great but whose firmware was doing very inefficient drive writes, limiting their endurance.

    anyways, feedback, and fingers crossed!

    Keep up the good work, and thanks for the quality content!
  • heffeque - Friday, December 4, 2020

    Well... 2 TB per day every day seems like a lot of writes. Not sure it'll be a problem for normal use.
  • joesiv - Friday, December 4, 2020

    Well, firmware bugs can cause writes to be magnified 10x or 100x beyond what's expected. I've seen it. At 100x, your 2TB per day would amount to just 20GB of actual user writes... Of course we hope that firmwares don't have such bugs, but how would we know unless someone looked at the numbers?
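
    To put numbers on that (a toy calculation, using the 2TB/day figure above and a hypothetical 100x amplification bug):

    ```python
    user_budget_tb_per_day = 2.0   # the drive's nominal daily write budget
    firmware_waf = 100             # hypothetical pathological write amplification

    # Endurance is consumed by NAND writes, so the usable budget shrinks by the WAF.
    effective_tb_per_day = user_budget_tb_per_day / firmware_waf
    print(f"{effective_tb_per_day * 1000:.0f} GB/day of real user writes")  # -> 20 GB/day
    ```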
