Mixed Random Performance

Our test of mixed random reads and writes covers mixes varying from pure reads to pure writes at 10% increments. Each mix is tested for up to 1 minute or 32GB of data transferred. The test is conducted with a queue depth of 4, and is limited to a 64GB span of the drive. In between each mix, the drive is given idle time of up to one minute so that the overall duty cycle is 50%.
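
For readers who want to approximate this workload at home, the sketch below uses the open-source fio tool. It is a rough reconstruction from the description above, not our actual test script: the device path is a placeholder, and running it will destroy data on the target drive.

```python
import subprocess
import time

# Sketch of the mixed random I/O sweep described above, using fio.
# WARNING: writes directly to the device; use a disposable test drive.
DEVICE = "/dev/nvme0n1"  # placeholder test target

for read_pct in range(100, -1, -10):   # pure reads -> pure writes, 10% steps
    start = time.monotonic()
    subprocess.run([
        "fio",
        "--name=mixed-random",
        f"--filename={DEVICE}",
        "--direct=1",                  # bypass the OS page cache
        "--ioengine=libaio",
        "--rw=randrw",                 # mixed random reads and writes
        f"--rwmixread={read_pct}",     # percentage of the mix that is reads
        "--bs=4k",
        "--iodepth=4",                 # queue depth 4
        "--size=64g",                  # limit I/O to a 64GB span of the drive
        "--io_size=32g",               # stop after 32GB transferred...
        "--runtime=60",                # ...or one minute, whichever comes first
    ], check=True)
    busy = time.monotonic() - start
    time.sleep(min(busy, 60))          # idle up to a minute for a ~50% duty cycle
```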

Mixed 4kB Random Read/Write

The 8TB Sabrent Rocket Q's performance on the mixed random IO test is much better than any of the other low-end NVMe drives: the DRAMless TLC drives are the slowest in this bunch, and the Intel 660p with its four-channel controller cannot keep up with the Rocket Q's eight-channel Phison E12. The 8TB Samsung 870 QVO is slower than most of the other SATA drives in this bunch, but still has a clear advantage over the 1TB model.

Sustained 4kB Mixed Random Read/Write (Power Efficiency)
[Charts: Power Efficiency in MB/s/W | Average Power in W]

The high-end consumer NVMe drives and the Samsung 860 EVO TLC SATA drive top the power efficiency chart for the mixed random IO test. The Sabrent Rocket Q's efficiency is a significant step down from there, but still a bit better than any of the other low-end drives. The 8TB 870 QVO's efficiency score is worse than the 4TB model's, but clearly better than the 1TB model or either of the DRAMless TLC NVMe drives.

Both of the 8TB QLC drives show fairly typical performance curves for the mixed random IO test: little or no performance drop when writes are first added to the mix, and then increasing performance that accelerates toward the end of the test as write caching becomes more effective. The 8TB 870 QVO doesn't show the signs of a filled SLC cache that we see from the 1TB model, and neither 8TB QLC drive shows the nearly-flat performance exhibited by the two DRAMless TLC drives.

Mixed Sequential Performance

Our test of mixed sequential reads and writes differs from the mixed random I/O test by performing 128kB sequential accesses rather than 4kB accesses at random locations, and the sequential test is conducted at queue depth 1. The range of mixes tested is the same, and the timing and limits on data transfers are also the same as above.
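
Continuing the fio sketch from the random test above, only the access-pattern arguments change for the sequential version (again an approximation, with the same placeholder device, and the idle pacing omitted for brevity):

```python
import subprocess

DEVICE = "/dev/nvme0n1"  # same placeholder test device as before

for read_pct in range(100, -1, -10):
    subprocess.run([
        "fio",
        "--name=mixed-sequential",
        f"--filename={DEVICE}",
        "--direct=1",
        "--ioengine=libaio",
        "--rw=rw",                     # sequential mixed reads/writes, not randrw
        f"--rwmixread={read_pct}",
        "--bs=128k",                   # 128kB accesses instead of 4kB
        "--iodepth=1",                 # queue depth 1 instead of 4
        "--size=64g",
        "--io_size=32g",
        "--runtime=60",
    ], check=True)
```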

Mixed 128kB Sequential Read/Write

The Sabrent Rocket Q's performance on the mixed sequential IO test is competitive with the high-end consumer TLC drives, and far better than the other low-end NVMe options. The 8TB Samsung 870 QVO has distinctly lower performance than the smaller capacities, but isn't quite the worst overall performer.

Sustained 128kB Mixed Sequential Read/Write (Power Efficiency)
[Charts: Power Efficiency in MB/s/W | Average Power in W]

The good performance of the Rocket Q on the mixed sequential IO test comes at the cost of worse power efficiency than the DRAMless TLC competition, but its efficiency scores are still decent. The 8TB 870 QVO's efficiency scores are worse than any of the other consumer SSDs in this bunch.

As with several other synthetic tests in our suite, the Sabrent Rocket Q shows rather variable performance on the mixed sequential IO test, though fortunately without any severe drops. It performs a bit better during the more write-heavy half of the test.

The Samsung 870 QVO shows relatively flat and consistent performance throughout this test, but, as is common for Samsung drives, there is a slight downward performance trend during the read-heavy half of the test.

Comments

  • Great_Scott - Sunday, December 6, 2020 - link

    QLC remains terrible and the price delta between the worst and good drives remains $5.

    The most interesting part of this review is how insanely good the performance of the DRAMless Mushkin drive is.
  • ksec - Friday, December 4, 2020 - link

    I really wish a segment of the market would move towards high capacity and low speed like the QVO. This would be useful for NAS, where the speed is limited to 1Gbps or 2.5Gbps Ethernet.

    The cheapest 2TB SSD I saw was a one-off deal from SanDisk at $159. I wonder when we'll see that become the norm, if not go even lower.
  • Oxford Guy - Friday, December 4, 2020 - link

    I wish QLC wouldn't be pushed on us because it ruins the economy of scale for 3D TLC. 3D TLC drives could have been offered in better capacities but QLC is attractive to manufacturers for margin. Too bad for us that it has so many drawbacks.
  • SirMaster - Friday, December 4, 2020 - link

    People said the same thing when they moved from SLC to MLC, and again from MLC to TLC.
  • emn13 - Saturday, December 5, 2020 - link

    There is an issue of diminishing returns, however.

    SLC -> MLC allowed for 2x capacity (minus some overhead). I don't remember anybody gnashing their teeth too much at that.
    MLC -> TLC allowed for 1.5x capacity (minus some overhead). That's not a bad deal, but it's not as impressive anymore.
    TLC -> QLC allows for 1.33x capacity (minus some overhead). That's starting to get pretty slim pickings.

    Would you rather have a 4TB QLC drive, or a 3TB TLC drive? That's the trade-off, and I wish sites would benchmark drives at higher fill rates, so it'd be easier to see more real-world performance. (The bits-per-cell math behind these ratios is sketched at the end of this thread.)
  • at_clucks - Friday, December 11, 2020 - link

    @SirMaster, "People said the same thing when they moved from SLC to MLC, and again from MLC to TLC."

    You know you're allowed to change your mind and say no, right? Especially since some transitions can be acceptable, and others less so.

    The biggest thing you're missing is that the theoretical difference between TLC and QLC is bigger than the difference between SLC and TLC. Where SLC has to discriminate between 2 levels of charge, TLC has to discriminate between 8, and QLC between 16.

    Doesn't this sound like a "you were ok with me kissing you so you definitely want the D"? When TheinsanegamerN insists ATers are "techies" and they "understand technology" I'll have this comment to refer him to.
  • magreen - Friday, December 4, 2020 - link

    Why is that useful for NAS? A hard drive will saturate that network interface.
  • RealBeast - Friday, December 4, 2020 - link

    Yup, my eight-drive RAID 6 runs at about 750MB/sec for large sequential transfers over SFP+ to my backup array. No need for SSDs, and I certainly couldn't afford them -- the 14TB enterprise SAS drives I got were only $250 each in the early summer.
  • nagi603 - Friday, December 4, 2020 - link

    Not if it's a 10G link
  • leexgx - Saturday, December 5, 2020 - link

    If you have enough drives in RAID 6 you can come close to saturating a 10Gb link (see the post above: 750MB/s with 8 HDDs in RAID 6).
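
The capacity-scaling and charge-level arithmetic from emn13's and at_clucks's comments above, as a quick back-of-the-envelope Python sketch (ignoring ECC and other per-cell overhead):

```python
# Capacity gain and charge-level count for each NAND cell-density step.
cell_types = [("SLC", 1), ("MLC", 2), ("TLC", 3), ("QLC", 4)]

for (prev, prev_bits), (cur, bits) in zip(cell_types, cell_types[1:]):
    gain = bits / prev_bits      # capacity multiplier, overhead ignored
    levels = 2 ** bits           # distinct charge states a read must resolve
    print(f"{prev} -> {cur}: {gain:.2f}x capacity, {levels} charge levels")

# Output:
#   SLC -> MLC: 2.00x capacity, 4 charge levels
#   MLC -> TLC: 1.50x capacity, 8 charge levels
#   TLC -> QLC: 1.33x capacity, 16 charge levels
```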
