12 Comments

  • The Chill Blueberry - Wednesday, May 22, 2019 - link

    Will PCIe 4.0 accelerate such drives, or is the interface not a bottleneck right now anyway?
  • shabby - Wednesday, May 22, 2019 - link

    Not these ones; these are bottom-of-the-barrel SSDs. I'm sure some of the Samsung ones might go past 3.2GB/s.
  • Billy Tallis - Wednesday, May 22, 2019 - link

    PCIe 4.0 x4 will only really help 8-channel controllers (SM2263XT is four channels), and only if there's plenty of NAND to keep all of those channels busy. There's some variation in how fast the different brands of NAND are, but generally: 1TB drives will show some improvement from PCIe 4.0, but you'll need 2TB to max out what the controller is capable of. And first-gen PCIe 4.0 SSD controllers probably won't be able to come close to saturating that PCIe link.
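    A rough sketch of the link bandwidth in play here (nominal figures derived from the PCIe signaling rates and line coding, not measurements of any particular drive):

    ```python
    # Approximate usable PCIe link bandwidth (nominal, ignores protocol overhead).
    # PCIe 3.0 runs at 8 GT/s per lane; PCIe 4.0 doubles that to 16 GT/s.
    # Both use 128b/130b line coding.
    def pcie_bandwidth_gbps(gen: int, lanes: int) -> float:
        """Approximate usable bandwidth in GB/s for a PCIe link."""
        gt_per_s = {3: 8.0, 4: 16.0}[gen]   # transfer rate per lane
        encoding = 128 / 130                # 128b/130b line coding efficiency
        return gt_per_s * encoding * lanes / 8  # bits -> bytes

    print(round(pcie_bandwidth_gbps(3, 4), 2))  # ~3.94 GB/s: PCIe 3.0 x4
    print(round(pcie_bandwidth_gbps(4, 4), 2))  # ~7.88 GB/s: PCIe 4.0 x4
    print(round(pcie_bandwidth_gbps(3, 2), 2))  # ~1.97 GB/s: PCIe 3.0 x2
    ```

    So a PCIe 4.0 x4 link offers roughly double the headroom of 3.0 x4, which only matters once the controller and NAND array can actually sustain more than ~3.9 GB/s.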
  • GreenReaper - Wednesday, May 22, 2019 - link

    Don't forget that latency may improve as well as capacity - the signalling rate increases, after all. Of course the controller has to support it to start with.
  • Billy Tallis - Wednesday, May 22, 2019 - link

    Any latency improvement would be on a timescale too small to matter for NVMe devices. I don't think the PCIe link itself is a meaningful contributor to latency for anything more complex than mapped DRAM (eg. NVMe CMB).
  • lightningz71 - Thursday, May 23, 2019 - link

    I do expect that the lower-end NVMe drives, once converted to PCIe 4.0 x4 controller chips, will show a significant improvement in certain areas of performance. One of the issues for the low-end drives that have no local DRAM cache is that they typically use host RAM for indexing and light caching. Compounding this, many of those controllers were only PCIe 3.0 x2. That's a significant bottleneck in performance there. Going to x4 PCIe 4.0 should make that setup perform much more like a local DRAM cache for the drive. The question will be: what makes for a less expensive drive to manufacture? A drive with an updated controller and no DRAM cache, or a drive with an older controller that DOES have a local DRAM cache?
  • Billy Tallis - Thursday, May 23, 2019 - link

    NVMe Host Memory Buffer is used exclusively for caching mapping tables, not user data. Most drives use 20-40MB of HMB, and only make tiny transfers between that cache and the SSD as part of looking up an address or making an update to the cache. This is not bandwidth limited in any way, and the only time it's on the critical path for end-user performance is when the extremely quick HMB lookup precedes a NAND read that will take a few orders of magnitude longer than the HMB lookup. PCIe 4.0 will not help here, because PCIe 3.0 x2 isn't the bottleneck.
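    To put the "orders of magnitude" point on a scale, here is an illustrative comparison using assumed ballpark latencies (the specific microsecond figures are rough illustrations, not measurements):

    ```python
    # Illustrative latency budget for a DRAM-less drive's HMB-assisted read.
    # Both figures below are assumed ballpark values for illustration only.
    hmb_lookup_us = 1.0   # small DMA fetch of a mapping entry over PCIe
    nand_read_us = 80.0   # a typical TLC NAND page read

    total_us = hmb_lookup_us + nand_read_us
    share = hmb_lookup_us / total_us
    print(f"HMB lookup share of total read latency: {share:.1%}")
    ```

    With numbers anywhere in that neighborhood, the HMB fetch is a rounding error next to the NAND read, which is why doubling the PCIe link speed does little for this path.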
  • Valantar - Thursday, May 23, 2019 - link

    Does this support HMB? I would hope so, but IMO it's important to point out if it doesn't.
  • Billy Tallis - Thursday, May 23, 2019 - link

    The WD Blue SN500 and its OEM counterpart are the only DRAMless NVMe SSDs I'm aware of that do not use HMB.
  • dromoxen - Thursday, May 23, 2019 - link

    Are these launch prices (higher), and what's a good indication of where they could sell at in 2-3 months (1TB or £p/GB)?
    They seem very reasonable.
  • DeepLake - Thursday, May 23, 2019 - link

    DRAM-less TLC vs QLC with DRAM like Crucial P1 is gonna be an interesting battle to watch.
  • Dug - Thursday, May 23, 2019 - link

    Maybe sequential read and write isn't the best determining factor for buying a drive, but aren't there quite a few 1TB drives that do 3000+ MB/s and cost just a little bit more?
