During the hard drive era, the Serial ATA International Organization (SATA-IO) had no problems keeping up with the bandwidth requirements. The performance increases that new hard drives provided were always quite moderate because ultimately the speed of the hard drive was limited by its platter density and spindle speed. Given that increasing the spindle speed wasn't really a viable option for mainstream drives due to power and noise issues, increasing the platter density was left as the only source of performance improvement. Increasing density is always a tough job and it's rare that we see any sudden breakthroughs, which is why density increases have only given us small speed bumps every once in a while. Even most of today's hard drives can't fully saturate the SATA 1.5Gbps link, so it's obvious that the SATA-IO didn't have much to worry about. However, that all changed when SSDs stepped into the game.

SSDs no longer relied on rotational media for storage; they used NAND, a form of non-volatile solid-state memory, instead. With NAND, performance was no longer dictated by the laws of rotational physics: all-solid-state storage brought dramatically lower latencies and opened the door to much higher throughputs, which put pressure on the SATA-IO to increase the interface bandwidth. To illustrate how fast NAND really is, let's do a little calculation.

It takes 115 microseconds to read 16KB (one page) from IMFT's 20nm 128Gbit NAND. That works out to roughly 140MB/s of throughput per die. A 256GB SSD has sixteen of these 16GB dies, which works out to over 2.2GB/s. That's about four times the maximum bandwidth of SATA 6Gbps. This is all theoretical of course: it's one thing to dump data into a register, but transferring it over an interface requires more work. However, NAND interfaces have also caught up in the last couple of years and we are now looking at up to 400MB/s per channel (with both ONFI 3.x and Toggle-Mode 2.0). With most client platforms being 8-channel designs, the potential NAND-to-controller bandwidth is up to 3.2GB/s, meaning the NAND interface is no longer the bottleneck.
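
As a quick sanity check, here's that arithmetic as a small Python sketch; all of the figures are the ones quoted above, and the die count simply follows from 256GB divided by the 16GB per 128Gbit die.

    # Back-of-envelope NAND throughput, using the figures quoted above.
    page_bytes = 16 * 1024      # one 16KB page of IMFT's 20nm 128Gbit NAND
    read_time_s = 115e-6        # 115 microseconds to read one page

    per_die_mb_s = page_bytes / read_time_s / 1e6
    print(f"Per die: ~{per_die_mb_s:.0f} MB/s")                    # ~142 MB/s, i.e. "roughly 140MB/s"

    dies = 256 // 16            # a 256GB SSD built from 16GB (128Gbit) dies
    print(f"{dies} dies: ~{per_die_mb_s * dies / 1000:.1f} GB/s")  # ~2.3 GB/s, about 4x SATA 6Gbps

    # NAND interface side: 8 channels at up to 400MB/s each (ONFI 3.x / Toggle-Mode 2.0)
    print(f"8-channel interface: {8 * 400 / 1000:.1f} GB/s")       # 3.2 GB/s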

Given the speed of NAND, it's no surprise that the SATA interface quickly became a bottleneck. When Intel finally integrated SATA 6Gbps into its chipsets in early 2011, SandForce immediately came out with its SF-2000 series controllers and said, "Hey, we are already maxing out SATA 6Gbps; give us something faster!" The SATA-IO went back to the drawing board and realized that upping the SATA interface to 12Gbps would require several years of development, and the cost of such rapid development would end up being very high. Another major issue was power: pushing SATA to 12Gbps would have meant a noticeable increase in power consumption, which is never good.

Therefore the SATA-IO had to look elsewhere to provide a fast yet cost-efficient standard in a timely manner. Given these restrictions, it made the most sense to build on an already existing interface, specifically PCI Express, to speed up time to market as well as cut costs.

                     Serial ATA 2.0   Serial ATA 3.0   PCI Express 2.0            PCI Express 3.0
Link Speed           3Gbps            6Gbps            8Gbps (x2) / 16Gbps (x4)   16Gbps (x2) / 32Gbps (x4)
Effective Data Rate  ~275MBps         ~560MBps         ~780MBps / ~1560MBps       ~1560MBps / ~3120MBps (?)

PCI Express makes a ton of sense. It's already integrated into all major platforms and, thanks to its scalability, it offers room for future bandwidth increases when needed. In fact, PCIe is already widely used in the high-end enterprise SSD market because the SATA/SAS interface was never enough to satisfy enterprise performance needs in the first place.

Even a PCIe 2.0 x2 link offers about a 40% increase in maximum throughput over SATA 6Gbps. Like most interfaces, PCIe 2.0 isn't 100% efficient: based on our internal tests the bandwidth efficiency is around 78-79%, so in the real world you should expect to get ~780MB/s out of a PCIe 2.0 x2 link. Remember, though, that SATA 6Gbps isn't 100% efficient either (around 515MB/s is the typical maximum we see). The currently available PCIe SSD controller designs are all 2.0 based, but we should start to see some PCIe 3.0 drives next year. We don't have efficiency numbers for 3.0 yet, but I would expect nearly twice the bandwidth of 2.0, making speeds of 1GB/s and above the norm.
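
To put rough numbers on that, here's a small sketch of the link-rate math. SATA and PCIe 2.0 both use 8b/10b line encoding, with protocol overhead on top of that; the ~78% PCIe packet efficiency is the figure from our tests, while the ~93% SATA protocol efficiency is simply an assumed factor that reproduces the ~560MBps effective rate in the table above.

    # Rough model: raw line rate -> usable bandwidth after encoding and protocol overhead.
    def effective_mb_s(line_rate_gbps, encoding_efficiency, protocol_efficiency):
        return line_rate_gbps * 1000 / 8 * encoding_efficiency * protocol_efficiency

    # SATA 6Gbps: 8b/10b encoding; the 0.93 protocol efficiency is an assumption
    # chosen to match the ~560MBps effective rate quoted above.
    sata6 = effective_mb_s(6, 8 / 10, 0.93)
    print(f"SATA 6Gbps:  ~{sata6:.0f} MB/s")     # ~560 MB/s

    # PCIe 2.0 x2: two 5GT/s lanes, 8b/10b encoding, ~78% efficiency per our tests.
    pcie2_x2 = effective_mb_s(2 * 5, 8 / 10, 0.78)
    print(f"PCIe 2.0 x2: ~{pcie2_x2:.0f} MB/s")  # ~780 MB/s

    print(f"Increase over SATA 6Gbps: ~{(pcie2_x2 / sata6 - 1) * 100:.0f}%")  # ~40%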

But what exactly is SATA Express? Hop on to the next page to read more!

What Is SATA Express?

131 Comments


  • Guspaz - Thursday, March 13, 2014 - link

    The only justification for why anybody might need something faster than SATA6 seems to be "Uncompressed 4K video is big"...

    Except nobody uses uncompressed 4K video. Nobody uses it precisely BECAUSE it's so big. 4K cameras all record to compressed formats: REDCODE, ProRes, XAVC, etc. It's true that these still produce a lot of data (they're all intra-frame codecs, which means they compress each frame independently, taking no advantage of similarities between frames), but they're still way smaller than uncompressed video.
  • JarredWalton - Thursday, March 13, 2014 - link

    But when you edit videos, you end up working with uncompressed data before recompressing, in order to avoid losing quality.
  • willis936 - Thursday, March 13, 2014 - link

    The case you described (4K, 12bpc, 24fps) would also take an absolutely monumental amount of RAM. I can't think of using a machine with less than 32GB for that and even then I feel like you'd run out regularly.
  • Guspaz - Thursday, March 13, 2014 - link

    Are you rendering from Premiere to uncompressed video as an intermediate format before recompressing in some other tool? If you're working end-to-end with Premiere (or Final Cut) you wouldn't have uncompressed video anywhere in that pipeline. But even if you're rendering to uncompressed 4K video for re-encoding elsewhere, you'd never be doing that to your local SSD, you'd be doing it to big spinning HDDs or file servers. One hour of uncompressed 4K 60FPS video would be ~5TB. Besides, disk transfer rates aren't going to be the bottleneck on rendering and re-encoding uncompressed 4K video.
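
For reference, the arithmetic behind that ~5TB figure works out roughly as below; the 3840x2160 resolution and 8 bits per channel are assumptions, and the total shifts with bit depth and exact resolution.

    # Uncompressed video data rate: pixels per frame x bytes per pixel x frame rate.
    def uncompressed_mb_s(width, height, bits_per_pixel, fps):
        return width * height * (bits_per_pixel / 8) * fps / 1e6

    rate = uncompressed_mb_s(3840, 2160, 24, 60)    # UHD, 8 bits per channel, 60fps
    print(f"~{rate:.0f} MB/s")                      # ~1493 MB/s
    print(f"~{rate * 3600 / 1e6:.1f} TB per hour")  # ~5.4 TB, i.e. roughly 5TB an hour
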
  • Kevin G - Thursday, March 13, 2014 - link

    That highly depends on the media you're working with. 4K consumes far too much storage to be usable in an uncompressed manner. Up to 1.6 GByte/s is needed for uncompressed recording, and a 1 TB drive would fill up in less than 11 minutes.

    As mentioned by others, lossless compression is an option without any reduction in picture quality, though at the expense of the high-performance hardware needed for recording and rendering.
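
Taking that 1.6 GByte/s figure at face value, the drive-fill arithmetic checks out:

    # How long a 1TB drive lasts at a sustained ~1.6 GByte/s uncompressed rate.
    minutes = 1000 / 1.6 / 60          # 1TB = 1000GB
    print(f"~{minutes:.1f} minutes")   # ~10.4 minutes, i.e. less than 11 minutes
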
  • JlHADJOE - Thursday, March 13, 2014 - link

    You pretty much have to do it during recording.

    Encoding 4k RAW needs a ton of CPU that you might not have inside your camera, not to mention you probably don't want any lossy compression at that point because there's still a lot of processing work to be done.
  • JlHADJOE - Friday, March 14, 2014 - link

    Here's the Red Epic Dragon, a 6k 100fps camera. It uses a proprietary SSD array (likely RAID 0) for storage:

    http://www.red.com/products/epic-dragon#features
  • popej - Thursday, March 13, 2014 - link

    "idling (with minimal <0.05W power consumption)"
    Where did you get this value from? I'm looking at your SSD reviews and clearly see that idle power consumption is between 0.3 and 1.3W, far from the quoted 0.05W. What is wrong, your assumption here or the measurements in the reviews? Or maybe you are measuring some other value?
  • Kristian Vättö - Thursday, March 13, 2014 - link

    <0.05W is normal idle power consumption in a mobile platform with HIPM+DIPM enabled: http://www.anandtech.com/bench/SSD/732

    We can't measure that in every review because only Anand has the equipment for it (it requires a modified laptop).
  • dstarr3 - Thursday, March 13, 2014 - link

    How does the bandwidth of a single SATAe SSD compare to two SSDs on SATA 6Gbps in RAID 0? Risk of failure aside.
