During the hard drive era, the Serial ATA International Organization (SATA-IO) had no trouble keeping up with bandwidth requirements. The performance increases that new hard drives provided were always quite moderate because ultimately the speed of a hard drive is limited by its platter density and spindle speed. Given that increasing the spindle speed wasn't really a viable option for mainstream drives due to power and noise issues, increasing the platter density was left as the only source of performance improvement. Increasing density is always a tough job and it's rare that we see any sudden breakthroughs, which is why density increases have only given us small speed bumps every once in a while. Most of today's hard drives still can't fully saturate even the original SATA 1.5Gbps link, so the SATA-IO clearly didn't have much to worry about. However, that all changed when SSDs stepped into the game.

SSDs no longer relied on rotational media for storage but used NAND, a form of non-volatile memory, instead. With NAND, performance was no longer dictated by the laws of rotational physics: all-solid-state storage brought dramatically lower latencies and opened the door to much higher throughput, putting pressure on the SATA-IO to increase the interface bandwidth. To illustrate how fast NAND really is, let's do a little calculation.

It takes 115 microseconds to read 16KB (one page) from IMFT's 20nm 128Gbit NAND. That works out to be roughly 140MB/s of throughput per die. In a 256GB SSD you would have sixteen of these, which works out to over 2.2GB/s. That's about four times the maximum bandwidth of SATA 6Gbps. This is all theoretical of course—it's one thing to dump data into a register but transferring it over an interface requires more work. However, the NAND interfaces have also caught up in the last couple of years and we are now looking at up to 400MB/s per channel (both ONFI 3.x and Toggle-Mode 2.0). With most client platforms being 8-channel designs, the potential NAND-to-controller bandwidth is up to 3.2GB/s, meaning it's no longer a bottleneck.
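That arithmetic is easy to check. Here's the same back-of-the-envelope math as a small Python sketch, using only the figures quoted above (page size, read time, die count, and per-channel interface speed):

```python
# Back-of-the-envelope NAND throughput math from the figures above.

PAGE_SIZE_BYTES = 16 * 1024   # one 16KB page
READ_TIME_S = 115e-6          # 115 microseconds per page read

# Throughput of a single 20nm 128Gbit (16GB) die
die_mb_s = PAGE_SIZE_BYTES / READ_TIME_S / 1e6
print(f"Per die:   ~{die_mb_s:.0f} MB/s")                   # ~142 MB/s

# A 256GB SSD built from 16GB dies needs sixteen of them
dies = 256 // 16
print(f"{dies} dies:   ~{dies * die_mb_s / 1e3:.2f} GB/s")  # ~2.28 GB/s

# NAND interface side: 8 channels at 400MB/s (ONFI 3.x / Toggle-Mode 2.0)
channels, per_channel_mb_s = 8, 400
print(f"Interface: {channels * per_channel_mb_s / 1e3:.1f} GB/s")  # 3.2 GB/s
```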

Given the speed of NAND, it's no surprise that the SATA interface quickly became a bottleneck. When Intel finally integrated SATA 6Gbps into its chipsets in early 2011, SandForce immediately came out with its SF-2000 series controllers and said, "Hey, we are already maxing out SATA 6Gbps; give us something faster!" The SATA-IO went back to the drawing board and realized that upping the SATA interface to 12Gbps would require several years of development, and the cost of such rapid development would end up being very high. Another major issue was power: running SATA at 12Gbps would have meant a noticeable increase in power consumption, which is never good.

The SATA-IO therefore had to look elsewhere to provide a fast yet cost-efficient standard in a timely manner. Given these constraints, the best option was to build on an already existing interface, specifically PCI Express, to speed up time to market as well as cut costs.

                       Serial ATA             PCI Express
                       2.0        3.0         2.0                3.0
Link Speed             3Gbps      6Gbps       8Gbps (x2)         16Gbps (x2)
                                              16Gbps (x4)        32Gbps (x4)
Effective Data Rate    ~275MBps   ~560MBps    ~780MBps (x2)      ~1560MBps (x2)
                                              ~1560MBps (x4)     ~3120MBps (x4) (?)
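The link speeds in this table follow from each interface's signaling rate and line encoding: SATA and PCIe 2.0 use 8b/10b encoding (a 20% overhead), while PCIe 3.0 moved to the leaner 128b/130b scheme. Note that the table quotes SATA's raw line rate but PCIe's post-encoding payload rate. A minimal Python sketch of that conversion:

```python
# Usable (payload) bandwidth from signaling rate and line-encoding overhead.

def payload_gbps(gt_per_s: float, encoding: float, lanes: int = 1) -> float:
    """Payload bits per second after line-encoding overhead."""
    return gt_per_s * encoding * lanes

print(payload_gbps(3.0, 8 / 10))              # SATA 2.0:      2.4 Gbps
print(payload_gbps(6.0, 8 / 10))              # SATA 3.0:      4.8 Gbps
print(payload_gbps(5.0, 8 / 10, lanes=2))     # PCIe 2.0 x2:   8.0 Gbps
print(payload_gbps(5.0, 8 / 10, lanes=4))     # PCIe 2.0 x4:  16.0 Gbps
print(payload_gbps(8.0, 128 / 130, lanes=2))  # PCIe 3.0 x2: ~15.8 Gbps
print(payload_gbps(8.0, 128 / 130, lanes=4))  # PCIe 3.0 x4: ~31.5 Gbps
```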

PCI Express makes a ton of sense. It's already integrated into all major platforms, and thanks to its scalability it offers room for future bandwidth increases when needed. In fact, PCIe is already widely used in the high-end enterprise SSD market because the SATA/SAS interface was never enough to satisfy enterprise performance needs in the first place.

Even a PCIe 2.0 x2 link offers about a 40% increase in maximum throughput over SATA 6Gbps. Like most interfaces, PCIe 2.0 isn't 100% efficient: based on our internal tests, the bandwidth efficiency is around 78-79%, so in the real world you should expect to get ~780MB/s out of a PCIe 2.0 x2 link. But remember that SATA 6Gbps isn't 100% efficient either (around 515MB/s is the typical maximum we see). The currently available PCIe SSD controller designs are all 2.0 based, but we should start to see some PCIe 3.0 drives next year. We don't have efficiency numbers for 3.0 yet, but I would expect nearly twice the bandwidth of 2.0, making 1GB/s+ the norm.
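To put those efficiency figures into concrete numbers, here's the comparison worked out in a short Python sketch. The 78% PCIe efficiency is our measured figure from above; the ~93% SATA efficiency is simply what the table's ~560MBps implies, and the PCIe 3.0 line is the projection, not a measurement:

```python
# Real-world throughput estimates from the efficiency figures quoted above.

sata3_link_mb_s = 600   # SATA 6Gbps payload limit (after 8b/10b encoding)
pcie2_x2_mb_s = 1000    # PCIe 2.0 x2 payload limit (2 x 500MB/s)

sata3_real = sata3_link_mb_s * 0.93  # ~560 MB/s effective
pcie2_real = pcie2_x2_mb_s * 0.78    # ~780 MB/s at 78% efficiency

print(f"SATA 6Gbps:  ~{sata3_real:.0f} MB/s")
print(f"PCIe 2.0 x2: ~{pcie2_real:.0f} MB/s")
print(f"Increase:    ~{(pcie2_real / sata3_real - 1) * 100:.0f}%")  # ~40%

# If PCIe 3.0 keeps a similar protocol efficiency, an x2 link doubles that:
print(f"PCIe 3.0 x2: ~{2 * pcie2_real:.0f} MB/s")  # ~1560 MB/s
```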

But what exactly is SATA Express? Hop on to the next page to read more!

Comments

  • phoenix_rizzen - Friday, March 14, 2014

    I was thinking more for the situation where you replace the current SATA ports on a mobo with PCIe x2 slots.

    So you go from cabling your drives to the SATA ports to cabling your drives to the PCIe ports. Without using up any of the slots on the back of the board/case.
  • SirKnobsworth - Saturday, March 15, 2014

    If you don't want to use actual PCIe slots then have M.2 sockets on the motherboard. There's no reason to have another cabling standard.
  • phoenix_rizzen - Monday, March 17, 2014

    That works too, and is something I mention in another comment above.

    This cable and connector don't make sense, any way you look at it.
  • Kracer - Thursday, March 13, 2014

    Are you able to run any sort of PCIe device over SATAe (GPUs, capture cards, etc.)?
    Two lanes aren't enough for GPU use, but it would open up many more possibilities.
    Are you able to use it as a boot device?
  • The Von Matrices - Thursday, March 13, 2014

    I understand the desire for faster SSDs, but I still fail to see the purpose of SATA Express over competing standards. There's nothing compelling about it over the competition.

    M.2 already provides the PCIe x2 interface and bandwidth (albeit without the ability to use cables).
    Motherboards that support PCIe 3.0 SATA Express without either a high-priced PCIe switch or compromised discrete graphics functionality are one to two years away.
    SF3700 is PCIe 2.0 x4, meaning that SATA Express can only use half its performance, and PCIe x4 cards will still be the enthusiast solution.
    NVMe can already be implemented on other standards.
    The cables are bulky, which is unusual considering that SAS at 12Gb/s (which is available) is using the same small connectors as 6Gb/s.
  • SirKnobsworth - Thursday, March 13, 2014

    M.2 provides a PCIe x4 interface in certain configurations. I think the SATAe specification has the provision for adding another two lanes at some point in the future but that's not going to happen for a long time.
  • Kevin G - Thursday, March 13, 2014

    SATAe and NVMe are fast and important for expandable IO. However, I believe they will be secondary over the long term. I fathom that the NAND controller will simply move on-die for mobile SoCs, for power savings, lower physical area, and performance reasons. Some of the NVMe software stack will be used here, but things like lane limitations will be entirely bypassed since it's all on-die. Bandwidth would scale with the number of NAND channels. Power savings would come from removing an external component (SATAe controller and/or external chipset) and from integration with the SoC's native power management controller. Desktop versions of these chips will put the NAND on a DIMM form factor for expansion.

    SATAe + NVMe will be huge in the server market though. Here RAS plays a bigger role. Features like redundancy and hotswap are important, even with SSDs being more reliable than their hard drive predecessors. I eventually see a backplane version of a connector like mSATA or M.2 replacing 2.5" hard drives/SSDs in servers. This would be great for 1U servers as they would no longer be limited to 10 drives. The depth required on a 1U server wouldn't be as much either. PCIe NVMe cards already fill that niche today: radically high storage bandwidth at minimal latencies.

    One other thing worth pointing out is that since Thunderbolt encapsulates PCIe, using external SATAe storage at full speed becomes a possibility. Working in NVMe mode is conceptually possible over Thunderbolt too.
  • xdrol - Thursday, March 13, 2014

    Parallel ATA is back; just look at the cable size...
  • JDG1980 - Thursday, March 13, 2014

    A ribbon cable *plus* a Molex? Oh, goody. This looks like a massive step backward.
  • sheh - Thursday, March 13, 2014

    Who doesn't love them flatcables?
