SandForce TRIM Issue & Corsair Force Series GS (240GB) Review
by Kristian Vättö on November 22, 2012 1:00 PM EST

Introduction to the TRIM Issue
TRIM in SandForce based SSDs has always been trickier than in other SSDs because SandForce handles data in a much more complicated way. Instead of simply writing the data to the NAND as other SSDs do, SandForce employs a real-time compression and de-duplication engine, and when even the basic design is more complex, there is a higher chance that something will go wrong. When other SSDs receive a TRIM command, they can simply mark the blocks holding the invalidated data as ready to be erased and that's it. SandForce, on the other hand, has to check whether the data is still referenced by something else (thanks to de-duplication). You don't want your SSD to erase data that may be crucial to your system's stability, do you?
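To make the difference concrete, here is a minimal Python sketch of the two approaches. It is my own illustration under simplified assumptions, not SandForce's actual firmware logic; the SimpleFTL/DedupFTL names and the erase() placeholder are invented for the example.

```python
# Illustrative only: why TRIM is harder once physical blocks can be shared.

def erase(block):
    # Placeholder: mark the block invalid so garbage collection can reclaim it.
    pass

class SimpleFTL:
    """Conventional SSD: every LBA maps to its own physical block."""
    def __init__(self):
        self.lba_to_block = {}

    def trim(self, lba):
        block = self.lba_to_block.pop(lba, None)
        if block is not None:
            erase(block)                 # safe: no other LBA uses this block

class DedupFTL:
    """De-duplicating SSD: several LBAs may point at the same physical block."""
    def __init__(self):
        self.lba_to_block = {}
        self.refcount = {}               # physical block -> number of LBAs using it

    def trim(self, lba):
        block = self.lba_to_block.pop(lba, None)
        if block is None:
            return
        self.refcount[block] -= 1        # extra bookkeeping a plain SSD never does
        if self.refcount[block] == 0:    # erase only when nothing references it
            del self.refcount[block]
            erase(block)
```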
As we have shown in dozens of reviews, TRIM doesn't work properly when dealing with incompressible data. It never has. That means that when the drive is filled and tortured with incompressible data, it's put in a state where even TRIM does not fully restore performance. Since even Intel's own proprietary firmware didn't fix this, I believe the problem is buried so deep in the design that there is simply no way to fix it completely. However, the TRIM issue we are dealing with today has nothing to do with incompressible data: now TRIM doesn't work properly with compressible data either.
Testing TRIM: It's Broken
SandForce doesn't behave normally when we put it through our torture test with compressible data. While other SSDs experience a slowdown in write speed, SandForce's write speed remains the same but its read speed degrades instead. Below is an HD Tach graph of the 240GB Corsair Force GS, which was first filled with compressible data and then peppered with compressible 4KB random writes (100% LBA space, QD=32):
And for comparison, here is the same HD Tach graph run on a secure erased Force GS:
As you can see, write speed wasn't affected at all by the torture. However, read performance degraded by more than 50%, from 402MB/s to 182MB/s. That is actually quite odd, because reading from NAND is a far simpler process: you simply keep applying voltages until you get the desired outcome. There is no read-modify-write cycle, which is the reason write speed degrades in the first place. We don't know exactly why read speed degrades in SandForce based SSDs, but once again it appears to be inherent to the design. My guess is that the degradation has something to do with how the data is decompressed, though most likely there is something more complicated at play here.
Read speed degradation is not the real problem, however. So far we haven't come across a consumer SSD that doesn't experience some degradation after enough torture. Given that consumer SSDs typically have only 7-15% of their NAND set aside as over-provisioning (a "240GB" drive built from 256GiB of raw NAND, for example, has roughly 15% spare area), sooner or later you will run into a situation where read-modify-write is triggered, which results in a substantial decline in write performance. With SandForce your write speed won't change (at least not by much), but read speed goes downhill instead. It's a trade-off, and neither behavior is clearly worse than the other since all workloads consist of both reads and writes.
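For readers who want to put a drive into a similarly dirty state, the fill-and-torture workload described above can be approximated on Linux with fio. This is only a hedged sketch: the article's results were gathered under Windows with HD Tach, the exact torture tool isn't named, and /dev/sdX is a placeholder for a test drive whose contents you are willing to destroy.

```python
#!/usr/bin/env python3
# Rough approximation of the torture run: sequential fill, then 20 minutes of
# 4KB random writes at QD=32 across the whole LBA space, using compressible data.
# Destructive and requires root; DEVICE is a placeholder.
import subprocess

DEVICE = "/dev/sdX"

# Step 1: fill the entire drive with highly compressible sequential data.
subprocess.run([
    "fio", "--name=fill", f"--filename={DEVICE}", "--direct=1",
    "--rw=write", "--bs=128k", "--ioengine=libaio", "--iodepth=32",
    "--buffer_compress_percentage=100",
], check=True)

# Step 2: 4KB random writes over 100% of the LBA space for 20 minutes.
subprocess.run([
    "fio", "--name=torture", f"--filename={DEVICE}", "--direct=1",
    "--rw=randwrite", "--bs=4k", "--ioengine=libaio", "--iodepth=32",
    "--time_based", "--runtime=1200", "--norandommap", "--randrepeat=0",
    "--buffer_compress_percentage=100",
], check=True)
```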
To test TRIM, I TRIM'ed the drive after our 20 minute torture:
And here is the real issue. Normally TRIM would restore performance to its clean state, but that is not the case here. Read speed is definitely up from the dirty state, but it's not fully restored. Running TRIM again didn't yield any improvement either, so something is clearly broken. It also didn't matter whether I filled and tortured the drive with compressible, pseudo-random, or incompressible data; the end result was always the same when I ran HD Tach.
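The article doesn't spell out how the TRIM command was issued; under Windows, deleting files or running a quick format sends TRIM for the affected LBAs. For anyone repeating the test on Linux, the sketch below shows two common ways to do the same thing; /dev/sdX and /mnt/testdrive are placeholders.

```python
#!/usr/bin/env python3
# Two common ways to send TRIM to a drive on Linux (both tools from util-linux).
import subprocess

# Option 1: discard (TRIM) every block on an unmounted device -- destructive.
subprocess.run(["blkdiscard", "/dev/sdX"], check=True)

# Option 2: TRIM only the free space of a mounted filesystem.
subprocess.run(["fstrim", "-v", "/mnt/testdrive"], check=True)
```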
I didn't want to rely completely on HD Tach, as it's always possible that one benchmark behaves differently from others, especially when it comes to SandForce. I turned to ATTO, which also uses highly compressible data, to see if it would report results similar to our HD Tach runs. Once again, I first secure erased the drive, filled it with sequential data, and then tortured it with 4KB random writes (100% LBA space, QD=32) for 20 minutes:
As expected, write speed is not affected, aside from an odd bump at a transfer size of 32KB. Since we are only talking about one IO size and performance actually degraded after TRIM, it's entirely possible that this is simply a measurement anomaly.
The ATTO read speed graph tells the same story as our HD Tach graphs: read speed does indeed degrade and is not fully restored by TRIM. The decrease in read speed is much smaller than in our HD Tach results, but keep in mind that ATTO reads and writes far less data than HD Tach, which covers the whole drive.
What we can conclude from these results is that TRIM is definitely broken in SandForce SSDs running firmware 5.0.0, 5.0.1, or 5.0.2. If your SandForce SSD is running the older 3.x.x firmware, you have nothing to worry about, because this TRIM issue is limited strictly to the 5.x.x firmware. Luckily, this is not the end of the world: SandForce has been aware of the issue for a long time and a fix is already available for some drives. Let's have a look at how the fix works.
56 Comments
FunnyTrace - Wednesday, November 28, 2012 - link
Yes, I did read an article on Tweaktown about this in August 2012.
JellyRoll - Friday, November 23, 2012 - link
OMG... SandForce does not do dedupe (deduplication). It does not have to "check if the data is used by something else"!! The drive is unaware of actual file usage above the device level. That is a host level responsibility.
I cannot believe that this article was not vetted before it was posted.
Kristian Vättö - Friday, November 23, 2012 - link
SandForce does deduplication at the device level. It doesn't look for actual files like the host does, because it's all ones and zeros to the controller. However, what it does is look for similar data patterns. For example, if you have two very similar photos of 5MB each, the controller may not write 10MB. Instead, it will only write, let's say, 8MB to the NAND because some of the data is duplicated, and the whole idea of deduplication is to minimize NAND writes.
If you go and delete one of these photos, the OS sends a TRIM command telling the drive that the LBA is no longer in use and its data can be erased. What makes SandForce more complicated is that the photos don't necessarily have their own dedicated LBAs, so the controller needs to check that the LBA it's about to erase is not mapped to any other LBA. Otherwise you might end up erasing a portion of the other photo as well.
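To put rough numbers on that photo example, here is a toy Python sketch of device-level, chunk-based deduplication. It illustrates the general idea only; the chunk size, hashing scheme, and figures are invented and have nothing to do with how DuraWrite actually works.

```python
# Toy chunk-level dedup: identical 4KB chunks are stored (written) only once.
import hashlib, os

store = {}                               # chunk hash -> stored chunk

def write(data, chunk=4096):
    written = 0
    for i in range(0, len(data), chunk):
        c = data[i:i + chunk]
        h = hashlib.sha256(c).digest()
        if h not in store:               # only previously unseen chunks hit the NAND
            store[h] = c
            written += len(c)
    return written

photo_a = os.urandom(5 * 1024 * 1024)                                  # 5MB photo
photo_b = photo_a[:3 * 1024 * 1024] + os.urandom(2 * 1024 * 1024)      # shares 3MB

print(write(photo_a) // 2**20)   # ~5 MB actually written
print(write(photo_b) // 2**20)   # ~2 MB written: the shared 3MB already exists
```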
JellyRoll - Friday, November 23, 2012 - link
I challenge you to offer one document that supports your assertion that SandForce does deduplication. There isn't any, as it does not. Feel free to link to the technical document that supports your claims in your reply. SandForce supports compression, not deduplication.
Here is a link to documentation and product data sheets.
http://www.lsi.com/products/storagecomponents/Page...
Kristian Vättö - Friday, November 23, 2012 - link
SandForce/LSI has published very little about the technology behind DuraWrite and how it works, but a combination of technologies including compression and deduplication is what they have told us.
http://www.anandtech.com/show/2899/3
JellyRoll - Friday, November 23, 2012 - link
Linking an AnandTech article is not proof that SandForce does deduplication. A quick Google will reveal that there is no other source, outside of AnandTech, that claims they offer deduplication on the current series of processors. As a matter of fact, that article is the only other reference to deduplication and SandForce that can be found.
There was a mistake made in that article.
JellyRoll - Friday, November 23, 2012 - link
As a matter of fact, if deduplication applied to the SandForce series of processors, then incompressible data would also see a decrease in write amplification. SandForce is very public that they have "to follow the same rules" with incompressible data as everyone else, i.e. they suffer the same amount of write amplification. Since SandForce controllers only exhibit performance-enhancing and endurance-increasing benefits with compressible data, that alone indicates that deduplication is not in use.
Deduplication can be applied regardless of the compressibility of the data.
extide - Saturday, November 24, 2012 - link
Incompressible data generally doesn't have duplications in it... that's kinda what makes it incompressible... I mean the whole POINT of compression is removing duplication!
JellyRoll - Saturday, November 24, 2012 - link
If you have two matching sets of data, be they incompressible or not, they would be subject to deduplication; it would merely require mapping them to the same LBA addresses. For instance, if you have two files that consist of largely incompressible data but are still carbon copies of each other, they are still subject to data deduplication.
extide - Wednesday, November 28, 2012 - link
That could also be considered compression. Take 2 copies of the same MP3 file and put them into a zip file, how big is the zip file? Pretty close to the size of one copy...
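A quick experiment illustrates the overlap between compression and deduplication being argued here: duplicated data is compressible even when each copy on its own is not. The sketch below uses Python's lzma module rather than an actual zip, because DEFLATE's 32KB window is too small to catch a duplicate the size of an MP3, whereas LZMA's multi-megabyte dictionary can.

```python
# Two copies of otherwise incompressible data compress down to roughly one copy,
# as long as the compressor's dictionary is large enough to see the repeat.
import lzma, os

blob = os.urandom(1_000_000)                   # 1MB of incompressible data

one_copy = len(lzma.compress(blob))
two_copies = len(lzma.compress(blob + blob))   # the same data back-to-back

print(f"one copy:   ~{one_copy:,} compressed bytes")
print(f"two copies: ~{two_copies:,} compressed bytes (close to one copy)")
```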