SSD versus Enterprise SAS and SATA disks
by Johan De Gelas on March 20, 2009 2:00 AM EST - Posted in IT Computing
Energy Consumption
For our performance testing we used a 3.3GHz (120W TDP) Xeon X5470; we admit to being a bit paranoid, and we wanted the CPU to have plenty of processing power in reserve. For purely storage related tasks, the CPU never reached more than 15% load with software RAID. Only SysBench was capable of pushing it up to 80%, but if we want to measure the power consumption of our SC-836TQ storage enclosure, the SysBench value is unrealistic. In most cases, the server will run the database and perform the transactions, while the attached storage enclosure performs only the I/O processing. Therefore we measure the power consumption of our storage enclosure using IOMeter, and we use a more sensible 2.5GHz (80W) Xeon E5420 CPU. High-performance enclosures (such as those of EMC) also use Xeons to perform the I/O processing.
The SC-836TQ uses one Ablecom PWS-902-1R 900W 75A power supply, one Xeon E5420 "Harpertown", 4x2GB 667MHz FB-DIMM, and one Adaptec 5085 RAID controller. "Full Load" means that the storage enclosure is performing the IOMeter Random Read/Write tests. The difference between sequential reads and random writes is only a few watts (with both SSD and SAS).
Drive Power Consumption (Watts)

| Configuration | Idle | Full Load | Idle (Drives Only) | Full Load (Drives Only) | Idle (per Drive) | Full Load (per Drive) |
|---|---|---|---|---|---|---|
| 8 x SSD X25-E | 257 | 275 | 6 | 24 | 0.75 | 3 |
| 4 x SSD X25-E | 254 | 269 | 3 | 18 | 0.75 | 4.5 |
| 8 x SAS (Seagate) | 383 | 404 | 132 | 153 | 16.5 | 19.125 |
| 4 x SAS (Seagate) | 316 | 328 | 65 | 77 | 16.25 | 19.25 |
| No disks at all (one system disk) | 251 | n/a | n/a | n/a | n/a | n/a |
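The derived columns in the table follow directly from the measured totals: subtract the 251W "no disks" baseline from each enclosure reading and divide by the number of drives. A minimal sketch of that arithmetic, using only the measurements from the table above:

```python
# Derive the "drives only" and per-drive power columns from the measured
# enclosure totals by subtracting the 251W no-disk baseline.
BASELINE_W = 251  # SC-836TQ with only the system disk

# (configuration, measured idle watts, measured full-load watts)
measurements = [
    ("8 x SSD X25-E", 257, 275),
    ("4 x SSD X25-E", 254, 269),
    ("8 x SAS (Seagate)", 383, 404),
    ("4 x SAS (Seagate)", 316, 328),
]

for name, idle_w, load_w in measurements:
    drives = int(name.split()[0])         # number of drives in the configuration
    drives_idle = idle_w - BASELINE_W     # power drawn by the drives alone (idle)
    drives_load = load_w - BASELINE_W     # power drawn by the drives alone (full load)
    print(f"{name}: {drives_idle}W idle / {drives_load}W load "
          f"= {drives_idle / drives:.2f}W / {drives_load / drives:.3f}W per drive")
```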
While the Intel SLC X25-E consumes almost nothing in idle (0.06W), the reality is that the drive is attached to a RAID controller, and that RAID controller consumes a little bit of energy to keep the connection to the idle drive alive. Still, the fact that eight SLC drives need 129W less power at full load than eight SAS drives while offering 3 to 13 times better OLTP performance is a small revolution in storage land.
Let us do a small thought experiment. Assume that you have a 100GB database that is performance limited. Our SysBench benchmark showed that eight SLC X25-E drives perform at least three times (up to 13 times) better than ten 15000RPM SAS drives. You need at least 30 SAS drives to achieve the same performance as the SSDs. We'll ignore the fact that you would probably need another enclosure for the 30 drives and simply look at the costs associated with an eight SLC SSD setup versus a 30 drive 15000RPM SAS setup.
We base our kWh price on the US Department of Energy numbers, which state that on average 1kWh costs a little more than 10 cents[2]; the real price is probably a bit higher, but that's close enough. It is important to note that we add 50% more power to account for the air conditioning needed to remove the heat that the disks generate. We assume that the drives are working eight hours per day under full load and 16 hours under light load.
TCO Comparison

| | X25-E | SAS 15000RPM | Comment |
|---|---|---|---|
| Power per drive (W, average) | 1.5 | 17.375 | 16 hours idle, 8 hours full load |
| Years | 3 | 3 | |
| kWh per drive (3 years) | 38.88 | 450.36 | 360 days, 24 hours per day |
| Number of drives | 8 | 30 | Based on SysBench performance measurements |
| Total kWh for disks | 311.04 | 13510.8 | |
| Cooling (50%) | 155.52 | 6755.4 | To remove the heat from the array |
| Total kWh in datacenter | 466.56 | 20266.2 | Disk power + cooling |
| Price per kWh | $0.10 | $0.10 | |
| Total power costs (3 years) | $46.66 | $2026.62 | |
| TCA | $6400 | $6000 | Eight 64GB SLC drives at $800; thirty 15000RPM SAS drives at $200 |
| Savings (3 years) | $1579.96 | | |
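For readers who want to plug in their own electricity price, drive counts, or duty cycle, here is a minimal sketch that reproduces the arithmetic behind the table. The per-drive power figures, drive prices, 360-day year, 50% cooling overhead, and 16/8 hour idle/full-load split are the ones used above; everything else is straightforward multiplication.

```python
# Reproduce the three-year TCO comparison from the per-drive power figures.
HOURS_IDLE, HOURS_LOAD = 16, 8      # assumed daily duty cycle
DAYS_PER_YEAR, YEARS = 360, 3
COOLING_OVERHEAD = 0.5              # +50% power to remove the generated heat
PRICE_PER_KWH = 0.10                # US DoE average price in dollars

def three_year_cost(idle_w, load_w, drives, price_per_drive):
    """Acquisition cost plus three years of (disk + cooling) power."""
    avg_w = (idle_w * HOURS_IDLE + load_w * HOURS_LOAD) / 24
    kwh_per_drive = avg_w * 24 * DAYS_PER_YEAR * YEARS / 1000
    kwh_total = kwh_per_drive * drives * (1 + COOLING_OVERHEAD)
    return drives * price_per_drive + kwh_total * PRICE_PER_KWH

ssd = three_year_cost(0.75, 3.0, drives=8, price_per_drive=800)       # X25-E
sas = three_year_cost(16.5, 19.125, drives=30, price_per_drive=200)   # 15000RPM SAS
print(f"SSD: ${ssd:,.2f}  SAS: ${sas:,.2f}  savings: ${sas - ssd:,.2f}")
```

Running this gives roughly $6,447 for the eight-SSD setup versus $8,027 for the thirty-drive SAS setup over three years, i.e. the roughly $1,580 in savings shown in the table.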
If you use six drives for the RAID 10 data LUN (and two drives for the logs), you need the 64GB SLC drives; that is why we use those in this calculation. Note that our calculation is somewhat biased in favor of the SAS drives: the SLC drives probably run at idle much more than the SAS drives, and it is very likely that even 30 SAS drives won't be able to keep up with our eight SSDs. Even with the bias, the conclusion is crystal clear: if you are not space limited but you are performance limited, SSDs are definitely a better deal and will save you quite a bit of money as they lower the TCO.
Comments
JohanAnandtech - Friday, March 20, 2009
OK, good feedback. On Monday, I'll check the exact length of the test (it is several minutes), and we do have a follow-up which shows you quite a bit of what is happening. Disk queue lengths are quite high, so that should also tell you that it is not just a "fill cache", "dump cache" thing. We did see this behavior with small databases though (2GB etc.). Just give me a bit of time; after the Nehalem review, I'll explore these kinds of things. We also noticed that the deadline scheduler is the best for the SAS disks, but noop is best for the SSDs. I'll explore the more in-depth stuff in a later article.
JarredWalton - Friday, March 20, 2009
As stated in several areas in the article, SSDs clearly don't make sense if you need a large database - which is why Google, as an example, wouldn't consider them at present. The current size requirements are quite reasonable (less than 512GB if you use 8x64GB SSDs... and of course you can bump that up to 16x64GB if necessary, though you'd have to run more SATA cards, use software RAID, or make some other changes), but there will certainly be businesses that need more storage. However, keep in mind that some companies will buy larger SCSI/SAS drives and then partition/format them as a smaller drive in order to improve performance - i.e. if you only use 32GB on a 300GB disk, the seek times will improve because you only need to seek over a smaller portion of the platters, and transfer rates will improve because all of the data will be on the outer sectors.
At one point I worked for a corporation that purchased the then top-of-the-line 15k 32GB disks and they were all formatted to 8GB. We had a heavily taxed database - hundreds of concurrent users working in a warehouse, plus corporate accesses and backups - but the total size of the database was small enough that we didn't need tons of storage. Interestingly enough, we ran the database on an EMC box that probably cost close to $1 million (using IBM POWER5 or POWER6 servers that added another couple million I think). I wonder if they have looked at switching to SSDs instead of SCSI/SAS now? Probably not - they'll just do whatever IBM tells them they should do!
virtualgeek - Friday, March 20, 2009
The key is that, as a general statement:
Lowest capital cost optimization happens at the application tier (query optimization).
Next lowest capital cost optimization happens at the database tier (proper up-front DB design)
Next lowest capital cost optimization happens by adding RAM to the database tier.
Next lowest capital cost optimization happens by adding database server horsepower or storage performance (depending on what is the gate to performance).
But - in various cases, sometimes the last option is the only one (for lots of reasons - legacy app, database structure is extremely difficult to change, etc).
icrf - Friday, March 20, 2009
"Capital cost" is a bit of a misnomer. It tends to be far cheaper to buy some memory than pay a DBA to tune queries.Dudler - Friday, March 20, 2009 - link
Hi, thanks for the article.
1. But your cost comparison is only valid until you have to buy new disks. It would be interesting to have an estimate of how long the SSDs would survive in a server environment, since they would be written to a lot. Even with all the wear leveling algorithms, their lifespan may be short. Would the SAS disks live longer?
2. How did you test the write speed/latency? In the great article by Anand it was pretty clear that the performance of SSDs started to degrade when they got full and wrote many small blocks. Did you simulate a "used" drive, or did you only secure erase it to a "fresh" state beforehand?
JarredWalton - Friday, March 20, 2009
Intel states a higher MTBF for their SSD than for any conventional HDD. We have no real way of determining how long they will truly last, but Intel suggests they will last for five years running 24/7 doing constant 67% read/33% write operations. Check back in five years and we'll let you know how they're doing. ;-) As for the degraded performance testing, remember that Anand's article showed the X25 was the least prone to degraded performance, and the X25-E is designed to be even better than the X25-M. Anand didn't test the "new" performance of the X25-E (http://www.anandtech.com/storage/showdoc.aspx?i=35...), but even in the degraded state it was still faster than any other SSD with the exception of the X25-M in its "new" state. Given the nature of the testing, I would assume that the drives are at least partially (and more likely fully) degraded in Johan's benchmarks - I can't imagine he spent the time to secure erase all the SSDs before each set of benchmarks, but I could be wrong.
IntelUser2000 - Sunday, March 22, 2009
Unfortunately, I'd have to agree with mikeblas, even for the X25-E. Have a look at this site: http://www.mysqlperformanceblog.com/2009/03/02/ssd...
The point in that article is that the SSD can outperform a similarly priced RAID 10 setup by 5x, but due to data loss risks they have to turn off the write cache, which degrades the X25-E's performance by a factor of five, so it ends up at the same level.
JohanAnandtech - Monday, March 23, 2009
We will look into this, but the problem does not seem to occur with the ext3 filesystem. Could this be something XFS specific? It is suggested here that this is the case:
http://ondrejcertik.blogspot.com/2008/02/xfs-is-20...
We'll investigate the issue.
That is good feedback, but just as reviewers should be cautious about jumping to conclusions, readers should be too. The blog test was quick, as the blogger admits; this does not mean that the X25-E is not reliable. Also notice that he says he should also test with a BBU-enabled RAID card, something we did.
IntelUser2000 - Monday, March 23, 2009
Thanks for the reply. While I'm not an expert on server settings, they do have a point. Intel's IOPS results are obtained with the write cache on. Several webpages and posts have said they turn off write caches to prevent data loss.
And I have an X25-M in a Windows XP system. In a simple CrystalDiskMark run, my 4K random write result goes from 35MB/s to 4MB/s when the write cache is disabled in the disk settings. Of course, I have NO reason to turn write caching off.
It's something not to be ignored. If this is true, the X25-E is really only suitable for extreme enthusiast PCs rather than the servers Intel claims it targets.
JarredWalton - Monday, March 23, 2009
No enterprise setup I've ever encountered runs without a hefty backup power system, so I don't think it's as critical a problem as some suggest. If power fails and the UPS doesn't kick in to help out, you're in a world of hurt regardless. That said, there was one time when one of the facility operations team members did some "Emergency Power Off" testing at my old job. Unfortunately, they didn't put the system into test mode correctly, so when they hit the switch to "test" the system, the whole building went dark!
LOL. You never saw the poop hit the fan so hard! My boss was getting reamed for letting anyone other than the computer people into the datacenter; meanwhile we're trying to get everything back up and running, and the GM of the warehouse is wondering how this all happened.
That last one is easy to answer: your senior FacOps guy somehow forgot to put the warehouse into test mode. That's hard to do since it's listed as the second or third step in the test procedures. Not surprisingly, he was in a hurry because the testing was supposed to be done two weeks earlier and somehow slipped through the cracks. Needless to say, FacOps no longer got to hold the key that would allow them to "test" that particular item.
Bottom line, though, is that in almost four years of working at that job, that was the only time where we lost power to the datacenter unexpectedly. Since we were running an EMC box for storage, we also would have had their "super capacitor" to allow the cache to be flushed to flash, resulting in no data loss.