SSD versus Enterprise SAS and SATA disks
by Johan De Gelas on March 20, 2009 2:00 AM EST, posted in IT Computing
Energy Consumption
For our performance testing we used a 3.33GHz (120W TDP) Xeon X5470; we admit to being a bit paranoid, but we wanted the CPU to have plenty of processing power in reserve. For purely storage related tasks, CPU load never exceeded 15% with software RAID. Only SysBench was capable of pushing it up to 80%, but that value is unrealistic if we want to measure the power consumption of our SC-836TQ storage enclosure: in most cases the server runs the database and performs the transactions, while the attached storage enclosure only handles the I/O processing. Therefore we measure the power consumption of our storage enclosure while it runs IOMeter, and we use a more sensible 2.5GHz (80W) Xeon E5420. High performance enclosures (such as those from EMC) also use Xeons to perform the I/O processing.
The SC-836TQ uses one Ablecom PWS-902-1R 900W 75A power supply, one Xeon E5420 "Harpertown", 4x2GB 667MHz FB-DIMM, and one Adaptec 5085 RAID controller. "Full Load" means that the storage enclosure is performing the IOMeter Random Read/Write tests. The difference between sequential reads and random writes is only a few watts (with both SSD and SAS).
Drive Power Consumption (Watts)

| Configuration | Idle | Full Load | Idle (Drives Only) | Full Load (Drives Only) | Idle (per Drive) | Full Load (per Drive) |
|---|---|---|---|---|---|---|
| 8 x SSD X25-E | 257 | 275 | 6 | 24 | 0.75 | 3 |
| 4 x SSD X25-E | 254 | 269 | 3 | 18 | 0.75 | 4.5 |
| 8 x SAS (Seagate) | 383 | 404 | 132 | 153 | 16.5 | 19.125 |
| 4 x SAS (Seagate) | 316 | 328 | 65 | 77 | 16.25 | 19.25 |
| No disks at all (one system disk) | 251 | n/a | n/a | n/a | n/a | n/a |
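The "Drives Only" and "per Drive" columns follow directly from the measured enclosure totals: subtract the 251W that the bare system draws and divide by the number of drives. A minimal sketch of that calculation is below; it assumes the same 251W baseline also applies under load, since the table has no full load figure for the empty enclosure.

```python
# Minimal sketch: deriving the "Drives Only" and "per Drive" columns from the
# measured enclosure totals, assuming the 251W no-disk baseline also holds
# under load (the table lists no full load figure for the empty system).
BASELINE_W = 251  # enclosure with only the system disk

def drive_power(total_w, drive_count, baseline_w=BASELINE_W):
    """Return (drives-only watts, watts per drive) for a measured total."""
    drives_only = total_w - baseline_w
    return drives_only, drives_only / drive_count

# (drive count, measured idle W, measured full load W) from the table above
configs = {
    "8 x SSD X25-E": (8, 257, 275),
    "4 x SSD X25-E": (4, 254, 269),
    "8 x SAS (Seagate)": (8, 383, 404),
    "4 x SAS (Seagate)": (4, 316, 328),
}

for name, (n, idle_w, load_w) in configs.items():
    idle_only, idle_each = drive_power(idle_w, n)
    load_only, load_each = drive_power(load_w, n)
    print(f"{name}: idle {idle_only}W ({idle_each:.2f}W/drive), "
          f"full load {load_only}W ({load_each:.3f}W/drive)")
```

The 129W difference mentioned below is simply the 404W of eight SAS drives under full load minus the 275W of eight X25-Es.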
While the Intel SLC X25-E consumes almost nothing at idle (0.06W), in reality the drive is attached to a RAID controller, and that controller consumes a little bit of energy to keep the connection to the idle drive alive. Still, the fact that eight SLC drives need 129W less power under full load than eight SAS drives while offering 3 to 13 times better OLTP performance is a small revolution in storage land.
Let us do a small thought experiment. Assume that you have a 100GB database that is performance limited. Our SysBench benchmark showed that eight SLC X25-E drives perform at least three times (up to 13 times) better than ten 15000RPM SAS drives. You need at least 30 SAS drives to achieve the same performance as the SSDs. We'll ignore the fact that you would probably need another enclosure for the 30 drives and simply look at the costs associated with an eight SLC SSD setup versus a 30 drive 15000RPM SAS setup.
We base our kWh price on the US Department of Energy numbers, which state that on average 1 kWh costs a little more than 10 cents[2]; the real price is probably a bit higher, but that's close enough. It is important to note that we add 50% more power to account for the cost of the air conditioning that removes the heat the disks generate. We assume that the drives work eight hours per day under full load and sixteen hours at idle.
TCO Comparison

| | X25-E | SAS 15000RPM | Comment |
|---|---|---|---|
| Average power per drive (W) | 1.5 | 17.375 | 16 hours idle, 8 hours full load |
| Years | 3 | 3 | |
| kWh per drive (3 years) | 38.88 | 450.36 | 360 days, 24 hours per day |
| Number of drives | 8 | 30 | Based on SysBench performance measurements |
| Total kWh for disks | 311.04 | 13510.8 | |
| Cooling (50%) | 155.52 | 6755.4 | To remove the heat from the array |
| Total kWh in datacenter | 466.56 | 20266.2 | Disk power + cooling |
| Price per kWh | $0.10 | $0.10 | |
| Total power costs (3 years) | $46.656 | $2026.62 | |
| TCA | $6400 | $6000 | Eight 64GB SLC drives at $800; thirty 15000RPM SAS drives at $200 |
| Savings (3 years) | $1579.964 | | SSD setup versus SAS setup |
If you use six drives for the RAID 10 data LUN (two drives for the logs), you need the 64GB SLC drives: six 32GB drives in RAID 10 would offer only 96GB, less than our 100GB database. That is why we use those in this calculation. Note that our calculation is somewhat biased in favor of the SAS drives: the SLC drives probably run at idle much more than the SAS drives, and it is very likely that even 30 SAS drives won't be able to keep up with our eight SSDs. Even with that bias, the conclusion is crystal clear: if you are not space limited but you are performance limited, SSDs are definitely the better deal and will save you quite a bit of money as they lower the TCO.
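For readers who want to plug in their own drive prices or electricity rates, here is a minimal sketch of the TCO calculation above; the duty cycle, drive counts, and prices are the assumptions from the table, not additional measurements.

```python
# Minimal sketch of the TCO comparison above. Drive counts, prices, electricity
# rate and the 8h/16h duty cycle are the assumptions from the table.
KWH_PRICE = 0.10        # $/kWh, US DoE average
COOLING_OVERHEAD = 0.5  # add 50% to remove the heat the disks generate
YEARS, DAYS_PER_YEAR, HOURS_FULL, HOURS_IDLE = 3, 360, 8, 16

def three_year_cost(idle_w, load_w, drives, price_per_drive):
    avg_w = (HOURS_IDLE * idle_w + HOURS_FULL * load_w) / 24       # 1.5W for the X25-E
    kwh_per_drive = avg_w * 24 * DAYS_PER_YEAR * YEARS / 1000      # 38.88 kWh
    total_kwh = kwh_per_drive * drives * (1 + COOLING_OVERHEAD)    # disks + cooling
    return drives * price_per_drive + total_kwh * KWH_PRICE        # TCA + power costs

ssd = three_year_cost(0.75, 3.0, drives=8, price_per_drive=800)      # ~$6446.66
sas = three_year_cost(16.5, 19.125, drives=30, price_per_drive=200)  # ~$8026.62
print(f"SSD: ${ssd:.2f}   SAS: ${sas:.2f}   savings: ${sas - ssd:.2f}")
```

Even doubling the electricity price only widens the gap in favor of the SSDs, since the SAS setup burns more than forty times as many kWh.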
67 Comments
marraco - Wednesday, March 25, 2009 - link
The comparison is not fair, but it can be made fairer: if the RAID of SATA/SAS disks is restricted to the same storage capacity as the SSDs, limiting the partition to the fastest external tracks/cylinders, the latency is significantly reduced and the average read/write speed is significantly increased, so
PLEASE, PLEASE, PLEASE
Repeat the benchmarks, but with short stroking for the magnetic disks.
JohanAnandtech - Friday, March 27, 2009 - link
May I ask what the difference is with the fact that we created a relatively small partition across our RAID-5 raidset? Also, you can imagine that our 23GB database was on the outer tracks of the disks. I have to verify, but that seems logical. This kind of testing should give the same effect as short stroking. I personally think short stroking cannot be good for your actuator, while a small partition should be no problem.
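For readers wondering why a small partition behaves like short stroking: LBA 0 starts on the outermost tracks, and with zoned bit recording those tracks store more sectors per revolution, so the first slice of the LBA range occupies far less than a proportional share of the actuator's stroke. The rough toy model below illustrates this; the inner/outer radius ratio is an assumed value for illustration, not something measured in this article or the THG piece.

```python
# Toy model (illustration only): assume sectors per track scale linearly with
# track radius and that LBAs run from the outer edge inward, then compute how
# much of the actuator's stroke a small outer partition actually uses.
import math

def stroke_fraction(capacity_fraction, r_inner=0.45, r_outer=1.0):
    """Fraction of the full stroke needed to hold `capacity_fraction` of the
    disk's capacity on the outermost tracks (r_inner/r_outer are assumptions)."""
    # Solve (r_outer^2 - r^2) / (r_outer^2 - r_inner^2) = capacity_fraction for r
    r = math.sqrt(r_outer**2 - capacity_fraction * (r_outer**2 - r_inner**2))
    return (r_outer - r) / (r_outer - r_inner)

for frac in (0.10, 0.25, 0.50):
    print(f"outer {frac:>4.0%} of capacity -> ~{stroke_fraction(frac):.0%} of the stroke")
```

Under these assumptions a partition holding the outer 10% of the capacity keeps the heads within roughly 7% of the full stroke, which is essentially what dedicated short stroking does.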
marraco - Friday, March 27, 2009 - link
See this link: http://www.tomshardware.com/reviews/short-stroking...
Clearly, your results are orders of magnitude apart from those shown in that benchmark.
As I understand it, short stroking increases actuator health, because it reduces the physical acceleration on the actuator.
All that is necessary is to use a small partition on the fastest external tracks.
You utilized a RAID 0 of 16 disks, with less than 1000MB/s.
On Tom's Hardware, a RAID of only 4 disks achieved an average (not maximum) of 1400 to 1600MB/s. (Of course, the tests are not the same; for that reason, I ask for new tests.)
About the RAID 5: I would love to see RAID 0.
I am interested in comparing a fast SSD such as the Intels (or OCZ Vertex/Summit) with what can be achieved at the same cost with magnetic media, if the partition size is restricted to the same total capacity as the SSD.
Anyway, thanks for the article. Good work.
So good, I want to see more :)
marraco - Sunday, April 5, 2009 - link
Please, tell me you are preparing such an article :)
JohanAnandtech - Tuesday, April 7, 2009 - link
We are investigating the issue. I'd like to have some second opinions before I start heavy benchmarking based on a THG article. They tend to be sensational...
araczynski - Wednesday, March 25, 2009 - link
wow, color me impressed. all the more reason to upgrade everything to gigabit and fiber.
BailoutBenny - Tuesday, March 24, 2009 - link
Can we get any updates on the future of chalcogenide glass (phase change) based drive technologies? IBM's Millipede and other MEMS probe storage devices? Any word about Intel and STMicroelectronics' shipments of PRAM samples to customers that happened last year? What do the rumor mills say? Are these technologies proving viable? It is difficult to formulate a coherent picture for these technologies without being an industry insider.
Black Jacque - Tuesday, March 24, 2009 - link
"RAID 5 in Action ... However, it is rarely if ever used for any serious application."
You are obviously not a SAN Admin or know too much about enterprise level storage.
RAID 5 is the mainstay of block-level storage systems by companies like EMC.
In addition, the article mentions STEC EFDs used by EMC. On the EMC CLARiiON line, those EFDs are provisioned in RAID 5 groups.
spikespiegal - Wednesday, March 25, 2009 - link
"RAID 5 is the mainstay of block-level storage systems by companies like EMC."
Which thus explains why in this day and age I see so many SANs blowing entire volumes and costing days of restoration when the room temp gets a few degrees above ambient.
Corrupted RAID 5 arrays have cost me more lost enterprise data than all the non-RAID client side disks I've ever replaced; iSeries, all brands of x386, etc. EMC has a great script to account for this in which they always blame the drives first, then only when cornered by an enraged CIO will they admit it's their controllers. Been there...done that...for over a decade in many different industries.
If you haven't been burned by RAID 5, or dare claim a drive controller in RAID 5 mode has a better MTBF than the drives it's hosting, then it's time to quit your day job at the call center in India. RAID 5 saves you the cost of one drive in every four, which was logical in 1998 but not today. At least span across multiple redundant controllers in RAID 10 or something....
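To put a number on that "one drive every four" remark, here is a quick sketch (ours, not from the article) of usable capacity under RAID 5 versus RAID 10:

```python
# Rough illustration of the capacity trade-off behind the RAID 5 vs RAID 10
# cost argument: RAID 5 loses one drive per group to parity, RAID 10 loses half.
def usable_drives(n, level):
    if level == "raid5":
        return n - 1      # one drive's worth of parity per group
    if level == "raid10":
        return n // 2     # every drive is mirrored
    raise ValueError(f"unknown RAID level: {level}")

for n in (4, 8, 30):
    print(f"{n} drives: RAID 5 -> {usable_drives(n, 'raid5')} usable, "
          f"RAID 10 -> {usable_drives(n, 'raid10')} usable")
```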
JohanAnandtech - Tuesday, March 24, 2009 - link
I fear you misread that sentence: "RAID 0 is a good way to see how adding more disks scales up your writing and reading performance. However, it is rarely if ever used for any serious application."
So we are talking about RAID-0, not RAID-5.
http://it.anandtech.com/IT/showdoc.aspx?i=3532&...