SSD versus Enterprise SAS and SATA disks
by Johan De Gelas on March 20, 2009 2:00 AM EST - Posted in IT Computing
Energy Consumption
For our performance testing we used a 3.3GHz (120W TDP) Xeon X5470; we admit to being a bit paranoid, but we wanted the CPU to have plenty of processing power in reserve. For purely storage related tasks, the CPU never exceeded 15% load, even with software RAID. Only SysBench was capable of pushing it up to 80%, but if we want to measure the power consumption of our SC-836TQ storage enclosure, the SysBench value is unrealistic. In most cases, the server will run the database and perform the transactions, while the attached storage enclosure performs only the I/O processing. Therefore we measure the power consumption of our storage enclosure using IOMeter, with a more sensible (80W) 2.5GHz Xeon E5420 CPU. High performance enclosures (such as those of EMC) also use Xeons to perform the I/O processing.
The SC-836TQ uses one Ablecom PWS-902-1R 900W 75A power supply, one Xeon E5420 "Harpertown", 4x2GB 667MHz FB-DIMM, and one Adaptec 5085 RAID controller. "Full Load" means that the storage enclosure is performing the IOMeter Random Read/Write tests. The difference between sequential reads and random writes is only a few watts (with both SSD and SAS).
Drive Power Consumption (in watts)

| Configuration | Idle | Full Load | Idle (Drives Only) | Full Load (Drives Only) | Idle (per Drive) | Full Load (per Drive) |
|---|---|---|---|---|---|---|
| 8 x SSD X25-E | 257 | 275 | 6 | 24 | 0.75 | 3 |
| 4 x SSD X25-E | 254 | 269 | 3 | 18 | 0.75 | 4.5 |
| 8 x SAS (Seagate) | 383 | 404 | 132 | 153 | 16.5 | 19.125 |
| 4 x SAS (Seagate) | 316 | 328 | 65 | 77 | 16.25 | 19.25 |
| No disks at all (one system disk) | 251 | n/a | n/a | n/a | n/a | n/a |
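The "Drives Only" and "per Drive" columns appear to follow directly from the measured enclosure totals: subtract the 251W baseline of the enclosure with only the system disk, then divide by the number of data drives. A minimal sketch (Python; the variable names are ours) of that derivation:

```python
# Derive the "Drives Only" and "per Drive" columns from the measured
# enclosure totals, using the 251W baseline (enclosure with only a system disk).
BASELINE_W = 251

measurements = {
    # configuration: (number of drives, measured idle W, measured full load W)
    "8 x SSD X25-E":     (8, 257, 275),
    "4 x SSD X25-E":     (4, 254, 269),
    "8 x SAS (Seagate)": (8, 383, 404),
    "4 x SAS (Seagate)": (4, 316, 328),
}

for config, (count, idle_w, load_w) in measurements.items():
    drives_idle = idle_w - BASELINE_W   # power attributable to the drives, idle
    drives_load = load_w - BASELINE_W   # power attributable to the drives, full load
    print(f"{config}: idle {drives_idle}W ({drives_idle / count:.2f}W per drive), "
          f"full load {drives_load}W ({drives_load / count:.3f}W per drive)")
```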
While the Intel SLC X25-E consumes almost nothing at idle (0.06W), in reality the drive is attached to a RAID controller, which consumes a little bit of energy to keep the connection to the idle drive alive. Still, the fact that eight SLC drives need 129W less power than eight SAS drives while offering 3 to 13 times better OLTP performance is a small revolution in storage land.
Let us do a small thought experiment. Assume that you have a 100GB database that is performance limited. Our SysBench benchmark showed that eight SLC X25-E drives perform at least three times (up to 13 times) better than ten 15000RPM SAS drives. You need at least 30 SAS drives to achieve the same performance as the SSDs. We'll ignore the fact that you would probably need another enclosure for the 30 drives and simply look at the costs associated with an eight SLC SSD setup versus a 30 drive 15000RPM SAS setup.
We base our kWh price on US Department of Energy numbers, which state that on average 1kWh costs a little more than 10 cents[2]; the real price is probably a bit higher, but that's close enough. It is important to note that we add 50% more power to account for the air conditioning needed to remove the heat that the disks generate. We assume that the drives work eight hours per day under full load and 16 hours at idle.
TCO Comparison

| | X25-E | SAS 15000RPM | Comment |
|---|---|---|---|
| Average power per drive (W) | 1.5 | 17.375 | 16 hours idle, 8 hours full load |
| Years | 3 | 3 | |
| kWh per drive (3 years) | 38.88 | 450.36 | 360 days, 24 hours |
| Number of drives | 8 | 30 | Based on SysBench performance measurements |
| Total kWh for disks | 311.04 | 13510.8 | |
| Cooling (50%) | 155.52 | 6755.4 | To remove heat from the array |
| Total kWh in datacenter | 466.56 | 20266.2 | Disk power + cooling |
| Price per kWh | $0.10 | $0.10 | |
| Total power costs (3 years) | $46.656 | $2026.62 | |
| TCA | $6400 | $6000 | Eight 64GB SLC drives at $800 each; thirty 15000RPM SAS drives at $200 each |
| Savings (3 years) | $1579.964 | | Acquisition + power |
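For those who want to verify the arithmetic, the sketch below (Python; the helper function and constant names are ours, with the per-drive wattages, drive counts, and prices taken from the tables above) reproduces the three-year power and acquisition costs:

```python
# Reproduce the three-year TCO figures from the table above.
HOURS_IDLE, HOURS_LOAD = 16, 8      # hours per day
DAYS, YEARS = 360, 3
PRICE_PER_KWH = 0.10                # US DoE average: a bit over 10 cents
COOLING_FACTOR = 1.5                # +50% for air conditioning

def three_year_cost(idle_w, load_w, drives, price_per_drive):
    avg_w = (idle_w * HOURS_IDLE + load_w * HOURS_LOAD) / 24   # average W per drive
    kwh_per_drive = avg_w * 24 * DAYS * YEARS / 1000           # e.g. 38.88 for the X25-E
    total_kwh = kwh_per_drive * drives * COOLING_FACTOR        # disks + cooling
    return total_kwh * PRICE_PER_KWH + drives * price_per_drive

ssd = three_year_cost(0.75, 3.0, drives=8, price_per_drive=800)       # ~$6446.66
sas = three_year_cost(16.5, 19.125, drives=30, price_per_drive=200)   # ~$8026.62
print(f"Three-year savings with the SSD setup: ${sas - ssd:.2f}")     # ~$1579.96
```

Changing the duty cycle or the kWh price shifts the exact numbers, but under these assumptions the SSD setup comes out well ahead.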
If you use six drives for the RAID 10 data LUN (and two drives for the logs), you need the 64GB SLC drives; that is why we use those in this calculation. Note that our calculation is somewhat biased in favor of the SAS drives: the SLC drives probably spend much more time at idle than the SAS drives, and it is very likely that even 30 SAS drives won't be able to keep up with our eight SSDs. Even with that bias, the conclusion is crystal clear: if you are not space limited but you are performance limited, SSDs are definitely the better deal and will save you quite a bit of money, as they lower the TCO.
67 Comments
Rasterman - Monday, March 23, 2009 - link
Since the controller is the bottleneck for the SSDs and you have very fast CPUs, did you try testing a full software RAID array, i.e. leaving the controllers out of it altogether?

Snarks - Sunday, March 22, 2009 - link

Reading the comments made my brain asplode D:! Damn it, it's way too late for this!
pablo906 - Saturday, March 21, 2009 - link
I've loved the stuff you put out for a long, long time, and this is another piece of quality work; I definitely appreciate the effort you put into it. I was thinking about how I was going to build the storage back end for a small/medium virtualization platform, and this is definitely swaying some of my previous ideas. It really seems like an EMC enclosure may be in our future instead of something built by me on a 24 port Areca card.

I don't know what all the hubbub was about at the beginning of the article, but I can tell you that I got what I needed. I'd like to see some follow-ups on server storage and definitely more RAID 6 info. Any chance you can do some serious RAID card testing? That enclosure you have is perfect for it (I've built some pretty serious storage solutions out of those and 24 port Areca cards), and I'd really like to see different cards and different configurations, numbers of drives, array types, etc. tested.
rbarone69 - Friday, March 20, 2009 - link
Great work on these benchmarks. I have found very few other sources that provided me with answers to my questions regarding exactly what you tested here (DETAILED ENOUGH FOR ME). This report will be referenced when we size some of our smaller (~40-50GB but heavily read) central databases we run within our enterprise.

It saddens me to see people that simply will NEVER be happy, no matter what you publish for them at no cost. Fanatics have their place, but generally cost organizations much more than open minded employees willing to work with what they have available.
JohanAnandtech - Saturday, March 21, 2009 - link
Thanks for your post. A "thumbs up" post like yours is the fuel that Tijl and I need to keep going :-). Definitely appreciated!

classy - Friday, March 20, 2009 - link

Nice work, and no question SSDs are truly great performers, but I don't see them being mainstream in the enterprise world for several more years. One, no one knows how reliable they are; they are not tried and tested. Two and three go hand in hand: capacity and cost. With the need for more and more storage, the cost of SSDs makes them somewhat of a one trick pony: a lot of speed, but cost prohibitive. Just at our company we are looking at a separate data domain just for storage. When you start talking about the need for several terabytes, SSDs just aren't going to be considered. It's the future, but until they drastically come down in cost and increase in capacity, their adoption will be minimal at best. I don't think speed trumps capacity in the enterprise world right now.

virtualgeek - Friday, March 27, 2009 - link
They are well past being "untried" in the enterprise - and we are now shipping 400GB SLC drives.gwolfman - Friday, March 20, 2009 - link
[quote]Our Adaptec controller is clearly not taking full advantage of the SLC SSD's bandwidth: we only see a very small improvement going from four to eight disks. We assume that this is a SATA related issue, as eight SAS disks have no trouble reaching almost 1GB/s. This is the first sign of a RAID controller bottleneck.[/quote]

I have an Adaptec 3805 (the previous generation to the one you used) that I used to test four of OCZ's first SSDs when they came out, and I noticed this same issue as well. I went through a lengthy support ticket cycle and got little help and no explanation. I was left thinking it was the firmware, as two SAS drives had a higher throughput than the four SSDs.
supremelaw - Friday, March 20, 2009 - link
For the sake of scientific inquiry primarily, but not exclusively,another experimental "permutation" I would also like to see is
a comparison of:
(1) 1 x8 hardware RAID controller in a PCI-E 2.0 x16 slot
(2) 1 x8 hardware RAID controller in a PCI-E 1.0 x16 slot
(3) 2 x4 hardware RAID controllers in a PCI-E 2.0 x16 slot
(4) 2 x4 hardware RAID controllers in a PCI-E 1.0 x16 slot
(5) 2 x4 hardware RAID controllers in a PCI-E 2.0 x4 slot
(6) 2 x4 hardware RAID controllers in a PCI-E 1.0 x4 slot
(7) 4 x1 hardware RAID controllers in a PCI-E 2.0 x1 slot
(8) 4 x1 hardware RAID controllers in a PCI-E 1.0 x1 slot
* If x1 hardware RAID controllers are not available, then substitute x1 software RAID controllers instead, to complete the experimental matrix.
If the controllers are confirmed to be the bottlenecks for certain benchmarks, the presence of multiple I/O processors -- all other things being more or less equal -- should tell us that IOPs generally need more horsepower, particularly when solid-state storage is being tested.
Another limitation to face is that x1 PCI-E RAID controllers may not work in multiples installed in the same motherboard, e.g. see Highpoint's product here: http://www.newegg.com/Product/Product.aspx?Item=N8...
Now, add different motherboards to the experimental matrix above, because different chipsets are known to allocate fewer PCI-E lanes even though slots mechanically have more lanes, e.g. only x4 lanes actually assigned to an x16 PCI-E slot.
MRFS
supremelaw - Friday, March 20, 2009 - link
More complete experimental matrix (see shorter matrix above):

(1) 1 x8 hardware RAID controller in a PCI-E 2.0 x16 slot
(2) 1 x8 hardware RAID controller in a PCI-E 1.0 x16 slot
(3) 2 x4 hardware RAID controllers in a PCI-E 2.0 x16 slot
(4) 2 x4 hardware RAID controllers in a PCI-E 1.0 x16 slot
(5) 1 x8 hardware RAID controller in a PCI-E 2.0 x8 slot
(6) 1 x8 hardware RAID controller in a PCI-E 1.0 x8 slot
(7) 2 x4 hardware RAID controllers in a PCI-E 2.0 x8 slot
(8) 2 x4 hardware RAID controllers in a PCI-E 1.0 x8 slot
(9) 2 x4 hardware RAID controllers in a PCI-E 2.0 x4 slot
(10) 2 x4 hardware RAID controllers in a PCI-E 1.0 x4 slot
(11) 4 x1 hardware RAID controllers in a PCI-E 2.0 x1 slot
(12) 4 x1 hardware RAID controllers in a PCI-E 1.0 x1 slot
MRFS