Server Guide part 2: Affordable and Manageable Storage
by Johan De Gelas on October 18, 2006 6:15 PM EST, posted in IT Computing
Introduction
The first magnetic disk was introduced by IBM in the 305 RAMAC computer on September 13th, 1956. The first disk drive was the size of two large refrigerators, could hold 4.4 MB, and cost $10,000 per MB. Although the capacity of the hard disk has exploded and the price per GB has decreased spectacularly, the price of a complete enterprise storage solution can still quickly run to tens of thousands of dollars or more.
While building a complete server solution for our own server lab, we quickly found out that finding the best storage solution for our needs is pretty hard when you are on a tight budget. As usual, the companies active in this market are not helping out. Minor evolutions are called "Breakthrough Architectures", "Affordability" means "not too expensive unless you need more than two drive bays filled", and "Business Intelligence" or "Investment Protection" just means that the marketing people were running out of buzzwords and inspiration. In fact, the storage companies do their best to confuse people by calling both a simple SCSI DAS and a very expensive Fibre Channel SAN "scalable", "flexible", "affordable" and "serviceable".
The seasoned storage veteran quickly weeds out all the fluffy buzzwords, but what if you are relatively new to this market? What if your experience with storage has been limited to adding disks to your trusty old tower server or the workstations of your colleagues? Welcome to the second part of our server guide! Just like our first guide, this one aims to offer you a no-nonsense introduction to the server room, and in this particular guide we focus on storage performance and the different disk interfaces.
Disk performance?
Before we start discussing the different topologies and technologies in the storage world, it is good to get back to basics. The basic component of 99.9% of the storage technology out there is still the hard disk.
To understand the basic performance of a disk, take a look at what happens when a request is sent to the disk (the sketch after this list shows how these delays add up):
- The disk controller translates a logical address into a physical address (cylinder, track, and sector). Transmitting the request takes only a few tens of nanoseconds, but decoding and translating the command can take up to 1 ms.
- The head is moved by the actuator to the correct track. This is called seek time; the average seek time is somewhere between 3.5 and 10 ms.
- The rotational motor makes sure that the correct sector is located under the head. The resulting delay is called rotational latency, and it averages from 5.6 ms (5400 RPM) down to 2 ms (15000 RPM): rotational latency is determined entirely by how fast the spindle motor spins.
- The data is then read or written. The time this takes depends on how many sectors the disk has to read or write. The rate at which the data is accessed is called the media transfer rate (MTR).
- If data is read, it goes into the disk buffer and is transferred by the disk interface to the system.
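To make these numbers concrete, here is a minimal sketch in Python of how the individual delays add up to the service time of a single request. All drive specifications are hypothetical round numbers chosen for illustration, not figures from any particular datasheet:

```python
def service_time_ms(request_kb, overhead_ms=0.5, seek_ms=4.6,
                    rpm=15000, mtr_mb_s=80.0):
    """Rough estimate of the time (in ms) to service one disk request.

    overhead_ms : command decoding and address translation
    seek_ms     : average seek time of the actuator
    rpm         : spindle speed, used to derive rotational latency
    mtr_mb_s    : media transfer rate in MB/s
    """
    # Average rotational latency is half a rotation;
    # one full rotation takes 60,000 ms / rpm.
    latency_ms = 60000.0 / rpm / 2
    # Transfer time scales with the amount of data read or written.
    transfer_ms = request_kb / 1024.0 / mtr_mb_s * 1000
    return overhead_ms + seek_ms + latency_ms + transfer_ms

# A small random read is dominated by the mechanical delays...
print(service_time_ms(4))     # ~7.1 ms, of which only ~0.05 ms is transfer
# ...while a large sequential read is dominated by the transfer itself.
print(service_time_ms(4096))  # ~57.1 ms, of which 50 ms is transfer
```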
Which operation will be the most important? That depends on the amount of data you read or write. If you need many small pieces of data scattered all over the disk, seek time and rotational latency are the most important. On the other hand, if you transfer larger, contiguous pieces of data (i.e. data that is located in close proximity on the drive surface), the MTR will be the most important parameter.
To illustrate this, take a look at the table below, which calculates how much time it would take to transfer one block of 4 MB, similar to opening an MP3 song on a desktop PC. We also calculate the time it takes to get 100 different blocks of 4 KB, similar to what would happen if 100 users sent a very simple query to a database server simultaneously. At the end of the table we calculate the total time it takes to perform the requested actions, and from that the sustained transfer rate (STR): the amount of data divided by the total time.
The fastest SATA and SCSI disks performing a database and a typical desktop workload
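Using the hypothetical drive model sketched earlier (the service_time_ms() function above), the shape of the table's results can be reproduced, even though the exact figures obviously depend on the real drives used:

```python
# One sequential 4 MB transfer: a single seek + rotational wait, then a
# long read. The total time is dominated by the transfer itself.
desktop_ms = service_time_ms(4096)
desktop_str = 4.0 / (desktop_ms / 1000.0)                    # MB/s

# 100 random 4 KB reads: every access pays the full seek + latency cost.
database_ms = 100 * service_time_ms(4)
database_str = (100 * 4 / 1024.0) / (database_ms / 1000.0)   # MB/s

print(f"desktop:  {desktop_ms:6.1f} ms total, STR = {desktop_str:5.1f} MB/s")
print(f"database: {database_ms:6.1f} ms total, STR = {database_str:5.2f} MB/s")
# With these assumed specs: ~57 ms vs. ~715 ms, and ~70 MB/s vs. ~0.55 MB/s.
```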
Although it transfers only one tenth the amount of data, the database access takes almost 15 times longer. In the case of our database access, seek time and latency determine 90-95% of our disk performance, while transfer time accounts for only 1%. If we increase the size of the blocks that we need to 16 KB, little would change: the transfer time would quadruple, but the total time would hardly increase. However, if we increase the number of blocks, or more generally the number of "I/O operations" that we request, the total time necessary to complete the action scales almost linearly: twice as many I/O operations takes twice as much time.
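Both claims are easy to check with the sketch above: quadrupling the block size barely moves the total time, while doubling the number of I/O operations doubles it (same assumed specs as before):

```python
# Quadrupling the block size from 4 KB to 16 KB barely changes the total...
print(100 * service_time_ms(16))  # ~730 ms, vs. ~715 ms for 4 KB blocks
# ...but doubling the number of I/O operations doubles the total time.
print(200 * service_time_ms(4))   # ~1430 ms
```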
In our "desktop MP3" example, transfer time is good for 85% of the time: MB/s is the most important metric. File and FTP servers are somewhere between the desktop and database server examples: on average the number of KB per I/O operation is much higher than a transactional database, but I/O operations are also requested simultaneously.
So basically, there are two ways to measure storage performance (the short sketch after this list shows how the two are related):
- In MB/s
- In I/O operations per second
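The two metrics are tied together by the average request size: MB/s is just the number of I/O operations per second multiplied by the average amount of data per operation. A quick back-of-the-envelope conversion (the IOps figure below is an assumed, illustrative number):

```python
def mb_per_s(iops, avg_kb_per_io):
    """Throughput implied by a given I/O rate and average request size."""
    return iops * avg_kb_per_io / 1024.0

# A disk sustaining 140 random I/O operations per second moves very little
# data when the requests are small 4 KB database reads...
print(mb_per_s(140, 4))   # ~0.55 MB/s
# ...but the same I/O rate with 64 KB file server requests is real bandwidth.
print(mb_per_s(140, 64))  # ~8.75 MB/s
```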
Comments
slashbinslashbash - Thursday, October 19, 2006
Sounds great, thanks. If possible it'd be great to see full schematics of the setup, pics of everything, etc. This is obviously outside the realm of your "everyday PC" stuff where we all know what's going on. I administer 6 servers at a colo facility and our servers (like 90% of the other servers that I see) are basically PC hardware stuck in a rackmount box (and a lot of the small-shop webhosting companies at the colo facility use plain towers! In the rack across from ours, there are 4 Shuttle XPC's! Unbelievable!). We use workstation motherboards with ECC RAM, Raptor drives, etc. but still it's basically just a PC. These external enclosures, SAS, etc. are a whole new realm. I know that it'd be better than the ad-hoc storage situation we have now, but I'm kind of scared because I don't know how it works and I don't know how much it would cost. So now I know more about how it works, but the cost is still scary. ;)
I guess the last thing I'd want to know is the OS support situation. Linux support is obviously crucial.