Single Client Performance - CIFS and NFS on Linux

A CentOS 6.2 virtual machine was used to evaluate the NFS and CIFS performance of the NAS when accessed from a Linux client. We chose IOzone as the benchmark for this case. In order to standardize the testing across multiple NAS units, we mount the CIFS and NFS shares at startup with the following /etc/fstab entries:

//<NAS_IP>/PATH_TO_SMB_SHARE /PATH_TO_LOCAL_MOUNT_FOLDER cifs rw,username=guest,password= 0 0

<NAS_IP>:/PATH_TO_NFS_SHARE /PATH_TO_LOCAL_MOUNT_FOLDER nfs rw,relatime,vers=3,rsize=32768,wsize=32768,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=<NAS_IP>,mountvers=3,mountproto=udp,local_lock=none,addr=<NAS_IP> 0 0
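Before committing these entries to /etc/fstab, it is worth mounting the shares by hand to confirm the options are accepted. A minimal sketch using the same placeholders (these verification commands are ours, not part of the original test setup):

# Mount the CIFS share manually with the same options as the fstab entry
sudo mount -t cifs //<NAS_IP>/PATH_TO_SMB_SHARE /PATH_TO_LOCAL_MOUNT_FOLDER -o rw,username=guest,password=

# Mount the NFS share with the key protocol options
sudo mount -t nfs -o rw,vers=3,proto=tcp,hard,rsize=32768,wsize=32768 <NAS_IP>:/PATH_TO_NFS_SHARE /PATH_TO_LOCAL_MOUNT_FOLDER

# Confirm the negotiated mount options
mount | grep -E 'cifs|nfs'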

The following IOzone command was used to benchmark the CIFS share:

iozone -aczR -g 2097152 -U /PATH_TO_LOCAL_CIFS_MOUNT -f /PATH_TO_LOCAL_CIFS_MOUNT/testfile -b <NAS_NAME>_CIFS_EXCEL_BIN.xls > <NAS_NAME>_CIFS_CSV.csv

IOzone provides benchmark numbers for a multitude of access scenarios with varying file sizes and record lengths. Some of these are very susceptible to client-side caching effects, as is evident in some of the graphs in the gallery below.
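Where caching effects are unwanted, they can be reduced between runs. A sketch of two common approaches (neither was applied to the numbers reported here):

# Flush the client's page cache, dentries and inodes before a run (requires root)
sync
echo 3 | sudo tee /proc/sys/vm/drop_caches

# Alternatively, iozone's -I flag requests O_DIRECT to bypass the client cache
# (note that O_DIRECT support over CIFS mounts can be hit-or-miss)
iozone -aczRI -g 2097152 -U /PATH_TO_LOCAL_CIFS_MOUNT -f /PATH_TO_LOCAL_CIFS_MOUNT/testfile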

Readers interested in the hard numbers can refer to the CSV program output here.

The NFS share was also benchmarked in a similar manner with the following command:

iozone -aczR -g 2097152 -U /nfs_test_mount/ -f /nfs_test_mount/testfile -b <NAS_NAME>_NFS_EXCEL_BIN.xls > <NAS_NAME>_NFS_CSV.csv

The IOzone CSV output can be found here for those interested in the exact numbers.
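Since the two invocations differ only in the mount point and output file names, they are easy to wrap in a small script when testing multiple units. A sketch, assuming /cifs_test_mount and /nfs_test_mount as mount points (the loop and variable names are ours):

#!/bin/bash
# Run the same iozone sweep against both shares of the unit under test
set -e
NAS_NAME="<NAS_NAME>"   # placeholder label used in the output file names
for proto in cifs nfs; do
    mnt="/${proto}_test_mount"
    label=$(echo "$proto" | tr 'a-z' 'A-Z')
    iozone -aczR -g 2097152 -U "$mnt" -f "$mnt/testfile" \
        -b "${NAS_NAME}_${label}_EXCEL_BIN.xls" > "${NAS_NAME}_${label}_CSV.csv"
done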

A summary of the bandwidth numbers for the various tests, averaged across all file and record sizes, is provided in the table below. As noted previously, some of these numbers are skewed by caching effects; a glance at the actual CSV outputs linked above makes the affected entries obvious.

Seagate NAS 4-bay - Linux Client Performance (MBps)

IOzone Test        CIFS    NFS
Init Write           44     32
Re-Write             44     36
Read                 25     76
Re-Read              25     79
Random Read          14     33
Random Write         41     35
Backward Read        14     28
Record Re-Write    683*   920*
Stride Read          23     64
File Write           45     37
File Re-Write        44     37
File Read            17     55
File Re-Read         17     55

*: Benchmark number skewed by client-side caching effects
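For readers who want to recompute such averages from the raw output, an awk sketch along these lines works against the captured report (this assumes the stock iozone -R text layout, where each section opens with a quoted name such as "Reader report", the header row carries quoted record sizes, and data rows lead with a quoted file size followed by unquoted KB/s cells; the section and file names are placeholders):

awk '
    /"Reader report"/ { in_sec = 1; next }   # start of the section of interest
    in_sec && /report/ { in_sec = 0 }        # the next section header ends it
    in_sec && NF > 1 && $2 !~ /^"/ {         # data row: skip the quoted header row
        for (i = 2; i <= NF; i++) { sum += $i; n++ }
    }
    END { if (n) printf "Average: %.1f MBps\n", sum / n / 1024 }
' <NAS_NAME>_CIFS_CSV.csv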

 

Comments

  • MadMan007 - Thursday, July 24, 2014

    Whoa, you mean a motherboard that costs more than this entire NAS performs better? SHOCKING!
  • JeffFlanagan - Thursday, July 24, 2014

    Thanks Ganesh and others who commented on the cooling.
  • harshw - Thursday, July 24, 2014

    Ganesh, I can attest to the fact that LaCie's fan design on their 5Big boxes, for example, isn't really good for thermals. As I commented earlier, I had a LaCie 5Big NAS Pro consistently overheat here in California, where ambient room temps can easily rise to 80F. I replaced the 120mm Noctua with a 150mm Thermaltake and also cut out the fan grill. That worked, but the LaCie RAID config was already broken and no amount of recovery would help.
  • ganeshts - Thursday, July 24, 2014

    That is disappointing to hear. I have the 5big Pro running without issues, but I have it perched on a shelf open on all four sides (basically a garage rack converted to hold stuff) where there is plenty of air-flow. One issue with the 5big box is that you have to make sure the underside is pretty clear (air intake is through that).

    That said, I do have more confidence in the NAS 4-bay, as the air is pulled in through the front of the unit. But, then again, the NAS 4-bay doesn't have a metal body, so it misses out on some heat loss through conduction.
  • beginner99 - Friday, July 25, 2014

    With these huge RAID rebuild times greatly increasing the risk of a second drive failure, wouldn't it make sense to add a dedicated chip that speeds this up? It would even help during normal operation.
  • Zan Lynx - Friday, July 25, 2014

    As far as I know, no current hardware is limited by the CPU when doing RAID rebuild. It is always the amount of time that it takes to read and write so many terabytes of data.

    Now maybe if the drive bays were fitted with SSDs it would be a problem.
  • jabber - Friday, July 25, 2014

    From looking at recent tests and from my own kit at home, it really looks like you need around 500MHz of ARM CPU power per 20-22MBps of data pumped down the cable.
  • Zan Lynx - Friday, July 25, 2014

    Maybe the NAS builders should start using AMD Bobcat. My little home server built on a E-350 can fill a gigabit Ethernet link.
  • wintermute000 - Sunday, July 27, 2014

    I agree. Heck, even bill us the cost difference, what, 50 more or whatever, for a real CPU like a Celeron.
