I have made some real-world, non-scientific tests of the I/O speeds of iSCSI and different network protocols in OS X.
My setup:
- Early 2011 MacBook Pro running OS X 10.7 Lion, connected to a Netgear gigabit switch
- Qnap TS-419P II NAS with 4 disks in RAID 5, connected to the Netgear gigabit switch
- Buffalo LinkStation Pro NAS with 1 disk, connected to the Netgear gigabit switch
- globalSAN iSCSI initiator for OS X was used for the iSCSI tests
The test was made by copying (cp) about 2 GB of camera raw files (each about 20-25 MB in size) to the device, restarting the device, and copying the same data back to the local SSD drive.
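The throughput numbers below come from timing the copy and dividing bytes by seconds. A minimal sketch of that calculation (the helper function name and the paths are examples, not my exact script):

```shell
# throughput_mbs BYTES SECONDS -> prints MB/s to two decimals (1 MB = 1e6 bytes)
throughput_mbs() {
  awk -v b="$1" -v s="$2" 'BEGIN { printf "%.2f\n", b / s / 1e6 }'
}

# Hypothetical test run: time the copy, then compute MB/s.
# start=$(date +%s)
# cp ~/raw/* /Volumes/share/
# end=$(date +%s)
# bytes=$(du -sk ~/raw | awk '{print $1 * 1024}')
# throughput_mbs "$bytes" "$((end - start))"
```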
Write performance:
- Qnap, async NFS = 34.59 MB/s
- Qnap, AFP = 31.83 MB/s
- Qnap, iSCSI = 31.89 MB/s
- *Qnap, SMB, cp = 30.71 MB/s
- Qnap, NFS = 27.22 MB/s
- Buffalo, AFP = 10.07 MB/s
- *Qnap, SMB, mv = 3.93 MB/s
*) Only with SMB did I get very different write results depending on whether the files were copied to the device with cp or moved with mv!
Setting the async option for NFS greatly improves read performance.
I used the following mount command for the tests:
mount -t nfs -o resvport,soft,intr,rsize=32768,wsize=32768,timeo=900,retrans=3,proto=tcp,vers=3,async server:/share /private/share/
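For reference, one full NFS test cycle looked roughly like this (server:/share and the local paths are placeholders, and this is an outline rather than my exact script):

```shell
# Rough outline of one NFS test cycle; run the timed cp passes as above.
sudo mkdir -p /private/share
sudo mount -t nfs -o resvport,soft,intr,rsize=32768,wsize=32768,timeo=900,retrans=3,proto=tcp,vers=3,async server:/share /private/share

cp ~/raw/* /private/share/        # timed write pass
sudo umount /private/share
# ...restart the NAS here so its read cache is cold...
sudo mount -t nfs -o resvport,soft,intr,rsize=32768,wsize=32768,timeo=900,retrans=3,proto=tcp,vers=3,async server:/share /private/share
cp /private/share/* ~/readback/   # timed read pass
```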
Read performance:
- Qnap, async NFS = 71.99 MB/s
- Qnap, AFP = 67.44 MB/s
- Qnap, iSCSI = 60.22 MB/s
- Qnap, NFS = 46.51 MB/s
- Qnap, SMB = 35.82 MB/s
- Buffalo, AFP = 5.46 MB/s
The protocols seem to handle caching differently. These are the results I got when copying the files to the device and immediately back to the local SSD drive (without restarting the device):
Read performance (without restart):
- Qnap, iSCSI = 151.71 MB/s
- Buffalo, AFP = 145.54 MB/s
- Qnap, AFP = 143.23 MB/s
- Qnap, async NFS = 71.99 MB/s
- Qnap, NFS = 47.37 MB/s
- Qnap, SMB = 38.13 MB/s
My conclusion: I will use either AFP or NFS, since both protocols give similar performance and more flexibility (compared to iSCSI) for my purposes (Lightroom, backup, media streaming).
Thanks for the article, it was useful! Can I say that in my case AFP will be faster and easier to use than the other ones? – Kami – 2010-02-08T22:35:10.657
The NFS v3 results reported are pretty lousy. In my moderately extensive work optimizing throughput for customers, NFS tends to be twice as fast as SMB, or more.
With one server and one host over a gigabit network with no other traffic, I can get 900 megabits per second with NFS when CIFS struggles past 250.
That's not sustainable real-world throughput, to be sure, and doesn't invalidate these tests per se, but does make me quite suspicious about the results. – Jon Lasser – 2010-02-09T00:12:26.193
@Jon Lasser: The trouble with such tests is that they are usually only correct for the hardware used AND the nature of the test. In general, they are at best only weak indicators. – harrymc – 2010-02-09T07:15:33.660
@Kami: I can't answer that, as per my comment above to Jon Lasser. If you have the time, you might try to duplicate these test results on your hardware. If they differ in any way, it might be interesting for you to publish it here. – harrymc – 2010-02-09T07:17:50.227