
I have an OpenSolaris box sharing out two ZFS filesystems. One is an NFS connection to a CentOS box running VMware server (the disk images are stored in ZFS). The other is an iSCSI connection to a Windows 2008 server with NTFS formatting on top of the ZFS. Both connections are direct over gig-E (no switches).

I'm running munin to monitor the boxes, but I'm not sure what kind of numbers I should be expecting. Can anybody give me some baseline numbers to compare against or make any suggestions on where to start tuning?
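
If it helps, these are the kinds of commands I can run on the OpenSolaris side to grab raw numbers while a transfer is going (the pool name 'tank' is just a placeholder):

# Live pool throughput, refreshed every 5 seconds
zpool iostat -v tank 5
# Per-device latency and utilisation
iostat -xn 5
# NFS server-side operation counters
nfsstat -s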

Here are the NFS stats I'm seeing, I'll post iSCSI once I fix munin on the solaris box :P

[munin graphs: interface, nfs client]

– Sysadminicus

6 Answers


We've pushed a Sun X4100 writing over bonded GigE and iSCSI to a Sun X4500 up to 280 MB/s.

There's a lot that can be done to tune the TCP stack on Solaris to help things out; this is my stock tuning config (taken from a collection of Sun whitepapers):

$ cat /etc/rc3.d/S99ndd
#!/bin/bash
# Raise the Solaris TCP/UDP buffer sizes and connection queue limits at boot.

NDD=/usr/sbin/ndd

# Default send/receive buffer sizes, and the maximum buffer size applications may request
$NDD -set /dev/tcp tcp_xmit_hiwat 1048576
$NDD -set /dev/tcp tcp_recv_hiwat 8388608
$NDD -set /dev/tcp tcp_max_buf 8388608
$NDD -set /dev/udp udp_xmit_hiwat 1048576
$NDD -set /dev/udp udp_recv_hiwat 8388608
$NDD -set /dev/udp udp_max_buf 8388608
# Deeper listen queues for completed and half-open connections
$NDD -set /dev/tcp tcp_conn_req_max_q 65536
$NDD -set /dev/tcp tcp_conn_req_max_q0 65536
# Flush connections stuck in FIN_WAIT_2 sooner (interval in milliseconds)
$NDD -set /dev/tcp tcp_fin_wait_2_flush_interval 67500
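
If you want to sanity-check that the values actually took after boot, ndd can read them back; this is just a quick check using the same parameter names as above:

# Read a few of the tunables back to confirm they were applied
/usr/sbin/ndd -get /dev/tcp tcp_xmit_hiwat
/usr/sbin/ndd -get /dev/tcp tcp_recv_hiwat
/usr/sbin/ndd -get /dev/tcp tcp_max_buf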

Also worth looking into on your OpenSolaris machine: changing the fsflush interval, the interrupt-adjustment "magic", and disabling soft rings. Append the following to /etc/system (reboot required):

* "fsflush" tuning
set tune_t_fsflushr = 5
set autoup = 300
* Disable the Automatic Interrupt Adjustment
set dld:dld_opt = 2
* Disable "soft rings"
set ip:ip_squeue_fanout = 0
set ip:ip_soft_rings_cnt = 0
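
After the reboot you can confirm the kernel actually picked those values up by reading the variables back with mdb; a quick sanity check (values print in decimal):

# Read the live kernel variables back to verify the /etc/system settings
echo "autoup/D" | mdb -k
echo "tune_t_fsflushr/D" | mdb -k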

Worth mentioning, I do this on Solaris 10 -- not OpenSolaris -- but I think the tunables should work for you just the same.

I'm a big fan of Filebench for playing around with tuning options and doing throughput tests.
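
If you haven't used it before, a minimal interactive Filebench session looks roughly like this -- the workload name, target directory and run length here are just examples:

# Load a stock workload personality, point it at a test directory on the pool, run for 60s
filebench
filebench> load randomrw
filebench> set $dir=/tank/fbtest
filebench> run 60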

The (recently renamed) OpenSolaris 2009.06 release looks to be very exciting in the world of iSCSI and ZFS.

Hope this helps some!

– jharley

I get around 90 MB/sec to my EMC AX150i arrays over iSCSI on 1Gb Ethernet.

– Brent Ozar

For just a single dd or bonnie++ run (raw speed, linear writes) you should get pretty close to wire speed.

But once you start getting the random I/O load of multiple VMs going, your bottleneck will be the disk array much more than the transport.

Also, if you don't have a battery-backed write cache with a significant amount of RAM, your performance will crater as soon as you start getting a lot of writes alongside any other I/O.
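
For example, a quick-and-dirty linear test like the single dd run mentioned above, done from the CentOS/NFS-client side, looks something like this (the mount path and sizes are just examples, and the file should be bigger than RAM to defeat caching):

# Linear write over the NFS mount (note: if ZFS compression is enabled, /dev/zero will overstate throughput)
dd if=/dev/zero of=/mnt/nfs/ddtest bs=1M count=8192
# Drop the Linux page cache, then read the file back linearly
echo 3 > /proc/sys/vm/drop_caches
dd if=/mnt/nfs/ddtest of=/dev/null bs=1M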

– jwiz

I've been able to push data over iSCSI at about 200 Mbit/sec over 1Gb links. But I had a 14-disk RAID 6 hosted by an EMC CX4-240 with not much else using the cache at the time.

The biggest bottleneck will probably be the amount of controller cache and speed of the disks (for when the cache gets full).

– mrdenny
  • We failed in the planning stage by putting all 12 disks on the same controller. I imagine that splitting those across a second controller would be an easy speed win for us. – Sysadminicus May 27 '09 at 19:07
  • It might; it all depends on where the bottleneck is. Our RAID 6 is a single controller since it's all within a single shelf, but it's a pretty high-end piece of hardware. Where is the bottleneck? You may simply need to put more cache in the controller and/or assign a higher percentage of the cache as write cache. – mrdenny May 28 '09 at 00:47
  • I've got a similar setup (though with an AX4-5). I don't use iSCSI, but I got extremely fast transfers using unencrypted protocols between two machines on the SAN. I wish I knew of some good SAN optimization documents. – Matt Simmons Jun 03 '09 at 03:11

For those of us closer to the semi-pro end of things (rather than pro): I get a constant, consistent 150 MB/s read and 120 MB/s write from a Windows Server 2012 box with dual 1Gb NICs teamed, through a DrayTek managed switch, to a BlackArmor NAS over RJ45 copper, on a single transfer of a 20GB file with no other simultaneous operations during the test. To achieve this I am using 9k jumbo frames and RX/TX flow control, i.e. all the normal driver optimisations, but no tweaks other than turning things on and raising the jumbo frame size to the maximum.


I get around 80 MB/s to my Windows server over iSCSI on 1Gb Ethernet.
Target: KernSafe iStorage Server http://www.kernsafe.com/Product.aspx?id=5
Initiator: Microsoft iSCSI Initiator www.microsoft.com/downloads/details.aspx?familyid=12cb3c1a-15d6-4585-b385-befd1319f825&displaylang=en

Hard disk: ATA, 7200 RPM