
I have two SuperMicro servers directly connected (no switch) via two Intel X540-T2 10 GBit NICs. One server runs Citrix XenServer 6.2, the other runs Debian 7.

I then installed open-iscsi and iscsitarget on the Debian system, configured a 12 GByte RAM disk, attached it as iSCSI storage on the XenServer, and provided a 12 GByte virtual disk to one of the VMs running on that XenServer.
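For reference, there is more than one way to provide such a RAM disk; the targetcli listing below shows that mine is the LIO rd_mcp backstore, but two common generic variants look roughly like this (paths and sizes are illustrative):

# kernel brd ramdisk: one 12 GiB /dev/ram0 (rd_size is in KiB)
modprobe brd rd_nr=1 rd_size=12582912
# or a tmpfs-backed file; note tmpfs can be paged out to swap under memory pressure
mount -t tmpfs -o size=12g tmpfs /mnt/ram
dd if=/dev/zero of=/mnt/ram/disk.img bs=1M count=12288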

It turns out that I can't get more than about 290 MByte/s:

root@s1002:~# dd if=/dev/zero of=/dev/xvdb bs=16M
dd: writing `/dev/xvdb': No space left on device
737+0 records in
736+0 records out
12348030976 bytes (12 GB) copied, 42.6216 s, 290 MB/s
root@s1002:~# dd if=/dev/xvdb of=/dev/null bs=16M
736+0 records in
736+0 records out
12348030976 bytes (12 GB) copied, 46.0591 s, 268 MB/s

I then repeated the same test against a commercial storage appliance and got roughly 450 MByte/s, even though it uses physical disks.

I expected a similar or even better speed when using my Linux server with a ramdisk, but it seems that either my iscsitarget configuration or my network configuration is not optimal. The network is configured with jumbo frames (tested with ping -M do -s 8972 ipaddr on both ends). The targetcli setup is pretty much the default configuration:

/> ls
o- / ....................................................................................................................... [...]
  o- backstores ............................................................................................................ [...]
  | o- fileio ................................................................................................. [0 Storage Object]
  | o- iblock ................................................................................................. [0 Storage Object]
  | o- pscsi .................................................................................................. [0 Storage Object]
  | o- rd_dr .................................................................................................. [0 Storage Object]
  | o- rd_mcp ................................................................................................. [1 Storage Object]
  |   o- ramdisk ............................................................................................. [ramdisk activated]
  o- iscsi ........................................................................................................... [1 Targets]
  | o- iqn.2003-01.org.linux-iscsi.server85.x8664:sn.f63360d26dd2 ........................................................ [1 TPG]
  |   o- tpgt1 ......................................................................................................... [enabled]
  |     o- acls .......................................................................................................... [0 ACL]
  |     o- luns .......................................................................................................... [1 LUN]
  |     | o- lun0 ..................................................................................... [rd_mcp/ramdisk (ramdisk)]
  |     o- portals .................................................................................................... [1 Portal]
  |       o- 10.0.12.85:3260 ................................................................................................ [OK]
  o- loopback ......................................................................................................... [0 Target]
  o- tcm_fc ........................................................................................................... [0 Target]
/>

How can I configure iscsitarget and/or the NIC to improve the network performance so that it matches the commercial storage?

  • You might have to tune the network buffers in the Linux kernel. Search for "linux 10ge sysctl tuning" on the internet. I don't think there is one good configuration that fits all, so you have to test. Use iperf for generic benchmarking of your network. – Thomas Oct 01 '16 at 13:21
  • `I expected a similar or even better speed when using my Linux server with a ramdisk` - Why would you expect the same or better performance from a general purpose OS than from a purpose built storage array? – joeqwerty Oct 01 '16 at 14:50
  • Because I did not expect the network setup to be a problem, and a ramdisk should have better I/O performance than the spinning disks of the other storage. – nn4l Oct 01 '16 at 16:27
  • _dd: writing `/dev/xvdb': No space left on device_ Does the RamDisk use an old-style RamDisk, tmpfs, or something newer? One could write past the last block with old-style RamDisks. Old-style RamDisk vs. tmpfs is easier to see when configured via /etc/fstab, but I'm not sure with iSCSI. Can you post the output of mount on the Xen host? – rjt Oct 24 '16 at 03:53
  • Disconnect the disk from the XenServer. On the Debian host, initiate a localhost-to-localhost mount of the target. Repeat the same dd tests. – rjt Oct 24 '16 at 03:58
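Following Thomas's suggestion in the comments, raw TCP throughput can be checked with iperf before blaming the iSCSI layer, and the kernel's network buffers can be raised via sysctl. A rough sketch, assuming iperf is available on both hosts (the sysctl values are illustrative starting points, not a universal 10GbE tuning):

# on the Debian target
iperf -s
# on the XenServer dom0 (10.0.12.85 is the portal address from the question)
iperf -c 10.0.12.85 -t 30 -P 4

# example buffer tuning; test and adjust for your own hardware
sysctl -w net.core.rmem_max=16777216
sysctl -w net.core.wmem_max=16777216
sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"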

1 Answer


First, even though it is called a ramdisk, it may actually be hitting the spinning platters. There are many types of RamDisks nowadays, and the tmpfs kind can use the hard disk (via swap) as well as RAM. I would like to see your test against a 12GB file backing store on spinning platters; it might be the same speed.
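Such a disk-backed comparison can be set up with a fileio backstore; a minimal sketch (the object name and file path here are made up):

# 12GB file-backed storage object on the spinning disks, to map to a LUN in place of the ramdisk
targetcli /backstores/fileio create name=file12GB file_or_dev=/srv/file12GB.img size=12G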

Second, maybe you are overwriting the end of the disk and writing much more than 12GB. That has been my experience: unlike with normal disks, dd (or more likely the kernel) does not stop when it reaches the end of the ramdisk. Set a limit on how much dd writes by appending bs=1GB count=12.

Test the overwriting by creating a 4GiB RamDisk backing store using targetcli on localhost and initiating a connection to it using iscsiadm, then try writing much more than 4GB with dd.
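A rough sketch of that localhost setup, assuming an LIO target (the IQN and backstore name are made up, and the demo-mode attributes may differ between targetcli versions):

# create a 4GiB RAM-backed backstore and an iSCSI target for it
targetcli /backstores/ramdisk create name=RamDisk4GB size=4GiB
targetcli /iscsi create iqn.2016-10.local.test:ramdisk
targetcli /iscsi/iqn.2016-10.local.test:ramdisk/tpg1/luns create /backstores/ramdisk/RamDisk4GB
# older versions may also need an explicit portal:
# targetcli /iscsi/iqn.2016-10.local.test:ramdisk/tpg1/portals create 127.0.0.1
# open the target to any initiator without per-initiator ACLs (demo mode)
targetcli /iscsi/iqn.2016-10.local.test:ramdisk/tpg1 set attribute generate_node_acls=1 demo_mode_write_protect=0 authentication=0
# discover and log in from the same host
iscsiadm -m discovery -t sendtargets -p 127.0.0.1
iscsiadm -m node -T iqn.2016-10.local.test:ramdisk -p 127.0.0.1 --login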

# targetcli ls backstores/ramdisk/
o- ramdisk .................................................................................................... [Storage Objects: 1]
  o- RamDisk4GB ............................................................................................... [(4.0GiB) activated]

Safe to write 3GB:

time dd if=/dev/zero of=/mnt/sdd bs=1GB count=3
3+0 records in
3+0 records out
3000000000 bytes (3.0 GB) copied, 4.41983 s, 679 MB/s

real    0m6.692s
user    0m0.000s
sys     0m4.333s

But I was shocked to get no errors when writing 5GB, 6GB, 8GB, 16GB, even 32GB to only 4GB of space:

time dd if=/dev/zero of=/mnt/sdd bs=1GB count=16
16+0 records in
16+0 records out
16000000000 bytes (16 GB) copied, 36.671 s, 436 MB/s

real    0m38.301s
user    0m0.002s
sys     0m13.591s

The errors did not appear until I attempted to write 64GB to the 4GB RamDisk. It seemed to error out at about the size of my / partition, which is 50G.

time dd if=/dev/zero of=/mnt/sdd bs=1GB count=64
dd: error writing ‘/mnt/sdd’: No space left on device
49+0 records in
48+0 records out
48838684672 bytes (49 GB) copied, 122.821 s, 398 MB/s

real    2m4.682s
user    0m0.002s
sys     0m38.257s

I wish Datera.io and/or linux-iscsi.org would warn about this and provide tmpfs or one of the newer ramdisk types as options. My setup is CentOS 7, so I actually use the free branch from github.com/open-iscsi/.
