I am trying to achieve high iSCSI speeds between my ESX box and my Synology NAS. I am hoping for a top speed of 300-400 MB/s, but so far all I've reached is 150-170 MB/s.
The main test I am using is creating a 20 GB virtual disk, Thick Provision Eager Zeroed, in the SSD-based iSCSI datastore (and variations of this).
Some questions:
- Am I right to assume that creating this disk is a sequential write?
- The Synology never passes 30-40% CPU usage, and memory is barely used. I am assuming the Synology is capable of writing at these speeds to an SSD, right?
- Also, is ESX able to max out the available bandwidth when creating a virtual disk over iSCSI?
- If using a benchmark tool, what would you recommend, and how can I be sure the bottleneck won't be on the sending side? Can I install the tool in a VM in the SSD datastore and run it "against itself"?
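For context on that last question, the kind of thing I had in mind was running fio inside a Linux VM that lives on the SSD datastore, with direct I/O so the guest page cache doesn't skew the numbers (the file path and sizes here are just examples):

```shell
# Sequential 1 MiB writes with direct I/O to a file on the VM's virtual
# disk, which sits on the iSCSI SSD datastore. Path and size are examples.
fio --name=seqwrite --filename=/mnt/test/fio.dat \
    --rw=write --bs=1M --size=4G --iodepth=32 \
    --ioengine=libaio --direct=1 --numjobs=1
```

But I am not sure whether a test run inside a VM like this can actually saturate the links, or whether it mostly measures the VM itself.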
This is my setup.
I have a Synology 1513+ with the following disks and configuration:
- 3 × 4 TB WD disks (unused)
- 1 Samsung 860 EVO (1 volume, no RAID)
- 1 Samsung 256 GB SATA III 3D NAND (1 volume, no RAID)
- 2 iSCSI targets, one per SSD (8 VMware iSCSI initiator sessions connected in total)
Network config:
Synology 4000 Mbps bond (4 × 1 GbE). MTU 1500, full duplex.
Synology Dynamic Link Aggregation 802.3ad LACP.
Cisco SG350 with link aggregation configured for the 4 Synology ports.
Storage and iSCSI network is physically separated from the main network.
CAT 6 cables.
vSphere:
- PowerEdge R610 (Xeon E5620 @ 2.40 GHz, 64 GB memory)
- Broadcom NetXtreme II BCM5709 1000Base-T (8 NICs)
- vSphere 5.5.0 build 1623387
VSphere config:
- 4 vSwitches, 1 NIC each for iSCSI. MTU 1500. Full duplex.
- Software iSCSI initiator with the 4 VMkernel ports bound in the port group, all compliant and path status active.
- 2 iSCSI targets with 4 MPIO paths each, all active, Round Robin.
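For reference, this is roughly how I set Round Robin per device from the ESXi shell (the naa ID below is a placeholder, not my actual LUN):

```shell
# Find the naa ID of the iSCSI LUN
esxcli storage nmp device list

# Set the path selection policy to Round Robin (naa ID is a placeholder)
esxcli storage nmp device set -d naa.60014050000000000000000000000000 -P VMW_PSP_RR

# Optionally switch paths after every I/O instead of the default 1000 IOPS
esxcli storage nmp psp roundrobin deviceconfig set \
    -d naa.60014050000000000000000000000000 -t iops -I 1
```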
So basically, 4 cables from the NAS go into the Cisco LAG, and the 4 iSCSI NICs from the ESX host go to regular ports on the same switch.
Tests and configs I've performed:
- Setting MTU to 9000 on all vSwitches, VMkernel ports, the Synology, and the Cisco switch. I have also tried other values, like 2000 and 4000.
- Creating 1 (and 2-3 simultaneous) virtual disks on 1 or 2 iSCSI targets to maximise the workload.
- Disabling/enabling Header Digest, Data Digest, and Delayed ACK.
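For the MTU 9000 tests, the end-to-end check I ran from the ESXi shell was along these lines (the vmk interface and target IP are examples; yours will differ):

```shell
# -d sets the don't-fragment bit; -s 8972 is a 9000-byte MTU minus
# 28 bytes of IP + ICMP headers. vmk interface and IP are examples.
vmkping -d -s 8972 -I vmk1 10.0.0.10
```

If this fails while a plain vmkping succeeds, jumbo frames are not enabled consistently along the whole path.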
I've lost count of all the things I have tried. I am not sure where my bottleneck is, or what I have configured wrongly. I have attached some screenshots.
Any help would be much appreciated!
Example of the VMkernel config