4

We have a Dell PowerVault MD3200i connected to a server via iSCSI through two Dell PowerConnect 6224 switches at 1 Gbps. The server runs VMware ESXi 6.5 and is connected to the PowerVault with two 1 Gbps NICs, one going to each switch.

I have created a VM with one virtual disk that resides on the PowerVault. I have enabled Jumbo Frames on each switch, in VMware and on each interface on the PowerVault, but I didn't notice any performance improvement.
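For reference, on the ESXi side I raised the MTU with commands along these lines (vSwitch1, vmk1 and vmk2 are just placeholders for the names in my setup):

    # raise the MTU on the standard vSwitch carrying the iSCSI vmkernel ports
    esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000
    # raise the MTU on each iSCSI vmkernel interface
    esxcli network ip interface set --interface-name=vmk1 --mtu=9000
    esxcli network ip interface set --interface-name=vmk2 --mtu=9000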

I was testing the performance of the virtual disk with IOmeter. Without Jumbo Frames enabled I get around 739 total IOPS and 28.5 total MB/s; with Jumbo Frames enabled I get about 640 total IOPS and about 22-24 total MB/s.

Shouldn't it be the other way around? I thought I would get better performance by setting the MTU to 9000, but it seems to be the opposite.

I have confirmed that Jumbo Frames are enabled on every device in the path (VMware server, switches and PowerVault), because I can ping from the ESXi host to the PowerVault with vmkping -d -s 8972 *ip-powervault* and I get replies from the PowerVault without errors.
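The checks I ran look roughly like this (vmk1 and vmk2 again stand in for my actual iSCSI vmkernel ports):

    # confirm the configured MTU on the vmkernel interfaces
    esxcli network ip interface list
    # ping the PowerVault with don't-fragment set and an 8972-byte payload,
    # once per iSCSI vmkernel port
    vmkping -I vmk1 -d -s 8972 *ip-powervault*
    vmkping -I vmk2 -d -s 8972 *ip-powervault*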

Am I missing something, or is this something specific to the PowerVault MD3200i?

Thanks and best regards.

U880D
Alberto Medina
  • Then just leave Jumbo Frames off. I don't enable them in most environments. – ewwhite Nov 27 '17 at 06:02
  • You should consider tailoring your IOmeter testing to fit the expected I/O profile of your real workload before deciding whether a change like this becomes permanent. Simulate your planned workload, test various valid configurations, then keep whatever gives you the desired performance and reliability. – JimNim Nov 27 '17 at 15:35

2 Answers

4

Enabling jumbo frames for iSCSI connections usually boosts performance with larger packets/blocks, so make sure you run your benchmarks with a large block size, 64K for example.
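As a rough illustration, a sequential test from inside a Linux guest could compare a small and a large block size with direct I/O, something like this (the test file path is just an example):

    # 4K blocks - per-packet overhead dominates, jumbo frames barely matter
    dd if=/dev/zero of=/mnt/test/ddtest bs=4k count=262144 oflag=direct
    # 64K blocks - larger I/Os are where jumbo frames can show a gain
    dd if=/dev/zero of=/mnt/test/ddtest bs=64k count=16384 oflag=direct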

Do not expect more than a 5-15% performance boost from jumbo frames.

Net Runner
1

I tried jumbos on our MD3220i and found that they didn't do much. You gain 3-4% when the network is the bottleneck, thanks to the lower overhead, and that's about it.

I tested simple sequential throughput with dd (bs=4k, bs=1M and bs=16M) from an ESXi guest (over the VMkernel software initiator) and from a physical host with the Windows initiator, with either two or four NICs, using 1:1 IP connections and round-robin with subset, as recommended by Dell.
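On the ESXi side, the closest equivalent of that load-balancing policy is the native round-robin path selection policy (VMW_PSP_RR); setting it per LUN looks roughly like this (the naa identifier is a placeholder for the actual device):

    # set the native round-robin path selection policy on the MD32xxi LUN
    esxcli storage nmp device set --device naa.xxxxxxxx --psp VMW_PSP_RR
    # verify the active policy and paths
    esxcli storage nmp device list --device naa.xxxxxxxx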

Jumbos can actually slow your connection down when something isn't working correctly - some NICs don't cope so well, but mostly it's the switches playing up (jumbos in combination with sFlow, for instance).

It worked, but since there wasn't any real benefit we went back to standard frames.

Zac67