4

I have enabled Jumbo Frames (9000) in ESXi for all my vmNICs, vmKernels, vSwitches, iSCSI bindings etc - basically anywhere in ESXi that has an MTU setting, I have set it to 9000. The ports on the switches (Dell PowerConnects) are all set for Jumbo Frames. I have a Dell MD3200i with 2 controllers, each with 4 ports for iSCSI. Each of these ports is set to Jumbo Frames (9000) as well.

So now the questions:

  1. Do I need to log into each Windows Server VM I am running, open the NIC properties in Device Manager, and manually set Jumbo Frames there as well?
  2. What's the best way of testing that Jumbo Frames are indeed working as intended?
Starfish
vlannoob

3 Answers

6

Don't do this unless you know exactly what you're doing. Really only do it on your dedicated iSCSI NICs and connected switch ports and SAN NICs.

There really aren't many reasons to set non-storage ports for Jumbo Frames with modern equipment.

Chris S
MDMarra
  • I agree. Don't use jumbo frames on regular traffic unless you have a VERY specific need for it. – pauska Sep 21 '12 at 01:51
  • I have it enabled on my iSCSI connections. I have two physical hosts and a SAN box. The VMs live on the SAN box, so I just want to ensure I have the best possible throughput between them, and to see if that includes enabling Jumbo Frames inside the actual VMs themselves. Appreciate the response ;) – vlannoob Sep 24 '12 at 22:33
  • If you enable jumbo frames inside of the guests, you're not doing it for storage, you're doing it for data across the NIC. You should grab a copy of Scott Lowe's *Mastering VMware vSphere 5*. It sounds like you have a lot to learn. – MDMarra Sep 24 '12 at 23:07
3

In order to test if jumbo frames are working correctly:

  1. Enable SSH on the ESXi host and log in to the shell (VMware KB).
  2. Ping a storage IP using the don't-fragment option and a packet size larger than 1500, e.g.: vmkping -d -s 7000 storageipaddr

If you receive something like:

~ # vmkping -d -s 7000 10.10.10.10
PING 10.10.10.10 (10.10.10.10): 7000 data bytes
sendto() failed (Message too long)
sendto() failed (Message too long)
sendto() failed (Message too long)

--- 10.10.10.10 ping statistics ---
3 packets transmitted, 0 packets received, 100% packet loss

it means that there is an issue with your configuration and Jumbo Frames are not working. You should follow this doc to check that all of your virtual switches have the proper MTU size.
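The MTU check the doc describes can also be done directly from the same ESXi shell; a quick sketch (output columns vary slightly by ESXi version, and vSwitch1 is a hypothetical switch name):

```
# List standard vSwitches; the output includes each switch's configured MTU
esxcli network vswitch standard list

# List VMkernel interfaces (vmk0, vmk1, ...) with their MTU values
esxcli network ip interface list

# Fix a vSwitch MTU if one was missed (vSwitch1 is a placeholder)
esxcli network vswitch standard set -v vSwitch1 -m 9000
```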

Martino Dino
  • Aha! Cheers Martino - I will give that a go ;) – vlannoob Sep 24 '12 at 22:36
  • that should be vmkping -s 8784 -d x.x.x.x jumbo frames is mtu of 9000, less header of 216 bytes is 8784. –  Jan 30 '13 at 01:05
  • 1
    It really doesn't matter as long as the packet size is more than 1500; if jumbo frames are not enabled, the DF option will cause it to fail even at 1500 ;) – Martino Dino Jan 30 '13 at 05:21
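On the size question in the comments: for a plain IPv4 ICMP ping, the usual arithmetic is the MTU minus the 20-byte IP header minus the 8-byte ICMP header, which is where the commonly quoted 8972 figure for a 9000-byte MTU comes from. A sketch of the arithmetic:

```shell
# Largest ICMP echo payload that fits in one frame, for IPv4 with no IP options
MTU=9000
IP_HEADER=20      # IPv4 header without options
ICMP_HEADER=8     # ICMP echo header
PAYLOAD=$((MTU - IP_HEADER - ICMP_HEADER))
echo "$PAYLOAD"   # 8972 -> test the full frame with: vmkping -d -s 8972 <storage-ip>
echo $((1500 - IP_HEADER - ICMP_HEADER))   # 1472, the standard-frame equivalent
```

As the comment above notes, any don't-fragment ping larger than the path's real MTU will fail, so 7000 proves jumbo frames just as well; 8972 additionally proves the full 9000-byte frame fits.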
2

Jumbo Frames are usually disabled by default on the NIC, so you will most likely have to enable them and make sure the configured frame size matches the rest of your iSCSI network devices.
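If you do end up touching the guest NICs (per question 1), the setting can usually be inspected from inside a Windows guest rather than clicking through Device Manager; a sketch, assuming the NIC driver exposes a "Jumbo Packet" advanced property (the display name varies by driver):

```
# Show the Jumbo Packet advanced property on all NICs (display name is driver-specific)
Get-NetAdapterAdvancedProperty -DisplayName "Jumbo Packet"

# Show the effective MTU per interface
netsh interface ipv4 show subinterfaces
```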

As an aside, I've seen more than a few iSCSI implementations with I/O problems that were originally attributed to Jumbo Frames (the theory being either that Jumbo Frames were disabled and needed to be enabled, or that they were enabled and needed to be disabled) but turned out to be Ethernet flow control problems. If you experience I/O problems on your iSCSI network, the first thing I would do is look at the Ethernet statistics/counters on the iSCSI switches for a large number of Ethernet Pause frames. If you see those, your problem is related to Ethernet flow control, and you should disable flow control on the iSCSI switches.
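The Pause-frame counters live on the switches, but the NIC side of a Linux host can hint at the same problem; a sketch using ethtool (counter names vary by driver, and eth0 is a placeholder interface):

```
# Current flow-control (pause) settings on the NIC
ethtool -a eth0

# Driver statistics; look for rx_pause / tx_pause style counters climbing
ethtool -S eth0 | grep -i pause
```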

joeqwerty