6

Right now I have 3 VMware ESXi hosts using 4Gb fibre channel to our NetApp for storage. I'd like to switch to NFS over 10Gb ethernet.

Each ESXi server has two 10Gb ethernet ports and each controller on my NetApp has two 10Gb ethernet ports. The only piece left that I need to get is the ethernet switches.

I'd like to have two ethernet switches for redundancy, so that if one switch dies, storage will still work, identical to the dual-switch fibre channel multipath I/O I have now.

But how do you do the same thing for NFS over ethernet? I know how to handle the ESXi side and the NetApp side of the equation; it's just the switching side where I don't quite know what to do.

I know how to do a LACP trunk/etherchannel bonding, but that doesn't work between physically separate switches.

So, can you recommend a pair of Cisco switches to use for this purpose and which Cisco IOS features I'd use to enable this kind of multipath NFS I/O? I'd like the switch to have at least 12 10Gb ports each. I know these switches will be mega-expensive, that's fine.

DigiSage
  • Would very much love to hear from someone that is actually using 10Gb ethernet NFS NetApp storage with ESXi in production.... – DigiSage Feb 17 '11 at 02:25
  • 1
    I got a Cisco rep from CDW on the phone and we figured this out. The industry standard way of doing this, with Cisco anyway, is by taking advantage of VPC, "virtual port channel", available in the Nexus 5000 and 7000 series switches: http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9402/white_paper_c11-516396.html It basically lets you do LACP across physically separate switches. The specific switch that seems to fit best here is the Nexus 5548. I now consider this question "answered". – DigiSage Feb 17 '11 at 23:21
  • If you had to figure anything out then it is hardly an industry standard. – JamesRyan Jun 17 '14 at 12:06
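For reference, the vPC approach described in the accepted comment above has roughly this shape on NX-OS. This is only a sketch: the VLAN, port numbers, and keepalive addresses are made up, and the white paper linked above is the real deployment guide.

    ! On each Nexus 5548 (mirror on the peer, swapping the keepalive addresses)
    feature lacp
    feature vpc

    vpc domain 10
      peer-keepalive destination 10.0.0.2 source 10.0.0.1

    ! Inter-switch peer link
    interface port-channel1
      switchport mode trunk
      vpc peer-link

    ! One 10Gb member port per switch toward a NetApp controller;
    ! the same vpc number on both switches makes it one logical LACP channel
    interface ethernet1/1
      switchport access vlan 100
      channel-group 20 mode active

    interface port-channel20
      switchport access vlan 100
      vpc 20

Because both switches present port-channel 20 as a single LACP partner, the NetApp (and likewise each ESXi host) can bond one link to each physical switch and survive a whole-switch failure.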

2 Answers

1

My firm just expanded our Cisco 4507 chassis switch by adding another supervisor engine and 6-port 10GbE line cards to accommodate the storage network (VMware and NexentaStor/ZFS). I know it's not the multiple-switch arrangement, but it was a good way to get the number of ports we needed. Elsewhere in the industry, the Cisco Nexus and 4900M seem to be popular choices for the solution you're requesting.

ewwhite
  • Thanks for the switch recommendation, but that's just half the equation, I also need to know which IOS features / configuration to use to implement multipath I/O. – DigiSage Feb 16 '11 at 01:49
0

This document is specific to the Linux bonding driver, but it has some good information about configuring reliable network topologies such as you've requested.

It looks like you may be able to do what you want using a "single-mode interface group" on your NetApps. Only one of the 10Gb interfaces would be in use at any given time, and if it failed, the filer would start using the second interface. This would look something like:

vif create single vif0 e0a e0c

Your filers and ESX hosts would each have one connection to each switch.

[That syntax is for Data ONTAP 7.1 or so (documented here); it may have changed in later versions.]
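Fleshed out a little, the 7-mode setup might look like the following. The IP address is made up, `vif favor` is optional, and on 7-mode you'd also mirror these lines in /etc/rc so they survive a reboot:

    # Create the active/passive vif from two 10Gb ports and bring it up
    # (hypothetical address; adjust for your storage VLAN)
    vif create single vif0 e0a e0c
    ifconfig vif0 192.168.1.10 netmask 255.255.255.0 up

    # Optionally prefer one link when both are healthy
    vif favor e0a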

If you haven't already, you may want to take this up with NetApp support. I'm not 100% convinced that this will Do The Right Thing in a multi-switch topology, but it seems sane.

larsks
  • Thanks for the response, but this doesn't actually address multipath I/O over ethernet, as far as I can tell. I asked NetApp for info and they sent me details on the NetApp side of things, the ESXi side of things, but nothing about the switching in between. – DigiSage Feb 16 '11 at 01:48
  • This addresses accessing your Netapp filer over multiple paths. It's an active/passive, rather than active/active configuration, but it's still multipath. The vif configuration doesn't require any specific configuration on the switches; all of the logic is handled by the filer's failover machinery. If you take out one of your switches, the filer would fail over to the second interface and continue providing service through the second switch. – larsks Feb 16 '11 at 01:53
  • Like I said in my original post, I know how to handle the netapp side of things and the ESXi side of things. It's the switching in the middle where I need help. – DigiSage Feb 16 '11 at 02:28
  • And I think I've addressed that both in my answer and explicitly in my previous comment. But if you're unhappy with this answer, that's fine. I've got other things to work on. – larsks Feb 16 '11 at 02:37
  • Are you suggesting creating a vif on the netapp for two ports, and connecting one of each port to separate switches, with no special configuration on the switch, and that will somehow work? – DigiSage Feb 16 '11 at 23:33
  • I am suggesting that you give it a shot. In theory, if you lose a switch, your ESXi hosts will ARP for the netapp and the netapp will respond over the second interface (through the remaining switch). I don't know if this will work in practice, but it seems as if it might. If you don't have the luxury of experimentation, Netapp support may be able to answer specific questions about this configuration...I won't have the luxury of a free filer to try this with until autumn. – larsks Feb 17 '11 at 01:52
  • I think the major problem with that will be (if it works) the time delay it takes for the arp request/receive to happen. That's why some switches support things like etherchannel across physically separate switches, but I think that's only in stacked switches. My ultimate goal in posting this question was to hear from someone saying "This is the industry standard way it's done, and how I implemented it: ", etc. I'd be happy to experiment, but I have to know what hardware to buy first, and I can't know that until I know exactly the best way of making this work, as not all switches will have it. – DigiSage Feb 17 '11 at 02:22
  • If you're still in the market for switches, Cisco's 3750-X switches are (a) stackable and (b) allow you to create aggregate channels (LACP, etherchannel, etc) across the stacked switches. [This document](http://www.cisco.com/en/US/products/hw/switches/ps5023/products_configuration_example09186a00806cb982.shtml) discusses exactly what you want, I think. – larsks Feb 17 '11 at 02:28
  • Thanks larsks, I've considered the 3750X and the problem is you can only get 2 10Gb ports per switch. I need, at a minimum, 4, preferably 8. – DigiSage Feb 17 '11 at 02:45