
I have a FreeNAS server with four network interfaces. The iSCSI traffic goes through two of those interfaces, and each interface has one IP address in a different subnet. For example:

igb0: 192.168.10.1/24
igb1: 192.168.11.1/24
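
On the FreeBSD side FreeNAS manages this addressing through its GUI; a rough manual equivalent, shown only to make the layout explicit (interface names taken from above), would be:

    # Hypothetical manual equivalent of the FreeNAS interface addressing above
    ifconfig igb0 inet 192.168.10.1 netmask 255.255.255.0 up
    ifconfig igb1 inet 192.168.11.1 netmask 255.255.255.0 up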

There are three XenServer hosts, each with only one interface dedicated to iSCSI traffic. So in this layout there are two iSCSI interfaces on the storage and three in total across the hosts.

My plan was to achieve up to 2Gbit/s of aggregate throughput to the hosts, limited to 1Gbit/s per host.

The problem starts with the different subnets. I don't know how to put two IP addresses in different subnets on the same network interface on the XenServer hosts; XenCenter simply doesn't let me do this. Another idea was to isolate this traffic with different VLANs. That's fine in principle, but it doesn't appear to work either.

EDIT: Unfortunately LACP does not work as expected; there is more info in the FreeNAS docs: "LACP and other forms of link aggregation generally do not work well with virtualization solutions. In a virtualized environment, consider the use of iSCSI MPIO through the creation of an iSCSI Portal. This allows an iSCSI initiator to recognize multiple links to a target, utilizing them for increased bandwidth or redundancy. This how-to contains instructions for configuring MPIO on ESXi."

That's why I'm trying to set up MPIO, even with VLANs and hacks, to achieve 2Gbit/s to the storage.
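
For context, the portal/MPIO approach from the FreeNAS docs boils down to the initiator logging in to the same target through more than one portal address. A minimal sketch with the standard open-iscsi tools (the IQN below is a made-up placeholder), and of course this only helps if the host can actually reach both subnets, which is exactly my problem:

    # Discover the target through one portal address...
    iscsiadm -m discovery -t sendtargets -p 192.168.10.1

    # ...then log in through each portal the host can reach; MPIO then gets
    # one session (path) per address (the IQN is a hypothetical placeholder)
    iscsiadm -m node -T iqn.2014-04.example:target0 -p 192.168.10.1 --login
    iscsiadm -m node -T iqn.2014-04.example:target0 -p 192.168.11.1 --login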

Vinícius Ferrão
  • You need multiple physical connections (layer 1 and 2) from the hosts to the iSCSI storage. Adding another IP address (layer 3) isn't going to double the iSCSI bandwidth. – joeqwerty Apr 06 '14 at 15:20
  • Hi Joe, I understand. Perhaps I explained it badly: there are TWO interfaces on the storage with iSCSI, and I want a peak of 2Gbit/s when two VM hosts are generating traffic. – Vinícius Ferrão Apr 06 '14 at 15:53
  • How many connections to the storage does each host have? – joeqwerty Apr 06 '14 at 15:57
  • If I understand what you are wanting, it is to increase the bandwidth available to the FreeNAS server and share the load between interfaces, so that both FreeNAS interfaces are loaded evenly up to 2Gbit in total, while each XenServer host gets maximum performance limited to 1Gbit. Right? – hookenz Apr 06 '14 at 21:17
  • Use link aggregation on the FreeNAS server, and if you need a second IP in that other subnet for some reason, add it as an alias against the aggregated link. Forget about MPIO: it seems like a good idea, but it's already solved at the L3 level with link aggregation. – hookenz Apr 06 '14 at 21:18
  • I'm assuming you wanted to use MPIO to provide redundancy and help with even loading on the server. LACP does all this for you, and you only need to configure the FreeNAS end. – hookenz Apr 06 '14 at 21:36
  • The major problem is that I'm aware of LACP and was using it, but it never worked as expected. Digging through the FreeNAS docs I found this: "LACP and other forms of link aggregation generally do not work well with virtualization solutions. In a virtualized environment, consider the use of iSCSI MPIO through the creation of an iSCSI Portal. This allows an iSCSI initiator to recognize multiple links to a target, utilizing them for increased bandwidth or redundancy. This how-to contains instructions for configuring MPIO on ESXi." ---- So apparently LACP isn't really an option either. – Vinícius Ferrão Apr 07 '14 at 04:33
  • The key sentence being `This allows an iSCSI initiator to recognize multiple links to a target`. Do you have multiple links from each host to the storage? Multiple IP addresses bound to the same NIC do not constitute multiple links. – joeqwerty Apr 07 '14 at 16:20
  • @Matt is correct here, as is joeqwerty. Trust us, go with LACP; it is the best you are going to get in this particular configuration. – Jed Daniels Apr 07 '14 at 16:47
  • Most LACP issues come down to configuration and switches. I've never had an issue with it when using it in a virtualized environment and with Xen. The comment in the FreeNAS docs may be old, or it may relate to some specific problem with the bonding driver implementation FreeNAS uses. – hookenz Apr 07 '14 at 19:58

3 Answers


Use LACP for NFS. Use MPIO for iSCSI.

If your hypervisor hosts don't have storage interface redundancy, that's where you should focus your attention; no hacks, no bullshit. Add an additional NIC to your hosts and configure MPIO.
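
As a sketch only: on XenServer 6.x, once the second storage NIC and its IP are in place, multipathing is typically enabled per host from the CLI (the host UUID is a placeholder; the other-config keys are the ones Citrix documents for dm-multipath):

    # Disable (maintenance) the host, switch on the device-mapper multipath
    # handler, then re-enable; repeat for each host in the pool
    xe host-disable uuid=<host-uuid>
    xe host-param-set uuid=<host-uuid> other-config:multipathing=true
    xe host-param-set uuid=<host-uuid> other-config:multipathhandle=dmp
    xe host-enable uuid=<host-uuid>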

ewwhite
  • So, what I said, except less explanation, less words, and a day later. I shall keep your technique in mind for future answers, clearly it works. – Jed Daniels Apr 07 '14 at 18:01
  • The logic of the original question was flawed. A long explanation around a faulty idea *almost* gives the bad idea credibility. Here, the focus is to promote the use of resilient hypervisor storage networking. So while terse, my recommendation is the *PRO* solution. – ewwhite Apr 07 '14 at 18:26
  • I agree. I had mistakenly assumed the goal here was to educate, not just to provide the most professional answer. You are absolutely correct, and in my professional career I would never give such a detailed answer, because it does give the false assumptions in the question more credit than they deserve. Providing such a detailed response to a client would not be an effective use of time. It is far better to say "do it right, trust me, I'm a professional" than to help someone understand and implement a solution to the wrong question. As I said, I'll try to keep your technique in mind. – Jed Daniels Apr 07 '14 at 18:52
  • @JedDaniels No, you explained it well. But the message wasn't getting through. I'm coming off of a flurry of [***bad FreeNAS questions***](http://serverfault.com/a/586967/13325), so patience is wearing thin. – ewwhite Apr 07 '14 at 18:55
  • Youch! That is quite the string of bad questions. I prescribe alcohol to help you cope. – Jed Daniels Apr 07 '14 at 19:13
  • Jed, you're wrong. Actually I'm posting here because I need to know WHY it should be this way; I don't like plain answers. So whatever ewwhite said, I vehemently disagree with him and with the FreeNAS hate. Just to be clear, I do know how to use FreeNAS, and I was only questioning the FreeNAS documentation, because it seemed strange on this point. This is why I marked your answer as the best one. ewwhite's answer is OK for people looking for fast answers without the reasoning, which is valid but definitely not my case; I've upvoted it too, because it appears to be a good and correct answer. – Vinícius Ferrão Apr 08 '14 at 02:26

If each host only has one interface for iSCSI, then you won't be able to use MPIO with the setup you've described here. However, you should be able to configure the FreeNAS system to use Link Aggregation (LACP), so that you can service two hosts simultaneously each at 1Gb (for a total of 2Gb from the FreeNAS). Instead of MPIO, look into LACP (or, get a second NIC for each host).
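
A rough idea of what that looks like underneath FreeNAS on the FreeBSD side (in practice you create the aggregation in the FreeNAS GUI, and the switch ports need a matching LACP group; the address just reuses one of the subnets from your question):

    # Build an LACP lagg over the two iSCSI NICs and give it a single address
    ifconfig lagg0 create
    ifconfig lagg0 up laggproto lacp laggport igb0 laggport igb1 \
        192.168.10.1 netmask 255.255.255.0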

EDIT: The reason that LACP is generally not recommended for virtualization is because it doesn't do what people expect. They usually expect that by putting two NICs on a host and two NICs on the storage, they can double the bandwidth to the storage for a single VM (or even from multiple VMs on that one host). It doesn't work that way, but MPIO, when properly configured, does. However, this clearly isn't what you are trying to do. If I read your original question correctly, you have two 1Gb NICs in the storage box, and one 1Gb NIC in each of the XenServer Hosts (for storage, at least--let's ignore the other network connectivity for the moment). What you want is for each of the hosts to be able to saturate their connection to the storage box simultaneously. LACP on the storage box is exactly the correct solution here (no need for LACP on the XenServer hosts, since they only have one NIC each).

If you are really insistent on making this work with MPIO, it can be done, but it would be a terrible, dirty hack. You'd basically have to configure each of the hosts with a dummy NIC on the other storage network, then tell XenServer to use the two NICs in an MPIO configuration. XenCenter certainly won't let you configure it that way, so you'd have to hack it from the command line. I'm not going to tell you how to do that, because it is the wrong thing to do. It would likely break when you make any configuration changes, and it would almost certainly break when you upgrade XenServer.

Trust the community: configure LACP on the storage box only, and you'll get what you want here. If you need an analogy to settle your mind, think of it as installing a single 2Gb NIC in the FreeNAS box. (Of course, with that said, the other option is to add a 10Gb NIC to the FreeNAS box and connect it to a 10Gb port on the switch the hosts are connected to, but I'd guess that your switch doesn't have a 10Gb port on it.)

Jed Daniels

LACP

Link bonding happens at the Ethernet layer (L2), not the IP layer (L3). LACP-based aggregation uses a hash, which can be an L2 hash, an L3 hash, or even an L4 hash (i.e. looking into the TCP/UDP port numbers), and this hash (by design) prevents a single session from spanning more than a single physical interface. Thus, a single iSCSI session to one target across LACP will only give you the speed of one interface, at best.
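
To illustrate with the FreeBSD lagg driver that FreeNAS uses: newer releases let you choose which headers feed that hash, but even the widest choice still maps any one TCP session to exactly one member port (a sketch, assuming a lagg0 built from igb0 and igb1):

    # Hash on L2+L3+L4 headers; a given MAC/IP/port tuple always lands on the
    # same member link, so a single iSCSI TCP session never exceeds one link
    ifconfig lagg0 lagghash l2,l3,l4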

MPIO

It is possible to open multiple sessions between a single initiator and a single target on a given IP address, and if this traffic travels across an LACP-bonded connection there are reasons why you might want to. Sadly, not all combinations of initiator and target support this. In my testing with Citrix XenServer 6.2 (the freely downloadable installer, not any enhanced version), my observation has been that when Multipath I/O is enabled, it opens exactly one session to each IP address it can find. Thus, if you want multiple paths, you need to set up multiple IP addresses on multiple interfaces.
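
A quick way to see this behaviour from a XenServer host's dom0, where both tools are available (a sketch):

    # One open-iscsi session should appear per portal IP the host can log in to
    iscsiadm -m session

    # and dm-multipath should then report one path per session for the LUN
    multipath -ll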

Newer versions of Linux open-iscsi do have this extra feature (multiple sessions to the same portal), so I would guess XenServer will also get it at some stage.
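
For reference, the feature in question is open-iscsi's ability to open several sessions to the same portal, controlled by the node.session.nr_sessions setting; a sketch with placeholder target and portal values:

    # Request two sessions to the same target/portal, then (re)log in
    iscsiadm -m node -T <target-iqn> -p <portal-ip> \
        --op update -n node.session.nr_sessions -v 2
    iscsiadm -m node -T <target-iqn> -p <portal-ip> --login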

Tel