
We currently have Brocade 200E Fibre Channel switches connecting two EMC CLARiiONs to four VMware ESXi hosts. We are looking into new storage options using iSCSI over our existing Ethernet network, including the possibility of gradually upgrading to 10 gigabit. I have been searching for any kind of 10GBASE-T switch that is backwards-compatible with 1 gigabit and also includes the Fibre Channel ports necessary to connect to the Brocades/CLARiiONs.
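For context on the iSCSI side of this plan, enabling the software iSCSI initiator on an ESXi host is straightforward and independent of the switch choice. A rough sketch using ESXi 5.x `esxcli` syntax follows; the adapter name `vmhba33` and the portal address are hypothetical placeholders for your environment:

```
# Enable the software iSCSI initiator
esxcli iscsi software set --enabled=true
# List iSCSI adapters to find the software adapter's name (e.g. vmhba33)
esxcli iscsi adapter list
# Add the array's iSCSI portal as a send target (address is hypothetical)
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=10.0.0.50:3260
# Rescan the adapter to pick up newly presented LUNs
esxcli storage core adapter rescan --adapter=vmhba33
```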

I am not very experienced with storage administration and Fibre Channel, so I understand this question might have an obvious answer of "no", but it did seem like the Cisco Nexus 5010 with an expansion module (N5K-M1008) might work.

I also thought about using a 10Gb switch (Dell PowerConnect 8024) that has SFP ports for uplink to other switches. Are these SFP ports capable of connecting to the fiber ports on the Brocade (not necessarily just on this Dell switch, but any switch like this), or are they designed only to work as uplinks to switches of the same model?

Any insight into the specifics of fiber switching, and how fiber ports are classified would be helpful.


EDIT: I've held off commenting because I was learning a good deal from the answers, and wanted to be able to clarify as best I could. I don't necessarily need a simple switch, but more a single device that can do this (so a Cisco Nexus with the necessary modules could work). Also, it seems like for this to function, I would need my new storage to be able to support FCoE over the 10Gbps links, so that it could then reach my hosts over FC.

I understand that getting the zoning right on the FC switch might be overly complicated, but I want to see if my understanding of the technologies is now correct. So, assuming this could be accomplished, would a Nexus switch that has the 10Gbps ports, as well as FC ports from a module that connect to existing FC switches, be able to connect a new storage device (that can speak FCoE) to my existing hosts?

Paul Kroon
  • If you need to consolidate so many storage-oriented protocols, it's probably cheaper and simpler to pick one side of the fence to sit on, and retire one of these. – Jim B Feb 09 '12 at 04:40
  • @JimB - I definitely believe it may be cheaper, but I need to pull together options and pricing for management. The idea with this would be to make the investment on the 10Gbps switch now, and then the NICs later down the road when we fully move away from the fiber equipment. My thought is a dedicated 10Gbps switch and the NICs would be cheaper than a more complicated device that can handle both, but I can't be sure until I understand what that "more complicated device" would be. – Paul Kroon Feb 09 '12 at 18:52
  • One of the things I would recommend you ask for before working on building options is a budget, for a frame of reference. There are a lot of top-end datacenter convergence technologies, but you need to know where you're shopping. E.g., if you are looking for cars, are you in the Lamborghini, Corvette, or Yugo price range? You should also ask for specific goals when considering restructuring to a converged infrastructure (not to mention providing storage transparency, like an IBM SVC controller sitting on top of all of this). – Jim B Feb 09 '12 at 20:52

4 Answers


Certainly Cisco MDS 95xx FC switches can have 1Gbps and 10Gbps Fibre-Channel-over-Ethernet line cards added to them to convert regular FC traffic onto Data Centre Ethernet, which can then be fed into any FCoE/DCE-capable switch, which in turn could have regular 1Gbps and 10Gbps Ethernet ports.
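To make that concrete, bridging an FCoE-attached device into an FC fabric on a Nexus 5000 looks roughly like the NX-OS sketch below; the VSAN, VLAN, and interface numbers are hypothetical:

```
feature fcoe
vsan database
  vsan 10                      ! FC fabric carried over the converged links
vlan 100
  fcoe vsan 10                 ! map the FCoE VLAN onto the VSAN
interface vfc33
  bind interface Ethernet1/33  ! virtual FC interface riding on a 10GbE port
  no shutdown
vsan database
  vsan 10 interface vfc33
interface Ethernet1/33
  switchport mode trunk
  switchport trunk allowed vlan 100
```

The Ethernet port then carries ordinary IP traffic and the FCoE VLAN side by side, which is the "Data Centre Ethernet" consolidation described above.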

That doesn't answer your question, however: I'm unaware of any one regular Ethernet switch that's also capable of taking native FC ports; they're wildly different protocols.

Chopper3
  • Nexus should fit the bill though? Not that they're a regular ethernet switch, really, so I agree with you there. – Shane Madden Feb 08 '12 at 18:19
  • @ShaneMadden, but then you're using 5000s, and dealing with the whole L2/3 thing falls way outside the scope of this question, I think. – Chopper3 Feb 08 '12 at 18:57
  • Yup, that's a good point - it definitely goes against the implied goal of keeping it simple. – Shane Madden Feb 08 '12 at 19:00
  • @Chopper3 - I tried to edit the question title to make it a little clearer. A complicated device would be fine; I'm mainly wondering if it was possible and what the general pricing would be. – Paul Kroon Feb 09 '12 at 19:00
  • It's more than possible; I use FCoE and it works a treat. There's a very good Cisco Press book on the subject, but in a Cisco world you're really talking about ~$100k to get started with FCoE if you include some routing, plus the array of course. But it does work very well indeed. – Chopper3 Feb 09 '12 at 21:21
  • @Chopper3 - Does that mean the Nexus 5010 switch with a module that seems to be around $15k total wouldn't be enough? The $50k-$100k range is what I'm expecting to see now, so I'm wondering if this means just the one Nexus isn't enough. – Paul Kroon Feb 10 '12 at 13:45
  • exactly - you'd want redundancy if you're going that way, hence the big bill – Chopper3 Feb 10 '12 at 13:46

It's possible to do your storage switching and your network switching on the same device. However, it's a lot more expensive and complicated than keeping them separate, especially for a smaller environment.

While the SFP modules and LC connectors look identical on an FC port and an Ethernet fiber port, the two are completely different from a connectivity perspective. An FC port on a switch can only connect to another FC device, and an Ethernet port can only connect to another Ethernet device. The SFP ports on an Ethernet switch like the Dell speak a completely different language from the SFP ports on the Brocade.

To be clear on the storage protocols: if you want to use iSCSI to take advantage of faster Ethernet ports, your storage connectivity must be iSCSI the whole way. The hosts need to be talking iSCSI, the storage needs to be talking iSCSI, and they need IP connectivity between them. Likewise, if you want to use FC connectivity, your only choices are FC and FCoE, and every device participating needs to be compatible. You can't mix and match in any way except between compatible protocols (FC/FCoE).

But what you can do is to connect your CX arrays to both iSCSI and FC protocols (via both ethernet ports and FC ports), presenting LUNs over different protocols and fabrics as needed - talking iSCSI with the devices you're running on your new 10GbE switch, and FC to the devices plugged in to your Brocade. That's probably the most appropriate approach for you - but if you still want to run FC storage and network traffic on the same switch, there are options:

The Nexus switches which support FCoE have the ability to have native FC ports, providing a bridge into the Nexus FCoE fabric for classic-FC devices (as well as the desired copper and fiber Ethernet ports for IP traffic). But be very careful with your expectations about bridging them into an existing FC switch infrastructure, if you're looking to do so. I haven't looked into it in a while, but when I did it looked painful to get the zoning working the way you'd want.
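For a sense of what that zoning involves: once the FCoE device is bridged in, its WWPN is zoned like any other FC endpoint. A minimal NX-OS sketch follows; the zone names, VSAN number, and WWPNs are all hypothetical:

```
! Zone an ESXi host HBA with an FCoE-attached array port (WWPNs hypothetical)
zone name esx1_to_newarray vsan 10
  member pwwn 20:00:00:25:b5:aa:bb:01   ! ESXi host HBA
  member pwwn 50:06:01:60:11:22:33:44   ! FCoE array port
zoneset name fabricA vsan 10
  member esx1_to_newarray
zoneset activate name fabricA vsan 10
```

The painful part is not the syntax but keeping zonesets consistent between the Nexus and the existing Brocade fabric, since the two vendors manage zoning separately.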

Brocade also has converged storage and network switches, as a product of their acquisition of Foundry; I'm less familiar with their product line, but it's likely they'll have something suitable as well.

Shane Madden

You can do this with FCoE. I've seen vendors call this "Unified Network" or "Converged Network".

3dinfluence
  • I believe I am looking for FCoE. I came across those options, and I think I was thrown off by the price differences, which made me think I was looking at the wrong options. The HP converged switches seem to be about 5x the price of the Cisco switches, which definitely surprised me. I guess these both do have the same general features though, right? – Paul Kroon Feb 08 '12 at 17:39
  • I believe the HP switches there are all 10Gb ports which is why they are so expensive. They may have some options with 1Gb but I'm not that familiar with their product line. – 3dinfluence Feb 08 '12 at 17:42
  • It looks like Brocade also has some options in this area. Dell also has some options, it seems. http://www.dell.com/us/enterprise/p/powerconnect-b-8000/pd – 3dinfluence Feb 08 '12 at 17:50
  • I think that Dell solution is a rebranded Brocade switch. – 3dinfluence Feb 08 '12 at 17:51
  • I picked this answer mainly because it was the right idea and came in first. Chopper3 and Shane Madden definitely had the right answers, too. I wanted to wait a while to be sure we were done, and of course management ended up going with just iSCSI after seeing the prices and complications. – Paul Kroon May 26 '12 at 15:11

Fibre channel is a protocol, distinct from Ethernet or IP.

As such, an FC switch is not compatible with Ethernet technology.

iSCSI runs over IP but is likewise unrelated to and incompatible with FC.

If you meant "Ethernet over fibre" then I have no idea, but the Fibre Channel protocol is wholly incompatible with Ethernet or IP.

adaptr
  • You could word this better to be clear that you are not just being anal. A storage switch != a network switch. – JamesRyan Feb 08 '12 at 17:23
  • @JamesRyan Except if it's a storage and network switch. And those products certainly do exist. – Shane Madden Feb 08 '12 at 18:09
  • With both protocols being able to talk *to each other* ? I seriously doubt that - that requires a higher-level-protocol-aware router - like a...computer ;) – adaptr Feb 08 '12 at 22:34
  • @adaptr I didn't mean having them talk to each other, I meant having the same hardware be a transit for both the storage traffic and the network traffic. But with a closer read it's clear that the OP was looking to translate between protocols - so I see where you're coming from. – Shane Madden Feb 09 '12 at 04:05