3

I am putting together a Dell server, specifically an R720, and I have to select the correct Host Bus Adapter. The HBA on the R720 will connect to a storage device. I am confused between these two:

  • QLogic 2562, Dual Port 8Gb Optical Fiber Channel HBA (price $2,045)
  • QLogic 8262, Dual Port 10Gb SFP+, Converged Network Adapter (price $1,618)

I thought that since the QLogic 2562 is Fibre Channel and is more expensive, it would be faster in terms of IOPS. But it is 8Gb, as opposed to the 10Gb of the SFP+ adapter.

My questions:

  • Which one is better (IOPS performance, etc.)?
  • Why should I choose one over the other?
H A
  • What kind of storage device are you purchasing and what will the role of this server be? – SpacemanSpiff Jul 23 '12 at 23:34
  • I am designing a SharePoint farm. There will be two R720 servers. One is a Hyper-V host for the web servers and app servers; the other R720 will be the database server. Both R720 servers will communicate with a SAN via HBA controllers. I have not decided on a particular SAN storage model, but I would like something with the highest IOPS. – H A Jul 23 '12 at 23:40
  • You can always buy the HBA or CNA later; you need to choose a storage device first. You will probably also need switching to support this. – SpacemanSpiff Jul 23 '12 at 23:43
  • I decided on a PowerVault MD3620f with 1 controller and 2x SFP (Fibre Channel). I don't need a switch in this case, since one controller can handle up to 4 servers: http://i.dell.com/sites/doccontent/shared-content/data-sheets/en/Documents/SS754_Dell_PowerVault_MD3600f.pdf Each server connecting to the MD3620f has an Emulex LPE 12002 (dual port, 8Gb Fibre Channel) HBA. – H A Jul 24 '12 at 03:15
  • Yes, that should be fine. – SpacemanSpiff Jul 24 '12 at 03:17

2 Answers

7

FC and 10GE use different bit-encoding mechanisms, which dictate the maximum theoretical throughput of each. FC uses 8b/10b encoding while 10GE uses 64b/66b. This means that on an 8G FC link, 10 bits are sent for every 8 bits (one byte) of actual data. Applied to the 8.5 Gbps underlying line rate of 8G FC, this comes out to 8.5 * 0.8 = 6.8 gigabits per second of payload. For 10GE the number ends up at about 9.7 Gbps - or roughly 42% faster. There's some nominal amount lost in FCoE for Ethernet headers, of course, but it's a very small amount compared to a ~2.3k frame.
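
To make the arithmetic concrete, here is a minimal sketch (my own illustration, not from the original answer) that reproduces the encoding-overhead numbers above:

```python
# Minimal sketch (my illustration, not the answer author's): effective payload
# throughput after line-encoding overhead, using the figures cited above.

def effective_gbps(line_rate_gbps, data_bits, total_bits):
    """Payload rate left over after encoding overhead is subtracted."""
    return line_rate_gbps * data_bits / total_bits

fc_8g = effective_gbps(8.5, 8, 10)    # 8G FC: 8.5 Gbps line rate, 8b/10b encoding
ge_10 = effective_gbps(10.0, 64, 66)  # 10GE: 10 Gbps, 64b/66b encoding

print(f"8G FC effective payload: {fc_8g:.1f} Gbps")        # ~6.8 Gbps
print(f"10GE  effective payload: {ge_10:.1f} Gbps")        # ~9.7 Gbps
print(f"10GE is ~{(ge_10 / fc_8g - 1) * 100:.0f}% faster")  # ~42-43%
```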

That said, the useful bandwidth of 10GE FCoE can be shared with other network data, although there are environments that dedicate 10GE FCoE to -just- storage traffic. There are a few things to consider when looking at a converged fabric, including:

1.) What's the actual amount of data crossing the notional FC link? Very, very few of the SANs that I've seen (in some very large networks) even have a handful of consistently busy 4G ports, much less 8G. Most of the world would probably operate fine on 2G (and much of it does).

2.) There are mechanisms in place with various implementations of DCB to guarantee lossless bandwidth to FCoE traffic. This means that if you set aside 4 Gbps for storage traffic, that bandwidth will be available between the CNA and the switch under all circumstances; in instances where the additional 6 Gbps is not otherwise in use, it will also be made available to storage. By the same token, all 10 Gbps is potentially available for normal data if that bandwidth isn't otherwise in use. The specifics of how these allocations are accomplished are somewhat vendor-dependent, but the overall behavior should be similar (see the sketch after this list).

3.) Where do you break out the actual FC traffic to connect to the storage target (assuming said target isn't FCoE itself)? The design of the intervening sections of your network will vary based on where the FC itself is broken out, requirements for multi-hop, etc.
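
Here is the sketch referenced in point 2: a toy model of the guarantee-plus-borrow behavior, using an assumed 4 Gbps storage / 6 Gbps data split (actual ETS/DCB configuration is vendor-specific):

```python
# Toy model (hypothetical numbers) of the DCB behavior described in point 2:
# a traffic class always gets its guaranteed share, and may borrow whatever
# the other class is not currently using.

LINK_GBPS = 10.0
GUARANTEE = {"fcoe": 4.0, "lan": 6.0}  # assumed per-class guarantees

def available(cls, other_demand_gbps):
    """Bandwidth class `cls` can use, given the other class's current demand."""
    other = "lan" if cls == "fcoe" else "fcoe"
    other_usage = min(other_demand_gbps, LINK_GBPS - GUARANTEE[cls])
    return GUARANTEE[cls] + (LINK_GBPS - GUARANTEE[cls] - other_usage)

print(available("fcoe", 10.0))  # 4.0  -> storage keeps its guarantee under contention
print(available("fcoe", 0.0))   # 10.0 -> storage can use the whole link when LAN is idle
print(available("lan", 0.0))    # 10.0 -> likewise for normal data traffic
```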

Overall the speed king at the moment is 10G FCoE. This may change with the introduction of 16G FC - and again when 40G FCoE shows up. There's often a big win in terms of cabling, manageability, etc. for FCoE - one connection to one port on one switch (x2 for redundancy) vs. a completely separate infrastructure for traditional FC. FCoE is also generally managed just as normal FC is (same WWN setup, targets, zones, masking, etc.).

As to IOPS - as mentioned above, this will likely be driven far more by the type of storage in use than by the link in question.
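
As a rough illustration of why the link is rarely the IOPS bottleneck, here is a back-of-the-envelope sketch; the 8 KiB I/O size and the array figure are assumptions for illustration only:

```python
# Back-of-the-envelope sketch (my own assumed numbers): the IOPS ceiling imposed
# by the link itself, versus what a typical disk array can actually deliver.

def link_iops_ceiling(effective_gbps, io_size_kib):
    """Max IOPS the link could carry if bandwidth were the only constraint."""
    bytes_per_sec = effective_gbps * 1e9 / 8
    return bytes_per_sec / (io_size_kib * 1024)

for label, gbps in [("8G FC    ", 6.8), ("10GE FCoE", 9.7)]:
    print(f"{label}: ~{link_iops_ceiling(gbps, 8):,.0f} IOPS at 8 KiB per I/O")

# A modest spinning-disk array might deliver a few thousand random IOPS,
# far below either link's ceiling, so the array is the limiting factor.
```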

rnxrx
4

You're throwing terms together that do not belong.

The 8Gb HBA is only meant for talking to a storage device. You would either run fiber directly from the server into a storage controller, or into a Fibre Channel switch which can distribute those connections. Your server would then use the onboard 1Gb or 10Gb Ethernet ports for data connections only.

The 10Gb SFP+ Converged Network Adapter (CNA) is exactly what it says it is. It runs both your data and storage over the same one (or two) links. I believe this generation of Dell servers lets you easily carve those up and segregate traffic. This can be done in concert with a CNA-capable switch, allowing you to deliver FCoE or iSCSI to the server while your data traffic is placed onto VLANs or whatnot.

As for IOPS... that's really a measurement applied against a storage device, not the bus that gets you there (though that does have an impact).

SpacemanSpiff