4

I'm building a small datacenter from the ground up, and I'm considering 10GbE. I'm looking for some advice on this decision.

I've compared InfiniBand, FC, 10GbE, and LACP with GigE, and in the end 10GbE appears to be the best solution at the moment.

About the datacenter: it will have one or two storage servers (two in the failover scenario) and three 1U machines running some hypervisor (XenServer is my favorite). The VMs will live on the storage, so the hypervisors will either boot from the storage or from some small SSDs in the 1U machines used just to load the hypervisor.

So, the problem is: I'm a little confused about what I have to buy to build this network. I've seen some expensive switches, like the Cisco Nexus 5000/7000, with a lot of features, but I don't know if I really need those.

Since I don't have FC, is it safe to buy plain 10GbE switches without "converged networking" features? Or should I get one of those so I can run FCoE?

Another question: would iSCSI over 10GbE be better than FCoE? (I'm assuming FCoE is better because it doesn't go through the IP stack.)
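
To put some rough numbers on that "IP stack" worry, here's a quick back-of-envelope calculation I did (Python; the header sizes assume plain Ethernet + IPv4 + TCP with no options, and the FCoE encapsulation overhead is my own approximation, so treat it as a sketch):

```python
# Rough per-frame wire efficiency: how much of each Ethernet frame is
# actual SCSI data. Assumes Ethernet (14 B header + 4 B FCS), IPv4
# (20 B, no options) and TCP (20 B, no options); the 48 B iSCSI header
# is amortised over a whole PDU, so it is ignored here.

def iscsi_efficiency(mtu):
    payload = mtu - (20 + 20)        # data left after IPv4 + TCP headers
    wire = 14 + mtu + 4              # Ethernet header + MTU + FCS
    return payload / wire

def fcoe_efficiency():
    fc_payload = 2112                # max FC frame payload
    fc_overhead = 24 + 4             # FC header + FC CRC
    fcoe_overhead = 18               # FCoE encapsulation (approximate)
    eth_overhead = 14 + 4            # Ethernet header + FCS
    return fc_payload / (fc_payload + fc_overhead + fcoe_overhead + eth_overhead)

print(f"iSCSI @ MTU 1500: {iscsi_efficiency(1500):.1%}")
print(f"iSCSI @ MTU 9000: {iscsi_efficiency(9000):.1%}")
print(f"FCoE (approx.):   {fcoe_efficiency():.1%}")
```

If I did that right, the framing difference with jumbo frames is only a couple of percent, so I guess the real question is more about latency, lossless Ethernet (DCB/PFC) requirements, and NIC offloads than about the IP headers themselves?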

Thanks in advance, and I really appreciate some opinions here.

Vinícius Ferrão

4 Answers

3

I'm with Tom here! IB (even an ancient generation) is cheaper and faster than 10 GbE.

People get some good numbers from basically el cheapo gear:

http://forums.servethehome.com/networking/1758-$67-ddr-infiniband-windows-1-920mb-s-43k-iops.html

The problem is that TCP over IB (IPoIB) sucks: it kills performance by adding huge latency, and native IB support is very limited. SMB Direct with Windows Server 2012 R2 is great (when it works).
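
If you want to see that latency tax on your own hardware, a dumb TCP ping-pong is enough: run it once over the IPoIB interface and once over 10 GbE and compare. A minimal sketch (Python; the address and port are placeholders for your own setup):

```python
# Crude TCP round-trip benchmark: run "server" on one box, the client
# on another, over the interface you want to measure (IPoIB vs 10 GbE).
import socket, sys, time

HOST, PORT, ROUNDS = "192.0.2.10", 5001, 10000   # placeholder address/port

def server():
    with socket.create_server(("", PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            while data := conn.recv(64):         # echo until client disconnects
                conn.sendall(data)

def client():
    with socket.create_connection((HOST, PORT)) as s:
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
        t0 = time.perf_counter()
        for _ in range(ROUNDS):
            s.sendall(b"x" * 64)
            s.recv(64)
        rtt_us = (time.perf_counter() - t0) / ROUNDS * 1e6
        print(f"avg round trip: {rtt_us:.1f} us")

if __name__ == "__main__":
    server() if sys.argv[1:] == ["server"] else client()
```

The IPoIB number will typically sit well above what the same fabric does with native RDMA verbs, and that gap is exactly the problem.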

BaronSamedi1958
  • Yeah. Most people are just woefully ignorant about HOW cheap and HOW fast not-top-of-the-line InfiniBand is ;) – TomTom Dec 06 '13 at 07:15
  • 2
    Yeap! It's a pity IB will get squeezed out eventually. Commodity always wins, not the technologically superior solution. – BaronSamedi1958 Dec 06 '13 at 15:20
2

The decision between technologies should be made on an evaluation of your needs, budget, and expertise. Obviously, your choice is highly dependent on what type of storage hardware you have or will purchase, along with your networking infrastructure. Traditionally, SANs have used fibre channel due to its high speed, but with the advent of 10GbE, Ethernet has become a viable contender. Depending on the utilization level of your data center, you may even be able to get away with using 1GbE and MPIO, with the ability to scale up later. Most major vendors will give you the option of iSCSI, FCoE, or FC offerings, and the choice among these should be based on what your current (or desired) infrastructure is, taking into consideration your staff's expertise.
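
To put rough numbers on the 1GbE-and-MPIO option (a best-case sketch that ignores protocol overhead and assumes round-robin MPIO across independent sessions; real results depend on the array, path policy, and workload):

```python
# Best-case aggregate bandwidth per link count, ignoring protocol
# overhead; real MPIO throughput depends on path policy, session
# count, and the storage array itself.
def aggregate_mb_per_s(links, gbit_per_link):
    return links * gbit_per_link * 1000 / 8   # Gbit/s -> MB/s

for links, speed in [(1, 1), (4, 1), (1, 10)]:
    print(f"{links} x {speed} GbE ~ {aggregate_mb_per_s(links, speed):.0f} MB/s")
```

Plenty of small deployments never come close to saturating the 4x 1GbE case, which is why starting there and scaling up later is a perfectly reasonable path.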

I cannot comment on the use of InfiniBand, as I have never used it myself, other than that its use is less prevalent than these other technologies, with correspondingly fewer vendors to choose from. The side risk is finding staff who can support less common equipment.

Personally, if you (and your staff) have no experience with (nor existing infrastructure for) fibre channel, my recommendation would be to choose an iSCSI offering, as your learning curve (and possibly your implementation costs) will be much lower. Most people forget that hardware costs are tiny compared to labor: I spend ten times more on personnel than I do on my hardware budget, so if some type of hardware is a little more expensive but well understood by my staff (or I can more easily find someone to work on it), that becomes the obvious choice. Unless, of course, you're looking for a new learning opportunity. :P

newmanth
1

Why?

Given the high prices and low bandwidth, I would always prefer InfiniBand to 10G, plus a 1G-based uplink (unless you have more than 1G of uplink bandwidth).

Due to other constraints I am using 10G on some servers (only some - nearly all are 1G, and the Netgear TXS 752 we use as the backbone has 4x 10G SFP+), and the price of the 10G network cards is - ouch - compared to the much faster InfiniBand.

TomTom
  • The problem with IB is the lack of support in XenServer and FreeNAS, which are the hypervisor and the storage solution we use here, because they are both free. – Vinícius Ferrão Dec 07 '13 at 18:36
  • 1
    Well, then maybe use other FREE technologies? Hyper-V has no problem with that and is free (Hyper-V Server), and there are open-source NAS distributions that have the IB drivers. Linux support is there. Btw, also for FreeNAS - a 10-second Google search found some links. – TomTom Dec 08 '13 at 08:51
  • It's not really stable with FreeNAS. It works, but not very well... As for Hyper-V, we can't use it either: Hyper-V doesn't support FreeBSD. :( – Vinícius Ferrão Dec 08 '13 at 15:39
1

FCoE makes sense if you have an existing FC infrastructure and need to feed FC LUNs to new servers that don't have FC HBAs (or you're running out of licensed FC ports on your FC switches, which amounts to the same thing): you take 10 GbE and run FCoE to cut down the cost of FC gear. Building FCoE from scratch is pointless; run iSCSI (or SMB Direct with RDMA if you're on the "dark side") over 10 GbE and be happy. With recent, decent multi-GHz, multi-core CPUs, and with both TCP and iSCSI at least partially offloaded to the NIC ASICs, there's no difference between storage-over-TCP and storage-over-raw-Ethernet. Good luck, my friend!
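
If you want to sanity-check that on your own boxes, a crude sequential read off the attached LUN is enough. A minimal sketch (Python; /dev/sdX is a placeholder for whatever device your iSCSI initiator presents, and read more than your RAM, or drop the page cache first, so you measure the wire rather than memory):

```python
# Crude sequential-read throughput check against a block device.
# Run as root; /dev/sdX is a placeholder for the iSCSI-attached LUN.
import time

DEVICE = "/dev/sdX"                  # placeholder device path
CHUNK = 4 * 1024 * 1024              # 4 MiB per read
TOTAL = 32 * 1024 * 1024 * 1024      # read 32 GiB, ideally more than RAM

read_bytes = 0
t0 = time.perf_counter()
with open(DEVICE, "rb", buffering=0) as dev:
    while read_bytes < TOTAL:
        chunk = dev.read(CHUNK)
        if not chunk:                # reached the end of the device
            break
        read_bytes += len(chunk)
elapsed = time.perf_counter() - t0
print(f"{read_bytes / elapsed / 1e6:.0f} MB/s over {elapsed:.1f} s")
```

On a clean 10 GbE link it should plateau somewhere around the 1 GB/s mark either way, iSCSI or FCoE, which is exactly my point.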

BaronSamedi1958
  • Windows Server doesn't support RDMA over regular 10GbE. It's only supported via InfiniBand, iWARP, or RoCE. Also, iSCSI has additional protocol overhead compared to FC. Using converged Ethernet to deliver FCoE gives you the ease of not managing separate FC HBAs and switches while giving you the efficiency of the FC protocol compared to iSCSI. In IOPS-constrained situations you're right: both protocols will perform similarly; however, in throughput-constrained workloads, FC-based solutions will typically outperform iSCSI-based solutions. – MDMarra Dec 05 '13 at 04:22
  • 1
    I don't know what "regular 10 GbE" is, but we successfully deployed RDMA with Windows Server 2012 and pre-engineering 10 GbE samples from Mellanox nearly two years ago. Just the opposite of what you say: you'll typically see higher IOPS with FC, especially on a single queue and with pulsating traffic, because of the lower latency, but throughput with a fully loaded pipeline is virtually the same for 8 Gb FC and iSCSI over 10 GbE. – BaronSamedi1958 Dec 05 '13 at 14:01
  • I use RDMA here with 10G and Intel adapters. Not sure where your "does not support RDMA" comes from. – TomTom Dec 06 '13 at 07:12