
My company bought two HP DL380p Gen8 servers, and I would like to build a hyper-converged setup.

My question is whether I can connect the two 544FLR-QSFP (649282-B21) cards directly to each other, bypassing an InfiniBand switch, and run them over the Ethernet protocol.

user397399

2 Answers


If you don't have any plans to grow beyond these two nodes, a direct back-to-back connection is a reasonable way to go.

That said, InfiniBand has lower latency than Ethernet, so if you can run IB, do that!
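As far as I remember, the 544FLR-QSFP is a Mellanox ConnectX-3 based VPI adapter, so each port can be set to either InfiniBand or Ethernet and two cards can be cabled back to back. Here is a minimal sketch of checking and changing the port protocol with the Mellanox Firmware Tools (mlxconfig), assuming MFT is installed; the device name below is only an example, list yours with `mst status`:

```
# Show the current configuration of the adapter (device name is an example)
mlxconfig -d mt4099_pci_cr0 query

# Set both ports to Ethernet: 1 = InfiniBand, 2 = Ethernet, 3 = VPI auto-sense
mlxconfig -d mt4099_pci_cr0 set LINK_TYPE_P1=2 LINK_TYPE_P2=2

# A reboot (or driver restart) is needed before the new link type takes effect
```

After that, a direct QSFP cable between the two cards should behave like an ordinary point-to-point Ethernet link.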

BaronSamedi1958

Yes and no. Hyper-V virtual machines (VMs) need to connect to the network through a virtual switch or through Single-Root I/O Virtualization (SR-IOV). Leveraging InfiniBand connectivity typically means using Remote Direct Memory Access (RDMA), which currently isn't supported through a virtual switch and therefore isn't available to VMs. The only currently supported uses of InfiniBand are RDMA over InfiniBand for SMB traffic and user-mode RDMA over InfiniBand for HPC communications. Outside of those two scenarios there is no InfiniBand support, which means no support for VMs.
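If you stick to the supported path (RDMA for SMB traffic between the hosts, i.e. SMB Direct), a few standard PowerShell cmdlets let you verify it is actually being used. A minimal sketch, where the adapter name is only a placeholder:

```
# List adapters that report RDMA capability and whether RDMA is enabled on them
Get-NetAdapterRdma

# Enable RDMA on a specific adapter (name is a placeholder)
Enable-NetAdapterRdma -Name "Ethernet 3"

# Check that the SMB client sees the interface as RDMA capable
Get-SmbClientNetworkInterface | Where-Object { $_.RdmaCapable }

# After moving some traffic between the hosts, confirm the connection used RDMA
Get-SmbMultichannelConnection |
    Select-Object ServerName, ClientRdmaCapable, ServerRdmaCapable
```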

Some network infrastructure vendors are heavily pushing their InfiniBand solutions and claiming to support Hyper-V over InfiniBand, even though Microsoft doesn't support it. In this scenario the IP over InfiniBand (IPoIB) miniport device is bound to a Hyper-V virtual switch, to which the VMs then connect. However, some organizations I work with have tried this method and reported problems, sometimes very strange ones that were hard to hunt down.
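For what it's worth, that unsupported configuration boils down to nothing more than binding an external virtual switch to the IPoIB miniport. A rough sketch, with the adapter and switch names being placeholders:

```
# Locate the IPoIB miniport exposed by the Mellanox driver (description is an example)
Get-NetAdapter | Where-Object { $_.InterfaceDescription -like "*IPoIB*" }

# Bind an external virtual switch to that adapter; VMs would then attach to "IB-vSwitch"
New-VMSwitch -Name "IB-vSwitch" -NetAdapterName "Ethernet (IPoIB)" -AllowManagementOS $true
```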

It's important to remember that RDMA wouldn't be exposed to the VMs through the virtual switch with this method, nor does this approach use SR-IOV to map VMs directly to the InfiniBand card. The only benefit is a very fast connection, which Windows Server 2012 R2 and later can take advantage of using virtual Receive Side Scaling (vRSS). Until Microsoft tests and supports this approach, I would be very hesitant to use it.
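For completeness, if you did experiment with that fast-but-unsupported link, vRSS is configured inside the guest and relies on VMQ being enabled on the physical adapter backing the virtual switch. A minimal sketch with placeholder adapter names:

```
# On the host: vRSS requires VMQ on the physical adapter behind the virtual switch
Enable-NetAdapterVmq -Name "Ethernet 3"

# Inside the guest: enable RSS on the virtual NIC so receive traffic spreads across vCPUs
Enable-NetAdapterRss -Name "Ethernet"
Get-NetAdapterRss -Name "Ethernet"    # shows the RSS processor assignments
```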

bjoster