
We are planning to purchase nodes for a Hyper-V cluster that will run Windows Server 2019.

  • Which chipset do you recommend for a 2019 cluster?
  • What is your recommendation for PCIe cards?
  • Should all ports go through add-in cards, or can we also use onboard ports?

Regarding the 10 Gb PCIe cards, we have decided that we need 8 ports:

  • One team for management
  • One team for the cluster network
  • One team for Live Migration
  • Two separate cards for iSCSI (MPIO)

  • Are there any specific requirements for Hyper-V?
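For context, here is a minimal sketch of how such a layout might be configured in PowerShell. All adapter and team names are hypothetical; note that on Server 2019, Switch Embedded Teaming (via `New-VMSwitch -EnableEmbeddedTeaming $true`) is generally preferred over LBFO for teams that back a Hyper-V virtual switch:

```powershell
# Hypothetical adapter names - adjust to match your Get-NetAdapter output.
# Management / cluster / Live Migration teams (LBFO shown for brevity).
New-NetLbfoTeam -Name "Team-Mgmt"    -TeamMembers "NIC1","NIC2" -TeamingMode SwitchIndependent
New-NetLbfoTeam -Name "Team-Cluster" -TeamMembers "NIC3","NIC4" -TeamingMode SwitchIndependent
New-NetLbfoTeam -Name "Team-LM"      -TeamMembers "NIC5","NIC6" -TeamingMode SwitchIndependent

# The two iSCSI ports are deliberately NOT teamed; use MPIO instead.
Install-WindowsFeature -Name "Multipath-IO"
# Claim iSCSI devices for the Microsoft DSM (standard iSCSI claim IDs):
New-MSDSMSupportedHw -VendorId "MSFT2005" -ProductId "iSCSIBusType_0x9"
```

This is only a sketch of the topology described above, not a validated build script.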

Thanks

MSAdmin
  • Totally off topic as per community reason: Requests for product, service, or learning material recommendations are off-topic because they attract low quality, opinionated and spam answers, and the answers become obsolete quickly. Instead, describe the business problem you are working on, the research you have done, and the steps taken so far to solve it. – TomTom Oct 22 '20 at 19:47
  • Btw. Regarding the 10GB PCIe cards we decide that we should have 8 ports: - GET A DUAL PORT 100G CARD - there are 4:1 breakout cables for 10G, but then you can upgrade to something more modern. – TomTom Oct 22 '20 at 19:48
  • You can’t upgrade 10Gb to anything more modern as 40Gb Ethernet is history already. It makes sense to invest into 25/50/100 Gb infrastructure only. – RiGiD5 Oct 24 '20 at 11:33

1 Answer


You want something reliable and with RDMA support. I’d stick with Mellanox ConnectX-4 Lx NICs. This is what Microsoft uses to power Azure networking.
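If you do go with RDMA-capable NICs, you can verify that RDMA is actually detected and usable with the standard in-box cmdlets, for example:

```powershell
# List adapters that report RDMA capability and whether RDMA is enabled
Get-NetAdapterRdma

# Confirm SMB sees RDMA-capable interfaces (relevant for Live Migration over SMB)
Get-SmbClientNetworkInterface | Where-Object RdmaCapable
```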

BaronSamedi1958