Hi guys! After a recent infrastructure upgrade, I've got a bunch of decommissioned hardware that I would like to use for storage while trying out some new stuff. I have 4 x Dell R630 servers, each with 6 x HDDs and 2 x SSDs inside. The drives' S.M.A.R.T. status is green, so I would like to use these servers as a highly available dedicated storage cluster. I want to try the new Windows Server 2016 Storage Spaces Direct technology as a dedicated Scale-Out File Server cluster, so I ordered 4 x Mellanox ConnectX-4 dual-port 10 GbE NICs for them. Do you know how to properly deploy such a configuration? Are there any pitfalls or disadvantages to this scenario? Since the official guides only cover the hyper-converged approach, a short step-by-step guide for a dedicated SoFS would be awesome!
2 Answers
Your scenario is fine if you are OK with paying loads of money for licensing. You will also need an interconnect fabric and switches that support PFC in order to use the SMB Direct feature (I assume you want it since you bought RDMA-capable Mellanox NICs, which are awesome).
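For reference, here's a minimal per-node sketch of the DCB/PFC setup that RoCE-based SMB Direct typically needs. The adapter names, the priority value of 3, and the 50% bandwidth reservation are my assumptions, not requirements — the priority and ETS settings must match what you configure on the switches:

```powershell
# Minimal DCB/PFC sketch for RoCE-based SMB Direct. Adapter names,
# priority 3, and the 50% reservation are placeholders/assumptions.
Install-WindowsFeature -Name Data-Center-Bridging

# Tag SMB Direct traffic (port 445) with 802.1p priority 3
New-NetQosPolicy "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3

# Enable Priority Flow Control for the SMB priority only
Enable-NetQosFlowControl -Priority 3
Disable-NetQosFlowControl -Priority 0,1,2,4,5,6,7

# Reserve bandwidth for SMB and apply the QoS settings to the RDMA ports
New-NetQosTrafficClass "SMB" -Priority 3 -BandwidthPercentage 50 -Algorithm ETS
Enable-NetAdapterQos -Name "SLOT 2 Port 1","SLOT 2 Port 2"
```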
Your drives allow you to create a parity-based capacity tier and add SSD-based caching (or a faster tier) on top of it, which is good. That said, I would not expect exceptional performance from this setup unless your working set fits entirely in the SSD cache/tier: the software parity RAID/RAIN used in S2D still sucks performance-wise.
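If you go ahead anyway: in a hybrid setup, enabling S2D claims the SSDs as cache devices automatically, so both tiers of a volume live on the HDDs behind that cache. A sketch of a mirror-accelerated parity ("multi-resilient") volume — the tier sizes are placeholders, and "Performance"/"Capacity" are the tier names I'd expect Enable-ClusterStorageSpacesDirect to create by default:

```powershell
# Sketch only: sizes are placeholders; "Performance" (mirror) and
# "Capacity" (parity) are the default tier names created by S2D.
New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "MRV01" `
    -FileSystem CSVFS_ReFS `
    -StorageTierFriendlyNames Performance, Capacity `
    -StorageTierSizes 200GB, 1800GB
```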
There is a lot of information on how to plan an S2D cluster on TechNet: https://technet.microsoft.com/en-us/windows-server-docs/storage/storage-spaces/storage-spaces-direct-overview.
A step-by-step guide that covers your particular scenario can be found here: https://www.starwindsoftware.com/blog/microsoft-storage-spaces-direct-4-node-setup-2
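For quick orientation before you read the guide, here's a condensed sketch of the usual flow for a 4-node dedicated SoFS on S2D. All node names, the cluster name/IP, volume size, and the share ACL are placeholders:

```powershell
# Condensed 4-node dedicated SoFS-on-S2D deployment sketch.
# Node names, cluster name/IP, volume size, and ACL are placeholders.
$nodes = "R630-01","R630-02","R630-03","R630-04"

# 1. Install the required roles and features on every node
Invoke-Command -ComputerName $nodes -ScriptBlock {
    Install-WindowsFeature -Name File-Services, Failover-Clustering -IncludeManagementTools
}

# 2. Validate the nodes, then build the cluster without shared storage
Test-Cluster -Node $nodes -Include "Storage Spaces Direct","Inventory","Network","System Configuration"
New-Cluster -Name "S2D-SOFS" -Node $nodes -NoStorage -StaticAddress "10.0.0.100"

# 3. Enable S2D: claims the local disks, builds the pool, sets up the SSD cache
Enable-ClusterStorageSpacesDirect

# 4. Create a CSV volume, add the SOFS role, and publish a share
New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "Share01" -FileSystem CSVFS_ReFS -Size 2TB
Add-ClusterScaleOutFileServerRole -Name "SOFS"
New-Item -Path "C:\ClusterStorage\Volume1\Shares\Share01" -ItemType Directory
New-SmbShare -Name "Share01" -Path "C:\ClusterStorage\Volume1\Shares\Share01" `
    -FullAccess "DOMAIN\Hyper-V-Hosts"
```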
There's very little to no sense in building a SoFS on top of S2D. Here's why:
1) Datacenter edition everywhere. It's expensive ($6K+), and while in the hyper-converged scenario that license at least covers the Windows Server VMs you run, with SoFS you pay for... nothing! There are no VMs to run!
2) A dual-head configuration has been possible since TP5 AFAIK, but there are no local reconstruction codes (read: the cluster isn't tolerant to a double disk failure, which is NONSENSE in the storage array world!), no erasure coding, and no multi-resilient virtual disks (see the quick check below). Going for more heads fixes these issues, but that's (4 x Datacenter) editions and... Have you seen many quad-active storage controllers in storage arrays?! Yup, they typically have two. Sometimes three (Infinidat?).
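If you still want to try it, you can at least verify what your virtual disks actually tolerate with the standard Storage module:

```powershell
# Quick check of actual fault tolerance per virtual disk:
# PhysicalDiskRedundancy = number of simultaneous disk failures survived.
Get-VirtualDisk |
    Select-Object FriendlyName, ResiliencySettingName, NumberOfDataCopies, PhysicalDiskRedundancy
```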
So stick with Clustered Storage Spaces as a SoFS back end (a super-mature solution), or use some replication between the nodes for much less than $12K in software alone.