
I have recently deployed WS2016 DC on 4x DL380 G7s for PoC purposes. Each server has 4x 300GB 10K SAS drives, and I also have a couple of Intel SSDs that I can temporarily borrow from my company. My main goal is to test different Storage Replica "modes" and deploy the Scale-Out File Server role on top of Storage Spaces Direct.

About a month ago, I had a hard time deploying 2-node Storage Spaces Direct on a different hardware configuration (2 Supermicro servers). To be honest, the installation process was far from "straightforward": there was an issue with WinRM, an "unsupported bus type" error when I tried to run Enable-ClusterS2D, and a few more issues later when I tried to create a new tiered space.

Essentially, I am looking for the most up-to-date guidance on how to set up Storage Spaces Direct in a 4-node environment using PowerShell. Resiliency type is not important, as I would like to test different resiliency settings.

Thank you for your help!

Mwilliams

2 Answers


In short, the deployment sequence looks as follows:

  1. Deploy necessary WS roles and features (steps 1 to 3 are sketched below)
  2. Validate the Failover Cluster
  3. Create the Failover Cluster
  4. Enable Storage Spaces Direct
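
Steps 1 to 3 have no example input in the answer itself, so here is a minimal sketch. The node names (NODE1 to NODE4) and the static address are placeholders; #CLUSTER_NAME# follows the convention of the examples below:

# Step 1: install the required roles and features on every node (run per node or via Invoke-Command)
Install-WindowsFeature -Name FS-FileServer, Failover-Clustering, RSAT-Clustering-PowerShell -IncludeManagementTools

# Step 2: validate the nodes, including the Storage Spaces Direct test category
Test-Cluster -Node NODE1, NODE2, NODE3, NODE4 -Include "Storage Spaces Direct", Inventory, Network, "System Configuration"

# Step 3: create the cluster without claiming any storage yet (the IP is a placeholder)
New-Cluster -Name #CLUSTER_NAME# -Node NODE1, NODE2, NODE3, NODE4 -NoStorage -StaticAddress 192.168.1.100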

Example input for step 4:

Enable-ClusterStorageSpacesDirect

  5. Create and configure storage pools

Example input:

New-StoragePool -StorageSubSystemName #CLUSTER_NAME# -FriendlyName #POOL_NAME# -WriteCacheSizeDefault 0 -ProvisioningTypeDefault Fixed -ResiliencySettingNameDefault Simple -PhysicalDisk (Get-StorageSubSystem -Name #CLUSTER_NAME# | Get-PhysicalDisk)

  6. Create and configure virtual disks

Example input:

New-Volume -StoragePoolFriendlyName #POOL_NAME# -FriendlyName #VD_NAME# -PhysicalDiskRedundancy 2 -FileSystem CSVFS_REFS -Size 100GB
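
Since the question mentions testing different resiliency settings, here is a hedged sketch of a plain mirror and a parity variant (the friendly names are placeholders; #POOL_NAME# follows the convention above):

# Three-way mirror (the default with three or more nodes, tolerates two simultaneous drive failures)
New-Volume -StoragePoolFriendlyName #POOL_NAME# -FriendlyName MirrorVD -FileSystem CSVFS_ReFS -ResiliencySettingName Mirror -Size 100GB

# Dual parity (erasure coding, needs at least four nodes, trades write performance for capacity efficiency)
New-Volume -StoragePoolFriendlyName #POOL_NAME# -FriendlyName ParityVD -FileSystem CSVFS_ReFS -ResiliencySettingName Parity -Size 100GB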

  7. Deploy SOFS
  8. Create file shares (steps 7 and 8 are sketched below)

That's it!
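
For steps 7 and 8, a minimal sketch; the role name, share path, and access group are placeholders:

# Step 7: add the Scale-Out File Server role to the cluster ("SOFS01" is a placeholder name)
Add-ClusterScaleOutFileServerRole -Name SOFS01

# Step 8: create a folder on a CSV and publish it as an SMB share
# (the path and the CONTOSO\HyperVAdmins group are placeholders)
New-Item -Path C:\ClusterStorage\Volume1\VMs -ItemType Directory
New-SmbShare -Name VMs -Path C:\ClusterStorage\Volume1\VMs -FullAccess CONTOSO\HyperVAdmins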

Here are two articles that I found helpful:

Link1 https://www.starwindsoftware.com/blog/microsoft-storage-spaces-direct-4-node-setup-2

Link2 https://technet.microsoft.com/en-us/windows-server-docs/storage/storage-spaces/hyper-converged-solution-using-storage-spaces-direct

Net Runner
  • I have configured Storage Spaces Direct following the guidance you have provided and will now deploy SOFS to test this setup further. Thanks for the assistance! – Mwilliams Mar 06 '17 at 09:45
  • Think twice before you do: 2-node S2D lacks local reconstruction codes support and does two-way mirror only. TL;DR: a disk failure during the second node's patch reboot will bring your cluster down. Also, performance isn't that great at all: no DRAM write-back cache, and CSV is read-only. – BaronSamedi1958 Mar 12 '17 at 09:41
  • May still be good enough for an initial PoC. – TomTom May 11 '17 at 07:42

My current script for evaluating Storage Spaces Direct

# install the required Windows Server roles and features
Install-WindowsFeature Hyper-V, Data-Center-Bridging, Failover-Clustering, RSAT-Clustering-Powershell, Hyper-V-PowerShell -IncludeManagementTools

# before creating the cluster, set the correct MediaType for all disks
# note: before MediaType can be set, the disks have to be assigned to a Storage Pool,
# which can be deleted again right after setting it
Get-PhysicalDisk | Where-Object Size -gt 506870912000 | Set-PhysicalDisk -MediaType HDD

# Create the cluster
New-Cluster -Name w16hyper -Node w16hyper1, w16hyper2, w16hyper3 -NoStorage -StaticAddress 192.168.2.100

# hack to use RAID cards as JBOD
(Get-Cluster).S2DBusTypes=0x100

Enable-ClusterStorageSpacesDirect -CacheState Disabled

Get-StorageSubSystem Cluster*
Get-StorageSubSystem Cluster* | Get-Volume

#statistics
Get-StorageSubSystem Cluster* | Get-StorageHealthReport

# jobs running in the background (e.g. rebuild)
Get-StorageJob | ? JobState -Eq Running

#status
Get-StoragePool S2D* | Get-PhysicalDisk | Group OperationalStatus -NoElement
Get-StoragePool S2D* | Get-PhysicalDisk | Sort Model, OperationalStatus

#get log info
Get-StorageSubSystem Cluster* | Debug-StorageSubSystem

Get-VirtualDisk
Get-PhysicalDisk -Usage Retired

# create a new mirrored volume (survives 1 failure on a 2-node system, 2 simultaneous failures with 3 or more nodes)
New-Volume -FriendlyName "Volume A" -FileSystem CSVFS_ReFS -StoragePoolFriendlyName S* -Size 1TB

# alternatively, create a hybrid volume (mirror + parity) with the recommended 10% mirror tier size
New-Volume -FriendlyName "Volume A" -FileSystem CSVFS_ReFS -StoragePoolFriendlyName S* -StorageTierFriendlyNames Performance, Capacity -StorageTierSizes 100GB, 900GB

#cleanup (pool has to be deleted on each node)
Disable-ClusterStorageSpacesDirect
Get-StoragePool S2D* | Set-StoragePool -IsReadOnly $false
Get-StoragePool S2D* | Remove-StoragePool
Jan Zahradník