hi guys,
I've been going back and forth between Storage Spaces and a hardware-based LSI controller for a while. The HW controller offers 8 ports, my onboard LSI SAS offers 16, and my case has 12 drive slots. Currently I'm running MS Server 2016 Datacenter.
I'm wondering whether to switch from my 6x4 TB RAID 5 (HW controller) to a more flexible SS solution with 7x4 TB + 5x2 TB HDDs + 2x256 GB SSDs as cache. My experience with SS in the past was not good: lots of disappearing drives, other issues, data loss, and very poor alerting from MS (SS completely lacks it, in my opinion). But most of the HDD issues turned out to be caused by my PSU, which is solved now, so I decided to give SS another try, since I can then have all my drives on one controller and get rid of the HW controller. The onboard LSI (Xeon D board) works fine, including HDD standby.
I've read the post "Mirrored virtual disk on storage pool can't expand after adding drive" on Windows Server 2012 R2, from which I'll quote some very interesting findings:
Hence, to extend a Storage Pool of 4 columns, you don't need to add 4 disks - you only need to make sure that you have 4 disks with remaining disk space left in the pool!
So, if 1 is full but 3 have remaining space, the pool will become operational again after adding one disk!
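(To see how much remaining space each disk in a pool actually has, a one-liner like this should do - the pool name is a placeholder:)

Get-StoragePool MyStoragePool | Get-PhysicalDisk | Select-Object FriendlyName, MediaType, @{n='FreeGB';e={[math]::Round(($_.Size - $_.AllocatedSize)/1GB,1)}}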
For now I did a test pool and disk like this:
-> created a pool with 3x2 TB + 2x256 GB SSDs (in the GUI)
-> Get-StoragePool MyStoragePool | New-StorageTier -FriendlyName SSDTier -MediaType SSD
-> Get-StoragePool MyStoragePool | New-StorageTier -FriendlyName HDDTier -MediaType HDD
-> Get-StoragePool MyStoragePool | Set-ResiliencySetting -Name Mirror -NumberOfColumnsDefault 1
-> $SSD = Get-StorageTier -FriendlyName SSDTier
-> $HDD = Get-StorageTier -FriendlyName HDDTier
-> $vd1 = New-VirtualDisk -StoragePoolFriendlyName MyStoragePool -FriendlyName Mirror -StorageTiers @($SSD) -StorageTierSizes @(175GB) -ResiliencySettingName Mirror -WriteCacheSize 0GB
-> $vd2 = New-VirtualDisk -StoragePoolFriendlyName MyStoragePool -FriendlyName Parity -StorageTiers @($HDD) -StorageTierSizes @(3700GB) -ResiliencySettingName Parity -WriteCacheSize 1GB
This works absolutely fine in PowerShell but doesn't show me the correct settings in the GUI. When I run
-> Get-VirtualDisk | Format-List
I get:
PS C:\Users\Administrator> Get-VirtualDisk | Format-List
ObjectId : {1}\\HOMESERVER\root/Microsoft/Windows/Storage/Providers_v2\SPACES_VirtualDisk.ObjectId="{ae6f3b40-843d-11e6-8239-806e6f6e6963}:VD:{4b713005-eb2e-4ac4-bf03-71a8ae7e05dc}{018a9f90-33a5-4c4e-a8e
b64e55}"
PassThroughClass :
PassThroughIds :
PassThroughNamespace :
PassThroughServer :
UniqueId : 909F8A01A5334E4CA8E4E55ECBB64E55
Access : Read/Write
AllocatedSize : 3972844748800
AllocationUnitSize :
ColumnIsolation :
DetachedReason : None
FaultDomainAwareness :
FootprintOnPool : 5963025219584
FriendlyName : Parity
HealthStatus : Healthy
Interleave :
IsDeduplicationEnabled : False
IsEnclosureAware :
IsManualAttach : False
IsSnapshot : False
IsTiered : True
LogicalSectorSize : 512
MediaType :
Name :
NameFormat :
NumberOfAvailableCopies :
NumberOfColumns :
NumberOfDataCopies :
NumberOfGroups :
OperationalStatus : OK
OtherOperationalStatusDescription :
OtherUsageDescription :
ParityLayout :
PhysicalDiskRedundancy :
PhysicalSectorSize : 4096
ProvisioningType :
ReadCacheSize : 0
RequestNoSinglePointOfFailure : False
ResiliencySettingName :
Size : 3972844748800
UniqueIdFormat : Vendor Specific
UniqueIdFormatDescription :
Usage : Other
WriteCacheSize : 1073741824
PSComputerName :
ObjectId : {1}\\HOMESERVER\root/Microsoft/Windows/Storage/Providers_v2\SPACES_VirtualDisk.ObjectId="{ae6f3b40-843d-11e6-8239-806e6f6e6963}:VD:{4b713005-eb2e-4ac4-bf03-71a8ae7e05dc}{bc5374c5-e4c0-4fbe-b4d
094b26}"
PassThroughClass :
PassThroughIds :
PassThroughNamespace :
PassThroughServer :
UniqueId : C57453BCC0E4BE4FB4D9036AED094B26
Access : Read/Write
AllocatedSize : 472446402560
AllocationUnitSize :
ColumnIsolation :
DetachedReason : None
FaultDomainAwareness :
FootprintOnPool : 472446402560
FriendlyName : SSD_VD
HealthStatus : Healthy
Interleave :
IsDeduplicationEnabled : False
IsEnclosureAware :
IsManualAttach : False
IsSnapshot : False
IsTiered : True
LogicalSectorSize : 512
MediaType :
Name :
NameFormat :
NumberOfAvailableCopies :
NumberOfColumns :
NumberOfDataCopies :
NumberOfGroups :
OperationalStatus : OK
OtherOperationalStatusDescription :
OtherUsageDescription :
ParityLayout :
PhysicalDiskRedundancy :
PhysicalSectorSize : 4096
ProvisioningType :
ReadCacheSize : 0
RequestNoSinglePointOfFailure : False
ResiliencySettingName :
Size : 472446402560
UniqueIdFormat : Vendor Specific
UniqueIdFormatDescription :
Usage : Other
WriteCacheSize : 0
Finding 1: I wonder about all the empty fields in this output. My guess is that for tiered disks the per-tier values (columns, resiliency, etc.) live on the Get-StorageTier objects rather than on the virtual disk itself, but I'm not sure.
Finding 2: it runs smoothly at 150+ MB/s while copying MP3s to the new drive. That's extremely good write performance for Storage Spaces parity, thanks to the 1 GB WBC.
My goal is an extendable pool starting with at least 6x4 TB drives + 2x256 GB SSDs, making use of the write-back cache on SSD, which works fine in my example above. I can pin the parity disk to the HDDs and the mirror to the SSDs (so my VMs don't wake up my HDDs all the time), as sketched below.
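For the final pool I imagine the same pattern as in my test above - just a sketch with placeholder names and sizes (there is no "BigPool" yet):

$SSD = Get-StorageTier -FriendlyName SSDTier
$HDD = Get-StorageTier -FriendlyName HDDTier
# VMs: mirror pinned to the SSD tier, no extra write-back cache
New-VirtualDisk -StoragePoolFriendlyName BigPool -FriendlyName VMs -StorageTiers @($SSD) -StorageTierSizes @(200GB) -ResiliencySettingName Mirror -WriteCacheSize 0
# Data: parity pinned to the HDD tier, with the 1 GB SSD write-back cache
New-VirtualDisk -StoragePoolFriendlyName BigPool -FriendlyName Data -StorageTiers @($HDD) -StorageTierSizes @(18TB) -ResiliencySettingName Parity -WriteCacheSize 1GB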
Can I set the column count of the parity space to e.g. 3 when using 6 disks? Would I then be able to add just 3 more 4 TB disks to extend my pool easily? Like 6x4 TB -> 9x4 TB = 8x4 TB usable in parity...?
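(What I have in mind is something like this, if it's even valid - placeholder names again:)

New-VirtualDisk -StoragePoolFriendlyName BigPool -FriendlyName Data -ResiliencySettingName Parity -NumberOfColumns 3 -PhysicalDiskRedundancy 1 -UseMaximumSize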
Let's say I put 6x4 TB and 5x2 TB in one pool together with the 2 SSDs... what would be the maximum parity volume, and can I extend it by replacing the 2 TB drives afterwards? The quoted posting said:
I can't downvote yet, but I want to point out that the basic information given in bviktor's post is wrong - he is still thinking in RAID terms when he says you can't extend with half a disk group:
If you have a Storage Pool with 4 columns, you can use any number of disks, starting with 4. Storage Spaces will always utilize all the disks you have! The column count just defines how many disks data is written to at the same time (this is called striping). The next stripe (by default 256 KB in size, called the interleave) however can (and will) be written to 4 different disks!
Hence, to extend a Storage Pool of 4 columns, you don't need to add 4 disks - you only need to make sure that you have 4 disks with remaining disk space left in the pool!
So, if 1 is full but 3 have remaining space, the pool will become operational again after adding one disk!
(This allows you to mix capacities as desired - there's no need to keep the RAID constraint of equal disk sizes.)
As a best practice, you should always have more disks than NumberOfDataCopies * NumberOfColumns:
Consider a 2-column, 2-copy disk: it requires a minimum of 4 disks. If you lose one disk, you can still access your data - but you cannot write anything anymore, because you no longer have the 4 columns where new data could be stored!
Had you added 5 disks to that pool instead (which will all be used by the Storage Spaces subsystem, filled based on size in the best possible way so that all disks hit 100% at the same time), losing one disk would still retain your data - and keep your pool working for new writes, because you still have the minimum of 4 columns left.
Also, this allows you to rebuild the pool immediately if one disk fails, without having to purchase a new disk first!
Set-PhysicalDisk -FriendlyName "BrokenDisk" -Usage Retired
Get-PhysicalDisk -FriendlyName "BrokenDisk" | Get-VirtualDisk | Repair-VirtualDisk -AsJob
The data will now be "moved" to the remaining disks, if enough space is left. After the rebuild:
$disk = Get-PhysicalDisk -FriendlyName "BrokenDisk"
Remove-PhysicalDisk -StoragePoolFriendlyName "My Pool" -PhysicalDisks $disk
(You can use the same commands to retire "functional" disks and move their data to other disks - this allows a sort of redistribution of the data once you add a disk - but in the end you will always have one disk "empty". In your case, however, it would not work, due to the small number of disks. In a 10-disk pool, for instance, you could free up a 2 TB disk by distributing roughly 200 GB to every other disk. Re-running the operation will then prefer writing to the empty disk. Storage Spaces basically always says: "I have to write 8 blocks (NumberOfColumns * NumberOfDataCopies) of 64 KB each (Interleave / NumberOfColumns) - give me the 8 distinct disks out of the 10 with the lowest percentage used, so I can throw the data there!")
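If I understand the quoted approach correctly, freeing up a still-healthy 2 TB drive would look something like this (the disk name is a placeholder; I haven't tried this on a real pool yet):

Set-PhysicalDisk -FriendlyName "2TB-Disk1" -Usage Retired
Get-VirtualDisk | Repair-VirtualDisk -AsJob
Get-StorageJob    # wait until the repair jobs are finished
Set-PhysicalDisk -FriendlyName "2TB-Disk1" -Usage AutoSelect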
It would be nice to be able to shift all data off a 2 TB drive to the other pool devices like that (while space is free) and then add a new drive to the pool. I wonder if Storage Spaces can do an internal migration then - but the big catch seems to be that the column count is fixed. What will Storage Spaces do if I go from 6x4 TB to 12x4 TB (all parity)? What about the columns? By default it should be 6 (using 6x4 TB), and the MS guidelines tell me I'd need 6 more drives to extend (which sucks).
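(For reference, this is how I'd check which column count the spaces actually got - on tiered disks the values seem to live on the tier objects rather than on the virtual disk itself:)

Get-VirtualDisk | Select-Object FriendlyName, ResiliencySettingName, NumberOfColumns
Get-StorageTier | Select-Object FriendlyName, MediaType, NumberOfColumns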
Performance is not that critical, as I'm limited by 1 Gbit LAN (even a 3x2 TB parity space with 1 GB WBC performs fast enough), so the column count can be as low as possible (as far as I know, parity needs at least 3).
I want to pull the maximum capacity out of that pool with simple redundancy (parity). I have an additional backup on my 8 TB drive, which runs every 3 days, so my important data is safe anyway.
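Back-of-envelope for the maximum parity capacity with 3 columns and single parity, assuming the pool can balance slabs freely across the mixed disk sizes (my assumption, not verified):

$C      = 3                          # parity columns
$disks  = @(4TB) * 6 + @(2TB) * 5    # 6x4 TB + 5x2 TB HDDs
$raw    = ($disks | Measure-Object -Sum).Sum
$usable = $raw * ($C - 1) / $C       # one column per stripe is parity
'{0:N1} TB raw, ~{1:N1} TB usable' -f ($raw / 1TB), ($usable / 1TB)

So roughly 34 TB raw and ~22.7 TB usable, if nothing else gets in the way.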
Thanks for providing answers and ideas.