
I used the Synology RAID calculator to model a scenario with a mix of 1 TB and 2 TB drives.

Could someone explain how it is possible that 2 TB of redundant space gives protection for 12 TB of data?

Synology RAID calculator screenshot

adamsfamily
    That's completely normal for RAID 5. n disks of m gigabytes gives (n-1)*m gigabytes useable capacity. [Synology's documentation on *shr*](https://kb.synology.com/en-us/DSM/tutorial/What_is_Synology_Hybrid_RAID_SHR) may be worth reading as well; it's basically a buzzword for using non-equal devices in an array. – vidarlo Jun 27 '21 at 08:21
  • What is the concrete business problem you are trying to solve? – Nikita Kipriyanov Jun 27 '21 at 11:35
  • Thanks, analogy to RAID 5 seems to be the answer. – adamsfamily Jun 27 '21 at 12:28
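The capacity rule quoted in the comments — n disks of m gigabytes gives (n-1)*m gigabytes usable — can be checked with a small sketch (my own illustration, not Synology's code; the 7-drive example is hypothetical):

```python
def raid5_usable(n_disks, size_tb):
    """RAID 5 usable capacity: one disk's worth of space holds parity."""
    assert n_disks >= 3, "RAID 5 needs at least 3 disks"
    return (n_disks - 1) * size_tb

# Seven 2 TB drives: 14 TB raw, of which 2 TB holds parity,
# leaving 12 TB of protected data -- matching the question's numbers.
print(raid5_usable(7, 2))  # → 12
```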

1 Answer


My guess? Synology uses a variation of RAID5.

Normally your data is stored in blocks along with its parity data.

The locations of the data and parity blocks are rotated from stripe to stripe, so parity blocks are spread evenly across all drives in the array.

The idea is that if a drive fails, data can be restored by using information from the remaining drives and parity data.
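The rebuild mechanism can be sketched with XOR parity — a minimal toy illustration of the principle, not Synology's actual implementation:

```python
from functools import reduce
from operator import xor

# Toy RAID 5 stripe: data blocks on three data drives, XOR parity on a fourth.
data = [0b1011, 0b0110, 0b1100]
parity = reduce(xor, data)

# Simulate losing drive 1: its block is recovered by XOR-ing
# the surviving data blocks with the parity block.
survivors = [data[0], data[2]]
rebuilt = reduce(xor, survivors + [parity])
print(rebuilt == data[1])  # → True
```

Because XOR is its own inverse, any single missing block in a stripe can be reconstructed from the remaining blocks, which is why one drive's worth of parity protects the whole array against one failure.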

In SHR I suspect Synology pairs the 1 TB drives, so every two 1 TB drives behave like a single 2 TB drive - effectively mimicking some kind of RAID 0.

In essence it would be the same as populating the NAS with 9 x 2 TB drives and doing ordinary RAID5 on the lot.
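Under that pairing assumption, the usable capacity of a mixed array works out like this (a sketch of my guess above, not Synology's algorithm; the drive counts are hypothetical examples):

```python
def shr_usable_tb(one_tb_drives, two_tb_drives):
    """Usable TB if paired 1 TB drives act as 2 TB units and RAID 5
    is applied across all units (assumption, not Synology's code)."""
    units = one_tb_drives // 2 + two_tb_drives  # pairs of 1 TB act as 2 TB
    return (units - 1) * 2                      # one 2 TB unit holds parity

# e.g. 4 x 1 TB + 5 x 2 TB -> 7 units of 2 TB -> 12 TB usable, 2 TB parity
print(shr_usable_tb(4, 5))  # → 12
```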

.... and as the attached picture shows, there is no difference between RAID5 and SHR when you have 9 x 2 TB drives.

RAID calculator screenshot (RAID5 vs SHR with 9 x 2 TB drives)

  • SHR only gives some "advantage" when used in scenario with disks of *different* size. That's the whole goal of its existence: to maximize the available redundant space at all costs. If all disks are of the same size, it just produces usual RAID. So your example simply does not make sense. – Nikita Kipriyanov Jun 27 '21 at 11:31
  • Agreed. As far as I see it is basically just the same as making a JBOD array - except with a little bit of redundancy built-in. – Lasse Michael Mølgaard Jun 27 '21 at 12:21
  • Thanks, the proposed answer via RAID 5 proves (mathematically) that it's possible to protect 16TB with only 2TB of overhead. From the links in the comments above, it's worth mentioning that this is "protection" in scare quotes: a single read error after a full drive failure discards the entire array - which is pretty much an issue. I'm not sure if I should accept this answer BTW because it says "I suspect" - can anybody confirm whether the assumptions in the proposed answer are correct? – adamsfamily Jun 27 '21 at 12:31
  • Well I put my emphasis on "I suspect" due to the largest array I have ever done with Synology was 4 drives, hence only scratching the bare surface of RAID systems. :-) – Lasse Michael Mølgaard Jun 27 '21 at 15:28