I have a Linux (Debian/Ubuntu) server with 4 x Intel SSD 910 800GB PCIe cards that I need to RAID together.
The biggest problem with these cards is that each one presents itself as 4 x 200GB drives that you can't hardware-RAID (more about that here: http://www.intel.com/support/ssdc/hpssd/sb/CS-034181.htm).
So Linux detects these drives:
- sda - System drive
- sdb - Card #1
- sdc - Card #1
- sdd - Card #1
- sde - Card #1
- sdf - Card #2
- sdg - Card #2
- sdh - Card #2
- sdi - Card #2
- sdj - Card #3
- sdk - Card #3
- sdl - Card #3
- sdm - Card #3
- sdn - Card #4
- sdo - Card #4
- sdp - Card #4
- sdq - Card #4
If I were to RAID these the normal way, let's say RAID-10, and for example Card #1 breaks, I would lose 4 drives at the same time (sdb, sdc, sdd, sde), which would probably result in data loss, right?
So I was thinking I could do what most(?) SSD cards do internally anyway, a RAID-0 per card:
$ mdadm --create /dev/md0 --level=0 --raid-devices=4 /dev/sd[b-e]
$ mdadm --create /dev/md1 --level=0 --raid-devices=4 /dev/sd[f-i]
$ mdadm --create /dev/md2 --level=0 --raid-devices=4 /dev/sd[j-m]
$ mdadm --create /dev/md3 --level=0 --raid-devices=4 /dev/sd[n-q]
$ mdadm --create /dev/md4 --level=1 --raid-devices=4 /dev/md[0-3]
But this is RAID-0+1, which has no benefit over RAID-10 (and a four-way mirror of those stripes would only leave 800GB usable anyway). So if I do RAID-10 instead, I suppose it's something like this:
$ mdadm --create /dev/md0 --level=1 --raid-devices=8 /dev/sd[b-i]
$ mdadm --create /dev/md1 --level=1 --raid-devices=8 /dev/sd[j-q]
$ mdadm --create /dev/md2 --level=0 --raid-devices=2 /dev/md[0-1]
The question then is: what happens if Card #1 breaks and I lose its 4 drives at once? And what if sdb ends up mirrored on sdc, i.e. both copies live on the same card?
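One idea I had (just a sketch, and the part I'd want to verify first: I'm assuming mdadm's "near=2" RAID-10 layout puts the two copies of each chunk on adjacent devices in the order they are listed) is to build the RAID-10 in one go and interleave the cards in the device list, so every mirror pair spans two different cards:

$ mdadm --create /dev/md0 --level=10 --layout=n2 --raid-devices=16 \
    /dev/sdb /dev/sdf /dev/sdc /dev/sdg /dev/sdd /dev/sdh /dev/sde /dev/sdi \
    /dev/sdj /dev/sdn /dev/sdk /dev/sdo /dev/sdl /dev/sdp /dev/sdm /dev/sdq

If that assumption holds, losing Card #1 only takes out one half of four different mirrors (their partners are on Card #2), and I still get 16 x 200GB / 2 = 1600GB. Alternatively I could create 8 explicit 2-drive RAID-1 pairs, each pair taking one drive from two different cards, and RAID-0 over those; that makes the pairing explicit and easy to check with mdadm --detail.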
Once that is decided, the next question is what chunk size and block size we should choose for running PostgreSQL on this. I think we will use XFS, but I'm open to ideas.
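For what it's worth, this is roughly what I had in mind for the filesystem (a sketch only: the 512k stripe unit assumes mdadm's default 512K chunk, sw=8 assumes 8 data-bearing drives in the 16-drive RAID-10, and the mount point is just the Debian default PostgreSQL data directory):

$ mkfs.xfs -d su=512k,sw=8 /dev/md0
$ mount -o noatime /dev/md0 /var/lib/postgresql

PostgreSQL works in 8kB pages, so I'm mostly wondering whether a smaller chunk (or different su/sw values) makes more sense for a random-write-heavy database workload.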
So to summarise:
- Need to be able to lose one card without data loss (we have cold spares)
- Need to get at least 1600GB out of the RAID
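Once it's built I'd probably sanity-check the "lose one card" requirement before loading any data, by failing all four drives of one card and watching whether the array just goes degraded instead of dying (device names assume the interleaved layout sketched above):

$ mdadm /dev/md0 --fail /dev/sdb
$ mdadm /dev/md0 --fail /dev/sdc
$ mdadm /dev/md0 --fail /dev/sdd
$ mdadm /dev/md0 --fail /dev/sde
$ cat /proc/mdstat
$ mdadm --detail /dev/md0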