
I have a Linux (Debian/Ubuntu) server with 4 x Intel SSD 910 800GB PCIe cards that I need to RAID together.

The biggest problem with these cards is that each one presents 4 x 200GB drives that you can't hardware-RAID (more about that here: http://www.intel.com/support/ssdc/hpssd/sb/CS-034181.htm)

So Linux detects these drives:

  • sda - System drive
  • sdb - Card #1
  • sdc - Card #1
  • sdd - Card #1
  • sde - Card #1
  • sdf - Card #2
  • sdg - Card #2
  • sdh - Card #2
  • sdi - Card #2
  • sdj - Card #3
  • sdk - Card #3
  • sdl - Card #3
  • sdm - Card #3
  • sdn - Card #4
  • sdo - Card #4
  • sdp - Card #4
  • sdq - Card #4

If I RAID these as normal, let's say RAID-10, and for example Card #1 breaks, I would lose 4 drives at the same time (sdb, sdc, sdd, sde), which would probably result in data loss?
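
For illustration, the "normal" approach would be a single flat RAID-10 over all 16 drives, something like this (a sketch; with mdadm's default near=2 layout, adjacent devices in the list mirror each other, so sdb and sdc, both on Card #1, would become a pair):

$ mdadm --create /dev/md0 --level=10 --raid-devices=16 /dev/sd[b-q]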

So I was thinking of doing what most(?) SSD cards do internally anyway, an "internal RAID-0":

$ mdadm --create /dev/md0 --level=0 --raid-devices=4 /dev/sd[b-e]
$ mdadm --create /dev/md1 --level=0 --raid-devices=4 /dev/sd[f-i]
$ mdadm --create /dev/md2 --level=0 --raid-devices=4 /dev/sd[j-m]
$ mdadm --create /dev/md3 --level=0 --raid-devices=4 /dev/sd[n-q]
$ mdadm --create /dev/md4 --level=1 --raid-devices=4 /dev/md[0-3]

But this is RAID-01, which has no benefit over RAID-10... So if I do a RAID-10, it would be something like this, I suppose:

$ mdadm --create /dev/md0 --level=1 --raid-devices=8 /dev/sd[b-i]
$ mdadm --create /dev/md1 --level=1 --raid-devices=8 /dev/sd[j-q]
$ mdadm --create /dev/md2 --level=0 --raid-devices=2 /dev/md[0-1]

The question then is: what happens if Card #1 breaks and I lose the first 4 drives? And what if sdb is mirrored on sdc?
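
One workaround I can think of (again just a sketch, relying on near=2 pairing adjacent devices in the order they are listed) is a flat RAID-10 with the devices interleaved across cards, so no mirror pair shares a card:

$ mdadm --create /dev/md0 --level=10 --layout=n2 --raid-devices=16 \
      /dev/sdb /dev/sdf /dev/sdc /dev/sdg /dev/sdd /dev/sdh /dev/sde /dev/sdi \
      /dev/sdj /dev/sdn /dev/sdk /dev/sdo /dev/sdl /dev/sdp /dev/sdm /dev/sdq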

Once that is decided, the next question is: what chunk size and block size should we choose for running PostgreSQL on this? I think we will use XFS, but I'm open to ideas.
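
For example, something like this (just a sketch with assumed numbers, not a recommendation): with a 256K chunk on the top-level array, XFS can be given the geometry explicitly, where su is the chunk size and sw the number of data-bearing members (2 here, since a 4-element RAID-10 keeps two copies):

$ mdadm --create /dev/md4 --level=10 --chunk=256 --raid-devices=4 /dev/md[0-3]
$ mkfs.xfs -d su=256k,sw=2 /dev/md4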

So to summarise:

  • Need to be able to lose one card without data loss (we have cold spares)
  • Need to get at least 1600GB out of the RAID
Linus
  • I've had the same dilemma with Fusion-io cards. Basically, I wouldn't use more than two physical cards in an array together. Just run with mirrors that span the two cards and you protect yourself from module and card failure. – ewwhite Feb 05 '15 at 17:11
  • You could also use LVM to make a 'disk' from each card's individual devices and then use mdadm to make your array from the logical volumes (1 per card). – Liczyrzepa Feb 05 '15 at 18:00
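
A rough sketch of the LVM variant from the comment above (VG/LV names are made up; one striped LV per card, then mdadm RAID-10 across the four LVs):

$ pvcreate /dev/sd[b-e]
$ vgcreate card1 /dev/sd[b-e]
$ lvcreate -i 4 -l 100%FREE -n stripe card1
# ...repeat for card2 (sd[f-i]), card3 (sd[j-m]) and card4 (sd[n-q]), then:
$ mdadm --create /dev/md0 --level=10 --raid-devices=4 \
      /dev/card1/stripe /dev/card2/stripe /dev/card3/stripe /dev/card4/stripe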

1 Answer


Your first instinct is correct:

So I was thinking of doing what most(?) SSD cards do internally anyway, an "internal RAID-0": (snip) But this is RAID-01, which has no benefit over RAID-10...

The only thing to change is your last line:

$ mdadm --create /dev/md4 --level=10 --raid-devices=4 /dev/md[0-3]

(notice the change: level=10)

This turns each card into a self-contained RAID-0 array, then creates a RAID-10 array across those four elements. It is essentially RAID-010 (a stripe of mirrors of stripes). If any single card dies, another card still holds a mirror of the same data.
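
To double-check which elements pair up (assuming the default near=2 layout, adjacent members in the list mirror each other, so md0/md1 and md2/md3 should be the mirror pairs, each spanning two different cards), inspect the array after creation:

$ cat /proc/mdstat
$ mdadm --detail /dev/md4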

Hyppy
  • Wouldn't this kinda create a "RAID-010"? If I use "mdadm --create /dev/md4 --level=1 --raid-devices=4 /dev/md[0-3]" it would still mirror the data(?), so I fail to see the benefit of adding RAID-10 on top. Would you mind expanding your answer? – Linus Feb 06 '15 at 11:03