I currently have a three-disk Linux software RAID5 array (mdadm). At least one disk is probably dying: long spin-up times, and I've been hearing the click of death for a little while now.
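For reference, I've been keeping an eye on the suspect disk's SMART data with smartctl (from smartmontools; /dev/sdb below is just a placeholder device name):

    # Full SMART report; the attributes I'm watching are Spin_Up_Time,
    # Reallocated_Sector_Ct and Current_Pending_Sector.
    smartctl -a /dev/sdb

    # Kick off a short self-test; the result appears in the
    # self-test log (also shown by -a) once it finishes.
    smartctl -t short /dev/sdb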
I've purchased a brand-new disk, but since the old disk isn't completely dead yet, I'm not sure which of the following makes more sense:
Option 1) Grow the 3-disk RAID5 array into a 4-disk RAID6 array, effectively using the new disk for the additional parity, so the array can survive a two-drive failure. That way I'd also get a bit of extra life out of the old dying disk until it really dies. (Rough commands are sketched after the list.)
Option 2) Just fail out the semi-dead disk, rebuild onto the new disk, and keep a 3-disk RAID5 array. (Also sketched below.)
Option 2.1) Something like RAID5E? Keep the RAID5 and turn one of the disks (the new one, or the dying one) into an online hot spare? (Sketched below as well.)
Option 3) ... that I didn't think of? :-)
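Roughly, here's what I think each option looks like in mdadm terms. These are sketches only; /dev/md0 is the array, /dev/sdd the new disk, and /dev/sdb the dying one, all placeholder names.

For Option 1, add the new disk and reshape RAID5 to RAID6:

    # The backup file protects the reshape if it's interrupted.
    mdadm --add /dev/md0 /dev/sdd
    mdadm --grow /dev/md0 --level=6 --raid-devices=4 \
          --backup-file=/root/md0-reshape.backup
    cat /proc/mdstat    # reshape progress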
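For Option 2, fail and remove the dying disk, then rebuild onto the new one:

    mdadm /dev/md0 --fail /dev/sdb --remove /dev/sdb
    mdadm /dev/md0 --add /dev/sdd
    cat /proc/mdstat    # watch the rebuild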
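For Option 2.1, as far as I can tell mdadm doesn't do true RAID5E (distributed spare space), but adding a fourth disk to an array that already has its full complement of active members just leaves it as a hot spare, which rebuilds in automatically when a member fails:

    # With the 3-disk RAID5 intact, the extra disk becomes a spare
    # (shows up with "(S)" in /proc/mdstat).
    mdadm /dev/md0 --add /dev/sdd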
I want to optimize for whichever option gives me the least chance of the array dying, i.e. the most redundancy, so Option 1 seems like the correct decision. But I worry that keeping the semi-dead disk in the array (and putting it through a full reshape) might set me up for a more catastrophic failure later, perhaps?
I/O profile/use case: very low-I/O online file-share storage, with a mixture of lots of little files and lots of big files (i.e. >1 GB each). It's basically write-once, read-often.
Caveat: I'm aware that RAID != backup, and all the array contents are safely backed up in multiple formats across multiple sites. The chance of real data loss is close to zero given how many places it's backed up; the RAID effectively serves as faster online access to that data. However, restoring the array from backups would be a time-consuming pain in the ass, and that's what I'm trying to avoid.