A while ago I had a RAID10 config crap out on me, and I am only now getting around to trying to salvage the array so I can rebuild and move on with my life. One drive in each subset failed, which means (in theory) the array is recoverable; if I had lost two disks in the same subset, recovery would not be possible.
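For context, here is the layout as I understand it, pieced together from the dmraid output further down (which physical drive sat in which slot of each subset is my assumption):

pdc_fbdbhaai (raid10 superset)
  pdc_fbdbhaai-0 (stripe): /dev/sdb [ok] + failed drive [removed]
  pdc_fbdbhaai-1 (stripe): /dev/sde [ok] + failed drive [removed]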
I removed the two bad drives and added two new drives to the system. The RAID controller card is a Promise FastTrak 4310. When I booted the system I jumped into the controller's BIOS and saw that all 4 drives were found, but the two new ones (obviously) were not assigned to the RAID configuration. Unfortunately there is no way for me to remove the two old drives from the config and add the two new ones via the BIOS. Promise does provide a WebPAM installer, but it is ancient (6 years old) and will not install on CentOS 6.4.
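As a sanity check that this is purely a metadata problem, I also confirmed the kernel itself sees all four disks (I am assuming the two new ones came up as sdc and sdd, which is what I use in the proposed commands below):

root@service1 ~ # -> cat /proc/partitions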
So I did some digging around and came across "dmraid". dmraid looks promising, as the information it returns about my RAID config matches what I know about the array:
root@service1 ~ # -> dmraid -s -s
ERROR: pdc: wrong # of devices in RAID set "pdc_fbdbhaai-0" [1/2] on /dev/sdb
ERROR: pdc: wrong # of devices in RAID set "pdc_fbdbhaai-1" [1/2] on /dev/sde
ERROR: pdc: wrong # of devices in RAID set "pdc_fbdbhaai-0" [1/2] on /dev/sdb
ERROR: pdc: wrong # of devices in RAID set "pdc_fbdbhaai-1" [1/2] on /dev/sde
*** Superset
name : pdc_fbdbhaai
size : 976642080
stride : 32
type : raid10
status : ok
subsets: 2
devs : 2
spares : 0
--> Subset
name : pdc_fbdbhaai-0
size : 976642080
stride : 32
type : stripe
status : broken
subsets: 0
devs : 1
spares : 0
--> Subset
name : pdc_fbdbhaai-1
size : 976642080
stride : 32
type : stripe
status : broken
subsets: 0
devs : 1
spares : 0
root@service1 ~ # -> dmraid -r
/dev/sde: pdc, "pdc_fbdbhaai-1", stripe, ok, 976642080 sectors, data@ 0
/dev/sdb: pdc, "pdc_fbdbhaai-0", stripe, ok, 976642080 sectors, data@ 0
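Before touching anything I plan to back up the existing on-disk metadata so I can put it back if this goes sideways. My reading of the man page is that -r combined with -D dumps each device's metadata sectors to files (under a dmraid.pdc/ directory, if I understand the naming correctly):

root@service1 ~ # -> dmraid -r -D /dev/sdb /dev/sde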
As of now, it looks like all I need to do is update the RAID metadata to disregard the old drives and register the new ones. Then (hopefully) I can issue a rebuild command and the array will rebuild itself onto the new drives from the two surviving ones.
I did read "man dmraid", but I wanted to be absolutely sure the commands I issue will accomplish what I am trying to do. Unfortunately I was unable to find any good documentation online on how to add/remove drives from RAID metadata using dmraid.
My proposed command set looks like this:
root@service1 ~ # -> dmraid --remove pdc_fbdbhaai-0 /dev/sda1
root@service1 ~ # -> dmraid --remove pdc_fbdbhaai-1 /dev/sda2
With the old drives removed from the metadata, it is time to add the new ones:
root@service1 ~ # -> dmraid -R pdc_fbdbhaai-0 /dev/sdc
root@service1 ~ # -> dmraid -R pdc_fbdbhaai-1 /dev/sdd
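If that works, my plan is to confirm the set looks healthy again and then activate it; both of these are plain dmraid invocations straight from the man page, so hopefully no surprises here:

root@service1 ~ # -> dmraid -s -s pdc_fbdbhaai
root@service1 ~ # -> dmraid -ay pdc_fbdbhaai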
Is anyone with dmraid experience able to confirm these steps? Or should I go another route?