I'm testing a Rackspace cloud server and have two Cloud Block Storage volumes set up in a RAID 1 configuration.
There are no system files on these volumes; they're purely for storage. Everything appears to work fine until I reboot the server.
After a reboot, the second volume is removed from the array and its state is reported as "faulty spare".
Any idea what could be causing this?
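For reference, the array was created roughly like this (a sketch; the exact flags may have differed, but the device names match the mdadm output further down):

# Build a two-device RAID 1 mirror from the two block storage volumes
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/xvdb /dev/xvdd
# Format it and mount it at the location used in the fstab below
mkfs.ext4 /dev/md0
mount /dev/md0 /mnt/var1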
UPDATE: 12/24
I've discussed this with Rackspace support and the problem is still unresolved. They suspect the RAID array is not being fully deactivated before shutdown, and suggested adding barrier=0
to the fstab options, which didn't help.
I also tried unmounting the RAID volume before rebooting, but then it was the first volume that went into "faulty spare" instead.
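To be clear about what I mean by that, cleanly taking the array offline before a reboot looks roughly like this (a sketch; /dev/md0 and /mnt/var1 match the fstab below):

# Unmount the ext4 filesystem sitting on the array
umount /mnt/var1
# Deactivate the array so its member devices are released before shutdown
mdadm --stop /dev/md0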
The following are my fstab options:
proc /proc proc nodev,noexec,nosuid 0 0
/dev/xvda1 / ext3 errors=remount-ro,barrier=0 0 1
/dev/xvdc1 none swap sw 0 0
/dev/md0 /mnt/var1 ext4 defaults,noatime,barrier=0 0 0
And the following is the result of mdadm --query --detail /dev/md0 after a reboot:
Version : 1.2
Creation Time : Fri Dec 21 17:42:10 2012
Raid Level : raid1
Array Size : 104791936 (99.94 GiB 107.31 GB)
Used Dev Size : 104791936 (99.94 GiB 107.31 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Mon Dec 24 21:24:26 2012
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 1
Spare Devices : 0
Name : test-prod:0 (local to host test-prod)
UUID : a6b73196:be9fb090:5cc71f0a:205b6fb2
Events : 148
Number   Major   Minor   RaidDevice   State
   0        0       0        0        removed
   2      202      48        1        active sync    /dev/xvdd
   0      202      16        -        faulty spare   /dev/xvdb
After the reboot I can run mdadm --remove /dev/md0 /dev/xvdb followed by mdadm --add /dev/md0 /dev/xvdb,
and the array rebuilds successfully, but the same failure happens again on the next reboot.
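For completeness, after the re-add I watch the resync until the array reports clean again, roughly like this (standard mdadm/procfs checks, nothing exotic):

# Show resync progress for all md arrays
cat /proc/mdstat
# Or check the detailed array state directly
mdadm --detail /dev/md0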