I'm trying, without success, to rebuild a RAID 1 array (dmraid + isw). I replaced the failed disk with a new one and the BIOS automatically added it to the RAID. I'm running kernel 2.6.18-194.17.4.el5.

dmraid -r

/dev/sda: isw, "isw_babcjifefe", GROUP, ok, 1953525165 sectors, data@ 0
/dev/sdb: isw, "isw_babcjifefe", GROUP, ok, 1953525165 sectors, data@ 0

dmraid -s

*** Group superset isw_babcjifefe
--> Subset
name   : isw_babcjifefe_Raid0
size   : 1953519616
stride : 128
type   : mirror
status : nosync
subsets: 0
devs   : 2
spares : 0

When I try to start the RAID, I get the following errors:

dmraid -f isw -S -M /dev/sdb

ERROR: isw: SPARE disk must use all space on the disk

dmraid -tay

isw_babcjifefe_Raid0: 0 1953519616 mirror core 3 131072 sync block_on_error 2 /dev/sda 0 /dev/sdb 0
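
For reference, that line is the exact device-mapper table dmraid tries to load; the equivalent manual dmsetup command, in case someone wants the table in reusable form, would be (a sketch, quoting the table above verbatim):

dmsetup create isw_babcjifefe_Raid0 --table '0 1953519616 mirror core 3 131072 sync block_on_error 2 /dev/sda 0 /dev/sdb 0'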

dmraid -ay

RAID set "isw_babcjifefe_Raid0" was not activated
ERROR: device "isw_babcjifefe_Raid0" could not be found

dmraid -R isw_babcjifefe_Raid0 /dev/sdb

ERROR: disk /dev/sdb cannot be used to rebuilding
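
In case it's useful for diagnosis, dmraid can also print the raw isw metadata it reads from each disk, which might show why /dev/sdb is rejected (a sketch; as far as I know, the --native_log option dumps the on-disk metadata):

dmraid -n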

dmesg

device-mapper: table: 253:13: mirror: Device lookup failure
device-mapper: ioctl: error adding target to table
device-mapper: ioctl: device doesn't appear to be in the dev hash table.
device-mapper: table: 253:13: mirror: Device lookup failure
device-mapper: ioctl: error adding target to table
device-mapper: ioctl: device doesn't appear to be in the dev hash table.
device-mapper: table: 253:13: mirror: Device lookup failure
device-mapper: ioctl: error adding target to table
device-mapper: ioctl: device doesn't appear to be in the dev hash table.
device-mapper: ioctl: device doesn't appear to be in the dev hash table.
device-mapper: ioctl: device doesn't appear to be in the dev hash table.

Disks:

Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

LVM:

PV /dev/sda5   VG storage   lvm2 [914.64 GB / 28.64 GB free]
Total: 1 [914.64 GB] / in use: 1 [914.64 GB] / in no VG: 0 [0   ]
Reading all physical volumes.  This may take a while...
Found volume group "storage" using metadata type lvm2
ACTIVE            '/dev/storage/home' [68.00 GB] inherit
ACTIVE            '/dev/storage/home2' [68.00 GB] inherit
ACTIVE            '/dev/storage/home3' [68.00 GB] inherit
ACTIVE            '/dev/storage/home4' [68.00 GB] inherit
ACTIVE            '/dev/storage/home5' [68.00 GB] inherit
ACTIVE            '/dev/storage/var' [15.00 GB] inherit
ACTIVE            '/dev/storage/mysql' [20.00 GB] inherit
ACTIVE            '/dev/storage/pgsql' [7.00 GB] inherit
ACTIVE            '/dev/storage/exim' [12.00 GB] inherit
ACTIVE            '/dev/storage/apache' [25.00 GB] inherit
ACTIVE            '/dev/storage/tmp' [2.00 GB] inherit
ACTIVE            '/dev/storage/backup' [450.00 GB] inherit
ACTIVE            '/dev/storage/log' [15.00 GB] inherit
  • I hope you have backups; scrap the fakeraid and use (reliable, robust, proven) software raid with mdadm. – Andrew Jul 13 '12 at 01:39

2 Answers

I am going to agree with Andrew.

Hopefully you can still do a read-only mount of the surviving disk: mount -o ro /dev/sd

Then copy the data off that drive and start over with mdadm: full, reliable, easy-to-use, fast software RAID.
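
For example, once the data is safely off, a clean mirror could be created along these lines (a minimal sketch; /dev/md0 and whole-disk members are my assumptions, adjust to your partitioning):

# WARNING: this destroys everything currently on both disks
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb

# watch the initial resync complete before trusting the mirror
cat /proc/mdstat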

Good luck.

michael

I'm not sure dmraid supports rebuilding a fakeraid (like isw). I would advise scrapping the isw setup and building a pure software RAID with mdadm.
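
If you go that route, dmraid itself can erase the leftover isw metadata so the fakeraid set no longer claims the disks (a sketch; --erase_metadata is destructive to the set, so only run it once the data is safe):

# remove the isw signatures from both members
dmraid -rE /dev/sda
dmraid -rE /dev/sdb

After that the bare disks can be handed to mdadm as in the other answer.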

sten