I recently upgraded my OS from RHEL 5 to RHEL 6. To do so, I installed the new OS on new disks, and now I want to mount the old disks. The old disks show up as /dev/sdc and /dev/sdd in the new system; they were created as a RAID 1 array using LVM, with the default setup from the RHEL install GUI.
I managed to mount the old disks and used them for the last two weeks, but after a reboot they did not remount, and I can't figure out what to do to get them back online. I have no reason to believe there is anything wrong with the disks.
(I'm in the process of making dd copies of the disks, and I have an older backup, but I hope I don't have to use either of those...)
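For the copies I'm running something like the following (the /backup paths are just where I happen to have free space, so adjust to taste):
# dd if=/dev/sdc of=/backup/sdc.img bs=1M conv=noerror,sync
# dd if=/dev/sdd of=/backup/sdd.img bs=1M conv=noerror,sync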
Using fdisk -l:
# fdisk -l
Disk /dev/sdb: 300.1 GB, 300069052416 bytes
255 heads, 63 sectors/track, 36481 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00042e35
   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *           1       30596   245760000   fd  Linux raid autodetect
/dev/sdb2           30596       31118     4194304   fd  Linux raid autodetect
/dev/sdb3           31118       36482    43080704   fd  Linux raid autodetect
Disk /dev/sda: 300.1 GB, 300069052416 bytes
255 heads, 63 sectors/track, 36481 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00091208
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1       30596   245760000   fd  Linux raid autodetect
/dev/sda2           30596       31118     4194304   fd  Linux raid autodetect
/dev/sda3           31118       36482    43080704   fd  Linux raid autodetect
Disk /dev/sdc: 640.1 GB, 640135028736 bytes
255 heads, 63 sectors/track, 77825 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00038b0e
   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1       77825   625129281   fd  Linux raid autodetect
Disk /dev/sdd: 640.1 GB, 640135028736 bytes
255 heads, 63 sectors/track, 77825 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00038b0e
   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1               1       77825   625129281   fd  Linux raid autodetect
Disk /dev/md2: 4292 MB, 4292804608 bytes
2 heads, 4 sectors/track, 1048048 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/md1: 251.7 GB, 251658043392 bytes
2 heads, 4 sectors/track, 61439952 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/md127: 44.1 GB, 44080955392 bytes
2 heads, 4 sectors/track, 10761952 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
And then:
# mdadm --examine /dev/sd[cd]
mdadm: /dev/sdc is not attached to Intel(R) RAID controller.
mdadm: /dev/sdc is not attached to Intel(R) RAID controller.
/dev/sdc:
Magic : Intel Raid ISM Cfg Sig.
Version : 1.1.00
Orig Family : 8e7b2bbf
Family : 8e7b2bbf
Generation : 0000000d
Attributes : All supported
UUID : c8c81af9:952cedd5:e87cafb9:ac06bc40
Checksum : 014eeac2 correct
MPB Sectors : 1
Disks : 2
RAID Devices : 1
Disk01 Serial : WD-WCASY6849672
State : active
Id : 00010000
Usable Size : 1250259208 (596.17 GiB 640.13 GB)
[Volume0]:
UUID : 03c5fad1:93722f95:ff844c3e:d7ed85f5
RAID Level : 1
Members : 2
Slots : [UU]
Failed disk : none
This Slot : 1
Array Size : 1250258944 (596.17 GiB 640.13 GB)
Per Dev Size : 1250259208 (596.17 GiB 640.13 GB)
Sector Offset : 0
Num Stripes : 4883824
Chunk Size : 64 KiB
Reserved : 0
Migrate State : idle
Map State : uninitialized
Dirty State : clean
Disk00 Serial : WD-WCASY7183713
State : active
Id : 00000000
Usable Size : 1250259208 (596.17 GiB 640.13 GB)
mdadm: /dev/sdd is not attached to Intel(R) RAID controller.
mdadm: /dev/sdd is not attached to Intel(R) RAID controller.
/dev/sdd:
Magic : Intel Raid ISM Cfg Sig.
Version : 1.1.00
Orig Family : 8e7b2bbf
Family : 8e7b2bbf
Generation : 0000000d
Attributes : All supported
UUID : c8c81af9:952cedd5:e87cafb9:ac06bc40
Checksum : 014eeac2 correct
MPB Sectors : 1
Disks : 2
RAID Devices : 1
Disk00 Serial : WD-WCASY7183713
State : active
Id : 00000000
Usable Size : 1250259208 (596.17 GiB 640.13 GB)
[Volume0]:
UUID : 03c5fad1:93722f95:ff844c3e:d7ed85f5
RAID Level : 1
Members : 2
Slots : [UU]
Failed disk : none
This Slot : 0
Array Size : 1250258944 (596.17 GiB 640.13 GB)
Per Dev Size : 1250259208 (596.17 GiB 640.13 GB)
Sector Offset : 0
Num Stripes : 4883824
Chunk Size : 64 KiB
Reserved : 0
Migrate State : idle
Map State : uninitialized
Dirty State : clean
Disk01 Serial : WD-WCASY6849672
State : active
Id : 00010000
Usable Size : 1250259208 (596.17 GiB 640.13 GB)
Trying to assemble:
# mdadm --assemble /dev/md3 /dev/sd[cd]
mdadm: no RAID superblock on /dev/sdc
mdadm: /dev/sdc has no superblock - assembly aborted
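Since --examine reports Intel (IMSM) metadata, I'm wondering whether I need to assemble the container first and then start the volume inside it. From my reading of the mdadm man page it would go something like this (untested, and /dev/md/imsm0 is just a name I made up):
# mdadm --assemble /dev/md/imsm0 /dev/sdc /dev/sdd   # assemble the IMSM container from both disks
# mdadm -I /dev/md/imsm0                             # then start the member volume(s) inside it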
I've also tried:
# mdadm --examine --scan /dev/sd[cd]
ARRAY metadata=imsm UUID=c8c81af9:952cedd5:e87cafb9:ac06bc40
ARRAY /dev/md/Volume0 container=c8c81af9:952cedd5:e87cafb9:ac06bc40 member=0 UUID=03c5fad1:93722f95:ff844c3e:d7ed85f5
I added those two lines to /etc/mdadm.conf, but it doesn't seem to help. I'm not sure what to try next. Any help would be appreciated.
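For reference, the relevant lines in my /etc/mdadm.conf now read (copied straight from the --examine --scan output above):
ARRAY metadata=imsm UUID=c8c81af9:952cedd5:e87cafb9:ac06bc40
ARRAY /dev/md/Volume0 container=c8c81af9:952cedd5:e87cafb9:ac06bc40 member=0 UUID=03c5fad1:93722f95:ff844c3e:d7ed85f5
and I expected mdadm --assemble --scan to pick them up, but no luck.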
EDIT 1: Does "Magic : Intel Raid ISM Cfg Sig." indicate that I need to use dmraid?
EDIT 2: As suggested below, I tried dmraid, but I don't know what the response means:
# dmraid -ay
RAID set "isw_cdjaedghjj_Volume0" already active
device "isw_cdjaedghjj_Volume0" is now registered with dmeventd for monitoring
RAID set "isw_cdjaedghjj_Volume0p1" already active
RAID set "isw_cdjaedghjj_Volume0p1" was not activated
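If it would help, I can also post the output of the following (commands I found in the dmraid and dmsetup man pages):
# dmraid -s       # show the status of the discovered RAID sets
# dmsetup table   # show the underlying device-mapper mappings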
EDIT 2b: So, now I can see something here:
# ls /dev/mapper/
control isw_cdjaedghjj_Volume0 isw_cdjaedghjj_Volume0p1
but it doesn't mount:
# mount /dev/mapper/isw_cdjaedghjj_Volume0p1 /mnt/herbert_olddrive/
mount: unknown filesystem type 'linux_raid_member'
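If I understand correctly, 'linux_raid_member' means there is an md superblock inside the mapped partition rather than a plain filesystem. This should confirm what the system sees there (just a sanity check on my part):
# blkid /dev/mapper/isw_cdjaedghjj_Volume0p1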
EDIT 2c: OK, maybe this will help:
# mdadm -I /dev/mapper/isw_cdjaedghjj_Volume0
mdadm: cannot open /dev/mapper/isw_cdjaedghjj_Volume0: Device or resource busy.
# mdadm -I /dev/mapper/isw_cdjaedghjj_Volume0p1
#
The second command returns nothing. Does this mean anything, or am I way off track?
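What I'm tempted to try next, unless someone warns me off, is to release the dmraid mappings entirely and let mdadm have the disks (pure guesswork on my part):
# dmraid -an               # deactivate all dmraid sets, which should free sdc and sdd
# mdadm --assemble --scan  # then let mdadm assemble the array from mdadm.conf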
EDIT 3: /proc/mdstat:
# cat /proc/mdstat
Personalities : [raid1]
md127 : active raid1 sda3[1] sdb3[0]
43047808 blocks super 1.1 [2/2] [UU]
bitmap: 0/1 pages [0KB], 65536KB chunk
md1 : active raid1 sda1[1]
245759808 blocks super 1.0 [2/1] [_U]
bitmap: 2/2 pages [8KB], 65536KB chunk
md2 : active raid1 sda2[1]
4192192 blocks super 1.1 [2/1] [_U]
unused devices: <none>
md1 and md2 (and md127) are RAID arrays on sda and sdb, which are used by the new OS; nothing from sdc or sdd shows up here.