I have a 64-bit Ubuntu Jaunty server (kernel 2.6.28-17-server) installed on two SATA disks (sdc and sdd) in a RAID 1 mirror. This is my current RAID configuration:
    cat /proc/mdstat

    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
    md5 : active raid1 sdd7[1] sdc7[0]
          126953536 blocks [2/2] [UU]

    md2 : active raid1 sdd3[1] sdc3[0]
          979840 blocks [2/2] [UU]

    md0 : active raid1 sdd1[1] sdc1[0]
          96256 blocks [2/2] [UU]

    md4 : active raid1 sdd6[1] sdc6[0]
          9767424 blocks [2/2] [UU]

    md3 : active raid1 sdd5[1] sdc5[0]
          979840 blocks [2/2] [UU]

    md1 : active raid1 sdd2[1] sdc2[0]
          1951808 blocks [2/2] [UU]

    unused devices: <none>
    # df -h

    Filesystem            Size  Used Avail Use% Mounted on
    /dev/md4              9.2G  922M  7.9G  11% /
    tmpfs                 490M     0  490M   0% /lib/init/rw
    varrun                490M  316K  490M   1% /var/run
    varlock               490M     0  490M   0% /var/lock
    udev                  490M  228K  490M   1% /dev
    tmpfs                 490M     0  490M   0% /dev/shm
    lrm                   490M  2.5M  488M   1% /lib/modules/2.6.28-17-server/volatile
    /dev/md0               89M   55M   30M  65% /boot
    /dev/md5              120G   96G   18G  85% /data
    /dev/md2              942M   18M  877M   2% /tmp
    /dev/md3              942M  186M  709M  21% /var
Users are quickly filling up the /data Samba share, so I added two additional hard disks (sda and sdb, exactly the same type and size), intending to create another mirror out of them and then mount the new RAID device inside /data.
The first step I took was to create one Linux raid autodetect partition on each of the new disks, making sure they are the same size.
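From memory, the interactive fdisk session on each disk went roughly like this (a sketch, not an exact transcript):

    fdisk /dev/sda
    n        # new partition
    p        # primary
    1        # partition number 1
             # accept the default first and last cylinders
    t        # change the partition type
    fd       # fd = Linux raid autodetect
    w        # write the table and exit

The resulting partition tables: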
    fdisk /dev/sda -l

    Disk /dev/sda: 122.9 GB, 122942324736 bytes
    255 heads, 63 sectors/track, 14946 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Disk identifier: 0x000e2e78

       Device Boot      Start         End      Blocks   Id  System
    /dev/sda1               1       14946   120053713+  fd  Linux raid autodetect
    fdisk /dev/sdb -l

    Disk /dev/sdb: 122.9 GB, 122942324736 bytes
    255 heads, 63 sectors/track, 14946 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Disk identifier: 0x000ef08e

       Device Boot      Start         End      Blocks   Id  System
    /dev/sdb1               1       14946   120053713+  fd  Linux raid autodetect
Next I created the new mirror:
    mdadm --create /dev/md6 --level=mirror --raid-devices=2 /dev/sda1 /dev/sdb1
At which point I got the following warning:
    mdadm: /dev/sdb1 appears to contain an ext2fs file system
        size=120053712K  mtime=Sat Dec 19 11:10:30 2009
    Continue creating array?
This is weird, as I had just created the partition and never put a filesystem on it. Anyway, I answered yes and waited for the sync to finish.
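(Side note: I assume the warning came from a stale signature left over from a previous use of the disk. Had I wanted to be thorough, I believe something like this would have wiped any leftover metadata before creating the array; I did not actually run it:)

    # zero the first MB of the partition to wipe stale filesystem signatures
    dd if=/dev/zero of=/dev/sdb1 bs=1M count=1
    # clear any old md superblock as well (0.90 metadata lives near the end)
    mdadm --zero-superblock /dev/sdb1

    # the resync progress itself can be followed with:
    watch cat /proc/mdstat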
Everything seemed fine:
    cat /proc/mdstat

    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
    md6 : active raid1 sdb1[1] sda1[0]
          120053632 blocks [2/2] [UU]

    md5 : active raid1 sdd7[1] sdc7[0]
          126953536 blocks [2/2] [UU]

    md2 : active raid1 sdd3[1] sdc3[0]
          979840 blocks [2/2] [UU]

    md4 : active raid1 sdc6[0] sdd6[1]
          9767424 blocks [2/2] [UU]

    md3 : active raid1 sdc5[0] sdd5[1]
          979840 blocks [2/2] [UU]

    md1 : active raid1 sdc2[0] sdd2[1]
          1951808 blocks [2/2] [UU]

    md0 : active raid1 sdc1[0] sdd1[1]
          96256 blocks [2/2] [UU]

    unused devices: <none>
    mdadm --detail /dev/md6

    /dev/md6:
            Version : 00.90
      Creation Time : Sat Dec 19 11:33:31 2009
         Raid Level : raid1
         Array Size : 120053632 (114.49 GiB 122.93 GB)
      Used Dev Size : 120053632 (114.49 GiB 122.93 GB)
       Raid Devices : 2
      Total Devices : 2
    Preferred Minor : 6
        Persistence : Superblock is persistent

        Update Time : Sat Dec 19 12:24:14 2009
              State : clean
     Active Devices : 2
    Working Devices : 2
     Failed Devices : 0
      Spare Devices : 0

               UUID : b901925f:b5ca90e0:afcf3cfb:09b88def (local to host szerver.mtvsz.local)
             Events : 0.4

        Number   Major   Minor   RaidDevice State
           0       8        1        0      active sync   /dev/sda1
           1       8       17        1      active sync   /dev/sdb1
But after a reboot, here comes the problem:
    cat /proc/mdstat

    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
    md_d6 : inactive sdb1[1](S)
          120053632 blocks

    md3 : active raid1 sdc5[0] sdd5[1]
          979840 blocks [2/2] [UU]

    md5 : active raid1 sdc7[0] sdd7[1]
          126953536 blocks [2/2] [UU]

    md2 : active raid1 sdc3[0] sdd3[1]
          979840 blocks [2/2] [UU]

    md1 : active raid1 sdd2[1] sdc2[0]
          1951808 blocks [2/2] [UU]

    md0 : active raid1 sdd1[1] sdc1[0]
          96256 blocks [2/2] [UU]

    md4 : active raid1 sdd6[1] sdc6[0]
          9767424 blocks [2/2] [UU]

    unused devices: <none>
    ls /dev/md*

    /dev/md0  /dev/md2  /dev/md4  /dev/md_d6    /dev/md_d6p2  /dev/md_d6p4
    /dev/md1  /dev/md3  /dev/md5  /dev/md_d6p1  /dev/md_d6p3
So my question is: what the hell is this md_d6 device, and where did its partitions come from?
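My guess is that it has something to do with how the arrays are assembled at boot. For reference, this is roughly what I plan to compare next; I have no idea yet whether this is the actual cause:

    # what the component superblocks report
    mdadm --examine --scan

    # versus what the boot-time configuration knows about
    cat /etc/mdadm/mdadm.conf

    # if the new array is not listed there, I assume adding its ARRAY line
    # and rebuilding the initramfs would make the md6 name stick across reboots:
    echo 'ARRAY /dev/md6 UUID=b901925f:b5ca90e0:afcf3cfb:09b88def' >> /etc/mdadm/mdadm.conf
    update-initramfs -u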