
I've set up a RAID-5 array on Ubuntu 13.04 (kernel 3.8.0-27-generic) using mdadm v3.2.5 (18th May 2012). It appears to work fine and be in high spirits:

$ cat /proc/mdstat 
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid5 sdb3[0] sdd1[3] sdc1[1]
      2929994752 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]

unused devices: <none>

However, on reboot, the array gets split into two separate arrays for what seems to me to be no good reason. On boot, I get the prompt:

*** WARNING: Degraded RAID devices detected. ***
Press Y to start the degraded RAID or N to launch recovery shell

To which I usually answer yes and get dropped into an initramfs shell, which I immediately exit. Once I'm back in the system proper, my RAID array has been split thusly:

$ cat /proc/mdstat 
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : inactive sdb3[0]
      1464997976 blocks super 1.2

md127 : inactive sdc[1] sdd[2]
      2930275120 blocks super 1.2

I've also gotten it in the reverse:

$ cat /proc/mdstat 
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md127 : inactive sdb3[0]
      1464997976 blocks super 1.2

md0 : inactive sdc[1] sdd[2]
      2930275120 blocks super 1.2

Either way, sdc and sdd seem to have formed a bit of a clique. I can reassemble the array just fine by issuing:

$ mdadm --stop /dev/md0
$ mdadm --stop /dev/md127
$ mdadm -A /dev/md0 /dev/sdb3 /dev/sdc1 /dev/sdd1 
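
The rest of the recovery is just bringing the LVM layer back up; roughly like this (the volume group and logical volume names here are placeholders, not my real ones):

$ mdadm --detail /dev/md0 | grep -E 'State|Devices'   # sanity check: expect "clean" and 3 active/working devices
$ vgchange -ay vg_md0                                 # activate the volume group sitting on md0 (placeholder name)
$ mount /dev/vg_md0/data /mnt/data                    # mount the logical volume as usual (placeholder names)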

Once that's done and the LVM volume that sits on md0 is mounted again, everything acts like nothing happened (it doesn't rebuild or anything). What I'd really like, however, is to not have to go through these steps on every boot. My mdadm.conf file contains the line:

ARRAY /dev/md0 metadata=1.2 UUID=e8aaf501:b564493d:ee375c76:b1242a82

from which I pared out the name= field under advice from this forum post. Running --detail --scan produces this:

$ mdadm --detail --scan
mdadm: cannot open /dev/md/mimir:0: No such file or directory
ARRAY /dev/md0 metadata=1.2 name=turbopepper:0 UUID=e8aaf501:b564493d:ee375c76:b1242a82

Note the array "mimir". This is a vestigial array from when I was playing with arrays before. I don't know where it's being detected from (it's not in mdadm.conf and no reference is made to it in fstab). It probably needs to go, but I can't figure out where it's coming from (it may in fact be the culprit).

Any help would be appreciated in getting the array to be able to persist through reboot without intervention.

Just in case it's pertinent, here's some more output.

$ fdisk -l /dev/sdb /dev/sdc /dev/sdd

Disk /dev/sdb: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x7f0e98a6

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048      499711      248832   fd  Linux raid autodetect
/dev/sdb2          499712   976771071   488135680   fd  Linux raid autodetect
/dev/sdb3       976771072  3907029167  1465129048   fd  Linux raid autodetect

Disk /dev/sdc: 1500.3 GB, 1500301910016 bytes
81 heads, 63 sectors/track, 574226 cylinders, total 2930277168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00052c9c

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1            2048  2930277167  1465137560   fd  Linux raid autodetect

Disk /dev/sdd: 1500.3 GB, 1500301910016 bytes
81 heads, 63 sectors/track, 574226 cylinders, total 2930277168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000bd694

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1            2048  2930277167  1465137560   fd  Linux raid autodetect

I imagine the reason that sdc and sdd are together during the split is that they're identical drives.

$ cat /etc/mdadm/mdadm.conf
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays
ARRAY /dev/md0 metadata=1.2 UUID=e8aaf501:b564493d:ee375c76:b1242a82

# This file was auto-generated on Sun, 08 Dec 2013 00:39:01 -0500
# by mkconf $Id$
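
One assumption I'm making that may matter: I believe the boot-time assembly reads the copy of this file embedded in the initramfs, so after editing it I'd expect to need a refresh along these lines (not something I've confirmed fixes the split):

$ update-initramfs -u    # rebuild the initramfs so its embedded mdadm.conf matches the one above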

1 Answer

One of your partitions, probably sdb3, still has an old superblock on it for the "mimir" array, which mdadm picks up when it scans devices at startup. It should be fixable by issuing

mdadm --zero-superblock /dev/sdb3

and re-adding the partition to the array afterwards.
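
Roughly like this (a sketch only; device name taken from your question, adjust as needed):

mdadm /dev/md0 --fail /dev/sdb3 --remove /dev/sdb3   # detach the member if it is still part of the assembled array
mdadm --zero-superblock /dev/sdb3                    # wipe the stale metadata
mdadm /dev/md0 --add /dev/sdb3                       # re-add it; the array will resync onto the cleared partition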

etagenklo
  • Sorry for the delay in replying; I had to rebuild the array a couple of times. I did as you suggested (I actually decided that sdc1 and sdd1 were more likely where the vestigial array lay, since sdb3 doesn't start from the first block and they all used to participate in the old array). It _did_ get rid of the old array when displaying status, but the array is still being split up on reboot. – fromClouds Dec 11 '13 at 01:02