I just ran an apt-get update on one of my dedicated servers and was left with a relatively scary warning:

Processing triggers for initramfs-tools ...
update-initramfs: Generating /boot/initrd.img-2.6.26-2-686-bigmem
W: mdadm: the array /dev/md/1 with UUID c622dd79:496607cf:c230666b:5103eba0
W: mdadm: is currently active, but it is not listed in mdadm.conf. if
W: mdadm: it is needed for boot, then YOUR SYSTEM IS NOW UNBOOTABLE!
W: mdadm: please inspect the output of /usr/share/mdadm/mkconf, compare
W: mdadm: it to /etc/mdadm/mdadm.conf, and make the necessary changes.
W: mdadm: the array /dev/md/2 with UUID 24120323:8c54087c:c230666b:5103eba0
W: mdadm: is currently active, but it is not listed in mdadm.conf. if
W: mdadm: it is needed for boot, then YOUR SYSTEM IS NOW UNBOOTABLE!
W: mdadm: please inspect the output of /usr/share/mdadm/mkconf, compare
W: mdadm: it to /etc/mdadm/mdadm.conf, and make the necessary changes.
W: mdadm: the array /dev/md/6 with UUID eef74de5:9267b2a1:c230666b:5103eba0
W: mdadm: is currently active, but it is not listed in mdadm.conf. if
W: mdadm: it is needed for boot, then YOUR SYSTEM IS NOW UNBOOTABLE!
W: mdadm: please inspect the output of /usr/share/mdadm/mkconf, compare
W: mdadm: it to /etc/mdadm/mdadm.conf, and make the necessary changes.
W: mdadm: the array /dev/md/5 with UUID 5d45b20c:04d8138f:c230666b:5103eba0
W: mdadm: is currently active, but it is not listed in mdadm.conf. if
W: mdadm: it is needed for boot, then YOUR SYSTEM IS NOW UNBOOTABLE!
W: mdadm: please inspect the output of /usr/share/mdadm/mkconf, compare
W: mdadm: it to /etc/mdadm/mdadm.conf, and make the necessary changes.

As instructed, I inspected the output of /usr/share/mdadm/mkconf and compared it with /etc/mdadm/mdadm.conf, and they are quite different.
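
A quick way to see the differences side by side is something like this (a minimal sketch; the process substitution assumes a bash shell):

# compare the freshly generated config against the live one
diff <(/usr/share/mdadm/mkconf) /etc/mdadm/mdadm.conf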

Here are the contents of /etc/mdadm/mdadm.conf:

# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=b93b0b87:5f7c2c46:0043fca9:4026c400
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=c0fa8842:e214fb1a:fad8a3a2:28f2aabc
ARRAY /dev/md2 level=raid1 num-devices=2 UUID=cdc2a9a9:63bbda21:f55e806c:a5371897
ARRAY /dev/md3 level=raid1 num-devices=2 UUID=eca75495:9c9ce18c:d2bac587:f1e79d80

# This file was auto-generated on Wed, 04 Nov 2009 11:32:16 +0100
# by mkconf $Id$

And here is the output from /usr/share/mdadm/mkconf:

# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays
ARRAY /dev/md1 UUID=c622dd79:496607cf:c230666b:5103eba0
ARRAY /dev/md2 UUID=24120323:8c54087c:c230666b:5103eba0
ARRAY /dev/md5 UUID=5d45b20c:04d8138f:c230666b:5103eba0
ARRAY /dev/md6 UUID=eef74de5:9267b2a1:c230666b:5103eba0

# This configuration was auto-generated on Sat, 25 Feb 2012 13:10:00 +1030
# by mkconf 3.1.4-1+8efb9d1+squeeze1

As I understand it, I need to replace the four 'ARRAY' lines in /etc/mdadm/mdadm.conf with the four different 'ARRAY' lines from the /usr/share/mdadm/mkconf output.

When I did this and then ran update-initramfs -u, there were no more warnings.

Is what I have done above correct? I am now terrified of rebooting the server for fear it will not come back up; since this is a remote dedicated server, that would certainly mean downtime and could be expensive to get running again.
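
A couple of read-only sanity checks that can be run before rebooting; this is just a sketch: the initrd filename must match the running kernel (taken here from the warning above), and lsinitramfs may not be available on older versions of initramfs-tools.

# confirm all arrays are assembled and healthy
cat /proc/mdstat

# confirm the regenerated mdadm.conf actually made it into the new initrd
lsinitramfs /boot/initrd.img-2.6.26-2-686-bigmem | grep mdadm.conf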

FOLLOW UP (response to question):

The output from mount:

/dev/md1 on / type ext3 (rw,usrquota,grpquota)
tmpfs on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
udev on /dev type tmpfs (rw,mode=0755)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=620)
/dev/md2 on /boot type ext2 (rw)
/dev/md5 on /tmp type ext3 (rw)
/dev/md6 on /home type ext3 (rw,usrquota,grpquota)

mdadm --detail /dev/md0

mdadm: md device /dev/md0 does not appear to be active.

mdadm --detail /dev/md1

/dev/md1:
    Version : 0.90
  Creation Time : Sun Aug 14 09:43:08 2011
     Raid Level : raid1
     Array Size : 31463232 (30.01 GiB 32.22 GB)
  Used Dev Size : 31463232 (30.01 GiB 32.22 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 1
    Persistence : Superblock is persistent

    Update Time : Sat Feb 25 14:03:47 2012
      State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

       UUID : c622dd79:496607cf:c230666b:5103eba0
     Events : 0.24

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1

mdadm --detail /dev/md2

/dev/md2:
    Version : 0.90
  Creation Time : Sun Aug 14 09:43:09 2011
     Raid Level : raid1
     Array Size : 104320 (101.89 MiB 106.82 MB)
  Used Dev Size : 104320 (101.89 MiB 106.82 MB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 2
    Persistence : Superblock is persistent

    Update Time : Sat Feb 25 13:20:20 2012
      State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

       UUID : 24120323:8c54087c:c230666b:5103eba0
     Events : 0.30

    Number   Major   Minor   RaidDevice State
       0       8        2        0      active sync   /dev/sda2
       1       8       18        1      active sync   /dev/sdb2

mdadm --detail /dev/md3

mdadm: md device /dev/md3 does not appear to be active.

mdadm --detail /dev/md5

/dev/md5:
    Version : 0.90
  Creation Time : Sun Aug 14 09:43:09 2011
     Raid Level : raid1
     Array Size : 2104448 (2.01 GiB 2.15 GB)
  Used Dev Size : 2104448 (2.01 GiB 2.15 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 5
    Persistence : Superblock is persistent

    Update Time : Sat Feb 25 14:09:03 2012
      State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

       UUID : 5d45b20c:04d8138f:c230666b:5103eba0
     Events : 0.30

    Number   Major   Minor   RaidDevice State
       0       8        5        0      active sync   /dev/sda5
       1       8       21        1      active sync   /dev/sdb5

mdadm --detail /dev/md6

/dev/md6:
    Version : 0.90
  Creation Time : Sun Aug 14 09:43:09 2011
     Raid Level : raid1
     Array Size : 453659456 (432.64 GiB 464.55 GB)
  Used Dev Size : 453659456 (432.64 GiB 464.55 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 6
    Persistence : Superblock is persistent

    Update Time : Sat Feb 25 14:10:00 2012
      State : active
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

       UUID : eef74de5:9267b2a1:c230666b:5103eba0
     Events : 0.31

    Number   Major   Minor   RaidDevice State
       0       8        6        0      active sync   /dev/sda6
       1       8       22        1      active sync   /dev/sdb6
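
As a cross-check, the same ARRAY lines can be generated directly from the running arrays; a minimal sketch, which should report the same four UUIDs that mkconf proposes:

# print one ARRAY line per active array, including its UUID
mdadm --detail --scan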

FOLLOW UP 2 (response to question):

Contents of /etc/fstab:

/dev/md1      /                    ext3 defaults,usrquota,grpquota 1 1
devpts         /dev/pts             devpts     mode=0620,gid=5       0 0
proc           /proc                proc       defaults              0 0
#usbdevfs       /proc/bus/usb        usbdevfs   noauto                0 0
/dev/cdrom     /media/cdrom         auto       ro,noauto,user,exec   0 0
/dev/dvd       /media/dvd           auto       ro,noauto,user,exec   0 0
#
#
#
/dev/md2       /boot    ext2       defaults 1 2
/dev/sda3       swap     swap       pri=42   0 0
/dev/sdb3       swap     swap       pri=42   0 0
/dev/md5       /tmp     ext3       defaults 0 0
/dev/md6       /home    ext3       defaults,usrquota,grpquota 1 2
user568829
    It thinks they should be 1, 2, 5, and 6 - your existing config has them as 0, 1, 2, 3. Something's not right. Can you provide the output of `mount` and the `mdadm --detail` commands for each MD device? – Shane Madden Feb 24 '12 at 19:23
  • Thanks - added FOLLOW UP information above (in original question) – user568829 Feb 24 '12 at 19:43

4 Answers

All you need to do is:

First, overwrite mdadm.conf with the output of mkconf:

/usr/share/mdadm/mkconf > /etc/mdadm/mdadm.conf

Then, update the initramfs:

update-initramfs -u

Now, you can reboot the system.
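
A slightly more cautious variant keeps a copy of the old file first (the .bak name is just an example):

# keep the old config around in case you need to compare or roll back
cp /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf.bak
/usr/share/mdadm/mkconf > /etc/mdadm/mdadm.conf
update-initramfs -u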

guntbert

Looks like the warnings are correct - your current layout differs wildly from your mdadm.conf.

The settings given by /usr/share/mdadm/mkconf appear to be correct. Just to verify: do your /etc/fstab entries match up with your current mounts?
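
For example, a quick read-only cross-check along these lines (the grep patterns are just illustrative):

# md devices referenced in fstab vs. md devices currently mounted
grep '^/dev/md' /etc/fstab
mount | grep '^/dev/md'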

Since something large-ish seems to have changed on this system, I'd still be a bit concerned about the reboot. Back up first!

Shane Madden
  • Yes, /etc/fstab seems to agree with the new mount settings. I have added the output of /etc/fstab above (in FOLLOW UP 2 in the original question). Strange, I don't know how the system setup could have changed...? Yes, I will be backing up all important data before attempting a reboot. Thanks. – user568829 Feb 24 '12 at 19:56
  • Yup, looks like everything was updated except the `mdadm.conf`. Strange! Maybe check the modify timestamp on `/etc/fstab` to get a guess at when the changes might have occurred? – Shane Madden Feb 24 '12 at 19:58
  • Thinking back, maybe it has something to do with a question that appeared during the apt update. A screen came up with "Configuring mdadm" and asked whether I wanted All or None; I wasn't sure and couldn't seem to find any information on Google, so I just pressed enter on the default, which was set to All... – user568829 Feb 24 '12 at 20:00
  • -rw-r--r-- 1 root root 703 Aug 14 2011 /etc/fstab – user568829 Feb 24 '12 at 20:01
  • Hmm. Did the `mdadm.conf` change recently, maybe? The package update may have put it back to an old state? – Shane Madden Feb 24 '12 at 21:19
  • @user568829, if you want to see that dialog again, you can re-run `dpkg-reconfigure mdadm`. It is asking which volumes the initrd needs to make available for your system to boot. Generally you can just choose all, unless some of your disks will not be available until networking is up (iSCSI) or something like that. – Linux Geek Feb 24 '12 at 23:12
  • Thanks guys, will update this question when I try a reboot next. – user568829 Feb 25 '12 at 15:08

I had a similar problem, but instead of listing different arrays, my mdadm.conf ended up empty after a Debian upgrade (Lenny to Squeeze).

W: mdadm: /etc/mdadm/mdadm.conf defines no arrays.

The same solution worked. I used the output of mkconf as my mdadm.conf:

/usr/share/mdadm/mkconf > /etc/mdadm/mdadm.conf

The reboot test passed.
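
A cautious variant for this case, assuming a POSIX shell: write the generated config to a temporary file first (the /tmp path here is just an example) and only install it if it actually defines arrays.

/usr/share/mdadm/mkconf > /tmp/mdadm.conf.new
# only install the new file and rebuild the initrd if it contains ARRAY lines
grep -q '^ARRAY' /tmp/mdadm.conf.new && cp /tmp/mdadm.conf.new /etc/mdadm/mdadm.conf && update-initramfs -u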

hdiogenes

Just a follow-up.

I finally backed up all data on the server and did a reboot, and the server came back up with no problems. So the changes outlined above (in the original question) were correct.

user568829