After booting, my RAID1 device (/dev/md_d0) sometimes goes into some funny state and I cannot mount it. Originally I created /dev/md0, but it has somehow changed itself into /dev/md_d0.
# mount /opt
mount: wrong fs type, bad option, bad superblock on /dev/md_d0,
missing codepage or helper program, or other error
(could this be the IDE device where you in fact use
ide-scsi so that sr0 or sda or so is needed?)
In some cases useful info is found in syslog - try
dmesg | tail or so
The RAID device appears to be inactive somehow:
# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5]
[raid4] [raid10]
md_d0 : inactive sda4[0](S)
241095104 blocks
# mdadm --detail /dev/md_d0
mdadm: md device /dev/md_d0 does not appear to be active.
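A commonly suggested recovery in this situation (a sketch only; whether it applies depends on why the array came up inactive) is to stop the half-assembled device and let mdadm reassemble the array from the superblocks on its member partitions. Both flags are standard mdadm options; /dev/md_d0 is the device name from the output above:

```shell
# mdadm --stop /dev/md_d0        (tear down the inactive, half-assembled device)
# mdadm --assemble --scan        (reassemble arrays found on the member disks)
```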
The question is: how do I make the device active again (using mdadm, I presume)?
(Other times it's alright (active) after boot, and I can mount it manually without problems. But it still won't mount automatically, even though I have it in /etc/fstab:
/dev/md_d0 /opt ext4 defaults 0 0
So, a bonus question: what should I do to make the RAID device mount automatically at /opt at boot time?)
This is an Ubuntu 9.10 workstation. Background info about my RAID setup in this question.
Edit: My /etc/mdadm/mdadm.conf looks like this. I've never touched this file, at least by hand.
# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# instruct the monitoring daemon where to send mail alerts
MAILADDR <my mail address>
# definitions of existing MD arrays
# This file was auto-generated on Wed, 27 Jan 2010 17:14:36 +0200
In /proc/partitions the last entry is md_d0, at least now, after reboot, when the device happens to be active again. (I'm not sure if it would be the same when it's inactive.)
Resolution: as Jimmy Hedman suggested, I took the output of mdadm --examine --scan:
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=de8fbd92[...]
and added it to /etc/mdadm/mdadm.conf, which seems to have fixed the main problem. After changing /etc/fstab to use /dev/md0 again (instead of /dev/md_d0), the RAID device also gets mounted automatically!
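One extra step often recommended on Ubuntu after editing /etc/mdadm/mdadm.conf is to regenerate the initramfs, since the config file is copied into it and boot-time array assembly reads that copy. The command is standard on Ubuntu; whether it was strictly needed for this fix is an assumption:

```shell
# update-initramfs -u            (refresh the initramfs copy of mdadm.conf)
```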
Ok, mdadm --examine --scan produced
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=...
(note the md0 instead of md_d0!). I put that in the mdadm.conf file manually, because there was some problem with sudo and >> ("permission denied"), and sudo is required. I also updated fstab to use md0 (not md_d0) again. Now I don't seem to run into the "inactive" problem anymore, and the RAID device mounts automatically at /opt upon booting. So thanks! – Jonik – 2010-03-10T14:19:34.940
The reason you had problems with
sudo ... >> mdadm.conf
is that the shell opens the redirected file before sudo runs. The command
su -c '.... >> mdadm.conf'
should work. – Mei – 2013-10-08T18:32:56.197
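Besides su -c, the usual way around this redirection problem is sudo tee -a, e.g. mdadm --examine --scan | sudo tee -a /etc/mdadm/mdadm.conf: the target file is then opened inside the root-owned tee process rather than by your unprivileged shell. A minimal sketch of the append pattern on a scratch file (the ARRAY line is abbreviated from this question; in real use the tee runs under sudo):

```shell
# Demonstrate the `cmd | tee -a file` append pattern on a temporary file.
tmpconf=$(mktemp)
echo "DEVICE partitions" > "$tmpconf"
# tee -a opens the file itself and appends; with sudo, that open happens as root.
echo "ARRAY /dev/md0 level=raid1 num-devices=2" | tee -a "$tmpconf" > /dev/null
cat "$tmpconf"
```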