
I have two disks with the following structure:

lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda           8:0    0   3,7T  0 disk
├─sda1        8:1    0   2,7T  0 part
└─sda2        8:2    0 931,5G  0 part
sdb           8:16   0   2,7T  0 disk
└─sdb1        8:17   0   2,7T  0 part

sda1 and sdb1 are part of md0. If I run

mdadm --misc --detail /dev/md0

it returns

mdadm: cannot open /dev/md0: No such file or directory

My mdadm.conf is:

# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
DEVICE partitions

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays

# This configuration was auto-generated on Thu, 03 Mar 2016 21:53:00 +0100 by mkconf
ARRAY /dev/md/0  metadata=1.2 UUID=6ca17528:517621d3:e1c460a2:529955dd name=rp3-0:0

If I execute

sudo mdadm -A /dev/md0

or

sudo mdadm --assemble --scan

it returns

mdadm: /dev/md/0 has been started with 2 drives.

and the new disk structure is:

sda           8:0    0   3,7T  0 disk
├─sda1        8:1    0   2,7T  0 part
│ └─md0       9:0    0   5,5T  0 raid0
└─sda2        8:2    0 931,5G  0 part
sdb           8:16   0   2,7T  0 disk
└─sdb1        8:17   0   2,7T  0 part
  └─md0       9:0    0   5,5T  0 raid0

The fstab line for the RAID is:

/dev/md0        /mnt/ext        ext4    defaults,nobootwait,nofail      0       0

nobootwait and nofail are there so that a missing device doesn't break the systemd boot.

When I reboot the system, the raid0 disappears. How can I make the RAID persist across reboots?

Thanks.

3 Answers


You just need the config file and the initrd/initramfs hook; the details may vary by distro.

First, add the conf:

mdadm -D --scan > /etc/mdadm.conf

If you can find a directory like /etc/mdadm/, your distro may expect the file there instead; symlink it or simply use that path.
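
On Debian and derivatives, for example, the file usually lives at /etc/mdadm/mdadm.conf, and appending rather than overwriting keeps the existing DEVICE/MAILADDR settings (a sketch; adjust the path to whatever your distro uses):

mdadm --detail --scan >> /etc/mdadm/mdadm.conf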

Second, make sure the mdadm initrd hooks are installed and enabled. (The normal mdadm package should include them; I only mention this in case your distro is unusual, so you know to look for a second package.)

For example, on Arch-based systems:

$ pacman -Ql mdadm | grep hook
mdadm /usr/lib/initcpio/hooks/
mdadm /usr/lib/initcpio/hooks/mdadm

$ grep mdadm /etc/mkinitcpio.conf 
HOOKS="base ... mdadm lvm2 filesystems ..."

And on Debian-based systems:

$ dpkg -L mdadm | grep initr.*hook
/usr/share/initramfs-tools/hooks
/usr/share/initramfs-tools/hooks/mdadm

(I'm not really sure where the hook is enabled, but I think it should be enabled by default... someone else please fill that in)

And then after these files and packages are installed, you have to rebuild your initramfs, and possibly update-grub just in case:

Arch-based (mkinitcpio -p takes the preset name from /etc/mkinitcpio.d/ without the path or the .preset extension, and Arch has no update-grub, so regenerate the GRUB config directly):

# mkinitcpio -p somekernel
# grub-mkconfig -o /boot/grub/grub.cfg

Debian-based:

# update-initramfs -u
# update-grub
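
To sanity-check that the rebuilt image actually picked up mdadm, you can list its contents; on Debian-based systems lsinitramfs ships with initramfs-tools (adjust the image name to your kernel):

$ lsinitramfs /boot/initrd.img-$(uname -r) | grep -i mdadm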

Also, remember to use a UUID rather than /dev/md0 in your fstab for more reliable booting (e.g. if you boot from a rescue disk and modify the array, it sometimes comes back as /dev/md127 and can be stubborn about being set back to 0). See man fstab for the syntax and blkid for the UUIDs.
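
For example (the UUID below is only a placeholder; substitute the one reported by blkid /dev/md0):

UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /mnt/ext  ext4  defaults,nofail  0  2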

Peter
  • I tried it but nothing :( – Álvaro de la Vega Olmedilla Mar 05 '16 at 12:54
  • Which distro is it? (one time with openSUSE, I couldn't get this sort of solution to work... the hook install script that mkinitrd runs would do some weird scan for disks and fail to find them so the generated hook would just ignore them and do nothing... so I could only fix that by writing my own hook :D) – Peter Mar 05 '16 at 12:56

It's a workaround, but after many hours the only solution that worked was to edit /etc/rc.local. I added this line before exit 0:

mdadm -A /dev/md0 && mount /dev/md0 /mnt/ext

And now after each reboot the RAID is mounted successfully.
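
For reference, a minimal /etc/rc.local along those lines (assuming the paths from the question; rc.local already runs as root, so sudo is not needed, and the file must be executable):

#!/bin/sh -e
# Assemble the array and mount it; '|| true' keeps a failure here from aborting the script.
mdadm --assemble /dev/md0 && mount /dev/md0 /mnt/ext || true

exit 0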

slm

I've seen this happen with many Debian servers. The solution that worked for me every time (and believe me, it took many, many hours to figure out) was to patch /usr/share/initramfs-tools/scripts/local-top/mdadm: find the place where "Assembling all MD arrays" is output and add sleep 3 right after it. When done, run update-initramfs -t -u -k all to install the modified script.
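
I can't vouch for the exact surrounding lines (they vary between versions, and I'm assuming the message is printed via log_begin_msg), but the shape of the change is roughly this, with the sleep being the only added line:

# /usr/share/initramfs-tools/scripts/local-top/mdadm  (illustrative excerpt)
log_begin_msg "Assembling all MD arrays"    # existing line that prints the message
sleep 3                                     # added: give slow disks a moment to appear
# ... the existing mdadm assemble call follows ...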

Chrissi