10

Recently, I came across the Ubuntu Server installer. During the install, it asked me whether or not to allow booting the system from a degraded RAID array (probably because I installed the system onto a RAID1 /dev/md0 device). This is a mighty useful option for unattended servers which just have to come online, whether or not their RAID array is degraded (as long as it hasn't completely failed).

After a quick lookup, I found that it works by reading either the /etc/initramfs-tools/conf.d/mdadm configuration file (the BOOT_DEGRADED=true option) or a kernel boot line argument (bootdegraded=true).
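For reference, this is roughly what it boils down to on Ubuntu (a sketch based on the above; exact file contents may vary between releases, and the initramfs presumably needs rebuilding afterwards):

    # /etc/initramfs-tools/conf.d/mdadm
    BOOT_DEGRADED=true

    # rebuild the initramfs so the setting takes effect
    update-initramfs -u

...or, alternatively, appending bootdegraded=true to the kernel line in the bootloader configuration.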

Question: Is there something similar (a way to boot the system with a degraded array) that would work for Debian? I'm not sure if this exact method is applicable, or even whether Debian has this specific functionality.

I'm asking this because I used to have a RAID5 array in some system, and upon an improper shutdown it could not boot until I manually "fixed" the array, which proved to be a major PITA, since the server was unattended at a remote location, there was no UPS, and power failures did happen. So I'm asking so that I can prevent this kind of issue in the future.

mr.b
  • 583
  • 10
  • 25
  • 1
    Don't you mean *Ubuntu* Server install? – Teddy Jan 25 '11 at 07:42
  • @Teddy: indeed, I do. Fixed. – mr.b Jan 26 '11 at 03:40
  • A server in a remote location, with no UPS, booting from a software RAID volume? Sounds ill-conceived at best. – Skyhawk Jan 31 '11 at 04:19
  • @Miles: It is, but it was still a good fit given the budget and circumstances at the time of building that server; I'm not implying it was a good solution. – mr.b Jan 31 '11 at 12:33
  • http://www200.pair.com/mecham/raid/raid1-degraded-etch.html - somewhat lengthy instructions for Debian Etch configuration. (Not written by me) – Olli Feb 01 '11 at 08:04

5 Answers

6

You want start_dirty_degraded. Try specifying md-mod.start_dirty_degraded=1 as a boot argument to the kernel image.
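For instance, on a box using GRUB 2 you could presumably add it to the default kernel command line (a sketch, assuming /etc/default/grub is in use; adjust to your bootloader):

    # /etc/default/grub
    GRUB_CMDLINE_LINUX_DEFAULT="quiet md-mod.start_dirty_degraded=1"

    # regenerate the GRUB configuration afterwards
    update-grub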

Nathan
  • 156
  • 1
  • 3
2

I had the problem that my system would boot normally with /dev/sdb unplugged, but would stall forever if I removed /dev/sda.

The simple solution, after a standard install of Debian, was to run grub-install /dev/sdb.

...and now it boots even with /dev/sda disconnected.
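In other words, make sure the boot loader is installed on every member of the RAID1 mirror, e.g. (a sketch, assuming sda and sdb are the two members):

    grub-install /dev/sda
    grub-install /dev/sdb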

Kvisle
  • 4,113
  • 23
  • 25
1

Debian does not care whether or not your RAID is healthy while it boots.

You can check using dmesg: when the server starts, it displays the number of drives used in the RAID array.

You can also check /proc/mdstat to read the current status.

If needed, you can use mdadm --manage /dev/md0 --fail /dev/sda1, for instance, to force /dev/sda1 to be marked as failed, and then reboot.
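A rough test sequence along those lines might look like this (a sketch; the device names are only examples):

    cat /proc/mdstat                             # check the current array status
    mdadm --manage /dev/md0 --fail /dev/sda1     # mark one member as failed
    reboot                                       # verify the box still comes up on the degraded array
    mdadm --manage /dev/md0 --remove /dev/sda1   # afterwards, remove the "failed" member...
    mdadm --manage /dev/md0 --add /dev/sda1      # ...and add it back so the array resyncs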

Best regards,

Arnaud.

aligot
  • 318
  • 1
  • 7
0

I don't have an easy way to test this right now (the only Debian box that isn't remote and is using software RAID1 is in production at the moment), but I'm pretty sure I remember one or two cases in the past where one of my Debian softraid boxes had a disk issue, and I think Debian defaults to allowing it to boot with a degraded RAID.

In fact, I'm nearly positive that it does, because if you aren't using the write-intent bitmap feature (which adds a big performance hit if you use an internal bitmap; it's much better to store it on a separate disk), and your box crashes/reboots for any reason without shutting down cleanly, it'll come up with a degraded RAID and then resync after starting.
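If you do want a write-intent bitmap to speed those resyncs up, it can be added to an existing array; a sketch (internal bitmap shown, the separate-file variant takes a path instead):

    mdadm --grow /dev/md0 --bitmap=internal   # add an internal write-intent bitmap
    mdadm --grow /dev/md0 --bitmap=none       # remove it again if the overhead hurts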

Christopher Cashell
  • 8,999
  • 2
  • 31
  • 43
0

I would try to boot into something resembling single-user mode running off the initramfs and "fixing" it from there.
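From the initramfs (or a rescue) shell, that would roughly amount to forcing the degraded array to start, e.g. (a sketch; device names are only examples):

    mdadm --assemble --run /dev/md0 /dev/sda1   # --run starts the array even with members missing
    exit                                        # resume booting, or mount and repair by hand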

Konrads
  • 860
  • 2
  • 20
  • 38