
I have an Ubuntu server with one 2 TB HDD and one 128 GB SSD (currently unused). The HDD has two partitions: /boot and a physical volume for LVM, which holds one volume group with five logical volumes: /, /var/log, /home, /srv, /tmp.

A second 2 TB HDD has recently arrived. We need better redundancy, so I'm looking at either joining the two HDDs into one RAID1 array (with /boot, /, and an LVM partition holding /home, /var/log, /srv, /tmp), or adding the second HDD as another LVM physical volume and using LVM mirroring for the logical volumes.

In addition to redundancy, I need to achieve two more goals:

  • a relatively safe change from 1xHDD to 2xHDDs (I'm administering a live system remotely)
  • easy future extension of the LVM volumes

In this situation, is RAID1 superior to LVM mirroring? (I believe it is; I'm talking about software RAID1 here.)

If it is, what would be the best way to convert a 1xHDD live system to a 1xRAID1 live system remotely, given that all I have spare is the empty 128 GB SSD and all the HDD data currently fits onto it easily?

chronos
  • I've documented the steps I've taken to achieve this conversion in more [detail](http://bogdan.org.ua/2011/05/17/how-to-remotely-convert-live-hdd-lvm-linux-server-to-raid1-grub2-gpt.html). – chronos May 17 '11 at 13:24

2 Answers


Assuming sda is the original disk and sdb is the new disk:

  1. Partition the new drive. If using fdisk, be sure to hit c (to disable DOS compatibility mode so the partition is aligned), and change the partition type to da (non-FS data).
  2. sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 missing to create the degraded RAID device from the newly partitioned drive.
  3. sudo mdadm -Es and copy the output to /etc/mdadm/mdadm.conf so the RAID device gets assembled automatically when the server reboots.
  4. sudo pvcreate /dev/md0; after this just do vgcreate on /dev/md0 and lvcreate as usual, or you can use pvmove to migrate the existing volumes online (see the sketch below).
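Roughly, and assuming the existing volume group is named vg0 and the current LVM physical volume is /dev/sda2 (check the real names with vgs and pvs), the pvmove route would look something like this:

```
# create a degraded RAID1 with only the new disk's partition
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 missing

# record the array so it is assembled on every boot
sudo mdadm -Es | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u

# turn the array into a physical volume and migrate the data online
sudo pvcreate /dev/md0
sudo vgextend vg0 /dev/md0       # vg0 is an assumed name; check with vgs
sudo pvmove /dev/sda2 /dev/md0   # /dev/sda2 is an assumed name; check with pvs
sudo vgreduce vg0 /dev/sda2
sudo pvremove /dev/sda2
```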

After the data is moved off the old drive and onto the new one, you can repartition the old drive, making sure the partition size matches the one on the new RAID drive. Then add it to the RAID device with sudo mdadm --manage --add /dev/md0 /dev/sda1. Since it sounds like these are boot drives, you'll want to install GRUB on both of them.
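Once the data is off the old drive, the re-add and bootloader steps would look roughly like this (sfdisk works for MBR tables; for GPT, sgdisk -R is the usual equivalent; device names are assumptions):

```
# copy the partition layout from the new disk back onto the old one (MBR);
# for GPT, something like: sgdisk -R /dev/sda /dev/sdb && sgdisk -G /dev/sda
sudo sfdisk -d /dev/sdb | sudo sfdisk /dev/sda

# add the old disk's partition to the array and watch the resync
sudo mdadm --manage /dev/md0 --add /dev/sda1
cat /proc/mdstat

# install GRUB on both disks so either one can boot on its own
sudo grub-install /dev/sda
sudo grub-install /dev/sdb
```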

Since this is a remote system, you'll probably want to run sudo dpkg-reconfigure mdadm (assuming this is a Debian-based system, such as Ubuntu) and enable booting with a degraded RAID.
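On Ubuntu this boils down to something like the following (the exact file and variable name may differ between releases, so treat the path as an assumption):

```
# re-run the package configuration and answer "yes" to booting degraded
sudo dpkg-reconfigure mdadm

# the choice is stored for the initramfs, roughly as:
#   /etc/initramfs-tools/conf.d/mdadm  ->  BOOT_DEGRADED=true
sudo update-initramfs -u
```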

You'll also want to set up email so mdadm can notify you of issues with the RAID device (such as a failed drive):

  1. sudo aptitude install postfix
  2. Choose "Satellite system", use the server's FQDN as the mail name (or whatever you prefer), and enter a smarthost if necessary.
  3. Edit /etc/aliases and add root: yourusername so root's mail goes to you.
  4. Put youremail@awesomesauce.com in ~/.forward so that mail is forwarded on to your email account (a quick test is sketched below).
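To check that the alerts actually reach you, something like this should work (MAILADDR root is often already present in Debian/Ubuntu's default mdadm.conf, so check before appending):

```
# point mdadm's alert mail at root (delivered to you via /etc/aliases)
echo "MAILADDR root" | sudo tee -a /etc/mdadm/mdadm.conf

# rebuild the alias database after editing /etc/aliases
sudo newaliases

# send a one-off test alert for every array to confirm mail delivery
sudo mdadm --monitor --scan --oneshot --test
```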
James
  • Thanks a lot! So there are no real benefits to using a separate mdX for `/`, right? And I can just leave `/` within the LVM, as it is now? (I will, however, create a separate md0 for `/boot`. And yes, our system is currently Ubuntu Server.) Thanks for the extra email tip. – chronos May 12 '11 at 08:46
  • I don't see any reason for a separate mdX for `/`. I've been managing a couple of servers for the last couple of years with the system drives on software RAID, and I don't remember ever having a situation where I wished `/` was split out. `/boot` does need to be split out because it cannot go inside the LVM volume. I didn't include exactly how to do that in my answer, but I expected you would figure it out ;) – James May 12 '11 at 12:41
  • Yes, I've figured the `/boot` out :) Thanks again. – chronos May 12 '11 at 20:09

I was advised that possibly the easiest way to convert 1xHDD into 1xRAID1 is by:

  • creating a degraded 1-disk RAID1 on the newly installed HDD
  • copying data from the current HDD to that degraded RAID1
  • adding the earlier-installed HDD to the degraded RAID1 and letting the RAID rebuild (see the sketch below).

Relevant links: 1, 2.
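Roughly, and assuming the new disk is /dev/sdb, the old one is /dev/sda, and the data is copied file-by-file with rsync (mount points and filesystem type are only examples), the sequence would be:

```
# 1. degraded one-disk RAID1 on the new drive
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 missing

# 2. filesystem (or LVM) on the array, then copy the data across
sudo mkfs.ext4 /dev/md0
sudo mount /dev/md0 /mnt
sudo rsync -aAXH /srv/ /mnt/      # repeat per filesystem; paths are examples

# 3. once the system boots from the array, repartition the old drive,
#    add it to the mirror and let it rebuild
sudo mdadm --manage /dev/md0 --add /dev/sda1
watch cat /proc/mdstat
```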

chronos
  • If you add the degraded RAID1 device to LVM (to the volume group), you could do a `pvmove /dev/ /dev/` to move the data from the non-RAID device to the other, then vgreduce, then prepare the old drive and add it to the RAID device. That will let you do it online rather than taking everything down. It may take longer, though. – lsd May 10 '11 at 14:15
  • Thanks, I actually plan to do everything online. I think I'll use what you suggest. The only difference is that I'm planning to remove `/` from LVM to make it a usual md1 (to possibly simplify booting from /boot=md0 and /=md1 when RAID is degraded due to HDD failure). Does moving `/` out of LVM make sense to you? – chronos May 10 '11 at 14:39
  • You can probably have both (the VG on top of the md device), but I think altering `/` would be difficult without rebuilding (or at least quite a bit of offline work migrating stuff over). – lsd May 11 '11 at 15:14
  • Yeah, moving `/` will be a huge pain in the neck. See my answer below for how to enable booting with a degraded raid. – James May 12 '11 at 03:11