
I've got two 600GB drives in a software RAID1 setup on a physical Debian server.

I want to be able to upgrade the capacity of the server by cloning the drives to a matching pair of 2TB drives. I can then wipe the 600GB drives and use them as storage or whatever.

What's the best way to go about this?

slave:~# mount
/dev/md0 on / type ext3 (rw,errors=remount-ro)
tmpfs on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
procbususb on /proc/bus/usb type usbfs (rw)
udev on /dev type tmpfs (rw,mode=0755)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=620)
Gareth

3 Answers


You should be able to replace the first drive, partition it, add it to the array, and let the RAID resync. Then replace the second drive, allow it to resync, and then expand the RAID and the filesystem to take up the entire space. When you partition the new drives, make the partitions take up all the space that you want for the new layout.
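If you want to script the partitioning rather than do it by hand, something along these lines should work (a sketch only; the new drive showing up as /dev/sdc is my assumption, and I'm keeping an MBR label with a type-fd partition so the kernel's RAID autodetect keeps working like it does on the old disks):

# /dev/sdc is an assumed device name -- check dmesg or fdisk -l for the real one
parted /dev/sdc mklabel msdos
parted /dev/sdc mkpart primary 0% 100%
parted /dev/sdc set 1 raid on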

See man mdadm and man resize2fs.

Remove a device from the array:

mdadm /dev/md0 --remove /dev/olddevice
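Note that if the old disk hasn't actually died, mdadm will refuse to remove an active member; mark it as failed first. Both steps can go in one invocation, e.g.:

mdadm /dev/md0 --fail /dev/olddevice --remove /dev/olddevice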

Add a device to the array:

mdadm /dev/md0 --add /dev/newdevice

Grow the array to take up the entire space allowed by the partitions:

mdadm /dev/md0 --grow --size=max
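One caveat: if the array has an internal write-intent bitmap, newer mdadm versions may refuse to change the size until the bitmap is removed. In that case something like this should do it (re-adding the bitmap afterwards):

mdadm --grow /dev/md0 --bitmap=none
mdadm --grow /dev/md0 --size=max
mdadm --grow /dev/md0 --bitmap=internal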

Grow the filesystem to take up the entire space of the array:

resize2fs /dev/md0
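Afterwards you can sanity-check that the filesystem really did grow:

df -h /
cat /proc/mdstat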

You should still make a backup, just to be sure. If you want to practice and test first, try this in a virtual machine so you can feel confident about the procedure.

Zoredache
  • I'll report back in a few hours but that seems to make sense. Thanks @Zoredache for keeping the power of the lazyweb alive! – Gareth May 17 '09 at 22:50
  • 2
    You may want to investigate LVM (logical volume management) for future use- it can simplify adding on extra drives down the road. It's too hard to add in now, but maybe you want to consider it for your next server. – Tim Howland May 18 '09 at 00:33

Just in case someone googles this up, here is my experience of moving from 2×150 GB to 2×1 TB drives in an mdadm RAID1 with LVM on top of it.

Assume we have two drives, small1 and small2, in an mdadm mirror (md0), and the new ones are big1 and big2. On top of that is LVM with volume group VG1 and logical volume LV1.

Ensure everything is OK with the current md array:

cat /proc/mdstat

Tell mdadm to fail one drive and remove it from the md array:

mdadm /dev/md0 --set-faulty /dev/small1 && mdadm /dev/md0 --remove /dev/small1

Replace the small1 drive with a big one (either by hotswapping, or by powering the system down).

Make a new partition of type FD (Linux RAID autodetect) on the big HDD. Make it the size you want your new RAID to be. I prefer cfdisk, but this may vary:

cfdisk /dev/big1
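If you'd rather not click through cfdisk, a non-interactive alternative (old-style sfdisk syntax, so treat it as a sketch) that creates one type-fd partition spanning the whole disk:

echo ',,fd' | sfdisk /dev/big1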

Add the new disk (or, to be precise, your newly created partition, e.g. /dev/sda1):

mdadm /dev/md0 --add /dev/big1

Wait till the array is synced:

watch cat /proc/mdstat

Repeat this with the other drive. In the end you'll have two big disks in the array.

Grow the array to the maximum size allowed by the component devices, and wait until it has synced:

mdadm /dev/md0 --grow --size=max
watch cat /proc/mdstat

Now it's time to resize the LVM. Note the --test option: it simulates the action but does not change any metadata (useful for spotting misconfiguration before actually resizing).

Resizing physical volume:

pvresize --verbose --test /dev/md0
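Once the dry run looks sane, run it again without --test to actually do it (with no size given, pvresize grows the PV to fill the device):

pvresize --verbose /dev/md0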

Resizing logical volume:

lvresize --verbose -L <SIZE> --test /dev/VG1/LV1
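Same here: drop --test once you're happy. If you simply want the LV to swallow all the newly freed space, extent syntax saves you calculating an exact size:

lvresize --verbose -l +100%FREE /dev/VG1/LV1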

And finally, resizing ext3 FS:

resize2fs /dev/VG1/LV1
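You can confirm each layer picked up the new size:

pvs    # physical volume shows the bigger size
lvs    # logical volume has grown
df -h  # filesystem sees the extra space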

With two 1 TB HDDs it took me about 20 hours (I removed one disk from the array before messing with LVM and the FS, so it was 3 syncs plus the array growing).

All of this was done on a production server, with no interruption to running services.

But don't forget to BACKUP YOUR DATA before making any changes.

user3843

Assuming one of the disks being replaced is a boot disk, don't you need to worry about having GRUB on both disks before you start yanking disks out? (I am assuming the stuff GRUB goes looking for, in /boot, is mirrored onto both disks.)

I'm pretty sure I've stared at a not-quite-GRUB prompt when I didn't get this right...
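For what it's worth, with GRUB legacy (the default on Debian of this vintage), reinstalling the boot loader onto both members after the swap is usually just a matter of (assuming the new disks ended up as /dev/sda and /dev/sdb; adjust to your layout):

grub-install /dev/sda
grub-install /dev/sdb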