
I've got a Linux box that's running out of space on a non-root volume. What's the best way to move it to a larger drive?

I figure I boot to single-user mode, format and mount the new drive, run some magical copy command that preserves links, permissions, file dates and everything else, then unmount the old drive, mount the new drive at the old mount point, and reboot.

Does that sound right? Am I missing something? Suggestions? Tips? Anybody know what the cp command would be?

This is an Ubuntu machine.

Stu

6 Answers


If you gave your current layout (the output of "fdisk -l" will do if you don't use LVM; the output of "fdisk -l", "pvdisplay -C", "vgdisplay -C" and "lvdisplay -C" if you do use LVM) and the drive/partition you wish to grow, we could give a more accurate answer.

Assuming that by "non-root volume drive" you mean a drive with a single partition containing the volume you want to grow onto a new disk, that the old disk appears as sdb (and the partition on it as sdb1), that the existing partition holds an ext2 or ext3 filesystem, that the new disk is installed and partitioned as a single volume (say, sdc1), and that you want to move completely to the new disk and get rid of the old one, the following will work:

  1. Back up the data, just in case
  2. Stop any services and other processes that are accessing the volume /dev/sdb1
  3. Unmount it: umount /dev/sdb1
  4. dd if=/dev/sdb1 of=/dev/sdc1
  5. fsck -f -C 0 /dev/sdc1
  6. resize2fs -p /dev/sdc1
  7. Adjust any pointers to the old device (e.g. in /etc/fstab) to the new one
  8. Remount and restart services
  9. Remove the old drive next time the machine powers down. You might want to keep it a while as an emergency backup in case the new drive turns out to be a dud
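
Put together, the whole move might look like this (a sketch assuming the sdb1/sdc1 layout above; adjust the device names and mount point to your system):

umount /path/to/mount/point
dd if=/dev/sdb1 of=/dev/sdc1 bs=1M    # raw block-for-block copy
fsck -f -C 0 /dev/sdc1                # full check, with a progress bar
resize2fs -p /dev/sdc1                # grow the fs to fill the new partition
mount /dev/sdc1 /path/to/mount/point  # after updating /etc/fstab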

edit: the "-C 0" on fsck and "-p" on resize2fs tell the respective utilities to output progress information as they do their thing, and the "-f" makes fsck run a full check, which resize2fs insists on before it will grow the filesystem. The resize operation should be pretty quick (it usually only takes a long time when shrinking a volume, as more data needs to be moved around in that case). If you have pv installed then you can make step 4 give you progress info too by replacing the call to dd with "pv /dev/sdb1 > /dev/sdc1".

edit 2: this is a good option for pretty full volumes, as it copies block-for-block and so doesn't need to flip the drive heads around caring about filesystem structures (the copy will run as fast as the slower of "the speed the old drive can bulk read" and "the speed the new drive can bulk write"), and it has no confusion with hard links, device nodes, or anything else special that may be in the filesystem. For volumes that are fairly empty you'll find one of the cp/cpio based methods much faster, as they won't be copying all the empty blocks from disk to disk.

David Spillett

I prefer rsync for this kind of job because, if anything interrupts the copy process, you can just run the rsync again and it will pick up where it left off rather than starting from the beginning.

You can also run the rsync while the system is running normally (although it will be slower while rsync is copying files). Then, when you're ready to cut over to the new drive, either shut down everything that is writing to the old drive (including user processes, daemons, cron jobs, etc.) OR reboot to single-user mode, and run the rsync again to sync the new drive with any changes that occurred while the initial rsync was running.

The process is roughly:

  • install new drive
  • partition and format it with your preferred filesystem
  • mount it
  • rsync old fs to new fs (see the sketch after this list)
  • reboot into single-user mode
  • rsync again
  • edit /etc/fstab to mount new fs in place of old
  • reboot again (or shutdown and remove old drive first)
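
The rsync invocations themselves might look like this (a sketch: /srv/data and /mnt/new are hypothetical mount points for the old and new filesystems, so substitute your own):

rsync -avxH /srv/data/ /mnt/new/            # first pass, system running normally
rsync -avxH --delete /srv/data/ /mnt/new/   # second pass in single-user mode

-a preserves permissions, ownership, timestamps and symlinks, -x keeps rsync on the one filesystem, -H preserves hard links, and --delete on the final pass removes anything deleted from the old fs after the first pass. The trailing slashes matter: they copy the contents of the directory rather than the directory itself.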

If this is likely to happen again in the future, or if you want to use the capacity of both the old and the new drives, then you might want to consider using LVM for the new drive... then rsync the data to it and edit fstab as above. Once you have the system running on the LVM volume group, you can add the old drive (and/or any extra new drives) to the volume group and resize the fs.
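
To sketch that LVM route (the volume group and logical volume names here are made up; adjust the sizes and filesystem to taste):

pvcreate /dev/sdc1                      # new disk's partition becomes a physical volume
vgcreate datavg /dev/sdc1               # new volume group on it
lvcreate -n data -l 100%FREE datavg     # one logical volume using all the space
mkfs.ext3 /dev/datavg/data              # filesystem of your choice
# ...rsync onto it and update fstab as above, then later:
vgextend datavg /dev/sdb1               # add the old drive to the group
lvextend -l +100%FREE /dev/datavg/data  # grow the LV over the old drive
resize2fs /dev/datavg/data              # grow the filesystem to match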

cas

My favorite filesystem copy-fu:

(cd /src; tar cf - .) | (cd /dst; tar xpf -)

I am curious to see what others suggest, though. Since you are moving the entire filesystem, there are likely to be better choices. Oh, is the original filesystem on an LVM volume?
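
If the filesystem carries ACLs or extended attributes, a reasonably recent GNU tar can carry those across too; a variant under that assumption:

(cd /src; tar cf - --numeric-owner --acls --xattrs .) | (cd /dst; tar xpf - --acls --xattrs)

The --numeric-owner flag avoids surprises if UID/GID-to-name mappings ever differ between source and destination.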

Chad Huneycutt

I've always been partial to cpio myself:

cd /src; find . -print | cpio -dpum /dst
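
If any filenames might contain spaces or newlines, a null-terminated variant is safer (this assumes GNU find and GNU cpio):

cd /src; find . -depth -print0 | cpio -0dpum /dst

The -depth makes find list a directory's contents before the directory itself, so cpio's -m can restore directory timestamps without them being clobbered as files are copied in.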

TCampbell

If it's not the root filesystem, and nothing is using the drive (check with /usr/sbin/lsof | grep '/path/to/mount/point'), then you shouldn't need to boot to single-user mode.

I'd do cp -a if it's not LVM. But as I recall, using LVM on Ubuntu (version 9, at least) requires installing from the alternate installation disc. I don't know about other versions.
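
The cp invocation would be something along these lines (again, /src and /dst stand in for your real mount points):

cp -a /src/. /dst/

The -a (archive) flag recurses and preserves permissions, ownership, timestamps, and symlinks; the trailing /. makes sure hidden files at the top level come along too.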

Kevin M

If you've got LVM you can do it on the fly, without rebooting or stopping any services, and if your system supports hot-plugging you can also physically swap the drives:

  • partition the new disk
  • make the new partition a physical volume: pvcreate new_disk
  • extend your volume group onto it: vgextend datavg new_disk
  • move the data off the old disk: pvmove old_disk new_disk
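
Concretely, that might look like this (device and volume-group names are placeholders; pvmove works while the volume stays mounted):

pvcreate /dev/sdc1            # prepare the new partition as a physical volume
vgextend datavg /dev/sdc1     # add it to the existing volume group
pvmove /dev/sdb1 /dev/sdc1    # migrate all extents off the old disk, online
vgreduce datavg /dev/sdb1     # remove the old disk from the group so it can be pulled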

Jure1873