I need to clone a CentOS installation from a 1 TB LVM-partitioned disk to several identical machines. The disk is mostly empty, since only the operating system and some software are installed and configured.

Without LVM, I would copy the partition table and then clone the partitions one by one using partclone:

sfdisk -d /dev/sda | sed -e 's/sda/sdb/' | sfdisk /dev/sdb
partclone.ext4 -c -s /dev/sda# -o - | partclone.ext4 -r -s - -o /dev/sdb#

However, I do not think this will work with LVM, since the partitions hold LVM physical volumes rather than ext4 filesystems, so partclone.ext4 cannot clone them directly.

Of course I could just use dd to clone the whole disk:

dd if=/dev/sda of=/dev/sdb

but it takes far longer than partclone, since dd copies every block, including the empty space.

Is there a way to clone the LVM partitions faster? I think one possible solution is to clone the logical volumes to regular partitions on another disk using dd, and then clone that disk to the other machines using partclone. But I do not know if something like this will work:

dd if=/dev/mapper/vg_node07-lv_root of=/dev/sdb1

Can this work? Are there other solutions?

– Manuel
  • Exactly what structure do you want to end up with? And why aren't you just using kickstart? – Michael Hampton Jul 02 '15 at 00:58
  • I'm not familiar with kickstart. Moreover, the machine that I want to clone is already configured to serve as a compute node in a cluster. It has the Torque job manager installed and several scientific software packages. User data is on the head node and mounted with NFS. The only important partitions are / and /boot. – Manuel Jul 02 '15 at 01:42
  • Did you miss that file `anaconda-ks.cfg` that was left in the `/root` directory after installation? That's a kickstart file. Feed it to the installer and it will install an identical system. And you can of course customize it to do whatever configuration you wish. – Michael Hampton Jul 02 '15 at 01:51
  • The problem is that I would have to compile a lot of additional software on each node. It is already compiled in the node that I want to clone. – Manuel Jul 02 '15 at 01:56
  • In that case you should be packaging it and running an internal repository. – Michael Hampton Jul 02 '15 at 01:57
  • That would be ideal, but this is a one-time job, so I was looking for a simpler solution. – Manuel Jul 02 '15 at 02:08
  • Keep in mind that it's only a one-time job until you have to do it again. And if you had to do it once, you will have to do it again sooner or later. Plus, what happens when that custom software needs to be updated? Do you really want to go compile it again on 100 compute nodes? Or just once, and then drop it in the repo and let all 100 nodes update? – Michael Hampton Jul 02 '15 at 02:12
  • Let us [continue this discussion in chat](http://chat.stackexchange.com/rooms/25458/discussion-between-manuel-and-michael-hampton). – Manuel Jul 02 '15 at 16:13
  • Note: when you clone a system like that, you still need to roll out a few changes to each. With sysvinit, it still somewhat worked, although all machines would have the same hostname, and logs would be collected by IP address, but systemd has a unique system ID that is used when merging logs, so this needs to be changed at least when you want central log file analysis. There is a reason why identical machines are usually set up by running the installer in non-interactive mode. – Simon Richter Nov 07 '21 at 11:47
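
Following up on the last comment: a minimal sketch of the per-clone fix-ups on a systemd system (the hostname is a hypothetical example):

# on each freshly cloned node, give it a unique machine ID and hostname
rm -f /etc/machine-id
systemd-machine-id-setup
hostnamectl set-hostname node08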

4 Answers

Yes, you can use dd just as described.

What I would do is build the source system with the smallest possible logical volumes, clone them, and then enlarge the logical volume and filesystem on the target. Your cloning procedure becomes something like:

# <attach target for cloning, say, /dev/sdc>
# CURRENT_LE=2000  (get the exact "Current LE" value from lvdisplay on the source LV)
# NEW_SIZE="20G"
# parted -a optimal /dev/sdc mklabel gpt mkpart p1 ext4 0% 100%
# pvcreate /dev/sdc1
# vgcreate vg_nodexx /dev/sdc1
# lvcreate -n lv_root -l $CURRENT_LE vg_nodexx
# dd if=/dev/vg_node07/lv_root of=/dev/vg_nodexx/lv_root bs=4M
# lvresize -L $NEW_SIZE /dev/vg_nodexx/lv_root
# fsck.ext4 -f -y /dev/vg_nodexx/lv_root
# resize2fs /dev/vg_nodexx/lv_root

You'll want to read up on LVM and the filesystem tools, but this is a great candidate for shell scripting.
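
For example, a minimal sketch of such a script; the source LV, target disk, and VG/LV names below are assumptions to adapt to your own layout:

#!/bin/bash
# clone_lv.sh -- sketch of the procedure above
set -e

SOURCE_LV=/dev/vg_node07/lv_root
TARGET_DISK=/dev/sdc
TARGET_VG=vg_nodexx
NEW_SIZE=20G

# extent count of the source LV, so the target LV starts out the same size
CURRENT_LE=$(lvdisplay "$SOURCE_LV" | awk '/Current LE/ {print $3}')

parted -s -a optimal "$TARGET_DISK" mklabel gpt mkpart p1 ext4 0% 100%
pvcreate "${TARGET_DISK}1"
vgcreate "$TARGET_VG" "${TARGET_DISK}1"
lvcreate -n lv_root -l "$CURRENT_LE" "$TARGET_VG"

# raw copy of the source LV, then grow the LV and its filesystem to full size
dd if="$SOURCE_LV" of="/dev/$TARGET_VG/lv_root" bs=4M
lvresize -L "$NEW_SIZE" "/dev/$TARGET_VG/lv_root"
fsck.ext4 -f -y "/dev/$TARGET_VG/lv_root"
resize2fs "/dev/$TARGET_VG/lv_root"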

– nortally

You can attach an LVM logical volume to a loop device with losetup using the --partscan option:

losetup --partscan --read-only --show --find /path/to/my/lv

This will print the path of the loop device that the logical volume has been bound to (something like /dev/loop0). Thanks to the --partscan option, any partitions inside the volume will also be accessible (e.g. /dev/loop0p1, /dev/loop0p2, ...). You can partclone those directly.

Once you are finished, you can release the loop device with:

losetup -d /dev/loop0

Be sure to specify the same loop device the first call to losetup returned (it might be a different number than 0).
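
Putting it together, a sketch that assumes (as this answer does) that the logical volume carries its own partition table; the LV path and image filename are hypothetical:

LOOPDEV=$(losetup --partscan --read-only --show --find /dev/vg_node07/lv_root)

# clone the first partition inside the LV to an image file
partclone.ext4 -c -s "${LOOPDEV}p1" -o /backup/lv_root_p1.img

# release the loop device when done
losetup -d "$LOOPDEV"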

You can do almost the same thing that VMware's P2V software does: create the new filesystems on the new system exactly as you want them to be, then tar the filesystems across to the other server. This way you get everything exactly the same, and you only copy the files and space that are currently in use. Then you have to reinstall the bootloader (GRUB) so the target is bootable.
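
A minimal sketch of the copy step, assuming the target's filesystems are already created and mounted under /mnt/newroot, and that the target host is reachable as nodexx (both names are hypothetical):

# on the source node: stream the root filesystem to the target over ssh
tar --one-file-system -cpf - -C / . | ssh nodexx 'tar -xpf - -C /mnt/newroot'

# repeat for /boot if it is a separate filesystem, then chroot into
# /mnt/newroot on the target and reinstall the bootloader (e.g. grub-install /dev/sda)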

– lsd

LVM does a lot of things that are much more complicated than a simple disk with a simple partition... and because of that... cloning is a much more complicated process.

In fact, you'd be better off (re)creating the LVM volumes manually (or by script) and then simply using the sfdisk/partclone process to clone the actual data to the new logical volumes.

This would also give you the added benefit of being able to clone a RAID-1 system to a RAID-5 setup across more disks, as the partitions would be unaffected.
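
For instance, a sketch of recreating the layout on a target node before cloning the data (the device, VG/LV names, and sizes are assumptions to adapt):

# on the target, after creating a partition for the PV (e.g. with sfdisk as in the question)
pvcreate /dev/sdb2
vgcreate vg_node08 /dev/sdb2
lvcreate -n lv_root -L 20G vg_node08
lvcreate -n lv_swap -L 4G vg_node08
mkswap /dev/vg_node08/lv_swap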

– TheCompWiz
  • I googled about using partclone with LVM partitions but I did not find anything specific. Would the following work? partclone.ext4 -c -s /dev/mapper/vg_node07-lv_root -o - | partclone.ext4 -r -s - -o /dev/mapper/vg_node08-lv_root – Manuel Jul 01 '15 at 23:41
  • I can't say 100% yes... but it appears to be fine... except for the fact that the volume-group name appears to be changed. In the fstab (and possibly in grub as well) you might need to make changes to accommodate this... or do a `vgrename` to rename it to what it should be. – TheCompWiz Jul 02 '15 at 00:16
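
Putting the comment exchange together, a sketch (the volume group names are assumptions; this presumes both VGs are visible on the same machine, e.g. with the target disk attached):

# clone the filesystem from the source LV into the freshly created target LV
partclone.ext4 -c -s /dev/mapper/vg_node07-lv_root -o - | partclone.ext4 -r -s - -o /dev/mapper/vg_node08-lv_root

# if the clone must keep the source's VG name, rename it afterwards...
vgrename vg_node08 vg_node07

# ...and check /etc/fstab and the GRUB configuration for hard-coded VG names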