
I'm wondering what the correct way is to move a VM between two KVM hosts without using any kind of shared storage.

Would copying the disk files and the XML dump from the source KVM machine to the destination one suffice? If so, what commands need to be run to import the VM on the destination?

The OS is Ubuntu on both the Dom0s and the DomU.

Thanks in advance

Onitlikesonic

5 Answers

  1. Copy the VM's disks from /var/lib/libvirt/images on the source host to the same directory on the destination host
  2. On the source host, run virsh dumpxml VMNAME > domxml.xml and copy this XML to the destination host
  3. On the destination host, run virsh define domxml.xml
  4. Start the VM.

  • If the disk location differs, you need to edit the XML's devices/disk node to point to the image on the destination host
  • If the VM is attached to custom defined networks, you'll need to either edit them out of the XML on the destination host or redefine them as well

    1. On the source machine: virsh net-dumpxml NETNAME > netxml.xml
    2. Copy netxml.xml to the target machine
    3. On the target machine: virsh net-define netxml.xml && virsh net-start NETNAME && virsh net-autostart NETNAME
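Assuming the default image path and passwordless ssh, the three numbered steps can be sketched as one helper run from the source host. The VM name, destination host, and image filename below are placeholders, not values from the question:

```shell
# Sketch of steps 1-3 above, run on the source host.
# VM name, destination host and image path are placeholders.
migrate_vm() {
    vm=$1                          # eg: larry
    dest=$2                        # eg: root@desthost
    img=/var/lib/libvirt/images/$vm.qcow2

    virsh dumpxml "$vm" > "/tmp/$vm.xml"        # step 2: dump the definition
    scp "$img" "$dest:$img"                     # step 1: copy the disk
    scp "/tmp/$vm.xml" "$dest:/tmp/$vm.xml"
    ssh "$dest" "virsh define /tmp/$vm.xml"     # step 3: define on the target
}
# usage: migrate_vm larry root@desthost
```

After this you would still start the VM on the destination, and edit the disk path or networks in the XML first if they differ.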
dyasny
  • And what if I use a logical volume instead of a file as storage? I think I'll have a problem with the device UUID – inemanja May 25 '16 at 11:04
  • You can remove the device UUIDs from the xml, just leave the path `/dev/mapper/vgname-lvname` there – dyasny May 25 '16 at 13:10
  • If you are not sure about the VM name --> sudo virsh list --all – Fergara Jun 11 '20 at 18:19
  • When using logical volumes, copy the volume with dd or similar. If the files are copied to a new volume the UUID will not match and the machine will not boot. – Tim Styles Jan 02 '21 at 16:08

Since I can't comment yet, I have to post this addendum to dyasny's answer this way.

If the VM has snapshots that you want to preserve, dump the snapshot XML files on the source with virsh snapshot-dumpxml $dom $name > file.xml for each snapshot in the VM's snapshot list (virsh snapshot-list --name $dom).

Then on the destination use virsh snapshot-create --redefine $dom file.xml to finish migrating the snapshots.

If you also care about which snapshot is the current one, then additionally do on the source:
virsh snapshot-current --name $dom
and on the destination:
virsh snapshot-current $dom $name

Then you can use virsh snapshot-delete --metadata $dom $name for each snapshot to delete the XML files on the source, or you could just delete them from /var/lib/libvirt/qemu/snapshot/$guestname.
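The per-snapshot dump/redefine loop is easy to script. A sketch, assuming passwordless ssh to the destination host (the domain and host names are placeholders):

```shell
# Sketch: copy all snapshot metadata for a domain to a destination host.
# Assumes passwordless ssh; domain and host names are placeholders.
migrate_snapshots() {
    dom=$1; dest=$2
    for name in $(virsh snapshot-list --name "$dom"); do
        virsh snapshot-dumpxml "$dom" "$name" > "/tmp/$dom-$name.xml"
        scp "/tmp/$dom-$name.xml" "$dest:/tmp/"
        ssh "$dest" "virsh snapshot-create --redefine $dom /tmp/$dom-$name.xml"
    done
}
# usage: migrate_snapshots larry root@desthost
```

Restoring the current snapshot with virsh snapshot-current, as described above, is a separate final step.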


Sources:

  1. libvirt-users mailing list

  2. http://kashyapc.com/2012/09/14/externaland-live-snapshots-with-libvirt/

LN2

Yes, just copying the XML file and the virtual disk images is sufficient, but this obviously precludes a "live" migration. The VM must be shut off during this procedure.

Once copied to the destination, libvirtd must be reloaded or restarted to recognize the new XML file.
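A sketch of that file-level route, assuming the stock libvirt config directory /etc/libvirt/qemu (the directory is parameterised here only so the helper can be exercised safely):

```shell
# Sketch: register a copied XML definition by dropping it into
# libvirtd's config directory and restarting the daemon.
# Assumes the default /etc/libvirt/qemu; adjust for your distro.
install_vm_xml() {
    xml=$1
    confdir=${2:-/etc/libvirt/qemu}
    cp "$xml" "$confdir/"
    systemctl restart libvirtd    # libvirtd rereads its config dir on restart
}
# usage: install_vm_xml /tmp/foo.xml
```

Using `virsh define` instead avoids the daemon restart entirely.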

Michael Hampton
  • According to https://help.ubuntu.com/community/KVM/Virsh, and to complete the answer, I believe the sequence of commands would be: on the source Dom0, virsh shutdown foo and virsh dumpxml foo > /tmp/foo.xml; then on the destination Dom0, copy over the disk files (putting them in the same directory as on the source Dom0), copy over the XML dump, then virsh define /tmp/foo.xml and virsh start foo – Onitlikesonic Oct 02 '12 at 14:33
  • Reasonable enough if you use `virsh`. I'd just copy the files directly and reload `libvirtd`. – Michael Hampton Oct 02 '12 at 14:36

Detailed Instructions on Copying VMs using blocksync.py

These instructions apply to a VM using an LVM-provided disk and assume that Python is installed on each of the hosts.

Download the blocksync.py script from https://gist.github.com/rcoup/1338263 and put it on both the source and destination host in your /home/user folder.

Precursor

  • You will need ssh access to both machines (source and target) for your user.
  • You will also need sudo access to root on both machines.

  • Alternatively, you could do everything as root, but only if your ssh key gives you root access to at least the target machine. In this case, remove the user name from the command lines.

Example Settings

  • The virtual machine is on the dom0 host known as chewie
  • The desired destination is the dom0 host known as darth, which has the internal IP 10.10.10.38 (in our example)
  • In our actual case we use CentOS 7 as the dom0 operating system on both machines
  • The virtual machine we are moving in this instance is called larry
  • The user doing the action is USER (which will be your name)
  • DOM0 means the actual physical server

Procedure

Initial steps on the source host

  • Login to the dom0 host which currently has the machine (the "source" host), eg:
    ssh user@chewie.domainname.com.au
  • Stay as your user, so don't become the sudo user
  • List machines with
    sudo virsh list --all
  • Dump the machine definition using, eg:
    sudo virsh dumpxml larry > larry.xml
  • Copy the dumped definition to the new machine (the "target" host), eg:

    scp -p larry.xml user@10.10.10.38:larry.xml

    You can use the destination dom0 server name instead of the internal IP, but it is best to use the IP address for the target.

    If you cannot copy due to missing keys, cat larry.xml and copy the text, then ssh into the other machine, create the file, and paste it in.

  • Find the size and name of the VM's disk using

    sudo lvs --units B

    The command above shows sizes exactly in bytes. The machine's disk name is in the first column of the listing, its volume group in the second, and its size in the last. Determine the device name as /dev/VG/LV and check it with an `ls -l` command. For example, in the output below the relevant line is: vm_larry vg1 -wi-ao---- 69793218560B

LV        VG   Attr       LSize         Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv_root   vg1  -wi-ao----  53687091200B
  lv_swap   vg1  -wi-ao----  17179869184B
  vm_vsrv1  vg1  -wi-ao---- 193273528320B
  vm_vsrv10 vg1  -wi-ao----  64424509440B
  vm_vsrv11 vg1  -wi-ao---- 161061273600B
  vm_vsrv12 vg1  -wi-ao---- 204010946560B
  vm_vsrv2  vg1  -wi-ao---- 140110725120B
  vm_vsrv3  vg1  -wi-ao---- 128849018880B
  vm_larry  vg1  -wi-ao----  69793218560B
  vm_vsrv5  vg1  -wi-ao---- 257698037760B
  vm_vsrv6  vg1  -wi-ao----  64424509440B
  vm_vsrv7  vg1  -wi-ao---- 161061273600B
  vm_vsrv8  vg1  -wi-ao----  64424509440B
  vm_vsrv9  vg1  -wi-ao---- 214748364800B
  • Disk name is 'vm_larry', volume group is 'vg1'.
  • The device name is /dev/vg1/vm_larry
  • Output of ls -l /dev/vg1/vm_larry is, eg:
    lrwxrwxrwx. 1 root root 8 Jan 31 13:57 /dev/vg1/vm_larry -> ../dm-11
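Rather than reading the size out of the listing by hand, the byte count can be extracted non-interactively. A sketch, using the vg1/vm_larry example volume from above:

```shell
# Sketch: print a logical volume's size in bytes, suitable for
# pasting into the volume definition on the target host.
# Takes VG/LV, eg: vg1/vm_larry
lv_size_bytes() {
    lvs --noheadings --nosuffix --units B -o lv_size "$1" | tr -d ' '
}
# usage: SIZE=$(sudo lv_size_bytes vg1/vm_larry)
```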

Initial steps on the target host

  • Login to the target host, eg
    ssh user@darth.domainname.com.au
  • Stay as your own user. i.e. don't become root.
  • Create a volume definition file, eg:

    vi larry.domainname.com.au-vol.xml
    (or use nano)

    with the following lines. NOTE: you will need to take the size in bytes from the original VM (the sudo lvs --units B output on the source machine) and put it into the definition below.
    <volume type='block'>
      <name>larry.domainname.com.au</name>
      <capacity unit='bytes'>69793218560</capacity>
      <allocation unit='bytes'>69793218560</allocation>
      <target>
        <path>/dev/centos/larry.domainname.com.au</path>
        <permissions>
          <mode>0600</mode>
          <owner>0</owner>
          <group>6</group>
          <label>system_u:object_r:fixed_disk_device_t:s0</label>
        </permissions>
      </target>
    </volume>

Note: this definition is for a 69793218560 Bytes disk for VM larry, change as necessary for the actual VM.

Note: the name and last part of the path should match and will be used as the new disk name.

Create the new disk from the definition, using

   sudo virsh vol-create --pool centos larry.domainname.com.au-vol.xml

it will say Vol larry.domainname.com.au created from larry.domainname.com.au-vol.xml
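Instead of editing the definition file by hand for each VM, it can be generated from variables. A minimal sketch, assuming the same pool path /dev/centos as the example (the function name is an invention for illustration):

```shell
# Sketch: emit a block-volume definition for a given name and size.
# Pool path defaults to /dev/centos to match the example above.
make_vol_xml() {
    name=$1; size=$2; pool_path=${3:-/dev/centos}
    cat <<EOF
<volume type='block'>
  <name>$name</name>
  <capacity unit='bytes'>$size</capacity>
  <allocation unit='bytes'>$size</allocation>
  <target>
    <path>$pool_path/$name</path>
  </target>
</volume>
EOF
}
# usage: make_vol_xml larry.domainname.com.au 69793218560 > larry.domainname.com.au-vol.xml
```

This omits the permissions/label stanza; add it back if your setup needs the SELinux label and ownership shown earlier.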

Make the disk device file accessible:

sudo chgrp wheel /dev/mapper/centos-larry.domainname.com.au
sudo chmod g+rw /dev/mapper/centos-larry.domainname.com.au

Edit the xml definition copied over, eg:

vi larry.xml

Find the disk definition in the file (search for "source dev=") and replace the device with the one just created (you can ls /dev/centos/ to confirm it), eg: /dev/drbd4 -> /dev/centos/larry.domainname.com.au

This bridge change was unique to our situation:

** Find any references to "br1" in the interface stanzas and change them to "br0", i.e. change the source bridge in each interface definition.

Final steps on the source host

  • Login to the source host, eg

    ssh user@chewie.domainname.com.au
  • Best practice would be to shut down the VM on the source host before doing the final sync, but this doesn't have to be done (virsh shutdown NameOfMachine).

  • If not already on the source host, download the blocksync.py script from https://gist.github.com/rcoup/1338263

  • If your username is user (for example), copy the blocksync.py script onto both machines into /home/user, then chown user:user and chmod 755 the script.

  • If not already on the target host, copy it there, eg:
scp -p blocksync.py user@10.10.10.38:blocksync.py
  • Use it to copy the source disk to the target disk, eg

Command that does the copying

sudo -E python blocksync.py /dev/vg1/vm_larry user@10.10.10.38 /dev/mapper/centos-larry.domainname.com.au -b 4194304

Note: the first device name is for the source host, as determined from the 'lvs' command; this one is from the chewie source host.

Note: this will destroy the contents of the target disk, make sure that /dev/mapper/centos-larry.domainname.com.au is correct!

Note: the sync will take a long time - about 100 seconds per gigabyte, ie roughly 100 minutes for a 60 gigabyte disk.

However, you can do a sync while the VM is in use; subsequent syncs can be up to 25 percent faster

The script will print out the parameters that it is using (there may be a message about a deprecated module, this is okay). Next, it displays the ssh command that it is using and runs it (you will see the authorised staff only message when it does this). During its sync, it will display a running total of blocks copied and its average speed. Finally, it prints out a completion message with the number of seconds it took.

Things to Know

You can cancel the sync with CTRL C and restart it later by running the command again

Final steps on the target host

  • Login to the target host, eg
     ssh user@darth.domainname.com.au
  • Create the virtual machine, eg:
sudo virsh define larry.xml
  • Start the newly defined machine, eg:
    sudo virsh start larry
  • Mark it to start on host boot, eg:
    sudo virsh autostart larry

Note: it may be necessary to alter the details of the VM to suit the new environment.

  • I have not tried this, but you got my upvote for the detailed instructions provided. When it comes time to do this, I will most likely try this. – G Trawo Jan 10 '19 at 16:03

I have run into this problem with a couple of my older KVM servers, but it's really annoying when it happens and can cause issues with any of the installed VMs. In my case it regularly pushed one of my VMs into the reset state as disk space was slowly exhausted. The instructions below are somewhat sensitive to the KVM/distro version. In my case, I have CentOS 7.5:

CentOS Linux release 7.5.1804 (Core) and Qemu-KVM version 1.5.3

By default the KVM images are located in /var/lib/libvirt/images/.

You need to find the name of the VM; for this, use virsh list:

virsh list
 Id    Name                           State
----------------------------------------------------
 12    VM-Name                        paused

Stop the VM: virsh shutdown VM-Name

For me, I copy the file first rather than moving it. Copy the qcow file to the new location:

cp /var/lib/libvirt/images/VM-Name.qcow2 /home/VMImages/

Edit the VM XML file to reference the new "source file" location: virsh edit VM-Name

You will want to change the "source file" path in this file.

Restart the libvirtd service

service libvirtd restart

Then restart the VM and you should be good to go.

virsh start VM-Name
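The copy-and-edit steps above can be sketched non-interactively. This is an assumption-laden illustration: it substitutes virsh dumpxml plus sed for the interactive virsh edit, and the function name and paths are placeholders:

```shell
# Sketch: copy a VM's disk image to a new directory and rewrite the
# domain definition to match. Uses dumpxml + sed in place of the
# interactive `virsh edit`; all paths here are placeholders.
relocate_disk() {
    vm=$1; old=$2; newdir=$3
    cp "$old" "$newdir/"
    new="$newdir/$(basename "$old")"
    virsh dumpxml "$vm" | sed "s|$old|$new|" > "/tmp/$vm.xml"
    virsh define "/tmp/$vm.xml"   # re-register with the new source file
}
# usage: relocate_disk VM-Name /var/lib/libvirt/images/VM-Name.qcow2 /home/VMImages
```

After this you would still restart libvirtd and start the VM, as described above.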