
I have two physical machines that I wish to virtualize.

I cannot (physically) plug the hard drives from either machine into the new machine that will act as their VM host, so I think copying the entire structure of each system over using dd is out of the question.

How can I best go about migrating these machines from their hardware to the KVM environment? I've set up empty, unformatted LVM logical volumes to host their filesystems, with the understanding that giving the VMs a real partition to work with achieves higher performance than sticking an image on the filesystem.
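
For reference, a minimal sketch of how one of those logical volumes might be carved out and handed to a guest as a raw virtio block device (the volume group, size and names below are placeholders, and bus=virtio assumes the CentOS 5 guest kernel has virtio drivers):

lvcreate -L 40G -n centos5-root vg0
# then reference the LV in the libvirt domain XML, for example:
#   <disk type='block' device='disk'>
#     <driver name='qemu' type='raw'/>
#     <source dev='/dev/vg0/centos5-root'/>
#     <target dev='vda' bus='virtio'/>
#   </disk>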

Would I be better off creating new OS installs and rsyncing the differences over?

FWIW, the two machines to be VM'd are running CentOS 5, and the host machine is running Ubuntu Server 10.04 for no particularly important reason. I doubt this matters too much, as it's still going to be KVM and libvirt doing the work.

– Kyle

4 Answers


P2V Linux physical to virtual KVM - no automated tools

  • live migration of a P node running Debian Wheezy 7.11 to a V KVM guest on Proxmox VE v5.4
  • mdadm software RAID1 P disks to KVM virtio disks
  • migration from large P disks to smaller V disks

Foreword

The goal of these steps was to take a physical Linux P node running live in production and virtualise it, without having to create and allocate multi-terabyte disks or use md RAID in the V guest, because the target hypervisor (Proxmox 5) used ZoL/ZFS. I also wanted to minimise downtime/reboots on the running P node.

These steps might not be perfect, but they should get you close to a solution, and they cite useful links I found along the way.

I chose to post an answer on this question after careful Google searching on https://unix.stackexchange.com and https://serverfault.com. This seemed like the most relevant question for my answer, even though it's 9+ years old at the time of writing.

Here are some related questions I found, which this answer also aims to address:

  • P2V with rsync
  • How to create a virtual machine from a live server?
  • How to migrate a physical system to a KVM virtual server with only network access?
  • Converting physical machine into virtual without shutdown it
  • Migrating physical machine to KVM
  • vmware conversion linux physical machine
  • How to migrate a bare metal Linux installation to a virtual machine

STEP 1

virtio support

# On the P node
# check the kernel has the virtio support
grep -i virtio /boot/config-$(uname -r)
# if not, that is an issue beyond the scope of these instructions; contact me.

# if lsinitrd is available, check if the initramfs already has the modules
lsinitrd /boot/initrd.img-$(uname -r) | grep virtio
# if yes, virtio is already in the initramfs; continue to the next step

# if not, add the modules to the initramfs
# backup existing initrd.img
cp -iv /boot/initrd.img-$(uname -r) /boot/BACKUP_initrd.img-$(uname -r)

# non Debian way
mkinitrd --with virtio_console --with virtio_pci --with virtio_blk -f /boot/initrd.img-$(uname -r) $(uname -r)

# The Debian way
# https://wiki.debian.org/DebianKVMGuests
echo -e 'virtio_console\nvirtio_blk\nvirtio_pci' >> /etc/initramfs-tools/modules
# check the new lines were appended correctly; fix manually if needed
cat /etc/initramfs-tools/modules
# compile new initramfs
update-initramfs -u

# OPTIONAL if safe
# !!! WARNING DOWNTIME -- reboot P node to test everything is ok with the new initramfs
shutdown -r now

STEP 2

KVM prep - BIOS partition(s)

# boot a new KVM guest on SystemRescueCD or similar

# create the BIOS/UEFI partition(s)
# https://help.ubuntu.com/community/DiskSpace#BIOS-Boot_or_EFI_partition_.28required_on_GPT_disks.29
# https://help.ubuntu.com/community/Installation/UEFI-and-BIOS/stable-alternative#Create_a_partition_table

# follow the linked guides above to create the relevant BIOS partitions/disks.
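
A rough sketch with parted, assuming the guest disk is /dev/vda with an MBR (msdos) label so no separate BIOS boot partition is needed; the layout matches what the later steps assume (vda1 = /boot, vda2 = /, vda3 = /var), and the sizes are placeholders:

parted -s /dev/vda mklabel msdos
parted -s /dev/vda mkpart primary ext3 1MiB 257MiB    # vda1 -> /boot
parted -s /dev/vda mkpart primary ext4 257MiB 80%     # vda2 -> /
parted -s /dev/vda mkpart primary ext4 80% 100%       # vda3 -> /var
parted -s /dev/vda set 1 boot on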

STEP 3

KVM prep - BOOT and DATA partition(s)

# BOOT partition
# inspect the P boot partition - note the parameters
# CRITICAL the new V boot partition should be identical to the partition on the P.
# make the V boot partition using your preferred partitioning tool
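# e.g. a quick way to gather those parameters on the P node (a sketch; /dev/md1 is the boot
# device used later in this answer, substitute your own):
blkid /dev/md1                  # filesystem type, label, UUID
tune2fs -l /dev/md1 | head -20  # block size, block count, inode count (ext2/3/4 only)
df -h /boot                     # space actually in use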


# on P node, make a copy of the boot partition
# umount the boot fs for backup
umount /boot
# backup boot partition
gzip --stdout /dev/md1 > ~user/boot.disk.md1.img.gz
# re-mount boot fs
mount /boot

# on the KVM live CD
cd /tmp # or somewhere with > some space for the boot image
scp user@hostname.com:boot.disk.md1.img.gz .

gunzip boot.disk.md1.img.gz
# copy the P boot partition to the V boot partition
dd if=boot.disk.md1.img of=/dev/vda1
# verify consistency
fsck.ext3 /dev/vda1

# list the detected file systems, visual check for the expected results
fsarchiver probe simple

# on the KVM live CD make your data partitions, the size you wish
# mirroring the P is not required, obviously needs to be enough space for the data.

# CRITICAL the binaries/tools used to make the data file systems must be for the same kernel generation i.e. from the node being converted, otherwise the system will fail to mount the rootfs during boot.
# https://unix.stackexchange.com/questions/267658/

# CRITICAL target file systems must have enough inodes
mke2fs -t ext4 -N 1500000 /dev/vda2
mke2fs -t ext4 -N 1500000 /dev/vda3
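# to pick a sensible -N value, a sketch: check inode usage on the P node first
df -i / /var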

cd /mnt/
mkdir linux
mount /dev/vda2 linux/
cd linux/
mkdir -p var boot
mount /dev/vda3 var/

STEP 4

rsync data

# consider mounting the fs ro, or at the very least stopping services for the final rsync
nohup rsync --bwlimit=7m --human-readable --itemize-changes --verbose --archive --compress --rsync-path='sudo rsync' --rsh='ssh -p22345' --exclude=/mnt --exclude=/proc --exclude=/sys --exclude=/dev --exclude=/run --exclude=/boot --exclude=/var/spool user@hostname:/ . 1>../rsync.stdout 2>../rsync.stderr

# check the logs are ok, and as expected

# final sync, stop services, and/or ro the fs(s)
rsync --bwlimit=7m --human-readable --itemize-changes --verbose --archive --compress --rsync-path='sudo rsync' --rsh='ssh -p22345' --exclude=/mnt --exclude=/proc --exclude=/sys --exclude=/dev --exclude=/run --exclude=/boot --exclude=/var/spool user@hostname:/ .
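
# e.g. a sketch of quiescing the P node before that final pass (service names are examples):
service apache2 stop
service mysql stop
mount -o remount,ro /   # may refuse while files are still open for writing; stop services first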

STEP 5

update grub

mount /dev/vda1 boot/
mkdir -p proc dev sys
mount -o bind /proc /mnt/linux/proc
mount -o bind /dev /mnt/linux/dev
mount -o bind /sys /mnt/linux/sys

chroot /mnt/linux /bin/bash

export PATH=$PATH:/bin:/sbin:/usr/sbin/ # check what is required for your P
grub-install /dev/vda
grub-install --recheck /dev/vda
update-grub

# update /etc/fstab with the new id/names/options
vim /etc/fstab
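# e.g. list the new UUIDs and reference them in fstab; a sketch, the entries below are placeholders:
blkid /dev/vda1 /dev/vda2 /dev/vda3
# UUID=<uuid-of-vda2>  /      ext4  errors=remount-ro  0  1
# UUID=<uuid-of-vda1>  /boot  ext3  defaults           0  2
# UUID=<uuid-of-vda3>  /var   ext4  defaults           0  2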

# Only required if your P node had md RAID
# https://dertompson.com/2007/11/30/disabling-raid-autodetection-in-ubuntu/
aptitude purge mdadm

# required mount points
mkdir -p /mnt /proc /sys /dev /run

# required because I didn't rsync /var/spool
mkdir -p /var/spool/cron/atspool /var/spool/cron/crontabs /var/spool/cron/atjobs /var/spool/postfix /var/spool/rsyslog

STEP 6

test KVM and make the changes required post-P2V

# reboot to KVM guest normal boot disk

# fix a screen bug that appeared after P2V
aptitude reinstall screen

# ensure boot logging is enabled
# https://wiki.debian.org/bootlogd
aptitude install bootlogd

# networking
# check that the MAC address assigned to the KVM matches what the KVM kernel reports
ip link

# modify net interfaces, taking note of the interface name with the correct MAC address
vim /etc/network/interfaces
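# e.g. a minimal static stanza (interface name and addresses are placeholders):
#   auto eth0
#   iface eth0 inet static
#       address 192.0.2.10
#       netmask 255.255.255.0
#       gateway 192.0.2.1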

# update DNS if required
vim /etc/resolv.conf

# update apache2 envvars if required
vim /etc/apache2/envvars

# update hosts
vim /etc/hosts

# reboot
shutdown -r now

STEP 7

final tests and verifications

#### post reboot
# check dmesg and/or the KVM console for boot issues
dmesg -x
dmesg -x --level=err --level=warn

# check boot log for issues, useful if physical console cannot be easily viewed
# formatting: https://stackoverflow.com/q/10757823
sed 's/\^\[/\o33/g;s/\[1G\[/\[27G\[/' /var/log/boot |less -r

You could possibly pipe the output from dd through an SSH tunnel to your target machine. I've known it to be done relatively successfully into a VMware virtual machine.

There are good details in the main answer here, and it provides instructions for what to do if SSH isn't running (a number of live CDs ship with an SSH server anyway, so that shouldn't be a problem): How to set up disk cloning with dd, netcat and ssh tunnel?
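
For instance, a rough sketch of streaming a whole source disk over SSH straight into a prepared logical volume on the KVM host (the device, host and volume names are placeholders, and ideally the source is booted from a live CD so the filesystem is quiescent):

dd if=/dev/sda bs=4M | gzip -c | ssh user@kvmhost 'gunzip -c | dd of=/dev/vg0/guest-root bs=4M'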

– Twirrim
  • The notes at the bottom of that answer are golden. In particular, one of the source filesystems is dramatically larger in capacity than the space allotted, though it's mostly free space and all the actual data will fit. Looks like a dump/restore for that machine, and a netcat+ssh+dd copy for the other. – Charles Jun 29 '10 at 01:57

You can copy the disk images to the LVM logical volumes and provide this as the disk image for the VM. Make sure you disable NTP and NTPDATE on the virtual servers.
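
For example, a rough sketch of laying a captured raw image onto one of those logical volumes and switching off time sync inside the guest (the image path, volume group and service names are placeholders; CentOS 5 style service management assumed):

dd if=/srv/images/server1.img of=/dev/vg0/server1 bs=4M conv=fsync
# or, for other image formats:
qemu-img convert -O raw /srv/images/server1.qcow2 /dev/vg0/server1
# inside the guest:
chkconfig ntpd off && service ntpd stop
chkconfig ntpdate off 2>/dev/null   # if an ntpdate init script is present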

I converted some old images using Mondo to create bootable recovery images. This allowed me to resize the partitions during reinstall.

– BillThor