
I am trying to create a VMware virtual machine from a physical one running Red Hat. Here are the steps I followed:

I create a VM and boot a live CD (Kali). I perform an rsync of the physical host's / excluding the proc, sys and dev folders. I launch the VM, booting on Kali, and partition /dev/sda to get a /dev/sda1. I create an ext4 filesystem on it and mount it on /mnt.
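For reference, a hedged sketch of what that copy step could look like, run from the live CD once the new root is mounted on /mnt (`physical-host` is a placeholder for the source machine, not a name from the original post):

```shell
# Sketch only: -aAXH preserves permissions, ACLs, xattrs and hard links,
# --numeric-ids keeps UID/GID mappings intact across the two systems,
# and the pseudo/volatile filesystems are excluded rather than copied.
rsync -aAXHv --numeric-ids \
    --exclude=/proc/* --exclude=/sys/* --exclude=/dev/* \
    --exclude=/run/* --exclude=/tmp/* \
    root@physical-host:/ /mnt/
```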

I recreate the /dev, /proc and /sys folders and mount the pseudo-filesystems into them:

mkdir -p /mnt/proc /mnt/sys /mnt/dev
mount -t proc proc /mnt/proc
mount -t sysfs sys /mnt/sys
mount -o bind /dev /mnt/dev

I edit /etc/fstab:

/dev/sda1 / defaults 0 0

I chroot into /mnt:

chroot /mnt /bin/bash

I install GRUB, recreate an initrd, and regenerate the GRUB configuration file ($(uname -r) here gives the physical server's kernel version):

grub2-install --recheck --no-floppy /dev/sda
mkinitrd /boot/initrd.$(uname -r).img $(uname -r)
grub2-mkconfig -o /boot/grub2/grub.cfg
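One hedged diagnostic worth adding at this step (my suggestion, not something the original poster ran): a dracut-initqueue timeout like the one below often means the initramfs lacks the storage drivers the VM's virtual disk controller needs, so the root device never appears. Inside the chroot you can inspect and, if necessary, rebuild the initramfs:

```shell
# Check whether the common VMware storage drivers made it into the
# initramfs (mptspi = LSI Logic SCSI, vmw_pvscsi = paravirtual SCSI,
# ata_piix = IDE):
lsinitrd /boot/initrd.$(uname -r).img | grep -E 'mptspi|vmw_pvscsi|ata_piix'

# If they are missing, rebuild the initramfs with the drivers forced in:
dracut --force --add-drivers "mptspi vmw_pvscsi ata_piix" \
    /boot/initrd.$(uname -r).img $(uname -r)
```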

I reboot.

The GRUB menu is loaded and I can choose the system to boot. When I try rescue mode, I can reach the login prompt; the appropriate hostname is shown on my VM:

physical_hostname login:

However, I can't log in (I am pretty sure I entered the appropriate password).

If I don't choose rescue mode, the system does not fully boot. Here are the last lines printed:

[ OK ] Started Show Plymouth Boot Screen.
[ OK ] Reached target Paths.
[ OK ] Reached target Basic System.
... Here I wait for about 2 minutes
dracut-initqueue[246]: Warning: dracut-initqueue timeout - starting timeout scripts
... This message gets printed about 100 times
[ OK ] Started dracut initqueue hook
[ OK ] Reached target Remote File Systems (Pre).
[ OK ] Reached target Remote File Systems.
dracut-initqueue[246]: Warning: dracut-initqueue timeout - starting timeout scripts
A start job is running for dev-disk...225a.device

And it does not stop.

I'm pretty sure I'm close to booting my system; however, I am totally stuck. Many thanks for helping; I'm feeling pretty desperate.

philippe
  • You missed telling us what the target hypervisor is; if it's VMware, tools exist to do a P2V – yagmoth555 Feb 08 '19 at 17:04
  • Did you fix the /etc/fstab with the UUIDs of the filesystems in the target VM? – Zoredache Feb 08 '19 at 18:12
  • I just put `/dev/sda1 / defaults 0 0` into `/etc/fstab`, which seemed to be enough, as the hostname gets correctly printed when I boot in rescue mode :/ Maybe this is not enough, but I have no idea how to get the UUIDs of the filesystems – philippe Feb 08 '19 at 18:25
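On the UUID question raised in the comments: `blkid` prints each filesystem's UUID. A sketch, using a made-up sample line so the extraction is reproducible (the real value comes from running blkid on the actual disk):

```shell
# On the real system you would run, as root:   blkid /dev/sda1
# and it prints something like the sample line below.
line='/dev/sda1: UUID="3e6be9de-8139-4a12-9106-a43f08d823a6" TYPE="ext4"'

# Pull out just the UUID field:
uuid=$(printf '%s\n' "$line" | sed 's/.*UUID="\([^"]*\)".*/\1/')

# Candidate /etc/fstab entry (note the filesystem-type field, which the
# original /dev/sda1 line is missing):
echo "UUID=$uuid / ext4 defaults 0 0"
```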

1 Answer


I wrote a step-by-step, detailed answer describing how I solved a very similar challenge on the question Turning a running Linux system into a KVM instance on another machine. I hope it proves a useful answer for this question too.

Goal of that answer: to take a physical Linux P node running live production and virtualise it, without having to create and allocate multi-terabyte disks or use md RAID in the V guest, because the target hypervisor (Proxmox 5) used ZoL/ZFS. I also wanted to mitigate downtime/reboots on the running P node.

Kyle