After installation, KVM virtual machine stops instead of rebooting; -no-reboot keeps being added

1

After successfully installing an Ubuntu 14.04 LTS KVM virtual machine, I need to reboot it for everything to take effect. The problem is that it doesn't actually reboot: it just stops, and then I have to start it up again manually on the CLI. I found this in the KVM/QEMU logs:

2016-02-22 10:34:21.398+0000: starting up
....
-no-reboot -boot
....

Does -no-reboot mean that the VM cannot be restarted by the guest itself?

An XML dump shows the following:

<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>restart</on_crash>

I tried to find a solution on the internet, but have not succeeded so far. How can I install my VM so that -no-reboot no longer appears?

https://www.redhat.com/archives/libvir-list/2013-April/msg01734.html mentions that if each of the 'on' events wants to destroy the VM, '-no-reboot' will be added; otherwise '-no-shutdown' will be used. But since only on_poweroff is set to destroy, '-no-shutdown' should be added, right?
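For comparison (if I understand that message correctly), the case that should trigger -no-reboot is the one where all three events destroy the domain, i.e. something like this in the XML:

<on_poweroff>destroy</on_poweroff>
<on_reboot>destroy</on_reboot>
<on_crash>destroy</on_crash>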

EDIT:

After ejecting the CD-ROM and starting up the VM again, -no-shutdown now does appear in the logs. I think it needs to be there already when the VM is created with virt-install. Any idea how to fix this?

file and fdisk -l of a rebooted machine:

john@h3:~/images$ sudo file image.img 
image.img: x86 boot sector

john@h3:~/images$ sudo fdisk -l image.img 

Disk image.img: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders, total 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0001707b

             Device Boot      Start         End      Blocks   Id  System
image.img1   *        2048      499711      248832   83  Linux
image.img2          501758    20969471    10233857    5  Extended
image.img5          501760    20969471    10233856   8e  Linux LVM

And of a newly created VM that has not been installed yet:

john@h3:~/images$ sudo file newimage.img 
newimage.img: data

john@h3:~/images$ sudo fdisk -l newimage.img

This only adds "Disk newimage.img doesn't contain a valid partition table" at the end of the output.

These raw images are created like this: fallocate -l 2048M /path/to/image.img

Beeelze

Posted 2016-02-22T11:09:27.500

Reputation: 161

Answers

0

This answer seems to have fixed my issue. Simply adding --noautoconsole --wait=-1 to my virt-install command did the trick.
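For reference, my virt-install call now looks roughly like this (the name, sizes and paths are placeholders, not my exact values):

sudo virt-install \
    --name testvm \
    --ram 1024 \
    --vcpus 1 \
    --disk path=/home/john/images/image.img,format=raw \
    --cdrom /path/to/ubuntu-14.04-server-amd64.iso \
    --noautoconsole \
    --wait=-1

With --wait=-1, virt-install blocks until the installation has finished instead of returning immediately.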

However, I don't think it's the best solution. If I want to create multiple virtual machines at the same time, I probably have to use something like threads, correct? Because right now I have to wait for the first one to complete before starting the next.

UPDATE:

I decided to create a shell script that starts multiple virt-install commands in the background so they run simultaneously.
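Roughly like this (the VM names and image paths are placeholders; the real script builds them from its input):

#!/bin/bash
# Start several installations in parallel; each virt-install still waits
# for its own guest to finish installing because of --wait=-1.
for name in vm1 vm2 vm3; do
    sudo virt-install \
        --name "$name" \
        --ram 1024 \
        --disk path=/home/john/images/"$name".img,format=raw \
        --cdrom /path/to/ubuntu-14.04-server-amd64.iso \
        --noautoconsole \
        --wait=-1 &
done
wait   # block until all background installs are done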

Beeelze

Posted 2016-02-22T11:09:27.500

Reputation: 161

0

You should definitely check the image you are using for the virtual machine; maybe something is wrong with the bootloader in there:

$ sudo file /path/to/image.img

$ sudo fdisk -l /path/to/image.img

Here's a similar question that has more info on the subject: https://unix.stackexchange.com/questions/159294/kvm-guest-os-not-accessible-after-system-reboot

Victor Marchuk

Posted 2016-02-22T11:09:27.500

Reputation: 687

Thanks for your answer; please check the latest edits to my OP. Is there anything wrong there? – Beeelze – 2016-02-23T08:21:50.100