
I have just built a shiny new KVM/libvirt-based virtual machine host, containing 4 SATA II hard drives, and running CentOS 5.5 x86_64.

I have decided to create virtual machine disks as logical volumes in an LVM volume group managed as a libvirt storage pool, instead of the usual practice of creating the disks as qcow images.
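
For reference, here is a minimal sketch of how an existing volume group can be exposed to libvirt as a "logical" (LVM) storage pool from the command line; the pool name vgpool and VG name vghost are just illustrative placeholders:

    # Define an LVM-backed pool on top of an existing VG named vghost.
    virsh pool-define-as vgpool logical --source-name vghost --target /dev/vghost
    virsh pool-start vgpool        # mark the pool active in libvirt
    virsh pool-autostart vgpool    # re-activate it whenever libvirtd starts
    # Each volume created in the pool becomes an LV in vghost:
    virsh vol-create-as vgpool vm1-disk 20G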

What I can't decide on is whether I should create the virtual machine logical volumes in the VM host's volume group, or in a dedicated volume group.

Which method should I choose, and why?


Method 1: Use the VM host's volume group

Implementation:

  • small RAID1 md0 containing the /boot filesystem
  • large RAID10 md1 occupying the remaining space, which contains an LVM volume group vghost. vghost contains the VM host's root filesystem and swap partition
  • create virtual machine disks as logical volumes in vghost as required (rough sketch below)
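
Roughly, the commands for this layout would look something like the following; device names, partition numbers and sizes are illustrative only:

    # 4-way RAID1 for /boot, RAID10 for everything else (device names assumed).
    mdadm --create /dev/md0 --level=1  --raid-devices=4 /dev/sd[abcd]1
    mdadm --create /dev/md1 --level=10 --raid-devices=4 /dev/sd[abcd]2
    # One VG holds both the host and the guests.
    pvcreate /dev/md1
    vgcreate vghost /dev/md1
    lvcreate -n root -L 10G vghost
    lvcreate -n swap -L 4G  vghost
    # Per guest, as needed:
    lvcreate -n vm1-disk -L 20G vghost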

Pros:

  • if the VM host's root filesystem runs out of space, I can allocate more space from vghost with relative ease (see the resize sketch after this list)
  • The system is already up and running (but it is no big deal to start over)
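
For example, growing the host root filesystem in place would look roughly like this, assuming the root LV is /dev/vghost/root and holds an ext3 filesystem (the CentOS 5 default):

    lvextend -L +5G /dev/vghost/root   # take 5GB of free extents from vghost
    resize2fs /dev/vghost/root         # grow ext3 online to fill the LV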

Cons:

Despite the fact that this method seems to work, I can't shake the feeling that it is somehow a bad idea. I feel that:

  • this may somehow be a security risk
  • at some point in the future I may find some limitation with the setup, and wish that I used a dedicated group
  • the system (CentOS, libvirt, etc.) may not really be designed to be used like this, and therefore at some point I might accidentally corrupt/lose the VM host's files and/or filesystem

Method 2: Use a dedicated volume group

Implementation:

  • same md0 and md1 as in Method 1, except make md1 just large enough for the VM host (eg. 5 to 10GB)
  • large RAID10 md2 occupying the remaining space. md2 contains an LVM volume group vgvms, whose logical volumes are to be used exclusively by virtual machines (sketched below)
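
Building on the Method 1 sketch above, the extra pieces would be roughly as follows (again, device names and sizes are illustrative):

    # md0/md1 as before, but md1 now only ~10GB for the host's own VG.
    mdadm --create /dev/md2 --level=10 --raid-devices=4 /dev/sd[abcd]3
    pvcreate /dev/md2
    vgcreate vgvms /dev/md2
    # Expose only the dedicated VG to libvirt:
    virsh pool-define-as vgvms logical --source-name vgvms --target /dev/vgvms
    virsh pool-start vgvms
    virsh pool-autostart vgvms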

Pros:

  • I can tinker with vgvms without fear of breaking the host OS
  • this seems like a more elegant and safe solution

Cons:

  • if the VM host's filesystem runs out of space, I would have to move parts of its filesystem (eg. /usr or /var) onto vgvms, which doesn't seem very nice.
  • I have to reinstall the host OS (which as previously stated I don't really mind doing)

UPDATE #1:

One reason why I am worried about running out of VM host disk space in Method 2 is because I don't know if the VM host is powerful enough to run all services in virtual machines, ie. I may have to migrate some/all services from virtual machines to the host OS.

VM host hardware specification:

  • Phenom II 955 X4 Black Edition processor (3.2GHz, 4-core CPU)
  • 2x4GB Kingston PC3-10600 DDR3 RAM
  • Gigabyte GA-880GM-USB3 motherboard
  • 4x WD Caviar RE3 500GB SATA II HDDs (7200rpm)
  • Antec BP500U Basiq 500W ATX power supply
  • CoolerMaster CM 690 case

UPDATE #2:

One reason why I feel that the system may not be designed to use the host VG as a libvirt storage pool in Method 1 is some behaviour I noticed in virt-manager:

  • upon add, it complained that it couldn't activate the VG (obviously, because the host OS has already activated it)
  • upon remove, it refused to do so because it couldn't deactivate the VG (obviously, because the host OS is still using the root and swap LVs)
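
One way to see the overlap that confuses virt-manager is to list the pool's volumes from virsh; with Method 1 the host's own LVs show up next to the guest disks, which is why libvirt can never deactivate the VG (the pool name vgpool is an assumption):

    virsh pool-list --all     # state of all defined storage pools
    virsh vol-list vgpool     # with Method 1 this lists root and swap
                              # alongside the guest LVs
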
mosno
  • I asked a question (#272324) where your solution #1 would have been a very good answer! And this is exactly what I went for in a similar setup, and I'm so far quite happy with it. I do however have a problem where disk IO within the guest is way slower than when "loop-mounting" the same LV in the host. – stolsvik Sep 17 '11 at 23:16

3 Answers


Well thought-out question!

I'd go with Method 2, but that's more of a personal preference. To me, the Method 2 Cons aren't much of an issue. I don't see the host OS outgrowing its 5-10GB partition, unless you start installing extra stuff on it, which you really shouldn't. For the sake of simplicity and security, the host OS really should be a bare minimal install, not running anything except the bare minimum needed for administration (e.g. sshd).

The Method 1 Cons aren't really an issue either, IMO. I don't think there would be any extra security risk: if a rooted VM were somehow able to break out of its partition and infect/damage other partitions, having the host OS on a separate VG would probably not make any difference anyway. The other two Cons are not something I can speak to from direct experience, but my gut says that CentOS, LVM, and libvirt are flexible and robust enough not to worry about them.

EDIT - Response to Update 1

These days, the performance hit of virtualization is very low, especially with processors that have built-in support for it, so I don't think moving a service from a guest VM into the host OS would ever be worth doing. You might get a 10% speed boost by running on the "bare metal", but you would lose the benefits of having a small, tight, secure host OS, and potentially impact the stability of the whole server. Not worth it, IMO.

In light of this, I would still favour Method 2.

Response to Update 2

It seems that the particular way libvirt assumes storage is laid out is yet another point in favour of Method 2. My recommendation is: go with Method 2.

Steven Monday
  • thanks. I have appended 2 updates to my question, which further explain why I have listed some of the cons that you have addressed. Do the updates change your opinion at all? – mosno Nov 12 '10 at 00:14
  • @mosno: Updated my answer in response to your updates. – Steven Monday Nov 12 '10 at 19:23
  • Thanks everyone for your answers, all have been helpful to me and it was hard to choose whose answer to accept. I am choosing Steven's because I feel it makes the best effort to address the question asked. For the record, while I agree Method 2 is probably better, I chose to stay with Method 1 because it works and because of time constraints. – mosno Nov 18 '10 at 23:34
  • 1
    Also, I am staying with Method 1 for now because I think it would be educational to explore the limitations of this method. For example, I learnt that if in a guest OS you create an LVM PG directly onto the device (eg. device /dev/vda instead of partition /dev/vda1), then the host OS' pvscan lists the guest's PV (ie. use /dev/vda1, not /dev/vda). – mosno Nov 18 '10 at 23:35
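
A common way to keep the host's pvscan from descending into guest disks like that is a device filter in the host's /etc/lvm/lvm.conf. A hedged sketch, assuming the host VG is called vghost:

    # devices section of /etc/lvm/lvm.conf on the host: reject the host VG's
    # LV device nodes so LVM never scans inside guest disk images, and accept
    # everything else.
    devices {
        filter = [ "r|^/dev/vghost/|", "r|^/dev/mapper/vghost-|", "a|.*|" ]
    }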

As long as only one system attempts to use any given LV in read/write mode at any time, it is feasible to use the same VG for host and guests. If multiple systems attempt to write to the same LV then corruption of the filesystem will result.
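
In practice that just means handing each LV to exactly one guest, e.g. (guest and LV names are purely illustrative):

    lvcreate -n vm1-data -L 20G vghost
    virsh attach-disk vm1 /dev/vghost/vm1-data vdb   # attach this LV to guest vm1 only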

Ignacio Vazquez-Abrams

You might want to take a look at ProxmoxVE, and maybe tinker with it to see how that project does what you're talking about.

ProxmoxVE is a bare-metal KVM host that uses a perl implementation of libvirt rather than RHEL's heavier counterpart. It implements both scenarios.

Virtual disks are .raw and sparse, similar to .qcow but faster.

The qcow & vmdk disk image formats are also supported but I think there might be LVM limitations involved. I don't use them so I can't say much about that.

LVM storage is shared between the VMs on a node, and the underlying block devices can be DRBD devices.

As for sharing the OS's VG space, the only limitation to be concerned with is snapshot size during backups. That value can be changed in a config file (shown below); I sometimes see forum posts from people who have had to change it, but the defaults have served me well for a couple of years now, even with huge virtual disks.
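
For what it's worth, the knob I mean lives in /etc/vzdump.conf; a hedged example (the exact option set depends on the PVE version):

    # /etc/vzdump.conf -- LVM snapshot size (in MB) used while a backup runs
    size: 4096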

PVE's LVM storage details:

http://pve.proxmox.com/wiki/Storage_Model#LVM_Groups_with_Network_Backing

This is how the VGs are laid out:

    Found volume group "LDatastore1" using metadata type lvm2
    Found volume group "LDatastore0" using metadata type lvm2
    Found volume group "pve" using metadata type lvm2

These are my LVs:

    ACTIVE '/dev/LDatastore1/vm-9098-disk-1' [4.00 GB] inherit
    ACTIVE '/dev/LDatastore1/vm-7060-disk-1' [2.00 GB] inherit
    ACTIVE '/dev/LDatastore1/vm-5555-disk-1' [8.00 GB] inherit
    ACTIVE '/dev/LDatastore0/vm-4017-disk-1' [8.00 GB] inherit
    ACTIVE '/dev/LDatastore0/vm-4017-disk-2' [512.00 GB] inherit
    ACTIVE '/dev/LDatastore0/vm-7057-disk-1' [32.00 GB] inherit
    ACTIVE '/dev/LDatastore0/vm-7055-disk-1' [32.00 GB] inherit
    ACTIVE '/dev/LDatastore0/vm-6030-disk-1' [80.01 GB] inherit
    ACTIVE '/dev/pve/swap' [3.62 GB] inherit
    ACTIVE '/dev/pve/root' [7.25 GB] inherit
    ACTIVE '/dev/pve/data' [14.80 GB] inherit

This is LVM on a RAID10 array of six 7200rpm Seagate Barracuda SATA drives:

    CPU BOGOMIPS:      53199.93
    REGEX/SECOND:      824835
    HD SIZE:           19.69 GB (/dev/mapper/LDatastore0-testlv)
    BUFFERED READS:    315.17 MB/sec
    AVERAGE SEEK TIME: 7.18 ms
    FSYNCS/SECOND:     2439.31

And this is LVM on a single Intel X25-E SATA SSD, same VG as the aforementioned /dev/pve/data where VMs live:

    CPU BOGOMIPS:      53203.97
    REGEX/SECOND:      825323
    HD SIZE:           7.14 GB (/dev/mapper/pve-root)
    BUFFERED READS:    198.52 MB/sec
    AVERAGE SEEK TIME: 0.26 ms
    FSYNCS/SECOND:     1867.56

NginUS