8

I'm trying to build a system that will run short-lived jobs (CI and test builds) of software components; it's mandatory, according to my requirements, that each job live on its own private host. I'm taking that definition to include paravirtualisation options as well, as it seems like that will save me a lot of headache.

I'm working on a Mac, so pretty much every technology is out; libvirt, QEMU, etc. just won't work for me locally. I am however planning on deploying to Debian, so anything that runs on Debian is back on the table, provided I can script the provisioning of the host machine as well as its guest domains.

My intended setup is something I can use to bootstrap a Debian installer, such that upon booting, the machine is automatically provisioned (Chef, Puppet, Babushka, I don't mind which), and part of that provisioning should build a template rootfs that can be used for booting a container. The container itself also needs to be provisioned, so that when the container comes up, it knows what work it has to do, can do the work, and then exit.

In short, here's the workflow I need:

  1. Boot a machine (virtual or otherwise) and have it ready to do work.
  2. The work should be performed by a script installed by Chef/Puppet/Babushka/etc.
  3. When work comes in, a virtual machine should be started to do the work.
  4. The VM should do the work, exit, and release its resources to the parent/host machine. It's important that this scales to at least hundreds of guest VMs on reasonable hardware. (A rough sketch of the lifecycle I have in mind follows this list.)
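
To make that concrete, here is a rough sketch of the guest lifecycle I have in mind, using the classic LXC tools; the container and script names are purely illustrative:

    # create a container from the stock Debian template (or a prebuilt rootfs)
    lxc-create -n job-42 -t debian

    # start it in the background; its init/job script does the actual work
    lxc-start -n job-42 -d

    # block until the job finishes and the container stops, then reclaim it
    lxc-wait -n job-42 -s STOPPED
    lxc-destroy -n job-42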

I've come to a point where I've tried the following, and abandoned them for the reasons given inline below:

For the host machine:

  1. Pre-seeded Debian micro ISO images from Instalinux (LinuxCOE backed). (Bad: didn't work at all ("No kernel modules found") because the Instalinux images are out of sync with the FTP repositories; apparently this solution is notoriously fragile. It also doesn't allow much scope for post-install, dropping known SSH keys, host keys, etc. onto the machine; it seems like fire-and-forget, so in the end I'd have a running machine, but no access to it.)
  2. Pre-seeded Debian netinst ISO. (Bad: same problems as above, except at least the install typically completes, as there's no kernel disparity between the ISO and the FTP repository. Still limited scope for post-install. Good: absolutely reliable and repeatable, easy to throw at any VM technology stack on the Mac or at a bare metal machine; it would work anywhere, but I can't post-install it enough. A sketch of the kind of late_command hook I mean follows this list.)
  3. Various methods of building a rootfs and compiling it into a bootable hard disk image. (Bad: what little I could get working was fragile as hell, would be difficult to install onto a real machine, and is a complex build process. Good: if I could get it working, this would seem to provide the most scope for pre-configuring the machine to a given specification with SSH keys, host keys, hostname, software installed from Git and whatever else; but then the question would be how to package it for distribution, or how to script its recreation.)
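
For context on point 2, this is roughly the kind of preseed late_command hook I mean (the URL and paths are placeholders); it's enough to drop an SSH key or fetch a bootstrap script, but it feels like a thin hook to hang full provisioning off:

    # appended to preseed.cfg
    d-i preseed/late_command string \
        mkdir -p /target/root/.ssh ; \
        wget -O /target/root/.ssh/authorized_keys http://192.168.10.1/keys/ci.pub ; \
        wget -O /target/usr/local/sbin/bootstrap.sh http://192.168.10.1/bootstrap.sh ; \
        in-target chmod +x /usr/local/sbin/bootstrap.sh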

I'm honestly not sure what technology people are expected to use to bring up a VM from nothing to a running, working and useful system. It seems like three steps to me: a) operating system, b) system configuration (users, etc.), and then c) filesystem changes.

For the guest (virtual) machines:

  1. Lots of things; mostly I think the answer here is a read-only rootfs created with debootstrap, plus a special partition on the LXC container which contains the work to be done for this specific instance (a job manifest). Insert all the usual caveats about building the OS, booting, creating users, checking out software from Git, and doing the work. (There's a sketch of what I imagine below.)
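
Roughly what I imagine for the guests, as a sketch only (the paths, suite and config keys are assumptions, matching the older LXC releases in Debian stable):

    # build one minimal template rootfs, shared read-only by all jobs
    debootstrap --variant=minbase wheezy /srv/lxc/template http://ftp.debian.org/debian

    # per-job container config, e.g. /srv/lxc/job-42/config:
    #   lxc.utsname     = job-42
    #   lxc.rootfs      = /srv/lxc/template
    #   lxc.mount.entry = /srv/jobs/42 srv/job none ro,bind 0 0

    # run the job manifest inside the container, then clean up
    lxc-start -n job-42 -f /srv/lxc/job-42/config /srv/job/run.sh
    lxc-destroy -n job-42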

I'm genuinely not sure what tools to reach for; it seems like this problem should be well solved, but I just can't figure out where to really get started.

Most people seem to suggest that for the host machine I should pick a virtualisation technology, boot a machine to a working state, and then snapshot it (libvirt seems the logical favorite for this), using the snapshot to bring up any subsequent installations for testing, or in production.
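
If I understand the suggestion, it amounts to something like this (the domain and clone names are illustrative, and it assumes the libvirt/virtinst tools are available):

    # snapshot the provisioned 'golden' domain so it can be reverted to a clean state
    virsh snapshot-create-as debian-base golden "clean provisioned state"

    # stamp out a short-lived copy of the base domain for each piece of work
    virt-clone --original debian-base --name worker-01 --auto-clone
    virsh start worker-01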

For the guest machines, LXC seems to provide the easiest option, except that backgrounding a container and connecting to it later over the console is broken in all present kernels, and the newest version of LXC available in stable Debian is more than 18 months old and lacks a lot of features which are widely used.

Typically I'm an application developer and I don't often work with server-level technology (and I'm certain that SF will flag this question as "too subjective"), but I'm genuinely uncertain which tools to reach for.

The final word is that I know of one similarly stacked project (travis-ci.org) which is using Vagrant boxes for this. That seems like a rather blunt instrument: big, slow, Ruby-oriented tooling designed for small-scale desktop provisioning of test VMs, being used for critical service infrastructure. But I also know some of those guys, and they're smarter than I am, so maybe they just gave up.

Any help appreciated.

Lee Hambley
  • Truly devops... This can definitely be automated. Building the system is easy. I'm assuming you can script the work or use the configuration management tool of your choice. More information about the destination or final result of this effort would be helpful. You're right on the edge between a private cloud solution or using something like LXC... – ewwhite Feb 08 '13 at 16:15
  • Absolutely, the point is to be able to build the host in a way that my team and I (Mac users) can repeatably build a *host*, inside which we can develop with LXC guests, but build that in a way that we can also deploy it to production. The tooling for our application is all written in Ruby, and I'd REALLY like to use LXC for the guests. The host machine is naturally enough long-lived, but as the typical lifetime of a guest will be 2-10 minutes, the whole infrastructure is ephemeral, really. It's about dev vs. production, and having a repeatable process. – Lee Hambley Feb 08 '13 at 16:41

3 Answers

2

Some ideas:

  1. Your point "hundreds of VMs on reasonable hardware" makes me (without personal experience) think of VMs that either boot over the network or share most of their volume space (/usr) via NFS. It depends on how similar your VMs are.
  2. "What little I could get working was fragile as hell" Hard to believe. Can you be more precise what the problem is?
  3. "would be difficult to install onto a real machine" You mean "difficult" compared to what, to the wanted 1-click solution for VM creation? I would ask: How difficult is this and how often is this going to happen? What is the difference, recreating the initrd for the respective hardware?
  4. "however I can't post-install it enough" What you you wand/need and why does that not work? You could make the download of a script part of the boot process. The VM gets its IP by DHCP (hard configured to the VMs MAC address) and Samba delivers different post-install scripts to the VMs, depending on the IP address of the client.
Hauke Laging
  • +1 for network booting. I don't have enough experience to write out a full answer about it, but I can tell you that I have been in places that deploy hundreds of machines, both physical and virtual, by having them boot from a PXE server. It means you won't have to fuss with separate disk images for each VM. – Moshe Katz Feb 14 '13 at 19:11
1

While reading your post I kept thinking that Vagrant and Jenkins with the Vagrant plugin would fit your needs pretty well. Any box that can actually handle the number of VMs you're talking about shouldn't even notice the overhead of the tools maintaining the environment.
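
To sketch what that might look like as a Jenkins build step (the job script path is an assumption; the Vagrantfile is whatever describes your job's environment):

    # Jenkins "execute shell" step, run in a workspace containing a Vagrantfile
    vagrant up --provision              # boot and provision the throwaway VM
    vagrant ssh -c '/srv/job/run.sh'    # run the actual work inside it
    vagrant destroy -f                  # tear it down and release its resources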

0

For something that works on Apple and Debian, the only thing I have tried is VirtualBox. What's nice about using VirtualBox here is that you could build a VM on your Mac and copy it onto a Debian system running the same version of VirtualBox, and it will boot.

Having hundreds of VMs with VirtualBox sounds like you will be spending quite a bit of time using the VBoxManage interface to script the necessary unique info for each VM, like UUIDs for hard drives and MAC addresses on network interfaces.

If the base system is going to use the same software configured in the same way, you can create a snapshot of the system in VirtualBox and freeze it, so that no changes are written over your frozen snapshot; they are instead written into a new temporary storage area. Then shut down the VM, restore back to the snapshot, and you are working off a clean system without any of the changes that were made during testing. This can all be scripted using VBoxManage.

Using your snapshot you could also make hundreds of copies of that VM image, using the VBoxManage scripting interface to make each copy unique in the ways that matter (i.e. UUIDs and MAC addresses). Then have a startup script apply whatever changes and configs you need onto your VMs for testing, or for running various tests. (A rough sketch is below.)
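
A rough sketch of that flow with VBoxManage (the VM, snapshot and clone names are illustrative):

    # take a snapshot of the provisioned base VM
    VBoxManage snapshot "debian-base" take "clean"

    # clone a throwaway worker from that snapshot, with a fresh MAC address
    VBoxManage clonevm "debian-base" --snapshot "clean" --name "job-42" --register
    VBoxManage modifyvm "job-42" --macaddress1 auto

    # run it headless, wait for the work to finish, then throw it away
    VBoxManage startvm "job-42" --type headless
    # ... job runs ...
    VBoxManage controlvm "job-42" poweroff
    VBoxManage unregistervm "job-42" --delete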

nelaaro