
Is it feasible to create a single "master" OpenVZ guest that would be used only for package management, and use something like mount --bind on several other OpenVZ guests to, in effect, trick them into using the environment installed by the master guest?

The point of this would be so that users can maintain their own containers and yet stay in sync with the master development environment, so they'll always have the latest & greatest requirements without worrying too much about system administration. If they need to install their own packages, they could put them in /opt or /usr/local (or set a path to their home directory).

To rephrase: I would like several OpenVZ guests (developers', for example) whose /bin, /usr (and so on) actually refer to the same disk location as those of a master OpenVZ guest, which can be started up to install and update the common packages shared by this whole group of guests.

For what it's worth, we're running Debian 6.

Edit:

I have tried mounting /bin, /lib, /sbin, and /usr in this fashion (bind, read-only), but the containers refuse to start, complaining that files are already mounted or otherwise in use:

Starting container ...
vzquota : (error) Quota on syscall for id 1102: Device or resource busy
vzquota : (error)       Possible reasons:
vzquota : (error)       - Container's root is already mounted
vzquota : (error)       - there are opened files inside Container's private area
vzquota : (error)       - your current working directory is inside Container's
vzquota : (error)         private area
vzquota : (error)       Currently used file(s):
/var/lib/vz/private/1102/sbin
/var/lib/vz/private/1102/usr
/var/lib/vz/private/1102/lib
/var/lib/vz/private/1102/bin
vzquota on failed [3]

If I unmount these four volumes, start the guest, and then mount them once the guest is running, the guest never sees the mounts.

andyortlieb

2 Answers


Based on the comments, the question is a little different from what I (and maybe others) expected: "maintain" refers not to the packages within the containers, but to their individual configuration.

This makes the process more difficult, but still possible. For example:

Mounting the directories

As you've said, you'll need a shared mount for the binary directories (such as /usr/bin) - that's the first step in ensuring the containers can share packages, since it makes the installed binaries readily available to all of the other containers.

When you mount --bind, make sure you do it onto the ROOT directory defined in the appropriate /etc/vz/conf/<veid>.conf file - not onto the private area.

For example, mount --bind /some/mount/point/bin /vz/root/1/bin

It's also essential that these mounts are set up reliably (how do you ensure they are there after the machine reboots?). To do this, OpenVZ offers start and stop hooks in the form of scripts. Assuming you are working within /etc/vz/conf, you can have:

  • /etc/vz/conf/<veid>.mount - the start hook: run when the container's private area has been mounted, just before the container starts
  • /etc/vz/conf/<veid>.umount - the stop hook: run when the container's root is unmounted, after it has shut down

Their names are derived from their technical definitions: OpenVZ mounts /vz/private/<veid> onto /vz/root/<veid> (assuming the default directories), and these scripts hook into that mount and unmount.
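As a sketch of such a hook (the VEIDs, the shared source path, and the directory list are examples, assuming the default /vz layout), a 1102.mount script might look like:

```shell
#!/bin/bash
# /etc/vz/conf/1102.mount - run by vzctl once CT 1102's private area
# has been mounted onto /vz/root/1102, before the container starts.
# SHARED is a placeholder for the tree maintained by the master guest.

SHARED=/srv/master
CT_ROOT=/vz/root/1102

for dir in bin sbin lib usr; do
    # Bind the shared directory into the container, then remount it
    # read-only so guests can't modify the common environment.
    mount --bind "$SHARED/$dir" "$CT_ROOT/$dir" || exit 1
    mount -o remount,ro,bind "$CT_ROOT/$dir" || exit 1
done
exit 0
```

A matching 1102.umount script would umount the same directories, so stopping the container doesn't leave stale or orphaned mounts behind.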

Configuration

In the comments, you've noted that one advantage would be that users could configure their server how they like (e.g. my.cnf or httpd.conf/apache2.conf). The only issue I can see is that you need to ensure that, when you install the packages, these configuration files are set up inside each of the containers. You could try sharing the configuration directories via a copy-on-write mount.

The issue with this is deciding exactly which directories to share. Apache keeps its configuration over at /etc/httpd (or /etc/apache2 on Debian) and MySQL at /etc/mysql - so you need to be sure that you copy these stock files over, or this idea won't work. Personally, I'd inspect the .deb files installed by your 'global' package admin, and extract any directories that shouldn't be shared into the individual containers. But this is just one way of doing it - I'm sure there are a fair few more.
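For example (package and file names here are illustrative), Debian's own tooling can tell you which shipped paths live under /etc and therefore need a per-container copy rather than a shared mount:

```shell
# Paths shipped by an installed package; anything under /etc is a
# candidate for a per-container copy.
dpkg -L apache2.2-common | grep '^/etc'

# The same for a .deb file that hasn't been installed yet:
dpkg-deb -c /path/to/package.deb

# dpkg also records each installed package's declared config files:
cat /var/lib/dpkg/info/apache2.2-common.conffiles
```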

Gotchas

One thing I came up with is that you can replace the binaries and libraries that come with a package in place, but you need to observe whether or not the package manager restarts services after installing it - a running daemon keeps using the old binary until it's restarted, so each container may need its services restarted after an upgrade.

Error in your post

You are mounting on the private area; you need to mount on the /var/lib/vz/root/1102/bin directory instead (assuming that's where ROOT points in your container's .conf file).
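A minimal sketch of the corrected mounts, using the directories from your error output (the shared source path /srv/master is a placeholder; run these from a <veid>.mount hook so they happen at the right point in the container's start-up):

```shell
# Bind onto the container's root (the live, mounted view), not its
# private area. /srv/master stands in for the shared master tree.
for dir in bin sbin lib usr; do
    mount --bind "/srv/master/$dir" "/var/lib/vz/root/1102/$dir"
done
```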

Jay
  • The package manager wouldn't be broken, it would just only be accessible to the designated sys admin with security clearance to install packages on the entire development environment. If these users were all sharing one box, they wouldn't be able to use apt-get at all anyway. – andyortlieb Jul 02 '12 at 18:26
  • I meant they'd be broken to those other people, who have `root` but aren't able to install packages in their own container. If you don't want them to have the flexibility that `root` offers, why give them an entire container to use? You'd be better off just giving them normal user accounts on a larger development server if they can't do this kind of management for themselves. – Jay Jul 02 '12 at 18:31
  • This is more of a research project. But my answer is that this will still permit them to create however many databases with whatever names, run servers on any ports, and be able to configure their webservers any which way they like without stepping on each other's toes. Let's say for example they're all working on an application that demands that a database exists at localhost mysql on the standard port, and that it has to have a particular name (freepbx comes to mind--mostly because its contributed modules are written by jerks... and also they're necessary). – andyortlieb Jul 02 '12 at 18:40
  • This is a lot clearer to me now. I was actually going to add "*unless the individual containers won't need customisation at this level*" to my answer - wish I had now! I was assuming that you meant it from a more industrial viewpoint: e.g. have a base server and individual containers build on that, but if you update the base server, everyone else gets the updates (which is what my answer addresses). Yes - in these terms, what you're asking will work, but you'll need to ensure that when you do update packages, the correct services are restarted, etc. I'll update my answer. – Jay Jul 02 '12 at 18:44
  • Thanks. I think that mount --bind may be the wrong way to go about this. It does work as you suggested, but starting and stopping the machines creates an odd situation of mounts getting clobbered and orphaned. Perhaps I should look into NFS, something more similar to what you would do with a PXE setup. I was hoping that OpenVZ would afford me more filesystem control, which it does of course, but not quite flexibly enough to use mount bind. – andyortlieb Jul 02 '12 at 19:28
  • You can make `.mount` and `.umount` scripts in your `/etc/vz/conf` directory - which will run on `start` and `stop` of the container respectively - this might be your best bet if you want to use bind mounts. Given the article on [Bind mounts](http://wiki.openvz.org/Bind_mounts) I think OpenVZ is OK with them as a concept. – Jay Jul 02 '12 at 20:55

Jay has the correct answer for implementing it as you ask, but I'm not sure if you want to upgrade people's containers while stuff is running.

I think you still want to mount system directories as read-only in their containers, but you want to version those system directories and you want them to install their own software into their home directory or somewhere that is bind mounted from outside their container.

Each time you upgrade, save the result as a CT template. When a user wants the latest version, recreate their CT from the updated template. This keeps their systems stable, but they can ask for an upgrade at any time.
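With the standard vzctl tooling, that cycle might look like this (the VEIDs, template name, and paths are examples, assuming the default /var/lib/vz layout):

```shell
# 1. Update packages inside the master container, then snapshot it
#    as an OS template (the container must be stopped first).
vzctl stop 101
tar czf /var/lib/vz/template/cache/debian-6.0-dev-20120702.tar.gz \
    -C /var/lib/vz/private/101 .

# 2. Recreate a user's container from the updated template.
#    WARNING: this destroys the old CT - back up home dirs/data first.
vzctl stop 1102
vzctl destroy 1102
vzctl create 1102 --ostemplate debian-6.0-dev-20120702
vzctl start 1102
```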

rox0r