
I've got a fun challenge for you all. A company I'm working with is setting up its development/release workflow and environments. The eventual production environment is a VPS with a popular hosting provider, running CentOS inside a Virtuozzo virtualization environment.

We'd like to install a virtualized instance of the production environment (or something close to it) to each developer's local system. This will allow each developer to test changes locally on his/her own system without tainting anyone else's system when things go wrong.

Clearly, there are many ways to do this, but I imagine there's a specific approach that is superior. This is where I need your help.

So far, the best options seem to be as follows.


Export/Back up the Virtuozzo VM. Download it to a local system and install it into a Virtuozzo Container.

This is the cleanest/purest solution, but it may not work. The cloned VM would still have all of the network configuration from the VPS' network, for example. Would it be hard to write a script to update these (and any other old configuration values) so they are more appropriate for the local VM?
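Such a script wouldn't need to be complicated. Here's a minimal sketch of the idea (the file path, the sample contents, and the switch to DHCP are all assumptions for illustration; on a real CentOS container the config would typically live at /etc/sysconfig/network-scripts/ifcfg-eth0), shown here against a throwaway copy so it's safe to run:

```shell
# Sketch: point a cloned VPS network config at the local network by
# dropping the production addresses and switching the NIC to DHCP.
# The file path and contents below are illustrative examples only.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
DEVICE=eth0
BOOTPROTO=static
IPADDR=203.0.113.10
GATEWAY=203.0.113.1
ONBOOT=yes
EOF

# Remove the VPS addresses and let the local hypervisor assign one via DHCP.
sed -i -e 's/^BOOTPROTO=.*/BOOTPROTO=dhcp/' \
       -e '/^IPADDR=/d' \
       -e '/^GATEWAY=/d' "$cfg"

cat "$cfg"
```

The same pattern would extend to hostname, resolv.conf, and any hard-coded IPs in app configs.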

Deploy the codebase to a similarly configured VM created in an arbitrary virtualization product.

This approach isn't as clean, but I know it can work (I've done it before). For example, we could install the same versions of CentOS, Apache, MySQL, etc. on a local VMware VM. The environment wouldn't be identical to the production environment, but it might be close enough to make this workflow feasible.
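If you go this route, the main risk is version drift between production and the local VMs. A minimal sketch of a drift check (the package names and versions are made up for illustration; in practice you'd feed it sorted `rpm -qa --queryformat '%{NAME} %{VERSION}\n'` output captured from each machine):

```shell
# Sketch: flag packages whose versions differ between prod and a local VM.
# The two sorted "name version" files below are fabricated sample data.
prod=$(mktemp)
loc=$(mktemp)
printf 'httpd 2.2.15\nmysql 5.1.73\nphp 5.3.3\n' > "$prod"
printf 'httpd 2.2.15\nmysql 5.5.60\nphp 5.3.3\n' > "$loc"

# join pairs records by package name; awk prints names whose versions differ.
drift=$(join "$prod" "$loc" | awk '$2 != $3 { print $1 }')
echo "$drift"   # prints: mysql
```

Running this periodically would tell you how far the "similar" environments have drifted from production.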

Another option?

What other options do you think there may be? Or, is one of these options the best? I'd love to hear your thoughts! :)


Edit:

Multiple cPanel/user accounts on the production server

I contacted our VPS host's customer support to ask for access to the Virtuozzo backups; the request was denied, but the rep suggested that we might instead create a separate cPanel/user account for each developer. This would make each developer's environment identical to the server's, but it would be a little less convenient since everything would still run remotely instead of locally. Still, a decent option.

rinogo

1 Answer


This will work perfectly for you ...

On the source VM instance ...

sudo su
cd /
tar cvpzf backup.tgz --exclude=/proc --exclude=/lost+found --exclude=/backup.tgz --exclude=/mnt --exclude=/sys /

On the target machine (dedicated or VPS)

tar xvpfz backup.tgz -C /

And make sure any excluded directories are re-created on the target (using absolute paths so it doesn't matter where you run it from):

mkdir /proc /lost+found /mnt /sys
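One caveat after restoring: as the comments below note, /etc/fstab and /boot/grub/menu.lst are platform-dependent, so it's worth flagging entries that reference the old platform's raw devices. A quick sketch (the sample fstab and the Virtuozzo-style device name are fabricated for illustration), run against a throwaway copy:

```shell
# Sketch: flag fstab entries that mount raw /dev devices, since those
# device names from the VPS likely won't exist on the target machine.
# Sample file contents are made up for illustration.
fstab=$(mktemp)
cat > "$fstab" <<'EOF'
/dev/ploop12345p1  /         ext4   defaults  1 1
tmpfs              /dev/shm  tmpfs  defaults  0 0
proc               /proc     proc   defaults  0 0
EOF

# Only lines whose source device starts with /dev/ need attention.
suspect=$(grep '^/dev/' "$fstab")
echo "$suspect"
```

Anything this prints should be rewritten to match the target machine's actual disks before the first boot.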

You could just create a nice Microsoft VirtualPC Linux image and hand it out to everyone

Ben Lessani
  • Have you done this before? This seems so simple, but sometimes the simplest solutions turn out to be the most reliable/elegant... – rinogo Mar 09 '12 at 02:24
  • 1
    A few times actually. With the exception that we did it with dedicated servers. The only thing you need to bear in mind is the contents of /etc/fstab and /boot/grub/menu.lst - as these will be platform dependant. – Ben Lessani Mar 09 '12 at 02:40
  • Fascinating. This is one of those things for which there are MANY solutions, and yours is quite intriguing. Thanks so much for your response. :) – rinogo Mar 09 '12 at 02:56
  • 1
    The fact you can do it on a running server is the best bit :) – Ben Lessani Mar 09 '12 at 03:07
  • Just as a follow-up - it has been a few years, and we actually went with the "similarly configured VM" approach described in my question. I don't recall exactly why we did - I think I might have had a hard time getting the archived files to execute in the VM - you know, different platform and all from the actual server. Or, maybe I had a hard time creating the archive. Regardless, it's good to know that this works well on actual dedicated servers! Thanks again for your answer. – rinogo Jun 06 '16 at 16:31