9

What are the pros and cons of deploying via automated installs vs. drive imaging? For Windows I know there are issues around SID generation when cloning drives. Are there any similar issues for deploying Linux via an image?

jldugger
Charles Hepner

8 Answers

3

I disagree with some of the answers here. If done correctly, you can take an image and load it on multiple systems with different hardware. I've personally seen images that support up to 30 different systems.

My answer to your question is to use both methods if you are very particular about how an image is created. Build the automated install first and then Sysprep the result. This leads to repeatable, self-documenting images.

Also, if you can write to the disk image in its saved state, you can extend what is on the image by including a script that runs during Sysprep. Alternatively, you can back up the system before running Sysprep, extend it, and then run Sysprep again afterwards.

I've done both methods with good results.

Regarding SID problems, you should always run Sysprep on a new image (although NewSID will also work), which resolves any SID issues. However, there are other applications that write GUIDs to the registry, and those need to be cleaned up as well; off the top of my head, Altiris and WSUS are two that do this.
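
As a rough illustration of the generalize step (assuming a Vista/2008-era Sysprep; the unattend file path is just an example), sealing the reference machine before capture looks something like this:

    rem strips the SID and other machine-specific state, then shuts down for capture
    C:\Windows\System32\sysprep\sysprep.exe /generalize /oobe /shutdown /unattend:C:\deploy\unattend.xml

On XP-era systems the equivalent is resealing with the Sysprep deploy tools instead of the /generalize switch.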

Rob Haupt
3

Imaging is a losing proposition. A full CentOS kickstart installation should take under 10 minutes. If your install is significantly slower, that's the problem worth investigating.

The problem with imaging is that you have to keep a "golden" copy and update it as you make changes to your build. This means you still need a mechanism for unattended installation, and each change requires doing such an installation, customizing the result for your environment (which itself needs to be automated), and promoting that copy to be the new golden copy. If you instead make changes directly to your golden copy, you'll quickly end up with a mess after years of patching, upgrades, etc.

If you must image the systems, then you should image the default build of the OS, and make your postinstall work (local customization) happen separately on each new machine. This way trivial changes to the build won't require rebuilding the golden copy.

If your hardware is not all identical, you can leverage the installer's automated detection/configuration. I've used a virtually identical Kickstart configuration between RedHat/CentOS 3, 4, and 5, and all kinds of hardware.
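
For illustration, a minimal Kickstart file along those lines might look like the sketch below (the mirror URL, password hash, and %post contents are placeholders, not a tested config):

    # minimal CentOS 5-style kickstart sketch -- hypothetical URLs and values
    install
    url --url=http://mirror.example.com/centos/5/os/x86_64
    lang en_US.UTF-8
    keyboard us
    network --bootproto=dhcp
    rootpw --iscrypted $1$example$replacewithrealhash
    firewall --enabled --ssh
    authconfig --enableshadow --enablemd5
    timezone --utc America/New_York
    bootloader --location=mbr
    clearpart --all --initlabel
    autopart
    reboot

    %packages
    @base
    openssh-server

    %post
    # site-specific customization lives here, not baked into a golden image
    wget -O /etc/motd http://deploy.example.com/conf/motd

Disk layout, package installation, and hardware detection are all handled by Anaconda at install time, which is why essentially the same file can work across very different machines.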

The worst result of imaging I've ever seen was a setup for installing Solaris systems using a golden image (and dd with multipacks). The Solaris installer and patching are so slow that this seemed to make sense. Unfortunately, Solaris makes it nontrivial to completely change the hardware under an installed system, so each hardware type had its own golden image, and a trivial build change meant making the same change on dozens of disks.

The second-worst was a Windows group imaging machines (again understandable, given a crippled installer) compared with a Linux group using Kickstart. The Linux group could deploy a change to, say, DNS configuration in a few minutes: one minute to change the postinstall, then a test build, then a manual push of the configuration to the existing machines. The Windows group had to boot each golden image, make the change, undo the cruft caused by booting the golden image, and then do a test build. (They also had to purchase special tools to automate configuration changes across multiple machines in order to update the existing ones.) The Windows group also had the option of reinstalling the golden image to make the change, but since that was a manual installation of the OS and dozens of applications, it would come out slightly different each time, requiring weeks of testing and risking the production systems being less identical than they otherwise could be.

Note that in both cases the Windows and Solaris golden-image setups were not handled in the best possible way, and some of the choices made by the admins involved betrayed a lack of competence. But starting with an unreasonable design did not help.

Kickstart works so well that there's no reason to even consider doing otherwise (I have a lot of little complaints about it, but it would be a thousand times worse if it was done by imaging machines). If your installation program is something besides Anaconda, and its automated installs are less useful than kickstart, you should consider whether that distribution was really ever intended for enterprise use.

carlito
2

Drive imaging is faster, but your hardware has to be very similar for it to work. It's also harder to customize the image; you'd need a base image for a web server, another for an email server, and so on. With automated installs you can have all machines install from the same network location but use different scripts depending on what kind of server you want, rather than needing to create and store multiple images.
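
As a sketch of that idea on the Linux side (the hostnames and file names here are made up), a PXE boot menu can point different server roles at different Kickstart files while everything installs from the same tree:

    # /tftpboot/pxelinux.cfg/default -- hypothetical example
    default menu.c32
    prompt 0

    label web
      menu label Install web server
      kernel vmlinuz
      append initrd=initrd.img ks=http://deploy.example.com/ks/web.cfg

    label mail
      menu label Install mail server
      kernel vmlinuz
      append initrd=initrd.img ks=http://deploy.example.com/ks/mail.cfg

Each ks file can share the same base settings and differ only in its %packages and %post sections.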

Dan Carroll
Jared
1

I can't really comment on the Linux side of things, but with Windows I'd say there aren't too many pros to using an automated process over an image.

There is a lot of guidance available from Microsoft here.

The proof is in the pudding: Microsoft now uses image-based deployments for Vista, Windows Server 2008, and Windows 7. Using the new tools and process described in the link above, you can deploy Windows to any HAL type (including XP), with full driver support and without a huge amount of effort.

Jacob
1

Deploying Windows via images is fully supported by Microsoft using Sysprep to "factory seal" the image before deploying it. Sysprep resets the SID and essentially prepares the image for a new machine.

However, it is highly recommended (unless you are a small company) to have a fully scripted install as well, for a simple reason. Every time you need to update your image, you have two options:

1) Continually modify the existing image and re-sysprep it every time. This will eventually result in problems as you keep patching, modifying and sysprepping the same image over and over.

2) Recreate the image from scratch, which is vastly preferable. However, if you don't have a scripted build, you run the high risk of getting plenty of inconsistencies between builds.

So, in summary:

  • Use a scripted build to create an image
  • Use the image for deployment

An additional wrinkle in all this is that Windows Vista, Server 2008, and 7 all use an image-based install anyway, so the gains of an image-based vs. a scripted install have largely disappeared.
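
As a rough sketch of that workflow (drive letters, paths, and the image name are illustrative only), capturing a Sysprepped reference build into a WIM with the Windows AIK's ImageX and applying it elsewhere looks something like:

    rem capture the generalized reference machine into a WIM
    imagex /capture C: D:\images\golden.wim "Golden build"

    rem apply image 1 from the WIM to a new machine's prepared system volume
    imagex /apply D:\images\golden.wim 1 C:

The scripted build produces the reference machine; the captured WIM is what actually gets pushed to hardware.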

0

It depends on what applications you will install and how long you'll keep the image without updating it.

There are quite a lot of updates coming out every month, so even after restoring a box from an image you'd still need to upgrade it.

Regarding SIDs: as far as I know it's enough to generate unique private keys (for SSH, HTTPS, and TLS, e.g. for SMTP/POP3 servers, etc.) and this should work fine. Generating a unique host name would also be nice. This might differ depending on the distribution; I'm mostly using Debian and didn't have any problems cloning virtual machines with that OS.
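
As a sketch of what that cleanup can look like on a cloned Debian guest (the hostname scheme and interface name are just examples), a first-boot script might do:

    #!/bin/sh
    # regenerate SSH host keys so every clone ends up with unique ones
    rm -f /etc/ssh/ssh_host_*
    dpkg-reconfigure openssh-server

    # give the clone its own host name, e.g. derived from its MAC address
    NEWHOST="clone-$(tr -d ':' < /sys/class/net/eth0/address)"
    echo "$NEWHOST" > /etc/hostname
    hostname "$NEWHOST"

Anything else that must be unique per machine (TLS certificates for HTTPS or SMTP/POP3, for example) gets reissued the same way.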

pQd
    If you're joining an Active Directory domain, duplicate SIDs will create problems. Sysprep and NewSID - http://technet.microsoft.com/en-us/sysinternals/bb897418.aspx - are easy enough to use, though. – Kara Marfia May 25 '09 at 21:46
0

Check out this question which is very similar:

https://stackoverflow.com/questions/398410/windows-disk-cloning-vs-automatic-installation-of-servers

(I asked it on stackoverflow.com a while ago, when serverfault.com wasn't around).

Tom Feiner
0

Especially if you have disparate hardware, I'd suggest automated installs. For Windows, look at Unattended.

http://unattended.sourceforge.net/

LapTop006