I'm learning my way around configuration management in general, and Puppet in particular as the tool to implement it, and I'm wondering: what aspects of a system, if any, should not be managed with Puppet?
As an example, we usually take for granted that hostnames are already set before handing the system over to Puppet's management. Basic IP connectivity, at least on the network used to reach the puppetmaster, has to be working. Using Puppet to automatically create DNS zone files is tempting, but DNS reverse pointers ought to be in place before the agent connects for the first time, or the certificates are going to come out with funny names.
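To make it concrete, the kind of thing I'd be tempted to do is roughly this (a minimal sketch assuming a BIND setup; the paths, template name and named service are placeholders, not my actual environment):

    # Sketch: zone file managed as a plain file resource, reloading BIND on change
    file { '/etc/named/zones/db.example.com':
      ensure  => file,
      owner   => 'root',
      group   => 'named',
      mode    => '0640',
      content => template('dns/db.example.com.erb'),
      notify  => Service['named'],
    }

    service { 'named':
      ensure => running,
      enable => true,
    }

...but none of that helps with the chicken-and-egg problem of the reverse records that the very first agent run already depends on.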
So should I leave IP configuration out of Puppet entirely? Or should I set it up before starting Puppet for the first time, but still manage the IP addresses with Puppet afterwards? What about systems with multiple IPs (e.g. for WAN, LAN and SAN)?
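For the multi-homed case, the naive thing I can picture is managing only the extra interfaces (LAN, SAN) with Puppet and leaving the one that talks to the puppetmaster alone. A rough sketch, assuming RHEL-style ifcfg files and made-up addresses:

    # Sketch: static address on a secondary interface as a plain file;
    # actually bouncing the interface would still be a manual (or exec) step
    file { '/etc/sysconfig/network-scripts/ifcfg-eth1':
      ensure  => file,
      content => "DEVICE=eth1\nBOOTPROTO=none\nIPADDR=10.20.30.40\nNETMASK=255.255.255.0\nONBOOT=yes\n",
    }

I'm not sure whether that's a good idea or exactly the kind of thing that belongs in provisioning instead.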
What about IPMI? You can configure most, if not all, of it with ipmitool, saving you a trip to the console (physical, serial-over-LAN, remote KVM, whatever), so it could be automated with Puppet. But re-checking its state at every Puppet agent run doesn't sound cool to me, and basic lights-out access to the system is something I'd like to have in place before doing anything else.
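Something like the following is what I mean, and it shows exactly what bugs me: even with a guard, the agent ends up poking the BMC on every single run (channel number and address are invented for the example):

    # Sketch: setting the BMC address via ipmitool, guarded so the set
    # command only fires when the current value differs
    exec { 'ipmi_lan_ipaddr':
      command => 'ipmitool lan set 1 ipaddr 10.0.99.10',
      unless  => "ipmitool lan print 1 | grep -q '10.0.99.10'",
      path    => ['/usr/bin', '/usr/sbin', '/bin', '/sbin'],
    }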
Another whole story is installing updates. I'm not going to get into that specific topic here; there are already many questions about it on SF and many different philosophies among sysadmins. Myself, I decided not to let Puppet update things (e.g. only ensure => installed) and to keep doing updates manually, as we're already used to, leaving the automation of this task for a later day when we're more confident with Puppet (e.g. by adding MCollective to the mix).
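In manifest terms that just means sticking to this pattern (the package name is only an example):

    # Make sure the package is there, but never let Puppet pull in a newer version
    package { 'openssh-server':
      ensure => installed,   # deliberately not 'latest'; updates stay manual
    }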
Those are just a couple of examples that come to mind right now. Is there any aspect of the system that should be kept out of Puppet's reach? Or, to put it another way, where is the line between what should be set up at provisioning time and "statically" configured on the system, and what should be handled through centralized configuration management?