I would recommend either using a configuration management system, such as Puppet or CFEngine, or at least storing the configurations centrally in a single repository and pulling them out to all of the web servers.
For the configuration management solution, you can either specify whole files that must exist and where to get the canonical copy on the configuration management server, or you can specify the parameters for those files in the configuration management language, which provides an abstraction layer and simplifies introducing new configurations correctly.
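For example, here is a minimal sketch of the whole-file approach in Puppet, wrapped in a one-off shell invocation so it is self-contained; the module name, file paths, and service name are assumptions for illustration, not anything from your environment:

```sh
# Hypothetical example: write a one-off manifest and apply it locally.
# In a real deployment the master would serve this resource instead.
cat > /tmp/apache-config.pp <<'EOF'
# Pull the canonical httpd.conf from the configuration management server
file { '/etc/httpd/conf/httpd.conf':
  ensure => file,
  owner  => 'root',
  group  => 'root',
  mode   => '0644',
  source => 'puppet:///modules/apache/httpd.conf',
  notify => Service['httpd'],   # reload Apache when the file changes
}

service { 'httpd':
  ensure => running,
  enable => true,
}
EOF

puppet apply /tmp/apache-config.pp
```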
For simply maintaining and distributing files centrally, you would probably want to check the config files into version control software, such as CVS or SVN. From there, there are two fairly straightforward ways to get these configurations onto all of your web servers.
- You could then instruct your web servers to pull directly from the version control tool (`cvs co` or `svn checkout`); see the checkout sketch after this list
- Alternately, you could do a little more work to build a more robust, scalable, and reusable solution (each step is sketched after this list):
  - script the build of an RPM containing all of the Apache configuration files (or the equivalent for your OS)
  - run a yum repo on the version control server (or the equivalent for your OS)
  - then simply instruct your web servers to perform a `yum update my-apache-configs` (or the equivalent for your OS)
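For the first option, the pull itself is a one-liner; a sketch, assuming a hypothetical repository URL and a cron-driven update:

```sh
# Initial checkout of the canonical configs (repository URL is hypothetical)
svn checkout http://vcs.example.com/svn/apache-configs /etc/httpd/conf

# Thereafter, pull changes periodically, e.g. from cron
# (this reloads Apache on every run, changed or not; good enough for a sketch):
# */15 * * * * svn update -q /etc/httpd/conf && apachectl graceful
```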
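For the packaged route, the RPM can be a thin wrapper around a checkout of the repository. A minimal sketch of a spec file follows; the package name, paths, and version are made up, and in practice you would script bumping Release on each change so yum sees an update:

```sh
cat > my-apache-configs.spec <<'EOF'
Name:      my-apache-configs
Version:   1.0
Release:   1
Summary:   Canonical Apache configuration for the web tier
License:   Internal
BuildArch: noarch
Requires:  httpd, mod_ssl

%description
Apache configuration files exported from version control.

%install
mkdir -p %{buildroot}/etc/httpd/conf.d
# copy from a fresh checkout of the repository (path is hypothetical)
cp /path/to/checkout/*.conf %{buildroot}/etc/httpd/conf.d/

%files
# updates replace the file, saving local edits as .rpmsave
%config /etc/httpd/conf.d/*.conf
EOF

rpmbuild -bb my-apache-configs.spec
```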
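Serving those RPMs as a yum repo takes little more than createrepo and a web server on the version control host; the paths here are assumptions:

```sh
# On the repo server: publish the built RPMs over HTTP
mkdir -p /var/www/html/repo
cp ~/rpmbuild/RPMS/noarch/my-apache-configs-*.rpm /var/www/html/repo/
createrepo /var/www/html/repo
```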
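On the client side, each web server needs a one-time .repo file pointing at that URL (hypothetical here); after that, the update command from the last step works as-is:

```sh
# On each web server: register the internal repo once
cat > /etc/yum.repos.d/internal.repo <<'EOF'
[internal]
name=Internal configuration packages
baseurl=http://vcs.example.com/repo
enabled=1
gpgcheck=0
EOF

# then routine updates pick up new config packages
yum update my-apache-configs
```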
The VCS-only solution is easiest to set up and will work across operating systems. The package-repository solution is a little harder to set up, but it paves the way for packaging and distributing configurations, code, and scripts of all sorts, and it more closely aligns with the OS vendor's methodology.
The other nice thing about the package-repository solution is that you can define dependencies and groups of packages. This means you could make my-apache-configs depend on httpd and mod_ssl. You could then create an empty package called something like company_com-web_server that depends on my-apache-configs, my-ssl-certificates, and any other packages specific to your company. To set up a new web server instance, put a freshly installed server behind the load balancer (add your yum repo to the kickstart), issue a `yum -y install company_com-web_server`, walk away for a coffee, and come back to a ready-to-roll web server instance.
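A sketch of that empty meta-package: it ships no files and exists only for its Requires: lines (my-ssl-certificates is assumed to exist as its own package):

```sh
cat > company_com-web_server.spec <<'EOF'
Name:      company_com-web_server
Version:   1.0
Release:   1
Summary:   Meta-package pulling in everything our web servers need
License:   Internal
BuildArch: noarch
Requires:  my-apache-configs, my-ssl-certificates

%description
Empty package; installing it drags in the full web server stack
through its dependency chain.

%files
EOF

rpmbuild -bb company_com-web_server.spec
```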
===== EDIT =====
The value of this mechanism is that it creates a loosely coupled system. If the configuration management server or the yum repo goes offline, you lose the ability to reconfigure, but the web servers stay up. Even in that case, you could manually replicate changes to all of the machines and check the changes in by hand when the repo comes back up (a sketch follows). Using shared storage (NFS, a clustered filesystem, etc.) would instead create a single point of failure.
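For the manual fallback, something like the following works, with a hypothetical host list; you push the change by hand now and reconcile with version control later:

```sh
# Emergency push while the repo is down (hostnames are placeholders)
for host in web01 web02 web03; do
  rsync -a /etc/httpd/conf/httpd.conf "${host}:/etc/httpd/conf/"
  ssh "${host}" 'apachectl graceful'
done

# When the repo is back, check the same change in by hand, e.g.:
# svn commit -m "recording emergency config change" httpd.conf
```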