
Essentially, my question is about automating software package deployments on Solaris 10.

Specifically, I have a set of software components in tar files that run as daemon processes after being extracted and configured on the host. Like pretty much any server-side software package out there, I need to ensure that a set of prerequisites is met before extracting and running the software. For example:

  • Checking that certain users exist and that they are associated with one or more user groups. If not, then create them and their group associations.

  • Checking that target application folders exist and if not, then create them with preconfigured path values defined when the package was assembled.

  • Checking that those folders have the appropriate permissions and ownership for a certain user. If not, then set them.

  • Checking that a set of environment variables is defined in /etc/profile, pointing to predefined path locations, added to the general $PATH environment variable, and finally exported into the user's environment. Other files to check include /etc/services and /etc/system.
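For context, here is roughly what I mean, sketched as idempotent shell functions. All of the names used here (appgrp, appuser, /opt/myapp, MYAPP_HOME) are made-up placeholders:

```shell
#!/bin/sh
# Sketch of idempotent prerequisite checks. All names are hypothetical.

# Create a group only if it does not already exist.
ensure_group() {
    getent group "$1" >/dev/null 2>&1 || groupadd "$1"
}

# Create a user with the given primary group only if the user is missing.
ensure_user() {
    id "$1" >/dev/null 2>&1 || useradd -g "$2" -m "$1"
}

# Create a directory if missing, then enforce ownership and mode.
ensure_dir() {
    [ -d "$1" ] || mkdir -p "$1"
    chown "$2" "$1"
    chmod "$3" "$1"
}

# Append a line to a file only if it is not already present.
# (Solaris /usr/bin/grep lacks -q, so redirect output instead.)
ensure_line() {
    grep "^$2\$" "$1" >/dev/null 2>&1 || echo "$2" >> "$1"
}

# Example invocation (would require root on a real host):
# ensure_group appgrp
# ensure_user  appuser appgrp
# ensure_dir   /opt/myapp appuser:appgrp 750
# ensure_line  /etc/profile 'MYAPP_HOME=/opt/myapp; export MYAPP_HOME'
```

Running any of these twice leaves the system unchanged, which is the property I'm after.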

Obviously, doing this by hand across many boxes (the goal in question) is slow and error prone.

I believe a better alternative is to somehow automate this process. So far I have thought about the following options, and discarded them for one reason or another.

1) Traditional shell scripts. I've only ever troubleshot these before, and I don't really have much experience writing them. These would be my last resort.

2) Python scripts using the pexpect library for analyzing system command output. This was my initial choice, since the target Solaris environments have it installed. However, I want to make sure that I'm not reinventing the wheel again :P.

3) Ant or Gradle scripts. They may be an option, since the boxes also have Java 1.5 installed, and the fileset abstractions can be very useful. However, they may fall short when it comes to checking and setting users and folder permissions.

It seems obvious to me that I'm not the first person in this situation, but I don't seem to find a utility framework geared towards this purpose. Please let me know if there's a better way to accomplish this.

I thank you for your time and help.

1 Answer


You probably want to use change automation for this, e.g., Puppet, Chef, CFEngine, Bcfg2, or whatever.

Personally, I've used Puppet on Solaris for the last three years, and have been quite happy with the decision. We use it to manage every aspect of our systems administration: Users, files, cron jobs, ZFS filesystems, NFS mounts, Zones, services (via SMF), and so on. It's quite useful.
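To give a flavor of what that looks like, the user and directory checks from your question map directly onto Puppet resources. This is just a sketch; the names and path are placeholders:

```puppet
# Hypothetical resource declarations; names and paths are made up.
group { 'appgrp':
  ensure => present,
}

user { 'appuser':
  ensure => present,
  gid    => 'appgrp',
}

file { '/opt/myapp':
  ensure => directory,
  owner  => 'appuser',
  group  => 'appgrp',
  mode   => '0750',
}
```

Puppet converges the system to this state on every run, so you get the "check, and fix if wrong" behavior for free.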

The Puppet SVR4 package provider works, but it lacks the ability to pull files remotely (e.g., via HTTP). You can work around this by writing a function which installs your packages for you. If the packages are locally available (via NFS), the provider should just work.
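The NFS case can look something like the following; the package name and source path here are invented for illustration:

```puppet
# Hypothetical: install an SVR4 package from an NFS-mounted path, since
# the provider cannot fetch over HTTP itself.
package { 'MYAPPpkg':
  ensure    => installed,
  source    => '/net/pkghost/export/pkgs/MYAPPpkg.pkg',
  adminfile => '/var/sadm/install/admin/default',
}
```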

In addition to Solaris 10, we use the same Puppet repo to manage our Solaris Express and Debian Linux systems.

I wrote a post a while back which might be helpful: http://mirrorshades.net/post/196593566

bdha