
I maintain many servers and have the following requirements:

  1. Run a unix process at a given time
  2. Run a unix process at boot time and keep it running (in case it dies)
  3. Disable a running process

I am using cron right now, but it is time-consuming to maintain the local crontabs on many different servers.

Is there a distributed cron-like mechanism? It would be nice to have the cron config stored in a database so I can access it through a web interface.

  • I found this link, which has a listing of open-source configuration management systems: http://en.wikipedia.org/wiki/Comparison_of_open_source_configuration_management_software – Prashanth Ellina May 21 '12 at 09:18

5 Answers


Use a configuration management system, like Chef or Puppet. Have the configuration management server push out the appropriate cron/Upstart/monit/whatever configurations to the various nodes, depending on their roles. Yes, it's probably more of an investment to set this up than spewing crontab files all over the place manually, but you will wind up with a centralized point of control over all the servers you're trying to manage.

cjc
  • I tried reading about Chef and Puppet on their respective sites. These definitely solve part of my problem which is distributed configuration. It is not clear though whether these tools can manage the lifecycle of distributed processes too. Any input regarding this please? – Prashanth Ellina May 20 '12 at 05:14
  • I would use, say, Chef to push a monit or Upstart configuration to all your nodes to start up the process and keep it running. Those are the right tools to manage process lifecycle, with Chef helping to get that configuration out to the machines that you want to use it on. If you want to stop a process (which is managed through Monit, for example) on a set of nodes, you can run the Chef command "knife ssh role:appserver sudo monit stop application" or something like that. Chef will do essentially a cluster ssh to all appropriate nodes and run that command. – cjc May 20 '12 at 10:45

For cron updates, you could place a copy of your cron files on a web server and have your machines update their crontabs based on the contents of the files stored there. This would allow you to do cron updates in one place. The downside is that all of your computers would only be as secure as the web server, since anyone who can manipulate its contents could then run arbitrary commands on any of your systems.

chuck
  • Rather than a web-server, why not give them SSH keys for a restricted account on another machine (to enable automatic login), and use rsync? That would avoid man-in-the-middle attacks on the updated crontab, and also enable the various servers to download only the changes in the crontab. – Darael May 17 '12 at 12:18
  1. Use cron

  2. Use inittab

  3. If the process is managed by inittab, edit inittab; if it is managed by the system startup scripts (/etc/init.d and /etc/rc[0-6].d), use chkconfig or service

From man inittab:

   respawn
          The  process  will  be  restarted  whenever  it terminates (e.g.
          getty).
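For requirement 2, a respawn entry in /etc/inittab would look something like this (the id field, runlevels, and program path are illustrative, not prescriptive):

```
# Keep myapp running in runlevels 2-5; init restarts it whenever it exits.
ap:2345:respawn:/usr/local/bin/myapp
```

After editing the file, telinit q tells init to re-read /etc/inittab without a reboot.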

To maintain these in a multi-system replicated system, I can only think of something like rsync. I don't know if something like NIS/YP would be usable. You may be looking for a more enterprise-level solution.

RedGrittyBrick

You can use Puppet (a centralized configuration management tool) to manage your cron jobs. There is documentation here (search for "cron" on the page): http://docs.puppetlabs.com/references/stable/type.html
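A sketch of Puppet's built-in cron resource type, which writes entries into the target user's crontab (the job name, command, and schedule below are made-up examples):

```puppet
# Illustrative only: run a hypothetical cleanup script nightly at 02:00 as root.
cron { 'nightly-cleanup':
  command => '/usr/local/bin/cleanup.sh',
  user    => 'root',
  hour    => 2,
  minute  => 0,
}
```

Because the resource lives in your Puppet manifests, the schedule is version-controlled and pushed out centrally, which addresses the "maintain cron in one place" requirement.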

NoNoNo

ucron seems to fit your requirements fairly well, though I haven't tried it myself: http://siag.nu/ucron/

Ztyx