I have some work that needs to be done on 50+ servers. The first step is to check out an updated version of some source code onto a shared directory (assume all servers have the shared drive mounted). The second is to perform some work on each of the servers.
I'd prefer to have these two scripts run on each of the servers. All 50+ servers are cloned from a single disk image and it's not practical for me to customize any of them.
When the 50 servers run the first script, I want only the first one that tries to run it to actually run it; I want the others to simply exit. The server that actually runs the script should update the shared directory and then exit. Then, later, the second script will run and perform the work on all servers based on the updated code that the first server fetched.
What's the best way to do this? Can I reliably have the first script run on one server and create a file or something that acts as a 'semaphore' or 'lock' of some sort that keeps the other servers away?
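To make it concrete, here's a rough sketch of what I have in mind. The lock path and the checkout command are made up, and I don't know whether `mkdir` is actually atomic on a shared mount -- that's part of what I'm asking:

```bash
#!/usr/bin/env bash
# Hypothetical sketch -- LOCKDIR and the checkout command are placeholders.
LOCKDIR=/shared/checkout.lock

# On a local filesystem, mkdir either creates the directory or fails
# atomically, so exactly one server should win this race. Whether that
# guarantee holds on our particular shared mount is part of the question.
if ! mkdir "$LOCKDIR" 2>/dev/null; then
    exit 0   # another server already holds the lock; just exit
fi

# This server won the race: update the shared copy of the source.
svn update /shared/source    # or git pull, p4 sync, etc.

rmdir "$LOCKDIR"
```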
Making this more complicated is that I'm thinking of having the scripts run from identical cron files on each of the servers -- meaning every server could try to run the scripts at the same time, assuming their clocks are set identically.
I'm hoping these will be run from bash scripts. Does this make sense as an approach?
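For concreteness, I'm picturing something like this in each server's (identical) crontab; the times and script names here are made up:

```
# Hypothetical crontab entries -- times and paths are placeholders.
0 2 * * *  /usr/local/bin/update-shared-source.sh  # step 1: only the lock winner checks out
30 2 * * * /usr/local/bin/do-work.sh               # step 2: runs on every server
```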
EDIT: Updated based on questions:
We don't want every server to try to check out its own copy of these files -- they are in a multi-GB source code repository, and having 50+ simultaneous checkouts of that code would be difficult for our source control server (and not scalable to 100+ servers).
Adding a cronjob to the 50+ servers is not that big of an issue, but adding another customized server with its own configuration is harder. We're already cloning the 50 servers -- maintaining a separate server just to check out the latest source code for the 50+ servers to access seems wasteful and will add more overhead than just adding a script to our current servers.