7

I regularly do things where I loop through a list of servers to take some action. For example:

for s in `cat servers.txt` ; do
    echo; echo $s
    ssh $s 'do something'
done

I'm wondering (from a shell perspective) if there's an easier way of doing this than `cat servers.txt`.

Yes, I know about tools like mcollective, capistrano, etc. - I'm often doing this to fix up mcollective problems :-)

ewwhite
Sonia Hamilton
  • oops, easy to read over that comment "from a shell perspective"... forget about parallel-ssh. – hookenz Jul 07 '14 at 02:13
  • Can you clarify what you mean by "easier/better"? If by this you mean you would like to *parallelize* it, yes you can use xargs to parallelize the ssh connections. – Michael Martinez Jul 16 '14 at 00:03
  • 1
    "Easier" - less typing, or a "better/more correct" way. I hadn't considered doing it in parallel - how would you do that using xargs? I usually use multixterm (one of the scripts in the expect package). – Sonia Hamilton Jul 16 '14 at 06:13
  • Ah, didn't read that first comment ... still, I feel parallel ssh is a valid answer, since it's run from a shell.. but yeah, not really "from a shell perspective" – Andrew Jul 16 '14 at 07:11
  • 2
    @SoniaHamilton: no matter which method, it's going to be about the same amount of typing. The only improvement I would suggest is to add within the loop a line before your ssh line, such as this: `trap 'continue 3';`. This will allow you to use `Ctrl-\\` to kill the current ssh session without killing the loop. – Michael Martinez Jul 16 '14 at 17:09
  • @MichaelMartinez the trap is nice, and I wasn't aware of the 'continue N' syntax in bash - great. – Sonia Hamilton Jul 17 '14 at 00:24
  • @SoniaHamilton - Can you please explain what you mean by "from a shell perspective" ? – hookenz Jul 17 '14 at 00:53
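
For reference, here's what that trap trick can look like in a complete loop. This is only a sketch, not code from the thread - since there's just one loop, a plain `continue` (no loop level) trapped on SIGQUIT is enough, so Ctrl-\ kills the current ssh session while the loop carries on with the next server:

#!/bin/bash
# Ctrl-\ sends SIGQUIT to the foreground ssh (which dies) and to this
# script, whose trap then moves on to the next server in the list.
for s in $(cat servers.txt); do
    trap 'continue' QUIT    # on Ctrl-\, skip to the next iteration
    echo; echo "$s"
    ssh "$s" 'do something'
done
trap - QUIT                 # restore default SIGQUIT handling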

7 Answers

7

I use ClusterSSH.
It opens up a lot of small shells, and you can type into all of them at the same time. Really handy when you want to execute the same command on a lot of servers but still see the output.
I use it like this: clusterssh $(~/get-servers.sh), but obviously you can do something like clusterssh $(cat servers.txt)
The result looks like this:

(screenshot: a cluster of small terminal windows opened by ClusterSSH, all receiving the same input)

It's also available as a Debian package.

Nitz
5

My quick and dirty... where servers.txt has a series of hosts or IPs, one-per-line.

#!/bin/bash

SERVER_LIST=/path/to/servers.txt

while read -r REMOTE_SERVER
do
        # -n keeps ssh from reading stdin, which would otherwise swallow
        # the rest of the server list after the first host
        ssh -n "$REMOTE_SERVER" "do_something_cool"
done < "$SERVER_LIST"
ewwhite
  • Does this really satisfy her requirement for an "easier way"? All you're doing is replacing "cat" with a redirect operator "<". – Michael Martinez Jul 15 '14 at 23:49
  • Yes... because she's already aware of the commonly available orchestration tools. Sometimes something like this is needed to fix Puppet or Mcollective issues. – ewwhite Jul 15 '14 at 23:51
  • 1
    She's looking for an "easier" way of doing it from within the shell. You only give her an alternate method, not an easier method. I suspect she isnt' clear herself on what she means by "easier". She needs to explain what she thinks are the flaws or shortcomings of her method ... – Michael Martinez Jul 15 '14 at 23:53
  • 5
    @MichaelMartinez I dunno, man. The OP doesn't need to do anything at this point... I gave the example of what **I** use for this task. And usually the task is not something I wish to do in parallel. You're entirely welcome to post your own question and answer if you feel that the original post here is misleading or the answer(s) insufficient. – ewwhite Jul 16 '14 at 12:42
5

To run a simple task across a wide range of servers without reaching for any of the tools designed for this purpose, whether or not they require infrastructure to be set up beforehand, you can use a simple shell script called mussh, distributed as a package in many distributions.

You can call it with a list of hosts or a list of commands, store both in files, and use quite a few more options, like ssh-agent integration, proxy support, ... Check the manpage for all the details.

An example could be as simple as:

$ mussh -H host_list.txt -C command_list.txt
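
For illustration, those two files could be as simple as one entry per line. These contents are made up for the example - mussh runs every command from the command file on every host in the host list:

$ cat host_list.txt
web01.example.com
web02.example.com

$ cat command_list.txt
uptime
df -h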
dawud
4

Please do yourself a favor and use something designed for this. You already know about mcollective, but you and I both know that it needs some infrastructure to work. As do puppet and chef.

clusterssh, parallel ssh and dancer shell are small, simple improvements over a shell for loop. They don't need more infrastructure.

But there's also ansible, which lets you do that, but also write reusable "playbooks" with several steps. It does need python installed in addition to sshd, but in practice I've never had to install it separately; it's always been available.

Ansible is the only configuration management system I've tried that also works well as a deployment and orchestration tool (puppet needs mcollective and maybe capistrano/fabric for that, ...)

(Yes, Puppet and Chef and all the rest can be run without central servers, but you need to install packages on the hosts to manage, which ansible doesn't need)
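
As a quick sketch of the ad-hoc side (assuming ansible is installed, and that servers.txt is a plain one-host-per-line file, which ansible accepts directly as an inventory):

ansible all -i servers.txt -m command -a 'uptime'

That runs uptime over ssh on every host in the file, no playbook required.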

ptman
4

Here is how to use xargs to parallelize these ssh sessions:

cat servers.txt | xargs -IH -n1 -P0 ssh H 'some command to run'

You can also add the -n or -f options to ssh to redirect its stdin from /dev/null or to put the session in the background. If you have to type a password for each host then this doesn't help you much, but if you are using ssh keys it works very nicely.
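
For example, a variant with each session's stdin detached and the parallelism capped at ten (a sketch assuming GNU xargs; `{}` is simply a more conventional replacement string than `H`):

xargs -I{} -P10 ssh -n {} 'uptime' < servers.txt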

Michael Martinez
2

Have you considered using parallel-ssh? https://code.google.com/p/parallel-ssh/

I generally fall back to using it if/when our mco or puppet setup is broken. It is another dependency to manage, but totally worth it if you have a large fleet of boxen - with the added bonus of being able to choose how many machines to work on in tandem/parallel, or even one at a time, as you're used to with bash.
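
As a sketch of that knob (assuming the parallel-ssh binary from the project linked above: -h takes the host file, -p caps how many hosts are worked on at once, -i prints each host's output inline):

parallel-ssh -h servers.txt -p 1 -i 'uptime'

With -p 1 it behaves like the one-at-a-time bash loop; raise the number to fan out.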

Andrew
0

Undeleted my answer, although it's not really clear what you're asking for.

There is a project called parallel SSH which provides parallel versions of ssh, scp and rsync.

The advantage is that you don't need to write any shell script: you just give it the list of servers to run the command on, and it does it in parallel.

This is great if you've got a long-running set of ssh commands to do, as running them all in parallel offers a potentially big speed-up.

e.g.

parallel-ssh -h myhosts.txt "echo 'hello world'"
hookenz