
Say that I have an application running on one PC that sends commands via SSH to another PC on the network (both machines running Linux).

For example, every time something happens on #1, I want to run a task on #2. In this setup, I have to create a new SSH connection for every single command.

Is there any simple way to do this with basic Unix tools, without programming a custom client/server application? Basically, all I want is to establish a connection over SSH and then send one command after another.

Jakub Arnold

7 Answers


Automatic Persistency Using OpenSSH

You can also use the ControlMaster feature of OpenSSH, which opens a Unix domain socket for the first connection and reuses that connection for all subsequent calls.

To enable the feature, either use -M as the command-line switch or enable the ControlMaster option in your ~/.ssh/config (the per-user client configuration file; /etc/ssh/ssh_config is the system-wide one), e.g.:

ControlMaster auto

Additionally, you should set the ControlPath by adding the following lines to your ~/.ssh/config:

Host *
   ControlPath ~/.ssh/master-%r@%h:%p

To maintain a persistent connection to a host, e.g. if you want to run a script which needs to establish many SSH connections to the host, none of which persists over the whole lifetime of the script, you can start a silent master connection in advance using:

ssh -MNf remotehost
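
Here -M puts ssh into master mode, -N skips running a remote command, and -f sends the process to the background. As a sketch of the full lifecycle (remotehost is just the example name from above), subsequent connections reuse the socket, and the standard -O control commands manage the master:

ssh -O check remotehost   # verify the master connection is still up
ssh remotehost uptime     # reuses the existing socket, no new handshake
ssh -O exit remotehost    # close the shared connection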

Cheerio, nesono

nesono
  • I signed into server fault just to +1 vote your answer. Yours worked! – Keval Domadia Aug 31 '12 at 10:34
  • `ssh -N` starts a session without a remote command; there seems to be no corresponding option in ssh_config. – Jokester Mar 16 '13 at 11:07
  • You may also want to configure `ControlPersist 480m` (adjust time to preference) to keep the connection alive for some time even after the SSH session has ended, to avoid having to re-enter your password/MFA each time. – erwaman Jul 03 '20 at 11:06

Not sure if it can be used in production, but you can do something like this:

Create a file on #1:

1> touch /tmp/commands

Then run this command:

1> tail -f /tmp/commands | ssh username@x.x.x.x

That will open the file /tmp/commands and start sending its contents to server x.x.x.x (#2), where they are executed line by line.

Now, every time something happens on #1, do:

1> echo "ls -l" >> /tmp/commands

or

1> echo "reboot" >> /tmp/commands

Whatever you append to /tmp/commands will be sent to #2 and executed there. Just make sure you do not run anything interactive, or handle that case somehow.
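
To make the append step less error-prone, you could wrap it in a small helper; a minimal sketch (the run_remote name is just illustrative):

# on #1, with the tail -f /tmp/commands | ssh ... pipeline from above running
run_remote() {
    echo "$*" >> /tmp/commands   # append one command line to the queue
}

run_remote ls -l
run_remote uptime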

drakyoko
Vitaly Nikolaev

In /etc/ssh/ssh_config (or your per-user ~/.ssh/config), add:

# Send keep alive signal to remote sshd
ServerAliveInterval 60
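
Note that this only keeps an already-established connection from being dropped by idle timeouts; it does not by itself reuse one connection for many commands. A minimal per-host sketch (ServerAliveCountMax is an addition here; OpenSSH's default is 3):

Host *
    # send a keep-alive probe every 60 seconds
    ServerAliveInterval 60
    # give up after 3 unanswered probes
    ServerAliveCountMax 3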
Sandra

If you run into this sort of thing a lot, try GNU Parallel. It is like dsh (distributed shell) but has some neat features, such as counting semaphores, and it is actively maintained.

From the documentation:

EXAMPLE: GNU Parallel as queue system/batch manager

GNU Parallel can work as a simple job queue system or batch manager. The idea is to put the jobs into a file and have GNU Parallel read from that continuously. As GNU Parallel will stop at end of file we use tail to continue reading:

echo >jobqueue; tail -f jobqueue | parallel

To submit your jobs to the queue:

echo my_command my_arg >> jobqueue

You can of course use -S to distribute the jobs to remote computers:

echo >jobqueue; tail -f jobqueue | parallel -S ..
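
If I recall the documentation correctly, the special `-S ..` reads the list of remote servers from ~/.parallel/sshloginfile; you can also name hosts explicitly (the server names here are placeholders):

echo >jobqueue; tail -f jobqueue | parallel -S server1,server2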

There are many great examples in the documentation; these just scratch the surface. Here is a cool one.

EXAMPLE: Distributing work to local and remote computers

Convert *.mp3 to *.ogg running one process per CPU core on local computer and server2:

parallel --trc {.}.ogg -j+0 -S server2,: \
    'mpg321 -w - {} | oggenc -q0 - -o {.}.ogg' ::: *.mp3
Allen

Yes, it is possible with a pipe:

echo 'echo "hi"' | ssh -i my_private_key tester@host2

This will execute the command echo "hi" on host2.

You just have to write a program that prints the commands (don't forget the ;) and then pipe its output to ssh.
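
A minimal sketch of such a generator, reusing the key and host from the example above (any program that writes commands to stdout would do):

# emit commands over time; the remote shell runs each line as it arrives
( echo 'uptime;'; sleep 5; echo 'df -h;' ) | ssh -i my_private_key tester@host2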

JMW

You might want to use a program like dsh (Distributed SHell), which is made to do just that :). After configuring it with host names and setting up public-key auth, you can use it to run commands on multiple machines, either in series ("run on machine a, then run on machine b") or in parallel ("run on all machines at the same time"). Or just make a script:

#!/bin/sh
# run the given command on the remote machine, then run it locally
ssh machine -- "$@"
exec "$@"
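
Usage would then be, e.g. (assuming the script is saved as both.sh and machine resolves via your SSH config):

./both.sh uptime    # runs uptime on the remote machine, then locally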
XANi

DISCLAIMER: I can't test this right now, as I'm on a Windows machine with bash but without ssh.

In a bash script, you can do something like this:

exec 4> >(ssh --ssh-options-here user@host)

And then to send commands, write them to FD 4.

echo 'echo hi' >&4

If you do it this way, you can't access the results of the commands easily, but based on your question it doesn't seem like you need to.
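
A slightly fuller sketch of the same idea (untested, per the disclaimer above; user@host is a placeholder):

# open FD 4 as a pipe into a long-lived ssh session
exec 4> >(ssh user@host)

echo 'echo hi' >&4    # each line is executed by the remote shell
echo 'uptime' >&4

exec 4>&-             # closing the FD ends the remote session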

user253751