5

I'm working on a platform that uses a lot of commands like: ssh login@server.com 'curl http://some_server/script.sh | bash'

This is really clean and handy for executing scripts remotely. However, I don't see any way to get the output/exit code of the script. Can anyone suggest something to make sure the script has been executed properly (from the point of view of the host launching the ssh command)?

Nic
rmonjo

5 Answers

7

As @Zoredache points out, ssh relays the status of the remote command as its own exit status, so error detection works transparently over SSH. However, two important points require special consideration in your example.

First, curl tends to be very lenient, treating many abnormal conditions as success. For example, curl http://serverfault.com/some-non-existent-url-that-returns-404 actually has an exit status of 0. I find this behavior counterintuitive. To treat those conditions as errors, I like to use the -fsS flags:

  • The --fail flag suppresses the output when a failure occurs, so that bash won't get a chance to execute the web server's 404 error page as if it were code.
  • The --silent --show-error flags, together, provide a reasonable amount of error reporting. --silent suppresses all commentary from curl. --show-error re-enables error messages, which are sent to STDERR.
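To see the difference these flags make, here is a self-contained sketch; the throwaway Python web server and port 8099 are arbitrary stand-ins for any URL that returns a 404.

```shell
# Start a throwaway local web server so the example needs no network
python3 -m http.server 8099 --bind 127.0.0.1 >/dev/null 2>&1 &
srv=$!
# wait briefly for the server to come up
for i in 1 2 3 4 5; do curl -s http://127.0.0.1:8099/ >/dev/null && break; sleep 1; done

curl -s http://127.0.0.1:8099/missing.sh >/dev/null
rc_plain=$?        # 0: the 404 error page counts as success

curl -fsS http://127.0.0.1:8099/missing.sh >/dev/null 2>&1
rc_fail=$?         # 22: --fail turns the HTTP error into a curl failure

kill "$srv" 2>/dev/null
echo "plain=$rc_plain fail=$rc_fail"
```

Without `--fail`, the 404 error page would have been piped straight into bash; with it, curl exits 22 and produces no output to execute.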

Second, you have a pipe, which means that a failure could occur in either the first or the second command. From the section about Pipelines in bash(1):

The return status of a pipeline is the exit status of the last command, unless the pipefail option is enabled (see The Set Builtin). If pipefail is enabled, the pipeline’s return status is the value of the last (rightmost) command to exit with a non-zero status, or zero if all commands exit successfully.

Side note: The bash documentation is relevant not because you pipe to bash, but because (I assume) it is your remote user's login shell, and would therefore be the program that interprets the remote command line and handles the execution of the pipeline. If the user has a different login shell, then refer to that shell's documentation.

As a concrete example,

( echo whoami ; false ) | bash
echo $?

yields the output

login
0

demonstrating that the bash at the end of the pipeline will mask the error status returned by false. It will return 0 as long as it successfully executes whoami.

In contrast,

set -o pipefail
( echo whoami ; false ) | bash
echo $?

yields

login
1

so that the failure in the first half of the pipeline is reported.


Putting it all together, then, the solution should be

ssh login@server.com 'set -o pipefail ; curl -fsS http://some_server/script.sh | bash'

That way, you will get a non-zero exit status if any of the following returns non-zero:

  • ssh
  • The remote login shell
  • curl
  • The bash at the end of the pipeline

Furthermore, if curl -fsS detects an abnormal HTTP status code, then it will:

  • suppress its STDOUT, so that nothing will get piped to bash to be executed
  • return a non-zero value which is properly propagated all the way
  • print a one-line diagnostic message to its STDERR, which is also propagated all the way
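Since the assembled command needs a reachable server to try out, here is a hedged local simulation: `fake_curl` is a hypothetical stand-in for a `curl -fsS` that hit an HTTP error (it fails with curl's status 22), and `pipefail` makes that status the one the remote shell, and therefore ssh, would relay back to the caller.

```shell
# fake_curl stands in for `curl -fsS` hitting a 404: it emits a line
# of "script" and then fails with curl's HTTP-error status, 22.
fake_curl() { echo 'echo hello from the remote script' ; return 22 ; }

set -o pipefail
fake_curl | bash    # bash runs the script and exits 0...
rc=$?               # ...but pipefail reports 22, the curl failure
set +o pipefail
echo "status seen by the caller: $rc"
```

Without `set -o pipefail`, `rc` would have been 0, the exit status of the `bash` at the end of the pipeline.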
200_success
1

That's a horrible hack. If you want remote execution, use something that does remote execution properly, such as func or mcollective.

Dennis Kaarsemaker
  • It certainly isn't perfect, but it is useful. I use a similar method to bootstrap my configuration management system. You certainly don't want to use it for everything, but it does have its uses. – Zoredache May 06 '13 at 20:40
  • Tend to agree with @Zoredache, however, interested to hear why this is a horrible hack ? – rmonjo May 06 '13 at 20:47
  • @user1437126, it is hacky if you don't have strong security in your setup. There are lots of ways that `curl url | bash` can fail and result in you trashing your system. If you run `curl http://remote | bash`, what happens if someone manages to MITM you and replaces what you were expecting on the remote with a script that does `rm -rf /`? What happens if the remote system is down? – Zoredache May 06 '13 at 20:56
  • Bootstrapping your config management should really be done by however you install your systems, e.g. kickstart for redhat. But ok, as far as reasons to use this horrible hack go, this one is about the only one I can agree with :) – Dennis Kaarsemaker May 06 '13 at 20:59
  • Ok, I see your point. In my setup, the machine hosting the scripts is the one making the ssh call. Access to the scripts is secured by a token, and the ssh connection does the rest. We chose this solution since maintaining script files is way easier than hardcoding scripts within code (passed to the remote server as strings in the ssh command). – rmonjo May 06 '13 at 21:07
  • @user1437126 transmitting a token in the clear doesn't provide any security (it's the equivalent of writing your PIN on the back of your bank card). You can mitigate a lot of the inherent suck by using `https://` (and verifying the server certificate), but this is still inherently a hackish solution -- I certainly wouldn't put it in production in my environment, but your needs (and requirements) may vary... – voretaq7 May 06 '13 at 21:52
1

When ssh returns, it should relay the exit code of the remote script as its own exit status.

Try ssh user@host 'echo "exit 2" | bash' ; echo $?. You should see a value of 2 returned.

Just write good error-checking into your script, and make sure it exits with useful error messages and non-zero exit codes for any failures.
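A hedged sketch of what that error-checking might look like inside script.sh: each failure mode gets its own non-zero exit code, so the calling host can tell them apart from `$?` alone. The helper name, codes, and paths are made up for illustration.

```shell
# check: run a command; on failure, report it and exit with the given code
check() {    # usage: check <exit-code> <command...>
    local code=$1; shift
    "$@" || { echo "failed ($code): $*" >&2 ; exit "$code" ; }
}

workdir=$(mktemp -d)
check 3 test -d "$workdir"        # 3 = work directory missing
check 4 touch "$workdir/marker"   # 4 = work directory not writable
rm -rf "$workdir"
status=0                          # reached only if every check passed
```

On the calling side, `ssh user@host 'curl ... | bash' ; echo $?` would then print 3, 4, or 0 depending on which step failed.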

Zoredache
0

That all depends on what script.sh does. If it's verbose (produces output), you should be able to see that output on STDOUT/STDERR in your ssh stream. If it doesn't, modify it to do so.
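If editing script.sh isn't convenient, one hedged alternative is to pipe it into `bash -x` instead of `bash`, so every command is traced to STDERR as it runs, and ssh forwards that trace back to you. A self-contained local sketch:

```shell
ok=0
script=$(mktemp)
echo 'echo hello' > "$script"

# bash -x echoes each command (prefixed with "+") to STDERR as it runs
bash -x "$script" >/dev/null 2>trace.log
grep -q '+ echo hello' trace.log && ok=1

rm -f "$script" trace.log
```

Over ssh, the equivalent would be `curl ... | bash -x`, with the trace arriving on your local STDERR.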

tink
0

Here are a couple of options that you could try.

One way is to run echo $? just after the script has executed:

# ssh login@server.com 'curl http://some_server/script.sh | bash; echo $?' 

This will give you the exit status of the script. Alternatively, only print a message when the exit status is 0:

# ssh login@server.com 'curl http://some_server/script.sh | bash && echo "Script Completed!"'

This will display "Script Completed!" on successful execution. Or, only display the error code when the exit status is not 0:

# ssh login@server.com 'curl http://some_server/script.sh | bash || echo "Script Failed: $?"'

This will display "Script Failed: " followed by the exit code. As @Zoredache suggested, giving your script multiple distinct exit statuses will help you identify what went wrong.
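One caveat with the `echo $?` variant, shown with a local simulation (no ssh needed): the script's status is reported as text, but the outer command itself then exits 0, the status of the `echo` -- worth remembering if you also branch on ssh's own exit code from the calling host.

```shell
# The inner echo prints the pipeline's status; the outer status is echo's
sh -c '(echo "exit 2") | bash; echo $?' > reported.txt
outer=$?
inner=$(cat reported.txt)
rm -f reported.txt
echo "reported=$inner outer=$outer"    # reported=2 outer=0
```

If you need ssh itself to carry the failure, prefer the `||` variant or the pipefail approach from the accepted answer.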

Danie