86

From my script's output, I want to capture ALL the log data, including error messages, and redirect it all to a log file.

I have a script like the one below:

#!/bin/bash
(
echo " `date` : part 1 - start "
ssh -f admin@server.com 'bash /www/htdocs/server.com/scripts/part1.sh logout exit'
echo " `date` : sleep 120"
sleep 120
echo " `date` : part 2 - start"
ssh admin@server.com 'bash /www/htdocs/server.com/scripts/part2.sh logout exit'
echo " `date` : part 3 - start"
ssh admin@server.com 'bash /www/htdocs/server.com/scripts/part3.sh logout exit'
echo " `date` : END"
) | tee -a /home/scripts/cron/logs

I want to see all actions logged in the file /home/scripts/cron/logs,

but I only see what I put after the echo commands.

How can I check in the logs whether an SSH command was successful?

I need to gather all the logs. I need them to monitor the result of every command in my script, to better analyse what's going on when the script fails.

BlueMark
  • 999
  • 1
  • 8
  • 8
  • 1
    You can also use the "script" to perform this. See this post: [here][1] [1]: http://stackoverflow.com/questions/5985060/bash-script-using-script-command-from-a-bash-script-for-logging-a-session – xX0v0Xx Jun 03 '14 at 12:57
  • You must be careful with errexit - I noticed it is ignored, or the error does not exit the script, e.g. if you have something like this in the script: `command || dosmthg`. When `command` fails, the script does not exit immediately even with errexit set, but just runs `dosmthg` and continues the script –  Aug 10 '16 at 09:31
  • If you're expecting a prolonged session with user input, `gnu screen` can also do this with the `-L` option: https://stackoverflow.com/questions/14208001/save-screen-program-output-to-a-file – aggregate1166877 Jan 23 '22 at 02:20

9 Answers

128

I generally put something similar to the following at the beginning of every script (especially if it'll run as a daemon):

#!/bin/bash
exec 3>&1 4>&2
trap 'exec 2>&4 1>&3' 0 1 2 3
exec 1>log.out 2>&1
# Everything below will go to the file 'log.out':

Explanation:

  1. exec 3>&1 4>&2

    Saves the current stdout and stderr as file descriptors 3 and 4 so they can be restored later, or written to directly in order to reach whatever stdout and stderr pointed to before the following redirect.

  2. trap 'exec 2>&4 1>&3' 0 1 2 3

    Restores the file descriptors on exit and on particular signals (0 = EXIT, 1 = HUP, 2 = INT, 3 = QUIT). Not generally necessary, since they should be restored when the sub-shell exits.

  3. exec 1>log.out 2>&1

    Redirect stdout to file log.out then redirect stderr to stdout. Note that the order is important when you want them going to the same file. stdout must be redirected before stderr is redirected to stdout.

From then on, to see output on the console (assuming that is where stdout originally pointed), you can simply redirect to &3. For example,

echo "$(date) : part 1 - start" >&3

will go to wherever stdout was directed, presumably the console, prior to executing line 3 above.
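
Applied to the script from the question, a minimal sketch of this pattern (using >> for append, per the comments below, so repeated cron runs don't truncate the log) might look like:

#!/bin/bash
exec 3>&1 4>&2
trap 'exec 2>&4 1>&3' 0 1 2 3
exec 1>>/home/scripts/cron/logs 2>&1
# Everything below, including ssh's stderr, now lands in the log:
echo "$(date) : part 1 - start"
ssh admin@server.com 'bash /www/htdocs/server.com/scripts/part1.sh logout exit'
echo "$(date) : END" >&3   # >&3 sends this line to the original stdout (the console) instead of the log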

nicerobot
  • 1,406
  • 1
  • 9
  • 6
  • 3
    Great KungFu here !!! - Even better is to use `trap 'exec 2>&4 1>&3' 0 1 2 3 RETURN`. The RETURN pseudo sigspec restores the file descriptors each time a shell function, or a script executed with the . or source builtins, finishes executing. For this (and other reasons), in my scripts I add as the last 2 lines: `return` & `exit 0` - Just my 2 cents, kudos to you @nicerobot ! – DavAlPi Mar 05 '15 at 15:25
  • There's a good amount of detail on logging for shell scripts via global variables of the shell. We can emulate a similar kind of logging in a shell script: http://cubicrace.com/2016/03/efficient-logging-mechnism-in-shell.html The post has details on introducing log levels like INFO, DEBUG, ERROR, and tracing details like script entry, script exit, function entry, function exit. – Piyush Chordia Mar 10 '16 at 04:56
  • The enumeration of `trap` signals looks slightly weird. Usually you'll want to include `15` as well. – tripleee Nov 04 '16 at 04:47
  • 2
    @PiyushChordia link fix: http://www.cubicrace.com/2016/03/efficient-logging-mechnism-in-shell.html – vigilancer May 17 '18 at 04:27
  • This recipe solves my problem with grace. I want: 1. all stdout + stderr to a file, 2. all stderr to console, 3. status messages to console – jchook Aug 13 '18 at 23:52
  • This solution also works well for cloud-init runcmd logging, since all runcmd lines are simply wrapped into a single sh script when run. – SharkWipf Apr 25 '19 at 14:26
  • I have made this work, but the logs are not appended to the file; the contents of the file are replaced. – radztech Nov 10 '19 at 22:08
  • 4
    @radztech Append is `>>`, so `exec 1>>log.out 2>&1`. – nicerobot Nov 11 '19 at 16:12
  • 1
    Would `trap 'exec 2>&4 1>&3' 0 1 2 3 15` be better? It handles SIGTERM – Daniel YC Lin Feb 21 '20 at 01:42
  • FYI, there is also `$BASH_XTRACEFD` though you must have Bash >= 4.1, see the [Bash documentation](https://www.gnu.org/software/bash/manual/html_node/Bash-Variables.html) You'll still need to open the file descriptor if you want it to go to a file – adam Jul 07 '21 at 15:17
  • For an example usage of `$BASH_XTRACEFD` (and a ton of other detailed logging/tracing to file) you can see what I use [here](https://gist.github.com/mzpqnxow/755db0f772dc7b33b5a2529758d815a8) – adam Jul 07 '21 at 16:30
  • `#!/bin/bash`, then `exec 3>&1 4>&2; trap 'exec 2>&4 1>&3' 0 1 2 3; exec 1>"$0".log 2>&1` – BND Sep 18 '21 at 15:28
  • Is there a way to add the date or a timestamp to this? – Liam Pillay Mar 03 '22 at 09:18
27

As I read your question, you don't want to log just the output but the entire sequence of commands, in which case the other answers won't help you.

Invoke shell scripts with -x to output everything:

sh -x foo.sh

Log to the file you want with:

sh -x foo.sh >> /home/scripts/cron/logs
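
Note that the trace output goes to stderr (see the comments below), so to capture the trace itself in the log file you likely need to redirect both streams:

sh -x foo.sh >> /home/scripts/cron/logs 2>&1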

Alex Holst
  • 2,200
  • 1
  • 14
  • 13
  • Dunno why this is with bash, but I could only get the `set -x` output to the logging file with: `bash -x foo.sh &> loggingfile.log` – Jesse Adelman Jul 09 '20 at 22:39
  • 2
    @JesseAdelman it's because xtrace outputs everything to stderr, so you need to redirect stderr as well as stdout, which is what `&>` does. – Michael Dorst Sep 10 '20 at 23:48
23

To get the ssh output into your logfile, you have to redirect stderr to stdout. You can do this by appending 2>&1 after the closing parenthesis of the subshell, before the pipe to tee.

It should look like this:

#!/bin/bash
(
...
) 2>&1 | tee ...

If this does not display the messages in the right order, try adding another subshell:

#!/bin/bash
((
...
) 2>&1) | tee ...
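
Applied to the script from the question, the first form would be:

#!/bin/bash
(
echo " `date` : part 1 - start "
ssh -f admin@server.com 'bash /www/htdocs/server.com/scripts/part1.sh logout exit'
# ... parts 2 and 3 as before ...
echo " `date` : END"
) 2>&1 | tee -a /home/scripts/cron/logs
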
Christian
  • 4,645
  • 2
  • 23
  • 27
12

In bash, you can put set -x in your script and it will print every command that it executes (with bash variables expanded) from that point on. You can turn it off with set +x.

If you want to be paranoid, you can put set -o errexit in your script. This means the script will fail and stop if one command returns a non-zero exit code, which is the standard Unix way to signal that something went wrong.
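
A minimal sketch of what errexit changes (the script contents here are just an illustration):

#!/bin/bash
set -o errexit            # same as set -e
echo "this prints"
false                     # non-zero exit status: the script stops here
echo "this never prints"

Be aware of the caveat mentioned in the comments on the question: in a `command || dosmthg` construct, a failure of `command` does not stop the script.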

If you want to get nicer logs, you should look at ts in the moreutils Debian/Ubuntu package. It will prefix each line with a timestamp and print it out, so you can see when things were happening.
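
For example, a sketch assuming moreutils is installed, reusing the log path from the question:

bash -x foo.sh 2>&1 | ts '%Y-%m-%d %H:%M:%S' >> /home/scripts/cron/logs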

Amandasaurus
  • 30,211
  • 62
  • 184
  • 246
4

Following on from what others have said, the set manual is a good resource. I put:

#!/usr/bin/env bash
exec 1> command.log 2>&1
set -x

At the top of scripts that I wish to keep going, or set -ex if the script should exit upon error.

dragon951
  • 41
  • 1
  • How will this help with ssh logs and checking successful execution of commands? – asktyagi Jun 02 '19 at 11:30
  • Testing with the OP's dummy script, it records each command and whatever the result is: ssh timeout, incorrect domain, etc. Or, for a valid ssh address, it records the commands run in the remote shell. If more info about ssh logins is required, perhaps `ssh -vv`? – dragon951 Jun 03 '19 at 06:53
2

I've found that the answer from @nicerobot (above) might not be enough to completely redirect all output to the console. Some output can still be lost.

The complete redirection looks like this:

#!/bin/bash

# some basic initialization steps including the `NEST_LVL` and `SCRIPTS_LOGS_ROOT` variables...
source ".../__myinit__.sh"

# no local logging if nested call
(( ! IMPL_MODE && ! NEST_LVL )) && {
  export IMPL_MODE=1
  exec 3>&1 4>&2
  trap 'exec 2>&4 1>&3' EXIT HUP INT QUIT RETURN

  [[ ! -e "${SCRIPTS_LOGS_ROOT}/.log" ]] && mkdir "${SCRIPTS_LOGS_ROOT}/.log"

  # RANDOM instead of milliseconds
  case $BASH_VERSION in
    # < 4.2
    [123].* | 4.[01] | 4.0* | 4.1[^0-9]*)
      LOG_FILE_NAME_SUFFIX=$(date "+%Y'%m'%d_%H'%M'%S''")$(( RANDOM % 1000 ))
      ;;
    # >= 4.2
    *)
      printf -v LOG_FILE_NAME_SUFFIX "%(%Y'%m'%d_%H'%M'%S'')T$(( RANDOM % 1000 ))" -1
      ;;
  esac

  (
  (
    myfoo1
    #...
    myfooN

    # self script reentrance...
    exec $0 "$@"
  ) | tee -a "${SCRIPTS_LOGS_ROOT}/.log/${LOG_FILE_NAME_SUFFIX}.myscript.log" 2>&1
  ) 1>&3 2>&4

  exit $?
}

(( NEST_LVL++ ))

# usual script body goes here...

Explanation:

  1. It is not enough to just redirect the output, because you still have to redirect the inner echo calls both to the console and to the file, so the pipe-plus-tee is the only way to split the output stream.
  2. We have to use the (...) operator to either redirect each stream into a standalone file or restore all streams back to the original in separate steps. The second reason is that the myfoo calls before the self-reentrance have to work as-is, without any additional redirection.
  3. Because the script can be called from a nested script, we have to redirect the output into a file only once, in the top-level call, so we use the NEST_LVL variable to indicate the nesting level.
  4. Because the redirection can be set up externally, irrespective of the script, we have to explicitly enable the inner redirection via the IMPL_MODE variable.
  5. We have to use date-time values to automatically create a new unique log file on each script run.
Andry
  • 121
  • 5
1

TL;DR - Yesterday I wrote a set of tools for logging program runs and sessions.

Currently available at https://github.com/wwalker/quick-log

As an admin, I'm always wanting to log the output of some command, often not a script. To solve that problem, I've written a few things. The easiest is to use the program "script" as xX0v0Xx mentioned. I found that calling script (without any arguments) would often result in me overwriting the output of a previous script. So I created this alias. All it does is prevent overwriting. You need a ~/tmp directory.

$ alias scr='script ~/tmp/typescript-$(date +%FT%T)'
$ scr
Script started, file is /home/wwalker/tmp/typescript-2019-12-05T18:56:31
$

That is great when I want to catch an interactive session.

When I want to log the output of a command (script or binary), I either want the exact output, or I want the output with timestamps in front of each line. So I wrote these two bash functions:

alias iso8601="date +%Y-%m-%dT%H:%M:%S"

justlog(){
  name=$(basename "$1")
  log=~/logs/${name}-$(iso8601)
  "$@" > "$log" 2>&1
}

timelog(){
  name=$(basename "$1")
  log=~/logs/${name}-$(iso8601)
  # https://github.com/wwalker/ilts
  # You could replace ilts with ts
  # /usr/bin/ts %FT%H:%M:%.S
  "$@" |& ilts -S -E > "$log" 2>&1
}

Just run your command like you normally would:

justlog run rcn rcn-work\* 'ps -ef -o lstart,cmd | grep [s]upervisor'

or

timelog run rcn rcn-work\* 'ps -ef -o lstart,cmd | grep [s]upervisor'

This just creates files named like:

wwalker@polonium:~ ✓ $ ls -l ~/logs/run*
-rw-r--r-- 1 wwalker wwalker 10495 2019-12-05 18:21:14.985 /home/wwalker/logs/run-2019-12-05T18:21:13
-rw-r--r-- 1 wwalker wwalker  1694 2019-12-05 18:24:02.878 /home/wwalker/logs/run-2019-12-05T18:24:01
-rw-r--r-- 1 wwalker wwalker  7623 2019-12-05 18:25:07.873 /home/wwalker/logs/run-2019-12-05T18:25:06
-rw-r--r-- 1 wwalker wwalker 10296 2019-12-05 18:34:59.546 /home/wwalker/logs/run-2019-12-05T18:34:57

But wait, there's more!!

I didn't want to have to do an ls to find the name of the log file that justlog or timelog just created for me. So, I added 3 more functions:

newestlog(){
  name=$(basename "$1")
  ls  ~/logs/"${name}"* | tail -1
}

viewlog(){
  name=$(basename "$1")
  view $( newestlog "$name" )
}

lesslog(){
  name=$(basename "$1")
  less $( newestlog "$name" )
}

So, you run your command with justlog (or timelog), and then you just use lesslog or viewlog (I'll probably create an emacslog for those people):

justlog run rcn rcn-work\* 'ps -ef -o lstart,cmd | grep [s]upervisor'
lesslog run

That's it, no ls ~/tmp, no tab completion games to find the file name. Just run lesslog (or viewlog if you like using vim to look at logs).

But, wait! There's more!

"I use grep all the time on my log files" - And the answer is, you guessed it, greplog

First, get the text from all 800 servers' /etc/cron.d/atop files that are broken:

justlog fordcrun 'cat /etc/cron.d/atop; md5sum /etc/cron.d/atop'

Then get the hostnames (on the line above the output in the file :-) ) with greplog:

wwalker@polonium:~ ✓ $ greplog fordcrun  -B1 -F "0 0 * * root"
int-salt-01:
    0 0 * * root systemctl restart atop
--
rcn-pg-01:
    0 0 * * root systemctl restart atop
rcn-pg-02:
    0 0 * * root systemctl restart atop
rcn-pg-03:
    0 0 * * root systemctl restart atop
Wayne Walker
  • 241
  • 2
  • 5
0

You can simply use "script".

man script for details.

Example:

script -a -e -c "set -x; echo this is logging ..; <your script can be called here>" mylogfile.log
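
Applied to the question, one possible invocation (the wrapper script path here is hypothetical) would be:

script -a -e -c "bash /home/scripts/cron/myscript.sh" /home/scripts/cron/logs

-a appends to the log instead of overwriting it, and -e makes script exit with the return code of the child process.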
0

So, here is an option I used and consider to be a little more friendly. Essentially, define a task function at the top of your script, and define all of your sections with their own functions to be passed to calls of the task function. It could be extended to add some specific INFO, WARN, DEBUG, and ERROR flags for your logs as well. They would simply be functions that use an echo to wrap your message in a consistent way, as all of the outputs are going to your log anyway.

It would look something like this:

# Append a timestamp and always write a new file
#LOGFILE=/var/log/my_setup_log_file.$(date +%Y%h%d_%H.%M).log
# Same log, growing and appending - make sure it's a one-time deal or that you have a log-rotating utility running.
LOGFILE=/var/log/my_setup_log_file.log
touch "${LOGFILE}"
# [Un]comment the following two lines to control display of this script for the tty in real time.  All other commands should be passed to task.
tail -f "${LOGFILE}" &
# $! must be captured immediately after launching tail, not before it:
TAIL_PID=$!
# logging function - called task to make main section more readable...prepend it to commands, or group commands in a function and prepend the call.
task() {
    echo "===================================================================================================="
    echo "$(date):$(printf ' %q' "$@")"
    echo "===================================================================================================="
    start=$(date +%s)
    "$@" 2>&1
    end=$(date +%s)
    runtime=$((end-start))
    echo "Elapsed time for command was $runtime seconds."
    echo ""
} >> "${LOGFILE}"

### FUNCTION DEFS...

### Main execution

task my_operational_step_1
task my_operational_step_2
kill ${TAIL_PID}
exit 0

LIMITATIONS and CAUTION:

  1. In my first iteration I chose to use the tee command in append mode for all of the log writes, which requires the use of a pipe. This obliterates the ability to use globally scoped variables across functions, because every function is effectively run in its own subshell due to the pipe. Argh.
  2. Addressing issue 1, I moved to the approach of using tail -f as a spawned task at the start (~line 7). This allows the task function to put all the output into the log, but still lets you see what's going on in real time.
  3. Note that you need to capture the PID of tail in order to kill it later, as you can see around line 8, and then the kill at the end of the script.
  4. A corollary to item 3 is that if your script exits early, tail stays running in the background... It's probably worthwhile to write some traps to handle that (see the sketch after this list); I've not bothered with this, given the scope of what I'm doing.
  5. I've not used this extensively, so you may run into issues I've not thought of.
  6. It is possible to have stuff that isn't logged using this approach, so not a perfect answer for OP's question.
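
For item 4 above, a minimal sketch of such a trap (a suggestion, not part of the original approach) could be:

# Kill the background tail even if the script exits early:
trap 'kill "${TAIL_PID}" 2>/dev/null' EXIT

With this in place, the explicit kill at the end of the script becomes redundant but harmless.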
Matt
  • 1
  • 1