Monitoring a file until a string is found

62

38

I am using tail -f to monitor a log file that is being actively written to. When a certain string is written to the log file, I want to quit the monitoring, and continue with the rest of my script.

Currently I am using:

tail -f logfile.log | grep -m 1 "Server Started"

When the string is found, grep quits as expected, but I need to find a way to make the tail command quit too so that the script can continue.

Alex Hofsteede

Posted 2011-04-13T20:38:57.810

Reputation: 1 241

@ZaSter: The tail dies only at the next line. Try this: date > log; tail -f log | grep -m 1 trigger and then in another shell: echo trigger >> log and you will see the output trigger in the first shell, but no termination of the command. Then try: date >> log in the second shell and the command in the first shell will terminate. But sometimes this is too late; we want to terminate as soon as the trigger line appears, not when the line after the trigger line is complete. – Alfe – 2015-02-17T14:56:18.403

That is an excellent explanation and example, @Alfe. – ZaSter – 2015-02-18T02:02:15.863

1

The elegant, robust one-line solution is to use tail + grep -q as in 00prometheus's answer

– Trevor Boyd Smith – 2017-04-12T18:37:37.397

https://superuser.com/a/375331/166461 – Wolfgang Fahl – 2019-01-01T16:13:45.113

I wonder on what Operating System the original poster was running. On a Linux RHEL5 system, I was surprised to find that the tail command simply dies once grep command has found the match and exited. – ZaSter – 2013-09-14T01:27:25.317

Answers

44

A simple POSIX one-liner

Here is a simple one-liner. It doesn't need bash-specific or non-POSIX tricks, or even a named pipe. All you really need is to decouple the termination of tail from grep. That way, once grep ends, the script can continue even if tail hasn't ended yet. So this simple method will get you there:

( tail -f -n0 logfile.log & ) | grep -q "Server Started"

grep will block until it has found the string, whereupon it will exit. By making tail run in its own sub-shell, we can place it in the background so it runs independently. Meanwhile, the main shell is free to continue execution of the script as soon as grep exits. tail will linger in its sub-shell until the next line has been written to the logfile, and then exit (possibly even after the main script has terminated). The main point is that the pipeline no longer waits for tail to terminate, so the pipeline exits as soon as grep exits.

Some minor tweaks:

  • The option -n0 to tail makes it start reading from the current last line of logfile, in case the string exists earlier in the logfile.
  • You might want to give tail the -F option rather than -f. It is not POSIX, but it allows tail to keep working even if the log is rotated while waiting.
  • Option -q rather than -m1 makes grep quit after the first occurrence, but without printing out the trigger line. Also it is POSIX, which -m1 isn't.
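
Put together, the one-liner with these tweaks can be exercised end-to-end. Here is a self-contained sketch; the temporary file and the background writer merely simulate a server appending to its log:

```shell
#!/bin/sh
log=$(mktemp)                                    # stand-in for logfile.log
( sleep 1; echo "Server Started" >> "$log" ) &   # simulated server writer
( tail -F -n0 "$log" & ) | grep -q "Server Started"
echo "trigger seen, script continues"            # runs as soon as grep matches
rm -f "$log"
```

The subshelled tail may linger briefly after the script moves on; as explained above, it dies once it next tries to write to the broken pipe.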

00prometheus

Posted 2011-04-13T20:38:57.810

Reputation: 556

This approach will leave the tail running in the background forever. How would you capture the tail PID within the backgrounded sub-shell and expose it to the main shell? I can only come up with the sub-optimal workaround of killing all session-attached tail processes, using pkill -s 0 tail. – Rick van der Zwet – 2016-11-24T08:55:42.323

In most use cases it shouldn't be a problem. The reason you are doing this in the first place is because you are expecting more lines to be written to the log file. tail will terminate as soon as it tries to write to a broken pipe. The pipe will break as soon as grep has completed, so once grep has completed, tail will terminate after the log file gets one more line in it. – 00prometheus – 2016-11-25T17:22:43.187

When I used this solution I did not background the tail -f. – Trevor Boyd Smith – 2017-04-12T18:38:53.753

@Trevor Boyd Smith, yes, that works in most situations, but the OP problem was that grep will not complete until tail quits, and tail won't quit until another line appears in the log file after grep has quit (when tail tries to feed the pipeline that was broken by grep ending). So unless you background tail, your script will not continue execution until an extra line has appeared in the log file, rather than exactly on the line that grep catches. – 00prometheus – 2017-04-15T18:11:08.717

Re "tail won't quit until another line appears after [the required string pattern]": That is very subtle and I completely missed it. I didn't notice because the pattern I was looking for was in the middle and it all got printed out fast. (Again, the behavior you describe is very subtle.) – Trevor Boyd Smith – 2017-04-15T18:50:12.027

Yes, this is a rather subtle problem, and the solution does require some unpacking to fully understand. Sorry about that, it's the best I could do! :-) – 00prometheus – 2017-04-15T19:58:31.323

I have edited my answer again to try to explain a bit more clearly, but this stuff is a bit arcane, so I might not be doing it very well. – 00prometheus – 2017-04-15T20:19:44.270

60

The accepted answer isn't working for me, plus it's confusing and it changes the log file.

I'm using something like this:

tail -f logfile.log | while read -r LOGLINE
do
   [[ "${LOGLINE}" == *"Server Started"* ]] && pkill -P $$ tail
done

If the log line matches the pattern, kill the tail started by this script.

Note: if you want to also view the output on the screen, either | tee /dev/tty or echo the line before testing in the while loop.

Rob Whelan

Posted 2011-04-13T20:38:57.810

Reputation: 719

You don't need a while loop. Use watch with the -g option and you can spare yourself the nasty pkill command. – l1zard – 2015-02-17T16:37:03.753

@l1zard Can you flesh that out? How would you watch the tail of a log file until a particular line showed up? (Less important, but I'm also curious when watch -g was added; I have a newer Debian server with that option, and another old RHEL-based one without it). – Rob Whelan – 2015-02-17T20:28:11.953

It's not quite clear to me why tail is even needed here. If I understand this correctly, the user wants to execute a specific command when a certain keyword appears in a log file. The command given below using watch does this very task. – l1zard – 2015-02-19T23:36:07.723

Not quite -- it's checking when a given string is added to the log file. I use this for checking when Tomcat or JBoss is fully started up; they write "Server started" (or similar) each time that happens. – Rob Whelan – 2015-02-20T06:55:42.970

pkill is not specified by POSIX but it is preinstalled on CentOS, Fedora, Ubuntu, Debian, MacOS and probably many others for a good reason. – ndemou – 2016-08-16T14:54:35.057

Using grep will save you some CPU. And you can still use pkill. – laurent – 2017-08-08T11:47:16.397

@Rob Whelan - Is there a way to implement a timeout with your solution? So that I will exit after a certain time. Even if the string was not found. – skymedium – 2019-04-01T09:07:26.360

This works, but pkill is not specified by POSIX and isn't available everywhere. – Richard Hansen – 2013-02-27T18:29:44.333

17

If you're using Bash (at least; process substitution is not defined by POSIX, so it may be missing in some shells), you can use the syntax

grep -m 1 "Server Started" <(tail -f logfile.log)

It works pretty much like the FIFO solutions already mentioned, but is much simpler to write.

petch

Posted 2011-04-13T20:38:57.810

Reputation: 171

This works, but tail is still running until you send a SIGTERM (Ctrl+C, exit command, or kill it) – mems – 2014-10-06T12:17:09.823

@mems, any additional line in the log file will do. The tail will read it, try to output it and then receive a SIGPIPE which will terminate it. So, in principle you are right; the tail might run indefinitely if nothing gets written to the log file ever again. In practice this might be a very neat solution for a lot of people. – Alfe – 2015-02-17T14:33:23.357

14

There are a few ways to get tail to exit:

Poor Approach: Force tail to write another line

You can force tail to write another line of output immediately after grep has found a match and exited. This will cause tail to get a SIGPIPE, causing it to exit. One way to do this is to modify the file being monitored by tail after grep exits.

Here is some example code:

tail -f logfile.log | grep -m 1 "Server Started" | { cat; echo >>logfile.log; }

In this example, cat won't exit until grep has closed its stdout, so tail is not likely to be able to write to the pipe before grep has had a chance to close its stdin. cat is used to propagate the standard output of grep unmodified.

This approach is relatively simple, but there are several downsides:

  • If grep closes stdout before closing stdin, there will always be a race condition: grep closes stdout, triggering cat to exit, triggering echo, triggering tail to output a line. If this line is sent to grep before grep has had a chance to close stdin, tail won't get the SIGPIPE until it writes another line.
  • It requires write access to the log file.
  • You must be OK with modifying the log file.
  • You may corrupt the log file if you happen to write at the same time as another process (the writes may be interleaved, causing a newline to appear in the middle of a log message).
  • This approach is specific to tail—it won't work with other programs.
  • The third pipeline stage makes it hard to get access to the return code of the second pipeline stage (unless you're using a POSIX extension such as bash's PIPESTATUS array). This is not a big deal in this case because grep will always return 0, but in general the middle stage might be replaced with a different command whose return code you care about (e.g., something that returns 0 when "server started" is detected, 1 when "server failed to start" is detected).

The next approaches avoid these limitations.

A Better Approach: Avoid Pipelines

You can use a FIFO to avoid the pipeline altogether, allowing execution to continue once grep returns. For example:

fifo=/tmp/tmpfifo.$$
mkfifo "${fifo}" || exit 1
tail -f logfile.log >${fifo} &
tailpid=$! # optional
grep -m 1 "Server Started" "${fifo}"
kill "${tailpid}" # optional
rm "${fifo}"

The lines marked with the comment # optional can be removed and the program will still work; tail will just linger until it reads another line of input or is killed by some other process.

The advantages to this approach are:

  • you don't need to modify the log file
  • the approach works for other utilities besides tail
  • it does not suffer from a race condition
  • you can easily get the return value of grep (or whatever alternative command you're using)

The downside to this approach is complexity, especially managing the FIFO: You'll need to securely generate a temporary file name, and you'll need to ensure that the temporary FIFO is deleted even if the user hits Ctrl-C in the middle of the script. This can be done using a trap.
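
The trap-based cleanup just mentioned might look like the sketch below. The simulated log line stands in for a real server, and mktemp -u is used here only to generate a unique pathname for the FIFO:

```shell
#!/bin/sh
log=$(mktemp)
fifo=$(mktemp -u)                             # unique pathname; nothing created yet
mkfifo "$fifo" || exit 1
trap 'rm -f "$fifo" "$log"' EXIT INT TERM     # delete the FIFO even on Ctrl-C
echo "Server Started" >> "$log"               # simulate the server having logged
tail -f "$log" > "$fifo" &
tailpid=$!
found=$(grep -m 1 "Server Started" "$fifo")
kill "$tailpid"
echo "matched: $found"
```

Because the trap fires on EXIT as well as on signals, the FIFO is removed no matter how the script ends.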

Alternative Approach: Send a Message to Kill tail

You can get the tail pipeline stage to exit by sending it a signal like SIGTERM. The challenge is reliably knowing two things in the same place in code: tail's PID and whether grep has exited.

With a pipeline like tail -f ... | grep ..., it's easy to modify the first pipeline stage to save tail's PID in a variable by backgrounding tail and reading $!. It's also easy to modify the second pipeline stage to run kill when grep exits. The problem is that the two stages of the pipeline run in separate "execution environments" (in the terminology of the POSIX standard) so the second pipeline stage can't read any variables set by the first pipeline stage. Without using shell variables, either the second stage must somehow figure out tail's PID so that it can kill tail when grep returns, or the first stage must somehow be notified when grep returns.

The second stage could use pgrep to get tail's PID, but that would be unreliable (you might match the wrong process) and non-portable (pgrep is not specified by the POSIX standard).

The first stage could send the PID to the second stage via the pipe by echoing the PID, but this string will get mixed with tail's output. Demultiplexing the two may require a complex escaping scheme, depending on the output of tail.

You can use a FIFO to have the second pipeline stage notify the first pipeline stage when grep exits. Then the first stage can kill tail. Here is some example code:

fifo=/tmp/notifyfifo.$$
mkfifo "${fifo}" || exit 1
{
    # run tail in the background so that the shell can
    # kill tail when notified that grep has exited
    tail -f logfile.log &
    # remember tail's PID
    tailpid=$!
    # wait for notification that grep has exited
    read foo <${fifo}
    # grep has exited, time to go
    kill "${tailpid}"
} | {
    grep -m 1 "Server Started"
    # notify the first pipeline stage that grep is done
    echo >${fifo}
}
# clean up
rm "${fifo}"

This approach has all the pros and cons of the previous approach, except it's more complicated.

A Warning About Buffering

POSIX allows the stdin and stdout streams to be fully buffered, which means that tail's output might not be processed by grep for an arbitrarily long time. There shouldn't be any problems on GNU systems: GNU grep uses read(), which avoids all buffering, and GNU tail -f makes regular calls to fflush() when writing to stdout. Non-GNU systems may have to do something special to disable or regularly flush buffers.
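
Where buffering does bite (typically with commands other than GNU tail), the stdbuf utility from GNU coreutils can force line buffering on the producer. A small illustration, with printf standing in for the real log-producing command:

```shell
# Force line-buffered stdout on the producer so grep sees each line promptly.
stdbuf -oL printf 'starting up\nServer Started\n' | grep -q "Server Started" \
    && echo "match delivered without waiting for a full buffer"
```

stdbuf is not POSIX, so this is only an option on systems that ship GNU coreutils (or an equivalent such as gstdbuf).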

Richard Hansen

Posted 2011-04-13T20:38:57.810

Reputation: 461

Your solution (like others, I won't blame you) will miss things already written to the log file before your monitoring started. The tail -f will only output the last ten lines, and then all the following. To improve this, you can add the option -n 10000 to the tail so the last 10000 lines are printed as well. – Alfe – 2015-02-17T12:30:20.053

Another idea: Your fifo solution can be straightened, I think, by passing the output of the tail -f through the fifo and grepping on it: mkfifo f; tail -f log > f & tailpid=$! ; grep -m 1 trigger f; kill $tailpid; rm f. – Alfe – 2015-02-17T12:33:16.350

@Alfe: I could be wrong, but I believe having tail -f log write to a FIFO will cause some systems (e.g., GNU/Linux) to use block-based buffering instead of line-based buffering, which means grep might not see the matching line when it appears in the log. The system might provide a utility to change the buffering, such as stdbuf from GNU coreutils. Such a utility would be non-portable, however. – Richard Hansen – 2015-02-17T21:37:38.033

@Alfe: Actually, it looks like POSIX doesn't say anything about buffering except when interacting with a terminal, so from a standards perspective I think your simpler solution is as good as my complex one. I'm not 100% sure about how various implementations actually behave in each case, however. – Richard Hansen – 2015-02-18T00:48:56.007

Actually, I now go for the even simpler grep -q -m 1 trigger <(tail -f log) proposed elsewhere and live with the fact that the tail runs one line longer in the background than it needs to. – Alfe – 2015-02-18T10:38:38.963

@Alfe: The <(foo) process substitution is specific to Bash (and Zsh, maybe others). It is not POSIX conformant, so it is not appropriate for use in a portable script. – Richard Hansen – 2015-02-22T00:17:54.630

@Alfe: Also, with <(tail -f log) the tail -f log command will continue running even after grep exits. That may not be a problem in this case, but it could be a problem in other cases. – Richard Hansen – 2015-02-22T01:09:12.163

As I wrote: With this solution the tail runs one line longer than necessary (without halting termination of the grep and thus the main command) and at least in my cases this is not a problem. I'm hard pressed to think of a scenario in which this is more than an academic issue. – Alfe – 2015-02-23T01:01:02.820

Regarding buffering and consequent delays a quick note for those trying to use this with any command and not only tail: try stdbuf -o0 and stdbuf -i0 in the left and right side of the pipe if you are experiencing any delays.

In my case, I was trying to monitor docker-compose logs and it was the output that was buffered, nothing to do with grep. – rsilva4 – 2016-02-18T14:29:02.457

9

Let me expand on @00prometheus's answer (which is the best one).

Maybe you should use a timeout instead of waiting indefinitely.

The bash function below will block until the given search term appears or a given timeout is reached.

The exit status will be 0 if the string is found within the timeout.

wait_str() {
  local file="$1"; shift
  local search_term="$1"; shift
  local wait_time="${1:-5m}" # 5 minutes as default timeout

  (timeout "$wait_time" tail -F -n0 "$file" &) | grep -q "$search_term" && return 0

  echo "Timeout of $wait_time reached. Unable to find '$search_term' in '$file'"
  return 1
}

Perhaps the log file doesn't exist yet just after launching your server. In that case, you should wait for it to appear before searching for the string:

wait_server() {
  echo "Waiting for server..."
  local server_log="$1"; shift
  local wait_time="$1"; shift

  wait_file "$server_log" 10 || { echo "Server log file missing: '$server_log'"; return 1; }

  wait_str "$server_log" "Server Started" "$wait_time"
}

wait_file() {
  local file="$1"; shift
  local wait_seconds="${1:-10}" # 10 seconds as default timeout

  until test $((wait_seconds--)) -eq 0 -o -f "$file" ; do sleep 1; done

  ((++wait_seconds))
}

Here's how you can use it:

wait_server "/var/log/server.log" 5m && \
echo -e "\n-------------------------- Server READY --------------------------\n"

Elifarley

Posted 2011-04-13T20:38:57.810

Reputation: 428

So, where is the timeout command? – ayanamist – 2016-08-10T02:14:16.580

Actually, using timeout is the only reliable way to not hang indefinitely waiting for a server that cannot start and has already exited. – gluk47 – 2017-02-15T23:27:48.117

This answer is the best. Just copy the function and call it, it's very easy and reusable – Hristo Vrigazov – 2018-06-06T11:52:18.523

6

So after doing some testing, I found a quick one-line way to make this work. It appears tail -f will quit when grep quits, but there's a catch: it only seems to be triggered when the file is opened and closed. I've accomplished this by appending an empty string to the file when grep finds the match.

tail -f logfile | grep -m 1 "Server Started" | xargs echo "" >> logfile \;

I'm not sure why the open/close of the file triggers tail to realize that the pipe is closed, so I wouldn't rely on this behavior, but it seems to work for now.

For the reason it closes, look at the -F flag versus the -f flag.

Alex Hofsteede

Posted 2011-04-13T20:38:57.810

Reputation: 1 241

This works because appending to the logfile causes tail to output another line, but by then grep has exited (probably -- there's a race condition there). If grep has exited by the time tail writes another line, tail will get a SIGPIPE. That causes tail to exit right away. – Richard Hansen – 2013-02-27T18:19:03.457

Disadvantages to this approach: (1) there's a race condition (it may not always exit immediately) (2) it requires write access to the log file (3) you must be OK with modifying the log file (4) you may corrupt the log file (5) it only works for tail (6) you can't easily tweak it to behave differently depending on different string matches ("server started" vs. "server start failed") because you can't easily get the return code of the middle stage of the pipeline. There is an alternative approach that avoids all of these problems -- see my answer. – Richard Hansen – 2013-02-27T18:23:37.733

6

Currently, as given, all of the tail -f solutions here run the risk of picking up a previously logged "Server Started" line (which may or may not be a problem in your specific case, depending on the number of lines logged and log file rotation/truncation).

Rather than over-complicate things, just use a smarter tail, as bmike showed with a perl snippet. The simplest solution is retail, which has integrated regex support with start and stop condition patterns:

retail -f -u "Server Started" server.log > /dev/null

This will follow the file like a normal tail -f until the first new instance of that string appears, then exit. (The -u option does not trigger on existing lines in the last 10 lines of the file when in normal "follow" mode.)


If you use GNU tail (from coreutils), the next simplest option is to use --pid and a FIFO (named pipe):

mkfifo ${FIFO:=serverlog.fifo.$$}
grep -q -m 1 "Server Started" ${FIFO}  &
tail -n 0 -f server.log  --pid $! >> ${FIFO}
rm ${FIFO}

A FIFO is used because the processes must be started separately in order to obtain and pass a PID. A FIFO still suffers from the same problem of hanging around until a timely write causes tail to receive a SIGPIPE, so the --pid option is used so that tail exits when it notices that grep has terminated (the option is conventionally used to monitor the writer process rather than the reader, but tail doesn't really care). Option -n 0 is used with tail so that old lines don't trigger a match.
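
That recipe can be tried end-to-end with a throwaway log. This is only a sketch assuming GNU tail; the temporary files and the background writer simulate the server (the second simulated line guarantees tail gets a SIGPIPE even if the --pid check is slow):

```shell
#!/bin/sh
log=$(mktemp)
fifo=$(mktemp -u)
mkfifo "$fifo" || exit 1
( sleep 1; echo "Server Started" >> "$log"
  sleep 1; echo "one more line"  >> "$log" ) &   # simulated server writer
grep -q -m 1 "Server Started" "$fifo" &
tail -n 0 -f "$log" --pid $! >> "$fifo"          # returns once grep is gone
rm -f "$fifo" "$log"
echo "tail has exited, script continues"
```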


Finally, you could use a stateful tail, which stores the current file offset so that subsequent invocations only show new lines (it also handles file rotation). This example uses the old FWTK retail*:

retail "${LOGFILE:=server.log}" > /dev/null   # skip over current content
while true; do
    [ "${LOGFILE}" -nt ".${LOGFILE}.off" ] && 
       retail "${LOGFILE}" | grep -q "Server Started" && break
    sleep 2
done

* Note, same name, different program to the previous option.

Rather than a CPU-hogging loop, this compares the timestamp of the file with the state file (.${LOGFILE}.off) and sleeps. Use -T to specify the location of the state file if required; the above assumes the current directory. Feel free to skip that condition, or on Linux you could use the more efficient inotifywait instead:

retail "${LOGFILE:=server.log}" > /dev/null
while true; do
    inotifywait -qq "${LOGFILE}" && 
       retail "${LOGFILE}" | grep -q "Server Started" && break
done

mr.spuratic

Posted 2011-04-13T20:38:57.810

Reputation: 2 163

Can I combine retail with a timeout, like: "If 120 seconds have passed and retail still has not read the line, then give an error code and exit retail" ? – kiltek – 2018-03-15T09:25:23.013

@kiltek use GNU timeout (coreutils) to launch retail and just check for exit code 124 on timeout (timeout will kill whatever command you use it to start after the time you set) – mr.spuratic – 2018-03-15T20:46:33.397

4

This will be a bit tricky since you will have to get into process control and signaling. More kludgey would be a two-script solution using PID tracking. Better would be using named pipes, like this.

What shell script are you using?

For a quick and dirty one-script solution, I would make a perl script using File::Tail:

use File::Tail;
$file=File::Tail->new(name=>$name, maxinterval=>300, adjustafter=>7);
while (defined($line=$file->read)) {
    last if $line =~ /Server started/;
}

So rather than printing inside the while loop, you could filter for the string match and break out of the while loop to let your script continue.

Either of these should involve just a little learning to implement the watching flow control you are seeking.

bmike

Posted 2011-04-13T20:38:57.810

Reputation: 2 773

maxinterval=>300 means that it will check the file every five minutes. Since I know that my line will appear in the file momentarily, I'm using much more aggressive polling: maxinterval=>0.2, adjustafter=>10000 – Stephen Ostermiller – 2014-09-09T20:18:24.020

using bash. my perl-fu is not that strong, but I'll give this a shot. – Alex Hofsteede – 2011-04-14T17:03:06.523

Use pipes - they love bash and bash love them. (and your backup software will respect you when it hits one of your pipes) – bmike – 2011-04-27T00:57:43.450

2

Read them all. TL;DR: decouple the termination of tail from grep.

The two forms most convenient are

( tail -f logfile.log & ) | grep -q "Server Started"

and if you have bash

grep -m 1 "Server Started" <(tail -f logfile.log)

But if that tail sitting in the background bothers you, there is a nicer way than a FIFO or any other answer here. Requires bash.

coproc grep -m 1 "Server Started"
tail -F logfile.log --pid $COPROC_PID >&${COPROC[1]}

Or if it isn't tail that is outputting things,

coproc command that outputs
grep -m 1 "Server Started" <&${COPROC[0]}
kill $COPROC_PID

Ian Kelling

Posted 2011-04-13T20:38:57.810

Reputation: 865

2

Wait for the file to appear:

while [ ! -f /path/to/the.file ] 
do sleep 2; done

Wait for the string to appear in the file:

while ! grep -q "the line you're searching for" /path/to/the.file  
do sleep 10; done

https://superuser.com/a/743693/129669

Mykhaylo Adamovych

Posted 2011-04-13T20:38:57.810

Reputation: 133

2This polling has two main drawbacks: 1. It wastes computation time by going through the log again and again. Consider a /path/to/the.file which is 1.4GB large; then it is clear that this is a problem. 2. It waits longer than necessary when the log entry has appeared, in the worst case 10s. – Alfe – 2015-02-17T14:16:53.480

2

I can't imagine a cleaner solution than this one:

#!/usr/bin/env bash
# file : untail.sh
# usage: untail.sh logfile.log "Server Started"
(echo $BASHPID; tail -f "$1") | while read -r LINE ; do
    if [ -z "$TPID" ]; then
        TPID=$LINE # the first line is used to store the previous subshell PID
    else
        echo "$LINE"; [[ "$LINE" == *"${*:2}"* ]] && kill -3 "$TPID" && break
    fi
done

OK, maybe the name could be improved...

Advantages:

  • it doesn't use any special utilities
  • it doesn't write to disk
  • it gracefully quits tail and closes the pipe
  • it is pretty short and easy to understand

Giancarlo Sportelli

Posted 2011-04-13T20:38:57.810

Reputation: 241

2

You don't necessarily need tail to do that. I think the watch command is what you're looking for. The watch command monitors the output of a command and, with the -g option, terminates when that output changes.

watch -g grep -m 1 "Server Started" logfile.log && Yournextaction

l1zard

Posted 2011-04-13T20:38:57.810

Reputation: 933

Because this runs once every two seconds, it doesn't immediately exit once the line appears in the log file. Also, it doesn't work well if the log file is very large. – Richard Hansen – 2015-02-17T21:42:15.437

1

The tail command can be backgrounded and its pid echoed to the grep subshell. In the grep subshell a trap handler on EXIT can kill the tail command.

( (sleep 1; exec tail -f logfile.log) & echo $! ; wait ) | 
     (trap 'kill "$pid"' EXIT; pid="$(head -1)"; grep -m 1 "Server Started")

phio

Posted 2011-04-13T20:38:57.810

Reputation: 11

1

Alex, I think this one will help you a lot.

tail -f logfile | grep -m 1 "Server Started" | xargs echo "" >> /dev/null ;

This command will never add an entry to the logfile but will grep silently...

Md. Mohsin Ali

Posted 2011-04-13T20:38:57.810

Reputation: 27

This won't work -- you have to append to the logfile, otherwise it could be an arbitrarily long time before tail outputs another line and detects (via SIGPIPE) that grep has died. – Richard Hansen – 2013-02-27T18:26:10.137

1

Here is a much better solution that does not require you to write to the logfile, which is very dangerous or even impossible in some cases.

sh -c 'tail -n +0 -f /tmp/foo | { sed "/EOF/ q" && kill $$ ;}'

Currently it has only one side effect: the tail process will remain in the background until the next line is written to the log.

sorin

Posted 2011-04-13T20:38:57.810

Reputation: 9 439

tail -n +0 -f starts from the beginning of the file. tail -n 0 -f starts from the end of the file. – Stephen Ostermiller – 2014-09-09T20:05:44.503

Another side effect I get: myscript.sh: line 14: 7845 Terminated sh -c 'tail... – Stephen Ostermiller – 2014-09-09T20:06:34.363

I believe that "next list" should be "next line" in this answer. – Stephen Ostermiller – 2014-09-09T20:07:29.940

This works, but a tail process remains running in the background. – cbaldan – 2019-03-07T19:47:59.317

1

The other solutions here have several issues:

  • if the logging process is already down or goes down during the loop, they will run indefinitely
  • editing a log that should only be viewed
  • unnecessarily writing an additional file
  • not allowing for additional logic

Here is what I came up with, using Tomcat as an example (remove the hashes if you want to see the log while it's starting):

function startTomcat {
    loggingProcessStartCommand="${CATALINA_HOME}/bin/startup.sh"
    loggingProcessOwner="root"
    loggingProcessCommandLinePattern="${JAVA_HOME}"
    logSearchString="org.apache.catalina.startup.Catalina.start Server startup"
    logFile="${CATALINA_BASE}/log/catalina.out"

    lineNumber="$(( $(wc -l "${logFile}" | awk '{print $1}') + 1 ))"
    ${loggingProcessStartCommand}
    while [[ -z "$(sed -n "${lineNumber}p" "${logFile}" | grep "${logSearchString}")" ]]; do
        [[ -z "$(ps -ef | grep "^${loggingProcessOwner} .* ${loggingProcessCommandLinePattern}" | grep -v grep)" ]] && { echo "[ERROR] Tomcat failed to start"; return 1; }
        [[ $(wc -l "${logFile}" | awk '{print $1}') -lt ${lineNumber} ]] && continue
        #sed -n "${lineNumber}p" "${logFile}"
        let lineNumber++
    done
    #sed -n "${lineNumber}p" "${logFile}"
    echo "[INFO] Tomcat has started"
}

user503391

Posted 2011-04-13T20:38:57.810

Reputation: 11

0

You want to leave as soon as the line is written, but you also want to leave after a timeout:

if (timeout 15s tail -F -n0 "stdout.log" &) | grep -q "The string that says the startup is successful" ; then
    echo "Application started with success."
else
    echo "Startup failed."
    tail stderr.log stdout.log
    exit 1
fi

Adrien

Posted 2011-04-13T20:38:57.810

Reputation: 101

0

Try using inotify (inotifywait)

You set up inotifywait for any file change, then check the file with grep; if the string is not found, just rerun inotifywait; if it is found, exit the loop... Something like that

Evengard

Posted 2011-04-13T20:38:57.810

Reputation: 1 500

This way, the entire file would have to be rechecked every time something is written to it. Doesn't work well for log files. – user1686 – 2011-04-13T20:59:21.293

Another way is to make two scripts:

  1. tail -f logfile.log | grep -m 1 "Server Started" > /tmp/found

  2. firstscript.sh& MYPID=$!; inotifywait -e MODIFY /tmp/found; kill -KILL -$MYPID – Evengard – 2011-04-13T21:16:37.720

I'd love you to edit your answer to show capturing the PID and then using inotifywait - an elegant solution that would be easy to grasp for someone used to grep but needing a more sophisticated tool. – bmike – 2011-05-10T20:47:40.993

A PID of what you would like to capture? I can try to make it if you explain a bit more what you want – Evengard – 2011-05-12T20:31:23.383

-2

How about this:

while true; do if [ -n "$(grep "myRegEx" myLog.log)" ]; then break; fi ; done

Ather

Posted 2011-04-13T20:38:57.810

Reputation: 1