
Situation: I have a shell script running. Whenever a file appears in /etc/scripts, the shell script makes it executable with chmod and then runs it, redirecting its output and error into two other files.

Example: 1000.run appears. The shell script takes 1000.run, makes it executable, runs it, and redirects its output into 1000.out and its errors into 1000.err:

chmod +x 1000.run
sudo -u pi ./1000.run 1>1000.out 2>1000.err
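
For context, the watching part of the script isn't shown above; a minimal sketch of what such a loop could look like, assuming inotifywait from inotify-tools is available (the real script may work differently), is:

#!/bin/bash
# Hypothetical watcher: waits for new *.run files in /etc/scripts and executes each one.
# The actual script from the question is not shown; this is only an illustration.
inotifywait -m -e create --format '%f' /etc/scripts | while read -r NAME; do
    case "$NAME" in
        *.run)
            BASE="${NAME%.run}"
            chmod +x "/etc/scripts/$NAME"
            sudo -u pi "/etc/scripts/$NAME" 1>"/etc/scripts/$BASE.out" 2>"/etc/scripts/$BASE.err"
            ;;
    esac
done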

After 1000.out and 1000.err have appeared, another script that is watching for these files reads the output and the error from them.

However, I have a problem: longer commands. Take the following contents of 1000.run:

sleep 30 && ls

Immediately after ./1000.run starts, 1000.out and 1000.err appear, both empty. My other script takes those files and says "hey, we have output, the command is done" and returns empty outputs (when I was expecting ls output).

In reality, after 30 seconds, output from ls does appear, but at that point my program has already read in the out and err files and concluded that no output was actually received.
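
The same behaviour can be reproduced in isolation, since the shell opens (and truncates) the redirection targets before the command has produced any output (file names here are just for illustration):

sleep 30 > out.txt 2> err.txt &
ls -l out.txt err.txt    # both files already exist with size 0 while sleep is still running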

Question: Is there a way to delay the creation of the redirection files (1000.out and 1000.err) until the entire command is done running?

What I have tried so far:

  • I have tried using stdbuf -o0 (from here).
  • I have tried using unbuffer (from here). Typical invocations of both are sketched after this list.
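
For reference, the exact commands tried aren't shown; a typical way to slot these tools into the run line (an assumption about the original attempt) would be:

sudo -u pi stdbuf -o0 -e0 ./1000.run 1>1000.out 2>1000.err
sudo -u pi unbuffer ./1000.run 1>1000.out 2>1000.err

Either way the shell still opens 1000.out and 1000.err before the command starts, which would explain why buffering tools don't change when the files appear.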
  • I have a feeling that you're making this problem a bit more complicated than it needs to be. Why can't the first script call the second when it is finished, giving it the files it needs? Or there could be only one script, which runs the command and interprets the output. In both cases you could easily avoid the need for synchronizing between processes. – Lacek Nov 25 '19 at 13:29
  • @Lacek I have a docker container that wants these commands run on the local system, so I have to write them out via a fs mount into the container, and then have this script (running on the system, outside of docker) run the command. The docker container, after writing out the run file, then sets up a node fs.watch on the output/error files and waits for them to be created, and then reads their output as soon as they are. – Devin Nov 25 '19 at 13:47

2 Answers


You can work around this issue by appending a marker string, such as "--end--", to the output file to denote that the script has finished.

Script that executes files:

chmod +x 1000.run
sudo -u pi ./1000.run 1>1000.out 2>1000.err
"--end--" >> 1000.out

Pseudocode to read outputs:

const fs = require('fs');
const path = require('path');
fs.watch(pathToDirWithOutputsAndErrors, function (event, filename) {
  if (!filename || !filename.endsWith('.out')) return;  // only the .out file carries the marker
  const lines = fs.readFileSync(path.join(pathToDirWithOutputsAndErrors, filename), 'utf8')
                  .trimEnd().split('\n');
  if (lines[lines.length - 1] === '--end--') {
    // script is done executing
    // do whatever...
  }
});

So the outputs are only read once the last line of the .out file is '--end--'.

Note: In the if statement make sure to only check the .out file for '--end--'.

slightly_toasted

I don't think you can delay file creation until the files are closed, at least not in an easy way.

I would rewrite the scripts so that the second script doesn't look for the output files, but rather a flag file which indicates that the command has finished. So in the end you would have three files: 1000.out, 1000.err, and 1000.complete. The complete file would be written at the end of the script, like this (assuming every file to be executed ends with .run):

FILE=1000.run
BASE=$(basename "$FILE" .run)

chmod +x "$FILE"

# Create the flag file when this script exits, no matter how it exits
function signalCompletion {
    touch "$BASE.complete"
}
trap signalCompletion EXIT

sudo -u pi "./$FILE" 1>"$BASE.out" 2>"$BASE.err"

Using the trap command to create the file ensures that it will be created even if a signal terminates your script. When your second script detects the .complete file, it knows that it is safe to read and parse the output files.
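
On the reading side, a minimal sketch of waiting for the flag file (the question's real watcher uses Node's fs.watch; this polling shell loop is only an illustration, with 1000 as an example base name) could be:

BASE=1000
# Wait until the flag file exists; only then are the output files complete
while [ ! -e "$BASE.complete" ]; do
    sleep 0.2
done
cat "$BASE.out"
cat "$BASE.err" >&2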

This doesn't answer your question, but may be a solution to the problem you're trying to solve.

Lacek