I am trying to apply timestamps to the stdout of a process. For the timestamps to be accurate, I attempt to unbuffer the process's stdout. This works with unbuffer but not with stdbuf, contrary to what I would expect. Consider the following slow-printing script 'slowprint':
#!/bin/bash
if [ $# -ne 2 ]; then
    echo "usage: ${0##*/} <file> <delay in microseconds>"
    exit 1
fi
DELAY=$2 perl -pe 'BEGIN{ use Time::HiRes qw(usleep) } usleep($ENV{DELAY})' "$1"
Now compare the following attempts to apply timestamps:
stdbuf -oL ./slowprint <(ls) 100000 |
awk '{ print strftime("%H:%M:%S"), $0; fflush(); }'
vs
unbuffer ./slowprint <(ls) 100000 |
awk '{ print strftime("%H:%M:%S"), $0; fflush(); }'
The second one works for me while the first one doesn't, though I expect them to behave the same. Currently unbuffer is unsuitable for me because it swallows error codes in certain circumstances (I posted a separate question about that behavior).
perl allows scripts to do pretty low-level I/O and I'd guess (but am not certain) that affects buffering. You can override it here by setting $|=1, or, with 'English' in effect, $OUTPUT_AUTOFLUSH=1. This won't work for non-perl of course, but you may not have the problem for non-perl. – dave_thompson_085 – 2016-09-09T05:53:10.557