I am trying to get a working system to monitor special logs. I usually just want a very specific pattern, which I extract using `grep` and a pipe from `tail -f`. I have noticed that `grep` does not output everything immediately but instead keeps some lines buffered. I guess that makes sense for a pipe that will eventually output everything, terminate, and close the stream. But with `tail -f`, which never terminates, that doesn't work for me. The same problem appears with `sed`.
Here is an example command I want to use:
`clear && tail -F -n1000 /var/log/fail2ban.log | grep 'WARNING.*Ban' | sed s/'fail2ban.actions: WARNING '//g | grep -E --color 'ssh-iptables-perma|$'`
To provide an example, the last line of output from the command above is this:
2015-05-04 11:17:24,551 [ssh-iptables] Ban x.x.x.x
And using this command:
`clear && tail -F -n1000 /var/log/fail2ban.log | grep 'WARNING.*Ban' | sed s/'fail2ban.actions: WARNING '//g`
the last line is this:
2015-05-04 19:45:17,615 [ssh-iptables] Ban y.y.y.y
Removing pipeline stages gets me closer to the most recent entries.
How can I avoid this buffering in the pipes?
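For reference, the GNU versions of both tools have switches that disable output buffering when writing to a pipe: `grep --line-buffered` and `sed -u` (unbuffered). Assuming GNU grep and GNU sed, the pipeline could be rewritten like this:

```shell
# Force line buffering in each intermediate stage so matches appear
# immediately instead of waiting for a block-sized buffer to fill.
clear && tail -F -n1000 /var/log/fail2ban.log \
  | grep --line-buffered 'WARNING.*Ban' \
  | sed -u 's/fail2ban.actions: WARNING //g' \
  | grep -E --color 'ssh-iptables-perma|$'
```

The final `grep` writes to the terminal, where output is line-buffered by default, so it needs no flag. For tools that lack such an option, coreutils' `stdbuf -oL some-tool ...` can often force line-buffered output instead.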
egmont explained it better, but you also gave me the correct option for `sed`. – BrainStone – 2015-05-04T18:44:32.330

A hint from grep's manpage: "This can cause a performance penalty." – Cyrus – 2015-05-04T18:47:02.267

I thought so. But that won't really affect us, since there are at most a few lines per minute, which a good server should be able to handle. But thank you for the warning anyway. – BrainStone – 2015-05-04T18:51:33.910