
I have a program that generates large logs on its standard output. I don't actually care about the full log, but I do want to keep the last 1000 lines or so from when the program finishes or crashes.

I want something in-between "tail -f" and "> log": that is, I want to monitor the output while continuously saving only the last 1000 lines to a file. If I do "program > log", everything is saved and the log file gets too big. If I do "tail -f", I can monitor the tail of the output, but nothing is kept.
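(For plain "monitor and save at once" there's "program | tee log", but that still writes everything, so the file grows just as big.)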

I want something that, conceptually, does this:

$ program >log &
$ while true; do sleep 1s; tail -n1000 log > saved_log; done

but without producing the intermediate log file, since it grows too big. Does such a tool exist?

Standard Unix toolkit preferred, but I'm open to suggestions.

  • GNU split will work on `STDIN`, something like `program | split -l 1000 &`. Then you could `tail -F xa{a,b,c,d,e,f,g,h,i,j}` to see the logs as they were written. But you'd have to clear the older logs out manually, so it's not a complete solution, and it only works for a few logs. Clunky, so your solution below may well be better. – Unbeliever Oct 29 '16 at 07:18

1 Answer


I'd like a better answer, but in the meantime I wrote this terrible little Python script. There must be something standard that does this...

$ cat proposed/box-restart/loglast1000.py 
#!/usr/bin/env python

from __future__ import print_function
import sys, time

fn = sys.argv[1]          # file that receives the rolling snapshot
last1000 = []

def output_last1000():
    # Rewrite the snapshot file with (at most) the last 1000 lines.
    with open(fn, 'w') as output:
        for l in last1000[-1000:]:
            print(l, end='', file=output)

t = time.time()
for line in sys.stdin:
    last1000.append(line)
    # Trim only when the buffer doubles, so the copying is amortized.
    if len(last1000) > 2000:
        last1000 = last1000[-1000:]
    # Snapshot at most every half second.
    if time.time() - t > 0.5:
        output_last1000()
        t = time.time()

# Final snapshot when the program exits or the pipe closes.
output_last1000()
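
To use it, pipe the program's output through the script and give it the snapshot file as the argument (assuming the script is executable):

$ program | ./loglast1000.py saved_log

For what it's worth, the manual trimming can be avoided with collections.deque: with maxlen=1000 the deque drops the oldest line automatically. A minimal sketch of the same idea:

#!/usr/bin/env python
# Same approach with a bounded deque: maxlen=1000 means the deque
# silently discards the oldest line once it is full.
from collections import deque
import sys, time

fn = sys.argv[1]
last1000 = deque(maxlen=1000)

t = time.time()
for line in sys.stdin:
    last1000.append(line)
    if time.time() - t > 0.5:   # snapshot at most every half second
        with open(fn, 'w') as output:
            output.writelines(last1000)
        t = time.time()

# final snapshot when the program exits or the pipe closes
with open(fn, 'w') as output:
    output.writelines(last1000)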