Unexpectedly large HLS TS Files

It seems that the HLS segmenting sometimes does not work properly for a few streams. The current TS file keeps growing until the drive is full, and no further segmenting happens. Most of my streams have been running perfectly for weeks (with the same command line).

Here is what I know about one of my problematic source streams (ffprobe output): https://justpaste.it/5d0px

My command line looks like this:

/root/bin/ffmpeg -loglevel 16 -probesize 10M -analyzeduration 10M -i http://host/streams/stream.m3u8 -map 0:1 -map 0:0 -c:a copy -c:v copy -f hls -hls_time 10 -hls_flags delete_segments -hls_base_url http://host/ts/ -hls_segment_filename /var/media/ts/stream_%03d.ts /var/media/playlist/stream.m3u8

I'm logging FFmpeg activity to a log file, and I saw only one error, when my drive filled up:

av_interleaved_write_frame(): No space left on device

Error writing trailer of /var/media/playlist/stream.m3u8: No space left on device

I've tried these FFmpeg versions: N-93765-gfcc01ba and 4.1.3.

The HLS m3u8 playlist looks fine and the durations are okay, but the last TS file grows until my drive is full. I'm using a RAM drive to store the TS files.

Edit: Some updates after running FFmpeg for a whole day with loglevel 48:

  • The log file is huge (~790 MB), so I'm only sharing the error I get. Of course, I'll upload it if you need the whole thing.
  • The segmenting stopped at the 9538th segment file (see the HLS settings above). This is the only error I see at the point where the segmenting stopped:

Non-monotonous DTS in output stream 0:0; previous: 8584618042, current: 307231408; changing to 8584618043. This may result in incorrect timestamps in the output file.
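A back-of-the-envelope check on my side (only my assumption, I may be wrong): MPEG-TS PTS/DTS values are 33-bit counters in 90 kHz units, and the numbers in the error above look like they wrapped around that limit:

    # Assumption: MPEG-TS PTS/DTS are 33-bit values in 90 kHz ticks, wrapping at 2**33.
    wrap_ticks = 2 ** 33                  # 8589934592
    wrap_seconds = wrap_ticks / 90000.0   # ~95443.7 s, roughly 26.5 hours
    segments_10s = wrap_seconds / 10      # ~9544 ten-second segments
    print(wrap_ticks, round(wrap_seconds, 1), round(segments_10s))
    # The "previous" DTS 8584618042 above is just below 2**33, and the problem
    # shows up around segment 9538, which is close to this ~9544 segment limit.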

On my production server, where most of my streams are running, I made a simple Python script that detects whether a TS file is bigger than 20 MB (TS files are normally 2 MB at most).
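A rough sketch of what that script does (reconstructed for this post, not the exact script I run; the directory and threshold come from the setup described above):

    #!/usr/bin/env python3
    # Flag any HLS segment in the segment directory that is larger than 20 MB.
    import datetime
    import glob
    import os

    TS_DIR = "/var/media/ts"        # segment directory from the ffmpeg command above
    LIMIT = 20 * 1024 * 1024        # 20 MB threshold

    for path in glob.glob(os.path.join(TS_DIR, "*.ts")):
        if os.path.getsize(path) > LIMIT:
            now = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")
            print("{}: {} bigger than 20MB".format(now, path))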

Here are my results:

2019-05-21 08:02:02: /var/media/ts/tlc_9538.ts bigger than 20MB
2019-05-21 08:02:03: /var/media/ts/eurosport_9538.ts bigger than 20MB
2019-05-21 08:02:04: /var/media/ts/digi_sport1_9538.ts bigger than 20MB
2019-05-21 09:27:01: /var/media/ts/travel_9538.ts bigger than 20MB
2019-05-21 09:48:01: /var/media/ts/digi_sport2_9538.ts bigger than 20MB
2019-05-21 11:21:01: /var/media/ts/digi_world_9538.ts bigger than 20MB
2019-05-21 11:22:01: /var/media/ts/history_9538.ts bigger than 20MB
2019-05-21 12:39:01: /var/media/ts/viasat_history_9538.ts bigger than 20MB

As you can see, this happens at the 9538th TS file every time. Is there any reason for that?

SudoSu

Posted 2019-05-21T06:26:03.193

Which is the last non-growing segment? Try to reproduce but set loglevel to 48. – Gyan – 2019-05-21T08:04:06.453

Some updates: as you suggested, I started an FFmpeg process with loglevel 48; it's still running (on another server). It sometimes takes a few days to hit this error; it's absolutely random. I got this message on one of my production streams: [hls @ 0x45e6ec0] Cannot use rename on non file protocol, this may lead to races and temporary partial files – SudoSu – 2019-05-22T08:08:53.247

Updated the post. See above – SudoSu – 2019-05-22T12:10:59.470

Looks like a TS wrapping error. Upload the full log. I'll look at it in the next few days when I have time. – Gyan – 2019-05-22T13:32:14.860

Here you go: https://drive.google.com/open?id=1C8nmtlKHaYrl4KFkBuwczCsiAe6bWePm – SudoSu – 2019-05-22T14:01:44.647

I have some new findings:

  • If my input comes from a multicast source, the segmenting does not stop at the 9538th segment.

So maybe I should set some extra parameters when I'm using HLS as the source? Thanks for the advice, it helped with the debugging. If you need any more info, I'll provide it. – SudoSu – 2019-05-28T14:00:14.433

No answers