For some time I've been using this pair of commands to convert a video segment to an animated GIF, having ffmpeg
calculate the best palette for it:
ffmpeg -ss $START -i $IN_FILE -t $LENGTH -vf "fps=$FPS,scale=$WIDTH:-1:flags=lanczos,palettegen" palette.png
ffmpeg -ss $START -i $IN_FILE -i palette.png -t $LENGTH -filter_complex "fps=$FPS,scale=$WIDTH:-1:flags=lanczos [x]; [x] [1:v] paletteuse" output.gif
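As a side note, the two passes can be combined into one command with the split filter, so the input is decoded (and, for a URL, downloaded) only once; this is a sketch with placeholder values, not the exact pipeline above:

```shell
# Placeholder values - adjust to taste; IN_FILE may be a local path or a URL.
START=00:00:05; LENGTH=3; FPS=12; WIDTH=480; IN_FILE=input.mp4

# One-pass variant: split the decoded frames, feed one copy to palettegen
# and the other copy (plus the resulting palette) to paletteuse.
# -ss before -i is frame-accurate here because we re-encode (no stream copy).
ffmpeg -ss "$START" -t "$LENGTH" -i "$IN_FILE" -filter_complex \
  "fps=$FPS,scale=$WIDTH:-1:flags=lanczos,split[a][b];[a]palettegen[p];[b][p]paletteuse" \
  output.gif
```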
This works very well for local files, but if I use a remote URL for $IN_FILE, it downloads the required portion twice - once for the palette generation and once for the actual conversion.
Downloading the full file in advance is generally out of the question - often I'm interested in a very small sequence in the middle of a longer video.
I tried to download just the small portion using -ss and -t, saving it - without re-encoding - to a temporary file:
ffmpeg -ss $START -i $IN_URL -t $LENGTH -c:v copy -an temp.mkv
In this case I do avoid the bandwidth waste (only the relevant portion of the file gets downloaded, and only once), but the seek is no longer precise: when stream-copying, -ss on the input can only seek with the granularity of key frames.
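To see where the copy can actually start, you can list the key-frame timestamps of the input - the stream copy will begin at one of these, not at $START. A sketch (input.mp4 is a placeholder name):

```shell
# List only key frames: -skip_frame nokey makes the decoder discard
# everything but I-frames, then we print each surviving frame's
# timestamp and picture type as bare CSV.
ffprobe -v error -select_streams v:0 -skip_frame nokey \
  -show_entries frame=pts_time,pict_type -of csv=p=0 input.mp4
```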
When converting to GIF it would in theory be possible to do an extra, precise seek to fix this, but there doesn't seem to be a way to recover the original timestamp of the start of the temporary file generated above, so I can't calculate where to -ss to when transcoding to the GIF. I tried playing with -copyts, but it didn't yield anything good.
The trivial solution is to re-encode in this first step (possibly applying the scale/resample in the process, to avoid doing it twice later), but I'd like to avoid the cost and quality loss of an extra, useless encode.
So: how can I perform the best-palette video to gif conversion of a small portion of a potentially big networked file, fetching it efficiently (=download once, only the relevant portion), with precise seek and without extra re-encodings?
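One approach worth trying (a sketch with placeholder values, not verified against every container): keep the original timestamps while stream-copying with -copyts, so the temporary file still starts at the key frame before $START expressed in the source timeline; the transcoding pass can then -ss to the same absolute $START for a precise cut:

```shell
# Placeholder values; IN_URL may be a remote URL.
START=00:00:02; LENGTH=2; FPS=10; WIDTH=160; IN_URL=input.mp4

# Step 1: fetch the segment once, stream-copying and KEEPING the original
# timestamps (-copyts), so the file still "knows" where it sits in the
# source timeline instead of being shifted to start at zero.
ffmpeg -y -ss "$START" -copyts -i "$IN_URL" -t "$LENGTH" -c:v copy -an temp.mkv

# Step 2: seek to the same absolute $START in the copy; since timestamps
# were preserved, this lands exactly where the original seek would have.
ffmpeg -y -ss "$START" -i temp.mkv -filter_complex \
  "fps=$FPS,scale=$WIDTH:-1:flags=lanczos,split[a][b];[a]palettegen[p];[b][p]paletteuse" \
  output.gif
```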
Users may need to buffer the 2nd split copy, since palettegen, by default, only returns the palette after analyzing the whole stream, while paletteuse waits for that palette before it starts processing. So, for longer streams, to avoid frame drops from [b], buffer it like this: [b]fifo[b];[b][pal]... – Gyan – 2018-05-17T07:20:17.877
@Gyan: uh, that's another thing I didn't know; I thought filters did all the needed buffering by themselves! I'm fixing this immediately. – Matteo Italia – 2018-05-17T07:26:08.350
In newer versions they do, up to a point. But ffmpeg is also a streaming app, so there's a drop threshold in effect. – Gyan – 2018-05-17T07:27:47.233
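For reference, Gyan's fifo suggestion applied to a single-pass split graph might look like this (a sketch with placeholder values; [a], [b], [pal], [buf] are just label names):

```shell
# Placeholder values; IN_FILE may be a local path or a URL.
START=00:00:05; LENGTH=3; FPS=12; WIDTH=480; IN_FILE=input.mp4

# fifo buffers the second split copy while palettegen is still scanning
# the whole stream, so no frames get dropped on longer segments.
ffmpeg -ss "$START" -t "$LENGTH" -i "$IN_FILE" -filter_complex \
  "fps=$FPS,scale=$WIDTH:-1:flags=lanczos,split[a][b];[a]palettegen[pal];[b]fifo[buf];[buf][pal]paletteuse" \
  output.gif
```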