Create a video from 5 images with fade-in/out effects in ffmpeg

From 5 images I have to create a video of 60 seconds in ffmpeg, with each image displayed for 15 seconds. After 15 seconds the first image has to fade out and the second has to fade in; then the second fades out and the third fades in, and so on. Please guide me on how I can achieve this using ffmpeg commands.

user384847

Posted 2014-10-29T12:16:16.000

Reputation: 331

What have you tried, so that we can assist you? This forum isn't a "please give me the answer without doing any work" type of forum.

With that said, I will provide you the link to ffmpeg fade documentation - https://www.ffmpeg.org/ffmpeg-filters.html#fade

– Mike Diglio – 2014-10-29T12:31:05.240

Answers

Dip/fade to black

Scroll down for the crossfade method.

fade example

Example where each image is displayed for 5 seconds and each fade lasts 1 second. Each image input has the same width, height, and sample aspect ratio. If they vary in size, see example #3 below.

MP4 output

ffmpeg \
-loop 1 -t 5 -i input0.png \
-loop 1 -t 5 -i input1.png \
-loop 1 -t 5 -i input2.png \
-loop 1 -t 5 -i input3.png \
-loop 1 -t 5 -i input4.png \
-filter_complex \
"[0:v]fade=t=out:st=4:d=1[v0]; \
 [1:v]fade=t=in:st=0:d=1,fade=t=out:st=4:d=1[v1]; \
 [2:v]fade=t=in:st=0:d=1,fade=t=out:st=4:d=1[v2]; \
 [3:v]fade=t=in:st=0:d=1,fade=t=out:st=4:d=1[v3]; \
 [4:v]fade=t=in:st=0:d=1,fade=t=out:st=4:d=1[v4]; \
 [v0][v1][v2][v3][v4]concat=n=5:v=1:a=0,format=yuv420p[v]" -map "[v]" out.mp4

With audio

Same as above but with audio:

ffmpeg \
-loop 1 -t 5 -i input0.png \
-loop 1 -t 5 -i input1.png \
-loop 1 -t 5 -i input2.png \
-loop 1 -t 5 -i input3.png \
-loop 1 -t 5 -i input4.png \
-i audio.m4a \
-filter_complex \
"[0:v]fade=t=out:st=4:d=1[v0]; \
 [1:v]fade=t=in:st=0:d=1,fade=t=out:st=4:d=1[v1]; \
 [2:v]fade=t=in:st=0:d=1,fade=t=out:st=4:d=1[v2]; \
 [3:v]fade=t=in:st=0:d=1,fade=t=out:st=4:d=1[v3]; \
 [4:v]fade=t=in:st=0:d=1,fade=t=out:st=4:d=1[v4]; \
 [v0][v1][v2][v3][v4]concat=n=5:v=1:a=0,format=yuv420p[v]" -map "[v]" -map 5:a -shortest out.mp4

For input images with varying or arbitrary sizes

Like the first example, but with input images that vary in width x height. They will be padded to fit within a 1280x720 box:

ffmpeg \
-loop 1 -t 5 -i input0.png \
-loop 1 -t 5 -i input1.png \
-loop 1 -t 5 -i input2.png \
-loop 1 -t 5 -i input3.png \
-loop 1 -t 5 -i input4.png \
-filter_complex \
"[0:v]scale=1280:720:force_original_aspect_ratio=decrease,pad=1280:720:(ow-iw)/2:(oh-ih)/2,setsar=1,fade=t=out:st=4:d=1[v0]; \
 [1:v]scale=1280:720:force_original_aspect_ratio=decrease,pad=1280:720:(ow-iw)/2:(oh-ih)/2,setsar=1,fade=t=in:st=0:d=1,fade=t=out:st=4:d=1[v1]; \
 [2:v]scale=1280:720:force_original_aspect_ratio=decrease,pad=1280:720:(ow-iw)/2:(oh-ih)/2,setsar=1,fade=t=in:st=0:d=1,fade=t=out:st=4:d=1[v2]; \
 [3:v]scale=1280:720:force_original_aspect_ratio=decrease,pad=1280:720:(ow-iw)/2:(oh-ih)/2,setsar=1,fade=t=in:st=0:d=1,fade=t=out:st=4:d=1[v3]; \
 [4:v]scale=1280:720:force_original_aspect_ratio=decrease,pad=1280:720:(ow-iw)/2:(oh-ih)/2,setsar=1,fade=t=in:st=0:d=1,fade=t=out:st=4:d=1[v4]; \
 [v0][v1][v2][v3][v4]concat=n=5:v=1:a=0,format=yuv420p[v]" -map "[v]" out.mp4

See the examples in Resizing videos to fit into static sized player if you want to crop (fill the screen) instead of pad (letterbox/pillarbox), or if you want to prevent upscaling.
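If you want crop-to-fill rather than letterbox, a commonly used variant of the per-input chain is the following sketch (not tested here): force_original_aspect_ratio=increase scales the image up until it covers the 1280x720 box, and crop then trims the overflow.

```
scale=1280:720:force_original_aspect_ratio=increase,crop=1280:720,setsar=1
```

Substitute this for the scale+pad+setsar portion of each per-input chain in example #3.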

GIF output

Adds the filters from How do I convert a video to GIF using ffmpeg, with reasonable quality?

ffmpeg \
-framerate 10 -loop 1 -t 5 -i input0.png \
-framerate 10 -loop 1 -t 5 -i input1.png \
-framerate 10 -loop 1 -t 5 -i input2.png \
-framerate 10 -loop 1 -t 5 -i input3.png \
-framerate 10 -loop 1 -t 5 -i input4.png \
-filter_complex \
"[0:v]fade=t=out:st=4:d=1[v0]; \
 [1:v]fade=t=in:st=0:d=1,fade=t=out:st=4:d=1[v1]; \
 [2:v]fade=t=in:st=0:d=1,fade=t=out:st=4:d=1[v2]; \
 [3:v]fade=t=in:st=0:d=1,fade=t=out:st=4:d=1[v3]; \
 [4:v]fade=t=in:st=0:d=1,fade=t=out:st=4:d=1[v4]; \
 [v0][v1][v2][v3][v4]concat=n=5:v=1:a=0,split[v0][v1]; \
 [v0]palettegen[p];[v1][p]paletteuse[v]" -map "[v]" out.gif

Use the -loop output option to control how many times the GIF loops. If the option is omitted, the default is an infinite loop; a value of -1 means no loop.

Options and filters used:

  • -t sets the duration, in seconds, of each input.

  • -loop 1 loops the image; otherwise it would last only a single frame.

  • -framerate sets the input image frame rate (defaults to 25 when undeclared). Useful for making GIFs.

  • scale with pad fits the input images into a specific, uniform size (used in example #3).

  • fade fades in and out; d is the duration of the fade, and st is when it starts.

  • concat concatenates (or "joins") each image.

  • format outputs a chroma subsampling scheme compatible with non-FFmpeg-based players when outputting MP4 and encoding with libx264 (the default encoder for MP4 output if your build supports it).

  • split makes copies of a filter output; needed by the palette* filters to do everything in one command.

  • palettegen and paletteuse make nicer-looking GIFs.
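For the original question (15 seconds per image instead of 5), only the fade-out start time changes: it must start at display minus fade seconds. A minimal sketch, assuming a 15-second display and 1-second fades (DISPLAY and FADE are assumed values, not from the answer above), that prints the per-input filter strings:

```shell
#!/bin/sh
# Sketch: build the per-input fade filter strings for the original
# question's timing (5 images, 15 s each, 1 s fades).
DISPLAY=15               # seconds each image is shown (assumption)
FADE=1                   # fade duration in seconds (assumption)
ST=$((DISPLAY - FADE))   # fade-out must start DISPLAY-FADE seconds in

i=0
FILTERS=""
while [ "$i" -lt 5 ]; do
  if [ "$i" -eq 0 ]; then
    # first image only fades out
    FILTERS="$FILTERS[$i:v]fade=t=out:st=$ST:d=$FADE[v$i];"
  else
    FILTERS="$FILTERS[$i:v]fade=t=in:st=0:d=$FADE,fade=t=out:st=$ST:d=$FADE[v$i];"
  fi
  i=$((i + 1))
done
FILTERS="$FILTERS[v0][v1][v2][v3][v4]concat=n=5:v=1:a=0,format=yuv420p[v]"
echo "$FILTERS"
```

The output drops into -filter_complex exactly like the 5-second examples above.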


Crossfade

crossfade example

Example where each image is displayed for 5 seconds and each crossfade lasts 1 second. Each image input has the same width, height, and sample aspect ratio. If they vary in size, adapt example #3 above.

MP4 output

ffmpeg \
-loop 1 -t 5 -i 1.png \
-loop 1 -t 5 -i 2.png \
-loop 1 -t 5 -i 3.png \
-loop 1 -t 5 -i 4.png \
-loop 1 -t 5 -i 5.png \
-filter_complex \
"[1]format=yuva444p,fade=d=1:t=in:alpha=1,setpts=PTS-STARTPTS+4/TB[f0]; \
 [2]format=yuva444p,fade=d=1:t=in:alpha=1,setpts=PTS-STARTPTS+8/TB[f1]; \
 [3]format=yuva444p,fade=d=1:t=in:alpha=1,setpts=PTS-STARTPTS+12/TB[f2]; \
 [4]format=yuva444p,fade=d=1:t=in:alpha=1,setpts=PTS-STARTPTS+16/TB[f3]; \
 [0][f0]overlay[bg1];[bg1][f1]overlay[bg2];[bg2][f2]overlay[bg3]; \
 [bg3][f3]overlay,format=yuv420p[v]" -map "[v]" -movflags +faststart out.mp4
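The setpts offsets above (+4/TB, +8/TB, ...) follow from simple arithmetic: image i is delayed by i times (display minus crossfade) seconds, so with 5-second images and 1-second crossfades the step is 4 seconds and the total runtime is 21 seconds. A small sketch of that arithmetic (values assumed from the example above):

```shell
#!/bin/sh
# Timing arithmetic behind the crossfade example's setpts offsets.
N=5; SHOW=5; FADE=1          # assumed values from the example above
STEP=$((SHOW - FADE))        # each overlaid image starts 4 s after the previous

i=1
while [ "$i" -lt "$N" ]; do
  echo "input $i: setpts=PTS-STARTPTS+$((i * STEP))/TB"
  i=$((i + 1))
done

# N images of SHOW seconds, minus (N-1) overlapping crossfades
TOTAL=$((N * SHOW - (N - 1) * FADE))
echo "total duration: ${TOTAL}s"
```

Change N, SHOW, or FADE and re-run to get the offsets for your own timings.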

With audio

ffmpeg \
-loop 1 -t 5 -i 1.png \
-loop 1 -t 5 -i 2.png \
-loop 1 -t 5 -i 3.png \
-loop 1 -t 5 -i 4.png \
-loop 1 -t 5 -i 5.png \
-i music.mp3 \
-filter_complex \
"[1]format=yuva444p,fade=d=1:t=in:alpha=1,setpts=PTS-STARTPTS+4/TB[f0]; \
 [2]format=yuva444p,fade=d=1:t=in:alpha=1,setpts=PTS-STARTPTS+8/TB[f1]; \
 [3]format=yuva444p,fade=d=1:t=in:alpha=1,setpts=PTS-STARTPTS+12/TB[f2]; \
 [4]format=yuva444p,fade=d=1:t=in:alpha=1,setpts=PTS-STARTPTS+16/TB[f3]; \
 [0][f0]overlay[bg1];[bg1][f1]overlay[bg2];[bg2][f2]overlay[bg3]; \
 [bg3][f3]overlay,format=yuv420p[v]" -map "[v]" -map 5:a -shortest -movflags +faststart out.mp4

Crossfade between two videos with audio

Select a 5-second segment from each input and add a 1-second crossfade:

ffmpeg -i input0.mp4 -i input1.mp4 -filter_complex \
"[0:v]trim=start=5:end=10,setpts=PTS-STARTPTS[v0];
 [1:v]trim=start=12:end=17,setpts=PTS-STARTPTS+4/TB,format=yuva444p,fade=st=4:d=1:t=in:alpha=1[v1];
 [v0][v1]overlay,format=yuv420p[v];
 [0:a]atrim=start=5:end=10,asetpts=PTS-STARTPTS[a0];
 [1:a]atrim=start=12:end=17,asetpts=PTS-STARTPTS[a1];
 [a0][a1]acrossfade=d=1[a]" \
-map "[v]" -map "[a]" output.mp4

GIF output

ffmpeg \
-framerate 10 -loop 1 -t 5 -i 1.png \
-framerate 10 -loop 1 -t 5 -i 2.png \
-framerate 10 -loop 1 -t 5 -i 3.png \
-framerate 10 -loop 1 -t 5 -i 4.png \
-framerate 10 -loop 1 -t 5 -i 5.png \
-filter_complex \
"[1]format=rgba,fade=d=1:t=in:alpha=1,setpts=PTS-STARTPTS+4/TB[f0]; \
 [2]format=rgba,fade=d=1:t=in:alpha=1,setpts=PTS-STARTPTS+8/TB[f1]; \
 [3]format=rgba,fade=d=1:t=in:alpha=1,setpts=PTS-STARTPTS+12/TB[f2]; \
 [4]format=rgba,fade=d=1:t=in:alpha=1,setpts=PTS-STARTPTS+16/TB[f3]; \
 [0][f0]overlay[bg1];[bg1][f1]overlay[bg2];[bg2][f2]overlay[bg3];[bg3][f3]overlay,split[v0][v1]; \
 [v0]palettegen[p];[v1][p]paletteuse[v]" -map "[v]" out.gif

Use the -loop output option to control how many times the GIF loops. If the option is omitted, the default is an infinite loop; a value of -1 means no loop.

llogan

Reputation: 31 929

Your solution is not working on Android; it gives the error "No such filter". Can you please help? – Janki Gadhiya – 2016-05-18T06:52:46.670

@jankigadhiya Make a new question. Include your full ffmpeg command and the complete console/log output. – llogan – 2016-05-18T16:43:27.490

@LordNeckbeard, thanks for this answer, I used it for another example. Have I understood correctly that the number following -t in -loop 1 -t 1 -i 001.png defines the duration of individual frames, and that the numbers before /TB within the filter_complex block define the transition's duration? And is the frame duration in this example counted including the transition's duration or not? – cincplug – 2016-05-19T08:27:14.920

@LordNeckbeard It's taking too much time to generate the video; can we reduce it? – Nisarg – 2016-08-05T12:42:55.303

@LordNeckbeard Yea sure please look at this

– Nisarg – 2016-08-06T04:30:25.280

@LordNeckbeard Yes Please check this

– Nisarg – 2016-08-06T08:51:05.353

@Nisarg That does not appear to be the complete output and I'm not sure which of the two commands you displayed earlier it is from. Anyway, try adding -preset ultrafast. – llogan – 2016-08-06T16:06:57.757

@LordNeckbeard Yeah, that's what I did :) but it still takes 1 to 2 minutes. Can we reduce this further, I mean by managing the inputs? – Nisarg – 2016-08-07T13:12:50.587

@LordNeckbeard I need to combine a set of images, video clips and an audio track to create a single video file (preferably ogg, but that is less relevant at this point). In addition, I need to create some transition effects between adjacent images. Is there any way to script this whole task using ffmpeg and/or other command line tools? The goal is to automate the task via a command line interface. – Web User – 2016-08-17T17:37:41.607

@WebUser Transitions are probably going to be easier using melt. – llogan – 2016-08-23T05:31:45.153

@LordNeckbeard sorry I am unfamiliar with melt. Can you point me to it? – Web User – 2016-08-25T22:59:52.827

How would you implement the setting of the SAR and DAR within the filter_complex? For example I need to add the additional filter "setsar=sar=1/1,setdar=dar=16/9" to this line in the filter_complex: "[1:v][0:v]blend=all_expr='A*(if(gte(T,0.5),1,T/0.5))+B*(1-(if(gte(T,0.5),1,T/0.5)))'[b1v];". Where would I place this additional filter? I have tried to place it after the "[0:v]" and I get the ffmpeg error, "Too many inputs specified for the "setsar" filter." – jrkt – 2016-09-22T19:22:14.163

@JonStevens [0:v]setsar=1[sar0];[1:v][sar0]blend... – llogan – 2016-09-23T02:17:12.583

I tried adding that and it still won't work.. I have opened a new question if you wouldn't mind looking at it @LordNeckbeard? http://superuser.com/questions/1127542/creating-video-from-images-with-different-sars-with-ffmpeg

– jrkt – 2016-09-23T14:06:38.080

Thanks for replying. I am trying to include a specific zoompan filter for each image so I can apply that filter at the same time. How would I also include that in the command above that uses the blend filter? Where would I put it? One example of a zoompan I'm trying to include is: zoompan=z='min(zoom+0.0005,1.5)':x='if(gte(zoom,1.5),x,x-1)':y='y' – jrkt – 2016-09-26T17:35:35.147

@LordNeckbeard I've created a question asking the above about combining the blend and zoompan filters into one command http://superuser.com/questions/1128563/ffmpeg-create-mp4-from-images-with-blend-and-zoompan-filters. I could really use some help.

– jrkt – 2016-09-27T14:10:07.937

The ffmpeg approach is working nicely for me; thanks! One tip for newcomers to the page: in the concat=n=9 part of the command, the 9 comes from the 5 images in the example + 4 transitions between the images. If you're handling a different number of images, you'll need to adjust that accordingly. – Jim Miller – 2017-03-13T20:58:11.573

Spoke too soon on the ffmpeg crossfade approach. It works for the original 1 sec / 0.5 sec, but when I try to do 2 sec / 0.5, it gives me image times of 2, 3.5, and 3.5 sec for the following 3 image merge: /usr/local/bin/ffmpeg -y -loop 1 -t 2 -i file1.png -loop 1 -t 2 -i file2.png -loop 1 -t 2 -i file3.png -filter_complex "[1:v][0:v]blend=all_expr='A*(if(gte(T,0.5),1,T/0.5))+B*(1-(if(gte(T,0.5),1,T/0.5)))'[b1v]; [2:v][1:v]blend=all_expr='A*(if(gte(T,0.5),1,T/0.5))+B*(1-(if(gte(T,0.5),1,T/0.5)))'[b2v]; [0:v][b1v][1:v][b2v][2:v]concat=n=5:v=1:a=0,format=yuv420p[v]" -map "[v]" out.mp4 ideas? – Jim Miller – 2017-03-16T00:33:09.813

@LordNeckbeard How can I do it without the "scale is not the same" or "SAR is not the same" error? Is there a parameter that permits creating a video from images with different scales and SARs? – 7sega7 – 2017-04-24T15:01:12.440

@7sega7 I'd have to see your command and console output to see what exactly is going on. You can use a pastebin site and provide the link here. – llogan – 2017-04-26T01:37:30.090

@LordNeckbeard Thanks for the great answer once again. We can add setdar=16/9 to prevent aspect-ratio-related issues. Can you suggest a solution to map the audio stream as well in the same script? I am getting weird responses each time I map it via the complex filter. – Killer – 2018-08-02T11:52:50.680

@7sega7 add scale=1280:720,setdar=16/9 for each image to scale and maintain aspect ratio – Killer – 2018-08-02T11:58:10.450

@Killer Added an example that has audio. – llogan – 2018-08-02T18:50:26.570

Tear of joy ( ͡↑ ͜ʖ ͡↑) thanks man @LordNeckbeard i tried every possible convention nearest was -map [5:a] but you finally updated the answer. Master – Killer – 2018-08-02T19:37:04.257

I wrote a general bash script that takes in a path to a folder of images, and outputs a crossfade video with ffmpeg:

https://gist.github.com/anguyen8/d0630b6aef6c1cd79b9a1341e88a573e

The script essentially looks at the images in a folder and prints out a command that is similar to the answer by @LordNeckbeard above, and executes the command. This script helps when you have many images in a folder and don't want to manually type in a depressingly long command.
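As a rough illustration of what such a generator does, here is a minimal POSIX-sh sketch (my own, not taken from the gist; the filenames passed in are examples) that builds the fade/concat command from the first example for any number of images:

```shell
#!/bin/sh
# Sketch: print an ffmpeg fade/concat command for the images passed
# as arguments. T and FADE are assumed timing values.
build_cmd() {
  T=5; FADE=1
  OUT_ST=$((T - FADE))   # fade-out starts T-FADE seconds into each image
  INPUTS=""; GRAPH=""; LABELS=""; n=0
  for f in "$@"; do
    INPUTS="$INPUTS -loop 1 -t $T -i $f"
    if [ "$n" -eq 0 ]; then
      # first image only fades out
      GRAPH="$GRAPH[$n:v]fade=t=out:st=$OUT_ST:d=$FADE[v$n];"
    else
      GRAPH="$GRAPH[$n:v]fade=t=in:st=0:d=$FADE,fade=t=out:st=$OUT_ST:d=$FADE[v$n];"
    fi
    LABELS="$LABELS[v$n]"
    n=$((n + 1))
  done
  GRAPH="$GRAPH${LABELS}concat=n=$n:v=1:a=0,format=yuv420p[v]"
  printf 'ffmpeg%s -filter_complex "%s" -map "[v]" out.mp4\n' "$INPUTS" "$GRAPH"
}

# example invocation with hypothetical filenames
build_cmd a.png b.png c.png
```

The printed command can be inspected before running; pass `img*.png` or any other list of files to generate the equivalent of the hand-written five-image version.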

anh_ng8

Reputation: 141

Sorry, but your script fails with ffmpeg 3.0.1; with inputs #0 to #4 it returns: "Invalid file index 5 in filtergraph description" – Krzysztof Bociurko – 2016-05-18T12:30:55.577

TobySpeight: good point, I've edited the answer to be more clear. Basically the main idea is already given by @LordNeckbeard above. This script just generalizes to many images. – anh_ng8 – 2016-09-21T12:54:47.113