ffmpeg - mapping multiple audios to one channel


I want to output a video file with several sounds, but mixed into a single audio stream.
I've tried a couple of ways to do it, but I always end up with a video that has multiple separate audio streams, like this.

My command is

ffmpeg -r 30 -i test_%03d.png \
       -itsoffset 10 -ss 5 -t 20 -i s01.wav \
       -itsoffset 15 -ss 10 -t 30 -i s02.wav \
       -map 0:v -map 1:a:0 -map 2:a:0 -c:v libopenh264 -c:a mp3 test.mp4

My new test command is

ffmpeg -framerate 30 -i test_%03d.png -ss 0.2 -t 1 -i sound01.wav -ss 1 -t 2 -i sound02.wav \
       -filter_complex \
       "[1] aformat=sample_fmts=s16p:sample_rates=44100:channel_layouts=mono [a1];
        [2] aformat=sample_fmts=s16p:sample_rates=44100:channel_layouts=mono [a2];
        [a1]adelay=400|400,apad[b1];
        [a2]adelay=900|900[b2];
        [b1][b2]amerge=2[a]" \
       -map 0:v -map "[a]" -c:v libopenh264 -c:a mp3 -ac 2 output.mp4

I added aformat to the command because a "filters could not choose their formats" error occurred. But now another error is happening, "No channel layout for input 1", and the output video lacks sound01. Please help me out!

Ives

Posted 2017-12-24T07:43:35.543

Reputation: 235

It works after changing channel_layouts=mono to stereo, but I'm still getting the "No channel layout for input 1" error. – Ives – 2017-12-25T06:26:46.757

Answers


To combine multiple audio streams into one, you have to use filters to merge the streams:

ffmpeg -framerate 30 -i test_%03d.png -i s01.wav -i s02.wav \
       -filter_complex "[1][2]amerge=2[a]" \
       -map 0:v -map "[a]" \
       -c:v libopenh264 -c:a mp3 -ac 2 test.mp4

-framerate is the right input option for image sequences and raw streams.

The -ac 2 downmixes the merged audio to 2 channels since that's the upper limit for the MP3 encoder.


For the updated command,

ffmpeg -framerate 30 -i test_%03d.png \
       -ss 5 -t 20 -i s01.wav \
       -ss 10 -t 30 -i s02.wav \
       -filter_complex "[1]adelay=10000|10000,apad[a1];
                        [2]adelay=15000|15000[a2];
                        [a1][a2]amerge=2[a]" \
       -map 0:v -map "[a]" -c:v libopenh264 -c:a mp3 test.mp4

The adelay pads the audio in front (i.e. delays it), and takes a value in milliseconds per channel. The apad pads the audio at the end, and is needed because amerge ends with the shortest stream.

Gyan

Posted 2017-12-24T07:43:35.543

Reputation: 21 016

thanks, it does merge the streams. If I want to merge more audio streams, can I just change it to -filter_complex "[1][2][3][4]amerge=4[a]"? – Ives – 2017-12-24T08:48:07.340

And I found that if I merge the streams, the individual input settings like -itsoffset, -ss and -t are ignored. – Ives – 2017-12-24T08:50:18.347

can I just change -filter_complex "[1][2][3][4]amerge=4[a] ? --> yes. – Gyan – 2017-12-24T09:01:25.550

Yes, those will be ignored and amerge will end with the shortest stream, so you should trim them beforehand. – Gyan – 2017-12-24T09:02:19.680

so basically I can't do this in a one-line command? I need to trim the sound files so their durations match the video duration, and then merge them with the video? – Ives – 2017-12-24T09:12:30.913

You can, but that makes for a longer filter_complex graph. I adapted your given command, which doesn't do any trimming or offsetting. – Gyan – 2017-12-24T09:22:30.043
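Such a longer graph might look like the following sketch, which folds the trimming and offsets from the original question into the filtergraph. The atrim/asetpts steps and the -shortest flag are assumptions added here, not part of the answer above; timings and file names are taken from the question.

```shell
# Sketch: trim, reset timestamps, delay, pad and merge in one command.
# atrim cuts each input (seconds), asetpts restarts its timestamps at 0,
# adelay then applies the itsoffset-style delay, and apad keeps the merged
# stream alive; -shortest stops output at the end of the video.
ffmpeg -framerate 30 -i test_%03d.png -i s01.wav -i s02.wav \
       -filter_complex "[1]atrim=start=5:end=25,asetpts=PTS-STARTPTS,adelay=10000|10000,apad[a1];
                        [2]atrim=start=10:end=40,asetpts=PTS-STARTPTS,adelay=15000|15000[a2];
                        [a1][a2]amerge=2[a]" \
       -map 0:v -map "[a]" -shortest -c:v libopenh264 -c:a mp3 -ac 2 test.mp4
```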

I edited my command, please check it out! – Ives – 2017-12-24T09:28:09.923

thanks, I added adelay and apad to the command, and another error occurred; I added it to my post. – Ives – 2017-12-25T02:31:22.257

How do I set the apad parameter when I have three or more sound streams? – Ives – 2017-12-25T07:48:50.247
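One possible pattern for that case (a sketch, not confirmed in this thread): since amerge ends with the shortest input, apply apad to every stream except the one expected to end last. The delay values here are placeholders.

```shell
# Sketch for three inputs: pad all streams except the last-ending one,
# so amerge isn't cut short; -ac 2 downmixes the 3 merged channels.
ffmpeg -framerate 30 -i test_%03d.png -i s01.wav -i s02.wav -i s03.wav \
       -filter_complex "[1]adelay=10000|10000,apad[a1];
                        [2]adelay=15000|15000,apad[a2];
                        [3]adelay=20000|20000[a3];
                        [a1][a2][a3]amerge=3[a]" \
       -map 0:v -map "[a]" -c:v libopenh264 -c:a mp3 -ac 2 test.mp4
```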