I'd like to overlay the audio as a waveform coming from an Axis camera's stream.
I've started to read the documentation, but it's quite complex. For testing I use a test stream. So far I managed to pad the bottom:
I'd like to merge the two ffmpeg commands into one single command producing a video file with its audio waveform superimposed on the bottom,
#!/bin/bash
stream="rtsp://22.214.171.124/vod/mp4:BigBuckBunny_175k.mov"
padbottom=40

# 1 enlarging the video
#ffmpeg -i "$stream" -vf "pad=width=240:height=160+$padbottom:x=0:y=0:color=A9F5A9" videoPart.mp4

# 2 creating a waveform out of the audio
ffmpeg -i "$stream" -filter_complex "[0:a]showwaves=s=240x$padbottom:mode=line" audioPart.mp4
similar to this one:
As far as I understand the filter_complex documentation, this could be achieved by these steps:
1. Split the incoming stream into two streams (which I don't know how to do).
2. Pad the first stream (enlarging it by 40 pixels).
3. Apply showwaves or showvolume to the second stream, creating a 240 x 40 pixel video from it.
4. Merge the two streams together (but how? According to the documentation, 'split' is for cutting a media file, at least to my understanding).
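The steps above might be sketched along these lines; this is only my untested attempt, not a verified solution. The overlay position (y=160), the [padded]/[wave]/[out] labels, and the output name combined.mp4 are my own assumptions. The script builds the filtergraph in a variable and prints the final command so it can be inspected before actually running it against a camera:

```shell
#!/bin/bash
# Sketch only, untested against a real camera.
# Stream URL and sizes are taken from the two separate commands above.
stream="rtsp://22.214.171.124/vod/mp4:BigBuckBunny_175k.mov"
padbottom=40

# Pad the video at the bottom, as in command 1, and label the result [padded].
filtergraph="[0:v]pad=width=240:height=160+${padbottom}:x=0:y=0:color=A9F5A9[padded];"
# Render the audio waveform, as in command 2, and label it [wave].
filtergraph+="[0:a]showwaves=s=240x${padbottom}:mode=line[wave];"
# Place the waveform over the padded strip (y=160 = original video height).
filtergraph+="[padded][wave]overlay=x=0:y=160[out]"

# Print the full command for inspection before running it:
echo ffmpeg -i "$stream" -filter_complex "$filtergraph" \
  -map "[out]" -map 0:a combined.mp4
```

If I read the docs right, no explicit 'split' is needed here, because [0:v] and [0:a] already address the video and audio of input 0 as two separate streams; 'split' would only be needed to duplicate one stream. The merging in step 4 would then be done by 'overlay' (or possibly 'vstack').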
Could you please help me?