FFmpeg Recipes
Any technology, no matter how primitive, is magic to those who don’t understand it. - Arthur C. Clarke / Mark Stanley
Here are some magical incantations that help me do cool stuff with FFmpeg.
Fade vision + audio between 2 videos
ffmpeg -i big_buck_bunny.mp4 -i blackout.mkv -filter_complex \
"[0:v]fade=t=out:st=5:d=1:alpha=1,setpts=PTS-STARTPTS[v0];
[1:v]fade=t=in:st=0:d=1:alpha=1,setpts=PTS-STARTPTS+5/TB[v1];
[v0][v1]overlay[v];
[0][1]acrossfade=d=1[a]" \
-map [v] -map [a] result.mp4
st=5:d=1
    start the fade-out of the first clip at 5 seconds, with a fade duration of 1 second
acrossfade=d=1
    crossfade the audio from clip 1 to clip 2, duration 1 second
PTS-STARTPTS+5/TB
    the second clip needs to start 5 seconds after the first clip; 5/TB is "5 divided by the time base"
Thanks to this post: https://opensourceforu.com/2015/04/get-friendly-with-ffmpeg/
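The fade start (st=5) has to be the first clip's duration minus the fade duration, so it is worth computing rather than eyeballing. A small sketch of that, with the duration hard-coded where ffprobe would normally supply it (file names are the same placeholders as above; the command is echoed rather than run):

```shell
#!/bin/bash
# Build the crossfade filtergraph for two clips.
CLIP1=big_buck_bunny.mp4
CLIP2=blackout.mkv
FADE=1

# Duration of clip 1 in seconds. Hard-coded so the sketch runs standalone;
# normally something like:
#   DUR=$(ffprobe -v error -show_entries format=duration -of csv=p=0 "$CLIP1")
DUR=6

# Fade-out must start FADE seconds before the end of clip 1, and the
# second clip's PTS offset is that same value.
ST=$(awk -v d="$DUR" -v f="$FADE" 'BEGIN { print d - f }')

FILTER="[0:v]fade=t=out:st=${ST}:d=${FADE}:alpha=1,setpts=PTS-STARTPTS[v0];
[1:v]fade=t=in:st=0:d=${FADE}:alpha=1,setpts=PTS-STARTPTS+${ST}/TB[v1];
[v0][v1]overlay[v];
[0][1]acrossfade=d=${FADE}[a]"

# Replace echo with the real invocation once the filenames are right.
echo ffmpeg -i "$CLIP1" -i "$CLIP2" -filter_complex "$FILTER" \
    -map '[v]' -map '[a]' result.mp4
```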
Mux mp4 + subtitles into an MKV
ffmpeg -fflags +genpts -i infile.mp4 -f srt -i subtitles.srt \
-map 0:0 -map 0:1 -map 1:0 -c:v copy -c:a copy -c:s srt outfile.mkv
-fflags +genpts
necessary if you get "Can't write packet with unknown timestamp" errors
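The muxed subtitle stream ends up with no language tag, so players can't label it sensibly. A sketch of the same mux with a language tag added (file names are placeholders; the command is echoed for inspection, swap in eval to run it):

```shell
# Same mux as above, plus a language tag on the first subtitle stream.
cmd="ffmpeg -fflags +genpts -i infile.mp4 -f srt -i subtitles.srt \
-map 0:0 -map 0:1 -map 1:0 -c:v copy -c:a copy -c:s srt \
-metadata:s:s:0 language=eng outfile.mkv"
echo "$cmd"    # replace echo with eval to actually run it
```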
Extract part of video using start + end times
Transcode the audio track to lossless FLAC while applying loudness normalisation (loudnorm)
#!/bin/bash
#
# process tab delimited 'runsheet' file with format
# output name input name start time end time
#
# where start/end time are hh:mm:ss
#
tosecs() {
    date '+%s' --date="$1"
}
fromsecs() {
    ((h=${1}/3600))
    ((m=(${1}%3600)/60))
    ((s=${1}%60))
    printf "%02d:%02d:%02d\n" $h $m $s
}
IFS=$'\n'
for i in `cat runsheet`; do
    oname=`echo $i | cut -f1`
    iname=`echo $i | cut -f2 | tr -d "'"`
    start=`echo $i | cut -f3`
    end=`echo $i | cut -f4`
    to=""
    if [[ -n $end ]]; then
        t=$(( $(tosecs "$end") - $(tosecs "$start") ))
        to="-t $(fromsecs $t)"
    fi
    if [[ -n $start ]]; then
        ss="-ss $start"
    else
        ss="-ss 00:00:00"
    fi
    cmd="ffmpeg $ss $to -i \"../$iname\" -vcodec copy -acodec flac -filter:a loudnorm $oname.mkv"
    echo $cmd
    eval $cmd
    date
    echo
done
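For reference, a runsheet for the script above might look like this (the names are made up; columns are separated by a single tab, and the end time on the last line is omitted to run to the end of the file):

```
clip1	'Concert Night 1.mkv'	00:01:30	00:05:00
clip2	'Concert Night 1.mkv'	01:10:00
```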
Record from microphone, encode to Opus and send to network socket
ffmpeg -f pulse -i default -acodec libopus -b:a 96000 -vbr on -compression_level 10 -f rtp rtp://127.0.0.1:1234
Play from RTP stream:
Create SDP file using details output from ffmpeg command.
v=0
o=- 0 0 IN IP4 127.0.0.1
s=No Name
c=IN IP4 127.0.0.1
t=0 0
a=tool:libavformat 58.12.100
m=audio 1234 RTP/AVP 97
b=AS:96
a=rtpmap:97 opus/48000/2
a=fmtp:97 sprop-stereo=1
Then pass SDP file to RTP client:
ffplay -protocol_whitelist file,udp,rtp -i opus.sdp
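Newer ffmpeg builds can write the SDP for you with the -sdp_file option, which avoids hand-copying it from the console output. A sketch of both ends, echoed rather than executed (addresses and ports are the same placeholders as above):

```shell
# Sender: stream mic audio as Opus over RTP and write the matching SDP file.
send="ffmpeg -f pulse -i default -acodec libopus -b:a 96000 -vbr on \
-compression_level 10 -f rtp -sdp_file opus.sdp rtp://127.0.0.1:1234"
echo "$send"    # replace echo with eval to run

# Receiver: the protocol whitelist must be set before the input is opened.
recv="ffplay -protocol_whitelist file,udp,rtp -i opus.sdp"
echo "$recv"
```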
Video Capture Using EasyCAP USB analog to digital converter
- Capture audio from device1 (hw:1)
- Capture video from /dev/video0 as PAL (720x576 50hz)
- de-interlace (yadif)
- encode video using the 'fast' preset (the 'slow' preset was dropping frames)
- encode audio as AAC at 128 kb/s
Software encoding
ffmpeg \
-f alsa -ac 2 -thread_queue_size 512 -i hw:1 \
-f v4l2 -standard PAL -thread_queue_size 512 -i /dev/video0 \
-vf yadif -c:v libx264 -preset fast -crf 23 \
-c:a aac -b:a 128k \
output.mkv
Hardware encoding using VAAPI
ffmpeg \
-f alsa -ac 2 -thread_queue_size 1024 -i hw:1 \
-f v4l2 -standard PAL -thread_queue_size 1024 -i /dev/video0 \
-vaapi_device /dev/dri/renderD128 -vf 'format=nv12,hwupload' -threads 4 -vcodec h264_vaapi -qp:v 23 \
-c:a aac -b:a 128k \
output.mkv
No deinterlacing with hardware encoding :(
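One workaround, assuming there is CPU headroom, is to run yadif in software before uploading frames to the GPU, so only the H.264 encode stays on hardware. A sketch (echoed; device paths are the same as above and may differ on other machines):

```shell
# EasyCAP capture: yadif deinterlaces on the CPU, then frames are
# uploaded to the VAAPI device for hardware H.264 encoding.
cmd="ffmpeg \
-f alsa -ac 2 -thread_queue_size 1024 -i hw:1 \
-f v4l2 -standard PAL -thread_queue_size 1024 -i /dev/video0 \
-vaapi_device /dev/dri/renderD128 \
-vf 'yadif,format=nv12,hwupload' -vcodec h264_vaapi -qp:v 23 \
-c:a aac -b:a 128k \
output.mkv"
echo "$cmd"    # replace echo with eval to run
```

Some drivers also expose a deinterlace_vaapi filter that runs after hwupload, which keeps the whole chain on the GPU, but availability depends on the hardware.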
Convert sequence of JPEG images to MP4 video
Simple glob:
ffmpeg -r 24 -pattern_type glob -i '*.JPG' -s hd1080 -vcodec libx264 timelapse.mp4
Start from DSC_0079.JPG
ffmpeg -r 24 -f image2 -start_number 79 -i DSC_%04d.JPG -s hd1080 -vcodec libx264 timelapse2.mp4
-r 24
    output frame rate
-s hd1080
    1920x1080 resolution
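The start number can be pulled out of the first filename instead of being typed by hand. A sketch of the parsing, with the filename hard-coded where a directory listing would normally supply it:

```shell
# Extract the numeric part of the first frame's name, e.g. DSC_0079.JPG -> 79.
first=DSC_0079.JPG          # normally: first=$(ls DSC_*.JPG | head -n 1)
num=${first#DSC_}           # strip prefix  -> 0079.JPG
num=${num%.JPG}            # strip suffix  -> 0079
num=$((10#$num))           # force base 10 -> 79 (leading zeros would read as octal)

echo ffmpeg -r 24 -f image2 -start_number "$num" -i 'DSC_%04d.JPG' \
    -s hd1080 -vcodec libx264 timelapse2.mp4
```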
Slower, better quality
Add the following after -vcodec libx264 to achieve better-quality output:
-crf 18 -preset slow
Bulk convert JPGs to 1920x1080, centered (this one uses ImageMagick's convert rather than FFmpeg):
convert input.jpg -resize '1920x1080^' -gravity center -crop '1920x1080+0+0' output.jpg
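The "bulk" part is just a shell loop over the convert command above. A sketch that writes results into a resized/ directory (the output layout is an assumption; the commands are echoed for a dry run, drop the echo to execute):

```shell
# Resize and centre-crop every JPG into resized/, keeping the filename.
mkdir -p resized
for f in *.jpg; do
    [ -e "$f" ] || continue   # no matches: skip the literal '*.jpg'
    echo convert "$f" -resize '1920x1080^' -gravity center \
        -crop '1920x1080+0+0' "resized/$f"
done
```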