Re-streaming live video with FFmpeg

Using FFmpeg to combine multiple live video sources (for example webcams) and re-stream them to a video streaming service like YouTube, or any other that supports RTMP.

This is an example of how to combine two live webcam streams over a background image with a watermark/logo.

We specify the background image, the two streams and a watermark image as inputs to FFmpeg and set up a filter to compose them. We also add a null audio track so that YouTube is happy with the stream.

The FFmpeg command used is (we will break it down and explain everything below!):

ffmpeg \
-loop 1 -i background_image.png \
-rtbufsize 32M -i https://camera1.stream.url/hls/stream.m3u8 \
-rtbufsize 32M -i https://camera2.stream.url/hls/stream.m3u8 \
-i logo.png \
-filter_complex "[1:v]pad=iw+8:ih+8:4:4:black[a]; \
 [2:v]pad=iw+8:ih+8:4:4:black[b]; \
 [0:v][a]overlay=30:(main_h/2)-(overlay_h/2)[c]; \
 [c][b]overlay=main_w-overlay_w-30:(main_h/2)-(overlay_h/2)[tvideo]; \
 [tvideo][3:v]overlay=x=(main_w-overlay_w)/2:y=(main_h-overlay_h)/2[video]; \
 anullsrc=cl=mono:r=44100[audio]" \
-map "[video]" -map "[audio]" \
-c:v libx264 -pix_fmt yuv420p -tune zerolatency -crf 28 -x264-params keyint=20:scenecut=0 \
-movflags +faststart \
-f flv rtmp://url.to.ingestion.server/streamkey/etc

What the command does is load a background image (1920x1080) as video [0], two live streams as video [1] and [2], and a logo as video [3]. The two video streams [1] and [2] are padded with a thin black border, vertically centered, placed on the left and right sides, and overlaid on the background image. Finally the logo is placed in the middle.
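To make the numbers concrete, here is the overlay arithmetic for the 852x480 camera streams we use here (more on that assumption below); the logo size is just an assumed example:

# Each 852x480 stream gets a 4 px black border on every side: 852+8 x 480+8 = 860x488
# Left stream:  x = 30                       y = (1080 - 488) / 2 = 296
# Right stream: x = 1920 - 860 - 30 = 1030   y = (1080 - 488) / 2 = 296
# Logo (assuming 400x200): x = (1920 - 400) / 2 = 760   y = (1080 - 200) / 2 = 440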

A dummy null audio track (silence) is added to make some RTMP ingestion services happy with the stream (YouTube does not like a video-only stream).
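If you prefer, the silence can also come from a separate lavfi input instead of being generated inside the filter graph; a rough sketch of that variant:

# added as a fifth input after logo.png, so it becomes index [4]
-f lavfi -i anullsrc=cl=mono:r=44100
# the anullsrc line is then dropped from -filter_complex, and the audio is mapped with -map 4:a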

Lastly, the video and audio are mapped to the output.
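If one of the cameras already carries sound, you could of course map that instead of the generated silence. A sketch, assuming input [1] has an audio track (the AAC bitrate is just an example):

-map "[video]" -map 1:a -c:a aac -b:a 128k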

The resulting video is compressed with H.264 (libx264), tuned for low latency, with a key frame forced every 20 frames, and encapsulated in FLV for the RTMP ingestion.
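Roughly the same keyframe behaviour can also be expressed with FFmpeg's generic options instead of -x264-params; with libx264, -g sets the keyframe interval and -sc_threshold 0 disables scene-cut keyframes:

-c:v libx264 -pix_fmt yuv420p -tune zerolatency -crf 28 -g 20 -sc_threshold 0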

We are taking a couple of shortcuts here: we know the resolution of the incoming streams (852x480) and that they fit nicely side by side on a Full HD background canvas, so we are not scaling them in any way as we don't need to. If your streams are a different size, scaling/cropping might be required.
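If scaling is needed, it can be chained in front of the pad filter; a sketch that forces each camera to 852x480 before the border is added (use crop instead of, or in addition to, scale if the aspect ratio differs):

[1:v]scale=852:480,pad=iw+8:ih+8:4:4:black[a];
[2:v]scale=852:480,pad=iw+8:ih+8:4:4:black[b];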

For testing purposes you can save to a file instead of sending to a server: just change "-f flv rtmp://url.to.ingestion.server/streamkey/etc" to, for example, test.mov.
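For example, the last line of the command could become something like this for a one-minute local test (-t just stops the recording automatically; the filename is arbitrary):

# instead of: -f flv rtmp://url.to.ingestion.server/streamkey/etc
-t 60 test.mov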
 

Breaking down the FFmpeg command line

Each part of the command line explained:

-loop 1 -i background_image.png
    First "video" input, FFmpeg index [0]: the background image.

-rtbufsize 32M -i https://camera1.stream.url/hls/stream.m3u8
    Second video input, FFmpeg index [1]: the first live video stream.

-rtbufsize 32M -i https://camera2.stream.url/hls/stream.m3u8
    Third video input, FFmpeg index [2]: the second live video stream.

-i logo.png
    Fourth video input, FFmpeg index [3]: the watermark logo.

-filter_complex "
[1:v]pad=iw+8:ih+8:4:4:black[a];
[2:v]pad=iw+8:ih+8:4:4:black[b];
[0:v][a]overlay=30:(main_h/2)-(overlay_h/2)[c];
[c][b]overlay=main_w-overlay_w-30:(main_h/2)-(overlay_h/2)[tvideo];
[tvideo][3:v]overlay=x=(main_w-overlay_w)/2:y=(main_h-overlay_h)/2[video];
anullsrc=cl=mono:r=44100[audio]"
    The compositing filter.

-map "[video]" -map "[audio]"
    Stream mapping.

-c:v libx264 -pix_fmt yuv420p -tune zerolatency -crf 28 -x264-params keyint=20:scenecut=0
    H.264 compression settings.

-movflags +faststart -f flv rtmp://url.to.ingestion.server/streamkey/etc
    The output settings.
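Since the layout relies on knowing the incoming resolution, it can be worth double-checking what the cameras actually deliver before going live; a quick check with ffprobe (same HLS URL as above):

ffprobe -v error -select_streams v:0 -show_entries stream=codec_name,width,height -of default=noprint_wrappers=1 https://camera1.stream.url/hls/stream.m3u8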