GStreamer mixer: mix 2 RTSP streams side by side with gst-launch -> timestamping problem - gstreamer-1.0

I am trying to display two streams side by side with gst-launch.
An error occurs, but the streams are displayed:
gstbasesink.c(2902): gst_base_sink_is_too_late (): /GstPipeline:pipeline0/GstAutoVideoSink:autovideosink0/GstXvImageSink:autovideosink0-actual-sink-xvimage:
A single RTSP source is displayed correctly.
I tried setting the latency parameter to 500, with no success.
gst-launch-1.0 -e \
videomixer name=mix \
sink_0::xpos=0 sink_0::ypos=0 sink_0::alpha=0 \
sink_1::xpos=640 sink_1::ypos=0 \
rtspsrc location=rtsp://192.168.9.20:554/axis-media/media.amp user-id=username user-pw=password latency=150 \
! decodebin max-size-time=30000000000 \
! videoconvert ! videoscale \
! video/x-raw,width=640,height=480 \
! mix.sink_1 \
rtspsrc location=rtsp://192.168.9.24:554/axis-media/media.amp user-id=username user-pw=password latency=150 \
! decodebin max-size-time=30000000000 \
! videoconvert ! videoscale \
! video/x-raw,width=640,height=480 \
! mix.sink_2 \
mix. ! queue ! videoconvert ! autovideosink
I want to create a mosaic of four RTSP streams.
Please help me resolve this problem. Thanks in advance.

The solution is to use:
mix. ! queue ! videoconvert ! xvimagesink sync=false
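Applied to the pipeline above, the full command might look like this (a sketch: the mixer pads are also renamed to sink_0/sink_1 so the position properties actually apply, and sink_0::alpha=0, which would make the first stream fully transparent, is dropped):
gst-launch-1.0 -e \
videomixer name=mix \
sink_0::xpos=0 sink_0::ypos=0 \
sink_1::xpos=640 sink_1::ypos=0 \
rtspsrc location=rtsp://192.168.9.20:554/axis-media/media.amp user-id=username user-pw=password latency=150 \
! decodebin max-size-time=30000000000 \
! videoconvert ! videoscale \
! video/x-raw,width=640,height=480 \
! mix.sink_0 \
rtspsrc location=rtsp://192.168.9.24:554/axis-media/media.amp user-id=username user-pw=password latency=150 \
! decodebin max-size-time=30000000000 \
! videoconvert ! videoscale \
! video/x-raw,width=640,height=480 \
! mix.sink_1 \
mix. ! queue ! videoconvert ! xvimagesink sync=false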

Related

Multiple appsinks with a single udpsrc; GStreamer, using the tee element in OpenCV VideoCapture

I am trying to access video frames from a single udpsrc (a camera) across 2 processes using OpenCV VideoCapture.
Process One: an OpenCV Python application which uses the frames to do some image processing.
Process Two: another OpenCV Python application doing a completely different task.
I need to create a VideoCapture object in both of these applications to access the video stream, but when I use
cap = cv2.VideoCapture("udpsrc address=192.169.0.5 port=11024 ! application/x-rtp, media=video, clock-rate=90000, encoding-name=H264, payload=96 ! rtph264depay ! decodebin ! nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! video/x-raw,format=BGR ! appsink")
in both processes, only one of them is able to successfully create a cap object.
I came across the "tee" element for creating two sinks, but I don't know where and how to implement it.
Can anyone help with this?
I tried creating a GStreamer pipeline using the tee element, something like this:
gst-launch-1.0 udpsrc address=192.169.0.5 port=11024 ! application/x-rtp, media=video, clock-rate=90000, encoding-name=H264, payload=96 ! rtph264depay ! decodebin ! tee name=t \
t. ! nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! video/x-raw,format=BGR ! autovideosink \
t. ! nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! video/x-raw,format=BGR ! appsink
But I have no idea how to use this in VideoCapture().
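One way around the single-receiver limitation (a sketch, untested; the localhost relay and the ports 5000/5001 are placeholder choices, not part of the original setup) is to run one relay process that receives the camera stream once, tees the depayloaded H.264, and re-payloads it to two local UDP ports, one per consumer process:
gst-launch-1.0 udpsrc address=192.169.0.5 port=11024 ! \
application/x-rtp, media=video, clock-rate=90000, encoding-name=H264, payload=96 ! \
rtph264depay ! h264parse ! tee name=t \
t. ! queue ! rtph264pay pt=96 ! udpsink host=127.0.0.1 port=5000 \
t. ! queue ! rtph264pay pt=96 ! udpsink host=127.0.0.1 port=5001
Each OpenCV process then opens its own VideoCapture with the original pipeline string, pointing udpsrc at port=5000 or port=5001 respectively.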

GStreamer - stream with image overlay to YouTube

I'm trying to stream from my Jetson Nano with a Pi Camera 2 to YouTube with GStreamer.
Streaming video alone works, but I need to overlay the video with an image using multifilesrc (the image will change over time).
After many hours I have not succeeded in incorporating multifilesrc into the pipeline.
I have tried compositor and videomixer, but both failed. Maybe using nvcompositor?
Any ideas?
This is what I have so far:
gst-launch-1.0 nvarguscamerasrc sensor-id=0 ! \
"video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)30/1" ! omxh264enc ! \
'video/x-h264, stream-format=(string)byte-stream' ! \
h264parse ! queue ! flvmux name=muxer alsasrc device=hw:1 ! \
audioresample ! "audio/x-raw,rate=48000" ! queue ! \
voaacenc bitrate=32000 ! aacparse ! queue ! muxer. muxer. ! \
rtmpsink location="rtmp://a.rtmp.youtube.com/live2/x/xxx app=live2"
EDIT: I tried this, but it is not working:
gst-launch-1.0 \
nvcompositor name=mix sink_0::zorder=1 sink_1::alpha=1.0 sink_1::zorder=2 ! nvvidconv ! omxh264enc ! \
'video/x-h264, stream-format=(string)byte-stream' ! \
h264parse ! queue ! flvmux name=muxer alsasrc device=hw:1 ! \
audioresample ! "audio/x-raw,rate=48000" ! queue ! \
voaacenc bitrate=32000 ! aacparse ! queue ! muxer. muxer. ! \
rtmpsink location="rtmp://a.rtmp.youtube.com/live2/x/xxx app=live2" \
nvarguscamerasrc sensor-id=0 ! \
"video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)30/1" ! \
nvvidconv ! video/x-raw, format=RGBA, width=1920, height=1080, framerate=30/1 ! autovideoconvert ! queue ! mix.sink_0 \
filesrc location=logo.png ! pngdec ! alphacolor ! video/x-raw,format=RGBA ! imagefreeze ! nvvidconv ! mix.sink_1
Although it may work in some cases without these, for nvcompositor I'd advise using RGBA format in NVMM memory with pixel-aspect-ratio=1/1 for both inputs and for the output. Use caps filters after nvvidconv to be sure about the input pipelines, and use nvvidconv to convert the nvcompositor output into NV12 (still in NVMM memory) before encoding.
You may also add a queue on the 2nd input (the logo) before the compositor. Probably not mandatory, but safer. You may also set a framerate in the caps after imagefreeze.
Lastly, you may have to set xpos, ypos, width, and height for all sources for more reliable behavior.
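Putting those suggestions together, the EDIT pipeline might become something like this (a sketch; the sink_1 position and size values are placeholders for wherever the logo should go):
gst-launch-1.0 \
nvcompositor name=mix \
sink_0::xpos=0 sink_0::ypos=0 sink_0::width=1920 sink_0::height=1080 sink_0::zorder=1 \
sink_1::xpos=50 sink_1::ypos=50 sink_1::width=320 sink_1::height=180 sink_1::alpha=1.0 sink_1::zorder=2 ! \
'video/x-raw(memory:NVMM), format=RGBA, pixel-aspect-ratio=1/1' ! \
nvvidconv ! 'video/x-raw(memory:NVMM), format=NV12' ! omxh264enc ! \
'video/x-h264, stream-format=(string)byte-stream' ! \
h264parse ! queue ! flvmux name=muxer alsasrc device=hw:1 ! \
audioresample ! "audio/x-raw,rate=48000" ! queue ! \
voaacenc bitrate=32000 ! aacparse ! queue ! muxer. muxer. ! \
rtmpsink location="rtmp://a.rtmp.youtube.com/live2/x/xxx app=live2" \
nvarguscamerasrc sensor-id=0 ! \
"video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)30/1" ! \
nvvidconv ! 'video/x-raw(memory:NVMM), format=RGBA, pixel-aspect-ratio=1/1' ! queue ! mix.sink_0 \
filesrc location=logo.png ! pngdec ! alphacolor ! video/x-raw,format=RGBA ! imagefreeze ! \
video/x-raw,framerate=30/1 ! nvvidconv ! 'video/x-raw(memory:NVMM), format=RGBA, pixel-aspect-ratio=1/1' ! queue ! mix.sink_1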

gstreamer-imx video streaming and encoding

I am currently using a Nitrogen 6 Max development board. I am attempting to retrieve video from my webcam through v4l2src so that the feed can be both streamed and encoded for saving.
This is the pipeline, and it works:
v4l2src device="/dev/video2" ! tee name=t
t. ! queue ! x264enc ! mp4mux ! filesink location=test.mp4
t. ! queue ! videoconvert ! autovideosink
Then I attempted to use the gstreamer-imx library. I spent time looking around and found that this works:
gst-launch-1.0 -e videotestsrc num-buffers=1000 ! \
video/x-raw,width=640,height=480,framerate=30/1 ! imxvpuenc_h264 ! \
h264parse ! avdec_h264 ! filesink location=cx1.mp4
However, when I attempt to use "tee" to split up the video source, it just freezes and my terminal session locks up.
gst-launch-1.0 -e videotestsrc num-buffers=1000 ! autovideoconvert ! tee name=t \
t. ! video/x-raw,width=640,height=480,framerate=30/1 ! imxvpuenc_h264 ! h264parse ! avdec_h264 ! filesink location=cx1.mp4 \
t. ! video/x-raw,width=640,height=480,framerate=30/1 ! autovideosink
I tried isolating the issue by encoding through the tee, and realized that this runs, but the video file it generates is corrupted:
gst-launch-1.0 -e videotestsrc num-buffers=1000 ! tee name=t \
t. ! video/x-raw,width=640,height=480,framerate=30/1 ! imxvpuenc_h264 ! \
h264parse ! avdec_h264 ! filesink location=cx1.mp4
I tried using queues and videoconvert, but it does not seem to work.
Also, another question: I am new to GstElement capabilities, which decide which elements can be linked (e.g., v4l2src's video/x-raw capability includes I420, which is why I can link it to imxvpuenc_h264). For the tee element, does it split and replicate the capabilities of its src pad?
I am new to GStreamer, and I can't seem to work around this issue. Can someone help me out here?
A few hints to help you out (see the sketch after these hints):
As a rule, always use queues at the outputs of the tee so that it doesn't block your pipelines.
Another option to avoid blocking is to set async=false in your sink elements.
Try setting dts-method=2 on the mp4mux to see if it makes a difference.
The first line of troubleshooting when working with GStreamer is the debug output. Please inspect and share the output of GST_DEBUG=2 gst-launch-1.0 ....
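For example, here is the freezing tee pipeline with these hints applied (a sketch, untested on i.MX hardware; note it muxes the parsed H.264 into MP4 with mp4mux instead of decoding it with avdec_h264 before the filesink):
gst-launch-1.0 -e videotestsrc num-buffers=1000 ! autovideoconvert ! \
video/x-raw,width=640,height=480,framerate=30/1 ! tee name=t \
t. ! queue ! imxvpuenc_h264 ! h264parse ! mp4mux dts-method=2 ! filesink location=cx1.mp4 async=false \
t. ! queue ! videoconvert ! autovideosink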

Record multiple RTSP streams into a single file

I need to record 4 RTSP streams into a single file.
Streams must be placed into the video in this way:
 ---------------------
|          |          |
| STREAM 1 | STREAM 2 |
|          |          |
|----------+----------|
|          |          |
| STREAM 3 | STREAM 4 |
|          |          |
 ---------------------
I need to synchronize these live streams with roughly one-second accuracy. This is challenging because the streams have variable framerates (FPS).
I have tried ffmpeg, but the streams are not synchronized.
Here is the command:
ffmpeg \
-i "rtsp://IP-ADDRESS/cam/realmonitor?channel=1&subtype=00" \
-i "rtsp://IP-ADDRESS/live?real_stream" \
-i "rtsp://IP-ADDRESS/live?real_stream" \
-i "rtsp://IP-ADDRESS/live?real_stream" \
-filter_complex " \
nullsrc=size=1920x1080 [base]; \
[0:v] scale=960x540 [video0]; \
[1:v] scale=960x540 [video1]; \
[2:v] scale=960x540 [video2]; \
[3:v] scale=960x540 [video3]; \
[base][video0] overlay=shortest=1:x=0:y=0 [tmp1]; \
[tmp1][video1] overlay=shortest=0:x=960:y=0 [tmp2]; \
[tmp2][video2] overlay=shortest=0:x=0:y=540 [tmp3]; \
[tmp3][video3] overlay=shortest=0:x=960:y=540 [v]; \
[0:a]amix=inputs=1[a]" \
-map "[v]" -map "[a]" -c:v h264 videos/test-combine-cams.mp4
Is there a way to combine and synchronize streams in ffmpeg or using other utilities like: vlc, openRTSP, OpenCV?
Have you tried GStreamer? It works with my RTSP streams.
gst-launch-1.0 -e rtspsrc location=rtsp_url1 ! rtph264depay ! h264parse ! decodebin ! videoconvert ! m.sink_0 \
rtspsrc location=rtsp_url2 ! rtph264depay ! h264parse ! decodebin ! videoconvert ! m.sink_1 \
rtspsrc location=rtsp_url3 ! rtph264depay ! h264parse ! decodebin ! videoconvert ! m.sink_2 \
rtspsrc location=rtsp_url4 ! rtph264depay ! h264parse ! decodebin ! videoconvert ! m.sink_3 \
videomixer name=m sink_1::xpos=1280 sink_2::ypos=720 sink_3::xpos=1280 sink_3::ypos=720 ! x264enc ! mp4mux ! filesink location=./out.mp4 sync=true
Of course, you will need to add your own RTSP URLs and adjust the videomixer xpos/ypos properties based on your video size (mine was 720p).
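For example, for a 1920x1080 output with each stream scaled to a 960x540 quadrant (matching the layout of the ffmpeg attempt above), a variant might be (a sketch; the videoscale elements and caps are additions):
gst-launch-1.0 -e rtspsrc location=rtsp_url1 ! rtph264depay ! h264parse ! decodebin ! videoconvert ! videoscale ! video/x-raw,width=960,height=540 ! m.sink_0 \
rtspsrc location=rtsp_url2 ! rtph264depay ! h264parse ! decodebin ! videoconvert ! videoscale ! video/x-raw,width=960,height=540 ! m.sink_1 \
rtspsrc location=rtsp_url3 ! rtph264depay ! h264parse ! decodebin ! videoconvert ! videoscale ! video/x-raw,width=960,height=540 ! m.sink_2 \
rtspsrc location=rtsp_url4 ! rtph264depay ! h264parse ! decodebin ! videoconvert ! videoscale ! video/x-raw,width=960,height=540 ! m.sink_3 \
videomixer name=m sink_1::xpos=960 sink_2::ypos=540 sink_3::xpos=960 sink_3::ypos=540 ! x264enc ! mp4mux ! filesink location=./out.mp4 sync=true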
Before mixing, you may want to run just one stream at a time to make sure you have all the dependencies installed correctly:
gst-launch-1.0 rtspsrc location=rtsp_url1 ! rtph264depay ! h264parse ! decodebin ! x264enc ! mp4mux ! filesink location=./out.mp4 sync=true
I have not yet added the audio.

What kind of stream does GStreamer produce?

I use the following 2 commands to stream video from a Raspberry Pi.
RaPi
raspivid -t 999999 -h 720 -w 1080 -fps 25 -hf -b 2000000 -o - | gst-launch-1.0 -v fdsrc ! h264parse ! rtph264pay config-interval=1 pt=96 ! gdppay ! tcpserversink host=$RA-IP-ADDR port=5000
Linux Box
gst-launch-1.0 -v tcpclientsrc host=$RA-IP-ADDR port=5000 ! gdpdepay ! rtph264depay ! avdec_h264 ! videoconvert ! autovideosink sync=false
But what kind of stream is it? Can I read it with OpenCV? Or convert it with avconv/ffmpeg via nc $RA-IP-ADDR 5000 | avconv? Or watch it with VLC?
The stream appears to be an RTP stream encapsulated in a GDP stream, the latter of which appears to be specific to GStreamer. You might be able to remove the gdppay and gdpdepay elements from your pipeline and use other RTP tools (there are plenty out there; I believe VLC supports RTP directly), but you could also use a GStreamer pipeline to write the depayloaded stream (in this case, the H.264 video it contains) to a file on the Linux Box side, like so:
gst-launch-1.0 tcpclientsrc host=$RA-IP-ADDR port=5000 ! gdpdepay ! rtph264depay ! filesink location=$FILENAME
or, to pipe it to stdout:
gst-launch-1.0 tcpclientsrc host=$RA-IP-ADDR port=5000 ! gdpdepay ! rtph264depay ! fdsink
One or the other of these should let you operate on the H.264 video at a stream level.
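For example, assuming ffplay is installed, you could pipe the raw H.264 straight into a player (a sketch; -q suppresses gst-launch's progress output so it doesn't corrupt the pipe):
gst-launch-1.0 -q tcpclientsrc host=$RA-IP-ADDR port=5000 ! gdpdepay ! rtph264depay ! fdsink | ffplay -f h264 -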
GStreamer 1.0 can also interact with libav more or less directly if you have the right plugin. Use gst-inspect-1.0 libav to see the elements supported. The avdec_h264 element already in your pipeline is one of these libav elements.
