GStreamer pipeline + OpenCV VideoCapture.read() returns None

I'm trying to get GStreamer + OpenCV RTSP video capture working using the following:

import cv2

vcap = cv2.VideoCapture("""rtspsrc location=rtsp://192.168.100.60:554/stream1 latency=0 ! queue ! rtph264depay
! h264parse ! avdec_h264 ! videoconvert ! appsink""", cv2.CAP_GSTREAMER)
while True:
    ret, frame = vcap.read()
    print(frame)
    cv2.imshow('VIDEO', frame)
    cv2.waitKey(1)
However, the frame read by vcap is None:
(<unknown>:79564): GLib-GObject-WARNING **: 00:27:54.660: invalid cast from 'GstQueue' to 'GstBin'
(<unknown>:79564): GStreamer-CRITICAL **: 00:27:54.660: gst_bin_iterate_elements: assertion 'GST_IS_BIN (bin)' failed
(<unknown>:79564): GStreamer-CRITICAL **: 00:27:54.660: gst_iterator_next: assertion 'it != NULL' failed
(<unknown>:79564): GStreamer-CRITICAL **: 00:27:54.660: gst_iterator_free: assertion 'it != NULL' failed
[ WARN:0#0.020] global /tmp/opencv-20220409-60041-xvxfur/opencv-4.5.5/modules/videoio/src/cap_gstreamer.cpp (1226) open OpenCV | GStreamer warning: cannot find appsink in manual pipeline
[ WARN:0#0.020] global /tmp/opencv-20220409-60041-xvxfur/opencv-4.5.5/modules/videoio/src/cap_gstreamer.cpp (862) isPipelinePlaying OpenCV | GStreamer warning: GStreamer: pipeline have not been created
None
Traceback (most recent call last):
  File "/Volumes/Data/Projects/rtmp_test/src/test.py", line 21, in <module>
    read(1)
  File "/Volumes/Data/Projects/rtmp_test/src/test.py", line 18, in read
    cv2.imshow('VIDEO', frame)
cv2.error: OpenCV(4.5.5) /tmp/opencv-20220409-60041-xvxfur/opencv-4.5.5/modules/highgui/src/window.cpp:1000: error: (-215:Assertion failed) size.width>0 && size.height>0 in function 'imshow'
The stream can be played in VLC perfectly fine, and gst-launch-1.0 rtspsrc location=rtsp://192.168.100.60:554/stream1 latency=0 ! queue ! rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! appsink gives regular output. Does anyone know what might be wrong?
UPDATE: I've noticed that this problem occurs only on OSX. It works fine on my Ubuntu machine.

You may try specifying caps with video format BGR (or GRAY8 for monochrome) before the appsink, as this is the format OpenCV expects in most cases (and maybe simplifying the quoting), such as:
gst_pipeline='rtspsrc location=rtsp://192.168.100.60:554/stream1 latency=0 ! queue ! rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! video/x-raw,format=BGR ! appsink drop=1'
vcap = cv2.VideoCapture(gst_pipeline, cv2.CAP_GSTREAMER)
Also note that printing each frame to the terminal inside the loop may prevent it from running at the expected framerate, depending on your use case and platform.
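Putting it together, a minimal sketch of the capture loop with those caps, assuming the same camera URL as in the question (drop=1 keeps only the newest frame, and checking isOpened() surfaces a failed pipeline before read() returns None):

import cv2

gst_pipeline = ('rtspsrc location=rtsp://192.168.100.60:554/stream1 latency=0 '
                '! queue ! rtph264depay ! h264parse ! avdec_h264 '
                '! videoconvert ! video/x-raw,format=BGR ! appsink drop=1')
vcap = cv2.VideoCapture(gst_pipeline, cv2.CAP_GSTREAMER)
if not vcap.isOpened():
    raise RuntimeError('Failed to open GStreamer pipeline')
while True:
    ret, frame = vcap.read()
    if not ret:  # stream ended or a frame could not be decoded
        break
    cv2.imshow('VIDEO', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
vcap.release()
cv2.destroyAllWindows()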

Related

cv::handleMessage OpenCV | GStreamer warning: Embedded video playback halted; module udpsrc1 reported: Internal data stream error

I am using the following versions:
opencv-4.2.0, GStreamer 1.20.2, Python 3.7, Windows 10.
I want to play video from an RTSP camera using GStreamer. Why does this gst-pipeline not run from Python in VS Code, while the same pipeline runs perfectly from cmd?
Pipeline:
rtspsrc location=rtsp://... ! rtph264depay ! queue ! h264parse ! d3d11h264dec ! d3d11convert ! video/x-raw(memory:D3D11Memory), format=(string)NV12 ! appsink
The error I am getting is as follows:
[ WARN:0] global opencv-4.2.0\modules\videoio\src\cap_gstreamer.cpp (1759) cv::handleMessage OpenCV | GStreamer warning: Embedded video playback halted; module udpsrc1 reported: Internal data stream error.
[ WARN:0] global opencv-4.2.0\modules\videoio\src\cap_gstreamer.cpp (888) cv::GStreamerCapture::open OpenCV | GStreamer warning: unable to start pipeline
[ WARN:0] global opencv-4.2.0\modules\videoio\src\cap_gstreamer.cpp (480) cv::GStreamerCapture::isPipelinePlaying OpenCV | GStreamer warning: GStreamer: pipeline have not been created
Can you please let me know how to solve this issue?
You may try adding caps after udpsrc:
rtspsrc location=rtsp://... ! application/x-rtp,encoding-name=H264 ! rtph264depay ! ...
This was bad advice; specifying these caps makes sense with udpsrc, but they are not required for rtspsrc.
Re-reading it now, the issue is probably the memory space. OpenCV may only expect system memory for now, so you may try:
rtspsrc location=rtsp://... ! rtph264depay ! queue ! h264parse ! d3d11h264dec ! d3d11convert ! video/x-raw ! videoconvert ! video/x-raw, format=BGR ! appsink drop=1
Here we convert to BGR, as most OpenCV color algorithms expect; if you intend to process NV12 frames, just change the format (I'm not sure videoconvert is required, but if it isn't, it should pass through with low overhead).
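For what it's worth, a short Python sketch of opening that pipeline (the rtsp://... location is left elided as in the question); checking isOpened() catches the "unable to start pipeline" case before read() is called:

import cv2

pipeline = ('rtspsrc location=rtsp://... ! rtph264depay ! queue ! h264parse '
            '! d3d11h264dec ! d3d11convert ! video/x-raw ! videoconvert '
            '! video/x-raw, format=BGR ! appsink drop=1')
cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
if not cap.isOpened():
    raise RuntimeError('GStreamer pipeline failed to start')
ret, frame = cap.read()  # frame is a BGR numpy array when ret is True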

Add new sink to gstreamer examples camera

I'm very new to GStreamer and I can't write the output to a file.
I'm testing the gstreamer detect example in the coral/examples-camera/gstreamer project. This demo runs perfectly on my Coral Dev Board Mini, but I need help adding a filesink to the GStreamer pipeline of this project:
PIPELINE += """ ! decodebin ! queue ! v4l2convert ! {scale_caps} !
glupload ! glcolorconvert ! video/x-raw(memory:GLMemory),format=RGBA !
tee name=t
t. ! queue ! glfilterbin filter=glbox name=glbox ! queue ! {sink_caps} ! {sink_element}
t. ! queue ! glsvgoverlay name=gloverlay sync=false ! glimagesink fullscreen=true
qos=false sync=false
"""
The demo uses the glimagesink element to display the video on the screen, and I need to add a sink that saves the video to a file.
Executing with debug enabled, I've captured what I believe is the video format at the glimagesink element; the debug output is:
GST_PADS gstpad.c:3160:gst_pad_query_accept_caps_default:<salida:sink>[00m allowed caps subset ANY, caps video/x-raw(memory:GLMemory), framerate=(fraction)30/1, interlace-mode=(string)progressive, width=(int)640, height=(int)480, format=(string)RGBA, texture-target=(string)2D
I've tried this pipeline configuration:
Coral board model: mt8167
GStreamer pipeline:
v4l2src device=/dev/video0 ! video/x-raw,width=640,height=480,framerate=30/1 ! decodebin ! queue ! v4l2convert ! video/x-raw,format=BGRA,width=640,height=480 !
glupload ! glcolorconvert ! video/x-raw(memory:GLMemory),format=RGBA !
tee name=t
t. ! queue ! glfilterbin filter=glbox name=glbox ! queue ! video/x-raw,format=RGB,width=320,height=320 ! appsink name=appsink emit-signals=true max-buffers=1 drop=true
t. ! queue ! glsvgoverlay name=gloverlay sync=false !
videoconvert ! x264enc ! avimux ! filesink location=output.avi
Traceback (most recent call last):
  File "detect.py", line 138, in <module>
    main()
  File "detect.py", line 135, in main
    videofmt=args.videofmt)
  File "/home/mendel/google-coral/examples-camera/gstreamer/gstreamer.py", line 294, in run_pipeline
    pipeline = GstPipeline(pipeline, user_function, src_size)
  File "/home/mendel/google-coral/examples-camera/gstreamer/gstreamer.py", line 36, in __init__
    self.pipeline = Gst.parse_launch(pipeline)
gi.repository.GLib.Error: gst_parse_error: could not link videoconvert0 to x264enc0 (3)
Does anyone know how to write this output to a file with GStreamer?
Any help will be appreciated.
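One hedged guess: everything after glupload is in GLMemory, and videoconvert only handles system-memory frames, so the file branch may need the frames downloaded from the GPU first. Adding gldownload before videoconvert is worth a try (an untested sketch, keeping the same elements as your attempt otherwise):

t. ! queue ! glsvgoverlay name=gloverlay sync=false ! gldownload !
videoconvert ! x264enc ! avimux ! filesink location=output.avi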

Gstreamer + OpenCV h264 Encoding & Decoding Image Deformation Problem

Hardware:
Apalis IMX8 CPU(SOM)
and
Sensoray model-1012 video frame grabber
I am trying to save analog video with H.264 encoding and then play it back.
The code has three parts: reading the camera, encoding and saving the video, and decoding and playing the video.
The problem I am facing is in the decoding part: when I decode and display the video, it comes out corrupted.
How I read the analog video (works fine):
cap = cv::VideoCapture(" v4l2src device=/dev/video4 ! video/x-raw, format=(string)YUY2, width=(int)720, height=(int)480, framerate=30/1, interlace-mode=interleaved ! deinterlace fields=1 method=2 ! videoconvert ! appsink ",cv::CAP_GSTREAMER);
How I compress and save the video (works fine):
cv::VideoWriter fixedVideo;
QString pipeTmp = "appsrc ! videoconvert ! v4l2h264enc ! h264parse ! qtmux ! filesink location="+ FixedIMG_recordName +" sync=false ";
std::string pipe = pipeTmp.toUtf8().constData();
isOpen = fixedVideo.open(pipe , cv::CAP_GSTREAMER, (double)30, cv::Size(720,480), true);
How I decode and open the video (DOESN'T work fine):
cv::VideoCapture cap_reader;
QString pipeTmp = " filesrc location=" + device + " ! qtdemux ! h264parse ! video/x-h264, width=720, height=480 ! v4l2h264dec ! videoconvert ! appsink ";
std::string pipe = pipeTmp.toUtf8().constData();
cap_reader.open(pipe , cv::CAP_GSTREAMER);
When I run the same cap_reader pipeline from the GStreamer command line to play the same video, it works fine with some warnings. I put the GST_DEBUG output in log.txt. Pipeline:
GST_DEBUG=3 gst-launch-1.0 -v filesrc location=test.mp4 ! qtdemux ! h264parse ! 'video/x-h264, width=720, height=480, framerate=30/1' ! v4l2h264dec ! videoconvert ! autovideosink
When I open the video from VLC it also works fine. But when I open the video from VideoCapture with the pipeline I gave above, it gets corrupted. The corrupted image example.
There is no problem with the compression. When you decode the mp4 file, you should use "imxvideoconvert_g2d".
The decode pipeline should be: filesrc location=" + device + " ! qtdemux ! h264parse ! video/x-h264, width=720, height=480 ! v4l2h264dec ! imxvideoconvert_g2d ! video/x-raw,format=UYVY,width=720,height=480 ! videoconvert ! appsink
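For reference, a minimal Python sketch of that suggested decode pipeline (assuming the same OpenCV-with-GStreamer build; the C++ version from the question is analogous, and test.mp4 is the file from the gst-launch test):

import cv2

device = 'test.mp4'  # the `device` path from the question
pipe = ('filesrc location=' + device + ' ! qtdemux ! h264parse '
        '! video/x-h264, width=720, height=480 ! v4l2h264dec '
        '! imxvideoconvert_g2d ! video/x-raw,format=UYVY,width=720,height=480 '
        '! videoconvert ! appsink')
cap_reader = cv2.VideoCapture(pipe, cv2.CAP_GSTREAMER)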

gst-launch-1.0 - rtspsrc audio/video issue

I'm trying to combine two RTSP streams using gst-launch-1.0, but I'm stuck already at trying to record/play a single RTSP stream. The stream contains both audio and video.
The command line I'm using is:
gst-launch-1.0 ^
rtspsrc location=<location-omitted> protocols=GST_RTSP_LOWER_TRANS_TCP latency=5000 name=s_0 ^
s_0. ! application/x-rtp,media=audio ! rtpjitterbuffer ! decodebin ! audioconvert ! autoaudiosink ^
s_0. ! application/x-rtp,media=video ! rtpjitterbuffer ! decodebin ! videoconvert ! autovideosink
If I change the autoaudiosink to fakesink, the video plays. If I remove the video sink (didn't test with fakesink), the audio plays. But if I add both (like above), the video shows 1 frame and then freezes. [not sure if it matters, but I am on Windows]
I actually suspect that (for some reason) the pipeline goes on pause, but I'm at a loss on how to debug this issue.
I've tried with/without rtpjitterbuffer, and with/without queues in various places. I've tried combinations of rtp...depay/parse. I've tried so much in the past few hours (ohoh) that I'm not sure whether I've overlooked anything.
Also tried with various options for rtspsrc (sync/etc).
But the end result is pretty much always the same, playing either audio or video works fine, playing both at the same time (or muxing them into a single file) fails.
Writing each to their own file works fine, for example:
gst-launch-1.0 ^
rtspsrc location=<location-omitted> protocols=GST_RTSP_LOWER_TRANS_TCP latency=5000 name=s_0 ^
s_0. ! application/x-rtp,media=audio ! rtpjitterbuffer ! decodebin ! audioconvert ! avenc_aac ! flvmux ! filesink location=audio.flv ^
s_0. ! application/x-rtp,media=video ! rtpjitterbuffer ! decodebin ! videoconvert ! x264enc ! flvmux ! filesink location=video.flv
If I mux the above into one file, the same thing happens as when trying to play it (using the auto-sinks): the output file stays at 0 bytes.
The output of gst-launch-1.0 when trying to play is the same as when writing to two files:
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Progress: (open) Opening Stream
Progress: (connect) Connecting to <location-omitted>
Progress: (open) Retrieving server options
Progress: (open) Retrieving media info
Progress: (request) SETUP stream 0
Progress: (request) SETUP stream 1
Progress: (open) Opened Stream
Setting pipeline to PLAYING ...
New clock: GstSystemClock
Progress: (request) Sending PLAY request
Progress: (request) Sending PLAY request
Progress: (request) Sent PLAY request
Redistribute latency...
Redistribute latency...
Somehow I think it's some type of sync issue, but I can't seem to figure out how to solve it.
Since this question is more than a year old, I hope someone with the same problem still needs the help (as I did; I spent many hours figuring this out, and thank you to the poster for the hints).
You need to edit it like this, feeding both the audio and the video branch into a single named flvmux so that both streams are muxed into one file:
gst-launch-1.0 ^
rtspsrc location=<location-omitted> protocols=GST_RTSP_LOWER_TRANS_TCP latency=5000 name=s_0 ^
s_0. ! application/x-rtp,media=audio ! rtpjitterbuffer ! decodebin ! audioconvert ! avenc_aac ! flvmux name=mux ^
s_0. ! application/x-rtp,media=video ! rtpjitterbuffer ! decodebin ! videoconvert ! x264enc ! mux. ^
mux. ! filesink location=video.flv

Gstreamer pipeline in Opencv videoCapture()

I'm trying to open an IP camera in OpenCV using a GStreamer pipeline.
I can open the IP camera using GStreamer in a terminal, using:
gst-launch-1.0 -v rtspsrc location="rtsp://192.168.0.220:554/user=admin&password=admin&channel=1&stream=0.sdp?real_stream--rtp-caching=10" latency=10 ! rtph264depay ! h264parse ! omxh264dec ! videoconvert ! xvimagesink
Now, with this, how can I open the same camera in OpenCV's VideoCapture()?
Any help is appreciated.
You can copy the same pipe and use it in VideoCapture (if you built OpenCV with the GStreamer module).
The important point is that you need to finish the pipe with an appsink element.
const char* pipe = "rtspsrc location=\"rtsp://192.168.0.220:554/user=admin&password=admin&channel=1&stream=0.sdp?real_stream--rtp-caching=10\" latency=10 ! rtph264depay ! h264parse ! omxh264dec ! videoconvert ! appsink";
VideoCapture cap(pipe, CAP_GSTREAMER);
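If you're using the Python bindings instead, the equivalent would be (a sketch; passing cv2.CAP_GSTREAMER explicitly keeps OpenCV from trying its other backends):

import cv2

pipe = ('rtspsrc location="rtsp://192.168.0.220:554/user=admin&password=admin'
        '&channel=1&stream=0.sdp?real_stream--rtp-caching=10" latency=10 '
        '! rtph264depay ! h264parse ! omxh264dec ! videoconvert ! appsink')
cap = cv2.VideoCapture(pipe, cv2.CAP_GSTREAMER)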