I'm trying to open an IP camera in OpenCV using a GStreamer pipeline.
I can open the IP camera with GStreamer in a terminal, using:
gst-launch-1.0 -v rtspsrc location="rtsp://192.168.0.220:554/user=admin&password=admin&channel=1&stream=0.sdp?real_stream--rtp-caching=10" latency=10 ! rtph264depay ! h264parse ! omxh264dec ! videoconvert ! xvimagesink
With this working, how can I open the same camera in OpenCV's VideoCapture()?
Any help is appreciated.
You can pass the same pipeline to VideoCapture (provided OpenCV was built with GStreamer support).
The important point is that the pipeline must end with an appsink element.
const char* pipe = "rtspsrc location=\"rtsp://192.168.0.220:554/user=admin&password=admin&channel=1&stream=0.sdp?real_stream--rtp-caching=10\" latency=10 ! rtph264depay ! h264parse ! omxh264dec ! videoconvert ! appsink";
VideoCapture cap(pipe, CAP_GSTREAMER); // explicitly select the GStreamer backend
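If you are working from Python instead, a minimal sketch of the same idea (assuming the Python bindings were also built with GStreamer support) would be:

import cv2

# Same pipeline as above; the appsink at the end hands decoded frames to OpenCV.
pipe = ('rtspsrc location="rtsp://192.168.0.220:554/user=admin&password=admin'
        '&channel=1&stream=0.sdp?real_stream--rtp-caching=10" latency=10 '
        '! rtph264depay ! h264parse ! omxh264dec ! videoconvert ! appsink')
cap = cv2.VideoCapture(pipe, cv2.CAP_GSTREAMER)
while cap.isOpened():
    ok, frame = cap.read()        # frame is a BGR numpy array
    if not ok:
        break
    cv2.imshow('camera', frame)
    if cv2.waitKey(1) == 27:      # Esc to quit
        break
cap.release()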
I am trying to access video frames from a single udpsrc (a camera) across 2 processes using OpenCV VideoCapture.
Process One: an OpenCV Python application which uses the frames for some image processing.
Process Two: another OpenCV Python application doing a completely different task.
I need to create a VideoCapture object in both applications to access the video stream, but when I use
cap = cv2.VideoCapture("udpsrc address=192.169.0.5 port=11024 ! application/x-rtp, media=video, clock-rate=90000, encoding-name=H264, payload=96 ! rtph264depay ! decodebin ! nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! video/x-raw, format=BGR ! appsink")
only one of the two processes is able to create the cap object successfully.
I came across the "tee" element for creating two sinks, but I don't know where or how to use it.
Can anyone help with this?
I tried creating a GStreamer pipeline with a tee element, something like this:
gst-launch-1.0 udpsrc address=192.169.0.5 port=11024 ! application/x-rtp, media=video, clock-rate=90000, encoding-name=H264, payload=96 ! rtph264depay ! decodebin ! tee name=t \
t. ! nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! video/x-raw, format=BGR ! autovideosink \
t. ! nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! video/x-raw, format=BGR ! appsink
But I have no idea how to use this in VideoCapture().
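One approach that might work (a sketch, untested): two VideoCapture instances cannot bind the same UDP port, so instead of putting the tee inside either application, run it in a separate relay process that forwards the raw RTP packets to two local ports (5001 and 5002 below are arbitrary choices):

gst-launch-1.0 udpsrc address=192.169.0.5 port=11024 ! tee name=t \
t. ! queue ! udpsink host=127.0.0.1 port=5001 \
t. ! queue ! udpsink host=127.0.0.1 port=5002

Each application then opens its own port with the original pipeline, e.g. for Process One:

import cv2

# Process One listens on 5001; Process Two would use port 5002 instead.
cap = cv2.VideoCapture(
    'udpsrc address=127.0.0.1 port=5001 '
    '! application/x-rtp, media=video, clock-rate=90000, encoding-name=H264, payload=96 '
    '! rtph264depay ! decodebin ! nvvidconv ! video/x-raw,format=BGRx '
    '! videoconvert ! video/x-raw, format=BGR ! appsink',
    cv2.CAP_GSTREAMER)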
I am using the following versions:
OpenCV 4.2.0, GStreamer 1.20.2, Python 3.7, Windows 10.
I want to play video from an RTSP camera using GStreamer. Why does this gst-pipeline not run from Python (in VS Code), while the same pipeline runs perfectly from cmd?
Pipeline:
rtspsrc location=rtsp://... ! rtph264depay ! queue ! h264parse ! d3d11h264dec ! d3d11convert ! video/x-raw(memory:D3D11Memory), format=(string)NV12 ! appsink
The errors I am getting are as follows:
[ WARN:0] global opencv-4.2.0\modules\videoio\src\cap_gstreamer.cpp (1759) cv::handleMessage OpenCV | GStreamer warning: Embedded video playback halted; module udpsrc1 reported: Internal data stream error.
[ WARN:0] global opencv-4.2.0\modules\videoio\src\cap_gstreamer.cpp (888) cv::GStreamerCapture::open OpenCV | GStreamer warning: unable to start pipeline
[ WARN:0] global opencv-4.2.0\modules\videoio\src\cap_gstreamer.cpp (480) cv::GStreamerCapture::isPipelinePlaying OpenCV | GStreamer warning: GStreamer: pipeline have not been created
Can you please let me know how to solve this issue?
You may try adding caps after udpsrc:
rtspsrc location=rtsp://... ! application/x-rtp,encoding-name=H264 ! rtph264depay ! ...
That was bad advice: specifying these caps makes sense with udpsrc, but they are not required for rtspsrc.
Re-reading it now, the issue is probably the memory space. OpenCV may only expect system memory for now, so you may try:
rtspsrc location=rtsp://... ! rtph264depay ! queue ! h264parse ! d3d11h264dec ! d3d11convert ! video/x-raw ! videoconvert ! video/x-raw, format=BGR ! appsink drop=1
Here we convert to BGR, which most OpenCV color algorithms expect; if you intend to process NV12 frames instead, just change the format (I am not sure videoconvert is still required in that case, but if it is not, its overhead should be low).
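Passed to VideoCapture from Python, that would look like the sketch below (the rtsp://... placeholder must be replaced with the real camera URL):

import cv2

# Pipeline from the answer above, ending in system-memory BGR at the appsink.
pipe = ('rtspsrc location=rtsp://... ! rtph264depay ! queue ! h264parse '
        '! d3d11h264dec ! d3d11convert ! video/x-raw ! videoconvert '
        '! video/x-raw, format=BGR ! appsink drop=1')
cap = cv2.VideoCapture(pipe, cv2.CAP_GSTREAMER)
ok, frame = cap.read()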
Hardware:
Apalis iMX8 CPU (SoM)
and
Sensoray model-1012 video frame grabber
I am trying to save analog video with H.264 encoding and then play it back.
The code has 3 parts: reading the camera, saving the video with encoding, and playing the video with decoding.
The problem I am facing is in the decoding part: when I decode and show the video, it is corrupted.
How I read the analog video (works fine):
cap = cv::VideoCapture(" v4l2src device=/dev/video4 ! video/x-raw, format=(string)YUY2, width=(int)720, height=(int)480, framerate=30/1, interlace-mode=interleaved ! deinterlace fields=1 method=2 ! videoconvert ! appsink ",cv::CAP_GSTREAMER);
How I encode and save the video (works fine):
cv::VideoWriter fixedVideo;
QString pipeTmp = "appsrc ! videoconvert ! v4l2h264enc ! h264parse ! qtmux ! filesink location="+ FixedIMG_recordName +" sync=false ";
std::string pipe = pipeTmp.toUtf8().constData();
isOpen = fixedVideo.open(pipe, cv::CAP_GSTREAMER, 0 /*fourcc, ignored for full pipelines*/, 30.0, cv::Size(720,480), true);
How I decode and play the video (does NOT work):
cv::VideoCapture cap_reader;
QString pipeTmp = " filesrc location=" + device + " ! qtdemux ! h264parse ! video/x-h264, width=720, height=480 ! v4l2h264dec ! videoconvert ! appsink ";
std::string pipe = pipeTmp.toUtf8().constData();
cap_reader.open(pipe , cv::CAP_GSTREAMER);
When I run the same decode pipeline from the GStreamer command line, it works fine with some warnings (I put the GST_DEBUG output in log.txt). Pipeline:
GST_DEBUG=3 gst-launch-1.0 -v filesrc location=test.mp4 ! qtdemux ! h264parse ! 'video/x-h264, width=720, height=480, framerate=30/1' ! v4l2h264dec ! videoconvert ! autovideosink
When I open the video in VLC it also works fine. But when I open it through VideoCapture with the pipeline given above, the image is corrupted (see the attached example of a corrupted frame).
There is no problem with the compression. When you decode the mp4 file, you should use "imxvideoconvert_g2d".
The decode pipeline should be: filesrc location=" + device + " ! qtdemux ! h264parse ! video/x-h264, width=720, height=480 ! v4l2h264dec ! imxvideoconvert_g2d ! video/x-raw,format=UYVY,width=720,height=480 ! videoconvert ! appsink
I'm using the OpenCV and GStreamer code below to send and receive video over the network. On the receiver side I get distorted video and the warnings "Redistribute latency" and "not enough buffering available for the processing deadline, add enough queues to buffer". I tried 'queue max-size-bytes=900000 max-size-buffers=0 max-size-time=0' but had no luck. I'm in a Windows environment. Are there any ways to improve the buffering?
Sender
cv::VideoWriter m_videoOut("appsrc ! videoconvert ! x264enc ! video/x-h264,stream-format=byte-stream ! rtph264pay ! udpsink host=192.168.1.200 port=5000 sync=false", cv::CAP_GSTREAMER, 0 , 30, cv::Size(640,480),true);
Receiver
cv::VideoCapture cap("udpsrc port=5000 ! application/x-rtp, encoding-name=H264, payload=96 ! rtph264depay queue-delay=0 ! h264parse ! avdec_h264 ! videoconvert ! appsink", cv::CAP_GSTREAMER);
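A possible mitigation (a sketch, not verified on Windows): configure x264enc for low-latency streaming on the sender, and buffer with a plain queue element on the receiver instead of a property on the depayloader. The tune, speed-preset and key-int-max values below are assumptions to experiment with, shown here in Python:

import cv2

# Sender: zerolatency tuning reduces encoder-side buffering and frame reordering.
out = cv2.VideoWriter(
    'appsrc ! videoconvert ! x264enc tune=zerolatency speed-preset=ultrafast key-int-max=30 '
    '! video/x-h264,stream-format=byte-stream ! rtph264pay '
    '! udpsink host=192.168.1.200 port=5000 sync=false',
    cv2.CAP_GSTREAMER, 0, 30, (640, 480), True)

# Receiver: a queue element after the depayloader gives the pipeline room to buffer.
cap = cv2.VideoCapture(
    'udpsrc port=5000 ! application/x-rtp, encoding-name=H264, payload=96 '
    '! rtph264depay ! queue ! h264parse ! avdec_h264 ! videoconvert ! appsink',
    cv2.CAP_GSTREAMER)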
I'm trying to open a video stream in OpenCV but I'm having some difficulties. I can start a stream with:
gst-launch -v v4l2src device=/dev/video0 ! 'video/x-raw-yuv,width=640,height=480' ! jpegenc quality=30 ! rtpjpegpay ! udpsink host=127.0.0.1 port=1234
and I can open it with:
gst-launch udpsrc port=1234 ! "application/x-rtp, payload=127" ! rtpjpegdepay ! jpegdec ! xvimagesink sync=false
But when I try to open it in my code with
VideoCapture cv_cap;
cv_cap.open("rtp:127.0.0.1:1234/");
I get an error about a missing SDP file. I know what an SDP file is and that I could get the information for it from the GStreamer output, but I don't understand exactly how to parse that output.
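One way around the SDP requirement (a sketch, assuming OpenCV was built with GStreamer support): skip the rtp:// URL entirely and hand VideoCapture the working gst-launch receiver pipeline, stating the RTP caps inline and ending with an appsink. In Python:

import cv2

# Mirrors the gst-launch receiver above; the caps on udpsrc replace the SDP file.
cap = cv2.VideoCapture(
    'udpsrc port=1234 ! application/x-rtp, encoding-name=JPEG, payload=127 '
    '! rtpjpegdepay ! jpegdec ! videoconvert ! appsink',
    cv2.CAP_GSTREAMER)
ok, frame = cap.read()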