GStreamer + OpenCV H264 Encoding & Decoding Image Deformation Problem

Hardware:
Apalis IMX8 CPU (SOM) and a Sensoray Model 1012 video frame grabber.
I am trying to save analog video with H.264 coding and play it back.
The code has 3 parts: reading the camera, saving the video with encoding, and playing the video with decoding.
The problem I am facing is in the decoding part. When I decode and show the video, the image is corrupted.
How I read the analog video (works fine):
cap = cv::VideoCapture(" v4l2src device=/dev/video4 ! video/x-raw, format=(string)YUY2, width=(int)720, height=(int)480, framerate=30/1, interlace-mode=interleaved ! deinterlace fields=1 method=2 ! videoconvert ! appsink ",cv::CAP_GSTREAMER);
How I compress and save the video (works fine):
cv::VideoWriter fixedVideo;
QString pipeTmp = "appsrc ! videoconvert ! v4l2h264enc ! h264parse ! qtmux ! filesink location="+ FixedIMG_recordName +" sync=false ";
std::string pipe = pipeTmp.toUtf8().constData();
isOpen = fixedVideo.open(pipe , cv::CAP_GSTREAMER, (double)30, cv::Size(720,480), true);
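For completeness, a minimal sketch of the write loop that feeds this writer (the loop itself is not from the original post; 'cap' and 'fixedVideo' are the objects shown above):
cv::Mat frame;
while (cap.read(frame)) {          // 'cap' is the VideoCapture from the reading snippet
    if (frame.empty())
        break;                     // grab failure or end of stream
    fixedVideo.write(frame);       // appsrc consumes each BGR frame
}
fixedVideo.release();              // sends EOS so qtmux can finalize the MP4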
How I decode and open the video (does NOT work fine):
cv::VideoCapture cap_reader;
QString pipeTmp = " filesrc location=" + device + " ! qtdemux ! h264parse ! video/x-h264, width=720, height=480 ! v4l2h264dec ! videoconvert ! appsink ";
std::string pipe = pipeTmp.toUtf8().constData();
cap_reader.open(pipe , cv::CAP_GSTREAMER);
When I run the same pipeline from the gst-launch-1.0 command line to play the same video, it works fine apart from some warnings (I put the GST_DEBUG output in log.txt). The pipeline:
GST_DEBUG=3 gst-launch-1.0 -v filesrc location=test.mp4 ! qtdemux ! h264parse ! 'video/x-h264, width=720, height=480, framerate=30/1' ! v4l2h264dec ! videoconvert ! autovideosink
The video also plays fine when I open it in VLC. But when I open it through VideoCapture with the pipeline above, the image is corrupted (an example of the corrupted frame was attached to the original post).

There is no problem with the compression. When you decode the MP4 file, you should use "imxvideoconvert_g2d".
The decode pipeline should be: filesrc location=" + device + " ! qtdemux ! h264parse ! video/x-h264, width=720, height=480 ! v4l2h264dec ! imxvideoconvert_g2d ! video/x-raw,format=UYVY,width=720,height=480 ! videoconvert ! appsink
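Put back into the same Qt/OpenCV code as the question, the fixed reader would look roughly like this (a sketch based on the answer's pipeline; 'device' is the MP4 path from the original snippet):
cv::VideoCapture cap_reader;
// imxvideoconvert_g2d sits after the V4L2 decoder so the i.MX8 G2D block
// converts the decoded frames into a format videoconvert can handle.
QString pipeTmp = "filesrc location=" + device +
                  " ! qtdemux ! h264parse ! video/x-h264, width=720, height=480"
                  " ! v4l2h264dec ! imxvideoconvert_g2d"
                  " ! video/x-raw,format=UYVY,width=720,height=480"
                  " ! videoconvert ! appsink";
std::string pipe = pipeTmp.toUtf8().constData();
cap_reader.open(pipe, cv::CAP_GSTREAMER);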

Related

Multiple appsinks with a single udpsrc; GStreamer, using the tee element in OpenCV VideoCapture

I am trying to access video frames from a single udpsrc (a camera) across 2 processes using OpenCV VideoCapture.
Process one: an OpenCV Python application which uses the frames to do some image processing.
Process two: another OpenCV Python application doing a completely different task.
I need to create a VideoCapture object in both of these applications to access the video stream, but when I use
cap = cv2.VideoCapture("udpsrc address=192.169.0.5 port=11024 ! application/x-rtp, media=video, clock-rate=90000, encoding-name=H264, payload=96 ! rtph264depay ! decodebin ! nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! video/x-raw,format=BGR ! appsink")
in both processes, only one of them is able to successfully create a cap object.
I came across the "tee" element for creating two sinks, but I don't know where and how to implement it.
Can anyone help with this?
I tried creating a GStreamer pipeline using the tee element, something like this:
gst-launch-1.0 udpsrc address=192.169.0.5 port=11024 ! application/x-rtp, media=video, clock-rate=90000, encoding-name=H264, payload=96 ! rtph264depay ! decodebin ! tee name=t \
t. ! nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! video/x-raw,format=BGR ! autovideosink \
t. ! nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! video/x-raw,format=BGR ! appsink
But I have no idea how to use this in VideoCapture().
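No accepted answer is recorded here, but two points are worth sketching: a VideoCapture pipeline string may contain a tee, yet OpenCV only reads frames from the single branch that ends in appsink, and each tee branch should get its own queue so one slow branch does not stall the other. A sketch under those assumptions (written in C++ for consistency with the other snippets; the same pipeline string works with cv2.VideoCapture in Python):
#include <opencv2/opencv.hpp>

int main() {
    // One tee branch feeds OpenCV through appsink; the other is displayed.
    // A second *process* cannot share this pipeline: it would need the
    // stream re-sent to it, e.g. through an extra udpsink branch.
    cv::VideoCapture cap(
        "udpsrc address=192.169.0.5 port=11024 ! "
        "application/x-rtp, media=video, clock-rate=90000, encoding-name=H264, payload=96 ! "
        "rtph264depay ! decodebin ! tee name=t "
        "t. ! queue ! nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! "
        "video/x-raw,format=BGR ! appsink "
        "t. ! queue ! nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! autovideosink",
        cv::CAP_GSTREAMER);

    cv::Mat frame;
    while (cap.read(frame)) {
        // ... image processing on 'frame' ...
    }
    return 0;
}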

gst-launch-1.0 - rtspsrc audio/video issue

I'm trying to combine two RTSP streams using gst-launch-1.0, but I'm already stuck at trying to record/play one RTSP stream. The stream contains both audio and video.
The command line I'm using is:
gst-launch-1.0 ^
rtspsrc location=<location-omitted> protocols=GST_RTSP_LOWER_TRANS_TCP latency=5000 name=s_0 ^
s_0. ! application/x-rtp,media=audio ! rtpjitterbuffer ! decodebin ! audioconvert ! autoaudiosink ^
s_0. ! application/x-rtp,media=video ! rtpjitterbuffer ! decodebin ! videoconvert ! autovideosink
If I change the autoaudiosink to fakesink, the video plays. If I remove the video sink (didn't test with fakesink), the audio plays. But if I add both (like above), the video shows 1 frame and then freezes. [not sure if it matters, but I am on Windows]
I actually suspect that (for some reason) the pipeline goes to PAUSED, but I'm at a loss on how to debug this issue.
I've tried with/without rtpjitterbuffer, and with/without queues at various places. I've tried combinations of rtp...depay/parse. Although I've tried so much in the past few hours (ohoh) that I'm not sure whether I overlooked anything.
I also tried various options for rtspsrc (sync, etc.).
But the end result is pretty much always the same: playing either audio or video works fine; playing both at the same time (or muxing them into a single file) fails.
Writing each to their own file works fine, for example:
gst-launch-1.0 ^
rtspsrc location=<location-omitted> protocols=GST_RTSP_LOWER_TRANS_TCP latency=5000 name=s_0 ^
s_0. ! application/x-rtp,media=audio ! rtpjitterbuffer ! decodebin ! audioconvert ! avenc_aac ! flvmux ! filesink location=audio.flv ^
s_0. ! application/x-rtp,media=video ! rtpjitterbuffer ! decodebin ! videoconvert ! x264enc ! flvmux ! filesink location=video.flv
If I mux the above into one file, the same thing happens as when trying to play (using the auto sinks): the output file stays at 0 bytes.
The output of gst-launch-1.0 when trying to play is the same as when writing to two files:
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Progress: (open) Opening Stream
Progress: (connect) Connecting to <location-omitted>
Progress: (open) Retrieving server options
Progress: (open) Retrieving media info
Progress: (request) SETUP stream 0
Progress: (request) SETUP stream 1
Progress: (open) Opened Stream
Setting pipeline to PLAYING ...
New clock: GstSystemClock
Progress: (request) Sending PLAY request
Progress: (request) Sending PLAY request
Progress: (request) Sent PLAY request
Redistribute latency...
Redistribute latency...
Somehow I think it's some type of sync issue, but I can't seem to figure out how to solve it.
Since this question is more than a year old, I hope someone with the same problem still needs the help (as I did; I spent many hours figuring this out, and thanks to the poster for giving me the hints).
You need to edit it like this, linking both branches into a single named muxer:
gst-launch-1.0 ^
rtspsrc location=<location-omitted> protocols=GST_RTSP_LOWER_TRANS_TCP latency=5000 name=s_0 ^
s_0. ! application/x-rtp,media=audio ! rtpjitterbuffer ! decodebin ! audioconvert ! avenc_aac ! flvmux name=mux ^
s_0. ! application/x-rtp,media=video ! rtpjitterbuffer ! decodebin ! videoconvert ! x264enc ! mux. ^
mux. ! filesink location=video.flv

How to increase the GStreamer buffer size during video streaming?

I'm using the OpenCV and GStreamer code below to send and receive video over the network. On the receiver side I'm getting distorted video and the warnings "Redistribute latency" and "not enough buffering available for the processing deadline, add enough queues to buffer". I tried 'queue max-size-bytes=900000 max-size-buffers=0 max-size-time=0' but no luck. I'm in a Windows environment. Any way to improve the buffering?
Sender
cv::VideoWriter m_videoOut("appsrc ! videoconvert ! x264enc ! video/x-h264,stream-format=byte-stream ! rtph264pay ! udpsink host=192.168.1.200 port=5000 sync=false", cv::CAP_GSTREAMER, 0 , 30, cv::Size(640,480),true);
Receiver
cv::VideoCapture cap("udpsrc port=5000 ! application/x-rtp, encoding-name=H264, payload=96 ! rtph264depay queue-delay=0 ! h264parse ! avdec_h264 ! videoconvert ! appsink", cv::CAP_GSTREAMER);
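No accepted fix is recorded here, but the "processing deadline" warning generally means the receiver has no latency budget. A common adjustment, sketched below and not verified on this exact setup, is to add an rtpjitterbuffer with some latency (200 ms here is an illustrative value) and a queue between parsing and decoding:
#include <opencv2/opencv.hpp>

int main() {
    // Receiver sketch: rtpjitterbuffer buys the pipeline a latency budget,
    // and the queue decouples depayloading from decoding, which is what
    // the "add enough queues to buffer" warning asks for.
    cv::VideoCapture cap(
        "udpsrc port=5000 ! "
        "application/x-rtp, encoding-name=H264, payload=96 ! "
        "rtpjitterbuffer latency=200 ! rtph264depay ! h264parse ! "
        "queue ! avdec_h264 ! videoconvert ! appsink",
        cv::CAP_GSTREAMER);

    cv::Mat frame;
    while (cap.read(frame)) {
        cv::imshow("receiver", frame);
        if (cv::waitKey(1) == 27)   // Esc quits
            break;
    }
    return 0;
}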

Gstreamer pipeline in Opencv videoCapture()

I'm trying to open an IP camera in OpenCV using a GStreamer pipeline.
I can open the IP camera using GStreamer in the terminal, using:
gst-launch-1.0 -v rtspsrc location="rtsp://192.168.0.220:554/user=admin&password=admin&channel=1&stream=0.sdp?real_stream--rtp-caching=10" latency=10 ! rtph264depay ! h264parse ! omxh264dec ! videoconvert ! xvimagesink
Now, with this, how can I open the same camera in OpenCV VideoCapture()?
Any help is appreciated.
You can copy the same pipe and use it in VideoCapture (if you built OpenCV with the GStreamer module).
The important point is that you need to finish the pipe with an appsink element.
const char* pipe = "rtspsrc location=\"rtsp://192.168.0.220:554/user=admin&password=admin&channel=1&stream=0.sdp?real_stream--rtp-caching=10\" latency=10 ! rtph264depay ! h264parse ! omxh264dec ! videoconvert ! appsink";
VideoCapture cap(pipe);
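A minimal read loop to go with that answer (a sketch; passing cv::CAP_GSTREAMER explicitly is a safe addition so OpenCV does not have to guess the backend):
#include <opencv2/opencv.hpp>

int main() {
    const char* pipe = "rtspsrc location=\"rtsp://192.168.0.220:554/"
        "user=admin&password=admin&channel=1&stream=0.sdp?real_stream--rtp-caching=10\" "
        "latency=10 ! rtph264depay ! h264parse ! omxh264dec ! videoconvert ! appsink";

    cv::VideoCapture cap(pipe, cv::CAP_GSTREAMER);  // request the GStreamer backend
    if (!cap.isOpened())
        return 1;                                   // pipeline failed to start

    cv::Mat frame;
    while (cap.read(frame)) {
        cv::imshow("ip-camera", frame);
        if (cv::waitKey(1) == 27)
            break;
    }
    return 0;
}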

Port pipeline to gst-rtsp-server

I'm trying to wrap this working sender-side pipeline in the gst-rtsp-server:
gst-launch-1.0 --gst-plugin-path=/usr/lib/x86_64-linux-gnu/gstreamer-1.0/ filesrc location=sample.mp4 ! decodebin name=mux mux. ! queue ! videoconvert ! edgedetect ! videoconvert ! x264enc ! rtph264pay ! udpsink host=127.0.0.1 port=5000 mux. ! queue ! audioconvert ! audioresample ! alawenc ! rtppcmapay ! udpsink host=127.0.0.1 port=5001
Using a complementary pipeline on the receiver side, everything works and I'm able to send an OpenCV-processed stream and get it on the client side.
Something goes wrong when I try to wrap part of this pipeline in the working example provided along with the gst-rtsp-server.
In fact, editing test-mp4.c and changing the filesrc input pipeline to
"filesrc location=%s ! qtdemux name=d "
"d. ! queue ! videoconvert ! edgedetect ! videoconvert ! x264enc ! rtph264pay pt=96 name=pay0 "
"d. ! queue ! rtpmp4apay pt=97 name=pay1 " ")"
breaks the sender. On the receiver side I get a 503 error, since the receiver is unable to get the SDP.
Could this be an issue related to a missing bad-plugins directory?
I set it in the main Makefile but the problem still persists.
I guess so, since the rtsp-server works perfectly if I do not edit that line, and my pipeline works fine on its own.
Thanks,
Francesco
This looks like an issue with the pipeline you have created. Try running your pipeline exactly as it is on the command line, but add fakesink elements at the end to see if that works:
gst-launch-1.0 filesrc location=%s ! qtdemux name=d d. ! queue ! videoconvert ! edgedetect ! videoconvert ! x264enc ! rtph264pay pt=96 name=pay0 ! fakesink d. ! queue ! rtpmp4apay pt=97 name=pay1 ! fakesink
At a glance, it looks like you're demuxing the media, but not decoding the video to a raw format for the edgedetect element.
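Following that hint, the factory launch string needs a decoder between the demuxer and edgedetect. A sketch of how that could look inside a test-mp4.c-style server (avdec_h264 is an assumption, matching the H.264 track an MP4 typically carries; the audio branch is left as in the question):
#include <gst/gst.h>
#include <gst/rtsp-server/rtsp-server.h>

int main(int argc, char* argv[]) {
    gst_init(&argc, &argv);
    GMainLoop* loop = g_main_loop_new(NULL, FALSE);

    GstRTSPServer* server = gst_rtsp_server_new();
    GstRTSPMountPoints* mounts = gst_rtsp_server_get_mount_points(server);
    GstRTSPMediaFactory* factory = gst_rtsp_media_factory_new();

    // avdec_h264 decodes the demuxed track to raw video so edgedetect
    // (which requires video/x-raw) can negotiate; x264enc re-encodes it.
    gst_rtsp_media_factory_set_launch(factory,
        "( filesrc location=sample.mp4 ! qtdemux name=d "
        "d. ! queue ! avdec_h264 ! videoconvert ! edgedetect ! videoconvert "
        "! x264enc ! rtph264pay pt=96 name=pay0 "
        "d. ! queue ! rtpmp4apay pt=97 name=pay1 )");

    gst_rtsp_mount_points_add_factory(mounts, "/test", factory);
    g_object_unref(mounts);
    gst_rtsp_server_attach(server, NULL);

    g_main_loop_run(loop);
    return 0;
}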
