What kind of stream does GStreamer produce? - opencv

I use the following two commands to stream video from a Raspberry Pi:
RaPi
raspivid -t 999999 -h 720 -w 1080 -fps 25 -hf -b 2000000 -o - | gst-launch-1.0 -v fdsrc ! h264parse ! rtph264pay config-interval=1 pt=96 ! gdppay ! tcpserversink host=$RA-IP-ADDR port=5000
Linux Box
gst-launch-1.0 -v tcpclientsrc host=$RA-IP-ADDR port=5000 ! gdpdepay ! rtph264depay ! avdec_h264 ! videoconvert ! autovideosink sync=false
But what kind of stream is it? Can I read it with OpenCV, convert it with avconv/ffmpeg (nc $RA-IP-ADDR 5000 | avconv), or watch it with VLC?

The stream appears to be an RTP stream encapsulated in a GDP stream; the latter appears to be proprietary to GStreamer. You might be able to remove the gdppay and gdpdepay elements from your pipeline and use other RTP tools (there are plenty out there; I believe VLC supports RTP directly), but you could also use a GStreamer pipeline to pipe the depayloaded stream (in this case, the H.264 stream it contains) from the RPi to a file on the Linux Box side, like so:
gst-launch-1.0 tcpclientsrc host=$RA-IP-ADDR port=5000 ! gdpdepay ! rtph264depay ! filesink location=$FILENAME
or, to pipe it to stdout:
gst-launch-1.0 tcpclientsrc host=$RA-IP-ADDR port=5000 ! gdpdepay ! rtph264depay ! fdsink
One or the other of these should let you operate on the H.264 video at a stream level.
GStreamer 1.0 can also interact with libav more or less directly if you have the right plugin. Use gst-inspect-1.0 libav to see the elements supported. The avdec_h264 element already in your pipeline is one of these libav elements.
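To answer the OpenCV part of the question: yes, if your OpenCV build has GStreamer support you can hand a similar pipeline (ending in an appsink element) directly to VideoCapture. A minimal, untested Python sketch, with a hypothetical address standing in for $RA-IP-ADDR:

import cv2

# Same client pipeline as above, but frames are converted to BGR and delivered
# to appsink so OpenCV can read them. Replace the host with your Pi's address.
pipeline = ("tcpclientsrc host=192.168.1.10 port=5000 ! gdpdepay ! rtph264depay "
            "! avdec_h264 ! videoconvert ! video/x-raw,format=BGR ! appsink")
cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
while cap.isOpened():
    ok, frame = cap.read()    # frame is an ordinary BGR numpy array
    if not ok:
        break
    cv2.imshow("rpi stream", frame)
    if cv2.waitKey(1) == 27:  # Esc quits
        break
cap.release()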

Related

Multiple appsink with a single udpsrc; GStreamer, using tee element in OpenCV VideoCapture

I am trying to access video frames from a single udpsrc (a camera) across two processes using OpenCV VideoCapture.
Process One: an OpenCV Python application which uses the frames to do some image processing.
Process Two: another OpenCV Python application doing a completely different task.
I need to create a VideoCapture object in both of these applications to access the video stream, but when I use
cap = cv2.VideoCapture("udpsrc address=192.169.0.5 port=11024 ! application/x-rtp, media=video, clock-rate=90000, encoding-name=H264, payload=96 ! rtph264depay ! decodebin ! nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! video/x-raw, format =BGR ! appsink")
in both processes, only one of them is able to create the cap object successfully.
I came across the "tee" element for creating two sinks, but I don't know where and how to implement this.
Can anyone help with this?
I tried creating a GStreamer pipeline using the tee element, something like this:
gst-launch-1.0 udpsrc address=192.169.0.5 port=11024 ! application/x-rtp, media=video, clock-rate=90000, encoding-name=H264, payload=96 ! rtph264depay ! decodebin ! tee name=t \
t. ! nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! video/x-raw, format =BGR ! autovideosink \
t. ! nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! video/x-raw, format =BGR ! appsink
But I have no idea how to use this in VideoCapture().
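A minimal, untested sketch of one way this could work: VideoCapture can only drive a single appsink, so rather than putting the tee inside the capture string, you could run one relay pipeline that receives the camera stream once and tees the RTP packets out to two local UDP ports (5001 and 5002 here are hypothetical), then give each OpenCV process its own udpsrc pipeline.
Relay, run once (for example with gst-launch-1.0):
gst-launch-1.0 udpsrc address=192.169.0.5 port=11024 ! application/x-rtp, media=video, clock-rate=90000, encoding-name=H264, payload=96 ! tee name=t \
t. ! queue ! udpsink host=127.0.0.1 port=5001 \
t. ! queue ! udpsink host=127.0.0.1 port=5002
Then in each process (process one shown; process two would use port 5002):

import cv2

# Hypothetical local port fed by the relay pipeline above.
pipeline = ("udpsrc port=5001 ! application/x-rtp, media=video, clock-rate=90000, "
            "encoding-name=H264, payload=96 ! rtph264depay ! decodebin ! nvvidconv ! "
            "video/x-raw,format=BGRx ! videoconvert ! video/x-raw, format=BGR ! appsink")
cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)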

cv::handleMessage OpenCV | GStreamer warning: Embedded video playback halted; module udpsrc1 reported: Internal data stream error

I am using the following versions:
OpenCV 4.2.0, GStreamer 1.20.2, Python 3.7, Windows 10.
I want to play video from a GStreamer RTSP camera. Why does this gst pipeline not run from Python in VS Code, while the same pipeline runs perfectly from cmd?
Pipeline:
rtspsrc location=rtsp://... ! rtph264depay ! queue ! h264parse ! d3d11h264dec ! d3d11convert ! video/x-raw(memory:D3D11Memory), format=(string)NV12 ! appsink
The error I am getting is as follows:
[ WARN:0] global opencv-4.2.0\modules\videoio\src\cap_gstreamer.cpp (1759) cv::handleMessage OpenCV | GStreamer warning: Embedded video playback halted; module udpsrc1 reported: Internal data stream error.
[ WARN:0] global opencv-4.2.0\modules\videoio\src\cap_gstreamer.cpp (888) cv::GStreamerCapture::open OpenCV | GStreamer warning: unable to start pipeline
[ WARN:0] global opencv-4.2.0\modules\videoio\src\cap_gstreamer.cpp (480) cv::GStreamerCapture::isPipelinePlaying OpenCV | GStreamer warning: GStreamer: pipeline have not been created
Can you please let me know how to solve this issue?
You may try adding caps after udpsrc:
rtspsrc location=rtsp://... ! application/x-rtp,encoding-name=H264 ! rtph264depay ! ...
That was bad advice. Specifying these caps makes sense with udpsrc, but they are not required for rtspsrc.
Re-reading it now, the issue is probably the memory space. OpenCV may only expect system memory for now, so you may try:
rtspsrc location=rtsp://... ! rtph264depay ! queue ! h264parse ! d3d11h264dec ! d3d11convert ! video/x-raw ! videoconvert ! video/x-raw, format=BGR ! appsink drop=1
Here we convert to BGR, as most OpenCV color algorithms expect; if you intend to process NV12 frames instead, just change the format (I am not sure videoconvert is required, but if no conversion is actually needed its overhead should be low).
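A brief sketch of how that pipeline might be passed to VideoCapture from Python, assuming an OpenCV build with GStreamer support (the RTSP URL is still the placeholder from above):

import cv2

pipeline = ("rtspsrc location=rtsp://... ! rtph264depay ! queue ! h264parse ! "
            "d3d11h264dec ! d3d11convert ! video/x-raw ! videoconvert ! "
            "video/x-raw, format=BGR ! appsink drop=1")
cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
ok, frame = cap.read()  # frame should be a BGR numpy array in system memory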

gstreamer-imx video streaming and encoding

I am currently using a Nitrogen 6 Max development board. I am attempting to retrieve video from my webcam through v4l2src so that the feed can be both streamed and encoded for saving.
This is the pipeline, and it works:
v4l2src device="/dev/video2" ! tee name=t
t. ! queue ! x264enc ! mp4mux ! filesink location=test.mp4
t. ! queue ! videoconvert ! autovideosink
Then I attempted to use the imx-gstreamer library. I spent time looking around and found that this works:
gst-launch-1.0 -e videotestsrc num-buffers=1000 ! \
video/x-raw,width=640,height=480,framerate=30/1 ! imxvpuenc_h264 ! \
h264parse ! avdec_h264 ! filesink location=cx1.mp4
However, when I attempt to use "tee" to split up the video source, it just freezes and my terminal session locks up.
gst-launch-1.0 -e videotestsrc num-buffers=1000 ! autovideoconvert ! tee name=t \
t. ! video/x-raw,width=640,height=480,framerate=30/1 ! imxvpuenc_h264 ! h264parse ! avdec_h264 ! filesink location=cx1.mp4 \
t. ! video/x-raw,width=640,height=480,framerate=30/1 ! autovideosink
I tried isolating the issue by encoding through the tee, and realized that it runs, but the video file it generates is corrupted:
gst-launch-1.0 -e videotestsrc num-buffers=1000 ! tee name=t \
t. ! video/x-raw,width=640,height=480,framerate=30/1 ! imxvpuenc_h264 ! \
h264parse ! avdec_h264 ! filesink location=cx1.mp4
I tried using queues and videoconvert, but it does not seem to work.
Also, another question: I am new to GstElement capabilities, which are what decide whether elements can be linked (e.g. v4l2src's video/x-raw capability includes I420, which is why I can link it to imxvpuenc_h264). For the tee element, does it split and replicate the capabilities of its src?
I am new to GStreamer and can't seem to work around this issue. Can someone help me out here?
A few hints to help you out:
As a rule, always use queues at the outputs of the tee so that it doesn't block your pipeline (see the example below).
Another option to avoid blocking is to set async=false in your sink elements.
Try setting dts-method=2 on the mp4mux to see if it makes a difference.
The first troubleshooting step when working with GStreamer is to use the debug output. Please inspect and share the output of GST_DEBUG=2 gst-launch-1.0 ....
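For example, an untested sketch of your second pipeline with a queue inserted right after each tee branch, and the file branch muxed with mp4mux as in your first working pipeline:
gst-launch-1.0 -e videotestsrc num-buffers=1000 ! autovideoconvert ! tee name=t \
t. ! queue ! video/x-raw,width=640,height=480,framerate=30/1 ! imxvpuenc_h264 ! h264parse ! mp4mux ! filesink location=cx1.mp4 \
t. ! queue ! video/x-raw,width=640,height=480,framerate=30/1 ! autovideosink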

Gstreamer pipeline in Opencv videoCapture()

I'm trying to open an IP camera in OpenCV using a GStreamer pipeline.
I can open the IP camera using GStreamer in the terminal, using:
gst-launch-1.0 -v rtspsrc location="rtsp://192.168.0.220:554/user=admin&password=admin&channel=1&stream=0.sdp?real_stream--rtp-caching=10" latency=10 ! rtph264depay ! h264parse ! omxh264dec ! videoconvert ! xvimagesink
Now with this how can I open the same camera in OpenCV videoCapture().
Any help is appreciated.
You can copy the same pipeline and use it in VideoCapture (if you built OpenCV with the GStreamer modules).
The important point is that you need to finish the pipeline with an appsink element.
const char* pipe = "rtspsrc location=\"rtsp://192.168.0.220:554/user=admin&password=admin&channel=1&stream=0.sdp?real_stream--rtp-caching=10\" latency=10 ! rtph264depay ! h264parse ! omxh264dec ! videoconvert ! appsink";
VideoCapture cap(pipe);
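In Python the equivalent would look much the same; a brief sketch, again assuming an OpenCV build with GStreamer support:

import cv2

pipeline = ("rtspsrc location=\"rtsp://192.168.0.220:554/user=admin&password=admin"
            "&channel=1&stream=0.sdp?real_stream--rtp-caching=10\" latency=10 ! "
            "rtph264depay ! h264parse ! omxh264dec ! videoconvert ! appsink")
cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
ok, frame = cap.read()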

In GStreamer, while playing the pipeline on iOS 8, after entering the background and returning to the foreground the pipeline doesn't work :(

Actually, I downloaded the sample tutorial for GStreamer from the link:
http://cgit.freedesktop.org/~slomo/gst-sdk-tutorials/
git://people.freedesktop.org/~slomo/gst-sdk-tutorials
Now I have modified the following code in tutorial 3:
-(void) app_function
{
    GstBus *bus;
    GSource *bus_source;
    GError *error = NULL;

    GST_DEBUG ("Creating pipeline");
    pipeline = gst_pipeline_new ("e-pipeline");

    /* Create our own GLib Main Context and make it the default one */
    context = g_main_context_new ();
    g_main_context_push_thread_default(context);

    /* Build pipeline */
    // pipeline = gst_parse_launch("videotestsrc ! warptv ! videoconvert ! autovideosink", &error);
    source = gst_element_factory_make("udpsrc", "source");
    g_object_set( G_OBJECT ( source), "port", 8001, NULL );

    GstCaps *caps;
    caps = gst_caps_new_simple ("application/x-rtp",
                                "encoding-name", G_TYPE_STRING, "H264",
                                "payload", G_TYPE_INT, 96,
                                "clock-rate", G_TYPE_INT, 90000,
                                NULL);
    g_object_set (source, "caps", caps, NULL);

    rtp264depay = gst_element_factory_make ("rtph264depay", "rtph264depay");
    h264parse = gst_element_factory_make ("h264parse", "h264parse");
    vtdec = gst_element_factory_make ("vtdec", "vtdec");
    glimagesink = gst_element_factory_make ("glimagesink", "glimagesink");
    gst_bin_add_many (GST_BIN(pipeline), source, rtp264depay, h264parse, vtdec, glimagesink, NULL);

    if (error) {
        gchar *message = g_strdup_printf("Unable to build pipeline: %s", error->message);
        g_clear_error (&error);
        [self setUIMessage:message];
        g_free (message);
        return;
    }

    /* Set the pipeline to READY, so it can already accept a window handle */
    gst_element_set_state(pipeline, GST_STATE_READY);
    video_sink = gst_bin_get_by_interface(GST_BIN(pipeline), GST_TYPE_VIDEO_OVERLAY);
    if (!video_sink) {
        GST_ERROR ("Could not retrieve video sink");
        return;
    }
    gst_video_overlay_set_window_handle(GST_VIDEO_OVERLAY(video_sink), (guintptr) (id) ui_video_view);

    /* Instruct the bus to emit signals for each received message, and connect to the interesting signals */
    bus = gst_element_get_bus (pipeline);
    bus_source = gst_bus_create_watch (bus);
    g_source_set_callback (bus_source, (GSourceFunc) gst_bus_async_signal_func, NULL, NULL);
    g_source_attach (bus_source, context);
    g_source_unref (bus_source);
    g_signal_connect (G_OBJECT (bus), "message::error", (GCallback)error_cb, (__bridge void *)self);
    g_signal_connect (G_OBJECT (bus), "message::state-changed", (GCallback)state_changed_cb, (__bridge void *)self);
    gst_object_unref (bus);

    /* Create a GLib Main Loop and set it to run */
    GST_DEBUG ("Entering main loop...");
    main_loop = g_main_loop_new (context, FALSE);
    [self check_initialization_complete];
    g_main_loop_run (main_loop);
    GST_DEBUG ("Exited main loop");
    g_main_loop_unref (main_loop);
    main_loop = NULL;

    /* Free resources */
    g_main_context_pop_thread_default(context);
    g_main_context_unref (context);
    gst_element_set_state (pipeline, GST_STATE_NULL);
    gst_object_unref (pipeline);
    return;
}
Now I am running the application on the iPad, and the application starts playing.
When I enter the background and return to the foreground, the GStreamer streaming updates are no longer visible in the UI, but in Xcode's network usage I can see packets still being received. :(
Thanks in advance, iOS geeks.
Update: getting UDP to work.
After further investigation I got UDP H.264 streaming to work on Linux (PC x86), but the principle should be the same on iOS (specifically, avdec_h264, used on the PC, has to be replaced by vtdec).
Key differences between the TCP and UDP pipelines:
Server side:
IP: the first thing that confused me between the UDP and TCP server sides: on the UDP server, the IP address specified on the udpsink element is the client-side IP, i.e.
gst-launch-1.0 -v fdsrc ! h264parse ! rtph264pay config-interval=1 pt=96 ! udpsink host=$CLIENTIP port=5000
while on the TCP server side, the IP is that of the server itself (the host parameter on tcpserversink), i.e.
gst-launch-1.0 -v fdsrc ! h264parse ! rtph264pay config-interval=1 pt=96 ! gdppay ! tcpserversink host=$SERVERIP port=5000
Video stream payload/format: in order for the client to be able to detect the format and size of the frames, the TCP server side uses gdppay, a payloader element, in its pipeline. The client side uses the opposite element, the gdpdepay de-payloader, in order to read the received frames.
i.e.
gst-launch-1.0 -v fdsrc ! h264parse ! rtph264pay config-interval=1 pt=96 ! gdppay ! tcpserversink host=$SERVERIP port=5000
The UDP server side does not use the gdppay element; it leaves it to the client side to specify caps on its udpsrc (see the client-side differences below).
Client side
IP: The UDP client does NOT need any IP specified.
The TCP client side, on the other hand, needs the server IP (the host parameter on tcpclientsrc), i.e.
gst-launch-1.0 -v tcpclientsrc host=$SERVERIP port=5000 ! gdpdepay ! rtph264depay ! avdec_h264 ! videoconvert ! autovideosink sync=false enable-last-buffer=false
Video stream payload/format: as mentioned in the previous paragraph, the TCP server side uses the gdppay payloader while the client side uses the gdpdepay de-payloader to recognize the format and size of the frames.
The UDP client instead has to specify this explicitly using caps on its udpsrc element, i.e.
CAPS='application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)96'
gst-launch-1.0 -v udpsrc port=5000 caps=$CAPS ! rtph264depay ! avdec_h264 ! videoconvert ! autovideosink sync=false enable-last-buffer=false`
How to specify the caps: it is a bit hacky, but it works.
Run your UDP server with the verbose option -v, i.e.
gst-launch-1.0 -v fdsrc ! h264parse ! rtph264pay config-interval=1 pt=96 ! udpsink host=$CLIENTIP port=5000
You'll get the following log:
Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
/GstPipeline:pipeline0/GstH264Parse:h264parse0.GstPad:src: caps = video/x-h264, width=(int)1280, height=(int)720, parsed=(boolean)true, stream-format=(string)avc, alignment=(string)au, codec_data=(buffer)01640028ffe1000e27640028ac2b402802dd00f1226a01000428ee1f2c
/GstPipeline:pipeline0/GstRtpH264Pay:rtph264pay0.GstPad:src: caps = application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, sprop-parameter-sets=(string)"J2QAKKwrQCgC3QDxImo\=\,KO4fLA\=\=", payload=(int)96, ssrc=(uint)3473549335, timestamp-offset=(uint)257034921, seqnum-offset=(uint)12956
/GstPipeline:pipeline0/GstUDPSink:udpsink0.GstPad:sink: caps = application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, sprop-parameter-sets=(string)"J2QAKKwrQCgC3QDxImo\=\,KO4fLA\=\=", payload=(int)96, ssrc=(uint)3473549335, timestamp-offset=(uint)257034921, seqnum-offset=(uint)12956
/GstPipeline:pipeline0/GstRtpH264Pay:rtph264pay0.GstPad:sink: caps = video/x-h264, width=(int)1280, height=(int)720, parsed=(boolean)true, stream-format=(string)avc, alignment=(string)au, codec_data=(buffer)01640028ffe1000e27640028ac2b402802dd00f1226a01000428ee1f2c
/GstPipeline:pipeline0/GstRtpH264Pay:rtph264pay0: timestamp = 257034921
/GstPipeline:pipeline0/GstRtpH264Pay:rtph264pay0: seqnum = 12956
Pipeline is PREROLLED ...
Setting pipeline to PLAYING ...
Now copy the caps line starting with caps = application/x-rtp.
This is the line specifying the RTP stream format and, as far as I know, the one that is really mandatory for the UDP client to recognize the RTP stream content and then start playing.
To wrap it up and avoid confusion, complete command-line examples using raspivid with a Raspberry Pi are given below, if you want to try it (on Linux).
UDP
Server: raspivid -t 0 -w 1280 -h 720 -fps 25 -b 2500000 -o - | gst-launch-1.0 -v fdsrc ! h264parse ! rtph264pay config-interval=1 pt=96 ! udpsink host=$CLIENTIP port=5000
Client:
CAPS='application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)96'
gst-launch-1.0 -v udpsrc port=5000 caps=$CAPS ! rtph264depay ! avdec_h264 ! videoconvert ! autovideosink sync=false enable-last-buffer=false
TCP
Server: raspivid -t 0 -w 1280 -h 720 -fps 25 -b 2500000 -o - | gst-launch-0.10 -v fdsrc ! h264parse ! rtph264pay config-interval=1 pt=96 ! gdppay ! tcpserversink host=$SERVERIP port=5000
Client: gst-launch-1.0 -v tcpclientsrc host=$SERVERIP port=5000 ! gdpdepay ! rtph264depay ! avdec_h264 ! videoconvert ! autovideosink sync=false enable-last-buffer=false
Note: raspivid could easily be replaced by a simple H.264 file using cat, i.e. cat myfile.h264 | gst-launch...
I recently tried to get live streaming working from a Raspberry Pi to iOS 8 using hardware H.264 decoding, via the Apple VideoToolbox API through the "vtdec" GStreamer plugin.
I looked at many tutorials, namely from braincorp (https://github.com/braincorp/gstreamer_ios_tutorial)
and
Sebastian Dröge:
http://cgit.freedesktop.org/~slomo/gst-sdk-tutorials/
I got the latter one to work, with tutorial 3 modified.
Server pipeline on the Raspberry Pi using the Pi camera and raspivid + GStreamer:
raspivid -t 0 -w 1280 -h 720 -fps 25 -b 2500000 -p 0,0,640,480 -o - | gst-launch-0.10 -v fdsrc ! h264parse ! rtph264pay config-interval=1 pt=96 ! gdppay ! tcpserversink host=ServerIPRaspberryPi port=any_port_on_Rpi
Client-side pipeline on the iOS 8 device:
tcpclientsrc host=ServerIPRaspberryPi port=any_port_on_Rpi ! gdpdepay ! rtph264depay ! h264parse ! vtdec ! glimagesink
or the same with autovideosink instead of glimagesink.
This solution works, and several clients can be used simultaneously. I tried getting udpsink to work instead of tcpserversink, but no luck so far; it never worked.
===IMPORTANT===
Also the factory way using gst_element_factory_make() + gst_bin_add_many (GST_BIN(pipeline), ...) never worked.
Instead I used the pipeline = gst_parse_launch(...) method.
So in our case, on the IOS client side:
pipeline = gst_parse_launch("tcpclientsrc host=172.19.20.82 port=5000 ! gdpdepay ! rtph264depay ! h264parse ! vtdec ! autovideosink", &error);
Possible reason: there is a page documenting the differences and how to port code from GStreamer 0.10 to 1.0:
http://gstreamer.freedesktop.org/data/doc/gstreamer/head/manual/html/chapter-porting-1.0.html
We noted that while using the "factory method" various elements of the pipeline were missing depending on whether we were using GStreamer 1.0 or 0.10, e.g. rtph264depay, or avdec_h264 (used on other platforms, i.e. the Linux client side, to decode H.264 instead of the iOS-specific vtdec).
We could hardly get all the elements together using the factory method, but we managed with the gst_parse_launch() function without any problems, on both iOS and Linux.
So in conclusion: since we haven't tested and gotten the UDP sink to work, try the TCP way using the tcpclientsrc element first, get it working, and only once it works try to find your way to UDP, and please let us know if you get to the end.
Best Regards, hope this helps many of you.
Romain S
