How to connect filter src to nvstreammux sink - gstreamer-1.0

I have a pipeline like this,
gst-launch-1.0 v4l2src ! 'video/x-raw,format=(string)YUY2' ! nvvidconv ! 'video/x-raw(memory:NVMM),format=(string)NV12' ! nvvidconv ! 'video/x-raw,format=(string)NV12' ! nvvideoconvert ! 'video/x-raw(memory:NVMM),format=(string)NV12' ! mux.sink_0 nvstreammux live-source=1 name=mux batch-size=1 width=640 height=480 ! nvinfer config-file-path=/opt/nvidia/deepstream/deepstream-4.0/sources/apps/sample_apps/deepstream-test1/dstest1_pgie_config.txt batch-size=1 ! nvmultistreamtiler rows=1 columns=1 width=640 height=480 ! nvvideoconvert ! nvdsosd ! nvegltransform ! nveglglessink
and I am trying to connect the capsfilter 'video/x-raw(memory:NVMM),format=(string)NV12' to nvstreammux. Should I create pads or bins between those two, and how?
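In gst-launch syntax the mux.sink_0 link already does this, but when building the pipeline in code, nvstreammux exposes request sink pads named sink_%u, so you request one and link the capsfilter's src pad to it; no extra bin is needed. Below is a minimal Python sketch under that assumption (the DeepStream element names and properties are copied from the pipeline above; the variable names are only for illustration):

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)
pipeline = Gst.Pipeline.new("pipeline")

# Capsfilter forcing video/x-raw(memory:NVMM),format=NV12, as in the question.
capsfilter = Gst.ElementFactory.make("capsfilter", "nvmm_caps")
capsfilter.set_property(
    "caps", Gst.Caps.from_string("video/x-raw(memory:NVMM),format=(string)NV12"))

streammux = Gst.ElementFactory.make("nvstreammux", "mux")
streammux.set_property("live-source", 1)
streammux.set_property("batch-size", 1)
streammux.set_property("width", 640)
streammux.set_property("height", 480)

pipeline.add(capsfilter)
pipeline.add(streammux)

# Request a sink pad on the muxer (this is what mux.sink_0 does in gst-launch)
# and link the capsfilter's static src pad to it.
sinkpad = streammux.get_request_pad("sink_0")  # request_pad_simple() on GStreamer >= 1.20
srcpad = capsfilter.get_static_pad("src")
if srcpad.link(sinkpad) != Gst.PadLinkReturn.OK:
    raise RuntimeError("Failed to link capsfilter to nvstreammux")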

Related

Multiple appsink with a single udpsrc; GStreamer, using tee element in OpenCV VideoCapture

I am trying to access video frames from a single udpsrc (a camera) across two processes using OpenCV VideoCapture.
Process One: an OpenCV Python application which uses the frames to do some image processing.
Process Two: another OpenCV Python application doing a completely different task.
I need to create a VideoCapture object in both of these applications to access the video stream, but when I use
cap = cv2.VideoCapture("udpsrc address=192.169.0.5 port=11024 ! application/x-rtp, media=video, clock-rate=90000, encoding-name=H264, payload=96 ! rtph264depay ! decodebin ! nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! video/x-raw, format =BGR ! appsink")
in both processes, only one of them is able to successfully create a cap object.
I came across the "tee" element to create two sinks, but I don't know where and how to implement this.
Can anyone help with this?
I tried creating a GStreamer pipeline using the tee element, something like this:
gst-launch-1.0 udpsrc address=192.169.0.5 port=11024 ! application/x-rtp, media=video, clock-rate=90000, encoding-name=H264, payload=96 ! rtph264depay ! decodebin ! tee name=t \
t. ! nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! video/x-raw, format =BGR ! autovideosink \
t. ! nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! video/x-raw, format =BGR ! appsink
But I have no idea how to use this in VideoCapture().
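For reference, here is a rough sketch of how a tee-based pipeline string could be handed to cv2.VideoCapture (assuming OpenCV was built with the GStreamer backend; the address, port, and caps are copied from the question). OpenCV reads from the branch that ends in appsink, while the second tee branch is shown by autovideosink inside the same process:

import cv2

# Branch 1 feeds OpenCV via appsink; branch 2 is displayed by autovideosink.
pipeline = (
    "udpsrc address=192.169.0.5 port=11024 ! "
    "application/x-rtp, media=video, clock-rate=90000, encoding-name=H264, payload=96 ! "
    "rtph264depay ! decodebin ! tee name=t "
    "t. ! queue ! nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! "
    "video/x-raw,format=BGR ! appsink drop=true "
    "t. ! queue ! nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! autovideosink"
)

cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # ... process `frame` here ...

This still runs inside a single process, though; since, as observed, only one process can open the UDP source, one possible workaround is to replace the autovideosink branch with something like udpsink host=127.0.0.1 port=5000 and let the second process open its own VideoCapture on that local port.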

Add new sink to gstreamer examples camera

I'm very new to GStreamer and I can't write the output to a file.
I'm testing the gstreamer detect example in the coral/examples-camera/gstreamer project. This demo runs perfectly on my Coral Dev Board Mini, but I need help adding a filesink to the GStreamer pipeline of this project:
PIPELINE += """ ! decodebin ! queue ! v4l2convert ! {scale_caps} !
glupload ! glcolorconvert ! video/x-raw(memory:GLMemory),format=RGBA !
tee name=t
t. ! queue ! glfilterbin filter=glbox name=glbox ! queue ! {sink_caps} ! {sink_element}
t. ! queue ! glsvgoverlay name=gloverlay sync=false ! glimagesink fullscreen=true
qos=false sync=false
"""
The demo uses the glimagesink element to display the video on the screen, and I need to add a sink that saves the video to a file.
Running with debug output enabled, I've captured what I believe is the video format at the glimagesink element; the debug output is:
GST_PADS gstpad.c:3160:gst_pad_query_accept_caps_default:<salida:sink> allowed caps subset ANY, caps video/x-raw(memory:GLMemory), framerate=(fraction)30/1, interlace-mode=(string)progressive, width=(int)640, height=(int)480, format=(string)RGBA, texture-target=(string)2D
I've tried this pipeline configuration:
Coral board model: mt8167
GStreamer pipeline:
v4l2src device=/dev/video0 ! video/x-raw,width=640,height=480,framerate=30/1 ! decodebin ! queue ! v4l2convert ! video/x-raw,format=BGRA,width=640,height=480 !
glupload ! glcolorconvert ! video/x-raw(memory:GLMemory),format=RGBA !
tee name=t
t. ! queue ! glfilterbin filter=glbox name=glbox ! queue ! video/x-raw,format=RGB,width=320,height=320 ! appsink name=appsink emit-signals=true max-buffers=1 drop=true
t. ! queue ! glsvgoverlay name=gloverlay sync=false !
videoconvert ! x264enc ! avimux ! filesink location=output.avi
Traceback (most recent call last):
File "detect.py", line 138, in <module>
main()
File "detect.py", line 135, in main
videofmt=args.videofmt)
File "/home/mendel/google-coral/examples-camera/gstreamer/gstreamer.py", line 294, in run_pipeline
pipeline = GstPipeline(pipeline, user_function, src_size)
File "/home/mendel/google-coral/examples-camera/gstreamer/gstreamer.py", line 36, in __init__
self.pipeline = Gst.parse_launch(pipeline)
gi.repository.GLib.Error: gst_parse_error: could not link videoconvert0 to x264enc0 (3)
Does anyone know how to write this output to a file with GStreamer?
Any help will be appreciated.
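One guess, offered as an assumption rather than a verified fix: according to the debug caps above, the overlay branch still carries video/x-raw(memory:GLMemory) buffers, and CPU elements such as videoconvert and x264enc only accept system-memory video/x-raw, so a gldownload is needed in front of them. A sketch of the pipeline that keeps the display branch and adds a file-writing branch behind a second tee (element names mirror the question; the caps and encoder settings may need adjusting for the Coral board):

# Hypothetical variant of the project's pipeline string with a file branch added.
PIPELINE += """ ! decodebin ! queue ! v4l2convert ! {scale_caps} !
    glupload ! glcolorconvert ! video/x-raw(memory:GLMemory),format=RGBA !
    tee name=t
    t. ! queue ! glfilterbin filter=glbox name=glbox ! queue ! {sink_caps} ! {sink_element}
    t. ! queue ! glsvgoverlay name=gloverlay sync=false ! tee name=t2
    t2. ! queue ! glimagesink fullscreen=true qos=false sync=false
    t2. ! queue ! gldownload ! videoconvert ! x264enc tune=zerolatency ! avimux !
        filesink location=output.avi
"""

The tune=zerolatency on x264enc is only there so the live branch does not stall waiting for the encoder's lookahead; it is optional.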

GStreamer and YouTube problem: RTMPSink cannot write to resource

I have a problem sending video to YouTube using GStreamer.
My pipeline is:
"appsrc name=videoAppSrc ! rawvideoparse name=videoparser use-sink-caps=false format=8 ! videoconvert ! video/x-raw, fromat=YUV, width="+videoWidth+", height="+videoHeight+", framerate=25/1 ! videoconvert ! x264enc key-int-max=60 ! video/x-h264,profile=baseline ! tee name=t t. ! queue ! flvmux streamable=true name=mux ! rtmpsink name=dest location="+this.url+"/"+this.key+" t. ! queue ! matroskamux name=filemux ! filesink name=fileout location="+archFile.getAbsolutePath()+" appsrc name=audioAppSrc ! rawaudioparse use-sink-caps=true ! audioconvert ! volume name=audiovolume volume=1 ! voaacenc ! aacparse ! tee name=ta ta. ! queue ! mux. ta. ! queue ! filemux."
I'm using Java with gst1-java-core to push frames into the pipeline.
After some time I get this kind of error: Could not write to resource, from the GstRTMPSink element.
Sometimes it happens after 1 hour, sometimes after 3 hours.
I think the problem is that YouTube won't accept my stream.
Am I right?
Is something wrong with my pipeline?
Do I perhaps have to adjust some properties to get this working with YouTube?

gst-launch 1.0 window resize

Here I have a UDP stream sender using gst-launch-1.0:
gst-launch-1.0 -v filesrc location="./venom-trailer-3_h720p.mov" ! qtdemux ! rtph264pay pt=96 config-interval=-1 ! udpsink host=232.255.23.23 port=5001 multicast-iface=eth0 -e
and here is my receiver command:
DISPLAY=:0 gst-launch-1.0 udpsrc uri=udp://232.255.23.23:5001 port=5001 ! application/x-rtp, payload=96 ! rtpjitterbuffer ! rtph264depay ! h264parse ! queue ! avdec_h264 ! xvimagesink
My question is how to change the position and size of the window on the receiver side. According to this
https://gstreamer.freedesktop.org/data/doc/gstreamer/head/gst-plugins-base-plugins/html/gst-plugins-base-plugins-ximagesink.html
I have to change the ximagesink values, but I get
WARNING: erroneous pipeline: no property "width" in element "xvimagesink0"
According to the link you posted, the ximagesink element does NOT have a property called width; it has a property called window-width instead.
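As a rough illustration (assuming window-width and window-height are writable on your GStreamer version, which is what the linked ximagesink documentation suggests), the window size could be set from code once the sink has a name; window position is not exposed as a simple property on these sinks:

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst, GLib

Gst.init(None)

# Receiver from the question, with the sink given a name so it can be looked up.
pipeline = Gst.parse_launch(
    "udpsrc uri=udp://232.255.23.23:5001 ! application/x-rtp, payload=96 ! "
    "rtpjitterbuffer ! rtph264depay ! h264parse ! queue ! avdec_h264 ! "
    "ximagesink name=sink")

sink = pipeline.get_by_name("sink")
sink.set_property("window-width", 640)    # assumed writable, per the linked docs
sink.set_property("window-height", 360)

pipeline.set_state(Gst.State.PLAYING)
GLib.MainLoop().run()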

Port pipeline to gst-rtsp-server

I'm trying to wrap this working sender-side pipeline in the gst-rtsp-server:
gst-launch-1.0 --gst-plugin-path=/usr/lib/x86_64-linux-gnu/gstreamer-1.0/ filesrc location=sample.mp4 ! decodebin name=mux mux. ! queue ! videoconvert ! edgedetect ! videoconvert ! x264enc ! rtph264pay ! udpsink host=127.0.0.1 port=5000 mux. ! queue ! audioconvert ! audioresample ! alawenc ! rtppcmapay ! udpsink host=127.0.0.1 port=5001
Using a complementary pipeline on the receiver side, everything works and I'm able to send an OpenCV-processed stream and get it on the client side.
Something goes wrong when I try to wrap part of this pipeline in the working example provided along with gst-rtsp-server.
In fact, after editing test-mp4.c and changing the filesrc input pipeline to
"filesrc location=%s ! qtdemux name=d "
"d. ! queue ! videoconvert ! edgedetect ! videoconvert ! x264enc ! rtph264pay pt=96 name=pay0 "
"d. ! queue ! rtpmp4apay pt=97 name=pay1 " ")"
the sender doesn't work anymore. On the receiver side I get a 503 error, since the receiver is unable to get the SDP.
Could this be an issue related to a missing bad-plugins directory?
I set it in the main Makefile but the problem still persists.
I guess so, since the rtsp-server works perfectly if I do not edit that line, and my pipeline works fine either way.
Thanks,
Francesco
This looks like it is an issue with the pipeline you have created. Try running your pipeline exactly as it is on the command line, but add fakesink elements on the end to see if that works:
gst-launch-1.0 filesrc location=%s ! qtdemux name=d d. ! queue ! videoconvert ! edgedetect ! videoconvert ! x264enc ! rtph264pay pt=96 name=pay0 ! fakesink d. ! queue ! rtpmp4apay pt=97 name=pay1 ! fakesink
At a glance, it looks like you're demuxing the media, but not decoding the video to a raw format for the edgedetect element.
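If the missing decode step is indeed the problem, here is a sketch of how the corrected launch string might look in a small Python gst-rtsp-server test (the decoder and parser choices such as avdec_h264 and aacparse are assumptions, not taken from the original test-mp4.c):

import gi
gi.require_version('Gst', '1.0')
gi.require_version('GstRtspServer', '1.0')
from gi.repository import Gst, GstRtspServer, GLib

Gst.init(None)

server = GstRtspServer.RTSPServer()
factory = GstRtspServer.RTSPMediaFactory()
# Decode the H.264 track to raw video before edgedetect, then re-encode;
# the AAC track is only parsed before being payloaded.
factory.set_launch(
    "( filesrc location=sample.mp4 ! qtdemux name=d "
    "d. ! queue ! h264parse ! avdec_h264 ! videoconvert ! edgedetect ! "
    "videoconvert ! x264enc ! rtph264pay pt=96 name=pay0 "
    "d. ! queue ! aacparse ! rtpmp4apay pt=97 name=pay1 )")
server.get_mount_points().add_factory("/test", factory)
server.attach(None)
GLib.MainLoop().run()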
