gst-launch 1.0 window resize - gstreamer-1.0

Here I have a UDP stream sender using gst-launch-1.0:
gst-launch-1.0 -v filesrc location="./venom-trailer-3_h720p.mov" ! qtdemux ! rtph264pay pt=96 config-interval=-1 ! udpsink host=232.255.23.23 port=5001 multicast-iface=eth0 -e
and here is my receiver command:
DISPLAY=:0 gst-launch-1.0 udpsrc uri=udp://232.255.23.23:5001 port=5001 ! application/x-rtp, payload=96 ! rtpjitterbuffer ! rtph264depay ! h264parse ! queue ! avdec_h264 ! xvimagesink
My question is how to change the position and size of the window on the receiver. According to this
https://gstreamer.freedesktop.org/data/doc/gstreamer/head/gst-plugins-base-plugins/html/gst-plugins-base-plugins-ximagesink.html
I have to change the ximagesink values, but I get
WARNING: erroneous pipeline: no property "width" in element "xvimagesink0"

According to the link you posted, the element ximagesink does NOT have a property "width"; it has a property called window-width instead.
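For finer control than the sink's properties give you from gst-launch, one option is to build the receiver in application code and use the GstVideoOverlay interface that xvimagesink implements. A minimal Python sketch, assuming PyGObject is available; the 640x360 rectangle is just an illustrative value, and the multicast address is the one from your receiver:

import gi
gi.require_version("Gst", "1.0")
gi.require_version("GstVideo", "1.0")
from gi.repository import Gst, GstVideo  # GstVideo provides the VideoOverlay interface

Gst.init(None)

pipeline = Gst.parse_launch(
    "udpsrc uri=udp://232.255.23.23:5001 ! application/x-rtp, payload=96 ! "
    "rtpjitterbuffer ! rtph264depay ! h264parse ! avdec_h264 ! xvimagesink name=sink")

sink = pipeline.get_by_name("sink")
# xvimagesink implements GstVideo.VideoOverlay, so the interface methods are
# available on the element; this confines rendering to a sub-rectangle.
sink.set_render_rectangle(0, 0, 640, 360)
# To control the window itself, embed the sink in your own window and pass
# its handle with sink.set_window_handle(xid).

pipeline.set_state(Gst.State.PLAYING)
pipeline.get_bus().timed_pop_filtered(
    Gst.CLOCK_TIME_NONE, Gst.MessageType.EOS | Gst.MessageType.ERROR)
pipeline.set_state(Gst.State.NULL)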

Related

Multiple appsinks with a single udpsrc; GStreamer, using the tee element in OpenCV VideoCapture

I am trying to access video frames from a single udpsrc (a camera) across 2 processes using OpenCV VideoCapture.
Process One: an OpenCV Python application which uses the frames to do some image processing.
Process Two: another OpenCV Python application doing a completely different task.
I need to create a VideoCapture object in both of these applications to access the video stream, but when I use
cap = cv2.VideoCapture("udpsrc address=192.169.0.5 port=11024 ! application/x-rtp, media=video, clock-rate=90000, encoding-name=H264, payload=96 ! rtph264depay ! decodebin ! nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! video/x-raw,format=BGR ! appsink")
in both processes, only one of them is able to create the cap object successfully.
I came across the "tee" element for creating two sinks, but I don't know where and how to implement it.
Can anyone help with this?
I tried creating a GStreamer pipeline using the tee element, something like this:
gst-launch-1.0 udpsrc address=192.169.0.5 port=11024 ! application/x-rtp, media=video, clock-rate=90000, encoding-name=H264, payload=96 ! rtph264depay ! decodebin ! tee name=t \
t. ! nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! video/x-raw,format=BGR ! autovideosink \
t. ! nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! video/x-raw,format=BGR ! appsink
But I have no idea how to use this in VideoCapture().
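For what it's worth, OpenCV's GStreamer backend reads frames from whichever branch ends in an appsink, so one way to use a tee pipeline with VideoCapture is to terminate exactly one branch with appsink and let the other go to a display sink. A rough sketch under that assumption (requires an OpenCV build with GStreamer support; the address and port are the ones from your pipeline):

import cv2

# One tee branch feeds the appsink that VideoCapture reads from, the other is
# displayed. Note the queue on each branch so the tee does not block.
pipeline = (
    "udpsrc address=192.169.0.5 port=11024 ! "
    "application/x-rtp, media=video, clock-rate=90000, encoding-name=H264, payload=96 ! "
    "rtph264depay ! decodebin ! tee name=t "
    "t. ! queue ! nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! "
    "video/x-raw,format=BGR ! appsink drop=true "
    "t. ! queue ! nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! autovideosink")

cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # ... process `frame` here ...
cap.release()

Note that this still delivers the frames to a single process; the tee by itself does not let two separate processes bind the same udpsrc.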

gstreamer-imx video streaming and encoding

I am currently using a Nitrogen 6 Max development board. I am attempting to retrieve video from my webcam through v4l2src so that the feed can be streamed and also encoded to be saved.
This is the pipeline, and it works:
v4l2src device="/dev/video2" ! tee name=t
t. ! queue ! x264enc ! mp4mux ! filesink location=test.mp4
t. ! queue ! videoconvert ! autovideosink
Then I attempted to use the imx-gstreamer library. I spent time looking around and found that this works:
gst-launch-1.0 -e videotestsrc num-buffers=1000 ! \
video/x-raw,width=640,height=480,framerate=30/1 ! imxvpuenc_h264 ! \
h264parse ! avdec_h264 ! filesink location=cx1.mp4
However, when I attempt to use "tee" to split up the video source, it just freezes and my terminal session locks up.
gst-launch-1.0 -e videotestsrc num-buffers=1000 ! autovideoconvert ! tee name=t \
t. ! video/x-raw,width=640,height=480,framerate=30/1 ! imxvpuenc_h264 ! h264parse ! avdec_h264 ! filesink location=cx1.mp4 \
t. ! video/x-raw,width=640,height=480,framerate=30/1 ! autovideosink
I tried isolating the issue by encoding through tee, and realized that this runs, but the video file that it generates is corrupted:
gst-launch-1.0 -e videotestsrc num-buffers=1000 ! tee name=t \
t. ! video/x-raw,width=640,height=480,framerate=30/1 ! imxvpuenc_h264 ! \
h264parse ! avdec_h264 ! filesink location=cx1.mp4
I tried using queues and videoconvert, but it does not seem to work.
Also, another question here. I am new to GstElement capabilities, which are what decide whether elements can be linked (e.g., a v4l2src video/x-raw capability includes I420, which is why I can link this element to imxvpuenc_h264). However, for the tee element, does it split and replicate the capabilities of its src?
I am new to GStreamer, and I can't seem to work around this issue. Can someone help me out here?
A few hints to help you out:
As a rule, always use queues at the outputs of the tee so that it doesn't block your pipeline (see the sketch after these hints).
Another option to avoid blocking is to set async=false in your sink elements.
Try setting dts-method=2 on the mp4mux to see if it makes a difference.
The first line of troubleshooting when working with GStreamer is the debug output. Please inspect and share the output of GST_DEBUG=2 gst-launch-1.0 ....
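Putting the first two hints together, the tee section could look something like the sketch below, shown here with Gst.parse_launch so it can be dropped into a script (a gst-launch-1.0 line with the same elements should behave the same). Note that the mp4mux is an assumption on my part: raw decoded video or bare H.264 written straight to a .mp4 file will not give a playable file, which would explain the corruption you saw.

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# A queue after each tee branch keeps one branch from blocking the other;
# async=false on the filesink is the other hint from above.
pipeline = Gst.parse_launch(
    "videotestsrc num-buffers=1000 ! "
    "video/x-raw,width=640,height=480,framerate=30/1 ! tee name=t "
    "t. ! queue ! imxvpuenc_h264 ! h264parse ! mp4mux ! "
    "filesink location=cx1.mp4 async=false "
    "t. ! queue ! videoconvert ! autovideosink")

pipeline.set_state(Gst.State.PLAYING)
pipeline.get_bus().timed_pop_filtered(
    Gst.CLOCK_TIME_NONE, Gst.MessageType.EOS | Gst.MessageType.ERROR)
pipeline.set_state(Gst.State.NULL)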

How to connect filter src to nvstreammux sink

I have a pipeline like this:
gst-launch-1.0 v4l2src ! 'video/x-raw,format=(string)YUY2' ! nvvidconv ! 'video/x-raw(memory:NVMM),format=(string)NV12' ! nvvidconv ! 'video/x-raw,format=(string)NV12' ! nvvideoconvert ! 'video/x-raw(memory:NVMM),format=(string)NV12' ! mux.sink_0 nvstreammux live-source=1 name=mux batch-size=1 width=640 height=480 ! nvinfer config-file-path=/opt/nvidia/deepstream/deepstream-4.0/sources/apps/sample_apps/deepstream-test1/dstest1_pgie_config.txt batch-size=1 ! nvmultistreamtiler rows=1 columns=1 width=640 height=480 ! nvvideoconvert ! nvdsosd ! nvegltransform ! nveglglessink
and I am trying to connect the capsfilter 'video/x-raw(memory:NVMM),format=(string)NV12' to the nvstreammux. Should I create pads or bins between those two, and how?
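In gst-launch syntax the '! mux.sink_0' part of your pipeline already does this, so no extra bin should be needed; in application code the equivalent is to request a sink pad from nvstreammux and link the capsfilter's src pad to it. A rough Python sketch of just that linking step (the element names are made up for the example, and it assumes the DeepStream elements are installed):

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

capsfilter = Gst.ElementFactory.make("capsfilter", "nvmm_caps")
capsfilter.set_property(
    "caps", Gst.Caps.from_string("video/x-raw(memory:NVMM), format=(string)NV12"))
streammux = Gst.ElementFactory.make("nvstreammux", "mux")
streammux.set_property("batch-size", 1)
streammux.set_property("width", 640)
streammux.set_property("height", 480)
streammux.set_property("live-source", 1)

pipeline = Gst.Pipeline.new("p")
pipeline.add(capsfilter)
pipeline.add(streammux)

# nvstreammux exposes request pads named sink_%u; ask for one and link the
# capsfilter's always-present src pad to it.
sinkpad = streammux.get_request_pad("sink_0")
srcpad = capsfilter.get_static_pad("src")
assert srcpad.link(sinkpad) == Gst.PadLinkReturn.OK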

How to increase the GStreamer buffer size during video streaming?

I'm using the OpenCV and GStreamer code below to send and receive video over the network. On the receiver side I'm getting distorted video and the warning messages "Redistribute latency" and "not enough buffering available for the processing deadline, add enough queues to buffer". I tried 'queue max-size-bytes=900000 max-size-buffers=0 max-size-time=0' but no luck. I'm in a Windows environment. Any ways to improve the buffering?
Sender
cv::VideoWriter m_videoOut("appsrc ! videoconvert ! x264enc ! video/x-h264,stream-format=byte-stream ! rtph264pay ! udpsink host=192.168.1.200 port=5000 sync=false", cv::CAP_GSTREAMER, 0 , 30, cv::Size(640,480),true);
Receiver
cv::VideoCapture cap("udpsrc port=5000 ! application/x-rtp, encoding-name=H264, payload=96 ! rtph264depay queue-delay=0 ! h264parse ! avdec_h264 ! videoconvert ! appsink", cv::CAP_GSTREAMER);
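Not a definitive fix, but two knobs that are commonly suggested for this warning are the kernel receive buffer on udpsrc (its buffer-size property, in bytes) and an rtpjitterbuffer with a larger latency before the depayloader. A sketch of the receiver with those added, written in Python here; the sizes are guesses you would need to tune:

import cv2

# buffer-size enlarges the UDP socket's kernel receive buffer (bytes);
# rtpjitterbuffer latency (ms) gives the pipeline more room to reorder packets;
# the unbounded queue decouples decoding from the appsink.
receiver = (
    "udpsrc port=5000 buffer-size=2097152 ! "
    "application/x-rtp, encoding-name=H264, payload=96 ! "
    "rtpjitterbuffer latency=200 ! rtph264depay ! h264parse ! avdec_h264 ! "
    "queue max-size-buffers=0 max-size-bytes=0 max-size-time=0 ! "
    "videoconvert ! appsink drop=true")

cap = cv2.VideoCapture(receiver, cv2.CAP_GSTREAMER)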

Port pipeline to gst-rtsp-server

I'm trying to wrap this working sender-side pipeline in the gst-rtsp-server:
gst-launch-1.0 --gst-plugin-path=/usr/lib/x86_64-linux-gnu/gstreamer-1.0/ filesrc location=sample.mp4 ! decodebin name=mux mux. ! queue ! videoconvert ! edgedetect ! videoconvert ! x264enc ! rtph264pay ! udpsink host=127.0.0.1 port=5000 mux. ! queue ! audioconvert ! audioresample ! alawenc ! rtppcmapay ! udpsink host=127.0.0.1 port=5001
Using a complementary pipeline on the receiver side, everything works and I'm able to send an OpenCV-processed stream and get it on the client side.
Something is wrong when I try to wrap part of this pipeline in the working example provided along with the gst-rtsp-server.
In fact, editing test-mp4.c and changing the filesrc input pipeline to
"filesrc location=%s ! qtdemux name=d "
"d. ! queue ! videoconvert ! edgedetect ! videoconvert ! x264enc ! rtph264pay pt=96 name=pay0 "
"d. ! queue ! rtpmp4apay pt=97 name=pay1 " ")"
the sender doesn't work anymore. On the receiver side I get a 503 error, since the receiver is unable to get the SDP.
Could this be an issue related to a missing bad-plugins directory?
I set it in the main Makefile but the problem still persists.
I guess so, since the rtsp-server works perfectly if I do not edit that line, and my pipeline works fine on its own as well.
Thanks,
Francesco
This looks like it is an issue with the pipeline you have created. Try running your pipeline exactly how it is on the command line, but add fakesink elements on the end to see if that works:
gst-launch-1.0 filesrc location=%s ! qtdemux name=d d. ! queue ! videoconvert ! edgedetect ! videoconvert ! x264enc ! rtph264pay pt=96 name=pay0 ! fakesink d. ! queue ! rtpmp4apay pt=97 name=pay1 ! fakesink
At a glance, it looks like you're demuxing the media, but not decoding the video to a raw format for the edgedetect element.
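Since edgedetect needs raw video, one plausible fix is to decode the demuxed H.264 before it and re-encode afterwards. As an illustration, here is a Python equivalent of the test-mp4.c factory with that change; the /test mount point and the use of Python rather than the C example are just for the sketch, and it assumes sample.mp4 carries H.264 video plus AAC audio:

import gi
gi.require_version("Gst", "1.0")
gi.require_version("GstRtspServer", "1.0")
from gi.repository import Gst, GstRtspServer, GLib

Gst.init(None)

server = GstRtspServer.RTSPServer.new()
factory = GstRtspServer.RTSPMediaFactory.new()
# Decode the demuxed H.264 to raw video before edgedetect, then re-encode for pay0.
factory.set_launch(
    "( filesrc location=sample.mp4 ! qtdemux name=d "
    "d. ! queue ! h264parse ! avdec_h264 ! videoconvert ! edgedetect ! "
    "videoconvert ! x264enc ! rtph264pay pt=96 name=pay0 "
    "d. ! queue ! rtpmp4apay pt=97 name=pay1 )")
server.get_mount_points().add_factory("/test", factory)
server.attach(None)
GLib.MainLoop().run()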
