GStreamer dynamic pipeline: Camera Preview on HDMI, Camera Video Recording to File - gstreamer-1.0

Hello, I have a problem with caps. When I try to use this pipeline:
gst-launch-1.0 v4l2src ! caps ! nvvidconv ! caps ! tee ! queue1 ! nvvideosink
I get:
WARNING: erroneous pipeline: no element "caps"
Thanks

GstElement and GstCaps are two different things.
Caps is like a structure that can define the stream's media type and some stream specifications. It is not a GstElement. Therefore, you should use the capsfilter element, which is a GstElement, and set your caps on it.
Your pipeline should look like this:
v4l2src ! capsfilter caps="video/x-raw,width=640,height=480,format=I420" ! nvvidconv ! tee ! queue ! nvvideosink
(Careful, your caps may need to be in GPU memory, like 'video/x-raw(memory:NVMM)'. I explain this below, so continue reading.)
You can arrange the format however you want. If you are not sure of your camera's format, just don't set it, like: caps="video/x-raw,width=640,height=480"
When you use a capsfilter, you force your pipeline to use exactly THOSE stream settings. For instance, if your camera does not support 640x480, your pipeline will fail!
If you are not sure of your camera's specifications, just use the nvvidconv or videoconvert element, which converts the stream for you.
If you are not sure what you should do, try this pipeline:
v4l2src ! nvvidconv ! nvvideosink
Warning: nvvidconv and nvvideosink probably work on the GPU. So if you try to use videoconvert with nvvideosink, the program might crash, because videoconvert works on the CPU and nvvideosink may not be able to accept CPU buffers.
Take a look at this:
https://forums.developer.nvidia.com/t/window-playback-using-nvvideosink/42346 , where he built his pipeline to work on the GPU. He used nvcamerasrc to get the stream on the GPU; v4l2src only delivers into CPU memory.
You decide whether to get the stream from the CPU or the GPU. Also look at this link when creating your pipeline:
https://forums.developer.nvidia.com/t/gstreamer-input-nvcamerasrc-vs-v4l2src/50658/2
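If you build the pipeline in code rather than with gst-launch, the key point is the same: create a capsfilter element and set a GstCaps object on its "caps" property. A minimal C sketch, assuming GStreamer 1.0 and that your camera really supports 640x480 I420 (depending on your platform the sink may instead require 'video/x-raw(memory:NVMM)' caps after nvvidconv, and the tee/queue branch is left out for brevity):

/* caps-demo.c: set caps via a capsfilter element instead of a "caps" element. */
#include <gst/gst.h>

int main (int argc, char *argv[])
{
  gst_init (&argc, &argv);

  GstElement *pipeline = gst_pipeline_new ("camera-pipeline");
  GstElement *src      = gst_element_factory_make ("v4l2src",     "src");
  GstElement *filter   = gst_element_factory_make ("capsfilter",  "filter");
  GstElement *conv     = gst_element_factory_make ("nvvidconv",   "conv");
  GstElement *sink     = gst_element_factory_make ("nvvideosink", "sink");

  if (!pipeline || !src || !filter || !conv || !sink) {
    g_printerr ("Failed to create elements\n");
    return -1;
  }

  /* Caps are a property of the capsfilter, not an element of their own. */
  GstCaps *caps = gst_caps_from_string ("video/x-raw,width=640,height=480,format=I420");
  g_object_set (filter, "caps", caps, NULL);
  gst_caps_unref (caps);

  gst_bin_add_many (GST_BIN (pipeline), src, filter, conv, sink, NULL);
  if (!gst_element_link_many (src, filter, conv, sink, NULL)) {
    g_printerr ("Failed to link elements (check the caps)\n");
    return -1;
  }

  gst_element_set_state (pipeline, GST_STATE_PLAYING);

  /* Run until an error or end-of-stream. */
  GstBus *bus = gst_element_get_bus (pipeline);
  GstMessage *msg = gst_bus_timed_pop_filtered (bus, GST_CLOCK_TIME_NONE,
      GST_MESSAGE_ERROR | GST_MESSAGE_EOS);
  if (msg)
    gst_message_unref (msg);
  gst_object_unref (bus);

  gst_element_set_state (pipeline, GST_STATE_NULL);
  gst_object_unref (pipeline);
  return 0;
}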

Related

How to modify videomixer sink pad alpha value dynamically

I want to take a video file and overlay subtitles that fade in and fade out.
I'm just beginning to learn how to work with Gstreamer.
So far, I've managed to put together a pipeline that composites a subtitle stream drawn by the textrender element onto an original video stream with the videomixer element. Unfortunately, textrender and its sister element textoverlay do not have a fade-in/fade-out feature.
The videomixer sink pad does have an alpha property. For now, I have set the alpha value of the pad named videomixer.sink_1 to 1.0. Here is the command-line version of that pipeline:
#!/bin/bash
gst-launch-1.0 \
filesrc location=sample_videos/my-video.mp4 ! decodebin ! mixer.sink_0 \
filesrc location=subtitles.srt ! subparse ! textrender ! mixer.sink_1 \
videomixer name=mixer sink_0::zorder=2 sink_1::zorder=3 sink_1::ypos=-25 sink_1::alpha=1 \
! video/x-raw, height=540 \
! videoconvert ! autovideosink
I am looking for a way to dynamically modify that alpha value over time so that I can make the subtitle component fade in and out at the appropriate times. (I will parse the SRT file separately to determine when fades begin and end.)
I am studying the GstBin C API (my actual code is in Python). I think after I create the pipeline with Gst.parse_launch(), I can grab any named element with gst_bin_get_by_name(), then use that value to access the pad "sink_1".
Once I've gotten that far, will I be able to modify that alpha value dynamically from an event handler that receives timer events? Will the videomixer element respond instantly to changes in that pad's property? Has anyone else done this?
I found a partial answer here: https://stackoverflow.com/a/17331845/270511 but it doesn't tell me whether this will work after the pipeline is running.
Yes, it will work.
The videomixer pads respond dynamically to changes; I have done this with both the alpha and position properties. The pad properties can be changed using
g_object_set (mix_sink_pad, "alpha", 0.5, NULL);
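Here mix_sink_pad is the mixer's sink pad. One way to get hold of it, as a sketch assuming the pipeline came from gst_parse_launch() and the mixer is named "mixer" as in your launch line:

GstElement *mixer = gst_bin_get_by_name (GST_BIN (pipeline), "mixer");
GstPad *mix_sink_pad = gst_element_get_static_pad (mixer, "sink_1");  /* the subtitle pad */
g_object_set (mix_sink_pad, "alpha", 0.5, NULL);
gst_object_unref (mix_sink_pad);
gst_object_unref (mixer);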
I am using C, but your Python strategy for accessing the bin and pad sounds correct. My GStreamer code responds based on input from a UDP socket, but timer events will work perfectly fine. For example, if you wanted to change the alpha value every 100 ms, you could do something like this
g_timeout_add (100, alpha_changer_cb, loop);
You can then change the alpha property using g_object_set in the callback; it will update dynamically and looks very smooth.
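For illustration only, a sketch of what that callback could look like; here the mixer pad, rather than the loop, is passed as the callback data, and the 0.05 fade step is an arbitrary choice:

/* Called every 100 ms by the GLib main loop; fades sink_1 towards transparent. */
static gboolean
alpha_changer_cb (gpointer user_data)
{
  GstPad *mix_sink_pad = GST_PAD (user_data);
  gdouble alpha;

  g_object_get (mix_sink_pad, "alpha", &alpha, NULL);
  alpha = MAX (alpha - 0.05, 0.0);   /* step towards fully transparent */
  g_object_set (mix_sink_pad, "alpha", alpha, NULL);

  return G_SOURCE_CONTINUE;          /* keep the timer firing */
}

/* Registered with: */
g_timeout_add (100, alpha_changer_cb, mix_sink_pad);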
I got this to work. You can read about it in this post: https://westside-consulting.blogspot.com/2017/03/getting-to-know-gstreamer-part-4.html

GstBuffer flow monitoring in GStreamer pipeline

I want to monitor buffers traveling through my GStreamer pipelines.
For example, in the following pipeline, I want to know whether 1 buffer (i.e. a GstBuffer) flowing between rtph264pay and udpsink corresponds to 1 packet streamed on my Ethernet interface.
gst-launch-1.0 filesrc ! x264enc ! rtph264pay ! udpsink
What tool can I use to figure it out? Do I have to go into source code to get the answer? What will be the answer?
You can use the GST_SCHEDULING debug category to monitor the data flow.
GST_DEBUG="*SCHED*:5" gst-launch-1.0 filesrc ! x264enc ! rtph264pay ! udpsink 2> gst.log
This will produce a log entry for every buffer that reaches a sink pad. You can filter on the udpsink sink pad to obtain the information you want. For the network side, you'll need a network analyser such as Wireshark. You should then be able to compare the two.
In practice, each payloaded buffer will represent 1 UDP packet, unless your network MTU is smaller than what you have configured on the payloader (see its mtu property).
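If you would rather count the buffers programmatically than filter the log, a pad probe works too. A rough C sketch, assuming you give the payloader a name (e.g. rtph264pay name=pay) so it can be looked up; the names here are illustrative, not from the question:

#include <gst/gst.h>

static guint buffer_count = 0;

/* Buffer probe: called once for every GstBuffer leaving the payloader. */
static GstPadProbeReturn
count_buffers_cb (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  guint *counter = user_data;
  (*counter)++;
  return GST_PAD_PROBE_OK;
}

static void
attach_buffer_counter (GstElement *pipeline)
{
  GstElement *pay = gst_bin_get_by_name (GST_BIN (pipeline), "pay");
  GstPad *srcpad = gst_element_get_static_pad (pay, "src");

  gst_pad_add_probe (srcpad, GST_PAD_PROBE_TYPE_BUFFER,
      count_buffers_cb, &buffer_count, NULL);

  gst_object_unref (srcpad);
  gst_object_unref (pay);
}

You can then compare buffer_count against the packet count Wireshark reports for the destination port.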

Fixing a TS file made by the HD Home Run

I am recording from a cable stream to a .ts file using the HDHomeRun command-line tool, hdhomerun_config. The way it works is that you run the command and it prints a period every second or so to let you know that the stream is being recorded successfully. So when I record, it produces only periods, which is what I want, and the way to end it is with Ctrl-C. However, whenever I try to convert the result to an avi or a mov using FFmpeg, it gives a bunch of errors, some of which are:
[mpeg2video @ 0x7fbb4401a000] Invalid frame dimensions 0x0
[mpegts @ 0x7fbb44819600] PES packet size mismatch
[ac3 @ 0x7fbb44015c00] incomplete frame
It still creates the file, but the quality is bad and it doesn't work with OpenCV and other tools. Has anyone else encountered this problem? Does anyone have any knowledge that may help with this situation? I tried to trim the ts file, but most things require conversion before editing. Thank you!
Warnings/errors like that are normal at the very start of the stream, since the recording started mid-stream (i.e. mid PES packet) and ffmpeg expects PES headers (i.e. the start of the PES packet). Once ffmpeg finds the next PES header it will be happy (0-500 ms later in play time).
Short version: it is harmless. You could eliminate the warnings/errors by removing all TS packets for each ES until you hit a payload-unit-start flag, but that is what ffmpeg is already doing itself.
If you see additional warnings/errors after the initial ones at the start, then there might be a reception or packet-loss issue that needs investigation.
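For what it's worth, the trimming idea above can be sketched in a few lines of C. This assumes a clean, 188-byte-aligned transport stream and placeholder file names, and, as noted, ffmpeg already does the equivalent internally:

/* ts-trim.c: drop each PID's packets until its first payload_unit_start_indicator. */
#include <stdio.h>
#include <string.h>

#define TS_PACKET_SIZE 188

int main (void)
{
  FILE *in  = fopen ("input.ts",   "rb");   /* placeholder file names */
  FILE *out = fopen ("trimmed.ts", "wb");
  unsigned char pkt[TS_PACKET_SIZE];
  unsigned char started[8192];              /* one flag per possible 13-bit PID */

  if (!in || !out)
    return 1;
  memset (started, 0, sizeof (started));

  while (fread (pkt, 1, TS_PACKET_SIZE, in) == TS_PACKET_SIZE) {
    if (pkt[0] != 0x47)
      continue;                             /* lost sync, skip this packet */

    unsigned pid  = ((pkt[1] & 0x1F) << 8) | pkt[2];
    unsigned pusi = pkt[1] & 0x40;          /* payload unit start flag */

    if (pusi)
      started[pid] = 1;
    if (started[pid])                       /* copy only after a unit start was seen */
      fwrite (pkt, 1, TS_PACKET_SIZE, out);
  }

  fclose (in);
  fclose (out);
  return 0;
}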

openCV VideoCapture doesn't work with gstreamer x264

I'd like to display an RTP/VP8 video stream that comes from GStreamer in OpenCV.
I already have a working solution which is implemented like this:
gst-launch-0.10 udpsrc port=6666 ! "application/x-rtp,media=(string)video,clock-rate=(int)90000,encoding-name=(string)VP8-DRAFT-IETF-01,payload=(int)120" ! rtpvp8depay ! vp8dec ! ffmpegcolorspace ! ffenc_mpeg4 ! filesink location=videoStream
Basically it grabs incoming data from a UDP socket, depacketizes the RTP, decodes the VP8, and passes it to ffmpegcolorspace (I still don't understand what this is for; I see it everywhere in GStreamer).
The videoStream is a pipe I created with mkfifo. On the other side, I have my OpenCV code that does:
VideoCapture cap("videoStream");
and uses cap.read() to push into a Mat.
My main concern is that I use ffenc_mpeg4 here, and I believe this degrades my video quality. I tried using x264enc in place of ffenc_mpeg4 but I get no output: OpenCV doesn't react, neither does GStreamer, and after a couple of seconds gst-launch just stops.
Any idea what I could use instead of ffenc_mpeg4? I looked for "lossless codec" on the net, but it seems I am confusing things such as codecs, containers, formats, compression and encoding, so any help would be (greatly) appreciated!
Thanks in advance !

How to set BGR24 format with OpenCv?

I've got a V4L2 camera that can grab frames in JPEG, YUV422 or BGR24 format. I'd like to set the camera to BGR24#640x480 with OpenCV. To do this, I used the following settings:
capture = cvCreateCameraCapture(0);
cvSetCaptureProperty( capture, CV_CAP_PROP_FRAME_WIDTH, 640 );
cvSetCaptureProperty( capture, CV_CAP_PROP_FRAME_HEIGHT, 480 );
cvSetCaptureProperty( capture, CV_CAP_PROP_FOURCC, CV_FOURCC('B', 'G', 'R', '3'));
but opencv gives me back the following error message:
HIGHGUI ERROR: V4L: Property <unknown property string>(6) not supported by device
So OpenCV set the JPEG#640x480 format instead of BGR24.
How can I fix it?
NOTE: the BGR24 format was tested with the following GStreamer pipeline and it works properly:
gst-launch-0.10 v4l2src num-buffers=10 device=/dev/video0 ! 'video/x-raw-rgb,width=640,height=480,bpp=24,depth=24,red_mask=255,green_mask=65280,blue_mask=16711680,endianness=4321' ! filesink location=/tmp/output10.rgb24
Kind regards
I'd check that you are accessing the correct camera.
If you have multiple cameras, varying N in cvCreateCameraCapture(N) should cycle through them.
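For example, a quick probe with the same C API as in the question (the index range is arbitrary):

/* List which camera indices V4L/OpenCV can actually open. */
#include <stdio.h>
#include <opencv/highgui.h>   /* OpenCV 1.x/2.x C API */

int main (void)
{
  int n;
  for (n = 0; n < 5; n++) {
    CvCapture *capture = cvCreateCameraCapture (n);
    printf ("camera %d: %s\n", n, capture ? "opened" : "not available");
    if (capture)
      cvReleaseCapture (&capture);
  }
  return 0;
}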
Other than that I would check that the webcam itself conforms to the UVC specification. V4L might be having trouble querying the parameters of the cam.
Even if the camera supports capture in a certain format, if it doesn't strictly comply with the USB Video Class, OpenCV is not guaranteed to be able to detect that it can capture in that format and, to the best of my knowledge, cannot be forced to.
