I'd like to display an RTP/VP8 video stream that comes from GStreamer in OpenCV.
I already have a working solution, implemented like this:
gst-launch-0.10 udpsrc port=6666 ! "application/x-rtp,media=(string)video,clock-rate=(int)90000,encoding-name=(string)VP8-DRAFT-IETF-01,payload=(int)120" ! rtpvp8depay ! vp8dec ! ffmpegcolorspace ! ffenc_mpeg4 ! filesink location=videoStream
Basically it grabs incoming data from a UDP socket, depacketizes the RTP, decodes the VP8, passes it through ffmpegcolorspace (I still don't understand what this element is for, but I see it everywhere in GStreamer), re-encodes it with ffenc_mpeg4 and writes it to the filesink.
The videoStream is a pipe I created with mkfifo. On the other side, my OpenCV code does:
VideoCapture cap("videoStream");
and uses cap.read() to read each frame into a Mat.
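For reference, the reading side is essentially this loop (a trimmed sketch; the display part is just for illustration):

#include <opencv2/opencv.hpp>

int main() {
    // "videoStream" is the FIFO created with mkfifo; OpenCV's ffmpeg backend reads it like a file
    cv::VideoCapture cap("videoStream");
    if (!cap.isOpened())
        return 1;

    cv::Mat frame;
    while (cap.read(frame)) {          // blocks until a decodable frame arrives on the pipe
        cv::imshow("stream", frame);   // window name is arbitrary
        if (cv::waitKey(1) == 27)      // Esc to quit
            break;
    }
    return 0;
}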
My main concern is that I use ffenc_mpeg4 here, and I believe this degrades my video quality. I tried using x264enc in place of ffenc_mpeg4, but then I get no output: OpenCV doesn't react, neither does GStreamer, and after a couple of seconds gst-launch just stops.
Any idea what I could use instead of ffenc_mpeg4? I looked for a "lossless codec" on the net, but it seems I am confusing things such as codecs, containers, formats, compression and encoding; so any help would be (greatly) appreciated!
Thanks in advance!
Hello, I have a problem with caps. When I try to use this pipeline: gst-launch-1.0 v4l2src ! caps ! nvvidconv ! caps ! tee ! queue1 ! nvvideosink
I get: WARNING: erroneous pipeline: no element "caps"
Thanks
GstElement and GstCaps are two different things.
Caps is a structure that describes the media type and other properties of a stream. It is not a GstElement. Therefore, you should use the capsfilter element, which is a GstElement, and set your caps on it.
Your pipeline should be like this:
v4l2src ! capsfilter caps="video/x-raw,width=640,height=480,format=I420" ! nvvidconv ! tee ! queue ! nvvideosink
(Careful: your caps may need to refer to GPU memory, like 'video/x-raw(memory:NVMM)'. I explain this below, so continue reading.)
You can set whatever format you want. If you are not sure of your camera's format, just don't specify it, e.g.: caps="video/x-raw,width=640,height=480"
When you use capsfilter, you force your pipeline to use exactly those stream settings. For instance, if your camera does not support 640x480, your pipeline will fail!
If you are not sure of your camera's capabilities, just use the nvvidconv or videoconvert element, which converts the stream for you.
If you are not sure what to do, try this pipeline:
v4l2src ! nvvidconv ! nvvideosink
Warning: nvvidconv and nvvideosink probably work on the GPU. So if you try to use videoconvert with nvvideosink, the program might crash, because videoconvert works on the CPU and nvvideosink may not be able to work on the CPU.
Take a look at this:
https://forums.developer.nvidia.com/t/window-playback-using-nvvideosink/42346 , where he built his pipeline to work on the GPU. He used nvcamerasrc to get the stream from the GPU; v4l2src only gets it from the CPU.
You decide whether to get the stream from the CPU or the GPU. Have a look at this link too when building your pipeline:
https://forums.developer.nvidia.com/t/gstreamer-input-nvcamerasrc-vs-v4l2src/50658/2
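And in case you later move from gst-launch to application code: setting caps on a capsfilter programmatically looks roughly like this (a minimal sketch; only the capsfilter part is shown, the surrounding elements are up to you):

#include <gst/gst.h>

int main(int argc, char *argv[]) {
    gst_init(&argc, &argv);

    // capsfilter is the element; the caps themselves are a GstCaps structure set on it
    GstElement *filter = gst_element_factory_make("capsfilter", "filter");
    GstCaps *caps = gst_caps_from_string("video/x-raw,width=640,height=480,format=I420");
    g_object_set(filter, "caps", caps, NULL);
    gst_caps_unref(caps);

    // The rest of the pipeline (v4l2src, nvvidconv, nvvideosink, ...) would be created,
    // added with gst_bin_add_many() and linked with gst_element_link_many() around it.
    return 0;
}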
I want to take a video file and overlay subtitles that fade in and fade out.
I'm just beginning to learn how to work with Gstreamer.
So far, I've managed to put together a pipeline that composites a subtitle stream drawn by the textrender element onto the original video stream with the videomixer element. Unfortunately, textrender and its sister element textoverlay do not have a fade-in/fade-out feature.
The videomixer sink pad does have an alpha property. For now, I have set the alpha value of the pad named videomixer.sink_1 to 1.0. Here is the command-line version of that pipeline:
#!/bin/bash
gst-launch-1.0 \
filesrc location=sample_videos/my-video.mp4 ! decodebin ! mixer.sink_0 \
filesrc location=subtitles.srt ! subparse ! textrender ! mixer.sink_1 \
videomixer name=mixer sink_0::zorder=2 sink_1::zorder=3 sink_1::ypos=-25 sink_1::alpha=1 \
! video/x-raw, height=540 \
! videoconvert ! autovideosink
I am looking for a way to dynamically modify that alpha value over time so that I can make the subtitle component fade in and out at the appropriate times. (I will parse the SRT file separately to determine when fades begin and end.)
I am studying the GstBin C API (my actual code is in Python). I think after I create the pipeline with Gst.parse_launch(), I can grab any named element with gst_bin_get_by_name() (Gst.Bin.get_by_name() in Python), then use that value to access the pad "sink_1".
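In C terms (my actual code will be a Python translation of this), the plan is roughly this sketch, with the element and pad names taken from my pipeline above:

#include <gst/gst.h>

int main(int argc, char *argv[]) {
    gst_init(&argc, &argv);

    // Same pipeline string as in the shell script above (shortened here for readability)
    GstElement *pipeline = gst_parse_launch(
        "filesrc location=sample_videos/my-video.mp4 ! decodebin ! mixer.sink_0 "
        "filesrc location=subtitles.srt ! subparse ! textrender ! mixer.sink_1 "
        "videomixer name=mixer sink_1::ypos=-25 sink_1::alpha=1 ! videoconvert ! autovideosink",
        NULL);

    // Fetch the mixer by its name, then the pad whose property I want to animate
    GstElement *mixer = gst_bin_get_by_name(GST_BIN(pipeline), "mixer");
    GstPad *sink_1 = gst_element_get_static_pad(mixer, "sink_1");

    // While the pipeline is PLAYING, the pad property can (hopefully) be changed at any time
    gst_element_set_state(pipeline, GST_STATE_PLAYING);
    g_object_set(sink_1, "alpha", 0.5, NULL);

    g_main_loop_run(g_main_loop_new(NULL, FALSE));
    return 0;
}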
Once I've gotten that far, will I be able to modify that alpha value dynamically from an event handler that receives timer events? Will the videomixer element respond instantly to changes in that pad's property? Has anyone else done this?
I found a partial answer here: https://stackoverflow.com/a/17331845/270511 but it doesn't tell me if this will work after the pipeline is running.
Yes, it will work.
The videomixer pads respond dynamically to changes; I have done this with both the alpha and position properties. The pad properties can be changed using
g_object_set (mix_sink_pad, "alpha", 0.5, NULL);
I am using C, but your Python strategy for accessing the bin and pad sounds correct. My GStreamer code responds based on inputs from a UDP socket, but timer events will work perfectly fine. For example, if you wanted to change the alpha value every 100 ms, you could do something like this:
g_timeout_add (100, alpha_changer_cb, loop);
You can then change the alpha property using g_object_set in the callback; it will update dynamically and looks very smooth.
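For what it's worth, a callback along these lines worked for me (a rough sketch; here I pass the mixer pad itself as the user data rather than the main loop, and the fade step is arbitrary):

#include <gst/gst.h>

// Hypothetical callback: fades sink_1 towards transparent in small steps.
// mix_sink_pad is the videomixer pad obtained earlier with
// gst_element_get_static_pad(mixer, "sink_1").
static gboolean alpha_changer_cb(gpointer user_data) {
    GstPad *mix_sink_pad = GST_PAD(user_data);

    gdouble alpha = 1.0;
    g_object_get(mix_sink_pad, "alpha", &alpha, NULL);

    alpha -= 0.05;                     // fade-out step; tune to taste
    if (alpha < 0.0)
        alpha = 0.0;
    g_object_set(mix_sink_pad, "alpha", alpha, NULL);

    return alpha > 0.0;                // returning FALSE removes the timeout
}

// Installed with e.g.: g_timeout_add(100, alpha_changer_cb, mix_sink_pad);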
I got this to work. You can read about it in this post: https://westside-consulting.blogspot.com/2017/03/getting-to-know-gstreamer-part-4.html
I want to monitor buffers traveling through my GStreamer pipelines.
For example, in the following pipeline, I want to know whether one buffer (i.e. GstBuffer) flowing between rtph264pay and udpsink corresponds to one packet streamed on my Ethernet interface.
gst-launch-1.0 filesrc ! x264enc ! rtph264pay ! udpsink
What tool can I use to figure it out? Do I have to go into source code to get the answer? What will be the answer?
You can use the GST_SCHEDULING debug category to monitor the data flow.
GST_DEBUG="*SCHED*:5" gst-launch-1.0 filesrc ! x264enc ! rtph264pay ! udpsink 2> gst.log
This will produce a log entry for every buffer that reaches a sink pad. You can filter on the udpsink sink pad to obtain the information you want. For the network side, you'll need a network analyser like Wireshark. You should then be able to compare the two.
In practice, each payloaded buffer will represent one UDP packet, unless your network MTU is smaller than what you have configured on the payloader (see its mtu property).
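If you want to watch the buffers programmatically rather than through the debug log, you can also attach a pad probe to udpsink's sink pad. A rough sketch (the file location, host and port are placeholders, and decodebin/videoconvert are added so the sketch actually prerolls):

#include <gst/gst.h>

// Called once per GstBuffer that reaches udpsink's sink pad
static GstPadProbeReturn count_buffer(GstPad *pad, GstPadProbeInfo *info, gpointer user_data) {
    guint *count = static_cast<guint *>(user_data);
    GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER(info);
    g_print("buffer #%u, %lu bytes\n", ++(*count), (unsigned long) gst_buffer_get_size(buf));
    return GST_PAD_PROBE_OK;
}

int main(int argc, char *argv[]) {
    gst_init(&argc, &argv);
    static guint count = 0;

    GstElement *pipeline = gst_parse_launch(
        "filesrc location=test.mp4 ! decodebin ! videoconvert ! x264enc ! rtph264pay ! "
        "udpsink name=sink host=127.0.0.1 port=5000", NULL);

    GstElement *sink = gst_bin_get_by_name(GST_BIN(pipeline), "sink");
    gst_pad_add_probe(gst_element_get_static_pad(sink, "sink"),
                      GST_PAD_PROBE_TYPE_BUFFER, count_buffer, &count, NULL);

    gst_element_set_state(pipeline, GST_STATE_PLAYING);
    g_main_loop_run(g_main_loop_new(NULL, FALSE));
    return 0;
}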
I've got a V4L2 camera that can grab frames in JPEG, YUV422, or BGR24 format. I'd like to set the camera to BGR24#640x480 from OpenCV. To do this, I used the following settings:
capture = cvCreateCameraCapture(0);
cvSetCaptureProperty( capture, CV_CAP_PROP_FRAME_WIDTH, 640 );
cvSetCaptureProperty( capture, CV_CAP_PROP_FRAME_HEIGHT, 480 );
cvSetCaptureProperty( capture, CV_CAP_PROP_FOURCC, CV_FOURCC('B', 'G', 'R', '3'));
but OpenCV gives me back the following error message:
HIGHGUI ERROR: V4L: Property <unknown property string>(6) not supported by device
So OpenCV sets the JPEG#640x480 format instead of BGR24.
How can I fix it?
NOTE: BGR24 format was tested with the following gstreamer pipeline and it works properly:
gst-launch-0.10 v4l2src num-buffers=10 device=/dev/video0 ! 'video/x-raw-rgb,width=640,height=480,bpp=24,depth=24,red_mask=255,green_mask=65280,blue_mask=16711680,endianness=4321' ! filesink location=/tmp/output10.rgb24
Kind regards
I'd check that you are accessing the correct camera.
If you have multiple cameras, varying N in cvCreateCameraCapture(N) should cycle through them.
Other than that I would check that the webcam itself conforms to the UVC specification. V4L might be having trouble querying the parameters of the cam.
Even if the camera supports capture in a certain format, if it doesn't strictly comply with the USB Video Class, OpenCV is not guaranteed to detect that it can capture in that format and, to the best of my knowledge, cannot be forced to.
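One way to see what the driver itself reports, independently of OpenCV, is to enumerate the pixel formats through the V4L2 API. A rough sketch (the device path is an assumption, matching the one in your gst-launch test):

#include <fcntl.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/videodev2.h>
#include <cstdio>
#include <cstring>

int main() {
    int fd = open("/dev/video0", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    v4l2_fmtdesc fmt;
    memset(&fmt, 0, sizeof(fmt));
    fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;

    // List every pixel format the driver advertises (e.g. MJPG, YUYV, BGR3)
    while (ioctl(fd, VIDIOC_ENUM_FMT, &fmt) == 0) {
        printf("%c%c%c%c  %s\n",
               fmt.pixelformat & 0xff, (fmt.pixelformat >> 8) & 0xff,
               (fmt.pixelformat >> 16) & 0xff, (fmt.pixelformat >> 24) & 0xff,
               (const char *) fmt.description);
        fmt.index++;
    }
    close(fd);
    return 0;
}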
I have a problem loading a raw YUV video in OpenCV. I can play it in mplayer with the following command:
mplayer myvideo.raw -rawvideo w=1280:h=1024:fps=30:y8 -demuxer rawvideo
My code for load in OpenCV is:
CvCapture* capture=cvCaptureFromFile("C:\\myvideo.raw");
cvCaptureFromFile always returns NULL. But if I try with a normal AVI file, the code runs normally (capture is not NULL).
I'm working with the latest version of OpenCV under Windows 7.
EDIT: Output messages are
[IMGUTILS # 0036f724] Picture size 0x0 is invalid
[image2 # 009f3300] Could not find codec parameters (Video: rawvideo, yuv420p)
Thanks
OpenCV uses ffmpeg as its back-end; however, it includes only a subset of ffmpeg's functionality. What you can try is installing some codecs (K-Lite helped me some time ago).
But, if your aim is to obtain raw YUV in OpenCV, the answer is "not possible".
OpenCV is hardcoded to convert every input format to BGR, so even if you manage to open the raw input, it will automatically convert it to BGR before passing it on. There is no way around that; the only options are to use a different capture library or to hack into OpenCV.
What you can do (to simulate YUV input) is to capture the AVI and convert it to YUV:
cvtColor(..., CV_BGR2YCrCb /* or CV_BGR2YUV */ );
and then process it
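A minimal sketch of that simulation using the C++ API (the file name is just a placeholder):

#include <opencv2/opencv.hpp>

int main() {
    // Open a normal AVI that OpenCV can already read
    cv::VideoCapture cap("C:\\myvideo.avi");
    if (!cap.isOpened())
        return 1;

    cv::Mat bgr, yuv;
    while (cap.read(bgr)) {
        // OpenCV hands us BGR; convert to YCrCb (or CV_BGR2YUV) to simulate a YUV source
        cv::cvtColor(bgr, yuv, CV_BGR2YCrCb);
        // ... process the yuv frame here ...
    }
    return 0;
}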