GStreamer vs FFmpeg - OpenCV

I am trying to record video with the OpenCV framework and would like to save it into a Matroska (MKV) container together with some additional data streams.
At first I thought using FFmpeg was the way to go.
But while looking into the OpenCV source code and searching the web I found GStreamer.
Because the GStreamer documentation is much better than the FFmpeg documentation, I would prefer to use that framework.
In my understanding GStreamer is primarily used for streaming, but it can also encode and mux video data.
Is there any disadvantage to using GStreamer instead of FFmpeg?
Thanks in advance
Horst

"I am trying to record video with the OpenCV framework and would like to save it into a Matroska (MKV) container..."
I don't think OpenCV can store video as MKV.
"...together with some additional data streams."
OpenCV doesn't provide features for this operation.
An easy workaround is to simply call ffmpeg's or gstreamer's command-line application to do the conversion for you.
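For example, something along these lines would mux an existing video together with a subtitle stream into MKV (a hypothetical invocation; the file names are placeholders):
ffmpeg -i video.avi -i subtitles.srt -c copy output.mkv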
GStreamer indeed has decent documentation and it can also do the job. The obvious disadvantage is that if you already know how to work with FFmpeg, switching to GStreamer will require some extra time to understand how it works, since the two have completely different APIs: GStreamer's architecture was inspired by DirectShow and QuickTime.
The advantage is that GStreamer (besides also being cross-platform) is used in several big projects, and getting to know it will certainly add a great skill to your programming arsenal.

You can use an OpenCV VideoWriter with either the FFmpeg or the GStreamer backend and compare. For example (you may adapt the plugins and video mode to your platform):

// Using the FFmpeg backend
cv::VideoWriter ffmpeg_h264_writer ("test-ffmpeg-h264-writer.mkv",
                                    cv::CAP_FFMPEG,
                                    cv::VideoWriter::fourcc ('X', '2', '6', '4'),
                                    fps,
                                    cv::Size (width, height));

// Using the GStreamer backend
cv::VideoWriter gst_omxh264_writer ("appsrc ! queue ! videoconvert ! video/x-raw,format=I420 ! queue ! omxh264enc ! video/x-h264,stream-format=byte-stream ! matroskamux ! filesink location=test-gstreamer-omxh264-writer.mkv",
                                    cv::CAP_GSTREAMER,
                                    0,
                                    fps,
                                    cv::Size (width, height));

where width and height are integer values and fps is a floating-point value.

I see no disadvantage to using GStreamer. In fact, the way I see it, GStreamer is a framework, not just a tool or a single-purpose library. You can use it to develop your own plugins that hook seamlessly into a GStreamer pipeline.

Related

GStreamer frames handler

I'm working on a project involving RTSP streams, merging and such, and one of the things I need to do is pass frames from an RTSP stream to a handler or buffer or something, so that someone else can process them with e.g. OpenCV.
How could I do it with GStreamer?
Thanks!
GStreamer already has OpenCV-based plugins. So the best way is to write a similar plugin that applies your OpenCV code. There are elements called appsrc and appsink to feed data from an app or receive data in an app, but there is no appfilter element. One could use a pad probe for this, but it is not a good approach.
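For the appsink route, a minimal sketch (assuming GStreamer 1.x with the gst-app library and an OpenCV build; the RTSP URL is a placeholder) could look like this:

// Pull decoded BGR frames from appsink and wrap them in a cv::Mat.
#include <gst/gst.h>
#include <gst/app/gstappsink.h>
#include <opencv2/opencv.hpp>

int main (int argc, char *argv[])
{
    gst_init (&argc, &argv);

    // decodebin + videoconvert produce raw BGR frames OpenCV can use directly.
    GError *err = NULL;
    GstElement *pipeline = gst_parse_launch (
        "rtspsrc location=rtsp://example.com/stream ! decodebin ! videoconvert "
        "! video/x-raw,format=BGR ! appsink name=sink", &err);
    if (!pipeline) return 1;

    GstElement *sink = gst_bin_get_by_name (GST_BIN (pipeline), "sink");
    gst_element_set_state (pipeline, GST_STATE_PLAYING);

    while (true) {
        GstSample *sample = gst_app_sink_pull_sample (GST_APP_SINK (sink));
        if (!sample) break;   // EOS or error

        GstCaps *caps = gst_sample_get_caps (sample);
        GstStructure *s = gst_caps_get_structure (caps, 0);
        int width = 0, height = 0;
        gst_structure_get_int (s, "width", &width);
        gst_structure_get_int (s, "height", &height);

        GstBuffer *buffer = gst_sample_get_buffer (sample);
        GstMapInfo map;
        if (gst_buffer_map (buffer, &map, GST_MAP_READ)) {
            // Wrap the mapped buffer without copying; clone it if kept longer.
            cv::Mat frame (height, width, CV_8UC3, (void *) map.data);
            // ... process frame with OpenCV here ...
            gst_buffer_unmap (buffer, &map);
        }
        gst_sample_unref (sample);
    }

    gst_element_set_state (pipeline, GST_STATE_NULL);
    gst_object_unref (sink);
    gst_object_unref (pipeline);
    return 0;
}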

Read an H.264 stream from an IP camera

Currently, I am trying to use OpenCV to read a video stream from my Canon VB-H710F camera.
For this purpose I tried two different solutions:
SOLUTION 1: Read the stream from the RTSP address

cv::VideoCapture cam ("rtsp://root:camera#10.0.4.127/stream/profile1=u");
cv::Mat frame;
while (true)
    cam >> frame;
In this case I am using OpenCV to read directly from a stream encoded in H.264 (profile1); however, this yields the same problem reported here: http://answers.opencv.org/question/34012/ip-camera-h264-error-while-decoding/
As suggested in that question, I tried to disable FFmpeg support in the OpenCV installation, which solved the H.264 decoding errors but raised another problem:
when accessing the stream with OpenCV backed by GStreamer, there is always a large delay.
With this solution I achieve 15 FPS, but I have a delay of 5 seconds, which is not acceptable considering that I need a real-time application.
SOLUTION 2: Read the frames from the HTTP address
while (true)
{
    long startTime = System.currentTimeMillis();
    URL url = new URL("http://[IP]/-wvhttp-01-/image.cgi");
    URLConnection con = url.openConnection();
    BufferedImage image = ImageIO.read(con.getInputStream());
    showImage(image);
    long estimatedTime = System.currentTimeMillis() - startTime;
    System.out.println(estimatedTime);
    Thread.sleep(5);
}
This strategy simply grabs each frame from the URL that the camera provides. The code is in Java, but the results are the same in C++ with the curl library.
This solution avoids the delay of the first solution, but it takes a little more than 100 ms to grab each frame, which means I can only achieve about 10 FPS on average.
I would like to know how I can read the video using C++ or another library developed in C++.
I struggled with similar issues and think I have solved some of your problems using libVLC with OpenCV. FFmpeg seemed to have issues decoding H.264 properly, plus the newer versions (2.4.11) seemed to already include the TCP fix for FFmpeg. Anyway, I use MS Visual Studio on Windows 7 and 8.1.
Details are given here: http://answers.opencv.org/question/65932
Personally, I suggest using ffmpeg to read RTSP streams from IP cameras, and then using OpenCV to read from ffmpeg's decoded buffers. ffmpeg has very good optimizations for H.264 decoding, so performance should not be a critical issue.
You can use the ffmpeg binary to verify whether this can work correctly:
ffmpeg -i "rtsp://root:camera#10.0.4.127/stream/profile1=u" -vcodec copy -acodec none test.mp4
If test.mp4 can be played successfully, then it's definitely OK for you to integrate ffmpeg libs into your project.
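If it does, a minimal decode loop with the FFmpeg libraries could look roughly like this (a sketch, assuming FFmpeg 4.x or newer and OpenCV; error handling is reduced to early returns and the URL is the one from the command above):

// Decode an RTSP stream with libavformat/libavcodec and hand each frame
// to OpenCV as a BGR cv::Mat.
extern "C" {
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>
#include <libswscale/swscale.h>
}
#include <opencv2/opencv.hpp>

int main ()
{
    avformat_network_init ();

    const char *url = "rtsp://root:camera#10.0.4.127/stream/profile1=u";
    AVFormatContext *fmt = NULL;
    if (avformat_open_input (&fmt, url, NULL, NULL) < 0) return 1;
    if (avformat_find_stream_info (fmt, NULL) < 0) return 1;

    int vidx = av_find_best_stream (fmt, AVMEDIA_TYPE_VIDEO, -1, -1, NULL, 0);
    if (vidx < 0) return 1;

    const AVCodec *codec = avcodec_find_decoder (fmt->streams[vidx]->codecpar->codec_id);
    AVCodecContext *ctx = avcodec_alloc_context3 (codec);
    avcodec_parameters_to_context (ctx, fmt->streams[vidx]->codecpar);
    if (avcodec_open2 (ctx, codec, NULL) < 0) return 1;

    AVPacket *pkt = av_packet_alloc ();
    AVFrame *frame = av_frame_alloc ();
    SwsContext *sws = NULL;

    while (av_read_frame (fmt, pkt) >= 0) {
        if (pkt->stream_index == vidx && avcodec_send_packet (ctx, pkt) >= 0) {
            while (avcodec_receive_frame (ctx, frame) >= 0) {
                // Convert the decoder's native pixel format to BGR24 for OpenCV.
                sws = sws_getCachedContext (sws, frame->width, frame->height,
                                            (AVPixelFormat) frame->format,
                                            frame->width, frame->height,
                                            AV_PIX_FMT_BGR24, SWS_BILINEAR,
                                            NULL, NULL, NULL);
                cv::Mat bgr (frame->height, frame->width, CV_8UC3);
                uint8_t *dst[1] = { bgr.data };
                int dstStride[1] = { (int) bgr.step };
                sws_scale (sws, frame->data, frame->linesize, 0,
                           frame->height, dst, dstStride);
                // ... process bgr with OpenCV here ...
            }
        }
        av_packet_unref (pkt);
    }

    sws_freeContext (sws);
    av_frame_free (&frame);
    av_packet_free (&pkt);
    avcodec_free_context (&ctx);
    avformat_close_input (&fmt);
    return 0;
}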
Good luck!
You can also process each frame using ffmpeg; you need to create your own filter as per your requirements: https://trac.ffmpeg.org/wiki/FilteringGuide
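For a first experiment you can run one of the stock filters straight from the command line, e.g. (file names are placeholders):
ffmpeg -i input.mp4 -vf edgedetect output.mp4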

How to embed a Flash (.swf) file in an OpenCV project?

I have a .swf file which I want to embed in my OpenCV project, overlay on the camera stream and display to the user. So far I have not found a solution through a simple Google search. I would appreciate it if anyone has any idea how to approach this.
Thanks
OpenCV doesn't deal with .swf files, so you need to use some other technology like FFmpeg or GStreamer to retrieve the frames and decode them to BGR to be able to create a valid IplImage (or cv::Mat if you are interested in the C++ interface).
GStreamer also provides a simple mechanism to stream video over the network.

Streaming OpenCV Video

I need some ideas about how to stream a video feed coming from OpenCV to a webpage. I currently have GStreamer, but I don't know if this is the right tool for the job. Any advice on using GStreamer, or any hyperlinks to tutorials, would be helpful and appreciated!
Thanks!
OpenCV doesn't provide an interface for streaming video, which means that you'll need to use some other technology for this purpose.
I've used GStreamer in several professional projects: this is the droid you are looking for.
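For example, with an OpenCV build that has GStreamer support, frames can be pushed into a network pipeline through cv::VideoWriter. A minimal sketch (the encoder, mux, and port are assumptions to adapt; the webpage side would then consume the stream via a player or gateway):

// Push OpenCV frames into a GStreamer pipeline serving MPEG-TS over TCP.
#include <opencv2/opencv.hpp>

int main ()
{
    cv::VideoCapture cap (0);           // any OpenCV frame source
    double fps = 30.0;
    cv::Size size (640, 480);

    cv::VideoWriter writer (
        "appsrc ! videoconvert ! x264enc tune=zerolatency bitrate=800 "
        "! mpegtsmux ! tcpserversink host=0.0.0.0 port=8080",
        cv::CAP_GSTREAMER, 0, fps, size, true);
    if (!writer.isOpened ()) return 1;

    cv::Mat frame;
    while (cap.read (frame)) {
        cv::resize (frame, frame, size);
        writer.write (frame);
    }
    return 0;
}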
I don't have any experience with streaming OpenCV output to a website, but I'm sure this is possible using GStreamer.
From a GStreamer stream it is possible to get the data and convert it into OpenCV format. I recommend you read up on GstAppSink and GstBuffer.
Basically, if I remember correctly, you must run the pipeline in a background thread. Then, using one of the gst_app_sink functions, you can get the buffer data from the sink.
A quick lookup on the issue suggests you had to use GST_BUFFER_DATA for this.
I remember having to convert the result from YCbCr to BGR; a colleague had problems because OpenCV's conversion was inadequate, so you might have to write your own. (This was back in the IplImage* days.)
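Note that GST_BUFFER_DATA is GStreamer 0.10 API; in GStreamer 1.x the buffer is mapped instead. A minimal sketch:

// In GStreamer 1.x, read-only access to buffer bytes replaces the
// 0.10-era GST_BUFFER_DATA macro.
#include <gst/gst.h>

static void inspect_buffer (GstBuffer *buffer)
{
    GstMapInfo map;
    if (gst_buffer_map (buffer, &map, GST_MAP_READ)) {
        // map.data and map.size expose the raw frame bytes for OpenCV.
        gst_buffer_unmap (buffer, &map);
    }
}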

FlyCapture2 and OpenCV, CMake build question

Platform: amd_64
Operating System: Ubuntu 8.10
Problem:
The current release of OpenCV (2.1 at the time of writing) and libdc1394 don't properly interface with the new USB-interface PointGrey High-Res FireFlyMV Color camera.
Does anyone have this camera working with OpenCV on Ubuntu?
Currently, I'm working on writing my own frame grabber using PointGrey's FlyCapture2 SDK, which works well with the camera. I'd like to interface this with OpenCV by converting each image I grab into an IplImage object. When I write OpenCV programs, I use CMake. The example code for the FlyCapture2 SDK uses fairly simple makefiles. Does anyone know how I can take the information from the simple FlyCapture2 makefile and include the appropriate lines in CMakeLists.txt for my CMake build routine?
Not a simple answer (sorry), but:
Generally you don't want to use cvCaptureFromCAM() for high-performance cameras beyond initial tests that they work. Even for standard interfaces like FireWire, it is very limited in what features of the camera it can control, it doesn't handle threading well, and the performance is poor, especially at high data rates.
The more common way is to control the camera with the maker's own SDK and output frames in a form (cv::Mat/IplImage) that OpenCV can process. All OpenCV image types are very flexible in sharing data with the camera API and can specify padding/row stride etc., so you should be able to design it so there is no unnecessary copying.
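As an illustration of that data sharing, a frame from the FlyCapture2 SDK can be handed to OpenCV roughly like this (a sketch from memory of the FlyCapture2 Image class; verify the method names against your SDK version):

#include <FlyCapture2.h>
#include <opencv2/opencv.hpp>

// Convert a captured FlyCapture2 image to BGR and expose it as a cv::Mat.
cv::Mat toMat (FlyCapture2::Image &raw)
{
    FlyCapture2::Image bgr;
    raw.Convert (FlyCapture2::PIXEL_FORMAT_BGR, &bgr);
    // cv::Mat can reference the SDK buffer directly, honoring its row stride;
    // clone() because 'bgr' and its buffer go out of scope on return.
    return cv::Mat ((int) bgr.GetRows (), (int) bgr.GetCols (), CV_8UC3,
                    bgr.GetData (), bgr.GetStride ()).clone ();
}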
