Streaming OpenCV video

I need some ideas about how to stream a video feed coming from OpenCV to a webpage. I currently have GStreamer, but I don't know if it's the right tool for the job. Any advice on using GStreamer, or any links to tutorials, would be helpful and appreciated!
Thanks!

OpenCV doesn't provide an interface for streaming video, which means you'll need to use some other technology for this purpose.
I've used GStreamer in several professional projects: this is the droid you are looking for.
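For a rough idea, here's a hedged sketch (not a tested recipe) of pushing OpenCV frames out through GStreamer using cv::VideoWriter's GStreamer backend; the pipeline string, port, and frame size are my own illustrative choices, and it assumes OpenCV was built with GStreamer support. Note that a browser can't read a raw TCP MJPEG stream directly, so for a webpage you'd still put an HTTP front end (or e.g. an HLS sink) at the end of the pipeline:

#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture cam(0);  // demo source: first webcam
    cv::VideoWriter out(
        "appsrc ! videoconvert ! jpegenc ! multipartmux ! "
        "tcpserversink host=0.0.0.0 port=8080",   // illustrative pipeline
        cv::CAP_GSTREAMER, 0 /* fourcc unused for pipelines */, 30.0,
        cv::Size(640, 480));
    if (!cam.isOpened() || !out.isOpened())
        return 1;  // missing camera, GStreamer backend, or plugin

    cv::Mat frame;
    while (cam.read(frame)) {
        cv::resize(frame, frame, cv::Size(640, 480));  // match writer size
        out.write(frame);  // frame is JPEG-encoded and served over TCP
    }
    return 0;
}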

I do not have any experience with streaming OpenCV output to a website. However, I'm sure this is possible using GStreamer.
Using a GStreamer stream, it is possible to grab the data and convert it into OpenCV's format. I recommend you read up on GstAppSink and GstBuffer.
Basically, if I remember correctly, you must run the pipeline in a background thread. Then, using one of the gst_app_sink functions, you can get the buffer data from the sink.
A quick lookup suggests you had to use GST_BUFFER_DATA for this (in the old 0.10 API).
I remember having to convert the result from YCbCr to BGR; a colleague had problems because OpenCV's conversion was inadequate, so you might have to write your own. (This was back in the IplImage* days.)
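For what it's worth, here's a minimal sketch of that appsink pattern against the modern GStreamer 1.x API, where gst_buffer_map() replaces the old GST_BUFFER_DATA macro and videoconvert handles the YCbCr-to-BGR conversion for you; the test source, caps, and fixed 640x480 geometry are illustrative assumptions:

#include <gst/gst.h>
#include <gst/app/gstappsink.h>
#include <opencv2/opencv.hpp>

int main(int argc, char *argv[]) {
    gst_init(&argc, &argv);

    // videotestsrc is a stand-in; swap in your real source (e.g. rtspsrc).
    GstElement *pipeline = gst_parse_launch(
        "videotestsrc ! videoconvert ! "
        "video/x-raw,format=BGR,width=640,height=480 ! "
        "appsink name=sink", NULL);
    GstElement *sink = gst_bin_get_by_name(GST_BIN(pipeline), "sink");

    gst_element_set_state(pipeline, GST_STATE_PLAYING);

    for (;;) {
        // Blocks until a frame is available (returns NULL at end of stream).
        GstSample *sample = gst_app_sink_pull_sample(GST_APP_SINK(sink));
        if (!sample) break;

        GstBuffer *buffer = gst_sample_get_buffer(sample);
        GstMapInfo map;
        gst_buffer_map(buffer, &map, GST_MAP_READ);

        // Wrap the raw BGR bytes in a cv::Mat header (no copy).
        cv::Mat frame(480, 640, CV_8UC3, (void *)map.data);
        cv::imshow("frame", frame);
        cv::waitKey(1);

        gst_buffer_unmap(buffer, &map);
        gst_sample_unref(sample);
    }

    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(sink);
    gst_object_unref(pipeline);
    return 0;
}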

Related

Using OpenCV to process live video from Phantom 4

I would like to process frames live in OpenCV from the video feed of a DJI Phantom 4. I've been able to set up OpenCV for iOS in Xcode, but I need help finding a tutorial or instructions on how to send the frames from the DJI camera into OpenCV in the correct format on the fly. Any suggestions?
Thanks
Hello there Ilia Labkovsky,
I think I am in the same boat: I have a P3 and would like to process the images via OpenCV. I intend to use my laptop PC as the image processor, sending the images directly via TCP/IP and doing my own image processing off-board. I haven't set this up yet, though, and may run into some problems.
Is there a way I can privately message you on Stack Overflow?
Best of luck with the programming :)
There is a tutorial for Android among the DJI sample apps on how to parse and obtain the YUV frames. From there you can use OpenCV to process the frames: https://github.com/DJI-Mobile-SDK-Tutorials/Android-VideoStreamDecodingSample
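If it helps, the last step is a near one-liner in OpenCV, assuming the decoder hands you NV21 (YUV420sp) data, which is what that sample produces on many devices; verify the actual format your callback delivers:

#include <opencv2/opencv.hpp>
#include <cstdint>

cv::Mat nv21ToBgr(const uint8_t *yuv, int width, int height) {
    // NV21 packs a full-resolution Y plane followed by interleaved VU at
    // half resolution, so the buffer is width * height * 3 / 2 bytes.
    cv::Mat yuvMat(height + height / 2, width, CV_8UC1, (void *)yuv);
    cv::Mat bgr;
    cv::cvtColor(yuvMat, bgr, cv::COLOR_YUV2BGR_NV21);
    return bgr;  // safe: cvtColor allocated fresh memory for bgr
}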

GStreamer frame handler

I'm working on a project involving RTSP streams, merging them, and so on. One of the things I need to do is pass frames from an RTSP stream to a handler or buffer of some kind so that someone else can process them with, e.g., OpenCV.
How could I do that with GStreamer?
Thanks!
GStreamer already has OpenCV-based plugins, so the best way is to write a similar plugin that applies your OpenCV code. There are elements called appsrc and appsink to feed data from an app or hand data to an app, but there is no appfilter element. One could use a pad probe for this, but it is not a good approach.
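If a plugin is overkill, a hedged alternative (assuming your OpenCV build has GStreamer support; the RTSP URL is a placeholder) is to end the pipeline in appsink and hand the whole string to cv::VideoCapture:

#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture cap(
        "rtspsrc location=rtsp://example.com/stream latency=0 ! "
        "decodebin ! videoconvert ! appsink",
        cv::CAP_GSTREAMER);
    if (!cap.isOpened())
        return 1;  // no GStreamer backend or bad pipeline

    cv::Mat frame;
    while (cap.read(frame)) {
        // hand `frame` to whatever processing the other person writes
        cv::imshow("rtsp", frame);
        if (cv::waitKey(1) == 27) break;  // Esc to quit
    }
    return 0;
}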

Read h264 stream from an IP camera

Currently, I am trying to use OpenCV to read a video stream from my Canon VB-H710F camera.
For this purpose I tried two different solutions:
SOLUTION 1: Read the stream from the RTSP address
VideoCapture cam("rtsp://root:camera#10.0.4.127/stream/profile1=u");
Mat frame;
while (true)
    cam >> frame;
In this case I am using OpenCV to read directly from a stream encoded in H.264 (profile1); however, this yields the same problem reported here: http://answers.opencv.org/question/34012/ip-camera-h264-error-while-decoding/
As suggested in that question, I tried disabling FFMPEG support in the OpenCV installation, which solved the H.264 decoding errors but raised another problem:
when accessing the stream with OpenCV, now backed by GStreamer, there is always a large delay.
With this solution I achieve 15 FPS, but with a 5-second delay, which is not acceptable considering that I need a real-time application.
SOLUTION 2: Read the frames from the HTTP address
while (true) {
    long startTime = System.currentTimeMillis();
    // Grab a single JPEG snapshot from the camera's HTTP interface
    URL url = new URL("http://[IP]/-wvhttp-01-/image.cgi");
    URLConnection con = url.openConnection();
    BufferedImage image = ImageIO.read(con.getInputStream());
    showImage(image);
    long estimatedTime = System.currentTimeMillis() - startTime;
    System.out.println(estimatedTime);
    Thread.sleep(5);
}
This strategy simply grabs each frame from the URL that the camera provides. The code is in Java, but the results are the same in C++ with the curl library (see the sketch below).
This solution avoids the delay of the first solution; however, it takes a little more than 100 ms to grab each frame, which means I can only achieve about 10 FPS on average.
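For reference, here is a hedged sketch of that C++/libcurl variant; the [IP] placeholder is kept from the Java code above, and grabSnapshot is just an illustrative name:

#include <curl/curl.h>
#include <opencv2/opencv.hpp>
#include <vector>

// libcurl write callback: append received bytes to a std::vector.
static size_t writeCb(char *ptr, size_t size, size_t nmemb, void *userdata) {
    auto *buf = static_cast<std::vector<uchar> *>(userdata);
    buf->insert(buf->end(), ptr, ptr + size * nmemb);
    return size * nmemb;
}

cv::Mat grabSnapshot() {
    std::vector<uchar> bytes;
    CURL *curl = curl_easy_init();
    curl_easy_setopt(curl, CURLOPT_URL, "http://[IP]/-wvhttp-01-/image.cgi");
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, writeCb);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, &bytes);
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
    // Decode the in-memory JPEG into a BGR cv::Mat.
    return cv::imdecode(bytes, cv::IMREAD_COLOR);
}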
I would like to know how I can read the video using C++ or another library developed in C++?
I struggled with similar issues and think I have solved some of your problems by using libVLC with OpenCV. FFMPEG seemed to have issues with not decoding H.264 properly, and the newer OpenCV versions (2.4.11) seemed to have the TCP fix for FFMPEG in there already. Anyway, I use MS Visual Studio on Windows 7 and 8.1.
Details are given here: http://answers.opencv.org/question/65932
Personally, I suggest you use ffmpeg to read RTSP streams from IP cameras, and then use OpenCV to read from ffmpeg's decoded buffer. ffmpeg has very good optimizations for H.264 decoding, so performance should not be a critical issue.
You can use the ffmpeg binary to verify whether this works correctly:
ffmpeg -i "rtsp://root:camera#10.0.4.127/stream/profile1=u" -vcodec copy -acodec none test.mp4
If test.mp4 can be played successfully, then it's definitely OK for you to integrate ffmpeg libs into your project.
Good luck!
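To make that split concrete, here is a hedged sketch against the FFmpeg 5.x send/receive API (most error handling and the final decoder flush omitted) that decodes the RTSP stream with the ffmpeg libs and hands each frame to OpenCV as BGR:

extern "C" {
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>
#include <libswscale/swscale.h>
}
#include <opencv2/opencv.hpp>

int main() {
    const char *url = "rtsp://root:camera#10.0.4.127/stream/profile1=u";
    avformat_network_init();

    AVFormatContext *fmt = nullptr;
    if (avformat_open_input(&fmt, url, nullptr, nullptr) < 0)
        return 1;
    avformat_find_stream_info(fmt, nullptr);

    // Locate the video stream and open its decoder.
    const AVCodec *codec = nullptr;
    int vid = av_find_best_stream(fmt, AVMEDIA_TYPE_VIDEO, -1, -1, &codec, 0);
    AVCodecContext *ctx = avcodec_alloc_context3(codec);
    avcodec_parameters_to_context(ctx, fmt->streams[vid]->codecpar);
    avcodec_open2(ctx, codec, nullptr);

    AVFrame *frame = av_frame_alloc();
    AVPacket *pkt = av_packet_alloc();
    SwsContext *sws = nullptr;

    while (av_read_frame(fmt, pkt) >= 0) {
        if (pkt->stream_index == vid && avcodec_send_packet(ctx, pkt) == 0) {
            while (avcodec_receive_frame(ctx, frame) == 0) {
                // Create the YUV -> BGR converter once geometry is known.
                if (!sws)
                    sws = sws_getContext(ctx->width, ctx->height, ctx->pix_fmt,
                                         ctx->width, ctx->height,
                                         AV_PIX_FMT_BGR24, SWS_BILINEAR,
                                         nullptr, nullptr, nullptr);
                cv::Mat bgr(ctx->height, ctx->width, CV_8UC3);
                uint8_t *dst[1] = { bgr.data };
                int stride[1] = { static_cast<int>(bgr.step) };
                sws_scale(sws, frame->data, frame->linesize, 0, ctx->height,
                          dst, stride);
                cv::imshow("rtsp", bgr);  // or hand `bgr` to your processing
                cv::waitKey(1);
            }
        }
        av_packet_unref(pkt);
    }
    sws_freeContext(sws);
    av_packet_free(&pkt);
    av_frame_free(&frame);
    avcodec_free_context(&ctx);
    avformat_close_input(&fmt);
    return 0;
}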
You can process each frame using ffmpeg as well; you need to create your own filter as per your requirements: https://trac.ffmpeg.org/wiki/FilteringGuide
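For instance, before writing a custom filter you can chain the built-in ones with -vf; a hedged example (edgedetect is an arbitrary choice, and -t 10 just limits the test to ten seconds):
ffmpeg -i "rtsp://root:camera#10.0.4.127/stream/profile1=u" -vf edgedetect -t 10 filtered.mp4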

How to embed a Flash (.swf) file in an OpenCV project?

I have a .swf file which I want to embed in my OpenCV application, overlay over the camera stream, and display to the user. So far I have not found a solution through a simple Google search. I would appreciate it if anyone has any idea how to approach this.
Thanks
OpenCV doesn't deal with .swf files, so you need to use some other technology like FFMPEG or GStreamer to retrieve the frames and decode them to BGR to be able to create a valid IplImage (or cv::Mat if you are interested in the C++ interface).
GStreamer also provides a simple mechanism to stream video over the network.
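For example, a hedged one-liner (the elements and host address are illustrative; v4l2src assumes a Linux webcam) that pushes an H.264/RTP stream across the network:
gst-launch-1.0 v4l2src ! videoconvert ! x264enc tune=zerolatency ! rtph264pay ! udpsink host=192.168.1.10 port=5000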

Snapshot using vlc (to get snapshot on RAM)

I was planning to use the libVLC library to decode an H.264-based RTSP stream and extract each frame from it (converting the VLC picture to an IplImage). I have done a bit of exploration of the VLC code and concluded that there is a function called libvlc_video_take_snapshot which does a similar thing. However, the captured frame in this case is saved on the hard disk, which I wish to avoid due to the real-time nature of my application. What would be the best way to do this? Would it be possible without modifying the VLC source (I want to avoid recompilation if possible)? I have heard of vmem etc. but could not really figure out what it does and how to use it.
The picture_t structure is internal to the library; how can we get access to it?
Awaiting your response.
P.S. Earlier I tried doing this using FFMPEG; however, the ffmpeg library has a lot of issues while decoding H.264-based RTSP streams on Windows, hence I had to switch to VLC.
Regards,
Saurabh Gandhi
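For what it's worth, here is a hedged sketch of the vmem-style callback API in libVLC 3.x, which is the "snapshot on RAM" mechanism asked about: decoded frames land in a caller-owned buffer instead of on disk, with no recompilation of VLC needed. The URL, the fixed 640x480 geometry, and the RV24 chroma are assumptions (channel order may differ on your build), and a real program should guard the buffer with a mutex:

#include <vlc/vlc.h>
#include <opencv2/opencv.hpp>
#include <chrono>
#include <thread>
#include <vector>

static const unsigned W = 640, H = 480;
static std::vector<uint8_t> pixels(W * H * 3);

// VLC asks where to write the next decoded frame.
static void *lock_cb(void *opaque, void **planes) {
    planes[0] = pixels.data();
    return nullptr;  // picture identifier (unused here)
}

// Called once the frame is fully decoded: hand it to OpenCV.
static void display_cb(void *opaque, void *picture) {
    // Wrap, then clone so the data outlives the callback.
    cv::Mat frame = cv::Mat(H, W, CV_8UC3, pixels.data()).clone();
    // `frame` lives purely in RAM -- run your processing here.
    // (Depending on the build, channel order may need cv::cvtColor.)
}

int main() {
    libvlc_instance_t *vlc = libvlc_new(0, nullptr);
    libvlc_media_t *media = libvlc_media_new_location(
        vlc, "rtsp://example.com/stream");  // placeholder URL
    libvlc_media_player_t *mp = libvlc_media_player_new_from_media(media);
    libvlc_media_release(media);

    // Route decoded video into our buffer instead of a window.
    libvlc_video_set_callbacks(mp, lock_cb, nullptr, display_cb, nullptr);
    libvlc_video_set_format(mp, "RV24", W, H, W * 3);

    libvlc_media_player_play(mp);
    std::this_thread::sleep_for(std::chrono::seconds(30));  // demo run

    libvlc_media_player_stop(mp);
    libvlc_media_player_release(mp);
    libvlc_release(vlc);
    return 0;
}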
