Read h264 stream from an IP camera - opencv

Currently, I am trying to use opencv to read a video from my Canon VB-H710F camera.
For this purpose I tried two different solutions:
SOLUTION 1: Read the stream from rtsp address
VideoCapture cam("rtsp://root:camera#10.0.4.127/stream/profile1=u");
while (true)
    cam >> frame;
In this case I am using OpenCV to read directly from a stream encoded in H.264 (profile1); however, this yields the same problem reported here: http://answers.opencv.org/question/34012/ip-camera-h264-error-while-decoding/
As suggested in that question, I tried disabling FFMPEG support in the OpenCV installation, which solved the H.264 decoding errors but raised another problem:
when accessing the stream with OpenCV backed by GStreamer, there is always a large delay.
With this solution I achieve 15 FPS, but with a delay of 5 seconds, which is not acceptable considering that I need a real-time application.
SOLUTION 2: Read the frames from http address
while (true)
{
    long startTime = System.currentTimeMillis();
    URL url = new URL("http://[IP]/-wvhttp-01-/image.cgi");
    URLConnection con = url.openConnection();
    BufferedImage image = ImageIO.read(con.getInputStream());
    showImage(image);
    long estimatedTime = System.currentTimeMillis() - startTime;
    System.out.println(estimatedTime);
    Thread.sleep(5);
}
This strategy simply grabs each frame from the URL that the camera provides. The code is in Java, but the results are the same in C++ with the curl library.
This solution avoids the delay of the first one, but it takes a little more than 100 ms to grab each frame, which means I can only achieve about 10 FPS on average.
I would like to know how I can read the video in real time using C++, or another library developed in C++.
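For reference, a C++ counterpart of the Java loop above: a sketch using libcurl's easy interface plus cv::imdecode (the image.cgi URL is the one from the question, and the [IP] placeholder is left as-is; note that each curl_easy_perform is a full HTTP round trip, which is where the ~100 ms per frame goes):

```cpp
#include <curl/curl.h>
#include <opencv2/opencv.hpp>
#include <vector>

// Append received bytes to a std::vector<uchar> buffer.
static size_t writeCallback(char *data, size_t size, size_t nmemb, void *userp)
{
    auto *buf = static_cast<std::vector<uchar> *>(userp);
    buf->insert(buf->end(), data, data + size * nmemb);
    return size * nmemb;
}

int main()
{
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL *curl = curl_easy_init();
    std::vector<uchar> buffer;

    curl_easy_setopt(curl, CURLOPT_URL, "http://[IP]/-wvhttp-01-/image.cgi");
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, writeCallback);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, &buffer);

    while (true) {
        buffer.clear();
        if (curl_easy_perform(curl) != CURLE_OK)   // one full HTTP request per frame
            break;
        // Decode the received JPEG bytes into a BGR image.
        cv::Mat frame = cv::imdecode(buffer, cv::IMREAD_COLOR);
        if (!frame.empty())
            cv::imshow("camera", frame);
        if (cv::waitKey(1) == 27)                  // Esc to quit
            break;
    }
    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return 0;
}
```

Reusing one CURL handle keeps the TCP connection alive between requests, which shaves some of the per-frame overhead, but the request/response latency itself remains.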

I struggled with similar issues and think I have solved some of your problems using libVLC with OpenCV. FFMPEG seemed to have issues decoding H264 properly, plus the newer versions (2.4.11) seem to have the TCP fix in there already for FFMPEG. Anyways, I use MS Visual Studio on Windows 7 and 8.1.
Details are given here: http://answers.opencv.org/question/65932

Personally, I suggest you use ffmpeg to read RTSP streams from IP cameras, and then use OpenCV to read from ffmpeg's decoded buffer. ffmpeg has very good optimizations for H.264 decoding, so performance should not be a critical issue.
You can use the ffmpeg binary to verify whether this works correctly:
ffmpeg -i "rtsp://root:camera#10.0.4.127/stream/profile1=u" -vcodec copy -acodec none test.mp4
If test.mp4 can be played successfully, then it's definitely OK for you to integrate ffmpeg libs into your project.
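If the command works, the programmatic equivalent looks roughly like this: a sketch of decoding the RTSP stream with the libav* libraries (the send/receive decode API, FFmpeg 3.1+) and converting each frame to a cv::Mat. The URL is the one from the question; error handling is trimmed for brevity:

```cpp
#include <opencv2/opencv.hpp>
extern "C" {
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>
#include <libswscale/swscale.h>
}

int main()
{
    const char *url = "rtsp://root:camera#10.0.4.127/stream/profile1=u";
    avformat_network_init();

    AVFormatContext *fmt = nullptr;
    if (avformat_open_input(&fmt, url, nullptr, nullptr) < 0) return 1;
    if (avformat_find_stream_info(fmt, nullptr) < 0) return 1;

    int videoStream = av_find_best_stream(fmt, AVMEDIA_TYPE_VIDEO, -1, -1, nullptr, 0);
    AVCodecParameters *par = fmt->streams[videoStream]->codecpar;
    const AVCodec *dec = avcodec_find_decoder(par->codec_id);
    AVCodecContext *ctx = avcodec_alloc_context3(dec);
    avcodec_parameters_to_context(ctx, par);
    avcodec_open2(ctx, dec, nullptr);

    AVFrame *frame = av_frame_alloc();
    AVPacket *pkt = av_packet_alloc();
    SwsContext *sws = nullptr;

    while (av_read_frame(fmt, pkt) >= 0) {
        if (pkt->stream_index == videoStream &&
            avcodec_send_packet(ctx, pkt) == 0) {
            while (avcodec_receive_frame(ctx, frame) == 0) {
                // Convert the decoded YUV frame to BGR so OpenCV can use it.
                cv::Mat bgr(frame->height, frame->width, CV_8UC3);
                sws = sws_getCachedContext(sws, frame->width, frame->height,
                                           (AVPixelFormat)frame->format,
                                           frame->width, frame->height,
                                           AV_PIX_FMT_BGR24, SWS_BILINEAR,
                                           nullptr, nullptr, nullptr);
                uint8_t *dst[4] = { bgr.data };
                int dstStride[4] = { (int)bgr.step };
                sws_scale(sws, frame->data, frame->linesize, 0,
                          frame->height, dst, dstStride);
                cv::imshow("rtsp", bgr);
                cv::waitKey(1);
            }
        }
        av_packet_unref(pkt);
    }

    sws_freeContext(sws);
    av_frame_free(&frame);
    av_packet_free(&pkt);
    avcodec_free_context(&ctx);
    avformat_close_input(&fmt);
    return 0;
}
```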
Good luck!

You can process each frame using ffmpeg as well. You need to create your own filter as per your requirements: https://trac.ffmpeg.org/wiki/FilteringGuide
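For example, a hypothetical filter chain that rescales each frame and tweaks contrast before re-encoding (the filter choice here is illustrative only; the URL is the one from the question):

```shell
# Scale each frame to 640x360 and raise contrast slightly,
# then re-encode with x264 at a low-latency preset.
ffmpeg -i "rtsp://root:camera#10.0.4.127/stream/profile1=u" \
       -vf "scale=640:360,eq=contrast=1.2" \
       -c:v libx264 -preset ultrafast \
       filtered.mp4
```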

Related

OpenCV not working with different RTSP streaming URLs. grab() from videocapture is always 0

I'm working with OpenCV to read frames from an RTSP streaming link via the VideoCapture function. It worked well for one specific RTSP camera. But the thing is, I have tried to connect different RTSP cameras on the same network and, to my surprise, it wouldn't work.
Any thoughts on what could cause this problem? I need to be able to get the stream from any RTSP URL with the same OpenCV code for my purposes.
The camera that worked is a generic Chinese one, and it also worked with the Big Buck Bunny clip provided at rtsp://184.72.239.149/vod/mp4:BigBuckBunny_115k.mov. The second camera that I tried, and got no output from, is an AirCam Dome from Ubiquiti, which has 4 RTSP links. I tried every resolution.
Check if you have 'opencv_ffmpeg341.dll' or something like it in your running folder.
If you are using a Windows system, put the DLL in the x64 or x86 folder.

OpenCV stream captured CAM with H264 (mp4) codec

I'd like to stream the webcam pictures that are captured by OpenCV. I'm thinking about a solution with ffmpeg and live555 (sadly, they are not documented so well). My problems are:
How can I convert the captured images to H.264 so the pictures/second match? If it is in a plain loop I get more than 25 pictures/second and the video is too fast.
How can I directly stream the converted H.264 stream over the network via RTP/RTSP or similar?
Thanks for your help!
This is a common problem.
If you are not required to distribute your software (private use / server side / open source), you may use FFmpeg compiled with the x264 encoder; there's a config flag for that in FFmpeg's configure script.
If you do need to distribute your software, I don't know of any LGPL-licensed library for that; I believe there is no such library. You'd have to use some paid solution.
You should implement DeviceSource.cpp (see DeviceSource.hh) and use it as the FramedSource.
Edit: Apple revealed a video encoder API in iOS 8, allowing access to the stream of H.264 frames.
For an example of how to use x264 and Live555 to encode and stream frames, see the following:
spyPanda open source project.
How to write a Live555 FramedSource to allow me to stream H.264 live (SO question).

Streaming opencv Video

I need some ideas about how to stream a video feed coming from OpenCV to a webpage. I currently have GStreamer, but I don't know if this is the right tool for the job. Any advice on using GStreamer, or any hyperlinks to tutorials, would be helpful and appreciated!
Thanks!
OpenCV doesn't provide an interface for streaming video, which means that you'll need to use some other technology for this purpose.
I've used GStreamer in several professional projects: this is the droid you are looking for.
I don't have any experience with streaming OpenCV output to a website. However, I'm sure this is possible using GStreamer.
Using a GStreamer stream, it is possible to get the data and convert it into OpenCV format. I recommend you read up on GstAppSink and GstBuffer.
Basically, if I remember correctly, you must run the pipeline in a background thread. Then, using some function in gst_app_sink, you can get the buffer data from the sink.
From a quick lookup on the issue, you had to use GST_BUFFER_DATA for this.
I remember having to convert the result from YCbCr to BGR; a colleague had problems, as OpenCV's conversion was inadequate, so you might have to write your own. (This was back in the IplImage* days.)
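Note that GST_BUFFER_DATA belongs to GStreamer 0.10; in GStreamer 1.x you map the buffer instead. A sketch of pulling frames from an appsink into a cv::Mat, using videotestsrc as a stand-in source (the pipeline string is an assumption; substitute your actual source, and letting videoconvert produce BGR avoids the manual YCbCr conversion entirely):

```cpp
#include <gst/gst.h>
#include <gst/app/gstappsink.h>
#include <opencv2/opencv.hpp>

int main(int argc, char **argv)
{
    gst_init(&argc, &argv);

    // Ask the pipeline for BGR so the bytes map directly onto a cv::Mat.
    GError *err = nullptr;
    GstElement *pipeline = gst_parse_launch(
        "videotestsrc ! videoconvert ! video/x-raw,format=BGR "
        "! appsink name=sink", &err);
    if (!pipeline) return 1;
    GstAppSink *sink =
        GST_APP_SINK(gst_bin_get_by_name(GST_BIN(pipeline), "sink"));
    gst_element_set_state(pipeline, GST_STATE_PLAYING);

    for (int i = 0; i < 100; ++i) {
        GstSample *sample = gst_app_sink_pull_sample(sink);   // blocks for a frame
        if (!sample) break;

        // Read the frame size from the negotiated caps.
        GstStructure *s = gst_caps_get_structure(gst_sample_get_caps(sample), 0);
        int width = 0, height = 0;
        gst_structure_get_int(s, "width", &width);
        gst_structure_get_int(s, "height", &height);

        GstBuffer *buf = gst_sample_get_buffer(sample);
        GstMapInfo map;
        gst_buffer_map(buf, &map, GST_MAP_READ);
        // Wrap the mapped bytes without copying; clone() if kept past unmap.
        cv::Mat frame(height, width, CV_8UC3, (void *)map.data);
        cv::imshow("gst", frame);
        cv::waitKey(1);
        gst_buffer_unmap(buf, &map);
        gst_sample_unref(sample);
    }

    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(sink);
    gst_object_unref(pipeline);
    return 0;
}
```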

FFMPEG on windows (for H.264 RTSP decoding)

Has anyone used the latest FFMPEG version for decoding an H.264-based RTSP stream on Windows using OpenCV?
My problem is that I can successfully decode an H.264-based RTSP stream on Linux, but when I use the same code on Windows the output is heavily pixelated. Can someone tell me why there is a difference in behaviour (is it due to a version mismatch)? Also, how do I find out which version of FFMPEG is used by the OpenCV SDK 2.1.0 and 2.2.0 builds available for Windows?
Awaiting your response.
Thanks in advance.
I didn't know that you can decode an RTSP stream using OpenCV.
I have decoded RTSP streams using DirectShow technology, and I'd recommend the DirectShow platform due to its low CPU consumption: the video decoding is mostly handled by the graphics card.
If I were you, I'd choose to decode the RTSP stream using the DirectShow platform.
First install the DirectShow SDK, then install FFDShow.
I'd recommend using filters from Elecard (I didn't find any other implementation of an RTSP source filter).
Use GraphEdit to watch your stream.
A great tutorial I have found is this (please read the continuation of this tutorial).
I'm not sure this would be the right answer for you, since I was using a totally different technology...

Snapshot using vlc (to get snapshot on RAM)

I was planning to use the VLC library to decode an H.264-based RTSP stream and extract each frame from it (converting the VLC picture to an IplImage). I have done a bit of exploration of the VLC code and concluded that there is a function called libvlc_video_take_snapshot which does something similar. However, the captured frame in this case is saved to the hard disk, which I wish to avoid given the real-time nature of my application. What would be the best way to do this? Would it be possible without modifying the VLC source (I want to avoid recompilation if possible)? I have heard of vmem etc. but could not really figure out what it does and how to use it.
The picture_t structure is internal to the library, so how can we get access to it?
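For what it's worth, vmem is exposed through libvlc_video_set_callbacks / libvlc_video_set_format, which let libVLC decode into memory you own, with no disk writes, no recompilation of VLC, and no need to touch picture_t. A sketch (the fixed 1280x720 size is an assumption and must match the stream; the URL is reused from the first question):

```cpp
#include <vlc/vlc.h>
#include <opencv2/opencv.hpp>
#include <mutex>

// Shared frame buffer filled by libVLC's vmem callbacks.
struct VideoContext {
    cv::Mat frame;       // pixels written by libVLC
    std::mutex mutex;
};

static void *lockCb(void *opaque, void **planes)
{
    auto *ctx = static_cast<VideoContext *>(opaque);
    ctx->mutex.lock();
    *planes = ctx->frame.data;   // libVLC decodes straight into the Mat
    return nullptr;              // no per-picture id needed
}

static void unlockCb(void *opaque, void *picture, void *const *planes)
{
    static_cast<VideoContext *>(opaque)->mutex.unlock();
}

static void displayCb(void *opaque, void *picture)
{
    // A frame is complete; a real application would signal a consumer here.
}

int main()
{
    const int width = 1280, height = 720;   // must match the stream (assumption)
    VideoContext ctx;
    ctx.frame = cv::Mat(height, width, CV_8UC3);

    libvlc_instance_t *vlc = libvlc_new(0, nullptr);
    libvlc_media_t *media = libvlc_media_new_location(
        vlc, "rtsp://root:camera#10.0.4.127/stream/profile1=u");
    libvlc_media_player_t *player = libvlc_media_player_new_from_media(media);
    libvlc_media_release(media);

    // "RV24" delivers RGB order; use cv::cvtColor(..., cv::COLOR_RGB2BGR)
    // before display if exact colors matter. Pitch is bytes per row.
    libvlc_video_set_format(player, "RV24", width, height, width * 3);
    libvlc_video_set_callbacks(player, lockCb, unlockCb, displayCb, &ctx);
    libvlc_media_player_play(player);

    while (true) {
        {
            std::lock_guard<std::mutex> guard(ctx.mutex);
            cv::imshow("vlc", ctx.frame);
        }
        if (cv::waitKey(30) == 27) break;   // Esc to quit
    }
    libvlc_media_player_stop(player);
    libvlc_media_player_release(player);
    libvlc_release(vlc);
    return 0;
}
```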
Awaiting your response.
P.S. Earlier I tried doing this using FFMPEG; however, the ffmpeg library has a lot of issues decoding an H.264-based RTSP stream on Windows, and hence I had to switch to VLC.
Regards,
Saurabh Gandhi
