It had to happen: I'm stuck in the last phase of my project. I want to use my code, which works like a charm with my webcam, on an IP camera. The URL works perfectly in my browser, but nothing comes out with OpenCV...
Here is my code:
#include <opencv/highgui.h>
using namespace cv;
int main(int argc, char *argv[])
{
    Mat frame;
    namedWindow("video", 1);
    VideoCapture cap("http://192.168.1.99:99/videostream.cgi?resolution=32&rate=0&user=admin&pwd=password&.mjpg");

    while (cap.isOpened())
    {
        cap >> frame;
        if (frame.empty()) break;

        imshow("video", frame);
        if (waitKey(30) >= 0) break;
    }
    return 0;
}
And the compiler settings:
//Added to the .pro file of QtCreator
INCLUDEPATH += C:\\OpenCV243\\release\\include
LIBS += -LC:\\OpenCV243\\release\\lib \
-lopencv_core243.dll \
-lopencv_highgui243.dll
I've tested opening a .avi file with the same code and it works... But a public IP camera URL like http://66.184.211.231/mjpg/video.mjpg doesn't! What's the matter, then?
Removed by edit: I had considered FFMPEG to be the issue, but v2.4.3 has built-in FFMPEG support, and .avi files work even though I don't have any FFMPEG library installed (care to explain?).
Thanks in advance,
Regards,
Mister Mystère
Solved it by copying opencv_ffmpeg.dll from the build\x86\mingw\bin folder of the sources and pasting it next to the built DLLs (in the bin folder accessible through PATH). I have no idea why, but opencv_ffmpeg_64.dll had been produced instead.
Since you can connect and grab frames from a web camera, I think your library is set up correctly and you should be able to connect to IP cameras. I believe the issue is with the supplied URL of the camera.
Try logging into the camera and disabling its password protection. Remove the login and password fields from the URL, so it becomes something like "http://192.168.1.99:99/videostream.cgi?resolution=32&.mjpg". Also, you can log into the camera and check its resolution. I noticed you have resolution=32, but I think it should be something like resolution=704x480.
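For what it's worth, a minimal sketch of what the adjusted capture call might look like; the exact query string and the 704x480 value are assumptions, so substitute whatever your camera actually reports:
#include <opencv2/highgui/highgui.hpp>
using namespace cv;

int main()
{
    // Credentials removed from the URL; the resolution value is a guess based on a typical camera setting.
    VideoCapture cap("http://192.168.1.99:99/videostream.cgi?resolution=704x480&.mjpg");
    if (!cap.isOpened())
        return -1;

    Mat frame;
    namedWindow("video", 1);
    while (cap.read(frame))
    {
        imshow("video", frame);
        if (waitKey(30) >= 0) break;
    }
    return 0;
}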
Hope this helps.
I have a problem seemingly caused by OpenCV 3.xx - the problem does not manifest in OpenCV 2.xx
The issue is reading video files. I've set my code up as follows:
#include <opencv2/opencv.hpp>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <iostream>

int main()
{
    cv::VideoCapture cap;
    cv::Mat frame;

    if (!cap.open("Myfile.avi"))
        std::cout << "Open failed" << std::endl;
    else
        cap.read(frame);

    cv::imshow("Frame", frame);
    cv::waitKey(5000);
    return 0;
}
Now the problem is that when the code gets to cap.read(frame) I get a "vector subscript is out of range" error with OpenCV 3.4.0, and this does not happen with my build of OpenCV 2.4.9. The file is an .avi, it's not some weird codec, and it clearly works in previous versions of OpenCV.
I've tried other OpenCV 3.xx builds and I get the same or similar problems simply reading a file in.
My question is twofold:
How do I get OpenCV 3.xx to work with reading video files (or do I need to regress to 2.xx?)
Why has the major revision change completely screwed up video file reading? That doesn't make any sense for a computer vision API.
As a guess it will be something to do with the FFMPEG implementation because various searches have turned up other people having issues with this.
Any help is much appreciated.
Thanks
I've managed to resolve it myself; it turns out that in OpenCV 3.xx I have to force VideoCapture::open to use the FFMPEG backend by doing this:
cap.open("Myfile.avi", cv::CAP_FFMPEG)
where the second parameter is the flag that identifies which VideoCapture API backend to use. The list can be found here for anyone else interested:
https://docs.opencv.org/3.3.0/d4/d15/group__videoio__flags__base.html
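Put together, a minimal sketch of the fix (the file name is just the placeholder from the question):
#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    cv::VideoCapture cap;
    cv::Mat frame;

    // Explicitly request the FFMPEG backend instead of letting OpenCV pick one.
    if (!cap.open("Myfile.avi", cv::CAP_FFMPEG))
    {
        std::cout << "Open failed" << std::endl;
        return -1;
    }

    if (cap.read(frame))
    {
        cv::imshow("Frame", frame);
        cv::waitKey(5000);
    }
    return 0;
}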
My goal is to capture a frame from an RTMP stream every second and process it using OpenCV. I'm using FFmpeg version N-71899-g6ef3426 and OpenCV 2.4.9 with the Java interface (but I'm first experimenting with Python).
For the moment, I can only take the quick and dirty solution, which is to capture images using FFmpeg, store them on disk, and then read those images from my OpenCV program. This is the FFmpeg command I'm using:
ffmpeg -i "rtmp://antena3fms35livefs.fplive.net:1935/antena3fms35live-live/stream-lasexta_1 live=1" -r 1 capImage%03d.jpg
This is currently working for me, at least with this concrete rtmp source. Then I would need to read those images from my OpenCV program in a proper way. I have not actually implemented this part, because I'm trying to find a better solution.
I think the ideal way would be to capture the RTMP frames directly from OpenCV, but I cannot find a way to do it. This is the Python code I'm using:
cv2.namedWindow("camCapture", cv2.CV_WINDOW_AUTOSIZE)
cap = cv2.VideoCapture()
cap.open('"rtmp://antena3fms35livefs.fplive.net:1935/antena3fms35live-live/stream-lasexta_1 live=1"')
if not cap.open:
print "Not open"
while (True):
err,img = cap.read()
if img and img.shape != (0,0):
cv2.imwrite("img1", img)
cv2.imshow("camCapture", img)
if err:
print err
break
cv2.waitKey(30)
Instead of the read() function, I've also tried the grab() and retrieve() functions, without any good result. The read() function executes every time, but no "img" or "err" is received.
Is there any other way to do it? Or maybe there is no way to get frames from a stream like this directly in OpenCV 2.4.9?
I've read that OpenCV uses FFmpeg for this kind of task, but as you can see, in my case FFmpeg is able to get frames from the stream while OpenCV is not.
In case I cannot find a way to get the frames directly from OpenCV, my next idea is to somehow pipe FFmpeg's output into OpenCV, which seems harder to implement.
Any ideas?
Thank you!
UPDATE 1:
I'm on Windows 8.1. Since I was running the Python script from Eclipse PyDev, this time I ran it from cmd instead, and I'm getting the following warning:
warning: Error opening file (../../modules/highgui/src/cap_ffmpeg_impl.hpp:545)
As far as I could read, this warning means that either the file path is wrong or the codec is not supported. So the question remains: is OpenCV not capable of getting frames from this source?
Actually, I spent more than a day figuring out how to solve this issue. Finally, I solved it with the help of this link.
Here is the client-side code.
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/opencv.hpp>
#include <iostream>
using namespace cv;
int main(int, char**) {
    cv::VideoCapture vcap;
    cv::Mat image;
    const std::string videoStreamAddress = "rtmp://192.168.173.1:1935/live/test.flv";

    if (!vcap.open(videoStreamAddress)) {
        std::cout << "Error opening video stream or file" << std::endl;
        return -1;
    }

    cv::namedWindow("Output Window");
    cv::Mat edges;

    for (;;) {
        if (!vcap.read(image)) {
            std::cout << "No frame" << std::endl;
            cv::waitKey();
        }
        cv::imshow("Output Window", image);
        if (cv::waitKey(1) >= 0) break;
    }
}
Note: In this case I had created an Android application to capture real-time video and send it to an RTMP server (Wowza) deployed on a PC, which is why I wrote this C++ implementation for real-time video processing.
python -c "import cv2; print(cv2.getBuildInformation())"
Check whether your OpenCV build has FFmpeg support. If it does, your code should be fine.
If not, rebuild OpenCV with FFmpeg.
Under OS X:
brew install opencv --with-ffmpeg
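If you would rather check from C++ than Python, the same information is available through cv::getBuildInformation(); a minimal sketch:
#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    // Print the build summary; look in the Video I/O section for whether FFMPEG is reported as YES.
    std::cout << cv::getBuildInformation() << std::endl;
    return 0;
}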
In OpenCV 2.4.6, I am trying to load an image into a Mat with the simple code given below. But the image is not loaded: when I print the image size, it shows '0'. Can anybody please tell me what is going wrong?
#include <opencv2/highgui/highgui.hpp>
#include <cstdio>
using namespace cv;

int main(int argc, char *argv[])
{
    Mat a = imread("C:/image3.jpg");
    cv::Size frame11_size = a.size();
    printf("%d", frame11_size.height);
    return 0;
}
Update: I solved the problem. The problem was that I had set the library, include, and additional-dependency paths in 'Debug' mode only; I had not changed anything in 'Release' mode. When I changed the properties in 'Release' mode as well, it worked. Thanks all for your kind responses; I am giving '+1' for your answers.
I think there should be a single slash in your image path, and you should always check whether the image was loaded successfully:
Mat a = imread("C:/image3.jpg");
if (!a.data)  // Check for invalid input
{
    cout << "Could not open or find the image" << std::endl;
    return -1;
}
OpenCV can't open jpg files by itself; it depends on third-party libraries to do so. Maybe you are missing certain DLLs, or maybe your OpenCV installation doesn't have the right path to them. To test this assumption, store your image in another format, for example pgm or ppm. Those formats do not perform any encoding and just store the image buffer in the file as-is. As a result, OpenCV will not need any external libraries to open an image in ppm format.
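As a quick sanity check, something like the following isolates the codec question; the .ppm path is hypothetical and assumes you have converted image3.jpg with an external tool first:
#include <opencv2/highgui/highgui.hpp>
#include <cstdio>
using namespace cv;

int main()
{
    // PPM needs no external codec, so if this prints a non-zero size while the
    // jpg does not load, the problem is a missing or unfound image codec library.
    Mat a = imread("C:/image3.ppm");  // hypothetical PPM copy of image3.jpg
    std::printf("%d x %d\n", a.cols, a.rows);
    return 0;
}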
I have a problem using the VideoCapture class with OpenCV 2.4.2 under Windows XP 32-bit.
It doesn't open any file or camera, and fixing it is being a pain.
I'm using Visual Studio 2010, but I have also tried the code in Qt Creator with the same result.
The testing code is the following:
#include "opencv/cv.h"
#include "opencv/highgui.h"
#include <iostream>
#include <string>
#include <iomanip>
#include <sstream>
using namespace cv;
using namespace std;
int main()
{
    const char* videoPath = "C:/video/";
    string videoName = string(videoPath) + "avi.avi";

    VideoCapture cap(videoName);
    if (!cap.isOpened())
    {
        std::cout << "Fail" << std::endl;
        return -3;
    }
    return 0;
}
The output is always '-3'.
Qt Creator shows a
warning: Error opening file (../../modules/highgui/src/cap_ffmpeg_impl.hpp:361)
I debugged it and the problem appears in the first line of:
CvCapture* cvCreateFileCapture_FFMPEG_proxy(const char* filename)
{
    CvCapture_FFMPEG_proxy* result = new CvCapture_FFMPEG_proxy;
    if (result->open(filename))
        return result;
    delete result;
#if defined WIN32 || defined _WIN32
    return cvCreateFileCapture_VFW(filename);
#else
    return 0;
#endif
}
in the cap_ffmpeg.cpp internal file.
I have tested the same code on a Mac under Snow Leopard and it works. No surprises here, since it must be a library issue.
I have opened the AVI file, with the same path, quickly and easily using the C function cvCapture.
I have all the DLLs from 'C:\opencv\opencv\build\x86\vc10\bin' included in my debug folder, and tbb.dll plus all the 'C:\opencv\opencv\3rdparty\ffmpeg' content included too.
This is driving me crazy, so any help would be appreciated.
Thanks in advance.
In my case, the same problem was resolved after deleting all opencv_*.dll files in C:\Windows\System32, so that the DLLs are found only through the PATH, e.g. "%PATH%;C:\Program Files\OpenCV2.4.2\build\x86\vc10\bin". Please try it.
I also faced this problem and solved it by correcting the path passed to:
VideoCapture cap(videoName);
If the AVI file at videoName doesn't exist, you get an error like:
(../../modules/highgui/src/cap_ffmpeg_impl.hpp:XXX)
where XXX represents the line number.
I had the same issue with the open method while running under Windows 8 (64-bit) with OpenCV 2.4.10. The IDE is running in x86.
I found that running the application in the Release configuration solved the problem.
I stumbled across the answer because I had the same issue with imread. The issue is described in this thread:
imread not working in Opencv
See the fix I found below, for mp4 files.
I faced the same issue on Windows 7, using OpenCV 2.4.9 with the Java wrapper for OpenCV.
Matthias Krings has done a lot of research on this; see this. Apparently the issue depends on the video file type; with .avi files it seems to work for a lot of people. Unfortunately, his solution of setting OPENCV_DIR did not work for me, but his comments in the bug listing gave me a hint to fix the issue.
You have to do two things:
Set java.library.path to include the directory {opencv\install\dir}opencv-2.4.9\build\x86\vc10\bin. You can set the variable using the -D option on the java command line: java -Djava.library.path=PATH_TO_YOUR_DLL .... Also fetch this variable from your environment, using System.getProperty(...), and print it before calling loadLibrary(), to verify that the path setting is working.
And in your Java class, load the FFmpeg DLL using System.loadLibrary("opencv_ffmpeg249");. The loadLibrary() call should be made from within a static block in Java.
There should be a file named opencv_ffmpeg249.dll in the java.library.path directory that we set.
This works on Ubuntu as well, with the corresponding .so files.
I too faced the same issue and resolved it after pointing to the correct location of the input video.
I use Eclipse CDT for developing C with MinGW, and I added the OpenCV library. Everything compiles without problems, but if I start the compiled application (one that uses an OpenCV function) there is an init error. If I only include the header files without calling a function, it works.
The code:
#include <opencv2/opencv.hpp>
#include <iostream>
using namespace std;
int main() {
cout << "!!!Streaming!!!" << endl; // prints !!!Streaming!!!
// Nothing but create a window
cvNamedWindow("mainWin", CV_WINDOW_AUTOSIZE);
cvMoveWindow("mainWin", 100, 100);
cvWaitKey(0);
return 0;
}
Error-Image: http://i.stack.imgur.com/zdmT7.png
If I do not use a cv... function there is no init error, even if I include opencv2/opencv.hpp. I have no idea why that is.
Hope you can help.
I found the solution: the OpenCV DLL files for MinGW are damaged. I renamed the Visual Studio DLLs to the names of the MinGW DLLs, put them directly in the exe folder, and it works.