Can't access webcam with OpenCV

I'm using OpenCV 2.2 with Visual Studio 2010 on a Windows 7 64-bit PC.
I'm able to display pictures and play AVI files through OpenCV as shown in the book "Learning OpenCV", but I'm not able to capture webcam images. Even the samples that ship with OpenCV can't access the webcam.
I get asked for "video source -> capture source" and there are two options: HP Webcam Splitter and HP Webcam. If I select HP Webcam, the window closes immediately without displaying any error (I think any error message flashes by too fast to be seen before it closes). If I select HP Webcam Splitter, the new window where the webcam video is supposed to appear is filled with uniform gray. The webcam LED is on but no video is shown. My webcam works fine with Flash (www.testmycam.com) and with DirectShow http://www.codeproject.com/KB/audio-video/WebcamUsingDirectShowNET.aspx
I tried to get an error message using this:
#include "cv.h"
#include "highgui.h"
#include <iostream>
using namespace cv;
using namespace std;

int main(int, char**)
{
    VideoCapture cap("0"); // open the default camera
    if (!cap.isOpened())   // check if we succeeded
    {
        cout << "Error opening camera!";
        getchar();
        return -1;
    }
    Mat edges;
    namedWindow("edges", 1);
    for (;;)
    {
        Mat frame;
        cap >> frame; // get a new frame from camera
        cvtColor(frame, edges, CV_BGR2GRAY);
        GaussianBlur(edges, edges, Size(7,7), 1.5, 1.5);
        Canny(edges, edges, 0, 30, 3);
        imshow("edges", edges);
        if (waitKey(30) >= 0) break;
    }
    // the camera will be deinitialized automatically in the VideoCapture destructor
    return 0;
}
And the error message I got was:
warning: Error opening file (C:\Users\vp\work\ocv\opencv\modules\highgui\src\cap
_ffmpeg.cpp:454)
Error opening camera!
I don't know what this "cap_ffmpeg.cpp" is, and I don't know whether this is an issue with the nosy "HP Media Smart" stuff.
Any help will be greatly appreciated.

I had the same issue on Windows 7 64-bit. I had to recompile opencv_highgui, changing the "Preprocessor Definitions" in the C/C++ panel of the properties page to include:
HAVE_VIDEOINPUT
HAVE_DSHOW
Hope this helps

cap_ffmpeg.cpp is the source file that uses FFmpeg to capture from the device. If the default example that ships with OpenCV doesn't work with your webcam, you are out of luck. I suggest you buy another one that is supported.

Recently I installed OpenCV 2.2 and NetBeans 6.9.1. I had a problem with camera capture: the image in the window was black, but the program ran perfectly, without errors. I had to run NetBeans as an admin user to fix this problem.
I hope this can help you all.

I just switched to OpenCV 2.2 and am having essentially the same problem, but on a 32-bit computer running Vista. The webcam would start, but I'd get an error message setting the width property for the camera. If I specifically requested the DirectShow camera, cvCreateCameraCapture would fail.
What I think is going on is that the distribution build of HighGUI was built excluding the DirectShow camera. The favored Windows camera interface in OpenCV used to be Video for Windows (VFW), but that has been deprecated since Windows Vista came out and has caused all sorts of problems. Why they don't just include DirectShow, I don't know. Check the source file cap.cpp.
My next step is to rebuild HighGUI myself and make sure the HAVE_DSHOW flag is set. I seem to remember having the same problem with the previous version of OpenCV I was using, until I rebuilt it making sure DirectShow support was enabled.

I experienced the same problem: my Vaio webcam LED is on but no image appears on the screen.
Then I tried exporting the first frame to a JPEG file, and that worked. Then I tried inserting a delay of 33 ms before capturing each frame, and this time it worked like a charm. Hope this'll help.

Here's an article I wrote some time back. It uses the videoInput library to get input from webcams. It uses DirectX, so it works with almost every webcam out there. Capturing images with DirectX

When you create the cv::VideoCapture you should pass an integer, not a string (a string implies the input is a file).
To open the default camera, open the stream with
cv::VideoCapture capture(0);
and it will work fine.

CMAKE GUI, MSVC++10E, Vista 32bit, OpenCV2.2
It looks like the HAVE_VIDEOINPUT/WITH_VIDEOINPUT option doesn't work.
However, adding /D HAVE_DSHOW /D HAVE_VIDEOINPUT to CMAKE_CXX_FLAGS and CMAKE_C_FLAGS did the trick for me (there will be warnings due to macro redefinitions).
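For reference, those defines can be passed on the configure command line; a sketch of the CMake invocation (the generator name and source path are placeholders for your setup):

```shell
cmake -G "Visual Studio 10" ^
  -D CMAKE_C_FLAGS="/D HAVE_DSHOW /D HAVE_VIDEOINPUT" ^
  -D CMAKE_CXX_FLAGS="/D HAVE_DSHOW /D HAVE_VIDEOINPUT" ^
  C:\path\to\opencv
```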

Related

Using OSVR camera in OpenCV 3

I'm trying to use the OSVR IR camera in OpenCV 3.1.
Initialization works OK.
Green LED is lit on camera.
When I call VideoCapture.read(mat) it returns false and mat is empty.
Other cameras work fine with the same code and VLC can grab the stream from the OSVR camera.
Some further testing reveals: grab() returns true, whereas retrieve(mat) again returns false.
Getting width and height from the camera yields expected results but MODE and FORMAT gets me 0.
Is this a config issue? Can it be solved by a combination of VideoCapture.set calls?
Alternative official answer, received from the developers (after my own solution below):
The reason my camera didn't work out of the box with OpenCV might be that it has old firmware (pre-v7).
Work around (or just update firmware):
I found the answer here while browsing anything remotely linked to the issue:
Fastest way to get frames from webcam
You need to specify that it should use DirectShow.
VideoCapture capture( CV_CAP_DSHOW + id_of_camera );

Capturing through a single multi-head (stereo) camera using OpenCV

I have a single multi-head (stereo) USB camera that can be detected and can stream stereo video using the "Video Capture Sources" filter in GraphEdit.
I'm trying to access both channels using OpenCV 2.4.8 (on a PC with VS2010, Win7 x64) for further stereo image processing. However, I can only detect/stream a single head (channel) of the camera, not both stereo heads. My code follows the documentation notes for VideoCapture::grab/VideoCapture::retrieve and looks like the following:
#include "opencv2/opencv.hpp"
using namespace cv;

int main(int, char**)
{
    VideoCapture cap(0); // open the default camera
    if (!cap.isOpened()) // check if we succeeded
        return -1;
    Mat Lframe, Rframe;
    namedWindow("Lframe", CV_WINDOW_AUTOSIZE);
    namedWindow("Rframe", CV_WINDOW_AUTOSIZE);
    while (char(waitKey(1)) != 'q') {
        if (cap.grab()) {
            cap.retrieve(Lframe, 0); // get a new frame from each head
            cap.retrieve(Rframe, 1);
            imshow("Lframe", Lframe);
            imshow("Rframe", Rframe);
        }
        if (waitKey(30) >= 0) break;
    }
    return 0;
}
The problem is that the rendered channels (Lframe, Rframe) are identical no matter which channel index is passed. Hence only one head is accessed, and I can't get stereo streaming.
Is there a way to use the "Video Capture Sources" filter directly with OpenCV?
Thank you in advance for your assistance.

How to read a frame from .mts video file with gstreamer 1.0 and process it by OpenCV

For the past two weeks I have been trying to find a proper way to read frames from an .mts video file and process them in OpenCV. When the .mts file is in 25p (25 fps progressive) format, OpenCV's VideoCapture works fine for seeking video frames, but when it is in 50i (25 fps interlaced) format, VideoCapture cannot properly decode it frame by frame.
For example, when I read frame #1, then read frame #300, and later read frame #1 again, it returns a corrupted image different from my previous read of frame #1. (I am using OpenCV 2.4.6.)
I decided to replace the video decoder part of the program.
I tried FFmpegSource2, but the problem of proper frame seeking for .mts was not resolved (most of the time the FFMS_GetFrame function returns the same output for several consecutive frames of a 50i .mts file).
I also tried DirectShow. But the IsFormatSupported method of IMediaSeeking for TIME_FORMAT_FRAME does not return S_OK for a 50i .mts video file; it only supports TIME_FORMAT_MEDIA_TIME for this kind of file. I have not tried it myself, but a friend said that even using TIME_FORMAT_MEDIA_TIME for frame seeking results in the same problem as above, and I may not be able to jump back and forth to individual frames and read their data.
Now I am going to try gstreamer. I found sample method for linking gstreamer and openCV in the following link:
Adding opencv processing to gstreamer application
When I try to compile it in gstreamer 1.0, I get the following error:
error C3861: 'gst_app_sink_pull_buffer': identifier not found
I have included gst/gst.h, gst/app/gstappsink.h, gst/app/gstappsrc.h
I looked at the following help link, and the gst_app_sink_pull_buffer function was not there either.
http://gstreamer.freedesktop.org/data/doc/gstreamer/head/gst-plugins-base-libs/html/gst-plugins-base-libs-appsink.html
I am using gstreamer 1.0 (v1.2.0) from gstreamer.freedesktop.org
Maybe the gstreamer SDK from www.gstreamer.com (based on gstreamer 0.10) would work for that, but I have not tried it yet and prefer to use gstreamer from gstreamer.freedesktop.org.
I don't know where gst_app_sink_pull_buffer is defined. Does anybody know how I can compile the sample method provided for gstreamer 0.10 in
Adding opencv processing to gstreamer application under gstreamer 1.0?
Thank you in advance.
UPDATE 1: I am new to gstreamer. Now I know that I have to port the sample method of Adding opencv processing to gstreamer application from gstreamer 0.10 to gstreamer 1.0. I replaced the gst_app_sink_pull_buffer function with gst_app_sink_pull_sample and gst_sample_get_buffer. I have to work more on the other parts of the code and see whether I can open a desired frame from a 50i .mts video file and process it with OpenCV.
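For anyone doing the same port, the appsink change can be outlined as follows. This is an uncompiled sketch based on the replacement calls mentioned above; width and height are assumed to come from the negotiated caps, which is omitted here:

```
/* gstreamer 0.10:  GstBuffer *buf = gst_app_sink_pull_buffer(sink);   */
/* gstreamer 1.0:   pull a GstSample, then take its GstBuffer:         */

GstSample *sample = gst_app_sink_pull_sample(GST_APP_SINK(sink));
GstBuffer *buffer = gst_sample_get_buffer(sample);
GstMapInfo map;
gst_buffer_map(buffer, &map, GST_MAP_READ);   /* replaces GST_BUFFER_DATA */
cv::Mat frame(height, width, CV_8UC3, (void *)map.data); /* wrap, then clone() */
gst_buffer_unmap(buffer, &map);
gst_sample_unref(sample);
```

Note that the Mat only wraps the mapped memory, so it must be cloned (or processed) before the buffer is unmapped.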
UPDATE 2: I found a very good example at
http://gstreamer.freedesktop.org/data/doc/gstreamer/head/manual/html/section-data-spoof.html#section-spoof-appsink
And I easily replaced the part which saves the snapshot using GTK with functions that load the frame data buffer into an OpenCV Mat. This program works fine for many video file types and I can grab frames of the video file into an OpenCV Mat. But when the input is a 50i .mts video file, it returns the following errors and I cannot read the frame data:
No accelerated IMDCT transform found
0:00:00.405110839 4632 0B775380 ERROR libav :0:: get_buffer() failed (-1 2 00000000)
0:00:00.405740899 4632 0B775380 ERROR libav :0:: decode_slice_header error
0:00:00.406401077 4632 0B7756A0 ERROR libav :0:: Missing reference picture
0:00:00.406705867 4632 0B7756A0 ERROR libav :0:: Missing reference picture
0:00:00.416044436 4632 0B7759C0 ERROR libav :0:: Cannot combine reference and non-reference fields in the same frame
0:00:00.416813339 4632 0B7759C0 ERROR libav :0:: decode_slice_header error
0:00:00.417725301 4632 0B775CE0 ERROR libav :0:: Missing reference picture
The step-by-step debug shows that "No accelerated IMDCT transform found" appears after running
ret = gst_element_get_state( pipeline, NULL, NULL, 5 * GST_SECOND );
and a Google search shows that I can ignore it as a warning.
All of the other errors emerge just after running
g_signal_emit_by_name( sink, "pull-preroll", &sample, NULL );
I have no idea how to resolve this issue. I have already played this .mts file in another example using playbin, and gstreamer plays it well there.

alternative to cvWaitKey() function in iOS

OK, what I am trying to do is retrieve a frame from an existing video file, do some work on the frame, and then save it to a new file.
What actually happens is that it writes some frames and then crashes, as the code is quite fast.
If I don't put in cvWaitKey(), I get the same error I get when writing video frames with the AVFoundation library without using
AVAssetWriterInput.readyForMoreMediaData
OpenCV's video writer is implemented using AVFoundation classes, but we lose access to
AVAssetWriterInput.readyForMoreMediaData
Or am I missing something?
Here is code similar to what I'm trying to do:
while (grabResult && frameResult) {
    grabResult = cvGrabFrame(capture); // capture a frame
    if (grabResult) {
        img = cvRetrieveFrame(capture, 0); // retrieve the captured frame
        cvFlip(img, NULL, 0); // edit img
        frameResult = cvWriteFrame(writer, img); // add the frame to the file
        cvWaitKey(-1); // or anything that helps to finish adding the previous frame
    }
}
I am trying to convert a video file using OpenCV (without displaying it) in my iPhone/iPad app. Everything works except the cvWaitKey() function, and I get this error:
OpenCV Error: Unspecified error (The function is not implemented. Rebuild the library with Windows, GTK+ 2.x or Carbon support. If you are on Ubuntu or Debian, install libgtk2.0-dev and pkg-config, then re-run cmake or configure script) in cvWaitKey,
Without this function, frames are dropped, as there's no way to know whether the video writer is ready. Is there an alternative for my problem?
I am using OpenCV 2.4.2, and I get the same error with the latest precompiled version of OpenCV.
Repaint the UIImageView:
[[NSRunLoop currentRunLoop]runUntilDate:[NSDate dateWithTimeIntervalSinceNow:0.0f]];
followed by your imshow or cvShowImage
I don't know anything about iOS development, but did you try calling the C++ interface method waitKey()?

Why does OpenCV give me a black screen?

I'm currently trying to use OpenCV (using the Processing library).
However, when I try to run any examples (either the Processing ones or the C ones included with OpenCV), I see nothing but black instead of input from the camera. The camera's LED indicator does turn on. Has anyone had the same problem? Is my camera somehow incompatible with OpenCV? It's an Acer Crystal Eye...
Thanks,
OpenCV 2.1 still has a few problems with 64-bit OSes. You can read this topic on the subject.
If you're looking for working/compilable source code that shows how to use the webcam, check this out.
Let us know if it helped you.
I recently had the same problem. The OpenCV library on its own just gave me a blank screen; I had to include the videoInput library:
http://muonics.net/school/spring05/videoInput/
An example I followed was:
#include "stdafx.h"
#include "videoInput.h"
#include "cv.h"
#include "highgui.h"
#include <cstring>

int main()
{
    videoInput VI;
    int numDevices = VI.listDevices();
    int device1 = 0;
    VI.setupDevice(device1);

    int width  = VI.getWidth(device1);
    int height = VI.getHeight(device1);
    IplImage* image = cvCreateImage(cvSize(width, height), 8, 3);
    unsigned char* yourBuffer = new unsigned char[VI.getSize(device1)];

    cvNamedWindow("test");
    while (1)
    {
        VI.getPixels(device1, yourBuffer, false, false);
        // copy into the IplImage rather than repointing imageData,
        // so cvReleaseImage still frees the buffer it allocated
        memcpy(image->imageData, yourBuffer, VI.getSize(device1));
        cvConvertImage(image, image, CV_CVTIMG_FLIP);
        cvShowImage("test", image);
        if (cvWaitKey(15) == 27) break;
    }

    VI.stopDevice(device1);
    cvDestroyWindow("test");
    cvReleaseImage(&image);
    delete[] yourBuffer;
    return 0;
}
From this source: http://www.aishack.in/2010/03/capturing-images-with-directx/
I had a somewhat similar problem on Ubuntu. I downloaded code from here:
http://www.rainsoft.de/projects/pwc.html
It does an extra step before starting to grab frames (I think setting the FPS). Worth a try; the code is easy to read and works with non-Philips cams.
OpenCV only supports a limited number of camera types. Most likely your camera is not supported. You can look at either the source code or their website to see which ones are.
