I am trying to capture video into a Mat type from two or more MSFT LifeCam HD-3000s using the videoInput library, OpenCV 2.4.3, and VS2010 Express.
I followed the example at: Most efficient way to capture and send images from a webcam in a network and it worked great.
Now I want to replace the IplImage type with the C++ Mat type. I tried to follow the example at: opencv create mat from camera data
That gave me the following:
VI = new videoInput;
int CurrentCam = 0;
VI->setupDevice(CurrentCam,WIDTH,HEIGHT);
int width = VI->getWidth(CurrentCam);
int height = VI->getHeight(CurrentCam);
unsigned char* yourBuffer = new unsigned char[VI->getSize(CurrentCam)];
cvNamedWindow("test",1);
while(1)
{
    VI->getPixels(CurrentCam, yourBuffer, false, true);
    cv::Mat image(width, height, CV_8UC3, yourBuffer, Mat::AUTO_STEP);
    imshow("test", image);
    if(cvWaitKey(15)==27) break;
}
The output is a lined image (i.e., the first line looks correct but the second line seems off, the third correct, the fourth off, etc.). That suggests that either the step parameter is wrong or there is some difference between the IplImage type and the Mat type that I am not getting. I have tried looking at/altering all the parameters, but I can't find anything.
Hopefully, an answer will help those facing what appears to be a fairly common issue with loading an image from the videoInput library into the Mat type.
Thanks in advance!
Try swapping the first two constructor arguments. Unlike the videoInput calls, which use (width, height), the cv::Mat constructor takes (rows, cols), i.e. (height, width). With them swapped, each row of the Mat has the wrong length, which produces exactly the every-other-line offset you are seeing:
cv::Mat image(height, width, CV_8UC3, yourBuffer, Mat::AUTO_STEP);
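For reference, the whole loop with the corrected constructor might look like this (a sketch only, reusing the setup from your question; VI, CurrentCam and yourBuffer are assumed to be initialised as above):
int width = VI->getWidth(CurrentCam);
int height = VI->getHeight(CurrentCam);
cv::namedWindow("test", 1);
while (true)
{
    // grab the latest frame into the raw buffer
    VI->getPixels(CurrentCam, yourBuffer, false, true);
    // Mat's constructor takes (rows, cols), i.e. (height, width)
    cv::Mat image(height, width, CV_8UC3, yourBuffer, cv::Mat::AUTO_STEP);
    cv::imshow("test", image);
    if (cv::waitKey(15) == 27) break;
}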
Related
I'm using the INRIA person dataset. I iterate over the images and everything is fine, and then I have this function:
vector<float> HOG_extract(Mat input_image, bool patch_size, int width, int height)
{
    Mat gray_image;
    cvtColor(input_image, gray_image, CV_BGR2GRAY);

    // block_size, block_stride, cell_size and bin_size are set elsewhere
    HOGDescriptor hog;
    hog.winSize = Size(width, height);
    hog.blockSize = Size(block_size, block_size);
    hog.blockStride = Size(block_stride, block_stride);
    hog.cellSize = Size(cell_size, cell_size);
    hog.nbins = bin_size;

    vector<float> hog_value;
    vector<Point> locations;
    hog.compute(gray_image, hog_value, Size(0, 0), Size(0, 0), locations);
    return hog_value;
}
When it gets to hog.compute I receive an exception, along with libpng error: IDAT: invalid distance too far back.
How can I solve this? It looks like something happened when using imread and converting to grayscale.
There seems to be an issue with the dataset itself. I think the images were edited with an older version of libpng that introduced an error.
I managed to fix it with the help of the "png-fix-IDAT-windowsize" tool, which you can find here: http://www.libpng.org/pub/png/apps/pngcheck.html
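If you want to find the broken files before fixing them, a quick sketch (findBrokenImages and the file list are placeholders, not part of any library): loop over the dataset with cv::imread and report anything that fails to decode; libpng prints its error to stderr as each bad file is read. This won't catch every form of corruption, but it narrows things down.
#include <opencv2/opencv.hpp>
#include <iostream>
#include <string>
#include <vector>

// Sketch: report dataset images that fail to decode
void findBrokenImages(const std::vector<std::string>& files)
{
    for (size_t i = 0; i < files.size(); ++i)
    {
        cv::Mat img = cv::imread(files[i]);
        if (img.empty())
            std::cerr << "failed to decode: " << files[i] << std::endl;
    }
}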
I'm now using the PCL OpenNI grabber to get point clouds from the Kinect cameras. But I also want to get OpenCV Mat variables for the RGB and depth information.
Does anyone know how to achieve this?
Thanks a lot!
I just found that PCL has its own OpenNI2 wrapper to get the color and depth images directly. We can write a callback function like:
void PclProcessor::image_cb1_ (const boost::shared_ptr<pcl::io::Image>& rgb1, const boost::shared_ptr<pcl::io::DepthImage>& depth1, float reciprocalFocalLength)
{
    if (refreshK1)
    {
        // copy the RGB frame into a Mat, then swap to OpenCV's BGR channel order
        C1 = Mat(rgb1->getHeight(), rgb1->getWidth(), CV_8UC3);
        rgb1->fillRGB(C1.cols, C1.rows, C1.data, C1.step);
        cvtColor(C1, C1, CV_RGB2BGR);

        // copy the depth frame into a single-channel float Mat
        D1 = Mat(depth1->getHeight(), depth1->getWidth(), CV_32F);
        depth1->fillDepthImage(D1.cols, D1.rows, (float *)D1.data, D1.step);

        refreshK1 = false;
    }
    imshow("camera 1 color", C1);
    imshow("camera 1 depth", D1);
    cv::waitKey(0);
}
In this case, I can get the color image right. However, the depth image does not look right.
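One likely cause, in case it helps: imshow displays a CV_32F image assuming its values lie in [0,1], so raw depth values (in metres or millimetres) come out as an almost uniformly white image. Normalising to 8 bits before display usually gives a viewable result, e.g. (a sketch, using the D1 Mat from the callback above):
cv::Mat depthDisplay;
// stretch the depth range to 0..255 and convert to 8-bit for display
cv::normalize(D1, depthDisplay, 0, 255, cv::NORM_MINMAX, CV_8UC1);
cv::imshow("camera 1 depth", depthDisplay);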
The camera keeps crashing when I run my code. I am trying to convert a cv::Mat to an IplImage.
cv::Mat canvas(320, 240, CV_8UC3, Scalar(255,255,255));
IplImage test = canvas;
while (true)
{
    canvas = cvQueryFrame(capture);
    imgScribble = cvCreateImage(cvGetSize(&test), 8, 3);
    IplImage* imgYellowThresh1 = GetThresholdedImage1(&test);
    cvAdd(&test, imgScribble, &test);
    cvShowImage("video", &test);
}
//This is the only line that uses the C++ API, so I assume you want to use the C API instead:
cv::Mat canvas(320, 240, CV_8UC3, Scalar(255,255,255));
//I have used OpenCV for quite a while now and I have always declared IplImage*, never IplImage. As a rule of thumb, the * always goes after IplImage:
IplImage test = canvas;
This will become:
//although why you need to clone a newly created
//blank image is a valid concern
IplImage* canvas = cvCreateImage(....);
IplImage* test = cvClone(canvas);
cvZero(test);
//don't forget to release resources at some point
cvReleaseImage(&canvas);
cvReleaseImage(&test);
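For completeness, a sketch of what the whole loop might look like using only the C API (GetThresholdedImage1 is assumed to be your existing function, returning a newly allocated IplImage*):
CvCapture* capture = cvCaptureFromCAM(0);
IplImage* imgScribble = NULL;
cvNamedWindow("video", 1);
while (true)
{
    // frame is owned by capture - do not release it yourself
    IplImage* frame = cvQueryFrame(capture);
    if (!frame) break;
    if (!imgScribble)
        imgScribble = cvCreateImage(cvGetSize(frame), 8, 3);
    IplImage* imgYellowThresh = GetThresholdedImage1(frame);
    cvAdd(frame, imgScribble, frame);
    cvShowImage("video", frame);
    cvReleaseImage(&imgYellowThresh);
    if (cvWaitKey(10) == 27) break;
}
cvReleaseImage(&imgScribble);
cvReleaseCapture(&capture);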
I have been trying for hours to run an xcode project with openCV. I have built the source, imported it into the project and included
#ifdef __cplusplus
#import <opencv2/opencv.hpp>
#endif
in the .pch file.
I followed the instructions from http://docs.opencv.org/trunk/doc/tutorials/introduction/ios_install/ios_install.html
Still I am getting many Apple Mach-O linker errors when I compile.
Undefined symbols for architecture i386:
"std::__1::__vector_base_common<true>::__throw_length_error() const", referenced from:
Please help me I am really lost..
UPDATE:
Errors all fixed and now I am trying to detect circles..
Mat src, src_gray;
cvtColor( image, src_gray, CV_BGR2GRAY );
vector<Vec3f> circles;
/// Apply the Hough Transform to find the circles
HoughCircles( src_gray, circles, CV_HOUGH_GRADIENT, 1, image.rows/8, 200, 100, 0, 0 );
/// Draw the circles detected
for( size_t i = 0; i < circles.size(); i++ )
{
    Point center(cvRound(circles[i][0]), cvRound(circles[i][1]));
    int radius = cvRound(circles[i][2]);
    // circle center
    circle( src, center, 3, Scalar(0,255,0), -1, 8, 0 );
    // circle outline
    circle( src, center, radius, Scalar(0,0,255), 3, 8, 0 );
}
I am using the code above; however, no circles are being drawn on the image. Is there something obvious that I am doing wrong?
Try the solution in my answer to this question...
How to resolve iOS Link errors with OpenCV
Also on github I have a couple of simple working samples - with recently built openCV framework.
NB - OpenCVSquares is simpler than OpenCVSquaresSL. The latter was adapted for Snow Leopard backwards compatibility - it contains two builds of the openCV framework and 3 targets, so you are better off using the simpler OpenCVSquares if it will run on your system.
To adapt OpenCVSquares to detect circles, I suggest that you start with the Hough Circles C++ sample from the openCV distro, and use it to adapt/replace CVSquares.cpp and CVSquares.h with, say, CVCircles.cpp and CVCircles.h
The principles are exactly the same:
remove UI code from the C++; the UI is provided on the Obj-C side
transform the main() function into a static member function for the class declared in the header file. This should mirror in form an Objective-C message to the wrapper (which translates the Obj-C method to a C++ function call); a minimal header sketch follows below.
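As a rough sketch only (the names simply mirror the CVSquares pattern described above), the resulting header might look like:
#ifdef __cplusplus
#include <opencv2/opencv.hpp>

class CVCircles
{
public:
    // static member function replacing the sample's main()
    static cv::Mat detectedCirclesInImage(cv::Mat img);
};
#endif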
From the objective-C side, you are passing a UIImage to the wrapper object, which:
converts the UIImage to a cv::Mat image
passes the Mat to a c++ class for processing
converts the result from Mat back to UIImage
returns the processed UIImage back to the objective-C calling object
update
The adapted houghcircles.cpp should look something like this at its most basic (I've replaced the CVSquares class with a CVCircles class):
cv::Mat CVCircles::detectedCirclesInImage (cv::Mat img)
{
    //expects a grayscale image on input
    //returns a colour image on output
    Mat cimg;
    medianBlur(img, img, 5);
    cvtColor(img, cimg, CV_GRAY2RGB);

    vector<Vec3f> circles;
    HoughCircles(img, circles, CV_HOUGH_GRADIENT, 1, 10,
                 100, 30, 1, 60 // change the last two parameters
                                // (min_radius & max_radius) to detect larger circles
                 );
    for( size_t i = 0; i < circles.size(); i++ )
    {
        Vec3i c = circles[i];
        circle( cimg, Point(c[0], c[1]), c[2], Scalar(255,0,0), 3, CV_AA);
        circle( cimg, Point(c[0], c[1]), 2, Scalar(0,255,0), 3, CV_AA);
    }
    return cimg;
}
Note that the input parameters are reduced to one - the input image - for simplicity. Shortly I will post a sample on github which will include some parameters tied to slider controls in the iOS UI, but you should get this version working first.
As the function signature has changed you should follow it up the chain...
Alter the houghcircles.h class definition:
static cv::Mat detectedCirclesInImage (const cv::Mat image);
Modify the CVWrapper class to accept a similarly-structured method which calls detectedCirclesInImage:
+ (UIImage*) detectedCirclesInImage:(UIImage*) image
{
    UIImage* result = nil;
    cv::Mat matImage = [image CVGrayscaleMat];
    matImage = CVCircles::detectedCirclesInImage (matImage);
    result = [UIImage imageWithCVMat:matImage];
    return result;
}
Note that we are converting the input UIImage to grayscale, as the houghcircles function expects a grayscale image on input. Take care to pull the latest version of my github project; I found an error in the CVGrayscaleMat category which is now fixed. The output image is colour (colour applied to the grayscale input image to pick out the found circles).
If you want your input and output images in colour, you just need to make a grayscale conversion of your input image for sending to HoughCircles() - e.g. cvtColor(input_image, gray_image, CV_RGB2GRAY); - and apply your found circles to the colour input image (which becomes your return image), as in the sketch below.
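For example, a colour-in/colour-out variant might look like this (a sketch; compared to the version above, only the conversion direction and the drawing target change):
cv::Mat CVCircles::detectedCirclesInImage(cv::Mat img)
{
    // make a grayscale copy for detection only
    cv::Mat gray;
    cv::cvtColor(img, gray, CV_RGB2GRAY);
    cv::medianBlur(gray, gray, 5);

    std::vector<cv::Vec3f> circles;
    cv::HoughCircles(gray, circles, CV_HOUGH_GRADIENT, 1, 10, 100, 30, 1, 60);

    // draw the found circles on the original colour image
    for( size_t i = 0; i < circles.size(); i++ )
    {
        cv::Vec3i c = circles[i];
        cv::circle(img, cv::Point(c[0], c[1]), c[2], cv::Scalar(255,0,0), 3, CV_AA);
        cv::circle(img, cv::Point(c[0], c[1]), 2, cv::Scalar(0,255,0), 3, CV_AA);
    }
    return img;
}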
Finally in your CVViewController, change your messages to CVWrapper to conform to this new signature:
UIImage* image = [CVWrapper detectedCirclesInImage:self.image];
If you follow all of these details your project will produce circle-detected results.
update 2
OpenCVCircles now on Github
With sliders to adjust HoughCircles() parameters
I have a pointer to image:
IplImage *img;
which has been converted to Mat
Mat mt(img);
Then, the Mat is sent to a function that takes a reference to a Mat as input: void f(Mat &m);
f(mt);
Now I want to copy back the Mat data to the original image.
Do you have any suggestion?
Best
Ali
Your answer can be found in the documentation here: http://opencv.willowgarage.com/documentation/cpp/c++_cheatsheet.html
Edit:
The first half of the first code area indeed talks about the copy constructor which you already have.
The second half of the first code area answers your question. Reproduced below for clarity.
//Convert to IplImage or CvMat, no data copying
IplImage ipl_img = img;
CvMat cvmat = img; // convert cv::Mat -> CvMat
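The point of the no-copy conversion is that the IplImage header shares the Mat's pixel data, so anything a C function does through the header is visible in the Mat afterwards. A small sketch (cvSmooth is just a stand-in for whatever C API call you need):
cv::Mat img = cv::imread("image.bmp");
IplImage ipl_img = img; // header only, no data copied
cvSmooth(&ipl_img, &ipl_img, CV_GAUSSIAN, 5, 5); // edits img's pixels in place
cv::imshow("result", img); // img reflects the smoothing
cv::waitKey(0);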
For the following case:
double algorithm(IplImage* imgin)
{
//blabla
return erg;
}
I use the following way to call the function:
cv::Mat image = cv::imread("image.bmp");
double erg = algorithm(&image.operator IplImage());
I have made some tests, and from the look of it the image object keeps managing the memory; the operator IplImage() only constructs a header for the IplImage. Maybe this could be useful?
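If taking the address of a temporary makes you uneasy, a slightly safer variant is to store the header in a named variable first (same assumption: the Mat keeps owning the memory):
cv::Mat image = cv::imread("image.bmp");
IplImage header = image; // header shares image's data
double erg = algorithm(&header); // algorithm as declared above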
You can use this form:
Your Code:
IplImage *img;
Mat mt(img);
f(mt);
Now copy back the Mat data to the original image.
img->imageData = (char *) mt.data; // img now points at mt's buffer, so mt must stay alive
You can also copy the data instead of pointer:
memcpy(img->imageData, mt.data, mt.total()*mt.elemSize());
mt.total()*mt.elemSize() is the size in bytes that you should use to copy all of the data from mt to img.
Hope I helped