I have extracted some feature points from an image using the following code:
vector<Point2f> cornersFrame1;
goodFeaturesToTrack( frame1, cornersFrame1, maxCorners, qualityLevel, minDistance, Mat(), blockSize, useHarrisDetector, k );
After that I want to read the values present at these feature points, so I am using the following code:
for(int i=0; i<cornersFrame1.size(); i++)
{
float frame1 = calculatedU.at<float>( cornersFrame1[i].x, cornersFrame1[i].y );
}
Then I get a segmentation fault.
But if I use the following code in the for loop, then it works:
float frame1 = calculatedU.at<float>( cornersFrame1[i].y, cornersFrame1[i].x );
I am confused because I thought that "Point2f" stores pixel information as (row, col). Doesn't it?
No, it does not. All point types in OpenCV are ordinary points of the form (x, y). In image coordinates this means that 'x' is the column and 'y' is the row. The at<>() accessor, on the other hand, takes (row, column) as its input. This is why you had to provide (y, x) instead of (x, y).
To prevent future confusion, one way of using at<>() is to pass the point directly:
float frame1 = calculatedU.at<float>( cornersFrame1[i] );
This way you don't need to worry about whether to provide (x, y) or (y, x).
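To illustrate, here is a minimal sketch of the loop from the question rewritten to pass the point directly (assuming calculatedU is a single-channel CV_32F matrix covering the same pixel grid as frame1):
for (size_t i = 0; i < cornersFrame1.size(); i++)
{
    // Point2f is (x, y); at<>(Point) does the (row, col) lookup for you,
    // converting the float coordinates to integer indices
    float value = calculatedU.at<float>(cornersFrame1[i]);
    // ... use value ...
}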
I'm trying to find an object in a larger picture with the findContours/matchShapes functions (the object can vary, so it's not possible to search by color or anything similar; feature detectors like SIFT also don't work because the object could be symmetric).
I have written the following code:
Mat scene = imread...
Mat Template = imread...
Mat imagegray1, imagegray2, imageresult1, imageresult2;
int thresh=80;
double ans=0, result=0;
// Preprocess pictures
cvtColor(scene, imagegray1,CV_BGR2GRAY);
cvtColor(Template,imagegray2,CV_BGR2GRAY);
GaussianBlur(imagegray1,imagegray1, Size(5,5),2);
GaussianBlur(imagegray2,imagegray2, Size(5,5),2);
Canny(imagegray1, imageresult1,thresh, thresh*2);
Canny(imagegray2, imageresult2,thresh, thresh*2);
vector<vector <Point> > contours1;
vector<vector <Point> > contours2;
vector<Vec4i>hierarchy1, hierarchy2;
// Template
findContours(imageresult2,contours2,hierarchy2,CV_RETR_EXTERNAL,CV_CHAIN_APPROX_SIMPLE,cvPoint(0,0));
// Scene
findContours(imageresult1,contours1,hierarchy1,CV_RETR_EXTERNAL,CV_CHAIN_APPROX_SIMPLE,cvPoint(0,0));
imshow("template", Template);
double helper = INT_MAX;
int idx_i = 0, idx_j = 0;
// Match all contours with each other
for(int i = 0; i < contours1.size(); i++)
{
    for(int j = 0; j < contours2.size(); j++)
    {
        ans = matchShapes(contours1[i], contours2[j], CV_CONTOURS_MATCH_I1, 0);
        // find the best matching contour
        if(ans < helper)
        {
            idx_i = i;
            helper = ans;
        }
    }
}
// draw the best contour
drawContours(scene, contours1, idx_i,
Scalar(255,255,0),3,8,hierarchy1,0,Point());
When I use a scene that contains only the template, I get a good matching result:
But when there are more objects in the picture, I have trouble detecting the object:
I hope someone can tell me what's wrong with the code I'm using. Thanks.
You have a huge number of contours in the second image (almost one per letter).
Since matchShapes compares scale-invariant Hu moments (http://docs.opencv.org/3.1.0/d3/dc0/group__imgproc__shape.html#gab001db45c1f1af6cbdbe64df04c4e944), even very small contours may fit the shape you are looking for.
Furthermore, the original shape is not distinguished properly, as can be seen when excluding all contours with an area smaller than 50:
if(contourArea(contours1[i]) > 50)
drawContours(scene, contours1, i, Scalar(255, 255, 0), 1);
In other words, there is no problem with your code; the contour simply cannot be detected very well. I would suggest having a look at approxPolyDP and convexHull and trying to close the contour that way, or improving the use of Canny in some way.
Then you could use a priori knowledge to restrict the size (and maybe rotation?) of the contour you are looking for.
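To make that concrete, here is a minimal sketch of the matching loop from the question with such an area filter added; it reuses contours1, contours2 and scene from the code above, and the threshold of 50 is just the example value from before and would need tuning:
double helper = INT_MAX;
int idx_i = -1;
for (int i = 0; i < (int)contours1.size(); i++)
{
    // skip tiny contours (individual letters, edge noise) before shape matching
    if (contourArea(contours1[i]) < 50)
        continue;
    for (int j = 0; j < (int)contours2.size(); j++)
    {
        double ans = matchShapes(contours1[i], contours2[j], CV_CONTOURS_MATCH_I1, 0);
        if (ans < helper)
        {
            helper = ans;
            idx_i = i;
        }
    }
}
if (idx_i >= 0) // draw the best surviving contour
    drawContours(scene, contours1, idx_i, Scalar(255, 255, 0), 3);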
I am trying to get the centers of circles using the Hough circle algorithm from
https://github.com/Itseez/opencv/blob/master/samples/cpp/houghcircles.cpp
but I need more accurate coordinates.
When I get those coordinates like this:
for( size_t i = 0; i < circles.size(); i++ )
{
Vec3i c = circles[i];
cout<<c[0]<<" "<<c[1]<<endl;
}
it prints just the integer part.
Is there any possibility of getting the centers more precisely (4 decimals or more)?
You are explicitly converting the coordinates to integers by assigning them to an integer vector (Vec3i). If you print them like this, you will print the values as you get them from OpenCV:
cout << circles[i][0] << " " << circles[i][1] << endl;
However, these results might not be as accurate as you desire. In that case, you are out of luck with your current approach as OpenCV does not provide more accurate results.
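For reference, a minimal sketch of the loop that keeps the float coordinates (assuming circles was filled by HoughCircles as a vector<Vec3f>, as in the linked sample):
for (size_t i = 0; i < circles.size(); i++)
{
    Vec3f c = circles[i]; // keep the float values instead of truncating to Vec3i
    cout << fixed << setprecision(4) << c[0] << " " << c[1] << endl; // setprecision needs <iomanip>
}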
Hi, I'm trying to write some camera calibration code, and I'm having a hard time using MatVectors in JavaCV, which should be the equivalent of std::vector in C++.
This is how I generate my image and object points:
Mat objectPoints = new Mat(allImagePoints.rows(),1,opencv_core.CV_32FC3);
float x = 0;
float y = 0;
for (int h=0;h<patternHeight;h++) {
    y = h*rectangleSize;
    for (int w=0;w<patternWidth;w++) {
        x = w*rectangleSize;
        objectPoints.getFloatBuffer().put(3*(patternWidth*h+w), x);
        objectPoints.getFloatBuffer().put(3*(patternWidth*h+w)+1, y);
        objectPoints.getFloatBuffer().put(3*(patternWidth*h+w)+2, 0);
    }
}
MatVector allObjectPointsVec = new MatVector(allImagePoints.cols());
MatVector allImagePointsVec = new MatVector(allImagePoints.cols());
for (int i=0;i<allImagePoints.cols();i++) {
    allObjectPointsVec.put(i,objectPoints);
    allImagePointsVec.put(i,allImagePoints.col(i));
}
My image points are given in the Mat allImagePoints, and as you can see, I create corresponding vectors allObjectPointsVec and allImagePointsVec accordingly. When I try to do a camera calibration with these points, I get the following error:
OpenCV Error: Assertion failed (ni > 0 && ni == ni1) in cv::collectCalibrationData, file ..\..\..\..\opencv\modules\calib3d\src\calibration.cpp, line 3193
java.lang.reflect.InvocationTargetException
...
which suggests that the lengths of the image and object points don't coincide, but I'm pretty sure I got this right. Printing the MatVector objects gives
org.bytedeco.javacpp.opencv_core$MatVector[address=0x2237b8a0,position=0,limit=1,capacity=1,deallocator=org.bytedeco.javacpp.Pointer$NativeDeallocator#4d353a7a]
org.bytedeco.javacpp.opencv_core$MatVector[address=0x2237acd0,position=0,limit=1,capacity=1,deallocator=org.bytedeco.javacpp.Pointer$NativeDeallocator#772f4d0]
which also confuses me, as I would have expected the capacity to correspond to the length (the number of matrices in the vector). If I print the size field, I get the expected value. If I access a random element in the vector (e.g. allObjectPointsVec.get(i)) and print it to a string, I receive the following:
AbstractArray[width=1,height=77,depth=32,channels=3] (for object points)
AbstractArray[width=1,height=77,depth=32,channels=2] (for image points)
which is what I would expect... Any ideas? To me this seems like a bug, also because I don't understand what the capacity represents if not the vector length...
I have tried the cvMatchTemplate function to compare two images (a template and an image).
IplImage img = cvLoadImage("thumbnail.jpg");
IplImage template = cvLoadImage("temp.jpg");
IplImage result = cvCreateImage(cvSize(img.width()-template.width()+1, img.height()-template.height()+1), IPL_DEPTH_32F, 1);
int method = CV_TM_SQDIFF;
cvMatchTemplate(img,template,result,method);
cvShowImage("res",result);
double[] min_val = new double[2];
double[] max_val = new double[2];
//Where are located our max and min correlation points
CvPoint minLoc = new CvPoint();
CvPoint maxLoc = new CvPoint();
cvMinMaxLoc(result, min_val, max_val, minLoc, maxLoc, null); // the last null is for the optional mask
CvPoint point = new CvPoint();
point.x(minLoc.x()+template.width());
point.y(minLoc.y()+template.height());
cvRectangle(img, minLoc, point, CvScalar.WHITE, 2, 8, 0); //Draw the rectangle result in original img.
cvShowImage("Image", img);
cvWaitKey(0);
//Release
cvReleaseImage(img);
cvReleaseImage(template);
cvReleaseImage(result);
I got the desired result, but could not find a way of comparing two or more images with a template.
I converted the resulting image to a matrix using asCvMat and got the matrix of the match probability at every pixel of the original image.
I came across the determinant function in OpenCV for comparing the two matrices to understand which of the images is a closer match to the template, but could not find a corresponding function in JavaCV.
Is there any way I could compare the results and determine which image is a closer match? I did come across ObjectFinder but could not find proper documentation on how to use it.
Please point out any links or examples that may help me solve my problem.
You can compare the matching results of different images by comparing their min_val scores (for the SQDIFF methods, the lower the value, the better the match).
I would even change the method to CV_TM_SQDIFF_NORMED; then you can set a threshold for min_val that lies somewhere between 0 and 1.
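As a rough sketch of that idea (written with the C++ API for brevity; the same functions exist in JavaCV), you could run the template against each candidate image and keep the one with the smallest normalized squared-difference score. The file names below are placeholders:
#include <opencv2/opencv.hpp>
#include <iostream>
#include <limits>
#include <string>
#include <vector>

int main()
{
    cv::Mat templ = cv::imread("temp.jpg", cv::IMREAD_GRAYSCALE);
    // placeholder candidate images to compare against the template
    std::vector<std::string> candidates = {"thumbnail1.jpg", "thumbnail2.jpg"};
    double bestScore = std::numeric_limits<double>::max();
    int bestIdx = -1;
    for (int i = 0; i < (int)candidates.size(); i++)
    {
        cv::Mat img = cv::imread(candidates[i], cv::IMREAD_GRAYSCALE);
        cv::Mat result;
        cv::matchTemplate(img, templ, result, cv::TM_SQDIFF_NORMED);
        double minVal, maxVal;
        cv::minMaxLoc(result, &minVal, &maxVal);
        if (minVal < bestScore) // lower squared difference = better match
        {
            bestScore = minVal;
            bestIdx = i;
        }
    }
    std::cout << "best match: " << candidates[bestIdx] << " (score " << bestScore << ")" << std::endl;
    return 0;
}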
Here's my problem: I manually extracted keypoint features with SURF on multiple images, but I also already know which pairs of points are going to match. The thing is, I'm trying to create my matching pairs, but I don't understand how. I tried looking at the code, but it's a mess.
Right now, I know that the size of features.descriptors, a matrix, is the same as the number of keypoints (the other dimension is 1). In the code, to detect matching pairs, only the descriptors are used, so it compares the rows (or columns, I'm not sure) of two descriptor matrices and determines if there's anything in common.
But in my case, I already know that there's a match between keypoint i from image 1 and keypoint j from image 2. How do I describe that as a MatchesInfo value, particularly its element matches of type std::vector<cv::DMatch>?
EDIT: So, for this, I don't need to use any matcher or anything like that. I already know which pairs go together!
If I understood your question correctly, I assume you want the keypoint matches in a std::vector<cv::DMatch> for the purpose of drawing them with OpenCV's cv::drawMatches or using them with some similar OpenCV function. Since I was also doing matching "by hand" recently, here's my code that builds arbitrary matches originally contained in a std::vector<std::pair<int, int> > aMatches and displays them in a window:
const cv::Mat& pic1 = img_1_var;
const cv::Mat& pic2 = img_2_var;
const std::vector <cv::KeyPoint> &feats1 = img_1_feats;
const std::vector <cv::KeyPoint> &feats2 = img_2_feats;
// you of course can work directly with original objects
// but for drawing you only need const references to
// images & their corresponding extracted feats
std::vector <std::pair <int, int> > aMatches;
// fill aMatches manually - one entry is a pair consisting of
// (index_in_img_1_feats, index_in_img_2_feats)
// the next code draws the matches:
std::vector <cv::DMatch> matches;
matches.reserve((int)aMatches.size());
for (int i=0; i < (int)aMatches.size(); ++i)
matches.push_back(cv::DMatch(aMatches[i].first, aMatches[i].second,
std::numeric_limits<float>::max()));
cv::Mat output;
cv::drawMatches(pic1, feats1, pic2, feats2, matches, output);
cv::namedWindow("Match", 0);
cv::setWindowProperty("Match", CV_WND_PROP_FULLSCREEN, CV_WINDOW_FULLSCREEN);
cv::imshow("Match", output);
cv::waitKey();
cv::destroyWindow("Match");
Alternatively, if you need fuller information about the matches for purposes more complicated than drawing, you might also want to set the distance between matched descriptors to a proper value. For example, if you want to calculate distances using the L2 distance, you should replace the following lines:
for (int i=0; i < (int)aMatches.size(); ++i)
matches.push_back(cv::DMatch(aMatches[i].first, aMatches[i].second,
std::numeric_limits<float>::max()));
with this (note that for this, a reference to the feature descriptor vectors is also needed):
cv::L2<float> cmp;
const std::vector <std::vector <float> > &desc1 = img_1_feats_descriptors;
const std::vector <std::vector <float> > &desc2 = img_2_feats_descriptors;
for (int i=0; i < (int)aMatches.size(); ++i){
    const std::vector<float> &firstFeat = desc1[aMatches[i].first];
    const std::vector<float> &secondFeat = desc2[aMatches[i].second];
    // the L2 functor expects raw pointers to the descriptor data and the descriptor length
    float distance = cmp(&firstFeat[0], &secondFeat[0], (int)firstFeat.size());
    matches.push_back(cv::DMatch(aMatches[i].first, aMatches[i].second, distance));
}
Note that in the last snippet, descX[i] is a descriptor for featsX[i], each element of the inner vector being one component of the descriptor vector. Also, note that all descriptor vectors should have the same dimensionality.
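If your descriptors are instead stored the way OpenCV's extractors usually return them, as a CV_32F cv::Mat with one row per keypoint, a possible alternative for the distance computation would be the following sketch (descriptors1 and descriptors2 are assumed names for those matrices):
for (int i = 0; i < (int)aMatches.size(); ++i) {
    // L2 distance between the descriptor rows of the matched keypoints
    float distance = (float)cv::norm(descriptors1.row(aMatches[i].first),
                                     descriptors2.row(aMatches[i].second),
                                     cv::NORM_L2);
    matches.push_back(cv::DMatch(aMatches[i].first, aMatches[i].second, distance));
}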