Manually creating pairwise matches in OpenCV from feature keypoints

Here's my problem. I manually extracted keypoint features with SURF on multiple images, but I also already know which pairs of points are going to match. The thing is, I'm trying to create my matching pairs myself, but I don't understand how. I tried looking at the code, but it's a mess.
Right now, I know that the size of features.descriptors, a matrix, is the same as the number of keypoints (the other dimension is 1). In the code, only the descriptors are used to detect matching pairs: rows (or columns, I'm not sure) of two descriptor matrices are compared to determine whether there's anything in common.
But in my case, I already know that there's a match between keypoint i from image 1 and keypoint j from image 2. How do I express that as a MatchesInfo value, particularly the element matches of type std::vector<cv::DMatch>?
EDIT: For this, I don't need to use any matcher or anything like that. I already know which pairs go together!

If I understood your question correctly, I assume that you want the keypoint matches in a std::vector<cv::DMatch> for the purpose of drawing them with cv::drawMatches, or for use with some similar OpenCV function. Since I was also doing matching "by hand" recently, here's my code, which draws arbitrary matches contained originally in a std::vector<std::pair<int, int> > aMatches and displays them in a window:
const cv::Mat &pic1 = img_1_var;
const cv::Mat &pic2 = img_2_var;
const std::vector<cv::KeyPoint> &feats1 = img_1_feats;
const std::vector<cv::KeyPoint> &feats2 = img_2_feats;
// you can of course work directly with the original objects,
// but for drawing you only need const references to the
// images & their corresponding extracted feats
std::vector<std::pair<int, int> > aMatches;
// fill aMatches manually - one entry is a pair consisting of
// (index_in_img_1_feats, index_in_img_2_feats)
// the next code draws the matches:
std::vector<cv::DMatch> matches;
matches.reserve(aMatches.size());
for (int i = 0; i < (int)aMatches.size(); ++i)
    matches.push_back(cv::DMatch(aMatches[i].first, aMatches[i].second,
                                 std::numeric_limits<float>::max()));
cv::Mat output;
cv::drawMatches(pic1, feats1, pic2, feats2, matches, output);
cv::namedWindow("Match", 0);
cv::setWindowProperty("Match", CV_WND_PROP_FULLSCREEN, CV_WINDOW_FULLSCREEN);
cv::imshow("Match", output);
cv::waitKey();
cv::destroyWindow("Match");
Alternatively, if you need fuller information about the matches for purposes more complicated than drawing, you might also want to set the distance of each match to a proper value. E.g. if you want to compute distances using the L2 distance, you should replace the following loop:
for (int i = 0; i < (int)aMatches.size(); ++i)
    matches.push_back(cv::DMatch(aMatches[i].first, aMatches[i].second,
                                 std::numeric_limits<float>::max()));
with this (note that for this you also need references to the feature descriptor vectors):
cv::L2<float> cmp;
const std::vector<std::vector<float> > &desc1 = img_1_feats_descriptors;
const std::vector<std::vector<float> > &desc2 = img_2_feats_descriptors;
for (int i = 0; i < (int)aMatches.size(); ++i) {
    // take pointers to the raw descriptor data and compare them with the L2 functor
    const float *firstFeat  = desc1[aMatches[i].first].data();
    const float *secondFeat = desc2[aMatches[i].second].data();
    float distance = cmp(firstFeat, secondFeat,
                         (int)desc1[aMatches[i].first].size());
    matches.push_back(cv::DMatch(aMatches[i].first, aMatches[i].second,
                                 distance));
}
Note that in the last snippet, descX[i] is a descriptor for featsX[i], each element of the inner vector being one component of the descriptor vector. Also, note that all descriptor vectors should have the same dimensionality.
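If your descriptors are stored in a cv::Mat with one row per keypoint (as SURF produces them), the same idea can be expressed with cv::norm instead of the L2 functor. Here is a minimal sketch, assuming descriptor matrices named img_1_descriptors and img_2_descriptors (those names are mine):
const cv::Mat &descMat1 = img_1_descriptors; // CV_32F, one row per keypoint
const cv::Mat &descMat2 = img_2_descriptors;
for (int i = 0; i < (int)aMatches.size(); ++i) {
    // L2 distance between the two descriptor rows of this manual match
    float distance = (float)cv::norm(descMat1.row(aMatches[i].first),
                                     descMat2.row(aMatches[i].second),
                                     cv::NORM_L2);
    matches.push_back(cv::DMatch(aMatches[i].first, aMatches[i].second, distance));
}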

Related

Compute fundamental matrix with 8 point algorithm

I need to write my own implementation of computing the fundamental matrix between two images, based on corresponding image coordinates and without using OpenCV.
Is it possible to describe this algorithm in its simplest form, in accordance with the following function signature? I'm looking for a simple and straightforward formula.
FMatrixEightPoint()
Input arguments:
points1(x,y) - pixel coordinates in the first image,
               corresponding to points2 in the second image
points2(x,y) - pixel coordinates in the second image,
               corresponding to points1 in the first image
Output:
F - the fundamental matrix between the first image and the second image
Yes, it is possible to describe the algorithm in the mentioned form.
If you were using OpenCV, you could just call findFundamentalMat, which also provides the 8-point method for computing the fundamental matrix.
The following example (in C++) is taken from the OpenCV documentation, but adapted to use the 8-point algorithm instead of RANSAC:
// Example. Estimation of fundamental matrix using the 8-point algorithm
int point_count = 8; // must be >= 8
vector<Point2f> points1(point_count);
vector<Point2f> points2(point_count);
// initialize the points here ...
for( int i = 0; i < point_count; i++ )
{
    points1[i] = ...;
    points2[i] = ...;
}
Mat fundamental_matrix =
    findFundamentalMat(points1, points2, CV_FM_8POINT);
If you want to write your own function, it would look like this (pseudocode, not valid code):
Matrix findFundamentalMat(Array points1, Array points2)
{
    Matrix fundamentalMatrix;
    // compute the fundamental matrix based on the input points1 and points2,
    // or call OpenCV's findFundamentalMat
    return fundamentalMatrix;
}
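If you do want to fill in that body yourself, a minimal sketch of the (unnormalized) 8-point algorithm, using OpenCV only for the linear algebra, might look like the following. The function name and the use of cv::SVD are my own choices, and a serious implementation should additionally normalize the points (Hartley-style) before building the linear system:
#include <opencv2/core.hpp>
#include <vector>

cv::Mat fundamentalEightPoint(const std::vector<cv::Point2f> &pts1,
                              const std::vector<cv::Point2f> &pts2)
{
    CV_Assert(pts1.size() == pts2.size() && pts1.size() >= 8);

    // Each correspondence contributes one row of A, derived from the
    // epipolar constraint x2^T * F * x1 = 0.
    cv::Mat A((int)pts1.size(), 9, CV_64F);
    for (int i = 0; i < (int)pts1.size(); ++i)
    {
        double x1 = pts1[i].x, y1 = pts1[i].y;
        double x2 = pts2[i].x, y2 = pts2[i].y;
        double row[9] = { x2*x1, x2*y1, x2, y2*x1, y2*y1, y2, x1, y1, 1.0 };
        cv::Mat(1, 9, CV_64F, row).copyTo(A.row(i));
    }

    // Solve A*f = 0: f is the right singular vector belonging to the
    // smallest singular value of A.
    cv::Mat w, u, vt;
    cv::SVD::compute(A, w, u, vt, cv::SVD::FULL_UV);
    cv::Mat F = vt.row(8).reshape(0, 3); // 9-vector -> 3x3 matrix

    // Enforce rank 2 by zeroing the smallest singular value of F.
    cv::SVD::compute(F, w, u, vt);
    cv::Mat W = cv::Mat::zeros(3, 3, CV_64F);
    W.at<double>(0, 0) = w.at<double>(0);
    W.at<double>(1, 1) = w.at<double>(1);
    return u * W * vt;
}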

Efficiently tell if one image is entirely comprised of the pixel values of another in OpenCV

I am trying to find an efficient way to see if one image is a subset of another (meaning that each unique pixel in one image is also found in the other). The repetition or ordering of the pixels does not matter.
I am working in Java, so I would like all of my operations to be completed in OpenCV for efficiency's sake.
My first idea was to export a list of unique pixel values, and compare it to the list from the second image.
As there is no built-in function to extract unique pixels, I abandoned this approach.
I also understand that I can find the locations of a particular color with the (inclusive) inRange and findNonZero operations.
Core.inRange(image, color, color, tempMat); // inclusive
Core.findNonZero(tempMat, colorLocations);
Unfortunately, this does not provide an adequate answer, as it would need to be executed per color, and would still require extracting unique pixels.
Essentially, I'm asking if there is a clever way to use the built-in OpenCV functions to see if an image is comprised of the pixels found in another image.
I understand that this will not work for slight color differences. I am working on a limited dataset, and care about the exact pixel values.
To put the question more mathematically: is the set of unique pixel values in one image a subset of the set of unique pixel values in the other?
Because the only thing you are interested in is the pixel values, I would suggest the following:
1. Compute the histogram of image 1 using hist1 = calcHist()
2. Compute the histogram of image 2 using hist2 = calcHist()
3. Calculate the difference vector diff = hist1 - hist2
4. Check whether each bin of the histogram of the sub-image is less than or equal to the corresponding bin in the histogram of the bigger image
Thanks to Miki for the fix.
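For a single-channel 8-bit image this idea boils down to something like the following sketch (in C++; the function name is mine). Since the question only cares about which values occur, not how often, the sketch checks bin occupancy rather than bin counts:
#include <opencv2/imgproc.hpp>

// Returns true if every pixel value occurring in `sub` also occurs in `super`.
bool isPixelValueSubset(const cv::Mat &sub, const cv::Mat &super)
{
    int histSize = 256;                  // one bin per possible 8-bit value
    float range[] = { 0.0f, 256.0f };
    const float *ranges[] = { range };
    int channels[] = { 0 };

    cv::Mat histSub, histSuper;
    cv::calcHist(&sub,   1, channels, cv::Mat(), histSub,   1, &histSize, ranges);
    cv::calcHist(&super, 1, channels, cv::Mat(), histSuper, 1, &histSize, ranges);

    for (int b = 0; b < histSize; ++b)
    {
        // a value present in `sub` but absent from `super` breaks the subset property
        if (histSub.at<float>(b) > 0 && histSuper.at<float>(b) == 0)
            return false;
    }
    return true;
}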
I will keep Amitay's as the accepted answer, as he absolutely led me down the correct path. I also wanted to share my exact answer for anyone who finds this in the future.
As I stated in my question, I was looking for an efficient way to see if the RGB values of one image were a subset of the RGB values of another image.
I made a function to the following specification:
The Java code is as follows:
private boolean isSubset(Mat subset, Mat subMask, Mat superset) {
    // Get the unique set of pixels from both images
    subset = getUniquePixels(subset, subMask);
    superset = getUniquePixels(superset, null);

    // See if the superset pixels encapsulate the subset pixels:
    // OR the unique pixels together
    Mat subOrSuper = new Mat();
    Core.bitwise_or(subset, superset, subOrSuper);

    // See if the ORed matrix is equal to the superset
    Mat notEqualMat = new Mat();
    Core.compare(superset, subOrSuper, notEqualMat, Core.CMP_NE);
    return Core.countNonZero(notEqualMat) == 0;
}
subset and superset are assumed to be CV_8UC3 matrices, while subMask is assumed to be CV_8UC1.
private Mat getUniquePixels(Mat img, Mat mask) {
    if (mask == null) {
        mask = new Mat();
    }

    // Pack each pixel's channels into a single value:
    // int bgrValue = (b << 16) + (g << 8) + r;
    img.convertTo(img, CvType.CV_32FC3);
    Vector<Mat> splitImg = new Vector<>();
    Core.split(img, splitImg);

    Mat flatImg = Mat.zeros(img.rows(), img.cols(), CvType.CV_32FC1);
    Mat multiplier;
    for (int i = 0; i < splitImg.size(); i++) {
        multiplier = Mat.ones(img.rows(), img.cols(), CvType.CV_32FC1);
        // shift = 2^(8*i), i.e. 1, 256, 65536 for the three channels
        int shift = (1 << (8 * i));
        // Set the multiplier matrix equal to shift
        Core.multiply(multiplier, new Scalar(shift), multiplier);
        // n * 2^(8*i) moves this channel's values into their own byte of
        // the same 32-bit integer.
        Core.multiply(multiplier, splitImg.get(i), splitImg.get(i));
        // Add the shifted channel components together.
        Core.add(flatImg, splitImg.get(i), flatImg);
    }

    // Create a histogram of the packed pixel values.
    List<Mat> images = new ArrayList<>();
    images.add(flatImg);
    MatOfInt channels = new MatOfInt(0);
    Mat hist = new Mat();
    // 16777216 == 256*256*256
    MatOfInt histSize = new MatOfInt(16777216);
    MatOfFloat ranges = new MatOfFloat(0f, 16777216f);
    Imgproc.calcHist(images, channels, mask, hist, histSize, ranges);

    // Every bin with at least one hit marks a pixel value that occurs in the image.
    Mat uniquePixels = new Mat();
    Core.inRange(hist, new Scalar(1), new Scalar(Float.MAX_VALUE), uniquePixels);
    return uniquePixels;
}
Please feel free to ask questions, or point out problems!

Matching problems when using OpenCV's matchShapes function

I'm trying to find an object in a larger picture with the findContours/matchShapes functions (the object can vary, so it's not possible to go by color or something similar; feature detectors like SIFT also don't work because the object could be symmetric).
I have written the following code:
Mat scene = imread...
Mat Template = imread...
Mat imagegray1, imagegray2, imageresult1, imageresult2;
int thresh = 80;
double ans = 0, result = 0;

// Preprocess pictures
cvtColor(scene, imagegray1, CV_BGR2GRAY);
cvtColor(Template, imagegray2, CV_BGR2GRAY);
GaussianBlur(imagegray1, imagegray1, Size(5,5), 2);
GaussianBlur(imagegray2, imagegray2, Size(5,5), 2);
Canny(imagegray1, imageresult1, thresh, thresh*2);
Canny(imagegray2, imageresult2, thresh, thresh*2);

vector<vector<Point> > contours1;
vector<vector<Point> > contours2;
vector<Vec4i> hierarchy1, hierarchy2;
// Template
findContours(imageresult2, contours2, hierarchy2, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE, cvPoint(0,0));
// Scene
findContours(imageresult1, contours1, hierarchy1, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE, cvPoint(0,0));
imshow("template", Template);

double helper = INT_MAX;
int idx_i = 0, idx_j = 0;
// Match all contours with each other
for (int i = 0; i < contours1.size(); i++)
{
    for (int j = 0; j < contours2.size(); j++)
    {
        ans = matchShapes(contours1[i], contours2[j], CV_CONTOURS_MATCH_I1, 0);
        // keep the best matching contour
        if (ans < helper)
        {
            idx_i = i;
            helper = ans;
        }
    }
}
// draw the best contour
drawContours(scene, contours1, idx_i,
             Scalar(255,255,0), 3, 8, hierarchy1, 0, Point());
When I use a scene that contains only the template, I get a good matching result:
But when there are more objects in the picture, I have trouble detecting the object:
I hope someone can tell me what the problem with my code is. Thanks.
You have a huge number of contours in the second image (almost one per letter).
Since matchShapes compares scale-invariant Hu moments (http://docs.opencv.org/3.1.0/d3/dc0/group__imgproc__shape.html#gab001db45c1f1af6cbdbe64df04c4e944), even very small contours may fit the shape you are looking for.
Furthermore, the original shape is not detected very cleanly, as can be seen when excluding all contours with an area smaller than 50:
if (contourArea(contours1[i]) > 50)
    drawContours(scene, contours1, i, Scalar(255, 255, 0), 1);
In other words, there is no problem with your code; the contour simply cannot be detected very well. I would suggest having a look at approxPolyDP and convexHull and trying to close the contour this way, or improving the use of Canny in some way.
Then you could use a priori knowledge to restrict the size (and maybe rotation?) of the contour you are looking for.
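A minimal sketch of that size restriction applied to the matching loop from the question (it reuses the question's variables; the 50-pixel threshold is just the example value from above and may need tuning for your images):
// Skip tiny contours (letters, noise) before comparing shapes,
// so that they cannot win the matchShapes comparison.
for (int i = 0; i < contours1.size(); i++)
{
    if (contourArea(contours1[i]) < 50)
        continue;
    for (int j = 0; j < contours2.size(); j++)
    {
        ans = matchShapes(contours1[i], contours2[j], CV_CONTOURS_MATCH_I1, 0);
        if (ans < helper)
        {
            idx_i = i;
            helper = ans;
        }
    }
}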

OpenCV matchShapes() output value

How do I use the values that OpenCV's matchShapes outputs? We used the OpenCV matchShapes function to compare two images, or rather their shapes. But now that we have obtained the output, we are confused about how to interpret these values.
The code is
- (bool) someMethod:(UIImage *)image :(UIImage *)temp {
    RNG rng(12345);
    cv::Mat src_base, hsv_base;
    cv::Mat src_test1, hsv_test1;
    src_base = [self cvMatWithImage:image];
    src_test1 = [self cvMatWithImage:temp];

    int thresh = 150;
    double ans = 0, result = 0;
    Mat imageresult1, imageresult2;

    cv::cvtColor(src_base, hsv_base, cv::COLOR_BGR2HSV);
    cv::cvtColor(src_test1, hsv_test1, cv::COLOR_BGR2HSV);

    std::vector<std::vector<cv::Point>> contours1, contours2;
    std::vector<Vec4i> hierarchy1, hierarchy2;
    Canny(hsv_base, imageresult1, thresh, thresh*2);
    Canny(hsv_test1, imageresult2, thresh, thresh*2);

    findContours(imageresult1, contours1, hierarchy1, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, cvPoint(0,0));
    for (int i = 0; i < contours1.size(); i++)
    {
        //cout<<contours1[i]<<endl;
        Scalar color = Scalar(rng.uniform(0, 255), rng.uniform(0, 255), rng.uniform(0, 255));
        drawContours(imageresult1, contours1, i, color, 1, 8, hierarchy1, 0, cv::Point());
    }

    findContours(imageresult2, contours2, hierarchy2, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, cvPoint(0,0));
    for (int i = 0; i < contours2.size(); i++)
    {
        Scalar color = Scalar(rng.uniform(0, 255), rng.uniform(0, 255), rng.uniform(0, 255));
        drawContours(imageresult2, contours2, i, color, 1, 8, hierarchy2, 0, cv::Point());
    }

    for (int i = 0; i < contours1.size(); i++)
    {
        ans = matchShapes(contours1[i], contours2[i], CV_CONTOURS_MATCH_I1, 0);
        cout << " " << ans << endl;
    }
    std::cout << "The answer is " << ans << endl;

    if (ans <= 20) {
        return true;
    }
    return false;
}
The output values are
0.225069
0.234417
0
7.63599
0
7.06392
0.335966
0.211358
0.327552
0.842969
0.761659
0.614039
The image is
See my comment on imoutidi's answer. Here is a visual explanation:
The first column shows the two original images, the second the Canny edges. The third column is an arbitrary selection of detected shapes with the same index in both images. As you can see, it is not even guaranteed that they correspond to the same image parts as a human would see them. What you end up comparing are different triangles in this case, which says little about the overall shape similarity. The two shape arrays are not even of the same size, since there are more structures in the bottom drawing, for example (like small shapes between a thick line). The fourth column shows the last shape in each array; this is the best bet you can make to compare the images. In this example, I get a value of 0.0920794532771 for their similarity.
If I understand your question correctly, you want to know what the return value of matchShapes() stands for.
In your case, given two contours (shapes), the function returns a similarity metric (value). A small value indicates that the two shapes are similar and a big value that they are not.
A good explanation is here: http://docs.opencv.org/3.1.0/d5/d45/tutorial_py_contours_more_functions.html (check the third paragraph).
Also check out the documentation: http://docs.opencv.org/3.1.0/d3/dc0/group__imgproc__shape.html#gaadc90cb16e2362c9bd6e7363e6e4c317
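To make the interpretation concrete, here is a small sketch (the helper name, the choice of the largest contour as the representative shape, and the 0.1 threshold are my own illustrative choices, not part of the answer): instead of comparing contours1[i] with contours2[i] by index, compare one representative shape per image and accept the match when the metric is small.
#include <opencv2/imgproc.hpp>
#include <algorithm>
#include <vector>

// Pick the largest contour of an image as its representative shape.
static const std::vector<cv::Point>&
largestContour(const std::vector<std::vector<cv::Point> > &contours)
{
    CV_Assert(!contours.empty());
    return *std::max_element(contours.begin(), contours.end(),
        [](const std::vector<cv::Point> &a, const std::vector<cv::Point> &b) {
            return cv::contourArea(a) < cv::contourArea(b);
        });
}

bool shapesMatch(const std::vector<std::vector<cv::Point> > &contours1,
                 const std::vector<std::vector<cv::Point> > &contours2)
{
    // cv::CONTOURS_MATCH_I1 is the modern name for CV_CONTOURS_MATCH_I1
    double similarity = cv::matchShapes(largestContour(contours1),
                                        largestContour(contours2),
                                        cv::CONTOURS_MATCH_I1, 0);
    // matchShapes returns ~0 for very similar shapes; 0.1 is an illustrative cut-off.
    return similarity < 0.1;
}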

getting segmentation fault with Point2f

I have extracted some feature points of an image using the following code
vector<Point2f> cornersFrame1;
goodFeaturesToTrack( frame1, cornersFrame1, maxCorners, qualityLevel, minDistance, Mat(), blockSize, useHarrisDetector, k );
After that I want to read the values present at these feature points, so I am using the following code:
for (int i = 0; i < cornersFrame1.size(); i++)
{
    float frame1 = calculatedU.at<float>( cornersFrame1[i].x, cornersFrame1[i].y );
}
Then I get a segmentation fault.
But if I use the following code in the for loop, then it works:
float frame1 = calculatedU.at<float>( cornersFrame1[i].y, cornersFrame1[i].x );
I am confused because I think that Point2f stores pixel information as (row, col). Doesn't it?
No, it is not. All point types in OpenCV are just normal points as you would think of them: (x, y). When it comes to coordinates in an image, this means that 'x' is a column and 'y' is a row. On the other hand, at<> takes (row, column) as input. This is why you had to provide (y, x) instead of (x, y).
Just to prevent future confusion, one of the ways of using at<> is this one:
float frame1 = calculatedU.at<float>( cornersFrame1[i] );
This way you don't need to think about whether you should provide (x, y) or (y, x).
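A tiny self-contained illustration of the convention (the matrix size and point values here are made up for the example):
#include <opencv2/core.hpp>
#include <cstdio>

int main()
{
    cv::Mat m = cv::Mat::zeros(4, 6, CV_32F);  // 4 rows, 6 columns
    cv::Point2f p(5.0f, 3.0f);                 // x = column 5, y = row 3

    m.at<float>(3, 5) = 1.0f;                  // at<>(row, col), i.e. at<>(y, x)
    std::printf("%f\n", m.at<float>(p));       // at<>(Point) handles the x/y order for you

    return 0;
}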
