OpenCV BFMatcher - Ignore False Positives

Can someone describe a good process for ignoring false positives from BFMatcher?
I define an image to find in the scene, use SiftFeatureDetector, SiftDescriptorExtractor, then use a BFMatcher. When searching for the correct marker, I find the matches no problem, but I want to make my code more robust to false positives.
//Detect keypoints using SIFT detector
SiftFeatureDetector detector;
vector<KeyPoint> keypoints1, keypoints2;
detector.detect(im1, keypoints1);
detector.detect(im2, keypoints2);
//Draw keypoints on images
Mat display1, display2;
drawKeypoints(im1, keypoints1, display1, Scalar(0,0,255));
drawKeypoints(im2, keypoints2, display2, Scalar(0,0,255));
//Extract descriptors
SiftDescriptorExtractor extractor;
Mat descriptors1, descriptors2;
extractor.compute( im1, keypoints1, descriptors1 );
extractor.compute( im2, keypoints2, descriptors2 );
BFMatcher matcher(NORM_L1, true);
vector<DMatch> matches;
matcher.match(descriptors1, descriptors2, matches);
I tried to filter out false positives by skipping when there are too few matches:
if (matches.size() < 50) {
//false positive - skip
} else {
//perform actions
}
But this is not at all robust. I think I saw a few articles on people using radius matchers, but I couldn't find a good description of using a radius match with brute force. I checked the documentation: http://docs.opencv.org/modules/features2d/doc/common_interfaces_of_descriptor_matchers.html , but it wasn't clear to me how to decide on a good min_dist/max_dist for this application.
I'm sure this is a pretty simple answer for some of you - your help is greatly appreciated!

You need to filter on the distance of the matches.
Take care that the distance depends on the norm you choose in BFMatcher.
Here is an example from the OpenCV samples:
double min_dist = 100;
for( int i = 0; i < descriptors_object.rows; i++ )
{ double dist = matches[i].distance;
if( dist < min_dist ) min_dist = dist;
}
/** Keep only "good" matches (i.e. whose distance is less than 3*min_dist ) **/
std::vector< DMatch > good_matches;
for( int i = 0; i < descriptors_object.rows; i++ )
{ if( matches[i].distance < 3*min_dist )
{ good_matches.push_back( matches[i]); }
}
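Since you asked about radius matching: below is a minimal sketch of how radiusMatch could be combined with the min_dist computed above, using the descriptors1/descriptors2 from your code. The 3*min_dist radius and the final count check are assumptions to tune per application (and per norm), not fixed thresholds.
//a separate matcher without cross-check, used only for the radius search
BFMatcher radius_matcher(NORM_L1);
vector< vector<DMatch> > radius_matches;
radius_matcher.radiusMatch(descriptors1, descriptors2, radius_matches, (float)(3*min_dist));
vector<DMatch> radius_good_matches;
for (size_t i = 0; i < radius_matches.size(); i++)
{
    //radiusMatch returns neighbours sorted by distance; keep the closest one, if any
    if (!radius_matches[i].empty())
        radius_good_matches.push_back(radius_matches[i][0]);
}
//a size check on radius_good_matches (as in your original code) still works as a last guard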

Related

Cannot perform corner matching between two images

I'm using OpenCV 2.4, where I'm using FlannBasedMatcher to match keypoints. However, instead of using SurfFeatureDetector::detect() to extract keypoints, I'm passing the image corners as keypoints.
Sadly I'm getting zero matches for all values of minHessian.
Below is my code:
void matches(int, void*)
{
SurfFeatureDetector detector( minHessian );
vector<KeyPoint> keypoints_frame,keypoints_trng;
Mat descriptors_frame,descriptors_trng;
//detect keypoints
for(int i=0;i<corners.size();i++)
{
keypoints_frame.push_back(KeyPoint(corners[i].x,corners[i].y,0));
cout<<"\n"<<keypoints_frame[i].pt.x<<"\t"<<keypoints_frame[i].pt.y;
}
keypoints_trng.push_back(KeyPoint(337,288,0));
keypoints_trng.push_back(KeyPoint(337,241,0));
keypoints_trng.push_back(KeyPoint(370,288,0));
keypoints_trng.push_back(KeyPoint(370,241,0));
keypoints_trng.push_back(KeyPoint(291,239,0));
keypoints_trng.push_back(KeyPoint(287,203,0));
keypoints_trng.push_back(KeyPoint(288,329,0));
keypoints_trng.push_back(KeyPoint(426,237,0));
keypoints_trng.push_back(KeyPoint(428,326,0));
keypoints_trng.push_back(KeyPoint(426,201,0));
keypoints_trng.push_back(KeyPoint(427,293,0));
keypoints_trng.push_back(KeyPoint(287,297,0));
for(int i=0;i<corners.size();i++)
{
cout<<"\n"<<keypoints_trng[i].pt.x<<"\t"<<keypoints_trng[i].pt.y;
}
//describe keypoints
SurfDescriptorExtractor extractor;
extractor.compute( src_gray_resized, keypoints_frame, descriptors_frame);
extractor.compute( trng_gray_resized, keypoints_trng, descriptors_trng);
//matching the keypoints
FlannBasedMatcher matcher;
std::vector< DMatch > matches;
matcher.match( descriptors_frame, descriptors_trng, matches );
cout<<"\n\n no of matches: "<<matches.size();
std::vector< DMatch > good_matches;
if (matches.size()>20)
{minHessian=minHessian+100;}
for( int i = 0; i < descriptors_frame.rows; i++ )
{ if( matches[i].distance <= max(2*100.00, 0.02) )
{ good_matches.push_back( matches[i]); }
}
//cout<<"\nno of good matches"<<good_matches.size();
//-- Draw only "good" matches
Mat img_matches;
drawMatches( src_gray, keypoints_frame, trng_gray, keypoints_trng,
good_matches, img_matches, Scalar::all(-1), Scalar::all(-1),
vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS );
//-- Show detected matches
namedWindow("Good Matches",CV_WINDOW_NORMAL);
imshow( "Good Matches", img_matches );
}

Opencv Feature Matching is not matching correctly for cropped images of different sizes and images taken from various sources?

I am trying to match two images: one is a screenshot of a mobile screen, and the template image is an app icon. If I match a source and template cropped from the same image, it matches perfectly. But when I use an app icon cropped from a different mobile screenshot, it does not match properly.
For the image matching I am working with the following code:
int main( int argc, char** argv )
{
Mat objectImg = imread("source.jpg", cv::IMREAD_GRAYSCALE);
Mat sceneImg = imread("note4-3.jpg", cv::IMREAD_GRAYSCALE);
//cv::resize(sceneImg,sceneImg,objectImg.size(),0,0,CV_INTER_CUBIC);
if( !objectImg.data || !sceneImg.data )
{
printf( " No image data \n " );
return -1337;
}
std::vector<cv::KeyPoint> objectKeypoints;
std::vector<cv::KeyPoint> sceneKeypoints;
cv::Mat objectDescriptors;
cv::Mat sceneDescriptors;
Ptr<FeatureDetector> detector;
detector = cv::MSER::create();
detector->detect(objectImg, objectKeypoints);
detector->detect(sceneImg, sceneKeypoints);
Ptr<DescriptorExtractor> extractor = cv::ORB::create();
extractor->compute( objectImg, objectKeypoints, objectDescriptors );
extractor->compute( sceneImg, sceneKeypoints, sceneDescriptors );
if(objectDescriptors.type()!=CV_32F) {
objectDescriptors.convertTo(objectDescriptors, CV_32F);
}
if(sceneDescriptors.type()!=CV_32F) {
sceneDescriptors.convertTo(sceneDescriptors, CV_32F);
}
vector< vector<DMatch> > matches;
Ptr<DescriptorMatcher> matcher = DescriptorMatcher::create("BruteForce");
matcher->knnMatch( objectDescriptors, sceneDescriptors, matches, 8 );
double max_dist = 0; double min_dist = 100;
//-- Quick calculation of max and min distances between keypoints
for( int i = 0; i < objectDescriptors.rows; i++ )
{
double dist = matches[i][0].distance;
if( dist < min_dist ) min_dist = dist;
if( dist > max_dist ) max_dist = dist;
}
std::vector<cv::DMatch> good_matches;
for( int i = 0; i < objectDescriptors.rows; i++ )
{
if( matches[i][0].distance <= max(2*min_dist, 0.02) ) {
good_matches.push_back( matches[i][0]);
}
}
//look whether the match is inside a defined area of the image
//only 25% of maximum of possible distance
/*double tresholdDist = 0.50 * sqrt(double(sceneImg.size().height*sceneImg.size().height + sceneImg.size().width*sceneImg.size().width));
vector< DMatch > good_matches2;
good_matches2.reserve(matches.size());
for (size_t i = 0; i < matches.size(); ++i)
{
for (int j = 0; j < matches[i].size(); j++)
{
Point2f from = objectKeypoints[matches[i][j].queryIdx].pt;
Point2f to = sceneKeypoints[matches[i][j].trainIdx].pt;
//calculate local distance for each possible match
double dist = sqrt((from.x - to.x) * (from.x - to.x) + (from.y - to.y) * (from.y - to.y));
//save as best match if local distance is in specified area and on same height
if (dist < tresholdDist && abs(from.y-to.y)<5)
{
good_matches2.push_back(matches[i][j]);
j = matches[i].size();
}
}
}*/
Mat allmatchs;
drawMatches(objectImg,objectKeypoints,sceneImg,sceneKeypoints,good_matches,allmatchs,Scalar::all(-1), Scalar::all(-1),vector<char>(),0);
namedWindow("Matchs" , CV_WINDOW_NORMAL);
imshow( "Matchs",allmatchs);
waitKey(0);
}
[Wrong match when cropped from a different source]
The above result is obtained when matching a source from one mobile screenshot and a template from a different screenshot.
I am using OpenCV 3.0.
Please advise whether I should make changes in the code or whether I have to use template matching or some other technique. I cannot use SURF detectors since I cannot have the paid versions due to license conflicts.
Sample images:
Source Image
Template
Looking at the images you've provided, I can suggest some changes which will help you out.
Remove the good-match selection; it creates issues when sharp features are present. Sharp features have a much smaller Hamming distance compared to other good matches, so when you select on 2*min_dist you are indirectly ignoring possible good matches.
Make sure to have a reasonable number of feature points in the object image.
If this feature detector and descriptor combination doesn't work out, try other feature detectors and descriptors like STAR-BRIEF or SURF, which are far better than MSER-ORB.
In your situation, the detector-matcher need not be rotation invariant, but it should be scale invariant, so try resizing the object image (a sketch follows below).
Hope my suggestions help you.
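For the resizing point, a minimal sketch of matching the object image at a few scales; the scale set here is an assumption, so pick it from the size range you expect the icon to appear at:
//try the object image at several scales and keep whichever gives the most matches
double scales[] = { 0.5, 0.75, 1.0, 1.5, 2.0 };
for (int s = 0; s < 5; s++)
{
    Mat scaledObject;
    resize(objectImg, scaledObject, Size(), scales[s], scales[s], INTER_AREA);
    //...run the detector, extractor and matcher from the question on scaledObject
    //   against sceneImg, keeping the scale with the highest match count...
}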
I got better matches with the following combination:
Kaze detector
Kaze extractor
BruteForce-L1 matcher
combined with the cross-check matching given in the following link:
http://ecee.colorado.edu/~siewerts/extra/ecen5043/ecen5043_code/sift/descriptor_extractor_matcher.cpp
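For reference, a minimal sketch of that combination with the OpenCV 3.x API, reusing the question's objectImg/sceneImg names; the built-in crossCheck flag stands in here for the symmetric test from the linked sample:
Ptr<KAZE> kaze = cv::KAZE::create();
std::vector<KeyPoint> objectKeypoints, sceneKeypoints;
Mat objectDescriptors, sceneDescriptors;
kaze->detectAndCompute(objectImg, cv::noArray(), objectKeypoints, objectDescriptors);
kaze->detectAndCompute(sceneImg, cv::noArray(), sceneKeypoints, sceneDescriptors);
//crossCheck=true keeps a match only when the two descriptors are mutual nearest neighbours
BFMatcher matcher(NORM_L1, true);
std::vector<DMatch> matches;
matcher.match(objectDescriptors, sceneDescriptors, matches);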

OpenCV: Fundamental matrix accuracy

I am trying to calculate the fundamental matrix of 2 images (different photos of a static scene taken by a same camera).
I calculated it using findFundamentalMat and I used the result to calculate other matrices (Essential, Rotation, ...). The results were obviously wrong. So, I tried to be sure of the accuracy of the calculated fundamental matrix.
Using the epipolar constraint equation, I computed the fundamental matrix error. The error is very high (a few hundred). I do not know what is wrong with my code. I really appreciate any help. In particular: Is there anything that I am missing in the fundamental matrix calculation? And is the way that I calculate the error right?
Also, I ran the code with very different numbers of matches. There are usually lots of outliers; e.g. in a case with more than 80 matches, there were only 10 inliers.
Mat img_1 = imread( "imgl.jpg", CV_LOAD_IMAGE_GRAYSCALE );
Mat img_2 = imread( "imgr.jpg", CV_LOAD_IMAGE_GRAYSCALE );
if( !img_1.data || !img_2.data )
{ return -1; }
//-- Step 1: Detect the keypoints using SURF Detector
int minHessian = 1000;
SurfFeatureDetector detector( minHessian );
std::vector<KeyPoint> keypoints_1, keypoints_2;
detector.detect( img_1, keypoints_1 );
detector.detect( img_2, keypoints_2 );
//-- Step 2: Calculate descriptors (feature vectors)
SurfDescriptorExtractor extractor;
Mat descriptors_1, descriptors_2;
extractor.compute( img_1, keypoints_1, descriptors_1 );
extractor.compute( img_2, keypoints_2, descriptors_2 );
//-- Step 3: Matching descriptor vectors with a brute force matcher
BFMatcher matcher(NORM_L1, true);
std::vector< DMatch > matches;
matcher.match( descriptors_1, descriptors_2, matches );
vector<Point2f>imgpts1,imgpts2;
for( unsigned int i = 0; i<matches.size(); i++ )
{
// queryIdx is the "left" image
imgpts1.push_back(keypoints_1[matches[i].queryIdx].pt);
// trainIdx is the "right" image
imgpts2.push_back(keypoints_2[matches[i].trainIdx].pt);
}
//-- Step 4: Calculate Fundamental matrix
Mat f_mask;
Mat F = findFundamentalMat (imgpts1, imgpts2, FM_RANSAC, 0.5, 0.99, f_mask);
//-- Step 5: Calculate Fundamental matrix error
//Camera intrinsics
double data[] = {1189.46 , 0.0, 805.49,
0.0, 1191.78, 597.44,
0.0, 0.0, 1.0};
Mat K(3, 3, CV_64F, data);
//Camera distortion parameters
double dist[] = { -0.03432, 0.05332, -0.00347, 0.00106, 0.00000};
Mat D(1, 5, CV_64F, dist);
//working with undistorted points
vector<Point2f> undistorted_1,undistorted_2;
vector<Point3f> line_1, line_2;
undistortPoints(imgpts1,undistorted_1,K,D);
undistortPoints(imgpts2,undistorted_2,K,D);
computeCorrespondEpilines(undistorted_1,1,F,line_1);
computeCorrespondEpilines(undistorted_2,2,F,line_2);
double f_err=0.0;
double fx,fy,cx,cy;
fx=K.at<double>(0,0);fy=K.at<double>(1,1);cx=K.at<double>(0,2);cy=K.at<double>(1,2);
Point2f pt1, pt2;
int inliers=0;
//calculation of fundamental matrix error for inliers
for (int i=0; i<f_mask.size().height; i++)
if (f_mask.at<char>(i)==1)
{
inliers++;
//calculate non-normalized values
pt1.x = undistorted_1[i].x * fx + cx;
pt1.y = undistorted_1[i].y * fy + cy;
pt2.x = undistorted_2[i].x * fx + cx;
pt2.y = undistorted_2[i].y * fy + cy;
f_err += fabs(pt1.x*line_2[i].x +
pt1.y*line_2[i].y + line_2[i].z)
+ fabs(pt2.x*line_1[i].x +
pt2.y*line_1[i].y + line_1[i].z);
}
double AvrErr = f_err/inliers;
I believe the problem is that you calculated the fundamental matrix based on the brute-force matcher only; you should apply some more filtering to these corresponding points, like a ratio test and a symmetry test.
I recommend you read page 233 of the book "OpenCV 2 Computer Vision Application Programming Cookbook", Chapter 9.
It's explained very well!
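For concreteness, a minimal sketch of the ratio test using the question's descriptors_1/descriptors_2; the 0.8 ratio is an assumption, commonly tuned between 0.6 and 0.8:
//ratio test: keep a match only if the best neighbour is clearly better than the second best
BFMatcher knn_matcher(NORM_L1);
std::vector< std::vector<DMatch> > knn_matches;
knn_matcher.knnMatch(descriptors_1, descriptors_2, knn_matches, 2);
std::vector<DMatch> filtered_matches;
for (size_t i = 0; i < knn_matches.size(); i++)
{
    if (knn_matches[i].size() == 2 &&
        knn_matches[i][0].distance < 0.8f * knn_matches[i][1].distance)
        filtered_matches.push_back(knn_matches[i][0]);
}
//the symmetry test repeats the matching in the 2->1 direction and keeps only
//pairs that agree both ways, before passing the points to findFundamentalMat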
Given that we are supplied with the intrinsic matrix K and distortion matrix D, we should undistort the image points before feeding them to findFundamentalMat and should work with undistorted image coordinates from then on (i.e. for computing the error). I found that this simple change reduced the maximum error of any image point pair from 176.0 to 0.2, and the number of inliers increased from 18 to 77.
I also toyed with normalizing the undistorted image points before feeding them to findFundamentalMat, which reduced the maximum error of any image point pair to almost zero, though it does not increase the number of inliers any further.
const float kEpsilon = 1.0e-6f;
float sampsonError(const Mat &dblFMat, const Point2f &pt1, const Point2f &pt2)
{
Mat m_pt1(3, 1 , CV_64FC1 );//m_pt1(pt1);
Mat m_pt2(3, 1 , CV_64FC1 );
m_pt1.at<double>(0,0) = pt1.x; m_pt1.at<double>(1,0) = pt1.y; m_pt1.at<double>(2,0) = 1.0f;
m_pt2.at<double>(0,0) = pt2.x; m_pt2.at<double>(1,0) = pt2.y; m_pt2.at<double>(2,0) = 1.0f;
assert(dblFMat.rows==3 && dblFMat.cols==3);
assert(m_pt1.rows==3 && m_pt1.cols==1);
assert(m_pt2.rows==3 && m_pt2.cols==1);
Mat dblFMatT(dblFMat.t());
Mat dblFMatp1=(dblFMat * m_pt1);
Mat dblFMatTp2=(dblFMatT * m_pt2);
assert(dblFMatp1.rows==3 && dblFMatp1.cols==1);
assert(dblFMatTp2.rows==3 && dblFMatTp2.cols==1);
Mat numerMat=m_pt2.t() * dblFMatp1;
double numer=numerMat.at<double>(0,0);
if (numer < kEpsilon)
{
return 0;
} else {
double denom=dblFMatp1.at<double>(0,0) + dblFMatp1.at<double>(1,0) + dblFMatTp2.at<double>(0,0) + dblFMatTp2.at<double>(1,0);
double ret=(numer*numer)/denom;
return (numer*numer)/denom;
}
}
#define UNDISTORT_IMG_PTS 1
#define NORMALIZE_IMG_PTS 1
int filter_imgpts_pairs_with_epipolar_constraint(
const vector<Point2f> &raw_imgpts_1,
const vector<Point2f> &raw_imgpts_2,
int imgW,
int imgH
)
{
#if UNDISTORT_IMG_PTS
//Camera intrinsics
double data[] = {1189.46 , 0.0, 805.49,
0.0, 1191.78, 597.44,
0.0, 0.0, 1.0};
Mat K(3, 3, CV_64F, data);
//Camera distortion parameters
double dist[] = { -0.03432, 0.05332, -0.00347, 0.00106, 0.00000};
Mat D(1, 5, CV_64F, dist);
//working with undistorted points
vector<Point2f> unnormalized_imgpts_1,unnormalized_imgpts_2;
undistortPoints(raw_imgpts_1,unnormalized_imgpts_1,K,D);
undistortPoints(raw_imgpts_2,unnormalized_imgpts_2,K,D);
#else
vector<Point2f> unnormalized_imgpts_1(raw_imgpts_1);
vector<Point2f> unnormalized_imgpts_2(raw_imgpts_2);
#endif
#if NORMALIZE_IMG_PTS
float c_col=imgW/2.0f;
float c_row=imgH/2.0f;
float multiply_factor= 2.0f/(imgW+imgH);
vector<Point2f> final_imgpts_1(unnormalized_imgpts_1);
vector<Point2f> final_imgpts_2(unnormalized_imgpts_2);
for( auto iit=final_imgpts_1.begin(); iit != final_imgpts_1.end(); ++ iit)
{
Point2f &imgpt(*iit);
imgpt.x=(imgpt.x - c_col)*multiply_factor;
imgpt.y=(imgpt.y - c_row)*multiply_factor;
}
for( auto iit=final_imgpts_2.begin(); iit != final_imgpts_2.end(); ++ iit)
{
Point2f &imgpt(*iit);
imgpt.x=(imgpt.x - c_col)*multiply_factor;
imgpt.y=(imgpt.y - c_row)*multiply_factor;
}
#else
vector<Point2f> final_imgpts_1(unnormalized_imgpts_1);
vector<Point2f> final_imgpts_2(unnormalized_imgpts_2);
#endif
int algorithm=FM_RANSAC;
//int algorithm=FM_LMEDS;
vector<uchar>status;
Mat F = findFundamentalMat (final_imgpts_1, final_imgpts_2, algorithm, 0.5, 0.99, status);
int n_inliners=std::accumulate(status.begin(), status.end(), 0);
assert(final_imgpts_1.size() == final_imgpts_2.size());
vector<float> serr;
for( unsigned int i = 0; i< final_imgpts_1.size(); i++ )
{
const Point2f &p_1(final_imgpts_1[i]);
const Point2f &p_2(final_imgpts_2[i]);
float err= sampsonError(F, p_1, p_2);
serr.push_back(err);
}
float max_serr=*max_element(serr.begin(), serr.end());
cout << "found " << raw_imgpts_1.size() << "matches " << endl;
cout << " and " << n_inliners << " inliners" << endl;
cout << " max sampson err" << max_serr << endl;
return 0;
}

OpenCV iOS - show images returned from drawMatches

I'm new to OpenCV.
I'm trying to draw matches of features between two images using FLANN/SURF in OpenCV on iOS.
I'm following this example:
http://docs.opencv.org/doc/tutorials/features2d/feature_flann_matcher/feature_flann_matcher.html#feature-matching-with-flann
Here's my code with some small modifications (I wrapped the code from the example in a function that returns a UIImage as a result and reads the starting images from the bundle):
UIImage* SURFRecognition::test()
{
UIImage *img1 = [UIImage imageNamed:@"wallet"];
UIImage *img2 = [UIImage imageNamed:@"wallet2"];
Mat img_1;
Mat img_2;
UIImageToMat(img1, img_1);
UIImageToMat(img2, img_2);
if( !img_1.data || !img_2.data )
{
std::cout<< " --(!) Error reading images " << std::endl;
}
//-- Step 1: Detect the keypoints using SURF Detector
int minHessian = 400;
SurfFeatureDetector detector( minHessian );
std::vector<KeyPoint> keypoints_1, keypoints_2;
detector.detect( img_1, keypoints_1 );
detector.detect( img_2, keypoints_2 );
//-- Step 2: Calculate descriptors (feature vectors)
SurfDescriptorExtractor extractor;
Mat descriptors_1, descriptors_2;
extractor.compute( img_1, keypoints_1, descriptors_1 );
extractor.compute( img_2, keypoints_2, descriptors_2 );
//-- Step 3: Matching descriptor vectors using FLANN matcher
FlannBasedMatcher matcher;
std::vector< DMatch > matches;
matcher.match( descriptors_1, descriptors_2, matches );
double max_dist = 0; double min_dist = 100;
//-- Quick calculation of max and min distances between keypoints
for( int i = 0; i < descriptors_1.rows; i++ )
{ double dist = matches[i].distance;
if( dist < min_dist ) min_dist = dist;
if( dist > max_dist ) max_dist = dist;
}
printf("-- Max dist : %f \n", max_dist );
printf("-- Min dist : %f \n", min_dist );
//-- Draw only "good" matches (i.e. whose distance is less than 2*min_dist )
//-- PS.- radiusMatch can also be used here.
std::vector< DMatch > good_matches;
for( int i = 0; i < descriptors_1.rows; i++ )
{ if( matches[i].distance <= 2*min_dist )
{ good_matches.push_back( matches[i]); }
}
//-- Draw only "good" matches
Mat img_matches;
drawMatches( img_1, keypoints_1, img_2, keypoints_2,
good_matches, img_matches, Scalar::all(-1), Scalar::all(-1),
vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS );
//-- Show detected matches
//imshow( "Good Matches", img_matches );
UIImage *imgTemp = MatToUIImage(img_matches);
for( int i = 0; i < good_matches.size(); i++ )
{
printf( "-- Good Match [%d] Keypoint 1: %d -- Keypoint 2: %d \n", i, good_matches[i].queryIdx, good_matches[i].trainIdx );
}
return imgTemp;
}
The result of the function above is: only the lines that connect the matches are shown, but the two original images are not shown. If I understood correctly, the drawMatches function returns a cv::Mat which contains the images and the connections between similar features. Is this correct or am I missing something? Can someone help me?
I've found the solution myself.
It seems, after searching a lot, that drawMatches needs img1 and img2 to have 1 to 3 channels. I was opening PNGs with alpha, so these were 4-channel images.
Here's my revised code:
Added
UIImageToMat(img1, img_1);
UIImageToMat(img2, img_2);
cvtColor(img_1, img_1, CV_BGRA2BGR);
cvtColor(img_2, img_2, CV_BGRA2BGR);

OpenCV Error when using miniflann [closed]

I found this code online and wanted to try it:
int main( int argc, char **argv )
{
VideoCapture cam1, cam2; //Middle and left cameras
//VideoCapture cam3; //Right camera
cam1.open(0);
cam2.open(1);
//cam3.open(2);
cam1.set(CV_CAP_PROP_FRAME_WIDTH, 320);
cam1.set(CV_CAP_PROP_FRAME_HEIGHT, 240);
cam2.set(CV_CAP_PROP_FRAME_WIDTH, 320);
cam2.set(CV_CAP_PROP_FRAME_HEIGHT, 240);
//cam3.set(CV_CAP_PROP_FRAME_WIDTH, 320);
//cam3.set(CV_CAP_PROP_FRAME_HEIGHT, 240);
Mat frm1, frm2;
while(1)
{
//Step 1: Get frames from a few cameras
cam1 >> frm1; //Reference plane
cam2 >> frm2; //Right-side target plane
// cam3 >> frm3;
if( frm1.empty()||frm2.empty() )
break;
//Step2: SURF detection
int minHessian = 800;
SurfFeatureDetector detector( minHessian );
vector<KeyPoint> keypoints_1, keypoints_2;
detector.detect( frm1, keypoints_1 );
detector.detect( frm2, keypoints_2 );
SurfDescriptorExtractor extractor; ///
Mat descriptors_1, descriptors_2;
extractor.compute( frm1, keypoints_1, descriptors_1 );
extractor.compute( frm2, keypoints_2, descriptors_2 );
//Step 3: Feature matching
//-- Matching descriptor vectors with a matcher
FlannBasedMatcher matcher;
vector< DMatch > matches;
matcher.match( descriptors_1, descriptors_2, matches);
//vector< vector<DMatch> > matches;
//matcher.knnMatch( descriptors_1, descriptors_2, matches,2);
double max_dist = 0; double min_dist = 50;
//-- Quick calculation of max and min distances between keypoints
for( int i = 0; i < descriptors_1.rows; i++ )
{ double dist = matches[i].distance;
if( dist < min_dist ) min_dist = dist;
if( dist > max_dist ) max_dist = dist;
}
//printf("-- Max dist : %f \n", max_dist );
//printf("-- Min dist : %f \n", min_dist );
//-- Draw only "good" matches (i.e. whose distance is less than 2*min_dist )
vector< DMatch > good_matches;
for( int i = 0; i < descriptors_1.rows; i++ )
{ if( matches[i].distance < 2*min_dist )
{ good_matches.push_back( matches[i]); }
}
Mat img_matches;
drawMatches( frm1, keypoints_1, frm2, keypoints_2,
good_matches, img_matches, Scalar::all(-1), Scalar::all(-1),
vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS );
//-- Localize the object from frame1 in frame2
vector<Point2f> frame1;
vector<Point2f> frame2;
for( int i = 0; i < good_matches.size(); i++ )
{
//-- Get the keypoints from the good matches
frame1.push_back( keypoints_1[ good_matches[i].queryIdx ].pt );
frame2.push_back( keypoints_2[ good_matches[i].trainIdx ].pt );
}
//Step 4: RANSAC and Homography
Mat H = findHomography( Mat(frame2), Mat(frame1), CV_RANSAC, 5.0 );
cout << "Homography for mapping the r-side plane to the ref. plane: " << endl << H << endl;
//-- Show detected matches
imshow("Matches", img_matches );
//Step 5:Warping
Mat result;
warpPerspective(frm2,result,H,Size(3*frm1.cols,1.5*frm1.rows));
imshow("Warping result",result);
Mat half(result,Rect(0,0,frm2.cols,frm2.rows));
frm1.copyTo(half);//Reference image
//Step 6:Blending
//Step 7:Showing the panoramic video
imshow("Blended view",result);
char c=waitKey(20);
if (c==27)
break;
}
return 0;
}
However, when I try to run the code, I get the following error. I checked the code against the OpenCV tutorial as well, and they are almost exactly the same up to the findHomography part.
OpenCV Error: Unsupported format or combination of formats
(type=0) in buildIndex_, file OpenCV-2.3.1/modules/flann/src/miniflann.cpp,
line 297 terminate called after throwing an instance of 'cv::Exception' what()
OpenCV-2.3.1/modules/flann/src/miniflann.cpp:297: error:
(-210) type=0 in function buildIndex_
Any ideas what could have gone wrong here?
Thank you.
The following resolved my problem.
Just before Step 3, add the following:
descriptors_1.convertTo(descriptors_1,CV_32F);
descriptors_2.convertTo(descriptors_2,CV_32F);
Hope this helps.
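In case the conversion alone is not enough: a type of 0 can also come from an empty descriptor matrix (a frame where SURF found no keypoints), so a small guard before Step 3 is a cheap extra safeguard. A sketch, assuming it runs inside the while loop:
//skip frames where detection found nothing, otherwise FLANN is handed an empty matrix
if (descriptors_1.empty() || descriptors_2.empty())
    continue;
descriptors_1.convertTo(descriptors_1, CV_32F);
descriptors_2.convertTo(descriptors_2, CV_32F);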
My boss suggested restarting the computer, since I was using USB cameras. He said that in some Linux systems like Ubuntu you sometimes need to restart for them to work again, and guess what? It really did work after restarting my computer!

Resources