Extract point descriptors from small images using OpenCV

I am trying to extract different point descriptors (SIFT, SURF, ORB, BRIEF, ...) to build a Bag of Visual Words. The problem seems to be that I am using very small images: 12x60px.
Using a dense detector I am able to get some keypoints, but when computing the descriptors no data is extracted.
Here is the code :
vector<KeyPoint> points;
Mat descriptor; // descriptor of the current image
Ptr<DescriptorExtractor> extractor = DescriptorExtractor::create("BRIEF");
Ptr<FeatureDetector> detector(new DenseFeatureDetector(1.f, 1, 0.1f, 6, 0, true, false));
Mat image = imread(filename, 0); // load as grayscale
Mat roi(image, Rect(0, 0, 12, 60));
detector->detect(roi, points);
extractor->compute(roi, points, descriptor);
cout << descriptor << endl;
The result is [] (with BRIEF and ORB) and a segfault (with SURF and SIFT).
Does anyone have a clue how to densely extract point descriptors from small images in OpenCV?
Thanks for your help.

Indeed, I finally managed to work my way to a solution. Thanks for the help.
I am now using an ORB extractor with explicitly initialised parameters instead of the default ones, e.g.:
Ptr<DescriptorExtractor> extractor(new ORB(500, 1.2f, 8, orbSize, 0, 2, ORB::HARRIS_SCORE, orbSize));
I had to explore the OpenCV documentation thoroughly before finding the answer to my problem: the ORB documentation.
Also, people using the dense point detector should be aware that after computing descriptors they may have fewer keypoints than the detector produced: the descriptor computation removes any keypoints for which it cannot extract the data.
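For reference, here is a minimal self-contained sketch of the dense-detect plus ORB-describe pipeline with an explicit patch size chosen to fit the 12x60 ROI; the value of orbSize and the file name are assumptions, not taken from the original post.
#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <iostream>
using namespace cv;
using namespace std;

int main()
{
    int orbSize = 10; // assumed patch/edge size small enough for a 12x60 ROI

    Mat image = imread("sample.png", 0); // placeholder file name, loaded as grayscale
    Mat roi(image, Rect(0, 0, 12, 60));

    // Dense sampling, as in the question.
    Ptr<FeatureDetector> detector(new DenseFeatureDetector(1.f, 1, 0.1f, 6, 0, true, false));

    // ORB with explicit edgeThreshold and patchSize instead of the 31-pixel defaults.
    Ptr<DescriptorExtractor> extractor(
        new ORB(500, 1.2f, 8, orbSize, 0, 2, ORB::HARRIS_SCORE, orbSize));

    vector<KeyPoint> points;
    Mat descriptor;
    detector->detect(roi, points);
    extractor->compute(roi, points, descriptor); // keypoints too close to the border are dropped here

    cout << "keypoints: " << points.size() << ", descriptors: " << descriptor.rows << endl;
    return 0;
}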

BRIEF and ORB use a 32x32 patch to compute the descriptor. Since such a patch doesn't fit inside your 12x60 image, those keypoints are removed (to avoid returning keypoints without a descriptor).
SURF and SIFT can use smaller patches, but it depends on the scale provided by the keypoint. In this case, I guess they have to use a bigger patch and the same thing happens. I don't know why you get a segfault, though; maybe the SIFT/SURF descriptor extractors don't check that keypoints are inside the image boundaries, as the BRIEF/ORB ones do.
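One workaround the answers do not spell out, but which follows directly from the explanation above, is to pad the small ROI before detection so that the extractor's patch fits around every keypoint. A minimal sketch, assuming reflective padding is acceptable for your data and using a generous 32-pixel margin (the exact margin an extractor needs depends on its internal patch size):
#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <iostream>
using namespace cv;
using namespace std;

int main()
{
    Mat image = imread("sample.png", 0); // placeholder file name
    Mat roi(image, Rect(0, 0, 12, 60));

    // Pad the ROI so the descriptor patch fits around keypoints near the border.
    // 32 pixels per side is an assumption, not a value from the original post.
    Mat padded;
    copyMakeBorder(roi, padded, 32, 32, 32, 32, BORDER_REFLECT_101);

    Ptr<FeatureDetector> detector(new DenseFeatureDetector(1.f, 1, 0.1f, 6, 0, true, false));
    Ptr<DescriptorExtractor> extractor = DescriptorExtractor::create("BRIEF");

    vector<KeyPoint> points;
    Mat descriptor;
    detector->detect(padded, points);
    extractor->compute(padded, points, descriptor);

    cout << "keypoints kept: " << points.size() << ", descriptors: " << descriptor.rows << endl;
    return 0;
}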

Related

Why doesn't RANSAC remove all the outliers in SIFT matches?

I use SIFT to detect and describe feature points in two images as follows.
void FeaturePointMatching::SIFTFeatureMatchers(cv::Mat imgs[2], std::vector<cv::Point2f> fp[2])
{
    cv::SiftFeatureDetector dec;
    std::vector<cv::KeyPoint> kp1, kp2;
    dec.detect(imgs[0], kp1);
    dec.detect(imgs[1], kp2);

    cv::SiftDescriptorExtractor ext;
    cv::Mat desp1, desp2;
    ext.compute(imgs[0], kp1, desp1);
    ext.compute(imgs[1], kp2, desp2);

    cv::BruteForceMatcher<cv::L2<float> > matcher;
    std::vector<cv::DMatch> matches;
    matcher.match(desp1, desp2, matches);

    std::vector<cv::DMatch>::iterator iter;
    fp[0].clear();
    fp[1].clear();
    for (iter = matches.begin(); iter != matches.end(); ++iter)
    {
        //if (iter->distance > 1000)
        //    continue;
        fp[0].push_back(kp1.at(iter->queryIdx).pt);
        fp[1].push_back(kp2.at(iter->trainIdx).pt);
    }

    // remove outliers
    std::vector<uchar> mask;
    cv::findFundamentalMat(fp[0], fp[1], cv::FM_RANSAC, 3, 1, mask);
    std::vector<cv::Point2f> fp_refined[2];
    for (size_t i = 0; i < mask.size(); ++i)
    {
        if (mask[i] != 0)
        {
            fp_refined[0].push_back(fp[0][i]);
            fp_refined[1].push_back(fp[1][i]);
        }
    }
    std::swap(fp_refined[0], fp[0]);
    std::swap(fp_refined[1], fp[1]);
}
In the above code I use findFundamentalMat() to remove outliers, but in the resulting images img1 and img2 there are still some bad matches. In the images, each green line connects a matched feature point pair (please ignore the red marks). I cannot find anything wrong; could anyone give me some hints? Thanks in advance.
RANSAC is just one of the robust estimators. In principle you can use a variety of them, but RANSAC has been shown to work quite well as long as your input data is not dominated by outliers. You can check other RANSAC variants such as MSAC, MLESAC, MAPSAC, etc., which have other interesting properties as well. You may find this CVPR presentation interesting (http://www.imgfsr.com/CVPR2011/Tutorial6/RANSAC_CVPR2011.pdf).
Depending on the quality of the input data, you can estimate the optimal number of RANSAC iterations as described here (https://en.wikipedia.org/wiki/RANSAC#Parameters); a small worked example follows below.
Again, RANSAC is just one of the robust estimation methods. You may take other statistical approaches such as modelling your data with heavy-tailed distributions, trimmed least squares, etc.
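As a concrete illustration of the iteration estimate mentioned above (this is the standard formula from the linked page, not code from the answer): to draw at least one all-inlier sample with confidence p, given an inlier ratio w and a minimal sample size s, you need roughly k = log(1 - p) / log(1 - w^s) iterations.
#include <cmath>
#include <cstdio>

// Estimated number of RANSAC iterations needed to draw at least one
// all-inlier sample with confidence p, given inlier ratio w and
// minimal sample size s (e.g. s = 8 for the 8-point fundamental matrix).
int ransacIterations(double p, double w, int s)
{
    return static_cast<int>(std::ceil(std::log(1.0 - p) / std::log(1.0 - std::pow(w, s))));
}

int main()
{
    // Example: 99% confidence, 50% inliers, 8-point samples -> about 1177 iterations.
    std::printf("%d\n", ransacIterations(0.99, 0.5, 8));
    return 0;
}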
In your code you are missing the RANSAC step. RANSAC basically has two steps:
generate a hypothesis (randomly select the data points needed to fit your model: training data);
evaluate the model (on the rest of the points: testing data);
then iterate and choose the model that gives the lowest testing error.
RANSAC stands for RANdom SAmple Consensus. It does not remove outliers by itself; it selects a group of points with which to calculate the fundamental matrix. You then need to do a reprojection using the fundamental matrix just calculated with RANSAC to remove the outliers.
Like any algorithm, RANSAC is not perfect. You can try other robust algorithms available in the OpenCV implementation, such as LMEDS. You can also iterate, using the points last marked as inliers as input to a new estimation, and you can vary the threshold and confidence level. I suggest running RANSAC 1-3 times and then running LMEDS, which does not need a threshold but only works well with at least 50% inliers (a sketch of this is given after the notes below).
You can also have geometrical problems:
* If the baseline between the two stereo views is too small, the fundamental matrix estimation can be unreliable, and it may be better to use findHomography() instead for your purpose.
* If your images have some barrel/pincushion distortion, they do not conform to epipolar geometry and the fundamental matrix is not the correct mathematical model to link matches. In this case, you may have to calibrate your camera, run undistort(), and then process the output images.
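A sketch of the run-RANSAC-then-LMEDS strategy suggested above, reusing the matched point vectors from the question; the threshold, confidence and number of rounds are assumptions:
#include <opencv2/core/core.hpp>
#include <opencv2/calib3d/calib3d.hpp>
#include <vector>

// Run RANSAC a couple of times, keeping only the inliers each round,
// then finish with LMEDS, which needs no threshold but expects more
// than 50% inliers among the remaining matches.
cv::Mat refineWithRansacThenLmeds(std::vector<cv::Point2f>& pts1,
                                  std::vector<cv::Point2f>& pts2)
{
    for (int round = 0; round < 2; ++round) // 2 rounds is an assumed choice
    {
        std::vector<uchar> mask;
        cv::findFundamentalMat(pts1, pts2, cv::FM_RANSAC, 3.0, 0.99, mask);

        std::vector<cv::Point2f> in1, in2;
        for (size_t i = 0; i < mask.size(); ++i)
        {
            if (mask[i])
            {
                in1.push_back(pts1[i]);
                in2.push_back(pts2[i]);
            }
        }
        pts1.swap(in1);
        pts2.swap(in2);
    }
    // Final estimate on the surviving matches.
    return cv::findFundamentalMat(pts1, pts2, cv::FM_LMEDS);
}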

Bruteforcematcher with FREAK extractor gives zero matches

Hello guys, hope you are doing well. I am implementing a system that can detect an object in a given image frame in OpenCV 2.4.8. Currently I am dealing with the FREAK algorithm because it is free. So, as mentioned in the tutorials and OpenCV docs, I created objects of the FastFeatureDetector and FREAK classes:
FastFeatureDetector detector(30);
FREAK extractor;
From here on the code is very similar to the OpenCV example http://docs.opencv.org/doc/tutorials/features2d/feature_flann_matcher/feature_flann_matcher.html
Instead of FLANN I used brute force:
BruteForceMatcher<Hamming> matcher;
For both the object and the real-time image frame I find keypoints and descriptors:
detector.detect(frame,keypoints_frame);
descriptors_frame.convertTo(descriptors_frame,CV_32F);
extractor.compute(frame, keypoints_frame, descriptors_frame);
Then I match the descriptors using match():
matcher.match( descriptors_object, descriptors_frame, matches);
When I check the size of matches (which is defined as std::vector<DMatch> matches;), IT IS ZERO for an image frame that contains the object to be detected, so I can't perform findHomography (but the code works up to finding matches).
But when I run drawMatches it draws the points of the detected object on the given frame. When I run the same algorithm with SURF or BRISK, they give a match size > 0 and I can then perform findHomography and proceed.
Can you please tell me why I am getting zero matches for FREAK?
What can I do to avoid that and to perform findHomography?
The code works well with SURF and BRISK (they give some false results too, but I can deal with those).
Thanks in advance!!
Note: I hope my question is clear; please let me know if not and I will edit it as needed.
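For reference, a minimal self-contained sketch of the pipeline described in this question; the file names are placeholders, and BFMatcher(NORM_HAMMING) is used here as the non-legacy equivalent of BruteForceMatcher<Hamming>. Note that FREAK produces binary (CV_8U) descriptors, which is the type a Hamming-distance matcher expects.
#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <iostream>
using namespace cv;
using namespace std;

int main()
{
    // Placeholder file names, not from the question.
    Mat object = imread("object.png", 0);
    Mat frame  = imread("frame.png", 0);

    FastFeatureDetector detector(30);
    FREAK extractor;

    vector<KeyPoint> keypoints_object, keypoints_frame;
    Mat descriptors_object, descriptors_frame;

    detector.detect(object, keypoints_object);
    detector.detect(frame, keypoints_frame);

    // FREAK outputs binary CV_8U descriptors; no conversion to CV_32F is needed
    // when matching with a Hamming-distance matcher.
    extractor.compute(object, keypoints_object, descriptors_object);
    extractor.compute(frame, keypoints_frame, descriptors_frame);

    BFMatcher matcher(NORM_HAMMING);
    vector<DMatch> matches;
    matcher.match(descriptors_object, descriptors_frame, matches);

    cout << "matches: " << matches.size() << endl;
    return 0;
}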

Different matching results for opencv's descriptor_extractor_matcher when loading data from file

I am using the following code in the descriptor_extractor_matcher.cpp sample to compute the descriptors of img1 (Mat descriptors01), write them to disk and load them back (Mat descriptors1). (The same steps are applied to the keypoints, but the code is much the same ...)
Ptr<DescriptorExtractor> descriptorExtractor = DescriptorExtractor::create( argv[2] );
...
Mat descriptors01;
descriptorExtractor->compute( img1, keypoints1, descriptors01 ); // compute descriptors
FileStorage storage("test.yml", FileStorage::WRITE); // save it to disk
storage << "blub" << descriptors01;
storage.release();
Mat descriptors1;
FileStorage storage1("test.yml", FileStorage::READ); // load it again
storage1["blub"] >> descriptors1;
storage1.release();
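The question says the keypoints go through the same save/load steps; a sketch of what that typically looks like with FileStorage (the file name, storage key and function name are placeholders):
#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <vector>

// Assuming keypoints1 was filled by the detector, as in the snippet above.
void saveAndReloadKeypoints(const std::vector<cv::KeyPoint>& keypoints1,
                            std::vector<cv::KeyPoint>& loadedKeypoints1)
{
    cv::FileStorage storage("keypoints.yml", cv::FileStorage::WRITE); // save to disk
    cv::write(storage, "blub_kp", keypoints1); // "blub_kp" is a placeholder key
    storage.release();

    cv::FileStorage storage1("keypoints.yml", cv::FileStorage::READ); // load again
    cv::read(storage1["blub_kp"], loadedKeypoints1);
    storage1.release();
}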
The keypoints & descriptors for image 2 are computed and used without saving and loading.
I am using only the loaded data (keypoints & descriptors) for image 1 for the matching, so for the descriptors: descriptors1.
Now here is the thing: if I compare the cases
A) Using the code above for computing, storing and loading;
B) Using only loaded data (without computing and store it again)
for the matching, I get different results, as you can see in the pictures for the keypoints as well as for the matching descriptors. I would have expected no differences... What am I missing here? Must I compare 2 images, or can I not compare an image to a stored set of keypoints and its descriptors?
Of course I'm using the same values for [detectorType] [descriptorType] [matcherType] [matcherFilterType] [image1] [image2] [ransacReprojThreshold], by the way ;)
Thanks a lot!
UPDATE:
It seems the issue depends on the descriptor. Working with loaded descriptors works for SIFT and SURF, but not for ORB and others. Images: results with the different descriptors for cases A and B:
Try repeating A or B individually and see if the results come out the same. I suspect they won't, and I say that because: #1 your object of interest has poor texture, which results in poor descriptors; #2 the viewpoint change between the two images is huge, which leads to the problem of non-repeatability even for the best descriptors like SIFT.
Now comes the part of how to solve this repeatability issue: #1 use a threshold on the norm of the descriptor so that only very strong features are used for matching; #2 use the epipolar constraint along with RANSAC to filter out wrong matches (a sketch of this is given after the images below). I am attaching two images to show how the filter hugely affects the correspondences.
Using SURF to find correspondence between the two images (two images in red-cyan colormap)
After filtering the matches using RANSAC with the epipolar constraint.
Feel free to comment and discuss further over this issue. :-)
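A sketch of the epipolar-constraint filter from point #2 above; the function name, threshold and confidence are assumptions:
#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/calib3d/calib3d.hpp>
#include <vector>

// Keep only the matches consistent with a RANSAC-estimated fundamental matrix.
std::vector<cv::DMatch> filterWithEpipolarConstraint(
    const std::vector<cv::KeyPoint>& kp1,
    const std::vector<cv::KeyPoint>& kp2,
    const std::vector<cv::DMatch>& matches)
{
    std::vector<cv::Point2f> pts1, pts2;
    for (size_t i = 0; i < matches.size(); ++i)
    {
        pts1.push_back(kp1[matches[i].queryIdx].pt);
        pts2.push_back(kp2[matches[i].trainIdx].pt);
    }

    std::vector<uchar> inlierMask;
    cv::findFundamentalMat(pts1, pts2, cv::FM_RANSAC, 3.0, 0.99, inlierMask);

    std::vector<cv::DMatch> filtered;
    for (size_t i = 0; i < inlierMask.size(); ++i)
        if (inlierMask[i])
            filtered.push_back(matches[i]);
    return filtered;
}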

Generate local features for each keypoint by using SIFT

I have an image and I want to locate keypoints using the SIFT detector and group them, then I want to generate local features for each keypoint using SIFT. Would you please help me with how I can do this? Please give me any suggestions.
I really appreciate your help.
I'm not sure that I understand what you mean, but if you extract SIFT features from an image, you automatically get the feature descriptor, which is used to compare features to each other. Of course you also get the feature location, size, direction and Hessian value with it.
You can group those features by their position in the image, but there is currently no way that I'm aware of to compare those groups, since they may be locally related but can have wildly different feature descriptors.
Also I would suggest SURF. It is faster and not patent encumbered.
Have a look at the examples from OpenCV if you want specific instructions on how to retrieve and compare descriptors.
If you are using OpenCV, here are the commands to do it; if you are using MATLAB, see the link MATCHING_using surf.
USING OPENCV::
// you can change the parameters for your requirement
double hessianThreshold = 200;
int octaves = 3;
int octaveLayers = 4;
bool upright = false;
vector<KeyPoint> keypoints;

// The detector detects the keypoints in an image; here RGB_IMAGE is of Mat type
SurfFeatureDetector detector(hessianThreshold, octaves, octaveLayers, upright);
detector.detect(RGB_IMAGE, keypoints);

// The extractor computes the local features around the keypoints
SurfDescriptorExtractor extractor;
Mat descriptors;
extractor.compute(RGB_IMAGE, keypoints, descriptors);
// all the keypoints' local features are stored row by row in the descriptors matrix
Hope it is useful:)

How to use flann based matcher, or generally flann in opencv?

http://opencv.willowgarage.com/documentation/cpp/features2d_common_interfaces_of_descriptor_matchers.html#flannbasedmatcher
Can somebody please show me sample code or tell me how to use this class and its methods?
I just want to match SURF features from a query image to those of an image set by applying FLANN. I have seen a lot of image-matching code in the samples, but what still eludes me is a metric to quantify how similar one image is to another. Any help will be much appreciated.
Here's untested sample code
using namespace std;
using namespace cv;
Mat query; //the query image
vector<Mat> images; //set of images in your db
/* ... get the images from somewhere ... */
vector<vector<KeyPoint> > dbKeypoints;
vector<Mat> dbDescriptors;
vector<KeyPoint> queryKeypoints;
Mat queryDescriptors;
/* ... Extract the descriptors ... */
FlannBasedMatcher flannmatcher;
//train with descriptors from your db
flannmatcher.add(dbDescriptors);
flannmatcher.train();
vector<DMatch> matches;
flannmatcher.match(queryDescriptors, matches);
/* for kk=0 to matches.size()
the best match for queryKeypoints[matches[kk].queryIdx].pt
is dbKeypoints[matches[kk].imgIdx][matches[kk].trainIdx].pt
*/
Finding the most 'similar' image to the query image depends on your application. Perhaps the number of matched keypoints is adequate. Or you may need a more complex measure of similarity.
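One simple way to turn the matcher output into such a ranking is to count matches per database image via DMatch::imgIdx; a sketch (match counting is only one possible similarity measure, and the function name is a placeholder):
#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <vector>

// Count how many of the query's matches landed in each database image.
// The image with the highest count is the "most similar" under this measure.
std::vector<int> countMatchesPerImage(const std::vector<cv::DMatch>& matches,
                                      int numDbImages)
{
    std::vector<int> counts(numDbImages, 0);
    for (size_t kk = 0; kk < matches.size(); ++kk)
        counts[matches[kk].imgIdx]++;
    return counts;
}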
To reduce the number of false positives, you can compare the nearest neighbour to the second-nearest neighbour by taking the ratio of their distances:
distance(query, nearest neighbour) / distance(query, second-nearest neighbour) < T. The smaller the ratio, the larger the distance of the second-nearest neighbour relative to the best match, which translates into high distinctiveness. This test is used in many computer vision papers that involve registration; a sketch follows below.
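A sketch of that ratio test using knnMatch with k = 2; the threshold of 0.75 is an assumed value (0.7-0.8 is common), and the function name is a placeholder:
#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <vector>

// Lowe-style ratio test: keep a match only if its best neighbour is
// clearly closer than the second-best one.
std::vector<cv::DMatch> ratioTest(cv::FlannBasedMatcher& matcher,
                                  const cv::Mat& queryDescriptors,
                                  float ratio = 0.75f) // assumed threshold
{
    std::vector<std::vector<cv::DMatch> > knnMatches;
    matcher.knnMatch(queryDescriptors, knnMatches, 2); // matcher already trained on the db

    std::vector<cv::DMatch> good;
    for (size_t i = 0; i < knnMatches.size(); ++i)
    {
        if (knnMatches[i].size() == 2 &&
            knnMatches[i][0].distance < ratio * knnMatches[i][1].distance)
        {
            good.push_back(knnMatches[i][0]);
        }
    }
    return good;
}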
