BruteForceMatcher with FREAK extractor gives zero matches - OpenCV

Hello guys, I hope you are doing well. I am implementing a system that can detect an object in a given image frame using OpenCV 2.4.8. I am currently working with the FREAK algorithm because it is free. As described in the tutorials and the OpenCV docs, I created FastFeatureDetector and FREAK objects:
FastFeatureDetector detector(30);
FREAK extractor;
From here on the code closely follows the OpenCV example http://docs.opencv.org/doc/tutorials/features2d/feature_flann_matcher/feature_flann_matcher.html except that instead of FLANN I used a brute-force matcher:
BruteForceMatcher<Hamming> matcher;
For both the object image and the real-time image frame I find keypoints and descriptors:
detector.detect(frame, keypoints_frame);
extractor.compute(frame, keypoints_frame, descriptors_frame);
descriptors_frame.convertTo(descriptors_frame, CV_32F);
Then I match the descriptors using match():
matcher.match( descriptors_object, descriptors_frame, matches);
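For reference, here is the whole pipeline in one piece (a minimal sketch of what I am doing; object is the stored object image and frame is the live frame):
FastFeatureDetector detector(30);
FREAK extractor;
BruteForceMatcher<Hamming> matcher;

std::vector<KeyPoint> keypoints_object, keypoints_frame;
Mat descriptors_object, descriptors_frame;

detector.detect(object, keypoints_object);
extractor.compute(object, keypoints_object, descriptors_object);

detector.detect(frame, keypoints_frame);
extractor.compute(frame, keypoints_frame, descriptors_frame);
// FREAK itself outputs binary descriptors of type CV_8U;
// the conversion below mirrors the FLANN tutorial I started from
descriptors_frame.convertTo(descriptors_frame, CV_32F);

std::vector<DMatch> matches;
matcher.match(descriptors_object, descriptors_frame, matches);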
When I check the size of matches (declared as std::vector<DMatch> matches;), it is ZERO for an image frame that contains the object to be detected, so I cannot perform findHomography (though the code runs fine up to the matching step).
But when I run drawMatches it draws the points on the detected object in the given frame. When I run the same algorithm with SURF or BRISK, they give a match size > 0 and I can then perform findHomography and proceed.
Can you please tell me why I am getting zero matches for FREAK?
What can I do to avoid that and to perform findHomography?
The code works well with SURF and BRISK (they give some false results too, but I can deal with those).
Thanks in advance!!
Note: I hope my question is clear; please let me know if it is not and I will edit it accordingly.

Related

Different matching results for opencv's descriptor_extractor_matcher when loading data from file

I am using the following code from the descriptor_extractor_matcher.cpp sample to compute the descriptors of img1 (Mat descriptors01), write them to disk and load them back (Mat descriptors1). (Same steps for the keypoints, but the code is much the same ...)
Ptr<DescriptorExtractor> descriptorExtractor = DescriptorExtractor::create( argv[2] );
...
Mat descriptors01;
descriptorExtractor->compute( img1, keypoints1, descriptors01 ); // compute descriptors
FileStorage storage("test.yml", FileStorage::WRITE); // save it to disk
storage << "blub" << descriptors01;
storage.release();
Mat descriptors1;
FileStorage storage1("test.yml", FileStorage::READ); // load it again
storage1["blub"] >> descriptors1;
storage1.release();
The keypoints & descriptors for image 2 are computed and used without saving and loading.
I am using only the loaded data (keypoints & descriptors) for image 1 for the matching, so for the descriptors: descriptors1.
Now here is the thing: if I compare the cases
A) Using the code above for computing, storing and loading;
B) Using only loaded data (without computing and storing it again)
for the matching, I get different results, as you can see in the pictures for the keypoints as well as for the matching descriptors. I would have expected no differences... What am I missing here? Must I compare 2 images, and can I not compare an image against a stored set of keypoints and its descriptors?
Of course I'm using the same values for [detectorType] [descriptorType] [matcherType] [matcherFilterType] [image1] [image2] [ransacReprojThreshold], by the way ;)
Thanks a lot!
UPDATE:
It seems the issue depends on the descriptor. Working with loaded descriptors works for SIFT and SURF, but not for ORB and others. Images: results with different descriptors for cases A and B:
Try repeating A or B individually and see if the results come out the same. I suspect they won't, and I say that because: #1 your object of interest has poor texture, which results in poor descriptors; #2 the viewpoint change between the two images is huge, which leads to the problem of non-repeatability even for the best descriptors, like SIFT.
Now comes the part of how to solve this repeatability issue: #1 use a threshold on the norm of the descriptor so that only very strong features are used for matching; #2 use the epipolar constraint along with RANSAC to filter out wrong matches, as sketched below. I am attaching two images to show how much the filter affects the correspondences.
Using SURF to find correspondence between the two images (two images in red-cyan colormap)
After filtering the matches with RANSAC using the epipolar constraint.
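A minimal sketch of filter #2 (assuming matcher.match(descriptors1, descriptors2, matches) has already been run on keypoints1/keypoints2; the variable names are placeholders):
// collect the coordinates of the matched keypoints
std::vector<cv::Point2f> pts1, pts2;
for (size_t i = 0; i < matches.size(); ++i)
{
    pts1.push_back(keypoints1[matches[i].queryIdx].pt);
    pts2.push_back(keypoints2[matches[i].trainIdx].pt);
}

// estimate the fundamental matrix with RANSAC; status marks the inliers
std::vector<uchar> status;
cv::findFundamentalMat(pts1, pts2, CV_FM_RANSAC, 3.0, 0.99, status);

// keep only the matches that satisfy the epipolar constraint
std::vector<cv::DMatch> inliers;
for (size_t i = 0; i < status.size(); ++i)
    if (status[i])
        inliers.push_back(matches[i]);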
Feel free to comment and discuss this issue further. :-)

Extract point descriptors from small images using OpenCV

I am trying to extract different point descriptors (SIFT, SURF, ORB, BRIEF, ...) to build a Bag of Visual Words. The problem seems to be that I am using very small images: 12x60 px.
Using a dense extractor I am able to get some keypoints, but then when extracting the descriptor no data is extracted.
Here is the code :
vector<KeyPoint> points;
Mat descriptor; // descriptor of the current image
Ptr<DescriptorExtractor> extractor = DescriptorExtractor::create("BRIEF");
Ptr<FeatureDetector> detector(new DenseFeatureDetector(1.f,1,0.1f,6,0,true,false));
image = imread(filename, 0);
roi = Mat(image,Rect(0,0,12,60));
detector->detect(roi,points);
extractor->compute(roi,points,descriptor);
cout << descriptor << endl;
The result is [] (with BRIEF and ORB) and SegFault (with SURF and SIFT).
Does anyone have a clue on how to densely extract point descriptors from small images on OpenCV ?
Thanks for your help.
Indeed, I finally managed to work my way to a solution. Thanks for the help.
I am now using an ORB extractor with explicitly initialised parameters instead of a default one, e.g.:
Ptr<DescriptorExtractor> extractor(new ORB(500, 1.2f, 8, orbSize, 0, 2, ORB::HARRIS_SCORE, orbSize));
I had to explore the OpenCV documentation thoroughly before finding the answer to my problem: the ORB documentation.
Also, people using the dense point extractor should be aware that after the descriptor computation they may end up with fewer keypoints than the keypoint extractor produced: the descriptor computation removes any keypoints for which it cannot get the data.
BRIEF and ORB use a 32x32 patch to get the descriptor. Since it doesn't fit your image, they remove those keypoints (to avoid returning keypoints without a descriptor).
In the case of SURF and SIFT, smaller patches can be used, but that depends on the scale provided by the keypoint. In this case, I guess they have to use a bigger patch and the same as before happens. I don't know why you get a segfault, though; maybe the SIFT/SURF descriptor extractors don't check that keypoints are inside the image boundaries, as the BRIEF/ORB ones do.
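If you need to keep the keypoints near the border, one possible workaround (a sketch, not something the extractors do for you) is to pad the small image so that a 32x32 patch always fits:
// pad the 12x60 ROI so a 32x32 descriptor patch fits around every keypoint
int border = 16; // half the BRIEF/ORB patch size
cv::Mat padded;
cv::copyMakeBorder(roi, padded, border, border, border, border,
                   cv::BORDER_REFLECT_101);

// detect and compute on the padded image
detector->detect(padded, points);
extractor->compute(padded, points, descriptor);

// shift the keypoints back into the coordinates of the original ROI
for (size_t i = 0; i < points.size(); ++i)
{
    points[i].pt.x -= border;
    points[i].pt.y -= border;
}
The mirrored border is artificial image content, of course, so descriptors near the edge will be somewhat synthetic; whether that is acceptable depends on your application.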

OpenCV - finding translation of rigid body point cloud

I have 2 images: sourceImg and refImg.
I've extracted the features like so:
cv::GoodFeaturesToTrackDetector detector;
std::vector<cv::KeyPoint> sourceKeyPoints, refKeyPoints;
detector.detect(sourceImg, sourceKeyPoints);
detector.detect(refImg, refKeyPoints);
I want to find the translation of an object from refImg to sourceImg. There is no rotation or perspective change, only simple 2d translation. There may be some noise.
findHomography() works fine when both sets have the same number of features extracted, even handling noise quite well.
My question is, what do I do when the number of features differs?
Can someone point me in the right direction regarding DescriptorExtractor and Matching?
Note: I can't use SURF/SIFT for patent reasons.
You could try the FlannBasedMatcher class from OpenCV. Use it to match descriptors (of any type) and then use the best matches to find your homography.
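A minimal sketch of that pipeline (assuming ORB descriptors, since SIFT/SURF are out for you; binary descriptors need FLANN's LSH index rather than the default KD-tree):
// compute binary descriptors for the keypoints found earlier
cv::OrbDescriptorExtractor extractor;
cv::Mat sourceDesc, refDesc;
extractor.compute(sourceImg, sourceKeyPoints, sourceDesc);
extractor.compute(refImg, refKeyPoints, refDesc);

cv::FlannBasedMatcher matcher(new cv::flann::LshIndexParams(12, 20, 2));
std::vector<cv::DMatch> matches;
matcher.match(refDesc, sourceDesc, matches);

// for a pure 2D translation, the median displacement of the matched
// keypoints is a robust estimate (std::nth_element is in <algorithm>)
if (!matches.empty())
{
    std::vector<float> dx, dy;
    for (size_t i = 0; i < matches.size(); ++i)
    {
        cv::Point2f d = sourceKeyPoints[matches[i].trainIdx].pt
                      - refKeyPoints[matches[i].queryIdx].pt;
        dx.push_back(d.x);
        dy.push_back(d.y);
    }
    std::nth_element(dx.begin(), dx.begin() + dx.size() / 2, dx.end());
    std::nth_element(dy.begin(), dy.begin() + dy.size() / 2, dy.end());
    cv::Point2f translation(dx[dx.size() / 2], dy[dy.size() / 2]);
}
The median makes the estimate tolerant to a few wrong matches; since you expect no rotation or scale change, this avoids findHomography entirely.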

Generate local features for each keypoint by using SIFT

I have an image and I want to locate keypoints using the SIFT detector and group them; then I want to generate local features for each keypoint using SIFT. Could you please tell me how to do this? Any suggestions are welcome.
I really appreciate your help.
I'm not sure that I understand what you mean, but if you extract SIFT features from an image, you automatically get the feature descriptor, which is used to compare features to each other. Of course, you also get the feature location, size, direction and Hessian value with it.
You can group those features by their position in the image, but there is currently no way that I'm aware of to compare such groups, since they may be locally related but can have wildly different feature descriptors.
Also, I would suggest SURF. It is faster and not patent encumbered.
Have a look at the examples from OpenCV if you want specific instructions on how to retrieve and compare descriptors.
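If you do want SIFT specifically, the pattern is the same (a minimal sketch, assuming OpenCV's nonfree module is available and image is a loaded Mat):
// detect SIFT keypoints
cv::SiftFeatureDetector detector;
std::vector<cv::KeyPoint> keypoints;
detector.detect(image, keypoints);

// compute a 128-dimensional SIFT descriptor for each keypoint
cv::SiftDescriptorExtractor extractor;
cv::Mat descriptors; // one row per keypoint
extractor.compute(image, keypoints, descriptors);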
If you are using OpenCV, here are the commands to do it; if you are using MATLAB, see the link MATCHING_using surf.
Using OpenCV:
// you can change the parameters for your requirements
double hessianThreshold = 200;
int octaves = 3;
int octaveLayers = 4;
bool upright = false;
vector<KeyPoint> keypoints;
// the detector finds the keypoints in an image; here RGB_IMAGE is of Mat type
SurfFeatureDetector detector( hessianThreshold, octaves, octaveLayers, upright );
detector.detect(RGB_IMAGE, keypoints);
// the extractor computes the local features around the keypoints
SurfDescriptorExtractor extractor;
Mat descriptors;
extractor.compute( RGB_IMAGE, keypoints, descriptors );
// the local features of all keypoints are stored row by row in the descriptors matrix
Hope it is useful :)

Simple and fast method to compare images for similarity

I need a simple and fast way to compare two images for similarity, i.e. I want to get a high value if they contain exactly the same thing but may have a slightly different background and may be moved/resized by a few pixels.
(More concretely, if that matters: one picture is an icon and the other picture is a subarea of a screenshot, and I want to know if that subarea is exactly the icon or not.)
I have OpenCV at hand but I am still not that used to it.
One possibility I have thought about so far: divide both pictures into 10x10 cells and, for each of those 100 cells, compare the color histograms. Then I can set some made-up threshold value, and if the value I get is above that threshold, I assume the images are similar.
I haven't tried yet how well that works, but I guess it would be good enough. The images are already pretty similar (in my use case), so I can use a fairly high threshold value.
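To make that concrete, a sketch of how I imagine it (cell-wise histogram correlation averaged over the grid; the 8 bins per channel are an arbitrary choice):
// compare two equally sized BGR images by per-cell histogram correlation
double gridHistSimilarity(const cv::Mat& a, const cv::Mat& b, int grid = 10)
{
    CV_Assert(a.size() == b.size() && a.type() == b.type());
    int channels[] = { 0, 1, 2 };
    int histSize[] = { 8, 8, 8 };
    float range[] = { 0, 256 };
    const float* ranges[] = { range, range, range };
    double total = 0;
    for (int gy = 0; gy < grid; ++gy)
    {
        for (int gx = 0; gx < grid; ++gx)
        {
            cv::Rect cell(gx * a.cols / grid, gy * a.rows / grid,
                          a.cols / grid, a.rows / grid);
            cv::Mat cellA = a(cell), cellB = b(cell);
            cv::Mat histA, histB;
            cv::calcHist(&cellA, 1, channels, cv::Mat(), histA, 3, histSize, ranges);
            cv::calcHist(&cellB, 1, channels, cv::Mat(), histB, 3, histSize, ranges);
            // 1.0 means identical histograms in this cell
            total += cv::compareHist(histA, histB, CV_COMP_CORREL);
        }
    }
    return total / (grid * grid); // compare this against a hand-tuned threshold
}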
I guess there are dozens of other possible solutions that would work more or less well (the task itself is quite simple, as I only want to detect similarity if the images are really very similar). What would you suggest?
There are a few very related / similar questions about obtaining a signature/fingerprint/hash from an image:
OpenCV / SURF How to generate a image hash / fingerprint / signature out of the descriptors?
Image fingerprint to compare similarity of many images
Near-Duplicate Image Detection
OpenCV: Fingerprint Image and Compare Against Database.
more, more, more, more, more, more, more
Also, I stumbled upon these implementations which have such functions to obtain a fingerprint:
pHash
imgSeek (GitHub repo) (GPL) based on the paper Fast Multiresolution Image Querying
image-match. Very similar to what I was searching for. Similar to pHash, based on An image signature for any kind of image, Goldberg et al. Uses Python and Elasticsearch.
iqdb
ImageHash. Supports pHash.
Image Deduplicator (imagededup). Supports CNN, PHash, DHash, WHash, AHash.
Some discussions about perceptual image hashes: here
A bit off-topic: there exist many methods to create audio fingerprints. MusicBrainz, a web service which provides fingerprint-based lookup for songs, has a good overview in their wiki. They are using AcoustID now. This is for finding exact (or mostly exact) matches. For finding similar matches (or if you only have some snippets or a lot of noise), take a look at Echoprint. A related SO question is here. So it seems this is solved for audio. All these solutions work quite well.
A somewhat more generic question about fuzzy search in general is here. E.g. there is locality-sensitive hashing and nearest neighbor search.
Can the screenshot or icon be transformed (scaled, rotated, skewed ...)? There are quite a few methods off the top of my head that could possibly help you:
Simple Euclidean distance, as mentioned by @carlosdc (doesn't work with transformed images, and you need a threshold).
(Normalized) Cross-Correlation - a simple metric which you can use to compare image areas. It's more robust than the simple Euclidean distance but doesn't work on transformed images, and you will again need a threshold.
Histogram comparison - if you use normalized histograms, this method works well and is not affected by affine transforms. The problem is determining the correct threshold. It is also very sensitive to color changes (brightness, contrast etc.). You can combine it with the previous two.
Detectors of salient points/areas - such as MSER (Maximally Stable Extremal Regions), SURF or SIFT. These are very robust algorithms, and they might be too complicated for your simple task. The good thing is that you do not have to have an exact area with only one icon; these detectors are powerful enough to find the right match. A nice evaluation of these methods is in this paper: Local invariant feature detectors: a survey.
Most of these are already implemented in OpenCV - see for example the cvMatchTemplate method (sliding-window template matching): http://dasl.mem.drexel.edu/~noahKuntz/openCVTut6.html. The salient point/area detectors are also available - see OpenCV Feature Detection.
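For the icon-in-screenshot case specifically, a template-matching sketch (the filenames are placeholders and the 0.9 threshold is something you would tune):
cv::Mat screenshot = cv::imread("screenshot.png");
cv::Mat icon = cv::imread("icon.png");

// slide the icon over the screenshot; result holds one NCC score per position
cv::Mat result;
cv::matchTemplate(screenshot, icon, result, CV_TM_CCOEFF_NORMED);

// the best match is the global maximum of the score map
double maxVal;
cv::Point maxLoc;
cv::minMaxLoc(result, 0, &maxVal, 0, &maxLoc);

if (maxVal > 0.9) // tune this threshold for your data
{
    // the icon was found with its top-left corner at maxLoc
}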
I faced the same issue recently. To solve this problem (a simple and fast algorithm to compare two images) once and for all, I contributed an img_hash module to opencv_contrib; you can find the details at this link.
The img_hash module provides six image hash algorithms and is quite easy to use.
Code example
original lena
blurred lena
resized lena
shifted lena
#include <opencv2/core.hpp>
#include <opencv2/core/ocl.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/img_hash.hpp>
#include <opencv2/imgproc.hpp>

#include <iostream>

void compute(cv::Ptr<cv::img_hash::ImgHashBase> algo)
{
    auto input = cv::imread("lena.png");
    cv::Mat similar_img;

    // detect similar image after blur attack
    cv::GaussianBlur(input, similar_img, {7, 7}, 2, 2);
    cv::imwrite("lena_blur.png", similar_img);
    cv::Mat hash_input, hash_similar;
    algo->compute(input, hash_input);
    algo->compute(similar_img, hash_similar);
    std::cout << "gaussian blur attack : " <<
                 algo->compare(hash_input, hash_similar) << std::endl;

    // detect similar image after shift attack
    similar_img.setTo(0);
    input(cv::Rect(0, 10, input.cols, input.rows - 10)).
        copyTo(similar_img(cv::Rect(0, 0, input.cols, input.rows - 10)));
    cv::imwrite("lena_shift.png", similar_img);
    algo->compute(similar_img, hash_similar);
    std::cout << "shift attack : " <<
                 algo->compare(hash_input, hash_similar) << std::endl;

    // detect similar image after resize
    cv::resize(input, similar_img, {120, 40});
    cv::imwrite("lena_resize.png", similar_img);
    algo->compute(similar_img, hash_similar);
    std::cout << "resize attack : " <<
                 algo->compare(hash_input, hash_similar) << std::endl;
}

int main()
{
    using namespace cv::img_hash;

    // disabling OpenCL acceleration may (or may not) speed up img_hash
    cv::ocl::setUseOpenCL(false);

    // if the value of compare() is <= 8, the images are
    // very similar to each other
    compute(ColorMomentHash::create());

    // there are other algorithms you can try out;
    // every algorithm has its pros and cons
    compute(AverageHash::create());
    compute(PHash::create());
    compute(MarrHildrethHash::create());
    compute(RadialVarianceHash::create());

    // BlockMeanHash supports mode 0 and mode 1, which correspond to
    // mode 1 and mode 2 of the PHash library
    compute(BlockMeanHash::create(0));
    compute(BlockMeanHash::create(1));
}
In this case, ColorMomentHash gives us the best result:
gaussian blur attack : 0.567521
shift attack : 0.229728
resize attack : 0.229358
Pros and cons of each algorithm
The performance of img_hash is good too
Speed comparison with the PHash library (100 images from ukbench)
If you want to know the recommended thresholds for these algorithms, please check this post (http://qtandopencv.blogspot.my/2016/06/introduction-to-image-hash-module-of.html).
If you are interested in how I measure the performance of the img_hash modules (including speed and different attacks), please check this link (http://qtandopencv.blogspot.my/2016/06/speed-up-image-hashing-of-opencvimghash.html).
Does the screenshot contain only the icon? If so, the L2 distance of the two images might suffice. If the L2 distance doesn't work, the next step is to try something simple and well established, like Lucas-Kanade, which I'm sure is available in OpenCV.
If you want an index of the similarity of the two pictures, I suggest the SSIM index. It is more consistent with the human eye. Here is an article about it: Structural Similarity Index
It is implemented in OpenCV too, and it can be accelerated with GPU: OpenCV SSIM with GPU
If you can be sure to have precise alignment of your template (the icon) to the testing region, then any old sum of pixel differences will work.
If the alignment is only going to be a tiny bit off, then you can low-pass both images with cv::GaussianBlur before finding the sum of pixel differences.
If the quality of the alignment is potentially poor then I would recommend either a Histogram of Oriented Gradients or one of OpenCV's convenient keypoint detection/descriptor algorithms (such as SIFT or SURF).
For matching identical images - code for the L2 distance:
// Compare two images by getting the L2 error (square-root of sum of squared error).
double getSimilarity( const Mat A, const Mat B ) {
    if ( A.rows > 0 && A.rows == B.rows && A.cols > 0 && A.cols == B.cols ) {
        // Calculate the L2 relative error between the images.
        double errorL2 = norm( A, B, CV_L2 );
        // Convert to a reasonable scale, since the L2 error is summed across all pixels of the image.
        double similarity = errorL2 / (double)( A.rows * A.cols );
        return similarity;
    }
    else {
        // Images have a different size
        return 100000000.0; // Return a bad value
    }
}
Fast. But not robust to changes in lighting/viewpoint etc.
Source
If you want to compare images for similarity, I suggest you use OpenCV. In OpenCV, there are several feature matching and template matching methods. For feature matching, there are detectors such as SURF, SIFT, FAST and so on. You can use these to detect, describe and then match the images. After that, you can use a specific index to find the number of matches between the two images.
Hu invariant moments are a very powerful tool for comparing two images.
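For example, a sketch comparing the log-scaled Hu moments of two grayscale images (the distance measure mirrors what cv::matchShapes computes internally; the epsilon guards against taking the log of zero):
#include <cmath>
#include <opencv2/imgproc/imgproc.hpp>

double huDistance(const cv::Mat& grayA, const cv::Mat& grayB)
{
    // the 7 Hu invariant moments of each image
    double huA[7], huB[7];
    cv::HuMoments(cv::moments(grayA), huA);
    cv::HuMoments(cv::moments(grayB), huB);

    double dist = 0;
    for (int i = 0; i < 7; ++i)
    {
        // log scaling, since the raw moments span many orders of magnitude
        double sa = (huA[i] < 0 ? -1.0 : 1.0) * log10(fabs(huA[i]) + 1e-30);
        double sb = (huB[i] < 0 ? -1.0 : 1.0) * log10(fabs(huB[i]) + 1e-30);
        dist += fabs(sa - sb);
    }
    return dist; // smaller means more similar
}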
Hash functions are used in the undouble library to detect (near-)identical images (disclaimer: I am also the author). This is a simple and fast way to compare two or more images for similarity. It works through a multi-step process of pre-processing the images (grayscaling, normalizing, and scaling), computing the image hash, and grouping images based on a threshold value.
