I use OpenCV and need to find a match in a picture taken from a video stream. The functions cvMatchTemplate() and cvMinMaxLoc() find the match absolutely correctly. But the problem is that when there is no match, OpenCV finds one anyway, even on a blank white sheet.
Can anyone tell me what the problem is? Should I use another function to detect the match, or is there some way to tell that there is no match?
Thank you
cvMatchTemplate outputs a map of similarity measures/image distances (depending on the method parameter). You want to look at these similarity measures and threshold them at a reasonable value (so, only accept a match if the similarity is high/the distance is low). cvMinMaxLoc also gives you the values, and you can use those for thresholding: look at the values for positive and negative samples, and set the threshold in between, as in the sketch below.
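For illustration, a minimal sketch using the modern Python API (the cv* C functions behave the same way); the 0.8 threshold and the file names are assumptions you would tune on your own positive and negative samples:

    import cv2

    image = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)       # frame grabbed from the video
    template = cv2.imread("template.png", cv2.IMREAD_GRAYSCALE)

    # TM_CCOEFF_NORMED yields a similarity in [-1, 1]; higher is better.
    result = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)
    min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(result)

    if max_val >= 0.8:     # accept only sufficiently similar matches
        print("match at", max_loc, "score", max_val)
    else:
        print("no match")  # a blank white sheet should end up here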
Related
I am trying to write a function in OpenCV for comparing two images, imageA and imageB, to check to what extent they are similar.
I want to arrive at three comparison scores (0 to 100) as shown below.
1. Histograms - compareHist() : OpenCV method
2. Template Matching - matchTemplate() : OpenCV method
3. Feature Matching - BFMatcher() : OpenCV method
Based on the scores derived from the above calculations, I want to arrive at a conclusion regarding the matching.
I was successful in getting these functions to work, but not at getting a comparison score out of them. It would be great if someone could help me with that. Also, any other advice regarding this sort of image matching is welcome.
I know there are different kinds of algorithms that can be used for the above functions, so let me clarify the kind of images that I will be using.
1. As mentioned above it will be a one-to-one comparison.
2. They are all images taken by a human using a mobile camera.
3. The images that match will mostly be taken of the same object/place from the same spot. (Depending on the time of day, the lighting could differ.)
4. If the images don't match, the user will be asked to take another one until it matches.
5. The kinds of images compared could include a corridor, an office table, a computer screen (with the content on the screen to be compared), a paper document, etc.
1- With histograms you can get a comparison score using histogram intersection. If you divide the intersection of the two histograms by their union, you get a score between 0 (no match at all) and 1 (complete match).
You can compute the intersection of two histograms with a simple for loop.
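As a rough sketch (in Python, with grayscale histograms; the bin count and file names are assumptions), the loop boils down to summing the bin-wise minima and maxima:

    import cv2
    import numpy as np

    def histogram_score(img_a, img_b, bins=256):
        hist_a = cv2.calcHist([img_a], [0], None, [bins], [0, 256]).ravel()
        hist_b = cv2.calcHist([img_b], [0], None, [bins], [0, 256]).ravel()
        intersection = np.minimum(hist_a, hist_b).sum()  # overlap of the two histograms
        union = np.maximum(hist_a, hist_b).sum()
        return intersection / union  # 0 = no match at all, 1 = complete match

    img_a = cv2.imread("imageA.png", cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread("imageB.png", cv2.IMREAD_GRAYSCALE)
    print(histogram_score(img_a, img_b))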
2- In template matching, the score you get differs for each comparison method. In this link you can see the details of each method. In some methods the highest score means the best match, but in others the lowest score means the best match. To define a score between 0 and 1, you should consider two reference points: matching an image with itself (the highest possible score) and matching two completely different images (the lowest), and then normalize the scores by the number of pixels in the image (height*width).
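One hedged shortcut: with a *_NORMED method the normalization by pixel count is already built in, so for a one-to-one comparison you can map the single output value to [0, 1] directly (a sketch, assuming both images have the same size):

    import cv2

    def template_score(img_a, img_b):
        # With equally sized inputs the result map is a single value.
        result = cv2.matchTemplate(img_a, img_b, cv2.TM_CCOEFF_NORMED)
        # TM_CCOEFF_NORMED lies in [-1, 1]; shift and scale it to [0, 1].
        return (result[0][0] + 1.0) / 2.0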
3- Feature matching is different from the last two methods. You may have two similar images with poor features (which fail to match), or two conceptually different images with many matched features. Still, if the images are feature-rich, we can define something like a score. For this purpose, consider this example:
Img1 has 200 features
Img2 has 170 features
These two images have 100 matched features
Consider 0.5 (100/200) as the whole-image matching score.
You can also take the distances between the matched pairs of features into account in the scoring, but I think that's enough.
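A minimal sketch of that score, assuming ORB features (SIFT plus a float-distance matcher would work the same way):

    import cv2

    def feature_score(img_a, img_b):
        orb = cv2.ORB_create()
        kp_a, des_a = orb.detectAndCompute(img_a, None)
        kp_b, des_b = orb.detectAndCompute(img_b, None)
        if des_a is None or des_b is None:
            return 0.0  # feature-poor image, no meaningful score
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = matcher.match(des_a, des_b)
        # e.g. 100 matches with 200 and 170 features -> 100 / 200 = 0.5
        return len(matches) / max(len(kp_a), len(kp_b))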
Regarding the combined comparison score: have you tried implementing a weighted average to get a final metric? Weight the 3 matching methods you are implementing according to their accuracy; the best method gets the “heaviest” weight.
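A tiny sketch of such a weighted average; the weights here are pure assumptions and should come from how each method performs on your data:

    def combined_score(hist_score, template_score, feature_score,
                       weights=(0.2, 0.3, 0.5)):
        # Each input score is assumed to lie in [0, 1]; output is 0..100.
        w_h, w_t, w_f = weights
        return 100.0 * (w_h * hist_score + w_t * template_score + w_f * feature_score)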
Also, if you want to explore additional matching methods, give FFT-based matching a try: http://machineawakening.blogspot.com/2015/12/fft-based-cosine-similarity-for-fast.html
Inspired by this tutorial on Feature Matching, I'm trying to do template matching and clustering of an image set I have.
In most of my dataset the images are straight (rotated by maybe 10 degrees at most).
I would like to use this information to get better matches.
I have noticed that I sometimes get a false match, and when I display it I can see that the match vectors point in all different angles (not straight lines). How can I check whether the match is straight or rotated?
Thanks for the help
I'm not sure I understand everything; what do you mean by a straight image?
As for the matches: when you compare two images, you will probably have many features that correspond between those two images, and you cannot ensure that they all describe straight lines. You can only expect roughly straight lines when you try to find an object in an image as in the example, but this is just a visualization...
If you only want to do clustering, I advise you to compare the features directly without doing any matching; you'll probably find a cluster of common features for some images that you can group together.
So ORB and SIFT try to match features in a pair of images. The reason why you get mismatches is that some of the features are too similar, and the system mistakes them for a match.
You will need to raise your detector's threshold and tighten the matcher's criterion for acceptable matches.
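Since your images are nearly upright, one way to check the matches geometrically (a sketch, assuming kp1, kp2 and matches come from your own detector/matcher step, and an OpenCV version that has estimateAffinePartial2D, i.e. 3.2 or later) is to fit a similarity transform with RANSAC and inspect its rotation angle:

    import math
    import cv2
    import numpy as np

    def match_rotation_degrees(kp1, kp2, matches):
        if len(matches) < 3:
            return None  # too few matches to fit a transform
        src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        M, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
        if M is None:
            return None  # no consistent transform -> likely a false match
        return math.degrees(math.atan2(M[1, 0], M[0, 0]))

    # Reject the pair if the recovered rotation exceeds your ~10 degree prior:
    # angle = match_rotation_degrees(kp1, kp2, good_matches)
    # if angle is None or abs(angle) > 10: treat it as a false match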
I'm quite new to OpenCV and image processing, so my questions about the feature-matching approach are a bit general. I have read some of the theory, but I have trouble fitting that very specific theory into these steps.
As I understand it, I would group the sequence into the following steps:
Feature detection: special points in the image are found (in a robust, repeatable way)
Feature description: information about the near neighborhood is collected and one vector per feature point is created
->(1) Is this always in the form of a histogram?
Matching: A distance between the descriptors is calculated
->(2) Can I determine what kind of distance is used? I read about χ^2 and EMD; even if they are not implemented, are these the right keywords here?
Corresponding matches are determined
->(3) I guess the Hungarian method would be one option?
Transformation estimation: In an optimization problem the best position is estimated
It would be nice if someone could clarify the marked questions.
(1): Is this always in the form of a histogram?
No; for example, there are binary descriptors for ORB features. In theory, descriptors can be anything. Often they are normalized, and often they are either binary or floating point. But histograms have some properties which can make them good descriptors.
(2): Can I determine what kind of distance is used?
For floating-point descriptors, the sum of squared distances is probably the most widely used metric. For binary descriptors, AFAIK, Hamming distance is used.
(3): I guess the Hungarian method would be one option?
It could be used, I guess, but it might or might not lead to problems. Typically nearest-neighbor approaches are used, often just brute force (which is O(n^2) instead of the O(n^3) of the Hungarian method). The "problem" that multiple descriptors of one set might have the same nearest neighbor in the second set is in fact another feature: if that happens, you might be able to filter out some "uncertain" matches (often the ratio of the best n matches is used to filter out even more). You must assume that many descriptors in a set will have no fitting correspondence in the second set, and you must assume that the matching itself won't produce perfect matches. Typically some additional steps, like homography computation, are used to make the matching more robust and to filter out outliers.
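A sketch tying (1)-(3) together, assuming ORB (binary descriptors, hence Hamming distance; for SIFT you would use NORM_L2 instead) and the brute-force 2-NN search plus ratio filtering described above; the file names are assumptions:

    import cv2

    img1 = cv2.imread("img1.png", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("img2.png", cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create()  # binary descriptors -> Hamming distance
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)  # use cv2.NORM_L2 for SIFT
    pairs = matcher.knnMatch(des1, des2, k=2)  # brute-force 2-NN, O(n^2)

    # Ratio test: keep a match only if it is clearly better than the runner-up.
    good = [p[0] for p in pairs
            if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]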
I have a set of template images against which I need to compare a test image and find the best match. Given that we have SIFT descriptors, I select the best feature match, and all feature matches that lie within 3*distance of the best match are considered good matches. Then I add up the distances of all the good matches. I don't know if this is a good approach, because I think I should also take into account the number of good matches, not just the average of the distances between them. I am new to template matching, so I would appreciate your input.
In these test images, is the template you are looking for always in the same perspective (undistorted)? If so, I would recommend a more accurate technique than feature-point matching. OpenCV offers a function called matchTemplate(), and there is even a GPU implementation. Your measure can be based on the pixel-averaged result of that function.
If they are distorted, then using SIFT or SURF might suffice. You should send your point matches through findHomography(), which will use RANSAC to remove outliers. The number of matches that survive this test could be used as a measure to decide whether the object is found.
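A minimal sketch of that check (the point arrays are assumed to be the coordinates of your matched features, and the inlier threshold of 10 is an assumption to tune):

    import cv2
    import numpy as np

    def object_found(src_pts, dst_pts, min_inliers=10):
        # src_pts/dst_pts: float32 arrays of shape (N, 1, 2) with matched coordinates.
        if len(src_pts) < 4:
            return False  # findHomography needs at least 4 point pairs
        H, mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)
        return H is not None and int(mask.sum()) >= min_inliers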
I am using the OpenCV SURF tracker to find exact points in two images.
As you know, SURF returns many feature points in both images. What I want to do is use these feature parameters to find out which matches are exactly correct (true-positive matches). In my application I need only true-positive matches.
These parameters are available: Hessian, Laplacian, distance, size, dir.
I don't know how to use these parameters.
Do exact matches have a smaller distance or a larger Hessian? Can the Laplacian help? Can size or dir help?
How can I find exact matches (true positives)?
You can find very decent matches between the descriptors of the query and the image by adopting the following strategy to filter out false positives:
Use a 2-NN search for the query descriptors among the image descriptors, with the following condition:
if distance(1st match) < 0.6*distance(2nd match), the 1st match is a "good match".
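A sketch of that ratio test with SURF (SURF lives in the opencv-contrib xfeatures2d module and may require a non-free build; the Hessian threshold of 400 and the file names are assumptions):

    import cv2

    img1 = cv2.imread("query.png", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("image.png", cv2.IMREAD_GRAYSCALE)

    surf = cv2.xfeatures2d.SURF_create(400)
    kp1, des1 = surf.detectAndCompute(img1, None)
    kp2, des2 = surf.detectAndCompute(img2, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pairs = matcher.knnMatch(des1, des2, k=2)  # 2-NN search per query descriptor

    good = [p[0] for p in pairs
            if len(p) == 2 and p[0].distance < 0.6 * p[1].distance]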
Obviously you can't be 100% sure which points truly match. You can increase true positives (at the cost of performance) by tuning the SURF parameters (see some links here). Depending on your actual task, you can use robust algorithms to eliminate outliers, e.g. RANSAC if you perform some kind of model fitting. Also, as Erfan said, you can use spatial information (check out "Elastic Bunch Graph Matching" and spatial BoW).
The answer I'm about to post is just a guess, because I have not tested it to see whether it works exactly as predicted.
By comparing the relative polar distances between 3 random candidate feature points returned by OpenCV with those of the counterpart points in the template (allowing a certain error), you can compute not only the probability of a true positive, but also the angle and the scale of your matched pattern.
Cheers!