I have extracted dense SIFT from the query and database images and quantized the descriptors with k-means using VLFeat. The challenge is to find the SIFT features that are quantized to the same visual words and are spatially consistent (have a similar position relative to the object center). I have tried a few techniques:
Using FLANN on the SIFT (standard SIFT) coordinates of both the query and database images to find nearest neighbors and then comparing the visual words (note: this gave only a few points and did not work).
Using Coherent Point Drift (CPD) on the SIFT coordinates to find matched points (I am not sure whether this is the right solution).
I have been struggling with this for many days, and I hope experts can guide me. What are the possible solutions or algorithms that I can use for solving this?
Neither of the two methods you mentioned achieves what you want to do. The answer depends on the object in your pictures. If it has mostly flat faces, then you can rely on estimating a homography; see this tutorial.
If that's not the case, then you can use the epipolar constraint to remove outliers / get geometrically consistent matches; see this tutorial. There are other ways to achieve this if speed is important in your application.
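For the epipolar-constraint route, a minimal OpenCV C++ sketch could look like the following. The function name, the 3-pixel RANSAC threshold, and the assumption that tentative matches (e.g. features assigned to the same visual word) have already been paired up are all my own choices, not taken from the tutorials:

#include <opencv2/core.hpp>
#include <opencv2/calib3d.hpp>
#include <vector>

// queryPts[i] and dbPts[i] are the image coordinates of the i-th tentative match,
// e.g. a pair of features quantized to the same visual word.
std::vector<cv::DMatch> filterWithEpipolarConstraint(
    const std::vector<cv::Point2f>& queryPts,
    const std::vector<cv::Point2f>& dbPts,
    const std::vector<cv::DMatch>& tentative)
{
    if (queryPts.size() < 8) return {};   // need at least 8 pairs for RANSAC here

    std::vector<uchar> inlierMask;
    // RANSAC rejects matches that are inconsistent with a single fundamental matrix.
    cv::findFundamentalMat(queryPts, dbPts, cv::FM_RANSAC, 3.0, 0.99, inlierMask);

    std::vector<cv::DMatch> consistent;
    for (size_t i = 0; i < inlierMask.size(); ++i)
        if (inlierMask[i]) consistent.push_back(tentative[i]);
    return consistent;
}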
I have a large image (5400x3600) that has multiple CCTVs that I need to detect.
The detection takes a lot of time (4-7 minutes) with rotation, but it still fails to resolve certain CCTVs.
What is the best method to match a template like this?
I am using skimage - OpenCV is not an option for me, but I am open to suggestions on that too.
For example: in the images below, the template is correctly matched with the second image, but the first image is not matched - I guess due to the noise created by the text "BLDG..."
Template:
Source image:
Match result:
The fastest method is probably a cascade of boosted classifiers trained with several variations of your logo, possibly a few rotations, and some negative examples too (non-logos). You have to roughly scale your overall image so the test and training examples are approximately matched in scale. Unlike SIFT or SURF, which spend a lot of time searching for interest points and creating descriptors for both learning and searching, binary classifiers shift most of the burden to the training stage, while your testing or search will be much faster.
In short, the cascade runs in such a way that the very first test discards a large portion of the image. If the first test passes, the others follow and refine. They are super fast, consisting of just a few intensity comparisons on average around each point. Only a few locations pass the whole cascade and can be verified with additional tests such as your rotation-correlation routine.
Thus, the classifiers are effective not only because they quickly detect your object but because they can also quickly discard non-object areas. To read more about boosted classifiers, see the corresponding OpenCV section.
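If you go this way, the detection side is only a few lines once a cascade has been trained (training is a separate offline step, e.g. with opencv_traincascade). A rough OpenCV sketch, where the cascade file name, the image name and the parameters are placeholders rather than tuned values:

#include <opencv2/objdetect.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>
#include <cstdio>
#include <vector>

int main()
{
    // "cctv_cascade.xml" is a hypothetical cascade you would train yourself.
    cv::CascadeClassifier cascade("cctv_cascade.xml");
    cv::Mat plan = cv::imread("floor_plan.png", cv::IMREAD_GRAYSCALE);
    cv::equalizeHist(plan, plan);   // normalize contrast before detection

    std::vector<cv::Rect> hits;
    // scaleFactor 1.1 and minNeighbors 3 are common starting values.
    cascade.detectMultiScale(plan, hits, 1.1, 3, 0, cv::Size(24, 24));

    // Each rectangle is a candidate that survived the whole cascade and can be
    // verified further, e.g. with a rotation-correlation check.
    std::printf("found %zu candidate locations\n", hits.size());
    return 0;
}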
This problem is generally addressed by logo detection. See this for a similar discussion.
There are many robust methods for template matching. See this or Google for a very detailed discussion.
But from your example I can guess that the following approach would work.
Create a feature for your search image. It essentially has a rectangle enclosing the word "CCTV". So the width, height, angle, and individual character features for matching the textual information could be a suitable choice. (Or you may also use the image containing "CCTV"; in that case the method will not be scale invariant.)
Now, when searching, first detect rectangles. Then use the angle to prune your search space and apply an image transformation to align the rectangles parallel to the axes (this should take care of the need for rotation). Then, according to the feature chosen in step 1, match the text content. If you use individual character features, your template matching step is essentially a classification step. Otherwise, if you use an image for matching, you may use cv::matchTemplate, as in the sketch below.
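A small sketch of that last step with cv::matchTemplate; "crop" is assumed to be a rectangle that has already been detected and de-rotated, and the score threshold is only a guess:

#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>

// "crop" and "tmpl" are both grayscale, and the crop is at least as large as the template.
bool looksLikeCCTV(const cv::Mat& crop, const cv::Mat& tmpl, double minScore = 0.7)
{
    cv::Mat result;
    // Normalised cross-correlation is fairly tolerant of uniform lighting changes.
    cv::matchTemplate(crop, tmpl, result, cv::TM_CCOEFF_NORMED);

    double maxVal = 0.0;
    cv::minMaxLoc(result, nullptr, &maxVal);
    return maxVal >= minScore;   // threshold needs tuning on real data
}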
Hope it helps.
Symbol spotting is more complicated than logo spotting because interest points hardly work on document images such as architectural plans. Many conferences deal with pattern recognition, and each year there are many new algorithms for symbol spotting, so giving you the best method is not possible. You could check the IAPR conferences: ICPR, ICDAR, DAS, GREC (Workshop on Graphics Recognition), etc. These researchers focus on this topic: M. Rusiñol, J. Lladós, S. Tabbone, J.-Y. Ramel, M. Liwicki, etc. They work on several techniques for improving symbol spotting, such as vectorial signatures, graph-based signatures, and so on (check Google Scholar for more papers).
An easy way to start a new approach is to work with simple shapes such as lines, rectangles, and triangles instead of matching everything at once.
Your example can be recognized by shape matching (contour matching), much faster than 4 minutes.
For a good match, you need careful preprocessing and denoising.
Examples can be found at http://www.halcon.com/applications/application.pl?name=shapematch
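As a starting point, a contour-matching sketch with OpenCV's Hu-moment based cv::matchShapes might look like this; both inputs are assumed to be already denoised and binarised, and the dissimilarity threshold is illustrative:

#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <vector>

// Compare the outer contour of a binarised template against candidate contours in the plan.
std::vector<std::vector<cv::Point>> findShapeMatches(
    const cv::Mat& binTemplate, const cv::Mat& binPlan, double maxDissimilarity = 0.15)
{
    std::vector<std::vector<cv::Point>> tmplContours, planContours, matches;
    // clone() because some OpenCV versions modify the input image.
    cv::findContours(binTemplate.clone(), tmplContours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    cv::findContours(binPlan.clone(), planContours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    if (tmplContours.empty()) return matches;

    for (const auto& c : planContours) {
        // Hu-moment comparison is rotation and scale invariant.
        double d = cv::matchShapes(tmplContours[0], c, cv::CONTOURS_MATCH_I1, 0.0);
        if (d < maxDissimilarity) matches.push_back(c);
    }
    return matches;
}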
I've been playing with the excellent GPUImage library, which implements several feature detectors: Harris, FAST, Shi-Tomasi, Noble. However, none of those implementations help with the feature extraction and matching part. They simply output a set of detected corner points.
My understanding (which is shaky) is that the next step would be to examine each of those detected corner points and extract a feature from them, resulting in a descriptor, i.e. a 32- or 64-bit number that could be used to index the point near other, similar points.
From reading Chapter 4.1 of [Computer Vision Algorithms and Applications, Szeliski], I understand that using a best-bin-first approach would help to efficiently find neighbouring features to match, etc. However, I don't actually know how to do this and I'm looking for some example code that does this.
I've found this project [https://github.com/Moodstocks/sift-gpu-iphone] which claims to implement as much of the feature extraction as possible on the GPU. I've also seen some discussion that indicates it might generate buggy descriptors.
And in any case, that code doesn't go on to show how the extracted features would be best matched against another image.
My use case is trying to find objects in an image.
Does anyone have any code that does this, or at least a good implementation that shows how the extracted features are matched? I'm hoping not to have to rewrite the whole set of algorithms.
thanks,
Rob.
First, you need to be careful with SIFT implementations, because the SIFT algorithm is patented and the owners of those patents require license fees for its use. I've intentionally avoided using that algorithm for anything as a result.
Finding good feature detection and extraction methods that also work well on a GPU is a little tricky. The Harris, Shi-Tomasi, and Noble corner detectors in GPUImage are all derivatives of the same base operation, and probably aren't the fastest way to identify features.
As you can tell, my FAST corner detector isn't operational yet. The idea there is to use a lookup texture based on a local binary pattern (which is why I built that filter first, to test the concept), and to have that return whether it's a corner point or not. That should be much faster than the Harris, etc. corner detectors. I also need to finish my histogram pyramid point extractor so that feature extraction isn't done in an extremely slow loop on the GPU.
The use of a lookup texture for a FAST corner detector is inspired by this paper by Jaco Cronje on a technique they refer to as BFROST. In addition to using the quick, texture-based lookup for feature detection, the paper proposes using the binary pattern as a quick descriptor for the feature. There's a little more to it than that, but in general that's what they propose.
Feature matching is done by Hamming distance, but while there are quick CPU-side and CUDA instructions for calculating that, OpenGL ES doesn't have one. A different approach might be required there. Similarly, I don't have a good solution for finding a best match between groups of features beyond something CPU-side, but I haven't thought that far yet.
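For reference, the CPU-side Hamming distance between two binary descriptors boils down to an XOR plus a population count per word, along the lines of this plain C++ sketch (using the GCC/Clang popcount intrinsic; nothing here is GPUImage-specific):

#include <cstdint>
#include <cstddef>

// Hamming distance between two binary descriptors stored as arrays of 32-bit words
// (a 256-bit descriptor is 8 words).
unsigned hammingDistance(const uint32_t* a, const uint32_t* b, size_t words)
{
    unsigned dist = 0;
    for (size_t i = 0; i < words; ++i)
        dist += __builtin_popcount(a[i] ^ b[i]);   // count the differing bits
    return dist;
}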
It is a primary goal of mine to have this in the framework (it's one of the reasons I built it), but I haven't had the time to work on this lately. The above are at least my thoughts on how I would approach this, but I warn you that this will not be easy to implement.
For object recognition these days (as of a couple of weeks ago), it is best to use TensorFlow / convolutional neural networks for this.
Apple recently added some Metal sample code: https://developer.apple.com/library/content/samplecode/MetalImageRecognition/Introduction/Intro.html#//apple_ref/doc/uid/TP40017385
To do feature detection within an image, I draw your attention to the out-of-the-box KAZE/AKAZE algorithm in OpenCV.
http://www.robesafe.com/personal/pablo.alcantarilla/kaze.html
For iOS, I glued the AKAZE class together with another stitching sample to illustrate.
cv::Ptr<cv::AKAZE> detector = cv::AKAZE::create();
std::vector<cv::KeyPoint> keypoints;
detector->detect(mat, keypoints); // this will find the keypoints
cv::drawKeypoints(mat, keypoints, mat);
// example detected keypoint as shown in the debugger (the AKAZE descriptor,
// which plays the role of a SIFT descriptor, is computed separately):
.. [255] = {
    pt = (x = 645.707153, y = 56.4605064)
    size = 4.80000019
    angle = 0
    response = 0.00223364262
    octave = 0
    class_id = 0 }
https://github.com/johndpope/OpenCVSwiftStitch
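To go from detected keypoints to matches between two images, the usual continuation (not part of the linked sample, just a sketch) is detectAndCompute followed by a Hamming-distance matcher, since AKAZE's default descriptor is binary:

#include <opencv2/core.hpp>
#include <opencv2/features2d.hpp>
#include <vector>

// Detect, describe and match AKAZE features between two grayscale images.
std::vector<cv::DMatch> matchAkaze(const cv::Mat& img1, const cv::Mat& img2)
{
    cv::Ptr<cv::AKAZE> akaze = cv::AKAZE::create();
    std::vector<cv::KeyPoint> kps1, kps2;
    cv::Mat desc1, desc2;
    akaze->detectAndCompute(img1, cv::noArray(), kps1, desc1);
    akaze->detectAndCompute(img2, cv::noArray(), kps2, desc2);

    // AKAZE's default MLDB descriptor is binary, so use the Hamming distance.
    cv::BFMatcher matcher(cv::NORM_HAMMING);
    std::vector<std::vector<cv::DMatch>> knn;
    matcher.knnMatch(desc1, desc2, knn, 2);

    std::vector<cv::DMatch> good;
    for (const auto& pair : knn)   // Lowe-style ratio test
        if (pair.size() == 2 && pair[0].distance < 0.8f * pair[1].distance)
            good.push_back(pair[0]);
    return good;
}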
Here is a GPU accelerated SIFT feature extractor:
https://github.com/lukevanin/SIFTMetal
The code is written in Swift 5 and uses Metal compute shaders for most operations (scaling, Gaussian blur, key point detection and interpolation, feature extraction). The implementation is largely based on the paper and code from the "Anatomy of the SIFT Method" article published in Image Processing On Line (IPOL) in 2014 (http://www.ipol.im/pub/art/2014/82/). Some parts are based on code by Rob Hess (https://github.com/robwhess/opensift), which I believe is now used in OpenCV.
For feature matching I tried using a k-d tree with the best-bin-first (BBF) method proposed by David Lowe. While BBF does provide some benefit up to about 10 dimensions, with a higher number of dimensions such as used by SIFT it is no better than quadratic search due to the "curse of dimensionality". That is to say, if you compare 1,000 descriptors against 1,000 other descriptors, it still ends up making 1,000 x 1,000 = 1,000,000 comparisons, the same as brute-force pairwise matching.
In the linked code I use a different approach, optimised for performance over accuracy. I use a trie to locate the general vicinity of potential neighbours, then search a fixed number of sibling leaf nodes for the nearest neighbours. In practice this matches about 50% of the descriptors, but only makes 1,000 x 20 = 20,000 comparisons, about 50x faster, and it scales linearly instead of quadratically.
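The linked implementation is in Swift, but the general idea of "cheaply locate the right neighbourhood, then compare exactly against only a handful of candidates" can be illustrated with a much-simplified C++ sketch. The coarse key built from the first four dimensions is my own simplification, not the trie layout used in SIFTMetal:

#include <algorithm>
#include <array>
#include <cstdint>
#include <limits>
#include <unordered_map>
#include <vector>

using Descriptor = std::array<float, 128>;   // SIFT-sized descriptor

// Coarse key from the first few dimensions: descriptors that agree on these bins
// are treated as "in the same vicinity" and become candidates for exact comparison.
// Assumes descriptor values are roughly in [0, 1].
static uint32_t coarseKey(const Descriptor& d)
{
    uint32_t key = 0;
    for (int i = 0; i < 4; ++i) {
        int bin = std::max(0, std::min(7, static_cast<int>(d[i] * 8.0f)));
        key = key * 8 + static_cast<uint32_t>(bin);
    }
    return key;
}

std::unordered_map<uint32_t, std::vector<int>> buildBuckets(const std::vector<Descriptor>& database)
{
    std::unordered_map<uint32_t, std::vector<int>> buckets;
    for (int i = 0; i < static_cast<int>(database.size()); ++i)
        buckets[coarseKey(database[i])].push_back(i);
    return buckets;
}

// Returns the index of the nearest database descriptor found in the query's bucket,
// or -1 if the bucket is empty (so a fraction of queries simply go unmatched).
int approxNearest(const Descriptor& query,
                  const std::vector<Descriptor>& database,
                  const std::unordered_map<uint32_t, std::vector<int>>& buckets)
{
    auto it = buckets.find(coarseKey(query));
    if (it == buckets.end()) return -1;

    int best = -1;
    float bestDist = std::numeric_limits<float>::max();
    for (int idx : it->second) {   // only a handful of candidates, not the whole database
        float dist = 0.0f;
        for (size_t k = 0; k < query.size(); ++k) {
            float diff = query[k] - database[idx][k];
            dist += diff * diff;
        }
        if (dist < bestDist) { bestDist = dist; best = idx; }
    }
    return best;
}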
I am still testing and refining the code. Hopefully it helps someone.
I have 2 objects. I get n features from object 1 & m features from object 2.
n != m
I have to measure the probability that object 1 is similar to object 2.
How can I do this?
There is a nice tutorial on the OpenCV website that does this. Check it out.
The idea is to get the distances between all those descriptors with a FlannBasedMatcher, keep the closest ones, and run RANSAC to find a set of consistent features between the two objects. You don't get a probability, but you do get the number of consistent features, from which you may score how good your detection is; that part is up to you. A rough sketch follows.
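A condensed C++ version of that pipeline for float descriptors such as SIFT or SURF, with typical (not prescribed) thresholds for the ratio test and RANSAC:

#include <opencv2/core.hpp>
#include <opencv2/features2d.hpp>
#include <opencv2/calib3d.hpp>
#include <vector>

// Returns the number of geometrically consistent matches between two descriptor sets.
// kps1/desc1 come from object 1 and kps2/desc2 from object 2 (n and m may differ).
int countConsistentMatches(const std::vector<cv::KeyPoint>& kps1, const cv::Mat& desc1,
                           const std::vector<cv::KeyPoint>& kps2, const cv::Mat& desc2)
{
    cv::FlannBasedMatcher matcher;
    std::vector<std::vector<cv::DMatch>> knn;
    matcher.knnMatch(desc1, desc2, knn, 2);

    std::vector<cv::Point2f> pts1, pts2;
    for (const auto& pair : knn)   // keep only unambiguous matches (Lowe's ratio test)
        if (pair.size() == 2 && pair[0].distance < 0.75f * pair[1].distance) {
            pts1.push_back(kps1[pair[0].queryIdx].pt);
            pts2.push_back(kps2[pair[0].trainIdx].pt);
        }
    if (pts1.size() < 4) return 0;   // a homography needs at least 4 correspondences

    std::vector<uchar> inliers;
    cv::findHomography(pts1, pts2, cv::RANSAC, 3.0, inliers);
    return cv::countNonZero(inliers);   // the "score" mentioned above
}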
You can group the features in the regions of the image where features are denser.
Store them in a vector for reuse. There may be multiple matches, from amongst which you can choose the highest-scoring one.
Are you talking about point feature descriptors, like SIFT, SURF, or FREAK?
In that case there are several strategies. In all cases you need a distance measure. For SIFT or SURF you can use the Euclidean distance between the descriptors, or the L1 norm, or the dot product (correlation). For binary features, like FREAK or BRISK, you typically use the Hamming distance.
Then, one approach is to simply pick a threshold on the distance. This is likely to give you many-to-many matches. Another way is to use bipartite graph matching to find the minimum-cost or maximum-weight assignment between the two sets. A very practical approach, described by David Lowe, uses a ratio test to discard ambiguous matches.
Many of these strategies are implemented in the matchFeatures function in the Computer Vision System Toolbox for MATLAB.
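If you are working in OpenCV rather than MATLAB, the distance-threshold strategy combined with one-to-one matching can be approximated with a cross-checking brute-force matcher. This is a sketch for binary descriptors such as FREAK or BRISK; the Hamming threshold is only indicative:

#include <opencv2/core.hpp>
#include <opencv2/features2d.hpp>
#include <vector>

// Cross-checking keeps a match only when the two descriptors are mutual nearest
// neighbours, a cheap stand-in for a full bipartite assignment.
std::vector<cv::DMatch> matchBinaryDescriptors(const cv::Mat& desc1, const cv::Mat& desc2,
                                               float maxHamming = 60.0f)
{
    cv::BFMatcher matcher(cv::NORM_HAMMING, /*crossCheck=*/true);
    std::vector<cv::DMatch> matches, kept;
    matcher.match(desc1, desc2, matches);

    for (const auto& m : matches)
        if (m.distance <= maxHamming)   // simple distance threshold on top of the cross-check
            kept.push_back(m);
    return kept;
}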
I found out that SIFT features are only good for finding the same object in the scene, but they seem unsuitable for "similar" objects.
Maybe I am doing something wrong?
Maybe I should use some other descriptors?
Images and how the SIFT/ASIFT algorithms perform on them:
link
Same problem - no matches:
link
I found out that SIFT features are only good for finding the same object in the scene, but they seem unsuitable for "similar" objects.
That is exactly what they are doing (and not only them; the task is called "wide baseline matching"): 1) For each feature, find the most similar one, called a "tentative" or "putative" correspondence.
2) Use RANSAC or another similar method to find the geometric transformation between the sets of correspondences.
So, if you need to find "similar" objects, you have to use another method, like Viola-Jones: http://en.wikipedia.org/wiki/Viola%E2%80%93Jones_object_detection_framework
Or (though it will give you a lot of false positives) you can compare the big image to the small one and skip step 2.
The basic SIFT algorithm using VLFeat gives me this as a result, which, given the small and not-so-unique target image, is a pretty good result, I would say.
Does anyone know the particular algorithm for Probabilistic Hough Transform in the OpenCV's implementation? I mean, is there a reference paper or documentation about the algorithm?
To get the idea, I can certainly look into the source code, but I wonder if there is any documentation about it; it's not in the source code's comments (OpenCV 1.0).
Thank you!
-Jin
The OpenCV documentation states that the algorithm is based on "Robust detection of lines using the progressive probabilistic Hough transform" by J. Matas et al. This is quite different from the RHT described on Wikipedia.
The paper does not seem to be freely available on the internet, but you can purchase it from Elsevier.
The source code for HoughLinesProbabilistic in OpenCV 2.4.4 contains inline comments that explain the various steps involved.
https://github.com/Itseez/opencv/blob/master/modules/imgproc/src/hough.cpp
The article "Line Detection by Hough Transformation", section 6, could be useful.
Here is a fairly concise paper by Matas et al. that describes the approach, and as others mentioned, it is indeed quite different from the Randomized Hough Transform:
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.40.2186&rep=rep1&type=pdf
(Not sure for how long this link is going to be valid though. It's on/from citeseer, wouldn't expect it to just vanish tomorrow, but who knows...)
I had a quick look at the implementation of icvHoughLinesProbabilistic() in hough.cpp, because I'll be using it :-) It seems fairly straightforward. Anyway, my primary interest was whether it does some least-squares line fitting at the end - it doesn't, which is fine. It just means that if accurate line segments are desired, one may want to use the start/end points and implied line parameters returned by OpenCV to select related points from the overall point set. I'd use a fairly conservative distance threshold in the first place, run RANSAC/MSAC on these points with a smaller threshold, and finally fit a line to the inlier set as usual, e.g. using OpenCV's cvFitLine().
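With the modern C++ API, that post-processing idea might be sketched as follows. The HoughLinesP parameters are illustrative, the RANSAC/MSAC stage is left out for brevity (nearby edge points are selected with a plain distance test), and fitLine stands in for the legacy cvFitLine:

#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <cmath>
#include <vector>

// Run the probabilistic Hough transform on a binary edge map (e.g. Canny output),
// then refine each segment by re-fitting a line to the edge points lying near it.
std::vector<cv::Vec4f> refineHoughSegments(const cv::Mat& edges)
{
    std::vector<cv::Vec4i> segments;
    cv::HoughLinesP(edges, segments, 1, CV_PI / 180, 50, 30, 10);

    std::vector<cv::Point> edgePoints;
    cv::findNonZero(edges, edgePoints);

    std::vector<cv::Vec4f> refined;   // each line as (vx, vy, x0, y0)
    for (const auto& s : segments) {
        cv::Point2f a((float)s[0], (float)s[1]), b((float)s[2], (float)s[3]);
        cv::Point2f dir = b - a;
        float len = std::sqrt(dir.dot(dir));
        if (len < 1e-3f) continue;
        dir *= 1.0f / len;

        std::vector<cv::Point2f> support;   // edge points within 2 px of the segment
        for (const auto& p : edgePoints) {
            cv::Point2f q((float)p.x, (float)p.y);
            cv::Point2f v = q - a;
            float along = v.dot(dir);
            float perp  = std::fabs(v.x * dir.y - v.y * dir.x);
            if (along >= 0.0f && along <= len && perp < 2.0f)
                support.push_back(q);
        }
        if (support.size() < 2) continue;

        cv::Vec4f line;
        cv::fitLine(support, line, cv::DIST_L2, 0, 0.01, 0.01);   // least-squares fit
        refined.push_back(line);
    }
    return refined;
}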
Here's an article about the Randomized Hough Transform, which I believe to be the same as the "probabilistic Hough transform" used in OpenCV:
http://en.wikipedia.org/wiki/Randomized_Hough_Transform
Basically, you don't fill up the accumulator for all points, but choose a subset of points according to a certain criterion to fill up the Hough accumulator.
The consequence is that sometimes you could miss the actual line if there weren't enough points to start with. I guess you'd want to use this if you have somewhat linear structures, so that most points would be redundant.
Reference no. 2: L. Xu, E. Oja, and P. Kultanen, "A new curve detection method: Randomized Hough transform (RHT)", Pattern Recognition Letters 11, 1990, 331-338.
I also read about some quite different approaches where the algorithm takes two points and computes the point in the middle of those two points. If that middle point is an edge point, the bin for that line is accumulated. This is apparently extremely fast, but it assumes a somewhat non-sparse edge image, as you could easily miss lines if there weren't enough edge points to start with.