Feature Detection and Feature Descriptor in Image Processing

I am clear about feature detection and feature descriptors: feature detection finds interesting points in an image, and we can describe them with a descriptor such as SIFT or HoG. My doubt is very specific. Suppose I have an image I, and I applied the Harris detector and found the x,y positions of the corners in that image. Now I want to compute SIFT features, so how should I do it? Should I make a new image containing only the detected corners and then apply SIFT to it? Or should SIFT be applied to the whole image I (but that serves no purpose, I guess)?
Please help me get some clarity on practical grounds.

The SIFT descriptor, as you say, describes the feature point. However, SIFT also tries to be scale invariant, which means that the SIFT detector examines potential keypoints with respect to their response at various scales. The detector then records not only the x,y position but also the scale information.
This means you're probably better off using the detector that comes with SIFT along with the descriptor. Both the Matlab and OpenCV implementations let you detect and describe points easily.
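For concreteness, here is a minimal Python/OpenCV sketch of both options: letting SIFT detect its own scale-aware keypoints, or, if you really want to keep your Harris corners, wrapping them in cv2.KeyPoint objects and asking SIFT to describe them. It assumes OpenCV >= 4.4 (where SIFT sits in the main module); the file name is a placeholder, and the fixed patch size for the Harris corners is an arbitrary choice, since Harris gives no scale.

    import cv2

    gray = cv2.imread("image.png", cv2.IMREAD_GRAYSCALE)  # placeholder file name
    sift = cv2.SIFT_create()

    # Option 1 (recommended): let SIFT detect and describe its own keypoints,
    # so every descriptor carries the scale at which the point was found.
    kps, descs = sift.detectAndCompute(gray, None)

    # Option 2: describe externally supplied points such as Harris corners.
    # Harris only gives x,y, so a fixed keypoint size has to be assumed here.
    corners = cv2.goodFeaturesToTrack(gray, maxCorners=500, qualityLevel=0.01,
                                      minDistance=5, useHarrisDetector=True)
    harris_kps = [cv2.KeyPoint(float(x), float(y), 16)
                  for x, y in corners.reshape(-1, 2)]
    harris_kps, harris_descs = sift.compute(gray, harris_kps)

Option 2 works, but the fixed size throws away exactly the scale information that makes SIFT scale invariant, which is why the detector that ships with SIFT is the usual choice.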

Related

How does multiscale feature matching work? ORB, SIFT, etc

When reading about classic computer vision, I am confused about how multiscale feature matching works. Suppose we use an image pyramid:
(1) How do you deal with the same feature being detected at multiple scales? How do you decide which one to make a descriptor for?
(2) How do you connect features between scales? For example, let's say you have a feature detected and matched to a descriptor at scale 0.5. Is this location then translated back to its location at the initial scale?
I can share something about SIFT that might answer question (1) for you.
I'm not really sure what you mean in your question (2), though, so please clarify.
SIFT (Scale-Invariant Feature Transform) was designed specifically to find features that remain identifiable across different image scales, rotations, and transformations.
When you run SIFT on an image of some object (e.g. a car), SIFT will try to create the same descriptor for the same feature (e.g. the license plate), no matter what image transformation you apply.
Ideally, SIFT will only produce a single descriptor for each feature in an image.
However, this obviously doesn't always happen in practice, as you can see in an OpenCV example here:
OpenCV illustrates each SIFT descriptor as a circle of different size. You can see many cases where the circles overlap. I assume this is what you meant in question (1) by "the same feature being detected at multiple scales".
And to my knowledge, SIFT doesn't really care about this issue. If by scaling the image enough you end up creating multiple descriptors from "the same feature", then those are distinct descriptors to SIFT.
During descriptor matching, you simply brute-force compare your lists of descriptors, regardless of the scale each one was generated from, and try to find the closest match.
The whole point of SIFT, as a function, is to take in some image feature under different transformations and produce a similar numerical output at the end.
So if you do end up with multiple descriptors of the same feature, you'll just end up doing more computational work, but you will still essentially match the same pairs of features across the two images regardless.
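As a small illustration of that brute-force step, here is a hedged Python/OpenCV sketch (file names are placeholders); note that the scale at which each keypoint was detected plays no role in the matching itself.

    import cv2

    img1 = cv2.imread("image1.png", cv2.IMREAD_GRAYSCALE)  # placeholder file names
    img2 = cv2.imread("image2.png", cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # Brute-force matching of float descriptors with L2 distance;
    # crossCheck=True keeps only mutual nearest-neighbour matches.
    bf = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
    matches = sorted(bf.match(des1, des2), key=lambda m: m.distance)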
Edit:
If you are asking about how to convert coordinates from the scaled images in the image pyramid back into original image coordinates, then David Lowe's SIFT paper dedicates section 4 on that topic.
The naive approach would be to simply calculate the ratios of the scaled coordinates vs the scaled image dimensions, then extrapolate back to the original image coordinates and dimensions. However, this is inaccurate, and becomes increasingly so as you scale down an image.
Example: You start with a 1000x1000 pixel image, where a feature is located at coordinates (123,456). If you scale the image down to 100x100 pixels, the scaled keypoint coordinate would be something like (12,46). Extrapolating back to the original resolution naively would give (120,460), several pixels off from the true (123,456).
So SIFT fits a Taylor expansion of the Difference of Gaussian function, to try and locate the original interesting keypoint down to sub-pixel levels of accuracy; which you can then use to extrapolate back to the original image coordinates.
Unfortunately, the math for this part is quite beyond me. But if you are fluent in math and C programming and want to know specifically how SIFT is implemented, I suggest you dive into Rob Hess' SIFT implementation; lines 467 through 648 are probably as detailed as you can get.
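For reference, the fit described in section 4 of Lowe's paper is a quadratic Taylor expansion of the DoG function D around the sample point, solved for the sub-pixel offset; in LaTeX notation:

    D(\mathbf{x}) \approx D + \frac{\partial D}{\partial \mathbf{x}}^{T}\mathbf{x}
        + \frac{1}{2}\,\mathbf{x}^{T}\,\frac{\partial^{2} D}{\partial \mathbf{x}^{2}}\,\mathbf{x},
    \qquad
    \hat{\mathbf{x}} = -\left(\frac{\partial^{2} D}{\partial \mathbf{x}^{2}}\right)^{-1}\frac{\partial D}{\partial \mathbf{x}}

where x = (x, y, sigma)^T is the offset from the sample point; the resulting offset x-hat is added to the sample location before mapping back to the original image coordinates.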

What is the difference between features and keypoints in computer vision?

I am studying some of the possibilities of OpenCV object detection, and this is confusing to me. I just don't see the difference between the two.
Image features are small patches that are useful to compute similarities between images. An image feature is usually composed of a feature keypoint and a feature descriptor.
The keypoint usually contains the patch's 2D position and, if available, other information such as the scale and orientation of the image feature.
The descriptor contains the visual description of the patch and is used to compare the similarity between image features.
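A small Python/OpenCV sketch of that split; ORB is used here only because it ships with every OpenCV build, and the file name is a placeholder.

    import cv2

    gray = cv2.imread("image.png", cv2.IMREAD_GRAYSCALE)  # placeholder file name
    orb = cv2.ORB_create()
    keypoints, descriptors = orb.detectAndCompute(gray, None)

    kp = keypoints[0]
    print(kp.pt)        # (x, y) position of the patch
    print(kp.size)      # patch scale (diameter)
    print(kp.angle)     # patch orientation in degrees
    print(kp.response)  # detector response, i.e. how strong the point is

    print(descriptors.shape)  # one row per keypoint: the visual description
                              # that is compared when matching image features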

Gaussian Filters with ORB

I have started my first project in the field of image recognition using feature point detectors and descriptors. I had no prior knowledge of image recognition techniques before starting this project, so I researched the available detectors and descriptors and learned the differences between them. Finally, I opted to work with the ORB detector and descriptor for image recognition (if it doesn't work according to my requirements, I would like to try BRISK later).
As of now I am at the stage of getting results for image recognition using ORB. At this point, I was thinking of using Gaussian filters in my code so that I can get better results even if the input image is a bit blurred.
My questions:
1) Is it possible to use Gaussian filters with ORB to get much better results for image recognition?
2) When I read the ORB paper, I came across the lines below:
FAST does not produce a measure of cornerness, and we have found that it has large
responses along edges. We employ a Harris corner measure [11] to order the FAST keypoints.
For a target number N of keypoints, we first set the threshold low enough to get more than
N keypoints, then order them according to the Harris measure, and pick the top N points.
FAST does not produce multi-scale features. We employ a scale pyramid of the image, and
produce FAST Features (filtered by Harris) at each level in the pyramid.
So ORB already orders the FAST corners with the Harris corner measure. Given that, is it worth using Gaussian filters along with ORB?
3) Does ORB use only the Harris corner measure to detect corners, or something else as well?
Please let me know and enlighten me on the questions above.
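For concreteness, pre-filtering with a Gaussian before running ORB in OpenCV would look roughly like the Python sketch below; the kernel size and sigma are assumptions to be tuned, and whether pre-blurring actually improves recognition depends on the images.

    import cv2

    gray = cv2.imread("image.png", cv2.IMREAD_GRAYSCALE)  # placeholder file name

    # Assumed pre-smoothing step: a mild Gaussian blur before detection.
    # ORB already builds its own scale pyramid, so this is optional denoising,
    # not a replacement for the pyramid.
    smoothed = cv2.GaussianBlur(gray, (5, 5), sigmaX=1.0)

    orb = cv2.ORB_create(nfeatures=500)
    keypoints, descriptors = orb.detectAndCompute(smoothed, None)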

Confusion regarding Object recognition and features using SURF

I have some conceptual issues in understanding the SURF and SIFT algorithms (All about SURF). As far as I understand, SURF approximates the Laplacian of Gaussian with box filters, while SIFT operates on differences of Gaussians. It then constructs a 64-element vector around the keypoint to describe the feature. I have applied this CODE.
(Q1 ) So, what forms the features?
(Q2) We initialize the algorithm using SurfFeatureDetector detector(500). So, does this mean that the size of the feature space is 500?
(Q3) The output of SURF Good_Matches gives matches between Keypoint1 and Keypoint2, and by tuning the number of matches we can conclude whether the object has been found/detected or not. What is meant by keypoints? Do these store the features?
(Q4) I need to build an object recognition application. In the code, it appears that the algorithm can recognize the book, so it can be applied to object recognition. I was under the impression that SURF can be used to differentiate objects based on color and shape. But SURF and SIFT are based on corner/blob detection on the intensity image, so there is no point in using color images as training samples since they will be converted to gray scale. There is no option of using color or HSV in these algorithms, unless I compute the keypoints for each channel separately, which is a different area of research (Evaluating Color Descriptors for Object and Scene Recognition).
So, how can I detect and recognize objects based on their color and shape? I think I can use SURF to differentiate objects based on their shape. Say, for instance, I have two books and a bottle and I need to recognize only a single book among all the objects. But as soon as there are other similarly shaped objects in the scene, SURF gives lots of false positives. I would appreciate suggestions on what methods to apply for my application.
(Q1) The local extrema of the detector response (the difference of Gaussians in SIFT, the box-filter Hessian approximation in SURF) form the coordinates of the feature (circle) centers: a point is kept when its response is greater (or smaller) than the responses of its neighbour pixels in a 3x3x3 neighbourhood spanning its own image and the images above and below it in the pyramid. The radius of the circle corresponds to the pyramid level.
(Q2) It is the Hessian threshold. It means that you only keep the maxima (see Q1) whose values are larger than the threshold. A larger threshold gives fewer but more stable features, and vice versa.
(Q3) Keypoint == feature. In OpenCV, KeyPoint is the structure used to store features.
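A short Python sketch of how that threshold appears in OpenCV's API; SURF lives in the contrib xfeatures2d module and requires a build with the non-free algorithms enabled, so treat this as illustrative only.

    import cv2

    gray = cv2.imread("image.png", cv2.IMREAD_GRAYSCALE)  # placeholder file name

    # 500 is the Hessian threshold described above: raising it keeps fewer,
    # more stable keypoints; lowering it keeps more of them.
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=500)
    keypoints, descriptors = surf.detectAndCompute(gray, None)

    print(len(keypoints))     # number of detected features
    print(descriptors.shape)  # (number of keypoints, 64) with the default settings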
(Q4) No. SURF is good for comparing textured objects, but not for shape or color. For shape I recommend MSER (though not the OpenCV implementation) or the Canny edge detector rather than local features. This presentation might be useful.

What is the best solution for rotation invariant detector?

I'd like to create an object detector based on a cascade classifier; the only problem is that LBP and Haar features are not rotation invariant. The first thing that comes to mind is to rotate the training samples to different angles, but I doubt that the resulting classifier would have good quality; moreover, the object could end up with stretched proportions. There are many rotation invariant detectors; for example, the iPhone recognizes faces in real time in any orientation, so I wonder how they achieve this. I would prefer to use OpenCV for this.
Check out the object detection framework available at https://github.com/nenadmarkus/pico.
The framework enables you to learn a custom object detector (for example, for finding frontal, upright faces) and then use it at runtime for rotation invariant detection.
This is achieved by scanning the image with a rotated version of the object detector at a number of different orientations. Note that this can be done without cascade retraining or image resampling, and it should work in real-time on modern machines (the provided face detection demo does).
The details are given in the paper available at http://arxiv.org/abs/1305.4537.
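If you want to stay inside stock OpenCV, the same idea can be approximated by rotating the image (rather than the detector, which is what pico does and is much faster) and running an ordinary cascade at each orientation. A rough Python sketch, with the cascade file, angle step, and detection parameters all as assumptions:

    import cv2

    # Any OpenCV cascade works the same way; the frontal-face one is an example.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def detect_any_orientation(gray, angle_step=30):
        h, w = gray.shape
        center = (w / 2.0, h / 2.0)
        detections = []
        for angle in range(0, 360, angle_step):
            M = cv2.getRotationMatrix2D(center, angle, 1.0)
            rotated = cv2.warpAffine(gray, M, (w, h))
            Minv = cv2.invertAffineTransform(M)
            for (x, y, bw, bh) in cascade.detectMultiScale(rotated, 1.1, 4):
                # Map the detection center back to original image coordinates.
                cx, cy = x + bw / 2.0, y + bh / 2.0
                ox = Minv[0, 0] * cx + Minv[0, 1] * cy + Minv[0, 2]
                oy = Minv[1, 0] * cx + Minv[1, 1] * cy + Minv[1, 2]
                detections.append((ox, oy, bw, bh, angle))
        return detections

This resamples the image at every angle, so it is slower than pico's rotated-detector trick, but it needs nothing beyond a standard OpenCV install.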
Fourier descriptors are rotation invariant (as well as translation and scale invariant); the idea would then be to train whatever classifier you're comfortable with on the Fourier descriptor output (PCA on the Fourier descriptors followed by an SVM seems a logical choice).
See Fourier Descriptors (Wolfram)
For matching logos, I think this is what you need: http://www.ijera.com/papers/Vol2_issue5/JW2517421747.pdf
What about a simple solution:
Object Detection using SURF
