I have a problem with the second step, which is to accumulate weighted votes for gradient orientation over spatial cells.
Assume the cell is 8×8. Let me use two matrices, GO[8][8] (with values in [1, 9]) and GM[8][8], to represent the gradient orientation and gradient magnitude respectively.
The gradient orientation ranges from 0 to 180 degrees and there are 9 orientation bins.
According to my understanding of HOG, for every pixel in a cell we add its gradient magnitude to its corresponding orientation bin. In this way, we get a histogram for every cell.
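To make my understanding concrete, here is a rough sketch of that hard-assignment scheme (Python/NumPy, with made-up GO/GM data), before any interpolation is added:

    import numpy as np

    def cell_histogram_hard(GO, GM, n_bins=9):
        # Hard assignment: each pixel votes its full magnitude into the
        # single bin given by its orientation index (1..9 in my notation).
        hist = np.zeros(n_bins)
        for i in range(GO.shape[0]):
            for j in range(GO.shape[1]):
                hist[GO[i, j] - 1] += GM[i, j]
        return hist

    # toy 8x8 cell with random orientations/magnitudes
    rng = np.random.default_rng(0)
    GO = rng.integers(1, 10, size=(8, 8))   # orientation bin index in [1, 9]
    GM = rng.random((8, 8))                 # gradient magnitude
    print(cell_histogram_hard(GO, GM))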
But there is one sentence that confuses me.
"To reduce aliasing, votes(gradient magnitude) are interpolated
trilinearly between the neighbouring bin centers in both orientation
and position."1
Why interpolate? How is the interpolation done? Can someone explain this in more detail? I don't see how it reduces aliasing.
Thanks in advance.
[1] This sentence is in Navneet Dalal's PhD thesis, p. 38, line 4.
Interpolation is a standard technique for computing histograms. The idea here is that each value is not simply placed into one bin, but is distributed between two neighboring bins (assuming a 1d histogram), based on how far away it is from the center of the original bin.
The purpose of this is to deal with situations when a small error in your measurement can cause a value to be placed into a different bin. This is a very good thing to do for any type of histogram, not just for HOGs, assuming you have the CPU cycles.
There are also bilinear and trilinear interpolation schemes for 2D and 3D histograms, where each value is distributed among 4 and 8 neighboring bins respectively.
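As a rough illustration (Python, assuming 9 bins over 0-180 degrees with bin centers at 10, 30, ..., 170 degrees, and treating the orientation axis as circular), a single linearly interpolated vote could look like this; treat it as a sketch, not a reference implementation:

    import numpy as np

    def vote_linear(hist, angle, magnitude, n_bins=9, max_angle=180.0):
        # Split one vote between the two nearest orientation bins.
        # Bin centers are assumed at (i + 0.5) * bin_width for i = 0..n_bins-1.
        bin_width = max_angle / n_bins
        x = angle / bin_width - 0.5          # position in "bin units"
        lo = int(np.floor(x)) % n_bins       # lower bin (wraps around)
        hi = (lo + 1) % n_bins               # upper bin
        frac = x - np.floor(x)               # distance from the lower bin center
        hist[lo] += magnitude * (1.0 - frac)
        hist[hi] += magnitude * frac
        return hist

    hist = np.zeros(9)
    vote_linear(hist, angle=25.0, magnitude=2.0)  # 25 deg lies between the 10 and 30 deg centers
    print(hist)  # most of the weight lands in the 30-degree bin, the rest in the 10-degree bin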
Suppose that the Canny edge detector successfully detects an edge in an image. The edge is then rotated by θ, where the relationship between a point on the original edge (x, y) and a point on the rotated edge (x′, y′) is defined as x′ = x cos θ; y′ = x sin θ.
Will the rotated edge be detected using the same Canny edge detector?
(I think the answer should be found by considering that the detection of an edge by the Canny edge detector depends only on the magnitude of its derivative.)
The answer is both yes and no, and which one you go for depends on how literally you take the question.
First of all, we're dealing with a rectangular grid, so given an integer location (x,y), the corresponding point (x',y') in a rotated image is highly likely not an integer location. And considering that the output of Canny is a set of points, and not a smooth function that can be interpolated, it would be difficult to establish a correspondence between the set resulting from the rotated and the one resulting from the original image.
Think for example about the number of pixels on a discrete line of a given length at 0 degrees and at 45 degrees. (Hint: the line at 45 degrees has sqrt(2) times fewer pixels.)
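A quick way to convince yourself of this (Python sketch; it just counts the distinct pixels hit by a rasterized line of the same length at the two angles):

    import numpy as np

    length = 100
    t = np.linspace(0.0, length, 1000)

    horizontal = {(int(round(x)), 0) for x in t}
    diagonal = {(int(round(x / np.sqrt(2))), int(round(x / np.sqrt(2)))) for x in t}

    print(len(horizontal), len(diagonal))    # ~101 vs ~72 pixels
    print(len(horizontal) / len(diagonal))   # roughly sqrt(2)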
But if you take the question more generally and interpret it as "will an edge that is detected in the original image also be detected after rotating the image by θ degrees?" then the answer is yes, in theory.
Of course practice is always a bit different than theory. The details of the implementation matter here. And there is always numerical imprecision to contend with.
Let's start by assuming the rotation is computed correctly, with a precise interpolation scheme (cubic, Lanczos) and without rounding the result to uint8 or similar afterwards (i.e. we're computing with floating-point values).
If you read the original paper by Canny, you'll see he proposes using Gaussian derivatives as the best compromise between compact support and computational precision. I have seen few implementations that actually do. Typically I see a convolution with a Gaussian and then Sobel derivatives. Especially for smaller sigmas (less smoothing) the difference can be quite large. Gaussian derivatives are rotationally invariant, Sobel derivatives are not.
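As a rough illustration of that difference (Python with SciPy's ndimage, on a made-up image; the factor of 8 is only an approximate normalisation for the Sobel kernel's gain):

    import numpy as np
    from scipy import ndimage

    rng = np.random.default_rng(1)
    img = rng.random((64, 64))
    sigma = 1.0

    # Gaussian derivative: smoothing and differentiation as one separable filter
    gx_gaussian = ndimage.gaussian_filter(img, sigma, order=(0, 1))

    # Common shortcut: Gaussian smoothing followed by a Sobel derivative
    gx_sobel = ndimage.sobel(ndimage.gaussian_filter(img, sigma), axis=1)

    # The two are not the same operator; the gap grows as sigma shrinks
    print(np.abs(np.abs(gx_gaussian) - np.abs(gx_sobel) / 8.0).mean())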
The next step in the algorithm is non-maximum suppression. This is where the continuous gradient is converted to a set of points. For each pixel, it checks whether it is a local maximum in the direction of the gradient. Because this is done per pixel, a different set of locations is tested in the rotated image compared to the original. Nonetheless, it should detect points along the same ridges in both cases.
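A bare-bones sketch of that per-pixel test (Python; the gradient direction is quantized to four directions here, which is how many implementations do it, though the details vary):

    import numpy as np

    def non_max_suppression(magnitude, angle):
        # Keep a pixel only if it is a local maximum along the (quantized)
        # gradient direction; angle is in degrees in [0, 180).
        h, w = magnitude.shape
        out = np.zeros_like(magnitude)
        offsets = {0: (0, 1), 45: (-1, 1), 90: (-1, 0), 135: (-1, -1)}
        q = (np.round(angle / 45.0) * 45).astype(int) % 180
        for i in range(1, h - 1):
            for j in range(1, w - 1):
                di, dj = offsets[q[i, j]]
                if (magnitude[i, j] >= magnitude[i + di, j + dj] and
                        magnitude[i, j] >= magnitude[i - di, j - dj]):
                    out[i, j] = magnitude[i, j]
        return out

    # usage on a toy gradient field
    rng = np.random.default_rng(0)
    img = rng.random((32, 32))
    gy, gx = np.gradient(img)
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 180
    thin = non_max_suppression(mag, ang)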
Next, a hysteresis threshold is applied. This is a two-threshold operation that keeps pixels above one threshold as long as at least one pixel above a second threshold is present in the same connected component. This is where differences could occur between the rotated and original images. Remember we're dealing with a set of pixels. We have sampled the continuous gradient function at discrete points. There could be an edge that has one pixel above the second threshold in one version of the image, but not in the other. This would only occur for edges very close to the chosen threshold, of course.
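A minimal sketch of the hysteresis step (Python with SciPy's connected-component labelling, assuming 8-connectivity; the thresholds and data are made up):

    import numpy as np
    from scipy import ndimage

    def hysteresis(magnitude, low, high):
        # Keep weak pixels (> low) only if their connected component also
        # contains at least one strong pixel (> high).
        weak = magnitude > low
        strong = magnitude > high
        labels, n = ndimage.label(weak, structure=np.ones((3, 3)))  # 8-connectivity
        keep = np.zeros(n + 1, dtype=bool)
        keep[np.unique(labels[strong])] = True   # components containing a strong pixel
        keep[0] = False                          # background label
        return keep[labels]

    rng = np.random.default_rng(2)
    mag = rng.random((16, 16))
    edges = hysteresis(mag, low=0.4, high=0.9)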
Next comes a thinning. Because the non-maximum suppression can yield points along a thicker line, a thinning operation is applied that removes pixels from the set that are not needed to maintain connectivity of the lines. Which pixels are selected here will also differ between rotated and original images, but this does not change the geometry of the solution, so we still have the same set of points.
So, the answer is yes and no. :)
Note that the same logic applies to translation.
I know that we take a 16x16 window of "in-between" pixels around the key point. We split that window into sixteen 4x4 windows. From each 4x4 window, we generate a histogram of 8 bins, each bin corresponding to 0-44 degrees, 45-89 degrees, etc. Gradient orientations from the 4x4 window are put into these bins. This is done for all 4x4 blocks. Finally, we normalize the 128 values we get.
Where do the values come from?
But I don't understand where the 128 numbers get their values from. Do they refer to the corresponding magnitude of the orientation value, or what?
I would be grateful if anyone could describe a numerical example. Regards!
In SIFT (Scale-Invariant Feature Transform), the 128-dimensional feature vector is made up of a 4x4 grid of samples per window with 8 directions per sample -- 4x4x8 = 128.
For an illustrated guide see A Short introduction to descriptors, and in particular this image, showing 8-direction measurements (cardinal and inter-cardinal) embedded in each of the 4x4 grid squares (center image) and then a histogram of directions (right image):
From your question I believe you are also unclear on what the information inside the descriptor is -- it is called Histograms of Oriented Gradients (HOG). For further reading, Wikipedia has an overview of HOG gradient computation:
Each pixel within the cell casts a weighted vote for an orientation-based histogram channel based on the values found in the gradient computation.
Everything is built on those per-pixel "votes".
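To make the counting concrete, here is a rough sketch (Python/NumPy, toy data) that builds the 128 values from a 16x16 patch of gradient magnitudes and orientations using hard binning only; the actual SIFT descriptor additionally applies Gaussian weighting and trilinear interpolation of the votes:

    import numpy as np

    def descriptor_128(mag, ang):
        # 16x16 patch -> 4x4 cells, each an 8-bin histogram of
        # magnitude-weighted orientation votes: 4 * 4 * 8 = 128 values.
        assert mag.shape == ang.shape == (16, 16)
        bins = (ang // 45).astype(int) % 8    # 0-44 deg -> bin 0, 45-89 deg -> bin 1, ...
        desc = np.zeros((4, 4, 8))
        for ci in range(4):
            for cj in range(4):
                cell_bins = bins[4*ci:4*ci+4, 4*cj:4*cj+4]
                cell_mag = mag[4*ci:4*ci+4, 4*cj:4*cj+4]
                for b in range(8):
                    desc[ci, cj, b] = cell_mag[cell_bins == b].sum()
        desc = desc.ravel()
        return desc / (np.linalg.norm(desc) + 1e-12)  # final normalisation

    rng = np.random.default_rng(3)
    vec = descriptor_128(rng.random((16, 16)), rng.random((16, 16)) * 360)
    print(vec.shape)  # (128,)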
I am implementing Lowe's method, "SIFT", for finding and describing features in an image.
I have found interest points, and now I have to describe them. Using Lowe's method, I have calculated the magnitude and gradient orientation in an area around the keypoint, and created a Gaussian-weighted histogram with 36 bins, each corresponding to an orientation range of 10 degrees. For each keypoint there is a histogram, and each bin is the sum of the weighted magnitudes in that direction. An example taken from aishack.in: http://www.aishack.in/static/img/tut/sift-orientation-histogram.jpg
Bins within 80% of the size of the maximum bin are made into new keypoints. After this, the paper says: "Finally, a parabola is fit to the 3 histogram values closest to each peak to interpolate the peak position for better accuracy". I am not sure I get this.
In my understanding, it means that a parabola is fitted to the peak and the values to its left and right, like this (be warned: drawn freehand):
http://i.stack.imgur.com/7V8pb.jpg
and the orientation of the keypoint will be where the extremum of the parabola is. For instance: if the parabola fitted to 10-19, 20-29, and 30-39 (with 20-29 being the histogram peak) had its extremum at a point that falls within 30-39, then this would be the orientation of that keypoint. Am I understanding this correctly? In this way, the orientation of the keypoint can only take one of 36 orientations.
Another option: same idea as above, only the histogram is no longer discrete; the extremum of the parabola will thus be a continuous value, and this value is assigned to the keypoint.
The idea of the parabola fitting is to find the peak with better than bin resolution. As you see in your example, the peak is at 20-29 (average 24.5) but the 10-19 bin is higher than the 30-39 bin. It's therefore likely that the precise peak should be below 24.5.
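As a sketch of the fitting itself (Python, toy histogram; this is just the closed-form vertex of a parabola through three equally spaced samples, so the bin count and values are made up):

    import numpy as np

    def refine_peak(hist, k):
        # Fit a parabola through hist[k-1], hist[k], hist[k+1] and return the
        # sub-bin offset of its vertex relative to bin k (in bin units).
        l, c, r = hist[k - 1], hist[k], hist[(k + 1) % len(hist)]
        denom = l - 2 * c + r
        if denom == 0:
            return 0.0
        return 0.5 * (l - r) / denom

    hist = np.array([2., 5., 9., 11., 7., 3.])  # toy (much shorter than 36 bins)
    k = int(np.argmax(hist))                    # peak bin
    offset = refine_peak(hist, k)               # negative here: the true peak sits below the bin center
    # With 10-degree bins whose centers are at 5, 15, 25, ... degrees:
    orientation = (k + 0.5 + offset) * 10.0
    print(k, offset, orientation)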
You can't have a non-discrete histogram; that defeats the point of a histogram. What you can have is overlapping bins: create a bin for 20-29, but also a bin for 21-30, 22-31, etc. So the value 24 would map to 10 bins, from 15-24 to 24-33.
And when you increment a bin, you don't necessarily need to increment it by 1. You can also increment a bin by a variable amount, e.g. the distance from the given value to the edge of the bin. So 24 would add 1 to bin 16-25 but 4 to bin 20-29.
According to "Lowe, David G. "Distinctive image features from scale-invariant keypoints." International journal of
computer vision 60.2 (2004): 91-110 "
"It is important to avoid all boundary affects in which the descriptor
abruptly changes as a sample shifts smoothly from being within one
histogram to another or from one orientation to another. Therefore,
trilinear interpolation is used to distribute the value of each
gradient sample into adjacent histogram bins. In other words, each
entry into a bin is multiplied by a weight of 1−d for each dimension,
where d is the distance of the sample from the central value of the
bin as measured in units of the histogram bin spacing."
I am calculating the orientation [t] and location of the gradient (x, y), which will be floating-point values. Currently, I was just adding the gradient magnitude to the 3D histogram values[t][x][y] (using the lower bounds of the floating-point values of t, x and y). But according to the paper, I have to distribute the gradient magnitude to adjacent bins. I am not sure how to distribute it.
I got my answer at the following link:
HOG Trilinear Interpolation of Histogram Bins
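For anyone landing here later, a rough sketch of the idea (Python, assuming a hist[orientation][row][col] array indexed by continuous bin coordinates; the orientation axis wraps around, and spatial bins falling outside the array are simply dropped here, which is a simplification):

    import numpy as np

    def trilinear_vote(hist, t, x, y, magnitude):
        # Distribute one gradient-magnitude vote over the 8 surrounding bins
        # of a 3-D histogram. t, x, y are continuous bin coordinates, e.g.
        # orientation / bin_width, row / cell_size, col / cell_size.
        nt, nx, ny = hist.shape
        t0, x0, y0 = int(np.floor(t)), int(np.floor(x)), int(np.floor(y))
        dt, dx, dy = t - t0, x - x0, y - y0
        for it, wt in ((t0, 1 - dt), (t0 + 1, dt)):
            for ix, wx in ((x0, 1 - dx), (x0 + 1, dx)):
                for iy, wy in ((y0, 1 - dy), (y0 + 1, dy)):
                    ti = it % nt                       # orientation wraps
                    if 0 <= ix < nx and 0 <= iy < ny:  # out-of-range spatial bins are dropped
                        hist[ti, ix, iy] += magnitude * wt * wx * wy

    hist = np.zeros((8, 4, 4))
    trilinear_vote(hist, t=2.3, x=1.5, y=0.25, magnitude=1.0)
    print(hist.sum())  # 1.0: the vote is split between bins, not lost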
I'm validating an image segmentation algorithm applied to 2D images. The algorithm generates a contour segment, i.e. a set of connected pixels that form a free curve in 2D space. The idea is to compare this set of pixels with a ground truth, in my case another contour segment manually traced by an expert. An image showing what a segmentation result and the corresponding manual (ground-truth) segmentation would look like is shown below:
I'm trying to think of an adequate comparison metric to validate the segmentation results. Ideally the best metric would be the point-to-point Euclidean distance between corresponding pairs of pixels on each segment; however (as seen in the previous figure) the segments don't have the same length (i.e. they differ in the total number of pixels), so pixel-to-pixel comparisons have to be discarded.
Can you suggest an adequate metric for validating my algorithm? Thanks for any suggestions!
For each pixel in the ground truth, take the distance to the nearest pixel in the segmentation result. Then take the sum of that for all ground truth pixels as the total error.
That's basically recall weighted by distance. If you start with the pixels in the result, it would resemble precision instead.
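A quick way to compute this (Python with SciPy; the masks below are made up): take a distance transform of the result and read it off at the ground-truth pixels.

    import numpy as np
    from scipy import ndimage

    def contour_distance_error(result_mask, truth_mask):
        # Sum, over all ground-truth pixels, of the distance to the nearest
        # pixel of the segmentation result (a recall-like, distance-weighted error).
        # distance_transform_edt measures distance to the nearest zero pixel,
        # so invert the result mask to make the result pixels the zeros.
        dist_to_result = ndimage.distance_transform_edt(~result_mask)
        return dist_to_result[truth_mask].sum()

    # toy example: two slightly different "contours" on a small grid
    result = np.zeros((10, 10), dtype=bool)
    truth = np.zeros((10, 10), dtype=bool)
    result[5, 2:8] = True
    truth[6, 2:9] = True
    print(contour_distance_error(result, truth))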
If the curves are closed, you can compute the area between the curves. If you can tell which pixels belong to each segment, that is as easy as computing the XOR of the two pixel sets.
Here is an example that I've created using Matlab:
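In Python, the same idea might look roughly like this (assuming both contours are closed, so their interiors can be filled; the toy contours are made up):

    import numpy as np
    from scipy import ndimage

    def area_between(contour_a, contour_b):
        # Fill each closed contour, then count the pixels in the XOR of the
        # two filled regions: that is the area between the curves.
        region_a = ndimage.binary_fill_holes(contour_a)
        region_b = ndimage.binary_fill_holes(contour_b)
        return np.logical_xor(region_a, region_b).sum()

    # toy example: two concentric square contours
    a = np.zeros((20, 20), dtype=bool)
    b = np.zeros((20, 20), dtype=bool)
    a[5, 5:15] = a[14, 5:15] = True
    a[5:15, 5] = a[5:15, 14] = True
    b[7, 7:13] = b[12, 7:13] = True
    b[7:13, 7] = b[7:13, 12] = True
    print(area_between(a, b))  # pixels lying between the two squares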
You could divide each line into n segments of equal length, then compute the Euclidean distance between each segment and its pair on the other line.
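Roughly, in Python (assuming the contour pixels are ordered along each curve; the resampling below uses linear interpolation along arc length, and the two toy polylines are made up):

    import numpy as np

    def resample(points, n):
        # Resample an ordered polyline (k x 2 array) to n points evenly
        # spaced along its arc length, using linear interpolation.
        seg = np.diff(points, axis=0)
        arclen = np.concatenate(([0.0], np.cumsum(np.hypot(seg[:, 0], seg[:, 1]))))
        targets = np.linspace(0.0, arclen[-1], n)
        x = np.interp(targets, arclen, points[:, 0])
        y = np.interp(targets, arclen, points[:, 1])
        return np.column_stack([x, y])

    def mean_pairwise_distance(curve_a, curve_b, n=50):
        a, b = resample(curve_a, n), resample(curve_b, n)
        return np.linalg.norm(a - b, axis=1).mean()

    # toy example: a straight segment vs. a slightly shifted, longer one
    a = np.array([[0.0, 0.0], [10.0, 0.0]])
    b = np.array([[0.0, 1.0], [12.0, 1.0]])
    print(mean_pairwise_distance(a, b))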