Calculate the unit gradient vector - image-processing

I'm having trouble calculating the unit gradient vector. I have a formula, but I don't understand it. If possible, could you explain this formula in more detail? I need to implement eye center localization in an image. Thank you for your interest.
UGVs formula

Gradient vector calculation gives you the magnitude and orientation of each pixel in the image. This means you need to calculate the derivatives along the x-axis and y-axis separately, then combine them to compute the magnitude and direction of the vectors. If you are using OpenCV or MATLAB, you will find functions to calculate the gradient magnitude and direction of the pixels in an image. For MATLAB, for example, see the imgradient and imgradientxy functions.
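As a hedged sketch of the idea in plain NumPy (the function name is my own, not a library API): compute the x and y derivatives, then divide each gradient vector by its magnitude to get per-pixel unit gradient vectors.

```python
import numpy as np

def unit_gradient_vectors(img):
    """Per-pixel unit gradient vectors of a grayscale image.

    Central differences approximate the derivatives along x and y;
    dividing by the magnitude normalizes each gradient to length 1.
    A small epsilon avoids division by zero in flat regions.
    """
    img = img.astype(np.float64)
    gy, gx = np.gradient(img)      # derivative along rows (y), then columns (x)
    mag = np.hypot(gx, gy)         # gradient magnitude per pixel
    eps = 1e-12
    return gx / (mag + eps), gy / (mag + eps), mag

# A horizontal intensity ramp: every gradient points along +x with length 1.
ramp = np.tile(np.arange(5, dtype=np.float64), (5, 1))
ux, uy, mag = unit_gradient_vectors(ramp)
```

In OpenCV you would typically obtain the derivatives from cv2.Sobel instead of np.gradient; the normalization step is the same.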

Related

Visualizing HOG feature, why the direction of gradient is perpendicular to the real gradient orientation

The direction of the gradient usually points from a darker area to a lighter one, or conversely. However, when visualizing HOG, why are the "star dials" aligned with the direction of intensity/color change?
I do understand how we get the orientation and magnitude of the gradients.
But I do not understand the idea behind the HOG visualization. Here is an example: in the third "star dial" on the first row, shouldn't the gradient directions be perpendicular to the red lines?
The HOG visualization is meant to help you 'see' what is in the feature space, so what is displayed at each point is the direction of the outline (not the gradient).
Is your image from SATYA's blog? That's good material. You can also read this; it's a project introducing tools to visualize HOG feature spaces.
It depends on the visualization tool that is being used. For example when using skimage.feature.hog() with visualize=True, it produces a hog image described as follows:
For each cell and orientation bin, the image contains a line segment
that is centered at the cell center, is perpendicular to the midpoint
of the range of angles spanned by the orientation bin, and has
intensity proportional to the corresponding histogram value.
See the documentation for more details. So what you are seeing in these visualizations are lines (nine for each cell) at angles perpendicular to the orientation-bin angles, not the gradient orientations themselves.
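A tiny NumPy sketch of that relationship (the values and pixel choice are purely illustrative): for a vertical step edge the gradient points horizontally, while the line the visualization draws for that bin is rotated 90 degrees, i.e. it runs along the edge, which is exactly the perpendicular effect the question describes.

```python
import numpy as np

# A vertical step edge: dark on the left, bright on the right.
img = np.zeros((8, 8), dtype=np.float64)
img[:, 4:] = 255.0

gy, gx = np.gradient(img)
r, c = 4, 4                        # a pixel on the edge (arbitrary choice)
grad_angle = np.degrees(np.arctan2(gy[r, c], gx[r, c]))  # 0: left-to-right

# The visualization line is drawn perpendicular to the gradient,
# i.e. along the edge itself (vertical here).
line_angle = (grad_angle + 90.0) % 180.0
```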

corner detection using Chris Harris & Mike Stephens

I am not able to understand the formula. What do W (the window) and the intensity in the formula mean?
I found this formula in the OpenCV docs:
http://docs.opencv.org/trunk/doc/py_tutorials/py_feature2d/py_features_harris/py_features_harris.html
For a grayscale image, the intensity level (0-255) tells you how bright a pixel is; I hope you already know that.
So, here is the explanation of your formula:
Aim: We want to find the points that show the maximum variation in intensity in all directions, i.e. the points that are most distinctive in the given image.
I(x,y): This is the intensity value of the current pixel which you are processing at the moment.
I(x+u,y+v): This is the intensity of another pixel, which lies at an offset of (u,v) from the current pixel located at (x,y).
I(x+u,y+v) - I(x,y): This term gives you the difference between the intensity levels of the two pixels.
w(u,v): You do not compare the current pixel with pixels at arbitrary positions; you compare it with its neighbors, so you choose values of u and v the way you would when applying a Gaussian mask, mean filter, etc. So w(u,v) represents the window within which you compare the intensity of the current pixel with that of its neighbors.
This link explains all your doubts.
For visualizing the algorithm, consider the window function as a box filter, Ix as a Sobel derivative along the x-axis, and Iy as a Sobel derivative along the y-axis.
http://docs.opencv.org/doc/tutorials/imgproc/imgtrans/sobel_derivatives/sobel_derivatives.html will be useful to understand the final equations in the above pdf.
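To make the pieces concrete, here is a hedged NumPy sketch assuming the common Harris response R = det(M) - k*trace(M)^2 with a flat box-filter window (the function and constants are illustrative, not OpenCV's API):

```python
import numpy as np

def harris_response(img, k=0.04, win=3):
    """Toy Harris corner response R = det(M) - k * trace(M)**2.

    Ix, Iy are central-difference derivatives (Sobel filters are the
    usual choice); the window w(u, v) is a flat win x win box filter.
    """
    img = img.astype(np.float64)
    Iy, Ix = np.gradient(img)
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy

    def box_sum(a):
        """Sum of a over the win x win neighborhood of each pixel."""
        h, w = a.shape
        r = win // 2
        out = np.zeros_like(a)
        for i in range(h):
            for j in range(w):
                out[i, j] = a[max(0, i - r):i + r + 1,
                              max(0, j - r):j + r + 1].sum()
        return out

    Sxx, Syy, Sxy = box_sum(Ixx), box_sum(Iyy), box_sum(Ixy)
    det = Sxx * Syy - Sxy * Sxy    # det(M)
    tr = Sxx + Syy                 # trace(M)
    return det - k * tr * tr

# A white square on black: the response is largest near its corners.
img = np.zeros((10, 10))
img[3:7, 3:7] = 1.0
R = harris_response(img)
```

At a corner both eigenvalues of M are large, so det(M) dominates; on an edge only one is large, and in flat regions both are near zero, which is why the response separates the three cases.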

How to find polar shape matrix in OpenCV

I am implementing classification based on shape descriptors. I have already implemented the convex hull, chain code, and Fourier descriptors, and I am getting good results. Now I am trying to find the polar shape matrix. The image below shows an example. If more than half of the pixels in a sector belong to the shape, I store a 1 for that sector, else 0. Now my problem is: how do I scan the sectors?
Image shows an example of polar shape coordinates.
Try to find approximating shapes that carry invariant measures, then compare the shapes by those measures, which keep the same value under geometric deformations.
For example, for a triangle you can use a ratio of lengths as an invariant if you have no complex deformation (Euclidean), barycentric coordinates if you have an affine deformation (see this paper, it may be useful: ), and the cross-ratio for the most complex deformation (projectivity); see this page also for the cross-ratio.
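Going back to the question of how to scan the sectors, here is one hedged sketch in NumPy (the name and API are my own): assign each pixel inside the shape's bounding circle to a (ring, sector) cell via its polar coordinates about the centroid, then mark a cell 1 when more than half of its pixels belong to the shape.

```python
import numpy as np

def polar_shape_matrix(mask, n_rings=4, n_sectors=8):
    """Sketch of a polar shape matrix.

    mask: boolean image, True where a pixel belongs to the shape.
    Each pixel inside the shape's bounding circle is binned by radius
    (ring) and angle (sector) relative to the centroid.
    """
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()                  # shape centroid
    yy, xx = np.mgrid[0:mask.shape[0], 0:mask.shape[1]]
    r = np.hypot(yy - cy, xx - cx)
    theta = np.arctan2(yy - cy, xx - cx) % (2 * np.pi)

    r_max = np.hypot(ys - cy, xs - cx).max() + 1e-9
    ring = np.minimum((r / r_max * n_rings).astype(int), n_rings - 1)
    sect = np.minimum((theta / (2 * np.pi) * n_sectors).astype(int),
                      n_sectors - 1)

    inside = r <= r_max
    M = np.zeros((n_rings, n_sectors), dtype=int)
    for i in range(n_rings):
        for j in range(n_sectors):
            cell = inside & (ring == i) & (sect == j)
            # "more than half the pixels in the sector are of the shape"
            if cell.any() and 2 * mask[cell].sum() > cell.sum():
                M[i, j] = 1
    return M

# A filled disc covers every sector, so the matrix is all ones.
yy, xx = np.mgrid[0:21, 0:21]
disc = np.hypot(yy - 10, xx - 10) <= 8
M = polar_shape_matrix(disc)
```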

Getting corners from convex points

I have written an algorithm to extract the points shown in the image. They form a convex shape, and I know their order. How do I extract the corners (top 3 and bottom 3) from these points?
I'm using opencv.
If you already have the convex hull of the object, and that hull includes the corner points, then all you need to do is simplify the hull until it only has 6 points.
There are many ways to simplify polygons; for example, you could use the simple algorithm from this answer: How to find corner coordinates of a rectangle in an image
do
    for each point P on the convex hull:
        measure its distance to the line AB
        between the point A before P and the point B after P
    remove the point with the smallest distance
repeat until 6 points are left
If you do not know the exact number of points, you could instead remove points until the minimum distance rises above a certain threshold.
You could also use Ramer-Douglas-Peucker to simplify the polygon; OpenCV already has that implemented in cv::approxPolyDP.
Just modify the OpenCV squares sample to use 6 points instead of 4.
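The removal loop described above could be sketched in Python like this (NumPy only; for production use, cv::approxPolyDP is the more robust route):

```python
import numpy as np

def point_line_distance(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    (ax, ay), (bx, by), (px, py) = a, b, p
    # parallelogram area divided by base length
    num = abs((bx - ax) * (py - ay) - (by - ay) * (px - ax))
    return num / (np.hypot(bx - ax, by - ay) + 1e-12)

def simplify_hull(points, target=6):
    """Drop the hull point closest to the chord between its neighbors
    until only `target` points remain."""
    pts = list(points)
    while len(pts) > target:
        n = len(pts)
        dists = [point_line_distance(pts[i], pts[i - 1], pts[(i + 1) % n])
                 for i in range(n)]
        del pts[int(np.argmin(dists))]   # least significant point goes first
    return pts

# A hexagon with two nearly collinear extra points on opposite edges.
hull = [(0, 0), (2, 0.05), (4, 0), (6, 2), (4, 4), (2, 4.05), (0, 4), (-2, 2)]
corners = simplify_hull(hull)
```

With this 8-point hull the loop removes exactly the two nearly collinear points and leaves the six true corners.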
Instead of trying to directly determine which of your feature points correspond to corners, how about applying a corner detection algorithm to the entire image and then looking for which of your feature points lie close to peaks in the corner detector's response?
I'd suggest starting with a Harris corner detector. The OpenCV implementation is cv::cornerHarris.
Essentially, the Harris algorithm applies both a horizontal and a vertical Sobel filter to the image (or some other approximation of the partial derivatives of the image in the x and y directions).
It then constructs a 2x2 structure matrix at each image pixel, looks at the eigenvalues of that matrix, and declares a point a corner if both eigenvalues are above some threshold.

Image processing-Shape Recognition

I want an algorithm for recognizing multiple shapes (especially rectangles and squares) in a picture. I am using C#, so I am looking for solutions in C#.
Check AForge.NET:
http://www.aforgenet.com/forum/
If you are looking for a library that does a lot of image processing for you, there is always OpenCV. I think it is C++, though.
You can use circularity as a first approach, which is very easy to compute:
C = p²/a, where p is the perimeter (boundary length) and a is the shape's area.
To know how to read/write pixels quickly, take a look here
Alternatively, look for the shape signature algorithm in Rafael Gonzalez's book. In this algorithm you compute the center of the object using central moments, then you compute the distance between the center and each border pixel. You end up with a 1D signal whose peaks represent larger distances from the center. A square gives four symmetric peaks; a rectangle also gives four corner peaks, but the valleys between them alternate between two depths (long-edge vs. short-edge midpoints).
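As an illustration of the shape-signature idea (Python rather than C#, a plain boundary centroid instead of central moments, and synthetic boundaries; all of these are my own simplifications):

```python
import numpy as np

def shape_signature(boundary):
    """Distance from the boundary's centroid to each boundary point, in order."""
    pts = np.asarray(boundary, dtype=float)
    center = pts.mean(axis=0)
    dx, dy = (pts - center).T
    return np.hypot(dx, dy)

def rect_boundary(w, h, n=50):
    """Sample the boundary of a w x h rectangle counter-clockwise."""
    t = np.linspace(0.0, 1.0, n, endpoint=False)
    z, f = np.zeros_like(t), np.full_like(t, 1.0)
    edges = [np.stack([t * w, z], axis=1),             # bottom
             np.stack([f * w, t * h], axis=1),         # right
             np.stack([w - t * w, f * h], axis=1),     # top
             np.stack([z, h - t * h], axis=1)]         # left
    return np.concatenate(edges)

square = shape_signature(rect_boundary(4, 4))   # 4 equal peaks, equal valleys
rect = shape_signature(rect_boundary(6, 2))     # valleys at two distinct depths
```

For the 4x4 square, every corner peak is 2*sqrt(2) and every valley is 2; for the 6x2 rectangle, the valleys alternate between 1 (long-edge midpoints) and 3 (short-edge midpoints).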
