Hough transform for circles and rectangles - OpenCV

I am looking to identify the circles in an image. The circles are the tyres of a vehicle present in the image. However, using the Hough transform, many circles appear in the image, but not the ones around the tyres. I am not sure whether there is a better approach for this.
Also, is there a way to identify the biggest rectangle in the image, i.e. the vehicle's storage container?
Any pointers would be of great help.
Regards
Vijay

I think you need to play with the parameters to filter out the unwanted circles.
void HoughCircles(InputArray image, OutputArray circles, int method, double dp, double minDist, double param1=100, double param2=100, int minRadius=0, int maxRadius=0 )
minDist – Minimum distance between the centers of the detected circles. If the parameter is too small, multiple neighbor circles may be falsely detected in addition to a true one. If it is too large, some circles may be missed.
param1 – First method-specific parameter. In case of CV_HOUGH_GRADIENT, it is the higher threshold of the two passed to the Canny() edge detector (the lower one is half of it).
param2 – Second method-specific parameter. In case of CV_HOUGH_GRADIENT , it is the accumulator threshold for the circle centers at the detection stage. The smaller it is, the more false circles may be detected. Circles, corresponding to the larger accumulator values, will be returned first.
minRadius – Minimum circle radius.
maxRadius – Maximum circle radius.
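For illustration, here is a minimal Python sketch (not from the original answer) showing how these parameters might be tuned for large, well-separated tyres. The file name, radii, and thresholds are placeholder assumptions and will need adjusting to your image.

import cv2
import numpy as np

# Hypothetical input image of the vehicle.
img = cv2.imread("vehicle.jpg", cv2.IMREAD_GRAYSCALE)
img = cv2.medianBlur(img, 5)  # smoothing suppresses many spurious circles

circles = cv2.HoughCircles(
    img,
    cv2.HOUGH_GRADIENT,
    dp=1,
    minDist=img.shape[0] // 4,  # tyres are far apart, so reject nearby duplicates
    param1=100,                 # upper Canny threshold
    param2=60,                  # raise this to keep only strong, well-supported circles
    minRadius=40,               # restrict the radius range to plausible tyre sizes
    maxRadius=120,
)

if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        cv2.circle(img, (x, y), r, 255, 2)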

For the first question, you can try the 'Fast Circle Detection' algorithm from the paper below:
Fast Circle Detection Using Gradient Pair Vectors
I got very good results with it on a previous project, which involved real-time detection of the border between the iris and sclera of the human eye.
Hope this helps.

Related

Is Canny edge detection rotationally invariant?

Suppose that the Canny edge detector successfully detects an edge in an image. The edge is then rotated by θ, where the relationship between a point on the original edge (x, y) and a point on the rotated edge (x′, y′) is defined as x′ = x cos θ − y sin θ; y′ = x sin θ + y cos θ.
Will the rotated edge be detected using the same Canny edge detector?
(I think the answer should take into account that the detection of an edge by the Canny edge detector depends only on the magnitude of its derivative.)
The answer is both yes and no, and which one you go for depends on how literally you take the question.
First of all, we're dealing with a rectangular grid, so given an integer location (x, y), the corresponding point (x′, y′) in a rotated image is very likely not an integer location. And considering that the output of Canny is a set of points, not a smooth function that can be interpolated, it would be difficult to establish a correspondence between the set resulting from the rotated image and the one resulting from the original image.
Think for example about the number of pixels on a discrete line of a given length at 0 degrees and at 45 degrees. (Hint: the line at 45 degrees has sqrt(2) times fewer pixels.)
But if you take the question more generally and interpret it as "will an edge that is detected in the original image also be detected after rotating the image by θ degrees?" then the answer is yes, in theory.
Of course practice is always a bit different than theory. The details of the implementation matter here. And there is always numerical imprecision to contend with.
Let's start by assuming the rotation is computed correctly, with a precise interpolation scheme (cubic, Lanczos) and not rounded after to uint8 or something (i.e. we're computing using floating-point values).
If you read the original paper by Canny, you'll see he proposes using Gaussian derivatives as the best compromise between compact support and computational precision. I have seen few implementations that actually do so. Typically I see a convolution with a Gaussian followed by Sobel derivatives. Especially for smaller sigmas (less smoothing) the difference can be quite large. Gaussian derivatives are rotationally invariant; Sobel derivatives are not.
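To make the difference concrete, here is a small Python sketch (my own illustration, not from the answer) comparing a true Gaussian-derivative gradient with the common Gaussian-then-Sobel shortcut. It assumes SciPy is available; the file name and sigma are placeholders.

import numpy as np
import cv2
from scipy import ndimage

img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)  # placeholder file
sigma = 1.0

# Gaussian derivatives: differentiate the Gaussian itself (rotationally invariant).
gx = ndimage.gaussian_filter(img, sigma, order=(0, 1))  # d/dx
gy = ndimage.gaussian_filter(img, sigma, order=(1, 0))  # d/dy
mag_gauss = np.hypot(gx, gy)

# Common shortcut: smooth with a Gaussian, then apply Sobel (not rotationally invariant).
smoothed = ndimage.gaussian_filter(img, sigma)
sx = cv2.Sobel(smoothed, cv2.CV_32F, 1, 0, ksize=3)
sy = cv2.Sobel(smoothed, cv2.CV_32F, 0, 1, ksize=3)
mag_sobel = np.hypot(sx, sy)

# The discrepancy between the two gradient magnitudes grows for small sigma.
print(np.abs(mag_gauss - mag_sobel / 8.0).max())  # /8 roughly normalizes the Sobel scale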
The next step in the algorithm is non-maximum suppression. This is where the continuous gradient is converted to a set of points. For each pixel, it checks whether it is a local maximum in the direction of the gradient. Because this is done per pixel, a different set of locations is tested in the rotated image than in the original. Nonetheless, it should detect points along the same ridges in both cases.
Next, a hysteresis threshold is applied. This is a two-threshold operation that keeps pixels above the lower threshold as long as at least one pixel above the higher threshold is present in the same connected component. This is where differences could occur between the rotated and original images. Remember we're dealing with a set of pixels: we have sampled the continuous gradient function at discrete points. There could be an edge that has one pixel above the higher threshold in one version of the image but not in the other. This would only happen for edges very close to the chosen threshold, of course.
Next comes a thinning. Because non-maximum suppression can yield points along a thicker line, a thinning operation is applied that removes pixels from the set that are not needed to maintain connectivity of the lines. Which pixels are removed here will also differ between the rotated and original images, but this does not change the geometry of the solution, so we still end up with the same edges.
So, the answer is yes and no. :)
Note that the same logic applies to translation.
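As a concrete check of the above, here is a small Python experiment (my own sketch, not part of the answer) that runs the same Canny detector on an image and on a rotated copy, then compares the two edge sets. The file name, angle, and thresholds are placeholders.

import cv2
import numpy as np

img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)  # placeholder file
h, w = img.shape
theta = 30.0

def canny_u8(f):
    # Canny needs an 8-bit image, so clip and convert the float input first.
    return cv2.Canny(np.clip(f, 0, 255).astype(np.uint8), 50, 150)

# Rotate with a precise interpolation scheme, keeping floating-point values.
M = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), theta, 1.0)
rotated = cv2.warpAffine(img, M, (w, h), flags=cv2.INTER_CUBIC)

edges_orig = canny_u8(img)
edges_rot = canny_u8(rotated)

# Rotate the original edge map and count coinciding edge pixels: most edges survive,
# but the discrete point sets are not identical.
edges_orig_rot = cv2.warpAffine(edges_orig, M, (w, h), flags=cv2.INTER_NEAREST)
both = np.logical_and(edges_rot > 0, edges_orig_rot > 0).sum()
print(both, (edges_rot > 0).sum(), (edges_orig_rot > 0).sum())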

How to group letters in OpenCV knowing their RotatedRects?

I have an image with letters, for example like this:
It's a binary image obtained from previous image processing stages, and I know the boundingRect and RotatedRect of every letter, but these letters are not grouped into words yet. It is worth mentioning that a RotatedRect can be returned from minAreaRect() or fitEllipse(), as shown here and here. In my case the RotatedRects look like this:
Blue rectangles are obtained from minAreaRect and red ones from fitEllipse. They give slightly different boxes (center, width, height, angle), but the biggest difference is in the angle values. In the first case the angle ranges from -90 to 0 degrees; in the second case it ranges from 0 to 180 degrees. My problem is: how do I group these letters into words based on the parameters of the RotatedRects? I can check the angle of every RotatedRect and also measure the distance between the centers of every two RotatedRects. With simple assumptions about the direction of the text and the distance between letters, my grouping algorithm works. But in more complicated cases I run into problems. For example, in the image below there are a few groups of text with different directions, different angles, and different distances between letters.
Problems arise when a letter from one word is close to a letter from another word, and when the angle of a RotatedRect within a given word differs noticeably from the angles of its neighbours. What would be the best way to connect the letters into the right words?
First, you need to define a metric. It could be a Euclidean 3D distance, for example ||(Δx, Δy, Δθ)||, where Δx and Δy are the distances between rectangle centers along the x and y coordinates, and Δθ is the distance between the angular orientations.
In short, your rectangles transform into 3D data points with coordinates (x, y, angle).
Once you define this, you can run a clustering algorithm on your data; DBSCAN should work well here. Check this article for example: link. It may help you choose a clustering algorithm.
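A minimal sketch of that idea using scikit-learn's DBSCAN (my assumption; any clustering library would do). The scale factors, eps, and min_samples are placeholders that need tuning, and angle wrap-around between the minAreaRect and fitEllipse conventions is ignored here.

import numpy as np
from sklearn.cluster import DBSCAN

def group_letters(rects, pos_scale=1.0, angle_scale=3.0, eps=40.0, min_samples=2):
    """rects: list of RotatedRect-like tuples ((cx, cy), (w, h), angle),
    e.g. the result of cv2.minAreaRect() for each letter contour."""
    # Embed each rectangle as a 3D point (x, y, angle); the scales weight
    # spatial distance against angular distance in the Euclidean metric.
    feats = np.array(
        [[cx * pos_scale, cy * pos_scale, angle * angle_scale]
         for (cx, cy), _, angle in rects],
        dtype=np.float32,
    )
    # Letters sharing a label belong to the same cluster (word); -1 marks noise.
    return DBSCAN(eps=eps, min_samples=min_samples).fit_predict(feats)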
I extended the aforementioned metric with a few other elements related to the geometric properties of letters and words (distances, angles, areas, the ratio of neighbouring letters' areas, etc.) and now it works fine. Thanks for the suggestion.

Finding edges in a height map

I want to find sharp edges in a heightmap image, while ignoring shallow edges.
OpenCV offers multiple approaches to finding edges in a 2d Image: Canny, Sobel, etc.
However, all these approaches work by comparing the intensity values on both sides of the edge.
If the 2D Image represents a height map of a 3D object, then this results in some weird behaviour.
In a height map, the height of a 3D object at a given X/Y coordinate is represented as the intensity of the 2D Pixel at that X/Y coordinate:
In the above picture, at the edge B the intensity changes only slightly between the left and right side, even though it is a sharp corner.
At the edge A, there is a big change in the intensity between pixels on the left side of the edge and the right, even though it is only a shallow angle.
So there is no threshold for Canny or Sobel that will preserve the sharp edge but filter the shallow edge.
(In the above example, the edge B has one side with an ascending slope, and one side with a descending slope. I could filter for this feature; but that would remove the edges C and D as well)
How can I get a binary edge image, containing only edges above a certain angle? (e.g. edge B, C, and D, but not A)
Or alternatively, how can I get a gradient derivative image, where the intensity of each pixel is proportional to the angle of the edge at that pixel?
You'll probably want to use the second derivative instead of the first for this task.
Here's my intuition: the derivative of the height (intensity in your case) at each position on an evenly spaced grid is the tangent of the surface slope between sampling points (or at the sampling points if you use a two-sided derivative approximation). But since you want to detect sharp edges, you are really looking for how quickly that slope changes at the sampling points. This means you can set a threshold on the derivative of the arctan of the derivative of the intensity to achieve your goal (luckily there's no "need to go deeper" :) )
You will have to be extra careful when taking the derivative of the "slope angles" you get: depending on the coordinate system, you may run into ambiguity in angle differences (there are two ways to get from one angle to another, which differ in the general case; you're looking for the "shorter" one). You can look for a possible solution here
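A rough Python sketch of this idea (mine, not the answer's): compute the slope angle as the arctan of the first derivative of the height, then threshold the gradient of that angle. The file name, the scale factors relating intensity, height, and pixel spacing, and the angular threshold are all placeholder assumptions.

import cv2
import numpy as np

height = cv2.imread("heightmap.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

height_per_intensity = 1.0  # how many height units one intensity step represents
pixel_pitch = 1.0           # grid spacing in the same units

# First derivative of height; dividing by 8 * pixel_pitch roughly normalizes
# the 3x3 Sobel response to a true slope (rise over run).
gx = cv2.Sobel(height * height_per_intensity, cv2.CV_32F, 1, 0, ksize=3) / (8 * pixel_pitch)
gy = cv2.Sobel(height * height_per_intensity, cv2.CV_32F, 0, 1, ksize=3) / (8 * pixel_pitch)
slope_angle = np.arctan(np.hypot(gx, gy))  # radians, 0 = flat surface

# Second step: how quickly the slope angle changes from pixel to pixel.
sx = cv2.Sobel(slope_angle, cv2.CV_32F, 1, 0, ksize=3)
sy = cv2.Sobel(slope_angle, cv2.CV_32F, 0, 1, ksize=3)
bend = np.hypot(sx, sy)

# Keep only pixels where the surface bends by more than a chosen angle (placeholder).
edges = (bend > 0.5).astype(np.uint8) * 255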
I have a rather simple approach that I came across while reading a blog post.
It involves computing the median value of the gray scale image. Using this value we can now set two threshold values:
lower: max(0, (1.0 - 0.33) * v)
upper: min(255, (1.0 + 0.33) * v)
Now pass these two values as parameters into the cv2.Canny() function.
You will now be able to perform a reasonably tuned edge detection on any image. The crux of this approach is the median value of the image, which varies from image to image.
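Put together, a small helper along the lines described above (the file name is a placeholder):

import cv2
import numpy as np

def auto_canny(gray, sigma=0.33):
    # Derive the two thresholds from the median, as described above.
    v = np.median(gray)
    lower = int(max(0, (1.0 - sigma) * v))
    upper = int(min(255, (1.0 + sigma) * v))
    return cv2.Canny(gray, lower, upper)

img = cv2.imread("heightmap.png", cv2.IMREAD_GRAYSCALE)
edges = auto_canny(img)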
If I understand your question correctly, what you need is basically a corner with high intensity values.
If that is so, then look at the Harris corner detector, which helps you find points with a high gradient change in both directions.
http://docs.opencv.org/2.4/doc/tutorials/features2d/trackingmotion/harris_detector/harris_detector.html
Once you detect the corners, you can filter out those with high intensity using a suitable threshold.
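A short Python sketch of that pipeline (based on the linked tutorial; the parameter values and intensity threshold are placeholders):

import cv2
import numpy as np

img = cv2.imread("heightmap.png", cv2.IMREAD_GRAYSCALE)  # placeholder file
gray = np.float32(img)

# Harris response: blockSize, ksize and k are the usual parameters.
response = cv2.cornerHarris(gray, blockSize=2, ksize=3, k=0.04)

# Keep strong corners, then additionally require a high intensity (height).
corner_mask = response > 0.01 * response.max()
high_corners = corner_mask & (img > 128)  # 128 is a placeholder intensity threshold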

Detecting incomplete rectangles (missing corners / short edges) in OpenCV

I've been working off a variant of the OpenCV squares sample to detect rectangles. It's working fine for closed rectangles, but I was wondering what approaches I could take to detect rectangles that have openings, i.e. missing corners or lines that are too short.
I perform some dilation, which closes small gaps but not these larger ones.
I considered using a convex hull or bounding rect to generate a contour for comparison but since the edges of the rectangle are disconnected, each would read as a separate contour.
I think the first step is to detect which lines are candidates for forming a complete rectangle, and then perform some sort of line extrapolation. This seems promising, but my rectangle edges won't lie perfectly horizontally or vertically.
I'm trying to detect the three leftmost rectangles in this image:
Perhaps this paper is of interest? Rectangle Detection based on a Windowed Hough Transform
Basically, take the Hough line transform of the image. You will get maxima at the locations in (theta, rho) space that correspond to lines; the larger the value, the longer/straighter the line. Maybe apply a threshold to keep only the best lines. Then look for pairs of lines which are:
1) parallel: the maximums occur at similar theta values
2) similar length: the values of the maximums are similar
3) orthogonal to another pair of lines: theta values are 90 degrees away from other pairs' theta values
There are some more details in the paper, such as doing the transform in a sliding window, and then using an error metric to consolidate multiple matches.
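Here is a rough Python sketch of steps 1 and 3 (my illustration, not the paper's code). OpenCV's HoughLines does not expose the accumulator values, so the 'similar length' check from step 2 is omitted; the file name, Canny thresholds, and angular tolerances are placeholders.

import cv2
import numpy as np

img = cv2.imread("rectangles.png", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(img, 50, 150)

# Standard Hough transform: each detected line is a (rho, theta) pair.
lines = cv2.HoughLines(edges, 1, np.pi / 180, threshold=80)
lines = lines[:, 0, :] if lines is not None else np.empty((0, 2))

def angle_diff(a, b):
    # Smallest difference between two line orientations (lines wrap at pi).
    d = abs(a - b) % np.pi
    return min(d, np.pi - d)

# 1) group lines into (nearly) parallel pairs
parallel_pairs = [
    (i, j)
    for i in range(len(lines))
    for j in range(i + 1, len(lines))
    if angle_diff(lines[i][1], lines[j][1]) < np.deg2rad(3)
]

# 3) two parallel pairs roughly 90 degrees apart form a rectangle candidate
for a in range(len(parallel_pairs)):
    for b in range(a + 1, len(parallel_pairs)):
        t1 = lines[parallel_pairs[a][0]][1]
        t2 = lines[parallel_pairs[b][0]][1]
        if abs(angle_diff(t1, t2) - np.pi / 2) < np.deg2rad(3):
            print("rectangle candidate:", parallel_pairs[a], parallel_pairs[b])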

Understanding Distance Transform in OpenCV

What is the distance transform? What is the theory behind it? If I have two similar images but in different positions, how does the distance transform help in overlapping them? The results the distance transform function produces look as if they are divided in the middle: is this meant to find the center of one image so that the other can be overlapped halfway? I have looked into the OpenCV documentation but it's still not clear.
Look at the picture below (you may want to increase your monitor brightness to see it better). The picture shows the distance from the red contour depicted with pixel intensities, so in the middle of the image, where the distance is largest, the intensities are highest. This is what the distance transform produces. Here is an immediate application: the green shape is a so-called active contour, or snake, which moves according to the gradient of distances from the contour (while also following some other constraints) and curls around the red outline. Thus one application of the distance transform is shape processing.
Another application is text recognition: one of the powerful cues for text is the stable width of a stroke. The distance transform run on segmented text can confirm this. The corresponding method is called the stroke width transform (SWT).
As for aligning two rotated shapes, I am not sure how you would use the DT. You can find the center of a shape to rotate it about, but you can rotate it about any other point as well; the difference is just a translation, which is irrelevant if you run matchTemplate to match the shapes in the correct orientation.
Perhaps if you upload your images it will be clearer what to do. In general you can match them as a whole, or by features (which is more robust to various deformations or perspective distortions), or even using outlines/silhouettes if there are only a few features. Finally, you can figure out the orientation of your object (if it has a dominant orientation) by running PCA or fitting an ellipse (as a rotated rectangle):
cv::RotatedRect rect = cv::fitEllipse(points2D);
float angle_to_rotate = rect.angle;
The distance transform is an operation that works on a single binary image and fundamentally measures, for every empty point (zero pixel), the distance to the nearest boundary point (non-zero pixel).
An example is provided here and here.
The measurement can be based on various definitions, calculated discretely or precisely: e.g. Euclidean, Manhattan, or Chessboard. Indeed, the parameters in the OpenCV implementation allow some of these, and control their accuracy via the mask size.
The function can return the output measurement image (floating point) - as well as a labelled connected components image (a Voronoi diagram). There is an example of it in operation here.
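A minimal Python example of calling it (the mask file name is a placeholder). One caveat worth stating: OpenCV's implementation follows the opposite convention to the description above, computing for every non-zero pixel the distance to the nearest zero pixel, so invert the mask if you want distances measured from the background to an outline.

import cv2
import numpy as np

mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)          # placeholder binary mask
_, binary = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)

# Euclidean distance, 5x5 mask: each non-zero pixel gets the distance to the
# nearest zero pixel (invert 'binary' to measure from the background instead).
dist = cv2.distanceTransform(binary, cv2.DIST_L2, maskSize=5)

# Normalize to [0, 1] for display; the brightest pixels are farthest from any zero pixel.
vis = cv2.normalize(dist, None, 0, 1.0, cv2.NORM_MINMAX)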
I see from another question you asked recently that you are looking to register two images. I don't think the distance transform is really what you are looking for here. If you are looking to align a set of points, I would instead suggest you look at techniques like Procrustes analysis, Iterative Closest Point, or RANSAC.
