What's the use of Canny before HoughLines (opencv)?

I'm new to image processing and I'm working on detecting lines in a document image. I read the theory of the Hough line transform, but I can't see why I must use Canny before calling that function in OpenCV, as is done in many tutorials. What's the point of finding edges in this case? The fact is that if I don't use Canny or threshold before HoughLines(), the results are very messy. I hope someone can explain the reason why.
2 of the tutorials I've read:
Imgproc Feature Detection
Hough Line Transform

Short Answer
cvCanny is used to detect edges, as well as to increase contrast and remove image noise.
HoughLines, which uses the Hough Transform, is used to determine whether those edges are lines or not. The Hough Transform requires edges to be detected well in order to be efficient and provide meaningful results.
Long Answer
The Limitations of the Hough Transform are described in more detail on Wikipedia.
The efficiency of the Hough Transform relies on the bins of accumulated pixels being distinct, e.g. a direct contrast between a pixel and its surrounding neighbours or, if using a mask region, between a pixel region and its surrounding regions. If all pixels had similar accumulated values, nothing would stand out as a line or circle. This leads to the reduction of colour (colour to grayscale, grayscale to black and white) in order to increase contrast.
The number of parameters to the Hough Transform also increases the spread of votes in the pixel bins and increases the complexity of the transform, which means that normally only lines or circles are reliably detected using it, as they are described by few parameters (two for a line, three for a circle).
The edges need to be detected well before running the Hough Transform, otherwise its efficiency suffers further. Noisy images also don't work well with the Hough transform unless the noise is removed beforehand.
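A minimal sketch of the usual pipeline, assuming a hypothetical input file document.png and typical threshold values:

import cv2
import numpy as np

img = cv2.imread("document.png")                      # hypothetical input
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Canny reduces the image to a sparse binary edge map, so the Hough
# accumulator only receives votes from edge pixels.
edges = cv2.Canny(gray, 50, 150)

# rho resolution 1 px, theta resolution 1 degree, accumulator threshold 200 votes
lines = cv2.HoughLines(edges, 1, np.pi / 180, 200)

if lines is not None:
    for rho, theta in lines[:, 0]:
        # Convert each (rho, theta) peak back to two points for drawing.
        a, b = np.cos(theta), np.sin(theta)
        x0, y0 = a * rho, b * rho
        p1 = (int(x0 - 1000 * b), int(y0 + 1000 * a))
        p2 = (int(x0 + 1000 * b), int(y0 - 1000 * a))
        cv2.line(img, p1, p2, (0, 0, 255), 2)

Skipping the Canny step and passing gray directly makes almost every pixel a voter, which is exactly where the messy results come from.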

First of all, to detect lines you need to work on a boolean (or binary) image matrix, I mean: every pixel is either black or white, there's no grayscale.
HoughLines() requires this kind of image as input to work properly. That's the reason you have to use Canny or Threshold: to convert the coloured image matrix into a binary one.
Hough transformation
A line in a picture is actually an edge. The Hough transform scans the whole image and, using a transformation, converts the Cartesian coordinates of every white pixel into polar coordinates; the black pixels are left out. So you won't be able to get lines if you don't detect edges first, because HoughLines() doesn't know how to behave when given a grayscale image.
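To make that concrete, here is a toy sketch of the voting step (not OpenCV's implementation, just the idea): each white pixel in a binary edge image votes for every (rho, theta) line that could pass through it, via rho = x*cos(theta) + y*sin(theta).

import numpy as np

def hough_accumulate(edges, n_theta=180):
    # edges: binary image; white (non-zero) pixels vote, black pixels are skipped.
    h, w = edges.shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((2 * diag + 1, n_theta), dtype=np.int32)
    ys, xs = np.nonzero(edges)
    for j, theta in enumerate(thetas):
        # Polar form of all lines through each (x, y).
        rhos = np.round(xs * np.cos(theta) + ys * np.sin(theta)).astype(int)
        np.add.at(acc, (rhos + diag, j), 1)   # offset because rho can be negative
    return acc, thetas

Peaks in acc correspond to lines; with a grayscale input there is no clean notion of which pixels should vote.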

Theoretically, you are correct. Finding edges is not absolutely required for the Hough Line algorithm to work.
The way the Hough works is basically this: it takes every point and connects it to every other point, and whatever points have the most lines going through them, those lines stay. For this, we need points. The Canny creates those points. Theoretically you could use any sort of filter - isolate all blue or purple points and connect them, whatever - but edges work well.
The Hough also does not weight its lines or points. To the Hough, an image is binary - made up of either 1s or 0s, points or not points. There is no need for grayscale, and Canny conveniently returns binary images.
Thus Canny is always part of the Hough pipeline.

It's all about processing binary data:
complex data -> (a binary data, b binary data, c binary data, ...) (using canny(), sobel(), etc.)
a binary data -> function1() (using houghlines())
b binary data -> function2()
c binary data -> function3() ...
a binary data -X-> function2() ...
complex data -X-> function1() ...
(-X-> marks combinations that don't work: each function expects the particular binary decomposition prepared for it, and none of them can consume the complex data directly.)
HTH

Related

How to extract the paper contours in this image (opencv)?

I'm trying to extract the geometries of the papers in the image below, but I'm having some trouble with grabbing the contours. I don't know which threshold algorithm to use (here I used a static threshold = 10, which is probably not ideal).
And as you can see, I can get the correct number of images, but I can't get the proper bounds using this method.
Simply applying Otsu just doesn't work; it doesn't capture the geometries.
I assume I need to apply some edge detection, but I'm not sure what to do once I apply Canny or some other detector.
I also tried Sobel in both directions (+ve and -ve in x and y), but I'm unsure how to extract the contours from there.
How do I grab these contours?
Below are some previews of the images in the process of the final convex hull results.
[Images: Original Image; Sharpened; Dilate, Sharpen, Erode, Sharpen; Convex hulls of approximated polygons (which don't fully capture the desired regions)]
Sorry in advance about the horrible formatting, I have no idea how to make images smaller or title them nicely on SO.
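Not an answer, but a sketch of the Canny-then-contours route mentioned above, assuming OpenCV 4 and made-up thresholds that would need tuning on the real image:

import cv2
import numpy as np

img = cv2.imread("papers.png")                 # hypothetical filename
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 30, 90)

# Dilate so each paper outline closes into one connected component.
edges = cv2.dilate(edges, np.ones((5, 5), np.uint8))

# OpenCV 4 returns (contours, hierarchy).
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
hulls = [cv2.convexHull(c) for c in contours if cv2.contourArea(c) > 1000]
cv2.drawContours(img, hulls, -1, (0, 255, 0), 2)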

Is Canny edge detection rotationally invariant?

Suppose that the Canny edge detector successfully detects an edge in an image. The edge is then rotated by θ, where the relationship between a point (x, y) on the original edge and a point (x′, y′) on the rotated edge is defined as x′ = x·cosθ - y·sinθ; y′ = x·sinθ + y·cosθ.
Will the rotated edge be detected using the same Canny edge detector?
(I think we should find answer considering that the detection of an edge by the Canny edge detector depends only on the magnitude of its derivative.)
The answer is both yes and no, and which one you go for depends on how literally you take the question.
First of all, we're dealing with a rectangular grid, so given an integer location (x,y), the corresponding point (x',y') in a rotated image is highly likely not an integer location. And considering that the output of Canny is a set of points, and not a smooth function that can be interpolated, it would be difficult to establish a correspondence between the set resulting from the rotated image and the one resulting from the original image.
Think for example about the number of pixels on a discrete line of a given length at 0 degrees and at 45 degrees. (Hint: the line at 45 degrees has sqrt(2) times fewer pixels.)
But if you take the question more generally and interpret it as "will an edge that is detected in the original image also be detected after rotating the image by θ degrees?" then the answer is yes, in theory.
Of course practice is always a bit different than theory. The details of the implementation matter here. And there is always numerical imprecision to contend with.
Let's start by assuming the rotation is computed correctly, with a precise interpolation scheme (cubic, Lanczos) and not rounded after to uint8 or something (i.e. we're computing using floating-point values).
If you read the original paper by Canny, you'll see he proposes using Gaussian derivatives as the best compromise between compact support and computational precision. I have seen few implementations that actually do. Typically I see a convolution with a Gaussian and then Sobel derivatives. Especially for smaller sigmas (less smoothing) the difference can be quite large. Gaussian derivatives are rotationally invariant, Sobel derivatives are not.
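A quick way to see the difference, assuming SciPy is available (a sketch, not any particular library's Canny):

import numpy as np
from scipy import ndimage

img = np.random.rand(256, 256)   # stand-in for a real grayscale image

# Gaussian derivatives: differentiate the Gaussian kernel itself.
gx = ndimage.gaussian_filter(img, sigma=2, order=(0, 1))
gy = ndimage.gaussian_filter(img, sigma=2, order=(1, 0))
mag_gauss = np.hypot(gx, gy)   # rotationally invariant gradient magnitude

# Typical implementation: smooth with a Gaussian, then apply Sobel.
smoothed = ndimage.gaussian_filter(img, sigma=2)
sx = ndimage.sobel(smoothed, axis=1)
sy = ndimage.sobel(smoothed, axis=0)
mag_sobel = np.hypot(sx, sy)   # only approximately rotationally invariant

The gap between mag_gauss and mag_sobel grows as sigma shrinks, which is the point made above.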
The next step in the algorithm is non-maximum suppression. This is where the continuous gradient is converted to a set of points. For each pixel, it checks to see if it is a local maximum in the direction of the gradient. Because this is done per pixel, a different set of locations is tested in the rotated image compared to the original. Nonetheless, it should detect points along the same ridges in both cases.
Next, a hysteresis threshold is applied. This is a two-threshold operation that keeps pixels above one threshold as long as at least one pixel above a second threshold is present in the same connected component. This is where differences could occur between the rotated and original images. Remember we're dealing with a set of pixels: we have sampled the continuous gradient function at discrete points. There could be an edge that has one pixel above the second threshold in one version of the image, but not in the other. This would only occur for edges very close to the chosen threshold, of course.
Next comes a thinning. Because the non-maximum suppression can yield points along a thicker line, a thinning operation is applied that removes pixels from the set that are not needed to maintain connectivity of the lines. Which pixels are selected here will also differ between rotated and original images, but this does not change the geometry of the solution, so we still have the same set of points.
So, the answer is yes and no. :)
Note that the same logic applies to translation.
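If you want to try this empirically, here is one way to set up the experiment with OpenCV (whose Canny uses Gaussian smoothing plus Sobel internally, so expect small discrepancies; the filename and thresholds are placeholders):

import cv2
import numpy as np

img = cv2.imread("test.png", cv2.IMREAD_GRAYSCALE)
h, w = img.shape

# Rotate by 30 degrees with high-quality interpolation, no rounding back to coarse types.
M = cv2.getRotationMatrix2D((w / 2, h / 2), 30, 1.0)
rot = cv2.warpAffine(img, M, (w, h), flags=cv2.INTER_CUBIC)

e1 = cv2.Canny(img, 50, 150)
e2 = cv2.Canny(rot, 50, 150)

# Rotate the first edge map too, for a rough point-set comparison;
# the counts and the overlap will be close but not identical.
e1_rot = cv2.warpAffine(e1, M, (w, h), flags=cv2.INTER_NEAREST)
print(np.count_nonzero(e1), np.count_nonzero(e2), np.count_nonzero(e1_rot & e2))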

Detect triangles, ellipses and rectangles from an image

I am trying to detect the regions of traffic signs. Using OpenCV, my approach is as follows:
The color image:
Using TanTriggs preprocessing to get rid of the illumination variance:
Equalize histogram:
And binarize (Cv2.Threshold(blobs, blobs, 127, 255, ThresholdTypes.BinaryInv)):
Iterate over each blob using ConnectedComponents and get the mean colour value using the blob as a mask. If it is red then it may be a red sign.
Then get contours of this blob using FindContours.
Simplify the contours using ApproxPolyDP and check the points of each contour (a sketch follows this list):
If 3 points then triangle shape is acceptable --> candidate for triangle sign
If 4 points then shape is acceptable --> candidate
If more than 4 points, the bounding-box dimensions are acceptable and most of the points lie on the fitted ellipse (FitEllipse) --> candidate
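A sketch of that classification step, written in Python/OpenCV rather than the asker's OpenCvSharp, with a made-up epsilon for the polygon approximation:

import cv2

def classify_contour(contour):
    # Candidate shape from the vertex count of the simplified polygon.
    peri = cv2.arcLength(contour, True)
    approx = cv2.approxPolyDP(contour, 0.02 * peri, True)
    if len(approx) == 3:
        return "triangle candidate"
    if len(approx) == 4:
        return "rectangle candidate"
    if len(contour) >= 5:                    # fitEllipse needs at least 5 points
        ellipse = cv2.fitEllipse(contour)
        # A real check would also verify the bounding-box dimensions and
        # how closely the contour points follow this fitted ellipse.
        return "ellipse candidate"
    return "rejected"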
This approach works for the separated blobs in the binary image, like the circular 100km sign in my example. However, if there is a connection to outside objects, like the triangle at the bottom left of the binary image, it fails.
Because, the mean value of this blob is far from red!
Using erosion helps in some cases, but makes it worse in many of the other images.
Using different threshold values for the binarization also works for some, but fails on many, just like the erosion.
Using HoughCircle is just very slow and I couldn't manage to get good results playing with the parameters.
I have tried using matchShapes but couldn't get good results.
Can anybody show me another way to achieve what I want (with a reasonable computational time)?
Any information, or code in any language, is welcome.
Edit:
Using a circularity measure (C = P^2 / (4πA)) or the approach I have described above, triangle and ellipse shapes can be found when they are separated. However, when the contour is like this, for example:
I could not find a robust way to extract the triangle piece. If I could, I would check the mean colour and decide if it's a red sign candidate.
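For reference, that circularity measure is a two-liner on an OpenCV contour; it equals 1 for a perfect circle and grows for elongated or spiky shapes:

import cv2
import numpy as np

def circularity(contour):
    # C = P^2 / (4*pi*A): 1 for a circle, larger for everything else.
    P = cv2.arcLength(contour, True)
    A = cv2.contourArea(contour)
    return P * P / (4 * np.pi * A) if A > 0 else np.inf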
Sorry, I don't have the kudos to comment, but can't you use the red colour?
import cv2
import numpy as np

img = cv2.imread("ms0QB.png")
grey = np.zeros(img.shape[:2], np.uint8)

# Red wraps around the ends of the hue axis in HSV, so take both ends of the range.
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
mask = np.logical_or(hsv[:, :, 0] > 160, hsv[:, :, 0] < 10)
grey[mask] = 255

cv2.imshow("160<hue<182", grey)
cv2.waitKey()

Detecting incomplete rectangles (missing corners / short edges) in OpenCV

I've been working off a variant of the OpenCV squares sample to detect rectangles. It's working fine for closed rectangles, but I was wondering what approaches I could take to detect rectangles that have openings, i.e. missing corners or lines that are too short.
I perform some dilation, which closes small gaps but not these larger ones.
I considered using a convex hull or bounding rect to generate a contour for comparison but since the edges of the rectangle are disconnected, each would read as a separate contour.
I think the first step is to detect which lines are candidates for forming a complete rectangle, and then perform some sort of line extrapolation. This seems promising, but my rectangle edges won't lie perfectly horizontally or vertically.
I'm trying to detect the three leftmost rectangles in this image:
Perhaps this paper is of interest? Rectangle Detection based on a Windowed Hough Transform
Basically, take the Hough line transform of the image. You will get maxima at the locations in (theta, rho) space which relate to the places where there are lines. The larger the value, the longer/straighter the line. Maybe apply a threshold to only keep the best lines. Then, we are trying to look for pairs of lines which are
1) parallel: the maxima occur at similar theta values
2) similar length: the values of the maxima are similar
3) orthogonal to another pair of lines: theta values are 90 degrees away from other pairs' theta values
There are some more details in the paper, such as doing the transform in a sliding window, and then using an error metric to consolidate multiple matches.
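A rough sketch of the pairing logic on top of OpenCV's HoughLines (the angle and distance tolerances are arbitrary placeholders, and condition 2 is skipped because cv2.HoughLines does not return the vote counts):

import cv2
import numpy as np

img = cv2.imread("rects.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input
edges = cv2.Canny(img, 50, 150)
lines = cv2.HoughLines(edges, 1, np.pi / 180, 80)[:, 0]   # (rho, theta) peaks

# 1) parallel: similar theta, but distinct rho so it isn't the same line twice
pairs = []
for i, (r1, t1) in enumerate(lines):
    for r2, t2 in lines[i + 1:]:
        if abs(t1 - t2) < np.radians(3) and abs(r1 - r2) > 5:
            pairs.append((t1 + t2) / 2)

# 3) orthogonal: two pairs whose mean angles differ by roughly 90 degrees
rects = [(a, b) for i, a in enumerate(pairs) for b in pairs[i + 1:]
         if abs(abs(a - b) - np.pi / 2) < np.radians(3)]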

Recognize pattern in images

I am looking for a fast idea/algorithm that lets me find squares (as mark points) in an image file. It shouldn't be much of a challenge, however...
I started by converting the source image to grayscale and scanning each line of the image, pixel by pixel, looking for the two or three longest lines.
Then, having an array of "lines", I look for elements which may form the desired square.
A better idea would be to find the pattern by its known traits, like: it is square, beyond the square there is no distortion (just white space), etc.
The goal is to analyze a 5000 x 5000 px image in less than 1-2 s.
Is it possible?
One of the OpenCV samples, squares.cpp, does just this; see here for the code. Alternatively you could look up the Hough transform to detect all lines in your image, and then test for two lines intersecting at right angles.
There are also a number of resources on this site which may help you:
OpenCV C++/Obj-C: Detecting a sheet of paper / Square Detection
Are there any opencv function like "cvHoughCircles()" for square detection?
square detection, image processing
I'm sure there are others, these are just the first few I came across.
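For reference, the core of the squares.cpp approach in a Python sketch (the thresholds are placeholders in the spirit of the sample, not its exact values):

import cv2
import numpy as np

def find_squares(img):
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    edges = cv2.dilate(edges, None)   # close small gaps between edge segments
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    squares = []
    for c in contours:
        approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
        if len(approx) == 4 and cv2.contourArea(approx) > 1000 \
                and cv2.isContourConvex(approx):
            squares.append(approx)
    return squares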
See Scale-invariant feature transform, template matching, and Hough transform. A quick and inaccurate guess may be to make a histogram of color and compare it. If the image is complicated enough, you might be able to distinguish between several sets of images.
To make the matter simple, assume we have three buckets for R, G, and B. A completely white image would have (100%, 100%, 100%) for (R, G, B). A completely red image would have (100%, 0%, 0%). A complicated image might have something like (23%, 53%, 34%). If you take the distance between the points in that (R, G, B) space, you can compare which one is "closer".
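That three-bucket comparison in code form, assuming both inputs are images loaded as H x W x 3 arrays:

import numpy as np

def rgb_signature(img):
    # Mean intensity per channel as a fraction of the maximum (OpenCV orders channels B, G, R).
    return img.reshape(-1, 3).mean(axis=0) / 255.0

def colour_distance(img_a, img_b):
    # Euclidean distance between the two points in colour space: smaller means "closer".
    return np.linalg.norm(rgb_signature(img_a) - rgb_signature(img_b))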
I guess the links by chris solved the question :)
