Implementing Hough Transform - image-processing

I am working on sudoku grid detection in C using SDL. I figured out that the Hough Transform could help me detect lines, so I tried to implement it. However, I don't understand what to do with the accumulator array after iterating over the image.
On the Wikipedia page it is said that you have to apply a threshold and determine which parts of the image match the lines, but I don't understand this part of the implementation.
Also, the Hough Transform works in polar coordinates, while SDL, from what I have seen of the documentation, can only draw lines between two given points. So how could I draw the lines from their polar expression?
PS: This is my first time using Stack Overflow; I hope I formulated my problem correctly.
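For what it's worth, here is a minimal sketch of both steps, purely as an illustration (SDL2 is assumed, and the function name and the 2000-pixel length are made up): each accumulator cell (rho, theta) whose vote count exceeds a threshold you choose counts as a detected line, and the polar pair is then turned into two far-apart endpoints that SDL_RenderDrawLine can clip and draw.

    #include <SDL2/SDL.h>
    #include <math.h>

    // Hypothetical helper: draw one line detected by the Hough transform.
    // rho/theta come from an accumulator cell whose votes passed the threshold;
    // renderer is an SDL_Renderer you already created.
    static void draw_polar_line(SDL_Renderer *renderer, double rho, double theta)
    {
        double c = cos(theta), s = sin(theta);
        // (x0, y0) is the point of the line closest to the origin.
        double x0 = c * rho, y0 = s * rho;
        // Walk far along the line direction (-sin, cos) in both directions,
        // so the segment spans the whole window; SDL clips it for us.
        double len = 2000.0; // assumed to be larger than the image diagonal
        int x1 = (int)(x0 - len * s), y1 = (int)(y0 + len * c);
        int x2 = (int)(x0 + len * s), y2 = (int)(y0 - len * c);
        SDL_RenderDrawLine(renderer, x1, y1, x2, y2);
    }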

Related

Warping curved rectangle to a regular rectangle

Edit: Upon further research, I've come across similar questions. I guess the process is not as trivial as using the WarpPerspective() function. Here is a similar question and an answer.
I'm using methods like thresholding and Canny to extract a rectangular shape from books. Sometimes the rectangles (contours) are deformed like this (pages are not always flat):
As you can see the bottom line is not a straight line. I need to warp it into a rectangle to do further analysis of its inside contents.
Normally, I use WarpPerspective() using the 4 points I get from ApproxPolyDP() with a contour like this and it works fine:
But I can't figure out what to do with a curved rectangle. Here is what I get using the method I use on non-curved rectangles. It's close but not quite what I want:

Get polygons from edges in OpenCV

I have an image I want to extract lines from (a vascular network), using the Hough line algorithm. First I preprocess the image, then use Canny edge detection to generate the binary image.
I want to get a polygon/an array of joined line segments representing the shape of the vascular network. However, applying the Hough line transform directly to this image yields mediocre results, partly because edge detection leaves each vessel represented by a line on either side of it instead of a single center line.
I'm new to OpenCV and image processing in general, so I'm probably going about this the wrong way. Any suggestions, or any recommended literature?
I suggest not using Canny edge detection.
Instead, first use a binary threshold to get a binary image of the vascular network (see http://docs.opencv.org/3.1.0/d7/d4d/tutorial_py_thresholding.html#gsc.tab=0 for applying a binary threshold). Then, pixels that are "on" should be points inside the network and those that are "off" should be outside.
Then use the findContours method:
http://opencvexamples.blogspot.com/2013/09/find-contour.html
This method gives you an array of contours, each of which is a list of points. That list of points is the list of joined line segments you are looking for (it represents a contour, and if you are lucky it might be a polygon!).
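A minimal sketch of that pipeline, only as an illustration (the file name and the choice of an Otsu threshold are my own assumptions):

    #include <opencv2/opencv.hpp>
    #include <iostream>
    #include <vector>

    int main()
    {
        cv::Mat gray = cv::imread("vessels.png", cv::IMREAD_GRAYSCALE);

        // Binary threshold instead of Canny: "on" pixels lie inside the network.
        cv::Mat bw;
        cv::threshold(gray, bw, 0, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);

        // Each contour is a list of points outlining one connected region,
        // i.e. the joined line segments mentioned above.
        std::vector<std::vector<cv::Point>> contours;
        cv::findContours(bw, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

        for (const auto &contour : contours)
            std::cout << "contour with " << contour.size() << " points\n";
        return 0;
    }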
Hough may not be the best tool for this job. Hough will give you straight lines or other geometric shapes. It is not designed to follow a detailed pattern like this.
Given the image, I would read research papers which already solve this. Here are a few examples from a search on Google Scholar. If they don't work for you, look up the citations as they should lead you down other paths.
https://scholar.google.com/scholar?hl=en&q=retina+computer+vision+vascular
http://ijesat.org/Volumes/2012_Vol_02_Iss_04/IJESAT_2012_02_04_25.pdf
http://www.vision.cs.rpiscrews.us/publications/pdfs/shen_itbm_submitted.pdf

Finding simple shapes in 2D point clouds

I am currently looking for a way to fit a simple shape (e.g. a T or an L shape) to a 2D point cloud. What I need as a result is the position and orientation of the shape.
I have been looking at a couple of approaches but most seem very complicated and involve building and learning a sample database first. As I am dealing with very simple shapes I was hoping that there might be a simpler approach.
By saying you don't want to do any training, I am guessing you mean you don't want to do any feature matching. Feature matching is used to make good guesses about the pose (location and orientation) of the object in the image, and together with RANSAC it would be applicable to your problem for proposing and verifying hypotheses about the object pose.
The simplest approach is template matching, but it may be too computationally expensive (it depends on your use case). In template matching you loop over the possible locations, orientations and scales of the object and check how well the template (a cloud that looks like an L or a T at that location, orientation and scale) matches; alternatively you sample possible locations, orientations and scales randomly. The check of the template can be made fairly fast if your points are organised (or if you organise them, e.g. by converting them into pixels).
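Purely as an illustration of the fixed-orientation, fixed-scale case with OpenCV (the file names are placeholders, and the point clouds are assumed to have been rasterized into images first):

    #include <opencv2/opencv.hpp>
    #include <iostream>

    int main()
    {
        cv::Mat scene = cv::imread("scene.png", cv::IMREAD_GRAYSCALE);
        cv::Mat templ = cv::imread("template_L.png", cv::IMREAD_GRAYSCALE);

        // Score every possible location of the template at this one
        // orientation and scale; handling rotation/scale would mean
        // repeating this with rotated/resized versions of the template.
        cv::Mat scores;
        cv::matchTemplate(scene, templ, scores, cv::TM_CCOEFF_NORMED);

        double best;
        cv::Point bestLoc;                 // top-left corner of the best match
        cv::minMaxLoc(scores, nullptr, &best, nullptr, &bestLoc);

        std::cout << "best score " << best << " at ("
                  << bestLoc.x << ", " << bestLoc.y << ")\n";
        return 0;
    }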
If this is too slow, there are many methods for making template matching faster, and I would recommend the Generalised Hough Transform.
Here, before starting the search, you loop over the boundary of the shape you are looking for (T or L). For each boundary point you take the gradient direction, and record the angle and the distance from that point to the origin (reference point) of the object template. You add this to a table (let us call it Table A) for each boundary point, and you end up with a table that maps a gradient direction to the set of possible locations of the object origin. Now you set up a 2D voting space, which is really just a 2D array (let us call it Table B), where each pixel holds the number of votes for the object being at that location. Then, for each point in the target image (point cloud), you compute the gradient, look up in Table A the set of possible object locations corresponding to that gradient, and add one vote to each of the corresponding locations in Table B (the Hough space).
This is a very terse explanation, but knowing to look for Template Matching and the Generalised Hough Transform you will be able to find better explanations on the web, e.g. the Wikipedia pages for Template Matching and the Hough Transform.
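Purely as an illustration of the Table A / Table B idea above, here is a heavily reduced sketch (fixed orientation and scale only; the function names, Canny thresholds and the 36 direction bins are my own assumptions, and a real implementation would add loops over rotation and scale):

    #include <opencv2/opencv.hpp>
    #include <map>
    #include <vector>
    #include <cmath>
    #include <algorithm>

    // Table A: for each gradient-direction bin, the offsets from template
    // boundary points to the template origin (here: its center).
    using RTable = std::map<int, std::vector<cv::Point>>;

    static int directionBin(float gx, float gy, int nBins = 36)
    {
        double angle = std::atan2(gy, gx);                 // -pi .. pi
        int bin = (int)((angle + CV_PI) / (2 * CV_PI) * nBins);
        return std::min(bin, nBins - 1);
    }

    static RTable buildRTable(const cv::Mat &templGray)
    {
        cv::Mat edges, gx, gy;
        cv::Canny(templGray, edges, 50, 150);
        cv::Sobel(templGray, gx, CV_32F, 1, 0);
        cv::Sobel(templGray, gy, CV_32F, 0, 1);

        cv::Point origin(templGray.cols / 2, templGray.rows / 2);
        RTable table;
        for (int y = 0; y < edges.rows; ++y)
            for (int x = 0; x < edges.cols; ++x)
                if (edges.at<uchar>(y, x))
                    table[directionBin(gx.at<float>(y, x), gy.at<float>(y, x))]
                        .push_back(origin - cv::Point(x, y));
        return table;
    }

    // Table B: one vote cell per candidate origin location in the scene.
    static cv::Mat vote(const cv::Mat &sceneGray, const RTable &table)
    {
        cv::Mat edges, gx, gy;
        cv::Canny(sceneGray, edges, 50, 150);
        cv::Sobel(sceneGray, gx, CV_32F, 1, 0);
        cv::Sobel(sceneGray, gy, CV_32F, 0, 1);

        cv::Mat votes = cv::Mat::zeros(sceneGray.size(), CV_32S);
        for (int y = 0; y < edges.rows; ++y)
            for (int x = 0; x < edges.cols; ++x) {
                if (!edges.at<uchar>(y, x)) continue;
                auto it = table.find(directionBin(gx.at<float>(y, x), gy.at<float>(y, x)));
                if (it == table.end()) continue;
                for (const cv::Point &off : it->second) {
                    cv::Point c = cv::Point(x, y) + off;   // candidate origin
                    if (c.inside(cv::Rect(0, 0, votes.cols, votes.rows)))
                        votes.at<int>(c)++;
                }
            }
        return votes;  // the highest peak is the most likely object location
    }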
You may need to:
1- Extract some features from the image in which you are looking for the object.
2- Extract another set of features from the image of the object itself.
3- Match the two sets of features (this is possible using methods like SIFT).
4- When you find a match, apply the RANSAC algorithm; it provides you with a transformation matrix (including translation and rotation information).
To get started with SIFT, begin here; it is actually one of the best source codes written for SIFT. It includes the RANSAC algorithm, so you do not need to implement it yourself.
You can read about RANSAC here.
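A small sketch of steps 1-4 using OpenCV's own SIFT and findHomography instead of the linked code (this assumes OpenCV 4.4 or newer, where cv::SIFT lives in the main module; the file names and the 0.75 ratio-test constant are my own placeholders):

    #include <opencv2/opencv.hpp>
    #include <iostream>
    #include <vector>

    int main()
    {
        cv::Mat object = cv::imread("object.png", cv::IMREAD_GRAYSCALE);
        cv::Mat scene  = cv::imread("scene.png",  cv::IMREAD_GRAYSCALE);

        // Steps 1 and 2: detect SIFT keypoints and descriptors in both images.
        auto sift = cv::SIFT::create();
        std::vector<cv::KeyPoint> kObj, kScn;
        cv::Mat dObj, dScn;
        sift->detectAndCompute(object, cv::noArray(), kObj, dObj);
        sift->detectAndCompute(scene,  cv::noArray(), kScn, dScn);

        // Step 3: match descriptors and keep unambiguous matches (ratio test).
        cv::BFMatcher matcher(cv::NORM_L2);
        std::vector<std::vector<cv::DMatch>> knn;
        matcher.knnMatch(dObj, dScn, knn, 2);

        std::vector<cv::Point2f> src, dst;
        for (const auto &m : knn)
            if (m.size() == 2 && m[0].distance < 0.75f * m[1].distance) {
                src.push_back(kObj[m[0].queryIdx].pt);
                dst.push_back(kScn[m[0].trainIdx].pt);
            }

        // Step 4: RANSAC (inside findHomography) rejects outlier matches and
        // yields the transformation from the object image into the scene.
        cv::Mat H;
        if (src.size() >= 4)
            H = cv::findHomography(src, dst, cv::RANSAC);
        std::cout << "homography:\n" << H << "\n";
        return 0;
    }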
Two common ways of detecting the shapes (L, T, ...) in your 2D point cloud data would be using OpenCV or the Point Cloud Library. I'll explain the steps you may take for detecting those shapes in OpenCV. You can use the following three methods, and the choice of the right method depends on the shape (its size, area, ...):
Hough Line Transformation
Template Matching
Finding Contours
The first step would be converting your points to a grayscale Mat object; by doing that you basically make an image of your 2D point cloud data, so you can use the other OpenCV functions. Then you may smooth the image in order to reduce the noise; the result would be a somewhat blurry image that still contains the real edges. If your application does not need real-time processing, you can use bilateralFilter. You can find more information about smoothing here.
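As a small sketch of that first step (the point container, image size and bilateralFilter parameters are assumptions to adapt):

    #include <opencv2/opencv.hpp>
    #include <vector>

    // Rasterize a 2D point cloud into a grayscale image and smooth it.
    cv::Mat pointsToImage(const std::vector<cv::Point2f> &cloud, cv::Size size)
    {
        cv::Mat img = cv::Mat::zeros(size, CV_8UC1);
        for (const auto &p : cloud) {
            cv::Point px(cvRound(p.x), cvRound(p.y));
            if (px.inside(cv::Rect(cv::Point(0, 0), size)))
                img.at<uchar>(px) = 255;           // one white pixel per point
        }

        // Edge-preserving smoothing; (9, 75, 75) are the usual tutorial
        // values and would need tuning for your data.
        cv::Mat smooth;
        cv::bilateralFilter(img, smooth, 9, 75, 75);
        return smooth;
    }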
The next step would be choosing the method. If the shape is just a set of orthogonal lines (such as L or T), you can use the Hough Line Transform to detect the lines; after detection you can loop over the lines and calculate the dot product of their direction vectors (since the lines are orthogonal, the result should be close to 0). You can find more information about the Hough Line Transform here.
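A small sketch of that dot-product check (the Hough parameters and the 0.1 tolerance are placeholders, and the input is assumed to be the binary image built above):

    #include <opencv2/opencv.hpp>
    #include <iostream>
    #include <vector>
    #include <cmath>

    // Detect line segments and report roughly orthogonal pairs.
    void findOrthogonalPairs(const cv::Mat &binary)
    {
        std::vector<cv::Vec4i> lines;              // each entry: x1, y1, x2, y2
        cv::HoughLinesP(binary, lines, 1, CV_PI / 180, 30, 20, 5);

        for (size_t i = 0; i < lines.size(); ++i)
            for (size_t j = i + 1; j < lines.size(); ++j) {
                cv::Point2f a(lines[i][2] - lines[i][0], lines[i][3] - lines[i][1]);
                cv::Point2f b(lines[j][2] - lines[j][0], lines[j][3] - lines[j][1]);
                double dot = (a.x * b.x + a.y * b.y) /
                             (std::hypot(a.x, a.y) * std::hypot(b.x, b.y));
                if (std::fabs(dot) < 0.1)          // ~0 means roughly orthogonal
                    std::cout << "lines " << i << " and " << j << " are orthogonal\n";
            }
    }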
Another way would be detecting your shape using Template Matching. Basically, you make a template of your shape (L or T) and use it in the matchTemplate function. Keep in mind that the size of the template should be of the same order as the object in your image; otherwise you may need to resize the image first. More information about the algorithm can be found here.
If the shapes enclose areas, you can find the contours of the shape using findContours; it will give you the polygons that outline the shapes you want to detect. For instance, if your shape is an L, its polygon would have roughly 6 sides. You can also use other filters along with findContours, such as calculating the area of the shape.
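And a small sketch of the contour route (the 0.02 epsilon factor and the 100-pixel area cut-off are assumptions to tune):

    #include <opencv2/opencv.hpp>
    #include <iostream>
    #include <vector>

    // Approximate each contour with a polygon and use the number of corners
    // as a cue (an L outline has about 6).
    void classifyByContour(const cv::Mat &binary)
    {
        std::vector<std::vector<cv::Point>> contours;
        cv::findContours(binary, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

        for (const auto &c : contours) {
            if (cv::contourArea(c) < 100.0) continue;   // drop tiny blobs

            std::vector<cv::Point> poly;
            cv::approxPolyDP(c, poly, 0.02 * cv::arcLength(c, true), true);
            std::cout << "shape with " << poly.size() << " corners, area "
                      << cv::contourArea(c) << "\n";
        }
    }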

Three Dimensional Hough Space

I'm searching for the radius and the center coordinates of a circle in an image. I have already tried the 2D Hough transform, but my circle radius is also unknown. I'm still a beginner in computer vision, so I need guidelines and help for implementing a three-dimensional Hough space.
You implement it just like the 2D Hough space, but with an additional parameter. Pseudocode would look like this:

    for each (x, y) in image
        for each test_radius in [min_radius .. max_radius]
            for each point (tx, ty) on the circle with radius test_radius around (x, y)
                HoughSpace(tx, ty, test_radius) += image(x, y)
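A brute-force translation of that pseudocode into C++ with OpenCV, only as a sketch (the radius range, the one-degree angular step and the binary edge image are assumptions; in practice cv::HoughCircles solves the same problem far more efficiently):

    #include <opencv2/opencv.hpp>
    #include <iostream>
    #include <vector>
    #include <cmath>

    // Brute-force 3D accumulator over (center x, center y, radius).
    // 'edges' is assumed to be a binary edge image (e.g. from cv::Canny).
    void houghCircles3D(const cv::Mat &edges, int minRadius, int maxRadius)
    {
        int nRadii = maxRadius - minRadius + 1;
        std::vector<cv::Mat> acc;                  // one vote plane per radius
        for (int i = 0; i < nRadii; ++i)
            acc.push_back(cv::Mat::zeros(edges.size(), CV_32S));

        for (int y = 0; y < edges.rows; ++y)
            for (int x = 0; x < edges.cols; ++x) {
                if (!edges.at<uchar>(y, x)) continue;          // only edge pixels vote
                for (int r = minRadius; r <= maxRadius; ++r)
                    for (double t = 0; t < 2 * CV_PI; t += CV_PI / 180) {
                        int cx = cvRound(x + r * std::cos(t)); // candidate center
                        int cy = cvRound(y + r * std::sin(t));
                        if (cx >= 0 && cx < edges.cols && cy >= 0 && cy < edges.rows)
                            acc[r - minRadius].at<int>(cy, cx)++;
                    }
            }

        // The strongest vote over all three parameters is the best circle.
        double bestVotes = 0; int bestR = minRadius; cv::Point bestC;
        for (int i = 0; i < nRadii; ++i) {
            double maxVal; cv::Point maxLoc;
            cv::minMaxLoc(acc[i], nullptr, &maxVal, nullptr, &maxLoc);
            if (maxVal > bestVotes) { bestVotes = maxVal; bestR = minRadius + i; bestC = maxLoc; }
        }
        std::cout << "center (" << bestC.x << ", " << bestC.y << ") radius "
                  << bestR << " with " << bestVotes << " votes\n";
    }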
Thiton gives you the correct approach to formalize the problem. But then you will run into other problems inherent to the Hough transform:
How do you visualize the parameter space? You may implement something with a library like VTK, but 3D visualization of data is always a difficult topic. Visualization is important for debugging your detection algorithm, and it is one of the nice things about the 2D Hough transform.
Local maximum detection is non-trivial. The extra dimension means your parameter space will be more sparse, so you will have more tuning to do in this area.
If you are looking for a circle detection algorithm, you may have better options than the Hough transform (Google "Fast Circle Detection Using Gradient Pair Vectors"; it looks good to me).

OpenCV Identifying Lines and Curves

I'm just starting to learn OpenCV programming. May I ask how I can identify lines and curves in OpenCV? My problem is that I have to identify whether the image contains a convex or concave curve (horizontal or vertical) or a vertical, diagonal or horizontal line.
In my code, I used cvSetImageROI to take a particular part of an image, and then I'm trying to classify each part according to the said lines/curves.
Are there OpenCV functions available for this? Thank you very much for the help. By the way, I'm using Linux and C++.
The Hough transform (http://en.wikipedia.org/wiki/Hough_transform, http://homepages.inf.ed.ac.uk/rbf/HIPR2/hough.htm) is the standard way to do it. In its simple form (as implemented in OpenCV) it can detect lines of arbitrary position and angle, as well as line segments.
Look here for an example:
http://opencv.itseez.com/modules/imgproc/doc/feature_detection.html?highlight=hough#houghlinesp
For curves, the detection process is a bit more complicated and you need the generalised Hough transform. It is not yet available in OpenCV, but you can write it as an exercise or look for a good implementation.
http://en.wikipedia.org/wiki/Generalised_Hough_transform describes it (in short).
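As a small illustration of the straight-line part (the ROI is assumed to already be an 8-bit grayscale image; the Canny/Hough thresholds and the 10-degree tolerance are placeholders to tune):

    #include <opencv2/opencv.hpp>
    #include <iostream>
    #include <vector>
    #include <cmath>

    // Classify straight lines found in a grayscale ROI as vertical,
    // horizontal or diagonal.
    void classifyLines(const cv::Mat &roiGray)
    {
        cv::Mat edges;
        cv::Canny(roiGray, edges, 50, 150);

        std::vector<cv::Vec2f> lines;              // each entry: rho, theta
        cv::HoughLines(edges, lines, 1, CV_PI / 180, 60);

        for (const auto &l : lines) {
            double deg = l[1] * 180.0 / CV_PI;     // theta is the normal angle
            if (deg < 10.0 || deg > 170.0)
                std::cout << "vertical line\n";    // normal roughly horizontal
            else if (std::fabs(deg - 90.0) < 10.0)
                std::cout << "horizontal line\n";  // normal roughly vertical
            else
                std::cout << "diagonal line\n";
        }
    }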
