How to detect corners with a specific angle in degrees - opencv

I have an image with an equilateral triangle and a rectangle:
I want to detect only the 3 corners of the triangle. Following the OpenCV Harris corner detector tutorial, I see that all the corner points of the triangle have threshold = 80 (while all 4 corner points of the rectangle have threshold = 255), but I could not find the link between the threshold and the corner angle.
How can I find the corners whose angle is in the range of [55, 65] degrees, for example?
Here is the output Mat: http://pastebin.com/raw.php?i=qNidEAG0
P.S.: I am very new to CV, hope you can give some more detail!

It seems that I have found a possible solution. I implemented it in Mathematica, but I can explain the basic steps:
1. Run a corner detector and keep the strongest corners. I used the Harris operator.
2. Find the contours (cv::findContours).
3. For each corner on each contour, draw a circle centered at the corner and find the points where the circle intersects the contour. There is no ready-made function for this in OpenCV, so you have to implement it yourself.
4. Now, for each corner you have the coordinates of three points: the corner itself and one point on each side of the contour. That is enough to evaluate the corner angle using the dot product:
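A minimal C++ sketch of that last step (the helper name and the use of cv::Point2f are mine, not part of the original answer): given the corner C and the two intersection points A and B, the angle follows from cos(theta) = (A-C).(B-C) / (|A-C| |B-C|).

    #include <opencv2/core.hpp>
    #include <algorithm>
    #include <cmath>

    // Angle at corner C, in degrees, from the two points A and B where the
    // circle around C intersects the contour.
    static double cornerAngleDeg(const cv::Point2f& C, const cv::Point2f& A, const cv::Point2f& B)
    {
        cv::Point2f u = A - C, v = B - C;
        double cosTheta = (u.x * v.x + u.y * v.y) /
                          (std::hypot(u.x, u.y) * std::hypot(v.x, v.y));
        cosTheta = std::max(-1.0, std::min(1.0, cosTheta));   // guard against rounding
        return std::acos(cosTheta) * 180.0 / CV_PI;
    }

A corner of the equilateral triangle should come out near 60 degrees, so filtering for angles in [55, 65] keeps the triangle's corners and rejects the rectangle's 90-degree corners.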
Result:

Related

Difference between Hough circle and minEnclosingCircle in OpenCV to detect circles?

I just want to know what the difference would be if, instead of using a Hough circle to detect a circle, I find a contour and use minEnclosingCircle to find the circle. Which one will be more accurate? As far as I can understand, both of them should give me the same thing. Can anyone help clarify?
minEnclosingCircle will enclose all outlier points in your connected component (blob or edge), while the Hough circle searches for the best fit using a voting algorithm.
So for detecting circles, the Hough circle is more accurate.
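For reference, a rough sketch of the two approaches with the C++ API (the function name and all parameter values are placeholders, not tuned for any particular image):

    #include <opencv2/imgproc.hpp>
    #include <vector>

    void compareCircleFits(const cv::Mat& gray, const std::vector<cv::Point>& contour)
    {
        // Hough: accumulates votes from edge pixels and keeps the best-supported circle,
        // so stray outlier points have little influence.
        std::vector<cv::Vec3f> circles;                       // each circle is (x, y, radius)
        cv::HoughCircles(gray, circles, cv::HOUGH_GRADIENT, 1 /*dp*/, 20 /*minDist*/,
                         100 /*param1*/, 30 /*param2*/);

        // minEnclosingCircle: smallest circle containing every contour point,
        // so a single outlier inflates the radius.
        cv::Point2f center;
        float radius = 0.f;
        cv::minEnclosingCircle(contour, center, radius);
    }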

How to rectify a detected ellipse

I am trying to find circles in images and warp them back to a canonical view (i.e. as if looking into the center). However, circles in general project to ellipses under perspective transformations. So I am first detecting ellipses, roughly doing the following (in OpenCV):
1. Find contours in the image
2. Estimate the area of each contour
3. Fit a bounding box to the contour and estimate the ellipse area as width/2 * height/2 * PI
4. Check whether the difference between the contour area and the estimated ellipse area is below a threshold (a rough sketch of this check is shown below)
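Something along these lines, assuming the contour-based steps above (the helper name and the threshold value are only illustrative):

    #include <opencv2/imgproc.hpp>
    #include <vector>
    #include <cmath>

    bool looksLikeEllipse(const std::vector<cv::Point>& contour, double maxRelDiff = 0.1)
    {
        double areaContour = cv::contourArea(contour);                 // step 2
        cv::Rect box = cv::boundingRect(contour);                      // step 3
        double areaEllipse = (box.width / 2.0) * (box.height / 2.0) * CV_PI;
        // step 4: relative difference between the two area estimates
        return std::abs(areaContour - areaEllipse) / areaEllipse < maxRelDiff;
    }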
Assuming I have found an ellipse by this method, how can I rectify it back to a circle so that I "undo" the perspective transform (although not the in-plane rotation, as I guess that cannot be recovered)? For example, if it were a rectangle I would just compute the homography from the 4 corners of an upright rectangle to the detected projected one.
I have no idea how to do this with an ellipse, any help is much appreciated.
Thanks
A circle is indeed transformed into an ellipse by a perspective transformation; however, the axes of the ellipse are not the images of the axes of the initial circle, as shown in this illustration:
(source: brian-curtis.com)
You can refer to this link for a detailed demonstration. As a consequence, the bounding rectangle of the ellipse is not the image of the initial square under the perspective transformation.
EDIT:
This means that the center and the axes of the ellipse you observe are not the images, by the perspective mapping, of the center and axes of the original circle. I tried to make a clearer illustration:
In this image, I drew in green the axes and center of the original circle after the perspective transformation, and in red the axes and center of the ellipse. In this specific example the vertical axis is not deformed by the perspective mapping, but it would be in the general case. Hence, deforming a circle by a perspective transformation gives an ellipse, but the axes and center that you see are not the images of the axes and center of the original circle.
As a consequence, you cannot simply map the top, bottom, left and right points of the ellipse (the red points, which can easily be detected from the ellipse) onto the top, bottom, left and right points of the circle, because they do not correspond under the perspective mapping (the green points do correspond, but they cannot be detected easily from the ellipse).
In the end, I don't think that it is at all possible to estimate the perspective mapping from a single detected ellipse.
This looks like an indeterminate problem.
The projection of a rectangle supplies 8 equations in 8 unknowns (homography coefficients).
With an ellipse, you can only retrieve the center coordinates (2 DOF), the axis lengths (2 DOF) and the axis orientation (1 DOF), i.e. 5 constraints in total.
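To make the count concrete, this sketch shows everything an ellipse fit actually gives you in OpenCV (the helper is only illustrative):

    #include <opencv2/imgproc.hpp>
    #include <iostream>
    #include <vector>

    void countEllipseConstraints(const std::vector<cv::Point>& ellipseContour)
    {
        cv::RotatedRect e = cv::fitEllipse(ellipseContour);
        // e.center -> 2 DOF, e.size -> 2 DOF (axis lengths), e.angle -> 1 DOF (orientation)
        // 5 constraints in total, fewer than the 8 homography coefficients.
        std::cout << e.center.x << " " << e.center.y << " "
                  << e.size.width << " " << e.size.height << " " << e.angle << "\n";
    }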

How do I get the vertices from an edge image

I have just used Canny edge detection to detect a rectangle in an image. I would like to get the four corners of the rectangle.
Please see my answer here: How to find corners on a Image using OpenCv
As for step 7: cvApproxPoly returns a CvSeq*. This link explains it well. As shown there, the CvSeq struct contains a total member that holds the number of elements in the sequence. For a true quadrilateral, total should equal 4. If the quadrilateral is a square (or rectangle), the angles between adjacent edges should be ~90 degrees.
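With the C++ API, the equivalent pipeline is roughly the following (the epsilon factor and retrieval mode are illustrative choices, not taken from the linked answer):

    #include <opencv2/imgproc.hpp>
    #include <vector>

    // Return all 4-vertex polygon approximations found in a Canny edge image.
    std::vector<std::vector<cv::Point>> findQuads(const cv::Mat& cannyEdges)
    {
        cv::Mat edges = cannyEdges.clone();   // findContours may modify its input in older versions
        std::vector<std::vector<cv::Point>> contours, quads;
        cv::findContours(edges, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
        for (const auto& c : contours)
        {
            std::vector<cv::Point> approx;
            cv::approxPolyDP(c, approx, 0.02 * cv::arcLength(c, true), true);
            if (approx.size() == 4)           // the four corners of the rectangle
                quads.push_back(approx);
        }
        return quads;
    }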
findContours will give you the outline points

Rectangular approximation of contours

After some color detection and binary thresholding, I use cvFindContours() and draw the contours and the detected blue rectangle on the image, giving:
My problem is to do some simple collision avoidance (the blue rectangle in the center must not hit the red "walls"). For my purposes it would be helpful to approximate the red wall contours with rectangles. However, using a simple cvBoundingRect and drawing red rectangles around the white contours, I get:
The edges are cropped off a little, but you can see what to expect when a single bounding rectangle is used per contour: the entire contour goes into the approximation, hence the large overlapping rectangles. What I would like instead is for the wall contours to be divided into multiple bounding rectangles, e.g. the left wall approximated as one rectangle, the right wall as another, the forward wall as another, and so on, as in my illustrative rendition below:
Any help in doing so would be greatly appreciated.
Detecting lines (typically with Hough or RANSAC), together with the other information you have about the problem, should be enough, maybe even overkill. For instance, starting from the image below on the left, we get the image below on the right.
But if you already have the image above on the left (which you should), the problem is already solved: just draw both the internal and external contours of the walls and you are set.
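If you do go the line-detection route, a probabilistic Hough transform on the binary wall mask is one way to get the straight wall segments (a sketch; all parameter values are placeholders):

    #include <opencv2/imgproc.hpp>
    #include <vector>

    std::vector<cv::Vec4i> detectWallSegments(const cv::Mat& wallMask)
    {
        std::vector<cv::Vec4i> segments;   // each entry is (x1, y1, x2, y2)
        cv::HoughLinesP(wallMask, segments, 1 /*rho*/, CV_PI / 180 /*theta*/,
                        50 /*threshold*/, 30 /*minLineLength*/, 10 /*maxLineGap*/);
        return segments;
    }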

Getting corners from convex points

I have written an algorithm to extract the points shown in the image. They form a convex shape and I know their order. How do I extract the corners (top 3 and bottom 3) from such points?
I'm using OpenCV.
If you already have the convex hull of the object, and that hull includes the corner points, then all you need to do is simplify the hull until it only has 6 points.
There are many ways to simplify polygons; for example, you could just use the simple algorithm from this answer: How to find corner coordinates of a rectangle in an image
do
    for each point P on the convex hull:
        measure its distance to the line AB
        between the point A before P and the point B after P
    remove the point with the smallest distance
repeat until 6 points are left
If you do not know the exact number of points, you could instead remove points until the minimum distance rises above a certain threshold.
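A rough C++ version of that loop could look like this (the function name is mine; it assumes the hull points are stored in order as a closed polygon):

    #include <opencv2/core.hpp>
    #include <cmath>
    #include <limits>
    #include <vector>

    void simplifyHull(std::vector<cv::Point2f>& hull, size_t target)
    {
        while (hull.size() > target)
        {
            size_t worst = 0;
            double best = std::numeric_limits<double>::max();
            for (size_t i = 0; i < hull.size(); ++i)
            {
                const cv::Point2f& A = hull[(i + hull.size() - 1) % hull.size()];
                const cv::Point2f& P = hull[i];
                const cv::Point2f& B = hull[(i + 1) % hull.size()];
                // perpendicular distance from P to the line AB
                double num = std::abs((B.x - A.x) * (A.y - P.y) - (A.x - P.x) * (B.y - A.y));
                double den = std::hypot(B.x - A.x, B.y - A.y);
                double d = (den > 1e-9) ? num / den : 0.0;
                if (d < best) { best = d; worst = i; }
            }
            hull.erase(hull.begin() + worst);    // drop the most redundant point
        }
    }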
You could also use Ramer-Douglas-Peucker to simplify the polygon; OpenCV already has that implemented in cv::approxPolyDP.
Just modify the OpenCV squares sample to use 6 points instead of 4.
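For example, with cv::approxPolyDP you can keep raising the tolerance until only 6 vertices survive (a sketch, assuming the 6 real corners are the last points to be removed):

    #include <opencv2/imgproc.hpp>
    #include <vector>

    std::vector<cv::Point> simplifyToSix(const std::vector<cv::Point>& hull)
    {
        std::vector<cv::Point> approx = hull;
        double eps = 1.0;
        while (approx.size() > 6)
        {
            cv::approxPolyDP(hull, approx, eps, true /*closed*/);
            eps += 1.0;                  // relax the tolerance until 6 points remain
        }
        return approx;
    }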
Instead of trying to directly determine which of your feature points correspond to corners, how about applying a corner detection algorithm to the entire image and then looking for which of your feature points lie close to peaks of the corner response?
I'd suggest starting with a Harris corner detector. The OpenCV implementation is cv::cornerHarris.
Essentially, the Harris algorithm applies both a horizontal and a vertical Sobel filter to the image (or some other approximation of the partial derivatives of the image in the x and y directions).
It then constructs a 2x2 structure matrix at each image pixel, looks at the eigenvalues of that matrix, and declares a point a corner if both eigenvalues are above some threshold.
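A minimal sketch of that idea (block size, aperture and the 0.1 relative threshold are arbitrary; the input is assumed to be a single-channel grayscale image):

    #include <opencv2/imgproc.hpp>
    #include <vector>

    std::vector<cv::Point> harrisPeaks(const cv::Mat& gray, double relThresh = 0.1)
    {
        cv::Mat response;
        cv::cornerHarris(gray, response, 2 /*blockSize*/, 3 /*ksize*/, 0.04 /*k*/);
        double maxVal = 0.0;
        cv::minMaxLoc(response, nullptr, &maxVal);
        std::vector<cv::Point> peaks;
        for (int y = 0; y < response.rows; ++y)
            for (int x = 0; x < response.cols; ++x)
                if (response.at<float>(y, x) > relThresh * maxVal)
                    peaks.push_back(cv::Point(x, y));   // candidate corner location
        return peaks;
    }

You can then keep only the feature points that fall within a few pixels of one of these peaks.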
