Separate quads in an image - opencv

Here I have some quads in an image, and now I want to separate them into single quads. Here is the image of the quads; each shape like this I call a quad (a quad is formed by 4 points). I already have the coordinates of each line segment: for each line, I have its 2 endpoints, but I don't know which point is endpoint 1 and which is endpoint 2. If 2 lines intersect, I also have the coordinates of their corner.
So in general, I have 3 lists:
A list of endpoint 1 coordinates
A list of endpoint 2 coordinates
A list of corner coordinates where 2 lines intersect
The endpoint 1 list and the endpoint 2 list have the same length, but the corner list has a different length.
Could you give me some suggestions to separate the quads? This is the result that I want to achieve.
What I have tried:
I tried to get 4 coordinates for each quad:
for i in endpoint 1:
- for j in corners: check if point 1 or point 2 is close to the corner; then I have 2 points for a quad.
- for z in endpoint 2 (or endpoint 1, since they have the same length): find the line through a corner and point 2 (or point 1); if another corner point lies on this line, I have the 3rd point of the quad.
- for g in endpoint 2: check if the 3rd point is close to point 1; then the 4th point = point 2, otherwise = point 1.
As you can see, there are so many for loops, and it doesn't even work. If you have any ideas or suggestions, please help.
Edit: This is the input image.
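A graph-based alternative might avoid most of the nested loops: snap every segment endpoint to its nearest corner, then group segments into quads as connected components via union-find. A minimal pure-Python sketch, with toy point lists standing in for the three lists above:

```python
# Sketch: group line segments into quads by snapping endpoints to corners
# and taking connected components.  The point lists below are made-up examples.
from math import dist

def snap(p, corners, tol=5.0):
    """Return the index of the corner closest to p, or None if too far."""
    best = min(range(len(corners)), key=lambda i: dist(p, corners[i]))
    return best if dist(p, corners[best]) <= tol else None

def group_quads(endpoints1, endpoints2, corners, tol=5.0):
    # Union-find over corner indices: two corners joined by a segment
    # belong to the same shape.
    parent = list(range(len(corners)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for p, q in zip(endpoints1, endpoints2):
        a, b = snap(p, corners, tol), snap(q, corners, tol)
        if a is not None and b is not None:
            parent[find(a)] = find(b)
    groups = {}
    for i in range(len(corners)):
        groups.setdefault(find(i), []).append(corners[i])
    # A quad is any component with exactly 4 corners.
    return [g for g in groups.values() if len(g) == 4]

# Two axis-aligned quads as a toy input.
corners = [(0, 0), (10, 0), (10, 10), (0, 10),
           (20, 0), (30, 0), (30, 10), (20, 10)]
e1 = [(0, 0), (10, 0), (10, 10), (0, 10), (20, 0), (30, 0), (30, 10), (20, 10)]
e2 = [(10, 0), (10, 10), (0, 10), (0, 0), (30, 0), (30, 10), (20, 10), (20, 0)]
quads = group_quads(e1, e2, corners)
```

Note that it doesn't matter which endpoint is "endpoint 1" or "endpoint 2" here, which sidesteps that ambiguity entirely.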

Related

Using flood-fill to detect corners of a rectangle

I am trying to find corners of a square, potentially rotated shape, to determine the direction of its primary axes (horizontal and vertical) and be able to do a perspective transform (straighten it out).
From a prior processing stage I obtain the coordinates of a point (red dot in image) belonging to the shape. Next I do a flood-fill of the shape on a thresholded version of the image to determine its center (not shown) and area, by summing up X and Y of all filled pixels and dividing them by the area (number of pixels filled).
Given this information, what is an easy and reliable way to determine the corners of the shape (blue arrows)?
I was thinking about keeping track of P1, P2, P3, P4 where P1 is (minX, minY), P2 is (minX, maxY), P3 is (maxX, minY) and P4 is (maxX, maxY), so P1 is the point with the smallest value of X encountered, and of those, the one where Y is smallest too. Then sort them to get a clockwise ordering. But I'm not sure if this is correct in all cases and efficient.
PS: I can't use OpenCV.
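For reference, that min/max bookkeeping can be sketched in pure Python (no OpenCV). Note the caveat that motivates the question: this only recovers the true corners when the square is axis-aligned, since for a rotated square the extreme-coordinate points are edge midpoint-ish pixels, not corners.

```python
# Sketch: extreme-pixel corner candidates plus a clockwise ordering
# around the centroid.  Only correct for axis-aligned rectangles.
from math import atan2

def corner_candidates(pixels):
    xs = [p[0] for p in pixels]
    ys = [p[1] for p in pixels]
    minx, maxx, miny, maxy = min(xs), max(xs), min(ys), max(ys)
    cands = [(minx, miny), (minx, maxy), (maxx, miny), (maxx, maxy)]
    cx = sum(xs) / len(xs)
    cy = sum(ys) / len(ys)
    # Clockwise ordering: sort by decreasing angle around the centroid.
    return sorted(cands, key=lambda p: -atan2(p[1] - cy, p[0] - cx))

pts = [(x, y) for x in range(4) for y in range(4)]  # a filled 4x4 square
corners = corner_candidates(pts)
```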
Looking at your image, the directions of the 2 axes of the 2D pattern's coordinate system can be estimated from a histogram of gradient directions.
In such a histogram, 4 peaks will be clearly visible.
If the image is captured from the front (an image without perspective, which your image looks like), the angles between adjacent peaks are ideally all 90 degrees, and the directions of the 2 axes of the pattern coordinate system can be estimated directly from those peaks.
After that, the 4 corners can simply be estimated from an axis-aligned bounding box (along the estimated axes, of course).
If not (when the image is a picture with perspective), the 4 peaks indicate which edge lines run along which axis of the pattern coordinates.
So, for example, you can estimate each corner location as the intersection of the 2 lines along the edges.
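The gradient-direction histogram can be sketched with plain NumPy finite differences (a real pipeline would likely use Sobel filters; the toy image below is an assumption for demonstration):

```python
# Sketch: magnitude-weighted histogram of gradient directions.
# On a clean frontal pattern, four peaks ~90 degrees apart should appear.
import numpy as np

def gradient_direction_histogram(img, bins=36):
    gy, gx = np.gradient(img.astype(float))      # finite-difference gradients
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 360.0
    mask = mag > 0                                # ignore flat regions
    hist, edges = np.histogram(ang[mask], bins=bins,
                               range=(0, 360), weights=mag[mask])
    return hist, edges

# Toy image: a white axis-aligned square on black yields gradients
# concentrated at 0, 90, 180 and 270 degrees.
img = np.zeros((40, 40))
img[10:30, 10:30] = 1.0
hist, edges = gradient_direction_histogram(img)
```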
What I eventually ended up doing is the following:
Trace the edges of the contour using Moore-Neighbour Tracing --> this gives me a sequence of points lying on the border of the rectangle.
During the trace, I observe changes in rectangular distance between the first and last points in a sliding window. The idea is inspired by the paper "The outline corner filter" by C. A. Malcolm (https://spie.org/Publications/Proceedings/Paper/10.1117/12.939248?SSO=1).
This is giving me accurate results for low computational overhead and little space.
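The sliding-window measure can be sketched as follows (a simplified interpretation of the idea, not the paper's exact filter): compare the straight-line chord between the window's first and last points to the path length along the boundary; on straight runs they agree, at a corner the chord shrinks.

```python
# Sketch: corner response along a traced boundary via a sliding window.
from math import dist

def corner_scores(boundary, window=5):
    scores = []
    for i in range(len(boundary) - window):
        a, b = boundary[i], boundary[i + window]
        path = sum(dist(boundary[j], boundary[j + 1])
                   for j in range(i, i + window))
        scores.append(path - dist(a, b))  # 0 on straight runs, > 0 at turns
    return scores

# An L-shaped boundary: a straight run, then a 90-degree turn.
boundary = [(x, 0) for x in range(10)] + [(9, y) for y in range(1, 10)]
scores = corner_scores(boundary)
```

Peaks in `scores` mark where the window straddles a corner.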

Draw a non intersecting polygon after detecting corners with OpenCV

I have this image:
and I am using cv2.goodFeaturesToTrack to detect the corners, so now I have this:
The corners are in red and the numbers show the order in which goodFeaturesToTrack found them; for example, the corner with number 0 is the first detected one, etc.
If I were to connect the dots in that order, I would get a messy polygon, so I thought of using a function that, given a random set of points, returns them in an order with which the polygon wouldn't self-intersect.
I found this function and it does exactly what I want.
However, although the polygon doesn't self-intersect, for this example I am not getting the same shape as the initial one (I get a non-self-intersecting polygon, but a different shape).
Does anyone have an idea to fix this? I was thinking of making cv2.goodFeaturesToTrack return an ordered set of points but I couldn't figure out how to do that.
Thank you so much!
If you want to get the polygon, you can threshold the image and extract the outer contour with findContours, using CV_RETR_EXTERNAL as the mode to obtain the outer contour and CV_CHAIN_APPROX_SIMPLE as the method. CV_CHAIN_APPROX_SIMPLE compresses horizontal, vertical, and diagonal segments and leaves only their end points (see the documentation).
If you want to use corner detection results and arrange them in correct order to make the polygon, you'll have to trace the boundary of the shape and add those corner points into a list as you find them along the boundary. For this, I think you can use findContours with CV_RETR_EXTERNAL and CV_CHAIN_APPROX_NONE to get every pixel. Still, you might not find your detected corner points exactly on the contour returned from findContours, so you'll have to use a proximity threshold.
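The second suggestion can be sketched in pure Python: walk the dense contour and emit each detected corner the first time the walk passes within a threshold of it. Here `contour` is a stand-in for what findContours with CV_CHAIN_APPROX_NONE would return, and the square below is a made-up example.

```python
# Sketch: order detected corner points by their position along the contour.
from math import dist

def order_corners_along_contour(contour, corners, max_gap=2.0):
    ordered, remaining = [], list(corners)
    for p in contour:
        for c in remaining:
            if dist(p, c) <= max_gap:   # proximity threshold, since corners
                ordered.append(c)       # rarely lie exactly on the contour
                remaining.remove(c)
                break
    return ordered

# A square boundary walked in order, with corners given out of order.
contour = ([(x, 0) for x in range(10)] + [(9, y) for y in range(10)]
           + [(x, 9) for x in range(9, -1, -1)]
           + [(0, y) for y in range(9, -1, -1)])
corners = [(9, 9), (0, 0), (0, 9), (9, 0)]
ordered = order_corners_along_contour(contour, corners)
```

Connecting `ordered` in sequence then gives a non-self-intersecting polygon matching the original shape.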

Opencv Subdiv2d (Delaunay Triangulation) remove vertices at infinity

I am using the Subdiv2D class of OpenCV for Delaunay triangulation. I am particularly facing a problem with vertices at infinity. I do not need any triangle outside the image, so I inserted the corner points of the image, which works in many cases. But in certain cases I still get triangles outside the image, so I want to remove those vertices at infinity. Any help with removing the vertices, or any other approach that keeps the triangles always within the image, would be appreciated.
Here is the code for inserting feature points in subdiv. I have inserted the corner points of the image.
Mat img = imread(...);
Rect rect(0, 0, 600, 600);
Subdiv2D subdiv(rect);
// inserting corners of the image
subdiv.insert( Point2f(0,0));
subdiv.insert( Point2f(img.cols-1, 0));
subdiv.insert( Point2f(img.cols-1, img.rows-1));
subdiv.insert( Point2f(0, img.rows-1));
// inserting N feature points
// ...
// further processing
Here is an example where the corner points at infinity are creating a problem:
5 feature points, one in the middle and 4 corners
http://i.stack.imgur.com/VONsN.jpg
5 feature points, one near the bottom and 4 corner points
http://i.stack.imgur.com/kjxgm.jpg
You can see in the 2nd image that a triangle is outside the image.
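Since Subdiv2D seeds the triangulation with virtual vertices placed far outside the bounding rect, one common workaround is to triangulate as usual and then discard any triangle with a vertex outside the image. A sketch in Python, where the 6-tuples mimic the `(x1, y1, x2, y2, x3, y3)` format that `Subdiv2D::getTriangleList` produces (the sample values are made up):

```python
# Sketch: drop triangles whose vertices fall outside the image rectangle.
def inside(x, y, w, h):
    return 0 <= x < w and 0 <= y < h

def filter_triangles(tri_list, w, h):
    kept = []
    for (x1, y1, x2, y2, x3, y3) in tri_list:
        if all(inside(x, y, w, h)
               for x, y in ((x1, y1), (x2, y2), (x3, y3))):
            kept.append((x1, y1, x2, y2, x3, y3))
    return kept

tris = [
    (0, 0, 599, 0, 300, 300),        # fully inside a 600x600 image
    (-1500, -1500, 0, 0, 599, 0),    # touches a far-away virtual vertex
]
kept = filter_triangles(tris, 600, 600)
```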

OpenCV find polygons

I'm using EmguCV and trying to find polygons within an image. Here are some facts about the problem:
1) The polygons are irregularly shaped, but the sides are always at one of two angles.
2) Often the polygons have gaps in their sides that need to be filled.
3) If a polygon is contained within another polygon, I want to ignore it.
Consider this image:
And I want to find the polygons highlighted in red, omit the polygon highlighted in green and make connections across gaps as shown in blue here:
I've had some success using HoughLinesBinary and then connecting the closest line segment end points to each other to bridge gaps to build a complete polygon, but this doesn't work when multiple polygons are involved since it will try to draw lines between polygons if they happen to be close to each other.
Anybody have any ideas?
I think the problem could be your image threshold. I don't know how you did it, but you can get better results if the binary image is better.
I like your idea of connecting the polygon segments; try to ensure that the connecting line has a maximum and minimum length, to avoid connecting to nearby objects.
Verify whether the new joint of lines forms a 90-degree angle, i.e. whether it is a corner.
Added:
You can use morphological operators to grow the lines according to their angles. Since you said the lines have known angles, do dilations using a mask like this:
0 0 0
1 1 0
0 0 0
This mask will grow lines only to the right, until they connect to the other side.
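In practice this would be `cv2.dilate` with the mask above, applied repeatedly. A plain-NumPy sketch shows the effect (the shift direction follows the answer's claim that the mask grows lines to the right):

```python
# Sketch: directional binary dilation.  With a kernel covering only the
# pixel itself and its left neighbour, each step extends runs one pixel
# to the right.
import numpy as np

def dilate_right(img, steps=1):
    out = img.astype(bool).copy()
    for _ in range(steps):
        shifted = np.zeros_like(out)
        shifted[:, 1:] = out[:, :-1]   # the image shifted right by 1 pixel
        out |= shifted                  # on if itself or left neighbour is on
    return out.astype(img.dtype)

img = np.zeros((3, 8), dtype=np.uint8)
img[1, 1:3] = 1                         # a short horizontal segment
grown = dilate_right(img, steps=3)      # now spans columns 1..5
```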
A general solution will be hard, but for your particular problem a relatively simple heuristic should work.
The main idea is to use the white pixels on one side of a wall as an additional feature. The walls in your image always have one almost black side, and one side with many white noise pixels. The orientation of the white noise in relation to the wall does not switch on corners, so using this information you can eliminate a lot of possible connections between lines.
First some definitions:
All walls in the picture go either from the lower left to the upper right (a rising line) or from the upper left to the lower right (a falling line).
If there are more white pixels on the left side of a falling line, call it falling-left-wall, otherwise falling-right-wall. Same for rising lines.
Each line ends in two points. Call the leftmost one start, the rightmost one end.
Now for the algorithm:
classify each line and each start/endpoint in the image.
Check the immediate area on both sides of each line, and see which side contains more white pixels.
Afterwards, you have a list of points labeled falling-left-wall-start, rising-left-wall-end, etc.
for each start/end point:
look for nearby start/end points from another line
if it is a falling-left-wall-start, only look for:
a falling-left-wall-end
a rising-left-wall-start, if that point is on the left side of the current line
a rising-right-wall-end, if that point is on the right side of the current line
pick the closest point among the found points, connect it to the current point.
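The matching rules above can be sketched compactly as a lookup table of allowed label pairs plus a nearest-point search. The side tests (whether a candidate lies left or right of the current line) are omitted for brevity, and the labels/coordinates are illustrative:

```python
# Sketch: connect labeled wall endpoints using an allowed-pairs table.
from math import dist

# Which (wall-type, end) labels a falling-left-wall start may connect to.
ALLOWED = {
    ("falling-left-wall", "start"): {
        ("falling-left-wall", "end"),
        ("rising-left-wall", "start"),
        ("rising-right-wall", "end"),
    },
}

def best_match(point, label, candidates):
    """candidates: list of (point, label) pairs from other lines."""
    allowed = ALLOWED.get(label, set())
    usable = [(p, l) for p, l in candidates if l in allowed]
    if not usable:
        return None
    return min(usable, key=lambda pl: dist(point, pl[0]))

pt = (0, 0)
label = ("falling-left-wall", "start")
cands = [((1, 1), ("falling-left-wall", "end")),
         ((0.5, 0.5), ("rising-right-wall", "start")),  # closer but not allowed
         ((5, 5), ("rising-left-wall", "start"))]
match = best_match(pt, label, cands)
```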

Find outer checkerboard corners

In the below picture, I have the 2D locations of the green points and I want to calculate the locations of the red points, or, as an intermediate step, I want to calculate the locations of the blue points. All in 2D.
Of course, I do not only want to find those locations for the picture above. In the end, I want an automated algorithm which takes a set of checkerboard corner points to calculate the outer corners.
I need the resulting coordinates to be as accurate as possible, so I think that I need a solution which does not only take the outer green points into account, but which also uses all the other green points' locations to calculate a best fit for the outer corners (red or blue).
If OpenCV can do this, please point me into that direction.
In general, if all you have is the detection of some, but not all, the inner corners, the problem cannot be solved. This is because the configuration is invariant to translation - shifting the physical checkerboard by whole squares would produce the same detected corner position on the image, but due to different physical corners.
Further, the configuration is also invariant to rotations by 180 deg in the checkerboard plane and, unless you are careful to distinguish between the colors of the squares adjacent to each corner, to rotations by 90 deg and reflections with respect to the center and the midlines.
This means that, in addition to detecting the corners, you need to extract from the image some features of the physical checkerboard that can be used to break the above invariance. The simplest break is to detect all 9 corners of one row and one column, or at least their end-corners. They can be used directly to rectify the image by imposing the condition that their lines be at 90 deg angle. However, this may turn out to be impossible due to occlusions or detector failure, and more sophisticated methods may be necessary.
For example, you can try to directly detect the chessboard edges, i.e. the fat black lines at the boundary. One way to do that, for example, would be to detect the letters and numbers nearby, and use those locations to constrain a line detector to nearby areas.
By the way, if the photo you posted is just a red herring, and you are interested in detecting general checkerboard-like patterns, and can control the kind of pattern, there are way more robust methods of doing it. My personal favorite is the "known 2D crossratios" pattern of Matsunaga and Kanatani.
I solved it robustly, but not accurately, with the following solution:
Find lines with at least 3 green points closely matching the line. (thin red lines in pic)
Keep bounding lines: From these lines, keep those with points only to one side of the line or very close to the line.
Filter bounding lines: From the bounding lines, take the 4 best ones/those with most points on them. (bold white lines in pic)
Calculate the intersections of the 4 remaining bounding lines (none of the lines are perfectly parallel, so this results in 6 intersections, of which we want only 4).
From the intersections, remove the one farthest from the average position of the intersections until only 4 of them are left.
That's the 4 blue points.
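The last two steps can be sketched in pure Python. Lines are represented here as `(a, b, c)` with `a*x + b*y = c` (an assumption for the sketch; the line-fitting steps are skipped), and intersections are pruned by repeatedly removing the point farthest from the current mean:

```python
# Sketch: intersect 4 bounding lines (6 candidates), keep the best 4.
from math import dist

def intersect(l1, l2):
    a1, b1, c1 = l1
    a2, b2, c2 = l2
    d = a1 * b2 - a2 * b1
    if abs(d) < 1e-12:
        return None  # (nearly) parallel
    return ((c1 * b2 - c2 * b1) / d, (a1 * c2 - a2 * c1) / d)

def four_corners(lines):
    pts = [p for i in range(len(lines)) for j in range(i + 1, len(lines))
           if (p := intersect(lines[i], lines[j])) is not None]
    while len(pts) > 4:
        cx = sum(p[0] for p in pts) / len(pts)
        cy = sum(p[1] for p in pts) / len(pts)
        pts.remove(max(pts, key=lambda p: dist(p, (cx, cy))))
    return pts

# A slightly skewed unit square, so no two lines are exactly parallel.
lines = [(0, 1, 0), (0.01, 1, 1), (1, 0, 0), (1, 0.01, 1)]
corners = four_corners(lines)
```

The two spurious far-away intersections are removed first, leaving the four near the unit square.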
You can then feed these 4 points into OpenCV's getPerspectiveTransform function to find a perspective transform (aka a homography):
std::vector<Point2f> detectedCorners = CheckDet::getOuterCheckerboardCorners(srcImg);
// Keep at most the first 4 detected corners as source points.
std::vector<Point2f> srcPoints(detectedCorners.begin(),
                               detectedCorners.begin() + MIN(4, (int)detectedCorners.size()));
// Map them to a square board image with a 1/8 margin on each side.
int dstImgSize = 400;
std::vector<Point2f> dstPoints = {
    Point2f(dstImgSize * 1/8, dstImgSize * 1/8),
    Point2f(dstImgSize * 7/8, dstImgSize * 1/8),
    Point2f(dstImgSize * 7/8, dstImgSize * 7/8),
    Point2f(dstImgSize * 1/8, dstImgSize * 7/8)
};
Mat m = getPerspectiveTransform(srcPoints, dstPoints);
For our example image, the input and output of getPerspectiveTransform look like this:
input
(349.1, 383.9) -> ( 50.0, 50.0)
(588.9, 243.3) -> (350.0, 50.0)
(787.9, 404.4) -> (350.0, 350.0)
(506.0, 593.1) -> ( 50.0, 350.0)
output
( 1.6 -1.1 -43.8 )
( 1.4 2.4 -1323.8 )
( 0.0 0.0 1.0 )
You can then transform the image's perspective to board coordinates:
Mat plainBoardImg;
warpPerspective(srcImg, plainBoardImg, m, Size(dstImgSize, dstImgSize));
Results in the following image:
For my project, the red points that you can see on the board in the question are not needed anymore, but I'm sure they can be calculated easily from the homography by inverting it and then using the inverse to back-transform the points (0, 0), (0, dstImgSize), (dstImgSize, dstImgSize), and (dstImgSize, 0).
The algorithm works surprisingly reliably; however, it does not use all the available information, because it uses only the outer points (those connected by the white lines). It does not use any data from the inner points for additional accuracy. I would still like to find an even better solution that uses the data of the inner points.
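The back-transform mentioned above can be sketched with NumPy: invert the homography and map the destination-image corners back into source coordinates, dividing by the homogeneous component. The matrix used here is a pure translation chosen only so the result is easy to verify, not the real homography from the answer:

```python
# Sketch: map destination-image points back through an inverted homography.
import numpy as np

def back_transform(H, pts):
    Hinv = np.linalg.inv(H)
    out = []
    for x, y in pts:
        v = Hinv @ np.array([x, y, 1.0])
        out.append((v[0] / v[2], v[1] / v[2]))  # de-homogenise
    return out

H = np.array([[1.0, 0.0, -50.0],   # translation by (-50, -50), for checking
              [0.0, 1.0, -50.0],
              [0.0, 0.0, 1.0]])
dstImgSize = 400
corners = back_transform(H, [(0, 0), (0, dstImgSize),
                             (dstImgSize, dstImgSize), (dstImgSize, 0)])
```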
