I already had two questions on here (Undistorting/rectify images with OpenCV and Improve cvFindChessboardCorners) which led me to the cause of my problem.
So, as you can see in this picture (http://abload.de/img/cvfindchessboardcornet9pn4.jpg) all inner corners of this chessboard are found except the ones in the upper left area.
For the second image, I tried interpolating the coordinates of found corners both in vertical and horizontal direction in order to calculate their intersection, but I couldn't locate the corner that way (the x-coordinate was correct, but the y-coordinate wasn't, due to the curvature of the interpolating polynomials).
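Roughly what I tried, as a sketch (the corner coordinates here are made up; I fit the row as y(x) and the column as x(y), then intersect the two curves):

    import numpy as np

    # hypothetical corner coordinates of the row and the column that should meet
    # at the missing corner, ordered along the board
    row_pts = np.array([(112.0, 84.0), (158.0, 86.0), (205.0, 89.0), (251.0, 93.0)])
    col_pts = np.array([(110.0, 131.0), (108.0, 178.0), (107.0, 226.0)])

    # horizontal direction: y as a polynomial of x; vertical direction: x as a polynomial of y
    row_poly = np.polyfit(row_pts[:, 0], row_pts[:, 1], 2)
    col_poly = np.polyfit(col_pts[:, 1], col_pts[:, 0], 2)

    # intersect the two curves with a few fixed-point iterations
    x = row_pts[0, 0]
    for _ in range(20):
        y = np.polyval(row_poly, x)
        x = np.polyval(col_poly, y)
    missing_corner = (x, y)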
Problem: The trouble is that this interpolation won't work in the first picture, because there is no such polynomial, at least for the interpolation in the vertical direction (that would conflict with the fundamental definition of a mathematical function).
So I'm really at a loss as to how to extract the remaining coordinates of the corners that cvFindChessboardCorners cannot find.
Related
I have this image:
and I am using cv2.goodFeaturesToTrack to detect the corners, so now I have this:
The corners are in red and the numbers show the order in which goodFeaturesToTrack found the corners; for example, the corner with number 0 is the first one detected, and so on.
If I were to connect the dots in that order, I would get a messy polygon, so I thought of using a function that, given a random set of points, returns them in an order for which the polygon doesn't self-intersect.
I found this function and it does exactly what I want.
However, although the resulting polygon doesn't self-intersect, for this example it is not the same shape as the initial one (I get a non-self-intersecting polygon, but a different shape).
Does anyone have an idea to fix this? I was thinking of making cv2.goodFeaturesToTrack return an ordered set of points but I couldn't figure out how to do that.
Thank you so much!
If you want to get the polygon, you can threshold the image and extract the outer contour with findContours, using CV_RETR_EXTERNAL as the mode to obtain the outer contour and CV_CHAIN_APPROX_SIMPLE as the method. CV_CHAIN_APPROX_SIMPLE compresses horizontal, vertical, and diagonal segments and leaves only their end points (see the documentation).
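A minimal Python sketch of that first approach (the file name and Otsu thresholding are assumptions; cv2.RETR_EXTERNAL / cv2.CHAIN_APPROX_SIMPLE are the Python equivalents of the constants above):

    import cv2

    # hypothetical input: the shape on a contrasting background
    img = cv2.imread("shape.png", cv2.IMREAD_GRAYSCALE)
    _, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # outer contour only; SIMPLE keeps just the segment end points, i.e. the polygon vertices
    # (OpenCV 4 return signature)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    polygon = max(contours, key=cv2.contourArea).reshape(-1, 2)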
If you want to use corner detection results and arrange them in correct order to make the polygon, you'll have to trace the boundary of the shape and add those corner points into a list as you find them along the boundary. For this, I think you can use findContours with CV_RETR_EXTERNAL and CV_CHAIN_APPROX_NONE to get every pixel. Still, you might not find your detected corner points exactly on the contour returned from findContours, so you'll have to use a proximity threshold.
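A sketch of the second approach (same hypothetical input image; the goodFeaturesToTrack parameters and the 3-pixel proximity threshold are guesses):

    import cv2
    import numpy as np

    img = cv2.imread("shape.png", cv2.IMREAD_GRAYSCALE)      # hypothetical file name
    _, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # the unordered corners
    corners = cv2.goodFeaturesToTrack(binary, 50, 0.01, 10).reshape(-1, 2)

    # the full boundary, every pixel, in traversal order
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    boundary = max(contours, key=cv2.contourArea).reshape(-1, 2)

    # walk the boundary and pick up each corner the first time we pass close to it
    ordered, seen = [], set()
    for p in boundary:
        d = np.linalg.norm(corners - p, axis=1)
        k = int(np.argmin(d))
        if d[k] < 3.0 and k not in seen:       # 3 px proximity threshold (an assumption)
            seen.add(k)
            ordered.append(corners[k])
    ordered = np.array(ordered)                # corner points in boundary order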
Suppose that I want to find the 3D position of a cup with its rotation, with image input like this (this cup can be rotated to point in any direction):
Given that I have a bunch of 2D points marking the top circle and bottom circle, like in the following image. (Let's assume these points are given by a person drawing lines around the cup, so they won't be very accurate; ellipse fitting or SolvePnP might be needed to recover a good approximation. Also, the bottom circle is not a complete circle, only part of one, and sometimes the top part will be occluded as well, so we cannot rely on there being a complete circle.)
I also know the physical radii of the top and bottom circles, and the distance between them, from measuring them with a ruler beforehand.
I want to find the two complete circles, as in the following image (I think I need to find the position of the cup and its up direction before I can project the complete circles):
Let's say that my ultimate goal is to be able to find the closest 2D top point and closest 2D bottom point, given a 2D point on the side of the cup, like the following image:
A point can also be inside of the cup, like so:
Let's define distance(a, b) as a function that finds the Euclidean distance between point a and point b in pixel units.
From that I would be able to calculate distance(side point, bottom point) / distance(top point, bottom point), which is a number between 0 and 1; if I multiply this number by the physical height of the cup measured with the ruler, I will know how high the point is above the bottom of the cup in metric units.
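In code, the calculation I have in mind looks roughly like this (the point coordinates and the cup height are made-up numbers):

    import numpy as np

    def distance(a, b):
        # Euclidean distance between two 2D points, in pixel units
        return float(np.linalg.norm(np.asarray(a, float) - np.asarray(b, float)))

    cup_height_cm = 9.5                                      # measured with the ruler
    top, bottom, side = (312.0, 140.0), (305.0, 420.0), (308.0, 300.0)

    ratio = distance(side, bottom) / distance(top, bottom)   # 0 at the bottom, 1 at the top
    height_cm = ratio * cup_height_cm                        # height of the point above the bottom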
What method can I use to find the corresponding top and bottom points for a given point on the side, so that I can finally work out the height of that point above the bottom of the cup?
I'm thinking of using PnP to solve this, but my points do not have correct IDs associated with them. And I don't want to know the exact rotation of the cup; I only want to know its up direction.
I also think that fitting the ellipse might help somewhat, but maybe it's not the best because the circle is not complete.
If you have any suggestions, please tell me how to obtain the point's height above the bottom of the cup.
Given the accuracy issues, I don't think it is worth performing a 3D reconstruction of the cone.
I would perform a "standard" ellipse fit on the top outline, which is the most accurate, then a constrained one on the bottom, knowing the position of the vertical axis. After reduction of the coordinates, the bottom ellipse can be written as
x²/a² + (y - h)²/b² = 1
which can be solved by least-squares.
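A possible NumPy sketch of that least-squares step (assuming the bottom-rim points are already expressed in coordinates where the cup's vertical axis is the line x = 0):

    import numpy as np

    def fit_constrained_ellipse(x, y):
        """Least-squares fit of x^2/a^2 + (y - h)^2/b^2 = 1 to points (x, y)
        given in coordinates where the vertical axis of the cup is x = 0."""
        # expand to p*x^2 + q*y^2 + s*y + t = 0 and take the null-space direction
        A = np.column_stack([x**2, y**2, y, np.ones_like(x)])
        _, _, vt = np.linalg.svd(A)
        p, q, s, t = vt[-1]
        h = -s / (2.0 * q)        # center height follows from the linear y term
        k = q * h**2 - t          # scale that makes the constant term equal to 1
        return np.sqrt(k / p), np.sqrt(k / q), h   # a, b, h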
Note that it could be advantageous to ask the user to point at the endpoints of the straight edges at the bottom, plus the lowest point, instead of the whole curve.
Solving for the closest top and bottom points is a pure 2D problem (draw the line through the given point and the intersection of the two sides, and find its intersection points with the ellipses).
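A sketch of that 2D step, in the same reduced coordinates; V is the apex where the two silhouette sides meet (obtained beforehand by intersecting the two side lines):

    import numpy as np

    def line_ellipse_intersections(P, V, a, b, h):
        """Intersections of the line through P and V with x^2/a^2 + (y - h)^2/b^2 = 1."""
        P, V = np.asarray(P, float), np.asarray(V, float)
        d = V - P                                  # direction of the line
        # substitute (x, y) = P + t*d into the ellipse equation -> quadratic in t
        A = d[0]**2 / a**2 + d[1]**2 / b**2
        B = 2.0 * (P[0] * d[0] / a**2 + (P[1] - h) * d[1] / b**2)
        C = P[0]**2 / a**2 + (P[1] - h)**2 / b**2 - 1.0
        disc = B**2 - 4.0 * A * C
        if disc < 0:
            return []                              # the line misses this ellipse
        ts = [(-B - np.sqrt(disc)) / (2 * A), (-B + np.sqrt(disc)) / (2 * A)]
        return [P + t * d for t in ts]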
I have a contour in OpenCV with a convexity defect (the one in red), and I want to cut that contour into two parts along a horizontal line through that point. Is there any way to do this so that I just get the contour marked in yellow?
Image describing the problem
That's an interesting question. There are some solutions based on how the concavity points are distributed in your image.
1) If such points do not occur at the bottom of the contour (as in your simple example), here is a pseudo-code:
Find the convex hull C of the image I.
Subtract I from C; that will give you the concavity areas (like the black triangle between the two white triangles in your example).
The point with the minimum y value in that area gives you the horizontal line to cut.
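Something along these lines in Python (the file name is a placeholder; remember that image y grows downward, so the minimum-y pixel is the topmost one):

    import cv2
    import numpy as np

    mask = cv2.imread("shape.png", cv2.IMREAD_GRAYSCALE)
    _, mask = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)

    # largest external contour of the shape I
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    cnt = max(contours, key=cv2.contourArea)

    # filled convex hull C and filled shape I
    hull_img = np.zeros_like(mask)
    cv2.drawContours(hull_img, [cv2.convexHull(cnt)], -1, 255, cv2.FILLED)
    shape_img = np.zeros_like(mask)
    cv2.drawContours(shape_img, [cnt], -1, 255, cv2.FILLED)

    # C - I: the concavity areas
    concavity = cv2.subtract(hull_img, shape_img)

    # minimum-y point of the concavity area -> horizontal cut line
    ys, xs = np.nonzero(concavity)
    cut_y = ys.min()

    # split the shape along that line
    upper, lower = shape_img.copy(), shape_img.copy()
    upper[cut_y:, :] = 0
    lower[:cut_y, :] = 0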
2) If such points can occur anywhere, you need a more intelligent algorithm whose cut lines are not constrained to be horizontal (because the minimum-y point of that difference will simply be the minimum y of the image). You can find the "inner-most" corner points and connect them to each other, and you can recursively cut the remainder in the y-, x+, y+ and x- directions. It really depends on the specs of your input.
I've been working off a variant of the OpenCV squares sample to detect rectangles. It's working fine for closed rectangles, but I was wondering what approaches I could take to detect rectangles that have openings, i.e. missing corners or lines that are too short.
I perform some dilation, which closes small gaps but not these larger ones.
I considered using a convex hull or bounding rect to generate a contour for comparison but since the edges of the rectangle are disconnected, each would read as a separate contour.
I think the first step is to detect which lines are candidates for forming a complete rectangle, and then perform some sort of line extrapolation. This seems promising, but my rectangle edges won't lie perfectly horizontally or vertically.
I'm trying to detect the three leftmost rectangles in this image:
Perhaps this paper is of interest? Rectangle Detection based on a Windowed Hough Transform
Basically, take the Hough line transform of the image. You will get maxima at the locations in (theta, rho) space that correspond to the places where there are lines; the larger the value, the longer/straighter the line. Maybe apply a threshold to keep only the best lines. Then look for pairs of lines which are:
1) parallel: the maxima occur at similar theta values
2) of similar length: the values of the maxima are similar
3) orthogonal to another pair of lines: theta values are 90 degrees away from other pairs' theta values
There are some more details in the paper, such as doing the transform in a sliding window, and then using an error metric to consolidate multiple matches.
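A simplified sketch of the pairing step (without the sliding window or the error metric from the paper; the file name, Canny parameters and tolerances are guesses, and the length check from criterion 2 is left out):

    import cv2
    import numpy as np

    img = cv2.imread("rectangles.png", cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(img, 50, 150)

    # standard Hough transform: each returned (rho, theta) is a peak in the accumulator
    lines = cv2.HoughLines(edges, 1, np.pi / 180, 60)
    lines = lines[:, 0, :] if lines is not None else np.empty((0, 2))

    angle_tol = np.deg2rad(3)

    # criterion 1: roughly parallel pairs (similar theta)
    pairs = [(i, j) for i in range(len(lines)) for j in range(i + 1, len(lines))
             if abs(lines[i][1] - lines[j][1]) < angle_tol]

    # criterion 3: rectangle candidates are two pairs whose orientations differ by ~90 degrees
    rectangles = []
    for a in range(len(pairs)):
        for b in range(a + 1, len(pairs)):
            dtheta = abs(lines[pairs[a][0]][1] - lines[pairs[b][0]][1])
            if abs(dtheta - np.pi / 2) < angle_tol:
                rectangles.append((pairs[a], pairs[b]))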
So I need to extract the ellipses that are closest to being circles, using OpenCV and Java, since I get a lot of false ellipses. I have found the ellipses with color extraction and
Imgproc.fitEllipse(thisContour2f);
My current idea is to check all points of each ellipse and calculate their widths (distances) from the center; the ellipses with the smallest variance in these widths are the closest to a circle in that image. Any better ideas?
I have also tried using Hough Circle but this does not detect anything.
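In Python (for brevity; the equivalent calls exist in the Java bindings), the variance idea would look roughly like this, with candidate_contours and the 0.01 cut-off being placeholders:

    import cv2
    import numpy as np

    def circularity_score(contour):
        """Variance of the contour point radii around the fitted ellipse center:
        the smaller the variance, the closer the shape is to a circle."""
        (cx, cy), _, _ = cv2.fitEllipse(contour)          # needs at least 5 points
        pts = contour.reshape(-1, 2).astype(float)
        radii = np.linalg.norm(pts - np.array([cx, cy]), axis=1)
        return float(np.var(radii / radii.mean()))         # normalized so size does not matter

    # keep only the most circle-like candidates
    # circles = [c for c in candidate_contours if circularity_score(c) < 0.01]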