How to rectify a detected ellipse - opencv

I am trying to find circles in images and warp them back to a canonical view (i.e. as if looking straight at their centers). However, circles in general project to ellipses under perspective transformations, so I am first detecting ellipses, roughly as follows (in OpenCV):
1. Find contours in the image.
2. Estimate the area of each contour.
3. Fit a bounding box to the contour and estimate the ellipse area as width/2 * height/2 * PI (the area of the inscribed ellipse).
4. Check whether the difference between the contour area and the estimated ellipse area is below a threshold (a rough sketch of this heuristic follows the list).
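A minimal sketch of that heuristic might look like this (the file name and the 0.1 threshold are illustrative, not from the original code):

```python
import cv2
import numpy as np

img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input image
_, binary = cv2.threshold(img, 128, 255, cv2.THRESH_BINARY)
# OpenCV 4.x return signature; OpenCV 3.x returns an extra first value.
contours, _ = cv2.findContours(binary, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)

ellipses = []
for cnt in contours:
    if len(cnt) < 5:               # cv2.fitEllipse needs at least 5 points
        continue
    contour_area = cv2.contourArea(cnt)
    x, y, w, h = cv2.boundingRect(cnt)
    ellipse_area = np.pi * (w / 2) * (h / 2)   # area of the inscribed ellipse
    # Accept the contour if its area is close to the inscribed ellipse's area.
    if ellipse_area > 0 and abs(contour_area - ellipse_area) / ellipse_area < 0.1:
        ellipses.append(cv2.fitEllipse(cnt))   # ((cx, cy), (w, h), angle)
```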
Assuming I have found an ellipse by this method, how can I rectify it back to a circle so that I "undo" the perspective transform (though not the in-plane rotation, which I guess cannot be recovered)? For example, if it were a rectangle I would just compute the homography from the 4 corners of an upright rectangle to the detected projected one.
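For the rectangle case mentioned above, the homography step would look something like this (the corner coordinates are made up for illustration):

```python
import cv2
import numpy as np

img = cv2.imread("scene.png")  # hypothetical source image

# Four corners of the detected, projected rectangle (made-up values),
# listed in the same order as the canonical corners below.
src = np.float32([[120, 80], [410, 95], [430, 300], [100, 310]])
# Corners of the upright rectangle we want to map onto.
dst = np.float32([[0, 0], [300, 0], [300, 200], [0, 200]])

# 4 point correspondences give 8 equations for the 8 homography unknowns.
H = cv2.getPerspectiveTransform(src, dst)
rectified = cv2.warpPerspective(img, H, (300, 200))
```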
I have no idea how to do this with an ellipse, any help is much appreciated.
Thanks

A circle is indeed transformed into an ellipse by a perspective transformation; however, its axes are not the images of the axes of the initial circle, as shown in this illustration:
(source: brian-curtis.com)
You can refer to this link for a detailed demonstration. As a consequence, the bounding rectangle of the ellipse is not the image of the initial square under the perspective transformation.
EDIT:
This means that the center and the axes of the ellipse you observe are not the images, under the perspective mapping, of the center and axes of the original circle. I tried to make a clearer illustration:
In this image, I drew in green the axes and center of the original circle after the perspective transformation, and in red the axes and center of the ellipse. In this specific example the vertical axis is not deformed by the perspective mapping, but it would be in the general case. Hence, deforming a circle by a perspective transformation gives an ellipse, but the axes and center that you see are not the images of the axes and center of the original circle.
As a consequence, you cannot simply map the top, bottom, left and right points of the ellipse (the red points, which are easy to detect from the ellipse) onto the top, bottom, left and right points of the circle, because they do not correspond under the perspective mapping (the green points do correspond, but they cannot be detected easily from the ellipse).
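You can verify this numerically: project points on a circle through a homography, fit an ellipse to the result, and compare the fitted center with the image of the circle's center (the homography values below are arbitrary, chosen only for illustration):

```python
import cv2
import numpy as np

# An arbitrary perspective transform (the last row is not [0, 0, 1],
# so it is not a mere affinity).
H = np.array([[1.0,  0.2,  50.0],
              [0.1,  1.1,  40.0],
              [1e-3, 2e-3,  1.0]])

# Points on a circle of radius 100 centered at (100, 100).
t = np.linspace(0, 2 * np.pi, 200)
circle = np.stack([100 + 100 * np.cos(t), 100 + 100 * np.sin(t)], axis=1)

projected = cv2.perspectiveTransform(circle.reshape(-1, 1, 2), H).reshape(-1, 2)
(ecx, ecy), axes, angle = cv2.fitEllipse(projected.astype(np.float32))

# Image of the circle's center under the same homography.
center_img = cv2.perspectiveTransform(np.float32([[[100, 100]]]), H).ravel()

print("fitted ellipse center:  ", (ecx, ecy))
print("projected circle center:", tuple(center_img))
# The two differ: the ellipse center is not the image of the circle center.
```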
In the end, I don't think that it is at all possible to estimate the perspective mapping from a single detected ellipse.

This looks like an indeterminate problem.
The projection of a rectangle supplies 8 equations in 8 unknowns (homography coefficients).
From an ellipse, you can only retrieve the center coordinates (2 DOF), the axis lengths (2 DOF) and the axis orientation (1 DOF): that is only 5 constraints for 8 unknowns, so 3 degrees of freedom remain undetermined.
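One way to see the indeterminacy concretely: two different homographies can map the same circle onto exactly the same ellipse. Below, H is composed with a rotation R about the circle's center; since R maps the circle onto itself, H and H·R produce the same image ellipse (matrix values are arbitrary):

```python
import cv2
import numpy as np

H = np.array([[1.0, 0.2, 50.0],
              [0.1, 1.1, 40.0],
              [0.2, 0.1,  1.0]])

theta = 0.7  # any in-plane rotation about the unit circle's center
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])

t = np.linspace(0, 2 * np.pi, 200)
circle = np.stack([np.cos(t), np.sin(t)], axis=1).reshape(-1, 1, 2)

# R maps the unit circle onto itself, so both homographies yield the same
# ellipse (up to numerical noise) despite being different mappings.
e1 = cv2.fitEllipse(cv2.perspectiveTransform(circle, H).astype(np.float32))
e2 = cv2.fitEllipse(cv2.perspectiveTransform(circle, H @ R).astype(np.float32))
print(e1)
print(e2)
```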

Related

Using flood-fill to detect corners of a rectangle

I am trying to find the corners of a square (potentially rotated) shape, to determine the direction of its primary axes (horizontal and vertical) and be able to do a perspective transform (straighten it out).
From a prior processing stage I obtain the coordinates of a point (red dot in image) belonging to the shape. Next I do a flood-fill of the shape on a thresholded version of the image to determine its center (not shown) and area: the area is the number of filled pixels, and the center comes from summing the X and Y coordinates of all filled pixels and dividing each sum by the area.
Given this information, what is an easy and reliable way to determine the corners of the shape (blue arrows)?
I was thinking about keeping track of P1, P2, P3, P4, where P1 is (minX, minY), P2 is (minX, maxY), P3 is (maxX, minY) and P4 is (maxX, maxY); so P1 is the point with the smallest value of X encountered, and among those, the one where Y is smallest too. Then sort them to get a clockwise ordering. But I'm not sure whether this is correct in all cases, or efficient.
PS: I can't use OpenCV.
Looking at your image, the directions of the two axes of the 2D pattern's coordinate system can be estimated from a histogram of gradient directions.
When you build such a histogram, 4 peaks will show up clearly.
If the image is captured from the front (i.e. without perspective, which your image appears to be), then ideally the angles between adjacent peaks are all 90 degrees, and the directions of the two axes of the pattern coordinate system can be estimated directly from those peaks.
After that, the 4 corners can simply be estimated from an axis-aligned bounding box (aligned to the estimated axes, of course).
If not (i.e. the picture has perspective), the 4 peaks indicate which edge lines run along the axes of the pattern coordinates.
So, for example, you can estimate each corner location as the intersection of the two lines along adjacent edges.
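Since OpenCV is off the table for the asker, here is a numpy-only sketch of the gradient-direction histogram idea (the bin count, the peak-separation rule, and all names are my own choices, not a reference implementation):

```python
import numpy as np

def axis_directions(gray):
    """Estimate the two dominant edge orientations of a pattern.

    gray: 2D float array (grayscale image).
    Returns two angles in radians, in [0, pi).
    """
    gy, gx = np.gradient(gray)        # finite-difference gradients
    magnitude = np.hypot(gx, gy)
    angle = np.arctan2(gy, gx)        # range (-pi, pi]

    # Opposite gradient directions belong to the same edge orientation,
    # so fold the 4 gradient peaks into 2 orientation peaks in [0, pi).
    orientation = np.mod(angle, np.pi)

    # Weight each pixel's vote by gradient magnitude so that flat
    # regions contribute almost nothing.
    hist, edges = np.histogram(orientation, bins=180, range=(0.0, np.pi),
                               weights=magnitude)

    # Pick the strongest bin, then the strongest bin that is at least
    # ~30 degrees away from it (circular distance in orientation space).
    order = np.argsort(hist)[::-1]
    first = order[0]
    second = next(b for b in order[1:]
                  if min(abs(b - first), 180 - abs(b - first)) > 30)
    centers = (edges[:-1] + edges[1:]) / 2
    return centers[first], centers[second]
```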
What I eventually ended up doing is the following:
Trace the edges of the contour using Moore-Neighbour tracing --> this gives me a sequence of points lying on the border of the rectangle.
During the trace, I observe changes in the rectangular distance between the first and last points in a sliding window. The idea is inspired by the paper "The outline corner filter" by C. A. Malcolm (https://spie.org/Publications/Proceedings/Paper/10.1117/12.939248?SSO=1).
This gives me accurate results with low computational overhead and little memory.
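A rough sketch of that sliding-window corner test (this is my reading of the idea, not the paper's exact filter; I use the Euclidean chord length where the answer mentions a rectangular distance, and the window size and threshold are illustrative):

```python
import numpy as np

def corner_candidates(boundary, window=11, thresh=0.75):
    """Flag likely corner points along a traced, closed boundary.

    boundary: ordered (x, y) points from Moore-Neighbour tracing.
    Along a straight run, the distance between the window's endpoints is
    about as long as the traced arc; at a corner the chord shortens.
    """
    pts = np.asarray(boundary, dtype=float)
    n = len(pts)
    half = window // 2
    corners = []
    for i in range(n):
        a = pts[(i - half) % n]        # first point of the window
        b = pts[(i + half) % n]        # last point of the window
        chord = np.hypot(*(a - b))     # endpoint distance
        if chord < thresh * (window - 1):
            corners.append(i)          # chord much shorter than arc
    return corners
```

In practice you would also apply non-minimum suppression so that each physical corner yields a single index rather than a cluster of neighboring ones.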

How does meanshift tracking work? (using histograms)

I know that the mean-shift algorithm calculates the mean of a pixel density and checks whether the center of the ROI coincides with this point. If not, it moves the ROI center to the computed mean and checks again, like in this picture:
For a spatial density it is clear how to find the mean point. But it can't simply calculate the mean of a histogram and get the new position from that point. How can this algorithm work using a color histogram?
The feature space in your image is 2D.
Say you have an intensity image (so the feature space is 1D); then you would just have a line (e.g. from 0 to 255) on which the points are located. The circles shown above would just be line segments on that [0, 255] line. Depending on their means, these line segments would then shift, just like the circles do in 2D.
You talked about color histograms, so I assume you are talking about RGB.
In that case your feature space is 3D, so you have a sphere instead of a line segment or circle. Your axes are R, G and B, and the pixels of your image are points in that 3D feature space. You then still compute the mean of the points inside a sphere and shift the sphere's center towards that mean.
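Putting that together for tracking, the usual OpenCV recipe is to turn the color histogram back into a spatial density via back-projection, and then run mean shift on that density. A minimal sketch (the file name, initial ROI coordinates and histogram ranges are placeholders):

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("video.mp4")        # hypothetical input video
ok, frame = cap.read()

# Hypothetical initial ROI (x, y, w, h) around the object to track.
x, y, w, h = 300, 200, 100, 80
track_window = (x, y, w, h)

# Hue histogram of the ROI: this is the color model.
hsv_roi = cv2.cvtColor(frame[y:y+h, x:x+w], cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv_roi, np.array([0, 60, 32]), np.array([180, 255, 255]))
roi_hist = cv2.calcHist([hsv_roi], [0], mask, [180], [0, 180])
cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)

term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Back-projection: each pixel gets the histogram value of its hue,
    # i.e. how well it matches the color model. This converts the color
    # histogram into a 2D spatial density that mean shift can climb.
    back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
    _, track_window = cv2.meanShift(back_proj, track_window, term_crit)
    x, y, w, h = track_window
    cv2.rectangle(frame, (x, y), (x + w, y + h), 255, 2)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(30) & 0xFF == 27:       # Esc to quit
        break
```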

hough circle opencv coordinates of circles' center

I am using this page to identify circles and their centers in my images. I know that the top left corner is the (0,0) point. But then I noticed that the x and y coordinates returned for my circles' centers are all positive. Why is that? Shouldn't the y coordinate be negative? I am talking about the circles[i][1] values from the original code.
No, it is correct.
In computer vision the y-axis is usually inverted relative to the usual mathematical convention: the origin is at the top-left corner and y increases downward, so every pixel inside the image has non-negative coordinates.
OpenCV follows this coordinate system.
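A small sketch showing the convention with cv2.HoughCircles (the file name and parameter values are placeholders):

```python
import cv2
import numpy as np

img = cv2.imread("circles.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input
img = cv2.medianBlur(img, 5)

circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1, minDist=20,
                           param1=100, param2=30, minRadius=5, maxRadius=100)

if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        # (x, y) is the center in image coordinates: x grows to the right
        # and y grows downward from the top-left origin, so both values
        # are non-negative for any center inside the image.
        print(f"center=({x}, {y}), radius={r}")
```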

OpenCV: Draw major axis of contour

I am currently working on cell contour detection with OpenCV. So far I have been able to detect the cell contours, and now I want to find and draw the longest axis of the contour that is parallel to the y-axis.
What I did was create a bounding rectangle from the contour, which gives the center, height and width, and use this information to draw the axes. As it turns out, the major axis does not necessarily run through the center, so at times it extends beyond the cell contour.
My line of approach is to split the contour into two halves along the y-axis, acquire the perpendicular distance from each contour point to the y-axis, and then select the longest on each side, but I suppose this is computationally expensive.
Is there an easy way to find the longest axes of a contour (not a bounding rectangle), that are parallel to the x- or y- coordinate axis?
Here's an image - My cell contour is in thin black, major and minor axes are in red, and the blue "axes" are what I want to find.
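One cheap way to get those axis-parallel chords is to rasterize the contour and scan rows and columns; this is linear in the number of pixels, so less expensive than it may sound. A sketch (the function and variable names are mine; for strongly concave contours the row/column span may bridge background gaps, which you may need to handle separately):

```python
import cv2
import numpy as np

def longest_axis_parallel_chords(contour, shape):
    """Return the widest horizontal and tallest vertical chords.

    contour: a single contour from cv2.findContours.
    shape:   (height, width) of the source image.
    """
    # Rasterize the contour interior so every row/column can be scanned.
    mask = np.zeros(shape, dtype=np.uint8)
    cv2.drawContours(mask, [contour], -1, 255, thickness=cv2.FILLED)
    filled = mask > 0

    # Widest horizontal chord: for each row, the span between the
    # leftmost and rightmost filled pixels.
    best_h = None
    for yy in np.flatnonzero(filled.any(axis=1)):
        xs = np.flatnonzero(filled[yy])
        if best_h is None or xs[-1] - xs[0] > best_h[2] - best_h[1]:
            best_h = (yy, xs[0], xs[-1])

    # Tallest vertical chord: same idea, per column.
    best_v = None
    for xx in np.flatnonzero(filled.any(axis=0)):
        ys = np.flatnonzero(filled[:, xx])
        if best_v is None or ys[-1] - ys[0] > best_v[2] - best_v[1]:
            best_v = (xx, ys[0], ys[-1])

    return best_h, best_v   # (row, x0, x1), (col, y0, y1)
```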

what is the relationship between image edges and gradient?

Can anybody help me interpret this?
"Edge points may be located by the maxima of the modulus of the gradient, and the direction of the edge contour is orthogonal to the direction of the gradient."
Paul R has given you an answer, so I'll just add some images to help make the point.
In image processing, when we refer to a "gradient" we usually mean the change in brightness over a series of pixels. You can create gradient images using software such as GIMP or Photoshop.
Here's an example of a linear gradient from black (left) to white (right):
The gradient is "linear" meaning that the change in intensity is directly proportional to the distance between pixels. This particular gradient is smooth, and we wouldn't say there is an "edge" in this image.
If we plot the brightness of the gradient vs. X-position (left to right), we get a plot that looks like this:
Here's an example of an object on a background. The edges are a bit fuzzy, but this is common in images of real objects. The pixel brightness does not change from black to white from one pixel to the next: there is a gradient that includes shades of gray. This is not obvious since you typically have to zoom into a photo to see the fuzzy edge.
In image processing we can find those edges by looking at sharp transitions (sharp gradients) from one brightness to another. If we zoom into the upper left corner of that box, we can see that there is a transition from white to black over just a few pixels. This transition is a gradient, too. The difference is that the gradient is located between two regions of constant color: white on the left, black on the right.
The red arrow shows the direction of the gradient from background to foreground: pixels are light on the left, and as we move in the +x direction the pixels become darker. If we plot the brightness sampled along the arrow, we'll get something like the following plot, with red squares representing the brightness for a specific pixel. The change isn't linear, but instead will look like one side of a bell curve:
The blue line segment is a rough approximation of the slope of the curve at its steepest. The "true" edge point is the point at which slope is steepest along the gradient corresponding to the edge of an object.
Gradient magnitude and direction can be calculated using horizontal and vertical Sobel filters. You can then calculate the direction of the gradient as:
gradientAngle = atan2(gradientY, gradientX)
(atan2 is preferable to a plain arctan of the ratio gradientY / gradientX because it preserves the quadrant and handles gradientX = 0.)
The gradient is steepest in the direction perpendicular to the edge of the object.
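A short sketch of this with OpenCV's Sobel filters (the input file name is a placeholder):

```python
import cv2
import numpy as np

img = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)  # hypothetical input

# Horizontal and vertical Sobel filters estimate the partial derivatives.
gx = cv2.Sobel(img, cv2.CV_32F, 1, 0, ksize=3)   # d(brightness)/dx
gy = cv2.Sobel(img, cv2.CV_32F, 0, 1, ksize=3)   # d(brightness)/dy

magnitude = np.sqrt(gx**2 + gy**2)   # large at sharp transitions (edges)
angle = np.arctan2(gy, gx)           # gradient direction, in radians

# The edge contour runs orthogonal to the gradient direction.
edge_angle = angle + np.pi / 2
```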
If you look at some black and white images of real scenes, you can zoom in, look at individual pixel values, and develop a good sense of how these principles apply.
Object edges typically result in a step change in intensity. So if you take the derivative of intensity it will have a large (positive or negative) value at edges and a smaller value elsewhere. If you can identify the direction of steepest gradient then this will be at right angles to (orthogonal to) the object edge.
