I am currently working on cell contour detection with OpenCV. So far, I have been able to detect the cell contours, and now I want to find and draw the longest axis of the contour that is parallel to the y-axis.
What I did was create a bounding rectangle from the contour, which gives me the center, the height and the width, and use this information to draw the axes. As it turns out, the major axis does not necessarily run through the center, so at times it sticks out past the cell contour.
My line of approach is to split the contour into two halves along the y-axis, acquire the perpendicular distance from each contour point to the y-axis, and then select the longest on each side, but I suppose this is computationally expensive.
Is there an easy way to find the longest axes of a contour (not a bounding rectangle), that are parallel to the x- or y- coordinate axis?
Here's an image - My cell contour is in thin black, major and minor axes are in red, and the blue "axes" are what I want to find.
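In case it helps, here is a minimal sketch of the column-scan idea described above (assuming contour is a dense contour from cv2.findContours with cv2.CHAIN_APPROX_NONE, so that every column the cell spans contains contour points):

    import numpy as np

    # contour: (N, 1, 2) array from cv2.findContours (assumption)
    pts = contour.reshape(-1, 2)

    # longest chord parallel to the y-axis: for every x column the contour
    # touches, the chord length is max(y) - min(y) within that column
    best_len, best_x = 0, None
    for x in np.unique(pts[:, 0]):
        ys = pts[pts[:, 0] == x, 1]
        if ys.max() - ys.min() > best_len:
            best_len, best_x = int(ys.max() - ys.min()), int(x)
    # the chord parallel to the x-axis works the same way with x and y swapped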
I am trying to find the corners of a square (potentially rotated) shape in order to determine the direction of its primary axes (horizontal and vertical) and to be able to do a perspective transform (straighten it out).
From a prior processing stage I obtain the coordinates of a point (red dot in image) belonging to the shape. Next I do a flood-fill of the shape on a thresholded version of the image to determine its center (not shown) and area, by summing up X and Y of all filled pixels and dividing them by the area (number of pixels filled).
Given this information, what is an easy and reliable way to determine the corners of the shape (blue arrows)?
I was thinking about keeping track of P1, P2, P3, P4, where P1 is (minX, minY), P2 is (minX, maxY), P3 is (maxX, minY) and P4 is (maxX, maxY), so P1 is the point with the smallest value of X encountered, and of all those, the one where Y is smallest too. Then sort them to get a clockwise ordering. But I'm not sure if this is correct in all cases and efficient.
PS: I can't use OpenCV.
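For what it's worth, the min/max bookkeeping described above is only a few lines of plain Python (no OpenCV; filled is assumed to be the list of (x, y) pixels visited by the flood fill):

    # extreme points of the flood-filled region, ties broken as described
    P1 = min(filled, key=lambda p: (p[0], p[1]))    # minX, then minY
    P2 = min(filled, key=lambda p: (p[0], -p[1]))   # minX, then maxY
    P3 = max(filled, key=lambda p: (p[0], -p[1]))   # maxX, then minY
    P4 = max(filled, key=lambda p: (p[0], p[1]))    # maxX, then maxY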
Looking at your image, the directions of the 2 axes of the 2D pattern coordinate system can be estimated from a histogram of gradient directions.
When creating such a histogram, 4 peaks will be found clearly.
If the image is captured from the front (an image without perspective; your image looks like this case), then ideally the angles between adjacent peaks are all 90 degrees.
In that case, the directions of the 2 axes of the pattern coordinate system can be estimated directly from those peaks.
After that, the 4 corners can simply be estimated from an "axis-aligned bounding box" (aligned to the estimated axes, of course); see the sketch below.
If not (when the image is a picture with perspective), the 4 peaks indicate which edge lines run along the axes of the pattern coordinates.
So, for example, you can estimate each corner location as the intersection of the 2 lines along adjacent edges.
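Since OpenCV is off the table here, a minimal NumPy-only sketch of the weighted direction histogram (gray is assumed to be the thresholded image; the peak-finding step is left out):

    import numpy as np

    # partial derivatives of the image in y and x
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx)                        # gradient direction, -pi..pi

    # weight each pixel's direction by its gradient magnitude so edges dominate
    hist, bins = np.histogram(ang, bins=360, range=(-np.pi, np.pi), weights=mag)
    # the 4 dominant peaks of hist give the 2 axis directions (opposite peaks
    # differ by 180 degrees, so they collapse onto 2 axes)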
What I eventually ended up doing is the following:
Trace the edges of the contour using Moore-Neighbour Tracing --> this gives me a sequence of points lying on the border of the rectangle.
During the trace, I observe changes in rectangular distance between the first and last points in a sliding window. The idea is inspired by the paper "The outline corner filter" by C. A. Malcolm (https://spie.org/Publications/Proceedings/Paper/10.1117/12.939248?SSO=1).
This gives me accurate results with low computational overhead and a small memory footprint.
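In outline, the window measure might look like this (a sketch of the idea rather than a faithful port of Malcolm's filter; boundary is the point sequence from the trace and the window width w is a tuning parameter):

    import numpy as np

    def corner_response(boundary, w=7):
        # rectangular (L1) distance between the first and last points of a
        # w-point window slid along the traced boundary; abrupt changes in
        # this value flag corner candidates
        b = np.asarray(boundary, dtype=float)
        n = len(b)
        resp = np.empty(n)
        for i in range(n):
            first, last = b[i], b[(i + w) % n]
            resp[i] = np.abs(first - last).sum()
        return resp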
I want to draw a straight line passing through the centroid of the contour; this line intersects the contour at two points, and I need the distance between those two intersections. Specifically, I want to traverse each contour point, draw a straight line through it and the center of mass so that it also intersects the contour at another point, and then find the pair of intersections with the longest distance. I used OpenCV to draw a straight line from the center of mass to the contour, but this is a line segment that does not extend far enough to intersect the contour on the other side. Is there a good way to find the two intersections with the longest distance?
The purple line may be the result I want, while the green line is what OpenCV drew. Please look at the picture:
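One workable sketch (brute force, but easy to verify): rasterize the contour outline onto a mask, then for each contour point draw the centroid line with endpoints pushed far outside the image so it is guaranteed to cross the whole contour, and intersect the two masks. Here cnt, cx, cy and img are assumed from the earlier processing:

    import cv2
    import numpy as np

    h, w = img.shape[:2]
    edge = np.zeros((h, w), np.uint8)
    cv2.drawContours(edge, [cnt], -1, 255, 1)       # 1-px contour outline

    best_d, best_pair = 0, None
    for p in cnt.reshape(-1, 2):
        d = p - np.array([cx, cy], dtype=float)
        n = np.linalg.norm(d)
        if n < 1e-9:
            continue
        d /= n
        # endpoints far outside the image, so the line fully crosses it
        a = (int(cx - 2 * w * d[0]), int(cy - 2 * h * d[1]))
        b = (int(cx + 2 * w * d[0]), int(cy + 2 * h * d[1]))
        line = np.zeros_like(edge)
        cv2.line(line, a, b, 255, 1)
        hits = cv2.findNonZero(cv2.bitwise_and(edge, line))
        if hits is None or len(hits) < 2:
            continue
        pts = hits.reshape(-1, 2).astype(float)
        # farthest pair among the (few) intersection pixels
        dd = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)
        i, j = np.unravel_index(np.argmax(dd), dd.shape)
        if dd[i, j] > best_d:
            best_d, best_pair = dd[i, j], (pts[i], pts[j])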
Given a contour with an easily-identifiable edge, how would one straighten it and its contents, as pictured?
Detect the black edge and fit a spline curve to it.
From that spline you will be able to draw normals, and mark points regularly along it. This forms a (u, v) mesh that is easy to straighten.
To compute the destination image, draw horizontal rows, each of which corresponds to a particular normal in the source. Then sampling along the horizontal corresponds to some fractional (x, y) coordinates in the source, and you can bilinearly interpolate the four neighboring pixels for good-quality resampling.
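A sketch of that pipeline with SciPy and OpenCV (edge_pts is assumed to be the (N, 2) edge samples, out_w and out_h the destination size, and the normal may need its sign flipped depending on which side the content lies):

    import cv2
    import numpy as np
    from scipy.interpolate import splprep, splev

    # fit a smoothing spline to the detected edge points
    tck, _ = splprep(edge_pts.T, s=len(edge_pts))
    u = np.linspace(0, 1, out_w)                    # regular samples along the curve
    x, y = splev(u, tck)
    dx, dy = splev(u, tck, der=1)
    n = np.hypot(dx, dy)
    nx, ny = -dy / n, dx / n                        # unit normals to the spline

    # (u, v) grid: column j walks along the curve, row i steps along the normal
    v = np.arange(out_h, dtype=np.float32)[:, None]
    map_x = (x[None, :] + v * nx[None, :]).astype(np.float32)
    map_y = (y[None, :] + v * ny[None, :]).astype(np.float32)
    straight = cv2.remap(img, map_x, map_y, cv2.INTER_LINEAR)  # bilinear resampling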
I am trying to find circles in images and warp them back to a canonical view (i.e. as if looking into the center). However, circles in general project to ellipses under perspective transformations. So I am first detecting ellipses, roughly doing the following (in OpenCV):
1. Find contours in the image
2. Estimate the area of each contour
3. Fit a bounding box to the contour and estimate the ellipse area as width/2 * height/2 * PI
4. Check whether the difference between the contour area and the estimated ellipse area is below a threshold
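In code, the detection step might look roughly like this (binary is the thresholded input, the OpenCV 4 findContours signature is assumed, and the 10% tolerance is a placeholder):

    import cv2
    import numpy as np

    contours, _ = cv2.findContours(binary, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    candidates = []
    for c in contours:
        area = cv2.contourArea(c)                    # step 2
        x, y, w, h = cv2.boundingRect(c)             # step 3
        est = (w / 2.0) * (h / 2.0) * np.pi          # ellipse area estimate
        if est > 0 and abs(area - est) / est < 0.1:  # step 4
            candidates.append(c)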
Assuming I have found an ellipse by this method, how can I rectify it back to a circle so that I "undo" the perspective transform (although not the in-plane rotation, as this cannot be recovered, I guess)? For example, if it were a rectangle I would just compute the homography from the 4 corners of an upright rectangle to the detected projected one.
I have no idea how to do this with an ellipse; any help is much appreciated.
Thanks
A circle is indeed transformed into an ellipse by a perspective transformation, however the axes of that ellipse are not the images of the axes of the initial circle, as shown in this illustration:
(source: brian-curtis.com)
You can refer to this link for a detailed demonstration. As a consequence, the bounding rectangle of the ellipse is not the image of the initial square under the perspective transformation.
EDIT:
This means that the center and the axes of the ellipse you observe are not the images, by the perspective mapping, of the center and axes of the original circle. I tried to make a clearer illustration:
On this image, I drew in green the axes and center of the original circle, after perspective transformation, and the axes and center of the ellipse in red. On this specific example, the vertical axis is not deformed by the perspective mapping, but it would be deformed in the general case. Hence, deforming a circle by a perspective transformation gives an ellipse, but the axes and center that you see are not the axes and center of the original circle.
As a consequence, you cannot simply use the top, bottom, left and right points on the ellipse (the red points, which can easily be detected from the ellipse) to map these onto the top, bottom, left and right points of the circle because they do not correspond under the perspective mapping (the green points do, but they cannot be detected easily from the ellipse).
In the end, I don't think that it is at all possible to estimate the perspective mapping from a single detected ellipse.
This looks like an indeterminate problem.
The projection of a rectangle supplies 8 equations (4 point correspondences) in 8 unknowns (the homography coefficients).
With an ellipse, you can only retrieve the center coordinates (2 DOF), the axis lengths (2 DOF) and the axis orientation (1 DOF): 5 constraints in total, which is not enough to pin down 8 coefficients.
I have written an algorithm to extract the points shown in the image. They form a convex shape and I know their order. How do I extract the corners (top 3 and bottom 3) from such points?
I'm using OpenCV.
If you already have the convex hull of the object, and that hull includes the corner points, then all you need to do is simplify the hull until it only has 6 points.
There are many ways to simplify polygons, for example you could just use this simple algorithm used in this answer: How to find corner coordinates of a rectangle in an image
    do
        for each point P on the convex hull:
            measure its distance to the line AB
            between the point A before P and the point B after P
        remove the point with the smallest distance
    repeat until 6 points left
If you do not know the exact number of points, you could instead remove points until the minimum distance rises above a certain threshold.
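A small Python sketch of that loop (simplify_hull and point_line_dist are hypothetical helpers; hull is assumed to be an ordered (N, 2) array of hull points):

    import numpy as np

    def point_line_dist(p, a, b):
        # perpendicular distance from P to the line through A and B
        ab, ap = b - a, p - a
        return abs(ab[0] * ap[1] - ab[1] * ap[0]) / np.linalg.norm(ab)

    def simplify_hull(hull, target=6):
        pts = [np.asarray(p, dtype=float) for p in hull]
        while len(pts) > target:
            d = [point_line_dist(pts[i], pts[i - 1], pts[(i + 1) % len(pts)])
                 for i in range(len(pts))]
            pts.pop(int(np.argmin(d)))   # drop the least significant point
        return np.array(pts)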
You could also use Ramer-Douglas-Peucker to simplify the polygon; OpenCV already has that implemented in cv::approxPolyDP.
Just modify the OpenCV squares sample to use 6 points instead of 4.
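For example, in Python, growing epsilon until the approximation collapses to 6 points (hull is the convex hull from before; the growth factor is an arbitrary choice):

    import cv2

    eps = 0.01 * cv2.arcLength(hull, True)
    approx = cv2.approxPolyDP(hull, eps, True)
    while len(approx) > 6:
        eps *= 1.5                              # loosen until only 6 points remain
        approx = cv2.approxPolyDP(hull, eps, True)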
Instead of trying to directly determine which of your feature points correspond to corners, how about applying a corner detection algorithm to the entire image and then looking for which of your feature points appear close to peaks in the corner detector's response?
I'd suggest starting with a Harris corner detector. The OpenCV implementation is cv::cornerHarris.
Essentially, the Harris algorithm applies both a horizontal and a vertical Sobel filter to the image (or some other approximation of the partial derivatives of the image in the x and y directions).
It then constructs a 2-by-2 structure matrix at each image pixel and looks at the eigenvalues of that matrix: a point is declared a corner if both eigenvalues are above some threshold (in practice this is scored with the response det(M) - k * trace(M)^2 rather than by computing the eigenvalues explicitly).
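A minimal usage sketch (blockSize, ksize, k and the 1% threshold are common starting values, not tuned ones):

    import cv2
    import numpy as np

    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    resp = cv2.cornerHarris(np.float32(gray), blockSize=2, ksize=3, k=0.04)
    peaks = np.argwhere(resp > 0.01 * resp.max())   # (row, col) of strong corners
    # keep only the feature points that land near one of these peaks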