Do edge points of an object represent an ellipse? - image-processing

Given the edge points of an object, say Obj = (xi, yi); i = 1, 2, 3, ....
How can we tell whether these edge points represent an ellipse or not?

As long as you have enough points (at least 5 for a general conic) you could try a linear least-squares fit:
See here: https://math.stackexchange.com/a/153150/104118
See section 7, "Fitting an Ellipse to 2D Points", in the linked document: http://www.geometrictools.com/Documentation/LeastSquaresFitting.pdf
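As a rough sketch of such a fit (NumPy-based, not from the linked paper; the function name and the ellipse test are my own), you could solve for the coefficients of the general conic and then check the discriminant:

```python
import numpy as np

def fit_conic(points):
    """Least-squares fit of the general conic A*x^2 + B*x*y + C*y^2 + D*x + E*y + F = 0.
    `points` is an (N, 2) array with N >= 5; this is only a sketch of the idea."""
    x, y = points[:, 0], points[:, 1]
    # Design matrix: one row per point, one column per conic coefficient
    M = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    # Homogeneous least squares: take the right singular vector with the
    # smallest singular value as the coefficient vector
    _, _, vt = np.linalg.svd(M)
    A, B, C, D, E, F = vt[-1]
    # The fitted conic is an ellipse when the discriminant B^2 - 4*A*C is negative
    return (A, B, C, D, E, F), (B * B - 4 * A * C) < 0
```

You would still want to check how well the points actually satisfy the fitted equation, since the fit will return "something" even for non-elliptical data.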

Off the top of my head, I would calculate the axis with minimal variance (call it a) and the axis with maximal variance (call it b).
I would check that those axes are reasonably close to being perpendicular - if not, then it's probably not an ellipse. If they are close to being perpendicular, I would rotate the point cloud so that a and b are aligned with the x- and y-axes.
The next step would be to translate the point cloud so its center is at (0,0) and then check that each translated point lies close to the perimeter of the ellipse with axes a and b, by plugging each of the points into the equation of the ellipse and checking that the value is close to 0.
This is all based on me reading "edge points" as just looking at the points used by edges. If the edges themselves are to be involved, you would have to check that the edges go "around the clock" as well.
Well I know this was loose... hope it made sense somehow :-).
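A minimal NumPy sketch along those lines, assuming the points roughly cover the whole outline (PCA is used to get the two variance axes, which are perpendicular by construction, so the perpendicularity check is implicit here; the tolerance is an illustrative parameter):

```python
import numpy as np

def looks_like_ellipse(points, tol=0.1):
    """Rough sketch of the check above: find the principal axes, center and
    rotate the points, then test how close each point comes to satisfying
    the ellipse equation."""
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)
    # Eigenvectors of the covariance matrix give the min/max-variance axes
    _, vecs = np.linalg.eigh(np.cov(centered.T))
    aligned = centered @ vecs                       # rotate onto the x/y axes
    a = np.abs(aligned[:, 0]).max()                 # semi-axis along x
    b = np.abs(aligned[:, 1]).max()                 # semi-axis along y
    # Each point should satisfy x^2/a^2 + y^2/b^2 close to 1
    residual = aligned[:, 0]**2 / a**2 + aligned[:, 1]**2 / b**2 - 1.0
    return np.all(np.abs(residual) < tol)
```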

Related

Using flood-fill to detect corners of a rectangle

I am trying to find the corners of a square (potentially rotated) shape, to determine the direction of its primary axes (horizontal and vertical) and to be able to do a perspective transform (straighten it out).
From a prior processing stage I obtain the coordinates of a point (red dot in image) belonging to the shape. Next I do a flood-fill of the shape on a thresholded version of the image to determine its center (not shown) and area, by summing up X and Y of all filled pixels and dividing them by the area (number of pixels filled).
Given this information, what is an easy and reliable way to determine the corners of the shape (blue arrows)?
I was thinking about keeping track of P1, P2, P3, P4, where P1 is (minX, minY), P2 is (minX, maxY), P3 is (maxX, minY) and P4 is (maxX, maxY), so P1 is the point with the smallest value of X encountered and, of all such points, the one where Y is smallest too. Then sort them to get a clockwise ordering. But I'm not sure if this is correct in all cases, or efficient.
PS: I can't use OpenCV.
Looking at your image, the directions of the 2 axes of the 2D pattern coordinate system can be estimated from a histogram of gradient directions.
When creating such a histogram, 4 peaks will be found clearly.
If the image is captured from the front (an image without perspective; your image looks like this case), the angles between adjacent peaks are ideally all 90 degrees.
The directions of the 2 axes of the pattern coordinate system can then be estimated directly from those peaks.
After that, the 4 corners can simply be estimated from an axis-aligned bounding box (aligned to the estimated axes, of course).
If not (when the image has perspective), the 4 peaks indicate which edge lines run along the axes of the pattern coordinate system.
So, for example, you can estimate each corner location as the intersection of 2 lines fitted along the edges.
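Since OpenCV is off the table, here is a minimal NumPy-only sketch of the orientation-histogram idea (the threshold and suppression window are illustrative values, not from the answer; opposite gradient directions are folded together, so the 4 peaks collapse into 2 peaks roughly 90 degrees apart):

```python
import numpy as np

def dominant_axis_angles(gray, mag_thresh=20.0, window=20):
    """Estimate the two dominant edge directions of a grayscale image `gray`
    (2D float array) from a magnitude-weighted gradient-orientation histogram."""
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0    # fold opposite directions together
    strong = mag > mag_thresh
    # Orientation histogram weighted by gradient magnitude
    hist, _ = np.histogram(ang[strong], bins=180, range=(0, 180), weights=mag[strong])
    first = int(np.argmax(hist))
    # Suppress a window around the first peak, then take the second peak
    masked = hist.copy()
    for k in range(-window, window + 1):
        masked[(first + k) % 180] = 0
    second = int(np.argmax(masked))
    return first, second    # two dominant edge directions, in degrees
```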
What I eventually ended up doing is the following:
Trace the edges of the contour using Moore-Neighbour Tracing --> this gives me a sequence of points lying on the border of the rectangle.
During the trace, I observe changes in rectangular distance between the first and last points in a sliding window. The idea is inspired by the paper "The outline corner filter" by C. A. Malcolm (https://spie.org/Publications/Proceedings/Paper/10.1117/12.939248?SSO=1).
This gives me accurate results with low computational overhead and little memory use.
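A rough sketch of that sliding-window corner test, using the Euclidean chord length for simplicity (the cited paper uses a rectangular distance; window size and threshold are illustrative):

```python
import numpy as np

def corner_candidates(contour, window=15, ratio_thresh=0.8):
    """Flag likely corners along a traced contour.
    `contour` is an (N, 2) array of border points in tracing order.
    On a straight run the chord between the window's endpoints is about as long
    as the arc it spans; at a corner the chord is noticeably shorter."""
    contour = np.asarray(contour, dtype=float)
    n = len(contour)
    candidates = []
    for i in range(n):
        a = contour[i]
        b = contour[(i + window) % n]
        chord = np.hypot(b[0] - a[0], b[1] - a[1])
        arc = window  # adjacent traced pixels are roughly 1 apart, so arc length ~ window
        if chord < ratio_thresh * arc:
            candidates.append((i + window // 2) % n)  # index near the middle of the window
    # Note: adjacent candidates would still need to be clustered / non-max suppressed.
    return candidates
```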

Finding point height on a cup using OpenCV

Suppose that I want to find the 3D position of a cup with its rotation, with image input like this (this cup can be rotated to point in any direction):
Given that I have a bunch of 2D points specifying the top circle and bottom circle, like the following image. (Let's assume that these points are given by a person drawing lines around the cup, so they won't be very accurate. Ellipse fitting or SolvePnP might be needed to recover a good approximation. Also, the bottom circle is not a complete circle, it's just part of one, and sometimes the top part will be occluded as well, so we cannot rely on there being a complete circle.)
I also know the physical radius of the top and bottom circle, and the distance between them by using a ruler to measure them beforehand.
I want to find the 2 complete circles, like the following image (I think I need to find the position of the cup and its up direction before I can project the complete circles):
Let's say that my ultimate goal is to be able to find the closest 2D top point and closest 2D bottom point, given a 2D point on the side of the cup, like the following image:
A point can also be inside of the cup, like so:
Let's define distance(a, b) as a function that finds the Euclidean distance between points a and b, in pixel units.
From that I would be able to calculate distance(side point, bottom point) / distance(top point, bottom point), which will be a value between 0 and 1. If I multiply this number by the physical height of the cup measured with the ruler, then I will know how high the point is above the bottom of the cup, in metric units.
What is the method I can use to find the corresponding top and bottom point given point on the side, so that I can finally find out the height of the point from the bottom of the cup?
I'm thinking of using PnP to solve this, but my points do not have correct IDs associated with them. And I don't need the exact rotation of the cup; I only want to know its up direction.
I also think that fitting an ellipse might help somewhat, but maybe it's not the best approach because the circles are not complete.
If you have any suggestions, please tell me how to obtain the point height from the bottom of the cup.
Given the accuracy issues, I don't think it is worth performing a 3D reconstruction of the cone.
I would perform a "standard" ellipse fit on the top outline, which is the most accurate, then a constrained one on the bottom, knowing the position of the vertical axis. After reduction of the coordinates, the bottom ellipse can be written as
x²/a² + (y - h)²/b² = 1
which can be solved by least-squares.
Note that it could be advantageous to ask the user to point at the endpoints of the straight edges at the bottom, plus the lowest point, instead of the whole curve.
Solving for the closest top and bottom points is then a pure 2D problem (draw the line through the given point and the intersection of the side lines, and find its intersection points with the two ellipses).
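A minimal NumPy sketch of that constrained fit (function name mine; it assumes the coordinates have already been reduced so the cup's axis of symmetry is x = 0, as described above):

```python
import numpy as np

def fit_bottom_ellipse(points):
    """Constrained least-squares fit for the bottom outline.
    Fits A*x^2 + C*y^2 + E*y + F = 0, i.e. x^2/a^2 + (y - h)^2/b^2 = 1
    for an axis-aligned ellipse centered on the y-axis."""
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    M = np.column_stack([x * x, y * y, y, np.ones_like(x)])
    # Homogeneous least squares: smallest right singular vector
    _, _, vt = np.linalg.svd(M)
    A, C, E, F = vt[-1]
    h = -E / (2 * C)                 # vertical offset of the ellipse center
    k = E * E / (4 * C) - F          # so that A*x^2 + C*(y - h)^2 = k
    a, b = np.sqrt(k / A), np.sqrt(k / C)   # semi-axes
    return a, b, h
```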

Extend a square in world space to a cube when only screen space coordinates are available

I have a photo of a Go-board, which is basically a grid with n*n squares, each of size a.
Depending on how the image was taken, the grid can have either one vanishing point like this (n = 15, board size b = 15*a):
or two vanishing points like this (n = 9, board size b = 9*a):
So what is available to me are the four screen space coordinates of the four corners of the flat board: p1, p2, p3, p4.
What I would like to do is to calculate the corresponding four screen space coordinates q1, q2, q3, q4 of the corners of the board, if the board was moved 'upward' (perpendicular to the plane of the board) in world space by a, or in other words the coordinates on top of the board, if the board had a thickness of a.
Is the information about the four points even sufficient to calculate this?
If this is not enough information, maybe it would help to make the assumption that the distance of the camera to the center of the board is typically of the order of 1.5 or 2 times the board size b?
From my understanding, the four lines p1-q1, p2-q2, p3-q3, p4-q4 would all go through the same (yet unknown) vanishing point, located somewhere below the board.
Maybe a sufficient approximation (because typically for a Go board n=18 and therefore square size a is small in comparison to the board size) for the direction of each of the lines p1-q1, p2-q2, ... in screen space would be to simply choose a line perpendicular to the horizon (given by the two vanishing points vp1-vp2 or by p1-p2 in the case of only one vanishing point)?
Having made this approximation, still the length of the four lines p1-q1, p2-q2, p3-q3, p4-q4 would need to be calculated ...
Any hints are highly appreciated!
PS: I am using Objective-C & OpenCV
Not yet a full answer, but this might help to move forward. As MvG pointed out, 4 points alone are not enough. Luckily we know the board is a square, so even with perspective distortion the diagonals in 2D should/will intersect at the board center (unless serious fish-eye or other distortions are present in the image). Here is a test image (created with OpenGL) that I used as test input:
The grayish surface is a 2D quad drawn from the perspective-distorted 2D corner points (your input). The aqua/bluish grid is the 3D OpenGL grid I created the 2D corner points from (to check that they match). The green lines are the 2D diagonals, and the orange points are the 2D corner points and the diagonal intersection. As you can see, the 2D diagonal intersection corresponds exactly with the center of the board's middle cell.
Now we can use the ratio between the half-diagonal lengths to assume/fit the perspective. If we handle cell coordinates in the range <0,9>, we want to achieve a further division of the half diagonals like this:
I am still not sure how exactly (the linear ratio l0/(l0+l1) is not working), so I need to inspect the perspective mapping equations to find the relative ratio dependence and compute its inverse (when I have the time and mood for this).
If that succeeds, then we can compute any point along the diagonals (we want the cell edges). Once that is done, we can easily compute the visual size of any cell of size a and use the vanishing point without any 3D transform matrices at all.
In case this is not doable, there is still the option to use DIP/CV techniques to detect the cell crossings, like this:
OpenCV Birdseye view without loss of data
using just bullet #2, but for that you need to take into account the type of images you will have and adjust the detector or add preprocessing for it ...
Now back to your offsetting: you can simply offset your cells up by the visual size of the cell, like this:
And handle the left-side points (either interpolate the size or use the same as the neighboring cell). That should work unless the board is viewed at too extreme an angle.
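As a small illustration of the first step (finding the diagonal intersection from the four corner points), here is a minimal NumPy sketch using homogeneous coordinates; the function name and argument order are my own:

```python
import numpy as np

def diagonal_intersection(p1, p2, p3, p4):
    """Intersection of the two quad diagonals p1-p3 and p2-p4 (a sketch;
    assumes the four corners are given in order around the board)."""
    def homog(p):                          # lift a 2D point to homogeneous coords
        return np.array([p[0], p[1], 1.0])
    d1 = np.cross(homog(p1), homog(p3))    # line through p1 and p3
    d2 = np.cross(homog(p2), homog(p4))    # line through p2 and p4
    x, y, w = np.cross(d1, d2)             # intersection point, still homogeneous
    return x / w, y / w
```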

Drawing curve working for all the quadrants / Finding dynamic control points for the Bézier curve

The problem: if somebody taps on the angle abc as shown in fig. 1, then the curve should be drawn as shown in fig. 2 using CoreGraphics. I tried it using a Bézier curve, but shapes in different quadrants need dynamic control points, which is quite complex (I guess). Can anyone suggest a solution for this?
If I understood it right, what you need to know is how to find suitable control points in different quadrants. This link will give you exactly what you want. If you are looking to draw cubic Bézier curves, then page 18 is for you. However, I recommend you read it completely to get a better understanding of Bézier curves.
The formulas given in this paper will help you draw elliptical arcs accurately for one quadrant. You can define your quadrant using angles. To find the control points using this paper, you need to provide the following data:
start and end angle (which will define your quadrant)
radii of curve according to your figure
Instead of going through the math, I figured out how to draw the curve correctly for all quadrants programmatically.
The algorithm for this is as follows:
(This is an algorithm to find control points for the Bézier curve that work in all quadrants, i.e. you get dynamic control points for the Bézier curve.)
Problem: Given 3 points a, b, c, the task is to draw the curve at the angle abc (curve structure is fixed as shown in the figure in the question).
1. Take all 3 points a, b, c as input to the function.
2. Translate all 3 points a, b, c so that point a is at the origin.
3. Find whether the 3rd point c lies to the left or to the right.
4. Rotate the points so that the 2nd point b coincides with the x-axis.
5. After step 4, you are in the zero position.
(Here you can choose the control points for the Bézier curve however you like. You do not have to solve any relation for the control points; you can set them using simple add/subtract math only.) The control points obtained here will work for all quadrants.
6. After step 5, we have all the control points for the Bézier curve; now take all those points back to the original position:
a. First rotate point b and the two control points back (by the rotation angle of b from step 4).
b. Translate all points back to their original location (i.e. with respect to point a, reversing the translation from step 2).
Now you have the required control points for a cubic Bézier, suitable for all quadrants.
7. Draw the curve using the Bézier curve function.
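A small Python sketch of these steps (the question is about CoreGraphics, so this only illustrates the geometry; the particular control-point offsets chosen in step 5 are arbitrary examples):

```python
import math

def curve_control_points(a, b, c, offset=30.0):
    """Sketch of the algorithm above: two cubic-Bezier control points for the
    curve at angle abc, computed in the rotated/translated "zero position" and
    then mapped back. Points are (x, y) tuples."""
    # Step 2: translate so that a is at the origin
    bx, by = b[0] - a[0], b[1] - a[1]
    cx, cy = c[0] - a[0], c[1] - a[1]
    # Step 3: does c lie to the left or right of a->b? (sign of the cross product)
    side = 1.0 if (bx * cy - by * cx) >= 0 else -1.0
    # Step 4: rotation angle that brings b onto the positive x-axis
    theta = math.atan2(by, bx)
    b_len = math.hypot(bx, by)
    # Step 5: pick control points in the zero position with simple add/subtract
    cp1 = (b_len - offset, 0.0)
    cp2 = (b_len, side * offset)
    # Step 6: rotate back by theta and translate back by a
    def restore(p):
        return (p[0] * math.cos(theta) - p[1] * math.sin(theta) + a[0],
                p[0] * math.sin(theta) + p[1] * math.cos(theta) + a[1])
    return restore(cp1), restore(cp2)
```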

Given a set of points to define a shape, how can I contract this shape like Photoshop's Selection>Contract

I have a set of points to define a shape. These points are in order and essentially are my "selection".
I want to be able to contract this selection by an arbitrary amount to get a smaller version of my original shape.
In a basic example with a triangle, the points are simply moved along their normals, which are defined by the points to the left and right of the point in question.
Eventually all 3 points will meet and form one point, but until then they will make a smaller and smaller triangle.
For more complex shapes, when moving the individual points inward, they may pass through the outer edge of the shape, resulting in weird artifacts. Obviously I'll need to cull these points and remove them from the array.
Any help in exactly how I can do that would be greatly appreciated.
Thanks!
This is just an idea but couldn't you find the center of mass of the object, create a vector from the center to each point, and move each point along this vector?
To find the center of mass would of course involve averaging the x and y coordinates. Getting a vector is as simple as subtracting the center point from the point in question. Normalizing and scaling are common vector operations that can be found with Google.
EDIT
Another way to interpret what you're asking is that you want to erode your collection of points, as in morphological erosion. This is typically applied to binary images, but you can slightly modify the concept to work with a collection of points. Essentially, you need to write a function that, given a point, returns true (black) or false (white) depending on whether that point is inside or outside the shape defined by your points. You'd have to look up how to do that for shapes that aren't convex (it's harder but not impossible).
Now, obviously, every single one of your actual points will return false because they're all on the border (by definition). However, you now have a neighborhood of points around your point of interest that defines where "inside" and "outside" are. Average all of the "inside" points and move your actual point along the vector from itself towards this average. You could play with different erosion kernels to see what works best.
You could even work with a kernel with floating-point weights instead of either/or values, which will affect your average calculation in proportion to the weights. With this, you could approximate a circular kernel with a low number of points. Try the simpler method first.
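A rough Python sketch of that erosion-style idea, using a ray-casting inside test and a square neighborhood (the radius and step size are illustrative, not prescribed by the answer):

```python
def inside(point, polygon):
    """Ray-casting point-in-polygon test (works for non-convex polygons too)."""
    x, y = point
    n = len(polygon)
    result = False
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            if x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                result = not result
    return result

def erode_point(p, polygon, radius=3):
    """Move point p one step toward the average of the 'inside' samples in a
    square neighborhood around it, as described above."""
    inside_pts = [(p[0] + dx, p[1] + dy)
                  for dx in range(-radius, radius + 1)
                  for dy in range(-radius, radius + 1)
                  if inside((p[0] + dx, p[1] + dy), polygon)]
    if not inside_pts:
        return p
    ax = sum(x for x, _ in inside_pts) / len(inside_pts)
    ay = sum(y for _, y in inside_pts) / len(inside_pts)
    # Step halfway toward the average of the inside samples
    return (p[0] + (ax - p[0]) * 0.5, p[1] + (ay - p[1]) * 0.5)
```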
Find the selection center (as suggested by colithium)
Map the selection points to the coordinate system with the selection center at (0,0). For example, if the selection center is at (150,150), and a given selection point is at (125,75), the mapped position of the point becomes (-25,-75).
Scale the mapped points (multiply X and Y by something in the range of 0.0..1.0)
Remap the points back to the original coordinate system
Only simple maths required, no need to muck about normalizing vectors.
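In code, that scale-about-center recipe is just a few lines (a sketch; the scale factor is whatever amount of contraction you want):

```python
def contract_selection(points, scale=0.9):
    """Shrink a selection toward its center, as in the steps above.
    `points` is a list of (x, y) tuples in order; `scale` is in 0.0..1.0."""
    # Selection center = average of the coordinates
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    # Map to center-origin coordinates, scale, and map back in one go
    return [(cx + (x - cx) * scale, cy + (y - cy) * scale) for x, y in points]
```

Note that this scales the whole selection uniformly rather than insetting every edge by a fixed pixel distance, which is what makes it so simple.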
