Best polygon fitting into points - image-processing

I'm looking for an algorithm to find a polygon that can represent a set of points in 2D space. Specifically, given a set of points like this:
It should ideally produce something similar to this:
(The arrows are segments)
Basically, the output would be a set of segments that "best" capture the features of the points. The algorithm could take some parameters to control the number of output segments.
I currently have no idea what kind of algorithm I'm looking for. Any papers or advice are appreciated.

This is a possible algorithm.
For every point, look at the two points closest to it; they become connected.
Then use Douglas-Peucker to simplify the edges.
Essentially you first create a polygon containing all the points, and then try to eliminate points whose removal doesn't change the shape too much.
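In Python/OpenCV terms, a minimal sketch of this idea might look like the following (assuming the points form a single closed ring; it uses greedy nearest-neighbour chaining to build the initial polygon and cv2.approxPolyDP for the Douglas-Peucker step, with epsilon acting as the parameter that controls the number of output segments):

    import numpy as np
    import cv2

    def fit_polygon(points, epsilon=5.0):
        """Chain points into a ring, then simplify with Douglas-Peucker."""
        points = np.asarray(points, dtype=np.float32)
        order = [0]
        unvisited = list(range(1, len(points)))
        while unvisited:
            last = points[order[-1]]
            # greedily hop to the closest remaining point
            dists = np.linalg.norm(points[unvisited] - last, axis=1)
            order.append(unvisited.pop(int(np.argmin(dists))))
        ring = points[order].reshape(-1, 1, 2)
        # larger epsilon -> fewer segments in the output polygon
        return cv2.approxPolyDP(ring, epsilon, True).reshape(-1, 2)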

Related

How to normalize position of the elements on the picture [OpenCV]

I am currently working on a program which could help at my work. I'm trying to use machine learning for a classification task. The problem is that I don't have enough samples for training the model, and augmentation is something I'm trying to avoid because of hardware limitations (not enough RAM), both on my company laptop and on Google Colab. So I decided to try to somehow normalize the position of the elements, so the differences would be visible to the machine even without a large number of different samples. Unfortunately, I'm now struggling with how to normalize those pictures.
Element 1a:
Element 1b:
Element 2a:
Element 2b:
Elements 1a and 1b are the same type, and 2a and 2b are the same type. Is there a way to somehow normalize the position of those pictures (something like position 0) which would help the algorithm see the differences between them? I've tried using cv2.minAreaRect to get the rotated bounding rectangle, rotating the elements and cropping the unneeded area, but unfortunately those elements can have different widths, so after scaling them down the contours are deformed unevenly. Then I tried to find the symmetry axis and use it to do a proper crop after rotation, but the results still didn't meet my expectations. I was thinking of adding more normalization points, like this:
Normalization Points:
And using these points to normalize the position of the rest of my elements. But perspective transform takes only four points, and with four points it's also not a very good methodology. Maybe you guys know a way to move those elements so that they all end up in the same position.
Seeing the images, I believe that the transformation between two pictures is either an isometry (translation + rotation) or a similarity (translation + rotation + scaling). These can be determined with just two points. (Perspective takes four points but I think that this is overkill.)
But for good accuracy, you must make sure that the points are found reliably and precisely. In the first place, you need to guess which features of the shapes are repeatable from one sample to the next.
For example, you might estimate that the straight edges are always in the same relative position. In such a case, I would recommend finding two points on some edges, drawing lines between them, and finding the intersections of those lines.
In the illustration, you find edge points along the red profiles, and from them you draw the green lines. They intersect in the yellow points.
For increased accuracy, you can use a least-squares approach to find a best fit on more than two points.
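As a rough sketch of that least-squares step, OpenCV's estimateAffinePartial2D fits exactly this kind of similarity (translation + rotation + uniform scaling) to two or more matched points; the coordinates and file name below are hypothetical placeholders:

    import numpy as np
    import cv2

    # matched landmarks (e.g. the yellow intersection points) in both images
    src_pts = np.float32([[102, 40], [310, 45], [210, 280]])
    dst_pts = np.float32([[100, 50], [305, 52], [205, 290]])

    # least-squares similarity: translation + rotation + uniform scaling
    M, inliers = cv2.estimateAffinePartial2D(src_pts, dst_pts)

    img = cv2.imread("element_1a.png")  # hypothetical input image
    aligned = cv2.warpAffine(img, M, (img.shape[1], img.shape[0]))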

Homography and projective transformation

I'm trying to write code that performs a projective transformation, but with more than 4 key points. I found this helpful guide, but it uses 4 reference points:
https://math.stackexchange.com/questions/296794/finding-the-transform-matrix-from-4-projected-points-with-javascript
I know that MATLAB has a function cp2tform that handles this, but I haven't found a way to do it myself so far.
Can anyone give me some guidance on how to do so? I can solve the equations using least squares, but I'm stuck since I have a matrix that is larger than 3×3 and I can't multiply the homogeneous coordinates.
Thanks
If you have more than four control points, you have an overdetermined system of equations. There are two possible scenarios. Either your points are all compatible with the same transformation. In that case, any four points can be used, and the rest will match the transformation exactly. At least in theory. For the sake of numeric stability you'd probably want to choose your points so that they are far from being collinear.
Or your points are not all compatible with a single projective transformation. In this case, all you can hope for is an approximation. If you want the best approximation, you'll have to be more specific about what “best” means, i.e. some kind of error measure. Measuring things in a projective setup is inherently tricky, since there are usually a lot of arbitrary decisions involved.
What you can try is fixing one matrix entry (e.g. the lower right one to 1), then writing the conditions for the remaining 8 coordinates as a system of linear equations, and performing a least-squares approximation. But the choice of matrix representative (i.e. fixing one entry here) affects the least-squares error measure while it has no effect on the geometric meaning, so this is a pretty arbitrary choice. If the lower right entry of the desired matrix should happen to be zero, your computation will run into numeric problems due to overflow.
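A rough sketch of exactly this construction (fix the lower-right entry to 1, then solve for the remaining 8 unknowns by linear least squares); note that cv2.findHomography does the same job if you'd rather not write it yourself:

    import numpy as np

    def fit_homography(src, dst):
        """src, dst: (N, 2) arrays of matched points, N >= 4."""
        A, b = [], []
        for (x, y), (u, v) in zip(src, dst):
            # u = (h11*x + h12*y + h13) / (h31*x + h32*y + 1), same for v,
            # rearranged into equations linear in the 8 unknowns
            A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
            A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
        h, *_ = np.linalg.lstsq(np.asarray(A, float),
                                np.asarray(b, float), rcond=None)
        return np.append(h, 1.0).reshape(3, 3)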

find curvature at depth map

I want to find the curvature of a depth map.
Look at the picture.
This is an example of curvature.
Maybe if I represent the image as a function and take its second derivative I can find the curvatures, but I couldn't implement it. (I tried the Sobel operator from OpenCV.)
Is there a way to do this?
PS: Sorry for my writing mistakes. English is not my native language.
That is not a depth map, it is a point cloud (but I assume it is generated from a single depth map z = f(x, y)).
What curvature do you want to estimate? Mean, Gaussian, the whole 2nd fundamental form?
See, e.g., here for definitions. There are also recent references on fast estimation methods.
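For the simple case where the data really is a single-valued surface z = f(x, y), here is a minimal sketch that estimates the derivatives with np.gradient and plugs them into the standard Monge-patch formulas for Gaussian and mean curvature:

    import numpy as np

    def curvatures(z):
        """Gaussian (K) and mean (H) curvature of a depth map z = f(x, y)."""
        zy, zx = np.gradient(z)          # first partial derivatives
        zyy, zyx = np.gradient(zy)       # second partial derivatives
        zxy, zxx = np.gradient(zx)
        g = 1.0 + zx**2 + zy**2          # 1 + |grad f|^2
        K = (zxx * zyy - zxy**2) / g**2
        H = ((1 + zx**2) * zyy - 2 * zx * zy * zxy
             + (1 + zy**2) * zxx) / (2 * g**1.5)
        return K, H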

Metric for ellipse fitting in OpenCV

OpenCV has a nice built-in ellipse-fitting algorithm called fitEllipse(const Mat& points).
However, it has some major shortcomings, limiting its usefulness. For example, it already requires selected points, so I already have to do a feature extraction myself. HoughCircles detects circles on a given image, pity there is no HoughEllipses.
The other major shortcoming, which is at the center of my question, is that it does not provide any metric of how accurate the fit was. It returns the ellipse that best fits the given points, even if the shape does not even remotely look like an ellipse. Is there a way to get the estimated error from the algorithm? I would like to use it as a threshold to filter out shapes which are not even close to being ellipses.
I'm asking because maybe there is a simple solution before I try to reinvent the wheel and write my very own fitEllipse function.
If you don't mind getting your hands dirty, you could actually modify the source code for fitEllipse(). The fitEllipse() function uses least-squares to determine the likely ellipses, and the least-squares solution is a tangible distance metric, which is what you want.
If that is something you're willing to do, it would be a very simple code change. Simply add a float whose value is passed back after the function call, where the float stores the current best least-squares value.
fitEllipse gives you the ellipse as a cv::RotatedRect and so you know the angle of rotation of the ellipse, its center and its two axes.
You can compute the sum of the square of the distances between your points and the ellipse, that sum is the metric you are looking for.
The distance between a point and an ellipse is described here http://www.geometrictools.com/Documentation/DistancePointEllipseEllipsoid.pdf and the code is here http://www.geometrictools.com/GTEngine/Include/Mathematics/GteDistPointHyperellipsoid.h
You need to go from OpenCV cv::RotatedRect to Ellipse2 of Geometric Tools Engine and then you can compute the distance.
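If the exact point-to-ellipse distance from the Geometric Tools code is more machinery than you need, a cruder sketch is to sample the fitted ellipse densely and use the distance to the nearest sample as an approximation:

    import numpy as np
    import cv2

    def ellipse_fit_error(points):
        """Approximate mean squared distance from points to the fitted ellipse."""
        pts = np.asarray(points, dtype=np.float32)
        (cx, cy), (w, h), angle = cv2.fitEllipse(pts)  # needs >= 5 points
        # sample the ellipse outline roughly every degree
        samples = cv2.ellipse2Poly((int(cx), int(cy)),
                                   (int(w / 2), int(h / 2)),
                                   int(angle), 0, 360, 1).astype(np.float32)
        d = np.linalg.norm(pts[:, None, :] - samples[None, :, :], axis=2)
        return float(np.mean(d.min(axis=1) ** 2))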
Why don't you do a findContours() to reduce the memory space required? There's your selected points structure right there. If you want to further simplify you can run a ConvexHull() or ApproxPoly() on that. Fit the ellipse to those points, and then I suppose you can check similarity between the two structures to get some kind of estimate. A difference operator between the two Mats would be a (very) rough estimate?
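A rough sketch of that pipeline, assuming a hypothetical 0/255 binary mask as input (the file name and the mismatch measure are illustrations, not a definitive metric):

    import numpy as np
    import cv2

    mask = cv2.imread("shape_mask.png", cv2.IMREAD_GRAYSCALE)  # hypothetical
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    pts = max(contours, key=cv2.contourArea)   # largest contour = our shape
    ellipse = cv2.fitEllipse(pts)

    # draw the fitted ellipse filled, then compare pixel-wise with the mask
    render = np.zeros_like(mask)
    cv2.ellipse(render, ellipse, 255, -1)
    mismatch = np.count_nonzero(cv2.absdiff(render, mask)) / mask.size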
Depending on the application, you might be able to use CAMShift (or mean shift), which fits an ellipse to a region with similar colors.

How to create a single line/edge from a set of superimposing lines/edges in MATLAB?

I have a set of edges detected from an image using the edge detector in MATLAB's Computer Vision Toolbox. All these edges (18 of them) form just two lines. How do I get the lines from these edges? All I am interested in is finding the intersection point of these two lines.
The edges look like this:
and the Hough lines look like this:
Peter Kovesi's CV website contains a great set of functions for line detection. Look at this example of using them.
Since you mentioned that the intention is to find the "center point", here is a possible way (though not MATLAB-specific):
Clarifications: when you mention
All these edges (18 of them) just form two lines
It's actually two components or contours that are formed. The Hough line transform will give you straight lines, which is not exactly what you wanted, it seems.
Also, the two "lines" or "contours" do not intersect, at least from what's seen in the picture. If you want to find the point of closest approach, traverse each point on one contour and check the distance between that point and the points on the second contour. Find the minimum distance for each point on the contour, then select the minimum of those.
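A brute-force sketch of that closest-approach search (fine for contours with up to a few thousand points; contour_a and contour_b are hypothetical point arrays):

    import numpy as np

    def closest_approach(contour_a, contour_b):
        """Closest pair of points between two contours."""
        a = np.asarray(contour_a, dtype=float).reshape(-1, 2)
        b = np.asarray(contour_b, dtype=float).reshape(-1, 2)
        # all pairwise distances, then the overall minimum
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
        i, j = np.unravel_index(np.argmin(d), d.shape)
        return a[i], b[j], d[i, j]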
If you meant the intersection of two straight lines, simply solve the two line equations (you can get them from the end-points of the lines).
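A minimal sketch of that (in Python/OpenCV for illustration): fit a least-squares line to each contour with cv2.fitLine, then intersect the two lines analytically (this assumes the lines are not parallel):

    import numpy as np
    import cv2

    def line_intersection(contour_a, contour_b):
        """Intersection of the least-squares lines through two contours."""
        # fitLine returns (vx, vy, x0, y0): unit direction + a point on the line
        vx1, vy1, x1, y1 = cv2.fitLine(contour_a, cv2.DIST_L2, 0, 0.01, 0.01).ravel()
        vx2, vy2, x2, y2 = cv2.fitLine(contour_b, cv2.DIST_L2, 0, 0.01, 0.01).ravel()
        # solve p1 + t*d1 = p2 + s*d2 (singular if the lines are parallel)
        A = np.array([[vx1, -vx2], [vy1, -vy2]])
        rhs = np.array([x2 - x1, y2 - y1])
        t, s = np.linalg.solve(A, rhs)
        return np.array([x1 + t * vx1, y1 + t * vy1])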
