OpenCV: divide a contour in two sections

I have a contour in OpenCV with a convexity defect (the one in red) and I want to cut that contour into two parts with a horizontal line through that point. Is there any way to do it, so that I just get the contour marked in yellow?
Image describing the problem

That's an interesting question. There are several solutions, depending on how the concavity points are distributed in your image.
1) If such points do not occur at the bottom of the contour (as in your simple example), here is pseudo-code (a code sketch follows the steps):
Find the convex hull C of the image I.
Subtract I from C; that gives you the concavity areas (like the black triangle between the two white triangles in your example).
The point with the minimum y value in that area gives you the horizontal line to cut along.
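A minimal OpenCV sketch of these steps, assuming a single external contour from cv::findContours; all names are illustrative. Note that in OpenCV's image coordinates y grows downward, so for a notch entering from the top the cut runs through the defect pixel deepest into the shape:

#include <opencv2/opencv.hpp>
#include <algorithm>
#include <vector>

// Case 1: the concavity region is (filled convex hull) minus (filled
// contour); the deepest pixel of that region gives the y of the cut.
int findCutY(const std::vector<cv::Point>& contour, const cv::Size& imgSize)
{
    std::vector<std::vector<cv::Point>> wrap{contour};

    cv::Mat filled = cv::Mat::zeros(imgSize, CV_8U);   // image I
    cv::drawContours(filled, wrap, 0, cv::Scalar(255), cv::FILLED);

    std::vector<cv::Point> hull;
    cv::convexHull(contour, hull);
    cv::Mat hullImg = cv::Mat::zeros(imgSize, CV_8U);  // image C
    cv::fillConvexPoly(hullImg, hull, cv::Scalar(255));

    cv::Mat concavity = hullImg - filled;              // C - I

    std::vector<cv::Point> pts;
    cv::findNonZero(concavity, pts);
    if (pts.empty())
        return -1;                                     // no concavity found

    int cutY = 0;                                      // deepest defect pixel
    for (const auto& p : pts)
        cutY = std::max(cutY, p.y);
    return cutY;
}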
2) If such points can occur anywhere, you need a more intelligent algorithm whose cut lines are not constrained to be horizontal (because the minimum-y point of that difference would just be the minimum y of the image). You can find the "inner-most" corner points and connect them to each other, recursively cutting the remainder in the y-, x+, y+, x- directions. It really depends on the specs of your input.

Related

Extend a square in world space to a cube when only screen space coordinates are available

I have a photo of a Go-board, which is basically a grid with n*n squares, each of size a.
Depending on how the image was taken, the grid can have either one vanishing point like this (n = 15, board size b = 15*a):
or two vanishing points like this (n = 9, board size b = 9*a):
So what is available to me are the four screen space coordinates of the four corners of the flat board: p1, p2, p3, p4.
What I would like to do is to calculate the corresponding four screen space coordinates q1, q2, q3, q4 of the corners of the board, if the board was moved 'upward' (perpendicular to the plane of the board) in world space by a, or in other words the coordinates on top of the board, if the board had a thickness of a.
Is the information about the four points even sufficient to calculate this?
If this is not enough information, maybe it would help to make the assumption that the distance of the camera to the center of the board is typically of the order of 1.5 or 2 times the board size b?
From my understanding, the four lines p1-q1, p2-q2, p3-q3, p4-q4 would all go through the same (yet unknown) vanishing point, located somewhere below the board.
Maybe a sufficient approximation for the direction of each of the lines p1-q1, p2-q2, ... in screen space would be to simply choose a line perpendicular to the horizon (given by the two vanishing points vp1-vp2, or by p1-p2 in the case of only one vanishing point)? This should be reasonable because typically for a Go board n = 18, so the square size a is small in comparison to the board size b.
Having made this approximation, the lengths of the four lines p1-q1, p2-q2, p3-q3, p4-q4 would still need to be calculated ...
Any hints are highly appreciated!
PS: I am using Objective-C & OpenCV
Not yet a full answer, but this might help to move forward. As MvG pointed out, 4 points alone are not enough. Luckily we know the board is a square, so even with perspective distortion the diagonals in 2D should/will intersect at the board center (unless serious fish-eye or other distortions are present in the image). Here is a test image (created with OpenGL) that I used as test input:
The grayish surface is a 2D quad drawn using the 2D perspective-distorted corner points (your input). The aqua/bluish grid is the 3D OpenGL grid I created the 2D corner points with (to check that they match). The green lines are the 2D diagonals, and the orange points are the 2D corner points and the diagonals' intersection. As you can see, the 2D diagonal intersection corresponds exactly with the center of the 3D board's middle cell.
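A short sketch of that first step, assuming the corners p1..p4 are ordered around the board so that p1-p3 and p2-p4 are the diagonals (standard 2D line intersection; the names are illustrative):

#include <opencv2/core.hpp>
#include <cmath>

// Intersection of the board diagonals p1-p3 and p2-p4, which per the answer
// coincides with the board's middle cell center even under perspective.
cv::Point2f diagonalIntersection(cv::Point2f p1, cv::Point2f p2,
                                 cv::Point2f p3, cv::Point2f p4)
{
    cv::Point2f d1 = p3 - p1;                 // direction of diagonal p1->p3
    cv::Point2f d2 = p4 - p2;                 // direction of diagonal p2->p4
    float denom = d1.x * d2.y - d1.y * d2.x;  // 2D cross product
    if (std::fabs(denom) < 1e-8f)
        return cv::Point2f(-1.f, -1.f);       // parallel: degenerate quad

    cv::Point2f v = p2 - p1;
    float t = (v.x * d2.y - v.y * d2.x) / denom;
    return p1 + d1 * t;                       // point on diagonal p1->p3
}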
Now we can use the ratio between half diagonal lengths to assume/fit the perspective. If we handle cell coordinates in range <0,9> we want to achieve further division of halve diagonals like this:
I am still not sure how exactly (linear ratio l0/(l0+l1) is not working) so I need to inspect perspective mapping equations to find relative ratio dependence and compute inverse (when I have time mood for this).
If that will be a success than we can compute any points along the diagonals (we want the cell edges). If that is done from that we can easily compute visual size of any cell size a and use the vanishing point without any 3D transform matrices at all.
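As a hedged alternative to deriving that ratio by hand: a homography fitted from board/cell coordinates to the four observed 2D corners maps any grid point directly, diagonal points included. The corner ordering p1..p4 and all names here are assumptions, not part of the answer above:

#include <opencv2/opencv.hpp>
#include <vector>

// Map a point given in cell coordinates <0,n> to screen space through a
// homography from the ideal board square to the observed corners.
cv::Point2f gridToScreen(float gx, float gy, float n,
                         const std::vector<cv::Point2f>& corners /* p1..p4 */)
{
    std::vector<cv::Point2f> board = { {0.f, 0.f}, {n, 0.f}, {n, n}, {0.f, n} };
    cv::Mat H = cv::getPerspectiveTransform(board, corners);

    std::vector<cv::Point2f> in = { {gx, gy} }, out;
    cv::perspectiveTransform(in, out, H);  // applies the 3x3 homography
    return out[0];
}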
In case this is not doable, there is still the option of using DIP/CV techniques to detect the cell crossings, like this:
OpenCV Birdseye view without loss of data
using just bullet #2, but for that you need to take into account the type of images you will have and adjust the detector or add preprocessing for it ...
Now back to your offsetting: you can simply offset your cells up by the visual size of the cell, like this:
And handle the left-side points (either interpolate the size or use the same as the neighboring cell). That should work unless too weird angles of the board are used.

Calculation of center point for the localization of robot in 3D data

I am trying to find a reliable method to calculate the corner points of a container. From these corner points, the idea is to calculate the center point of the container for the localization of the robot; the calculated center point will be the destination the robot moves to in order to pick the container. For this I am looking for any suggestions on how to calculate the corner points, or possibly how to calculate the center point directly. Up to this point, the PCL library (C/C++) has been used for processing the 3D data.
The image below is the screenshot of the container.
Thanks in advance.
afterApplyingPassthrough
I did the following things:
I binarized the image (black pixels = 0, green pixels = 1),
inverted the image (black pixels = 1, green pixels = 0),
eroded the image with a 3x3 kernel N times and dilated it with the same kernel M times.
Left: N=2, M=1; Right: N=6, M=6
After that:
I computed the contours of all non-zero areas and
removed the contour that surrounded the entire image.
These are the contours that remained:
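A minimal OpenCV sketch of the pipeline so far; isolating "green pixels" by thresholding the green channel is an assumption about the sample image, as are the threshold value and all names:

#include <opencv2/opencv.hpp>
#include <vector>

// Binarize, invert, erode N times, dilate M times, find the contours and
// drop the one that surrounds the entire image.
std::vector<std::vector<cv::Point>> extractBlobs(const cv::Mat& bgr, int N, int M)
{
    std::vector<cv::Mat> ch;
    cv::split(bgr, ch);
    cv::Mat bin;
    cv::threshold(ch[1], bin, 127, 255, cv::THRESH_BINARY);  // green -> 255

    cv::bitwise_not(bin, bin);  // invert: black pixels become foreground

    cv::Mat k = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(3, 3));
    cv::erode(bin, bin, k, cv::Point(-1, -1), N);
    cv::dilate(bin, bin, k, cv::Point(-1, -1), M);

    std::vector<std::vector<cv::Point>> contours, kept;
    cv::findContours(bin, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    for (auto& c : contours) {  // drop the near-image-sized contour
        cv::Rect r = cv::boundingRect(c);
        if (r.width < bin.cols - 2 || r.height < bin.rows - 2)
            kept.push_back(std::move(c));
    }
    return kept;
}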
I do not know what a "typical" input image looks like in your case. Since I only have access to one sample image, I would rather not speculate about a "general solution" that will be suitable for you. But to solve this particular case, you could analyze every contour in the following way:
compute the rotated rectangle that fits best around your contour (you need something similar to minAreaRect from OpenCV)
compute the areas of the rectangle and of the contour interior
if the difference between the contour area and the area of the rotated bounding rectangle is small, the contour has an approximately rectangular shape
find the contour that is both rectangular and satisfies some other condition (for example: the typical area of the container). Assume that this one belongs to the container and compute its center, as in the sketch below.
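A sketch of that test with cv::minAreaRect and cv::contourArea; the 0.85 rectangularity threshold and the area bounds are placeholder assumptions:

#include <opencv2/opencv.hpp>
#include <vector>

// A contour is "approximately rectangular" if it nearly fills its
// minimum-area rotated bounding rectangle.
bool looksRectangular(const std::vector<cv::Point>& contour,
                      double minArea = 1000.0, double maxArea = 1e6)
{
    double area = cv::contourArea(contour);
    if (area < minArea || area > maxArea)
        return false;                         // the "other condition"

    cv::RotatedRect box = cv::minAreaRect(contour);
    double boxArea = box.size.width * box.size.height;
    return boxArea > 0 && area / boxArea > 0.85;
}

For the contour that passes, cv::minAreaRect(contour).center would be the container center the robot drives to.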
I am not claiming that this is a solution that will work well in real world scenarios. It is also not fast. You should view it as a "sketch" that shows how to extract some useful information.
I assume the wheels keep the cart at a known offset from the floor and that you can identify the floor. Filter out all points which are too close to the floor (this will remove the wheels and everything but the cart, which will help limit the data and simplify later steps).
If you can isolate the cart, you could apply a simple average point (centroid). Alternatively, if that is not precise enough, you could try finding the bounding box of the isolated cart (min/max in the primary directions) and then take the centroid of that bounding box (this should be more accurate, but will still need a slight vertical offset due to the top handles).
If you cannot isolate the cart, or if the other methods are not working well, you could try using PCL sample consensus, specifically SACMODEL_LINE. This is a more involved strategy, but it gives very solid results: run through and find the best line, subtract its members from the cloud, and repeat to find the next best line. After you have your four primary cart lines, use their parameters to find your centroid. This would also be robust against random items being in or on the cart, as well as against carts of various sizes (assuming they always have linear, perpendicular walls).
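A hedged PCL sketch of that line-by-line strategy; the 1 cm distance threshold and the fixed count of four lines are assumptions:

#include <pcl/point_types.h>
#include <pcl/segmentation/sac_segmentation.h>
#include <pcl/filters/extract_indices.h>
#include <vector>

// Repeatedly fit the best RANSAC line and remove its inliers, collecting
// the four dominant cart walls.
std::vector<pcl::ModelCoefficients> findCartLines(
    pcl::PointCloud<pcl::PointXYZ>::Ptr cloud)
{
    std::vector<pcl::ModelCoefficients> lines;
    pcl::SACSegmentation<pcl::PointXYZ> seg;
    seg.setModelType(pcl::SACMODEL_LINE);
    seg.setMethodType(pcl::SAC_RANSAC);
    seg.setDistanceThreshold(0.01);  // 1 cm inlier band (assumed)

    pcl::ExtractIndices<pcl::PointXYZ> extract;
    for (int i = 0; i < 4; ++i) {    // the four primary cart lines
        pcl::ModelCoefficients coeffs;
        pcl::PointIndices::Ptr inliers(new pcl::PointIndices);
        seg.setInputCloud(cloud);
        seg.segment(*inliers, coeffs);
        if (inliers->indices.empty())
            break;
        lines.push_back(coeffs);

        extract.setInputCloud(cloud);  // subtract the members of this line
        extract.setIndices(inliers);
        extract.setNegative(true);
        pcl::PointCloud<pcl::PointXYZ>::Ptr rest(new pcl::PointCloud<pcl::PointXYZ>);
        extract.filter(*rest);
        cloud = rest;
    }
    return lines;
}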

Detecting incomplete rectangles (missing corners / short edges) in OpenCV

I've been working off a variant of the OpenCV squares sample to detect rectangles. It's working fine for closed rectangles, but I was wondering what approaches I could take to detect rectangles that have openings, i.e. missing corners or lines that are too short.
I perform some dilation, which closes small gaps but not these larger ones.
I considered using a convex hull or bounding rect to generate a contour for comparison but since the edges of the rectangle are disconnected, each would read as a separate contour.
I think the first step is to detect which lines are candidates for forming a complete rectangle, and then perform some sort of line extrapolation. This seems promising, but my rectangle edges won't lie perfectly horizontally or vertically.
I'm trying to detect the three leftmost rectangles in this image:
Perhaps this paper is of interest? Rectangle Detection based on a Windowed Hough Transform
Basically, take the Hough line transform of the image. You will get maxima at the locations in (theta, rho) space that relate to the places where there are lines. The larger the value, the longer/straighter the line. Maybe apply a threshold to keep only the best lines. Then look for pairs of lines which are:
1) parallel: the maxima occur at similar theta values
2) of similar length: the values of the maxima are similar
3) orthogonal to another pair of lines: the theta values are 90 degrees away from the other pair's theta values
There are some more details in the paper, such as doing the transform in a sliding window, and then using an error metric to consolidate multiple matches.
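A minimal sketch of the parallel-pair test (1) using cv::HoughLines on an edge image; the accumulator threshold and the 5-degree tolerance are placeholder assumptions. Note that cv::HoughLines does not expose the accumulator values, so the length comparison in (2) would need the windowed variant from the paper:

#include <opencv2/opencv.hpp>
#include <cmath>
#include <vector>

// Pair up Hough lines with similar theta (i.e. nearly parallel).
std::vector<std::pair<cv::Vec2f, cv::Vec2f>> parallelPairs(const cv::Mat& edges)
{
    std::vector<cv::Vec2f> lines;  // each entry is (rho, theta)
    cv::HoughLines(edges, lines, 1, CV_PI / 180, 80 /* assumed threshold */);

    std::vector<std::pair<cv::Vec2f, cv::Vec2f>> pairs;
    const float dTheta = CV_PI / 36;  // ~5 degrees of angular tolerance
    for (size_t i = 0; i < lines.size(); ++i)
        for (size_t j = i + 1; j < lines.size(); ++j)
            if (std::fabs(lines[i][1] - lines[j][1]) < dTheta)
                pairs.push_back(std::make_pair(lines[i], lines[j]));
    return pairs;
}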

How to compute the overlapping ratio of two rotated rectangles?

Given two rectangles, we know the positions of the four corners, the widths, heights, and angles.
How can we compute the overlapping ratio of these two rectangles?
Can you please help me out?
A convenient way is the Sutherland-Hodgman polygon clipping algorithm. It works by clipping one of the polygons with the four supporting lines (half-planes) of the other. In the end you get the intersection polygon (at worst an octagon) and find its area by the polygon area formula.
You'll make clipping easier by counter-rotating the polygons around the origin so that one of them becomes axis-parallel. This won't change the area.
Note that this approach generalizes easily to two general convex polygons, taking O(N·M) operations. G.T. Toussaint, using the Rotating Caliper principle, reduced the workload to O(N+M), and B. Chazelle & D.P. Dobkin showed that a nonempty intersection can be detected in O(log(N+M)) operations. This shows that there is probably a little room for improvement over the S-H clipping approach, even though N=M=4 is a tiny problem.
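A compact sketch of that approach, assuming both rectangles are given as counter-clockwise vertex lists (all names are illustrative):

#include <opencv2/core.hpp>
#include <cmath>
#include <vector>

using Poly = std::vector<cv::Point2f>;

// 2D cross product: > 0 if p lies to the left of the edge e0->e1.
static float side(const cv::Point2f& e0, const cv::Point2f& e1, const cv::Point2f& p)
{
    return (e1.x - e0.x) * (p.y - e0.y) - (e1.y - e0.y) * (p.x - e0.x);
}

// Clip a polygon against the half-plane to the left of edge e0->e1.
static Poly clipEdge(const Poly& poly, const cv::Point2f& e0, const cv::Point2f& e1)
{
    Poly out;
    for (size_t i = 0; i < poly.size(); ++i) {
        cv::Point2f cur = poly[i];
        cv::Point2f nxt = poly[(i + 1) % poly.size()];
        float sc = side(e0, e1, cur), sn = side(e0, e1, nxt);
        if (sc >= 0)
            out.push_back(cur);                    // cur is inside: keep it
        if ((sc >= 0) != (sn >= 0)) {              // the edge crosses the line
            float t = sc / (sc - sn);              // where the crossing occurs
            out.push_back(cur + (nxt - cur) * t);  // add the intersection
        }
    }
    return out;
}

// Shoelace formula for the area of a simple polygon.
static float polyArea(const Poly& p)
{
    float a = 0.f;
    for (size_t i = 0; i < p.size(); ++i) {
        const cv::Point2f& u = p[i];
        const cv::Point2f& v = p[(i + 1) % p.size()];
        a += u.x * v.y - u.y * v.x;
    }
    return std::fabs(a) * 0.5f;
}

// Sutherland-Hodgman: clip `subject` against every edge of the convex `clip`.
float intersectionArea(Poly subject, const Poly& clip)
{
    for (size_t i = 0; i < clip.size() && !subject.empty(); ++i)
        subject = clipEdge(subject, clip[i], clip[(i + 1) % clip.size()]);
    return subject.size() < 3 ? 0.f : polyArea(subject);
}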
Use the rotatedRectangleIntersection function to get the intersection contour, then use the contourArea function to get its area and compute the ratios.
https://docs.opencv.org/3.0-beta/modules/imgproc/doc/structural_analysis_and_shape_descriptors.html#rotatedrectangleintersection
Let's say you have rectangles A and B; then you can use the operation:
intersection_area = (A & B).area();
From this area you can calculate the respective ratio towards one of the rectangles. (Note that operator& is defined for the axis-aligned cv::Rect type; for rotated rectangles, compute the intersection contour with rotatedRectangleIntersection as above.) There are also harder, more dynamic ways to do this.
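A hedged sketch combining both suggestions. The intersection vertices are convex-hulled before contourArea, since rotatedRectangleIntersection does not return them in contour order; the intersection-over-union ratio shown is just one possible choice:

#include <opencv2/opencv.hpp>
#include <vector>

// Overlap ratio (intersection over union) of two rotated rectangles.
double overlapRatio(const cv::RotatedRect& a, const cv::RotatedRect& b)
{
    std::vector<cv::Point2f> pts, hull;
    int res = cv::rotatedRectangleIntersection(a, b, pts);
    if (res == cv::INTERSECT_NONE || pts.size() < 3)
        return 0.0;

    cv::convexHull(pts, hull);           // order the vertices for contourArea
    double inter = cv::contourArea(hull);
    double uni   = a.size.area() + b.size.area() - inter;
    return uni > 0 ? inter / uni : 0.0;
}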

Understanding Distance Transform in OpenCV

What is the distance transform? What is the theory behind it? If I have two similar images but in different positions, how does the distance transform help in overlapping them? The results that the distance transform function produces look as if divided in the middle; is it meant to find the center of one image so that the other is overlapped just halfway? I have looked into the OpenCV documentation, but it is still not clear.
Look at the picture below (you may want to increase your monitor brightness to see it better). The picture shows the distance from the red contour depicted with pixel intensities; in the middle of the image, where the distance is maximal, the intensities are highest. This is a manifestation of the distance transform. Here is an immediate application: the green shape is a so-called active contour, or snake, which moves according to the gradient of the distances from the contour (while also following some other constraints) and curls around the red outline. Thus one application of the distance transform is shape processing.
Another application is text recognition: one of the powerful cues for text is the stable width of a stroke. The distance transform run on segmented text can confirm this. The corresponding method is called the stroke width transform (SWT).
As for aligning two rotated shapes, I am not sure how you could use the DT. You can find the center of a shape to rotate it about, but you can also rotate it about any other point; the difference will be just a translation, which is irrelevant if you run matchTemplate to match them in the correct orientation.
Perhaps if you upload your images it will be clearer what to do. In general, you can match them as a whole, or by features (which is more robust to various deformations or perspective distortions), or even using outlines/silhouettes if there are only a few features. Finally, you can figure out the orientation of your object (if it has a dominant orientation) by running PCA or fitting an ellipse (as a rotated rectangle):
cv::RotatedRect rect = cv::fitEllipse(points2D); // fit an ellipse to the 2D points
float angle_to_rotate = rect.angle;              // the ellipse angle gives the dominant orientation
The distance transform is an operation on a single binary image that fundamentally measures a value from every empty point (zero pixel) to the nearest boundary point (non-zero pixel).
An example is provided here and here.
The measurement can be based on various definitions, calculated discretely or precisely: e.g. Euclidean, Manhattan, or Chessboard distance. Indeed, the parameters in the OpenCV implementation allow some of these and control their accuracy via the mask size.
The function can return the output measurement image (floating point) as well as a labelled connected-components image (a Voronoi diagram). There is an example of it in operation here.
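A minimal sketch of that call. Note that cv::distanceTransform measures from non-zero pixels to the nearest zero pixel, so a binary image whose boundary is non-zero is inverted first; that inversion and all names are assumptions about the input:

#include <opencv2/opencv.hpp>

// Euclidean distance of every pixel to the nearest boundary pixel, plus the
// Voronoi-style labels of the nearest boundary component.
void runDistanceTransform(const cv::Mat& binaryBoundary,
                          cv::Mat& dist, cv::Mat& labels)
{
    cv::Mat inverted;
    cv::bitwise_not(binaryBoundary, inverted);  // boundary pixels become zero

    // The labelled variant does not support DIST_MASK_PRECISE, so a 5x5
    // mask approximates the Euclidean (DIST_L2) metric here.
    cv::distanceTransform(inverted, dist, labels,
                          cv::DIST_L2, cv::DIST_MASK_5,
                          cv::DIST_LABEL_CCOMP);
}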
I see from another question you have asked recently that you are looking to register two images together. I don't think the distance transform is really what you are looking for here. If you are looking to align a set of points, I would instead suggest you look at techniques like Procrustes, Iterative Closest Point, or RANSAC.
