Given two rectangles, we know the positions of the four corners, the widths, the heights, and the angles.
How to compute the overlapping ratio of these two rectangles?
Can you please help me out?
A convenient way is by the Sutherland-Hodgman polygon clipping algorithm. It works by clipping one of the polygons with the four supporting lines (half-planes) of the other. In the end you get the intersection polygon (at worst an octagon) and find its area by the polygon area formula.
You'll make clipping easier by counter-rotating the polygons around the origin so that one of them becomes axis parallel. This won't change the area.
Note that this approach generalizes easily to two arbitrary convex polygons, taking O(N·M) operations. G.T. Toussaint, using the Rotating Caliper principle, reduced the workload to O(N+M), and B. Chazelle & D.P. Dobkin showed that a nonempty intersection can be detected in O(log(N+M)) operations. So there is some room for improvement over the S-H clipping approach, even though N=M=4 is a tiny problem.
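For illustration, here is a minimal sketch of this approach in C++, assuming both rectangles are given as vertex lists in counter-clockwise order; the Point struct and the function names are made up for this example, not taken from any library.

#include <cmath>
#include <vector>

struct Point { double x, y; };

// > 0 if p lies to the left of the directed edge a->b (inside, for a CCW polygon).
static double side(const Point& a, const Point& b, const Point& p) {
    return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

// Intersection of segment p1-p2 with the infinite line through a and b.
static Point intersect(Point a, Point b, Point p1, Point p2) {
    double d1 = side(a, b, p1), d2 = side(a, b, p2);
    double t = d1 / (d1 - d2);
    return { p1.x + t * (p2.x - p1.x), p1.y + t * (p2.y - p1.y) };
}

// Sutherland-Hodgman: clip `subject` against each edge (half-plane) of `clip`.
std::vector<Point> clipPolygon(std::vector<Point> subject, const std::vector<Point>& clip) {
    for (size_t i = 0; i < clip.size() && !subject.empty(); ++i) {
        Point a = clip[i], b = clip[(i + 1) % clip.size()];
        std::vector<Point> out;
        for (size_t j = 0; j < subject.size(); ++j) {
            Point p = subject[j], q = subject[(j + 1) % subject.size()];
            bool pIn = side(a, b, p) >= 0, qIn = side(a, b, q) >= 0;
            if (pIn) out.push_back(p);
            if (pIn != qIn) out.push_back(intersect(a, b, p, q));
        }
        subject = out;
    }
    return subject;
}

// Polygon area formula (shoelace).
double polygonArea(const std::vector<Point>& poly) {
    double s = 0;
    for (size_t i = 0; i < poly.size(); ++i) {
        const Point &p = poly[i], &q = poly[(i + 1) % poly.size()];
        s += p.x * q.y - q.x * p.y;
    }
    return std::fabs(s) / 2.0;
}

The overlap ratio relative to rectangle A is then polygonArea(clipPolygon(A, B)) / polygonArea(A).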
Use the rotatedRectangleIntersection function to get the intersection contour, then use the contourArea function to get its area and compute the ratios.
https://docs.opencv.org/3.0-beta/modules/imgproc/doc/structural_analysis_and_shape_descriptors.html#rotatedrectangleintersection
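As a sketch of how these calls fit together (the ratio is taken relative to the first rectangle here; ordering the intersection points with convexHull before computing the area is an extra precaution, since the function does not promise a particular vertex order):

#include <opencv2/imgproc.hpp>
#include <vector>

double overlapRatio(const cv::RotatedRect& a, const cv::RotatedRect& b) {
    std::vector<cv::Point2f> inter;
    if (cv::rotatedRectangleIntersection(a, b, inter) == cv::INTERSECT_NONE)
        return 0.0;
    // Order the intersection points into a proper contour.
    std::vector<cv::Point2f> hull;
    cv::convexHull(inter, hull);
    return cv::contourArea(hull) / a.size.area();  // ratio towards rectangle a
}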
Let's say you have rectangles A and B (as cv::Rect); then you can use the operation:
intersection_area = (A & B).area();
From this area you can calculate the respective ratio towards one of the rectangles. Note that cv::Rect is axis-aligned, so this only applies when the rectangles are not rotated; there are more elaborate, dynamic ways to do this as well.
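A tiny sketch, with made-up rectangles:

#include <opencv2/core.hpp>

cv::Rect A(0, 0, 100, 50), B(50, 25, 100, 50);      // example rectangles
double ratioToA = (A & B).area() / static_cast<double>(A.area());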
I have a contour in OpenCV with a convexity defect (the one in red) and I want to cut that contour in two parts with a horizontal line through that point, so that I just get the contour marked in yellow. Is there any way to do it?
Image describing the problem
That's an interesting question. There are some solutions based on how the concavity points are distributed in your image.
1) If such points do not occur at the bottom of the contour (as in your simple example), then here is the pseudo-code (a sketch in code follows after case 2):
Find convex hull C of the image I.
Subtract I from C, that will give you the concavity areas (like the black triangle between two white triangles in your example).
The point with the minimum y value in that area gives you the horizontal line to cut.
2) If such points can occur anywhere, you need a more intelligent algorithm whose cut lines are not constrained to be horizontal (because the min-y point of that difference will be the min-y of the image). You can find the "inner-most" corner points and connect them to each other, or recursively cut the remainder in the y-, x+, y+, x- directions. It really depends on the specs of your input.
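Here is a sketch of case 1 with OpenCV, assuming `binary` is a single-channel 0/255 image of the filled shape; the function name and the min-y convention follow the pseudo-code above (in image coordinates, where y grows downward, you may want the maximum y instead, depending on which way the concavity opens):

#include <opencv2/imgproc.hpp>
#include <algorithm>
#include <vector>

int findCutRow(const cv::Mat& binary) {
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(binary.clone(), contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    // C = filled convex hull of the shape.
    std::vector<cv::Point> hull;
    cv::convexHull(contours[0], hull);
    cv::Mat hullMask = cv::Mat::zeros(binary.size(), CV_8U);
    cv::fillConvexPoly(hullMask, hull, cv::Scalar(255));

    // Concavity areas = C minus I.
    cv::Mat concavity = hullMask & ~binary;

    // Minimum y over the concavity pixels gives the row of the cut line.
    std::vector<cv::Point> pts;
    cv::findNonZero(concavity, pts);
    int minY = binary.rows;
    for (const cv::Point& p : pts) minY = std::min(minY, p.y);
    return minY;
}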
I want to find all pixels in an image (in Cartesian coordinates) which lie within a certain polar range: r_min, r_max, theta_min and theta_max. In other words, I have an annular section defined by the parameters mentioned above and I want to find the integer x,y coordinates of the pixels which lie within it. The brute-force solution comes to mind, of course (going through all the pixels of the image and checking whether each lies within the section), but I am wondering if there is a more efficient solution.
Thanks
In the brute-force solution, you can first determine the tight bounding box of the area by computing the four vertices and including the four cardinal extreme points as needed. Then for every pixel you have to evaluate two circles (quadratic expressions) and two straight lines (linear expressions). By doing the computation incrementally (X => X+1), the number of operations drops to almost nothing.
Inside a circle:
f(X,Y) = X² + Y² - 2·X·Xc - 2·Y·Yc + Xc² + Yc² - R² <= 0
Incrementally:
f(X+1,Y) = f(X,Y) + 2·X + 1 - 2·Xc <= 0
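A minimal sketch of this incremental evaluation along one row, where Xc, Yc, R and the scan bounds stand in for your actual values:

void scanRow(int Y, double Xc, double Yc, double R, int xStart, int xEnd) {
    // Evaluate f once at the left end of the row...
    double f = (xStart - Xc) * (xStart - Xc) + (Y - Yc) * (Y - Yc) - R * R;
    for (int X = xStart; X <= xEnd; ++X) {
        if (f <= 0.0) {
            // (X, Y) is inside the circle; test the remaining constraints here.
        }
        // ...then update with a few additions per step: f(X+1,Y) = f(X,Y) + 2X + 1 - 2Xc
        f += 2 * X + 1 - 2 * Xc;
    }
}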
If you really want to avoid even that overhead, you can resort to scanline conversion techniques. First think of filling a slanted rectangle: drawing two horizontal lines through the intermediate vertices, you decompose the rectangle into two triangles and a parallelogram. Then for any scanline that crosses one of these shapes, you know beforehand which pair of sides you will intersect, and from there, which portion of the scanline you need to fill.
You can generalize this to any shape, in particular your circle segment. Be prepared for a relatively subtle case analysis, but finding the intersections themselves isn't so hard. It may help to split the domain with a vertical line through the center so that any horizontal always meets the outline twice, never four times.
We'll assume the center of the section is at 0,0 for simplicity. If not, it's easy to change by offsetting all the coordinates.
For each possible y coordinate from r_max down to -r_max, find the x coordinates where the circles of both radii cross that row: -sqrt(r*r - y*y) and sqrt(r*r - y*y). Every point that is inside the r_max circle and outside the r_min circle might be part of the section and needs further testing.
Now do the same x coordinate calculations, but this time with the line segments described by the angles. You'll need some conditional logic to determine which side of the line is inside and which is outside, and whether it affects the upper or lower part of the section.
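A sketch of this per-row approach; for brevity it replaces the line-side bookkeeping with a direct atan2 test per candidate (thetaMin/thetaMax are assumed to lie in atan2's [-pi, pi] range, and visit() is a placeholder for whatever you do with each pixel):

#include <cmath>
#include <cstdlib>
#include <functional>

void annularSection(double rMin, double rMax, double thetaMin, double thetaMax,
                    const std::function<void(int, int)>& visit) {
    for (int y = (int)std::floor(rMax); y >= (int)std::ceil(-rMax); --y) {
        // Row extent inside the outer circle: [-xOuter, xOuter].
        double xOuter = std::sqrt(rMax * rMax - (double)y * y);
        // The inner circle cuts the gap (-xInner, xInner) out of the row when |y| < rMin.
        double xInner = (std::abs(y) < rMin) ? std::sqrt(rMin * rMin - (double)y * y) : 0.0;
        for (int x = (int)std::ceil(-xOuter); x <= (int)std::floor(xOuter); ++x) {
            if (std::abs(x) < xInner) continue;                  // inside the inner circle
            double theta = std::atan2((double)y, (double)x);
            if (theta >= thetaMin && theta <= thetaMax)          // angular constraint
                visit(x, y);
        }
    }
}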
What is the distance transform? What is the theory behind it? If I have two similar images but in different positions, how does the distance transform help in overlapping them? The results the distance transform function produces look divided in the middle: is that to find the center of one image so that the other is overlapped just halfway? I have looked into the OpenCV documentation but it's still not clear.
Look at the picture below (you may want to increase your monitor brightness to see it better). The picture shows the distance from the red contour depicted with pixel intensities, so in the middle of the image, where the distance is maximum, the intensities are highest. This is a manifestation of the distance transform. Here is an immediate application: the green shape is a so-called active contour, or snake, which moves according to the gradient of distances from the contour (while also following some other constraints) and curls around the red outline. Thus one application of the distance transform is shape processing.
Another application is text recognition: one of the powerful cues for text is a stable stroke width. The distance transform run on segmented text can confirm this; the corresponding method is called the stroke width transform (SWT).
As for aligning two rotated shapes, I am not sure how you can use the DT. You can find the center of a shape to rotate it about, but you could rotate it about any other point as well; the difference is just a translation, which is irrelevant if you run matchTemplate to match the shapes in the correct orientation.
Perhaps if you upload your images it will be clearer what to do. In general you can match them as a whole, or by features (which is more robust to various deformations or perspective distortions), or even using outlines/silhouettes if there are only a few features. Finally, you can figure out the orientation of your object (if it has a dominant orientation) by running PCA or by fitting an ellipse (as a rotated rectangle):
// fitEllipse returns a cv::RotatedRect whose angle field gives the orientation
cv::RotatedRect rect = cv::fitEllipse(points2D);
float angle_to_rotate = rect.angle;
The distance transform is an operation on a single binary image that fundamentally measures, for every empty point (zero pixel), the distance to the nearest boundary point (non-zero pixel).
The measurement can be based on various metrics, calculated discretely or precisely: e.g. Euclidean, Manhattan, or chessboard distance. The parameters of the OpenCV implementation let you choose among some of these and control their accuracy via the mask size.
The function can return the output distance image (floating point) as well as a labelled connected-components image (a Voronoi diagram).
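A short sketch of the call, assuming `binary` is an 8-bit image; note that OpenCV computes, for each non-zero pixel, the distance to the nearest zero pixel, so invert your mask if the boundary is the non-zero part:

#include <opencv2/imgproc.hpp>

cv::Mat dist, labels;
cv::distanceTransform(binary, dist, labels,
                      cv::DIST_L2, cv::DIST_MASK_5,   // Euclidean metric, 5x5 mask
                      cv::DIST_LABEL_CCOMP);          // labels form a Voronoi diagram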
I see from another question you asked recently that you are looking to register two images together. I don't think the distance transform is really what you are looking for here. If you are looking to align a set of points, I would instead suggest you look at techniques like Procrustes analysis, Iterative Closest Point, or RANSAC.
I have to detect a pattern of 6 circles using OpenCV. I have detected the circles and their centroids by using thresholding and the contour function in OpenCV.
Now I have to define the relation between these circles in a way that is invariant to scale and rotation, so that I can detect this pattern in various views. I have to use this pattern for determining the object pose.
How can I achieve scale/rotation invariance? Do you have any reference I could read about it?
To make your pattern invariant to rotation and scale, you have to normalize the direction and the scale when detecting it. Here is a simple algorithm to achieve this (a sketch in code follows below):
detect centers and circle size (you say you have already achieved this - good!)
compute the average center using a simple mean, and express all the centers relative to this mean
find the farthest center using a simple norm (Euclidean is good enough)
scale the center positions and the circle sizes so that this maximum distance is 1.0
rotate the centers so that the coordinates of the farthest one become (1.0, 0)
you're done. You are now the proud owner of a scale/rotation invariant pattern detector!! Congratulations!
Now you can find patterns, transform them as suggested, and compare center position & circle sizes.
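A sketch of those steps with OpenCV types; the Circle struct is made up for this example:

#include <opencv2/core.hpp>
#include <cmath>
#include <vector>

struct Circle { cv::Point2f center; float radius; };

void normalizePattern(std::vector<Circle>& circles) {
    // Average center, then express all centers relative to it.
    cv::Point2f mean(0, 0);
    for (const Circle& c : circles) mean += c.center;
    mean *= 1.0f / circles.size();

    // Farthest center from the mean.
    float maxDist = 0; size_t far = 0;
    for (size_t i = 0; i < circles.size(); ++i) {
        circles[i].center -= mean;
        float d = std::hypot(circles[i].center.x, circles[i].center.y);
        if (d > maxDist) { maxDist = d; far = i; }
    }

    // Scale so that this maximum distance becomes 1.0.
    for (Circle& c : circles) { c.center *= 1.0f / maxDist; c.radius /= maxDist; }

    // Rotate so the farthest center lands on (1.0, 0).
    float a = -std::atan2(circles[far].center.y, circles[far].center.x);
    float ca = std::cos(a), sa = std::sin(a);
    for (Circle& c : circles)
        c.center = cv::Point2f(ca * c.center.x - sa * c.center.y,
                               sa * c.center.x + ca * c.center.y);
}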
It is not entirely clear to me whether you need to find the rotation, merely get rid of it, or detect whether the circles actually form the pattern you linked. Either way, the answer is much the same.
I would start by finding the two circles that have only one close neighbour. For each circle centroid, calculate the distance to the closest two neighbours. If the distances differ by more than, say, 10%, the centroid belongs to an "end" circle (one of the top ones in your link); a sketch of this test follows below.
Now that you have found the two end circles, rotate them so that they are horizontal to each other. If the other centroids are now above them, rotate another 180 degrees so that the pattern ends up in the orientation you want.
Now you can calculate the scaling from the average inter-centroid distance.
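A sketch of the end-circle test; the 10% threshold and the names are placeholders:

#include <opencv2/core.hpp>
#include <algorithm>
#include <cmath>
#include <vector>

std::vector<size_t> findEndCircles(const std::vector<cv::Point2f>& centers) {
    std::vector<size_t> ends;
    for (size_t i = 0; i < centers.size(); ++i) {
        // Sorted distances from centroid i to all the others.
        std::vector<float> d;
        for (size_t j = 0; j < centers.size(); ++j) {
            if (j == i) continue;
            cv::Point2f v = centers[i] - centers[j];
            d.push_back(std::hypot(v.x, v.y));
        }
        std::sort(d.begin(), d.end());
        // End circles have two nearest neighbours at clearly different distances.
        if (d.size() >= 2 && d[1] > 1.1f * d[0])
            ends.push_back(i);
    }
    return ends;
}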
Hope that helps.
Your question sounds exactly like what the SURF algorithm does. It finds interest points and describes them in a way invariant to rotation and scale, and can find the same object in other pictures.
Just search for OpenCV and SURF.
I have a set of points to define a shape. These points are in order and essentially are my "selection".
I want to be able to contract this selection by an arbitrary amount to get a smaller version of my original shape.
In a basic example with a triangle, the points are simply moved along their normals, each of which is defined by the points to the left and right of the point in question.
Eventually all 3 points will meet and form one point, but until then they will make a smaller and smaller triangle.
For more complex shapes, when moving the individual points inward, they may pass through the outer edge of the shape resulting in weird artifacts. Obviously I'll need to cull these points and remove them from the array.
Any help in exactly how I can do that would be greatly appreciated.
Thanks!
This is just an idea, but couldn't you find the center of mass of the object, create a vector from the center to each point, and move each point along this vector?
Finding the center of mass involves averaging the x and y coordinates. Getting a vector is as simple as subtracting the center from the point in question. Normalizing and scaling are common vector operations that are easy to look up.
EDIT
Another way to interpret what you're asking is that you want to erode your collection of points, as in morphological erosion. This is typically applied to binary images, but you can slightly modify the concept to work with a collection of points. Essentially, you need to write a function that, given a point, returns true (black) or false (white) depending on whether that point is inside or outside the shape defined by your points. You'd have to look up how to do that for shapes that aren't convex (it's harder but not impossible).
Now, obviously, every single one of your actual points will return false because they're all on the border (by definition). However, you now have a matrix of points around your point of interest that defines where is "inside" and where is "outside". Average all of the "inside" points and move your actual point along the vector between itself and this average. You could play with different erosion kernels to see what works best.
You could even work with a kernel of floating-point weights instead of either/or values, which will affect your average calculation in proportion to the weights. With this you could approximate a circular kernel with a low number of points. Try the simpler method first; a sketch of the either/or version follows below.
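A sketch of the either/or version, assuming the points form a closed polygon; the ray-casting test and the 3x3 sample kernel are illustrative choices:

#include <cmath>
#include <vector>

struct Pt { double x, y; };

// Ray-casting point-in-polygon test (works for non-convex shapes too).
bool inside(const std::vector<Pt>& poly, Pt p) {
    bool in = false;
    for (size_t i = 0, j = poly.size() - 1; i < poly.size(); j = i++)
        if ((poly[i].y > p.y) != (poly[j].y > p.y) &&
            p.x < (poly[j].x - poly[i].x) * (p.y - poly[i].y) /
                  (poly[j].y - poly[i].y) + poly[i].x)
            in = !in;
    return in;
}

// Move each point a small step toward the average of the kernel samples
// that fall inside the shape.
std::vector<Pt> erodePoints(const std::vector<Pt>& poly, double radius, double step) {
    std::vector<Pt> out;
    for (const Pt& p : poly) {
        Pt sum{0, 0}; int n = 0;
        for (int dy = -1; dy <= 1; ++dy)
            for (int dx = -1; dx <= 1; ++dx) {
                Pt q{p.x + dx * radius, p.y + dy * radius};
                if (inside(poly, q)) { sum.x += q.x; sum.y += q.y; ++n; }
            }
        if (n == 0) { out.push_back(p); continue; }        // nowhere to move
        Pt avg{sum.x / n, sum.y / n};
        double len = std::hypot(avg.x - p.x, avg.y - p.y);
        out.push_back(len > 0 ? Pt{p.x + step * (avg.x - p.x) / len,
                                   p.y + step * (avg.y - p.y) / len}
                              : p);
    }
    return out;
}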
Find the selection center (as suggested by colithium)
Map the selection points to the coordinate system with the selection center at (0,0). For example, if the selection center is at (150,150), and a given selection point is at (125,75), the mapped position of the point becomes (-25,-75).
Scale the mapped points (multiply X and Y by something in the range of 0.0..1.0)
Remap the points back to the original coordinate system
Only simple maths required; no need to muck about with normalizing vectors.
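A compact sketch of those four steps; the names are illustrative:

#include <vector>

struct Pt { double x, y; };

// factor in 0.0..1.0 shrinks the selection; 1.0 leaves it unchanged.
std::vector<Pt> shrinkSelection(const std::vector<Pt>& pts, double factor) {
    // Selection center = mean of the points.
    Pt c{0, 0};
    for (const Pt& p : pts) { c.x += p.x; c.y += p.y; }
    c.x /= pts.size(); c.y /= pts.size();

    // Map each point to center-origin coordinates, scale, and map back.
    std::vector<Pt> out;
    for (const Pt& p : pts)
        out.push_back({c.x + (p.x - c.x) * factor,
                       c.y + (p.y - c.y) * factor});
    return out;
}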