I am trying to implement fast 2-D collision detection with a quad-tree.
AFAIK, a quad-tree divides a region into 4 sub-regions: north-west, north-east, south-east and south-west. This dividing works perfectly with a square. But what if the region is a non-square rectangle? In that case, we cannot divide the long edge and the short edge evenly, and the short edge determines how far we can subdivide.
Am I right on this? Is that meant to be?
Simply take the larger of the width and height of the bounding box of the region of interest as the side length of the quad-tree.
Another solution:
Two quad-tree implementations that I have seen use a rectangle internally, so they would work out of the box even if the provided root bounds are not square. They divide both the width and the height of the bounds in each subdivision step. But note that there are over 10 different quad-tree types; I am talking about rectangle quad-trees.
One implementation explicitly uses a side length which is divided by 2, so it would not work well for non-square root bounds.
However, I still recommend my first sentence: better to use a square as the root bounds.
This then works for all quad tree types.
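To illustrate, here is a minimal sketch of that idea (all names are mine, not from any particular library): subdividing a square keeps every cell square, and a non-square region simply gets a padded square root.

#include <algorithm>
#include <array>

struct Square { float x, y, size; };   // cell: top-left corner + side length

// Subdividing a square yields four equal squares (NW, NE, SW, SE),
// so the aspect ratio never degrades, however deep the tree goes.
std::array<Square, 4> subdivide(const Square& s) {
    const float h = s.size / 2;
    return {{ {s.x,     s.y,     h},      // NW
              {s.x + h, s.y,     h},      // NE
              {s.x,     s.y + h, h},      // SW
              {s.x + h, s.y + h, h} }};   // SE
}

// Square root bound for a non-square region of interest:
Square rootBounds(float x, float y, float w, float h) {
    return { x, y, std::max(w, h) };     // side length = max(width, height)
}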
I want to find all pixels in an image (in Cartesian coordinates) which lie within a certain polar range: r_min, r_max, theta_min and theta_max. In other words, I have an annular section defined by the parameters mentioned above, and I want to find the integer x,y coordinates of the pixels which lie within it. The brute-force solution comes to mind, of course (going through all the pixels of the image and checking whether each lies within the section), but I am wondering if there is a more efficient solution.
Thanks
In the brute-force solution, you can first determine the tight bounding box of the area by computing the four vertices and including the four cardinal extreme points as needed. Then for every pixel, you will have to evaluate two circles (quadratic expressions) and two straight lines (linear expressions). By doing the computation incrementally (X => X+1), the number of operations drops to almost nothing.
Inside a circle:

f(X,Y) = X² + Y² - 2·X·Xc - 2·Y·Yc + Xc² + Yc² - R² <= 0

Incrementally,

f(X+1,Y) = f(X,Y) + 2X + 1 - 2·Xc <= 0
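As a sketch of how cheap this gets per pixel (variable names are mine; Xc, Yc, R describe one of the two circles):

#include <cstdio>

// Evaluate the circle test incrementally along one scanline.
void scanline(int y, int xMin, int xMax, float Xc, float Yc, float R) {
    // f(X,Y) = X² + Y² - 2·X·Xc - 2·Y·Yc + Xc² + Yc² - R², evaluated once...
    float f = xMin*xMin + y*y - 2*xMin*Xc - 2*y*Yc + Xc*Xc + Yc*Yc - R*R;
    for (int x = xMin; x <= xMax; ++x) {
        if (f <= 0) printf("(%d,%d) is inside\n", x, y);
        // ...then updated with one addition per pixel:
        // f(X+1,Y) = f(X,Y) + 2X + 1 - 2·Xc
        f += 2*x + 1 - 2*Xc;
    }
}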
If you really want to avoid that overhead, you will resort to scanline-conversion techniques. First think of filling a slanted rectangle: by drawing two horizontal lines through the intermediate vertices, you decompose the rectangle into two triangles and a parallelogram. Then for any scanline that crosses one of these shapes, you know beforehand which pair of sides it will intersect. From there, you know which portion of the scanline you need to fill.
You can generalize to any shape, in particular your circle segment. Be prepared for a relatively subtle case analysis, but finding the intersections themselves isn't so hard. It may help to split the domain with a vertical line through the center so that any horizontal always meets the outline twice, never four times.
We'll assume the center of the section is at 0,0 for simplicity. If not, it's easy to change by offsetting all the coordinates.
For each possible y coordinate from r_max to -r_max, find the x coordinates on the circles of both radii: -sqrt(r*r - y*y) and sqrt(r*r - y*y). Every point that is inside the r_max circle and outside the r_min circle might be part of the section and will need further testing.
Now do the same x coordinate calculations, but this time with the line segments described by the angles. You'll need some conditional logic to determine which side of each line is inside and which is outside, and whether it affects the upper or lower part of the section.
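A rough sketch of this per-scanline approach (my own code; for brevity the angular test uses atan2 on each candidate pixel instead of the explicit line-side logic described above, and it assumes the angular range does not wrap across ±pi):

#include <cmath>
#include <cstdio>

// Center assumed at (0,0); thetaMin/thetaMax in radians, CCW from +x axis.
void annularSection(float rMin, float rMax, float thetaMin, float thetaMax) {
    int yMax = (int)std::floor(rMax);
    for (int y = yMax; y >= -yMax; --y) {
        // x range cut out of this scanline by the outer circle
        float xOuter = std::sqrt(rMax*rMax - (float)(y*y));
        for (int x = (int)std::ceil(-xOuter); x <= (int)std::floor(xOuter); ++x) {
            float r2 = (float)(x*x + y*y);
            if (r2 < rMin*rMin) continue;          // inside the inner circle: skip
            float theta = std::atan2((float)y, (float)x);
            if (theta >= thetaMin && theta <= thetaMax)
                printf("%d %d\n", x, y);           // pixel belongs to the section
        }
    }
}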
I need to find the orientation of corn pictures (examples below); they are tilted at different angles to the right or left. I need to turn them upright, to a 90-degree angle with the horizontal, so that they look like a water drop.
Is there any way I can do it easily?
As a starting point, find the image moments (and Hu moments for complex shapes like a pear). From the link:
Information about image orientation can be derived by first using the
second order central moments to construct a covariance matrix.
I suspect that using an image processing library like OpenCV could give more reliable results in the common case.
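For example, a minimal OpenCV sketch of the covariance-based orientation estimate described in the quote (the file name is made up):

#include <opencv2/imgproc.hpp>
#include <opencv2/imgcodecs.hpp>
#include <cmath>
#include <cstdio>

int main() {
    // Assumes a binary image where the object pixels are non-zero.
    cv::Mat img = cv::imread("corn.png", cv::IMREAD_GRAYSCALE);
    cv::Moments m = cv::moments(img, /*binaryImage=*/true);

    // Orientation from the second-order central moments (covariance matrix):
    // theta = 0.5 * atan2(2*mu11, mu20 - mu02), in radians.
    double theta = 0.5 * std::atan2(2.0 * m.mu11, m.mu20 - m.mu02);
    printf("orientation: %.1f degrees\n", theta * 180.0 / CV_PI);
    return 0;
}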
From the OP I got the impression you are a rookie at this, so I will stick to something simple:
compute bounding box of image
Simple enough: go through all pixels and remember the min and max of the x,y coordinates of non-background pixels.
compute critical dimensions
Just cast a few lines through the bounding box, computing the red points' positions. So select the start points; I chose 25%, 50%, 75% of the height. First start from the left and stop at the first non-background pixel. Then start from the right and stop at the first non-background pixel (see the sketch after the notes below).
axis aligned position
Start rotating the image with some step; remember/stop at the position where the red dots are symmetric, i.e. almost the same distance from the left and from the right. Also, the bounding box has maximal height and minimal width in the axis-aligned position, so you can exploit that instead ...
determine the position
You have 4 possible orientations. If I call the distances l0,l1,l2 and r0,r1,r2, where:
l means from the left, r means from the right
0 is the upper (bluish) line, 1 the middle, 2 the bottom
then the position you want is when (l0==r0)>=(l1==r1)>=(l2==r2) and the bounding box is bigger along the y axis than along the x axis. So rotate by 90 degrees until a match is found, or determine the orientation directly from the distances and rotate just once ...
[Notes]
You will need access to the image's pixels, so I strongly recommend using Graphics::TBitmap from the VCL. Look here: gfx in C, especially the section GDI Bitmap; this: finding horizon on high altitude photo might also help a bit.
I use C++ and VCL, so you will have to translate to Pascal, but the VCL stuff is the same...
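Here is a minimal plain-C++ sketch of the probe measurement from the "compute critical dimensions" step (isBackground is a made-up placeholder for your own pixel access, e.g. via Graphics::TBitmap's ScanLine):

#include <functional>

// Cast three horizontal probe lines through the bounding box (x0..x1, y0..y1)
// and measure the distance to the first non-background pixel from each side.
struct Probe { int l[3], r[3]; };   // l0..l2 from the left, r0..r2 from the right

Probe measure(int x0, int x1, int y0, int y1,
              std::function<bool(int,int)> isBackground) {
    Probe p{};
    for (int i = 0; i < 3; ++i) {
        // probe lines at 25%, 50%, 75% of the bounding-box height
        int y = y0 + (y1 - y0) * (i + 1) / 4;
        int x;
        for (x = x0; x <= x1 && isBackground(x, y); ++x);  // from the left
        p.l[i] = x - x0;
        for (x = x1; x >= x0 && isBackground(x, y); --x);  // from the right
        p.r[i] = x1 - x;
        // the shape is in the axis-aligned position when l[i] ≈ r[i] on all lines
    }
    return p;
}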
Given two rectangles, we know the positions of the four corners, the widths, heights, and angles.
How to compute the overlapping ratio of these two rectangles?
Can you please help me out?
A convenient way is the Sutherland-Hodgman polygon clipping algorithm. It works by clipping one of the polygons with the four supporting lines (half-planes) of the other. In the end you get the intersection polygon (at worst an octagon) and find its area by the polygon area formula.
You'll make clipping easier by counter-rotating both polygons around the origin so that one of them becomes axis-parallel. This won't change the area.
Note that this approach generalizes easily to two arbitrary convex polygons, taking O(N·M) operations. G.T. Toussaint, using the rotating-calipers principle, reduced the workload to O(N+M), and B. Chazelle & D.P. Dobkin showed that a nonempty intersection can be detected in O(log(N+M)) operations. So there is probably some room for improvement over the S-H clipping approach, even though N=M=4 is a tiny problem.
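For the N=M=4 case, a compact sketch of the clip-then-measure approach (my own code, assuming both polygons are convex and given in counterclockwise order):

#include <vector>

struct Pt { double x, y; };

// > 0 when p is on the left of edge a->b (inside, for a CCW clip polygon).
static double side(const Pt& a, const Pt& b, const Pt& p) {
    return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

// Sutherland-Hodgman: clip `poly` against each edge of the convex CCW `clip`.
std::vector<Pt> clipPolygon(std::vector<Pt> poly, const std::vector<Pt>& clip) {
    for (size_t i = 0; i < clip.size() && !poly.empty(); ++i) {
        const Pt& a = clip[i];
        const Pt& b = clip[(i + 1) % clip.size()];
        std::vector<Pt> out;
        for (size_t j = 0; j < poly.size(); ++j) {
            const Pt& p = poly[j];
            const Pt& q = poly[(j + 1) % poly.size()];
            double dp = side(a, b, p), dq = side(a, b, q);
            if (dp >= 0) out.push_back(p);                 // p inside: keep it
            if ((dp >= 0) != (dq >= 0)) {                  // edge p->q crosses the line
                double t = dp / (dp - dq);
                out.push_back({p.x + t * (q.x - p.x), p.y + t * (q.y - p.y)});
            }
        }
        poly = out;
    }
    return poly;
}

// Shoelace formula for the area of the resulting polygon.
double area(const std::vector<Pt>& poly) {
    double s = 0;
    for (size_t i = 0; i < poly.size(); ++i) {
        const Pt& p = poly[i];
        const Pt& q = poly[(i + 1) % poly.size()];
        s += p.x * q.y - q.x * p.y;
    }
    return s < 0 ? -s / 2 : s / 2;
}

The overlap ratio is then area(clipPolygon(A, B)) divided by whichever reference area you prefer (one rectangle's area, or the union via inclusion-exclusion).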
Use the rotatedRectangleIntersection function to get the intersection contour, then use the contourArea function to get its area and compute the ratios.
https://docs.opencv.org/3.0-beta/modules/imgproc/doc/structural_analysis_and_shape_descriptors.html#rotatedrectangleintersection
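A usage sketch (the rectangle parameters are made up; note that the returned vertices may need re-ordering, e.g. with convexHull, before contourArea):

#include <opencv2/imgproc.hpp>
#include <cstdio>
#include <vector>

int main() {
    // Two rotated rectangles: center, size, angle in degrees (values made up).
    cv::RotatedRect A(cv::Point2f(100, 100), cv::Size2f(80, 40),  30.f);
    cv::RotatedRect B(cv::Point2f(110,  90), cv::Size2f(60, 60), -15.f);

    std::vector<cv::Point2f> inter, hull;
    int kind = cv::rotatedRectangleIntersection(A, B, inter);

    double overlap = 0.0;
    if (kind != cv::INTERSECT_NONE) {
        cv::convexHull(inter, hull);          // order the vertices
        overlap = cv::contourArea(hull);      // area of the intersection polygon
    }
    // Ratio relative to rectangle A (pick whichever denominator you need):
    printf("ratio = %f\n", overlap / A.size.area());
    return 0;
}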
Let's say you have rectangles A and B; then you can use the operation:
intersection_area = (A & B).area();
From this area you can calculate the respective ratio with respect to one of the rectangles. (Note that the & operator is defined for axis-aligned cv::Rect, not for rotated rectangles.) There are harder, more dynamic ways to do this as well.
What is the distance transform? What is the theory behind it? If I have 2 similar images but in different positions, how does the distance transform help in overlapping them? The results that the distance transform function produces look divided in the middle - is it meant to find the center of one image so that the other is overlapped halfway? I have looked into the OpenCV documentation but it's still not clear.
Look at the picture below (you may want to increase your monitor brightness to see it better). The picture shows the distance from the red contour depicted with pixel intensities, so in the middle of the image, where the distance is maximal, the intensities are highest. This is a manifestation of the distance transform. Here is an immediate application: the green shape is a so-called active contour or snake that moves according to the gradient of distances from the contour (and also follows some other constraints) and curls around the red outline. Thus one application of the distance transform is shape processing.
Another application is text recognition: one of the powerful cues for text is the stable width of a stroke. The distance transform run on segmented text can confirm this. A corresponding method is called the stroke width transform (SWT).
As for aligning two rotated shapes, I am not sure how you can use the DT. You can find the center of a shape in order to rotate the shape about it, but you could rotate it about any other point as well; the difference will just be a translation, which is irrelevant if you run matchTemplate to match the shapes in the correct orientation.
Perhaps if you upload your images it will be clearer what to do. In general you can match them as a whole, or by features (which is more robust to various deformations or perspective distortions), or even using outlines/silhouettes if there are only a few features. Finally, you can figure out the orientation of your object (if it has a dominant orientation) by running PCA or fitting an ellipse (as a rotated rectangle):
cv::RotatedRect rect = cv::fitEllipse(points2D); // fit an ellipse to the 2-D point set
float angle_to_rotate = rect.angle;              // its orientation, in degrees
The distance transform is an operation that works on a single binary image and fundamentally measures, for every empty point (zero pixel), the distance to the nearest boundary point (non-zero pixel).
An example is provided here and here.
The measurement can be based on various definitions, calculated discretely or precisely: e.g. Euclidean, Manhattan, or Chessboard. Indeed, the parameters in the OpenCV implementation allow some of these, and control their accuracy via the mask size.
The function can return the output measurement image (floating point) as well as a labelled connected-components image (a Voronoi diagram). There is an example of it in operation here.
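A minimal usage sketch of the OpenCV function (the file name and parameter choices are just an example):

#include <opencv2/imgproc.hpp>
#include <opencv2/imgcodecs.hpp>

int main() {
    // 8-bit binary input; the file name is made up. Note OpenCV's convention:
    // for every pixel it computes the distance to the nearest ZERO pixel,
    // so the boundary/shape must be the zero pixels (invert the mask if needed).
    cv::Mat bin = cv::imread("mask.png", cv::IMREAD_GRAYSCALE);

    cv::Mat dist;
    // Euclidean metric; DIST_MASK_PRECISE gives exact distances.
    cv::distanceTransform(bin, dist, cv::DIST_L2, cv::DIST_MASK_PRECISE);

    // Overload that also returns the Voronoi labelling (the id of the nearest
    // zero-pixel component for every pixel); it requires a 3x3 or 5x5 mask.
    cv::Mat labels;
    cv::distanceTransform(bin, dist, labels, cv::DIST_L2, cv::DIST_MASK_3,
                          cv::DIST_LABEL_CCOMP);
    return 0;
}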
I see from another question you asked recently that you are looking to register two images together. I don't think the distance transform is really what you are looking for here. If you are looking to align a set of points, I would instead suggest you look at techniques like Procrustes analysis, Iterative Closest Point, or RANSAC.
I have a photograph containing multiple rectangles of various sizes and orientations. I am currently trying to find the distance from the camera to any rectangles present in the image. What is the best way to accomplish this?
For example, a photograph might look similar to this (although this is probably very out of proportion):
I can find the pixel coordinates of the corners of any of the rectangles in the image, along with the camera FOV and resolution. I also know beforehand the length and width of any rectangle that could be in the image (but not what angle they face the camera). The ratio of length to width of each rectangular target that could be in the image is guaranteed to be unique. The rectangles and the camera will always be parallel to the ground.
What I've tried:
I hacked together a solution based on some example code I found on the internet. I'm basically iterating through each rectangle and finding its average pixel length and height.
I then use this to find the ratio of length vs. height and compare it against a list of the ratios of all known rectangular targets, so I can find the actual height of the target in inches. I then use this information to find the distance:
...where actual_height is the real height of the target in inches, IMAGE_HEIGHT is how tall the image is (in pixels), pixel_height is the average height of the rectangle in the image (in pixels), and VERTICAL_FOV is the angle the camera sees along the vertical axis in degrees (about 39.75 degrees on my camera).
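For reference, a pinhole-camera-model formula consistent with these variable names (this is a reconstruction on my part, an assumption, not necessarily the exact formula used):

#include <cmath>

// Hedged reconstruction: variable names follow the question above.
double distanceToTarget(double actual_height, double pixel_height,
                        double IMAGE_HEIGHT, double VERTICAL_FOV_degrees) {
    const double PI = 3.14159265358979323846;
    // At distance d the camera's vertical FOV spans 2*d*tan(FOV/2) inches,
    // mapped onto IMAGE_HEIGHT pixels; the target's actual_height inches map
    // onto pixel_height pixels. Solving that proportion for d gives:
    double halfFov = VERTICAL_FOV_degrees * PI / 180.0 / 2.0;
    return (actual_height * IMAGE_HEIGHT) / (2.0 * pixel_height * std::tan(halfFov));
}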
I found this formula on the internet, and while it seems to work somewhat ok, I don't really understand how it works, and it always seems to undershoot the actual distance by a bit.
In addition, I'm not sure how to modify the formula so that it can deal with rectangles that are heavily skewed by being viewed at an angle. Since my algorithm works by finding the proportion of the length and height, it works OK for rectangles 1 and 2 (which aren't too skewed), but doesn't work for rectangle 3, since it's very skewed, throwing the ratios completely off.
I considered finding the ratio using the method outlined in this StackOverflow question regarding the proportions of a perspective-deformed rectangle, but I wasn't sure how well that would work with what I have, and was wondering if it's overkill or if there's a simpler solution I could try.
FWIW I once did something similar with triangles (full 6DoF pose, not just distance).