I want to find all pixels in an image (in Cartesian coordinates) which lie within a certain polar range: r_min, r_max, theta_min and theta_max. In other words, I have an annular section defined by the parameters above, and I want to find the integer x,y coordinates of the pixels which lie within it. The brute-force solution comes to mind, of course (going through all the pixels of the image and checking whether each one is within the section), but I am wondering if there is a more efficient solution.
Thanks
In the brute force solution, you can first determine the tight bounding box of the area by computing the four vertices and including the four cardinal extreme points as needed. Then for every pixel you will have to evaluate two circles (quadratic expressions) and two straight lines (linear expressions). By doing the computation incrementally (X => X+1), the number of operations drops to almost nothing.
Inside a circle
f(X,Y) = (X - Xc)² + (Y - Yc)² - R² = X² + Y² - 2X·Xc - 2Y·Yc + Xc² + Yc² - R² ≤ 0
Incrementally,
f(X+1,Y) = f(X,Y) + 2X + 1 - 2Xc ≤ 0
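To make this concrete, here is a minimal Python sketch; the function name, its signature and the bounding-box arguments are all illustrative, and the two half-plane tests assume the sector spans less than 180 degrees:

import math

def pixels_in_annular_sector(xc, yc, r_min, r_max, th_min, th_max,
                             x0, x1, y0, y1):
    hits = []
    # direction vectors of the two bounding rays (the two straight lines)
    c0, s0 = math.cos(th_min), math.sin(th_min)
    c1, s1 = math.cos(th_max), math.sin(th_max)
    for y in range(y0, y1 + 1):
        dy = y - yc
        # evaluate the two quadratics once per row, at x = x0
        f_out = (x0 - xc) ** 2 + dy * dy - r_max * r_max  # <= 0 inside outer circle
        f_in = (x0 - xc) ** 2 + dy * dy - r_min * r_min   # >= 0 outside inner circle
        for x in range(x0, x1 + 1):
            dx = x - xc
            # two linear half-plane tests select the angular wedge
            if (f_out <= 0 and f_in >= 0
                    and c0 * dy - s0 * dx >= 0 and c1 * dy - s1 * dx <= 0):
                hits.append((x, y))
            # f(X+1,Y) = f(X,Y) + 2X + 1 - 2Xc: one addition per circle
            step = 2 * x + 1 - 2 * xc
            f_out += step
            f_in += step
    return hits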
If you really want to avoid that overhead, you will resort to scanline conversion techniques. First think of filling a slanted rectangle. By drawing two horizontal lines through the intermediate vertices, you decompose the rectangle into two triangles and a parallelogram. Then for any scanline that crosses one of these shapes, you know beforehand what pair of sides you will intersect. From there, you know what portion of the scanline you need to fill.
You can generalize this to any shape, in particular your circle segment. Be prepared for a relatively subtle case analysis, but finding the intersections themselves isn't so hard. It may help to split the domain with a vertical through the center so that any horizontal always meets the outline twice, never four times.
We'll assume the center of the section is at 0,0 for simplicity. If not, it's easy to change by offsetting all the coordinates.
For each possible y coordinate from r_max down to -r_max, find the x coordinates where the circles of both radii cross that row: -sqrt(r*r - y*y) and sqrt(r*r - y*y). Every point that is inside the r_max circle and outside the r_min circle might be part of the section and will need further testing.
Now do the same x coordinate calculations, but this time with the line segments described by the angles. You'll need some conditional logic to determine which side of the line is inside and which is outside, and whether it affects the upper or lower part of the section.
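A rough sketch of that per-row computation, again with the center at (0,0); the clipping of each interval against the two angle lines is left out here, since the conditional logic depends on which quadrants the section spans:

import math

def row_intervals(r_min, r_max):
    # for each integer row y, yield the x-intervals covered by the annulus
    for y in range(int(math.floor(r_max)), int(-math.floor(r_max)) - 1, -1):
        xo = math.sqrt(r_max * r_max - y * y)      # outer circle crossings
        if abs(y) >= r_min:
            yield y, [(-xo, xo)]                   # one span across the row
        else:
            xi = math.sqrt(r_min * r_min - y * y)  # inner circle crossings
            yield y, [(-xo, -xi), (xi, xo)]        # a span on each side

Each interval then still has to be intersected with the half-planes of theta_min and theta_max before its integer x positions are emitted.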
I have an array of data from a grayscale image, from which I have segmented sets of contiguous points of a certain intensity value.
Currently I am doing a naive bounding box routine where I find the minimum and maximum (x,y) [row, col] points. This obviously does not provide the smallest possible box that contains the set of points which is demonstrable by simply rotating a rectangle so the longest axis is no longer aligned with a principal axis.
What I wish to do is find the minimum sized oriented bounding box. This seems to be possible using an algorithm known as rotating calipers, however the implementations of this algorithm seem to rely on the idea that you have a set of vertices to begin with. Some details on this algorithm: https://www.geometrictools.com/Documentation/MinimumAreaRectangle.pdf
My main issue is in finding the vertices within the data that I currently have. I believe I need to at least find candidate vertices in order to reduce the number of iterations I am performing, since the number of points is relatively large and treating the interior points as if they were vertices is unnecessary if I can figure out a way to exclude them.
Here is some example data that I am working with:
Here's the segmented scene using the naive algorithm, where it segments out the central objects relatively well due to the objects mostly being aligned with the image axes:
In red, you can see the current bounding boxes that I am drawing utilizing 2 vertices: top-left and bottom-right corners of the groups of points I have found.
The rotation part is where my current approach fails: since I am only defining the bounding box using two points, anything that is rotated and not axis-aligned will occupy much more area than necessary to encapsulate the points.
Here's an example with rotated objects in the scene:
Here's the current naive segmentation's performance on that scene, which is drawing larger than necessary boxes around the rotated objects:
Ideally the result would be bounding boxes aligned with the longest axis of the points that are being segmented, which is what I am having trouble implementing.
Here's an image roughly showing what I am really looking to accomplish:
You can also notice unnecessary segmentation done in the image around the borders as well as some small segments, which should be removed with some further heuristics that I have yet to develop. I would also be open to alternative segmentation algorithm suggestions that provide a more robust detection of the objects I am interested in.
I am not sure if this question will be completely clear, therefore I will try my best to clarify if it is not obvious what I am asking.
It's late, but that might still help. This is what you need to do:
expand pixels so that small segments connect into larger bodies
find connected bodies
select a sample of pixels from each body
find the MBR ([oriented] minimum bounding rectangle) for each selected set
For the first step you can perform dilation. It's somewhat like DBSCAN clustering. For step 3 you can simply select random pixels from a uniform distribution. Obviously, the more pixels you keep, the more accurate the MBR will be. I tested this in MATLAB:
% import image as a matrix of 0s and 1s
oI = ~im2bw(rgb2gray(imread('vSb2r.png'))); % original image
% expand pixels
dI = imdilate(oI,strel('disk',4)); % dilated
% find connected bodies of pixels
CC = bwconncomp(dI);
L = labelmatrix(CC) .* uint8(oI); % labeled
% mark some random pixels
rI = rand(size(oI))<0.3;
sI = L.* uint8(rI) .* uint8(oI); % sampled
% find the MBR (minimum bounding rectangle) for each set of sampled pixels
for i = 1:CC.NumObjects
    [Y,X] = find(sI == i);
    mbr(i) = getMBR(X, Y); % getMBR: user-supplied routine (e.g. from the linked paper)
end
You can also remove pixels that don't affect the result using some more processing and morphological operations:
remove holes
find boundaries
find skeleton
In MATLAB:
I = imfill(I, 'holes');
I = bwmorph(I,'remove');
I = bwmorph(I,'skel');
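For reference, here is a rough Python/OpenCV analogue of the same pipeline; the file name, the threshold and the kernel size are placeholders to tune. The random-sampling step is skipped because cv2.minAreaRect takes the convex hull of the points internally, which is exactly the "candidate vertices" reduction the question asks about:

import cv2
import numpy as np

img = cv2.imread('segments.png', cv2.IMREAD_GRAYSCALE)
binary = (img < 128).astype(np.uint8)               # foreground = dark pixels

# 1) dilate so nearby fragments merge into one body
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (9, 9))
dilated = cv2.dilate(binary, kernel)

# 2) label connected bodies on the dilated image
num_labels, labels = cv2.connectedComponents(dilated)

# 3) + 4) for each body, take its original pixels and fit the oriented MBR
for lbl in range(1, num_labels):
    ys, xs = np.nonzero((labels == lbl) & (binary > 0))
    if len(xs) < 3:
        continue                                    # drop tiny specks
    pts = np.column_stack((xs, ys)).astype(np.float32)
    rect = cv2.minAreaRect(pts)                     # ((cx, cy), (w, h), angle)
    box = cv2.boxPoints(rect).astype(int)           # the 4 rotated corners
    print(lbl, box)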
I have a photo of a Go-board, which is basically a grid with n*n squares, each of size a.
Depending on how the image was taken, the grid can have either one vanishing point like this (n = 15, board size b = 15*a):
or two vanishing points like this (n = 9, board size b = 9*a):
So what is available to me are the four screen space coordinates of the four corners of the flat board: p1, p2, p3, p4.
What I would like to do is to calculate the corresponding four screen space coordinates q1, q2, q3, q4 of the corners of the board, if the board was moved 'upward' (perpendicular to the plane of the board) in world space by a, or in other words the coordinates on top of the board, if the board had a thickness of a.
Is the information about the four points even sufficient to calculate this?
If this is not enough information, maybe it would help to make the assumption that the distance of the camera to the center of the board is typically of the order of 1.5 or 2 times the board size b?
From my understanding, the four lines p1-q1, p2-q2, p3-q3, p4-q4 would all go through the same (yet unknown) vanishing point, located somewhere below the board.
Maybe a sufficient approximation (because typically for a Go board n = 18, so the square size a is small compared to the board size) for the direction of each of the lines p1-q1, p2-q2, ... in screen space would be to simply choose a line perpendicular to the horizon (given by the two vanishing points vp1-vp2, or by p1-p2 in the case of only one vanishing point)?
Having made this approximation, still the length of the four lines p1-q1, p2-q2, p3-q3, p4-q4 would need to be calculated ...
Any hints are highly appreciated!
PS: I am using Objective-C & OpenCV
Not yet a full answer, but this might help to move forward. As MvG pointed out, 4 points alone are not enough. Luckily we know the board is a square, so even with perspective distortion the diagonals in 2D should/will intersect at the board center (unless serious fish-eye or other distortion is present in the image). Here is a test image (created in OpenGL) that I used as test input:
The grayish surface is a 2D quad drawn using the 2D perspective-distorted corner points (your input). The aqua/bluish grid is the 3D OpenGL grid I created the 2D corner points from (to see if they match). The green lines are the 2D diagonals and the orange points are the 2D corner points and the diagonals' intersection. As you can see, the 2D diagonal intersection corresponds exactly with the 3D board's mid-cell center.
Now we can use the ratio between the half-diagonal lengths to assume/fit the perspective. If we handle cell coordinates in the range <0,9>, we want to achieve a further division of the half diagonals like this:
I am still not sure exactly how (the linear ratio l0/(l0+l1) is not working), so I need to inspect the perspective mapping equations to find the relative ratio dependence and compute its inverse (when I have the time/mood for this).
If that is a success then we can compute any point along the diagonals (we want the cell edges). Once that is done, we can easily compute the visual size a of any cell and use the vanishing point, without any 3D transform matrices at all.
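A possibly simpler route than fitting the ratios by hand: the four corner correspondences fully determine the plane-to-plane perspective map (a homography), and OpenCV can build it directly. A sketch, with placeholder corner values standing in for the detected p1..p4:

import cv2
import numpy as np

n = 9  # cells per side
board = np.float32([[0, 0], [n, 0], [n, n], [0, n]])               # board space
image = np.float32([[112, 84], [421, 66], [489, 303], [43, 331]])  # p1..p4

H = cv2.getPerspectiveTransform(board, image)

# map any board-space point to the image, e.g. all grid crossings at once
grid = np.float32([[i, j] for i in range(n + 1) for j in range(n + 1)])
crossings = cv2.perspectiveTransform(grid.reshape(-1, 1, 2), H).reshape(-1, 2)

With H in hand, the points along the diagonals and the visual size a of each cell fall out by transforming the corresponding board-space coordinates, so the nonlinear ratio along the diagonal never has to be derived explicitly.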
In case the ratio fitting is not doable, there is still the option to use DIP/CV techniques to detect the cell crossings, like this:
OpenCV Birdseye view without loss of data
using just bullet #2, but for that you need to take into account the type of images you will have, and adjust the detector or add preprocessing for it ...
Now back to your offsetting: you can simply offset your cells up by the visual size of the cell, like this:
And handle the left side points (either interpolate the size or use the same as the neighboring cell). That should work unless too weird an angle of the board is used.
Given two rectangles, we know the positions of the four corners, the widths, the heights, and the angles.
How to compute the overlapping ratio of these two rectangles?
Can you please help me out?
A convenient way is by the Sutherland-Hodgman polygon clipping algorithm. It works by clipping one of the polygons with the four supporting lines (half-planes) of the other. In the end you get the intersection polygon (at worst an octagon) and find its area by the polygon area formula.
You'll make clipping easier by counter-rotating the polygons around the origin so that one of them becomes axis parallel. This won't change the area.
Note that this approach generalizes easily to two arbitrary convex polygons, taking O(N·M) operations. G.T. Toussaint, using the Rotating Caliper principle, reduced the workload to O(N+M), and B. Chazelle & D. P. Dobkin showed that a nonempty intersection can be detected in O(log(N+M)) operations. So in principle there is a little room for improvement over the S-H clipping approach, even though N=M=4 is a tiny problem.
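A compact sketch of the S-H clipping plus the polygon area formula, assuming both rectangles are given as lists of (x, y) vertices in counter-clockwise order:

def clip(subject, a, b):
    # keep the part of `subject` on the left of the directed line a -> b
    out = []
    for i in range(len(subject)):
        p, q = subject[i - 1], subject[i]
        sp = (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])
        sq = (b[0] - a[0]) * (q[1] - a[1]) - (b[1] - a[1]) * (q[0] - a[0])
        if sq >= 0:
            if sp < 0:
                out.append(cross_point(p, q, a, b))
            out.append(q)
        elif sp >= 0:
            out.append(cross_point(p, q, a, b))
    return out

def cross_point(p, q, a, b):
    # intersection of segment p-q with the infinite line through a and b
    den = (p[0] - q[0]) * (a[1] - b[1]) - (p[1] - q[1]) * (a[0] - b[0])
    t = ((p[0] - a[0]) * (a[1] - b[1]) - (p[1] - a[1]) * (a[0] - b[0])) / den
    return (p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1]))

def area(poly):
    # shoelace formula; positive for counter-clockwise vertex order
    return 0.5 * sum(poly[i - 1][0] * poly[i][1] - poly[i][0] * poly[i - 1][1]
                     for i in range(len(poly)))

def intersection_area(rect_a, rect_b):
    poly = rect_a
    for i in range(len(rect_b)):
        poly = clip(poly, rect_b[i - 1], rect_b[i])
        if not poly:
            return 0.0
    return area(poly)

The overlap ratio is then intersection_area(A, B) / area(A), or divided by the union area, depending on which ratio you need.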
Use the rotatedRectangleIntersection function to get the intersection contour, then use the contourArea function to get its area and compute the ratios:
https://docs.opencv.org/3.0-beta/modules/imgproc/doc/structural_analysis_and_shape_descriptors.html#rotatedrectangleintersection
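A sketch of that route; the rectangle values are placeholders, and a rotated rect is passed as ((center_x, center_y), (width, height), angle_in_degrees):

import cv2

rect_a = ((100, 100), (80, 40), 15.0)
rect_b = ((110, 95), (70, 50), -30.0)

ret, region = cv2.rotatedRectangleIntersection(rect_a, rect_b)
if ret != cv2.INTERSECT_NONE:
    hull = cv2.convexHull(region)          # order the points into a contour
    inter = cv2.contourArea(hull)
    ratio_a = inter / (rect_a[1][0] * rect_a[1][1])  # overlap relative to A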
Let's say you have rectangles A and B; then you can use the operation:
intersection_area = (A & B).area();
From this area you can calculate the respective ratio towards one of the rectangles. There will be harder, more dynamic ways to do this as well.
I have an image with free-form curved lines (actually lists of small line-segments) overlaid onto it, and I want to generate some kind of image warp that will deform the image in such a way that these curves are deformed into horizontal straight lines.
I already have the coordinates of all the line-segment points stored separately so they don't have to be extracted from the image. What I'm looking for is an appropriate method of warping the image such that these lines are warped into straight ones.
thanks
You can use methods similar to those developed here:
http://www-ui.is.s.u-tokyo.ac.jp/~takeo/research/rigid/
What you do, is you define an MxN grid of control points which covers your source image.
You then need to determine how to modify each of your control points so that the final image will minimize some energy function (minimum curvature or something of this sort).
The final image is a linear warp determined by your control points (think of it as a 2D mesh whose texture is your source image and whose vertices' positions you're about to modify).
As long as your energy function can be expressed using linear equations, you can solve your problem globally (figuring out where to send each control point) using a linear solver.
You express each of your source points (those which lie on your curved lines) using bi-linear interpolation weights of their surrounding grid points, then you express your restriction on the target by writing equations for these points.
After solving these linear equations you end up with destination grid points, then you just render your 2D mesh with the new vertices' positions.
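A small least-squares version of this, restricted to vertical motion of the grid for brevity; the grid spacing h, the smoothness weight, and the `curves` input (a list of (points, target_y) pairs) are all illustrative, and curve points are assumed to lie strictly inside the grid:

import numpy as np

def straighten_grid(width, height, h, curves, smooth=1.0):
    gx = np.arange(0, width + h, h)   # control-point columns
    gy = np.arange(0, height + h, h)  # control-point rows (rest positions)
    nx, ny = len(gx), len(gy)
    idx = lambda i, j: i * ny + j     # flatten (col, row) to an unknown index

    rows, cols, vals, rhs = [], [], [], []
    eq = 0
    # data terms: each curve point's bilinearly interpolated y hits target_y
    for pts, target_y in curves:
        for px, py in pts:
            i, j = int(px // h), int(py // h)
            u, v = px / h - i, py / h - j
            for k, w in ((idx(i, j), (1 - u) * (1 - v)),
                         (idx(i + 1, j), u * (1 - v)),
                         (idx(i, j + 1), (1 - u) * v),
                         (idx(i + 1, j + 1), u * v)):
                rows.append(eq); cols.append(k); vals.append(w)
            rhs.append(target_y)
            eq += 1
    # smoothness terms: vertical neighbors keep their rest spacing h
    for i in range(nx):
        for j in range(ny - 1):
            rows += [eq, eq]
            cols += [idx(i, j + 1), idx(i, j)]
            vals += [smooth, -smooth]
            rhs.append(smooth * h)
            eq += 1

    A = np.zeros((eq, nx * ny))
    A[rows, cols] = vals
    Y, *_ = np.linalg.lstsq(A, np.array(rhs), rcond=None)
    return Y.reshape(nx, ny)          # new y position of each control point

Rendering is then just drawing the 2D mesh with the original x positions and these new y positions, with the source image as its texture.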
You need to start out with a mapping formula that given an output coordinate will provide the corresponding coordinate from the input image. Depending on the distortion you're trying to correct for, this can get exceedingly complex; your question doesn't specify the problem in enough detail. For example, are the curves at the top of the image the same as the curves on the bottom and the same as those in the middle? Do horizontal distances compress based on the angle of the line? Let's assume the simplest case where the horizontal coordinate doesn't need any correction at all, and the vertical simply needs a constant correction based on the horizontal. Here x,y are the coordinates on the input image, x',y' are the coordinates on the output image, and f() is the difference between the drawn line segment and your ideal straight line.
x = x'
y = y' + f(x')
Now you simply go through all the pixels of your output image, calculate the corresponding point in the input image, and copy the pixel. The wrinkle here is that your formula is likely to give you points that lie between input pixels, such as y=4.37. In that case you'll need to interpolate to get an intermediate value from the input; there are many interpolation methods for images and I won't try to get into that here. The simplest would be "nearest neighbor", where you simply round the coordinate to the nearest integer.
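That loop, in a nearest-neighbour Python sketch; `f` is a placeholder for the measured deviation of the drawn line from the ideal straight one:

import numpy as np

def straighten(src, f):
    # src: 2-D grayscale array; f(x) is the vertical offset at column x
    h, w = src.shape
    dst = np.zeros_like(src)
    for yp in range(h):                  # (x', y') walks the output image
        for xp in range(w):
            x = xp                       # x = x'
            y = int(round(yp + f(xp)))   # y = y' + f(x'), rounded to nearest
            if 0 <= y < h:
                dst[yp, xp] = src[y, x]  # copy the corresponding input pixel
    return dst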
I have a set of points to define a shape. These points are in order and essentially are my "selection".
I want to be able to contract this selection by an arbitrary amount to get a smaller version of my original shape.
In a basic example with a triangle, the points are simply moved along their normal which is defined by the points to the left and the right of the points in question.
Eventually all 3 points will meet and form one point but until that point they will make a smaller and smaller triangle.
For more complex shapes, when moving the individual points inward, they may pass through the outer edge of the shape resulting in weird artifacts. Obviously I'll need to cull these points and remove them from the array.
Any help in exactly how I can do that would be greatly appreciated.
Thanks!
This is just an idea but couldn't you find the center of mass of the object, create a vector from the center to each point, and move each point along this vector?
Finding the center of mass would of course involve averaging the x and y coordinates. Getting a vector is as simple as subtracting the center point from the point in question. Normalizing and scaling are common vector operations whose details can be found with Google.
EDIT
Another way to interpret what you're asking is that you want to erode your collection of points, as in morphological erosion. This is typically applied to binary images, but you can slightly modify the concept to work with a collection of points. Essentially, you need to write a function that, given a point, will return true (black) or false (white) depending on whether that point is inside or outside the shape defined by your points. You'd have to look up how to do that for shapes that aren't convex (it's harder but not impossible).
Now, obviously, every single one of your actual points will return false because they're all on the border (by definition). However, you now have a matrix of points around your point of interest that defines where is "inside" and where is "outside". Average all of the "inside" points and move your actual point along the vector from itself towards this average. You could play with different erosion kernels to see what works best.
You could even work with a kernel with floating point weights instead of either/or values which will affect your average calculation proportional to their weights. With this, you could approximate a circular kernel with a low number of points. Try the simpler method first.
Find the selection center (as suggested by colithium)
Map the selection points to the coordinate system with the selection center at (0,0). For example, if the selection center is at (150,150), and a given selection point is at (125,75), the mapped position of the point becomes (-25,-75).
Scale the mapped points (multiply X and Y by something in the range of 0.0..1.0)
Remap the points back to the original coordinate system
Only simple maths required; no need to muck about with normalizing vectors.
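Those four steps in a few lines of Python; `points` is an (N, 2) array of selection points and `factor` is the 0.0..1.0 scale:

import numpy as np

def contract(points, factor):
    center = points.mean(axis=0)     # selection center (average of the points)
    mapped = points - center         # map so the center sits at (0, 0)
    return mapped * factor + center  # scale, then map back

Note that uniform scaling is not the same as insetting every edge by a fixed distance, but for shrinking a selection as described in the question it is usually close enough.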