I want to draw circles in an image canvas. I'm able to get pixel values from coordinate values by calling map.coordinateToPixel.
For radius, how can I map a coordinate distance to pixel length?
For instance, if my radius is 60 arc minutes, the circle spans from 50 degrees to 51 degrees of longitude. In a vector layer, the underlying framework manages the translation to pixels depending on the zoom level. However, for an ImageCanvas, I need to specify that myself. Is there a method to do that? I know I might have to dig into the code, but I was wondering if there is a built-in solution somebody already knows of.
An alternate option I've considered is:
1. Get the coordinate at pixel (0, 0).
2. Get the coordinate at (radiusLongitude, 0).
3. Find the difference in longitude between #2 and #1 and use that as my radius.
Maybe this example can help you: http://acanimal.github.io/thebookofopenlayers3/chapter03_04_imagecanvas.html It draws a set of random pie charts (but without taking pixel ratio into account).
Note that the canvasFunction you use receives five parameters that can help you determine the pixel size: function(extent, resolution, pixelRatio, size, projection)
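If your circle radius is given in map units (for an EPSG:4326 view that means degrees, so 60 arc minutes is 1 degree), a rough way to get the pixel radius inside the canvasFunction is pixelRadius = radiusInMapUnits / resolution * pixelRatio, since the resolution parameter is expressed in map units per pixel. This is only a sketch of the idea; pixelRadius and radiusInMapUnits are illustrative names, not part of the API.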
I need to find the rotation angle between two binary images, so I can correct the rotation by rotating the images by that angle. Can someone help please?
I already tried the principal-axis rotation angle, but it doesn't give accurate results. Can someone suggest a better method? Also, the image can be anything; it need not be the image I uploaded here. But all the images are binary.
Threshold the source image.
Apply a thinning algorithm as described here.
Find contours and apply approxPolyDP.
Now, for each pair of consecutive points, calculate the angle:
double angle = atan2(p1.y - p2.y, p1.x - p2.x)
Do the same for the second image and calculate the difference in angle.
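A minimal sketch of those steps, assuming the OpenCV 2.x C++ API, that thresholding and thinning have already produced a clean 8-bit binary image, and a made-up helper name outlineAngle:
#include <opencv2/opencv.hpp>
#include <cmath>
#include <vector>

// Angle (in radians) of the longest segment of the approximated outline.
// Run it on both images and subtract the results to get the rotation.
double outlineAngle(const cv::Mat& binaryThinned)
{
    std::vector<std::vector<cv::Point> > contours;
    cv::findContours(binaryThinned.clone(), contours,
                     CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
    if (contours.empty()) return 0.0;

    // Simplify the first external contour (enough for a single-object image).
    std::vector<cv::Point> approx;
    cv::approxPolyDP(contours[0], approx, 5.0, true);

    // Angle of each pair of consecutive points; keep the longest segment's angle.
    double bestLen = 0.0, bestAngle = 0.0;
    for (size_t i = 1; i < approx.size(); ++i)
    {
        const cv::Point& p1 = approx[i - 1];
        const cv::Point& p2 = approx[i];
        double dx = p1.x - p2.x, dy = p1.y - p2.y;
        double len = std::sqrt(dx * dx + dy * dy);
        if (len > bestLen)
        {
            bestLen = len;
            bestAngle = std::atan2(dy, dx);   // same formula as above
        }
    }
    return bestAngle;
}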
For each image:
Threshold the image so that object pixels are non-zero and background pixels are zero.
Find the convex hull of the non-zero pixels (you may use any method to reduce the number of points used to calculate the convex hull, such as first finding contours; the main idea is to obtain the convex hull).
Calculate the minimum-area rectangle using minAreaRect; it returns a RotatedRect object (in C++) that contains the rotation angle.
Then take the difference between the two angles.
Note: this approach will not work if the resulting min-area rectangle happens to return the same angle even though the object rotations differ. Therefore, I feel it is better to use other measures, such as moments of the filled convex hull, to calculate the rotation: http://en.wikipedia.org/wiki/Image_moment
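A minimal sketch of the min-area-rectangle approach above, assuming the OpenCV 2.x C++ API and an 8-bit grayscale input; the helper name objectAngle is made up:
#include <opencv2/opencv.hpp>
#include <vector>

// Orientation of the binary object: angle of the min-area rectangle around
// its convex hull. Run it on both images and take the difference.
double objectAngle(const cv::Mat& gray)
{
    cv::Mat binary;
    cv::threshold(gray, binary, 0, 255, CV_THRESH_BINARY | CV_THRESH_OTSU);

    // Collect the non-zero (object) pixel locations.
    std::vector<cv::Point> points;
    for (int y = 0; y < binary.rows; ++y)
        for (int x = 0; x < binary.cols; ++x)
            if (binary.at<uchar>(y, x))
                points.push_back(cv::Point(x, y));

    // Convex hull of the object pixels, then its minimum-area rectangle.
    std::vector<cv::Point> hull;
    cv::convexHull(points, hull);
    cv::RotatedRect box = cv::minAreaRect(hull);
    return box.angle;   // rotation = objectAngle(img1) - objectAngle(img2)
}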
What is the distance transform? What is the theory behind it? If I have two similar images but in different positions, how does the distance transform help in overlapping them? The results that the distance transform function produces look like they are divided in the middle. Is it meant to find the center of one image so that the other can be overlapped halfway? I have looked into the OpenCV documentation but it is still not clear to me.
Look at the picture below (you may want to increase your monitor brightness to see it better). The picture shows the distance from the red contour depicted with pixel intensities, so in the middle of the image, where the distance is maximum, the intensities are highest. This is a manifestation of the distance transform. Here is an immediate application: the green shape is a so-called active contour, or snake, that moves according to the gradient of distances from the contour (while also following some other constraints) and curls around the red outline. Thus one application of the distance transform is shape processing.
Another application is text recognition: one of the powerful cues for text is a stable stroke width. The distance transform run on segmented text can confirm this. A corresponding method is called the stroke width transform (SWT).
As for aligning two rotated shapes, I am not sure how you can use the distance transform. You can find the center of a shape to rotate it about, but you can also rotate it about any other point; the difference will be just a translation, which is irrelevant if you run matchTemplate to match them in the correct orientation.
Perhaps if you upload your images it will be clearer what to do. In general you can match them as a whole, or by features (which is more robust to various deformations or perspective distortions), or even using outlines/silhouettes if there are only a few features. Finally, you can figure out the orientation of your object (if it has a dominant orientation) by running PCA or fitting an ellipse (as a rotated rectangle):
cv::RotatedRect rect = cv::fitEllipse(points2D);
float angle_to_rotate = rect.angle;
The distance transform is an operation on a single binary image that fundamentally measures, for every empty point (zero pixel), the distance to the nearest boundary point (non-zero pixel).
An example is provided here and here.
The measurement can be based on various definitions, calculated discretely or precisely: e.g. Euclidean, Manhattan, or Chessboard. Indeed, the parameters in the OpenCV implementation allow some of these, and control their accuracy via the mask size.
The function can return the output measurement image (floating point) - as well as a labelled connected components image (a Voronoi diagram). There is an example of it in operation here.
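A minimal sketch of calling it, assuming the OpenCV 2.x C++ API and a hypothetical input file shape.png containing a bright object on a dark background (note that OpenCV measures, for each non-zero pixel, the distance to the nearest zero pixel):
#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat gray = cv::imread("shape.png", CV_LOAD_IMAGE_GRAYSCALE);
    if (gray.empty()) return 1;

    cv::Mat binary;
    cv::threshold(gray, binary, 128, 255, CV_THRESH_BINARY);

    // Distance image (float) plus the label image (the Voronoi-style partition).
    cv::Mat dist, labels;
    cv::distanceTransform(binary, dist, labels, CV_DIST_L2, 5);

    // Scale to [0,1] for display: intensities peak farthest from the boundary.
    cv::normalize(dist, dist, 0.0, 1.0, cv::NORM_MINMAX);
    cv::imshow("distance transform", dist);
    cv::waitKey();
    return 0;
}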
I see from another question you asked recently that you are looking to register two images together. I don't think the distance transform is really what you are looking for here. If you are looking to align a set of points, I would instead suggest you look at techniques like Procrustes analysis, Iterative Closest Point, or RANSAC.
I am trying to put thresholds on the aspect ratios of rotated rectangles obtained around certain objects in the image using OpenCV. To compare the aspect ratio of a rotated rectangle with the threshold, I need to take the ratio of the longer dimension and the shorter dimension of the rotated rectangle.
I am confused in this regard: what is the convention in OpenCV? Is rotatedRectangle.size.width always smaller than rotatedRectangle.size.height? That is, is the width of a rotated rectangle always assigned to the smaller of the two dimensions of the rotated rectangle in OpenCV?
I tried running some code to find an answer, and it seems like rotatedRectangle.size.width is actually the smaller dimension of a RotatedRect. But I would still like confirmation from anyone who has encountered something similar.
EDIT: I am using fitEllipse to get the rotated rectangle and my version of OpenCV is 2.4.1.
Please help!
There is no convention for a rotated rectangle per se; as the documentation says:
The class represents rotated (i.e. not up-right) rectangles on a plane. Each rectangle is specified by the center point (mass center), length of each side (represented by cv::Size2f structure) and the rotation angle in degrees.
However, you don't specify what function or operation is creating your rotated rects - for example, if you used fitEllipse it may be that there is some internal detail of the algorithm that prefers to use the larger (or smaller) dimension as the width (or height).
Perhaps you could comment or edit your question with more information. As it stands, if you want the ratio of the longer:shorter dimensions, you will need to specifically test which is longer first.
EDIT
After looking at the OpenCV source code, I found that the fitEllipse function contains the following code:
if( box.size.width > box.size.height )
{
    float tmp;
    CV_SWAP( box.size.width, box.size.height, tmp );
    box.angle = (float)(90 + rp[4]*180/CV_PI);
}
So, at least for this implementation, it seems that width is always taken as the shorter dimension. However, I wouldn't rely on that staying true in a future implementation.
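If all you need is the longer:shorter ratio for thresholding, a small helper avoids relying on that detail at all; a sketch (the name aspectRatio is made up):
#include <opencv2/opencv.hpp>
#include <algorithm>

// Ratio of the longer to the shorter dimension, independent of which one
// OpenCV happened to store as width.
float aspectRatio(const cv::RotatedRect& rect)
{
    float longer  = std::max(rect.size.width, rect.size.height);
    float shorter = std::min(rect.size.width, rect.size.height);
    return longer / shorter;   // always >= 1, safe to compare against a threshold
}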
I have a digital image, and I want to make some calculations based on distances in it, so I need to get the millimeter/pixel proportion. What I'm doing right now is marking two points for which I know the real-world distance, calculating the Euclidean distance between them, and then obtaining the proportion.
The question is: can I get the correct millimeter/pixel proportion with only two points, or do I need to use four points, two for the X-axis and two for the Y-axis?
If your image is of a flat surface and the camera direction is perpendicular to that surface, then your scale factor should be the same in both directions.
If your image is of a flat surface, but it is tilted relative to the camera, then marking out a rectangle of known proportions on that surface would allow you to compute a perspective transform. (See for example this question)
If your image is of a 3D scene, then of course there is no way in general to convert pixels to distances.
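For the tilted flat surface case, here is a rough sketch of the perspective-transform idea with OpenCV, using made-up corner coordinates for a marked 200 mm x 100 mm rectangle:
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main()
{
    // Corners of the marked rectangle as clicked in the image (hypothetical values).
    cv::Point2f imagePts[4] = { cv::Point2f(102, 230), cv::Point2f(540, 210),
                                cv::Point2f(560, 470), cv::Point2f(90, 480) };
    // The same corners expressed in millimetres on the surface.
    cv::Point2f worldPts[4] = { cv::Point2f(0, 0),     cv::Point2f(200, 0),
                                cv::Point2f(200, 100), cv::Point2f(0, 100) };

    // Homography mapping image pixels to millimetres on the surface.
    cv::Mat H = cv::getPerspectiveTransform(imagePts, worldPts);

    // Map any other image point onto the surface.
    std::vector<cv::Point2f> src(1, cv::Point2f(300, 350)), dst;
    cv::perspectiveTransform(src, dst, H);
    std::cout << "point in mm: " << dst[0].x << ", " << dst[0].y << std::endl;
    return 0;
}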
If you know the distance between points A and B measured on the picture (say in inches) and you also know the number of pixels between the points, you can easily calculate the pixels/inch ratio by dividing <pixels>/<inches>.
I suggest taking the points on the picture such that the line through them is either horizontal or vertical, so that the calculation is not affected by the pixels having a rectangular (non-square) form.
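A minimal sketch of the two-point calculation (the helper name mmPerPixel is made up):
#include <cmath>

// Scale factor from two marked pixel positions whose real-world separation is known.
double mmPerPixel(double x1, double y1, double x2, double y2, double realDistanceMm)
{
    double dx = x2 - x1, dy = y2 - y1;
    double pixelDistance = std::sqrt(dx * dx + dy * dy);   // Euclidean distance in pixels
    return realDistanceMm / pixelDistance;                 // millimetres per pixel
}
If your pixels may be non-square, compute it separately along a horizontal line and a vertical line, as suggested above.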
All I know is the height and width of an object in the video. Can someone guide me on how to calculate the distance of a detected object from the camera in a video using C or C++? Is there any algorithm or formula to do that?
Thanks in advance.
Martin Ch was correct in saying that you need to calibrate your camera, but as vasile pointed out, it is not a linear change. Calibrating your camera means finding this matrix:
camera_matrix = [ fx, 0,  cx,
                  0,  fy, cy,
                  0,  0,  1  ];
This matrix operates on a 3-dimensional coordinate (x, y, z) and converts it into a 2-dimensional homogeneous coordinate. To convert to your regular Euclidean (x, y) coordinate, just divide the first and second components by the third. So now, what are those variables doing?
cx/cy: They exist to let you change coordinate systems if you like. For instance you might want the origin in camera space to be in the top left of the image and the origin in world space to be in the center. In that case
cx = -width/2;
cy = -height/2;
If you are not changing coordinate systems just leave these as 0.
fx/fy: These specify your focal length in units of x pixels and y pixels; these are very often close to the same value, so you may be able to just give them the same value f. These parameters essentially define how strong perspective effects are. The mapping from a world coordinate to a screen coordinate (as you can work out for yourself from the above matrix), assuming no cx and cy, is
xsc = fx*xworld/zworld;
ysc = fy*yworld/zworld;
As you can see, the important quantity that makes things bigger closer up and smaller farther away is the ratio f/z. It is not linear, but by using homogeneous coordinates we can still use linear transforms.
In short: with a calibrated camera and a known object size in world coordinates, you can calculate its distance from the camera. If you are missing either one of those, it is impossible. Without knowing the object size in world coordinates, the best you can do is map its screen position to a ray in world coordinates by determining the ratio xworld/zworld (knowing fx).
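A minimal sketch of that relation, assuming a calibrated fy and a known real-world object height (the helper name is made up):
// From ysc = fy * yworld / zworld it follows that zworld = fy * yworld / ysc.
double distanceFromCamera(double fy,          // focal length in pixel units (from calibration)
                          double realHeight,  // object height in world units, e.g. metres
                          double pixelHeight) // object height measured in the image, in pixels
{
    return fy * realHeight / pixelHeight;     // distance in the same units as realHeight
}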
I don't think it is easy if you have to use the camera only. Consider using a third device/sensor, such as a Kinect or a stereo camera; then you will get the depth (z) from its data.
https://en.wikipedia.org/wiki/OpenNI