How to measure ratio between lines in a photo - opencv

I'm working with OpenCV on a task of measuring the solar angle in a photo (without any camera parameters). In the photo there is a straight stick, 3 meters tall, standing in the middle of a field. The shadow it casts, however, lies obliquely on the ground (not in the same projection plane as the stick). I can obtain the pixel lengths of the stick and the shadow, but I don't know whether the ratio can be computed directly from those two numbers, since only lines within the same projection plane share the same scale.
This is more of a geometry problem than an algorithmic one. Can anyone shed some light on how to determine the height-to-shadow ratio?

Related

Image Processing (oblique image)

I hope you can give me some suggestions. I want to semantically segment an image of cyanobacteria on a lake and calculate the cyanobacteria area in the image. Since the image was taken at an angle (the camera is not vertical), how should I preprocess it so that the actual area can be computed accurately from pixels? The image is as follows.
You can't make calibrated measurements (in true units of area) without knowing the scaling factors. So you should let a calibration target float on the water, wholly in the field of view*.
If the viewing distance is sufficiently large that the perspective effect can be neglected, the transformation is affine and it suffices to take the ratio of the apparent area of the cyanobacteria (in pixels) over the apparent area of the target (in pixels), times the true area of the target**.
If the perspective is strong, the transformation is a homography, and things get a little more complicated. From four points of the target (say, its corners), you can obtain the coefficients of the homography that maps the viewed points to undistorted space. Then you need to undistort the cyanobacteria area outline (as a polygon), and you can compute its area with the shoelace formula.
You can also completely straighten the image before segmentation, though this is not really necessary.
*You could think of obtaining the scaling factors from the viewing angles and distance, but that method would be impractical to use in the field.
**Take a picture of a large square. If it appears like a parallelogram, you are good. If like a general quadrilateral, perspective must be corrected.

Cropping out Extreme Distortion from a Homography

I have a picture of a checkerboard taken from an arbitrary camera angle. I find the two vanishing points corresponding to the two sets of lines that form the checkerboard grid. From these two vanishing points, I compute a homography from the checkerboard plane to the image plane.
I then apply the inverse homography to re-render the checkerboard from a top view. However, for certain images, the re-rendered top view is very large. That is, due to the camera angle, the inverse homography stretches certain parts of the image (i.e. the regions of the image that are very close to one of the vanishing points) to be very large.
This takes up an unnecessarily large amount of memory, and most of the region that becomes highly stretched is stuff I do not need. So, when applying the inverse homography, I would like to avoid rendering regions of the image that will be highly stretched. What is a good way to do this?
(I am coding in MATLAB)
If you just need to render the checkerboard, without the background, you can extract the four corners of the checkerboard and compute the homography that maps them to the four corners of a square.
Then you can obtain a rectified image of the checkerboard by warping your input image with this homography, taking care to render only the needed region (i.e. the square onto which you map the checkerboard).

Pixel-Millimeter Proportion

I have a digital image, and I want to make some calculations based on distances in it, so I need the millimeter/pixel ratio. What I'm doing right now is to mark two points whose real-world distance I know, calculate the Euclidean distance between them in pixels, and then obtain the ratio.
The question is: can I get the correct millimeter/pixel ratio with only two points, or do I need four points, two for the X axis and two for the Y axis?
If your image is of a flat surface and the camera direction is perpendicular to that surface, then your scale factor should be the same in both directions.
If your image is of a flat surface, but it is tilted relative to the camera, then marking out a rectangle of known proportions on that surface would allow you to compute a perspective transform. (See for example this question)
If your image is of a 3D scene, then of course there is no way in general to convert pixels to distances.
If you know the real distance between the points A and B (say, in inches) and you also know the number of pixels between them, you can easily calculate the pixels/inch ratio by dividing <pixels>/<inches>.
I suggest picking the points so that the line through them is either horizontal or vertical, so the calculation is not thrown off if the pixels are rectangular rather than square.
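For the fronto-parallel case described above, the two-point version is a one-liner. The coordinates and the known distance below are made-up values:

```python
import math

# Two marked points in pixel coordinates (hypothetical values)
p1 = (120.0, 340.0)
p2 = (620.0, 340.0)
known_mm = 250.0  # real-world distance between them, in millimeters

# Euclidean distance in pixels between the two marked points
pixel_dist = math.hypot(p2[0] - p1[0], p2[1] - p1[1])  # 500.0 px here
mm_per_pixel = known_mm / pixel_dist                   # 0.5 mm/px here

# Any other pixel distance on the same fronto-parallel plane converts as:
length_mm = 300.0 * mm_per_pixel  # 150.0 mm
```

If the surface is tilted, this single scale factor is no longer valid and the rectangle-based perspective transform mentioned above is needed instead.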

How to determine the distance of a (skewed) rectangular target from a camera

I have a photograph containing multiple rectangles of various sizes and orientations. I am currently trying to find the distance from the camera to any rectangles present in the image. What is the best way to accomplish this?
For example, a photograph might look similar to this (although this is probably very out of proportion):
I can find the pixel coordinates of the corners of any of the rectangles in the image, along with the camera FOV and resolution. I also know beforehand the length and width of any rectangle that could be in the image (but not what angle they face the camera). The ratio of length to width of each rectangular target that could be in the image is guaranteed to be unique. The rectangles and the camera will always be parallel to the ground.
What I've tried:
I hacked together a solution based on some example code I found on the internet. I'm basically iterating through each rectangle and finding its average pixel length and height. I then use this to find the length-to-height ratio and compare it against a list of the ratios of all known rectangular targets, so I can find the actual height of the target in inches. I then use this information to find the distance:
...where actual_height is the real height of the target in inches, IMAGE_HEIGHT is how tall the image is (in pixels), pixel_height is the average height of the rectangle in the image (in pixels), and VERTICAL_FOV is the camera's vertical field of view in degrees (about 39.75 degrees on my camera).
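The formula itself is not shown above, but given those variable descriptions it is presumably the standard pinhole relation: at distance Z the frame spans 2*Z*tan(FOV/2) vertically, so the target's pixel fraction of the frame gives Z. A sketch with hypothetical measurements:

```python
import math

IMAGE_HEIGHT = 480    # image height in pixels
VERTICAL_FOV = 39.75  # vertical field of view in degrees
actual_height = 20.0  # true target height in inches (hypothetical)
pixel_height = 60.0   # measured rectangle height in pixels (hypothetical)

# pixel_height / IMAGE_HEIGHT = actual_height / (2 * Z * tan(FOV / 2)),
# solved for the distance Z:
distance = (actual_height * IMAGE_HEIGHT) / (
    2.0 * pixel_height * math.tan(math.radians(VERTICAL_FOV / 2.0))
)
```

Note this relation assumes the target is centered and fronto-parallel, which is consistent with the observation below that it breaks down for heavily skewed rectangles.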
I found this formula on the internet, and while it seems to work reasonably well, I don't really understand how it works, and it always seems to undershoot the actual distance by a bit.
In addition, I'm not sure how to modify the formula so that it can deal with rectangles that are heavily skewed by being viewed at an angle. Since my algorithm works off the length-to-height ratio, it works fine for rectangles 1 and 2 (which aren't too skewed), but fails for rectangle 3: it's very skewed, which throws the ratio completely off.
I considered finding the ratio using the method outlined in this StackOverflow question regarding the proportions of a perspective-deformed rectangle, but I wasn't sure how well that would work with what I have, and was wondering if it's overkill or if there's a simpler solution I could try.
FWIW I once did something similar with triangles (full 6DoF pose, not just distance).

Finding distance from camera to object of known size

I am trying to write a program using opencv to calculate the distance from a webcam to a one inch white sphere. I feel like this should be pretty easy, but for whatever reason I'm drawing a blank. Thanks for the help ahead of time.
You can use triangle similarity to calibrate the camera angle and find the distance.
You know your ball's size: D units (e.g. cm). Place it at a known distance Z, say 1 meter = 100cm, in front of the camera and measure its apparent width in pixels. Call this width d.
The focal length of the camera f (which is slightly different from camera to camera) is then f=d*Z/D.
When you see this ball again with this camera, and its apparent width is d' pixels, then by triangle similarity, you know that f/d'=Z'/D and thus: Z'=D*f/d' where Z' is the ball's current distance from the camera.
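The calibration-then-measurement procedure above is just two lines of arithmetic. The calibration width d below is a made-up measurement:

```python
D = 2.54   # true ball diameter, in cm (one inch)
Z = 100.0  # known calibration distance, in cm
d = 24.0   # apparent width in pixels at that distance (hypothetical)

# Calibration: focal length in pixels, f = d * Z / D
f = d * Z / D

# Measurement: the same ball later appears d_new pixels wide,
# so by triangle similarity its distance is Z_new = D * f / d_new
d_new = 12.0
Z_new = D * f / d_new  # half the apparent width -> twice the distance
```

Since the ball's apparent width halved, the recovered distance doubles to about 200 cm, which is a quick sanity check on the formula.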
To my mind, you will need a camera model, i.e. a calibration, if you want to measure distance or anything else in real-world units.
The pinhole camera model is simple, linear, and gives good results (but won't correct distortions, whether radial or tangential).
Without calibration you can still compute a disparity/depth map (for instance with stereo vision), but it is relative and doesn't give you an absolute measurement, only which object is behind or in front of another.
Therefore, I think the answer is: you will need to calibrate somehow. For example, you could ask the user to bring the sphere toward the camera until it exactly fills the image plane; with prior knowledge of the ball's size, you could then compute the distance.
Julien,