OpenCV: measuring distance between two balls in millimeters - how to improve accuracy

I also posted this topic in the Q&A forum at opencv.org, but I don't know how many experts from here read that forum - so forgive me for also trying it here.
I'm currently learning OpenCV and my current task is to measure the distance between two balls which are lying on a plate. My next step is to compare several cameras and resolutions to get a feeling for how important resolution, noise, distortion etc. are and how heavily these parameters affect the accuracy. If the community is interested, I'm happy to share the results when they are ready! The camera is placed above the plate using a wide-angle lens. The width and height of the plate (1500 x 700 mm) and the radius of the balls (40 mm) are known.
My steps so far:
camera calibration
undistorting the image (the distortion is high due to the wide-angle lens)
findHomography: I use the corner points of the plate as input (4 points in pixels in the undistorted image) and the corner points in millimeters (starting with 0,0 in the lower left corner, up to 1500,700 in the upper right corner)
using HoughCircles to find the balls in the undistorted image
applying perspectiveTransform on the circle center points => circle center points now exist in millimeters
calculating the distance of the two center points: d = sqrt((x1-x2)^2+(y1-y2)^2)
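A minimal sketch of these steps in Python (the calibration data and all pixel values below are placeholders - substitute your own results from cv2.calibrateCamera):

    import cv2
    import numpy as np

    # Placeholder calibration results (use your own from cv2.calibrateCamera).
    camera_matrix = np.array([[1000.0, 0, 960], [0, 1000.0, 540], [0, 0, 1]])
    dist_coeffs = np.array([-0.30, 0.08, 0.0, 0.0, 0.0])

    img = cv2.imread("plate.png")
    undistorted = cv2.undistort(img, camera_matrix, dist_coeffs)

    # Plate corners: pixels in the undistorted image vs. millimeters on the plate.
    corners_px = np.float32([[110, 980], [1810, 965], [1795, 120], [125, 105]])
    corners_mm = np.float32([[0, 0], [1500, 0], [1500, 700], [0, 700]])
    H, _ = cv2.findHomography(corners_px, corners_mm)

    # Find the two balls in the undistorted image.
    gray = cv2.cvtColor(undistorted, cv2.COLOR_BGR2GRAY)
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=100,
                               param1=100, param2=30, minRadius=20, maxRadius=80)
    centers = circles[0, :2, :2].reshape(-1, 1, 2).astype(np.float32)

    # Transform the centers to millimeters and compute their distance.
    mm = cv2.perspectiveTransform(centers, H).reshape(-1, 2)
    d = np.linalg.norm(mm[0] - mm[1])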
The results: an error of around 4 mm at a distance of 300 mm, and an error of around 25 mm at a distance of 1000 mm. But if I measure a rectangle which is printed on the plate, the error is smaller than 0.2 mm, so I guess the calibration and undistortion are working well.
I thought about this and figured out three possible reasons:
findHomography was applied to points lying directly on the plate, whereas the center points of the balls should be measured at the equatorial height => how can I change the result of findHomography to achieve this, i.e. to "move" the plane? The radius in mm is known.
the error increases with increasing distance of the ball from the optical center, because the camera does not see the ball from directly above, so the center point in the 2D projection of the image is not the same as in the 3D world - it will be projected further toward the borders of the image. => are there any geometrical operations which I can apply to the found center to correct the value?
during undistortion there's probably a loss of information, because I produce a new undistorted image and go back to pixel accuracy although I have many floating-point values in the distortion coefficients. Should I search for the balls in the distorted image and transform only the center points with the distortion coefficients? But I don't know the code for this task.
I hope someone can help me to improve this and I hope this topic is interesting for other OpenCV-starters.
Thanks and best regards!

Here are some thoughts to help you along... By no means "the answer", though.
First a simple one. If you have calibrated your image in mm at a particular plane that is distance D away, then points that are r closer will appear larger than they are. To get from measured coordinates to actual coordinates, you use
Actual = measured * (D-r)/D
So since the centers of the spheres are radius r above the plane, the above formula should answer part 1 of your question.
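For example, if the camera were (hypothetically) mounted D = 2000 mm above the plate, the factor for r = 40 mm would be (2000 - 40)/2000 = 0.98 - a roughly 2% overestimate, which amounts to about 20 mm at a distance of 1000 mm, the same order as the error you report.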
Regarding the second question: if you think about it, the center of the sphere that you see should be in the right place "in the plane of the center of the sphere", even though you look at it from an angle. Draw yourself a picture to convince yourself this is so.
Third question: if you find the coordinates of the spheres in the distorted image, you should be able to transform them to the corrected image using perspectiveTransform. This may improve accuracy a little bit - but I am surprised at the size of errors you see. How large is a single pixel at the largest distance (1000mm)?
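As a sketch of that last idea: cv2.undistortPoints maps raw pixel coordinates directly into the undistorted image with sub-pixel precision, so you can run HoughCircles on the original image and correct only the centers (the values below are placeholders):

    import cv2
    import numpy as np

    # Placeholder calibration results.
    camera_matrix = np.array([[1000.0, 0, 960], [0, 1000.0, 540], [0, 0, 1]])
    dist_coeffs = np.array([-0.30, 0.08, 0.0, 0.0, 0.0])

    # Hypothetical ball centers found by HoughCircles in the *distorted* image.
    raw_centers = np.float32([[812.4, 303.1], [1410.7, 355.9]]).reshape(-1, 1, 2)

    # Passing P=camera_matrix keeps the result in pixel coordinates of the
    # undistorted image; without P you would get normalized coordinates.
    centers = cv2.undistortPoints(raw_centers, camera_matrix, dist_coeffs,
                                  P=camera_matrix)
    # These can now go straight into cv2.perspectiveTransform with your homography.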
EDIT
You asked about elliptical projections etc. Basically, if you think of the optical center of the camera as a light source, and look at the shadow of the ball onto the plane as your "2D image", you can draw a picture of the rays that just hit the sides of the ball, and determine the different angles:
It is easy to see that P (the midpoint of A and B) is not the same as C (the projection of the center of the sphere). A bit more trig will show you that the error C - (A+B)/2 increases with x (the ball's offset from the optical axis) and decreases with D. If you know A and B you can calculate the correct position of C (given D) from:
C = D * tan( (atan(B/D) + atan(A/D)) / 2 )
The error becomes larger as D is smaller and/or x is larger. Note D is the perpendicular (shortest) distance from the lens to the object plane.
This only works if the camera is acting like a "true lens" - in other words, there is no pincushion distortion, and a rectangle in the image plane maps into a rectangle on the sensor. The above combined with your own idea to fit in the uncorrected ('pixel') space, then transform the centers found with perspectiveTransform, ought to get you all the way there.
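In code, the correction could look like this (a minimal sketch; all numbers are hypothetical):

    from math import atan, tan

    def corrected_center(A, B, D):
        # A, B: where the two grazing rays hit the object plane (mm, measured
        # from the optical axis); D: perpendicular lens-to-plane distance (mm).
        return D * tan((atan(B / D) + atan(A / D)) / 2.0)

    # Hypothetical: silhouette edges at 900 and 1000 mm, camera 2000 mm above.
    print(corrected_center(900.0, 1000.0, 2000.0))  # ~949.5, not the naive 950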
See what you can do with that!

Related

Using flood-fill to detect corners of a rectangle

I am trying to find the corners of a square (potentially rotated) shape, to determine the direction of its primary axes (horizontal and vertical) and to be able to do a perspective transform (straighten it out).
From a prior processing stage I obtain the coordinates of a point (red dot in image) belonging to the shape. Next I do a flood-fill of the shape on a thresholded version of the image to determine its center (not shown) and area, by summing up X and Y of all filled pixels and dividing them by the area (number of pixels filled).
Given this information, what is an easy and reliable way to determine the corners of the shape (blue arrows)?
I was thinking about keeping track of P1, P2, P3, P4 where P1 is (minX, minY), P2 is (minX, maxY), P3 is (maxX, minY) and P4 is (maxX, maxY), so P1 is the point with the smallest value of X encountered, and of all those, the one where Y is smallest too. Then sort them to get a clockwise ordering. But I'm not sure whether this is correct in all cases, or efficient.
PS: I can't use OpenCV.
Looking at your image, the directions of the two axes of the 2D pattern coordinate system can be estimated from a histogram of gradient directions.
When creating such a histogram, four peaks will be found clearly.
If the image is captured from the front (an image without perspective - your image looks like this case), then ideally the angles between adjacent peaks are all 90 degrees.
The directions of the two axes of the pattern coordinate system can be estimated directly from those peaks.
After that, the four corners can simply be estimated from the axis-aligned bounding box (aligned to the estimated axes, of course).
If not (when the image is a picture with perspective), the four peaks indicate which edge lines lie along the axes of the pattern coordinates.
So, for example, you can estimate a corner location as the intersection of the two lines along adjacent edges.
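A rough sketch of that histogram in numpy (no OpenCV needed; the synthetic mask stands in for your flood-filled shape):

    import numpy as np

    # Synthetic stand-in for the flood-filled shape (an axis-aligned square).
    mask = np.zeros((200, 200))
    mask[60:140, 50:150] = 1.0

    gy, gx = np.gradient(mask)
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx)  # gradient direction, -pi..pi

    # Weight directions by gradient magnitude so only edge pixels contribute.
    hist, edges = np.histogram(ang, bins=180, range=(-np.pi, np.pi), weights=mag)

    # The four strongest bins are the edge normals; for a fronto-parallel
    # rectangle they sit roughly 90 degrees apart.
    peaks = np.degrees(edges[np.argsort(hist)[-4:]])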
What I eventually ended up doing is the following:
Trace the edges of the contour using Moore-Neighbour tracing --> this gives me a sequence of points lying on the border of the rectangle.
During the trace, I observe changes in rectangular distance between the first and last points in a sliding window. The idea is inspired by the paper "The outline corner filter" by C. A. Malcolm (https://spie.org/Publications/Proceedings/Paper/10.1117/12.939248?SSO=1).
This gives me accurate results with low computational overhead and little memory.
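For illustration, a simplified sketch of that sliding-window scoring (pure Python; the window half-width w and the Euclidean chord are my simplifications of the paper's filter):

    import math

    def corner_scores(contour, w=10):
        # Slide a window of 2*w traced points along the closed contour and
        # score how sharply the path bends there: on a straight edge the chord
        # between the window's endpoints is about as long as the path itself,
        # while at a corner the chord shortens noticeably.
        n = len(contour)
        scores = []
        for i in range(n):
            x0, y0 = contour[(i - w) % n]
            x1, y1 = contour[(i + w) % n]
            scores.append(2 * w - math.hypot(x1 - x0, y1 - y0))
        return scores

    # Example: trace of a 20x20 axis-aligned square; local maxima of the
    # scores mark its four corners.
    square = ([(x, 0) for x in range(20)]
              + [(19, y) for y in range(1, 20)]
              + [(x, 19) for x in reversed(range(19))]
              + [(0, y) for y in reversed(range(1, 19))])
    scores = corner_scores(square)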

Measure distance to object with a single camera in a static scene

Let's say I am placing a small object on a flat floor inside a room.
First step: Take a picture of the room floor from a known, static position in the world coordinate system.
Second step: Detect the bottom edge of the object in the image and map the pixel coordinate to the object position in the world coordinate system.
Third step: By using a measuring tape measure the real distance to the object.
I could move the small object, repeat these three steps for every pixel coordinate, and create a lookup table (key: pixel coordinate; value: distance). This procedure is accurate enough for my use case. I know that it is problematic if there are multiple objects (one object could cover another).
My question: Is there an easier way to create this lookup table? Accidentally changing the camera angle by a few degrees destroys the hard work. ;)
Maybe it is possible to execute the three steps for a few specific pixel coordinates or positions in the world coordinate system and perform some "calibration" to calculate the distances with the computed parameters?
If the floor is flat, its equation is that of a plane, let
a.x + b.y + c.z = 1
in the camera coordinates (the origin is the optical center of the camera, XY forms the focal plane and Z the viewing direction).
Then a ray from the camera center to a point on the image at pixel coordinates (u, v) is given by
(u, v, f).t
where f is the focal length.
The ray hits the plane when
(a.u + b.v + c.f) t = 1,
i.e. at the point
(u, v, f) / (a.u + b.v + c.f)
Finally, the distance from the camera to the point is
p = √(u² + v² + f²) / (a.u + b.v + c.f)
This is the function that you need to tabulate. Assuming that f is known, you can determine the unknown coefficients a, b, c by taking three non-aligned points, measuring the image coordinates (u, v) and the distances, and solving a 3x3 system of linear equations.
From the last equation, you can then estimate the distance for any point of the image.
The focal distance can be measured (in pixels) by looking at a target of known size, at a known distance. By proportionality, the ratio of the distance over the size is f over the length in the image.
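A sketch of that calibration and tabulation (f and the three measured samples are hypothetical; image coordinates are taken relative to the optical center):

    import numpy as np

    f = 1000.0  # focal length in pixels (hypothetical)

    # Three non-aligned calibration points: image coords (u, v) and the
    # distance p measured with the tape (all values hypothetical).
    samples = [((120.0, 310.0), 2450.0),
               ((-240.0, 150.0), 3100.0),
               ((60.0, -200.0), 5200.0)]

    # Each sample yields one linear equation:
    #   a*u + b*v + c*f = sqrt(u^2 + v^2 + f^2) / p
    M = np.array([[u, v, f] for (u, v), _ in samples])
    rhs = np.array([np.sqrt(u*u + v*v + f*f) / p for (u, v), p in samples])
    a, b, c = np.linalg.solve(M, rhs)

    def distance(u, v):
        # Distance from the camera to the floor point seen at pixel (u, v).
        return np.sqrt(u*u + v*v + f*f) / (a*u + b*v + c*f)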
Most vision libraries (including OpenCV) have built-in functions that will take a few points from a camera reference frame and the related points from a Cartesian plane and generate your warp matrix (a projective transformation) for you. (Some are fancy enough to include non-linear mappings with enough input points, but that brings you back to your time-to-calibrate issue.)
A final note: most vision libraries use some type of grid to calibrate off of, i.e. a checkerboard pattern. If you wrote your calibration to work off such a sheet, then you would only need to measure the distance to one target object, as the transformations would be calculated from the sheet and the target would just provide the world offsets.
I believe what you are after is called a Projective Transformation. The link below should guide you through exactly what you need.
Demonstration of calculating a projective transformation with proper math typesetting on the Math SE.
Although you can solve this by hand and write that into your code... I strongly recommend using a matrix math library (or even writing your own matrix math functions) before resorting to hand-calculating the equations, as you would have to solve them symbolically to turn them into code, and that will be very long-winded and prone to miscalculation.
Here are just a few tips that may help you with clarification (applying it to your problem):
-Your A matrix (source) is built from the 4 xy points in your camera image (pixel locations).
-Your B matrix (destination) is built from your measurements in the real world.
-For fast recalibration, I suggest marking points on the ground to be able to quickly place the cube at the 4 locations (and subsequently get the altered pixel locations in the camera) without having to remeasure.
-You will only have to do steps 1-5 (once) during calibration, after that whenever you want to know the position of something just get the coordinates in your image and run them through step 6 and step 7.
-You will want your calibration points to be as far away from each other as possible (within reason; at extreme distances in a vanishing-point situation, you start rapidly losing pixel density and therefore source-image accuracy). Make sure that no 3 points are collinear (simply put, make your 4 points approximately square, at almost the full span of your camera FOV in the real world).
ps I apologize for not writing this out here, but they have fancy math editing and it looks way cleaner!
Final steps to applying this method to this situation:
In order to perform this calibration, you will have to set a global home position (likely easiest to do this arbitrarily on the floor and measure your camera position relative to that point). From this position, you will need to measure your object's distance from this position in both x and y coordinates on the floor. Although a more tightly packed calibration set will give you more error, the easiest solution may simply be to use a sheet of known dimensions (I am thinking a piece of printer paper, a large board, or something similar). The reason this is easier is that it has built-in axes (i.e. the two sides are orthogonal), so you just use the four corners of the object and canned distances in your calibration. EX: for a piece of paper your points would be (0,0), (0,8.5), (11,8.5), (11,0)
So using those points and the pixels you get will create your transform matrix, but that still just gives you a global x,y position on axes that may be hard to measure on (they may be skewed depending on how you measured/calibrated). So you will need to calculate your camera offset:
object in real world coords (from steps above): x1, y1
camera coords (Xc, Yc)
dist = sqrt( pow(x1-Xc,2) + pow(y1-Yc,2) )
If it is too cumbersome to try to measure the position of the camera from the global origin by hand, you can instead measure the distance to 2 different points and feed those values into the above equation to calculate your camera offset, which you then store and use anytime you want to get the final distance.
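Putting the paper-sheet idea together as a sketch (OpenCV used for concreteness, though any matrix library works; all pixel values and the camera position are placeholders):

    import cv2
    import numpy as np
    from math import hypot

    # One-time calibration: the paper's four corners in the image (pixels)
    # and in world coordinates (inches), axes taken along the paper's sides.
    pixels = np.float32([[402, 355], [380, 512], [745, 530], [760, 370]])
    world = np.float32([[0, 0], [0, 8.5], [11, 8.5], [11, 0]])
    T = cv2.getPerspectiveTransform(pixels, world)

    # Camera position in the same world frame (measured once, hypothetical).
    Xc, Yc = 20.0, -15.0

    def distance_to(px, py):
        # Map an image pixel to world coordinates, then apply the offset.
        wx, wy = cv2.perspectiveTransform(np.float32([[[px, py]]]), T)[0, 0]
        return hypot(wx - Xc, wy - Yc)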
As already mentioned in the previous answers, you'll need a projective transformation or simply a homography. However, I'll consider it from a more practical view and try to summarize it briefly and simply.
So, given the proper homography you can warp your picture of a plane such that it looks like you took it from above (like here). Even simpler, you can transform a pixel coordinate of your image to world coordinates on the plane (the same is done during the warping for each pixel).
A homography is basically a 3x3 matrix, and you transform a coordinate by multiplying it with the matrix. You may now think: wait, a 3x3 matrix and 2D coordinates? You'll need to use homogeneous coordinates.
However, most frameworks and libraries will do this handling for you. What you need to do is find (at least) four points (x/y-coordinates) on your world plane/floor (preferably the corners of a rectangle aligned with your desired world coordinate system), take a picture of them, measure the pixel coordinates, and pass both to the "find-homography" function of your desired computer vision or math library.
In OpenCV that would be findHomography; here is an example (the method perspectiveTransform then performs the actual transformation).
In Matlab you can use something from here. Make sure you are using a projective transformation as transform type. The result is a projective tform, which can be used in combination with this method, in order to transform your points from one coordinate system to another.
In order to transform in the other direction, you just have to invert your homography and use the result instead.
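A minimal sketch of both directions (the point lists are placeholders):

    import cv2
    import numpy as np

    pixel_pts = np.float32([[100, 700], [1200, 720], [1150, 150], [160, 130]])
    world_pts = np.float32([[0, 0], [4.0, 0], [4.0, 3.0], [0, 3.0]])  # meters

    H, _ = cv2.findHomography(pixel_pts, world_pts)  # pixel -> world

    pt = np.float32([[[640.0, 480.0]]])
    world = cv2.perspectiveTransform(pt, H)                   # into world coords
    back = cv2.perspectiveTransform(world, np.linalg.inv(H))  # and back again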

Camera projection for lines orthogonal to camera z-axis

I'm working on an object tracking application using openCV. I want to convert my pixel coordinates to world coordinates to get more meaningful information. I have read a lot about computing the perspective transform matrix, and I know about cv2.solvePnP. But I feel like my case should be special, because I'm tracking a runner on a track and field runway with the runway orthogonal to the camera's z-axis. I will set up the camera to ensure this.
If I just pick two points on the runway edge, I can calculate a linear conversion from pixels to world coords at that specific height (ground level) and distance from the camera (i.e. along that line). Then I reason that the runner will run on a line parallel to the runway at a different height and slightly different distance from the camera, but the lines should still be parallel in the image, because they will both be orthogonal to the camera z-axis. With all those constraints, I feel like I shouldn't need the normal number of points to track the runner on that particular axis. My gut says that 2-3 should be enough. Can anyone help me nail down the method here? Am I completely off track? With both height and distance from camera essentially fixed, shouldn't I be able to work with a much smaller set of correspondences?
Thanks, Bill
So, I think I've answered this one myself. It's true that only two correspondence points are needed given the following assumptions.
Assume:
World coordinates are set up with X-axis and Y-axis parallel to the ground plane. X-axis is parallel to the runway.
Camera is translated and possibly rotated about the X-axis (angled downward), but with no rotation around the Y-axis (camera plane parallel to the runway and X-axis) or Z-axis (camera is level with respect to the ground).
Camera intrinsic parameters are known from camera calibration.
Method:
Pick two points in the ground plane with known coordinates in world and image. For example, two points on the runway edge as mentioned in the original post. The line connecting the points in world coordinates should not be parallel with either the X or Z axis.
Since Y=0 for these points, ignore the second column of the rotation/translation matrix, reducing the projection to a planar homography transform (3x3 matrix). Now we have 9 degrees of freedom.
The rotation assumptions will enforce a certain form on the rotation/translation matrix. Namely, the first column and first row will be the identity (1,0,0). This further reduces the number of degrees of freedom in the matrix to 5.
Constrain the values of the second column of the matrix such that cos^2(theta)+sin^2(theta) = 1. This reduces the number of unknowns to only 4. Two correspondence points will give us the 4 equations we need to calculate the homography matrix for the ground plane.
Factor out the camera intrinsic parameter matrix from the homography matrix, leaving the rotation/translation matrix for the ground plane.
Due to the rotation assumptions made earlier, the ignored column of the rotation/translation matrix can be easily constructed from the third column of the same matrix, which is the second column in the ground plane homography matrix.
Multiply back out with the camera intrinsic parameters to arrive at the final universal projection matrix (from only 2 correspondence points!)
My test implementation has worked quite well. Of course, it's sensitive to the accuracy of the two correspondence points provided, but that's kind of a given.
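For illustration, here is a sketch of steps 5-7, checked against a synthetic homography built under the same assumptions (K, the tilt angle, and the translation are all hypothetical):

    import numpy as np

    # Synthetic setup: intrinsics K, rotation about X only, translation t.
    K = np.array([[1200.0, 0, 640], [0, 1200.0, 360], [0, 0, 1]])
    th = np.deg2rad(15.0)  # hypothetical downward tilt
    c, s = np.cos(th), np.sin(th)
    R = np.array([[1, 0, 0], [0, c, -s], [0, s, c]])
    t = np.array([0.0, -1.5, 10.0])

    # The ground-plane (Y=0) homography that the 2-point solve would produce:
    # world (X, Z, 1) -> image, built from columns 1 and 3 of R.
    H = K @ np.column_stack([R[:, 0], R[:, 2], t])

    # Steps 5-7: factor out K, rebuild the dropped column, reassemble P.
    M = np.linalg.inv(K) @ H
    M /= M[0, 0]              # fix scale: first column of R must be (1, 0, 0)
    r1 = np.array([1.0, 0.0, 0.0])
    r3 = M[:, 1]              # (0, -sin, cos) under the rotation assumptions
    r2 = np.array([0.0, r3[2], -r3[1]])  # reconstructed Y column
    P = K @ np.column_stack([r1, r2, r3, M[:, 2]])  # full 3x4 projection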

Pixel-Millimeter Proportion

I have a digital image, and I want to make some calculations based on distances in it. So I need to get the millimeter/pixel proportion. What I'm doing right now is marking two points whose real-world distance I know, calculating the Euclidean distance between them, and then obtaining the proportion.
The question is: with only two points can I get the correct millimeter/pixel proportion, or do I need to use 4 points, 2 for the X-axis and 2 for the Y-axis?
If your image is of a flat surface and the camera direction is perpendicular to that surface, then your scale factor should be the same in both directions.
If your image is of a flat surface, but it is tilted relative to the camera, then marking out a rectangle of known proportions on that surface would allow you to compute a perspective transform. (See for example this question)
If your image is of a 3D scene, then of course there is no way in general to convert pixels to distances.
If you know the real distance between the points A and B (say in inches) and you also know the number of pixels between the points, you can easily calculate the pixels/inch ratio by dividing <pixels>/<inches>.
I suggest taking the points on the picture such that the line which intersects them is either horizontal or vertical, so that the calculations are not affected by the pixels having a rectangular (non-square) form.
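For completeness, the two-point computation as a sketch (all values are placeholders):

    from math import hypot

    # Two marked points (pixels) whose real-world separation is known.
    (x1, y1), (x2, y2) = (120, 260), (820, 300)
    real_mm = 500.0  # measured distance between them, hypothetical

    mm_per_pixel = real_mm / hypot(x2 - x1, y2 - y1)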

Finding distance from camera to object of known size

I am trying to write a program using opencv to calculate the distance from a webcam to a one inch white sphere. I feel like this should be pretty easy, but for whatever reason I'm drawing a blank. Thanks for the help ahead of time.
You can use triangle similarity to calibrate the camera angle and find the distance.
You know your ball's size: D units (e.g. cm). Place it at a known distance Z, say 1 meter = 100cm, in front of the camera and measure its apparent width in pixels. Call this width d.
The focal length of the camera f (which is slightly different from camera to camera) is then f=d*Z/D.
When you see this ball again with this camera, and its apparent width is d' pixels, then by triangle similarity, you know that f/d'=Z'/D and thus: Z'=D*f/d' where Z' is the ball's current distance from the camera.
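As a sketch (the apparent width d is a hypothetical measurement):

    # Calibration: a ball of known size D at a known distance Z appears
    # d pixels wide in the image.
    D = 2.54   # one-inch ball, in cm
    Z = 100.0  # calibration distance, in cm
    d = 38.0   # apparent width in pixels (hypothetical)
    f = d * Z / D  # focal length in pixels

    def distance_cm(d_new):
        # Distance to the ball when it appears d_new pixels wide.
        return D * f / d_new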
To my mind you will need a camera model = a calibration model if you want to measure distances or other things in the real world.
The pinhole camera model is simple, linear and gives good results (but it won't correct distortions, whether radial or tangential).
If you don't use that, then you'll only be able to compute a disparity/depth map (for instance if you use stereo vision), but that is relative and doesn't give you an absolute measurement - only what is behind and what is in front of another object...
Therefore, I think the answer is: you will need to calibrate somehow. Maybe you could ask the user to move the sphere toward the camera until the image plane is perfectly filled with the ball; with the ball's size known in advance, you'll then be able to compute the distance...
Julien,
