opencv detect real world coordinates

I have followed the stereo implementation from the book "Learning OpenCV". I have the fundamental, essential, rotation, and translation matrices. How do I calculate the real-world position of an object that I have clicked on?

You can use reprojectImageTo3D.
The reprojectImageTo3D function requires the disparity map, an output array of the same size as the image, and the 4x4 matrix Q (the disparity-to-depth mapping).
Q is computed by the stereoRectify function, which requires the intrinsic camera matrices, the distortion coefficients, and the relative rotation and translation between the two cameras (R and T).
To obtain all of these, use the stereoCalibrate function, which outputs them all.
http://opencv.willowgarage.com/documentation/camera_calibration_and_3d_reconstruction.html - this is the link to all the functions I've mentioned.
So, the steps in order of coding are:
1> stereoCalibrate (use 2 views from the same camera for now): get the distortion coefficients, the intrinsic matrices for both cameras, and R & T. (You have already done this.)
2> Calculate the disparity and store the image - use the disparity calculation shown here (3D reconstruction from 2 images without info about the camera) to understand what kind of input is expected as the first & second arguments to reprojectImageTo3D.
3> Use stereoRectify to get Q.
4> Plug all values into reprojectImageTo3D to generate the coordinates. Every entry in the output array _3dImage, i.e. each pixel, holds 3 values together: the x, y coordinates corresponding to the pixel and the z coordinate calculated from Q. This z coordinate is essentially what we want.
Once you compute the disparity map and get _3dImage using Q, each (i, j) of _3dImage has (x, y, z) values. I even ported it to PCL to view the 3D image.
(Image: the resulting disparity map)
(Image: the corresponding 3D view in PCL)
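A minimal C++ sketch of steps 1-4 glued together. The calibration file name and keys are placeholders for your own stereoCalibrate outputs, and the SGBM parameters are illustrative, not tuned:

    #include <opencv2/opencv.hpp>
    #include <iostream>

    int main() {
        // raw left/right images (placeholder file names)
        cv::Mat left  = cv::imread("left.png",  cv::IMREAD_GRAYSCALE);
        cv::Mat right = cv::imread("right.png", cv::IMREAD_GRAYSCALE);

        // step 1 outputs, loaded from a saved calibration (placeholder keys)
        cv::Mat K1, D1, K2, D2, R, T;
        cv::FileStorage fs("stereo_calib.yml", cv::FileStorage::READ);
        fs["K1"] >> K1; fs["D1"] >> D1; fs["K2"] >> K2; fs["D2"] >> D2;
        fs["R"]  >> R;  fs["T"]  >> T;

        // step 3: stereoRectify gives the disparity-to-depth matrix Q
        cv::Mat R1, R2, P1, P2, Q;
        cv::stereoRectify(K1, D1, K2, D2, left.size(), R, T, R1, R2, P1, P2, Q);

        // rectify both images before computing disparity
        cv::Mat m1x, m1y, m2x, m2y, rleft, rright;
        cv::initUndistortRectifyMap(K1, D1, R1, P1, left.size(), CV_32FC1, m1x, m1y);
        cv::initUndistortRectifyMap(K2, D2, R2, P2, left.size(), CV_32FC1, m2x, m2y);
        cv::remap(left,  rleft,  m1x, m1y, cv::INTER_LINEAR);
        cv::remap(right, rright, m2x, m2y, cv::INTER_LINEAR);

        // step 2: disparity map (SGBM parameters are scene-dependent)
        cv::Ptr<cv::StereoSGBM> sgbm = cv::StereoSGBM::create(0, 64, 9);
        cv::Mat disp16, disp;
        sgbm->compute(rleft, rright, disp16);
        disp16.convertTo(disp, CV_32F, 1.0 / 16.0); // SGBM output is fixed-point

        // step 4: _3dImage holds an (x, y, z) triple at every pixel
        cv::Mat _3dImage;
        cv::reprojectImageTo3D(disp, _3dImage, Q, true);
        cv::Vec3f xyz = _3dImage.at<cv::Vec3f>(100, 200); // point at row 100, col 200
        std::cout << "z at (100,200): " << xyz[2] << std::endl;
        return 0;
    }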

Related

Fundamental matrix from transformation

I have two images with known corresponding 2D points, the intrinsic parameters of the cameras and the 3D transformation between the cameras. I want to calculate the 2D reprojection error from one image to the other.
To do so, I thought about getting a fundamental matrix from the transformation, so I can compute the point-to-line distance between the points and the corresponding epipolar lines. How can I get the fundamental matrix?
I know that E = R * [t] and F = K^(-T) * E * K^(-1), where E is the essential matrix and [t] is the skew-symmetric matrix of the translation vector. However, this returns a null matrix if the motion is a pure rotation (t = [0 0 0]). I know that in this case a homography explains the motion better than the fundamental matrix, so I could compare the norm of the translation vector against a small threshold to choose between a fundamental matrix and a homography. Is there a better way of doing this?
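For concreteness, a direct transcription of those formulas (a sketch, using the E = R * [t] convention above and assuming both cameras share the same K):

    #include <opencv2/opencv.hpp>

    // Build F from a calibrated relative pose; K, R, t are assumed known.
    cv::Matx33d fundamentalFromPose(const cv::Matx33d& K,
                                    const cv::Matx33d& R,
                                    const cv::Vec3d&   t) {
        // [t] is the skew-symmetric matrix with [t] * v == t x v
        cv::Matx33d tx(  0.0, -t[2],  t[1],
                        t[2],   0.0, -t[0],
                       -t[1],  t[0],   0.0);
        cv::Matx33d E = R * tx;            // E = R * [t]
        return K.inv().t() * E * K.inv();  // F = K^(-T) * E * K^(-1)
    }                                      // degenerates to ~0 as t -> 0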
"I want to calculate the 2D reprojection error from one image to the other."
Then go and calculate it. Your setup is calibrated, so you don't need anything other than a known piece of 3D geometry. Forget about the epipolar error, which may as well be undefined if your camera motion is (close to) a pure rotation.
Take an object of known size and shape (for example, a checkerboard), work out its location in 3D space from one camera view (for a checkerboard you can fit a homography between its physical model and its projection, then decompose it into [R|t]). Then project the now-located 3D shape into the other camera given that camera's calibrated parameters, and compare the projection with the object's actual image.
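A hedged sketch of that validation for a chessboard target. Everything here is an assumption rather than given in the thread: objectPoints is the board's physical corner layout, corners1/corners2 are the detected corners in the two views, (K1, D1)/(K2, D2) the calibrated intrinsics, and (R, T) the calibrated camera-1-to-camera-2 transform. solvePnP stands in for the fit-a-homography-and-decompose step, since it handles the planar case:

    #include <opencv2/opencv.hpp>
    #include <cmath>
    #include <vector>

    double reprojectionErrorOnSecondView(
            const std::vector<cv::Point3f>& objectPoints,
            const std::vector<cv::Point2f>& corners1,
            const std::vector<cv::Point2f>& corners2,
            const cv::Mat& K1, const cv::Mat& D1,
            const cv::Mat& K2, const cv::Mat& D2,
            const cv::Mat& R,  const cv::Mat& T) {
        // locate the board in camera 1's frame
        cv::Mat rvec1, tvec1;
        cv::solvePnP(objectPoints, corners1, K1, D1, rvec1, tvec1);

        // compose with the inter-camera transform: X2 = R*(R1*X + t1) + T
        cv::Mat R1;
        cv::Rodrigues(rvec1, R1);
        cv::Mat rvec2, tvec2 = R * tvec1 + T;
        cv::Rodrigues(R * R1, rvec2);

        // project into camera 2 and compare against the detected corners
        std::vector<cv::Point2f> projected;
        cv::projectPoints(objectPoints, rvec2, tvec2, K2, D2, projected);
        return cv::norm(corners2, projected, cv::NORM_L2)
               / std::sqrt((double)projected.size());    // RMS error in pixels
    }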

finding the real world coordinates of an image point

I have been searching lots of resources on the internet for many days, but I couldn't solve the problem.
I have a project in which I am supposed to detect the position of a circular object on a plane. Since it is on a plane, all I need is the x and y position (not z). For this purpose I have chosen to go with image processing. The camera (single view, not stereo) position and orientation are fixed with respect to a reference coordinate system on the plane and are known.
I have detected the image pixel coordinates of the centers of the circles using OpenCV. All I need now is to convert these coordinates to the real world.
http://www.packtpub.com/article/opencv-estimating-projective-relations-images
On this site, and others as well, a projective transformation is given as:
p = C[R|T]P, where P is the real-world coordinate and p is the pixel coordinate (in homogeneous coordinates). C is the camera matrix representing the intrinsic parameters, R is the rotation matrix and T is the translation vector. I followed a tutorial on calibrating the camera with OpenCV (applied the cameraCalibration source file); I have 9 fine chessboard images, and as output I have the intrinsic camera matrix and the translation and rotation parameters for each image.
I have the 3x3 intrinsic camera matrix (focal lengths and center pixels) and a 3x4 extrinsic matrix [R|T], in which R is the left 3x3 block and T is the right 3x1 column. According to the p = C[R|T]P formula, I assume that by multiplying these parameter matrices with P (world) we get p (pixel). But what I need is to project the p (pixel) coordinates back to P (world coordinates) on the ground plane.
I am studying electrical and electronics engineering. I did not take image processing or advanced linear algebra classes. As I remember from my linear algebra course, we can invert such a transformation as P = [R|T]^(-1) * C^(-1) * p. However, that holds in a Euclidean coordinate system; I don't know whether such a thing is possible with homogeneous coordinates. Moreover, the 3x4 matrix [R|T] is not invertible, and I don't know whether this is the correct way to go.
The intrinsic and extrinsic parameters are known; all I need is the projected real-world coordinate on the ground plane. Since the point is on a plane, the coordinates will be 2-dimensional (depth does not matter, as opposed to general single-view geometry). The camera is fixed (position and orientation). How should I find the real-world coordinate of a point in an image captured by a single-view camera?
EDIT
I have been reading "Learning OpenCV" by Gary Bradski & Adrian Kaehler. On page 386, under Calibration -> Homography, it is written: q = sMWQ, where M is the camera intrinsic matrix, W is the 3x4 [R|T], s is an arbitrary "up to" scale factor (related to the homography concept, I assume), q is the pixel coordinate and Q is the real-world coordinate. It says that in order to get the real-world coordinate (on the chessboard plane) of an object detected on the image plane: set Z = 0, so the third column of W (which would multiply Z) can be dropped; trimming this unnecessary part makes W a 3x3 matrix. H = MW is then a 3x3 homography matrix. Now we can invert the homography matrix and left-multiply it with q to get Q = [X Y 1], where the Z coordinate was trimmed away.
I applied the mentioned algorithm and got results that cannot lie between the image corners (the image plane was parallel to the camera plane, just ~30 cm in front of the camera, yet I got results like 3000). The chessboard square sizes were entered in millimetres, so I assume the output real-world coordinates are again in millimetres. Anyway, I am still trying things. By the way, the raw results are at first very, very large, but I divide all values in Q by its third component to get (X, Y, 1).
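A sketch of the book's recipe as described above, assuming M, R and t are the calibration outputs for the view of the plane:

    #include <opencv2/opencv.hpp>

    // Map a pixel q = (u, v) to a point Q = (X, Y) on the Z = 0 plane.
    cv::Point2d pixelToPlane(const cv::Point2d& q,
                             const cv::Matx33d& M,   // intrinsic matrix
                             const cv::Matx33d& R,   // rotation of the plane view
                             const cv::Vec3d&   t) { // translation of the plane view
        // W = [r1 r2 t]: the third rotation column is dropped because Z = 0
        cv::Matx33d W(R(0,0), R(0,1), t[0],
                      R(1,0), R(1,1), t[1],
                      R(2,0), R(2,1), t[2]);
        cv::Matx33d Hinv = (M * W).inv();             // H = M*W, then invert
        cv::Vec3d   Q    = Hinv * cv::Vec3d(q.x, q.y, 1.0);
        return cv::Point2d(Q[0] / Q[2], Q[1] / Q[2]); // divide by the 3rd component
    }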
FINAL EDIT
I could not get the camera-calibration methods to work. In hindsight, I should have started with perspective projection and transforms. I obtained very good estimates with a perspective transform between the image plane and the physical plane (generating the transform from 4 pairs of corresponding coplanar points on the two planes), and then simply applied the transform to the image pixel points.
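A minimal sketch of that final approach; the four point pairs below are placeholders for measured correspondences (pixels on the left, millimetres on the plane):

    #include <opencv2/opencv.hpp>
    #include <vector>

    int main() {
        // 4 coplanar reference points in the image and on the physical plane
        std::vector<cv::Point2f> imagePts = {{120, 80}, {540, 95}, {560, 400}, {100, 420}};
        std::vector<cv::Point2f> planePts = {{0, 0}, {300, 0}, {300, 200}, {0, 200}};
        cv::Mat H = cv::getPerspectiveTransform(imagePts, planePts);

        // map a detected circle center from pixels to plane coordinates
        std::vector<cv::Point2f> detected = {{321.5f, 247.0f}}, mapped;
        cv::perspectiveTransform(detected, mapped, H);
        // mapped[0] now holds the (X, Y) position on the plane
        return 0;
    }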
You said "I have the intrinsic camera matrix, and translational and rotational params of each of the image", but these are the translation and rotation from your camera to your chessboard. They have nothing to do with your circle. However, if you really do have the translation and rotation matrices, then getting the 3D point is easy.
Apply the inverse intrinsic matrix to your screen points in homogeneous notation: C^(-1) * [u, v, 1]^T, where u = col - w/2 and v = h/2 - row (col, row are the image column and row, and w, h are the image width and height). As a result you obtain a 3D point in so-called camera normalized coordinates, p = [x, y, z]^T. All you need to do now is subtract the translation and apply a transposed rotation: P = R^T(p - T). The order of operations is the inverse of the original, which was rotate and then translate; note that the transposed rotation performs the inverse of the original rotation but is much faster to compute than R^(-1).
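Below is a literal transcription of those steps as a sketch. One caveat not stated above: a single view determines p only up to a scale along the viewing ray, so a scale factor (fixed, for example, by a known depth or a plane constraint) is added as an explicit parameter:

    #include <opencv2/opencv.hpp>

    // P = R^T * (p - T), with p = scale * C^(-1) * [u, v, 1]^T as described above.
    // 'scale' fixes the depth along the ray; a single view leaves it undetermined.
    cv::Vec3d pixelToWorld(double col, double row, double w, double h,
                           const cv::Matx33d& C, const cv::Matx33d& R,
                           const cv::Vec3d& T, double scale) {
        double u = col - w / 2.0;                               // the convention above
        double v = h / 2.0 - row;
        cv::Vec3d p = scale * (C.inv() * cv::Vec3d(u, v, 1.0)); // normalized coords
        return R.t() * (p - T);                                 // undo translate + rotate
    }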

Triangulation to find distance to the object - image to world coordinates

Localization of an object specified in the image.
I am working on a computer vision project to find the distance to an object using stereo images. I followed these steps using OpenCV to achieve my objective:
1. Calibration of camera
2. SURF matching to find the fundamental matrix
3. Rotation and translation vectors using SVD, as described in the Hartley and Zisserman book.
4. stereoRectify to get the projection matrices P1, P2 and the rotation matrices R1, R2. The rotation matrix can also be found from the homography: R = CameraMatrix.inv() * H * CameraMatrix.
Problems:
I triangulated the point using the least-squares triangulation method to find the real distance to the object. It returns a value of the form [0.79856, 0.354541, 0.258]. How will I map this to real-world coordinates to find the distance to the object?
http://www.morethantechnical.com/2012/01/04/simple-triangulation-with-opencv-from-harley-zisserman-w-code/
Alternative approach:
Find the disparity of the object between the two images and find the depth using the formula:
Depth = (focal length * baseline) / disparity
For the disparity we have to perform rectification first, and the points must be undistorted. My rectified images are black.
Please help me out; it is important.
Here is a detailed explanation of how I implemented the code.
1. Calibration of the camera using a circles grid to get the camera matrix and distortion coefficients. The code is given on GitHub (Android).
2. Take two pictures of a car, one from the left and the other from the right. Take the sub-image and calculate the fundamental matrix, essential matrix, rotation matrix and translation matrix...
3. I have tried the projection in two ways:
Take the first projection matrix as the identity and build the second 3x4 projection matrix from the rotation and translation matrices, then perform triangulation.
Get the projection matrices P1 and P2 from stereoRectify and perform triangulation.
My object is 65 meters away from the camera, and I don't know how to recover this true distance from the triangulation result [0.79856, 0.354541, 0.258].
Question: Do I have to do some extra calibration to get the result? My code does not rely on knowing the detailed geometric size of the object.
So you have already computed the triangulation? Well, then you have points in camera coordinates, i.e. in the coordinate frame centered on one of the cameras (the left or right one, depending on how your code is written and the order in which you feed your images to it).
What more do you want? The vector length (square root of the sum of the squared coordinates) of those points is their estimated distance from that same camera. If you want their position in some other "world" coordinate system, you need the coordinate transform between that system and the camera - presumably obtained through a calibration procedure.
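For illustration, a sketch of that distance computation, assuming P1 and P2 are the two 3x4 projection matrices and (x1, x2) a matched pixel pair. One caution that likely explains the numbers above: when T comes from an essential-matrix decomposition, its length is arbitrary, so the triangulated distance is in units of the unknown baseline, not meters; recovering the true 65 m requires knowing the real baseline length.

    #include <opencv2/opencv.hpp>

    double distanceToPoint(const cv::Mat& P1, const cv::Mat& P2, // 3x4, CV_64F
                           const cv::Point2f& x1, const cv::Point2f& x2) {
        cv::Mat pts1(2, 1, CV_64F), pts2(2, 1, CV_64F), X;
        pts1.at<double>(0) = x1.x;  pts1.at<double>(1) = x1.y;
        pts2.at<double>(0) = x2.x;  pts2.at<double>(1) = x2.y;

        cv::triangulatePoints(P1, P2, pts1, pts2, X); // X: 4x1 homogeneous point
        X.convertTo(X, CV_64F);
        cv::Vec3d p(X.at<double>(0) / X.at<double>(3),
                    X.at<double>(1) / X.at<double>(3),
                    X.at<double>(2) / X.at<double>(3));
        return cv::norm(p); // sqrt(x^2 + y^2 + z^2): distance from the reference camera
    }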

stereo reconstruction of point cloud based on rectified images

I have a pair of matched 2D features extracted from a rectified stereo image pair. Using the cvPerspectiveTransform function in OpenCV, I attempted to reconstruct those features in 3D. The result is not consistent with the actual object dimensions in the real world. I realize there is a function in the Matlab calibration toolbox that converts 2D stereo features into a 3D point cloud; however, those features are taken from the original images.
If I want to work with rectified images, is it possible to reconstruct the 3D locations based on the 2D feature locations and disparity information?
If you know the focal length (f), the baseline width (b, the distance between the optical centers of the two cameras) and the disparity (d) in a rectified stereo image pair, you can calculate the distance (Z) with the following formula:
Z = f*(b/d);
This follows from the following equations:
x_l = f*(X/Z); // projecting a 3D point onto the left image
x_r = f*((X+b)/Z); // projecting the same 3D point onto the right image
d = x_r - x_l = f * (b/Z); // calculating the disparity
Solving the last equation for Z should lead to the formula given above.
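The same projection equations also recover X and Y, since x_l = f*(X/Z) implies X = x_l*Z/f (and likewise for Y), with pixel coordinates measured from the principal point. A sketch under those assumptions:

    #include <opencv2/opencv.hpp>

    // 3D point from one matched feature pair in a rectified image pair.
    // f: focal length (pixels), b: baseline, (cx, cy): principal point.
    cv::Point3d featureTo3D(const cv::Point2d& left, const cv::Point2d& right,
                            double f, double b, double cx, double cy) {
        double d = right.x - left.x;     // disparity, per the sign convention above
        double Z = f * (b / d);          // Z = f*(b/d)
        double X = (left.x - cx) * Z / f;
        double Y = (left.y - cy) * Z / f;
        return cv::Point3d(X, Y, Z);     // in the same units as the baseline b
    }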

Project 2d points in camera 1 image to camera 2 image after a stereo calibration

I am doing stereo calibration of two cameras (let's name them L and R) with OpenCV. I use 20 pairs of checkerboard images and compute the transformation of R with respect to L. What I want to do is use a new pair of images, compute the 2D checkerboard corners in image L, transform those points according to my calibration, and draw the corresponding transformed points on image R, with the hope that they will match the corners of the checkerboard in that image.
I tried the naive way of transforming the 2D points from [x,y] to [x,y,1], multiplying by the 3x3 rotation matrix, adding the translation vector and then dividing by z, but the result is wrong, so I guess it's not that simple(?)
Edit (to clarify some things):
The reason I want to do this is basically that I want to validate the stereo calibration on a new pair of images. So, I don't actually want to get a new 2D transformation between the two images, I want to check whether the 3D transformation I have found is correct.
This is my setup:
I have the rotation and translation relating the two cameras (E), but I don't have rotations and translations of the object in relation to each camera (E_R, E_L).
Ideally what I would like to do:
Choose the 2d corners in image from camera L (in pixels e.g. [100,200] etc).
Do some kind of transformation on the 2d points based on matrix E that I have found.
Get the corresponding 2d points in image from camera R, draw them, and hopefully they match the actual corners!
The more I think about it though, the more I am convinced that this is wrong/can't be done.
What I am probably trying now:
Using the intrinsic parameters of the cameras (let's say I_R and I_L), solve 2 least squares systems to find E_R and E_L
Choose 2d corners in image from camera L.
Project those corners to their corresponding 3d points (3d_points_L).
Do: 3d_points_R = (E_L).inverse * E * E_R * 3d_points_L
Get the 2d_points_R from 3d_points_R and draw them.
I will update when I have something new.
It is actually easy to do, but you are making several mistakes. Remember that after stereo calibration, R and T relate the position and orientation of the second camera to the first camera, in the first camera's 3D coordinate system. Also remember that to find the 3D position of a point seen by a pair of cameras, you need to triangulate it. By setting the z component to 1 you are making two mistakes. First, most likely you used the common OpenCV stereo calibration code and gave the distance between the corners of the checkerboard in cm; hence z = 1 means 1 cm away from the center of the camera, which is extremely close to it. Second, by setting the same z for all the points you are saying the checkerboard is perpendicular to the principal axis (aka optical axis, or principal ray), while most likely in your image that is not the case. So you are transforming some incorrect virtual 3D points to the second camera's coordinate system and then projecting them onto the image plane.
If you want to transform just planar points then you can find the homography between the two cameras (OpenCV has the function) and use that.
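A sketch of that route, assuming cornersL and cornersR are matched planar points (e.g. chessboard corners detected in the new image pair):

    #include <opencv2/opencv.hpp>
    #include <cmath>
    #include <vector>

    // Fit a homography to the planar correspondences and report how well it
    // maps the left-image points onto the right-image detections (RMS, pixels).
    double homographyAgreement(const std::vector<cv::Point2f>& cornersL,
                               const std::vector<cv::Point2f>& cornersR) {
        cv::Mat H = cv::findHomography(cornersL, cornersR, cv::RANSAC);
        std::vector<cv::Point2f> mapped;
        cv::perspectiveTransform(cornersL, mapped, H);
        return cv::norm(cornersR, mapped, cv::NORM_L2)
               / std::sqrt((double)mapped.size());
    }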
