Generating 3D points from Disparity - opencv

Can someone point out the math involved in getting the 3D points of an image from its disparity values? I have the image coordinates (i, j) and the disparity at each of these points. What I want is the true 3D coordinate (x, y, z), using math equations.

Long answer - http://www.amazon.com/Multiple-View-Geometry-Computer-Vision/dp/0521540518/
Short answer:
You have the pixel scale, so for a given number of pixels of disparity you can get an angular difference. With the baseline between the cameras and that angle, you have a distance.
P.S. Take a look at the OpenCV book; it has a couple of good chapters on stereo.
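For reference, the usual rectified-stereo relations are Z = f*B/d, X = (i - cx)*Z/f, Y = (j - cy)*Z/f. A minimal sketch in C++, where the focal length f, baseline B, and principal point (cx, cy) are made-up placeholder values you would take from your calibration:

```cpp
#include <cstdio>

int main()
{
    // Hypothetical calibration values -- substitute your own.
    const double f  = 700.0;  // focal length in pixels
    const double B  = 0.12;   // baseline between the cameras, metres
    const double cx = 320.0;  // principal point x, pixels
    const double cy = 240.0;  // principal point y, pixels

    // One pixel (i, j) with disparity d, all in pixels.
    const double i = 400.0, j = 260.0, d = 14.0;

    // Rectified-stereo / pinhole relations:
    //   Z = f * B / d
    //   X = (i - cx) * Z / f
    //   Y = (j - cy) * Z / f
    const double Z = f * B / d;
    const double X = (i - cx) * Z / f;
    const double Y = (j - cy) * Z / f;

    std::printf("X = %.3f  Y = %.3f  Z = %.3f (metres)\n", X, Y, Z);
    return 0;
}
```

If you rectified with cv::stereoRectify, the Q matrix it returns encodes the same relations, and cv::reprojectImageTo3D applies them to an entire disparity map in one call.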

Related

Compute camera angles and locations given matching points with OpenCV

I have stereo images (non co-planar cameras), which have matching points on a plane (wall) labeled.
I need to compute the camera locations in world space, and the angles they are focused at.
I can work out the math if I need to (with effort), but I wonder if there is a shortcut to these computations in OpenCV that I might not be familiar with?
If they are both looking at a plane, all you have to do is estimate homographies (with findHomography) independently between the plane and each camera, then decompose them (decomposeHomographyMat) to get rotation and translation up to scale. To resolve the scale you need to know the distance between at least two of the points on the plane.
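A hedged sketch of that pipeline; the inputs (planePts, imgPts, K) are placeholder names you would fill from your labeled matches and calibration:

```cpp
#include <opencv2/calib3d.hpp>
#include <vector>

// Pose (up to scale) of one camera relative to a planar scene.
// planePts: 2D metric coordinates of the labeled points on the wall;
// imgPts: the matching pixel coordinates in that camera's image;
// K: the camera's intrinsic matrix.
void poseFromPlane(const std::vector<cv::Point2f>& planePts,
                   const std::vector<cv::Point2f>& imgPts,
                   const cv::Mat& K)
{
    // Homography mapping plane coordinates to image pixels.
    cv::Mat H = cv::findHomography(planePts, imgPts, cv::RANSAC);

    // Decomposition yields up to four {R, t, n} candidates; prune
    // them by requiring the points to lie in front of the camera.
    std::vector<cv::Mat> rotations, translations, normals;
    int n = cv::decomposeHomographyMat(H, K, rotations, translations, normals);
    (void)n; // number of candidate solutions returned
}
```

Repeat this for each camera, and the two plane-relative poses give you the relative pose between the cameras.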

calculate distance between two objects in image opencv c++

I am measuring the distance between two objects in an image using OpenCV C++.
I have detected two balls with the Hough circle transform and want to measure the distance between them.
So far I have used the Pythagorean theorem to find the distance between the two coordinates, but the result is not close:
d = sqrt( (x2 - x1)^2 + (y2 - y1)^2 )
For example, if the distance between the two balls is 13 cm, the result is 5.6 cm.
Thanks in advance.
To measure the distance between objects in an image you will have to look into camera calibration.
And as Mark Setchell said, the distance will be in pixels if you do not do the calibration and the 2D-point-to-3D-point transformation.
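In the simplest geometry, where both balls lie on a plane roughly parallel to the image at a known distance Z from the camera, a pixel distance converts to a metric one as d_world = d_pixels * Z / f. An illustrative sketch with made-up values for the centres, f, and Z:

```cpp
#include <opencv2/core.hpp>
#include <cmath>
#include <cstdio>

int main()
{
    // Detected ball centres in pixels (placeholder values).
    const cv::Point2f c1(210.f, 180.f), c2(330.f, 175.f);

    // Hypothetical values from calibration / the scene setup:
    const double f = 800.0;  // focal length in pixels
    const double Z = 90.0;   // distance from camera to the balls' plane, cm

    // Pythagoras in pixel space, then back-project onto the plane.
    const double dPixels = std::hypot(c1.x - c2.x, c1.y - c2.y);
    const double dWorld  = dPixels * Z / f;

    std::printf("%.1f px  ->  %.1f cm\n", dPixels, dWorld);
    return 0;
}
```

For anything more general (balls at different depths, a tilted plane) you need the full calibration and 2D-to-3D transformation described above.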

Triangulation to find distance to the object - image to world coordinates

Localization of an object specified in the image.
I am working on a computer vision project to find the distance to an object using stereo images. I followed these steps using OpenCV to achieve my objective:
1. Calibration of the camera
2. SURF matching to find the fundamental matrix
3. Rotation and translation vectors using SVD, as described in the Hartley and Zisserman book
4. stereoRectify to get the projection matrices P1, P2 and rotation matrices R1, R2. The rotation matrix can also be found from the homography as R = CameraMatrix.inv() * H * CameraMatrix.
Problems:
I triangulated a point using the least-squares triangulation method to find the real distance to the object. It returns a value of the form [0.79856, 0.354541, 0.258]. How will I map it to real-world coordinates to find the distance to the object?
http://www.morethantechnical.com/2012/01/04/simple-triangulation-with-opencv-from-harley-zisserman-w-code/
Alternative approach:
Find the disparity of the object between the two images and compute the depth using the formula
Depth = (focal length * baseline) / disparity
For the disparity we have to perform rectification first, and the points must be undistorted. My rectified images are black.
Please help me out. It is important.
Here is a detailed explanation of how I implemented the code.
1. Calibration of the camera using a circles grid to get the camera matrix and distortion coefficients. The code is given on GitHub (Android).
2. Take two pictures of a car, first from the left and the other from the right. Take the sub-image and calculate the fundamental matrix, essential matrix, rotation matrix, and translation matrix.
3. I have tried the projection in two ways:
Take the first camera's projection matrix as the identity and build the second 3x4 projection matrix from the rotation and translation matrices, then perform triangulation.
Get the projection matrices P1 and P2 from stereoRectify and perform triangulation.
My object is 65 meters away from the camera, and I don't know how to recover this true distance from the triangulation result [0.79856, 0.354541, 0.258].
Question: Do I have to do some extra calibration to get the result? My code does not use any knowledge of the geometric size of the object.
So you already computed the triangulation? Well, then you have points in camera coordinates, i.e. in the coordinate frame centered on one of the cameras (the left or right one, depending on how your code is written and the order in which you feed your images to it).
What more do you want? The vector length (square root of the sum of the squared coordinates) of those points is their estimated distance from that same camera. If you want their position in some other "world" coordinate system, you need the coordinate transform between that system and the camera, presumably obtained through a calibration procedure.
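A minimal sketch of that computation, assuming you already have the two 3x4 projection matrices (all names and values are placeholders):

```cpp
#include <opencv2/calib3d.hpp>
#include <cmath>
#include <vector>

// Distance from the first camera to a point matched in two views.
// P1, P2: 3x4 projection matrices (e.g. from stereoRectify);
// pt1, pt2: the matching pixel in each image.
double distanceToPoint(const cv::Mat& P1, const cv::Mat& P2,
                       cv::Point2f pt1, cv::Point2f pt2)
{
    std::vector<cv::Point2f> pts1{pt1}, pts2{pt2};

    cv::Mat X;                                  // 4x1 homogeneous [X Y Z W]^T
    cv::triangulatePoints(P1, P2, pts1, pts2, X);
    X.convertTo(X, CV_64F);

    // De-homogenise; the vector length of (X, Y, Z) is the distance
    // from the camera at the origin of P1's coordinate frame.
    const double w = X.at<double>(3, 0);
    const double x = X.at<double>(0, 0) / w;
    const double y = X.at<double>(1, 0) / w;
    const double z = X.at<double>(2, 0) / w;
    return std::sqrt(x*x + y*y + z*z);
}
```

Note the units: the result is expressed in whatever units the translation inside P2 uses. If P2 came from an essential-matrix decomposition, the translation is only known up to scale, so to get "65 meters" you must fix the scale externally, e.g. from the known baseline between the camera positions.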

Find projection matrix

First of all, I want to apologize for my bad English.
I am really new to OpenCV and to virtual reality. I have tried to learn the theory of image processing, but some points are still missing for me. I learned that the projection matrix is the matrix that transforms a 3D point to 2D. Am I right? The essential matrix gives me information about the rotation between two cameras, and the fundamental matrix gives information about the relationship between a pixel in one image and a pixel in the other image. The homography matrix relates the coordinates of a pixel in two images (is that correct?).
What is the difference between fundamental and homography matrix?
Do I need all these matrices to get projection matrix?
I am new to this, so please, if you can, try to explain it to me simply.
Thanks for your help.
I learned that projection matrix is matrix to transform 3D point to 2D. Am I right?
Yes. But usually these transformations are expressed in homogeneous coordinates. This means that 3D points are represented by 4-vectors (i.e. vectors of length 4), and 2D points are represented by 3-vectors.
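For example, a 3x4 projection matrix P maps a homogeneous 3D point X to a homogeneous pixel x = P*X, which you then divide by its last component. An illustrative snippet with a made-up P:

```cpp
#include <opencv2/core.hpp>
#include <cstdio>

int main()
{
    // A made-up 3x4 projection matrix P = K [I | 0]
    // (focal length 700 px, principal point at (320, 240)).
    const cv::Matx34d P(700,   0, 320, 0,
                          0, 700, 240, 0,
                          0,   0,   1, 0);

    // The 3D point (1, 2, 10) as a homogeneous 4-vector.
    const cv::Vec4d X(1, 2, 10, 1);

    // x = P X is a homogeneous 3-vector; divide by the last
    // component to get pixel coordinates.
    const cv::Vec3d x = P * X;
    std::printf("u = %.1f  v = %.1f\n", x[0] / x[2], x[1] / x[2]);
    return 0;
}
```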
The homography matrix relates coordinates of pixel in two image (is that correct?)
No. This is true in only two special cases: when the scene lies on a plane, or when the two views have been generated by two cameras sharing the same center location.
In all other cases, i.e. when the scene is not planar and the two cameras have different centers, there is no homography transforming one image into the other.
What is the difference between fundamental and homography matrix?
There are many differences. From an algebraic point of view, the most obvious is that a homography matrix is non-singular (its rank is 3), while a fundamental matrix is singular (its rank is 2).
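One way to see this numerically (an illustrative check, not from the original answer): inspect the singular values of the 3x3 matrix; for a fundamental matrix the third one is (numerically) zero:

```cpp
#include <opencv2/core.hpp>
#include <cstdio>

// Print the singular values of a 3x3 CV_64F matrix. A fundamental
// matrix has a (numerically) zero third singular value -- rank 2;
// a homography has three nonzero singular values -- rank 3.
void printSingularValues(const cv::Mat& M)
{
    cv::Mat w, u, vt;
    cv::SVD::compute(M, w, u, vt);
    std::printf("singular values: %g %g %g\n",
                w.at<double>(0), w.at<double>(1), w.at<double>(2));
}
```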

Compute transformation matrix from a set of coordinates (with OpenCV)

I have a small cube with n (you can assume that n = 4) distinguished points on its surface. These points are numbered (1-n) and form a coordinate space, where point #1 is the origin.
Now I'm using a tracking camera to get the coordinates of those points, relative to the camera's coordinate space. That means that I now have n vectors p_i pointing from the origin of the camera to the cube's surface.
With that information, I'm trying to compute the affine transformation matrix (rotation + translation) that represents the transformation between those two coordinate spaces. The translation part is fairly trivial, but I'm struggling with the computation of the rotation matrix.
Is there any built-in functionality in OpenCV that might help me solve this problem?
Sounds like cvGetPerspectiveTransform is what you're looking for; cvFindHomography might also be helpful.
solvePnP should give you the rotation matrix and the translation vector. Try it with CV_EPNP or CV_ITERATIVE.
Edit: Or perhaps you're looking for RQ decomposition.
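A hedged sketch of the solvePnP route, using the modern flag names (cv::SOLVEPNP_EPNP is the newer spelling of CV_EPNP). It assumes you also have pixel observations of the points and the intrinsics from a prior calibration; all input names are placeholders. Note that solvePnP returns a rotation vector, which cv::Rodrigues converts to the 3x3 rotation matrix:

```cpp
#include <opencv2/calib3d.hpp>
#include <vector>

// Pose of the cube's coordinate frame relative to the camera.
// objectPoints: the n marker positions in the cube's own frame;
// imagePoints: where they appear in the camera image (pixels);
// K, distCoeffs: intrinsics from a prior calibration.
void cubePose(const std::vector<cv::Point3f>& objectPoints,
              const std::vector<cv::Point2f>& imagePoints,
              const cv::Mat& K, const cv::Mat& distCoeffs,
              cv::Mat& R, cv::Mat& tvec)
{
    cv::Mat rvec;  // axis-angle rotation vector
    cv::solvePnP(objectPoints, imagePoints, K, distCoeffs,
                 rvec, tvec, false, cv::SOLVEPNP_EPNP);

    // solvePnP returns a rotation vector; Rodrigues converts it
    // to the 3x3 rotation matrix.
    cv::Rodrigues(rvec, R);
}
```

EPNP needs at least 4 point correspondences, which matches the n = 4 in the question.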
Look at the stereo camera tutorial for OpenCV. OpenCV uses a planar chessboard for all the computation and sets its Z-dimension to 0 to build its list of 3D points. You already have 3D points, so change the code in the tutorial to reflect your list of 3D points. Then you can compute the transformation.
