I have two calibrated cameras with known intrinsic and extrinsic parameters. I also have about 30 points in one image plane and their correspondences in the other.
How can I obtain the depth of only these points? Any code or resource would be really helpful.
I'm using Python and OpenCV 3.4 to implement it.
Related
I have a vehicle with two cameras, left and right. Is there a difference between calibrating each camera separately vs performing "stereo calibration"? I am asking because I noticed in the OpenCV documentation that there is a stereoCalibrate function, and also a stereo calibration tool for MATLAB. If I do separate camera calibration on each and then perform a depth calculation using the undistorted images of each camera, will the results be the same?
I am not sure what the difference is between the two methods. I performed normal camera calibration for each camera separately.
For intrinsics, it doesn't matter. The added information ("pair of cameras") might make the calibration a little better though.
Stereo calibration gives you the extrinsics, i.e. the transformation matrices between the cameras. That's for... stereo vision. If you don't perform stereo calibration, you lack the extrinsics, and without them you can't do any depth estimation at all.
TL;DR
You need stereo calibration if you want 3D points.
Long answer
There is a huge difference between single and stereo camera calibration.
The output of single camera calibration is the intrinsic parameters only (i.e. the 3x3 camera matrix and a number of distortion coefficients, depending on the model used). In OpenCV this is accomplished by cv2.calibrateCamera. You may check my custom library that helps reduce the boilerplate.
When you do stereo calibration, the output is the intrinsics of both cameras plus the extrinsic parameters.
In OpenCV this is done with cv2.stereoCalibrate. OpenCV fixes the world origin in the first camera and then you get a rotation matrix R and translation vector t to go from the first camera (origin) to the second one.
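As an illustrative sketch of that call in Python (assuming objpoints, imgpoints_l, imgpoints_r come from a prior chessboard detection step and K1, d1, K2, d2 from single-camera calibrations; all names here are placeholders):

```python
import cv2

# Assumed available: objpoints (list of Nx3 float32 board coordinates),
# imgpoints_l / imgpoints_r (lists of Nx2 float32 detected corners),
# K1, d1, K2, d2 from single-camera calibrations, and image_size (w, h).
flags = cv2.CALIB_FIX_INTRINSIC  # reuse the per-camera intrinsics as-is
ret, K1, d1, K2, d2, R, T, E, F = cv2.stereoCalibrate(
    objpoints, imgpoints_l, imgpoints_r,
    K1, d1, K2, d2, image_size, flags=flags)
# R (3x3) and T (3x1) transform points from the first camera's frame
# into the second camera's frame.
```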
So, why do we need extrinsics? If you are using a stereo system for 3D scanning, then you need those (and the intrinsics) to do triangulation and obtain 3D points in space: if you know the projections of a general point p onto both cameras, you can calculate its position.
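For example, a minimal triangulation sketch in Python, assuming K1, K2, R, T from the stereo calibration above and pts1, pts2 as hypothetical 2xN arrays of corresponding pixel coordinates:

```python
import cv2
import numpy as np

# Build the 3x4 projection matrices; the world origin is the first camera.
P1 = K1 @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K2 @ np.hstack([R, T.reshape(3, 1)])

# pts1, pts2: 2xN float arrays of corresponding pixel coordinates.
# If lens distortion is significant, undistort them first, e.g. with
# cv2.undistortPoints(..., P=K) to stay in pixel coordinates.
pts4d = cv2.triangulatePoints(P1, P2, pts1, pts2)
pts3d = (pts4d[:3] / pts4d[3]).T  # homogeneous -> Nx3 Euclidean points
```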
To add something to what @Christoph correctly answered before, the intrinsics should be almost the same; however, cv2.stereoCalibrate may improve the calculation of the intrinsics if the flag CALIB_FIX_INTRINSIC is not set. This happens because the system composed of the two cameras and the calibration board is solved as a whole by numerical optimization.
I am trying to understand the concept of homography. It is estimated from image features, but I can't understand how those features are calculated from the images.
A homography is nothing but a mapping between points on one surface or plane and points on another. In computer vision, it is a matrix specifying the transformation between two views of the same scene, for example.
A homography can be estimated by identifying keypoints in both images and then estimating the transformation between the two views. There are many keypoint descriptors available that help in identifying these keypoints.
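As an illustrative sketch (not part of the original answer), this is roughly how such an estimation looks in Python with OpenCV, using ORB as one choice of keypoint descriptor and hypothetical filenames:

```python
import cv2
import numpy as np

img1 = cv2.imread("view1.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical files
img2 = cv2.imread("view2.jpg", cv2.IMREAD_GRAYSCALE)

# Detect keypoints and compute binary descriptors in both views.
orb = cv2.ORB_create()
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Match descriptors, then robustly estimate the 3x3 homography with RANSAC.
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
```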
In camera calibration, the extrinsic matrix is computed by capturing different views of an object of known geometry, such as a chessboard, from which the corner points are detected. The matrix is estimated by mathematically solving for the detected points across the many different views captured.
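A minimal sketch of that procedure in Python (the board size and image folder are hypothetical):

```python
import cv2
import numpy as np
import glob

pattern = (9, 6)  # hypothetical inner-corner count of the chessboard
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

objpoints, imgpoints = [], []
for path in glob.glob("calib/*.jpg"):  # hypothetical image folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        objpoints.append(objp)
        imgpoints.append(corners)

# rvecs / tvecs are the extrinsics: one rotation and translation per view.
ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    objpoints, imgpoints, gray.shape[::-1], None, None)
```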
A detailed derivation of the estimation and solving used to obtain the homography matrix can be found in this book. :)
I am trying to get 3D reconstruction from uncalibrated multi-view images.
I don't know the intrinsic parameters of the camera
I have SIFT features.
What I'd like to do is filter outliers using the 5-point algorithm in combination with RANSAC, so that I can proceed to relative pose optimization and triangulation of the matched points.
OpenCV has an API for this, findEssentialMat(), but it needs focal and pp. Where can I get focal and pp?
Is findEssentialMat() the right API to use for the pose estimation?
If my approach is wrong, is there any API closer to what I want to achieve in OpenCV?
Thanks
I don't know the intrinsic parameters of the camera. [...]
[...] proceed [with] the relative pose optimization and triangulation of the points matched.
Where I can have focal and pp?
Given that, findEssentialMat() cannot be used in this situation, as it requires the intrinsic parameters, namely the focal length and principal point (the focal and pp arguments).
First calibrate the camera to recover these parameters. Then pose estimation and 3D triangulation will be possible using OpenCV functions.
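Once the intrinsics are known, the original plan works. A sketch of the pose-recovery step in Python, assuming focal and pp come from that calibration and pts1, pts2 are matched SIFT locations as Nx2 float arrays (all placeholder names):

```python
import cv2

# Estimate the essential matrix with RANSAC, rejecting outlier matches.
E, mask = cv2.findEssentialMat(pts1, pts2, focal=focal, pp=pp,
                               method=cv2.RANSAC, prob=0.999, threshold=1.0)
# recoverPose cheirality-checks the four decompositions of E and returns
# the relative rotation R and the (unit-scale) translation t.
_, R, t, mask = cv2.recoverPose(E, pts1, pts2, focal=focal, pp=pp, mask=mask)
```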
I am dealing with a problem that concerns camera calibration. I need calibrated cameras to take measurements of 3D objects. I am using OpenCV to carry out the calibration, and I am wondering how I can predict or calculate the volume in which the camera is well calibrated. Is there a way to increase this volume, especially in the direction of the optical axis? Would increasing the movement range of the calibration target in the 'z' direction make a sufficient difference?
I think you confuse a few key things in your question:
Camera calibration - this means finding the matrices (intrinsic and extrinsic) that describe the camera position, rotation, up vector, distortion, optical center and so on.
Epipolar Rectification - this means virtually "rotating" the image planes so that they become coplanar (parallel). This simplifies the stereo reconstruction algorithms.
For camera calibration you do not need to care about any volumes - there aren't volumes where the camera is well or badly calibrated. If you use the chessboard pattern calibration, your cameras are either calibrated or not.
When dealing with rectification, you want to know which areas of the rectified images correspond, and also to maximize these areas. OpenCV lets you choose between two extremes: either make all pixels in the returned areas valid, cutting out pixels that don't fit into the rectangular area, or include all pixels, even invalid ones.
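In the Python API this trade-off is controlled by the alpha parameter of cv2.stereoRectify; a sketch, assuming the calibration results K1, d1, K2, d2, R, T and image_size are at hand:

```python
import cv2

# alpha=0 crops to only valid pixels; alpha=1 keeps every source pixel,
# including invalid (black) regions. Values in between interpolate.
R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(
    K1, d1, K2, d2, image_size, R, T, alpha=0)

# Build and apply the rectification map for the left image.
map1x, map1y = cv2.initUndistortRectifyMap(
    K1, d1, R1, P1, image_size, cv2.CV_32FC1)
rect_left = cv2.remap(img_left, map1x, map1y, cv2.INTER_LINEAR)
```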
OpenCV documentation has some nice, more detailed descriptions here: http://opencv.willowgarage.com/documentation/camera_calibration_and_3d_reconstruction.html
I want to find the position of a point with OpenCV. I calibrated two cameras using cvCalibrateCamera2, so I know both the intrinsic and extrinsic parameters. I read that with known intrinsic and extrinsic parameters I can easily reconstruct 3D by triangulation. Is there a function in OpenCV to achieve this? I think cvProjectPoints2 may be useful, but I don't understand what it does exactly. So how can I find the 3D position of a point?
Thanks.
You first have to find disparities. There are two algorithms implemented in OpenCV - block matching (cvFindStereoCorrespondenceBM) and graph cuts (cvFindStereoCorrespondenceGC). The latter gives better results but is slower. After disparity detection you can reproject the disparities to 3D using cvReprojectImageTo3D. This gives you the distance to each point of the input images that is visible in both camera views.
Also note that the stereo correspondence algorithms require a rectified image pair (use cvStereoRectify, cvInitUndistortRectifyMap and cvRemap).
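Those are the old C-style names; in the Python API the equivalent pipeline looks roughly like this (a sketch assuming rect_left and rect_right are already rectified grayscale images and Q is the 4x4 matrix returned by the rectification step):

```python
import cv2

# Block matching on a rectified pair; numDisparities must be divisible by 16.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
# compute() returns fixed-point disparities scaled by 16, hence the division.
disparity = stereo.compute(rect_left, rect_right).astype("float32") / 16.0

# Reproject every pixel to 3D camera coordinates using the Q matrix.
points_3d = cv2.reprojectImageTo3D(disparity, Q)
```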