Minimum number of chessboard images for Stereo Calibration and Rectification - opencv

What is the minimum number of chessboard image pairs needed to mathematically calibrate and rectify two cameras? One pair is considered a single view of the chessboard by each camera, yielding a left and a right image of the same scene. As far as I know we need just one pair for a stereo system, as the stereo calibration seeks the relation between the two cameras.

Stereo calibration seeks not only the rotation and translation between the two cameras, but also the intrinsic and distortion parameters of each camera. You need at least two images to calibrate each camera separately, just to get the intrinsics. If you have already calibrated each camera separately, then yes, you can use a single pair of checkerboard images to get R and t. However, you will not get very good accuracy.
As a rule of thumb, you need 10-20 image pairs. You need enough images to cover the field of view, and to have a good distribution of 3D orientations of the board.
To calibrate a stereo pair of cameras, you first calibrate the two cameras separately, and then you do another joint optimization of the parameters of both cameras plus the rotation and translation between them. So one pair of images will simply not work.
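For concreteness, here is a minimal sketch of that first stage in Python with OpenCV; the pattern size, square size, and file-name glob are assumptions to replace with your own:

    import glob
    import cv2
    import numpy as np

    PATTERN = (9, 6)       # inner corners per row/column of the printed board (assumed)
    SQUARE_SIZE = 25.0     # square size in mm (assumed); sets the scale of translations

    # 3D board points in the board's own plane (z = 0).
    objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_SIZE

    obj_points, img_points = [], []
    for path in glob.glob("left_*.png"):   # hypothetical file names
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(gray, PATTERN)
        if found:
            corners = cv2.cornerSubPix(
                gray, corners, (11, 11), (-1, -1),
                (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
            obj_points.append(objp)
            img_points.append(corners)

    # Intrinsics of one camera; repeat for the other before the joint step.
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, gray.shape[::-1], None, None)
    print("per-camera RMS reprojection error:", rms)

With 10-20 well-distributed views per camera, the reported RMS reprojection error gives you a quick sanity check before the joint stereo step.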
Edit:
The camera calibration algorithm used in OpenCV, Caltech Calibration Toolbox, and the Computer Vision System Toolbox for MATLAB is based on the work by Zhengyou Zhang. His paper explains it better than I ever could.
The crux of the issue here is that the points on the chessboard are co-planar, which is a degenerate configuration. You simply cannot solve for the intrinsics using just one view of a planar board. You need more than one view, with the board in different 3-D orientations. Views where the boards are in parallel planes do not add any information.

"One image with 3 corners give us 6 pieces of information can be used to solve both intrinsic and distortion. "
I think that this is your main error. The corners are not independent: a 100x100 chessboard does not provide more information than a 10x10 one, even in your perfect world, because all the points lie on the same plane.
If you have a single view of a chessboard, a closer distance to the board can be compensated by the focal length, so even in your perfect world you cannot calibrate both the intrinsic AND the extrinsic parameters of your camera from it.
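A tiny numeric illustration of that ambiguity (my example, not part of the original answer): under a pinhole model, a fronto-parallel board point projects to u = f·X/Z + c, so doubling both the focal length and the distance leaves every pixel unchanged:

    # Pinhole projection of a fronto-parallel board point.
    # f: focal length in pixels, X: lateral offset, Z: distance, c: principal point.
    def project(f, X, Z, c=320.0):
        return f * X / Z + c

    print(project(f=500.0, X=0.1, Z=1.0))    # 370.0
    print(project(f=1000.0, X=0.1, Z=2.0))   # 370.0 -- the identical image

Only views of the board at different tilts break this focal-length/distance trade-off.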

Related

Difference between stereo camera calibration vs two single camera calibrations using OpenCV

I have a vehicle with two cameras, left and right. Is there a difference between calibrating each camera separately and performing "stereo calibration"? I am asking because I noticed the OpenCV documentation has a stereoCalibrate function, and there is also a stereo calibration tool for MATLAB. If I calibrate each camera separately and then perform a depth calculation using the undistorted images of each camera, will the results be the same?
I am not sure what the difference is between the two methods. I performed normal camera calibration for each camera separately.
For intrinsics, it doesn't matter. The added information ("pair of cameras") might make the calibration a little better though.
Stereo calibration gives you the extrinsics, i.e. transformation matrices between cameras. That's for... stereo vision. If you don't perform stereo calibration, you would lack the extrinsics, and then you can't do any depth estimation at all, because that requires the extrinsics.
TL;DR
You need stereo calibration if you want 3D points.
Long answer
There is a huge difference between single and stereo camera calibration.
The output of single camera calibration is the intrinsic parameters only (i.e. the 3x3 camera matrix and a number of distortion coefficients, depending on the model used). In OpenCV this is accomplished by cv2.calibrateCamera. You may check my custom library, which helps reduce the boilerplate.
When you do stereo calibration, the output is the intrinsics of both cameras plus the extrinsic parameters.
In OpenCV this is done with cv2.stereoCalibrate. OpenCV fixes the world origin in the first camera and then you get a rotation matrix R and translation vector t to go from the first camera (origin) to the second one.
So, why do we need extrinsics? If you are using a stereo system for 3D scanning, then you need them (together with the intrinsics) to do triangulation and obtain 3D points in space: if you know the projections of a general point p onto both cameras, you can calculate its position.
To add something to what @Christoph correctly answered above: the intrinsics should be almost the same; however, cv2.stereoCalibrate may improve the calculation of the intrinsics if the flag CALIB_FIX_INTRINSIC is not set. This happens because the system composed of the two cameras and the calibration board is solved as a whole by numerical optimization.
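To illustrate the flag mentioned above, here is a minimal sketch of the joint step; obj_points, img_points_l, img_points_r are assumed board detections seen by both cameras, and K1, d1, K2, d2 the results of the two single-camera calibrations:

    import cv2

    # Keep the single-camera intrinsics fixed; drop this flag to let
    # stereoCalibrate refine K1/K2 as part of the joint optimization.
    flags = cv2.CALIB_FIX_INTRINSIC
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 100, 1e-5)

    rms, K1, d1, K2, d2, R, T, E, F = cv2.stereoCalibrate(
        obj_points, img_points_l, img_points_r,
        K1, d1, K2, d2, image_size,
        criteria=criteria, flags=flags)
    # R, T map points from the first camera's frame into the second's.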

How to calibrate 4 camera set around a circle?

Four cameras are arranged in a ring. How can I calibrate the relative poses of the four cameras, that is, the poses of the other three cameras relative to camera 0? The difficulties are:
When using a calibration board, the four cameras cannot all see it at the same time; only two cameras can see it at once. So I calibrate, say, cam1 relative to cam0, then cam2 relative to cam1, and the pose of cam2 relative to cam0 can only be computed indirectly, which accumulates error.
When calibrating only two cameras at a time, such as cam0 and cam1, the calibration board appears strongly tilted to both cameras and the range of board angles is small, which also causes errors.
Is there a better way to calibrate? Thank you.
There are many ways, and many papers have been published on this.
The simplest way is to calibrate two cameras at a time; each pair should have the largest common FOV. But there are other methods as well.
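If you go the pairwise route, the poses chain around the ring; a minimal numpy sketch, where R_01/t_01 and R_12/t_12 are hypothetical outputs of two pairwise stereo calibrations:

    import numpy as np

    def to_homogeneous(R, t):
        # Pack a rotation matrix and a translation vector into a 4x4 transform.
        T = np.eye(4)
        T[:3, :3] = R
        T[:3, 3] = np.ravel(t)
        return T

    T_01 = to_homogeneous(R_01, t_01)   # cam0 -> cam1
    T_12 = to_homogeneous(R_12, t_12)   # cam1 -> cam2

    # cam0 -> cam2 is only available indirectly; the errors of both pairs
    # accumulate here, which is exactly the drawback the question describes.
    T_02 = T_12 @ T_01
    R_02, t_02 = T_02[:3, :3], T_02[:3, 3]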
You can use a structure-from-motion-based method: move the camera rig around and jointly optimize for the camera poses. It was published in CVPR sometime between 2010 and 2016 (I forget the exact year); it is about camera calibration with minimal or zero overlap.
You can add an IMU and use Kalibr to calibrate them, anchoring all cameras to the IMU: https://github.com/ethz-asl/kalibr/wiki/camera-imu-calibration.
An alternative that I frequently use is the hand-eye calibration approach used in VINS-Mono (https://github.com/HKUST-Aerial-Robotics/VINS-Mono). It requires no complicated pattern, just moving the rig around.
For my paper, we used the sea-level vanishing line and vanishing point to calibrate cameras that cannot see the same chessboard pattern in the same view:
Han Wang, Wei Mou, Xiaozheng Mou, Shenghai Yuan, Soner Ulun, Shuai Yang, Bok-Suk Shin, "An Automatic Self-Calibration Approach for Wide Baseline Stereo Cameras Using Sea Surface Images", Unmanned Systems, vol. 3, no. 4, pp. 277-290, 2015.
There are other options as well, such as using a Vicon motion tracking system. Just find one that you think is suitable for you and try it out.

OpenCV + photogrammetry

I have a stereo pair:
photo 1: http://savepic.org/1671682.jpg
photo 2: http://savepic.org/1667586.jpg
There is a coordinate system in each image. How can I find the coordinates of point A in this system using the OpenCV library? It would be nice to see sample code.
I've looked for it at opencv.willowgarage.com/documentation/cpp/camera_calibration_and_3d_reconstruction.html but haven't found it (or haven't understood it :) ).
Your 'stereo' images are fine. What you have already done is solve the correspondence problem: in both images you have marked point 'A', which means you know which pixel in each image corresponds to the same point 'A'.
What you want to do is triangulate the 3D position of point 'A'. You can only do this by first calibrating your cameras; this functionality is already inside OpenCV:
http://docs.opencv.org/doc/tutorials/calib3d/camera_calibration/camera_calibration.html
http://docs.opencv.org/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html
This gives you the exact ray of light for each pixel, and the optical center of each camera through which the ray passes. Moreover, you need stereo calibration, which establishes the orientation and position of each camera with respect to the other.
From that point on, triangulation is simple, since you know the pixel location of point 'A' in both images. You have:
Location and orientation of camera 1 and camera 2
The optical ray (given by the pixel location) from each camera to label 'A'.
So you have two locations in space and two rays from those locations. The intersection of these rays is your 3D answer.
Note that in practice these rays will never exactly intersect (two lines in 3D rarely do), so you need to approximate. Use the OpenCV function triangulatePoints(), with the stereo calibration results and the pixel coordinates of label 'A' as input.
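A minimal sketch of that last step, assuming K1, K2 and the stereo extrinsics (R, T) are already known and the pixel coordinates of 'A' have been undistorted (e.g. with cv2.undistortPoints, passing P=K to stay in pixel units); xA_left and friends are hypothetical measured coordinates:

    import cv2
    import numpy as np

    # Projection matrices: camera 1 at the origin, camera 2 displaced by (R, T).
    P1 = K1 @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K2 @ np.hstack([R, T.reshape(3, 1)])

    # 2xN arrays of pixel coordinates of 'A' in each image (here N = 1).
    pts1 = np.array([[xA_left], [yA_left]], dtype=np.float64)
    pts2 = np.array([[xA_right], [yA_right]], dtype=np.float64)

    X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)   # 4xN homogeneous points
    X = (X_h[:3] / X_h[3]).T                          # Nx3 Euclidean coordinates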
First of all, this is not truly a stereo pair. A good stereo pair usually needs 60%-80% overlap and only small rotation differences between the images. Even if this pair had the necessary base to be a good stereo pair, the extreme kappa rotation would make the resulting epipolar images useless.
Secondly, among other things, you should take a look at camera resectioning and the collinearity equations, both supported by OpenCV:
http://en.wikipedia.org/wiki/Camera_resectioning
http://en.wikipedia.org/wiki/Collinearity_equation
You need to understand the maths.
If those pages aren't enough, you should look at the OpenCV book - it devotes a couple of chapters to this. There are also plenty of textbooks that cover it in more detail.

How to compute the rotation and translation between 2 cameras?

I am aware of the chessboard camera calibration technique, and have implemented it.
If I have 2 cameras viewing the same scene, and I calibrate both simultaneously with the chessboard technique, can I compute the rotation matrix and translation vector between them? How?
If you have the 3D camera coordinates of corresponding points, you can compute the optimal rotation matrix and translation vector as a rigid body transformation.
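A minimal sketch of that fit (the SVD-based Kabsch solution, my wording), assuming A and B are Nx3 arrays of the same physical points expressed in each camera's coordinate frame:

    import numpy as np

    def rigid_transform(A, B):
        # Least-squares R, t such that B ≈ A @ R.T + t (Kabsch algorithm).
        cA, cB = A.mean(axis=0), B.mean(axis=0)
        H = (A - cA).T @ (B - cB)        # 3x3 cross-covariance matrix
        U, S, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:         # repair a possible reflection
            Vt[2, :] *= -1
            R = Vt.T @ U.T
        t = cB - R @ cA
        return R, t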
If you are using OpenCV already, then why don't you use cv::stereoCalibrate?
It returns the rotation and translation matrices. The only thing you have to do is make sure that the calibration chessboard is seen by both cameras.
The exact procedure is shown in the .cpp samples provided with the OpenCV library (I have version 2.2, and the samples were installed by default in /usr/local/share/opencv/samples).
The code example is called stereo_calib.cpp. Although it's not explained clearly what they are doing there (for that you might want to look at "Learning OpenCV"), it's something you can build on.
If I understood you correctly, you have two calibrated cameras observing a common scene, and you wish to recover their spatial arrangement. This is possible (provided you find enough image correspondences), but only up to an unknown scale factor on the translation. That is, we can recover the rotation (3 degrees of freedom, DOF) but only the direction of the translation (2 DOF). This is because we have no way to tell whether the projected scene is big and the cameras are far apart, or the scene is small and the cameras are near. In the literature, the 5-DOF arrangement is termed relative pose or relative orientation (Google is your friend).
If your measurements are accurate and in general position, 6 point correspondences may be enough to recover a unique solution. A relatively recent algorithm does exactly that:
Nistér, D., "An efficient solution to the five-point relative pose problem," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 6, pp. 756-770, June 2004. doi: 10.1109/TPAMI.2004.17
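OpenCV's cv2.findEssentialMat is based on this five-point algorithm; a minimal sketch, assuming pts1 and pts2 are Nx2 arrays of matched pixel coordinates and K is the intrinsic matrix (taken here to be shared by both views):

    import cv2

    E, inliers = cv2.findEssentialMat(pts1, pts2, K,
                                      method=cv2.RANSAC, prob=0.999, threshold=1.0)
    # recoverPose returns t as a unit vector: the translation is determined
    # only up to the unknown scale discussed above.
    n_inliers, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)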
Update:
Use a structure-from-motion/bundle-adjustment package like Bundler to solve simultaneously for the 3D locations of the scene points and the relative camera parameters.
Any such package requires several inputs:
the camera calibrations that you already have;
2D pixel locations of points of interest in each camera (use an interest point detector such as Harris or DoG, the first part of SIFT);
correspondences between the points of interest from each camera (use a descriptor such as SIFT, SURF, or SSD to do the matching).
Note that the solution is up to a certain scale ambiguity. You'll thus need to supply a distance measurement either between the cameras or between a pair of objects in the scene.
Original answer (applies primarily to uncalibrated cameras as the comments kindly point out):
This camera calibration toolbox from Caltech can solve for and visualize both the intrinsics (lens parameters, etc.) and the extrinsics (where the camera was positioned when each photo was taken). The latter is what you're interested in.
The Hartley and Zisserman blue book is also a great reference. In particular, you may want to look at the chapter on epipolar lines and the fundamental matrix, which is free online at the link.

Volume of the camera calibration

I am dealing with a problem concerning camera calibration. I need calibrated cameras to take measurements of 3D objects. I am using OpenCV to carry out the calibration, and I am wondering how I can predict or calculate the volume in which the cameras are well calibrated. Is there a way to increase this volume, especially in the direction of the optical axis? Does increasing the movement range of the calibration target in the 'z' direction make a sufficient difference?
I think you confuse a few key things in your question:
Camera calibration - this means finding out the matrices (intrinsic and extrinsic) that describe the camera position, rotation, up vector, distortion, optical center etc. etc.
Epipolar Rectification - this means virtually "rotating" the image planes so that they become coplanar (parallel). This simplifies the stereo reconstruction algorithms.
For camera calibration you do not need to care about any volumes - there are no volumes in which the camera is well or badly calibrated. If you use chessboard pattern calibration, your cameras are either calibrated or not.
When dealing with rectification, you want to know which areas of the rectified images correspond, and also to maximize these areas. OpenCV lets you choose between two extremes: either return only the area in which all pixels are valid (cutting away pixels that don't fit the rectangular area), or include all pixels, even invalid ones.
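This trade-off is exposed through the alpha parameter of cv2.stereoRectify; a minimal sketch, assuming K1, d1, K2, d2, R, T come from a stereo calibration and image_size is the sensor resolution:

    import cv2

    # alpha=0: crop so that all returned pixels are valid;
    # alpha=1: keep every source pixel, including black invalid regions.
    R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(
        K1, d1, K2, d2, image_size, R, T, alpha=0)

The returned roi1/roi2 rectangles tell you which parts of the rectified images are valid.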
OpenCV documentation has some nice, more detailed descriptions here: http://opencv.willowgarage.com/documentation/camera_calibration_and_3d_reconstruction.html