I use the OpenCV sample code to do camera calibration. As far as I know, the extrinsic parameters have 12 elements (the 3x4 [R|t] matrix), but in OpenCV the rotation vector and translation vector together have only 6 elements.
Why does OpenCV have only 6 extrinsic parameters?
http://docs.opencv.org/2.4/_downloads/camera_calibration.cpp
calibrateCamera method
In the calibrateCamera method, the outputs rvecs and tvecs are 3D vectors for rotation (since any rotation matrix has just 3 degrees of freedom) and translation. The Rodrigues method is used to convert the 3x3 rotation matrix R to the 3D vector r. Thus, there are only 6 extrinsic parameters.
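For illustration, here is a minimal C++ sketch that rebuilds the full 3x4 extrinsic matrix (the 12 elements) from the 6 parameters. It assumes rvec and tvec are the 3x1 CV_64F outputs of cv::calibrateCamera for one view; the function name is just for illustration.

    #include <opencv2/calib3d.hpp>
    #include <opencv2/core.hpp>

    cv::Mat extrinsic3x4(const cv::Mat& rvec, const cv::Mat& tvec)
    {
        cv::Mat R, Rt;
        cv::Rodrigues(rvec, R);    // 3 rotation parameters -> 3x3 rotation matrix
        cv::hconcat(R, tvec, Rt);  // [R | t] is the 3x4 extrinsic matrix (12 elements)
        return Rt;
    }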
Related
I have a fisheye camera which I have already calibrated correctly with the calibration functions provided by OpenCV. As a result, I got a 3x3 intrinsic camera matrix K and a vector of distortion parameters.
Using these two I can rectify the input image with the functions estimateNewCameraMatrixForUndistortRectify and initUndistortRectifyMap to obtain 2 transformation maps, which I later use as input to the function remap. As output I get an undistorted image in which straight lines appear straight.
My questions are basically...
Can I continue using the intrinsic matrix K I got from calibration in conjunction with the undistorted image?
Has the intrinsic matrix K somehow changed due to the undistortion? If so, how could I calculate the new K?
Thanks in advance.
As @micka pointed out in the comments, after calibrating the camera and undistorting the image, I can continue working with the new camera matrix returned by estimateNewCameraMatrixForUndistortRectify (rather than the original K). This answers both of my questions above.
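For reference, a rough sketch of the pipeline described in the question, assuming K and D come from the fisheye calibration; the point of the answer is that newK, not the original K, matches the undistorted image. The function name and the balance value are just illustrative choices.

    #include <opencv2/calib3d.hpp>
    #include <opencv2/imgproc.hpp>

    cv::Mat undistortFisheye(const cv::Mat& img, const cv::Mat& K, const cv::Mat& D)
    {
        cv::Mat newK, map1, map2, out;
        // Estimate the camera matrix that is valid for the undistorted image.
        cv::fisheye::estimateNewCameraMatrixForUndistortRectify(
            K, D, img.size(), cv::Matx33d::eye(), newK, /*balance=*/1.0);
        cv::fisheye::initUndistortRectifyMap(
            K, D, cv::Matx33d::eye(), newK, img.size(), CV_16SC2, map1, map2);
        cv::remap(img, out, map1, map2, cv::INTER_LINEAR);
        return out;  // pair this image with newK (not the original K) from here on
    }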
calibrateCamera() provides rvec, tvec, distCoeff and cameraMatrix whereas solvePnP() takes cameraMatrix, distCoeff as input and provides rvec, tvec as output. What is the difference between these two functions?
cv::calibrateCamera(...)
The function estimates the following parameters of a monocular camera from several views of a calibration pattern whose geometry is usually known (e.g. a chessboard):
The linear intrinsic parameters: the focal lengths in terms of pixels (these are basically scale factors), the principal point, which would ideally be in the center of the image, and sometimes a skew coefficient between the x and the y axis (but this is often zero).
The non-linear intrinsic parameters: the previously mentioned parameters form the linear camera matrix, but there are also some non-linear parameters in the transformation from the 3D camera to the 2D image plane, i.e. the lens distortion.
The extrinsic parameters: the transformation matrix between the 3D world and 3D camera coordinate systems.
The estimation of the above mentioned parameters is usually based on 2D-3D correspondences. The algorithm detects some 2D points in the image (e.g. of a chessboard) for which the corresponding 3D object points are specified (known 3D geometry). In the simplest case it performs the following steps (these can vary with the flags of cv::calibrateCamera(..., int flags, ...)); a minimal usage sketch follows the steps:
Computes the linear intrinsic parameters and assumes the non-linear ones to be zero.
Estimates the initial camera pose (extrinsics) from the approximated intrinsics. This is done using cv::solvePnP(...).
Performs the Levenberg-Marquardt optimization algorithm to minimize the re-projection error between the detected 2D image points and 2D projections of the 3D object points. This is done using cv::projectPoints(...).
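The usage sketch mentioned above, assuming objectPoints and imagePoints have already been filled from N chessboard views (e.g. with cv::findChessboardCorners); the wrapper function name is just for illustration.

    #include <opencv2/calib3d.hpp>
    #include <vector>

    void calibrateFromViews(const std::vector<std::vector<cv::Point3f> >& objectPoints,
                            const std::vector<std::vector<cv::Point2f> >& imagePoints,
                            cv::Size imageSize,
                            cv::Mat& cameraMatrix, cv::Mat& distCoeffs)
    {
        std::vector<cv::Mat> rvecs, tvecs;  // one rvec/tvec pair (extrinsics) per input view
        double rms = cv::calibrateCamera(objectPoints, imagePoints, imageSize,
                                         cameraMatrix, distCoeffs, rvecs, tvecs);
        (void)rms;  // final re-projection error minimized by Levenberg-Marquardt
    }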
cv::solvePnP(...)
At this point, I have also implicitly answered the role of cv::solvePnP(...), as it is part of cv::calibrateCamera(...).
Once you have the intrinsics of a camera, you can assume that they will never change (unless you change the optics or zoom). The extrinsics, on the other hand, can change, i.e. you can rotate the camera or move it to another location. The scenario of changing an object's pose relative to the camera is very similar, and this is what cv::solvePnP(...) is used for.
The function estimates the object pose given:
A set of 3D object points in a model coordinate system (can be the 3D world as well),
Their 2D projections on the image plane,
The linear and non-linear intrinsic parameters.
The output of cv::solvePnP(...) is given as a rotation vector (rvec) together with a translation vector (tvec) that bring the 3D object points from the model coordinate system to the 3D camera coordinate system.
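A minimal sketch of such a call, assuming the intrinsics come from a previous cv::calibrateCamera run; the wrapper function name and parameter names are just placeholders.

    #include <opencv2/calib3d.hpp>
    #include <vector>

    bool estimatePose(const std::vector<cv::Point3f>& objectPoints,  // model-frame 3D points
                      const std::vector<cv::Point2f>& imagePoints,   // their 2D projections
                      const cv::Mat& cameraMatrix, const cv::Mat& distCoeffs,
                      cv::Mat& rvec, cv::Mat& tvec)
    {
        // rvec/tvec bring model coordinates into the 3D camera coordinate system.
        return cv::solvePnP(objectPoints, imagePoints, cameraMatrix, distCoeffs, rvec, tvec);
    }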
calibrateCamera (doc) estimates intrinsic parameters (i.e. camera matrix and distortion coefficients) for a given camera. This function requires you to provide as input N sets of 2D-3D correspondences, associated to N images taken with the same camera from varying viewpoints (typically N=30, see this tutorial on this topic). The function returns the camera matrix and distortion coefficients for the considered camera. Although they are usually not used, the extrinsic parameters (i.e. position and orientation) are also estimated, hence the function returns one pair of rvec and tvec for each of the N input images.
solvePnP (doc) estimates extrinsic parameters for a given camera image. This function requires you to provide a set of 2D-3D correspondences, associated to a single image taken with a camera whose intrinsic parameters are known. The function returns a single pair of rvec and tvec, corresponding to the input image.
calibrateCamera() provides rvec, tvec, distCoeffs and cameraMatrix ---- distCoeffs is related to the distortion of the image, and cameraMatrix provides the center of the image (Cx and Cy) and the focal lengths (Fx and Fy). These are called the intrinsic parameters. Unless you change the aperture/focus of the camera they will remain the same. [It also provides rvec and tvec; I don't know yet what possible use they have. They describe the position and orientation of the camera in the real world; rvec and tvec are also known as the extrinsic parameters.]
solvePnP() takes cameraMatrix and distCoeffs as input and provides rvec, tvec --- using Cx, Cy, Fx and Fy it can estimate the current position and orientation of the camera, i.e. the extrinsic parameters.
In other words, first use calibrateCamera() to obtain cameraMatrix and distCoeffs. Use them in solvePnP() and it will tell you the rotation (rvec) and translation (tvec) of the camera as you move it with respect to your real-world object (for example a marker), as in the sketch below.
http://docs.opencv.org/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html#calibratecamera
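The sketch referenced above: calibrate once, then call cv::solvePnP per frame to track the pose of a chessboard (used here as the "marker"). The 9x6 pattern size and 25 mm square size are assumed values, and the function name is illustrative.

    #include <opencv2/calib3d.hpp>
    #include <vector>

    void trackBoardPose(const cv::Mat& frame, const cv::Mat& cameraMatrix,
                        const cv::Mat& distCoeffs)
    {
        const cv::Size patternSize(9, 6);           // assumed board layout
        std::vector<cv::Point2f> corners;
        if (!cv::findChessboardCorners(frame, patternSize, corners))
            return;                                 // board not visible in this frame

        std::vector<cv::Point3f> objectPoints;      // board corners on the Z=0 plane
        for (int y = 0; y < patternSize.height; ++y)
            for (int x = 0; x < patternSize.width; ++x)
                objectPoints.push_back(cv::Point3f(x * 0.025f, y * 0.025f, 0.f));  // 25 mm squares

        cv::Mat rvec, tvec;
        cv::solvePnP(objectPoints, corners, cameraMatrix, distCoeffs, rvec, tvec);
        // rvec/tvec now describe the board's pose relative to the camera for this frame.
    }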
I used cv::calibrateCamera method with 9*6 chessboard pattern.
Now I am getting rvecs and tvecs corresponding to each pattern view.
Can somebody explain the format of rvecs and tvecs?
As far as I have figured out, each one is a 3x1 matrix,
and the OpenCV documentation suggests looking at the Rodrigues function.
http://en.wikipedia.org/wiki/Rodrigues'_rotation_formula
As far as Rodrigues is concerned, it is a way to rotate a vector
around a given axis by an angle theta.
But for this we need four values: a unit vector (ux, uy, uz) and the angle, whereas OpenCV seems to use only 3 values.
For the OpenCV Rodrigues documentation refer to the link below: http://docs.opencv.org/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html#void Rodrigues(InputArray src, OutputArray dst, OutputArray jacobian)
It says that the function converts a 3x1 matrix to a 3x3 rotation matrix.
Is this matrix the same as the one we use in 3D graphics?
Can I convert it to a 4x4 matrix and use it for transformations like the one below?
M4X4 [
x x x 0
x x x 0
x x x 0
0 0 0 1
]
x: the values from the 3x3 output matrix of the Rodrigues function.
Is the relationship valid:
Vout = M4X4 * Vin;
using the matrix above.
The 3x1 rotation vector expresses a rotation by defining the axis of rotation via the direction the vector points in, and the angle via the magnitude of the vector. Using the OpenCV function Rodrigues(InputArray src, OutputArray dst) you can obtain the corresponding rotation matrix, which fits the transformation you describe.
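A small sketch of that, assuming rvec is one of the CV_64F vectors returned by cv::calibrateCamera; it embeds the 3x3 rotation returned by Rodrigues into the 4x4 form asked about above. The function name is just for illustration.

    #include <opencv2/calib3d.hpp>

    cv::Matx44d rotationTo4x4(const cv::Mat& rvec)
    {
        cv::Mat R;
        cv::Rodrigues(rvec, R);              // 3x1 rotation vector -> 3x3 rotation matrix
        cv::Matx44d M = cv::Matx44d::eye();  // bottom row stays 0 0 0 1
        for (int r = 0; r < 3; ++r)
            for (int c = 0; c < 3; ++c)
                M(r, c) = R.at<double>(r, c);
        // Putting tvec into the last column would complete the rigid transform,
        // so that Vout = M * Vin maps homogeneous points into camera coordinates.
        return M;
    }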
Suppose I have a camera which is already calibrated, so that I already know the distortion coefficients and the camera matrix, and I have a set of points that all lie in a plane, whose real-world metrics and pixel locations I know, from which I have constructed a homography.
Given this homography, the camera matrix and the distortion coefficients, how can I find the camera pose in the easiest way? Preferably by using OpenCV.
Can I, for instance, use the "DecomposeProjectionMatrix()" function?
It accepts only a 3x4 projection matrix, but I only have a 3x3 homography.
In this older post you have a method for that. It is a mathematical conversion that gives you the pose matrix, i.e. the rotation and translation.
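For completeness, a minimal, non-robust sketch of that kind of conversion (the standard plane-pose decomposition), assuming H maps points on the Z=0 model plane to undistorted pixel coordinates and K is the camera matrix; the function name is just illustrative.

    #include <opencv2/core.hpp>

    void poseFromHomography(const cv::Matx33d& H, const cv::Matx33d& K,
                            cv::Matx33d& R, cv::Vec3d& t)
    {
        cv::Matx33d A = K.inv() * H;          // columns are lambda*r1, lambda*r2, lambda*t
        cv::Vec3d a1(A(0,0), A(1,0), A(2,0));
        cv::Vec3d a2(A(0,1), A(1,1), A(2,1));
        cv::Vec3d a3(A(0,2), A(1,2), A(2,2));
        double lambda = 1.0 / cv::norm(a1);   // scale fixed by requiring ||r1|| = 1
        cv::Vec3d r1 = lambda * a1;
        cv::Vec3d r2 = lambda * a2;
        cv::Vec3d r3 = r1.cross(r2);          // third axis completes the rotation
        t = lambda * a3;
        R = cv::Matx33d(r1(0), r2(0), r3(0),
                        r1(1), r2(1), r3(1),
                        r1(2), r2(2), r3(2));
        // In practice R should be re-orthogonalized (e.g. via SVD) and the sign of
        // lambda chosen so that t has positive depth.
    }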
I have 4 ps3eye cameras, and I have calibrated camera1 and camera2 using the cvStereoCalibrate() function of the OpenCV library,
using a chessboard pattern, by finding the corners and passing their 3D coordinates into this function.
Also I've calibrated camera2 and camera3 using another set of chessboard images viewed by camera2 and camera3.
Using the same method I've calibrated camera3 and camera4.
So now I've extrinsic and intrinsic parameters of camera1 and camera2,
extrinsic and intrinsic parameters of camera2 and camera3,
and extrinsic and intrinsic parameters of camera3 and camera4.
where the extrinsic parameters are the rotation and translation matrices and the intrinsic ones are the focal lengths and the principal point.
Now suppose there is a 3D point (in world coordinates; I know how to find 3D coordinates from stereo cameras) that is viewed by camera3 and camera4 but not by camera1 and camera2.
The question I have is: how do you take this 3D world-coordinate point viewed by camera3 and camera4 and transform it into camera1 and camera2's
world coordinate system using the rotation, translation, focal length and principal point parameters?
OpenCV's stereo calibration gives you only the relative extrinsic matrix between two cameras.
According to its documentation, you don't get the transformations in world coordinates (i.e. in relation to the calibration pattern). It suggests, though, running a regular camera calibration on one of the images so that you at least know its transformation: cv::stereoCalibrate
If the calibrations were perfect, you could use your daisy-chain setup to derive the world transformation of any of the cameras.
As far as I know this is not very stable, because the fact that you have multiple cameras should be considered when running the calibration.
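If you do go the daisy-chain route, the composition could look roughly like the sketch below. It assumes cv::stereoCalibrate's convention that a point X1 in the first camera's frame maps to X2 = R*X1 + T in the second camera's frame, that the triangulated point is expressed in camera3's frame, and hypothetical pairwise results R12/T12 (cameras 1-2) and R23/T23 (cameras 2-3).

    #include <opencv2/core.hpp>

    cv::Vec3d cam3PointToCam1(const cv::Vec3d& Xc3,
                              const cv::Matx33d& R12, const cv::Vec3d& T12,
                              const cv::Matx33d& R23, const cv::Vec3d& T23)
    {
        cv::Vec3d Xc2 = R23.t() * (Xc3 - T23);  // camera3 frame -> camera2 frame
        cv::Vec3d Xc1 = R12.t() * (Xc2 - T12);  // camera2 frame -> camera1 frame
        return Xc1;
    }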
Multi-camera calibration is not the most trivial of problems. Have a look at:
Multi-Camera Self-Calibration
GML C++ Camera Calibration Toolbox
I'm also looking for a solution to this, so if you find out more regarding this and OpenCV, let me know.