I am trying to run the cvPOSIT algorithm to map points that are projected using an Optoma PK301 with the Kinect's depth camera. I have already determined the intrinsic parameters of the projector by calibrating it using the Projector-Camera Calibration Toolbox (http://code.google.com/p/procamcalib/) in Matlab. Would I be able to use these intrinsic parameters (in particular the focal lengths fc) to determine the actual focal length of the projector to feed into the POSIT function in OpenCV?
Ok, the quick answer to this is that the POSIT algorithm assumes the focal lengths along the x and y axes are the same (as they would be on an ideal camera). For the purposes of the POSIT algorithm, just take the average of the two: (fx + fy)/2.
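For example, given the camera matrix produced by the calibration (a minimal sketch; the matrix values below are placeholders):

    import numpy as np

    # Intrinsic matrix as returned by the calibration tool (placeholder values).
    K = np.array([[1050.3,    0.0, 639.5],   # fx,  0, cx
                  [   0.0, 1048.7, 479.5],   #  0, fy, cy
                  [   0.0,    0.0,   1.0]])

    fx, fy = K[0, 0], K[1, 1]
    f_posit = (fx + fy) / 2.0   # single focal length (in pixels) to feed into POSIT
    print(f_posit)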
Knowing the rotations and translations of two cameras in world coordinates (relative to some known point), how would I calibrate my stereo system?
In OpenCV the normal approach is to use a calibration pattern in front of both cameras to get point correspondences. These points are used in stereoCalibrate, which calculates the rotation matrix R and translation vector T (and the fundamental matrix F). In the next step, stereo rectification can be done to row-align the images of both cameras with stereoRectify. stereoRectify needs R and T to calculate the homographies for the perspective transform of the images, and it also calculates the Q matrix for translating disparity to depth.
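For reference, that pipeline looks roughly like this in Python (a sketch with my variable names; the intrinsics are assumed to be known already and kept fixed):

    import cv2

    def rectify_stereo_pair(objpoints, imgpoints_l, imgpoints_r,
                            K_l, D_l, K_r, D_r, image_size):
        """Sketch of the stereoCalibrate -> stereoRectify pipeline.

        objpoints:   list of (N, 3) float32 arrays, pattern points per view
        imgpoints_*: lists of (N, 1, 2) float32 arrays, detected corners per view
        K_*, D_*:    previously calibrated intrinsics and distortion coefficients
        image_size:  (width, height) in pixels
        """
        # Estimate R, T (mapping points from camera 1's frame to camera 2's), E and F.
        rms, K_l, D_l, K_r, D_r, R, T, E, F = cv2.stereoCalibrate(
            objpoints, imgpoints_l, imgpoints_r,
            K_l, D_l, K_r, D_r, image_size,
            flags=cv2.CALIB_FIX_INTRINSIC)

        # Row-align both views; Q maps disparity to depth.
        R1, R2, P1, P2, Q, roi_l, roi_r = cv2.stereoRectify(
            K_l, D_l, K_r, D_r, image_size, R, T)
        return R, T, R1, R2, P1, P2, Q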
Given that R and T in the world coordinate system are already known (specifically the rotation around the z-axis (floor-ceiling or yaw angle in aeronautics) and the rotation around the axis perpendicular to the camera view (pitch angle)), in which coordinate system should they be given to stereoRectify? What I mean by that is that there is the coordinate system of Camera1, that of Camera2, and the (or one) world coordinate system.
The essential matrix E can be computed as R * S, where S is the skew-symmetric matrix of T, and the fundamental matrix F as M_r.inv().t() * E * M_l.inv(), following Learning OpenCV 3 by Kaehler and Bradski (M_r and M_l are the camera intrinsics of the right and left camera, respectively). Here the question about R and T is the same: is it the rotation from one camera to the other in world coordinates, or e.g. in the coordinate system of one camera?
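In code, I would compute them like this (a sketch with placeholder values for R, T and the intrinsics):

    import numpy as np

    def skew(t):
        """Skew-symmetric matrix S such that S @ x == np.cross(t, x)."""
        return np.array([[    0, -t[2],  t[1]],
                         [ t[2],     0, -t[0]],
                         [-t[1],  t[0],     0]])

    # Placeholder values: R, T relating the two cameras, M_l, M_r intrinsics.
    R = np.eye(3)
    T = np.array([0.1, 0.0, 0.0])          # e.g. a 10 cm baseline along x
    M_l = M_r = np.array([[1000, 0, 640],
                          [0, 1000, 480],
                          [0,    0,   1]], dtype=float)

    E = R @ skew(T)                                    # essential matrix
    F = np.linalg.inv(M_r).T @ E @ np.linalg.inv(M_l)  # fundamental matrix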
A sketch of the involved coordinate systems can be found here: How is the camera coordinate system in OpenCV oriented? However, it is still unclear to me how exactly R and T should be calculated.
The question is not terribly clear, but...
IIUC you know the extrinsic parameters of both cameras, ergo their relative pose, but not the intrinsic ones. Therefore you still need to calibrate the cameras' intrinsics.
Knowing the relative pose of the cameras simply allows you to calibrate the intrinsics of the two cameras independently. Whether this is a simplification for your procedure or not depends on your particular setup.
Note that, unless you have inferred the extrinsics from a separate, image-based procedure, you should hardly trust their values, especially if they are derived from some sort of CAD model of your rig. The reason is that, unless your cameras have quite low resolution, pixel-level accuracy is likely to be much finer than what the manufacturing tolerances of your rig would account for.
What is the procedure to calculate reprojected points, reprojection errors and the mean reprojection error from the given world points (original coordinates), intrinsic matrix, rotation matrices and translation vectors?
Is there any built-in OpenCV function for that, or should we calculate it manually?
If we have to calculate manually, what is the best way to get reprojected points?
projectPoints projects 3D points to an image plane.
calibrateCamera finds the camera intrinsic and extrinsic parameters from several views of a calibration pattern and returns the final re-projection error.
The function estimates the intrinsic camera parameters and extrinsic parameters for each of the views. The algorithm is based on [Zhang2000] and [BouguetMCT]. The coordinates of 3D object points and their corresponding 2D projections in each view must be specified. That may be achieved by using an object with a known geometry and easily detectable feature points. Such an object is called a calibration rig or calibration pattern, and OpenCV has built-in support for a chessboard as a calibration rig (see findChessboardCorners()).
The algorithm performs the following steps:
1. Compute the initial intrinsic parameters (the option is only available for planar calibration patterns) or read them from the input parameters. The distortion coefficients are all set to zero initially unless some of CV_CALIB_FIX_K? are specified.
2. Estimate the initial camera pose as if the intrinsic parameters were already known. This is done using solvePnP().
3. Run the global Levenberg-Marquardt optimization algorithm to minimize the reprojection error, that is, the total sum of squared distances between the observed feature points imagePoints and the projected (using the current estimates for camera parameters and the poses) object points objectPoints. See projectPoints() for details. The function returns the final re-projection error.
[Zhang2000] Zhang, Zhengyou. A flexible new technique for camera calibration. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2000, 22(11): 1330-1334.
[BouguetMCT] J.-Y. Bouguet. MATLAB calibration tool. http://www.vision.caltech.edu/bouguetj/calib_doc/
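If you want to compute it manually, you can feed the estimated parameters back into projectPoints and compare the result against the detected image points (a sketch; the variable names are illustrative and mirror the calibrateCamera outputs):

    import cv2
    import numpy as np

    def reprojection_errors(objpoints, imgpoints, rvecs, tvecs, K, dist):
        """Per-view RMS reprojection errors plus the overall mean.

        objpoints: list of (N, 3) float32 arrays (world/pattern points per view)
        imgpoints: list of (N, 1, 2) float32 arrays (detected points per view)
        rvecs, tvecs, K, dist: as returned by cv2.calibrateCamera
        """
        per_view = []
        for obj, img, rvec, tvec in zip(objpoints, imgpoints, rvecs, tvecs):
            proj, _ = cv2.projectPoints(obj, rvec, tvec, K, dist)  # reprojected points
            err = cv2.norm(img, proj, cv2.NORM_L2) / np.sqrt(len(proj))
            per_view.append(err)
        return per_view, float(np.mean(per_view))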
I have calibrated my pinhole camera using OpenCV 3.0 and got 4 intrinsic parameters (f_x, f_y, u_0, v_0) plus some distortion coefficients. Using this calibration, I estimate the essential matrix from two images taken at different positions. Finally I want to recover (R|t) using the recoverPose function from OpenCV 3.0. The interface of this function expects a single focal length, but I have two from the calibration procedure. How can I get the focal length f = f_y/s_y = f_x/s_x (definition according to OpenCV) from f_x and f_y so that I can properly use the recoverPose function?
You can simply use the horizontal focal length f_x. The ratio f_y/f_x is just the pixel aspect ratio, an estimate of how square the pixels are.
Note that, unless you have some absolute scale reference in your image pair (e.g. an object of known size), you can recover pose only up to scale, that is, R and s*t for some unknown scale s.
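Put together, something like this should work (a sketch, assuming the OpenCV 3.x overloads of findEssentialMat and recoverPose that take a single focal length and principal point):

    import cv2
    import numpy as np

    def relative_pose(pts1, pts2, K):
        """Estimate (R, t) up to scale from matched points, using a single focal length.

        pts1, pts2: (N, 2) float arrays of matched image points
        K:          3x3 camera matrix from the calibration
        """
        fx, fy = K[0, 0], K[1, 1]
        f = 0.5 * (fx + fy)              # or simply fx, as suggested above
        pp = (K[0, 2], K[1, 2])          # principal point

        E, mask = cv2.findEssentialMat(pts1, pts2, focal=f, pp=pp,
                                       method=cv2.RANSAC, prob=0.999, threshold=1.0)
        retval, R, t, mask = cv2.recoverPose(E, pts1, pts2, focal=f, pp=pp, mask=mask)
        return R, t   # t has unit norm; the true translation is s * t for unknown s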
You can't really derive the actual focal length just from f_x and f_y. For a pinhole camera the actual focal length is the distance from the pinhole to the imaging plane. Your camera probably has the focal length written somewhere in the specs.
calibrateCamera() provides rvec, tvec, distCoeff and cameraMatrix whereas solvePnP() takes cameraMatrix, distCoeff as input and provides rvec, tvec as output. What is the difference between these two functions?
cv::calibrateCamera(...)
The function estimates the following parameters of a monocular camera from several views of a calibration pattern. The geometry of this pattern is usually known (i.e. it can be a chessboard):
The linear intrinsic parameters: the focal lengths in terms of pixels (these are basically scale factors), the principal point which would be ideally in the center of the image, and sometimes a skew coefficient between the x and the y axis (but this is often zero).
The non-linear intrinsic parameters: the previously mentioned parameters form the linear camera matrix, but there are also some non-linear parameters in the transformation from the 3D camera to the 2D image plane, i.e. the lens distortion.
The extrinsic parameters: the transformation matrix between the 3D world and 3D camera coordinate systems.
The estimation of the above-mentioned parameters is usually based on 2D-3D correspondences. The algorithm detects some 2D points in the image (i.e. the chessboard) for which the corresponding 3D object points are specified (known 3D geometry). It performs the following steps in the simplest case (it can vary with the flags of cv::calibrateCamera(..., int flags, ...)):
Computes the linear intrinsic parameters and sets the non-linear ones to zero.
Estimates the initial camera pose (extrinsics) using the approximate intrinsics. This is done using cv::solvePnP(...).
Performs the Levenberg-Marquardt optimization algorithm to minimize the re-projection error between the detected 2D image points and 2D projections of the 3D object points. This is done using cv::projectPoints(...).
cv::solvePnP(...)
At this point, I have also implicitly answered what the role of cv::solvePnP(...) is, as it is part of cv::calibrateCamera(...).
Once you have the intrinsics of a camera, you can assume that they will never change (unless you change the optics or the zoom). On the other hand, the extrinsics can change, i.e. you can rotate the camera or move it to another location. You should see that the scenario of changing an object's pose relative to the camera is very similar in this case. And this is what cv::solvePnP(...) is used for.
The function estimates the object pose given:
A set of 3D object points in a model coordinate system (can be the 3D world as well),
Their 2D projections on the image plane,
The linear and non-linear intrinsic parameters.
The output of cv::solvePnP(...) is given as a rotation vector (rvec) together with a translation vector (tvec) that bring the 3D object points from the model coordinate system to the 3D camera coordinate system.
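For illustration, you can verify that mapping like this (a sketch; the 3D model points, image points and intrinsics below are placeholders):

    import cv2
    import numpy as np

    # Placeholder model points (e.g. corners of a 10 cm square marker, in meters)
    object_points = np.array([[0, 0, 0], [0.1, 0, 0],
                              [0.1, 0.1, 0], [0, 0.1, 0]], dtype=np.float32)
    # Their detected 2D projections in the image (placeholder values)
    image_points = np.array([[320, 240], [420, 242],
                             [418, 342], [318, 340]], dtype=np.float32)
    K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float64)
    dist = np.zeros(5)

    ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)

    # rvec/tvec bring the model points into the camera coordinate system:
    R, _ = cv2.Rodrigues(rvec)
    points_cam = (R @ object_points.T + tvec).T    # same points, camera frame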
calibrateCamera (doc) estimates the intrinsic parameters (i.e. camera matrix and distortion coefficients) for a given camera. This function requires you to provide as input N sets of 2D-3D correspondences, associated with N images taken with the same camera from varying viewpoints (typically N=30, see this tutorial on the topic). The function returns the camera matrix and distortion coefficients for the considered camera. Although they are usually not used, the extrinsic parameters (i.e. position and orientation) are also estimated, hence the function returns one pair of rvec and tvec for each of the N input images.
solvePnP (doc) estimates the extrinsic parameters for a given camera image. This function requires you to provide a set of 2D-3D correspondences, associated with a single image taken with a camera of known intrinsic parameters. The function returns a single pair of rvec and tvec, corresponding to the input image.
calibrateCamera() provides rvec, tvec, distCoeff and cameraMatrix: distCoeffs describe the distortion of the image, and cameraMatrix provides the image center (Cx and Cy, the projection center) and the focal lengths (Fx and Fy). These are called intrinsic parameters. Unless you change the aperture/focus of the camera, they will remain the same. [It also provides rvec and tvec; I am not yet sure what they can be used for. They describe the position of the camera in the real world; rvec and tvec are also known as extrinsic parameters.]
solvePnP() takes cameraMatrix and distCoeff as input and provides rvec and tvec: using Cx, Cy, Fx and Fy it can estimate the current position and orientation of the camera, i.e. the extrinsic parameters.
In other words, first use calibrateCamera() to obtain the cameraMatrix and distCoeff. Use them in solvePnP() and it will tell you the rotation (rvec) and translation (tvec) of the camera as you move it with respect to your real-world object (with some marker, as you can presume).
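Put together, that workflow might look roughly like this (a sketch; the chessboard size, file names and variable names are assumptions):

    import cv2
    import numpy as np

    # One-time calibration from several chessboard views (9x6 inner corners assumed)
    pattern = (9, 6)
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)  # in units of one square

    objpoints, imgpoints = [], []
    for fname in ["view1.png", "view2.png", "view3.png"]:   # calibration images (assumed)
        gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(gray, pattern)
        if found:
            objpoints.append(objp)
            imgpoints.append(corners)

    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        objpoints, imgpoints, gray.shape[::-1], None, None)

    # Later, for any new image of the known object: reuse K and dist with solvePnP
    gray = cv2.imread("new_view.png", cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        ok, rvec, tvec = cv2.solvePnP(objp, corners, K, dist)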
I have successfully calibrated an analog camera using OpenCV. The output focal length and principal point are in pixels.
I know in digital cameras you can easily multiply the size of the pixel in the sensor by the focal length in pixels and get the focal length in mm (or whatever).
How can I do with this analog camera to get the focal length in mm?
Lens manufacturers usually write the focal length on the lens. Even the name of the lens contains it, e.g. "Canon lens 1.8 50mm".
If not, you can try to measure it manually.
Take the lens off the camera. Take a small, well-illuminated object, place it 1-3 meters in front of the lens, and hold a sheet of paper behind the lens. Move the paper until you get a sharp, focused image of the object on it.
Now measure the following:
a - the distance from the lens to the object;
y - the object size;
y' - the size of the object's image on the paper;
then f = a/(1 + y/y') is the focal length.
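For example (made-up numbers): with the object a = 2000 mm from the lens, object height y = 100 mm and image height on the paper y' = 5 mm, this gives f = 2000 / (1 + 100/5) = 2000 / 21 ≈ 95 mm.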
If your output is in pixels, you must be digitizing the analog input at some point. You just need to figure out the size of the pixel that you are creating.
For example, if you are scanning the film in, then use the pixel size of the scanner.
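As a rough illustration (assumed numbers): if the film is scanned at 2400 dpi, one pixel corresponds to 25.4 mm / 2400 ≈ 0.0106 mm, so a calibrated focal length of, say, 3000 pixels would correspond to roughly 3000 × 0.0106 ≈ 32 mm.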