I performed the calibration with a chessboard (I defined the square size, i.e. the corner spacing, in mm). Now, are the results of the calibration (the roto-translation matrix) in mm?
Short and long answer: yes.
You basically set the units when you define the chessboard square size, so since you defined it in mm, your results will be in mm.
Edit: Keep in mind though that the reprojection error is in pixels and not in mm.
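For illustration, here is roughly where the units enter. This is only a minimal sketch: the 9x6 pattern size and the 25 mm square size below are made-up example values, and the per-view corner detection is omitted.

    import numpy as np

    # Hypothetical chessboard: 9x6 inner corners, 25 mm squares (adjust to your board).
    square_size_mm = 25.0
    pattern_size = (9, 6)

    # Object points of one board view, expressed in millimetres.
    objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2)
    objp *= square_size_mm  # scale from "number of squares" to mm

    # If you pass objp (repeated per view) together with the detected corners to
    # cv2.calibrateCamera, every returned tvec is expressed in the same unit, i.e. mm.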
I used the CameraCalibration node in Meshroom 2021.1.0 with a checkerboard grid to calibrate a camera. From what I understand, Meshroom uses OpenCV under the hood, so this question indirectly relates to the calibration process in OpenCV as well.
The lens I'm using is advertised as an 8 mm lens, so I was expecting a focal length somewhere between 7 and 9 mm. However, the calibration returned fx = 2541.273 and fy = 2641.111 (in pixels). The sensor pixel size is 6 microns, so converting from pixels to mm I get focal lengths of 15.247 mm and 15.847 mm respectively, which is right around double what I would expect.
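For reference, this is the pixel-to-mm conversion I'm doing (a quick Python check with the numbers above):

    # focal length in mm = focal length in pixels * pixel pitch in mm
    pixel_size_mm = 0.006            # 6 micron pixels
    fx_px, fy_px = 2541.273, 2641.111
    fx_mm = fx_px * pixel_size_mm    # ~15.25 mm
    fy_mm = fy_px * pixel_size_mm    # ~15.85 mm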
The checkerboard I'm using has 50 mm squares, and I specified the square size in the camera calibration; I also double-checked the printed dimensions with calipers. I verified that my images were at the full resolution expected from the sensor specifications, so it wasn't a case where the resolution was half or double the native sensor size or something like that.
Curious if there is anything obvious I may have missed that would cause the focal length in the calibration to come out double what is expected.
I went through a similar calibration process with my smartphone. The camera I was testing advertised a focal length of 7 mm, and in that case the calibration returned an fx of 7.21 mm and an fy of 7.20 mm. The only difference was that the grid in that test used 30 mm squares and was 7 x 5 instead of 4 x 3, but the process to get those values was essentially the same.
Update:
I reran the camera calibration with a different set of images, and this time I got an fx of 23.07 mm and an fy of 23.23 mm, so it may just have been a coincidence that the previous run was off by a factor of 2. Given how inconsistent the focal length values are from one run to the next and how far off they are from the expected values, I'm guessing the errors I'm seeing are due to poor calibration images being used in the process. The camera is fixed, so I'm moving the checkerboard on a surface, i.e. mostly in a single plane. To get a good calibration, do I just need a better variety of checkerboard orientations, e.g. different distances and different angles?
Is the grid simply too small relative to the field of view to get good calibration values from it? I calibrated with 80 calibration shots similar to the two above, moving the board from one edge of the frame to the other.
I got a larger calibration target using a ChArUco pattern, and the values look more stable now, but every now and then a repeated calibration still produces numbers that are far off. Should the board below be large enough to get stable calibration values?
I am using OpenCV to calibrate my webcam. I fixed the webcam to a rig so that it stays static, moved a chessboard calibration pattern in front of the camera, and used the detected points to compute the calibration, as in many OpenCV examples (https://docs.opencv.org/3.1.0/dc/dbb/tutorial_py_calibration.html).
Now, this gives me the camera intrinsic matrix and a rotation and translation component for mapping each of these chessboard views from the chessboard space to world space.
However, what I am interested in is the global extrinsic matrix, i.e. once I have removed the checkerboard, I want to be able to specify a point in the image scene (x, y) together with its height and get its position in world space. As far as I understand, I need both the intrinsic and the extrinsic matrix for this. How should one proceed to compute the extrinsic matrix from here? Can I use the measurements I have already gathered from the chessboard calibration step to compute the extrinsic matrix as well?
Let me give some context. Consider the following picture (from https://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html):
The camera has an "attached" rigid reference frame (Xc, Yc, Zc). The intrinsic calibration that you successfully performed allows you to convert a point (Xc, Yc, Zc) into its projection (u, v) on the image, and a point (u, v) in the image into a ray in (Xc, Yc, Zc) space (you can only recover it up to a scale factor).
In practice, you want to place the camera in an external "world" reference frame, let's call it (X, Y, Z). Then there is a rigid transformation, represented by a rotation matrix R and a translation vector T, such that:
|Xc|       |X|
|Yc| = R * |Y| + T
|Zc|       |Z|
That's the extrinsic calibration (which can also be written as a 4x4 matrix; that is what you call the extrinsic matrix).
Now, the answer. To obtain R and T, you can do the following:
Fix your world reference frame, for example the ground can be the (x,y) plane, and choose an origin for it.
Set some points with known coordinates in this reference frame, for example, points in a square grid in the floor.
Take a picture and get the corresponding 2D image coordinates.
Use solvePnP to obtain the rotation and translation, with the following parameters:
objectPoints: the 3D points in the world reference frame.
imagePoints: the corresponding 2D points in the image in the same order as objectPoints.
cameraMatrix: the intrinsic matrix you already have.
distCoeffs: the distortion coefficients you already have.
rvec, tvec: these will be the outputs.
useExtrinsicGuess: false
flags: you can use CV_ITERATIVE
Finally, get R from rvec with the Rodrigues function.
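A minimal Python sketch of these steps follows. Everything in it is a placeholder: the four ground points, their pixel coordinates, and the intrinsics are made-up values that only show the call sequence (in recent Python bindings the flag is cv2.SOLVEPNP_ITERATIVE).

    import numpy as np
    import cv2

    # Four known points on the floor (world Z = 0), in metres, and their measured
    # pixel coordinates in one image. Replace with your own measurements.
    world_points = np.array([[0.0, 0.0, 0.0],
                             [1.0, 0.0, 0.0],
                             [1.0, 1.0, 0.0],
                             [0.0, 1.0, 0.0]], dtype=np.float32)
    image_points = np.array([[310.0, 420.0],
                             [650.0, 415.0],
                             [640.0, 150.0],
                             [330.0, 160.0]], dtype=np.float32)

    # camera_matrix and dist_coeffs come from your intrinsic calibration.
    camera_matrix = np.array([[800.0, 0.0, 320.0],
                              [0.0, 800.0, 240.0],
                              [0.0, 0.0, 1.0]])
    dist_coeffs = np.zeros(5)

    ok, rvec, tvec = cv2.solvePnP(world_points, image_points,
                                  camera_matrix, dist_coeffs,
                                  flags=cv2.SOLVEPNP_ITERATIVE)
    R, _ = cv2.Rodrigues(rvec)   # 3x3 rotation matrix

    # Xc = R @ X + tvec maps world coordinates to camera coordinates;
    # stacking [R | tvec] with a [0 0 0 1] row gives the 4x4 extrinsic matrix.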
You will need at least 3 non-collinear points with corresponding 3D-2D coordinates for solvePnP to work (link), but more is better. To get good-quality points, you could print a big chessboard pattern, put it flat on the floor, and use it as a grid. What's important is that the pattern is not too small in the image (the larger it appears, the more stable your calibration will be).
And, very important: for the intrinsic calibration you used a chessboard pattern with squares of a certain size, but you told the algorithm (which internally does a kind of solvePnP for each pattern view) that the size of each square is 1. This is not explicit, but it is done in line 10 of the sample code, where the grid is built with coordinates 0, 1, 2, ...:
objp[:,:2] = np.mgrid[0:7,0:6].T.reshape(-1,2)
And the scale of the world for the extrinsic calibration must match this, so you have several possibilities:
Use the same scale, for example by using the same grid or by measuring the coordinates of your "world" plane in the same scale. In this case, your "world" won't be at the right scale.
Recommended: redo the intrinsic calibration with the right scale, something like:
objp[:,:2] = (size_of_a_square*np.mgrid[0:7,0:6]).T.reshape(-1,2)
Where size_of_a_square is the real size of a square.
(I haven't done this, but it is theoretically possible; do it if you can't do option 2.) Reuse the intrinsic calibration by scaling fx and fy. This is possible because the camera sees everything up to a scale factor, and the declared size of a square only changes fx and fy (and the T in the pose for each square, but that's another story). If the actual size of a square is L, then replace fx and fy with L*fx and L*fy before calling solvePnP.
I am using OpenCV's calibrateCamera and am trying to understand how it calculates the reprojection error, as well as what this error represents. It appears to be the RMS of the Euclidean distances between the projected points and the measured image points; is this right? However, what does it mean for the final reprojection error to be "minimized"? Does calibrateCamera() explicitly use projectPoints() to find the projected points?
The reprojection error is the error (the Euclidean distance, for example) between the 3D points reprojected with the estimated intrinsic and extrinsic matrices and the 2D image points detected by some image processing technique (the corners of the chessboard pattern, for example).
The final reprojection error is minimized because estimating the set of intrinsic and extrinsic parameters is a non-linear problem, so you have to find the set of parameters that minimizes this reprojection error iteratively.
More information: Zhengyou Zhang, "A Flexible New Technique for Camera Calibration", 2000.
The reprojection error is not defined 100% consistently in the literature. Formally, a single reprojection error is the 2D vector relating measured to projected pixel coordinates.
OpenCV and most other software and related bundle adjustment algorithms use the sum of squares of Euclidean lengths of these 2d vectors as the objective function during optimization.
As Geoff and Alessandro Jacopson have pointed out, the return value of cv::calibrateCamera() is the RMS of Euclidean errors (contrary to the doc of v3.1). This quantity is directly related to the objective function value, but not exactly the same.
Other definitions of the reprojection error include the mean Euclidean length and the median of the Euclidean lengths. Both are legitimate, and care must be taken when comparing values.
An in-depth article on this topic can be found here: https://calib.io/blogs/knowledge-base/understanding-reprojection-errors
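As an illustration of how these definitions differ, here is a small numpy sketch; the residuals are synthetic, generated only so the four quantities can be compared:

    import numpy as np

    # Synthetic measured vs. projected pixel coordinates (N points, 2D each).
    rng = np.random.default_rng(0)
    measured = rng.uniform(0, 640, size=(100, 2))
    projected = measured + rng.normal(scale=0.3, size=(100, 2))

    errors = np.linalg.norm(projected - measured, axis=1)  # Euclidean length per point

    objective = np.sum(errors ** 2)       # what bundle adjustment minimizes
    rms = np.sqrt(np.mean(errors ** 2))   # what cv::calibrateCamera returns
    mean_err = np.mean(errors)            # another common definition
    median_err = np.median(errors)        # robust to a few bad points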
I am referring to OpenCV version 3.1.0; here you find the doc for calibrateCamera:
http://docs.opencv.org/3.1.0/d9/d0c/group__calib3d.html#ga687a1ab946686f0d85ae0363b5af1d7b and the doc says:
The algorithm performs the following steps:
1. Compute the initial intrinsic parameters (the option only available for planar calibration patterns) or read them from the input parameters. The distortion coefficients are all set to zeros initially unless some of CV_CALIB_FIX_K? are specified.
2. Estimate the initial camera pose as if the intrinsic parameters have been already known. This is done using solvePnP.
3. Run the global Levenberg-Marquardt optimization algorithm to minimize the reprojection error, that is, the total sum of squared distances between the observed feature points imagePoints and the projected (using the current estimates for camera parameters and the poses) object points objectPoints. See projectPoints for details.
The function returns the final re-projection error.
Anyway, instead of relying on the doc, I prefer to have a look at the code:
https://github.com/Itseez/opencv/blob/3.1.0/modules/calib3d/src/calibration.cpp#L3298
When you use projectPoints, you need to calculate the RMS by hand after the reprojection. This might help:
OPENCV: Calibratecamera 2 reprojection error and custom computed one not agree
Here is the reprojection error calculation from the OpenCV code calibrate.cpp line 1629:
return std::sqrt(reprojErr/total);
Here, total is the total number of points summed over all images.
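A sketch of that by-hand computation in Python follows; objpoints, imgpoints, rvecs, tvecs, K and dist are assumed to be the inputs/outputs of your calibrateCamera call, and the result should match the RMS value that calibrateCamera returns:

    import numpy as np
    import cv2

    def rms_reprojection_error(objpoints, imgpoints, rvecs, tvecs, K, dist):
        sq_err_sum = 0.0
        total_points = 0
        for objp, imgp, rvec, tvec in zip(objpoints, imgpoints, rvecs, tvecs):
            proj, _ = cv2.projectPoints(objp, rvec, tvec, K, dist)
            diff = imgp.reshape(-1, 2) - proj.reshape(-1, 2)
            sq_err_sum += np.sum(diff ** 2)   # sum of squared Euclidean distances
            total_points += len(objp)
        return np.sqrt(sq_err_sum / total_points)  # i.e. sqrt(reprojErr / total)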
I'm currently experimenting with OpenCV's calibration toolbox and I'm using a default checkerboard pattern to calibrate a camera. I want to use larger checkerboard blocks so that I can stand farther away from the camera without affecting OpenCV's ability to detect the corners.
As I understand it, OpenCV is pre-programmed with default block-size values. My question is: is there a way to change this default block-size value in the code? And where would I change this? TIA
OpenCV does not make any assumption about the physical size of your pattern or about its number of rows and columns. That is, you can have any pattern with R rows and C columns, and it doesn't matter whether each block is 1 cm or 1 m.
The only things you give to the calibrateCamera function are the objectPoints and imagePoints. The array dimensions (sizes) of these parameters correspond to the number of corners of your pattern.
objectPoints should contain the 3D coordinates (planar coordinates in your case, with Z = 0) of your checkerboard corners. These corners should be scaled to the physical size of your checkerboard. That is, if a corner has row-column index (3, 1) and each block side is 3 cm, then its 3D coordinate would be (0.09, 0.03, 0.00).
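A sketch of how those scaled objectPoints are typically built; the 7 x 6 corner count below is just an assumed pattern size, and the unit you pick here (metres, cm, mm) is the unit your extrinsics will come out in:

    import numpy as np

    square_size = 0.03   # 3 cm block side, expressed in metres
    cols, rows = 7, 6    # inner corners of the pattern (assumed)

    # One (cols*rows, 3) array of corner coordinates on the Z = 0 plane,
    # already scaled by the physical square size.
    objp = np.zeros((rows * cols, 3), np.float32)
    objp[:, :2] = np.mgrid[0:cols, 0:rows].T.reshape(-1, 2) * square_size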
My problem statement is very simple, but I am unable to get the OpenCV calibration to work for me. I am using the code from here: source code.
I have to take images parallel to the camera at a fixed distance. I tried taking test images (about 20 of them), both only parallel to the camera and in different planes. I also changed the size and the number of squares.
What would be the best way to calibrate in this scenario?
The undistorted image is cropped later, that's why it looks smaller.
After going through the images closely, the pincushion distortion seems to have been corrected, but the "trapezoidal" distortion still remains. Since the camera is mounted in a closed box, the planes in which I can take images are limited.
To simplify what Vlad already said: it is theoretically impossible to calibrate your camera with test images taken only parallel to the camera. You have to change your calibration board's orientation; in fact, you should have a different orientation in each test image.
Check out the first two images in the link below to see how the calibration board should be slanted (or tilted):
http://www.vision.caltech.edu/bouguetj/calib_doc/
Think about the calibration problem as finding a projection matrix P:
image_points = P * 3d_points, where P = intrinsic * extrinsic
Now just bear with me:
You are basically interested in the intrinsic part, but the calibration algorithm has to find both the intrinsics and the extrinsics. Now, each column of the projection matrix can be obtained if you select a 3D point at infinity, for example xInf = [1, 0, 0, 0]. This point is at infinity because when you transform it from homogeneous to Cartesian coordinates you get [1/0, 0, 0]. If you multiply the projection matrix by a point at infinity you get its corresponding column (the 1st for xInf, the 2nd for yInf, the 3rd for zInf, and the 4th for the camera center).
Thus the conclusion is simple: to get a projection matrix (that is, a successful calibration) you have to clearly see points at infinity, or vanishing points, from the converging extensions of lines in your chessboard rig (like the ends of railroad tracks at the horizon). Your images don't make it easy to detect vanishing points, since you don't slant your chessboard, rotate it, or scale it by stepping back. Thus your calibration will always fail.
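A tiny numerical illustration of that argument; K, R and T below are made-up values, and the point is only to show that multiplying P by a point at infinity along X returns the first column of P:

    import numpy as np
    import cv2

    K = np.array([[800.0, 0.0, 320.0],
                  [0.0, 800.0, 240.0],
                  [0.0, 0.0, 1.0]])
    R, _ = cv2.Rodrigues(np.array([0.1, -0.2, 0.05]))  # some rotation
    T = np.array([[0.1], [0.2], [2.0]])

    P = K @ np.hstack([R, T])               # P = intrinsic * extrinsic (3x4)
    xInf = np.array([1.0, 0.0, 0.0, 0.0])   # point at infinity along the world X axis

    print(np.allclose(P @ xInf, P[:, 0]))   # True: its image is the first column of P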