Intrinsic Camera Matrix after image rectification - opencv

I have a fisheye camera which I have already calibrated correctly with the calibration functions provided by OpenCV. As a result I have a 3x3 intrinsic camera matrix K and a vector of distortion parameters.
Using these two, I can rectify the input image with the functions estimateNewCameraMatrixForUndistortRectify and initUndistortRectifyMap to obtain two transformation maps, which I then pass to the function remap. As output I get an undistorted image in which straight lines stay straight.
My questions are basically...
Can I keep using the intrinsic matrix K I got from calibration together with the undistorted image?
Has the intrinsic matrix K somehow changed due to the undistortion? If so, how can I calculate the new K?
Thanks in advance.

As @micka pointed out in the comments, after calibrating the camera and undistorting the image, I can continue using the new camera matrix K returned by estimateNewCameraMatrixForUndistortRectify. This answers both of my questions.
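For reference, here is a minimal sketch of that pipeline in Python. The values of K, D, and the image path are placeholders; in practice K and D come from cv2.fisheye.calibrate():

    import cv2
    import numpy as np

    # Placeholder calibration results; in practice these come from
    # cv2.fisheye.calibrate().
    K = np.array([[400.0, 0.0, 320.0],
                  [0.0, 400.0, 240.0],
                  [0.0, 0.0, 1.0]])
    D = np.zeros((4, 1))  # the fisheye model uses 4 distortion coefficients

    img = cv2.imread("fisheye.jpg")  # placeholder input image
    h, w = img.shape[:2]

    # Estimate the camera matrix that is valid for the undistorted image.
    new_K = cv2.fisheye.estimateNewCameraMatrixForUndistortRectify(
        K, D, (w, h), np.eye(3), balance=0.0)

    # Build the remap tables and undistort.
    map1, map2 = cv2.fisheye.initUndistortRectifyMap(
        K, D, np.eye(3), new_K, (w, h), cv2.CV_16SC2)
    undistorted = cv2.remap(img, map1, map2, interpolation=cv2.INTER_LINEAR)

    # From here on, use new_K (not the original K) with the undistorted image.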

Related

Is there any opencv function to calculate reprojected points?

What is the procedure to calculate reprojected points, reprojection errors, and the mean reprojection error from given world points (original coordinates), the intrinsic matrix, rotation matrices, and the translation vector?
Is there any built-in OpenCV function for that, or should we calculate it manually?
If we have to calculate it manually, what is the best way to get the reprojected points?
projectPoints projects 3D points to an image plane.
calibrateCamera finds the camera's intrinsic and extrinsic parameters from several views of a calibration pattern and returns the final re-projection error.
The function estimates the intrinsic camera parameters and the extrinsic parameters for each of the views. The algorithm is based on [Zhang2000] and [BouguetMCT]. The coordinates of 3D object points and their corresponding 2D projections in each view must be specified. That may be achieved by using an object with a known geometry and easily detectable feature points. Such an object is called a calibration rig or calibration pattern, and OpenCV has built-in support for a chessboard as a calibration rig (see findChessboardCorners()).
The algorithm performs the following steps:

1. Compute the initial intrinsic parameters (the option is only available for planar calibration patterns) or read them from the input parameters. The distortion coefficients are all set to zeros initially unless some of CV_CALIB_FIX_K? are specified.
2. Estimate the initial camera pose as if the intrinsic parameters were already known. This is done using solvePnP().
3. Run the global Levenberg-Marquardt optimization algorithm to minimize the reprojection error, that is, the total sum of squared distances between the observed feature points imagePoints and the projected (using the current estimates for camera parameters and poses) object points objectPoints. See projectPoints() for details.

The function returns the final re-projection error.
[Zhang2000] Z. Zhang. A flexible new technique for camera calibration. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(11):1330-1334, 2000.
[BouguetMCT] J.-Y. Bouguet. Camera Calibration Toolbox for Matlab. http://www.vision.caltech.edu/bouguetj/calib_doc/
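If you do compute it manually, the recipe is: for each view, project the object points with projectPoints() using that view's rvec/tvec, measure the distances to the observed image points, and average. A minimal sketch, where the function name is illustrative and the inputs are exactly what cv2.calibrateCamera() returns:

    import cv2
    import numpy as np

    def mean_reprojection_error(object_points, image_points, K, dist, rvecs, tvecs):
        # object_points/image_points: per-view lists, as used by calibrateCamera.
        total_sq_error = 0.0
        total_points = 0
        for obj, img, rvec, tvec in zip(object_points, image_points, rvecs, tvecs):
            projected, _ = cv2.projectPoints(obj, rvec, tvec, K, dist)
            diff = img.reshape(-1, 2) - projected.reshape(-1, 2)
            total_sq_error += float((diff ** 2).sum())
            total_points += len(diff)
        # RMS over all points, the same convention calibrateCamera reports.
        return np.sqrt(total_sq_error / total_points)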

Rectifying images on opencv with intrinsic and extrinsic parameters already found

I ran Bouguet's calibration toolbox (http://www.vision.caltech.edu/bouguetj/calib_doc/htmls/example.html) in Matlab and have the parameters from the calibration (intrinsic [focal lengths and principal point offsets] and extrinsic [rotations and translations of the checkerboard with respect to the camera]).
The feature coordinates of the checkerboard corners in my images are also known.
I want to obtain rectified images so that I can compute a disparity map (for which I already have the code) from each pair of rectified images.
How can I go about doing this?
The documentation is in OpenCV's camera calibration tutorial. At the end, it reads: "Add these values as constants to your program, call the initUndistortRectifyMap and the remap function to remove distortion and enjoy distortion free inputs with cheap and low quality cameras".
Once your cameras are rectified, you may be interested in the StereoVar or StereoBM classes to get the disparity map. Use reprojectImageTo3D once you are done if you want to check that your results look fine in 3D.
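A minimal sketch of that last step, with placeholder filenames and a placeholder Q (in practice Q is the 4x4 disparity-to-depth matrix returned by cv2.stereoRectify):

    import cv2
    import numpy as np

    # Assumed inputs: a pair of already-rectified grayscale images.
    left = cv2.imread("left_rect.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right_rect.png", cv2.IMREAD_GRAYSCALE)

    # Block matching; these parameter values are typical starting points.
    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # fixed point -> float

    # Sanity check in 3D. np.eye(4) is only a placeholder here; use the Q
    # returned by cv2.stereoRectify.
    Q = np.eye(4, dtype=np.float32)
    points_3d = cv2.reprojectImageTo3D(disparity, Q)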
If fully calibrated, use the method from this paper: http://link.springer.com/article/10.1007/s001380050120#page-1. After rectification both cameras have the same orientation, i.e. they share the same R.
The first row of the new R is the baseline, the difference of the two camera centers. The second row is the cross product of the baseline with the old left z-axis (the third row of R_old_left). The third row is the cross product of the first two rows.
Warp the images with H_left = P_new(1:3,1:3) * P_old_left(1:3,1:3)^-1 and H_right = P_new(1:3,1:3) * P_old_right(1:3,1:3)^-1.
The rectified left pixel coordinates are u_new = (h11*u + h12*v + h13)/(h31*u + h32*v + h33) and v_new = (h21*u + h22*v + h23)/(h31*u + h32*v + h33); the same applies to the right image.
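A NumPy sketch of that construction, assuming c_left/c_right are the camera centers, R_old_left is the old left rotation, and P_new/P_old are 3x4 projection matrices (all names illustrative):

    import numpy as np

    def rectifying_rotation(c_left, c_right, R_old_left):
        # Row 1: baseline direction (difference of the camera centers).
        r1 = (c_right - c_left) / np.linalg.norm(c_right - c_left)
        # Row 2: cross product of the baseline with the old left z-axis.
        r2 = np.cross(r1, R_old_left[2])
        r2 /= np.linalg.norm(r2)
        # Row 3: cross product of the first two rows.
        r3 = np.cross(r1, r2)
        return np.vstack([r1, r2, r3])

    def rectifying_homography(P_new, P_old):
        # Maps old pixels to rectified ones: [u_new, v_new, 1]^T ~ H [u, v, 1]^T.
        return P_new[:3, :3] @ np.linalg.inv(P_old[:3, :3])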

How can I use the output 3x3 matrix from getPerspectiveTransform in OpenCV?

I'm trying to analyze the perspective transform/homography matrix between two images capturing the same object (e.g., a rectangle) from different perspectives/shooting angles. The perspective transform can be derived using the function getPerspectiveTransform in OpenCV 2.3.1. I want to find the corresponding rotation and translation matrices.
The output of getPerspectiveTransform is a 3x3 matrix which I can use directly to warp the source image into the target image. But my question is: how can I find the rotation and translation matrices from the obtained 3x3 matrix?
I was looking into the function decomposeProjectionMatrix for the corresponding rotation and translation matrices, but its input is required to be a 3x4 projection matrix. How can I relate the perspective transformation (a 3x3 matrix) to the 3x4 projection matrix? Am I on the right track?
Thank you very much.
The information contained in the homography matrix (returned from getPerspectiveTransform) is not enough to extract rotation/translation. The missing column is key to correctly find the angles.
The good news is that in some scenarios, you can use the solvePnP() function to extract the desired parameters from two sets of points.
Also, this question asks about the same thing; it should help:
Analyze camera movement with OpenCV
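For the planar case, a minimal sketch of the solvePnP() route; the correspondences and intrinsics below are made up purely for illustration:

    import cv2
    import numpy as np

    # Four planar object points (Z = 0) and their observed image points;
    # all values here are illustrative.
    object_pts = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]],
                          dtype=np.float64)
    image_pts = np.array([[320, 240], [480, 250], [470, 400], [310, 390]],
                         dtype=np.float64)
    K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])  # assumed intrinsics
    dist = np.zeros(5)  # assume no lens distortion

    ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist)
    R, _ = cv2.Rodrigues(rvec)  # 3x3 rotation matrix from the rotation vector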

using FindExtrinsicCameraParams2 in OpenCV

I have 4 coplanar points in object coordinates and the corresponding image points (on the image plane). I want to compute the relative translation and rotation of the object plane with respect to the camera.
FindExtrinsicCameraParams2 is supposed to be the solution, but I'm having trouble using it: errors keep showing up when compiling.
Has anyone successfully used this function in OpenCV? Could I have some comments or sample code for using it?
Thank you!
I would use the OpenCV function FindHomography(), as it is simpler and you can easily convert from a homography to extrinsic parameters.
You have to call the function like this:
FindHomography(srcPoints, dstPoints, H, method, ransacReprojThreshold=3.0, status=None)
The method is CV_RANSAC. If you pass more than 4 points, RANSAC selects the 4-point set that best satisfies the model.
You will get the homography in H, and if you want to convert it to extrinsic parameters you should do what I explain in this post.
Basically, the extrinsic (pose) matrix has its first, second, and fourth columns equal (up to scale) to the columns of the homography. The third column is redundant because it is the cross product of columns one and two.
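A sketch of that conversion, assuming a planar scene and a known intrinsic matrix K (the function name is illustrative):

    import numpy as np

    def pose_from_plane_homography(H_img, K):
        # Remove the intrinsics and normalize so the rotation columns are unit length.
        H = np.linalg.inv(K) @ H_img
        H /= np.linalg.norm(H[:, 0])
        if H[2, 2] < 0:  # pick the sign that puts the plane in front of the camera
            H = -H
        r1, r2, t = H[:, 0], H[:, 1], H[:, 2]
        r3 = np.cross(r1, r2)  # the redundant third column
        R = np.column_stack([r1, r2, r3])
        # Re-orthonormalize: a homography estimated from noisy points
        # will not yield a perfect rotation matrix.
        U, _, Vt = np.linalg.svd(R)
        return U @ Vt, t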
After several days of testing OpenCV's 3D calibration functions and getting past all the errors and awkward output numbers, I finally got correct outputs from findHomography, solvePnP (the new version of FindExtrinsicCameraParams2), and cvProjectPoints. Some of the tips are discussed in use OpenCV cvProjectPoints2 function.
Those tips also apply to the error in this post; specifically, my mistake was passing float data into a CV_64F Mat. All done now!
You can use CalibrateCamera2.
objectPts - your 4 coplanar points
imagePts - your corresponding image points.
This method will compute the intrinsic matrix and distortion coefficients, which tell you how the objectPts have been projected as the imagePts onto the camera's imaging plane.
There are no extrinsic parameters to compute here since you are using only one camera. If you used two cameras, then you would be looking at computing the extrinsic matrix using StereoCalibrate.
Ankur

Project points using intrinsic and transformational matrices

Currently I am working on a 3D image visualization project using C# and Emgu CV (emgucv.net). In that project, the following steps are already done with two images of the same scene (slightly different in rotation and translation):
Feature detection (SURF), matching, and homography calculation
Fundamental matrix calculation
Essential matrix calculation from the fundamental matrix and the camera intrinsic matrix
Finally, calculation of the rotation and translation matrices
I have also obtained 4 possible answers for the transformation matrix (3x4 [R|T]) using different combinations of R and T, by changing their signs. Now I want to select the correct transformation matrix from those 4 answers. Before that, I want to check whether one of the answers is correct at all, so I have to re-project the points of the second image using the camera intrinsic matrix and each of the transformation matrices; then I can compare the resulting points with the second image's points to confirm the result (translation matrix).
My question is: how do I recombine the transformation matrix (rotation [3x3] and translation [3x1]) with the camera intrinsic matrix to project points into image points using emgucv.net?
Or is there any alternative method to confirm the transformation matrix that I obtained?
Thanks in advance for any help.
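The projection itself is straightforward: build P = K[R|t] and apply it to homogeneous points; the Emgu CV matrix calls mirror this NumPy sketch (all names illustrative). As for confirming the choice, the standard test among the four [R|T] candidates is the cheirality check: triangulated points must have positive depth in front of both cameras.

    import numpy as np

    def project(points_3d, K, R, t):
        # points_3d: Nx3 world points; K 3x3, R 3x3, t 3x1 (or length 3).
        Rt = np.hstack([R, np.asarray(t).reshape(3, 1)])  # 3x4 [R|t]
        P = K @ Rt                                        # 3x4 projection matrix
        pts_h = np.hstack([points_3d, np.ones((len(points_3d), 1))])  # Nx4 homogeneous
        proj = (P @ pts_h.T).T                            # Nx3 homogeneous image points
        return proj[:, :2] / proj[:, 2:3]                 # divide by w -> pixel coords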
