I have 4 coplanar points in object coordinates and the corresponding image points (on the image plane). I want to compute the relative translation and rotation of the object plane with respect to the camera.
FindExtrinsicCameraParams2 is supposed to be the solution, but I'm having trouble using it: errors keep showing up when I compile.
Has anyone successfully used this function in OpenCV? Could I have some comments or sample code showing how to use it?
Thank you!
I would use the OpenCV function FindHomography() as it is simpler, and you can easily convert from a homography to extrinsic parameters.
You have to call the function like this:
FindHomography(srcPoints, dstPoints, H, method, ransacReprojThreshold=3.0, status=None)
method is CV_RANSAC. If you pass more than 4 point pairs, RANSAC repeatedly fits the homography to random 4-point subsets and keeps the model that agrees best with the remaining points.
You will get the homography in H, and if you want to convert it to extrinsic parameters you should do what I explain in this post.
Basically, the extrinsics (pose) matrix has its first, second and fourth columns equal to the columns of the homography (up to a common scale). The third column is redundant because it is the cross product of columns one and two.
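A minimal C++ sketch of that decomposition, assuming you already have the homography H and the camera intrinsic matrix K (both CV_64F); the function name and the simple normalization are mine, and a real implementation would also re-orthonormalize the rotation:

#include <opencv2/opencv.hpp>

// Sketch: recover the pose [r1 r2 r3 | t] from a plane-induced homography H and intrinsics K.
cv::Mat poseFromHomography(const cv::Mat& H, const cv::Mat& K)
{
    cv::Mat A = K.inv() * H;                  // remove the intrinsics: A ~ [r1 r2 t]
    cv::Mat r1 = A.col(0), r2 = A.col(1), t = A.col(2);
    double lambda = 1.0 / cv::norm(r1);       // scale so the rotation columns have unit length
    r1 *= lambda; r2 *= lambda; t *= lambda;
    cv::Mat r3 = r1.cross(r2);                // third rotation column = cross product of the first two
    cv::Mat pose(3, 4, CV_64F);               // [r1 r2 r3 | t]
    r1.copyTo(pose.col(0));
    r2.copyTo(pose.col(1));
    r3.copyTo(pose.col(2));
    t.copyTo(pose.col(3));
    return pose;
}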
After several days of testing the OpenCV functions related to 3D calibration, and getting past all the errors and awkward output numbers, I finally got correct outputs from these functions, including findHomography, solvePnP (the new version of FindExtrinsicCameraParams) and cvProjectPoints. Some of the tips have been discussed in use OpenCV cvProjectPoints2 function.
Those tips also apply to the error in this post. Specifically, my mistake here was passing float data into a CV_64F Mat. All done now!
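For reference, a type-consistent setup like the following is what ended up working for me (a sketch; the point values and intrinsics are placeholders):

#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main()
{
    // Keep everything in double precision (CV_64F) so the data type matches the Mats;
    // mixing float data into CV_64F Mats was exactly my bug.
    std::vector<cv::Point3d> objectPoints = { {0,0,0}, {1,0,0}, {1,1,0}, {0,1,0} };
    std::vector<cv::Point2d> imagePoints  = { {320,240}, {400,245}, {398,320}, {318,315} };

    cv::Mat K = (cv::Mat_<double>(3,3) << 800, 0, 320,
                                            0, 800, 240,
                                            0,   0,   1);
    cv::Mat distCoeffs = cv::Mat::zeros(5, 1, CV_64F);

    cv::Mat rvec, tvec;
    cv::solvePnP(objectPoints, imagePoints, K, distCoeffs, rvec, tvec);
    std::cout << "rvec: " << rvec << "\ntvec: " << tvec << std::endl;
    return 0;
}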
You can use CalibrateCamera2.
objectPts - your 4 coplanar points
imagePts - your corresponding image points.
This method will compute the intrinsic matrix and distortion coefficients, which tell you how the objectPts have been projected as the imagePts onto the camera's imaging plane.
There are no extrinsic parameters to compute here since you are using only 1 camera. If you used 2 cameras, then you would be looking at computing the extrinsic matrix using StereoCalibrate.
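If it helps, this is roughly what the call looks like with the newer C++ API (cv::calibrateCamera is the successor of CalibrateCamera2); the variable names are mine, not from your code:

#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main()
{
    // One vector of points per view: 3D board points and their detected 2D projections.
    std::vector<std::vector<cv::Point3f>> objectPts;
    std::vector<std::vector<cv::Point2f>> imagePts;
    cv::Size imageSize(640, 480);

    // ... fill objectPts / imagePts with your coplanar points and their detections ...

    cv::Mat cameraMatrix, distCoeffs;
    std::vector<cv::Mat> rvecs, tvecs;   // per-view rotation/translation of the pattern
    double rms = cv::calibrateCamera(objectPts, imagePts, imageSize,
                                     cameraMatrix, distCoeffs, rvecs, tvecs);
    std::cout << "reprojection error: " << rms << std::endl;
    return 0;
}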
Ankur
I have a question regarding undistortion of a single point using either Scaramuzza's toolbox or Mei's model in OpenCV.
I have done the calibration on a dataset and extracted the camera matrix and distortion coefficients (for Mei) and the necessary parameters for Scaramuzza. After getting mapx (map1) and mapy (map2), I want to apply the undistortion to a single point.
For Mei:
We have the position of a point (an intersection on a chessboard) in a fisheye image; I was able to find it using findChessboardCorners (I know this function is normally used for calibration, but here I just want the position of a well-known point in the image). Now I have the undistorted image and I want to know the position of that point after the distortion correction.
I have read many links suggesting the undistortPoints method or the remap method, and others describing that dst(x,y) = src(mapx(x,y), mapy(x,y)). I applied them all, but when I draw the resulting point it is not on the same chessboard intersection; it is even outside the board, closer to its position in the fisheye image.
For Scaramuzza:
I tried to understand the world2cam and cam2world methods, but I still can't get it right.
So: is there a method to find the position of a single point after distortion correction, given its position before the correction? Also, can someone explain mapx and mapy in depth? I have read examples of how they are used, but whenever I try to implement the mapping between a distorted point and its undistorted counterpart I get confused. For example, mapx and mapy should have the size of src (in my case a single point), so how can I use the remap method here? Or should I derive them from the camera matrix and distortion coefficients and use dst(x,y) = src(map1(x,y), map2(x,y))?
Note:
I have successfully applied estimateNewCameraMatrixForUndistortRectify, initUndistortRectifyMap and remap to whole images (for Mei), and I have also applied Scaramuzza's own undistortion method to images with a very satisfying result (better than Mei).
I was able to solve it with the OpenCV undistortPoints function. The problem was that I was using the original undistortPoints instead of fisheye::undistortPoints. The surrounding points are still not exactly in their right positions, but the result is acceptable.
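In case it helps someone, the call that worked looks roughly like this (a sketch; K and D stand for the fisheye intrinsics and distortion coefficients from calibration, Knew for the new camera matrix used when undistorting the image, e.g. from estimateNewCameraMatrixForUndistortRectify, and all values below are placeholders):

#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main()
{
    // Fisheye intrinsics K and distortion D (placeholders for the calibration results).
    cv::Mat K = (cv::Mat_<double>(3,3) << 600, 0, 640, 0, 600, 360, 0, 0, 1);
    cv::Mat D = (cv::Mat_<double>(4,1) << -0.01, 0.003, -0.001, 0.0002);
    // New camera matrix of the undistorted image (placeholder: same as K).
    cv::Mat Knew = K.clone();

    // Distorted pixel position of the chessboard corner found in the fisheye image.
    std::vector<cv::Point2f> distorted = { cv::Point2f(512.3f, 287.9f) };
    std::vector<cv::Point2f> undistorted;

    // Passing P = Knew maps the result back to pixel coordinates of the undistorted image;
    // without it, the output would be in normalized camera coordinates.
    cv::fisheye::undistortPoints(distorted, undistorted, K, D, cv::noArray(), Knew);
    std::cout << "undistorted pixel: " << undistorted[0] << std::endl;
    return 0;
}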
I ran Bouguet's calibration toolbox (http://www.vision.caltech.edu/bouguetj/calib_doc/htmls/example.html) on Matlab and have the parameters from the calibration (intrinsic [focal lengths and principal point offsets] and extrinsic [rotation and translations of the checkerboard with respect to the camera]).
Feature coordinate points of the checkerboard on my images are also known.
I want to obtain rectified images so that I can make a disparity map (for which I have the code) from each pair of rectified images.
How can I go about doing this?
The documentation is here. At the end, it reads "Add these values as constants to your program, call the initUndistortRectifyMap and the remap function to remove distortion and enjoy distortion free inputs with cheap and low quality cameras".
Once your cameras are rectified, you may be interested in the class StereoVar or StereoBM to get the disparity map. Use reprojectImageTo3D once you are done if you want to check that your results look fine in 3D.
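To make the pipeline concrete, here is a rough C++ sketch of those steps, assuming the Bouguet intrinsics, distortion coefficients and the relative pose (R, T) have been copied into OpenCV matrices (all numbers below are placeholders) and that stereoRectify is used to compute the rectification transforms:

#include <opencv2/opencv.hpp>

int main()
{
    // Placeholder calibration data: replace with the values from Bouguet's toolbox.
    cv::Mat K1 = (cv::Mat_<double>(3,3) << 800, 0, 320, 0, 800, 240, 0, 0, 1);
    cv::Mat K2 = K1.clone();
    cv::Mat D1 = cv::Mat::zeros(5, 1, CV_64F), D2 = D1.clone();
    cv::Mat R = cv::Mat::eye(3, 3, CV_64F);
    cv::Mat T = (cv::Mat_<double>(3,1) << -0.1, 0, 0);
    cv::Size imageSize(640, 480);

    cv::Mat left  = cv::imread("left.png",  cv::IMREAD_GRAYSCALE);
    cv::Mat right = cv::imread("right.png", cv::IMREAD_GRAYSCALE);

    // Rectification transforms and projection matrices for both cameras.
    cv::Mat R1, R2, P1, P2, Q;
    cv::stereoRectify(K1, D1, K2, D2, imageSize, R, T, R1, R2, P1, P2, Q);

    // Undistort + rectify maps, then warp both images.
    cv::Mat map1x, map1y, map2x, map2y, leftRect, rightRect;
    cv::initUndistortRectifyMap(K1, D1, R1, P1, imageSize, CV_32FC1, map1x, map1y);
    cv::initUndistortRectifyMap(K2, D2, R2, P2, imageSize, CV_32FC1, map2x, map2y);
    cv::remap(left,  leftRect,  map1x, map1y, cv::INTER_LINEAR);
    cv::remap(right, rightRect, map2x, map2y, cv::INTER_LINEAR);

    // Block-matching disparity on the rectified pair, then an optional 3D sanity check.
    cv::Ptr<cv::StereoBM> bm = cv::StereoBM::create(64, 21);
    cv::Mat disparity, points3d;
    bm->compute(leftRect, rightRect, disparity);
    cv::reprojectImageTo3D(disparity, points3d, Q);
    return 0;
}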
If fully calibrated, use: http://link.springer.com/article/10.1007/s001380050120#page-1 Both cameras get the same orientation, i.e. share the same new R.
The first row of the new R is the baseline, i.e. the (normalized) difference of the two camera centers. The second row is the cross product of the baseline with the old left z-axis (third row of R_old_left). The third row is the cross product of the first two rows.
Warp the images with H_left = P_new(1:3,1:3) * P_old_left(1:3,1:3)^-1 and H_right = P_new(1:3,1:3) * P_old_right(1:3,1:3)^-1.
The rectified left pixel coordinates are u_new = (h11*u + h12*v + h13)/(h31*u + h32*v + h33) and v_new = (h21*u + h22*v + h23)/(h31*u + h32*v + h33); the same goes for the right image.
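A sketch of that construction in OpenCV C++; the camera centers C1 and C2, the old left rotation, and the 3x3 left parts of the old/new projection matrices are assumed to be known from your calibration, and the row ordering follows the description above:

#include <opencv2/opencv.hpp>

// Build the shared rectifying rotation: rows are the baseline, the cross product of the
// baseline with the old left z-axis, and the cross product of the first two rows.
cv::Matx33d rectifyingRotation(const cv::Vec3d& C1, const cv::Vec3d& C2,
                               const cv::Matx33d& R_old_left)
{
    cv::Vec3d r1 = cv::normalize(C2 - C1);                              // baseline direction
    cv::Vec3d oldZ(R_old_left(2,0), R_old_left(2,1), R_old_left(2,2));  // old left z-axis
    cv::Vec3d r2 = cv::normalize(r1.cross(oldZ));
    cv::Vec3d r3 = r1.cross(r2);
    return cv::Matx33d(r1(0), r1(1), r1(2),
                       r2(0), r2(1), r2(2),
                       r3(0), r3(1), r3(2));
}

// Warp one image with H = P_new(1:3,1:3) * P_old(1:3,1:3)^-1, as described above.
cv::Mat warpToRectified(const cv::Mat& img, const cv::Matx33d& Pnew3x3,
                        const cv::Matx33d& Pold3x3)
{
    cv::Matx33d H = Pnew3x3 * Pold3x3.inv();
    cv::Mat out;
    cv::warpPerspective(img, out, cv::Mat(H), img.size());
    return out;
}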
I'm using OpenCV's solvePnP to get the pose/position of the camera.
I'm doing this with points selected by the user, on an image that comes from a calibrated camera and has already had the radial and tangential distortion corrected.
However, it seems solvePnP() takes distortion coefficients as input in addition to the points selected in the image, which I suppose means that solvePnP applies the distortion correction to the points given as input to the function.
This would introduce a small error in my program, since the source image has already had its barrel distortion removed, right?
If so, how can I make solvePnP() ignore the barrel distortion? Can I pass a vector of distortion coefficients set to all 1's, or should I set all values to 0?
Some other way?
In the past I have just passed an empty cv::Mat
cv::solvePnP(world_points, image_points, camera_mat, cv::Mat(), rotation_vector, translation_vector);
The documentation says that if you pass NULL, it will set all the coefficients to 0 for you.
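If you prefer to be explicit rather than relying on the empty Mat, passing a zero-filled coefficient vector is equivalent (a small sketch reusing the variable names above); zeros mean no distortion correction is applied to the points, whereas all 1's would apply a large bogus distortion.

// Explicit zero distortion coefficients: the already-undistorted image points are used as-is.
cv::Mat distCoeffs = cv::Mat::zeros(4, 1, CV_64F);
cv::solvePnP(world_points, image_points, camera_mat, distCoeffs,
             rotation_vector, translation_vector);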
Currently I am working on a 3D visualization project using C# and Emgu CV (.NET). In that project, the following steps are already done with 2 images of the same scene (slightly different in rotation and translation):
Feature detection (SURF), matching, and homography calculation
Calculate the fundamental matrix
Calculate the essential matrix from the fundamental matrix and the camera intrinsic matrix
Finally, calculate the rotation and translation matrices
I have also obtained 4 possible answers for the transformation matrix (3x4 [R|T]) by using different combinations of R and T with their signs changed. Now I want to select the correct transformation matrix from those 4 answers. Before that, I want to check whether any one of them is correct. So I have to re-project the points of the second image using the camera intrinsic matrix and each candidate transformation matrix, and then compare the resulting points with the second image's points to confirm the result.
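For clarity, the projection I am describing would look roughly like this in plain OpenCV C++ (the same matrix operations exist on Emgu's Matrix<double>); K, R and T here stand for my intrinsic matrix and one candidate [R|T] pair:

#include <opencv2/opencv.hpp>

// Project a 3D point X (in the first camera's frame) into the second image
// using one candidate decomposition: x ~ K * [R | T] * X.
cv::Point2d projectPoint(const cv::Matx33d& K, const cv::Matx33d& R,
                         const cv::Vec3d& T, const cv::Vec3d& X)
{
    cv::Vec3d Xc = R * X + T;                       // move X into the second camera's frame
    cv::Vec3d x  = K * Xc;                          // apply the intrinsics
    return cv::Point2d(x(0) / x(2), x(1) / x(2));   // homogeneous -> pixel coordinates
}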
My question is: how do I combine the transformation matrix (the rotation [3x3] and translation [3x1] matrices) with the camera intrinsic matrix to project points to image points using Emgu CV (.NET)?
Or is there any alternative method to confirm the transformation matrix that I obtained?
Thanks in advance for any help.
I'm trying to do calibration of a Kinect camera and an external camera with Emgu/OpenCV.
I'm stuck and I would really appreciate any help.
I've chosen to do this via the fundamental matrix, i.e. epipolar geometry.
But the result is not what I expected. The resulting images are black, or make no sense at all.
The mapx and mapy values are usually all equal to infinity or minus infinity, or all equal to 0.00, and they rarely have regular values.
This is how I tried to do rectification:
1.) Find image points: get two arrays of image points (one for each camera) from a set of images. I've done this with a chessboard and the FindChessboardCorners function.
2.) Find fundamental matrix
CvInvoke.cvFindFundamentalMat(points1Matrix, points2Matrix,
_fundamentalMatrix.Ptr, CV_FM.CV_FM_RANSAC,1.0, 0.99, IntPtr.Zero);
Do I pass all the collected points from the whole set of images, or just the points from the two images I am trying to rectify?
3.) Find homography matrices
CvInvoke.cvStereoRectifyUncalibrated(points11Matrix, points21Matrix,
_fundamentalMatrix.Ptr, Size, h1.Ptr, h2.Ptr, threshold);
4.) Get mapx and mapy
double scale = 0.02;
CvInvoke.cvInvert(_M1.Ptr, _iM.Ptr, SOLVE_METHOD.CV_LU);
CvInvoke.cvMul(_H1.Ptr, _M1.Ptr, _R1.Ptr,scale);
CvInvoke.cvMul(_iM.Ptr, _R1.Ptr, _R1.Ptr, scale);
CvInvoke.cvInvert(_M2.Ptr, _iM.Ptr, SOLVE_METHOD.CV_LU);
CvInvoke.cvMul(_H2.Ptr, _M2.Ptr, _R2.Ptr, scale);
CvInvoke.cvMul(_iM.Ptr, _R2.Ptr, _R2.Ptr, scale);
CvInvoke.cvInitUndistortRectifyMap(_M1.Ptr,_D1.Ptr, _R1.Ptr, _M1.Ptr,
mapxLeft.Ptr, mapyLeft.Ptr) ;
I have a problem here: since I'm not using calibrated images, what are my camera matrix and distortion coefficients? How can I get them from the fundamental matrix or the homography matrices?
5.) Remap
CvInvoke.cvRemap(src.Ptr, destRight.Ptr, mapxRight, mapyRight,
(int)INTER.CV_INTER_LINEAR, new MCvScalar(255));
And this doesn't return a good result. I would appreciate it if someone could tell me what I am doing wrong.
I have a set of 25 image pairs, and the chessboard pattern size is 9x6.
The book "Learning OpenCV," from O'Reilly publishing, has two full chapters devoted to this specific topic. Both make heavy use of OpenCV's included routines cvCalibrateCamera2() and cvStereoCalibrate(); These routines are wrappers to code that is very similar to what you have written here, with the added benefit of having been more thoroughly debugged by the folks who maintain the OpenCV libraries. while they are convenient, both require quite a bit of preprocessing to achieve the necessary inputs to the routines. There may in fact be a sample program, somewhere deep in the samples directory of the OpenCV distribution, that uses these routines, with examples on how to go from chessboard image to calibration/intrinsics matrix. If you take an in depth look at any of these places, I am sure you will see how you can achieve your goal with advice from the experts.
cv::findFundamentalMat cannot work if the intrinsic matrix associated with your image points is the identity matrix. In other words, it cannot work with unprojected image points.