OpenCV solvePnP barrel distortion

I'm using OpenCV's solvePnP to get the pose/position of the camera.
I'm doing this using points selected by the user on an image that has already been calibrated and corrected for radial and tangential distortion.
However, solvePnP() takes distortion coefficients as input in addition to the points selected in the image, which I suppose means that solvePnP applies the distortion correction to the points given as input to the function.
This would create a minor error in my program, since the source image has already been corrected for barrel distortion, right?
If so, how can I make solvePnP() ignore the barrel distortion? Can I pass a vector with the distortion coefficients all set to 1? Or should I set all values to 0?
Some other way?

In the past I have just passed an empty cv::Mat
cv::solvePnP(world_points, image_points, camera_mat, cv::Mat(), rotation_vector, translation_vector);
The documentation says that if you pass a NULL/empty matrix, it will assume zero distortion coefficients for you.
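For completeness, a minimal sketch of both options (assuming world_points, image_points and camera_mat are already filled in). Zeros are the right choice: the coefficients are polynomial terms, so setting them all to 1 would describe a strongly distorted lens rather than no distortion.

cv::Mat rvec, tvec;
// Option 1: empty matrix -- OpenCV assumes zero distortion coefficients.
cv::solvePnP(world_points, image_points, camera_mat, cv::Mat(), rvec, tvec);
// Option 2: an explicit vector of zeros (equivalent to the above).
cv::Mat no_distortion = cv::Mat::zeros(4, 1, CV_64F);
cv::solvePnP(world_points, image_points, camera_mat, no_distortion, rvec, tvec);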

Related

Intrinsic Camera Matrix after image rectification

I have a fisheye camera which I have already calibrated correctly with the calibration functions provided by OpenCV. From this I got a 3x3 intrinsic camera matrix K and a vector of distortion parameters.
Using these two I can rectify the input image with the functions estimateNewCameraMatrixForUndistortRectify and initUndistortRectifyMap to obtain two transformation maps, which I later use as input to the remap function. As output I get an undistorted image in which straight lines are kept straight.
My questions are basically...
Can I continue using the K intrinsic matrix I got from calibration in conjunction with the undistorted image?
Has the intrinsic matrix K somehow changed due to the undistortion? If so, how could I calculate the new K?
Thanks in advance.
As @micka pointed out in the comments, after calibrating the camera and undistorting the image, I can continue using the new camera matrix K output by estimateNewCameraMatrixForUndistortRectify. This answers both of my questions above.
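For reference, a rough sketch of that pipeline using the C++ API (K, D, image_size and distorted are placeholders for the calibration outputs and an input frame). The matrix P returned by estimateNewCameraMatrixForUndistortRectify is the intrinsic matrix that is valid for the undistorted image, so that is the one to keep using afterwards:

cv::Mat K, D;          // intrinsics and distortion from the fisheye calibration
cv::Size image_size;   // size of the distorted input images
cv::Mat distorted;     // an input frame
cv::Mat P;             // new camera matrix, valid for the undistorted image
cv::fisheye::estimateNewCameraMatrixForUndistortRectify(K, D, image_size, cv::Matx33d::eye(), P);
cv::Mat map1, map2;
cv::fisheye::initUndistortRectifyMap(K, D, cv::Matx33d::eye(), P, image_size, CV_16SC2, map1, map2);
cv::Mat undistorted;
cv::remap(distorted, undistorted, map1, map2, cv::INTER_LINEAR);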

Undistort single point after calibration

I have a question regarding the undistortion of a single point using either Scaramuzza's or Mei's model in OpenCV.
I have done the calibration on a dataset and extracted the camera matrix and distortion coefficients (for Mei) and the necessary parameters for Scaramuzza. After getting mapx (map1) and mapy (map2), I want to apply the undistortion to a single point.
For Mei:
We have the position of a point (an intersection on a chessboard) in a fisheye image. I was able to find its position using findChessboardCorners (I know this is normally used for calibration, but here I just want the position of a well-known point in the image). Now I have the undistorted image and I want to know the position of that point after the distortion correction.
I have read many links suggesting the undistortPoints method or the remap method, and links describing that dst(x,y) = src(mapx(x,y), mapy(x,y)). I applied them all, but when I draw the resulting point it is not on the same intersection of the chessboard; it even lands off the board, closer to its position in the fisheye image.
For Scaramuzza:
I tried to understand the world2cam and cam2world methods, but I still can't get it right.
So: is there a method to get the position of a single point after the distortion correction if we have its position before the correction? Also, can someone explain mapx and mapy in depth? I have read examples about them and how they can be used, but whenever I want to implement the mapping between the distorted point and the undistorted one I get confused. For example, mapx and mapy should have the size of the src (in my case a single point), so how can I use the remap method here? Or should I get them from the camera matrix and distortion coefficients and use dst(x,y) = src(map1(x,y), map2(x,y))?
Note:
I have applied estimateNewCameraMatrixForUndistortRectify, initUndistortRectifyMap and remap successfully on images (for Mei's model), and I have also applied the undistortion method implemented by Scaramuzza on images, with a very satisfying result (better than Mei).
I was able to solve it with OpenCV's undistortPoints function; the problem was that I was using the original cv::undistortPoints instead of fisheye::undistortPoints. The surrounding points are still not exactly in their right positions, but the result is acceptable.
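A minimal sketch of the call that ended up working, assuming K and D are the fisheye calibration results and P is the new camera matrix used when the image itself was undistorted (corner_in_fisheye_image is a placeholder for the detected corner). Passing P makes the output land in the pixel coordinates of the undistorted image; leaving it out would give normalized coordinates instead:

std::vector<cv::Point2f> distorted_pts = { corner_in_fisheye_image };  // point measured in the fisheye image
std::vector<cv::Point2f> undistorted_pts;
cv::fisheye::undistortPoints(distorted_pts, undistorted_pts, K, D, cv::Matx33d::eye(), P);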

solvePnP with Unity3D

I have a real/physical stick with an IR camera attached to it and some IR LEDs that form a pattern, which I'm using to make a virtual stick move in the same way as the physical one.
For that, I'm using OpenCV in Python and sending the rotation and translation vectors calculated by solvePnP to Unity.
I'm struggling to understand how I can use the results given by the solvePnP function in my 3D world.
So far what I did is: use the solvePnP function to get the rotation and translation vectors, and then use the rotation vector to move my stick in the 3D world:
transform.rotation = Quaternion.Euler(new Vector3(x, z, y));
It seems to work okay when my stick is positioned at a certain angle and if I move slowly...but most of the time it moves everywhere.
Looking for answers online, most people seem to do several more steps after solvePnP. From what I understand:
Using Rodrigues to convert the rotation vector to a rotation matrix
Copy the rotation matrix and translation vector into an extrinsic matrix
Invert the extrinsic matrix
I understand that these steps would be necessary if I were working with matrices as in OpenGL, but what about Unity3D? Are these extra steps necessary? Or can I directly use the vectors given by the solvePnP function (which I doubt, as the results I'm getting so far aren't good)?
This is old, but the answer to the question "what about Unity3D? Are these extra steps necessary? Or can I directly use the vectors given by the solvePnP function?" is:
- No, you can't use them directly. I tried converting rvec using Quaternion.Euler and, as you've posted, the results were bad.
- Yes, you have to use Rodrigues, which correctly converts rvec into a rotation matrix.
- About inverting the extrinsic matrix: it depends.
If your object is at (0,0,0) in world space and you want to place the camera, you have to invert the transform resulting from tvec and rvec in order to get the desired result.
If, on the other hand, your camera has a fixed position and you want to position the object relative to it, you have to apply the camera's localToWorld matrix to the transform resulting from rvec and tvec in order to get the desired result.
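A short sketch of the OpenCV-side steps (C++ API; rvec and tvec are the solvePnP outputs, the other names are placeholders), for the "object at the origin, place the camera" case:

cv::Mat R;
cv::Rodrigues(rvec, R);                          // 3x1 rotation vector -> 3x3 rotation matrix
cv::Mat extrinsic = cv::Mat::eye(4, 4, CV_64F);  // [R | t] on top of [0 0 0 1]
R.copyTo(extrinsic(cv::Rect(0, 0, 3, 3)));
tvec.copyTo(extrinsic(cv::Rect(3, 0, 1, 3)));
cv::Mat camera_pose = extrinsic.inv();           // invert to place the camera relative to the object

Keep in mind that OpenCV's camera frame is right-handed with Y pointing down while Unity is left-handed with Y up, so an axis flip is usually still needed when copying the result into Unity.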

using FindExtrinsicCameraParams2 in OpenCV

I have 4 coplanar points in object coordinates and the corresponding image points (on the image plane). I want to compute the relative translation and rotation of the object plane with respect to the camera.
FindExtrinsicCameraParams2 is supposed to be the solution, but I'm having trouble using it; errors keep showing up when compiling.
Has anyone successfully used this function in OpenCV? Could I have some comments or sample code for using it?
Thank you!
I would use the OpenCV function FindHomography(), as it is simpler and you can easily convert from a homography to extrinsic parameters.
You have to call the function like this:
FindHomography(srcPoints, dstPoints, H, method, ransacReprojThreshold=3.0, status=None)
method is CV_RANSAC. If you pass more than 4 points, RANSAC will select the best 4-point set to satisfy the model.
You will get the homography in H, and if you want to convert it to extrinsic parameters you should do what I explain in this post.
Basically, the extrinsic (pose) matrix has its first, second and fourth columns given by the homography. The third column is redundant because it is the cross product of columns one and two.
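A hedged sketch of that decomposition with the C++ API (illustrative names; it assumes the object points lie in the z = 0 plane and that H and K are CV_64F). The recovered columns are only approximately orthonormal, so in practice the rotation is often re-orthogonalized afterwards:

cv::Mat H;                              // homography from FindHomography
cv::Mat K;                              // camera (intrinsic) matrix
cv::Mat M = K.inv() * H;                // remove the intrinsics
double s = 1.0 / cv::norm(M.col(0));    // scale so the first column has unit length
cv::Mat r1 = s * M.col(0);
cv::Mat r2 = s * M.col(1);
cv::Mat t  = s * M.col(2);
cv::Mat r3 = r1.cross(r2);              // the redundant third rotation column
cv::Mat pose(3, 4, CV_64F);             // extrinsic matrix [r1 r2 r3 | t]
r1.copyTo(pose.col(0));
r2.copyTo(pose.col(1));
r3.copyTo(pose.col(2));
t.copyTo(pose.col(3));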
After several days of testing the OpenCV functions related to 3D calibration and getting past all the errors and awkward output numbers, I finally got the correct outputs for these functions, including findHomography, solvePnP (the new version of FindExtrinsicCameraParams) and cvProjectPoints. Some of the tips have been discussed in "use OpenCV cvProjectPoints2 function".
These tips also apply to the error in this post. Specifically, in this post, my mistake was passing float data to a CV_64F Mat. All done now!
You can use CalibrateCamera2.
objectPts - your 4 coplanar points
imagePts - your corresponding image points.
This method will compute the intrinsic matrix and distortion coefficients, which tell you how the objectPts have been projected as the imagePts onto the camera's imaging plane.
There are no extrinsic parameters to compute here since you are using only one camera. If you used two cameras, then you would be looking at computing the extrinsic matrix using StereoCalibrate.
Ankur

OpenCV cvRemap Cropping Image

I am very new to OpenCV (2.1), so please keep that in mind.
I managed to calibrate the cheap web camera that I am using (with a wide-angle attachment), using the checkerboard calibration method to produce the intrinsic and distortion coefficients.
I then have no trouble feeding these values back in and producing image maps, which I then apply to a video feed to correct the incoming images.
I run into an issue, however. I know that when it warps/corrects the image, it creates several skewed sections and then crops the image to remove any black areas. My question is: can I view the complete warped image, including the regions that have black areas? An image conveying the regions I am talking about can be found here! This image was discovered in this post.
Currently: cvRemap() returns basically the yellow box in the image linked above, but I want to see the whole image, as there is relevant data I am looking to get out of it.
What I've tried: applying a scale conversion to the image map to fit the complete image (including the stretched parts) into the frame:
CvMat *intrinsic = (CvMat*)cvLoad( "Intrinsics.xml" );
CvMat *distortion = (CvMat*)cvLoad( "Distortion.xml" );
cvInitUndistortMap( intrinsic, distortion, mapx, mapy );
cvConvertScale(mapx, mapx, 1.25, -shift_x); // Some sort of scale conversion
cvConvertScale(mapy, mapy, 1.25, -shift_y); // applied to the image map
cvRemap(distorted,undistorted,mapx,mapy);
The cvConvertScale, even when I think I have aligned the x/y shift correctly (by guessing and checking), somehow distorts the image map, making the correction useless. There might be some math involved here that I am not following/understanding correctly.
Does anyone have any other suggestions for solving this problem, or an idea of what I might be doing wrong? I've also tried writing my own code to fix the distortion issues, but let's just say OpenCV already knows how to do it well.
From memory, you need to use InitUndistortRectifyMap(cameraMatrix,distCoeffs,R,newCameraMatrix,map1,map2), of which InitUndistortMap is a simplified version.
cvInitUndistortMap( intrinsic, distort, map1, map2 )
is equivalent to:
cvInitUndistortRectifyMap( intrinsic, distort, identity matrix, intrinsic, map1, map2 )
The new parameters are R and newCameraMatrix. R specifies an additional transformation (e.g. a rotation) to perform; just set it to the identity matrix.
The parameter of interest to you is newCameraMatrix. In InitUndistortMap this is the same as the original camera matrix, but you can use it to get that scaling effect you're talking about.
You get the new camera matrix with GetOptimalNewCameraMatrix(cameraMat, distCoeffs, imageSize, alpha, ...). You basically feed in the intrinsics, the distortion coefficients, your original image size, and a parameter alpha (along with containers to hold the resulting matrix; see the documentation). The parameter alpha will achieve what you want.
I quote from the documentation:
The function computes the optimal new camera matrix based on the free scaling parameter. By varying this parameter the user may retrieve only sensible pixels (alpha=0), keep all the original image pixels if there is valuable information in the corners (alpha=1), or get something in between. When alpha>0, the undistortion result will likely have some black pixels corresponding to "virtual" pixels outside of the captured distorted image. The original camera matrix, distortion coefficients, the computed new camera matrix and the newImageSize should be passed to InitUndistortRectifyMap to produce the maps for Remap.
So for the extreme example with all the black bits showing you want alpha=1.
In summary:
Call cvGetOptimalNewCameraMatrix with alpha=1 to obtain newCameraMatrix.
Use cvInitUndistortRectifyMap with R set to the identity matrix and newCameraMatrix set to the one you just calculated.
Feed the new maps into cvRemap.
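Put together, a sketch of those three steps with the C++ API (the C functions in the question behave the same way; cameraMatrix, distCoeffs, imageSize and distorted stand for the values loaded from the calibration files and an incoming frame):

cv::Mat cameraMatrix, distCoeffs;   // loaded from Intrinsics.xml / Distortion.xml
cv::Size imageSize;                 // size of the distorted frames
cv::Mat distorted;                  // incoming frame
// alpha = 1 keeps every source pixel, so the black "virtual" regions stay visible.
cv::Mat newCameraMatrix = cv::getOptimalNewCameraMatrix(cameraMatrix, distCoeffs, imageSize, 1.0);
cv::Mat map1, map2;
cv::initUndistortRectifyMap(cameraMatrix, distCoeffs, cv::Mat(), newCameraMatrix, imageSize, CV_16SC2, map1, map2);
cv::Mat undistorted;
cv::remap(distorted, undistorted, map1, map2, cv::INTER_LINEAR);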
