Determining the rotation around each axis from an OpenCV rotation vector

I'm trying to better understand the calibrateCamera and solvePnP functions in OpenCV, specifically the rotation vectors they return, which I believe are axis-angle rotation vectors (not, as I initially thought, yaw/pitch/roll angles). I would like to know the rotation around the x, y, and z axes of my checkerboard image. The OpenCV functions return a rotation vector of the form rot = [a, b, c].
Using this answer as a guide, I calculate the angle theta = sqrt(a^2 + b^2 + c^2) and the rotation axis v = [a/theta, b/theta, c/theta].
Then I take these values and use the axis-angle to Euler conversion on euclideanspace.com, shown here:
heading = atan2(y * sin(angle) - x * z * (1 - cos(angle)), 1 - (y^2 + z^2) * (1 - cos(angle)))
attitude = asin(x * y * (1 - cos(angle)) + z * sin(angle))
bank = atan2(x * sin(angle) - y * z * (1 - cos(angle)), 1 - (x^2 + z^2) * (1 - cos(angle)))
I'm using one of the example OpenCV checkerboard images (Left01.jpg), shown below (note the frame axes in the upper left corner, with red = x, green = y, blue = z).
Using this image, I get a rotation vector from calibrateCamera of [0.166, 0.294, 0.014].
Running these values through the calculations above and converting to degrees, I get:
heading = 16.7 deg
attitude = 1.7 deg
bank = 9.3 deg
I believe these correspond to yaw, pitch, and roll? The 16.7 degree heading seems high looking at the image, but it's hard to tell. Does this make sense? What would be the correct way to compute the Euler angles (the angles around each axis) given the OpenCV rotation vector? Snippets of my code are shown below.
double RMSError = calibrateCamera(
objectPointsArray,
imagePointsArray,
img.size(),
intrinsics,
distortion,
rotation,
translation,
CALIB_ZERO_TANGENT_DIST |
CALIB_FIX_K3 | CALIB_FIX_K4 | CALIB_FIX_K5 |
CALIB_FIX_ASPECT_RATIO);
Mat rvec = rotation.at(0);
//try and get the rotation angles here
//https://stackoverflow.com/questions/12933284/rodrigues-into-eulerangles-and-vice-versa
double theta = sqrt(pow(rvec.at<double>(0), 2) + pow(rvec.at<double>(1), 2) + pow(rvec.at<double>(2), 2));
Mat axis = (Mat_<double>(1, 3) << rvec.at<double>(0) / theta, rvec.at<double>(1) / theta, rvec.at<double>(2) / theta);
double x_ = axis.at<double>(0);
double y_ = axis.at<double>(1);
double z_ = axis.at<double>(2);
//this is yaw, pitch, roll respectively...maybe
double heading = atan2(y_ * sin(theta) - x_ * z_ * (1 - cos(theta)), 1 - (pow(y_, 2) + pow(z_, 2)) * (1 - cos(theta)));
double attitude = asin(x_ * y_ * (1 - cos(theta)) + z_ * sin(theta));
double bank = atan2(x_ * sin(theta) - y_ * z_ * (1 - cos(theta)), 1 - (pow(x_, 2) + pow(z_, 2)) * (1 - cos(theta)));
double headingDeg = heading * (180.0 / CV_PI);
double attitudeDeg = attitude * (180.0 / CV_PI);
double bankDeg = bank * (180.0 / CV_PI);
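A more direct route (a minimal sketch, assuming the ZYX yaw-pitch-roll convention; other conventions give different numbers, so verify it matches what euclideanspace.com calls heading/attitude/bank) is to let cv::Rodrigues expand the rotation vector into a 3x3 rotation matrix and read the Euler angles off the matrix:

Mat R;
Rodrigues(rvec, R); //3x3 rotation matrix from the axis-angle vector
//ZYX (yaw-pitch-roll) decomposition of the rotation matrix
double yaw = atan2(R.at<double>(1, 0), R.at<double>(0, 0));
double pitch = asin(-R.at<double>(2, 0));
double roll = atan2(R.at<double>(2, 1), R.at<double>(2, 2));
double yawDeg = yaw * (180.0 / CV_PI);
double pitchDeg = pitch * (180.0 / CV_PI);
double rollDeg = roll * (180.0 / CV_PI);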

Related

Error in radial distortion based on (zoomed in) OpenCV-style camera parameters

Our AR device is based on a camera with pretty strong optical zoom. We measure the distortion of this camera using classical camera-calibration tools (checkerboards), both through OpenCV and the GML Camera Calibration tools.
At higher zoom levels (I'll use 249 out of 255 as an example) we measure the following camera parameters at full HD resolution (1920x1080):
fx = 24545.4316
fy = 24628.5469
cx = 924.3162
cy = 440.2694
For the radial and tangential distortion we measured 4 values:
k1 = 5.423406
k2 = -2964.24243
p1 = 0.004201721
p2 = 0.0162647516
We are not sure how to interpret (read: implement) those extremely large values for k1 and k2. Using OpenCV's classic undistort operation to rectify the image with these values seems to work well. Unfortunately, it is (much) too slow for real-time usage.
(Images omitted: the raw camera footage and the OpenCV-undistorted result, which look similar at thumbnail size; the difference only shows at full resolution.)
That's why we want to take the opposite approach: leave the camera footage distorted and apply a similar distortion to our 3D scene using shaders. Following the OpenCV documentation, and this accepted answer in particular, the distorted position for a corner point (0, 0) would be:
// To relative coordinates
double x = (point.X - cx) / fx; // -960 / 24545 = -0.03911
double y = (point.Y - cy) / fy; // -540 / 24628 = -0.02193
double r2 = x*x + y*y; // 0.002010
// Radial distortion
// -0.03911 * (1 + 5.423406 * 0.002010 + -2964.24243 * 0.002010 * 0.002010) = -0.039067
double xDistort = x * (1 + k1 * r2 + k2 * r2 * r2);
// -0.02193 * (1 + 5.423406 * 0.002010 + -2964.24243 * 0.002010 * 0.002010) = -0.021906
double yDistort = y * (1 + k1 * r2 + k2 * r2 * r2);
// Tangential distortion
... left out for brevity
// Back to absolute coordinates.
xDistort = xDistort * fx + cx; // -0.039067 * 24545.4316 + 924.3162 = -34.6002 !!!
yDistort = yDistort * fy + cy; // -0.021906 * 24628.5469 + 440.2694 = -99.2435 !!!
These large pixel displacements (34 and 100 pixels at the upper left corner) suggest far more warping than the undistorted image OpenCV generates actually shows.
So the specific question is: what is wrong with the way we interpreted the values we measured, and what should the correct code for distortion be?
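One way to sanity-check the hand-rolled math (a sketch, not part of the original question) is to let OpenCV apply its own full model: cv::projectPoints distorts while projecting, so feeding it the normalized point (x, y, 1) with zero rotation and translation should reproduce the distorted pixel position. cameraMatrix and distCoeffs are assumed to hold the calibrated values listed above.

std::vector<cv::Point3d> obj = { cv::Point3d(x, y, 1.0) }; //normalized coordinates from above
std::vector<cv::Point2d> img;
cv::Mat rvec = cv::Mat::zeros(3, 1, CV_64F);
cv::Mat tvec = cv::Mat::zeros(3, 1, CV_64F);
cv::projectPoints(obj, rvec, tvec, cameraMatrix, distCoeffs, img);
//img[0] should match (xDistort, yDistort) if the manual math is right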

OpenCV How to apply camera distortion to an image

I have a rendered image. I want to apply the radial and tangential distortion coefficients I got from OpenCV to my image. Even though there is an undistort function, there is no distort function. How can I distort my images with distortion coefficients?
I was also looking for the same type of functionality. I couldn't find it, so I implemented it myself. Here is the C++ code.
First, you need to normalize the image point using the focal lengths and the principal point:
rpt(0) = (pt_x - cx) / fx
rpt(1) = (pt_y - cy) / fy
then distort the normalized image point:
double x = rpt(0), y = rpt(1);
//determining the radial distortion; D = (k1, k2, p1, p2, k3) in OpenCV's order,
//and 1 / (1 - f(r2)) approximates the forward factor 1 + f(r2) for small f
double r2 = x*x + y*y;
double icdist = 1 / (1 - ((D.at<double>(4) * r2 + D.at<double>(1))*r2 + D.at<double>(0))*r2);
//determining the tangential distortion
double deltaX = 2 * D.at<double>(2) * x*y + D.at<double>(3) * (r2 + 2 * x*x);
double deltaY = D.at<double>(2) * (r2 + 2 * y*y) + 2 * D.at<double>(3) * x*y;
x = (x + deltaX)*icdist;
y = (y + deltaY)*icdist;
then you can scale and translate the point back to pixel coordinates using the focal lengths and the principal point:
x = x * fx + cx
y = y * fy + cy
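For reference, here is a self-contained sketch of the same steps using the forward radial factor 1 + k1*r2 + k2*r2^2 + k3*r2^3 straight from OpenCV's documented model, rather than the 1 / (1 - ...) approximation above; K and D are assumed to follow OpenCV's usual layout:

#include <opencv2/core.hpp>

cv::Point2d distortPoint(const cv::Point2d& pt, const cv::Mat& K, const cv::Mat& D)
{
    const double fx = K.at<double>(0, 0), fy = K.at<double>(1, 1);
    const double cx = K.at<double>(0, 2), cy = K.at<double>(1, 2);
    const double k1 = D.at<double>(0), k2 = D.at<double>(1);
    const double p1 = D.at<double>(2), p2 = D.at<double>(3);
    const double k3 = D.at<double>(4);
    //normalize, distort, then map back to pixel coordinates
    const double x = (pt.x - cx) / fx, y = (pt.y - cy) / fy;
    const double r2 = x * x + y * y;
    const double radial = 1 + r2 * (k1 + r2 * (k2 + r2 * k3));
    const double xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x);
    const double yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y;
    return cv::Point2d(xd * fx + cx, yd * fy + cy);
}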

Rotation angles from Quaternion

I have a 3D scene in which I position a few objects on an imaginary sphere, and now I want to rotate them with device motion.
I use a spherical coordinate system and calculate each position on the sphere like this:
x = ρ * sin(ϕ) * cos(θ)
y = ρ * sin(ϕ) * sin(θ)
z = ρ * cos(ϕ)
I also use angles (from 0 to 2 * M_PI) to perform the horizontal rotation (in the z-x plane).
Everything works perfectly until I try to use the quaternion from the motion matrix.
I can extract values like pitch, yaw, and roll:
GLKQuaternion quat = GLKQuaternionMakeWithMatrix4(motionMatrix);
CGFloat adjRoll = atan2(2 * (quat.y * quat.w - quat.x * quat.z), 1 - 2 * quat.y * quat.y - 2 * quat.z * quat.z);
CGFloat adjPitch = atan2(2 * (quat.x * quat.w + quat.y * quat.z), 1 - 2 * quat.x * quat.x - 2 * quat.z * quat.z);
CGFloat adjYaw = asin(2 * quat.x * quat.y + 2 * quat.w * quat.z);
or, alternatively:
CMAttitude *currentAttitude = [MotionDataProvider sharedProvider].attitude; //from CoreMotion
CGFloat roll = currentAttitude.roll;
CGFloat pitch = currentAttitude.pitch;
CGFloat yaw = currentAttitude.yaw;
(The values I get differ between these two methods.)
The problem is that pitch, yaw, and roll in this form are not applicable to my scheme.
How can I convert pitch, yaw, roll, or the quaternion or motionMatrix, to the z-x angles my rotation model requires? Am I going about this the right way, or have I missed some milestone point?
How can I get the rotation around the y axis from the rotation matrix/quaternion received from CoreMotion, zeroing out the current z and x so the displayed object rotates only around the y axis?
I use iOS, by the way, but I guess this is not important here.
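One standard way to isolate the rotation about a single axis is the swing-twist decomposition (a sketch in plain C++ rather than GLKit; the function name is illustrative). For a rotation axis aligned with y, the twist quaternion is just the w and y components renormalized:

#include <cmath>

//rotation angle about the y axis only; the x and z components
//belong to the discarded "swing" part of the rotation
double twistAroundY(double w, double x, double y, double z)
{
    double n = std::sqrt(w * w + y * y);
    if (n < 1e-9) return 0.0; //rotation is entirely about x/z
    return 2.0 * std::atan2(y / n, w / n);
}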

Fisheye distortion rectification with lookup table

I have a fisheye lens:
I would like to undistort it. I apply the FOV model:
rd = 1 / ω * arctan (2 * ru * tan(ω / 2)) //Equation 13
ru = tan(rd * ω) / 2 / tan(ω / 2) //Equation 14
as found in equations (13) and (14) of the INRIA paper "Straight lines have to be straight" https://hal.inria.fr/inria-00267247/document.
The code implementation is the following:
Point2f distortPoint(float w, float h, float cx, float cy, float omega, Point2f input) {
    //w = width of the image
    //h = height of the image
    //cx, cy = center of the lens in the image, i.e. w/2 and h/2
    Point2f tmp;
    //Normalize the coordinates of the point
    tmp.x = input.x / w - cx / w;
    tmp.y = input.y / h - cy / h;
    //Apply the INRIA key formula (FOV model); guard against ru == 0 at the center
    double ru = sqrt(tmp.x * tmp.x + tmp.y * tmp.y);
    if (ru > 0) {
        double rd = 1.0 / omega * atan(2.0 * ru * tan(omega / 2.0));
        tmp.x *= rd / ru;
        tmp.y *= rd / ru;
    }
    //"Un-normalize" the point
    Point2f ret;
    ret.x = (tmp.x + cx / w) * w;
    ret.y = (tmp.y + cy / h) * h;
    return ret;
}
I then used the OpenCV remap function:
//map_x and map_y are computed with distortPoint
remap(img, imgUndistorted, map_x, map_y, INTER_LINEAR, BORDER_CONSTANT, Scalar(0, 0, 0));
I managed to get the distortion model from the lens manufacturer. It's a table of image height as a function of field angle:
Field angle(deg) Image Height (mm)
0 0
1 height1
2 height2
3 height3
...
89 height89
90 height90
To give an idea of the scale, each height is small, under 2 mm.
I found an interesting paper here: https://www.altera.com/content/dam/altera-www/global/en_US/pdfs/literature/wp/wp-01073-flexible-architecture-fisheye-correction-automotive-rear-view-cameras.pdf
How can I modify my pixel-unit undistortion function to factor in the mm-unit table provided by the manufacturer in order to have the most accurate possible undistorted image?
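One possible direction (a sketch under stated assumptions, not a definitive answer): linearly interpolate the manufacturer's table to get an image height for any field angle, and convert millimeters on the sensor to pixels using the pixel pitch, which is an assumed extra input that the table itself does not provide:

#include <cmath>
#include <vector>

//heightsMm[i] = image height in mm at a field angle of i degrees (0..90)
//pixelPitchMm = sensor pixel pitch in mm per pixel -- an assumed input
double radiusPxForAngle(const std::vector<double>& heightsMm,
                        double angleDeg, double pixelPitchMm)
{
    const int last = static_cast<int>(heightsMm.size()) - 1;
    if (angleDeg <= 0.0) return 0.0;
    if (angleDeg >= last) return heightsMm[last] / pixelPitchMm;
    const int i = static_cast<int>(std::floor(angleDeg));
    const double t = angleDeg - i; //fractional part for linear interpolation
    const double mm = heightsMm[i] + t * (heightsMm[i + 1] - heightsMm[i]);
    return mm / pixelPitchMm; //mm on the sensor -> pixels
}

The FOV-model formula in distortPoint would then be replaced by this lookup: compute each pixel's field angle from the pinhole geometry and map it to the true sensor radius.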

Draw a line using an angle and a point in OpenCV

I have a point and an angle in OpenCV; how can I draw a line using those parameters instead of two points?
Thanks so much!
Just use the equations
x2 = x1 + length * cos(θ)
y2 = y1 + length * sin(θ)
where θ is in radians:
θ = angle * π / 180.0
In OpenCV you can write the above as:
int angle = 45;
int length = 150;
Point P1(50,50);
Point P2;
P2.x = (int)round(P1.x + length * cos(angle * CV_PI / 180.0));
P2.y = (int)round(P1.y + length * sin(angle * CV_PI / 180.0));
Done!
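To actually render the segment, a cv::line call can follow (img is assumed to be an existing cv::Mat):

line(img, P1, P2, Scalar(0, 0, 255), 2, LINE_AA);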
