How to obtain rotation matrix using roll, pitch, and yaw values?

I have extrinsic parameters like this:
<Ra_roll>0.213877</Ra_roll>
<Ra_pitch>0.003699</Ra_pitch>
<Ra_yaw>0.000555</Ra_yaw>
I want to get a rotation matrix, but I don't know how to calculate one from these roll, pitch, and yaw values.
My final goal is to map color from a 2D image onto 3D points in world coordinates.
So the questions are:
How to get rotation matrix from the roll, pitch, and yaw values?
How to get the scale factor between image pixels and 3D points?
Thanks.
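For reference, one common convention (ZYX: yaw about Z, then pitch about Y, then roll about X, angles in radians) gives R = Rz(yaw) * Ry(pitch) * Rx(roll). A minimal NumPy sketch under that assumed convention:

    import numpy as np

    def rotation_from_rpy(roll, pitch, yaw):
        # ZYX convention assumed: R = Rz(yaw) @ Ry(pitch) @ Rx(roll),
        # angles in radians; other conventions reorder these factors.
        cr, sr = np.cos(roll), np.sin(roll)
        cp, sp = np.cos(pitch), np.sin(pitch)
        cy, sy = np.cos(yaw), np.sin(yaw)
        Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
        Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
        Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
        return Rz @ Ry @ Rx

    R = rotation_from_rpy(0.213877, 0.003699, 0.000555)  # values from the question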

Related

Align RGB image to Depth Image using Intrinsic and Extrinsic Matrix

Similar questions have been solved many times. However, they generally map depth coordinates to RGB coordinates by following these steps (sketched in the code after this list):
apply the inverse depth intrinsic matrix to the depth coordinates.
rotate and translate the resulting 3D coordinates using the rotation R and translation T that map 3D depth coordinates to 3D RGB coordinates.
apply the RGB intrinsic matrix to obtain the image coordinates.
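A minimal NumPy sketch of those three steps, with hypothetical intrinsics K_depth and K_rgb and extrinsics R, T that map depth-camera coordinates into the RGB camera frame:

    import numpy as np

    def depth_pixel_to_rgb_pixel(u, v, z, K_depth, K_rgb, R, T):
        # 1) back-project the depth pixel (u, v) with depth z to a 3D point
        p_depth = z * (np.linalg.inv(K_depth) @ np.array([u, v, 1.0]))
        # 2) move the 3D point into the RGB camera frame
        p_rgb = R @ p_depth + T
        # 3) project with the RGB intrinsics and dehomogenize
        uvw = K_rgb @ p_rgb
        return uvw[:2] / uvw[2]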
However, I want to do the reverse process: from RGB coordinates, obtain the depth coordinates. Then I can obtain an interpolated value from the depth map at those coordinates.
The problem is that I don't know how to define the z coordinate in the RGB image to make everything work.
The process should be (see the sketch after this list):
obtain 3D RGB coordinates by applying the camera's inverse intrinsic matrix. How should I set the z coordinates? Should I use an estimated value, or set them all to one?
rotate and translate the 3D RGB coordinates into the depth camera's 3D coordinates.
apply the depth intrinsic matrix.
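A sketch of that reverse chain, assuming the same hypothetical K_rgb, K_depth, R, T as above; note that setting z = 1 only recovers a ray direction, so some depth value along that ray (estimated, or searched for iteratively) is still needed:

    import numpy as np

    def rgb_pixel_to_depth_pixel(u, v, z_rgb, K_rgb, K_depth, R, T):
        # R, T map depth-camera coordinates to RGB-camera coordinates,
        # so the reverse transform uses R^T and -R^T @ T.
        # z_rgb = 1.0 gives only a point on the viewing ray, not the true depth.
        p_rgb = z_rgb * (np.linalg.inv(K_rgb) @ np.array([u, v, 1.0]))
        p_depth = R.T @ (p_rgb - T)
        uvw = K_depth @ p_depth
        return uvw[:2] / uvw[2]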
If this process cannot be done, how can I map RGB coordinates to depth coordinates instead of the other way around?
Thank you!

Camera intrinsics matrix from Unity

I'm using a physical camera in Unity where I set the focal length f and the sensor size sx and sy. Can these parameters and the image resolution be used to create a camera calibration matrix? I probably need the focal length in terms of pixels and the cx and cy parameters that denote the deviation of the image plane center from the camera's optical axis. Are cx = w/2 and cy = h/2 correct in this case (w: width, h: height)?
I need the calibration matrix to compute a homography in OpenCV using the camera pose from Unity.
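For what it's worth, a small sketch of that construction from the physical parameters; the centred principal point (cx = w/2, cy = h/2) is an assumption that holds only when the lens shift is zero:

    import numpy as np

    def intrinsics_from_physical(f_mm, sx_mm, sy_mm, w_px, h_px):
        fx = f_mm * w_px / sx_mm          # horizontal focal length in pixels
        fy = f_mm * h_px / sy_mm          # vertical focal length in pixels
        cx, cy = w_px / 2.0, h_px / 2.0   # assumes zero lens shift
        return np.array([[fx, 0.0, cx],
                         [0.0, fy, cy],
                         [0.0, 0.0, 1.0]])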
Yes, that's possible. I have done that with multiple different camera models (pinhole model, fisheye lens, polynomial lens model, etc.).
Calibrate your camera with OpenCV and pass the calibration parameters to the shader. You need to write a custom shader; have a look at my previous question:
Camera lens distortion in OpenGL
You don't need homography here.
@Tuebel gave me a nice piece of code and I have successfully adapted it to real camera models.
The hardest part will be managing the difference between the OpenGL and OpenCV camera coordinate systems. The camera calibration parameters are, of course, expressed in the OpenCV camera coordinate system.

How to estimate intrinsic properties of a camera from data?

I am attempting camera calibration from a single RGB image (panorama), given a 3D point cloud.
The methods that I have considered all require an intrinsic properties matrix, to which I have no access.
The intrinsic properties matrix can be estimated using Bouguet’s camera calibration Toolbox, but, as I have said, I have only a single image and a single point cloud for that image.
So, knowing 2D image coordinates, extrinsic properties, and 3D world coordinates, how can the intrinsic properties be estimated?
It would seem that the initCameraMatrix2D function from OpenCV (https://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html) works in the same way as Bouguet’s camera calibration Toolbox and requires multiple images of the same object.
I am looking into the direct linear transformation (DLT) and the Levenberg–Marquardt algorithm, with implementations at https://drive.google.com/file/d/1gDW9zRmd0jF_7tHPqM0RgChBWz-dwPe1,
but it would seem that both use the pinhole camera model and therefore find a linear transformation between 3D and 2D points.
I can't find my half-year-old source code, but from the top of my head:
cx, cy is the optical centre, which is width/2, height/2 in pixels.
fx = fy is the focal length in pixels (the distance from the camera to the image plane, or to the axis of rotation).
If you know, for example, that the distance from the camera to the image plane is 30 cm and it captures an image of 16 x 10 cm at 1920 x 1200 pixels, then the pixel size is 100 mm / 1200 = 1/12 mm, the focal length in pixels (fx, fy) is 300 mm * 12 px/mm = 3600 px, and the image centre is cx = 1920/2 = 960, cy = 1200/2 = 600. I assume that the pixels are square and that the camera sensor is centred on the optical axis.
You can also get the focal length from the image size in pixels and a measured angle of view.
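As a sketch of that last remark, under the pinhole model half the image width subtends half the horizontal angle of view, so fx = (w/2) / tan(fov/2):

    import math

    def focal_px_from_fov(w_px, fov_rad):
        # pinhole model: fx = (w/2) / tan(fov/2)
        return (w_px / 2.0) / math.tan(fov_rad / 2.0)

    fx = focal_px_from_fov(1920, math.radians(60))  # ~1663 px for a 60 degree FOV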

Determining perspective distortion from euler angles

I have the readings from a gyroscope attached to a camera describing the orientation of the camera in 3D (say with 3 Euler angles).
I take a picture (of, say, a flat plane) from this pose. Afterwards, I want to transform the image into another image, as though it had been taken with the camera perpendicular to the plane itself.
How would I do something like this in OpenCV? Can someone point me in the correct direction?
You can check out how to calculate the rotation matrix using the roll-pitch-yaw angles here: http://planning.cs.uiuc.edu/node102.html
A transformation matrix is T = [R t; 0 1] (in MATLAB notation).
Here, you can place the translation as a 3x1 vector in 't' and the calculated rotation matrix in 'R'.
Since some mathematical information is missing, I assume the Z-axes of the image and the camera are parallel. In this case, you have to add a 90° rotation about either the X or the Y axis to get a perpendicular view. This takes care of orientation.
The perspectiveTransform() function should be helpful from there on.
Check out this question for code insights: How to calculate perspective transform for OpenCV from rotation angles?
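To make that concrete: for a pure camera rotation R with intrinsics K, the induced image-to-image homography is H = K * R * K^-1, which cv2.warpPerspective can apply. A minimal sketch, assuming K and R are already known:

    import cv2
    import numpy as np

    def rectify_by_rotation(image, R, K):
        # homography induced by a pure rotation of the camera
        H = K @ R @ np.linalg.inv(K)
        h, w = image.shape[:2]
        return cv2.warpPerspective(image, H, (w, h))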

How can I find rotation angles (pitch, yaw, roll) from perspective transformation coefficients

I have two 2d quads (each represented using 4 xy pairs), one of them is a perspective transformation of the other. How can I use these quads to deduce the rotations (pitch, yaw, roll) that caused the perspective distortion?
Notice that I used cvGetPerspectiveTransform(), which returns the perspective transformation coefficients in the form of a 3x3 matrix. I am able to use these coefficients to map a point from one space to another; however, it is the rotation angles that I'm concerned with knowing.
Any ideas?
Thanks,
Hasan.
My algorithm was:
1) Calculate the 3D coordinates of the two quads (an example is shown in Calculating rectangle 3D coordinate with coordinate its shadow?).
2) Take two points of the first quad and the corresponding two points of the second, and calculate a quaternion (see the first reference I posted; it links to the answer).
3) From the quaternion, calculate the rotation matrix and angles.
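If the camera intrinsics K are known, OpenCV can do much of this directly: cv2.decomposeHomographyMat splits the 3x3 perspective matrix into candidate rotations, from which Euler angles follow. A sketch, assuming the ZYX (yaw-pitch-roll) convention:

    import cv2
    import numpy as np

    def angles_from_perspective_matrix(H, K):
        # returns up to four candidate decompositions; in practice the right
        # one is picked with a visibility/cheirality test
        n, Rs, Ts, normals = cv2.decomposeHomographyMat(H, K)
        R = Rs[0]
        pitch = -np.arcsin(R[2, 0])          # ZYX Euler angles
        roll = np.arctan2(R[2, 1], R[2, 2])
        yaw = np.arctan2(R[1, 0], R[0, 0])
        return roll, pitch, yaw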
