Recompose Results of OpenCV RQDecomp3x3

After running RQDecomp3x3 in OpenCV, you get:
mtxR – Output 3x3 upper-triangular matrix.
mtxQ – Output 3x3 orthogonal matrix.
Qx – Optional output 3x3 rotation matrix around x-axis.
Qy – Optional output 3x3 rotation matrix around y-axis.
Qz – Optional output 3x3 rotation matrix around z-axis.
How do you get back from the three rotation matrices (Qx, Qy, Qz) to the original input matrix?
Or, in the case where the input matrix was a rotation matrix, mtxR will be the identity matrix, so how can you go from the three rotation matrices to mtxQ?
UPDATE
Now answered, though I don't get why the transpose is needed.

It looks like (at least for a rotation-matrix input):
input = (Qx * Qy * Qz)^T
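A minimal numerical sketch of that check (the test input is an arbitrary rotation matrix built with cv::Rodrigues; this only verifies the relation observed above):

#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <iostream>

int main()
{
    // Build an arbitrary rotation matrix to use as the input.
    cv::Mat rvec = (cv::Mat_<double>(3, 1) << 0.3, -0.5, 0.8);
    cv::Mat M;
    cv::Rodrigues(rvec, M);

    cv::Mat mtxR, mtxQ, Qx, Qy, Qz;
    cv::RQDecomp3x3(M, mtxR, mtxQ, Qx, Qy, Qz);

    // The decomposition itself: input = mtxR * mtxQ
    // (for a rotation-matrix input, mtxR should be close to identity).
    cv::Mat recomposed = mtxR * mtxQ;

    // The observed relation: mtxQ = (Qx * Qy * Qz)^T,
    // hence input = mtxR * (Qx * Qy * Qz)^T.
    cv::Mat fromFactors = mtxR * (Qx * Qy * Qz).t();

    std::cout << "max |M - R*Q|          = " << cv::norm(M, recomposed, cv::NORM_INF) << "\n";
    std::cout << "max |M - R*(QxQyQz)^T| = " << cv::norm(M, fromFactors, cv::NORM_INF) << "\n";
    return 0;
}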

Related

Fundamental matrix from transformation

I have two images with known corresponding 2D points, the intrinsic parameters of the cameras and the 3D transformation between the cameras. I want to calculate the 2D reprojection error from one image to the other.
To do so, I thought about getting a fundamental matrix from the transformation, so I can compute the point-to-line distance between the points and the corresponding epipolar lines. How can I get the fundamental matrix?
I know that E = R * [t] and F = K^(-T) * E * K^(-1), where E is the essential matrix and [t] is the skew-symmetric matrix of the translation vector. However, this returns a null matrix if the motion is a pure rotation (t = [0 0 0]). I know that in this case a homography explains the motion better than the fundamental matrix, so I could compare the norm of the translation vector with a small threshold to choose between a fundamental matrix and a homography. Is there a better way of doing this?
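In code, the construction described above might look like the following sketch (it follows the question's convention E = R * [t]; a single intrinsic matrix K is assumed for both cameras, and the names are illustrative):

#include <opencv2/core.hpp>

// Skew-symmetric matrix [t]_x such that [t]_x * v == t x v.
static cv::Mat skew(const cv::Vec3d& t)
{
    return (cv::Mat_<double>(3, 3) <<
             0,    -t[2],  t[1],
             t[2],  0,    -t[0],
            -t[1],  t[0],  0);
}

// Fundamental matrix from a known relative pose (R, t) and intrinsics K,
// using the formulas from the question: E = R * [t]_x, F = K^{-T} * E * K^{-1}.
static cv::Mat fundamentalFromPose(const cv::Mat& K, const cv::Mat& R, const cv::Vec3d& t)
{
    cv::Mat E = R * skew(t);
    cv::Mat Kinv = K.inv();
    return Kinv.t() * E * Kinv;   // degenerates to ~0 when t is (near) zero
}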
"I want to calculate the 2D reprojection error from one image to the other."
Then go and calculate it. Your setup is calibrated, so you don't need anything other than a known piece of 3D geometry. Forget about the epipolar error, which may as well be undefined if your camera motion is (close to) a pure rotation.
Take an object of known size and shape (for example, a checkerboard), work out its location in 3D space from one camera view (for a checkerboard you can fit a homography between its physical model and its projection, then decompose it into [R|t]). Then project the now-located 3D shape into the other camera given that camera's calibrated parameters, and compare the projection with the object's actual image.
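A sketch of that procedure with OpenCV (names are illustrative; instead of decomposing the homography by hand, solvePnP is used to locate the board from view 1, the known relative pose R12, t12 carries it into camera 2, and projectPoints reprojects it):

#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <cmath>
#include <vector>

// Mean 2D reprojection error of a known planar object seen in two calibrated views.
// objectPts: 3D corners in the object's own frame (e.g. checkerboard, Z = 0).
// img1Pts/img2Pts: detected corners in image 1 and image 2.
// K1, d1, K2, d2: intrinsics/distortion of the two cameras.
// R12, t12: known transformation from camera-1 coordinates to camera-2 coordinates.
double reprojectionError(const std::vector<cv::Point3f>& objectPts,
                         const std::vector<cv::Point2f>& img1Pts,
                         const std::vector<cv::Point2f>& img2Pts,
                         const cv::Mat& K1, const cv::Mat& d1,
                         const cv::Mat& K2, const cv::Mat& d2,
                         const cv::Mat& R12, const cv::Mat& t12)
{
    // 1. Locate the object relative to camera 1.
    cv::Mat rvec1, tvec1;
    cv::solvePnP(objectPts, img1Pts, K1, d1, rvec1, tvec1);

    // 2. Chain the pose into camera 2: X_c2 = R12 * (R1 * X_obj + t1) + t12.
    cv::Mat R1;
    cv::Rodrigues(rvec1, R1);
    cv::Mat R2 = R12 * R1;
    cv::Mat t2 = R12 * tvec1 + t12;
    cv::Mat rvec2;
    cv::Rodrigues(R2, rvec2);

    // 3. Project into camera 2 and compare with the detected corners.
    std::vector<cv::Point2f> projected;
    cv::projectPoints(objectPts, rvec2, t2, K2, d2, projected);

    double err = 0.0;
    for (size_t i = 0; i < projected.size(); ++i)
    {
        cv::Point2f d = projected[i] - img2Pts[i];
        err += std::sqrt(double(d.x * d.x + d.y * d.y));
    }
    return err / projected.size();
}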

finding the real world coordinates of an image point

I have been searching lots of resources on the internet for many days but I couldn't solve the problem.
I have a project in which I am supposed to detect the position of a circular object on a plane. Since it is on a plane, all I need is the x and y position (not z). For this purpose I have chosen to go with image processing. The camera (single view, not stereo) position and orientation are fixed with respect to a reference coordinate system on the plane and are known.
I have detected the image pixel coordinates of the centers of the circles using OpenCV. All I need now is to convert those coordinates to the real world.
http://www.packtpub.com/article/opencv-estimating-projective-relations-images
On this site, and other sites as well, the transformation is given as:
p = C[R|T]P, where P is the real-world coordinate and p is the pixel coordinate (in homogeneous coordinates). C is the camera matrix representing the intrinsic parameters, R is the rotation matrix and T is the translation vector. I have followed a tutorial on calibrating the camera in OpenCV (using the cameraCalibration sample); I have 9 fine chessboard images, and as output I have the intrinsic camera matrix and the translation and rotation parameters of each image.
I have the 3x3 intrinsic camera matrix (focal lengths and center pixels) and a 3x4 extrinsic matrix [R|T], in which R is the left 3x3 part and T the right 3x1 part. According to the p = C[R|T]P formula, I assume that by multiplying these parameter matrices with P (world) we get p (pixel). But what I need is to project the p (pixel) coordinates back to P (world coordinates) on the ground plane.
I am studying electrical and electronics engineering; I did not take image processing or advanced linear algebra classes. As I remember from the linear algebra course, we can manipulate the transformation as P = [R|T]^-1 * C^-1 * p. However, that is in a Euclidean coordinate system, and I don't know whether such a thing is possible with homogeneous coordinates. Moreover, the 3x4 matrix [R|T] is not invertible. And I don't know whether this is the correct way to go.
The intrinsic and extrinsic parameters are known; all I need is the real-world coordinate of the point projected onto the ground plane. Since the point is on a plane, the coordinate has two dimensions (depth is not important, as opposed to general single-view geometry). The camera is fixed (position and orientation). How should I find the real-world coordinate of a point in an image captured by a single-view camera?
EDIT
I have been reading "Learning OpenCV" by Gary Bradski & Adrian Kaehler. On page 386, under Calibration -> Homography, it is written: q = sMWQ, where M is the camera intrinsic matrix, W is the 3x4 [R|T], s is an "up to" scale factor (related to the homography concept, I assume; I don't know it clearly), q is the pixel coordinate and Q is the real-world coordinate. It says that, in order to get the real-world coordinate (on the chessboard plane) of an object detected on the image plane: since Z = 0, the third column of W can also be dropped (the part of the rotation multiplying Z, I assume); trimming these unnecessary parts, W becomes a 3x3 matrix and H = MW is a 3x3 homography matrix. Now we can invert the homography matrix and left-multiply it with q to get Q = [X Y 1], where the Z coordinate was trimmed.
I applied the mentioned algorithm and got some results that cannot lie between the image corners (the image plane was parallel to the camera plane, just ~30 cm in front of the camera, and I got results like 3000; the chessboard square sizes were entered in millimeters, so I assume the output real-world coordinates are again in millimeters). Anyway, I am still trying things. By the way, the raw results are very, very large, but I divide all values in Q by the third component of Q to get (X, Y, 1).
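In code, that inversion might look like this sketch (all matrices assumed CV_64F; M is the intrinsic matrix, R and T the extrinsics of the chessboard-plane view; names are illustrative):

#include <opencv2/core.hpp>

// Map a pixel (u, v) back to (X, Y) on the calibration plane (Z = 0),
// following the "Learning OpenCV" homography approach described above.
cv::Point2d pixelToPlane(const cv::Mat& M, const cv::Mat& R, const cv::Mat& T,
                         double u, double v)
{
    // Drop the third column of R (it multiplies Z, which is 0 on the plane):
    // H = M * [r1 r2 t] is the 3x3 plane-to-image homography.
    cv::Mat W(3, 3, CV_64F);
    R.col(0).copyTo(W.col(0));
    R.col(1).copyTo(W.col(1));
    T.copyTo(W.col(2));
    cv::Mat H = M * W;

    // Invert and apply to the pixel in homogeneous coordinates,
    // then divide by the third component to get Q = [X Y 1]^T.
    cv::Mat q = (cv::Mat_<double>(3, 1) << u, v, 1.0);
    cv::Mat Q = H.inv() * q;
    return cv::Point2d(Q.at<double>(0) / Q.at<double>(2),
                       Q.at<double>(1) / Q.at<double>(2));
}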
FINAL EDIT
I could not get the camera-calibration approach to work. Anyway, I should have started with perspective projection and transforms. That way I got very good estimates using a perspective transform between the image plane and the physical plane (generating the transform from 4 pairs of corresponding coplanar points on the two planes), then simply applied the transform to the image pixel points.
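A sketch of that final approach (the point values below are placeholders; the four correspondences must be coplanar, and the physical coordinates are in whatever unit you want back, e.g. millimeters):

#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <vector>

int main()
{
    // Four known coplanar correspondences: pixel positions and their
    // physical-plane coordinates (placeholder values).
    std::vector<cv::Point2f> imagePts = { {100, 120}, {540, 110}, {560, 400}, { 90, 420} };
    std::vector<cv::Point2f> planePts = { {  0,   0}, {300,   0}, {300, 200}, {  0, 200} }; // mm

    // Homography from the image plane to the physical plane.
    cv::Mat H = cv::getPerspectiveTransform(imagePts, planePts);

    // Map detected circle centers (pixels) to plane coordinates (mm).
    std::vector<cv::Point2f> circleCenters = { {320, 260} };
    std::vector<cv::Point2f> worldXY;
    cv::perspectiveTransform(circleCenters, worldXY, H);
    return 0;
}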
You said "I have the intrinsic camera matrix, and the translation and rotation parameters of each image", but those are the translation and rotation from your camera to your chessboard. They have nothing to do with your circle. However, if you really do have the translation and rotation matrices, then getting the 3D point is easy.
Apply the inverse intrinsic matrix to your screen points in homogeneous notation: C^-1 * [u, v, 1]^T, where u = col - w/2 and v = h/2 - row; col, row are the image column and row and w, h are the image width and height. As a result you will obtain a 3D point in so-called camera-normalized coordinates, p = [x, y, z]^T. All you need to do now is subtract the translation and apply a transposed rotation: P = R^T(p - T). The order of operations is the inverse of the original, which was rotate and then translate; note that the transposed rotation performs the inverse of the original rotation but is much faster to compute than R^-1.
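A literal sketch of those two steps (C, R, T and the centering of u, v are as described above; all matrices assumed CV_64F):

#include <opencv2/core.hpp>

// Back-project a pixel through the inverse intrinsics, then undo the
// extrinsics with a transposed rotation:
//   p = C^{-1} * [u, v, 1]^T   (camera-normalized coordinates)
//   P = R^T * (p - T)          (R^T == R^{-1} for a rotation)
cv::Mat backProject(const cv::Mat& C, const cv::Mat& R, const cv::Mat& T,
                    double u, double v)
{
    cv::Mat pix = (cv::Mat_<double>(3, 1) << u, v, 1.0);
    cv::Mat p = C.inv() * pix;   // 3x1, camera-normalized
    return R.t() * (p - T);      // expressed in the world/chessboard frame
}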

findHomography() / 3x3 matrix - how to get rotation part out of it?

As a result to a call to findHomography() I get back a 3x3 matrix mtx[3][3]. This matrix contains the translation part in mtx[0][2] and mtx[1][2]. But how can I get the rotation part out of this 3x3 matrix?
Unfortunately, my target system uses a completely different calculation, so I can't reuse the 3x3 matrix directly and have to extract the rotation from it; that's why I'm asking this question.
Generally speaking, you can't decompose the final transformation matrix into its constituent parts. There are certain cases where it is possible. For example, if the only operation besides the rotation was a translation, then you can take arccos(m[0][0]) to get the rotation angle theta.
Found it myself in the meantime: there is an OpenCV function, RQDecomp3x3(), that can be used to extract parts of the transformation from a matrix.
RQDecomp3x3 has trouble returning rotations about axes other than Z, so this way you only find the spin about the z-axis correctly. If you build a projection matrix and pass it to decomposeProjectionMatrix you will get better results. Note that a projection matrix is different from a homography matrix; you should pay attention to this point.
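For reference, a minimal sketch of that route (P here is assumed to be a 3x4, double-precision projection matrix, e.g. K*[R|t]):

#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>

// Recover the parts of a 3x4 projection matrix.
// decomposeProjectionMatrix uses RQDecomp3x3 internally and optionally
// returns the per-axis rotation matrices and Euler angles as well.
void decomposePose(const cv::Mat& P)
{
    cv::Mat K, R, t, Qx, Qy, Qz, eulerAngles;
    cv::decomposeProjectionMatrix(P, K, R, t, Qx, Qy, Qz, eulerAngles);
    // K: 3x3 intrinsics, R: 3x3 rotation, t: 4x1 homogeneous translation,
    // Qx/Qy/Qz: per-axis rotation matrices, eulerAngles: degrees about x, y, z.
}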

How can I use the output 3x3 matrix from getPerspectiveTransform in OpenCV?

I'm now trying to analyze the perspective transform/homography matrix between two images capturing the same object (e.g., a rectangle) but at different perspectives/shooting angles. The perspective transform can be derived by using the function getPerspectiveTransform in OpenCV 2.3.1. I want to find the corresponding rotation and translation matrices.
The output of getPerspectiveTransform is a 3x3 matrix which I can use directly to warp the source image into the target image. But my question is: how can I find the rotation and translation matrices based on the obtained 3x3 matrix?
I was looking into the function decomposeProjectionMatrix for the corresponding rotation and translation matrices. But the input is required to be a 3x4 projection matrix. How can I relate the perspective transformation (i.e., a 3x3 matrix) to the 3x4 projection matrix? Am I on the right track?
Thank you very much.
The information contained in the homography matrix (returned from getPerspectiveTransform) is not enough to extract rotation/translation. The missing column is key to finding the angles correctly.
The good news is that in some scenarios, you can use the solvePnP() function to extract the desired parameters from two sets of points.
Also, this question is about the same thing you are asking for; it should help:
Analyze camera movement with OpenCV
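If the rectangle's physical corner coordinates and the camera intrinsics are known, the solvePnP route might look like this sketch (names are illustrative):

#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <vector>

// Relative pose between two views of the same planar rectangle via solvePnP.
// objectPts: the rectangle's corners in its own frame (Z = 0), known physical size.
// pts1/pts2: the corresponding corners detected in each image.
void relativePose(const std::vector<cv::Point3f>& objectPts,
                  const std::vector<cv::Point2f>& pts1,
                  const std::vector<cv::Point2f>& pts2,
                  const cv::Mat& K, const cv::Mat& dist,
                  cv::Mat& R12, cv::Mat& t12)
{
    cv::Mat rvec1, tvec1, rvec2, tvec2;
    cv::solvePnP(objectPts, pts1, K, dist, rvec1, tvec1);
    cv::solvePnP(objectPts, pts2, K, dist, rvec2, tvec2);

    cv::Mat R1, R2;
    cv::Rodrigues(rvec1, R1);
    cv::Rodrigues(rvec2, R2);

    // Pose of camera 2 relative to camera 1: X_c2 = R12 * X_c1 + t12.
    R12 = R2 * R1.t();
    t12 = tvec2 - R12 * tvec1;
}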

Finding Homography And Warping Perspective

With FeatureDetector I get features on two images of the same element and match these features with BruteForceMatcher.
Then I'm using the OpenCV function findHomography to get the homography matrix
H = findHomography( src2Dfeatures, dst2Dfeatures, outlierMask, RANSAC, 3);
and getting the H matrix, then aligning the image with
warpPerspective(img1,alignedSrcImage,H,img2.size(),INTER_LINEAR,BORDER_CONSTANT);
I need to know the rotation angle, scale, and displacement of the detected element. Is there any simpler way to get these than working through some big equations? Some ready-made formulas I can just put the data into?
A homography will match projections of your elements if they lie on a plane, or if they lie arbitrarily in 3D and the camera undergoes a pure rotation or zoom with no translation. So here are the cases we are talking about, with an indication of the input to our calculations:
- planar target, pure rotation, intra-frame homography
- planar target, rotation and translation, target to frame homography
- 3D target, pure rotation, frame to frame mapping (constrained by a fundamental matrix)
In the case of a planar target, a pure rotation is easy to calculate from your frame-to-frame homography (H12):
given the intrinsic camera matrix A, the plane-to-image homographies for the two frames, H1 and H2, can be expressed as H1 = A and H2 = A*R; then H12 = H2*H1^-1 = A*R*A^-1 and thus R = A^-1*H12*A.
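A sketch of that case (one detail added beyond the text above: a homography is only defined up to scale, so it is normalized before the conjugation):

#include <opencv2/core.hpp>
#include <cmath>

// Pure camera rotation: the frame-to-frame homography is conjugate to the
// rotation, H12 = A * R * A^{-1}, so R = A^{-1} * H12 * A.
cv::Mat rotationFromHomography(const cv::Mat& A, const cv::Mat& H12)
{
    // Normalize so that det(H) == 1; the result is then closer to a true rotation.
    cv::Mat H = H12 / std::cbrt(cv::determinant(H12));
    return A.inv() * H * A;
}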
In the case of elements lying on a plane, the rotation and translation of the camera (up to an unknown scale) can be calculated by decomposing the target-to-frame homography. Note that the target can be just one of the views. Assuming you have your original planar target as an image (taken at some reference orientation), your task is to decompose the homography between images, H12, which can be done through SVD. The first two columns of H represent the first two columns of the rotation matrix and can be recovered through H = U*L*V^T, [r1 r2] = U*D*V^T, where D is a 3x2 identity-like matrix with the last row all zeros. The third column of the rotation matrix is just the cross product of the first two columns. The last column of the homography is the translation vector times some constant.
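A sketch of that decomposition (one deviation from the text: the intrinsics A are removed first, so the columns can be compared against a rotation; the scale is fixed from the first column):

#include <opencv2/core.hpp>

// Decompose a plane-to-image homography H ~ A * [r1 r2 t] into a rotation and
// a translation (up to scale), orthonormalizing the first two columns via SVD.
void poseFromPlaneHomography(const cv::Mat& A, const cv::Mat& H,
                             cv::Mat& R, cv::Mat& t)
{
    cv::Mat M = A.inv() * H;                       // ~ [r1 r2 t], up to scale

    // Fix the scale so the first column has unit norm.
    M *= 1.0 / cv::norm(M.col(0));

    // Closest orthonormal pair to the first two columns via SVD.
    cv::Mat B = M.colRange(0, 2);                  // 3x2
    cv::Mat w, U, Vt;
    cv::SVD::compute(B, w, U, Vt, cv::SVD::FULL_UV);
    cv::Mat D = cv::Mat::zeros(3, 2, CV_64F);      // the 3x2 "identity" from the text
    D.at<double>(0, 0) = D.at<double>(1, 1) = 1.0;
    cv::Mat r12 = U * D * Vt;                      // [r1 r2]

    R = cv::Mat(3, 3, CV_64F);
    r12.col(0).copyTo(R.col(0));
    r12.col(1).copyTo(R.col(1));
    cv::Mat r1 = r12.col(0).clone(), r2 = r12.col(1).clone();
    cv::Mat r3 = r1.cross(r2);                     // third column = r1 x r2
    r3.copyTo(R.col(2));

    t = M.col(2).clone();                          // translation, up to scale
}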
Finally, for an arbitrary configuration of points in 3D and pure camera rotation, the rotation is calculated using the essential matrix decomposition rather than a homography; see this.
cv::decomposeProjectionMatrix();
and
cv::RQDecomp3x3();
are both similar to what you want to achieve.
Neither of them is perfect. The theory behind them, and why you cannot extract all parameters from a 3x3 matrix, is a bit cumbersome. But the short answer is that a 3x3 projection matrix is a simplification of the complete 4x4 one, based on the fact that all points stay in the same plane.
You can try to use Levenberg-Marquardt optimization, where the parameters will be the translation and rotation, and the equations will be represented by the computed distances between features from the two images (use only inliers from the RANSAC homography).
Here is a C++ implementation of LM: http://www.ics.forth.gr/~lourakis/levmar/
