I have two right handed coordinate systems.
OpenCV
As you can see with the black arrows, the camera looks down the positive $Z$ axis. You can ignore the rest of the diagram.
OpenGL
Although not visible here, the camera in OpenGL looks down the -Z axis. I want to transform a 3D point in front of the camera in the OpenCV coordinate system to a 3D point in front of the camera in the OpenGL coordinate system.
I'm trying to represent this as a 4x4 matrix that concatenates R and T, with [0 0 0 1] as the bottom row.
So far, I've tried this
1 0 0 0
0 -1 0 0
0 0 -1 0
0 0 0 1
but it doesn't seem to work; nothing shows up in the OpenGL coordinate system.
The camera coordinates of OpenCV go X right, Y down, Z forward, while the camera coordinates of OpenGL go X right, Y up, Z toward the viewer (the camera looks down -Z).
Take solvePnP, one of the most commonly used functions, as an example.
You get a 3x3 rotation matrix R and a 3x1 translation vector T, and build a 4x4 view matrix M from R and T. Simply negate the 2nd and 3rd rows of M and you will get a view matrix for OpenGL rendering.
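A minimal Python sketch of that recipe, assuming rvec and tvec come from cv2.solvePnP (the example pose values below are made up):

    import numpy as np
    import cv2

    # Example pose as solvePnP would return it (OpenCV camera convention:
    # X right, Y down, Z forward). Replace with your own rvec/tvec.
    rvec = np.array([[0.1], [0.2], [0.3]], dtype=np.float64)
    tvec = np.array([[0.5], [-0.2], [3.0]], dtype=np.float64)

    R, _ = cv2.Rodrigues(rvec)        # 3x3 rotation matrix

    # 4x4 world-to-camera (view) matrix in the OpenCV convention
    M = np.eye(4)
    M[:3, :3] = R
    M[:3, 3] = tvec.ravel()

    # Negate the 2nd and 3rd rows to flip Y and Z, which converts the matrix
    # to the OpenGL camera convention (X right, Y up, looking down -Z)
    M[1, :] *= -1
    M[2, :] *= -1

    print(M)   # use as the OpenGL view matrix (transpose if you need column-major)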
Related
As I understand it, OpenCV's coordinate system is as in this diagram.
The left camera of a calibrated stereo pair is located at the origin facing the Z direction.
I have a pair of 2464x2056 pixel cameras that I have calibrated (with a stereo rms of around 0.35), computed the disparity on a pair of images and reprojected this to get the 3D pointcloud. However, I've noticed that the Z axis is not in line with the optical centre of the camera.
This does somewhat mess with some of the pointcloud manipulation I'm hoping to do. Is this expected, or does it indicate that something has gone wrong along the way?
Below is the pointcloud I've generated, plus the axes: the red, green and blue lines indicate the x, y and z axes respectively, coming out from the origin.
As you can see, the Z axis intercepts the pointcloud between the head and the post; this corresponds to a pixel coordinate of approximately x = 637, y = 1028 when I fix the principal point during calibration to cx = 1232, cy = 1028. When I remove the CV_FIX_PRINCIPAL_POINT flag, this is calculated as approximately cx = 1310, cy = 1074, and the Z axis intercepts at around x = 310, y = 1050.
Compared to the rectified image here, where the midpoint x = 1232, y = 1028 is marked by a yellow cross and the centre of the image is over the mannequin head, the intersection of the Z axis with the pointcloud is significantly off from where I would expect it.
Does anyone have any idea as to why this could be occurring? Any help would be greatly appreciated.
I need to find the world coordinate of a pixel using OpenCV. So when I take pixel (0,0) in my image (that's the upper-left corner), I want to know to what 3D world space coordinate this pixel corresponds on my image plane. I know that a single pixel corresponds to a line of 3D points in world space, but I want specifically the one that lies on the image plane itself.
This is the formula of the OpenCV pinhole model, of which I have the first (intrinsics) and second (extrinsics) matrices. I know that I have u and v, but I don't know how to get from this u and v to the correct X, Y and Z coordinates.
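For reference, that projection is the standard pinhole model, with $s$ a scale factor, $K$ the intrinsic matrix and $[R \mid t]$ the extrinsic world-to-camera transform:

$$ s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = K \, [R \mid t] \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix}, \qquad K = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} $$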
What I've tried already:
I thought to just set s to 1 and make a homogeneous coordinate from [u v 1]^T by adding a 1, like so: [u v 1 1]^T. Then I multiplied the intrinsics with the extrinsics and made the result into a 4x4 matrix by adding the row [0 0 0 1]. This was then inverted and multiplied with [u v 1 1]^T to get my X, Y and Z. But when I checked whether four pixels calculated like that lay on the same plane (the image plane), this was wrong.
So, any ideas?
IIUC, you want the intersection I of the image plane with the ray that back-projects a given pixel P from the camera center.
Let's define the coordinate systems first. The usual OpenCV convention is as follows:
Image coordinates: origin at the top-left corner, u axis going right (increasing column) and v axis going down.
Camera coordinates: origin at the camera center C, z axis going toward the scene, x axis going right and y axis going downward.
Then the image plane in camera frame is z=fx, where fx is the focal length measured in pixels, and a pixel (u, v) has camera coordinates (u - cx, v - cy, fx).
Multiply that point by the inverse of the (intrinsic) camera matrix K and you'll get the same point in metric camera coordinates.
Finally, multiply that by the inverse of the world-to-camera coordinate transform [R | t] and you'll get the same point in world coordinates.
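A small numpy sketch of those three steps, working on the plane z = 1 in camera coordinates (which is what the multiplication by K⁻¹ gives you, i.e. the image plane up to the focal-length scale); K, R and t below are placeholders for your own calibration data:

    import numpy as np

    # Placeholder calibration data; substitute your own K, R, t
    K = np.array([[800.0,   0.0, 320.0],
                  [  0.0, 800.0, 240.0],
                  [  0.0,   0.0,   1.0]])
    R = np.eye(3)               # world-to-camera rotation
    t = np.zeros(3)             # world-to-camera translation

    u, v = 0.0, 0.0             # pixel of interest (top-left corner)

    # Back-project the pixel: K^-1 * [u, v, 1] is the point on the plane z = 1
    # in camera coordinates
    p_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])

    # Invert the world-to-camera transform [R | t]: X_world = R^T (X_cam - t)
    p_world = R.T @ (p_cam - t)

    print(p_world)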
I'm trying to determine the orientation of a face in a video.
The video starts with a frontal image of the face, so it has no rotation. In the following frames the head rotates, and I'm trying to determine that rotation, which will let me determine the face orientation relative to the camera position.
I'm using OpenCV and C++ for the job.
I'm using SURF descriptors to find points on the face, which I use to calculate a homography between the two images. Since the two frames are very close to each other, the head rotation in that interval is minimal and my homography matrix is close to the identity matrix.
This is my homography matrix:
H = findHomography(k1,k2,RANSAC,8);
where k1 and k2 are the keypoints extracted with SURF.
I'm using decomposeProjectionMatrix to extract the rotation matrix, but now I'm not sure how to interpret rotMatrix. This one too is basically (1 0 0; 0 1 0; 0 0 1), where the off-diagonal entries are values on the order of 1e-10 to 1e-16.
In theory, what I was trying to do was to find the angle of rotation at each frame and store it somewhere, so that if I get a 1° change in each frame, after 10 frames I know that my head has changed its orientation by 10°.
I spent some time reading everything I could find about QR decomposition, homography matrices and so on, but I haven't been able to get around this. Hence, any help would be really appreciated.
Thanks!
The upper-left 2x2 of the homography matrix is a 2D rotation matrix. If you work through the multiplication of the matrix with a point (i.e. take R*p), you'll see it's equivalent to:
newX = oldVector dot firstRow
newY = oldVector dot secondRow
In other words, the first row of the matrix is a unit vector which is the x axis of the new head. (If there's a scale difference between the frames it won't be a unit vector, but this method will still work.) So you should be able to calculate
rotation = atan2(second entry of first row, first entry of first row)
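A short Python sketch of that calculation (the homography H below is a made-up example close to the identity; in practice it comes from findHomography):

    import numpy as np

    # Example homography close to the identity, as described in the question
    # (in practice: H, _ = cv2.findHomography(k1, k2, cv2.RANSAC, 8))
    H = np.array([[ 0.9998, 0.0175,  2.0],
                  [-0.0175, 0.9998, -1.0],
                  [ 0.0,    0.0,     1.0]])

    # First row of the upper-left 2x2 block ~ the rotated x axis, so
    # rotation = atan2(second entry of first row, first entry of first row)
    angle = np.degrees(np.arctan2(H[0, 1], H[0, 0]))
    print(angle)   # roughly 1 degree per frame; accumulate over frames for the total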
I really hope this isn't a waste of anyone's time but I've run into a small problem. I am able to construct the transformation matrix using the following:
M =
s*cos(theta) -s*sin(theta) t_x
s*sin(theta) s*cos(theta) t_y
0 0 1
This works if I give the correct values for theta, s (scale) and tx/ty and then use this matrix as one of the arguments for cv::warpPerspective. The problem is that this matrix rotates about the (0,0) pixel, whereas I would like it to rotate about the centre pixel (cols/2, rows/2). How can I incorporate the centre-point rotation into this matrix?
Two possibilities. The first is to use the function getRotationMatrix2D which takes the center of rotation as an argument, and gives you a 2x3 matrix. Add the third row and you're done.
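A Python sketch of this first option (image size, angle and scale below are example values):

    import cv2
    import numpy as np

    img = np.zeros((480, 640, 3), np.uint8)            # stand-in image
    center = (img.shape[1] / 2.0, img.shape[0] / 2.0)  # (cols/2, rows/2)

    # 2x3 rotation/scale about the image centre (angle in degrees)
    M23 = cv2.getRotationMatrix2D(center, 30, 1.0)

    # Append the row [0 0 1] to get the 3x3 matrix warpPerspective expects
    M = np.vstack([M23, [0.0, 0.0, 1.0]])

    out = cv2.warpPerspective(img, M, (img.shape[1], img.shape[0]))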
A second possibility is to construct an additional matrix that translates the picture before and after the rotation:
T =
1 0 -cols/2
0 1 -rows/2
0 0 1
Multiply your rotation matrix M with this one to get the total transform T⁻¹MT (e.g. with the function gemm) and apply that with warpPerspective.
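And a sketch of the second option, composing T⁻¹, M and T with numpy instead of gemm (again with example values):

    import cv2
    import numpy as np

    img = np.zeros((480, 640, 3), np.uint8)            # stand-in image
    rows, cols = img.shape[:2]
    theta, s, tx, ty = np.radians(30), 1.0, 0.0, 0.0   # example values

    # Your matrix M, which rotates/scales about pixel (0, 0)
    M = np.array([[ s * np.cos(theta), -s * np.sin(theta), tx],
                  [ s * np.sin(theta),  s * np.cos(theta), ty],
                  [ 0.0,                0.0,               1.0]])

    # T moves the image centre to the origin; T^-1 moves it back
    T = np.array([[1.0, 0.0, -cols / 2.0],
                  [0.0, 1.0, -rows / 2.0],
                  [0.0, 0.0,  1.0]])
    total = np.linalg.inv(T) @ M @ T     # rotate about the centre pixel

    out = cv2.warpPerspective(img, total, (cols, rows))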
I need to find the pose (rotation matrix + translation vector) for a camera, and for that I am using cv2.solvePnP(), but the results I get from photos don't match.
In order to debug, I created (with numpy) a "debugging 3d scene" composed of some object points (four corners of a square), some camera points (focal point, principal point and four corners of the virtual projection plane) and parameters (focal distance, initial orientation).
Then, I construct a general rotation matrix by multiplying three axis rotation matrices, apply this general rotation to the camera (numpy.dot()), project the object points in the virtual projection plane (line-plane intersection algorithm), and calculate the in-plane 2D coordinates (point-line distance) to the projection plane axes.
After doing this (object points to image points via the rotation matrix), I feed the image points and object points to cv2.solvePnP(), run the resulting rvec through cv2.Rodrigues(), and get a matrix "not quite identical" to the one I used, differing only by a transposition and by some elements having the opposite sign (negative vs. positive), following this relation:
solvepnp_rotmatrix = my_original_matrix.transpose * [ 1  1  1]
                                                    [ 1  1 -1]
                                                    [-1 -1  1]
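Restating that observed relation in numpy form, where my_original_matrix is just a placeholder and '*' is element-wise multiplication:

    import numpy as np

    # Hypothetical hand-built rotation (placeholder for the one in the question)
    my_original_matrix = np.eye(3)

    # Sign mask taken from the relation above; '*' is element-wise, not a matrix product
    mask = np.array([[ 1,  1,  1],
                     [ 1,  1, -1],
                     [-1, -1,  1]])

    predicted_solvepnp_rotmatrix = my_original_matrix.T * mask
    print(predicted_solvepnp_rotmatrix)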
Although the rotation matrix mismatch is "solvable" with this hack, the translation vector gives coordinates that don't make sense to me.
I suspect there are mismatches between my 3D model (handedness, axes orientation, order of rotations) and the model used by OpenCV:
I use an OpenGL-like coordinate system (X increases to the right, Y increases upwards, and Z increases toward the observer);
I applied the rotations in the order that made more sense to me (all right-handed, first around global Z, then around global X, then around global Y);
The image plane is between object and camera focal point (virtual projection plane, instead of real/CCD);
The origin of my image plane (virtual CCD) is the lower-left corner (Xpix increases to the right, Ypix increases upwards).
My questions are:
Given that the terms of the rotation matrix are identical, only transposed and with a different sign in some terms, is it possible that I am confusing some of the OpenCV conventions (handedness, order of rotations, axis direction)? And how can I discover which one(s)?
Also, is there a way to relate my handmade translation vector to the tvec returned by solvePnP? (Of course, ideally, the best would be to make the coordinate systems match in the first place.)
Any help will be most welcome!