OpenCV: perspective transformation then rotate 90 degrees CW in one step

In OpenCV, I know I can use a perspective transformation to do it this way:
pts1 = np.float32([[56,65],[368,52],[28,387],[389,390]])
pts2 = np.float32([[0,0],[300,0],[0,300],[300,300]])
M = cv2.getPerspectiveTransform(pts1,pts2)
dst = cv2.warpPerspective(img,M,(300,300))
https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_imgproc/py_geometric_transformations/py_geometric_transformations.html
Then I can rotate the image by 90 degrees with the rotate function:
cv::rotate(image, image, ROTATE_90_CLOCKWISE);
But I think I should be able to do the two steps in one.
How should I create the matrix to multiply with the perspective transform M?
I thought about this:
0 -1 0
1 0 0
0 0 1
but this is wrong.
Note: I will do it in Java.
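One way to compose the two steps (a sketch in Python; the same matrix algebra carries over to Java): the proposed rotation block alone sends every pixel to negative x, because it maps (x, y) to (-y, x); a translation by the output width is also needed. Composing homographies is just matrix multiplication, with the later operation on the left. The homography solver below is only there to make the sketch self-contained; in practice you would take M straight from cv2.getPerspectiveTransform.

```python
import numpy as np

def get_perspective_transform(src, dst):
    """Solve for the 3x3 homography mapping src -> dst (the same 8x8
    linear system cv2.getPerspectiveTransform solves internally)."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def apply_h(H, p):
    """Apply a homography to a 2D point."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]

pts1 = [(56, 65), (368, 52), (28, 387), (389, 390)]
pts2 = [(0, 0), (300, 0), (0, 300), (300, 300)]
M = get_perspective_transform(pts1, pts2)

# 90-degree CW rotation of a 300x300 image: (x, y) -> (300 - y, x).
# The translation term (300 here, or 299 under OpenCV's pixel-center
# convention) is what the question's proposed matrix was missing.
w = 300
R = np.array([[0.0, -1.0, w],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])

# The rotation happens AFTER the warp, so it multiplies on the LEFT.
M_combined = R @ M
# A single cv2.warpPerspective(img, M_combined, (300, 300)) would then
# warp and rotate in one pass.
```

With this combined matrix, the corner that used to land at (0, 0) lands at (300, 0), exactly where a 90-degree CW rotation would put it.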

Related

Moving an object with a rotation

I'm trying to make a freecam sort of tool for a rather obscure game. I'm trying to figure out how to move the XY position of the freecam based on the car's rotation value, but I'm struggling to do so. I've tried using "Calculating X Y movement based on rotation angle?" and modifying it a bit, but it doesn't work as intended. The game stores rotation as a float that ranges from -1 to 1, with -1 being 0 degrees and 1 being 360 degrees:
Putting rot at -1 corresponds to X+
Putting rot at 0 corresponds to Z+
Putting rot at 1 corresponds to X-
Here's my cheat engine code:
speed = 10000
-- rot in [-1, 1] maps to [0, 360) degrees; shift by 180 so rot = -1 gives 0 degrees
local yaw = math.rad(180 * getAddressList().getMemoryRecordByDescription('rot').Value + 180)
local px = getAddressList().getMemoryRecordByDescription('playerx').Value
local pz = getAddressList().getMemoryRecordByDescription('playerz').Value
local siny = math.sin(yaw) -- sine of horizontal angle (yaw)
local cosy = math.cos(yaw) -- cosine of horizontal angle (yaw)
getAddressList().getMemoryRecordByDescription('playerx').Value = px + cosy * speed
getAddressList().getMemoryRecordByDescription('playerz').Value = pz + siny * speed
print(yaw)
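For what it's worth, the three observations listed above pin down a different angle mapping than the one in the script. Here is a sketch (in Python for readability) of a mapping that satisfies all three; the axis convention is an assumption about this particular game:

```python
import math

def move_delta(rot, speed):
    """Map the game's rot in [-1, 1] to a movement direction (dx, dz).

    Chosen so that rot = -1 gives +X, rot = 0 gives +Z, and rot = 1
    gives -X, matching the three observations above.
    """
    yaw = math.radians(90.0 * (rot + 1.0))  # -1 -> 0 deg, 0 -> 90 deg, 1 -> 180 deg
    return math.cos(yaw) * speed, math.sin(yaw) * speed

dx, dz = move_delta(-1.0, 1.0)  # pure +X movement
```

The same formula translates directly back to Lua: `yaw = math.rad(90 * (rot + 1))`.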

Why does the cube warp in unity

I want to develop an AR application using Unity and OpenCV. I use the solvePnP method in OpenCV to get the camera rotation matrix and translation, as follows:
r11  r12  r13  tx
r21  r22  r23  ty
r31  r32  r33  tz
In Unity, projectionMatrix and WorldToCameraMatrix are 4x4 matrices, which correspond to the camera intrinsic matrix and the camera pose.
In order to align the two coordinate systems, I set
WorldToCameraMatrix=
r11 r12  r13 tx
-r21 -r22 -r23 -ty
-r31 -r32 -r33 -tz
0 0 0 1
and I set
projectionMatrix =
2*f/w 0 0 0
0 2*f/h 0 0
0 0 -(far+near)/(far-near) -2*(far*near)/(far-near)
0 0 -1 0
After this, the rotation and translation are correct, but the cube warps heavily.
As in this image: [image: cube warps]
Who can help me find the reason? Thanks in advance.
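For reference, the projection matrix above can be built and sanity-checked like this (a Python sketch; `f`, `w`, `h`, `near`, `far` are as in the question, and the numeric values are arbitrary). Points on the near and far planes should land on NDC depth -1 and +1 respectively:

```python
import numpy as np

def projection_from_intrinsics(f, w, h, near, far):
    """OpenGL-style projection matrix, exactly as written in the question.
    Note this form assumes the principal point sits at the image centre."""
    return np.array([
        [2 * f / w, 0.0,       0.0,                          0.0],
        [0.0,       2 * f / h, 0.0,                          0.0],
        [0.0,       0.0,       -(far + near) / (far - near), -2 * far * near / (far - near)],
        [0.0,       0.0,       -1.0,                         0.0],
    ])

P = projection_from_intrinsics(f=800.0, w=640.0, h=480.0, near=0.1, far=100.0)

def ndc_depth(P, z):
    """Project a point on the optical axis and return its NDC depth."""
    v = P @ np.array([0.0, 0.0, z, 1.0])
    return v[2] / v[3]
```

If the depth check passes but the cube still warps, the mismatch is likely elsewhere, e.g. a principal point that is not at the image centre, which this matrix form does not account for.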

How to get skew from intrinsics/distortion

I have the camera intrinsics matrix and the distortion coefficients from OpenCV camera calibration.
I am wondering where the skew factor of the calibration is contained in these matrices, and how I can get it as a float or float[].
Skew is by default set to 0 at least since OpenCV 2.0.
If you look at the general form of the camera matrix, it is A = [fx skew u0; 0 fy v0; 0 0 1],
but in OpenCV it is A = [fx 0 u0; 0 fy v0; 0 0 1].
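In other words, the skew would live at row 0, column 1 of the 3x3 camera matrix; under OpenCV's model that entry is simply 0. A minimal sketch (the numeric values are made up for illustration):

```python
import numpy as np

# Camera matrix in OpenCV's layout: [fx 0 u0; 0 fy v0; 0 0 1]
fx, fy, u0, v0 = 800.0, 810.0, 320.0, 240.0
A = np.array([[fx,  0.0, u0],
              [0.0, fy,  v0],
              [0.0, 0.0, 1.0]])

skew = float(A[0, 1])  # the skew slot; always 0 in OpenCV's model
```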

icp transformation matrix interpretation

I'm using PCL to obtain the transformation matrix from ICP (getTransformationMatrix()).
The result obtained, for example, for a translation movement without rotation is
0.999998 0.000361048 0.00223594 -0.00763852
-0.000360518 1 -0.000299474 -0.000319525
-0.00223602 0.000298626 0.999998 -0.00305045
0 0 0 1
How can I find the transformation from the matrix?
The idea is to see the error made between the estimation and the real movement.
I have not used the library you refer to here, but it is pretty clear to me that the result you provide is a homogeneous transform, i.e., the upper-left 3x3 matrix (R) is the rotation matrix and the right 3x1 column (T) is the translation:
M1 = [ R T ; 0 0 0 1 ]
refer to the 'Matrix Representation' section here:
http://en.wikipedia.org/wiki/Kinematics
This notation is used so that you can get the final point after successive transforms by multiplying the transform matrices.
If you have a point p0 transformed n times you get the point p1 as:
P0 = [[p0_x], [p0_y], [p0_z], [1]]
P1 = [[p1_x], [p1_y], [p1_z], [1]]
M = M1*M2*...*Mn
P1 = M*P0
tROTA is the matrix with translation and rotation:
auto trafo = icp.getFinalTransformation();
Eigen::Transform<float, 3, Eigen::Affine> tROTA(trafo);
float x, y, z, roll, pitch, yaw;
pcl::getTranslationAndEulerAngles(tROTA, x, y, z, roll, pitch, yaw);
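The same decomposition can also be done by hand (a Python sketch using the matrix quoted above): the translation is the last column of the upper 3x4 block, and the overall rotation angle can be read off the trace of R; for this near-identity result it should come out tiny.

```python
import numpy as np

# The ICP result quoted in the question
M = np.array([
    [ 0.999998,    0.000361048,  0.00223594,  -0.00763852 ],
    [-0.000360518, 1.0,         -0.000299474, -0.000319525],
    [-0.00223602,  0.000298626,  0.999998,    -0.00305045 ],
    [ 0.0,         0.0,          0.0,          1.0        ],
])

R = M[:3, :3]   # rotation block
t = M[:3, 3]    # translation vector

# Angle of rotation about the (arbitrary) axis: tr(R) = 1 + 2 cos(theta)
cos_theta = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
theta = float(np.arccos(cos_theta))  # radians; ~0.002 rad here
```

Comparing `t` and `theta` against the known real movement gives the estimation error directly.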

3D Camera coordinates to world coordinates (change of basis?)

Suppose I have the coordinates X, Y, Z and orientation Rx, Ry, Rz of an object with respect to a camera.
In addition, I have the coordinates U, V, W and orientation Ru, Rv, Rw of this camera in the world.
How do I transform the position (location and rotation) of the object to its position in the world?
It sounds like a change of basis to me, but I haven't found a clear source yet.
I have found this document which is quite clear on the topic.
http://www.cse.psu.edu/~rcollins/CSE486/lecture12.pdf
It treats, among others, the reverse operation, i.e., going from world to camera 3D coordinates.
Pc = R ( Pw - C )
Where, Pc is a point in the camera world, Pw is a point in the normal world, R is a rotation matrix and C is the camera translation.
Unfortunately it is rather cumbersome to add LaTeX formulae, so I will give some MATLAB code instead.
function lecture12_collins()
% for plotting simplicity I choose my points on plane z=0 in this example
% Point in the world
Pw = [2 2.5 0 1]';
% rotation
th = pi/3;
% translation
c = [1 2.5 0]';
% obtain world to camera coordinate matrix
T = GetT(th, c);
% calculate the camera coordinate
Pc = T*Pw
% get the camera to world coordinate
T_ = GetT_(th, c)
% Alternatively you could use the inverse matrix
% T_ = inv(T)
% calculate the worldcoordinate
Pw_ = T_*Pc
% compare with a tolerance; floating-point round-off makes exact equality unreliable
assert(norm(Pw_ - Pw) < 1e-12)
function T = GetT(th, c)
% I have assumed rotation around the z axis only here.
R = [cos(th) -sin(th) 0 0
sin(th) cos(th) 0 0
0 0 1 0
0 0 0 1];
C = [1 0 0 -c(1)
0 1 0 -c(2)
0 0 1 -c(3)
0 0 0 1];
T = R*C;
function T_ = GetT_(th, c)
% negate the angle
R_ = [cos(-th) -sin(-th) 0 0
sin(-th) cos(-th) 0 0
0 0 1 0
0 0 0 1];
% negate the translation
C_ = [1 0 0 c(1)
0 1 0 c(2)
0 0 1 c(3)
0 0 0 1];
T_ = C_*R_
So far this covers the location only. I solved the rotation using extra knowledge I had of the setup: I know that my camera is perpendicular to the object and that its rotation is only around the z axis, so I can simply add the rotations of the camera and the object.
In fact you have two bases: one relative to the camera, the other absolute (the world). So you basically want to transform your relative data into absolute data.
Location
This is the easiest one. You have to translate the (X,Y,Z) position by the vector t = (U,V,W). So all your positions in absolute coordinates are (Ax, Ay, Az) = (X,Y,Z) + t = (X+U, Y+V, Z+W).
Orientation
This is a bit more difficult. You have to find the rotation matrix that rotates your camera from (I assume) (0,0,1) to (Ru,Rv,Rw). You should look at basic rotation matrices in order to decompose into the two rotations that take (0,0,1) to (Ru,Rv,Rw) (one about the X axis and one about the Z axis, for example). I advise you to draw the absolute basis and the vector (Ru,Rv,Rw) on a sheet of paper; it is the simplest way to get the right result.
So you have two basic rotation matrices r1 and r2. The resulting rotation matrix is r = r1*r2 (or r2*r1, it doesn't matter). So the absolute orientation of your object is (ARx, ARy, ARz) = r*(Rx,Ry,Rz).
Hope this helps!
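The first answer's MATLAB can be condensed into a few lines of Python (a sketch under the same assumptions: rotation about z only, homogeneous coordinates). Going camera to world is just the inverse of Pc = R (Pw - C):

```python
import numpy as np

def world_to_camera(th, c):
    """4x4 transform implementing Pc = R (Pw - C), rotation about z by th."""
    R = np.array([[np.cos(th), -np.sin(th), 0.0, 0.0],
                  [np.sin(th),  np.cos(th), 0.0, 0.0],
                  [0.0,         0.0,        1.0, 0.0],
                  [0.0,         0.0,        0.0, 1.0]])
    C = np.eye(4)
    C[:3, 3] = -np.asarray(c)  # translate by -C first, then rotate
    return R @ C

def camera_to_world(th, c):
    # The inverse transform: undo the rotation, then undo the translation
    return np.linalg.inv(world_to_camera(th, c))

# Same numbers as the MATLAB example above
Pw = np.array([2.0, 2.5, 0.0, 1.0])              # point in the world
T = world_to_camera(np.pi / 3, [1.0, 2.5, 0.0])
Pc = T @ Pw                                      # same point, camera coordinates
Pw_back = camera_to_world(np.pi / 3, [1.0, 2.5, 0.0]) @ Pc
```

Round-tripping through both transforms recovers the original world point, which is exactly what the MATLAB assert checks.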
