ICP transformation matrix interpretation - OpenCV

I'm using PCL to obtain the transformation matrix from ICP (getTransformationMatrix()).
For example, the result obtained for a translation movement without rotation is:
0.999998 0.000361048 0.00223594 -0.00763852
-0.000360518 1 -0.000299474 -0.000319525
-0.00223602 0.000298626 0.999998 -0.00305045
0 0 0 1
How can I find the transformation from the matrix?
The idea is to see the error between the estimation and the real movement.

I have not used the library you refer to here, but it is pretty clear to me that the result you provide is a homogeneous transform, i.e. the upper-left 3x3 matrix (R) is the rotation matrix and the right 3x1 column (T) is the translation:
M1 = [ [R, T], [0 0 0 1] ]
refer to the 'Matrix Representation' section here:
http://en.wikipedia.org/wiki/Kinematics
This notation is used so that you can get the final point after successive transforms by multiplying the transform matrices.
If you have a point p0 transformed n times, you get the point p1 as:
P0 = [[p0_x], [p0_y], [p0_z], [1]]
P1 = [[p1_x], [p1_y], [p1_z], [1]]
M = M1*M2*...*Mn
P1 = M*P0
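
For the concrete question, here is a minimal NumPy sketch (not part of PCL; the Euler-angle extraction below assumes the common ZYX yaw-pitch-roll convention) that pulls R and T out of the matrix above so you can compare them with the real movement:

import numpy as np

# The 4x4 homogeneous transform reported by ICP (values from the question).
M = np.array([
    [ 0.999998,    0.000361048,  0.00223594,  -0.00763852],
    [-0.000360518, 1.0,         -0.000299474, -0.000319525],
    [-0.00223602,  0.000298626,  0.999998,    -0.00305045],
    [ 0.0,         0.0,          0.0,          1.0]])

R = M[:3, :3]   # rotation: upper-left 3x3 block
T = M[:3, 3]    # translation: right 3x1 column

# Euler angles, assuming R = Rz(yaw) * Ry(pitch) * Rx(roll).
pitch = -np.arcsin(R[2, 0])
roll = np.arctan2(R[2, 1], R[2, 2])
yaw = np.arctan2(R[1, 0], R[0, 0])

print("translation:", T)                            # ~[-0.0076, -0.0003, -0.0031]
print("roll, pitch, yaw (rad):", roll, pitch, yaw)  # all close to zero here

Subtracting the known translation and rotation of the real movement from these values gives the estimation error directly.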

tROTA is the matrix with translation and rotation:
auto trafo = icp.getFinalTransformation();               // final 4x4 transform estimated by ICP
Eigen::Transform<float, 3, Eigen::Affine> tROTA(trafo);  // wrap it as an Eigen affine transform
float x, y, z, roll, pitch, yaw;
pcl::getTranslationAndEulerAngles(tROTA, x, y, z, roll, pitch, yaw);  // extract translation and Euler angles

Related

How to calculate FFT of a time series in 3D space (X, Y, T)

A time series (x, y, t) in 3D space (X, Y, T) satisfies:
x(t) = f1(t), y(t) = f2(t),
where t = 1, 2, 3,....
In other words, coordinates (x, y) vary with timestamp t. It is easy to compute the FFT of x(t) or y(t), but how do you calculate the FFT of (x, y)? I assume it should NOT be computed as a 2D-FFT, because that is for an image, whereas (x, y) is just a series. Any suggestion? Thank you.
Use fftn. For example, Y = fftn(X) returns the multidimensional Fourier transform of an N-D array using a fast Fourier transform algorithm. The N-D transform is equivalent to computing the 1-D transform along each dimension of X. The output Y is the same size as X.
For a 3-D transform, create a 3-D signal X. The size of X is 20-by-20-by-20:
x = (1:20)';
y = 1:20;
z = reshape(1:20,[1 1 20]);
X = cos(2*pi*0.01*x) + sin(2*pi*0.02*y) + cos(2*pi*0.03*z);
Compute the 3-D Fourier transform of the signal, which is also a 20-by-20-by-20 array.
Y = fftn(X)
Pad X with zeros to compute a 32-by-32-by-32 transform.
m = nextpow2(20);
Y = fftn(X,[2^m 2^m 2^m]);
size(Y)
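If you are not working in MATLAB, numpy.fft.fftn does the same job; here is a minimal NumPy sketch of the example above (the array shapes mirror the MATLAB code):

import numpy as np

# Build the same 20-by-20-by-20 test signal via broadcasting.
x = np.arange(1, 21).reshape(20, 1, 1)
y = np.arange(1, 21).reshape(1, 20, 1)
z = np.arange(1, 21).reshape(1, 1, 20)
X = np.cos(2*np.pi*0.01*x) + np.sin(2*np.pi*0.02*y) + np.cos(2*np.pi*0.03*z)

Y = np.fft.fftn(X)                         # 20x20x20 multidimensional FFT
Y_padded = np.fft.fftn(X, s=(32, 32, 32))  # zero-padded to 32x32x32
print(Y.shape, Y_padded.shape)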
You can also use this code. First, you might use SINGLE instead of DOUBLE:
psi = single(psi);
fftpsi = fft(psi,[],3);
Next, you might work slice by slice:
psi=rand(10,10,10);
% costly way
fftpsi=fftn(psi);
% This might save you some RAM, to be tested
[m,n,p] = size(psi);
for k=1:p
psi(:,:,k) = fftn(psi(:,:,k));
end
psi = reshape(psi,[m*n p]);
for i=1:m*n % you might work on bigger row-block to increase speed
psi(i,:) = fft(psi(i,:));
end
psi = reshape(psi,[m n p]);
% Check
norm(psi(:)-fftpsi(:))
I hope this is useful for you.

Inverse Perspective Transform?

I am trying to find the bird's eye image from a given image. I also have the rotations and translations (and the intrinsic matrix) required to convert it into the bird's eye plane. My aim is to find an inverse homography matrix (3x3).
rotation_x = np.asarray([[1, 0, 0, 0],
                         [0, np.cos(R_x), -np.sin(R_x), 0],
                         [0, np.sin(R_x),  np.cos(R_x), 0],
                         [0, 0, 0, 1]], np.float32)
translation = np.asarray([[1, 0, 0, 0],
                          [0, 1, 0, 0],
                          [0, 0, 1, -t_y / (dp_y * np.sin(R_x))],
                          [0, 0, 0, 1]], np.float32)
intrinsic = np.asarray([[s_x * f / dp_x, 0, 0, 0],
                        [0, 1 * f / dp_y, 0, 0],
                        [0, 0, 1, 0]], np.float32)
# The projection matrix to convert the image coordinates to the 3-D domain,
# from (x, y, 1) to (x, y, 0, 1); not sure if this is the right approach
projection = np.asarray([[1, 0, 0],
                         [0, 1, 0],
                         [0, 0, 0],
                         [0, 0, 1]], np.float32)
homography_matrix = intrinsic @ translation @ rotation_x @ projection
inv = cv2.warpPerspective(source_image, homography_matrix, (w, h),
                          flags=cv2.INTER_CUBIC | cv2.WARP_INVERSE_MAP)
My question is: is this the right approach? It works when I manually set a suitable t_y and R_x, but not for the (t_y, R_x) values that are provided.
First premise: your bird's eye view will be correct only for one specific plane in the image, since a homography can only map planes (including the plane at infinity, corresponding to a pure camera rotation).
Second premise: if you can identify a quadrangle in the first image that is the projection of a rectangle in the world, you can directly compute the homography that maps the quad into the rectangle (i.e. the "bird's eye view" of the quad), and warp the image with it, setting the scale so the image warps to a desired size. No need to use the camera intrinsics. Example: you have the image of a building with rectangular windows, and you know the width/height ratio of these windows in the world.
Sometimes you can't find rectangles, but your camera is calibrated, and thus the problem you describe comes into play. Let's do the math. Assume the plane you are observing in the given image is Z=0 in world coordinates. Let K be the 3x3 intrinsic camera matrix and [R, t] the 3x4 matrix representing the camera pose in XYZ world frame, so that if Pc and Pw represent the same 3D point respectively in camera and world coordinates, it is Pc = R*Pw + t = [R, t] * [Pw.T, 1].T, where .T means transposed. Then you can write the camera projection as:
s * p = K * [R, t] * [Pw.T, 1].T
where s is an arbitrary scale factor and p is the pixel that Pw projects onto. But if Pw=[X, Y, Z].T is on the Z=0 plane, the 3rd column of R only multiplies zeros, so we can ignore it. If we then denote with r1 and r2 the first two columns of R, we can rewrite the above equation as:
s * p = K * [r1, r2, t] * [X, Y, 1].T
But K * [r1, r2, t] is a 3x3 matrix that transforms points on a 3D plane to points on the camera plane, so it is a homography.
If the plane is not Z=0, you can repeat the same argument replacing [R, t] with [R, t] * inv([Rp, tp]), where [Rp, tp] is the coordinate transform that maps a frame on the plane, with the plane normal being the Z axis, to the world frame.
Finally, to obtain the bird's eye view, you select a rotation R whose third column (the components of the world's Z axis in camera frame) is opposite to the plane's normal.
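A minimal NumPy sketch of that construction (K, R and t below are placeholder values standing in for your calibrated intrinsics and the pose of the plane's frame in camera coordinates):

import numpy as np

# Placeholder intrinsics and pose -- substitute your own calibration here.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
theta = 0.5                       # e.g. a tilt about the X axis, in radians
R = np.array([[1.0, 0.0,            0.0],
              [0.0, np.cos(theta), -np.sin(theta)],
              [0.0, np.sin(theta),  np.cos(theta)]])
t = np.array([[0.0], [0.0], [2.0]])

# Homography mapping plane coordinates (X, Y, 1) on Z=0 to image pixels: H = K [r1 r2 t]
H = K @ np.hstack((R[:, 0:1], R[:, 1:2], t))

# Usage: project a plane point to a pixel (up to the scale factor s).
p = H @ np.array([0.5, 0.25, 1.0])
u, v = p[:2] / p[2]

# To render the bird's eye view itself, warp the image with the inverse mapping, e.g.
# cv2.warpPerspective(img, H, (w, h), flags=cv2.WARP_INVERSE_MAP), scaled as needed.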

Estimating distance from camera to ground plane point

How can I calculate distance from camera to a point on a ground plane from an image?
I have the intrinsic parameters of the camera and the position (height, pitch).
Is there any OpenCV function that can estimate that distance?
You can use undistortPoints to compute the rays backprojecting the pixels, but that API is rather hard to use for your purpose. It may be easier to do the calculation "by hand" in your code. Doing it at least once will also help you understand what exactly that API is doing.
Express your "position (height, pitch)" of the camera as a rotation matrix R and a translation vector t, representing the coordinate transform from the origin of the ground plane to the camera. That is, given a point in ground plane coordinates Pg = [Xg, Yg, Zg], its coordinates in camera frame are given by
Pc = R * Pg + t
The camera center is Cc = [0, 0, 0] in camera coordinates. In ground coordinates it is then:
Cg = inv(R) * (-t) = -R' * t
where inv(R) is the inverse of R, R' is its transpose, and the last equality is due to R being an orthogonal matrix.
Let's assume, for simplicity, that the ground plane is Zg = 0.
Let K be the matrix of intrinsic parameters. Given a pixel q = [u, v], write it in homogeneous image coordinates Q = [u, v, 1]. Its location in camera coordinates is
Qc = Ki * Q
where Ki = inv(K) is the inverse of the intrinsic parameters matrix. The same point in world coordinates is then
Qg = R' * Qc + Cg
All the points Pg = [Xg, Yg, Zg] that belong to the ray from the camera center through that pixel, expressed in ground coordinates, are then on the line
Pg = Cg + lambda * (Qg - Cg)
for lambda going from 0 to positive infinity. This last formula represents three equations in ground XYZ coordinates, and you want to find the values of X, Y, Z and lambda where the ray intersects the ground plane. But that means Zg=0, so you have only 3 unknowns. Solve them (you recover lambda from the 3rd equation, then substitute in the first two), and you get Xg and Yg of the solution to your problem.
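A minimal NumPy sketch of that calculation (K, R, t, the camera height and the pitch below are placeholder values; substitute your own intrinsics and pose):

import numpy as np

# Placeholder setup: a camera 1.5 units above the ground plane Zg = 0,
# pitched 30 degrees down, with generic intrinsics.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
pitch, h = np.radians(30.0), 1.5
# Rows of R are the camera axes expressed in ground coordinates (Pc = R*Pg + t).
R = np.array([[1.0,  0.0,            0.0],
              [0.0, -np.sin(pitch), -np.cos(pitch)],
              [0.0,  np.cos(pitch), -np.sin(pitch)]])
t = -R @ np.array([0.0, 0.0, h])     # camera centre sits at (0, 0, h) in ground coordinates

def distance_to_ground(u, v):
    # Back-project pixel (u, v) and intersect the ray with the plane Zg = 0.
    Cg = -R.T @ t                                               # camera centre in ground frame
    Qg = R.T @ (np.linalg.inv(K) @ np.array([u, v, 1.0])) + Cg  # pixel back-projected to ground frame
    d = Qg - Cg                                                 # ray direction
    lam = -Cg[2] / d[2]                                         # solve Cg_z + lambda*d_z = 0
    Pg = Cg + lam * d                                           # intersection with the ground plane
    return Pg, np.linalg.norm(Pg - Cg)                          # ground point and its distance

print(distance_to_ground(320, 240))   # principal ray: distance = h / sin(pitch) = 3.0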

How can I derive relative translation and rotation from two extrinsic camera matrices?

I have two sets of data associated with a video sequence. One contains relative rotation and translation data generated using one algorithm. The other is comprised of ground-truth extrinsic matrices associated with each frame.
I would like to compare the data-sets to determine the disparity between them. My question is, how can I derive the relative translation and rotation from the two extrinsic camera matrices?
If you have camera1 pose P1 = [R1|T1] and camera2 pose P2 = [R2|T2] then P1to2 = P2 * P1^-1.
Intuitively, imagine a simple case in which both cameras have translation zero, camera1 has rotation on X axis +30 degrees and camera2 rotation on X axis of +60 degrees.
P1 = [R1|0] P2 = [R2|0]
So the two poses differ by a +30 degree rotation on the X axis:
P1to2 = P2 * P1^-1 = [R2|0] * [R1|0]^-1 = [R2|0] * [R1^-1|0]
R2 * R1^-1 = 60 - 30 degrees on X = rotation of +30 degrees on X
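A minimal NumPy sketch of that computation, assuming each pose is given as a 4x4 homogeneous matrix (build one from [R|T] by appending the row [0 0 0 1]):

import numpy as np

def relative_pose(P1, P2):
    # Relative transform taking camera-1 coordinates to camera-2 coordinates.
    P1to2 = P2 @ np.linalg.inv(P1)
    return P1to2[:3, :3], P1to2[:3, 3]   # relative rotation, relative translation

# The example from above: both cameras at the origin, rotated about X by 30 and 60 degrees.
def rot_x(deg):
    a = np.radians(deg)
    M = np.eye(4)
    M[1:3, 1:3] = [[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]]
    return M

R_rel, t_rel = relative_pose(rot_x(30), rot_x(60))
angle = np.degrees(np.arctan2(R_rel[2, 1], R_rel[1, 1]))   # recovers +30 degrees about X

Comparing your algorithm's relative poses against the ground-truth relative poses frame by frame gives you the rotation and translation error directly.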
The extrinsic matrix is composed of the 3x3 rotation matrix and the 3x1 translation vector.
So M, which is 3x4, is simply a concatenation of the two: [R t]. Unless I misunderstand your question, getting the rotation and translation is trivial.

How to calculate quantized angle?

I am looking at the code for the Hough transform in image segmentation. The following code is from Computer Vision by Linda Shapiro. Can somebody tell me what quantize_angle is and how I can compute it?
The Hough transform looks for straight lines (or other features) in an image and represents these features as points in a different 2D coordinate system, where one axis represents the angle θ of a detected line, and the other represents the distance δ from this line to the centre of the image.
Source: Wikipedia
To produce a Hough transform of finite dimensions, both θ and δ have to be quantized. For example, if θ lies in the range (0 ≤ θ < 2π), then you could map it to the range 0–255 by a function such as the following:
int quantize_angle(float theta) {
    int q = floor(theta * 128.0 / 3.141592654 + 0.5);
    return q % 256;
}
This will result in a Hough transform that is 256 pixels wide.
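For completeness, here is a small Python sketch (not from Shapiro's code; the distance quantization and the 256-bin sizes are just illustrative assumptions) showing how the quantized angle and distance index the Hough accumulator:

import numpy as np

def quantize_angle(theta, n_bins=256):
    # Map an angle in [0, 2*pi) to an integer bin 0..n_bins-1.
    return int(np.floor(theta * n_bins / (2 * np.pi) + 0.5)) % n_bins

def quantize_distance(delta, max_delta, n_bins=256):
    # Map a signed distance in [-max_delta, max_delta] to a bin 0..n_bins-1.
    return int(np.clip((delta + max_delta) * n_bins / (2 * max_delta), 0, n_bins - 1))

# Each detected (theta, delta) pair votes into one cell of the accumulator.
accumulator = np.zeros((256, 256), dtype=np.int32)
theta, delta = 1.234, -17.5        # e.g. one line hypothesis from an edge point
accumulator[quantize_angle(theta), quantize_distance(delta, max_delta=200.0)] += 1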
