I need help creating a function that converts three angles (in degrees: yaw, pitch, and roll) to six float values.
How would I go about making a function that outputs these floats?
{0, 0, 0} = {1, 0, 0, -0, -0, 1}
{45, 0, 0} = {0.70710676908493, 0.70710676908493, 0, -0, -0, 1}
{0, 90, 0} = {-4.3711388286738e-08, 0, 1, -1, 0, -4.3711388286738e-08}
{0, 0, 135} = {1, -0, 0, -0, -0.70710676908493, -0.70710676908493}
{180, 180, 0} = {1, -8.7422776573476e-08, 8.7422776573476e-08, 8.7422776573476e-08, 0, -1}
{225, 0, 225} = {-0.70710682868958, 0.5, 0.5, -0, 0.70710670948029, -0.70710682868958}
{270, 270, 270} = {1.4220277639103e-16, -2.3849761277006e-08, 1, 1, 1.1924880638503e-08, 1.42202776319103e-16}
{315, 315, 315} = {0.5, -0.85355341434479, 0.14644680917263, 0.70710688829422, 0.5, 0.5}
More examples, requested by Egor Skriptunoff:
{10, 20, 30} = {0.92541658878326, -0.018028322607279, 0.37852230668068, -0.34202012419701, -0.46984630823135, 0.81379765272141}
{10, 30, 20} = {0.85286849737167, -0.0052361427806318, 0.52209949493408, -0.5, -0.29619812965393, 0.81379765272141}
{20, 10, 30} = {0.92541658878326, 0.21461015939713, 0.3123245537281, -0.17364817857742, -0.49240386486053, 0.85286849737167}
{20, 30, 10} = {0.81379765272141, 0.25523611903191, 0.52209949493408, -0.5, -0.15038372576237, 0.85286849737167}
{30, 10, 20} = {0.85286849737167, 0.41841205954552, 0.3123245537281, -0.17364817857742, -0.33682405948639, 0.92541658878326}
{30, 20, 10} = {0.81379765272141, 0.4409696161747, 0.37852230668068, -0.34202012419701, -0.16317591071129, 0.92541658878326}
The code I currently have can calculate all of the floats except the 2nd and 3rd.
function convert_rotations(Yaw, Pitch, Roll)
    return {
        math.cos(math.rad(Yaw))*math.cos(math.rad(Pitch)),
        0, -- 2nd float: correct general formula unknown
        0, -- 3rd float: correct general formula unknown
        math.sin(math.rad(Pitch))*-1,
        math.sin(math.rad(Roll))*math.cos(math.rad(Pitch))*-1,
        math.cos(math.rad(Roll))*math.cos(math.rad(Pitch))
    }
end
I cannot find the correct math for the 2nd and 3rd floats when all of the angles are nonzero, but I did come up with these special cases:
-- The second float when the Yaw is 0 degrees
math.sin(math.rad(Pitch))*math.sin(math.rad(Roll))*-1
-- The second float when the Pitch is 0 degrees
math.sin(math.rad(Yaw))*math.cos(math.rad(Roll))
-- The second float when the Roll is 0 degrees
math.sin(math.rad(Yaw))*math.sin(math.rad(Pitch))
And for the 3rd float I came up with this:
-- The third float when Yaw is 0 degrees
math.sin(math.rad(Pitch))*math.cos(math.rad(Roll))
-- The third float when Pitch is 0 degrees
math.sin(math.rad(Yaw))*math.sin(math.rad(Roll))
-- The third float when Roll is 0 degrees
math.cos(math.rad(Yaw))*math.sin(math.rad(Pitch))
-- Rotates vector X towards vector Y by alpha degrees, in the plane spanned by X and Y
local function Rotate(X, Y, alpha)
    local c, s = math.cos(math.rad(alpha)), math.sin(math.rad(alpha))
    local t1, t2, t3 = X[1]*s, X[2]*s, X[3]*s
    X[1], X[2], X[3] = X[1]*c+Y[1]*s, X[2]*c+Y[2]*s, X[3]*c+Y[3]*s
    Y[1], Y[2], Y[3] = Y[1]*c-t1, Y[2]*c-t2, Y[3]*c-t3
end
local function convert_rotations(Yaw, Pitch, Roll)
    -- Start from the three axis unit vectors and rotate them in sequence
    local F, L, T = {1,0,0}, {0,1,0}, {0,0,1}
    Rotate(F, L, Yaw)
    Rotate(F, T, Pitch)
    Rotate(T, L, Roll)
    return {F[1], -L[1], -T[1], -F[3], L[3], T[3]}
end
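Expanding the three Rotate calls symbolically (my own derivation from the answer above, with the degree-to-radian conversion done as in the code) gives closed-form expressions for the two values missing from the original function:
f2 = sin(Yaw)*cos(Roll) - cos(Yaw)*sin(Pitch)*sin(Roll)
f3 = cos(Yaw)*sin(Pitch)*cos(Roll) + sin(Yaw)*sin(Roll)
As a check, {10, 20, 30} gives f2 ≈ -0.018028 and f3 ≈ 0.378522, matching the example table above.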
Related
I have a coordinate system where the upper-left corner is (0,0) and the bottom-right is (375,647).
In this system I have the coordinates of vertices like below:
var Vertices = [
    Vertex(x: 0, y: 0, z: 0, r: 1, g: 0, b: 0, a: 1),
    Vertex(x: 375, y: 0, z: 0, r: 1, g: 0, b: 0, a: 1),
    Vertex(x: 375, y: 647, z: 0, r: 1, g: 0, b: 0, a: 1),
    Vertex(x: 0, y: 647, z: 0, r: 1, g: 0, b: 0, a: 1),
]
so the vertices draw a rectangle that fills my whole coordinate system. The screen size is (750,1294); my coordinate system represents this screen.
With Metal, normalized device coordinates use a left-handed coordinate system and map to positions in the viewport. Primitives are clipped to a box in this coordinate system and then rasterized. The lower-left corner of the clipping box is at an (x,y) coordinate of (-1.0,-1.0) and the upper-right corner is at (1.0,1.0).
How do I draw my vertices with Metal?
I tried
[RenderCommand setViewport: (MTLViewport){ 0.0, 0.0, 750, 1294, 0.0, 1.0 }];
but it doesn't seem to work, so maybe I'm missing something?
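For what it's worth, the viewport only maps normalized device coordinates to window pixels; it does not rescale pixel-space vertex positions into NDC for you. A plausible fix (assuming the 375×647 design space should fill the whole viewport) is to convert each vertex position before drawing, flipping y because the source origin is top-left while NDC's origin is bottom-left:
x_ndc = 2*x/375 - 1
y_ndc = 1 - 2*y/647
As a check, the vertex (375, 0) maps to (1, 1), the upper-right corner of the clipping box.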
I'm using OpenCV 3.2.0 to do some Fourier space calculations. To obtain a phase image after an inverse DFT, I tried using cv::phase() but I noticed that in some cases, it returned values close to 2*Pi where it should (in my eyes) return a value close to zero. I wonder if this function is implemented badly or if I'm using it wrong.
This is my example data, a 7x8 FFT where the imaginary part is zero or, due to rounding errors, very close to zero (value pairs in the form of real, imag):
0.75686288, 0, 0.74509817, -3.6017641e-19, 0.74117655, -4.8023428e-19, 0.76078451, -1.3206505e-18, 0.77647072, 0, 0.74509817, -3.6017641e-19, 0.72549027, 4.8023428e-19, 0.70588243, 2.0410032e-18;
0.70980388, 0, 0.66666675, -6.6032515e-19, 0.69803929, -3.8418834e-18, 0.73725492, -5.3426161e-18, 0.69803923, 0, 0.6549021, -6.6032515e-19, 0.5725491, 3.8418834e-18, 0.5411765, 6.6632662e-18;
0.63529414, 0, 0.6352942, -1.7408535e-18, 0.63921577, -5.1625314e-18, 0.61960787, -3.1815585e-18, 0.60784316, 0, 0.55686277, -1.7408535e-18, 0.4705883, 5.1625314e-18, 0.45882356, 6.6632657e-18;
0.58039224, 0, 0.58431381, -6.6032412e-19, 0.63921583, -7.8038246e-18, 0.63921577, -7.9839117e-18, 0.50196087, 0, 0.45490205, -6.6032412e-19, 0.38431379, 7.8038246e-18, 0.35686284, 9.3045593e-18;
0.54117656, 0, 0.58431375, -9.0044183e-19, 0.68627465, -9.1244722e-18, 0.6156863, -6.7833236e-18, 0.48627454, 0, 0.45490202, -9.0044183e-19, 0.38823539, 9.1244722e-18, 0.36470592, 8.5842074e-18;
0.50980395, 0, 0.56470597, -6.0029469e-19, 0.57254916, -4.8023546e-18, 0.54901963, -3.9619416e-18, 0.4784314, 0, 0.42352945, -6.0029469e-19, 0.41568634, 4.8023546e-18, 0.39999998, 5.162531e-18;
0.49411768, 0, 0.50588238, 4.8023392e-19, 0.54509813, -1.6808249e-18, 0.56078434, -3.241587e-18, 0.49803928, 0, 0.49411774, 4.8023392e-19, 0.49019611, 1.6808249e-18, 0.47058827, 2.2811191e-18
I then applied cv::phase() like this:
Mat planes[2];
split(output,planes);
Mat ph;
phase(planes[0],planes[1],ph);
Then, cout<<ph yields:
0, 6.2831855, 6.2831855, 6.2831855, 0, 6.2831855, 6.6180405e-19, 2.8908079e-18;
0, 6.2831855, 6.2831855, 6.2831855, 0, 6.2831855, 6.7087144e-18, 1.2309944e-17;
0, 6.2831855, 6.2831855, 6.2831855, 0, 6.2831855, 1.096805e-17, 1.451942e-17;
0, 6.2831855, 6.2831855, 6.2831855, 0, 6.2831855, 2.0301558e-17, 2.6067677e-17;
0, 6.2831855, 6.2831855, 6.2831855, 0, 6.2831855, 2.3497438e-17, 2.3532349e-17;
0, 6.2831855, 6.2831855, 6.2831855, 0, 6.2831855, 1.1550381e-17, 1.2903592e-17;
0, 9.4909814e-19, 6.2831855, 6.2831855, 0, 9.7169579e-19, 3.4281555e-18, 4.8463495e-18
So the output is sort of oscillating between the lowest and highest value. I was expecting a matrix of (near) zeros though, because a non-existing phase shift would be in line with the underlying physics application. I then tried computing the phase image pixel by pixel:
Mat_<double> myPhase = Mat_<double>(8,7);
for(int i = 0; i < fftReal.rows; i++) {
    for(int j = 0; j < fftReal.cols; j++) {
        float fftRealVal = planes[0].at<float>(i,j);
        float fftImagVal = planes[1].at<float>(i,j);
        double angle = atan2(fftImagVal, fftRealVal);
        myPhase(i,j) = angle;
    }
}
Here, the output of cout<<myPhase is what I expected to see, a matrix of near zeros:
0, -4.833945789050036e-19, -6.479350716073673e-19, -1.735906137457605e-18, 0, -4.833945789050036e-19, 6.619444609555068e-19;
0, -9.904875721669217e-19, -5.503821154321125e-18, -7.246633215917781e-18, 0, -1.00828074413082e-18, 6.710137932686301e-18;
0, -2.740232027682232e-18, -8.076351618590122e-18, -5.13479354918468e-18, 0, -3.126180439429062e-18, 1.097037782204674e-17;
0, -1.130084743690479e-18, -1.220843476128649e-17, -1.249016668765776e-17, 0, -1.451574279023501e-18, 2.030586625060691e-17;
0, -1.541024556489219e-18, -1.329565697018843e-17, -1.101749982000204e-17, 0, -1.979419217601631e-18, 2.350242300975683e-17;
0, -1.063021695913201e-18, -8.387671795472417e-18, -7.216393147084068e-18, 0, -1.417362295683461e-18, 1.155283208729227e-17;
0, 9.492995611309157e-19, -3.083527284733071e-18, -5.78045227144509e-18, 0, 9.719018577786209e-19, 3.428882635099473e-18;
4.847377651234249e-18, 6.937607420147441e-310, 6.937607420153765e-310, 6.93760742011582e-310, 6.93760742011503e-310, 6.937607420163251e-310, 6.937607420188547e-310
So does cv::phase() yield a wrong result here due to some rounding errors, or does it work as it should and I'm missing some pre-processing?
Note that
2*pi - 6.479350716073673e-19 == 6.28318530717959
Your two results are equivalent.
The C++ std::atan2 function returns a value in the range (-π , +π], so for any angle close to zero, whether positive or negative, you get a value close to zero.
The OpenCV cv::phase function is documented to use atan2, but it seems to return a value in the range [0, 2π) instead.
If you need the output to be in the (-π , +π] range, you can do (modified from here):
float pi = 3.14159265358979;
cv::subtract(ph, 2*pi, ph, (ph > pi));
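To see the two behaviours side by side in isolation, here is a minimal sketch (my own, assuming an OpenCV build is available) comparing cv::phase() against std::atan2 on a single value with a tiny negative imaginary part:
#include <opencv2/core.hpp>
#include <cmath>
#include <iostream>

int main() {
    float re = 0.745f;                 // real part
    float im = -3.6e-19f;              // near-zero negative imaginary part
    cv::Mat x(1, 1, CV_32F, &re);      // wrap the scalars as 1x1 float matrices
    cv::Mat y(1, 1, CV_32F, &im);
    cv::Mat ph;
    cv::phase(x, y, ph);               // folds the tiny negative angle into [0, 2*pi)
    std::cout << ph.at<float>(0, 0) << std::endl;  // ~6.2831855
    std::cout << std::atan2(im, re) << std::endl;  // ~-4.8e-19
    return 0;
}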
Apologies if this seems trivial; I'm relatively new to OpenCV.
Essentially, I'm trying to create a function that takes in a camera's image, the known world coordinates of that camera, and the world coordinates of some other point 2, and then transforms the camera's image to what it would look like if the camera were at point 2. From my understanding, the best way to tackle this is with a homography transformation using the warpPerspective tool.
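For reference, a single homography relates two camera positions exactly only when the viewed scene is planar, or the camera only rotates (an assumption worth checking for an Unreal scene). The standard plane-induced form, with K the intrinsic matrix, R and t the relative pose, and n and d the plane's normal and distance, is
H = K * (R - t*n^T/d) * K^(-1)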
The experiment is being done inside the Unreal Game simulation engine. Right now, I essentially read the data from the camera, and add a set transformation to the image. However, I seem to be doing something wrong as the image is looking something like this (original image first then distorted image):
Original Image
Distorted Image
This is the current code I have. Basically, it reads in the texture from Unreal Engine, gets the individual pixel values, and puts them into the OpenCV Mat. Then I try to apply my warpPerspective transformation. Interestingly, if I just try a simple warpAffine transformation (a rotation), it works fine. I have seen this question: Opencv virtually camera rotating/translating for bird's eye view, but I cannot figure out what I am doing wrong vs. their solution. I would really appreciate any help or guidance any of you may have. Thanks in advance!
ROSCamTextureRenderTargetRes->ReadPixels(ImageData);
cv::Mat image_data_matrix(TexHeight, TexWidth, CV_8UC3);
cv::Mat warp_dst, warp_rotate_dst;
int currCol = 0;
int currRow = 0;
cv::Vec3b* pixel_left = image_data_matrix.ptr<cv::Vec3b>(currRow);
for (auto color : ImageData)
{
    pixel_left[currCol][2] = color.R;
    pixel_left[currCol][1] = color.G;
    pixel_left[currCol][0] = color.B;
    currCol++;
    if (currCol == TexWidth)
    {
        currRow++;
        currCol = 0;
        pixel_left = image_data_matrix.ptr<cv::Vec3b>(currRow);
    }
}
warp_dst = cv::Mat(image_data_matrix.rows, image_data_matrix.cols, image_data_matrix.type());
double rotX = (45 - 90)*PI / 180;
double rotY = (90 - 90)*PI / 180;
double rotZ = (90 - 90)*PI / 180;
cv::Mat A1 = (cv::Mat_<float>(4, 3) <<
    1, 0, (-1)*TexWidth / 2,
    0, 1, (-1)*TexHeight / 2,
    0, 0, 0,
    0, 0, 1);
// Rotation matrices RX, RY, RZ
cv::Mat RX = (cv::Mat_<float>(4, 4) <<
    1, 0, 0, 0,
    0, cos(rotX), (-1)*sin(rotX), 0,
    0, sin(rotX), cos(rotX), 0,
    0, 0, 0, 1);
cv::Mat RY = (cv::Mat_<float>(4, 4) <<
    cos(rotY), 0, (-1)*sin(rotY), 0,
    0, 1, 0, 0,
    sin(rotY), 0, cos(rotY), 0,
    0, 0, 0, 1);
cv::Mat RZ = (cv::Mat_<float>(4, 4) <<
    cos(rotZ), (-1)*sin(rotZ), 0, 0,
    sin(rotZ), cos(rotZ), 0, 0,
    0, 0, 1, 0,
    0, 0, 0, 1);
// R - rotation matrix
cv::Mat R = RX * RY * RZ;
// T - translation matrix
cv::Mat T = (cv::Mat_<float>(4, 4) <<
    1, 0, 0, 0,
    0, 1, 0, 0,
    0, 0, 1, dist,
    0, 0, 0, 1);
// K - intrinsic matrix
cv::Mat K = (cv::Mat_<float>(3, 4) <<
    12.5, 0, TexHeight / 2, 0,
    0, 12.5, TexWidth / 2, 0,
    0, 0, 1, 0
    );
cv::Mat warp_mat = K * (T * (R * A1));
//warp_mat = cv::getRotationMatrix2D(srcTri[0], 43.0, 1);
//cv::warpAffine(image_data_matrix, warp_dst, warp_mat, warp_dst.size());
cv::warpPerspective(image_data_matrix, warp_dst, warp_mat, image_data_matrix.size(), CV_INTER_CUBIC | CV_WARP_INVERSE_MAP);
cv::imshow("distort", warp_dst);
cv::imshow("imaage", image_data_matrix)
Given a 3 x 3 rotation matrix R and a 3 x 1 translation matrix T, I am wondering how to apply T and R to an image.
Let's say the IplImage img is 640 x 480.
What I want to do is R*(T*img).
I was thinking of using cvGemm, but that didn't work.
The function you are searching for is probably warpPerspective(): this is a use case...
// Projection 2D -> 3D matrix
Mat A1 = (Mat_<double>(4,3) <<
    1, 0, -w/2,
    0, 1, -h/2,
    0, 0, 0,
    0, 0, 1);
// Rotation matrix around the X axis
Mat R = (Mat_<double>(4, 4) <<
    1, 0, 0, 0,
    0, cos(alpha), -sin(alpha), 0,
    0, sin(alpha), cos(alpha), 0,
    0, 0, 0, 1);
// Translation matrix on the Z axis
Mat T = (Mat_<double>(4, 4) <<
    1, 0, 0, 0,
    0, 1, 0, 0,
    0, 0, 1, dist,
    0, 0, 0, 1);
// Camera intrinsics matrix 3D -> 2D
Mat A2 = (Mat_<double>(3,4) <<
    f, 0, w/2, 0,
    0, f, h/2, 0,
    0, 0, 1, 0);
Mat transfo = A2 * (T * (R * A1));
Mat source;
Mat destination;
warpPerspective(source, destination, transfo, source.size(), INTER_CUBIC | WARP_INVERSE_MAP);
I hope it can help you,
Julien
PS: I gave the example with a projection from 2D to 3D, but you can directly use transfo = T * R;
I have a calibrated camera where I know the intrinsic and extrinsic data exactly. The height of the camera is also known. Now I want to virtually rotate the camera to get a bird's eye view, such that I can build the homography matrix with the three rotation angles and the translation.
I know that 2 points can be transformed from one image to another via homography as
x = K * (R - t*n^T/d) * K^(-1) * x'
There are a few things I'd like to know now:
If I want to bring the image coordinates back into the camera coordinate system (ccs), I have to multiply them by K^(-1), right? And as image coordinates I use (x', y', 1)?
Then I need to build a rotation matrix for rotating the ccs... but which convention should I use? And how do I know how to set up my world coordinate system (WCS)?
The next thing is the normal and the distance. Is it right to just take three points lying on the ground and compute the normal from them? And is the distance then the camera height?
Also, I'd like to know how I can change the height of the virtual bird's eye view camera, such that I can say I want to see the ground plane from 3 meters height. How can I use the unit "meter" in the translation and homography matrix?
So far for now; it would be great if someone could enlighten and help me. And please don't suggest generating the bird view with "getperspective", I've already tried that but this way is not suitable for me.
Senna
Here is the code I would advise (it's one of mine); to my mind it answers a lot of your questions.
If you want the distance, note that it sits in the translation matrix T below, as the coefficient in the third row, fourth column.
Hope it will help you...
Mat source = imread("Whatyouwant.jpg");
int alpha_ = 90, beta_ = 90, gamma_ = 90;
int f_ = 500, dist_ = 500;
Mat destination;
string wndname1 = getFormatWindowName("Source: ");
string wndname2 = getFormatWindowName("WarpPerspective: ");
string tbarname1 = "Alpha";
string tbarname2 = "Beta";
string tbarname3 = "Gamma";
string tbarname4 = "f";
string tbarname5 = "Distance";
namedWindow(wndname1, 1);
namedWindow(wndname2, 1);
createTrackbar(tbarname1, wndname2, &alpha_, 180);
createTrackbar(tbarname2, wndname2, &beta_, 180);
createTrackbar(tbarname3, wndname2, &gamma_, 180);
createTrackbar(tbarname4, wndname2, &f_, 2000);
createTrackbar(tbarname5, wndname2, &dist_, 2000);
imshow(wndname1, source);
while(true) {
    double f, dist;
    double alpha, beta, gamma;
    alpha = ((double)alpha_ - 90.)*PI/180;
    beta = ((double)beta_ - 90.)*PI/180;
    gamma = ((double)gamma_ - 90.)*PI/180;
    f = (double) f_;
    dist = (double) dist_;
    Size taille = source.size();
    double w = (double)taille.width, h = (double)taille.height;
    // Projection 2D -> 3D matrix
    Mat A1 = (Mat_<double>(4,3) <<
        1, 0, -w/2,
        0, 1, -h/2,
        0, 0, 0,
        0, 0, 1);
    // Rotation matrices around the X, Y, Z axes
    Mat RX = (Mat_<double>(4, 4) <<
        1, 0, 0, 0,
        0, cos(alpha), -sin(alpha), 0,
        0, sin(alpha), cos(alpha), 0,
        0, 0, 0, 1);
    Mat RY = (Mat_<double>(4, 4) <<
        cos(beta), 0, -sin(beta), 0,
        0, 1, 0, 0,
        sin(beta), 0, cos(beta), 0,
        0, 0, 0, 1);
    Mat RZ = (Mat_<double>(4, 4) <<
        cos(gamma), -sin(gamma), 0, 0,
        sin(gamma), cos(gamma), 0, 0,
        0, 0, 1, 0,
        0, 0, 0, 1);
    // Composed rotation matrix from (RX, RY, RZ)
    Mat R = RX * RY * RZ;
    // Translation matrix on the Z axis; changing dist changes the height
    Mat T = (Mat_<double>(4, 4) <<
        1, 0, 0, 0,
        0, 1, 0, 0,
        0, 0, 1, dist,
        0, 0, 0, 1);
    // Camera intrinsics matrix 3D -> 2D
    Mat A2 = (Mat_<double>(3,4) <<
        f, 0, w/2, 0,
        0, f, h/2, 0,
        0, 0, 1, 0);
    // Final and overall transformation matrix
    Mat transfo = A2 * (T * (R * A1));
    // Apply matrix transformation
    warpPerspective(source, destination, transfo, taille, INTER_CUBIC | WARP_INVERSE_MAP);
    imshow(wndname2, destination);
    waitKey(30);
}
This code works for me, but I don't know why the roll and pitch angles are exchanged: when I change "alpha", the image is warped in pitch, and when I change "beta", the image is warped in roll. So I changed my rotation matrices, as can be seen below.
Also, RY has a sign error. You can check Ry at: http://en.wikipedia.org/wiki/Rotation_matrix.
The rotation matrices I use:
Mat RX = (Mat_<double>(4, 4) <<
    1, 0, 0, 0,
    0, cos(beta), -sin(beta), 0,
    0, sin(beta), cos(beta), 0,
    0, 0, 0, 1);
Mat RY = (Mat_<double>(4, 4) <<
    cos(alpha), 0, sin(alpha), 0,
    0, 1, 0, 0,
    -sin(alpha), 0, cos(alpha), 0,
    0, 0, 0, 1);
Mat RZ = (Mat_<double>(4, 4) <<
    cos(gamma), -sin(gamma), 0, 0,
    sin(gamma), cos(gamma), 0, 0,
    0, 0, 1, 0,
    0, 0, 0, 1);
Regards