How to project a point defined in real-world coordinates to the image plane and vice versa? - image-processing

I prepared a toy experiment to project a point defined in the world frame onto the image plane, and then to recover the 3D point (inverse projection) from the calculated pixel coordinates. I use the same coordinate frames as the figure here [https://www.researchgate.net/figure/World-and-camera-frame-definition_fig1_224318711] (world x,y,z -> camera z,-x,-y), and I will later project the same point into different image frames. The problem is that the 3D point defined at (8, -2, 1) is recovered at (6.29, -1.60, 0.7). Since there should be no loss of information, I expect to get the same point back. I believe there is a problem with the depth, but I can't find what I'm missing.
import numpy as np
#3d point at 8 -2 1 wrt world frame
P_world = np.array([8, -2, 1, 1]).reshape((4,1))
# axis remapping from world to camera frame (world x,y,z -> camera z,-x,-y)
T_wc = np.array([
[ 0, -1, 0, 0],
[ 0, 0, -1, 0],
[ 1, 0, 0, 0],
[ 0, 0, 0, 1]])
# camera 0 and camera 1 poses: positions in the world frame
pose0 = np.eye(4)
pose0[:3,-1] = [1, 0, .6]
pose0 = np.matmul(T_wc, pose0)
pose1 = np.eye(4)
pose1[:3,-1] = [3, 0, .6]
pose1 = np.matmul(T_wc, pose1)
# Euclidean distance from each camera position to the 3D point
depth0 = np.linalg.norm(P_world[:3].flatten() - np.array([1, 0, .6]).flatten())
depth1 = np.linalg.norm(P_world[:3].flatten() - np.array([3, 0, .6]).flatten())
# camera intrinsics
K = np.array([
[173.0, 0 , 173.0],
[ 0 , 173.0, 130.0],
[ 0 , 0 , 1 ]])
uv1 = np.matmul(np.matmul(K, pose0[:3]), P_world)
uv1 = (uv1 / uv1[-1])[:2]
uv2 = np.matmul(np.matmul(K, pose1[:3]), P_world)
uv2 = (uv2 / uv2[-1])[:2]
img0 = np.zeros((260,346))
img0[int(uv1[0]), int(uv1[1])] = 1
img1 = np.zeros((260,346))
img1[int(uv2[0]), int(uv2[1])] = 1
#%% Inverse projection: recover the 3D point from the pixel in image 0
pix_coord = np.array([int(uv1[0]), int(uv1[1]), 1])
pt_infilm = np.matmul(np.linalg.inv(K), pix_coord.reshape(3,1))
pt_incam = depth0*pt_infilm
pt_incam_hom = np.append(pt_incam, 1)
pt_inworld = np.matmul(np.linalg.inv(pose0), pt_incam_hom)
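For comparison, here is a minimal sketch of the same round trip under standard pinhole conventions, assuming the extrinsic transform maps world points into the camera frame and the back-projection depth is taken as the z-coordinate of the point in the camera frame (the distance along the optical axis) rather than the Euclidean distance to the camera centre:
import numpy as np

K = np.array([[173.0,   0.0, 173.0],
              [  0.0, 173.0, 130.0],
              [  0.0,   0.0,   1.0]])
# a world point (x, y, z) becomes (-y, -z, x) in the camera frame, matching T_wc above
R_wc = np.array([[0., -1., 0.],
                 [0., 0., -1.],
                 [1., 0., 0.]])
C = np.array([1.0, 0.0, 0.6])            # camera centre in world coordinates
P_world = np.array([8.0, -2.0, 1.0])

# forward projection: world -> camera -> pixel
P_cam = R_wc @ (P_world - C)
uv_hom = K @ P_cam
uv = uv_hom[:2] / uv_hom[2]
z_depth = P_cam[2]                       # depth along the optical axis

# back-projection: pixel -> camera -> world
ray = np.linalg.inv(K) @ np.array([uv[0], uv[1], 1.0])
P_cam_back = z_depth * ray
P_world_back = R_wc.T @ P_cam_back + C   # recovers (8, -2, 1)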

Related

Using warpPerspective to simulate virtual camera issues

Apologies if this seems trivial; I'm relatively new to OpenCV.
Essentially, I'm trying to create a function that takes in a camera's image, the known world coordinates of that camera, and the world coordinates of some other point 2, and then transforms the camera's image into what it would look like if the camera were at point 2. From my understanding, the best way to tackle this is a homography transformation using the warpPerspective tool.
The experiment is being done inside the Unreal game simulation engine. Right now, I essentially read the data from the camera and apply a fixed transformation to the image. However, I seem to be doing something wrong, as the result looks like this (original image first, then distorted image):
Original Image
Distorted Image
This is the current code I have. Basically, it reads the texture from the Unreal engine, gets the individual pixel values, and puts them into the OpenCV Mat. Then I try to apply my warpPerspective transformation. Interestingly, if I just try a simple warpAffine transformation (a rotation), it works fine. I have seen this question: Opencv virtually camera rotating/translating for bird's eye view, but I cannot figure out what I am doing wrong compared to their solution. I would really appreciate any help or guidance any of you may have. Thanks in advance!
ROSCamTextureRenderTargetRes->ReadPixels(ImageData);
cv::Mat image_data_matrix(TexHeight, TexWidth, CV_8UC3);
cv::Mat warp_dst, warp_rotate_dst;
int currCol = 0;
int currRow = 0;
cv::Vec3b* pixel_left = image_data_matrix.ptr<cv::Vec3b>(currRow);
for (auto color : ImageData)
{
pixel_left[currCol][2] = color.R;
pixel_left[currCol][1] = color.G;
pixel_left[currCol][0] = color.B;
currCol++;
if (currCol == TexWidth)
{
currRow++;
currCol = 0;
pixel_left = image_data_matrix.ptr<cv::Vec3b>(currRow);
}
}
warp_dst = cv::Mat(image_data_matrix.rows, image_data_matrix.cols, image_data_matrix.type());
double rotX = (45 - 90)*PI / 180;
double rotY = (90 - 90)*PI / 180;
double rotZ = (90 - 90)*PI / 180;
cv::Mat A1 = (cv::Mat_<float>(4, 3) <<
1, 0, (-1)*TexWidth / 2,
0, 1, (-1)*TexHeight / 2,
0, 0, 0,
0, 0, 1);
// Rotation matrices Rx, Ry, Rz
cv::Mat RX = (cv::Mat_<float>(4, 4) <<
1, 0, 0, 0,
0, cos(rotX), (-1)*sin(rotX), 0,
0, sin(rotX), cos(rotX), 0,
0, 0, 0, 1);
cv::Mat RY = (cv::Mat_<float>(4, 4) <<
cos(rotY), 0, (-1)*sin(rotY), 0,
0, 1, 0, 0,
sin(rotY), 0, cos(rotY), 0,
0, 0, 0, 1);
cv::Mat RZ = (cv::Mat_<float>(4, 4) <<
cos(rotZ), (-1)*sin(rotZ), 0, 0,
sin(rotZ), cos(rotZ), 0, 0,
0, 0, 1, 0,
0, 0, 0, 1);
// R - rotation matrix
cv::Mat R = RX * RY * RZ;
// T - translation matrix
cv::Mat T = (cv::Mat_<float>(4, 4) <<
1, 0, 0, 0,
0, 1, 0, 0,
0, 0, 1, dist,
0, 0, 0, 1);
// K - intrinsic matrix
cv::Mat K = (cv::Mat_<float>(3, 4) <<
12.5, 0, TexHeight / 2, 0,
0, 12.5, TexWidth / 2, 0,
0, 0, 1, 0
);
cv::Mat warp_mat = K * (T * (R * A1));
//warp_mat = cv::getRotationMatrix2D(srcTri[0], 43.0, 1);
//cv::warpAffine(image_data_matrix, warp_dst, warp_mat, warp_dst.size());
cv::warpPerspective(image_data_matrix, warp_dst, warp_mat, image_data_matrix.size(), CV_INTER_CUBIC | CV_WARP_INVERSE_MAP);
cv::imshow("distort", warp_dst);
cv::imshow("image", image_data_matrix);

Buggy CATransform3D?

EDITED
(the example contained a mistake, so I replaced it with another one)
The following code is just an example of how it works:
CATransform3D temp = CATransform3DIdentity;
temp.m34 = -0.002;
temp = CATransform3DTranslate(temp, 0, -230, 0);
temp = CATransform3DRotate(temp, -M_PI / 5, 1, 0, 0);
temp = CATransform3DTranslate(temp, 0, 230, 0);
The output before the last line of code:
(lldb) po temp
(m11 = 1, m12 = 0, m13 = 0, m14 = 0, m21 = 0, m22 = 0.809017002, m23 = -0.587785244, m24 = 0.00117557053, m31 = 0, m32 = 0.587785244, m33 = 0.809017002, m34 = -0.00161803409, m41 = 0, m42 = -230, m43 = 0, m44 = 1)
The output after the last line of code:
(lldb) po temp
(m11 = 1, m12 = 0, m13 = 0, m14 = 0, m21 = 0, m22 = 0.809017002, m23 = -0.587785244, m24 = 0.00117557053, m31 = 0, m32 = 0.587785244, m33 = 0.809017002, m34 = -0.00161803409, m41 = 0, m42 = -43.9260902, m43 = -135.190613, m44 = 1.27038121)
What the ...? The last line of code should do nothing, because it is E (it is the identity matrix, and multiplication with it should return the same result), but it has changed even the m44 element, which must always be equal to 1.
And even if this matrix performs the same calculations as the correct one, I can't, for example, simply read back a transform value that is stored in it.
Could anybody suggest how to generate correct matrices (m44 == 1) other than multiplying them manually?
It is unclear what you're trying to do. You create a CATransform3D called temp, but on the third line of your code you are using a variable called transform which is not mentioned elsewhere in your 3 lines of code. Maybe you meant to use temp instead.
I have found the reason for the problem. Everything is correct, because in CATransform3D all the transform matrices are transposed (a row-vector convention, with the translation in the last row), whereas I had assumed the usual column-vector layout.
So m44 != 1 because m34 is not the z coordinate or any other kind of translation; it is the "perspective" entry, which influences the m44 value. What also confused me is that m44 == 1 until the last line of code.
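The same behaviour can be checked with plain matrix maths. A small numpy sketch, assuming CATransform3D's row-vector layout (points as row vectors, translation in the last row, perspective entry at m34) and assuming each Translate/Rotate call applies the new operation to points first; it reproduces the two printouts above:
import numpy as np

def translate(tx, ty, tz):
    M = np.eye(4)
    M[3, :3] = [tx, ty, tz]     # row-vector convention: translation in the last row
    return M

def rotate_x(angle):
    c, s = np.cos(angle), np.sin(angle)
    # transpose of the usual column-vector rotation about X
    return np.array([[1., 0., 0., 0.],
                     [0.,  c,  s, 0.],
                     [0., -s,  c, 0.],
                     [0., 0., 0., 1.]])

temp = np.eye(4)
temp[2, 3] = -0.002                     # m34: the perspective entry, not a translation

temp = translate(0, -230, 0) @ temp
temp = rotate_x(-np.pi / 5) @ temp      # matches the first printout
temp = translate(0, 230, 0) @ temp      # matches the second printout

print(temp[3])   # last row m41..m44: approximately [0, -43.93, -135.19, 1.27]
# m44 != 1 because the translation multiplies into the perspective entry m34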

Compute homography for a virtual camera with opencv

I have an image of a planar surface, and I want to compute an image warping that gives me a synthetic view of the same planar surface seen from a virtual camera located at another point in the 3d space.
So, given an image I1 I want to compute an image I2 that represents the image I1 seen from a virtual camera.
In theory, there exists an homography that relates these two images.
How do I compute this homography given the camera pose of the virtual camera, as well as its matrix of internal parameters?
I'm using opencv's warpPerspective() function to apply this homography and generate the image warped.
Thanks in advance.
Ok, found this post (Opencv virtually camera rotating/translating for bird's eye view), where I found some code doing what I needed.
However, I noticed that the rotation in Y had a sign error (-sin instead of sin). Here's my solution adapted for Python. I'm new to Python, sorry if I'm doing something ugly.
import cv2
import numpy as np
rotXdeg = 90
rotYdeg = 90
rotZdeg = 90
f = 500
dist = 500
def onRotXChange(val):
    global rotXdeg
    rotXdeg = val

def onRotYChange(val):
    global rotYdeg
    rotYdeg = val

def onRotZChange(val):
    global rotZdeg
    rotZdeg = val

def onFchange(val):
    global f
    f = val

def onDistChange(val):
    global dist
    dist = val

if __name__ == '__main__':

    # Read input image, and create output image
    src = cv2.imread('/home/miquel/image.jpeg')
    dst = np.ndarray(shape=src.shape, dtype=src.dtype)

    # Create user interface with trackbars that allow modifying the parameters of the transformation
    wndname1 = "Source:"
    wndname2 = "WarpPerspective: "
    cv2.namedWindow(wndname1, 1)
    cv2.namedWindow(wndname2, 1)
    cv2.createTrackbar("Rotation X", wndname2, rotXdeg, 180, onRotXChange)
    cv2.createTrackbar("Rotation Y", wndname2, rotYdeg, 180, onRotYChange)
    cv2.createTrackbar("Rotation Z", wndname2, rotZdeg, 180, onRotZChange)
    cv2.createTrackbar("f", wndname2, f, 2000, onFchange)
    cv2.createTrackbar("Distance", wndname2, dist, 2000, onDistChange)

    # Show original image
    cv2.imshow(wndname1, src)

    h, w = src.shape[:2]

    while True:
        rotX = (rotXdeg - 90) * np.pi / 180
        rotY = (rotYdeg - 90) * np.pi / 180
        rotZ = (rotZdeg - 90) * np.pi / 180

        # Projection 2D -> 3D matrix
        A1 = np.matrix([[1, 0, -w/2],
                        [0, 1, -h/2],
                        [0, 0, 0],
                        [0, 0, 1]])

        # Rotation matrices around the X, Y, Z axis
        RX = np.matrix([[1, 0, 0, 0],
                        [0, np.cos(rotX), -np.sin(rotX), 0],
                        [0, np.sin(rotX), np.cos(rotX), 0],
                        [0, 0, 0, 1]])
        RY = np.matrix([[np.cos(rotY), 0, np.sin(rotY), 0],
                        [0, 1, 0, 0],
                        [-np.sin(rotY), 0, np.cos(rotY), 0],
                        [0, 0, 0, 1]])
        RZ = np.matrix([[np.cos(rotZ), -np.sin(rotZ), 0, 0],
                        [np.sin(rotZ), np.cos(rotZ), 0, 0],
                        [0, 0, 1, 0],
                        [0, 0, 0, 1]])

        # Composed rotation matrix with (RX, RY, RZ)
        R = RX * RY * RZ

        # Translation matrix on the Z axis; changing dist will change the height
        T = np.matrix([[1, 0, 0, 0],
                       [0, 1, 0, 0],
                       [0, 0, 1, dist],
                       [0, 0, 0, 1]])

        # Camera intrinsics matrix 3D -> 2D
        A2 = np.matrix([[f, 0, w/2, 0],
                        [0, f, h/2, 0],
                        [0, 0, 1, 0]])

        # Final and overall transformation matrix
        H = A2 * (T * (R * A1))

        # Apply matrix transformation
        cv2.warpPerspective(src, H, (w, h), dst, cv2.INTER_CUBIC)

        # Show the image
        cv2.imshow(wndname2, dst)
        cv2.waitKey(1)

Translating and Rotating an Image in 3D using OpenCV

Given a 3 x 3 rotation matrix, R, and a 3 x 1 translation vector, T, I am wondering how to apply the T and R matrices to an image.
Let's say the IplImage img is 640 x 480.
What I want to do is R*(T*img).
I was thinking of using cvGemm, but that didn't work.
The function you are searching for is probably warpPerspective(): this is a use case...
// Projection 2D -> 3D matrix
Mat A1 = (Mat_<double>(4,3) <<
1, 0, -w/2,
0, 1, -h/2,
0, 0, 0,
0, 0, 1);
// Rotation matrices around the X axis
Mat R = (Mat_<double>(4, 4) <<
1, 0, 0, 0,
0, cos(alpha), -sin(alpha), 0,
0, sin(alpha), cos(alpha), 0,
0, 0, 0, 1);
// Translation matrix on the Z axis
Mat T = (Mat_<double>(4, 4) <<
1, 0, 0, 0,
0, 1, 0, 0,
0, 0, 1, dist,
0, 0, 0, 1);
// Camera Intrisecs matrix 3D -> 2D
Mat A2 = (Mat_<double>(3,4) <<
f, 0, w/2, 0,
0, f, h/2, 0,
0, 0, 1, 0);
Mat transfo = A2 * (T * (R * A1));
Mat source;
Mat destination;
warpPerspective(source, destination, transfo, source.size(), INTER_CUBIC | WARP_INVERSE_MAP);
I hope it could help you,
Julien
PS: I gave the example with a projection from 2D to 3D, but you can directly use transfo = T * R;

Opencv virtually camera rotating/translating for bird's eye view

I have a calibrated camera for which I know the intrinsic and extrinsic data exactly. The height of the camera is also known. Now I want to virtually rotate the camera to get a bird's eye view, so that I can build the homography matrix from the three rotation angles and the translation.
I know that a point can be transformed from one image to another via a homography as
x = K * (R - t*n^T/d) * K^-1 * x'
There are a few things I'd like to know now:
If I want to bring the image coordinate back into the camera coordinate system (ccs), I have to multiply it by K^-1, right? And as the image coordinate I use (x', y', 1)?
Then I need to build a rotation matrix for rotating the ccs... but which convention should I use? And how do I know how to set up my WCS?
The next thing is the normal and the distance. Is it right to just take three points lying on the ground and compute the normal from them? And is the distance then the camera height?
I'd also like to know how I can change the height of the virtual bird's-eye-view camera, so that I can say I want to see the ground plane from a height of 3 meters. How can I use the unit "meter" in the translation and the homography matrix?
That's it for now; it would be great if someone could enlighten and help me. And please don't suggest generating the bird's eye view with "getperspective", I've already tried that but this approach is not suitable for me.
Senna
That is the code I would advise (it's one of mine); to my mind it answers a lot of your questions.
If you want the distance, I would point out that it is in the T matrix (the translation along Z), the dist coefficient.
Hope it will help you...
Mat source=imread("Whatyouwant.jpg");
int alpha_=90., beta_=90., gamma_=90.;
int f_ = 500, dist_ = 500;
Mat destination;
string wndname1 = getFormatWindowName("Source: ");
string wndname2 = getFormatWindowName("WarpPerspective: ");
string tbarname1 = "Alpha";
string tbarname2 = "Beta";
string tbarname3 = "Gamma";
string tbarname4 = "f";
string tbarname5 = "Distance";
namedWindow(wndname1, 1);
namedWindow(wndname2, 1);
createTrackbar(tbarname1, wndname2, &alpha_, 180);
createTrackbar(tbarname2, wndname2, &beta_, 180);
createTrackbar(tbarname3, wndname2, &gamma_, 180);
createTrackbar(tbarname4, wndname2, &f_, 2000);
createTrackbar(tbarname5, wndname2, &dist_, 2000);
imshow(wndname1, source);
while(true) {
double f, dist;
double alpha, beta, gamma;
alpha = ((double)alpha_ - 90.)*PI/180;
beta = ((double)beta_ - 90.)*PI/180;
gamma = ((double)gamma_ - 90.)*PI/180;
f = (double) f_;
dist = (double) dist_;
Size taille = source.size();
double w = (double)taille.width, h = (double)taille.height;
// Projection 2D -> 3D matrix
Mat A1 = (Mat_<double>(4,3) <<
1, 0, -w/2,
0, 1, -h/2,
0, 0, 0,
0, 0, 1);
// Rotation matrices around the X,Y,Z axis
Mat RX = (Mat_<double>(4, 4) <<
1, 0, 0, 0,
0, cos(alpha), -sin(alpha), 0,
0, sin(alpha), cos(alpha), 0,
0, 0, 0, 1);
Mat RY = (Mat_<double>(4, 4) <<
cos(beta), 0, -sin(beta), 0,
0, 1, 0, 0,
sin(beta), 0, cos(beta), 0,
0, 0, 0, 1);
Mat RZ = (Mat_<double>(4, 4) <<
cos(gamma), -sin(gamma), 0, 0,
sin(gamma), cos(gamma), 0, 0,
0, 0, 1, 0,
0, 0, 0, 1);
// Composed rotation matrix with (RX,RY,RZ)
Mat R = RX * RY * RZ;
// Translation matrix on the Z axis change dist will change the height
Mat T = (Mat_<double>(4, 4) <<
1, 0, 0, 0,
0, 1, 0, 0,
0, 0, 1, dist,
0, 0, 0, 1);
// Camera Intrisecs matrix 3D -> 2D
Mat A2 = (Mat_<double>(3,4) <<
f, 0, w/2, 0,
0, f, h/2, 0,
0, 0, 1, 0);
// Final and overall transformation matrix
Mat transfo = A2 * (T * (R * A1));
// Apply matrix transformation
warpPerspective(source, destination, transfo, taille, INTER_CUBIC | WARP_INVERSE_MAP);
imshow(wndname2, destination);
waitKey(30);
}
This code works for me, but I don't know why the roll and pitch angles are exchanged. When I change "alpha" the image is warped in pitch, and when I change "beta" the image is warped in roll. So I changed my rotation matrices, as can be seen below.
Also, RY has a sign error. You can check Ry at: http://en.wikipedia.org/wiki/Rotation_matrix.
The rotation matrices I use:
Mat RX = (Mat_<double>(4, 4) <<
1, 0, 0, 0,
0, cos(beta), -sin(beta), 0,
0, sin(beta), cos(beta), 0,
0, 0, 0, 1);
Mat RY = (Mat_<double>(4, 4) <<
cos(alpha), 0, sin(alpha), 0,
0, 1, 0, 0,
-sin(alpha), 0, cos(alpha), 0,
0, 0, 0, 1);
Mat RZ = (Mat_<double>(4, 4) <<
cos(gamma), -sin(gamma), 0, 0,
sin(gamma), cos(gamma), 0, 0,
0, 0, 1, 0,
0, 0, 0, 1);
Regards
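For completeness, the homography from the question, x = K * (R - t*n^T/d) * K^-1 * x', can also be composed directly instead of being built from image-centred rotations as in the demo above. A minimal numpy sketch with placeholder intrinsics, rotation, translation and ground plane; all of the concrete values are assumptions for illustration, t and d must share the same metric unit (e.g. meters), and sign conventions for n and t vary between references:
import numpy as np

w, h = 640, 480                              # placeholder image size
K = np.array([[500.0,   0.0, w / 2],
              [  0.0, 500.0, h / 2],
              [  0.0,   0.0,   1.0]])        # placeholder intrinsics

angle = np.deg2rad(30)                       # tilt the virtual camera by 30 degrees about X
R = np.array([[1., 0., 0.],
              [0., np.cos(angle), -np.sin(angle)],
              [0., np.sin(angle),  np.cos(angle)]])
t = np.array([[0.0], [0.0], [1.0]])          # virtual-camera translation, in meters

n = np.array([[0.0], [1.0], [0.0]])          # ground-plane normal in the original camera frame
d = 3.0                                      # camera height above the plane, in meters

# plane-induced homography, as in the formula above
H = K @ (R - t @ n.T / d) @ np.linalg.inv(K)

# the warped view could then be rendered with, e.g.:
# import cv2
# dst = cv2.warpPerspective(src, H, (w, h))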
