Change DirectX Mesh Center - directx

I have created a mesh box in DirectX written in VB.NET. When I rotate and scale the mesh, it does so around its center.
How do I change the mesh center similar to the following image:
[Image: Mesh Center]

Translate the mesh so that the desired rotation/scaling center sits at the origin, apply the rotation/scaling, then translate it back, e.g.:
matWorld = Matrix.Identity
' Move the rotation/scaling center to the origin
matWorld = matWorld * Matrix.Translation(-0.1F, -0.2F, -0.3F)
' Apply your rotation/scaling
matWorld = matWorld * Matrix.RotationZ(0.01F)
' Move the mesh back to its original position
matWorld = matWorld * Matrix.Translation(0.1F, 0.2F, 0.3F)
' Assign matWorld as the world transformation matrix
device.Transform.World = matWorld
Note: I didn't test the above, so there may be syntactical issues.

Related

Rotating image with its bounding boxes yielding worse boxes at 45 degrees with opencv2 and numpy

I have some code, largely taken from various sources linked at the bottom of this post and written in Python, that takes an image of shape [height, width] and some bounding boxes in the [x_min, y_min, x_max, y_max] format, both numpy arrays, and rotates the image and its bounding boxes counterclockwise. Since after rotation the bounding box becomes more of a "diamond shape", i.e. not axis aligned, I then perform some calculations to make it axis aligned. The purpose of this code is to perform data augmentation for training an object detection neural network using rotated data (where flipping horizontally or vertically is common). Rotations at other angles seem common for image classification, without bounding boxes, but when there are boxes, the resources on how to rotate the boxes together with the images are relatively sparse/niche.
It seems that when I input an angle of 45 degrees, I get some less-than-"tight" bounding boxes: the four corners are not a very good annotation, whereas the original one was close to perfect.
The image shown below is the first image in the MS COCO 2014 object detection dataset (training image), and its first bounding box annotation. My code is as follows:
import math
import cv2
import numpy as np

# angle assumed to be in degrees
# bbs is a list of bounding boxes in x_min, y_min, x_max, y_max format
def rotateImageAndBoundingBoxes(im, bbs, angle):
    h, w = im.shape[0], im.shape[1]
    (cX, cY) = (w // 2, h // 2)  # original image center
    M = cv2.getRotationMatrix2D((cX, cY), angle, 1.0)  # 2 by 3 rotation matrix
    cos = np.abs(M[0, 0])
    sin = np.abs(M[0, 1])
    # compute the dimensions of the rotated image
    nW = int((h * sin) + (w * cos))
    nH = int((h * cos) + (w * sin))
    # adjust the rotation matrix to take into account translation of the new centre
    M[0, 2] += (nW / 2) - cX
    M[1, 2] += (nH / 2) - cY
    rotated_im = cv2.warpAffine(im, M, (nW, nH))
    rotated_bbs = []
    for bb in bbs:
        # get the four rotated corners of the bounding box
        vec1 = np.matmul(M, np.array([bb[0], bb[1], 1], dtype=np.float64))  # top left corner transformed
        vec2 = np.matmul(M, np.array([bb[2], bb[1], 1], dtype=np.float64))  # top right corner transformed
        vec3 = np.matmul(M, np.array([bb[0], bb[3], 1], dtype=np.float64))  # bottom left corner transformed
        vec4 = np.matmul(M, np.array([bb[2], bb[3], 1], dtype=np.float64))  # bottom right corner transformed
        x_vals = [vec1[0], vec2[0], vec3[0], vec4[0]]
        y_vals = [vec1[1], vec2[1], vec3[1], vec4[1]]
        x_min = math.ceil(np.min(x_vals))
        x_max = math.floor(np.max(x_vals))
        y_min = math.ceil(np.min(y_vals))
        y_max = math.floor(np.max(y_vals))
        bb = [x_min, y_min, x_max, y_max]
        rotated_bbs.append(bb)
    # my function to resize image and bbs to the original image size
    rotated_im, rotated_bbs = resizeImageAndBoxes(rotated_im, w, h, rotated_bbs)
    return rotated_im, rotated_bbs
The good bounding box looks like:
The not-so-good bounding box looks like:
I am trying to determine whether this is an error in my code or expected behavior. It seems like the problem is less apparent at integer multiples of pi/2 radians (90 degrees), but I would like to achieve tight bounding boxes at any angle of rotation. Any insights are appreciated.
Sources:
[OpenCV documentation] https://docs.opencv.org/3.4/da/d54/group__imgproc__transform.html#gafbbc470ce83812914a70abfb604f4326
[Data augmentation discussion] https://blog.paperspace.com/data-augmentation-for-object-detection-rotation-and-shearing/
[Mathematics of rotation around an arbitrary point in 2 dimensions] https://math.stackexchange.com/questions/2093314/rotation-matrix-of-rotation-around-a-point-other-than-the-origin
It seems that, for the most part, this is expected behavior, as per the comments. I do have a kind of hacky solution to this problem, where you can write a function like:
# assuming box coords = [x_min, y_min, x_max, y_max]
def cropBoxByPercentage(box_coords, image_width, image_height, x_percentage=0.05, y_percentage=0.05):
    box_xmin = box_coords[0]
    box_ymin = box_coords[1]
    box_xmax = box_coords[2]
    box_ymax = box_coords[3]
    box_width = box_xmax - box_xmin + 1
    box_height = box_ymax - box_ymin + 1
    dx = int(x_percentage * box_width)
    dy = int(y_percentage * box_height)
    box_xmin = max(0, box_xmin - dx)
    box_xmax = min(image_width - 1, box_xmax + dx)
    box_ymin = max(0, box_ymin - dy)
    box_ymax = min(image_height - 1, box_ymax + dy)
    return np.array([box_xmin, box_ymin, box_xmax, box_ymax])
Here x_percentage and y_percentage can be fixed values or can be computed using some heuristic.
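For example, applied to the output of rotateImageAndBoundingBoxes from the question (illustrative only; the 5% values are arbitrary, and im and bbs are assumed to be the inputs from the question):

rotated_im, rotated_bbs = rotateImageAndBoundingBoxes(im, bbs, 45)
h, w = rotated_im.shape[0], rotated_im.shape[1]
# adjust every rotated box by 5% of its own width/height, clamped to the image bounds
adjusted_bbs = [cropBoxByPercentage(bb, w, h, x_percentage=0.05, y_percentage=0.05)
                for bb in rotated_bbs]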

2D image coordinate to 3D world coordinate using Depth Map

Given the intrinsic and extrinsic matrices, how does one transform a 2D image coordinate into a 3D world coordinate using the depth map image? A similar problem was discussed here: 2D Coordinate to 3D world coordinate, but it assumes the images are rectified, which is not the case here. I am having trouble formulating the equation for this.
p_camera.x = (pixCoord.x - cx) / fx;
p_camera.y = (pixCoord.y - cy) / fy;
p_camera.z = depth;
p_camera.x = -p_camera.x*p_camera.z;
p_camera.y = -p_camera.y*p_camera.z;
P_world = R * (p_camera + T);
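For comparison, here is a minimal numpy sketch of the usual pinhole back-projection, assuming the depth map stores metric depth along the camera z axis and that the extrinsics [R|t] map world points into the camera frame (so they are inverted here); other conventions change the signs:

import numpy as np

def pixel_to_world(u, v, depth, fx, fy, cx, cy, R, t):
    # back-project the pixel (u, v) into the camera frame using the pinhole model
    z = depth[v, u]                      # depth sampled at the pixel (row v, column u)
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    p_camera = np.array([x, y, z])
    # if [R | t] maps world -> camera (p_camera = R @ p_world + t),
    # then inverting that mapping gives the world point:
    p_world = R.T @ (p_camera - t)
    return p_world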

Angles of rotation matrix using OpenCv function cvPosit

I'm working on a 3D pose estimation system. I used OpenCV's function cvPOSIT to calculate the rotation matrix and the translation vector.
I also need the angles of the rotation matrix, but no algorithms seem to be working.
The function cv::RQDecomp3x3(), which was the answer in the topic "in opencv: how to get yaw, roll, pitch from POSIT rotation matrix", does not work here, because it needs the 3x3 part of the projection matrix.
Furthermore I tried to use algorithms from the links below, but nothing worked.
visionopen.com/cv/vosm/doc/html/recognitionalgs_8cpp_source.html
stackoverflow.com/questions/16266740/in-opencv-how-to-get-yaw-roll-pitch-from-posit-rotation-matrix
quad08pyro.groups.et.byu.net/vision.htm
stackoverflow.com/questions/13565625/opencv-c-posit-why-are-my-values-always-nan-with-small-focal-lenght
www.c-plusplus.de/forum/308773-full
I used the most common POSIT tutorial and my own example with Blender, so I could render an image to retrieve the image points and know the exact angles. The object's Z axis in Blender was rotated by 10 degrees, and I checked the angles of all 3 axes to account for the change of axes between Blender and OpenCV.
double focalLength = 700.0;
CvPOSITObject* positObject;
std::vector<CvPoint3D32f> modelPoints;
modelPoints.push_back(cvPoint3D32f(0.0f, 0.0f, 0.0f));
modelPoints.push_back(cvPoint3D32f(CUBE_SIZE, 0.0f, 0.0f));
modelPoints.push_back(cvPoint3D32f(0.0f, CUBE_SIZE, 0.0f));
modelPoints.push_back(cvPoint3D32f(0.0f, 0.0f, CUBE_SIZE));
std::vector<CvPoint2D32f> imagePoints;
imagePoints.push_back(cvPoint2D32f(157, 372));
imagePoints.push_back(cvPoint2D32f(423, 386));
imagePoints.push_back(cvPoint2D32f(157, 108));
imagePoints.push_back(cvPoint2D32f(250, 337));
// Moving the points to the image center as described in the tutorial
for (int i = 0; i < imagePoints.size(); i++) {
    imagePoints[i] = cvPoint2D32f(imagePoints[i].x - 320, 240 - imagePoints[i].y);
}
CvVect32f translation_vector = new float[3];
CvTermCriteria criteria = cvTermCriteria(CV_TERMCRIT_EPS | CV_TERMCRIT_ITER,iterations, 0.1f);
positObject = cvCreatePOSITObject( &modelPoints[0], static_cast<int>(modelPoints.size()));
CvMatr32f rotation_matrix = new float[9];
cvPOSIT( positObject, &imagePoints[0], focalLength, criteria, rotation_matrix, translation_vector );
algorithms to get angles...
I already tried converting the results from radians to degrees, and reversing the direction of rotation, but I still get bad results using the rotation matrix from OpenCV's cvPOSIT. I also changed the matrix layout to rule out wrong formatting...
With simple rotation matrices - for example, a rotation around only the x, y, or z axis - some algorithms worked. The rotation matrix from cvPOSIT did not work with those algorithms.
I appreciate any support.
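For reference, the kind of decomposition being attempted is a standard ZYX (yaw-pitch-roll) extraction, sketched below in Python for brevity; it assumes a proper 3x3 rotation matrix and matching axis conventions, which is exactly what may be breaking down here:

import math

def rotation_matrix_to_euler_zyx(R):
    # assumes R is a proper 3x3 rotation matrix (indexable as R[row][col]);
    # returns (roll, pitch, yaw) in radians for R = Rz(yaw) * Ry(pitch) * Rx(roll)
    pitch = math.atan2(-R[2][0], math.sqrt(R[2][1] ** 2 + R[2][2] ** 2))
    roll = math.atan2(R[2][1], R[2][2])
    yaw = math.atan2(R[1][0], R[0][0])
    # note: near pitch = +/-90 degrees (gimbal lock) roll and yaw are not unique
    return roll, pitch, yaw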

OpenCV: rotation/translation vector to OpenGL modelview matrix

I'm trying to use OpenCV to do some basic augmented reality. The way I'm going about it is using findChessboardCorners to get a set of points from a camera image. Then, I create a 3D quad along the z = 0 plane and use solvePnP to get a homography between the imaged points and the planar points. From that, I figure I should be able to set up a modelview matrix which will allow me to render a cube with the right pose on top of the image.
The documentation for solvePnP says that it outputs a rotation vector "that (together with [the translation vector] ) brings points from the model coordinate system to the camera coordinate system." I think that's the opposite of what I want; since my quad is on the plane z = 0, I want a a modelview matrix which will transform that quad to the appropriate 3D plane.
I thought that by performing the opposite rotations and translations in the opposite order I could calculate the correct modelview matrix, but that seems not to work. While the rendered object (a cube) does move with the camera image and seems to be roughly correct translationally, the rotation just doesn't work at all; it rotates on multiple axes when it should only be rotating on one, and sometimes in the wrong direction. Here's what I'm doing so far:
std::vector<Point2f> corners;
bool found = findChessboardCorners(*_imageBuffer, cv::Size(5,4), corners,
                                   CV_CALIB_CB_FILTER_QUADS |
                                   CV_CALIB_CB_FAST_CHECK);
if(found)
{
    drawChessboardCorners(*_imageBuffer, cv::Size(6, 5), corners, found);

    std::vector<double> distortionCoefficients(5);  // camera distortion
    distortionCoefficients[0] =  0.070969;
    distortionCoefficients[1] =  0.777647;
    distortionCoefficients[2] = -0.009131;
    distortionCoefficients[3] = -0.013867;
    distortionCoefficients[4] = -5.141519;

    // Since the image was resized, we need to scale the found corner points
    float sw = _width / SMALL_WIDTH;
    float sh = _height / SMALL_HEIGHT;
    std::vector<Point2f> board_verts;
    board_verts.push_back(Point2f(corners[0].x * sw, corners[0].y * sh));
    board_verts.push_back(Point2f(corners[15].x * sw, corners[15].y * sh));
    board_verts.push_back(Point2f(corners[19].x * sw, corners[19].y * sh));
    board_verts.push_back(Point2f(corners[4].x * sw, corners[4].y * sh));
    Mat boardMat(board_verts);

    std::vector<Point3f> square_verts;
    square_verts.push_back(Point3f(-1,  1, 0));
    square_verts.push_back(Point3f(-1, -1, 0));
    square_verts.push_back(Point3f( 1, -1, 0));
    square_verts.push_back(Point3f( 1,  1, 0));
    Mat squareMat(square_verts);

    // Transform the camera's intrinsic parameters into an OpenGL camera matrix
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();

    // Camera parameters
    double f_x = 786.42938232;  // Focal length in x axis
    double f_y = 786.42938232;  // Focal length in y axis (usually the same?)
    double c_x = 217.01358032;  // Camera primary point x
    double c_y = 311.25384521;  // Camera primary point y

    cv::Mat cameraMatrix(3,3,CV_32FC1);
    cameraMatrix.at<float>(0,0) = f_x;
    cameraMatrix.at<float>(0,1) = 0.0;
    cameraMatrix.at<float>(0,2) = c_x;
    cameraMatrix.at<float>(1,0) = 0.0;
    cameraMatrix.at<float>(1,1) = f_y;
    cameraMatrix.at<float>(1,2) = c_y;
    cameraMatrix.at<float>(2,0) = 0.0;
    cameraMatrix.at<float>(2,1) = 0.0;
    cameraMatrix.at<float>(2,2) = 1.0;

    Mat rvec(3, 1, CV_32F), tvec(3, 1, CV_32F);
    solvePnP(squareMat, boardMat, cameraMatrix, distortionCoefficients,
             rvec, tvec);

    _rv[0] = rvec.at<double>(0, 0);
    _rv[1] = rvec.at<double>(1, 0);
    _rv[2] = rvec.at<double>(2, 0);
    _tv[0] = tvec.at<double>(0, 0);
    _tv[1] = tvec.at<double>(1, 0);
    _tv[2] = tvec.at<double>(2, 0);
}
Then in the drawing code...
GLKMatrix4 modelViewMatrix = GLKMatrix4MakeTranslation(0.0f, 0.0f, 0.0f);
modelViewMatrix = GLKMatrix4Translate(modelViewMatrix, -tv[1], -tv[0], -tv[2]);
modelViewMatrix = GLKMatrix4Rotate(modelViewMatrix, -rv[0], 1.0f, 0.0f, 0.0f);
modelViewMatrix = GLKMatrix4Rotate(modelViewMatrix, -rv[1], 0.0f, 1.0f, 0.0f);
modelViewMatrix = GLKMatrix4Rotate(modelViewMatrix, -rv[2], 0.0f, 0.0f, 1.0f);
The vertices I'm rendering form a cube of unit length around the origin (i.e. from -0.5 to 0.5 along each edge). I know that OpenGL's matrix functions apply transformations in "reverse order," so the above should rotate the cube around the z, y, and then x axes, and then translate it. However, it seems like it's being translated first and then rotated, so perhaps Apple's GLKMatrix4 works differently?
This question seems very similar to mine, and in particular coder9's answer seems like it might be more or less what I'm looking for. However, I tried it and compared the results to my method, and the matrices I arrived at in both cases were the same. I feel like that answer is right, but that I'm missing some crucial detail.
You have to make sure the axes are facing the correct direction. In particular, the y and z axes face different directions in OpenGL and OpenCV so that the x-y-z basis stays direct (right-handed). You can find some information and code (with an iPad camera) in this blog post.
-- Edit --
Ah, OK. Unfortunately, I used these resources to do it the other way round (OpenGL -> OpenCV) to test some algorithms. My main issue was that the row order of the images is inverted between OpenGL and OpenCV (maybe this helps).
When simulating cameras, I came across the same projection matrices that can be found here and in the generalized projection matrix paper. This paper, quoted in the comments of the blog post, also shows the link between computer vision and OpenGL projections.
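Roughly, the conversion I have in mind looks like the sketch below (Python for brevity, and only a sketch; the essential part is flipping the y and z rows, and the exact signs depend on your conventions):

import cv2
import numpy as np

def modelview_from_solvepnp(rvec, tvec):
    # 3x3 rotation from the Rodrigues rotation vector returned by solvePnP
    R, _ = cv2.Rodrigues(rvec)
    view = np.eye(4)
    view[:3, :3] = R
    view[:3, 3] = np.asarray(tvec).reshape(3)
    # OpenCV camera: x right, y down, z forward; OpenGL: x right, y up, z towards viewer.
    # Flipping the y and z rows converts between the two.
    cv_to_gl = np.diag([1.0, -1.0, -1.0, 1.0])
    modelview = cv_to_gl @ view
    # transpose to column-major order before handing the matrix to OpenGL
    return modelview.T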
I'm not an iOS programmer, so this answer might be misleading!
If the problem is not in the order of applying the rotations and the translation, then I suggest using a simpler and more commonly used coordinate system.
The points in the corners vector have the origin (0,0) at the top-left corner of the image, with the y axis pointing towards the bottom of the image. In math we are often used to thinking of the coordinate system with the origin at the center and the y axis pointing towards the top of the image. From the coordinates you're pushing into board_verts, I'm guessing you're making the same mistake. If that's the case, it's easy to transform the positions of the corners with something like this:
for (int i = 0; i < corners.size(); i++) {
    corners[i].x -= width/2;
    corners[i].y = -corners[i].y + height/2;
}
Then you call solvePnP(). Debugging this is not that difficult: just print the positions of the four corners and the estimated R and T, and see if they make sense. Then you can proceed to the OpenGL step. Please let me know how it goes.

Image transformation matrix in opencv

I'm currently working on this [opencv sample]
The interesting part is at line 89, the warpPerspectiveRand method. I want to set the rotation angle, translation, scaling, and other transformation values manually instead of using randomly generated values, but I don't know how to calculate the matrix elements.
A simple calculation example would be helpful.
Thanks
double ang = 0.1;
double xscale = 1.2;
double yscale = 1.5;
double xTranslation = 100;
double yTranslation = 200;

cv::Mat t(3,3,CV_64F);
t = 0;  // zero all elements
// scale first, then rotate by ang (radians), then translate
t.at<double>(0,0) =  xscale*cos(ang);
t.at<double>(0,1) = -yscale*sin(ang);
t.at<double>(1,0) =  xscale*sin(ang);
t.at<double>(1,1) =  yscale*cos(ang);
t.at<double>(0,2) =  xTranslation;
t.at<double>(1,2) =  yTranslation;
t.at<double>(2,2) =  1;
EDIT:
Rotation is always around (0,0). If you would like to rotate around a different point, you need to translate (move), rotate, and move back. This can be done by creating two matrices, one for rotation (A) and one for translation (T), and building a new matrix M as:
M = inv(T) * A * T
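As a concrete sketch of that composition (numpy here, with made-up values for the angle and the pivot point):

import numpy as np

ang = 0.1                      # rotation angle in radians (illustrative)
px, py = 100.0, 50.0           # pivot point to rotate around (illustrative)

T = np.array([[1.0, 0.0, -px],
              [0.0, 1.0, -py],
              [0.0, 0.0, 1.0]])                     # move the pivot to the origin
A = np.array([[np.cos(ang), -np.sin(ang), 0.0],
              [np.sin(ang),  np.cos(ang), 0.0],
              [0.0,          0.0,         1.0]])    # rotate around the origin
M = np.linalg.inv(T) @ A @ T                        # M = inv(T) * A * T: rotate around (px, py)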
What you're looking for is a projection matrix:
http://en.wikipedia.org/wiki/3D_projection
There are different matrix styles; some of them are 4x4 (the complete theoretical projection matrix), and some are 3x3 (as in OpenCV), because they consider the projection as a transform from one planar surface to another planar surface, and this constraint allows one to express the transform with a 3x3 matrix.
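As a small illustration of that plane-to-plane view (a sketch with made-up numbers), a 3x3 matrix is applied to 2D points and the result is divided by the third homogeneous coordinate, which is what cv::perspectiveTransform does:

import cv2
import numpy as np

# an illustrative 3x3 homography (plane-to-plane transform)
H = np.array([[1.2,   0.1, 30.0],
              [0.0,   0.9, 10.0],
              [0.001, 0.0,  1.0]])
pts = np.array([[[0.0, 0.0]], [[100.0, 0.0]], [[100.0, 50.0]]])  # shape (N, 1, 2)
mapped = cv2.perspectiveTransform(pts, H)  # applies H, then divides by the third coordinate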
