XNA: drawing a cube loaded from an OBJ file

I exported a cube from Blender to an .obj file. It looks like this:
o Cube_Cube.001
v -1.000000 -1.000000 -1.000000
v 1.000000 -1.000000 -1.000000
v 1.000000 -1.000000 1.000000
v -1.000000 -1.000000 1.000000
v -1.000000 1.000000 -1.000000
v 1.000000 1.000000 -1.000000
v 1.000000 1.000000 1.000000
v -1.000000 1.000000 1.000000
vn 0.000000 0.000000 -1.000000
vn 1.000000 0.000000 0.000000
vn 0.000000 0.000000 1.000000
vn -1.000000 0.000000 0.000000
vn 0.000000 -1.000000 0.000000
vn 0.000000 1.000000 0.000000
usemtl None
s off
f 5//1 6//1 1//1
f 6//2 7//2 2//2
f 7//3 8//3 3//3
f 8//4 5//4 4//4
f 1//5 2//5 4//5
f 8//6 7//6 5//6
f 6//1 2//1 1//1
f 7//2 3//2 2//2
f 8//3 4//3 3//3
f 5//4 1//4 4//4
f 2//5 3//5 4//5
f 7//6 6//6 5//6
I load the vertices line by line and store them in an array. Then the indices (the lines starting with 'f', as I understand it) are stored in an Indicies array, giving something like {4, 5, 0, 6, 7 ...} (each decreased by 1, because OBJ indices are 1-based while the vertex array is 0-based), and finally I create an array of VertexPositionColor.
But when I draw the primitives, the cube is weirdly deformed.
Draw method:
GraphicsDevice.Clear(Color.CornflowerBlue);
GraphicsDevice.DepthStencilState = DepthStencilState.Default;
effect.World = world;
effect.View = view;
effect.Projection = projection;
effect.VertexColorEnabled = true;
RasterizerState rs = new RasterizerState();
rs.CullMode = CullMode.None;
rs.FillMode = FillMode.WireFrame;
GraphicsDevice.RasterizerState = rs;
foreach (EffectPass pass in effect.CurrentTechnique.Passes)
{
    pass.Apply();
    GraphicsDevice.DrawUserIndexedPrimitives(
        PrimitiveType.TriangleList,
        model.VertexPositionColors, 0, model.VertexPositionColors.Length,
        model.Indicies, 0, model.Indicies.Length / 3,
        model.VertexDeclarations);
}
Output: http://i.imgur.com/3pLj0U9.png
[SOLUTION]
Check what your import algorithm actually generates. My indices were messed up because of a bad iteration. My bad :)
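For reference, here is a minimal sketch of a face-line parser that produces correct zero-based indices. It is written in plain C++ for illustration (the original importer is C#, but the logic carries over directly), and "cube.obj" is a placeholder path:

#include <fstream>
#include <sstream>
#include <string>
#include <vector>

int main() {
    std::ifstream file("cube.obj");   // placeholder path
    std::vector<int> indices;
    std::string line;
    while (std::getline(file, line)) {
        if (line.rfind("f ", 0) != 0) continue;   // only face lines
        std::istringstream ss(line.substr(2));
        std::string token;                        // e.g. "5//1"
        while (ss >> token) {
            // The vertex index is everything before the first '/'.
            int v = std::stoi(token.substr(0, token.find('/')));
            // OBJ indices are 1-based; vertex buffers are 0-based.
            indices.push_back(v - 1);
        }
    }
    return 0;
}

Each face line contributes exactly three indices, so indices.size() / 3 is the primitive count passed to the draw call.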

I would recommend using FBX and the standard XNA content importer system. This streamlines the process so you don't even have to write an importer; you just point XNA at your content and can draw it in very few, very simple commands. Writing an importer yourself is a good exercise, but on the XNA platform it is not necessary.

Related

Eigen matrix product doesn't work correctly in ORB-SLAM3 in release mode on windows

The whole picture of the problem is as follows:
I cloned ORB-SLAM3 from https://github.com/rexdsp/ORB_SLAM3_Windows. It builds and runs well. Then I tried to evaluate its accuracy on EuRoC MH_01_easy, but found it was at least 10 times worse in release mode than the accuracy reported in the paper. After several days of debugging, I traced the cause to the function below:
void EdgeSE3ProjectXYZ::linearizeOplus() {
    g2o::VertexSE3Expmap *vj = static_cast<g2o::VertexSE3Expmap *>(_vertices[1]);
    g2o::SE3Quat T(vj->estimate());
    g2o::VertexSBAPointXYZ *vi = static_cast<g2o::VertexSBAPointXYZ *>(_vertices[0]);
    Eigen::Vector3d xyz = vi->estimate();
    Eigen::Vector3d xyz_trans = T.map(xyz);

    double x = xyz_trans[0];
    double y = xyz_trans[1];
    double z = xyz_trans[2];

    auto projectJac = -pCamera->projectJac(xyz_trans);

    _jacobianOplusXi = projectJac * T.rotation().toRotationMatrix();

    double *buf = (double *)_jacobianOplusXi.data();

    Eigen::Matrix<double, 3, 6> SE3deriv;
    SE3deriv << 0.f,  z,  -y, 1.f, 0.f, 0.f,
               -z,   0.f,  x, 0.f, 1.f, 0.f,
                y,  -x,  0.f, 0.f, 0.f, 1.f;

    buf = (double *)SE3deriv.data();
    printf("S: %f %f %f %f %f %f\n", buf[0], buf[3], buf[6], buf[9], buf[12], buf[15]);
    printf("S: %f %f %f %f %f %f\n", buf[1], buf[4], buf[7], buf[10], buf[13], buf[16]);
    printf("S: %f %f %f %f %f %f\n", buf[2], buf[5], buf[8], buf[11], buf[14], buf[17]);

    _jacobianOplusXj = projectJac * SE3deriv;

    buf = (double *)_jacobianOplusXj.data();
    printf("j: %f %f %f %f %f %f\n", buf[0], buf[2], buf[4], buf[6], buf[8], buf[10]);
    printf("j: %f %f %f %f %f %f\n", buf[1], buf[3], buf[5], buf[7], buf[9], buf[11]);
}
The function does not work correctly in release mode, as shown in the log below (Jac is the content of projectJac, printed in another function):
Jac: 6.586051 0.000000 5.789042
Jac: 0.000000 6.566550 -0.414389
S: 0.000000 69.640217 -4.394719 1.000000 0.000000 0.000000
S: -69.640217 0.000000 -61.212726 0.000000 1.000000 0.000000
S: 4.394719 61.212726 0.000000 0.000000 0.000000 1.000000
j: 25.478939 354.888530 0.000000 -0.000000 -0.000000 5.797627
j: -2.092870 -29.150968 0.000000 -0.000000 -0.000000 -0.476224
The whole system works correctly in Debug mode; the accuracy is similar to the results reported in the paper. The log of the function above is then as follows:
Jac: 6.586051 0.000000 5.789042
Jac: 0.000000 6.566550 -0.414389
S: 0.000000 69.640217 -4.394719 1.000000 0.000000 0.000000
S: -69.640217 0.000000 -61.212726 0.000000 1.000000 0.000000
S: 4.394719 61.212726 0.000000 0.000000 0.000000 1.000000
j: -25.441210 -813.017008 28.943840 -6.586051 -0.000000 -5.789042
j: 459.117113 25.365882 401.956448 0.000000 -6.566550 0.414389
Has anyone met this kind of problem before? Could you tell me the reason for it? I guess it is caused by compiler settings. Could anyone tell me how to solve this?
Thanks a lot.
Update: I changed the declaration of projectJac and the whole system now works correctly. My change is as follows:
original:
auto projectJac = -pCamera->projectJac(xyz_trans);
modification:
Eigen::Matrix<double, 2, 3> projectJac = -pCamera->projectJac(xyz_trans);
The definition of function projectJac is as follows:
Eigen::Matrix<double, 2, 3> Pinhole::projectJac(const Eigen::Vector3d &v3D) {
    Eigen::Matrix<double, 2, 3> Jac;
    Jac(0, 0) = mvParameters[0] / v3D[2];
    Jac(0, 1) = 0.f;
    Jac(0, 2) = -mvParameters[0] * v3D[0] / (v3D[2] * v3D[2]);
    Jac(1, 0) = 0.f;
    Jac(1, 1) = mvParameters[1] / v3D[2];
    Jac(1, 2) = -mvParameters[1] * v3D[1] / (v3D[2] * v3D[2]);
    printf("v3D: %f %f %f\n", v3D[0], v3D[1], v3D[2]);
    printf("Jac: %f %f %f %f %f %f\n", Jac(0, 0), Jac(0, 1), Jac(0, 2), Jac(1, 0), Jac(1, 1), Jac(1, 2));
    return Jac;
}
Now it is clear the bug is caused by "auto". Could you tell me the reason for this? How should I change the settings in Visual Studio 2019 to make "auto" work correctly? Thanks a lot.
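This does not look like a Visual Studio setting at all, but like the well-known Eigen pitfall of combining auto with expression templates. In Eigen, -m does not return a matrix; it returns a lazy expression object that refers to its operand. Since projectJac() returns a temporary, auto deduces an expression type that keeps a reference to that temporary, and the temporary is destroyed at the end of the statement; every later use of projectJac then reads dead stack memory. That is undefined behaviour, which typically "works" in debug builds and breaks under release-mode optimisation. Naming the concrete matrix type, as in the modification above, forces immediate evaluation into owned storage. A minimal sketch of the failure mode:

#include <Eigen/Dense>
#include <iostream>

// Returns a temporary matrix by value (stand-in for projectJac()).
Eigen::Matrix<double, 2, 3> makeJac() {
    Eigen::Matrix<double, 2, 3> J;
    J << 1, 2, 3,
         4, 5, 6;
    return J;
}

int main() {
    // BUG: 'auto' deduces an Eigen expression type (a CwiseUnaryOp)
    // that holds a reference to the temporary returned by makeJac().
    // The temporary dies at the end of this statement, so any later
    // use of 'lazy' reads destroyed storage: undefined behaviour.
    auto lazy = -makeJac();
    (void)lazy;

    // FIX: a concrete matrix type evaluates the expression right away
    // into storage the variable owns. (-makeJac()).eval() also works.
    Eigen::Matrix<double, 2, 3> safe = -makeJac();
    std::cout << safe << "\n";
    return 0;
}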

Undistortion of CAHVORE camera model

I have some images with fisheye distortion and their corresponding CAHVORE calibration files. I want to undistort the images, using OpenCV (namely cv2.fisheye.undistortImage) for now, which needs the intrinsic matrix K and the distortion coefficients D.
I have been reading about camera models and their conversions. It seems constructing K is pretty easy (Section 2.2.4) when there is no radial distortion, but getting the distortion coefficients D and solving for KRCr is not straightforward. As an experiment I assumed no radial distortion and constructed K from the given H_* and V_* parameters; the result is undistorted, but not perfectly.
The question is: given a calibration file like the one below, is there any formula or approximation to obtain the distortion coefficients? Or is there an easier way to undistort using the CAHVORE parameters?
A codebase, a formula, a pointer, anything is appreciated. Thanks.
Example CAHVORE file:
C = -0.000000 -0.000000 -0.000000
A = 0.000000 -0.000000 1.000000
H = 2080.155870 0.000000 3010.375794
V = -0.000000 2078.727106 1932.069537
O = 0.000096 0.000068 1.000000
R = 0.000000 -0.040627 -0.004186
E = -0.003159 0.004129 -0.001279
...
Hs = 2080.155870
Hc = 3010.375794
Vs = 2078.727106
Vc = 1932.069537
Theta = -1.570796 (-90.000000 deg)
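For what it's worth, here is the no-radial-distortion construction of K described above, as a sketch (C++ for illustration). The mapping fx = Hs, fy = Vs, cx = Hc, cy = Vc assumes Theta = -90 deg, i.e. zero skew, and it ignores O, R and E entirely, which is presumably why the result is not perfect on a fisheye image:

#include <opencv2/core.hpp>
#include <iostream>

int main() {
    // Values copied from the example CAHVORE file above.
    const double Hs = 2080.155870, Hc = 3010.375794;
    const double Vs = 2078.727106, Vc = 1932.069537;

    // Pinhole intrinsics under the no-radial-distortion assumption:
    // the O, R and E terms of the CAHVORE model are dropped.
    cv::Matx33d K(Hs, 0.0, Hc,
                  0.0, Vs, Vc,
                  0.0, 0.0, 1.0);
    std::cout << cv::Mat(K) << std::endl;
    return 0;
}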

Conjugate rotation transformation in OpenCV

I am trying to apply a certain rotation to an image, but it doesn't work as expected. The rotation I have is:
[0.109285 0.527975 0.000000
-0.527975 0.109285 0.000000
0.000000 0.000000 1.000000]
Which should be a rotation of ~78 degrees around the camera center (or the Z axis if you prefer).
To build a homography, as there is no translation component, I use the formula: K * R * K^-1 (infinite homography).
The code I use to transform the image (320x240) is:
cv::warpPerspective(image1, image2, K * R * K.inv(), image1.size());
where K is:
[276.666667 0.000000 160.000000
0.000000 276.666667 120.000000
0.000000 0.000000 1.000000]
The resulting matrix from K * R * K.inv() is:
[0.109285 0.527975 79.157461
-0.527975 0.109285 191.361865
0.000000 0.000000 1.000000]
The result should just be a rotation of the image, but instead the image gets "zoomed out".
What am I doing wrong?
Apparently my rotation matrix was wrong: its rows are not unit length (0.109285² + 0.527975² ≈ 0.29 rather than 1), so the matrix is really a ~78° rotation combined with a uniform scale of about 0.54, and that scale is exactly the "zoom out" in the output.
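For comparison, here is a sketch of the same warp with a properly orthonormal 78-degree rotation about Z, using the sign convention of the matrix in the question ("input.png" is a placeholder):

#include <opencv2/opencv.hpp>
#include <cmath>

int main() {
    cv::Mat image1 = cv::imread("input.png");   // placeholder path
    cv::Mat image2;

    // Intrinsics from the question.
    cv::Mat K = (cv::Mat_<double>(3, 3) <<
        276.666667, 0.0,        160.0,
        0.0,        276.666667, 120.0,
        0.0,        0.0,        1.0);

    // An orthonormal rotation of 78 degrees about the Z axis.
    const double a = 78.0 * CV_PI / 180.0;
    cv::Mat R = (cv::Mat_<double>(3, 3) <<
         std::cos(a), std::sin(a), 0.0,
        -std::sin(a), std::cos(a), 0.0,
         0.0,         0.0,         1.0);

    // Infinite homography (no translation): H = K * R * K^-1.
    cv::warpPerspective(image1, image2, K * R * K.inv(), image1.size());
    cv::imwrite("output.png", image2);
    return 0;
}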

Using projection matrix on GLKMatrixStack

Given this setup:
GLKMatrixStackRef transformHierarchy = GLKMatrixStackCreate (kCFAllocatorDefault);
float aspect = (float)self.drawableWidth / (float)self.drawableHeight;
float fov = PI * 0.125;
float height = 10.0f;
float cameraOffset = -height / (2.0f * tanf (fov / 2.0f));
GLKMatrix4 projectionMatrix = GLKMatrix4MakePerspective (fov, aspect, 0.01f, 1000.0f);
GLKMatrix4 T = GLKMatrix4MakeTranslation (0.0f, 0.0f, cameraOffset);
I would expect these two options to create an identical matrix on the top of the stack. Option A:
GLKMatrixStackLoadMatrix4 (transformHierarchy, projectionMatrix);
GLKMatrixStackTranslate (transformHierarchy, 0.0f, 0.0f, cameraOffset);
Option B:
GLKMatrixStackLoadMatrix4 (transformHierarchy, GLKMatrix4Multiply (projectionMatrix, T));
Option A gives me a black screen:
3.351560 0.000000 0.000000 0.000000
0.000000 5.027339 0.000000 0.000000
0.000000 0.000000 -1.000020 25.117201
0.000000 0.000000 -1.000000 0.000000
Option B (otherwise identical matrix but with a non-zero m44) gives me the visual result I expect:
3.351560 0.000000 0.000000 0.000000
0.000000 5.027339 0.000000 0.000000
0.000000 0.000000 -1.000020 25.117201
0.000000 0.000000 -1.000000 25.136698
So I've found a solution to my black screen, but the result using GLKMatrixStackTranslate() surprises me. Am I missing something here? Is it not considered a good practice to put the projection + world matrix at the bottom of the stack (to be followed by model transforms)?
UPDATE
Corrected a mistake in my cameraOffset math and updated the matrix output to match, although it has no effect on the problem as described. Also, for clarity, I transposed the matrix values (copy/pasted from Xcode, which presents them in row-major order) to column-major (as OpenGL interprets them).
Also filed a bug with Apple to inquire if this is the expected behavior.
For the record, Apple determined my bug report on this issue to be a duplicate of bug 15419952. I expect people don't run into this much as they might typically keep the projection matrix separate from the model transform stack. In any case the workaround presented in the question works fine.
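For anyone who wants to sanity-check the two stacks, the expected m44 follows directly from the bottom row (0, 0, -1, 0) of the perspective matrix. A quick sketch (plain C++ mirroring the math, not Apple's GLKit code):

#include <cmath>
#include <cstdio>

int main() {
    // Values from the question.
    const float fov = 3.14159265f * 0.125f;
    const float height = 10.0f;
    const float cameraOffset = -height / (2.0f * std::tan(fov / 2.0f));

    // Right-multiplying the projection P by a translation T(0, 0, c)
    // gives (P * T).m44 = (0, 0, -1, 0) . (0, 0, c, 1) = -c.
    std::printf("expected m44 of P*T: %f\n", -cameraOffset);  // ~25.136698

    // Option B has exactly this value; Option A has 0.0 there, i.e.
    // GLKMatrixStackTranslate did not fold the translation through
    // the projection's bottom row.
    return 0;
}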

iOS: GLKit matrix multiplication is broken as far as I can tell

Here's the source code. ANSWER is the correct answer, RESULT is the actual result.
Am I blind, or is it calculating a -1 for entry 33 when it should be a 1?
Here's the code:
GLKMatrix4 a = GLKMatrix4Make(-1.000000, 0.000000, 0.000000, 0.000000,
                              0.000000, 1.000000, 0.000000, 0.000000,
                              0.000000, 0.000000, -1.000000, 0.000000,
                              0.000000, 0.000000, 0.000000, 1.000000);
GLKMatrix4 b = GLKMatrix4Make(1.000000, 0.000000, 0.000000, 0.000000,
                              0.000000, 1.000000, 0.000000, -1.000000,
                              0.000000, 0.000000, 1.000000, -1.000000,
                              0.000000, 0.000000, 0.000000, 1.000000);
GLKMatrix4 ANSWER = GLKMatrix4Make(-1.000000, 0.000000, 0.000000, 0.000000,
                                   0.000000, 1.000000, 0.000000, -1.000000,
                                   0.000000, 0.000000, -1.000000, 1.000000,
                                   0.000000, 0.000000, 0.000000, 1.000000);

NSLog(@"##################################################");
GLKMatrix4 RESULT = GLKMatrix4Multiply(a, b);
NSLog(@"Result:");
NSLog(@" %f %f %f %f", RESULT.m00, RESULT.m01, RESULT.m02, RESULT.m03);
NSLog(@" %f %f %f %f", RESULT.m10, RESULT.m11, RESULT.m12, RESULT.m13);
NSLog(@" %f %f %f %f", RESULT.m20, RESULT.m21, RESULT.m22, RESULT.m23);
NSLog(@" %f %f %f %f", RESULT.m30, RESULT.m31, RESULT.m32, RESULT.m33);
NSLog(@"Answer:");
NSLog(@" %f %f %f %f", ANSWER.m00, ANSWER.m01, ANSWER.m02, ANSWER.m03);
NSLog(@" %f %f %f %f", ANSWER.m10, ANSWER.m11, ANSWER.m12, ANSWER.m13);
NSLog(@" %f %f %f %f", ANSWER.m20, ANSWER.m21, ANSWER.m22, ANSWER.m23);
NSLog(@" %f %f %f %f", ANSWER.m30, ANSWER.m31, ANSWER.m32, ANSWER.m33);
NSLog(@"##################################################");
Here's the output:
##################################################
Result:
-1.000000 0.000000 0.000000 0.000000
0.000000 1.000000 0.000000 -1.000000
0.000000 0.000000 -1.000000 -1.000000
0.000000 0.000000 0.000000 1.000000
Answer:
-1.000000 0.000000 0.000000 0.000000
0.000000 1.000000 0.000000 -1.000000
0.000000 0.000000 -1.000000 1.000000
0.000000 0.000000 0.000000 1.000000
##################################################
I've spent the last 5 hours trying to debug some OpenGL code only to find this is the problem. I must be missing something, surely. Can anyone spot what's going on, or verify this shouldn't be happening?
Um, I got the same result as your "RESULT" matrix.
I did the matrix multiplication with pen and paper, following these rules:
http://www.mathsisfun.com/algebra/matrix-multiplying.html
OK, I think I know where you went wrong: you must have listed the numbers in the wrong positions. The way you list the numbers in code is horizontal, e.g. Coordinate(x, y, z), but in the matrix these numbers get moved to vertical positions, so a group you write horizontally, e.g. [X Y Z W], actually ends up as a vertical column. Now, if we list your numbers in those (wrong) positions and multiply matrix a with matrix b, we get the same answer as your "ANSWER" matrix.
A little more detail on this: OpenGL ES assumes matrices are in column-major order, so the order in which OpenGL ES reads the elements of a matrix is:
m00 m10 m20 m30
m01 m11 m21 m31
m02 m12 m22 m32
m03 m13 m23 m33
Apple's GLKit math utilities are of course also built around this column-major order, which can be a little confusing when you first realise it. :)
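To make that concrete, here is a small sketch in plain C++ that mirrors GLKit's column-major storage (illustrative code, not Apple's implementation). The 16 values written as four visual rows fill the array in order, so each visual row actually becomes a column, and multiplying under that interpretation reproduces the RESULT matrix above rather than ANSWER:

#include <cstdio>

// 4x4 matrix stored column-major like GLKMatrix4:
// element (row r, column c) lives at m[c * 4 + r].
struct Mat4 { float m[16]; };

// Column-major product: out(r, c) = sum over k of a(r, k) * b(k, c).
Mat4 mul(const Mat4 &a, const Mat4 &b) {
    Mat4 out;
    for (int c = 0; c < 4; ++c)
        for (int r = 0; r < 4; ++r) {
            float s = 0.0f;
            for (int k = 0; k < 4; ++k)
                s += a.m[k * 4 + r] * b.m[c * 4 + k];
            out.m[c * 4 + r] = s;
        }
    return out;
}

int main() {
    // Written as four visual rows, exactly as in the question -- but
    // because storage is column-major, each visual row is a COLUMN.
    Mat4 a = {{ -1, 0,  0, 0,
                 0, 1,  0, 0,
                 0, 0, -1, 0,
                 0, 0,  0, 1 }};
    Mat4 b = {{ 1, 0, 0,  0,
                0, 1, 0, -1,
                0, 0, 1, -1,
                0, 0, 0,  1 }};
    Mat4 r = mul(a, b);
    // Printed one column per line (m00 m01 m02 m03, then m10 ...),
    // this matches the RESULT block above, not ANSWER.
    for (int c = 0; c < 4; ++c)
        std::printf("%f %f %f %f\n",
                    r.m[c * 4 + 0], r.m[c * 4 + 1],
                    r.m[c * 4 + 2], r.m[c * 4 + 3]);
    return 0;
}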
