Rotation, Translation, and Default Camera Location - Metal

I am just playing around with the template setup in MTKView, and I have been trying to understand the following:
Default location of the camera.
The default location when creating primitives using MDLMesh and MTKMesh.
Why a rotation also involves a translation.
Relevant code:
matrix_float4x4 base_model = matrix_multiply(matrix_from_translation(0.0f, 0.0f, 5.0f), matrix_from_rotation(_rotation, 0.0f, 1.0f, 0.0f));
matrix_float4x4 base_mv = matrix_multiply(_viewMatrix, base_model);
matrix_float4x4 modelViewMatrix = matrix_multiply(base_mv, matrix_from_rotation(_rotation, 0.0f, 1.0f, 0.0f));
The preceding code is from the _update method in the template; evidently, it is trying to rotate the model instead of the camera. But what baffles me is the fact that it also requires a translation. I have read claims such as "because it always rotates at (0, 0, 0)". But why (0, 0, 0), if the object is placed somewhere else? Also, it appears to me that the camera is looking down the positive z-axis (question 1) instead of the usual negative z-axis, because if I change:
matrix_float4x4 base_model = matrix_multiply(matrix_from_translation(0.0f, 0.0f, 5.0f), matrix_from_rotation(_rotation, 0.0f, 1.0f, 0.0f));
to:
matrix_float4x4 base_model = matrix_multiply(matrix_from_translation(0.0f, 0.0f, -5.0f), matrix_from_rotation(_rotation, 0.0f, 1.0f, 0.0f));
nothing is displayed on the screen, because the object appears to be behind the camera, which suggests that the camera is looking down the positive z-axis.
If I set matrix_from_translation(0.0f, 0.0f, 0.0f) (all zeros), the object still rotates, but not about the y-axis (question 3) as I expected.
I have tried to find out where the MDLMesh and MTKMesh are placed by default (question 2), but I could not find a property that reports the position. The following, also from the template, is how the primitive is created:
MDLMesh *mdl = [MDLMesh newBoxWithDimensions:(vector_float3){2, 2, 2}
                                    segments:(vector_uint3){1, 1, 1}
                                geometryType:MDLGeometryTypeTriangles
                               inwardNormals:NO
                                   allocator:[[MTKMeshBufferAllocator alloc] initWithDevice:_device]];
_boxMesh = [[MTKMesh alloc] initWithMesh:mdl device:_device error:nil];
Without knowing the location generated by the above method, it is hard for me to understand how the rotation and translation work, or what the default camera location is in Metal.
Thanks.

I think the order in which the matrices are written in the code somewhat obfuscates the intent, so I've boiled down what's actually happening into the following pseudocode to make it easier to explain.
I've replaced that last matrix with the one from the template, since your modification just has the effect of doubling the rotation about the Y axis.
modelViewMatrix = identity *
translate(0, 0, 5) *
rotate(angle, axis(0, 1, 0)) *
rotate(angle, axis(1, 1, 1))
Since the matrix is multiplied on the left of the vector in the shader, we're going to read the matrices from right to left to determine their cumulative effect.
First, we rotate the cube around the axis (1, 1, 1), which passes diagonally through the origin. Then, we rotate the cube about the Y axis. These rotations combine to form a sort of "tumble" animation. Then, we translate the cube by 5 units along the +Z axis (which, as you observe, goes into the screen since we're regarding our world as left-handed). Finally, we apply our camera transformation, which is hard-coded to be the identity matrix. We could have used an additional positive translation along +Z as the camera matrix to move the cube even further from the camera, or a negative value to move the cube closer.
To answer your questions:
There is no default location for the camera, other than the origin (0, 0, 0) if you want to think of it like that. You "position" the camera in the world by multiplying the vertex positions by the inverse of the transformation that represents how the camera is placed in the world.
Model I/O builds meshes that are "centered" around the origin, to the extent this makes sense for the shape being generated. Cylinders, ellipsoids, and boxes are actually centered around the origin, while cones are constructed with their apex at the origin and their axis extending along -Y.
The rotation doesn't really involve the translation as much as it's combined with it. The reason for the translation is that we need to position the cube away from the camera; otherwise we'd be inside it when drawing.
One final note on order of operations: If we applied the translation before the rotation, it would cause the box to "orbit" around the camera, since as you note, rotations are always relative to the origin of the current frame of reference.
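To make both points concrete, here is a minimal sketch using the template's matrix_from_translation / matrix_from_rotation helpers (the 2-unit camera offset is just an illustrative value, not something in the template):
// "Place" the camera 2 units behind the origin by using the inverse of that placement
// as the view matrix: the inverse of translate(0, 0, -2) is translate(0, 0, +2).
matrix_float4x4 viewMatrix = matrix_from_translation(0.0f, 0.0f, 2.0f);
// Rotate the cube in place about Y first, then push it 5 units away from the camera.
matrix_float4x4 rotateThenTranslate = matrix_multiply(matrix_from_translation(0.0f, 0.0f, 5.0f),
                                                      matrix_from_rotation(_rotation, 0.0f, 1.0f, 0.0f));
// Swapping the factors makes the cube orbit the origin instead, because the rotation
// then acts on the already-translated position.
matrix_float4x4 translateThenRotate = matrix_multiply(matrix_from_rotation(_rotation, 0.0f, 1.0f, 0.0f),
                                                      matrix_from_translation(0.0f, 0.0f, 5.0f));
matrix_float4x4 modelViewMatrix = matrix_multiply(viewMatrix, rotateThenTranslate);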

Related

Detecting closest vertex to camera in openGL ES

I have a mesh that is stored as an array of Vertices with an Index array used to draw it. Four of the vertices are also redrawn with a shader to highlight the points, and the indices for these are stored in another array.
The user can rotate the model using touches, which affects the modelViewMatrix:
modelViewMatrix = GLKMatrix4Multiply(modelViewMatrix, _rotMatrix);
My problem is that I need to detect which of my four highlighted points is closest to the screen when the user makes a rotation.
I think the best method would be to calculate the distance from the near clip plane of the view frustum to the point, but how do I calculate those points in the first place?
You can do this easily from camera/eye space[1], where everything is relative to the camera (so the camera is at (0, 0, 0), looking down the negative z axis).
Use your modelViewMatrix to transform the vertex to camera space, say vertex_cs. Then the distance of the vertex from the camera (plane) is simply -vertex_cs.z.
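As a concrete sketch of the above (assuming GLKit; vx, vy and vz stand in for the model-space position of one highlighted vertex):
#include <GLKit/GLKMath.h>
// Model-space position of one highlighted vertex, w = 1 because it is a point.
GLKVector4 vertex_ms = GLKVector4Make(vx, vy, vz, 1.0f);
// Transform into camera/eye space with the same modelViewMatrix used for rendering.
GLKVector4 vertex_cs = GLKMatrix4MultiplyVector4(modelViewMatrix, vertex_ms);
// Eye space looks down -Z, so the distance from the camera plane is:
float distanceFromCamera = -vertex_cs.z;
// Repeat for each of the four highlighted vertices and keep the smallest distance.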
--
1. What exactly are eye space coordinates?

DirectX11 - Matrix Translation

I am creating a 3D scene and I have just inserted a cube object into it. It renders fine at the origin, but when I try to rotate it and then translate it, I get a huge deformed cube. Here is the problem area in my code:
D3DXMATRIX cubeROT, cubeMOVE;
D3DXMatrixRotationY(&cubeROT, D3DXToRadian(45.0f));
D3DXMatrixTranslation(&cubeMOVE, 10.0f, 2.0f, 1.0f);
D3DXMatrixTranspose(&worldMatrix, &(cubeROT * cubeMOVE));
// Put the model vertex and index buffers on the graphics pipeline to prepare them for drawing.
m_Model->Render(m_Direct3D->GetDeviceContext());
// Render the model using the light shader.
result = m_LightShader->Render(m_Direct3D->GetDeviceContext(), m_Model->GetIndexCount(), worldMatrix, viewMatrix, projectionMatrix,
m_Model->GetTexture(), m_Light->GetDirection(), m_Light->GetDiffuseColor());
// Reset the world matrix.
m_Direct3D->GetWorldMatrix(worldMatrix);
I have discovered that it's the cubeMOVE part of the transpose that is giving me the problem, but I have no idea why.
This rotates the cube properly:
D3DXMatrixTranspose(&worldMatrix, &cubeROT);
This translates the cube properly:
D3DXMatrixTranslation(&worldMatrix, 10.0f, 2.0f, 1.0f);
But this creates the deformed mesh:
D3DXMatrixTranspose(&worldMatrix, &cubeMOVE);
I'm quite new to DirectX so any help would be very much appreciated.
I don't think transpose does what you think it does. To combine transformation matrices, you just multiply them -- no need to transpose. I guess it should be simply:
worldMatrix = cubeROT * cubeMOVE;
Edit
The reason "transpose" seems to work for rotation but not translation is that transposing flips the off-diagonal parts of the matrix. For an axis-rotation matrix, that leaves the matrix nearly unchanged. (It does change a couple of signs, but that would only affect the direction of the rotation.) For a translation matrix, applying a transpose completely deforms it -- hence the result you see.
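To make that concrete, here is a rough illustration with plain C arrays (not D3DX types), laid out in D3DX's row-vector convention where translation lives in the fourth row:
// D3DXMatrixTranslation(&cubeMOVE, 10.0f, 2.0f, 1.0f) produces, conceptually:
float move[4][4] = {
    {  1.0f, 0.0f, 0.0f, 0.0f },
    {  0.0f, 1.0f, 0.0f, 0.0f },
    {  0.0f, 0.0f, 1.0f, 0.0f },
    { 10.0f, 2.0f, 1.0f, 1.0f },   // translation (10, 2, 1) in the fourth row
};
// Transposing moves the translation into the fourth column:
float moveTransposed[4][4] = {
    { 1.0f, 0.0f, 0.0f, 10.0f },
    { 0.0f, 1.0f, 0.0f,  2.0f },
    { 0.0f, 0.0f, 1.0f,  1.0f },
    { 0.0f, 0.0f, 0.0f,  1.0f },
};
// With v' = v * M and the transposed matrix, the output w becomes 10x + 2y + z + 1,
// i.e. it depends on the vertex position, so the perspective divide warps the mesh
// instead of simply moving it.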

How to rotate around x and y in OpenGL ES 1.1?

I am drawing a texture with 4 vertices in OpenGL ES 1.1.
It can rotate around z:
glRotatef(20, 0, 0, 1);
But when I try to rotate it around x or y like a CALayer, the texture just disappears completely. Example for rotation around x:
glRotatef(20, 1, 0, 0);
I also tried very small values and incremented them in the animation loop.
// called in render loop
static double angle = 0;
angle += 0.005;
glRotatef(angle, 1, 0, 0);
At certain angles I see only the edge of the texture, as if OpenGL ES clipped away anything that extends into depth.
Can the problem be related to projection mode? How would you achieve a perspective transformation of a texture like you can do with CALayer transform property?
The problem is most likely in your glFrustumf or glOrthof call. The last parameter of these two calls is z-far, and it should be large enough for the primitive to be drawn. If the side length of the square is 1.0 and its centre is at (0.0, 0.0, 0.5), then z-far should be greater than 1.0 to see the square rotated 90 degrees around the X or Y axis. Note, though, that the required values also depend on other matrix operations (translating the object or using tools like lookAt).
Making this parameter large enough should solve your problem.
To achieve a perspective transformation use glFrustumf instead of glOrthof.
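For example, a minimal projection setup along those lines (the viewport size, zNear/zFar and the -2 translation are illustrative values, not taken from your code):
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
const GLfloat aspect = (GLfloat)viewportWidth / (GLfloat)viewportHeight;  // your view's size in pixels
const GLfloat zNear = 0.1f, zFar = 10.0f;                                 // zFar comfortably beyond the quad
glFrustumf(-zNear * aspect, zNear * aspect, -zNear, zNear, zNear, zFar);  // perspective instead of glOrthof
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(0.0f, 0.0f, -2.0f);   // keep the quad in front of the camera, between zNear and zFar
glRotatef(angle, 1.0f, 0.0f, 0.0f);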

Moving the Projection Matrix to a Specific X,Y,Z Point - OpenGL ES 2.0 - iOS

Say I am using the below code to setup a projection view :
float aspect = fabsf(self.view.bounds.size.width / self.view.bounds.size.height);
GLKMatrix4 projectionMatrix = GLKMatrix4MakePerspective(GLKMathDegreesToRadians(65.0f), aspect, 4.0f, 10.0f);
self.effect.transform.projectionMatrix = projectionMatrix;
If I now wanted to make the camera look at a specific point in my scene, how would I best achieve this? Currently I am changing the modelViewMatrix to move the object so it is centred in view, but I am wondering if I could achieve the same by manipulating the projectionMatrix somehow.
As any good 3D programming basics tutorial (like maybe this one) will tell you...
A Model matrix converts vertex coordinates from model space (the coordinate space in which your mesh is specified, which is usually ignorant of where you want to place the model in the scene) to world space (the conceptual space of your scene).
A View matrix converts from world space to eye space (that is, a coordinate system relative to the "camera" that views your scene).
A Projection matrix converts from eye space to clip space (a -1.0 to 1.0 cube representing your screen plus some depth, which the GPU then converts into pixel space).
The Projection matrix already works in terms relative to the viewpoint -- you've already fixed where the eye is and what point it's looking at, so the projection matrix only changes your field of view angle, aspect ratio, and near and far clipping planes. If you want to change the point you're looking at, specify a different LookAt transform for your View matrix.
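For example, with GLKit the "look at" goes into the view part of the model-view matrix while the projection stays as it is (the eye and target values below are placeholders for wherever your object actually sits):
GLKMatrix4 viewMatrix = GLKMatrix4MakeLookAt(0.0f, 2.0f, 8.0f,    // eye position
                                             0.0f, 0.0f, -5.0f,   // point to look at
                                             0.0f, 1.0f, 0.0f);   // up vector
GLKMatrix4 modelMatrix = GLKMatrix4MakeTranslation(0.0f, 0.0f, -5.0f);   // the object's placement in the world
self.effect.transform.modelviewMatrix = GLKMatrix4Multiply(viewMatrix, modelMatrix);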

OpenGL ES rendering triangle mesh - Black dots in iPhone and perfect image in simulator

This is not a texture-related problem as described in other StackOverflow questions: Rendering to texture on iOS...
My Redraw loop:
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(0.0f, 0.0f, -300.0f);
glMultMatrixf(transform);
glVertexPointer(3, GL_FLOAT, MODEL_STRIDE, &model_vertices[0]);
glEnableClientState(GL_VERTEX_ARRAY);
glNormalPointer(GL_FLOAT, MODEL_STRIDE, &model_vertices[3]);
glEnableClientState(GL_NORMAL_ARRAY);
glColorPointer(4, GL_FLOAT, MODEL_STRIDE, &model_vertices[6]);
glEnableClientState(GL_COLOR_ARRAY);
glEnable(GL_COLOR_MATERIAL);
glDrawArrays(GL_TRIANGLES, 0, MODEL_NUM_VERTICES);
The result in the simulator:
Then the result on the iPhone 4 (iOS 5, using OpenGL ES 1.1):
Notice the black dots; they appear at random positions as you rotate the object (a brain).
The mesh has 15002 vertices and 30k triangles.
Any ideas on how to fix this jitter in the device image?
I've solved the problem by increasing the precision of the depth buffer:
// Set up the projection
static const GLfloat zNear = 0.1f, zFar = 1000.0f, fieldOfView = 45.0f;
glEnable(GL_DEPTH_TEST);
glMatrixMode(GL_PROJECTION);
GLfloat size = zNear * tanf(DEGREES_TO_RADIANS(fieldOfView) / 2.0);
CGRect rect = self.bounds;
glFrustumf(-size, size, -size / (rect.size.width / rect.size.height), size / (rect.size.width / rect.size.height), zNear, zFar);
glViewport(0, 0, rect.size.width, rect.size.height);
glMatrixMode(GL_MODELVIEW);
In the code that produced the jitter, zNear was 0.01f.
The hint came from devforums.apple.com
There's nothing special in the code you posted that would cause this. The problem is likely in your mesh data rather than in your code, due to precision limitations on the processing of the vertices in your model. This type of problem is common if you have adjacent triangles that have close, but not identical, values for the positions of the vertices they share. It's also the type of thing that will commonly vary between a gpu and a simulator.
You say that the black dots flash around randomly as you rotate the object. If you're rotating the object, I assume your real code isn't always loading the identity matrix in for the model-view?
If the gaps between your triangles are much smaller than the projected size of one pixel, then usually they will end up being rounded to the same pixel and you won't see any problem. But if one vertex is rounded in one direction and the other vertex is rounded in the other direction, that can leave a one-pixel gap. The locations of the rounding errors vary depending on the transform matrix, so they move every frame as the object rotates.
If you load a different mesh do you get the same errors?
If you have your brain mesh in a data format that you can edit in a 3D modelling app, then search for an option named something like "weld vertices" or "merge vertices". You set a minimum threshold for vertices to be considered identical, and it will look for vertex pairs within that distance and move one (or both) so they match perfectly. Many 3D modelling apps also have cleanup tools to ensure that a mesh is manifold, which means (among other things) that there are no holes in the mesh. You usually only want to deal with manifold meshes in 3D rendering. You can also weld vertices in your own code, though the operation is expensive and not usually the type of thing you want to do at runtime unless you really have to.
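If you do weld in your own code, a naive sketch (my own illustration, not from any particular library) might look like the following; it is quadratic in the vertex count, so run it once when loading the mesh rather than per frame:
// Assumes positions are the first three floats of each vertex and a 16-bit index buffer.
void WeldVertices(const float *vertices, int floatsPerVertex,
                  unsigned short *indices, int vertexCount, int indexCount,
                  float threshold)
{
    const float t2 = threshold * threshold;
    for (int i = 0; i < vertexCount; i++) {
        for (int j = i + 1; j < vertexCount; j++) {
            float dx = vertices[i * floatsPerVertex + 0] - vertices[j * floatsPerVertex + 0];
            float dy = vertices[i * floatsPerVertex + 1] - vertices[j * floatsPerVertex + 1];
            float dz = vertices[i * floatsPerVertex + 2] - vertices[j * floatsPerVertex + 2];
            if (dx * dx + dy * dy + dz * dz < t2) {
                // Treat j as a duplicate of i: repoint every index that referenced j.
                for (int k = 0; k < indexCount; k++) {
                    if (indices[k] == j) indices[k] = i;
                }
            }
        }
    }
}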
