I am just playing around with the template setup in MTKView, and I have been trying to understand the following:
1. The default location of the camera.
2. The default location of primitives created with MDLMesh and MTKMesh.
3. Why a rotation also involves a translation.
Relevant code:
matrix_float4x4 base_model = matrix_multiply(matrix_from_translation(0.0f, 0.0f, 5.0f), matrix_from_rotation(_rotation, 0.0f, 1.0f, 0.0f));
matrix_float4x4 base_mv = matrix_multiply(_viewMatrix, base_model);
matrix_float4x4 modelViewMatrix = matrix_multiply(base_mv, matrix_from_rotation(_rotation, 0.0f, 1.0f, 0.0f));
The preceding code is from the template's _update method; evidently, it is trying to rotate the model instead of the camera. But what baffles me is that it also requires a translation. I have read claims such as "because it always rotates at (0, 0, 0)". But why (0, 0, 0), if the object is placed somewhere else? Also, it appears to me that the camera is looking down the positive z-axis (question 1) instead of the usual negative z-axis, because if I change:
matrix_float4x4 base_model = matrix_multiply(matrix_from_translation(0.0f, 0.0f, 5.0f), matrix_from_rotation(_rotation, 0.0f, 1.0f, 0.0f));
to:
matrix_float4x4 base_model = matrix_multiply(matrix_from_translation(0.0f, 0.0f, -5.0f), matrix_from_rotation(_rotation, 0.0f, 1.0f, 0.0f));
nothing is displayed on the screen; the object appears to be behind the camera, which suggests that the camera is looking down the positive z-axis.
If I set matrix_from_translation(0.0f, 0.0f, 0.0f) (all zeros), the object does not simply rotate about the y-axis (question 3) as I expected.
I have tried to find out where the MDLMesh and MTKMesh are placed by default (question 2), but I could not find a property that logs their position. The following, also from the template, is how the primitive is created:
MDLMesh *mdl = [MDLMesh newBoxWithDimensions:(vector_float3){2, 2, 2}
                                    segments:(vector_uint3){1, 1, 1}
                                geometryType:MDLGeometryTypeTriangles
                               inwardNormals:NO
                                   allocator:[[MTKMeshBufferAllocator alloc] initWithDevice:_device]];
_boxMesh = [[MTKMesh alloc] initWithMesh:mdl device:_device error:nil];
Not knowing the location generated by the above method hinders my understanding of how the rotation and translation work, and of the default location of the camera in Metal.
Thanks.
I think the order in which the matrices are written in the code somewhat obfuscates the intent, so I've boiled down what's actually happening into the following pseudocode to make it easier to explain.
I've replaced that last matrix with the one from the template, since your modification just has the effect of doubling the rotation about the Y axis.
modelViewMatrix = identity *
translate(0, 0, 5) *
rotate(angle, axis(0, 1, 0)) *
rotate(angle, axis(1, 1, 1))
Since the matrix is multiplied on the left of the vector in the shader, we're going to read the matrices from right to left to determine their cumulative effect.
First, we rotate the cube around the axis (1, 1, 1), which passes diagonally through the origin. Then, we rotate the cube about the Y axis. These rotations combine to form a sort of "tumble" animation. Then, we translate the cube by 5 units along the +Z axis (which, as you observe, goes into the screen since we're regarding our world as left-handed). Finally, we apply our camera transformation, which is hard-coded to be the identity matrix. We could have used an additional positive translation along +Z as the camera matrix to move the cube even further from the camera, or a negative value to move the cube closer.
To answer your questions:
There is no default location for the camera, other than the origin (0, 0, 0) if you want to think of it like that. You "position" the camera in the world by multiplying the vertex positions by the inverse of the transformation that represents how the camera is placed in the world.
Model I/O builds meshes that are "centered" around the origin, to the extent this makes sense for the shape being generated. Cylinders, ellipsoids, and boxes are actually centered around the origin, while cones are constructed with their apex at the origin and their axis extending along -Y.
The rotation doesn't really involve the translation as much as it's combined with it. The reason for the translation is that we need to position the cube away from the camera; otherwise we'd be inside it when drawing.
One final note on order of operations: If we applied the translation before the rotation, it would cause the box to "orbit" around the camera, since as you note, rotations are always relative to the origin of the current frame of reference.
I have four points, and I would like to draw a UIImageView whose corners are at those four points.
The four points describe an arbitrary quadrilateral (it might be a trapezoid or a parallelogram).
I guess I somehow need to make a transform from those four points, but I'm not quite sure how.
Any suggestions?
Other solutions?
Look at https://math.stackexchange.com/questions/169176/2d-transformation-matrix-to-make-a-trapezoid-out-of-a-rectangle to determine your transformation matrix.
Note that you cannot use a 2D matrix so you have to use a CATransform3D.
CATransform3D transform = CATransform3DIdentity;
// update transform elements, .m11, .m12, .m13, etc.
imageView.layer.transform = transform;
There are two options:
1.
CGAffineTransform layerTransform = CGAffineTransformMakeRotation(M_PI_2);
layerTransform = CGAffineTransformTranslate(layerTransform, 1080, 0);
2.
CGAffineTransform layerTransform = CGAffineTransformMakeTranslation(1080, 0);
layerTransform = CGAffineTransformRotate(layerTransform, M_PI_2);
What's the difference between them?
Rotating before or after translating: is there any difference?
CGAffineTransformTranslate basically creates a new affine transform by translating (moving) an existing affine transform. Its signature is CGAffineTransform CGAffineTransformTranslate(CGAffineTransform t, CGFloat tx, CGFloat ty), where tx and ty are the distances the new transform moves in the X and Y directions, respectively.
CGAffineTransformRotate basically creates a new affine transform by rotating an existing transform. Its signature is CGAffineTransform CGAffineTransformRotate(CGAffineTransform t, CGFloat angle), where angle is the rotation angle for the new transform.
In short, the first one shifts from one position to another, and the second one produces a rotation.
The thing you need to know is that translation means moving the location.
Rotation is self-explanatory.
The order does matter, depending on the origin point and the effect you want to achieve.
In Core Graphics, the origin is the lower left on OS X and the upper left on iOS.
In Core Animation, objects can be transformed more easily about their center points.
CG works with rects, though.
The most common stumbling block is trying to rotate something about its center in CG.
If you just apply a rotation transform, the thing will appear to rotate around its origin, like the hand of a clock.
So if you move the origin first and then rotate, you can compensate for this.
The math is not easy unless you know trigonometry and radians.
The trick is that you need to apply both transformations before drawing again, or the thing will jump about.
Is it possible to use a normalized coordinate system (from 0.0 to 1.0) instead of using pixel coordinates when drawing stuff with CoreGraphics? It would certainly make a lot of things easier...
Yes, use a CGAffineTransform. I do this in an app: I keep all coordinates normalized to -1.0 to 1.0 and create a transform based on the size of the view I am drawing into.
Example:
CGAffineTransform translateTransform = CGAffineTransformMakeTranslation(offset.x, offset.y);
One can transform each point or create a path and transform the entire path:
CGMutablePathRef transformedPath = CGPathCreateMutable();
CGPathAddPath(transformedPath, &transform, path);
As @yurish points out, one can also use CGContextScaleCTM, CGContextTranslateCTM, etc., instead of transforming the points/path, if that works better for you.
Core Graphics does not use pixel coordinates. It uses abstract points that are converted to pixels using the current transformation matrix (CTM). You can use a normalized coordinate system if you adjust the CTM appropriately (CGContextScaleCTM, etc.).
I'm currently working on a Final Fantasy-like game, and I'm at the point of building the transition effect when switching from the world map to battles. I wanted a zoom-in-while-rotating effect. I was thinking of simply animating a transformation matrix that would be passed to SpriteBatch.Begin, but my problem is that when I rotate, the rotation origin is the top left of my entire scene, and the zoom-in isn't centered. I saw that you can specify a rotation origin in SpriteBatch.Draw, but that sets it per sprite, and I want to rotate the entire scene.
The transform you are looking for is this:
Matrix Transform = Matrix.CreateTranslation(-Position)
    * Matrix.CreateScale(scale)
    * Matrix.CreateRotationZ(angle)
    * Matrix.CreateTranslation(new Vector3(
          GraphicsDevice.Viewport.Bounds.Center.X,
          GraphicsDevice.Viewport.Bounds.Center.Y, 0f));
(Viewport.Bounds.Center is a Point, so it has to be converted to a Vector3; Position must likewise be a Vector3 for the first term to compile.)