Imported Blender model rotates incorrectly - iOS

I've imported a Blender 3D model (a sphere with an Earth texture) into SceneKit, and I'm trying to rotate it with a pan gesture. I have to mentally swap the x and y axes, because the rotation system in SceneKit is different from the one adopted by Blender.
This is how I rotate the Earth object:
@objc func pan(gesture: UIPanGestureRecognizer) {
    // Dampen the raw pan translation so the rotation isn't too fast.
    let t = gesture.translation(in: self.view)
    let translation = CGPoint(x: t.x * 0.05, y: t.y * 0.05)
    // Euclidean length of the pan, used as the rotation angle (radians).
    let intensity = Float(hypot(translation.x, translation.y))
    // I invert the x and y because of the different coordinate system.
    let rotation = SCNMatrix4MakeRotation(intensity, Float(translation.y), Float(translation.x), 0.0)
    earth.transform = SCNMatrix4Mult(earth.transform, rotation)
    gesture.setTranslation(.zero, in: self.view)
}
The rotation around the y axis is correct, but if I try to rotate it along the x axis by panning vertically, it seems as if the x axis of the Earth is oblique rather than perpendicular to the y axis.
In this video I first drag the finger upwards and then downwards:
https://www.youtube.com/watch?v=7YumAB_rXlk

When you rotate the node with a gesture, the rotation is applied with respect to the node's own axes. Say your node is in its original default orientation, with the +X direction to the right. Now you rotate the node 180° about Y with a left-to-right swipe. If you then apply a top-to-bottom swipe, the node will rotate upward about X instead of downward, because the first swipe swung the X-axis around so that it points in the opposite direction. That's just the extreme example: if your first Y rotation was less than 180°, a swipe about X will cause the node to rotate in a slanted manner, as in your video.
One way to deal with this is to place your earth node in another node (a "container node"). Use the earth node for the Y rotation and, separately, use the container node for the X rotation.
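A minimal sketch of that container approach, assuming the node names from the question (earth, scene) and the dampened translation from the pan handler:

let container = SCNNode()
scene.rootNode.addChildNode(container)
container.addChildNode(earth) // re-parents the earth inside the container

// In the pan handler: a horizontal pan spins the earth about its y-axis,
// a vertical pan tilts the container about the x-axis.
earth.eulerAngles.y += Float(translation.x)
container.eulerAngles.x += Float(translation.y)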
But the way I deal with this can be found here: How to rotate an SCNBox

Related

Why does ARFaceAnchor have negative Z position?

I am using ARKit's ARFaceTrackingConfiguration with ARConfiguration.WorldAlignment.camera alignment, but I found that the documentation (seemingly) does not reflect reality:
Based on the excerpt of documentation below, I would expect the face anchor's transform to be expressed in a right-handed coordinate system. However, when I tried moving my head, I noticed that the Z coordinate of the face anchor is always negative (i.e. faceAnchor.transform.columns.3.z < 0). Note that moving the head in the X and Y directions corresponds to the expected outcome (unlike the Z coordinate).
Camera alignment defines a coordinate system based on the native sensor orientation of the device camera. Relative to a AVCaptureVideoOrientation.landscapeRight-oriented camera image, the x-axis points to the right, the y-axis points up, and the z-axis points out the front of the device (toward the user).
I want the transform to behave as per the documentation, i.e. the Z coordinate of the face anchor should be positive, given that the documentation says "the z-axis points out the front of the device (toward the user)". So far it seems the Z-axis points out the back of the device…
Am I missing something obvious?
I tried to repair the rotation with the following code, but I am not sure it's the correct way to fix this:
// Repair the rotation: swap the x and y axis components and negate z.
let oldFaceRotation = simd_quatf(face.transform) // quaternion from the 4x4 transform
let repairedFaceRotation = simd_quatf(ix: oldFaceRotation.axis.y,
                                      iy: oldFaceRotation.axis.x,
                                      iz: -oldFaceRotation.axis.z,
                                      r: oldFaceRotation.real)
// Repair the translation: mirror the z coordinate.
var repairedPosition = face.transform.columns.3
repairedPosition.z *= -1
// Combine the two into a corrected transform.
var correctedFaceTransform = float4x4(repairedFaceRotation)
correctedFaceTransform.columns.3 = repairedPosition
It seems quite obvious:
When the ARSession is running and the ARCamera begins tracking the environment, it places the world origin axes in front of your face at (x: 0, y: 0, z: 0). Just check it using:
sceneView.debugOptions = [.showWorldOrigin]
So your face's position must be in the positive part of the Z axis of the world coordinate system.
Thus, the ARFaceAnchor will be placed in the positive Z-axis direction as well.
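A quick way to watch this value, assuming an ARSCNView whose delegate implements the standard update callback:

// Print the face anchor's Z coordinate every time it updates.
func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    guard let faceAnchor = anchor as? ARFaceAnchor else { return }
    print("face z:", faceAnchor.transform.columns.3.z)
}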
And when you use ARFaceTrackingConfiguration vs ARWorldTrackingConfiguration, there are two things to consider:
The rear camera moves towards objects along the negative Z-axis (the positive X-axis is on the right).
The front camera moves towards faces along the positive Z-axis (the positive X-axis is on the left).
Hence, when you are "looking" through the TrueDepth camera, the 4x4 matrix is mirrored.
Although I still don't know why the face anchor does not behave as described in the documentation, I can at least answer how to convert its left-handed system into the Metal- and SceneKit-friendly right-handed system (X axis to the right, Y axis up, Z axis from the screen towards the user):
import simd

// Mirror a face anchor's left-handed pose into the right-handed system
// by negating Z in both the rotation and the translation.
func faceAnchorPoseToRHS(_ mat: float4x4) -> float4x4 {
    // Mirror the position along Z.
    let correctedPos = float4(x: mat.columns.3.x, y: mat.columns.3.y, z: -mat.columns.3.z, w: 1)
    // Mirror the rotation: negate the angle and the Z component of the axis.
    let quat = simd_quatf(mat)
    let newQuat = simd_quatf(angle: -quat.angle, axis: float3(quat.axis.x, quat.axis.y, -quat.axis.z))
    var newPose = float4x4(newQuat)
    newPose.columns.3 = correctedPos
    return newPose
}

Rotate node around y axis considering euler values in SceneKit iOS

I want to rotate a node around the y axis when the node's euler x value is -90 (x: -90, y: 0, z: 0). How can I achieve this? I have checked other posts related to rotation, but all solutions are given for euler values of (0, 0, 0).
Don't rotate around (0, 1, 0); instead, rotate this vector by multiplying it by the node's transform. For eulerAngles of (-90, 0, 0) this results in the axis (0.0, -0.448074, 0.893997).
But I assume you actually want the euler angles set to -90° (all angles are in radians in SceneKit), which results in the axis (0, 0, 1), so you need to rotate the node around the z-axis of the rotated node.
You can also wrap your node (with its euler angles set) in a parent node, rotate that parent node around the y-axis, and let SceneKit handle the transformations and coordinate spaces.
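A minimal sketch of the first approach using simd (the variable node is assumed): transform the y-axis by the node's transform to get the rotation axis in the parent's space, then pre-multiply the orientation.

import SceneKit
import simd

let localY = simd_float4(0, 1, 0, 0) // a direction, so w = 0
let axis4 = node.simdTransform * localY // the node's y-axis in its parent's space
let axis = simd_normalize(simd_float3(axis4.x, axis4.y, axis4.z))
// Rotate 90° (in radians!) about that axis; pre-multiplying applies the
// spin in the parent's coordinate space.
let spin = simd_quatf(angle: .pi / 2, axis: axis)
node.simdOrientation = spin * node.simdOrientation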

SceneKit: Angle between two SCNNodes

I am trying to develop a robotic arm tracking system.
I used SceneKit to develop the visualization and control of the system.
The SCNNode hierarchy of my system is:
Shoulder--->Upper_arm--->Fore_arm--->Palm.
I could now rotate each node using the rotation property of each SCNNode.
And I am now interested in whether there's any existing API to compute the angle between two SCNNodes when the system is moving, e.g. the angle between the Upper_arm and the Fore_arm?
Try SCNNode.eulerAngles; you will get an SCNVector3 with these components:
Pitch (the x component) is the rotation about the node’s x-axis.
Yaw (the y component) is the rotation about the node’s y-axis.
Roll (the z component) is the rotation about the node’s z-axis.
Fore_arm.eulerAngles will give you the rotation angles relative to Upper_arm.
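A minimal sketch, assuming the nodes are held in variables named upperArm and foreArm:

// Fore_arm is a child of Upper_arm, so its euler angles are already
// expressed relative to Upper_arm: the elbow angles, in radians.
let elbow = foreArm.eulerAngles
print("pitch:", elbow.x, "yaw:", elbow.y, "roll:", elbow.z)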

How to calculate camera orientation using one point at a large distance (using OpenCV)?

Let's say I have a pinhole camera with known intrinsic parameters, i.e. the camera matrix and distortion coefficients. Let's say there is a point at a large enough distance from the camera that we can say it is placed at infinity.
Given the image coordinates of this point in pixels, I would like to calculate the camera rotation relative to the axis that connects the camera and this point (so the rotation is (0, 0) if the camera is directed at this point and it lies at the optical center of the image).
How can this be done using opencv?
Many thanks!
You need to specify an additional constraint - rotating the camera from its current pose to one that aligns the optical axis with an arbitrary ray leaves the camera free to rotate about the ray itself (i.e. it leaves the "roll" angle unspecified).
Let's assume that you want the roll to be zero, i.e. that you want the motion to be a pure pan-tilt. This has a unique solution as long as the ray you want to align to is not parallel to the vertical image axis (in which case pan and roll are the same motion).
Then the solution is computed as follows. Let's use the OpenCV camera frame: let Z = [0, 0, 1]' (where ' means transpose) be the camera focal axis, oriented going out of the lens; Y = [0, 1, 0]' the vertical axis going down; and X = Z x Y (where 'x' is the cross product) the horizontal camera axis going toward the right of the image. So "pan" is a rotation about Y and "tilt" is a rotation about X.
Let U = [u1, u2, u3]', with ||U|| = 1, be the ray you want to rotate to. You want to apply a pan that brings Z onto the plane Puy defined by the vectors U and Y, then apply a tilt that brings Z onto U.
The angle of the first rotation is (angle between Z and Puy) = [90 deg - (angle between Z and Y x U)], because Y x U is orthogonal to Puy. Look up the expressions for computing the angle between vectors on Wikipedia or elsewhere online. Once you have the angle (or its cosine and sine), the rotation about Y can be expressed as a standard rotation matrix Ry.
The angle of the second rotation, about X once Z is on Puy, is the angle between Z and U after Ry has been applied to Z, or equivalently, between Z and inv(Ry) * U. Compute the angle between these vectors, and use it to build a standard rotation matrix about X, Rx.
The final transformation is then Rx * Ry.
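A hedged sketch of that recipe in Swift with simd (the question asks about OpenCV; the same algebra applies with cv::Mat). Given the unit ray U in the OpenCV camera frame, it computes the pan and tilt angles and composes Rx * Ry:

import Foundation
import simd

// Pure pan-tilt rotation (zero roll) mapping the optical axis Z = [0,0,1]
// onto the unit ray u, in the OpenCV frame (X right, Y down, Z forward).
func panTilt(to u: simd_float3) -> simd_float3x3 {
    // Solving R * Z = u with R = Rx(tilt) * Ry(pan) gives
    // sin(pan) = u.x and tan(tilt) = -u.y / u.z.
    let pan = atan2f(u.x, (u.y * u.y + u.z * u.z).squareRoot())
    let tilt = atan2f(-u.y, u.z)
    let ry = simd_float3x3(simd_quatf(angle: pan, axis: simd_float3(0, 1, 0)))
    let rx = simd_float3x3(simd_quatf(angle: tilt, axis: simd_float3(1, 0, 0)))
    return rx * ry // rightmost factor (pan) is applied first
}

The ray itself comes from the pixel: undistort the point (cv::undistortPoints in OpenCV), then normalize inv(K) * [px, py, 1].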

How to see scaling matrices from a geometric perspective

I'm using XNA, but it doesn't matter too much for this example. So let's say I have a sprite, and I apply a scaling matrix before anything else. Is the scaling matrix scaling the local axes of the sprite, or just moving its points? In other words, does applying a scaling matrix of 0.5f in world space to my sprite at the world origin scale down the sprite's local axes, or just scale all the points that make up the sprite by half?
The same question applies to a translation followed by scaling. In my head, I picture a translation matrix of (30, 30) as moving the sprite's local origin to (30, 30) and, as a result, the sprite's local axes to (30, 30). Then scaling by 0.5f would scale back the local axes, but I don't see why the origin of the sprite would now be at (15, 15).
Compounding the confusion: if you perform a translation of 1 to the right along the world x-axis, you move based on the scale you applied (so you would only move 0.5 in the world). This leads me to believe that the scale is applied to the object's own axes.
Btw, if you guys talk about the origin in your followups, could you state which origin you are referring to?
Thanks
Normally a sprite is defined by its vertices (points). Applying a scaling matrix to a sprite transforms those vertices (points).
A scale matrix always assumes (0, 0) is the origin of the scale transform. So if you scale a sprite centered at (30, 30), all points will stretch away from the (0, 0) point. If it helps, imagine the sprite as a small dot on a circle around the (0, 0) point, with that entire circle being scaled.
If you want to scale a sprite at (30, 30) from the center of the sprite, you have to translate the center of the sprite to (0, 0) first, then translate the sprite back out to (30, 30) after the scale has been performed.
So that would be:
Translate(-30, -30)
Scale(0.5)
Translate(30, 30)
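The same recipe as a small simd sketch (the question is XNA, but the matrix algebra is identical; the function name is made up):

import simd

// Scale by s about an arbitrary point, as a 3x3 homogeneous 2D matrix:
// translate the point to the origin, scale, translate back.
func scale(about center: simd_float2, by s: Float) -> simd_float3x3 {
    let toOrigin = simd_float3x3(rows: [simd_float3(1, 0, -center.x),
                                        simd_float3(0, 1, -center.y),
                                        simd_float3(0, 0, 1)])
    let scaling = simd_float3x3(diagonal: simd_float3(s, s, 1))
    let back = simd_float3x3(rows: [simd_float3(1, 0, center.x),
                                    simd_float3(0, 1, center.y),
                                    simd_float3(0, 0, 1)])
    return back * scaling * toOrigin // rightmost factor is applied first
}

With center = (30, 30) and s = 0.5, the point (30, 30) maps to itself, which is exactly the "scale from the center of the sprite" behaviour described above.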
To expand on Empyrean's answer, 3D worlds usually have at least four coordinate systems, each with its own local origin:
Object Space
World Space
Camera Space
View Space (2D!)
with three transformations:
Object to World
World to Camera
Camera to View
You can create new coordinate systems, for example 'Model Space', with the transformation 'Model to Object'. Using this, you get a series of steps:
Model -> scale -> Object
Object -> rotate -> translate -> World
World -> rotate -> translate -> Camera
Camera -> perspective -> View
In OpenGL you would push the matrices in the reverse order listed above, so the Model->Object transformation is the last to be pushed, and OpenGL should render the object correctly. I would assume XNA / DirectX has a similar system.
Getting more complex, Model Space can have a hierarchy of translations, scales and rotations in a tree to produce a skeletal system which can then be used to deform the model mesh. This is usually called Skinning.
So, to answer the question: depending on where in the chain you apply a transformation (a rotation, for example), you will get different results. In the Model->Object transformation, the model will rotate about the object's origin. In the Object->World transformation, the object will rotate about the world's origin.
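A tiny sketch of that last point (column-vector convention, where the rightmost matrix is applied first):

import simd

let rotate = simd_float4x4(simd_quatf(angle: .pi / 2, axis: simd_float3(0, 0, 1)))
var translate = matrix_identity_float4x4
translate.columns.3 = simd_float4(10, 0, 0, 1) // move 10 units along x

let spinInPlace = translate * rotate // rotate about the object's origin, then place it
let orbitOrigin = rotate * translate // place it, then rotate about the world origin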
