I'd like to create a SCNTransformConstraint orientation constraint so that a SCNNode is always oriented to the world's x, y & z axes, no matter how the node's parents are oriented / move about.
I can create an orientation constraint working in world space like this:
let orientationConstraint =
    SCNTransformConstraint.orientationConstraint(inWorldSpace: true) {
        (node, orientation) -> SCNQuaternion in
        return <<<need quaternion identity here>>>
    }
node.constraints = [orientationConstraint]
But I need my constraint callback to return the multiplicative identity quaternion. When using quaternions to describe orientation and rotation, as SceneKit does, the identity quaternion represents no rotation, or an orientation in world coordinates that is aligned with the world axes.
This is analogous to the way that scaling by 1 gives the same scale, and setting a scale of 1 resets something to have no scaling.
For SceneKit matrices there is SCNMatrix4Identity, and for the additive identity of SceneKit vectors there is SCNVector3Zero.
How can I get the quaternion identity for multiplication?
There does not appear to be a SCNQuaternionIdentity.
The identity quaternion can be constructed with SCNQuaternion(0, 0, 0, 1).
Be careful not to use the default constructor SCNQuaternion(). It is equivalent to SCNQuaternion(0, 0, 0, 0), which doesn't represent a valid orientation.
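Putting that into the constraint from the question, a minimal sketch (assuming node is the node you want pinned to the world axes, as in the original snippet):

import SceneKit

// Keep `node` aligned with the world's x, y and z axes, whatever its parents do.
let orientationConstraint =
    SCNTransformConstraint.orientationConstraint(inWorldSpace: true) { (_, _) -> SCNQuaternion in
        // The multiplicative identity: no rotation, i.e. aligned with the world axes.
        return SCNQuaternion(0, 0, 0, 1)
    }
node.constraints = [orientationConstraint]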
I'd like to simulate the shift of a tilt-shift/perspective-control lens in SceneKit on macOS.
Imagine the user has the camera facing a tall building at ground level, I'd like to be able to shift the 'lens' so that the projective distortion shifts (see e.g. Wikipedia).
Apple provides lots of physically-based parameters for SCNCamera (sensor height, aperture blade count), but I can't see anything obvious for this. It seems to exist in Unity.
Crucially I'd like to shift the lens so that the object stays in the same position relative to the camera. Obviously I could move the camera to get the effect, but the object needs to stay centred in the viewport (and I can't see a way to modify the viewport either). I've tried to modify the .projectionTransform matrix directly, but it was unsuccessful.
Thanks!
There is no API on SCNCamera that does that out of the box. As you guessed, one has to create a custom projection matrix and set it on the projectionTransform property.
I finally worked out the correct adjustment to the projection matrix. The maths is quite confusing to follow, because it is a 4x4 matrix rather than the 3x4 or 4x3 you'd use for a plain camera projection matrix, which also makes it especially confusing to work out whether it expects row vectors or column vectors.
Anyway, the correct element is .m32 for the y axis:
let camera = SCNNode()
camera.camera = SCNCamera()
let yShift: CGFloat = 1.0
camera.camera!.projectionTransform.m32 = yShift
Presumably .m31 will shift in the x axis, but I have to admit I haven't tested this.
When I thought about it a bit more, I also realised that the effect I actually wanted involves moving the camera too. Adjusting .m32 simulates moving the sensor, which will appear to move the subject relative to the camera, as if you had a wide angle lens and you were moving the crop. To keep the subject centred in frame, you need to move the camera's position too.
With a bit (a lot) of help from this blog post and in particular this code, I implemented this too:
let distance: CGFloat = 1.0 // calculate distance from subject here
let fovRadians = camera.camera!.fieldOfView * CGFloat.pi / 180.0
let yAdjust = tan(fovRadians / 2) * distance * yShift
camera.position = camera.position - camera.worldUp * yAdjust
(any interested readers could presumably work out the x axis shift from the source above)
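Putting both pieces together, here is a rough sketch of how the two adjustments might be wrapped into a single hypothetical helper. The applyLensShift name is mine, and the SCNVector3 operators (which the snippet above takes from the linked helper code) are defined inline for completeness, assuming macOS where SCNVector3 and SCNMatrix4 use CGFloat:

import SceneKit

// Assumed helpers: macOS SCNVector3 has no built-in arithmetic operators.
func - (a: SCNVector3, b: SCNVector3) -> SCNVector3 {
    return SCNVector3(a.x - b.x, a.y - b.y, a.z - b.z)
}
func * (v: SCNVector3, s: CGFloat) -> SCNVector3 {
    return SCNVector3(v.x * s, v.y * s, v.z * s)
}

// Hypothetical helper combining the projection shift with the compensating camera move.
func applyLensShift(_ yShift: CGFloat, to cameraNode: SCNNode, subjectDistance distance: CGFloat) {
    guard let camera = cameraNode.camera else { return }

    // Shift the "sensor" vertically via the projection matrix.
    camera.projectionTransform.m32 = yShift

    // Move the camera so the subject stays centred in the frame.
    let fovRadians = camera.fieldOfView * CGFloat.pi / 180.0
    let yAdjust = tan(fovRadians / 2) * distance * yShift
    cameraNode.position = cameraNode.position - cameraNode.worldUp * yAdjust
}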
I am using ARKit's ARFaceTrackingConfiguration with ARConfiguration.WorldAlignment.camera alignment, but I found that the documentation (seemingly) does not reflect reality.
Based on the excerpt of documentation below, I would expect that the face anchor's transform is expressed in right handed coordinate system. However, when I tried moving my head, I noticed that the Z coordinate of the face anchor is always negative (i.e. faceAnchor.transform.columns.3.z < 0). Note that moving head in the X and Y directions corresponds to expected outcome (unlike Z coordinate).
Camera alignment defines a coordinate system based on the native sensor orientation of the device camera. Relative to a AVCaptureVideoOrientation.landscapeRight-oriented camera image, the x-axis points to the right, the y-axis points up, and the z-axis points out the front of the device (toward the user).
I want the transform to behave as per the documentation, i.e. the Z coordinate of face anchor should be positive given that documentation says "the z-axis points out the front of the device (toward the user)". So far it seems the Z-axis points out the back of the device…
Am I missing something obvious?
I tried to repair the rotation by the following code, but I am not sure if it's correct way to fix this:
// Repair rotation
let oldFaceRotation = simd_quatf(face.transform) // get quaternion from the face transform
let repairedFaceRotation = simd_quatf(ix: oldFaceRotation.axis.y, iy: oldFaceRotation.axis.x, iz: -oldFaceRotation.axis.z, r: oldFaceRotation.real)
// Repair translation
var repairedPosition = face.transform.columns.3
repairedPosition.z *= -1
// Combine
var correctedFaceTransform = float4x4(repairedFaceRotation)
correctedFaceTransform.columns.3 = repairedPosition
It seems quite obvious:
When the ARSession is running and the ARCamera begins tracking the environment, it places the world origin axes in front of your face at (x: 0, y: 0, z: 0). Just check it using:
sceneView.debugOptions = [.showWorldOrigin]
So your face's position must be in the positive part of the Z axis of world coordinates.
Thus, the ARFaceAnchor will be placed in the positive Z-axis direction as well.
And when you use ARFaceTrackingConfiguration vs ARWorldTrackingConfiguration, there are two things to consider:
Rear Camera moves towards objects along negative Z-axes (positive X-axis is on the right).
Front Camera moves towards faces along positive Z-axes (positive X-axis is on the left).
Hence, when you are "looking" through the TrueDepth camera, the 4x4 matrix is mirrored.
Although I still don't know why the face anchor does not behave as described in the documentation, I can at least answer how to convert its left-handed system into the Metal- and SceneKit-friendly right-handed system (X axis to the right, Y axis up, Z axis from the screen towards the user):
func faceAnchorPoseToRHS(_ mat: float4x4) -> float4x4 {
    // Negate the Z component of the translation.
    let correctedPos = float4(x: mat.columns.3.x, y: mat.columns.3.y, z: -mat.columns.3.z, w: 1)
    // Mirror the rotation: flip the angle and the Z component of the axis.
    let quat = simd_quatf(mat)
    let newQuat = simd_quatf(angle: -quat.angle, axis: float3(quat.axis.x, quat.axis.y, -quat.axis.z))
    var newPose = float4x4(newQuat)
    newPose.columns.3 = correctedPos
    return newPose
}
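For example, the conversion above might be used from an ARSCNViewDelegate callback like this (a sketch; the method would live in whatever type acts as your delegate, and what you do with the corrected pose depends on your app):

import ARKit

func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    guard let faceAnchor = anchor as? ARFaceAnchor else { return }

    // Raw pose as delivered with the .camera world alignment (left-handed here).
    print("raw z:", faceAnchor.transform.columns.3.z)

    // Converted into the right-handed SceneKit/Metal convention.
    let rhsPose = faceAnchorPoseToRHS(faceAnchor.transform)
    print("corrected z:", rhsPose.columns.3.z)
    // Apply rhsPose to your own content node as needed.
}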
When I add a new node with ARKit (ARSKView), the object is positioned based on the device camera. So if your phone is facing down or tilted, the object will be in that direction as well. How can I instead place the object based on the horizon?
For that, right after a new node's creation, use the worldOrientation instance property, which controls the node's orientation relative to the scene's world coordinate space.
var worldOrientation: SCNQuaternion { get set }
This quaternion isolates the rotational aspect of the node's worldTransform matrix, which in turn is the conversion of the node's transform from local space to the scene's world coordinate space. That is, it expresses the difference in axis and angle of rotation between the node and the scene's rootNode.
let worldOrientation = sceneView.scene.rootNode.worldOrientation
yourNode.worldOrientation = worldOrientation /* quaternion: X, Y, Z, W components */
P.S. (as you updated your question):
If you're using SpriteKit, the 2D sprites you spawn in an ARSKView always face the camera. So, if the camera moves around a point in the real scene, all the sprites are rotated about their pivot points so that they keep facing the camera.
Nothing can prevent you from using SceneKit and SpriteKit together.
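If you go the SceneKit route and the camera keeps moving, the one-off assignment above can be made permanent with the same world-space orientation constraint discussed in the first answer. A minimal sketch (yourNode as above):

import SceneKit

// Keep the node aligned with the world axes even as the camera moves.
let keepLevel = SCNTransformConstraint.orientationConstraint(inWorldSpace: true) { (_, _) -> SCNQuaternion in
    return SCNQuaternion(0, 0, 0, 1)   // identity: no rotation relative to the world
}
yourNode.constraints = [keepLevel]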
I'm new to ARKit. I want to get the direction from anchor 1 to anchor 2. Currently, I can get the position from transform.columns.3. However, this works only for fixed axes (the z-axis always points toward the user).
How can I compare two anchors with respect to all six degrees of freedom (position plus pitch, yaw, roll)? What should I read to get more detailed information about this?
func showDirection(of object: ARAnchor) { // only works for fixed axes
    if let currentFrame = sceneView.session.currentFrame {
        print("diff(x) = \(currentFrame.camera.transform.columns.3.x - object.transform.columns.3.x)")
        print("diff(y) = \(currentFrame.camera.transform.columns.3.y - object.transform.columns.3.y)")
        print("diff(z) = \(currentFrame.camera.transform.columns.3.z - object.transform.columns.3.z)")
    }
}
I think my answer to this other user's question may be helpful. Basically, using SceneKit or ARKit, you can find the orientations of the camera and of your target anchor, and do some quaternion math to find the axis and angle of the relative rotation between them on x, y and z axes. My example assumed a SceneKit/ARKit app, which allows you to use quaternions instead of matrices, but the math should essentially be the same for ARKit transforms. If you use ARKit's simd_float4x4 transform matrices, you could find one matrix in the space of the other (A.inverse * B) and use the resulting matrix to glean relative position and orientation.
Your question was a little hard to follow, as I'm not sure if the orientation of the anchor you're targeting matters in your case, but this should help as far as comparing two anchors with respect to pitch, yaw and roll.
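As a rough sketch of that matrix approach (the names here are mine; it assumes you have an ARFrame for the camera and the target ARAnchor):

import ARKit
import simd

// Express `object` in the camera's space and read off relative position and orientation.
func relativePose(of object: ARAnchor, in frame: ARFrame) -> (position: SIMD3<Float>, orientation: simd_quatf) {
    // B expressed in the space of A:  A.inverse * B
    let relative = frame.camera.transform.inverse * object.transform

    // Translation column: where the anchor sits relative to the camera.
    let position = SIMD3<Float>(relative.columns.3.x, relative.columns.3.y, relative.columns.3.z)

    // Rotation part as a quaternion: the relative pitch, yaw and roll.
    let orientation = simd_quatf(relative)

    return (position, orientation)
}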
I'm using XNA but it doesn't matter too much for this example. So let's say I have a sprite, and I apply a scaling matrix before anything else. Does the scaling matrix scale the sprite's local axes, or does it just move the sprite's points? In other words, does applying a scaling matrix of 0.5f in world space to my sprite at the world origin scale down the sprite's local axes, or just scale all the points that make up that sprite by half?
The same question applies to a translation followed by a scale. In my head, I picture a translation matrix of 30,30 as moving the sprite's local origin to 30,30 and, as a result, the sprite's local axes to 30,30. Then, scaling by 0.5f would scale back the local axes, but I don't see why the origin of the sprite would now be at 15,15.
Compounding the confusion is the fact that if you perform a translation of 1 to the right on the x-axis in the world, you now move based on the scale you applied (so you would only move 0.5 in the world). This leads me to believe that the scale is applied to the object's own axes.
Btw, if you guys talk about the origin in your followups, could you state which origin you are referring to?
Thanks
Normally a sprite is defined by its vertices (points). Applying a scaling matrix to a sprite transforms those vertices.
A scale matrix always assumes (0, 0) is the origin of the scale transform. So if you scale a sprite centered at (30, 30) all points will stretch away from the (0, 0) point. If it helps, imagine the sprite as a small dot on a circle around the (0, 0) point with that entire circle being scaled.
If you want to scale a sprite at (30, 30) from the center of the sprite, you have to translate the center of the sprite to (0, 0) first, then translate the sprite back out to (30, 30) after the scale has been performed.
So that would be:
Translate(-30, -30)
Scale(0.5)
Translate(30, 30)
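In matrix form (a small sketch using simd to match the rest of this page rather than XNA/C#; the function name is mine), that composition looks like this, with column vectors so the rightmost matrix is applied to the point first:

import simd

// Scale a 2D point about `center` instead of the origin, using 3x3 homogeneous matrices.
func scaleAboutCenter(_ point: SIMD2<Float>, center: SIMD2<Float>, scale: Float) -> SIMD2<Float> {
    let toOrigin = float3x3(rows: [
        SIMD3<Float>(1, 0, -center.x),
        SIMD3<Float>(0, 1, -center.y),
        SIMD3<Float>(0, 0, 1),
    ])
    let scaling = float3x3(diagonal: SIMD3<Float>(scale, scale, 1))
    let back = float3x3(rows: [
        SIMD3<Float>(1, 0, center.x),
        SIMD3<Float>(0, 1, center.y),
        SIMD3<Float>(0, 0, 1),
    ])
    // Translate(-30, -30), then Scale(0.5), then Translate(30, 30).
    let m = back * scaling * toOrigin
    let p = m * SIMD3<Float>(point.x, point.y, 1)
    return SIMD2<Float>(p.x, p.y)
}

// The sprite's centre stays put while everything else shrinks toward it:
// scaleAboutCenter(SIMD2<Float>(40, 40), center: SIMD2<Float>(30, 30), scale: 0.5) == (35, 35)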
To expand on Empyrean's answer, 3D worlds usually have at least four coordinate systems, each with its own local origin:
Object Space
World Space
Camera Space
View Space (2D!)
with three transformations:
Object to World
World to Camera
Camera to View
You can create new coordinate systems, for example 'Model Space', with the transformation 'Model to Object'. Using this, you get a series of steps:
Model -> scale -> Object
Object -> rotate -> translate -> World
World -> rotate -> translate -> Camera
Camera -> perspective -> View
In OpenGL you would push the matrices in the reverse order listed above, so the Model->Object transformation is the last to be pushed, and OpenGL should render the object correctly. I would assume XNA / DirectX has a similar system.
Getting more complex, Model Space can have a hierarchy of translations, scales and rotations in a tree to produce a skeletal system which can then be used to deform the model mesh. This is usually called Skinning.
So, to answer the question: depending on which stage you apply a transformation at, a rotation for example, you will get different results. In the Model->Object transformation, the model will rotate about the object's origin. In the Object->World transformation, the object will rotate about the world's origin.
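To make that last point concrete, here is a small simd sketch (hypothetical numbers, runnable in a playground): the same 90° rotation spins the object in place when composed on the object side of the translation, and orbits the world origin when composed on the world side:

import simd

let rotate90 = float4x4(simd_quatf(angle: .pi / 2, axis: SIMD3<Float>(0, 0, 1)))
let translate = float4x4(rows: [
    SIMD4<Float>(1, 0, 0, 10),   // place the object at x = 10 in the world
    SIMD4<Float>(0, 1, 0, 0),
    SIMD4<Float>(0, 0, 1, 0),
    SIMD4<Float>(0, 0, 0, 1),
])
let objectPoint = SIMD4<Float>(1, 0, 0, 1)   // a vertex in object space

// Rotate in object space, then place in the world: spins about the object's origin.
let spinInPlace = translate * rotate90 * objectPoint   // ≈ (10, 1, 0)

// Place in the world, then rotate: orbits about the world's origin.
let orbitWorld = rotate90 * translate * objectPoint    // ≈ (0, 11, 0)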