SceneKit - multiple rotations of an SCNNode and direction of axes - iOS

I have an SCNNode which I keep rotating with a pan gesture in 90° increments. The first few rotations work as intended, but in a scenario where the node's axes have rotated opposite to their original direction, the rotation is executed the wrong way. How can I determine the orientation of the axes after each rotation?
Scenario (using a cube for simplicity):
I rotate the cube 90° about the Y axis. Y still points up, X now points toward the camera, Z points right.
I rotate the cube another 90° about Y. X now points left, Z toward the camera.
PROBLEM: I now try to rotate 90° about the X axis. Because X has been rotated 180°, the rotation runs in reverse.
How can I tell when to rotate about (-1,0,0) and when about (1,0,0)?
I'm quite new to the world of 3D math; I hope I explained my issue correctly.

After further research I realised I had chosen the wrong approach entirely. The way to achieve what I want is to rotate the node using the axes of the rootNode: this way I don't need to worry about my node's local axes.
EDIT: updated code based on Xartec's suggestions
// Build the rotation about the chosen axis and append it to the node's current transform
let rotation = SCNMatrix4MakeRotation(angle, x, y, Float(0))
let newTransform = SCNMatrix4Mult(bricksNode.transform, rotation)
// Animate from the current transform to the new one, expressed in world (rootNode) space
let animation = CABasicAnimation(keyPath: "transform")
animation.fromValue = bricksNode.transform
animation.toValue = scnScene.rootNode.convertTransform(newTransform, from: nil)
animation.duration = 0.5
bricksNode.addAnimation(animation, forKey: nil)
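The world-axis idea can be sanity-checked without SceneKit. Below is a minimal, self-contained sketch (plain Swift; the matrix helpers are my own, not SceneKit API) showing that after two 90° yaws the node's local X axis points along world -X, which is exactly why a "local X" rotation then runs backwards while a world-axis rotation stays predictable:

```swift
import Foundation

// Plain 3x3 rotation helpers (column-vector convention); illustrative
// stand-ins, not SceneKit API.
typealias Mat3 = [[Double]]

func rotX(_ a: Double) -> Mat3 { [[1, 0, 0], [0, cos(a), -sin(a)], [0, sin(a), cos(a)]] }
func rotY(_ a: Double) -> Mat3 { [[cos(a), 0, sin(a)], [0, 1, 0], [-sin(a), 0, cos(a)]] }

func mul(_ m: Mat3, _ n: Mat3) -> Mat3 {
    (0..<3).map { i in (0..<3).map { j in
        (0..<3).reduce(0.0) { acc, k in acc + m[i][k] * n[k][j] }
    } }
}
func apply(_ m: Mat3, _ v: [Double]) -> [Double] {
    (0..<3).map { i in (0..<3).reduce(0.0) { acc, k in acc + m[i][k] * v[k] } }
}

// Two 90-degree yaws, as in the question:
let node = mul(rotY(.pi / 2), rotY(.pi / 2))

// The node's local X axis, expressed in world coordinates:
let localX = apply(node, [1, 0, 0])          // (-1, 0, 0): it flipped

// "Rotate 90 degrees about X" now depends on which X you mean:
let aboutWorldX = mul(rotX(.pi / 2), node)   // premultiply: world axis
let aboutLocalX = mul(node, rotX(.pi / 2))   // postmultiply: local axis
// The two results differ, which is exactly the reversal in the question.
```

In SceneKit's row-major SCNMatrix4 convention the sides are mirrored, I believe, so SCNMatrix4Mult(node.transform, rotation) is the world-side product; the moral is the same either way: pick the frame first, and the multiplication order follows.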

How to point the camera towards a SCNVector3 point below iOS 11

I just started learning SceneKit yesterday, so I may get some things wrong. I am trying to make my cameraNode look at an SCNVector3 point in the scene.
I also want my app to be available to users below iOS 11.0. However, the look(at:) function is iOS 11.0+ only.
Here is my function where I initialise the camera:
func initCamera() {
    cameraNode = SCNNode()
    cameraNode.camera = SCNCamera()
    cameraNode.position = SCNVector3(5, 12, 10)
    if #available(iOS 11.0, *) {
        cameraNode.look(at: SCNVector3(0, 5, 0)) // Calculate the look angle
    } else {
        // How can I calculate the orientation? <-----------
    }
    print(cameraNode.rotation) // Prints: SCNVector4(x: -0.7600127, y: 0.62465125, z: 0.17941462, w: 0.7226559)
    gameScene.rootNode.addChildNode(cameraNode)
}
The orientation of SCNVector4(x: -0.7600127, y: 0.62465125, z: 0.17941462, w: 0.7226559) in degrees is x: -43.5, y: 35.8, z: 10.3, and I don't understand w. (Also, why isn't z = 0? I thought z was the roll...?)
Here are my workings for recreating what I thought the Y angle should be:
So I worked it out to be 63.4 degrees, but the returned rotation shows that it should be 35.8 degrees. Is there something wrong with my calculations, do I not fully understand SCNVector4, or is there another method to do this?
I looked at "Explaining in Detail the ScnVector4 method" to understand what SCNVector4 is, but I still don't really understand what w is for. It says that w is the 'angle of rotation', which is what I thought x, y, and z were for.
If you have any questions, please ask!
Although @rickster has explained the properties of the node, I have figured out a method to rotate the node to look at a point using maths (trigonometry).
Here is my code:
// Extension for Float
extension Float {
    /// Convert degrees to radians
    func asRadians() -> Float {
        return self * Float.pi / 180
    }
}
and also:
// Extension for SCNNode
extension SCNNode {
    /// Look at a SCNVector3 point
    func lookAt(_ point: SCNVector3) {
        // Find change in positions
        let changeX = self.position.x - point.x // Change in X position
        let changeY = self.position.y - point.y // Change in Y position
        let changeZ = self.position.z - point.z // Change in Z position
        // Calculate the X and Y angles
        let angleX = atan2(changeZ, changeY) * (changeZ > 0 ? -1 : 1)
        let angleY = atan2(changeZ, changeX)
        // Calculate the X and Y rotations
        let xRot = Float(-90).asRadians() - angleX // X rotation
        let yRot = Float(90).asRadians() - angleY // Y rotation
        self.eulerAngles = SCNVector3(CGFloat(xRot), CGFloat(yRot), 0) // Rotate
    }
}
And you call the function using:
cameraNode.lookAt(SCNVector3(0, 5, 0))
Hope this helps people in the future!
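As a cross-check, the same angles can be derived analytically. The sketch below is plain math, not SceneKit API (lookAngles is my own name): it assumes the camera's default forward is -Z and that yaw (about Y) is applied outside pitch (about X). It verifies only its own internal consistency, not SceneKit's exact Euler order.

```swift
import Foundation

// Derive pitch/yaw from the eye->target direction, assuming the camera's
// default forward is -Z and yaw is applied outside pitch.
func lookAngles(eye: (x: Double, y: Double, z: Double),
                target: (x: Double, y: Double, z: Double)) -> (pitch: Double, yaw: Double) {
    let dx = target.x - eye.x
    let dy = target.y - eye.y
    let dz = target.z - eye.z
    let len = (dx * dx + dy * dy + dz * dz).squareRoot()
    let pitch = asin(dy / len)   // tilt up/down toward the target
    let yaw = atan2(-dx, -dz)    // heading about the Y axis
    return (pitch, yaw)
}

// The camera position and target from the question:
let angles = lookAngles(eye: (5, 12, 10), target: (0, 5, 0))
```

Rebuilding the forward vector as R_y(yaw) * R_x(pitch) applied to (0, 0, -1) reproduces the normalized eye-to-target direction, which is the consistency the hand calculation in the question was after.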
There are three ways to express a 3D rotation in SceneKit:
What you're doing on paper is calculating separate angles around the x, y, and z axes. These are called Euler angles, or pitch, yaw, and roll. You might get results that more closely resemble your hand calculations if you use eulerAngles or simdEulerAngles instead of rotation. (Or you might not, because one of the difficulties of an Euler-angle system is that you have to apply each of those three rotations in the correct order.)
simdRotation or rotation uses a four-component vector (float4 or SCNVector4) to express an axis-angle representation of the rotation. This relies on a bit of math that isn't obvious for many newcomers to 3D graphics: the result of any sequence of rotations around different axes can be minimally expressed as a single rotation around a new axis.
For example, a rotation of π/2 radians (90°) around the z-axis (0,0,1) followed by a rotation of π/2 around the y-axis (0,1,0) has the same result as a rotation of 2π/3 around the axis (-1/√3, 1/√3, 1/√3).
This is where you're getting confused about the x, y, z, and w components of a SceneKit rotation vector — the first three components are lengths, expressing a 3D vector, and the fourth is a rotation in radians around that vector.
Quaternions are another way to express 3D rotation (and one that's even further off the beaten path for those of us with the formal math education common to undergraduate computer science curricula, but not crazy advanced, either). These have lots of great features for 3D graphics, like being easy to compose and interpolate between. In SceneKit, the simdOrientation or orientation property lets you work with a node's rotation as a quaternion.
Explaining how quaternions work is too much for one SO answer, but the practical upshot is this: if you're working with a good vector math library (like the SIMD library built into iOS 9 and later), you can basically treat them as opaque — just convert from whichever other rotation representation is easiest for you, and reap the benefits.
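The axis-angle example above (π/2 about z, then π/2 about y, equals 2π/3 about (-1/√3, 1/√3, 1/√3)) can be verified with a few lines of quaternion arithmetic. The Quat type below is a hand-rolled stand-in for simd_quatf, and the product order follows the intrinsic convention (the second rotation is taken about the already-rotated local y axis):

```swift
import Foundation

// Minimal quaternion (w + xi + yj + zk); a stand-in for simd_quatf,
// just to check the arithmetic.
struct Quat {
    var w, x, y, z: Double

    init(w: Double, x: Double, y: Double, z: Double) {
        self.w = w; self.x = x; self.y = y; self.z = z
    }

    /// Axis must be unit length; angle is in radians.
    init(angle: Double, axis: (Double, Double, Double)) {
        let s = sin(angle / 2)
        self.init(w: cos(angle / 2), x: axis.0 * s, y: axis.1 * s, z: axis.2 * s)
    }

    // Hamilton product: q * r applies r about q's already-rotated (local) axes.
    static func * (q: Quat, r: Quat) -> Quat {
        Quat(w: q.w * r.w - q.x * r.x - q.y * r.y - q.z * r.z,
             x: q.w * r.x + q.x * r.w + q.y * r.z - q.z * r.y,
             y: q.w * r.y - q.x * r.z + q.y * r.w + q.z * r.x,
             z: q.w * r.z + q.x * r.y - q.y * r.x + q.z * r.w)
    }

    var angle: Double { 2 * acos(w) }
    var axis: (Double, Double, Double) {
        let s = (1 - w * w).squareRoot()
        return (x / s, y / s, z / s)
    }
}

let qz = Quat(angle: .pi / 2, axis: (0, 0, 1))
let qy = Quat(angle: .pi / 2, axis: (0, 1, 0))
let q = qz * qy
// q.angle == 2π/3 and q.axis == (-1/√3, 1/√3, 1/√3), matching the text
```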

SCNNode direction issue

I put an SCNNode into a scene. I want to give it the proper rotation in space, because this node is a pyramid. I want the Z axis to point at point V2, and the X axis to point at point V1 (V2 and V1 are calculated dynamically; in this case the angle between the axes will be 90 degrees, because I calculate them properly).
The problem: I can't point the X axis, because SCNLookAtConstraint(target: nodeWithV2) points only the Z axis. What I see is that the Z axis is OK, but the X axis is always random, which is why my pyramid's orientation is always wrong. How can I point the X axis?
Here is my code:
let pyramidGeometry = SCNPyramid(width: 0.1, height: 0.2, length: 0.001)
pyramidGeometry.firstMaterial?.diffuse.contents = UIColor.white
pyramidGeometry.firstMaterial?.lightingModel = .constant
pyramidGeometry.firstMaterial?.isDoubleSided = true
let nodePyramid = SCNNode(geometry: pyramidGeometry)
nodePyramid.position = SCNVector3(2, 1, 2)
parent.addChildNode(nodePyramid)
let nodeZToLookAt = SCNNode()
nodeZToLookAt.position = v2
parent.addChildNode(nodeZToLookAt)
let constraint = SCNLookAtConstraint(target: nodeZToLookAt)
nodePyramid.constraints = [constraint]
But that only sets the direction of the Z axis, which is why the X axis, and with it the rotation, is still random. How can I point the X axis at my point V1?
Starting with iOS 11, SCNLookAtConstraint has a localFront property that lets you change which axis is used to orient the constrained node.
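Before iOS 11, or when both axes must be pinned exactly, you can skip constraints and build the rotation yourself from the two desired directions. A sketch of the math in plain Swift (the vector helpers and sample coordinates are mine, not SceneKit API):

```swift
import Foundation

// Hand-rolled vector helpers (stand-ins, not SceneKit API).
struct Vec3 { var x, y, z: Double }

func sub(_ a: Vec3, _ b: Vec3) -> Vec3 { Vec3(x: a.x - b.x, y: a.y - b.y, z: a.z - b.z) }
func cross(_ a: Vec3, _ b: Vec3) -> Vec3 {
    Vec3(x: a.y * b.z - a.z * b.y, y: a.z * b.x - a.x * b.z, z: a.x * b.y - a.y * b.x)
}
func dot(_ a: Vec3, _ b: Vec3) -> Double { a.x * b.x + a.y * b.y + a.z * b.z }
func normalize(_ a: Vec3) -> Vec3 {
    let len = dot(a, a).squareRoot()
    return Vec3(x: a.x / len, y: a.y / len, z: a.z / len)
}

// Node position and two perpendicular targets (hypothetical values):
let p  = Vec3(x: 2, y: 1, z: 2)
let v2 = Vec3(x: 2, y: 1, z: 5)   // straight down +Z from p
let v1 = Vec3(x: 6, y: 1, z: 2)   // straight down +X from p

let zAxis = normalize(sub(v2, p))   // local +Z looks at v2
let xAxis = normalize(sub(v1, p))   // local +X looks at v1
let yAxis = cross(zAxis, xAxis)     // completes a right-handed basis
```

If I recall SceneKit's row-major SCNMatrix4 layout correctly, these three unit vectors become the rows of the node's transform (m11…m13 = xAxis, m21…m23 = yAxis, m31…m33 = zAxis), with the position in m41…m43.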

CMMotionData to SceneKit SCNNode orientation

Trying to use CoreMotion to correctly rotate a SceneKit camera. The scene I've built is rather simple: all I do is create a bunch of boxes distributed over an area, and the camera just points down the Z axis.
Unfortunately, the data coming back from device motion doesn't seem to relate to the device's physical position and orientation in any way. It just seems to meander randomly.
As suggested in this SO post, I'm passing the attitude's quaternion directly to the camera node's orientation property.
Am I misunderstanding what data Core Motion is giving me here? Shouldn't the attitude reflect the device's physical orientation? Or is it incremental movement, and I should be building upon the prior orientation?
This snippet here might help you:
let motionManager = CMMotionManager()
motionManager.deviceMotionUpdateInterval = 1.0 / 60.0
motionManager.startDeviceMotionUpdates(to: .main) { motion, error in
    guard let attitude = motion?.attitude else { return }
    // Offset the roll by 90 degrees so the camera starts out level
    let roll = Float(attitude.roll) + 0.5 * .pi
    let yaw = Float(attitude.yaw)
    let pitch = Float(attitude.pitch)
    self.cameraNode.eulerAngles = SCNVector3(x: -roll, y: yaw, z: -pitch)
}
This setting is for the device in landscape right. You can play around with other orientations by changing the + and - signs.
Remember to import CoreMotion.
For anyone who stumbles on this, here's a more complete answer so you can understand the need for negations and pi/2 shifts. You first need to know your reference frame. Spherical coordinate systems define points as vectors angled away from the z- and x- axes. For the earth, let's define the z-axis as the line from the earth's center to the north pole and the x-axis as the line from the center through the equator at the prime meridian (mid-Africa in the Atlantic).
For (lat, lon, alt), we can then define roll and yaw around the z- and y- axes in radians:
let roll = lon * Float.pi / 180
let yaw = (90 - lat) * Float.pi / 180
I'm pairing roll, pitch, and yaw with z, x, and y, respectively, as defined for eulerAngles.
The extra 90 degrees accounts for the north pole being at 90 degrees latitude instead of zero.
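That mapping is easy to sanity-check numerically (plain Swift; armAngles is a hypothetical helper wrapping the two lines above): the north pole (lat 90°) should leave the arm unrotated, and a point on the equator should tip it by 90°.

```swift
import Foundation

// Spherical placement: roll spins about Z (longitude), yaw tips away
// from the pole about Y (co-latitude).
func armAngles(lat: Float, lon: Float) -> (roll: Float, yaw: Float) {
    let roll = lon * Float.pi / 180
    let yaw = (90 - lat) * Float.pi / 180
    return (roll, yaw)
}

let northPole = armAngles(lat: 90, lon: 0)   // (0, 0): no rotation needed
let equator = armAngles(lat: 0, lon: 0)      // yaw = π/2: tipped onto the equator
```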
To place my SCNCamera on the globe, I used two SCNNodes: an 'arm' node and the camera node:
let scnCamera = SCNNode()
scnCamera.camera = SCNCamera()
scnCamera.position = SCNVector3(x: 0.0, y: 0.0, z: alt + EARTH_RADIUS)

let scnCameraArm = SCNNode()
scnCameraArm.position = SCNVector3(x: 0, y: 0, z: 0)
scnCameraArm.addChildNode(scnCamera)
The arm is positioned at the center of the earth, and the camera is placed alt + EARTH_RADIUS away, i.e. the camera is now at the north pole. To move the camera on every location update, we can now just rotate the arm node with new roll and yaw values:
scnCameraArm.eulerAngles.z = roll
scnCameraArm.eulerAngles.y = yaw
Without changing the camera's orientation, its virtual lens always faces the ground and its virtual 'up' direction points westward.
To change the virtual camera's orientation, the CMMotion callback returns a CMAttitude with roll, pitch, and yaw values relative to a z- and x-axis reference of your choosing. The magnetometer-based ones use a z-axis pointed away from gravity and an x-axis pointed at the north pole. So a phone with zero pitch, roll, and yaw would have its screen facing away from gravity, its back camera pointed at the ground, and the right side of its portrait mode facing north. Notice that this orientation is relative to gravity, not to the phone's portrait/landscape mode (which is also relative to gravity), so portrait/landscape is irrelevant here.
If you imagine the phone's camera in this orientation near the north pole on the prime meridian, you'll notice that the CMMotion reference is in a different orientation than the virtual camera (SCNCamera). Both cameras are facing the ground, but their respective y-axes (and x) are 180 degrees apart. To line them up, we need to spin one around its respective z-axis, i.e. add/subtract 180 degrees to the roll ...or, since they're expressed in radians, negate them for the same effect.
Also, as far as I can tell, CMAttitude doesn't explicitly document that its roll value means a rotation about the z-axis coming out of the phone's screen, and from experimenting, it seems that attitude.roll and attitude.yaw have opposite definitions than defined in eulerAngles, but maybe this is an artifact of the order that the rotational transformations are applied in virtual space with eulerAngles (?). Anyway, the callback:
motionManager?.startDeviceMotionUpdates(using: .xTrueNorthZVertical, to: OperationQueue.main, withHandler: { (motion: CMDeviceMotion?, err: Error?) in
    guard let m = motion else { return }
    scnCamera.eulerAngles.z = Float(m.attitude.yaw - Double.pi)
    scnCamera.eulerAngles.x = Float(m.attitude.pitch)
    scnCamera.eulerAngles.y = Float(m.attitude.roll)
})
You can also start with a different reference frame for your virtual camera, e.g. z-axis pointing through the prime meridian at the equator and x-axis pointing through the north pole (i.e. the CMMotion reference), but you'll still need to invert the longitude somewhere.
With this set up, you can build a scene heavily reliant on GPS locations pretty easily.

Converting position with regarding anchorPoint?

I have a sprite which is added to a CCSpriteBatchNode. I then fix the sprite's position and change its anchor point so that I can rotate the sprite around that point.
Hierarchy is sprite <- batchNode <- scene
Basically the sprite moves, but its .position property does not change. I need to get the real position of the sprite after the transformations, so I tried:
CGPoint p = sprite.position;
p = [sprite convertToWorldSpace:p];
However, the result does not match the sprite's position that I see in the scene.
The sprite's position is, by default, the point at the middle of the CCSprite. Changing the anchor point moves the sprite such that the anchor point shifts with respect to the sprite but stays fixed with respect to world space. For example, changing the anchor point of a square sprite to ccp(0,0) moves the square so that its bottom-left vertex lands where the square's center was initially. So while the square may seem to be "repositioned", its position property stays the same (unaffected by the change in anchor point) unless specifically changed.
EDIT
If by real position of the sprite, you mean its mid point after its anchor point has been changed then it can be calculated by taking into account the two transformations that have been applied on it i.e. Translation and Rotation.
First we take care of Translation:
Your sprite has moved by:
CGPoint translation;
translation.x = sprite.contentSize.width * (0.5 - sprite.anchorPoint.x);
translation.y = sprite.contentSize.height * (0.5 - sprite.anchorPoint.y);
Now we account for the change in rotation.
Converting the translation point into polar coordinates (working in radians throughout, since cosf/sinf expect radians):
float r = ccpDistance(translation, ccp(0, 0));
float theta = atan2f(translation.y, translation.x); // CCW angle from the x axis
If you rotated your sprite by D degrees (note that cocos2d rotation is clockwise-positive):
theta -= CC_DEGREES_TO_RADIANS(D);
CGPoint transPositionAfterRotation = ccp(r * cosf(theta), r * sinf(theta));
CGPoint realPosition = ccpAdd(sprite.position, transPositionAfterRotation);
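Here is the same computation as a compact, self-contained sketch (plain Swift rather than cocos2d; the ccp-style helpers are replaced by tuples, and realMidpoint is my own name). It assumes cocos2d's clockwise-positive rotation, so the midpoint offset is rotated by -D in standard math convention:

```swift
import Foundation

// Recover a sprite's visual midpoint after changing its anchor point
// and rotating; plain-Swift sketch of the cocos2d math above.
func realMidpoint(position: (x: Double, y: Double),
                  contentSize: (w: Double, h: Double),
                  anchor: (x: Double, y: Double),
                  rotationDegrees: Double) -> (x: Double, y: Double) {
    // Offset from the anchor to the sprite's midpoint, before rotation:
    let tx = contentSize.w * (0.5 - anchor.x)
    let ty = contentSize.h * (0.5 - anchor.y)
    // cocos2d rotation is clockwise-positive, so rotate the offset by -D:
    let a = -rotationDegrees * .pi / 180
    return (position.x + tx * cos(a) - ty * sin(a),
            position.y + tx * sin(a) + ty * cos(a))
}

// A 100x100 sprite pinned at its bottom-left corner, rotated 90° clockwise:
let mid = realMidpoint(position: (0, 0), contentSize: (100, 100),
                       anchor: (0, 0), rotationDegrees: 90)
// mid ≈ (50, -50): the midpoint swings below the pinned corner
```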

Reverse the rotation of one single axis of CMRotationMatrix

Please bear with me, I'm really awful at matrix math. I have a layer that I want to remain "stationary" with gravity at the referenceAttitude while the phone rotates to other attitudes. I have a motionManager working nicely, am using multiplyByInverseOfAttitude on the current attitude, and applying the resulting delta as a rotation to my layer using a CMRotationMatrix (doing separate CATransform3DRotates for the pitch, roll, and yaw caused considerable wackiness near the axes). It's basically inspired by code like this example.
I concat this with another transform to apply the m34 perspective trick before I apply the rotation to my layer.
[attitude multiplyByInverseOfAttitude:referenceAttitude];
CATransform3D t = CATransform3DIdentity;
CMRotationMatrix r = attitude.rotationMatrix;
t.m11=r.m11; t.m12=r.m12; t.m13=r.m13; t.m14=0;
t.m21=r.m21; t.m22=r.m22; t.m23=r.m23; t.m24=0;
t.m31=r.m31; t.m32=r.m32; t.m33=r.m33; t.m34=0;
t.m41=0; t.m42=0; t.m43=0; t.m44=1;
CATransform3D perspectiveTransform = CATransform3DIdentity;
perspectiveTransform.m34 = 1.0 / -650;
t = CATransform3DConcat(t, perspectiveTransform);
myUIImageView.layer.transform = t;
The result is pretty and works as you'd expect, the layer staying stationary with gravity as I move the phone around, except for a single axis, the y-axis: holding the phone flat and rolling it, the layer rolls double-time in the same direction as the phone instead of remaining stationary.
I don't know why this one axis moves wrong while the other moves correctly after applying the multiplyByInverseOfAttitude. When using separate CATransform3DRotates for the pitch, yaw, roll, I was able to easily correct the problem by multiplying the roll vector by -1, but I have no idea how to apply that to a rotation matrix. The problem obviously is only visible once you introduce perspective into the equation, so perhaps I'm doing that wrong. Inverting my m34 value fixes the roll but creates the same problem on the pitch. I either need to figure out why the rotation on this axis is backwards, invert the rotation on that axis via my matrix, or correct the perspective somehow.
You have to take into account the following:
In your case, the CMRotationMatrix needs to be transposed, i.e. its rows and columns swapped (see http://en.wikipedia.org/wiki/Transpose).
You don't need to start from CATransform3DIdentity, because you're overwriting every value anyway, so you can start with an uninitialized matrix. If you do want to use CATransform3DIdentity, you can omit setting the 0s and 1s, since they're already defined (CATransform3DIdentity is an identity matrix; see http://en.wikipedia.org/wiki/Identity_matrix).
To also correct the rotation around the Y axis, you need to multiply the transform by [1 0 0 0; 0 -1 0 0; 0 0 1 0; 0 0 0 1], i.e. flip the sign of the y components.
Make the following changes to your code:
CMRotationMatrix r = attitude.rotationMatrix;
CATransform3D t;
t.m11=r.m11; t.m12=r.m21; t.m13=r.m31; t.m14=0;
t.m21=r.m12; t.m22=r.m22; t.m23=r.m32; t.m24=0;
t.m31=r.m13; t.m32=r.m23; t.m33=r.m33; t.m34=0;
t.m41=0; t.m42=0; t.m43=0; t.m44=1;
CATransform3D perspectiveTransform = CATransform3DIdentity;
perspectiveTransform.m34 = 1.0 / -650;
t = CATransform3DConcat(t, perspectiveTransform);
t = CATransform3DConcat(t, CATransform3DMakeScale(1.0, -1.0, 1.0));
Or, if you set t to CATransform3DIdentity, just leave the 0s and 1s out:
...
CATransform3D t = CATransform3DIdentity;
t.m11=r.m11; t.m12=r.m21; t.m13=r.m31;
t.m21=r.m12; t.m22=r.m22; t.m23=r.m32;
t.m31=r.m13; t.m32=r.m23; t.m33=r.m33;
....
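The transpose step works because a pure rotation matrix is orthonormal: its inverse equals its transpose. A quick plain-Swift check (hand-rolled helpers, not Core Animation API):

```swift
import Foundation

typealias Mat3 = [[Double]]

// An arbitrary rotation: yaw then pitch (column-vector convention).
func rotX(_ a: Double) -> Mat3 { [[1, 0, 0], [0, cos(a), -sin(a)], [0, sin(a), cos(a)]] }
func rotY(_ a: Double) -> Mat3 { [[cos(a), 0, sin(a)], [0, 1, 0], [-sin(a), 0, cos(a)]] }

func mul(_ m: Mat3, _ n: Mat3) -> Mat3 {
    (0..<3).map { i in (0..<3).map { j in
        (0..<3).reduce(0.0) { acc, k in acc + m[i][k] * n[k][j] }
    } }
}
func transpose(_ m: Mat3) -> Mat3 {
    (0..<3).map { i in (0..<3).map { j in m[j][i] } }
}

let r = mul(rotY(0.7), rotX(-1.2))
let shouldBeIdentity = mul(r, transpose(r))   // ≈ I: the transpose undoes r
```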