Make an object orbit another in SceneKit - iOS

Say I have 2 nodes in my SceneKit scene. I want one node to rotate around, or orbit (like a planet orbiting a star), the other node once in a certain time interval. I know I can set up animations like so:
let anim = CABasicAnimation(keyPath: "rotation")
anim.fromValue = NSValue(scnVector4: SCNVector4(x: 0, y: 1, z: 0, w: 0))
anim.toValue = NSValue(scnVector4: SCNVector4(x: 0, y: 1, z: 0, w: Float(2 * Double.pi)))
anim.duration = 60
anim.repeatCount = .infinity
parentNode.addAnimation(anim, forKey: "spin around")
Is there an animation for "orbiting", and a way to specify the target node?

The way to do this is by using an additional helper SCNNode. The helper node defines its own coordinate system, and all of its child nodes move together with that coordinate system. A child node that is off-centre will therefore effectively be orbiting when viewed from the world coordinate system.
1. Add the HelperNode at the centre of your FixedPlanetNode (the orbited planet), perhaps as its child, but definitely at the same position.
2. Add your OrbitingPlanetNode as a child of the HelperNode, with an offset on one of the axes, e.g. 10 points on the X axis.
3. Start the HelperNode rotating (together with its coordinate system) around a different axis, e.g. the Y axis.
This will result in the OrbitingPlanetNode orbiting around the Y axis of HelperNode with an orbit radius of 10 points.
EXAMPLE
earthNode - fixed orbited planet
moonNode - orbiting planet
helperNode - helper node added to provide coordinate system
// assuming all planet geometry is at the centre of corresponding nodes
// also helperNode.position is set to (0, 0, 0)
[earthNode addChildNode:helperNode];
moonNode.position = SCNVector3Make(10, 0, 0);
[helperNode addChildNode:moonNode];
// set helperNode to rotate forever
SCNAction * rotation = [SCNAction rotateByX:0 y:3 z:0 duration:3]; // 3 radians around Y over 3 seconds
SCNAction * infiniteRotation = [SCNAction repeatActionForever:rotation];
[helperNode runAction:infiniteRotation];
I used actions and Objective-C as that is what I am familiar with, but this is perfectly doable in Swift and with animations.
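For completeness, here is the same setup in Swift with SCNAction (a minimal sketch following the steps above; it assumes earthNode already exists in the scene, as in the Objective-C example):

// all planet geometry is centred on its node; the helper sits at (0, 0, 0)
let helperNode = SCNNode()
earthNode.addChildNode(helperNode)

let moonNode = SCNNode(geometry: SCNSphere(radius: 1))
moonNode.position = SCNVector3Make(10, 0, 0) // orbit radius of 10
helperNode.addChildNode(moonNode)

// spin the helper (and with it the moon) forever around the Y axis
let rotation = SCNAction.rotateBy(x: 0, y: CGFloat(2 * Double.pi), z: 0, duration: 60)
helperNode.runAction(.repeatForever(rotation))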

Related

Node rotation value is reset when animating

I'm working on an AR project and need to position some 3D models in the scene when a specific image is recognized.
I also need the 3D models to rotate indefinitely around their Y axis, which I managed to do with the following code:
let spin = CABasicAnimation(keyPath: "rotation")
spin.fromValue = NSValue(scnVector4: SCNVector4(x: 0, y: 1, z: 0, w: 0))
spin.toValue = NSValue(scnVector4: SCNVector4(x: 0, y: 1, z: 0, w: Float(2 * Double.pi)))
spin.duration = 8
spin.repeatCount = .infinity
modelNode.addAnimation(spin, forKey: nil)
The issue I'm encountering is when I try to apply this rotation animation to a node that has been rotated beforehand, to set the correct starting orientation, using:
modelNode.transform = SCNMatrix4Mult(modelNode.transform, SCNMatrix4MakeRotation(Float(-Double.pi/2), 1, 0, 0))
In this case it seems that the animation doesn't take into account the current rotation of the node but uses the original one, nullifying my setup.
Am I doing something wrong?
How can I set a starting object rotation before animating another rotation?
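One common workaround (a sketch, not from the original thread, using the same helper-node idea as the orbit answer above): keep the fixed starting orientation and the animated spin on two different nodes, so the two rotations compose instead of competing for the same property.

// Hypothetical setup: modelNode is the model loaded for the recognized
// image, anchorNode stands for wherever modelNode was attached before.
let orientedContainer = SCNNode()
// bake the fixed -90° pitch into the container, not the model itself
orientedContainer.transform = SCNMatrix4MakeRotation(Float(-Double.pi / 2), 1, 0, 0)
orientedContainer.addChildNode(modelNode)
anchorNode.addChildNode(orientedContainer)

// the spin animation now only touches the model's own rotation,
// so the container's starting orientation is preserved
modelNode.addAnimation(spin, forKey: nil)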

iOS revert camera projection

I'm trying to estimate my device position relative to a QR code in space. I'm using ARKit and the Vision framework, both introduced in iOS 11, but the answer to this question probably doesn't depend on them.
With the Vision framework, I'm able to get the rectangle that bounds a QR code in the camera frame. I'd like to match this rectangle to the device translation and rotation necessary to transform the QR code from a standard position.
For instance if I observe the frame:
* *
B
C
A
D
* *
while if I were 1 m away from the QR code, centered on it, and assuming the QR code has a side of 10 cm, I'd see:
* *
A0 B0
D0 C0
* *
What was my device transformation between those two frames? I understand that an exact result might not be possible, because the observed QR code may be slightly non-planar and we're trying to estimate an affine transform on something that isn't perfectly one.
I guess sceneView.pointOfView?.camera?.projectionTransform is more helpful than ARKit's camera.projectionMatrix, since the latter already takes into account the transform inferred by ARKit, which I'm not interested in for this problem.
How would I fill in
func getTransform(
    qrCodeRectangle: VNBarcodeObservation,
    cameraTransform: SCNMatrix4
) {
    // qrCodeRectangle.topLeft etc. is the position in [0, 1] x [0, 1] of A0
    // expected real-world position of the QR code in a reference coordinate system
    let a0 = SCNVector3(x: -0.05, y: 0.05, z: 1)
    let b0 = SCNVector3(x: 0.05, y: 0.05, z: 1)
    let c0 = SCNVector3(x: 0.05, y: -0.05, z: 1)
    let d0 = SCNVector3(x: -0.05, y: -0.05, z: 1)
    let A0, B0, C0, D0 = ?? // CGPoints representing the position in the
    // camera frame for a camera at (0, 0, 0) facing Z+
    // then get the transform from (0, 0, 0) to the current position/rotation that sees
    // a0, b0, c0, d0 through the camera as qrCodeRectangle
}
==== Edit ====
After trying a number of things, I ended up going for camera pose estimation using OpenCV's projection and perspective solver, solvePnP. This gives me a rotation and translation that should represent the camera pose in the QR code referential. However, when using those values and placing objects corresponding to the inverse transformation, where the QR code should be in the camera space, I get inaccurate, shifted values, and I'm not able to get the rotation to work:
// some flavor of pseudo code below
func renderer(_ sender: SCNSceneRenderer, updateAtTime time: TimeInterval) {
guard let currentFrame = sceneView.session.currentFrame, let pov = sceneView.pointOfView else { return }
let intrinsics = currentFrame.camera.intrinsics
let QRCornerCoordinatesInQRRef = [(-0.05, -0.05, 0), (0.05, -0.05, 0), (-0.05, 0.05, 0), (0.05, 0.05, 0)]
// uses VNDetectBarcodesRequest to find a QR code and returns a bounding rectangle
guard let qr = findQRCode(in: currentFrame) else { return }
let imageSize = CGSize(
width: CVPixelBufferGetWidth(currentFrame.capturedImage),
height: CVPixelBufferGetHeight(currentFrame.capturedImage)
)
let observations = [
qr.bottomLeft,
qr.bottomRight,
qr.topLeft,
qr.topRight,
].map({ (imageSize.height * (1 - $0.y), imageSize.width * $0.x) })
// image and SceneKit coordinates are not the same
// replacing this by:
// (imageSize.height * (1.35 - $0.y), imageSize.width * ($0.x - 0.2))
// weirdly fixes an issue, see below
let (rotation, translation) = openCV.solvePnP(QRCornerCoordinatesInQRRef, observations, intrinsics)
// calls OpenCV's solvePnP and unpacks the results
let positionInCameraRef = -rotation.inverted * translation
let node = SCNNode(geometry: someGeometry)
pov.addChildNode(node)
node.position = translation
node.orientation = rotation.asQuaternion
}
Here is the output, where A, B, C, D are the QR code corners in the order they are passed to the program.
The predicted origin stays in place when the phone rotates, but it's shifted from where it should be. Surprisingly, if I shift the observation values, I'm able to correct this:
// (imageSize.height * (1 - $0.y), imageSize.width * $0.x)
// replaced by:
(imageSize.height * (1.35 - $0.y), imageSize.width * ($0.x - 0.2))
and now the predicted origin stays robustly in place. However, I don't understand where the shift values come from.
Finally, I've tried to get an orientation fixed relatively to the QR code referential:
var n = SCNNode(geometry: redGeometry)
node.addChildNode(n)
n.position = SCNVector3(0.1, 0, 0)
n = SCNNode(geometry: blueGeometry)
node.addChildNode(n)
n.position = SCNVector3(0, 0.1, 0)
n = SCNNode(geometry: greenGeometry)
node.addChildNode(n)
n.position = SCNVector3(0, 0, 0.1)
The orientation is fine when I look at the QR code straight on, but then it shifts by something that seems related to the phone rotation.
Outstanding questions I have are:
- How do I solve the rotation?
- Where do the position shift values come from?
- What simple relationship do rotation, translation, QRCornerCoordinatesInQRRef, observations, and intrinsics satisfy? Is it O ~ K^-1 * (R_3x2 | T) * Q? If so, that's off by a few orders of magnitude (see the note after the numerical values below).
If that's helpful, here are a few numerical values:

Intrinsics matrix (3x3):
1090.318     0.000   618.661
   0.000  1090.318   359.616
   0.000     0.000     1.000

imageSize: 1280.0 x 720.0
screenSize: 414.0 x 736.0
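For reference, and not from the original thread: OpenCV's solvePnP uses the standard pinhole relation s * O = K * (R | T) * Q, where O = (u, v, 1) is the observed pixel, Q = (X, Y, Z, 1) is the object point, and s is an arbitrary scale factor. The K^-1 form quoted in the question inverts this, which by itself would account for a discrepancy on the order of the focal length, i.e. a few orders of magnitude.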
==== Edit2 ====
I've noticed that the rotation works fine when the phone stays horizontally parallel to the QR code (i.e. the rotation matrix is [[a, 0, b], [0, 1, 0], [c, 0, d]]), no matter what the actual QR code orientation is. Other rotations don't work.
Coordinate systems' correspondence
Take into consideration that the Vision/CoreML coordinate system doesn't correspond to the ARKit/SceneKit coordinate system. For details, look at this post.
Rotation's direction
I suppose the problem is not in the matrix but in the vertex placement. For tracking 2D images you need to place the ABCD vertices counter-clockwise (starting from vertex A, located at the imaginary origin x: 0, y: 0). I think the Apple documentation on the VNRectangleObservation class (which describes projected rectangular regions detected by an image analysis request) is vague about this. You placed your vertices in the same order as in the official documentation:
var bottomLeft: CGPoint
var bottomRight: CGPoint
var topLeft: CGPoint
var topRight: CGPoint
But they need to be placed the same way that a positive rotation (about the Z axis) proceeds in a Cartesian coordinate system:
World Coordinate Space in ARKit (as well as in SceneKit and Vision) always follows a right-handed convention (the positive Y axis points upward, the positive Z axis points toward the viewer and the positive X axis points toward the viewer's right), but is oriented based on your session's configuration. Camera works in Local Coordinate Space.
Rotation about any axis is positive when counter-clockwise and negative when clockwise. For tracking in ARKit and Vision this is critically important.
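A minimal sketch of that reordering (a hypothetical helper, not from the original answer): Vision corners are normalized with a bottom-left origin, so convert them to pixel coordinates with a top-left origin and emit them counter-clockwise starting from the bottom-left vertex A:

import Vision
import CoreGraphics

// Returns A, B, C, D counter-clockwise (in Cartesian terms) in pixels,
// flipping Vision's bottom-left origin to the image's top-left origin.
func orderedCorners(of observation: VNRectangleObservation,
                    imageSize: CGSize) -> [CGPoint] {
    func toPixels(_ p: CGPoint) -> CGPoint {
        CGPoint(x: p.x * imageSize.width,
                y: (1 - p.y) * imageSize.height) // flip the Y axis
    }
    return [observation.bottomLeft,   // A
            observation.bottomRight,  // B
            observation.topRight,     // C
            observation.topLeft]      // D
        .map(toPixels)
}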
The order of rotation also matters. ARKit, like SceneKit, applies rotation relative to the node's pivot property in the reverse order of the components: first roll (about the Z axis), then yaw (about the Y axis), then pitch (about the X axis). So the rotation order is ZYX.
Math (trigonometry):
Notes on the diagram (omitted here): the bottom side is l (the QR code length), the left angle is k, and the top angle is i (the camera).

SceneKit to Metal migration - Euler Angle to Rotation Matrix and Z Coordinate System

I am moving from the SceneKit API to the Metal API. I have to do transforms on a node.
In SceneKit I do this by simply setting:
let camera = SCNCamera()
camera.zFar = 10000
camera.zNear = 0.1
camera.xFov = 93.299
camera.yFov = 77.65614
let cameraNode = SCNNode()
cameraNode.camera = camera
cameraNode.position = SCNVector3Make(0, 10, 0)
cameraNode.eulerAngles = SCNVector3Make(-2.degreesToRadians, 0, 0)
cameraNode.name = "CameraNode"
scene.rootNode.addChildNode(cameraNode)
let plane = SCNPlane(width:1, height:1)
plane.firstMaterial?.diffuse.contents = UIColor.blueColor()
plane.firstMaterial?.doubleSided = true
let node = SCNNode(geometry: plane)
node.position = SCNVector3Make(0, 0, 55)
node.eulerAngles = SCNVector3Make(90.degreesToRadians, -47.degreesToRadians, 0)
node.scale = SCNVector3Make(169, 169, 169)
scene.rootNode.addChildNode(node)
but in Metal there is no built-in way to set eulerAngles or to create the projection transform from xFov and yFov. As of now I am using GLKit for all matrix calculations.
Model matrix (using the node's properties): here I do the rotation in ZYX order, because that's how SceneKit applies the provided Euler angles:
var modelMatrix = GLKMatrix4MakeZRotation(0)
modelMatrix = GLKMatrix4RotateY(modelMatrix, -47.degreesToRadians)
modelMatrix = GLKMatrix4RotateX(modelMatrix, 90.degreesToRadians)
modelMatrix = GLKMatrix4Translate(modelMatrix, 0, 0, 100)
modelMatrix = GLKMatrix4Scale(modelMatrix, 169, 169, 169)
View matrix (using the camera node's properties):
var viewMatrix = GLKMatrix4MakeTranslation(0, 10, -0)
viewMatrix = GLKMatrix4RotateZ(viewMatrix, 0)
viewMatrix = GLKMatrix4RotateY(viewMatrix, 0)
viewMatrix = GLKMatrix4RotateX(viewMatrix,-2.degreesToRadians)
Perspective transform (using the camera's properties):
let projectionMatrix = GLKMatrix4MakePerspective(77.65614, Float(1024/682.66666666666674), 0.1, 10000)
Here 1024 and 682.66666666666674 are my view's height and width.
The SceneKit output and the Metal output are not the same. For some reason the plane is always rendered in the top half of the view in Metal, whereas in SceneKit it is rendered in the bottom half (which is what I wanted), and adjusting the yFov in Metal changes whether the plane lands in the top or bottom half. The scaling and rotation of the plane in the Metal view also differ from SceneKit's.
My questions are:
1. SceneKit's coordinate system uses +Z in front and -Z at back, whereas Metal uses -Z in front and +Z at back. In Metal, will simply multiplying all the Z rotation and translation values by -1 fix this?
2. How do I apply Euler angles in Metal? From the links and Stack Overflow answers I've gone through, my understanding is that Euler angles are per-axis rotations multiplied in a specific order. Here I multiply in ZYX order because that's how SceneKit applies them.
3. How do I generate a perspective matrix from xFov and yFov? I am using GLKit for all matrix transformations, and it has no such function like SceneKit does; I am passing viewWidth/viewHeight as the aspect. How does one derive the aspect from xFov and yFov? Is it as simple as aspect = xFov / yFov, or something different? There is this library https://github.com/nicklockwood/VectorMath/blob/master/VectorMath/VectorMath.swift which calculates the aspect as xFov/yFov. (See the sketch after these questions.)
4. When creating the plane in SceneKit I set its size as let plane = SCNPlane(width: 1, height: 1). Does the size matter in Metal? In Metal the clip space runs from -1 to 1 along all axes, and you set the size of the plane by generating its vertices accordingly. I have tried generating the vertices from -1 to 1 (width and height 2) and from -0.5 to 0.5 (width and height 1), but neither gives the right result.
5. In what order does SceneKit apply rotation, scaling, and translation? I couldn't find any information about this in the Apple documentation.
Basically I want to port my SceneKit code to Metal with the same output. Any help greatly appreciated.
Thanks
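On question 3: for a pinhole camera the two fields of view are linked through the aspect ratio by tan(xFov/2) = aspect * tan(yFov/2), so the aspect is a ratio of tangents, not of the angles themselves; xFov/yFov is only a small-angle approximation. A minimal GLKit sketch of that relationship, using the camera values from the question:

import GLKit

// aspect = tan(xFov/2) / tan(yFov/2), not xFov / yFov
let xFov: Float = 93.299   // degrees
let yFov: Float = 77.65614 // degrees

let aspect = tanf(GLKMathDegreesToRadians(xFov) / 2) / tanf(GLKMathDegreesToRadians(yFov) / 2)

// note: GLKMatrix4MakePerspective expects the *vertical* field of view
// in radians, not degrees
let projectionMatrix = GLKMatrix4MakePerspective(
    GLKMathDegreesToRadians(yFov), aspect, 0.1, 10000)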

Creating custom shapes from primitives

I'm trying to create a custom Physics shape with combining primitive shapes. The goal is to create a rounded cube. The appropriate method seems to be init(shapes:transforms:) which I found here https://developer.apple.com/library/prerelease/ios/documentation/SceneKit/Reference/SCNPhysicsShape_Class/index.html#//apple_ref/occ/clm/SCNPhysicsShape/shapeWithShapes:transforms:
I'm thinking this could be done with 8 spheres, 12 cylinders and a box in the middle. Can anyone provide an example of doing that?
Yes, as you may have noticed, creating a physics body from an SCNBox with rounded corners ignores the chamfer radius. Actually, nearly all of the basic geometries (box, sphere, cylinder, pyramid, and so on) generate physics shapes that are idealized forms rather than direct conversions of their vertex meshes to physics bodies.
Generally, this is a good thing. It's much faster to perform collision detection on an idealized sphere (is the point to test within radius distance of the sphere's center?) than on a mesh of eleventy-hundred triangles that approximates a sphere. Ditto for an idealized box (convert the point to the box's local coordinate system, test for x/y/z within bounds).
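As a toy illustration of that first test (a sketch, not SceneKit's actual implementation):

import simd

// Idealized sphere collision test: one distance comparison instead of
// intersecting against hundreds of triangles.
func sphereContains(_ point: SIMD3<Float>, center: SIMD3<Float>, radius: Float) -> Bool {
    // compare squared distances to skip the square root
    simd_length_squared(point - center) <= radius * radius
}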
The init(shapes:transforms:) initializer for SCNPhysicsShape is a good way to build a complex shape from these idealized shapes. Actually, so is the init(node:options:) initializer: if you pass [.keepAsCompound: true] for the options parameter, you can pass an SCNNode that contains a hierarchy of child nodes whose geometries are primitive shapes, and SceneKit will convert each of those geometries to its idealized physics shape before creating a physics shape that's the union of all of them.
I'll show an example of each. But first, some shared context:
let side: CGFloat = 1 // one side of the cube
let radius: CGFloat = side / 4 // the corner radius
// the visual (but not physical) cube
let cube = SCNNode(geometry: SCNBox(width: side, height: side, length: side, chamferRadius: radius))
Here's a shot at making it with init(shapes:transforms:):
var compound: SCNPhysicsShape {
    let sphereShape = SCNPhysicsShape(geometry: SCNSphere(radius: radius), options: nil)
    let spheres = Array(repeating: sphereShape, count: 8)
    let sphereTransforms = [
        SCNMatrix4MakeTranslation( radius,  radius,  radius),
        SCNMatrix4MakeTranslation(-radius,  radius,  radius),
        SCNMatrix4MakeTranslation(-radius, -radius,  radius),
        SCNMatrix4MakeTranslation(-radius, -radius, -radius),
        SCNMatrix4MakeTranslation( radius, -radius, -radius),
        SCNMatrix4MakeTranslation( radius,  radius, -radius),
        SCNMatrix4MakeTranslation(-radius,  radius, -radius),
        SCNMatrix4MakeTranslation( radius, -radius,  radius),
    ]
    let transforms = sphereTransforms.map {
        NSValue(scnMatrix4: $0)
    }
    return SCNPhysicsShape(shapes: spheres, transforms: transforms)
}
cube.physicsBody = SCNPhysicsBody(type: .dynamic, shape: compound)
The dance you see in there with sphereTransforms and transforms is because SceneKit expects an ObjC NSArray for each of its parameters, and NSArrays can contain only ObjC objects... a transform is an SCNMatrix4, which is a struct, so we have to wrap it in an NSValue to store it in an NSArray. In Swift, it's convenient to work with an array of SCNMatrix4, then use map to get an array of NSValues wrapping each element. (And Swift automatically bridges to NSArray under the hood when we pass our [NSValue] to the SceneKit API.)
This creates a body that's just the rounded corners for the cube — there's empty space in between them. Depending on the situation where you need rounded-cube collisions, that may be enough. For example, if you just want to make rounded-cube dice roll on a floor, corner collisions are the only important ones, because the floor won't collide with the middle of a die without also contacting the corner spheres. If that's all you need, go for it — you get the best performance if your physics shapes are as simple as possible.
If you wanted to make a more accurate compound shape, with cylinders for the edges and either three boxes or six planes for the faces, you could extend the above example: make arrays of shapes and transforms for each kind of shape, and concatenate the arrays before converting to [NSValue] and passing them to SceneKit. (Note that the cylinders need both rotation and translation transforms, as sketched below.)
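For example, a single edge cylinder could be set up like this (a sketch, not from the original answer; SCNCylinder's axis is its local Y axis, so an edge running along X needs a rotation before the translation):

// one of the twelve edges: parallel to the X axis at y = +radius, z = +radius
// (for this cube, side/2 - radius == radius)
let edgeShape = SCNPhysicsShape(
    geometry: SCNCylinder(radius: radius, height: side - 2 * radius),
    options: nil)
// rotate the cylinder's Y axis onto X, then translate onto the edge
let edgeTransform = SCNMatrix4Mult(
    SCNMatrix4MakeRotation(.pi / 2, 0, 0, 1),
    SCNMatrix4MakeTranslation(0, radius, radius))

Eleven more edge transforms, plus the face boxes, would then be appended to the sphere arrays above before the final SCNPhysicsShape(shapes:transforms:) call.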
Then again, all that math is getting hard to visualize. And nesting calls to SCNMatrix4Whatever to do that math isn't so fun. So you could do it with nodes instead:
var nodeCompound: SCNNode {
    // a node to hold the compound geometry
    let parent = SCNNode()
    // one node with a sphere
    let sphere = SCNNode(geometry: SCNSphere(radius: radius))
    // inner func to clone the sphere to a specific position
    func corner(x: CGFloat, y: CGFloat, z: CGFloat) -> SCNNode {
        let node = sphere.clone()
        node.position = SCNVector3(x, y, z)
        return node
    }
    // clone the sphere to each corner as child nodes
    parent.addChildNode(corner(x: radius, y: radius, z: radius))
    parent.addChildNode(corner(x: -radius, y: radius, z: radius))
    parent.addChildNode(corner(x: -radius, y: -radius, z: radius))
    parent.addChildNode(corner(x: -radius, y: -radius, z: -radius))
    parent.addChildNode(corner(x: radius, y: -radius, z: -radius))
    parent.addChildNode(corner(x: radius, y: radius, z: -radius))
    parent.addChildNode(corner(x: -radius, y: radius, z: -radius))
    parent.addChildNode(corner(x: radius, y: -radius, z: radius))
    return parent
}
Put this node in a scene and you can visualize the results as you position your spheres (and cylinders, etc). Notice that this node doesn't have to actually be added to your scene, though (except when you're visualizing it for debugging purposes). Once you've got it how you want it, use it to create a physics shape, and assign that shape to the other node that you actually want to draw in your scene:
cube.physicsBody = SCNPhysicsBody(type: .dynamic,
    shape: SCNPhysicsShape(node: nodeCompound,
        options: [.keepAsCompound: true]))
By the way, if you drop the keep-as-compound option here, you'll get a shape that's a convex hull mesh of your eight corner spheres (regardless of whether you also put edges and faces in, because those lie within the hull). That is, it gets you some approximation of a rounded cube... the corner radius will be less smooth than with the idealized geometry, but depending on what you need this collision body for, it might be all you need.

SceneKit – SCNCamera Top-down view

I'm new to SceneKit coming from 2D SpriteKit and was trying to figure out how to adjust the camera so that it's at the top of the world facing down. I have the location part right; however, on the rotation I'm getting stuck. If I adjust the x, y, or z axis, nothing seems to happen, but on the w axis the slightest change (even 0.1 higher or lower) seems to move the camera in an unknown direction. What am I doing wrong?
cameraNode.position = SCNVector3Make(0, 10, 0)
cameraNode.rotation = SCNVector4Make(0, 0, 0, 0.5)
The rotation vector is decomposed as (x_axis, y_axis, z_axis, angle).
Setting a rotation axis with a zero angle is the identity (no effective rotation), and setting an angle with a zero rotation axis does not actually define a rotation.
As for why a small change of the angle has a huge effect: angles are expressed in radians.
A rotation of 90° around the X axis can be achieved as follows:
node.rotation = SCNVector4Make(1, 0, 0, .pi / 2)
But you can also use Euler angles (see SCNNode.eulerAngles) if you find it easier:
node.eulerAngles = SCNVector3Make(.pi / 2, 0, 0)
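Putting this together for the original goal (a minimal sketch; pitching the camera -90° about the X axis points its default -Z viewing direction straight down):

// place the camera 10 units above the origin, looking straight down
let cameraNode = SCNNode()
cameraNode.camera = SCNCamera()
cameraNode.position = SCNVector3Make(0, 10, 0)
cameraNode.eulerAngles = SCNVector3Make(-.pi / 2, 0, 0) // pitch down 90°
scene.rootNode.addChildNode(cameraNode) // assumes an existing `scene`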
