Have SCNCamera follow a node at a fixed distance? - ios

I have an SCNNode and an SCNCamera. The camera is located up and in front of the node, and looks down on the node via an SCNLookAtConstraint that I have set up. However, when the node moves laterally, the camera only rotates instead of moving with it. Is there any way to get the camera to move with the node?

You are only using an SCNLookAtConstraint which, as its name says, makes the camera look at the object, and nothing more. (You only need to rotate your head to look at something.)
To make the camera move with the object, you will need either an SCNTransformConstraint (documentation here), or you can simply make the camera node a child of the object you want it to follow.
If you want the camera to smoothly follow the object, constrained only by a distance (as if it were dragged along by a rope), then SCNTransformConstraint is the way to go.
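A minimal sketch of the rope-style follow using SCNTransformConstraint; the node names, the 10-unit rope length, and the ropeClamp helper are my own assumptions, not something from the question:

```swift
import SceneKit

// Pure helper (no SceneKit types) so the clamping math is easy to check:
// given follower and target positions and a rope length, return the
// follower position pulled back onto the rope when it is taut.
func ropeClamp(_ follower: (x: Float, y: Float, z: Float),
               _ target: (x: Float, y: Float, z: Float),
               _ rope: Float) -> (x: Float, y: Float, z: Float) {
    let d = (follower.x - target.x, follower.y - target.y, follower.z - target.z)
    let dist = (d.0 * d.0 + d.1 * d.1 + d.2 * d.2).squareRoot()
    guard dist > rope else { return follower }          // rope slack: stay put
    let s = rope / dist                                 // rope taut: clamp
    return (target.x + d.0 * s, target.y + d.1 * s, target.z + d.2 * s)
}

// Hypothetical scene setup; in your app these nodes already exist.
let targetNode = SCNNode()
let cameraNode = SCNNode()
cameraNode.camera = SCNCamera()

let maxRope: Float = 10                                 // assumed rope length
let ropeConstraint = SCNTransformConstraint(inWorldSpace: true) { _, transform in
    var m = simd_float4x4(transform)
    let t = targetNode.simdWorldPosition
    let p = ropeClamp((m.columns.3.x, m.columns.3.y, m.columns.3.z),
                      (t.x, t.y, t.z), maxRope)
    m.columns.3 = simd_float4(p.x, p.y, p.z, 1)
    return SCNMatrix4(m)
}
// Keep the look-at constraint so the camera still faces the target.
cameraNode.constraints = [ropeConstraint, SCNLookAtConstraint(target: targetNode)]
```

The constraint runs every frame during rendering, so the camera trails the target smoothly rather than snapping to a fixed offset.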

If the transform between your node and the camera is always the same, you should consider making the camera a child node of your node. This is way more efficient and simpler than using constraints.

I made my camera node a child of the SCNNode I wanted to follow. That's another way to achieve this.

Related

ARKit + Core location - points are not fixed on the same places

I'm working on an iOS AR application using ARKit + Core Location. The points, which are placed using map coordinates, drift from place to place as I walk, but I need them to stay in the same place.
Here you can see the example of what I mean:
https://drive.google.com/file/d/1DQkTJFc9aChtGrgPJSziZVMgJYXyH9Da/view?usp=sharing
Could you help me with this issue? How can I keep the points fixed in place using coordinates? Any ideas?
Thanks.
It looks like you attach objects to planes. However, as you move, ARKit extends the existing planes. As a result, if you put points at, for example, the center of a plane, the center keeps being updated. You need to recalculate the point's coordinates and place the objects accordingly.
The alternative is not to add objects to planes (or position them relative to planes). If you need to "put" an object on a plane, the best approach is to wait until the plane's orientation has settled (it will no longer change direction significantly as you move), select the point on the plane where you want the object, convert that point to world coordinates (so the coordinate no longer changes when the plane changes size), and finally add the object to the root node (or to another node that is not related to the plane).
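The "convert to world coordinates, then attach to the root" step might be sketched like this in SceneKit; the function name and parameters are my own, not from the answer:

```swift
import SceneKit

// Hypothetical helper: pin an object at a chosen point on a detected
// plane by converting the plane-local point to root-node (world)
// coordinates once, then parenting the object to the root instead of
// the plane node.
func pinObject(_ object: SCNNode,
               atLocalPoint point: SCNVector3,
               onPlaneNode planeNode: SCNNode,
               rootNode: SCNNode) {
    // Convert once; the result no longer depends on the plane being
    // re-centered or extended as ARKit refines it.
    object.position = planeNode.convertPosition(point, to: rootNode)
    rootNode.addChildNode(object)
}
```

Because the object lives under the root node afterwards, later updates to the plane anchor's extent or center no longer move it.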

ARKit - Update only the world coordinates origin

Once the user has scanned the environment and I have detected a plane, I would like the world origin anchor (the device's position when the app opened, which is the origin of the 3D world) to be reset to where the device is right now, so that the user can see my AR objects in front of them.
(my objects are floating and not related to the floor but detecting a plane makes the objects more stable)
I didn't find a way to do that. It's linked to ARConfiguration but it doesn't seem like we can update the coordinate system without resetting all the tracking. Do you have any idea?
According to this post, the documentation for rootNode says:
You should not modify the transform property of the root node.
I still tried assigning the camera's position to the rootNode's position, but it didn't change anything.
So it seems the only way is to create a new node at the current position and use it as a stand-in root node.
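The "new node as a stand-in root" idea could be sketched like this with plain SceneKit; contentRoot is an assumed name for a container node holding all AR content, and in an ARSCNView the pointOfView node tracks the device camera:

```swift
import SceneKit

// Sketch: instead of touching rootNode, keep all content under one
// container node and snap that container to the current point of view
// (in ARKit, sceneView.pointOfView follows the device camera).
func recenterContent(_ contentRoot: SCNNode, on pointOfView: SCNNode) {
    contentRoot.simdWorldTransform = pointOfView.simdWorldTransform
}
```

Note that later ARKit versions also added ARSession's setWorldOrigin(relativeTransform:), which shifts the world coordinate origin without resetting tracking; check availability against your deployment target.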

SceneKit nodes aren't changing position with scene's root node

I'm using SceneKit with ARKit, and right now have a simple app where I tap on the screen and it adds an ARAnchor and a SCNNode to my scene.
At some point, I'm going to want to move the entire scene, so to test this out I tried sceneView.scene.rootNode.position.x += 10. If I call this on any particular node, that node moves appropriately. But calling it on rootNode, nothing happens, where I'd expect every child node (which is every node in the scene) to move along with it.
Why are my other nodes not moving appropriately, and is there something I can do to fix this? Or am I thinking about this wrong?
Per the docs for SCNScene.rootNode:
You should not modify the transform property of the root node.
The root node defines the origin of the world coordinate system — all other measurements are relative to it. Hence, it's not meaningful (and is often problematic) to change its position, orientation, scale, or any other aspect of its transform.
If you want to move all the content in your SceneKit scene, create a new node to contain all of the others, and change that node's transform. (You can't do this for nodes added by ARSCNView, because ARKit makes those direct children of the root node, but the whole point of those is positioning them in world space.)
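The container-node suggestion might look like this (the node names are mine, not part of the answer):

```swift
import SceneKit

// Sketch: a movable container just under the (do-not-modify) root node;
// all scene content goes into the container instead of rootNode directly.
let scene = SCNScene()
let contentNode = SCNNode()                      // assumed container name
scene.rootNode.addChildNode(contentNode)

let box = SCNNode(geometry: SCNBox(width: 1, height: 1, length: 1,
                                   chamferRadius: 0))
contentNode.addChildNode(box)

// Moving the container moves every node inside it.
contentNode.position.x += 10
```

The root node's transform stays untouched, but every descendant of the container shifts in world space.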

How can I point an SCNNode at another SCNNode?

I have a series of (flat plane) nodes in my scene that I need to have constantly facing the camera.
How can I adjust the transform/rotation to get this working?
Also, where do I make this calculation?
Currently I am trying to make it happen on user interaction in the SCNSceneRendererDelegate renderer:updateAtTime: delegate method.
How about an SCNBillboardConstraint? That restricts you to iOS 9/El Capitan/tvOS. Add the constraint to each of your flat plane (billboard) nodes.
From the SceneKit Framework Reference: https://developer.apple.com/library/ios/documentation/SceneKit/Reference/SCNBillboardConstraint_Class/index.html
An SCNBillboardConstraint object automatically adjusts a node’s orientation so that it always points toward the pointOfView node currently being used to render the scene.
In the more general case, SCNLookAtConstraint will keep any node's minus-Z axis pointed toward any other node.
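A minimal sketch of both constraints mentioned in the answer (node names are assumed):

```swift
import SceneKit

// Billboard: keep a flat plane facing whatever camera renders the scene.
let planeNode = SCNNode(geometry: SCNPlane(width: 1, height: 1))
let billboard = SCNBillboardConstraint()
billboard.freeAxes = .Y        // optional: rotate only around Y (upright signs)
planeNode.constraints = [billboard]

// The more general alternative: point a node's -Z axis at another node.
let targetNode = SCNNode()
let follower = SCNNode()
follower.constraints = [SCNLookAtConstraint(target: targetNode)]
```

Constraints are evaluated during rendering, so no per-frame code in the renderer delegate is needed.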

Hit-testing a UIGestureRecogniser in 3d space

I'm reasonably new to iOS's SceneKit and have come across a dilemma with regards to user-interaction in a 3d scene:
I have a set of SCNNode cubes in an SCNView, and would like to be able to pin-point where a user touches the mesh of a given cube, as a 3d coordinate (so as to later manipulate the scene according to touch vectors). At present, I've been using a UIGestureRecognizer in order to achieve basic hit-testing, but this seems to be limited to returning 2d-points.
This isn't a problem when I want to hit-test a whole node, since that can be achieved by passing the gesture recognizer's touch location to the SCNView's hitTest method. However, does anybody have any suggestions as to how to precisely locate where a touch landed on a node, in terms of coordinates (i.e. an SCNVector3)?
Thanks!
You are on the right track with calling hitTest:options: on the SCNView. As you have probably seen it results in an array of SCNHitTestResults.
The hit test result can tell you many things about the hit, one of them being what node was hit. What you are looking for is either the localCoordinates or the worldCoordinates.
The local coordinate is relative to the node that was hit. Since you are asking "how to precisely locate where a touch landed on a node" this is probably the one you are looking for.
The world coordinate is relative to the root node.
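Putting the answer together, a sketch of the hit-test step (assuming the CGPoint comes from your gesture recognizer's location(in:); the function name is mine):

```swift
import SceneKit

// Return the 3D point where a 2D touch lands on a node's surface.
// localCoordinates is relative to the hit node; swap in
// hit.worldCoordinates for a position relative to the root node.
func hitPoint(at point: CGPoint, in view: SCNView) -> SCNVector3? {
    guard let hit = view.hitTest(point, options: nil).first else { return nil }
    return hit.localCoordinates
}
```

When nothing is under the touch, hitTest returns an empty array, so the function returns nil.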
