Maintain correct SCNNode position in ARKit while walking, without calling run and .resetTracking on each CLLocation update - ios

I'm building a simple navigation app with ARKit. The app shows a pin at a destination, which can be far away or nearby. The user is able to walk toward the pin to navigate.
In my ARSCNView I have an SCNNode called waypointNode, which represents the pin at the destination.
To determine where to place the waypointNode, I calculate the distance to the destination in meters, and the bearing (degrees away from North) to the destination. Then, I create and multiply some transformations and apply them to the node to put it in the proper position.
There's also some logic to establish a maximum distance for the waypointNode, so it doesn't become too small for the user to see.
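For context, here is a sketch of how the distance and bearing can be computed from two CLLocations, using CoreLocation's distance(from:) and the standard initial-bearing formula (the function name is only a placeholder; this is not the exact code from my project):
import CoreLocation

// Sketch only: distance via CoreLocation, bearing via the standard initial-bearing formula.
func makeNavigationInfo(from current: CLLocation, to destination: CLLocation) -> (distance: Float, bearing: Double) {
    // Distance in meters between the two locations
    let distance = Float(current.distance(from: destination))

    // Initial bearing (degrees from North) from current to destination
    let lat1 = current.coordinate.latitude * .pi / 180
    let lon1 = current.coordinate.longitude * .pi / 180
    let lat2 = destination.coordinate.latitude * .pi / 180
    let lon2 = destination.coordinate.longitude * .pi / 180

    let dLon = lon2 - lon1
    let y = sin(dLon) * cos(lat2)
    let x = cos(lat1) * sin(lat2) - sin(lat1) * cos(lat2) * cos(dLon)

    var bearing = atan2(y, x) * 180 / .pi
    if bearing < 0 { bearing += 360 } // normalize to 0..<360

    return (distance, bearing)
}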
This is how I configure the ARSCNView, so the axes line up with the real-world compass directions:
func setUpSceneView() {
    let configuration = ARWorldTrackingConfiguration()
    configuration.worldAlignment = .gravityAndHeading
    configuration.planeDetection = .horizontal
    session.run(configuration, options: [.resetTracking])
}
Every time the device gets a new CLLocation from CoreLocation, I update the distance and bearing then call this function to update the position of the waypointNode:
func updateWaypointNode() {
    // limit the max distance so the node doesn't become invisible
    let distanceLimit: Float = 80

    let translationDistance: Float
    if navigationInfo.distance > distanceLimit {
        translationDistance = distanceLimit
    } else {
        translationDistance = navigationInfo.distance
    }

    // transform matrix to adjust node distance
    let distanceTranslation = SCNMatrix4MakeTranslation(0, 0, -translationDistance)

    // transform matrix to rotate node around y-axis
    let rotation = SCNMatrix4MakeRotation(-1 * GLKMathDegreesToRadians(Float(navigationInfo.bearing)), 0, 1, 0)

    // multiply the rotation and distance translation matrices
    let distanceTimesRotation = SCNMatrix4Mult(distanceTranslation, rotation)

    // grab the current camera transform
    guard let cameraTransform = session.currentFrame?.camera.transform else { return }

    // multiply the rotation and distance translation transform by the camera transform
    let finalTransform = SCNMatrix4Mult(SCNMatrix4(cameraTransform), distanceTimesRotation)

    // update the waypoint node with this updated transform
    waypointNode.transform = finalTransform
}
This works fine when the user first starts the session, and when the user moves less than about 100m.
Once the user covers a significant distance, like over 100m of walking or driving, just calling updateWaypointNode() is not enough to maintain the proper position of the node at the destination. When walking toward the node, for example, it's possible for the user to eventually reach the node, even though the user has not reached the destination. Note: this incorrect positioning happens while the session has been open the whole time, not after the session is interrupted.
As a workaround, I'm also calling setUpSceneView() every time the device gets a location update.
Even though this works OK, it feels wrong to me. It doesn't seem like I should have to call run with the .resetTracking option every time. I think I might just be overlooking something in my translations. I also see some camera jumpiness every time run is called while the session is already running, so that's less desirable than simply updating the translations.
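For illustration, this is roughly how the location callback might drive both the update and the workaround (a sketch only; updateNavigationInfo(for:) is a placeholder for recomputing distance and bearing):
// Sketch: CLLocationManagerDelegate callback driving the node update and the workaround
func locationManager(_ manager: CLLocationManager, didUpdateLocations locations: [CLLocation]) {
    guard let location = locations.last else { return }

    updateNavigationInfo(for: location)  // placeholder: recompute navigationInfo.distance / .bearing
    updateWaypointNode()                 // reposition the pin

    // The workaround: re-running the session with .resetTracking on every update
    setUpSceneView()
}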
Is there something different I could do to avoid calling run on the session and resetting the tracking every time the device gets a location update?

Related

How to limit SCNNode from moving when camera moves along the x-axis?

I'm working a simple app using ARKit where a user can tap their screen and place a node (SCNNode) on a given location. I want the user to be able to place nodes that stay in place no matter where the camera is so that when they pan back to the location where they placed the node, it's still there.
I've gotten the tap functionality to work, but I've noticed that when I physically move my device along the x-axis, the placed node moves along with it. I've tried to anchor the nodes to something other than the root node, but it hasn't worked as expected. I tried to look up documentation on how the root node is placed and if it's calculated based on the camera which would explain why the nodes are moving along with the camera, but no luck there either.
Here's the code for placing the nodes. The node position is set using scenePoint, which is a projection from the touch location into the scene, done as described in SceneKit: unprojectPoint returns same/similar point no matter where you touch screen.
let nodeImg = SCNNode(geometry: SCNSphere(radius: 0.05))
nodeImg.physicsBody = .static()
nodeImg.geometry?.materials.first?.diffuse.contents = hexColor
nodeImg.geometry?.materials.first?.specular.contents = UIColor.white
nodeImg.position = SCNVector3(scenePoint.x, scenePoint.y, scenePoint.z)
print(nodeImg.position)
sceneView.scene.rootNode.addChildNode(nodeImg)
I think this has something to do with the fact that I'm adding the nodeImg node as a child to the rootNode, but I'm not sure what else to anchor it to.
On tap you need to set the 'worldPosition' of the node and not just 'position'
Check this link : ARKit: position vs worldposition vs simdposition
Assuming you have set sceneView.allowsCameraControl = false
@objc func handleTapGesture(withGestureRecognizer recognizer: UITapGestureRecognizer) {
    let location: CGPoint = recognizer.location(in: self.sceneView)
    let hits = self.sceneView.hitTest(location, options: nil)
    if let tappedNode = hits.first?.node {
        nodeImg.worldPosition = tappedNode.worldPosition
        self.sceneView.scene.rootNode.addChildNode(nodeImg)
    }
}

ARKit - Object stuck to camera after tap on screen

I started out with the template project which you get when you choose ARKit project. As you run the app you can see the ship and view it from any angle.
However, once I allow camera control and tap on the screen or zoom into the ship through panning the ship gets stuck to camera. Now wherever I go with the camera the ship is stuck to the screen.
I went through the Apple guide, and it seems like they don't really consider this unexpected behavior, since there is nothing about it.
How to keep the position of the ship fixed after I zoom it or touch the screen?
Well, it looks like allowsCameraControl is not the answer at all. It's good for SceneKit but not for ARKit (maybe it's good for something in AR, but I'm not aware of it yet).
In order to zoom into the view a UIPinchGestureRecognizer is required.
// 1. Find the touch location
// 2. Perform a hit test
// 3. From the results take the first result
// 4. Take the node from that first result and change the scale
@objc private func handlePan(recognizer: UIPinchGestureRecognizer) {
    if recognizer.state == .changed {
        // 1.
        let location = recognizer.location(in: sceneView)
        // 2.
        let hitTestResults = sceneView.hitTest(location, options: nil)
        // 3.
        if let hitTest = hitTestResults.first {
            let shipNode = hitTest.node
            let newScaleX = Float(recognizer.scale) * shipNode.scale.x
            let newScaleY = Float(recognizer.scale) * shipNode.scale.y
            let newScaleZ = Float(recognizer.scale) * shipNode.scale.z
            // 4.
            shipNode.scale = SCNVector3(newScaleX, newScaleY, newScaleZ)
            recognizer.scale = 1
        }
    }
}
Regarding #2, I got a little confused with another hitTest method called hitTest(_:types:).
Note from the documentation:
This method searches for AR anchors and real-world objects detected by the AR session, not SceneKit content displayed in the view. To search for SceneKit objects, use the view's hitTest(_:options:) method instead.
So that method cannot be used if you want to scale a node that is SceneKit content.
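To illustrate the difference, here is a small side-by-side sketch (handleTap and its parameters are hypothetical; only the two hitTest calls matter):
func handleTap(at location: CGPoint, in sceneView: ARSCNView) {
    // SceneKit hit test: returns SCNHitTestResults for nodes in the scene -- use this for SceneKit content
    let sceneKitHits = sceneView.hitTest(location, options: nil)
    if let node = sceneKitHits.first?.node {
        node.scale = SCNVector3(node.scale.x * 1.5, node.scale.y * 1.5, node.scale.z * 1.5)
    }

    // ARKit hit test: searches AR anchors and real-world features, not SceneKit content
    let arKitHits = sceneView.hitTest(location, types: [.featurePoint, .existingPlaneUsingExtent])
    if let result = arKitHits.first {
        print("real-world hit at distance:", result.distance)
    }
}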

ARKit - Camera Position and 3D model Positions

I am trying to put several models in the scene.
for candidate in selectedCandidate {
    sceneView.scene.rootNode.addChildNode(selectedObjects[candidate])
}
candidate and selectedCandidate stand for the indices of the models I want to use. Each model contains a root node with other nodes attached to it. I use the worldPosition and position APIs of SCNNode to get and modify each 3D model's position.
What I want to do is put those models right in front of the user's eyes, which means I need the camera's position and orientation vector to place the models where I want them. I use this code to get the camera's position, following this solution: https://stackoverflow.com/a/47241952/7772038:
guard let pointOfView = sceneView.pointOfView else { return }
let transform = pointOfView.transform
let orientation = SCNVector3(-transform.m31, -transform.m32, transform.m33)
let location = SCNVector3(transform.m41, transform.m42, transform.m43)
The PROBLEM is that the camera's position and the model positions I print out differ severely in order of magnitude. The camera's position is on the order of 10^-2, like {0.038..., 0.047..., 0.024...}, BUT the model positions are on the order of 10^2, like {197.28, 100.29, -79.25}. From my point of view when I run the program, I am in the middle of those models and the models are very near, yet the printed positions are wildly different. So can you tell me how to set a model's position to whatever I want? I really need to put the model right in front of the user's eyes. If I simply call addChildNode(), the models end up behind me or somewhere else, while I need them to be right in front of the user's eyes. Thank you in advance!
If you want to place an SCNNode in front of the camera you can do so like this:
/// Adds an SCNNode 3m away from the current frame of the camera
func addNodeInFrontOfCamera() {
    guard let currentTransform = augmentedRealitySession.currentFrame?.camera.transform else { return }

    let nodeToAdd = SCNNode()
    let boxGeometry = SCNBox(width: 0.1, height: 0.1, length: 0.1, chamferRadius: 0)
    boxGeometry.firstMaterial?.diffuse.contents = UIColor.red
    nodeToAdd.geometry = boxGeometry

    var translation = matrix_identity_float4x4
    // Change the X value
    translation.columns.3.x = 0
    // Change the Y value
    translation.columns.3.y = 0
    // Change the Z value
    translation.columns.3.z = -3

    nodeToAdd.simdTransform = matrix_multiply(currentTransform, translation)
    augmentedRealityView?.scene.rootNode.addChildNode(nodeToAdd)
}
And you can change any of the X,Y,Z values as you need.
Hope it points you in the right direction...
Update:
If you have multiple nodes e.g. in a scene, in order to use this function, it's probably best to create a 'holder' node, and then add all your content as a child.
Which means then you can simply call this function on the holder node.
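A minimal sketch of that idea, reusing the same session and view names as above (contentNodes is a placeholder for your own nodes):
/// Sketch: group all content under one 'holder' node, then place the holder in front of the camera
func addHolderNodeInFrontOfCamera(with contentNodes: [SCNNode]) {
    guard let currentTransform = augmentedRealitySession.currentFrame?.camera.transform else { return }

    // All children move together once they share a single parent
    let holderNode = SCNNode()
    contentNodes.forEach { holderNode.addChildNode($0) }

    // Same translation as above, applied once to the holder
    var translation = matrix_identity_float4x4
    translation.columns.3.z = -3
    holderNode.simdTransform = matrix_multiply(currentTransform, translation)

    augmentedRealityView?.scene.rootNode.addChildNode(holderNode)
}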

Move nodes along axis with respect to camera position

I have a rootNode with multiple child nodes attached to it that I want to move around the scene together as a cluster. I currently move it along the x and y axes using left, right, up, and down buttons, changing the position of the rootNode little by little every time a button is tapped. For example, to move left:
self.newRootNode.position.x = self.newRootNode.position.x - 0.01
This way, the cluster always moves with respect to the coordinate system set when the app is initialized. I'm trying to make it move with respect to the user's left and right every time they change their position. I've tried doing it as follows:
let nodeCam = self.sceneView.session.currentFrame!.camera
let cameraTransform = nodeCam.transform
self.newRootNode.position.x = cameraTransform.columns.3.x - 0.01
I know this is not what I want, I must be missing a transform from the camera's position to the root node's position, but I'm not sure what steps to follow.
What would be the right way to approach this? Do I need to reset tracking every time the user changes position? Any help would be appreciated :)
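For what it's worth, one way to read that missing transform: the camera's right vector is the first column of its transform, so stepping along that vector moves the cluster relative to the user's left and right. A rough, untested sketch:
// Sketch: derive the camera's right vector (first column of its transform) and step along it
if let cameraTransform = self.sceneView.session.currentFrame?.camera.transform {
    let rightVector = SCNVector3(cameraTransform.columns.0.x,
                                 cameraTransform.columns.0.y,
                                 cameraTransform.columns.0.z)
    // Move the cluster 1 cm toward the user's left (flip the sign to move right)
    self.newRootNode.position.x -= rightVector.x * 0.01
    self.newRootNode.position.y -= rightVector.y * 0.01
    self.newRootNode.position.z -= rightVector.z * 0.01
}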
I believe you can use this function on your ARSession now in ARKit 1.5:
func setWorldOrigin(relativeTransform: matrix_float4x4)
Which:
Changes the basis for the AR world coordinate space using the specified transform.
Here is an example (untested) which may point you in the right direction:
func session(_ session: ARSession, didUpdate frame: ARFrame) {
    guard let currentFrame = augmentedRealitySession.currentFrame?.camera else { return }
    let transform = currentFrame.transform
    augmentedRealitySession.setWorldOrigin(relativeTransform: transform)
}
Not forgetting, of course, to use ARSCNDebugOptions.showWorldOrigin in order to debug and adjust your matrices and transforms, etc.
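For example, a one-line sketch (assuming a standard ARSCNView outlet named sceneView):
// Show the world origin axes (and detected feature points) while tuning the transforms
sceneView.debugOptions = [ARSCNDebugOptions.showWorldOrigin, ARSCNDebugOptions.showFeaturePoints]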
You could also use the code from the delegate callback in an IBAction instead.
Hope it helps...

Keeping Direction of a Vector Constant while Rotating Sprite

I'm trying to make a game where the sprite always moves to the right when hit by an object. However, since the sprite rotates constantly, and zero radians rotates with the sprite, my calculated magnitude goes in the opposite direction whenever the sprite is facing left as it hits the object. Is there a way to keep the direction of the magnitude always pointing to the right, even when zero radians faces left?
// referencePoint = upper right corner of the frame
let rightTriangleFinalPoint:CGPoint = CGPoint(x: referencePoint.x, y: theSprite.position.y)
let theSpriteToReferenceDistance = distanceBetweenCGPoints(theSprite.position, b: referencePoint)
let theSpriteToFinalPointDistance = distanceBetweenCGPoints(theSprite.position, b: rightTriangleFinalPoint)
let arcCosineValue = theSpriteToFinalPointDistance / theSpriteToReferenceDistance
let angle = Double(acos(arcCosineValue))
let xMagnitude = magnitude * cos(angle)
let yMagnitude = (magnitude * sin(angle)) / 1.5
Not sure if this works for you:
I would use an orientation constraint to rotate the sprite. The movement can then be done independently of the orientation.
I made a tutorial on this some time ago: http://stefansdevplayground.blogspot.de/2014/09/howto-implement-targeting-or-follow.html
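A small sketch of what that could look like in SpriteKit (targetNode is a placeholder for whatever the sprite should face; theSprite and magnitude are from the question's code):
// Sketch: let a constraint handle the rotation, so movement can be applied independently of facing
let orientConstraint = SKConstraint.orient(to: targetNode, offset: SKRange(constantValue: 0))
theSprite.constraints = [orientConstraint]

// Movement stays in world coordinates, unaffected by the sprite's rotation
theSprite.physicsBody?.applyImpulse(CGVector(dx: magnitude, dy: 0))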
So I figured out what was going on.
It seems the angle doesn't rotate with the sprite like I originally thought, and the vector I'm building works with the code above. The problem was that I had also set the collision bit mask for the objects, which was wrong. If I only set the contact bit mask for the objects against the sprite, I get my desired outcome.
