ARKit Continuously get Camera Real World Position - ios

This is the first time I am creating an ARKit app, and I am also not very familiar with iOS development; however, I am trying to achieve something relatively simple.
All the app needs to do is get the world position of the phone and send it continuously to a REST API.
I am using the default ARKit project in Xcode, and am able to get the phone's position with the following function in ViewController.swift:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    if let planeAnchor = anchor as? ARPlaneAnchor {
        let plane = SCNPlane(width: CGFloat(planeAnchor.extent.x), height: CGFloat(planeAnchor.extent.z))
        plane.firstMaterial?.diffuse.contents = UIColor(white: 1, alpha: 0.75)
        let planeNode = SCNNode(geometry: plane)
        // Note: center.y here (the original snippet used center.x twice)
        planeNode.position = SCNVector3Make(planeAnchor.center.x, planeAnchor.center.y, planeAnchor.center.z)
        planeNode.eulerAngles.x = -.pi / 2
        node.addChildNode(planeNode)
    }
    guard let pointOfView = sceneView.pointOfView else { return }
    let transform = pointOfView.transform
    let orientation = SCNVector3(-transform.m31, -transform.m32, -transform.m33)
    let location = SCNVector3(transform.m41, transform.m42, transform.m43)
    let currentPositionOfCamera = SCNVector3(orientation.x + location.x, orientation.y + location.y, orientation.z + location.z)
    print(currentPositionOfCamera)
}
This function works as expected: it renders a plane in the view and prints the position, but only once.
I need to get the phone's position as it is moved. I tried adding:
updateAtTime time: TimeInterval
to the function definition, but after doing this even the plane was no longer rendered.
How can I continuously get the phone's position vector?
TIA
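One approach (a sketch, not from the original thread): `renderer(_:updateAtTime:)` is a separate `SCNSceneRendererDelegate` method, not an extra parameter you can bolt onto `renderer(_:didAdd:for:)` — adding the parameter changes the signature so neither delegate callback matches, which is why the plane stopped rendering. Since `ARSCNViewDelegate` inherits from `SCNSceneRendererDelegate`, you can implement both methods side by side and read the camera pose once per frame. `sendPositionToAPI` below is a hypothetical helper standing in for the REST call:

```swift
// Called once per rendered frame, alongside the existing didAdd callback.
func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
    guard let pointOfView = sceneView.pointOfView else { return }
    let t = pointOfView.transform
    // Translation components of the camera's world transform
    let position = SCNVector3(t.m41, t.m42, t.m43)
    // Do network work off the render thread to avoid frame drops.
    DispatchQueue.global(qos: .utility).async {
        self.sendPositionToAPI(position) // hypothetical helper, not part of the question's code
    }
}
```

Posting on every frame (60 Hz) would flood most APIs; throttling to a few updates per second, or sending only when the position changes by more than a threshold, is usually preferable.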

Related

How to get current position of 3D object while animation is going on in ARKit?

On image marker detection, I want to play an animation of a walking guy within that marker boundary only, using ARKit. For that I want to find out the position of the 3D object while it is walking on the marker. The animation was created in an external 3D authoring tool and saved in .scnassets as a .dae file. I add the node and start the animation using the code below:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    if let imageAnchor = anchor as? ARImageAnchor {
        DispatchQueue.main.async {
            //let translation = imageAnchor.transform.columns.3
            let idleScene = SCNScene(named: "art.scnassets/WalkAround/WalkAround.dae")!
            // This node will be parent of all the animation models
            let node1 = SCNNode()
            // Add all the child nodes to the parent node
            for child in idleScene.rootNode.childNodes {
                node1.addChildNode(child)
            }
            node1.scale = SCNVector3(0.2, 0.2, 0.2)
            let physicalSize = imageAnchor.referenceImage.physicalSize
            let size = CGSize(width: 500, height: 500)
            let skScene = SKScene(size: size)
            skScene.backgroundColor = .white
            let plane = SCNPlane(width: physicalSize.width, height: physicalSize.height)
            let material = SCNMaterial()
            material.lightingModel = SCNMaterial.LightingModel.constant
            material.isDoubleSided = true
            material.diffuse.contents = skScene
            plane.materials = [material]
            let rectNode = SCNNode(geometry: plane)
            rectNode.eulerAngles.x = -.pi / 2
            node.addChildNode(rectNode)
            node.addChildNode(node1)
            self.loadAnimation(withKey: "walking", sceneName: "art.scnassets/WalkAround/SambaArmtr", animationIdentifier: "SambaArmtr-1")
        }
    }
}
func loadAnimation(withKey: String, sceneName: String, animationIdentifier: String) {
    let sceneURL = Bundle.main.url(forResource: sceneName, withExtension: "dae")
    let sceneSource = SCNSceneSource(url: sceneURL!, options: nil)
    if let animationObject = sceneSource?.entryWithIdentifier(animationIdentifier, withClass: CAAnimation.self) {
        // The animation will only play once
        animationObject.repeatCount = 1
        // Attach the animation so it actually plays (this line was missing from the snippet)
        sceneView.scene.rootNode.addAnimation(animationObject, forKey: withKey)
    }
}
I tried using node.presentation.position in both of the methods below to get the current position of the object:
func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval)
// Or
func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor)
If I do not move the device once the animation has started, those methods are not called, and until then I keep getting the same position for the node. That is why I cannot tell where I am going wrong. Is there any way to get the current position of the object while the animation is playing in ARKit?
I don't know of any way to get the current frame within an embedded animation. That said, the animation embedded within a model uses Core Animation to run. You could use CAAnimationDelegate to listen for the start/end events of your animation and run a timer; the timer would give you a best estimate of which frame the animation is on.
References:
SceneKit Animating Content Documentation: https://developer.apple.com/documentation/scenekit/animation/animating_scenekit_content
CAAnimationDelegate Documentation: https://developer.apple.com/documentation/quartzcore/caanimationdelegate
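The timer approach described above could be sketched like this (a sketch only; the 30 fps frame rate is an assumption you would replace with your animation's actual rate):

```swift
import QuartzCore

// Estimates which frame of a running CAAnimation is currently playing,
// by timing from the delegate's start callback.
class AnimationFrameEstimator: NSObject, CAAnimationDelegate {
    private var startTime: CFTimeInterval?
    let frameRate: Double = 30 // assumed fps of the .dae animation

    func animationDidStart(_ anim: CAAnimation) {
        startTime = CACurrentMediaTime()
    }

    func animationDidStop(_ anim: CAAnimation, finished flag: Bool) {
        startTime = nil
    }

    // Best-effort estimate; nil when no animation is running.
    var estimatedFrame: Int? {
        guard let start = startTime else { return nil }
        return Int((CACurrentMediaTime() - start) * frameRate)
    }
}
```

You would set `animationObject.delegate = estimator` before adding the animation to the node. Note this estimates the frame, not the node's world position; for position you would still sample `node.presentation` on a timer rather than waiting for the renderer delegate callbacks.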

Child nodes not visible when setWorldOrigin is called?

So I have a function, shown below, that sets the world origin to a node placed where an image is after scanning it. I load the saved nodes from a database, which adds them to the scene in a separate function.
For some reason, the nodes will not show when I run the app. It works when setWorldOrigin is commented out.
I would like for the nodes to show relative to the image as the origin.
Am I missing something? Does setWorldOrigin change the session?
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let imageAnchor = anchor as? ARImageAnchor else { return }
    let referenceImage = imageAnchor.referenceImage
    let nodeGeometry = SCNText(string: "Welcome!", extrusionDepth: 1)
    nodeGeometry.font = UIFont(name: "Helvetica", size: 30)
    nodeGeometry.firstMaterial?.diffuse.contents = UIColor.black
    anchorNode.geometry = nodeGeometry
    anchorNode.scale = SCNVector3(0.1, 0.1, 0.1)
    anchorNode.constraints = [SCNBillboardConstraint()]
    anchorNode.position = SCNVector3(imageAnchor.transform.columns.3.x,
                                     imageAnchor.transform.columns.3.y,
                                     imageAnchor.transform.columns.3.z)
    // Create a plane to visualize the initial position of the detected image.
    let plane = SCNPlane(width: referenceImage.physicalSize.width, height: referenceImage.physicalSize.height)
    let planeNode = SCNNode(geometry: plane)
    planeNode.opacity = 0.25
    // Plane is rotated to match the picture location
    planeNode.eulerAngles.x = -.pi / 2
    // Scan runs as an action for a set amount of time
    planeNode.runAction(self.imageHighlightAction)
    // Add the plane visualization to the scene.
    node.addChildNode(planeNode)
    sceneView.session.setWorldOrigin(relativeTransform: imageAnchor.transform)
    sceneView.scene.rootNode.addChildNode(node)
    // Populates the scene
    handleData()
} // renderer
I figured it out. The size I had for my image was incorrect. The image I was using is 512x512 pixels; I got that by right-clicking the image, selecting "Get Info", and looking at the dimensions.
I also had the measurements in meters. I changed them to centimeters and used a pixel-to-centimeter converter.
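The same physical-size requirement applies when creating reference images in code instead of the asset catalog. A minimal sketch (the image name and 10 cm width are hypothetical values, not from the original post):

```swift
import ARKit

// physicalWidth is the PRINTED width of the image in meters, regardless of
// its pixel dimensions; the height is inferred from the aspect ratio.
func makeReferenceImage(from image: CGImage) -> ARReferenceImage {
    let reference = ARReferenceImage(image,
                                     orientation: .up,
                                     physicalWidth: 0.1) // assumed 10 cm print
    reference.name = "myPoster" // hypothetical name
    return reference
}
```

If the physical size given to ARKit does not match reality, the image anchor's transform (and anything positioned relative to it, such as a new world origin) ends up at the wrong scale and distance, which matches the symptom described above.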

Convert coordinates in ARImageTrackingConfiguration

With ARKit 2 a new configuration was added: ARImageTrackingConfiguration, which according to the SDK can have better performance and enables some new use cases.
Experimenting with it on Xcode 10b2 (see https://forums.developer.apple.com/thread/103894 for how to fix the asset loading), my code now correctly calls the delegate when an image is tracked, after which a node is added. However, I could not find any documentation on where the coordinate system is located. Does anybody know how to position the node in the scene so that it overlays the detected image?
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    DispatchQueue.main.async {
        if let imageAnchor = anchor as? ARImageAnchor {
            let imageNode = SCNNode.createImage(size: imageAnchor.referenceImage.physicalSize)
            imageNode.transform = // ... ???
            node.addChildNode(imageNode)
        }
    }
}
ps: in contrast to ARWorldTrackingConfiguration, the origin seems to constantly move around (most likely the camera is placed at (0, 0, 0)).
pps: SCNNode.createImage is a helper function without any coordinate calculations.
Assuming that I have read your question correctly, you can do something like the following:
func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
    let nodeToReturn = SCNNode()
    //1. Check We Have Detected Our Image
    if let validImageAnchor = anchor as? ARImageAnchor {
        //2. Log The Information About The Anchor & Our Reference Image
        print("""
        ARImageAnchor Transform = \(validImageAnchor.transform)
        Name Of Detected Image = \(validImageAnchor.referenceImage.name)
        Width Of Detected Image = \(validImageAnchor.referenceImage.physicalSize.width)
        Height Of Detected Image = \(validImageAnchor.referenceImage.physicalSize.height)
        """)
        //3. Create An SCNPlane To Cover The Detected Image
        let planeNode = SCNNode()
        let planeGeometry = SCNPlane(width: validImageAnchor.referenceImage.physicalSize.width,
                                     height: validImageAnchor.referenceImage.physicalSize.height)
        planeGeometry.firstMaterial?.diffuse.contents = UIColor.white
        planeNode.geometry = planeGeometry
        //a. Set The Opacity To Less Than 1 So We Can See The Real World Image
        planeNode.opacity = 0.5
        //b. Rotate The PlaneNode So It Matches The Rotation Of The Anchor
        planeNode.eulerAngles.x = -.pi / 2
        //4. Add It To The Node
        nodeToReturn.addChildNode(planeNode)
        //5. Add Something Such As An SCNScene To The Plane
        if let modelScene = SCNScene(named: "art.scnassets/model.scn"), let modelNode = modelScene.rootNode.childNodes.first {
            //a. Set The Model At The Center Of The Plane & Move It Forward A Tad
            modelNode.position = SCNVector3Zero
            modelNode.position.z = 0.15
            //b. Add It To The PlaneNode
            planeNode.addChildNode(modelNode)
        }
    }
    return nodeToReturn
}
Hopefully this will point you in the right direction...

Coordinates of ARImageAnchor transform matrix are way too different from the ARPlaneAnchor ones

I am doing this simple thing:
Vertical plane detection
Image recognition on a vertical plane
The image is hung on the detected plane (on my wall). In both cases I implement the renderer(_:didAdd:for:) function from ARSCNViewDelegate. I stand at the same place for the vertical plane detection and the image recognition.
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let shipScene = SCNScene(named: "ship.scn"), let shipNode = shipScene.rootNode.childNode(withName: "ship", recursively: false) else { return }
    shipNode.position = SCNVector3(anchor.transform.columns.3.x, anchor.transform.columns.3.y, anchor.transform.columns.3.z)
    sceneView.scene.rootNode.addChildNode(shipNode)
    print(anchor.transform)
}
In the case of a vertical plane detection the anchor will be an ARPlaneAnchor. In the case of an image recognition the anchor will be an ARImageAnchor.
Why are the transform matrices of those two anchors so different? I'm printing the anchor.transform and I get those results:
1.
simd_float4x4([
    [0.941312, 0.0, -0.337538, 0.0],
    [0.336284, -0.0861278, 0.937814, 0.0],
    [-0.0290714, -0.996284, -0.0810731, 0.0],
    [0.191099, 0.172432, -1.14543, 1.0]
])
2.
simd_float4x4([
    [0.361231, 0.10894, 0.926093, 0.0],
    [-0.919883, -0.121052, 0.373049, 0.0],
    [0.152743, -0.986651, 0.0564843, 0.0],
    [75.4418, 10.9618, -14.3788, 1.0]
])
So if I want to place a 3D object on the detected vertical plane, I can simply use [x = 0.191099, y = 0.172432, z = -1.14543] as the position of my node (myNode) and then add it to the scene with sceneView.scene.rootNode.addChildNode(myNode). But if I want to place a 3D object at the detected image's anchor, I cannot use [x = 75.4418, y = 10.9618, z = -14.3788].
What should I do to place a 3D object on the detected image's anchor? I really don't understand the transform matrix of the ARImageAnchor.
Here is an example for you in which I use the func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) method:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    //1. If Our Target Image Has Been Detected Then Get The Corresponding Anchor
    guard let currentImageAnchor = anchor as? ARImageAnchor else { return }
    let x = currentImageAnchor.transform
    print(x.columns.3.x, x.columns.3.y, x.columns.3.z)
    //2. Get The Target's Name
    let name = currentImageAnchor.referenceImage.name!
    //3. Get The Target's Width & Height In Meters
    let width = currentImageAnchor.referenceImage.physicalSize.width
    let height = currentImageAnchor.referenceImage.physicalSize.height
    print("""
    Image Name = \(name)
    Image Width = \(width)
    Image Height = \(height)
    """)
    //4. Create A Plane Geometry To Cover The ARImageAnchor
    let planeNode = SCNNode()
    let planeGeometry = SCNPlane(width: width, height: height)
    planeGeometry.firstMaterial?.diffuse.contents = UIColor.white
    planeNode.opacity = 0.25
    planeNode.geometry = planeGeometry
    //5. Rotate The PlaneNode To Horizontal
    planeNode.eulerAngles.x = -.pi / 2
    //The Node Is Centered In The Anchor (0,0,0)
    node.addChildNode(planeNode)
    //6. Create An SCNBox
    let boxNode = SCNNode()
    let boxGeometry = SCNBox(width: 0.1, height: 0.1, length: 0.1, chamferRadius: 0)
    //7. Create A Different Colour For Each Face
    let faceColours = [UIColor.red, UIColor.green, UIColor.blue, UIColor.cyan, UIColor.yellow, UIColor.gray]
    var faceMaterials = [SCNMaterial]()
    //8. Apply It To Each Face (all six; the original loop stopped one short)
    for face in 0 ..< 6 {
        let material = SCNMaterial()
        material.diffuse.contents = faceColours[face]
        faceMaterials.append(material)
    }
    boxGeometry.materials = faceMaterials
    boxNode.geometry = boxGeometry
    //9. Set The Box's Position To Be Placed On The Plane (node.y + box.height / 2)
    boxNode.position = SCNVector3(0, 0.05, 0)
    //10. Add The Box To The Node
    node.addChildNode(boxNode)
}
From my understanding (I could of course be wrong), you would know that your placement area is the width and height of referenceImage.physicalSize, which is expressed in metres:
let width = currentImageAnchor.referenceImage.physicalSize.width
let height = currentImageAnchor.referenceImage.physicalSize.height
As such, you would need to scale your content (if necessary) to fit within these boundaries, assuming you want it to appear to overlay the image.
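The scaling step could be sketched as follows (an assumption-laden sketch, not from the original answer; it scales uniformly based on the node's bounding box):

```swift
import SceneKit

// Scale a node so its x/y bounding box fits within the detected image's
// physicalSize (both expressed in meters).
func fitNode(_ node: SCNNode, into physicalSize: CGSize) {
    let (minBound, maxBound) = node.boundingBox
    let nodeWidth = CGFloat(maxBound.x - minBound.x)
    let nodeHeight = CGFloat(maxBound.y - minBound.y)
    guard nodeWidth > 0, nodeHeight > 0 else { return }
    // Uniform scale so the larger dimension still fits inside the image bounds.
    let scale = Float(min(physicalSize.width / nodeWidth,
                          physicalSize.height / nodeHeight))
    node.scale = SCNVector3(scale, scale, scale)
}
```

A uniform scale preserves the model's proportions; scaling each axis independently would fill the image exactly but distort the content.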

Physics object falls to infinity in SceneKit

I'm making an AR app that's a ball toss game using Swift's ARKit.
Click here for my repo
The point of the game is to toss the ball and make it land in the hat. However, whenever I try to toss the ball, it always appears to fall to infinity instead of landing in the hat or on the floor plane that I've created.
Here's the code for tossing the ball:
@IBAction func throwBall(_ sender: Any) {
    // Create ball
    let ball = SCNSphere(radius: 0.02)
    currentBallNode = SCNNode(geometry: ball)
    currentBallNode?.physicsBody = .dynamic()
    currentBallNode?.physicsBody?.allowsResting = true
    currentBallNode?.physicsBody?.isAffectedByGravity = true
    // Apply transformation
    let camera = sceneView.session.currentFrame?.camera
    let cameraTransform = camera?.transform
    currentBallNode?.simdTransform = cameraTransform!
    // Add current ball node to balls array
    balls.append(currentBallNode!)
    // Add ball node to root node
    sceneView.scene.rootNode.addChildNode(currentBallNode!)
    // Set force to be applied
    let force = simd_make_float4(0, 0, -3, 0)
    let rotatedForce = simd_mul(cameraTransform!, force)
    let vectorForce = SCNVector3(x: rotatedForce.x, y: rotatedForce.y, z: rotatedForce.z)
    // Apply force to ball
    currentBallNode?.physicsBody?.applyForce(vectorForce, asImpulse: true)
}
And here's the physics body setting for the floor:
(The floor's physics body settings were shown in a screenshot in the original post.)
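For reference, an equivalent floor physics body configured in code might look like this (a sketch under stated assumptions; the sizes and restitution value are hypothetical, not taken from the screenshot):

```swift
import SceneKit

// A static body: the floor itself is not affected by gravity and
// collides with dynamic bodies such as the ball.
let floorNode = SCNNode(geometry: SCNPlane(width: 2, height: 2))
floorNode.eulerAngles.x = -.pi / 2 // lay the plane flat
let body = SCNPhysicsBody(type: .static,
                          shape: SCNPhysicsShape(geometry: floorNode.geometry!, options: nil))
body.restitution = 0.3 // a little bounce for the ball
floorNode.physicsBody = body
```

One caveat relevant to the "falls to infinity" symptom: an SCNPlane is infinitely thin, so a fast-moving ball can tunnel straight through it between physics steps. Using a thin SCNBox for the collision shape, or a kinematic/static body with some thickness, is a common workaround.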
Never mind, I managed to resolve this by adding the following function:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let planeAnchor = anchor as? ARPlaneAnchor, planeAnchor.center == self.planeAnchor?.center || self.planeAnchor == nil else { return }
    // Set the floor's geometry to be the detected plane
    let floor = sceneView.scene.rootNode.childNode(withName: "floor", recursively: true)
    // Note: a horizontal ARPlaneAnchor's extent lies in x/z (extent.y is ~0),
    // so the plane's height should come from extent.z, not extent.y
    let plane = SCNPlane(width: CGFloat(planeAnchor.extent.x), height: CGFloat(planeAnchor.extent.z))
    floor?.geometry = plane
}
