How to set an entity in front of the screen with RealityKit?

The new RealityKit camera transform appears to be misleading. When I set an entity's transform to the camera's, it does not follow the front of the screen; instead, it always stays near the world origin. There used to be pointOfView in SCNView. What should I do to create the same effect?

If you want an entity to follow the camera and always be in front of the camera, the simplest way to achieve this is using an AnchorEntity:
let box = ModelEntity(
    mesh: MeshResource.generateBox(size: 0.05),
    materials: [SimpleMaterial(color: .red, isMetallic: true)]
)

let cameraAnchor = AnchorEntity(.camera)
cameraAnchor.addChild(box)
arView.scene.addAnchor(cameraAnchor)

// Move the box slightly in front of the camera, otherwise it will be
// centered on the camera position and we will be inside the box and
// not able to see it
box.transform.translation = [0, 0, -0.5]
However, if you want to use the cameraTransform property, this seemed to work fine for me:
var c: Cancellable?
var boxAnchor: AnchorEntity?

struct ARViewContainer: UIViewRepresentable {

    func makeUIView(context: Context) -> ARView {
        let arView = ARView(frame: .zero)

        let box = ModelEntity(
            mesh: MeshResource.generateBox(size: 0.05),
            materials: [SimpleMaterial(color: .red, isMetallic: true)]
        )

        boxAnchor = AnchorEntity(world: [0, 0, 0])
        arView.scene.addAnchor(boxAnchor!)
        boxAnchor!.addChild(box)

        c = arView.scene.subscribe(to: SceneEvents.Update.self) { event in
            guard let boxAnchor = boxAnchor else {
                return
            }

            // Translation matrix that moves the box 1 m in front of the camera
            let translate = float4x4(
                [1, 0, 0, 0],
                [0, 1, 0, 0],
                [0, 0, 1, 0],
                [0, 0, -1, 1]
            )

            // Transforms are applied right to left
            let finalMatrix = arView.cameraTransform.matrix * translate
            boxAnchor.setTransformMatrix(finalMatrix, relativeTo: nil)
        }

        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) {}
}

Related

How to get an ordinary Mixamo character animation working in SceneKit?

Go to mixamo.com, pick a character, tap Animations, pick one, and download it as .dae.
Put the file on your Mac desktop and preview it (tap File Info); it will perfectly animate the character.
In Xcode, drag the folder in. Tap the .dae file, then tap the Play icon at the bottom; it will perfectly animate the character.
Now, add the character to your existing SceneKit scene. For example:
let p = Bundle.main.url(forResource: "File Name", withExtension: "dae")!
modelSource = SCNSceneSource(url: p, options: nil)!

let geom = modelSource.entryWithIdentifier("geometry316",
                                           withClass: SCNGeometry.self)! as SCNGeometry
theModel = SCNNode(geometry: geom)
.. your node .. .addChildNode(theModel)
(To get the geometry name, just look in the .dae file's XML.)
You will PERFECTLY see the character, in T-pose.
However, it seems impossible to run the animation on the character.
Code would look something like ...
theAnime = amySource.entryWithIdentifier("unnamed_animation__0",
                                         withClass: CAAnimation.self)!
theModel.addAnimation(theAnime, forKey: "aKey")
No matter what I try, it just doesn't animate.
At the moment you addAnimation, the character jumps to a different static position and does nothing. (If you arrange to "end" the animation with removeAllAnimations(), it simply returns to the T-pose.)
Clearly the .dae file is fine, since the Mac Finder shows the animation perfectly, as does the preview of the .dae file right in Xcode!
In short, given the Mixamo image above, has anyone been able to get the animation to actually run in a SceneKit scene?
(PS: not ARKit, SceneKit.)
First, you need your character in the T-pose only. Download that file as Collada (DAE) with the skin. Do NOT include any animations in this file. No further modifications are required to this file.
Then, for any animation you want to implement (walking, running, dancing, or whatever), do it like so:
Test/apply your desired animation in Mixamo on the character, adjust the settings as you want, then download it. Here it is very important to download as Collada (DAE) and choose WITHOUT skin! Leave frame rate and keyframe reduction at their defaults.
This will give you a single DAE file for each animation you want to implement. This DAE contains no mesh data and no rig. It only contains the deformations of the model to which it belongs (this is why you download it without the skin).
Then you need to do two additional operations on every DAE file that contains an animation.
First, pretty-print the XML structure of each animation DAE. You can do this e.g. with the XML Tools plugin in Notepad++, or by opening a Terminal on your Mac and running:
xmllint --format my_anim_orig.dae > my_anim.dae
Then install this tool on your Mac:
(https://drive.google.com/file/d/0B1_uvI21ZYGUaGdJckdwaTRZUEk/edit?usp=sharing)
Convert all of your DAE animations with this converter.
(But do NOT convert your T-pose model using this tool!)
Now we are ready to set up the animation.
You should organise the DAEs within the art.scnassets folder.
Let's configure this. I usually organise it within a struct called Characters, but any other implementation will do. Add this:
struct Characters {

    // MARK: Characters
    var bodyWarrior: SCNNode!

    private let objectMaterialWarrior: SCNMaterial = {
        let material = SCNMaterial()
        material.name = "warrior"
        material.diffuse.contents = UIImage(named: "art.scnassets/warrior/textures/warrior_diffuse.png")
        material.normal.contents = UIImage(named: "art.scnassets/warrior/textures/warrior_normal.png")
        material.metalness.contents = UIImage(named: "art.scnassets/warrior/textures/warrior_metalness.png")
        material.roughness.contents = UIImage(named: "art.scnassets/warrior/textures/warrior_roughness.png")
        material.ambientOcclusion.contents = UIImage(named: "art.scnassets/warrior/textures/warrior_AO.png")
        material.lightingModel = .physicallyBased
        material.isDoubleSided = false
        return material
    }()

    // MARK: MAIN Init Function
    init() {
        // Init Warrior
        bodyWarrior = SCNNode(named: "art.scnassets/warrior/warrior.dae")
        bodyWarrior.childNodes[1].geometry?.firstMaterial = objectMaterialWarrior // character body material
        print("Characters Init Completed.")
    }
}
Then you can init the struct, e.g. in viewDidLoad:
var characters = Characters()
Pay attention to use the correct childNodes! In this case, childNodes[1] is the visible mesh and childNodes[0] will be the animation node.
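If you are unsure which child is which, a quick check is to enumerate the children of the imported node and print their names and animation keys (a debugging sketch, assuming the characters struct above):
for (index, child) in characters.bodyWarrior.childNodes.enumerated() {
    print(index, child.name ?? "unnamed", "animationKeys:", child.animationKeys)
}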
You might also add this SCNNode extension to your code; it is very useful for importing models. (Attention: it will organise the model's nodes as children of a new node!)
extension SCNNode {
    convenience init(named name: String) {
        self.init()
        guard let scene = SCNScene(named: name) else { return }
        for childNode in scene.rootNode.childNodes {
            addChildNode(childNode)
        }
    }
}
Also add the extension below; you'll need it for the animation player later.
extension SCNAnimationPlayer {
    class func loadAnimation(fromSceneNamed sceneName: String) -> SCNAnimationPlayer {
        let scene = SCNScene(named: sceneName)!
        // find top level animation
        var animationPlayer: SCNAnimationPlayer! = nil
        scene.rootNode.enumerateChildNodes { (child, stop) in
            if !child.animationKeys.isEmpty {
                animationPlayer = child.animationPlayer(forKey: child.animationKeys[0])
                stop.pointee = true
            }
        }
        return animationPlayer
    }
}
Handle character setup and animation like so (here is a simplified version of my class):
class Warrior {

    // Main Nodes
    var node = SCNNode()
    private var animNode: SCNNode!

    // Control Variables
    var isIdle: Bool = true

    // For Initial Warrior Position and Scale
    private var position = SCNMatrix4Mult(SCNMatrix4MakeRotation(0, 0, 0, 0), SCNMatrix4MakeTranslation(0, 0, 0))
    private var scale = SCNMatrix4MakeScale(0.03, 0.03, 0.03) // default size, ca. 6 m height

    // MARK: ANIMATIONS
    private let aniKEY_NeutralIdle: String = "NeutralIdle-1"         ; private let aniMAT_NeutralIdle: String = "art.scnassets/warrior/NeutralIdle.dae"
    private let aniKEY_DwarfIdle: String = "DwarfIdle-1"             ; private let aniMAT_DwarfIdle: String = "art.scnassets/warrior/DwarfIdle.dae"
    private let aniKEY_LookAroundIdle: String = "LookAroundIdle-1"   ; private let aniMAT_LookAroundIdle: String = "art.scnassets/warrior/LookAround.dae"
    private let aniKEY_Stomp: String = "Stomp-1"                     ; private let aniMAT_Stomp: String = "art.scnassets/warrior/Stomp.dae"
    private let aniKEY_ThrowObject: String = "ThrowObject-1"         ; private let aniMAT_ThrowObject: String = "art.scnassets/warrior/ThrowObject.dae"
    private let aniKEY_FlyingBackDeath: String = "FlyingBackDeath-1" ; private let aniMAT_FlyingBackDeath: String = "art.scnassets/warrior/FlyingBackDeath.dae"

    // MARK: MAIN CLASS INIT
    init(index: Int, scaleFactor: Float = 0.03) {
        scale = SCNMatrix4MakeScale(scaleFactor, scaleFactor, scaleFactor)

        // Config Node (node.index presumably comes from a custom SCNNode extension, not shown here)
        node.index = index
        node.name = "warrior"
        node.addChildNode(GameViewController.characters.bodyWarrior.clone()) // childNodes[0] of node; holds all subnodes for the character, including the animation skeleton
        node.childNodes[0].transform = SCNMatrix4Mult(position, scale)

        // Set permanent animation Node
        animNode = node.childNodes[0].childNodes[0]

        // Add to Scene
        gameScene.rootNode.addChildNode(node) // add the warrior to the scene
        print("Warrior initialized with index: \(String(describing: node.index))")
    }

    // Cleanup & Deinit
    func remove() {
        print("Warrior deinitializing")
        self.animNode.removeAllAnimations()
        self.node.removeAllActions()
        self.node.removeFromParentNode()
    }
    deinit { remove() }

    // Set Warrior Position
    func setPosition(position: SCNVector3) { self.node.position = position }

    // Normal Idle
    enum IdleType: Int {
        case NeutralIdle
        case DwarfIdle // observe fingers
        case LookAroundIdle
    }

    // Normal Idles
    func idle(type: IdleType) {
        isIdle = true // also sets all walking and running variables to false
        var animationName: String = ""
        var key: String = ""
        switch type {
        case .NeutralIdle:    animationName = aniMAT_NeutralIdle    ; key = aniKEY_NeutralIdle
        case .DwarfIdle:      animationName = aniMAT_DwarfIdle      ; key = aniKEY_DwarfIdle
        case .LookAroundIdle: animationName = aniMAT_LookAroundIdle ; key = aniKEY_LookAroundIdle
        }
        makeAnimation(animationName, key, self.animNode, backwards: false, once: false, speed: 1.0, blendIn: 0.5, blendOut: 0.5)
    }

    func idleRandom() {
        switch Int.random(in: 1...3) {
        case 1: self.idle(type: .NeutralIdle)
        case 2: self.idle(type: .DwarfIdle)
        case 3: self.idle(type: .LookAroundIdle)
        default: break
        }
    }

    // MARK: Private Functions
    // Common Animation Function
    private func makeAnimation(_ fileName: String,
                               _ key: String,
                               _ node: SCNNode,
                               backwards: Bool = false,
                               once: Bool = true,
                               speed: CGFloat = 1.0,
                               blendIn: TimeInterval = 0.2,
                               blendOut: TimeInterval = 0.2,
                               removedWhenComplete: Bool = true,
                               fillForward: Bool = false) {
        let anim = SCNAnimationPlayer.loadAnimation(fromSceneNamed: fileName)
        if once { anim.animation.repeatCount = 0 }
        anim.animation.autoreverses = false
        anim.animation.blendInDuration = blendIn
        anim.animation.blendOutDuration = blendOut
        anim.speed = speed; if backwards { anim.speed = -anim.speed }
        anim.stop()
        print("duration: \(anim.animation.duration)")
        anim.animation.isRemovedOnCompletion = removedWhenComplete
        anim.animation.fillsForward = fillForward
        anim.animation.fillsBackward = false

        // Attach Animation
        node.addAnimationPlayer(anim, forKey: key)
        node.animationPlayer(forKey: key)?.play()
    }
}
You can then initialise the class object after you have initialised the characters struct, for example:
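A minimal sketch (assuming the Characters struct, the Warrior class, and the gameScene from above are in place):
let warrior = Warrior(index: 1)
warrior.setPosition(position: SCNVector3(0, 0, -2)) // 2 m in front of the world origin
warrior.idle(type: .NeutralIdle)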
The rest you'll figure out; come back to me if you have questions or need a complete example app. :)

Developing an ARKit app that leaves text for others to view

I am creating an iOS AR app that sets text in a specific location and leaves it there for others to view. Is there a better way to implement it than what I am doing?
Currently, I have it set so that the text is saved to Firebase and loaded by setting the nodes relative to the camera's position. I'm wondering if there is a way to save ARAnchors in a fashion similar to what I am doing; is that possible?
My current function for saving the text to the location via a user tapping the screen:
/*
 * Variables for saving the user touch
 */
var touchX: Float = 0.0
var touchY: Float = 0.0
var touchZ: Float = 0.0

override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    // will be used for getting the text
    let textNode = SCNNode()
    var writing = SCNText()

    // gets the user's touch upon tapping the screen
    guard let touch = touches.first else { return }
    let result = sceneView.hitTest(touch.location(in: sceneView), types: [ARHitTestResult.ResultType.featurePoint])
    guard let hitResult = result.last else { return }
    let hitTransform = SCNMatrix4.init(hitResult.worldTransform)
    let hitVector = SCNVector3Make(hitTransform.m41, hitTransform.m42, hitTransform.m43)

    // saves X, Y, and Z coordinates of touch relative to the camera
    touchX = hitTransform.m41
    touchY = hitTransform.m42
    touchZ = hitTransform.m43

    // Was thinking of adding the ability to change colors. Probably can skip next seven lines
    var colorArray = [UIColor]()
    colorArray.append(UIColor.red)
    writing = SCNText(string: input.text, extrusionDepth: 1)
    material.diffuse.contents = colorArray[0]
    writing.materials = [material]

    // modifies the node's position and size
    textNode.scale = SCNVector3(0.01, 0.01, 0.01)
    textNode.geometry = writing
    textNode.position = hitVector
    sceneView.scene.rootNode.addChildNode(textNode)

    // last few lines save the info to Firebase
    let values = ["X": touchX, "Y": touchY, "Z": touchZ, "Text": input.text!] as [String: Any]
    let childKey = reference.child("Test").childByAutoId().key
    if input.text != nil && input.text != "" {
        let child = reference.child("Test").child(childKey!)
        child.updateChildValues(values)
    } else {
        let child = reference.child("Test").child(childKey!)
        child.updateChildValues(values)
    } // if
} // override func

/*
 * Similar to the previous function but used in next function
 */
func placeNode(x: Float, y: Float, z: Float, text: String) {
    let textNode = SCNNode()
    var writing = SCNText()
    let hitVector = SCNVector3Make(x, y, z)
    touchX = x
    touchY = y
    touchZ = z

    var colorArray = [UIColor]()
    colorArray.append(UIColor.red)
    writing = SCNText(string: text, extrusionDepth: 1)
    material.diffuse.contents = colorArray[0]
    writing.materials = [material]

    textNode.scale = SCNVector3(0.01, 0.01, 0.01)
    textNode.geometry = writing
    textNode.position = hitVector
    sceneView.scene.rootNode.addChildNode(textNode)
} // func

/*
 * This next function is used in my viewDidLoad to load the data
 */
func handleData() {
    reference.child("Test").observeSingleEvent(of: .value, with: { snapshot in
        if let result = snapshot.children.allObjects as? [DataSnapshot] {
            for child in result {
                let xCoord = Float(truncating: child.childSnapshot(forPath: "X").value as! NSNumber)
                let yCoord = Float(truncating: child.childSnapshot(forPath: "Y").value as! NSNumber)
                let zCoord = Float(truncating: child.childSnapshot(forPath: "Z").value as! NSNumber)
                let inscription = child.childSnapshot(forPath: "Text").value
                self.placeNode(x: xCoord, y: yCoord, z: zCoord, text: inscription as! String)
            } // for
        } // if
    }) // reference
} // func
I have looked into a few things such as ARCore but that looks like it uses Objective-C. I’ve made this app in Swift and I am not sure if I can incorporate ARCore with how I have implemented my current application.
Do I just need to get over it and learn Objective-C? Can I still work with what I have?
I think that ARCore anchors are only available for 24 hours, so that could be a problem.
You probably need to use ARKit 2.0's ARWorldMap and save it as Data on Firebase for others to see the text in the same place; otherwise your code assumes that future users will start their AR session in the exact same position and orientation as the person who left the text. You probably also need to use Core Location first to see where in the world the user is.
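For reference, a minimal sketch of the saving half (assuming the sceneView and Firebase reference from the question; the "worldMaps" child key is made up for illustration):
sceneView.session.getCurrentWorldMap { worldMap, error in
    guard let map = worldMap else { return }
    do {
        // ARWorldMap supports NSSecureCoding, so it can be archived to Data
        let data = try NSKeyedArchiver.archivedData(withRootObject: map,
                                                    requiringSecureCoding: true)
        // The Realtime Database stores JSON, so upload the bytes as a base64 string
        reference.child("worldMaps").childByAutoId()
            .setValue(["map": data.base64EncodedString()])
    } catch {
        print("Failed to archive world map: \(error)")
    }
}
On the loading side you would unarchive the Data back into an ARWorldMap and assign it to the configuration's initialWorldMap before running the session.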

ARKit - getting distance from camera to anchor

I'm creating an anchor and adding it to my ARSKView at a certain distance in front of the camera like this:
func displayToken(distance: Float) {
    print("token dropped at: \(distance)")
    guard let sceneView = self.view as? ARSKView else {
        return
    }

    // Create anchor using the camera's current position
    if let currentFrame = sceneView.session.currentFrame {
        // Create a transform with a translation of x meters in front of the camera
        var translation = matrix_identity_float4x4
        translation.columns.3.z = -distance
        let transform = simd_mul(currentFrame.camera.transform, translation)

        // Add a new anchor to the session
        let anchor = ARAnchor(transform: transform)
        sceneView.session.add(anchor: anchor)
    }
}
then the node gets created for the anchor like this:
func view(_ view: ARSKView, nodeFor anchor: ARAnchor) -> SKNode? {
    // Create and configure a node for the anchor added to the view's session.
    if let image = tokenImage {
        let texture = SKTexture(image: image)
        let tokenImageNode = SKSpriteNode(texture: texture)
        tokenImageNode.name = "token"
        return tokenImageNode
    } else {
        return nil
    }
}
This works fine and I see the image added at the appropriate distance. However, what I'm trying to do is then calculate how far the anchor/node is in front of the camera as you move. The problem is that the calculation seems to be off immediately when using fabs(cameraZ - anchor.transform.columns.3.z). Please see my code below, which runs in the update() method, to calculate the distance between the camera and the object:
override func update(_ currentTime: TimeInterval) {
    // Called before each frame is rendered
    guard let sceneView = self.view as? ARSKView else {
        return
    }
    if let currentFrame = sceneView.session.currentFrame {
        let cameraZ = currentFrame.camera.transform.columns.3.z
        for anchor in currentFrame.anchors {
            if let spriteNode = sceneView.node(for: anchor), spriteNode.name == "token", intersects(spriteNode) {
                // token is within the camera view
                //print("token is within camera view from update method")
                print("DISTANCE BETWEEN CAMERA AND TOKEN: \(fabs(cameraZ - anchor.transform.columns.3.z))")
                print(cameraZ)
                print(anchor.transform.columns.3.z)
            }
        }
    }
}
Any help is appreciated in order to accurately get the distance between the camera and the anchor.
The last column of a 4x4 transform matrix is the translation vector (or position relative to a parent coordinate space), so you can get the distance in three dimensions between two transforms by simply subtracting those vectors.
let anchorPosition = anchor.transform.columns.3
let cameraPosition = camera.transform.columns.3

// here's a line connecting the two points, which might be useful for other things
let cameraToAnchor = cameraPosition - anchorPosition
// and here's just the scalar distance
let distance = length(cameraToAnchor)
What you’re doing isn’t working right because you’re subtracting the z-coordinates of each vector. If the two points are different in x, y, and z, just subtracting z doesn’t get you distance.
This one is for SceneKit; I'll leave it here anyway.
let end = node.presentation.worldPosition
let start = sceneView.pointOfView!.worldPosition
let dx = end.x - start.x
let dy = end.y - start.y
let dz = end.z - start.z
let distance = sqrtf(dx * dx + dy * dy + dz * dz)
With RealityKit there is a slightly different way to do this. If you're using the world tracking configuration, your AnchorEntity object conforms to HasAnchoring, which gives you a target. Target is an enum, AnchoringComponent.Target, with a case .world(let transform). You can compare your world transform to the camera's world transform like this:
if case let AnchoringComponent.Target.world(transform) = yourAnchorEntity.anchoring.target {
    let theDistance = distance(transform.columns.3, frame.camera.transform.columns.3)
}
This took me a bit to figure out, but others using RealityKit might benefit from it.
As mentioned above by @codeman, this is the right solution:
let distance = simd_distance(YOUR_NODE.simdTransform.columns.3, (sceneView.session.currentFrame?.camera.transform.columns.3)!)
For 3D distance, you can check these utils:
class ARSceneUtils {

    /// Returns the distance between an anchor and the camera.
    class func distanceBetween(anchor: ARAnchor, AndCamera camera: ARCamera) -> CGFloat {
        let anchorPosition = SCNVector3Make(
            anchor.transform.columns.3.x,
            anchor.transform.columns.3.y,
            anchor.transform.columns.3.z
        )
        let cameraPosition = SCNVector3Make(
            camera.transform.columns.3.x,
            camera.transform.columns.3.y,
            camera.transform.columns.3.z
        )
        return CGFloat(self.calculateDistance(from: cameraPosition, to: anchorPosition))
    }

    /// Returns the distance between 2 vectors.
    class func calculateDistance(from: SCNVector3, to: SCNVector3) -> Float {
        let x = from.x - to.x
        let y = from.y - to.y
        let z = from.z - to.z
        return sqrtf((x * x) + (y * y) + (z * z))
    }
}
And now you can call:
guard let camera = session.currentFrame?.camera else { return }
let anchor = // your anchor
let distanceAnchorAndCamera = ARSceneUtils.distanceBetween(anchor: anchor, AndCamera: camera)

ARKit 3D Head tracking in scene

I am using ARKit to create an augmented camera app. When the ARSession initialises, a 3D character is shown in an ARSCNView. I am trying to get the character's head to track the ARCamera's point of view so they are always looking at the camera as the user moves to take a photo.
I've used Apple's chameleon demo, which adds a focus node that tracks the camera's point of view using an SCNLookAtConstraint, but I am getting strange behaviour: the head drops to the side and rotates as the ARCamera pans. If I add an SCNTransformConstraint to restrict the head movement to up/down/side-to-side, it stays vertical but then looks away and doesn't track.
I've tried picking the chameleon demo apart to see why mine is not working, but after a few days I am stuck.
The code I am using is:
class Daisy: SCNScene, ARCharacter, CAAnimationDelegate {

    // Rig for animation
    private var contentRootNode: SCNNode! = SCNNode()
    private var geometryRoot: SCNNode!
    private var head: SCNNode!
    private var leftEye: SCNNode!
    private var rightEye: SCNNode!

    // Head tracking properties
    private var focusOfTheHead = SCNNode()
    private let focusNodeBasePosition = simd_float3(0, 0.1, 0.25)

    // State properties
    private var modelLoaded: Bool = false
    private var headIsMoving: Bool = false
    private var shouldTrackCamera: Bool = false

    /*
     * MARK: - Init methods
     */
    override init() {
        super.init()
        loadModel()
        setupSpecialNodes()
        setupConstraints()
    }

    /*
     * MARK: - Setup methods
     */
    func loadModel() {
        guard let virtualObjectScene = SCNScene(named: "daisy_3.dae", inDirectory: "art.scnassets") else {
            print("virtualObjectScene not initialised")
            return
        }
        let wrapper = SCNNode()
        for child in virtualObjectScene.rootNode.childNodes {
            wrapper.addChildNode(child)
        }
        self.rootNode.addChildNode(contentRootNode)
        contentRootNode.addChildNode(wrapper)
        hide()
        modelLoaded = true
    }

    private func setupSpecialNodes() {
        // Assign the character's rig elements to nodes
        geometryRoot = self.rootNode.childNode(withName: "D_Rig", recursively: true)
        head = self.rootNode.childNode(withName: "D_RigFBXASC032Head", recursively: true)
        leftEye = self.rootNode.childNode(withName: "D_Eye_L", recursively: true)
        rightEye = self.rootNode.childNode(withName: "D_Eye_R", recursively: true)

        // Set up looking position nodes
        focusOfTheHead.simdPosition = focusNodeBasePosition
        geometryRoot.addChildNode(focusOfTheHead)
    }

    /*
     * MARK: - Head animations
     */
    func updateForScene(_ scene: ARSCNView) {
        guard shouldTrackCamera, let pointOfView = scene.pointOfView else {
            print("Not going to updateForScene")
            return
        }
        followUserWithHead(to: pointOfView)
    }

    private func followUserWithHead(to pov: SCNNode) {
        guard !headIsMoving else { return }

        // Update the focus node to the point of view's position
        let target = focusOfTheHead.simdConvertPosition(pov.simdWorldPosition, to: nil)

        // Slightly delay the head movement and then animate it to the new focus position
        DispatchQueue.main.asyncAfter(deadline: .now() + 0.2, execute: {
            let moveToTarget = SCNAction.move(to: SCNVector3(target.x, target.y, target.z), duration: 1.5)
            self.headIsMoving = true
            self.focusOfTheHead.runAction(moveToTarget, completionHandler: {
                self.headIsMoving = false
            })
        })
    }

    private func setupConstraints() {
        let headConstraint = SCNLookAtConstraint(target: focusOfTheHead)
        headConstraint.isGimbalLockEnabled = true

        let headRotationConstraint = SCNTransformConstraint(inWorldSpace: false) { (node, transform) -> SCNMatrix4 in
            // Only track the up/down and side-to-side movement
            var eulerX = node.presentation.eulerAngles.x
            var eulerZ = node.presentation.eulerAngles.z

            // Restrict the head movement so it doesn't rotate too far
            if eulerX < self.rad(-90) { eulerX = self.rad(-90) }
            if eulerX > self.rad(90)  { eulerX = self.rad(90) }
            if eulerZ < self.rad(-30) { eulerZ = self.rad(-30) }
            if eulerZ > self.rad(30)  { eulerZ = self.rad(30) }

            let tempNode = SCNNode()
            tempNode.transform = node.presentation.transform
            tempNode.eulerAngles = SCNVector3(eulerX, 0, eulerZ)
            return tempNode.transform
        }

        head?.constraints = [headConstraint, headRotationConstraint]
    }

    // Helper to convert degrees to radians
    private func rad(_ deg: Float) -> Float {
        return deg * Float.pi / 180
    }
}
The model in the Scene editor is:
I have solved the problem I was having. There were two issues:
The target in followUserWithHead should have been converted for its parent, and converted from (not to) world space:
focusOfTheHead.parent!.simdConvertPosition(pov.simdWorldPosition, from: nil)
The local coordinates for the head node were incorrect: the z-axis should have been the x-axis, so when I first got the head tracking working, the ear was always following the camera.
I didn't realise that the Debug View Hierarchy in Xcode will show the details of an SCNScene. This helped me to debug the scene and find where the nodes were tracking. You can also export the scene as a .dae and then load it into the SceneKit editor.
Edit:
I used localFront as mnuages suggested in the comments below, which got the tracking working in the correct direction. The head still occasionally moved about, though. I put this down to the animation running on the model trying to apply a transform that was then changed on the next update cycle, so I decided to remove the tracking from the head and use the same approach to track the eyes only.
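For completeness, a minimal sketch of what the localFront fix looks like (assuming the same focusOfTheHead target as above; which local axis to use depends entirely on how the rig was authored):
let headConstraint = SCNLookAtConstraint(target: focusOfTheHead)
headConstraint.isGimbalLockEnabled = true
// localFront picks which local axis of the constrained node should point
// at the target; the x-axis here is a rig-specific assumption
headConstraint.localFront = SCNVector3(1, 0, 0)
head?.constraints = [headConstraint]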

Find a plane position in ARKit

I was trying to find the position of the closest plane in my ARKit app. I wrote some code to help find it, but for some reason, when I run my app, it keeps crashing when I try to add an AR object to the plane. Is there something wrong with my code?
struct myPlaneCoords {
    var x = Float()
    var y = Float()
    var z = Float()
}

func getPlaneCoordinates(sceneView: ARSCNView) -> myPlaneCoords { // coordinates where an AR node will be added
    let cameraTransform = sceneView.session.currentFrame?.camera.transform
    let cameraCoordinates = MDLTransform(matrix: cameraTransform!)
    let camX = CGFloat(cameraCoordinates.translation.x)
    let camY = CGFloat(cameraCoordinates.translation.y)
    let cameraPosition = CGPoint(x: camX, y: camY)
    let anchors = sceneView.hitTest(cameraPosition, types: ARHitTestResult.ResultType.existingPlane)
    let spefAnchor = MDLTransform(matrix: anchors[0].localTransform) // finds closest plane

    var cc = myPlaneCoords()
    cc.x = spefAnchor.translation.x
    cc.y = spefAnchor.translation.y
    cc.z = spefAnchor.translation.z
    return cc
}
Difficult to judge without the exception description.
I can assume that the hit test doesn't detect any plane. In that case your anchors array is empty, and this line crashes with an index-out-of-range error:
let spefAnchor = MDLTransform(matrix: anchors[0].localTransform) // finds closest plane
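A minimal guard against that crash (a sketch; it returns a zeroed struct when no plane has been hit yet):
guard let closestAnchor = anchors.first else {
    return myPlaneCoords() // no plane detected yet
}
let spefAnchor = MDLTransform(matrix: closestAnchor.localTransform) // closest plane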
