ARKit - getting distance from camera to anchor - iOS

I'm creating an anchor and adding it to my ARSKView at a certain distance in front of the camera like this:
func displayToken(distance: Float) {
    print("token dropped at: \(distance)")
    guard let sceneView = self.view as? ARSKView else {
        return
    }

    // Create anchor using the camera's current position
    if let currentFrame = sceneView.session.currentFrame {

        // Create a transform with a translation of x meters in front of the camera
        var translation = matrix_identity_float4x4
        translation.columns.3.z = -distance
        let transform = simd_mul(currentFrame.camera.transform, translation)

        // Add a new anchor to the session
        let anchor = ARAnchor(transform: transform)
        sceneView.session.add(anchor: anchor)
    }
}
Then the node gets created for the anchor like this:
func view(_ view: ARSKView, nodeFor anchor: ARAnchor) -> SKNode? {
    // Create and configure a node for the anchor added to the view's session.
    if let image = tokenImage {
        let texture = SKTexture(image: image)
        let tokenImageNode = SKSpriteNode(texture: texture)
        tokenImageNode.name = "token"
        return tokenImageNode
    } else {
        return nil
    }
}
This works fine and I see the image get added at the appropriate distance. However, what I'm trying to do is then calculate how far the anchor/node is in front of the camera as you move. The problem is that the calculation is off right away when using fabs(cameraZ - anchor.transform.columns.3.z). Please see my code below, in the update() method, which calculates the distance between the camera and the object:
override func update(_ currentTime: TimeInterval) {
    // Called before each frame is rendered
    guard let sceneView = self.view as? ARSKView else {
        return
    }

    if let currentFrame = sceneView.session.currentFrame {
        let cameraZ = currentFrame.camera.transform.columns.3.z
        for anchor in currentFrame.anchors {
            if let spriteNode = sceneView.node(for: anchor), spriteNode.name == "token", intersects(spriteNode) {
                // token is within the camera view
                //print("token is within camera view from update method")
                print("DISTANCE BETWEEN CAMERA AND TOKEN: \(fabs(cameraZ - anchor.transform.columns.3.z))")
                print(cameraZ)
                print(anchor.transform.columns.3.z)
            }
        }
    }
}
Any help is appreciated in order to accurately get the distance between the camera and the anchor.

The last column of a 4x4 transform matrix is the translation vector (or position relative to a parent coordinate space), so you can get the distance in three dimensions between two transforms by simply subtracting those vectors.
let anchorPosition = anchor.transform.columns.3
let cameraPosition = camera.transform.columns.3

// here's a line connecting the two points, which might be useful for other things
let cameraToAnchor = cameraPosition - anchorPosition
// and here's just the scalar distance
let distance = length(cameraToAnchor)
What you're doing isn't working because you're subtracting only the z-coordinates of the two vectors. If the two points differ in x, y, and z, subtracting just z doesn't give you the distance between them.
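For reference, here's a minimal sketch (my adaptation, not tested against the asker's project) of how this could slot into the update() method from the question:

override func update(_ currentTime: TimeInterval) {
    guard let sceneView = self.view as? ARSKView,
          let currentFrame = sceneView.session.currentFrame else { return }

    let cameraPosition = currentFrame.camera.transform.columns.3
    for anchor in currentFrame.anchors {
        // Subtract the full translation vectors; the w components (1 - 1) cancel out,
        // so length() gives the straight-line distance in metres.
        let distance = length(cameraPosition - anchor.transform.columns.3)
        print("DISTANCE BETWEEN CAMERA AND ANCHOR: \(distance)")
    }
}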

This one is for SceneKit, but I'll leave it here anyway.
let end = node.presentation.worldPosition
guard let start = sceneView.pointOfView?.worldPosition else { return }

let dx = end.x - start.x
let dy = end.y - start.y
let dz = end.z - start.z
let distance = sqrt(pow(dx, 2) + pow(dy, 2) + pow(dz, 2))

With RealityKit there is a slightly different way to do this. If you're using the world tracking configuration, your AnchorEntity object conforms to HasAnchoring, which gives you a target. The target is an AnchoringComponent.Target enum, which has a case .world(let transform). You can compare your world transform to the camera's world transform like this:
if case let AnchoringComponent.Target.world(transform) = yourAnchorEntity.anchoring.target {
    let theDistance = distance(transform.columns.3, frame.camera.transform.columns.3)
}
This took me a bit to figure out, but I figure others using RealityKit might benefit from it.
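As an alternative sketch (my addition, assuming an ARView named arView and an entity already placed in the scene), you can also compare world positions directly:

// Entity.position(relativeTo: nil) returns the world-space position,
// and ARView.cameraTransform holds the camera's world transform.
let anchorPosition = yourAnchorEntity.position(relativeTo: nil)
let cameraPosition = arView.cameraTransform.translation
let theDistance = distance(anchorPosition, cameraPosition)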

As mentioned above by @codeman, this is the right solution:
let distance = simd_distance(YOUR_NODE.simdTransform.columns.3, (sceneView.session.currentFrame?.camera.transform.columns.3)!)
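To avoid the force unwrap, a safer variant (my sketch, with yourNode standing in for YOUR_NODE) is:

// Bail out gracefully while the session has no frame yet.
guard let cameraTransform = sceneView.session.currentFrame?.camera.transform else { return }
let distance = simd_distance(yourNode.simdTransform.columns.3, cameraTransform.columns.3)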

3D distance: you can check these utils:
class ARSceneUtils {

    /// Returns the distance between an anchor and the camera.
    class func distanceBetween(anchor: ARAnchor, AndCamera camera: ARCamera) -> CGFloat {
        let anchorPosition = SCNVector3Make(
            anchor.transform.columns.3.x,
            anchor.transform.columns.3.y,
            anchor.transform.columns.3.z
        )
        let cameraPosition = SCNVector3Make(
            camera.transform.columns.3.x,
            camera.transform.columns.3.y,
            camera.transform.columns.3.z
        )
        return CGFloat(self.calculateDistance(from: cameraPosition, to: anchorPosition))
    }

    /// Returns the distance between two vectors.
    class func calculateDistance(from: SCNVector3, to: SCNVector3) -> Float {
        let x = from.x - to.x
        let y = from.y - to.y
        let z = from.z - to.z
        return sqrtf((x * x) + (y * y) + (z * z))
    }
}
And now you can call:
guard let camera = session.currentFrame?.camera else { return }
let anchor = // your anchor
let distanceAnchorAndCamera = ARSceneUtils.distanceBetween(anchor: anchor, AndCamera: camera)

Related

How to tap to add SCNNode to sceneView using ARKit?

I am trying to implement a function that allows the user to tap and add a node to the scene at the location where they tapped. I would like this to be on a plane. I did some research and found the following function, but I get the warning Value of type 'simd_float4x4' has no member 'translation' on the line let translation = hitTestResult.worldTransform.translation
Does anyone know how I can change this so I am not getting the warning?
@objc func addRoomToSceneView(withGestureRecognizer recognizer: UIGestureRecognizer) {
    let tapLocation = recognizer.location(in: sceneView)
    let hitTestResults = sceneView.hitTest(tapLocation, types: .existingPlaneUsingExtent)
    guard let hitTestResult = hitTestResults.first else { return }

    // The following line has the warning: Value of type 'simd_float4x4' has no member 'translation'
    let translation = hitTestResult.worldTransform.translation
    let x = translation.x
    let y = translation.y
    let z = translation.z

    let room = createMaskedRectangleRoom(width: 4, height: 4, depth: 4, color: .white)
    room.scale = SCNVector3(2, 2, 2)
    room.position = SCNVector3(x, y, z)

    sceneView.scene.rootNode.addChildNode(room)
}
The result of hitTestResult.worldTransform is a simd_float4x4 (as the warning itself says), and that matrix type has no built-in translation property. When applied to a point using matrix multiplication the transformation done might include a translation, but the underlying data type is essentially a 16-element rectangular array.
You can extract the translation part of the matrix with:
let translation = hitTestResult.worldTransform * SIMD4<Float>(0, 0, 0, 1)
or, equivalently, just read the fourth column: hitTestResult.worldTransform.columns.3. I'm assuming the hitTestResult.worldTransform matrix is affine. It would be weird if it were not.
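Alternatively, the code in the question looks like it was written against a convenience extension. Here is a sketch of one (modeled on a helper commonly seen in Apple's ARKit sample code; the property name translation is simply what the question's code expects) that would silence the warning:

import simd

extension float4x4 {
    /// The translation component of an affine transform,
    /// i.e. the first three elements of the fourth column.
    var translation: SIMD3<Float> {
        let t = columns.3
        return SIMD3<Float>(t.x, t.y, t.z)
    }
}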

How to calculate distance from point 2 to point 3, 3 to 4, and so on using ARKit?

I am making an app to calculate distance and area. I made an array and I am appending my nodes to it.
func calculate() {
    let start = dotNodes[0]
    let end = dotNodes[1]

    let a = end.position.x - start.position.x
    let b = end.position.y - start.position.y
    let c = end.position.z - start.position.z
    let distance = sqrt(pow(a, 2) + pow(b, 2) + pow(c, 2))

    updateText(text: "\(abs(distance))", atPosition: end.position)
}
Now the start point is index 0 and the end is index 1, but these are only two points. How can I make it calculate the distance from 2 to 3, 3 to 4, and so on? And at the end, when the last point touches point 1, it should give me the area.
As @Maxim has said, you can begin by simplifying your calculations ^______^.
I will attempt to answer your question however, using the GLK Math Helper Methods, which, if you're interested, you can read more about here: GLK Documentation.
In essence what you need to do is iterate through your array of positions and calculate the distance between them in segments of two. When your last iteration has only one element left, you calculate the distance between it and the first one.
Since I am not great at maths, I did a quick search on StackOverflow to find a solution, and made use of the answer provided by @Gasim in the post Iterate Over Collections Two At A Time In Swift.
Since my attempt is quite lengthy, instead of going through each part step by step, I have provided an answer which is fully commented, and which I hope will point you in the right direction.
As always, if someone else can help refactor and/or improve the code, please feel free:
//
//  ViewController.swift
//  Measuring Example
//
//  Created By Josh Robbins (∩`-´)⊃━☆゚.*・。゚* on 27/04/2019.
//  Copyright © 2019 BlackMirrorz. All rights reserved.
//

import UIKit
import ARKit

class ViewController: UIViewController {

    @IBOutlet weak var augmentedRealityView: ARSCNView!

    var augmentedRealityConfiguration = ARWorldTrackingConfiguration()
    var augmentedRealitySession = ARSession()
    var markerNodes = [SCNNode]()

    typealias NodeNameData = (name: String, node: SCNNode)
    typealias DistanceData = (distance: Float, positionA: GLKVector3, positionB: GLKVector3)

    //---------------------
    //MARK:- Initialization
    //---------------------

    override func viewDidLoad() {
        super.viewDidLoad()
        setupARSession()
    }

    /// Sets Up Our ARSession
    func setupARSession(){
        augmentedRealityView.session = augmentedRealitySession
        augmentedRealitySession.run(augmentedRealityConfiguration, options: [.removeExistingAnchors, .resetTracking])
    }

    /// Creates A Node To Mark The Touch Position In The Scene
    ///
    /// - Returns: SCNNode
    func markerNode() -> SCNNode{
        let node = SCNNode(geometry: SCNSphere(radius: 0.01))
        node.geometry?.firstMaterial?.diffuse.contents = UIColor.cyan
        return node
    }

    //------------------------
    //MARK:- Marker Placement
    //------------------------

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {

        //1. Get The Users Current Touch Point & Check We Have A Valid HitTest Result
        guard let touchPoint = touches.first?.location(in: self.augmentedRealityView),
              let hitTest = self.augmentedRealityView.hitTest(touchPoint, types: .featurePoint).first
        else { return }

        //2. Get The World Transform & Create An SCNNode At The Converted Touch Position
        let transform = hitTest.worldTransform
        let node = markerNode()
        node.position = SCNVector3(transform.columns.3.x, transform.columns.3.y, transform.columns.3.z)
        self.augmentedRealityView.scene.rootNode.addChildNode(node)

        //3. Add The Node To Our Markers Array So We Can Calculate The Distance Later
        markerNodes.append(node)

        //4. If We Have 5 Marker Nodes Then Calculate The Distances Between Them & Join Them Together
        if markerNodes.count == 5{
            calculateMarkerNodeDistances()
            markerNodes.removeAll()
        }
    }

    //-------------------
    //MARK:- Calculations
    //-------------------

    /// Enumerates Our Marker Nodes & Creates A Joining Node Between Them
    func calculateMarkerNodeDistances(){

        var index = 0

        while index < markerNodes.count {

            let nodeA = markerNodes[index]
            var nodeB: SCNNode? = nil

            if index + 1 < markerNodes.count {
                nodeB = markerNodes[index + 1]
            }

            //1. Create A Joining Node Between The Two Nodes And Calculate The Distance
            if let lastNode = nodeB{
                let nodeA = NodeNameData("Node \(index)", nodeA)
                let nodeB = NodeNameData("Node \(index + 1)", lastNode)
                self.augmentedRealityView.scene.rootNode.addChildNode(joiningNode(between: [nodeA, nodeB]))
            }else{
                //2. Here We Can Assume We Have Reached The Last Node So We Calculate The Distance Between The 1st & Last Nodes
                guard let initialNode = markerNodes.first, let lastNode = markerNodes.last else { return }
                let nodeA = NodeNameData("Node 0", initialNode)
                let nodeB = NodeNameData("Node \(markerNodes.count)", lastNode)
                self.augmentedRealityView.scene.rootNode.addChildNode(joiningNode(between: [nodeA, nodeB]))
            }

            //Increment By 1 So We Join The Nodes Together In The Correct Sequence e.g. (0, 1), (1, 2) And Not (0, 1), (2, 3)
            index += 1
        }
    }

    /// Creates A Joining Node Between Two Nodes
    ///
    /// - Parameter nodes: [NodeNameData]
    /// - Returns: MeasuringLineNode
    func joiningNode(between nodes: [NodeNameData]) -> MeasuringLineNode{
        let distance = calculateDistanceBetweenNodes([nodes[0], nodes[1]])
        let joiner = MeasuringLineNode(startingVector: distance.positionA, endingVector: distance.positionB)
        return joiner
    }

    /// Calculates The Distance Between Two SCNNodes
    ///
    /// - Parameter nodes: [NodeNameData]
    /// - Returns: DistanceData
    func calculateDistanceBetweenNodes(_ nodes: [NodeNameData]) -> DistanceData{

        //1. Calculate The Distance
        let positionA = GLKVectorThreeFrom(nodes[0].node.position)
        let positionB = GLKVectorThreeFrom(nodes[1].node.position)
        let distance = GLKVector3Distance(positionA, positionB)
        let meters = Measurement(value: Double(distance), unit: UnitLength.meters)
        print("Distance Between Markers [ \(nodes[0].name) & \(nodes[1].name) ] = \(String(format: "%.2f", meters.value))m")

        //2. Return The Distance & Positions Of The Nodes
        return (distance, positionA, positionB)
    }

    /// Creates A GLKVector3 From An SCNVector3
    ///
    /// - Parameter vector3: SCNVector3
    /// - Returns: GLKVector3
    func GLKVectorThreeFrom(_ vector3: SCNVector3) -> GLKVector3 { return GLKVector3Make(vector3.x, vector3.y, vector3.z) }
}

//--------------------------
//MARK:- Measuring Line Node
//--------------------------

class MeasuringLineNode: SCNNode{

    /// Creates A Line Between Two SCNNodes
    ///
    /// - Parameters:
    ///   - vectorA: GLKVector3
    ///   - vectorB: GLKVector3
    init(startingVector vectorA: GLKVector3, endingVector vectorB: GLKVector3) {
        super.init()

        let height = CGFloat(GLKVector3Distance(vectorA, vectorB))
        self.position = SCNVector3(vectorA.x, vectorA.y, vectorA.z)

        let nodeVectorTwo = SCNNode()
        nodeVectorTwo.position = SCNVector3(vectorB.x, vectorB.y, vectorB.z)

        let nodeZAlign = SCNNode()
        nodeZAlign.eulerAngles.x = Float.pi/2

        let box = SCNBox(width: 0.001, height: height, length: 0.001, chamferRadius: 0)
        let material = SCNMaterial()
        material.diffuse.contents = UIColor.white
        box.materials = [material]

        let nodeLine = SCNNode(geometry: box)
        nodeLine.position.y = Float(-height/2)
        nodeZAlign.addChildNode(nodeLine)

        self.addChildNode(nodeZAlign)
        self.constraints = [SCNLookAtConstraint(target: nodeVectorTwo)]
    }

    required init?(coder aDecoder: NSCoder) { super.init(coder: aDecoder) }
}
Based on this simple (and hopefully accurate) answer, the result was something like so:
Distance Between Markers [ Node 0 & Node 1 ] = 0.14m
Distance Between Markers [ Node 1 & Node 2 ] = 0.09m
Distance Between Markers [ Node 2 & Node 3 ] = 0.09m
Distance Between Markers [ Node 3 & Node 4 ] = 0.05m
Distance Between Markers [ Node 0 & Node 5 ] = 0.36m
In my example I am calculating the distances between five nodes, but you could call this at any point. And of course you will then need to use a formula for calculating the area itself; one option is sketched below. However, this should be more than enough to point you in the right direction.
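For the area itself, one option is the shoelace formula. Here is a sketch (my addition, assuming the markers all sit on a horizontal plane so the y component can be dropped):

import SceneKit

/// Returns the area of the polygon whose corners are the given nodes,
/// using the shoelace formula on the x/z ground plane.
func polygonArea(of nodes: [SCNNode]) -> Float {
    guard nodes.count >= 3 else { return 0 }
    var sum: Float = 0
    for index in 0..<nodes.count {
        let a = nodes[index].position
        let b = nodes[(index + 1) % nodes.count].position
        sum += (a.x * b.z) - (b.x * a.z)
    }
    return abs(sum) / 2
}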
Hope it helps...
The best performing (and also the easiest) way is to use SIMD -
https://developer.apple.com/documentation/accelerate/simd/working_with_vectors
let dist = simd_distance(start, end)
where the vectors should probably be redefined as simd_float3 (or SIMD3<Float>, if you are using Swift 5).
P.S. You need to import the simd framework first.
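For example (a sketch based on the dotNodes array from the question), SCNNode already exposes a simdPosition property, so no manual conversion is needed:

import simd

// Measure between the first two marker nodes.
let dist = simd_distance(dotNodes[0].simdPosition, dotNodes[1].simdPosition)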

How to convert camera coordinates to coordinate-space of scene?

I am trying to find the coordinates of the camera in the scene I have made, but I end up with coordinates in a different system: values such as 0.0134094329550862, which is about 1 cm in the scene coordinate system, even though I moved more than that.
I do this to get coordinates:
guard let cameraCoordinates = self.sceneView.pointOfView?.worldPosition else { return }
self.POSx = Double(cameraCoordinates.x)
self.POSy = Double(cameraCoordinates.y)
self.POSz = Double(cameraCoordinates.z)
This is how I found the coordinates of the camera and its orientation:
func getUserVector() -> (SCNVector3, SCNVector3) {
    if let frame = self.sceneView.session.currentFrame {
        let mat = SCNMatrix4(frame.camera.transform)
        // The third row holds the camera's z axis; negate it to get the forward direction
        let dir = SCNVector3(-1 * mat.m31, -1 * mat.m32, -1 * mat.m33)
        // The fourth row holds the translation, i.e. the camera's position
        let pos = SCNVector3(mat.m41, mat.m42, mat.m43)
        return (dir, pos)
    }
    return (SCNVector3(0, 0, -1), SCNVector3(0, 0, -0.2))
}

Bad scaling rendering ARKit

I'm working on perspective tests with ARKit and SceneKit. The idea is to improve 3D rendering when displaying a flat 3D model on the ground. I had already opened a ticket about another perspective problem that is almost solved (ARKit Perspective Rendering).
However, I noticed over my many tests that sometimes when I anchor a 3D model, its size can differ (width and length).
I usually display a 3D model that is 16 meters long and 1.5 meters wide, so you can imagine how much this distorts my rendering.
I don't know why the displayed 3D model's size may differ. Maybe it comes from the tracking and my test environment.
Below is the code I use to add my 3D model to the scene:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let imageAnchor = anchor as? ARImageAnchor else { return }
    let referenceImage = imageAnchor.referenceImage
    let imageAnchorPosition = imageAnchor.transform.columns.3
    print("Image detected")

    let modelName = "couloirV2"
    //let modelName = "lamp"
    guard let object = VirtualObject
        .availableObjects
        .filter({ $0.modelName == modelName })
        .first else { fatalError("Cannot get model \(modelName)") }
    print("Loading \(object)...")

    self.sceneView.prepare([object], completionHandler: { _ in
        self.updateQueue.async {
            // Translate the object's position to the reference node position.
            object.position.x = imageAnchorPosition.x
            object.position.y = imageAnchorPosition.y
            object.position.z = imageAnchorPosition.z

            // Save the initial y value for the slider handler function
            self.tmpYPosition = object.position.y

            // Match the node's y orientation
            object.orientation.y = node.orientation.y
            print("Adding object to the scene")

            // Prepare the object
            object.load()

            // Show origin axis
            object.showObjectOrigin()

            // Translate on the z axis to match the detected image perfectly.
            var translation = matrix_identity_float4x4
            translation.columns.3.z += Float(referenceImage.physicalSize.height / 2)
            object.simdTransform = matrix_multiply(object.simdTransform, translation)

            self.sceneView.scene.rootNode.addChildNode(object)
            self.virtualObjectInteraction.selectedObject = object
            self.sceneView.addOrUpdateAnchor(for: object)
        }
    })
}

Get 3D coordinates of the both eyes in 3D Facemesh of ARKit

I want to get the real-world 3D coordinates of both eyes from ARKit's 3D face mesh.
So far, I have tried the code below:
func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    guard let faceAnchor = anchor as? ARFaceAnchor else { return }
    let geometry = virtualFaceNode.geometry as! ARSCNFaceGeometry
    let pointOfView = SCNNode()

    // Determine Adjusted Position for Right Eye
    let orientation: SCNQuaternion = node.orientation
    let orientationQuaternion: GLKQuaternion = GLKQuaternionMake(orientation.x, orientation.y, orientation.z, orientation.w)
    let eyePos: GLKVector3 = GLKVector3Make(1.0, 0.0, 0.0)
    let rotatedEyePos: GLKVector3 = GLKQuaternionRotateVector3(orientationQuaternion, eyePos)
    let rotatedEyePosSCNV: SCNVector3 = SCNVector3Make(rotatedEyePos.x, rotatedEyePos.y, rotatedEyePos.z)

    let mag: Float = 0.066 // Distance between the two pupils, in metres: the Interpupillary Distance (IPD)
    pointOfView.position.x += rotatedEyePosSCNV.x * mag
    pointOfView.position.y += rotatedEyePosSCNV.y * mag
    pointOfView.position.z += rotatedEyePosSCNV.z * mag

    DispatchQueue.main.async {
        self.updateValue.text = "x= " + String(describing: pointOfView.position.x) + "y= " + String(describing: pointOfView.position.y) + "z= " + String(describing: pointOfView.position.z)
    }

    geometry.update(from: faceAnchor.geometry)
}
I am not sure this code works correctly. If not, what is another way to get the eye coordinates? I want to calculate the size of and distance between the eyes.
Your help is appreciated, thanks in advance...