When the application launches, a vertical surface is first detected on one wall; then the camera focuses on a second wall, where another surface is detected. The first wall is no longer visible to the ARCamera, but this code still gives me the anchor of the first wall. I need the anchor of the second wall, which is currently visible/focused in the camera.
if let anchor = sceneView.session.currentFrame?.anchors.first {
    let node = sceneView.node(for: anchor)
    addNode(position: SCNVector3Zero, anchorNode: node)
} else {
    debugPrint("anchor node is nil")
}
The clue to the answer is in the first line of your if let statement.
Let's break this down:
When you write let anchor = sceneView.session.currentFrame?.anchors.first, you are referencing an optional array of ARAnchor, which can naturally hold more than one element.
Since you always call first, i.e. index [0], you will always get the first ARAnchor that was added to the array.
Since you now have two anchors, you naturally need the last (latest) element. As such, you can try this as a starter:
if let anchor = sceneView.session.currentFrame?.anchors.last {
    let node = sceneView.node(for: anchor)
    addNode(position: SCNVector3Zero, anchorNode: node)
} else {
    debugPrint("anchor node is nil")
}
Update:
Another poster has interpreted the question differently, believing it asks: how can I detect whether an ARPlaneAnchor is in view? So let's approach it another way.
First we need to take into account that the ARCamera has a frustum in which our content is shown.
As such, we then need to determine whether an ARPlaneAnchor is in view of that frustum.
First we will create 2 variables:
var planesDetected = [ARPlaneAnchor: SCNNode]()
var planeID: Int = 0
The 1st stores the ARPlaneAnchor and its associated SCNNode, and the 2nd provides a unique ID for each plane.
In the ARSCNViewDelegate we can visualise an ARPlaneAnchor and then store its information, e.g.:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {

    //1. Get The Current ARPlaneAnchor
    guard let anchor = anchor as? ARPlaneAnchor else { return }

    //2. Create An SCNNode & Geometry To Visualize The Plane
    let planeNode = SCNNode()
    let planeGeometry = SCNPlane(width: CGFloat(anchor.extent.x), height: CGFloat(anchor.extent.z))
    planeGeometry.firstMaterial?.diffuse.contents = UIColor.cyan
    planeNode.geometry = planeGeometry

    //3. Set The Position Based On The Anchor's Center & Rotate It
    planeNode.position = SCNVector3(anchor.center.x, anchor.center.y, anchor.center.z)
    planeNode.eulerAngles.x = -.pi / 2

    //4. Add The PlaneNode To The Node & Give It A Unique ID
    node.addChildNode(planeNode)
    planeNode.name = String(planeID)

    //5. Store The Anchor & Node
    planesDetected[anchor] = planeNode

    //6. Increment The Plane ID
    planeID += 1
}
Now that we have stored the detected planes, we need to determine whether any of them are in view of the ARCamera, e.g.:
/// Detects If An Object Is In View Of The Camera Frustum
func detectPlaneInFrustumOfCamera() {

    //1. Get The Current Point Of View
    if let currentPointOfView = augmentedRealityView.pointOfView {

        //2. Loop Through All The Detected Planes
        for (_, planeNode) in planesDetected {

            if augmentedRealityView.isNode(planeNode, insideFrustumOf: currentPointOfView) {
                print("ARPlaneAnchor With ID \(planeNode.name!) Is In View")
            } else {
                print("ARPlaneAnchor With ID \(planeNode.name!) Is Not In View")
            }
        }
    }
}
Finally, we need to call this function, which we could do, for example, in the delegate method renderer(_:updateAtTime:):

func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
    detectPlaneInFrustumOfCamera()
}
Hopefully both of these will point you in the right direction...
In order to get the node that is currently in the point of view, you can do something like this:
var targetedAnchorNode: SCNNode?

if let anchors = sceneView.session.currentFrame?.anchors {
    for anchor in anchors {
        if let anchorNode = sceneView.node(for: anchor),
           let pointOfView = sceneView.pointOfView,
           sceneView.isNode(anchorNode, insideFrustumOf: pointOfView) {
            targetedAnchorNode = anchorNode
            break
        }
    }
    if let targetedAnchorNode = targetedAnchorNode {
        addNode(position: SCNVector3Zero, anchorNode: targetedAnchorNode)
    } else {
        debugPrint("Targeted node not found")
    }
} else {
    debugPrint("Anchors not found")
}
If you would like to get all focused nodes, collect every node satisfying that condition in an array instead of breaking on the first match.
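For instance, a minimal sketch of that variant (assuming the same sceneView as above; visibleAnchorNodes is a hypothetical helper name):

/// Returns every anchor node currently inside the camera's frustum.
func visibleAnchorNodes() -> [SCNNode] {
    guard let anchors = sceneView.session.currentFrame?.anchors,
          let pointOfView = sceneView.pointOfView else { return [] }
    return anchors.compactMap { anchor -> SCNNode? in
        guard let node = sceneView.node(for: anchor),
              sceneView.isNode(node, insideFrustumOf: pointOfView) else { return nil }
        return node
    }
}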
Good luck!
Related
So I'm fairly new to ARKit and SceneKit, and I'm following this tutorial from Apple: https://developer.apple.com/documentation/arkit/tracking_and_visualizing_faces.
What I'm trying to do is create a view, similar to the Animoji screen, where the SCNNode containing the BlendShape is centred in the view and the z value of the BlendShape does not change depending on how close or far the face is from the camera. I'd also like to hide the camera feed so you can only see the BlendShape.
What is the best way of going about this?
I've tried setting the pivot to 0 and the position.z to 0 too, but I don't think this is the correct approach.
func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    guard let faceAnchor = anchor as? ARFaceAnchor else { return }

    let blendShapes = faceAnchor.blendShapes
    guard let eyeBlinkLeft = blendShapes[.eyeBlinkLeft] as? Float,
          let eyeBlinkRight = blendShapes[.eyeBlinkRight] as? Float,
          let jawOpen = blendShapes[.jawOpen] as? Float
    else { return }

    eyeLeftNode.scale.z = 1 - eyeBlinkLeft
    eyeRightNode.scale.z = 1 - eyeBlinkRight
    jawNode.position.y = originalJawY - jawHeight * jawOpen

    node.pivot = SCNMatrix4MakeTranslation(0, 0, 0)
    node.position.z = 0
}
The view below is similar to what I'm trying to achieve, but without the list of other blend shapes.
If you're not interested in ARKit automatically moving nodes in the scene, you could avoid tying an SCNNode to the ARAnchor (that is, don't implement -renderer:nodeForAnchor:). Rather, you would query the ARAnchor in -session:didUpdateAnchors:.
In fact, since you don't actually have an AR experience, just face tracking, you don't even need an ARSCNView, only an SCNView.
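As a rough illustration, here is a minimal sketch of that approach (ViewController is a placeholder; jawNode, originalJawY and jawHeight are assumed from the question; the session is run manually rather than through an ARSCNView):

// Run the face-tracking session yourself and render into a plain SCNView.
// Since no SCNNode is tied to the ARFaceAnchor, the model stays where you
// placed it, and only the blend-shape values drive the animation.
extension ViewController: ARSessionDelegate {

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        guard let faceAnchor = anchors.compactMap({ $0 as? ARFaceAnchor }).first,
              let jawOpen = faceAnchor.blendShapes[.jawOpen] as? Float
        else { return }

        jawNode.position.y = originalJawY - jawHeight * jawOpen
    }
}

You would run the session with session.run(ARFaceTrackingConfiguration()) and set up the SCNView's scene once, with the model centred wherever you like.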
I want to place a car object on the plane.
I am setting up the scene view like this:
func setUpSceneView() {
    let configuration = ARWorldTrackingConfiguration()
    configuration.planeDetection = .horizontal
    sceneView.session.run(configuration)
    sceneView.delegate = self
    sceneView.debugOptions = [ARSCNDebugOptions.showFeaturePoints]
}
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    // 1
    guard let planeAnchor = anchor as? ARPlaneAnchor else { return }
    // 2
    let width = CGFloat(planeAnchor.extent.x)
    let height = CGFloat(planeAnchor.extent.z)
    let plane = SCNPlane(width: width, height: height)
    // 3
    plane.materials.first?.diffuse.contents = UIColor.transparentLightBlue
    // 4
    let planeNode = SCNNode(geometry: plane)
    let anchorNode = SCNScene(named: "art.scnassets/car.scn")!.rootNode
    // 5
    let x = CGFloat(planeAnchor.center.x)
    let y = CGFloat(planeAnchor.center.y)
    let z = CGFloat(planeAnchor.center.z)
    planeNode.position = SCNVector3(x, y, z)
    planeNode.eulerAngles.x = -.pi / 2
    node.addChildNode(anchorNode)
}
https://app.box.com/s/vdloxlqxk9rh6h4k5ggwrxm1hslktn8g
I am able to place the car, but it fills the whole camera scene. Can anyone tell me what the problem is with the coordinate system or the 3D object?
This depends on how you want to place the car. Right now your app will place an anchor, and wherever it decides to create its first anchor is where your car will arrive. If you would like your anchor to update as you scan, you need to implement didUpdate, and your anchor will move to the center of the extent of your plane:
func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor){}
However, it looks as though you have already defined the size you want your anchor to be. I've never tried it that way before, and I'm not sure whether you can programmatically move your anchor to a desired location. In my mind it would cause issues if the app doesn't already know it's a horizontal surface, but I've never tested it.
Instead, what I would recommend is to create your plane as you did above (without declaring its size), update your plane size in the didUpdate function above, and then, if you want to place your car in a predetermined spot, run a hit test; see the sketch after the link below.
Here is a good resource to walk you through:
https://www.appcoda.com/arkit-horizontal-plane/
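As a rough, hedged sketch of that flow (not the tutorial's exact code; the planeNode lookup and the tap handler are assumptions):

// Resize the visualised plane as ARKit refines the anchor.
func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    guard let planeAnchor = anchor as? ARPlaneAnchor,
          let planeNode = node.childNodes.first,
          let plane = planeNode.geometry as? SCNPlane else { return }
    plane.width = CGFloat(planeAnchor.extent.x)
    plane.height = CGFloat(planeAnchor.extent.z)
    planeNode.position = SCNVector3(planeAnchor.center.x, planeAnchor.center.y, planeAnchor.center.z)
}

// Place the car with a hit test against the detected plane's extent.
@objc func handleTap(_ gesture: UITapGestureRecognizer) {
    let location = gesture.location(in: sceneView)
    guard let hit = sceneView.hitTest(location, types: .existingPlaneUsingExtent).first else { return }
    let carNode = SCNScene(named: "art.scnassets/car.scn")!.rootNode.clone()
    carNode.simdTransform = hit.worldTransform
    sceneView.scene.rootNode.addChildNode(carNode)
}

You would attach handleTap to the scene view with a UITapGestureRecognizer; the hit test only succeeds within the extent of planes ARKit has already detected.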
I'm currently trying to keep an SCNNode fixed in place while using ARImageTrackingConfiguration, which has no plane detection, but it doesn't seem to work properly because the SCNNode moves as the camera moves.
Below is the code:
/// Shows a cylinder line between two points
func showCylinderLine(point a: SCNVector3, point b: SCNVector3, object: VirtualObject) {
    let length = Vector3Helper.distanceBetweenPoints(a: a, b: b)
    let cylinderLine = GeometryHelper.drawCyclinderBetweenPoints(a: a, b: b, length: length, radius: 0.001, radialSegments: 10)
    cylinderLine.name = "line"
    cylinderLine.geometry?.firstMaterial?.diffuse.contents = UIColor.red
    self.sceneView.scene.rootNode.addChildNode(cylinderLine)
    cylinderLine.look(at: b, up: self.sceneView.scene.rootNode.worldUp, localFront: cylinderLine.worldUp)
}
Is it possible to make the cylinderLine SCNNode fixed in place without an ARPlaneAnchor?
(Note: I tried adding an ARAnchor in the nodeFor delegate method and it still moves as the camera moves.)
Can you show your nodeFor(anchor:) method?
That is where nodes are "fixed" to the image, so I am guessing the error is there somewhere. Here is one example application of that delegate:
func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
    let node = SCNNode()
    if let _ = anchor as? ARImageAnchor {
        if let objectScene = SCNScene(named: "ARassets.scnassets/object.scn") {
            // Creates a new node that is attached to the image.
            // This is what makes your object "fixed".
            let objectNode = objectScene.rootNode.childNodes.first!
            objectNode.position = SCNVector3Zero
            objectNode.position.y = 0.15
            node.addChildNode(objectNode)
        }
    }
    return node
}
Let me know if this helps ;)
I'm creating my anchor and 2D node in ARSKView like so:
func displayToken(distance: Float) {
    print("token dropped at: \(distance)")
    guard let sceneView = self.view as? ARSKView else {
        return
    }

    // Create anchor using the camera's current position
    if let currentFrame = sceneView.session.currentFrame {
        removeToken()

        // Create a transform with a translation of x meters in front of the camera
        var translation = matrix_identity_float4x4
        translation.columns.3.z = -distance
        let transform = simd_mul(currentFrame.camera.transform, translation)

        // Add a new anchor to the session
        let anchor = ARAnchor(transform: transform)
        sceneView.session.add(anchor: anchor)
    }
}
func view(_ view: ARSKView, nodeFor anchor: ARAnchor) -> SKNode? {
    // Create and configure a node for the anchor added to the view's session.
    if let image = tokenImage {
        let texture = SKTexture(image: image)
        let tokenImageNode = SKSpriteNode(texture: texture)
        tokenImageNode.name = "token"
        return tokenImageNode
    } else {
        return nil
    }
}
This places it exactly in front of the camera at a given distance (z). What I also want to do is take a latitude/longitude for an object, calculate an angle or heading in degrees, and initially drop the anchor/node at that angle from the camera. I'm currently getting the heading using the GMSGeometryHeading method, which takes the user's current location and the target location to determine the heading. So when dropping the anchor, I want to place it in the right direction towards the target location's lat/lon. How can I achieve this with SpriteKit/ARKit?
Can you clarify your question a bit please?
Perhaps the following lines can help you as an example. There the cameraNode moves using a basic geometry formula, shifting obliquely depending on the angle (in Euler) in both the x and z coordinates:
var angleEuler = 0.1
let rotateX = Double(cameraNode.presentation.position.x) - sin(degrees(radians: Double(angleEuler))) * 10
let rotateZ = Double(cameraNode.presentation.position.z) - abs(cos(degrees(radians: Double(angleEuler)))) * 10
cameraNode.position = SCNVector3(x: Float(rotateX), y: 0, z: Float(rotateZ))
If you want an object to fall in front of the camera, with the length of the fall depending on an angle, just calculate the value of a in the trigonometry formula tan A = a/b and update the node's presentation.position.y.
I hope this helps
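For the original question of dropping the anchor toward a lat/lon target, a possible sketch (not from either poster; headingDegrees is assumed to be the GMSGeometryHeading result, and the sign convention may need adjusting for your setup) is to rotate the camera-relative translation about the y-axis before adding the anchor:

// Place an anchor `distance` meters away, rotated about the up axis by
// `headingDegrees` relative to the camera's current facing direction.
func addAnchor(distance: Float, headingDegrees: Float, session: ARSession) {
    guard let currentFrame = session.currentFrame else { return }

    var translation = matrix_identity_float4x4
    translation.columns.3.z = -distance

    let radians = -headingDegrees * .pi / 180
    let rotation = simd_float4x4(simd_quatf(angle: radians, axis: SIMD3<Float>(0, 1, 0)))

    let transform = currentFrame.camera.transform * rotation * translation
    session.add(anchor: ARAnchor(transform: transform))
}

Since GMSGeometryHeading is measured clockwise from true north, you would typically subtract the device's own compass heading first so the angle is relative to where the camera is pointing.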
I'm trying to verify if a specific node is inside the current frustum of the scene.
Therefore I use the method isNode(_:insideFrustumOf:) from Apple.
In every call to renderer(_:didAdd:for:) I save the corresponding node and later test it with isNode(_:insideFrustumOf:).
But the result is always true, which is obviously wrong.
Why can't I test the nodes added by ARKit?
UPDATE:
Here is the requested code; if it helps, great!
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    //...
    nodes.append(node)
    //...
}
nodes is an array of SCNNodes.
func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
    for node in nodes {
        let result = sceneView.isNode(node, insideFrustumOf: sceneView.pointOfView!)
        //...
    }
}
Here the evaluation of the nodes takes place; the result is always true.
Since you haven't posted all your code, it's hard to provide a definitive answer.
Having said this, I have created an example for you which works perfectly well.
First I created an [SCNNode] array to store any SCNNodes added to the scene:
//Array To Store Any Added Nodes To The Scene Hierarchy
var nodesRendered = [SCNNode]()
I then created 3 different SCNNodes:
/// Creates A Red, Blue & Green SCNNode
func createNodes() {

    //1. Create A Red Sphere
    let redNode = SCNNode()
    let redGeometry = SCNSphere(radius: 0.2)
    redGeometry.firstMaterial?.diffuse.contents = UIColor.red
    redNode.geometry = redGeometry
    redNode.position = SCNVector3(-1.5, 0, -1.5)
    redNode.name = "RedNode"

    //2. Create A Green Sphere
    let greenNode = SCNNode()
    let greenGeometry = SCNSphere(radius: 0.2)
    greenGeometry.firstMaterial?.diffuse.contents = UIColor.green
    greenNode.geometry = greenGeometry
    greenNode.position = SCNVector3(0, 0, -1.5)
    greenNode.name = "GreenNode"

    //3. Create A Blue Sphere
    let blueNode = SCNNode()
    let blueGeometry = SCNSphere(radius: 0.2)
    blueGeometry.firstMaterial?.diffuse.contents = UIColor.blue
    blueNode.geometry = blueGeometry
    blueNode.position = SCNVector3(1.5, 0, -1.5)
    blueNode.name = "BlueNode"

    //4. Add Them To The Hierarchy
    augmentedRealityView.scene.rootNode.addChildNode(redNode)
    augmentedRealityView.scene.rootNode.addChildNode(greenNode)
    augmentedRealityView.scene.rootNode.addChildNode(blueNode)

    //5. Store A Reference To The Nodes
    nodesRendered.append(redNode)
    nodesRendered.append(blueNode)
    nodesRendered.append(greenNode)
}
Having done this, I then created a function to determine whether these were in the frustum of the camera:
/// Detects If A Node Is In View Of The Camera
func detectNodeInFrustumOfCamera() {

    guard let cameraPointOfView = self.augmentedRealityView.pointOfView else { return }

    for node in nodesRendered {
        if augmentedRealityView.isNode(node, insideFrustumOf: cameraPointOfView) {
            print("\(node.name!) Is In View Of Camera")
        } else {
            print("\(node.name!) Is Not In View Of Camera")
        }
    }
}
Finally, in the delegate callback I called the function like so:

func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
    detectNodeInFrustumOfCamera()
}
Which yielded results such as:
RedNode Is Not In View Of Camera
BlueNode Is Not In View Of Camera
GreenNode Is In View Of Camera
Hope it points you in the right direction...
So I ran into this issue myself while working on an ARKit project. It seems that the function isNode(_:insideFrustumOf:) always returns true for nodes that were automatically added by ARKit.
My workaround was, instead of attempting to track the node added by ARKit, to create a new node with a clear material that has the same "volume" as the detected object, then keep a reference to it and check whether that node is inside the point of view.
Add these variables:
/// The object that was detected.
var refObject: ARReferenceObject?
/// The reference node for the detected object.
var refNode: SCNNode?
/// Additional node which we'll use to check the POV against.
var detectionNode: SCNNode?
Implement the delegate function didAdd:
public func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    if let objectAnchor = anchor as? ARObjectAnchor {
        guard let name = objectAnchor.referenceObject.name, name == "my_object"
        else { return }
        print("detected object for the first time")

        // create geometry
        let cube = SCNBox(width: 0.1, height: 0.1, length: 0.1, chamferRadius: 0.0)

        // apply transparent material
        let material = SCNMaterial()
        material.transparency = 0.0
        cube.firstMaterial = material

        // add child node
        let detectionNode = SCNNode(geometry: cube)
        node.addChildNode(detectionNode)

        // store references
        self.detectionNode = detectionNode // this is the reference we really need
        self.refNode = node
        self.refObject = objectAnchor.referenceObject
    }
}
Finally, implement the delegate function updateAtTime:
public func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
    guard let node = self.detectionNode else { return }
    if let pointOfView = sceneView.pointOfView {
        let isVisible = sceneView.isNode(node, insideFrustumOf: pointOfView)
        print("Is node visible: \(isVisible)")
    }
}