How to centre blend shape SCNNode to view - ios

So I'm fairly new to ARKit and SceneKit, and I'm following Apple's tutorial here: https://developer.apple.com/documentation/arkit/tracking_and_visualizing_faces.
What I'm trying to do is create a view, similar to the Animoji screen, where the SCNNode containing the blend shapes is centred in the view and the z value of the blend-shape node does not change depending on how close or far the face is from the camera. I'd also like to hide the camera feed so you can only see the blend-shape model.
What is the best way of going about this?
I've tried setting the pivot to 0 and the position.z to 0 too, but I don't think this is the correct approach.
func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    guard let faceAnchor = anchor as? ARFaceAnchor else { return }
    let blendShapes = faceAnchor.blendShapes
    guard let eyeBlinkLeft = blendShapes[.eyeBlinkLeft] as? Float,
          let eyeBlinkRight = blendShapes[.eyeBlinkRight] as? Float,
          let jawOpen = blendShapes[.jawOpen] as? Float
          else { return }
    eyeLeftNode.scale.z = 1 - eyeBlinkLeft
    eyeRightNode.scale.z = 1 - eyeBlinkRight
    jawNode.position.y = originalJawY - jawHeight * jawOpen
    node.pivot = SCNMatrix4MakeTranslation(0, 0, 0)
    node.position.z = 0
}
The view below is similar to what I'm trying to achieve, without the list of other blend shapes.

If you're not interested in ARKit automatically moving nodes in the scene, you can avoid tying an SCNNode to the ARAnchor (that is, don't implement -renderer:nodeForAnchor:). Instead, query the ARAnchor in -session:didUpdateAnchors:.
In fact, since you don't actually have an AR experience but just face tracking, you don't even need an ARSCNView; a plain SCNView is enough.
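For illustration, here is a rough sketch of that approach (my own sketch, not from the original answer): run an ARSession with ARFaceTrackingConfiguration yourself, render into a plain SCNView, and read only the blend-shape coefficients in the session delegate so the model stays at a fixed position regardless of where the face is. The node names (eyeLeftNode, eyeRightNode, jawNode) are placeholders for your own model.

import ARKit
import SceneKit

class FaceBlendShapeViewController: UIViewController, ARSessionDelegate {

    let sceneView = SCNView()   // a plain SCNView; no camera feed is rendered behind the model
    let session = ARSession()

    // Placeholder nodes standing in for your own blend-shape model
    let eyeLeftNode = SCNNode()
    let eyeRightNode = SCNNode()
    let jawNode = SCNNode()

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        sceneView.scene = SCNScene()
        view.addSubview(sceneView)
        // Add your model nodes to sceneView.scene here, at a fixed position in front of the scene's camera.

        session.delegate = self
        session.run(ARFaceTrackingConfiguration())
    }

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        guard let faceAnchor = anchors.compactMap({ $0 as? ARFaceAnchor }).first else { return }
        let blendShapes = faceAnchor.blendShapes
        // Use only the blend-shape coefficients and ignore the anchor's transform,
        // so the model never moves no matter how close or far the face is.
        if let eyeBlinkLeft = blendShapes[.eyeBlinkLeft] as? Float {
            eyeLeftNode.scale.z = 1 - eyeBlinkLeft
        }
        if let eyeBlinkRight = blendShapes[.eyeBlinkRight] as? Float {
            eyeRightNode.scale.z = 1 - eyeBlinkRight
        }
        if let jawOpen = blendShapes[.jawOpen] as? Float {
            jawNode.position.y = -0.05 * jawOpen   // illustrative offset
        }
    }
}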

Related

How to place 3D object (.scn file) on detected vertical plane which should be parallel to plane in ARKit?

I am detecting horizontal and vertical planes in ARKit. After detecting whether the surface is horizontal or vertical, I add a gray plane on the detected surface. On tapping the detected plane, I add a 3D object from a .scn file.
My code works fine for placing the 3D object (.scn file) on a horizontal plane, but not on a vertical plane.
The 3D object (.scn file) for the vertical plane, a photo frame, faces right in the SceneKit editor. So I changed its eulerAngles.y to -0 and it now faces front in the SceneKit editor. When I tap on a detected vertical plane that faces front, the photo frame still faces right; if I move the device to face right and place the photo frame there, it is correct.
I want to place the 3D object (.scn file) parallel to the plane (if the vertical plane faces front, the object should face front, no matter which direction it faces in the .scn file). Currently the 3D object is not parallel to the detected plane.
Do I need to change the rotation with respect to the detected plane's angle, or make changes to the .scn file in the SceneKit editor? How can I achieve this?
Please check the code below, which runs on hitting a detected vertical plane. Is there anything wrong?
@objc func addObjectToSceneView1(withGestureRecognizer recognizer: UIGestureRecognizer) {
    let tapLocation = recognizer.location(in: sceneView)
    let hitTestResults = sceneView.hitTest(tapLocation, types: .existingPlaneUsingExtent)
    guard let hitTestResult = hitTestResults.first, let anchor = hitTestResult.anchor as? ARPlaneAnchor else { return }
    let translation = hitTestResult.worldTransform.columns.3
    let x = translation.x
    let y = translation.y
    let z = translation.z
    guard let shipScene = SCNScene(named: "art.scnassets/frame/frame.scn"),
          let shipNode = shipScene.rootNode.childNode(withName: "frame", recursively: true)
          else { return }
    shipNode.position = SCNVector3(x, y, z)
    sceneView.scene.rootNode.addChildNode(shipNode)
}
The .scn file is like below. Is this .scn file correct, or should x be along the frame's depth? Every time I tap on the plane it shows the image like this.
As I mentioned in the answer here, it is better to add an ARAnchor to the ARSession rather than directly adding an SCNNode into the scene graph after doing a hit test. Currently the code you posted doesn't take the rotation of the detected plane into account. For that code to work you would need to determine the normal of the detected plane, take the dot product and cross product with the desired orientation of the model, calculate the rotation, and then apply it. However, the ARSession will do all of that for you: by using ARAnchor(transform: hitTestResult.worldTransform), the rotation is encoded into the anchor, so you only need to deal with transformations in the model's own local coordinate space.
For example:
@objc func addObjectToSceneView1(withGestureRecognizer recognizer: UIGestureRecognizer) {
    let tapLocation = recognizer.location(in: sceneView)
    let hitTestResults = sceneView.hitTest(tapLocation, types: .existingPlaneUsingExtent)
    guard let hitTestResult = hitTestResults.first, hitTestResult.anchor is ARPlaneAnchor else { return }
    // create an anchor, add it to the session, and wait for the delegate callback
    let anchor = ARAnchor(transform: hitTestResult.worldTransform)
    sceneView.session.add(anchor: anchor)
}
Then in your ARSCNViewDelegate callback:
func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
    if anchor is ARPlaneAnchor {
        // node for the plane anchor
        let anchorNode = SCNNode()
        return anchorNode
    } else {
        // must be the node for the most recent hit test
        guard let frameScene = SCNScene(named: "art.scnassets/frame/frame.scn"),
              let frameNode = frameScene.rootNode.childNode(withName: "frame", recursively: true) else { return nil }
        return frameNode
    }
}
In your .scn file you'll want the model to be located at the origin, lying flat, without any transforms. This means you'll likely need to nest nodes and position the underlying model relative to an empty parent node.
Here "frame" is the outer node with no transform that is returned from nodeForAnchor, and "picture" is rotated to lie flat and scaled to the size of the content.
Final result:

ARKit: How to place .obj file on a plane which has multiple materials

I want to place a car object on the plane.
I am setting up the scene view like this:
func setUpSceneView() {
    let configuration = ARWorldTrackingConfiguration()
    configuration.planeDetection = .horizontal
    sceneView.session.run(configuration)
    sceneView.delegate = self
    sceneView.debugOptions = [ARSCNDebugOptions.showFeaturePoints]
}
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    // 1
    guard let planeAnchor = anchor as? ARPlaneAnchor else { return }
    // 2
    let width = CGFloat(planeAnchor.extent.x)
    let height = CGFloat(planeAnchor.extent.z)
    let plane = SCNPlane(width: width, height: height)
    // 3
    plane.materials.first?.diffuse.contents = UIColor.transparentLightBlue
    // 4
    let planeNode = SCNNode(geometry: plane)
    let anchorNode = SCNScene(named: "art.scnassets/car.scn")!.rootNode
    // 5
    let x = CGFloat(planeAnchor.center.x)
    let y = CGFloat(planeAnchor.center.y)
    let z = CGFloat(planeAnchor.center.z)
    planeNode.position = SCNVector3(x, y, z)
    planeNode.eulerAngles.x = -.pi / 2
    node.addChildNode(planeNode)
    node.addChildNode(anchorNode)
}
https://app.box.com/s/vdloxlqxk9rh6h4k5ggwrxm1hslktn8g
I am able to place the car, but it fills the whole camera scene. Can anyone tell me whether the problem is with the coordinate system or with the 3D object?
This depends on how you want to place the car. Right now your app will place an anchor, and wherever it decides to create its first anchor is where your car will appear. If you would like your anchor to update as you scan, you need to implement didUpdate, and your anchor will move to the center of the extent of your plane:
func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor){}
However, it looks as though you have already defined the size you want your anchor to be. I've never tried it that way before, and I'm not sure if you can programmatically move your anchor to a desired location. In my mind it would cause issues if the app doesn't already know it's a horizontal surface, but I've never tested it.
Instead, what I would recommend is to create your plane as you did above (without declaring its size), update the plane's size in the didUpdate function above, and then, if you want to place your car in a predetermined spot, run a hit test.
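For reference, a minimal sketch of that didUpdate step (my own illustration, assuming the plane node added in didAdd is the anchor node's first child and uses an SCNPlane geometry):

func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    guard let planeAnchor = anchor as? ARPlaneAnchor,
          let planeNode = node.childNodes.first,
          let plane = planeNode.geometry as? SCNPlane else { return }
    // Resize the visualised plane to match the refined anchor extent
    plane.width = CGFloat(planeAnchor.extent.x)
    plane.height = CGFloat(planeAnchor.extent.z)
    // Keep the plane centred on the anchor
    planeNode.position = SCNVector3(planeAnchor.center.x, planeAnchor.center.y, planeAnchor.center.z)
}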
Here is a good resource to walk you through:
https://www.appcoda.com/arkit-horizontal-plane/

place SCNNode fixed in place without ARPlaneAnchor

I'm currently trying to keep an SCNNode fixed in place while using ARImageTrackingConfiguration, which has no plane detection, but it doesn't seem to work properly: the SCNNode moves as the camera moves.
Below is the code:
/// show cylinder line
func showCylinderLine(point a: SCNVector3, point b: SCNVector3, object: VirtualObject) {
    let length = Vector3Helper.distanceBetweenPoints(a: a, b: b)
    let cyclinderLine = GeometryHelper.drawCyclinderBetweenPoints(a: a, b: b, length: length, radius: 0.001, radialSegments: 10)
    cyclinderLine.name = "line"
    cyclinderLine.geometry?.firstMaterial?.diffuse.contents = UIColor.red
    self.sceneView.scene.rootNode.addChildNode(cyclinderLine)
    cyclinderLine.look(at: b, up: self.sceneView.scene.rootNode.worldUp, localFront: cyclinderLine.worldUp)
}
Is it possible to keep the cylinderLine SCNNode fixed in place without an ARPlaneAnchor?
(Note: I had tried adding an ARAnchor with the nodeFor anchor delegate method, and it still moves as the camera moves.)
Can you show your nodeFor anchor method?
That is where nodes are "fixed" to the image, so I am guessing the error is there somewhere. Here is one example application of that delegate:
func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
    let node = SCNNode()
    if let _ = anchor as? ARImageAnchor {
        if let objectScene = SCNScene(named: "ARassets.scnassets/object.scn") {
            // creates a new node that is connected to the image. This is what makes your object "fixed"
            let objectNode = objectScene.rootNode.childNodes.first!
            objectNode.position = SCNVector3Zero
            objectNode.position.y = 0.15
            node.addChildNode(objectNode)
        }
    }
    return node
}
Let me know if this helps ;)

ARKit - initially placing object in ARSKView at a certain heading/angle from camera

I'm creating my anchor and 2d node in ARSKView like so:
func displayToken(distance: Float) {
    print("token dropped at: \(distance)")
    guard let sceneView = self.view as? ARSKView else {
        return
    }
    // Create anchor using the camera's current position
    if let currentFrame = sceneView.session.currentFrame {
        removeToken()
        // Create a transform with a translation of x meters in front of the camera
        var translation = matrix_identity_float4x4
        translation.columns.3.z = -distance
        let transform = simd_mul(currentFrame.camera.transform, translation)
        // Add a new anchor to the session
        let anchor = ARAnchor(transform: transform)
        sceneView.session.add(anchor: anchor)
    }
}
func view(_ view: ARSKView, nodeFor anchor: ARAnchor) -> SKNode? {
    // Create and configure a node for the anchor added to the view's session.
    if let image = tokenImage {
        let texture = SKTexture(image: image)
        let tokenImageNode = SKSpriteNode(texture: texture)
        tokenImageNode.name = "token"
        return tokenImageNode
    } else {
        return nil
    }
}
This places it exactly in front of the camera at a given distance (z). What I also want to do is take a latitude/longitude for an object, calculate an angle or heading in degrees, and initially drop the anchor/node at that angle from the camera. I'm currently getting the heading with the GMSGeometryHeading method, which takes the user's current location and the target location to determine the heading. So when dropping the anchor, I want to put it in the right direction towards the target location's lat/lon. How can I achieve this with SpriteKit/ARKit?
Can you clarify your question a bit please?
Perhaps the following lines can help you as an example. There the cameraNode moves using a basic geometry formula, moving obliquely in both the x and z coordinates depending on the angle (in Euler):
var angleEuler = 0.1
let rotateX = Double(cameraNode.presentation.position.x) - sin(degrees(radians: Double(angleEuler))) * 10
let rotateZ = Double(cameraNode.presentation.position.z) - abs(cos(degrees(radians: Double(angleEuler)))) * 10
cameraNode.position = SCNVector3(x: Float(rotateX), y: 0, z: Float(rotateZ))
If you want an object to fall in front of the camera, with the length of the fall depending on an angle, just calculate the value of "a" in the geometry formula "tan A = a/b" and update the node's presentation.position.y.
I hope this helps
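For the heading part of the question, one possible approach (my own sketch, not from the answer above) is to run the session with worldAlignment = .gravityAndHeading, so that world -Z points true north and +X points east, and then offset the anchor from the camera's position by the bearing returned from GMSGeometryHeading:

func displayToken(distance: Float, heading: Double) {
    guard let sceneView = self.view as? ARSKView,
          let currentFrame = sceneView.session.currentFrame else { return }
    // heading is in degrees clockwise from true north (as returned by GMSGeometryHeading)
    let headingRadians = Float(heading * .pi / 180)
    // Start from the camera's position only (ignore its rotation) ...
    var transform = matrix_identity_float4x4
    transform.columns.3 = currentFrame.camera.transform.columns.3
    // ... then push the anchor `distance` metres towards the heading.
    // With .gravityAndHeading, -Z is north and +X is east.
    transform.columns.3.x += distance * sin(headingRadians)
    transform.columns.3.z += -distance * cos(headingRadians)
    sceneView.session.add(anchor: ARAnchor(transform: transform))
}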

Get All ARAnchors of focused Camera in ARKit

When the application launches, a vertical surface is first detected on one wall; then the camera focuses on a second wall, where another surface is detected. The first wall is now no longer visible to the ARCamera, but this code gives me the anchors of the first wall. I need the anchors of the second wall, which is currently visible/focused in the camera.
if let anchor = sceneView.session.currentFrame?.anchors.first {
    let node = sceneView.node(for: anchor)
    addNode(position: SCNVector3Zero, anchorNode: node)
} else {
    debugPrint("anchor node is nil")
}
The clue to the answer is in the first line of your if let statement.
Let's break this down:
When you say let anchor = sceneView.session.currentFrame?.anchors.first, you are referencing an optional array of ARAnchor, which naturally can have more than one element.
Since you are always calling first, i.e. index [0], you will always get the first ARAnchor that was added to the array.
Since you now have two anchors, you naturally need the last (latest) element. As such you can try this as a starter:
if let anchor = sceneView.session.currentFrame?.anchors.last {
    let node = sceneView.node(for: anchor)
    addNode(position: SCNVector3Zero, anchorNode: node)
} else {
    debugPrint("anchor node is nil")
}
Update:
Since another poster has interpreted the question differently, believing it to be "how can I detect whether an ARPlaneAnchor is in view?", let's approach it another way.
First we need to take into consideration that the ARCamera has a frustum in which our content is shown:
As such, we would then need to determine whether an ARPlaneAnchor is in view of that frustum.
First we will create two variables:
var planesDetected = [ARPlaneAnchor: SCNNode]()
var planeID: Int = 0
The first stores the ARPlaneAnchor and its associated SCNNode, and the second provides a unique ID for each plane.
In the ARSCNViewDelegate we can visualise an ARPlaneAnchor and then store its information, e.g.:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    //1. Get The Current ARPlaneAnchor
    guard let anchor = anchor as? ARPlaneAnchor else { return }
    //2. Create An SCNNode & Geometry To Visualize The Plane
    let planeNode = SCNNode()
    let planeGeometry = SCNPlane(width: CGFloat(anchor.extent.x), height: CGFloat(anchor.extent.z))
    planeGeometry.firstMaterial?.diffuse.contents = UIColor.cyan
    planeNode.geometry = planeGeometry
    //3. Set The Position Based On The Anchors Extent & Rotate It
    planeNode.position = SCNVector3(anchor.center.x, anchor.center.y, anchor.center.z)
    planeNode.eulerAngles.x = -.pi / 2
    //4. Add The PlaneNode To The Node & Give It A Unique ID
    node.addChildNode(planeNode)
    planeNode.name = String(planeID)
    //5. Store The Anchor & Node
    planesDetected[anchor] = planeNode
    //6. Increment The Plane ID
    planeID += 1
}
Now that we have stored the detected planes, we need to determine whether any of them are in view of the ARCamera, e.g.:
/// Detects If An Object Is In View Of The Camera Frustum
func detectPlaneInFrostrumOfCamera() {
    //1. Get The Current Point Of View
    if let currentPointOfView = augmentedRealityView.pointOfView {
        //2. Loop Through All The Detected Planes
        for anchorKey in planesDetected {
            let anchor = anchorKey.value
            if augmentedRealityView.isNode(anchor, insideFrustumOf: currentPointOfView) {
                print("ARPlaneAnchor With ID \(anchor.name!) Is In View")
            } else {
                print("ARPlaneAnchor With ID \(anchor.name!) Is Not In View")
            }
        }
    }
}
Finally, we need to call this function, which we could do, for example, in the delegate method renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval):
func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
    detectPlaneInFrostrumOfCamera()
}
Hopefully both of these will point you in the right direction...
In order to get the node that is currently in the point of view, you can do something like this:
var targettedAnchorNode: SCNNode?
if let anchors = sceneView.session.currentFrame?.anchors {
    for anchor in anchors {
        if let anchorNode = sceneView.node(for: anchor), let pointOfView = sceneView.pointOfView, sceneView.isNode(anchorNode, insideFrustumOf: pointOfView) {
            targettedAnchorNode = anchorNode
            break
        }
    }
    if let targettedAnchorNode = targettedAnchorNode {
        addNode(position: SCNVector3Zero, anchorNode: targettedAnchorNode)
    } else {
        debugPrint("Targetted node not found")
    }
} else {
    debugPrint("Anchors not found")
}
If you would like to get all focused nodes, collect the ones satisfying that condition in an array, as in the sketch below.
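For instance, a small sketch of that idea (assuming the same sceneView and frustum check as above):

var visibleAnchorNodes = [SCNNode]()
if let anchors = sceneView.session.currentFrame?.anchors,
   let pointOfView = sceneView.pointOfView {
    for anchor in anchors {
        // Keep only the anchor nodes currently inside the camera frustum
        guard let anchorNode = sceneView.node(for: anchor),
              sceneView.isNode(anchorNode, insideFrustumOf: pointOfView) else { continue }
        visibleAnchorNodes.append(anchorNode)
    }
}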
Good luck!
