In ARKit, what are the plane detection delegate methods in ARSKViewDelegate?

Reading the documentation for planeDetection, it states
If you enable horizontal plane detection, the session adds ARPlaneAnchor objects and notifies your ARSessionDelegate, ARSCNViewDelegate, or ARSKViewDelegate object whenever its analysis of captured video images detects an area that appears to be a flat surface.
However, I can't find the method in ARSKViewDelegate that would receive the plane detection events. I see plenty of examples with ARSCNViewDelegate. Would it be the view(_:didAdd:for:) method, and if so, how can I detect whether it's a plane detection anchor?

Detected planes are anchors added to the ARSession, so you use the delegate methods for responding to newly added anchors.
In Apple's "Providing 2D Virtual Content with SpriteKit" doc, they show some basic code for creating SpriteKit nodes in response to new anchors:
func view(_ view: ARSKView, nodeFor anchor: ARAnchor) -> SKNode? {
    return SKLabelNode(text: "👾")
}
If you want to put a billboarded emoji at the center of every detected plane, that's all the code you need. Otherwise, you can do one or more of the following...
Provide a different SpriteKit node — initialize it in that method and return it there. (Refer to SpriteKit docs, tutorials, SO questions, etc on how to use SpriteKit.)
Also be adding anchors to the session manually, in which case you might need to sort the plane-detection anchors from the rest. Plane anchors are ARPlaneAnchor instances, so you can test the anchor's type in that method:
func view(_ view: ARSKView, nodeFor anchor: ARAnchor) -> SKNode? {
    if let plane = anchor as? ARPlaneAnchor {
        // this anchor came from plane detection
        return SKLabelNode(text: "✈️") // or whatever other SK content
    } else {
        // this anchor came from manually calling addAnchor on the ARSession
        return SKLabelNode(text: "⚓️") // or whatever other SK content
    }
}
Use some of the properties of ARPlaneAnchor to choose what SK content to provide or how to set it up. In that case, use the conditional cast (as? ARPlaneAnchor) like above so you can access those properties.
Change the position/orientation of your SK content relative to that provided/managed by ARKit, or add multiple SK nodes for each anchor. In that case, implement view(_:didAdd:for:) instead, create new node(s) for your SK content and set their positions (etc) before adding them as children of the node that method provides.
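For that last case, here is a minimal sketch of using ARSKViewDelegate's view(_:didAdd:for:) to attach extra SpriteKit content to a plane anchor's node (the label, marker, and offsets are illustrative assumptions, not from the original answer):
func view(_ view: ARSKView, didAdd node: SKNode, for anchor: ARAnchor) {
    guard anchor is ARPlaneAnchor else { return }
    // ARSKView billboards `node` at the anchor's projected screen position;
    // children are positioned relative to it in SpriteKit points
    let label = SKLabelNode(text: "Detected plane")
    label.fontSize = 16
    label.position = CGPoint(x: 0, y: 24) // illustrative offset above the marker
    node.addChild(label)

    let marker = SKShapeNode(circleOfRadius: 4)
    marker.fillColor = .green
    node.addChild(marker)
}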

Related

Adding ground shadows to a USDZ model in RealityKit?

For some time now, I have been trying to add a realistic ground shadow to an object in RealityKit. For my use case, I will not be using Reality Composer, nor (per this question) will I be using an anchor entity from a horizontal plane (my user will tap to place an object and that tap could align with either a horizontal plane or an ARMeshAnchor, as we support LiDAR in our app).
When I test my USDZ model via QuickLook on iOS, I see that iOS adds a shadow beneath my model, and while not wholly realistic, it appears a bit more "placed" on a surface, as compared to no shadow.
In trying to add my model, I am taking the following steps:
self.model = Entity.load(named: "model.usdz")
When a user taps on the screen, I perform a raycast and add the model to the resulting anchor:
func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
    for anchor in anchors {
        if anchor.name == "tapped" {
            let anchorEntity = AnchorEntity(anchor: anchor)
            anchorEntity.addChild(self.model!)
            arView.scene.addAnchor(anchorEntity)
        }
    }
}
When the model is added at the tapped point, there are no ground shadows. As a test, I went down the path of trying to add a directional light, believing that its placement might cast light on the object and, therefore, create shadows. I create the light like so:
class Lighting: Entity, HasDirectionalLight {
    required init() {
        super.init()
        self.light = DirectionalLightComponent(color: .white, intensity: 5000, isRealWorldProxy: true)
    }
}
I've added a global var lightAnchor = AnchorEntity(). Then, in my viewDidLoad method, I am attempting to set up the light like so:
let spotLight = Lighting().light
let shadow = Lighting().shadow
lightAnchor.components.set(shadow!)
lightAnchor.components.set(spotLight)
arView.scene.anchors.append(lightAnchor)
self.model = Entity.load(named: "model.usdz")
While I can see that there is a light shining on the object, it does not seem to cause any shadows to be cast.
If your app supports LiDAR, you can use:
arView.environment.sceneUnderstanding.options.insert(.receivesLighting)
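For context, a minimal sketch of how that option fits into an ARView setup (the light values, shadow parameters, and placement are illustrative assumptions, not from the original answer):
let config = ARWorldTrackingConfiguration()
if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
    config.sceneReconstruction = .mesh // LiDAR mesh for the scene-understanding options to act on
}
arView.session.run(config)

// let the reconstructed mesh receive lighting and shadows from RealityKit lights
arView.environment.sceneUnderstanding.options.insert(.receivesLighting)

// an illustrative shadow-casting directional light aimed at the content
let directionalLight = DirectionalLight()
directionalLight.light.intensity = 5000
directionalLight.shadow = DirectionalLightComponent.Shadow(maximumDistance: 5, depthBias: 2)
directionalLight.look(at: .zero, from: [0, 2, 1], relativeTo: nil)

let lightAnchor = AnchorEntity(world: .zero)
lightAnchor.addChild(directionalLight)
arView.scene.addAnchor(lightAnchor)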

didAddNode vs SceneKit Collision Detection

I am building a small demo where two objects can collide with each other. Basically an object will be placed on a plane. I have the following code for adding physics body to the plane.
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    if anchor is ARPlaneAnchor {
        let plane = SCNPlane(width: 0.5, height: 0.5)
        let material = SCNMaterial()
        material.isDoubleSided = true
        material.diffuse.contents = UIImage(named: "overlay_grid")
        plane.firstMaterial = material
        let planeNode = SCNNode(geometry: plane)
        planeNode.physicsBody = SCNPhysicsBody(type: .static, shape: nil)
        planeNode.physicsBody?.categoryBitMask = BodyType.plane.rawValue
        planeNode.eulerAngles.x = .pi/2
        node.addChildNode(planeNode)
    }
}
Even though the plane gets added, it does not participate in any collisions. If I try to place objects on it, they go right through it. But if I change the last line to the following, it works:
// node.addChildNode(planeNode) // INSTEAD OF THIS
planeNode.position = SCNVector3(anchor.transform.columns.3.x, anchor.transform.columns.3.y, anchor.transform.columns.3.z)
self.sceneView.scene.rootNode.addChildNode(planeNode) // THIS WORKS
My understanding is that all the collision-related stuff is maintained by the scene, and that in order to participate in collisions I need to add the node to the scene's rootNode hierarchy instead of to the node hierarchy that ARSCNView manages for the anchor.
QUESTION:
// node.addChildNode(planeNode) // WHY THIS DOES NOT WORK
planeNode.position = SCNVector3(anchor.transform.columns.3.x, anchor.transform.columns.3.y, anchor.transform.columns.3.z)
self.sceneView.scene.rootNode.addChildNode(planeNode) // WHY THIS WORKS
static physics bodies are so named because they aren’t supposed to move (relative to the global/world/scene coordinate space). Many optimizations in the inner workings of a physics engine depend on this, so changing the position of a node with an attached static physics body is likely to cause incorrect behavior.
ARKit continually moves the ARPlaneAnchors that result from plane detection — the longer it looks at a real-world planar surface, from more different angles, the better it knows the position and size of that plane.
When you add a child node to the ARSCNView-managed node in renderer(_:didAdd:for:), the child node’s position may not change... but that position is relative to its parent node, and ARKit automatically moves the parent node to match the ARPlaneAnchor it goes with. So the child node moves relative to the world whenever ARKit updates the plane anchor. If you have a static physics body on that node, you get weirdness.
When you directly add a node as a child of the scene’s rootNode and set its position based on the initial position of a plane anchor, that node stays still — you’re the only one setting its world-space position, and you’re doing so exactly once. So it’s safe to give it a static physics body.
(Note that if you want “static” physics body behavior for something that can change over time, it is possible to delete the physics body and re-create it at a new position. Just don’t do so too often or you’re probably defeating other optimizations in the physics engine.)
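If you do want a root-level plane node to track ARKit's refinements, here is a minimal sketch of that delete-and-recreate idea (worldPlaneNodes is a hypothetical [UUID: SCNNode] dictionary you would fill in renderer(_:didAdd:for:); it is not part of the original answer):
func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    guard let planeAnchor = anchor as? ARPlaneAnchor,
          let planeNode = worldPlaneNodes[planeAnchor.identifier] else { return }
    // move the root-level node to the refined anchor position...
    planeNode.position = SCNVector3(planeAnchor.transform.columns.3.x,
                                    planeAnchor.transform.columns.3.y,
                                    planeAnchor.transform.columns.3.z)
    // ...then recreate the static body so the physics engine sees the new position
    // (don't do this every frame)
    planeNode.physicsBody = SCNPhysicsBody(type: .static, shape: nil)
    planeNode.physicsBody?.categoryBitMask = BodyType.plane.rawValue
}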

ARKit how to draw face mesh?

I want to draw the face mesh in real time as is shown in the Apple video. It's also being done in MeasureKit's app. I got the ARSession running, which constantly delivers updated ARFrame objects to the delegate, and I can get an ARFaceAnchor from it, which contains the face geometry consisting of ARFaceGeometry and blendShapes. How do I use the ARFaceGeometry data to draw the mesh in real time?
Thanks.
It's possible to create a face mesh in augmented reality. I'd recommend the following approach, which uses ARSCNViewDelegate.
For example:
extension ViewController: ARSCNViewDelegate {
    func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
        guard let device = sceneView.device else {
            return nil
        }
        let faceGeometry = ARSCNFaceGeometry(device: device)
        let node = SCNNode(geometry: faceGeometry)
        node.geometry?.firstMaterial?.fillMode = .lines
        return node
    }
}
In this example, we create an ARSCNFaceGeometry to be rendered and wrap it in a SceneKit node. Setting the node's material fill mode to .lines draws the geometry as a wireframe, which should give you the face mesh you're after. A further step would be to have the mesh follow facial expressions as they change.
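To keep the mesh tracking the face (and its expressions) in real time, one option is to also implement renderer(_:didUpdate:for:) and push each updated ARFaceGeometry into the ARSCNFaceGeometry returned above; a minimal sketch:
func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    guard let faceAnchor = anchor as? ARFaceAnchor,
          let faceGeometry = node.geometry as? ARSCNFaceGeometry else { return }
    // refresh the wireframe with the latest vertex data from the face anchor
    faceGeometry.update(from: faceAnchor.geometry)
}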

How to dynamically create annotations for a 3D object using SceneKit / ARKit in iOS 11?

I am working on creating annotations using overlaySKScene, something similar to this (https://sketchfab.com/models/1144d7be20434e8387a2f0e311eca9b1#). I followed https://github.com/halmueller/ImmersiveInterfaces/tree/master/Tracking%20Overlay to create the overlay.
But in the provided example, they create only one annotation and it is static. I want to create multiple annotations dynamically, based on the number of child nodes we have, and I should also be able to position each annotation on top of its respective child node. How can I achieve this?
I am adding the overlay like below:
sceneView.overlaySKScene = InformationOverlayScene(size: sceneView.frame.size)
where InformationOverlayScene is the SKScene in which I have added two child nodes to create one annotation.
Create an array of annotation sprites that maps to the array of child nodes, and then do something like the following:
func renderer(_ aRenderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
    let scnView = self.view as! SCNView
    // for each character, project its 3D position into the 2D overlay scene
    for i in 0..<inBattleChars.count {
        let healthbarpos = scnView.projectPoint(inBattleChars[i].position)
        battleSKO.healthbars[i].position = CGPoint(x: CGFloat(healthbarpos.x),
                                                   y: (scnView.bounds.size.height - 10) - CGFloat(healthbarpos.y))
    }
}
Before every frame is rendered, this updates the position of an SKSpriteNode (in healthbars) for each SCNNode in inBattleChars. The key part is where projectPoint is used to get the 2D position in the SK overlay scene that corresponds to the SCNNode's position in the 3D scene.
To prevent annotations from showing up for nodes that aren't visible (such as child nodes on the back side of the parent object), use the SCNSceneRenderer method nodesInsideFrustum(of:).
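For example, still inside renderer(_:updateAtTime:), a sketch of that visibility check (approximate, since frustum containment does not account for occlusion by other geometry):
if let pointOfView = scnView.pointOfView {
    let visibleNodes = scnView.nodesInsideFrustum(of: pointOfView)
    for i in 0..<inBattleChars.count {
        // hide the health bar when its character is outside the camera frustum
        battleSKO.healthbars[i].isHidden = !visibleNodes.contains(inBattleChars[i])
    }
}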
You can add an SKScene or a CALayer as a material property.
You could create an SCNPlane with a specific width and height and add a SpriteKit scene as its material.
You can find an example here.
Then you just position the plane where you want it to be and create and delete the annotations as you need them.
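A minimal sketch of that approach, assuming a hypothetical childNode you want to annotate (sizes, positions, and the flip workaround are illustrative):
let annotationScene = SKScene(size: CGSize(width: 200, height: 80))
annotationScene.backgroundColor = .clear
let label = SKLabelNode(text: "Annotation")
label.position = CGPoint(x: 100, y: 40)
annotationScene.addChild(label)

let plane = SCNPlane(width: 0.2, height: 0.08)
plane.firstMaterial?.diffuse.contents = annotationScene
plane.firstMaterial?.isDoubleSided = true
// SpriteKit content often renders flipped when used as SceneKit material contents;
// a common workaround is to flip the texture vertically
plane.firstMaterial?.diffuse.contentsTransform = SCNMatrix4MakeScale(1, -1, 1)
plane.firstMaterial?.diffuse.wrapT = .repeat

let annotationNode = SCNNode(geometry: plane)
annotationNode.position = SCNVector3(0, 0.1, 0) // just above the annotated node
annotationNode.constraints = [SCNBillboardConstraint()] // keep it facing the camera
childNode.addChildNode(annotationNode)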

Swift: ARKit save ARPlaneAnchor for next session

ARKit is quite new and I am quite new to Swift... so I'm having some trouble...
I'd like to save the ARPlaneAnchors detected during a session and reload them when I relaunch my app. My phone will always be in the same place, and I'd like to scan the room one time and remember the anchors I found in the room every time I launch the app.
I tried several solutions:
Solution 1:
Save the ARPlaneAnchor using NSKeyedArchiver.archiveRootObject(plane, toFile: filePath)
I got this error:
Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '-[ARPlaneAnchor encodeWithCoder:]: unrecognized selector sent to instance
I think that maybe I can't save this kind of data locally.
Solution 2: Store the data from the ARPlaneAnchor, then instantiate the anchors when I launch the app. The data are mostly floats. I could create ARAnchors easily and cast them as ARPlaneAnchor, but I could not modify the "center" and "extent" properties of the ARPlaneAnchor because they only have a getter and no setter. So I can't recreate the right anchors.
I am open to any solution. I think I need to store the ARAnchor objects, but so far I haven't found a way to do it without a crash!
So if someone can help me I would be very grateful.
First... if your app is restricted to a situation where the device is permanently installed and the user can never move or rotate it, using ARKit to display overlay content on the camera feed is sort of a "killing mosquitos with a cannon" kind of situation. You could just as well work out at development time what kind of camera projection your 3D engine needs, use a "dumb" camera feed with your 3D engine running on top, and not need iOS 11 or an ARKit-capable device.
So you might want to think about your use case or your technology stack some more before you commit to specific solutions and workarounds.
As for your more specific problem...
ARPlaneAnchor is entirely a read-only class, because its use case is entirely read-only. It exists for the sole purpose of giving ARKit a way to give you information about detected planes. However, once you have that information, you can do with it whatever you want. And from there on, you don't need to keep ARPlaneAnchor in the equation anymore.
Perhaps you're confused because of the typical use case for plane detection (and SceneKit-based display):
Turn on plane detection
Respond to renderer(_:didAdd:for:) to receive ARPlaneAnchor objects
In that method, return virtual content to associate with the plane anchor
Let ARSCNView automatically position that content for you so it follows the plane's position
If your plane's position is static with respect to the camera, though, you don't need all that.
You only need ARKit to handle the placement of your content within the scene if that placement needs ongoing management, as is the case when plane detection is live (ARKit refines its estimates of plane location and extent and updates the anchor accordingly). If you did all your plane-finding ahead of time, you won't be getting updates, so you don't need ARKit to manage updates.
Instead your steps can look more like this:
Know where a plane is (position in world space).
Set the position of your virtual content to the position of the plane.
Add the content to the scene directly.
In other words, your "Solution 2" is a step in the right direction, but not far enough. You want to archive not an ARPlaneAnchor instance itself, but the information it contains — and then when unarchiving, you don't need to re-create an ARPlaneAnchor instance, you just need to use that information.
So, if this is what you do to place content with "live" plane detection:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let planeAnchor = anchor as? ARPlaneAnchor else { return }
    let extent = planeAnchor.extent
    let center = planeAnchor.center
    // planeAnchor.transform not used, because ARSCNView automatically applies it
    // to the container node, and we make a child of the container node
    let plane = SCNPlane(width: CGFloat(extent.x), height: CGFloat(extent.z))
    let planeNode = SCNNode(geometry: plane)
    planeNode.eulerAngles.x = .pi / 2
    planeNode.simdPosition = center
    node.addChildNode(planeNode)
}
Then you can do something like this for static content placement:
struct PlaneInfo { // something to save and restore ARPlaneAnchor data
    let transform: float4x4
    let center: float3
    let extent: float3
}

func makePlane(from planeInfo: PlaneInfo) { // call this when you place content
    let extent = planeInfo.extent
    let center = planeInfo.transform * float4(planeInfo.center, 1)
    // the anchor's center is an offset relative to its transform,
    // so applying the transform gives the plane's center in world space
    let plane = SCNPlane(width: CGFloat(extent.x), height: CGFloat(extent.z))
    let planeNode = SCNNode(geometry: plane)
    planeNode.eulerAngles.x = .pi / 2
    planeNode.simdPosition = center.xyz
    view.scene.rootNode.addChildNode(planeNode)
}
// convenience vector-width conversions used above
extension float4 {
    init(_ xyz: float3, _ w: Float) {
        self.init(xyz.x, xyz.y, xyz.z, w)
    }
    var xyz: float3 {
        return float3(self.x, self.y, self.z)
    }
}
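To actually persist that information between launches, one option is a small Codable wrapper around PlaneInfo. A minimal sketch, assuming JSON on disk is acceptable (the StoredPlane type and helper functions are illustrative, not part of the original answer):
struct StoredPlane: Codable {
    var transform: [Float] // 16 values, column-major
    var center: [Float]    // x, y, z
    var extent: [Float]    // x, y, z

    init(_ info: PlaneInfo) {
        transform = (0..<4).flatMap { column in
            (0..<4).map { row in info.transform[column][row] }
        }
        center = [info.center.x, info.center.y, info.center.z]
        extent = [info.extent.x, info.extent.y, info.extent.z]
    }

    var planeInfo: PlaneInfo {
        let columns = (0..<4).map { column in
            float4(transform[column * 4 + 0], transform[column * 4 + 1],
                   transform[column * 4 + 2], transform[column * 4 + 3])
        }
        return PlaneInfo(transform: float4x4(columns: (columns[0], columns[1], columns[2], columns[3])),
                         center: float3(center[0], center[1], center[2]),
                         extent: float3(extent[0], extent[1], extent[2]))
    }
}

func savePlanes(_ planes: [PlaneInfo], to url: URL) throws {
    try JSONEncoder().encode(planes.map(StoredPlane.init)).write(to: url)
}

func loadPlanes(from url: URL) throws -> [PlaneInfo] {
    try JSONDecoder().decode([StoredPlane].self, from: Data(contentsOf: url)).map { $0.planeInfo }
}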
