Node clones not maintaining geometry despite copying it over in iOS 12

I have a little painting app in SceneKit that I originally developed on iOS 9. There's a part where I take a collection of nodes, clone them, then flatten them to make one single node that can be moved around.
Problem is, in iOS 9 I was able to successfully do the "deep clone" technique where I copy over the node's geometry and materials, and these properties were retained. This is not the case in iOS 12, however. The geometry gets all wonky despite running the exact same code. I've spent a few hours trying different variations of the process below, to no avail. Here's the code:
func sendPainting() {
    // a parent node that contains the canvas and frame nodes as children
    guard let configPainting = sceneView.scene!["configpainting"] else {
        return
    }
    guard let configCanvas = configPainting.childNode(withName: "canvas", recursively: true) else {
        return
    }
    guard let configFrame = configPainting.childNode(withName: "frame", recursively: true) else {
        return
    }

    let configClone = configPainting.clone()

    // copy over geometry and material
    configClone.childNode(withName: "canvas", recursively: true)?.geometry = configCanvas.geometry?.copy() as? SCNGeometry
    configClone.childNode(withName: "canvas", recursively: true)?.geometry?.firstMaterial = configCanvas.geometry?.firstMaterial?.copy() as? SCNMaterial
    configClone.childNode(withName: "frame", recursively: true)?.geometry = configFrame.geometry?.copy() as? SCNGeometry
    configClone.childNode(withName: "frame", recursively: true)?.geometry?.firstMaterial = configFrame.geometry?.firstMaterial?.copy() as? SCNMaterial

    // make a flattened clone to now have the canvas and frame consolidated into 1 node
    let localClone = configClone.flattenedClone()

    ...

    // add to my scene
    sceneView.scene!["localpaintings"]?.addChildNode(localClone)
}
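For reference, the deep-copy step above can be generalized to every descendant node. This is just a sketch of the same technique (not code from the original question), assuming the clone was produced by clone() so the child hierarchies line up one-to-one:
func deepCopyGeometry(from source: SCNNode, to target: SCNNode) {
    // copy the geometry and all of its materials, not just firstMaterial
    target.geometry = source.geometry?.copy() as? SCNGeometry
    if let materials = source.geometry?.materials {
        target.geometry?.materials = materials.compactMap { $0.copy() as? SCNMaterial }
    }
    // recurse into matching children of the original and the clone
    for (sourceChild, targetChild) in zip(source.childNodes, target.childNodes) {
        deepCopyGeometry(from: sourceChild, to: targetChild)
    }
}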
Here's what the group of nodes looks like before cloning and flattening, in both iOS 9 and 12:
And after:
In iOS 9, geometry is retained (sorry, different image, but you get the idea):
In iOS 12, the frame gets shrunk and rotated:

Related

Removing planeDetection planes prevents placing additional planes

I am generating planes using ARKit, and if I manually remove a plane I am unable to detect planes in the same position.
To detect planes I set the scene config
let config = ARWorldTrackingConfiguration()
config.worldAlignment = .gravity
config.providesAudioData = false
config.isLightEstimationEnabled = true
...
else if ARMode == .floor {
    self.ARMode = .floor
    scannerBox.isHidden = true
    focusNode?.isHidden = false
    plusButton.isHidden = true
    config.planeDetection = .horizontal
    self.planeTexture = "test.scnassets/Textures/Surface_Texture.png"
}
...
sceneView.session.run(config)
Add plane to the scene in renderer function
let planeNode = self.createARPlaneNode(planeAnchor: planeAnchor,
                                       color: UIColor.yellow.withAlphaComponent(0.5))
node.addChildNode(planeNode)
If a certain button is pushed, the plane and other objects should be removed:
scene.rootNode.enumerateChildNodes { (node, _) in
    print(node.name)
    if node.name == "sphere" {
        node.removeFromParentNode()
    }
    if node.name == "surfacePlane" {
        node.removeFromParentNode()
    }
    if let name = node.name, name.contains("ProductModel_") {
        node.removeFromParentNode()
    }
}
When the above code fires the planes, spheres, and products disappear as expected.
If I try to scan the surface elsewhere in the room it works as expected, but if I try to scan where the plane was and generate a new one, it will not work. It will not generate a new plane, and if nearby planes expand to cover the same area they disappear.
I believe the problem is likely that the scene geometry of the removed planes is somehow still present, preventing new planes in the same space.
As a temporary workaround I am stopping and restarting the AR session, which removes the planes:
let config = sceneView.session.configuration as! ARWorldTrackingConfiguration
config.planeDetection = .horizontal
sceneView.session.run(config, options: [.resetTracking, .removeExistingAnchors])
I am trying to determine why this is the case and why the planes aren't treated the same as the objects I explicitly placed in the scene.
I believe you also need to remove the ARPlaneAnchor your plane node was attached to so that ARKit will create a new ARPlaneAnchor and kick off the renderer function that is creating your plane node.
Something like
if node.name == "surfacePlane" {
    if let planeAnchor = sceneView.anchor(for: node) as? ARPlaneAnchor {
        sceneView.session.remove(anchor: planeAnchor)
    }
    node.removeFromParentNode()
}

ARKit Face Tracking SceneKit Object Moves Incorrectly

I'm having trouble understanding SceneKit transforms and anchoring an object to a detected face. I have created a face detection app and have successfully applied masks, with and without texture. I also successfully applied "glasses" made from text ("00"), including an occlusion node.
In both cases, the objects move with the face as expected. However, when I create a simple hat made from two cylinders within SceneKit, the behavior is totally unexpected.
First, I could not seem to anchor the hat to the face, but had to adjust the transforms, which made the hat appear in a different place with almost every face. Even worse, the hat moves in the opposite direction to the face. Rotate the user's face to the left, and the hat moves to the right. Rotate the face up, and the hat moves down.
Clearly, I'm missing something important here about anchoring objects to the face. Any guidance would be appreciated.
Xcode 10 beta 3, iOS 11.4.1 running on an iPhone X.
There is a separate class for hat, glasses, mask:
class Hat: SCNNode {
    init(geometry: ARSCNFaceGeometry) {
        geometry.firstMaterial?.colorBufferWriteMask = []
        super.init()
        self.geometry = geometry

        guard let url = Bundle.main.url(forResource: "hat", withExtension: "scn", subdirectory: "Models.scnassets") else { fatalError("missing hat resource") }
        let node = SCNReferenceNode(url: url)!
        node.load()
        addChildNode(node)
    }//init

    func update(withFaceAnchor anchor: ARFaceAnchor) {
        let faceGeometry = geometry as! ARSCNFaceGeometry
        faceGeometry.update(from: anchor.geometry)
    }//update

    required init?(coder aDecoder: NSCoder) {
        fatalError("\(#function) has not been implemented")
    }//required init
}//class
A couple of the functions in the ViewController:
func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    guard let faceAnchor = anchor as? ARFaceAnchor else { return }
    updateMessage(text: "Tracking your face")
    switch contentTypeSelected {
    case .none:
        break
    case .mask:
        mask?.update(withFaceAnchor: faceAnchor)
    case .glasses:
        glasses?.update(withFaceAnchor: faceAnchor)
    case .hat:
        hat?.update(withFaceAnchor: faceAnchor)
    }//switch
}//didUpdate

func createFaceGeometry() {
    updateMessage(text: "Creating face geometry")
    let device = sceneView.device!

    let maskGeometry = ARSCNFaceGeometry(device: device)!
    mask = Mask(geometry: maskGeometry, maskType: maskType)

    let glassesGeometry = ARSCNFaceGeometry(device: device)!
    glasses = Glasses(geometry: glassesGeometry)

    let hatGeometry = ARSCNFaceGeometry(device: device)!
    hat = Hat(geometry: hatGeometry)
}//createFaceGeometry
The verisimilitude of a hat is going to depend on how well it can be positioned in relation to the face and how well it can appear situated in the scene (i.e. features that would be in front of the hat should occlude the hat itself). With that in mind, you'll want the face to occlude the hat. So your init for Hat should set up an occlusion node using the face geometry:
let occlusionNode: SCNNode

init(geometry: ARSCNFaceGeometry) {
    /*
     Taken directly from Apple's sample code https://developer.apple.com/documentation/arkit/creating_face_based_ar_experiences
     */
    geometry.firstMaterial!.colorBufferWriteMask = []
    occlusionNode = SCNNode(geometry: geometry)
    occlusionNode.renderingOrder = -1
    super.init()
    addChildNode(occlusionNode)

    guard let url = Bundle.main.url(forResource: "hat", withExtension: "scn", subdirectory: "Models.scnassets") else { fatalError("missing hat resource") }
    let node = SCNReferenceNode(url: url)!
    node.load()
    addChildNode(node)
}
This will allow the face to appear in front of any virtual objects that have a z depth greater than the face mesh.
You will also want to change let hatGeometry = ARSCNFaceGeometry(device: device)! to let hatGeometry = ARSCNFaceGeometry(device: device, fillMesh: true)!, otherwise the hat will be visible through the eye openings, giving an uncanny, undesirable effect.
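In createFaceGeometry() from the question, that change would look like:
let hatGeometry = ARSCNFaceGeometry(device: device, fillMesh: true)!
hat = Hat(geometry: hatGeometry)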
The next issue is to position the hat so that it appears believably in the scene.
Because we want the face to occlude a large part of the hat, it is best to position it in the y direction at the top of the face geometry. To do that successfully, you'll likely want your hat to have its pivot point at the bottom center of the hat geometry, located at x = 0, y = 0 in your .scn file. For example, the scene editor and node inspector might look something like this:
Then in your func update(withFaceAnchor anchor : ARFaceAnchor) you can say
func update(withFaceAnchor anchor: ARFaceAnchor) {
    let faceGeometry = geometry as! ARSCNFaceGeometry
    faceGeometry.update(from: anchor.geometry)
    hat.position.y = faceGeometry.boundingSphere.radius
}
Finally, for the z position of the hat you'll likely want a slightly negative value, as the bulk of a hat sits behind one's face. -0.089 worked well for me.
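Put together, the update might end up like this (a sketch; it assumes hat is a stored reference to the loaded hat node, which the answer's code doesn't show):
func update(withFaceAnchor anchor: ARFaceAnchor) {
    let faceGeometry = geometry as! ARSCNFaceGeometry
    faceGeometry.update(from: anchor.geometry)
    hat.position.y = faceGeometry.boundingSphere.radius
    hat.position.z = -0.089   // slightly behind the face; tune per hat model
}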

How to drag SCNNode with finger irrespective of axis using ARKit?

I am working on an AR-based application using ARKit. I am using https://developer.apple.com/documentation/arkit/handling_3d_interaction_and_ui_controls_in_augmented_reality as the base for this. Using this, I am able to move or rotate the whole virtual object.
Now, there are a lot of child nodes in the virtual object. I want to drag/move any child node with the user's finger, irrespective of the axis. The child SCNNode may be on the ground or floating. I want to move the object wherever the user's finger goes, regardless of the axis or the Euler angles of the child node. Is this even possible?
I followed the links below, but the node just moves along a particular axis.
ARKit - Drag a node along a specific axis (not on a plane)
Dragging SCNNode in ARKit Using SceneKit
I tried using the code below, but it is not helping at all:
let tapPoint: CGPoint = gesture.location(in: sceneView)
let result = sceneView.hitTest(tapPoint, options: nil)
if result.count == 0 {
    return
}
let scnHitResult: SCNHitTestResult? = result.first
movedObject = scnHitResult?.node //.parent?.parent

let hitResults = self.sceneView.hitTest(tapPoint, types: .existingPlane)
if !hitResults.isEmpty {
    guard let hitResult = hitResults.last else { return }
    movedObject?.position = SCNVector3Make(hitResult.worldTransform.columns.3.x,
                                           hitResult.worldTransform.columns.3.y,
                                           hitResult.worldTransform.columns.3.z)
}
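One common way to get axis-independent dragging (a sketch, not from the original thread) is to keep the node at its current projected depth and unproject each new touch location at that depth, so the node follows the finger in a plane parallel to the screen. It assumes movedObject is a stored property, as in the question's code:
@objc func handlePan(_ gesture: UIPanGestureRecognizer) {
    let point = gesture.location(in: sceneView)
    switch gesture.state {
    case .began:
        // grab whichever node is under the finger
        movedObject = sceneView.hitTest(point, options: nil).first?.node
    case .changed:
        guard let node = movedObject else { return }
        // reuse the node's current screen-space depth so movement is free in x and y
        let projected = sceneView.projectPoint(node.worldPosition)
        node.worldPosition = sceneView.unprojectPoint(
            SCNVector3Make(Float(point.x), Float(point.y), projected.z))
    default:
        movedObject = nil
    }
}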

iOS 11 beta ARKit can't scale scene object

I created a basic scene and added a .dae file.
First, every time I run or save the project I get this popup:
The document “billboard.dae” could not be saved.
It still runs, though, but it's annoying.
The real issue is that I can't scale the object.
I have tried different values, 0.5 and also > 1, but nothing seems to work. Here is my code:
override func viewDidLoad() {
    super.viewDidLoad()
    sceneView.delegate = self
    sceneView.showsStatistics = true

    let scene = SCNScene(named: "art.scnassets/billboard.dae")!
    let billboardNode = scene.rootNode.childNode(withName: "billboard", recursively: true)
    // billboardNode?.position = SCNVector3Make(0, 0, 1)
    billboardNode?.position.z = 10
    billboardNode?.scale.z = 0.5
    // billboardNode?.scale = SCNVector3Make(0.4, 0.4, 0.4)

    sceneView.scene = scene
}
Any ideas?
Thanks
Have you verified that billboardNode is not nil? You're sending position and scaling messages to an optional (the result of looking for a child node with a given name), but if it's nil (because finding the child node failed) they won't have any effect.
The error suggests to me there was some problem converting the .dae file, which might explain why the scene can't locate the asset by name. Or it might be as simple as "billboard" vs. "Billboard".
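A quick way to check, sketched against the question's own code:
guard let billboardNode = scene.rootNode.childNode(withName: "billboard", recursively: true) else {
    print("No node named \"billboard\" found; check the node names inside billboard.dae")
    return
}
billboardNode.position.z = 10
billboardNode.scale = SCNVector3Make(0.4, 0.4, 0.4)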

ARKit hide objects behind walls

How can I use the horizontal and vertical planes tracked by ARKit to hide objects behind walls or behind real objects? Currently the added 3D objects can be seen through walls when you leave a room and/or in front of objects that they should be behind. So is it possible to use the data ARKit gives me to provide a more natural AR experience without the objects appearing through walls?
You have two issues here.
(And you didn't even use regular expressions!)
How to create occlusion geometry for ARKit/SceneKit?
If you set a SceneKit material's colorBufferWriteMask to an empty value ([] in Swift), any objects using that material won't appear in the view, but they'll still write to the z-buffer during rendering, which affects the rendering of other objects. In effect, you'll get a "hole" shaped like your object, through which the background shows (the camera feed, in the case of ARSCNView), but which can still obscure other SceneKit objects.
You'll also need to make sure that the occluder renders before any other nodes it's supposed to obscure. You can do this using the node hierarchy (I can't remember offhand whether parent nodes render before their children or the other way around, but it's easy enough to test). Nodes that are peers in the hierarchy don't have a deterministic order, but you can force an order regardless of hierarchy with the renderingOrder property. That property defaults to zero, so setting it to -1 will render before everything. (Or for finer control, set the renderingOrders for several nodes to a sequence of values.)
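A minimal sketch of that idea, assuming wallNode is some node you have already placed over a real wall:
let occlusionMaterial = SCNMaterial()
occlusionMaterial.colorBufferWriteMask = []   // writes depth only, no color
wallNode.geometry?.materials = [occlusionMaterial]
wallNode.renderingOrder = -1                  // render before the nodes it should hide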
How to detect walls/etc so you know where to put occlusion geometry?
In iOS 11.3 and later (aka "ARKit 1.5"), you can turn on vertical plane detection. (Note that when you get vertical plane anchors back from that, they're automatically rotated. So if you attach models to the anchor, their local "up" direction is normal to the plane.) Also new in iOS 11.3, you can get a more detailed shape estimate for each detected plane (see ARSCNPlaneGeometry), regardless of its orientation.
However, even if you have the horizontal and the vertical, the outer limits of a plane are just estimates that change over time. That is, ARKit can quickly detect where part of a wall is, but it doesn't know where the edges of the wall are without the user spending some time waving the device around to map out the space. And even then, the mapped edges might not line up precisely with those of the real wall.
So... if you use detected vertical planes to occlude virtual geometry, you might find places where virtual objects that are supposed to be hidden show through, either by not quite being hidden right at the edge of the wall, or by being visible through places where ARKit hasn't mapped the entire real wall. (The latter issue you might be able to solve by assuming a larger extent than ARKit does.)
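For reference, turning on vertical (as well as horizontal) plane detection looks like this on iOS 11.3 and later (assuming the usual sceneView outlet):
let configuration = ARWorldTrackingConfiguration()
configuration.planeDetection = [.horizontal, .vertical]
sceneView.session.run(configuration)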
For creating an occlusion material (also known as blackhole material or blocking material) you have to use the following instance properties: .colorBufferWriteMask, .readsFromDepthBuffer, .writesToDepthBuffer and .renderingOrder.
You can use them this way:
plane.geometry?.firstMaterial?.isDoubleSided = true
plane.geometry?.firstMaterial?.colorBufferWriteMask = .alpha
plane.geometry?.firstMaterial?.writesToDepthBuffer = true
plane.geometry?.firstMaterial?.readsFromDepthBuffer = true
plane.renderingOrder = -100
...or this way:
func occlusion() -> SCNMaterial {
    let occlusionMaterial = SCNMaterial()
    occlusionMaterial.isDoubleSided = true
    occlusionMaterial.colorBufferWriteMask = []
    occlusionMaterial.readsFromDepthBuffer = true
    occlusionMaterial.writesToDepthBuffer = true
    return occlusionMaterial
}

plane.geometry?.firstMaterial = occlusion()
plane.renderingOrder = -100
Creating an occlusion material is really simple:
let boxGeometry = SCNBox(width: 0.1, height: 0.1, length: 0.1, chamferRadius: 0)
// Define an occlusion material
let occlusionMaterial = SCNMaterial()
occlusionMaterial.colorBufferWriteMask = []
boxGeometry.materials = [occlusionMaterial]
self.box = SCNNode(geometry: boxGeometry)
// Set rendering order so this box renders before (and can occlude) the other models
self.box.renderingOrder = -1
Great solution:
GitHub: arkit-occlusion
Worked for me.
But in my case I wanted to set the walls by code. So if you don't want the user to place the walls manually, use plane detection to detect walls and set up the occluding walls in code.
Or, within a range of about 4 meters, the iPhone depth sensor works and you can detect obstacles with an ARHitTest.
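A sketch of that "walls set by code" idea, building an invisible occluder from each detected vertical plane (the delegate wiring and names here are assumptions, not from the answer):
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let planeAnchor = anchor as? ARPlaneAnchor, planeAnchor.alignment == .vertical else { return }
    let plane = SCNPlane(width: CGFloat(planeAnchor.extent.x),
                         height: CGFloat(planeAnchor.extent.z))
    plane.firstMaterial?.colorBufferWriteMask = []    // occlusion material: depth only
    let occluderNode = SCNNode(geometry: plane)
    occluderNode.eulerAngles.x = -.pi / 2             // rotate the SCNPlane to lie in the anchor's plane
    occluderNode.renderingOrder = -100
    node.addChildNode(occluderNode)
}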
ARKit 6.0 and LiDAR scanner
You can hide any object behind a virtual invisible wall that replicates the real wall's geometry. iPhones and iPads Pro equipped with a LiDAR scanner let us reconstruct a 3D topological map of the surrounding environment. The LiDAR scanner greatly improves the quality of the Z channel, which allows you to occlude or remove humans from the AR scene.
LiDAR also improves features such as object occlusion, motion tracking, and raycasting. With the LiDAR scanner you can reconstruct a scene even in an unlit environment or in a room with white, featureless walls. 3D reconstruction of the surrounding environment is possible thanks to the sceneReconstruction instance property (introduced in ARKit 3.5). With a reconstructed mesh of your walls, it's now very easy to hide any object behind real walls.
To activate the sceneReconstruction option (shown here with RealityKit's ARView), use the following code:
@IBOutlet var arView: ARView!
arView.automaticallyConfigureSession = false
guard ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh)
else { return }
let config = ARWorldTrackingConfiguration()
config.sceneReconstruction = .mesh
arView.debugOptions.insert([.showSceneUnderstanding])
arView.environment.sceneUnderstanding.options.insert([.occlusion])
arView.session.run(config)
Also if you're using SceneKit try the following approach:
@IBOutlet var sceneView: ARSCNView!
func renderer(_ renderer: SCNSceneRenderer,
              nodeFor anchor: ARAnchor) -> SCNNode? {
    guard let meshAnchor = anchor as? ARMeshAnchor
    else { return nil }

    let geometry = SCNGeometry(arGeometry: meshAnchor.geometry)
    geometry.firstMaterial?.diffuse.contents =
        colorizer.assignColor(to: meshAnchor.identifier)

    let node = SCNNode()
    node.name = "Node_\(meshAnchor.identifier)"
    node.geometry = geometry
    return node
}

func renderer(_ renderer: SCNSceneRenderer,
              didUpdate node: SCNNode,
              for anchor: ARAnchor) {
    guard let meshAnchor = anchor as? ARMeshAnchor
    else { return }

    let newGeometry = SCNGeometry(arGeometry: meshAnchor.geometry)
    newGeometry.firstMaterial?.diffuse.contents =
        colorizer.assignColor(to: meshAnchor.identifier)
    node.geometry = newGeometry
}
And here are SCNGeometry and SCNGeometrySource extensions:
extension SCNGeometry {
    convenience init(arGeometry: ARMeshGeometry) {
        let verticesSource = SCNGeometrySource(arGeometry.vertices, semantic: .vertex)
        let normalsSource = SCNGeometrySource(arGeometry.normals, semantic: .normal)
        let faces = SCNGeometryElement(arGeometry.faces)
        self.init(sources: [verticesSource, normalsSource], elements: [faces])
    }
}

extension SCNGeometrySource {
    convenience init(_ source: ARGeometrySource, semantic: Semantic) {
        self.init(buffer: source.buffer,
                  vertexFormat: source.format,
                  semantic: semantic,
                  vertexCount: source.count,
                  dataOffset: source.offset,
                  dataStride: source.stride)
    }
}
...and SCNGeometryElement and SCNGeometryPrimitiveType extensions:
extension SCNGeometryElement {
    convenience init(_ source: ARGeometryElement) {
        let pointer = source.buffer.contents()
        let byteCount = source.count *
                        source.indexCountPerPrimitive *
                        source.bytesPerIndex
        let data = Data(bytesNoCopy: pointer,
                        count: byteCount,
                        deallocator: .none)
        self.init(data: data,
                  primitiveType: .of(source.primitiveType),
                  primitiveCount: source.count,
                  bytesPerIndex: source.bytesPerIndex)
    }
}

extension SCNGeometryPrimitiveType {
    static func of(_ type: ARGeometryPrimitiveType) -> SCNGeometryPrimitiveType {
        switch type {
        case .line: return .line
        case .triangle: return .triangles
        @unknown default: fatalError("unsupported ARGeometryPrimitiveType")
        }
    }
}
