ARKit hide objects behind walls - ios

How can I use the horizontal and vertical planes tracked by ARKit to hide objects behind walls / behind real objects? Currently the added 3D objects can be seen through walls when you leave a room, and/or they appear in front of objects that they should be behind. So is it possible to use the data ARKit gives me to provide a more natural AR experience without the objects appearing through walls?

You have two issues here.
(And you didn't even use regular expressions!)
How to create occlusion geometry for ARKit/SceneKit?
If you set a SceneKit material's colorBufferWriteMask to an empty value ([] in Swift), any objects using that material won't appear in the view, but they'll still write to the z-buffer during rendering, which affects the rendering of other objects. In effect, you'll get a "hole" shaped like your object, through which the background shows (the camera feed, in the case of ARSCNView), but which can still obscure other SceneKit objects.
You'll also need to make sure that an occluder renders before any other nodes it's supposed to obscure. You can do this using the node hierarchy (I can't remember offhand whether parent nodes render before their children or the other way around, but it's easy enough to test). Nodes that are peers in the hierarchy don't have a deterministic order, but you can force an order regardless of hierarchy with the renderingOrder property. That property defaults to zero, so setting it to -1 will render before everything. (Or for finer control, set the renderingOrder values for several nodes to a sequence of values.)
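A minimal sketch of that setup (the plane size and sceneView are placeholders here):
let occluder = SCNNode(geometry: SCNPlane(width: 2, height: 3)) // size/position it to match the real wall
occluder.geometry?.firstMaterial?.colorBufferWriteMask = []     // write no color...
occluder.geometry?.firstMaterial?.writesToDepthBuffer = true    // ...but still write depth
occluder.renderingOrder = -1                                    // render before nodes at the default order 0
sceneView.scene.rootNode.addChildNode(occluder)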
How to detect walls/etc so you know where to put occlusion geometry?
In iOS 11.3 and later (aka "ARKit 1.5"), you can turn on vertical plane detection. (Note that when you get vertical plane anchors back from that, they're automatically rotated. So if you attach models to the anchor, their local "up" direction is normal to the plane.) Also new in iOS 11.3, you can get a more detailed shape estimate for each detected plane (see ARSCNPlaneGeometry), regardless of its orientation.
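Enabling that detection is just a configuration flag; for example (assuming a standard ARSCNView session):
// iOS 11.3+: ask ARKit to detect both horizontal and vertical planes.
let configuration = ARWorldTrackingConfiguration()
configuration.planeDetection = [.horizontal, .vertical]
sceneView.session.run(configuration)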
However, even if you have the horizontal and the vertical, the outer limits of a plane are just estimates that change over time. That is, ARKit can quickly detect where part of a wall is, but it doesn't know where the edges of the wall are without the user spending some time waving the device around to map out the space. And even then, the mapped edges might not line up precisely with those of the real wall.
So... if you use detected vertical planes to occlude virtual geometry, you might find places where virtual objects that are supposed to be hidden show through, either by not quite being hidden right at the edge of the wall, or by being visible through places where ARKit hasn't mapped the entire real wall. (The latter issue you might be able to solve by assuming a larger extent than ARKit does.)

For creating an occlusion material (also known as a blackhole material or blocking material) you have to use the following instance properties: colorBufferWriteMask, readsFromDepthBuffer and writesToDepthBuffer on the material, plus renderingOrder on the node.
You can use them this way:
plane.geometry?.firstMaterial?.isDoubleSided = true
plane.geometry?.firstMaterial?.colorBufferWriteMask = .alpha
plane.geometry?.firstMaterial?.writesToDepthBuffer = true
plane.geometry?.firstMaterial?.readsFromDepthBuffer = true
plane.renderingOrder = -100
...or this way:
func occlusion() -> SCNMaterial {
    let occlusionMaterial = SCNMaterial()
    occlusionMaterial.isDoubleSided = true
    occlusionMaterial.colorBufferWriteMask = []
    occlusionMaterial.readsFromDepthBuffer = true
    occlusionMaterial.writesToDepthBuffer = true
    return occlusionMaterial
}

plane.geometry?.firstMaterial = occlusion()
plane.renderingOrder = -100

Creating an occlusion material is really simple:
let boxGeometry = SCNBox(width: 0.1, height: 0.1, length: 0.1, chamferRadius: 0)
// Define an occlusion material
let occlusionMaterial = SCNMaterial()
occlusionMaterial.colorBufferWriteMask = []
boxGeometry.materials = [occlusionMaterial]
self.box = SCNNode(geometry: boxGeometry)
// Set rendering order to present this box in front of the other models
self.box.renderingOrder = -1

Great solution:
GitHub: arkit-occlusion
Worked for me.
But in my case I wanted to set up the walls in code. So if you don't want the user to set the walls, use plane detection to detect walls and set them up in code, as sketched below. Alternatively, within a range of about 4 meters the iPhone's depth sensing works and you can detect obstacles with an AR hit test.
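A rough sketch of that idea, reusing the occlusion() material shown earlier (names and sizes are placeholders, and planeDetection must include .vertical):
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let planeAnchor = anchor as? ARPlaneAnchor, planeAnchor.alignment == .vertical else { return }
    // Build an invisible occluder the size of the detected wall extent.
    let wall = SCNPlane(width: CGFloat(planeAnchor.extent.x), height: CGFloat(planeAnchor.extent.z))
    wall.firstMaterial = occlusion()
    let wallNode = SCNNode(geometry: wall)
    wallNode.simdPosition = planeAnchor.center
    wallNode.eulerAngles.x = -.pi / 2      // ARPlaneAnchor's plane lies in X/Z, SCNPlane in X/Y
    wallNode.renderingOrder = -100
    node.addChildNode(wallNode)
}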

ARKit 6.0 and LiDAR scanner
You can hide any object behind a virtual invisible wall that replicates the real wall's geometry. iPhones and iPad Pros equipped with a LiDAR scanner help us reconstruct a 3D topological map of the surrounding environment. The LiDAR scanner greatly improves the quality of the Z channel, which allows you to occlude or remove humans from the AR scene.
LiDAR also improves features such as Object Occlusion, Motion Tracking and Raycasting. With the LiDAR scanner you can reconstruct a scene even in an unlit environment or in a room with white, featureless walls. 3D reconstruction of the surrounding environment became possible in ARKit 3.5 thanks to the sceneReconstruction instance property. Having a reconstructed mesh of your walls, it's now very easy to hide any object behind real walls.
To activate the sceneReconstruction option, use the following code:
@IBOutlet var arView: ARView!
arView.automaticallyConfigureSession = false
guard ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh)
else { return }
let config = ARWorldTrackingConfiguration()
config.sceneReconstruction = .mesh
arView.debugOptions.insert([.showSceneUnderstanding])
arView.environment.sceneUnderstanding.options.insert([.occlusion])
arView.session.run(config)
Also, if you're using SceneKit, try the following approach:
@IBOutlet var sceneView: ARSCNView!

// `colorizer` is a helper (from the original answer) that maps each anchor identifier to a color.
func renderer(_ renderer: SCNSceneRenderer,
              nodeFor anchor: ARAnchor) -> SCNNode? {
    guard let meshAnchor = anchor as? ARMeshAnchor
    else { return nil }

    let geometry = SCNGeometry(arGeometry: meshAnchor.geometry)
    geometry.firstMaterial?.diffuse.contents =
        colorizer.assignColor(to: meshAnchor.identifier)

    let node = SCNNode()
    node.name = "Node_\(meshAnchor.identifier)"
    node.geometry = geometry
    return node
}

func renderer(_ renderer: SCNSceneRenderer,
              didUpdate node: SCNNode,
              for anchor: ARAnchor) {
    guard let meshAnchor = anchor as? ARMeshAnchor
    else { return }

    let newGeometry = SCNGeometry(arGeometry: meshAnchor.geometry)
    newGeometry.firstMaterial?.diffuse.contents =
        colorizer.assignColor(to: meshAnchor.identifier)
    node.geometry = newGeometry
}
And here are SCNGeometry and SCNGeometrySource extensions:
extension SCNGeometry {
    convenience init(arGeometry: ARMeshGeometry) {
        let verticesSource = SCNGeometrySource(arGeometry.vertices, semantic: .vertex)
        let normalsSource = SCNGeometrySource(arGeometry.normals, semantic: .normal)
        let faces = SCNGeometryElement(arGeometry.faces)
        self.init(sources: [verticesSource, normalsSource], elements: [faces])
    }
}

extension SCNGeometrySource {
    convenience init(_ source: ARGeometrySource, semantic: Semantic) {
        self.init(buffer: source.buffer,
                  vertexFormat: source.format,
                  semantic: semantic,
                  vertexCount: source.count,
                  dataOffset: source.offset,
                  dataStride: source.stride)
    }
}
...and SCNGeometryElement and SCNGeometryPrimitiveType extensions:
extension SCNGeometryElement {
    convenience init(_ source: ARGeometryElement) {
        let pointer = source.buffer.contents()
        let byteCount = source.count *
                        source.indexCountPerPrimitive *
                        source.bytesPerIndex
        let data = Data(bytesNoCopy: pointer,
                        count: byteCount,
                        deallocator: .none)
        self.init(data: data,
                  primitiveType: .of(source.primitiveType),
                  primitiveCount: source.count,
                  bytesPerIndex: source.bytesPerIndex)
    }
}

extension SCNGeometryPrimitiveType {
    static func of(_ type: ARGeometryPrimitiveType) -> SCNGeometryPrimitiveType {
        switch type {
        case .line: return .line
        case .triangle: return .triangles
        @unknown default: return .triangles   // any future primitive types fall back to triangles
        }
    }
}

Related

Can't detect collision between rootNode and pointOfView child nodes in SceneKit / ARKit

In an AR app, I want to detect collisions between the user walking around and the walls of an AR node that I construct. In order to do that, I create an invisible cylinder right in front of the user and set it all up to detect collisions.
The walls are all part of a node which is a child of sceneView.scene.rootNode.
The cylinder, I want it to be a child of sceneView.pointOfView so that it would always follow the camera.
However, when I do so, no collisions are detected.
I know that I set it all up correctly, because if instead I set the cylinder node as a child of sceneView.scene.rootNode as well, I do get collisions correctly. In that case, I continuously move that cylinder node to always be in front of the camera in a renderer(updateAtTime ...) function. So I do have a workaround, but I'd prefer it to be a child of pointOfView.
Is it impossible to detect collisions if nodes are children of different root nodes?
Or maybe I'm missing something in my code?
The contactDelegate is set like this:
sceneView.scene.physicsWorld.contactDelegate = self
so maybe this only includes sceneView.scene but excludes sceneView.pointOfView? Is that the issue?
Here's what I do:
I have a separate file to create and configure my cylinder node which I call pov:
import Foundation
import SceneKit
func createPOV() -> SCNNode {
    let pov = SCNNode()
    pov.geometry = SCNCylinder(radius: 0.1, height: 4)
    pov.geometry?.firstMaterial?.diffuse.contents = UIColor.blue
    pov.opacity = 0.3 // will be set to 0 when it'll work correctly
    pov.physicsBody = SCNPhysicsBody(type: .kinematic, shape: nil)
    pov.physicsBody?.isAffectedByGravity = false
    pov.physicsBody?.mass = 1
    pov.physicsBody?.categoryBitMask = BodyType.cameraCategory.rawValue
    pov.physicsBody?.collisionBitMask = BodyType.wallsCategory.rawValue
    pov.physicsBody?.contactTestBitMask = BodyType.wallsCategory.rawValue
    pov.simdPosition = simd_float3(0, -1.5, -0.3) // this position only makes sense when set as a child of pointOfView; otherwise the position is constantly changed by the renderer
    return pov
}
Now in my ViewController.swift file I call this function and set the result as a child of one of the two parent nodes:
pov = createPOV()
sceneView.pointOfView?.addChildNode(pov!)
(Don't worry right now about not checking and unwrapping).
The above does not detect collisions.
But if instead I add it like so:
sceneView.scene.rootNode.addChildNode(pov!)
then collisions are detected just fine.
But then I need to always move this cylinder to be in front of the camera and I do it like that:
func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
    guard let pointOfView = sceneView.pointOfView else { return }
    let currentPosition = pointOfView.simdPosition
    let currentTransform = pointOfView.simdTransform
    let orientation = SCNVector3(-currentTransform.columns.2.x, -currentTransform.columns.2.y, -currentTransform.columns.2.z)
    // The + below relies on a custom SCNVector3 operator defined elsewhere in the project.
    let currentPositionOfCamera = orientation + SCNVector3(currentPosition)
    DispatchQueue.main.async {
        self.pov?.position = currentPositionOfCamera
    }
}
For completeness, here's the code I use to configure the node of walls in ViewController (they're built elsewhere in another function):
node?.physicsBody = SCNPhysicsBody(type: .dynamic, shape: SCNPhysicsShape(node: node!, options: nil))
node?.physicsBody?.isAffectedByGravity = false
node?.physicsBody?.mass = 1
node?.physicsBody?.damping = 1.0 // remove linear velocity, needed to stop moving after collision
node?.physicsBody?.angularDamping = 1.0 // remove angular velocity, needed to stop rotating after collision
node?.physicsBody?.velocityFactor = SCNVector3(1.0, 0.0, 1.0) // will allow movement only in X and Z coordinates
node?.physicsBody?.angularVelocityFactor = SCNVector3(0.0, 1.0, 0.0) // will allow rotation only around Y axis
node?.physicsBody?.categoryBitMask = BodyType.wallsCategory.rawValue
node?.physicsBody?.collisionBitMask = BodyType.cameraCategory.rawValue
node?.physicsBody?.contactTestBitMask = BodyType.cameraCategory.rawValue
And here's my physicsWorld(_:didBegin:) code:
func physicsWorld(_ world: SCNPhysicsWorld, didBegin contact: SCNPhysicsContact) {
    if contact.nodeA.physicsBody?.categoryBitMask == BodyType.wallsCategory.rawValue ||
       contact.nodeB.physicsBody?.categoryBitMask == BodyType.wallsCategory.rawValue {
        print("Begin COLLISION")
        contactBeginLabel.isHidden = false
    }
}
So I print something to the console and I also turn on a label on the view so I'll see that collision was detected (and the walls indeed move as a whole when it works).
So again, it all works fine when the pov node is a child of sceneView.scene.rootNode, but not if it's a child of sceneView.pointOfView.
Am I doing something wrong or is this a limitation of collision detection?
Is there something else I can do to make this work, besides the workaround I already implemented?
Thanks!
Regarding the positioning of your cylinder:
Instead of using the renderer update-at-time callback, you can use a position constraint so that your cylinder node moves with the point of view. The result will be the same as if it were a child of the point of view, but collisions will be detected, because you add it to the main root-node scene graph.
let constraint = SCNReplicatorConstraint(target: pointOfView) // must be a node
constraint.positionOffset = positionOffset // some SCNVector3
constraint.replicatesOrientation = false
constraint.replicatesScale = false
constraint.replicatesPosition = true
cylinder.constraints = [constraint]
There is also an influence factor you can configure. By default the influence is 100%, so the position follows immediately.
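For example:
constraint.influenceFactor = 1.0 // 0...1; 1 applies the constraint fully on every frame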

Detect a object using camera and position a 3D object using ARKit in iOS

What am I looking for?
A simple explanation of my requirement is this
Using ARKit, detect an object using iPhone camera
Find the position of this object on this virtual space
Place a 3D object on this virtual space using SceneKit. The 3D object should be behind the marker.
An example would be to detect a small image/marker position in a 3D space using camera, place another 3D ball model behind this marker in virtual space (so the ball will be hidden from the user because the marker/image is in front)
What I am able to do so far?
I am able to detect a marker/image using ARKit
I am able to position a ball 3D model on the screen.
What is my problem?
I am unable to position the ball in such a way that the ball is behind the detected marker.
When the ball is in front of the marker, the ball correctly hides the marker. You can see in the side view that the ball is in front of the marker. See below.
But when the ball is behind the marker, the opposite doesn't happen. The ball is always visible in front, blocking the marker. I expected the marker to hide the ball. So the scene is not respecting the z depth of the ball's position. See below.
Code
Please look into the comments as well
override func viewDidLoad() {
    super.viewDidLoad()
    sceneView.delegate = self
    sceneView.autoenablesDefaultLighting = true
    // This loads my 3D model.
    let ballScene = SCNScene(named: "art.scnassets/ball.scn")
    ballNode = ballScene?.rootNode
    // The model I have is too big. Scaling it here.
    ballNode?.scale = SCNVector3Make(0.1, 0.1, 0.1)
}

override func viewWillAppear(_ animated: Bool) {
    super.viewWillAppear(animated)
    // I am trying to detect a marker/image, so an image tracking configuration is enough.
    let configuration = ARImageTrackingConfiguration()
    // Load the image/marker and set it as the tracking image.
    // There is only one image in this set.
    if let trackingImages = ARReferenceImage.referenceImages(inGroupNamed: "Markers",
                                                             bundle: Bundle.main) {
        configuration.trackingImages = trackingImages
        configuration.maximumNumberOfTrackedImages = 1
    }
    sceneView.session.run(configuration)
}

override func viewWillDisappear(_ animated: Bool) {
    super.viewWillDisappear(animated)
    sceneView.session.pause()
}

func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
    let node = SCNNode()
    if anchor is ARImageAnchor {
        // My image is detected.
        if let ballNode = self.ballNode {
            // For some reason changing the y position translates the ball in the z direction:
            // a positive y value moves it towards the screen (in front of the marker),
            ballNode.position = SCNVector3(0.0, 0.02, 0.0)
            // a negative y value moves it away from the screen (behind the marker).
            ballNode.position = SCNVector3(0.0, -0.02, 0.0)
            node.addChildNode(ballNode)
        }
    }
    return node
}
How do I make the scene respect the z position? Or in other words, how do I show a 3D model behind an image/marker that has been detected using the ARKit framework?
I am running against iOS 12, using Xcode 10.3. Let me know if any other information is needed.
To achieve that you need to create an occluder in the 3D scene. Since an ARReferenceImage has a physicalSize it should be straightforward to add a geometry in the scene when the ARImageAnchor is created.
The geometry would be a SCNPlane with a SCNMaterial appropriate for an occluder. I would opt for a SCNLightingModelConstant lighting model (it's the cheapest and we won't actually draw the plane) with a colorBufferWriteMask equal to SCNColorMaskNone. The object should be transparent but still write in the depth buffer (that's how it will act as an occluder).
Finally, make sure that the occluder is rendered before any augmented object by setting its renderingOrder to -1 (or an even lower value if the app already uses rendering orders).
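A minimal sketch of such an occluder, built when the image anchor is vended (the function name is just for illustration):
// Call this from renderer(_:nodeFor:) or renderer(_:didAdd:for:) when an ARImageAnchor appears.
func occluderNode(for imageAnchor: ARImageAnchor) -> SCNNode {
    let size = imageAnchor.referenceImage.physicalSize
    let plane = SCNPlane(width: size.width, height: size.height)
    let material = SCNMaterial()
    material.lightingModel = .constant      // cheapest shading; the plane is never visibly drawn
    material.colorBufferWriteMask = []      // write no color...
    material.writesToDepthBuffer = true     // ...but still write depth, so it occludes
    plane.materials = [material]
    let node = SCNNode(geometry: plane)
    node.eulerAngles.x = -.pi / 2           // the image anchor's plane lies in X/Z, SCNPlane in X/Y
    node.renderingOrder = -1                // render before the augmented content
    return node
}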
In ARKit 3.0 Apple engineers implemented a ZDepth compositing technique called People Occlusion. This feature is available only on devices with an A12 chip or newer because it's highly processor-intensive. At the moment ARKit's ZDepth compositing feature is in its infancy: it only lets you composite people (or people-like objects) over and under the background seen through the rear camera, not any other object. And, I think, you know about the front TrueDepth camera: it's for face tracking and it has an additional IR sensor for that task.
To turn ZDepth compositing feature on, use these instance properties in ARKit 3.0:
var frameSemantics: ARConfiguration.FrameSemantics { get set }
static var personSegmentationWithDepth: ARConfiguration.FrameSemantics { get }
Real code should look like this:
if let config = mySession.configuration as? ARWorldTrackingConfiguration {
    config.frameSemantics.insert(.personSegmentationWithDepth)
    mySession.run(config)
}
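It's also worth checking device support before enabling it (People Occlusion needs an A12 chip or newer); a minimal check might be:
// true on devices that can do people occlusion with depth
let supportsPeopleOcclusion =
    ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth)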
After the alpha channel's segmentation, the formula for computing every channel looks like this:
r = Az > Bz ? Ar : Br
g = Az > Bz ? Ag : Bg
b = Az > Bz ? Ab : Bb
a = Az > Bz ? Aa : Ba
where Az is the ZDepth channel of the foreground image (the 3D model),
Bz is the ZDepth channel of the background image (the 2D video),
Ar, Ag, Ab, Aa are the red, green, blue and alpha channels of the 3D model,
and Br, Bg, Bb, Ba are the red, green, blue and alpha channels of the 2D video.
But in earlier versions of ARKit there's no ZDepth compositing feature, so you can only composite a 3D model over the 2D background video using the standard 4-channel OVER compositing operation:
(Argb * Aa) + (Brgb * (1 - Aa))
where Argb is the RGB channels of the foreground image A (the 3D model),
Aa is the alpha channel of the foreground image A,
Brgb is the RGB channels of the background image B (the 2D video),
and (1 - Aa) is the inverse of the foreground alpha channel.
As a result, without the personSegmentationWithDepth property your 3D model will always appear OVER the 2D video.
Thus, if an object in the video doesn't look like a human hand or a human body, you can't place that object from the 2D video over the 3D model using the regular ARKit tools.
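To make the two formulas above concrete, here is a minimal per-pixel sketch in plain Swift (purely illustrative; ARKit does this on the GPU):
struct RGBA { var r, g, b, a: Float }

// ZDepth compositing: per pixel, the sample that wins the depth test is kept.
func zComposite(_ A: RGBA, Az: Float, _ B: RGBA, Bz: Float) -> RGBA {
    return Az > Bz ? A : B          // same convention as the formula above
}

// Standard OVER compositing: the foreground A always covers the background B.
func overComposite(_ A: RGBA, _ B: RGBA) -> RGBA {
    return RGBA(r: A.r * A.a + B.r * (1 - A.a),
                g: A.g * A.a + B.g * (1 - A.a),
                b: A.b * A.a + B.b * (1 - A.a),
                a: A.a + B.a * (1 - A.a))
}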
Nonetheless, you can do it using the Metal and AVFoundation frameworks. Bear in mind that it's not easy.
To extract ZDepth data from video stream you need the following instance property:
// Works from iOS 11
var capturedDepthData: AVDepthData? { get }
Or you may use these two instance methods (remember ZDepth channel must be 32-bit):
// Works from iOS 13
func generateDilatedDepth(from frame: ARFrame,
                          commandBuffer: MTLCommandBuffer) -> MTLTexture

func generateMatte(from frame: ARFrame,
                   commandBuffer: MTLCommandBuffer) -> MTLTexture
Please read this SO post if you want to know how to do it using Metal.
For additional information, please read this SO post.

Align 3D object parallel to vertical plane detected by estimatedVerticalPlane

I have this book, but I'm currently remixing the furniture app from the video tutorial that was free on AR/VR week.
I would like to have a 3D wall canvas aligned with the wall/vertical plane detected.
This is proving to be harder than I thought. Positioning isn't an issue. Much like the furniture placement app, you can just take column 3 of the hitTest.worldTransform and use that vector3 as the position for the new geometry.
But I do not know what I have to do to get my 3D object rotated to face forward on the detected, aligned plane. As I have a canvas object, the photo is on one side of the canvas. On placement, the photo is ALWAYS facing away.
I thought about applying an arbitrary rotation to the canvas to face forward, but that was only correct if I was looking north and placed a canvas on a wall to my right.
I've tried quite a few solutions online; all but one use .existingPlaneUsingExtent for vertical plane detection. This allows you to get the ARPlaneAnchor from hitTestResult.anchor as? ARPlaneAnchor. If you try this when using .estimatedVerticalPlane, the anchor is nil.
I also didn't continue down this route as my horizontal 3D objects started getting placed in the air. This may be down to control-flow logic, but I am ignoring it until the vertical canvas placement is working.
My current train of thought is to get the front vector of the canvas and rotate it towards the front-facing vector of the detected vertical plane's grid UIImage, or of the hit-test point.
How would I get a forward vector from a 3D point? Or how would I get the front vector from the grid image, i.e. the UIImage that is placed as an overlay when ARKit detects a vertical wall?
Here is an example. The canvas is showing its back and is not parallel with the detected vertical plane (the column). But there is a "Place Poster Here" grid, which is what I want the canvas to align with so that the photo is visible.
Things I have tried:
using .estimatedVerticalPlane
ARKit estimatedVerticalPlane hit test get plane rotation
I don't know how to correctly apply the matrix and Euler angle results from that SO answer.
My addPicture function:
func addPicture(hitTestResult: ARHitTestResult) {
    // I would like to convert the estimated hitTest to an anchor point;
    // it is easier to rotate a node to an anchor point than to calculate eulerAngles.
    // We have all detected anchors in the _Renderer SCNNode. however there are
    // Get the current furniture item, correct its position if necessary,
    // and add it to the scene.
    let picture = pictureSettings.currentPicturePiece()
    // Look for the vertical node geometry in verticalAnchors.
    if let hitPlaneAnchor = hitTestResult.anchor as? ARPlaneAnchor {
        if let anchoredNode = verticalAnchors[hitPlaneAnchor] {
            // code removed, as an .estimatedVerticalPlane hitTestResult doesn't get here
        }
    } else {
        // Transform the hit result to world coordinates.
        let worldTransform = hitTestResult.worldTransform
        let anchoredNodeOrientation = worldTransform.eulerAngles
        picture.rotation.y = -.pi * anchoredNodeOrientation.y
        // Set the transform matrix.
        let positionMatrix = worldTransform.columns.3
        let position = SCNVector3(positionMatrix.x,
                                  positionMatrix.y,
                                  positionMatrix.z)
        picture.position = position + pictureSettings.currentPictureOffset()
    }
    // Parented to the rootNode of the scene.
    sceneView.scene.rootNode.addChildNode(picture)
}
Thanks for any help available.
Edited:
I have noticed the 'handedness' of the 3D model isn't correct / is opposite?
Positive Z is pointing to the left and positive X is facing the camera for what I would expect to be the front of the model. Is this an issue?
You should try to avoid adding nodes directly into the scene using world coordinates. Rather, you should notify the ARSession of an area of interest by adding an ARAnchor, then use the session callback to vend an SCNNode for the added anchor.
For example, your hit test might look something like:
@objc func tapped(_ sender: UITapGestureRecognizer) {
    let location = sender.location(in: sender.view)
    guard let hitTestResult = sceneView.hitTest(location, types: [.existingPlaneUsingGeometry, .estimatedVerticalPlane]).first,
          let planeAnchor = hitTestResult.anchor as? ARPlaneAnchor,
          planeAnchor.alignment == .vertical else { return }
    let anchor = ARAnchor(transform: hitTestResult.worldTransform)
    sceneView.session.add(anchor: anchor)
}
Here a tap gesture recognizer is used to detect taps within an ARSCNView. When a tap is detected, a hit test is performed looking for existing and estimated planes. If the plane is vertical, an ARAnchor is created with the worldTransform of the hit test result and added to the ARSession. This registers that point as an area of interest for the ARSession, so we'll receive better tracking and less drift after our content is added there.
Next, we need to vend our SCNNode for the newly added ARAnchor. For example:
func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
    if anchor is ARPlaneAnchor {
        let anchorNode = SCNNode()
        anchorNode.name = "anchor"
        return anchorNode
    } else {
        let plane = SCNPlane(width: 0.67, height: 1.0)
        plane.firstMaterial?.diffuse.contents = UIImage(named: "monaLisa")
        let planeNode = SCNNode(geometry: plane)
        planeNode.eulerAngles = SCNVector3(CGFloat.pi * -0.5, 0.0, 0.0)
        let node = SCNNode()
        node.addChildNode(planeNode)
        return node
    }
}
Here we're first checking whether the anchor is an ARPlaneAnchor. If it is, we vend an empty node for debugging purposes. If it is not, then it is an anchor that was added as the result of a hit test, so we create a geometry and node for the most recent tap. Because it is on a vertical plane and our content is lying flat, we need to rotate it about the x axis, so we adjust its eulerAngles to make it upright. If we were to return planeNode directly, the adjustment to its eulerAngles would be overridden by the anchor's transform, so we add it as a child node of an empty node and return that instead.

ARKit: Are renderer(didAdd: ) and renderer(nodeFor: ) exclusive

Relying completely on ARKit's automatic plane detection is something I don't want to do, since it takes time to detect surfaces and real-life surfaces need to be textured enough. Hence I need an option where, if I want, I can add anchors at will with the tap of a button.
Here is where renderer(nodeFor: ) comes in handy. Just add an anchor at the tap of a button, using a hit test to ascertain the position of the anchor, and then add nodes using the nodeFor: method.
However, in other cases when I don't want to manually tap buttons, renderer(didAdd: ) should work. I have made a shared object through which I can determine whether plane detection needs to be "automated" or "manual". In the automated case planeDetection is set to .horizontal, whereas in the manual case planeDetection is set to [].
The issue is that, in testing, it appears that only one of the two delegate methods works. Is there a way to achieve what I want: a switch with which I can toggle between automatic plane detection and adding anchors (and then planes) myself? I would love to have both options.
Is it possible to use two different delegates to achieve this? Just a thought... in that case, how would it work? Pointers would be very much appreciated.
Yes, renderer(didAdd: ) and renderer(nodeFor: ) are exclusive. As per the docs, if we want to implement our own method for adding nodes to the scene, we can use renderer(nodeFor: ); or we can instead let ARKit do it for us, using renderer(didAdd: ).
Both cases (adding nodes manually while planeDetection = [], and adding nodes automatically when planeDetection = .horizontal) can be handled with the renderer(nodeFor: ) method itself. There is no need for renderer(didAdd: ).
Within renderer(nodeFor: ), in the planeDetection = .horizontal case, the anchor can be cast as an ARPlaneAnchor, whose center and extent can be used to update the added node.
Such as:
func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
    if let planeAnchor = anchor as? ARPlaneAnchor {
        let node = SCNNode()
        let plane = SCNPlane(width: CGFloat(planeAnchor.extent.x),
                             height: CGFloat(planeAnchor.extent.z))
        let planeNode = SCNNode(geometry: plane)
        planeNode.name = "anchorPlane"
        planeNode.simdPosition = float3(planeAnchor.center.x, 0, planeAnchor.center.z)
        node.addChildNode(planeNode)
        return node
At the same time, another condition can be imposed for planeDetection = [], when the anchor can't be cast as an ARPlaneAnchor, and the geometry underlying the node can be given whatever size is desired.
    } else {
        let node = SCNNode()
        let plane = SCNPlane(width: 0.5, height: 0.5)
        plane.firstMaterial?.diffuse.contents = UIColor.white
        let planeNode = SCNNode(geometry: plane)
        node.addChildNode(planeNode)
        return node
    }
}
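For the manual case (planeDetection = []), the anchor itself can be added from a tap; a rough sketch (the handleTap name and the feature-point hit test are assumptions of mine, not from the answer):
@objc func handleTap(_ gesture: UITapGestureRecognizer) {
    let location = gesture.location(in: sceneView)
    // With plane detection off, fall back to a feature-point hit test for a position.
    guard let result = sceneView.hitTest(location, types: .featurePoint).first else { return }
    // A plain ARAnchor is not an ARPlaneAnchor, so renderer(nodeFor:) takes the else branch above.
    sceneView.session.add(anchor: ARAnchor(transform: result.worldTransform))
}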

Add shape to sphere surface in SceneKit

I'd like to be able to add shapes to the surface of a sphere using SceneKit. I started with a simple example where I'm just trying to color a portion of the sphere's surface another color. I'd like this to be an object that can be tapped, selected, etc... so my thought was to add shapes as SCNNodes using custom SCNShape objects for the geometry.
What I have now is a blue square that I'm drawing from a series of points and adding to the scene containing a red sphere. It basically ends up tangent to a point on the sphere, but the real goal is to draw it on the surface. Is there anything in SceneKit that will allow me to do this? Do I need to do some math/geometry to make it the same shape as the sphere or map to a sphere's coordinates? Is what I'm trying to do outside the scope of SceneKit?
If this question is way too broad I'd be glad if anyone could point me towards books or resources to learn what I'm missing. I'm totally new to SceneKit and 3D in general, just having fun playing around with some ideas.
Here's some playground code for what I have now:
import UIKit
import SceneKit
import XCPlayground
class SceneViewController: UIViewController {
    let sceneView = SCNView()

    private lazy var sphere: SCNSphere = {
        let sphere = SCNSphere(radius: 100.0)
        sphere.materials = [self.surfaceMaterial]
        return sphere
    }()

    private lazy var testScene: SCNScene = {
        let scene = SCNScene()
        let sphereNode: SCNNode = SCNNode(geometry: self.sphere)
        sphereNode.addChildNode(self.blueChildNode)
        scene.rootNode.addChildNode(sphereNode)
        //scene.rootNode.addChildNode(self.blueChildNode)
        return scene
    }()

    private lazy var surfaceMaterial: SCNMaterial = {
        let material = SCNMaterial()
        material.diffuse.contents = UIColor.redColor()
        material.specular.contents = UIColor(white: 0.6, alpha: 1.0)
        material.shininess = 0.3
        return material
    }()

    private lazy var blueChildNode: SCNNode = {
        let node: SCNNode = SCNNode(geometry: self.blueGeometry)
        node.position = SCNVector3(0, 0, 100)
        return node
    }()

    private lazy var blueGeometry: SCNShape = {
        let points: [CGPoint] = [
            CGPointMake(0, 0),
            CGPointMake(50, 0),
            CGPointMake(50, 50),
            CGPointMake(0, 50),
            CGPointMake(0, 0)]
        var pathRef: CGMutablePathRef = CGPathCreateMutable()
        CGPathAddLines(pathRef, nil, points, points.count)
        let bezierPath: UIBezierPath = UIBezierPath(CGPath: pathRef)
        let shape = SCNShape(path: bezierPath, extrusionDepth: 1)
        shape.materials = [self.blueNodeMaterial]
        return shape
    }()

    private lazy var blueNodeMaterial: SCNMaterial = {
        let material = SCNMaterial()
        material.diffuse.contents = UIColor.blueColor()
        return material
    }()

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = self.view.bounds
        sceneView.backgroundColor = UIColor.blackColor()
        self.view.addSubview(sceneView)
        sceneView.autoenablesDefaultLighting = true
        sceneView.allowsCameraControl = true
        sceneView.scene = testScene
    }
}
XCPShowView("SceneKit", view: SceneViewController().view)
If you want to map 2D content into the surface of a 3D SceneKit object, and have the 2D content be dynamic/interactive, one of the easiest solutions is to use SpriteKit for the 2D content. You can set your sphere's diffuse contents to an SKScene, and create/position/decorate SpriteKit nodes in that scene to arrange them on the face of the sphere.
If you want to have this content respond to tap events... Using hitTest in your SceneKit view gets you a SCNHitTestResult, and from that you can get texture coordinates for the hit point on the sphere. From texture coordinates you can convert to SKScene coordinates and spawn nodes, run actions, or whatever.
For further details, your best bet is probably Apple's SceneKitReel sample code project. This is the demo that introduced SceneKit for iOS at WWDC14. There's a "slide" in that demo where paint globs fly from the camera at a spinning torus and leave paint splashes where they hit it — the torus has a SpriteKit scene as its material, and the trick for leaving splashes on collisions is basically the same hit test -> texture coordinate -> SpriteKit coordinate approach outlined above.
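A minimal sketch of that approach (sizes, names and the texture-coordinate mapping are assumptions on my part):
import UIKit
import SceneKit
import SpriteKit

let skScene = SKScene(size: CGSize(width: 1024, height: 512))   // the sphere's "surface canvas"
skScene.backgroundColor = .red

let marker = SKSpriteNode(color: .blue, size: CGSize(width: 100, height: 100))
marker.position = CGPoint(x: 512, y: 256)                       // center of the texture
skScene.addChild(marker)

let sphere = SCNSphere(radius: 100)
sphere.firstMaterial?.diffuse.contents = skScene                // SpriteKit scene as the material

// On tap: hit-test the SCNView, then map the texture coordinates into the SKScene.
// let result: SCNHitTestResult = ...
// let uv = result.textureCoordinates(withMappingChannel: 0)    // each component in 0...1
// let point = CGPoint(x: uv.x * skScene.size.width,
//                     y: (1 - uv.y) * skScene.size.height)     // flip y if your content appears inverted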
David Rönnqvist's SceneKit book (available as an iBook) has an example (the EarthView example, a talking globe, chapter 5) that is worth looking at. That example constructs a 3D pushpin, which is then attached to the surface of a globe at the location of a tap.
Your problem is more complicated because you're constructing a shape that covers a segment of the sphere. Your "square" is really a spherical trapezium, a segment of the sphere bounded by four great circle arcs. I can see three possible approaches, depending on what you're ultimately looking for.
The simplest way to do it is to use an image as the material for the sphere's surface. That approach is well illustrated in the Ronnqvist EarthView example, which uses several images to show the earth's surface. Instead of drawing continents, you'd draw your square. This approach isn't suitable for interactivity, though. Look at SCNMaterial.
Another approach would be to use hit-test results. That's documented on SCNSceneRenderer (which SCNView conforms to) and SCNHitTest. Using the hit test results, you could pull out the face that was tapped, and then its geometry elements. This won't get you all the way home, though, because SceneKit uses triangles for SCNSphere, and you're looking for quads. You will also be limited to squares that line up with SceneKit's underlying wireframe representation.
If you want full control of where the "square" is drawn, including varying its angle relative to the equator, I think you'll have to build your own geometry from scratch. That means calculating the latitude/longitude of each corner point, then generating arcs between those points, then calculating a bunch of intermediate points along the arcs. You'll have to add a fudge factor, to raise the intermediate points slightly above the sphere's surface, and build up your own quads or triangle strips. Classes here are SCNGeometry, SCNGeometryElement, and SCNGeometrySource.
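To give a flavour of the third approach, here is a bare-bones custom geometry built from explicit vertices (a single triangle placed just above a unit sphere; the real spherical trapezium would need the arc subdivision described above):
import UIKit
import SceneKit

// A point on (or slightly above) a sphere from latitude/longitude, in radians.
func pointOnSphere(lat: Float, lon: Float, radius: Float) -> SCNVector3 {
    return SCNVector3(radius * cos(lat) * sin(lon),
                      radius * sin(lat),
                      radius * cos(lat) * cos(lon))
}

let radius: Float = 1.002                      // small fudge factor above a unit sphere
let vertices = [
    pointOnSphere(lat: 0.00, lon: 0.00, radius: radius),
    pointOnSphere(lat: 0.00, lon: 0.10, radius: radius),
    pointOnSphere(lat: 0.10, lon: 0.05, radius: radius)
]
let indices: [Int32] = [0, 1, 2]

let source = SCNGeometrySource(vertices: vertices)
let element = SCNGeometryElement(indices: indices, primitiveType: .triangles)
let patch = SCNGeometry(sources: [source], elements: [element])

let material = SCNMaterial()
material.diffuse.contents = UIColor.blue
material.isDoubleSided = true                  // visible from both sides of the thin patch
patch.materials = [material]
// Add SCNNode(geometry: patch) as a child of the sphere node.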
