ARKit Occlusion Invisible Plane Reference Image - ios

I'm trying to hide SCNPlanes behind an invisible SCNPlane that is the same size as the ARReferenceImage. Using an SCNAction, I then want to reveal those planes next to the ARReferenceImage.
Problem
The SCNPlanes are still visible and not hidden.
This is the code I use for occlusion:
let plane = SCNPlane(width: referenceImage.physicalSize.width,
                     height: referenceImage.physicalSize.height)
let planeNode = SCNNode(geometry: plane)
planeNode.geometry?.firstMaterial?.writesToDepthBuffer = true
planeNode.geometry?.firstMaterial?.colorBufferWriteMask = .alpha
planeNode.renderingOrder = -1
This is my code:
https://gist.github.com/magicmikek/0444fbd5c146131ad08fbb19875fbc83

The invisible planeNode can't have the same Y value as the SCNPlanes it is supposed to hide through occlusion.
Solution
nodeBehind.position.y = -0.005
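Putting it together, a minimal sketch of the occluder setup with the offset applied. nodeBehind stands for one of the planes to be hidden, and the eulerAngles line assumes the usual flat-on-the-image orientation; both details live in the gist, so treat them as assumptions:
// Invisible occluder, sized to the detected reference image.
let plane = SCNPlane(width: referenceImage.physicalSize.width,
                     height: referenceImage.physicalSize.height)
let planeNode = SCNNode(geometry: plane)
planeNode.geometry?.firstMaterial?.writesToDepthBuffer = true    // still writes depth...
planeNode.geometry?.firstMaterial?.colorBufferWriteMask = .alpha // ...but no visible color
planeNode.renderingOrder = -1       // drawn before the planes it should hide
planeNode.eulerAngles.x = -.pi / 2  // assumption: lay the plane flat on the image anchor

// The planes to hide must sit slightly below the occluder, not at the same Y:
nodeBehind.position.y = -0.005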

Related

Normal mapping in SceneKit

I am trying to add a normal map to a 3D model in Swift using SCNMaterial properties. The diffuse property works, but no other property, including the normal property, is visible on screen. When I debug and inspect the node's material, the normal property exists and contains the image that I added.
I have also checked the normal image in the SceneKit editor, where it works fine.
Here is the code that I am using:
let node = SCNNode()
node.geometry = SCNSphere(radius: 0.1)
node.geometry!.firstMaterial!.diffuse.contents = UIColor.lightGray
node.geometry!.firstMaterial!.normal.contents = UIImage(named: "normal")
node.position = SCNVector3(0,0,0)
sceneView.scene.rootNode.addChildNode(node)
This is the output I am getting, and I am expecting something like this.
I found the solution: since I had not enabled default lighting, there was no lighting in the scene. I added this to the code:
sceneView.autoenablesDefaultLighting = true
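For reference, a minimal sketch of the working setup, combining the question's snippet with the fix (assuming an image asset named "normal"):
sceneView.autoenablesDefaultLighting = true // without a light, a normal map has nothing to react to

let node = SCNNode()
node.geometry = SCNSphere(radius: 0.1)
node.geometry!.firstMaterial!.diffuse.contents = UIColor.lightGray
node.geometry!.firstMaterial!.normal.contents = UIImage(named: "normal")
node.position = SCNVector3(0, 0, 0)
sceneView.scene.rootNode.addChildNode(node)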
Given the screenshot, it seems there is no lighting in the scene, or the material does not respond to lighting, since the sphere is not shaded. For a normal map to work, lighting has to be taken into account, because the effect depends on the light direction. Have you tried creating an entirely new SCNMaterial and playing with its properties? (e.g. https://developer.apple.com/documentation/scenekit/scnmaterial/lightingmodel seems interesting)
I would try setting
node.geometry!.firstMaterial!.lightingModel = .physicallyBased
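Note that a .physicallyBased material is shaded by the scene's lighting environment, so on its own it can render flat or dark; a sketch of what that setup might look like (the "environment" image name is a hypothetical asset):
let material = node.geometry!.firstMaterial!
material.lightingModel = .physicallyBased
material.diffuse.contents = UIColor.lightGray
material.normal.contents = UIImage(named: "normal")
// PBR is lit by image-based lighting (or explicit lights in the scene).
sceneView.scene.lightingEnvironment.contents = UIImage(named: "environment") // hypothetical asset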
Try this (note that the normal map belongs on the material's normal property, not on diffuse):
let scene = SCNScene()
let sphere = SCNSphere(radius: 0.1)
let sphereMaterial = SCNMaterial()
sphereMaterial.diffuse.contents = UIColor.lightGray
sphereMaterial.normal.contents = UIImage(named: "normal.png")
let sphereNode = SCNNode()
sphereNode.geometry = sphere
sphereNode.geometry?.materials = [sphereMaterial]
sphereNode.position = SCNVector3(0.5, 0.1, -1)
scene.rootNode.addChildNode(sphereNode)
sceneView.scene = scene

Align 3D object parallel to vertical plane detected by estimatedVerticalPlane

I have this book, but I'm currently remixing the furniture app from the video tutorial that was free during AR/VR week.
I would like to have a 3D wall canvas aligned with the wall/vertical plane detected.
This is proving to be harder than I thought. Positioning isn't an issue: much like the furniture placement app, you can just take column 3 of the hit test's worldTransform and use that vector3 as the new geometry's position.
But I do not know what I have to do to get my 3D object rotated to face forward on the detected plane. My canvas object has the photo on one side, and on placement the photo is ALWAYS facing away.
I thought about applying an arbitrary rotation to the canvas to face forward, but that was only correct if I was looking north and placed a canvas on a wall to my right.
I've tried quite a few solutions online; all but one use .existingPlaneUsingExtent for vertical plane detection. That lets you get the ARPlaneAnchor from
hitTest.anchor as? ARPlaneAnchor.
If you try this when using .estimatedVerticalPlane, the anchor is nil.
I also didn't continue down this route, as my horizontal 3D objects started getting placed in the air. This may be down to control-flow logic, but I am ignoring it until the vertical canvas placement is working.
My current train of thought is to take the front vector of the canvas and rotate it towards the front-facing vector of the detected vertical plane, or of the hit test point.
How would I get a forward vector from a 3D point? Or get the front vector from the grid image, the UIImage that is placed as an overlay when ARKit detects a vertical wall?
Here is an example. The canvas shows its back and is not parallel with the detected vertical plane (the column). But there is a "Place Poster Here" grid, which is what I want the canvas to align with so that I can see the photo.
Things I have tried:
using .estimatedVerticalPlane
ARKit estimatedVerticalPlane hit test get plane rotation
I don't know how to correctly apply the matrix and Euler angle results from that SO answer.
My addPicture function:
func addPicture(hitTestResult: ARHitTestResult) {
    // I would like to convert the estimated hit test to an anchor point;
    // it is easier to rotate a node to an anchor point than to calculate Euler angles.
    // We have all detected anchors in the _Renderer SCNNode, however there are
    // Get the current furniture item, correct its position if necessary,
    // and add it to the scene.
    let picture = pictureSettings.currentPicturePiece()
    // Look for the vertical node geometry in verticalAnchors.
    if let hitPlaneAnchor = hitTestResult.anchor as? ARPlaneAnchor {
        if let anchoredNode = verticalAnchors[hitPlaneAnchor] {
            // Code removed, as an .estimatedVerticalPlane hitTestResult doesn't get here.
        }
    } else {
        // Transform the hit result to world coordinates.
        let worldTransform = hitTestResult.worldTransform
        let anchoredNodeOrientation = worldTransform.eulerAngles
        picture.rotation.y = -.pi * anchoredNodeOrientation.y
        // Set the transform matrix.
        let positionMatrix = worldTransform.columns.3
        let position = SCNVector3(positionMatrix.x,
                                  positionMatrix.y,
                                  positionMatrix.z)
        picture.position = position + pictureSettings.currentPictureOffset()
    }
    // Parented to the rootNode of the scene.
    sceneView.scene.rootNode.addChildNode(picture)
}
Thanks for any help available.
Edit:
I have noticed the handedness of the 3D model isn't correct / is opposite.
Positive Z points to the left and positive X faces the camera on what I would expect to be the front of the model. Is this an issue?
You should try to avoid adding nodes directly into the scene using world coordinates. Rather, you should notify the ARSession of an area of interest by adding an ARAnchor, then use the session callback to vend an SCNNode for the added anchor.
For example your hit test might look something like:
@objc func tapped(_ sender: UITapGestureRecognizer) {
    let location = sender.location(in: sender.view)
    guard let hitTestResult = sceneView.hitTest(location, types: [.existingPlaneUsingGeometry, .estimatedVerticalPlane]).first,
          let planeAnchor = hitTestResult.anchor as? ARPlaneAnchor,
          planeAnchor.alignment == .vertical else { return }
    let anchor = ARAnchor(transform: hitTestResult.worldTransform)
    sceneView.session.add(anchor: anchor)
}
Here a tap gesture recognizer is used to detect taps within an ARSCNView. When a tap is detected, a hit test is performed looking for existing and estimated planes. If the plane is vertical, an ARAnchor is created with the worldTransform of the hit test result and added to the ARSession. This registers that point as an area of interest for the ARSession, so we'll get better tracking and less drift after our content is added there.
Next, we need to vend our SCNNode for the newly added ARAnchor. For example:
func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
    if anchor is ARPlaneAnchor {
        let anchorNode = SCNNode()
        anchorNode.name = "anchor"
        return anchorNode
    } else {
        let plane = SCNPlane(width: 0.67, height: 1.0)
        plane.firstMaterial?.diffuse.contents = UIImage(named: "monaLisa")
        let planeNode = SCNNode(geometry: plane)
        planeNode.eulerAngles = SCNVector3(CGFloat.pi * -0.5, 0.0, 0.0)
        let node = SCNNode()
        node.addChildNode(planeNode)
        return node
    }
}
Here we first check whether the anchor is an ARPlaneAnchor. If it is, we vend an empty node for debugging purposes. If it is not, then it is an anchor that was added as the result of a hit test, so we create a geometry and node for the most recent tap. Because it is a vertical plane and our content lies flat, we need to rotate it about the x axis, so we adjust its eulerAngles to make it upright. If we returned planeNode directly, the adjustment to its eulerAngles would be overridden, so we add it as a child node of an empty node and return that.
This should result in something like the following.

Align a SceneKit plane to the face of a cube

I have created an SCNBox in SceneKit and am trying to add a circular plane on the face that is touched by the user.
I can add the SCNPlane as a child node at the touch point using the hit test, but I'm struggling to orient the plane to the face that was touched.
The localNormal vector provided as part of the hit test result seems to be what I need, but I'm not sure how to use it. Normally I would orient using the eulerAngles property, but localNormal is a vector. I tried look(at:), which takes an SCNVector3, but that didn't seem to work.
Any suggestions would be gratefully received. The code sample below is taken from touchesBegan; "result" is the SCNHitTestResult:
// Draw a circular plane, double sided
let circle = SCNPlane(width: 0.1, height: 0.1) // alternative: SCNSphere(radius: 0.1)
circle.cornerRadius = 0.5
circle.materials.first?.diffuse.contents = UIColor.black
circle.materials.first?.isDoubleSided = true
let circleNode = SCNNode(geometry: circle)
// Set position to the hit test's local coordinates
circleNode.position = result.localCoordinates
// Align to a far point along the normal
let lookAtPoint = SCNVector3(result.localNormal.x * 100,
                             result.localNormal.y * 100,
                             result.localNormal.z * 100)
circleNode.look(at: lookAtPoint)
// Add to the touched node
result.node.addChildNode(circleNode)
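One thing worth checking: look(at:) expects a world-space target and, by default, points the node's negative Z axis at it, whereas SCNPlane's visible face is +Z. A sketch of a possible fix using the hit test's world-space data and the look(at:up:localFront:) variant; the fixed up vector is an assumption and degenerates on the top and bottom faces, where the normal is parallel to it:
let circleNode = SCNNode(geometry: circle)
result.node.addChildNode(circleNode)
circleNode.position = result.localCoordinates

// Aim the plane's +Z face along the world-space surface normal.
let target = SCNVector3(result.worldCoordinates.x + result.worldNormal.x,
                        result.worldCoordinates.y + result.worldNormal.y,
                        result.worldCoordinates.z + result.worldNormal.z)
circleNode.look(at: target,                      // world-space point just off the face
                up: SCNVector3(0, 1, 0),         // assumption: pick another up vector for top/bottom faces
                localFront: SCNVector3(0, 0, 1)) // SCNPlane faces +Z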

Create UIBezierPath shape in 3D world ARKit

I'm making an app where the user can create flat shapes by positioning points in 3D space with ARKit, but the part where I create the UIBezierPath from these points seems to be problematic.
In my app, the user starts by pressing a button to position a virtual transparent wall in AR at the same place as his device:
guard let currentFrame = sceneView.session.currentFrame else {
    return
}
let imagePlane = SCNPlane(width: sceneView.bounds.width, height: sceneView.bounds.height)
imagePlane.firstMaterial?.diffuse.contents = UIColor.black
imagePlane.firstMaterial?.lightingModel = .constant
let windowNode = SCNNode()
windowNode.geometry = imagePlane
sceneView.scene.rootNode.addChildNode(windowNode)
windowNode.simdTransform = currentFrame.camera.transform
windowNode.opacity = 0.1
Then, by pressing a button, the user places some points (sphere nodes) on that wall to define the shape of the flat object he wants to create. If the user points back at the first sphere node, I close the shape, create a node from it, and place it at the same position as the wall:
let hitTestResult = sceneView.hitTest(self.view.center, options: nil)
if let firstHit = hitTestResult.first {
    if firstHit.node == windowNode {
        let x = Double(firstHit.worldCoordinates.x)
        let y = Double(firstHit.worldCoordinates.y)
        let pointCoordinates = CGPoint(x: x, y: y)
        let sphere = SCNSphere(radius: 0.02)
        sphere.firstMaterial?.diffuse.contents = UIColor.white
        sphere.firstMaterial?.lightingModel = .constant
        let sphereNode = SCNNode(geometry: sphere)
        sceneView.scene.rootNode.addChildNode(sphereNode)
        sphereNode.worldPosition = firstHit.worldCoordinates
        if points.isEmpty {
            windowPath.move(to: pointCoordinates)
        } else {
            windowPath.addLine(to: pointCoordinates)
        }
        points.append(sphereNode)
        if undoButton.alpha == 0 {
            undoButton.alpha = 1
        }
    } else if firstHit.node == points.first {
        windowPath.close()
        let windowShape = SCNShape(path: windowPath, extrusionDepth: 0)
        windowShape.firstMaterial?.diffuse.contents = UIColor.white
        windowShape.firstMaterial?.lightingModel = .constant
        let tintedWindow = SCNNode(geometry: windowShape)
        let worldPosition = windowNode.worldPosition
        tintedWindow.worldPosition = worldPosition
        sceneView.scene.rootNode.addChildNode(tintedWindow)
        // Remove all the sphere nodes from points and reinitialize the UIBezierPath windowPath
        removeAllPoints()
    }
}
That code works when I create the first invisible wall and the first shape, but when I create a second wall and finish drawing my shape, the shape comes out deformed and not at the right place, really not at the right place at all. So I think I'm missing something with the coordinates of my UIBezierPath points, but what?
EDIT
OK, so after several tests, it seems to depend on the orientation of the device at the launch of the AR session. When the device faces, at launch, the first wall that the user will create, the shape is created and placed as expected. But if the user, for example, launches the app with his device pointed in one direction, then rotates 90 degrees, places the first wall, and creates his shape, the shape is deformed and not at the right place.
So it seems to be a problem of 3D coordinates, but I still can't figure it out.
OK, I just found the problem! I was simply using the wrong vectors and coordinates... I've never been a math/geometry guy, haha.
So instead of using:
let x = Double(firstHit.worldCoordinates.x)
let y = Double(firstHit.worldCoordinates.y)
I now use:
let x = Double(firstHit.localCoordinates.x)
let y = Double(firstHit.localCoordinates.y)
And instead of using:
let worldPosition = windowNode.worldPosition
I now use:
let worldPosition = windowNode.transform
That's why the position of my shape node depended on the initialization of the AR session: I was working in world coordinates. It seems obvious to me now.
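Putting the fix together, the relevant lines become something like this (a sketch; the names follow the snippets above):
// Build the path in the wall's local x/y space...
let x = Double(firstHit.localCoordinates.x)
let y = Double(firstHit.localCoordinates.y)
let pointCoordinates = CGPoint(x: x, y: y)

// ...and when closing the shape, give the node the wall's full transform
// (rotation included) instead of just its world position.
let tintedWindow = SCNNode(geometry: windowShape)
tintedWindow.transform = windowNode.transform
sceneView.scene.rootNode.addChildNode(tintedWindow)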

In SceneKit, how do I add a material to my SCNNode() that has SCNPlane() geometry?

I am trying to create a 3D model that moves above a 2D background. I read somewhere else that in order to do that I need to create an SCNNode() with SCNPlane() geometry and use my background image as the material of the SCNPlane(). However, I have no clue how to add materials to a geometry; can you help me?
So far this is my code:
let background = SCNNode()
background.geometry = SCNPlane()
First add your texture image, say "Background.jpg", to your asset catalog (Assets.xcassets).
Then
let background = SCNNode()
background.geometry = SCNPlane(width: 100, height: 100) // better to set an explicit size
background.geometry?.firstMaterial?.diffuse.contents = "Background.jpg"
scene.rootNode.addChildNode(background)
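Equivalently, you can build the SCNMaterial yourself and assign it to the geometry's materials array; a short sketch (the isDoubleSided line is an optional extra):
let material = SCNMaterial()
material.diffuse.contents = UIImage(named: "Background.jpg")
material.isDoubleSided = true // optional: keep the plane visible from behind

let background = SCNNode(geometry: SCNPlane(width: 100, height: 100))
background.geometry?.materials = [material]
scene.rootNode.addChildNode(background)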
