Normal mapping in SceneKit - iOS

I am trying to add a normal map to a 3D model in Swift using SCNMaterial properties. The diffuse property works, but no other property, including the normal property, has any visible effect on screen. When I debug to check whether the node's material contains the normal property, it shows that the property exists with the image I added.
I have also checked whether the normal image I am using is correct in the SceneKit editor, where it works fine.
Here is the code I am using:
let node = SCNNode()
node.geometry = SCNSphere(radius: 0.1)
node.geometry!.firstMaterial!.diffuse.contents = UIColor.lightGray
node.geometry!.firstMaterial!.normal.contents = UIImage(named: "normal")
node.position = SCNVector3(0,0,0)
sceneView.scene.rootNode.addChildNode(node)
This is the output I am getting
I am expecting something like this

I found the solution. Since I did not enable default lighting, there was no lighting in the scene. I added this to the code:
sceneView.autoenablesDefaultLighting = true

Given the screenshot, it seems like there is no lighting in the scene, or the material does not respond to lighting, since the sphere is not shaded. For a normal map to have any effect, lighting has to be present, because the map changes how the surface responds to the lighting direction. Have you tried creating an entirely new SCNMaterial and playing with its properties? (e.g. https://developer.apple.com/documentation/scenekit/scnmaterial/lightingmodel looks relevant.)
I would try setting
node.geometry!.firstMaterial!.lightingModel = .physicallyBased
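A minimal sketch of that suggestion, building a fresh SCNMaterial for the sphere from the question (the "normal" asset name comes from the question; default lighting is enabled so the map has light to react to):
let material = SCNMaterial()
material.lightingModel = .physicallyBased
material.diffuse.contents = UIColor.lightGray
material.normal.contents = UIImage(named: "normal")
node.geometry?.materials = [material]

// A normal map only shows up when the scene is lit.
sceneView.autoenablesDefaultLighting = true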

Try this.
let scene = SCNScene()
let sphere = SCNSphere(radius: 0.1)
let sphereMaterial = SCNMaterial()
sphereMaterial.diffuse.contents = UIImage(named: "normal.png")
let sphereNode = SCNNode()
sphereNode.geometry = sphere
sphereNode.geometry?.materials = [sphereMaterial]
sphereNode.position = SCNVector3(0.5,0.1,-1)
scene.rootNode.addChildNode(sphereNode)
sceneView.scene = scene
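If "normal.png" is meant to act as a normal map rather than as the visible color texture, the same material can carry both slots, mirroring the original question; a small sketch:
sphereMaterial.diffuse.contents = UIColor.lightGray
sphereMaterial.normal.contents = UIImage(named: "normal.png")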

Related

Swift gestures in RealityKit like in SceneKit

I created an AR scene with Reality Composer.
let boxScene = try! Experience.loadBox()
But I don't like the gestures that Apple has provided for this method.
boxScene.generateCollisionShapes(recursive: true)
let box = boxScene.box as? Entity & HasCollision
arView.installGestures(for: box!) // Add gestures
I would like to use the same gestures as in SceneKit. I created a cube with RealityKit and added a text entity to each of its faces.
let textEntity_0: Entity = boxScene.children[0].children[0].children[0].children[0].children[0].children[0]
var textModelComp_0: ModelComponent = (textEntity_0.components[ModelComponent.self])!
var material_0 = SimpleMaterial()
material_0.baseColor = .color(.red)
textModelComp_0.materials[0] = material_0
textModelComp_0.mesh = .generateText("testText",
                                      extrusionDepth: 0.001,
                                      font: .systemFont(ofSize: 0.03),
                                      containerFrame: CGRect(),
                                      lineBreakMode: .byCharWrapping)
boxScene.children[0].children[0].children[0].children[0].children[0].children[0].components.set(textModelComp_0)
I do not need the cube to stand on or be strictly attached to a surface, but all of its faces must be visible during rotation. I also need to implement this method:
arView.scene.anchors.append(boxScene)
let anchor = AnchorEntity(.image(group: "LogoTypes", name: "logo"))
anchor.addChild(boxScene)
This approach is from RealityKit. It only shows my 3D cube when it detects the marker; I need the camera feed of the surrounding space in the background, as in AR.
How can I use SceneKit gestures in this case?
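One SceneKit-like alternative is to skip installGestures and drive the entity from a plain UIKit gesture recognizer instead; a rough sketch, assuming arView and the box entity already exist (the handler class name is made up):
import UIKit
import RealityKit

final class BoxGestureHandler: NSObject {
    private let arView: ARView
    private let target: Entity

    init(arView: ARView, target: Entity) {
        self.arView = arView
        self.target = target
        super.init()
        let pan = UIPanGestureRecognizer(target: self, action: #selector(handlePan(_:)))
        arView.addGestureRecognizer(pan)
    }

    @objc private func handlePan(_ gesture: UIPanGestureRecognizer) {
        // Turn horizontal pan distance into a rotation around the Y axis,
        // so every face of the cube can be brought into view.
        let translation = gesture.translation(in: arView)
        let angle = Float(translation.x) * .pi / 180
        target.transform.rotation = simd_quatf(angle: angle, axis: [0, 1, 0]) * target.transform.rotation
        gesture.setTranslation(.zero, in: arView)
    }
}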

ARKit Occlusion Invisible Plane Reference Image

I'm trying to hide SCNPlanes behind an invisible SCNPlane the same size as the ARReferenceImage. Using an SCNAction, I want to reveal those planes next to the ARReferenceImage.
Problem
The SCNPlanes are still visible and not hidden.
This is the code I use for Occlusion:
let plane = SCNPlane(width: referenceImage.physicalSize.width,
                     height: referenceImage.physicalSize.height)
let planeNode = SCNNode(geometry: plane)
planeNode.geometry?.firstMaterial?.writesToDepthBuffer = true
planeNode.geometry?.firstMaterial?.colorBufferWriteMask = .alpha
planeNode.renderingOrder = -1
This is my code:
https://gist.github.com/magicmikek/0444fbd5c146131ad08fbb19875fbc83
The invisible planeNode can't have the same Y value as the SCNPlanes it is supposed to hide through occlusion.
Solution
nodeBehind.position.y = -0.005
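Putting the two parts together, a minimal sketch of the occluder plus the offset (referenceImage and nodeBehind are the objects from the code above):
let occluder = SCNPlane(width: referenceImage.physicalSize.width,
                        height: referenceImage.physicalSize.height)
let occluderNode = SCNNode(geometry: occluder)
occluderNode.geometry?.firstMaterial?.writesToDepthBuffer = true
occluderNode.geometry?.firstMaterial?.colorBufferWriteMask = .alpha
occluderNode.renderingOrder = -1

// Nudge the hidden planes slightly below the occluder so the depth test
// has an unambiguous winner.
nodeBehind.position.y = -0.005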

In SceneKit, how do I add a material to my SCNNode() that has SCNPlane() geometry?

I am trying to create a 3D model that moves above a 2D background. I read somewhere else that in order to do that I need to create an SCNNode() with SCNPlane() geometry and use my background image as the material of the SCNPlane(). However, I have no clue how to add materials to a geometry object. Can you help me?
So far this is my code:
let background = SCNNode()
background.geometry = SCNPlane()
First, add your texture image, say "Background.jpg", to your asset catalogue (Assets.xcassets).
Then:
let background = SCNNode()
background.geometry = SCNPlane(width: 100, height: 100) // better to set its size
background.geometry?.firstMaterial?.diffuse.contents = "Background.jpg"
scene.rootNode.addChildNode(background)
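If you prefer to build the material explicitly rather than going through firstMaterial, a minimal sketch using the same assumed "Background.jpg" asset:
let material = SCNMaterial()
material.diffuse.contents = UIImage(named: "Background.jpg")

let plane = SCNPlane(width: 100, height: 100)
plane.materials = [material]

let background = SCNNode(geometry: plane)
scene.rootNode.addChildNode(background)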

SceneKit hit test error while moving camera

I declare my camera like this at init:
defaultCameraNode.camera = SCNCamera()
defaultCameraNode.position = SCNVector3Make(0, 200, 500)
defaultCameraNode.camera?.zFar = 1000.0
defaultCameraNode.camera?.zNear = 10.0
defaultCameraNode.camera?.xFov = 30.0
defaultCameraNode.camera?.yFov = 30.0
scene.rootNode.addChildNode(defaultCameraNode)
sceneView.pointOfView = defaultCameraNode
defaultCameraNode.constraints = [SCNLookAtConstraint(target: rootNode)]
After this, in a tap gesture handler, I do a hit test:
let hitResults = sceneView.hitTest(sender.locationInView(sceneView), options: nil)
This returns what I want; I get the node.
Then I add a new camera and change the scene's point of view:
var cameraNode = SCNNode()
cameraNode.name = "cameraNode"
cameraNode.position = SCNVector3Make(position.x, position.y + 50.0, position.z + Float(radius * 3))
cameraNode.rotation = SCNVector4Make(1, 0, 0, -atan2f(20.0, 40.0))
var camera = SCNCamera()
camera.zNear = 0.0
camera.zFar = 1000.0
camera.xFov = 40.0
camera.yFov = 40.0
cameraNode.camera = camera
node.addChildNode(cameraNode)
SCNTransaction.begin()
SCNTransaction.setAnimationDuration(animationDuration)
sceneView.pointOfView = cameraNode
SCNTransaction.commit()
When the camera position is changed, the same hit test I used before returns a zero-length array, and I get this error in the console:
SceneKit: error, error in _C3DUnProjectPoints
Can anyone help me solve this?
Thanks.
I started a new project and worked out step by step when the hit test goes wrong. I didn't find this anywhere in the official Apple documentation, but my experience is the following:
If you want to change the camera's position or any other property, you can do it by adding a new camera to a new node with the new position, parameters, etc., and then setting the SCNView's pointOfView property. You can animate the change like this:
SCNTransaction.begin()
SCNTransaction.setAnimationDuration(2.0)
sceneView.pointOfView = cameraNode
SCNTransaction.commit()
One important point here: the node holding the new SCNCamera has to be added to the SCNScene's rootNode; otherwise (if you add it to one of the rootNode's child nodes) the hit test will give you an error instead of the SCNNode that you touched.
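In terms of the names from the question, that means attaching the new camera node to the root node instead of to node, roughly like this:
// Add the camera node directly to the scene's root node so hit testing keeps working.
scene.rootNode.addChildNode(cameraNode)

SCNTransaction.begin()
SCNTransaction.setAnimationDuration(animationDuration)
sceneView.pointOfView = cameraNode
SCNTransaction.commit()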
It looks like you are setting another node (one that doesn't have a camera attached to it) to be the scene's point of view.
Look at your code. The node you are attaching the camera to is cameraNode, and the node you are making the point of view is node (the node you are adding cameraNode to).

How to programmatically wrap png texture around cube in SceneKit

I'm new to SceneKit... trying to get some basic stuff working without much success so far. For some reason, when I try to apply a PNG texture to an SCNBox, I end up with nothing but blackness. Here is the simple code snippet I have in viewDidLoad:
let sceneView = view as! SCNView
let scene = SCNScene()
let boxGeometry = SCNBox(width: 10.0, height: 10.0, length: 10.0, chamferRadius: 1.0)
let mat = SCNMaterial()
mat.locksAmbientWithDiffuse = true
mat.diffuse.contents = ["sofb.png","sofb.png","sofb.png","sofb.png","sofb.png", "sofb.png"]
mat.specular.contents = UIColor.whiteColor()
boxGeometry.firstMaterial = mat
let boxNode = SCNNode(geometry: boxGeometry)
scene.rootNode.addChildNode(boxNode)
sceneView.scene = scene
sceneView.autoenablesDefaultLighting = true
sceneView.allowsCameraControl = true
What it ends up looking like is a white light source reflecting off a black cube against a black background. What am I missing? I appreciate all responses.
If you had different images, you would build a separate SCNMaterial object for each, like so:
let material_L = SCNMaterial()
material_L.diffuse.contents = UIImage(named: "CapL")
Here, CapL refers to a .png file stored in the project's Assets.xcassets folder. After building six such objects, you hand them to the box geometry as follows:
boxGeometry.materials = [material_L, material_green_r, material_K, material_purple_r, material_g, material_j]
Note that "boxGeometry" would be better named "box" or "cube". Also, it would be a good idea to do that work in a new class in your project, constructed like:
class BoxScene: SCNScene {
You would then use it with modern Swift in your view controller's viewDidLoad method like this:
let scnView = self.view as! SCNView
scnView.scene = BoxScene()
(For that let statement to work, go to Main.storyboard -> View Controller Scene -> View Controller -> View -> Identity inspector. Then, under Custom Class, change the class from UIView to SCNView. Otherwise, you receive an error message like:
Could not cast value of type 'UIView' to 'SCNView')
Passing an array of images (to create a cube map) is only supported by the reflective material property and the scene's background.
In your case, all the images are the same, so you would only have to assign the image itself (not an array) to the contents to have it appear on all sides of the box.
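A minimal sketch of that fix, reusing the names from the question:
// A single image assigned to the diffuse contents is repeated on every face of the box.
mat.diffuse.contents = UIImage(named: "sofb.png")
boxGeometry.firstMaterial = mat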
