I have an idea for an AR app, and I noticed that in most AR apps objects are not hidden by walls. Is it possible, for example, to pin an AR portrait in a room so that it can only be seen when I am actually in that room, and not through the walls?
I have no experience with AR; I am just about to learn it.
Thank you.
There is a simple way to do this: detect a plane (in your case a wall) and apply the following to its node:
node.geometry.firstMaterial?.colorBufferWriteMask = []
node.renderingOrder = -1
You can easily hide all your objects behind a wall using three different approaches for wall creation: SCNBox(), SCNShape() with extrusion, or SCNGeometry(). Whichever you choose, assign an empty option set to the wall's .colorBufferWriteMask instance property and set its .renderingOrder instance property to -1. A node with a negative rendering order is rendered first.
// Invisible occluder wall: it writes to the depth buffer but not the color buffer
let wallNode = SCNNode()
wallNode.geometry = SCNBox(width: 15.0, height: 3.0, length: 0.1, chamferRadius: 0)
wallNode.position = SCNVector3(x: 0, y: 0, z: 5)
// Alternative geometries:
//wallNode.geometry = SCNShape(path: UIBezierPath?, extrusionDepth: CGFloat)
//wallNode.geometry = SCNGeometry(sources: [SCNGeometrySource], elements: [SCNGeometryElement]?)
wallNode.geometry?.firstMaterial?.colorBufferWriteMask = []   // draw no color
wallNode.renderingOrder = -1                                  // render before the other nodes
scene.rootNode.addChildNode(wallNode)
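If you want the occluder to match a wall that ARKit has actually detected (with vertical plane detection enabled), a minimal sketch of the ARSCNViewDelegate callback might look like the following; sizing the plane from the anchor's extent and the rotation convention are illustrative assumptions, not part of the answer above.
// Sketch only: build an occluder from a detected vertical plane (assumes
// ARWorldTrackingConfiguration.planeDetection includes .vertical).
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let planeAnchor = anchor as? ARPlaneAnchor else { return }

    // Size the occluder from the detected plane's extent (illustrative)
    let occluder = SCNNode(geometry: SCNPlane(width: CGFloat(planeAnchor.extent.x),
                                              height: CGFloat(planeAnchor.extent.z)))
    occluder.eulerAngles.x = -.pi / 2                     // lay the plane flat against the anchor
    occluder.position = SCNVector3(planeAnchor.center.x, planeAnchor.center.y, planeAnchor.center.z)

    // Same trick as above: depth only, rendered before everything else
    occluder.geometry?.firstMaterial?.colorBufferWriteMask = []
    occluder.renderingOrder = -1

    node.addChildNode(occluder)
}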
Hope this helps.
I'm trying to wrap my head around how to use SCNTechnique with Scenekit.
The effect I'm trying to create can easily be achieved in Unity by adding a custom shader to two materials, making objects with the "filtered" shader visible only when they are seen through the other shader (the portal). I've followed this tutorial to create this in Unity: https://www.youtube.com/watch?v=b9xUO0zHNXw
I'm trying to achieve the same effect in SceneKit, but I'm not sure how to apply the passes. I'm looking at SCNTechnique because it has options for stencilStates, but I'm not sure if this will work.
There are very few resources and tutorials on SCNTechnique.
As mentioned in the comments above, I am not sure how to achieve an occlusion effect using Stencil Tests, which as we know are fairly (and I use that term very loosely) easy to do in Unity.
Having said this, we can achieve a similar effect in Swift using transparency and rendering order.
Rendering Order simply refers to:
The order the node's content is drawn in relative to that of other nodes.
SCNNodes with a larger rendering order are rendered later, and vice versa.
In order to make an object virtually invisible to the naked eye we would need to set the transparency value of an SCNMaterial to a value such as 0.0000001.
So how would we go about this?
In this very basic example, which is simply a portal door and two walls, we can do something like this (which is easy enough to adapt into something more substantial and aesthetically pleasing):
The example is fully commented so it should be easy to understand what we are doing.
/// Generates A Portal Door And Walls Which Can Only Be Seen From Behind e.g. When Walking Through The Portal
///
/// - Returns: SCNNode
func portalNode() -> SCNNode{
//1. Create An SCNNode To Hold Our Portal
let portal = SCNNode()
//2. Create The Portal Door
let doorFrame = SCNNode()
//a. Create The Top Of The Door Frame
let doorFrameTop = SCNNode()
let doorFrameTopGeometry = SCNPlane(width: 0.55, height: 0.1)
doorFrameTopGeometry.firstMaterial?.diffuse.contents = UIColor.black
doorFrameTopGeometry.firstMaterial?.isDoubleSided = true
doorFrameTop.geometry = doorFrameTopGeometry
doorFrame.addChildNode(doorFrameTop)
doorFrameTop.position = SCNVector3(0, 0.45, 0)
//b. Create The Left Side Of The Door Frame
let doorFrameLeft = SCNNode()
let doorFrameLeftGeometry = SCNPlane(width: 0.1, height: 1)
doorFrameLeftGeometry.firstMaterial?.diffuse.contents = UIColor.black
doorFrameLeftGeometry.firstMaterial?.isDoubleSided = true
doorFrameLeft.geometry = doorFrameLeftGeometry
doorFrame.addChildNode(doorFrameLeft)
doorFrameLeft.position = SCNVector3(-0.25, 0, 0)
//c. Create The Right Side Of The Door Frame
let doorFrameRight = SCNNode()
let doorFrameRightGeometry = SCNPlane(width: 0.1, height: 1)
doorFrameRightGeometry.firstMaterial?.diffuse.contents = UIColor.black
doorFrameRightGeometry.firstMaterial?.isDoubleSided = true
doorFrameRight.geometry = doorFrameRightGeometry
doorFrame.addChildNode(doorFrameRight)
doorFrameRight.position = SCNVector3(0.25, 0, 0)
//d. Add The Portal Door To The Portal And Set Its Rendering Order To 200 Meaning It Will Be Rendered After Our Masks
portal.addChildNode(doorFrame)
doorFrame.renderingOrder = 200
doorFrame.position = SCNVector3(0, 0, -1)
//3. Create The Left Wall Enclosure To Hold Our Wall And The Occlusion Node
let leftWallEnclosure = SCNNode()
//a. Create The Left Wall And Set Its Rendering Order To 200 Meaning It Will Be Rendered After Our Masks
let leftWallNode = SCNNode()
let leftWallMainGeometry = SCNPlane(width: 0.5, height: 1)
leftWallNode.geometry = leftWallMainGeometry
leftWallMainGeometry.firstMaterial?.diffuse.contents = UIColor.red
leftWallMainGeometry.firstMaterial?.isDoubleSided = true
leftWallNode.renderingOrder = 200
//b. Create The Left Wall Mask And Set Its Rendering Order To 10 Meaning It Will Be Rendered Before Our Walls
let leftWallMaskNode = SCNNode()
let leftWallMaskGeometry = SCNPlane(width: 0.5, height: 1)
leftWallMaskNode.geometry = leftWallMaskGeometry
leftWallMaskGeometry.firstMaterial?.diffuse.contents = UIColor.blue
leftWallMaskGeometry.firstMaterial?.isDoubleSided = true
leftWallMaskGeometry.firstMaterial?.transparency = 0.0000001
leftWallMaskNode.renderingOrder = 10
leftWallMaskNode.position = SCNVector3(0, 0, 0.001)
//c. Add Our Wall And Mask To The Wall Enclosure
leftWallEnclosure.addChildNode(leftWallMaskNode)
leftWallEnclosure.addChildNode(leftWallNode)
//4. Add The Left Wall Enclosure To Our Portal
portal.addChildNode(leftWallEnclosure)
leftWallEnclosure.position = SCNVector3(-0.55, 0, -1)
//5. Create The Right Wall Enclosure To Hold Our Wall And The Occlusion Node
let rightWallEnclosure = SCNNode()
//a. Create The Right Wall And Set Its Rendering Order To 200 Meaning It Will Be Rendered After Our Masks
let rightWallNode = SCNNode()
let rightWallMainGeometry = SCNPlane(width: 0.5, height: 1)
rightWallNode.geometry = rightWallMainGeometry
rightWallMainGeometry.firstMaterial?.diffuse.contents = UIColor.red
rightWallMainGeometry.firstMaterial?.isDoubleSided = true
rightWallNode.renderingOrder = 200
//b. Create The Right Wall Mask And Set Its Rendering Order To 10 Meaning It Will Be Rendered Before Our Walls
let rightWallMaskNode = SCNNode()
let rightWallMaskGeometry = SCNPlane(width: 0.5, height: 1)
rightWallMaskNode.geometry = rightWallMaskGeometry
rightWallMaskGeometry.firstMaterial?.diffuse.contents = UIColor.blue
rightWallMaskGeometry.firstMaterial?.isDoubleSided = true
rightWallMaskGeometry.firstMaterial?.transparency = 0.0000001
rightWallMaskNode.renderingOrder = 10
rightWallMaskNode.position = SCNVector3(0, 0, 0.001)
//c. Add Our Wall And Mask To The Wall Enclosure
rightWallEnclosure.addChildNode(rightWallMaskNode)
rightWallEnclosure.addChildNode(rightWallNode)
//6. Add The Right Wall Enclosure To Our Portal
portal.addChildNode(rightWallEnclosure)
rightWallEnclosure.position = SCNVector3(0.55, 0, -1)
return portal
}
Which can be tested like so:
let portal = portalNode()
portal.position = SCNVector3(0, 0, -1.5)
self.sceneView.scene.rootNode.addChildNode(portal)
Hope it points you in the right direction...
FYI there is a good tutorial here from Ray Wenderlich which will also be of use to you...
I have a simple 3D area that contains 4 walls. Each wall is an SCNNode with a simple, rectangular SCNBox geometry and a matching SCNPhysicsBody attached. The SCNPhysicsBody uses SCNPhysicsShape.ShapeType.boundingBox and is set to the static type. Here is a code snippet:
let size = (self.levelNode.boundingBox.max - self.levelNode.boundingBox.min) * self.levelNode.scale
// geometryA spans the x axis; geometryB spans the z axis
let geometryA = SCNBox(width: CGFloat(size.x), height: CGFloat(1 * self.levelNode.scale.x), length: 0.01, chamferRadius: 0)
let geometryB = SCNBox(width: CGFloat(size.z), height: CGFloat(1 * self.levelNode.scale.x), length: 0.01, chamferRadius: 0)
geometryA.firstMaterial?.diffuse.contents = UIColor(red: 0.0, green: 0.2, blue: 1.0, alpha: 0.65)
geometryB.firstMaterial?.diffuse.contents = UIColor(red: 0.0, green: 0.2, blue: 1.0, alpha: 0.65)
let nodeA = SCNNode(geometry: geometryA)
nodeA.position += self.levelNode.position
nodeA.position += SCNVector3(0, 0.25 * self.levelNode.scale.y, -size.z/2)
nodeA.name = "Boundary-01"
let nodeB = SCNNode(geometry: geometryA)
nodeB.position += self.levelNode.position
nodeB.position += SCNVector3(0, 0.25 * self.levelNode.scale.y, size.z/2)
nodeB.name = "Boundary-03"
let nodeC = SCNNode(geometry: geometryB)
nodeC.position += self.levelNode.position
nodeC.position += SCNVector3(-size.x/2, 0.25 * self.levelNode.scale.y, 0)
nodeC.eulerAngles = SCNVector3(0, -Float.pi/2, 0)
nodeC.name = "Boundary-02"
let nodeD = SCNNode(geometry: geometryB)
nodeD.position += self.levelNode.position
nodeD.position += SCNVector3(size.x/2, 0.25 * self.levelNode.scale.y, 0)
nodeD.eulerAngles = SCNVector3(0, Float.pi/2, 0)
nodeD.name = "Boundary-04"
let nodes = [nodeA, nodeB, nodeC, nodeD]
for node in nodes {
// Attach a static physics body to each wall and add it to the scene
let shape = SCNPhysicsShape(geometry: node.geometry!, options: [
SCNPhysicsShape.Option.type : SCNPhysicsShape.ShapeType.boundingBox])
let body = SCNPhysicsBody(type: .static, shape: shape)
node.physicsBody = body
node.physicsBody?.isAffectedByGravity = false
node.physicsBody?.categoryBitMask = Bitmask.boundary.rawValue
node.physicsBody?.contactTestBitMask = Bitmask.edge.rawValue
node.physicsBody?.collisionBitMask = 0
scene.rootNode.addChildNode(node)
node.physicsBody?.resetTransform()
}
Inside this area, I spawn entities at a regular time interval. Each also has an SCNBox geometry, a cube this time, smaller than the walls, with the same physics body parameters as above.
To simplify the behaviour of my entities inside this game area, I am calculating their paths to travel, then applying a SCNAction to the relevant node to move them. The SCNAction moves both the node and physics body attached to it.
I am using the SCNPhysicsWorld contact delegate to detect when an entity reaches one of the boundary walls. I then calculate a random trajectory for it from that wall in another direction, clear its actions, and apply a new move SCNAction.
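For context, a rough sketch of that movement and contact handling could look like the following; Bitmask comes from the snippet above, while the "travel" action key and randomTrajectory(for:) are placeholders introduced here for illustration:
// Sketch of the movement and contact handling described above.
func move(_ entity: SCNNode, to destination: SCNVector3) {
    let travel = SCNAction.move(to: destination, duration: 4.0)
    entity.runAction(travel, forKey: "travel")
}

// SCNPhysicsContactDelegate
func physicsWorld(_ world: SCNPhysicsWorld, didBegin contact: SCNPhysicsContact) {
    let categoryA = contact.nodeA.physicsBody?.categoryBitMask ?? 0
    let categoryB = contact.nodeB.physicsBody?.categoryBitMask ?? 0

    // Only react when an entity touches a boundary wall
    guard categoryA == Bitmask.boundary.rawValue || categoryB == Bitmask.boundary.rawValue else { return }

    let entity = categoryA == Bitmask.boundary.rawValue ? contact.nodeB : contact.nodeA
    entity.removeAction(forKey: "travel")              // clear the current move
    move(entity, to: randomTrajectory(for: entity))    // placeholder: pick a new destination
}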
This is where it gets interesting...
When this 'world' is at 1:1 scale, the contacts are detected as normal, both in a standard SCNScene and in a scene projected using ARKit. The visible contact, i.e. the visible change in direction of the entity, appears to be close to the boundary as expected. When I check the contact.penetrationDistance of each contact, its value is e.g. 0.00294602662324905.
BUT when I change the scale of this 'world' to something smaller, say the equivalent of 10cm width, in ARKit, the simulation falls apart.
The contacts between an entity and a boundary node have a comparatively huge visible gap between them when the contact is detected. Yet the contact.penetrationDistance is of the same magnitude as before.
I switched on the ARSCNView debug options to show the physics shapes in the render, and they all appear to be the correct proportions, matching the bounding box of their node.
As you can see from the code example above, the boundary nodes are generated after I have scaled the level, during my AR user setup. They are added to the root node of the scene, not as children of the level node. The same code is being used to generate the entities.
Previously I had tried using the resetTransform() function on the physics bodies, but this did not reliably scale the physics bodies after I scaled the level, so I decided to generate the nodes for the boundaries and entities after the level has been scaled.
Apple's documentation does state that if the SCNPhysicsBody is not a custom shape, it will adopt the scale of the node geometry applied to it. I am not affected by this, as I am generating the geometries and their respective nodes after the scaling has been applied to the level.
One of my assumptions at the moment is that the physics simulation falls apart at such a small scale, but I am not relying on the simulation of forces to move the bodies ...
Is there a more appropriate way to scale the physics world?
Or am I staring at a bug in SCNPhysicsWorld that is, at the moment, beyond my control?
One solution I did think about was to run the entire simulation at 1:1 scale but hidden, then apply those movements to the smaller entities. As you can imagine, that will affect the performance of the entire scene...
The penetration distance of the first contact is a negative value, suggesting there is a gap. This gap does not appear to scale as you scale down the size of the simulation.
As a way to resolve the gap described above, I have implemented an additional check on the contacts in the contact delegate: rather than taking the first contact detected for a particular category, I ensure the penetrationDistance value is positive, so that the two objects actually overlap, before triggering a change in direction of the entity that connected with a boundary.
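A minimal sketch of that extra check, assuming the same contact delegate as before:
// Sketch: treat a contact as "real" only once the shapes actually overlap
func physicsWorld(_ world: SCNPhysicsWorld, didBegin contact: SCNPhysicsContact) {
    // A negative penetrationDistance means there is still a gap between the bodies
    guard contact.penetrationDistance > 0 else { return }

    // ...proceed with the boundary handling described above...
}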
Perhaps I am not setting up the camera properly… I'm starting with an .scn file that has a camera. In Xcode, when rotating the free camera around, the geometries rotate as expected. However, at runtime, nothing happens.
It doesn’t seem to matter if I add the constraint in code or in the editor. The look at constraint works.
It also doesn’t seem to matter if I use the camera from the scn file or if I add a camera in code.
The sample code is
class Poster: SCNNode {
let match: GKTurnBasedMatch
init(match: GKTurnBasedMatch, width: CGFloat, height: CGFloat) {
self.match = match
super.init()
// SCNPlane(width: width, height: height)
self.geometry = SCNBox(width: width, height: height, length: 0.1, chamferRadius: 0)
self.constraints = [SCNBillboardConstraint()]
self.updatePosterImage()
}
// Required by SCNNode's NSSecureCoding conformance
required init?(coder: NSCoder) {
fatalError("init(coder:) has not been implemented")
}
}
So… I gave up on the billboard constraint.
I'm using an SCNLookAtConstraint that looks at the camera node, with gimbal lock enabled.
I was using an SCNPlane but it was doing weird stuff, so I went with an SCNBox for the geometry.
So, in the constructor:
self.geometry = SCNBox(width: self.size.width, height: self.size.height, length: 0.1, chamferRadius: 0)
let it = SCNLookAtConstraint(target: cameraNode)
it.isGimbalLockEnabled = true
self.constraints = [it]
It works.
You're missing ".init()" on the SCNBillboardConstraint(). This line alone did all the work for me:
node.constraints = [SCNBillboardConstraint.init()]
Try this: a constraint that orients theNode to always point toward the current camera.
// Create the constraint
SCNBillboardConstraint *aConstraint = [SCNBillboardConstraint billboardConstraint];
theNode.constraints = @[aConstraint];
theNode is the node you want pointing to the camera. This should work.
Updated
OK, if you were to create a sample project with the Game template and then make the following changes:
// create a clone of the ship, change position and rotation
SCNNode *ship2 = [ship clone];
ship2.position = SCNVector3Make(0, 4, 0);
ship2.eulerAngles = SCNVector3Make(0, M_PI_2, 0);
[scene.rootNode addChildNode:ship2];
// Add the constraint to `ship`
SCNBillboardConstraint *aConstraint = [SCNBillboardConstraint billboardConstraint];
ship.constraints = @[aConstraint];
ship is constrained but ship2 isn't.
If you were to add this:
ship2.constraints = @[aConstraint];
Now, both will face the camera. Isn't this what you are looking for?
By any chance are you using the allowsCameraControl property on SCNView?
If so, remember that setting it will add a new camera to your scene (cloning an existing camera, if one is present, to match its settings), so if you create a constraint to your own camera, the constraint will not be linked to the camera that's actually being used.
Per Apple in their WWDC 2017 video, that property is really for debugging purposes, not for real-world use.
Simply put, you have to ensure you are moving around your own camera, not relying on the auto-created one.
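In practice that might look something like this sketch; sceneView, scene, and posterNode are assumed to exist elsewhere in your project:
// Sketch: constrain against the camera that is actually used for rendering
sceneView.allowsCameraControl = false          // avoid the auto-created/cloned camera

let cameraNode = SCNNode()
cameraNode.camera = SCNCamera()
cameraNode.position = SCNVector3(0, 0, 3)
scene.rootNode.addChildNode(cameraNode)
sceneView.pointOfView = cameraNode             // render through your own camera

// A look-at constraint targeting cameraNode now tracks the camera in use
let lookAt = SCNLookAtConstraint(target: cameraNode)
lookAt.isGimbalLockEnabled = true
posterNode.constraints = [lookAt]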
The code below is used to create a scene and create blocks in SceneKit. The blocks come out looking flat and not "3D enough" according to our users. Screenshots 1-2 show our app.
Screenshots 3-5 show what users expect the blocks to look like, that is more 3D-like.
After speaking to different people, I have heard different opinions about how to render blocks that look more like screenshots 3-5. Some say to use ambient occlusion, others voxel lighting, and some spot lighting with shadows, or directional lighting.
We previously tried adding omni lighting, but that didn't work so it was removed. As you can see in the code, we also experimented with an ambient light node but that also didn't yield the right results.
What is the best way to render our blocks and achieve a comparable look to screenshots 3-5?
Note: we understand the code is not optimized for performance, i.e., that polygons are shown that should not be shown. That is okay. The focus is not on performance but rather on achieving more 3D-like rendering. You can assume some hard limit on nodes, like no more than 1K or 10K in a scene.
Code:
func createScene() {
// Set scene view
let scene = SCNScene()
sceneView.jitteringEnabled = true
sceneView.scene = scene
// Add camera node
sceneView.pointOfView = cameraNode
// Make delegate to capture screenshots
sceneView.delegate = self
// Set ambient lighting
let ambientLightNode = SCNNode()
ambientLightNode.light = SCNLight()
ambientLightNode.light!.type = SCNLightTypeAmbient
ambientLightNode.light!.color = UIColor(white: 0.50, alpha: 1.0)
//scene.rootNode.addChildNode(ambientLightNode)
//sceneView.autoenablesDefaultLighting = true
// Set floor
setFloor()
// Set sky
setSky()
// Set initial position for user node
userNode.position = SCNVector3(x: 0, y: Float(CameraMinY), z: Float(CameraZoom))
// Add user node
scene.rootNode.addChildNode(userNode)
// Add camera to user node
// zNear fixes white triangle bug while zFar fixes white line bug
cameraNode.camera = SCNCamera()
cameraNode.camera!.zNear = Double(0.1)
cameraNode.camera!.zFar = Double(Int.max)
cameraNode.position = SCNVector3(x: 0, y: 0, z: 0) //EB: Add some offset to represent the head
userNode.addChildNode(cameraNode)
}
private func setFloor() {
// Create floor geometry
let floorImage = UIImage(named: "FloorBG")!
let floor = SCNFloor()
floor.reflectionFalloffEnd = 0
floor.reflectivity = 0
floor.firstMaterial!.diffuse.contents = floorImage
floor.firstMaterial!.diffuse.contentsTransform = SCNMatrix4MakeScale(Float(floorImage.size.width)/2, Float(floorImage.size.height)/2, 1)
floor.firstMaterial!.locksAmbientWithDiffuse = true
floor.firstMaterial!.diffuse.wrapS = .Repeat
floor.firstMaterial!.diffuse.wrapT = .Repeat
floor.firstMaterial!.diffuse.mipFilter = .Linear
// Set node & physics
// -- Must set y-position to 0.5 so blocks are flush with floor
floorLayer = SCNNode(geometry: floor)
floorLayer.position.y = -0.5
let floorShape = SCNPhysicsShape(geometry: floor, options: nil)
let floorBody = SCNPhysicsBody(type: .Static, shape: floorShape)
floorLayer.physicsBody = floorBody
floorLayer.physicsBody!.restitution = 1.0
// Add to scene
sceneView.scene!.rootNode.addChildNode(floorLayer)
}
private func setSky() {
// Create sky geometry
let sky = SCNFloor()
sky.reflectionFalloffEnd = 0
sky.reflectivity = 0
sky.firstMaterial!.diffuse.contents = SkyColor
sky.firstMaterial!.doubleSided = true
sky.firstMaterial!.locksAmbientWithDiffuse = true
sky.firstMaterial!.diffuse.wrapS = .Repeat
sky.firstMaterial!.diffuse.wrapT = .Repeat
sky.firstMaterial!.diffuse.mipFilter = .Linear
sky.firstMaterial!.diffuse.contentsTransform = SCNMatrix4MakeScale(Float(2), Float(2), 1);
// Set node & physics
skyLayer = SCNNode(geometry: sky)
let skyShape = SCNPhysicsShape(geometry: sky, options: nil)
let skyBody = SCNPhysicsBody(type: .Static, shape: skyShape)
skyLayer.physicsBody = skyBody
skyLayer.physicsBody!.restitution = 1.0
// Set position
skyLayer.position = SCNVector3(0, SkyPosY, 0)
// Set fog
/*sceneView.scene?.fogEndDistance = 60
sceneView.scene?.fogStartDistance = 50
sceneView.scene?.fogDensityExponent = 1.0
sceneView.scene?.fogColor = SkyColor */
// Add to scene
sceneView.scene!.rootNode.addChildNode(skyLayer)
}
func createBlock(position: SCNVector3, animated: Bool) {
...
// Create box geometry
let box = SCNBox(width: 1.0, height: 1.0, length: 1.0, chamferRadius: 0.0)
box.firstMaterial!.diffuse.contents = curStyle.getContents() // "curStyle.getContents()" either returns UIColor or UIImage
box.firstMaterial!.specular.contents = UIColor.whiteColor()
// Add new block
let newBlock = SCNNode(geometry: box)
newBlock.position = position
blockLayer.addChildNode(newBlock)
}
Screenshots 1-2 (our app):
Screenshots 3-5 (ideal visual representation of blocks):
I still think there are a few easy things you can do that will make a big difference to how your scene is rendered. Apologies for not using your code; this example is something I had lying around.
Right now your scene is only lit by an ambient light.
let aLight = SCNLight()
aLight.type = SCNLightTypeAmbient
aLight.color = UIColor(red: 0.2, green: 0.2, blue: 0.2, alpha: 1.0)
let aLightNode = SCNNode()
aLightNode.light = aLight
scene.rootNode.addChildNode(aLightNode)
If I use only this light in my scene I see the following. Note how all faces are lit the same irrespective of the direction they face. Some games do pull off this aesthetic very well.
The following block of code adds a directional light to this scene. The transformation applied to this light won't be valid for your scene; it's important to orientate the light according to where you want the light coming from.
let dLight = SCNLight()
dLight.type = SCNLightTypeDirectional
dLight.color = UIColor(red: 0.6, green: 0.6, blue: 0.6, alpha: 1.0)
let dLightNode = SCNNode()
dLightNode.light = dLight
var dLightTransform = SCNMatrix4Identity
dLightTransform = SCNMatrix4Rotate(dLightTransform, -90 * Float(M_PI)/180, 1, 0, 0)
dLightTransform = SCNMatrix4Rotate(dLightTransform, 37 * Float(M_PI)/180, 0, 0, 1)
dLightTransform = SCNMatrix4Rotate(dLightTransform, -20 * Float(M_PI)/180, 0, 1, 0)
dLightNode.transform = dLightTransform
scene.rootNode.addChildNode(dLightNode)
Now we have shading on each of the faces based on their angle relative to the direction of the light.
Currently SceneKit only supports shadows if you're using the SCNLightTypeSpot. Using a spotlight means we need to both orientate (as per directional light) and position it. I use this as a replacement for the directional light.
let sLight = SCNLight()
sLight.castsShadow = true
sLight.type = SCNLightTypeSpot
sLight.zNear = 50
sLight.zFar = 120
sLight.spotInnerAngle = 60
sLight.spotOuterAngle = 90
let sLightNode = SCNNode()
sLightNode.light = sLight
var sLightTransform = SCNMatrix4Identity
sLightTransform = SCNMatrix4Rotate(sLightTransform, -90 * Float(M_PI)/180, 1, 0, 0)
sLightTransform = SCNMatrix4Rotate(sLightTransform, 65 * Float(M_PI)/180, 0, 0, 1)
sLightTransform = SCNMatrix4Rotate(sLightTransform, -20 * Float(M_PI)/180, 0, 1, 0)
sLightTransform = SCNMatrix4Translate(sLightTransform, -20, 50, -10)
sLightNode.transform = sLightTransform
scene.rootNode.addChildNode(sLightNode)
In the above code we first tell the spotlight to cast a shadow, by default all nodes in your scene will then cast a shadow (this can be changed). The zNear and zFar settings are also important and must be specified so that the nodes casting shadows are within this range of distance from the light source. Nodes outside this range will not cast a shadow.
After shading/shadows there are a number of other effects you can apply easily. Depth of field effects are available for the camera (a quick sketch follows the fog snippet below). Fog is similarly easy to include.
scene.fogColor = UIColor.blackColor()
scene.fogStartDistance = 10
scene.fogEndDistance = 110
scenekitView.backgroundColor = UIColor(red: 0.2, green: 0.2, blue: 0.2, alpha: 1.0)
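For the depth of field mentioned above, a minimal sketch (assuming cameraNode is the node holding your SCNCamera; the values are arbitrary):
// Sketch: simple depth of field settings on an SCNCamera
cameraNode.camera?.focalDistance = 20     // distance at which objects stay sharp
cameraNode.camera?.focalBlurRadius = 4    // blur applied away from that distance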
Update
Turns out you can get shadows from a directional light. Modify the spotlight code from above by changing the light's type and setting its orthographicScale. The default value for orthographicScale seems to be 1.0, which is obviously not suitable for scenes much larger than 1 unit.
let dLight = SCNLight()
dLight.castsShadow = true
dLight.type = SCNLightTypeDirectional
dLight.zNear = 50
dLight.zFar = 120
dLight.orthographicScale = 30
let dLightNode = SCNNode()
dLightNode.light = dLight
var dLightTransform = SCNMatrix4Identity
dLightTransform = SCNMatrix4Rotate(dLightTransform, -90 * Float(M_PI)/180, 1, 0, 0)
dLightTransform = SCNMatrix4Rotate(dLightTransform, 65 * Float(M_PI)/180, 0, 0, 1)
dLightTransform = SCNMatrix4Rotate(dLightTransform, -20 * Float(M_PI)/180, 0, 1, 0)
dLightTransform = SCNMatrix4Translate(dLightTransform, -20, 50, -10)
dLightNode.transform = dLightTransform
scene.rootNode.addChildNode(dLightNode)
Produces the following image.
The scene size is 60x60, so in this case setting the orthographic scale to 30 produces shadows for the objects close to the light. The directional light shadows appear different to the spot light due to the difference in projections (orthographic vs perspective) used when rendering the shadow map.
Ambient occlusion calculations will give you the best results, but they are very expensive, particularly in a dynamically changing world, which it looks like this is.
There are several ways to cheat, and get the look of Ambient occlusion.
Here's one:
Place transparent, gradient shadow textures on geometry "placards" used to present the shadows at the places required (a sketch of this placard idea follows the options below). This will involve checking the geometry around the new block before determining which placards to place, and with which texture for the shadowing. But this can be made to look VERY good, at a very low cost in terms of polygons, draw calls and fill rate. It's probably the cheapest way to do this and have it look good/great, and it can only really be done (with a good look) in a world of blocks. A more organic world rules this technique out. Please excuse the pun.
Or, another, similar: Place additional textures onto/into objects that have the shadow, and blend this with the other textures/materials in the object. This will be a bit fiddly, and I'm not an expert on the powers of materials in Scene Kit, so can't say for sure this is possible and/or easy in it.
Or: Use a blend of textures with a vertex shader that's adding a shadow from the edges that touch or otherwise need/desire a shadow based on your ascertaining what and where you want shadows and to what extent. Will still need the placards trick on the floors/walls unless you add more vertices inside flat surfaces for the purpose of vertex shading for shadows.
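To make the first option a little more concrete, here is a rough sketch of such a shadow placard; the "shadowGradient" asset name, the sizes, and the offsets are placeholders rather than anything prescribed above:
// Sketch of a shadow "placard": a small plane with a radial black-to-transparent
// gradient texture ("shadowGradient" is a placeholder asset), laid just above
// the floor around a 1x1 block to fake a contact shadow.
func addShadowPlacard(under blockNode: SCNNode, to parent: SCNNode) {
    let placard = SCNPlane(width: 1.4, height: 1.4)          // a little larger than the block
    placard.firstMaterial?.diffuse.contents = UIImage(named: "shadowGradient")
    placard.firstMaterial?.lightingModel = .constant         // don't re-light the fake shadow
    placard.firstMaterial?.writesToDepthBuffer = false       // avoid z-fighting with the floor

    let placardNode = SCNNode(geometry: placard)
    placardNode.eulerAngles.x = -.pi / 2                     // lie flat on the floor
    placardNode.position = blockNode.position
    placardNode.position.y = 0.01                            // sit just above the floor
    parent.addChildNode(placardNode)
}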
Here's something I did for a friend's CD cover... it shows the power of shadows. It's orthographic, not true 3D perspective, but the shadows give the impression of depth and create the illusion of space:
All the answers above (or below) seem to be good ones (at the time of this writing). However,
what I use (just for setting up a simple scene) is one ambient light (which lights everything in all directions) to make things visible, and then one omnidirectional light positioned somewhere in the middle of the scene. The omni light can be raised up (along Y, I mean) to light the whole scene. The omni light gives the user a sense of shading, and the ambient light makes it feel more like sunlight.
For example:
Imagine sitting in a living room (like I am right now) and the sunlight peers through the window to your right.
You can obviously see a shadowed area where the couch is not getting sunlight; however, you can still see the details of what is in the shadow.
Now, all of a sudden your world gets rid of the ambient light. BOOM! The shadow is now pitch black; you can no longer see what is in the shadow.
Or say the ambient light came back again (what a relief), but all of a sudden the omni light stopped working (probably my fault :( ). Everything is now lit the same: no shadow, no difference. But if you lay a piece of paper on the table and look at it from above, there is no shadow, so you think it is part of the table! In a world like this you rely on the contour of something in order to see it; you would have to look at the table from the side to see the thickness of the paper.
Hope this helps (at least a little)
Note: ambient lighting gives a similar effect to an emissive material.
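A minimal sketch of the ambient + omni setup described above (positions and colors are arbitrary):
// Sketch: one ambient light for base visibility plus one omni light for shading
let ambientLight = SCNLight()
ambientLight.type = .ambient
ambientLight.color = UIColor(white: 0.4, alpha: 1.0)
let ambientNode = SCNNode()
ambientNode.light = ambientLight
scene.rootNode.addChildNode(ambientNode)

let omniLight = SCNLight()
omniLight.type = .omni
omniLight.color = UIColor(white: 0.75, alpha: 1.0)
let omniNode = SCNNode()
omniNode.light = omniLight
omniNode.position = SCNVector3(0, 20, 0)   // raised above the middle of the scene
scene.rootNode.addChildNode(omniNode)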
I have a spotlight, created with the code beneath, casting shadows on all of my nodes:
spotLight.type = SCNLightTypeSpot
spotLight.spotInnerAngle = 50.0
spotLight.spotOuterAngle = 150.0
spotLight.castsShadow = true
spotLight.shadowMode = SCNShadowMode.Deferred
spotlightNode.light = spotLight
spotlightNode.eulerAngles = SCNVector3(x: GLKMathDegreesToRadians(-90), y: 0, z: 0)
spotlightNode.position = levelData.coordinatesForGridPosition(column: 0, row: playerGridRow)
spotlightNode.position.y = 1.5
rootNode.addChildNode(spotlightNode)
The scene is moving along the z axis, and the camera has an infinite animation that makes it move:
let moveAction = SCNAction.moveByX(0.0, y: 0.0, z: CGFloat(-GameVariables.segmentSize / 2), duration: 2.0)
cameraContainerNode.runAction(SCNAction.repeatActionForever(moveAction))
As the camera moves though, the light doesn't, so after a while the whole scene is dark. I want to move the light with the camera; however, if I apply the same moving animation to the light node, all the shadows start to flicker. I tried changing the SCNShadowMode to Forward and the light type to Directional, but the flickering is still there. With directional, I actually lose most of my shadows. If I create a new light node later on, it will seem that I have two "suns", which of course is impossible. The final aim is simply to have an infinite light that shines parallel to the scene from the left, casting all the shadows to the right. Any ideas?
Build a node tree to hold both spotlight and camera.
Create, say, cameraRigNode as an SCNNode with no geometry. Create cameraContainerNode and spotlightNode the same way you are now. But make them children of cameraRigNode, not the scene's root node.
Apply moveAction to cameraRigNode. Both the camera and the light will now move together.
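A sketch of that rig, reusing the names from the question:
// Sketch: one rig node parents both the camera container and the spotlight,
// so moving the rig moves them together.
let cameraRigNode = SCNNode()
cameraRigNode.addChildNode(cameraContainerNode)
cameraRigNode.addChildNode(spotlightNode)
rootNode.addChildNode(cameraRigNode)

// Run the existing move action on the rig instead of the camera container;
// the light follows, so the shadows stay stable relative to the camera.
cameraRigNode.runAction(SCNAction.repeatActionForever(moveAction))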