How to add transparency with a shader in SceneKit?

I would like to create a transparency effect from an image; for now I am just testing with a torus, but the shader does not seem to work with alpha. From what I understood from this thread (Using Blending Functions in Scenekit) and this wiki page about transparency (http://en.wikibooks.org/wiki/GLSL_Programming/GLUT/Transparency), glBlendFunc is replaced by #pragma transparent in SceneKit.
Would you know what is wrong with this code?
I created a new project with SceneKit and changed the ship mesh to a torus.
EDIT:
I am now trying with a plane, but my image does not appear on the plane; instead I get the result shown below, with the red and brownish boxes.
My image with alpha:
The result (the image with alpha should replace the brownish color):
let plane = SCNPlane(width: 2, height: 2)
var texture = SKTexture(imageNamed:"small")
texture.filteringMode = SKTextureFilteringMode.Nearest
plane.firstMaterial?.diffuse.contents = texture
let ship = SCNNode(geometry: plane) //SCNTorus(ringRadius: 1, pipeRadius: 0.5)
ship.position = SCNVector3(x: 0, y: 0, z: 15)
scene.rootNode.addChildNode(ship)
let myscale : CGFloat = 10
let box = SCNBox(width: myscale, height: myscale, length: myscale, chamferRadius: 0)
box.firstMaterial?.diffuse.contents = UIColor.redColor()
let theBox = SCNNode(geometry: box)
theBox.position = SCNVector3(x: 0, y: 0, z: 5)
scene.rootNode.addChildNode(theBox)
let scnView = self.view as SCNView
scnView.scene = scene
scnView.backgroundColor = UIColor.blackColor()
var shaders = NSMutableDictionary()
shaders[SCNShaderModifierEntryPointFragment] = String(contentsOfFile: NSBundle.mainBundle().pathForResource("test", ofType: "shader")!, encoding: NSUTF8StringEncoding, error: nil)
var material = SCNMaterial()
material.shaderModifiers = shaders
ship.geometry?.materials = [material]
The shader:
#pragma transparent
#pragma body
_output.color.rgba = vec4(0.0, 0.2, 0.0, 0.2);

SceneKit uses premultiplied alpha (the r, g and b components should be multiplied by the desired a):
vec4(0.0, 0.2, 0.0, 0.2); // `vec4(0.0, 1.0, 0.0, 1.0) * alpha` with alpha = 0.2
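As a sketch, the same fragment modifier written with a straight (non-premultiplied) color and the multiplication made explicit:
#pragma transparent
#pragma body
float alpha = 0.2;
vec3 color = vec3(0.0, 1.0, 0.0);           // straight (non-premultiplied) color
_output.color = vec4(color * alpha, alpha); // premultiply before writing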

I was struggling with this problem too. Finally I found out that to make '#pragma transparent' work, I had to add it to a different shader modifier entry point than the one executing my transparency code.
For example, I added the transparency code to the surface shader and '#pragma transparent' to the geometry shader. Apple's API documentation also adds '#pragma transparent' to the geometry shader; I don't know whether that was intentional.
NSString *geometryScript = @""
    "#pragma transparent";
NSString *surfaceScript = @""
    //"#pragma transparent" // You must not put it together with the transparency code
    "float a = 0.1;"
    "_surface.diffuse = vec4(_surface.diffuse.rgb * a, a);";
// This works for the transparency code in surface shader too.
//NSString *fragmentScript = @""
//    "#pragma transparent";
yourMaterial.shaderModifiers = @{SCNShaderModifierEntryPointGeometry: geometryScript,
                                 SCNShaderModifierEntryPointSurface: surfaceScript};
This code works in iOS 11.2, Xcode 9.2.
This rule applies to the SCNShaderModifierEntryPointFragment shader as well: if you want to change transparency there, you can add '#pragma transparent' to the geometry shader or the surface shader. I haven't tested the SCNShaderModifierEntryPointLightingModel shader.
If you don't add '#pragma transparent' to any entry point, a black background may be blended with the transparent pixels.
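For reference, a Swift sketch of the same setup (a direct translation of the Objective-C above; the multiline string requires Swift 4):
let geometryScript = "#pragma transparent"
let surfaceScript = """
// Must not be put together with the '#pragma transparent' line
float a = 0.1;
_surface.diffuse = vec4(_surface.diffuse.rgb * a, a);
"""
yourMaterial.shaderModifiers = [.geometry: geometryScript,
                                .surface: surfaceScript]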

Adding transparency can be done quite easily in the SCNShadable surface or fragment entry points.
The SCNShaderModifierEntryPointSurface entry point version
#pragma transparent
#pragma body
_surface.diffuse.a = 0.5;
The SCNShaderModifierEntryPointFragment entry point version
#pragma transparent
#pragma body
_output.color.a = 0.5;

Related

How to render a SceneKit shader at a lower resolution?

I'm adding some visual elements to my app with SceneKit shader modifiers like this:
// A SceneKit scene with orthographic projection
let shaderBundle = Bundle(for: Self.self)
let shaderUrl = shaderBundle.url(forResource: "MyShader.frag", withExtension: nil)!
let shaderString = try! String(contentsOf: shaderUrl)
let plane = SCNPlane(width: 512, height: 512) // 1024x1024 pixels on devices with x2 screen resolution
plane.firstMaterial!.shaderModifiers = [SCNShaderModifierEntryPoint.fragment: shaderString]
let planeNode = SCNNode(geometry: plane)
rootNode.addChildNode(planeNode)
The problem is slow performance: SceneKit painstakingly renders every single pixel of the plane that displays the shader. How do I decrease the rendering resolution of the shader while keeping the plane's size unchanged?
I've already tried making the plane smaller and applying an enlarging scale transformation to planeNode, but that was fruitless; the rendition of the shader remained as highly detailed as before.
Using plane.firstMaterial!.diffuse.contentsTransform didn't help either (or maybe I was doing it wrong).
I know I could make the whole SCNView smaller and then apply an affine scale transform if that shader were the only node in the scene, but it's not; there are other nodes (that aren't shaders) in the same scene, and I'd prefer to avoid altering their appearance in any way.
It seems I managed to solve it using a sort of "render to texture" approach: nesting a SceneKit scene inside a SpriteKit scene that is in turn displayed by the top-level SceneKit scene.
Going into more detail, the following SCNNode subclass places a downscaled shader plane inside a SpriteKit SK3DNode, puts that SK3DNode inside a SpriteKit scene (an SKScene), and then uses that SKScene as the diffuse contents of an upscaled plane in the top-level SceneKit scene.
Strangely, to keep the native resolution I need to use scaleFactor*2, so to halve the rendering resolution (normally a scale factor of 0.5) I actually need to use scaleFactor = 1.
If anyone happens to know the reason for this strange behavior or a workaround for it, please let me know in a comment.
import Foundation
import SceneKit
import SpriteKit

class ScaledResolutionFragmentShaderModifierPlaneNode: SCNNode {
    private static let nestedSCNSceneFrustumLength: CGFloat = 8

    // For shader parameter input
    let shaderPlaneMaterial: SCNMaterial

    // shaderModifier: the shader
    // planeSize: the size of the shader on the screen
    // scaleFactor: the scale to be used for the shader's rendering resolution; the lower, the faster
    init(shaderModifier: String, planeSize: CGSize, scaleFactor: CGFloat) {
        let scaledSize = CGSize(width: planeSize.width*scaleFactor, height: planeSize.height*scaleFactor)

        // Nested SceneKit scene with orthographic projection
        let nestedSCNScene = SCNScene()
        let camera = SCNCamera()
        camera.zFar = Double(Self.nestedSCNSceneFrustumLength)
        camera.usesOrthographicProjection = true
        camera.orthographicScale = Double(scaledSize.height/2)
        let cameraNode = SCNNode()
        cameraNode.camera = camera
        cameraNode.simdPosition = simd_float3(x: 0, y: 0, z: Float(Self.nestedSCNSceneFrustumLength/2))
        nestedSCNScene.rootNode.addChildNode(cameraNode)

        let shaderPlane = SCNPlane(width: scaledSize.width, height: scaledSize.height)
        shaderPlaneMaterial = shaderPlane.firstMaterial!
        shaderPlaneMaterial.shaderModifiers = [SCNShaderModifierEntryPoint.fragment: shaderModifier]
        let shaderPlaneNode = SCNNode(geometry: shaderPlane)
        nestedSCNScene.rootNode.addChildNode(shaderPlaneNode)

        // Intermediary SpriteKit scene
        let nestedSCNSceneSKNode = SK3DNode(viewportSize: scaledSize)
        nestedSCNSceneSKNode.scnScene = nestedSCNScene
        nestedSCNSceneSKNode.position = CGPoint(x: scaledSize.width/2, y: scaledSize.height/2)
        nestedSCNSceneSKNode.isPlaying = true
        let intermediarySKScene = SKScene(size: scaledSize)
        intermediarySKScene.backgroundColor = .clear
        intermediarySKScene.addChild(nestedSCNSceneSKNode)

        let intermediarySKScenePlane = SCNPlane(width: scaledSize.width, height: scaledSize.height)
        intermediarySKScenePlane.firstMaterial!.diffuse.contents = intermediarySKScene
        let intermediarySKScenePlaneNode = SCNNode(geometry: intermediarySKScenePlane)
        let invScaleFactor = 1/Float(scaleFactor)
        intermediarySKScenePlaneNode.simdScale = simd_float3(x: invScaleFactor, y: invScaleFactor, z: 1)

        super.init()
        addChildNode(intermediarySKScenePlaneNode)
    }

    required init?(coder: NSCoder) {
        fatalError()
    }
}
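A usage sketch (the shader string and sizes here are placeholders); given the quirk noted above, scaleFactor 1 renders the shader at half the native resolution:
let shaderString = "_output.color.rgb = 1.0 - _output.color.rgb;" // placeholder fragment modifier
let node = ScaledResolutionFragmentShaderModifierPlaneNode(shaderModifier: shaderString,
                                                           planeSize: CGSize(width: 512, height: 512),
                                                           scaleFactor: 1)
rootNode.addChildNode(node)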
In general, without a fairly new GPU feature called variable rasterization rate in Metal or variable rate shading elsewhere, you can’t make one object in a scene run its fragment shader at a different resolution than the rest of the scene.
For this case, depending on what your setup is, you might be able to use SCNTechnique to render the plane in a separate pass at a different resolution, then composite that back into your scene, in the same way some game engines render particles at a lower resolution to save on fill rate. Here’s an example.
First, you’ll need a Metal file in your project (if you already have one, just add to it), containing the following:
#include <SceneKit/scn_metal>

struct QuadVertexIn {
    float3 position [[ attribute(SCNVertexSemanticPosition) ]];
    float2 uv [[ attribute(SCNVertexSemanticTexcoord0) ]];
};

struct QuadVertexOut {
    float4 position [[ position ]];
    float2 uv;
};

vertex QuadVertexOut quadVertex(QuadVertexIn v [[ stage_in ]]) {
    QuadVertexOut o;
    o.position = float4(v.position.x, -v.position.y, 1, 1);
    o.uv = v.uv;
    return o;
}

constexpr sampler compositingSampler(coord::normalized, address::clamp_to_edge, filter::linear);

fragment half4 compositeFragment(QuadVertexOut v [[ stage_in ]],
                                 texture2d<half, access::sample> compositeInput [[ texture(0) ]]) {
    return compositeInput.sample(compositingSampler, v.uv);
}
Then, in your SceneKit code, you can set up and apply the technique like this:
let technique = SCNTechnique(dictionary: [
    "passes": [
        "drawLowResStuff": [
            "draw": "DRAW_SCENE",
            // only draw nodes that are in this category
            "includeCategoryMask": 2,
            "colorStates": ["clear": true, "clearColor": "0.0"],
            "outputs": ["color": "lowResStuff"]],
        "drawScene": [
            "draw": "DRAW_SCENE",
            // don’t draw nodes that are in the low-res-stuff category
            "excludeCategoryMask": 2,
            "colorStates": ["clear": true, "clearColor": "sceneBackground"],
            "outputs": ["color": "COLOR"]],
        "composite": [
            "draw": "DRAW_QUAD",
            "metalVertexShader": "quadVertex",
            "metalFragmentShader": "compositeFragment",
            // don’t clear what’s currently there (the rest of the scene)
            "colorStates": ["clear": false],
            // use alpha blending
            "blendStates": ["enable": true, "colorSrc": "srcAlpha", "colorDst": "oneMinusSrcAlpha"],
            // supply the lowResStuff render target to the fragment shader
            "inputs": ["compositeInput": "lowResStuff"],
            // draw into the main color render target
            "outputs": ["color": "COLOR"]]
    ],
    "sequence": ["drawLowResStuff", "drawScene", "composite"],
    "targets": ["lowResStuff": ["type": "color", "scaleFactor": 0.5]]
])
// mark the plane node as belonging to the category of stuff that gets drawn in the low-res pass
myPlaneNode.categoryBitMask = 2
// apply the technique to the scene view
mySceneView.technique = technique
With a test scene consisting of two spheres with the same texture, and the scaleFactor set to 0.25 instead of 0.5 to exaggerate the effect, the result looks like this.
If you’d prefer sharp pixelation instead of the blurrier resizing depicted above, change filter::linear to filter::nearest in the Metal code. Also, note that the low-res content being composited in is not taking into account the depth buffer, so if your plane is supposed to appear “behind” other objects then you’ll have to do some more work in the compositing function to fix that.

How to write a sceneKit shader modifier for a dissolve in effect

I'd like to build a dissolve-in effect for a SceneKit game. I've been looking into shader modifiers, since they seem to be the most lightweight approach, but I haven't had any luck replicating this effect:
Is it possible to use shader modifiers to create this effect?
How would you go about implementing one?
You can get pretty close to the intended effect with a fragment shader modifier. The basic approach is as follows:
Sample from a noise texture
If the noise sample exceeds a certain threshold (which I call "revealage"), discard the fragment, making it fully transparent
Otherwise, if the fragment is close to the edge (where the noise value approaches revealage), replace its color with your preferred edge color (or gradient)
Apply bloom to make the edges glow
Here's the shader modifier code for doing this:
#pragma arguments

float revealage;
texture2d<float, access::sample> noiseTexture;

#pragma transparent
#pragma body

const float edgeWidth = 0.02;
const float edgeBrightness = 2;
const float3 innerColor = float3(0.4, 0.8, 1);
const float3 outerColor = float3(0, 0.5, 1);
const float noiseScale = 3;

constexpr sampler noiseSampler(filter::linear, address::repeat);
float2 noiseCoords = noiseScale * _surface.ambientTexcoord;
float noiseValue = noiseTexture.sample(noiseSampler, noiseCoords).r;

if (noiseValue > revealage) {
    discard_fragment();
}

float edgeDist = revealage - noiseValue;
if (edgeDist < edgeWidth) {
    float t = edgeDist / edgeWidth;
    float3 edgeColor = edgeBrightness * mix(outerColor, innerColor, t);
    _output.color.rgb = edgeColor;
}
Notice that the revealage parameter is exposed as a material parameter, since you might want to animate it. The other internal constants, such as the edge width and noise scale, can be fine-tuned to get the desired effect with your content.
Different noise textures produce different dissolve effects, so you can experiment with that as well. I just used this multi-octave value noise image:
Load the image as a UIImage or NSImage and set it on the material property that gets exposed as noiseTexture:
material.setValue(SCNMaterialProperty(contents: noiseImage), forKey: "noiseTexture")
You'll need to add bloom as a post-process to get that glowy, e-wire effect. In SceneKit, this is as simple as enabling the HDR pipeline and setting some parameters:
let camera = SCNCamera()
camera.wantsHDR = true
camera.bloomThreshold = 0.8
camera.bloomIntensity = 2
camera.bloomBlurRadius = 16.0
camera.wantsExposureAdaptation = false
All of the numeric parameters will potentially need to be tuned to your content.
To keep things tidy, I prefer to keep shader modifiers in their own text files (I named mine "dissolve.fragment.txt"). Here's how to load some modifier code and attach it to a material.
let modifierURL = Bundle.main.url(forResource: "dissolve.fragment", withExtension: "txt")!
let modifierString = try! String(contentsOf: modifierURL)
material.shaderModifiers = [
SCNShaderModifierEntryPoint.fragment : modifierString
]
And finally, to animate the effect, you can use a CABasicAnimation wrapped in an SCNAnimation:
let revealAnimation = CABasicAnimation(keyPath: "revealage")
revealAnimation.timingFunction = CAMediaTimingFunction(name: .linear)
revealAnimation.duration = 2.5
revealAnimation.fromValue = 0.0
revealAnimation.toValue = 1.0
let scnRevealAnimation = SCNAnimation(caAnimation: revealAnimation)
material.addAnimation(scnRevealAnimation, forKey: "Reveal")
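Presumably (an assumption on my part, not from the original answer) you will also want to seed revealage with an initial value on the material before the animation runs, using the same key-value pattern as for the noise texture:
material.setValue(0.0 as Float, forKey: "revealage") // hypothetical initial value, matching fromValue above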

How do you play a video with alpha channel using AVFoundation?

I have an AR application which uses SceneKit and imports a video onto the scene using an AVPlayer, displaying it through an SKVideoNode inside a SpriteKit scene.
The video is visible as it is supposed to be, but the transparency in the video is not preserved.
Code as follows:
let spriteKitScene = SKScene(size: CGSize(width: self.sceneView.frame.width, height: self.sceneView.frame.height))
spriteKitScene.scaleMode = .aspectFit

guard let fileURL = Bundle.main.url(forResource: "Triple_Tap_1", withExtension: "mp4") else {
    return
}

let videoPlayer = AVPlayer(url: fileURL)
videoPlayer.actionAtItemEnd = .none

let videoSpriteKitNode = SKVideoNode(avPlayer: videoPlayer)
videoSpriteKitNode.position = CGPoint(x: spriteKitScene.size.width / 2.0, y: spriteKitScene.size.height / 2.0)
videoSpriteKitNode.size = spriteKitScene.size
videoSpriteKitNode.yScale = -1.0
videoSpriteKitNode.play()

spriteKitScene.backgroundColor = .clear
spriteKitScene.addChild(videoSpriteKitNode)

let background = SCNPlane(width: CGFloat(2), height: CGFloat(2))
background.firstMaterial?.diffuse.contents = spriteKitScene
let backgroundNode = SCNNode(geometry: background)
backgroundNode.position = position
backgroundNode.constraints = [SCNBillboardConstraint()]
backgroundNode.rotation.z = 0
self.sceneView.scene.rootNode.addChildNode(backgroundNode)

// Create a transform with a translation of 0.2 meters in front of the camera.
var translation = matrix_identity_float4x4
translation.columns.3.z = -0.2
let transform = simd_mul((self.session.currentFrame?.camera.transform)!, translation)

// Add a new anchor to the session.
let anchor = ARAnchor(transform: transform)
self.sceneView.session.add(anchor: anchor)
What would be the best way to implement transparency for the Triple_Tap_1 video in this case?
I have gone through some Stack Overflow questions on this topic, and the only solution I found was a KittyBoom repository created back in 2013 using Objective-C.
I'm hoping that the community can reveal a better solution for this problem. The GPUImage library is not something I could get to work.
I've come up with two ways of making this possible. Both utilize surface shader modifiers. Detailed information on shader modifiers can be found in the Apple Developer Documentation.
Here's an example project I've created.
1. Masking
You would need to create another video that represents a transparency mask. In that video, black = fully opaque, white = fully transparent (or any other way you would like to represent transparency; you would just need to tinker with the surface shader accordingly).
Create an SKScene with this video just like you do in the code you provided and put it into material.transparent.contents (the same material whose diffuse contents hold the video itself):
let spriteKitOpaqueScene = SKScene(...)
let spriteKitMaskScene = SKScene(...)
... // creating SKVideoNodes and AVPlayers for each video etc
let material = SCNMaterial()
material.diffuse.contents = spriteKitOpaqueScene
material.transparent.contents = spriteKitMaskScene
let background = SCNPlane(...)
background.materials = [material]
Add a surface shader modifier to the material. It is going to "convert" the black color from the mask video (well, actually its red component, since we only need one color channel) into alpha.
let surfaceShader = "_surface.transparent.a = 1 - _surface.transparent.r;"
material.shaderModifiers = [ .surface: surfaceShader ]
That's it! Now the white color in the masking video is going to be transparent on the plane.
However, you will have to take extra care to synchronize these two videos, since the AVPlayers will probably get out of sync. Sadly I didn't have time to address that in my example project (yet; I will get back to it when I have time). Look into this question for a possible solution; a rough sketch also follows the pros and cons below.
Pros:
No artifacts (if synchronized)
Precise
Cons:
Requires two videos instead of one
Requires synchronization of the AVPlayers
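A minimal sketch of one way to start the two players in lockstep with AVPlayer's setRate(_:time:atHostTime:) (an assumption, not from the original answer; videoPlayer and maskPlayer stand for the two players from the setup above, and both player items must already be ready to play):
// Host-time scheduling requires opting out of automatic stall handling.
videoPlayer.automaticallyWaitsToMinimizeStalling = false
maskPlayer.automaticallyWaitsToMinimizeStalling = false
// Schedule both players to start from time zero at the same host time, slightly in the future.
let itemTime = CMTime(seconds: 0, preferredTimescale: 600)
let startTime = CMTimeAdd(CMClockGetTime(CMClockGetHostTimeClock()),
                          CMTime(seconds: 0.5, preferredTimescale: 600))
videoPlayer.setRate(1.0, time: itemTime, atHostTime: startTime)
maskPlayer.setRate(1.0, time: itemTime, atHostTime: startTime)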
2. Chroma keying
You would need a video that has a vibrant background color representing the parts that should be transparent. Usually green or magenta is used.
Create an SKScene for this video like you normally would and put it into material.diffuse.contents.
Add a chroma key surface shader modifier which will cut out the color of your choice and make those areas transparent. I borrowed this shader from GPUImage and I don't really know how it actually works, but it seems to be explained in this answer.
let surfaceShader =
    """
    uniform vec3 c_colorToReplace = vec3(0, 1, 0);
    uniform float c_thresholdSensitivity = 0.05;
    uniform float c_smoothing = 0.0;

    #pragma transparent
    #pragma body

    vec3 textureColor = _surface.diffuse.rgb;

    float maskY = 0.2989 * c_colorToReplace.r + 0.5866 * c_colorToReplace.g + 0.1145 * c_colorToReplace.b;
    float maskCr = 0.7132 * (c_colorToReplace.r - maskY);
    float maskCb = 0.5647 * (c_colorToReplace.b - maskY);

    float Y = 0.2989 * textureColor.r + 0.5866 * textureColor.g + 0.1145 * textureColor.b;
    float Cr = 0.7132 * (textureColor.r - Y);
    float Cb = 0.5647 * (textureColor.b - Y);

    float blendValue = smoothstep(c_thresholdSensitivity, c_thresholdSensitivity + c_smoothing, distance(vec2(Cr, Cb), vec2(maskCr, maskCb)));

    float a = blendValue;
    _surface.transparent.a = a;
    """
shaderModifiers = [ .surface: surfaceShader ]
To set the uniforms, use the setValue(_:forKey:) method:
let vector = SCNVector3(x: 0, y: 1, z: 0) // represents float RGB components
setValue(vector, forKey: "c_colorToReplace")
setValue(0.3 as Float, forKey: "c_smoothing")
setValue(0.1 as Float, forKey: "c_thresholdSensitivity")
The as Float part is important; otherwise Swift is going to treat the value as a Double and the shader will not be able to use it.
But to get precise masking from this you would have to really tinker with the c_smoothing and c_thresholdSensitivity uniforms. In my example project I ended up with a little green rim around the shape, but maybe I just didn't use the right values.
Pros:
Only one video required
Simple setup
Cons:
Possible artifacts (a green rim around the border)

Rotate my SceneKit material

I'm taking images with AVCapturePhotoOutput and then using their JPEG representation as the texture on a SceneKit SCNPlane that has the same aspect ratio as the image:
let image = UIImage(data: dataImage!)
let rectangle = SCNPlane(width:9, height:12)
let rectmaterial = SCNMaterial()
rectmaterial.diffuse.contents = image
rectmaterial.isDoubleSided = true
rectangle.materials = [rectmaterial]
let rectnode = SCNNode(geometry: rectangle)
let pos = sceneSpacePosition(inFrontOf: self.pictCamera, atDistance: 16.5) // 16.5 is arbitrary, but makes the rectangle the same size as the camera
rectnode.position = pos
rectnode.orientation = self.pictCamera.orientation
pictView.scene?.rootNode.addChildNode(rectnode)
sceneSpacePosition is a bit of code that can be found here on SO that maps CoreMotion into SceneKit orientation. It is used to place the rectangle, which does indeed appear at the right location with the right size. All very cool.
The problem is that the image is rotated 90 degrees relative to the rectangle. So I did the obvious:
rectmaterial.diffuse.contentsTransform = SCNMatrix4MakeRotation(Float.pi / 2, 0, 0, 1)
This does not work properly; the resulting image is unrecognizable. It appears that one small part of the image has been stretched to a huge size. I thought it might be the wrong axis, but I tried all three with the same result.
Any ideas?
You are rotating around the upper left corner, as suggested by Alain T.
If you move your image down, you get the rotation you were expecting.
Try this:
let translation = SCNMatrix4MakeTranslation(0, -1, 0)
let rotation = SCNMatrix4MakeRotation(Float.pi / 2, 0, 0, 1)
let transform = SCNMatrix4Mult(translation, rotation)
rectmaterial.diffuse.contentsTransform = transform
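Equivalently (an untested sketch, not from the original answer), you can rotate about the center of the unit texture square by translating the center to the origin, rotating, and translating back:
let toOrigin = SCNMatrix4MakeTranslation(-0.5, -0.5, 0)
let rotation = SCNMatrix4MakeRotation(Float.pi / 2, 0, 0, 1)
let fromOrigin = SCNMatrix4MakeTranslation(0.5, 0.5, 0)
// SCNMatrix4Mult concatenates left to right: move center to origin, rotate, move back.
rectmaterial.diffuse.contentsTransform = SCNMatrix4Mult(SCNMatrix4Mult(toOrigin, rotation), fromOrigin)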

SceneKit: Make blocks more lifelike or 3D-like

The code below is used to create a scene and create blocks in SceneKit. The blocks come out looking flat and not "3D enough" according to our users. Screenshots 1-2 show our app.
Screenshots 3-5 show what users expect the blocks to look like, that is more 3D-like.
We have heard different opinions about how to render blocks that look more like screenshots 3-5: some people say use ambient occlusion, others say voxel lighting, some say spot lighting with shadows, or directional lighting.
We previously tried adding omni lighting, but that didn't work, so it was removed. As you can see in the code, we also experimented with an ambient light node, but that didn't yield the right results either.
What is the best way to render our blocks and achieve a comparable look to screenshots 3-5?
Note: we understand the code is not optimized for performance, i.e., that polygons are shown that should not be shown. That is okay. The focus is not on performance but rather on achieving more 3D-like rendering. You can assume some hard limit on nodes, like no more than 1K or 10K in a scene.
Code:
func createScene() {
    // Set scene view
    let scene = SCNScene()
    sceneView.jitteringEnabled = true
    sceneView.scene = scene

    // Add camera node
    sceneView.pointOfView = cameraNode

    // Make delegate to capture screenshots
    sceneView.delegate = self

    // Set ambient lighting
    let ambientLightNode = SCNNode()
    ambientLightNode.light = SCNLight()
    ambientLightNode.light!.type = SCNLightTypeAmbient
    ambientLightNode.light!.color = UIColor(white: 0.50, alpha: 1.0)
    //scene.rootNode.addChildNode(ambientLightNode)
    //sceneView.autoenablesDefaultLighting = true

    // Set floor
    setFloor()

    // Set sky
    setSky()

    // Set initial position for user node
    userNode.position = SCNVector3(x: 0, y: Float(CameraMinY), z: Float(CameraZoom))

    // Add user node
    scene.rootNode.addChildNode(userNode)

    // Add camera to user node
    // zNear fixes white triangle bug while zFar fixes white line bug
    cameraNode.camera = SCNCamera()
    cameraNode.camera!.zNear = Double(0.1)
    cameraNode.camera!.zFar = Double(Int.max)
    cameraNode.position = SCNVector3(x: 0, y: 0, z: 0) //EB: Add some offset to represent the head
    userNode.addChildNode(cameraNode)
}

private func setFloor() {
    // Create floor geometry
    let floorImage = UIImage(named: "FloorBG")!
    let floor = SCNFloor()
    floor.reflectionFalloffEnd = 0
    floor.reflectivity = 0
    floor.firstMaterial!.diffuse.contents = floorImage
    floor.firstMaterial!.diffuse.contentsTransform = SCNMatrix4MakeScale(Float(floorImage.size.width)/2, Float(floorImage.size.height)/2, 1)
    floor.firstMaterial!.locksAmbientWithDiffuse = true
    floor.firstMaterial!.diffuse.wrapS = .Repeat
    floor.firstMaterial!.diffuse.wrapT = .Repeat
    floor.firstMaterial!.diffuse.mipFilter = .Linear

    // Set node & physics
    // -- Must set y-position to 0.5 so blocks are flush with floor
    floorLayer = SCNNode(geometry: floor)
    floorLayer.position.y = -0.5
    let floorShape = SCNPhysicsShape(geometry: floor, options: nil)
    let floorBody = SCNPhysicsBody(type: .Static, shape: floorShape)
    floorLayer.physicsBody = floorBody
    floorLayer.physicsBody!.restitution = 1.0

    // Add to scene
    sceneView.scene!.rootNode.addChildNode(floorLayer)
}

private func setSky() {
    // Create sky geometry
    let sky = SCNFloor()
    sky.reflectionFalloffEnd = 0
    sky.reflectivity = 0
    sky.firstMaterial!.diffuse.contents = SkyColor
    sky.firstMaterial!.doubleSided = true
    sky.firstMaterial!.locksAmbientWithDiffuse = true
    sky.firstMaterial!.diffuse.wrapS = .Repeat
    sky.firstMaterial!.diffuse.wrapT = .Repeat
    sky.firstMaterial!.diffuse.mipFilter = .Linear
    sky.firstMaterial!.diffuse.contentsTransform = SCNMatrix4MakeScale(Float(2), Float(2), 1)

    // Set node & physics
    skyLayer = SCNNode(geometry: sky)
    let skyShape = SCNPhysicsShape(geometry: sky, options: nil)
    let skyBody = SCNPhysicsBody(type: .Static, shape: skyShape)
    skyLayer.physicsBody = skyBody
    skyLayer.physicsBody!.restitution = 1.0

    // Set position
    skyLayer.position = SCNVector3(0, SkyPosY, 0)

    // Set fog
    /*sceneView.scene?.fogEndDistance = 60
    sceneView.scene?.fogStartDistance = 50
    sceneView.scene?.fogDensityExponent = 1.0
    sceneView.scene?.fogColor = SkyColor */

    // Add to scene
    sceneView.scene!.rootNode.addChildNode(skyLayer)
}

func createBlock(position: SCNVector3, animated: Bool) {
    ...

    // Create box geometry
    let box = SCNBox(width: 1.0, height: 1.0, length: 1.0, chamferRadius: 0.0)
    box.firstMaterial!.diffuse.contents = curStyle.getContents() // "curStyle.getContents()" either returns UIColor or UIImage
    box.firstMaterial!.specular.contents = UIColor.whiteColor()

    // Add new block
    let newBlock = SCNNode(geometry: box)
    newBlock.position = position
    blockLayer.addChildNode(newBlock)
}
Screenshots 1-2 (our app):
Screenshots 3-5 (ideal visual representation of blocks):
I still think there are a few easy things you can do that will make a big difference to how your scene is rendered. Apologies for not using your code; this example is something I had lying around.
Right now your scene is lit only by an ambient light:
let aLight = SCNLight()
aLight.type = SCNLightTypeAmbient
aLight.color = UIColor(red: 0.2, green: 0.2, blue: 0.2, alpha: 1.0)
let aLightNode = SCNNode()
aLightNode.light = aLight
scene.rootNode.addChildNode(aLightNode)
If I use only this light in my scene I see the following. Note how all faces are lit the same irrespective of the direction they face. Some games do pull off this aesthetic very well.
The following block of code adds a directional light to the scene. The transformation applied to this light won't be valid for your scene; it's important to orient the light according to where you want the light coming from.
let dLight = SCNLight()
dLight.type = SCNLightTypeDirectional
dLight.color = UIColor(red: 0.6, green: 0.6, blue: 0.6, alpha: 1.0)
let dLightNode = SCNNode()
dLightNode.light = dLight
var dLightTransform = SCNMatrix4Identity
dLightTransform = SCNMatrix4Rotate(dLightTransform, -90 * Float(M_PI)/180, 1, 0, 0)
dLightTransform = SCNMatrix4Rotate(dLightTransform, 37 * Float(M_PI)/180, 0, 0, 1)
dLightTransform = SCNMatrix4Rotate(dLightTransform, -20 * Float(M_PI)/180, 0, 1, 0)
dLightNode.transform = dLightTransform
scene.rootNode.addChildNode(dLightNode)
Now we have shading on each of the faces based on their angle relative to the direction of the light.
Currently SceneKit only supports shadows from lights of type SCNLightTypeSpot. Using a spotlight means we need to both orient it (as with the directional light) and position it. I use this as a replacement for the directional light.
let sLight = SCNLight()
sLight.castsShadow = true
sLight.type = SCNLightTypeSpot
sLight.zNear = 50
sLight.zFar = 120
sLight.spotInnerAngle = 60
sLight.spotOuterAngle = 90
let sLightNode = SCNNode()
sLightNode.light = sLight
var sLightTransform = SCNMatrix4Identity
sLightTransform = SCNMatrix4Rotate(sLightTransform, -90 * Float(M_PI)/180, 1, 0, 0)
sLightTransform = SCNMatrix4Rotate(sLightTransform, 65 * Float(M_PI)/180, 0, 0, 1)
sLightTransform = SCNMatrix4Rotate(sLightTransform, -20 * Float(M_PI)/180, 0, 1, 0)
sLightTransform = SCNMatrix4Translate(sLightTransform, -20, 50, -10)
sLightNode.transform = sLightTransform
scene.rootNode.addChildNode(sLightNode)
In the above code we first tell the spotlight to cast a shadow; by default, all nodes in the scene will then cast shadows (this can be changed). The zNear and zFar settings are also important and must be set so that the shadow-casting nodes fall within this range of distances from the light source. Nodes outside this range will not cast shadows.
After shading and shadows there are a number of other effects you can apply easily. Depth-of-field effects are available for the camera, and fog is similarly easy to include:
scene.fogColor = UIColor.blackColor()
scene.fogStartDistance = 10
scene.fogEndDistance = 110
scenekitView.backgroundColor = UIColor(red: 0.2, green: 0.2, blue: 0.2, alpha: 1.0)
Update
It turns out you can get shadows from a directional light as well, by modifying the spotlight code above: change its type and set the orthographicScale. The default value for orthographicScale seems to be 1.0, which is obviously not suitable for scenes much larger than 1 unit.
let dLight = SCNLight()
dLight.castsShadow = true
dLight.type = SCNLightTypeDirectional
dLight.zNear = 50
dLight.zFar = 120
dLight.orthographicScale = 30
let dLightNode = SCNNode()
dLightNode.light = dLight
var dLightTransform = SCNMatrix4Identity
dLightTransform = SCNMatrix4Rotate(dLightTransform, -90 * Float(M_PI)/180, 1, 0, 0)
dLightTransform = SCNMatrix4Rotate(dLightTransform, 65 * Float(M_PI)/180, 0, 0, 1)
dLightTransform = SCNMatrix4Rotate(dLightTransform, -20 * Float(M_PI)/180, 0, 1, 0)
dLightTransform = SCNMatrix4Translate(dLightTransform, -20, 50, -10)
dLightNode.transform = dLightTransform
scene.rootNode.addChildNode(dLightNode)
This produces the following image.
The scene size is 60x60, so in this case setting the orthographic scale to 30 produces shadows for the objects close to the light. The directional light's shadows appear different from the spotlight's due to the difference in projections (orthographic vs. perspective) used when rendering the shadow map.
Ambient occlusion calculations will give you the best results, but they are very expensive, particularly in a dynamically changing world, which this appears to be.
There are several ways to cheat, and get the look of Ambient occlusion.
Here's one:
Place transparent, gradient shadow textures on geometry "placards" used to present the shadows at the places required. This involves checking the geometry around each new block before deciding which placards to place and with which shadow texture. But this can be made to look very good, at a very low cost in terms of polygons, draw calls and fill rate. It's probably the cheapest way to do this and have it look good or even great, and it can only really be done (with a good look) in a world of blocks; a more organic world rules this technique out. Please excuse the pun. A minimal sketch of this idea follows the alternatives below.
Or, another, similar approach: place additional textures onto/into objects that have the shadow, and blend them with the other textures/materials of the object. This will be a bit fiddly, and I'm not an expert on the powers of materials in SceneKit, so I can't say for sure whether this is possible and/or easy there.
Or: use a blend of textures with a vertex shader that adds a shadow at the edges that touch other geometry or otherwise need a shadow, based on your working out where you want shadows and to what extent. You will still need the placard trick on floors/walls unless you add more vertices inside flat surfaces for the purpose of vertex shading for shadows.
Here's something I did for a friend's CD cover... it shows the power of shadows. It's orthographic, not true 3D perspective, but the shadows give the impression of depth and create the illusion of space:
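The promised sketch of the placard idea (my own illustration, not from the original answer; "shadowGradient" is a hypothetical PNG with a dark-to-clear radial alpha gradient, placed inside the question's createBlock(position:animated:)):
// Lay a gradient shadow plane flat on the floor under the new block.
let placard = SCNPlane(width: 1.4, height: 1.4)
placard.firstMaterial!.diffuse.contents = UIImage(named: "shadowGradient")
placard.firstMaterial!.writesToDepthBuffer = false // avoid z-fighting with the floor
let placardNode = SCNNode(geometry: placard)
placardNode.eulerAngles.x = Float(-M_PI / 2) // rotate the plane to lie flat
placardNode.position = SCNVector3(x: position.x, y: 0.01, z: position.z) // just above the floor
blockLayer.addChildNode(placardNode)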
All the answers above (or below) seem to be good ones (at the time of this writing); however, what I use (just for setting up a simple scene) is one ambient light (which lights everything from all directions) to make things visible, plus one omni light positioned somewhere in the middle of the scene, raised up (along Y, I mean) so it lights the whole scene. The omni light gives the user a sense of shading, and the ambient light makes it more like sunlight. A minimal sketch follows at the end of this answer.
For example:
Imagine sitting in a living room (like I am right now) as sunlight pours through the window to your right.
You can obviously see the shadowed area where the couch is not getting sunlight, yet you can still see the details of what is in the shadow.
Now, all of a sudden, your world gets rid of ambient light. BOOM! The shadow is now pitch black; you can no longer see what is in the shadow.
Or say the ambient light came back (what a relief), but all of a sudden the omni light stopped working (probably my fault :( ). Everything is now lit the same: no shadow, no difference. If you lay a paper on the table and look at it from above, there is no shadow, so you think it is part of the table! In a world like this you rely on the contour of something in order to see it; you would have to look at the table from the side to see the thickness of the paper.
Hope this helps (at least a little).
Note: ambient lighting gives a similar effect to an emissive material.
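The sketch mentioned above, in the same style as the earlier lighting code (the color and position values are illustrative):
let ambientLight = SCNLight()
ambientLight.type = SCNLightTypeAmbient
ambientLight.color = UIColor(white: 0.4, alpha: 1.0)
let ambientNode = SCNNode()
ambientNode.light = ambientLight
scene.rootNode.addChildNode(ambientNode)
let omniLight = SCNLight()
omniLight.type = SCNLightTypeOmni
omniLight.color = UIColor(white: 0.9, alpha: 1.0)
let omniNode = SCNNode()
omniNode.light = omniLight
omniNode.position = SCNVector3(x: 0, y: 30, z: 0) // raised along Y to light the whole scene
scene.rootNode.addChildNode(omniNode)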
