How do you play a video with an alpha channel using AVFoundation?

I have an AR application which uses SceneKit and renders a video onto the scene by wrapping an AVPlayer in an SKVideoNode and adding that as a child node of an SKScene.
The video is visible as it is supposed to be, but the transparency in the video is not rendered.
Code as follows:
let spriteKitScene = SKScene(size: CGSize(width: self.sceneView.frame.width, height: self.sceneView.frame.height))
spriteKitScene.scaleMode = .aspectFit
guard let fileURL = Bundle.main.url(forResource: "Triple_Tap_1", withExtension: "mp4") else {
return
}
let videoPlayer = AVPlayer(url: fileURL)
videoPlayer.actionAtItemEnd = .none
let videoSpriteKitNode = SKVideoNode(avPlayer: videoPlayer)
videoSpriteKitNode.position = CGPoint(x: spriteKitScene.size.width / 2.0, y: spriteKitScene.size.height / 2.0)
videoSpriteKitNode.size = spriteKitScene.size
videoSpriteKitNode.yScale = -1.0
videoSpriteKitNode.play()
spriteKitScene.backgroundColor = .clear
spriteKitScene.addChild(videoSpriteKitNode)
let background = SCNPlane(width: CGFloat(2), height: CGFloat(2))
background.firstMaterial?.diffuse.contents = spriteKitScene
let backgroundNode = SCNNode(geometry: background)
backgroundNode.position = position
backgroundNode.constraints = [SCNBillboardConstraint()]
backgroundNode.rotation.z = 0
self.sceneView.scene.rootNode.addChildNode(backgroundNode)
// Create a transform with a translation of 0.2 meters in front of the camera.
var translation = matrix_identity_float4x4
translation.columns.3.z = -0.2
let transform = simd_mul((self.session.currentFrame?.camera.transform)!, translation)
// Add a new anchor to the session.
let anchor = ARAnchor(transform: transform)
self.sceneView.session.add(anchor: anchor)
What would be the best way to implement transparency for the Triple_Tap_1 video in this case?
I have gone through some Stack Overflow questions on this topic, and the only solution I found was a KittyBoom repository created back in 2013 using Objective-C.
I'm hoping the community can suggest a better solution to this problem. The GPUImage library is not something I could get to work.

I've come up with two ways of making this possible. Both utilize surface shader modifiers. Detailed information on shader modifiers can be found in the Apple Developer Documentation.
Here's an example project I've created.
1. Masking
You would need to create another video that represents a transparency mask. In that video black = fully opaque, white = fully transparent (or any other mapping you like; you would just need to adjust the surface shader accordingly).
Create an SKScene with this video just like you do in the code you provided and put it into material.transparent.contents (the same material whose diffuse contents hold the main video):
let spriteKitOpaqueScene = SKScene(...)
let spriteKitMaskScene = SKScene(...)
... // creating SKVideoNodes and AVPlayers for each video etc
let material = SCNMaterial()
material.diffuse.contents = spriteKitOpaqueScene
material.transparent.contents = spriteKitMaskScene
let background = SCNPlane(...)
background.materials = [material]
Add a surface shader modifier to the material. It is going to "convert" the black color from the mask video (well, actually the red channel, since we only need one color component) into alpha.
let surfaceShader = "_surface.transparent.a = 1 - _surface.transparent.r;"
material.shaderModifiers = [ .surface: surfaceShader ]
That's it! Now the white color on the masking video is going to be transparent on the plane.
However, you would have to take extra care to synchronize these two videos since the AVPlayers will probably drift out of sync. Sadly I didn't have time to address that in my example project (yet; I will get back to it when I have time). Look into this question for a possible solution.
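For what it's worth, here is a rough sketch of mine (not from the example project) of one way to start two players in lockstep using AVPlayer's setRate(_:time:atHostTime:). The half-second lead time is an arbitrary choice, and both players should already be ready to play:
import AVFoundation

func startSynchronized(_ players: [AVPlayer]) {
    // Precise scheduling requires this to be off (iOS 10+), otherwise AVPlayer may delay playback.
    players.forEach { $0.automaticallyWaitsToMinimizeStalling = false }
    // Anchor item time zero of every player to the same host time, slightly in the future.
    let hostTime = CMClockGetTime(CMClockGetHostTimeClock())
    let startTime = CMTimeAdd(hostTime, CMTime(seconds: 0.5, preferredTimescale: 600))
    players.forEach { $0.setRate(1.0, time: .zero, atHostTime: startTime) }
}

// e.g. startSynchronized([videoPlayer, maskVideoPlayer])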
Pros:
No artifacts (if synchronized)
Precise
Cons:
Requires two videos instead of one
Requires synchronisation of the AVPlayers
2. Chroma keying
You would need a video whose background is a vibrant color representing the parts that should be transparent. Usually green or magenta is used.
Create an SKScene for this video like you normally would and put it into material.diffuse.contents.
Add a chroma key surface shader modifier which will cut out the color of your choice and make those areas transparent. I've borrowed this shader from GPUImage and I don't really know how it actually works, but it seems to be explained in this answer.
let surfaceShader =
"""
uniform vec3 c_colorToReplace = vec3(0, 1, 0);
uniform float c_thresholdSensitivity = 0.05;
uniform float c_smoothing = 0.0;
#pragma transparent
#pragma body
vec3 textureColor = _surface.diffuse.rgb;
float maskY = 0.2989 * c_colorToReplace.r + 0.5866 * c_colorToReplace.g + 0.1145 * c_colorToReplace.b;
float maskCr = 0.7132 * (c_colorToReplace.r - maskY);
float maskCb = 0.5647 * (c_colorToReplace.b - maskY);
float Y = 0.2989 * textureColor.r + 0.5866 * textureColor.g + 0.1145 * textureColor.b;
float Cr = 0.7132 * (textureColor.r - Y);
float Cb = 0.5647 * (textureColor.b - Y);
float blendValue = smoothstep(c_thresholdSensitivity, c_thresholdSensitivity + c_smoothing, distance(vec2(Cr, Cb), vec2(maskCr, maskCb)));
float a = blendValue;
_surface.transparent.a = a;
"""
material.shaderModifiers = [ .surface: surfaceShader ]
To set the uniforms, use the setValue(_:forKey:) method on the material:
let vector = SCNVector3(x: 0, y: 1, z: 0) // represents the RGB components as floats
material.setValue(vector, forKey: "c_colorToReplace")
material.setValue(0.3 as Float, forKey: "c_smoothing")
material.setValue(0.1 as Float, forKey: "c_thresholdSensitivity")
The as Float part is important; otherwise Swift is going to treat the value as a Double and the shader will not be able to use it.
But to get a precise masking from this you would have to really tinker with the c_smoothing and c_thresholdSensitivity uniforms. In my example project I ended up having a little green rim around the shape, but maybe I just didn't use the right values.
Pros:
Only one video required
Simple setup
Cons:
Possible artifacts (green rim around the border)

Related

SceneKit + ARKit: Billboarding without rolling with camera

I'm trying to draw a billboarded quad using SceneKit and ARKit. I have basic billboarding working; however, when I roll the camera the billboard also rotates in place. This video shows it in action as I roll the camera to the left (the smiley face is the billboard):
Instead I'd like the billboard to still face the camera but stay oriented vertically in the scene, no matter what the camera is doing.
Here's how I compute billboarding:
// inside frame update function
struct Vertex {
var position: SIMD3<Float>
var texCoord: SIMD2<Float>
}
let halfSize = Float(0.25)
let cameraNode = sceneView.scene.rootNode.childNodes.first!
let modelTransform = self.scnNode.simdWorldTransform
let viewTransform = cameraNode.simdWorldTransform.inverse
let modelViewTransform = viewTransform * modelTransform
let right = SIMD3<Float>(modelViewTransform[0][0], modelViewTransform[1][0], modelViewTransform[2][0]);
let up = SIMD3<Float>(modelViewTransform[0][1], modelViewTransform[1][1], modelViewTransform[2][1]);
// drawBuffer is an MTLBuffer of vertex data
let data = drawBuffer.contents().bindMemory(to: Vertex.self, capacity: 4)
data[0].position = (right + up) * halfSize
data[0].texCoord = SIMD2<Float>(0, 0)
data[1].position = -(right - up) * halfSize
data[1].texCoord = SIMD2<Float>(1, 0)
data[2].position = (right - up) * halfSize
data[2].texCoord = SIMD2<Float>(0, 1)
data[3].position = -(right + up) * halfSize
data[3].texCoord = SIMD2<Float>(1, 1)
Again this gets the billboard facing the camera correctly, however when I roll the camera, the billboard rotates along with it.
What I'd like instead is for the billboard to point towards the camera but keep its orientation in the world. Any suggestions on how to fix this?
Note that my code example is simplified so I can't use SCNBillboardConstraint or anything like that; I need to be able to compute the billboarding myself
Here's the solution I came up with: create a new node that matches the camera's position and rotation, but without any roll:
let tempNode = SCNNode()
tempNode.simdWorldPosition = cameraNode.simdWorldPosition
// This changes the node's pitch and yaw, but not roll
tempNode.simdLook(at: cameraNode.simdConvertPosition(SIMD3<Float>(0, 0, 1), to: nil))
let view = tempNode.simdWorldTransform.inverse
let modelViewTransform = view * node.simdWorldTransform
This keeps the billboard pointing upwards in world space, even as the camera rolls.
I had actually tried doing this earlier by setting tempNode.eulerAngles.z = 0; however, that seems to affect the rest of the transform matrix in unexpected ways.
There's probably a way to do this without creating a temporary node, but this works well enough for me.
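Putting the two snippets together, here is a minimal sketch of the roll-free version (the names cameraNode and scnNode match the snippets above; treat this as part of your per-frame update):
let tempNode = SCNNode()
tempNode.simdWorldPosition = cameraNode.simdWorldPosition
// Pitch and yaw follow the camera, roll stays zero.
tempNode.simdLook(at: cameraNode.simdConvertPosition(SIMD3<Float>(0, 0, 1), to: nil))
let viewTransform = tempNode.simdWorldTransform.inverse
let modelViewTransform = viewTransform * scnNode.simdWorldTransform
// Extract the camera-aligned right and up axes, as in the question's code.
let right = SIMD3<Float>(modelViewTransform[0][0], modelViewTransform[1][0], modelViewTransform[2][0])
let up = SIMD3<Float>(modelViewTransform[0][1], modelViewTransform[1][1], modelViewTransform[2][1])
// ...then build the four quad vertices from right and up exactly as before.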

How to write a SceneKit shader modifier for a dissolve-in effect

I'd like to build a dissolve-in effect for a SceneKit game. I've been looking into shader modifiers since they seem to be the most lightweight option, but I haven't had any luck replicating this effect:
Is it possible to use shader modifiers to create this effect?
How would you go about implementing one?
You can get pretty close to the intended effect with a fragment shader modifier. The basic approach is as follows:
Sample from a noise texture
If the noise sample is above the current threshold (which I call "revealage"), discard the fragment, making it fully transparent
Otherwise, if the fragment is close to the edge, replace its color with your preferred edge color (or gradient)
Apply bloom to make the edges glow
Here's the shader modifier code for doing this:
#pragma arguments
float revealage;
texture2d<float, access::sample> noiseTexture;
#pragma transparent
#pragma body
const float edgeWidth = 0.02;
const float edgeBrightness = 2;
const float3 innerColor = float3(0.4, 0.8, 1);
const float3 outerColor = float3(0, 0.5, 1);
const float noiseScale = 3;
constexpr sampler noiseSampler(filter::linear, address::repeat);
float2 noiseCoords = noiseScale * _surface.ambientTexcoord;
float noiseValue = noiseTexture.sample(noiseSampler, noiseCoords).r;
if (noiseValue > revealage) {
discard_fragment();
}
float edgeDist = revealage - noiseValue;
if (edgeDist < edgeWidth) {
float t = edgeDist / edgeWidth;
float3 edgeColor = edgeBrightness * mix(outerColor, innerColor, t);
_output.color.rgb = edgeColor;
}
Notice that the revealage parameter is exposed as a material parameter, since you might want to animate it. There are other internal constants, such as the edge width and noise scale, that can be fine-tuned to get the desired effect with your content.
Different noise textures produce different dissolve effects, so you can experiment with that as well. I just used this multioctave value noise image:
Load the image as a UIImage or NSImage and set it on the material property that gets exposed as noiseTexture:
material.setValue(SCNMaterialProperty(contents: noiseImage), forKey: "noiseTexture")
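As a minimal sketch of the loading step on iOS (the asset name "noise" is my placeholder):
let noiseImage = UIImage(named: "noise")! // bundled noise texture
material.setValue(SCNMaterialProperty(contents: noiseImage), forKey: "noiseTexture")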
You'll need to add bloom as a post-process to get that glowy, e-wire effect. In SceneKit, this is as simple as enabling the HDR pipeline and setting some parameters:
let camera = SCNCamera()
camera.wantsHDR = true
camera.bloomThreshold = 0.8
camera.bloomIntensity = 2
camera.bloomBlurRadius = 16.0
camera.wantsExposureAdaptation = false
All of the numeric parameters will potentially need to be tuned to your content.
To keep things tidy, I prefer to keep shader modifiers in their own text files (I named mine "dissolve.fragment.txt"). Here's how to load some modifier code and attach it to a material.
let modifierURL = Bundle.main.url(forResource: "dissolve.fragment", withExtension: "txt")!
let modifierString = try! String(contentsOf: modifierURL)
material.shaderModifiers = [
SCNShaderModifierEntryPoint.fragment : modifierString
]
And finally, to animate the effect, you can use a CABasicAnimation wrapped with a SCNAnimation:
let revealAnimation = CABasicAnimation(keyPath: "revealage")
revealAnimation.timingFunction = CAMediaTimingFunction(name: .linear)
revealAnimation.duration = 2.5
revealAnimation.fromValue = 0.0
revealAnimation.toValue = 1.0
let scnRevealAnimation = SCNAnimation(caAnimation: revealAnimation)
material.addAnimation(scnRevealAnimation, forKey: "Reveal")
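One caveat from me (not part of the original answer): a CABasicAnimation is removed on completion by default, so the material would snap back to its starting revealage. Setting the final value on the material as well, alongside adding the animation, is a simple way to make the end state stick:
material.setValue(1.0, forKey: "revealage") // model value the material falls back to once the animation is removed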

SKEffectNode to an SKTexture?

SKEffectNodes have a shouldRasterize "switch" that bakes them into a bitmap and doesn't update them until the underlying nodes affected by the effect change.
However, I can't find a way to create an SKTexture from this rasterised "image".
Is it possible to get an SKTexture from an SKEffectNode?
I think you could try a code like this (it's just an example):
if let effect = SKEffectNode.init(fileNamed: "myeffect") {
effect.shouldRasterize = true
self.addChild(effect)
...
let texture = SKView().texture(from: self)
}
Update:
After your answer, I hope I've understood better what you want to achieve.
This is my point of view: if you want to make a shadow of a texture, you could simply create an SKSpriteNode with this texture:
let shadow = SKSpriteNode.init(texture: <yourTexture>)
shadow.blendMode = SKBlendMode.alpha
shadow.colorBlendFactor = 1
shadow.color = SKColor.black
shadow.alpha = 0.25
What I want to say is that you could proceed step by step:
get your texture
process your texture (add filters, apply some other effect...)
get the shadow
This way of working produces a series of useful methods you could use in your project to build other kinds of elements.
Maybe, by separating the tasks, you don't need to use texture(from:) at all.
I've figured this out, in a way that solves my problems, using a Factory.
Read more on how to make a factory, from BenMobile's patient and clear articulation, here: Factory creation and use for making Sprites and Shapes
There's an issue with blurring an SKTexture or SKSpriteNode in that it's going to run out of space: the blur/glow goes beyond the edges of the sprite. To solve this, in the code below you'll see I've created a "framer" object. This is simply an empty SKSpriteNode that's double the size of the texture to be blurred; the texture to be blurred is added as a child of this "framer" object.
It works, regardless of how hacky this is ;)
Inside a static factory class file:
import SpriteKit
class Factory {
private static let view:SKView = SKView() // the magic. This is the rendering space
static func makeShadow(from source: SKTexture, rgb: SKColor, a: CGFloat) -> SKSpriteNode {
let shadowNode = SKSpriteNode(texture: source)
shadowNode.colorBlendFactor = 0.5 // near 1 makes following line more effective
shadowNode.color = SKColor.gray // makes for a darker shadow. White for "glow" shadow
let textureSize = source.size()
let doubleTextureSize = CGSize(width: textureSize.width * 2, height: textureSize.height * 2)
let framer = SKSpriteNode(color: UIColor.clear, size: doubleTextureSize)
framer.addChild(shadowNode)
let blurAmount = 10
let filter = CIFilter(name: "CIGaussianBlur")
filter?.setValue(blurAmount, forKey: kCIInputRadiusKey)
let fxNode = SKEffectNode()
fxNode.filter = filter
fxNode.blendMode = .alpha
fxNode.addChild(framer)
fxNode.shouldRasterize = true
let tex = view.texture(from: fxNode) // ‘view’ refers to the magic first line
let shadow = SKSpriteNode(texture: tex) //WHOOPEE!!! TEXTURE!!!
shadow.colorBlendFactor = 0.5
shadow.color = rgb
shadow.alpha = a
shadow.zPosition = -1
return shadow
}
}
Inside anywhere you can access the Sprite you want to make a shadow or glow texture for:
shadowSprite = Factory.makeShadow(from: button, rgb: myColor, a: 0.33)
shadowSprite.position = CGPoint(x: self.frame.midX, y: self.frame.midY - 5)
addChild(shadowSprite)
button is a texture of the button to be given a shadow. a: is an alpha setting (actually a transparency level, 0.0 to 1.0, where 1.0 is fully opaque); the lower this is, the lighter the shadow will be.
The positioning serves to drop the shadow slightly below the button so it looks like light is coming from the top, casting shadows down and onto the background.

SceneKit: Make blocks more lifelike or 3D-like

The code below is used to create a scene and create blocks in SceneKit. The blocks come out looking flat and not "3D enough" according to our users. Screenshots 1-2 show our app.
Screenshots 3-5 show what users expect the blocks to look like, that is more 3D-like.
After speaking to different people, there are different opinions about how to render blocks that look more like screenshots 3-5: some say to use ambient occlusion, others voxel lighting, and some spot lighting with shadows, or directional lighting.
We previously tried adding omni lighting, but that didn't work so it was removed. As you can see in the code, we also experimented with an ambient light node but that also didn't yield the right results.
What is the best way to render our blocks and achieve a comparable look to screenshots 3-5?
Note: we understand the code is not optimized for performance, i.e., that polygons are shown that should not be shown. That is okay. The focus is not on performance but rather on achieving more 3D-like rendering. You can assume some hard limit on nodes, like no more than 1K or 10K in a scene.
Code:
func createScene() {
// Set scene view
let scene = SCNScene()
sceneView.jitteringEnabled = true
sceneView.scene = scene
// Add camera node
sceneView.pointOfView = cameraNode
// Make delegate to capture screenshots
sceneView.delegate = self
// Set ambient lighting
let ambientLightNode = SCNNode()
ambientLightNode.light = SCNLight()
ambientLightNode.light!.type = SCNLightTypeAmbient
ambientLightNode.light!.color = UIColor(white: 0.50, alpha: 1.0)
//scene.rootNode.addChildNode(ambientLightNode)
//sceneView.autoenablesDefaultLighting = true
// Set floor
setFloor()
// Set sky
setSky()
// Set initial position for user node
userNode.position = SCNVector3(x: 0, y: Float(CameraMinY), z: Float(CameraZoom))
// Add user node
scene.rootNode.addChildNode(userNode)
// Add camera to user node
// zNear fixes white triangle bug while zFar fixes white line bug
cameraNode.camera = SCNCamera()
cameraNode.camera!.zNear = Double(0.1)
cameraNode.camera!.zFar = Double(Int.max)
cameraNode.position = SCNVector3(x: 0, y: 0, z: 0) //EB: Add some offset to represent the head
userNode.addChildNode(cameraNode)
}
private func setFloor() {
// Create floor geometry
let floorImage = UIImage(named: "FloorBG")!
let floor = SCNFloor()
floor.reflectionFalloffEnd = 0
floor.reflectivity = 0
floor.firstMaterial!.diffuse.contents = floorImage
floor.firstMaterial!.diffuse.contentsTransform = SCNMatrix4MakeScale(Float(floorImage.size.width)/2, Float(floorImage.size.height)/2, 1)
floor.firstMaterial!.locksAmbientWithDiffuse = true
floor.firstMaterial!.diffuse.wrapS = .Repeat
floor.firstMaterial!.diffuse.wrapT = .Repeat
floor.firstMaterial!.diffuse.mipFilter = .Linear
// Set node & physics
// -- Must set y-position to 0.5 so blocks are flush with floor
floorLayer = SCNNode(geometry: floor)
floorLayer.position.y = -0.5
let floorShape = SCNPhysicsShape(geometry: floor, options: nil)
let floorBody = SCNPhysicsBody(type: .Static, shape: floorShape)
floorLayer.physicsBody = floorBody
floorLayer.physicsBody!.restitution = 1.0
// Add to scene
sceneView.scene!.rootNode.addChildNode(floorLayer)
}
private func setSky() {
// Create sky geometry
let sky = SCNFloor()
sky.reflectionFalloffEnd = 0
sky.reflectivity = 0
sky.firstMaterial!.diffuse.contents = SkyColor
sky.firstMaterial!.doubleSided = true
sky.firstMaterial!.locksAmbientWithDiffuse = true
sky.firstMaterial!.diffuse.wrapS = .Repeat
sky.firstMaterial!.diffuse.wrapT = .Repeat
sky.firstMaterial!.diffuse.mipFilter = .Linear
sky.firstMaterial!.diffuse.contentsTransform = SCNMatrix4MakeScale(Float(2), Float(2), 1);
// Set node & physics
skyLayer = SCNNode(geometry: sky)
let skyShape = SCNPhysicsShape(geometry: sky, options: nil)
let skyBody = SCNPhysicsBody(type: .Static, shape: skyShape)
skyLayer.physicsBody = skyBody
skyLayer.physicsBody!.restitution = 1.0
// Set position
skyLayer.position = SCNVector3(0, SkyPosY, 0)
// Set fog
/*sceneView.scene?.fogEndDistance = 60
sceneView.scene?.fogStartDistance = 50
sceneView.scene?.fogDensityExponent = 1.0
sceneView.scene?.fogColor = SkyColor */
// Add to scene
sceneView.scene!.rootNode.addChildNode(skyLayer)
}
func createBlock(position: SCNVector3, animated: Bool) {
...
// Create box geometry
let box = SCNBox(width: 1.0, height: 1.0, length: 1.0, chamferRadius: 0.0)
box.firstMaterial!.diffuse.contents = curStyle.getContents() // "curStyle.getContents()" either returns UIColor or UIImage
box.firstMaterial!.specular.contents = UIColor.whiteColor()
// Add new block
let newBlock = SCNNode(geometry: box)
newBlock.position = position
blockLayer.addChildNode(newBlock)
}
Screenshots 1-2 (our app):
Screenshots 3-5 (ideal visual representation of blocks):
I still think there are a few easy things you can do that will make a big difference to how your scene is rendered. Apologies for not using your code; this example is something I had lying around.
Right now your scene is only lit by an ambient light.
let aLight = SCNLight()
aLight.type = SCNLightTypeAmbient
aLight.color = UIColor(red: 0.2, green: 0.2, blue: 0.2, alpha: 1.0)
let aLightNode = SCNNode()
aLightNode.light = aLight
scene.rootNode.addChildNode(aLightNode)
If I use only this light in my scene I see the following. Note how all faces are lit the same irrespective of the direction they face. Some games do pull off this aesthetic very well.
The following block of code adds a directional light to this scene. The transformation applied to this light won't be valid for your scene; it's important to orient the light according to where you want the light coming from.
let dLight = SCNLight()
dLight.type = SCNLightTypeDirectional
dLight.color = UIColor(red: 0.6, green: 0.6, blue: 0.6, alpha: 1.0)
let dLightNode = SCNNode()
dLightNode.light = dLight
var dLightTransform = SCNMatrix4Identity
dLightTransform = SCNMatrix4Rotate(dLightTransform, -90 * Float(M_PI)/180, 1, 0, 0)
dLightTransform = SCNMatrix4Rotate(dLightTransform, 37 * Float(M_PI)/180, 0, 0, 1)
dLightTransform = SCNMatrix4Rotate(dLightTransform, -20 * Float(M_PI)/180, 0, 1, 0)
dLightNode.transform = dLightTransform
scene.rootNode.addChildNode(dLightNode)
Now we have shading on each of the faces based on their angle relative to the direction of the light.
Currently SceneKit only supports shadows if you're using SCNLightTypeSpot (but see the update below). Using a spotlight means we need to both orient it (as with the directional light) and position it. I use this as a replacement for the directional light.
let sLight = SCNLight()
sLight.castsShadow = true
sLight.type = SCNLightTypeSpot
sLight.zNear = 50
sLight.zFar = 120
sLight.spotInnerAngle = 60
sLight.spotOuterAngle = 90
let sLightNode = SCNNode()
sLightNode.light = sLight
var sLightTransform = SCNMatrix4Identity
sLightTransform = SCNMatrix4Rotate(sLightTransform, -90 * Float(M_PI)/180, 1, 0, 0)
sLightTransform = SCNMatrix4Rotate(sLightTransform, 65 * Float(M_PI)/180, 0, 0, 1)
sLightTransform = SCNMatrix4Rotate(sLightTransform, -20 * Float(M_PI)/180, 0, 1, 0)
sLightTransform = SCNMatrix4Translate(sLightTransform, -20, 50, -10)
sLightNode.transform = sLightTransform
scene.rootNode.addChildNode(sLightNode)
In the above code we first tell the spotlight to cast a shadow; by default, all nodes in your scene will then cast a shadow (this can be changed, as shown below). The zNear and zFar settings are also important and must be specified so that the nodes casting shadows are within this range of distance from the light source. Nodes outside this range will not cast a shadow.
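As a side note of mine on "(this can be changed)": shadow casting can be disabled per node through its castsShadow property. A one-line sketch using the floorLayer node from the question's code:
floorLayer.castsShadow = false // this node no longer casts shadows; it still receives them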
After shading/shadows there are a number of other effects you can apply easily. Depth of field effects are available for the camera. Fog is similarly easy to include.
scene.fogColor = UIColor.blackColor()
scene.fogStartDistance = 10
scene.fogEndDistance = 110
scenekitView.backgroundColor = UIColor(red: 0.2, green: 0.2, blue: 0.2, alpha: 1.0)
Update
It turns out you can get shadows from a directional light by modifying the spotlight code from above: change its type and set the orthographicScale. The default value for orthographicScale seems to be 1.0, which is obviously not suitable for scenes much larger than 1.
let dLight = SCNLight()
dLight.castsShadow = true
dLight.type = SCNLightTypeDirectional
dLight.zNear = 50
dLight.zFar = 120
dLight.orthographicScale = 30
let dLightNode = SCNNode()
dLightNode.light = dLight
var dLightTransform = SCNMatrix4Identity
dLightTransform = SCNMatrix4Rotate(dLightTransform, -90 * Float(M_PI)/180, 1, 0, 0)
dLightTransform = SCNMatrix4Rotate(dLightTransform, 65 * Float(M_PI)/180, 0, 0, 1)
dLightTransform = SCNMatrix4Rotate(dLightTransform, -20 * Float(M_PI)/180, 0, 1, 0)
dLightTransform = SCNMatrix4Translate(dLightTransform, -20, 50, -10)
dLightNode.transform = dLightTransform
scene.rootNode.addChildNode(dLightNode)
Produces the following image.
The scene size is 60x60, so in this case setting the orthographic scale to 30 produces shadows for the objects close to the light. The directional light shadows appear different to the spot light due to the difference in projections (orthographic vs perspective) used when rendering the shadow map.
Ambient occlusion calculations will give you the best results, but they are very expensive, particularly in a dynamically changing world, which it looks like this is.
There are several ways to cheat, and get the look of Ambient occlusion.
Here's one:
Place transparent, gradient shadow textures on geometry "placards" used to present the shadows at the places required (a rough sketch follows at the end of this answer). This will involve checking the geometry around the new block before determining which placards to place, and with which shadow texture. But this can be made to look VERY good, at a very low cost in terms of polygons, draw calls and fill rate. It's probably the cheapest way to do this and have it look good/great, and it can only really be done (with a good look) in a world of blocks. A more organic world rules this technique out. Please excuse the pun.
Or, another, similar approach: place additional textures onto/into the objects that receive the shadow, and blend these with the object's other textures/materials. This will be a bit fiddly, and I'm not an expert on the powers of materials in SceneKit, so I can't say for sure this is possible and/or easy in it.
Or: use a blend of textures with a vertex shader that adds a shadow from the edges that touch, or otherwise need, a shadow, based on your determining what and where you want shadows and to what extent. You will still need the placards trick on the floors/walls unless you add more vertices inside flat surfaces for the purpose of vertex shading for shadows.
Here's something I did for a friend's CD cover... it shows the power of shadows. It's orthographic, not true 3D perspective, but the shadows give the impression of depth and create the illusion of space:
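To make the first ("placard") idea more concrete, here is a rough sketch of mine in the same Swift 2-era style as the question's code. Assume it runs inside createBlock, so position and blockLayer are the question's own; "SoftShadow" is a hypothetical radial-gradient PNG with transparency:
// Shadow placard: a small unlit plane with a soft gradient texture,
// laid flat just above the floor under the new block.
let placard = SCNPlane(width: 1.4, height: 1.4)
placard.firstMaterial!.diffuse.contents = UIImage(named: "SoftShadow") // hypothetical asset
placard.firstMaterial!.lightingModelName = SCNLightingModelConstant // keep the fake shadow unlit
placard.firstMaterial!.writesToDepthBuffer = false
let placardNode = SCNNode(geometry: placard)
placardNode.eulerAngles.x = -Float(M_PI) / 2 // rotate the plane so it lies flat, facing up
placardNode.position = SCNVector3(position.x, -0.49, position.z) // just above the floor at y = -0.5
blockLayer.addChildNode(placardNode)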
All the answers above (or below) seem to be good ones (at the time of this writing). However,
what I use (just for setting up a simple scene) is one ambient light (which lights everything in all directions) to make things visible, and then one omnidirectional light positioned somewhere in the middle of the scene; the omni light can be raised up (along Y, I mean) to light the whole scene. The omni light gives the user a sense of shading, and the ambient light makes it more like sunlight.
for example:
Imagine sitting in a living room (like I am right now) while sunlight peers through the window to your right.
You can obviously see a shadowed area where the couch is not getting sunlight, yet you can still see the details of what is in the shadow.
Now, all of a sudden, your world gets rid of ambient light. BOOM! The shadow is now pitch black; you can no longer see what is in the shadow.
Or say the ambient light comes back again (what a relief), but all of a sudden the omni light stops working (probably my fault :( ). Everything is now lit the same: no shadow, no difference. If you lay a paper on the table and look at it from above, there is no shadow, so you think it is part of the table! In a world like this you rely on the contour of something in order to see it; you would have to look at the table from a side view to see the thickness of the paper.
Hope this helps (at least a little).
Note: ambient lighting gives a similar effect to an emissive material.
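A minimal sketch of the setup described above, in the same API style as the earlier answer (positions and colors are placeholders of mine):
// One ambient light so nothing is pitch black...
let ambient = SCNLight()
ambient.type = SCNLightTypeAmbient
ambient.color = UIColor(white: 0.4, alpha: 1.0)
let ambientNode = SCNNode()
ambientNode.light = ambient
scene.rootNode.addChildNode(ambientNode)
// ...plus one omni light raised above the middle of the scene for shading.
let omni = SCNLight()
omni.type = SCNLightTypeOmni
omni.color = UIColor(white: 0.75, alpha: 1.0)
let omniNode = SCNNode()
omniNode.light = omni
omniNode.position = SCNVector3(x: 0, y: 30, z: 0)
scene.rootNode.addChildNode(omniNode)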

How to add transparency with a shader in SceneKit?

I would like to have a transparency effect from an image; for now I'm just testing with a torus, but the shader does not seem to work with alpha. From what I understood from this thread (Using Blending Functions in Scenekit) and this wiki page about transparency (http://en.wikibooks.org/wiki/GLSL_Programming/GLUT/Transparency), glBlendFunc is replaced by #pragma transparent in SceneKit.
Would you know what is wrong with this code?
I created a new project with SceneKit, and I changed the ship mesh for a torus.
EDIT :
I am trying with a plane, but the image below does not appear on the plane; instead I get the result with the red and brownish boxes shown below.
My image with alpha :
The result (the image with alpha should replace the brownish color) :
let plane = SCNPlane(width: 2, height: 2)
var texture = SKTexture(imageNamed:"small")
texture.filteringMode = SKTextureFilteringMode.Nearest
plane.firstMaterial?.diffuse.contents = texture
let ship = SCNNode(geometry: plane) //SCNTorus(ringRadius: 1, pipeRadius: 0.5)
ship.position = SCNVector3(x: 0, y: 0, z: 15)
scene.rootNode.addChildNode(ship)
let myscale : CGFloat = 10
let box = SCNBox(width: myscale, height: myscale, length: myscale, chamferRadius: 0)
box.firstMaterial?.diffuse.contents = UIColor.redColor()
let theBox = SCNNode(geometry: box)
theBox.position = SCNVector3(x: 0, y: 0, z: 5)
scene.rootNode.addChildNode(theBox)
let scnView = self.view as SCNView
scnView.scene = scene
scnView.backgroundColor = UIColor.blackColor()
var shaders = NSMutableDictionary()
shaders[SCNShaderModifierEntryPointFragment] = String(contentsOfFile: NSBundle.mainBundle().pathForResource("test", ofType: "shader")!, encoding: NSUTF8StringEncoding, error: nil)
var material = SCNMaterial()
material.shaderModifiers = shaders
ship.geometry?.materials = [material]
The shader :
#pragma transparent
#pragma body
_output.color.rgba = vec4(0.0, 0.2, 0.0, 0.2);
SceneKit uses premultiplied alpha (the r, g and b fields should be multiplied by the desired a):
vec4(0.0, 0.2, 0.0, 0.2); // i.e. `vec4(0.0, 1.0, 0.0, 1.0) * alpha` with alpha = 0.2
I was struggling with this problem too. Finally I found out that to make '#pragma transparent' work, I had to add it to a shader entry point other than the one executing my transparency code.
For example, I added the transparency code to the surface shader and '#pragma transparent' to the geometry shader. The Apple API documentation also adds '#pragma transparent' to the geometry shader; I don't know whether that was intentional.
NSString *geometryScript = @""
"#pragma transparent";
NSString *surfaceScript = @""
//"#pragma transparent" // You must not put it together with the transparency code
"float a = 0.1;"
"_surface.diffuse = vec4(_surface.diffuse.rgb * a, a);";
// This works for the transparency code in surface shader too.
//NSString *fragmentScript = @""
//"#pragma transparent";
yourMaterial.shaderModifiers = @{SCNShaderModifierEntryPointGeometry: geometryScript,
SCNShaderModifierEntryPointSurface: surfaceScript};
This code works in iOS 11.2, Xcode 9.2.
This rule applies to SCNShaderModifierEntryPointFragment shader as well. Likewise, if you want to change transparency there, you can add '#pragma transparent' to the geometry shader or the surface shader. I haven't tested SCNShaderModifierEntryPointLightingModel shader.
If you don't add any '#pragma transparent' to a shader, a black background may be blended with the transparent pixels.
Adding transparency can be done quite easily in the SCNShadable surface or fragment entry point.
The SCNShaderModifierEntryPointSurface entry point version
#pragma transparent
#pragma body
_surface.diffuse.a = 0.5;
The SCNShaderModifierEntryPointFragment entry point version
#pragma transparent
#pragma body
_output.color.a = 0.5;
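And, to tie it together, a small sketch of mine for attaching the fragment version from Swift (assuming material is the SCNMaterial you want to fade):
let fragmentModifier = """
#pragma transparent
#pragma body
_output.color.a = 0.5;
"""
material.shaderModifiers = [.fragment: fragmentModifier]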
