How to write a SceneKit shader modifier for a dissolve-in effect - ios

I'd like to build a dissolve-in effect for a SceneKit game. I've been looking into shader modifiers since they seem to be the most lightweight option, but I haven't had any luck replicating this effect:
Is it possible to use shader modifiers to create this effect?
How would you go about implementing one?

You can get pretty close to the intended effect with a fragment shader modifier. The basic approach is as follows:
Sample from a noise texture
If the noise sample exceeds a certain threshold (which I call "revealage"), discard the fragment, making it fully transparent
Otherwise, if the fragment is close to the edge, replace its color with your preferred edge color (or gradient)
Apply bloom to make the edges glow
Here's the shader modifier code for doing this:
#pragma arguments
float revealage;
texture2d<float, access::sample> noiseTexture;

#pragma transparent
#pragma body

const float edgeWidth = 0.02;
const float edgeBrightness = 2;
const float3 innerColor = float3(0.4, 0.8, 1);
const float3 outerColor = float3(0, 0.5, 1);
const float noiseScale = 3;

constexpr sampler noiseSampler(filter::linear, address::repeat);
float2 noiseCoords = noiseScale * _surface.ambientTexcoord;
float noiseValue = noiseTexture.sample(noiseSampler, noiseCoords).r;

if (noiseValue > revealage) {
    discard_fragment();
}

float edgeDist = revealage - noiseValue;
if (edgeDist < edgeWidth) {
    float t = edgeDist / edgeWidth;
    float3 edgeColor = edgeBrightness * mix(outerColor, innerColor, t);
    _output.color.rgb = edgeColor;
}
Notice that the revealage parameter is exposed as a material parameter, since you might want to animate it. The other constants, such as the edge width and noise scale, are internal to the shader and can be fine-tuned to get the desired effect with your content.
Different noise textures produce different dissolve effects, so you can experiment with those as well. I just used this multi-octave value noise image:
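If you don't have a noise image on hand, you can also generate one at runtime. Here's a minimal sketch using GameplayKit's Perlin noise (the parameter values and the 256x256 size are illustrative, not from the original answer):
import GameplayKit
import SpriteKit
import UIKit

// Sketch: generate a seamless, multi-octave Perlin noise texture at runtime.
let noiseSource = GKPerlinNoiseSource(frequency: 2.0, octaveCount: 4,
                                      persistence: 0.5, lacunarity: 2.0, seed: 1)
let noiseMap = GKNoiseMap(GKNoise(noiseSource),
                          size: vector_double2(1, 1),
                          origin: vector_double2(0, 0),
                          sampleCount: vector_int2(256, 256),
                          seamless: true)
let noiseImage = UIImage(cgImage: SKTexture(noiseMap: noiseMap).cgImage())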
Load the image as a UIImage or NSImage and set it on the material property that gets exposed as noiseTexture:
material.setValue(SCNMaterialProperty(contents: noiseImage), forKey: "noiseTexture")
You'll need to add bloom as a post-process to get that glowy, electric-wire effect. In SceneKit, this is as simple as enabling the HDR pipeline and setting some parameters:
let camera = SCNCamera()
camera.wantsHDR = true
camera.bloomThreshold = 0.8
camera.bloomIntensity = 2
camera.bloomBlurRadius = 16.0
camera.wantsExposureAdaptation = false
All of the numeric parameters will potentially need to be tuned to your content.
To keep things tidy, I prefer to store shader modifiers in their own text files (I named mine "dissolve.fragment.txt"). Here's how to load the modifier code and attach it to a material:
let modifierURL = Bundle.main.url(forResource: "dissolve.fragment", withExtension: "txt")!
let modifierString = try! String(contentsOf: modifierURL)
material.shaderModifiers = [
    SCNShaderModifierEntryPoint.fragment: modifierString
]
And finally, to animate the effect, you can use a CABasicAnimation wrapped in an SCNAnimation:
let revealAnimation = CABasicAnimation(keyPath: "revealage")
revealAnimation.timingFunction = CAMediaTimingFunction(name: .linear)
revealAnimation.duration = 2.5
revealAnimation.fromValue = 0.0
revealAnimation.toValue = 1.0
let scnRevealAnimation = SCNAnimation(caAnimation: revealAnimation)
material.addAnimation(scnRevealAnimation, forKey: "Reveal")
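One practical note (my addition, not from the original answer): like any Core Animation animation, this one reverts to the model value when it completes, so also set the final value on the material to keep the object fully revealed:
// Set the model value so the material stays revealed once the explicit
// 0 -> 1 animation finishes and is removed.
material.setValue(1.0 as Float, forKey: "revealage")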

Related

How to render a SceneKit shader at a lower resolution?

I'm adding some visual elements to my app with SceneKit shader modifiers like this:
// A SceneKit scene with orthographic projection
let shaderBundle = Bundle(for: Self.self)
let shaderUrl = shaderBundle.url(forResource: "MyShader.frag", withExtension: nil)!
let shaderString = try! String(contentsOf: shaderUrl)
let plane = SCNPlane(width: 512, height: 512) // 1024x1024 pixels on devices with x2 screen resolution
plane.firstMaterial!.shaderModifiers = [SCNShaderModifierEntryPoint.fragment: shaderString]
let planeNode = SCNNode(geometry: plane)
rootNode.addChildNode(planeNode)
The problem is slow performance: SceneKit painstakingly renders every single pixel of the plane that displays the shader. How do I decrease the rendering resolution of the shader while keeping the plane's size unchanged?
I've already tried making the plane smaller and applying an enlarging scale transform to planeNode, but to no avail: the rendition of the shader remained as highly detailed as before.
Using plane.firstMaterial!.diffuse.contentsTransform didn't help either (or maybe I was doing it wrong).
I know I could make the whole SCNView smaller and then apply an affine scale transform if the shader plane were the only node in the scene, but it's not: there are other (non-shader) nodes in the same scene, and I'd prefer to avoid altering their appearance in any way.
It seems I managed to solve it using a sort of "render to texture" approach: nesting a SceneKit scene inside a SpriteKit scene that is displayed by the top-level SceneKit scene.
In more detail, the SCNNode subclass below places a downscaled shader plane inside a SpriteKit SK3DNode, puts that SK3DNode inside a SpriteKit scene (an SKScene), and then uses that SKScene as the diffuse contents of an upscaled plane in the top-level SceneKit scene.
Strangely, to keep the native resolution I need to use scaleFactor*2, so to halve the rendering resolution (normally scale factor 0.5) I actually need to use scaleFactor = 1.
If anyone happens to know the reason for this strange behavior, or a workaround for it, please let me know in a comment.
import Foundation
import SceneKit
import SpriteKit

class ScaledResolutionFragmentShaderModifierPlaneNode: SCNNode {
    private static let nestedSCNSceneFrustumLength: CGFloat = 8

    // For shader parameter input
    let shaderPlaneMaterial: SCNMaterial

    // shaderModifier: the shader
    // planeSize: the size of the shader on the screen
    // scaleFactor: the scale to be used for the shader's rendering resolution; the lower, the faster
    init(shaderModifier: String, planeSize: CGSize, scaleFactor: CGFloat) {
        let scaledSize = CGSize(width: planeSize.width*scaleFactor, height: planeSize.height*scaleFactor)

        // Nested SceneKit scene with orthographic projection
        let nestedSCNScene = SCNScene()
        let camera = SCNCamera()
        camera.zFar = Double(Self.nestedSCNSceneFrustumLength)
        camera.usesOrthographicProjection = true
        camera.orthographicScale = Double(scaledSize.height/2)
        let cameraNode = SCNNode()
        cameraNode.camera = camera
        cameraNode.simdPosition = simd_float3(x: 0, y: 0, z: Float(Self.nestedSCNSceneFrustumLength/2))
        nestedSCNScene.rootNode.addChildNode(cameraNode)

        let shaderPlane = SCNPlane(width: scaledSize.width, height: scaledSize.height)
        shaderPlaneMaterial = shaderPlane.firstMaterial!
        shaderPlaneMaterial.shaderModifiers = [SCNShaderModifierEntryPoint.fragment: shaderModifier]
        let shaderPlaneNode = SCNNode(geometry: shaderPlane)
        nestedSCNScene.rootNode.addChildNode(shaderPlaneNode)

        // Intermediary SpriteKit scene
        let nestedSCNSceneSKNode = SK3DNode(viewportSize: scaledSize)
        nestedSCNSceneSKNode.scnScene = nestedSCNScene
        nestedSCNSceneSKNode.position = CGPoint(x: scaledSize.width/2, y: scaledSize.height/2)
        nestedSCNSceneSKNode.isPlaying = true

        let intermediarySKScene = SKScene(size: scaledSize)
        intermediarySKScene.backgroundColor = .clear
        intermediarySKScene.addChild(nestedSCNSceneSKNode)

        let intermediarySKScenePlane = SCNPlane(width: scaledSize.width, height: scaledSize.height)
        intermediarySKScenePlane.firstMaterial!.diffuse.contents = intermediarySKScene
        let intermediarySKScenePlaneNode = SCNNode(geometry: intermediarySKScenePlane)
        let invScaleFactor = 1/Float(scaleFactor)
        intermediarySKScenePlaneNode.simdScale = simd_float3(x: invScaleFactor, y: invScaleFactor, z: 1)

        super.init()
        addChildNode(intermediarySKScenePlaneNode)
    }

    required init?(coder: NSCoder) {
        fatalError()
    }
}
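For illustration, usage might look like this (a sketch; the shader string, plane size, and scale factor are placeholders):
// Render a hypothetical 512x512-point shader plane at half resolution.
let scaledNode = ScaledResolutionFragmentShaderModifierPlaneNode(
    shaderModifier: shaderString,
    planeSize: CGSize(width: 512, height: 512),
    scaleFactor: 0.5)
rootNode.addChildNode(scaledNode)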
In general, without a fairly new GPU feature called variable rasterization rate in Metal or variable rate shading elsewhere, you can’t make one object in a scene run its fragment shader at a different resolution than the rest of the scene.
For this case, depending on what your setup is, you might be able to use SCNTechnique to render the plane in a separate pass at a different resolution, then composite that back into your scene, in the same way some game engines render particles at a lower resolution to save on fill rate. Here’s an example.
First, you’ll need a Metal file in your project (if you already have one, just add to it), containing the following:
#include <SceneKit/scn_metal>

struct QuadVertexIn {
    float3 position [[ attribute(SCNVertexSemanticPosition) ]];
    float2 uv [[ attribute(SCNVertexSemanticTexcoord0) ]];
};

struct QuadVertexOut {
    float4 position [[ position ]];
    float2 uv;
};

vertex QuadVertexOut quadVertex(QuadVertexIn v [[ stage_in ]]) {
    QuadVertexOut o;
    o.position = float4(v.position.x, -v.position.y, 1, 1);
    o.uv = v.uv;
    return o;
}

constexpr sampler compositingSampler(coord::normalized, address::clamp_to_edge, filter::linear);

fragment half4 compositeFragment(QuadVertexOut v [[ stage_in ]],
                                 texture2d<half, access::sample> compositeInput [[ texture(0) ]]) {
    return compositeInput.sample(compositingSampler, v.uv);
}
Then, in your SceneKit code, you can set up and apply the technique like this:
let technique = SCNTechnique(dictionary: [
    "passes": [
        "drawLowResStuff": [
            "draw": "DRAW_SCENE",
            // only draw nodes that are in this category
            "includeCategoryMask": 2,
            "colorStates": ["clear": true, "clearColor": "0.0"],
            "outputs": ["color": "lowResStuff"]],
        "drawScene": [
            "draw": "DRAW_SCENE",
            // don’t draw nodes that are in the low-res-stuff category
            "excludeCategoryMask": 2,
            "colorStates": ["clear": true, "clearColor": "sceneBackground"],
            "outputs": ["color": "COLOR"]],
        "composite": [
            "draw": "DRAW_QUAD",
            "metalVertexShader": "quadVertex",
            "metalFragmentShader": "compositeFragment",
            // don’t clear what’s currently there (the rest of the scene)
            "colorStates": ["clear": false],
            // use alpha blending
            "blendStates": ["enable": true, "colorSrc": "srcAlpha", "colorDst": "oneMinusSrcAlpha"],
            // supply the lowResStuff render target to the fragment shader
            "inputs": ["compositeInput": "lowResStuff"],
            // draw into the main color render target
            "outputs": ["color": "COLOR"]]
    ],
    "sequence": ["drawLowResStuff", "drawScene", "composite"],
    "targets": ["lowResStuff": ["type": "color", "scaleFactor": 0.5]]
])
// mark the plane node as belonging to the category of stuff that gets drawn in the low-res pass
myPlaneNode.categoryBitMask = 2
// apply the technique to the scene view
mySceneView.technique = technique
With a test scene consisting of two spheres with the same texture, and the scaleFactor set to 0.25 instead of 0.5 to exaggerate the effect, the result looks like this.
If you’d prefer sharp pixelation instead of the blurrier resizing depicted above, change filter::linear to filter::nearest in the Metal code. Also, note that the low-res content being composited in is not taking into account the depth buffer, so if your plane is supposed to appear “behind” other objects then you’ll have to do some more work in the compositing function to fix that.

How can I get normal shading on a SKSpriteNode with a custom shader?

I've been doing some work in SpriteKit, and I can't seem to get custom shaders and the pseudo-3D lighting effects from a normal texture to work at the same time.
I have a pair of PNG textures, representing a shape with its basic coloring, and a normal map of the same image. If I create an SKSpriteNode, using those textures and add a light to the scene, I see the bumpiness and beveled edges I expect.
cactus = SKSpriteNode(imageNamed: "Saguaro.png")
cactus.normalTexture = SKTexture(imageNamed: "Saguaro_n")
cactus.position = sceneCenter
cactus.lightingBitMask = 1
light = SKLightNode()
light.position = CGPoint.zero
light.lightColor = UIColor.white
light.isEnabled = true
light.categoryBitMask = 1
light.ambientColor = UIColor.white
light.falloff = 0.3
If, however, I add a custom shader, I get just the flat colors from the texture image (code below).
// Assign a shader to the SpriteNode
// Shader loaded from a file with code below
cactus.shader = myShader
// Shader code
void main(void) {
    gl_FragColor = SKDefaultShading();
}
Is there something I can do in the shader to use the built-in lighting effects from the normal map? I'm not too familiar with writing custom fragment shaders, so perhaps there's something obvious I'm not doing.

How do you play a video with alpha channel using AVFoundation?

I have an AR application which uses SceneKit and imports a video onto the scene using AVPlayer, adding it to a SpriteKit scene as an SKVideoNode.
The video is visible as it is supposed to be, but the transparency in the video is not preserved.
Code as follows:
let spriteKitScene = SKScene(size: CGSize(width: self.sceneView.frame.width, height: self.sceneView.frame.height))
spriteKitScene.scaleMode = .aspectFit
guard let fileURL = Bundle.main.url(forResource: "Triple_Tap_1", withExtension: "mp4") else {
    return
}
let videoPlayer = AVPlayer(url: fileURL)
videoPlayer.actionAtItemEnd = .none
let videoSpriteKitNode = SKVideoNode(avPlayer: videoPlayer)
videoSpriteKitNode.position = CGPoint(x: spriteKitScene.size.width / 2.0, y: spriteKitScene.size.height / 2.0)
videoSpriteKitNode.size = spriteKitScene.size
videoSpriteKitNode.yScale = -1.0
videoSpriteKitNode.play()
spriteKitScene.backgroundColor = .clear
spriteKitScene.addChild(videoSpriteKitNode)
let background = SCNPlane(width: CGFloat(2), height: CGFloat(2))
background.firstMaterial?.diffuse.contents = spriteKitScene
let backgroundNode = SCNNode(geometry: background)
backgroundNode.position = position
backgroundNode.constraints = [SCNBillboardConstraint()]
backgroundNode.rotation.z = 0
self.sceneView.scene.rootNode.addChildNode(backgroundNode)
// Create a transform with a translation of 0.2 meters in front of the camera.
var translation = matrix_identity_float4x4
translation.columns.3.z = -0.2
let transform = simd_mul((self.session.currentFrame?.camera.transform)!, translation)
// Add a new anchor to the session.
let anchor = ARAnchor(transform: transform)
self.sceneView.session.add(anchor: anchor)
What could be the best way to implement the transparency of the Triple_Tap_1 video in this case?
I have gone through some Stack Overflow questions on this topic, and the only solution I found was a KittyBoom repository from around 2013, written in Objective-C.
I'm hoping that the community can reveal a better solution to this problem. The GPUImage library is not something I could get to work.
I've come up with two ways of making this possible. Both utilize surface shader modifiers. Detailed information on shader modifiers can be found in the Apple Developer documentation.
Here's an example project I've created.
1. Masking
You would need to create another video that represents a transparency mask. In that video, black = fully opaque, white = fully transparent (or any other way you would like to represent transparency; you would just need to tweak the surface shader accordingly).
Create an SKScene with this video, just like you do in the code you provided, and put it into material.transparent.contents (the same material whose diffuse contents hold the video).
let spriteKitOpaqueScene = SKScene(...)
let spriteKitMaskScene = SKScene(...)
... // creating SKVideoNodes and AVPlayers for each video etc
let material = SCNMaterial()
material.diffuse.contents = spriteKitOpaqueScene
material.transparent.contents = spriteKitMaskScene
let background = SCNPlane(...)
background.materials = [material]
Add a surface shader modifier to the material. It is going to "convert" the black color from the mask video (well, actually the red component, since we only need one color channel) into alpha.
let surfaceShader = "_surface.transparent.a = 1 - _surface.transparent.r;"
material.shaderModifiers = [ .surface: surfaceShader ]
That's it! Now the white color on the masking video is going to be transparent on the plane.
However, you will have to take extra care to synchronize these two videos, since the AVPlayers will probably get out of sync. Sadly, I didn't have time to address that in my example project (yet; I will get back to it when I have time). Look into this question for a possible solution.
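As a starting point (a sketch of my own, not from the original answer; opaquePlayer and maskPlayer are hypothetical names for the two AVPlayers), you can start both players at a shared host time so they begin in lockstep:
import AVFoundation
import CoreMedia

// Sketch: start both players at the same host clock time.
// Assumes both player items are already buffered and ready to play.
opaquePlayer.automaticallyWaitsToMinimizeStalling = false
maskPlayer.automaticallyWaitsToMinimizeStalling = false
let startTime = CMTimeAdd(CMClockGetTime(CMClockGetHostTimeClock()),
                          CMTimeMake(value: 1, timescale: 2)) // begin in 0.5 s
opaquePlayer.setRate(1.0, time: .invalid, atHostTime: startTime)
maskPlayer.setRate(1.0, time: .invalid, atHostTime: startTime)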
Pros:
No artifacts (if synchronized)
Precise
Cons:
Requires two videos instead of one
Requires synchronization of the AVPlayers
2. Chroma keying
You would need a video that has a vibrant background color representing the parts that should be transparent. Usually green or magenta is used.
Create an SKScene for this video like you normally would and put it into material.diffuse.contents.
Add a chroma key surface shader modifier which will cut out the color of your choice and make those areas transparent. I've borrowed this shader from GPUImage, and I don't really know how it actually works, but it seems to be explained in this answer.
let surfaceShader =
"""
uniform vec3 c_colorToReplace = vec3(0, 1, 0);
uniform float c_thresholdSensitivity = 0.05;
uniform float c_smoothing = 0.0;
#pragma transparent
#pragma body
vec3 textureColor = _surface.diffuse.rgb;
float maskY = 0.2989 * c_colorToReplace.r + 0.5866 * c_colorToReplace.g + 0.1145 * c_colorToReplace.b;
float maskCr = 0.7132 * (c_colorToReplace.r - maskY);
float maskCb = 0.5647 * (c_colorToReplace.b - maskY);
float Y = 0.2989 * textureColor.r + 0.5866 * textureColor.g + 0.1145 * textureColor.b;
float Cr = 0.7132 * (textureColor.r - Y);
float Cb = 0.5647 * (textureColor.b - Y);
float blendValue = smoothstep(c_thresholdSensitivity, c_thresholdSensitivity + c_smoothing, distance(vec2(Cr, Cb), vec2(maskCr, maskCb)));
float a = blendValue;
_surface.transparent.a = a;
"""
material.shaderModifiers = [ .surface: surfaceShader ]
To set the uniforms, use the setValue(_:forKey:) method on the material:
let vector = SCNVector3(x: 0, y: 1, z: 0) // represents the float RGB components
material.setValue(vector, forKey: "c_colorToReplace")
material.setValue(0.3 as Float, forKey: "c_smoothing")
material.setValue(0.1 as Float, forKey: "c_thresholdSensitivity")
The as Float part is important; otherwise Swift will cast the value as a Double and the shader will not be able to use it.
But to get precise masking from this you would have to really tinker with the c_smoothing and c_thresholdSensitivity uniforms. In my example project I ended up with a little green rim around the shape, but maybe I just didn't use the right values.
Pros:
only one video required
simple setup
Cons:
possible artifacts (green rim around the border)

How to add transparency with a shader in SceneKit?

I would like to create a transparency effect from an image; for now I am just testing with a torus, but the shader does not seem to work with alpha. From what I understood from this thread (Using Blending Functions in Scenekit) and this wiki page about transparency (http://en.wikibooks.org/wiki/GLSL_Programming/GLUT/Transparency), glBlendFunc is replaced by #pragma transparent in SceneKit.
Would you know what is wrong with this code?
I created a new project with SceneKit, and I changed the ship mesh for a torus.
EDIT:
I am trying with a plane, but the image below does not appear inside the plane; instead I get the result with the red and brownish boxes below.
My image with alpha:
The result (the image with alpha should replace the brownish color):
let plane = SCNPlane(width: 2, height: 2)
var texture = SKTexture(imageNamed:"small")
texture.filteringMode = SKTextureFilteringMode.Nearest
plane.firstMaterial?.diffuse.contents = texture
let ship = SCNNode(geometry: plane) //SCNTorus(ringRadius: 1, pipeRadius: 0.5)
ship.position = SCNVector3(x: 0, y: 0, z: 15)
scene.rootNode.addChildNode(ship)
let myscale : CGFloat = 10
let box = SCNBox(width: myscale, height: myscale, length: myscale, chamferRadius: 0)
box.firstMaterial?.diffuse.contents = UIColor.redColor()
let theBox = SCNNode(geometry: box)
theBox.position = SCNVector3(x: 0, y: 0, z: 5)
scene.rootNode.addChildNode(theBox)
let scnView = self.view as SCNView
scnView.scene = scene
scnView.backgroundColor = UIColor.blackColor()
var shaders = NSMutableDictionary()
shaders[SCNShaderModifierEntryPointFragment] = String(contentsOfFile: NSBundle.mainBundle().pathForResource("test", ofType: "shader")!, encoding: NSUTF8StringEncoding, error: nil)
var material = SCNMaterial()
material.shaderModifiers = shaders
ship.geometry?.materials = [material]
The shader :
#pragma transparent
#pragma body
_output.color.rgba = vec4(0.0, 0.2, 0.0, 0.2);
SceneKit uses premultiplied alpha (the r, g, and b components should be multiplied by the desired a):
vec4(0.0, 0.2, 0.0, 0.2); // `vec4(0.0, 1.0, 0.0, 1.0) * alpha` with alpha = 0.2
I was struggling with this problem too. I finally found out that to make '#pragma transparent' work, I had to add it to a different entry point than the one containing my transparency code.
For example, I added the transparency code to the surface shader and '#pragma transparent' to the geometry shader. Apple's API documentation also adds '#pragma transparent' to the geometry shader; I don't know whether that was intentional.
NSString *geometryScript = @""
    "#pragma transparent";
NSString *surfaceScript = @""
    //"#pragma transparent" // You must not put it together with the transparency code
    "float a = 0.1;"
    "_surface.diffuse = vec4(_surface.diffuse.rgb * a, a);";
// This works for the transparency code in the surface shader too.
//NSString *fragmentScript = @""
//    "#pragma transparent";
yourMaterial.shaderModifiers = @{SCNShaderModifierEntryPointGeometry: geometryScript,
                                 SCNShaderModifierEntryPointSurface: surfaceScript};
This code works in iOS 11.2, Xcode 9.2.
This rule applies to the SCNShaderModifierEntryPointFragment shader as well: if you want to change transparency there, you can add '#pragma transparent' to the geometry shader or the surface shader. I haven't tested the SCNShaderModifierEntryPointLightingModel shader.
If you don't add '#pragma transparent' to any shader, a black background may be blended with the transparent pixels.
Adding transparency can be done quite easily in the SCNShadable surface or fragment entry point.
The SCNShaderModifierEntryPointSurface entry point version
#pragma transparent
#pragma body
_surface.diffuse.a = 0.5;
The SCNShaderModifierEntryPointFragment entry point version
#pragma transparent
#pragma body
_output.color.a = 0.5;
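For completeness, here's a minimal sketch (my addition) of attaching the surface entry point version to a material in Swift:
let material = SCNMaterial()
material.shaderModifiers = [
    .surface: """
    #pragma transparent
    #pragma body
    _surface.diffuse.a = 0.5;
    """
]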

Punching alpha-filled holes into render-to-textures in Three.js

I am using render-to-texture to do postprocessing and then blending several 2D layers together.
Currently I am using a stencil mask to make "holes" in render-to-texture targets, leaving some areas transparent. However, this is a little cumbersome in my case. I'd rather ignore the stencil mask and just use normal polyfill operations to draw the holes.
What kinds of methods exist for rendering "fill to alpha 0.0" areas in the scene? I.e., the existing render-to-texture destination alpha value would be ignored and simply replaced with 0.0. I assume you can set OpenGL mode bits so that this can be done (how?), without the need for a custom fragment shader.
I already know how to set the depth mask to ignore mode, so I can redraw over the top of the existing polygons.
You just have to use the THREE.NoBlending blending mode in the material of the polygons you draw to make the holes. The material should be a ShaderMaterial so you can write the desired alpha, like here:
var r = 0.5;
var g = 0;
var b = 0;
var a = 0.8;

var material = new THREE.ShaderMaterial( {
    uniforms: {
        col: { type: "v4", value: new THREE.Vector4( r, g, b, a ) }
    },
    fragmentShader: "uniform vec4 col; void main() {\n\tgl_FragColor = col;\n}",
    side: THREE.DoubleSide
} );
material.transparent = true;
material.blending = THREE.NoBlending;
(Note that the DoubleSide parameter is not related to the problem but it is useful sometimes.)
