I have an SCNLight of type SCNLightTypeDirectional. When the scene is rendered, the model casts shadows on itself, which is not what I expected. How do I exclude the model's shadows on itself?
Or how can I smooth the shadow edges? They look very unnatural now.
Here is the scenario:
Well, I found a simple way to achieve this, but it loses some material detail.
Change the material's lighting model to SCNLightingModelConstant and exclude the model from your SCNLight's lighting calculation.
1. Set the lighting model
SCNLightingModelConstant only considers ambient light for shading, so we need ambient lights to keep the model visible.
model.geometry.materials.firstObject.lightingModelName = SCNLightingModelConstant;
2. Set the category bit masks of the model and the light
model.categoryBitMask = 1;
directionalLight.categoryBitMask = ~1UL;
If the bitwise AND of the two categoryBitMask values is zero, the node is not considered in that light's illumination, so there are no self-shadows anymore. The shadows the model casts onto other geometry will still remain in the scene.
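In Swift, the same two steps look roughly like this (a minimal sketch; modelNode and light are placeholder names for the model's SCNNode and the directional SCNLight):

// Step 1: constant lighting model, so only ambient light shades the model.
modelNode.geometry?.firstMaterial?.lightingModel = .constant

// Step 2: put the model in a category the directional light does not illuminate.
modelNode.categoryBitMask = 1
light.categoryBitMask = ~1   // every category except the model's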
The Issue
I've set up a minimal SceneKit project with a scene that contains the default airplane with a transparent plane that acts as a shadow receiver. I've duplicated this setup so there are two airplanes and two transparent shadow planes.
There is a directional light that casts shadows and has its shadowMode property set to .deferred. When the two shadow planes overlap, the plane that is closer to the camera 'cuts out' the shadow on the plane that is further away from the camera.
I know this is due to the fact that the plane's material has its .writesToDepthBuffer property set to true. However, without this the deferred shadows don't work.
The Question
Is there a way to show shadows on multiple overlapping planes? I know I can use SCNFloor to show multiple shadows but I specifically want shadows on multiple planes with a different Y position. Think of a scenario in ARKit where multiple planes are detected.
The Code
I've set up a minimal project on GitHub here.
Making the Y values of both shadow planes close enough to each other will solve the cutoff issue.
In SceneKit this is regular behaviour for two different planes that receive shadow projections. To get robust shadows, use just one 3D object (a plane, or custom-shaped geometry if you need different floor levels) as a shadow catcher.
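For reference, here is a minimal sketch of such a single shadow-catcher plane, assuming iOS 11's colorBufferWriteMask is available and a shadow-casting light with shadowMode set to .deferred is already in the scene (the names and sizes are placeholders):

let catcherGeometry = SCNPlane(width: 10, height: 10)
let material = SCNMaterial()
material.lightingModel = .constant        // the catcher itself is not lit
material.writesToDepthBuffer = true       // required for deferred shadows
material.colorBufferWriteMask = []        // render the shadow only, not the plane
catcherGeometry.materials = [material]

let catcherNode = SCNNode(geometry: catcherGeometry)
catcherNode.eulerAngles.x = -.pi / 2      // lay the plane flat on the ground
scene.rootNode.addChildNode(catcherNode)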
If you have several 3D objects with the Writes depth option turned on, use the Rendering order property for each object. Nodes with greater rendering orders are rendered last. The default value of Rendering order is zero.
For instance:
geoNodeOne.renderingOrder = -1 /* Rendered first */
geoNodeTwo.renderingOrder = 50 /* Rendered last */
But in your case the Rendering order property is useless because one shadow-projected plane blocks the other.
To model custom-shaped geometry, use an Extrude tool in a 3D modelling app (like Maya or 3ds Max):
I have a Metal view that displays some textured quads. The textures are loaded from PNGs, so they are premultiplied. Some of the textures have transparent pixels.
When I enable blending and draw in the right order, the transparency works and you can see quads beneath other quads through the transparent parts of the textures. However, I'm having to calculate the right draw order by sorting, which is expensive and is slowing down my rendering a lot.
When I've tried to use a depth/stencil state and draw in any order, I can get the ordering working correctly using z position, but then the blending stops working. The transparent parts of the texture reveal the background color of the Metal scene rather than the quad below.
What am I doing wrong? Is there a way to get this working and could someone provide some example code?
The other option I see is to try and do the sorting on the GPU, which would be fine as the GPU frame time is significantly smaller than the CPU frame time. However, I'm also not sure how to do this.
Any help would be greatly appreciated. :)
Alpha blending is an order-dependent transparency technique. This means that the (semi-)transparent objects cannot be rendered in any arbitrary order as is the case for (more expensive) order-independent transparency techniques.
Make sure your transparent 2D objects (e.g., circle, rectangle, etc.) have different depth values. (This way you can define the draw ordering yourself. Otherwise the draw ordering depends on the implementation of the sorting algorithm and the initial ordering before sorting.)
Sort these 2D objects based on their depth value from back to front (see the sketch after this list).
Draw the 2D objects from back to front (painter's algorithm) using alpha blending. (Of course, your 2D objects need an alpha value < 1 to actually see some blending.)
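As a minimal sketch of steps 2 and 3, assuming a hypothetical Quad type that stores its own depth value (larger means further from the camera):

struct Quad {
    var depth: Float   // distance from the camera
}

var quads = [Quad(depth: 0.2), Quad(depth: 0.9), Quad(depth: 0.5)]
// Painter's algorithm: draw the furthest quads first.
quads.sort { $0.depth > $1.depth }   // resulting order: 0.9, 0.5, 0.2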
And you need to set up the pipelineStateDescriptor correctly:
// To have depth buffer.
pipelineStateDescriptor.depthAttachmentPixelFormat = .depth32Float
// To use transparency.
pipelineStateDescriptor.colorAttachments[0].isBlendingEnabled = true
pipelineStateDescriptor.colorAttachments[0].rgbBlendOperation = .add
pipelineStateDescriptor.colorAttachments[0].alphaBlendOperation = .add
pipelineStateDescriptor.colorAttachments[0].sourceRGBBlendFactor = .sourceAlpha
pipelineStateDescriptor.colorAttachments[0].sourceAlphaBlendFactor = .sourceAlpha
pipelineStateDescriptor.colorAttachments[0].destinationRGBBlendFactor = .oneMinusSourceAlpha
pipelineStateDescriptor.colorAttachments[0].destinationAlphaBlendFactor = .oneMinusSourceAlpha
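To keep the depth test active alongside blending, you also need a matching depth/stencil state; a minimal sketch, assuming device is your MTLDevice:

let depthDescriptor = MTLDepthStencilDescriptor()
depthDescriptor.depthCompareFunction = .lessEqual
depthDescriptor.isDepthWriteEnabled = true   // safe once quads are sorted back to front
let depthState = device.makeDepthStencilState(descriptor: depthDescriptor)
// Later, on the render command encoder:
// renderEncoder.setDepthStencilState(depthState)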
Hope this helps. From here
The goal is to simulate lighting similar to these images:
http://i.stack.imgur.com/4Kh0S.jpg
http://i.stack.imgur.com/LMePj.jpg
http://i.stack.imgur.com/mGfva.jpg
There is little documentation on SceneKit lighting and how different light types interact with each other (e.g., what happens if you add a spot light to a scene that already has an ambient light), so through painful trial and error, we have gotten this far:
As shown in the Scene Graph, there is an ambient light and a spot light. (The omni light and the directional light are hidden.) The shadows and lighting are pretty good inside the spot's cone, but everything beyond the cone of light is black.
Question 1: how do you make it so the area outside the spot's cone is not black? There is an ambient light in the scene (not the default one, one was explicitly added), so shouldn't that brighten the areas outside the cone?
Question 2: Ideally, the whole scene would be lit as if inside the cone while preserving the shadows. Is this possible? Moving the spot to a high Y value (e.g., 1000) lights up the whole scene, but the cool shadows vanish.
Question 3: In the screenshot below, enabling the omni light washes out the spot's cone. Is this expected behavior? How can you combine the lights so they don't wash each other out?
Screenshot 2 (enabling omni light washes out spot lighting):
You can add an additional light source of ambient type and low intensity to the scene.
Here is a Swift 4 example:
let light = SCNLight()
light.type = .ambient
light.intensity = 100   // low intensity; the default is 1000 lumens

let node = SCNNode()
node.light = light
self.scene.rootNode.addChildNode(node)
Would love help understanding directional lights and scene shadows in SceneKit.
The class reference on SCNLight says zFar represents the maximum distance between the light and a visible surface for casting shadows. It further suggests this value only applies to spot lights.
However, in the Xcode Scene Editor, under the Attributes Inspector, there is a field for Far Clipping. Changing this value affects shadows projected by a directional light as illustrated by the screenshots below.
The scenes below were produced by dragging a directional light into the scene, changing the X Euler angle to -60, and ticking the "Casts Shadows" box. The floor texture is taken from the WWDC Fox demo.
Is Far Clipping the same as zFar? If not, what's the difference?
Since directional lights ignore the position property, why does changing the Far Clipping value affect the shadows produced by a directional light?
The goal is to light the whole scene and project shadows on nodes as if the sun were at 3 PM on a cloudless afternoon. Is it possible to use a directional light to achieve this? So far, directional lights can achieve the look where the whole scene is lit, but they don't offer as much control over shadows as a spotlight.
Screenshot #1: Far Clipping value is 10.
Screenshot #2: Far Clipping value is 30.
Despite what Apple's documentation says, the position of a directional light is very important when it casts shadows. zNear and zFar are distances from the directional light position.
To remove the artifact you are seeing, you will need to increase zFar or move the directional light closer to the ground. The artifact you are seeing is caused by the shadowed part being further away from the directional light than zFar.
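A minimal sketch of that fix (the node names, distances, and angle are illustrative assumptions):

let light = SCNLight()
light.type = .directional
light.castsShadow = true
light.zNear = 1
light.zFar = 100   // must exceed the distance from the light node to the shadowed geometry

let lightNode = SCNNode()
lightNode.light = light
lightNode.position = SCNVector3(0, 10, 0)   // the position matters for shadow clipping
lightNode.eulerAngles.x = -.pi / 3          // roughly the -60° X Euler angle from the question
scene.rootNode.addChildNode(lightNode)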
What is the theory behind the Light Glow effect in After Effects?
I want to use GLSL to implement it. If I can at least get closer to the theory behind it, I should be able to replicate it.
I've recently been implementing something similar. My render pipeline looks something like this:
Render Scene to texture (full screen)
Filter scene ("bright pass") to isolate the high luminance, shiny bits
Down-sample (2) to a smaller texture (for performance), and do a horizontal Gaussian blur
Perform a vertical Gaussian blur on (3).
Blend output from (4) with the output from (1)
Display to screen.
With some parameter tweaking, you can get it looking pretty nice. Google things like "bright pass" (a filter that keeps only high-luminance pixels), Gaussian blur, FBOs (frame buffer objects), and so on. Effects like "bloom" and "HDR" also have a wealth of information about different ways of doing each of these things. I tried out about four different ways of doing Gaussian blur before settling on my current one.
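For the two blur passes, a separable blur only needs a 1D kernel. Here is a minimal Swift sketch of computing normalized Gaussian weights (the radius and sigma values are arbitrary):

import Foundation

func gaussianWeights(radius: Int, sigma: Double) -> [Double] {
    let raw = (-radius...radius).map { x in
        exp(-Double(x * x) / (2 * sigma * sigma))
    }
    let sum = raw.reduce(0, +)
    return raw.map { $0 / sum }   // normalize so the kernel sums to 1
}

// A 9-tap kernel: apply it along X in one pass, then along Y in the next.
let kernel = gaussianWeights(radius: 4, sigma: 2.0)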
Look at how to make shadow volumes; instead of stenciling out a shadow, you could run a multi-pass blur on the volume, set its material to a very emissive, additively blended shader, and I imagine you'll get a similar effect.
Alternatively, you could do the GPU Gems implementation:
I will answer my own question in case someone else gets to the same point. With full precision (actually 100% precision) I reproduced the exact After Effects glow. The way it works is:
Apply a gaussian blur to the original image.
Extract the luma of this blurred image
Like in After Effects, you have two colors (A and B). The secret is to make a gradient map between these colors, according to the desired "Color Looping". If you don't know, a gradient map is an interpolation between colors (A and B in this case). Following After Effects's vocabulary, you need to loop X times over the "Color Looping" you chose. That means, if you are using a Color Looping like A->B->A, it is considered one loop over your image (you can try this in Photoshop).
Take the luma you extracted in step 2 and use it as the parameter of your gradient map. In other words: luma = (0%, 50%, 100%) maps to colors (A, B, A) respectively; the mid points are interpolated.
Blend your image with the original image according to the desired "Glow Operation" (Add, Multiply, etc.)
This procedure works like After Effects for every single pixel. The other details of the Glow can easily be added on top of this basic procedure; things like "Glow Intensity", "Glow Threshold" and so on need to be calibrated in order to get the same results with the same parameters.
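A minimal per-pixel Swift sketch of steps 2 to 5 (the RGB type, the A->B->A single loop, and the Add operation are illustrative assumptions):

struct RGB { var r, g, b: Double }   // channel values in [0, 1]

// Step 2: extract the luma (Rec. 709 weights) of the blurred pixel.
func luma(_ c: RGB) -> Double {
    0.2126 * c.r + 0.7152 * c.g + 0.0722 * c.b
}

// Steps 3-4: map the luma through an A->B->A gradient (one color loop),
// so luma 0 -> A, 0.5 -> B, 1 -> A, interpolating linearly in between.
func gradientMap(_ t: Double, a: RGB, b: RGB) -> RGB {
    let s = t <= 0.5 ? t * 2 : (1 - t) * 2
    return RGB(r: a.r + (b.r - a.r) * s,
               g: a.g + (b.g - a.g) * s,
               b: a.b + (b.b - a.b) * s)
}

// Step 5: the "Add" glow operation, clamped to [0, 1].
func addBlend(_ x: RGB, _ y: RGB) -> RGB {
    RGB(r: min(x.r + y.r, 1), g: min(x.g + y.g, 1), b: min(x.b + y.b, 1))
}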