I hope you can help with this. I am currently using a deferred renderer and am trying to implement SSAO, which needs view-space normals, but my G-buffer currently stores world-space normals. I am trying to work out how to convert between the two but keep getting stuck.
This is the main part of my vertex shaders; instanceTransform is either the world matrix or the transpose of the per-instance matrix, since the same code is used for instanced models as well.
VertexShaderOutput VertexShaderFunctionCommon(VertexShaderInput input, float4x4 instanceTransform)
{
    VertexShaderOutput output = (VertexShaderOutput)0;

    float4x4 worldViewProjection = mul(instanceTransform, ViewProjection);
    output.Position = mul(float4(input.Position.xyz, 1), worldViewProjection);
    output.Depth = output.Position.zw;

    // Calculate the tangent-space-to-world-space matrix using the
    // world-space tangent, binormal, and normal as basis vectors.
    output.TangentToWorld[0] = normalize(mul(input.Tangent, instanceTransform));
    output.TangentToWorld[1] = normalize(mul(input.Binormal, instanceTransform));
    output.TangentToWorld[2] = normalize(mul(input.Normal, instanceTransform));

    return output;
}
The normal calculation in the pixel shader is:
// Read the normal from the normal map.
float3 normalFromMap = tex2D(normalSampler, input.TexCoord);
// Transform from [0,1] to [-1,1].
normalFromMap = 2.0f * normalFromMap - 1.0f;
// Transform into world space.
normalFromMap = mul(normalFromMap, input.TangentToWorld);
// Normalize the result.
normalFromMap = normalize(normalFromMap);
// Output the normal, in [0,1] space.
output.Normal.rgb = NormalEncode(normalFromMap);
Can you help please?
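For reference, a minimal sketch of the world-to-view conversion being asked about, assuming the camera's view matrix is available to the pixel shader as a constant (the name View is an assumption, not from the code above) and that it contains no non-uniform scale:

// Sketch only: rotate the world-space normal into view space.
// A normal is a direction, so only the rotation part (upper 3x3)
// of the world-to-view matrix is applied. If View contained
// non-uniform scale, the inverse-transpose would be needed instead.
float3 viewSpaceNormal = normalize(mul(normalFromMap, (float3x3)View));
output.Normal.rgb = NormalEncode(viewSpaceNormal);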
I'm using Metal to render a scene with a z-buffer, and now need to integrate this z-buffer into SceneKit's rendering. However, I can't figure out how to get SceneKit to use this depth correctly, and I'm not even 100% sure what format SceneKit expects its z-buffer to be in.
Based on this question, my understanding was that SceneKit uses a reversed logarithmic z-buffer in the range of 1 (near) to 0 (far). However, I can't get this working, and objects I draw with SceneKit don't properly respect the depth buffer: they are either always shown or always hidden.
First, here's how I generate the z-buffer texture in a Metal render pass:
struct FragmentOut {
    float4 color [[color(0)]];
    float depth [[depth(any)]];
};

fragment FragmentOut metalRenderFragment(const InOut in [[ stage_in ]]) {
    FragmentOut out;
    out.depth = 0; // 0 is far with a reverse z-buffer
    ...
    float cameraSpaceZ = ...; // Computed in shader
    // These constants are taken from SceneKit's camera and inlined here
    const float zNear = 0.0010000000474974513;
    const float zFar  = 1000.0;
    float logDepth = log(cameraSpaceZ / zNear) / log(zFar / zNear);
    out.depth = 1.0 - logDepth; // Reverse the depth for SceneKit
    return out;
}
Then, to integrate the depth buffer into SceneKit, I render a full-screen quad in SceneKit with an SCNProgram that uses the depth texture generated in the previous step:
fragment FragmentOut sceneKitFullScreenQuadFragment(const InOut in [[ stage_in ]],
                                                    depth2d<float, access::sample> depthTexture [[texture(1)]])
{
    constexpr sampler sampler(filter::linear);
    const float depth = depthTexture.sample(sampler, in.uv);
    return {
        .color = float4(0),
        .depth = depth,
    };
}
So two questions:
What format does SceneKit use for its z-buffer? Is it a reversed logarithmic z-buffer?
What am I doing wrong in generating the z-buffer values for SceneKit?
SceneKit uses a reversed logarithmic z-buffer. This post and this post show you how to get a normalized linear mapping into [0...1]; you need the opposite formula.
Also, you can toggle between reverseZ and directZ this way:
let sceneView = self.view as! SCNView
sceneView.usesReverseZ = true // default
Andy Jazz's answer helped but I still found the links confusing. Here's what ultimately worked for me (although there are possibly other ways to do this):
When generating the depth map (this would be inside the Metal shader in my original example), pass in SceneKit's projection transform matrix and use it to transform the depth value:
// In a metal shader generating the depth map
// The z distance from the camera, e.g. if the object
// at the current position is 5 units away, this would be 5.
const float z = ...;
// The camera points along the -z axis, so transform the -z position
// with SceneKit's projection matrix (you can get this from SCNCamera)
const float4 depthPos = (sceneKitState.projectionTransform * float4(0, 0, -z, 1));
// Then do perspective division to get the final depth value
out.depth = depthPos.z / depthPos.w;
Then, inside the SceneKit shader, simply write out the depth, taking usesReverseZ into account:
// In a SceneKit full-screen quad shader
const float depth = depthTexture.sample(sampler, in.uv);
return {
    .color = float4(0),
    .depth = 1.0 - depth,
};
❗️ The above assumes you are using sceneView.usesReverseZ = true (the default). If you are using usesReverseZ = false, simply do .depth = depth instead
I am currently studying shadow mapping, and my biggest issue right now is the transformations between spaces. This is my current working theory/steps.
Pass 1:
Get depth of pixel from camera, store in depth buffer
Get depth of pixel from light, store in another buffer
Pass 2:
Use texture coordinate to sample camera's depth buffer at current pixel
Convert that depth to a view-space position by multiplying the projection-space coordinate by the invProj matrix (also doing a perspective divide).
Take that view position and multiply by invV (camera's inverse view) to get a world space position
Multiply world space position by light's viewProjection matrix.
Perspective divide that projection-space coordinate, and manipulate into [0..1] to sample from light depth buffer.
Get current depth from light and closest (sampled) depth, if current depth > closest depth, it's in shadow.
Shader Code
Pass1:
PS_INPUT vs(VS_INPUT input) {
    PS_INPUT output = (PS_INPUT)0;
    output.pos = mul(input.vPos, mvp);
    output.cameraDepth = output.pos.zw;
    ..
    float4 vPosInLight = mul(input.vPos, m);
    vPosInLight = mul(vPosInLight, light.viewProj);
    output.lightDepth = vPosInLight.zw;
    return output;
}
PS_OUTPUT ps(PS_INPUT input) {
    PS_OUTPUT output = (PS_OUTPUT)0;
    float cameraDepth = input.cameraDepth.x / input.cameraDepth.y;
    // Bundle cameraDepth in the alpha channel of the normal map.
    output.normal = float4(input.normal, cameraDepth);
    // 4 lights in total -- although only 1 is active right now.
    // Going to use r/g/b/a for each light's depth.
    output.lightDepths.r = input.lightDepth.x / input.lightDepth.y;
    return output;
}
Pass 2 (Screen Quad):
float4 ps(PS_INPUT input) : SV_TARGET {
    float4 pixelPosView = depthToViewSpace(input.texCoord);
    ..
    float4 pixelPosWorld = mul(pixelPosView, invV);
    float4 pixelPosLight = mul(pixelPosWorld, light.viewProj);

    float shadow = shadowCalc(pixelPosLight);

    // For testing / visualisation
    return float4(shadow, shadow, shadow, 1);
}
float4 depthToViewSpace(float2 xy) {
    // Get the pixel's depth from the camera by sampling the current texcoord.
    // Extract the alpha channel, as this holds the depth value.
    // Then transform from [0..1] to [-1..1].
    float z = (_normal.Sample(_sampler, xy).a) * 2 - 1;
    float x = xy.x * 2 - 1;
    float y = (1 - xy.y) * 2 - 1;
    float4 vProjPos = float4(x, y, z, 1.0f);

    float4 vPositionVS = mul(vProjPos, invP);
    vPositionVS = float4(vPositionVS.xyz / vPositionVS.w, 1);
    return vPositionVS;
}
float shadowCalc(float4 pixelPosL) {
    // Transform pixelPosLight from [-1..1] to [0..1]
    float3 projCoords = (pixelPosL.xyz / pixelPosL.w) * 0.5 + 0.5;
    float closestDepth = _lightDepths.Sample(_sampler, projCoords.xy).r;
    float currentDepth = projCoords.z;
    return currentDepth > closestDepth; // Supposed to have a bias, but for now I just want shadows working haha
}
CPP Matrices
// (Position, LookAtPos, UpDir)
auto lightView = XMMatrixLookAtLH(XMLoadFloat4(&pos4), XMVectorSet(0,0,0,1), XMVectorSet(0,1,0,0));
// (FOV, AspectRatio (1000/680), NEAR, FAR)
auto lightProj = XMMatrixPerspectiveFovLH(1.57f , 1.47f, 0.01f, 10.0f);
XMStoreFloat4x4(&_cLightBuffer.light.viewProj, XMMatrixTranspose(XMMatrixMultiply(lightView, lightProj)));
Current Outputs
White signifies that a shadow should be projected there. Black indicates no shadow.
CameraPos (0, 2.5, -2)
CameraLookAt (0, 0, 0)
CameraFOV (1.57)
CameraNear (0.01)
CameraFar (10.0)
LightPos (0, 2.5, -2)
LightLookAt (0, 0, 0)
LightFOV (1.57)
LightNear (0.01)
LightFar (10.0)
If I change the CameraPosition to be (0, 2.5, 2), basically just flipped on the Z axis, this is the result.
Obviously a shadow shouldn't change its projection depending on where the observer is, so I think I'm making a mistake with the invV. But I really don't know for sure. I've debugged the light's projView matrix, and the values seem correct going from the CPU to the GPU. It's also entirely possible I've misunderstood some of the theory along the way, because this is quite a tricky technique for me.
Aha! Found my problem. It was a silly mistake: I was calculating the depth of pixels from each light, but storing them in a texture that was based on the view of the camera. The following image should explain my mistake better than I can with words.
For future reference, the solution I settled on was to scrap my idea of storing light depths in texture channels. Instead, I make a new pass for each light and bind a unique depth-stencil texture to render the geometry to. When I want to do the light calculations, I bind each of the depth textures to a shader resource slot and go from there. Obviously this doesn't scale well with many lights, but for my student project, where I'm only required to have 2 shadow casters, it suffices.
_context->DrawIndexed(indexCount, 0, 0); // Draw to the regular render target

_sunlight->use(1, _context); // Use the sunlight shader (basically just a vertex shader and a null pixel shader, so depth can be written to the depth map)
_sunlight->bindDSVSetNullRenderTarget(_context);

_context->DrawIndexed(indexCount, 0, 0); // Draw to the sunlight depth target

void bindDSVSetNullRenderTarget(ID3D11DeviceContext* ctx) {
    ID3D11RenderTargetView* nullrv = nullptr;
    ctx->OMSetRenderTargets(1, &nullrv, _sunlightDepthStencilView);
}

// The purpose of setting a null render target before doing the draw call
// is that a draw call with only a depth target bound is much faster.
// (At least I believe so, from my reading online.)
So I've been working on a DirectX 11 / HLSL rendering engine with the goal of creating a realistic planet which you can view both from the surface and at a planetary level. The planet is a normalized cube, which is procedurally generated using noise; as you move closer to the surface, a binary triangle tree splits until the desired detail level is reached. I got vertex normal calculations working correctly, and I recently started trying to implement normal mapping for my terrain textures. I have something that seems to work for the most part, but when the sun is pointing almost perpendicular to the ground (90 degrees), the terrain is way more lit up than it should be.
However, from the opposite angle (270 degrees), I am getting something that seems more reasonable, but may well be just as off.
The debug lines that are being rendered are the normal, tangent, and bitangents (which all appear to be correct and fit the topology of the terrain)
Here is my shader code:
Vertex shader:
PSIn mainvs(VSIn input)
{
    PSIn output;
    // Pass the pixel's world position, as opposed to its screen-space position, for lighting calculations
    output.WorldPos = mul(float4(input.Position, 1.f), Instances[input.InstanceID].WorldMatrix);
    output.Position = mul(output.WorldPos, CameraViewProjectionMatrix);
    output.TexCoord = input.TexCoord;
    output.CameraPos = CameraPosition;
    output.Normal = normalize(mul(input.Normal, (float3x3)Instances[input.InstanceID].WorldMatrix));

    float3 Tangent = normalize(mul(input.Tangent, (float3x3)Instances[input.InstanceID].WorldMatrix));
    float3 Bitangent = normalize(cross(output.Normal, Tangent));
    output.TBN = transpose(float3x3(Tangent, Bitangent, output.Normal));
    return output;
}
Pixel shader (Texcoord scalar is for smaller textures closer to planet surface):
float3 FetchNormalVector(float2 TexCoord)
{
    float3 Color = NormalTex.Sample(Samp, TexCoord * TexcoordScalar).xyz;
    Color *= 2.f;
    return normalize(float3(Color.x - 1.f, Color.y - 1.f, Color.z - 1.f));
}
float3 LightVector = -SunDirection;
float3 TexNormal = FetchNormalVector(input.TexCoord);
float3 WorldNormal = normalize(mul(input.TBN, TexNormal));
float nDotL = max(0.0, dot(WorldNormal, LightVector));
float4 SampleColor = float4(1.f, 1.f, 1.f, 1.f);
SampleColor *= nDotL;
return float4(SampleColor.xyz, 1.f);
Thanks in advance, and let me know if you have any insight as to what could be the issue here.
Edit 1: I tried it with a fixed blue value instead of sampling from the normal texture, which gives me the correct and same results as if I had not applied mapping (as expected). Still don't have a lead on what would be causing this issue.
Edit 2: I just noticed the strangest thing. At 0, 0, +Z, there are these hard seams that only appear with normal mapping enabled
It's a little hard to see, but it seems almost like there are multiple tangents associated with the same vertex (since I'm not using indexing yet), because the debug lines appear to split at the seams.
Here is my code that I'm using to generate the tangents (bitangents are calculated in the vertex shader using cross(Normal, Tangent))
v3& p0 = Chunk.Vertices[0].Position;
v3& p1 = Chunk.Vertices[1].Position;
v3& p2 = Chunk.Vertices[2].Position;
v2& uv0 = Chunk.Vertices[0].UV;
v2& uv1 = Chunk.Vertices[1].UV;
v2& uv2 = Chunk.Vertices[2].UV;
v3 deltaPos1 = p1 - p0;
v3 deltaPos2 = p2 - p0;
v2 deltaUV1 = uv1 - uv0;
v2 deltaUV2 = uv2 - uv0;
f32 r = 1.f / (deltaUV1.x * deltaUV2.y - deltaUV1.y * deltaUV2.x);
v3 Tangent = (deltaPos1 * deltaUV2.y - deltaPos2 * deltaUV1.y) * r;
Chunk.Vertices[0].Tangent = Normalize(Tangent - (Chunk.Vertices[0].Normal * DotProduct(Chunk.Vertices[0].Normal, Tangent)));
Chunk.Vertices[1].Tangent = Normalize(Tangent - (Chunk.Vertices[1].Normal * DotProduct(Chunk.Vertices[1].Normal, Tangent)));
Chunk.Vertices[2].Tangent = Normalize(Tangent - (Chunk.Vertices[2].Normal * DotProduct(Chunk.Vertices[2].Normal, Tangent)));
Also for reference, this is the main article I was looking at while implementing all of this: link
Edit 3:
Here is an image of the planet from a distance with normal mapping enabled:
And one from the same angle without:
I've been trying to understand how ESM (Exponential Shadow Mapping) works. I have regular shadow mapping in place (occluded / not occluded) in a deferred rendering pipeline and am trying to use ESM instead.
I've tried to adapt this from Cansin:
http://homepage.lnu.se/staff/tblma/Deferred Rendering in XNA 4.pdf
But as he does not use directional lights, I may have a misunderstanding. This is basically my approach to adapting it to directional lights:
Create ShadowMap:
float4 PS(VSO input) : COLOR0
{
    float depth = input.Position2D.z / input.Position2D.w;
    return exp(depth);
}
I am using an orthographic projection matrix (same near/far clip as the actual camera), as I do with regular shadow mapping. (Position2D is screen space; because it's a directional light, Z is always the distance between the surface and the light, or am I wrong?)
Get the shadow factor: basically like regular shadow mapping, I transform into light/screen space and get the depth from the shadow map:
float4 Position = 1;
Position.xy = input.ScreenPosition.xy;
Position.z = Depth; // Saved depth from the G-buffer
Position = mul(Position, InverseViewProjection);
Position /= Position.w;

float4 LightScreenPos = mul(Position, LightViewProjection);
LightScreenPos /= LightScreenPos.w;

float2 LUV = 0.5f * (float2(LightScreenPos.x, -LightScreenPos.y) + 1.0f);

float shadowDepth = tex2D(sampler_shadow, LUV).r;
float shadow = shadowDepth * exp(-10 * LightScreenPos.z);
Is my thinking fundamentally wrong?
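For reference, the standard ESM estimator this can be checked against: the map stores exp(C * d_occluder) and the lookup multiplies by exp(-C * d_receiver), with the same constant C on both sides, so the factor is exp(C * (d - z)). A minimal sketch (the identifiers and C = 10 are illustrative):

// Sketch of the standard ESM test (identifiers illustrative).
// The shadow map must store exp(C * occluderDepth) with the SAME
// constant C used at lookup time; storing exp(depth) in the map but
// using exp(-10 * z) at lookup mixes two different constants.
static const float C = 10.0f;

float esmShadowFactor(float2 lightUV, float receiverDepth)
{
    float expOccluder = tex2D(sampler_shadow, lightUV).r; // exp(C * d)
    // exp(C * d) * exp(-C * z) == exp(C * (d - z)); saturate clamps
    // fully lit results to 1.
    return saturate(expOccluder * exp(-C * receiverDepth));
}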
I have the following fragment and vertex shaders.
HLSL code
// Vertex shader
//-----------------------------------------------------------------------------------
void mainVP(
    float4 position : POSITION,
    out float4 outPos : POSITION,
    out float2 outDepth : TEXCOORD0,
    uniform float4x4 worldViewProj,
    uniform float4 texelOffsets,
    uniform float4 depthRange) // Passed as float4(minDepth, maxDepth, depthRange, 1 / depthRange)
{
    outPos = mul(worldViewProj, position);
    outPos.xy += texelOffsets.zw * outPos.w;
    outDepth.x = (outPos.z - depthRange.x) * depthRange.w; // value in [0..1]
    outDepth.y = outPos.w;
}
// Fragment shader
void mainFP(float2 depth : TEXCOORD0, out float4 result : COLOR)
{
    float finalDepth = depth.x;
    result = float4(finalDepth, finalDepth, finalDepth, 1);
}
This shader produces a depth map.
This depth map must then be used to reconstruct the world positions from the depth values. I have searched other posts, but none of them seem to store the depth using the same formula I am using. The only similar post is the following:
Reconstructing world position from linear depth
Therefore, I am having a hard time reconstructing the point using the x and y coordinates from the depth map and the corresponding depth.
I need some help in constructing the shader to get the world view position for a depth at particular texture coordinates.
It doesn't look like you're normalizing your depth. Try this instead. In your VS, do:
outDepth.xy = outPos.zw;
And in your PS to render the depth, you can do:
float finalDepth = depth.x / depth.y;
Here is a function to extract the view-space position of a particular pixel from your depth texture. I'm assuming you're rendering a screen-aligned quad and performing your position extraction in the pixel shader.
// Function for converting depth to view-space position
// in a deferred pixel shader pass. vTexCoord is a texture
// coordinate for a full-screen quad, such that x=0 is the
// left of the screen, and y=0 is the top of the screen.
float3 VSPositionFromDepth(float2 vTexCoord)
{
    // Get the depth value for this pixel
    float z = tex2D(DepthSampler, vTexCoord).r;
    // Get x/w and y/w from the viewport position
    float x = vTexCoord.x * 2 - 1;
    float y = (1 - vTexCoord.y) * 2 - 1;
    float4 vProjectedPos = float4(x, y, z, 1.0f);
    // Transform by the inverse projection matrix
    float4 vPositionVS = mul(vProjectedPos, g_matInvProjection);
    // Divide by w to get the view-space position
    return vPositionVS.xyz / vPositionVS.w;
}
For a more advanced approach that reduces the number of calculations involved, but requires using the view frustum and a special way of rendering the screen-aligned quad, see here.
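The gist of that approach, as a hedged sketch (identifiers are illustrative, not taken from the linked post): store linear view-space depth (viewZ / zFar) instead of post-projective z, pass each quad vertex the view-space position of the far-plane frustum corner behind it, and reconstruct the position in the pixel shader with a single multiply:

// Vertex shader: 'corner' is the view-space far-plane frustum corner
// fed per-vertex; the interpolator turns it into a per-pixel view ray.
void QuadVS(float3 pos : POSITION,
            float3 corner : TEXCOORD1,
            out float4 outPos : POSITION,
            out float2 outUV : TEXCOORD0,
            out float3 outRay : TEXCOORD1)
{
    outPos = float4(pos, 1);
    outUV = pos.xy * float2(0.5, -0.5) + 0.5; // clip-space xy to UV
    outRay = corner;
}

// Pixel shader: because ray.z equals zFar at the far plane, a point
// at linear depth t = viewZ / zFar is simply ray * t.
float3 VSPositionFromLinearDepth(float2 uv, float3 ray)
{
    float linearDepth = tex2D(DepthSampler, uv).r; // stores viewZ / zFar
    return ray * linearDepth;
}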