Fullscreen quad in pixel shader has screen coordinates? - DirectX

I have a 640x480 render target (it's the main backbuffer).
I'm passing a fullscreen quad to the vertex shader. The quad has coordinates between [-1, 1] for both X and Y, which means I only pass the coordinates through to the pixel shader with no calculation:
struct VSInput
{
float4 Position : SV_POSITION0;
};
struct VSOutput
{
float4 Position : SV_POSITION0;
};
VSOutput VS(VSInput input)
{
VSOutput output = (VSOutput)0;
output.Position = input.Position;
return output;
}
But in the pixel shader, the x and y coordinates of each fragment are in screen space (0 < x < 640 and 0 < y < 480).
Why is that? I always thought the coordinates would get interpolated on their way to the pixel shader and stay between -1 and 1, especially in this case, since I'm passing coordinates between -1 and 1 hardcoded in the vertex shader!
But the truth is, this pixel shader works:
float x = input.Position.x;
if(x < 200)
output.Diffuse = float4(1.0, 0.0, 0.0, 1.0);
else if( x > 400)
output.Diffuse = float4(0.0, 0.0, 1.0, 1.0);
else
output.Diffuse = float4(0.0, 1.0, 0.0, 1.0);
return output;
It outputs three color stripes in my rendering window, but if I change the values from screen space (the 200 and 400 in the code above) to [-1, 1] space and use something like if(x < 0.5), it won't work.
I already tried
float x = input.Position.x / input.Position.w;
because I read somewhere that this way I could get values between -1 and 1, but it doesn't work either.
Thanks in advance.

From MSDN, on the Semantics page, about SV_POSITION:
When used in a pixel shader, SV_Position describes the pixel location.
So you are seeing expected behavior.
The best solution is to pass screen space coordinates as an additional parameter. I like to use this "full-screen-triangle" vertex shader:
struct VSQuadOut {
float4 position : SV_Position;
float2 uv: TexCoord;
};
// outputs a full screen triangle with screen-space coordinates
// input: three empty vertices
VSQuadOut VSQuad( uint vertexID : SV_VertexID ){
VSQuadOut result;
result.uv = float2((vertexID << 1) & 2, vertexID & 2);
result.position = float4(result.uv * float2(2.0f, -2.0f) + float2(-1.0f, 1.0f), 0.0f, 1.0f);
return result;
}
(Original source: Full screen quad without vertex buffer?)
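If you just need normalized coordinates from SV_Position itself, you can also divide by the render-target size in the pixel shader. A minimal sketch, assuming the 640x480 target from the question (in practice you would pass the size in a constant buffer):
// SV_Position holds pixel coordinates in the pixel shader: x in [0,640), y in [0,480), +0.5 at pixel centers.
static const float2 targetSize = float2(640.0, 480.0); // assumed render-target size
float4 PS(VSOutput input) : SV_Target
{
    float2 uv  = input.Position.xy / targetSize;              // [0,1], y pointing down
    float2 ndc = uv * float2(2.0, -2.0) + float2(-1.0, 1.0);  // [-1,1], y pointing up
    // e.g. left half red, right half blue, now tested in [-1,1] coordinates
    return (ndc.x < 0.0) ? float4(1, 0, 0, 1) : float4(0, 0, 1, 1);
}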

Related

Shadow Mapping - Space Transformations are going bad

I am currently studying shadow mapping, and my biggest issue right now is the transformations between spaces. This is my current working theory, broken into steps.
Pass 1:
Get depth of pixel from camera, store in depth buffer
Get depth of pixel from light, store in another buffer
Pass 2:
Use texture coordinate to sample camera's depth buffer at current pixel
Convert that depth to a view-space position by multiplying the projection-space coordinate by the invProj matrix (also doing the perspective divide).
Take that view position and multiply by invV (camera's inverse view) to get a world space position
Multiply world space position by light's viewProjection matrix.
Perspective divide that projection-space coordinate, and manipulate into [0..1] to sample from light depth buffer.
Compare the current depth from the light with the closest (sampled) depth; if the current depth is greater than the closest depth, the pixel is in shadow.
Shader Code
Pass1:
PS_INPUT vs(VS_INPUT input) {
output.pos = mul(input.vPos, mvp);
output.cameraDepth = output.pos.zw;
..
float4 vPosInLight = mul(input.vPos, m);
vPosInLight = mul(vPosInLight, light.viewProj);
output.lightDepth = vPosInLight.zw;
}
PS_OUTPUT ps(PS_INPUT input){
float cameraDepth = input.cameraDepth.x / input.cameraDepth.y;
//Bundle cameraDepth in alpha channel of a normal map.
output.normal = float4(input.normal, cameraDepth);
//4 Lights in total -- although only 1 is active right now. Going to use r/g/b/a for each light depth.
output.lightDepths.r = input.lightDepth.x / input.lightDepth.y;
}
Pass 2 (Screen Quad):
float4 ps(PS_INPUT input) : SV_TARGET{
float4 pixelPosView = depthToViewSpace(input.texCoord);
..
float4 pixelPosWorld = mul(pixelPosView, invV);
float4 pixelPosLight = mul(pixelPosWorld, light.viewProj);
float shadow = shadowCalc(pixelPosLight);
//For testing / visualisation
return float4(shadow,shadow,shadow,1);
}
float4 depthToViewSpace(float2 xy) {
//Get pixel depth from camera by sampling current texcoord.
//Extract the alpha channel as this holds the depth value.
//Then, transform from [0..1] to [-1..1]
float z = (_normal.Sample(_sampler, xy).a) * 2 - 1;
float x = xy.x * 2 - 1;
float y = (1 - xy.y) * 2 - 1;
float4 vProjPos = float4(x, y, z, 1.0f);
float4 vPositionVS = mul(vProjPos, invP);
vPositionVS = float4(vPositionVS.xyz / vPositionVS.w,1);
return vPositionVS;
}
float shadowCalc(float4 pixelPosL) {
//Transform pixelPosLight from [-1..1] to [0..1]
float3 projCoords = (pixelPosL.xyz / pixelPosL.w) * 0.5 + 0.5;
float closestDepth = _lightDepths.Sample(_sampler, projCoords.xy).r;
float currentDepth = projCoords.z;
return currentDepth > closestDepth; //Supposed to have bias, but for now I just want shadows working haha
}
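The bias mentioned in the comment above would slot in right before the comparison. A minimal sketch (the 0.005 value is just an assumed starting point to tune, not taken from the original code):
float shadowCalcBiased(float4 pixelPosL) {
    float3 projCoords = (pixelPosL.xyz / pixelPosL.w) * 0.5 + 0.5;
    float closestDepth = _lightDepths.Sample(_sampler, projCoords.xy).r;
    float currentDepth = projCoords.z;
    float bias = 0.005; // assumed constant bias to reduce shadow acne; often slope-scaled instead
    return (currentDepth - bias) > closestDepth ? 1.0 : 0.0;
}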
CPP Matrices
// (Position, LookAtPos, UpDir)
auto lightView = XMMatrixLookAtLH(XMLoadFloat4(&pos4), XMVectorSet(0,0,0,1), XMVectorSet(0,1,0,0));
// (FOV, AspectRatio (1000/680), NEAR, FAR)
auto lightProj = XMMatrixPerspectiveFovLH(1.57f , 1.47f, 0.01f, 10.0f);
XMStoreFloat4x4(&_cLightBuffer.light.viewProj, XMMatrixTranspose(XMMatrixMultiply(lightView, lightProj)));
Current Outputs
White signifies that a shadow should be projected there. Black indicates no shadow.
CameraPos (0, 2.5, -2)
CameraLookAt (0, 0, 0)
CameraFOV (1.57)
CameraNear (0.01)
CameraFar (10.0)
LightPos (0, 2.5, -2)
LightLookAt (0, 0, 0)
LightFOV (1.57)
LightNear (0.01)
LightFar (10.0)
If I change the CameraPosition to be (0, 2.5, 2), basically just flipped on the Z axis, this is the result.
Obviously a shadow shouldn't change its projection depending on where the observer is, so I think I'm making a mistake with invV. But I really don't know for sure. I've debugged the light's viewProj matrix, and the values seem correct going from CPU to GPU. It's also entirely possible I've misunderstood some theory along the way, because this is quite a tricky technique for me.
Aha! Found my problem. It was a silly mistake: I was calculating the depth of pixels from each light, but storing them in a texture that was based on the view of the camera. The following image should explain my mistake better than I can with words.
For future reference, the solution I settled on was to scrap my idea of storing light depths in texture channels. Instead, I basically do a new pass for each light and bind a unique depth-stencil texture to render the geometry to. When I want to do the light calculations, I bind each of the depth textures to a shader resource slot and go from there. Obviously this doesn't scale well with many lights, but for my student project, where I'm only required to have 2 shadow casters, it suffices.
_context->DrawIndexed(indexCount, 0, 0); //Draw to regular render target
_sunlight->use(1, _context); //Use sunlight shader (basically just runs a Vertex Shader & Null Pixel shader so depth can be written to depth map)
_sunlight->bindDSVSetNullRenderTarget(_context);
_context->DrawIndexed(indexCount, 0, 0); //Draw to sunlight depth target
void bindDSVSetNullRenderTarget(ID3D11DeviceContext* ctx) {
    ID3D11RenderTargetView* nullrv = nullptr;
    // Bind only the depth-stencil view; with no render target bound, only depth is written.
    ctx->OMSetRenderTargets(1, &nullrv, _sunlightDepthStencilView);
}
//The purpose of setting a null render target before doing the draw call is
//that a draw call with only a depth target bound is much faster.
//(At least I believe so, from my reading online)
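On the shader side, each light's depth map then just occupies its own shader-resource slot in the lighting pass. A minimal sketch (the names and register indices are illustrative, not from the original code):
// One depth map per shadow-casting light, bound to consecutive SRV slots.
Texture2D _sunlightDepth : register(t4);
Texture2D _light2Depth   : register(t5);
SamplerState _sampler    : register(s0);

float sampleLightDepth(uint lightIndex, float2 uv)
{
    // Branch per light; fine for the two shadow casters described above.
    if (lightIndex == 0) return _sunlightDepth.Sample(_sampler, uv).r;
    return _light2Depth.Sample(_sampler, uv).r;
}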

SKSpriteNode nearest neighbor using SKShader fragment shader

I'm trying to add palette-swapping capabilities to my SpriteKit 2D pixel-art game, and it appears that when applying an SKShader, the filteringMode on the SKSpriteNode's texture is ignored.
As a result, I believe I need to apply nearest-neighbor sampling first, then do the palette-swapping logic second.
Based on some code I found on Shadertoy (linked below), I've made this attempt. It seems like the right direction, and the logic seems sound to me if the coordinates are normalized with (0.0, 0.0) at the bottom left and (1.0, 1.0) at the top right, but the result is coming out WAY too blocky.
https://www.shadertoy.com/view/MllSzX
My adaptation for a shader.fsh file:
void main() {
float texSize = 48.0;
vec2 pixel = v_tex_coord * texSize;
float c_onePixel = 1.0 / texSize;
pixel = (floor(pixel) / texSize);
gl_FragColor = texture2D(u_texture, pixel + vec2(c_onePixel/2.0));
}
How can I get nearest neighbor working on my SKShader before I move on to my palette swapping?
Not a perfect answer to my own question, but I managed to prevent this problem by setting PrefersOpenGL to YES in my Info.plist. However, I understand that it is preferred to use Metal when possible on iOS.
I ran into a similar issue. I wanted to create an outline around pixels while keeping the image pixelated, but the shader was blurring the existing pixels, making it look bad. I ended up implementing nearest-neighbor sampling, then checking the neighboring pixels after calculating the nearest-neighbor location to see whether the pixel had an alpha greater than 0. If it did, I'd fill in the pixel. Here is how I did it:
Outline.fsh:
vec2 nearestNeighbor(vec2 loc, vec2 size) {
vec2 onePixel = vec2(1.0, 1.0) / size;
vec2 coordinate = floor(loc * size) / size;
return coordinate + onePixel / 2.0;
}
void main() {
vec2 onePixel = vec2(1.0, 1.0) / a_sprite_size;
vec4 texture = texture2D(u_texture, nearestNeighbor(v_tex_coord, a_sprite_size)); // Nearest neighbor for the current pixel
if (texture.a == 0.0) {
// Pixel has no alpha, check if any neighboring pixels have a non 0 alpha
vec4 outlineColor = vec4(0.9, 0.9, 0.0, 1.0);
if (texture2D(u_texture, nearestNeighbor(v_tex_coord + vec2(onePixel.x, 0), a_sprite_size)).a > 0.0) {
// Right neighbor has an alpha > 0
gl_FragColor = outlineColor;
} else if (texture2D(u_texture, nearestNeighbor(v_tex_coord + vec2(-onePixel.x, 0), a_sprite_size)).a > 0.0) {
// Left neighbor has an alpha > 0
gl_FragColor = outlineColor;
} else if (texture2D(u_texture, nearestNeighbor(v_tex_coord + vec2(0, onePixel.y), a_sprite_size)).a > 0.0) {
// Top neighbor has an alpha > 0
gl_FragColor = outlineColor;
} else if (texture2D(u_texture, nearestNeighbor(v_tex_coord + vec2(0, -onePixel.y), a_sprite_size)).a > 0.0) {
// Bottom neighbor has an alpha > 0
gl_FragColor = outlineColor;
} else {
// No neighbors with an alpha > 0, don't change the color
gl_FragColor = texture;
}
} else {
// Pixel has an alpha > 0
gl_FragColor = texture;
}
}
You then need to add the shader to your sprite and set the defined attributes on both the sprite and the shader so the values defined in the shader can be used.
spriteNode.setValue(SKAttributeValue(vectorFloat2: vector_float2(Float(spriteNode.size.width), Float(spriteNode.size.height))), forAttribute: "a_sprite_size")
let shader = SKShader(fileNamed: "Outline.fsh")
shader.attributes = [
SKAttribute(name: "a_sprite_size", type: .vectorFloat2)
]
spriteNode.shader = shader
Hopefully this helps anyone else that has a similar issue!

DirectX + GLM Depth Reconstruction issues

I'm trying to port my engine to DirectX, and I'm currently having issues with depth reconstruction. It works perfectly in OpenGL (even though I use a bit of an expensive method). Every part besides the depth reconstruction works so far. I use GLM because it's a good math library that doesn't require the user to install any dependencies.
So basically I get my GLM matrices:
struct DefferedUBO {
glm::mat4 view;
glm::mat4 invProj;
glm::vec4 eyePos;
glm::vec4 resolution;
};
DefferedUBO deffUBOBuffer;
// ...
glm::mat4 projection = glm::perspective(engine.settings.fov, aspectRatio, 0.1f, 100.0f);
// Get My Camera
CTransform *transform = &engine.transformSystem.components[engine.entities[entityID].components[COMPONENT_TRANSFORM]];
// Get the View Matrix
glm::mat4 view = glm::lookAt(
transform->GetPosition(),
transform->GetPosition() + transform->GetForward(),
transform->GetUp()
);
deffUBOBuffer.invProj = glm::inverse(projection);
deffUBOBuffer.view = glm::inverse(view);
if (engine.settings.graphicsLanguage == GRAPHICS_DIRECTX) {
deffUBOBuffer.invProj = glm::transpose(deffUBOBuffer.invProj);
deffUBOBuffer.view = glm::transpose(deffUBOBuffer.view);
}
// Abstracted so I can use OGL, DX, VK, or even Metal when I get around to it.
deffUBO->UpdateUniformBuffer(&deffUBOBuffer);
deffUBO->Bind();
Then in HLSL, I simply use the following:
cbuffer MatrixInfoType {
matrix invView;
matrix invProj;
float4 eyePos;
float4 resolution;
};
float4 ViewPosFromDepth(float depth, float2 TexCoord) {
float z = depth; // * 2.0 - 1.0;
float4 clipSpacePosition = float4(TexCoord * 2.0 - 1.0, z, 1.0);
float4 viewSpacePosition = mul(invProj, clipSpacePosition);
viewSpacePosition /= viewSpacePosition.w;
return viewSpacePosition;
}
float3 WorldPosFromViewPos(float4 view) {
float4 worldSpacePosition = mul(invView, view);
return worldSpacePosition.xyz;
}
float3 WorldPosFromDepth(float depth, float2 TexCoord) {
return WorldPosFromViewPos(ViewPosFromDepth(depth, TexCoord));
}
// ...
// Sample the hardware depth buffer.
float depth = shaderTexture[3].Sample(SampleType[0], input.texCoord).r;
float3 position = WorldPosFromDepth(depth, input.texCoord).rgb;
Here's the result:
This just looks like random colors multiplied with the depth.
Ironically, when I remove the transposing, I get something closer to the truth, but not quite:
You're looking at Crytek Sponza. As you can see, the green area moves and rotates with the bottom of the camera. I have no idea at all why.
The correct version, along with Albedo, Specular, and Normals.
I fixed my problem at gamedev.net. There was a matrix majorness issue as well as a depth handling issue.
https://www.gamedev.net/forums/topic/692095-d3d-glm-depth-reconstruction-issues
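For reference, a rough sketch of what those two kinds of fix typically look like (my own reconstruction of the general idea, not the exact code from the linked thread): GLM's column-major data already matches HLSL's default column_major cbuffer packing, so uploading the matrices without the extra transpose lets mul(matrix, vector) keep GLM's M * v convention, and the depth side has to respect that D3D hardware depth lives in [0,1] (e.g. by building the projection with GLM's zero-to-one variants) rather than remapping it with * 2 - 1.
// Sketch: assumes the matrices are uploaded without the extra transpose,
// so mul(matrix, vector) matches GLM's M * v convention.
float4 ViewPosFromDepthD3D(float depth, float2 texCoord)
{
    // D3D hardware depth is already in [0,1]; only x/y need the [-1,1] remap,
    // and texture-space y is flipped relative to NDC y.
    float2 ndc = float2(texCoord.x * 2.0 - 1.0, (1.0 - texCoord.y) * 2.0 - 1.0);
    float4 clipPos = float4(ndc, depth, 1.0);
    float4 viewPos = mul(invProj, clipPos);
    return viewPos / viewPos.w;
}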

Linear Depth to World Position

I have the following fragment and vertex shaders.
HLSL code
// Vertex shader
//-----------------------------------------------------------------------------------
void mainVP(
float4 position : POSITION,
out float4 outPos : POSITION,
out float2 outDepth : TEXCOORD0,
uniform float4x4 worldViewProj,
uniform float4 texelOffsets,
uniform float4 depthRange) //Passed as float4(minDepth, maxDepth,depthRange,1 / depthRange)
{
outPos = mul(worldViewProj, position);
outPos.xy += texelOffsets.zw * outPos.w;
outDepth.x = (outPos.z - depthRange.x)*depthRange.w;//value [0..1]
outDepth.y = outPos.w;
}
// Fragment shader
void mainFP( float2 depth: TEXCOORD0, out float4 result : COLOR) {
float finalDepth = depth.x;
result = float4(finalDepth, finalDepth, finalDepth, 1);
}
This shader produces a depth map.
This depth map must then be used to reconstruct the world positions for the depth values. I have searched other posts, but none of them seem to store the depth using the same formula I am using. The only similar post is the following:
Reconstructing world position from linear depth
Therefore, I am having a hard time reconstructing the point using the x and y coordinates from the depth map and the corresponding depth.
I need some help in constructing the shader to get the world view position for a depth at particular texture coordinates.
It doesn't look like you're normalizing your depth. Try this instead. In your VS, do:
outDepth.xy = outPos.zw;
And in your PS to render the depth, you can do:
float finalDepth = depth.x / depth.y;
Here is a function to then extract the view-space position of a particular pixel from your depth texture. I'm assuming you're rendering a screen-aligned quad and performing your position extraction in the pixel shader.
// Function for converting depth to view-space position
// in deferred pixel shader pass. vTexCoord is a texture
// coordinate for a full-screen quad, such that x=0 is the
// left of the screen, and y=0 is the top of the screen.
float3 VSPositionFromDepth(float2 vTexCoord)
{
// Get the depth value for this pixel
float z = tex2D(DepthSampler, vTexCoord);
// Get x/w and y/w from the viewport position
float x = vTexCoord.x * 2 - 1;
float y = (1 - vTexCoord.y) * 2 - 1;
float4 vProjectedPos = float4(x, y, z, 1.0f);
// Transform by the inverse projection matrix
float4 vPositionVS = mul(vProjectedPos, g_matInvProjection);
// Divide by w to get the view-space position
return vPositionVS.xyz / vPositionVS.w;
}
For a more advanced approach that reduces the number of calculations involved, but requires using the view frustum and a special way of rendering the screen-aligned quad, see here.
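Roughly, that approach passes a view-space ray to the far-plane frustum corner with each quad vertex and lets the interpolator do most of the work. A minimal sketch, assuming the depth texture stores linear view-space depth normalized so that 1.0 is the far plane (names are illustrative):
// Each corner of the full-screen quad carries the view-space position of the
// matching far-plane frustum corner (computed on the CPU, passed as an extra
// vertex attribute, and interpolated here as vFrustumRay).
float3 VSPositionFromLinearDepth(float2 vTexCoord, float3 vFrustumRay)
{
    float linearDepth = tex2D(DepthSampler, vTexCoord).r; // linear view-space depth in [0,1]
    // Scaling the interpolated ray by the linear depth gives the view-space
    // position directly, with no inverse-projection multiply or divide.
    return vFrustumRay * linearDepth;
}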

HLSL Modify Tessellation shader to make equilateral triangles?

Details:
I'm in the process of procedural planet generation; so far I have done the dynamic LOD work, but my current software algorithm is very, very slow. I decided to do it using DX11's new tessellation features instead.
Currently my sphere is a subdivided icosahedron (20 sides, all equilateral triangles).
Back when I was subdividing using my software algorithm, one triangle would be split into four children across the midpoints of the parent, forming the Hyrule symbol each time... like this: http://puu.sh/1xFIx
As you can see, each subdivision created more and more equilateral triangles, i.e. each one was exactly the same shape.
But now that I am using the GPU to tessellate in HLSL, the result is definitely not what I am looking for: http://puu.sh/1xFx7
Questions:
Is there anything I can do in the hull and domain shaders to change the tessellation so that it subdivides into sets of equilateral triangles like the first image?
Should I be using the geometry shader for something like this? If so, would it be slower than the tessellator?
I tried using the tessellation shader, but I encountered a problem: the domain shader only passes the UV coordinate (SV_DomainLocation) and the input patch for positioning the vertices. When the domain location for a vertex is (0.3, 0.3, 0.3) (the center vertex), it is impossible to know the correct position, because you need information about the other vertices, or an iteration index (x, y), which is not provided by the domain shader stage.
Because of this problem I wrote the code in a geometry shader instead. This shader is very limited for tessellation, because the output stream cannot be larger than 1024 bytes (in shader model 5.0). I implemented the calculation of vertex positions using the UV (like SV_DomainLocation), but this only tessellates the triangles; you must use part of your own code to calculate the added positions in the centers of the triangles to create the precise final result.
This is the code for equilateral-triangle tessellation:
// required for array
#define MAX_ITERATIONS 5

// Assumed declarations (not shown in the original snippet): VS_OUT only needs a
// position here, and 'tess' is the desired subdivision count, e.g. from a constant buffer.
struct VS_OUT
{
    float4 pos : SV_Position;
};

cbuffer TessellationSettings
{
    int tess;
};
void DrawTriangle(float4 p0, float4 p1, float4 p2, inout TriangleStream<VS_OUT> stream)
{
VS_OUT v0;
v0.pos = p0;
stream.Append(v0);
VS_OUT v1;
v1.pos = p1;
stream.Append(v1);
VS_OUT v2;
v2.pos = p2;
stream.Append(v2);
stream.RestartStrip();
}
[maxvertexcount(128)] // directx rule: maxvertexcount * sizeof(VS_OUT) <= 1024
void gs(triangle VS_OUT input[3], inout TriangleStream<VS_OUT> stream)
{
int itc = min(tess, MAX_ITERATIONS);
float fitc = itc;
float4 past_pos[MAX_ITERATIONS];
float4 array_pass[MAX_ITERATIONS];
for (int pi = 0; pi < MAX_ITERATIONS; pi++)
{
past_pos[pi] = float4(0, 0, 0, 0);
array_pass[pi] = float4(0, 0, 0, 0);
}
// -------------------------------------
// Tessellation kernel for the control points
for (int x = 0; x <= itc; x++)
{
float4 last;
for (int y = 0; y <= x; y++)
{
float2 seg = float2(x / fitc, y / fitc);
float3 uv;
uv.x = 1 - seg.x;
uv.z = seg.y;
uv.y = 1 - (uv.x + uv.z);
// ---------------------------------------
// Domain Stage
// uv Domain Location
// x,y IterationIndex
float4 fpos = input[0].pos * uv.x;
fpos += input[1].pos * uv.y;
fpos += input[2].pos * uv.z;
if (x > 0 && y > 0)
{
DrawTriangle(past_pos[y - 1], last, fpos, stream);
if (y < x)
{
// add adjacent triangle
DrawTriangle(past_pos[y - 1], fpos, past_pos[y], stream);
}
}
array_pass[y] = fpos;
last = fpos;
}
for (int i = 0; i < MAX_ITERATIONS; i++)
{
past_pos[i] = array_pass[i];
}
}
}
