I'm using the RasterTek framework for terrain generation. I've got the terrain rendered from the vertex shader, but I don't know how to calculate the normals in the shader. There's a function call in one of the classes that generates the normals from the terrain, but this only works if the terrain was generated on the CPU. Here's the code for the vertex shader I'm using:
////////////////////////////////////////////////////////////////////////////////
// Filename: terrain.vs
////////////////////////////////////////////////////////////////////////////////
#include "terrain.fx"
/////////////
// GLOBALS //
/////////////
cbuffer MatrixBuffer
{
    matrix worldMatrix;
    matrix viewMatrix;
    matrix projectionMatrix;
};
//////////////
// TYPEDEFS //
//////////////
struct VertexInputType
{
    float4 position : POSITION;
    float3 normal : NORMAL;
};
struct PixelInputType
{
    float4 position : SV_POSITION;
    float3 normal : NORMAL;
};
////////////////////////////////////////////////////////////////////////////////
// Vertex Shader
////////////////////////////////////////////////////////////////////////////////
PixelInputType TerrainVertexShader(VertexInputType input)
{
    PixelInputType output;

    input.position.y = input.position.y + terrain(input.position.x, input.position.z);

    // Change the position vector to be 4 units for proper matrix calculations.
    input.position.w = 1.0f;

    // Calculate the position of the vertex against the world, view, and projection matrices.
    output.position = mul(input.position, worldMatrix);
    output.position = mul(output.position, viewMatrix);
    output.position = mul(output.position, projectionMatrix);

    // Calculate the normal vector against the world matrix only.
    output.normal = mul(input.normal, (float3x3)worldMatrix);

    // Normalize the normal vector.
    output.normal = normalize(output.normal);

    return output;
}
The big problem with generating normals in a shader is that you need knowledge of the surrounding vertices. This is something you can overcome with a geometry shader, but not with a vertex shader. A simple way to calculate a vertex normal is to compute the face normal of every polygon that touches the vertex (take the vectors formed by two edges and cross them for the face normal), then add them up and normalize. So if you don't have access to a geometry shader, the only real solution is to use the CPU. Even then, summed face normals are not the best way to calculate vertex normals; you may find it better still to use a more complex algorithm, which will bring problems of its own. So yeah, CPU or geometry shader ... those are, basically, your options.
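To make the face-normal idea concrete, here is a minimal geometry-shader sketch of the cross-product step described above (the GSInput/GSOutput structs and the worldPos field are assumptions for illustration, not part of the RasterTek framework). Note it produces flat per-face normals; averaging them per vertex would still need a second pass or CPU work.

struct GSInput
{
    float4 position : SV_POSITION;
    float3 worldPos : TEXCOORD0; // assumed: world-space position passed through from the VS
};

struct GSOutput
{
    float4 position : SV_POSITION;
    float3 normal   : NORMAL;
};

[maxvertexcount(3)]
void TerrainGeometryShader(triangle GSInput input[3], inout TriangleStream<GSOutput> stream)
{
    // Two edges of the triangle; their cross product is the face normal.
    float3 edge1 = input[1].worldPos - input[0].worldPos;
    float3 edge2 = input[2].worldPos - input[0].worldPos;
    float3 faceNormal = normalize(cross(edge1, edge2));

    // Re-emit the triangle with the flat face normal attached to each vertex.
    [unroll]
    for (int i = 0; i < 3; ++i)
    {
        GSOutput output;
        output.position = input[i].position;
        output.normal = faceNormal;
        stream.Append(output);
    }
}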
I'm using VS2015 and studying DX11.
I'll show you the code first.
cbuffer cbperobject {
    float4x4 gWorldViewProj;
};

struct VertexIn {
    float3 Pos : POSITION;
    float4 Color : COLOR;
};

struct VertexOut {
    float4 PosH : SV_POSITION;
    float4 Color : COLOR;
};

VertexOut main( VertexIn vin )
{
    VertexOut vOut;
    vOut.PosH = mul(float4(vin.Pos, 1.0f), gWorldViewProj);
    vOut.Color = vin.Color;
    return vOut;
}
This is my vertex shader code. I rather copied it from the internet.
HRESULT result;
D3D11_MAPPED_SUBRESOURCE mappedResource;
XMMATRIX* dataPtr;
UINT bufferNumber;

// Lock the constant buffer so it can be written to.
result = mD3dDContext->Map(contantBuff, 0, D3D11_MAP_WRITE_DISCARD, 0, &mappedResource);
if (FAILED(result))
{
    return false;
}

// Get a pointer to the data in the constant buffer.
dataPtr = (XMMATRIX*)mappedResource.pData;

// Copy the matrices into the constant buffer.
XMMATRIX world = XMLoadFloat4x4(&mWorld); // world transform of the vertices
XMMATRIX view = XMLoadFloat4x4(&mView);   // camera
XMMATRIX proj = XMLoadFloat4x4(&mProj);   // orthographic projection
XMMATRIX worldViewProj = world * view * proj;

// Transpose the matrix to prepare it for the shader.
worldViewProj = XMMatrixTranspose(worldViewProj);
*dataPtr = worldViewProj;

// Unlock the constant buffer.
mD3dDContext->Unmap(contantBuff, 0);

// Set the position of the constant buffer in the vertex shader.
bufferNumber = 0;

// Finally set the constant buffer in the vertex shader with the updated values.
mD3dDContext->VSSetConstantBuffers(bufferNumber, 1, &contantBuff);
return true;
This is my code for setting the constant buffer for the shader.
First, what is the difference between the POSITION and SV_POSITION semantics? And would you recommend a good HLSL tutorial book? I'm Korean and living in Korea; there are no good books here. I don't know why, but all the good ones are out of print. What a bad country for studying programming.
Second, why should I transpose my camera matrix (the world-view-projection matrix) before the CPU hands the data to the GPU? The math is vertex * matrix = transformed vertex, so why should I transpose it?
Well, POSITION is a semantic that tells the GPU the values are vertex positions in the input coordinate space, while SV_POSITION is a system-value semantic consumed by the rasterizer and pixel shader: it carries the clip-space position that determines where the pixel lands on screen (x and y end up in the range -1 to 1 after the perspective divide). Look at this https://msdn.microsoft.com/en-us/library/windows/desktop/bb509647(v=vs.85).aspx
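As a minimal illustration of the split (the struct names are made up for the example):

// POSITION marks raw vertex data supplied by the application;
// SV_POSITION marks the clip-space output consumed by the rasterizer.
struct VSIn
{
    float3 pos  : POSITION;    // object-space position from the vertex buffer
};

struct VSOut
{
    float4 posH : SV_POSITION; // clip-space position, required by the rasterizer
};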
Well, it seems you could use some linear algebra lessons, mate. The transpose here is not a mathematical transformation of your scene; it is a storage convention. DirectXMath builds matrices in row-major order for row vectors (vertex * matrix), while HLSL by default packs constant-buffer matrices in column-major order, so you transpose once on the CPU so that the GPU reads the same matrix. (The related fact that a matrix's transpose equals its inverse holds only for orthogonal matrices, such as pure rotations; it is not the reason for this transpose.) For the fundamentals you need linear algebra material first, and for the rendering API, be it OpenGL or DirectX (never mind, they are just APIs), you can grab any book or the online documentation, e.g. from amazon.com. Happy graphics coding, pal ;).
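A small sketch of what the convention means in practice (standalone helper functions, not from the question's code): the identity v * M == transpose(M) * v is why a single CPU-side transpose reconciles DirectXMath's row-vector math with HLSL's default column-major packing.

// The same transform written in both conventions; the results are identical.
float4 TransformAsRowVector(float4 v, float4x4 M)
{
    return mul(v, M);            // v treated as a row vector (DirectXMath style)
}

float4 TransformAsColumnVector(float4 v, float4x4 M)
{
    return mul(transpose(M), v); // v treated as a column vector
}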
I have the following fragment and vertex shaders.
HLSL code
// Vertex shader
//-----------------------------------------------------------------------------------
void mainVP(
    float4 position : POSITION,
    out float4 outPos : POSITION,
    out float2 outDepth : TEXCOORD0,
    uniform float4x4 worldViewProj,
    uniform float4 texelOffsets,
    uniform float4 depthRange) // passed as float4(minDepth, maxDepth, depthRange, 1 / depthRange)
{
    outPos = mul(worldViewProj, position);
    outPos.xy += texelOffsets.zw * outPos.w;
    outDepth.x = (outPos.z - depthRange.x) * depthRange.w; // value in [0..1]
    outDepth.y = outPos.w;
}

// Fragment shader
void mainFP(float2 depth : TEXCOORD0, out float4 result : COLOR)
{
    float finalDepth = depth.x;
    result = float4(finalDepth, finalDepth, finalDepth, 1);
}
This shader produces a depth map.
This depth map must then be used to reconstruct the world positions for the depth values. I have searched other posts but none of them seem to store the depth using the same formula I am using. The only similar post is the following
Reconstructing world position from linear depth
Therefore, I am having a hard time reconstructing the point using the x and y coordinates from the depth map and the corresponding depth.
I need some help constructing a shader that recovers the world/view-space position for the depth at particular texture coordinates.
It doesn't look like you're normalizing your depth. Try this instead. In your VS, do:
outDepth.xy = outPos.zw;
And in your PS to render the depth, you can do:
float finalDepth = depth.x / depth.y;
Here is a function to then extract the view-space position of a particular pixel from your depth texture. I'm assuming you're rendering a screen-aligned quad and performing your position extraction in the pixel shader.
// Function for converting depth to view-space position
// in deferred pixel shader pass. vTexCoord is a texture
// coordinate for a full-screen quad, such that x=0 is the
// left of the screen, and y=0 is the top of the screen.
float3 VSPositionFromDepth(float2 vTexCoord)
{
    // Get the depth value for this pixel
    float z = tex2D(DepthSampler, vTexCoord);
    // Get x/w and y/w from the viewport position
    float x = vTexCoord.x * 2 - 1;
    float y = (1 - vTexCoord.y) * 2 - 1;
    float4 vProjectedPos = float4(x, y, z, 1.0f);
    // Transform by the inverse projection matrix
    float4 vPositionVS = mul(vProjectedPos, g_matInvProjection);
    // Divide by w to get the view-space position
    return vPositionVS.xyz / vPositionVS.w;
}
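Since the question asks for world positions, one possible follow-up is to push the result one space further. This is a sketch assuming an application-supplied inverse view matrix named g_matInvView, which is not part of the code above:

// Hypothetical extension: bring the reconstructed view-space position
// into world space. g_matInvView is an assumed uniform holding the
// inverse of the view matrix.
float3 WSPositionFromDepth(float2 vTexCoord)
{
    float3 vPositionVS = VSPositionFromDepth(vTexCoord);
    return mul(float4(vPositionVS, 1.0f), g_matInvView).xyz;
}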
For a more advanced approach that reduces the number of calculations involved, but requires using the view frustum and a special way of rendering the screen-aligned quad, see here.
I just started messing around with shadow mapping. I understand the algorithm used. The thing is I cannot for the life of me figure out where I am messing up in the HLSL code. Here it is:
//These change
float4x4 worldViewProj;
float4x4 world;
texture tex;

//These remain constant
float4x4 lightSpace;
float4x4 lightViewProj;
float4x4 textureBias;
texture shadowMap;

sampler TexS = sampler_state
{
    Texture = <tex>;
    MinFilter = LINEAR;
    MagFilter = LINEAR;
    MipFilter = LINEAR;
    AddressU = WRAP;
    AddressV = WRAP;
};

sampler TexShadow = sampler_state
{
    Texture = <shadowMap>;
    MinFilter = LINEAR;
    MagFilter = LINEAR;
    MipFilter = LINEAR;
};

struct A2V
{
    float3 posL : POSITION0;
    float2 texCo : TEXCOORD0;
};

struct OutputVS
{
    float4 posH : POSITION0;
    float2 texCo : TEXCOORD0;
    float4 posW : TEXCOORD2;
};

//Vertex Shader Depth Pass
OutputVS DepthVS(A2V IN)
{
    OutputVS OUT = (OutputVS)0;
    //Get screen coordinates in light space for texture map
    OUT.posH = mul(float4(IN.posL, 1.f), lightViewProj);
    //Get the depth by performing a perspective divide on the projected coordinates
    OUT.posW.x = OUT.posH.z/OUT.posH.w;
    return OUT;
}

//Pixel shader depth Pass
float4 DepthPS(OutputVS IN) : COLOR
{
    //Texture only uses red channel, just store it there
    return float4(IN.posW.x, 0, 0, 1);
}

//VertexShader Draw Pass
OutputVS DrawVS(A2V IN)
{
    OutputVS OUT = (OutputVS)0;
    //Get the screen coordinates for this pixel
    OUT.posH = mul(float4(IN.posL, 1.f), worldViewProj);
    //Send texture coordinates through
    OUT.texCo = IN.texCo;
    //Pass its world coordinates through
    OUT.posW = mul(float4(IN.posL, 1.f), world);
    return OUT;
}

//PixelShader Draw Pass
float4 DrawPS(OutputVS IN) : COLOR
{
    //Get the pixels screen position in light space
    float4 texCoord = mul(IN.posW, lightViewProj);
    //Perform perspective divide to normalize coordinates [-1,1]
    //texCoord.x = texCoord.x/texCoord.w;
    //texCoord.y = texCoord.y/texCoord.w;
    //Multiply by texture bias to bring in range 0-1
    texCoord = mul(texCoord, textureBias);
    //Get corresponding depth value
    float prevDepth = tex2D(TexShadow, texCoord.xy);
    //Check if it is in shadow
    float4 posLight = mul(IN.posW, lightViewProj);
    float currDepth = posLight.z/posLight.w;
    if (currDepth >= prevDepth)
        return float4(0.f, 0.f, 0.f, 1.f);
    else
        return tex2D(TexS, IN.texCo);
}

//Effect info
technique ShadowMap
{
    pass p0
    {
        vertexShader = compile vs_2_0 DepthVS();
        pixelShader = compile ps_2_0 DepthPS();
    }

    pass p1
    {
        vertexShader = compile vs_2_0 DrawVS();
        pixelShader = compile ps_2_0 DrawPS();
    }
}
I have verified that all my matrices are correct and that the depth map is being drawn correctly. I rewrote all of the C++ that handles this code and made it neater, and I am still getting the same problem. I am not currently blending the shadows, just drawing them flat black until I can get them to draw correctly. The light uses an orthographic projection because it is a directional light. I don't have enough reputation points to embed images, but here are the URLs: Depth Map - http://i.imgur.com/T2nITid.png
Program output - http://i.imgur.com/ae3U3N0.png
Any help or insight would be greatly appreciated, as it's for a school project. Thanks!
The value you get from the depth buffer is a float in the range 0 to 1. As you probably already know, floating-point values have limited precision; the more precision a depth comparison demands, the sooner you end up with artifacts.
There are some things you can do about it. The easiest is to bring the near and far Z values of the projection matrix closer together, so that the depth buffer does not need as much precision to represent how far away an object is. I usually find that a range of about 1-200 gives a fairly accurate result.
Another easy fix is to increase the size of the texture you are drawing to, as more pixels let it represent the scene more accurately.
There are also plenty of more complex things that game engines do to reduce shadow-mapping artifacts, but you could write a book about those; if you really want to get into it, I would recommend you start with the blog.
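As an illustration of working around the limited precision (this is an addition, not part of the original answer): a common cheap fix is to bias the depth comparison in DrawPS so that quantization in the stored depth does not make a surface shadow itself. The bias value below is illustrative and scene-dependent:

//Illustrative bias; tune per scene. Too small brings back acne,
//too large makes shadows visibly detach from their casters.
static const float SHADOW_BIAS = 0.0025f;

//Returns 0 when the point is in shadow, 1 when it is lit; usable in
//place of the raw comparison in DrawPS above.
float ShadowTest(float currDepth, float prevDepth)
{
    return (currDepth - SHADOW_BIAS >= prevDepth) ? 0.f : 1.f;
}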
I generate a simple 2D grid with a triangle strip representing a water surface. The first generated vertex has position [0,0] and the last one has [1,1]. For my water simulation I need to store the current positions of the vertices in a texture and then sample those values from the texture in the next frame to get the previous state of the water surface.
So I created a texture with one pixel per vertex. For example, for a 10x10 vertex grid I use a texture with 10x10 pixels (one pixel for one vertex's data), and set this texture as the render target to render all the vertex data into it.
According to MSDN Coordinate Systems, if I use the current positions of the vertices in the grid (bottom-left at [0;0], top-right at [1;1]), the rendered texture looks like this:
So I need to do some conversion to NDC. I convert it in a vertex shader like this:
[vertex.x * 2 - 1; vertex.y * 2 - 1]
Consider this 3x3 grid:
Now the grid is stretched over the whole texture. Texture coordinates differ from NDC, and apparently I can use the original grid coordinates (before conversion) to sample the texture and get the previous values (positions) of the vertices.
Here is a sample of my vertex/pixel shader code:
This vertex shader converts the coordinates and sends them to the pixel shader with the SV_POSITION semantic (which describes the pixel location).
struct VertexInput
{
    float4 pos : POSITION;
    float2 tex : TEXCOORD;
};

struct VertexOutput
{
    float4 pos : SV_POSITION;
    float2 tex : TEXCOORD;
};

// converts coordinates from a [0,1] origin to [-1,1] NDC
float2 toNDC(float2 px)
{
    return float2(px.x * 2 - 1, px.y * 2 - 1);
}

VertexOutput main( VertexInput input )
{
    VertexOutput output;
    float2 ndc = toNDC(float2(input.pos.x, input.pos.z));
    output.pos = float4(ndc, 1, 1);
    output.tex = float2(input.pos.x, input.pos.z);
    return output;
}
And here's the pixel shader, which saves the values from the vertex shader at the pixel location defined by SV_POSITION.
struct PixelInput
{
    float4 pos : SV_POSITION;
    float2 tex : TEXCOORD;
};

float4 main(PixelInput input) : SV_TARGET
{
    return float4(input.tex.x, input.tex.y, 0, 1);
}
And we're finally getting to my problem! I use the graphics debugger in Visual Studio 2012, which lets me look at the rendered texture and its values. I would expect the value at pixel location [0,1] (in the texel coordinate system) to be [0,0] (or [0,0,0,1] to be precise, for the RGBA format), but it seems the final pixel's value is interpolated between 3 vertices, and I get a wrong value for the given vertex.
Screenshot from VS graphics debugger:
Rendered 3x3 texture ([0;1] location in texel coordinate system):
Values from vertex and pixel shader:
How can I render the exact value from the vertex shader to the texture for a given pixel?
I am pretty new to computer graphics and Direct3D 11, so please excuse my deficiencies.
I've been writing a program using Direct3D 11, and I have written a basic camera class which manipulates a view matrix. When I test the program, the scene does not move; instead, moving the camera has the effect of cutting off what is visible at an arbitrary location. I've attached some pictures to show what I mean.
I have left my pixel shader only outputting red pixels for now.
My vertex shader is based on the SDK example:
cbuffer cbChangeOnResize : register(b1)
{
    matrix Projection;
};

cbuffer cbChangesEveryFrame : register(b2)
{
    matrix View;
    matrix World;
};

struct VS_INPUT
{
    float4 Pos : POSITION;
    float2 Tex : TEXCOORD0;
};

struct PS_INPUT
{
    float4 Pos : SV_POSITION;
    float2 Tex : TEXCOORD0;
};

PS_INPUT TEX_VS(VS_INPUT input)
{
    PS_INPUT output = (PS_INPUT)0;
    output.Pos = mul(input.Pos, World);
    output.Pos = mul(output.Pos, View);
    output.Pos = mul(output.Pos, Projection);
    output.Tex = input.Tex;
    return output;
}
I have been scratching my head for a couple of days about this problem, but I don't know what is causing this, or even which pieces of code are relevant. PIX shows that the world, view and projection matrices appear to exist and are being applied, although it is evident that something is not right.
Thank you.
You can use the row_major modifier instead of transposing the matrices before passing them to the shader.
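For instance, a minimal sketch mirroring the cbuffer layout from the question:

// With row_major, the matrices can be written to the constant buffer
// exactly as DirectXMath produces them; no XMMatrixTranspose needed.
cbuffer cbChangesEveryFrame : register(b2)
{
    row_major matrix View;
    row_major matrix World;
};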
Mathematical fail: I had sent the view matrix instead of its transpose to the shader.