Unusual camera behaviour, DX11 - directx

I've been writing a program using DirectX 11, and I have written a basic camera class which manipulates a view matrix. When I test the program, the scene does not move; instead, moving the camera has the effect of cutting off what is visible at an arbitrary location. I've attached some pictures to show what I mean.
I have left my pixel shader outputting only red pixels for now.
My vertex shader is based on the SDK example:
cbuffer cbChangeOnResize : register(b1)
{
    matrix Projection;
};

cbuffer cbChangesEveryFrame : register(b2)
{
    matrix View;
    matrix World;
};

struct VS_INPUT
{
    float4 Pos : POSITION;
    float2 Tex : TEXCOORD0;
};

struct PS_INPUT
{
    float4 Pos : SV_POSITION;
    float2 Tex : TEXCOORD0;
};

PS_INPUT TEX_VS(VS_INPUT input)
{
    PS_INPUT output = (PS_INPUT)0;
    output.Pos = mul(input.Pos, World);
    output.Pos = mul(output.Pos, View);
    output.Pos = mul(output.Pos, Projection);
    output.Tex = input.Tex;
    return output;
}
I have been scratching my head for a couple of days over this problem, but I don't know what is causing it, or even which pieces of code are relevant. PIX shows that the world, view and projection matrices appear to exist and are being applied, although it is evident that something is not right.
Thank you.

You can use the row_major modifier instead of transposing matrices before passing them to the shader.
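For example, a minimal sketch against the cbuffer above (only the modifier is new):

cbuffer cbChangesEveryFrame : register(b2)
{
    row_major matrix View;  // matches the CPU-side row-major layout,
    row_major matrix World; // so no transpose is needed before upload
};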

Mathematical fail: I had sent the view matrix instead of its transpose to the shader.

Related

Custom HLSL shader making weird patterns across icosphere

Really hoping that someone can help me here - I can usually resolve bugs in C# since I have a fair amount of experience with it, but I don't have a lot to go on with HLSL.
The picture linked below shows the same model (programmatically generated at run time) twice, the first (white) time using BasicEffect and the second time using my custom shader, listed below. The fact that it works with BasicEffect makes me think that it's not an issue with generating the normals for the model or anything like that.
I've included different levels of subdivision to better illustrate the issue. It's worth mentioning that both effects are using the same lighting direction.
https://imagizer.imageshack.us/v2/801x721q90/673/qvXyBk.png
Here's my shader code (feel free to pick it apart, any tips are very welcome):
float4x4 WorldViewProj;
float4x4 NormalRotation = float4x4(1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1);
float4 ModelColor = float4(1, 1, 1, 1);
bool TextureEnabled = false;
Texture ModelTexture;

sampler ColoredTextureSampler = sampler_state
{
    texture = <ModelTexture>;
    magfilter = LINEAR; minfilter = LINEAR; mipfilter = LINEAR;
    AddressU = mirror; AddressV = mirror;
};

float4 AmbientColor = float4(1, 1, 1, 1);
float AmbientIntensity = 0.1;
float3 DiffuseLightDirection = float3(1, 0, 0);
float4 DiffuseColor = float4(1, 1, 1, 1);
float DiffuseIntensity = 1.0;

struct VertexShaderInput
{
    float4 Position : POSITION0;
    float4 Normal : NORMAL0;
    float2 TextureCoordinates : TEXCOORD0;
};

struct VertexShaderOutput
{
    float4 Position : POSITION0;
    float4 Color : COLOR0;
    float2 TextureCoordinates : TEXCOORD0;
};

VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output = (VertexShaderOutput)0;
    output.Position = mul(input.Position, WorldViewProj);
    float4 normal = mul(input.Normal, NormalRotation);
    float lightIntensity = dot(normal, DiffuseLightDirection);
    output.Color = saturate(DiffuseColor * DiffuseIntensity * lightIntensity);
    output.TextureCoordinates = input.TextureCoordinates;
    return output;
}

float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
    float4 pixBaseColor = ModelColor;
    if (TextureEnabled == true)
    {
        pixBaseColor = tex2D(ColoredTextureSampler, input.TextureCoordinates);
    }
    float4 lighting = saturate((input.Color + AmbientColor * AmbientIntensity) * pixBaseColor);
    return lighting;
}

technique BestCurrent
{
    pass Pass1
    {
        VertexShader = compile vs_2_0 VertexShaderFunction();
        PixelShader = compile ps_2_0 PixelShaderFunction();
    }
}
In general, when implementing a lighting equation, there are a few things to ensure:
Normals, light directions, and other directional vectors should be normalized before using them in a dot product. In your case you could add something like:
normal = normalize(normal);
The same should be done for DiffuseLightDirection if it is not already normalized. It already is with your default value, but if your app changes it, it might not be normalized any more. For that, it would be better to normalize in the application code, since it only needs to be done once when it changes, and not per vertex.
Also remember that if you are multiplying the vector by a matrix that contains a scale, the vector will no longer be normalized, so it will need to be re-normalized.
The light direction and the normal must point in the same direction which is out from the surface. Your default light direction is (1,0,0). If you want light to point in the +x direction, then you must actually negate the vector before performing the dot product with the normal so that it is pointing out from the surface just like the normal. If you already take this into account, then it's not a problem.
Vectors can't be translated since they are just a direction not a position. So it is important to ensure when you transform them with a matrix that either the fourth component (w) of the vector is 0 or the matrix you are transforming it with has no translation. Setting w to 0 will zero out any translation from the matrix during the multiply. Since your matrix is called NormalRotation, I'm assuming it only contains a rotation, so this probably isn't an issue.
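Putting these points together, a hedged sketch of what the vertex shader's lighting lines might become (names match the shader above; the negation assumes DiffuseLightDirection stores the direction the light travels):

float4 normal = mul(float4(input.Normal.xyz, 0), NormalRotation); // w = 0: a direction, so any translation is ignored
normal = normalize(normal);                                       // re-normalize in case the matrix contains scale
float3 toLight = normalize(-DiffuseLightDirection);               // flip it to point out of the surface, like the normal
float lightIntensity = saturate(dot(normal.xyz, toLight));
output.Color = saturate(DiffuseColor * DiffuseIntensity * lightIntensity);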

Displacement shader rendering issue (back drawn on top of front)

I'm using SharpDX to access DirectX 11 and running this very simple displacement shader I wrote on a single plane.
There are no other objects in the scene, just a high-poly flat plane with the displacement shader, drawn in a single draw call.
While it renders fine at first, as I rotate it over time I get two "really weird" artifacts past a certain angle (it's not a backface issue: I'm only rotating it on the up axis, and the way it is angled no backfaces are visible when the issue arises).
Some areas seem to get drawn over (polys in the back get drawn over front ones). You can see what this looks like here: https://www.dropbox.com/s/hy4k20ay1g77rky/Drawing%20issue.png
The other artifact is hard to describe (if this is unclear let me know and I'll try to capture video): past a certain rotation point it's as if a ray swept across the screen, "progressively" remodeling the surface from left to right, leaving weird artifacts within the ray and lowered geometry behind it.
The shader I'm using:
struct VS_IN
{
    float4 pos : POSITION;
    float2 tex : TEXCOORD;
};

struct PS_IN
{
    float4 pos : SV_POSITION;
    float2 tex : TEXCOORD;
};

float4x4 worldViewProj;

Texture2D<float4> diffuse  : register(t0);
Texture2D<float4> height   : register(t1);
Texture2D<float4> lightmap : register(t2);

SamplerState pictureSampler;

PS_IN VS( VS_IN input )
{
    PS_IN output = (PS_IN) 0;
    input.pos.z += height.SampleLevel(pictureSampler, input.tex, 0).r / 2;
    output.pos = mul(input.pos, worldViewProj);
    output.tex = input.tex;
    return output;
}

float4 PS( PS_IN input ) : SV_Target
{
    return diffuse.Sample(pictureSampler, input.tex) * lightmap.Sample(pictureSampler, input.tex);
}
It looks like you haven't set up depth testing properly. That would explain why it looks correct from some angles and not from others, since without depth testing it is the draw order of the triangles that determines which appears on top.
Have you created a depth buffer, set it, cleared it, and set a DepthStencilState to match?
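For reference, a minimal sketch of that setup in native D3D11 (the question uses SharpDX, which wraps these same calls; device, context, rtv, and the back-buffer dimensions are placeholders for the poster's existing objects):

// Create a depth texture matching the back buffer.
D3D11_TEXTURE2D_DESC depthDesc = {};
depthDesc.Width            = backBufferWidth;   // placeholder: your swap-chain size
depthDesc.Height           = backBufferHeight;
depthDesc.MipLevels        = 1;
depthDesc.ArraySize        = 1;
depthDesc.Format           = DXGI_FORMAT_D24_UNORM_S8_UINT;
depthDesc.SampleDesc.Count = 1;
depthDesc.Usage            = D3D11_USAGE_DEFAULT;
depthDesc.BindFlags        = D3D11_BIND_DEPTH_STENCIL;

ID3D11Texture2D* depthTex = nullptr;
device->CreateTexture2D(&depthDesc, nullptr, &depthTex);

ID3D11DepthStencilView* dsv = nullptr;
device->CreateDepthStencilView(depthTex, nullptr, &dsv);

// Bind the depth view together with the render target...
context->OMSetRenderTargets(1, &rtv, dsv);

// ...and clear it at the start of every frame.
context->ClearDepthStencilView(dsv, D3D11_CLEAR_DEPTH, 1.0f, 0);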

DirectX 11 Shader Error. Pixel Shader receiving only NaN

Hi everyone. I started re-coding my engine to convert it to DirectX 11. I'm now trying to get the basics working, but this error is really stopping me.
I created a basic shader, a simple dot product of the normal and the view. I got it to compile without errors, but it doesn't work: it totally deforms the input mesh. I started debugging in VS2012 and found out that the pixel shader was receiving all NaNs as input. I attached two screens and the shader code; if someone can provide any ideas, it would be really appreciated.
Vertex Shader
//--------------------------------------------------------------------------------
// Constant Buffer Variables
//--------------------------------------------------------------------------------
cbuffer ConstantBuffer : register( b0 )
{
    matrix World;
    matrix View;
    matrix Projection;
    //float3 CameraPos;
    float Power;
}

struct VS_INPUT
{
    float4 Pos : POSITION;
    float3 Normal : NORMAL;
};

struct VS_OUTPUT
{
    float4 Pos : SV_POSITION;
    float3 Normal : TEXCOORD0;
};

//--------------------------------------------------------------------------------
// Vertex Shader
//--------------------------------------------------------------------------------
VS_OUTPUT VS( VS_INPUT input )
{
    VS_OUTPUT output = (VS_OUTPUT)0;
    output.Pos = mul( input.Pos, World );
    output.Pos = mul( output.Pos, View );
    output.Pos = mul( output.Pos, Projection );
    output.Normal = mul( float4( input.Normal, 1 ), World ).xyz;
    //output.wNormal = input.Normal;
    return output;
}
And here is the pixel shader:
//--------------------------------------------------------------------------------
// Constant Buffer Variables
//--------------------------------------------------------------------------------
cbuffer ConstantBuffer : register( b0 )
{
    matrix World;
    matrix View;
    matrix Projection;
    float Power;
}

struct VS_OUTPUT
{
    float4 Pos : SV_POSITION;
    float3 wNormal : TEXCOORD0;
};

//--------------------------------------------------------------------------------
// Pixel Shader
//--------------------------------------------------------------------------------
float4 PS( VS_OUTPUT input ) : SV_Target
{
    //return pow(dot(input.wNormal,float3(0,0,0)),Power); // fixed camera, just for now
    return float4(0.1,0.6,0.1,1);
}
And lastly, I created an XML file structure for my shaders that I then parse. I don't know if it's relevant, but:
<?xml version="1.0" encoding="utf-8"?>
<vs path="D:\\Documentos\\Visual Studio 2012\\Projects\\Fusion Engine\\Tests\\ConstantLighting_VS.hlsl" name="VS" target="vs_4_0">
</vs>
<ps path="D:\\Documentos\\Visual Studio 2012\\Projects\\Fusion Engine\\Tests\\ConstantLighting_PS.hlsl" name="PS" target="ps_4_0">
    <val1 type="scalar" value="0.456645" name="Power"/>
</ps>
You could just use #pragma pack_matrix(row_major) in your shader instead of transposing matrices on the CPU side.
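For example, at the very top of the shader file:

// Interpret every matrix in this file as row-major, so it matches the
// CPU-side layout without a transpose.
#pragma pack_matrix(row_major)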
Try simply concatenating the World, View, and Projection matrices into one before calling the vertex function. Hopefully all of those components were okay and not, say, all zeros? You might also want to explicitly set the matrix type in your HLSL code as "matrix <float, 4, 4>", or just declare the transform(s) as "float4x4".
OK, I solved it. You MUST use DirectX::XMMatrixTranspose() when you pass any matrix to the shader.
Tricky DirectX ;)
Only in DirectX 11 do you have to transpose the matrices before sending them to the shader; they changed the way structures are kept and worked with in shaders.
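Concretely, a hedged sketch of the CPU side using DirectXMath (the struct mirrors the shader's cbuffer above; world, view, projection, power, context, and constantBuffer are placeholders for the poster's own objects):

#include <DirectXMath.h>

// Must match the cbuffer layout at register b0 in the shader above.
struct ConstantBuffer
{
    DirectX::XMMATRIX World;
    DirectX::XMMATRIX View;
    DirectX::XMMATRIX Projection;
    float Power;
    float padding[3]; // pad the buffer to a multiple of 16 bytes
};

ConstantBuffer cb;
// HLSL packs matrices column-major by default, while XMMATRIX is
// row-major in memory, hence the transpose before upload.
cb.World      = DirectX::XMMatrixTranspose(world);
cb.View       = DirectX::XMMatrixTranspose(view);
cb.Projection = DirectX::XMMatrixTranspose(projection);
cb.Power      = power;
context->UpdateSubresource(constantBuffer, 0, nullptr, &cb, 0, 0);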

error X3013: vertex shader does not take 0 arguments

I am starting to play around with Shader Model 4.0 and I'm trying to set up a basic sample project (basically rendering and lighting a cube).
But at the moment I am totally stuck at the most basic part: my vertex shader won't compile, with the following error message:
Error 1 Errors compiling ...\x.fx:
...\x.fx(32,43): error X3013: 'VertexShaderFunction': function does not take 0 parameters ...\x.fx 32 43 ...
my code until now:
float4x4 World;
float4x4 View;
float4x4 Projection;

struct VS_INPUT
{
    float4 Position : POSITION;
};

struct VS_OUTPUT
{
    float4 Position : POSITION;
};

VS_OUTPUT VertexShaderFunction(in VS_INPUT input)
{
    VS_OUTPUT output;
    float4 worldPosition = mul(input.Position, World);
    float4 viewPosition = mul(worldPosition, View);
    output.Position = mul(viewPosition, Projection);
    return output;
}

technique Technique1
{
    pass Pass1
    {
        SetVertexShader( Compile( vs_4_1, VertexShaderFunction() ) );
        SetGeometryShader( NULL );
        SetPixelShader( NULL );
    }
}
The VS_INPUT parameter is clearly marked as input and not as uniform, and the struct assigns every member (well, the only one) an input semantic. Does anyone have an idea why this is not compiling properly?
I am using Win7 Ultimate + DirectX 11 + XNA Game Studio 4.0, and my graphics card is an Intel GMA 4500MHD (so it should allow for Shader Model 4.0).
And here I go, answering my own question - what a stupid mistake (and hard to find):
it is CompileShader() and not Compile()
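That is, the technique block becomes:

technique Technique1
{
    pass Pass1
    {
        SetVertexShader( CompileShader( vs_4_1, VertexShaderFunction() ) );
        SetGeometryShader( NULL );
        SetPixelShader( NULL );
    }
}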

HLSL getting values from texture position

I have mapped some values into the alpha channel of my texture; effectively I use the texture as a 2D array. What I need is a way to read the alpha value of the map at a given position, e.g. [4][5] (representing x and y).
I need the returned value available in my pixel shader. Is there any way to do this?
I'm using DX9.
Thanks in advance!
Do you want to use the texel at [4][5] (x, y) for your entire pixel shader?
If that is your question, you could just precalculate that coordinate in the vertex shader and pass it along with every vertex, then sample with those UV coordinates. That way it won't get interpolated (or rather it will, but there will only be one value to interpolate with).
Other than that, you probably have to specify a bit more about what you are trying to achieve.
What are you using it for? When does it occur, and what sort of mesh are you using it with?
Texture2DArray is a Shader Model 4 thing; I don't believe you're using it on DX9.
If you are using Shader Model 4, then just use the Load function, e.g. tex.Load(int3(4, 5, 0)).
Otherwise, for SM 1/2/3, you can put the numbers you want, e.g. 4.0 and 5.0, into your vertices as normal texcoord data, and have the pixel shader scale them by the size of the texture:
// Assumed declarations (not in the original snippet): the transform
// matrices and the sampler used below.
float4x4 MatWorld;
float4x4 MatView;
float4x4 MatProj;
sampler mySampler;

struct VertexInput
{
    float4 pos : POSITION;
    float2 uv : TEXCOORD0; // 0.0, 1.0, 2.0, 3.0, 4.0 etc.
};

struct PixelInput
{
    float4 position : POSITION;
    float2 uv : TEXCOORD0;
};

PixelInput vsTex(VertexInput vtx)
{
    PixelInput output;
    float4 pos = vtx.pos;
    output.position = mul(pos, MatWorld);
    output.position = mul(output.position, MatView);
    output.position = mul(output.position, MatProj);
    output.uv = vtx.uv;
    return output;
}

// Renamed from "PixelShader", which is a reserved word in effect files;
// COLOR0 is the DX9 pixel shader output semantic (SV_Target is DX10+).
float4 psTex(PixelInput input) : COLOR0
{
    // Scale the integer texel indices down into [0, 1] texture coordinates.
    float2 coords = input.uv / float2(TEX_WIDTH, TEX_HEIGHT);
    return tex2D(mySampler, coords);
}
Here TEX_WIDTH and TEX_HEIGHT are passed in via the 'defines' parameter of D3DXCompileShader.
Or: just compute 4.0/tex_width and 5.0/tex_height in software and pass those numbers (each in [0.0, 1.0]) through to the pixel shader.
