I am trying to do basic N·L lighting in Unity with CG.
To my understanding, the lighting should not change depending on where you look at the object from with your camera, but in my case it does.
struct v2f {
float4 pos : SV_POSITION;
float2 Texcoord : TEXCOORD0;
float3 LightDirection : TEXCOORD2;
float3 Normal : TEXCOORD3;
};
uniform float4 _LightPos;
v2f vert (appdata_tan v)
{
v2f o;
o.pos = mul (UNITY_MATRIX_MVP, v.vertex);
o.Texcoord = v.texcoord.xy;
float3 p = mul(UNITY_MATRIX_MV, v.vertex).xyz;
o.LightDirection = _LightPos.xyz - p;
o.Normal = v.normal;
return o;
}
half4 frag (v2f i) : COLOR
{
float3 l = normalize(i.LightDirection);
float3 n = normalize(i.Normal);
float x = dot(n,l);
return float4(x,x,x,1);
}
So what am I missing in this code?
Do I need to transform the light's position with the modelview matrix as well?
Thanks!
v.normal is in object space, while i.LightDirection is in view space.
Pick a space and stick to it. Transforming the normals to view space is one way to go; that requires multiplying by the inverse-transpose of the modelview matrix. If your modelview contains only rotation, translation, and uniform scale, this can be as simple as using its upper-left 3x3 part.
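As a minimal sketch of the view-space route (assuming _LightPos already holds a view-space light position, as the original code implies, and using Unity's UNITY_MATRIX_IT_MV, the inverse-transpose of the modelview), the vertex shader could become something like:
v2f vert (appdata_tan v)
{
    v2f o;
    o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
    o.Texcoord = v.texcoord.xy;
    // Vertex position in view space.
    float3 p = mul(UNITY_MATRIX_MV, v.vertex).xyz;
    // View-space vector from the vertex toward the light.
    o.LightDirection = _LightPos.xyz - p;
    // Move the normal into view space with the inverse-transpose of the modelview,
    // so it lives in the same space as LightDirection.
    o.Normal = mul((float3x3)UNITY_MATRIX_IT_MV, v.normal);
    return o;
}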
Related
I'm trying to port my engine to DirectX, and I'm currently having issues with depth reconstruction. It works perfectly in OpenGL (even though I use a somewhat expensive method). Everything besides the depth reconstruction works so far. I use GLM because it's a good math library that doesn't require the user to install any dependencies.
So basically I get my GLM matrices:
struct DefferedUBO {
glm::mat4 view;
glm::mat4 invProj;
glm::vec4 eyePos;
glm::vec4 resolution;
};
DefferedUBO deffUBOBuffer;
// ...
glm::mat4 projection = glm::perspective(engine.settings.fov, aspectRatio, 0.1f, 100.0f);
// Get My Camera
CTransform *transform = &engine.transformSystem.components[engine.entities[entityID].components[COMPONENT_TRANSFORM]];
// Get the View Matrix
glm::mat4 view = glm::lookAt(
transform->GetPosition(),
transform->GetPosition() + transform->GetForward(),
transform->GetUp()
);
deffUBOBuffer.invProj = glm::inverse(projection);
deffUBOBuffer.view = glm::inverse(view);
if (engine.settings.graphicsLanguage == GRAPHICS_DIRECTX) {
deffUBOBuffer.invProj = glm::transpose(deffUBOBuffer.invProj);
deffUBOBuffer.view = glm::transpose(deffUBOBuffer.view);
}
// Abstracted so I can use OGL, DX, VK, or even Metal when I get around to it.
deffUBO->UpdateUniformBuffer(&deffUBOBuffer);
deffUBO->Bind();
Then in HLSL, I simply use the following:
cbuffer MatrixInfoType {
matrix invView;
matrix invProj;
float4 eyePos;
float4 resolution;
};
float4 ViewPosFromDepth(float depth, float2 TexCoord) {
float z = depth; // * 2.0 - 1.0;
float4 clipSpacePosition = float4(TexCoord * 2.0 - 1.0, z, 1.0);
float4 viewSpacePosition = mul(invProj, clipSpacePosition);
viewSpacePosition /= viewSpacePosition.w;
return viewSpacePosition;
}
float3 WorldPosFromViewPos(float4 view) {
float4 worldSpacePosition = mul(invView, view);
return worldSpacePosition.xyz;
}
float3 WorldPosFromDepth(float depth, float2 TexCoord) {
return WorldPosFromViewPos(ViewPosFromDepth(depth, TexCoord));
}
// ...
// Sample the hardware depth buffer.
float depth = shaderTexture[3].Sample(SampleType[0], input.texCoord).r;
float3 position = WorldPosFromDepth(depth, input.texCoord).rgb;
Here's the result:
This just looks like random colors multiplied with the depth.
Ironically, when I remove the transposing, I get something closer to the truth, but still not quite right:
You're looking at Crytek Sponza. As you can see, the green area moves and rotates with the bottom of the camera. I have no idea why.
The correct version, along with Albedo, Specular, and Normals.
I fixed my problem over at gamedev.net. There was a matrix majorness (row- vs column-major) issue as well as a depth handling issue.
https://www.gamedev.net/forums/topic/692095-d3d-glm-depth-reconstruction-issues
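For anyone hitting the same thing, here is a hedged sketch of the kinds of adjustments such ports usually need (the exact fix is described in the linked thread); it reuses the invProj from the cbuffer above:
float4 ViewPosFromDepth(float depth, float2 TexCoord) {
    // D3D hardware depth is already in [0, 1], so skip the *2-1 remap of z
    // (that remap only applies to GL-style [-1, 1] clip space).
    float z = depth;
    // D3D texture coordinates start at the top-left, so the y axis may also
    // need flipping when rebuilding NDC coordinates.
    float2 ndc = float2(TexCoord.x, 1.0 - TexCoord.y) * 2.0 - 1.0;
    float4 clipSpacePosition = float4(ndc, z, 1.0);
    // Keep the mul() order consistent with how the matrices were uploaded:
    // untransposed (column-major GLM) pairs with mul(matrix, vector),
    // transposed pairs with mul(vector, matrix).
    float4 viewSpacePosition = mul(invProj, clipSpacePosition);
    return viewSpacePosition / viewSpacePosition.w;
}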
I am currently working on a multi-textured terrain and I have problems with the Sample function of Texture2DArray.
In my example, I use a Texture2DArray to store a set of different terrain textures, e.g. grass, sand, asphalt, etc. Each of my vertices stores a texture coordinate (UV coordinate) and an index of the texture I want to use. So if the index is 0, I use the first texture; if the index is 1, I use the second texture; and so on. This works fine as long as the index is a whole number (0, 1, ...). However, it fails if the index is a fractional value (like 1.5f).
In order to look for the problem, I reduced my entire pixel shader to this:
Texture2DArray DiffuseTextures : register(t0);
Texture2DArray NormalTextures : register(t1);
Texture2DArray EmissiveTextures : register(t2);
Texture2DArray SpecularTextures : register(t3);
SamplerState Sampler : register(s0);
struct PS_IN
{
float4 pos : SV_POSITION;
float3 nor : NORMAL;
float3 tan : TANGENT;
float3 bin : BINORMAL;
float4 col : COLOR;
float4 TextureIndices : COLOR1;
float4 tra : COLOR2;
float2 TextureUV : TEXCOORD0;
};
float4 PS(PS_IN input) : SV_Target
{
float4 texCol = DiffuseTextures.Sample(Sampler, float3(input.TextureUV, input.TextureIndices.r));
return texCol;
}
The following image shows the result of a sample scene on the left side. As you can see, there is a hard border between the textures; there is no interpolation at all.
In order to check my texture indices, I changed my pixel shader from above by returning the texture indices as a color:
return float4(input.TextureIndices.r, input.TextureIndices.r, input.TextureIndices.r, 1.0f);
The result can be seen on the right side of the image. The texture indices are correct, since they range in the interval [0, 1] and you can clearly see the interpolation at the border of the area. However, my sampled texture does not show any form of interpolation.
Since my pixel shader is pretty simple, I wonder what causes this behaviour. Is there any setting in DirectX responsible for it?
I use DirectX 11, pixel shader ps_5_0 (I also tested with ps_4_0) and I use DDS textures (BC3 compression).
Edit
This is the sampler I am using:
SharpDX.Direct3D11.SamplerStateDescription samplerStateDescription = new SharpDX.Direct3D11.SamplerStateDescription()
{
AddressU = SharpDX.Direct3D11.TextureAddressMode.Wrap,
AddressV = SharpDX.Direct3D11.TextureAddressMode.Wrap,
AddressW = SharpDX.Direct3D11.TextureAddressMode.Wrap,
Filter = SharpDX.Direct3D11.Filter.MinMagMipLinear
};
SharpDX.Direct3D11.SamplerState samplerState = new SharpDX.Direct3D11.SamplerState(_device, samplerStateDescription);
_deviceContext.PixelShader.SetSampler(0, samplerState);
Solution
I made a function using the code presented by catflier for getting a texture color:
float4 GetTextureColor(Texture2DArray textureArray, float2 textureUV, float textureIndex)
{
float tid = textureIndex;
int id = (int)tid; // lower slice index
float l = frac(tid); // blend factor between the two slices
float4 texCol1 = textureArray.Sample(Sampler, float3(textureUV, id));
float4 texCol2 = textureArray.Sample(Sampler, float3(textureUV, id + 1));
return lerp(texCol1, texCol2, l);
}
This way, I can get the desired texture color for all texture types (diffuse, specular, emissive, ...) with a simple function call:
float4 texCol = GetTextureColor(DiffuseTextures, input.TextureUV, input.TextureIndices.r);
float4 bumpMap = GetTextureColor(NormalTextures, input.TextureUV, input.TextureIndices.g);
float4 emiCol = GetTextureColor(EmissiveTextures, input.TextureUV, input.TextureIndices.b);
float4 speCol = GetTextureColor(SpecularTextures, input.TextureUV, input.TextureIndices.a);
The result is as smooth as I wanted it to be: :-)
Texture arrays do not sample across slices, so technically this is the expected result.
If you want to interpolate between slices (e.g. 1.5f giving you "half" of the second texture and "half" of the third), you can use a Texture3D instead, which allows this (but costs a bit more, since it performs trilinear filtering).
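For example, a hedged sketch of the Texture3D route (the volume resource and slice count are made up; it reuses the Sampler declared earlier): the W coordinate addresses the slice, the centre of slice i sits at (i + 0.5) / sliceCount, and fractional indices in between are blended by the trilinear filter.
Texture3D DiffuseVolume;              // hypothetical 3D texture, one terrain type per slice
static const float SLICE_COUNT = 8.0; // hypothetical number of terrain textures

float4 SampleTerrain(float2 uv, float textureIndex)
{
    // Map the index to the centre of the corresponding slice; in-between values blend.
    float w = (textureIndex + 0.5) / SLICE_COUNT;
    return DiffuseVolume.Sample(Sampler, float3(uv, w));
}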
Otherwise, you can perform your sampling this way:
float4 PS(PS_IN input) : SV_Target
{
float tid = input.TextureIndices.r;
int id = (int)tid;
float l = frac(tid); //lerp amount
float4 texCol1 = DiffuseTextures.Sample(Sampler, float3(input.TextureUV,id));
float4 texCol2 = DiffuseTextures.Sample(Sampler, float3(input.TextureUV,id+1));
return lerp(texCol1,texCol2, l);
}
Please note that this technique is also more flexible, since you can provide non-adjacent slices as input (so you can lerp between slice 2 and slice 23, for example), and you can use a different blend mode by replacing lerp with some other function, as in the sketch below.
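For example, a hedged variation (the slice indices and the blendFactor parameter here are made up, and the blend curve is a smoothstep instead of a plain lerp):
// Blend between two arbitrary, not necessarily adjacent, slices.
float4 BlendSlices(Texture2DArray textures, float2 uv, float sliceA, float sliceB, float t)
{
    float4 colA = textures.Sample(Sampler, float3(uv, sliceA));
    float4 colB = textures.Sample(Sampler, float3(uv, sliceB));
    // Any other blend curve could go here; smoothstep just sharpens the transition.
    return lerp(colA, colB, smoothstep(0.0, 1.0, t));
}
// e.g. blend slice 2 with slice 23:
// float4 col = BlendSlices(DiffuseTextures, input.TextureUV, 2, 23, blendFactor);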
I'm really hoping that someone can help me here. I can usually resolve bugs in C#, since I have a fair amount of experience with it, but I don't have a lot to go on with HLSL.
The picture linked below shows the same model (generated programmatically at run time) twice: the first (white) one uses BasicEffect and the second uses my custom shader, listed below. The fact that it works with BasicEffect makes me think it's not an issue with generating the normals for the model or anything like that.
I've included different levels of subdivision to better illustrate the issue. It's worth mentioning that both effects use the same lighting direction.
https://imagizer.imageshack.us/v2/801x721q90/673/qvXyBk.png
Here's my shader code (feel free to pick it apart, any tips are very welcome):
float4x4 WorldViewProj;
float4x4 NormalRotation = float4x4(1,0,0,0,0,1,0,0,0,0,1,0,0,0,0,1);
float4 ModelColor = float4(1, 1, 1, 1);
bool TextureEnabled = false;
Texture ModelTexture;
sampler ColoredTextureSampler = sampler_state
{
texture = <ModelTexture>;
magfilter = LINEAR; minfilter = LINEAR; mipfilter = LINEAR;
AddressU = mirror; AddressV = mirror;
};
float4 AmbientColor = float4(1, 1, 1, 1);
float AmbientIntensity = 0.1;
float3 DiffuseLightDirection = float3(1, 0, 0);
float4 DiffuseColor = float4(1, 1, 1, 1);
float DiffuseIntensity = 1.0;
struct VertexShaderInput
{
float4 Position : POSITION0;
float4 Normal : NORMAL0;
float2 TextureCoordinates : TEXCOORD0;
};
struct VertexShaderOutput
{
float4 Position : POSITION0;
float4 Color : COLOR0;
float2 TextureCoordinates : TEXCOORD0;
};
VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
VertexShaderOutput output = (VertexShaderOutput)0;
output.Position = mul(input.Position, WorldViewProj);
float4 normal = mul(input.Normal, NormalRotation);
float lightIntensity = dot(normal, DiffuseLightDirection);
output.Color = saturate(DiffuseColor * DiffuseIntensity * lightIntensity);
output.TextureCoordinates = input.TextureCoordinates;
return output;
}
float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
float4 pixBaseColor = ModelColor;
if (TextureEnabled == true)
{
pixBaseColor = tex2D(ColoredTextureSampler, input.TextureCoordinates);
}
float4 lighting = saturate((input.Color + AmbientColor * AmbientIntensity) * pixBaseColor);
return lighting;
}
technique BestCurrent
{
pass Pass1
{
VertexShader = compile vs_2_0 VertexShaderFunction();
PixelShader = compile ps_2_0 PixelShaderFunction();
}
}
In general, when implementing a lighting equation, there are a few things to ensure:
Normals, light directions, and other directional vectors should be normalized before being used in a dot product. In your case you could add something like:
normal = normalize(normal);
The same should be done for DiffuseLightDirection if it is not already normalized. It is with your default value, but if your app changes it, it might not be normalized anymore. For that reason it would be better to normalize it in the application code, since that only needs to be done once when the value changes, not per vertex.
Also remember that if you are multiplying the vector by a matrix that contains a scale, the vector will no longer be normalized, so it will need to be re-normalized.
The light direction and the normal must both point in the same direction, which is out from the surface. Your default light direction is (1,0,0). If you want the light to shine along the +x direction, then you must negate the vector before taking the dot product with the normal, so that it points out from the surface just like the normal does. If you already take this into account, then it's not a problem.
Vectors can't be translated, since they are just a direction, not a position. So when you transform them with a matrix, it is important that either the fourth component (w) of the vector is 0 or the matrix contains no translation; setting w to 0 zeroes out any translation in the matrix during the multiply. Since your matrix is called NormalRotation, I'm assuming it only contains a rotation, so this probably isn't an issue. These points are pulled together in the sketch below.
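Putting those points together, one possible sketch of the vertex shader (assuming DiffuseLightDirection stores the direction the light travels, hence the negation; adjust if it already points toward the light) might look like:
VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output = (VertexShaderOutput)0;
    output.Position = mul(input.Position, WorldViewProj);

    // w = 0 so any translation in the matrix is ignored; normals are directions.
    float4 normal = mul(float4(input.Normal.xyz, 0), NormalRotation);
    normal = normalize(normal);

    // Negate so the vector points from the surface toward the light,
    // matching the direction the normal points.
    float3 toLight = normalize(-DiffuseLightDirection);

    float lightIntensity = saturate(dot(normal.xyz, toLight));
    output.Color = saturate(DiffuseColor * DiffuseIntensity * lightIntensity);
    output.TextureCoordinates = input.TextureCoordinates;
    return output;
}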
I am trying to build an infinite fog shader. The fog is applied on a 3D plane.
For the moment I have a Z-depth fog, and I have run into some issues.
As you can see in the screenshot, there are two views.
The green color is my 3D plane. The problem is the red line. It seems that this line depends on my camera, which is not good: when I rotate my camera, the line is affected by the camera's position and rotation.
I don't know where it comes from or how to make my fog limit independent of the camera position.
Shader
Pass {
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#include "UnityCG.cginc"
uniform float4 _FogColor;
uniform sampler2D _CameraDepthTexture;
float _Depth;
float _DepthScale;
struct v2f {
float4 pos : SV_POSITION;
float4 projection : TEXCOORD0;
float4 screenPosition : TEXCOORD1;
};
v2f vert(appdata_base v) {
v2f o;
o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
// o.projection = ComputeGrabScreenPos(o.pos);
float4 position = o.pos;
#if UNITY_UV_STARTS_AT_TOP
float scale = -1.0;
#else
float scale = 1.0;
#endif
float4 p = position * 0.5f;
p.xy = float2(p.x, p.y * scale) + p.w;
p.zw = position.zw;
o.projection = p;
// o.screenPosition = ComputeScreenPos(o.pos);
position = o.pos;
float4 q = position * 0.5f;
#if defined(UNITY_HALF_TEXEL_OFFSET)
q.xy = float2(q.x, q.y * _ProjectionParams.x) + q.w * _ScreenParams.zw;
#else
q.xy = float2(q.x, q.y * _ProjectionParams.x) + q.w;
#endif
#if defined(SHADER_API_FLASH)
q.xy *= unity_NPOTScale.xy;
#endif
q.zw = position.zw;
q.zw = 1.0f;
o.screenPosition = q;
return o;
}
sampler2D _GrabTexture;
float4 frag(v2f IN) : COLOR {
float3 uv = UNITY_PROJ_COORD(IN.projection);
float depth = UNITY_SAMPLE_DEPTH(tex2Dproj(_CameraDepthTexture, uv));
depth = LinearEyeDepth(depth);
return saturate((depth - IN.screenPosition.w + _Depth) * _DepthScale);
}
ENDCG
}
Next I want to rotate my fog to get a Y-depth fog, but I don't know how to achieve this effect.
I see two ways to achieve what you want:
One is to render the depth of your plane to a texture and calculate the fog based on the difference between the plane depth and the object depth: 0 if the object's depth is smaller, and (objDepth - planeDepth) * scale if it is bigger.
The other is to skip the render-to-texture step and instead calculate the distance to the plane directly in the shader and use that (see the sketch below).
I am not sure exactly what you are doing, since I am not very familiar with Unity surface shaders, but judging from the code and the result it is something different.
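Here is a hedged sketch of the second approach (assuming a horizontal fog plane, keeping the _Depth and _DepthScale properties from the original shader, and using unity_ObjectToWorld, which is _Object2World in older Unity versions): reconstruct the world position of the geometry behind the plane from the depth texture and fade by vertical distance instead of eye depth.
struct v2f {
    float4 pos : SV_POSITION;
    float4 screenPosition : TEXCOORD0;
    float3 worldPos : TEXCOORD1;
};

v2f vert(appdata_base v) {
    v2f o;
    o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
    o.screenPosition = ComputeScreenPos(o.pos);
    // World-space position of this point on the fog plane.
    o.worldPos = mul(unity_ObjectToWorld, v.vertex).xyz;
    return o;
}

float4 frag(v2f IN) : COLOR {
    float rawDepth = UNITY_SAMPLE_DEPTH(tex2Dproj(_CameraDepthTexture, UNITY_PROJ_COORD(IN.screenPosition)));
    float sceneEyeDepth = LinearEyeDepth(rawDepth);
    // Reconstruct the world position of the opaque geometry behind this fragment:
    // scale the camera-to-fragment ray by the ratio of scene depth to fragment depth.
    float3 ray = IN.worldPos - _WorldSpaceCameraPos;
    float3 sceneWorldPos = _WorldSpaceCameraPos + ray * (sceneEyeDepth / IN.screenPosition.w);
    // The fog amount now depends on how far below the plane the scene point lies
    // (a Y-depth fog), so it no longer moves with the camera.
    return saturate((IN.worldPos.y - sceneWorldPos.y + _Depth) * _DepthScale);
}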
It seems that this is caused by _CameraDepthTexture, which is why the depth is computed relative to the camera position.
But I don't know how to correct it... It seems that there is no way to get the depth from another point. Any idea?
Here is another example. In green you can "see" the object, and the blue line is the fog as it should be.
I have mapped some values into the alpha channel of a texture. Effectively I use the texture as a 2D array. What I need is a way to read the alpha value of that map at a given position, e.g. [4][5] (representing x and y).
I need the returned value available in my pixel shader. Is there any way to do this?
I use DX9.
Thanks in advance!
Do you want to use the texel at [4][5] (x, y) for your entire pixel shader?
If that is your question, you could just precalculate that coordinate in the vertex shader and pass it along with every vertex, then sample with those UV coordinates. This way it won't get interpolated (or it will, but it will only have one value to interpolate with).
Other than that, you probably have to be a bit more specific about what you are trying to achieve.
What are you using it for? When does it occur? What sort of mesh are you using it with?
Texture2DArray is a shader model 4 feature; I don't believe you're using one on DX9.
If you are using shader model 4, then just use the Load function.
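For example (shader model 4 and up, with a hypothetical Texture2D named DataTexture), reading the alpha stored at texel [4][5] is just:
float alpha = DataTexture.Load(int3(4, 5, 0)).a; // x, y, mip level 0; no sampler involved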
Otherwise, for shader models 1, 2, and 3, you can put the numbers you want, e.g. 4.0f and 5.0f, into your vertices as normal texcoord data, then have the pixel shader scale them by the size of the texture.
// Transform matrices and the sampler referenced below.
float4x4 MatWorld;
float4x4 MatView;
float4x4 MatProj;
sampler2D mySampler;
struct VertexInput {
float4 pos : POSITION;
float2 uv : TEXCOORD0; //0.0, 1.0, 2.0, 3.0, 4.0 etc
};
struct PixelInput {
float4 position : POSITION;
float2 uv : TEXCOORD0;
};
PixelInput vsTex(VertexInput vtx)
{
PixelInput output;
float4 pos = vtx.pos;
output.position = mul(pos, MatWorld);
output.position = mul(output.position, MatView);
output.position = mul(output.position, MatProj);
output.uv = vtx.uv;
return output;
}
float4 psTex(PixelInput input) : COLOR
{
// Scale the integer texel coordinates down into [0, 1] UV space.
float2 coords = input.uv / float2(TEX_WIDTH, TEX_HEIGHT);
return tex2D(mySampler, coords);
}
Here TEX_WIDTH and TEX_HEIGHT are passed in via the 'defines' parameter of D3DXCompileShader.
Alternatively, just compute 4.0f / tex_width and 5.0f / tex_height in software and pass that value (which will be between 0.0f and 1.0f) through to the pixel shader, as sketched below.
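A hedged sketch of that second option (TexelUV is a hypothetical constant; the application would set it to ((4 + 0.5) / tex_width, (5 + 0.5) / tex_height), the 0.5 putting it on the texel centre, and mySampler is the sampler declared above):
float2 TexelUV; // set from the application to the precomputed texel-centre UV

float4 psFixedTexel(PixelInput input) : COLOR
{
    // Every pixel reads the alpha stored at texel [4][5].
    float a = tex2D(mySampler, TexelUV).a;
    return float4(a, a, a, 1.0f);
}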