I have a question about normal mapping in a DirectX 9 shader.
Currently, my terrain shader's output for normal map + diffuse color only results in this image.
Which looks good to me.
If I use an empty (flat) normal map image like this one,
my shader output for the normal, diffuse, and color maps looks like this.
But if I use one together with a color map, I get a really strange result.
Does anyone have an idea what could cause this issue?
Here are some snippets.
float4 PS_TERRAIN(VSTERRAIN_OUTPUT In) : COLOR0
{
    float4 fDiffuseColor;
    float lightIntensity;

    float3 bumpMap = 2.0f * tex2D(Samp_Bump, In.Tex.xy).xyz - 1.0f;
    float3 bumpNormal = (bumpMap.x * In.Tangent) + (bumpMap.y * In.Bitangent) + (bumpMap.z * In.Normal);
    bumpNormal = normalize(bumpNormal);

    // Directional light test (hardcoded)
    float3 lightDirection = float3(0.0f, -0.5f, -0.2f);
    float3 lightDir = -lightDirection;

    // Bump
    lightIntensity = saturate(dot(bumpNormal, lightDir));

    // We are using a lightmap to do our alpha calculation for the given pixel
    float4 LightMaptest = tex2D(Samp_Lightmap, In.Tex.zw) * 2.0f;
    fDiffuseColor.a = LightMaptest.a;
    if (!bAlpha)
        fDiffuseColor.a = 1.0;

    // Sample the pixel color from the texture using the sampler at this texture coordinate location.
    float4 textureColor = tex2D(Samp_Diffuse, In.Tex.xy);

    // Combine the color map value into the texture color.
    textureColor = saturate(textureColor * LightMaptest);
    textureColor.a = LightMaptest.a;

    fDiffuseColor.rgb = saturate(lightIntensity * I_d).rgb;
    fDiffuseColor = fDiffuseColor * textureColor; // If I enable this line it goes crazy

    return fDiffuseColor;
}
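For what it's worth, one thing that stands out in those last lines (an observation and a sketch, not a confirmed fix): both fDiffuseColor.a and textureColor.a are set from LightMaptest.a, so the final float4 multiply squares the alpha as a side effect. A variant that modulates only RGB by the lighting term and writes alpha once might look like this (assuming I_d is the diffuse light color uniform from the full shader):
// Modulate only the RGB channels by the lighting term;
// write alpha a single time instead of multiplying it in twice.
float3 litColor = saturate(lightIntensity * I_d.rgb) * textureColor.rgb;
float alpha = bAlpha ? LightMaptest.a : 1.0f;
return float4(litColor, alpha);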
I'm trying to port my engine to DirectX, and I'm currently having issues with depth reconstruction. It works perfectly in OpenGL (even though I use a somewhat expensive method). Every part besides the depth reconstruction works so far. I use GLM because it's a good math library that doesn't require the user to install any dependencies.
So basically I get my GLM matrices:
struct DefferedUBO {
    glm::mat4 view;
    glm::mat4 invProj;
    glm::vec4 eyePos;
    glm::vec4 resolution;
};

DefferedUBO deffUBOBuffer;
// ...
glm::mat4 projection = glm::perspective(engine.settings.fov, aspectRatio, 0.1f, 100.0f);

// Get my camera
CTransform *transform = &engine.transformSystem.components[engine.entities[entityID].components[COMPONENT_TRANSFORM]];

// Get the view matrix
glm::mat4 view = glm::lookAt(
    transform->GetPosition(),
    transform->GetPosition() + transform->GetForward(),
    transform->GetUp()
);

deffUBOBuffer.invProj = glm::inverse(projection);
deffUBOBuffer.view = glm::inverse(view);

if (engine.settings.graphicsLanguage == GRAPHICS_DIRECTX) {
    deffUBOBuffer.invProj = glm::transpose(deffUBOBuffer.invProj);
    deffUBOBuffer.view = glm::transpose(deffUBOBuffer.view);
}

// Abstracted so I can use OGL, DX, VK, or even Metal when I get around to it.
deffUBO->UpdateUniformBuffer(&deffUBOBuffer);
deffUBO->Bind();
Then in HLSL, I simply use the following:
cbuffer MatrixInfoType {
    matrix invView;
    matrix invProj;
    float4 eyePos;
    float4 resolution;
};

float4 ViewPosFromDepth(float depth, float2 TexCoord) {
    float z = depth; // * 2.0 - 1.0;
    float4 clipSpacePosition = float4(TexCoord * 2.0 - 1.0, z, 1.0);
    float4 viewSpacePosition = mul(invProj, clipSpacePosition);
    viewSpacePosition /= viewSpacePosition.w;
    return viewSpacePosition;
}

float3 WorldPosFromViewPos(float4 view) {
    float4 worldSpacePosition = mul(invView, view);
    return worldSpacePosition.xyz;
}

float3 WorldPosFromDepth(float depth, float2 TexCoord) {
    return WorldPosFromViewPos(ViewPosFromDepth(depth, TexCoord));
}

// ...
// Sample the hardware depth buffer.
float depth = shaderTexture[3].Sample(SampleType[0], input.texCoord).r;
float3 position = WorldPosFromDepth(depth, input.texCoord).rgb;
Here's the result:
This just looks like random colors multiplied with the depth.
Ironically, when I remove the transposing, I get something closer to the truth, but not quite:
You're looking at Crytek Sponza. As you can see, the green area moves and rotates with the bottom of the camera. I have no idea at all why.
The correct version, along with Albedo, Specular, and Normals.
I fixed my problem at gamedev.net. There was a matrix majorness issue as well as a depth handling issue.
https://www.gamedev.net/forums/topic/692095-d3d-glm-depth-reconstruction-issues
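For anyone who lands here, a sketch of the two fixes as I understand them (hedged: the function name and the y-flip are illustrative, not necessarily the exact final code from the thread). GLM stores matrices column-major, and HLSL cbuffers also default to column-major packing, so the matrices can be uploaded without the glm::transpose calls as long as the shader keeps multiplying with mul(matrix, vector). And since D3D's NDC z range is [0, 1] rather than OpenGL's [-1, 1], the hardware depth value can be used as clip-space z directly:
// Hypothetical corrected reconstruction under D3D conventions.
float4 ViewPosFromDepthD3D(float depth, float2 TexCoord) {
    // Texture coords run top-down in D3D while NDC y runs bottom-up, so flip y.
    float2 ndcXY = float2(TexCoord.x, 1.0 - TexCoord.y) * 2.0 - 1.0;
    // D3D depth is already in [0, 1]; no * 2.0 - 1.0 remap needed on z.
    float4 clipSpacePosition = float4(ndcXY, depth, 1.0);
    // With untransposed (column-major) GLM matrices, keep mul(matrix, vector).
    float4 viewSpacePosition = mul(invProj, clipSpacePosition);
    return viewSpacePosition / viewSpacePosition.w;
}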
I have a supposedly simple task, but apparently I still don't understand how projections work in shaders. I need to do a 2D perspective transformation on a textured quad (2 triangles), but visually it doesn't look correct (e.g., the trapezoid is slightly taller or more stretched than in the CPU version).
I have this struct:
struct VertexInOut
{
    float4 position [[position]];
    float3 warp0;
    float3 warp1;
    float3 warp2;
    float3 warp3;
};
In the vertex shader I do something like this (texCoords are the pixel coordinates of the quad corners, and the homography is calculated in pixel coordinates):
v.warp0 = texCoords[vid] * homographies[0];
Then in the fragment shader I sample like this:
return intensity.sample(s, inFrag.warp0.xy / inFrag.warp0.z);
The result is not what I expect; I've spent hours on this, but I cannot figure it out.
UPDATE:
Here are the code and result for the CPU (i.e., the expected result):
// _image contains the original image
cv::Matx33d h(1.03140473, 0.0778113901, 0.000169219566,
              0.0342947133, 1.06025684, 0.000459250761,
              -0.0364957005, -38.3375587, 0.818259298);
cv::Mat dest(_image.size(), CV_8UC4);
// h is transposed because OpenCV is row major (while simd matrices are column major),
// and backwarping is used because it is what happens on the GPU, so better for comparison
cv::warpPerspective(_image, dest, h.t(), _image.size(), cv::WARP_INVERSE_MAP | cv::INTER_LINEAR);
Here are the code and result for the GPU (i.e., the wrong result):
// constants passed in buffers, image size 320x240
const simd::float4 quadVertices[4] =
{
    { -1.0f, -1.0f, 0.0f, 1.0f },
    { +1.0f, -1.0f, 0.0f, 1.0f },
    { -1.0f, +1.0f, 0.0f, 1.0f },
    { +1.0f, +1.0f, 0.0f, 1.0f },
};
const simd::float3 textureCoords[4] =
{
    { 0, IMAGE_HEIGHT, 1.0f },
    { IMAGE_WIDTH, IMAGE_HEIGHT, 1.0f },
    { 0, 0, 1.0f },
    { IMAGE_WIDTH, 0, 1.0f },
};

// vertex shader
vertex VertexInOut homographyVertex(uint vid [[ vertex_id ]],
                                    constant float4 *positions [[ buffer(0) ]],
                                    constant float3 *texCoords [[ buffer(1) ]],
                                    constant simd::float3x3 *homographies [[ buffer(2) ]])
{
    VertexInOut v;
    v.position = positions[vid];
    // example homography
    simd::float3x3 h = {
        {1.03140473, 0.0778113901, 0.000169219566},
        {0.0342947133, 1.06025684, 0.000459250761},
        {-0.0364957005, -38.3375587, 0.818259298}
    };
    v.warp0 = h * texCoords[vid];
    return v;
}

// fragment shader
fragment int4 homographyFragment(VertexInOut inFrag [[stage_in]],
                                 texture2d<uint, access::sample> intensity [[ texture(1) ]])
{
    constexpr sampler s(coord::pixel, filter::linear, address::clamp_to_zero);
    float4 targetIntensity = float4(intensity.sample(s, inFrag.warp0.xy / inFrag.warp0.z));
    return int4(targetIntensity);
}
Original image:
UPDATE 2:
Contrary to the common belief that the perspective divide should be done in the fragment shader, I get a result much closer to the expected one if I divide in the vertex shader (and with no distortion or seam between the triangles), but why?
UPDATE 3:
I get the same (wrong) result if:
- I move the perspective divide to the fragment shader
- I simply remove the divide from the code
Very strange; it looks like the divide is not happening.
OK, the solution was of course a very small detail: division involving simd::float3 behaves absolutely nuts. In fact, if I do the perspective divide in the fragment shader like this:
float4 targetIntensity = intensity.sample(s, inFrag.warp0.xy * (1.0 / inFrag.warp0.z));
it works!
This led me to find out that multiplying by the precomputed reciprocal is different from dividing by the float directly. The reason for this is still unknown to me; if anyone knows why, we can unravel this mystery.
I am trying to build an infinite fog shader. The fog is applied on a 3D plane.
For the moment, I have a Z-depth fog, and I've run into some issues.
As you can see in the screenshot, there are two views.
The green color is my 3D plane. The problem is the red line: it seems that this line depends on my camera, which is not good, because when I rotate the camera the line is affected by the camera's position and rotation.
I don't know where it comes from or how to make my fog limit independent of the camera position.
Shader
Pass {
    CGPROGRAM
    #pragma vertex vert
    #pragma fragment frag
    #include "UnityCG.cginc"

    uniform float4 _FogColor;
    uniform sampler2D _CameraDepthTexture;
    float _Depth;
    float _DepthScale;

    struct v2f {
        float4 pos : SV_POSITION;
        float4 projection : TEXCOORD0;
        float4 screenPosition : TEXCOORD1;
    };

    v2f vert(appdata_base v) {
        v2f o;
        o.pos = mul(UNITY_MATRIX_MVP, v.vertex);

        // o.projection = ComputeGrabScreenPos(o.pos);
        float4 position = o.pos;
        #if UNITY_UV_STARTS_AT_TOP
        float scale = -1.0;
        #else
        float scale = 1.0;
        #endif
        float4 p = position * 0.5f;
        p.xy = float2(p.x, p.y * scale) + p.w;
        p.zw = position.zw;
        o.projection = p;

        // o.screenPosition = ComputeScreenPos(o.pos);
        position = o.pos;
        float4 q = position * 0.5f;
        #if defined(UNITY_HALF_TEXEL_OFFSET)
        q.xy = float2(q.x, q.y * _ProjectionParams.x) + q.w * _ScreenParams.zw;
        #else
        q.xy = float2(q.x, q.y * _ProjectionParams.x) + q.w;
        #endif
        #if defined(SHADER_API_FLASH)
        q.xy *= unity_NPOTScale.xy;
        #endif
        q.zw = position.zw;
        q.zw = 1.0f;
        o.screenPosition = q;

        return o;
    }

    sampler2D _GrabTexture;

    float4 frag(v2f IN) : COLOR {
        float3 uv = UNITY_PROJ_COORD(IN.projection);
        float depth = UNITY_SAMPLE_DEPTH(tex2Dproj(_CameraDepthTexture, uv));
        depth = LinearEyeDepth(depth);
        return saturate((depth - IN.screenPosition.w + _Depth) * _DepthScale);
    }
    ENDCG
}
Next, I want to rotate my fog to get a Y-depth (height-based) fog, but I don't know how to achieve this effect.
I see two ways to achieve what you want:
1. Render the depth of your plane to a texture and calculate fog based on the difference between the object's depth and the plane's depth: 0 if the object's depth is less, and (objDepth - planeDepth) * scale if it is greater.
2. Instead of rendering to a texture, calculate the distance to the plane directly in the shader and use it; see the sketch below.
I am not sure exactly what you are doing, since I am not very familiar with Unity surface shaders, but judging from the code and the result, it is something different.
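As a rough illustration of the second approach, here is a minimal sketch of plain height-based fog evaluated per rendered pixel (hedged: _FogHeight and this v2f layout are made up for the example, and adapting it to the depth-texture setup above would mean first reconstructing the world position of the depth sample):
// Pass the world-space position from the vertex shader.
struct v2f {
    float4 pos : SV_POSITION;
    float3 worldPos : TEXCOORD0;
};

float _FogHeight;   // hypothetical world-space height of the fog plane
float _DepthScale;

v2f vert(appdata_base v) {
    v2f o;
    o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
    o.worldPos = mul(unity_ObjectToWorld, v.vertex).xyz; // _Object2World on older Unity
    return o;
}

// Fog density grows with distance below the plane, independent of the camera.
float4 frag(v2f IN) : COLOR {
    float below = max(0.0, _FogHeight - IN.worldPos.y);
    return saturate(below * _DepthScale);
}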
It seems that this is caused by _CameraDepthTexture; that's why the depth is computed relative to the camera position.
But I don't know how to correct it... It seems that there is no way to get the depth from another point. Any ideas?
Here is another example. In green you can "see" the object, and the blue line is what the fog should look like, in my view.
First of all, I'm new to XNA and HLSL, so my knowledge is very limited.
I'm writing a small application to display a digital elevation model, consisting of 16-bit values, in 2D by using different colors for different heights.
The color mapping is done by a pixel shader via a lookup texture.
At the moment I'm putting the values into the red and green components of a texture2D and mapping them to colors in a 256x256 lookup texture.
As the coloring is discrete (not continuous), I set MinFilter/MagFilter to Point, which leads to a blocky look when zooming in.
Is there a way to get the linear filtering back after the lookup? Or does anybody know a better way to do the mapping?
Shader:
sampler2D tex1 : register(s0) = sampler_state
{
    MinFilter = Point;
    MagFilter = Point;
    MipFilter = Linear;
};

texture2D lookupTex;
sampler2D lookup = sampler_state
{
    Texture = <lookupTex>;
    MinFilter = Point;
    MagFilter = Point;
    MipFilter = Point;
};

float4 PixelShaderLookup(float4 incol : COLOR, float2 UV : TEXCOORD0) : COLOR0
{
    float4 inCol = tex2D(tex1, UV);

    half3 scale = (256 - 1.0) / 256;
    half3 offset = 1.0 / (2.0 * 256);

    float4 outCol = tex2D(lookup, scale * inCol.gr + offset);
    return outCol;
}
Thanks for your help and a happy new year :)
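A standard workaround, sketched below (hedged: untested against this exact setup, and texSize is an assumed uniform holding the height texture's dimensions), is to do the bilinear filtering manually after the palette lookup: point-sample the four neighboring height texels, run each through the lookup, and blend the resulting colors.
float2 texSize; // assumed: set from the application to the height texture size

// Same point-sampled palette lookup as above, factored into a helper.
float4 SampleLookup(float2 uv)
{
    float4 inCol = tex2D(tex1, uv);
    float scale = (256.0 - 1.0) / 256.0;
    float offset = 1.0 / (2.0 * 256.0);
    return tex2D(lookup, scale * inCol.gr + offset);
}

float4 PixelShaderLookupBilinear(float4 incol : COLOR, float2 UV : TEXCOORD0) : COLOR0
{
    float2 texel = 1.0 / texSize;
    // Center of the texel to the upper-left of UV, and the blend weights.
    float2 base = (floor(UV * texSize - 0.5) + 0.5) / texSize;
    float2 f = frac(UV * texSize - 0.5);
    // Look up the mapped color of each of the four neighbors...
    float4 c00 = SampleLookup(base);
    float4 c10 = SampleLookup(base + float2(texel.x, 0.0));
    float4 c01 = SampleLookup(base + float2(0.0, texel.y));
    float4 c11 = SampleLookup(base + texel);
    // ...and bilinearly blend the colors, not the raw height values.
    return lerp(lerp(c00, c10, f.x), lerp(c01, c11, f.x), f.y);
}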