Stretching a texture over a rectangular area in XNA

I have a problem with drawing a texture stretched over a rectangular area in XNA. For some background: it is a 3D project where, in one place, I'm putting up a big broadcast screen 'filming' another area. This is done with render states, and the texture itself is rendered perfectly well (I checked by rendering it to a file). The problem is that when I set it as the texture of my defined area, the mapped texture comes out as a grid of tiles, as you can see in the picture I'm uploading. The area the texture should be stretched over is defined like this:
VertexPositionNormalTexture[] vertices = new VertexPositionNormalTexture[6];
vertices[0].Position = new Vector3(0.0F, 20.0F, 0.0F);
vertices[0].TextureCoordinate.X = 0;
vertices[0].TextureCoordinate.Y = 0;
vertices[1].Position = new Vector3(50.0F, 0.0F, 0.0F);
vertices[1].TextureCoordinate.X = 1f;
vertices[1].TextureCoordinate.Y = 1f;
vertices[2].Position = new Vector3(0.0F, 0.0F, 0.0F);
vertices[2].TextureCoordinate.X = 0f;
vertices[2].TextureCoordinate.Y = 1f;
vertices[3].Position = new Vector3(50.0F, 0.0F, 0.0F);
vertices[3].TextureCoordinate.X = 1f;
vertices[3].TextureCoordinate.Y = 1f;
vertices[4].Position = new Vector3(0.0F, 20.0F, 0.0F);
vertices[4].TextureCoordinate.X = 0f;
vertices[4].TextureCoordinate.Y = 0f;
vertices[5].Position = new Vector3(50.0F, 20.0F, 0.0F);
vertices[5].TextureCoordinate.X = 1f;
vertices[5].TextureCoordinate.Y = 0f;
telebimBuffer = new VertexBuffer(GraphicsDevice, VertexPositionNormalTexture.VertexDeclaration, vertices.Length, BufferUsage.WriteOnly);
telebimBuffer.SetData<VertexPositionNormalTexture>(vertices);
What worries me is that I tried playing with the texture coordinates and could not see any difference :/. I'm rendering with my own effect file, which might be a source of problems; you can get the file here: http://gamma.mini.pw.edu.pl/~zabak/random/PhongAlternative.fx
The rendered texture is passed as "xTexture" to the effect file.
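Roughly, the wiring inside the effect looks like this (a simplified sketch, not a verbatim copy of the .fx linked above; the sampler states and the commented pixel-shader line are illustrative):
texture xTexture;
sampler TextureSampler = sampler_state
{
    Texture = <xTexture>;
    MinFilter = Linear;
    MagFilter = Linear;
    AddressU = Clamp; // with Wrap here, coordinates outside [0,1] would tile
    AddressV = Clamp;
};
// ...and in the pixel shader the interpolated coordinates are used directly:
// float4 texColor = tex2D(TextureSampler, PSIn.TextureCoords);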
Finally the screen showing my problem: http://gamma.mini.pw.edu.pl/~zabak/random/gk3d_2error.jpg
As you can see, I get a large number of tiles on the broadcast screen; the goal is to have the texture stretched over it.
(For the purpose of the screenshot, I turned off the lights by commenting out the
color = (color+specular+diffuse*diffuseIntensity+ambientLightColor)+light1+light2+light3;
line in my effect file.)
UPDATE:
I changed the declaration in the effect file from this one:
struct VertexShaderOutputPerPixelDiffuse
{
float4 Position : POSITION;
float3 WorldNormal : TEXCOORD0;
float3 WorldPosition : TEXCOORD1;
float2 TextureCoords: TEXCOORD2;
};
to this one:
struct VertexShaderOutputPerPixelDiffuse
{
float4 Position : POSITION;
float3 WorldNormal : TEXCOORD0;
float3 WorldPosition : TEXCOORD2;
float2 TextureCoords: TEXCOORD1;
};
Now the broadcast screen works, but the lights are messed up. Can someone please tell me what I'm doing wrong?

As pointed out by borillis, the issue was that my vertex shader output did not match the pixel shader input. I modified the pixel shader input as follows:
struct PixelShaderInputPerPixelDiffuse
{
float4 Position : POSITION;
float3 WorldNormal : TEXCOORD0;
float3 WorldPosition : TEXCOORD2;
float2 TextureCoords: TEXCOORD1;
};
struct VertexShaderOutputPerPixelDiffuse
{
float4 Position : POSITION;
float3 WorldNormal : TEXCOORD0;
float3 WorldPosition : TEXCOORD2;
float2 TextureCoords: TEXCOORD1;
};
And now it works well.

Related

DirectX 9 Normal Mapping Pixel Shader

I have a question about normal mapping in a DirectX 9 shader.
Currently, my terrain shader output for normal map + diffuse color only results in this image.
Which looks good to me.
If I use an empty normal map image like this one,
my shader output for normal, diffuse, and color map looks like this.
But if I use one that includes a color map, I get a really strange result.
Does anyone have an idea what could cause this issue?
Here are some snippets.
float4 PS_TERRAIN(VSTERRAIN_OUTPUT In) : COLOR0
{
float4 fDiffuseColor;
float lightIntensity;
float3 bumpMap = 2.0f * tex2D( Samp_Bump, In.Tex.xy ).xyz-1.0f;
float3 bumpNormal = (bumpMap.x * In.Tangent) + (bumpMap.y * In.Bitangent) + (bumpMap.z * In.Normal);
bumpNormal = normalize(bumpNormal);
// Direction Light Test ( Test hardcoded )
float3 lightDirection = float3(0.0f, -0.5f, -0.2f);
float3 lightDir = -lightDirection;
// Bump
lightIntensity = saturate(dot( bumpNormal, lightDir));
// We are using a lightmap to do our alpha calculation for given pixel
float4 LightMaptest = tex2D( Samp_Lightmap, In.Tex.zw ) * 2.0f;
fDiffuseColor.a = LightMaptest.a;
if( !bAlpha )
fDiffuseColor.a = 1.0;
// Sample the pixel color from the texture using the sampler at this texture coordinate location.
float4 textureColor = tex2D( Samp_Diffuse, In.Tex.xy );
// Combine the color map value into the texture color.
textureColor = saturate(textureColor * LightMaptest);
textureColor.a = LightMaptest.a;
fDiffuseColor.rgb = saturate(lightIntensity * I_d).rgb;
fDiffuseColor = fDiffuseColor * textureColor; // If i enable this line it goes crazy
return fDiffuseColor;
}
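A small sanity check on the decode above (just the arithmetic, not an answer to the question): a "flat" normal-map texel samples as (0.5, 0.5, 1.0), so
\[
2 \cdot (0.5,\ 0.5,\ 1.0) - 1 = (0,\ 0,\ 1),
\]
and bumpNormal = 0·Tangent + 0·Bitangent + 1·Normal = In.Normal, i.e. a flat map leaves the interpolated vertex normal unchanged.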

Why sPos.z is used to get the texcoord in shadow mapping

Why is sPos.z used here to get the texcoord?
Out.shadowCrd.x = 0.5 * (sPos.z + sPos.x);
Out.shadowCrd.y = 0.5 * (sPos.z - sPos.y);
Out.shadowCrd.z = 0;
Out.shadowCrd.w = sPos.z;
It is a shader that implements shadow mapping from "Shaders for Game Programming and Artists".
The first pass renders the depth texture in light space (the light is the camera and looks towards the origin).
The second pass reads the depth and calculates the shadow.
Before this code, the model has already been transformed into light space.
Then the texcoord needs to be calculated to read the depth texture.
But I can't understand the algorithm for calculating the texcoord. Why is sPos.z used here?
Here is the whole vertex shader of the second pass
float distanceScale;
float4 lightPos;
float4 view_position;
float4x4 view_proj_matrix;
float4x4 proj_matrix;
float time_0_X;
struct VS_OUTPUT
{
float4 Pos: POSITION;
float3 normal: TEXCOORD0;
float3 lightVec : TEXCOORD1;
float3 viewVec: TEXCOORD2;
float4 shadowCrd: TEXCOORD3;
};
VS_OUTPUT vs_main(float4 inPos: POSITION, float3 inNormal: NORMAL)
{
VS_OUTPUT Out;
// Animate the light position.
float3 lightPos;
lightPos.x = cos(1.321 * time_0_X);
lightPos.z = sin(0.923 * time_0_X);
lightPos.xz = 100 * normalize(lightPos.xz);
lightPos.y = 100;
// Project the object's position
Out.Pos = mul(view_proj_matrix, inPos);
// World-space lighting
Out.normal = inNormal;
Out.lightVec = distanceScale * (lightPos - inPos.xyz);
Out.viewVec = view_position - inPos.xyz;
// Create view vectors for the light, looking at (0,0,0)
float3 dirZ = -normalize(lightPos);
float3 up = float3(0,0,1);
float3 dirX = cross(up, dirZ);
float3 dirY = cross(dirZ, dirX);
// Transform into light's view space.
float4 pos;
inPos.xyz -= lightPos;
pos.x = dot(dirX, inPos);
pos.y = dot(dirY, inPos);
pos.z = dot(dirZ, inPos);
pos.w = 1;
// Project it into light space to determine the shadow
// map position
float4 sPos = mul(proj_matrix, pos);
// Use projective texturing to map the position of each fragment
// to its corresponding texel in the shadow map.
sPos.z += 10;
Out.shadowCrd.x = 0.5 * (sPos.z + sPos.x);
Out.shadowCrd.y = 0.5 * (sPos.z - sPos.y);
Out.shadowCrd.z = 0;
Out.shadowCrd.w = sPos.z;
return Out;
}
Pixel Shader:
float shadowBias;
float backProjectionCut;
float Ka;
float Kd;
float Ks;
float4 modelColor;
sampler ShadowMap;
sampler SpotLight;
float4 ps_main(
float3 inNormal: TEXCOORD0,
float3 lightVec: TEXCOORD1,
float3 viewVec: TEXCOORD2,
float4 shadowCrd: TEXCOORD3) : COLOR
{
// Normalize the normal
inNormal = normalize(inNormal);
// Radial distance and normalize light vector
float depth = length(lightVec);
lightVec /= depth;
// Standard lighting
float diffuse = saturate(dot(lightVec, inNormal));
float specular = pow(saturate(
dot(reflect(-normalize(viewVec), inNormal), lightVec)),
16);
// The depth of the fragment closest to the light
float shadowMap = tex2Dproj(ShadowMap, shadowCrd);
// A spot image of the spotlight
float spotLight = tex2Dproj(SpotLight, shadowCrd);
// If the depth is larger than the stored depth, this fragment
// is not the closest to the light, that is we are in shadow.
// Otherwise, we're lit. Add a bias to avoid precision issues.
float shadow = (depth < shadowMap + shadowBias);
// Cut back-projection, that is, make sure we don't light
// anything behind the light.
shadow *= (shadowCrd.w > backProjectionCut);
// Modulate with spotlight image
shadow *= spotLight;
// Shadow any light contribution except ambient
return Ka * modelColor +
(Kd * diffuse * modelColor + Ks * specular) * shadow;
}
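One way to read the shadowCrd setup in the vertex shader (my reading of the code, not taken from the book): tex2Dproj divides shadowCrd.xy by shadowCrd.w, and since w is set to sPos.z, sPos.z plays the role of the homogeneous divisor:
\[
u = \frac{0.5\,(\text{sPos.z} + \text{sPos.x})}{\text{sPos.z}} = 0.5\left(1 + \frac{\text{sPos.x}}{\text{sPos.z}}\right),
\qquad
v = \frac{0.5\,(\text{sPos.z} - \text{sPos.y})}{\text{sPos.z}} = 0.5\left(1 - \frac{\text{sPos.y}}{\text{sPos.z}}\right),
\]
so the perspective divide and the [-1, 1] to [0, 1] remap are folded into one expression, with v flipped because texture coordinates increase downwards.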

XNA 4.0 sprite + MRT + pixel shader

I'm trying to combine two render targets, color and normal, for diffuse lighting and render the result to the screen. The idea is to use a sprite with an effect containing only a pixel shader to combine the render targets, read back as textures.
XNA code:
GraphicsDevice.SetRenderTarget(null);
GraphicsDevice.Clear(ClearOptions.Target | ClearOptions.DepthBuffer, Color.Black, 1.0f, 0);
effect.CurrentTechnique = effect.Techniques["show_buffer"];
effect.Parameters["normalTex"].SetValue(normalRendertarget);
effect.Parameters["colorTex"].SetValue(colorRendertarget);
effect.Parameters["AmbientIntensity"].SetValue(ambientIntesity);
effect.Parameters["LightDirection"].SetValue(lightDirection);
effect.Parameters["DiffuseIntensity"].SetValue(diffuseIntensity);
spriteBatch.Begin(0, BlendState.Opaque, null, null, null,effect);
spriteBatch.Draw(normalRT, Vector2.Zero, Color.White);
spriteBatch.End();
For some reason, the render target passed to spriteBatch.Draw() influences the result.
Pixel Shader:
void Tex_PixelShader(float2 texCoord : TEXCOORD0, out float4 color : COLOR0)
{
float4 normal = tex2D(normalTexSampler, texCoord);
//transform normal back into [-1,1] range
normal.rgb = (normal.rgb*2)-1;
float4 baseColor = tex2D(colorTexSampler, texCoord);
float3 lightDirectionNorm = normalize(LightDirection);
float diffuse = saturate(dot(-lightDirectionNorm,normal.rgb));
//only works with normalRT in spriteBatch.Draw()
//colorRT in spriteBatch.Draw() gives colorRT but darker as result
color = float4 (baseColor.rgb * (AmbientIntensity + diffuse*DiffuseIntensity), 1.0f);
//only works with colorRT in spriteBatch.Draw()
//normalRT in spriteBatch.Draw() gives normalRT as result
//color = tex2D(colorTexSampler, texCoord);
//only works with NormalRT
//colorRT in spriteBatch.Draw() gives colorRT as result
//color=tex2D(normalTexSampler, texCoord);
// works with any rendertarget in spriteBatch.Draw()
//color = float4(0.0f,1.0f,0.0f,1.0f);
}
The alpha value in both render targets is always 1. Adding a vertex shader to the effect results in black. Drawing either render target with spriteBatch.Draw() and no effect shows that the content of each render target is fine. I can't make sense of this. Any ideas?
Setting the textures with GraphicsDevice.Textures[1] = tex; instead of effect.Parameters["tex"].SetValue(tex); worked. Thanks, Andrew.
I changed the XNA code to:
GraphicsDevice.SetRenderTarget(null);
GraphicsDevice.Clear(ClearOptions.Target | ClearOptions.DepthBuffer, Color.Black, 1.0f, 0);
effect.CurrentTechnique = effect.Techniques["show_buffer"];
GraphicsDevice.Textures[1] = normalRT; //changed
GraphicsDevice.Textures[2] = colorRT; //changed
effect.Parameters["AmbientIntensity"].SetValue(ambientIntesity);
effect.Parameters["LightDirection"].SetValue(lightDirection);
effect.Parameters["DiffuseIntensity"].SetValue(diffuseIntensity);
spriteBatch.Begin(0, BlendState.Opaque, null, null, null,effect);
spriteBatch.Draw((Texture2D)colorRT, Vector2.Zero, Color.White);
spriteBatch.End();
And the shader code to:
sampler normalSampler: register(s1); //added
sampler colorSampler: register(s2); //added
void Tex_PixelShader(float2 texCoord : TEXCOORD0, out float4 color : COLOR0)
{
float4 normal = tex2D(normalSampler, texCoord); //changed
normal.rgb = (normal.rgb*2)-1;
float4 baseColor = tex2D(colorSampler, texCoord); //changed
float3 lightDirectionNorm = normalize(LightDirection);
float diffuse = saturate(dot(-lightDirectionNorm,normal.rgb));
color = float4 (baseColor.rgb * (AmbientIntensity + diffuse*DiffuseIntensity), 1.0f);
}
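If I understand SpriteBatch correctly (this is my understanding of why the fix works, not something verified against the XNA source), the texture passed to spriteBatch.Draw() is bound by SpriteBatch itself to sampler register s0 when the batch is flushed, which is why the explicit s1/s2 registers above are left alone:
// s0 - implicitly bound by SpriteBatch to the texture passed to Draw()
// s1 - bound manually via GraphicsDevice.Textures[1] = normalRT
// s2 - bound manually via GraphicsDevice.Textures[2] = colorRT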

Applying Perspective Transformation to Geometry and Passing Constant Buffers to Shader

I am attempting to apply a perspective transformation in DirectX 11 to a rendered cube centered at the origin (0, 0, 0) with edges that span 1.0 unit (-0.5 to 0.5). However, I am not seeing anything render. I have tried the following:
shaders.hlsl
cbuffer VSHADER_CB
{
matrix mWorld;
matrix mView;
matrix mProj;
};
struct VOut
{
float4 position : SV_POSITION;
float4 color : COLOR;
};
VOut VShader(float4 position : POSITION, float4 color : COLOR)
{
VOut output;
output.position = mul(position, mWorld);
output.position = mul(output.position, mView);
output.position = mul(output.position, mProj);
output.color = color;
return output;
}
...
void InitConstantBuffer()
...
D3DXVECTOR3 position(0.0f, 0.0f, -5.0f);
D3DXVECTOR3 lookAt(0.0f, 0.0f, 0.0f);
D3DXVECTOR3 up(0.0f, 1.0f, 0.0f);
D3DXMatrixIdentity(&(cbMatrix.mWorld));
D3DXMatrixLookAtLH(&(cbMatrix.mView), &position, &lookAt, &up);
D3DXMatrixPerspectiveFovLH(&(cbMatrix.mProj), 70.0f, (FLOAT)(width / height), 1.0f, 100.0f);
D3D11_BUFFER_DESC cbd;
ZeroMemory(&cbd, sizeof(D3D11_BUFFER_DESC));
cbd.BindFlags = D3D11_BIND_CONSTANT_BUFFER;
cbd.ByteWidth = sizeof(cbMatrix);
cbd.Usage = D3D11_USAGE_DEFAULT;
D3D11_SUBRESOURCE_DATA cbdInitData;
ZeroMemory(&cbdInitData, sizeof(D3D11_SUBRESOURCE_DATA));
cbdInitData.pSysMem = &cbMatrix;
mD3DDevice->CreateBuffer(&cbd, &cbdInitData, &mD3DCBuffer);
mD3DImmediateContext->VSSetConstantBuffers(0, 1, &mD3DCBuffer);
When I simply do not include any transformations (output.position = position) in the shader file, everything renders correctly and I see the front face of the cube. Is this all I need to do to pass constant buffers to my shader and use them properly? What am I missing here?
I figured out the answer to my own question: I needed to transpose the matrices by calling D3DXMatrixTranspose() before sending them to the shader. (D3DX stores matrices row-major, while HLSL packs constant-buffer matrices column-major by default, so without the transpose the shader reads them transposed.)

Unusual camera behaviour, DX11

I've been writing a program using DirectX 11, and I have written a basic camera class which manipulates a view matrix. When I test the program, the scene does not move, but moving the camera has the effect of cutting off what is visible at an arbitrary location. I've attached some pictures to show what I mean.
I have left my pixel shader only outputting red pixels for now.
My vertex shader is based on the SDK example:
cbuffer cbChangeOnResize : register(b1)
{
matrix Projection;
};
cbuffer cbChangesEveryFrame : register(b2)
{
matrix View;
matrix World;
};
struct VS_INPUT
{
float4 Pos : POSITION;
float2 Tex : TEXCOORD0;
};
struct PS_INPUT
{
float4 Pos : SV_POSITION;
float2 Tex : TEXCOORD0;
};
PS_INPUT TEX_VS(VS_INPUT input)
{
PS_INPUT output = (PS_INPUT)0;
output.Pos = mul(input.Pos, World);
output.Pos = mul(output.Pos, View);
output.Pos = mul(output.Pos, Projection);
output.Tex = input.Tex;
return output;
}
I have been scratching my head over this problem for a couple of days, but I don't know what is causing it, or even which pieces of code are relevant. PIX shows that the world, view, and projection matrices exist and are being applied, although it is evident that something is not right.
Thank you.
You can use the row_major modifier instead of transposing the matrices before passing them to the shader.
Mathematical fail: I had sent the view matrix instead of its transpose to the shader.
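For example, applied to the constant buffer above, the row_major option would look like this (just a sketch of the alternative mentioned, not tested against this project):
cbuffer cbChangesEveryFrame : register(b2)
{
    row_major matrix View;  // matches the row-major layout of the CPU-side matrices,
    row_major matrix World; // so no transpose is needed before uploading
};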
