I'm trying to render a depth texture in XNA 4.0. I've read a few different tutorials several times and really cannot understand what I'm doing wrong.
Depth shader:
float4x4 WVPMatrix;

struct VertexShaderOutput
{
    float4 Position : position0;
    float Depth : texcoord0;
};

VertexShaderOutput VertexShader1(float4 pPosition : position0)
{
    VertexShaderOutput output;
    output.Position = mul(pPosition, WVPMatrix);
    output.Depth.x = 1 - (output.Position.z / output.Position.w);
    return output;
}

float4 PixelShader1(VertexShaderOutput pOutput) : color0
{
    return float4(pOutput.Depth.x, 0, 0, 1);
}

technique Technique1
{
    pass Pass1
    {
        AlphaBlendEnable = false;
        ZEnable = true;
        ZWriteEnable = true;

        VertexShader = compile vs_2_0 VertexShader1();
        PixelShader = compile ps_2_0 PixelShader1();
    }
}
Drawing:
this.depthRenderTarget = new RenderTarget2D(
    this.graphicsDevice,
    this.graphicsDevice.PresentationParameters.BackBufferWidth,
    this.graphicsDevice.PresentationParameters.BackBufferHeight);
...
public void Draw(GameTime pGameTime, Camera pCamera, Effect pDepthEffect, Effect pOpaqueEffect, Effect pNotOpaqueEffect)
{
    this.graphicsDevice.SetRenderTarget(this.depthRenderTarget);
    this.graphicsDevice.Clear(Color.CornflowerBlue);

    this.DrawChunksDepth(pGameTime, pCamera, pDepthEffect);

    this.graphicsDevice.SetRenderTarget(null);

    this.spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.Opaque, SamplerState.PointClamp, null, null);
    this.spriteBatch.Draw(this.depthRenderTarget, Vector2.Zero, Color.White);
    this.spriteBatch.End();
}

private void DrawChunksDepth(GameTime pGameTime, Camera pCamera, Effect pDepthEffect)
{
    // ...
    this.graphicsDevice.RasterizerState = RasterizerState.CullClockwise;
    this.graphicsDevice.DepthStencilState = DepthStencilState.Default;
    // draw mesh with pDepthEffect
}
Result: (screenshot omitted)
As far as I can see, output.Position.z always equals output.Position.w, but why?
There are several depth definitions that might be useful. Here are some of them.
The easiest is the z-coordinate in camera space (i.e. after applying the world and view transforms). It usually has the same units as the world coordinate system, and it is linear. However, it is always measured parallel to the view direction. This means that moving left/right or up/down does not change the distance, because you stay on the same plane (parallel to the znear/zfar clipping planes). A slight variation is z/far, which just scales the values to the [0, 1] interval.
If real distances (in the Euclidean metric) are needed, you have to calculate them in the shader. If you just need coarse values, the vertex shader is enough. If the values should be accurate, do this in the pixel shader. Basically, you calculate the length of the position vector after applying the world and view transforms. Units are equal to world-space units.
Depth-buffer depth is non-linear and optimized for depth buffering. This is the depth that results from the projection transform and the following homogeneous divide by w. The near clipping plane is mapped to a depth of 0, the far clipping plane to a depth of 1. If you move a very near pixel along the view direction, its depth value changes a lot more than that of a far pixel moved by the same amount. This is because display errors (due to floating-point imprecision) are a lot more visible at near pixels than at far ones. This depth, too, is measured parallel to the view direction.
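To make the first two variants concrete, here is a minimal, untested HLSL sketch (World, View, Projection and FarClip are assumed to be set by the application; note that in XNA's right-handed view space, points in front of the camera have negative z):
float4x4 World;
float4x4 View;
float4x4 Projection;
float FarClip;

struct DepthVSOutput
{
    float4 Position : POSITION0;
    float3 ViewPos  : TEXCOORD0;
};

DepthVSOutput DepthVS(float4 pos : POSITION0)
{
    DepthVSOutput output;
    float4 viewPos  = mul(mul(pos, World), View); // camera space
    output.Position = mul(viewPos, Projection);
    output.ViewPos  = viewPos.xyz;
    return output;
}

float4 DepthPS(float3 viewPos : TEXCOORD0) : COLOR0
{
    float linearDepth = -viewPos.z / FarClip;  // variant 1: linear, measured along the view direction
    float dist = length(viewPos) / FarClip;    // variant 2: true Euclidean distance, per pixel
    return float4(linearDepth, dist, 0, 1);
}

technique LinearDepth
{
    pass P0
    {
        VertexShader = compile vs_2_0 DepthVS();
        PixelShader = compile ps_2_0 DepthPS();
    }
}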
Thanks for taking the time to check out my issue.
I am working on improving the ocean in my first attempt at a game. I have decided on using a bump map over my ocean tiles to add a little texture to the water. To do this, I draw my water tiles to a render target and then apply a pixel shader while drawing the render target to the backbuffer.
The problem I am having is that the pixel shader seems to offset or displace the position of the render target that is drawn. Observe these two photos (screenshots omitted):
The first image is the game without running the pixel shader. Notice the "shallow water" around the islands, which is a solid color here.
When the pixel shader is run, that shallow water is offset to the right consistently.
I am using the bump map provided in Riemer's novice bump mapping tutorial. One possible thought I had was that the dimensions of this bump map do not match the render target I am applying it to. However, I'm not entirely sure how I would create/resize this bump map.
My HLSL pixel shader looks like this:
#if OPENGL
    #define SV_POSITION POSITION
    #define VS_SHADERMODEL vs_3_0
    #define PS_SHADERMODEL ps_3_0
#else
    #define VS_SHADERMODEL vs_4_0_level_9_1
    #define PS_SHADERMODEL ps_4_0_level_9_1
#endif

matrix WorldViewProjection;
float xWaveLength;
float xWaveHeight;

texture bumpMap;
sampler2D bumpSampler = sampler_state
{
    Texture = <bumpMap>;
};

texture water;
sampler2D waterSampler = sampler_state
{
    Texture = <water>;
};
// MAG,MIN,MIRROR SETTINGS? SEE RIEMERS

struct VertexShaderInput
{
    float4 Position : POSITION0;
    float2 TextureCords : TEXCOORD;
    float4 Color : COLOR0;
};

struct VertexShaderOutput
{
    float4 Pos : SV_POSITION;
    float2 BumpMapSamplingPos : TEXCOORD2;
    float4 Color : COLOR0;
};

VertexShaderOutput MainVS(in VertexShaderInput input)
{
    VertexShaderOutput output = (VertexShaderOutput)0;
    output.BumpMapSamplingPos = input.TextureCords / xWaveLength;
    output.Pos = mul(input.Position, WorldViewProjection);
    output.Color = input.Color;
    return output;
}

float4 MainPS(float4 pos : SV_POSITION, float4 color1 : COLOR0, float2 texCoord : TEXCOORD0) : COLOR
{
    float4 bumpColor = tex2D(bumpSampler, texCoord.xy);
    // get offset
    float2 perturbation = xWaveHeight * (bumpColor.rg - 0.5f) * 2.0f;
    // apply offset to coordinates in original texture
    float2 currentCoords = texCoord.xy;
    float2 perturbatedTexCoords = currentCoords + perturbation;
    // return the perturbed values
    float4 color = tex2D(waterSampler, perturbatedTexCoords);
    return color;
}

technique oceanRipple
{
    pass P0
    {
        //VertexShader = compile VS_SHADERMODEL MainVS();
        PixelShader = compile PS_SHADERMODEL MainPS();
    }
};
And my MonoGame draw call looks like this:
public void DrawMap(SpriteBatch sbWorld, SpriteBatch sbStatic, RenderTarget2D worldScene, GameTime gameTime)
{
    // Set Water RenderTarget
    _graphics.SetRenderTarget(waterScene);
    _graphics.Clear(Color.CornflowerBlue);

    sbWorld.Begin(_cam, SpriteSortMode.Texture);
    foreach (var t in BoundingBoxLocations.OceanTileLocationList)
    {
        TilePiece tile = (TilePiece)t;
        tile.DrawTile(sbWorld);
    }
    sbWorld.End();

    // set up gamescene draw
    _graphics.SetRenderTarget(worldScene);
    _graphics.Clear(Color.PeachPuff);

    // water
    sbWorld.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend);
    oceanRippleEffect.Parameters["bumpMap"].SetValue(waterBumpMap);
    oceanRippleEffect.Parameters["water"].SetValue(waterScene);
    //oceanRippleEffect.Parameters["xWaveLength"].SetValue(3f);
    oceanRippleEffect.Parameters["xWaveHeight"].SetValue(0.3f);
    ExecuteTechnique("oceanRipple");
    sbWorld.Draw(waterScene, Vector2.Zero, Color.White);
    sbWorld.End();

    // land
    sbWorld.Begin(_cam, SpriteSortMode.Texture);
    foreach (var t in BoundingBoxLocations.LandTileLocationList)
    {
        TilePiece tile = (TilePiece)t;
        tile.DrawTile(sbWorld);
    }
    sbWorld.End();
}
Can anyone see any issues with my code or otherwise that might be causing this offset issue?
Any help is much appreciated. Thanks!
EDIT
If I modify the xWaveHeight shader parameter, it changes where the offset appears. A value of 0 will not offset, but then the bump mapping is not applied. Is there any way around this?
I understand that the offset is being caused by the pixel-shader perturbation, but I'm wondering if there is a way to undo this offset while preserving the bump mapping. In the linked Riemer's tutorial, a vertex shader is included. I'm not quite sure if I need it, but when I include my vertex shader in the technique and modify the pixel shader to the following, no water is drawn.
float4 MainPS(in VertexShaderOutput output) : COLOR
{
    float4 bumpColor = tex2D(bumpSampler, output.BumpMapSamplingPos.xy);
    // get offset
    float2 perturbation = xWaveHeight * (bumpColor.rg - 0.5f) * 2.0f;
    // apply offset to coordinates in original texture
    float2 currentCoords = output.BumpMapSamplingPos.xy;
    float2 perturbatedTexCoords = currentCoords + perturbation;
    // return the perturbed values
    float4 color = tex2D(waterSampler, perturbatedTexCoords);
    return color;
}
First of all, for what you seem to be trying to do, bump mapping is actually the wrong approach: bump mapping is about changing the surface normal (basically "rotating" the pixel in 3D space), so that following light calculations (such as reflection) see your surface as more complex than it really is (notice that the texture of that pixel stays where it is). So bump mapping would not modify the position of the ocean tile texture at all; it would modify what is reflected by the ocean (for example, by changing the sample position of a skybox, so the reflection of the sky in the water is distorted). The way you are implementing it is more like "what if my screen were an ocean and reflected an image of tiles with ocean textures".
If you really want to use bump mapping, you would need some kind of big sky texture, and then, while (not after) drawing the ocean tiles, you would calculate a sample position for the reflection of this sky texture (based on the position of the tile on the screen) and then modify that sample position with bump mapping. All while drawing the tiles, not after drawing them to a render target.
It is also possible to do this deferred (more similar to what you are doing now) - actually, there are multiple ways of doing so - but either way you would still need to sample the final color from a sky texture, not from the render target your tiles were drawn on. The render target for your tiles would instead contain "meta" information (depending on how exactly you want to do this). This information could be a color that is multiplied with the color from the sky texture (creating "colored" water, e.g. for different biomes or to simulate sunsets/sunrises), or a simple 1 or 0 to tell whether or not there is any ocean, or a per-tile bump map (which would allow you to apply a "screen-global" and a "per-tile" bump mapping in one go; you would still need a way to say "this pixel is not ocean, don't do anything for it" in the render target), or - if you use multiple render targets - all of these at once. Either way, the sample position used to read from your render target(s) is not modified by bump mapping; only the sample position of the texture that is reflected by the ocean is. That way, there's also no displacement of the ocean, since we aren't touching those sample positions at all. A sketch of one such deferred variant follows.
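This is an untested sketch with hypothetical names (skyTexture is the reflected image; the tiles' render target, read through the question's waterSampler, is assumed to hold a tint in rgb and an "is ocean" flag in alpha):
texture skyTexture;
sampler2D skySampler = sampler_state
{
    Texture = <skyTexture>;
    AddressU = Wrap;
    AddressV = Wrap;
};

float4 DeferredOceanPS(float2 texCoord : TEXCOORD0) : COLOR
{
    float4 meta = tex2D(waterSampler, texCoord);   // meta target: rgb = tint, a = ocean mask
    if (meta.a < 0.5f)
        return float4(0, 0, 0, 0);                 // not ocean: leave this pixel untouched

    float4 bumpColor = tex2D(bumpSampler, texCoord / xWaveLength);
    float2 perturbation = xWaveHeight * (bumpColor.rg - 0.5f) * 2.0f;

    // The perturbation distorts the reflected sky sample, never the tile render target itself.
    float4 sky = tex2D(skySampler, texCoord + perturbation);
    return float4(sky.rgb * meta.rgb, 1);
}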
Now, to create a look that is closer to what you seem to want (judging by your images), you wouldn't use bump mapping, but instead apply a small noise to the sample position in your pixel shader (the rest of the code doesn't need to change). For that, your shader would look more like this:
texture noiseTexture;
sampler2D noiseSampler = sampler_state
{
    Texture = <noiseTexture>;
    MipFilter = LINEAR;
    MinFilter = LINEAR;
    MagFilter = LINEAR;
    AddressU = Wrap;
    AddressV = Wrap;
};

float2 noiseOffset;
float2 noisePower;
float noiseFrequency;

VertexShaderOutput MainVS(in VertexShaderInput input)
{
    VertexShaderOutput output = (VertexShaderOutput)0;
    output.Pos = mul(input.Position, WorldViewProjection);
    output.Color = input.Color;
    return output;
}

float4 MainPS(float4 pos : SV_POSITION, float4 color1 : COLOR0, float2 texCoord : TEXCOORD0) : COLOR
{
    float4 noise = tex2D(noiseSampler, (texCoord.xy + noiseOffset.xy) * noiseFrequency);
    float2 offset = noisePower * (noise.xy - 0.5f) * 2.0f;
    float4 color = tex2D(waterSampler, texCoord.xy + offset.xy);
    return color;
}
Here, noisePower would be (at most) approximately 1 over the number of horizontal/vertical tiles on the screen (e.g. with 16 tiles across the screen, about 1/16 = 0.0625), noiseOffset can be used to "move" the noise across the screen over time (it should stay in the range [-1, 1]), and noiseFrequency is an artistic parameter (I would start with twice the max noise power and tweak from there; higher values make the ocean more distorted). This way, the border of the tiles is distorted, but never moved more than one tile in any direction (thanks to the noisePower parameter). It is also important to use the right kind of noise texture here: white noise, blue noise, maybe a "not really noise" texture built out of sine waves, etc. What matters is that the "average" value of each pixel is about 0.5, so there is no overall displacement, and that the values are well distributed across the texture. Apart from that, see what kind of noise looks best to you.
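On the application side, the parameters might be set per frame like this (a hedged sketch following the rules of thumb above; the values are illustrative, it assumes roughly 16 ocean tiles across the screen, and it reuses the question's oceanRippleEffect and gameTime):
// Hypothetical values; parameter names match the noise shader sketch above.
oceanRippleEffect.Parameters["noisePower"].SetValue(new Vector2(1f / 16f, 1f / 16f));
oceanRippleEffect.Parameters["noiseFrequency"].SetValue(0.125f); // ~2x the max noise power

// Scroll the noise over time to animate the water; wrap to keep the offset in [-1, 1].
float t = (float)gameTime.TotalGameTime.TotalSeconds;
oceanRippleEffect.Parameters["noiseOffset"].SetValue(new Vector2(t * 0.05f % 1f, t * 0.03f % 1f));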
Side note on the shader code: I haven't tested it. Just so you know - not that there's much room for mistakes.
Edit: As a side note: of course the sky texture doesn't need to actually look like a sky ;)
I am using XNA with an HLSL shader, and I have a problem with transparency in the shader: when rendering two models facing each other, the model behind is visible only when it is rendered first.
Let me explain...
blue cone = vector3(0,0,0) - first target
green cone = vector3(50,0,50) - second target
Here, rendering the blue cone first and then the green one, the blue cone can be seen through the green (screenshot omitted).
Now the other way around - first the green, then the blue - and the blue cone cannot be seen (screenshot omitted).
As long as there are only two cones, I can calculate their distance from the camera and render the more distant one first (the only solution I found by searching the net). But if I have several models of various sizes, it can happen that a model A is more distant than a model B, yet its size causes it to hide model B.
Here is some of the code I use:
.fx file
float4x4 World;
float4x4 View;
float4x4 Projection;
float3 Color;

texture ColorTexture : DIFFUSE;
sampler2D ColorSampler = sampler_state
{
    Texture = <ColorTexture>;
    FILTER = MIN_MAG_MIP_LINEAR;
    AddressU = Wrap;
    AddressV = Wrap;
};

struct VertexShaderInput
{
    float4 Position : POSITION0;
    float3 UV : TEXCOORD0;
};

struct VertexShaderOutput
{
    float4 Position : POSITION0;
    float3 UV : TEXCOORD0;
};

VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;
    float4 worldPosition = mul(input.Position, World);
    float4 viewPosition = mul(worldPosition, View);
    float4 projPosition = mul(viewPosition, Projection);
    output.Position = projPosition;
    output.UV = input.UV;
    return output;
}

float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
    float4 map = tex2D(ColorSampler, input.UV);
    return float4(map.rgb * Color, map.a);
}

technique Textured
{
    pass Pass1
    {
        ZEnable = true;
        ZWriteEnable = true;
        AlphaBlendEnable = true;
        DestBlend = DestAlpha;
        SrcBlend = BlendFactor;

        VertexShader = compile vs_3_0 VertexShaderFunction();
        PixelShader = compile ps_3_0 PixelShaderFunction();
    }
}
Draw code in the XNA project:
protected override void Draw(GameTime gameTime)
{
    GraphicsDevice.Clear(Color.DarkGray);

    for (int i = 0; i < 2; i++)
    {
        ModelEffect.Parameters["View"].SetValue(this.View);
        ModelEffect.Parameters["Projection"].SetValue(this.Projection);
        ModelEffect.Parameters["ColorTexture"].SetValue(this.colorTextured);
        ModelEffect.CurrentTechnique = ModelEffect.Techniques["Textured"];

        Vector3 Scala = new Vector3(2, 3, 1);
        if (i == 0)
        {
            World = Matrix.CreateScale(Scala) * Matrix.CreateWorld(FirstTarget, Vector3.Forward, Vector3.Up);
            ModelEffect.Parameters["Color"].SetValue(new Vector3(0, 0, 255)); // blue
        }
        else
        {
            World = Matrix.CreateScale(Scala) * Matrix.CreateWorld(SecondTarget, Vector3.Forward, Vector3.Up);
            ModelEffect.Parameters["Color"].SetValue(new Vector3(0, 255, 0)); // green
        }
        ModelEffect.Parameters["World"].SetValue(World);

        foreach (ModelMesh mesh in Emitter.Meshes)
        {
            foreach (ModelMeshPart effect in mesh.MeshParts)
            {
                effect.Effect = ModelEffect;
            }
            mesh.Draw();
        }
    }
    base.Draw(gameTime);
}
I need the for loop to change the rendering order...
Is there any procedure to work around this problem?
I hope I explained myself. I think the fix has to be in the .fx file... or not?
I'm completely lost :)
This is a classic computer-graphics problem - what you'll need to do depends on your specific model and needs, but transparency is, as you've discovered, order-dependent. As Nico mentioned, if you sort entire meshes back-to-front (drawing the back ones first) each frame, you will be okay. But what about curved surfaces that sometimes need to draw in front of themselves (that is, they are self-occluding from the camera's point of view)? Then you have to go much farther and sort the polys within the mesh (adios, high performance!). If you don't sort, chances are the order will look correct 50% of the time or less (unless your model is posed just right).
If you are drawing transparent cones with two-sided rendering enabled, they will look correct sometimes and wrong at other times as they rotate to different views - most of the time wrong.
One option is to simply turn off depth writes during the pass(es) where you draw the transparent items, as sketched below. Again, YMMV according to the scene's needs, but this can be a useful fix in many cases. Another is to segment the model and sort the meshes.
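A minimal XNA sketch of that option (device, transparentModels, cameraPosition and the model members are hypothetical names, not the asker's code):
// Opaque geometry is drawn first with DepthStencilState.Default, then:
device.BlendState = BlendState.AlphaBlend;
device.DepthStencilState = DepthStencilState.DepthRead; // depth test on, depth writes off

// Sorting back-to-front still helps; depth-read keeps occlusion by opaque geometry correct.
transparentModels.Sort((a, b) =>
    Vector3.DistanceSquared(cameraPosition, b.Position).CompareTo(
        Vector3.DistanceSquared(cameraPosition, a.Position)));
foreach (var model in transparentModels)
    model.Draw();

device.DepthStencilState = DepthStencilState.Default; // restore for the next frame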
In games, many strategies have been followed, including re-ordering the models by hand, forcing the artists to limit transparency to certain passes only, drawing two passes per transparent layer (one with transparent color, and another with opaque color but no color write, to get the Z-buffer correct), sending models back for a re-do, or even - if the errors are small or the action is fast - just accepting broken transparency.
Various solutions have been proposed to this general problem - "OIT" (order-independent transparency) is a big enough topic to get its own Wikipedia page: http://en.wikipedia.org/wiki/Order-independent_transparency
I'm trying to store the depth information of a model in XNA from the light's point of view. The model renders fine using a normal shader.
This is the shader code:
float4x4 xShadowMapLight;

/// DEPTH MAP
float4 RenderShadowMapPS(VS_SHADOW_OUTPUT In) : COLOR
{
    return float4(1, 0, 0, 0); // In.Depth.x, In.Depth.x, In.Depth.x, 0);
}

VS_SHADOW_OUTPUT RenderShadowMapVS(float4 vPos : POSITION)
{
    VS_SHADOW_OUTPUT Out;
    Out.Position = mul(vPos, xShadowMapLight);
    Out.Depth.x = 1 - (Out.Position.z / Out.Position.w);
    return Out;
}

technique ToonShaderDepthMap
{
    pass P0
    {
        VertexShader = compile vs_2_0 RenderShadowMapVS();
        PixelShader = compile ps_2_0 RenderShadowMapPS();
    }
}
And this is the way I generate the light matrix that is passed to the shader.
Matrix view = Matrix.CreateLookAt(modelTransform.Translation, new Vector3(0, 500, 75), Vector3.Up);
Matrix projection = Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver4, 1.0f, (float)globals.NEAR_PLANE, (float)globals.FAR_PLANE);
Matrix ShadowMapLight = modelTransform * view * projection;
But apparently the pixel shader is not always reached; it stops at the vertex shader for some reason. (I have modified the pixel shader to always output a red pixel, so I know when it runs.)
xShadowMapLight is the matrix containing all the transformations from the light's point of view.
Apparently the problem is in mul(vPos, xShadowMapLight), because if I swap the matrix for an identity matrix, execution reaches the pixel shader.
Any idea why this might happen?
Thanks in advance,
Walfrido.
What's the efficient way to render a bunch of layered textures? I have some semi-transparent textured rectangles that I position randomly in 3D space and render from back to front.
Currently I call d3dContext->PSSetShaderResources() to feed the pixel shader a new texture before each call to d3dContext->DrawIndexed(). I have a feeling that I am copying the texture to GPU memory before each draw. I might have 10-30 ARGB textures, roughly 1024x1024 pixels each, associated across the 100-200 rectangles that I render on screen. My FPS is OK at 100, but gets pretty bad around 200. I may well have inefficiencies elsewhere, since this is my first semi-serious D3D code, but I strongly suspect this has to do with copying the textures back and forth. 30*1024*1024*4 is 120 MB, which is a bit high for a Metro style app that should target any Windows 8 device. So putting them all in GPU memory might be a stretch, but maybe I could at least cache a few somehow? Any ideas?
EDIT - Some code snippets added
Constant Buffer
struct ModelViewProjectionConstantBuffer
{
    DirectX::XMMATRIX model;
    DirectX::XMMATRIX view;
    DirectX::XMMATRIX projection;
    float opacity;
    float3 highlight;
    float3 shadow;
    float textureTransitionAmount;
};
The Render Method
void RectangleRenderer::Render()
{
    // Clear background and depth stencil
    const float backgroundColorRGBA[] = { 0.35f, 0.35f, 0.85f, 1.000f };
    m_d3dContext->ClearRenderTargetView(
        m_renderTargetView.Get(),
        backgroundColorRGBA
    );
    m_d3dContext->ClearDepthStencilView(
        m_depthStencilView.Get(),
        D3D11_CLEAR_DEPTH,
        1.0f,
        0
    );

    // Don't draw anything else until all textures are loaded
    if (!m_loadingComplete)
        return;

    m_d3dContext->OMSetRenderTargets(
        1,
        m_renderTargetView.GetAddressOf(),
        m_depthStencilView.Get()
    );

    UINT stride = sizeof(BasicVertex);
    UINT offset = 0;

    // The vertex buffer only has the 4 vertices of a rectangle
    m_d3dContext->IASetVertexBuffers(
        0,
        1,
        m_vertexBuffer.GetAddressOf(),
        &stride,
        &offset
    );

    // The index buffer only covers those 4 vertices
    m_d3dContext->IASetIndexBuffer(
        m_indexBuffer.Get(),
        DXGI_FORMAT_R16_UINT,
        0
    );

    m_d3dContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
    m_d3dContext->IASetInputLayout(m_inputLayout.Get());

    FLOAT blendFactors[4] = { 0, };
    m_d3dContext->OMSetBlendState(m_blendState.Get(), blendFactors, 0xffffffff);

    m_d3dContext->VSSetShader(
        m_vertexShader.Get(),
        nullptr,
        0
    );
    m_d3dContext->PSSetShader(
        m_pixelShader.Get(),
        nullptr,
        0
    );
    m_d3dContext->PSSetSamplers(
        0, // starting at the first sampler slot
        1, // set one sampler binding
        m_sampler.GetAddressOf()
    );

    // number of rectangles is in the 100-200 range
    for (int i = 0; i < m_rectangles.size(); i++)
    {
        // start rendering from the farthest rectangle
        int j = (i + m_farthestRectangle) % m_rectangles.size();

        m_vsConstantBufferData.model = m_rectangles[j].transform;
        m_vsConstantBufferData.opacity = m_rectangles[j].Opacity;
        m_vsConstantBufferData.highlight = m_rectangles[j].Highlight;
        m_vsConstantBufferData.shadow = m_rectangles[j].Shadow;
        m_vsConstantBufferData.textureTransitionAmount = m_rectangles[j].textureTransitionAmount;

        m_d3dContext->UpdateSubresource(
            m_vsConstantBuffer.Get(),
            0,
            NULL,
            &m_vsConstantBufferData,
            0,
            0
        );
        m_d3dContext->VSSetConstantBuffers(
            0,
            1,
            m_vsConstantBuffer.GetAddressOf()
        );
        m_d3dContext->PSSetConstantBuffers(
            0,
            1,
            m_vsConstantBuffer.GetAddressOf()
        );

        auto a = m_rectangles[j].textureId;
        auto b = m_rectangles[j].targetTextureId;
        auto srv1 = m_textures[m_rectangles[j].textureId].textureSRV.GetAddressOf();
        auto srv2 = m_textures[m_rectangles[j].targetTextureId].textureSRV.GetAddressOf();

        ID3D11ShaderResourceView* srvs[2];
        srvs[0] = *srv1;
        srvs[1] = *srv2;

        m_d3dContext->PSSetShaderResources(
            0, // starting at the first shader resource slot
            2, // set two shader resource bindings
            srvs
        );

        m_d3dContext->DrawIndexed(
            m_indexCount,
            0,
            0
        );
    }
}
Pixel Shader
cbuffer ModelViewProjectionConstantBuffer : register(b0)
{
    matrix model;
    matrix view;
    matrix projection;
    float opacity;
    float3 highlight;
    float3 shadow;
    float textureTransitionAmount;
};

Texture2D baseTexture : register(t0);
Texture2D targetTexture : register(t1);
SamplerState simpleSampler : register(s0);

struct PixelShaderInput
{
    float4 pos : SV_POSITION;
    float3 norm : NORMAL;
    float2 tex : TEXCOORD0;
};

float4 main(PixelShaderInput input) : SV_TARGET
{
    float3 lightDirection = normalize(float3(0, 0, -1));
    float4 baseTexelColor = baseTexture.Sample(simpleSampler, input.tex);
    float4 targetTexelColor = targetTexture.Sample(simpleSampler, input.tex);
    float4 texelColor = lerp(baseTexelColor, targetTexelColor, textureTransitionAmount);
    float4 shadedColor;
    shadedColor.rgb = lerp(shadow.rgb, highlight.rgb, texelColor.r);
    shadedColor.a = texelColor.a * opacity;
    return shadedColor;
}
As Jeremiah has suggested, you are probably not moving textures from CPU to GPU each frame, as that would require creating a new texture each frame or using the "UpdateSubresource" or "Map/Unmap" methods.
I don't think that instancing is going to help for this specific case, as the number of polygons is extremely low (I would start to worry at several million polygons). It is more likely that your application is bandwidth/fillrate limited, as you are performing lots of texture sampling/blending (it depends on the texture fillrate, pixel fillrate and the number of ROPs on your GPU).
In order to achieve better performance, it is highly recommended to:
- Make sure that all your textures have all their mipmaps generated. If they don't have any mipmaps, it will badly hurt the GPU's texture cache. (I also assume that you are using the texture.Sample method in HLSL, and not texture.SampleLevel or variants.)
- Use Direct3D 11 block-compressed textures on the GPU, by using a tool like texconv.exe or, preferably, the sample from "Windows DirectX 11 Texture Converter".
On a side note, you will probably get more attention for this kind of question on https://gamedev.stackexchange.com/.
I don't think you are doing any copying back and forth from GPU to system memory. You usually have to do that explicitly with a call to Map(...), or by blitting to a texture you created in system memory.
One issue is that you are making a DrawIndexed(...) call for each texture. GPUs work most efficiently if you batch work and send them a bunch of it at once. One way to accomplish this is to set n textures with PSSetShaderResources(i, ...) and do a single DrawIndexedInstanced(...); your shader code would then read from each of the shader resources and draw the instances. I do this in my C++ DirectCanvas code here (SpriteInstanced.cpp). This can make for a lot of code, but the result is very efficient (I even do the matrix ops in the shader for more speed).
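One way to sketch the shader side of such batching (an assumption-laden illustration, not the linked DirectCanvas code): pack equally sized textures into a Texture2DArray and let each instance carry a slice index, so a single draw call covers all rectangles.
// Pixel shader sketch: each instance selects its texture by array slice instead of a
// separate PSSetShaderResources + DrawIndexed per rectangle. Names are hypothetical.
Texture2DArray rectTextures : register(t0);
SamplerState simpleSampler : register(s0);

struct PixelShaderInput
{
    float4 pos : SV_POSITION;
    float2 tex : TEXCOORD0;
    nointerpolation uint texSlice : TEXCOORD1; // passed through from per-instance vertex data
};

float4 main(PixelShaderInput input) : SV_TARGET
{
    return rectTextures.Sample(simpleSampler, float3(input.tex, input.texSlice));
}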
Another, maybe much easier, way is to give the DirectXTK SpriteBatch a shot.
I used it here in this project... only for a simple blit, but it may be a good start to see the minor amount of setup needed to use the SpriteBatch.
Also, if possible, try to "atlas" your textures. For instance, try to fit as many images as possible into one texture and blit from it, versus having a single texture for each image.
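The atlas lookup itself is just a UV remap; a small sketch (atlasRect is a hypothetical constant holding the sub-image's offset and size in [0,1] atlas coordinates):
Texture2D atlasTexture : register(t0);
SamplerState simpleSampler : register(s0);

float4 atlasRect; // xy = top-left offset, zw = size of the sub-image within the atlas

float4 SampleAtlas(float2 tex)
{
    // Map the rectangle's local [0,1] UVs into the atlas sub-rectangle.
    float2 atlasUV = atlasRect.xy + tex * atlasRect.zw;
    return atlasTexture.Sample(simpleSampler, atlasUV);
}
One caveat: with mipmapping enabled, neighboring atlas entries can bleed into each other at lower mip levels, so leave padding between the packed images.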
I want to apply a pixel shader to my background sprite to create some sort of lighting.
So I draw a render target with the light on it and want to merge it onto the background via the pixel shader.
This is the essential code:
GraphicsDevice.Clear(Color.Black);
spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend);
lighting.Parameters["lightMask"].SetValue(lightingMask);
lighting.CurrentTechnique.Passes[0].Apply();
spriteBatch.Draw(hexBack, new Vector2(0, 0), Color.White);
spriteBatch.End();
In this case, hexBack is the render target with a simple sprite drawn into it, and lightingMask is the render target with the light texture in it.
Both are back-buffer width and height.
So when I try to run the program, it crashes with:
XNA Framework Reach profile requires TextureAddressMode to be Clamp when using texture sizes that are not powers of two.
So I tried to set up clamping, but I can't find a way to get it working.
The shader code:
texture lightMask;
sampler mainSampler : register(s0);
sampler lightSampler = sampler_state { Texture = lightMask; };

struct PixelShaderInput
{
    float4 TextureCoords : TEXCOORD0;
};

float4 PixelShaderFunction(PixelShaderInput input) : COLOR0
{
    float2 texCoord = input.TextureCoords;
    float4 mainColor = tex2D(mainSampler, texCoord);
    float4 lightColor = tex2D(lightSampler, texCoord);
    return mainColor * lightColor;
}

technique Technique1
{
    pass Pass1
    {
        PixelShader = compile ps_2_0 PixelShaderFunction();
    }
}
Thanks for your help!
pcnx
If you are unable to use power-of-two textures, you have to change your SpriteBatch.Begin call and specify a SamplerState. The minimum to specify should be:
public void Begin(
    SpriteSortMode sortMode,
    BlendState blendState,
    SamplerState samplerState,
    DepthStencilState depthStencilState,
    RasterizerState rasterizerState
)
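Applied to the question's code, that would look something like this (a sketch; passing null for the last two arguments keeps the default states):
spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend,
    SamplerState.LinearClamp, null, null);
lighting.Parameters["lightMask"].SetValue(lightingMask);
lighting.CurrentTechnique.Passes[0].Apply();
spriteBatch.Draw(hexBack, new Vector2(0, 0), Color.White);
spriteBatch.End();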
The error refers to your texture addressing mode (i.e.: does the texture wrap around at the edges, or is it clamped at the edges?). It has nothing to do with your shader.
Use one of the overloads for SpriteBatch.Begin (MSDN) that takes a SamplerState, and pass in SamplerState.LinearClamp (MSDN).
The default for SpriteBatch.Begin is SamplerState.LinearClamp, so you must be setting a different state (e.g. LinearWrap) onto the graphics device somewhere else in your code? Don't do that.
(Alternately: change from the Reach profile to the HiDef profile in your project settings.)