XNA pixel shader clamping error

I want to apply a pixel shader to my background sprite to create some sort of lighting.
So I draw a render target with the light on it and want to merge it onto the background via the pixel shader.
This is the essential code:
GraphicsDevice.Clear(Color.Black);
spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend);
lighting.Parameters["lightMask"].SetValue(lightingMask);
lighting.CurrentTechnique.Passes[0].Apply();
spriteBatch.Draw(hexBack, new Vector2(0, 0), Color.White);
spriteBatch.End();
In this case, hexBack is the render target with a simple sprite drawn into it, and lightingMask is the render target with the light texture in it.
Both are back-buffer width and height.
When I try to run the program, it crashes with:
XNA Framework Reach profile requires TextureAddressMode to be Clamp when using texture sizes that are not powers of two.
So I tried to set up clamping, but I can't find a way to get it working.
The shader code:
texture lightMask;
sampler mainSampler : register(s0);
sampler lightSampler = sampler_state{Texture = lightMask;};
struct PixelShaderInput
{
float4 TextureCoords: TEXCOORD0;
};
float4 PixelShaderFunction(PixelShaderInput input) : COLOR0
{
float2 texCoord = input.TextureCoords;
float4 mainColor = tex2D(mainSampler, texCoord);
float4 lightColor = tex2D(lightSampler, texCoord);
return mainColor * lightColor;
}
technique Technique1
{
pass Pass1
{
PixelShader = compile ps_2_0 PixelShaderFunction();
}
}
Thanks for your help!
pcnx

If you are unable to use power-of-two textures, you have to change your SpriteBatch.Begin call and specify a SamplerState. The minimum overload to specify is:
public void Begin (
SpriteSortMode sortMode,
BlendState blendState,
SamplerState samplerState,
DepthStencilState depthStencilState,
RasterizerState rasterizerState
)
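For example, applied to the code in the question (an untested sketch; the last two arguments are simply SpriteBatch's defaults):
spriteBatch.Begin(SpriteSortMode.Immediate,
    BlendState.AlphaBlend,
    SamplerState.LinearClamp,              // clamp addressing, as the Reach profile requires for non-power-of-two textures
    DepthStencilState.None,
    RasterizerState.CullCounterClockwise);
lighting.Parameters["lightMask"].SetValue(lightingMask);
lighting.CurrentTechnique.Passes[0].Apply();
spriteBatch.Draw(hexBack, Vector2.Zero, Color.White);
spriteBatch.End();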

The error refers to your texture addressing mode (i.e. does the texture wrap around at the edges, or is it clamped at the edges?). It has nothing to do with your shader.
Use one of the overloads of SpriteBatch.Begin (MSDN) that takes a SamplerState, and pass in SamplerState.LinearClamp (MSDN).
The default for SpriteBatch.Begin is SamplerState.LinearClamp, so you must be setting a different state (e.g. LinearWrap) onto the graphics device somewhere else in your code? Don't do that.
(Alternately: change from the Reach profile to the HiDef profile in your project settings.)
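(If you go that route, the profile can also be selected in code on the GraphicsDeviceManager, assuming your GPU supports HiDef; a minimal sketch:)
// In the Game constructor, where 'graphics' is the GraphicsDeviceManager.
// HiDef lifts the Reach restriction on non-power-of-two textures with Wrap addressing.
graphics.GraphicsProfile = GraphicsProfile.HiDef;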

Related

Draw RGB pixel array to DirectX-11 render view

Given an array of RGB pixels that updates every frame (e.g. 1024x1024), an ID3D11RenderTargetView, an ID3D11Device and an ID3D11DeviceContext, what's the easiest way to draw these pixels to the render view?
I've been working the angle of creating a vertex buffer for a square (two triangles), trying to make the pixels into a proper texture, and figuring out how to make a shader reference the texture sampler. I've been following this tutorial: https://learn.microsoft.com/en-us/windows/uwp/gaming/applying-textures-to-primitives. But to be honest, I don't see how this tutorial's shaders even reference the texture data (the shaders are defined in the preceding tutorial, here).
I am a total DirectX novice, but I am writing a plugin for an application where I am given a directx11 device/view/context, and need to fill it with my pixel data. Many thanks!
IF you can make sure your staging resource matches the exact resolution and format of the render target you are given:
Create a staging resource
Map the staging resource, and copy your data into it.
Unmap the staging resource
Use GetResource on the RTV to get the resource.
CopyResource from your staging to that resource.
Otherwise, IF you can count on Direct3D Hardware Feature level 10.0 or better, the easiest way would be:
Create a texture with USAGE_DYNAMIC.
Map it and copy your data into the texture.
Unmap the resource
Render the dynamic texture as a 'full-screen' quad using the 'big-triangle' self-generation trick in the vertex shader:
SamplerState PointSampler : register(s0);
Texture2D<float4> Texture : register(t0);
struct Interpolators
{
float4 Position : SV_Position;
float2 TexCoord : TEXCOORD0;
};
Interpolators main(uint vI : SV_VertexId)
{
Interpolators output;
// We use the 'big triangle' optimization so you only Draw 3 vertices instead of 4.
float2 texcoord = float2((vI << 1) & 2, vI & 2);
output.TexCoord = texcoord;
output.Position = float4(texcoord.x * 2 - 1, -texcoord.y * 2 + 1, 0, 1);
return output;
}
and a pixel shader of:
float4 main(Interpolators In) : SV_Target0
{
return Texture.Sample(PointSampler, In.TexCoord);
}
Then draw with:
ID3D11ShaderResourceView* textures[1] = { texture };
context->PSSetShaderResources(0, 1, textures);
// You need a sampler object.
context->PSSetSamplers(0, 1, &sampler);
// Depending on your desired result, you may need state objects here
context->OMSetBlendState(nullptr, nullptr, 0xffffffff);
context->OMSetDepthStencilState(nullptr, 0);
context->RSSetState(nullptr);
context->IASetInputLayout(nullptr);
context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
context->Draw(3, 0);
For full source for the "Full Screen Quad" drawing, see GitHub.

2D Water Bump Mapping - Monogame

Thanks for taking the time to check out my issue.
I am working on improving the ocean in my first attempt at a game. I have decided on using a bump map against my ocean tiles to add a little texture to the water. To do this, I draw my water tiles to a render target and then apply a pixel shader while drawing the render target to the back buffer.
The problem I am having is that the pixel shader seems to offset or displace the position of the render target that is drawn. Observe these two photos:
This image is the game without running the pixel shader. Notice the "shallow water" around the islands, which is a solid color here.
When the pixel shader is run, that shallow water is offset to the right consistently.
I am using the bump map provided in Riemer's novice bump mapping tutorial. One possible thought I had was that the dimensions of this bump map do not match the render target I am applying it to. However, I'm not entirely sure how I would create/resize this bump map.
My HLSL pixel shader looks like this:
#if OPENGL
#define SV_POSITION POSITION
#define VS_SHADERMODEL vs_3_0
#define PS_SHADERMODEL ps_3_0
#else
#define VS_SHADERMODEL vs_4_0_level_9_1
#define PS_SHADERMODEL ps_4_0_level_9_1
#endif
matrix WorldViewProjection;
float xWaveLength;
float xWaveHeight;
texture bumpMap;
sampler2D bumpSampler = sampler_state
{
Texture = <bumpMap>;
};
texture water;
sampler2D waterSampler = sampler_state
{
Texture = <water>;
};
// MAG,MIN,MIRRR SETTINGS? SEE RIEMERS
struct VertexShaderInput
{
float4 Position : POSITION0;
float2 TextureCords : TEXCOORD;
float4 Color : COLOR0;
};
struct VertexShaderOutput
{
float4 Pos : SV_POSITION;
float2 BumpMapSamplingPos : TEXCOORD2;
float4 Color : COLOR0;
};
VertexShaderOutput MainVS(in VertexShaderInput input)
{
VertexShaderOutput output = (VertexShaderOutput)0;
output.BumpMapSamplingPos = input.TextureCords/xWaveLength;
output.Pos = mul(input.Position, WorldViewProjection);
output.Color = input.Color;
return output;
}
float4 MainPS(float4 pos : SV_POSITION, float4 color1 : COLOR0, float2 texCoord : TEXCOORD0) : COLOR
{
float4 bumpColor = tex2D(bumpSampler, texCoord.xy);
//get offset
float2 perturbation = xWaveHeight * (bumpColor.rg - 0.5f)*2.0f;
//apply offset to coordinates in original texture
float2 currentCoords = texCoord.xy;
float2 perturbatedTexCoords = currentCoords + perturbation;
//return the perturbed values
float4 color = tex2D(waterSampler, perturbatedTexCoords);
return color;
}
technique oceanRipple
{
pass P0
{
//VertexShader = compile VS_SHADERMODEL MainVS();
PixelShader = compile PS_SHADERMODEL MainPS();
}
};
And my monogame draw call looks like this:
public void DrawMap(SpriteBatch sbWorld, SpriteBatch sbStatic, RenderTarget2D worldScene, GameTime gameTime)
{
// Set Water RenderTarget
_graphics.SetRenderTarget(waterScene);
_graphics.Clear(Color.CornflowerBlue);
sbWorld.Begin(_cam, SpriteSortMode.Texture);
foreach (var t in BoundingBoxLocations.OceanTileLocationList)
{
TilePiece tile = (TilePiece)t;
tile.DrawTile(sbWorld);
}
sbWorld.End();
// set up gamescene draw
_graphics.SetRenderTarget(worldScene);
_graphics.Clear(Color.PeachPuff);
// water
sbWorld.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend);
oceanRippleEffect.Parameters["bumpMap"].SetValue(waterBumpMap);
oceanRippleEffect.Parameters["water"].SetValue(waterScene);
//oceanRippleEffect.Parameters["xWaveLength"].SetValue(3f);
oceanRippleEffect.Parameters["xWaveHeight"].SetValue(0.3f);
ExecuteTechnique("oceanRipple");
sbWorld.Draw(waterScene, Vector2.Zero, Color.White);
sbWorld.End();
// land
sbWorld.Begin(_cam, SpriteSortMode.Texture);
foreach (var t in BoundingBoxLocations.LandTileLocationList)
{
TilePiece tile = (TilePiece)t;
tile.DrawTile(sbWorld);
}
sbWorld.End();
}
Can anyone see any issues with my code or otherwise that might be causing this offset issue?
Any help is much appreciated. Thanks!
EDIT
If I modify the xWaveHeight shader parameter, it changes where the offset appears. A value of 0 will not offset, but then the bump mapping is not applied. Is there any way around this?
I understand that the offset is being caused by the pixel shader perturbation, but I'm wondering if there is a way to undo this offset while preserving the bump mapping. In the linked Riemer's tutorial, a vertex shader is included. I'm not quite sure if I need this, but when I include my vertex shader in the technique and modify the pixel shader to the following, no water is drawn.
float4 MainPS(in VertexShaderOutput output) : COLOR
{
float4 bumpColor = tex2D(bumpSampler, output.BumpMapSamplingPos.xy);
//get offset
float2 perturbation = xWaveHeight * (bumpColor.rg - 0.5f)*2.0f;
//apply offset to coordinates in original texture
float2 currentCoords = output.BumpMapSamplingPos.xy;
float2 perturbatedTexCoords = currentCoords + perturbation;
//return the perturbed values
float4 color = tex2D(waterSampler, perturbatedTexCoords);
return color;
}
First of all, for what you seem to be wanting to do, bump mapping is actually the wrong approach: bump mapping is about changing the surface normal (basically "rotating" the pixel in 3D space), so that following light calculations (such as reflection) see your surface as more complex than it really is (notice that the texture of that pixel stays where it is). So bump mapping would not modify the position of the ocean tile texture at all, but modify what is reflected by the ocean (for example, by changing the sample position of a skybox, so the reflection of the sky in the water is distorted). The way you are implementing it is more like "What if my screen were an ocean and reflected an image of tiles with ocean textures?".
If you really want to use bump mapping, you would need some kind of big sky texture, and then, while (not after) drawing the ocean tiles, you would calculate a sample position of the reflection of this sky texture (based on the position of the tile on the screen) and then modify that sample position with bump mapping. All while drawing the tiles, not after drawing them to a render target.
It is also possible to do this deferred (more similar to what you are doing now) - actually, there are multiple ways of doing so - but either way you would still need to sample the final color from a sky texture, not from the render target your tiles were drawn on. The render target from your tiles would instead contain "meta" information (depending on how exactly you want to do this). This information could be a color that is multiplied with the color from the sky texture (creating "colored" water, e.g. for different biomes or to simulate sunsets/sunrises), or a simple 1 or 0 to tell whether or not there is any ocean, or a per-tile bump map (which would allow you to apply a "screen global" and a "per tile" bump mapping in one go; you would still need a way to say "this pixel is not an ocean, don't do anything for it" in the render target), or - if you use multiple render targets - all of these at once. Either way, the sample position used to sample from your render target(s) is not modified by bump mapping; only the sample position of the texture that is reflected by the ocean is. That way, there's also no displacement of the ocean, since we aren't touching those sample positions at all.
Now, to create a look that is more similar to what you seem to want (according to your images), you wouldn't use bump mapping, but instead apply a small noise to the sample position in your pixel shader (the rest of the code doesn't need to change). For that, your shader would look more like this:
texture noiseTexture;
sampler2D noiseSampler = sampler_state
{
Texture = <noiseTexture>;
MipFilter = LINEAR;
MinFilter = LINEAR;
MagFilter = LINEAR;
AddressU = Wrap;
AddressV = Wrap;
};
float2 noiseOffset;
float2 noisePower;
float noiseFrequency;
VertexShaderOutput MainVS(in VertexShaderInput input)
{
VertexShaderOutput output = (VertexShaderOutput)0;
output.Pos = mul(input.Position, WorldViewProjection);
output.Color = input.Color;
return output;
}
float4 MainPS(float4 pos : SV_POSITION, float4 color1 : COLOR0, float2 texCoord : TEXCOORD0) : COLOR
{
float4 noise = tex2D(noiseSampler, (texCoord.xy + noiseOffset.xy) * noiseFrequency);
float2 offset = noisePower * (noise.xy - 0.5f) * 2.0f;
float4 color = tex2D(waterSampler, texCoord.xy + offset.xy);
return color;
}
Where noisePower would be (at most) approximately 1 over the number of horizontal/vertical tiles on the screen, noiseOffset can be used to "move" the noise over time on the screen (it should be in the range [-1;1]), and noiseFrequency is an artistic parameter (I would start with twice the max noise power, and then modify it from there; higher values make the ocean more distorted). This way, the border of the tiles is distorted, but never moved more than one tile in any direction (thanks to the noisePower parameter). It is also important to use the correct kind of noise texture here: white noise, blue noise, maybe a "not really noise" texture that's built out of sine waves, etc. What is important is that the "average" value of each pixel is about 0.5, so there's no overall displacement happening, and that the values are well distributed in the texture. Apart from that, see what kind of noise looks best to you.
Side note on the shader code: I haven't tested it - just so you know; not that there is much room for mistakes.
Edit: As a side note: of course the sky texture doesn't need to actually look like a sky ;)
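On the C# side, the new parameters would be set the same way as the existing ones, before drawing the water render target. A rough, untested sketch - noiseTex, time, tilesAcross and tilesDown are placeholder variables, not part of the original code:
// Hypothetical parameter setup for the noise-based shader above.
oceanRippleEffect.Parameters["noiseTexture"].SetValue(noiseTex);
// Scroll the noise slowly over time; keep the offset within [-1, 1].
oceanRippleEffect.Parameters["noiseOffset"].SetValue(new Vector2(time * 0.05f % 1f, time * 0.03f % 1f));
// At most roughly 1 / number of tiles across/down the screen.
oceanRippleEffect.Parameters["noisePower"].SetValue(new Vector2(1f / tilesAcross, 1f / tilesDown));
// Start at roughly twice the maximum noise power, then tune by eye.
oceanRippleEffect.Parameters["noiseFrequency"].SetValue(2f * Math.Max(1f / tilesAcross, 1f / tilesDown));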

what is the 'alpha' value of pixel shader?

Hi there. These days I am in the process of making a 2D game using the DirectX 11 API,
and it has come to the point that I need a transparency effect.
I have a green background and one footprint sprite in the middle.
Simply by setting nothing but the alpha value of the color returned from the pixel shader, I had a bit of success, but the problem is that it doesn't work for the white color.
this is Pixel Shader code
cbuffer CB_TRANSPARENCY : register(b0)
{
float tp;
};
Texture2D footprint : register(t0);
SamplerState samplerState : register(s0);
struct PS_INPUT
{
float4 pos : SV_POSITION;
float2 tex : TEXCOORD;
};
float4 main(PS_INPUT input) : SV_Target
{
float3 texColor = footprint.Sample(samplerState, input.tex).xyz;
return float4(texColor, tp);
}
Is there something that I missed?
Or should I use some blend state thing?
Any help would be appreciated.
[edit] Here's something to add: actually, the alpha value doesn't do anything without a blending setting; it's just one more variable to be used for any custom calculation.
In my project, I was using the SpriteBatch and SpriteFont classes for rendering fonts on screen,
so I guess the SpriteBatch class sets a blend state under the hood that blends the black color, which is why I got this effect without setting my own blend state.
Yes, you need to create a blend state with an appropriate alpha processing mode and then make sure that the created blend state is attached to the output-merger stage of the rendering pipeline prior to drawing:
D3D11_BLEND_DESC blendStateDesc{};
blendStateDesc.AlphaToCoverageEnable = FALSE;
blendStateDesc.IndependentBlendEnable = FALSE;
blendStateDesc.RenderTarget[0].BlendEnable = TRUE;
blendStateDesc.RenderTarget[0].SrcBlend = D3D11_BLEND_SRC_ALPHA;
blendStateDesc.RenderTarget[0].DestBlend = D3D11_BLEND_INV_SRC_ALPHA;
blendStateDesc.RenderTarget[0].BlendOp = D3D11_BLEND_OP_ADD;
blendStateDesc.RenderTarget[0].SrcBlendAlpha = D3D11_BLEND_SRC_ALPHA;
blendStateDesc.RenderTarget[0].DestBlendAlpha = D3D11_BLEND_DEST_ALPHA;
blendStateDesc.RenderTarget[0].BlendOpAlpha = D3D11_BLEND_OP_ADD;
blendStateDesc.RenderTarget[0].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;
if(not SUCCEEDED(p_device->CreateBlendState(&blendStateDesc, &blendState)))
{
std::abort();
}
p_device_context->OMSetBlendState(blendState, nullptr, 0xFFFFFFFF);
//draw calls...

HLSL 3 Can a Pixel Shader be declared alone?

I've been asked to split the question below into multiple questions:
HLSL and Pix number of questions
This is asking the first question: can I, in HLSL 3, run a pixel shader without a vertex shader? In HLSL 2 I notice you can, but I can't seem to find a way in 3.
The shader will compile fine; I will then, however, get this error from Visual Studio when calling SpriteBatch.Draw():
"Cannot mix shader model 3.0 with earlier shader models. If either the vertex shader or pixel shader is compiled as 3.0, they must both be."
I don't believe I've defined anything in the shader to use anything earlier than 3, so I'm left a bit confused. Any help would be appreciated.
The problem is that the built-in SpriteBatch shader is 2.0. If you specify a pixel shader only, SpriteBatch still uses its built-in vertex shader. Hence the version mismatch.
The solution, then, is to also specify a vertex shader yourself. Fortunately Microsoft provides the source to XNA's built-in shaders. All it involves is a matrix transformation. Here's the code, modified so you can use it directly:
float4x4 MatrixTransform;
void SpriteVertexShader(inout float4 color : COLOR0,
inout float2 texCoord : TEXCOORD0,
inout float4 position : SV_Position)
{
position = mul(position, MatrixTransform);
}
And then - because SpriteBatch won't set it for you - setting your effect's MatrixTransform correctly. It's a simple projection of "client" space (source from this blog post). Here's the code:
Matrix projection = Matrix.CreateOrthographicOffCenter(0,
GraphicsDevice.Viewport.Width, GraphicsDevice.Viewport.Height, 0, 0, 1);
Matrix halfPixelOffset = Matrix.CreateTranslation(-0.5f, -0.5f, 0);
effect.Parameters["MatrixTransform"].SetValue(halfPixelOffset * projection);
You can try the simple examples here. The greyscale shader is a very good example to understand how a minimal pixel shader works.
Basically, you create an Effect under your content project, like this one:
sampler s0;
float4 PixelShaderFunction(float2 coords: TEXCOORD0) : COLOR0
{
// B/N
//float4 color = tex2D(s0, coords);
//color.gb = color.r;
// Transparent
float4 color = tex2D(s0, coords);
return color;
}
technique Technique1
{
pass Pass1
{
PixelShader = compile ps_2_0 PixelShaderFunction();
}
}
You also need to:
Create an Effect object and load its content.
ambienceEffect = Content.Load<Effect>("Effects/Ambient");
Call your SpriteBatch.Begin() method passing the Effect object you want to use
spriteBatch.Begin( SpriteSortMode.FrontToBack,
BlendState.AlphaBlend,
null,
null,
null,
ambienceEffect,
camera2d.GetTransformation());
Inside the SpriteBatch.Begin() - SpriteBatch.End() block, you must call the Technique inside the Effect
ambienceEffect.CurrentTechnique.Passes[0].Apply();

(D3D11) Transparent texture sections are showing previous texture instead of rendered scene

I have an application that renders multiple textured quads (images) in an essentially 2D context, which has worked fine. However, after modifying it so that portions of some textures are transparent, I've ground to a halt trying to get it to behave in a seemingly standard, theoretically simplistic fashion: I just want it to draw the textures sequentially (as it has been doing), and when a texture has transparent pixels, to show whatever was previously drawn in those spots.
But what it is instead doing is showing a scaled version of each previously-drawn texture, behind the transparent sections, rather than the previously-rendered portion of the render target. So for instance if I tried to draw an opaque background texture and then a smaller entirely transparent texture, then the background would draw fine, but the transparent image would show the entire background scaled to the size/location of the new transparent image.
Subsequent rendered textures continue in this fashion, showing whatever the previous rendered texture ended up as (including elements from textures previous to it).
I'm obviously missing something fundamental about how textures/pixel shaders in DirectX work (which is no surprise, since I'm relatively new to it), but after reading everything online I could scrounge up, and experimenting in countless ways, I still can't figure out what I need to do.
I'm using one texture in the pixel shader, which may or may not be part of the problem. Each time I render the scene, I loop through all the textures I want to render, calling PSSetShaderResources() to bind a different texture to that pixel shader texture, each loop, and call DrawIndexed() after each time I change it. It seems like this is an effective way to proceed, since it doesn't seem to make sense to have a ton of shader textures when the pixel shader can't seem to be made to use an arbitrary one (it needs to be precompiled, no?).
At any rate, I'm hoping the symptoms will be sufficient for someone more knowledgeable than I to immediately realize the mistake I'm making. The code is pretty simple in these areas, but I might as well include a couple sections:
Every scene, for each shaderRV:
m_pd3d11ImmDevContext->PSSetShaderResources(0, 1, &shaderRV);
m_pd3d11ImmDevContext->DrawIndexed( ... )
Shader:
Texture2D aTexture : register(t0);
SamplerState samLinear : register(s0);
struct VS_INPUT
{
float3 position : POSITION;
float3 texcoord0 : TEXCOORD0;
};
struct VS_OUTPUT
{
float4 hposition : SV_POSITION;
float3 texcoord0 : TEXCOORD0;
};
struct PS_OUTPUT
{
float4 color : COLOR;
};
// vertex shader
VS_OUTPUT CompositeVS( VS_INPUT IN )
{
VS_OUTPUT OUT;
float4 v = float4( IN.position.x,
IN.position.y,
0.1f,
1.0f );
OUT.hposition = v;
OUT.texcoord0 = IN.texcoord0;
OUT.texcoord0.z = IN.position.z ;
return OUT;
}
// pixel shader
PS_OUTPUT CompositePS( VS_OUTPUT IN ) : SV_Target
{
PS_OUTPUT ps;
ps.color = aTexture.Sample(samLinear, IN.texcoord0);
return ps;
}
Blend Description settings (don't think the problem's here):
blendDesc.RenderTarget[0].BlendEnable = true;
blendDesc.RenderTarget[0].SrcBlend = D3D11_BLEND_SRC_ALPHA;
blendDesc.RenderTarget[0].DestBlend = D3D11_BLEND_INV_SRC_ALPHA;
blendDesc.RenderTarget[0].BlendOp = D3D11_BLEND_OP_ADD;
blendDesc.RenderTarget[0].SrcBlendAlpha = D3D11_BLEND_ZERO;
blendDesc.RenderTarget[0].DestBlendAlpha= D3D11_BLEND_ZERO;
blendDesc.RenderTarget[0].BlendOpAlpha = D3D11_BLEND_OP_ADD;
Please let me know if any other code segments would be useful!
