Is it possible, using XNA 4, to include a shader within another shader? I know you could do this in 3.1, but I can't seem to get it to work. If it is possible, any pointers would be great.
EDIT
//---------------------------------------------------------------------------//
// Name : Rain.fx
// Desc : Rain particle effect using cylindrical billboards
// Author : Justin Stoecker. Copyright (C) 2008-2009.
//---------------------------------------------------------------------------//
#include "common.inc" // It's this line that causes me a problem
float4x4 matWorld;
float3 vVelocity;
float3 vOrigin; // min point of the cube area
float fWidth; // width of the weather region (x-axis)
float fHeight; // height of the weather region (y-axis)
float fLength; // length of the weather region (z-axis)
... Rest of file ...
The "common.inc" file has variables in there, but I was wondering if you could put methods in there as well?
Yes, it's possible; from memory, I think the basic effect shader sample from the MS App Hub does it.
In any case, see code below!
In FractalBase.fxh
float4x4 MatrixTransform : register(vs, c0);
float2 Pan;
float Zoom;
float Aspect;
float ZPower = 2;
float3 Colour = 0;
float3 ColourScale = 0;
float ComAbs(float2 Arg)
{
    // Body omitted in the original post; the standard magnitude of a
    // complex number (x + yi) is assumed here.
    return length(Arg);
}
float2 ComSquare(float2 Arg)
{
    // Body omitted in the original post; (x + yi)^2 = (x^2 - y^2) + (2xy)i.
    return float2(Arg.x * Arg.x - Arg.y * Arg.y, 2 * Arg.x * Arg.y);
}
int GreaterThan(float x, float y)
{
    // Body omitted in the original post.
    return x > y ? 1 : 0;
}
float4 GetColour(int DoneIterations, float MaxIterations, float BailoutTest, float OldBailoutTest, float BailoutFigure)
{
    // Body omitted in the original post; a simple iteration-based gradient
    // is assumed here for illustration.
    return float4(Colour + ColourScale * (DoneIterations / MaxIterations), 1);
}
void SpriteVertexShader(inout float4 Colour : COLOR0,
inout float2 texCoord : TEXCOORD0,
inout float4 position : SV_Position)
{
position = mul(position, MatrixTransform);
// Convert the position from screen space into complex coordinates
texCoord = (position) * Zoom * float2(1, Aspect) - float2(Pan.x, -Pan.y);
}
In FractalMandelbrot.fx
#include "FractalBase.fxh"
float4 FractalPixelShader(float2 texCoord : TEXCOORD0, uniform float Iterations) : COLOR0
{
    // Body omitted in the original post; a minimal Mandelbrot loop is
    // sketched here, using the helpers from FractalBase.fxh.
    float2 z = 0;
    int i;
    for (i = 0; i < Iterations; i++)
    {
        if (ComAbs(z) > 2)
            break;
        z = ComSquare(z) + texCoord;
    }
    return GetColour(i, Iterations, ComAbs(z), 0, 2);
}
technique Technique1
{
pass
{
VertexShader = compile vs_3_0 SpriteVertexShader();
PixelShader = compile ps_3_0 FractalPixelShader(128);
}
}
#includes work like this:
The preprocessor loads your main .fx file and parses it, looking for anything that starts with a #. A #include directive causes the preprocessor to load the referenced file and insert its contents into the source buffer. Effectively, your #include directive is replaced by the entire contents of the included file.
So, yes, you can define anything in your #includes that you can define in a regular .fx file. I use this for keeping lighting functions, vertex type declarations, etc in common files that are used by several shaders.
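For instance, a minimal sketch of putting a method in an include (the file and function names here are just for illustration):
// common.inc
float3 gLightDir;
float Lambert(float3 normal)
{
    return saturate(dot(normal, -gLightDir));
}
// MyShader.fx
#include "common.inc"
// After preprocessing, the contents of common.inc appear right here, so
// Lambert() and gLightDir can be used below exactly as if they had been
// declared in this file.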
Thanks for taking the time to check out my issue.
I am working on improving the ocean in my first attempt at a game. I have decided on using a bump map against my ocean tiles to add a little texture to the water. To do this, I draw my water tiles to a render target and then apply a pixel shader while drawing the render target to the back buffer.
The problem I am having is that the pixel shader seems to offset or displace the position of render target that is drawn. Observe these two photos:
This first image is the game without the pixel shader running. Notice the "shallow water" around the islands, which is a solid color here.
When the pixel shader is run, that shallow water is consistently offset to the right.
I am using the bump map provided in Riemer's novice bump-mapping tutorial. One thought I had was that the dimensions of this bump map do not match the render target I am applying it to. However, I'm not entirely sure how I would create/resize this bump map.
My HLSL pixel shader looks like this:
#if OPENGL
#define SV_POSITION POSITION
#define VS_SHADERMODEL vs_3_0
#define PS_SHADERMODEL ps_3_0
#else
#define VS_SHADERMODEL vs_4_0_level_9_1
#define PS_SHADERMODEL ps_4_0_level_9_1
#endif
matrix WorldViewProjection;
float xWaveLength;
float xWaveHeight;
texture bumpMap;
sampler2D bumpSampler = sampler_state
{
Texture = <bumpMap>;
};
texture water;
sampler2D waterSampler = sampler_state
{
Texture = <water>;
};
// MAG, MIN, MIRROR SETTINGS? SEE RIEMER'S
struct VertexShaderInput
{
float4 Position : POSITION0;
float2 TextureCords : TEXCOORD;
float4 Color : COLOR0;
};
struct VertexShaderOutput
{
float4 Pos : SV_POSITION;
float2 BumpMapSamplingPos : TEXCOORD2;
float4 Color : COLOR0;
};
VertexShaderOutput MainVS(in VertexShaderInput input)
{
VertexShaderOutput output = (VertexShaderOutput)0;
output.BumpMapSamplingPos = input.TextureCords/xWaveLength;
output.Pos = mul(input.Position, WorldViewProjection);
output.Color = input.Color;
return output;
}
float4 MainPS(float4 pos : SV_POSITION, float4 color1 : COLOR0, float2 texCoord : TEXCOORD0) : COLOR
{
float4 bumpColor = tex2D(bumpSampler, texCoord.xy);
//get offset
float2 perturbation = xWaveHeight * (bumpColor.rg - 0.5f)*2.0f;
//apply offset to coordinates in original texture
float2 currentCoords = texCoord.xy;
float2 perturbatedTexCoords = currentCoords + perturbation;
//return the perturbed values
float4 color = tex2D(waterSampler, perturbatedTexCoords);
return color;
}
technique oceanRipple
{
pass P0
{
//VertexShader = compile VS_SHADERMODEL MainVS();
PixelShader = compile PS_SHADERMODEL MainPS();
}
};
And my monogame draw call looks like this:
public void DrawMap(SpriteBatch sbWorld, SpriteBatch sbStatic, RenderTarget2D worldScene, GameTime gameTime)
{
// Set Water RenderTarget
_graphics.SetRenderTarget(waterScene);
_graphics.Clear(Color.CornflowerBlue);
sbWorld.Begin(_cam, SpriteSortMode.Texture);
foreach (var t in BoundingBoxLocations.OceanTileLocationList)
{
TilePiece tile = (TilePiece)t;
tile.DrawTile(sbWorld);
}
sbWorld.End();
// set up gamescene draw
_graphics.SetRenderTarget(worldScene);
_graphics.Clear(Color.PeachPuff);
// water
sbWorld.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend);
oceanRippleEffect.Parameters["bumpMap"].SetValue(waterBumpMap);
oceanRippleEffect.Parameters["water"].SetValue(waterScene);
//oceanRippleEffect.Parameters["xWaveLength"].SetValue(3f);
oceanRippleEffect.Parameters["xWaveHeight"].SetValue(0.3f);
ExecuteTechnique("oceanRipple");
sbWorld.Draw(waterScene, Vector2.Zero, Color.White);
sbWorld.End();
// land
sbWorld.Begin(_cam, SpriteSortMode.Texture);
foreach (var t in BoundingBoxLocations.LandTileLocationList)
{
TilePiece tile = (TilePiece)t;
tile.DrawTile(sbWorld);
}
sbWorld.End();
}
Can anyone see any issues with my code, or anything else that might be causing this offset?
Any help is much appreciated. Thanks!
EDIT
If I modify the xWaveHeight shader parameter, it changes where the offset appears. A value of 0 will not offset, but then the bump mapping is not applied. Is there any way around this?
I understand that the offset is being caused by the pixel shader's perturbation, but I'm wondering if there is a way to undo this offset while preserving the bump mapping. In the linked Riemer's tutorial, a vertex shader is included. I'm not quite sure if I need this, but when I include my vertex shader in the technique and modify the pixel shader to the following, no water is drawn at all.
float4 MainPS(in VertexShaderOutput output) : COLOR
{
float4 bumpColor = tex2D(bumpSampler, output.BumpMapSamplingPos.xy);
//get offset
float2 perturbation = xWaveHeight * (bumpColor.rg - 0.5f)*2.0f;
//apply offset to coordinates in original texture
float2 currentCoords = output.BumpMapSamplingPos.xy;
float2 perturbatedTexCoords = currentCoords + perturbation;
//return the perturbed values
float4 color = tex2D(waterSampler, perturbatedTexCoords);
return color;
}
First of all, for what you seem to be trying to do, bump mapping is actually the wrong approach: bump mapping is about changing the surface normal (basically "rotating" the pixel in 3D space), so that subsequent lighting calculations (such as reflection) see your surface as more complex than it really is (notice that the texture of that pixel stays where it is). So bump mapping would not modify the position of the ocean tile texture at all; it would modify what is reflected by the ocean (for example, by changing the sample position of a skybox, so that the reflection of the sky in the water is distorted). The way you are implementing it is more like "what if my screen were an ocean reflecting an image of tiles with ocean textures".
If you really want to use bump mapping, you would need some kind of big sky texture, and then, while (not after) drawing the ocean tiles, you would calculate a sample position of the reflection of this sky texture (based on the position of the tile on the screen) and then modify that sample position with bump mapping. All while drawing the tiles, not after drawing them to a render target.
It is also possible to do this deferred (more similar to what you are doing now) - actually, there are multiple ways of doing so - but either way you would still need to sample the final color from a sky texture, not from the render target your tiles were drawn on. The render target with your tiles would instead contain "meta" information (depending on how exactly you want to do this). This information could be a color that is multiplied with the color from the sky texture (creating "colored" water, e.g. for different biomes or to simulate sunsets/sunrises), or a simple 1 or 0 telling whether or not there is any ocean, or a per-tile bump map (which would allow you to apply a "screen global" and a "per tile" bump mapping in one go; you would still need a way to say "this pixel is not ocean, don't do anything here" in the render target), or - if you use multiple render targets - all of these at once. Either way, the sample position used to read from your render target(s) is not modified by bump mapping; only the sample position of the texture that is reflected by the ocean is. That way, there's also no displacement of the ocean, since we aren't touching those sample positions at all.
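To make the deferred variant a little more concrete, here is a rough, untested sketch: skyTexture/skySampler and the meta encoding (rgb = water tint, a = 1 where there is ocean) are assumptions of mine, and it reuses bumpSampler, waterSampler and xWaveHeight from your shader, with waterSampler now holding the meta render target:
texture skyTexture;
sampler2D skySampler = sampler_state
{
    Texture = <skyTexture>;
    AddressU = Wrap;
    AddressV = Wrap;
};
float4 DeferredOceanPS(float4 pos : SV_POSITION, float4 color1 : COLOR0, float2 texCoord : TEXCOORD0) : COLOR
{
    // Read the meta render target at the UNperturbed position.
    float4 meta = tex2D(waterSampler, texCoord);
    if (meta.a < 0.5f)
        return meta; // not ocean: pass the pixel through untouched
    // Only the sky sample position is perturbed by the bump map.
    float2 perturbation = xWaveHeight * (tex2D(bumpSampler, texCoord).rg - 0.5f) * 2.0f;
    return float4(meta.rgb * tex2D(skySampler, texCoord + perturbation).rgb, 1);
}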
Now, to create a look closer to what you seem to want (judging by your images), you wouldn't use bump mapping, but would instead apply a small noise to the sample position in your pixel shader (the rest of the code doesn't need to change). For that, your shader would look more like this:
texture noiseTexture;
sampler2D noiseSampler = sampler_state
{
Texture = <noiseTexture>;
MipFilter = LINEAR;
MinFilter = LINEAR;
MagFilter = LINEAR;
AddressU = Wrap;
AddressV = Wrap;
};
float2 noiseOffset;
float2 noisePower;
float noiseFrequency;
VertexShaderOutput MainVS(in VertexShaderInput input)
{
VertexShaderOutput output = (VertexShaderOutput)0;
output.Pos = mul(input.Position, WorldViewProjection);
output.Color = input.Color;
return output;
}
float4 MainPS(float4 pos : SV_POSITION, float4 color1 : COLOR0, float2 texCoord : TEXCOORD0) : COLOR
{
float4 noise = tex2D(noiseSampler, (texCoord.xy + noiseOffset.xy) * noiseFrequency);
float2 offset = noisePower * (noise.xy - 0.5f) * 2.0f;
float4 color = tex2D(waterSampler, texCoord.xy + offset.xy);
return color;
}
Here, noisePower should be (at most) approximately 1 over the number of horizontal/vertical tiles on the screen, noiseOffset can be used to "move" the noise over time on the screen (it should stay in the range [-1;1]), and noiseFrequency is an artistic parameter (I would start with twice the maximum noise power and tweak it from there; higher values make the ocean more distorted). This way, the border of the tiles is distorted, but never moved more than one tile in any direction (thanks to the noisePower parameter).
It is also important to use the right kind of noise texture here: white noise, blue noise, maybe a "not really noise" texture built out of sine waves, etc. What matters is that the "average" value of the pixels is about 0.5, so no overall displacement happens, and that the values are well distributed across the texture. Apart from that, see what kind of noise looks best to you.
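For completeness, here is how those parameters might be fed from the MonoGame side. This is an untested sketch; noiseEffect, the noise texture, the tile counts (40x30) and the scrolling speed are all assumptions:
// Somewhere in Draw(), before drawing the water render target:
float t = (float)gameTime.TotalGameTime.TotalSeconds;
noiseEffect.Parameters["noiseTexture"].SetValue(noiseTexture);
// ~1 / (number of horizontal/vertical tiles on screen), as described above
noiseEffect.Parameters["noisePower"].SetValue(new Vector2(1f / 40f, 1f / 30f));
// artistic parameter; start around twice the max noise power and tweak
noiseEffect.Parameters["noiseFrequency"].SetValue(2f / 30f);
// scroll the noise over time, wrapped so it stays within [-1;1]
noiseEffect.Parameters["noiseOffset"].SetValue(new Vector2(t * 0.05f % 1f, 0f));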
A side note on the shader code: I haven't tested it. Just so you know - not that there would be much room for mistakes.
Edit: As a side note: of course, the sky texture doesn't actually need to look like a sky ;)
To improve performance, I offloaded some CPU tasks onto the GPU with Effects. This is what my HLSL code looks like:
float angle;
extern float2 direction;
float4 PixelShaderFunction(float4 pos : SV_POSITION, float4 color : COLOR0, float2 coords : TEXCOORD0) : COLOR
{
angle = atan2(direction.x, -direction.y);
return float4(1, 1, 1, 1);
}
technique DefaultTechnique
{
pass Pass1
{
PixelShader = compile ps_2_0 PixelShaderFunction();
}
}
In the Emitter class, I set the direction, apply the technique, and then retrieve the "angle" variable:
gpu.SetValue("direction", particles[i].Velocity);
for (int i = 0; i < effect.CurrentTechnique.Passes.Count; i++)
effect.CurrentTechnique.Passes[i].Apply();
particles[i].Angle = gpu.RetrieveFloat("angle");
This runs fine with no crashing. However, the "angle" value is always 0. My HLSL skills aren't great, but the code looks like it should work as expected.
Any suggestions are appreciated.
I am currently working on implementing dynamic shader linkage in my shader reflection code. It works quite nicely, but to make my code as dynamic as possible I would like to automate the process of getting the offset into the dynamicLinkageArray. Microsoft suggests something like this in their sample:
g_iNumPSInterfaces = pReflector->GetNumInterfaceSlots();
g_dynamicLinkageArray = (ID3D11ClassInstance**) malloc( sizeof(ID3D11ClassInstance*) * g_iNumPSInterfaces );
if ( !g_dynamicLinkageArray )
return E_FAIL;
ID3D11ShaderReflectionVariable* pAmbientLightingVar = pReflector->GetVariableByName("g_abstractAmbientLighting");
g_iAmbientLightingOffset = pAmbientLightingVar->GetInterfaceSlot(0);
I would like to do this without hard-coding the exact name, so that when the shader changes I do not have to manually update this code. To accomplish this, I would need to get the name I marked below through shader reflection. Is this possible? I searched through the shader-reflection reference but did not find anything useful, besides the number of interface slots (GetNumInterfaceSlots()).
#include "BasicShader_PSBuffers.hlsli"
iBaseLight g_abstractAmbientLighting;
^^^^^^^^^^^^^^^^^^^^^^^^^^
struct PixelInput
{
float4 position : SV_POSITION;
float3 normals : NORMAL;
float2 tex: TEXCOORD0;
};
float4 main(PixelInput input) : SV_TARGET
{
float3 Ambient = (float3)0.0f;
Ambient = g_txDiffuse.Sample(g_samplerLin, input.tex) * g_abstractAmbientLighting.IlluminateAmbient(input.normals);
return float4(saturate(Ambient), 1.0f);
}
If this is not possible, how would one go about it? Should I just add everything I can think of there, so that I have to change as little as possible manually?
Thanks in advance
Well, basically, I'm not quite sure how to properly use the Set and Get parameter methods in DX to work with .fx files. I mean, I can't find a good tutorial anywhere. I even had a book about D3D9, and while I got most of it, I'm still unable to use effect files. What's worse, the DirectX samples provided by Microsoft are packed with DX utility classes and all sorts of other needless complications, and I can't quite get through the 2,000 lines of code. I get the basic idea (load, begin, loop with passes, end), but can anyone please point me to a good tutorial with a simple example? The main thing I don't understand is how to work with the effect parameters :(
Here is a reference sheet I wrote back when I was first learning how to use HLSL shaders in DirectX9. Perhaps it will be of assistance.
IN THE APPLICATION:
Declare needed variables:
ID3DXEffect* shader;
Load the .fx file:
D3DXCreateEffectFromFile( d3dDevice,
_T("filepath.fx"),
0,
0,
0,
0,
&shader,
0
);
Clean up the effect object (some people use a SAFE_RELEASE macro):
if(shader)
shader->Release();
shader = nullptr;
Use the shader to render something:
void Application::Render()
{
unsigned passes = 0;
shader->Begin(&passes,0);
for(unsigned i=0;i<passes;++i)
{
shader->BeginPass(i);
// Set uniforms
shader->SetMatrix("gWorld",&theMatrix);
shader->CommitChanges(); // Not necessary if SetWhatevers are done OUTSIDE of a BeginPass/EndPass pair.
/* Insert rendering instructions here */
// BEGIN EXAMPLE:
d3dDevice->SetVertexDeclaration(vertexDecl);
d3dDevice->SetStreamSource(0,vertexBuffer,0,sizeof(VERT));
d3dDevice->SetIndices(indexBuffer);
d3dDevice->DrawIndexedPrimitive(D3DPT_TRIANGLELIST,0,0,numVerts,0,8);
// END EXAMPLE
shader->EndPass();
}
shader->End();
}
IN THE .FX FILE:
Declare the uniforms (variables you want to set from within the application):
float4x4 gWorld : WORLD;
float4x4 gViewProj : VIEWPROJECTION;
float gTime : TIME;
Texture2D gDiffuseTexture; // requires a matching sampler
sampler gDiffuseSampler = sampler_state // here's the matching sampler
{
Texture = <gDiffuseTexture>;
FILTER = MIN_MAG_MIP_LINEAR;
AddressU = Wrap;
AddressV = Wrap;
};
Define the vertex shader input structure:
struct VS_INPUT // make this match the vertex structure in Application
{
float3 untransformed_pos : POSITION0;
float3 untransformed_nrm : NORMAL0;
float4 color : COLOR0;
float2 uv_coords : TEXCOORD0;
};
Define the pixel shader input structure (vertex shader output):
struct PS_INPUT
{
float4 transformed_pos : POSITION0;
float4 transformed_nrm : NORMAL0;
float4 color : COLOR0;
float2 uv_coords : TEXCOORD0;
};
Define a vertex shader:
PS_INPUT mainVS (VS_INPUT input)
{
PS_INPUT output;
/* Insert shader instructions here */
return output;
}
Define a pixel shader:
float4 mainPS (PS_INPUT input) : COLOR
{
/* Insert shader instructions here */
return float4(resultColor,1.0f);
}
Define a technique:
technique myTechnique
{
// Here is a quick sample
pass FirstPass
{
vertexShader = compile vs_3_0 mainVS();
pixelShader = compile ps_3_0 mainPS();
// Setting a few of the many D3D renderstates via the effect framework
ShadeMode = FLAT; // flat color interpolation across triangles
FillMode = SOLID; // no wireframes, no point drawing.
CullMode = CCW; // cull any counter-clockwise polygons.
}
}
Can you be a bit more specific about where you're having problems?
The basic idea with the API for Effect parameters is to load your .fx file and then use ID3DXEffect::GetParameterByName() or GetParameterBySemantic() to retrieve a D3DXHANDLE to the parameters you want to modify at runtime. Then in your render loop you can set the values for those parameters using the ID3DXEffect::SetXXX() family of functions (which one you use depends on the type of the parameter you are setting, e.g. Float, Vector, Matrix), passing the D3DXHANDLE you retrieved when you loaded the effect.
The reason you work with D3DXHANDLEs and not directly with parameter name strings is performance - it saves doing lots of string compares in your render loop to look up parameters.
A simple example of how you might use this is defining a texture2D parameter called diffuseTex in your .fx file. When you load the .fx file, use
D3DXHANDLE diffuseTexHandle = effect->GetParameterByName(NULL, "diffuseTex");
and then in your render loop set the appropriate diffuse texture for each model you draw using
LPDIRECT3DTEXTURE9 diffuseTexturePtr = GetMeTheRightTexturePlease();
effect->SetTexture(diffuseTexHandle, diffuseTexturePtr);
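Putting it all together, a minimal sketch of caching the handles once at load time and reusing them every frame (the variable names are hypothetical):
// At load time, after D3DXCreateEffectFromFile:
D3DXHANDLE worldHandle = effect->GetParameterByName(NULL, "gWorld");
D3DXHANDLE diffuseTexHandle = effect->GetParameterByName(NULL, "diffuseTex");
// Every frame, in the render loop - no string lookups here:
effect->SetMatrix(worldHandle, &worldMatrix);
effect->SetTexture(diffuseTexHandle, GetMeTheRightTexturePlease());
unsigned passes = 0;
effect->Begin(&passes, 0);
for (unsigned i = 0; i < passes; ++i)
{
    effect->BeginPass(i);
    /* draw geometry here */
    effect->EndPass();
}
effect->End();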
I set up a basic pixel shader (right now, it's configured for testing), and it doesn't seem to do anything. I set it up like so:
uniform extern texture ScreenTexture;
const float bloomThreshhold = 0.4;
const float existingPixelColorMult = 1.1;
sampler ScreenS = sampler_state
{
Texture = <ScreenTexture>;
};
float4 BloomedColor(float2 texCoord: TEXCOORD0) : COLOR
{
// pick a pixel on the screen for this pixel, based on
// the calculated offset and direction
float2 temp = texCoord;
temp.x += 1;
float4 mainPixelColor = 0;
/*
float4 pixelPlus1X = tex2D(ScreenS, temp);
temp.x -= 2;
float4 pixelMinus1X = tex2D(ScreenS, temp);
temp.x += 1;
temp.y += 1;
float4 pixelPlus1Y = tex2D(ScreenS, temp);
temp.y -= 2;
float4 pixelMinus1Y = tex2D(ScreenS, temp);
*/
return mainPixelColor;
}
technique Bloom
{
pass P0
{
PixelShader = compile ps_1_1 BloomedColor();
}
}
with the loading code like:
glowEffect = Content.Load<Effect>("GlowShader");
glowEffect.CurrentTechnique = glowEffect.Techniques[0];
and use code is:
spriteBatch.Begin();
glowEffect.Begin();
glowEffect.CurrentTechnique.Passes[0].Begin();
spriteBatch.Draw(screenImage, Vector2.Zero, Color.White);
spriteBatch.End();
glowEffect.CurrentTechnique.Passes[0].End();
glowEffect.End();
Loading seems to work fine, and no errors are thrown when I use that method to render the texture, but it acts as if the effect code isn't there. It can't be that I'm using the wrong shader version (I tested with both 2.0 and 1.1), so why doesn't it do anything? (Using XNA 3.1)
You're returning 0 for every pixel; you have commented out all the code that would return anything other than 0. 0 is black, and if you're doing any sort of render you'll either get black (if the blend mode shows this as a color) or no change (if the blend mode multiplies the result). If you just want to confirm the shader is being loaded and run at all, try returning an oddball color. Neon green, anyone? Then, once you've confirmed it is being processed, start uncommenting that code and assessing the result.
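For example, a quick sanity check (a minimal sketch; the exact shade of green is arbitrary) would be to replace the shader body with a constant color:
float4 BloomedColor(float2 texCoord : TEXCOORD0) : COLOR
{
    // If the effect is actually running, the whole image turns neon green.
    return float4(0.2f, 1.0f, 0.1f, 1.0f);
}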
Finally, if Bloom is what you're after, Microsoft has a very useful sample you will probably learn a lot from here:
http://create.msdn.com/en-US/education/catalog/sample/bloom
If you're using XNA 4.0, see what Shawn Hargreaves has to say about this.