BasicEffect fog, code used - XNA

I want to copy BasicEffect's fog method to use in my own shader, so I don't have to declare both a BasicEffect and my own effect. The HLSL code of the basic effect was released with one of the downloadable samples on XNA Creators Club a while ago, and I thought the method I need would be found within that HLSL file. However, all I can see is a function being called, with no actual definition for that function. The function called is:
ApplyFog(color, pin.PositionWS.w);
Does anybody know where the definition is, and whether it's freely available? Otherwise, any help on how to replicate its effect would be great.
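(For reference, the fog helper in the stock shaders boils down to a lerp toward the fog colour, driven by a per-vertex fog factor. A sketch from memory of the shape it takes, not the verbatim source; FogVector here is a constant the CPU side derives from the view matrix and the fog start/end distances:)
float ComputeFogFactor(float4 position)
{
// 0 before the fog starts, 1 at full fog; the vertex shader stores this in PositionWS.w
return saturate(dot(position, FogVector));
}
void ApplyFog(inout float4 color, float fogFactor)
{
color.rgb = lerp(color.rgb, FogColor * color.a, fogFactor);
}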
I downloaded the sample from here.
Thanks.
Edit: Still having problems. I think it's to do with getting the depth:
VertexToPixel InstancedCelShadeVSNmVc(VSInputNmVc VSInput, in VSInstanceVc VSInstance)
{
VertexToPixel Output = (VertexToPixel)0;
Output.Position = mul(mul(mul(mul(VSInput.Position, transpose(VSInstance.World)), xWorld), xView), xProjection);
Output.ViewSpaceZ = -VSInput.Position.z / xCameraClipFar;
Is that right? Camera clip far is passed in as a constant.
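(Not quite: at that point VSInput.Position is still in model space, so the divide does not give a view-space depth. A minimal assumed fix, consistent with the answer below, is to take Z after the world and view transforms:)
float4 viewPos = mul(mul(mul(VSInput.Position, transpose(VSInstance.World)), xWorld), xView);
Output.ViewSpaceZ = -viewPos.z / xCameraClipFar; // view-space depth, 0..1 out to the far clip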

Here's an example of how to achieve a similar effect.
In your vertex shader function, you pass along the view-space Z position divided by your far-plane distance; that gives you a nice 0..1 mapping for your depth values.
Then, in your pixel shader, you use the lerp function to blend between your original color value and the fog color. Here's some (pseudo)code:
cbuffer Input // I'm used to DX10+; remove the cbuffer for DX9
{
float FarPlane;
float4 FogColor;
}
struct VS_Output
{
//...Whatever else you need
float ViewSpaceZ : TEXCOORD0; //or whatever semantic you'd like to use
};
VS_Output MainVS(/* your input here */) // renamed: "VertexShader" is an HLSL keyword
{
VS_Output output;
//...Transform to viewspace
output.ViewSpaceZ = -vsPosition.z / FarPlane; // vsPosition: the view-space position computed above
return output;
}
float4 MainPS(VS_Output input) : SV_Target0 // or COLOR0 depending on DX version; "PixelShader" is an HLSL keyword
{
const float FOG_MIN = 0.9;
const float FOG_MAX = 0.99;
//...Calculate Color
return lerp(yourCalculatedColor, FogColor, smoothstep(FOG_MIN, FOG_MAX, input.ViewSpaceZ)); // fog fades in between FOG_MIN and FOG_MAX
}
I've written this off the top of my head; hope it helps.
The constants I've chosen will give you a pretty "steep" fog; choose a smaller value for FOG_MIN to get a smoother fog.
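If you'd rather express the fade in view-space distance, an equivalent variant looks like this (FogStart and FogEnd are hypothetical constants in the same units as FarPlane):
float viewDistance = input.ViewSpaceZ * FarPlane; // undo the 0..1 mapping
float fogFactor = saturate((viewDistance - FogStart) / (FogEnd - FogStart)); // hypothetical FogStart/FogEnd added to the cbuffer
return lerp(yourCalculatedColor, FogColor, fogFactor);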

Related

What is the 'alpha' value of a pixel shader?

Hi there. I am in the process of making a 2D game using the Direct3D 11 API, and it has come to the point where I need a transparency effect.
I have a green background and one footprint in the middle.
Simply setting the alpha value of the color returned from the pixel shader, without configuring anything else, gave me partial success, but the problem is that it doesn't work for the white color.
This is the pixel shader code:
cbuffer CB_TRANSPARENCY : register(b0)
{
float tp;
};
Texture2D footprint : register(t0);
SamplerState samplerState : register(s0);
struct PS_INPUT
{
float4 pos : SV_POSITION;
float2 tex : TEXCOORD;
};
float4 main(PS_INPUT input) : SV_Target
{
float3 texColor = footprint.Sample(samplerState, input.tex).xyz;
return float4(texColor, tp);
}
Is there something that I'm missing?
Or should I use some blend-state thing?
Any help would be appreciated.
[edit] Here's something worth adding: the alpha value actually doesn't do anything without a blending setting; it's just one more variable that can be used for any custom calculation.
In my project I was using the SpriteBatch and SpriteFont classes for rendering fonts on screen, so I guess SpriteBatch sets a blend state under the hood that blends out the black color, which is why I got this effect without setting my own blend state.
Yes, you need to create a blend state with an appropriate alpha-processing mode, and then make sure the created blend state is attached to the output-merger stage of the rendering pipeline prior to drawing:
D3D11_BLEND_DESC blendStateDesc{};
blendStateDesc.AlphaToCoverageEnable = FALSE;
blendStateDesc.IndependentBlendEnable = FALSE;
blendStateDesc.RenderTarget[0].BlendEnable = TRUE;
blendStateDesc.RenderTarget[0].SrcBlend = D3D11_BLEND_SRC_ALPHA;
blendStateDesc.RenderTarget[0].DestBlend = D3D11_BLEND_INV_SRC_ALPHA;
blendStateDesc.RenderTarget[0].BlendOp = D3D11_BLEND_OP_ADD;
blendStateDesc.RenderTarget[0].SrcBlendAlpha = D3D11_BLEND_SRC_ALPHA;
blendStateDesc.RenderTarget[0].DestBlendAlpha = D3D11_BLEND_DEST_ALPHA;
blendStateDesc.RenderTarget[0].BlendOpAlpha = D3D11_BLEND_OP_ADD;
blendStateDesc.RenderTarget[0].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;
if(not SUCCEEDED(p_device->CreateBlendState(&blendStateDesc, &blendState)))
{
std::abort();
}
p_device_context->OMSetBlendState(blendState, nullptr, 0xFFFFFFFF);
//draw calls...

Is it possible to get the interface name (Dynamic Shader Linkage)?

I am currently working on implementing dynamic shader linkage into my shader-reflection code. It works quite nicely, but to make my code as dynamic as possible I would like to automate the process of getting the offset into the dynamicLinkageArray. Microsoft suggests something like this in their sample:
g_iNumPSInterfaces = pReflector->GetNumInterfaceSlots();
g_dynamicLinkageArray = (ID3D11ClassInstance**) malloc( sizeof(ID3D11ClassInstance*) * g_iNumPSInterfaces );
if ( !g_dynamicLinkageArray )
return E_FAIL;
ID3D11ShaderReflectionVariable* pAmbientLightingVar = pReflector->GetVariableByName("g_abstractAmbientLighting");
g_iAmbientLightingOffset = pAmbientLightingVar->GetInterfaceSlot(0);
I would like to do this without hard-coding the exact name, so that when the shader changes I do not have to change this code manually. To accomplish this I would need to get the name I marked below through shader reflection. Is this possible? I searched through the shader-reflection reference but did not find anything useful, besides the number of interface slots (GetNumInterfaceSlots()).
#include "BasicShader_PSBuffers.hlsli"
iBaseLight g_abstractAmbientLighting;
^^^^^^^^^^^^^^^^^^^^^^^^^^
struct PixelInput
{
float4 position : SV_POSITION;
float3 normals : NORMAL;
float2 tex: TEXCOORD0;
};
float4 main(PixelInput input) : SV_TARGET
{
float3 Ambient = (float3)0.0f;
Ambient = g_txDiffuse.Sample(g_samplerLin, input.tex) * g_abstractAmbientLighting.IlluminateAmbient(input.normals);
return float4(saturate(Ambient), 1.0f);
}
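(For context, each interface slot ultimately gets bound to a class instance that implements the interface; a hypothetical implementation in the style of Microsoft's sample:)
class cAmbientLight : iBaseLight
{
float3 m_vLightColor; // hypothetical member, filled in via ID3D11ClassInstance data
float3 IlluminateAmbient(float3 vNormal)
{
return m_vLightColor;
}
};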
If this is not possible, how would one go about this? Just add anything I can think of there, so that I have to change as little as possible manually?
Thanks in advance

Transparency shader cover model

I am using XNA and implementing an HLSL shader, and I have a problem with transparency in the shader:
when rendering two models facing each other, the model behind is seen only when it is rendered first.
Let me explain...
blue cone = Vector3(0,0,0) - first target
green cone = Vector3(50,0,50) - second target
Rendering blue first and then green, the blue cone behind is seen (image: "can see").
Now the other way around, green before blue, and you do not see it (image: "cannot see").
As long as there are only two cones, I can calculate the distance from the camera and render the most distant one first (the only solution I found by searching the net). But if I have several models of different sizes, it can happen that a model A is more distant than a model B, yet its size may lead it to hide model B.
Here is some of the code I use:
.fx file
float4x4 World;
float4x4 View;
float4x4 Projection;
float3 Color;
texture ColorTexture : DIFFUSE ;
sampler2D ColorSampler = sampler_state {
Texture = <ColorTexture>;
FILTER = MIN_MAG_MIP_LINEAR;
AddressU = Wrap;
AddressV = Wrap;
};
struct VertexShaderInput
{
float4 Position : POSITION0;
float3 UV : TEXCOORD0;
};
struct VertexShaderOutput
{
float4 Position : POSITION0;
float3 UV : TEXCOORD0;
};
VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
VertexShaderOutput output;
float4 worldPosition = mul(input.Position, World);
float4 viewPosition = mul(worldPosition, View);
float4 projPosition = mul(viewPosition, Projection);
output.Position = projPosition;
output.UV = input.UV;
return output;
}
float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
float4 map = tex2D(ColorSampler,input.UV);
return float4(map.rgb * Color, map.a);
}
technique Textured
{
pass Pass1
{
ZEnable = true;
ZWriteEnable = true;
AlphaBlendEnable = true;
DestBlend = DestAlpha;
SrcBlend=BlendFactor;
VertexShader = compile vs_3_0 VertexShaderFunction();
PixelShader = compile ps_3_0 PixelShaderFunction();
}
}
Draw code in the XNA project:
protected override void Draw(GameTime gameTime)
{
GraphicsDevice.Clear(Color.DarkGray);
for (int i = 0; i < 2; i++)
{
ModelEffect.Parameters["View"].SetValue(this.View);
ModelEffect.Parameters["Projection"].SetValue(this.Projection);
ModelEffect.Parameters["ColorTexture"].SetValue(this.colorTextured);
ModelEffect.CurrentTechnique = ModelEffect.Techniques["Textured"];
Vector3 Scala = new Vector3(2, 3, 1);
if (i == 0)
{
World = Matrix.CreateScale(Scala) * Matrix.CreateWorld(FirstTarget, Vector3.Forward, Vector3.Up);
ModelEffect.Parameters["Color"].SetValue(new Vector3(0, 0, 255));// blu
}
else
{
World = Matrix.CreateScale(Scala) * Matrix.CreateWorld(SecondTarget, Vector3.Forward, Vector3.Up);
ModelEffect.Parameters["Color"].SetValue(new Vector3(0, 255, 0));// verde
}
ModelEffect.Parameters["World"].SetValue(World);
foreach (ModelMesh mesh in Emitter.Meshes)
{
foreach (ModelMeshPart effect in mesh.MeshParts)
{
effect.Effect = ModelEffect;
}
mesh.Draw();
}
}
base.Draw(gameTime);
}
I need the for loop to change the rendering order...
Is there any procedure to work around this problem?
I hope I explained myself; I think we need to work on the .fx file... or not?
I'm all at sea :)
This is a classic computer-graphics problem, and what you'll need to do depends on your specific model and needs. Transparency is, as you've discovered, order-dependent: as Nico mentioned, if you sort entire meshes back-to-front (draw the most distant ones first) each frame, you will be okay. But what about curved surfaces that sometimes need to draw in front of themselves (that is, they are self-occluding from the camera's point of view)? Then you have to go much farther and sort the polygons within the mesh (adios, high performance!). If you don't sort, chances are the ordering will look correct on average 50% of the time or less (unless your model is posed just right).
If you are drawing transparent cones with two-sided rendering enabled, they will look correct from some views and wrong from others as they rotate. Most of the time wrong.
One option is to just turn off depth writes during the pass(es) where you draw the transparent items. Again, YMMV according to the scene's needs, but this can be a useful fix in many cases. Another is to segment the model and sort the meshes.
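For the depth-write option, here is a sketch of what the pass in the question's .fx file could look like, with conventional alpha blending swapped in for the question's blend settings:
pass Pass1
{
ZEnable = true; // still depth-test against opaque geometry
ZWriteEnable = false; // but do not write depth for transparent surfaces
AlphaBlendEnable = true;
SrcBlend = SrcAlpha; // standard source-over blending
DestBlend = InvSrcAlpha;
VertexShader = compile vs_3_0 VertexShaderFunction();
PixelShader = compile ps_3_0 PixelShaderFunction();
}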
In games, many strategies have been followed, including re-ordering the models by hand, forcing the artists to limit transparency to certain passes only, drawing two passes per transparent layer (one with transparent color, and another with opaque but no color write to get the Z buffer correct), sending models back for a re-do, or even, if the errors are small or the action is fast, just accepting broken transparency.
There have been various solutions proposed to this general problem -- "OIT" (Order Independent Transparency) is a big enough topic to get its own wikipedia page: http://en.wikipedia.org/wiki/Order-independent_transparency

HLSL 3: Can a pixel shader be declared alone?

I've been asked to split the question below into multiple questions:
HLSL and Pix number of questions
This is the first question: in HLSL shader model 3, can I run a pixel shader without a vertex shader? I notice you can in shader model 2, but I can't seem to find a way in 3.
The shader compiles fine; however, I then get this error from Visual Studio when calling SpriteBatch.Draw():
"Cannot mix shader model 3.0 with earlier shader models. If either the vertex shader or pixel shader is compiled as 3.0, they must both be."
I don't believe I've defined anything in the shader to use anything earlier than 3, so I'm left a bit confused. Any help would be appreciated.
The problem is that the built-in SpriteBatch shader is 2.0. If you specify a pixel shader only, SpriteBatch still uses its built-in vertex shader. Hence the version mismatch.
The solution, then, is to also specify a vertex shader yourself. Fortunately Microsoft provides the source to XNA's built-in shaders. All it involves is a matrix transformation. Here's the code, modified so you can use it directly:
float4x4 MatrixTransform;
void SpriteVertexShader(inout float4 color : COLOR0,
inout float2 texCoord : TEXCOORD0,
inout float4 position : SV_Position)
{
position = mul(position, MatrixTransform);
}
And then - because SpriteBatch won't set it for you - set your effect's MatrixTransform correctly. It's a simple projection of "client" space (source: this blog post). Here's the code:
Matrix projection = Matrix.CreateOrthographicOffCenter(0,
GraphicsDevice.Viewport.Width, GraphicsDevice.Viewport.Height, 0, 0, 1);
Matrix halfPixelOffset = Matrix.CreateTranslation(-0.5f, -0.5f, 0);
effect.Parameters["MatrixTransform"].SetValue(halfPixelOffset * projection);
You can try the simple examples here. The greyscale shader is a very good example to understand how a minimal pixel shader works.
Basically, you create an Effect under your content project, like this one:
sampler s0;
float4 PixelShaderFunction(float2 coords: TEXCOORD0) : COLOR0
{
// Black & white (greyscale)
//float4 color = tex2D(s0, coords);
//color.gb = color.r;
// Transparent
float4 color = tex2D(s0, coords);
return color;
}
technique Technique1
{
pass Pass1
{
PixelShader = compile ps_2_0 PixelShaderFunction();
}
}
You also need to:
Create an Effect object and load its content.
ambienceEffect = Content.Load<Effect>("Effects/Ambient");
Call your SpriteBatch.Begin() method, passing the Effect object you want to use:
spriteBatch.Begin( SpriteSortMode.FrontToBack,
BlendState.AlphaBlend,
null,
null,
null,
ambienceEffect,
camera2d.GetTransformation());
Inside the SpriteBatch.Begin()/SpriteBatch.End() block, you must apply the technique's pass from the Effect:
ambienceEffect.CurrentTechnique.Passes[0].Apply();

General confusion about .fx files and shader use in DirectX 9 in C++ - how exactly do you make the connection with the app?

Well, basically, I'm not quite sure how to properly use the Set and Get parameter methods in D3D9 to work with .fx files. I can't find a good tutorial anywhere. I even had a book about D3D9, and while I got most of it, I'm still unable to use effect files. What's worse, the DirectX samples provided by Microsoft are packed with DX utility classes and all sorts of other needless complications, and I can't quite get through the 2k lines of code. I get the basic idea (load, begin, loop with passes, end), but can anyone please point me to a good tutorial with a simple example? The main thing I don't understand is how to work with effect parameters :(
Here is a reference sheet I wrote back when I was first learning how to use HLSL shaders in DirectX9. Perhaps it will be of assistance.
IN THE APPLICATION:
Declare needed variables:
ID3DXEffect* shader;
Load the .fx file:
D3DXCreateEffectFromFile( d3dDevice,
_T("filepath.fx"),
0,
0,
0,
0,
&shader,
0
);
Clean up the effect object (some people use a SAFE_RELEASE macro):
if(shader)
shader->Release();
shader = nullptr;
Use the shader to render something:
void Application::Render()
{
unsigned passes = 0;
shader->Begin(&passes,0);
for(unsigned i=0;i<passes;++i)
{
shader->BeginPass(i);
// Set uniforms
shader->SetMatrix("gWorld",&theMatrix);
shader->CommitChanges(); // Not necessary if SetWhatevers are done OUTSIDE of a BeginPass/EndPass pair.
/* Insert rendering instructions here */
// BEGIN EXAMPLE:
d3dDevice->SetVertexDeclaration(vertexDecl);
d3dDevice->SetStreamSource(0,vertexBuffer,0,sizeof(VERT));
d3dDevice->SetIndices(indexBuffer);
d3dDevice->DrawIndexedPrimitive(D3DPT_TRIANGLELIST,0,0,numVerts,0,8);
// END EXAMPLE
shader->EndPass();
}
shader->End();
}
IN THE .FX FILE:
Declare the uniforms (variables you want to set from within the application):
float4x4 gWorld : WORLD;
float4x4 gViewProj : VIEWPROJECTION;
float gTime : TIME;
Texture2D gDiffuseTexture; // requires a matching sampler
sampler gDiffuseSampler = sampler_state // here's the matching sampler
{
Texture = <gDiffuseTexture>;
FILTER = MIN_MAG_MIP_LINEAR;
AddressU = Wrap;
AddressV = Wrap;
};
Define the vertex shader input structure:
struct VS_INPUT // make this match the vertex structure in Application
{
float3 untransformed_pos : POSITION0;
float3 untransformed_nrm : NORMAL0;
float4 color : COLOR0;
float2 uv_coords : TEXCOORD0;
};
Define the pixel shader input structure (vertex shader output):
struct PS_INPUT
{
float4 transformed_pos : POSITION0;
float4 transformed_nrm : NORMAL0;
float4 color : COLOR0;
float2 uv_coords : TEXCOORD0;
};
Define a vertex shader:
PS_INPUT mainVS (VS_INPUT input)
{
PS_INPUT output;
/* Insert shader instructions here */
return output;
}
Define a pixel shader:
float4 mainPS (PS_INPUT input) : COLOR
{
/* Insert shader instructions here */
return float4(resultColor,1.0f);
}
Define a technique:
technique myTechnique
{
// Here is a quick sample
pass FirstPass
{
vertexShader = compile vs_3_0 mainVS();
pixelShader = compile ps_3_0 mainPS();
// Setting a few of the many D3D renderstates via the effect framework
ShadeMode = FLAT; // flat color interpolation across triangles
FillMode = SOLID; // no wireframes, no point drawing.
CullMode = CCW; // cull any counter-clockwise polygons.
}
}
Can you be a bit more specific about where you're having problems?
The basic idea with the API for Effect parameters is to load your .fx file and then use ID3DXEffect::GetParameterByName() or GetParameterBySemantic() to retrieve a D3DXHANDLE to the parameters you want to modify at runtime. Then in your render loop you can set the values for those parameters using the ID3DXEffect::SetXXX() family of functions (which one you use depends on the type of the parameter you are setting, e.g. Float, Vector, Matrix), passing the D3DXHANDLE you retrieved when you loaded the effect.
The reason you work with D3DXHANDLEs and not directly with parameter name strings is performance - it saves doing lots of string compares in your render loop to look up parameters.
A simple example of how you might use this is defining a texture2D parameter called diffuseTex in your .fx file. When you load the .fx file, use
D3DXHANDLE diffuseTexHandle = effect->GetParameterByName(NULL, "diffuseTex");
and then in your render loop set the appropriate diffuse texture for each model you draw using
LPDIRECT3DTEXTURE9 diffuseTexturePtr = GetMeTheRightTexturePlease();
effect->SetTexture(diffuseTexHandle, diffuseTexturePtr);
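For completeness, the matching declaration on the .fx side would be along these lines (names assumed to match the example above):
texture2D diffuseTex; // set from C++ through the D3DXHANDLE
sampler diffuseSampler = sampler_state // sample via this in the pixel shader
{
Texture = <diffuseTex>;
};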
