HLSL Variable Returning Zero To MonoGame

To improve performance, I offloaded some CPU tasks onto the GPU with Effects. This is what my HLSL code looks like:
float angle;
extern float2 direction;

float4 PixelShaderFunction(float4 pos : SV_POSITION, float4 color : COLOR0, float2 coords : TEXCOORD0) : COLOR
{
    angle = atan2(direction.x, -direction.y);
    return float4(1, 1, 1, 1);
}

technique DefaultTechnique
{
    pass Pass1
    {
        PixelShader = compile ps_2_0 PixelShaderFunction();
    }
}
In the Emitter class, I set the direction, apply the technique, and then retrieve the "angle" variable:
gpu.SetValue("direction", particles[i].Velocity);
for (int p = 0; p < effect.CurrentTechnique.Passes.Count; p++)
    effect.CurrentTechnique.Passes[p].Apply();
particles[i].Angle = gpu.RetrieveFloat("angle");
This runs fine with no crashing. However, the "angle" value is always 0. My HLSL skills aren't great, but the code looks like it should work as expected.
Any suggestions are appreciated.

Related

How to change Texture coordinates in MGFX Shaders in Monogame

I'm trying to change texture coordinates from the Effect class in MonoGame; how can I do that?
This is the C# code:
lighting.CurrentTechnique = lighting.Techniques["LightDrawing"];
lighting.Parameters["RenderTargetTexture"].SetValue(_melttown[0].Texture);
lighting.Parameters["MaskTexture"].SetValue(_lightMask[0].Texture);
This is the MGFX code:
float4 MainPS(float2 textureCoords : TEXCOORD0) : COLOR
{
    float4 pixelColor = tex2D(RenderTargetSampler, textureCoords);
    float4 lightColor = tex2D(MaskSampler, textureCoords);
    return pixelColor * lightColor;
}

technique LightDrawing
{
    pass P0
    {
        PixelShader = compile PS_SHADERMODEL MainPS();
    }
};
I am trying to change the texture coordinates so that the light moves across the screen.

Transparency shader cover model

I am using XNA with an HLSL shader, and I have a problem with transparency in the shader:
when rendering two models facing each other, the model behind shows through only when it is rendered first.
Let me explain...
blue cone = vector3(0,0,0) - first target
green cone = vector3(50,0,50) - second target
Here, rendering blue first and then green, the blue cone behind shows through (screenshot: "can see").
Now the other way around, green first and then blue, and it does not (screenshot: "cannot see").
As long as there are only two cones, I can compute each one's distance from the camera and render the farther one first (the only solution I found searching the net). But if I have several models of different sizes, it can happen that model A is farther away than model B, yet its size causes it to hide model B.
Here is some of the code I use.
.fx file
float4x4 World;
float4x4 View;
float4x4 Projection;
float3 Color;

texture ColorTexture : DIFFUSE;

sampler2D ColorSampler = sampler_state {
    Texture = <ColorTexture>;
    FILTER = MIN_MAG_MIP_LINEAR;
    AddressU = Wrap;
    AddressV = Wrap;
};

struct VertexShaderInput
{
    float4 Position : POSITION0;
    float3 UV : TEXCOORD0;
};

struct VertexShaderOutput
{
    float4 Position : POSITION0;
    float3 UV : TEXCOORD0;
};

VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;
    float4 worldPosition = mul(input.Position, World);
    float4 viewPosition = mul(worldPosition, View);
    float4 projPosition = mul(viewPosition, Projection);
    output.Position = projPosition;
    output.UV = input.UV;
    return output;
}

float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
    float4 map = tex2D(ColorSampler, input.UV);
    return float4(map.rgb * Color, map.a);
}

technique Textured
{
    pass Pass1
    {
        ZEnable = true;
        ZWriteEnable = true;
        AlphaBlendEnable = true;
        DestBlend = DestAlpha;
        SrcBlend = BlendFactor;

        VertexShader = compile vs_3_0 VertexShaderFunction();
        PixelShader = compile ps_3_0 PixelShaderFunction();
    }
}
Draw code in the XNA project:
protected override void Draw(GameTime gameTime)
{
    GraphicsDevice.Clear(Color.DarkGray);

    for (int i = 0; i < 2; i++)
    {
        ModelEffect.Parameters["View"].SetValue(this.View);
        ModelEffect.Parameters["Projection"].SetValue(this.Projection);
        ModelEffect.Parameters["ColorTexture"].SetValue(this.colorTextured);
        ModelEffect.CurrentTechnique = ModelEffect.Techniques["Textured"];

        Vector3 Scala = new Vector3(2, 3, 1);
        if (i == 0)
        {
            World = Matrix.CreateScale(Scala) * Matrix.CreateWorld(FirstTarget, Vector3.Forward, Vector3.Up);
            ModelEffect.Parameters["Color"].SetValue(new Vector3(0, 0, 255)); // blue
        }
        else
        {
            World = Matrix.CreateScale(Scala) * Matrix.CreateWorld(SecondTarget, Vector3.Forward, Vector3.Up);
            ModelEffect.Parameters["Color"].SetValue(new Vector3(0, 255, 0)); // green
        }
        ModelEffect.Parameters["World"].SetValue(World);

        foreach (ModelMesh mesh in Emitter.Meshes)
        {
            foreach (ModelMeshPart part in mesh.MeshParts)
            {
                part.Effect = ModelEffect;
            }
            mesh.Draw();
        }
    }
    base.Draw(gameTime);
}
I need the for loop to change the rendering order...
Is there any way to work around this problem?
I hope I explained myself; I think the fix is somewhere in the .fx file... or not? I'm all at sea :)
This is a classic computer graphics problem. What you'll need to do depends on your specific models and needs, but transparency is, as you've discovered, order dependent. As Nico mentioned, if you sort entire meshes back-to-front (draw the farthest ones first) each frame, you will be okay. But what about curved surfaces that sometimes need to draw in front of themselves (that is, they are self-occluding from the camera's point of view)? Then you have to go much further and sort the polygons within the mesh (adios, high performance!). If you don't sort, chances are the order will look correct 50% of the time or less (unless your model is posed just right).
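A minimal sketch of that back-to-front sort, assuming a hypothetical transparentModels list whose items expose a Position, and a known cameraPosition:

// Sort farthest-first each frame so distant transparent models draw before near ones.
transparentModels.Sort((a, b) =>
    Vector3.DistanceSquared(cameraPosition, b.Position)
        .CompareTo(Vector3.DistanceSquared(cameraPosition, a.Position)));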
If you are drawing transparent cones with two-sided rendering enabled, they will look correct at some orientations and wrong at others as they rotate. Most of the time wrong.
One option is to just turn off depth writes during the pass(es) where you draw the transparent items, as sketched below. Again, YMMV according to the scene's needs, but this can be a useful fix in many cases. Another is to segment the model and sort the meshes.
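A minimal sketch, assuming the XNA 4.0 render-state API (under 3.1 the equivalent switch is RenderState.DepthBufferWriteEnable):

// Transparent pass: keep the depth test on, but stop writing to the Z buffer.
GraphicsDevice.BlendState = BlendState.AlphaBlend;
GraphicsDevice.DepthStencilState = DepthStencilState.DepthRead;
// ... draw the transparent models here ...
GraphicsDevice.DepthStencilState = DepthStencilState.Default; // restore for opaque drawing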
In games, many strategies have been followed, including re-ordering the models by hand, forcing the artists to limit transparency to certain passes only, drawing two passes per transparent layer (one with transparent color, and another with opaque but no color write to get the Z buffer correct), sending models back for a re-do, or even, if the errors are small or the action is fast, just accepting broken transparency.
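The two-passes-per-layer trick from that list might look like this; a sketch assuming XNA 4.0, where DrawModel is a hypothetical helper and the states would normally be cached rather than rebuilt per frame:

// Pass 1: opaque, no color write -- lays down a correct Z buffer for the model.
GraphicsDevice.BlendState = new BlendState { ColorWriteChannels = ColorWriteChannels.None };
GraphicsDevice.DepthStencilState = DepthStencilState.Default;
DrawModel(model);

// Pass 2: transparent color, depth-tested against pass 1, no further Z writes.
GraphicsDevice.BlendState = BlendState.AlphaBlend;
GraphicsDevice.DepthStencilState = DepthStencilState.DepthRead;
DrawModel(model);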
There have been various solutions proposed to this general problem. "OIT" (Order-Independent Transparency) is a big enough topic to get its own Wikipedia page: http://en.wikipedia.org/wiki/Order-independent_transparency

2D Pixel Shader has no effect?

I set up a basic pixel shader (right now, it's configured for testing), and it doesn't seem to do anything. I set it up like so:
uniform extern texture ScreenTexture;

const float bloomThreshhold = 0.4;
const float existingPixelColorMult = 1.1;

sampler ScreenS = sampler_state
{
    Texture = <ScreenTexture>;
};
float4 BloomedColor(float2 texCoord : TEXCOORD0) : COLOR
{
    // pick a pixel on the screen for this pixel, based on
    // the calculated offset and direction
    float2 temp = texCoord;
    temp.x += 1;
    float4 mainPixelColor = 0;
    /*
    float4 pixelPlus1X = tex2D(ScreenS, temp);
    temp.x -= 2;
    float4 pixelMinus1X = tex2D(ScreenS, temp);
    temp.x += 1;
    temp.y += 1;
    float4 pixelPlus1Y = tex2D(ScreenS, temp);
    temp.y -= 2;
    float4 pixelMinus1Y = tex2D(ScreenS, temp);
    */
    return mainPixelColor;
}

technique Bloom
{
    pass P0
    {
        PixelShader = compile ps_1_1 BloomedColor();
    }
}
with the loading code like:
glowEffect = Content.Load<Effect>("GlowShader");
glowEffect.CurrentTechnique = glowEffect.Techniques[0];
and the usage code is:
spriteBatch.Begin();
glowEffect.Begin();
glowEffect.CurrentTechnique.Passes[0].Begin();
spriteBatch.Draw(screenImage, Vector2.Zero, Color.White);
spriteBatch.End();
glowEffect.CurrentTechnique.Passes[0].End();
glowEffect.End();
Loading seems to work fine, and no errors are thrown when I use this to render the texture, but it acts as if the effect code isn't there at all. It can't be that I'm using the wrong shader version (I tested with both 2.0 and 1.1), so why doesn't it work? (Using XNA 3.1)
You're returning 0 for every pixel; you have commented out all the code that would return anything else. 0 is black, and depending on the blend mode you'll either get black (if the blend mode shows this as a color) or no change (if the blend mode multiplies by the result). If you were just attempting to see whether the shader is being loaded and run at all, try an oddball color. Neon green, anyone? Then, once you confirm it is at least being processed, start uncommenting that code and assessing the result.
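For example, a throwaway sanity check that swaps only the return value of the shader above:

// If the effect is really running, everything drawn through it turns neon green.
float4 BloomedColor(float2 texCoord : TEXCOORD0) : COLOR
{
    return float4(0, 1, 0, 1); // solid green, full alpha
}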
Finally, if Bloom is what you're after, Microsoft has a very useful sample you will probably learn a lot from here:
http://create.msdn.com/en-US/education/catalog/sample/bloom
If you're using XNA 4.0, see what Shawn Hargreaves has to say about this.

Shader including another shader?

Is it possible, using XNA 4, to include a shader within another shader? I know you could do this in 3.1, but I'm having trouble getting it to work in 4.0. Any pointers would be great.
EDIT
//---------------------------------------------------------------------------//
// Name : Rain.fx
// Desc : Rain particle effect using cylindrical billboards
// Author : Justin Stoecker. Copyright (C) 2008-2009.
//---------------------------------------------------------------------------//
#include "common.inc" // It's this line that causes me a problem
float4x4 matWorld;
float3 vVelocity;
float3 vOrigin; // min point of the cube area
float fWidth; // width of the weather region (x-axis)
float fHeight; // height of the weather region (y-axis)
float fLength; // length of the weather region (z-axis)
... Rest of file ...
The "common.inc" file has variables in there, but I was wondering if you could put methods in there as well?
Yes, it's possible; from memory I think the basic effect shader sample from the MS App Hub does it.
In any case, see code below!
In FractalBase.fxh
float4x4 MatrixTransform : register(vs, c0);
float2 Pan;
float Zoom;
float Aspect;
float ZPower = 2;
float3 Colour = 0;
float3 ColourScale = 0;

float ComAbs(float2 Arg)
{
    // body omitted
}

float2 ComSquare(float2 Arg)
{
    // body omitted
}

int GreaterThan(float x, float y)
{
    // body omitted
}

float4 GetColour(int DoneIterations, float MaxIterations, float BailoutTest, float OldBailoutTest, float BailoutFigure)
{
    // body omitted
}

void SpriteVertexShader(inout float4 Colour : COLOR0,
                        inout float2 texCoord : TEXCOORD0,
                        inout float4 position : SV_Position)
{
    position = mul(position, MatrixTransform);

    // Convert the position from screen space into complex coordinates
    texCoord = (position) * Zoom * float2(1, Aspect) - float2(Pan.x, -Pan.y);
}
In FractalMandelbrot.fx
#include "FractalBase.fxh"
float4 FractalPixelShader(float2 texCoord : TEXCOORD0, uniform float Iterations) : COLOR0
{
}
technique Technique1
{
pass
{
VertexShader = compile vs_3_0 SpriteVertexShader();
PixelShader = compile ps_3_0 FractalPixelShader(128);
}
}
#includes work like this: the preprocessor loads your main .fx file and parses it, looking for anything that starts with a #. A #include causes the preprocessor to load the referenced file and insert its contents into the source buffer; effectively, your #include directive is replaced by the entire contents of the included file.
So, yes, you can define anything in your #includes that you can define in a regular .fx file. I use this for keeping lighting functions, vertex type declarations, etc in common files that are used by several shaders.
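As a small illustration (the file and function names here are made up for the example):

// lighting.inc -- a shared helper, included by several .fx files
float3 LambertLight(float3 normal, float3 lightDir, float3 lightColor)
{
    return lightColor * saturate(dot(normalize(normal), normalize(lightDir)));
}

// MyShader.fx -- the preprocessor pastes the contents of lighting.inc in here
#include "lighting.inc"

float4 MyPS(float3 normal : TEXCOORD1) : COLOR0
{
    return float4(LambertLight(normal, float3(0, 1, 0), float3(1, 1, 1)), 1);
}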

HLSL: Using arrays inside a struct

I came across a weird behavior of HLSL. I am trying to use an array that is contained within a struct, like this (Pixel Shader code):
struct VSOUT
{
    float4 projected : SV_POSITION;
    float3 pos : POSITION;
    float3 normal : NORMAL;
};

struct Something
{
    float a[17];
};

float4 shMain(VSOUT input) : SV_Target
{
    Something s;
    for (int i = 0; i < (int)(input.pos.x * 800); ++i)
        s.a[(int)input.pos.x] = input.pos.x;
    return col * s.a[(int)input.pos.x];
}
The code makes no sense logically, it's just a sample. The problem is that when I try to compile this code, I get the following error (line 25 is the for-loop line):
(25,7): error X3511: Forced to unroll loop, but unrolling failed.
However, when I put the array outside the struct (just declare float a[17] in shMain), everything works as expected.
My question is, why is DirectX trying to unroll the (unrollable) for-loop when using the struct? Is this a documented behavior? Is there any available workaround except for putting the array outside the struct?
I am using shader model 4.0, DirectX 10 SDK from June 2010.
EDIT:
For clarification, I am adding the working code; it only replaces the struct Something with a plain array:
struct VSOUT
{
    float4 projected : SV_POSITION;
    float3 pos : POSITION;
    float3 normal : NORMAL;
};

float4 shMain(VSOUT input) : SV_Target
{
    float a[17]; // direct declaration of the array
    for (int i = 0; i < (int)(input.pos.x * 800); ++i)
        a[(int)input.pos.x] = input.pos.x;
    return col * a[(int)input.pos.x];
}
This code compiles and works as expected. It even works if I add the [loop] attribute in front of the for loop, which means the loop is not unrolled (the correct behavior).
I'm not sure, but what I do know is that the hardware schedules and processes fragments in 2x2 blocks (for computing derivatives). That could be a reason fxc tries to unroll the for loop, so that the shader program executes in lockstep.
Also, did you try the [loop] attribute to generate code that uses real flow control?
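For reference, the attribute goes directly in front of the loop, shown here on the question's own snippet (whether it lets the struct version compile is exactly what's in question):

// Ask fxc for real flow control instead of unrolling.
[loop]
for (int i = 0; i < (int)(input.pos.x * 800); ++i)
    s.a[(int)input.pos.x] = input.pos.x;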
