How do I make pixel shaders with multiple passes in XNA? - xna

I'm very new to shaders and am very confused about the whole thing, even after following several tutorials (in fact this is my second question about shaders today).
I'm trying to make a shader with two passes:
technique Technique1
{
    pass Pass1
    {
        PixelShader = compile ps_2_0 HorizontalBlur();
    }
    pass Pass2
    {
        PixelShader = compile ps_2_0 VerticalBlur();
    }
}
However, this only applies VerticalBlur(). If I remove Pass2, it falls back to the HorizontalBlur() in Pass1. Am I missing something? Maybe it's simply not passing the result of the first pass to the second pass, in which case how would I do that?
Also, in most of the tutorials I've read, I'm told to put effect.CurrentTechnique.Passes[0].Apply(); after I start my spritebatch with the effect. However, this doesn't seem to change anything; I can set it to Passes[1] or even remove it entirely, and I still get only Pass2. (I do get an error when I try to set it to Passes[2], however.) What's the use of that line then? Has the need for it been removed in recent versions?
Thanks a bunch!

To render multiple passes:
For the first pass, render your scene onto a texture.
For the second, third, fourth, etc passes:
Draw a quad that uses the texture from the previous pass. If there are more passes to follow, render this pass to another texture; otherwise, if this is the last pass, render it to the back buffer.
In your example, say you are rendering a car.
First you render the car to a texture.
Then you draw a big rectangle, the size of the screen in pixels, placed at a z depth of 0.5, with identity world, view, and projection matrices, and with your car scene as the texture, and apply the horizontal blur pass. This is rendered to a new texture that now contains a horizontally blurred car.
Finally, render the same rectangle but with the "horizontally blurred car" texture, and apply the vertical blur pass. Render this to the back buffer. You have now drawn a blurred car scene.
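Since the original question is drawing with SpriteBatch, here is a minimal sketch of the same ping-pong idea in that style. The render target names (sceneRT, blurHRT) and the DrawScene helper are placeholders, and it assumes each pass can be applied individually in Immediate mode:

// Assumed fields, created once: RenderTarget2D sceneRT, blurHRT; Effect blurEffect; SpriteBatch spriteBatch;

// 1. Render the scene (the car) into sceneRT.
GraphicsDevice.SetRenderTarget(sceneRT);
GraphicsDevice.Clear(Color.CornflowerBlue);
DrawScene(); // placeholder for your normal scene drawing

// 2. Horizontal blur: draw sceneRT into blurHRT using Pass1 only.
GraphicsDevice.SetRenderTarget(blurHRT);
spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.Opaque);
blurEffect.CurrentTechnique.Passes[0].Apply(); // HorizontalBlur
spriteBatch.Draw(sceneRT, Vector2.Zero, Color.White);
spriteBatch.End();

// 3. Vertical blur: draw blurHRT to the back buffer using Pass2.
GraphicsDevice.SetRenderTarget(null);
spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.Opaque);
blurEffect.CurrentTechnique.Passes[1].Apply(); // VerticalBlur
spriteBatch.Draw(blurHRT, Vector2.Zero, Color.White);
spriteBatch.End();

Each pass reads the result of the previous pass from a texture, which is why running both passes in one technique over the same draw just overwrites the first pass's output.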

The reason for the following
effect.CurrentTechnique.Passes[0].Apply();
is that many effects only have a single pass.
To run multiple passes, I think you have to do this instead:
foreach (EffectPass pass in effect.CurrentTechnique.Passes)
{
    pass.Apply();
    // Your draw code here
}

Related

Where to call SetRenderTarget?

I'd like to change my RenderTargets between SpriteBatch.Begin and SpriteBatch.End. I already know this works:
GraphicsDevice.SetRenderTarget(target1);
SpriteBatch.Begin();
SpriteBatch.Draw(...);
SpriteBatch.End();

GraphicsDevice.SetRenderTarget(target2);
SpriteBatch.Begin();
SpriteBatch.Draw(...);
SpriteBatch.End();
But I'd really like to make this work:
SpriteBatch.Begin();
GraphicsDevice.SetRenderTarget(target1);
SpriteBatch.Draw(...);
GraphicsDevice.SetRenderTarget(target2);
SpriteBatch.Draw(...);
SpriteBatch.End();
I've never seen anybody doing this, but I couldn't find any reason why not.
EDIT: a little more about why I want to do this:
In my project, I use SpriteSortMode.Immediate (to be able to change the BlendState whenever I want), and I simply iterate through a sorted list of sprites and draw them all.
But now I want to apply a multi-pass shader to some sprites, but not all! I'm quite new to shaders, but from what I understand, I have to draw my sprite onto an intermediate target using the first pass, and then draw that intermediate result onto the final render target using the second pass (I'm using a Gaussian blur pixel shader).
That's why I'd like to draw onto the target I want, using the desired shader, without having to make a new Begin/End.
The question is: Why do you want to change the render target there?
You won't have any performance improvements, since the batch would have to be split anyways when the render target (or any other render state) changes.
SpriteBatch tries to group the sprites by common attributes, for example the texture when SpriteSortMode.Texture is used. That means sprites sharing a texture will be drawn in the same draw call (batch). Having less batches can improve performance. But you can't change the GPU state during a draw call. So when you change the render target you are bound to use two draw calls anyways.
Ergo, even if the second example worked, the number of batches would be the same.
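For what it's worth, here is a rough sketch of the "blur only some sprites" case under those constraints; the names (blurTarget, blurEffect, spritesToBlur, normalSprites) are placeholders, not anything from the original project:

// Pass A: draw only the sprites that need blurring into an off-screen target.
GraphicsDevice.SetRenderTarget(blurTarget);
GraphicsDevice.Clear(Color.Transparent);
spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend);
blurEffect.CurrentTechnique.Passes[0].Apply(); // first blur pass
foreach (var sprite in spritesToBlur)
    sprite.Draw(spriteBatch);
spriteBatch.End();

// Pass B: draw everything to the back buffer; the blurred sprites come from blurTarget.
GraphicsDevice.SetRenderTarget(null);
spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend);
foreach (var sprite in normalSprites)
    sprite.Draw(spriteBatch);
blurEffect.CurrentTechnique.Passes[1].Apply(); // second blur pass
spriteBatch.Draw(blurTarget, Vector2.Zero, Color.White);
spriteBatch.End();

Either way there are two Begin/End blocks, which is the point: the batch has to split when the render target changes.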

What's the easiest way to use glow/blur/wind effect on my scene?

I want to use glow/blur/wind effect on my models (or on my complete scene). How should I do this? What's the easiest way?
You would get a better answer if you provided more concrete details of what you want to implement.
For a full screen pass:
Render scene as normal to off screen texture.
Bind texture containing rendered scene as input texture for next pass:
Render a full-screen quad (two triangles) with a simple vertex shader.
Inside fragment shader you do your blur/glow/whatever effect by sampling texture in interesting ways.
Note that if you have any HUD elements, you'll want to render these after the full-screen effect.
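As a hedged XNA-flavoured sketch of that full-screen pass (sceneTarget, postEffect, DrawScene and DrawHud are assumed names):

// 1. Render the scene into an off-screen target.
GraphicsDevice.SetRenderTarget(sceneTarget);
GraphicsDevice.Clear(Color.Black);
DrawScene();

// 2. Draw that texture over the whole back buffer through the post-process effect.
GraphicsDevice.SetRenderTarget(null);
spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.Opaque, null, null, null, postEffect);
spriteBatch.Draw(sceneTarget, new Rectangle(0, 0, GraphicsDevice.Viewport.Width, GraphicsDevice.Viewport.Height), Color.White);
spriteBatch.End();

// 3. Draw HUD elements afterwards so they are not affected by the effect.
DrawHud();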

Depth of Field issue

I have converted a Depth Of Field shader from XNA 3.1 to 4.0. The problem is, it is completely draining my colours and not rendering anything else. You can see the problem here:
Project Vanquish - Depth of Field issue
Any ideas would be greatly appreciated.
EDIT
I thought it best to add the Render method too:
public void PostProcess(GraphicsDevice device)
{
    // Gaussian Blur Horizontal
    device.SetRenderTarget(this.GaussianHRT);
    device.Clear(Color.White);
    device.SetRenderTarget(null);
    this.SetBlurEffectParameters(1.0f / this.device.Viewport.Width, 0);
    this.DrawQuad(this.resolveTarget, this.gaussianBlur.Effect);

    // Gaussian Blur Vertical
    device.SetRenderTarget(this.GaussianVRT);
    device.Clear(Color.White);
    device.SetRenderTarget(null);
    this.SetBlurEffectParameters(0, 1.0f / this.device.Viewport.Height);
    this.DrawQuad(this.GaussianHRT, this.gaussianBlur.Effect);

    // Render result
    device.Textures[0] = this.resolveTarget;
    device.Textures[1] = this.GaussianVRT;
    device.Textures[2] = this.depthRT;
    this.DrawQuad(this.resolveTarget, this.combine.Effect);

    // Reset RenderStates
    this.ResetRenderStates();
}
By outputting this to separate RenderTargets, I can see some potential issues, but I can't understand why I get a blank RenderTarget even though I set the resolveTarget RenderTarget before I render all of the scene. This is then rendered after the scene has been rendered.
Are you saying that you ported it line for line to XNA 4 and it now exhibits this behavior, or did you make code changes? I mean, it looks to me like the pixel shader is just returning invalid color values.
Can you give us some details about what changed in your shader from 3.1 to 4.0?
Edit: To go down a different line of thought ... another reason that you'd see black rendering such as this is around lighting. I would double check and verify that all of your light parameters are being passed into the shader, and that they are calculating accordingly.
You call the DrawQuad function before setting the textures; this would result in invalid textures being bound to the Combine shader's inputs.
There are a number of other things that could go wrong here, but this is the most likely suspect.
The problem existed in a function before this where I set the DepthBuffer. Thanks for the assistance.

How to scale on-screen pixels?

I have written a 2D Jump&Run Engine resulting in a 320x224 (320x240) image. To maintain the old school "pixely"-feel to it, I would like to scale the resulting image by 2 or 3 or 4, according to the resolution of the user.
I don't want to scale each and every sprite, but the resulting image!
Thanks in advance :)
Bob's answer is correct about changing the filtering mode to TextureFilter.Point to keep things nice and pixelated.
But possibly a better method than scaling each sprite (as you'd also have to scale the position of each sprite) is to just pass a matrix to SpriteBatch.Begin, like so:
sb.Begin(/* first three parameters */, Matrix.CreateScale(4f));
That will give you the scaling you want without having to modify all your draw calls.
However it is worth noting that, if you use floating-point offsets in your game, you will end up with things not aligned to pixel boundaries after you scale up (with either method).
There are two solutions to this. The first is to have a function like this:
public static Vector2 Floor(Vector2 v)
{
    return new Vector2((float)Math.Floor(v.X), (float)Math.Floor(v.Y));
}
And then pass your position through that function every time you draw a sprite. Although this might not work if your sprites use any rotation or offsets. And again you'll be back to modifying every single draw call.
The "correct" way to do this, if you want a plain point-wise scale-up of your whole scene, is to draw your scene to a render target at the original size. And then draw your render target to screen, scaled up (with TextureFilter.Point).
The function you want to look at is GraphicsDevice.SetRenderTarget. The MSDN article on it might be worth reading, and if you're on or moving to XNA 4.0, note that render target behaviour changed in that version.
I couldn't find a simpler XNA sample for this quickly, but the Bloom Postprocess sample uses a render target that it then applies a blur shader to. You could simply ignore the shader entirely and just do the scale-up.
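A rough sketch of that render-target approach (the 3x scale factor, names, and XNA 4.0-style Begin call are just for illustration):

// Created once, at the native low resolution:
// RenderTarget2D lowResTarget = new RenderTarget2D(GraphicsDevice, 320, 240);

// Draw the whole game at 320x240 into the render target.
GraphicsDevice.SetRenderTarget(lowResTarget);
GraphicsDevice.Clear(Color.Black);
DrawGame(); // placeholder for all your normal sprite drawing, in 320x240 coordinates

// Draw the render target to the back buffer, scaled up with point sampling
// so the pixels stay crisp instead of being linearly filtered.
GraphicsDevice.SetRenderTarget(null);
spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.Opaque, SamplerState.PointClamp, null, null);
spriteBatch.Draw(lowResTarget, new Rectangle(0, 0, 320 * 3, 240 * 3), Color.White);
spriteBatch.End();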
You could use a pixelation effect. Draw to a RenderTarget2D, then draw the result to the screen using a Pixel Shader. There's a tool called Shazzam Shader Editor that lets you try out pixel shaders, and it includes one that does pixelation:
http://shazzam-tool.com/
This may not be what you wanted, but it could be good for allowing a high-resolution mode and for having the same effect no matter what resolution was used...
I'm not exactly sure what you mean by "resulting in ... an image" but if you mean your end result is a texture then you can draw that to the screen and set a scale:
spriteBatch.Draw(texture, position, source, color, rotation, origin, scale, effects, depth);
Just replace the scale with whatever number you want (2, 3, or 4). I do something similar but scale per sprite and not the resulting image. If you mean something else let me know and I'll try to help.
XNA defaults to linearly filtering (smoothing) the scaled image. If you want to retain the pixelated goodness you'll need to draw in immediate sort mode and set some additional parameters:
spriteBatch.Begin(SpriteBlendMode.AlphaBlend, SpriteSortMode.Immediate, SaveStateMode.None);
GraphicsDevice.SamplerStates[0].MagFilter = TextureFilter.Point;
GraphicsDevice.SamplerStates[0].MinFilter = TextureFilter.Point;
GraphicsDevice.SamplerStates[0].MipFilter = TextureFilter.Point;
It's the Point filter (as in the snippet above) that gives the crisp nearest-neighbour look.
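If you're on XNA 4.0, those sampler state properties are no longer settable directly; the equivalent, as far as I know, is to hand a point-sampling state to SpriteBatch.Begin:

spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend,
    SamplerState.PointClamp, null, null);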

Dynamically alter or destroy a Texture2D for drawing and collision detection

I am using XNA for a 2D project. I have a problem and I don't know which way to solve it. I have a texture (an image) that is drawn to the screen for example:
|+++|+++|
|---|---|
|+++|+++|
Now I want to be able to destroy part of that structure/image so that it looks like:
|+++|
|---|---|
|+++|+++|
so that collision now will work as well for the new image.
Which way would be better to solve this problem:
Swap the whole texture with another texture, that is transparent in the places where it is destroyed.
Use some trickery with spriteBatch.Draw(sourceRectangle, destinationRectangle) to get the desired rectangles drawn, and also do collision checking with this somehow.
Split the texture into 4 smaller textures, each of which will be responsible for its own drawing/collision detection.
Use some other smart-ass way I don't know about.
Any help would be appreciated. Let me know if you need more clarification/examples.
EDIT: To clarify I'll provide an example of usage for this.
Imagine a 4x4 piece of wall; when it is shot, a little 1x1 part of it is destroyed.
I'll take the third option:
3 - Split the texture into 4 smaller textures, each of which will be responsible for its own drawing/collision detection.
It's not hard to do. Basically it's the same idea as a tile set; however, you'll need to change your code to fit this approach.
Read a little about Tiles on: http://www-cs-students.stanford.edu/~amitp/gameprog.html#tiles
Many sites and books cover tiles and how to use them to build game worlds, but you can apply the same logic to anything whose whole is composed of small parts.
Let me quick note the other options:
1 - Swap the whole texture with another texture that is transparent in the places where it is destroyed.
No; having a different image for every possible state is bad. What if you need to change the texture? Will you remake every image again?
2 - Use some trickery with spriteBatch.Draw(sourceRectangle, destinationRectangle) to get the desired rectangles drawn, and also do collision checking with this somehow.
Unfortunately that doesn't work, because spriteBatch.Draw only works with rectangles :(
4 - Use some other smart-ass way I don't know about.
I can't imagine any magic for this. Maybe you could use another image as a mask, but that would be extremely processing-expensive.
Check out this article at Ziggyware. It is about Deformable Terrain, and might be what you are looking for. Essentially, the technique involves setting the pixels you want to hide to transparent.
Option #3 will work.
A more robust system (if you don't want to be limited to boxes) would use per-pixel collision detection. The process basically works as follows:
Calculate a bounding box (or circle) for each object
Check to see if two objects overlap
For each overlap, blit the sprites onto a hidden surface, comparing pixel values as you go. If a pixel is already set when you try to draw the pixel from the second sprite, you have a collision.
Here's a good XNA example (another Ziggyware article, actually): 2D Per Pixel Collision Detection
Some more links:
Can someone explain per-pixel collision detection
XNA 2-d per-pixel collision
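In the linked XNA samples the comparison is done on the CPU with Texture2D.GetData rather than by blitting to a hidden surface; a rough sketch of that check (texA/texB and posA/posB are assumed to be the two sprites' textures and top-left screen positions) looks like this:

// Copy the pixel data out of each texture (GetData is expensive, so cache these arrays).
Color[] dataA = new Color[texA.Width * texA.Height];
texA.GetData(dataA);
Color[] dataB = new Color[texB.Width * texB.Height];
texB.GetData(dataB);

// Only test the region where the two bounding boxes overlap.
Rectangle boundsA = new Rectangle((int)posA.X, (int)posA.Y, texA.Width, texA.Height);
Rectangle boundsB = new Rectangle((int)posB.X, (int)posB.Y, texB.Width, texB.Height);
Rectangle overlap = Rectangle.Intersect(boundsA, boundsB);

bool collided = false;
for (int y = overlap.Top; y < overlap.Bottom && !collided; y++)
{
    for (int x = overlap.Left; x < overlap.Right; x++)
    {
        // A collision needs a non-transparent pixel from both sprites at the same screen position.
        Color a = dataA[(x - boundsA.Left) + (y - boundsA.Top) * texA.Width];
        Color b = dataB[(x - boundsB.Left) + (y - boundsB.Top) * texB.Width];
        if (a.A != 0 && b.A != 0)
        {
            collided = true;
            break;
        }
    }
}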
I ended up choosing option 3.
Basically I have a Tile class that contains a texture and a dimension. Dimension n means that there are n*n subtiles within that tile. I also have an array that keeps track of which subtiles are destroyed or not. My class looks roughly like this:
class Tile
{
    Texture2D texture;
    int dimension;       // n means the tile is split into n*n subtiles
    int[,] subtiles;     // 1 = intact, 0 = destroyed

    public Tile()        // constructor
    {
        subtiles = new int[dimension, dimension];
        InitializeSubtilesTo(1);
    }

    public void Draw(SpriteBatch spriteBatch)
    {
        // iterate over subtiles; only the intact ones get drawn
        for (int i = 0; i < dimension; i++)
            for (int j = 0; j < dimension; j++)
                if (subtiles[i, j] == 1)
                {
                    Vector2 drawPos = new Vector2(i * tileWidth, j * tileHeight);
                    spriteBatch.Draw(texture, drawPos, Color.White);
                }
    }
In a similar fashion I have a collision method that will check for collision:
    public bool Collides(Rectangle rect)
    {
        // iterate over subtiles, skipping destroyed ones
        for (int i = 0; i < dimension; i++)
            for (int j = 0; j < dimension; j++)
            {
                if (subtiles[i, j] == 0) continue;
                // the rect for this subtile
                Rectangle subtileRect = new Rectangle(i * tileWidth, j * tileHeight, tileWidth, tileHeight);
                if (subtileRect.Intersects(rect))
                    return true;
            }
        return false;
    }
And so on. You can imagine how to "destroy" certain subtiles by setting their respective value to 0, and how to check if the whole tile is destroyed.
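For completeness, a small sketch of those two operations on the class above (tileWidth/tileHeight are the same subtile size used in Draw, and the method names are just illustrative):

    // Mark the subtile containing the given point (in tile-local pixels) as destroyed.
    public void DestroyAt(int localX, int localY)
    {
        subtiles[localX / tileWidth, localY / tileHeight] = 0;
    }

    // The tile is fully destroyed once every subtile is gone.
    public bool IsDestroyed()
    {
        foreach (int subtile in subtiles)
            if (subtile == 1)
                return false;
        return true;
    }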
Granted with this technique, the subtiles will all have the same texture. So far I can't think of a simpler solution.
