Problem with alpha blending in XNA

Hi,
I have a background and two PNG sprites.
I want to create this effect using the provided background and sprites in XNA 3.1.
I'm doing something wrong, because I only get this. As you noticed, it's not the effect I want to achieve.
Is it possible to do this effect with a few lines of code using alpha blending in XNA 3.1? A practical example would be really great!

First, render the textures containing the shapes that you want to be transparent to texture A.
The textures containing the shapes should contain black shapes on a transparent background -- easily created with image-editing software like Photoshop.
Then take texture A and draw it on top of your scene using an effect (an HLSL shader) that does:
output = float4(0, 0, 0, A.r);
This effectively makes the output image's alpha lower where A is darker.
The image will have clear portions where you drew your shapes on A, and will be black everywhere else.
Here are the details of the shader code:
sampler TextureSampler : register(s0);

float4 PS(float4 color : COLOR0, float2 texCoord : TEXCOORD0) : COLOR0
{
    // Sample the mask and output black, with the mask's red channel as alpha
    float4 Color = tex2D(TextureSampler, texCoord);
    Color = float4(0, 0, 0, Color.r);
    return Color;
}

technique Vicky
{
    pass P0
    {
        PixelShader = compile ps_2_0 PS();
    }
}
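Numerically, the shader maps each mask texel's red channel straight into the output alpha. A minimal Python sketch of that per-texel arithmetic (the real work happens on the GPU; this just illustrates the mapping):

```python
def mask_shader(mask_pixel):
    """Mimics: output = float4(0, 0, 0, mask.r) for one RGBA texel (0.0-1.0)."""
    r, g, b, a = mask_pixel
    return (0.0, 0.0, 0.0, r)

# A black texel in the mask (r = 0) yields alpha 0: fully transparent,
# so the scene shows through where the shapes were drawn.
print(mask_shader((0.0, 0.0, 0.0, 1.0)))  # -> (0.0, 0.0, 0.0, 0.0)
# A white texel (r = 1) yields alpha 1: opaque black covers the scene.
print(mask_shader((1.0, 1.0, 1.0, 1.0)))  # -> (0.0, 0.0, 0.0, 1.0)
```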

If you want a solution without a shader:
You first need your fog-of-war textures to be black, with the transparent parts white.
Render your map and entities normally to a RenderTarget2D.
Clear your background to black.
Start a sprite batch with additive blending.
Render your fog-of-war textures.
Start a new sprite batch with multiply blending.
Render your map RenderTarget2D on top of the whole screen.
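A hedged sketch of the per-pixel arithmetic behind those steps, in Python (RGB floats in 0.0-1.0; in reality the GPU blend stages do this):

```python
def add_blend(dst, src):
    """Additive blend: dst + src, clamped to 1.0 per channel."""
    return tuple(min(d + s, 1.0) for d, s in zip(dst, src))

def multiply_blend(dst, src):
    """Multiply blend: dst * src per channel."""
    return tuple(d * s for d, s in zip(dst, src))

# Start from a black background and additively accumulate fog-of-war masks...
black = (0.0, 0.0, 0.0)
fog_mask = (1.0, 1.0, 1.0)        # white = revealed area
light_buffer = add_blend(black, fog_mask)

# ...then multiply the rendered map on top: revealed areas keep their color,
# unrevealed (black) areas stay black.
map_pixel = (0.4, 0.6, 0.2)
print(multiply_blend(light_buffer, map_pixel))  # -> (0.4, 0.6, 0.2)
```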

Related

OpenGL/GLSL alpha masking

I'm implementing a paint app using OpenGL/GLSL.
There is a feature where the user draws a "mask" using a brush with a pattern image, and meanwhile the background changes according to the brush position. Take a look at the video to understand: video
I used CALayer's mask (iOS stuff) to achieve this effect (on the video), but this implementation is very costly and the fps is pretty low, so I decided to use OpenGL for it.
For the OpenGL implementation, I use the stencil buffer for masking, i.e.:
glEnable(GL_STENCIL_TEST);
glStencilFunc(GL_ALWAYS, 1, 0);
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
// Draw mask (brush pattern)
glStencilFunc(GL_EQUAL, 1, 255);
// Draw gradient background
// Display the buffer
glBindRenderbuffer(GL_RENDERBUFFER, viewRenderbuffer);
[context presentRenderbuffer:GL_RENDERBUFFER];
The problem: the stencil buffer doesn't work with alpha, which is why I can't use semi-transparent patterns for the brushes.
The question: how can I achieve the effect from the video using OpenGL/GLSL, but without the stencil buffer?
Since your background is already generated (from the comments), you can simply use two textures in the shader to draw each of the segments. You will need to redraw all of them until the user lifts their finger, though.
So assume you have a texture with a white footprint on it, with an alpha channel, footprintTextureID, and a background texture backgroundTextureID. You need to bind both of the textures (e.g. to texture units 1 and 2 via glActiveTexture) and pass the two as sampler uniforms to the shader.
Now, in your vertex shader, you will need to generate the relative texture coordinates from the position. There should be a line similar to gl_Position = computedPosition;, so you need to add another varying value:
backgroundTextureCoordinates = vec2((computedPosition.x+1.0)*0.5, (computedPosition.y+1.0)*0.5);
or, if you need to flip vertically:
backgroundTextureCoordinates = vec2((computedPosition.x+1.0)*0.5, (-computedPosition.y+1.0)*0.5);
(The reason for this equation is that the output vertices are in interval [-1,1] but the textures use [0,1]: [-1,1]+1 = [0,2] then [0,2]*0.5 = [0,1]).
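The mapping can be sanity-checked numerically. A small Python sketch of the same arithmetic (the names mirror the GLSL above, but this is just the equation):

```python
def ndc_to_texcoord(x, y, flip_y=False):
    """Map clip-space/NDC coordinates in [-1, 1] to texture coordinates in [0, 1]."""
    u = (x + 1.0) * 0.5
    v = (-y + 1.0) * 0.5 if flip_y else (y + 1.0) * 0.5
    return (u, v)

print(ndc_to_texcoord(-1.0, -1.0))  # -> (0.0, 0.0): one corner
print(ndc_to_texcoord(1.0, 1.0))    # -> (1.0, 1.0): opposite corner
print(ndc_to_texcoord(0.0, 0.0))    # -> (0.5, 0.5): center of the texture
```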
Ok, so assuming you bound all of these correctly, you now only need to multiply the colors in the fragment shader to get the blended color:
uniform sampler2D footprintTexture;
varying lowp vec2 footprintTextureCoordinate;
uniform sampler2D backgroundTexture;
varying lowp vec2 backgroundTextureCoordinates;
void main() {
    lowp vec4 footprintColor = texture2D(footprintTexture, footprintTextureCoordinate);
    lowp vec4 backgroundColor = texture2D(backgroundTexture, backgroundTextureCoordinates);
    gl_FragColor = footprintColor*backgroundColor;
}
If you wanted, you could multiply by the alpha value from the footprint instead, but that only loses flexibility. As long as the footprint texture is white it makes no difference, so it is your choice.
Stencil is a boolean on/off test, so as you say it can't cope with alpha.
The only GL technique that works with alpha is blending, but because the color changes between frames you can't simply flatten this into a single layer in a single pass.
To my mind it sounds like you need to maintain multiple independent layers in off-screen buffers, and then blend them together each frame to form what is shown on screen. This gives you complete independence in how you update each layer per frame.
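That layered approach boils down to the standard "over" compositing operator applied per frame. A hedged Python sketch, single channel and straight (non-premultiplied) alpha for brevity:

```python
def over(src_color, src_alpha, dst_color):
    """Standard alpha 'over' operator: src over dst (straight alpha)."""
    return src_color * src_alpha + dst_color * (1.0 - src_alpha)

# Composite independent layers bottom-to-top each frame; each layer can be
# updated on its own schedule before compositing.
background = 0.2
layers = [(0.8, 0.5), (1.0, 0.25)]   # (color, alpha) per layer
result = background
for color, alpha in layers:
    result = over(color, alpha, result)
print(result)  # -> 0.625
```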

WebGL Gradient Shader

I am trying to learn WebGL and would like a shader that gives a mesh a gradient effect from top to bottom. For example, the bottom of a ball or wall would have no blue color and the top would be fully blue. I know I need to modify the fragment color with the y component of gl_Position, but my implementations have so far given me a black screen. Any help would be appreciated.
Are you sure the fragments are actually getting drawn on the screen (disable culling and the depth test), and that there are no GL errors? If yes, the only likely issue is the alpha value with blending enabled. Try disabling GL_BLEND, or setting alpha to 1.0 as below, with RGB set to your colors:
gl_FragColor = vec4(R,G,B, 1.0);
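For the gradient itself, the usual trick is to interpolate between two colors by a normalized height (in GLSL this would be mix() on a varying derived from the y position). A hedged Python sketch of that per-fragment math:

```python
def gradient_color(y, bottom_color, top_color):
    """Linearly interpolate between two RGB colors by y in [-1, 1] (clip-space height)."""
    t = (y + 1.0) * 0.5  # normalize height to [0, 1]
    return tuple(b + (tc - b) * t for b, tc in zip(bottom_color, top_color))

no_blue = (1.0, 1.0, 0.0)   # bottom: no blue component
all_blue = (0.0, 0.0, 1.0)  # top: fully blue

print(gradient_color(-1.0, no_blue, all_blue))  # -> (1.0, 1.0, 0.0) at the bottom
print(gradient_color(1.0, no_blue, all_blue))   # -> (0.0, 0.0, 1.0) at the top
```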

Why are there dark edges / halos between transparent and opaque areas of textured surfaces (in OpenGL ES 2.0 fragment shaders)?

I'm using a PNG texture image to control the opacity of fragments in an OpenGL ES 2.0 shader (on iOS). The result I am after is light grey text on top of my scene's medium grey background (the fragment shader is applied to a triangle strip in the scene). The problem is that there are dark edges around my text; they look like artifacts. I'm using PNG transparency for the alpha information, but I'm open to other approaches. What is going on, and how can I do this correctly?
First, look at this answer regarding PNG transparency and premultiplied alpha. Long story short: the pixels in the PNG image that have less than 100% opacity are being premultiplied, so they effectively get darker as they get more transparent. Hence the dark artifacts around the edges.
Even without PNG and premultiplied transparency, you may still run into the problem if you forget to set your fragment shader's color before applying transparency.
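A quick numeric sketch of why premultiplication darkens edge pixels, in Python (RGB and alpha as floats in 0.0-1.0):

```python
def premultiply(color, alpha):
    """Premultiplied alpha: each color channel is scaled by alpha at encode time."""
    return tuple(c * alpha for c in color)

light_grey = (0.8, 0.8, 0.8)
# A fully opaque text pixel keeps its color...
print(premultiply(light_grey, 1.0))   # -> (0.8, 0.8, 0.8)
# ...but a half-transparent edge pixel is stored darker. If it is then
# blended as though it were straight (non-premultiplied) alpha, the edge
# shows up as a dark halo.
print(premultiply(light_grey, 0.5))   # -> (0.4, 0.4, 0.4)
```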
A solution to this problem (where you want text to be a light grey color, and everything in the texture map that's not text to be transparent) would be to create a texture map where your text is white and the background is black.
This texture map will control the alpha of your fragment. The RGB values of your fragment will be set to your light grey color.
For example:
// color used for the text
lowp vec4 textColor = vec4(0.82, 0.82, 0.82, 1.0);
gl_FragColor = textColor;
// greyscale texture passed in as a uniform
lowp vec4 alphaMap = texture2D(u_alpha_texture, v_texture);
// set the alpha from the texture
gl_FragColor.w = alphaMap.x;
In cases where your color texture varies, this approach would require two separate texture map images (one for the color, and one for the alpha). Clearly, this is less efficient than dealing with one PNG that has the alpha transparency baked in. However, in my experience it is a good tradeoff (premultiplied pixels can be counter-intuitive to deal with, and the other approaches to loading PNG transparency without premultiplication introduce added complexity).
An upside to this approach is that you can vary the color of the text independently of the texture map image. For instance if you wanted red text, you could change the textColor value to:
lowp vec4 textColor = vec4(1.0,0.0,0.0,1.0);
You could even vary the color over time, etc., all independently of the alpha. That's why I find this approach to be flexible.

XNA - Render to a texture's alpha channel

I have a texture whose alpha channel I want to modify at runtime.
Is there a way to draw into a texture's alpha channel?
Or maybe replace the channel with that of another texture?
Thanks,
SW.
Ok, based on your comment, what you should do is use a pixel shader. Your source image doesn't even need an alpha channel - let the pixel shader apply an alpha.
In fact, you should probably calculate the values for the alpha channel (i.e. run your fluid solver) on the GPU as well.
Your shader might look something like this:
sampler textureSampler : register(s0);

float4 main(float2 uv : TEXCOORD) : COLOR
{
    float4 c = tex2D(textureSampler, uv);
    c.a = /* calculate alpha value here */;
    return c;
}
A good place to start would be the XNA Sprite Effects sample.
There's even an effect similar to what you are doing.
The effect in the sample reads from a second texture to get values for the calculation of the alpha channel of the first texture when it is drawn.

How to multiply two sprites in SpriteBatch Draw XNA (2D)

I am writing a simple hex engine for an action RPG in XNA 3.1. I want to light the ground near the hero and torches just as they were lit in Diablo II. I thought the best way to do so was to calculate field-of-view, hide any tiles and their contents that the player can't see, and draw a special "Light" texture on top of any light source: a texture that is black with a white, blurred circle at its center.
I wanted to multiply this texture with the background (as in blending mode: multiply), but unfortunately I do not see an option for doing that in SpriteBatch. Could someone point me in the right direction?
Or perhaps there is another, better, way to achieve a lighting model as in Diablo II?
If you were to multiply your light texture with the scene, you would darken the area, not brighten it.
You could try rendering with additive blending; this won't quite look right, but it is easy and may be acceptable. You will have to draw your light with fairly low alpha so the light texture doesn't just oversaturate that part of the image.
Another, more complicated, way of doing lighting is to draw all of your light textures (for all the lights in the scene) additively onto a second render target, and then multiply this texture with your scene. This should give much more realistic lighting, but it has a larger performance overhead and is more complex.
Initialisation:
RenderTarget2D lightBuffer = new RenderTarget2D(graphicsDevice, screenWidth, screenHeight, 1, SurfaceFormat.Color);
Color ambientLight = new Color(0.3f, 0.3f, 0.3f, 1.0f);
Draw:
// set the render target and clear it to the ambient lighting
graphicsDevice.SetRenderTarget(0, lightBuffer);
graphicsDevice.Clear(ambientLight);
// additively draw all of the lights onto this texture. The lights can be coloured etc.
spriteBatch.Begin(SpriteBlendMode.Additive);
foreach (var light in lights)
    spriteBatch.Draw(lightFadeOffTexture, light.Area, light.Color);
spriteBatch.End();
// change render target back to the back buffer, so we are back to drawing onto the screen
graphicsDevice.SetRenderTarget(0, null);
// draw the old, non-lit, scene
DrawScene();
// multiply the light buffer texture with the scene
spriteBatch.Begin(SpriteBlendMode.Additive, SpriteSortMode.Immediate, SaveStateMode.None);
graphicsDevice.RenderState.SourceBlend = Blend.Zero;
graphicsDevice.RenderState.DestinationBlend = Blend.SourceColor;
spriteBatch.Draw(lightBuffer.GetTexture(), new Rectangle(0, 0, screenWidth, screenHeight), Color.White);
spriteBatch.End();
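The render-state override above turns the sprite batch into a multiply blend: with SourceBlend = Zero and DestinationBlend = SourceColor, the GPU computes out = src*0 + dst*src = dst * src. A small Python check of that blend equation (pixel values are illustrative):

```python
def custom_blend(src, dst, src_factor, dst_factor):
    """General GPU blend equation: out = src*src_factor + dst*dst_factor, per channel."""
    return tuple(s * sf + d * df for s, d, sf, df in zip(src, dst, src_factor, dst_factor))

scene = (0.4, 0.6, 0.2)   # already-rendered scene pixel (destination)
light = (1.0, 0.5, 0.5)   # light buffer pixel (source)

# Blend.Zero for the source, Blend.SourceColor for the destination:
# out = src*0 + dst*src = dst * src
result = custom_blend(light, scene, (0.0, 0.0, 0.0), light)
print(result)  # -> (0.4, 0.3, 0.1): the scene modulated by the light color
```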
As far as I know there is no way to do this without using your own custom shaders.
A custom shader for this would work like so:
Render your scene to a texture
Render your lights to another texture
As a post-process on a blank quad, sample the two textures; the result is Scene Texture * Light Texture.
This will output a lit scene, but it won't do any shadows. If you want shadows, I'd suggest following this excellent sample from Catalin Zima.
Perhaps using the same technique as in the BloomEffect component could be an idea.
Basically, what the effect does is grab the rendered scene, calculate a bloom image from the brightest areas in the scene, then blur and combine the two. The result is highlighted areas depending on color.
The same approach could be used here. It will be simpler, since you won't have to calculate the bloom image based on the background, only based on the position of the character.
You could even reuse this further to provide highlighting for other light sources as well, such as torches, magic effects and whatnot.
