XNA - Render to a texture's alpha channel

I have a texture whose alpha channel I want to modify at runtime.
Is there a way to draw onto a texture's alpha channel?
Or maybe replace the channel with that of another texture?
Thanks,
SW.

OK, based on your comment, what you should do is use a pixel shader. Your source image doesn't even need an alpha channel - let the pixel shader apply the alpha.
In fact, you should probably calculate the values for the alpha channel (i.e. run your fluid solver) on the GPU as well.
Your shader might look something like this:
sampler textureSampler : register(s0); // sampler for the source texture (register assumed)

float4 main(float2 uv : TEXCOORD0) : COLOR
{
    float4 c = tex2D(textureSampler, uv);
    c.a = /* calculate alpha value here */;
    return c;
}
A good place to start would be the XNA Sprite Effects sample.
There's even an effect similar to what you are doing.
The effect in the sample reads from a second texture to calculate the alpha channel of the first texture as it is drawn.
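The core idea looks roughly like this - a minimal sketch, assuming the mask lives in the second texture's red channel and the sampler registers shown (not the sample's exact code):
sampler BaseSampler : register(s0);   // the sprite being drawn (register assumed)
sampler AlphaSampler : register(s1);  // second texture supplying the alpha mask (assumed layout)

float4 MaskedPS(float2 uv : TEXCOORD0) : COLOR0
{
    float4 color = tex2D(BaseSampler, uv);
    // Replace the sprite's alpha with the mask value from the second texture.
    color.a = tex2D(AlphaSampler, uv).r;
    return color;
}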

Related

Does Metal support anything like glDepthRange()?

I'm writing some metal code that draws a skybox. I'd like for the depth output by the vertex shader to always be 1, but of course, I'd also like the vertices to be drawn in their correct positions.
In OpenGL, you could use glDepthRange(1,1) to have the depth always be written out as 1.0 in this scenario. I don't see anything similar in Metal. Does such a thing exist? If not, is there another way to always output 1.0 as the depth from the vertex shader?
What I'm trying to accomplish is drawing the scenery first and then drawing the skybox to avoid overdraw. If I just set the z component of the outgoing vertex to 1.0, then the geometry doesn't draw correctly, obviously. What are my options here?
It looks like you can specify the fragment shader output (return value) format roughly like so:
struct MyFragmentOutput {
    // color attachment 0
    float4 color_att [[color(0)]];
    // depth attachment
    float depth_att [[depth(depth_argument)]];
};
as seen in the section "Fragment Function Output Attributes" on page 88 of the Metal Shading Language Specification (https://developer.apple.com/metal/Metal-Shading-Language-Specification.pdf). It looks like "any" is a working value for depth_argument (see here for more: In metal how to clear the depth buffer or the stencil buffer?).
Then you would set your fragment shader to use that format:
fragment MyFragmentOutput interestingShaderFragment
// instead of: fragment float4 interestingShaderFragment
and finally just write to the depth buffer in your fragment shader:
MyFragmentOutput out;
out.color_att = float4(rgb_color_here, 1.0);
out.depth_att = 1.0;
return out;
Tested and it worked.

HLSL correct pixel position in deferred shading

In OpenGL, I am using the following in my pixel shaders to get the correct pixel position, which is used to sample diffuse, normal, position gbuffer textures:
ivec2 texcoord = ivec2(textureSize(unifDiffuseTexture) * (gl_FragCoord.xy / UnifAmbientPass.mScreenSize));
So far, this is what I do in HLSL:
float2 texcoord = input.mPosition.xy / gScreenSize;
Most notably, in GLSL I am using textureSize() to get an accurate pixel position. I am wondering, is there an HLSL equivalent to textureSize()?
In HLSL, you have GetDimensions
But it may be costlier than reading the size from a constant buffer, even if it looks easier to use at first for quick tests.
Also, you have an alternative: using SV_Position and Load, just use the xy as a uint2, and you remove the need for a user interpolator carrying a texture coordinate to index the screen.
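A minimal sketch of both ideas, assuming Shader Model 4+ and that the gbuffer texture is bound at register t0 (the register and the names other than gScreenSize are placeholders):
Texture2D DiffuseTexture : register(t0);
float2 gScreenSize; // screen size from a constant buffer, as in the question

float4 PS(float4 pos : SV_Position) : SV_Target
{
    // GetDimensions is the HLSL counterpart of GLSL's textureSize().
    uint width, height;
    DiffuseTexture.GetDimensions(width, height);

    // Same scaling as the GLSL version: screen position -> gbuffer texel.
    int2 texel = int2(pos.xy / gScreenSize * float2(width, height));

    // Load reads a texel by integer index, no sampler needed; if the gbuffer
    // matches the screen size, int3(pos.xy, 0) alone would do.
    float4 diffuse = DiffuseTexture.Load(int3(texel, 0));
    return diffuse;
}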
Here is the full documentation of a TextureObject.

GPU Texture Splatting

Just as a quick example, I'm trying to do the following:
[texture 1] + [texture 2] + [alpha map] = [blended result]
With the third image as an alpha map, how could this be implemented in a DX9-compatible pixel shader to "blend" between the first two images, creating an effect similar to the fourth image?
Furthermore, how could this newly created texture be given back to the CPU, where it could be placed back inside the original array of textures?
The rough way is to blend the colors of the textures with the alpha map and return the result from the pixel shader:
float alpha = tex2D(AlphaSampler, TexCoord).r;
float3 texture1 = tex2D(Texture1Sampler, TexCoord).rgb;
float3 texture2 = tex2D(Texture2Sampler, TexCoord).rgb;
float3 color = lerp(texture1, texture2, alpha);
return float4(color.rgb, 1);
Therefore you need a texture as a render target (doc) with the size of the input textures, and a fullscreen quad as geometry for rendering; an XYZRHW quad would be the easiest. You can then use this texture for further rendering. If you want to read the texels, or do anything else where you must lock the result, you can use StretchRect (doc) or UpdateSurface (doc) to copy the data into a normal texture.
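For reference, here is a hedged sketch of how the snippet above might be packaged as a complete effect for that fullscreen pass (the sampler register assignments and the technique name are assumptions):
sampler AlphaSampler : register(s0);    // alpha map (register assumed)
sampler Texture1Sampler : register(s1); // first texture (register assumed)
sampler Texture2Sampler : register(s2); // second texture (register assumed)

float4 SplatPS(float2 TexCoord : TEXCOORD0) : COLOR0
{
    float alpha = tex2D(AlphaSampler, TexCoord).r;
    float3 texture1 = tex2D(Texture1Sampler, TexCoord).rgb;
    float3 texture2 = tex2D(Texture2Sampler, TexCoord).rgb;
    return float4(lerp(texture1, texture2, alpha), 1);
}

technique Splat
{
    pass P0
    {
        PixelShader = compile ps_2_0 SplatPS();
    }
}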
If performance isn't important (e.g. you preprocess the textures), you could compute this more simply on the CPU (but it's slower): lock the four textures, iterate over the pixels, and merge them directly.

Problem with alpha blending in XNA

Hi,
I have a background and two PNG sprites.
I want to make this effect using the provided background and sprites in XNA 3.1.
I'm doing something wrong, because I only get this. As you can see, it's not the effect I want to achieve.
Is it possible to do this effect with a few lines of code using alpha blending in XNA 3.1? A practical example would be really great!
First, render the textures that contain the shapes you want to be transparent onto texture A.
The textures containing the shapes should contain black shapes, on a transparent background -- easily constructed with image editing software like Photoshop.
Then take texture A and draw it over top of your scene using an effect (an HLSL shader) that does:
output = float4(0, 0, 0, A.r);
Effectively making the output image's alpha lower where A is darker.
The image will have clear portions where you drew your shapes on A, and will be black everywhere else.
Here are the details of the shader code:
sampler TextureSampler : register(s0);

float4 PS(float4 color : COLOR0, float2 texCoord : TEXCOORD0) : COLOR0
{
    float4 Color = tex2D(TextureSampler, texCoord);
    Color = float4(0, 0, 0, Color.r);
    return Color;
}

technique Vicky
{
    pass P0
    {
        PixelShader = compile ps_2_0 PS();
    }
}
If you want a solution without a shader:
You first need your fog of war textures to be black, with the transparent parts as white.
Render your map and entities normally, to a RenderTarget2D.
Clear your background to black.
Start a sprite batch with Additive blend.
Render your fog of war textures.
Start a new sprite batch with Multiply blend.
Render your map RenderTarget2D on top of the whole screen.

Writing texture data onto depth buffer

I'm trying to implement the technique described at: Compositing Images with Depth.
The idea is to use an existing texture (loaded from an image) as a depth mask, to basically fake 3D.
The problem I face is that glDrawPixels is not available in OpenGL ES. Is there a way to accomplish the same thing on the iPhone?
The depth buffer is more obscured than you think in OpenGL ES; not only is glDrawPixels absent but gl_FragDepth has been removed from GLSL. So you can't write a custom fragment shader to spool values to the depth buffer as you might push colours.
The most obvious solution is to pack your depth information into a texture and to use a custom fragment shader that does a depth comparison between the fragment it generates and one looked up from a texture you supply. Only if the generated fragment is closer is it allowed to proceed. The normal depth buffer will catch other cases of occlusion and — in principle — you could use a framebuffer object to create the depth texture in the first place, giving you a complete on-GPU round trip, though it isn't directly relevant to your problem.
Disadvantages are that drawing will cost you an extra texture unit and textures use integer components.
EDIT: for the purposes of keeping the example simple, suppose you were packing all of your depth information into the red channel of a texture. That'd give you a really low precision depth buffer, but just to keep things clear, you could write a quick fragment shader like:
void main()
{
    // write a value to the depth map
    gl_FragColor = vec4(gl_FragCoord.w, 0.0, 0.0, 1.0);
}
To store depth in the red channel. So you've partially recreated the old depth texture extension — you'll have an image that has a brighter red in pixels that are closer, a darker red in pixels that are further away. I think that in your question, you'd actually load this image from disk.
To then use the texture in a future fragment shader, you'd do something like:
uniform sampler2D depthMap;
uniform mediump vec2 screenSize; // needed to turn gl_FragCoord's pixel position into a 0-1 texture coordinate

void main()
{
    // read a value from the depth map
    lowp vec4 colourFromDepthMap = texture2D(depthMap, gl_FragCoord.xy / screenSize);
    // discard the current fragment if it is less close than the stored value
    if(colourFromDepthMap.r > gl_FragCoord.w) discard;
    ... set gl_FragColor appropriately otherwise ...
}
EDIT2: you can see a much smarter mapping from depth to an RGBA value here. To tie in directly to that document, OES_depth_texture definitely isn't supported on the iPad or on the third generation iPhone. I've not run a complete test elsewhere.
