I am very new to the MonoGame library.
I load a texture from an .xnb file:
_background = content.Load<Texture2D>(_backgroundKey);
and then I want to change its transparency (alpha) at runtime.
Oh, I found out how to do it myself:
spriteBatch.Draw(texture, position, sourceRect, Color.White * 0.5f, .......);
This line of code will draw the texture at half transparency.
You can change the opacity of a texture by using a (semi-)transparent color in the draw call:
spriteBatch.Draw(texture, position, new Color(Color.Pink, 0.5f));
The values range from 0 (completely transparent) to 1 (completely opaque). Color has a lot of different constructors, so you can also pass a byte (0-255) instead of a float, which will result in the same thing.
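For a runtime fade you can keep an alpha value in a field and scale the tint color by it each frame. A minimal sketch (_fadeAlpha is a placeholder name, not from the original code):
// 0 = fully transparent, 1 = fully opaque; animate this in Update() as needed.
float _fadeAlpha = 0.5f;

// In Draw():
spriteBatch.Begin();
spriteBatch.Draw(_background, Vector2.Zero, Color.White * _fadeAlpha);
spriteBatch.End();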
Related
Context: I'm doing all of the following using OpenGLES 2 on iOS 11
While implementing different blend modes to blend two textures together, I came across a weird issue that I managed to reduce to the following:
I'm trying to blend the following two textures together, only using the fragment shader and not the OpenGL blend functions or equations. GL_BLEND is disabled.
Bottom - dst:
Top - src:
(The bottom image is the same as the top image but rotated and blended onto an opaque white background using "normal" (as in Photoshop 'normal') blending)
In order to do the blending I use the
#extension GL_EXT_shader_framebuffer_fetch
extension, so that in my fragment shader I can write:
void main()
{
    highp vec4 dstColor = gl_LastFragData[0];
    highp vec4 srcColor = texture2D(textureUnit, textureCoordinateInterpolated);

    gl_FragColor = blend(srcColor, dstColor);
}
The blend function doesn't perform any blending itself; it only chooses the correct blend function to apply, based on a uniform blendMode integer value. In this case the first texture is drawn with an already-tested normal blending function, and then the second texture is drawn on top with the following blendTest function:
Now here's where the problem comes in:
highp vec4 blendTest(highp vec4 srcColor, highp vec4 dstColor) {
    highp float threshold = 0.7; // arbitrary
    highp float g = 0.0;
    if (dstColor.r > threshold && srcColor.r > threshold) {
        g = 1.0;
    }
    //return vec4(0.6, g, 0.0, 1.0); // no yellow lines (Case 1)
    return vec4(0.8, g, 0.0, 1.0); // shows yellow lines (Case 2)
}
This is the output I would expect (made in Photoshop):
So red everywhere and green/yellow in the areas where both textures contain an amount of red that is larger than the arbitrary threshold.
However, the results I get are for some reason dependent on the output value I choose for the red component (0.6 or 0.8) and none of these outputs matches the expected one.
Here's what I see (The grey border is just the background):
Case 1:
Case 2:
So to summarize: if I return a red value that is larger than the threshold, e.g.
return vec4(0.8, g, 0.0, 1.0);
I see vertical yellow lines, whereas if the red component is less than the threshold there will be no yellow/green in the result whatsoever.
Question:
Why does the output of my fragment shader determine whether or not the conditional statement is executed and even then, why do I end up with green vertical lines instead of green boxes (which indicates that the dstColor is not being read properly)?
Does it have to do with the extension that I'm using?
I also want to point out that both textures are being loaded and bound properly. I can see them just fine if I simply return the individual texture data without blending, and with a normal blending function that I've implemented, everything works as expected.
I found out what the problem was (and I realize that it's not something anyone could have known from just reading the question):
There is an additional fully transparent texture being drawn between the two textures you can see above, which I had forgotten about.
Instead of accounting for that and simply returning the dstColor whenever the srcColor alpha is 0, the transparent texture's color information (which is (0.0, 0.0, 0.0, 0.0)) was being used in the blend, thereby altering the framebuffer content.
Both the transparent texture and the final texture were drawn with the blendTest function, so the output of the first function call was then being read in when blending the final texture.
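For reference, a minimal sketch of the guard that fixes it (this reuses the blendTest function from the question; it is not necessarily the final production code):
highp vec4 blendTest(highp vec4 srcColor, highp vec4 dstColor) {
    // A fully transparent source fragment (e.g. the forgotten transparent texture)
    // must not alter the framebuffer: pass the destination color straight through.
    if (srcColor.a == 0.0) {
        return dstColor;
    }

    highp float threshold = 0.7; // arbitrary
    highp float g = 0.0;
    if (dstColor.r > threshold && srcColor.r > threshold) {
        g = 1.0;
    }
    return vec4(0.8, g, 0.0, 1.0);
}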
I'm drawing a Texture2D like this
//background_texture is white in color
spritebatch.Draw(content.Load<Texture2D>("background_texture"),
                 new Rectangle(10, 10, 100, 100),
                 Color.Red);
The texture is white; however, on screen it's displayed as red.
Why is the draw method requiring a Color?
How does one simply draw the texture, and only the texture without having Color.something distort the graphic?
Take a look at the documentation here:
http://msdn.microsoft.com/en-us/library/ff433986.aspx
You want to try Color.White. That additional color parameter is a tint, and a white "tint" displays the sprite without any tinting.
Color.White does not change the color of your image. Use
spritebatch.Draw(content.Load<Texture2D>("background_texture"),
                 new Rectangle(10, 10, 100, 100),
                 Color.White);
instead of Color.Red, which applies a tint.
Note: Be careful. Intellisense will want to make this Color.Wheat, so be sure to type the first 3 letters before you hit space.
Color.White is necessary because the default sprite shader looks like this:
PixelShader ....
{
    ....
    return Texture * Color;
}
Here Color is the color handed over from the vertex shader, which is set by the Color you pass to spriteBatch.Draw(...). If it were null or black, it would produce invisible sprites. The whole point is that this sets the vertex color of each vertex, and that color is used as a multiplier on the texture you set for the sprite.
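To make that multiply concrete, here is a small illustration of the same math done on the CPU with XNA's Color and Vector4 types (illustration only, not the shader itself):
// The tint is a per-channel multiply, so Color.White (1,1,1,1) is the identity
// that leaves the texture unchanged, while Color.Red zeroes the green and blue channels.
Vector4 texel = Color.White.ToVector4();  // white texture pixel: (1, 1, 1, 1)
Vector4 tint  = Color.Red.ToVector4();    // tint passed to spriteBatch.Draw: (1, 0, 0, 1)
Color result  = new Color(texel * tint);  // (1, 0, 0, 1) -> the red seen on screen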
I am creating a particle emitter with a texture that is a white circle with alpha. Unable to color the sprites using color passed to the fragment shader.
I tried the following:
gl_FragColor = texture2D(Sampler, gl_PointCoord) * colorVarying;
This seems to be doing some kind of additive coloring.
What I am attempting is porting this:
http://maniacdev.com/2009/07/source-code-particle-based-explosions-in-iphone-opengl-es/
from ES 1.1 to ES 2.0
With your code, consider the following example:
texture2D = (1,0,0,1) = red - fully opaque
colorVarying = (0,1,0,0.5) = green - half transparent
then gl_FragColor would be (0,0,0,0.5) black - half transparent.
Generally, you can use mix to interpolate values, but if I understood your problem correctly, it's even easier.
Basically, you only want the alpha channel from your texture, applied to another color, right? Then you could do this:
gl_FragColor = vec4(colorVarying.rgb, texture2D(Sampler, gl_PointCoord).a);
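In context, a minimal fragment shader using that line might look like this (the precision qualifier and variable declarations are assumptions based on the names in the question):
precision mediump float;

uniform sampler2D Sampler;   // the white circle texture with alpha
varying vec4 colorVarying;   // per-particle color from the vertex shader

void main()
{
    // Keep the particle color's RGB and take only the texture's alpha,
    // so the circular falloff shapes the colored particle.
    gl_FragColor = vec4(colorVarying.rgb, texture2D(Sampler, gl_PointCoord).a);
}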
Here's what I'm trying to do: On the left is a generic, uncolorized RGBA image that I've created off-screen and cached for speed (it's very slow to create initially, but very fast to colorize with any color later, as needed). It's a square image with a circular swirl. Inside the circle, the image has an alpha/opacity of 1. Outside the circle, it has an alpha/opacity of 0. I've displayed it here inside a UIView with a background color of [UIColor scrollViewTexturedBackgroundColor]. On the right is what happens when I attempt to colorize the image by filling a solid red rectangle over the top of it after setting CGContextSetBlendMode(context, kCGBlendModeColor).
That's not what I want, nor what I expected. Evidently, colorizing a completely transparent pixel (e.g., alpha value of 0) results in the full-on fill color for some strange reason, rather than remaining transparent as I would have expected.
What I want is actually this:
Now, in this particular case, I can set the clipping region to a circle, so that the area outside the circle remains untouched — and that's what I've done here as a workaround.
But in my app, I also need to be able to colorize arbitrary shapes where I don't know the clipping/outline path. One example is colorizing white text by overlaying a gradient. How is this done? I suspect there must be some way to do it efficiently — and generally, with no weird path/clipping tricks — using image masks... but I have yet to find a tutorial on this. Obviously it's possible because I've seen colored-gradient text in other games.
Incidentally, what I can't do is start with a gradient and clip/clear away parts I don't need — because (as shown in the example above) my uncolorized source images are, in general, grayscale rather than pure white. So I really need to start with the uncolorized image and then colorize it.
p.s. — kCGBlendModeMultiply also has the same flaws / shortcomings / idiosyncrasies when it comes to colorizing partially transparent images. Does anyone know why Apple decided to do it that way? It's as if the Quartz colorizing code treats RGBA(0,0,0,0) as RGBA(0,0,0,1), i.e., it completely ignores and destroys the alpha channel.
One approach that will work is to construct a mask from the original image and then invoke CGContextClipToMask() before rendering your image with the multiply blend mode set. Here is the Core Graphics code that would set the mask before drawing the image to be colored.
CGContextRef context = [frameBuffer createBitmapContext];
CGRect bounds = CGRectMake(0.0f, 0.0f, width, height);

// Clip to the mask first so fully transparent areas are never touched,
// then draw the grayscale image that will be colorized.
CGContextClipToMask(context, bounds, maskImage.CGImage);
CGContextDrawImage(context, bounds, greyImage.CGImage);
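The colorize pass itself, mentioned above but not shown, might then look roughly like the following; the multiply blend mode comes from the answer, while the opaque red fill is just an example tint:
// With the clip in place, only the non-transparent pixels of the source can be tinted.
CGContextSetBlendMode(context, kCGBlendModeMultiply);
CGContextSetRGBFillColor(context, 1.0f, 0.0f, 0.0f, 1.0f); // example tint: opaque red
CGContextFillRect(context, bounds);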
The slightly trickier part will be taking the original image and generating a maskImage. What you can do is write a loop that examines each pixel and writes either a black or a white pixel as the mask value: if the original pixel in the image to be colored is completely transparent, write a black pixel; otherwise, write a white pixel. Note that the mask will be a 24BPP image, packed into 32-bit words below. Here is some code to give you the right idea.
uint32_t *inPixels = (uint32_t*) MEMORY_ADDR_OF_ORIGINAL_IMAGE;
uint32_t *maskPixels = malloc(numPixels * sizeof(uint32_t));
uint32_t *maskPixelsPtr = maskPixels;

for (int rowi = 0; rowi < height; rowi++) {
  for (int coli = 0; coli < width; coli++) {
    uint32_t inPixel = *inPixels++;
    uint32_t inAlpha = (inPixel >> 24) & 0xFF;
    uint32_t cval = 0;
    if (inAlpha != 0) {
      cval = 0xFF;
    }
    // Black where the source was fully transparent, white everywhere else.
    uint32_t outPixel = (0xFF << 24) | (cval << 16) | (cval << 8) | cval;
    *maskPixelsPtr++ = outPixel;
  }
}
You will of course need to fill in all the details and create the graphics contexts and so on. But the general idea is simply to create your own mask so that the red fill never gets drawn outside the circle.
What I am trying to do is use alpha blending in XNA to make part of a drawn texture transparent. For instance, I clear the screen to some color, let's say blue. Then I draw a texture that is red. Finally I draw a texture that is just a radial gradient, from completely transparent in the center to completely black at the edge. What I want is for the red texture drawn earlier to be transparent in the same places as the radial gradient texture, so you should be able to see the blue background through the red texture.
I thought that this would work.
GraphicsDevice.Clear(Color.CornflowerBlue);
spriteBatch.Begin(SpriteBlendMode.None);
spriteBatch.Draw(bg, new Vector2(0, 0), Color.White);
spriteBatch.End();
spriteBatch.Begin(SpriteBlendMode.None);
GraphicsDevice.RenderState.AlphaBlendEnable = true;
GraphicsDevice.RenderState.AlphaSourceBlend = Blend.One;
GraphicsDevice.RenderState.AlphaDestinationBlend = Blend.Zero;
GraphicsDevice.RenderState.SourceBlend = Blend.Zero;
GraphicsDevice.RenderState.DestinationBlend = Blend.One;
GraphicsDevice.RenderState.BlendFunction = BlendFunction.Add;
spriteBatch.Draw(circle, new Vector2(0, 0), Color.White);
spriteBatch.End();
GraphicsDevice.RenderState.AlphaBlendEnable = false;
But it just seems to ignore all my RenderState settings. I also tried setting the SpriteBlendMode to AlphaBlend. It blends the textures, but that is not the effect I want.
Any help would be appreciated.
What you're trying to do is alpha channel masking. The easiest way is to bake the alpha channel in using the content pipeline, but if for some reason you want to do it at runtime, here's how (roughly) using a render target. (A better and faster solution would be to write a shader.)
First, create a RenderTarget2D to store an intermediate masked texture:
RenderTarget2D maskRenderTarget = GfxComponent.CreateRenderTarget(GraphicsDevice,
    1, SurfaceFormat.Single);
Set the render target and the device state:
GraphicsDevice.SetRenderTarget(0, maskRenderTarget);
GraphicsDevice.RenderState.AlphaBlendEnable = true;
GraphicsDevice.RenderState.DestinationBlend = Blend.Zero;
GraphicsDevice.RenderState.SourceBlend = Blend.One;
Set the color write channels to R, G, and B only, and draw the first texture using a sprite batch:
GraphicsDevice.RenderState.ColorWriteChannels = ColorWriteChannels.Red | ColorWriteChannels.Green | ColorWriteChannels.Blue;
spriteBatch.Draw(bg, new Vector2(0, 0), Color.White);
Set the write channels to alpha only, and draw the alpha mask:
GraphicsDevice.RenderState.ColorWriteChannels = ColorWriteChannels.Alpha;
spriteBatch.Draw(circle, new Vector2(0, 0), Color.White);
You can now restore the render target to the back buffer and draw your masked texture using alpha blending:
maskedTexture = maskRenderTarget.GetTexture();
...
Also don't forget to restore the state:
GraphicsDevice.RenderState.ColorWriteChannels = ColorWriteChannels.All;
...
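Putting those last steps together, a rough sketch of the final composite (using the names from the steps above; the clear color and blend mode are assumptions to match the original question):
// Back to the back buffer, then composite the masked texture with alpha blending.
GraphicsDevice.SetRenderTarget(0, null);
GraphicsDevice.Clear(Color.CornflowerBlue);

Texture2D maskedTexture = maskRenderTarget.GetTexture();

spriteBatch.Begin(SpriteBlendMode.AlphaBlend);
spriteBatch.Draw(maskedTexture, Vector2.Zero, Color.White);
spriteBatch.End();

// Restore the state for subsequent draws.
GraphicsDevice.RenderState.ColorWriteChannels = ColorWriteChannels.All;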