I am trying to achieve the following challenging effect:
I want to move the white "curtain" down in order to reveal the red box.
(Note: in the screenshots below the curtain is white and the background is grey)
The problem is in the view hierarchy.
In order for the box to stay hidden in the initial position, it has to be placed behind the curtain; but in order to be visible in the final position, it has to be on top of the curtain.
How can I "cheat" and make it seem like the curtain really reveals the box with a smooth animation?
Thanks!
You need two images and a mask: your fully obscured gray area, and your box on a white background. The image for your curtain is only a mask of its bottom edge. This is so it can draw the curtain's bottom fringe without obliterating the overlapping gray region.
Set a starting position at the top, then each frame:
Draw/copy only an area the size of the curtain mask, copying the corresponding red box region through the curtain mask.
Move the starting position down one scan line and wait for the next frame. Repeat until done.
Essentially, there is no white curtain, only what is revealed of the "hidden" image which contains white background for the box. Depending on how you're drawing, your mask image could be another image with an alpha channel.
Edit: As requested, some example code. However, it is very possible that whatever you are using to get graphics on the screen already has draw routines with masking, and you would be better off using those.
This snippet is untested but should convey the logic and work pretty much anywhere. I'm not familiar with iOS and have no idea what format your image pixels are (24-bit, 32-bit, etc.), so I use "PixelType" as a placeholder.
This also assumes the white curtain edge on a black background was made as an 8-bit image in a paint program, where black is zero and white is anything else. It should be the same width as the other two images and only as tall as needed for the curtain edge.
struct Mask
{
    char *mData;  // set this to the image data of your 8 bit mask
    int mWidth;   // width in pixels, should be the same as your 2 images
    int mHeight;  // height in pixels of the mask
};
int iRevealPos = 0; // rows revealed so far; incremented each frame until the box is shown
// Hopefully, your pixel type is a basic type like byte, short or int.
void Reveal(PixelType *foreground, PixelType *background, Mask *mask)
{
    // During the initial slide-in only the bottom rows of the mask are on-screen.
    int height = (iRevealPos < mask->mHeight) ? iRevealPos : mask->mHeight;
    int top = iRevealPos - height; // screen row where the visible part of the mask begins
    PixelType *src = background + (top * mask->mWidth); // hidden box image at the mask position
    PixelType *dst = foreground + (top * mask->mWidth); // matching foreground screen position
    int count = mask->mWidth * height;
    char *filter = mask->mData;
    if (iRevealPos < mask->mHeight) // adjust for initial slide in: skip the mask rows still off-screen
        filter += (mask->mHeight - iRevealPos) * mask->mWidth;
    while (count--)
    {
        if (*filter++)       // not black?
            *dst++ = *src++; // copy the box image through the mask
        else                 // black mask pixel: leave the screen untouched
        {
            src++;
            dst++;
        }
    }
    // if you create your mask with a solid white line at the top, you don't need this
    if (iRevealPos > mask->mHeight) // fixup, so the mask doesn't leave a trail
    {
        int row = top - 1; // the row the mask just vacated
        src = background + (row * mask->mWidth);
        dst = foreground + (row * mask->mWidth);
        count = mask->mWidth;
        while (count--)
            *dst++ = *src++;
    }
    iRevealPos++; // bump position for next frame
}
If you create your mask with a solid white line or two at the top, you don't need the second loop, which fixes up any trail the mask leaves behind. I also allowed for the curtain to slide in rather than popping in fully at the start. This is untested, so I may have gotten the adjustments for that wrong.
Above is an example of my problem. I have two alpha masks that are exactly the same: just a white circular gradient on a transparent background.
I am drawing to a RenderTarget2D that is rendered above the screen to create lighting. It clears to a semi-transparent black color, and then the alpha masks are drawn in the correct positions to appear like lights.
On their own they work fine, but if two clash, like the "torch" below against the blue glowing mushrooms, you can see the bounding-box transparency is overwriting the already-drawn orange glow.
Here is my approach:
This is creating the render target:
RenderTarget2D = new RenderTarget2D(Global.GraphicsDevice, Global.Resolution.X+4, Global.Resolution.Y+4);
SpriteBatch = new SpriteBatch(Global.GraphicsDevice);
This is drawing to the render target:
private void UpdateRenderTarget()
{
    Global.GraphicsDevice.SetRenderTarget(RenderTarget2D);
    Global.GraphicsDevice.Clear(ClearColor);

    // Draw textures
    float i = 0;
    foreach (DrawableTexture item in DrawableTextures)
    {
        i += 0.1f;
        item.Update?.Invoke(item);
        SpriteBatch.Begin(SpriteSortMode.Immediate, item.Blend,
            SamplerState.PointClamp, DepthStencilState.Default,
            RasterizerState.CullNone);
        SpriteBatch.Draw(
            item.Texture,
            (item.Position - Position) + (item.Texture.Size() / 2 * (1 - item.Scale)),
            null,
            item.Color,
            0,
            Vector2.Zero,
            item.Scale,
            SpriteEffects.None,
            i
        );
        SpriteBatch.End();
    }
    Global.GraphicsDevice.SetRenderTarget(null);
}
I have heard about depth stencils etc., and I feel like I have tried so many combinations of things, but I am still getting the issue. I haven't had any trouble with this while building all the other graphics in my game.
Any help is greatly appreciated thanks! :)
Ah, this turned out to be a problem with the BlendState itself rather than the SpriteBatch. I had created a custom "Multiply" BlendState, which I picked up online, and it was causing the issue.
"What's causing the problem?" was the real question here.
This was the solution to get my effect without overlapping:
public static BlendState Lighting = new BlendState
{
    ColorSourceBlend = Blend.One,
    ColorDestinationBlend = Blend.One,
    AlphaSourceBlend = Blend.Zero,
    AlphaDestinationBlend = Blend.InverseSourceColor
};
This allows the textures to overlap, and also "subtracts" from the "darkness" layer. It would be easier to see if the darkness was more opaque.
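To see what this blend state does per pixel, here is a quick single-pixel sketch in Python (not MonoGame code, just the blend arithmetic; the color and alpha values are made up, and it assumes the inverse-source-color factor contributes its alpha component, 1 - source alpha, in the alpha channel):

```python
def blend_lighting(src, dst):
    """Apply the custom 'Lighting' BlendState to one RGBA pixel.

    src, dst are (r, g, b, a) tuples with components in 0..1.
    Color:  One * src + One * dst  (additive, so overlapping lights stack up)
    Alpha:  Zero * src_a + (1 - src_a) * dst_a
            (each light multiplies away part of the darkness layer's alpha)
    """
    r = min(1.0, src[0] + dst[0])
    g = min(1.0, src[1] + dst[1])
    b = min(1.0, src[2] + dst[2])
    a = 0.0 * src[3] + (1.0 - src[3]) * dst[3]
    return (r, g, b, a)

# Darkness layer: black, 80% opaque.
darkness = (0.0, 0.0, 0.0, 0.8)

# Draw an orange glow, then an overlapping torch light (made-up colors).
glow  = (0.8, 0.4, 0.1, 0.5)
torch = (0.9, 0.6, 0.2, 0.5)

pixel = blend_lighting(glow, darkness)
pixel = blend_lighting(torch, pixel)
# Colors have added up rather than overwriting each other,
# and the darkness alpha has dropped to 0.8 * 0.5 * 0.5 = 0.2.
```

The second draw brightens the overlap instead of replacing it, which is exactly the "no overwriting" behavior above.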
I have answered this just in case some other fool mistakes a blend-state problem for a problem with the sprite batch itself.
To start, this project has been built using Swift.
I want to create a custom progress indicator that "fills up" as the script runs. The script calls a JSON feed that is pulled from a remote server.
To better visualize what I'm after, I made this:
My guess would be to have two PNG images; one white and one red, and then simply do some masking based on the progress amount.
Any thoughts on this?
Masking is probably overkill for this. Just redraw the image each time. When you do, you draw the red rectangle to fill the lower half of the drawing, to whatever height you want it; then you draw the droplet image (a PNG), which has transparency in the middle so the red rectangle shows through. So, one PNG is enough because the red rectangle can be drawn "live" each time you redraw.
I liked your drawing so much that I wanted to bring it to life, so here's my working code (my PNG is called tear.png and iv is a UIImageView in my interface; percent should be a CGFloat between 0 and 1):
func redraw(percent:CGFloat) {
    let tear : UIImage! = UIImage(named:"tear")
    if tear == nil { return }
    let sz = tear.size
    let top = sz.height * (1 - percent)
    UIGraphicsBeginImageContextWithOptions(sz, false, 0)
    let con = UIGraphicsGetCurrentContext()
    UIColor.redColor().setFill()
    CGContextFillRect(con, CGRectMake(0, top, sz.width, sz.height))
    tear.drawAtPoint(CGPointMake(0, 0))
    self.iv.image = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
}
I also hooked up a UISlider whose action method converts its value to a CGFloat and calls that method, so that moving the slider back and forth moves the red fill up and down in the teardrop. I could play with this for hours!
I have an input image looking like this:
As you can see, it's a restore window icon with a blue tint and a background in a pink color.
There are some pixels which are a mix of both colors by an amount I want to calculate, but don't know how. The 100% background and 100% foreground color is given.
Eventually, I want to create an alpha bitmap in which the RGB components of every pixel are the foreground color's RGB, but the alpha channel is the mix amount:
I found the answer (myself) after discussing with some mathematicians over at math.stackexchange.
Please read my answer there for the logic behind this; in C#, the code would be like this:
private static double GetMixAmount(Color fore, Color back, Color input)
{
    double lengthForeToBack = GetLengthBetween3DVectors(fore, back);
    double lengthForeToInput = GetLengthBetween3DVectors(fore, input);
    return lengthForeToInput / lengthForeToBack; // same as 1 / (lengthForeToBack / lengthForeToInput)
}

private static double GetLengthBetween3DVectors(Color a, Color b)
{
    // Typical length between two 3-dimensional points - simply handle RGB as XYZ!
    return Math.Sqrt(
        Math.Pow(a.R - b.R, 2) + Math.Pow(a.G - b.G, 2) + Math.Pow(a.B - b.B, 2));
}
If you don't know whether the foreground and background colors can really be mixed to produce the input color, make sure to clamp the alpha value to lie between 0.0 and 1.0.
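For illustration, here is the same vector-distance idea as a short Python sketch (hypothetical helper names, not the C# above), with the suggested clamping applied:

```python
import math

def mix_amount(fore, back, inp):
    """Estimate how much of `back` was mixed into `fore` to produce `inp`.

    Colors are (r, g, b) tuples. Treating RGB as points in 3D space, the
    mix amount is the distance fore->input relative to the distance
    fore->back, clamped to [0, 1] in case `inp` doesn't lie exactly on
    the line between the two colors.
    """
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    full = dist(fore, back)
    if full == 0:
        return 0.0  # fore == back: no meaningful mix
    return max(0.0, min(1.0, dist(fore, inp) / full))

fore = (0, 0, 255)      # blue foreground
back = (255, 0, 255)    # pink background
half = (127.5, 0, 255)  # a 50/50 mix of the two
print(mix_amount(fore, back, half))  # -> 0.5
```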
I'm trying to implement transparent objects in D3D11. I've setup my blend state like this:
D3D11_BLEND_DESC blendDesc;
ZeroMemory(&blendDesc, sizeof (D3D11_BLEND_DESC));
blendDesc.RenderTarget[0].BlendEnable = TRUE;
blendDesc.RenderTarget[0].SrcBlend = D3D11_BLEND_SRC_ALPHA;
blendDesc.RenderTarget[0].DestBlend = D3D11_BLEND_INV_SRC_ALPHA;
blendDesc.RenderTarget[0].BlendOp = D3D11_BLEND_OP_ADD;
blendDesc.RenderTarget[0].SrcBlendAlpha = D3D11_BLEND_ZERO;
blendDesc.RenderTarget[0].DestBlendAlpha = D3D11_BLEND_ZERO;
blendDesc.RenderTarget[0].BlendOpAlpha = D3D11_BLEND_OP_ADD;
blendDesc.RenderTarget[0].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL; //0x0f;
// set blending
m_d3dDevice->CreateBlendState(&blendDesc, &blendState);
float blendFactor[4] = {1,1,1, 1 };
m_d3dContext->OMSetBlendState(blendState, blendFactor, 0xffffffff);
Rendering a transparent object on top of a non-transparent object looks fine. The problem is that when I draw one transparent object, and then another transparent object on top of it, their colors add up and the result is less transparent. How do I prevent this? Thank you very much.
Your alpha blending follows the formula ResultingColor = alpha * RenderedColor + (1 - alpha) * BackbufferColor. At the overlapping parts of your transparent objects this formula is applied twice. For example, if your alpha is 0.5, the first object replaces 50% of the backbuffer color. The second object takes 50% of its own color and 50% of the previous color, which is itself 50% background and 50% first object, leaving only 25% of your background. This is why overlapping transparent objects look more opaque.
If you want equal transparency over the whole screen, you could render your transparent objects onto an offscreen texture. Afterwards, you render this texture over the backbuffer with a fixed transparency, or encode the transparency in the texture if you need different values.
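As a sanity check, this tiny Python sketch (a single color channel with made-up values, not D3D code) walks the blend formula through two overlapping draws:

```python
def blend_over(src_color, alpha, dst_color):
    """Standard alpha blend for one channel:
    SrcAlpha * src + InvSrcAlpha * dst."""
    return alpha * src_color + (1.0 - alpha) * dst_color

bg  = 1.0   # white background, single channel for simplicity
obj = 0.0   # two black, half-transparent objects
a   = 0.5

once  = blend_over(obj, a, bg)    # first object: 50% of background remains
twice = blend_over(obj, a, once)  # overlap: only 25% of background remains
print(once, twice)  # -> 0.5 0.25
```

The overlap ends up at 25% background, matching the explanation above; rendering both objects to an offscreen texture first and compositing once avoids the double application.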
Here's what I'm trying to do: On the left is a generic, uncolorized RGBA image that I've created off-screen and cached for speed (it's very slow to create initially, but very fast to colorize with any color later, as needed). It's a square image with a circular swirl. Inside the circle, the image has an alpha/opacity of 1. Outside the circle, it has an alpha/opacity of 0. I've displayed it here inside a UIView with a background color of [UIColor scrollViewTexturedBackgroundColor]. On the right is what happens when I attempt to colorize the image by filling a solid red rectangle over the top of it after setting CGContextSetBlendMode(context, kCGBlendModeColor).
That's not what I want, nor what I expected. Evidently, colorizing a completely transparent pixel (i.e., an alpha value of 0) results in the full-on fill color for some strange reason, rather than remaining transparent as I would have expected.
What I want is actually this:
Now, in this particular case, I can set the clipping region to a circle, so that the area outside the circle remains untouched — and that's what I've done here as a workaround.
But in my app, I also need to be able to colorize arbitrary shapes where I don't know the clipping/outline path. One example is colorizing white text by overlaying a gradient. How is this done? I suspect there must be some way to do it efficiently — and generally, with no weird path/clipping tricks — using image masks... but I have yet to find a tutorial on this. Obviously it's possible because I've seen colored-gradient text in other games.
Incidentally, what I can't do is start with a gradient and clip/clear away parts I don't need — because (as shown in the example above) my uncolorized source images are, in general, grayscale rather than pure white. So I really need to start with the uncolorized image and then colorize it.
p.s. — kCGBlendModeMultiply also has the same flaws / shortcomings / idiosyncrasies when it comes to colorizing partially transparent images. Does anyone know why Apple decided to do it that way? It's as if the Quartz colorizing code treats RGBA(0,0,0,0) as RGBA(0,0,0,1), i.e., it completely ignores and destroys the alpha channel.
One approach that will work is to construct a mask from the original image and then invoke the CGContextClipToMask() function before rendering your image with the multiply blend mode set. Here is the CoreGraphics code that would set the mask before drawing the image to be colorized.
CGContextRef context = [frameBuffer createBitmapContext];
CGRect bounds = CGRectMake( 0.0f, 0.0f, width, height );
CGContextClipToMask(context, bounds, maskImage.CGImage);
CGContextDrawImage(context, bounds, greyImage.CGImage);
The slightly trickier part is taking the original image and generating a maskImage. For that, you can write a loop that examines each pixel and writes either a black or a white pixel as the mask value: if the original pixel in the image to be colorized is completely transparent, write a black pixel; otherwise write a white pixel. Note that the mask will be a 24-BPP image. Here is some code to give you the right idea.
uint32_t *inPixels = (uint32_t*) MEMORY_ADDR_OF_ORIGINAL_IMAGE;
uint32_t *maskPixels = malloc(numPixels * sizeof(uint32_t));
uint32_t *maskPixelsPtr = maskPixels;

for (int rowi = 0; rowi < height; rowi++) {
    for (int coli = 0; coli < width; coli++) {
        uint32_t inPixel = *inPixels++;
        uint32_t inAlpha = (inPixel >> 24) & 0xFF;
        uint32_t cval = 0;
        if (inAlpha != 0) {
            cval = 0xFF;
        }
        uint32_t outPixel = (0xFF << 24) | (cval << 16) | (cval << 8) | cval;
        *maskPixelsPtr++ = outPixel;
    }
}
You will of course need to fill in all the details and create the graphics contexts and so on. But the general idea is to simply create your own mask to filter out drawing of the red parts around the outside of the circle.