Intersection Region Color - XNA

I have drawn two semi-transparent circles which intersect each other. I have found that the intersection region is deeper in color than the other regions. Is there any way to make the whole shape one semi-transparent color, so the color isn't deeper in one area than in others?
Is it possible to send me any sample code to solve the problem?
Right now, in the draw method, I am using the following code:
spriteBatch.Begin(SpriteBlendMode.AlphaBlend);
spriteBatch.Draw(textureCircle1, spritePositionCircle1, new Color(255, 255, 255, 150));
spriteBatch.Draw(textureCircle2, spritePositionCircle2, new Color(255, 255, 255, 150));
spriteBatch.End();
base.Draw(gameTime);

I'm not an XNA guy, so you might have to do some translation.
Could you perhaps render them off screen as white-on-black monochrome, then take the resulting image, make the white areas the semi-transparent color you want and the black areas completely transparent?
I'm not sure how you'd code that up, but that's the approach I'd research.
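The idea can be sketched outside XNA. The following Python snippet (plain nested lists, with hypothetical helper names of my own choosing) shows why the mask approach works: rasterize both circles into a single boolean coverage mask, then assign one uniform semi-transparent color wherever the mask is set. Because the mask is the *union* of the circles, the overlap cannot double up the alpha.

```python
def circle_mask(width, height, cx, cy, r):
    """Boolean grid: True where the pixel lies inside the circle."""
    return [[(x - cx) ** 2 + (y - cy) ** 2 <= r * r
             for x in range(width)]
            for y in range(height)]

def union_to_rgba(masks, width, height, rgb, alpha):
    """OR the masks together, then color every covered pixel identically."""
    out = [[(0, 0, 0, 0)] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            if any(m[y][x] for m in masks):
                out[y][x] = (*rgb, alpha)   # same alpha everywhere
    return out

w, h = 20, 10
m1 = circle_mask(w, h, 6, 5, 4)
m2 = circle_mask(w, h, 11, 5, 4)   # overlaps m1
img = union_to_rgba([m1, m2], w, h, (255, 0, 0), 150)
# img[5][8] (inside both circles) and img[5][3] (inside only the first)
# get exactly the same (255, 0, 0, 150) color.
```

In XNA terms, the equivalent would be rendering both circles opaquely to a render target and then drawing that target once with the desired alpha.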

Check the alpha values of the pixels which you think are less transparent (you're drawing these onto their own surface, not directly to the backbuffer, right?). They may just seem less transparent because their combined color is darker.
If they truly are less transparent, change the transparency of every pixel on the surface to the same value (I'm afraid I don't know how to do this in XNA).
If they only seem less transparent, try drawing your sprites onto the surface completely opaquely (so one will completely overwrite the other), then again change the transparency of the surface as a whole.

Related

Metal - How to overlap textures based on color

I'm trying to use a render pass descriptor to draw two grayscale textures. I am drawing a black square first, then a light gray square after. The second square partially covers the first.
With this setup, the light gray square will always appear in front of the black square because it was drawn most recently in the render pass. However, I would like to know if there is a way to draw the black square above the light gray one based on its brightness. Since the squares only partially overlap is there a way to still have the black square appear on top simply because it has a darker pixel value?
Currently it looks something like this, where the gray square is drawn second so it appears on top.
What I would like is to be able to still draw the gray square second, but have it appear underneath based on the pixel brightness, like so:
I think MTLBlendOperationMin will do what you want: https://developer.apple.com/documentation/metal/mtlblendoperation/mtlblendoperationmin?language=objc
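For intuition, a min blend operation computes the componentwise minimum of the source and destination colors, so the darker pixel "wins" regardless of draw order. A tiny Python model of that per-pixel rule (not actual Metal code):

```python
def blend_min(dst, src):
    """Componentwise minimum of destination and source colors."""
    return tuple(min(d, s) for d, s in zip(dst, src))

# Drawing black over gray, or gray over black, yields the same result:
gray, black = (0.8, 0.8, 0.8), (0.0, 0.0, 0.0)
print(blend_min(gray, black))  # (0.0, 0.0, 0.0)
print(blend_min(black, gray))  # (0.0, 0.0, 0.0)
```

This order-independence is exactly why the black square would appear "on top" even when the gray one is drawn second.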

Detecting "surrounded" areas in a binary image

From performing another operation, I get a B/W (binary) image which has white and black areas. Now I want to find and flood-fill the black areas that are completely surrounded by white and not touching the image border.
The brute-force approach I used is basically iterating over all pixels (all but the border rows/cols); if it finds a black one, I look at the neighbours (marking them as visited) and, if they are black, recursively go to their neighbours. If I only hit white pixels and never reach the border, I flood-fill the area.
This can take a while on a high-resolution image.
Is there a not-too-complicated, faster way to do this?
Thank you.
As you have a binary image, you can perform a connected component labeling of the black components. All the components found are surrounded with white. Then you go along the borders in order to find the components that touch the border, and you delete them.
Another, simpler and faster, solution would be to go along the borders and, as soon as you find a black pixel, start a flood fill from it that turns every connected black pixel white. Doing that, you delete all the black components touching the borders; only the black components that do not touch the border remain.
If most black areas are not touching the border, doing the reverse is likely faster (but equally complicated).
From the border, mark every reachable pixel (reachable meaning you can get to it from the border via only black pixels). After this, do a pass over the whole image: anything black and not visited is a surrounded area.
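The border-seeded approach above can be sketched as follows (a minimal Python version using BFS instead of recursion to avoid stack-depth issues on large regions; here 0 = black, 1 = white, and the function name is my own):

```python
from collections import deque

def surrounded_black_pixels(img):
    """Return coordinates of black pixels fully enclosed by white."""
    h, w = len(img), len(img[0])
    visited = [[False] * w for _ in range(h)]
    queue = deque()
    # Seed the BFS with every black pixel on the image border.
    for x in range(w):
        for y in (0, h - 1):
            if img[y][x] == 0 and not visited[y][x]:
                visited[y][x] = True
                queue.append((y, x))
    for y in range(h):
        for x in (0, w - 1):
            if img[y][x] == 0 and not visited[y][x]:
                visited[y][x] = True
                queue.append((y, x))
    # Expand through 4-connected black neighbours.
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w \
                    and img[ny][nx] == 0 and not visited[ny][nx]:
                visited[ny][nx] = True
                queue.append((ny, nx))
    # Anything black and unvisited is enclosed.
    return [(y, x) for y in range(h) for x in range(w)
            if img[y][x] == 0 and not visited[y][x]]

img = [
    [0, 1, 1, 1, 1],
    [1, 1, 0, 1, 1],   # the pixel at (1, 2) is enclosed by white
    [1, 1, 1, 1, 1],
    [0, 0, 1, 1, 1],
]
print(surrounded_black_pixels(img))  # [(1, 2)]
```

Each pixel is visited at most once, so this is linear in the image size, which is what makes it faster than testing every interior black region separately.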

why resizableImageWithCapInsets's best performance is tiled by 1x1 rather than block by block

UIImage resizableImageWithCapInsets official document description are below.
During scaling or resizing of the image, areas covered by a cap are not scaled or resized. Instead, the pixel area not covered by the cap in each direction is tiled, left-to-right and top-to-bottom, to resize the image. This technique is often used to create variable-width buttons, which retain the same rounded corners but whose center region grows or shrinks as needed. For best performance, use a tiled area that is a 1x1 pixel area in size.
I don't understand why using a 1x1 pixel tiled area gives the best performance. I would think tiling block by block would perform better than 1x1; in theory, copying block by block is faster than point by point, isn't it? Can anyone explain how this is implemented on the machine?
@jhabbott makes a good guess in his comment on the accepted answer to the question How does UIEdgeInsetsMake work?
So, I think if the tiled area is just 1x1 pixel, then resizableImageWithCapInsets: can just use that pixel's color as the fill color. That way, it doesn't have to do any tiling at all; essentially, it's like setting view.backgroundColor = color. Have you ever written any drawing code? Basically, filling an area with a single color is cheaper than tiling that area with a rectangle of pixels, since the latter takes more calculations (where to position the next tile, etc.). I'm just guessing here, but if you try writing the drawing code to fill a rect with a color versus tiling a rect of pixels onto another rect, you'll see where I'm coming from.

lightness algorithm using HSI

Does anyone know an algorithm to non-linearly change lightness using the HSI model?
I am currently doing something like this:
new intensity = old intensity^(1/4)
It increases the lightness of dark colors more than that of bright colors.
The problem is that, before enhancement, some pixels look black because of very low lightness; after enhancement their lightness increases and their actual colors appear, which makes the black areas of the photo show different colors (e.g. grey, blue). I have tried quite a few ways to solve it by lowering the new lightness of the black spots, but I have had no luck so far.
Is there any way to solve it, or is there a better algorithm? The problem is only with colors that appear black before enhancement.
Please help. Thanks a lot.
The HSI values of dark pixels are usually degenerate. This is because, for example, a fully saturated maximally-dark blue = black, is identical in appearance to a completely de-saturated (grey) pixel at its darkest = black (this is the reason the 3D space shape usually has a pointed tip at the degenerate/singular colors).
You should not enhance pixels under a certain threshold value, or alternatively, use some weighting function that inhibits enhancement at the very dark values.

XNA Layered Sprite problem

I have a game object that manages several sprite objects. Each of the sprites overlap each other a bit, and drawing them looks just fine when they are at 100% opacity. If I set their opacity to say, 50% that is when it all goes to pot because any overlapping area is not 50% opaque due to the multiple layers.
EDIT: Ooops! For some reason I thought that I couldn't upload images. Anyway....
http://postimage.org/image/2fhcmn6s/ --> Here it is. Guess I need more rep for proper inclusion.
From left to right:
1. Multiple sprites, 100% opacity. Great!
2. Both are 50%, but notice how the overlap region distinguishes them as two sprites.
3. This is the desired behavior. They are 50% opaque, but in terms of the composite image.
What is the best way to mitigate this problem? Is a render target a good idea? What if I have hundreds of these 'multi-sprites'?
Hope this makes sense. Thanks!
Method 1:
If you care about the individual opacity of each sprite, then render the image on the background to a rendertarget texture of the same size using 50% or whatever opacity you want the sprite to have against the background. Then draw this rendertarget with 100% opacity.
In this way, all sprites will be blended against the background only, and other sprites will be ignored.
Method 2:
If you don't care about setting the individual opacity of each sprite, then you can just draw all sprites with 100% opacity to a rendertarget. Then draw that render target over your background at 50% opacity.
Performance concerns:
I mentioned two examples of drawing to rendertargets, each for a different effect.
Method 1:
You want to be able to specify a different opacity for each sprite.
If so, you need to render every sprite to a rendertarget and then draw that rendertarget texture to the final target. Effectively, this is the same cost as drawing twice as many sprites as you need; with a couple of hundred multi-sprites that's around 400 draw calls, which can be very expensive.
If you batch the calls though, and use a single large rendertarget for all of the sprites, you might get away with just 2 draw calls (depending on how big your sprites are, and the max size of a texture).
Method 2:
You don't need different opacity per each sprite.
In this case you can almost certainly get away with just 2 draw calls, regardless of sprite size.
Just batch all draw calls of the sprites (with 100% opacity) to draw to a rendertarget. That's one draw call.
Now draw that rendertarget on top of your background image with the desired opacity (e.g. 50% opacity), and all sprites will have this opacity.
This case is easier to implement.
The first thing your example images reminded me of is the "depth-buffer and translucent surfaces" problem.
In a 3D game you must sort your translucent surfaces from back-to-front and draw them only after you have rendered the rest of your scene - all with depth reading and writing turned on. If you don't do this you end up with your 3rd image, when you normally want your 2nd image with the glass being translucent over the top of what is behind it.
But you want the 3rd image - with some transparent surfaces obscuring other ones - so you could just deliberately cause this depth problem!
To do this you need to turn on depth reads and writes and set your depth function so that a second sprite drawn at the same depth as a previously drawn sprite does not render.
To achieve this in XNA 4.0 you need to pass, to SpriteBatch.Begin, a DepthStencilState with its DepthBufferFunction set to CompareFunction.Less (by default it is less-than-or-equal-to) and DepthBufferEnable and DepthBufferWriteEnable set to true.
There may be interactions with the sprite's layerDepth parameter (I cannot remember how it maps to depth by default).
You may also need to use BasicEffect as your shader for your sprite batch - specifically so you can set a projection matrix with appropriate near and far planes. This article explains how to do that. And you may also need to clear your depth buffer before hand.
Finally, you need to draw your sprites in the correct order - with the unobscured sprite first.
I am not entirely sure if this will work and if it will work reliably (perhaps you will get some kind of depth fighting issue, I am not sure). But I think it's worth a try, given that you can leave your rendering code essentially normal and just adjust your render state.
You should try the stuff in Andrew's answer first, but if that doesn't work, you could still render all of the sprites (assuming they all have the same opacity) onto a RenderTarget(2D) with 100% opacity, and then render that RenderTarget to the screen with 50%.
Something like this in XNA 4.0:
RenderTarget2D rt = new RenderTarget2D(graphicsDevice,
    graphicsDevice.PresentationParameters.BackBufferWidth,
    graphicsDevice.PresentationParameters.BackBufferHeight);

GraphicsDevice.SetRenderTarget(rt);
// Draw sprites here at 100% opacity
GraphicsDevice.SetRenderTarget(null);

// Then draw rt (also a Texture2D) with 50% opacity. Note that
// Color.FromArgb is System.Drawing, not XNA; in XNA 4.0 use the
// multiply operator, which yields a premultiplied 50% white:
spriteBatch.Begin();
spriteBatch.Draw(rt, Vector2.Zero, Color.White * 0.5f);
spriteBatch.End();
