How to avoid transparency overlap using OpenGL? (iOS)

I am working on a handwriting application on iOS. I found the sample project "GLPaint" in the iOS documentation, which is implemented with OpenGL ES, and made some modifications to it.
I track the touch points, calculate the curves between them, and draw particle images along the curve so that the stroke follows the path of the finger.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, brushData); // brushData is pixel data extracted from a CGImage
// vertexBuffer is generated from the calculated points; it's just a sequence of points where the brush image needs to be drawn.
glVertexPointer(2, GL_FLOAT, 0, vertexBuffer);
glDrawArrays(GL_POINTS, 0, vertexCount);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
[context presentRenderbuffer:GL_RENDERBUFFER_OES];
What I get is a solid line that looks quite good. But now I want to draw a semi-transparent highlight instead of a solid line, so I replaced the particle image with a 50%-transparency one without changing the code.
(Screenshot: result with the 50%-transparency particle image)
Something is wrong with the blending.
What I need
I draw three points using the semi-transparent particle image, and the intersection areas should keep 50% transparency.
What's the solution?

I'm maybe two years late answering this question, but I hope it helps somebody who comes here looking for a solution to this problem, as happened to me.
You are going to need to assign each circle a different z value. It doesn't matter how big or small the difference is; we only need them to not be strictly equal.
First, disable writing to the color buffer with glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE), then draw the circles normally. The Z-buffer will be updated as desired, but no circles will be drawn yet.
Then, enable writing to the color buffer (glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE)) and set the depth function to GL_LEQUAL (glDepthFunc(GL_LEQUAL)). Only the nearest circle's pixels will pass the depth test (using LEQUAL instead of EQUAL deals with some rare but possible floating-point approximation errors). Enabling blending and drawing the circles again will produce the image you wanted, with no transparency overlap.
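A minimal sketch of that two-pass approach in OpenGL ES calls (drawCircles() is a hypothetical helper that issues the point-sprite draws, with each circle given its own z):
// Pass 1: depth only. Each circle must carry a unique z value.
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LESS);
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE); // no color writes
drawCircles();                                       // fills the Z-buffer
// Pass 2: color where the depth test passes.
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glDepthFunc(GL_LEQUAL);   // tolerate floating-point rounding
glDepthMask(GL_FALSE);    // keep the depth buffer from pass 1 intact
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
drawCircles();            // only the front-most fragment per pixel survives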

You have to change the blend function. You can play around with:
glBlendFunc(GL_SRC_ALPHA, GL_ONE);
Maybe (GL_ONE, GL_ONE); I forget exactly how to handle your case, but the solution lies in that function.
http://www.opengl.org/sdk/docs/man/xhtml/glBlendFunc.xml
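For completeness, blending also has to be enabled for the blend function to take effect; a minimal setup (a sketch, showing the standard alternatives) might be:
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); // classic alpha blending
// or additive, which brightens where stamps overlap:
// glBlendFunc(GL_SRC_ALPHA, GL_ONE);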

Late reply but hopefully useful for others.
Another way to avoid that effect is to grab the color buffer before the transparent circles are drawn (i.e. do a GrabPass) and then read it and blend manually with the opaque scene in the fragment shader of your circles.
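On OpenGL ES a grab pass can be approximated by copying the current framebuffer into a pre-allocated texture before the circles are drawn; a sketch, where sceneCopy, fbWidth and fbHeight are assumptions:
// Copy the opaque scene into a texture the circle shader can sample.
glBindTexture(GL_TEXTURE_2D, sceneCopy);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, fbWidth, fbHeight);
// In the circles' fragment shader, sample sceneCopy and blend manually
// instead of relying on fixed-function blending.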

Related

How to define order when drawing 2D triangles in OpenGL ES 1.1?

I'm drawing triangles with only x and y coordinates per vertex:
glVertexPointer(2, GL_FLOAT, 0, vertices);
Sometimes when I draw a triangle over another triangle, they appear to be coplanar and the surfaces flicker because they occupy exactly the same space (z-fighting).
Is there a way of saying "OpenGL, I want you to draw this triangle on top of whatever is below it" without using 3D coordinates, or do I have to enable the depth test and use 3D coordinates to control a Z-index?
If you want to render the triangle just on top of whatever was in the framebuffer before, you can simply disable the depth test entirely. But if you need some custom ordering different from draw order, then you won't get around adding additional depth information (in the form of a third z-coordinate). There is no way to tell OpenGL "render the following stuff with the z-coordinate collectively set to some value". You can either say "render the following stuff on top of whatever is there" or "render the following stuff at whatever depth results from its transformed vertices".
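A short sketch of both options (the draw helpers are hypothetical):
// Option 1: paint in draw order; later triangles always end up on top.
glDisable(GL_DEPTH_TEST);
drawTriangles2D();
// Option 2: custom ordering; supply a z per vertex and use the depth test.
glEnable(GL_DEPTH_TEST);
glVertexPointer(3, GL_FLOAT, 0, vertices3D); // x, y, z per vertex
drawTriangles3D();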

OpenGL subtractive blending

In a paint app I am developing, I want the user to be able to draw with a transparent brush; for example, black paint over a white background should result in grey, and as more paint is applied the colour should get closer to black.
However, no matter how many times I draw over the same place, the resulting colour never turns black; in fact it stops changing after a few strokes. Photoshop says that the alpha of the blob drawn on the left in OpenGL is at most 0.8, where I expect it to be 1.
My app works by drawing series of stamps as in Apple's GLPaint sample to form a line. The stamps are blended with the following function:
glBlendFuncSeparate(GL_ONE, GL_ONE_MINUS_SRC_ALPHA, GL_ONE_MINUS_DST_ALPHA, GL_ONE);
glBlendEquation(GL_FUNC_ADD);
My fragment shader:
uniform lowp sampler2D u_texture;
varying highp vec4 f_color;
void main() {
    gl_FragColor = texture2D(u_texture, gl_PointCoord).aaaa * f_color * vec4(f_color.aaa, 1.0);
}
How should I configure the blending in order to get full colour when drawing repeatedly?
Update 07/11/2013
Perhaps I should also note that I first draw to a texture, and then draw the texture onscreen. The texture is generated using the following code:
glGenFramebuffers(1, &textureFramebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, textureFramebuffer);
glGenTextures(1, &drawingTexture);
glBindTexture(GL_TEXTURE_2D, drawingTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, pixelWidth, pixelHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
Update 02/12/2013
I tried modifying Apple's GLPaint program, and it turned out that this behaviour is observable only on iOS 7. As can be seen in the screenshots below, the colours on iOS 7 are a bit pale and don't blend nicely. The
GL_ONE, GL_ONE_MINUS_SRC_ALPHA
blend function does well on iOS 6. Can this behaviour be caused by iOS 7's implementation of CALayer or by something else, and how do I solve it?
Update 10/07/2014
Apple recently updated their GLPaint sample for iOS 7, and the issue is observable there too. I made a separate thread based on their code: Unmodified iOS7 Apple GLPaint example blending issue
Just because your brush "darkens" the image doesn't mean that this is subtractive blending. It is in fact regular alpha blending, where the black brush merely paints over the picture. You want a (GL_ONE, GL_ONE_MINUS_SRC_ALPHA) blending function (non-separated). The brush shape is contained only in the alpha channel of the texture; there are no colour channels in the texture, and the brush colour is determined by glColor or an equivalent uniform.
Using the destination alpha value is not required in most cases.
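A sketch of that setup, assuming an alpha-only brush texture and a shader (or glColor) that supplies the paint colour premultiplied by alpha:
// Premultiplied-alpha "over" compositing, non-separated:
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
// The fragment colour should then be the brush colour scaled by the
// brush alpha, e.g. vec4(color.rgb * a, a), before blending.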

render a small texture every frame and then scale it up?

Using OpenGL on iOS, is it possible to update a small texture (by setting each pixel individually) and then scale it up to fill the screen (60 frames per second)?
You should be able to update the content of a texture using glTexImage2D.
Untested example:
GLubyte data[32 * 32 * 4]; // 32x32 RGBA texels (power-of-two dimensions)
for (int i = 0; i < 32 * 32 * 4; i += 4) {
    // write a red pixel (RGBA)
    data[i]     = 255;
    data[i + 1] = 0;
    data[i + 2] = 0;
    data[i + 3] = 255;
}
glBindTexture(GL_TEXTURE_2D, my_texture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 32, 32, 0, GL_RGBA, GL_UNSIGNED_BYTE, data);
// then simply render a quad with this texture.
In general the answer is yes, it is possible. But it might depend on what you need to draw.
Since you don't provide more details I will describe the general approach:
Bind a texture to a framebuffer (Here is a good explanation with code on how to do that. See "Example 6.10. Initialize() for Supersampling" code example)
Now draw what you need, just as you would draw it on the screen (transformations, modelview matrix, etc.). If you need pixel accuracy (to modify each and every pixel) you might consider using an orthographic projection; whether that is possible depends on what you need to draw. All this drawing is performed on your texture, achieving the "update the texture" part.
Bind the normal framebuffer that you use to draw on the screen. Draw a rectangle (possibly using an orthographic projection again) that uses the texture from the previous step. You can scale this rectangle to fill the screen.
Whether the above approach can reach 60 fps depends on your target device and the scene you need to render.
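A rough OpenGL ES sketch of those steps (the texture size, defaultFramebuffer and the draw helpers are assumptions):
// One-time setup: attach a small texture to an offscreen framebuffer.
GLuint fbo, smallTex;
glGenFramebuffers(1, &fbo);
glGenTextures(1, &smallTex);
glBindTexture(GL_TEXTURE_2D, smallTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 64, 64, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, smallTex, 0);
// Per frame: render into the small texture, then draw it scaled up.
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glViewport(0, 0, 64, 64);
drawSmallScene();                                       // updates the texture
glBindFramebuffer(GL_FRAMEBUFFER, defaultFramebuffer);  // back to the screen
glViewport(0, 0, screenWidth, screenHeight);
drawFullscreenQuad(smallTex);                           // scales it up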
Hope that helps

OpenGL ES Fill Effect

How to create a fill effect? I have an irregular closed shape created using:
glBindFramebufferOES(GL_FRAMEBUFFER_OES, viewFramebuffer);
(.......)
glVertexPointer(2, GL_FLOAT, 0, vertexBuffer);
glDrawArrays(GL_POINTS, 0, vertexCount);
(.......)
glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
Now I would like to have a "fill/paint bucket" effect like in Photoshop. The background inside the shape is white (for example), and by clicking inside the shape I want to change the colour to red or green.
Can somebody give me some hints, please?
So you render a set of points and want OpenGL to "magically" fill the enclosed region. That's not possible. OpenGL doesn't realize that these points enclose some region. You don't even draw a line strip; it's just a set of points. Even a human has to put in a reasonable effort of thinking to see that the points "enclose" a region, let alone a computer, or rather a simple interface for drawing points, lines and triangles onto the screen.
Instead of drawing points, just draw a filled polygon (use GL_TRIANGLE_FAN instead of GL_POINTS; desktop OpenGL's GL_POLYGON is not available in OpenGL ES). But if the enclosed region is non-convex that won't work in all cases. What you always have to realize is that OpenGL is nothing more than a drawing API. It just draws points, lines and triangles to the screen; yes, with fancy effects, but all in all it just draws simple primitives. It doesn't manage any underlying graphics scene. The moment a primitive (like a single point, line or triangle) has been drawn, OpenGL doesn't remember it anymore.
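For the convex case, the change to the question's code is tiny (a sketch; vertexBuffer must list the outline vertices in fan order):
glColor4f(1.0f, 0.0f, 0.0f, 1.0f);             // fill colour, e.g. red
glVertexPointer(2, GL_FLOAT, 0, vertexBuffer);
glDrawArrays(GL_TRIANGLE_FAN, 0, vertexCount); // filled region, convex only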
What you want to achieve (given you at least change the point set to a line loop that really encloses a region) cannot be achieved by simple means. In the simplest case you need some kind of flood-fill algorithm that fills the region you enclosed by the lines. But for this you don't profit from OpenGL in any way, as it requires you to analyse the image on the CPU and set individual pixels; neither can shaders do this in a simple (or any?) way.
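For illustration, a minimal CPU-side 4-connected flood fill over an RGBA8 pixel buffer (a naive recursive sketch; real implementations use a scanline variant with an explicit stack, and the filled buffer would then be re-uploaded with glTexImage2D):
#include <stdint.h>

// Replaces every pixel matching 'target' reachable from (x, y) with 'fill'.
static void floodFill(uint32_t *pixels, int width, int height,
                      int x, int y, uint32_t target, uint32_t fill)
{
    if (target == fill) return;
    if (x < 0 || y < 0 || x >= width || y >= height) return;
    if (pixels[y * width + x] != target) return;
    pixels[y * width + x] = fill;
    floodFill(pixels, width, height, x + 1, y, target, fill);
    floodFill(pixels, width, height, x - 1, y, target, fill);
    floodFill(pixels, width, height, x, y + 1, target, fill);
    floodFill(pixels, width, height, x, y - 1, target, fill);
}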

How to multiply two sprites in SpriteBatch Draw XNA (2D)

I am writing a simple hex engine for an action RPG in XNA 3.1. I want to light the ground near the hero and torches, just as they were lit in Diablo II. I thought the best way to do so was to calculate the field of view, hide any tiles (and their contents) that the player can't see, and draw a special "light" texture on top of every light source: a texture that is black with a white, blurred circle in its centre.
I wanted to multiply this texture with the background (as in the "multiply" blending mode), but unfortunately I don't see an option for doing that in SpriteBatch. Could someone point me in the right direction?
Or perhaps there is another, better way to achieve a lighting model like the one in Diablo II?
If you were to multiply your light texture with the scene, you will darken the area, not brighten it.
You could try rendering with additive blending; this won't quite look right, but it is easy and may be acceptable. You will have to draw your light with a fairly low alpha so the light texture doesn't oversaturate that part of the image.
Another, more complicated, way of doing lighting is to draw all of your light textures (for all the lights in the scene) additively onto a second render target, and then multiply this texture with your scene. This should give much more realistic lighting, but has a larger performance overhead and is more complex.
Initialisation:
RenderTarget2D lightBuffer = new RenderTarget2D(graphicsDevice, screenWidth, screenHeight, 1, SurfaceFormat.Color);
Color ambientLight = new Color(0.3f, 0.3f, 0.3f, 1.0f);
Draw:
// set the render target and clear it to the ambient lighting
graphicsDevice.SetRenderTarget(0, lightBuffer);
graphicsDevice.Clear(ambientLight);
// additively draw all of the lights onto this texture. The lights can be coloured etc.
spriteBatch.Begin(SpriteBlendMode.Additive);
foreach (var light in lights)
spriteBatch.Draw(lightFadeOffTexture, light.Area, light.Color);
spriteBatch.End();
// change render target back to the back buffer, so we are back to drawing onto the screen
graphicsDevice.SetRenderTarget(0, null);
// draw the old, non-lit, scene
DrawScene();
// multiply the light buffer texture with the scene
spriteBatch.Begin(SpriteBlendMode.Additive, SpriteSortMode.Immediate, SaveStateMode.None);
graphicsDevice.RenderState.SourceBlend = Blend.Zero;
graphicsDevice.RenderState.DestinationBlend = Blend.SourceColor;
spriteBatch.Draw(lightBuffer.GetTexture(), new Rectangle(0, 0, screenWidth, screenHeight), Color.White);
spriteBatch.End();
As far as I know there is no way to do this without using your own custom shaders.
A custom shader for this would work like so:
Render your scene to a texture
Render your lights to another texture
As a post process on a blank quad, sample the two textures and the result is Scene Texture * Light Texture.
This will output a lit scene, but it won't do any shadows. If you want shadows I'd suggest following this excellent sample from Catalin Zima
Perhaps using the same technique as in the BloomEffect component could be an idea.
Basically, the effect grabs the rendered scene, calculates a bloom image from the brightest areas in the scene, then blurs and combines the two. The result is highlighting areas depending on colour.
The same approach could be used here. It would be simpler, since you wouldn't have to calculate the bloom image based on the background, only based on the position of the character.
You could even reuse this further to provide highlighting for other light sources as well, such as torches, magic effects and whatnot.
