Changing alpha changes color - iOS

I'm rendering elements in OpenGL ES that all share the same color but have different alphas. The problem is that at certain alpha values the color itself changes, not just the opacity. For example, the color shifts from black to purple when the alpha component is set to 0.1. When the alpha is not 0.1 (or certain other tenth values), it works fine.
I'm setting this blending prior to drawing:
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
Drawing:
// If the element has any data, draw it.
if (vertexBuffer) {
    glVertexPointer(2, GL_FLOAT, sizeof(struct Vertex), &vertexBuffer[0].Position[0]);
    glColorPointer(4, GL_FLOAT, sizeof(struct Vertex), &vertexBuffer[0].Color[0]);
    glTexCoordPointer(2, GL_FLOAT, sizeof(struct Vertex), &vertexBuffer[0].Texture[0]);
    glDrawArrays(GL_TRIANGLES, 0, [element numberOfSteps] * [element numberOfVerticesPerStep]);
}

This is normal behavior for pre-multiplied alpha (which your blend function implies). With pre-multiplied alpha, you must change color and alpha together.
Thus, the new color should equal originalColor * alpha whenever you change the value of alpha. Note that you should use the floating-point normalized value of alpha (0.0 - 1.0), not the fixed-point value (0 - 255).
Consider the blend function you have selected:
             DestinationRGB = (SourceRGB * 1.0) + (DestinationRGB * (1.0 - SourceA));
Traditional alpha blending is:
             DestinationRGB = (SourceRGB * SourceA) + (DestinationRGB * (1.0 - SourceA));
Notice how the more traditional blend equation already performs SrcRGB * SrcA? Your particular blend equation only works correctly if the RGB components are literally pre-multiplied by the alpha component.
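In practice that means re-scaling the RGB components every time the alpha changes. A minimal sketch (the premultiply() helper is illustrative, not part of the code above):
// Pre-multiply a normalized RGBA color in place: RGB *= A.
// With this, 10%-opaque white becomes (0.1, 0.1, 0.1, 0.1)
// rather than (1.0, 1.0, 1.0, 0.1).
static void premultiply(GLfloat color[4])
{
    color[0] *= color[3];
    color[1] *= color[3];
    color[2] *= color[3];
}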

Related

Trails effect, clearing a frame buffer with a transparent quad

I want a trails effect. I am drawing particles into a frame buffer which is never cleared (it accumulates draw calls). Fading out is done by drawing a black quad with a small alpha, for example (0.0, 0.0, 0.0, 0.1). It is a two-step process, repeated per frame:
- drawing a black quad
- drawing particles at new positions
Everything works nicely, and the moving particles produce long trails, EXCEPT that the black quad does not clear the FBO down to perfect zero. Faint trails remain forever (e.g. the buffer's RGBA stays at 4,4,4,255).
I assume the problem starts when the blending function multiplies the FBO's small 8-bit RGBA values (the destination color) by, for example, (1.0 - 0.1) = 0.9, and rounding prevents any further reduction: 4 * 0.9 = 3.6, which is rounded back to 4, forever.
Is my method (drawing a black quad) inherently useless for trails? I cannot find a blend function that could help, since all of them multiply the DST color by some value, which would have to be very small to produce long trails.
The trails are drawn using this code:
GLint drawableFBO; // glGetIntegerv expects a GLint*
glGetIntegerv(GL_FRAMEBUFFER_BINDING, &drawableFBO);

// Fade pass: darken the accumulated trails with a low-alpha black quad.
glBindFramebuffer(GL_FRAMEBUFFER, FBO); // has an attached texture: glFramebufferTexture2D -> FBOTextureId
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
glUseProgram(fboClearShader);
glUniform4f(fboClearShader.uniforms.color, 0.0, 0.0, 0.0, 0.1);
glUniformMatrix4fv(fboClearShader.uniforms.modelViewProjectionMatrix, 1, 0, mtx.m);
glBindVertexArray(fboClearShaderBuffer);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

// Particle pass: additively draw the particles at their new positions.
glUseProgram(particlesShader);
glUniformMatrix4fv(shader.uniforms.modelViewProjectionMatrix, 1, 0, mtx.m);
glUniform1f(shader.uniforms.globalAlpha, 0.9);
glBlendFunc(GL_ONE, GL_ONE);
glBindTexture(GL_TEXTURE_2D, particleTextureId);
glBindVertexArray(particlesBuffer);
glDrawArrays(GL_TRIANGLES, 0, 1000 * 6);

// Back to the drawable buffer: composite the FBO texture to the screen.
glBindFramebuffer(GL_FRAMEBUFFER, drawableFBO);
glUseProgram(fullScreenShader);
glBindVertexArray(screenQuad);
glBlendFunc(GL_ONE, GL_ONE);
glBindTexture(GL_TEXTURE_2D, FBOTextureId);
glDrawArrays(GL_TRIANGLES, 0, 6);
Blending is defined not only by the blend function glBlendFunc, but also by the blend equation glBlendEquation.
By default the source and destination values are summed after they are processed by the blend function.
Use a blend equation which subtracts a tiny value from the destination buffer, so the destination color is slightly decreased in each frame and finally becomes 0.0.
Note that the result of the blend equation is clamped to the range [0, 1].
e.g.
dest_color = dest_color - RGB(0.01)
The blend equation which subtracts the source color from the destination color is GL_FUNC_REVERSE_SUBTRACT:
float dec = 0.01f; // should be at least 1.0/256.0
glEnable(GL_BLEND);
glBlendEquation(GL_FUNC_REVERSE_SUBTRACT);
glBlendFunc(GL_ONE, GL_ONE);
glUseProgram(fboClearShader);
glUniform4f(fboClearShader.uniforms.color, dec, dec, dec, 0.0);
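Then switch back to the default additive equation for the particle pass, so the particles themselves are not subtracted as well:
glBlendEquation(GL_FUNC_ADD); // GL_FUNC_ADD is the default blend equation
glBlendFunc(GL_ONE, GL_ONE);  // additive particle pass, as in the question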

CGContext color changes when alpha is really small

I'm making a painting app where, at each point along a stroke, the width of the brush can change and the alpha decreases as if the brush were running out of paint. The problem is that when the alpha gets really small the color changes.
Once the alpha goes below .01 I start to see color changes in the brush strokes. I need such a low alpha because I overlay the brush layer into the context at every pixel along the line, and to get the needed transparency as the brush runs out of paint, the alpha ends up needing to be very small.
Here is the code where I am drawing my layer into the context:
CGContextSaveGState(_cacheContext);
CGContextSetAlpha(_cacheContext, _brushAlpha); // below .01 color starts to change
CGContextDrawLayerAtPoint (_cacheContext, bottomLeft, _brushShapeLayer);
CGContextRestoreGState(_cacheContext);
If the RGB color is all one component, such as (1, 0, 0), then it works great. But when the color is something else, like (.4, .6, .2), that's when I see the color changes at low alphas.
Thanks for any help!
* Update *
I tried to use kCGBitmapFloatComponents but I am getting an error:
Unsupported pixel description - 3 components, 32 bits-per-component,
128 bits-per-pixel
I assumed this meant that I couldn't use it on iOS, but maybe I'm not setting it up correctly. Here is what I have for creating the context:
bitmapBytesPerRow = (self.frame.size.width * 8 * sizeof(float));
bitmapByteCount = (bitmapBytesPerRow * self.frame.size.height);
void* bitmap = malloc( bitmapByteCount );
CGBitmapInfo bitmapInfo = kCGImageAlphaPremultipliedLast | kCGBitmapFloatComponents;
self.cacheContext = CGBitmapContextCreate (bitmap, self.frame.size.width, self.frame.size.height, 32, bitmapBytesPerRow, CGColorSpaceCreateDeviceRGB(), bitmapInfo);
My Deployment Target is set at 8.2

GLKit does not display texture when using GL_MODULATE with alpha in vertex color

I have a texture image that I am using with GLKit. If I use GL_MODULATE on the texture and a vertex RGBA of (1.0, 1.0, 1.0, 1.0), then the texture shows up fully opaque, exactly as it would with GL_REPLACE.
Then if I use red (1.0, 0.0, 0.0, 1.0) for the vertex color, the texture shows up again, with red modulating the texture.
So far so good.
But when I change the transparency in the vertex color and use RGBA (1.0, 0.0, 0.0, 0.5), only a light red color is seen and the texture is not visible; the color replaces the texture entirely.
The texture itself has no alpha; it is an RGB565 texture.
I am using GLKit with GLKTextureEnvModeModulate:
self.effect.texture2d0.envMode = GLKTextureEnvModeModulate;
Any help on why the texture would disappear when I specify the alpha?
Adding snapshots:
This is the original texture
RGBA (1.0, 1.0, 1.0, 1.0) - white color, no premultiplication, opaque, texture visible
RGBA (1.0, 1.0, 1.0, 0.5) - white color, no premultiplication, alpha = 0.5, texture lost
RGBA (1.0, 0, 0, 1.0) - red color, no premultiplication, opaque, texture visible
RGBA (1.0, 0, 0, 0.5) - red color, no premultiplication, alpha = 0.5, texture lost
RGBA (0.5, 0, 0, 0.5) - red color, premultiplication, alpha = 0.5 per @andon, texture visible, but you may need to magnify to see it
RGBA (0.1, 0, 0, 0.1) - red color, premultiplication, alpha = 0.1 per @andon, texture lost, probably because there is not enough contrast
RGBA (0.9, 0, 0, 0.9) - red color, premultiplication, alpha = 0.9 per @andon, texture visible, but you may need to magnify to see it
"The texture itself has no alpha, it is RGB565 texture."
RGB565 implicitly has constant alpha (opaque -> 1.0). That may not sound important, but modulating vertex color with texture color does a component-wise multiplication and that would not work at all if alpha were not 1.0.
"My blend function is for pre-multiplied - One, One - Src."
This necessitates pre-multiplying the RGB components of vertex color by the A component. All colors must be pre-multiplied, this includes texels and vertex colors.
You can see why below:
Vtx = (1.0, 0.0, 0.0, 0.5)
Tex = (R, G, B, 1.0)
// Modulate Vertex and Tex
Src = Vtx * Tex = (R, 0, 0, 0.5)
// Pre-multiplied Alpha Blending (done incorrectly)
Blend_RGB = Src * 1 + (1 - Src.a) * Dst
= Src + Dst / 2.0
= (R, 0, 0) + Dst / 2.0
The only thing this does is divide the destination color by 2 and add the unaltered source color to it. It is supposed to resemble linear interpolation (a * c + (1 - c) * b).
Proper blending should look like this:
// Traditional Blending
Blend_RGB = Src * Src.a + (1 - Src.a) * Dst
= (0.5R, 0, 0) + Dst / 2.0
This can be accomplished using the original blend function if you multiply the RGB part of the vertex color by A.
Correct pre-multiplied alpha blending (by pre-multiplying vertex color):
Vtx = (0.5, 0.0, 0.0, 0.5) // Pre-multiply: RGB *= A
Tex = (R, G, B, 1.0)
// Modulate Vertex and Tex
Src = Vtx * Tex = (0.5R, 0, 0, 0.5)
// Pre-multiplied Alpha Blending (done correctly)
Blend_RGB = Src * 1 + (1 - Src.a) * Dst
= (0.5R, 0, 0) + Dst / 2.0
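With GLKit you can do the pre-multiplication on the CPU before handing the color over. A minimal sketch, assuming a constant color is used rather than a per-vertex array (the premultiply() helper is illustrative):
// Pre-multiply RGB by A so the One / One-minus-src-alpha blend works.
static inline GLKVector4 premultiply(GLKVector4 c)
{
    return GLKVector4Make(c.r * c.a, c.g * c.a, c.b * c.a, c.a);
}

self.effect.texture2d0.envMode = GLKTextureEnvModeModulate;
self.effect.useConstantColor = GL_TRUE;
self.effect.constantColor = premultiply(GLKVector4Make(1.0f, 0.0f, 0.0f, 0.5f));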

How do I draw thousands of squares with GLKit, OpenGL ES 2?

I'm trying to draw up to 200,000 squares on the screen, or a lot of squares, basically. I believe I'm just issuing way too many draw calls, and it's crippling the performance of the app. The squares only update when I press a button, so I don't necessarily have to update them every frame.
Here's the code I have now:
- (void)glkViewControllerUpdate:(GLKViewController *)controller
{
    //static float transY = 0.0f;
    //float y = sinf(transY)/2.0f;
    //transY += 0.175f;
    GLKMatrix4 modelview = GLKMatrix4MakeTranslation(0, 0, -5.f);
    effect.transform.modelviewMatrix = modelview;
    //GLfloat ratio = self.view.bounds.size.width/self.view.bounds.size.height;
    GLKMatrix4 projection = GLKMatrix4MakeOrtho(0, 768, 1024, 0, 0.1f, 20.0f);
    effect.transform.projectionMatrix = projection;
    _isOpenGLViewReady = YES;
}

- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect
{
    if (_model.updateView && _isOpenGLViewReady)
    {
        glClear(GL_COLOR_BUFFER_BIT);
        [effect prepareToDraw];
        int pixelSize = _model.pixelSize;
        if (!_model.isReady)
            return;
        //NSLog(@"UPDATING: %d, %d", _model.rows, _model.columns);
        for (int i = 0; i < _model.rows; i++)
        {
            for (int ii = 0; ii < _model.columns; ii++)
            {
                ColorModel *color = [_model getColorAtRow:i andColumn:ii];
                CGRect rect = CGRectMake(ii * pixelSize, i * pixelSize, pixelSize, pixelSize);
                //[self drawRectWithRect:rect withColor:c];
                GLubyte squareColors[] = {
                    color.red, color.green, color.blue, 255,
                    color.red, color.green, color.blue, 255,
                    color.red, color.green, color.blue, 255,
                    color.red, color.green, color.blue, 255
                };
                //NSLog(@"Drawing color with red: %d", color.red);
                int xVal = rect.origin.x;
                int yVal = rect.origin.y;
                int width = rect.size.width;
                int height = rect.size.height;
                GLfloat squareVertices[] = {
                    xVal, yVal, 1,
                    xVal + width, yVal, 1,
                    xVal, yVal + height, 1,
                    xVal + width, yVal + height, 1
                };
                glEnableVertexAttribArray(GLKVertexAttribPosition);
                glEnableVertexAttribArray(GLKVertexAttribColor);
                glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE, 0, squareVertices);
                glVertexAttribPointer(GLKVertexAttribColor, 4, GL_UNSIGNED_BYTE, GL_TRUE, 0, squareColors);
                glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
                glDisableVertexAttribArray(GLKVertexAttribPosition);
                glDisableVertexAttribArray(GLKVertexAttribColor);
            }
        }
        _model.updateView = YES;
    }
}
First, do you really need to draw 200,000 squares? Your 768 × 1024 viewport only has 786,432 pixels total. You might be able to reduce the number of drawn objects without significantly impacting the overall quality of your scene.
That said, if these are smaller squares, you could draw them as points with a pixel size large enough to cover your square's area. That would require setting gl_PointSize in your vertex shader to the appropriate pixel width. You could then generate your coordinates and send them all to be drawn at once as GL_POINTS. That should remove the overhead of the extra geometry of the triangles and the individual draw calls you are using here.
Even if you don't use points, it's still a good idea to calculate all of the triangle geometry you need first, then send all that in a single draw call. This will significantly reduce your OpenGL ES API call overhead.
One other thing you could look into would be to use vertex buffer objects to store this geometry. If the geometry is static, you can avoid sending it on each drawn frame, or only update the part of it that has changed. Even if you swap out the data each frame, I believe using a VBO for dynamic geometry has performance advantages on modern iOS devices.
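As a rough sketch of that batching (QuadVertex, quadCount, and verts are illustrative names, not from the question):
// One interleaved vertex array holding every square, rebuilt only when the
// model changes, then submitted with a single draw call.
typedef struct {
    GLfloat x, y, z;
    GLubyte r, g, b, a;
} QuadVertex;

QuadVertex *verts = malloc(sizeof(QuadVertex) * quadCount * 6); // two triangles per square
// ... fill verts[] from the model here ...

glEnableVertexAttribArray(GLKVertexAttribPosition);
glEnableVertexAttribArray(GLKVertexAttribColor);
glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE,
                      sizeof(QuadVertex), &verts[0].x);
glVertexAttribPointer(GLKVertexAttribColor, 4, GL_UNSIGNED_BYTE, GL_TRUE,
                      sizeof(QuadVertex), &verts[0].r);
glDrawArrays(GL_TRIANGLES, 0, quadCount * 6); // one call instead of 200,000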
Can you not try to optimize it somehow? I'm not terribly familiar with graphics programming, but I'd imagine that if you are drawing 200,000 squares, the chances that all of them are actually visible seem slim. Could you not add some sort of isVisible tag to your mySquare class that determines whether or not the square you want to draw is actually visible? Then the obvious next step is to modify your draw function so that you don't draw a square that isn't visible.
Or are you asking for someone to improve the code you currently have? Because if your performance is as bad as you say, I don't think making small changes to the above code will solve your problem. You'll have to rethink how you're doing your drawing.
It looks like what your code is actually trying to do is take a _model.rows × _model.columns 2D image and draw it upscaled by _model.pixelSize. If -[ColorModel getColorAtRow:andColumn:] is retrieving 3 bytes at a time from an array of color values, then you may want to consider uploading that array of color values into an OpenGL texture as GL_RGB/GL_UNSIGNED_BYTE data and letting the GPU scale up all of your pixels at once.
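A sketch of that upload (rgbBytes is an assumed tightly packed rows × columns × 3 byte array):
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
// GL_NEAREST keeps the blocky, upscaled-pixel look.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
// Non-power-of-two textures in ES 2.0 require clamp-to-edge wrapping.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1); // RGB rows are not 4-byte aligned
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, _model.columns, _model.rows, 0,
             GL_RGB, GL_UNSIGNED_BYTE, rgbBytes);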
Alternatively, if scaling up the contents of your ColorModel is the only reason you're using OpenGL ES and GLKit, you might be better off wrapping your color values in a CGImage and letting UIKit and Core Animation do the drawing for you. How often do the color values in the ColorModel get updated?
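Wrapping the same byte array in a CGImage might look like this (a sketch; rgbBytes, rows, columns, and imageView are assumptions):
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, rgbBytes,
                                                          rows * columns * 3, NULL);
CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
CGImageRef image = CGImageCreate(columns, rows, 8, 24, columns * 3, space,
                                 (CGBitmapInfo)kCGImageAlphaNone, provider,
                                 NULL, false, kCGRenderingIntentDefault);
imageView.image = [UIImage imageWithCGImage:image];
imageView.layer.magnificationFilter = kCAFilterNearest; // crisp, blocky scaling
CGImageRelease(image);
CGColorSpaceRelease(space);
CGDataProviderRelease(provider);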

Overlapping 2 transparent Texture2Ds in OpenGL ES

I'm trying to make a 2D game for the iPad with OpenGL. I'm new to OpenGL in general, so this blending stuff is new to me.
My drawing code looks like this:
static CGFloat r=0;
r+=2.5;
r=remainder(r, 360);
glLoadIdentity();
//you can ignore the rotating and scaling
glRotatef(90, 0,0, -1);
glScalef(1, -1, 1);
glTranslatef(-1024, -768, 0);
glClearColor(0.3,0.8,1, 1.0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glEnable(GL_TEXTURE_2D);
glEnable (GL_BLEND);
glBlendFunc (GL_ONE,GL_ONE_MINUS_SRC_ALPHA);
// fabs, not abs: abs() takes an int and would truncate the double argument
[texture drawInRect:CGRectMake(512-54, fabs(sin(((r+45)/180)*3.14)*500), 108, 108)];
[texture drawInRect:CGRectMake(512-54, fabs(sin((r/180)*3.14)*500), 108, 108)];
("texture" is a Texture2D that has a transparent background)
All I need to know how to do is make it so that a blue box around the texture doesnt cover up the other one.
Sounds like you just need to open the texture image in your favourite image editor and set the blue area to 0% opacity (i.e. alpha = 0) in the alpha channel. The SRC_ALPHA part of GL_ONE_MINUS_SRC_ALPHA refers to the alpha value in the source texture.
Chances are you're using 32-bit colour, in which case you'll have four 8-bit channels: Red, Green, Blue and Alpha.
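To see why this works, plug a transparent texel into the blend function above: with SourceA = 0.0, DestinationRGB = SourceRGB * 1.0 + DestinationRGB * (1.0 - 0.0), and since a pre-multiplied fully transparent texel has SourceRGB = (0, 0, 0), the destination shows through unchanged.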
