I am trying to apply a mask in OpenGL to an image that has transparency.
My current code looks like this:
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect {
    glClearColor(1, 0, 0, 1);
    glClear(GL_COLOR_BUFFER_BIT);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    [self.bgSprite render];
    glBlendFunc(GL_ZERO, GL_ONE_MINUS_SRC_ALPHA);
    [self.maskSprite render];
    glBlendFunc(GL_ONE_MINUS_DST_ALPHA, GL_DST_ALPHA);
    [self.contentSprite render];
}
So first I draw my background sprite, then cut a hole into it with my mask, and then draw the masked image inside that hole. But this approach is not what I need: I want to see my background texture wherever my content image is transparent, but instead I see white there.
Before this I worked on Android, where I could use Porter-Duff modes. The first step was drawing the mask with the SOURCE mode, then the content image with SOURCE_IN, and then the background with DESTINATION_OVER. But I can't find an equivalent in OpenGL.
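For reference, the Porter-Duff arithmetic behind that Android sequence can be sketched in a few lines of Python (this is an illustrative simulation on premultiplied RGBA tuples, not any real API; the helper names are made up):

```python
# Porter-Duff compositing: result = src*Fs + dst*Fd on premultiplied RGBA.
# SRC overwrites, SRC_IN keeps source only where dst has alpha,
# DST_OVER draws source only where dst is transparent.

def composite(src, dst, fs, fd):
    """Per-channel: result = src * fs + dst * fd (premultiplied alpha)."""
    return tuple(s * fs + d * fd for s, d in zip(src, dst))

def src(s, d):      return composite(s, d, 1.0, 0.0)          # SRC
def src_in(s, d):   return composite(s, d, d[3], 0.0)         # SRC_IN
def dst_over(s, d): return composite(s, d, 1.0 - d[3], 1.0)   # DST_OVER

# A pixel inside the mask (mask alpha 1) ...
p = src((0, 0, 0, 1.0), (0, 0, 0, 0))     # 1) draw mask with SRC
p = src_in((0.8, 0.2, 0.2, 1.0), p)       # 2) content survives: mask alpha is 1
p = dst_over((0.1, 0.1, 0.9, 1.0), p)     # 3) dst alpha already 1, bg hidden
print(p)  # (0.8, 0.2, 0.2, 1.0) -> content shows inside the mask

# ... and a pixel outside the mask (mask alpha 0)
q = src((0, 0, 0, 0.0), (0, 0, 0, 0))
q = src_in((0.8, 0.2, 0.2, 1.0), q)       # content discarded: mask alpha is 0
q = dst_over((0.1, 0.1, 0.9, 1.0), q)     # background fills the hole
print(q)  # (0.1, 0.1, 0.9, 1.0) -> background shows outside the mask
```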
I got it.
Here are the steps:
glClearColor(1, 0, 0, 0);
glClear(GL_COLOR_BUFFER_BIT);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
[self.maskSprite render];
glBlendFunc(GL_DST_ALPHA, GL_ZERO);
[self.contentSprite render];
glBlendFunc(GL_ONE_MINUS_DST_ALPHA, GL_DST_ALPHA);
[self.bgSprite render];
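To see why this sequence works, here is a small per-pixel simulation of the three passes in Python (illustrative only; it just applies the glBlendFunc rule result = src*Fs + dst*Fd to RGBA tuples):

```python
# Simulate the three-pass blend above for one pixel:
# pass 1 writes the mask alpha, pass 2 keeps content only where the mask
# has alpha, pass 3 fills the remaining (low-alpha) area with background.

def blend(src, dst, fs, fd):
    return tuple(s * fs + d * fd for s, d in zip(src, dst))

def run_passes(mask_a, content):
    dst = (1.0, 0.0, 0.0, 0.0)                  # glClearColor(1, 0, 0, 0)
    mask = (0.0, 0.0, 0.0, mask_a)
    # pass 1: mask, glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA)
    dst = blend(mask, dst, mask[3], 1.0 - mask[3])
    # pass 2: content, glBlendFunc(GL_DST_ALPHA, GL_ZERO)
    dst = blend(content, dst, dst[3], 0.0)
    # pass 3: background, glBlendFunc(GL_ONE_MINUS_DST_ALPHA, GL_DST_ALPHA)
    bg = (0.1, 0.9, 0.1, 1.0)
    return blend(bg, dst, 1.0 - dst[3], dst[3])

print(run_passes(1.0, (0.5, 0.5, 0.5, 1.0)))  # in mask, opaque content -> content
print(run_passes(1.0, (0.0, 0.0, 0.0, 0.0)))  # in mask, transparent content -> background
print(run_passes(0.0, (0.5, 0.5, 0.5, 1.0)))  # outside mask -> background
```

The second case is the one the original approach got wrong: where the content is transparent, the destination alpha stays 0 after pass 2, so pass 3 fills it with the background instead of white.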
Related
I'm currently trying to divide the OpenGL ES 2.0 drawing process into two halves: the first half, where I render an object of interest (e.g. a cube or triangle) to a framebuffer that has a texture attached to it, and the second half, where I apply that texture onto the face of a shape drawn in another framebuffer (e.g. another cube or triangle).
I cleared the framebuffer bound to the texture with a green color, and have been able to get that color to appear on a triangle that I've drawn in another framebuffer that has the main renderbuffer attached and on which I call [context presentRenderbuffer:renderbuffer]. However, no matter what I do, I'm not able to additionally draw another shape into that texture after I've cleared it to a green background, and render that onto the shape I've drawn.
For some visual reference, currently I'm drawing a square to the screen in my main framebuffer, and then applying a texture that is supposed to have a green background plus a triangle in the middle, but all that I get is this green screen.
It has everything I currently want, except there is no triangle in the middle. Essentially, this should look like a big green square with a black triangle in the middle of it, where both the green and the triangle come from the texture (the square would originally have been black).
My texture drawing method and main framebuffer drawing methods are included below (without the setup code):
- (BOOL)loadModelToTexture:(GLuint *)tex {
    GLuint fb;
    GLenum status;
    glGenFramebuffers(1, &fb);
    // Set up the FBO with one texture attachment
    glBindFramebuffer(GL_FRAMEBUFFER, fb);
    glGenTextures(1, tex);
    glBindTexture(GL_TEXTURE_2D, *tex);
    NSLog(@"Error1: %x", glGetError());
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 128, 128, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, *tex, 0);
    status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
    if (status != GL_FRAMEBUFFER_COMPLETE) {
        // Handle error here
        NSLog(@"Loading model to texture failed");
        return FALSE;
    }
    glClearColor(0.0f, 1.0f, 0.0f, 1.0f); // Set color's clear-value to green
    glClearDepthf(1.0f);                  // Set depth's clear-value to farthest
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glViewport(0, 0, self.frame.size.width * self.contentsScale, self.frame.size.height * self.contentsScale);
    NSLog(@"Error2: %x", glGetError());
    // Update attribute values.
    glVertexAttribPointer(ATTRIB_VERTEX, 2, GL_FLOAT, 0, 0, vertices);
    glEnableVertexAttribArray(ATTRIB_VERTEX);
    NSLog(@"Error3: %x", glGetError());
    glDrawArrays(GL_TRIANGLES, 0, 3);
    return TRUE;
}
- (void)draw {
    [EAGLContext setCurrentContext:context];
    glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);
    glClearColor(0.0f, 0.5f, 0.5f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glViewport(0, 0, self.frame.size.width * self.contentsScale, self.frame.size.height * self.contentsScale);
    // Use the shader program.
    glUseProgram(program);
    // Update attribute values.
    glEnableVertexAttribArray(ATTRIB_VERTEX);
    glVertexAttribPointer(ATTRIB_VERTEX, 2, GL_FLOAT, 0, 0, vertices);
    // For some reason this is unneeded, but I'm not sure why
    glVertexAttribPointer(ATTRIB_TEXTURE_COORD, 2, GL_FLOAT, 0, 0, texCoords);
    glEnableVertexAttribArray(ATTRIB_TEXTURE_COORD);
    glBindTexture(GL_TEXTURE_2D, textureName0);
    glDrawArrays(GL_TRIANGLES, 0, 3);
    glDrawArrays(GL_TRIANGLES, 3, 3);
    [context presentRenderbuffer:renderbuffer];
}
What other steps do I need to take to get it to draw correctly to the texture/apply correctly to the main framebuffer drawing?
Never mind; it turns out that I was drawing my triangle to the texture, but the triangle simply defaulted to the same color as the background texture, so it's a completely different issue.
Basically what I'm doing is making a simple finger drawing application. I have a single class that takes the input touch points and does all the fun work of turning those touch points into bezier curves, calculating vertices from those, etc. That's all working fine.
The only interesting constraint I'm working with is that I need strokes to blend on top of each other, but not with themselves. Imagine a scribbly line that crosses itself and has 50% opacity. Where the line crosses itself, there should be no visible blending (it should all look like the same color). However, the line SHOULD blend with the rest of the drawing below it.
To accomplish this, I'm using two textures. A back texture and a scratch texture. While the line is actively being updated (during the course of the stroke), I disable blending, draw the vertices on the scratch texture, then enable blending, and draw the back texture and scratch texture into my frame buffer. When the stroke is finished, I draw the scratch texture into the back texture, and we're ready to start the next stroke.
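The arithmetic behind the scratch-texture approach can be sketched in Python (an illustrative single-pixel, grayscale simulation, not the actual rendering code):

```python
# Why the scratch texture avoids self-blending: two overlapping 50%-opacity
# stamps blended directly compound into a darker spot, while drawing them
# unblended into a scratch layer and compositing that layer once does not.

def over(src_val, src_a, dst):
    """Classic src-over blend with straight alpha on a grayscale pixel."""
    return src_val * src_a + dst * (1.0 - src_a)

white = 1.0
ink = 0.0  # black ink

# Direct blending: the second stamp blends against the first.
direct = over(ink, 0.5, over(ink, 0.5, white))
print(direct)       # 0.25 -> visibly darker where the stroke crosses itself

# Scratch approach: stamps overwrite each other (blending disabled),
# then the scratch layer is composited once at the stroke's opacity.
scratch = ink       # both stamps write the same value, no compounding
composited = over(scratch, 0.5, white)
print(composited)   # 0.5 -> uniform stroke color everywhere
```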
This all works very smoothly on a newer device, but on older devices the frame rate takes a severe hit. From some testing, it seems that the biggest performance hit is in drawing the textures to the frame buffer, because they're relatively large textures (due to the iPhone's retina resolution).
Does anybody have any hints on some strategies to work around this? I'm happy to provide more specifics or code, I'm just not sure where to start.
I am using OpenGL ES 2.0, targeting iOS 7.0, but testing on an iPhone 4S
The following is code I'm using to draw into the framebuffers:
- (void)drawRect:(CGRect)rect
{
    [self drawRect:rect ofTexture:_backTex withOpacity:1.0];
    if (_activeSpriteStroke)
    {
        [self drawStroke:_activeSpriteStroke intoFrameBuffer:0];
    }
}
Those rely on the following few methods:
- (void)drawRect:(CGRect)rect ofTexture:(GLuint)tex withOpacity:(CGFloat)opacity
{
    _texShader.color = GLKVector4Make(1.0, 1.0, 1.0, opacity);
    [_texShader prepareToDraw];
    glBindTexture(GL_TEXTURE_2D, tex);
    glBindVertexArrayOES(_texVertexVAO);
    glBindBuffer(GL_ARRAY_BUFFER, _texVertexVBO);
    [self bufferTexCoordsForRect:rect];
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
    glBindVertexArrayOES(0);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    glBindTexture(GL_TEXTURE_2D, tex);
}
- (void)drawStroke:(AHSpriteStroke *)stroke intoFrameBuffer:(GLuint)frameBuffer
{
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
    [self renderStroke:stroke ontoTexture:_scratchTex inFrameBuffer:_scratchFrameBuffer];
    if (frameBuffer == 0)
    {
        [self bindDrawable];
    }
    else
    {
        glBindFramebuffer(GL_FRAMEBUFFER, frameBuffer);
    }
    [self setScissorRect:_activeSpriteStroke.boundingRect];
    glEnable(GL_SCISSOR_TEST);
    [self drawRect:self.bounds ofTexture:_scratchTex withOpacity:stroke.lineOpacity];
    glDisable(GL_SCISSOR_TEST);
    glDisable(GL_BLEND);
}
- (void)renderStroke:(AHSpriteStroke *)stroke ontoTexture:(GLuint)tex inFrameBuffer:(GLuint)framebuffer
{
    glBindFramebuffer(GL_FRAMEBUFFER, _msFrameBuffer);
    glBindTexture(GL_TEXTURE_2D, tex);
    glClearColor(0.0, 0.0, 0.0, 0.0);
    glClear(GL_COLOR_BUFFER_BIT);
    [stroke render];
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER_APPLE, framebuffer);
    glBindFramebuffer(GL_READ_FRAMEBUFFER_APPLE, _msFrameBuffer);
    glResolveMultisampleFramebufferAPPLE();
    const GLenum discards[] = { GL_COLOR_ATTACHMENT0 };
    glDiscardFramebufferEXT(GL_READ_FRAMEBUFFER_APPLE, 1, discards);
    glBindTexture(GL_TEXTURE_2D, 0);
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}
And a couple of the helper methods just for completeness so you can follow it:
- (void)bufferTexCoordsForRect:(CGRect)rect
{
    AHTextureMap textureMaps[4] =
    {
        [self textureMapForPoint:CGPointMake(CGRectGetMinX(rect), CGRectGetMinY(rect)) inRect:self.bounds],
        [self textureMapForPoint:CGPointMake(CGRectGetMaxX(rect), CGRectGetMinY(rect)) inRect:self.bounds],
        [self textureMapForPoint:CGPointMake(CGRectGetMinX(rect), CGRectGetMaxY(rect)) inRect:self.bounds],
        [self textureMapForPoint:CGPointMake(CGRectGetMaxX(rect), CGRectGetMaxY(rect)) inRect:self.bounds]
    };
    glBufferData(GL_ARRAY_BUFFER, 4 * sizeof(AHTextureMap), textureMaps, GL_DYNAMIC_DRAW);
}
- (AHTextureMap)textureMapForPoint:(CGPoint)point inRect:(CGRect)outerRect
{
    CGPoint pt = CGPointApplyAffineTransform(point, CGAffineTransformMakeScale(self.contentScaleFactor, self.contentScaleFactor));
    return (AHTextureMap) { { pt.x, pt.y }, { point.x / outerRect.size.width, 1.0 - (point.y / outerRect.size.height) } };
}
From what I understand, you are drawing each quad in a separate draw call.
If your stroke consists of many quads (from sampling the bezier curve), your code will make many draw calls per frame.
Making many draw calls per frame in OpenGL ES 2 on older iOS devices will probably create a bottleneck on the CPU.
The reason is that each draw call in OpenGL ES 2 can carry a lot of overhead in the driver: the driver organizes your draw calls into something the GPU can digest, and it does that organization on the CPU.
If you intend to draw many quads to simulate a brush stroke, you should update a vertex buffer to contain all the quads and draw them with one draw call instead of making a draw call per quad.
You can verify that your bottleneck is the CPU with the Time Profiler instrument: check whether the CPU spends most of its time in the OpenGL draw call methods or in your own functions.
If the CPU spends most of its time in the OpenGL draw call methods, it is likely because you are making too many draw calls per frame.
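The batching idea can be sketched like this (illustrative Python; the vertex layout and helper names are made up, not from the question's code):

```python
# Batching sketch: instead of one draw call per quad, pack every quad into
# a single vertex list plus one GL_TRIANGLES index list, then issue a
# single indexed draw call for the whole stroke.

def quad_vertices(x, y, w, h):
    """Four corner positions for one axis-aligned quad."""
    return [(x, y), (x + w, y), (x + w, y + h), (x, y + h)]

def batch(quads):
    """Build one vertex list and one index list covering all quads."""
    vertices, indices = [], []
    for x, y, w, h in quads:
        base = len(vertices)
        vertices.extend(quad_vertices(x, y, w, h))
        # two triangles per quad: (0,1,2) and (0,2,3), offset by base
        indices.extend([base, base + 1, base + 2, base, base + 2, base + 3])
    return vertices, indices

verts, idx = batch([(0, 0, 10, 10), (5, 5, 10, 10), (20, 0, 4, 4)])
print(len(verts), len(idx))  # 12 vertices, 18 indices -> one draw call
```

The equivalent GL flow would upload `verts` and `idx` once per frame and replace N glDrawArrays calls with a single glDrawElements call.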
I am new to iOS and OpenGL programming, and I am currently writing a simple program using OpenGL ES 2.0 and GLKit for practice. Right now I can successfully load a PNG file and display it on the screen.
I used GLKViewController in my program and did some initialization in viewDidLoad. Here's the code in my glkView:drawInRect: method:
glClearColor(115.0/255.0, 171.0/255.0, 245.0/255.0, 1.0);
glClear(GL_COLOR_BUFFER_BIT);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
self.effect.texture2d0.name = self.textureInfo.name;
self.effect.texture2d0.enabled = YES;
[self.effect prepareToDraw];
glEnableVertexAttribArray(GLKVertexAttribPosition);
glEnableVertexAttribArray(GLKVertexAttribTexCoord0);
long offset = (long)&_quad;
glVertexAttribPointer(GLKVertexAttribPosition, 2, GL_FLOAT, GL_FALSE, sizeof(ImageVertex), (void*)(offset + offsetof(ImageVertex, geometryVertex)));
glVertexAttribPointer(GLKVertexAttribTexCoord0, 2, GL_FLOAT, GL_FALSE, sizeof(ImageVertex), (void*)(offset + offsetof(ImageVertex, textureVertex)));
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
The above code works pretty well. Now I want to set the opacity of the PNG image. This may sound simple, but I have no idea how I can change the opacity...
I suspect your fragment shader is giving a constant 1.0 value for the alpha channel to gl_FragColor. That value should vary to produce blending. Please see the answers to this question:
How can I get Alpha blending transparency working in OpenGL ES 2.0?
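To add to that: with glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) already enabled, multiplying the fragment's alpha by an opacity factor is all it takes to fade the image. A small illustrative simulation of that blend math (not GLKit API, just the arithmetic):

```python
# Effect of scaling fragment alpha under src-alpha blending:
# result = src_rgb * a + dst_rgb * (1 - a), where a = texel_alpha * opacity.

def src_over(src, dst, opacity):
    r, g, b, a = src
    a = a * opacity                 # the shader scales alpha by the opacity uniform
    return tuple(s * a + d * (1.0 - a) for s, d in zip((r, g, b), dst))

texel = (1.0, 1.0, 1.0, 1.0)        # opaque white texel
background = (0.0, 0.0, 0.0)        # black behind it

print(src_over(texel, background, 1.0))   # fully opaque: pure white
print(src_over(texel, background, 0.25))  # 25% opacity: mostly background
```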
I'm trying to draw some OpenGL graphics over the video from camera.
I've modified Apple's GLCameraRipple sample with code that draws a couple of textured triangles. This code works well in another OpenGL project of mine (one without GLKit).
Unfortunately, here it only works like this: when my app starts, I see the screen filled with the clear color, with my textured triangles on it (but no video), and a moment later the screen turns black and I don't see anything.
Could you explain what the problem is?
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect
{
    glClearColor(0.5, 0.0, 0.0, 0.3);
    glClear(GL_COLOR_BUFFER_BIT);
    glUseProgram(_program);
    glUniform1i(uniforms[UNIFORM_Y], 0);
    glUniform1i(uniforms[UNIFORM_UV], 1);
    if (_ripple)
    {
        glDrawElements(GL_TRIANGLE_STRIP, [_ripple getIndexCount], GL_UNSIGNED_SHORT, 0);
    }
    [self drawAnimations];
}
- (void)drawAnimations {
    // Use the shader program.
    glUseProgram(_texturingProgram);
    GLfloat modelviewProj[16];
    [self MakeMatrix:modelviewProj
             OriginX:100.0
             OriginY:100.0
               Width:200.0
              Height:200.0
            Rotation:0.0];
    // Update uniform values.
    glUniformMatrix4fv(texturing_uniforms[TEXTURING_UNIFORM_MODEL_VIEW_PROJECTION_MATRIX], 1, GL_FALSE, modelviewProj);
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, _animationTexture);
    glUniform1i(texturing_uniforms[TEXTURING_UNIFORM_TEXTURE], 0);
    glVertexAttribPointer(TEXTURING_ATTRIB_VERTEX, 3, GL_FLOAT, GL_FALSE, sizeof(vertexDataTextured), &plain[0].vertex);
    glEnableVertexAttribArray(TEXTURING_ATTRIB_VERTEX);
    glVertexAttribPointer(TEXTURING_ATTRIB_TEX_COORDS, 2, GL_FLOAT, GL_FALSE, sizeof(vertexDataTextured), &plain[0].texCoord);
    glEnableVertexAttribArray(TEXTURING_ATTRIB_TEX_COORDS);
    glDrawArrays(GL_TRIANGLES, 0, 6);
    if (![self validateProgram:_texturingProgram]) {
        NSLog(@"Failed to validate program: (%d)", _texturingProgram);
    }
}
You could place a transparent UIView containing your OpenGL drawing layer over another UIView containing the camera preview image layer.
I'm starting OpenGL with Apple's GLKit and I'm having some trouble getting my sprites displayed properly. The problem is that they are all surrounded by thin dark lines. The screenshot below shows two rectangles with PNG image textures containing transparency (obviously).
The black shadows surrounding them are definitely not part of the PNGs. The green PNG is done without anti-aliasing; the blue one has an anti-aliased border. The black border is also apparent if I draw only one sprite.
The relevant part (I hope) of the code is:
// Render the scene
- (void)render
{
    glClearColor(69./255., 115./255., 213./255., 1.);
    glClear(GL_COLOR_BUFFER_BIT);
    [shapes enumerateObjectsUsingBlock:^(AAAShape *shape, NSUInteger idx, BOOL *stop)
    {
        [shape renderInScene:self];
    }];
}
// Creating and storing the effect inside the shape class
- (GLKBaseEffect *)effect
{
    if (!effect)
    {
        effect = [[GLKBaseEffect alloc] init];
    }
    return effect;
}
// Rendering the shape (including effect configuration)
- (void)renderInScene:(AAAScene *)scene
{
    //TODO: Storing vertices in Buffer
    self.effect.transform.projectionMatrix = scene.projectionMatrix;
    self.effect.transform.modelviewMatrix = self.objectMatrix;
    if (texture)
    {
        self.effect.texture2d0.enabled = GL_TRUE;
        self.effect.texture2d0.envMode = GLKTextureEnvModeReplace;
        self.effect.texture2d0.target = GLKTextureTarget2D;
        self.effect.texture2d0.name = texture.name;
    }
    [self.effect prepareToDraw];
    if (texture)
    {
        glEnableVertexAttribArray(GLKVertexAttribTexCoord0);
        glVertexAttribPointer(GLKVertexAttribTexCoord0, 2, GL_FLOAT, GL_FALSE, 0, self.textureCoordinates);
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    }
    glEnableVertexAttribArray(GLKVertexAttribPosition);
    glVertexAttribPointer(GLKVertexAttribPosition, 2, GL_FLOAT, GL_FALSE, 0, self.vertices);
    glDrawArrays(GL_TRIANGLE_FAN, 0, self.vertexCount);
    glDisableVertexAttribArray(GLKVertexAttribPosition);
    if (texture)
    {
        glDisableVertexAttribArray(GLKVertexAttribTexCoord0);
        glDisable(GL_BLEND);
    }
}
Any ideas anyone? Thank you.
This worked for me (premultiplied-alpha blending):
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
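To illustrate why switching to GL_ONE / GL_ONE_MINUS_SRC_ALPHA removes the dark fringe, here is a small simulation of what bilinear filtering does at a sprite edge (illustrative arithmetic only, assuming the texture loader premultiplies alpha, as GLKTextureLoader does by default):

```python
# At a sprite edge, bilinear filtering averages an opaque texel with a
# fully transparent neighbor. With straight alpha, the transparent texel's
# (meaningless, usually black) RGB bleeds into the sample; premultiplied
# data blended with GL_ONE avoids the extra darkening.

def lerp(a, b, t):
    """Linear interpolation between two RGBA tuples (bilinear sample)."""
    return tuple(x * (1.0 - t) + y * t for x, y in zip(a, b))

opaque_green = (0.0, 1.0, 0.0, 1.0)
transparent_black = (0.0, 0.0, 0.0, 0.0)   # RGB is junk but still sampled

# Filtering halfway across the edge halves both the color and the alpha.
edge = lerp(opaque_green, transparent_black, 0.5)
print(edge)  # (0.0, 0.5, 0.0, 0.5)

# Treat the filtered texel as premultiplied and blend with
# GL_ONE / GL_ONE_MINUS_SRC_ALPHA over a white background:
r, g, b, a = edge
dst = (1.0, 1.0, 1.0)
blended = tuple(c + d * (1.0 - a) for c, d in zip((r, g, b), dst))
print(blended)  # (0.5, 1.0, 0.5): a clean 50% green-over-white edge, no halo
```

With GL_SRC_ALPHA instead, the same edge sample would be weighted by alpha a second time, which is where the dark border comes from.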
Come on stackoverflowers, 14 hours and no answers ;-). On gamedev it took David 14 minutes to give this great answer. Vote him up!