I am trying to draw an image generated with Core Graphics (at screen resolution) into OpenGL. However, the image renders more aliased than the CG output (antialiasing is disabled in CG). The text is the texture; the blue background is drawn in Core Graphics for the first image and in OpenGL for the second.
CG Output:
OpenGL Render (in simulator):
Framebuffer setup:
context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
[EAGLContext setCurrentContext:context];
glGenRenderbuffers(1, &onscrRenderBuffer);
glBindRenderbuffer(GL_RENDERBUFFER, onscrRenderBuffer);
[context renderbufferStorage:GL_RENDERBUFFER fromDrawable:self.layer];
glGenFramebuffers(1, &onscrFramebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, onscrFramebuffer);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, onscrRenderBuffer);
Texture Loading Code:
-(GLuint) loadTextureFromImage:(UIImage*)image {
CGImageRef textureImage = image.CGImage;
size_t width = CGImageGetWidth(textureImage);
size_t height = CGImageGetHeight(textureImage);
GLubyte* spriteData = (GLubyte*) malloc(width*height*4);
CGColorSpaceRef cs = CGImageGetColorSpace(textureImage);
CGContextRef c = CGBitmapContextCreate(spriteData, width, height, 8, width*4, cs, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(cs);
CGContextScaleCTM(c, 1, -1);
CGContextTranslateCTM(c, 0, -CGContextGetClipBoundingBox(c).size.height);
CGContextDrawImage(c, (CGRect){CGPointZero, {width, height}}, textureImage);
CGContextRelease(c);
GLuint glTex;
glGenTextures(1, &glTex);
glBindTexture(GL_TEXTURE_2D, glTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (GLsizei)width, (GLsizei)height, 0, GL_RGBA, GL_UNSIGNED_BYTE, spriteData);
glBindTexture(GL_TEXTURE_2D, 0);
free(spriteData);
return glTex;
}
Vertices:
struct vertex {
float position[3];
float color[4];
float texCoord[2];
};
typedef struct vertex vertex;
const vertex bgVertices[] = {
{{1, -1, 0}, {0, 167.0/255.0, 253.0/255.0, 1}, {1, 0}}, // BR (0)
{{1, 1, 0}, {0, 222.0/255.0, 1.0, 1}, {1, 1}}, // TR (1)
{{-1, 1, 0}, {0, 222.0/255.0, 1.0, 1}, {0, 1}}, // TL (2)
{{-1, -1, 0}, {0, 167.0/255.0, 253.0/255.0, 1}, {0, 0}} // BL (3)
};
const vertex textureVertices[] = {
{{1, -1, 0}, {0, 0, 0, 0}, {1, 0}}, // BR (0)
{{1, 1, 0}, {0, 0, 0, 0}, {1, 1}}, // TR (1)
{{-1, 1, 0}, {0, 0, 0, 0}, {0, 1}}, // TL (2)
{{-1, -1, 0}, {0, 0, 0, 0}, {0, 0}} // BL (3)
};
const GLubyte indicies[] = {
3, 2, 0, 1
};
Render Code:
glClear(GL_COLOR_BUFFER_BIT);
GLsizei width, height;
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &width);
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &height);
glViewport(0, 0, width, height);
glBindBuffer(GL_ARRAY_BUFFER, bgVertexBuffer);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBuffer);
glVertexAttribPointer(positionSlot, 3, GL_FLOAT, GL_FALSE, sizeof(vertex), 0);
glVertexAttribPointer(colorSlot, 4, GL_FLOAT, GL_FALSE, sizeof(vertex), (GLvoid*)(sizeof(float)*3));
glVertexAttribPointer(textureCoordSlot, 2, GL_FLOAT, GL_FALSE, sizeof(vertex), (GLvoid*)(sizeof(float)*7));
glDrawElements(GL_TRIANGLE_STRIP, sizeof(indicies)/sizeof(indicies[0]), GL_UNSIGNED_BYTE, 0);
glBindBuffer(GL_ARRAY_BUFFER, textureVertexBuffer);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBuffer);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texture);
glUniform1i(textureUniform, 0);
glVertexAttribPointer(positionSlot, 3, GL_FLOAT, GL_FALSE, sizeof(vertex), 0);
glVertexAttribPointer(colorSlot, 4, GL_FLOAT, GL_FALSE, sizeof(vertex), (GLvoid*)(sizeof(float)*3));
glVertexAttribPointer(textureCoordSlot, 2, GL_FLOAT, GL_FALSE, sizeof(vertex), (GLvoid*)(sizeof(float)*7));
glDrawElements(GL_TRIANGLE_STRIP, sizeof(indicies)/sizeof(indicies[0]), GL_UNSIGNED_BYTE, 0);
glBindTexture(GL_TEXTURE_2D, 0);
I am using the blend function glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) in case that has anything to do with it.
Any ideas where the problem is?
Your GL-rendered output looks all pixelated because it has fewer pixels. Per the Drawing and Printing Guide for iOS, the default scale factor for a CAEAGLLayer is 1.0, so when you set up your GL render buffers, you get one pixel in the buffer per point. (Remember, a point is a unit of UI layout, which on modern devices with Retina displays works out to several hardware pixels.) When you render that buffer full-screen, everything gets scaled up (by about 2.61x on an iPhone 6(s) Plus).
To render at the native screen resolution, you need to increase the contentScaleFactor of your view. (Preferably, you should do this early on, before setting up renderbuffers, so that they pick up the new scale factor from the view's layer.)
Watch out, though: you want to use the UIScreen property nativeScale, not scale. The scale property reflects UI rendering, where, on iPhone 6(s) Plus, everything gets done at 3x and then scaled down slightly to the native resolution of the display. The nativeScale property reflects the number of actual device pixels per point — if you're doing GPU rendering, you want to target that so you don't sap performance by drawing more pixels than you need to. (On current devices other than the "Plus" iPhones, scale and nativeScale are the same. But using the latter is probably a good insurance policy.)
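For example, a minimal sketch, assuming it runs in your view's initializer before the framebuffer setup shown in the question:
self.contentScaleFactor = [UIScreen mainScreen].nativeScale;
// The CAEAGLLayer inherits this scale, so renderbufferStorage:fromDrawable:
// will allocate the color renderbuffer at the display's native pixel size.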
You can avoid a lot of these kinds of issues (and others) by letting GLKView do renderbuffer setup for you. Even if you're writing cross-platform GL, that part of your code is going to have to be pretty platform- and device-specific anyway, so you might as well reduce the amount of it that you have to write and maintain.
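For instance, a hedged sketch of the GLKView route (the view and format names here are illustrative, not from the question):
GLKView *glView = [[GLKView alloc] initWithFrame:self.view.bounds context:context];
glView.contentScaleFactor = [UIScreen mainScreen].nativeScale;
glView.drawableColorFormat = GLKViewDrawableColorFormatRGBA8888;
glView.delegate = self; // issue your GL draw calls from -glkView:drawInRect:
[self.view addSubview:glView];
GLKView creates and resizes the color renderbuffer for you whenever the view's bounds or scale change.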
(Addressing previous edits of the question, for posterity's sake: this has nothing to do with multisampling or the quality of the GL texture data. Multisampling has to do with rasterization of polygon edges — points in the interior of a polygon get one fragment per pixel, but points on the edges get multiple fragments whose colors are blended in the resolve stage. And if you bind the texture to an FBO and glReadPixels from it, you'll find the image is pretty much the same one you put in.)
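If you want to verify that yourself, here is a quick readback sketch (fbo and pixels are local names introduced here; texture, width, height, spriteData, and onscrFramebuffer come from the question):
GLuint fbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, texture, 0);
GLubyte *pixels = (GLubyte *)malloc(width * height * 4);
glReadPixels(0, 0, (GLsizei)width, (GLsizei)height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
// ... compare pixels against the spriteData that was uploaded ...
free(pixels);
glBindFramebuffer(GL_FRAMEBUFFER, onscrFramebuffer);
glDeleteFramebuffers(1, &fbo);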
Related
I am trying to use:
layout (binding = 0, rgba8ui) readonly uniform uimage2D input;
in a compute shader. In order to to bind a texture to this I am using:
glBindImageTexture(0, texture_name, 0, GL_FALSE, 0, GL_READ_ONLY, GL_RGBA8);
and it seems that, in order for this bind to work, the texture has to be immutable, so I've switched from:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
to:
glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA8UI, width, height);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
But this generates "Invalid operation" (specifically the glTexSubImage2D() call generates it). Looking in the documentation I discovered that this call may cause 1282 for the following reasons:
GL_INVALID_OPERATION is generated if the texture array has not been defined by a previous glTexImage2D or glCopyTexImage2D operation whose internalformat matches the format of glTexSubImage2D.
GL_INVALID_OPERATION is generated if type is GL_UNSIGNED_SHORT_5_6_5 and format is not GL_RGB.
GL_INVALID_OPERATION is generated if type is GL_UNSIGNED_SHORT_4_4_4_4 or GL_UNSIGNED_SHORT_5_5_5_1 and format is not GL_RGBA
but none of these applies to my case.
The first of them might seem to be the problem (considering I am using glTexStorage2D(), not glTexImage2D()), but it is not, because in the case of a float texture the same mechanism works:
glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA32F, width, height);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height, GL_RGBA, GL_FLOAT, pixels);
instead of:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, width, height, 0, GL_RGBA, GL_FLOAT, pixels);
This is probably irrelevant, but both methods work fine on PC.
Any suggestions on why this is happening?
The internalFormat you use in glTexImage2D and glBindImageTexture should be the same and be compatible with your sampler. For a uimage2D, try using GL_RGBA8UI everywhere.
Also, for transfers to GL_RGBA8UI (and other integer formats) you need to use GL_RGBA_INTEGER as format.
glBindImageTexture(0, texture_name, 0, GL_FALSE, 0, GL_READ_ONLY, GL_RGBA8UI);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8UI, width, height, 0, GL_RGBA_INTEGER, GL_UNSIGNED_BYTE, pixels);
Using the format GL_RGBA_INTEGER should also make the glTexSubImage2D variant work.
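In other words, a sketch of the immutable-storage path with matching integer formats (width, height, pixels, and texture_name are the names used in the question):
glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA8UI, width, height);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height, GL_RGBA_INTEGER, GL_UNSIGNED_BYTE, pixels);
glBindImageTexture(0, texture_name, 0, GL_FALSE, 0, GL_READ_ONLY, GL_RGBA8UI);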
I am trying to develop a POC that visualizes a 3D object on top of a camera feed. The kind of 3D object I have renders easily using this project, and I am referring to Apple's Camera Ripple code for showing the camera feed. Both are separate objects in the same context, and each uses its own shader program. I am confused about how to switch from one program to the other.
My glkView:drawInRect: method looks like this
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect
{
glClear(GL_COLOR_BUFFER_BIT);
glUseProgram(_program);
if (_ripple)
{
glDrawElements(GL_TRIANGLE_STRIP, [_ripple getIndexCount], GL_UNSIGNED_SHORT, 0);
}
glUseProgram(_program1);
glClearColor(1.0, 1.0, 1.0, 1.0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// Set View Matrices
[self updateViewMatrices];
glUniformMatrix4fv(_uniforms.uProjectionMatrix, 1, 0, _projectionMatrix1.m);
glUniformMatrix4fv(_uniforms.uModelViewMatrix, 1, 0, _modelViewMatrix1.m);
glUniformMatrix3fv(_uniforms.uNormalMatrix, 1, 0, _normalMatrix1.m);
// Attach Texture
glUniform1i(_uniforms.uTexture, 0);
// Set View Mode
glUniform1i(_uniforms.uMode, self.viewMode.selectedSegmentIndex);
// Enable Attributes
glEnableVertexAttribArray(_attributes.aVertex);
glEnableVertexAttribArray(_attributes.aNormal);
glEnableVertexAttribArray(_attributes.aTexture);
// Load OBJ Data
glVertexAttribPointer(_attributes.aVertex, 3, GL_FLOAT, GL_FALSE, 0, cubeOBJVerts);
glVertexAttribPointer(_attributes.aNormal, 3, GL_FLOAT, GL_FALSE, 0, cubeOBJNormals);
glVertexAttribPointer(_attributes.aTexture, 2, GL_FLOAT, GL_FALSE, 0, cubeOBJTexCoords);
// Load MTL Data
for(int i=0; i<cubeMTLNumMaterials; i++)
{
glUniform3f(_uniforms.uAmbient, cubeMTLAmbient[i][0], cubeMTLAmbient[i][1], cubeMTLAmbient[i][2]);
glUniform3f(_uniforms.uDiffuse, cubeMTLDiffuse[i][0], cubeMTLDiffuse[i][1], cubeMTLDiffuse[i][2]);
glUniform3f(_uniforms.uSpecular, cubeMTLSpecular[i][0], cubeMTLSpecular[i][1], cubeMTLSpecular[i][2]);
glUniform1f(_uniforms.uExponent, cubeMTLExponent[i]);
// Draw scene by material group
glDrawArrays(GL_TRIANGLES, cubeMTLFirst[i], cubeMTLCount[i]);
}
// Disable Attributes
glDisableVertexAttribArray(_attributes.aVertex);
glDisableVertexAttribArray(_attributes.aNormal);
glDisableVertexAttribArray(_attributes.aTexture);
}
This causes a crash with the error gpus_ReturnGuiltyForHardwareRestart.
I found that the solution to my problem is resetting everything between the use of the two programs. Now my glkView:drawInRect: looks like this:
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect
{
glClear(GL_COLOR_BUFFER_BIT);
glUseProgram(_program);
if (_ripple)
{
glDrawElements(GL_TRIANGLE_STRIP, [_ripple getIndexCount], GL_UNSIGNED_SHORT, 0);
[self resetProgrameOne];
}
glUseProgram(_program1);
glClear(GL_DEPTH_BUFFER_BIT);
// Set View Matrices
[self updateViewMatrices];
glUniformMatrix4fv(_uniforms.uProjectionMatrix, 1, 0, _projectionMatrix1.m);
glUniformMatrix4fv(_uniforms.uModelViewMatrix, 1, 0, _modelViewMatrix1.m);
glUniformMatrix3fv(_uniforms.uNormalMatrix, 1, 0, _normalMatrix1.m);
// Attach Texture
glUniform1i(_uniforms.uTexture, 0);
// Set View Mode
glUniform1i(_uniforms.uMode, 1);
// Enable Attributes
glEnableVertexAttribArray(_attributes.aVertex);
glEnableVertexAttribArray(_attributes.aNormal);
glEnableVertexAttribArray(_attributes.aTexture);
// Load OBJ Data
glVertexAttribPointer(_attributes.aVertex, 3, GL_FLOAT, GL_FALSE, 0, table1OBJVerts);
glVertexAttribPointer(_attributes.aNormal, 3, GL_FLOAT, GL_FALSE, 0, table1OBJNormals);
glVertexAttribPointer(_attributes.aTexture, 2, GL_FLOAT, GL_FALSE, 0, table1OBJTexCoords);
// Load MTL Data
for(int i=0; i<table1MTLNumMaterials; i++)
{
glUniform3f(_uniforms.uAmbient, table1MTLAmbient[i][0], table1MTLAmbient[i][1], table1MTLAmbient[i][2]);
glUniform3f(_uniforms.uDiffuse, table1MTLDiffuse[i][0], table1MTLDiffuse[i][1], table1MTLDiffuse[i][2]);
glUniform3f(_uniforms.uSpecular, table1MTLSpecular[i][0], table1MTLSpecular[i][1], table1MTLSpecular[i][2]);
glUniform1f(_uniforms.uExponent, table1MTLExponent[i]);
// Draw scene by material group
glDrawArrays(GL_TRIANGLES, table1MTLFirst[i], table1MTLCount[i]);
}
// Disable Attributes
glDisableVertexAttribArray(_attributes.aVertex);
glDisableVertexAttribArray(_attributes.aNormal);
glDisableVertexAttribArray(_attributes.aTexture);
}
and the resetProgrameOne method resets everything the first program set up: it deletes its buffers and disables its vertex attribute arrays with glDisableVertexAttribArray.
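A hedged sketch of what such a reset might look like; the ripple program's buffer and attribute names below are hypothetical placeholders, not from the actual project:
- (void)resetProgrameOne {
    // Disable the ripple program's attribute arrays and drop its buffers so the
    // second program starts from a clean slate.
    glDisableVertexAttribArray(_rippleVertexAttrib);
    glDisableVertexAttribArray(_rippleTexCoordAttrib);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
    glBindTexture(GL_TEXTURE_2D, 0);
    glDeleteBuffers(1, &_rippleVertexBuffer);
    glDeleteBuffers(1, &_rippleIndexBuffer);
}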
I've been bashing my head against the wall this afternoon persuading my OpenGL ES 2.0 code to perform correctly when I move from using plain VBOs to VAOs with VBOs. Basically I'm working my way through Apple's "expert" advice on OpenGL ES, and moving to Vertex Array Objects was top of the list ...
I've reviewed the similar question and response here but that didn't seem to help me, other than re-assure me that other people run into similar problems :(
My scenario is that I have approximately 500 rectangular textures moving around the screen. The code all works fine without VAOs, but when I define USE_VAO (my constant) it crashes on the first glDrawElements call. I'm obviously not understanding VAOs properly ... but I can't see the error of my ways!
The setupBeforeRender method is called as the last part of my setup before entering the render loop.
-(void) setupBeforeRender {
glClearColor(0.6, 0.6, 0.6, 1);
glViewport(0, 0, self.frame.size.width, self.frame.size.height);
glEnable(GL_DEPTH_TEST);
glUniform1i(_textureUniform, 0);
glActiveTexture(GL_TEXTURE0);
glEnableVertexAttribArray(_positionSlot);
glEnableVertexAttribArray(_colorSlot);
glEnableVertexAttribArray(_texCoordSlot);
glGenBuffers(1, &_indexBuffer);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, _indexBuffer);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(Indices), Indices, GL_STATIC_DRAW);
}
And here's the render method
- (void)render:(CADisplayLink*)displayLink {
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// Model view matrix and projection code removed for clarity
GLsizei stride = sizeof(Vertex);
const GLvoid* colourOffset = (GLvoid *) sizeof(float[3]);
const GLvoid* textureOffset = (GLvoid *) sizeof(float[7]);
for (my objectToDraw in objectToDrawArray)
{
if (objectToDraw.vertexBufferObject == 0)
{
#ifdef USE_VAO
glGenVertexArraysOES(1,&_vao);
glBindVertexArrayOES(_vao);
glVertexAttribPointer(_positionSlot, 3, GL_FLOAT, GL_FALSE, stride, 0);
glVertexAttribPointer(_colorSlot, 4, GL_FLOAT, GL_FALSE, stride, colourOffset);
glVertexAttribPointer(_texCoordSlot, 2, GL_FLOAT, GL_FALSE,stride, textureOffset);
objectToDraw.vertexBufferObject = [objectToDraw createAndBindVBO];
objectToDraw.vertexArrayObject = _vao;
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArrayOES(0);
#else
objectToDraw.vertexBufferObject = [objectToDraw createAndBindVBO];
#endif
}
// Texture binding removed for clarity
#ifdef USE_VAO
// This code crashes with EXC_BAD_ACCESS on the glDrawElements
glBindVertexArrayOES(objectToDraw.vertexArrayObject);
glDrawElements(GL_TRIANGLES, sizeof(Indices) / sizeof(Indices[0]), GL_UNSIGNED_SHORT,0);
glBindVertexArrayOES(0);
#else
// This path works fine. So turning VAO off works :(
glBindBuffer(GL_ARRAY_BUFFER, storyTile.vertexBufferObject);
glVertexAttribPointer(_positionSlot, 3, GL_FLOAT, GL_FALSE, stride, 0);
glVertexAttribPointer(_colorSlot, 4, GL_FLOAT, GL_FALSE, stride, colourOffset);
glVertexAttribPointer(_texCoordSlot, 2, GL_FLOAT, GL_FALSE, stride, textureOffset);
glDrawElements(GL_TRIANGLES, sizeof(Indices) / sizeof(Indices[0]), GL_UNSIGNED_SHORT,0);
#endif
} // End for each object
[_context presentRenderbuffer:GL_RENDERBUFFER];
}
Finally, my create and bind VBO method looks like this;
-(GLuint) createAndBindVBO {
const float* rgba = CGColorGetComponents([self.colour CGColor]);
Vertex Vertices[] = {
{{0, 1, 0}, {1, 0, 1, 1}, {0,1}},
{{0, 0, 0}, {1, 0, 1, 1}, {0,0}},
{{1, 1, 0}, {1, 0, 1, 1}, {1,1}},
{{1, 0, 0}, {1, 0, 1, 1}, {1,0}}
};
// Code removed for clarity - sets up geometry and colours
GLuint vertexBuffer;
glGenBuffers(1, &vertexBuffer);
glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(Vertices), Vertices, GL_STATIC_DRAW);
return vertexBuffer;
}
I've tried various permutations of this and have sprinkled the code with glGetError() to see if that helps point to where the problem arises. Alas I get no errors, other than the BAD_ACCESS crash on the drawElements call.
EDIT: As suggested, this unfortunately also doesn't work
objectToDraw.vertexBufferObject = [objectToDraw createVBO];
glGenVertexArraysOES(1,&_vao);
glBindVertexArrayOES(_vao);
glBindBuffer(GL_ARRAY_BUFFER, objectToDraw.vertexBufferObject);
glVertexAttribPointer(_positionSlot, 3, GL_FLOAT, GL_FALSE, stride, 0);
glVertexAttribPointer(_colorSlot, 4, GL_FLOAT, GL_FALSE, stride, colourOffset);
glVertexAttribPointer(_texCoordSlot, 2, GL_FLOAT, GL_FALSE,stride, textureOffset);
objectToDraw.vertexArrayObject = _vao;
glBindVertexArrayOES(0);
I must be doing something dumb with the vertex array object ... but can someone figure out what the problem is?
The vertex array enabled flags are part of the VAO state, so you need to enable the vertex attribute arrays using glEnableVertexAttribArray while the VAO is bound.
From: http://www.khronos.org/registry/gles/extensions/OES/OES_vertex_array_object.txt
The resulting vertex array object is a new state vector, comprising all the state values (listed in Table 6.2, except ARRAY_BUFFER_BINDING):
VERTEX_ATTRIB_ARRAY_ENABLED
VERTEX_ATTRIB_ARRAY_SIZE,
VERTEX_ATTRIB_ARRAY_STRIDE,
VERTEX_ATTRIB_ARRAY_TYPE,
VERTEX_ATTRIB_ARRAY_NORMALIZED,
VERTEX_ATTRIB_ARRAY_POINTER,
ELEMENT_ARRAY_BUFFER_BINDING,
VERTEX_ATTRIB_ARRAY_BUFFER_BINDING.
You should call glVertexAttribPointer after the glBindBuffer call, so that the attribute pointers reference that VBO.
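Putting the two suggestions together, here is a sketch of the setup path (using the slots, stride, and offsets from the question; createVBO is the method from the edit and is assumed to leave the new VBO valid):
objectToDraw.vertexBufferObject = [objectToDraw createVBO];
glGenVertexArraysOES(1, &_vao);
glBindVertexArrayOES(_vao);
glBindBuffer(GL_ARRAY_BUFFER, objectToDraw.vertexBufferObject);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, _indexBuffer);   // the element array binding is captured by the VAO
glEnableVertexAttribArray(_positionSlot);              // the enabled flags are VAO state, so set them here
glEnableVertexAttribArray(_colorSlot);
glEnableVertexAttribArray(_texCoordSlot);
glVertexAttribPointer(_positionSlot, 3, GL_FLOAT, GL_FALSE, stride, 0);
glVertexAttribPointer(_colorSlot, 4, GL_FLOAT, GL_FALSE, stride, colourOffset);
glVertexAttribPointer(_texCoordSlot, 2, GL_FLOAT, GL_FALSE, stride, textureOffset);
objectToDraw.vertexArrayObject = _vao;
glBindVertexArrayOES(0);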
I had a similar problem and didn't know what was causing it. Eventually it turned out that I had to pass a const int vertex count to glDrawArrays; sizeof() was not computing it correctly.
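For illustration (the constant here is hypothetical): sizeof applied to a pointer gives the size of the pointer, not the number of vertices it points to, so pass the count explicitly:
const int kVertexCount = 4;                        // the number of vertices you actually uploaded
glDrawArrays(GL_TRIANGLE_STRIP, 0, kVertexCount);  // rather than sizeof(vertices)/sizeof(vertices[0]) on a pointer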
I try to use the "stencil buffer" to display a part of my rendering from a texture, but my render is displayed without any mask effect.
It's for a 2D iOS project, with OpenGL ES 2.0
This is the concerned part of my code :
glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT );
glEnable( GL_STENCIL_TEST );
// mask rendering
glColorMask( GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE );
glStencilFunc( GL_ALWAYS, 1, 1 );
glStencilOp( GL_KEEP, GL_KEEP, GL_REPLACE );
glBindTexture(GL_TEXTURE_2D, _maskTexture);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
// scene rendering
glColorMask( GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE );
glStencilFunc( GL_EQUAL, 1, 1 );
glStencilOp( GL_KEEP, GL_KEEP, GL_KEEP );
glBindTexture(GL_TEXTURE_2D, _viewTexture);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
Any help would be greatly appreciated !
(As usual for a French developer, sorry for my English!)
precision : "_maskTexture" is a black & white picture.
Solution:
I finally solved my problem thanks to the pointers from rotoglub and tim. Thank you both.
1/ The stencil buffer had not been created correctly.
It should be initialized like this:
glBindRenderbufferOES(GL_RENDERBUFFER_OES, depthStencilRenderbuffer);
glRenderbufferStorageOES(GL_RENDERBUFFER_OES,
GL_DEPTH24_STENCIL8_OES,
backingWidth,
backingHeight);
This was the main reason my rendering was not affected by the mask.
2/ To be able to use a texture as a mask, I replaced the black color with an alpha channel and enabled blending in my rendering.
My final rendering code looks like this:
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, vertices);
glTexCoordPointer(2, GL_FLOAT, 0, texcoords);
glClearStencil(0);
glClearColor (0.0,0.0,0.0,1);
glClear (GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
// mask rendering
glColorMask( GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE );
glEnable(GL_STENCIL_TEST);
glEnable(GL_ALPHA_TEST);
glBlendFunc( GL_ONE, GL_ONE );
glAlphaFunc( GL_NOTEQUAL, 0.0 );
glStencilFunc(GL_ALWAYS, 1, 1);
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
glBindTexture(GL_TEXTURE_2D, _mask);
glDrawArrays(GL_TRIANGLE_STRIP, 4, 4);
// scene rendering
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glStencilFunc(GL_EQUAL, 1, 1);
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
glDisable(GL_STENCIL_TEST);
glDisable(GL_ALPHA_TEST);
glBindTexture(GL_TEXTURE_2D, _texture);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
Simply, the problem is that you're just drawing a texture to the scene without doing any testing of what's in the texture. The stencil buffer doesn't care about the colors in the texture; it just checks:
Did you draw a fragment? (Update stencil buffer) : (Don't update stencil buffer);
You're drawing a fragment for every pixel of your texture, so any mask effect in the texture is useless.
If you want to mask with a texture, you need to discard any fragments that you don't want updated in the stencil buffer.
This is done either with the discard keyword in the fragment shader, or with the alpha test (glEnable(GL_ALPHA_TEST) / glAlphaFunc) in OpenGL ES 1.1.
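For example, a minimal fragment shader sketch that discards transparent mask texels so they never reach the stencil buffer (the uniform and varying names are illustrative):
static const char *maskFragmentShaderSrc =
    "precision mediump float;\n"
    "varying vec2 v_texCoord;\n"
    "uniform sampler2D u_mask;\n"
    "void main() {\n"
    "    vec4 c = texture2D(u_mask, v_texCoord);\n"
    "    if (c.a < 0.5) discard; // masked-out texel: no fragment, no stencil update\n"
    "    gl_FragColor = c;\n"
    "}\n";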
Why is glTexSubImage2D() suddenly causing GL_INVALID_OPERATION?
I'm trying to upgrade my hopelessly outdated augmented reality app from iOS 4.x to iOS 5.x, but I'm having difficulties. I'm running iOS 5.0; last week I ran iOS 4.3. My device is an iPhone 4.
Here is a snippet from my captureOutput:didOutputSampleBuffer:fromConnection: code
uint8_t *baseAddress = /* pointer to camera buffer */
GLuint texture = /* the texture name */
glBindTexture(GL_TEXTURE_2D, texture);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 480, 360, GL_BGRA, GL_UNSIGNED_BYTE, baseAddress);
/* now glGetError(); -> returns 0x0502 GL_INVALID_OPERATION on iOS5.0, works fine on iOS4.x */
Here is a snippet from my setup code
GLuint texture = /* the texture name */
glGenTextures(1, &texture);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texture);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 512, 512, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
For simplicity I have inserted hardcoded values here. In my actual code I obtain these values with CVPixelBufferGetWidth/Height/BaseAddress. The EAGLContext is initialized with kEAGLRenderingAPIOpenGLES2.
Ah.. I fixed it immediately after posting this question. I had to change GL_RGBA to GL_BGRA in the format argument of glTexImage2D.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 512, 512, 0, GL_BGRA, GL_UNSIGNED_BYTE, NULL);
Hope it helps someone.
BTW, if you want to write AR apps, consider using CVOpenGLESTextureCache instead of glTexSubImage2D. It's supposed to be faster.
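A rough sketch of that approach (textureCache and cvTexture are names introduced here, eaglContext is your EAGLContext, and pixelBuffer is the CVPixelBufferRef from the sample buffer; check the CoreVideo headers for the exact signatures in your SDK):
CVOpenGLESTextureCacheRef textureCache;
CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, eaglContext, NULL, &textureCache); // older SDKs may need a (__bridge void *) cast on the context

// Per frame, wrap the camera's pixel buffer directly as a GL texture:
CVOpenGLESTextureRef cvTexture;
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache, pixelBuffer, NULL,
                                             GL_TEXTURE_2D, GL_RGBA,
                                             (GLsizei)CVPixelBufferGetWidth(pixelBuffer),
                                             (GLsizei)CVPixelBufferGetHeight(pixelBuffer),
                                             GL_BGRA, GL_UNSIGNED_BYTE, 0, &cvTexture);
glBindTexture(CVOpenGLESTextureGetTarget(cvTexture), CVOpenGLESTextureGetName(cvTexture));

// After drawing, release the texture and flush the cache:
CFRelease(cvTexture);
CVOpenGLESTextureCacheFlush(textureCache, 0);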