Use stencil buffer with iOS

I'm trying to use the stencil buffer to display part of my rendering through a mask taken from a texture, but my render is displayed without any mask effect.
It's a 2D iOS project, using OpenGL ES 2.0.
Here is the relevant part of my code:
glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT );
glEnable( GL_STENCIL_TEST );
// mask rendering
glColorMask( GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE );
glStencilFunc( GL_ALWAYS, 1, 1 );
glStencilOp( GL_KEEP, GL_KEEP, GL_REPLACE );
glBindTexture(GL_TEXTURE_2D, _maskTexture);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
// scene rendering
glColorMask( GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE );
glStencilFunc( GL_EQUAL, 1, 1 );
glStencilOp( GL_KEEP, GL_KEEP, GL_KEEP );
glBindTexture(GL_TEXTURE_2D, _viewTexture);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
Any help would be greatly appreciated!
(As usual for a French developer, sorry for my English!)
Clarification: "_maskTexture" is a black & white image.
Solution:
I finally solved my problem with the hints from rotoglub and tim.
Thank you both.
1/ The stencil buffer had not been created correctly.
It should be initialized like this:
glBindRenderbufferOES(GL_RENDERBUFFER_OES, depthStencilRenderbuffer);
glRenderbufferStorageOES(GL_RENDERBUFFER_OES,
GL_DEPTH24_STENCIL8_OES,
backingWidth,
backingHeight);
This was the main reason my rendering was not affected by the mask.
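For reference, this combined renderbuffer also has to be attached to both the depth and the stencil attachment points of the framebuffer; a minimal sketch (the framebuffer variable name is an assumption):
// Sketch: attach the combined depth/stencil renderbuffer to both attachment points.
glBindFramebufferOES(GL_FRAMEBUFFER_OES, framebuffer);
glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_DEPTH_ATTACHMENT_OES,
                             GL_RENDERBUFFER_OES, depthStencilRenderbuffer);
glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_STENCIL_ATTACHMENT_OES,
                             GL_RENDERBUFFER_OES, depthStencilRenderbuffer);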
2/ To be able to use a texture as a mask, I replaced the black color with an alpha channel and enabled blending in my rendering.
My final rendering code looks like this:
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, vertices);
glTexCoordPointer(2, GL_FLOAT, 0, texcoords);
glClearStencil(0);
glClearColor (0.0,0.0,0.0,1);
glClear (GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
// mask rendering
glColorMask( GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE );
glEnable(GL_STENCIL_TEST);
glEnable(GL_ALPHA_TEST);
glBlendFunc( GL_ONE, GL_ONE );
glAlphaFunc( GL_NOTEQUAL, 0.0 );
glStencilFunc(GL_ALWAYS, 1, 1);
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
glBindTexture(GL_TEXTURE_2D, _mask);
glDrawArrays(GL_TRIANGLE_STRIP, 4, 4);
// scene rendering
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glStencilFunc(GL_EQUAL, 1, 1);
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
glDisable(GL_STENCIL_TEST);
glDisable(GL_ALPHA_TEST);
glBindTexture(GL_TEXTURE_2D, _texture);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

Simply put, the problem is that you're just drawing a texture to the scene without doing any testing of what's in the texture. The stencil buffer doesn't care about the colors in the texture; it just checks:
Did you draw a fragment ? (Update stencil buffer) : (Don't update stencil buffer);
You're drawing a fragment for every pixel of your texture, so any mask effect in the texture is useless.
If you want to mask with a texture, you need to discard any fragments that you don't want updated in the stencil buffer.
This is done either with the discard keyword in the fragment shader (OpenGL ES 2.0), or with the alpha test (glEnable(GL_ALPHA_TEST) / glAlphaFunc) in OpenGL ES 1.1.
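For the OpenGL ES 2.0 route, the mask-pass fragment shader could look like the sketch below (shown as a C string; the sampler and varying names are assumptions, not the poster's code):
// Hypothetical mask fragment shader: transparent texels are discarded,
// so they never write to the stencil buffer.
static const char *maskFragmentShaderSrc =
    "precision mediump float;\n"
    "uniform sampler2D uMaskTexture;\n"
    "varying vec2 vTexCoord;\n"
    "void main(void)\n"
    "{\n"
    "    vec4 texel = texture2D(uMaskTexture, vTexCoord);\n"
    "    if (texel.a < 0.5)\n"
    "        discard; // masked texel: fragment never reaches the stencil buffer\n"
    "    gl_FragColor = texel;\n"
    "}\n";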

Related

How to view a renderbuffer of GLuints on the screen?

To get a sort of index of the elements drawn on the screen, I've created a framebuffer that draws the objects with solid colors into a GL_R32UI color buffer.
The framebuffer has two renderbuffers attached: one for color and one for depth. Here is a sketch of how it was created, using Python:
my_fbo = glGenFramebuffers(1)
glBindFramebuffer(GL_FRAMEBUFFER, my_fbo)
rbo = glGenRenderbuffers(2) # GL_DEPTH_COMPONENT16 and GL_COLOR_ATTACHMENT0
glBindRenderbuffer(GL_RENDERBUFFER, rbo[0])
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, width, height)
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, rbo[0])
glBindRenderbuffer(GL_RENDERBUFFER, rbo[1])
glRenderbufferStorage(GL_RENDERBUFFER, GL_R32UI, width, height)
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, rbo[1])
glBindRenderbuffer(GL_RENDERBUFFER, 0)
glBindFramebuffer(GL_FRAMEBUFFER, 0)
I read the indices back with glReadPixels like this:
glBindFramebuffer(GL_FRAMEBUFFER, my_fbo)
glReadPixels(x, y, threshold, threshold, GL_RED_INTEGER, GL_UNSIGNED_INT, r_data)
glBindFramebuffer(GL_FRAMEBUFFER, 0)
The code works perfectly; I have no problem with that.
But for debugging, I'd like to see the indices on the screen.
With the data obtained below, how could I display the drawn indices (unsigned int) on the screen?
active_fbo = glGetIntegerv(GL_FRAMEBUFFER_BINDING)
my_indices_fbo = my_fbo
my_rbo_depth = rbo[0]
my_rbo_color = rbo[1]
## how mix my_rbo_color and cur_fbo??? ##
glBindFramebuffer(gl.GL_FRAMEBUFFER, active_fbo)
glBlitFramebuffer transfers a rectangle of pixel values from one region of a read framebuffer to another region of a draw framebuffer:
glBindFramebuffer( GL_READ_FRAMEBUFFER, my_fbo );
glBindFramebuffer( GL_DRAW_FRAMEBUFFER, active_fbo );
glBlitFramebuffer( 0, 0, width, height, 0, 0, width, height, GL_COLOR_BUFFER_BIT, GL_NEAREST );
Note that you have to be careful: a GL_INVALID_OPERATION error will occur if the read buffer contains unsigned integer values and any draw buffer does not. Since the internal format of the framebuffer's color attachment is GL_R32UI, and the internal format of the default drawing buffer is usually something like GL_RGBA8, this may not work, or it may not do what you expect.
But you can create a framebuffer with a texture attached to its color plane and use that texture as the input to a post pass, where you draw a quad over the whole canvas.
First you have to create the texture with the same size as the framebuffer:
ColorMap0 = glGenTextures(1);
glBindTexture(GL_TEXTURE_2D, ColorMap0);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R32UI, width, height, 0, GL_RED_INTEGER, GL_UNSIGNED_INT, 0);
You have to attach the texture to the frame buffer:
my_fbo = glGenFramebuffers(1)
glBindFramebuffer(GL_FRAMEBUFFER, my_fbo)
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, ColorMap0, 0);
When you have drawn the scene, you have to unbind the framebuffer:
glBindFramebuffer(GL_FRAMEBUFFER, 0)
Now you can use the texture as an input for a final pass. Simply bind the texture, enable 2D textures and draw a quad over the whole canvas. The quad should range from (-1, -1) to (1, 1), with texture coordinates in the range (0, 0) to (1, 1). Of course you can use a shader for that, with a texture sampler uniform in the fragment shader. You can read the texel from the texture and write it to the fragment any way you want.
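For example, the fragment shader of that post pass could map each index to a gray value. A sketch (GLSL 3.30, shown as a C string; the sampler and variable names are assumptions):
// Hypothetical post-pass fragment shader: an integer texture needs an integer
// sampler (usampler2D); the index is wrapped into a visible gray level.
static const char *indexVisualizeFragSrc =
    "#version 330 core\n"
    "uniform usampler2D uIndexTex;\n"
    "in vec2 vTexCoord;\n"
    "out vec4 fragColor;\n"
    "void main(void)\n"
    "{\n"
    "    uint index = texture(uIndexTex, vTexCoord).r;\n"
    "    float gray = float(index % 256u) / 255.0; // wrap index into 0..255\n"
    "    fragColor = vec4(gray, gray, gray, 1.0);\n"
    "}\n";
// Note: the texture filters must be GL_NEAREST, since filtering integer textures is undefined.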
Extension to the answer
If performance is not important, then you can convert the buffer on the CPU and draw it on the canvas after reading the framebuffer with glReadPixels. For that you can leave your code as it is and read the framebuffer with glReadPixels, but you have to convert the buffer to a format appropriate to the drawing buffer. I suggest using the internal format GL_RGBA8 or GL_RGB8. You then have to create a new texture with the converted buffer data.
debugTexturePlane = ...;
debugTexture = glGenTextures(1);
glBindTexture(GL_TEXTURE_2D, debugTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, debugTexturePlane);
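For illustration, the conversion itself could simply map each index to a gray value; a C-style sketch (the buffer names and the index-to-color mapping are assumptions):
// Hypothetical CPU-side conversion: turn the indices read back with glReadPixels
// into grayscale RGB triples, matching the GL_RGB upload above.
unsigned int *indices = (unsigned int *)r_data;
unsigned char *debugTexturePlane = (unsigned char *)malloc(width * height * 3);
for (int i = 0; i < width * height; ++i) {
    unsigned char gray = (unsigned char)(indices[i] % 256u); // wrap into 0..255
    debugTexturePlane[3 * i + 0] = gray;
    debugTexturePlane[3 * i + 1] = gray;
    debugTexturePlane[3 * i + 2] = gray;
}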
From now on you have 2 possibilities.
Either you create a new frame buffer and attach the texture to its color plane
debugFbo = glGenFramebuffers(1)
glBindFramebuffer(GL_FRAMEBUFFER, debugFbo)
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, debugTexture, 0);
and use glBlitFramebuffer as described above to copy from the debug framebuffer to the drawing buffer.
This should not be a problem, because the internal formats of the two buffers now match.
Or you draw a textured quad over the whole viewport. The code may look like this (old school):
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glMatrixMode(GL_TEXTURE);
glLoadIdentity();
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, debugTexture);
glBegin(GL_QUADS);
glTexCoord2f(0.0, 0.0); glVertex2f(-1.0, -1.0);
glTexCoord2f(0.0, 1.0); glVertex2f(-1.0, 1.0);
glTexCoord2f(1.0, 1.0); glVertex2f( 1.0, 1.0);
glTexCoord2f(1.0, 0.0); glVertex2f( 1.0, -1.0);
glEnd();

OpenGL ES texture degrades in quality

I am trying to draw a Core Graphics image (generated at screen resolution) into OpenGL. However, the image renders more aliased than the CG output (antialiasing is disabled in CG). The text is the texture (the blue background is drawn in Core Graphics for the first image and in OpenGL for the second).
CG Output:
OpenGL Render (in simulator):
Framebuffer setup:
context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
[EAGLContext setCurrentContext:context];
glGenRenderbuffers(1, &onscrRenderBuffer);
glBindRenderbuffer(GL_RENDERBUFFER, onscrRenderBuffer);
[context renderbufferStorage:GL_RENDERBUFFER fromDrawable:self.layer];
glGenFramebuffers(1, &onscrFramebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, onscrFramebuffer);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, onscrRenderBuffer);
Texture Loading Code:
-(GLuint) loadTextureFromImage:(UIImage*)image {
CGImageRef textureImage = image.CGImage;
size_t width = CGImageGetWidth(textureImage);
size_t height = CGImageGetHeight(textureImage);
GLubyte* spriteData = (GLubyte*) malloc(width*height*4);
CGColorSpaceRef cs = CGImageGetColorSpace(textureImage);
CGContextRef c = CGBitmapContextCreate(spriteData, width, height, 8, width*4, cs, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(cs);
CGContextScaleCTM(c, 1, -1);
CGContextTranslateCTM(c, 0, -CGContextGetClipBoundingBox(c).size.height);
CGContextDrawImage(c, (CGRect){CGPointZero, {width, height}}, textureImage);
CGContextRelease(c);
GLuint glTex;
glGenTextures(1, &glTex);
glBindTexture(GL_TEXTURE_2D, glTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (GLsizei)width, (GLsizei)height, 0, GL_RGBA, GL_UNSIGNED_BYTE, spriteData);
glBindTexture(GL_TEXTURE_2D, 0);
free(spriteData);
return glTex;
}
Vertices:
struct vertex {
float position[3];
float color[4];
float texCoord[2];
};
typedef struct vertex vertex;
const vertex bgVertices[] = {
{{1, -1, 0}, {0, 167.0/255.0, 253.0/255.0, 1}, {1, 0}}, // BR (0)
{{1, 1, 0}, {0, 222.0/255.0, 1.0, 1}, {1, 1}}, // TR (1)
{{-1, 1, 0}, {0, 222.0/255.0, 1.0, 1}, {0, 1}}, // TL (2)
{{-1, -1, 0}, {0, 167.0/255.0, 253.0/255.0, 1}, {0, 0}} // BL (3)
};
const vertex textureVertices[] = {
{{1, -1, 0}, {0, 0, 0, 0}, {1, 0}}, // BR (0)
{{1, 1, 0}, {0, 0, 0, 0}, {1, 1}}, // TR (1)
{{-1, 1, 0}, {0, 0, 0, 0}, {0, 1}}, // TL (2)
{{-1, -1, 0}, {0, 0, 0, 0}, {0, 0}} // BL (3)
};
const GLubyte indicies[] = {
3, 2, 0, 1
};
Render Code:
glClear(GL_COLOR_BUFFER_BIT);
GLsizei width, height;
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &width);
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &height);
glViewport(0, 0, width, height);
glBindBuffer(GL_ARRAY_BUFFER, bgVertexBuffer);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBuffer);
glVertexAttribPointer(positionSlot, 3, GL_FLOAT, GL_FALSE, sizeof(vertex), 0);
glVertexAttribPointer(colorSlot, 4, GL_FLOAT, GL_FALSE, sizeof(vertex), (GLvoid*)(sizeof(float)*3));
glVertexAttribPointer(textureCoordSlot, 2, GL_FLOAT, GL_FALSE, sizeof(vertex), (GLvoid*)(sizeof(float)*7));
glDrawElements(GL_TRIANGLE_STRIP, sizeof(indicies)/sizeof(indicies[0]), GL_UNSIGNED_BYTE, 0);
glBindBuffer(GL_ARRAY_BUFFER, textureVertexBuffer);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBuffer);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texture);
glUniform1i(textureUniform, 0);
glVertexAttribPointer(positionSlot, 3, GL_FLOAT, GL_FALSE, sizeof(vertex), 0);
glVertexAttribPointer(colorSlot, 4, GL_FLOAT, GL_FALSE, sizeof(vertex), (GLvoid*)(sizeof(float)*3));
glVertexAttribPointer(textureCoordSlot, 2, GL_FLOAT, GL_FALSE, sizeof(vertex), (GLvoid*)(sizeof(float)*7));
glDrawElements(GL_TRIANGLE_STRIP, sizeof(indicies)/sizeof(indicies[0]), GL_UNSIGNED_BYTE, 0);
glBindTexture(GL_TEXTURE_2D, 0);
I am using the blend function glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) in case that has anything to do with it.
Any ideas where the problem is?
Your GL-rendered output looks all pixelated because it has fewer pixels. Per the Drawing and Printing Guide for iOS, the default scale factor for a CAEAGLLayer is 1.0, so when you set up your GL render buffers, you get one pixel in the buffer per point. (Remember, a point is a unit of UI layout, which on modern devices with Retina displays works out to several hardware pixels.) When you render that buffer full-screen, everything gets scaled up (by about 2.61x on an iPhone 6(s) Plus).
To render at the native screen resolution, you need to increase the contentScaleFactor of your view. (Preferably, you should do this early on, before setting up renderbuffers, so that they get the new scale factor from the view's layer.)
Watch out, though: you want to use the UIScreen property nativeScale, not scale. The scale property reflects UI rendering, where, on iPhone 6(s) Plus, everything gets done at 3x and then scaled down slightly to the native resolution of the display. The nativeScale property reflects the number of actual device pixels per point — if you're doing GPU rendering, you want to target that so you don't sap performance by drawing more pixels than you need to. (On current devices other than the "Plus" iPhones, scale and nativeScale are the same. But using the latter is probably a good insurance policy.)
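For example (a sketch using the names from the question; this assumes it runs in the EAGL-backed view, before the storage call):
// Set the scale factor first, so the drawable is sized in device pixels, not points.
self.contentScaleFactor = [UIScreen mainScreen].nativeScale;
glBindRenderbuffer(GL_RENDERBUFFER, onscrRenderBuffer);
[context renderbufferStorage:GL_RENDERBUFFER fromDrawable:self.layer];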
You can avoid a lot of these kinds of issues (and others) by letting GLKView do renderbuffer setup for you. Even if you're writing cross-platform GL, that part of your code is going to have to be pretty platform- and device-specific anyway, so you might as well reduce the amount of it that you have to write and maintain.
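For reference, a minimal GLKView-based setup might look like this sketch (assuming a GLKViewController subclass; not the poster's code):
// GLKView creates and sizes its framebuffer and renderbuffers for you,
// honoring the view's contentScaleFactor.
GLKView *glkView = (GLKView *)self.view;
glkView.context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
glkView.drawableColorFormat = GLKViewDrawableColorFormatRGBA8888;
glkView.contentScaleFactor = [UIScreen mainScreen].nativeScale;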
(Addressing previous edits of the question, for posterity's sake: this has nothing to do with multisampling or the quality of the GL texture data. Multisampling has to do with rasterization of polygon edges — points in the interior of a polygon get one fragment per pixel, but points on the edges get multiple fragments whose colors are blended in the resolve stage. And if you bind the texture to an FBO and glReadPixels from it, you'll find the image is pretty much the same one you put in.)

Achieving a persistence effect in GLKit view

I have a GLKit view set up to draw a solid shape, a line and an array of points which all change every frame. The basics of my drawInRect method are:
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect
{
glClear(...);
glBufferData(...);
glEnableVertexAttribArray(...);
glVertexAttribPointer(...);
// draw solid shape
glDrawArrays(GL_TRIANGLE_STRIP, ...);
// draw line
glDrawArrays(GL_LINE_STRIP, ...);
// draw points
glDrawArrays(GL_POINTS, ...);
}
This works fine; each array contains around 2000 points, but my iPad seems to have no problem rendering it all at 60fps.
The issue now is that I would like the lines to fade away slowly over time, instead of disappearing with the next frame, making a persistence or phosphor-like effect. The solid shape and the points must not linger, only the line.
I've tried the brute-force method (as used in Apple's example project aurioTouch): storing the data from the last 100 frames and drawing all 100 lines every frame, but this is too slow. My iPad can't render more than about 10fps with this method.
So my question is: can I achieve this more efficiently using some kind of frame or render buffer which accumulates the color of previous frames? Since I'm using GLKit, I haven't had to deal directly with these things before, and so don't know much about them. I've read about accumulation buffers, which seem to do what I want, but I've heard that they are very slow and anyway I can't tell whether they even exist in OpenGL ES 3, let alone how to use them.
I'm imagining something like the following (after setting up some kind of storage buffer):
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect
{
glClear(...);
glBufferData(...);
glEnableVertexAttribArray(...);
glVertexAttribPointer(...);
// draw solid shape
glDrawArrays(GL_TRIANGLE_STRIP, ...);
// draw contents of storage buffer
// draw line
glDrawArrays(GL_LINE_STRIP, ...);
// multiply the alpha value of each pixel in the storage buffer by 0.9 to fade
// draw line again, this time into the storage buffer
// draw points
glDrawArrays(GL_POINTS, ...);
}
Is this possible? What are the commands I need to use (in particular, to combine the contents of the storage buffer and change its alpha)? And is this likely to actually be more efficient than the brute-force method?
I ended up achieving the desired result by rendering to a texture, as described for example here. The basic idea is to setup a custom framebuffer and attach a texture to it – I then render the line that I want to persist into this framebuffer (without clearing it) and render the whole framebuffer as a texture into the default framebuffer (which is cleared every frame). Instead of clearing the custom framebuffer, I render a slightly opaque quad over the whole screen to make the previous contents fade out a little every frame.
The relevant code is below; setting up the framebuffer and persistence texture is done in the init method:
// vertex data for fullscreen textured quad (x, y, texX, texY)
GLfloat persistVertexData[16] = {-1.0, -1.0, 0.0, 0.0,
-1.0, 1.0, 0.0, 1.0,
1.0, -1.0, 1.0, 0.0,
1.0, 1.0, 1.0, 1.0};
// setup texture vertex buffer
glGenBuffers(1, &persistVertexBuffer);
glBindBuffer(GL_ARRAY_BUFFER, persistVertexBuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(persistVertexData), persistVertexData, GL_STATIC_DRAW);
// create texture for persistence data and bind
glGenTextures(1, &persistTexture);
glBindTexture(GL_TEXTURE_2D, persistTexture);
// provide an empty image
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 2048, 1536, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
// set texture parameters
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
// create frame buffer for persistence data
glGenFramebuffers(1, &persistFrameBuffer);
glBindFramebuffer(GL_FRAMEBUFFER, persistFrameBuffer);
// attach texture to the framebuffer's color attachment
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, persistTexture, 0);
// check for errors
NSAssert(glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE, @"Error: persistence framebuffer incomplete!");
// initialize default frame buffer pointer
defaultFrameBuffer = -1;
and in the glkView:drawInRect: method:
// get default frame buffer id
if (defaultFrameBuffer == -1)
glGetIntegerv(GL_FRAMEBUFFER_BINDING, &defaultFrameBuffer);
// clear screen
glClear(GL_COLOR_BUFFER_BIT);
// DRAW PERSISTENCE
// bind persistence framebuffer
glBindFramebuffer(GL_FRAMEBUFFER, persistFrameBuffer);
// render full screen quad to fade
glEnableVertexAttribArray(...);
glBindBuffer(GL_ARRAY_BUFFER, persistVertexBuffer);
glVertexAttribPointer(...);
glUniform4f(colorU, 0.0, 0.0, 0.0, 0.01);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
// add most recent line
glBindBuffer(GL_ARRAY_BUFFER, dataVertexBuffer);
glVertexAttribPointer(...);
glUniform4f(colorU, color[0], color[1], color[2], 0.8*color[3]);
glDrawArrays(...);
// return to normal framebuffer
glBindFramebuffer(GL_FRAMEBUFFER, defaultFrameBuffer);
// switch to texture shader
glUseProgram(textureProgram);
// bind texture
glBindTexture(GL_TEXTURE_2D, persistTexture);
glUniform1i(textureTextureU, 0);
// set texture vertex attributes
glBindBuffer(GL_ARRAY_BUFFER, persistVertexBuffer);
glEnableVertexAttribArray(texturePositionA);
glEnableVertexAttribArray(textureTexCoordA);
glVertexAttribPointer(self.shaderBridge.texturePositionA, 2, GL_FLOAT, GL_FALSE, 4*sizeof(GLfloat), 0);
glVertexAttribPointer(self.shaderBridge.textureTexCoordA, 2, GL_FLOAT, GL_FALSE, 4*sizeof(GLfloat), 2*sizeof(GLfloat));
// draw fullscreen quad with texture
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
// DRAW NORMAL FRAME
glUseProgram(normalProgram);
glEnableVertexAttribArray(...);
glVertexAttribPointer(...);
// draw solid shape
glDrawArrays(GL_TRIANGLE_STRIP, ...);
// draw line
glDrawArrays(GL_LINE_STRIP, ...);
// draw points
glDrawArrays(GL_POINTS, ...);
The texture shaders are very simple: the vertex shader just passes the texture coordinate to the fragment shader:
attribute vec4 aPosition;
attribute vec2 aTexCoord;
varying vec2 vTexCoord;
void main(void)
{
gl_Position = aPosition;
vTexCoord = aTexCoord;
}
and the fragment shader reads the fragment color from the texture:
uniform highp sampler2D uTexture;
varying vec2 vTexCoord;
void main(void)
{
gl_FragColor = texture2D(uTexture, vTexCoord);
}
Although this works, it doesn't seem very efficient, causing the renderer utilization to rise to close to 100%. It only seems better than the brute force approach when the number of lines drawn each frame exceeds 100 or so. If anyone has any suggestions on how to improve this code, I would be very grateful!

OpenGLES2 iOS vertex array objects causing bad access error on drawElements

I've been bashing my head against the wall this afternoon trying to persuade my OpenGL ES 2.0 code to perform correctly when I move from using only VBOs to VAOs plus VBOs. Basically I'm working my way through Apple's "expert" advice on OpenGL ES, and moving to Vertex Array Objects was at the top of the list ...
I've reviewed a similar question and response here, but that didn't seem to help me, other than reassure me that other people run into similar problems :(
My scenario is that I have approximately 500 rectangular textures moving around the screen. The code all works fine without VAOs, but when I define USE_VAO (my constant) it crashes on the first glDrawElements call. I'm obviously not understanding VAOs properly ... but I can't see the error of my ways!
The setupBeforeRender method is called as the last part of my setup before entering the render loop.
-(void) setupBeforeRender {
glClearColor(0.6, 0.6, 0.6, 1);
glViewport(0, 0, self.frame.size.width, self.frame.size.height);
glEnable(GL_DEPTH_TEST);
glUniform1i(_textureUniform, 0);
glActiveTexture(GL_TEXTURE0);
glEnableVertexAttribArray(_positionSlot);
glEnableVertexAttribArray(_colorSlot);
glEnableVertexAttribArray(_texCoordSlot);
glGenBuffers(1, &_indexBuffer);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, _indexBuffer);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(Indices), Indices, GL_STATIC_DRAW);
}
And here's the render method
- (void)render:(CADisplayLink*)displayLink {
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// Model view matrix and projection code removed for clarity
GLsizei stride = sizeof(Vertex);
const GLvoid* colourOffset = (GLvoid *) sizeof(float[3]);
const GLvoid* textureOffset = (GLvoid *) sizeof(float[7]);
for (my objectToDraw in objectToDrawArray)
{
if (objectToDraw.vertexBufferObject == 0)
{
#ifdef USE_VAO
glGenVertexArraysOES(1,&_vao);
glBindVertexArrayOES(_vao);
glVertexAttribPointer(_positionSlot, 3, GL_FLOAT, GL_FALSE, stride, 0);
glVertexAttribPointer(_colorSlot, 4, GL_FLOAT, GL_FALSE, stride, colourOffset);
glVertexAttribPointer(_texCoordSlot, 2, GL_FLOAT, GL_FALSE,stride, textureOffset);
objectToDraw.vertexBufferObject = [objectToDraw createAndBindVBO];
objectToDraw.vertexArrayObject = _vao;
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArrayOES(0);
#else
objectToDraw.vertexBufferObject = [objectToDraw createAndBindVBO];
#endif
}
// Texture binding removed for clarity
#ifdef USE_VAO
// This code crashes with EXC_BAD_ACCESS on the glDrawElements
glBindVertexArrayOES(objectToDraw.vertexArrayObject);
glDrawElements(GL_TRIANGLES, sizeof(Indices) / sizeof(Indices[0]), GL_UNSIGNED_SHORT,0);
glBindVertexArrayOES(0);
#else
// This path works fine. So turning VAO off works :(
glBindBuffer(GL_ARRAY_BUFFER, objectToDraw.vertexBufferObject);
glVertexAttribPointer(_positionSlot, 3, GL_FLOAT, GL_FALSE, stride, 0);
glVertexAttribPointer(_colorSlot, 4, GL_FLOAT, GL_FALSE, stride, colourOffset);
glVertexAttribPointer(_texCoordSlot, 2, GL_FLOAT, GL_FALSE, stride, textureOffset);
glDrawElements(GL_TRIANGLES, sizeof(Indices) / sizeof(Indices[0]), GL_UNSIGNED_SHORT,0);
#endif
} // End for each object
[_context presentRenderbuffer:GL_RENDERBUFFER];
}
Finally, my create and bind VBO method looks like this;
-(GLuint) createAndBindVBO {
const float* rgba = CGColorGetComponents([self.colour CGColor]);
Vertex Vertices[] = {
{{0, 1, 0}, {1, 0, 1, 1}, {0,1}},
{{0, 0, 0}, {1, 0, 1, 1}, {0,0}},
{{1, 1, 0}, {1, 0, 1, 1}, {1,1}},
{{1, 0, 0}, {1, 0, 1, 1}, {1,0}}
};
// Code removed for clarity - sets up geometry and colours
GLuint vertexBuffer;
glGenBuffers(1, &vertexBuffer);
glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(Vertices), Vertices, GL_STATIC_DRAW);
return vertexBuffer;
}
I've tried various permutations of this and have sprinkled the code with glGetError() to see if that helps point to where the problem arises. Alas I get no errors, other than the BAD_ACCESS crash on the drawElements call.
EDIT: As suggested, this unfortunately also doesn't work
objectToDraw.vertexBufferObject = [objectToDraw createVBO];
glGenVertexArraysOES(1,&_vao);
glBindVertexArrayOES(_vao);
glBindBuffer(GL_ARRAY_BUFFER, objectToDraw.vertexBufferObject);
glVertexAttribPointer(_positionSlot, 3, GL_FLOAT, GL_FALSE, stride, 0);
glVertexAttribPointer(_colorSlot, 4, GL_FLOAT, GL_FALSE, stride, colourOffset);
glVertexAttribPointer(_texCoordSlot, 2, GL_FLOAT, GL_FALSE,stride, textureOffset);
objectToDraw.vertexArrayObject = _vao;
glBindVertexArrayOES(0);
I must be doing something dumb with the vertex array object ... but can someone figure out what the problem is?
The vertex array enabled flags are part of the VAO state, so you need to enable the vertex attribute arrays using glEnableVertexAttribArray while the VAO is bound.
From: http://www.khronos.org/registry/gles/extensions/OES/OES_vertex_array_object.txt
The resulting vertex array object is a new state vector, comprising all the state values listed in Table 6.2 (except ARRAY_BUFFER_BINDING):
VERTEX_ATTRIB_ARRAY_ENABLED
VERTEX_ATTRIB_ARRAY_SIZE,
VERTEX_ATTRIB_ARRAY_STRIDE,
VERTEX_ATTRIB_ARRAY_TYPE,
VERTEX_ATTRIB_ARRAY_NORMALIZED,
VERTEX_ATTRIB_ARRAY_POINTER,
ELEMENT_ARRAY_BUFFER_BINDING,
VERTEX_ATTRIB_ARRAY_BUFFER_BINDING.
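In other words, the glEnableVertexAttribArray calls (and the element array buffer binding) need to happen while the VAO is bound. A sketch of the setup, reusing the names from the question:
// Everything the VAO is supposed to remember must be recorded while it is bound.
glGenVertexArraysOES(1, &_vao);
glBindVertexArrayOES(_vao);
// Bind the VBO first, so the attribute pointers below capture this binding.
objectToDraw.vertexBufferObject = [objectToDraw createAndBindVBO];
// The enabled flags are VAO state, so enable the arrays while the VAO is bound.
glEnableVertexAttribArray(_positionSlot);
glEnableVertexAttribArray(_colorSlot);
glEnableVertexAttribArray(_texCoordSlot);
glVertexAttribPointer(_positionSlot, 3, GL_FLOAT, GL_FALSE, stride, 0);
glVertexAttribPointer(_colorSlot, 4, GL_FLOAT, GL_FALSE, stride, colourOffset);
glVertexAttribPointer(_texCoordSlot, 2, GL_FLOAT, GL_FALSE, stride, textureOffset);
// The element array buffer binding is captured by the VAO as well.
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, _indexBuffer);
objectToDraw.vertexArrayObject = _vao;
glBindVertexArrayOES(0);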
You should call glVertexAttribPointer after the glBindBuffer call, so that the attribute pointer picks up the VBO that is currently bound.
I had a similar problem and didn't know what was causing it.
Eventually it turned out that I had to pass an explicit const int vertex count to glDrawArrays; computing the count with sizeof() was not giving the right value.
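That usually happens when sizeof() is applied to a pointer instead of the actual array; a small C sketch of the difference (the names here are made up):
// sizeof() only gives the data size for an array that is in scope.
GLfloat quadVertices[8] = { 0, 0,  0, 10,  100, 10,  100, 0 };
const GLsizei vertexCount = sizeof(quadVertices) / (2 * sizeof(GLfloat)); // 4 vertices
glDrawArrays(GL_TRIANGLE_STRIP, 0, vertexCount);
// Inside a function that receives the data as a pointer, sizeof(ptr) is just
// the pointer size (4 or 8 bytes), so the count must be passed explicitly.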

What can make glDrawArrays with a VBO not draw anything?

I'm trying to figure out how to work with VBOs, using an OpenGL 2.0 rendering context. I've got a 2D (ortho) rendering context set up, and I can draw a simple rectangle like this:
glBegin(GL_QUADS);
glColor4f(1, 1, 1, 1);
glVertex2f(0, 0);
glVertex2f(0, 10);
glVertex2f(100, 10);
glVertex2f(100, 0);
glEnd;
But when I try to do it with a VBO, it fails. I set up the VBO like this, with the same data as before:
procedure initialize;
const
VERTICES: array[1..8] of single =
(
0, 0,
0, 10,
100, 10,
100, 0
);
begin
glEnable(GL_VERTEX_ARRAY);
glGenBuffers(1, @VBO);
glBindBuffer(GL_ARRAY_BUFFER, VBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(VERTICES), @VERTICES[1], GL_DYNAMIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, 0);
end;
and I try to draw like this:
begin
glColor4f(1, 1, 1, 1);
glEnableClientState(GL_VERTEX_ARRAY);
glBindBuffer(GL_ARRAY_BUFFER, VBO);
glVertexPointer(2, GL_FLOAT, 0, 0);
glDrawArrays(GL_QUADS, 0, 1);
glBindBuffer(GL_ARRAY_BUFFER, 0);
end;
From everything I've read, that ought to work. I run it through gDEBugger and there are no GL errors, and the data in the VBO is getting loaded correctly, but nothing actually appears when I swap the buffers. Changing the data in the vertex array to use normalized coordinates (from 0..1.0) also ends up displaying nothing. Any idea what I'm doing wrong? (Assume the render context itself is set up correctly and the GL function pointers have all been loaded correctly.)
glDrawArrays(GL_QUADS, 0, 1);
Looks like you're trying to draw a quad with a single vertex. You need three more:
glDrawArrays(GL_QUADS, 0, 4);
Or switch to points:
glDrawArrays(GL_POINTS, 0, 1);
