iOS OpenGL ES VBO weird rendering

I'm not familiar with OpenGL, but I have to improve code written by other people. The problem is that they copied the vertex data on every draw call, which is very CPU intensive. Now I'm trying to rewrite it to use a VBO. The new code draws, but there is a weird size problem: the jigsaw pieces come out bigger, though sometimes exactly the same size as with the old code.
I understand that a VBO just creates the buffer once in GPU memory, but something is obviously wrong with how I use it. The pieces are rendered with this code:
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, background);
glEnableVertexAttribArray(0);
glDrawElements(GL_TRIANGLE_STRIP, sizeof(indicies)/sizeof(indicies[0]), GL_UNSIGNED_BYTE, indicies);
And the shadow with this code:
glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(0);
glDrawElements(GL_TRIANGLE_STRIP, sizeof(indicies)/sizeof(indicies[0]), GL_UNSIGNED_BYTE, indicies);
glDisableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
The result looks like this:
As you can see, the two pieces are stacked together, but the shadow is rendered wrong. What could be causing this weird problem?
UPDATE
The piece of code where I create the buffer:
glGenBuffers(1, &vertexBuffer);
glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(background), background, GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, 0);
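One C pitfall worth checking with this pattern: sizeof(background) and sizeof(indicies) only give the full data size if those names are declared as arrays in the scope of the call. If either is a pointer (for example a function parameter), sizeof yields the pointer size, so only a few bytes get uploaded or only a few indices get drawn, which can produce this kind of size distortion. A minimal consistent create/draw path, assuming both are plain arrays in scope:
// Creation (once). NOTE: assumes background is declared as an array in this
// scope, e.g. GLfloat background[...]; otherwise sizeof() is wrong.
glGenBuffers(1, &vertexBuffer);
glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(background), background, GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, 0);
// Drawing (per frame): attribute 0 reads from the VBO at byte offset 0.
glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, 0);
glDrawElements(GL_TRIANGLE_STRIP, sizeof(indicies) / sizeof(indicies[0]),
               GL_UNSIGNED_BYTE, indicies); // client-side index array; no element buffer bound
glDisableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, 0);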

Related

Indexing in opengl-es iOS

I was going to optimize my VBOs to use indices instead of just clumping all the vertices together, but somehow I can't use GL_INDEX_ARRAY. It just says 'Use of undeclared identifier GL_INDEX_ARRAY', and it's not even defined in gl.h (I looked). Is there another way I'm supposed to index my VBOs?
I use this code to create my VBOs:
glGenVertexArraysOES(1, &vertexArray);
glBindVertexArrayOES(vertexArray);
glGenBuffers(1, &vertexBuffer);
glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(GLfloat)*vertexDataSize, vertexData, GL_STATIC_DRAW);
glEnableVertexAttribArray(GLKVertexAttribPosition);
glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE, 40, BUFFER_OFFSET(0));
glEnableVertexAttribArray(GLKVertexAttribNormal);
glVertexAttribPointer(GLKVertexAttribNormal, 3, GL_FLOAT, GL_FALSE, 40, BUFFER_OFFSET(12));
glEnableVertexAttribArray(GLKVertexAttribColor);
glVertexAttribPointer(GLKVertexAttribColor, 4, GL_FLOAT, GL_FALSE, 40, BUFFER_OFFSET(24));
At first I thought there was some GLKVertexAttribIndex, but since there isn't, I guessed I was supposed to use glEnableClientState(GL_INDEX_ARRAY); apparently that doesn't exist either. So how am I supposed to use an index array with my VBOs?
Use GL_ELEMENT_ARRAY_BUFFER to indicate an index buffer. Indices are usually defined as shorts, so something like:
glGenBuffers(1, &indexBuffer);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBuffer);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(GLshort)*indexDataSize, indexData, GL_STATIC_DRAW);
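When drawing, bind the index buffer and pass a byte offset (usually 0) instead of a client-side pointer as the last argument of glDrawElements. A minimal sketch; GL_TRIANGLES is just an example primitive, and it assumes indexDataSize is the number of indices (matching the glBufferData call above):
// If the index buffer is bound while the VAO is bound, the binding is
// captured as ELEMENT_ARRAY_BUFFER_BINDING in the VAO and restored on each bind.
glBindVertexArrayOES(vertexArray);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBuffer);
glDrawElements(GL_TRIANGLES, indexDataSize, GL_UNSIGNED_SHORT, 0);
glBindVertexArrayOES(0);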

Multiple Vertex Buffers OpenGL ES on iOS

I recently solved a problem which prevented my lighting from working in an OpenGL ES iOS app:
Solved Question
I solved the problem by replacing
glEnable(GL_DEPTH_TEST);
glGenVertexArraysOES(1, &_vertexArray);
glBindVertexArrayOES(_vertexArray);
glGenBuffers(1, &_vertexBuffer);
glBindBuffer(GL_ARRAY_BUFFER, _vertexBuffer);
glBufferData(GL_ARRAY_BUFFER, loader.currentCountOfVerticies * sizeof(GLfloat) * 3, arrayOfVerticies, GL_STATIC_DRAW);
glEnableVertexAttribArray(GLKVertexAttribPosition);
glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE, 12, BUFFER_OFFSET(0));
glGenVertexArraysOES(1, &_normalArray);
glBindVertexArrayOES(_normalArray);
glGenBuffers(1, &_normalBuffer);
glBindBuffer(GL_ARRAY_BUFFER, _normalBuffer);
glBufferData(GLKVertexAttribNormal, loader.currentCountOfNormals * sizeof(GLfloat) * 3,loader.arrayOfNormals , GL_STATIC_DRAW);
glEnableVertexAttribArray(GLKVertexAttribNormal);
glVertexAttribPointer(GLKVertexAttribNormal, 3, GL_FLOAT, GL_FALSE, 12, BUFFER_OFFSET(0));
glBindVertexArrayOES(0);
with
glEnable(GL_DEPTH_TEST);
glGenVertexArraysOES(1, &_vertexArray);
glBindVertexArrayOES(_vertexArray);
glGenBuffers(1, &_vertexBuffer);
glBindBuffer(GL_ARRAY_BUFFER, _vertexBuffer);
glBufferData(GL_ARRAY_BUFFER, total * sizeof(GLfloat), mergedArray, GL_STATIC_DRAW);
glEnableVertexAttribArray(GLKVertexAttribPosition);
glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE, 24, BUFFER_OFFSET(0));
glEnableVertexAttribArray(GLKVertexAttribNormal);
glVertexAttribPointer(GLKVertexAttribNormal, 3, GL_FLOAT, GL_FALSE, 24, BUFFER_OFFSET(12));
glBindVertexArrayOES(0);
That is, I combined the arrays of vertices and normals into a single GLfloat array and passed that array to a single buffer. This resolved my problem, but I don't understand why. To my knowledge, I should have been able to use two buffers?
The problem wasn't the use of multiple vertex buffers, but the use of multiple vertex array objects. A vertex array object (VAO) is a lightweight object (meaning it doesn't contain any actual vertex attribute data) encapsulating all the state required for rendering a bunch of vertex arrays with a single draw call, in particular:
The settings made with glVertexAttribPointer for each attribute index
The enabled attribute arrays
The bound element array buffer
It is therefore one level higher than the individual vertex attribute arrays, comprising all the vertex attribute array settings of a single (conceptual) scene object, or more correctly, a single draw call.
But in your original code you create a new vertex array object for each individual attribute. When rendering, you then only bind _vertexArray, which in turn only sets and enables the GLKVertexAttribPosition attribute, so there are no normals or anything else.
So you should have rather replaced the original code with:
glGenVertexArraysOES(1, &_vertexArray);
glBindVertexArrayOES(_vertexArray);
glGenBuffers(1, &_vertexBuffer);
glBindBuffer(GL_ARRAY_BUFFER, _vertexBuffer);
glBufferData(GL_ARRAY_BUFFER, loader.currentCountOfVerticies * sizeof(GLfloat) * 3, arrayOfVerticies, GL_STATIC_DRAW);
glEnableVertexAttribArray(GLKVertexAttribPosition);
glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE, 12, BUFFER_OFFSET(0));
//this is the error
//glGenVertexArraysOES(1, &_normalArray);
//glBindVertexArrayOES(_normalArray);
glGenBuffers(1, &_normalBuffer);
glBindBuffer(GL_ARRAY_BUFFER, _normalBuffer);
glBufferData(GL_ARRAY_BUFFER, loader.currentCountOfNormals * sizeof(GLfloat) * 3, loader.arrayOfNormals, GL_STATIC_DRAW); // target must be GL_ARRAY_BUFFER here
glEnableVertexAttribArray(GLKVertexAttribNormal);
glVertexAttribPointer(GLKVertexAttribNormal, 3, GL_FLOAT, GL_FALSE, 12, BUFFER_OFFSET(0));
glBindVertexArrayOES(0);
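With everything captured in one VAO, drawing then needs only a single bind. A minimal sketch, assuming the mesh is drawn as triangles:
glBindVertexArrayOES(_vertexArray);
// Both the position and normal attribute arrays are restored by the bind above
glDrawArrays(GL_TRIANGLES, 0, loader.currentCountOfVerticies);
glBindVertexArrayOES(0);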

OpenGLES2 iOS vertex array objects causing bad access error on drawElements

I've been bashing my head against the wall this afternoon trying to persuade my OpenGL ES 2.0 code to perform correctly when I move from using a VBO alone to a VAO/VBO combination. Basically I'm working my way through Apple's "expert" advice on OpenGL ES, and moving to vertex array objects was at the top of the list ...
I've reviewed the similar question and response here, but that didn't seem to help me, other than reassure me that other people run into similar problems :(
My scenario is that I have approximately 500 rectangular textures moving around the screen. The code all works fine without VAOs, but when I define USE_VAO (my constant) it crashes on the first glDrawElements call. I'm obviously not understanding VAOs properly ... but I can't see the error of my ways!
The setupBeforeRender method is called as the last part of my setup before entering the render loop.
-(void) setupBeforeRender {
glClearColor(0.6, 0.6, 0.6, 1);
glViewport(0, 0, self.frame.size.width, self.frame.size.height);
glEnable(GL_DEPTH_TEST);
glUniform1i(_textureUniform, 0);
glActiveTexture(GL_TEXTURE0);
glEnableVertexAttribArray(_positionSlot);
glEnableVertexAttribArray(_colorSlot);
glEnableVertexAttribArray(_texCoordSlot);
glGenBuffers(1, &_indexBuffer);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, _indexBuffer);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(Indices), Indices, GL_STATIC_DRAW);
}
And here's the render method
- (void)render:(CADisplayLink*)displayLink {
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// Model view matrix and projection code removed for clarity
GLsizei stride = sizeof(Vertex);
const GLvoid* colourOffset = (GLvoid *) sizeof(float[3]);
const GLvoid* textureOffset = (GLvoid *) sizeof(float[7]);
for (ObjectToDraw *objectToDraw in objectToDrawArray) // ObjectToDraw is a placeholder class name
{
if (objectToDraw.vertexBufferObject == 0)
{
#ifdef USE_VAO
glGenVertexArraysOES(1,&_vao);
glBindVertexArrayOES(_vao);
glVertexAttribPointer(_positionSlot, 3, GL_FLOAT, GL_FALSE, stride, 0);
glVertexAttribPointer(_colorSlot, 4, GL_FLOAT, GL_FALSE, stride, colourOffset);
glVertexAttribPointer(_texCoordSlot, 2, GL_FLOAT, GL_FALSE,stride, textureOffset);
objectToDraw.vertexBufferObject = [objectToDraw createAndBindVBO];
objectToDraw.vertexArrayObject = _vao;
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArrayOES(0);
#else
objectToDraw.vertexBufferObject = [objectToDraw createAndBindVBO];
#endif
}
// Texture binding removed for clarity
#ifdef USE_VAO
// This code crashes with EXC_BAD_ACCESS on the glDrawElements
glBindVertexArrayOES(objectToDraw.vertexArrayObject);
glDrawElements(GL_TRIANGLES, sizeof(Indices) / sizeof(Indices[0]), GL_UNSIGNED_SHORT,0);
glBindVertexArrayOES(0);
#else
// This path works fine. So turning VAO off works :(
glBindBuffer(GL_ARRAY_BUFFER, objectToDraw.vertexBufferObject);
glVertexAttribPointer(_positionSlot, 3, GL_FLOAT, GL_FALSE, stride, 0);
glVertexAttribPointer(_colorSlot, 4, GL_FLOAT, GL_FALSE, stride, colourOffset);
glVertexAttribPointer(_texCoordSlot, 2, GL_FLOAT, GL_FALSE, stride, textureOffset);
glDrawElements(GL_TRIANGLES, sizeof(Indices) / sizeof(Indices[0]), GL_UNSIGNED_SHORT,0);
#endif
} // End for each object
[_context presentRenderbuffer:GL_RENDERBUFFER];
}
Finally, my create and bind VBO method looks like this;
-(GLuint) createAndBindVBO {
const float* rgba = CGColorGetComponents([self.colour CGColor]);
Vertex Vertices[] = {
{{0, 1, 0}, {1, 0, 1, 1}, {0,1}},
{{0, 0, 0}, {1, 0, 1, 1}, {0,0}},
{{1, 1, 0}, {1, 0, 1, 1}, {1,1}},
{{1, 0, 0}, {1, 0, 1, 1}, {1,0}}
};
// Code removed for clarity - sets up geometry and colours
GLuint vertexBuffer;
glGenBuffers(1, &vertexBuffer);
glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(Vertices), Vertices, GL_STATIC_DRAW);
return vertexBuffer;
}
I've tried various permutations of this and have sprinkled the code with glGetError() calls to see if that helps point to where the problem arises. Alas, I get no errors, other than the EXC_BAD_ACCESS crash on the glDrawElements call.
EDIT: As suggested, this unfortunately also doesn't work
objectToDraw.vertexBufferObject = [objectToDraw createVBO];
glGenVertexArraysOES(1,&_vao);
glBindVertexArrayOES(_vao);
glBindBuffer(GL_ARRAY_BUFFER, objectToDraw.vertexBufferObject);
glVertexAttribPointer(_positionSlot, 3, GL_FLOAT, GL_FALSE, stride, 0);
glVertexAttribPointer(_colorSlot, 4, GL_FLOAT, GL_FALSE, stride, colourOffset);
glVertexAttribPointer(_texCoordSlot, 2, GL_FLOAT, GL_FALSE,stride, textureOffset);
objectToDraw.vertexArrayObject = _vao;
glBindVertexArrayOES(0);
I must be doing something dumb with the vertex array object ... but can someone figure out what the problem is?
The vertex array enabled flags are part of the VAO state, so you need to enable the vertex attribute arrays using glEnableVertexAttribArray while the VAO is bound.
From: http://www.khronos.org/registry/gles/extensions/OES/OES_vertex_array_object.txt
The resulting vertex array object is a new state vector, comprising all the state values (listed in Table 6.2, except ARRAY_BUFFER_BINDING):
VERTEX_ATTRIB_ARRAY_ENABLED
VERTEX_ATTRIB_ARRAY_SIZE
VERTEX_ATTRIB_ARRAY_STRIDE
VERTEX_ATTRIB_ARRAY_TYPE
VERTEX_ATTRIB_ARRAY_NORMALIZED
VERTEX_ATTRIB_ARRAY_POINTER
ELEMENT_ARRAY_BUFFER_BINDING
VERTEX_ATTRIB_ARRAY_BUFFER_BINDING
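Applied to the code above: create and bind the VAO first, then make all the glBindBuffer, glEnableVertexAttribArray, and glVertexAttribPointer calls while it is bound, so that everything in that list gets recorded. A sketch using the question's own names:
objectToDraw.vertexBufferObject = [objectToDraw createVBO];
glGenVertexArraysOES(1, &_vao);
glBindVertexArrayOES(_vao);
glBindBuffer(GL_ARRAY_BUFFER, objectToDraw.vertexBufferObject);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, _indexBuffer); // ELEMENT_ARRAY_BUFFER_BINDING is VAO state
// The enable flags are VAO state too, so they must be set while the VAO is bound
glEnableVertexAttribArray(_positionSlot);
glEnableVertexAttribArray(_colorSlot);
glEnableVertexAttribArray(_texCoordSlot);
glVertexAttribPointer(_positionSlot, 3, GL_FLOAT, GL_FALSE, stride, 0);
glVertexAttribPointer(_colorSlot, 4, GL_FLOAT, GL_FALSE, stride, colourOffset);
glVertexAttribPointer(_texCoordSlot, 2, GL_FLOAT, GL_FALSE, stride, textureOffset);
objectToDraw.vertexArrayObject = _vao;
glBindVertexArrayOES(0);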
You should call glVertexAttribPointer after the glBindBuffer call.
I had a similar problem and didn't know what was causing it. Eventually it turned out that I had to pass a constant vertex count to glDrawArrays; computing the count with sizeof() was not giving the right value.
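That sizeof() pitfall is worth spelling out: in C, sizeof applied to a pointer yields the size of the pointer itself, not of the array it points to, so a vertex count derived from it comes out wrong. A tiny illustration:
GLfloat quad[8] = {0, 0, 0, 10, 100, 10, 100, 0}; // 4 vertices, 2 floats each
GLfloat *p = quad;
size_t right = sizeof(quad) / (2 * sizeof(GLfloat)); // 4: quad is a real array here
size_t wrong = sizeof(p) / (2 * sizeof(GLfloat));    // 0 or 1: sizeof(p) is the pointer size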

Use stencil buffer with iOS

I'm trying to use the stencil buffer to display only part of my rendering through a mask from a texture, but my render is displayed without any mask effect.
It's for a 2D iOS project, with OpenGL ES 2.0.
This is the relevant part of my code:
glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT );
glEnable( GL_STENCIL_TEST );
// mask rendering
glColorMask( GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE );
glStencilFunc( GL_ALWAYS, 1, 1 );
glStencilOp( GL_KEEP, GL_KEEP, GL_REPLACE );
glBindTexture(GL_TEXTURE_2D, _maskTexture);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
// scene rendering
glColorMask( GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE );
glStencilFunc( GL_EQUAL, 1, 1 );
glStencilOp( GL_KEEP, GL_KEEP, GL_KEEP );
glBindTexture(GL_TEXTURE_2D, _viewTexture);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
Any help would be greatly appreciated !
(As usual for a French developer, sorry for my English !)
Clarification: _maskTexture is a black & white image.
Solution:
I finally solved my problem thanks to the pointers from rotoglub and tim. Thank you both.
1/ The stencil buffer had not been created correctly. It should be initialized like this:
glBindRenderbufferOES(GL_RENDERBUFFER_OES, depthStencilRenderbuffer);
glRenderbufferStorageOES(GL_RENDERBUFFER_OES,
GL_DEPTH24_STENCIL8_OES,
backingWidth,
backingHeight);
This was the main reason my rendering was not affected by the mask.
2/ To be able to use a texture as a mask, I replaced the black color with an alpha channel and enabled blending in my rendering.
My final rendering code looks like this:
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, vertices);
glTexCoordPointer(2, GL_FLOAT, 0, texcoords);
glClearStencil(0);
glClearColor (0.0,0.0,0.0,1);
glClear (GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
// mask rendering
glColorMask( GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE );
glEnable(GL_STENCIL_TEST);
glEnable(GL_ALPHA_TEST);
glBlendFunc( GL_ONE, GL_ONE );
glAlphaFunc( GL_NOTEQUAL, 0.0 );
glStencilFunc(GL_ALWAYS, 1, 1);
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
glBindTexture(GL_TEXTURE_2D, _mask);
glDrawArrays(GL_TRIANGLE_STRIP, 4, 4);
// scene rendering
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glStencilFunc(GL_EQUAL, 1, 1);
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
glDisable(GL_STENCIL_TEST);
glDisable(GL_ALPHA_TEST);
glBindTexture(GL_TEXTURE_2D, _texture);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
Simply put, the problem is that you're just drawing a texture to the scene without doing any testing of what's in the texture. The stencil buffer doesn't care about the colors in the texture; it just checks:
Did you draw a fragment ? (Update stencil buffer) : (Don't update stencil buffer);
You're drawing a fragment for every pixel of your texture, so any mask effect in the texture is useless.
If you want to mask with a texture, you need to discard any fragments that you don't want updated in the stencil buffer.
This is done either with the discard keyword in a fragment shader (OpenGL ES 2.0), or with the alpha test (glEnable(GL_ALPHA_TEST) / glAlphaFunc) in OpenGL ES 1.1.
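For the ES 2.0 route, a fragment shader that samples the mask and discards transparent fragments would look roughly like this (a sketch; the uniform and varying names are made up for the example):
// Fragment shader source: discarded fragments never update the stencil buffer.
static const char *maskFragmentShaderSrc =
    "precision mediump float;\n"
    "uniform sampler2D u_mask;   // the mask texture\n"
    "varying vec2 v_texCoord;\n"
    "void main() {\n"
    "    vec4 m = texture2D(u_mask, v_texCoord);\n"
    "    if (m.a == 0.0) discard; // transparent: leave stencil untouched\n"
    "    gl_FragColor = m;\n"
    "}\n";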

What can make glDrawArrays with a VBO not draw anything?

I'm trying to figure out how to work with VBOs, using an OpenGL 2.0 rendering context. I've got a 2D (ortho) rendering context set up, and I can draw a simple rectangle like this:
glBegin(GL_QUADS);
glColor4f(1, 1, 1, 1);
glVertex2f(0, 0);
glVertex2f(0, 10);
glVertex2f(100, 10);
glVertex2f(100, 0);
glEnd;
But when I try to do it with a VBO, it fails. I set up the VBO like this, with the same data as before:
procedure initialize;
const
VERTICES: array[1..8] of single =
(
0, 0,
0, 10,
100, 10,
100, 0
);
begin
glEnable(GL_VERTEX_ARRAY);
glGenBuffers(1, @VBO);
glBindBuffer(GL_ARRAY_BUFFER, VBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(VERTICES), @VERTICES[1], GL_DYNAMIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, 0);
end;
and I try to draw like this:
begin
glColor4f(1, 1, 1, 1);
glEnableClientState(GL_VERTEX_ARRAY);
glBindBuffer(GL_ARRAY_BUFFER, VBO);
glVertexPointer(2, GL_FLOAT, 0, 0);
glDrawArrays(GL_QUADS, 0, 1);
glBindBuffer(GL_ARRAY_BUFFER, 0);
end;
From everything I've read, that ought to work. I run it through gDEBugger and there are no GL errors, and the data in the VBO is getting loaded correctly, but nothing actually appears when I swap the buffers. Changing the data in the vertex array to use normalized coordinates (from 0..1.0) also ends up displaying nothing. Any idea what I'm doing wrong? (Assume the render context itself is set up correctly and the GL function pointers have all been loaded correctly.)
glDrawArrays(GL_QUADS, 0, 1);
Looks like you're trying to draw a quad with a single vertex. You need three more:
glDrawArrays(GL_QUADS, 0, 4);
Or switch to points:
glDrawArrays(GL_POINTS, 0, 1);
