What can make glDrawArrays with a VBO not draw anything? - delphi

I'm trying to figure out how to work with VBOs, using an OpenGL 2.0 rendering context. I've got a 2D (ortho) rendering context set up, and I can draw a simple rectangle like this:
glBegin(GL_QUADS);
glColor4f(1, 1, 1, 1);
glVertex2f(0, 0);
glVertex2f(0, 10);
glVertex2f(100, 10);
glVertex2f(100, 0);
glEnd;
But when I try to do it with a VBO, it fails. I set up the VBO like this, with the same data as before:
procedure initialize;
const
  VERTICES: array[1..8] of single =
  (
    0, 0,
    0, 10,
    100, 10,
    100, 0
  );
begin
  glEnable(GL_VERTEX_ARRAY);
  glGenBuffers(1, @VBO);
  glBindBuffer(GL_ARRAY_BUFFER, VBO);
  glBufferData(GL_ARRAY_BUFFER, sizeof(VERTICES), @VERTICES[1], GL_DYNAMIC_DRAW);
  glBindBuffer(GL_ARRAY_BUFFER, 0);
end;
and I try to draw like this:
begin
  glColor4f(1, 1, 1, 1);
  glEnableClientState(GL_VERTEX_ARRAY);
  glBindBuffer(GL_ARRAY_BUFFER, VBO);
  glVertexPointer(2, GL_FLOAT, 0, 0);
  glDrawArrays(GL_QUADS, 0, 1);
  glBindBuffer(GL_ARRAY_BUFFER, 0);
end;
From everything I've read, that ought to work. I run it through gDEBugger and there are no GL errors, and the data in the VBO is getting loaded correctly, but nothing actually appears when I swap the buffers. Changing the data in the vertex array to use normalized coordinates (from 0..1.0) also ends up displaying nothing. Any idea what I'm doing wrong? (Assume the render context itself is set up correctly and the GL function pointers have all been loaded correctly.)

glDrawArrays(GL_QUADS, 0, 1);
Looks like you're trying to draw a quad with a single vertex. You need three more:
glDrawArrays(GL_QUADS, 0, 4);
Or switch to points:
glDrawArrays(GL_POINTS, 0, 1);
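For reference, the last argument of glDrawArrays is the number of vertices to process, not the number of primitives. A minimal sketch of the corrected draw path (plain C GL calls; VBO is the buffer handle created in the setup above):

glColor4f(1, 1, 1, 1);
glEnableClientState(GL_VERTEX_ARRAY);
glBindBuffer(GL_ARRAY_BUFFER, VBO);
glVertexPointer(2, GL_FLOAT, 0, 0);   /* with a VBO bound, the last arg is a byte offset */
glDrawArrays(GL_QUADS, 0, 4);         /* 4 vertices -> one quad */
glBindBuffer(GL_ARRAY_BUFFER, 0);
glDisableClientState(GL_VERTEX_ARRAY);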

Related

Rendering cube on top of square having video feed as texture

I am trying to develop a POC that visualizes a 3D object on top of a camera feed. The kind of 3D object I have easily gets rendered using this project, and I am referring to Apple's Camera Ripple sample code for showing the camera feed. Both of these are separate objects in the same context, and each uses its own shader program. I am confused about how to switch from one program to the other.
My glkView:drawInRect: method looks like this:
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect
{
    glClear(GL_COLOR_BUFFER_BIT);
    glUseProgram(_program);
    if (_ripple)
    {
        glDrawElements(GL_TRIANGLE_STRIP, [_ripple getIndexCount], GL_UNSIGNED_SHORT, 0);
    }
    glUseProgram(_program1);
    glClearColor(1.0, 1.0, 1.0, 1.0);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    // Set View Matrices
    [self updateViewMatrices];
    glUniformMatrix4fv(_uniforms.uProjectionMatrix, 1, 0, _projectionMatrix1.m);
    glUniformMatrix4fv(_uniforms.uModelViewMatrix, 1, 0, _modelViewMatrix1.m);
    glUniformMatrix3fv(_uniforms.uNormalMatrix, 1, 0, _normalMatrix1.m);
    // Attach Texture
    glUniform1i(_uniforms.uTexture, 0);
    // Set View Mode
    glUniform1i(_uniforms.uMode, self.viewMode.selectedSegmentIndex);
    // Enable Attributes
    glEnableVertexAttribArray(_attributes.aVertex);
    glEnableVertexAttribArray(_attributes.aNormal);
    glEnableVertexAttribArray(_attributes.aTexture);
    // Load OBJ Data
    glVertexAttribPointer(_attributes.aVertex, 3, GL_FLOAT, GL_FALSE, 0, cubeOBJVerts);
    glVertexAttribPointer(_attributes.aNormal, 3, GL_FLOAT, GL_FALSE, 0, cubeOBJNormals);
    glVertexAttribPointer(_attributes.aTexture, 2, GL_FLOAT, GL_FALSE, 0, cubeOBJTexCoords);
    // Load MTL Data
    for (int i = 0; i < cubeMTLNumMaterials; i++)
    {
        glUniform3f(_uniforms.uAmbient, cubeMTLAmbient[i][0], cubeMTLAmbient[i][1], cubeMTLAmbient[i][2]);
        glUniform3f(_uniforms.uDiffuse, cubeMTLDiffuse[i][0], cubeMTLDiffuse[i][1], cubeMTLDiffuse[i][2]);
        glUniform3f(_uniforms.uSpecular, cubeMTLSpecular[i][0], cubeMTLSpecular[i][1], cubeMTLSpecular[i][2]);
        glUniform1f(_uniforms.uExponent, cubeMTLExponent[i]);
        // Draw scene by material group
        glDrawArrays(GL_TRIANGLES, cubeMTLFirst[i], cubeMTLCount[i]);
    }
    // Disable Attributes
    glDisableVertexAttribArray(_attributes.aVertex);
    glDisableVertexAttribArray(_attributes.aNormal);
    glDisableVertexAttribArray(_attributes.aTexture);
}
This causes a crash, throwing the error gpus_ReturnGuiltyForHardwareRestart.
I found that the solution to my problem is resetting everything between the use of the two programs. Now my glkView:drawInRect: looks like this:
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect
{
    glClear(GL_COLOR_BUFFER_BIT);
    glUseProgram(_program);
    if (_ripple)
    {
        glDrawElements(GL_TRIANGLE_STRIP, [_ripple getIndexCount], GL_UNSIGNED_SHORT, 0);
        [self resetProgrameOne];
    }
    glUseProgram(_program1);
    glClear(GL_DEPTH_BUFFER_BIT);
    // Set View Matrices
    [self updateViewMatrices];
    glUniformMatrix4fv(_uniforms.uProjectionMatrix, 1, 0, _projectionMatrix1.m);
    glUniformMatrix4fv(_uniforms.uModelViewMatrix, 1, 0, _modelViewMatrix1.m);
    glUniformMatrix3fv(_uniforms.uNormalMatrix, 1, 0, _normalMatrix1.m);
    // Attach Texture
    glUniform1i(_uniforms.uTexture, 0);
    // Set View Mode
    glUniform1i(_uniforms.uMode, 1);
    // Enable Attributes
    glEnableVertexAttribArray(_attributes.aVertex);
    glEnableVertexAttribArray(_attributes.aNormal);
    glEnableVertexAttribArray(_attributes.aTexture);
    // Load OBJ Data
    glVertexAttribPointer(_attributes.aVertex, 3, GL_FLOAT, GL_FALSE, 0, table1OBJVerts);
    glVertexAttribPointer(_attributes.aNormal, 3, GL_FLOAT, GL_FALSE, 0, table1OBJNormals);
    glVertexAttribPointer(_attributes.aTexture, 2, GL_FLOAT, GL_FALSE, 0, table1OBJTexCoords);
    // Load MTL Data
    for (int i = 0; i < table1MTLNumMaterials; i++)
    {
        glUniform3f(_uniforms.uAmbient, table1MTLAmbient[i][0], table1MTLAmbient[i][1], table1MTLAmbient[i][2]);
        glUniform3f(_uniforms.uDiffuse, table1MTLDiffuse[i][0], table1MTLDiffuse[i][1], table1MTLDiffuse[i][2]);
        glUniform3f(_uniforms.uSpecular, table1MTLSpecular[i][0], table1MTLSpecular[i][1], table1MTLSpecular[i][2]);
        glUniform1f(_uniforms.uExponent, table1MTLExponent[i]);
        // Draw scene by material group
        glDrawArrays(GL_TRIANGLES, table1MTLFirst[i], table1MTLCount[i]);
    }
    // Disable Attributes
    glDisableVertexAttribArray(_attributes.aVertex);
    glDisableVertexAttribArray(_attributes.aNormal);
    glDisableVertexAttribArray(_attributes.aTexture);
}
The resetProgrameOne method resets all the necessary state by deleting the buffers and disabling the vertex attribute arrays with glDisableVertexAttribArray.
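For context, a sketch of the kind of reset that avoids this crash (plain C GL calls; the attribute indices are assumptions, not the poster's actual resetProgrameOne):

/* Disable every attribute array the first program enabled, and unbind
   any buffers it left bound, so the second program's attribute setup
   isn't interpreted against stale state. */
glDisableVertexAttribArray(0);
glDisableVertexAttribArray(1);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);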

iOS OpenGL ES VBO weird rendering

I'm not familiar with OpenGL, but I have to improve code written by other people. The problem is that the old code copied the vertex data on every draw call, which is very CPU-intensive, so I'm trying to rewrite it to use a VBO. The new code draws, but there is a weird size problem: the jigsaw pieces come out bigger, and only sometimes exactly the same size as with the old code.
I understand that a VBO just creates the buffer once in GPU memory, but something is obviously wrong with my usage. The pieces are rendered with this code:
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, background);
glEnableVertexAttribArray(0);
glDrawElements(GL_TRIANGLE_STRIP, sizeof(indicies)/sizeof(indicies[0]), GL_UNSIGNED_BYTE, indicies);
And the shadow with this code:
glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(0);
glDrawElements(GL_TRIANGLE_STRIP, sizeof(indicies)/sizeof(indicies[0]), GL_UNSIGNED_BYTE, indicies);
glDisableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
And this is the result:
As you can see, the two pieces are stacked together, but the shadow is rendered wrong. What could be causing this weird problem?
UPDATE
The piece of code where I create the buffer:
glGenBuffers(1, &vertexBuffer);
glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(background), background, GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, 0);
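One thing worth noting about the two snippets above: when a VBO is bound, the last argument of glVertexAttribPointer is interpreted as a byte offset into that buffer rather than a client-memory pointer, so the two draw paths only read the same data if the buffer contents still match the client array. A sketch of the two interpretations (plain C; vertexBuffer and background as above):

/* Client-memory path: no VBO bound, the last arg is a real pointer. */
glBindBuffer(GL_ARRAY_BUFFER, 0);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, background);

/* VBO path: buffer bound, the last arg is a byte offset into it. */
glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, 0);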

GLKBaseEffect not loading texture (texture appears black on object)

I'm using GLKit in an OpenGL project. Everything is based on GLKView and GLKBaseEffect (no custom shaders). In my project I have several views that contain GLKViews showing 3D objects, and occasionally several of those views can be "open" at once (i.e. are in the modal view stack).
While everything had been working great until now, in a new view I was creating I needed a textured rectangle to simulate a measuring tape in the 3D world of my app. For some unknown reason, in that view only, the texture isn't loaded correctly into the OpenGL context: GLKTextureLoader loads the texture fine, but when drawing, the rectangle is black, and looking at the OpenGL frame in the debugger I can see that an empty texture is bound (there's a reference to a texture, but it's all zeroed out or null).
The shape I'm drawing is defined by the following (it was originally a triangle strip, but I switched to triangles to make sure that wasn't the issue):
static const GLfloat initTape[] = {
    -TAPE_WIDTH / 2.0f, 0, 0,
     TAPE_WIDTH / 2.0f, 0, 0,
    -TAPE_WIDTH / 2.0f, TAPE_INIT_LENGTH, 0,
     TAPE_WIDTH / 2.0f, 0, 0,
     TAPE_WIDTH / 2.0f, TAPE_INIT_LENGTH, 0,
    -TAPE_WIDTH / 2.0f, TAPE_INIT_LENGTH, 0,
};
static const GLfloat initTapeTex[] = {
    0, 0,
    1, 0,
    0, 1.0,
    1, 0,
    1, 1,
    0, 1,
};
I set up the effect variables like this:
effect.transform.modelviewMatrix = modelview;
effect.light0.enabled = GL_FALSE;
// Projection setup
GLfloat ratio = self.view.bounds.size.width/self.view.bounds.size.height;
GLKMatrix4 projection = GLKMatrix4MakePerspective(GLKMathDegreesToRadians(self.fov), ratio, 0.1f, 1000.0f);
effect.transform.projectionMatrix = projection;
// Set the color of the wireframe.
if (tapeTex == nil) {
    NSError* error;
    tapeTex = [GLKTextureLoader textureWithContentsOfFile:[[[NSBundle mainBundle] URLForResource:@"ruler_texture" withExtension:@"png"] path] options:nil error:&error];
}
effect.texture2d0.enabled = GL_TRUE;
effect.texture2d0.target = GLKTextureTarget2D;
effect.texture2d0.envMode = GLKTextureEnvModeReplace;
effect.texture2d0.name = tapeTex.name;
And the rendering loop is:
[effect prepareToDraw];
glDisable(GL_DEPTH_TEST);
glDisable(GL_CULL_FACE);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
glEnableVertexAttribArray(GLKVertexAttribPosition);
glEnableVertexAttribArray(GLKVertexAttribTexCoord0);
glVertexAttribPointer(GLKVertexAttribPosition, COORDS, GL_FLOAT, GL_FALSE, 0, tapeVerts);
glVertexAttribPointer(GLKVertexAttribTexCoord0, 2, GL_FLOAT, GL_FALSE, 0, tapeTexCoord);
glDrawArrays(GL_TRIANGLES, 0, TAPE_VERTS);
glDisableVertexAttribArray(GLKVertexAttribPosition);
glDisableVertexAttribArray(GLKVertexAttribTexCoord0);
I've also tested the texture itself in another view with other objects and it works fine, so it's not the texture file's fault.
Any help would be greatly appreciated, as I've been stuck on this issue for over 3 days.
Update: Also, there are no GL errors during the rendering loop.
After many, many days I've finally found my mistake: when using multiple OpenGL contexts, it's important to create the GLKTextureLoader with a sharegroup, or else the textures aren't necessarily loaded into the right context.
Instead of using the class method textureWithContentsOfFile:, every context needs its own GLKTextureLoader initialized with context.sharegroup, and only that texture loader should be used for that view. (Textures actually can be shared between different contexts, but I didn't need that feature of sharegroups.)
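A sketch of that per-context loader (the view/context property names are assumptions about the surrounding class):

GLKTextureLoader *loader =
    [[GLKTextureLoader alloc] initWithSharegroup:self.glkView.context.sharegroup];
NSURL *url = [[NSBundle mainBundle] URLForResource:@"ruler_texture" withExtension:@"png"];
[loader textureWithContentsOfURL:url options:nil queue:NULL
               completionHandler:^(GLKTextureInfo *tex, NSError *error) {
    // With queue NULL this runs on the main queue; hand tex.name to
    // effect.texture2d0.name before the next draw.
}];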
Here's an easy tutorial: http://games.ianterrell.com/how-to-texturize-objects-with-glkit/
I think it will help you.

OpenGLES2 iOS vertex array objects causing bad access error on drawElements

I've been bashing my head against the wall this afternoon trying to persuade my OpenGL ES 2.0 code to perform correctly when moving from using VBOs alone to VAOs with VBOs. Basically I'm working my way through Apple's "expert" advice on OpenGL ES, and moving to vertex array objects was at the top of the list ...
I've reviewed the similar question and response here, but that didn't seem to help me, other than to reassure me that other people run into similar problems :(
My scenario is that I have approximately 500 rectangular textures moving around the screen. The code all works fine without VAOs, but when I define USE_VAO (my constant) it crashes on the first glDrawElements call. I'm obviously not understanding VAOs properly ... but I can't see the error of my ways!
The setupBeforeRender method is called as the last part of my setup before entering the render loop.
-(void) setupBeforeRender {
    glClearColor(0.6, 0.6, 0.6, 1);
    glViewport(0, 0, self.frame.size.width, self.frame.size.height);
    glEnable(GL_DEPTH_TEST);
    glUniform1i(_textureUniform, 0);
    glActiveTexture(GL_TEXTURE0);
    glEnableVertexAttribArray(_positionSlot);
    glEnableVertexAttribArray(_colorSlot);
    glEnableVertexAttribArray(_texCoordSlot);
    glGenBuffers(1, &_indexBuffer);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, _indexBuffer);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(Indices), Indices, GL_STATIC_DRAW);
}
And here's the render method
- (void)render:(CADisplayLink*)displayLink {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    // Model view matrix and projection code removed for clarity
    GLsizei stride = sizeof(Vertex);
    const GLvoid* colourOffset = (GLvoid *) sizeof(float[3]);
    const GLvoid* textureOffset = (GLvoid *) sizeof(float[7]);
    for (my objectToDraw in objectToDrawArray)
    {
        if (objectToDraw.vertexBufferObject == 0)
        {
#ifdef USE_VAO
            glGenVertexArraysOES(1, &_vao);
            glBindVertexArrayOES(_vao);
            glVertexAttribPointer(_positionSlot, 3, GL_FLOAT, GL_FALSE, stride, 0);
            glVertexAttribPointer(_colorSlot, 4, GL_FLOAT, GL_FALSE, stride, colourOffset);
            glVertexAttribPointer(_texCoordSlot, 2, GL_FLOAT, GL_FALSE, stride, textureOffset);
            objectToDraw.vertexBufferObject = [objectToDraw createAndBindVBO];
            objectToDraw.vertexArrayObject = _vao;
            glBindBuffer(GL_ARRAY_BUFFER, 0);
            glBindVertexArrayOES(0);
#else
            objectToDraw.vertexBufferObject = [objectToDraw createAndBindVBO];
#endif
        }
        // Texture binding removed for clarity
#ifdef USE_VAO
        // This code crashes with EXC_BAD_ACCESS on the glDrawElements
        glBindVertexArrayOES(objectToDraw.vertexArrayObject);
        glDrawElements(GL_TRIANGLES, sizeof(Indices) / sizeof(Indices[0]), GL_UNSIGNED_SHORT, 0);
        glBindVertexArrayOES(0);
#else
        // This path works fine. So turning VAO off works :(
        glBindBuffer(GL_ARRAY_BUFFER, storyTile.vertexBufferObject);
        glVertexAttribPointer(_positionSlot, 3, GL_FLOAT, GL_FALSE, stride, 0);
        glVertexAttribPointer(_colorSlot, 4, GL_FLOAT, GL_FALSE, stride, colourOffset);
        glVertexAttribPointer(_texCoordSlot, 2, GL_FLOAT, GL_FALSE, stride, textureOffset);
        glDrawElements(GL_TRIANGLES, sizeof(Indices) / sizeof(Indices[0]), GL_UNSIGNED_SHORT, 0);
#endif
    } // End for each object
    [_context presentRenderbuffer:GL_RENDERBUFFER];
}
Finally, my create-and-bind-VBO method looks like this:
-(GLuint) createAndBindVBO {
    const float* rgba = CGColorGetComponents([self.colour CGColor]);
    Vertex Vertices[] = {
        {{0, 1, 0}, {1, 0, 1, 1}, {0, 1}},
        {{0, 0, 0}, {1, 0, 1, 1}, {0, 0}},
        {{1, 1, 0}, {1, 0, 1, 1}, {1, 1}},
        {{1, 0, 0}, {1, 0, 1, 1}, {1, 0}}
    };
    // Code removed for clarity - sets up geometry and colours
    GLuint vertexBuffer;
    glGenBuffers(1, &vertexBuffer);
    glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer);
    glBufferData(GL_ARRAY_BUFFER, sizeof(Vertices), Vertices, GL_STATIC_DRAW);
    return vertexBuffer;
}
I've tried various permutations of this and have sprinkled the code with glGetError() to see if that helps point to where the problem arises. Alas I get no errors, other than the BAD_ACCESS crash on the drawElements call.
EDIT: As suggested, this unfortunately also doesn't work:
objectToDraw.vertexBufferObject = [objectToDraw createVBO];
glGenVertexArraysOES(1,&_vao);
glBindVertexArrayOES(_vao);
glBindBuffer(GL_ARRAY_BUFFER, objectToDraw.vertexBufferObject);
glVertexAttribPointer(_positionSlot, 3, GL_FLOAT, GL_FALSE, stride, 0);
glVertexAttribPointer(_colorSlot, 4, GL_FLOAT, GL_FALSE, stride, colourOffset);
glVertexAttribPointer(_texCoordSlot, 2, GL_FLOAT, GL_FALSE,stride, textureOffset);
objectToDraw.vertexArrayObject = _vao;
glBindVertexArrayOES(0);
I must be doing something dumb with the vertex array object ... but can someone figure out what the problem is?
The vertex array enabled flags are part of the VAO state, so you need to enable the vertex attribute arrays using glEnableVertexAttribArray while the VAO is bound.
From: http://www.khronos.org/registry/gles/extensions/OES/OES_vertex_array_object.txt
The resulting vertex array object is a new state vector, comprising all the state values (listed in Table 6.2, except ARRAY_BUFFER_BINDING):
VERTEX_ATTRIB_ARRAY_ENABLED,
VERTEX_ATTRIB_ARRAY_SIZE,
VERTEX_ATTRIB_ARRAY_STRIDE,
VERTEX_ATTRIB_ARRAY_TYPE,
VERTEX_ATTRIB_ARRAY_NORMALIZED,
VERTEX_ATTRIB_ARRAY_POINTER,
ELEMENT_ARRAY_BUFFER_BINDING,
VERTEX_ATTRIB_ARRAY_BUFFER_BINDING.
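Concretely, a sketch of VAO creation that records the enables and the index buffer binding inside the VAO (reusing the question's slot names, stride, and offsets; a sketch, not a drop-in fix):

glGenVertexArraysOES(1, &_vao);
glBindVertexArrayOES(_vao);
glBindBuffer(GL_ARRAY_BUFFER, objectToDraw.vertexBufferObject);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, _indexBuffer);   // ELEMENT_ARRAY_BUFFER_BINDING is VAO state
glEnableVertexAttribArray(_positionSlot);              // enables must happen while the VAO is bound
glEnableVertexAttribArray(_colorSlot);
glEnableVertexAttribArray(_texCoordSlot);
glVertexAttribPointer(_positionSlot, 3, GL_FLOAT, GL_FALSE, stride, 0);
glVertexAttribPointer(_colorSlot, 4, GL_FLOAT, GL_FALSE, stride, colourOffset);
glVertexAttribPointer(_texCoordSlot, 2, GL_FLOAT, GL_FALSE, stride, textureOffset);
glBindVertexArrayOES(0);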
You should call glVertexAttribPointer after the glBindBuffer call.
I had a similar problem and didn't know what was causing it.
Eventually it turned out that I had to pass a constant vertex count to glDrawArrays; computing the count with sizeof() was not giving the right number.
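(A likely cause, sketched below: sizeof gives the array's byte size only where the actual array is in scope; applied to a pointer it yields the size of the pointer itself, so a vertex count computed from sizeof in the wrong place comes out wrong.)

static const GLfloat quad[8] = { 0, 0, 0, 10, 100, 0, 100, 10 };

static void drawDirect(void) {
    /* The array is in scope: sizeof(quad) is 32 bytes, count is 4. */
    glDrawArrays(GL_TRIANGLE_STRIP, 0, sizeof(quad) / (2 * sizeof(GLfloat)));
}

static void drawViaPointer(const GLfloat *verts) {
    /* verts is a pointer: sizeof(verts) is 4 or 8 bytes, count is wrong. */
    glDrawArrays(GL_TRIANGLE_STRIP, 0, sizeof(verts) / (2 * sizeof(GLfloat)));
}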

Use stencil buffer with iOS

I'm trying to use the stencil buffer to display part of my rendering through a mask taken from a texture, but my render is displayed without any mask effect.
It's for a 2D iOS project, with OpenGL ES 2.0.
This is the relevant part of my code:
glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT );
glEnable( GL_STENCIL_TEST );
// mask rendering
glColorMask( GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE );
glStencilFunc( GL_ALWAYS, 1, 1 );
glStencilOp( GL_KEEP, GL_KEEP, GL_REPLACE );
glBindTexture(GL_TEXTURE_2D, _maskTexture);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
// scene rendering
glColorMask( GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE );
glStencilFunc( GL_EQUAL, 1, 1 );
glStencilOp( GL_KEEP, GL_KEEP, GL_KEEP );
glBindTexture(GL_TEXTURE_2D, _viewTexture);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
Any help would be greatly appreciated!
(As usual for a French developer, sorry for my English!)
Clarification: _maskTexture is a black & white picture.
Solution:
I finally solved my problem thanks to the pointers from rotoglub and tim. Thank you both.
1/ The stencil buffer had not been created correctly. It should be initialized like this:
glBindRenderbufferOES(GL_RENDERBUFFER_OES, depthStencilRenderbuffer);
glRenderbufferStorageOES(GL_RENDERBUFFER_OES, GL_DEPTH24_STENCIL8_OES,
                         backingWidth, backingHeight);
This was the main reason my rendering was not affected by the mask.
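(For completeness, a packed depth/stencil renderbuffer like this also has to be attached to both attachment points of the framebuffer; a sketch using the same OES names, assuming the depthStencilRenderbuffer handle from above:)

glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_DEPTH_ATTACHMENT_OES,
                             GL_RENDERBUFFER_OES, depthStencilRenderbuffer);
glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_STENCIL_ATTACHMENT_OES,
                             GL_RENDERBUFFER_OES, depthStencilRenderbuffer);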
2/ To be able to use a texture as a mask, I replaced the black color with an alpha channel and enabled blending in my rendering.
My final rendering code looks like this:
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, vertices);
glTexCoordPointer(2, GL_FLOAT, 0, texcoords);
glClearStencil(0);
glClearColor (0.0,0.0,0.0,1);
glClear (GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
// mask rendering
glColorMask( GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE );
glEnable(GL_STENCIL_TEST);
glEnable(GL_ALPHA_TEST);
glBlendFunc( GL_ONE, GL_ONE );
glAlphaFunc( GL_NOTEQUAL, 0.0 );
glStencilFunc(GL_ALWAYS, 1, 1);
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
glBindTexture(GL_TEXTURE_2D, _mask);
glDrawArrays(GL_TRIANGLE_STRIP, 4, 4);
// scene rendering
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glStencilFunc(GL_EQUAL, 1, 1);
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
glDisable(GL_STENCIL_TEST);
glDisable(GL_ALPHA_TEST);
glBindTexture(GL_TEXTURE_2D, _texture);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
Simply put, the problem is that you're just drawing a texture to the scene without doing any testing of what's in the texture. The stencil buffer doesn't care about the colors in the texture; it just checks:
Did you draw a fragment ? (Update stencil buffer) : (Don't update stencil buffer);
You're drawing a fragment for every pixel of your texture, so any mask effect in the texture is useless.
If you want to mask with a texture, you need to discard any fragments that you don't want updated in the stencil buffer.
This is done either with the discard keyword in a fragment shader, or with the alpha test (glEnable(GL_ALPHA_TEST) plus glAlphaFunc) in OpenGL ES 1.1.
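A sketch of the shader approach (GLSL ES source held in a C string; the sampler and varying names are assumptions):

static const char *maskFragmentShader =
    "precision mediump float;\n"
    "uniform sampler2D uMask;\n"
    "varying vec2 vTexCoord;\n"
    "void main() {\n"
    "    vec4 texel = texture2D(uMask, vTexCoord);\n"
    "    if (texel.a == 0.0) discard;  // masked-out texels never touch the stencil\n"
    "    gl_FragColor = texel;\n"
    "}\n";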
