Memory usage keeps increasing over time (GLKit - iOS)

I've almost finished my app. One of the views uses GLKit. I just have a problem with memory: when the GLKView is displayed, memory consumption rises constantly (as seen in Instruments), and at a certain point the app obviously crashes.
I don't know much about GLKit, so I hope you can help me.
The problem is a 3D arrow that I'm displaying. If I don't draw it, nothing else causes any problem.
This is the header file that contains the arrow vertex data:
#import <GLKit/GLKit.h>

struct arrowVertexData
{
    GLKVector3 vertex;
    GLKVector3 normal;
    GLKVector2 texCoord;
};
typedef struct arrowVertexData arrowVertexData;
typedef arrowVertexData* vertexDataPtr;

static const arrowVertexData MeshVertexData[] = {
    {/*v:*/{{-0.000004, 0.0294140, -0.0562387}}, /*n:*/{{0.000000, 1.000000, 0.000000}}, /*t:*/{{0.500000, 0.333333}}},
    ... etc...
And this is the draw code:
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect {
    glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    [self.arrowEffect prepareToDraw];
    //glGenVertexArraysOES(1, &arrowVertexArray);
    //glBindVertexArrayOES(arrowVertexArray);
    glGenBuffers(1, &arrowVertexBuffer);
    glBindBuffer(GL_ARRAY_BUFFER, arrowVertexBuffer);
    glBufferData(GL_ARRAY_BUFFER, sizeof(MeshVertexData), MeshVertexData, GL_STATIC_DRAW);
    glEnableVertexAttribArray(GLKVertexAttribPosition);
    glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE, sizeof(arrowVertexData), 0);
    glEnableVertexAttribArray(GLKVertexAttribNormal);
    glVertexAttribPointer(GLKVertexAttribNormal, 3, GL_FLOAT, GL_TRUE, sizeof(arrowVertexData), (void *)offsetof(arrowVertexData, normal));
    glBindVertexArrayOES(arrowVertexArray);
    // Render the object with GLKit
    glDrawArrays(GL_TRIANGLES, 0, sizeof(MeshVertexData) / sizeof(arrowVertexData));
    // reset buffers
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
    // disable attributes
    glDisableVertexAttribArray(GLKVertexAttribNormal);
    glDisableVertexAttribArray(GLKVertexAttribPosition);
}
Any suggestions?
Thank you very much for your help!

You are creating a new vertex buffer object (VBO) each time drawInRect is called, and never deleting it. glGenBuffers and glBindBuffer set up a new buffer and make it current, but the real damage is done by glBufferData, which copies the vertex data into the new buffer.
glBindBuffer(GL_ARRAY_BUFFER, 0); resets GL to not use the buffer, and glDisableVertexAttribArray(GLKVertexAttribPosition); tells GL not to look for position data in a buffer anymore, but neither of these calls frees any memory. If you wanted to free the memory each time through, you would need to call glDeleteBuffers(1, &arrowVertexBuffer);.
A better approach would be to generate the buffer once at startup and delete it when terminating, hanging on to arrowVertexBuffer and rebinding and unbinding it each time through as needed, as well as resetting the pointers, assuming other parts of your program modify GL state.
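A minimal sketch of that split, assuming a GLKViewController subclass with setup/teardown methods (the method names, the context property, and the points where they are called are placeholders, not taken from your post):

- (void)setupGL {
    [EAGLContext setCurrentContext:self.context];
    // Create and fill the VBO exactly once.
    glGenBuffers(1, &arrowVertexBuffer);
    glBindBuffer(GL_ARRAY_BUFFER, arrowVertexBuffer);
    glBufferData(GL_ARRAY_BUFFER, sizeof(MeshVertexData), MeshVertexData, GL_STATIC_DRAW);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
}

- (void)tearDownGL {
    [EAGLContext setCurrentContext:self.context];
    // Free the GPU memory once, when the view goes away.
    glDeleteBuffers(1, &arrowVertexBuffer);
    arrowVertexBuffer = 0;
}

- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect {
    glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    [self.arrowEffect prepareToDraw];
    // Rebind the existing buffer; no allocation happens per frame.
    glBindBuffer(GL_ARRAY_BUFFER, arrowVertexBuffer);
    glEnableVertexAttribArray(GLKVertexAttribPosition);
    glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE, sizeof(arrowVertexData), 0);
    glEnableVertexAttribArray(GLKVertexAttribNormal);
    glVertexAttribPointer(GLKVertexAttribNormal, 3, GL_FLOAT, GL_TRUE, sizeof(arrowVertexData), (void *)offsetof(arrowVertexData, normal));
    glDrawArrays(GL_TRIANGLES, 0, sizeof(MeshVertexData) / sizeof(arrowVertexData));
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    glDisableVertexAttribArray(GLKVertexAttribNormal);
    glDisableVertexAttribArray(GLKVertexAttribPosition);
}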
It looks like you also started down the path of using a vertex array object (VAO), which would be another way to capture state once for reuse, although it may be better to wait until you have the VBO working correctly before attempting that. Both VBOs and VAOs are methods for caching state that evolved over time to reduce the per-frame load of your rendering loop, but VAOs cast a much broader net, which can make them trickier to get right.
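If you do go the VAO route later, the shape is similar; a sketch under the same assumptions (glGenVertexArraysOES and glBindVertexArrayOES come from the OES_vertex_array_object extension, which iOS ES 2 contexts support):

- (void)setupGL {
    [EAGLContext setCurrentContext:self.context];
    glGenVertexArraysOES(1, &arrowVertexArray);
    glBindVertexArrayOES(arrowVertexArray);
    // Everything set while the VAO is bound is recorded in it:
    // the buffer binding, the enabled attributes, and the pointers.
    glGenBuffers(1, &arrowVertexBuffer);
    glBindBuffer(GL_ARRAY_BUFFER, arrowVertexBuffer);
    glBufferData(GL_ARRAY_BUFFER, sizeof(MeshVertexData), MeshVertexData, GL_STATIC_DRAW);
    glEnableVertexAttribArray(GLKVertexAttribPosition);
    glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE, sizeof(arrowVertexData), 0);
    glEnableVertexAttribArray(GLKVertexAttribNormal);
    glVertexAttribPointer(GLKVertexAttribNormal, 3, GL_FLOAT, GL_TRUE, sizeof(arrowVertexData), (void *)offsetof(arrowVertexData, normal));
    glBindVertexArrayOES(0);
}

- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    [self.arrowEffect prepareToDraw];
    glBindVertexArrayOES(arrowVertexArray); // restores all the recorded state
    glDrawArrays(GL_TRIANGLES, 0, sizeof(MeshVertexData) / sizeof(arrowVertexData));
    glBindVertexArrayOES(0);
}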
As a general suggestion, you may be able to get more attention for a question like this by adding a more general and popular tag, such as [opengl].
Another debugging tool you should definitely try is OpenGL Profiler. If you did not install it with Xcode, look it up in the documentation and you should find a link to download the graphics tools package. Its Resources window will let you track the buffer objects in use.

Have you tried running the static analyzer in Xcode?
It's very good at pointing out allocated memory that isn't released and that kind of thing.
To use it, hold the mouse down on the "Run" button and select "Analyze" from the drop-down list.
If it finds anything, it usually points it out in blue, and you can see the lines tracing back to where memory is being allocated and not released, etc...
Let me know if that has any effect.

Related

iOS OpenGL drawing lines : Not anti-aliasing

I'm trying to render a waveform in an EAGLContext view, and I can't for the life of me get it to anti-alias. Is anything clearly wrong with my OpenGL code? Is any more information required?
glLineWidth( 0.4f);// - pass * 1.0f);
glHint(GL_POINT_SMOOTH_HINT, GL_NICEST);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_BLEND);
glEnable(GL_LINE_SMOOTH);
glColor4f(1., 1., 1., 1.);
// Set up vertex pointer,
glVertexPointer(2, GL_FLOAT, 0, oscilLine);
// and draw the line.
glDrawArrays(GL_LINE_STRIP, 0, kDefaultDrawSamples);
Is anything clearly wrong with my OpenGL code?
glHint(GL_POINT_SMOOTH_HINT, GL_NICEST);
^^^^^^^^^^^^^^^^^^^^
Try changing this to GL_LINE_SMOOTH_HINT?
Also, since you're using the deprecated 1.x APIs, you should use glGet with GL_LINE_WIDTH_RANGE and GL_LINE_WIDTH_GRANULARITY to verify that the 0.4 value is actually a supported width. You also need to ensure that the rendering target you've created has bits for an alpha channel. Can you add the context creation code?
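The width check could look something like this (a sketch; note that in the OpenGL ES 1.1 headers the token is spelled GL_SMOOTH_LINE_WIDTH_RANGE rather than the desktop GL_LINE_WIDTH_RANGE):

GLfloat range[2] = {0.0f, 0.0f};
// Query the supported range of smooth (antialiased) line widths.
glGetFloatv(GL_SMOOTH_LINE_WIDTH_RANGE, range);
if (0.4f < range[0] || 0.4f > range[1]) {
    NSLog(@"0.4 is outside the supported smooth line width range [%f, %f]",
          range[0], range[1]);
}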
Finally, it's not quite canon, but according to this article this mechanism of line smoothing doesn't work on iOS devices, though it apparently may on the simulator.

GLKit Doesn't draw GL_POINTS or GL_LINES

I am working hard on a new iOS game that is drawn only with procedurally generated lines. All is working well, except for a few strange hiccups with drawing some primitives.
I am at a point where I need to implement text, and the characters are set up to be a series of points in an array. When I go to draw the points (which are CGPoints) some of the drawing modes are working funny.
effect.transform.modelviewMatrix = matrix;
[effect prepareToDraw];
glEnableVertexAttribArray(GLKVertexAttribPosition);
glVertexAttribPointer(GLKVertexAttribPosition, 2, GL_FLOAT, 0, 0, &points);
glDrawArrays(GL_POINTS, 0, ccc);
I am using this code to draw from the array, and when the mode is set to GL_LINE_LOOP or GL_LINE_STRIP all works well. But if I set it to GL_POINTS, I get a gpus_ReturnGuiltyForHardwareRestart error. And if I try GL_LINES it just doesn't draw anything.
What could possibly be going on?
When you draw with GL_POINTS in ES2 or ES3, you need to specify gl_PointSize in the vertex shader or you'll get undefined behavior (ugly rendering on device at best, the crash you're seeing at worst). The vertex shader GLKBaseEffect uses doesn't do gl_PointSize, so you can't use it with GL_POINTS. You'll need to implement your own shaders. (For a starting point, try the ones in the "OpenGL Game" template you get when creating a new Xcode project, or using the Xcode Frame Debugger to look at the GLSL that GLKBaseEffect generates.)
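For illustration, a minimal vertex shader that sets gl_PointSize, written as a C string you could hand to glShaderSource (the attribute and uniform names here are placeholders, not what GLKBaseEffect generates):

// Hypothetical minimal point vertex shader; the key line is gl_PointSize.
static const GLchar *pointVertexShaderSource =
    "attribute vec4 position;\n"
    "uniform mat4 modelViewProjectionMatrix;\n"
    "void main() {\n"
    "    gl_Position = modelViewProjectionMatrix * position;\n"
    "    gl_PointSize = 4.0; // without this, GL_POINTS is undefined in ES 2\n"
    "}\n";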
GL_LINES should work fine as long as you're setting an appropriate width with glLineWidth() in client code.

OpenGL ES iOS drawing performance a lot slower with VBOs than without

I've recently changed drawing in my current project from standard drawing out of a memory array to VBOs. To my surprise, the framerate dropped significantly, from 60fps to 30fps, when drawing a model with 1200 verts 8 times. Further profiling showed that glDrawElements took 10 times as long when using VBOs compared to drawing from memory.
I am really puzzled why this is happening. Does anyone know what could be the cause for a performance decrease?
I am testing on an iPhone 5 running iOS 6.1.2.
I've isolated my VBO handling into a single function where I create the vertex/index buffers once, statically, at the top of the function. I can switch between normal and VBO rendering with an #ifdef USE_VBO:
- (void)drawDuck:(Toy*)toy reflection:(BOOL)reflection
{
    ModelOBJ* model = _duck[0].model;
    int stride = sizeof(ModelOBJ::Vertex);

#define USE_VBO
#ifdef USE_VBO
    static bool vboInitialized = false;
    static unsigned int vbo, ibo;
    if (!vboInitialized) {
        vboInitialized = true;
        // Generate VBO
        glGenBuffers(1, &vbo);
        int numVertices = model->getNumberOfVertices();
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, stride*numVertices, model->getVertexBuffer(), GL_STATIC_DRAW);
        glBindBuffer(GL_ARRAY_BUFFER, 0);
        // Generate index buffer
        glGenBuffers(1, &ibo);
        int numIndices = model->getNumberOfIndices();
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
        glBufferData(GL_ELEMENT_ARRAY_BUFFER, stride*numIndices, model->getIndexBuffer(), GL_STATIC_DRAW);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
    }
#endif
    [self setupDuck:toy reflection:reflection];
#ifdef USE_VBO
    // Draw with VBO
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
    glEnableVertexAttribArray(GC_SHADER_ATTRIB_POSITION);
    glEnableVertexAttribArray(GC_SHADER_ATTRIB_NORMAL);
    glEnableVertexAttribArray(GC_SHADER_ATTRIB_TEX_COORD);
    glVertexAttribPointer(GC_SHADER_ATTRIB_POSITION, 3, GL_FLOAT, GL_FALSE, stride, (void*)offsetof(ModelOBJ::Vertex, position));
    glVertexAttribPointer(GC_SHADER_ATTRIB_TEX_COORD, 2, GL_FLOAT, GL_FALSE, stride, (void*)offsetof(ModelOBJ::Vertex, texCoord));
    glVertexAttribPointer(GC_SHADER_ATTRIB_NORMAL, 3, GL_FLOAT, GL_FALSE, stride, (void*)offsetof(ModelOBJ::Vertex, normal));
    glDrawElements(GL_TRIANGLES, model->getNumberOfIndices(), GL_UNSIGNED_SHORT, 0);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
#else
    // Draw with array
    glEnableVertexAttribArray(GC_SHADER_ATTRIB_POSITION);
    glEnableVertexAttribArray(GC_SHADER_ATTRIB_NORMAL);
    glEnableVertexAttribArray(GC_SHADER_ATTRIB_TEX_COORD);
    glVertexAttribPointer(GC_SHADER_ATTRIB_POSITION, 3, GL_FLOAT, GL_FALSE, stride, model->getVertexBuffer()->position);
    glVertexAttribPointer(GC_SHADER_ATTRIB_TEX_COORD, 2, GL_FLOAT, GL_FALSE, stride, model->getVertexBuffer()->texCoord);
    glVertexAttribPointer(GC_SHADER_ATTRIB_NORMAL, 3, GL_FLOAT, GL_FALSE, stride, model->getVertexBuffer()->normal);
    glDrawElements(GL_TRIANGLES, model->getNumberOfIndices(), GL_UNSIGNED_SHORT, model->getIndexBuffer());
#endif
}
ModelOBJ::Vertex is just 3, 2, 3 floats for position, texcoord, and normal. Indices are ushorts.
UPDATE: I've now wrapped the draw setup (i.e. the attrib binding calls) into a VAO, and now performance is OK, even slightly better than drawing from main memory. So my conclusion is that VBO support without VAOs is broken on iOS. Is that assumption correct?
It is likely that the driver was falling back to software vertex submission (a CPU copy from the VBO into the command buffer). This can be worse than using vertex arrays in client memory, as client memory is usually cached, while VBO contents typically live in write-combined memory on iOS.
When using the CPU sampler in Instruments, you'll see a ton of time underneath glDrawArrays/glDrawElements in gleRunVertexSubmitARM.
The most common reason to fall back to SW CPU submission is an unaligned attribute (current iOS devices require each attribute to be 4-byte aligned), but that doesn't appear to be the case for the 3 attributes you've shown. After that, the next most common cause is mixing client arrays and buffer objects in a single vertex array configuration.
In this case, you probably have a stray vertex attribute binding: some other array element is likely still enabled and pointing to a client array, causing everything to fall off of the hardware DMA path. By creating a VAO, you've either switched away from the misconfigured default VAO, or alternatively you are trying to enable a client VAO but being saved because client arrays are deprecated and do not function when used with VAOs (they throw an INVALID_OPERATION error instead).
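One way to hunt for such a stray attribute from client code (a debugging sketch, assuming an ES 2 context):

GLint maxAttribs = 0;
glGetIntegerv(GL_MAX_VERTEX_ATTRIBS, &maxAttribs);
for (GLint i = 0; i < maxAttribs; i++) {
    GLint enabled = 0, boundBuffer = 0;
    glGetVertexAttribiv(i, GL_VERTEX_ATTRIB_ARRAY_ENABLED, &enabled);
    glGetVertexAttribiv(i, GL_VERTEX_ATTRIB_ARRAY_BUFFER_BINDING, &boundBuffer);
    // An enabled attribute with no VBO bound sources from a client array,
    // which forces the whole draw off the hardware DMA path.
    if (enabled && boundBuffer == 0) {
        NSLog(@"attribute %d is enabled but sourced from a client array", i);
    }
}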
When you populate your index buffer with glBufferData, the second argument should be 2*numIndices rather than stride*numIndices.
Since your index buffer is much larger than it needs to be, this could explain your performance problem.
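That is, using the variables from the question's setup code:

// Each index is an unsigned short (2 bytes), so the buffer size is:
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(unsigned short) * numIndices,
             model->getIndexBuffer(), GL_STATIC_DRAW);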

How to use VBOs in GLKit for iOS?

I'm making a game for iOS using GLKit (OpenGL ES 2), and would like to use VBOs and VAOs as I think they would increase performance quite a lot (and Instruments recommends them when I profile my app).
I have a lot of textured objects that don't actually change position, size, texture etc, so I would assume VBOs would help.
At the moment I am using arrays of GLKVector2 to store vertex and texture coordinate data, and I'm not quite sure how to go from here to VBOs.
Can anyone help with this?
Cheers,
Nick.
You can pass them directly to glBufferData(), like this:

GLKVector2 objects[k];
// ... Fill out your vertices in objects
GLuint buffer = 0;
glGenBuffers(1, &buffer);
glBindBuffer(GL_ARRAY_BUFFER, buffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(objects), objects, GL_STATIC_DRAW);
glEnableVertexAttribArray(GLKVertexAttribPosition);
glVertexAttribPointer(GLKVertexAttribPosition, 2, GL_FLOAT, GL_FALSE, sizeof(GLKVector2), 0);

This tells OpenGL to generate a buffer and bind it. glBufferData() actually uploads the data to the card. The call to glVertexAttribPointer() says where you want to start pulling vertices from; the pointer in the last parameter becomes a byte offset into the buffer when a VBO is bound. (Note that in ES 2 there is no glEnableClientState/glVertexPointer; with GLKBaseEffect you use the GLKVertexAttrib* attribute indices instead.)
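To draw from the buffer each frame, the same pointer setup applies, for example (a sketch; effect and the vertex count k are carried over from the snippet above):

[effect prepareToDraw];
glBindBuffer(GL_ARRAY_BUFFER, buffer);
glEnableVertexAttribArray(GLKVertexAttribPosition);
glVertexAttribPointer(GLKVertexAttribPosition, 2, GL_FLOAT, GL_FALSE, sizeof(GLKVector2), 0);
// k vertices, starting at byte offset 0 within the VBO
glDrawArrays(GL_TRIANGLE_STRIP, 0, k);
glBindBuffer(GL_ARRAY_BUFFER, 0);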
EDIT: Sorry - filled in some details. Had a brain fart. See here for details.

OpenGL ES 2.0, drawing using multiple vertex buffers

I can't find much info on whether drawing from multiple vertex buffers is supported in OpenGL ES 2.0 (i.e. using one vertex buffer for position data and another for normals, colors, etc.). This page http://developer.apple.com/library/ios/#documentation/3DDrawing/Conceptual/OpenGLES_ProgrammingGuide/TechniquesforWorkingwithVertexData/TechniquesforWorkingwithVertexData.html and Listing 9-4 in particular imply you should be able to, but I can't get it to work in my program. Code for the offending draw call:
glBindBuffer(GL_ARRAY_BUFFER, mPositionBuffer->openglID);
glVertexAttribPointer(0, 4, GL_FLOAT, 0, 16, NULL);
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, mTexCoordBuffer->openglID);
glVertexAttribPointer(1, 2, GL_FLOAT, 0, 76, NULL);
glEnableVertexAttribArray(1);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, mIndexBuffer->openglID);
glDrawElements(GL_TRIANGLES, 10788, GL_UNSIGNED_SHORT, NULL);
This draw call will stall or crash with EXC_BAD_ACCESS on the simulator, and gives very weird behavior on the device (OpenGL draws random triangles or presents previously rendered frames). No OpenGL call ever returns an error, and I've inspected the vertex buffers extensively and am confident they have the correct sizes and data.
Has anyone successfully rendered using multiple vertex buffers and can share their experience on why this might not be working? Any info on where to start debugging stalled/failed draw calls that don't return any error code would be greatly appreciated.
Access violations generally mean that you are trying to draw more triangles than you have allocated in a buffer. The way you've set up the buffers is perfectly fine and should work; I would check that your parameters are set properly:
http://www.opengl.org/sdk/docs/man/xhtml/glVertexAttribPointer.xml
http://www.opengl.org/sdk/docs/man/xhtml/glDrawElements.xml
I think your issue is either that you've switched offset and stride in your glVertexAttribPointer calls, or that you've miscounted the number of indices you're drawing.
Yes, you can use multiple vertex buffer objects (VBOs) for a single draw. The OpenGL ES 2.0 spec says so in section 2.9.1.
Do you really have all those hard-coded constants in your code? Where did that 76 come from?
If you want help debugging, you need to post the code that initializes your buffers (the code that calls glGenBuffers and glBufferData). You should also post the stack trace of EXC_BAD_ACCESS.
It might also be easier to debug if you drew something simpler, like one triangle, instead of 3596 triangles.
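For comparison, here is how the same draw might look with stride values that match tightly packed data (a sketch: it assumes 4 floats per position and 2 floats per texture coordinate, and indexCount is a placeholder for the real number of indices in mIndexBuffer):

// Positions: 4 floats each, tightly packed, so stride = 16 bytes (or 0).
glBindBuffer(GL_ARRAY_BUFFER, mPositionBuffer->openglID);
glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 16, NULL);
glEnableVertexAttribArray(0);
// Texture coordinates: 2 floats each, tightly packed, so stride = 8 bytes.
glBindBuffer(GL_ARRAY_BUFFER, mTexCoordBuffer->openglID);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 8, NULL);
glEnableVertexAttribArray(1);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, mIndexBuffer->openglID);
// The count must match the number of indices actually in the buffer.
glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, NULL);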
