Strange crash on glDrawElements [ EXC_??? (11)(code = 0, subcode=0x0) ] - ios

My app runs just fine, but after about 3 minutes I get a strange crash (EXC_??? (11), code=0, subcode=0x0) on glDrawElements.
Has anyone experienced something like this before, and does anyone know what the cause could be? Could this be some kind of memory leak?
Some code:
- (void)draw {
    [EAGLContext setCurrentContext:context];
    glBindVertexArrayOES(_vertexArray);
    shader.modelViewMatrix = mvm;
    [shader texture:texture];
    glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, 0);
    glBindVertexArrayOES(0);
}
- (void)texture:(int)tex {
    glUseProgram(TextureShader);
    _camModelViewMatrix = GLKMatrix4Multiply(_cameraMatrix, _modelViewMatrix);
    _modelViewProjectionMatrix = GLKMatrix4Multiply(_projectionMatrix, _camModelViewMatrix);
    glUniformMatrix4fv(mvp, 1, 0, _modelViewProjectionMatrix.m);
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, textures[tex]);
}
If you need to see any other code, let me know.

I haven't found good documentation on the EXC_??? exception, but my understanding is that it means the thread has been consuming CPU for too long. The best explanation of this problem I've found is in another Stack Overflow question: GCD crashes with any task longer than 255 seconds. I have hit this when writing long test cases, and I fixed it by breaking them up into smaller test cases or by improving the performance of the test case that triggered the EXC_???. It's worth taking a look at the stack when the EXC_??? happens and considering whether improvements could be made to speed up that path.
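To make the "break it up" suggestion concrete, here is a minimal sketch (not from the original answer) of splitting a long CPU-bound job into short GCD blocks so that no single block runs for minutes; the queue label, item count, chunk size, and the per-item work are all assumptions for illustration:
dispatch_queue_t workQueue = dispatch_queue_create("com.example.work", DISPATCH_QUEUE_SERIAL);
const NSUInteger kTotalItems = 100000; // assumed workload size
const NSUInteger kChunkSize = 1000;    // assumed chunk size
for (NSUInteger start = 0; start < kTotalItems; start += kChunkSize) {
    dispatch_async(workQueue, ^{
        NSUInteger end = MIN(start + kChunkSize, kTotalItems);
        for (NSUInteger i = start; i < end; i++) {
            // do one item's worth of work here (placeholder)
        }
    });
}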

Related

"invalid framebuffer operation" on glClear - using sRGB in OpenGL ES3

Using OpenGL ES 3, running on an iPhone 5s (hardware, not the simulator) with Xcode 7.3, I receive an "invalid framebuffer operation" when doing a glClear.
The texture in question is a "final" texture for my GBuffer, much like in this tutorial http://ogldev.atspace.co.uk/www/tutorial37/tutorial37.html.
Key difference being that I'm requesting an sRGB texture and that I use GL_COLOR_ATTACHMENT3 (instead of 4), due to ES3 limitations.
glTexImage2D(GL_TEXTURE_2D, 0, GL_SRGB8, WindowWidth, WindowHeight, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);
// glTexParameteri ...
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT3, GL_TEXTURE_2D, m_finalTexture, 0);
GLenum Status = glCheckFramebufferStatus(GL_FRAMEBUFFER); // No errors here
Now when I try to clear it, I get an "invalid framebuffer operation":
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, m_fbo);
// Element at index i needs to match GL_COLOR_ATTACHMENTi on GL ES3!
GLenum drawbuf[4] = { GL_NONE, GL_NONE, GL_NONE, GL_COLOR_ATTACHMENT3 };
glDrawBuffers(sizeof(drawbuf)/sizeof(drawbuf[0]), drawbuf);
GLCheckError(); // no errors
glClear(GL_COLOR_BUFFER_BIT);
GLCheckError(); // => glGetError 506 GL_INVALID_FRAMEBUFFER_OPERATION
Now if instead I initialise the texture like this (so without sRGB), OpenGL doesn't give an error on the clear:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, WindowWidth, WindowHeight, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);
Now as I understood it, sRGB is supported on OpenGL ES3... so why does glClear fail?
Any ideas anyone?
GL_SRGB8 is not a color-renderable format in ES 3.0. In the spec document:
In the "Required Texture Format" section starting on page 128, SRGB8 is listed under "Texture-only color formats".
In table 3.13, starting on page 130, SRGB8 does not have a checkmark in the "Color-renderable" column.
This also matches the EXT_srgb extension specification, under "Issues":
Do we require SRGB8_EXT be supported for RenderbufferStorage?
No. Some hardware would need to pad this out to RGBA and instead of adding that unknown for application developers we will simply not support that format in this extension.
glCheckFramebufferStatus() should return GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT in this case. If it's not doing that, that looks like a bug in the OpenGL implementation.
The closest alternative that is color-renderable is GL_SRGB8_ALPHA8.
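For illustration, a minimal sketch of allocating the attachment with that color-renderable sRGB format instead (same dimensions and attachment point as in the question; note that GL_SRGB8_ALPHA8 is paired with GL_RGBA / GL_UNSIGNED_BYTE):
// Color-renderable sRGB attachment for the FBO
glTexImage2D(GL_TEXTURE_2D, 0, GL_SRGB8_ALPHA8, WindowWidth, WindowHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT3, GL_TEXTURE_2D, m_finalTexture, 0);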
Try this:
#define GL_COLOR_BUFFER_BIT 0x00004000
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

How do I avoid a logical buffer store with GLKView's framebuffer?

Running Xcode's OpenGL ES diagnostic on a very simple app that switches to a second framebuffer and back (with appropriate use of glClear and glInvalidateFramebuffer) shows warnings about a logical buffer store on switching to the second framebuffer:
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect
{
    // At this point, GLKView's framebuffer is bound

    // Clear (to avoid logical buffer load)
    glClear(GL_COLOR_BUFFER_BIT);
    // Invalidate (to avoid logical buffer store)
    glInvalidateFramebuffer(GL_FRAMEBUFFER, 1, (GLenum[]){ GL_COLOR_ATTACHMENT0 });

    // Switch to our own framebuffer, and attach a texture as the color attachment
    // At this point, Xcode's OpenGL ES tool warns:
    // "For best performance keep logical buffer store operations to a minimum."
    glBindFramebuffer(GL_FRAMEBUFFER, _framebuffer);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, _texture, 0);

    // Clear (to avoid logical buffer load)
    glClear(GL_COLOR_BUFFER_BIT);
    // Invalidate (to avoid logical buffer store)
    glInvalidateFramebuffer(GL_FRAMEBUFFER, 1, (GLenum[]){ GL_COLOR_ATTACHMENT0 });

    // Might want to switch back to GLKView's drawable here, and do more rendering
}
Does anyone have any idea why the invalidate isn't taking hold? Note that in this example, the GLKView only has a color buffer attachment:
view.drawableColorFormat = GLKViewDrawableColorFormatRGBA8888;
view.drawableStencilFormat = GLKViewDrawableStencilFormatNone;
view.drawableDepthFormat = GLKViewDrawableDepthFormatNone;
view.drawableMultisample = GLKViewDrawableMultisampleNone;
Test app demonstrating this at https://dl.dropboxusercontent.com/u/6956432/test.zip
Cheers!

EXC_BAD_ACCESS with glTexImage2D in GLKViewController

I have an EXC_BAD_ACCESS at the last line of this code (this code is fired several times per second), but I cannot figure out what the problem is:
[EAGLContext setCurrentContext:_context];
glActiveTexture(GL_TEXTURE0);
glPixelStorei(GL_PACK_ALIGNMENT, 1);
glBindTexture(GL_TEXTURE_2D, _backgroundTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, _outputFrame.cols, _outputFrame.rows, 0, GL_BGRA, GL_UNSIGNED_BYTE, _outputFrame.data);
When debugging I make sure that the texture is created (the id is > 0) and that the output frame has a valid pointer to its data and is a 4-channel matrix. I am inside the drawRect method of a GLKViewController. I think I should not have to bind the framebuffer, as that is one of the things that is automated here. It doesn't crash on the first frame, but a few dozen frames later.
Can anybody spot the problem?
UPDATE:
It seems it's because of a race condition on _outputFrame: it's being updated while being read by glTexImage2D. I will try to lock it for reading, then report back.
That was indeed the solution (see UPDATE); I fixed it with an NSLock. First, I swapped the instance variable _outputFrame for a temporary one that gets updated from another thread, and used the lock when assigning it to the instance variable:
[_frameLock lock];
_outputFrame = temp;
[_frameLock unlock];
Then I used the lock when reading from the instance variable:
glActiveTexture(GL_TEXTURE0);
glPixelStorei(GL_PACK_ALIGNMENT, 1);
glBindTexture(GL_TEXTURE_2D, _backgroundTexture);
[_frameLock lock];
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, _outputFrame.cols, _outputFrame.rows, 0, GL_BGRA, GL_UNSIGNED_BYTE, _outputFrame.data);
[_frameLock unlock];
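For completeness, a minimal sketch of how the lock used above might be declared and created; the class name and the place of initialization are assumptions, and the ivar name mirrors the snippets:
// Hypothetical declaration of the lock used in the snippets above
@interface MyViewController () {
    NSLock *_frameLock;
}
@end

// e.g. during setup, before the first frame arrives:
_frameLock = [[NSLock alloc] init];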
I just figured out a problem like this after several days.
1. It's better to avoid rendering from multiple threads.
2. It's better to render in a GLKView with a GLKBaseEffect, and not to manage the framebuffer and renderbuffer manually yourself.
3. The base effect can render raw pixel data like this:
My solution:
glTexImage2D(...);
self.baseEffect.texture2d0.envMode = GLKTextureEnvModeReplace;
self.baseEffect.texture2d0.target = GLKTextureTarget2D;
self.baseEffect.texture2d0.name = texture;
self.baseEffect.texture2d0.enabled = YES;
self.baseEffect.useConstantColor = YES;
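As a rough illustration (not part of the original answer) of how that base-effect setup is typically used at draw time; it assumes the quad's positions and texture coordinates were already supplied via glVertexAttribPointer / glEnableVertexAttribArray for GLKVertexAttribPosition and GLKVertexAttribTexCoord0:
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect {
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    // Binds the texture configured above and sets up GLKBaseEffect's shader
    [self.baseEffect prepareToDraw];
    // Draw a textured quad (vertex attributes assumed to be set up elsewhere)
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
}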

glReadPixels - malloc with global var

I just stumbled over a quite tricky issue.
In the context of an OpenGL app for iOS I tried to call glReadPixels.
For that, a global buffer variable was created/allocated once at the beginning.
I tried to use the glReadPixels function on that buffer, but it did not succeed. I did not get any new picture, just garbage.
So my question: why would I need to call free() on my allocated buffer space, when I want to reuse that memory many times before I finally free it?
See for example:
int bytes = width*height*3; // Color space is RGB
if (buffer == NULL)
    buffer = (GLubyte *)malloc(bytes);
glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, buffer);
free(buffer);
EDIT: I replaced free(bytes); with free(buffer);
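For reference, a minimal sketch of the allocate-once/reuse pattern the question is aiming for (the variable names mirror the snippet above; whether this alone resolves the original symptom isn't established here):
static GLubyte *buffer = NULL;

void readBackFrame(int width, int height) {
    if (buffer == NULL)
        buffer = (GLubyte *)malloc((size_t)width * height * 3); // RGB, allocated once
    glPixelStorei(GL_PACK_ALIGNMENT, 1); // rows of 3-byte pixels are tightly packed
    glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, buffer);
    // ... use buffer ...
}

void teardown(void) {
    free(buffer);   // free only when the buffer is no longer needed
    buffer = NULL;
}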

OpenGL ES2 Vertex Array Objects help

I am having trouble understanding how to use VAOs in OpenGL ES 2 (on iOS) and getting them to work.
My current rendering setup looks like this (in pseudocode):
Initialization:
    foreach VBO:
        glGenBuffers();
Rendering a frame:
    // Render VBO 1
    glClear(color | depth);
    glBindBuffer(GL_ARRAY_BUFFER, arrayVBO1);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexVBO1);
    foreach vertex attribute:
        glVertexAttribPointer();
        glEnableVertexAttribArray();
    glBufferData(GL_ARRAY_BUFFER, ...);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, ...);
    glDrawElements(GL_TRIANGLE_STRIP, ...);
    // Render VBO 2
    glClear(color | depth);
    glBindBuffer(GL_ARRAY_BUFFER, arrayVBO2);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexVBO2);
    foreach vertex attribute:
        glVertexAttribPointer();
        glEnableVertexAttribArray();
    glBufferData(GL_ARRAY_BUFFER, ...);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, ...);
    glDrawElements(GL_TRIANGLE_STRIP, ...);
This works fine; however, both VBOs have exactly the same interleaved vertex attribute struct, and as you can see I'm setting up and enabling each attribute every frame for every VBO. Instruments complains about redundant calls to glVertexAttribPointer() and glEnableVertexAttribArray(), but when I move one or both of them to the initialization phase I either get an EXC_BAD_ACCESS when calling glDrawElements, or nothing is drawn.
My question is whether I need to do this every frame, why it doesn't work if I don't, and how I would use VAOs to solve this.
Sorry for dredging this up, but I'm procrastinating and you keep topping my Google search. I'm sure you've solved it by now...
The correct way is to only update the buffers when the data changes, not every frame. Ideally, you would only update the part of the buffer that changed. Also, Attribute Pointers are offsets into the buffer if a buffer is bound.
Initialisation:
    glGenBuffers()
    foreach VBO:
        glBufferData()
Updates / Animation, etc:
    glMapBuffer() // or something like this
    buffer->vertex = vec3(1,2,3) // etc
    glUnmapBuffer()
And Render:
    glBindFBO()
    glClear(color | depth);
    glBindBuffer(GL_ARRAY_BUFFER, arrayVBO1)
    glVertexAttribPointer(GL_VERTEX, ..., 0)                // buffer offset = 0
    glVertexAttribPointer(GL_TEXCOORD, ..., sizeof(vertex)) // buffer offset = size of vertex
    glEnableVertexAttribArray(...)
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexVBO1);
    glBindBuffer(GL_ARRAY_BUFFER, 0); // The array buffer can be unbound now, but keep the element array buffer bound for glDrawElements
    glDrawElements(GL_TRIANGLE_STRIP, ...);
Hope that helps.
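Since the question specifically asks about VAOs, here is a minimal sketch (not from the answer above) of the OES_vertex_array_object path on iOS ES 2: the buffer bindings and attribute pointers are recorded into the VAO once at initialization, and the draw loop only binds the VAO. The attribute locations, Vertex struct, and buffer/index names are assumptions; the OES entry points come from <OpenGLES/ES2/glext.h>.
typedef struct { GLfloat position[3]; GLfloat texCoord[2]; } Vertex; // assumed interleaved layout

// Initialization (once):
glGenVertexArraysOES(1, &vao);
glBindVertexArrayOES(vao);
glBindBuffer(GL_ARRAY_BUFFER, arrayVBO1);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexVBO1); // the element array binding is part of VAO state
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);
glEnableVertexAttribArray(positionAttrib);
glVertexAttribPointer(positionAttrib, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (const GLvoid *)offsetof(Vertex, position));
glEnableVertexAttribArray(texCoordAttrib);
glVertexAttribPointer(texCoordAttrib, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex), (const GLvoid *)offsetof(Vertex, texCoord));
glBindVertexArrayOES(0);

// Per frame:
glBindVertexArrayOES(vao);
glDrawElements(GL_TRIANGLE_STRIP, indexCount, GL_UNSIGNED_SHORT, 0);
glBindVertexArrayOES(0);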
