EXC_BAD_ACCESS when capturing a GPU frame - iOS

I was attempting to debug why I wasn't seeing a new object (quad) being rendered, so I used the "Capture GPU frame" feature of Xcode. It usually works fine, but now it's giving me EXC_BAD_ACCESS in another render call, during glDrawElements.
Note that it seems similar to bugs I've seen related to mixing VBOs with client-side vertex arrays. However, I'm definitely unbinding the VBO after use and disabling the vertex attribute arrays:
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
glDisableVertexAttribArray(posAttr);
glDisableVertexAttribArray(texCoordAttr);
(Also, bear in mind that I'm only getting the crash when using "Capture GPU frame", not all the time)
What might I be doing wrong? Or could this be a bug in Xcode...?

This was indeed due to some leftover GL state, specifically a stale glVertexAttribPointer. The reason I didn't catch it was that it was an order-of-operations problem inside the 3D engine itself: child objects were being iterated (and rendered) before some state was cleaned up.
(Apologies, this was a tediously project-specific issue)
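For anyone hitting something similar, here is a minimal sketch of the corrected ordering. Node, Vertex, renderNode and the attribute handles are hypothetical names for illustration, not the actual engine code:

#include <stddef.h> // offsetof

typedef struct Vertex { GLfloat pos[3]; GLfloat texCoord[2]; } Vertex;

// Clean up vertex attribute state *before* recursing into children, so
// their draw calls don't inherit a stale glVertexAttribPointer.
void renderNode(const Node *node)
{
    glBindBuffer(GL_ARRAY_BUFFER, node->vbo);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, node->ibo);
    glEnableVertexAttribArray(posAttr);
    glVertexAttribPointer(posAttr, 3, GL_FLOAT, GL_FALSE,
                          sizeof(Vertex), (const GLvoid *)offsetof(Vertex, pos));
    glEnableVertexAttribArray(texCoordAttr);
    glVertexAttribPointer(texCoordAttr, 2, GL_FLOAT, GL_FALSE,
                          sizeof(Vertex), (const GLvoid *)offsetof(Vertex, texCoord));
    glDrawElements(GL_TRIANGLES, node->indexCount, GL_UNSIGNED_SHORT, 0);

    // The cleanup must happen here, before the children render -- this
    // ordering was the bug.
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
    glDisableVertexAttribArray(posAttr);
    glDisableVertexAttribArray(texCoordAttr);

    for (int i = 0; i < node->childCount; ++i)
        renderNode(node->children[i]);
}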

Related

Triple buffering using OpenGL on iOS

Our app still uses OpenGL ES 2.0 on iOS. Yes, I know we should use Metal, but our app also runs on Android. Most of the time it runs perfectly happily at 60 fps, but occasionally there's a glitch, and in some cases it seems to alternate between taking one frame to render the scene and two frames: 1, 2, 1, 2, 1, 2... Then, without any change to what's rendered, it will jump back to 1, 1, 1, i.e. 60 fps. The delay is in the first glClear after we've 'presented' the last buffer. My guess is that OpenGL is still rendering the last scene and has to wait a whole frame to sync up again. Maybe our render/update loop takes close to, or just over, a whole frame - that would help explain the delay, as it 'misses' the vsync.
However, if we had triple buffering I would expect the frame times to be 1, 1, 2, 1, 1, 2, 1, 1, 2... not 1, 2, 1, 2, 1, 2. Is there a way to get iOS to use triple buffering?
Currently we only seem to initialise two 'buffers':
GLuint viewRenderbuffer;
GLuint viewFramebuffer;
glGenFramebuffers(1, &viewFramebuffer);
glGenRenderbuffers(1, &viewRenderbuffer);
glBindFramebuffer(GL_FRAMEBUFFER, viewFramebuffer);
glBindRenderbuffer(GL_RENDERBUFFER, viewRenderbuffer);
Then we call this after each frame is finished rendering
glBindRenderbuffer(GL_RENDERBUFFER, viewRenderbuffer);
[context presentRenderbuffer:GL_RENDERBUFFER];
Normally I would expect to call some form of swap-buffers function somewhere, but I assume that happens inside the presentRenderbuffer: call. I guess it's then up to the driver to handle double or triple buffering.
Is there a way to force triple buffering, or is it actually already being used?
Thanks
Shaun
Rendering on iOS is triple buffered by default. This prevents frame tearing and/or stalls. Frame stuttering usually occurs when your average frame time is larger than the limit imposed by the vsync interval (e.g. ~16.6 ms for 60 FPS). You can check this time in the Xcode profiling tools, or measure it yourself using system timers and draw the result in a debug HUD.
Unexpected performance drops with the same rendered content can also be due to CPU/GPU frequency management by the OS.
Please check out this talk about frame pacing (6:00 onwards):
https://developer.apple.com/videos/play/wwdc2018/612/
On a side note, performance may suffer not only because of raw load but also because of synchronization issues, for example reading back framebuffer contents or improper handling of dynamic vertex/index buffers.
Improving rendering performance on mobile is a complex topic involving careful handling of FBOs to avoid unnecessary bandwidth usage. You can use the Xcode profiling tools and frame capture to find the bottleneck.
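As a rough sketch of the timer-based measurement suggested above (CACurrentMediaTime() is from QuartzCore; the debug-HUD drawing itself is left out):

#import <QuartzCore/QuartzCore.h> // CACurrentMediaTime()
#import <stdio.h>

static CFTimeInterval lastFrameTime = 0;

// Call once per frame, e.g. at the top of the render loop.
static void measureFrameTime(void)
{
    CFTimeInterval now = CACurrentMediaTime();
    if (lastFrameTime > 0) {
        double ms = (now - lastFrameTime) * 1000.0;
        // At 60 FPS the budget is ~16.6 ms; anything consistently above
        // that misses vsync and shows up as the 1,2,1,2 pattern.
        printf("frame time: %.2f ms\n", ms);
    }
    lastFrameTime = now;
}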

Random crash when calling glDrawArrays

I am making an iOS app that renders a large number of textures, which I stream from disk on the fly. I use an NSCache as an LRU cache for the textures. There is one screen with a 3D model and one screen with a full-screen detail of a texture, where the texture can be changed by swiping - a very simple carousel. The app never takes more than 250 MiB of RAM on 1 GiB devices, and the texture cache works well.
For the full-screen view I have a cache of VBOs keyed on screen resolution and texture resolution (different texture coordinates). I never delete these VBOs and always check that a VBO is still valid (glIsBuffer()). The screens are separate UIViewControllers, and I use the same EAGLContext in both of them with no context sharing; this is fine since everything runs on the same thread.
All this is OpenGL ES 2.0 and everything works well. I can switch between the 3D/2D screens and change the textures. The textures are created/deleted on the fly as needed, based on the available memory.
BUT sometimes I get a random crash when rendering a full screen quad when calling:
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
This can happen when I receive a lot of memory warnings in a row. Sometimes I get hundreds of memory warnings in a few seconds and the app keeps working, but sometimes it crashes while swiping to a new full-screen texture quad. This happens even for textures that were already rendered in full screen. It never crashes on the 3D model, which uses the same textures.
The crash report always points at the glDrawArrays call (in my code) with an EXC_BAD_ACCESS KERN_INVALID_ADDRESS at 0x00000018 exception. The last call in the stack trace is always gleRunVertexSubmitARM. This happens on various iPads and iPhones.
It looks like system memory pressure corrupts some GL memory, but I do not know when, where, or why.
I have also tried switching from VBOs to the old way of keeping vertex data on the heap, where I first check that the vertex data is not NULL before calling glDrawArrays. The result is the same: random crashes in low-memory situations.
Any ideas what could be wrong? The address 0x00000018 in the EXC_BAD_ACCESS is clearly bogus, but I do not know what it is supposed to point to. Could a deallocated texture or shader cause an EXC_BAD_ACCESS in glDrawArrays?
After several days of intensive debugging I finally figured it out. The problem was the NSCache storing the OpenGL textures. Under memory pressure, NSCache starts evicting items to free memory, and in that situation it does so on its own background thread (com.apple.root.utility-qos). There is no GL context current on that thread (and no sharegroup with the main context), so the texture name is not valid there and cannot be deleted; the GL memory simply leaks. So after enough memory warnings there were a lot of leaked GL textures, memory filled up, and the app finally crashed.
TL;DR: Do not use NSCache for caching OpenGL objects, because it will leak them after a memory warning.
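A sketch of one alternative: keep a plain dictionary and purge it on the main thread (where the EAGLContext is current) in response to the memory-warning notification. TextureCache and its internals are hypothetical, not the actual app code:

#import <UIKit/UIKit.h>
#import <OpenGLES/ES2/gl.h>

@interface TextureCache : NSObject
@property (nonatomic, strong) NSMutableDictionary *textures; // key -> @(GLuint)
@end

@implementation TextureCache
- (instancetype)init {
    if ((self = [super init])) {
        _textures = [NSMutableDictionary dictionary];
        // This notification is posted on the main thread, unlike
        // NSCache's background eviction.
        [[NSNotificationCenter defaultCenter]
            addObserver:self
               selector:@selector(purge)
                   name:UIApplicationDidReceiveMemoryWarningNotification
                 object:nil];
    }
    return self;
}

- (void)purge {
    // Main thread: the rendering EAGLContext is current here, so the
    // texture names are valid and actually get freed.
    for (NSNumber *name in [self.textures allValues]) {
        GLuint tex = (GLuint)[name unsignedIntValue];
        glDeleteTextures(1, &tex);
    }
    [self.textures removeAllObjects];
}
@end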

Why does Xcode 5.1 OpenGLES frame capture cause app to crash?

I'm trying to debug some hand-written OpenGL ES 2.0 code on iOS 7. The code runs fine, insofar as it doesn't crash, run out of memory, or behave erratically on either the simulator or an actual iPhone, but the graphical output is not what I'm expecting.
I'm using the Capture OpenGL ES frame feature in Xcode 5.1 to try to debug the GL calls, but I find that when I click the button to capture a frame, the app crashes (in OpenGL rendering code - glDrawArrays() to be exact) with an EXC_BAD_ACCESS, code = 1.
To repeat: the code runs fine, with no crashes, for arbitrarily long; it is only when I click the button in the debugger to capture a frame that the crash occurs.
Any suggestions as to what I may be doing wrong that would cause this to happen?
I don't know exactly what you are doing, but here is what I was doing that caused normal (although different from expected) rendering, and a crash only when attempting to capture the frame:
1. Load the texture asynchronously (own code, not GLKit, but a similar method) on a background thread, using a background EAGLContext (same sharegroup as the main context). Pass a C block as the 'completion handler', taking the created texture as its only argument, to hand it back to the client when created.
2. On completion, call the block. (Note that this happens inside the texture-loading method, so we are still running on the background thread/GL context.)
3. From within the completion block, create a sprite using said texture. Sprite creation involves generating a vertex array object with the vertex/texture coords, shader attribute locations, etc. That code does not call OpenGL ES functions directly, but uses a set of wrapper functions that cache OpenGL ES state on the client (app) side and only call the actual GL functions when the values involved change. Because GL state is cached on the client side, a separate data structure is needed for each GL context, and the caching functions must always know which context they are dealing with. The VAO-generating code was not aware that it was being run on the background context, so the cache was likely corrupted/out of sync.
4. Render said sprite every frame: nothing is drawn. When attempting an OpenGL frame capture, it crashes with EXC_BAD_ACCESS at: glDrawElements(GL_TRIANGLE_STRIP, 4, GL_UNSIGNED_SHORT, 0);
I do not really need to create the geometry on a background thread, so what I did was force the completion block to be called on the main thread (see this question), so that once the texture is ready all the sprites are created using the main GL context.
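The shape of the workaround, with hypothetical names standing in for my own loader and sprite code:

// Hop back to the main thread before running the completion block, so
// sprite/VAO creation happens against the main context's state cache.
loadTextureAsync(path, ^(GLuint textureName) {
    dispatch_async(dispatch_get_main_queue(), ^{
        createSpriteWithTexture(textureName); // main GL context is current here
    });
});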

Is iOS glGenerateMipmap synchronous, or is it possibly asynchronous?

I'm developing an iPad app that uses large textures in OpenGL ES. When the scene first loads I get a large black artifact on the ceiling for a few frames. It's as if higher levels of the mipmap have not yet been filled in. On subsequent frames, the ceiling displays correctly.
This problem only began showing up when I started using mipmapping. One possible explanation is that the glGenerateMipmap() call does its work asynchronously, spawning some mipmap-creation worker (in a separate process, or perhaps on the GPU) and returning immediately.
Is this possible, or am I barking up the wrong tree?
Within a single context, all operations will appear to execute strictly in order. However, in your most recent reply you mentioned using a second thread. To do that, you must have created a second shared context: it is always illegal to re-enter an OpenGL context from another thread. Even with a shared context, there are still synchronization rules you must follow, documented at http://developer.apple.com/library/ios/ipad/#DOCUMENTATION/3DDrawing/Conceptual/OpenGLES_ProgrammingGuide/WorkingwithOpenGLESContexts/WorkingwithOpenGLESContexts.html
It should be synchronous; OpenGL does not in itself have any real concept of threading (excepting the implicit asynchronous dialogue between CPU and GPU).
A good way to diagnose this would be to switch to GL_LINEAR_MIPMAP_LINEAR (see the snippet after this answer). If the problem genuinely is that lower-resolution mipmaps don't arrive until later, you'll see the troublesome areas on the ceiling blend into one another rather than the current black-or-correct effect.
A second guess, based on the output, would be some sort of depth buffer clearing issue.
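For reference, the diagnostic suggested above is a one-line filter change on the texture in question:

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);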
I followed @Tommy's suggestion and switched to GL_LINEAR_MIPMAP_LINEAR. The black-or-correct effect changed to a fade between correct and black.
I guess that although we all know OpenGL is a pipeline (and therefore asynchronous unless you are retrieving state or explicitly synchronizing), we tend to forget it. I certainly did in this case, where I was not drawing but loading and setting up textures.
Once I confirmed the nature of the problem, I added a glFinish() after loading all my textures, and the problem went away. (By the way, my draw loop runs in the foreground and my texture-loading loop - because it is so time-consuming and would impair interactivity - runs in the background. Also, since this may vary between platforms: I'm using iOS 5 on an iPad 2.)
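A sketch of what the fixed loading path looks like; the function name and pixel format are hypothetical, not the original code:

// Background-thread loader: upload, build the mipmap chain, then block
// until the GL commands have completed before handing the texture to
// the drawing thread. (ES 2.0 needs power-of-two sizes for mipmapping.)
GLuint loadTextureBlocking(const void *pixels, GLsizei w, GLsizei h)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glGenerateMipmap(GL_TEXTURE_2D);
    glFinish(); // without this, the other context could sample a half-built chain
    return tex;
}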

A hooked DirectX 9 program crashes on window resize, texture related

I'm using EasyHook and SlimDX to overlay some graphics using SlimDX's Sprite and Texture classes. When I resize windows, some programs are fine, but others crash - Winamp's MilkDrop 2, for example, gives me an ambiguous memory error.
I suspect this is due to the aftermarket Texture I created. The question is: which VTable function should I hook, and/or how and when do I dispose of and recreate the Texture? On Reset, perhaps?
If it isn't obvious I don't know much about DirectX.
edit/ps: I paint the texture inside an EndScene hook, but I haven't created any other hooks yet...
You shouldn't have to recreate the texture at all if it was created in D3DPOOL_MANAGED (the D3DPOOL parameter of IDirect3DDevice9::CreateTexture).
If you absolutely have to use D3DPOOL_DEFAULT and need to deal with lost textures, the simplest way is to destroy all "perishable" objects before the call to IDirect3DDevice9::Reset and restore them after the call, but only if it was successful.
You could also track the functions that may return D3DERR_DEVICELOST (there are two of them), but hooking only Reset() will be easier.
