Why does Xcode 5.1 OpenGLES frame capture cause app to crash?

I'm trying to debug some hand-written OpenGL ES 2.0 code on iOS 7. The code runs fine insofar as it doesn't crash, run out of memory, or behave erratically on both the simulator and an actual iPhone, but the graphical output is not what I'm expecting.
I'm using the Capture OpenGL ES frame feature in Xcode 5.1 to try to debug the GL calls, but I find that when I click the button to capture a frame the app crashes (in OpenGL rendering code, glDrawArrays() to be exact) with EXC_BAD_ACCESS, code = 1.
To repeat: the code runs fine with no crashes for arbitrarily long, and it is only when I click the button in the debugger to capture a frame that the crash occurs.
Any suggestions as to what I may be doing wrong that would cause this to happen?

I don't know exactly what you are doing, but here is what I was doing that caused normal (although different from expected) rendering, and a crash only when attempting to capture the frame:
Load the texture asynchronously (own code, not GLKit, but a similar method) on a background thread, using a background EAGLContext (same sharegroup as the main context). Pass a C block as a 'completion handler' that takes the created texture as its only argument, to hand it back to the client when created.
On completion, call the block. (Note that this happens inside the texture-loading method, so we are still running on the background thread/GL context.)
From within the completion block, create a sprite using said texture. The sprite creation involves generating a vertex array object with the vertex/texture coords, shader attribute locations, etc. Said code does not call OpenGL ES functions directly, but instead uses a set of wrapper functions that cache OpenGL ES state on the client (app) side and only call the actual functions when the values involved change. Because GL state is cached on the client side, a separate data structure is needed for each GL context, and the caching functions must always know which context they are dealing with. The VAO-generating code was not aware that it was being run on the background context, so the caching was likely corrupted/out of sync.
Render said sprite every frame: nothing is drawn. When attempting OpenGL Frame Capture, it crashes with EXC_BAD_ACCESS at: glDrawElements(GL_TRIANGLE_STRIP, 4, GL_UNSIGNED_SHORT, 0);
I do not really need to create the geometry on a background thread, so what I did was force the completion block to be called on the main thread (see this question), so that once the texture is ready all the sprites are created using the main GL context.
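For reference, a minimal sketch of that fix (the loader method and property names are illustrative, not the actual code); the dispatch back to the main queue is the relevant part:

// Hypothetical async texture loader; all names here are illustrative.
- (void)loadTextureAtPath:(NSString *)path completion:(void (^)(GLuint texture))completion
{
    dispatch_async(self.textureLoadingQueue, ^{
        [EAGLContext setCurrentContext:self.backgroundContext]; // same sharegroup as the main context
        GLuint texture = [self createTextureFromFileAtPath:path];
        glFlush(); // make the texture visible to the main context before handing it over

        // Force the completion block onto the main thread, so sprite/VAO creation
        // runs against the main GL context and its (correctly synced) state cache.
        dispatch_async(dispatch_get_main_queue(), ^{
            [EAGLContext setCurrentContext:self.mainContext];
            completion(texture);
        });
    });
}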

Related

Wanting to ditch MTKView.currentRenderPassDescriptor

I have an occasional issue with my MTKView renderer stalling for 1.0 s while obtaining a currentRenderPassDescriptor. According to the docs, this is either due to the view's device not being set (it is) or there being no drawables available.
If there are no drawables available, I don't see a means of just immediately bailing or skipping that video frame. The render loop will stall for 1.0s.
Is there a workaround for this? Any help would be appreciated.
My workflow is a bunch of kernel shader work, then one final vertex shader. I could draw that final shader onto my own texture (instead of using the currentRenderPassDescriptor), then hoodwink that texture into the view's currentDrawable -- but in obtaining that drawable we're back to the same stalling situation.
Should I get rid of MTKView entirely and fall back to using a CAMetalLayer instead? Again, I suspect the same stalling issues will arise. Is there a way to set the maximumDrawableCount on an MTKView like there is on CAMetalLayer?
I'm a little baffled as, according to the Metal System Trace, my work is invariably completed in under 5.0 ms per frame on a 2015 iMac (R9 M395).
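For reference, the "draw onto my own texture, then copy into the drawable" idea looks roughly like this (offscreenTexture, commandQueue and mtkView are illustrative names; offscreenTexture is assumed to match the drawable's size and pixel format, and the view's framebufferOnly is assumed to be NO so the drawable's texture can be a blit destination). Note that asking for currentDrawable can still block, which is exactly the stall in question; the sketch only defers that request until after the expensive work is encoded:

id<MTLCommandBuffer> commandBuffer = [self.commandQueue commandBuffer];

// Encode the kernel work and the final vertex/fragment pass into our own texture.
MTLRenderPassDescriptor *offscreenPass = [MTLRenderPassDescriptor renderPassDescriptor];
offscreenPass.colorAttachments[0].texture = self.offscreenTexture;
offscreenPass.colorAttachments[0].loadAction = MTLLoadActionClear;
offscreenPass.colorAttachments[0].storeAction = MTLStoreActionStore;
id<MTLRenderCommandEncoder> encoder = [commandBuffer renderCommandEncoderWithDescriptor:offscreenPass];
// ... draw calls for the final shader ...
[encoder endEncoding];

// Only now ask for the drawable, so any wait happens as late as possible.
id<CAMetalDrawable> drawable = self.mtkView.currentDrawable;
if (drawable) {
    id<MTLBlitCommandEncoder> blit = [commandBuffer blitCommandEncoder];
    [blit copyFromTexture:self.offscreenTexture
              sourceSlice:0
              sourceLevel:0
             sourceOrigin:MTLOriginMake(0, 0, 0)
               sourceSize:MTLSizeMake(self.offscreenTexture.width, self.offscreenTexture.height, 1)
                toTexture:drawable.texture
         destinationSlice:0
         destinationLevel:0
        destinationOrigin:MTLOriginMake(0, 0, 0)];
    [blit endEncoding];
    [commandBuffer presentDrawable:drawable];
}
[commandBuffer commit];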

Best way to pass textures between MTKViews?

I have a situation I've been struggling with for some time.
Within a draw: call to an MTKView, I generate an MTLTexture which is part of a render chain to that view's drawable. Now, I want to use that texture as a basis for drawing in a secondary MTKView.
To prevent animation stutter when tinkering with the app's menus, I have both views configured in explicit-draw mode, with a CVDisplayLink dispatching these draw calls onto a serial queue (i.e. not on the main thread). I've even tried configuring the secondary view to draw on the main queue with setNeedsDisplay.
I can get this to mostly work, though due to what I suspect are some threading issues, I receive an occasional crash. I've even gone so far as to place the draw calls to these two MTKViews successively on the same serial queue (via dispatch_async) without much success. I've also tried placing each generated MTLTexture into a little semaphore-protected FIFO queue that the secondary view consumes from -- again, without much success.
Things will run swimmingly well for a few minutes with full-motion video frames as the source, then I receive a crash whilst in the draw method of the second view. Typically, this is what happens when I go to retrieve the texture:
id<MTLTexture> inputTexture = [textureQueue firstObject];
// EXC_BAD_ACCESS (code=1, address=0x40dedeadbec8)
Occasionally, I will end up bailing on the draw as the texture is MTLTextureType1D (instead of 2D) or its usage is erroneously only MTLTextureUsageShaderRead. And yet my MTLTextureDescriptor is fine; like I say, 99.99% of the time, things work swimmingly well.
I'm wondering if someone can assist with the proper, thread-safe way of obtaining a texture in one MTKView and passing it off to a secondary MTKView for subsequent processing.
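For reference, the semaphore-protected FIFO mentioned above is roughly this (textureQueue is an NSMutableArray, queueLock a dispatch_semaphore_t created with a count of 1; the names are illustrative):

// Producer, in the first view's draw path: publish the finished texture.
dispatch_semaphore_wait(self.queueLock, DISPATCH_TIME_FOREVER);
[self.textureQueue addObject:finishedTexture]; // the array retains it until the consumer is done
dispatch_semaphore_signal(self.queueLock);

// Consumer, in the second view's draw path: take the oldest texture, if any.
id<MTLTexture> inputTexture = nil;
dispatch_semaphore_wait(self.queueLock, DISPATCH_TIME_FOREVER);
if (self.textureQueue.count > 0) {
    inputTexture = self.textureQueue.firstObject;
    [self.textureQueue removeObjectAtIndex:0];
}
dispatch_semaphore_signal(self.queueLock);
if (inputTexture == nil) {
    return; // nothing ready yet; skip this frame
}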

gpus_ReturnGuiltyForHardwareRestart Crash in [EAGLContext presentRenderbuffer]

I'm getting a lot of crashes in EAGLContext presentRenderbuffer on iOS 11, but only on iPhone 6/6+ and older.
As per this post, I think we've already ruled out VBO-related problems by rewriting everything to not use VBO/VAOs, but the crash wasn't fixed by that.
There are a few other questions on SO about this but no solution -- has anyone else been seeing the uptick in this crash and been able to resolve it?
TL;DR:
Here is what we know so far:
The crash is specific to iOS 11 on iPhone 5S/6/6+. It doesn't occur on the 6S and up.
The core of the OpenGL stack returns gpus_ReturnGuiltyForHardwareRestart
It occurs when we attempt to invoke [EAGLContext presentRenderbuffer] from a CAEAGLLayer
We don’t have a repro.
What we have tried so far:
Removed any reference to VBOs/VAOs from our rendering stack. Didn't help.
We have tried reproing with a large range of drawing scenarios (rotation, resize, background/foreground). No luck.
As far as we can tell, there is nothing in our application logic that treats the iPhone 6 family differently from the iPhone 6S family.
Some clues (that could be relevant but not necessarily):
We know that when presentRenderbuffer is invoked off the main thread while some CATransactions are occurring on the main thread, the crash rate goes up.
When presentRenderbuffer is invoked on the main thread (along with the whole drawing pipeline), the crash rate goes slightly down, but not drastically.
A substantial chunk (~20%) of the crashes occur when the layer goes off screen and/or leaves the view hierarchy.
Here is the stack trace:
0 libGPUSupportMercury.dylib gpus_ReturnGuiltyForHardwareRestart
1 AGXGLDriver gldUpdateDispatch
2 libGPUSupportMercury.dylib gpusSubmitDataBuffers
3 AGXGLDriver gldUpdateDispatch
4 GLEngine gliPresentViewES_Exec
5 OpenGLES -[EAGLContext presentRenderbuffer:]
From my experience, I get this sort of crash in these cases:
Calling the OpenGL API while the application is in the UIApplicationStateBackground state.
Using objects (textures, VBOs, etc.) that were created in an OpenGL context with a different sharegroup. This can happen if you don't call [EAGLContext setCurrentContext:..] before rendering or doing other work with OpenGL objects (see the sketch after this list).
Invalid geometry. For example, this can happen if you allocate an index buffer bigger than you need, fill only part of it with values, and then render with the size used at allocation. Sometimes this works (the tail of the buffer happens to be filled with zeros, and you don't see any visual glitches); sometimes it crashes (when the tail of the buffer is filled with junk and references a vertex that is out of bounds).
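A minimal sketch of the second point, i.e. a background context that shares its sharegroup with the main context and is made current before any GL work (loadingQueue and mainContext are illustrative names):

// Background context created once, in the same sharegroup as the main context.
EAGLContext *backgroundContext =
    [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2
                          sharegroup:self.mainContext.sharegroup];

dispatch_async(self.loadingQueue, ^{
    // Without this, the GL calls below run against whatever context happens to be
    // current on this thread (possibly none), which is one way to end up with
    // objects living in the wrong sharegroup.
    [EAGLContext setCurrentContext:backgroundContext];

    GLuint texture = 0;
    glGenTextures(1, &texture);
    glBindTexture(GL_TEXTURE_2D, texture);
    // ... glTexImage2D / parameter setup ...
    glFlush(); // make the texture usable from the main context
});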
Hope this helps in some way.
P.S. Maybe you can share some more info about your application? I write an application that renders vector maps on iOS and don't face any trouble with iOS 11 at the moment. The rendering pipeline is pretty simple: a CADisplayLink calls our callback on the main thread when we can render the next frame. Each view with an OpenGL scene can have several background contexts to load resources in the background (of course they share the same sharegroup with the main context).

Async rendering into CAEAGLLayer

I have an OpenGL-based rendering pipeline for filtering images, which I now want to use to process videos as well.
At one end of the pipeline is an AVPlayer for fetching frames from a video file; at the other end is my preview view, backed by a CAEAGLLayer. The rendering itself happens asynchronously on a different thread because it's quite expensive. The view is hooked to a CADisplayLink that triggers a new async rendering on every screen refresh. When the pipeline is done rendering into the layer's renderbuffer, I call presentRenderbuffer: to show it on screen (on the rendering thread). Draw requests that arrive while a rendering is still in progress are ignored.
This works. However, I seem to be getting synchronization issues with the display refresh. When I set the frameInterval of the display link to 1 (call every frame), I end up with ~2 FPS (actual view refreshes). If I set it to 2 (call every other frame), I suddenly get 15 FPS. Setting it to 4 drops the FPS down to 2 again.
My guess is that the async call to presentRenderbuffer: happens "at the wrong moment" in the run loop and is either ignored by the system or delayed.
Now I want to know what's the best practice for displaying the results of async renderings in a view. All the examples and docs I could find only describe the single-threaded case.
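To make the setup concrete, the loop described above looks roughly like this (renderQueue, renderContext, colorRenderbuffer and the method names are illustrative simplifications, not the actual code):

// Called by the CADisplayLink on the main thread at each screen refresh.
- (void)displayLinkDidFire:(CADisplayLink *)link
{
    if (self.isRendering) {
        return; // a render is still in flight; ignore this draw request
    }
    self.isRendering = YES;
    dispatch_async(self.renderQueue, ^{
        [EAGLContext setCurrentContext:self.renderContext];
        [self renderCurrentVideoFrame]; // the expensive filtering pipeline
        glBindRenderbuffer(GL_RENDERBUFFER, self.colorRenderbuffer);
        [self.renderContext presentRenderbuffer:GL_RENDERBUFFER]; // presented from the rendering thread
        self.isRendering = NO;
    });
}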
In these cases it is best to use double buffering, which in your case means 2 textures. The rendering of the video should be done into an FBO (framebuffer object) with an attached texture. Since the drawing is on a separate thread, I suggest you create the 2 textures on the main context (main thread), then create a shared context on the other thread, which can then access the 2 textures.
There is no sense in blocking the background thread, since it is expected to be slow; what it should do is keep rendering into one texture, and once done, send that texture to the main thread (where you present the buffer) and continue drawing into the other texture. Then repeat the procedure.
The main thread should then check whether it has received a new texture to display, and when it has, draw it into the main buffer and present it. If you need to draw at 60 FPS (or any other constant rate) you can still do that, but you will be redrawing the same texture.
Just to be on the safe side, you should probably still add some locking mechanism. Since the background thread does the buffer swapping (sends the new texture and starts drawing into the previous one), it makes sense to have a boolean swapLocked that the main thread sets to true just before it starts drawing and back to false once it is done with the texture. If the background thread has finished drawing while swapLocked is true, it should not swap; it should continue the swap and the drawing once swapLocked is set back to false. You can override the setter to do that, but be careful to continue the process on the background thread, since the setter will most likely be called on the main thread.
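A rough sketch of the scheme above, assuming the two textures and the FBO already exist and the contexts are shared (all names are illustrative; the locking is deliberately simplistic):

// Background thread: render into the back texture, then try to hand it over.
glBindFramebuffer(GL_FRAMEBUFFER, self.offscreenFBO);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, self.backTexture, 0);
[self renderFilteredFrame]; // the expensive pipeline
glFlush();

@synchronized (self) {
    if (!self.swapLocked) {
        GLuint tmp = self.frontTexture; // swap: the finished texture becomes the one the main thread draws
        self.frontTexture = self.backTexture;
        self.backTexture = tmp;
        self.hasNewFrame = YES;
    }
    // If swapLocked is set, this sketch simply skips the swap; as described above,
    // you would instead perform the swap once the main thread clears the flag.
}

// Main thread (display link): draw the front texture and present it.
@synchronized (self) { self.swapLocked = YES; }
if (self.hasNewFrame) {
    [self drawFullscreenQuadWithTexture:self.frontTexture];
    glBindRenderbuffer(GL_RENDERBUFFER, self.mainColorRenderbuffer);
    [self.mainContext presentRenderbuffer:GL_RENDERBUFFER];
    self.hasNewFrame = NO;
}
@synchronized (self) { self.swapLocked = NO; }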

glDrawArrays takes long on first call using OpenGL ES on iOS

I'm trying to use multiple GLSL fragment shaders with OpenGL ES on iOS 7 and upwards. The shaders themselves run fine after the first call to glDrawArrays. Nevertheless, the very first call to glDrawArrays after the shaders and their program have been compiled and linked takes ages to complete. After that, some pipeline or other seems to have been loaded and everything runs smoothly. Any ideas what the cause of this issue is and how to prevent it?
The most likely cause is that your shaders may not be fully compiled until you use them the first time. They might have been translated to some kind of intermediate form when you call glCompileShader(), which would be enough for the driver to provide a compile status and to act as if the shaders had been compiled. But the full compilation and optimization could well be deferred until the first draw call that uses the shader program.
A commonly used technique for games is to render a few frames without actually displaying them while some kind of intro screen is still shown to the user. This prevents the user from seeing stuttering that could otherwise result from all kinds of possible deferred initialization or data loading during the first few frames.
You could also look into using binary shaders to reduce slowdowns from shader compilation. See glShaderBinary() in the ES 2.0 documentation.
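A minimal warm-up sketch along those lines: right after linking, issue one throwaway draw per program into a framebuffer that is never presented, so the deferred compilation happens before the first real frame. The attribute location and buffer setup here are illustrative, and whether a draw with no visible output is enough to trigger the full compile is driver-dependent:

// One-time warm-up, run right after glLinkProgram succeeds (nothing is presented).
glUseProgram(program);
const GLfloat quad[8] = { -1.f, -1.f,   1.f, -1.f,   -1.f, 1.f,   1.f, 1.f };
GLuint vbo = 0;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(quad), quad, GL_STATIC_DRAW);
glEnableVertexAttribArray(positionAttrib); // location queried earlier with glGetAttribLocation()
glVertexAttribPointer(positionAttrib, 2, GL_FLOAT, GL_FALSE, 0, 0);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4); // forces the driver to finish compiling the program
glFinish();                            // make sure the work is done, not just queued
glDeleteBuffers(1, &vbo);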
What actually helped speed up the first draw call was the following (which is fine in my use case, since I'm rendering a video and no depth testing is needed):
glDisable(GL_DEPTH_TEST);
