I have an OpenGL-based rendering pipeline for filtering images, which I now want to use to process videos as well.
On one end of the pipeline is an AVPlayer for fetching frames from a video file; on the other end is my preview view, backed by a CAEAGLLayer. The rendering itself happens asynchronously on a different thread because it's quite expensive. The view is hooked to a CADisplayLink that triggers a new async rendering on every screen refresh. When the pipeline is done rendering into the layer's renderbuffer, I call presentRenderbuffer: to show it on screen (on the rendering thread). Draw requests that arrive while a rendering is still in progress are ignored.
This works, but I seem to be getting synchronization issues with the display refresh. When I set the frameInterval of the display link to 1 (call every frame), I end up with ~2 FPS (actual view refreshes). If I set it to 2 (call every other frame), I suddenly get 15 FPS. Setting it to 4 drops the FPS down to 2 again.
My guess is that the async call to presentRenderbuffer: happens "at the wrong moment" in the run loop and is either ignored by the system or delayed.
Now I want to know what the best practice is for displaying the results of async renderings in a view. All the examples and docs I could find only describe the single-threaded case.
In cases like this it is best to use double buffering, which in your case means two textures. The video frame should be rendered into an FBO (framebuffer object) with a texture attached. Since the drawing happens on a separate thread, I suggest you create the two textures on the main context (main thread), then create a shared context on the other thread, which can then access the two textures.
There is no point in blocking the background thread, since it is expected to be slow. It should keep rendering into one texture and, once done, hand that texture over to the main thread (where you present the buffer) and continue drawing into the other texture. Then repeat the procedure.
The main thread should then check whether it has received a new texture to display; when it has, it should draw it into the main buffer and present it. If you need to present at 60 FPS (or any other constant rate) you can still do so, but you will simply be redrawing the same texture.
Just to be on the safe side, you should still add some locking mechanism. Since the background thread does the buffer swapping (sends the new texture and starts drawing into the previous one), it makes sense to have a boolean swapLocked, which the main thread sets to true just before it starts drawing and back to false once it is done with the texture. If the background thread finishes drawing while swapLocked is true, it should not swap yet; it should continue the swap and the drawing once swapLocked is set back to false. You can override the setter to do that, but be careful to continue the process on the background thread, since the setter will most likely be called from the main thread.
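A minimal sketch of that idea; here the drawing thread toggles swapLocked at handoff time inside an NSCondition rather than from the main thread, which avoids a race but keeps the same ownership scheme. All names (renderTextures, renderFrameToTexture:, drawTextureToScreen:, swapCondition, mainContext) are placeholders for whatever your pipeline already has:

```objc
// Background (rendering) thread; its EAGLContext shares a sharegroup with the main context.
- (void)renderNextFrame {
    [self renderFrameToTexture:self.renderTextures[self.backIndex]];
    glFlush();                                    // make the result visible to the main context

    [self.swapCondition lock];
    while (self.swapLocked) {                     // main thread still owns the front texture
        [self.swapCondition wait];
    }
    NSUInteger front = self.backIndex;
    self.backIndex = 1 - self.backIndex;          // swap: keep drawing into the other texture
    self.swapLocked = YES;                        // main thread now owns renderTextures[front]
    [self.swapCondition unlock];

    dispatch_async(dispatch_get_main_queue(), ^{
        // drawTextureToScreen: is assumed to make the main context current and bind the screen renderbuffer.
        [self drawTextureToScreen:self.renderTextures[front]];
        [self.mainContext presentRenderbuffer:GL_RENDERBUFFER];
        [self.swapCondition lock];
        self.swapLocked = NO;                     // release the texture back to the renderer
        [self.swapCondition signal];
        [self.swapCondition unlock];
    });
}
```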
Related
MetalKit calls drawInMTKView when it wants your delegate to draw a new frame, but I wonder whether it waits for the last drawable to have been presented before it asks your delegate to draw a new one.
From what I understand reading this article, CoreAnimation can provide up to three "in flight" drawables, but I can't find whether MetalKit tries to draw to them as soon as possible or if it waits for something else to happen.
What would this something else be? What confuses me a little is the idea of drawing to up to two frames in advance, since it means the CPU must already know what it wants to render two frames in the future, and I feel like it isn't always the case. For instance if your application depends on user input, you can't know upfront the actions the user will have done between now and when the two frames you are drawing to will be presented, so they may be presented with out of date content. Is this assumption right? In this case, it could make some sense to only call the delegate method at a maximum rate determined by the intended frame rate.
The problem with synchronizing with the frame rate is that this means the CPU may sometimes be inactive when it could have done some useful work.
I also have the intuition it may not be happening this way, since in the aforementioned article drawInMTKView seems to be called as often as a drawable is available; the article seems to rely on it being called that way to schedule work that uses resources in a manner that avoids CPU stalling. But since many points are unclear to me, I am not sure what is happening exactly.
The MTKView documentation mentions, on the page for paused, that
If the value is NO, the view periodically redraws the contents, at a frame rate set by the value of preferredFramesPerSecond.
Based on the samples available for MTKView, it probably uses a combination of an internal timer and CVDisplayLink callbacks. That means it will basically choose the "right" interval to call your drawing function at the right times, usually just after another drawable has been shown "on glass", i.e. at V-sync interval points, so that your frame has the most CPU time available to get drawn.
You can make your own view and use CVDisplayLink or CADisplayLink to manage the rate at which your draws are called. There are also other ways, such as relying on back pressure from the drawable queue (basically just calling nextDrawable in a loop, because it blocks the thread until a drawable is available) or using presentAfterMinimumDuration. Some of these are discussed in this WWDC video.
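As a rough sketch of the first option (driving an MTKView explicitly from a CADisplayLink on iOS; metalView and displayLink are assumed properties of your controller):

```objc
// Take over MTKView's redraw timing: pause its internal timer and call -draw ourselves.
- (void)startManualDrawing {
    self.metalView.paused = YES;                  // stop the view's internal redraw timer
    self.metalView.enableSetNeedsDisplay = NO;    // we will call -draw explicitly

    CADisplayLink *link = [CADisplayLink displayLinkWithTarget:self
                                                      selector:@selector(displayLinkFired:)];
    link.preferredFramesPerSecond = 60;           // pick whatever rate you actually need
    [link addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSRunLoopCommonModes];
    self.displayLink = link;
}

- (void)displayLinkFired:(CADisplayLink *)link {
    [self.metalView draw];                        // triggers drawInMTKView: on the delegate
}
```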
I think Core Animation triple buffers everything that gets composited by the window server, so basically it waits for you to finish drawing your frame, then composites it with the other layers and presents the result to the "glass".
As to your question about the delay: you are right, the CPU is two or even three "frames" ahead of the GPU. I am not too familiar with this, and I haven't tried it, but I think it's possible to effectively "skip" the frames you drew ahead of time if you delay the presentation of your drawables until the last moment, possibly until a scheduled handler on one of your command buffers.
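That last idea would look roughly like this (a sketch only; metalLayer and commandBuffer are assumed to come from your own setup):

```objc
// Present as late as possible: instead of [commandBuffer presentDrawable:drawable],
// register the present in a scheduled handler so it happens only once the GPU work is queued.
id<CAMetalDrawable> drawable = [metalLayer nextDrawable];
// ... encode your render pass targeting drawable.texture ...
[commandBuffer addScheduledHandler:^(id<MTLCommandBuffer> buffer) {
    [drawable present];
}];
[commandBuffer commit];
```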
I have a situation I've been struggling with for some time.
Within a draw: call to an MTKView, I generate a MTLTexture which is part of a render chain to that view's drawable. Now, I want to use that texture as a basis for drawing in a secondary MTKView.
To prevent animation stutter when tinkering with the app's menus, I have both views configured in explicit draw mode, with a CVDisplayLink dispatching these draw calls into a serial queue (ie. not on the main thread). I've even tried configuring the secondary view to draw on the main queue with setNeedsDisplay.
I can get this to mostly work, though due to what I suspect are some threading issues, I receive an occasional crash. I've even gone so far as to place the draw calls to these two MTKViews successively on the same serial queue (via dispatch_async) without much success. I've also tried placing a generated MTLTexture into a little semaphore-protected FIFO queue that the secondary view consumes from, again without much success.
Things will run swimmingly well for a few minutes with full-motion video frames as the source, then I receive a crash whilst in the draw method of the second view. Typically, this is what happens when I go to retrieve the texture:
id<MTLTexture> inputTexture = [textureQueue firstObject];
// EXC_BAD_ACCESS (code=1, address=0x40dedeadbec8)
Occasionally, I will end up bailing on the draw as the texture is MTLTextureType1D (instead of 2D) or its usage is erroneously only MTLTextureUsageShaderRead. And yet my MTLTextureDescriptor is fine; like I say, 99.99% of the time, things work swimmingly well.
I'm wondering if someone can assist with the proper, thread-safe way of obtaining a texture in one MTKView and passing it off to a secondary MTKView for subsequent processing.
I'm writing an OpenGL ES app for iOS. I created a thread for the EAGLContext, do all the OpenGL work on that thread, and then, when it's done, present the renderbuffer to the screen using performSelectorOnMainThread and presentRenderbuffer.
The thing is, if there is no other UIView on top of my OpenGL view, everything is fine, but if I add a view on top of it, things start to get unstable. Sometimes, when the UIView is animating, an OpenGL call will crash: sometimes at glClear, sometimes at presentRenderbuffer. Things are particularly bad on the iPhone 6 Plus, I guess because it has a bigger screen resolution.
So, am I doing something wrong, or can I only prevent the crashes by stopping OpenGL rendering while a UIView is animating?
You are doing a few things wrong here but I guess it should work...
The context is bound to a thread, so the minimum you need is to set the context as current on the main thread before presenting the renderbuffer.
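Something along these lines on the main thread (a sketch; self.context and self.colorRenderbuffer are assumed to be the context and renderbuffer from your own setup):

```objc
// Main thread, right before presenting: make the context current on this thread first,
// and make sure the colour renderbuffer is the one currently bound.
[EAGLContext setCurrentContext:self.context];
glBindRenderbuffer(GL_RENDERBUFFER, self.colorRenderbuffer);
[self.context presentRenderbuffer:GL_RENDERBUFFER];
```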
The main problem you are facing here, though, is that if you present the buffer on a separate thread, your background thread may already have continued with the next frame's drawing; if, for instance, its first call is to clear the buffer, the buffer might already be cleared by the time the main thread presents it.
So you will need some kind of locking mechanism or double buffering. Consider which thread is your main OpenGL thread, create a context on that thread and set it as current. Then create another thread with a shared context that is used on the secondary thread (it really should make no difference which is which; just define one as the "master" from which you create other shared contexts). When the drawing thread is done drawing a buffer, it should notify the main thread that it can present the buffer (calling performSelectorOnMainThread should do); the main thread should then take that buffer and present it. After it is done doing so, it should notify the drawing thread that it is ready to collect a new buffer.
So if you created double buffering, the pipeline would be something like this (a rough code sketch of the drawing-thread side follows these steps):
Drawing thread draws into buffer 1.
Drawing thread reports that drawing has ended, swaps the drawing buffer, and continues drawing into buffer 2.
Presenting thread takes (locks) buffer 1 and starts presenting it.
Buffer 2 might already be finished while buffer 1 is still locked for presenting; in that case you can either wait, or discard the contents of buffer 2 and draw into it again, hoping buffer 1 will be unlocked by the next cycle. In any case you may not swap the buffers yet.
After buffer 1 has been presented, unlock it on the presenting thread and report the unlocking to the drawing thread (reporting is not needed if you chose to keep redrawing buffer 2).
When you are notified that buffer 1 is unlocked and the contents of buffer 2 are already drawn, simply swap the buffers and notify the presenting thread to present buffer 2.
(OR) When you are done drawing into buffer 2 again, check whether buffer 1 is unlocked, swap the buffers, and again notify the presenting thread to present buffer 2.
Repeat the cycle with drawing into buffer 1...
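A rough sketch of the drawing-thread side of that handshake, using the "discard and redraw" variant from the steps above (framebuffers, bufferLocked, renderSceneIntoFramebuffer: and notifyPresenterWithBuffer: are placeholder names):

```objc
// Drawing thread: render, then try to hand the finished buffer over to the presenting thread.
- (void)drawLoop {
    NSUInteger drawIndex = 0;                        // buffer currently being drawn into
    while (self.running) {
        [self renderSceneIntoFramebuffer:self.framebuffers[drawIndex]];

        @synchronized (self) {
            if (!self.bufferLocked) {                // presenter is not holding the other buffer
                self.bufferLocked = YES;             // lock the freshly drawn buffer for presenting
                [self notifyPresenterWithBuffer:drawIndex]; // e.g. via performSelectorOnMainThread:
                drawIndex = 1 - drawIndex;           // swap: draw into the other buffer next
            }
            // else: presenter still busy, so this frame is discarded and the same
            // buffer is simply redrawn on the next pass.
        }
    }
}

// Presenting (main) thread, after presentRenderbuffer: has returned:
- (void)presentationDidFinish {
    @synchronized (self) {
        self.bufferLocked = NO;                      // drawing thread may swap on its next attempt
    }
}
```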
I read this on the Tuning Your OpenGL ES App page:
Redraw Scenes Only When the Scene Data Changes:
Your app should wait until something in the scene changes before rendering a new frame. Core Animation caches the last image presented to the user and continues to display it until a new frame is presented.
Even when your data changes, it is not necessary to render frames at the speed the hardware processes commands. A slower but fixed frame rate often appears smoother to the user than a fast but variable frame rate. A fixed frame rate of 30 frames per second is sufficient for most animation and helps reduce power consumption.
From what I understand, there is an event loop which keeps running and re-rendering the scene. We just override the onDrawFrame method and put our rendering code there. I don't have any control over when this method gets called. How, then, can I "Redraw Scenes Only When the Scene Data Changes"?
In my case, the scene only changes when the user interacts (click, pinch, etc.). Ideally I would like not to render while the user isn't interacting with my scene, but this function keeps getting called continuously. I'm confused.
At the lowest exposed level, there is an OpenGL-containing type of CoreAnimation layer, CAEAGLLayer. That can supply a colour buffer that is usable to construct a framebuffer object, to which you can draw as and when you wish, presenting as and when you wish. So that's the full extent of the process for OpenGL in iOS: draw when you want, present when you want.
The layer then has a pixel copy of the scene. Normal Core Animation rules apply so you're never autonomously asked to redraw. Full composition, transitions, Core Animation animations, etc, can all occur without any work on your part.
It is fairly common practice to connect up a timer, usually a CADisplayLink, either manually or just by taking advantage of one of the GLKit constructs. In that case you're trying to produce one frame per timer tick. Apple is suggesting that running the timer at half the refresh rate is acceptable and that, if you wake up to perform a draw and realise you'd just be drawing exactly the same frame as last time, you shouldn't bother drawing at all; ideally, stop the timer completely if you have sufficient foresight.
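A sketch of that pattern with a dirty flag and a pausable CADisplayLink (sceneNeedsRedraw, displayLink and renderScene are illustrative names):

```objc
// Mark the scene dirty from your interaction handlers (tap, pinch, ...).
- (void)sceneDidChange {
    self.sceneNeedsRedraw = YES;
    self.displayLink.paused = NO;             // resume ticks while there is work to do
}

// CADisplayLink callback.
- (void)displayLinkFired:(CADisplayLink *)link {
    if (!self.sceneNeedsRedraw) {
        self.displayLink.paused = YES;        // nothing changed: stop waking up at all
        return;
    }
    self.sceneNeedsRedraw = NO;
    [self renderScene];                       // draw and presentRenderbuffer: as usual
}
```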
As per the comment, onDrawFrame isn't worded like an Objective-C or Swift method and isn't provided by any Apple-supplied library. Whoever is posting that, presumably to try to look familiar to Android authors, needs to take responsibility for appropriate behaviour.
I'm trying to debug some hand written OpenGLES 2.0 code on iOS 7. The code runs fine in so far as it doesn't crash, run out of memory or behave erratically on both the simulator and on an actual iPhone device but the graphical output is not what I'm expecting.
I'm using the Capture OpenGLES frame feature in Xcode 5.1 to try to debug the OGL calls but I find that when I click the button to capture a frame the app crashes (in OpenGL rendering code - glDrawArrays() to be exact) with an EXC_BAD_ACCESS, code = 1.
To repeat, the code will run fine with no crashes for arbitrarily long and it is only when I click the button in the debugger to capture the frame that the crash occurs.
Any suggestions as to what I may be doing wrongly that would cause this to happen ?
I don't know exactly what you are doing, but here is what I was doing that caused normal (although different than expected) rendering, and a crash only when attempting to capture the frame:
Load the texture asynchronously (own code, not GLKit, but a similar method) on a background thread, using a background EAGLContext (same sharegroup as the main context). Pass a C block as a 'completion handler' that takes the created texture as its only argument, to hand it back to the client when created.
On completion, call the block (Note that this is from the texture loading method, so we are still running in the background thread/gl context.)
From within the completion block, create a sprite using said texture. The sprite creation involves generating a Vertex Array Object with the vertex/texture coords, shader attribute locations, etc. Said code does not call OpenGL ES functions directly, but instead uses a set of wrapper functions that cache OpenGL ES state on the client (app) side, and only call the actual functions if the values involved change. Because GL state is cached on the client side, a separate data structure is needed for each GL context, and the caching functions must always know which context they are dealing with. The VAO-generating code was not aware that it was being run on the background context, so the caching was likely corrupted/out of sync.
Render said sprite every frame: nothing is drawn. When attempting OpenGL Frame Capture, it crashes with EXC_BAD_ACCESS at: glDrawElements(GL_TRIANGLE_STRIP, 4, GL_UNSIGNED_SHORT, 0);
I do not really need to create the geometry on a background thread, so what I did is force the completion block to be called on the main thread (see this question), so that once the texture is ready all the sprites are created using the main GL context.
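In practice that just means hopping back onto the main queue before invoking the completion block, roughly like this (loadTextureWithCompletion:, loaderQueue, backgroundContext and createTextureFromFile are placeholders for my own loader code):

```objc
// Background texture loader: do the GL upload on the background context, but invoke the
// caller's completion block on the main thread, so any follow-up work (VAO/sprite creation)
// runs against the main context and its client-side state cache.
- (void)loadTextureWithCompletion:(void (^)(GLuint textureName))completion {
    dispatch_async(self.loaderQueue, ^{
        [EAGLContext setCurrentContext:self.backgroundContext]; // same sharegroup as main
        GLuint texture = [self createTextureFromFile];
        glFlush();                                // make the texture usable by the main context
        dispatch_async(dispatch_get_main_queue(), ^{
            completion(texture);                  // sprite/VAO creation happens on the main context
        });
    });
}
```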