I have a situation I've been struggling with for some time.
Within a draw call to an MTKView, I generate an MTLTexture which is part of a render chain to that view's drawable. Now, I want to use that texture as a basis for drawing in a secondary MTKView.
To prevent animation stutter when tinkering with the app's menus, I have both views configured for explicit drawing, with a CVDisplayLink dispatching these draw calls onto a serial queue (i.e. not on the main thread). I've even tried configuring the secondary view to draw on the main queue with setNeedsDisplay.
I can get this mostly working, though I receive an occasional crash due to what I suspect is a threading issue. I've even gone so far as to place the draw calls to these two MTKViews successively on the same serial queue (via dispatch_async) without much success. I've also tried placing each generated MTLTexture into a little semaphore-protected FIFO queue (sketched below) that the secondary view consumes from -- again, without much success.
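For reference, the FIFO is along these lines. This is a minimal sketch rather than my exact code; the class and method names (TextureFIFO, push:, pop) are mine, not real API:

#import <Foundation/Foundation.h>
#import <Metal/Metal.h>

// A semaphore-protected FIFO for handing textures between the two views.
@interface TextureFIFO : NSObject
- (void)push:(id<MTLTexture>)texture;
- (id<MTLTexture>)pop; // returns nil if the queue is empty
@end

@implementation TextureFIFO {
    NSMutableArray<id<MTLTexture>> *_textures;
    dispatch_semaphore_t _lock; // binary semaphore used as a mutex
}

- (instancetype)init {
    if ((self = [super init])) {
        _textures = [NSMutableArray array];
        _lock = dispatch_semaphore_create(1);
    }
    return self;
}

// Producer side: called from the first view's draw.
- (void)push:(id<MTLTexture>)texture {
    dispatch_semaphore_wait(_lock, DISPATCH_TIME_FOREVER);
    [_textures addObject:texture];
    dispatch_semaphore_signal(_lock);
}

// Consumer side: called from the second view's draw.
- (id<MTLTexture>)pop {
    dispatch_semaphore_wait(_lock, DISPATCH_TIME_FOREVER);
    id<MTLTexture> texture = _textures.firstObject;
    if (texture) {
        [_textures removeObjectAtIndex:0];
    }
    dispatch_semaphore_signal(_lock);
    return texture;
}
@end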
Things will run swimmingly well for a few minutes with full-motion video frames as the source, then I receive a crash whilst in the draw method of the second view. Typically, this is what happens when I go to retrieve the texture:
id<MTLTexture> inputTexture = [textureQueue firstObject];
// EXC_BAD_ACCESS (code=1, address=0x40dedeadbec8)
Occasionally, I will end up bailing on the draw because the texture's type comes back as MTLTextureType1D (instead of MTLTextureType2D), or its usage is erroneously just MTLTextureUsageShaderRead. And yet my MTLTextureDescriptor is fine; like I say, 99.99% of the time things work swimmingly well.
I'm wondering if someone can assist with the proper, thread-safe way of obtaining a texture in one MTKView and passing it off to a secondary MTKView for subsequent processing.
Related
MetalKit calls drawInMTKView when it wants your delegate to draw a new frame, but I wonder if it waits for the last drawable to have been presented before it asks your delegate to draw into a new one.
From what I understand reading this article, Core Animation can provide up to three "in flight" drawables, but I can't find out whether MetalKit tries to draw to them as soon as possible or whether it waits for something else to happen.
What would this something else be? What confuses me a little is the idea of drawing up to two frames in advance, since it means the CPU must already know what it wants to render two frames into the future, and I feel like that isn't always the case. For instance, if your application depends on user input, you can't know up front what actions the user will have taken between now and when the two frames you are drawing are presented, so they may be presented with out-of-date content. Is this assumption right? If so, it could make sense to only call the delegate method at a maximum rate determined by the intended frame rate.
The problem with synchronizing with the frame rate is that the CPU may then sit idle when it could have been doing useful work.
I also have the intuition this may not be how it works: in the aforementioned article, drawInMTKView seems to be called as soon as a drawable is available, since the sample relies on it being called to schedule work in a way that avoids stalling the CPU. But since there are many points that are unclear to me, I am not sure what is happening exactly.
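For reference, the pattern I'm referring to usually looks something like this. This is my own sketch of the common "frames in flight" setup, not code from the article; _inFlight and _commandQueue are illustrative ivars:

#import <MetalKit/MetalKit.h>

// _inFlight is a counting semaphore created once, e.g. in init:
//     _inFlight = dispatch_semaphore_create(3); // max frames in flight
- (void)drawInMTKView:(MTKView *)view {
    // Block here if the CPU is already three frames ahead of the GPU.
    dispatch_semaphore_wait(_inFlight, DISPATCH_TIME_FOREVER);

    id<MTLCommandBuffer> commandBuffer = [_commandQueue commandBuffer];
    // ... encode rendering commands here ...

    dispatch_semaphore_t semaphore = _inFlight;
    [commandBuffer addCompletedHandler:^(id<MTLCommandBuffer> cb) {
        // The GPU finished this frame; let the CPU start encoding another.
        dispatch_semaphore_signal(semaphore);
    }];

    [commandBuffer presentDrawable:view.currentDrawable];
    [commandBuffer commit];
}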
The MTKView documentation mentions, on the page for paused, that
If the value is NO, the view periodically redraws the contents, at a frame rate set by the value of preferredFramesPerSecond.
Based on the samples available for MTKView, it probably uses a combination of an internal timer and CVDisplayLink callbacks. That means it will basically choose the "right" interval and call your drawing function at the right times, usually right after the previous drawable has been shown "on glass" (i.e. at V-sync points), so that your frame has the most CPU time available to get drawn.
You can make your own view and use CVDisplayLink or CADisplayLink to manage the rate at which your draws are called. There are also other approaches, such as relying on the back pressure of the drawable queue (basically just calling nextDrawable in a loop, because it blocks the thread until a drawable is available) or using presentAfterMinimumDuration. Some of these are discussed in this WWDC video.
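A rough sketch of the CVDisplayLink route on macOS (illustrative, untested; the callback just forwards to the view's draw method):

#import <MetalKit/MetalKit.h>
#import <CoreVideo/CoreVideo.h>

static CVReturn DisplayLinkCallback(CVDisplayLinkRef displayLink,
                                    const CVTimeStamp *inNow,
                                    const CVTimeStamp *inOutputTime,
                                    CVOptionFlags flagsIn,
                                    CVOptionFlags *flagsOut,
                                    void *context) {
    MTKView *view = (__bridge MTKView *)context;
    // The display link fires on its own thread; hop to a known queue before
    // touching the view. [view draw] invokes drawInMTKView: synchronously.
    dispatch_async(dispatch_get_main_queue(), ^{
        [view draw];
    });
    return kCVReturnSuccess;
}

// Setup, with the view configured for explicit drawing:
//   view.paused = YES;
//   view.enableSetNeedsDisplay = NO;
//   CVDisplayLinkRef link;
//   CVDisplayLinkCreateWithActiveCGDisplays(&link);
//   CVDisplayLinkSetOutputCallback(link, &DisplayLinkCallback,
//                                  (__bridge void *)view);
//   CVDisplayLinkStart(link);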
I think Core Animation triple buffers everything that gets composited by the Window Server, so basically it waits for you to finish drawing your frame, then composites it with the other frames, and then presents the result to the "glass".
As to your question about the delay: you are right, the CPU can be two or even three "frames" ahead of the GPU. I am not too familiar with this, and I haven't tried it, but I think it's possible to actually "skip" the frames you drew ahead of time if you delay the presentation of your drawables until the last moment, possibly until the scheduled handler on one of your command buffers.
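Something along these lines (untested; just to illustrate deferring presentation to the scheduled handler, with view and commandQueue assumed to exist):

id<CAMetalDrawable> drawable = view.currentDrawable;
id<MTLCommandBuffer> commandBuffer = [commandQueue commandBuffer];

// ... encode rendering into drawable.texture ...

[commandBuffer addScheduledHandler:^(id<MTLCommandBuffer> cb) {
    // Runs when the buffer is scheduled on the GPU: about the latest point
    // at which the drawable can still be presented.
    [drawable present];
}];
[commandBuffer commit];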
I have an OpenGL-based rendering pipeline for filtering images, which I now want to use to process videos as well.
On the one end of the pipeline is an AVPlayer for fetching frames from a video file; on the other end is my preview view, backed by a CAEAGLLayer. The rendering itself happens async on a different thread because it's quite expensive. The view is hooked to a CADisplayLink that triggers a new async rendering on every screen refresh. When the pipeline is done rendering into the layer's renderbuffer, I call presentRenderbuffer: to show it on screen (on the rendering thread). Draw requests that arrive while a rendering is still in progress are ignored.
This works. However, I seem to be getting synchronization issues with the display refresh. When I set the frameInterval of the display link to 1 (call every frame), I end up with ~2 FPS of actual view refreshes. If I set it to 2 (call every other frame), I suddenly get 15 FPS. Setting it to 4 drops the FPS down to 2 again.
My guess is that the async call to presentRenderbuffer: happens "at the wrong moment" in the run loop and is either ignored by the system or delayed.
Now I want to know what's the best practice for displaying the results of async renderings in a view. All the examples and docs I could find only describe the single-threaded case.
In these cases it is best to use double buffering, which in your case means 2 textures. The video should be rendered into an FBO (framebuffer object) with an attached texture. Since the drawing happens on a separate thread, I suggest you create the 2 textures on the main context (on the main thread), then create a shared context on the other thread, which can then access the 2 textures.
Now, there is no sense in blocking the background thread, since it is expected to be slow. What it should do is keep rendering into one texture, then once done, send that texture to the main thread (where you present the buffer) and continue drawing into the other texture. Then repeat the procedure.
The main thread should then check whether it has received a request to display a new texture; when it does, it should draw that texture into the main buffer and present it. If you need to draw at 60 FPS (or any other constant rate) you can still do that, but you will be redrawing the same texture.
Now, just to be on the safe side, you should still probably add some locking mechanism. Since the background thread does the buffer swapping (sends the new texture and starts drawing into the previous one), it makes sense to have a boolean value swapLocked that the main thread sets to true just before it starts drawing and back to false once it is done with the texture. Now, if the background thread finishes drawing while swapLocked is true, it should not continue drawing; it should perform the swap and resume drawing once swapLocked is set back to false. You can override the setter to do that, but be careful to continue the process on the background thread, since the setter will most likely be called on the main thread.
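A rough sketch of that arrangement. All names here (DoubleBufferedRenderer, swapLocked, and the render/present helper methods) are illustrative, not real API, and the busy-wait is deliberately crude:

#import <Foundation/Foundation.h>
#import <OpenGLES/ES2/gl.h>
#include <unistd.h>

@interface DoubleBufferedRenderer : NSObject
// Set by the main thread while it is drawing the front texture.
@property (atomic) BOOL swapLocked;
@end

@implementation DoubleBufferedRenderer {
    GLuint _textures[2];   // created up front on the main context
    NSUInteger _backIndex; // the texture the background thread renders into
}

// Background thread, running on the shared context.
- (void)renderLoop {
    while (YES) {
        [self renderVideoFrameIntoTexture:_textures[_backIndex]]; // hypothetical
        glFlush(); // make the result visible to the main context

        // Wait until the main thread is done reading the front texture.
        while (self.swapLocked) {
            usleep(100); // crude; a condition variable would be nicer
        }

        NSUInteger front = _backIndex;
        _backIndex = 1 - _backIndex; // swap buffers
        dispatch_async(dispatch_get_main_queue(), ^{
            [self presentTexture:self->_textures[front]];
        });
    }
}

// Main thread: draw the finished texture into the main buffer and present it.
- (void)presentTexture:(GLuint)texture {
    self.swapLocked = YES;
    [self drawTextureToScreenAndPresent:texture]; // hypothetical
    self.swapLocked = NO;
}
@end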
I'm trying to debug some hand-written OpenGL ES 2.0 code on iOS 7. The code runs fine insofar as it doesn't crash, run out of memory, or behave erratically on both the simulator and an actual iPhone, but the graphical output is not what I'm expecting.
I'm using the Capture OpenGL ES Frame feature in Xcode 5.1 to try to debug the GL calls, but I find that when I click the button to capture a frame, the app crashes (in OpenGL rendering code, glDrawArrays() to be exact) with an EXC_BAD_ACCESS, code = 1.
To repeat, the code will run fine with no crashes for arbitrarily long and it is only when I click the button in the debugger to capture the frame that the crash occurs.
Any suggestions as to what I may be doing wrong that would cause this to happen?
I don't know exactly what you are doing, but here is what I was doing that caused normal (although different than expected) rendering, and a crash only when attempting to capture the frame:
Load the texture asynchronously (own code, not GLKit, but a similar method) on a background thread, using a background EAGLContext (same sharegroup as the main context). Pass a C block as a 'completion handler' that takes the created texture as its only argument, to hand it back to the client once created.
On completion, call the block. (Note that this happens inside the texture loading method, so we are still running on the background thread/GL context.)
From within the completion block, create a sprite using said texture. The sprite creation involves generating a vertex array object with the vertex/texture coords, shader attribute locations, etc. Said code does not call OpenGL ES functions directly, but instead uses a set of wrapper functions that cache OpenGL ES state on the client (app) side, and only call the actual functions if the values involved change. Because GL state is cached on the client side, a separate data structure is needed for each GL context, and the caching functions must always know which context they are dealing with. The VAO-generating code was not aware that it was being run on the background context, so the caching was likely corrupted/out of sync.
Render said sprite every frame: nothing is drawn. When attempting OpenGL Frame Capture, it crashes with EXC_BAD_ACCESS at: glDrawElements(GL_TRIANGLE_STRIP, 4, GL_UNSIGNED_SHORT, 0);
I do not really need to create the geometry on a background thread, so what I did was force the completion block to be called on the main thread (see this question), so that once the texture is ready, all the sprites are created using the main GL context.
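In code, the fix amounts to something like this (a sketch; the method and property names are hypothetical, not my actual code):

#import <OpenGLES/EAGL.h>
#import <OpenGLES/ES2/gl.h>

// Load on the background context, but run the completion block on the main
// thread so all sprite/VAO creation touches only the main GL context.
- (void)loadTextureAtPath:(NSString *)path
               completion:(void (^)(GLuint texture))completion {
    dispatch_async(self.backgroundQueue, ^{ // hypothetical serial queue
        // Background context shares its sharegroup with the main context.
        [EAGLContext setCurrentContext:self.backgroundContext]; // hypothetical
        GLuint texture = [self createTextureFromFile:path]; // hypothetical
        glFlush(); // make the texture's contents visible to other contexts

        dispatch_async(dispatch_get_main_queue(), ^{
            // Back on the main thread/context: safe to build VAOs and sprites.
            completion(texture);
        });
    });
}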
I'm using OpenGL ES to write a custom UI framework on iOS. The use case is a regular application, i.e. something that won't be updating on a per-frame basis the way a game does. From what I can see so far, it seems that the default behavior of GLKViewController is to redraw the screen at a rate of about 30 fps. It's typical for UI to only redraw itself when necessary to reduce resource usage, and I'd like to not drain extra battery power by utilizing the GPU while the user isn't doing anything.
I tried only clearing and drawing the screen once as a test, and got a warning from the profiler saying that an uninitialized color buffer was being displayed.
Looking into it, I found this documentation: http://developer.apple.com/library/ios/#DOCUMENTATION/iPhone/Reference/EAGLDrawable_Ref/EAGLDrawable/EAGLDrawable.html
The documentation states that there is a flag, kEAGLDrawablePropertyRetainedBacking, which when set to YES allows the backbuffer to retain what was drawn to it in the previous frame. However, it also states that this isn't recommended and can cause performance and memory issues, which is exactly what I'm trying to avoid in the first place.
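For reference, this is where that flag would be set (a sketch of the layer configuration, not something I'm recommending):

#import <QuartzCore/CAEAGLLayer.h>
#import <OpenGLES/EAGLDrawable.h>

// Sketch: enabling retained backing on the view's CAEAGLLayer.
CAEAGLLayer *eaglLayer = (CAEAGLLayer *)self.view.layer;
eaglLayer.drawableProperties = @{
    kEAGLDrawablePropertyRetainedBacking : @YES, // keep last frame's contents
    kEAGLDrawablePropertyColorFormat     : kEAGLColorFormatRGBA8
};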
I plan to try both ways, drawing every frame and not, but I'm curious if anyone has encountered this situation. What would you recommend? Is it not as big a deal as I assume it is to redraw everything 30 times per second?
In this case, you shouldn't use GLKViewController, as its very purpose is to provide a simple animation timer on the main loop. Instead, your view can be owned by any other subclass of UIViewController (including one of your own creation), and you can rely on the usual setNeedsDisplay/drawRect system used by all other UIKit views.
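For example, if your view is a GLKView owned by a plain view controller, you can turn on its enableSetNeedsDisplay property so it draws only on demand (a sketch; the class name is illustrative):

#import <GLKit/GLKit.h>

// A GLKView driven on demand instead of by a GLKViewController timer.
@interface OnDemandGLViewController : UIViewController <GLKViewDelegate>
@end

@implementation OnDemandGLViewController

- (void)viewDidLoad {
    [super viewDidLoad];
    GLKView *glView = (GLKView *)self.view;
    glView.context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
    glView.delegate = self;
    glView.enableSetNeedsDisplay = YES; // redraw only when asked
}

// Called only when something invalidates the view.
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect {
    glClearColor(0.1f, 0.1f, 0.1f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    // ... draw the current UI state ...
}

// Then, whenever the UI actually changes:
//   [glView setNeedsDisplay];
@end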
It's not the backbuffer that retains the image, but a separate buffer. Possibly a separate buffer created specifically for your view.
You can always set paused on the GLKViewController to pause the rendering loop.
Currently, I have a fixed-timestep game loop running on a second thread in my game. The OpenGL context is on the same thread, and rendering is done once per frame after any updates, so the main "loop" has to wait for drawing each frame before it can proceed. This wasn't really a problem until I wrote my particle system. Upwards of 1500 particles with a physics step of 16 ms causes the framerate to drop just below 30; any more and it's worse. The particle rendering can't be optimized any further without losing capability, so I decided to try moving OpenGL to a third thread. I know this is somewhat of an extreme case, but I feel it should be able to handle it.
I've thought of running two loops concurrently, one for the main stepping (fixed timestep) and one for drawing (however fast it can go). The rendering calls pass in data that may be changed on each update, though, so I was concerned that locking would slow it down and negate the benefit. However, after implementing a test to do this, I'm just getting EXC_BAD_ACCESS after less than a second of runtime. I assume that's because both threads are trying to access the same data at the same time? I thought the system handled this automatically?
When I was first learning OpenGL on the iPhone, I had OpenGL set up on the main thread and would call performSelectorOnMainThread:withObject:waitUntilDone: with the rendering selector; these errors would happen any time waitUntilDone was false. If it was true, it would still happen randomly sometimes, but other times I could let it run for 30 minutes and it would be fine. Same concept as what's happening now, I assume. I am getting the first frame drawn to the screen before it crashes, though, so I know something is working.
How would this be handled properly, and would it even provide the speedup I'm looking for? Or would the contended access slow it down just as much?
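For concreteness, the structure I have in mind looks roughly like this (my own sketch; ParticleSnapshot, updateTick, renderTick, and drawParticles are illustrative names, not working code from my project):

#include <pthread.h>

// The update loop publishes a snapshot of renderable particle state under a
// lock; the render loop copies it out and draws without holding the lock.
typedef struct {
    float positions[1500 * 2]; // x,y per particle (sized for my worst case)
    int   count;
} ParticleSnapshot;

void drawParticles(const ParticleSnapshot *s); // hypothetical GL drawing

static pthread_mutex_t gSnapshotLock = PTHREAD_MUTEX_INITIALIZER;
static ParticleSnapshot gSnapshot;

// Update thread, fixed 16 ms timestep:
void updateTick(const ParticleSnapshot *freshlySimulated) {
    pthread_mutex_lock(&gSnapshotLock);
    gSnapshot = *freshlySimulated; // publish this step's results
    pthread_mutex_unlock(&gSnapshotLock);
}

// Render thread, as fast as it can go:
void renderTick(void) {
    ParticleSnapshot local;
    pthread_mutex_lock(&gSnapshotLock);
    local = gSnapshot; // copy out quickly; keep the critical section tiny
    pthread_mutex_unlock(&gSnapshotLock);

    drawParticles(&local); // draw with no lock held
}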