When exactly is drawInMTKView called? - ios

MetalKit calls drawInMTKView when it wants your delegate to draw a new frame, but I wonder whether it waits for the last drawable to have been presented before it asks your delegate to draw a new one.
From what I understand reading this article, CoreAnimation can provide up to three "in flight" drawables, but I can't find whether MetalKit tries to draw to them as soon as possible or if it waits for something else to happen.
What would this something else be? What confuses me a little is the idea of drawing up to two frames in advance: it means the CPU must already know what it wants to render two frames in the future, and I feel that isn't always the case. For instance, if your application depends on user input, you can't know in advance what actions the user will have taken between now and when the two frames you are drawing are presented, so they may be presented with out-of-date content. Is this assumption right? If so, it could make sense to only call the delegate method at a maximum rate determined by the intended frame rate.
The problem with synchronizing with the frame rate is that this means the CPU may sometimes be inactive when it could have done some useful work.
I also have the intuition this may not be how it works, since in the aforementioned article drawInMTKView seems to be called as soon as a drawable is available: the authors appear to rely on it being called that way to schedule work so as to avoid CPU stalls. But since many points are unclear to me, I am not sure what happens exactly.

The MTKView documentation says of the paused property:
If the value is NO, the view periodically redraws the contents, at a frame rate set by the value of preferredFramesPerSecond.
Based on the samples available for MTKView, it probably uses a combination of an internal timer and CVDisplayLink callbacks. This means it will basically choose the "right" interval to call your drawing function, usually just after the previous drawable has been shown "on glass", i.e. at V-sync points, so that your frame has the most CPU time available to get drawn.
You can make your own view and use CVDisplayLink or CADisplayLink to manage the rate at which your draws are called. There are also other ways, such as relying on back pressure from the drawable queue (basically just calling nextDrawable in a loop, because it blocks the thread until a drawable is available) or using presentAfterMinimumDuration. Some of these are discussed in this WWDC video.
I think Core Animation triple-buffers everything that gets composited by the window server: basically, it waits for you to finish drawing your frame, then composites it with the other layers, and then presents the result to the "glass".
As to your question about the delay: you are right, the CPU can be two or even three "frames" ahead of the GPU. I am not too familiar with this, and I haven't tried it, but I think it's possible to "skip" the frames you drew ahead of time if you delay presenting your drawables until the last moment, possibly until the scheduled handler on one of your command buffers.
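Stripped of the Metal specifics, the throttling pattern Apple's sample code uses for frames in flight is a counting semaphore around frame encoding. A minimal sketch of the idea, in Python for brevity (the names and counts are my own, not MetalKit's; in real code the release happens in the command buffer's completed handler):

```python
import threading
import time

# The CPU may encode at most three frames ahead of the "GPU".
MAX_FRAMES_IN_FLIGHT = 3
in_flight = threading.Semaphore(MAX_FRAMES_IN_FLIGHT)

completed = []
workers = []

for frame in range(12):
    # Blocks when three frames are already in flight -- this is where a
    # real render loop stalls instead of running arbitrarily far ahead.
    in_flight.acquire()

    # (Encode the frame's command buffer on the CPU here.)
    def gpu_work(f=frame):
        time.sleep(0.002)       # stands in for the GPU executing the frame
        completed.append(f)
        in_flight.release()     # the "command buffer completed" handler

    t = threading.Thread(target=gpu_work)
    t.start()
    workers.append(t)

for t in workers:
    t.join()

print(len(completed))  # 12
```

The key property is that the encoding loop never gets more than MAX_FRAMES_IN_FLIGHT iterations ahead of the completion handlers, which is exactly the bounded "frames ahead" behaviour being discussed.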

Related

Best way to pass textures between MTKViews?

I have a situation I've been struggling with for some time.
Within a draw: call to an MTKView, I generate an MTLTexture which is part of a render chain to that view's drawable. Now, I want to use that texture as the basis for drawing in a secondary MTKView.
To prevent animation stutter when tinkering with the app's menus, I have both views configured for explicit drawing, with a CVDisplayLink dispatching these draw calls onto a serial queue (i.e. not on the main thread). I've even tried configuring the secondary view to draw on the main queue with setNeedsDisplay.
I can get this to mostly work, though due to what I suspect are threading issues, I get an occasional crash. I've even gone so far as to place the draw calls to these two MTKViews successively on the same serial queue (via dispatch_async) without much success. I've also tried placing each generated MTLTexture into a little semaphore-protected FIFO queue that the secondary view consumes from, again without much success.
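For reference, the semaphore-protected FIFO I tried is essentially the following (sketched in Python rather than Objective-C, with strings standing in for id<MTLTexture>; the depth and names are simplified from my app):

```python
import threading
from collections import deque

class TextureQueue:
    """Lock-protected FIFO; strings stand in for id<MTLTexture> objects."""
    def __init__(self, depth=3):
        self._items = deque(maxlen=depth)  # keep the queue shallow
        self._lock = threading.Lock()

    def push(self, texture):
        # Called from the first view's draw path.
        with self._lock:
            self._items.append(texture)

    def pop(self):
        # Called from the second view's draw path; None means "nothing yet".
        with self._lock:
            return self._items.popleft() if self._items else None

q = TextureQueue()
q.push("frame-0")
q.push("frame-1")
print(q.pop())  # frame-0
```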
Things will run swimmingly well for a few minutes with full-motion video frames as the source, then I receive a crash whilst in the draw method of the second view. Typically, this is what happens when I go to retrieve the texture:
id<MTLTexture> inputTexture = [textureQueue firstObject];
// EXC_BAD_ACCESS (code=1, address=0x40dedeadbec8)
Occasionally, I end up bailing on the draw because the texture is MTLTextureType1D (instead of 2D) or its usage is erroneously only MTLTextureUsageShaderRead. And yet my MTLTextureDescriptor is fine; as I say, 99.99% of the time things work swimmingly.
I'm wondering if someone can help with the proper, thread-safe way of obtaining a texture in one MTKView and passing it off to a secondary MTKView for subsequent processing.

Redraw Scenes Only When the Scene Data Changes

I read this on the Tuning Your OpenGL ES App page, under "Redraw Scenes Only When the Scene Data Changes":
Your app should wait until something in the scene changes before rendering a new frame. Core Animation caches the last image presented to the user and continues to display it until a new frame is presented.
Even when your data changes, it is not necessary to render frames at the speed the hardware processes commands. A slower but fixed frame rate often appears smoother to the user than a fast but variable frame rate. A fixed frame rate of 30 frames per second is sufficient for most animation and helps reduce power consumption.
From what I understand, there is an event loop which keeps running and re-rendering the scene. We just override the onDrawFrame method and put our rendering code there. I don't have any control over when this method gets called. How, then, can I "Redraw Scenes Only When the Scene Data Changes"?
In my case, the scene only changes when the user interacts (click, pinch, etc.). Ideally I would like to not render while the user isn't interacting with my scene, but this function is called continuously. I am confused.
At the lowest exposed level, there is an OpenGL-containing type of Core Animation layer, CAEAGLLayer. It can supply a colour buffer that can be used to construct a framebuffer object, to which you can draw as and when you wish, presenting as and when you wish. So that's the full extent of the process for OpenGL on iOS: draw when you want, present when you want.
The layer then has a pixel copy of the scene. Normal Core Animation rules apply so you're never autonomously asked to redraw. Full composition, transitions, Core Animation animations, etc, can all occur without any work on your part.
It is fairly common practice to connect up a timer, usually a CADisplayLink, either manually or just by taking advantage of one of the GLKit constructs. In that case you're trying to produce one frame per timer tick. Apple is suggesting that running the timer at half the refresh rate is acceptable and that, if you wake up to perform a draw and realise you'd just be drawing exactly the same frame as last time, you needn't draw at all, ideally stopping the timer completely if you have sufficient foresight.
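That "don't bother drawing the same frame" advice reduces to a dirty flag checked on every timer tick. A minimal sketch of the pattern, in Python for brevity (the class and method names are mine, not GLKit's):

```python
class SceneRenderer:
    """Redraw only when the scene data has changed since the last frame."""
    def __init__(self):
        self.frames_drawn = 0
        self._dirty = True              # the first frame always draws

    def user_interacted(self):          # call from touch/pinch handlers
        self._dirty = True

    def display_link_tick(self):        # called once per vsync by the timer
        if not self._dirty:
            return                      # nothing changed: skip the GPU entirely
        self._dirty = False
        self.frames_drawn += 1          # real code would issue draw calls here

r = SceneRenderer()
for tick in range(10):
    if tick == 5:
        r.user_interacted()             # one input event mid-run
    r.display_link_tick()

print(r.frames_drawn)  # 2: the initial frame plus one after the interaction
```

Stopping the timer entirely while the flag stays clean is the further optimisation the answer mentions.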
As per the comment, onDrawFrame isn't worded like an Objective-C or Swift method and isn't provided by any Apple-supplied library. Whoever is posting it, presumably to look familiar to Android authors, needs to take responsibility for appropriate behaviour.

iOS OpenGL ES - Only draw on request

I'm using OpenGL ES to write a custom UI framework on iOS. The use case is an application, i.e. something that (unlike a game) won't be updating on a per-frame basis. From what I can see so far, the default behaviour of GLKViewController is to redraw the screen at a rate of about 30 fps. It's typical for UI to redraw itself only when necessary to reduce resource usage, and I'd like not to drain extra battery power by using the GPU while the user isn't doing anything.
I tried only clearing and drawing the screen once as a test, and got a warning from the profiler saying that an uninitialized color buffer was being displayed.
Looking into it, I found this documentation: http://developer.apple.com/library/ios/#DOCUMENTATION/iPhone/Reference/EAGLDrawable_Ref/EAGLDrawable/EAGLDrawable.html
The documentation states that there is a flag, kEAGLDrawablePropertyRetainedBacking, which, when set to YES, makes the backing buffer retain what was drawn to it in the previous frame. However, it also states that this isn't recommended and can cause performance and memory issues, which is exactly what I'm trying to avoid in the first place.
I plan to try both ways, drawing every frame and not, but I'm curious whether anyone has encountered this situation. What would you recommend? Is it not as big a deal as I assume to redraw everything 30 times per second?
In this case, you shouldn't use GLKViewController, as its very purpose is to provide a simple animation timer on the main loop. Instead, your view can be owned by any other subclass of UIViewController (including one of your own creation), and you can rely on the usual setNeedsDisplay/drawRect system used by all other UIKit views.
It's not the backbuffer that retains the image, but a separate buffer. Possibly a separate buffer created specifically for your view.
You can always set paused on the GLKViewController to pause the rendering loop.

Is iOS glGenerateMipmap synchronous, or is it possibly asynchronous?

I'm developing an iPad app that uses large textures in OpenGL ES. When the scene first loads I get a large black artifact on the ceiling for a few frames, as seen in the picture below. It's as if higher levels of the mipmap have not yet been filled in. On subsequent frames, the ceiling displays correctly.
This problem only began showing up when I started using mipmapping. One possible explanation is that the glGenerateMipmap() call does its work asynchronously, spawning some mipmap creation worker (in a separate process, or perhaps in the GPU) and returning.
Is this possible, or am I barking up the wrong tree?
Within a single context, all operations will appear to execute strictly in order. However, in your most recent reply, you mentioned using a second thread. To do that, you must have created a second shared context: it is always illegal to re-enter an OpenGL context. If already using a shared context, there are still some synchronization rules you must follow, documented at http://developer.apple.com/library/ios/ipad/#DOCUMENTATION/3DDrawing/Conceptual/OpenGLES_ProgrammingGuide/WorkingwithOpenGLESContexts/WorkingwithOpenGLESContexts.html
It should be synchronous; OpenGL does not in itself have any real concept of threading (excepting the implicit asynchronous dialogue between CPU and GPU).
A good way to diagnose would be to switch to GL_LINEAR_MIPMAP_LINEAR. If it's genuinely a problem with lower-resolution mipmaps not arriving until later, then you'll see the troublesome areas on the ceiling blend into one another rather than the current black-or-correct effect.
A second guess, based on the output, would be some sort of depth buffer clearing issue.
I followed @Tommy's suggestion and switched to GL_LINEAR_MIPMAP_LINEAR. The black-or-correct effect then changed to a fade between correct and black.
I guess that although we all know OpenGL is a pipeline (and therefore asynchronous unless you are retrieving state or explicitly synchronizing), we tend to forget it. I certainly did in this case, where I was not drawing, but loading and setting up textures.
Once I confirmed the nature of the problem, I added a glFinish() after loading all my textures, and the problem went away. (By the way, my draw loop is in the foreground, and my texture-loading loop is in the background because it is so time-consuming it would impair interactivity. Also, since this may vary between platforms: I'm using iOS 5 on an iPad 2.)
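Stripped of the OpenGL specifics, the shape of the fix is: finish the work before publishing the handle. In my setup, glFinish() plays the role of the "done" signal below (a Python sketch; the names are mine):

```python
import threading
import time

published = {}                  # textures visible to the draw loop
loader_done = threading.Event()

def background_loader():
    # ... upload texture data and generate mipmaps (elided) ...
    time.sleep(0.01)            # stands in for the driver finishing the upload
    # glFinish() goes here in the GL version: only publish once the
    # pipeline has actually completed the work.
    published["ceiling"] = "texture-handle"
    loader_done.set()

threading.Thread(target=background_loader).start()

# Draw loop: wait for the published handle instead of racing the loader.
loader_done.wait()
print(published["ceiling"])  # texture-handle
```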

DirectX: Game loop order, draw first and then handle input?

I was just reading through the DirectX documentation and encountered something interesting on the page for IDirect3DDevice9::BeginScene:
To enable maximal parallelism between the CPU and the graphics accelerator, it is advantageous to call IDirect3DDevice9::EndScene as far ahead of calling present as possible.
I've been accustomed to writing my game loop to handle input and such, then draw. Do I have it backwards? Maybe the game loop should be more like this: (semi-pseudocode, obviously)
while(running) {
    d3ddev->Clear(...);
    d3ddev->BeginScene();
    // draw things
    d3ddev->EndScene();
    // handle input
    // do any other processing
    // play sounds, etc.
    d3ddev->Present(NULL, NULL, NULL, NULL);
}
According to that sentence of the documentation, this loop would "enable maximal parallelism".
Is this commonly done? Are there any downsides to ordering the game loop like this? I see no real problem with it after the first iteration... And I know the best way to know the actual speed increase of something like this is to actually benchmark it, but has anyone else already tried this and can you attest to any actual speed increase?
Since I always felt that it was "awkward" to draw-before-sim, I tended to push the draws until after the update but also after the "present" call. E.g.
while True:
    Simulate()
    FlipBuffers()
    Render()
While on the first frame you're flipping nothing (and you need to set things up so that the first flip does indeed flip to a known state), this always struck me as a bit nicer than putting Render() first, even though the order of operations is the same once you're under way.
The short answer is yes, this is how it's commonly done. Take a look at the following presentation on the game loop in God of War III on the PS3:
http://www.tilander.org/aurora/comp/gdc2009_Tilander_Filippov_SPU.pdf
If you're running a double-buffered game at 30 fps, the input lag will be 1/30 ≈ 0.033 seconds, which is way too small to be detected by a human (for comparison, any reaction time under 0.1 seconds over 100 metres is considered a false start).
It's worth noting that on nearly all PC hardware, BeginScene and EndScene do nothing. In fact, the driver buffers up all the draw commands, and when you call Present it may not even have begun drawing. Drivers commonly buffer several frames of draw commands to smooth out the frame rate, and usually schedule their work around the Present call.
This can cause input lag when the frame rate isn't particularly high.
I'd wager that if you did your rendering immediately before the Present, you'd notice no difference from the loop you give above. Of course, on some odd bits of hardware this may cause issues, so in general you are best off ordering the loop as you suggest.
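To make the recommended ordering concrete, here is the schedule both answers converge on, as a toy Python sketch (the function names are placeholders, not D3D calls):

```python
log = []

def clear():        log.append("clear")
def draw_scene():   log.append("draw")      # EndScene: commands queued, not finished
def handle_input(): log.append("input")     # CPU work overlaps the queued GPU work
def simulate():     log.append("sim")
def present():      log.append("present")   # flip as late after EndScene as possible

for _ in range(2):
    clear()
    draw_scene()
    handle_input()
    simulate()
    present()

print(",".join(log[:5]))  # clear,draw,input,sim,present
```

The point of the ordering is simply that everything between draw_scene() and present() is CPU time the driver can spend consuming the queued commands in parallel.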
