Redraw Scenes Only When the Scene Data Changes - ios

I read this on the Tuning Your OpenGL ES App page:
Redraw Scenes Only When the Scene Data Changes:
Your app should wait until something in the scene changes before rendering a new frame. Core Animation caches the last image presented to the user and continues to display it until a new frame is presented.
Even when your data changes, it is not necessary to render frames at the speed the hardware processes commands. A slower but fixed frame rate often appears smoother to the user than a fast but variable frame rate. A fixed frame rate of 30 frames per second is sufficient for most animation and helps reduce power consumption.
From what I understand, there is an event loop which keeps running and re-rendering the scene. We just override the onDrawFrame method and put our rendering code there. I don't have any control over when this method gets called. How then can I "Redraw Scenes Only When the Scene Data Changes"?
In my case, the scene only changes when the user interacts (tap, pinch, etc.). Ideally I would not render at all while the user is not interacting with my scene, but this function is called continuously. I am confused.

At the lowest exposed level, there is an OpenGL-containing type of CoreAnimation layer, CAEAGLLayer. That can supply a colour buffer that is usable to construct a framebuffer object, to which you can draw as and when you wish, presenting as and when you wish. So that's the full extent of the process for OpenGL in iOS: draw when you want, present when you want.
The layer then has a pixel copy of the scene. Normal Core Animation rules apply so you're never autonomously asked to redraw. Full composition, transitions, Core Animation animations, etc, can all occur without any work on your part.
It is fairly common practice to connect up a timer, usually a CADisplayLink, either manually or by taking advantage of one of the GLKit constructs. In that case you're trying to produce one frame per timer tick. Apple is suggesting that running the timer at half the refresh rate is acceptable, and that if you wake up to perform a draw and realise you'd just be producing exactly the same frame as last time, you shouldn't bother drawing at all. Ideally, stop the timer completely if you have sufficient foresight.
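A minimal sketch of that idea using a GLKViewController: pause the built-in display link whenever a draw leaves nothing dirty, and resume it on interaction. The `sceneDirty` flag and `renderScene` method here are hypothetical stand-ins for your own state tracking and GL drawing code.

```objc
// Sketch: stop GLKViewController's timer when the scene is unchanged,
// resume it when user input arrives. `sceneDirty`/`renderScene` are
// assumed to exist in your subclass.
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect {
    [self renderScene];      // your actual OpenGL ES drawing
    self.sceneDirty = NO;
    self.paused = YES;       // stop ticking until something changes
}

- (void)touchesBegan:(NSSet<UITouch *> *)touches withEvent:(UIEvent *)event {
    self.sceneDirty = YES;
    self.paused = NO;        // restart the display link; drawing resumes
}
```

The same shape works for pinch and pan gesture handlers: mark the scene dirty, unpause, and let the next tick produce the frame.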
As per the comment, onDrawFrame isn't worded like an Objective-C or Swift method and isn't provided by any Apple-supplied library. Whoever is posting that, presumably to look familiar to Android authors, needs to take responsibility for appropriate behaviour.


When exactly is drawInMTKView called?

MetalKit calls drawInMTKView when it wants your delegate to draw a new frame, but I wonder whether it waits for the last drawable to have been presented before asking your delegate to draw a new one.
From what I understand reading this article, CoreAnimation can provide up to three "in flight" drawables, but I can't find whether MetalKit tries to draw to them as soon as possible or if it waits for something else to happen.
What would this something else be? What confuses me a little is the idea of drawing up to two frames in advance, since it means the CPU must already know what it wants to render two frames into the future, and I feel that isn't always the case. For instance, if your application depends on user input, you can't know upfront what the user will have done between now and when those two frames are presented, so they may be presented with out-of-date content. Is this assumption right? If so, it could make sense to only call the delegate method at a maximum rate determined by the intended frame rate.
The problem with synchronizing with the frame rate is that this means the CPU may sometimes be inactive when it could have done some useful work.
I also have the intuition this may not be happening this way, since in the aforementioned article drawInMTKView seems to be called as often as a drawable is available: they seem to rely on it being called to schedule work that uses resources in a way that avoids CPU stalling. But since many points are unclear to me, I am not sure what is happening exactly.
The MTKView documentation for paused mentions that
If the value is NO, the view periodically redraws the contents, at a frame rate set by the value of preferredFramesPerSecond.
Based on the samples there are for MTKView, it probably uses a combination of an internal timer and CVDisplayLink callbacks. This means it basically chooses the "right" interval to call your drawing function, usually just after the other drawable is shown "on-glass", i.e. at V-Sync points, so that your frame has the most CPU time to get drawn.
You can make your own view and use CVDisplayLink or CADisplayLink to manage the rate at which your draws are called. There are also other ways such as relying on back pressure of the drawable queue (basically just calling nextDrawable in a loop, because it will block the thread until the drawable is available) or using presentAfterMinimumDuration. Some of these are discussed in this WWDC video.
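If you want draw-on-demand rather than timer-driven behaviour, MTKView supports that directly through two of its properties. A sketch, assuming `mtkView` is your existing view:

```objc
// Sketch: switch MTKView from its internal timer to draw-on-demand.
mtkView.paused = YES;                 // disable periodic redraws
mtkView.enableSetNeedsDisplay = YES;  // drawInMTKView: fires on invalidation

// Later, whenever your scene data actually changes:
[mtkView setNeedsDisplay];            // requests exactly one redraw
```

This sidesteps the whole "how often is the delegate called" question for apps that only change in response to input.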
I think Core Animation triple-buffers everything that gets composited by the window server, so it basically waits for you to finish drawing your frame, composites it with the other frames, and then presents it to the "glass".
As to your question about the delay: you are right, the CPU is two or even three "frames" ahead of the GPU. I am not too familiar with this and haven't tried it, but I think it's possible to "skip" the frames you drew ahead of time if you delay the presentation of your drawables until the last moment, possibly until the scheduled handler on one of your command buffers.

How does UIImageView animate so smoothly? (Or: how to get OpenGL to efficiently animate frame by frame textures)

To clarify, I know that a texture atlas improves performance when using multiple distinct images. But I'm interested in how things are done when you are not doing this.
I tried doing frame-by-frame animation manually in custom OpenGL, where each frame I bind a new texture and draw it on the same point sprite. It works, but it is very slow compared to UIImageView's ability to do the same thing. I load all the textures up front, but the rebinding happens every frame. By comparison, UIImageView accepts the individual images, not a texture atlas, so I'd imagine it is doing something similar.
These are 76 images loaded individually, not as a texture atlas, each about 200px square. In OpenGL, I suspect the bottleneck is the requirement to rebind a texture every frame. But how is UIImageView doing this, as I'd expect a similar bottleneck? Is UIImageView somehow creating an atlas behind the scenes so no rebinding is necessary? Since UIKit ultimately has OpenGL running beneath it, I'm curious how this must be working.
If there is a more efficient means to animate multiple textures, rather than swapping out different bound textures each frame in OpenGL, I'd like to know, as it might hint at what Apple is doing in their framework.
If I did in fact get a new frame for each of 60 frames in a second, it would take about 1.27 seconds to animate through my 76 frames. Indeed I get that with UIImageView, but the OpenGL version takes about 3 to 4 seconds.
I would say your bottleneck is somewhere else. OpenGL is more than capable of handling an animation done the way you describe. Since all the textures are loaded and you just bind a different one each frame, there is no loading time or anything else. For comparison, I have an application that can generate or delete textures at runtime and at some points has a great number of textures loaded on the GPU. I have to bind many of those textures every frame (not one per frame), while also using the depth buffer, stencil, multiple FBOs, heavy user input, and about five threads bottlenecked into one to process all the GL code, and I have no trouble with the FPS at all.
Since you are working on iOS, I suggest you run some profilers to see what code is responsible for the overhead. Even if for some reason the time profiler tells you that the line with glBindTexture is taking too long, I would still say the problem is somewhere else.
So to answer your question: it is normal and great that UIImageView does its work so smoothly, and there should be no problem achieving the same performance with OpenGL. Though, there are a few things to consider at this point. How can you be sure the image view does not skip images? You might be setting a pointer to a different image 60 times per second, but the image view might only ask itself to redraw 30 times per second, and when it does it simply uses the image currently assigned to it. With your GL code, on the other hand, you are forcing the application to redraw at 60 FPS regardless of whether it is capable of doing so.
Taking all this into consideration, there is a thing called a display link that Apple created for you. I believe it is meant for exactly what you want to do. The display link will tell you how much time has elapsed between frames, and based on that you should decide which texture to bind, rather than trying to force them all into a time frame that might be too short.
And another thing: I have seen that if you try to present a render buffer at 100 FPS on most iOS devices (it might be all of them), you will only get 60 FPS, as the method that presents the render buffer will pause your thread if it was called less than 1/60s ago. That being said, it is effectively impossible to display anything above 60 FPS on iOS devices, and anything running at 30+ FPS is considered good.
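The display-link approach described above could be sketched like this: pick the frame index from elapsed time rather than advancing one texture per callback, so dropped ticks skip frames instead of slowing the animation down. The `textures`, `frameDuration`, and `startTime` properties and the `drawSpriteAndPresent` method are hypothetical; CADisplayLink setup is not shown.

```objc
// Sketch: choose the animation frame from the display link's timestamp.
// Assumes `textures` holds GL texture names as NSNumbers, `frameDuration`
// is seconds per frame, and `startTime` starts at 0.
- (void)displayLinkFired:(CADisplayLink *)link {
    if (self.startTime == 0) self.startTime = link.timestamp;
    NSTimeInterval elapsed = link.timestamp - self.startTime;
    NSUInteger index = (NSUInteger)(elapsed / self.frameDuration);
    if (index >= self.textures.count) index = self.textures.count - 1;
    glBindTexture(GL_TEXTURE_2D, [self.textures[index] unsignedIntValue]);
    [self drawSpriteAndPresent];  // hypothetical draw + presentRenderbuffer
}
```

Because the index is derived from elapsed time, the 76-frame sequence always finishes in 76 × frameDuration seconds, whether or not every tick produced a draw.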
"not as a texture atlas" is the sentence that is a red flag for me.
Using a texture atlas is a good thing. The texture is loaded into memory once and then you just move the rectangle position to play the animation. It's fast because it's already all in memory. Any operation that involves constantly loading and reloading new image frames will be slower than that.
You'd have to post source code to get any more exact an answer than that.

CGContextDrawLayerAtPoint is slow on iPad 3

I have a custom view (inherited from UIView) in my app. The custom view overrides
- (void) drawRect:(CGRect) rect
The problem is that drawRect: takes many times longer to execute on iPad 3 than on iPad 2 (about 0.1 second on iPad 3 versus 0.003 second on iPad 2), roughly 30 times slower.
Basically, I am using some pre-created layers and draw them in the drawRect:. The last call
CGContextDrawLayerAtPoint(context, CGPointZero, m_currentLayer);
takes most of the time (about 95% of total time in drawRect:)
What might be slowing things so much and how should I fix the cause?
UPDATE:
There are no threads directly involved. I call setNeedsDisplay in one thread and drawRect: gets called from another, but that's it. The same goes for locks (none are used).
The view gets redrawn in response to touches (it's a coloring book app). On iPad 2 I get reasonable delay between a touch and an update of the screen. I want to achieve the same on iPad 3.
So, the iPad 3 is definitely slower in a lot of areas. I have a theory about this. Marco Arment noted that the method renderInContext is ridiculously slow on the new iPad. I also found this to be the case when trying to create a magnifying glass for a custom text view. In the end I had to forego renderInContext for custom Core Graphics drawing.
I've also been having problem hitting the dreaded wait_fences errors on my core graphics drawing here: Only on new iPad 3: wait_fences: failed to receive reply: 10004003.
This is what I've figured out so far. The iPad 3 obviously has four times the pixels to drive. This can cause problems in two places:
First, the CPU. All Core Graphics drawing is done by the CPU. In the case of rotation events, if the CPU takes too long to draw, it hits the wait_fences error, which I believe is simply a call that tells the device to wait a little longer before actually performing the rotation, hence the delay.
Second, transferring images to the GPU. The GPU obviously handles the retina resolution just fine (see Infinity Blade 2). But when Core Graphics draws, it draws its images directly to the GPU buffers to avoid a memcpy. However, either the GPU buffers haven't changed since the iPad 2 or they just weren't made large enough, because it's remarkably easy to overload them. When that happens, I believe the CPU writes the images to standard memory and then copies them to the GPU when the GPU buffers can handle it. This extra copy, I think, is what causes the performance problems: it is time-consuming with so many pixels and slows things down considerably.
To avoid that extra copy I recommend several things:
Only draw what you need. Avoid drawing anything offscreen at all costs. If you're drawing a large view, but only display part of that view (subviews covering it, for example) try to find a way to only draw what is visible.
If you have to draw a large view, consider breaking the view up into parts, either as subviews or sublayers (probably sublayers in your case), and only redraw what you need. Take the Notability app, for example: when you zoom in, you can literally watch it redraw one square at a time. Or in Safari you can watch it update squares as you scroll. Unfortunately, I haven't had to do this myself, so I'm uncertain of the methodology.
Try to keep your drawings simple. I had an awesome-looking custom Core Text view that had to redraw on every character entered. Very slow. I changed the background to simple white (in Core Graphics) and it sped up well. Even better would be for me to not redraw the background at all.
I would like to point out that my theory is conjecture. Apple doesn't really explain what exactly they do. My theory is just based on what they have said and how the iPad responds as well as my own experimentation.
UPDATE
So Apple has now released the 2012 WWDC Developer videos. They have two videos that may help you (requires developer account):
iOS App Performance: Responsiveness
iOS App Performance: Graphics and Animation
One thing they talk about that I think may help you is the method setNeedsDisplayInRect:(CGRect)rect. Using this method instead of the normal setNeedsDisplay, and making sure that your drawRect: method only draws the rect given to it, can greatly help performance. Personally, I use CGContextClipToRect(context, rect); to clip my drawing to the rect provided.
As an example, I have a separate class I use to draw text directly into my views using Core Text. My UIView subclass keeps a reference to this object and uses it to draw its text rather than using a UILabel. I used to refresh the entire view (setNeedsDisplay) when the text changed. Now I have my Core Text object calculate the changed CGRect and use setNeedsDisplayInRect: to redraw only the portion of the view that contains the text. This really helped my performance when scrolling.
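That pattern might look roughly like this; the `textRenderer` object and its methods are hypothetical stand-ins for the Core Text drawing class described above.

```objc
// Sketch: invalidate only the changed rect, then clip drawing to
// whatever rect drawRect: receives. `textRenderer` is hypothetical.
- (void)textDidChange {
    CGRect changedRect = [self.textRenderer rectForChangedText];
    [self setNeedsDisplayInRect:changedRect];   // not setNeedsDisplay
}

- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextClipToRect(context, rect);  // pixels outside rect stay untouched
    [self.textRenderer drawInContext:context];
}
```

Note that `rect` in drawRect: may be a union of several invalidated rects, so the drawing code still has to be correct for any sub-rectangle of the view.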
I ended up using the approach described in Kurt Revis's answer to a similar question.
I minimized the number of layers used, added a UIImageView, and set its image to a UIImage wrapping my CGImageRef. Please read the mentioned answer for more details about the approach.
In the end my application became even simpler than before and works at almost identical speed on iPad 2 and iPad 3.

iOS OpenGL ES - Only draw on request

I'm using OpenGL ES to write a custom UI framework on iOS. The use case is for an application, as in something that won't be updating on a per-frame basis (such as a game). From what I can see so far, it seems that the default behavior of the GLKViewController is to redraw the screen at a rate of about 30fps. It's typical for UI to only redraw itself when necessary to reduce resource usage, and I'd like to not drain extra battery power by utilizing the GPU while the user isn't doing anything.
I tried only clearing and drawing the screen once as a test, and got a warning from the profiler saying that an uninitialized color buffer was being displayed.
Looking into it, I found this documentation: http://developer.apple.com/library/ios/#DOCUMENTATION/iPhone/Reference/EAGLDrawable_Ref/EAGLDrawable/EAGLDrawable.html
The documentation states that there is a flag, kEAGLDrawablePropertyRetainedBacking, which, when set to YES, will allow the backbuffer to retain things drawn to it in the previous frame. However, it also states that this isn't recommended and can cause performance and memory issues, which is exactly what I'm trying to avoid in the first place.
I plan to try both ways, drawing every frame and not, but I'm curious whether anyone has encountered this situation. What would you recommend? Is it not as big a deal as I assume to redraw everything 30 times per second?
In this case, you shouldn't use GLKViewController, as its very purpose is to provide a simple animation timer on the main loop. Instead, your view can be owned by any other subclass of UIViewController (including one of your own creation), and you can rely on the usual setNeedsDisplay/drawRect system used by all other UIKit views.
It's not the backbuffer that retains the image, but a separate buffer. Possibly a separate buffer created specifically for your view.
You can always set paused on the GLKViewController to pause the rendering loop.
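Putting those suggestions together, a minimal sketch of a GLKView hosted by a plain UIViewController and drawn only on request; the `eaglContext` property is assumed to hold an already-created EAGLContext.

```objc
// Sketch: a GLKView owned by an ordinary UIViewController, no timer.
GLKView *glView = [[GLKView alloc] initWithFrame:self.view.bounds
                                         context:self.eaglContext];
glView.delegate = self;              // we implement glkView:drawInRect:
glView.enableSetNeedsDisplay = YES;  // redraw via the normal UIKit path
[self.view addSubview:glView];

// Render once now...
[glView display];
// ...and afterwards only when something actually changes:
[glView setNeedsDisplay];            // triggers glkView:drawInRect: once
```

Between those explicit requests the GPU does nothing for this view, which is exactly the battery behaviour a mostly-static UI wants.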

Pushing texture information to OpenGLES for iPad - Implications?

We have an upcoming project that will require us to push texture image information to the EAGLView of an iPad app. Being green to OpenGL in general: are there implications to having a surface wait for texture information? What will OpenGL do while it's waiting for the image data? Does OpenGL require constant updates to its textures, or will it retain the data until we update the texture again? We're not going to have a loop or anything in the view; it will be more like an observer pattern.
When you upload a texture, you hand it off to the GPU: a copy is made in memory you don't have direct access to. It's then available to be drawn with as many times as you want, so there's no need for constant updates.
OpenGL won't do anything else while waiting for the image data, it's a synchronous API. The call to upload the data will take as long as it takes, the texture object will have no graphic associated with it beforehand and will have whatever you uploaded associated with it afterwards.
In the general case, OpenGL objects, including texture objects, belong to a specific context and contexts belong to a specific thread. However, iOS implements share groups, which allow you to put several contexts into a share group, allowing objects to be shared between them subject to you having to be a tiny bit careful about synchronisation.
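Concretely, the upload is a single synchronous call, after which your own copy of the pixels can be freed. A sketch, assuming `width`, `height`, and a `pixelBytes` buffer already extracted from your image:

```objc
// Sketch: upload once; the driver keeps its own copy of the pixels.
GLuint texture;
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixelBytes);  // synchronous copy
free(pixelBytes);  // safe: OpenGL now owns its own copy
// `texture` can be bound and drawn with indefinitely from here on.
```

This is why the observer-pattern approach works: nothing decays between updates, and the texture stays valid until you delete or replace it.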
iOS provides a specific subclass of CALayer, CAEAGLLayer, that you can use to draw to from OpenGL. It's up to you when you draw and how often, so your approach is the more native one, if anything. A lot of the samples wrap drawing in a CADisplayLink-driven loop, but that's a convenience, not a requirement.
Obviously try the simplest approach of 'everything on the main thread' first. If you're not doing all that much, it'll likely be fast enough and save you code maintenance. However, uploading can cost more than you expect, since the way OpenGL works is that you specify the data and the format it's in, leaving OpenGL to rearrange it as necessary for the particular GPU you're on. We're talking delays on the order of 0.3 seconds rather than 30 seconds, but enough that there'll be an obvious pause if the user taps a button or tries to move a slider at the same time.
So if keeping the main thread responsive proves an issue, I'd imagine that you'd want to hop onto a background thread, create a new context within the same share group as the one on the main thread, upload, then hop back to do the actual drawing. In which case it'll remain up to you how you communicate to the user that data has been received and is being processed as distinct from no data having been received yet, if the gap is large enough to justify doing so.
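That hop could look roughly like this; `mainContext`, `uploadTexture`, and `textureReady:` are hypothetical names for your existing context, upload code, and completion hook.

```objc
// Sketch: upload on a background queue via a second context in the
// same sharegroup, then hand the texture name back to the main thread.
EAGLSharegroup *sharegroup = mainContext.sharegroup;
dispatch_async(dispatch_get_global_queue(QOS_CLASS_UTILITY, 0), ^{
    EAGLContext *uploadContext =
        [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2
                              sharegroup:sharegroup];
    [EAGLContext setCurrentContext:uploadContext];
    GLuint texture = [self uploadTexture];  // hypothetical upload helper
    glFlush();  // ensure the texture is visible to the other context
    dispatch_async(dispatch_get_main_queue(), ^{
        [self textureReady:texture];        // hypothetical completion hook
    });
});
```

The glFlush is the "tiny bit careful about synchronisation" part: the sharing context must flush before another context in the group can safely see the new object.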
