Synchronize UIView layout to OpenGL frames - iOS

We are developing a game that has 2d elements displayed with UIViews over an OpenGL ES view (specifically, we're using GLKit's GLKView) and are having problems keeping the positions perfectly in sync.
In the parent view's layoutSubviews, we're projecting 3d positions in the world onto the screen, and using those as locations for several UIView "markers" in the game. The whole game only updates in response to the user moving the camera, and the camera tells the view setNeedsLayout each time it moves.
Everything's working fine, except that the markers seem to be roughly 1 frame out of sync with the 3d rendering. I say roughly because (1) it's an estimate! and (2) I'm wondering whether there's potentially a multithreading issue: doesn't GLKView sync to a special screen refresh callback or something?
Is there some way of hooking a view's layoutSubviews so that it syncs to the 3d view update?
Update: Weirdly, calling layoutIfNeeded immediately after setNeedsLayout makes the problem worse! Possibly 2 or more frames out. Really don't understand that!

What's triggering your call to layoutSubviews?
It all depends on where in the run loop your layout call is triggered vs. where your GLK update call is triggered.
In general, for what you're doing, I'd aim to do your layout as a side-effect of the GLK update - i.e. don't wait for layoutSubviews to change your position.
(if you're using OpenGL, then the whole "layout" system isn't much use to you: GLK is running in its own little world of variable frame rate, and you want to make that your reference point)
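For instance, a minimal sketch of driving marker layout from the GLK update step instead of layoutSubviews. GLKViewController calls -update on a subclass once per frame, just before the view draws; self.camera, self.markers, MarkerView, worldPosition, and projectToScreen: are hypothetical names, not from the question:
// Reposition overlay markers inside the GLKViewController update callback so
// they are computed from the same camera state as the frame about to be drawn.
// self.camera, self.markers, MarkerView, worldPosition and projectToScreen:
// are illustrative names.
- (void)update {
    [self.camera updateWithTimeDelta:self.timeSinceLastUpdate];
    for (MarkerView *marker in self.markers) {
        // Same camera state the imminent GL frame will use, so no 1-frame lag.
        marker.center = [self.camera projectToScreen:marker.worldPosition];
    }
}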

This is impossible to do correctly without drawing the frames with OpenGL yourself (drawing in the same context, so that you are always sure one frame contains the same moment of video and animation). Everything else you do, such as framerate compensation or lag prediction, depends on chance and will always be a little bit unsynchronized.
I'm not familiar with UIView, but if there is any way to let it play the audio and copy the frames to a texture, do that. Lag in the audio is much easier to compensate for, and much less noticeable to humans, than lag in the video.

Related

Handling multiple/frequent calls to UIView setNeedsDisplayInRect: efficiently

I am working on an iPad application which has a custom-drawn UIView that covers basically the whole screen. It is an audio application, and it has VU meters which receive update messages over Ethernet about 30 times a second, which in turn tell the UI to redraw the VU meters on the screen.
The problem I have is that there are 16 VU meters at various positions in the view, and I end up calling setNeedsDisplayInRect: for each one 30 times a second. If I do all of the drawing in the drawRect function then it works, but it's pretty sluggish, as the CGRect passed into drawRect is almost the entire screen.
On OS X there is the NSView method getRectsBeingDrawn:count:, which gave me a route to significantly improve the redraw performance. However, as far as I'm aware there is no equivalent in iOS.
I've tried storing the CGRects I've invalidated and then only redrawing those in drawRect, but it turns out that iOS often decides to redraw areas between the CGRects that have had setNeedsDisplayInRect: called on them, so I end up with big white rectangles in areas I never requested a redraw for (although I do get the performance improvement I'm expecting). The problem here is that I can't see a way to get the rects that are actually being redrawn; the rect passed into drawRect is essentially a union of all of the rects being redrawn.
So, is there any other way to speed up very frequent redraws of small areas of a UIView that I've missed?
The simplest solution is almost certainly to make a subview for each VU meter. Presumably you'd just have one VUMeterView class and make sixteen instances of it.
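A minimal sketch of that approach (class and property names are illustrative): each meter invalidates only itself, so each redraw touches one small rect instead of a near-fullscreen one.
// A small view per meter: invalidating one meter redraws only its own
// bounds instead of a near-fullscreen rect. Names are illustrative.
@interface VUMeterView : UIView
@property (nonatomic) float level; // 0.0 - 1.0
@end

@implementation VUMeterView
- (void)setLevel:(float)level {
    _level = level;
    [self setNeedsDisplay]; // invalidates just this subview
}

- (void)drawRect:(CGRect)rect {
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGRect bar = self.bounds;
    bar.size.height *= _level;
    bar.origin.y = CGRectGetHeight(self.bounds) - bar.size.height; // grow upward
    CGContextSetFillColorWithColor(ctx, [UIColor greenColor].CGColor);
    CGContextFillRect(ctx, bar);
}
@end
Your network handler then just sets meter.level on the main thread; UIKit coalesces the sixteen small invalidations into a single drawing pass per frame.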

Why does SpriteKit physics performance seem to depend on what's rendered on the 1st frame?

This is something I've been struggling with for a while now, and I'm really hoping someone can shed some light on the issue. Basically, I have a scene with a bunch of colliding SKPhysicsBodies. If I put the camera in the middle of the action on the very first frame of animation, the entire simulation runs at about 17fps indefinitely. However, if on the very first frame of animation I start the camera off-screen such that nothing (or very little) is drawn that first frame, and then move the camera to the middle of the action on the 2nd frame, then the simulation runs at 60fps.
The simulation is exactly the same in both cases. Nothing is different other than what sprites are culled on the very first frame, yet that seems to make all the difference for the rest of the simulation's performance.
This has proven 100% reproducible in every case I've come up with, so in order to make sure that my game runs at 60fps, I have to "prime" the camera by starting it way, way off-screen, rendering a frame, and then moving the camera to where I really want it to be. If I don't do that, the game runs at a cripplingly low frame rate and will never speed up.
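For reference, a rough sketch of the priming workaround as described, assuming the scene scrolls by moving a world node (worldNode, primed, and desiredOffset are illustrative names, not from the question):
// Sketch of the workaround described above: cull everything on frame 1 by
// placing the "camera" (here: the world node we move to scroll) far away,
// then jump to the real position on the next update.
- (void)didMoveToView:(SKView *)view {
    self.worldNode.position = CGPointMake(1e6, 1e6); // frame 1: nothing on screen
    self.primed = NO;
}

- (void)update:(NSTimeInterval)currentTime {
    if (!self.primed) {
        self.worldNode.position = self.desiredOffset; // frame 2: real position
        self.primed = YES;
    }
}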

Synchronize COCOS2D Layer on top of a MKMapView

I'm trying to synchronize cocos2d layer objects with the map. I managed to get it working by adjusting the glView to the visibleMapRect of the MKMapView. I can zoom and move, and my objects follow the map. But there is a small and annoying lag between the MKMapView and the cocos2d layer.
I'm synchronizing it at each display loop.
Method:
1) Retrieve the MKMapView.visibleMapRect
2) Set the glViewPort
3) Do an orthographic projection to adjust my layer to the MapView.
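The three steps above, as a rough sketch (OpenGL ES 1.x fixed-function calls for brevity; in practice you'd offset to a local origin first, since MKMapPoint coordinates are too large for GLfloat precision; self.mapView is an illustrative name):
// Steps 1-3 from above, sketched with ES 1.x fixed-function calls.
MKMapRect visible = self.mapView.visibleMapRect;              // 1) map region
CGFloat screenScale = [UIScreen mainScreen].scale;
CGSize size = self.mapView.bounds.size;
glViewport(0, 0, size.width * screenScale, size.height * screenScale); // 2)
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
// 3) Map the projection onto the visible map rect; MKMapPoint y grows
// downward, so top/bottom are swapped relative to GL conventions.
glOrthof(MKMapRectGetMinX(visible), MKMapRectGetMaxX(visible),
         MKMapRectGetMaxY(visible), MKMapRectGetMinY(visible),
         -1.0f, 1.0f);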
I've already tried other methods, like moving the cocos2d layer with touch and then moving the coordinates of my map to match the touch, but it's still laggy.
Even disabling acceleration and deceleration of the MapView doesn't remove the lag.
Thanks.
Shot in the dark: we know that iOS devices use optimizations to speed up rendering while scaling. This is true of the Safari browser: when you zoom in, you're actually only zooming in on the image currently displayed as the browser window's contents. Only after you stop the pinch motion does the device update the view.
You'll see this specifically with text on older devices. When the device re-renders the contents with the new scale factor, the text suddenly becomes sharp and crisp again. I believe the same optimization is done in MKMapView.
You might want to check if the visibleMapRect values are actually updated during the zoom, and whether they accurately reflect the current zoom level or not.
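One quick way to check (a hedged sketch; the method names are illustrative) is to sample the value once per display frame with a CADisplayLink and watch the log while pinching:
// Sample visibleMapRect once per display frame to see whether it really
// updates during a pinch zoom or only after the gesture ends.
- (void)startWatchingMap {
    CADisplayLink *link = [CADisplayLink displayLinkWithTarget:self
                                                      selector:@selector(logMapRect:)];
    [link addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSRunLoopCommonModes];
}

- (void)logMapRect:(CADisplayLink *)link {
    MKMapRect r = self.mapView.visibleMapRect;
    NSLog(@"visibleMapRect: origin (%.0f, %.0f), size %.0f x %.0f",
          r.origin.x, r.origin.y, r.size.width, r.size.height);
}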
The other issue I can imagine is that the framerate with MKMapView + Cocos2D is simply low. And specifically zooming might consume a lot of CPU power. You might want to enable the cocos2d FPS display to see what the framerate is.
Another trick that's necessary to allow smooth scrolling of views in cocos2d (particularly complex views like UITableView) is to reduce cocos2d's max framerate (animationInterval) and/or to run the rendering of the gl view on a separate thread. Your issue may simply be a variation of this issue: UIScrollView pauses NSTimer until scrolling finishes
Note that this also occurs with DisplayLink director. The info in this question did the trick for me.

When does a view (or layer) require offscreen rendering?

Hello, this weekend I started to watch the 2011 WWDC videos. I've found really interesting topics about iOS. My favorites were about performance and graphics, but I've found two of them apparently in contradiction. Of course, there is something that I didn't get.
The sessions that I'm talking about are Understanding UIKit Rendering (session 121) and Polishing Your App (session 105).
Unfortunately the sample code from 2011 is still not downloadable, so it's pretty hard to get an overall view.
In one session they explain that offscreen rendering should usually be avoided while displaying content in scroll views and the like. They fix the performance issues in the sample code by drawing almost everything inside the -drawRect method.
In the other session the performance issue (on a table view) seems to be due to too much code in the -drawRect method of the table's cells.
First, it's not clear to me when offscreen rendering is required by the system. I've seen in the videos that some layer properties, such as cornerRadius, shadowOffset, and shadowColor, require it, but is there a general rule?
Second, I don't know if I understood correctly, but it seems that when there is no offscreen rendering, adding layers or views is the way to go.
I hope someone can shed some light on this.
Thanks,
Andrea
I don't think there is a rule written down anywhere, but hopefully this will help:
First, let's clear up some definitions. I think offscreen vs onscreen rendering is not the overriding concern most of the time, because offscreen rendering can be as fast as onscreen. The main issue is whether the rendering is done in hardware or software.
There is also very little practical difference between using layers and views. Views are just a thin wrapper around CALayer, and they don't introduce a significant performance penalty most of the time. You can override the type of layer used by a view using the +layerClass method if you want to have a view backed by a CAShapeLayer or CATiledLayer, etc.
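For example, a minimal sketch of a view backed by a CAShapeLayer (class name illustrative):
// A UIView subclass backed by a CAShapeLayer instead of a plain CALayer.
@interface ShapeView : UIView
@end

@implementation ShapeView
+ (Class)layerClass {
    return [CAShapeLayer class];
}

- (CAShapeLayer *)shapeLayer {
    return (CAShapeLayer *)self.layer; // same backing layer, conveniently typed
}
@end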
Generally, on iOS, pixel effects and Quartz / Core Graphics drawing are not hardware accelerated, and most other things are.
The following things are not hardware accelerated, which means that they need to be done in software (offscreen):
Anything done in a drawRect. If your view has a drawRect, even an empty one, the drawing is not done in hardware, and there is a performance penalty.
Any layer with the shouldRasterize property set to YES (a short note on this follows after the list).
Any layer with a mask or drop shadow.
Text (any kind, including UILabels, CATextLayers, Core Text, etc).
Any drawing you do yourself (either onscreen or offscreen) using a CGContext.
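A brief note on the shouldRasterize item above: if you do opt into rasterization, match the screen scale, or the cached bitmap will look blurry on Retina displays. A minimal sketch:
// Cache an expensive-but-mostly-static layer tree as a bitmap.
view.layer.shouldRasterize = YES;
view.layer.rasterizationScale = [UIScreen mainScreen].scale; // avoid Retina blur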
Most other things are hardware accelerated, so they are much faster. However, this may not mean what you think it does.
Any of the above types of drawing are slow compared to hardware accelerated drawing, however they don't necessarily slow down your app because they don't need to happen every frame. For example, drawing a drop shadow on a view is slow the first time, but after it is drawn it is cached, and is only redrawn if the view changes size or shape.
The same goes for rasterised views or views with a custom drawRect: the view typically isn't redrawn every frame, it is drawn once and then cached, so the performance after the view is first set up is no worse, unless the bounds change or you call setNeedsDisplay on it.
For good performance, the trick is to avoid using software drawing for views that change every frame. For example, if you need an animated vector shape you'll get better performance using CAShapeLayer or OpenGL than drawRect and Core Graphics. But if you draw a shape once and then don't need to change it, it won't make much difference.
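For example, a hedged sketch (shapeLayer, startPath, and endPath are assumed to exist as a CAShapeLayer and two UIBezierPaths) that animates a shape via Core Animation instead of redrawing with Core Graphics each frame:
// Animate between two paths on a CAShapeLayer; the interpolation is handled
// by Core Animation rather than repeated drawRect: calls. For smooth results
// the two paths should have the same number of control points.
CABasicAnimation *morph = [CABasicAnimation animationWithKeyPath:@"path"];
morph.fromValue = (__bridge id)startPath.CGPath;
morph.toValue   = (__bridge id)endPath.CGPath;
morph.duration  = 0.5;
[shapeLayer addAnimation:morph forKey:@"morph"];
shapeLayer.path = endPath.CGPath; // set the model value to the final path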
Similarly, don't put a drop shadow on an animated view because it will slow down your frame rate. But a shadow on a view that doesn't change from frame to frame won't have much negative impact.
Another thing to watch out for is slowing down the view setup time. For example, suppose you have a page of text with drop shadows on all the text; this will take a very long time to draw initially since both the text and shadows all need to be rendered in software, but once drawn it will be fast. You will therefore want to set up this view in advance when your application loads, and keep a copy of it in memory so that the user doesn't have to wait ages for the view to display when it first appears on screen.
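A hedged sketch of that pre-rendering idea, using era-appropriate APIs (heavyView and cachedImageView are illustrative names):
// Render an expensive view into a cached UIImage up front so it is cheap
// to display later.
UIGraphicsBeginImageContextWithOptions(heavyView.bounds.size, NO, 0);
[heavyView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
self.cachedImageView.image = snapshot; // show the bitmap instead of the live view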
This is probably the reason for the apparent contradiction in the WWDC videos. For large, complex views that don't change every frame, drawing them once in software (after which they are cached and don't need to be redrawn) will yield better performance than having the hardware re-composite them every frame, even though it will be slower to draw the first time.
But for views that must be redrawn constantly, like table cells (the cells are recycled so they must be redrawn each time one cell scrolls offscreen and is re-used as it scrolls back onto the other side as a different row), software drawing may slow things down a lot.
Offscreen-rendering is one of the worst defined topics in iOS rendering, today. When Apple's UIKit engineers refer to offscreen-rendering, it has a very specific meaning, and a ton of third-party iOS dev blogs are getting it wrong.
When you override "drawRect:", you're drawing via the CPU and spitting out a bitmap. The bitmap is packaged up and sent to a separate process that lives in iOS, the render server. Ideally, the render server just displays the data on screen.
If you fiddle with properties on CALayer, like turning on drop shadows, the GPU will perform additional drawing. This additional work is what UIKit engineers mean when they say "off-screen rendering." This is always performed with hardware.
The issue with off-screen drawing isn't necessarily the drawing. The off-screen pass requires a context switch, as the GPU switches its drawing destination. During this switch, the GPU is idle.
While I don't know a full list of properties that trigger an off-screen pass, you can diagnose this with the Core Animation Instrument's "Color Offscreen-rendered layer" toggle. I assume any property other than alpha is performed via an offscreen pass.
With early iOS hardware, it was reasonable to say "do everything in drawRect." Nowadays GPUs are better, and UIKit has features like shouldRasterize. Today, it's a balancing act between the time spent in drawRect, the number of off-screen passes, and the amount of blending. For the full details, watch the 2014 WWDC session 419, "Advanced Graphics and Animation for iOS Apps."
That all said, it's good to understand what's going on behind-the-scenes, and keep it in the back of your head so you don't do anything insane, but you should start from the simplest solution. Then test it on the slowest hardware you support. If you aren't hitting 60FPS, use Instruments to measure things and figure it out. There are a few possible bottlenecks, and if you aren't using data to diagnose things, you're just guessing.
Offscreen rendering / Rendering on the CPU
The biggest bottlenecks to graphics performance are offscreen rendering and blending – they can happen for every frame of the animation and can cause choppy scrolling.
Offscreen rendering (software rendering) happens when it is necessary to do the drawing in software (offscreen) before it can be handed over to the GPU. Hardware does not handle text rendering and advanced compositions with masks and shadows.
The following will trigger offscreen rendering:
- Any layer with a mask (layer.mask)
- Any layer with layer.masksToBounds / view.clipsToBounds set to true
- Any layer with layer.allowsGroupOpacity set to YES and layer.opacity less than 1.0 (see: When does a view (or layer) require offscreen rendering?)
- Any layer with a drop shadow (layer.shadow*); a shadowPath sketch follows below. Tips on how to fix: https://markpospesel.wordpress.com/tag/performance/
- Any layer with layer.shouldRasterize set to true
- Any layer with layer.cornerRadius, layer.edgeAntialiasingMask, or layer.allowsEdgeAntialiasing
- Any layer with layer.borderWidth and layer.borderColor (missing reference / proof)
- Text (any kind, including UILabel, CATextLayer, Core Text, etc.)
- Most of the drawing you do with CGContext in drawRect:. Even an empty implementation will be rendered offscreen.
This post covers blending and other things affecting performance: What triggers offscreen rendering, blending and layoutSubviews in iOS?
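For the drop-shadow item in the list above, a common fix is to give the layer an explicit shadowPath, which spares the renderer from deriving the shadow's shape from the layer's contents. A minimal sketch (the view name is illustrative):
// An explicit shadowPath avoids the extra pass needed to compute the shadow
// shape from the layer's alpha channel.
view.layer.shadowColor = [UIColor blackColor].CGColor;
view.layer.shadowOpacity = 0.5f;
view.layer.shadowOffset = CGSizeMake(0, 2);
view.layer.shadowPath = [UIBezierPath bezierPathWithRect:view.bounds].CGPath;
Remember to update the shadowPath whenever the view's bounds change.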

Custom UIview free rotation too slow

Programming for iOS, I have a composite custom view consisting of many UIViews. Some UIViews in this composite are responsible for drawing a shadow, and others for some custom shading. The shadow and shading need to be redrawn as a rotation is recognized by a UIRotationGestureRecognizer. However, the speed of the rotation is far from satisfactory. When I comment out setNeedsDisplay, the rotation speed is fine. However, if I do call setNeedsDisplay, the rotation still lags significantly, even when I comment out everything in the drawRect: implementations of the shadow and shading views.
Are there any recommendations to speed things up?
I can think of one possible solution: make sure the system calls drawRect less often while rotating. But I do not know how to do this, nor do I know if it is the best solution. Any suggestions appreciated. Thanks.
Calling setNeedsDisplay too often, especially every frame, will always be slow: the drawing it triggers runs on the CPU, not the GPU. Don't redraw views during rotation and zooming. Wait until the end of the gesture or animation, then call setNeedsDisplay to "render" the final position.
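A minimal sketch of that pattern with UIRotationGestureRecognizer (the handler and property names are illustrative): apply a cheap, GPU-composited transform while the gesture is in flight, and do the expensive software redraw only once, when it ends.
// Rotate with a composited transform during the gesture; defer the CPU
// redraw until the gesture finishes.
- (void)handleRotation:(UIRotationGestureRecognizer *)gesture {
    if (gesture.state == UIGestureRecognizerStateChanged) {
        self.compositeView.transform =
            CGAffineTransformRotate(self.compositeView.transform, gesture.rotation);
        gesture.rotation = 0; // consume the incremental delta
    } else if (gesture.state == UIGestureRecognizerStateEnded) {
        [self.compositeView setNeedsDisplay]; // one final software redraw
    }
}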
Take a look at how various UIKit views handle large animations:
While MapKit zooms in, the map image scales and looks blurry. Once the zoom gesture stops, it renders a new image at that scale. (In this case the image is downloaded from the internet, but it still illustrates the concept.)
The ZoomingPDF sample code (see the Apple developer docs) shows how zooming on PDFs doesn't render in realtime, but only after the zooming finishes.
Hope this helps.
