Animate by changing CGPoint - iOS

In my iOS app, I do a ton of trigonometric calculations based on a given point (specified by CGPoint) and then create some transformation matrices based on those calculations to finally be used in an OpenGL drawing (via GLKit). I'd like to create an animation by changing that fundamental CGPoint over time, but I'm not sure what approach I should use for the animation.
What I'm really looking for is an API that allows me to specify a function to be called on each iteration, much like NSTimer does, but it'd be really cool if I could take advantage of ease in/out, etc. The only piece of data that needs to be modified each iteration is my main CGPoint, and the rest of the rendering can be determined from that.
Approaches I've considered, but abandoned:
Core Animation: I'm using OpenGL to draw, so Core Animation doesn't seem to help.
NSTimer: This doesn't give me the flexibility of bezier curves and seems very manual.
Heartbeat based on a given framerate: I only need to re-render when the point changes, and most of the time it is stationary. Doesn't feel like a heartbeat is the right approach.
Does something exist like what I'm describing? Do I have to write it myself? Or am I just misunderstanding the tools provided to me, which would suggest I should take another look at how I'm drawing my graphics?

I agree with the other poster. Assuming you can use iOS 5, you should use GLKView and GLKViewController. That's set up to call you once per screen refresh (using a CADisplayLink internally). If you don't want to be iOS 5-only, you can set up a CADisplayLink yourself.
Core Animation isn't useful for OpenGL rendering. However, you can use Core Animation's design to drive your own. Core Animation (like the rest of Cocoa) is built on top of OpenGL, so you can do everything CA does yourself. It just takes work. (Sometimes a LOT of work.)
Core Animation uses a motion-based, not a frame-based, animation model. Each time it renders the scene, it decides how much motion should be applied based on the elapsed time since the beginning of the animation. If it gets behind in rendering frames, the next frame moves further, so the motion over time is consistent.
As for ease-in/ease-out timing, you can do that yourself too. You'd need to read up on animation timing: it uses a non-linear mapping of input time to output time, with Bézier curves changing the shape of the curve at the beginning and the end.
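To make that concrete, here's a minimal sketch of a motion-based update with easing. The curve below is the classic cubic smoothstep rather than Core Animation's actual Bézier timing functions, but it gives the same qualitative ease-in/ease-out shape; animationStartTime, animationDuration, startPoint, and endPoint are hypothetical variables you would own.
// Warp linear progress t in [0, 1] into an ease-in/ease-out shape.
static CGFloat EaseInOut(CGFloat t) {
    return t * t * (3.0 - 2.0 * t); // slow start, fast middle, slow end
}

// Run once per frame: derive motion from elapsed time, not frame count,
// so the motion stays consistent even if frames are dropped.
CFTimeInterval elapsed = CACurrentMediaTime() - animationStartTime;
CGFloat t = MIN(elapsed / animationDuration, 1.0);
CGFloat eased = EaseInOut(t);
CGPoint point = CGPointMake(startPoint.x + (endPoint.x - startPoint.x) * eased,
                            startPoint.y + (endPoint.y - startPoint.y) * eased);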

You can use GLKView and GLKViewController for your rendering and change the point in the update method of GLKViewController.
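For example, a minimal sketch of that, assuming a GLKViewController subclass with hypothetical _point, _startPoint, _endPoint, _startTime, and _duration ivars:
- (void)update { // called by GLKViewController once per frame, before drawing
    CFTimeInterval elapsed = CACurrentMediaTime() - _startTime;
    CGFloat t = MIN(elapsed / _duration, 1.0);
    CGFloat eased = t * t * (3.0 - 2.0 * t); // ease-in/ease-out, as above
    _point = CGPointMake(_startPoint.x + (_endPoint.x - _startPoint.x) * eased,
                         _startPoint.y + (_endPoint.y - _startPoint.y) * eased);
    // Rebuild your transformation matrices from _point here.
}
Since the point is stationary most of the time, you can also set the controller's paused property to YES whenever no animation is running; GLKViewController stops issuing update/draw callbacks while paused.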

Related

alternative to UIView setCenter

My app, which is a game, includes a CADisplayLink timer which calls a function that makes about 20 calls to UIView setCenter: for the various objects on screen every frame.
Time profiling it, this accounts for about 30% of all activity in the game and drastically reduces performance on older devices (anything below a 5th-generation iPod touch or iPhone).
Are there any lightweight, low-overhead alternatives I can use to move objects (specifically UIViews) around the screen every frame?
EDIT:
Just to clarify, the center property of these UIViews must be set EVERY FRAME. I have a number of tiles that represent the ground in my game. They zip across the screen, only to be replaced by new tiles. After fiddling with the code for a couple of hours to change the UIViews to CALayers, I have it working with absolutely no performance gain. There surely is a better way to do this.
Some code to give a general idea of what is going on:
for (Object *o in gameController.entities) {
    [o step:curTimeMS];
}
gameController is, as one would think, a class that takes care of the main game functions. It holds the list of entities, which are all the objects on screen. The step method is overridden by each entity class, so its behavior is specific to each entity; the curTimeMS variable is simply a timestamp so the object can calculate its delta position. In essence, each entity updates its layer.position property every frame, moving itself at an appropriate speed across the screen.
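For reference, here's a minimal sketch of what such a step: method might look like (not the poster's actual code; _lastTimeMS and _velocity are hypothetical ivars):
- (void)step:(double)curTimeMS {
    double dt = (curTimeMS - _lastTimeMS) / 1000.0; // seconds since last frame
    _lastTimeMS = curTimeMS;

    CGPoint p = self.layer.position;
    p.x += _velocity.x * dt; // frame-rate-independent movement
    p.y += _velocity.y * dt;
    self.layer.position = p;
}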
I would recommend SpriteKit. It is a very powerful game / 2D animation framework created by Apple. Cocos2D is also a very powerful framework of a similar type. You can create a new SpriteKit game straight from Xcode.
If you want to stay in house with just UIKit stuff, check out UIView block-based animations. Here is the gist of it:
[UIView animateWithDuration:numberOfSecondsTakenToAnimate
                 animations:^{
    // Do your animation here, e.g. move the view's frame or adjust its color.
} completion:^(BOOL finished) {
    // When the animation completes, the code in this block is executed.
}];
I just remembered Core Graphics. It is used in tandem with UIViews to create simple 2D graphics, and it's very powerful and very fast. Here is the gist of that:
// Inside a UIView's drawRect:, where a current graphics context exists:
CGContextRef cntxt = UIGraphicsGetCurrentContext();
CGContextBeginPath(cntxt);
CGContextMoveToPoint(cntxt, <x>, <y>);       // start the path
CGContextAddLineToPoint(cntxt, <x>, <y>);    // add a line segment
CGContextClosePath(cntxt);                   // close the shape
[[UIColor <color>] setFill];
[[UIColor <color>] setStroke];
CGContextDrawPath(cntxt, kCGPathFillStroke); // fill and stroke in one call
Note: things in < > are variables / values specified by you.
If you want to go all out, take the time to learn OpenGL. Beware: I have heard that this is extremely hard to learn.
If you need performance, do not use UIView. It is not designed to be fast.
Instead, have a single UIView that takes up the whole screen, with a bunch of CALayer objects inside the one view. Move the layers around.
CALayer works by talking directly to the GPU, so it's very fast, perhaps even faster than OpenGL. UIView uses CALayer internally, so they both behave approximately the same. The only real difference is that any change to the position of a CALayer will be animated by default. You can easily turn the animation off, although in a game you probably want animation.
Your other option, of course, is to use OpenGL. But that's a lot more work.
EDIT: here is some sample code for changing the position of a layer properly: https://developer.apple.com/library/ios/documentation/Cocoa/Conceptual/CoreAnimation_guide/CreatingBasicAnimations/CreatingBasicAnimations.html#//apple_ref/doc/uid/TP40004514-CH3-SW8
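For per-frame movement you'll usually want to suppress that implicit animation; the standard way is wrapping the change in a CATransaction (a minimal sketch; myLayer and newPosition are yours):
[CATransaction begin];
[CATransaction setDisableActions:YES]; // no implicit animation in this transaction
myLayer.position = newPosition;
[CATransaction commit];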

Synchronize UIView layout to OpenGL frames

We are developing a game that has 2d elements displayed with UIViews over an OpenGL ES view (specifically, we're using GLKit's GLKView) and are having problems keeping the positions perfectly in sync.
In the parent view's layoutSubviews, we're projecting 3d positions in the world onto the screen, and using those as locations for several UIView "markers" in the game. The whole game only updates in response to the user moving the camera, and the camera tells the view setNeedsLayout each time it moves.
Everything's working fine, except that the markers seem to be roughly one frame out of sync with the 3d rendering. I say roughly because (1) it's an estimate! and (2) I'm wondering whether there's potentially a multithreading issue: doesn't GLKView sync to a special screen-refresh callback or something?
Is there some way of hooking a view's layoutSubviews so that it sync's to the 3d view update?
Update: Weirdly, calling layoutIfNeeded immediately after setNeedsLayout makes the problem worse! Possibly 2 or more frames out. Really don't understand that!
What's triggering your call to layoutSubviews?
It all depends where in the RunLoop your call is triggered vs. where your GLK update call is triggered.
In general, for what you're doing, I'd aim to do your layout as a side-effect of the GLK update - i.e. don't wait for layoutSubviews to change your position.
(If you're using OpenGL, then the whole "layout" system isn't much use to you: GLK is running in its own little world with a variable frame rate, and you want to make that your reference point.)
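A minimal sketch of that arrangement (markers, updateCameraFromUserInput, worldPositionForMarkerAtIndex:, and projectToScreen: are hypothetical names):
- (void)update { // GLKViewController subclass; runs once per GL frame
    [self updateCameraFromUserInput];
    for (NSUInteger i = 0; i < self.markers.count; i++) {
        UIView *marker = self.markers[i];
        GLKVector3 worldPos = [self worldPositionForMarkerAtIndex:i];
        marker.center = [self projectToScreen:worldPos];
    }
    // Then rebuild the matrices -glkView:drawInRect: will use, so the GL
    // scene and the UIKit markers reflect the same camera state this frame.
}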
This is impossible to do correctly without drawing the frames of the video using OpenGL (drawing in the same context, so that you are always sure that one frame contains the same time of video and animation). Everything else you do (framerate compensation, lag prediction) only depends on chance and will always be a little bit unsynchronized.
I'm not familiar with UIView, but if there is any way to let it play audio and copy the frames to a texture, do that. Lag in audio is much easier to compensate for, and much less noticeable to humans, than lag in video.

Menu from the Contre Jour app

I'm trying to do a menu like the one in the "Contre Jour" game, with 3 elements spinning in a circle when the user drags left and right. I'm using CALayers with CATransforms to position them in a 3D spinning wheel (no problem so far).
I need a way (maybe with NSTimer?) to calculate in-between values, because Core Animation just interpolates values; if you NSLog them, it's just going to show the start and the end, or just the end. I need all the in-between values, and I need to snap the wheel movement to one position when I release the finger (touches ended); there are 3 elements, so each one should be at 120 degrees.
My guess (and I am quite sure I'm correct) is that they are using a game engine such as Unity3D or Cocos2D, or any other of the many available, to manage their sprites, animations, textures, physics, and basically everything else. Trying to replicate it outside a game engine will most likely result in crummy performance and a lot of hair-pulling. I would suggest looking into a dedicated game engine and giving it a shot there.
I am not sure I understand exactly what Contre Jour does with the spinners; anyway, I think a reasonable approach for your case is using a UIPanGestureRecognizer to update the status of your spinning wheel according to the panning.
Now, it is not clear what you do to animate the spinning wheel (if you could provide some code, this would help in understanding exactly what you are trying to do), but the idea would be this: instead of specifying an animation ending point far away from the starting point (and letting Core Animation do all the handling for you, even after the dragging has stopped), you would only modify the status of the spinning wheel in small increments, as in the sketch below.
If your only issue is stopping the animation when the dragging stops, you could try calling removeAnimationForKey: on your layer to halt a specific animation.
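A minimal sketch of that incremental approach, with a snap to the nearest 120-degree slot on release (wheelLayer and _angle are hypothetical):
- (void)handlePan:(UIPanGestureRecognizer *)pan {
    CGFloat dx = [pan translationInView:self.view].x;
    [pan setTranslation:CGPointZero inView:self.view];
    _angle += dx * 0.01; // tune the drag-to-radians factor to taste

    if (pan.state == UIGestureRecognizerStateEnded) {
        CGFloat third = 2.0 * M_PI / 3.0;       // 120 degrees
        _angle = round(_angle / third) * third; // snap to the nearest slot
        // Here the implicit animation is welcome: it eases into the slot.
        self.wheelLayer.transform = CATransform3DMakeRotation(_angle, 0, 1, 0);
    } else {
        [CATransaction begin];
        [CATransaction setDisableActions:YES]; // track the finger exactly
        self.wheelLayer.transform = CATransform3DMakeRotation(_angle, 0, 1, 0);
        [CATransaction commit];
    }
}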
Look into CADisplayLink. This works very much like an NSTimer, except that its refresh rate is tied to that of the display, so your animations will be smoother than if you were to use timers. It will let you calculate all the in-between values and update your control.
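Setting one up is short; a minimal sketch (tick: is a hypothetical selector):
// Create the link once, e.g. in viewDidLoad:
CADisplayLink *link = [CADisplayLink displayLinkWithTarget:self
                                                  selector:@selector(tick:)];
[link addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSRunLoopCommonModes];

// Called once per display refresh:
- (void)tick:(CADisplayLink *)link {
    // link.timestamp is the time of the last displayed frame; use it to
    // compute your in-between values and update the layer transforms.
}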
I'm not clear on what you are asking, but I do have one insight for you: to get the in-between values of an in-flight animation, query the layer's presentationLayer property. The property that's being animated will have a value that's a close approximation of its on-screen appearance at the moment you fetch it.

When does a view (or layer) require offscreen rendering?

Hello, this weekend I started to watch the 2011 WWDC videos. I've found really interesting topics about iOS. My favorites were about performance and graphics, but I've found two of them apparently in contradiction. Of course there is something that I didn't get.
The sessions that I'm talking about are Understanding UIKit Rendering (session 121) and Polishing Your App (session 105).
Unfortunately the sample code from 2011 is still not downloadable, so it's pretty hard to get an overall view.
In one session they explain that most of the time offscreen rendering should be avoided during visualization in scroll views, etc. They fix the performance issues in the sample code by drawing almost everything inside the -drawRect method.
In the other session the performance issue (on a table view) seems to be due to too much code in the -drawRect method of the table's cells.
First, it's not clear to me when offscreen rendering is required by the system. I've seen in the video that some Quartz properties such as cornerRadius, shadowOffset, and shadowColor require it, but does a general rule exist?
Second, I don't know if I understood correctly, but it seems that when there is no offscreen rendering, adding layers or views is the way to go.
I hope someone can shed some light on this.
Thanks,
Andrea
I don't think there is a rule written down anywhere, but hopefully this will help:
First, let's clear up some definitions. I think offscreen vs onscreen rendering is not the overriding concern most of the time, because offscreen rendering can be as fast as onscreen. The main issue is whether the rendering is done in hardware or software.
There is also very little practical difference between using layers and views. Views are just a thin wrapper around CALayer, and they don't introduce a significant performance penalty most of the time. You can override the type of layer used by a view using the +layerClass method if you want a view backed by a CAShapeLayer or CATiledLayer, etc.
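For example, a minimal sketch of backing a view with a CAShapeLayer:
@interface ShapeView : UIView
@end

@implementation ShapeView
+ (Class)layerClass {
    return [CAShapeLayer class]; // self.layer is now a CAShapeLayer
}
@end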
Generally, on iOS, pixel effects and Quartz / Core Graphics drawing are not hardware accelerated, and most other things are.
The following things are not hardware accelerated, which means that they need to be done in software (offscreen):
Anything done in a drawRect. If your view has a drawRect, even an empty one, the drawing is not done in hardware, and there is a performance penalty.
Any layer with the shouldRasterize property set to YES.
Any layer with a mask or drop shadow.
Text (any kind, including UILabels, CATextLayers, Core Text, etc).
Any drawing you do yourself (either onscreen or offscreen) using a CGContext.
Most other things are hardware accelerated, so they are much faster. However, this may not mean what you think it does.
Any of the above types of drawing are slow compared to hardware accelerated drawing, however they don't necessarily slow down your app because they don't need to happen every frame. For example, drawing a drop shadow on a view is slow the first time, but after it is drawn it is cached, and is only redrawn if the view changes size or shape.
The same goes for rasterised views or views with a custom drawRect: the view typically isn't redrawn every frame, it is drawn once and then cached, so the performance after the view is first set up is no worse, unless the bounds change or you call setNeedsDisplay on it.
For good performance, the trick is to avoid using software drawing for views that change every frame. For example, if you need an animated vector shape you'll get better performance using CAShapeLayer or OpenGL than drawRect and Core Graphics. But if you draw a shape once and then don't need to change it, it won't make much difference.
Similarly, don't put a drop shadow on an animated view because it will slow down your frame rate. But a shadow on a view that doesn't change from frame to frame won't have much negative impact.
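One standard mitigation worth adding here: giving the layer an explicit shadowPath spares Core Animation from deriving the shadow's shape from the layer's rendered pixels. A minimal sketch:
view.layer.shadowColor = [UIColor blackColor].CGColor;
view.layer.shadowOpacity = 0.5;
view.layer.shadowOffset = CGSizeMake(0, 2);
// With an explicit path, the shadow's shape never has to be computed
// from the layer's contents.
view.layer.shadowPath = [UIBezierPath bezierPathWithRect:view.bounds].CGPath;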
Another thing to watch out for is slowing down the view setup time. For example, suppose you have a page of text with drop shadows on all the text; this will take a very long time to draw initially since both the text and shadows all need to be rendered in software, but once drawn it will be fast. You will therefore want to set up this view in advance when your application loads, and keep a copy of it in memory so that the user doesn't have to wait ages for the view to display when it first appears on screen.
This is probably the reason for the apparent contradiction in the WWDC videos. For large, complex views that don't change every frame, drawing them once in software (after which they are cached and don't need to be redrawn) will yield better performance than having the hardware re-composite them every frame, even though it will be slower to draw the first time.
But for views that must be redrawn constantly, like table cells (the cells are recycled so they must be redrawn each time one cell scrolls offscreen and is re-used as it scrolls back onto the other side as a different row), software drawing may slow things down a lot.
Offscreen-rendering is one of the worst defined topics in iOS rendering, today. When Apple's UIKit engineers refer to offscreen-rendering, it has a very specific meaning, and a ton of third-party iOS dev blogs are getting it wrong.
When you override drawRect:, you're drawing via the CPU and spitting out a bitmap. The bitmap is packaged up and sent to a separate process that lives in iOS, the render server. Ideally, the render server just displays the data on screen.
If you fiddle with properties on CALayer, like turning on drop shadows, the GPU will perform additional drawing. This additional work is what UIKit engineers mean when they say "off-screen rendering." This is always performed with hardware.
The issue with off-screen drawing isn't necessarily the drawing. The off-screen pass requires a context switch, as the GPU switches its drawing destination. During this switch, the GPU is idle.
While I don't know a full list of properties that trigger an off-screen pass, you can diagnose this with the Core Animation Instrument's "Color Offscreen-rendered layer" toggle. I assume any property other than alpha is performed via an offscreen pass.
With early iOS hardware, it was reasonable to say "do everything in drawRect." Nowadays GPUs are better, and UIKit has features like shouldRasterize. Today, it's a balancing act between the time spent in drawRect, the number of off-screen passes, and the amount of blending. For the full details, watch the 2014 WWDC session 419, "Advanced Graphics and Animation for iOS Apps."
That all said, it's good to understand what's going on behind-the-scenes, and keep it in the back of your head so you don't do anything insane, but you should start from the simplest solution. Then test it on the slowest hardware you support. If you aren't hitting 60FPS, use Instruments to measure things and figure it out. There are a few possible bottlenecks, and if you aren't using data to diagnose things, you're just guessing.
Offscreen rendering / Rendering on the CPU
The biggest bottlenecks to graphics performance are offscreen rendering and blending – they can happen for every frame of the animation and can cause choppy scrolling.
Offscreen rendering (software rendering) happens when it is necessary to do the drawing in software (offscreen) before it can be handed over to the GPU. Hardware does not handle text rendering and advanced compositions with masks and shadows.
The following will trigger offscreen rendering:
Any layer with a mask (layer.mask)
Any layer with layer.masksToBounds / view.clipsToBounds set to true
Any layer with layer.allowsGroupOpacity set to YES and layer.opacity less than 1.0
Any layer with a drop shadow (layer.shadow*). Tips on how to fix: https://markpospesel.wordpress.com/tag/performance/
Any layer with layer.shouldRasterize set to true
Any layer with layer.cornerRadius, layer.edgeAntialiasingMask, or layer.allowsEdgeAntialiasing
Any layer with layer.borderWidth and layer.borderColor? (missing reference / proof)
Text (any kind, including UILabel, CATextLayer, Core Text, etc.)
Most of the drawing you do with CGContext in drawRect:. Even an empty implementation will be rendered offscreen.
This post covers blending and other things affecting performance: What triggers offscreen rendering, blending and layoutSubviews in iOS?
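When a layer with these properties really does have to stay on screen, rasterization can trade repeated offscreen passes for a one-time render into a cached bitmap; a minimal sketch:
// Composite the layer (shadow, cornerRadius, etc.) into a bitmap once and
// reuse it. Only a win if the contents rarely change, since any change
// invalidates the cache.
view.layer.shouldRasterize = YES;
view.layer.rasterizationScale = [UIScreen mainScreen].scale; // avoid blurry Retina output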

UIView animation vs CALayers

I'm struggling with conceptualizing animations with a CALayer as opposed to UIView's own animation methods. Throw "Core Animation" into this and, well, maybe someone can articulate these concepts from a high level so I can better visualize what's happening and why I'd want to migrate UIView animations (which I'm quite familiar with now) to CALayer animations on the iPhone. Every view in Cocoa Touch automatically gets a layer. And, it seems, you can animate one and/or the other?!? Even mix them together?!? But why? Where's the line? What's the pro/con of each?
The Core Animation Programming Guide immediately jumps into Layer & Timing Classes, and I think I need to take a step back and understand why these varied pieces exist and how they relate to each other.
Use views for control and layers for eye candy. Layers don't receive events so it's easier to use a view for those cases, but when you want to animate a sprite or backgrounds, etc., layers make sense. Events pass right through layers to the backing view so you can have a pretty visual representation without messing up your events. Try to overlay a view that you're just using for visual representation and you'll have to pass tap events through to the underlying view yourself.
A UIView is always rendered to a CALayer. When you use UIView methods to animate a view, you are effectively manipulating the underlying CALayer.
If you need to do simple things, use the UIView methods. For more complex situations, or if you want layers not associated with any view in particular, use CALayers.
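For example, a minimal sketch of animating a standalone layer with an explicit CABasicAnimation:
CALayer *layer = [CALayer layer];
layer.frame = CGRectMake(0, 0, 50, 50);
layer.backgroundColor = [UIColor redColor].CGColor;
[self.view.layer addSublayer:layer];

CABasicAnimation *move = [CABasicAnimation animationWithKeyPath:@"position"];
move.fromValue = [NSValue valueWithCGPoint:CGPointMake(25, 25)];
move.toValue   = [NSValue valueWithCGPoint:CGPointMake(200, 200)];
move.duration  = 0.5;
move.timingFunction = [CAMediaTimingFunction functionWithName:kCAMediaTimingFunctionEaseInEaseOut];
layer.position = CGPointMake(200, 200);   // set the model value to the end state
[layer addAnimation:move forKey:@"move"]; // so the layer doesn't snap back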
I've done a bunch of apps in the past year. Here's my rule of thumb:
Use UIView until it doesn't do what you want.
Then move to CoreAnimation. But before you get into it too much...
If you write more than a few animations, use Cocos2D.
UIView transforms are only 2D and are restricted to that; layer transforms, however, can be 3D, and you should use those if you want to do 3D stuff. UIView animation will work whether you change the UIView transform or the CALayer transform. So at a basic level, you can do a lot more manipulation when you are working with a layer rather than the view.
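For example, a rotation around the y-axis with perspective, which the 2D CGAffineTransform can't express (myView is a hypothetical view):
CATransform3D t = CATransform3DIdentity;
t.m34 = -1.0 / 500.0; // adds perspective; smaller magnitude = subtler effect
t = CATransform3DRotate(t, M_PI_4, 0, 1, 0); // 45 degrees around the y-axis
myView.layer.transform = t;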
I am not sure if I am misunderstanding Chris' response to "What's Cocos2D doing better? Don't you have other problems then, regarding the touch event handling and many other stuff that misses in openGL ES?"
It sounds like the answer suggests Cocos2D is not based on the OpenGL ES framework, when in fact it actually is. While it is a great 2D game engine, it does use OpenGL for much of its rendering; attached to a physics library, it allows for a lot of very interesting possibilities for animation. And Chris is correct: it is a lot less coding indeed.
