I'm using OpenGL ES (2.0) exclusively for my app. I'm trying to get UIMotionEffect to work with one of my OpenGL objects - where all I need are the "tilt" values.
UIInterpolatingMotionEffect can only be applied to a UIView. A hack would be to apply UIInterpolatingMotionEffect to a UIView, grab the values from it each frame, and never actually render that UIView. But that seems far too hack-ish to be the only solution.
I tried to subclass UIMotionEffect, but couldn't figure out how - (NSDictionary *)keyPathsAndRelativeValuesForViewerOffset:(UIOffset)viewerOffset works (i.e. what calls it and how to retrieve the values I need).
Any ideas on how could I use UIMotionEffect with OpenGL?
I finally tested the method suggested by @rickster: connect UIInterpolatingMotionEffect to my GLKView/GL view, pass custom keys, and just retrieve their values. Unfortunately, that idea fails royally.
First, the keys passed to UIInterpolatingMotionEffect must be animatable UIView properties; they can't be custom variables/properties. Considering most people wouldn't want their entire GL view to parallax in a single direction/intensity, this method fails.
Second, you can't just retrieve the values. If you pass center.x as a key, UIInterpolatingMotionEffect won't update center.x directly, since the changes are applied as "animations". As such, you have to retrieve the animated center.x, for example via [[view.layer presentationLayer] center].
Final notes: just create a separate UIView on the main thread, add it to your UIViewController, and systematically retrieve values from it (also on the main thread). I found that to be the simplest solution by far.
Also, since I needed to retrieve the values on a separate thread, I'm generally one value behind: I dispatch to the main thread to fetch the current value while simultaneously returning the last value set (make sure the variable is thread-safe/atomic).
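For reference, here is a rough sketch of that setup, assuming ARC and iOS 7+. The helper view and the ±50 point ranges are my own choices, not from the answer above:
// A tiny helper view that exists only to receive motion-effect updates.
UIView *motionView = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 1, 1)];
motionView.backgroundColor = [UIColor clearColor];
[self.view addSubview:motionView];
// Device tilt maps to center offsets in [-50, 50] points on each axis.
UIInterpolatingMotionEffect *xAxis =
    [[UIInterpolatingMotionEffect alloc] initWithKeyPath:@"center.x"
                                                    type:UIInterpolatingMotionEffectTypeTiltAlongHorizontalAxis];
xAxis.minimumRelativeValue = @(-50);
xAxis.maximumRelativeValue = @(50);
UIInterpolatingMotionEffect *yAxis =
    [[UIInterpolatingMotionEffect alloc] initWithKeyPath:@"center.y"
                                                    type:UIInterpolatingMotionEffectTypeTiltAlongVerticalAxis];
yAxis.minimumRelativeValue = @(-50);
yAxis.maximumRelativeValue = @(50);
UIMotionEffectGroup *group = [[UIMotionEffectGroup alloc] init];
group.motionEffects = @[xAxis, yAxis];
[motionView addMotionEffect:group];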
EDIT
Since it's best to access the presentationLayer on the main thread, but my method is called from a thread that isn't the main thread, I dispatch to the main queue to retrieve the value and place it in an atomic property, then return the atomic property outside the block. This ensures my secondary thread doesn't get blocked by the main thread, and I can still retrieve the values I need.
Here's the code I used to retrieve values from a thread that isn't the main thread:
@property (atomic) GLKVector2 currentOffset;
@property (atomic, readonly) GLKVector2 offset;

- (GLKVector2)offset
{
    dispatch_async(dispatch_get_main_queue(), ^{
        CGRect frame = [[self.view.layer presentationLayer] frame];
        self.currentOffset = (GLKVector2){-frame.origin.x, -frame.origin.y};
    });
    return self.currentOffset;
}
UIInterpolatingMotionEffect works by using key-value coding to set a value for any numeric property on a UIView. This can be any property you define, not just those built into the UIView class.
You need some kind of UIView to get your content on screen. If you're using OpenGL ES on iOS, you're (preferably) using GLKView, which is a UIView subclass that manages its own GL framebuffers and gives you a place to write GL drawing code to render into those framebuffers. (If you're not using GLKView, you're probably using some kind of custom UIView subclass that uses a CAEAGLLayer to render its contents.)
Either way, if you've got GL-rendered content on screen already, you already have a UIView. If that view isn't your own custom subclass (whether a direct UIView subclass or a GLKView subclass), you can make it one and define your own custom properties on it. Then set up your UIInterpolatingMotionEffect to use those properties. In your view subclass, or whatever object is responsible for your GL rendering, read the values of those properties and use them to set up your scene. (For example, you could use them to set up a modelview matrix and pass that to a shader uniform.)
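As a sketch of that idea (the property name horizontalTilt and the subclass name are mine; note the asker reports above that custom key paths did not work in their testing):
@interface MyGLView : GLKView
// Custom property for the motion effect to write to via KVC.
@property (nonatomic) CGFloat horizontalTilt;
@end

// Setup, e.g. in the view controller:
UIInterpolatingMotionEffect *tilt =
    [[UIInterpolatingMotionEffect alloc] initWithKeyPath:@"horizontalTilt"
                                                    type:UIInterpolatingMotionEffectTypeTiltAlongHorizontalAxis];
tilt.minimumRelativeValue = @(-1.0);
tilt.maximumRelativeValue = @(1.0);
[myGLView addMotionEffect:tilt];
// In the render code, read self.horizontalTilt and feed it into a
// modelview matrix or a shader uniform.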
Related
Ever since their introduction in iOS 4, I have been wondering about the internal implementation of UIView's block-based animation methods. In particular, I would like to understand what mystical features of Objective-C are used there to capture all the relevant layer state changes before and after execution of the animation block.
Observing the black-box implementation, I gather that it needs to capture the before-state of all layer properties modified in the animation block in order to create all the relevant CAAnimations. I guess it does not take a snapshot of the whole view hierarchy, as that would be horribly inefficient. The animation block is an opaque blob of code at runtime, so I don't think it can be analyzed directly. Does it replace the implementation of property setters on CALayer with some kind of recording versions? Or is the support for this property-change recording baked in somewhere deep inside CALayer?
To generalize the question a little bit: is it possible to create a similar block-based API for recording state changes using some Objective-C dark magic, or does this rely on knowing and having access to the internals of the objects being changed in the block?
It is actually a very elegant solution, built around the fact that the view is the layer's delegate and that stand-alone layers do implicitly animate on property changes.
It just happens to be that I gave a BLITZ talk about this at NSConference just a couple of days ago and I posted my slides on GitHub and tried to write down more or less what I said in the presenter notes.
That said: it is a very interesting question that I don't see asked very often. It may be a bit too broad, but I really like curiosity.
UIView animations existed before iOS 4
Ever since their introduction in iOS 4, I have been wondering about the internal implementation of UIView's block-based animation methods.
UIView animations existed before iOS 4, but in a different style (which is no longer recommended because it is more cumbersome to use). For example, animating the position and color of a view with a delay could be done like this. Disclaimer: I did not run this code, so it may contain bugs.
// Setup
static void *myAnimationContext = &myAnimationContext;
[UIView beginAnimations:@"My Animation ID" context:myAnimationContext];
// Configure
[UIView setAnimationDuration:1.0];
[UIView setAnimationDelay:0.25];
[UIView setAnimationCurve:UIViewAnimationCurveEaseInOut];
// Make changes
myView.center = newCenter;
myView.backgroundColor = newColor;
// Commit
[UIView commitAnimations];
The view-layer synergy is very elegant
In particular, I would like to understand what mystical features of Objective-C are used there to capture all the relevant layer state changes before and after execution of the animation block.
It is actually the other way around. The view is built on top of the layer, and they work together very closely. When you set a property on the view, it sets the corresponding property on the layer. You can, for example, see that the view doesn't even have its own variables for the frame, bounds, or position.
Observing the black-box implementation, I gather that it needs to capture the before-state of all layer properties modified in the animation block in order to create all the relevant CAAnimations.
It does not need to do that, and this is where it all gets very elegant. Whenever a layer property changes, the layer looks for the action (a more general term for an animation) to execute. Since setting most properties on a view actually sets the corresponding property on the layer, you are implicitly setting a bunch of layer properties.
The first place the layer goes looking for an action is its delegate (it is documented behaviour that the view is the layer's delegate). This means that when a layer property changes, the layer asks the view to provide an animation object for each property change. So the view doesn't need to keep track of any state: the layer has the state, and the layer asks the view to provide an animation when the properties change.
Actually, that's not entirely true. The view needs to keep track of some state, such as whether you are inside an animation block and what duration to use for the animation.
You could imagine that the API looks something like this.
Note: I don't know what the actual implementation does and this is obviously a huge simplification to prove a point
// static variables since this is a class method
static NSTimeInterval _durationToUseWhenAsked;
static BOOL _isInsideAnimationBlock;

// Oversimplified example implementation of how it _could_ be done
+ (void)animateWithDuration:(NSTimeInterval)duration
                 animations:(void (^)(void))animations
{
    _durationToUseWhenAsked = duration;
    _isInsideAnimationBlock = YES;
    animations();
    _isInsideAnimationBlock = NO;
}
// Running the animations block is going to change a bunch of properties,
// which results in the delegate method being called for each property change
- (id<CAAction>)actionForLayer:(CALayer *)layer
                        forKey:(NSString *)event
{
    // Don't animate outside of an animation block
    if (!_isInsideAnimationBlock)
        return (id)[NSNull null]; // returning NSNull means "don't animate"

    // Only animate certain properties
    if (![[[self class] arrayOfPropertiesThatSupportAnimations] containsObject:event])
        return (id)[NSNull null]; // returning NSNull means "don't animate"

    CABasicAnimation *theAnimation = [CABasicAnimation animationWithKeyPath:event];
    theAnimation.duration = _durationToUseWhenAsked;

    // Get the value that is currently seen on screen
    id oldValue = [[layer presentationLayer] valueForKeyPath:event];
    theAnimation.fromValue = oldValue;

    // Only setting the from value means animating from that value to the model value
    return theAnimation;
}
Does it replace the implementation of property setters on CALayer with some kind of recording versions?
No (see above).
Or is the support for this property-change recording baked in somewhere deep inside CALayer?
Yes, sort of (see above).
Creating similar API yourself
To generalize the question a little bit: is it possible to create a similar block-based API for recording state changes using some Objective-C dark magic, or does this rely on knowing and having access to the internals of the objects being changed in the block?
You can definitely create a similar block-based API if you want to provide your own animations based on property changes. If you look at the techniques I showed in my talk at NSConference for inspecting UIView animations (directly asking the layer for actionForLayer:forKey: and using layerClass to create a layer that logs all addAnimation:forKey: calls), you should be able to learn enough about how the view uses the layer to create this abstraction.
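As a rough sketch of that logging technique (my own illustration, not code from the talk):
// A layer subclass that logs every animation added to it.
@interface LoggingLayer : CALayer
@end

@implementation LoggingLayer
- (void)addAnimation:(CAAnimation *)anim forKey:(NSString *)key
{
    NSLog(@"addAnimation: %@ forKey: %@", anim, key);
    [super addAnimation:anim forKey:key];
}
@end

// A view backed by the logging layer.
@interface InspectingView : UIView
@end

@implementation InspectingView
+ (Class)layerClass
{
    return [LoggingLayer class];
}
@end
Animating an InspectingView inside [UIView animateWithDuration:animations:] will then log whatever animation objects UIKit hands to the layer.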
I'm not sure if recording state changes is your end goal or not. If you only want to create your own animation API, then you shouldn't have to. If you really want to do it, you probably could, but there wouldn't be as much communication infrastructure (delegate methods and callbacks between the view and the layer) available to you as there is for animations.
David's answer is awesome. You should accept it as the definitive answer.
I do have a minor contribution. I created a markdown file in one of my github projects called "Sleuthing UIView Animations." (link) It goes into more detail on how you can watch the CAAnimation objects that the system creates in response to UIView animations. The project is called KeyframeViewAnimations. (link)
It also shows working code that logs the CAAnimations that are created when you submit UIView animations.
And, to give credit where credit is due, it was David who suggested the technique I use.
I'm new to CG drawing, and I'm confused on where the CG code goes.
What is stopping the idea of putting the draw functions in the UIViewController vs the UIView? How should I determine which parts of the CG code should go where? I see that some of the tutorials have code in viewDidLoad from the view controller, but others say it goes in the view. What determines what goes where?
(Yes, this is kind of an MVC question, but I'm still having trouble differentiating.)
The correct place for custom drawing code is (almost always) the drawRect: method of a UIView subclass. The usual way to go is to make a custom subclass of UIView and make that the root view of your view controller. In the view controller's loadView method, for example, you can assign self.view = [[MyCustomView alloc] init]; (autorelease that view if you're in non-ARC code!). Then your custom drawing code goes in the drawRect: method of MyCustomView.
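A minimal sketch of that pattern (MyCustomView and the ellipse drawing are just placeholders):
@interface MyCustomView : UIView
@end

@implementation MyCustomView
// All custom Core Graphics drawing for this view lives here.
- (void)drawRect:(CGRect)rect
{
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextSetFillColorWithColor(ctx, [UIColor redColor].CGColor);
    CGContextFillEllipseInRect(ctx, CGRectInset(self.bounds, 20.0, 20.0));
}
@end

// In the view controller:
- (void)loadView
{
    self.view = [[MyCustomView alloc] init];
}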
Core Graphics drawing code can go wherever there is a valid context. That means it can go in your own functions, as long as you create your own context.
The reason you generally place Core Graphics drawing code in a UIView subclass is that you usually want to encapsulate the drawing in a reusable form. But if you were going to create an image from Core Graphics code, you could just as easily start a new image context, draw, then save the contents of the context into a UIImage. That type of drawing can go anywhere, even in a UIViewController. Core Graphics drawing can even be used to generate PDFs. It is simply a geometric drawing framework, usable as long as you have a valid context, be it the one created before drawRect: is called in a UIView or a context you created yourself on demand.
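For example, here's a sketch of drawing into an offscreen image context, outside of any view (the 100-point size and the rectangle are arbitrary):
// Create an offscreen bitmap context at the screen's scale.
UIGraphicsBeginImageContextWithOptions(CGSizeMake(100.0, 100.0), NO, 0.0);
CGContextRef ctx = UIGraphicsGetCurrentContext();

// Any Core Graphics drawing works here, exactly as in drawRect:.
CGContextSetStrokeColorWithColor(ctx, [UIColor blueColor].CGColor);
CGContextStrokeRect(ctx, CGRectMake(10.0, 10.0, 80.0, 80.0));

// Capture the result as a UIImage and clean up.
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();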
I am looking into converting my OpenGL rendering code to take advantage of a few features of GLKit (namely the asynchronous texture loading and the automation provided by GLKView/Controller). However, it appears that the classes are designed mainly to accommodate people rendering using an animation loop, whereas I'm working with on-demand rendering. Additionally, some of the rendering is to a texture rather than the GLKView's framebuffer, so should I be looking to just subclass the GLKView and add additional FBOs?
Is there a recommended approach for this type of setup? I would expect something along the lines of:
1. Set the view controller's preferredFramesPerSecond to 0, or just pause the frame updates?
2. Ignore the glkViewControllerUpdate or glkView:drawInRect: methods and just draw what I need, when I need it.
3. Use the view's setNeedsDisplay as with a normal UIView in order to display the frame (do I need to call bindDrawable given that I will be rendering to a texture as well?).
Perhaps it's not worth the effort if this is not what the new API is designed for? I wish the documentation was a little more thorough than it is. Perhaps more samples will be provided when the API has 'matured' a little...
Thanks
The approach I ended up using was to not bother with the GLKViewController, but just use GLKView directly under a UIViewController subclass.
Clearly, the GLKViewController is intended for use by people who need a consistent rendering loop for apps such as games. Without it, drawing to the GLKView is as simple as calling [glkView setNeedsDisplay]. Be sure to set enableSetNeedsDisplay to YES in order to enable this behaviour.
If you did still want to make use of a GLKViewController, you can disable the animation rendering loop in viewWillAppear like so:
- (void)viewWillAppear:(BOOL)animated
{
[super viewWillAppear:animated]; // setPaused automatically set to NO in super's implementation
[self setPaused:YES];
}
Also, set resumeOnDidBecomeActive to NO to prevent the view controller from resuming again automatically.
Using a plain UIViewController with a GLKView is perfectly acceptable, however, and I have seen it recommended by an Apple engineer as an appropriate way to perform on-demand drawing.
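A minimal sketch of that arrangement (everything beyond the GLKit API itself is my own naming):
// In a plain UIViewController subclass that adopts GLKViewDelegate:
- (void)viewDidLoad
{
    [super viewDidLoad];

    EAGLContext *context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
    GLKView *glkView = [[GLKView alloc] initWithFrame:self.view.bounds context:context];
    glkView.delegate = self;
    glkView.enableSetNeedsDisplay = YES; // draw only when asked
    [self.view addSubview:glkView];
}

// Called once per -setNeedsDisplay, not on an animation timer.
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect
{
    glClearColor(0.0, 0.0, 0.0, 1.0);
    glClear(GL_COLOR_BUFFER_BIT);
    // ... on-demand scene rendering goes here ...
}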
I've just converted my code from using an EAGLContext manager I rolled myself to using the GLKit classes.
You suggest you might "..ignore the.. glkView:drawInRect: methods and just draw what [you] need, when I need it". This seems like a sensible option performance-wise; I assume (though haven't tried) if you simply don't specify a GLKViewDelegate or provide a subclassed GLKView with its drawInRect: defined then no animation loop rendering will occur. Have you attempted this?
The alternative would be to simply create some @property (assign, nonatomic) BOOL shouldUpdate; in your MyController : GLKViewController <GLKViewDelegate> class, which will only update if there is something to do:
[self setDelegate:self]; // in init or awakeFromNib or other..
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect
{
    if ([self shouldUpdate]) { ...
I'm sure you get the idea, it's hardly complicated.
One thing worth mentioning: the official API docs state that viewDidLoad should be used in your GLKViewController for initial GL setup. I had issues with this; for some reason my glCreateShader calls always returned zero. This may have been due to my setting the EAGLContext post-initialisation; I couldn't pass it as an init parameter since I created the controller in Storyboard. However, there was nothing logically wrong with the code, so I offer this friendly warning in case you encounter similar issues. My solution is simply to have the following in my drawInRect:
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect
{
    if ([self initialGLSetupDone] == NO) {
        [self beforeFirstRender];
        [self setInitialGLSetupDone:YES];
    }
    // .. rest of render code goes here.
}
Obviously it's not ideal to have an IF in there unnecessarily, but it was an easy solution.
Let me know how it goes if you try updating to use GLKit.
After you have created GLKView, add this line:
glkView.enableSetNeedsDisplay = YES;
(With this set, nothing will redraw the view automatically.)
When you want redraw, insert this line:
[glkView setNeedsDisplay];
... and the drawInRect: routine will then be called just once.
Hope it helps.
Everyone and every book claims that there are implicit animations happening in CALayer. However, every time I have tried to verify that, I end up with a hard snap to the set value. No animation at all.
Here's an example in a project where nothing else is happening. All I do is create a view, then get its CALayer instance and do something that should be implicitly animated.
[theLayer setValue:[NSNumber numberWithFloat:M_PI * 1.1] forKeyPath:@"transform.rotation.z"];
Another one:
CGRect currentBounds = theLayer.bounds;
currentBounds.size.width += 120.f;
[theLayer setBounds:currentBounds];
The view contains some stuff of course so I can see the change.
I see the visual change, but as a hard snap. No animation at all.
So either all those books are wrong and have old Mac OS X knowledge in mind when writing about Core Animation and implicit animations, or I'm doing something wrong. Can anyone provide a working example that demonstrates implicit animations on the iPhone?
UIKit disables implicit animations. To be more specific, a CALayer associated with a UIView will never implicitly animate. CALayers that you create yourself and that are not associated with a UIView will buy into the normal implicit animation machinery.
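To see the difference yourself, here's a small sketch (colors and geometry are arbitrary): a stand-alone sublayer animates implicitly, while the same change on a view's own layer snaps.
// A stand-alone layer: it is not any view's primary layer,
// so it has no UIView acting as its delegate.
CALayer *freeLayer = [CALayer layer];
freeLayer.frame = CGRectMake(20.0, 20.0, 100.0, 100.0);
freeLayer.backgroundColor = [UIColor orangeColor].CGColor;
[self.view.layer addSublayer:freeLayer];

// Later (e.g. in a button action): this animates implicitly, with the
// default 0.25 s duration. The same assignment on self.view.layer
// would snap with no animation.
freeLayer.position = CGPointMake(200.0, 200.0);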
If you're interested in how this works, implicit animations happen after the normal -actionForKey: lookup. If there's a CAAction for a given property, it's used. Otherwise, the implicit animation is used. In the case of a CALayer associated with a UIView, UIView implements the -actionForLayer:forKey: delegate method, and when not in a UIView animation block it always returns [NSNull null] to signify that the action lookup should stop here. This prevents implicit animations from working. Inside of a UIView animation block, it constructs its own action to represent the current UIView animation settings. Again, this prevents implicit animations.
If you need to animate CALayer properties that UIView won't do for you, you can use an explicit CAAnimation subclass (such as CABasicAnimation).
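For instance, a minimal explicit version of the rotation from the question might look like this (note the explicit animation only affects the presentation, so the model value has to be updated separately):
CABasicAnimation *spin = [CABasicAnimation animationWithKeyPath:@"transform.rotation.z"];
spin.fromValue = @0.0;
spin.toValue = @(M_PI * 1.1);
spin.duration = 0.5;
[theLayer addAnimation:spin forKey:@"spin"];

// Keep the model value in sync so the layer doesn't snap back
// when the animation ends.
[theLayer setValue:@(M_PI * 1.1) forKeyPath:@"transform.rotation.z"];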
Old post, but the link below points to a section in the Core Animation Programming Guide that helps shed some more light on what Kevin Ballard was saying. There really should be a blatant note mentioning that, in order to animate a UIView's underlying layer properties, you need to set the layer's delegate to an instance that adopts the actionForLayer:forKey: method and returns nil. It's been found that you can also set the delegate to an instance that doesn't adopt this method and still get implicit animations, but that is a sloppy and confusing practice.
Core Animation - Layer Actions - Defined Search Pattern for Action Keys
Keep in mind that sublayers of a UIView's primary layer do not normally have the UIView as their delegate. Therefore, they participate in implicit animation unless you explicitly block this, even if you haven't provided a delegate to return a nil action in response to actionForLayer:forKey:.
So, one simple way to get implicit animation in a generic view is simply to add a sublayer that covers the primary layer.
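A quick sketch of that trick (the colors and sizes are arbitrary):
// A sublayer covering the view's primary layer. It has no view
// as its delegate, so its property changes animate implicitly.
CALayer *contentLayer = [CALayer layer];
contentLayer.frame = view.layer.bounds;
contentLayer.backgroundColor = [UIColor greenColor].CGColor;
[view.layer addSublayer:contentLayer];

// Later: this cross-fades implicitly instead of snapping.
contentLayer.backgroundColor = [UIColor purpleColor].CGColor;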
Matt Neuberg's book Programming iOS 5 (currently in beta on Safari Books Online) explains this topic very lucidly.
Kevin Ballard's answer is exactly correct. I want to add something more here.
More detail from the official documentation
There is a section called Rules for Modifying Layers in iOS, where you will find:
The UIView class disables layer animations by default but reenables them inside animation blocks.
Besides enabling implicit animations inside a UIView animation block, there are two other solutions:
1. Make the layer you want to animate a sublayer of a view. See the answers to "Implicit property animations do not work with CAReplicatorLayer?" and "Implicit animation fade-in is not working". I tried this, and it worked as expected.
2. Assign the layer's delegate to the layer itself. In iOS, the default delegate of a view's layer is the view that it backs, which implements actionForLayer:forKey: and returns [NSNull null] to disable the layer's implicit animations.
You could also set the layer's delegate to your own object and handle -actionForLayer:forKey:. It seems as if you get implicit animation even if the delegate doesn't implement -actionForLayer:forKey:.
Actually, you can just return nil to get the default animation of CALayer:
- (id<CAAction>)actionForLayer:(CALayer *)layer forKey:(NSString *)event
{
    return nil;
}