I am looking into converting my OpenGL rendering code to take advantage of a few features of GLKit (namely the asynchronous texture loading and the automation provided by GLKView/Controller). However, it appears that the classes are designed mainly to accommodate people rendering using an animation loop, whereas I'm working with on-demand rendering. Additionally, some of the rendering is to a texture rather than the GLKView's framebuffer, so should I be looking to just subclass the GLKView and add additional FBOs?
Is there a recommended approach for this type of setup? I would expect something along the lines of:
Set the view controller's preferredFramesPerSecond to 0, or just pause the frame updates?
Ignore the glkViewControllerUpdate or glkView:drawInRect: methods and just draw what I need, when I need it.
Use the view's setNeedsDisplay as with a normal UIView in order to display the frame (do I need to call bindDrawable, given that I will be rendering to a texture as well?).
Perhaps it's not worth the effort if this is not what the new API is designed for? I wish the documentation was a little more thorough than it is. Perhaps more samples will be provided when the API has 'matured' a little...
Thanks
The approach I ended up using was to not bother with the GLKViewController, but just use GLKView directly under a UIViewController subclass.
Clearly, the GLKViewController is intended for use by people who need a consistent rendering loop for apps such as games. Without it, drawing to the GLKView is as simple as calling [glkView setNeedsDisplay]. Be sure to set enableSetNeedsDisplay to YES in order to enable this behaviour.
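For reference, here is a minimal sketch of that setup; the glkView property and its wiring are assumptions, not from the original post:

- (void)viewDidLoad
{
    [super viewDidLoad];
    // Create a GL context and hand it to the view
    self.glkView.context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
    self.glkView.delegate = self;
    // Required for on-demand drawing via setNeedsDisplay
    self.glkView.enableSetNeedsDisplay = YES;
}

// Later, whenever there is something new to show:
[self.glkView setNeedsDisplay]; // triggers a single glkView:drawInRect: call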
If you did still want to make use of a GLKViewController, you can disable the animation rendering loop in viewWillAppear like so:
- (void)viewWillAppear:(BOOL)animated
{
    [super viewWillAppear:animated]; // setPaused is automatically set to NO in super's implementation
    [self setPaused:YES];
}
Also, set resumeOnDidBecomeActive to NO to prevent the view controller from resuming again automatically.
Using a plain UIViewController with a GLKView is perfectly acceptable, however, and I have seen an Apple engineer recommend it as an appropriate way to perform on-demand drawing.
I've just converted my code from using an EAGLContext manager I rolled myself to using the GLKit classes.
You suggest you might "..ignore the.. glkView:drawInRect: methods and just draw what [you] need, when [you] need it". This seems like a sensible option performance-wise; I assume (though haven't tried) that if you simply don't specify a GLKViewDelegate, or don't provide a subclassed GLKView with drawInRect: defined, then no animation-loop rendering will occur. Have you attempted this?
The alternative would be to simply create a @property (assign, nonatomic) BOOL shouldUpdate; in your MyController : GLKViewController <GLKViewDelegate> class, so that it only updates when there is something to do:

[self setDelegate:self]; // in init or awakeFromNib or other..

- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect
{
    if ([self shouldUpdate]) { ...

I'm sure you get the idea; it's hardly complicated.
One thing worth mentioning: the official API docs state that viewDidLoad should be used in your GLKViewController for initial GL setup. I had issues with this; for some reason my glCreateShader calls always returned zero. This may have been due to my setting the EAGLContext post-initialisation; I couldn't pass it as an init parameter, since I created the controller in a storyboard. However, there was nothing logically wrong with the code, so I offer this friendly warning in case you encounter similar issues. My solution is simply to have the following in my drawInRect:
-(void)glkView:(GLKView *)view drawInRect:(CGRect)rect
{
    if ([self initialGLSetupDone] == NO) {
        [self beforeFirstRender];
        [self setInitialGLSetupDone:YES];
    }
    // .. rest of render code goes here.
}
Obviously it's not ideal to have an IF in there unnecessarily, but it was an easy solution.
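If the per-frame branch bothers you, a dispatch_once variant behaves the same way; this is a sketch, assuming the same beforeFirstRender helper as above (note the static token means setup runs once per class, which is fine for a single view controller):

-(void)glkView:(GLKView *)view drawInRect:(CGRect)rect
{
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        [self beforeFirstRender]; // runs exactly once, with the GL context current
    });
    // .. rest of render code goes here.
}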
Let me know how it goes if you try updating to use GLKit.
After you have created the GLKView, add this line:
glkView.enableSetNeedsDisplay = YES;
(With this set, the view will no longer be redrawn automatically; it is only redrawn when you request it.)
When you want redraw, insert this line:
[glkView setNeedsDisplay];
... and the glkView:drawInRect: routine will then be called just once.
Hope it helps.
Ever since their introduction in iOS 4, I have been wondering about the internal implementation of the UIView's block-based animation methods. In particular I would like to understand what mystical features of Objective C are used there to capture all the relevant layer state changes before and after execution of the animation block.
Observing the black-box implementation, I gather that it needs to capture the before-state of all layer properties modified in the animation block in order to create all the relevant CAAnimations. I guess it does not snapshot the whole view hierarchy, as that would be horribly inefficient. The animation block is an opaque blob of code at runtime, so I don't think it can be analyzed directly. Does it replace the implementation of property setters on CALayer with some kind of recording versions? Or is the support for this property-change recording baked in somewhere deep inside CALayer?
To generalize the question a little: is it possible to create a similar block-based API for recording state changes using some Objective-C dark magic, or does this rely on knowing, and having access to, the internals of the objects being changed in the block?
It is actually a very elegant solution that is built around the fact that the view is the layer's delegate, and that stand-alone layers implicitly animate on property changes.
It just happens that I gave a BLITZ talk about this at NSConference a couple of days ago. I posted my slides on GitHub and tried to write down more or less what I said in the presenter notes.
That said: it is a very interesting question that I don't see asked very often. It may be a bit too broad, but I really like curiosity.
UIView animations existed before iOS 4
Ever since their introduction in iOS 4, I have been wondering about the internal implementation of the UIView's block-based animation methods.
UIView animations existed before iOS 4, but in a different style (one that is no longer recommended because it is more cumbersome to use). For example, animating the position and color of a view with a delay could be done like this. Disclaimer: I did not run this code, so it may contain bugs.
// Setup
static void *myAnimationContext = &myAnimationContext;
[UIView beginAnimations:@"My Animation ID" context:myAnimationContext];
// Configure
[UIView setAnimationDuration:1.0];
[UIView setAnimationDelay:0.25];
[UIView setAnimationCurve:UIViewAnimationCurveEaseInOut];
// Make changes
myView.center = newCenter;
myView.backgroundColor = newColor;
// Commit
[UIView commitAnimations];
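For comparison, the block-based equivalent introduced in iOS 4 (the API the question asks about) expresses the same animation like this:

[UIView animateWithDuration:1.0
                      delay:0.25
                    options:UIViewAnimationOptionCurveEaseInOut
                 animations:^{
                     myView.center = newCenter;
                     myView.backgroundColor = newColor;
                 }
                 completion:nil];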
The view-layer synergy is very elegant
In particular I would like to understand what mystical features of Objective C are used there to capture all the relevant layer state changes before and after execution of the animation block.
It is actually the other way around. The view is built on top of the layer, and they work together very closely. When you set a property on the view, it sets the corresponding property on the layer. You can, for example, see that the view doesn't even have its own variable for the frame, bounds, or position.
Observing the black-box implementation, I gather that it needs to capture the before-state of all layer properties modified in the animation block, to create all the relevant CAAnimations.
It does not need to do that and this is where it all gets very elegant. Whenever a layer property changes, the layer looks for the action (a more general term for an animation) to execute. Since setting most properties on a view actually sets the property on the layer, you are implicitly setting a bunch of layer properties.
The first place the layer goes looking for an action is its delegate (it is documented behaviour that the view is the layer's delegate). This means that when a layer property changes, the layer asks the view to provide an animation object for each property change. So the view doesn't need to keep track of any state: the layer has the state, and the layer asks the view to provide an animation when the properties change.
Actually, that's not entirely true. The view needs to keep track of some state, such as: whether you are inside the block or not, what duration to use for the animation, etc.
You could imagine that the API looks something like this.
Note: I don't know what the actual implementation does and this is obviously a huge simplification to prove a point
// static variables since this is a class method
static NSTimeInterval _durationToUseWhenAsked;
static BOOL _isInsideAnimationBlock;

// Oversimplified example implementation of how it _could_ be done
+ (void)animateWithDuration:(NSTimeInterval)duration
                 animations:(void (^)(void))animations
{
    _durationToUseWhenAsked = duration;
    _isInsideAnimationBlock = YES;
    animations();
    _isInsideAnimationBlock = NO;
}

// Running the animations block is going to change a bunch of properties,
// which results in the delegate method being called for each property change
- (id<CAAction>)actionForLayer:(CALayer *)layer
                        forKey:(NSString *)event
{
    // Don't animate outside of an animation block
    if (!_isInsideAnimationBlock)
        return (id)[NSNull null]; // returning NSNull means: don't animate

    // Only animate certain properties
    if (![[[self class] arrayOfPropertiesThatSupportAnimations] containsObject:event])
        return (id)[NSNull null]; // returning NSNull means: don't animate

    CABasicAnimation *theAnimation = [CABasicAnimation animationWithKeyPath:event];
    theAnimation.duration = _durationToUseWhenAsked;

    // Get the value that is currently seen on screen
    id oldValue = [[layer presentationLayer] valueForKeyPath:event];
    theAnimation.fromValue = oldValue;

    // Only setting the fromValue means animating from that value to the model value
    return theAnimation;
}
Does it replace the implementation of property setters on CALayer with some kind of recording versions?
No (see above)
Or is the support for this property-change recording baked in somewhere deep inside CALayer?
Yes, sort of (see above)
Creating similar API yourself
To generalize the question a little bit, is it possible do create similar block-based API for recording state changes using some Objective C dark magic, or does this rely on knowing and having the access to the internals of the objects being changed in the block?
You can definitely create a similar block-based API if you want to provide your own animations based on property changes. If you look at the techniques I showed in my talk at NSConference for inspecting UIView animations (directly asking the layer for the actionForLayer:forKey: result, and using layerClass to create a layer that logs all addAnimation:forKey: information), then you should be able to learn enough about how the view uses the layer to create this abstraction.
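As a rough illustration of the logging technique mentioned above (a sketch, not the actual code from the slides; LoggingLayer and MyView are illustrative names):

// A layer subclass that logs every animation the view hands it
@interface LoggingLayer : CALayer
@end

@implementation LoggingLayer
- (void)addAnimation:(CAAnimation *)anim forKey:(NSString *)key
{
    NSLog(@"addAnimation: %@ forKey: %@", anim, key);
    [super addAnimation:anim forKey:key];
}
@end

@implementation MyView // a UIView subclass
+ (Class)layerClass
{
    return [LoggingLayer class];
}
@end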
I'm not sure if recording state changes is your end goal or not. If you only want to make your own animation API, then you shouldn't have to. If you really want to do it, you probably could, but there wouldn't be as much communication infrastructure (delegate methods and callbacks between the view and the layer) available to you as there is for animations.
David's answer is awesome. You should accept it as the definitive answer.
I do have a minor contribution. I created a markdown file in one of my github projects called "Sleuthing UIView Animations." (link) It goes into more detail on how you can watch the CAAnimation objects that the system creates in response to UIView animations. The project is called KeyframeViewAnimations. (link)
It also shows working code that logs the CAAnimations that are created when you submit UIView animations.
And, to give credit where credit is due, it was David who suggested the technique I use.
I'm using OpenGL ES (2.0) exclusively for my app. I'm trying to get UIMotionEffect to work with one of my OpenGL objects - where all I need are the "tilt" values.
UIInterpolatingMotionEffect can only be applied to a UIView, so a hack would be to apply UIInterpolatingMotionEffect to a UIView and grab the values from it per frame, without ever actually rendering that UIView. But that seems far too hack-ish to be the only solution.
I tried to subclass UIMotionEffect, but couldn't figure out how - (NSDictionary *)keyPathsAndRelativeValuesForViewerOffset:(UIOffset)viewerOffset worked (i.e. what calls it, and how to retrieve the values I need).
Any ideas on how could I use UIMotionEffect with OpenGL?
I finally tested the method suggested by @rickster: connect UIInterpolatingMotionEffect to my GLKView/GL view, passing custom-made keys, and just retrieve the values from there. Unfortunately, that idea royally fails.
First, the keys passed to UIInterpolatingMotionEffect must be animatable UIView properties; they can't be custom variables/properties. Considering most people wouldn't want their entire GL view to parallax with a single direction/intensity, this method will fail.
Second, you can't just retrieve the values. If you pass center.x as a key, UIInterpolatingMotionEffect won't appear to update center.x, since the changes are considered "animations". As such, you have to retrieve the animated value from the presentation layer, for example [[view.layer presentationLayer] center].
Final Notes: Just create a separate UIView on the main thread, add it to your UIViewController and systematically retrieve values from it (also on the main thread). I found that to be the simplest solution by far.
Also, since I needed to retrieve the values on a separate thread, I'm generally 1 value behind, as I dispatch to the main thread to update the current value while at the same time returning the last value set (make sure the variable is thread-safe/atomic).
EDIT
Since it's best to access the presentationLayer on the main thread, but my method is called from a thread that isn't the main thread, I dispatch to the main thread to retrieve the value and place it in an atomic property, then return the atomic property outside the block. This makes sure my secondary thread doesn't get blocked by the main thread, and I can still retrieve the values I need.
Here's the code I used to retrieve values from a thread that isn't the main thread:
@property (atomic) GLKVector2 currentOffset;
@property (atomic, readonly) GLKVector2 offset;

- (GLKVector2)offset
{
    dispatch_async(dispatch_get_main_queue(), ^{
        CGRect frame = [[self.view.layer presentationLayer] frame];
        self.currentOffset = (GLKVector2){-frame.origin.x, -frame.origin.y};
    });
    return self.currentOffset;
}
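On the GL side (running on the secondary thread), the offset can then be consumed once per frame; a hypothetical usage sketch, where parallaxScale is an assumed tuning constant:

GLKVector2 offset = self.offset; // generally one value behind, as noted above
GLKMatrix4 modelViewMatrix = GLKMatrix4MakeTranslation(offset.x * parallaxScale,
                                                       offset.y * parallaxScale,
                                                       0.0f);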
UIInterpolatingMotionEffect works by using key-value coding to set a value for any numeric property on a UIView. This can be any property you define, not just those built into the UIView class.
You need some kind of UIView to get your content on screen. If you're using OpenGL ES on iOS, you're (preferably) using GLKView, which is a UIView subclass that manages its own GL framebuffers and gives you a place to write GL drawing code to render into those framebuffers. (If you're not using GLKView, you're probably using some kind of custom UIView subclass that uses a CAEAGLLayer to render its contents.)
Either way, if you've got GL-rendered content on the screen already, you already have a UIView. If that view isn't your own custom subclass (whether a direct UIView subclass or GLKView subclass), you can make it one, and define your own custom properties on it. Then set up your UIInterpolatingMotionEffect to use those properties. In your view subclass, or whatever object is responsible for your GL rendering, read the values of those properties and use them to set up your scene. (For example, you could use them to set up a ModelView matrix, and pass that to a shader uniform.)
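A minimal sketch of the setup described here (parallaxX and MyGLView are illustrative names; note that the asker reported problems with custom keys in the answer above):

@interface MyGLView : GLKView
@property (nonatomic) CGFloat parallaxX; // read by the GL render code each frame
@end

// During view setup:
UIInterpolatingMotionEffect *effect =
    [[UIInterpolatingMotionEffect alloc] initWithKeyPath:@"parallaxX"
                                                    type:UIInterpolatingMotionEffectTypeTiltAlongHorizontalAxis];
effect.minimumRelativeValue = @(-50);
effect.maximumRelativeValue = @(50);
[myGLView addMotionEffect:effect];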
I'm having trouble finding the correct place to do my shader setup for an OpenGLES application using GLKView and GLKViewController.
It seems like viewDidLoad is a natural place to do this, but shader creation fails when I try to do this here. My setup is something like this:
//shader helper method
int setupShaders(const char* vShader, const char* fShader); //returns a program handle
//inside GLKViewController subclass
static int program;
-(void)viewDidLoad
{
    [super viewDidLoad];
    program = setupShaders(vsh, fsh); // program will be zero, indicating setup failure
}
I know the setup code works because it succeeds if I call it inside -(void)glkView:(GLKView *)view drawInRect:(CGRect)rect.
So I'm assuming OpenGL isn't fully initialized when -(void)viewDidLoad is called, or something has to be done to set the correct OpenGL context for the setup I'm trying to do; I just can't find any documentation on where or how to do the setup correctly.
You are right that the earliest place you can easily set up your shaders is in the drawRect method. This is because there must be a valid GL context current. Per the GLKView documentation:
Before calling its drawRect: method, the view makes its EAGLContext object the current OpenGL ES context and binds its framebuffer object to the OpenGL ES context as the target for rendering commands.
So, the easiest thing to do is hang onto some information, like the program handle, and only run the setup while it is still zero.
if (program == 0)
    program = setupShaders(vsh, fsh);
If you don't like this approach, you can consider initializing your GLKView with a context that you provide, or overriding bindDrawable. Or you could not use GLKView and do things manually...
There needs to be a current EAGLContext before you can call any OpenGL ES API, and that includes setup work like compiling shaders. GLKView makes its context current before invoking your drawRect: (or glkView:drawInRect:) method, but you're welcome to make it the current context at any time.
The view's context is current as of viewDidAppear: because that method is called after the first time the view draws itself. I'm not sure it's guaranteed to be current at that point, though -- there's no documented API contract that a GLKView's context will remain current after the end of drawing. So it's better to call [EAGLContext setCurrentContext:myContext] yourself whenever you want to do setup work for your OpenGL ES scene (even if you choose to do it in viewDidAppear:).
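For example, here is a sketch of doing the setup from the earlier question in viewDidLoad, making the context current first (assuming the same setupShaders helper and vsh/fsh sources as above):

- (void)viewDidLoad
{
    [super viewDidLoad];
    EAGLContext *context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
    GLKView *view = (GLKView *)self.view;
    view.context = context;
    // Make the context current before any GL calls, including shader compilation
    [EAGLContext setCurrentContext:context];
    program = setupShaders(vsh, fsh); // should now return a non-zero handle
}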
See the OpenGL ES Game template when you create a new Xcode project for an example of GLKView/GLKViewController setup.
So it turns out it works perfectly if you initialize from inside -(void)viewDidAppear. Jacob's solution works fine as well, but it seems slightly cleaner to me to use a callback rather than adding a conditional to the draw method.
I have an issue and here how it goes,
I have a view with a subview. The subview is loaded conditionally, only if the parent view's hidden property is set to YES,
something like [parentView setHidden:YES] and if ([parentView isHidden]).
I want to call a method when the orientation changes (that is the snippet cited above), but I have observed that shouldAutorotateToInterfaceOrientation is called 4 times during loading and 2 times during runtime. Since the method is called more than once, how can I implement the method call cleanly, given that Apple's existing method doesn't seem to give me an intuitive place to put my custom call?
If I were to hack this thing, it would be possible, but somebody might have a better idea before I resort to things that in the future would cause me more trouble than benefit.
TIA
Have you tried with
- (void)willAnimateRotationToInterfaceOrientation:(UIInterfaceOrientation)toInterfaceOrientation
                                         duration:(NSTimeInterval)duration
{
    // check here for your desired rotation
}
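For instance, the body could dispatch to your own methods based on the target orientation (handleRotationToLandscape and handleRotationToPortrait are hypothetical helpers):

- (void)willAnimateRotationToInterfaceOrientation:(UIInterfaceOrientation)toInterfaceOrientation
                                         duration:(NSTimeInterval)duration
{
    if (UIInterfaceOrientationIsLandscape(toInterfaceOrientation)) {
        [self handleRotationToLandscape];
    } else {
        [self handleRotationToPortrait];
    }
}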