Modern user interfaces, especially MacOS and iOS, have lots of “casual” animation -- views that appear through brief animated sequences largely orchestrated by the system.
[[myNewView animator] setFrame:rect];
Occasionally, we might have a slightly more elaborate animation, something with an animation group and a completion block.
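On macOS, for example, such a group might look like the following sketch (`myNewView`, `rect`, and the completion hook are stand-ins for whatever the app actually animates):

```objectivec
// Hypothetical grouped animation with a completion block (AppKit).
[NSAnimationContext runAnimationGroup:^(NSAnimationContext *context) {
    context.duration = 0.25; // seconds
    [[myNewView animator] setFrame:rect];
    [[myNewView animator] setAlphaValue:1.0];
} completionHandler:^{
    // Runs once both animations above have finished.
    [self noteAppearanceAnimationDidFinish]; // hypothetical hook
}];
```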
Now, I can imagine bug reports like this:
Hey -- that nice animation when myNewView appears isn't happening in the new release!
So, we'd want unit tests to do some simple things:
confirm that the animation happens
check the duration of the animation
check the frame rate of the animation
But of course all these tests have to be simple to write and mustn't make the code worse; we don’t want to spoil the simplicity of the implicit animations with a ton of test-driven complexity!
So, what is a TDD-friendly approach to implementing tests for casual animations?
Justifications for unit testing
Let's take a concrete example to illustrate why we'd want a unit test. Let's say we have a view that contains a bunch of WidgetViews. When the user makes a new Widget by double-clicking, it’s supposed to initially appear tiny and transparent, expanding to full size during the animation.
Now, it's true that we don't want to unit test system behavior. But here are some things that might go wrong because we fouled something up:
The animation is called on the wrong thread, and doesn't get drawn. But in the course of the animation, we call setNeedsDisplay, so eventually the widget gets drawn.
We're recycling disused widgets from a pool of discarded WidgetControllers. NEW WidgetViews are initially transparent, but some views in the recycle pool are still opaque. So the fade doesn't happen.
Some additional animation starts on the new widget before the animation finishes. As a result, the widget begins to appear, and then starts jerking and flashing briefly before it settles down.
You made a change to the widget's drawRect: method, and the new drawRect is slow. The old animation was fine, but now it's a mess.
All of these are going to show up in your support log as, "The create-widget animation isn't working anymore." And my experience has been that, once you get used to an animation, it’s really hard for the developer to notice right away that an unrelated change has broken the animation. That's a recipe for unit testing, right?
The animation is called on the wrong thread, and doesn't get drawn.
But in the course of the animation, we call setNeedsDisplay, so
eventually the widget gets drawn.
Don't unit test for this directly. Instead use assertions and/or raise exceptions when animation is on the incorrect thread. Unit test that the assertion will raise an exception appropriately. Apple does this aggressively with their frameworks. It keeps you from shooting yourself in the foot. And you will know immediately when you are using an object outside of valid parameters.
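A minimal sketch of that approach (the class, method, and fixture names here are hypothetical, not from the question; the STAssert macros match the SenTestingKit style used elsewhere in this thread):

```objectivec
// Guard the animation entry point with an assertion.
- (void)animateWidgetAppearance:(WidgetView *)widget {
    NSAssert([NSThread isMainThread],
             @"animateWidgetAppearance: must be called on the main thread");
    // ... kick off the implicit animation ...
}

// And unit test that the assertion fires when the rule is broken.
- (void)testAnimatingOffMainThreadRaises {
    dispatch_sync(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        STAssertThrows([controller animateWidgetAppearance:widget],
                       @"expected an assertion when animating off the main thread");
    });
}
```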
We're recycling disused widgets from a pool of discarded
WidgetControllers. NEW WidgetViews are initially transparent, but some
views in the recycle pool are still opaque. So the fade doesn't
happen.
This is why you see methods like dequeueReusableCellWithIdentifier in UITableView. You need a public method to get the reused WidgetView, which is the perfect opportunity to test that properties like alpha are reset appropriately.
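A sketch of what that pair might look like (all names here are hypothetical; the point is that the reset happens in one public, testable place):

```objectivec
// Pool accessor that resets recycled views to the "freshly created" state.
- (WidgetView *)dequeueReusableWidgetView {
    WidgetView *view = [self.recyclePool anyObject];
    if (view) {
        [self.recyclePool removeObject:view];
    } else {
        view = [[WidgetView alloc] init];
    }
    view.alphaValue = 0.0; // new widgets must start transparent for the fade-in
    return view;
}

// Unit test: a recycled, previously opaque view must come back transparent.
- (void)testDequeuedWidgetViewStartsTransparent {
    WidgetView *used = [controller dequeueReusableWidgetView];
    used.alphaValue = 1.0;               // simulate a widget that was shown
    [controller recycleWidgetView:used]; // hypothetical return-to-pool method
    WidgetView *reused = [controller dequeueReusableWidgetView];
    STAssertEquals(reused.alphaValue, (CGFloat)0.0, @"recycled widgets must be reset");
}
```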
Some additional animation starts on the new widget before the
animation finishes. As a result, the widget begins to appear, and then
starts jerking and flashing briefly before it settles down.
Same as number 1. Use assertions to impose your rules on your code. Unit test that the assertions can be triggered.
You made a change to the widget's drawRect: method, and the new
drawRect is slow. The old animation was fine, but now it's a mess.
A unit test can be just timing a method. I often do this with calculations to ensure they stay within a reasonable time limit.
- (void)testAnimationTime
{
    NSDate *start = [NSDate date];
    NSView *view = [[NSView alloc] init];
    for (int i = 0; i < 10; i++)
    {
        [view display];
    }
    NSTimeInterval timeSpent = -[start timeIntervalSinceNow];
    if (timeSpent > 1.5)
    {
        STFail(@"View took %f seconds to display 10 times", timeSpent);
    }
}
I can read your question two ways, so I want to separate those.
If you are asking, "How can I unit test that the system actually performs the animation that I request?", I would say it's not worth it. My experience tells me it is a lot of pain for not a lot of gain, and in this kind of case the test would be brittle. I've found that in most cases where we call operating system APIs, it provides the most value to assume that they work and will continue to work until proven otherwise.
If you are asking, "How can I unit test that my code requests the correct animation?", then that's more interesting. You'll want a framework for test doubles like OCMock. Or you can use Kiwi, which is my favorite testing framework and has stubbing and mocking built in.
With Kiwi, you can do something like the following, for example:
id fakeView = [NSView nullMock];
id fakeAnimator = [NSView nullMock];
[fakeView stub:@selector(animator) andReturn:fakeAnimator];
CGRect newFrame = {.origin = {2,2}, .size = {11,44}};
[[[fakeAnimator should] receive] setFrame:theValue(newFrame)];
[myController enterWasClicked:nil];
You don't want to actually wait for the animation; that would take the time the animation takes to run. If you have a few thousand tests, this can add up.
More effective is to stub out the UIView class method in a category so that it takes effect immediately. Then include that file in your test target (but not your app target) so that the category is compiled into your tests only. We use:
#import "UIView+SpecFlywheel.h"

@implementation UIView (SpecFlywheel)

#pragma mark - Animation
+ (void)animateWithDuration:(NSTimeInterval)duration animations:(void (^)(void))animations completion:(void (^)(BOOL finished))completion {
    if (animations)
        animations();
    if (completion)
        completion(YES);
}

@end
The above simply executes the animation block immediately, and the completion block immediately if it's provided as well.
Related
Ever since their introduction in iOS 4, I have been wondering about the internal implementation of the UIView's block-based animation methods. In particular I would like to understand what mystical features of Objective C are used there to capture all the relevant layer state changes before and after execution of the animation block.
Observing the black-box implementation, I gather that it needs to capture the before-state of all layer properties modified in the animation block to create all the relevant CAAnimations. I guess it does not snapshot the whole view hierarchy, as that would be horribly inefficient. The animation block is an opaque blob of code at runtime, so I don't think it can analyze that directly. Does it replace the implementation of property setters on CALayer with some kind of recording versions? Or is the support for this property change recording baked in somewhere deep inside the CALayers?
To generalize the question a little bit: is it possible to create a similar block-based API for recording state changes using some Objective-C dark magic, or does this rely on knowing and having access to the internals of the objects being changed in the block?
It is actually a very elegant solution that is built around the fact that the view is the layer's delegate and that stand-alone layers implicitly animate on property changes.
It just happens to be that I gave a BLITZ talk about this at NSConference just a couple of days ago and I posted my slides on GitHub and tried to write down more or less what I said in the presenter notes.
That said: it is a very interesting question that I don't see asked very often. It may be a bit too broad, but I really like curiosity.
UIView animations existed before iOS 4
Ever since their introduction in iOS 4, I have been wondering about the internal implementation of the UIView's block-based animation methods.
UIView animations existed before iOS 4 but in a different style (one that is no longer recommended because it is more cumbersome to use). For example, animating the position and color of a view with a delay could be done like this. Disclaimer: I did not run this code, so it may contain bugs.
// Setup
static void *myAnimationContext = &myAnimationContext;
[UIView beginAnimations:@"My Animation ID" context:myAnimationContext];
// Configure
[UIView setAnimationDuration:1.0];
[UIView setAnimationDelay:0.25];
[UIView setAnimationCurve:UIViewAnimationCurveEaseInOut];
// Make changes
myView.center = newCenter;
myView.backgroundColor = newColor;
// Commit
[UIView commitAnimations];
The view-layer synergy is very elegant
In particular I would like to understand what mystical features of Objective C are used there to capture all the relevant layer state changes before and after execution of the animation block.
It is actually the other way around. The view is built on top of the layer and they work together very closely. When you set a property on the view, it sets the corresponding property on the layer. You can, for example, see that the view doesn't even have its own variables for the frame, bounds, or position.
Observing the black-box implementation, I gather that it needs to capture the before-state of all layer properties modified in the animation block to create all the relevant CAAnimations.
It does not need to do that and this is where it all gets very elegant. Whenever a layer property changes, the layer looks for the action (a more general term for an animation) to execute. Since setting most properties on a view actually sets the property on the layer, you are implicitly setting a bunch of layer properties.
The first place the layer goes looking for an action is by asking the layer's delegate (it is documented behaviour that the view is the layer's delegate). This means that when a layer property changes, the layer asks the view to provide an animation object for each property change. So the view doesn't need to keep track of any state, since the layer has the state and asks the view to provide an animation when the properties change.
Actually, that's not entirely true. The view needs to keep track of some state, such as: whether you are inside an animation block or not, what duration to use for the animation, etc.
You could imagine that the API looks something like this.
Note: I don't know what the actual implementation does and this is obviously a huge simplification to prove a point
// Static variables, since this is a class method
static NSTimeInterval _durationToUseWhenAsked;
static BOOL _isInsideAnimationBlock;

// Oversimplified example implementation of how it _could_ be done
+ (void)animateWithDuration:(NSTimeInterval)duration
                 animations:(void (^)(void))animations
{
    _durationToUseWhenAsked = duration;
    _isInsideAnimationBlock = YES;
    animations();
    _isInsideAnimationBlock = NO;
}

// Running the animations block is going to change a bunch of properties,
// which results in the delegate method being called for each property change
- (id<CAAction>)actionForLayer:(CALayer *)layer
                        forKey:(NSString *)event
{
    // Don't animate outside of an animation block
    if (!_isInsideAnimationBlock)
        return (id)[NSNull null]; // returning NSNull means "don't animate"

    // Only animate certain properties
    if (![[[self class] arrayOfPropertiesThatSupportAnimations] containsObject:event])
        return (id)[NSNull null]; // returning NSNull means "don't animate"

    CABasicAnimation *theAnimation = [CABasicAnimation animationWithKeyPath:event];
    theAnimation.duration = _durationToUseWhenAsked;

    // Get the value that is currently seen on screen
    id oldValue = [[layer presentationLayer] valueForKeyPath:event];
    theAnimation.fromValue = oldValue;
    // Setting only the from value means animating from that value to the model value
    return theAnimation;
}
Does it replace the implementation of property setters on CALayer with some kind of recording versions?
No (see above)
Or is the support for this property change recording baked in somewhere deep inside the CALayers?
Yes, sort of (see above)
Creating similar API yourself
To generalize the question a little bit: is it possible to create a similar block-based API for recording state changes using some Objective-C dark magic, or does this rely on knowing and having access to the internals of the objects being changed in the block?
You can definitely create a similar block based API if you want to provide your own animations based on property changes. If you look at the techniques I showed in my talk at NSConference for inspecting UIView animations (directly asking the layer for the actionForLayer:forKey: and using layerClass to create a layer that logs all addAnimation:forKey: information) then you should be able to learn enough about how the view is using the layer to create this abstraction.
I'm not sure if recording state changes is your end goal or not. If you only want to make your own animation API, then you shouldn't have to. If you really want to do it, you probably could, but there wouldn't be as much communication infrastructure (delegate methods and callbacks between the view and the layer) available to you as there is for animations.
David's answer is awesome. You should accept it as the definitive answer.
I do have a minor contribution. I created a markdown file in one of my github projects called "Sleuthing UIView Animations." (link) It goes into more detail on how you can watch the CAAnimation objects that the system creates in response to UIView animations. The project is called KeyframeViewAnimations. (link)
It also shows working code that logs the CAAnimations that are created when you submit UIView animations.
And, to give credit where credit is due, it was David who suggested the technique I use.
This is probably an easy answer, but I cannot find any information on it anywhere. I have an animation that transforms the x value of a label. If a certain task completes, the animation stops early and the completion action occurs. With this in mind, is there a way I can use the animation duration to determine if the timer ran out first or if the task completed?
I have a boolean that I was using to do this, called taskComplete, but when I reset the view for the next level the completion block sees the boolean as false and runs the code. For example, is there a way in Xcode to do something like:
completion:^(BOOL finished) {
    // boolean is false and the animation has lasted the requested duration or longer
    if (!taskComplete && animationDuration > animationTimer) {
        // do this
    }
}
All help is appreciated thank you!
There is a way to retrieve the current animation status, though I have to say you are not supposed to do it (but if you need it, what can you do :)
CALayer has a property called presentationLayer (docs):
The layer object returned by this method provides a close
approximation of the layer that is currently being displayed onscreen.
While an animation is in progress, you can retrieve this object and
use it to get the current values for those animations.
So on that layer you can pick whatever attribute you need and run your condition against it. You can also add KVO to track any changes (accessed through view.layer.presentationLayer.attribute).
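For example, a sketch of reading the in-flight value (assuming `label` is the animating view from the question and `targetX` is the destination x value):

```objectivec
// The presentation layer reflects what is currently on screen,
// not the final (model) value set when the animation started.
CALayer *presentation = label.layer.presentationLayer;
CGFloat currentX = presentation.position.x;
if (currentX >= targetX) {
    // the animation has visually reached (or passed) the target
}
```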
Other than that, you would have to use Core Animation or Facebook's POP framework to track changes in greater detail.
Hope it helps!
Edit: I forgot to mention that if you need to know the time, you can always calculate it from the current value and the start/end values as ((currentValue - startValue) / (endValue - startValue)) * animationTime, so there is no need to track it separately.
As @Sulthan stated, you should never base calculations off the duration of an animation. With this in mind, I opened another question on the same subject matter here:
https://stackoverflow.com/questions/31791748/objective-c-animation-not-stopping
Is there any method that will execute prior to - (void)willAnimateRotationToInterfaceOrientation:(UIInterfaceOrientation)toInterfaceOrientation duration:(NSTimeInterval)duration?
This is very important to me because it seems like the view coordinate system switches immediately before this method is executed. I am trying to execute a method immediately before the system has decided to rotate the device, so something along the lines of - (BOOL)shouldAnimateRotationToInterfaceOrientation:(UIInterfaceOrientation)toInterfaceOrientation would seem to be the perfect place to execute such a method (but it doesn't seem to exist in the documentation).
Thank you!
The method you are looking for is willRotateToInterfaceOrientation:duration:. The bounds have not been transformed at the time it is called.
But:
These methods are deprecated in iOS 8 (and the entire "rotation" model is completely changed), so don't become reliant upon them.
In my view it would be better to ask yourself why you think you need the view coordinate system in this way. It would be better to position things using constraints that are independent of such considerations.
I need to mirror a UIWebView's CALayers to a smaller CALayer. The smaller CALayer is essentially a pip of the larger UIWebView. I'm having difficulty in doing this. The only thing that comes close is CAReplicatorLayer, but given the original and the copy have to have CAReplicatorLayer as a parent, I cannot split the original and copy on different screens.
An illustration of what I'm trying to do:
The user needs to be able to interact with the smaller CALayer and both need to be in sync.
I've tried doing this with renderInContext and CADisplayLink. Unfortunately there is some lag/stutter because it's trying to re-draw every frame, 60 times a second. I need a way to do the mirroring without re-drawing on each frame, unless something has actually changed. So I need a way of knowing when the CALayer (or child CALayers) become dirty.
I cannot simply have two UIWebViews because the two pages may be different (timing is off, different background, etc.). I have no control over the web page being displayed. I also cannot mirror the entire iPad screen, as there are other elements on the screen that should not show on the external display.
Both the larger CALayer and smaller "pip" CALayer need to match smoothly frame-for-frame in iOS 6. I do not need to support earlier versions.
The solution needs to be app-store passable.
As written in the comments, since the main need is to know WHEN to update the layer (and not how), I moved my original answer below the "OLD ANSWER" line and added what we discussed in the comments:
First (100% Apple Review Safe ;-)
You can take periodic "screenshots" of your original UIView and compare the resulting NSData (old and new) --> if the data is different, the layer content changed. There is no need to compare full-resolution screenshots; you can do it with smaller ones for better performance.
Second: performance friendly and "theoretically" review safe... but not sure :-/
At this link http://www.lombax.it/documents/DirtyLayer.zip you can find a sample project that alert you every time the UIWebView layer becomes dirty ;-)
I try to explain how I arrived to this code:
The main goal is to understand when TileLayer (a private subclass of CALayer used by UIWebView) becomes dirty.
The problem is that you can't access it directly. But you can use method swizzling to change the behavior of the layerSetNeedsDisplay: method in every CALayer and its subclasses.
You must be sure to avoid a radical change in the original behavior, and do only the necessary to add a "notification" when the method is called.
When you have successfully detected each layerSetNeedsDisplay: call, the only remaining thing is to work out which CALayer is involved --> if it's the internal UIWebView TileLayer, we trigger an "isDirty" notification.
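A minimal sketch of such a swizzle, using plain -setNeedsDisplay for illustration (the category name and notification name are mine; the linked project hooks a different method and does additional filtering):

```objectivec
#import <objc/runtime.h>
#import <QuartzCore/QuartzCore.h>

@implementation CALayer (DirtyWatch)

+ (void)load {
    // Exchange the implementations once, when the category is loaded.
    Method original = class_getInstanceMethod(self, @selector(setNeedsDisplay));
    Method swizzled = class_getInstanceMethod(self, @selector(dw_setNeedsDisplay));
    method_exchangeImplementations(original, swizzled);
}

- (void)dw_setNeedsDisplay {
    [self dw_setNeedsDisplay]; // after the exchange, this calls the original
    [[NSNotificationCenter defaultCenter] postNotificationName:@"DWLayerDidBecomeDirty"
                                                        object:self];
}

@end
```

The original behavior is preserved; the swizzled method only adds a notification after calling through.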
But we can't iterate through the UIWebView content and find the TileLayer; for example, simply using "isKindOfClass:[TileLayer class]" will surely get you a rejection (Apple uses a static analyzer to check the use of private API). What can you do?
Something tricky like...for example...compare the involved layer size (the one that is calling layerSetNeedsDisplay:) with the UIWebView size? ;-)
Moreover, sometimes the UIWebView replaces the child TileLayer with a new one, so you have to perform this check multiple times.
Last thing: layerSetNeedsDisplay: is not always called when you simply scroll the UIWebView (if the layer is already built), so you have to use UIWebViewDelegate to intercept the scrolling / zooming.
You will find that method swizzling is the reason for rejection in some apps, but it has always been motivated with "you changed the behavior of an object". In this case you don't change the behavior of anything; you simply intercept when a method is called.
I think that you can give it a try or contact Apple Support to check if it's legal, if you are not sure.
OLD ANSWER
I'm not sure this is performance friendly enough; I tried it only with both views on the same device and it works pretty well... you should try it using AirPlay.
The solution is quite simple: you take a "screenshot" of the UIWebView / MKMapView using UIGraphicsGetImageFromCurrentImageContext. You do this 30-60 times a second and copy the result into a UIImageView (visible on the second display; you can move it wherever you want).
To detect whether the view changed and avoid traffic on the wireless link, you can compare the two UIImages (the old frame and the new frame) byte by byte, and set the new one only if it's different from the previous. (Yeah, it works! ;-)
The only thing I didn't manage this evening is to make this comparison fast: if you look at the sample code attached, you'll see that the comparison is really CPU intensive (because it uses UIImagePNGRepresentation() to convert the UIImage to NSData) and makes the whole app slow. If you don't use the comparison (copying every frame), the app is fast and smooth (at least on my iPhone 5).
But I think there are many possible ways to solve it... for example, making the comparison every 4-5 frames, or creating the NSData in the background.
I attach a sample project: http://www.lombax.it/documents/ImageMirror.zip
In the project the frame comparison is disabled (the if is commented out)
I attach the code here for future reference:
// here you start a timer at 50 fps
// the timer is started on a background thread to avoid blocking it when you scroll the webview
- (IBAction)enableMirror:(id)sender {
    dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0ul); // 0ul --> unsigned long
    dispatch_async(queue, ^{
        // 0.02f --> 50 fps
        NSTimer __unused *timer = [NSTimer scheduledTimerWithTimeInterval:0.02f target:self selector:@selector(copyImageIfNeeded) userInfo:nil repeats:YES];
        // need to start a run loop, otherwise the thread stops
        CFRunLoopRun();
    });
}
// this method creates a UIImage with the content of the given view
- (UIImage *)imageWithView:(UIView *)view
{
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.opaque, 0.0);
    [view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return img;
}

// the method called by the timer
- (void)copyImageIfNeeded
{
    // this method is called from a background thread, so the code before the dispatch is executed in background
    UIImage *newImage = [self imageWithView:self.webView];

    // the copy is made only if the two images are really different (compared byte by byte)
    // this comparison method is cpu intensive
    // UNCOMMENT THE IF AND THE {} to enable the frame comparison
    //if (!([self image:self.mirrorView.image isEqualTo:newImage]))
    //{
        // this must be called on the main queue because it updates the user interface
        dispatch_queue_t queue = dispatch_get_main_queue();
        dispatch_async(queue, ^{
            self.mirrorView.image = newImage;
        });
    //}
}

// method to compare the two images - not performance friendly
// it can be optimized: you can "store" the old image's NSData and avoid
// converting it again and again until it changes
// you could even try to generate the NSData in the background when the frame
// is created
- (BOOL)image:(UIImage *)image1 isEqualTo:(UIImage *)image2
{
    NSData *data1 = UIImagePNGRepresentation(image1);
    NSData *data2 = UIImagePNGRepresentation(image2);
    return [data1 isEqual:data2];
}
I think your idea of using CADisplayLink is good. The main problem is that you're trying to refresh every frame. You can use the frameInterval property to decrease the frame rate automatically. Alternatively, you can use the timestamp property to know when the last update happened.
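A sketch of that throttling (frameInterval was the API at the time; iOS 10 later replaced it with preferredFramesPerSecond; the method names here are hypothetical):

```objectivec
- (void)startMirroring {
    CADisplayLink *link = [CADisplayLink displayLinkWithTarget:self
                                                      selector:@selector(refreshMirror:)];
    link.frameInterval = 3; // fire on every 3rd frame: ~20 fps on a 60 Hz display
    [link addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSRunLoopCommonModes];
}

- (void)refreshMirror:(CADisplayLink *)link {
    // link.timestamp is the display time of the last frame; compare it with
    // the time of the last real content change to skip redundant redraws.
}
```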
Another option that might just work: to know if the layers are dirty, why don't you have an object be the delegate of all the layers, which would get its drawLayer:inContext: triggered whenever each layer needs drawing? Then just update the other layers accordingly.