I have added a CAAnimation to some view's layer, and it animates the view's position. While the animation runs, I want to animate an object in an OpenGL window. To do so, I somehow have to get live updates from the property being animated.
I tried
someView.layer.addObserver(self, forKeyPath: "position", options: [], context: nil)
But I only get updates once the animation completes. I also tried to add the observer to the presentation layer, without success.
Any ideas?
I was trying to observe (KVO) the transform of a CALayer during a CABasicAnimation, but it never worked for me, even when observing the presentation layer... The way I solved it in my situation was to use a CADisplayLink.
That lets you specify the frame interval you want the link to run with, and each time it fires you can directly check the transform of the layer you have been trying to get KVO updates from.
Works great for me.
So, something like this:
displayLink = CADisplayLink(target: self, selector: #selector(foo))
displayLink?.frameInterval = 1
displayLink?.add(to: .main, forMode: .default)
Inside foo() I would then check the transform on the layer's presentation() copy and use the information as needed...
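For example, a minimal sketch of the tick handler, where observedLayer and updateOpenGLObject are my own placeholders for the animated layer and the OpenGL drawing code:

@objc func foo() {
    // presentation() returns a copy of the layer that reflects the
    // in-flight animated values, which is what KVO fails to deliver
    guard let presentation = observedLayer.presentation() else { return }
    updateOpenGLObject(position: presentation.position,
                       transform: presentation.transform)
}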
I have a UITabBarController, and in one of its UIViewControllers I scroll a UICollectionView every 5 seconds using a Timer. Here is, in short, how I do it:
override func viewDidLoad() {
    super.viewDidLoad()
    configureTimer()
}

private func configureTimer() {
    slideTimer = Timer.scheduledTimer(timeInterval: 5, target: self, selector: #selector(scrollCollectionView), userInfo: nil, repeats: true)
}

@objc func scrollCollectionView() {
    collectionView.scrollToItem(at: someIndexPath, at: .centeredHorizontally, animated: true)
}
It works perfectly. But I think it has a big issue. Of course, I can open another screen from this UIViewController (for example, I can tap another tab or push another UIViewController). That means my UIViewController's view, containing the UICollectionView, disappears; in other words, viewDidDisappear will be called. But my timer still exists, and I hold a strong reference to it, so possibly there is a retain cycle. It keeps working, and every 5 seconds the scrollCollectionView method is called even though my view has disappeared. I don't know how, but iOS somehow handles it. In other words, it can modify a view even when it is not visible. How is that possible, and is it good practice? Of course, I could invalidate my timer in viewDidDisappear and start it in viewDidAppear. But I don't want to lose my timer's value and start it from zero again. Or maybe it is OK to invalidate my timer in deinit?
My question covers a pretty common situation. For instance, suppose I make a network request and then open another UIViewController. After the request finishes I should update the UI, but by then I am on another screen. Is it OK to let iOS modify UI that is not visible?
A couple of thoughts:
If the timer is updating the UI at some interval, you definitely should start it in viewDidAppear and stop it in viewDidDisappear. There’s no point in wasting resources updating a view that is not visible.
By doing this, you can solve your strong reference cycle, too.
In terms of “losing” your timer value and starting at zero, we generally would just save the time you're counting from or to and calculate the necessary value when restarting the timer later (see the sketch below).
We do this anyway, because you really shouldn't use timers to increment values: you're technically not assured that they'll fire with the frequency you expect.
All of that said, I don’t know what timer “value” you’re worried about losing in this example.
But definitely don't waste time updating a UI that is no longer visible. It's not scalable and blurs the distinction between the model (what you're counting to or from) and the UI (the update that happens every five seconds).
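A minimal sketch of that pattern (the startDate property and the elapsed-time calculation are illustrative, not part of the original code):

private var slideTimer: Timer?
private let startDate = Date() // survives appear/disappear cycles

override func viewDidAppear(_ animated: Bool) {
    super.viewDidAppear(animated)
    // Derive whatever running "value" you need from startDate rather than
    // counting timer ticks, then restart the timer.
    let elapsed = Date().timeIntervalSince(startDate)
    print("resuming after \(elapsed) seconds")
    slideTimer = Timer.scheduledTimer(timeInterval: 5, target: self,
                                      selector: #selector(scrollCollectionView),
                                      userInfo: nil, repeats: true)
}

override func viewDidDisappear(_ animated: Bool) {
    super.viewDidDisappear(animated)
    // invalidate() removes the timer from the run loop and releases
    // its strong reference to self, breaking the cycle
    slideTimer?.invalidate()
    slideTimer = nil
}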
In SpriteKit, is there a callback when a scene has completed its transition?
It doesn't appear that SKView's presentScene function has a callback.
The alternative is to have the scene manually notify the caller after the scene moves into view, but I was hoping there is a cleaner approach with a native callback.
presentScene has no known callback for when a scene has finished transitioning. Instead, use either notifications or a delegate of your own, called from the outgoing scene's willMove(from:), to achieve the desired effect:
override func willMove(from view: SKView) {
    NotificationCenter.default.post(name: Notification.Name("TRANSITIONCOMPLETE"), object: nil)
    // or create a delegate using protocols, assign the delegate, and call it
    transitionDelegate?.finishedTransition()
}
Note: you must use the outgoing scene's willMove(from:); it is the last thing to happen during the transition. didMove(to:) on the incoming scene is the start of the transition. A sketch of the delegate variant follows.
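A minimal Swift sketch of the delegate variant (the protocol name, the transitionDelegate property, and the OutgoingScene class are my own, for illustration):

import SpriteKit

protocol SceneTransitionDelegate: AnyObject {
    func finishedTransition()
}

class OutgoingScene: SKScene {
    weak var transitionDelegate: SceneTransitionDelegate?

    override func willMove(from view: SKView) {
        // Called on the outgoing scene as the last step of the transition
        transitionDelegate?.finishedTransition()
    }
}

Whoever presents the scenes assigns itself as transitionDelegate before calling presentScene.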
There are two ways to get a CADisplayLink in iOS. The direct one is to use the initializer:
let displaylink = CADisplayLink(target: self,
selector: #selector(step))
Returns a new display link.
This way is used in Apple's example: Listing 1.
But there is another way: you can get one from UIScreen:
let displayLink = UIScreen.main.displayLink(withTarget: self,
selector: #selector(step))
Returns a display link object for the current screen.
You use display link objects to synchronize your drawing code to the screen’s refresh rate. The newly constructed display link retains the target.
The documentation is very sparse on details, but the second way looks a little more specialized. Maybe someone who has experience with CADisplayLink can tell which way of creating it is preferred.
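Whichever way you create it, the resulting link is driven the same way; a minimal sketch of the step callback and run loop registration (the print is just illustrative):

@objc func step(displaylink: CADisplayLink) {
    // timestamp is the time of the last frame; targetTimestamp is the
    // expected time of the next one
    print(displaylink.targetTimestamp - displaylink.timestamp)
}

// after creating the link with either API:
displaylink.add(to: .current, forMode: .default)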
I am using this function to detect a screenshot in Swift:
let mainQueue = OperationQueue.main
NotificationCenter.default.addObserver(forName: UIApplication.userDidTakeScreenshotNotification,
object: nil,
queue: mainQueue) { notification in
print("[!]detected screenshot")
}
It is located in viewDidLoad(), and each time I access the view controller it adds another screenshot observer. So if I were to access the view controller twice in the same session, the closure would execute twice when I take a screenshot; if I visited the view controller four times, it would execute four times. How do I keep this observer from being registered again on each view controller session? Thank you for the help.
The problem is that seemingly every example of a screenshot observer on the internet uses the block-based API with the main queue. This is kind of misleading, since it means the observer is added in a more general context instead of the view controller context, which is what you want.
The way to do it would be to add the following to viewDidLoad (or viewWillAppear, or whichever fits better in your navigation flow):
NotificationCenter.default.addObserver(self, selector: #selector(didTakeScreenshot), name: UIApplication.userDidTakeScreenshotNotification, object: nil)
(You should replace didTakeScreenshot with your desired method's name)
And then, on deinit (or viewDidDisappear...), you should remove the observer:
NotificationCenter.default.removeObserver(self)
(for removing all observers)
or
NotificationCenter.default.removeObserver(self, name: UIApplication.userDidTakeScreenshotNotification, object: nil)
(for removing that specific observer only)
I know this was asked a long time ago, but here's the answer, just in case somebody comes across the same problem. The simplest answer is to override viewDidAppear and subscribe to your notification there, and to override viewWillDisappear and unsubscribe there. That way you subscribe whenever you arrive at the view controller and unsubscribe whenever you leave it. In my opinion, it is not good to subscribe in viewDidLoad. The reason is that viewDidLoad is not called every time the view appears; UIKit only calls it when the view controller's view is first loaded. So if you unsubscribe when the view disappears but don't re-subscribe when it appears again, you will not be subscribed, because viewDidLoad will not be called.
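A minimal sketch of that appear/disappear pairing, using the selector-based API from the answer above:

override func viewDidAppear(_ animated: Bool) {
    super.viewDidAppear(animated)
    NotificationCenter.default.addObserver(self,
                                           selector: #selector(didTakeScreenshot),
                                           name: UIApplication.userDidTakeScreenshotNotification,
                                           object: nil)
}

override func viewWillDisappear(_ animated: Bool) {
    super.viewWillDisappear(animated)
    NotificationCenter.default.removeObserver(self,
                                              name: UIApplication.userDidTakeScreenshotNotification,
                                              object: nil)
}

@objc func didTakeScreenshot() {
    print("[!]detected screenshot")
}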
Ever since their introduction in iOS 4, I have been wondering about the internal implementation of UIView's block-based animation methods. In particular, I would like to understand what mystical features of Objective-C are used there to capture all the relevant layer state changes before and after execution of the animation block.
Observing the black-box implementation, I gather that it needs to capture the before-state of all layer properties modified in the animation block in order to create all the relevant CAAnimations. I guess it does not snapshot the whole view hierarchy, as that would be horribly inefficient. The animation block is an opaque blob of code at runtime, so I don't think it can be analyzed directly. Does it replace the implementation of property setters on CALayer with some kind of recording versions? Or is the support for this property-change recording baked in somewhere deep inside CALayer?
To generalize the question a little bit: is it possible to create a similar block-based API for recording state changes using some Objective-C dark magic, or does this rely on knowing and having access to the internals of the objects being changed in the block?
It is actually a very elegant solution that is built around the fact that the view is the layer's delegate and that stand-alone layers implicitly animate on property changes.
As it happens, I gave a BLITZ talk about this at NSConference just a couple of days ago; I posted my slides on GitHub and tried to write down more or less what I said in the presenter notes.
That said: it is a very interesting question that I don't see asked very often. It may be a bit too broad, but I really like curiosity.
UIView animations existed before iOS 4
Ever since their introduction in iOS 4, I have been wondering about the internal implementation of UIView's block-based animation methods.
UIView animations existed before iOS 4, but in a different style (one that is no longer recommended because it is more cumbersome to use). For example, animating the position and color of a view with a delay could be done like this. Disclaimer: I did not run this code, so it may contain bugs.
// Setup
static void *myAnimationContext = &myAnimationContext;
[UIView beginAnimations:@"My Animation ID" context:myAnimationContext];
// Configure
[UIView setAnimationDuration:1.0];
[UIView setAnimationDelay:0.25];
[UIView setAnimationCurve:UIViewAnimationCurveEaseInOut];
// Make changes
myView.center = newCenter;
myView.backgroundColor = newColor;
// Commit
[UIView commitAnimations];
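For comparison, the block-based API introduced in iOS 4 collapses all of that into a single call; here is the same animation written in Swift (the Objective-C equivalent is animateWithDuration:delay:options:animations:completion:):

UIView.animate(withDuration: 1.0,
               delay: 0.25,
               options: .curveEaseInOut,
               animations: {
                   myView.center = newCenter
                   myView.backgroundColor = newColor
               },
               completion: nil)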
The view-layer synergy is very elegant
In particular, I would like to understand what mystical features of Objective-C are used there to capture all the relevant layer state changes before and after execution of the animation block.
It is actually the other way around. The view is built on top of the layer, and they work together very closely. When you set a property on the view, it sets the corresponding property on the layer. You can for example see that the view doesn't even have its own variable for the frame, bounds, or position.
Observing the black-box implementation, I gather that it needs to capture the before-state of all layer properties modified in the animation block, to create all the relevant CAAnimations.
It does not need to do that, and this is where it all gets very elegant. Whenever a layer property changes, the layer looks for the action (a more general term for an animation) to execute. Since setting most properties on a view actually sets the property on the layer, you are implicitly setting a bunch of layer properties.
The first place the layer goes looking for an action is the layer's delegate (it is documented behaviour that the view is the layer's delegate). This means that when a layer property changes, the layer asks the view to provide an animation object for each property change. So the view doesn't need to keep track of any state; the layer has the state, and the layer asks the view to provide an animation when the properties change.
Actually, that's not entirely true. The view needs to keep track of some state, such as whether you are inside an animation block, what duration to use for the animation, etc.
You could imagine that the API looks something like this.
Note: I don't know what the actual implementation does and this is obviously a huge simplification to prove a point
// static variables since this is a class method
static NSTimeInterval _durationToUseWhenAsked;
static BOOL _isInsideAnimationBlock;

// Oversimplified example implementation of how it _could_ be done
+ (void)animateWithDuration:(NSTimeInterval)duration
                 animations:(void (^)(void))animations
{
    _durationToUseWhenAsked = duration;
    _isInsideAnimationBlock = YES;
    animations();
    _isInsideAnimationBlock = NO;
}

// Running the animations block is going to change a bunch of properties,
// which results in the delegate method being called for each property change
- (id<CAAction>)actionForLayer:(CALayer *)layer
                        forKey:(NSString *)event
{
    // Don't animate outside of an animation block
    if (!_isInsideAnimationBlock)
        return (id)[NSNull null]; // returning NSNull means "don't animate"

    // Only animate certain properties
    if (![[[self class] arrayOfPropertiesThatSupportAnimations] containsObject:event])
        return (id)[NSNull null]; // returning NSNull means "don't animate"

    CABasicAnimation *theAnimation = [CABasicAnimation animationWithKeyPath:event];
    theAnimation.duration = _durationToUseWhenAsked;

    // Get the value that is currently seen on screen
    id oldValue = [[layer presentationLayer] valueForKeyPath:event];
    theAnimation.fromValue = oldValue;
    // Only setting the from value means animating from that value to the model value
    return theAnimation;
}
Does it replace the implementation of property setters on CALayer with some kind of recording versions?
No (see above).
Or is the support for this property-change recording baked in somewhere deep inside CALayer?
Yes, sort of (see above).
Creating a similar API yourself
To generalize the question a little bit: is it possible to create a similar block-based API for recording state changes using some Objective-C dark magic, or does this rely on knowing and having access to the internals of the objects being changed in the block?
You can definitely create a similar block-based API if you want to provide your own animations based on property changes. If you look at the techniques I showed in my NSConference talk for inspecting UIView animations (directly asking the layer for actionForLayer:forKey: and using layerClass to create a layer that logs all addAnimation:forKey: information), you should be able to learn enough about how the view uses the layer to create this abstraction.
I'm not sure if recording state changes is your end goal or not. If you only want to build your own animation API, then you shouldn't have to. If you really want to, you probably could, but there wouldn't be as much communication infrastructure (delegate methods and callbacks between the view and the layer) available to you as there is for animations.
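A minimal Swift sketch of that logging-layer technique (the class names are mine, for illustration):

import UIKit

class LoggingLayer: CALayer {
    override func add(_ anim: CAAnimation, forKey key: String?) {
        // Every animation the system creates for this layer passes through here
        print("add(_:forKey:) called with \(anim) for key \(key ?? "nil")")
        super.add(anim, forKey: key)
    }
}

class InspectableView: UIView {
    // Back the view with the logging layer so system-created animations are visible
    override class var layerClass: AnyClass { LoggingLayer.self }
}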
David's answer is awesome. You should accept it as the definitive answer.
I do have a minor contribution. I created a markdown file in one of my GitHub projects called "Sleuthing UIView Animations." (link) It goes into more detail on how you can watch the CAAnimation objects that the system creates in response to UIView animations. The project is called KeyframeViewAnimations. (link)
It also shows working code that logs the CAAnimations that are created when you submit UIView animations.
And, to give credit where credit is due, it was David who suggested the technique I use.