Is it possible to pause a CAEmitterLayer? - ios

I have a CAEmitterLayer instance that emits some CAEmitterCells. Is it possible to pause this layer so that no new CAEmitterCells are produced and the ones already produced remain fixed in their positions on screen? Then, when the CAEmitterLayer instance is "un-paused", the frozen CAEmitterCells would start to move again.
Thanks for any help here.
EDIT
Setting:
emitterLayer.speed = 0.1
where emitterLayer is an instance of a subclass of CAEmitterLayer, just removes the layer completely from the view.
Setting:
emitterLayer.lifetime = 0.0
just stops any new emitterCells being produced but doesn't "freeze" the existing emitterCells at the current position.

You can set the lifetime property of the CAEmitterLayer to 0, which will cause newly emitted cells to not even be rendered, but will leave already existing cells unaffected. When you want to "un-pause" your emitter layer, you can simply reset lifetime to whatever it was before the pause.
To freeze the existing cells as well, you can set speed to 0 and also add a timeOffset.
extension CAEmitterLayer {
    func pause() {
        // Freeze existing cells
        self.speed = 0
        self.timeOffset = convertTime(CACurrentMediaTime(), from: self)
        // Stop creating new cells
        self.lifetime = 0
    }
}
Then you can simply call it like emitterLayer.pause()
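For completeness, un-pausing could look like the sketch below, which follows Apple's QA1673 pause/resume pattern. The previousLifetime parameter is an assumption; the answer only says to restore lifetime to whatever it was before the pause, so store that value yourself.

extension CAEmitterLayer {
    func resume(previousLifetime: Float = 1) {
        // Shift beginTime so the frozen cells continue from where they stopped
        let pausedTime = timeOffset
        speed = 1
        timeOffset = 0
        beginTime = 0
        beginTime = convertTime(CACurrentMediaTime(), from: self) - pausedTime
        // Allow new cells to be emitted again (previousLifetime is assumed to be
        // the value the layer had before pause() set lifetime to 0)
        lifetime = previousLifetime
    }
}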

Related

Observe UIView frame while animating [duplicate]

I want to observe changes to the x coordinate of my UIView's origin while it is being animated using animateWithDuration:delay:options:animations:completion:. I want to track changes in the x coordinate during this animation at a granular level because I want to make a change in interaction to another view that the view being animated may make contact with. I want to make that change at the exact point of contact. I want to understand the best way to do something like this at a higher level:
-- Should I use animateWithDuration:... in the completion callback at the point of contact? In other words, the first animation runs until it hits that x coordinate, and the rest of the animation takes place in the completion callback?
-- Should I use NSNotification observers and observe changes to the frame property? How accurate / granular is this? Can I track every change to x? Should I do this in a separate thread?
Any other suggestions would be welcome. I'm looking for a best practice.
Use CADisplayLink since it is specifically built for this purpose. In the documentation, it says:
Once the display link is associated with a run loop, the selector on the target is called when the screen’s contents need to be updated.
For me I had a bar that fills up, and as it passed a certain mark, I had to change the colors of the view above that mark.
This is what I did:
let displayLink = CADisplayLink(target: self, selector: #selector(animationDidUpdate))
displayLink.frameInterval = 3
displayLink.addToRunLoop(NSRunLoop.mainRunLoop(), forMode: NSDefaultRunLoopMode)

UIView.animateWithDuration(1.2, delay: 0.0, options: [.CurveEaseInOut], animations: {
    self.viewGaugeGraph.frame.size.width = self.graphWidth
    self.imageViewGraphCoin.center.x = self.graphWidth
}, completion: { (_) in
    displayLink.invalidate()
})

func animationDidUpdate(displayLink: CADisplayLink) {
    let presentationLayer = self.viewGaugeGraph.layer.presentationLayer() as! CALayer
    let newWidth = presentationLayer.bounds.width
    switch newWidth {
    case 0 ..< width * 0.3:
        break
    case width * 0.3 ..< width * 0.6:
        // Color first mark
        break
    case width * 0.6 ..< width * 0.9:
        // Color second mark
        break
    case width * 0.9 ... width:
        // Color third mark
        break
    default:
        fatalError("Invalid value observed. \(newWidth) cannot be bigger than \(width).")
    }
}
In the example, I set the frameInterval property to 3 since I didn't need to update rigorously. The default is 1, which means the selector fires every frame, but that takes a toll on performance.
Create an NSTimer with some delay and run a particular selector at each time interval. In that method, check the frame of the animating view and compare it with your colliding view.
Make sure you use the presentationLayer frame, because if you access view.frame while animating, it gives the destination frame, which is constant throughout the animation.
CGRect animationViewFrame = [[animationView.layer presentationLayer] frame];
If you don't want to create a timer, write a selector that calls itself after a short delay, around 0.01 seconds.
CLARIFICATION ->
Let's say you have a view whose position you are animating from (0,0) to (100,100) over 5 seconds, and assume you have implemented KVO on the frame of this view.
When you call the animateWithDuration block, the view's position changes directly to (100,100), the final value, even though the view visually moves through intermediate positions.
So your KVO observer fires only once, at the instant the animation starts.
This is because Core Animation keeps both a model layer tree and a presentation tree: the layer (model) tree stores only the destination values, while the presentation layer stores the intermediate values.
When you access view.frame, it always gives you the frame from the layer tree, not the intermediate frames.
So you have to use the presentation layer's frame to get the intermediate frames.
Hope this helps.
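A minimal Swift sketch of the polling approach described in this answer, using the same Swift 2.2-era APIs as the CADisplayLink answer above. The animatingView and collidingView properties and the 0.01 s interval are assumptions for illustration:

var pollTimer: NSTimer?

func startObservingContact() {
    // Poll roughly every 0.01 seconds, as suggested above
    pollTimer = NSTimer.scheduledTimerWithTimeInterval(0.01, target: self,
        selector: #selector(checkForContact), userInfo: nil, repeats: true)
}

func checkForContact() {
    // The presentation layer holds the in-flight (intermediate) frame
    guard let presentation = animatingView.layer.presentationLayer() as? CALayer else { return }
    if CGRectIntersectsRect(presentation.frame, collidingView.frame) {
        // Point of contact reached: react here, then stop polling
        pollTimer?.invalidate()
        pollTimer = nil
    }
}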
UIDynamics and collision behaviours would be worth investigating here. You can set a delegate which is called when a collision occurs.
See the collision behaviour documentation for more details.
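A minimal UIKit Dynamics sketch of that suggestion, in modern Swift; movingView and otherView are placeholders, and this drives the motion with a dynamics behaviour instead of a UIView animation:

lazy var animator = UIDynamicAnimator(referenceView: view)

func startWatchingForCollisions() {
    let collision = UICollisionBehavior(items: [movingView, otherView])
    collision.translatesReferenceBoundsIntoBoundary = true
    collision.collisionDelegate = self   // conform to UICollisionBehaviorDelegate
    animator.addBehavior(collision)
}

// UICollisionBehaviorDelegate: called at the exact point of contact
func collisionBehavior(_ behavior: UICollisionBehavior, beganContactFor item: UIDynamicItem,
                       with otherItem: UIDynamicItem, at p: CGPoint) {
    // Make the interaction change to the other view here
}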

CAAnimation on multiple SceneKit nodes simultaneously

I am creating an application that uses SceneKit content in an AR app. I have multiple nodes placed at different positions in my scene. They may or may not be inside one parent node. The user has to choose the correct node, as per a challenge set by the application. If the user chooses the correct node, that node goes through one kind of animation and the incorrect ones (possibly several) undergo another set of animations. I am accomplishing the animations using CAAnimation directly, which is all good. Basically, to accomplish this, I am creating an array of all nodes and using it for the animations.
DispatchQueue.global(qos: .userInteractive).async { [weak self] in
    for node in (self?.nodesAddedInScene.keys)! {
        for index in 1...node.childNodes.count - 1 {
            if node.childNodes[index].childNodes.first?.name == "target" {
                self?.riseUpSpinAndFadeAnimation(on: node.childNodes[index])
            } else {
                self?.fadeAnimation(on: node.childNodes[index])
            }
        }
    }
}
When user chooses "target" node, that node goes through one set of animation and others go through another set of animations.
RiseUpSpinAndFadeAnimation:
private func riseUpSpinAndFadeAnimation(on shape: SCNNode) {
    let riseUpAnimation = CABasicAnimation(keyPath: "position")
    riseUpAnimation.fromValue = SCNVector3(shape.position.x, shape.position.y, shape.position.z)
    riseUpAnimation.toValue = SCNVector3(shape.position.x, shape.position.y + 0.5, shape.position.z)
    let spinAnimation = CABasicAnimation(keyPath: "eulerAngles.y")
    spinAnimation.toValue = shape.eulerAngles.y + 180.0
    spinAnimation.autoreverses = true
    let fadeAnimation = CABasicAnimation(keyPath: "opacity")
    fadeAnimation.toValue = 0.0
    let riseUpSpinAndFadeAnimation = CAAnimationGroup()
    riseUpSpinAndFadeAnimation.animations = [riseUpAnimation, fadeAnimation, spinAnimation]
    riseUpSpinAndFadeAnimation.duration = 1.0
    riseUpSpinAndFadeAnimation.fillMode = kCAFillModeForwards
    riseUpSpinAndFadeAnimation.isRemovedOnCompletion = false
    shape.addAnimation(riseUpSpinAndFadeAnimation, forKey: "riseUpSpinAndFade")
}
FadeAnimation:
private func fadeAnimation(on shape: SCNNode) {
    let fadeAnimation = CABasicAnimation(keyPath: "opacity")
    fadeAnimation.toValue = 0.0
    fadeAnimation.duration = 0.5
    fadeAnimation.fillMode = kCAFillModeForwards
    fadeAnimation.isRemovedOnCompletion = false
    shape.addAnimation(fadeAnimation, forKey: "fade")
}
I expect the animations to work, and they do. However, the issue is that since the nodes are in an array, the animations do not start at exactly the same time for all nodes. There are minute differences in when each animation starts, which leads to a poor UI.
What I am looking for is a way to attach animations to all nodes and then trigger these animations together later, say when the user taps the correct node. Arrays don't seem to be a wise choice. However, I am afraid that if I make all of these nodes children of an empty node and run the animation on that empty node, it would be difficult to manage the placement of the child nodes in the first place, since they are supposed to be kept at random distances and not necessarily close together. Given that this ultimately drives the AR experience, that makes it even more of a problem.
I am requesting some input on whether there are methods to attach animations to multiple (selected) objects, even if sequentially, but RUN them together. I used shape.addAnimation(fadeAnimation, forKey: "fade"); can the "forKey" parameter be made use of in such a case? Any pointers appreciated.
I've had up to fifty SCNNodes animating in perfect harmony by using CAKeyframeAnimations that are paused (.speed = 0) and setting each animation's time offset (.timeOffset) inside the SCNSceneRendererDelegate renderer(_:updateAtTime:) function.
It's pretty amazing that you can add a paused animation with a time offset every 1/60th of a second for a large number of nodes. Hats off to the SceneKit developers for having so little overhead on adding and removing CAAnimations.
I tried many different CAAnimation/SCNAction techniques before settling on this. With other methods the animations would drift out of sync over time.
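A rough sketch of that technique as described above; the class, property, and animation key names here are my own assumptions, not the answerer's code:

import SceneKit

class SyncedAnimator: NSObject, SCNSceneRendererDelegate {
    var animatedNodes: [SCNNode] = []
    // Template animation to scrub, e.g. a keyframe fade on "opacity"
    let templateAnimation: CAAnimation = {
        let fade = CAKeyframeAnimation(keyPath: "opacity")
        fade.values = [1.0, 0.0]
        fade.keyTimes = [0.0, 1.0]
        fade.duration = 1.0
        return fade
    }()
    private var startTime: TimeInterval?

    func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
        if startTime == nil { startTime = time }
        let elapsed = min(time - startTime!, templateAnimation.duration)
        for node in animatedNodes {
            // Add a paused copy at the current offset on every frame; reusing the
            // same key replaces the previous frame's copy, so every node shows
            // exactly the same animation time.
            let frameCopy = templateAnimation.copy() as! CAAnimation
            frameCopy.speed = 0
            frameCopy.timeOffset = elapsed
            node.addAnimation(frameCopy, forKey: "synced")
        }
    }
}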
Manganese,
I am just taking a guess here, but it could spark an idea for you :-)
I am focusing on this part of your question:
"What I am looking for is a logic wherein I can attach animations on all nodes and call these animations together later when let's say the user taps correct node."
I wonder if SCNTransaction might do the trick:
https://developer.apple.com/documentation/scenekit/scntransaction
Or maybe dispatch sync or async (totally guessing, but it could help):
https://developer.apple.com/documentation/dispatch
Or I am way off the mark :-)
Just trying to help out.
We all learn by sharing what we know.
RAD
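For what it's worth, here is a hypothetical sketch of the SCNTransaction idea from this answer; nodesToAnimate is a placeholder, and this only shows batching implicit animations so they start together:

SCNTransaction.begin()
SCNTransaction.animationDuration = 1.0
for node in nodesToAnimate {
    // All of these implicit property animations are committed together
    node.opacity = 0.0
}
SCNTransaction.commit()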

How to know whether an SKSpriteNode is not in motion (has stopped)?

In my game, I need to do some setting changes after my SKSpriteNode stops moving.
First,
I apply force on my SKSpriteNode, so it starts moving due to the force acting on it.
Now, I need to know when it stops moving.
I know that SKSpriteNode has a speed property, but it is always set to 1.0 by default.
Any ideas?
You can check a node's velocity by using something like this:
if ((self.physicsBody.velocity.dx == 0.0) && (self.physicsBody.velocity.dy == 0.0)) {
    NSLog(@"node has stopped moving");
}
The usual place to put the above code would be in the update method.
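A Swift equivalent of that check, placed in the scene's update(_:); movingNode is a placeholder property:

override func update(_ currentTime: TimeInterval) {
    // Zero velocity in both axes means the body is no longer moving
    if let body = movingNode.physicsBody,
       body.velocity.dx == 0.0 && body.velocity.dy == 0.0 {
        print("node has stopped moving")
    }
}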
Since you are using physics you can use the resting property of SKPhysicsBody.
if sprite.physicsBody?.resting == true {
    println("The sprite is resting")
} else {
    println("The sprite is moving")
}
Hope this helps.
You can subclass SKSpriteNode and add a previousPosition property to it, which you can update in the scene's -didFinishUpdate method. If there is no change in the position, you can assume the node has stopped (you might have to take a median of the last few positions to smooth it out). The speed property is something different; according to the class reference, speed is:
A speed modifier applied to all actions executed by a node and its descendants.
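A sketch of that previousPosition idea; the class name, threshold, trackedNode property, and the didFinishUpdate wiring are assumptions for illustration:

import SpriteKit

class TrackedSpriteNode: SKSpriteNode {
    private var previousPosition = CGPoint.zero

    // Call from the scene's didFinishUpdate(); returns true once the node has
    // effectively stopped moving between frames.
    func hasStoppedMoving(threshold: CGFloat = 0.1) -> Bool {
        let dx = abs(position.x - previousPosition.x)
        let dy = abs(position.y - previousPosition.y)
        previousPosition = position
        return dx < threshold && dy < threshold
    }
}

// In the SKScene subclass:
override func didFinishUpdate() {
    if trackedNode.hasStoppedMoving() {
        // do the settings changes here
    }
}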
Hope this helps!
Danny

CALayer delegate is only called occasionally, when using Swift

I'm new to iOS and Swift, so I've started by porting Apple's Accelerometer example code to Swift.
This was all quite straightforward. Since the Accelerometer API has been deprecated, I used Core Motion instead, and it works just fine. I also switched to a storyboard.
The problem I have is that my layer delegate is only rarely called. It will go for a few minutes and never get called, and then it will get called 40 times a second, and then go back to not being called. If I context switch, the delegate will get called, and one of the sublayers will be displayed, but there are 32 sublayers, and I've yet to see them all get drawn. What's drawn seems to be fine - the problem is just getting the delegate to actually get called when I call setNeedsDisplay(), and getting all of the sublayers to get drawn.
I've checked to be sure that each sublayer has the correct bounds and frame dimensions, and I've checked to make sure that setNeedsDisplay() gets called after each accelerometer point is acquired.
If I attach an instrument, I see that the frame rate is usually zero, but occasionally it will be some higher number.
My guess is that the run loop isn't cycling. There's actually nothing in the run loop, and I'm not sure where to put one. In viewDidLoad, I set up an update rate for the accelerometer and call a function that updates the sublayers in the view. Everything else is event driven, so I don't know what I'd do with a run loop.
I've tried creating CALayers and adding them as sublayers. I've also tried making the GraphViewSegment class a UIView, so it has its own layer.
The version that's written in Objective-C works perfectly reliably.
The way this application works is that acceleration values show up on the left side of the screen and scroll to the right. To make it efficient, new acceleration values are written into a small sublayer that holds a graph for 32 time values. When it's full, that whole sublayer is just moved a pixel at a time to the right, and a new (or recycled) segment takes its place at the left side of the screen.
Here's the code that moves unchanged segments to the right by a pixel:
for s: GraphViewSegment in self.segments {
    var position = s.layer.position
    position.x += 1.0
    s.layer.position = position
    //s.layer.hidden = false
    s.layer.setNeedsDisplay()
}
I don't think that the setNeedsDisplay is strictly necessary here, since it's called for the layer when the segment at the left gets a new line segment.
Here's how new layers are added:
public func addSegment() -> GraphViewSegment {
    // Create a new segment and add it to the segments array.
    var segment = GraphViewSegment(coder: self.coder)
    // We add it at the front of the array because -recycleSegment expects the oldest segment
    // to be at the end of the array. As long as we always insert the youngest segment at the front
    // this will be true.
    self.segments.insert(segment, atIndex: 0)
    // this is now a weak reference

    // Ensure that newly added segment layers are placed after the text view's layer so that the text view
    // always renders above the segment layer.
    self.layer.insertSublayer(segment.layer, below: self.text.layer)
    // Position it properly (see the comment for kSegmentInitialPosition)
    segment.layer.position = kSegmentInitialPosition
    //println("New segment added")
    self.layer.setNeedsDisplay()
    segment.layer.setNeedsDisplay()
    return segment
}
At this point I'm pretty confused. I've tried calling setNeedsDisplay all over the place, including the owning UIView. I've tried making the sublayers UIViews, and I've tried making them not be UIViews. No matter what I do, the behavior is always the same.
Everything is set up in viewDidLoad:
override func viewDidLoad() {
    super.viewDidLoad()
    pause.possibleTitles?.setByAddingObjectsFromArray([kLocalizedPause, kLocalizedResume])
    isPaused = false
    useAdaptive = false
    self.changeFilter(LowpassFilter)
    var accelerometerQueue = NSOperationQueue()
    motionManager.accelerometerUpdateInterval = 1.0 / kUpdateFrequency
    motionManager.startAccelerometerUpdatesToQueue(accelerometerQueue,
        withHandler: { (accelerometerData: CMAccelerometerData!, error: NSError!) -> Void in
            self.accelerometer(accelerometerData)
        })
    unfiltered.isAccessibilityElement = true
    unfiltered.accessibilityLabel = "unfiltered graph"
    filtered.isAccessibilityElement = true
    filtered.accessibilityLabel = "filtered graph"
}
func accelerometer(accelerometerData: CMAccelerometerData!) {
    if (!isPaused) {
        let acceleration: CMAcceleration = accelerometerData.acceleration
        filter.addAcceleration(acceleration)
        unfiltered!.addPoint(acceleration.x, y: acceleration.y, z: acceleration.z)
        filtered!.addPoint(filter.x, y: filter.y, z: filter.z)
        //unfiltered.setNeedsDisplay()
    }
}
Any idea?
I quite like Swift as a language - it takes the best parts of Java and C#, and adds some nice syntactic sugar. But this is driving me spare! I'm sure it's some little thing that I've overlooked, but I can't figure out what.
Since you've created a new NSOperationQueue for your accelerometer updates handler, everything that handler calls also runs on that separate queue, sequestered from the main run loop. I'd suggest either running the handler on the main queue (NSOperationQueue.mainQueue()) or moving anything that could update the UI back to the main thread via a block on the main queue:
NSOperationQueue.mainQueue().addOperationWithBlock {
    // do UI stuff here
}
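For instance, keeping the question's Swift 1.x-era API names, the simplest change would be to hand Core Motion the main queue directly in viewDidLoad (a sketch of the first option, not the answerer's exact code):

motionManager.startAccelerometerUpdatesToQueue(NSOperationQueue.mainQueue(),
    withHandler: { (accelerometerData: CMAccelerometerData!, error: NSError!) -> Void in
        // The handler, and every setNeedsDisplay() it triggers, now runs on the main thread
        self.accelerometer(accelerometerData)
    })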
