CALayer delegate is only called occasionally when using Swift - iOS

I'm new to iOS and Swift, so I've started by porting Apple's Accelerometer example code to Swift.
This was all quite straightforward. Since the Accelerometer API has been deprecated, I used Core Motion instead, and it works just fine. I also switched to a storyboard.
The problem I have is that my layer delegate is only rarely called. It will go for a few minutes and never get called, and then it will get called 40 times a second, and then go back to not being called. If I context switch, the delegate will get called, and one of the sublayers will be displayed, but there are 32 sublayers, and I've yet to see them all get drawn. What's drawn seems to be fine - the problem is just getting the delegate to actually get called when I call setNeedsDisplay(), and getting all of the sublayers to get drawn.
I've checked to be sure that each sublayer has the correct bounds and frame dimensions, and I've checked to make sure that setNeedsDisplay() gets called after each accelerometer point is acquired.
If I attach an instrument, I see that the frame rate is usually zero, but occasionally it will be some higher number.
My guess is that the run loop isn't cycling. There's actually nothing in the run loop, and I'm not sure where to put one. In the ViewDidLoad delegate, I set up an update rate for the accelerometer, and call a function that updates the sublayers in the view. Everything else is event driven, so I don't know what I'd do with a run loop.
I've tried creating CALayers and adding them as sublayers. I've also tried making the GraphViewSegment class a UIView, so it has its own layer.
The version that's written in Objective C works perfectly reliably.
The way that this application works, is that acceleration values show up on the left side of the screen, and scroll to the right. To make it efficient, new acceleration values are written into a small sublayer that holds a graph for 32 time values. When it's full, that whole sublayer is just moved a pixel at a time to the right, and a new (or recycled) segment takes its place at the left side of the screen.
Here's the code that moves unchanged segments to the right by a pixel:
for s: GraphViewSegment in self.segments {
    var position = s.layer.position
    position.x += 1.0
    s.layer.position = position
    //s.layer.hidden = false
    s.layer.setNeedsDisplay()
}
I don't think that the setNeedsDisplay is strictly necessary here, since it's called for the layer when the segment at the left gets a new line segment.
Here's how new layers are added:
public func addSegment() -> GraphViewSegment {
    // Create a new segment and add it to the segments array.
    var segment = GraphViewSegment(coder: self.coder)
    // We add it at the front of the array because -recycleSegment expects the oldest segment
    // to be at the end of the array. As long as we always insert the youngest segment at the front
    // this will be true.
    self.segments.insert(segment, atIndex: 0)
    // this is now a weak reference
    // Ensure that newly added segment layers are placed after the text view's layer so that the text view
    // always renders above the segment layer.
    self.layer.insertSublayer(segment.layer, below: self.text.layer)
    // Position it properly (see the comment for kSegmentInitialPosition)
    segment.layer.position = kSegmentInitialPosition
    //println("New segment added")
    self.layer.setNeedsDisplay()
    segment.layer.setNeedsDisplay()
    return segment
}
At this point I'm pretty confused. I've tried calling setNeedsDisplay all over the place, including the owning UIView. I've tried making the sublayers UIViews, and I've tried making them not be UIViews. No matter what I do, the behavior is always the same.
Everything is set up in viewDidLoad:
override func viewDidLoad() {
    super.viewDidLoad()
    pause.possibleTitles?.setByAddingObjectsFromArray([kLocalizedPause, kLocalizedResume])
    isPaused = false
    useAdaptive = false
    self.changeFilter(LowpassFilter)
    var accelerometerQueue = NSOperationQueue()
    motionManager.accelerometerUpdateInterval = 1.0 / kUpdateFrequency
    motionManager.startAccelerometerUpdatesToQueue(accelerometerQueue,
        withHandler: {(accelerometerData: CMAccelerometerData!, error: NSError!) -> Void in
            self.accelerometer(accelerometerData)})
    unfiltered.isAccessibilityElement = true
    unfiltered.accessibilityLabel = "unfiltered graph"
    filtered.isAccessibilityElement = true
    filtered.accessibilityLabel = "filtered graph"
}
func accelerometer(accelerometerData: CMAccelerometerData!) {
    if (!isPaused) {
        let acceleration: CMAcceleration = accelerometerData.acceleration
        filter.addAcceleration(acceleration)
        unfiltered!.addPoint(acceleration.x, y: acceleration.y, z: acceleration.z)
        filtered!.addPoint(filter.x, y: filter.y, z: filter.z)
        //unfiltered.setNeedsDisplay()
    }
}
Any ideas?
I quite like Swift as a language - it takes the best parts of Java and C#, and adds some nice syntactic sugar. But this is driving me spare! I'm sure it's some little thing that I've overlooked, but I can't figure out what.

Since you've created a new NSOperationQueue for your accelerometer updates handler, everything that handler calls also runs on that separate queue, sequestered from the main run loop. I'd suggest either running the handler on the main queue (NSOperationQueue.mainQueue()) or moving anything that could update the UI back to the main thread via a block on the main queue:
NSOperationQueue.mainQueue().addOperationWithBlock {
    // do UI stuff here
}
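For example, the asker's accelerometer callback could be restructured along those lines; this is only a sketch, using the same pre-Swift-3 APIs and the question's own addPoint/graph-view names:
func accelerometer(accelerometerData: CMAccelerometerData!) {
    if (!isPaused) {
        let acceleration: CMAcceleration = accelerometerData.acceleration
        filter.addAcceleration(acceleration)
        // Anything that touches views or layers must run on the main thread,
        // otherwise setNeedsDisplay() won't reliably trigger a redraw.
        NSOperationQueue.mainQueue().addOperationWithBlock {
            self.unfiltered!.addPoint(acceleration.x, y: acceleration.y, z: acceleration.z)
            self.filtered!.addPoint(self.filter.x, y: self.filter.y, z: self.filter.z)
        }
    }
}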

Related

Is it possible to pause a CAEmitterLayer?

I have a CAEmitterLayer instance that emits some CAEmitterCells. I'm wondering, is it possible to pause this layer such that no new CAEmitterCells are produced and the ones that have been produced remain fixed in their position on the screen? Then, when the CAEmitterLayer instance is "un-paused", the fixed CAEmitterCells on the screen start to move again.
Thanks for any help here.
EDIT
Setting:
emitterLayer.speed = 0.1
where emitterLayer is an instance of a subclass of CAEmitterLayer, just removes the layer completely from the view.
Setting:
emitterLayer.lifetime = 0.0
just stops any new emitterCells being produced but doesn't "freeze" the existing emitterCells at the current position.
You can set the lifetime property of the CAEmitterLayer to 0, which will cause newly emitted cells to not even be rendered, but will leave already existing cells unaffected. When you want to "un-pause" your emitter layer, you can simply reset lifetime to whatever it was before the pause.
To freeze the existing cells as well, you can set speed to 0 and also add a timeOffset.
extension CAEmitterLayer {
    func pause() {
        // Freeze existing cells
        self.speed = 0
        self.timeOffset = convertTime(CACurrentMediaTime(), from: self)
        // Stop creating new cells
        self.lifetime = 0
    }
}
Then you can simply call it like emitterLayer.pause()
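The answer doesn't show the un-pause side; one possible resume(), based on the standard Core Animation pause/resume recipe (the original lifetime has to be supplied by the caller, since the layer no longer knows it):
extension CAEmitterLayer {
    func resume(originalLifetime: Float = 1.0) {
        // Start producing cells again (pass whatever lifetime the layer used before pausing)
        self.lifetime = originalLifetime
        // Un-freeze the existing cells by restoring the layer's clock
        let pausedTime = self.timeOffset
        self.speed = 1
        self.timeOffset = 0
        self.beginTime = 0
        let timeSincePause = convertTime(CACurrentMediaTime(), from: nil) - pausedTime
        self.beginTime = timeSincePause
    }
}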

How to animate drawing in Swift, but also change a UIImageView's scale?

I'd like to animate a drawing sequence. My code draws a spiral into a UIImageView.image. The sequence changes the image contents, but also changes the scale of the surrounding UIImageView. The code is parameterized for the number of revolutions of the spiral:
func drawSpiral(rotations: Double) {
    let scale = scaleFactor(rotations) // do some math to figure the best scale
    UIGraphicsBeginImageContextWithOptions(mainImageView.bounds.size, false, 0.0)
    let context = UIGraphicsGetCurrentContext()!
    context.scaleBy(x: scale, y: scale) // some animation prohibits changes!
    // ... drawing happens here
    myUIImageView.image = UIGraphicsGetImageFromCurrentImageContext()
}
For example, I'd like to animate from drawSpiral(2.0) to drawSpiral(2.75) in 20 increments, over a duration of 1.0 seconds.
Can I set up UIView.animate(withDuration:...) to call my method with successive intermediate values? How? Is there a better animation approach?
Can I set up UIView.animate(withDuration:...) to call my method with successive intermediate values
Animation is merely a succession of timed intermediate values being thrown at something. It is perfectly reasonable to ask that they be thrown at your code so that you can do whatever you like with them. Here's how.
You'll need a special layer:
class MyLayer : CALayer {
    @objc var spirality : CGFloat = 0
    override class func needsDisplay(forKey key: String) -> Bool {
        if key == #keyPath(spirality) {
            return true
        }
        return super.needsDisplay(forKey: key)
    }
    override func draw(in con: CGContext) {
        print(self.spirality) // in real life, this is our signal to draw!
    }
}
The layer must actually be in the interface, though it can be impossible for the user to see:
let lay = MyLayer()
lay.frame = CGRect(x: 0, y: 0, width: 1, height: 1)
self.view.layer.addSublayer(lay)
Subsequently, we can initialize the spirality of the layer:
lay.spirality = 2.0
lay.setNeedsDisplay() // prints: 2.0
Now when we want to "animate" the spirality, this is what we do:
let ba = CABasicAnimation(keyPath:#keyPath(MyLayer.spirality))
ba.fromValue = lay.spirality
ba.toValue = 2.75
ba.duration = 1
lay.add(ba, forKey:nil)
CATransaction.setDisableActions(true)
lay.spirality = 2.75
The console shows the arrival of a succession of intermediate values over the course of 1 second!
2.03143266495317
2.04482554644346
2.05783333256841
2.0708108600229
2.08361491002142
2.0966724678874
2.10976020619273
2.12260236591101
2.13551922515035
2.14842618256807
2.16123360767961
2.17421661689878
2.18713565543294
2.200748950243
2.21360073238611
2.2268518730998
2.23987507075071
2.25273013859987
2.26560932397842
2.27846492826939
2.29135236144066
2.30436328798532
2.31764804571867
2.33049770444632
2.34330793470144
2.35606706887484
2.36881992220879
2.38163591921329
2.39440815150738
2.40716737508774
2.42003352940083
2.43287514150143
2.44590276479721
2.45875595510006
2.47169743478298
2.48451870679855
2.49806520342827
2.51120449602604
2.52407149970531
2.53691896796227
2.54965999722481
2.56257836520672
2.57552136480808
2.58910304307938
2.60209316015244
2.6151298135519
2.62802086770535
2.64094598591328
2.6540260463953
2.6669240295887
2.6798157542944
2.69264766573906
2.70616912841797
2.71896715462208
2.73285858333111
2.74564798176289
2.75
2.75
2.75
Those are exactly the numbers that would be thrown at an animatable property, such as when you change a view's frame origin x from 2 to 2.75 in a 1-second duration animation. But now the numbers are coming to you as numbers, and so you can now do anything you like with that series of numbers. If you want to call your method with each new value as it arrives, go right ahead.
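For the spiral case specifically, one possible way to wire those incoming values back into the question's drawSpiral(rotations:) routine; SpiralHost and host are hypothetical names added here only for illustration:
protocol SpiralHost: AnyObject {
    func drawSpiral(rotations: Double)   // the question's drawing method
}

class SpiralLayer: MyLayer {
    weak var host: SpiralHost?           // hypothetical back-reference to whoever owns the drawing code

    override func draw(in con: CGContext) {
        // During the CABasicAnimation, spirality arrives here once per frame
        // as an interpolated value; hand each value to the drawing code.
        host?.drawSpiral(rotations: Double(self.spirality))
    }
}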
Personally, for more complicated animations I would use Lottie. The animation itself is built in Adobe After Effects and exported as a JSON file, which you then manage using the Lottie library. This approach saves you time and effort when you port your app to another platform like Android, since there is also an Android Lottie library, which means the complicated process of creating the animation is only done once.
Lottie Files has some example animations as well for you to look at.
@Matt provided the answer and gets the checkmark. I'll recap some points for emphasis:
UIView animation is great for commonly animated properties, but if you need to vary a property that isn't on UIView's animatable list, you can't use it. You must create a new CALayer and add a CABasicAnimation(keyPath:) to it.
I tried but was unable to get my CABasicAnimations to fire by adding them to the default UIView.layer. I needed to add a custom CALayer sublayer to the UIView.layer - something like myView.layer.addSublayer(myLayer).
Leave the custom sublayer installed and re-add the CABasicAnimation to that sublayer when (and only when) you want to animate drawing.
In the custom CALayer object, be sure to override class func needsDisplay(forKey key: String) -> Bool with your key property (as @Matt's example shows), and also override func draw(in ctx: CGContext) to do your drawing. Be sure to decorate your key property with @objc, and reference the key property within the drawing code.
A "gotcha" to avoid: in the UIView object, be sure to null out the usual draw method (override func draw(_ rect: CGRect) { }) to avoid conflict between animated and non-animated drawing on the separate layers. For coordinating animated and non-animated content in the same UIView, it's good (necessary?) to do all your drawing from your custom layer.
When doing that, use myLayer.setNeedsDisplay() to update the non-animated content within the custom layer; use myLayer.add(myBasicAnimation, forKey:nil) to trigger animated drawing within the custom layer.
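For example (myLayer and newValue are placeholders for your own layer instance and target value, not the poster's code):
// Non-animated refresh: just mark the custom layer dirty.
myLayer.setNeedsDisplay()

// Animated drawing: build and re-add the animation, then commit the new
// model value without triggering an implicit animation.
let myBasicAnimation = CABasicAnimation(keyPath: #keyPath(MyLayer.spirality))
myBasicAnimation.fromValue = myLayer.spirality
myBasicAnimation.toValue = newValue
myBasicAnimation.duration = 1
myLayer.add(myBasicAnimation, forKey: nil)
CATransaction.setDisableActions(true)
myLayer.spirality = newValue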
As I said above, @Matt answered - but these items seemed worth emphasizing.

Observe UIView frame while animating [duplicate]

I want to observe changes to the x coordinate of my UIView's origin while it is being animated using animateWithDuration:delay:options:animations:completion:. I want to track changes in the x coordinate during this animation at a granular level because I want to make a change in interaction to another view that the view being animated may make contact with. I want to make that change at the exact point of contact. I want to understand the best way to do something like this at a higher level:
-- Should I use animateWithDuration:... in the completion callback at the point of contact? In other words, the first animation runs until it hits that x coordinate, and the rest of the animation takes place in the completion callback?
-- Should I use NSNotification observers and observe changes to the frame property? How accurate / granular is this? Can I track every change to x? Should I do this in a separate thread?
Any other suggestions would be welcome. I'm looking for a best practice.
Use CADisplayLink since it is specifically built for this purpose. In the documentation, it says:
Once the display link is associated with a run loop, the selector on the target is called when the screen’s contents need to be updated.
For me I had a bar that fills up, and as it passed a certain mark, I had to change the colors of the view above that mark.
This is what I did:
let displayLink = CADisplayLink(target: self, selector: #selector(animationDidUpdate))
displayLink.frameInterval = 3
displayLink.addToRunLoop(NSRunLoop.mainRunLoop(), forMode: NSDefaultRunLoopMode)
UIView.animateWithDuration(1.2, delay: 0.0, options: [.CurveEaseInOut], animations: {
    self.viewGaugeGraph.frame.size.width = self.graphWidth
    self.imageViewGraphCoin.center.x = self.graphWidth
}, completion: { (_) in
    displayLink.invalidate()
})
func animationDidUpdate(displayLink: CADisplayLink) {
    let presentationLayer = self.viewGaugeGraph.layer.presentationLayer() as! CALayer
    let newWidth = presentationLayer.bounds.width
    switch newWidth {
    case 0 ..< width * 0.3:
        break
    case width * 0.3 ..< width * 0.6:
        // Color first mark
        break
    case width * 0.6 ..< width * 0.9:
        // Color second mark
        break
    case width * 0.9 ... width:
        // Color third mark
        break
    default:
        fatalError("Invalid value observed. \(newWidth) cannot be bigger than \(width).")
    }
}
In the example, I set the frameInterval property to 3 since I didn't need rigorous updates. The default is 1, which means it fires on every frame, but that takes a toll on performance.
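As a side note, frameInterval has since been deprecated (iOS 10) in favor of preferredFramesPerSecond; a roughly equivalent modern-Swift setup might look like this:
let displayLink = CADisplayLink(target: self, selector: #selector(animationDidUpdate))
displayLink.preferredFramesPerSecond = 20   // roughly frameInterval = 3 on a 60 Hz display
displayLink.add(to: .main, forMode: .default)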
Create an NSTimer with some delay and run a particular selector after each time lapse. In that method, check the frame of the animating view and compare it with your colliding view.
Make sure you use the presentationLayer frame, because if you access view.frame while animating, it gives the destination frame, which is constant throughout the animation.
CGRect animationViewFrame = [[animationView.layer presentationLayer] frame];
If you don't want to create a timer, write a selector which calls itself after some delay. Have the delay around 0.01 seconds.
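A Swift version of the same presentation-layer check might look like this (animationView and collidingView stand in for your own views):
if let presentation = animationView.layer.presentation() {
    // The presentation layer reflects what is currently on screen,
    // not the destination frame set by the animation block.
    let animationViewFrame = presentation.frame
    if animationViewFrame.intersects(collidingView.frame) {
        // the views are in contact
    }
}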
CLARIFICATION ->
Let's say you have a view whose position you are animating from (0,0) to (100,100) over a duration of 5 seconds, and assume you have implemented KVO on the frame of this view.
When you call the animateWithDuration block, the position of the view changes directly to (100,100), the final value, even though the view moves through intermediate positions on screen.
So your KVO will fire only once, at the instant the animation starts.
This is because layers have a layer tree and a presentation tree. The layer tree just stores destination values, while the presentation layer stores the intermediate values.
When you access view.frame it always gives the value of the frame in the layer tree, not the intermediate frames the view passes through.
So you have to use the presentation layer's frame to get the intermediate frames.
Hope this helps.
UIDynamics and collision behaviours would be worth investigating here. You can set a delegate which is called when a collision occurs.
See the collision behaviour documentation for more details.
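A rough sketch of that approach, assuming a view controller that owns the moving view and the obstacle view (the property names are made up for illustration):
import UIKit

class CollisionViewController: UIViewController, UICollisionBehaviorDelegate {
    // Hypothetical views: one moving, one acting as an obstacle.
    @IBOutlet var movingView: UIView!
    @IBOutlet var obstacleView: UIView!

    var animator: UIDynamicAnimator!

    override func viewDidLoad() {
        super.viewDidLoad()
        animator = UIDynamicAnimator(referenceView: view)

        // Push the moving view so it travels across the screen.
        let push = UIPushBehavior(items: [movingView], mode: .instantaneous)
        push.pushDirection = CGVector(dx: 1.0, dy: 0)

        // Let the two views collide and get told when they touch.
        let collision = UICollisionBehavior(items: [movingView, obstacleView])
        collision.collisionDelegate = self

        animator.addBehavior(push)
        animator.addBehavior(collision)
    }

    func collisionBehavior(_ behavior: UICollisionBehavior,
                           beganContactFor item1: UIDynamicItem,
                           with item2: UIDynamicItem,
                           at p: CGPoint) {
        // Exact point of contact; react to the collision here.
        print("contact at \(p)")
    }
}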

How to add SCNNodes without blocking main thread?

I'm creating and adding a large number of SCNNodes to a SceneKit scene, which causes the app to freeze for a second or two.
I thought I could fix this by putting all the action in a background thread using DispatchQueue.global(qos: .background).async(), but no dice. It behaves exactly the same.
I saw this answer and put the nodes through SCNView.prepare() before adding them, hoping it would slow down the background thread and prevent blocking. It didn't.
Here's a test function that reproduces the problem:
func spawnNodesInBackground() {
    // put all the action in a background thread
    DispatchQueue.global(qos: .background).async {
        var nodes = [SCNNode]()
        for i in 0...5000 {
            // create a simple SCNNode
            let node = SCNNode()
            node.position = SCNVector3(i, i, i)
            let geometry = SCNSphere(radius: 1)
            geometry.firstMaterial?.diffuse.contents = UIColor.white.cgColor
            node.geometry = geometry
            nodes.append(node)
        }
        // run the nodes through prepare()
        self.mySCNView.prepare(nodes, completionHandler: { (Bool) in
            // nodes are prepared, add them to scene
            for node in nodes {
                self.myRootNode.addChildNode(node)
            }
        })
    }
}
When I call spawnNodesInBackground() I expect the scene to continue rendering normally (perhaps at a reduced frame rate) while new nodes are added at whatever pace the CPU is comfortable with. Instead the app freezes completely for a second or two, then all the new nodes appear at once.
Why is this happening, and how can I add a large number of nodes without blocking the main thread?
I don't think this problem is solvable using the DispatchQueue. If I substitute some other task instead of creating SCNNodes it works as expected, so I think the problem is related to SceneKit.
The answers to this question suggest that SceneKit has its own private background thread that it batches all changes to. So regardless of what thread I use to create my SCNNodes, they all end up in the same queue in the same thread as the render loop.
The ugly workaround I'm using is to add the nodes a few at a time in SceneKit's delegated renderer(_:updateAtTime:) method until they're all done.
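A sketch of that workaround, assuming the view controller is the SCNView's delegate and pendingNodes is filled wherever the nodes get created (the names here are illustrative, not the poster's actual code):
import UIKit
import SceneKit

class SpawningViewController: UIViewController, SCNSceneRendererDelegate {
    var pendingNodes = [SCNNode]()      // filled by whatever code builds the nodes
    let nodesPerFrame = 20
    weak var sceneRootNode: SCNNode?

    // SceneKit calls this on its rendering thread once per frame
    // (remember to set scnView.delegate = self).
    func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
        guard !pendingNodes.isEmpty else { return }
        // Add only a small batch per frame so the render loop never stalls.
        // (In real code, guard pendingNodes with a lock or serial queue.)
        let batch = Array(pendingNodes.prefix(nodesPerFrame))
        pendingNodes.removeFirst(batch.count)
        for node in batch {
            sceneRootNode?.addChildNode(node)
        }
    }
}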
I poked around on this and didn't solve the freeze (I did reduce it a bit).
I expect that prepare() is going to exacerbate the freeze, not reduce it, because it's going to load all resources into the GPU immediately, instead of letting them be lazily loaded. I don't think you need to call prepare() from a background thread, because the doc says it already uses a background thread. But creating the nodes on a background thread is a good move.
I did see pretty good performance improvement by moving the geometry outside the loop, and by using a temporary parent node (which is then cloned), so that there's only one call to add a new child to the scene's root node. I also reduced the sphere's segment count to 10 (from the default of 48).
I started with the spinning spaceship sample project, and triggered the addition of the spheres from the tap gesture. Before my changes, I saw 11 fps, 7410 draw calls per frame, 8.18M triangles. After moving the geometry out of the loop and flattening the sphere tree, I hit 60 fps, with only 3 draw calls per frame and 1.67M triangles (iPhone 6s).
Do you need to build these objects at run time? You could build this scene once, archive it, and then embed it as an asset. Depending on the effect you want to achieve, you might also consider using SCNSceneRenderer's present(_:with:incomingPointOfView:completionHandler:) to replace the entire scene at once.
func spawnNodesInBackgroundClone() {
    print(Date(), "starting")
    DispatchQueue.global(qos: .background).async {
        let tempParentNode = SCNNode()
        tempParentNode.name = "spheres"
        let geometry = SCNSphere(radius: 0.4)
        geometry.segmentCount = 10
        geometry.firstMaterial?.diffuse.contents = UIColor.green.cgColor
        for x in -10...10 {
            for y in -10...10 {
                for z in 0...20 {
                    let node = SCNNode()
                    node.position = SCNVector3(x, y, -z)
                    node.geometry = geometry
                    tempParentNode.addChildNode(node)
                }
            }
        }
        print(Date(), "cloning")
        let scnView = self.view as! SCNView
        let cloneNode = tempParentNode.flattenedClone()
        print(Date(), "adding")
        DispatchQueue.main.async {
            print(Date(), "main queue")
            print(Date(), "prepare()")
            scnView.prepare([cloneNode], completionHandler: { (Bool) in
                scnView.scene?.rootNode.addChildNode(cloneNode)
                print(Date(), "added")
            })
            // only do this once, on the simulator
            // let sceneData = NSKeyedArchiver.archivedData(withRootObject: scnView.scene!)
            // try! sceneData.write(to: URL(fileURLWithPath: "/Users/hal/scene.scn"))
            print(Date(), "queued")
        }
    }
}
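For the scene-swapping option mentioned above, a minimal sketch might look like this (it assumes newScene is the pre-built scene, loaded or unarchived elsewhere):
import SceneKit
import SpriteKit

func swapScene(to newScene: SCNScene, in scnView: SCNView) {
    // Replace the whole scene in one step with a cross-fade,
    // so no incremental node insertion happens on the render loop.
    let transition = SKTransition.crossFade(withDuration: 1.0)
    scnView.present(newScene, with: transition, incomingPointOfView: nil) {
        print("scene swapped in")
    }
}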
I have an asteroid simulation with 10000 nodes and ran into this issue myself. What worked for me was creating the container node, then passing it to a background process to fill it with child nodes.
That background process uses an SCNAction on that container node to add each of the generated asteroids to the container node.
let action = SCNAction.run { container in
    // generate nodes...
    // ...then, for each node in generatedNodes:
    for node in generatedNodes {
        container.addChildNode(node)
    }
}
containerNode.runAction(action)
I also used a shared level-of-detail node with an uneven-sided block as its geometry, so that the scene can draw those nodes in a single pass.
I also pre-generate 50 asteroid shapes that get random transformations applied during the background generation process. That process simply has to grab a pre-generated block at random, apply a random simd transformation, and store the result for adding to the scene later.
I'm considering using a pyramid for the LOD, but the 5 x 10 x 15 block works for my purpose. This method can also easily be throttled to add only a set number of blocks at a time, by creating and passing multiple actions to the node. Initially I passed each node as an action, but this way works too.
Showing the entire field of 10,000 still affects the FPS slightly, by 10 to 20 FPS, but at that point the container node's own LOD comes into effect, showing a single ring.
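A rough sketch of the shared level-of-detail idea (the low-poly box and the distance threshold are illustrative values, not the poster's):
import SceneKit

// One low-detail geometry shared by every asteroid node.
let lowDetailGeometry = SCNBox(width: 5, height: 10, length: 15, chamferRadius: 0)
let farLOD = SCNLevelOfDetail(geometry: lowDetailGeometry, worldSpaceDistance: 200)

// Attach the LOD to the detailed asteroid geometry so SceneKit swaps
// it in automatically when the node is far from the camera.
let asteroidGeometry = SCNSphere(radius: 2)
asteroidGeometry.levelsOfDetail = [farLOD]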
Add all of them when the application starts, but position them where the camera can't see them. When you need them, change their position to where they should be.
