Unable to locate memory leak using Xcode Instruments - ios

The problem
I discovered earlier that my memory usage in my game was only going up when the tiles were moved, but it never went back down again. From this, I could tell there was a memory leak.
I then started using Xcode Instruments, which I am very new to. So I followed many things from this article, especially the Recording Options, and then I set the mode to show the Call Tree.
Results of Instruments
What do I need help with?
I have two functions that just move all the tiles along that row/column, and then clone the tile at the end (using node.copy()) so everything can "loop over", hence the project name.
I feel as if the tile cloning may be causing some retain cycle; however, the clone is stored in a variable within the function scope. After I run the SKAction on the clone, I remove the tile from the scene using copiedNode.removeFromParent().
So what may be causing this memory leak? Could I be looking in the wrong place?
Code
I have shortened this code to what I consider necessary.
Declaration at the top of the class:
/// Delegate to the game scene to reference properties.
weak var delegate: GameScene!
/// All the cloned tiles currently on the board.
private var cloneTiles = [SKSpriteNode]()
Cloning of the tile within the moving tiles functions:
/// A duplicate of the current tile.
let copiedNode = currentTile.node.copy() as! SKSpriteNode // Create copy
cloneTiles.append(copiedNode) // Add as a clone
delegate.addChild(copiedNode) // Add to the scene
let copiedNodeAction = SKAction.moveBy(x: movementDifference, y: 0, duration: animationDuration) // Create the movement action
// Run the action, and then remove itself
copiedNode.run(copiedNodeAction) {
self.cloneTiles.remove(at: self.cloneTiles.firstIndex(of: copiedNode)!)
copiedNode.removeFromParent()
}
Function to move tiles immediately:
/// Move all tiles to the correct location immediately.
private func moveTilesToLocationImmediately() {
// Remove all clone tiles
cloneTiles.forEach { $0.removeFromParent() }
cloneTiles.removeAll()
/* Moves tiles here */
}
Is there something I need to declare as a weak var or something? I know how retain cycles occur, but do not get why it exists in this code, as I remove the cloned tile reference from the cloneTiles array.
Roughly where the leak is occurring (helped by Mark Szymczyk)
Here is what happened after I double-clicked on the move tiles function in the call stack (refer to his answer below):
This is confirming that the memory leak is caused somehow by the node clone, but I still don't know why this node is still being retained after it is removed from the cloneTiles array and the scene. Could the node be having trouble getting removed from the scene for some reason?
Please leave any tips or questions about this, so this problem can be solved!
More investigating
I have now been trying to get to grips with Xcode Instruments, but I am still really struggling to find this memory leak. Here is the leaks panel which may help:
Even after trying [weak self], I still had no luck:
Even the leaks history still looks the same with the [weak self] within the closure.
Continuing to try to resolve the reference cycle
Currently, @matt is helping me with this issue. I have changed a few lines of code, by adding things like [unowned self]:
// Determine if the tile will roll over
if direction == .up && movementDifference < 0 || direction == .down && movementDifference > 0 {
// Calculate where the clone tile should move to
movementDifference -= rollOverDistance
/// A duplicate of the current tile.
let copiedNode = currentTile.node.copy() as! SKSpriteNode // Create copy
cloneTiles.append(copiedNode) // Add as a clone
delegate.addChild(copiedNode) // Add to the scene
let copiedNodeAction = SKAction.moveBy(x: 0, y: movementDifference, duration: animationDuration) // Create the movement action
// Run the action, and then remove itself
copiedNode.run(copiedNodeAction) { [unowned self, copiedNode] in
self.cloneTiles.remove(at: self.cloneTiles.firstIndex(of: copiedNode)!).removeFromParent()
}
// Move the original roll over tile back to the other side of the screen
currentTile.node.position.y += rollOverDistance
}
/// The normal action to perform, moving the tile by a distance.
let normalNodeAction = SKAction.moveBy(x: 0, y: movementDifference, duration: animationDuration) // Create the action
currentTile.node.run(normalNodeAction) { [unowned self] in // Apply the action
if forRow == 1 { self.animationsCount -= 1 } // Lower animation count for completion
}
Unfortunately, I could not make copiedNode a weak capture as it would always be nil immediately, and unowned caused a crash about the reference being read after it was deallocated. Here is also the Cycles & Roots graph in case it is helpful:
Thank you for any help!

I'm rather suspicious of the way you're managing the copied node; you may be releasing it prematurely, and only the retain cycle was preventing you from discovering this mistake. However, let's concentrate on breaking the retain cycle.
What you want to do is make everything coming into the action method weak, so that there is no strong capture by the action method. Then in the action method you want to immediately retain those weak references so they don't vanish out from under you. That's called the "weak-strong dance". Like this:
copiedNode.run(copiedNodeAction) { [weak self, weak copiedNode] in
if let `self` = self, let copiedNode = copiedNode {
// do stuff here
// be sure to log so you know we arrived here at all, as we might not
}
}

I can help a little on the Instruments front. If you double-click the moveHorizontally entry in the Instruments call tree, Instruments will show you the lines of code that are allocating the leaked memory. At the bottom of the window is a Call Tree button. If you click on that, you can invert the call tree and hide system libraries. Doing that will make it easier to find your code in the call tree.
You can learn more about Instruments in the following article:
Measuring Your App's Memory Usage with Instruments

Related

Is it possible to have an SKaction repeat forever only when the sprite is in view of the camera/player?

This is for a 2D game:
I have certain quality of life SKactions repeating forever, the two big ones for me are coins rotating/bobbing up and down and water flowing.
According to Apple's documentation, SKActions are instanced. So as long as I have the action subclassed, it's only running "once" regardless of how many sprites it's being used on. For example, as long as I have all my coins getting their actions from the same "Coin" class, the memory footprint used by the coin's action is the same regardless of whether I have 1 or 20 coins.
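For concreteness, here is a minimal sketch of the kind of sharing I mean (the Coin class and names are illustrative, not my actual code):

    import SpriteKit

    class Coin: SKSpriteNode {
        // Created once and shared by every coin that runs it.
        static let spin = SKAction.repeatForever(
            SKAction.rotate(byAngle: .pi * 2, duration: 1.0))

        func startSpinning() {
            run(Coin.spin, withKey: "spin")
        }
    }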
All that being said, it seems like such a waste to have these actions going when they aren't even in view of the camera/player.
Is there a way to have repeat-forever actions deactivate when they aren't in view of the camera? I know that defeats the purpose of "forever", but as far as I can tell it's either choosing some sort of static duration or choosing forever.
Any advice is greatly appreciated.
It is better to add a node only when it is visible, but if you have only a few nodes you can use SKCameraNode's
func containedNodeSet() -> Set<SKNode>
Example:
class Enemy: SKSpriteNode {
    func startAnimationForever() {
        // start the repeating animation if it is not already running
    }
    func stopAnimation() {
        // stop the animation
    }
}
In your scene, supposing your camera is myCam:
// Add all enemies to this set (or search the scene for all enemies)
var allEnemies = Set<Enemy>()

func allVisibleEnemies() -> Set<Enemy> {
    // Keep only the camera-contained nodes that are enemies
    let visible = myCam.containedNodeSet().compactMap { $0 as? Enemy }
    return Set(visible)
}

func allInvisibleEnemies() -> Set<Enemy> {
    return allEnemies.subtracting(allVisibleEnemies())
}

override func update(_ currentTime: TimeInterval) {
    // ... all your other per-frame work ...
    let visibleEnemies = allVisibleEnemies()
    let invisibleEnemies = allInvisibleEnemies()
    visibleEnemies.forEach { $0.startAnimationForever() }
    invisibleEnemies.forEach { $0.stopAnimation() }
}
You can optimize it further if necessary.
I don't have the compiler at hand, so feel free to edit.
If for any reason you have nodes that are not on the screen and do not need to be in the scene for anything, then you should take them out of the scene to help improve performance. This stops actions on those nodes from running, and stops your physics world from having to check whether anything physics-related needs to happen for them.
Now there are many ways to go about doing this, but a very basic principle would be to establish some kind of map that lays out your nodes (this could be an SKScene that you have not presented). Then use the map (scene) to keep track of all of your nodes. Take your camera and find all the nodes on the map (scene) that are in the view of the camera, and move those nodes over to the main scene.
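Here is a rough sketch of that idea (every name is illustrative and the code is untested; it assumes both scenes share one coordinate space):

    import SpriteKit

    class StreamingScene: SKScene {
        let mapScene = SKScene(size: CGSize(width: 4096, height: 4096)) // holding scene, never presented
        let cam = SKCameraNode()

        override func didMove(to view: SKView) {
            addChild(cam)
            camera = cam
        }

        override func update(_ currentTime: TimeInterval) {
            // The camera's visible rect in scene coordinates (ignores zoom for brevity).
            let visibleRect = CGRect(x: cam.position.x - size.width / 2,
                                     y: cam.position.y - size.height / 2,
                                     width: size.width,
                                     height: size.height)

            // Pull nodes that came into view out of the map and into the live scene.
            for node in mapScene.children where visibleRect.contains(node.position) {
                node.removeFromParent()
                addChild(node)
            }
            // Park nodes that left the view back in the map.
            for node in children where node !== cam && !visibleRect.contains(node.position) {
                node.removeFromParent()
                mapScene.addChild(node)
            }
        }
    }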

How to, simply, wait for any layout in iOS?

Before beginning note that this has nothing to do with background processing. There is no "calculation" involved that one would background.
Only UIKit.
view.addItemsA()
view.addItemsB()
view.addItemsC()
Let's say on a 6s iPhone
EACH of these takes one second for UIKit to construct.
This will happen:
THEY APPEAR ALL AT ONCE. To repeat, the screen simply hangs for 3 seconds while UIKit does a massive amount of work. Then they all appear at once.
But let's say I want this to happen:
THEY APPEAR PROGRESSIVELY. The screen simply hangs for 1 second while UIKit builds one. It appears. It hangs again while it builds the next one. It appears. And so on.
(Note "one second" is just a simple example for clarity. See the end of this post for a fuller example.)
How do you do it in iOS?
You can try the following. It does not seem to work.
view.addItemsA()
view.setNeedsDisplay()
view.layoutIfNeeded()
view.addItemsB()
You can try this:
view.addItemsA()
view.setNeedsDisplay()
view.layoutIfNeeded()
delay(0.1) { self._b() }
}
func _b() {
view.addItemsB()
view.setNeedsDisplay()
view.layoutIfNeeded()
delay(0.1) { self._c() }...
Note that if the value is too small, this approach simply, and obviously, does nothing. UIKit will just keep working (what else would it do?). If the value is too big, it's pointless.
Note that currently (iOS 10), if I'm not mistaken: if you try this trick with a zero delay, it works erratically at best. (As you'd probably expect.)
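(For reference, the delay helper used above isn't defined in this post; a typical definition, which is what I'm assuming here, is just a wrapper around asyncAfter:)

    func delay(_ seconds: Double, closure: @escaping () -> Void) {
        DispatchQueue.main.asyncAfter(deadline: .now() + seconds, execute: closure)
    }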
Trip the run loop...
view.addItemsA()
view.setNeedsDisplay()
view.layoutIfNeeded()
RunLoop.current.run(mode: RunLoop.Mode.default, before: Date())
view.addItemsB()
view.setNeedsDisplay()
view.layoutIfNeeded()
Reasonable. But our recent real life testing shows that this seems to NOT work in many cases.
(ie, Apple's UIKit is now sophisticated enough to smear UIKit work beyond that "trick".)
Thought: is there perhaps a way, in UIKit, to get a callback when it has, basically, drawn-up all the views you've stacked up? Is there another solution?
One solution seems to be to put the subviews in controllers, so you get a "didAppear" callback, and track those. That seems infantile, but maybe it's the only pattern? Would it really work anyway? (Merely one issue: I don't see any guarantee that didAppear ensures all subviews have been drawn.)
In case this still isn't clear...
Example everyday use case:
• Say there are perhaps seven of the sections.
• Say each one typically takes 0.01 to 0.20 seconds for UIKit to construct (depending on what info you're showing).
• If you just "let the whole thing go in one whack" it will often be OK or acceptable (total time, say 0.05 to 0.15 seconds) ... but ...
• there will often be a tedious pause for the user as the "new screen appears" (0.1 to 0.5 seconds or worse).
• Whereas if you do what I am asking about, it will always flow smoothly onto the screen, one chunk at a time, with the minimum possible time for each chunk.
TLDR
Force pending UI changes onto the render server with CATransaction.flush() or split the work across multiple frames using CADisplayLink (example code below).
Summary
Is there perhaps a way, in UIKit, to get a callback when it has drawn-up all the views you've stacked up?
No
iOS acts like a game, rendering changes (no matter how many you make) at most once per frame. The only way to guarantee a piece of code runs after your changes have been rendered on screen is to wait for the next frame.
Is there another solution?
Yes, iOS may only render changes once per frame but your app isn't what does that rendering. The window server process is.
Your app does its layout and rendering and then commits the changes to its layer tree to the render server. It does this automatically at the end of the run loop, or you can force outstanding transactions to be sent to the render server by calling CATransaction.flush().
However, blocking the main thread is bad in general (not just because it blocks UI updates). So if you can you should avoid it.
Possible Solutions
This is the part you are interested in.
1: Do as much as possible on a background queue and improve performance.
Seriously, the iPhone 7 is the third most powerful computer (not phone) in my house, beaten only by my gaming PC and MacBook Pro. It shouldn't take a 3 second pause to render your app's UI.
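A minimal sketch of option 1, assuming the expensive part can be split into pure data work (safe off the main thread) and cheap UIKit work; buildItemModelsA() and addItems(_:) are illustrative names, not a real API:

    DispatchQueue.global(qos: .userInitiated).async {
        let models = buildItemModelsA()   // heavy, UIKit-free work
        DispatchQueue.main.async {
            view.addItems(models)         // only the cheap UIKit part touches the main thread
        }
    }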
2: Flush pending CATransactions
EDIT: As pointed out by rob mayoff you can force CoreAnimation to send the pending changes to the render server by calling CATransaction.flush()
addItems1()
CATransaction.flush()
addItems2()
CATransaction.flush()
addItems3()
This won't actually render the changes right there but sends the pending UI updates to the window server, ensuring they are included in the next screen update.
This will work, but it comes with this warning in Apple's documentation:
However, you should attempt to avoid calling flush explicitly. By allowing flush to execute during the runloop...
...and transactions and animations that work from transaction to transaction will continue to function.
However the CATransaction header file includes this quote, which seems to imply that, even if they don't like it, this is officially supported usage.
In some circumstances (i.e. no run-loop, or the run-loop is blocked) it may be necessary to use explicit transactions to get timely render tree updates.
Apple's Documentation - "Better documentation for +[CATransaction flush]".
3: dispatch_after()
Just delay the code until the next runloop. dispatch_async(main_queue) won't work, but you can use dispatch_after() with no delay.
addItems1()
DispatchQueue.main.asyncAfter(deadline: .now() + 0.0) {
addItems2()
DispatchQueue.main.asyncAfter(deadline: .now() + 0.0) {
addItems3()
}
}
You mention in your answer this doesn't work for you anymore. However, it works fine in the test Swift Playground and example iOS app I've included with this answer.
4: Use CADisplayLink
CADisplayLink gets called once per frame and allows you to ensure only one operation runs per frame, guaranteeing the screen will be able to refresh between operations.
DisplayQueue.sharedInstance.addItem {
addItems1()
}
DisplayQueue.sharedInstance.addItem {
addItems2()
}
DisplayQueue.sharedInstance.addItem {
addItems3()
}
Needs this helper class to work (or similar).
// A queue of items that you want to run one per frame (to allow the display to update in between)
class DisplayQueue {
    static let sharedInstance = DisplayQueue()

    init() {
        displayLink = CADisplayLink(target: self, selector: #selector(displayLinkTick))
        displayLink.add(to: RunLoop.current, forMode: RunLoopMode.commonModes)
    }

    private var displayLink: CADisplayLink!

    @objc func displayLinkTick() {
        if !itemQueue.isEmpty {
            itemQueue.remove(at: 0)() // Remove the next closure from the queue and run it
            // Pause the display link if it's no longer needed
            displayLink.isPaused = (itemQueue.count == 0)
        }
    }

    private var itemQueue: [() -> ()] = []

    func addItem(block: @escaping () -> ()) {
        displayLink.isPaused = false // It's needed again
        itemQueue.append(block)      // Add the closure to the queue
    }
}
5: Call the runloop directly.
I don't like it because of the possibility for an infinite loop. But, I admit that is unlikely. I'm also not sure if this is officially supported or an Apple engineer is going to read this code and look horrified.
// Runloop (seems to work ok, might lead to infinite recursion if used too frequently in the codebase)
addItems1()
RunLoop.current.run(mode: .default, before: Date())
addItems2()
RunLoop.current.run(mode: .default, before: Date())
addItems3()
This should work, unless (while responding to the runloop events) you do something else that blocks that runloop call from completing, since the CATransactions are sent to the window server at the end of the runloop.
Example Code
Demonstration Xcode Project & Xcode Playground (Xcode 8.2, Swift 3)
Which option should I use?
I like the DispatchQueue.main.asyncAfter(deadline: .now() + 0.0) and CADisplayLink solutions the best. However, DispatchQueue.main.asyncAfter doesn't guarantee it will run on the next runloop tick, so you might not want to trust it.
CATransaction.flush() will force your UI changes to be pushed to the render server, and this usage seems to fit Apple's comments for the class, but it comes with some warnings attached.
In some circumstances (i.e. no run-loop, or the run-loop is blocked) it may be necessary to use explicit transactions to get timely render tree updates.
Detailed Explanation
The rest of this answer is background on what's going on inside UIKit and explains why the original answer's attempts to use view.setNeedsDisplay() and view.layoutIfNeeded() didn't do anything.
Overview of UIKit Layout & Rendering
CADisplayLink is totally unrelated to UIKit and the runloop.
Not quite. iOS's UI is GPU rendered like a 3D game, and it tries to do as little work as possible. A lot of things, like layout and rendering, don't happen when something changes but when the result is needed. That is why we call setNeedsLayout ourselves, not layoutSubviews. The layout might be invalidated multiple times in a frame, but iOS will try to call layoutSubviews only once per frame, instead of the 10 times setNeedsLayout might have been called.
However, quite a lot happens on the CPU (layout, -drawRect:, etc.), so how does it all fit together?
Note this is all simplified and skips lots of things like CALayer actually being the real view object that shows on screen not UIView, etc...
Each UIView can be thought of as a bitmap, an image/GPU texture. When the screen is rendered the GPU composites the view hierarchy into the resulting frame we see: it renders the subviews' textures over the top of the views beneath them into the finished frame that appears on screen (similarly to a game).
This is what has allowed iOS to have such a smooth and easily animated interface. To animate a view across the screen it doesn't have to rerender anything. On the next frame that view's texture is just composited in a slightly different place on the screen than before. Neither it, nor the view it was on top of, needs to have its contents rerendered.
In the past a common performance tip was to cut down on the number of views in the view hierarchy by rendering table view cells entirely in drawRect:. That tip was about making the GPU compositing step faster on early iOS devices. However, GPUs are so fast on modern iOS devices that this is no longer much of a concern.
LayoutSubviews and DrawRect
-setNeedsLayout invalidates the views current layout and marks it as needing layout.
-layoutIfNeeded will relayout the view if it doesn't have a valid layout
-setNeedsDisplay will mark the view as needing to be redrawn. We said earlier that each view is rendered into a texture/image which the GPU can move around and manipulate without it being redrawn; this marks that texture as needing to be regenerated. The drawing is done by calling -drawRect: on the CPU, and so is slower than relying on the GPU compositing, which is all that happens on most frames.
An important thing to notice is what these methods do not do. The layout methods do not do anything visual. Though if the view's contentMode is set to redraw, changing the view's frame might invalidate the view's render (i.e. trigger -setNeedsDisplay).
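As a small illustration of how invalidation coalesces (someView stands for any UIView):

    // Invalidation is cheap and coalesced; forcing layout is what does the work.
    someView.setNeedsLayout()   // marks the layout invalid, returns immediately
    someView.setNeedsLayout()   // still only one pending layout pass
    someView.layoutIfNeeded()   // runs layoutSubviews() now, once, because layout was invalid
    someView.layoutIfNeeded()   // does nothing; the layout is already valid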
You can try the following all day. It does not seem to work:
view.addItemsA()
view.setNeedsDisplay()
view.layoutIfNeeded()
view.addItemsB()
view.setNeedsDisplay()
view.layoutIfNeeded()
view.addItemsC()
view.setNeedsDisplay()
view.layoutIfNeeded()
From what we've learnt, it should now be obvious why this doesn't work.
view.layoutIfNeeded() does nothing but recalculate the frames of its subviews.
view.setNeedsDisplay() just marks the view as needing redrawing the next time UIKit sweeps through the view hierarchy updating view textures to send to the GPU. However, it doesn't affect the subviews you tried to add.
In your example view.addItemsA() adds 100 subviews. Those are separate, unrelated layers/textures on the GPU until the GPU composites them together into the next framebuffer. The only exception is if the CALayer has shouldRasterize set to true, in which case it creates a separate texture for the view and its subviews and renders them (I think on the GPU) into a single texture, effectively caching the compositing it would otherwise have to do each frame. This has the performance advantage of not needing to composite all of its subviews every frame. However, if the view or its subviews change frequently (like during an animation) it becomes a performance penalty, as the cached texture is frequently invalidated and must be redrawn (similar to frequently calling -setNeedsDisplay).
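For reference, a minimal sketch of the shouldRasterize case just mentioned (containerView is an illustrative name):

    // Flatten the container's subtree into one cached texture on the GPU.
    containerView.layer.shouldRasterize = true
    containerView.layer.rasterizationScale = UIScreen.main.scale // match the screen to avoid a blurry cache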
Now, any game engineer would just do this ...
view.addItemsA()
RunLoop.current.run(mode: .default, before: Date())
view.addItemsB()
RunLoop.current.run(mode: .default, before: Date())
view.addItemsC()
Now indeed, that seems to work.
But why does it work?
Now -setNeedsLayout and -setNeedsDisplay don't trigger a relayout or redraw but instead just mark the view as needing one. As UIKit comes through preparing to render the next frame, it triggers views with invalid textures or layouts to redraw or relayout. After everything is ready it tells the GPU to composite and display the new frame.
So the main run loop in UIKit probably looks something like this.
-(void)runloop
{
    //... do touch handling and other events, etc...

    self.windows.recursivelyCheckLayout() // effectively call layoutIfNeeded on everything
    self.windows.recursivelyDisplay()     // call -drawRect: on things that need it
    GPU.recompositeFrame()                // render all the layers into the frame buffer for this frame and display it on screen
}
So back to your original code.
view.addItemsA() // Takes 1 second
view.addItemsB() // Takes 1 second
view.addItemsC() // Takes 1 second
So why do all 3 changes show up at once after 3 seconds instead of one at a time 1 second apart?
Well, if this bit of code is running as a result of a button press, or similar, it is executing synchronously, blocking the main thread (the thread UIKit requires UI changes be made on), and so it blocks the run loop at line 1, the event processing part. In effect, you are making that first line of the runloop method take 3 seconds to return.
However, we have determined that the layout won't update until line 3, the individual views won't be rendered until line 4 and no changes will actually appear on screen until the last line of the runloop method, line 5.
The reason that pumping the runloop manually works is because you are basically inserting a call to the runloop() method. Your method is running as a result of being called from within the runloop function
-runloop()
- events, touch handling, etc...
- addLotsOfViewsPressed():
-addItems1() // blocks for 1 second
-runloop()
| - events and touch handling
| - layout invalid views
| - redraw invalid views
| - tell GPU to composite and display a new frame
-addItem2() // blocks for 1 second
-runloop()
| - events // hopefully nothing massive like addLotsOfViewsPressed()
| - layout
| - drawing
| - GPU render new frame
-addItems3() // blocks for 1 second
- relayout invalid views
- redraw invalid views
- GPU render new frame
This will work, as long as it's not used very often because this is using recursion. If it's used frequently every call to the -runloop could trigger another one leading to runaway recursion.
THE END
Below this point is just clarification.
Extra information about what is going on here
CADisplayLink and NSRunLoop
If I'm not mistaken, KH, it appears that fundamentally you believe "the run loop" (ie: this one: RunLoop.current) is CADisplayLink.
The runloop and CADisplayLink aren't the same thing. But CADisplayLink gets attached to a runloop in order to work.
I slightly misspoke earlier (in the chat) when I said NSRunLoop calls CADisplayLink every tick; it doesn't. To my understanding NSRunLoop is basically a while(1) loop whose job is to keep the thread alive, process events, etc. To avoid slipping up I'm going to quote extensively from Apple's own documentation for the next bits.
A run loop is very much like its name sounds. It is a loop your thread enters and uses to run event handlers in response to incoming events. Your code provides the control statements used to implement the actual loop portion of the run loop—in other words, your code provides the while or for loop that drives the run loop. Within your loop, you use a run loop object to "run” the event-processing code that receives events and calls the installed handlers.
Anatomy of a Run Loop - Threading Programming Guide - developer.apple.com
CADisplayLink uses NSRunLoop and needs to be added to one but is different. To quote the CADisplayLink header file:
“Unless paused, it will fire every vsync until removed.”
From: func add(to runloop: RunLoop, forMode mode: RunLoopMode)
And from the preferredFramesPerSecond properties documentation.
Default value is zero, which means the display link will fire at the native cadence of the display hardware.
...
For example, if the maximum refresh rate of the screen is 60 frames per second, that is also the highest frame rate the display link sets as the actual frame rate.
So if you want to do anything timed to screen refreshes CADisplayLink (with default settings) is what you want to use.
Introducing the Render Server
If you happen to block a thread, that has nothing to do with how UIKit works.
Not quite. The reason we are required to only touch UIViews from the main thread is that UIKit is not thread safe and it runs on the main thread. If you block the main thread you have blocked the thread UIKit runs on.
Whether UIKit works "like you say" {... "send a message to stop video frames. do all our work! send another message to start video again!"}
That’s not what I’m saying.
Or whether it works "like I say" {... ie, like normal programming "do as much as you can until the frames about to end - oh no it's ending! - wait until the next frame! do more..."}
That’s not how UIKit works and I don’t see how it ever could without fundamentally changing its architecture. How is it meant to watch for the frame ending?
As discussed in the “Overview of UIKit Layout & Rendering” section of my answer, UIKit tries to do no work upfront. -setNeedsLayout and -setNeedsDisplay can be called as many times per frame as you want. They only invalidate the layout and the view's render; if it has already been invalidated that frame, the second call does nothing. This means that if 10 changes all invalidate the layout of a view, UIKit still only needs to pay the cost of recalculating the layout once (unless you used -layoutIfNeeded in between the -setNeedsLayout calls).
The same is true of -setNeedsDisplay. Though, as previously discussed, neither of these relates to what appears on screen. layoutIfNeeded updates the view's frame and displayIfNeeded updates the view's render texture, but that is not related to what appears on screen. Imagine each UIView has a UIImage variable that represents its backing store (it's actually in CALayer, or below, and isn't a UIImage, but this is an illustration). Redrawing the view simply updates that UIImage. The UIImage is still just data, not a graphic on screen, until it is drawn onto the screen by something.
So how does a UIView get drawn on screen?
Earlier I wrote pseudo code for UIKit's main render runloop. So far in my answer I have been ignoring a significant part of UIKit: not all of it runs inside your process. A surprising amount of UIKit work related to displaying things actually happens in the render server process, not your app's process. The render server/window server was SpringBoard (the home screen UI) until iOS 6 (since then BackBoard and FrontBoard have absorbed a lot of SpringBoard's more core-OS-related features, leaving it to focus on being the main operating system UI: home screen, lock screen, notification center, control center, app switcher, etc.).
The pseudo code for UIKit’s main render runloop is likely closer to this. And again, remember UIKit’s architecture is designed to do as little work as possible so it will only do this stuff once per frame (unlike network calls or whatever else the main runloop might also manage).
-(void)runloop
{
    //... do touch handling and other events, etc...

    UIWindow.allWindows.layoutIfNeeded()     // effectively call layoutIfNeeded on everything
    UIWindow.allWindows.recursivelyDisplay() // call -drawRect: on things that need to be rerendered

    // Send all the changes to the render server process to actually make these changes appear on screen,
    // i.e. CATransaction.flush(), which means:
    CoreAnimation.commit_layers_animations_to_WindowServer()
}
This makes sense, a single iOS app freezing shouldn’t be able to freeze the entire device. In fact we can demonstrate this on an iPad with 2 apps running side by side. When we cause one to freeze the other is unaffected.
These are 2 empty app templates I created, with the same code pasted into both. Both show the current time in a label in the middle of the screen. When I press freeze it calls sleep(1) and freezes the app. Everything stops. But iOS as a whole is fine. The other app, control center, notification center, etc. are all unaffected by it.
Whether UIKit works "like you say" {... "send a message to stop video frames. do all our work! send another message to start video again!"}
In the app there is no UIKit stop video frames command because your app has no control over the screen at all. The screen will update at 60FPS using whatever frame the window server gives it. The window server will composite a new frame for the display at 60FPS using the last known positions, textures and layer trees your app gave it to work with.
When you freeze the main thread in your app the CoreAnimation.commitLayersAnimationsToWindowServer() line, which runs last (after your expensive add lots of views code), is blocked and doesn’t run. As a result even if there are changes, the window server hasn’t been sent them yet and so just continues to use the last state it was sent for your app.
Animation is another part of UIKit that runs out of process, in the window server. If, before the sleep(1) in that example app, we start a UIView animation, we will see it start, then the label will freeze and stop updating (because sleep() has run). However, even though the app's main thread is frozen, the animation continues regardless.
func freezePressed() {
var newFrame = animationView.frame
newFrame.origin.y = 600
UIView.animate(withDuration: 3, animations: { [weak self] in
self?.animationView.frame = newFrame
})
// Wait for the animation to have a chance to start, then try to freeze it
DispatchQueue.main.asyncAfter(deadline: .now() + 0.1) {
NSLog("Before freeze");
sleep(2) // block the main thread for 2 seconds
NSLog("After freeze");
}
}
This is the result:
In fact we can go one better.
If we change the freezePressed() method to this.
func freezePressed() {
var newFrame = animationView.frame
newFrame.origin.y = 600
UIView.animate(withDuration: 4, animations: { [weak self] in
self?.animationView.frame = newFrame
})
// Wait for the animation to have a chance to start, then try to freeze it
DispatchQueue.main.asyncAfter(deadline: .now() + 0.2) { [weak self] in
// Do a lot of UI changes, these should completely change the view, cancel its animation and move it somewhere else
self?.animationView.backgroundColor = .red
self?.animationView.layer.removeAllAnimations()
newFrame.origin.y = 0
newFrame.origin.x = 200
self?.animationView.frame = newFrame
sleep(2) // block the main thread for 2 seconds, this will prevent any of the above changes from actually taking place
}
}
Now without the sleep(2) call the animation will run for 0.2 seconds then it’ll be canceled and the view will be moved to a different part of the screen a different color. However, the sleep call blocks the main thread for 2 seconds meaning none of these changes are sent to the window server until most of the way through the animation.
And just to confirm here is the result with the sleep() line commented out.
This should hopefully explain what's going on. These changes are like the UIViews you add in your question. They are queued up to be included in the next update, but because you are blocking the main thread by sending so many in one go, you are stopping the message from being sent that would get them included in the next frame. The next frame isn't being blocked; iOS will produce a new frame showing all the updates it has received from SpringBoard and other iOS apps. But because your app is still blocking its main thread, iOS hasn't received any updates from your app and so won't show any change (unless it has changes, like animations, already queued up on the window server).
So to summarise
UIKit tries to do as little as possible so batches changes to layout and rendering up into one go.
UIKit runs on the main thread, blocking the main thread prevents UIKit doing anything until that operation has completed.
UIKit in process can’t touch the display, it sends layers and updates to the window server every frame
If you block the main thread then the changes are never sent to the window server and so aren’t displayed
The window server has final control of what appears on screen. iOS only sends updates to the window server when the current CATransaction is committed. To make this happen when it is needed, iOS registers a CFRunLoopObserver for the .beforeWaiting activity on the main thread's run loop. After handling an event (presumably by calling into your code), the run loop calls the observer before it waits for the next event to arrive. The observer commits the current transaction, if there is one. Committing the transaction includes running the layout pass, the display pass (in which your drawRect methods are called), and sending the updated layout and contents to the window server.
Calling layoutIfNeeded performs layout, if needed, but doesn't invoke the display pass or send anything to the window server. If you want iOS to send updates to the window server, you must commit the current transaction.
One way to do that is to call CATransaction.flush(). A reasonable case to use CATransaction.flush() is when you want to put a new CALayer on the screen and you want it to have an animation immediately. The new CALayer won't be sent to the window server until the transaction is committed, and you can't add animations to it until it's on the screen. So, you add the layer to your layer hierarchy, call CATransaction.flush(), and then add the animation to the layer.
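A minimal sketch of that new-layer-plus-animation pattern (view and the other names are illustrative, not taken from the answer's project):

    let layer = CALayer()
    layer.frame = CGRect(x: 0, y: 0, width: 100, height: 100)
    layer.backgroundColor = UIColor.purple.cgColor
    view.layer.addSublayer(layer)

    CATransaction.flush() // commit the transaction so the layer reaches the window server

    let fade = CABasicAnimation(keyPath: "opacity")
    fade.fromValue = 0
    fade.toValue = 1
    fade.duration = 0.5
    layer.add(fade, forKey: "fadeIn")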
You can use CATransaction.flush to get the effect you want. I don't recommend this, but here's the code:
@IBOutlet var stackView: UIStackView!
@IBAction func buttonWasTapped(_ sender: Any) {
stackView.subviews.forEach { $0.removeFromSuperview() }
for _ in 0 ..< 3 {
addSlowSubviewToStack()
CATransaction.flush()
}
}
func addSlowSubviewToStack() {
let view = UIView()
// 300 milliseconds of “work”:
let endTime = CFAbsoluteTimeGetCurrent() + 0.3
while CFAbsoluteTimeGetCurrent() < endTime { }
view.translatesAutoresizingMaskIntoConstraints = false
view.heightAnchor.constraint(equalToConstant: 44).isActive = true
view.backgroundColor = .purple
view.layer.borderColor = UIColor.yellow.cgColor
view.layer.borderWidth = 4
stackView.addArrangedSubview(view)
}
And here's the result:
The problem with the above solution is that it blocks the main thread while the slow work runs. If your main thread doesn't respond to events, not only does the user get frustrated (because your app isn't responding to her touches), but eventually iOS will decide that the app is hung and kill it.
The better way is simply to schedule the addition of each view when you want it to appear. You claim “it's not engineering”, but you are wrong, and your given reasons make no sense. iOS generally updates the screen every 16⅔ milliseconds (unless your app takes longer than that to handle events). As long as the delay you want is at least that long, you can just schedule a block to be run after the delay to add the next view. If you want a delay of less than 16⅔ milliseconds, you cannot in general have it.
So here's the better, recommended way to add the subviews:
@IBOutlet var betterButton: UIButton!
@IBAction func betterButtonWasTapped(_ sender: Any) {
betterButton.isEnabled = false
stackView.subviews.forEach { $0.removeFromSuperview() }
addViewsIfNeededWithoutBlocking()
}
private func addViewsIfNeededWithoutBlocking() {
guard stackView.arrangedSubviews.count < 3 else {
betterButton.isEnabled = true
return
}
self.addSubviewToStack()
DispatchQueue.main.asyncAfter(deadline: .now() + .milliseconds(300)) {
self.addViewsIfNeededWithoutBlocking()
}
}
func addSubviewToStack() {
let view = UIView()
view.translatesAutoresizingMaskIntoConstraints = false
view.heightAnchor.constraint(equalToConstant: 44).isActive = true
view.backgroundColor = .purple
view.layer.borderColor = UIColor.yellow.cgColor
view.layer.borderWidth = 4
stackView.addArrangedSubview(view)
}
And here's the (identical) result:
It is kind of a solution. But it's not engineering.
Actually, yes it is. By adding the delay, you are doing exactly what you said you wanted to do: you are permitting the runloop to complete and layout to be performed, and re-entering on the main thread as soon as that's done. That, in fact, is one of my main uses of delay. (You might even be able to use a delay of zero.)
Three methods that might work are below. The first I could make work if a child view controller's view is added, even when it is not directly in the view controller. The second is a custom view. It seems to me you are asking when layoutSubviews is finished on the view; that continuous process is what is freezing the display, because of the 1000+ subviews being added sequentially. Depending on your situation you can add a child view controller's view and post a notification when viewDidLayoutSubviews() is finished, but I don't know if this fits your use case. I tested with 1000 subviews being added to the view controller's view and it worked. In that case a delay of 0 will do exactly what you want. Here is a working example.
import UIKit
class TrackingViewController: UIViewController {
var layoutCount = 0
override func viewDidLoad() {
super.viewDidLoad()
// Add a bunch of subviews
for _ in 0...1000{
let view = UIView(frame: self.view.bounds)
view.autoresizingMask = [.flexibleWidth,.flexibleHeight]
view.backgroundColor = UIColor.green
self.view.addSubview(view)
}
}
override func viewDidLayoutSubviews() {
super.viewDidLayoutSubviews()
print("Called \(layoutCount)")
if layoutCount == 1{
//finished because first call was an emptyview
NotificationCenter.default.post(name: NSNotification.Name(rawValue: "kLayoutFinished"), object: nil)
}
layoutCount += 1
    }
}
Then in your main View Controller that you are adding subviews you could do this.
import UIKit
class ViewController: UIViewController {
var y :CGFloat = 0
var count = 0
override func viewDidLoad() {
super.viewDidLoad()
// Do any additional setup after loading the view, typically from a nib.
NotificationCenter.default.addObserver(self, selector: #selector(ViewController.finishedLayoutAddAnother), name: NSNotification.Name(rawValue: "kLayoutFinished"), object: nil)
}
override func viewDidAppear(_ animated: Bool) {
super.viewDidAppear(animated)
DispatchQueue.main.asyncAfter(deadline: DispatchTime.now() + 4, execute: {
//add first view
self.finishedLayoutAddAnother()
})
}
deinit {
NotificationCenter.default.removeObserver(self, name: NSNotification.Name(rawValue: "kLayoutFinished"), object: nil)
}
func finishedLayoutAddAnother(){
print("We are finished with the layout of last addition and we are displaying")
addView()
}
func addView(){
// we keep adding views just to cause
print("Fired \(Date())")
if count < 100{
DispatchQueue.main.asyncAfter(deadline: DispatchTime.now() + 0.0, execute: {
// let test = TestSubView(frame: CGRect(x: self.view.bounds.midX - 50, y: y, width: 50, height: 20))
let trackerVC = TrackingViewController()
trackerVC.view.frame = CGRect(x: self.view.bounds.midX - 50, y: self.y, width: 50, height: 20)
trackerVC.view.backgroundColor = UIColor.red
self.view.addSubview(trackerVC.view)
trackerVC.didMove(toParentViewController: self)
self.y += 30
self.count += 1
})
}
}
}
Or there is an even crazier, and probably better, way: create your own view that in a sense keeps its own time and calls back when it can continue without dropping frames. This is unpolished but would work.
import UIKit
class CompletionView: UIView {
private var lastUpdate : TimeInterval = 0.0
private var checkTimer : Timer!
private var milliSecTimer : Timer!
var adding = false
private var action : (()->Void)?
//just for testing
private var y : CGFloat = 0
private var x : CGFloat = 0
//just for testing
var randomColors = [UIColor.purple,UIColor.gray,UIColor.green,UIColor.green]
init(frame: CGRect,targetAction:(()->Void)?) {
super.init(frame: frame)
action = targetAction
adding = true
for i in 0...999{
if y > bounds.height - bounds.height/100{
y -= bounds.height/100
}
let v = UIView(frame: CGRect(x: x, y: y, width: bounds.width/10, height: bounds.height/100))
x += bounds.width/10
if i % 9 == 0{
x = 0
y += bounds.height/100
}
v.backgroundColor = randomColors[Int(arc4random_uniform(4))]
self.addSubview(v)
}
}
required init?(coder aDecoder: NSCoder) {
fatalError("init(coder:) has not been implemented")
}
func milliSecCounting(){
lastUpdate += 0.001
}
func checkDate(){
//length of 1 frame
if lastUpdate >= 0.003{
checkTimer.invalidate()
checkTimer = nil
milliSecTimer.invalidate()
milliSecTimer = nil
print("notify \(lastUpdate)")
adding = false
if let _ = action{
self.action!()
}
}
}
override func layoutSubviews() {
super.layoutSubviews()
lastUpdate = 0.0
if checkTimer == nil && adding == true{
checkTimer = Timer.scheduledTimer(timeInterval: 0.01, target: self, selector: #selector(CompletionView.checkDate), userInfo: nil, repeats: true)
}
if milliSecTimer == nil && adding == true{
milliSecTimer = Timer.scheduledTimer(timeInterval: 0.001, target: self, selector: #selector(CompletionView.milliSecCounting), userInfo: nil, repeats: true)
}
}
}
import UIKit
class ViewController: UIViewController {
var y :CGFloat = 30
override func viewDidLoad() {
super.viewDidLoad()
// Wait 3 seconds to give the sim time
DispatchQueue.main.asyncAfter(deadline: DispatchTime.now() + 3, execute: {
[weak self] in
self?.addView()
})
}
var count = 0
func addView(){
print("starting")
if count < 20{
let completionView = CompletionView(frame: CGRect(x: 0, y: self.y, width: 100, height: 100), targetAction: {
[weak self] in
self?.count += 1
self?.addView()
print("finished")
})
self.y += 105
completionView.backgroundColor = UIColor.blue
self.view.addSubview(completionView)
}
}
}
Or finally, you could do the callback or notification in viewDidAppear, but it also seems that any code executed in that callback would need to be wrapped like this to execute in a timely manner from the viewDidAppear callback:
DispatchQueue.main.asyncAfter(deadline: DispatchTime.now() + 0.0, execute: {
    // code
})
Use NSNotification to achieve the needed effect.
First, register an observer on the main view controller and create the observer handler.
Then, initialize all of these A, B, C... objects on a separate thread (a background thread, for instance, via performSelectorInBackground:).
Then post a notification from the subviews and, finally, use performSelectorOnMainThread: to add the subviews in the desired order with the needed delays.
To answer the questions in the comments, let's say you have a UIViewController that is shown on the screen. Exactly where you put the code controlling view appearance is up to you; the code here is for the UIViewController object (so it is self). View is some UIView object, considered the parent view. ViewN is one of the subviews. This can be scaled up later.
[[NSNotificationCenter defaultCenter] addObserver:self
                                         selector:@selector(handleNotification:)
                                             name:@"ViewNotification"
                                           object:nil];
This registers an observer, which is needed to communicate between threads.
ViewN * V1 = [[ViewN alloc] init];
Here the subviews can be allocated; they are not shown yet.
- (void) handleNotification: (NSNotification *) note {
    ViewN * Vx = (ViewN *) [note.userInfo objectForKey: @"ViewArrived"];
    [self.View performSelectorOnMainThread: @selector(addSubview:) withObject: Vx waitUntilDone: FALSE];
}
This handler receives the messages and places the UIView object into the parent view. It looks strange, but the point is that you need to execute the addSubview: method on the main thread for it to take effect. performSelectorOnMainThread: lets you add the subview on the main thread without blocking application execution.
Now we make a method that will send the subviews to the screen.
-(void) sendToScreen: (id) obj {
    NSDictionary * mess = [NSDictionary dictionaryWithObjectsAndKeys: obj, @"ViewArrived", nil];
    [[NSNotificationCenter defaultCenter] postNotificationName: @"ViewNotification" object: nil userInfo: mess];
}
This method posts the notification from any thread, sending the view as an NSDictionary item named ViewArrived.
And finally, the views that have to be added with a 3-second delay between each:
-(void) initViews {
    ViewN * V1 = [[ViewN alloc] init];
    ViewN * V2 = [[ViewN alloc] init];
    ViewN * V3 = [[ViewN alloc] init];
    [self performSelector: @selector(sendToScreen:) withObject: V1 afterDelay: 3.0];
    [self performSelector: @selector(sendToScreen:) withObject: V2 afterDelay: 6.0];
    [self performSelector: @selector(sendToScreen:) withObject: V3 afterDelay: 9.0];
}
This is not the only solution. It is also possible to control the subviews of the parent view by counting the parent's subviews array.
In any case, you can run the initViews method whenever you need, even on a background thread; it lets you control subview appearance, and the performSelector mechanism avoids blocking the executing thread.

Why does MTKView persist without being referenced? How do I clean it up?

I noticed that when I remove all references to an instance of my MTKView subclass, or never create any in the first place, the instance still persists long after and continues drawing. This can be demonstrated by calling the makeLeak method below.
Source:
class MTKViewSubclass: MTKView {
override func draw(_ rect: CGRect) {
print("MTKViewSubclass draw")
}
}
func makeLeak() {
// Make an MTKViewSubclass without storing a reference to it.
let _ = MTKViewSubclass(frame: CGRect(x: 0, y: 0, width: 100, height: 100), device: nil)
}
Output:
MTKViewSubclass draw
MTKViewSubclass draw
MTKViewSubclass draw
...
Given that the draw method continues to be called, it makes sense that the MTKView's display link is keeping it alive. Xcode's Memory Graph Hierarchy in the Debug navigator seems to confirm that, though I'm not an expert at reading it.
It surprises me that the MTKView draws even when it is not in the view hierarchy. In any case, how can I make sure it's cleaned up? I have a view controller with an MTKView that may be instantiated and freed many times.
Edit: Interestingly, while pausing the MTKView with isPaused seems to remove its display link reference, the view still persists. The Memory Graph Hierarchy shows a reference from the MTKView's CAMetalLayer to the view itself, among other references. I don't know which references are strong.
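(For what it's worth, a simple way to confirm whether the instance is ever freed is a deinit log in the subclass; if it never prints after the last reference is dropped, something, presumably the display link or the CAMetalLayer machinery, is still holding the view:)

    import MetalKit

    class MTKViewSubclass: MTKView {
        override func draw(_ rect: CGRect) {
            print("MTKViewSubclass draw")
        }
        deinit {
            print("MTKViewSubclass deinit")
        }
    }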

sound is not playing when node is touched in spritekit swift

I want to have a sound when a node is clicked.
currently the code is:
let sndButtonClick = SKAction.playSoundFileNamed("button_click.wav", waitForCompletion: false)
and in touchesBegan it's:
override func touchesBegan(touches: Set<UITouch>, withEvent event: UIEvent?) {
    for touch: AnyObject in touches {
        let location = touch.locationInNode(self)
        if playButton.containsPoint(location) {
            playButton.runAction(sndButtonClick)
            playButton.removeFromParent()
            let nextScene = GamePlayMode(size: self.scene!.size)
            nextScene.scaleMode = self.scaleMode
            self.view?.presentScene(nextScene)
        }
    }
}
I did the exact same thing for the collision of two nodes in GamePlayMode and it works, but in the main menu it does not!
I tried
self.runAction(sndButtonClick)
then
playbutton.runAction(sndButtonClick)
both didn't work.
Why doesn't running the sound action on a button work?
This is from the docs:
An SKAction object is an action that is executed by a node in the
scene (SKScene)...When the scene processes its nodes, actions associated with those nodes are evaluated.
This means the actions for a node will run only if the node is added to the scene. So this has no effect:
playButton.runAction(sndButtonClick)
playButton.removeFromParent()
because you have removed the button from the scene and it will not be present in the next frame when actions should be executed. That is how runAction method works:
Adds an action to the list of actions executed by the node...The new action is processed the next time the scene’s animation loop is processed.
Also, because you are immediately calling presentScene there will be no next frame anyway, so even if you delete the removeFromParent statement the sound will not work, because there is no next frame.
Why doesn't running the sound action on the scene work?
self.runAction(sndButtonClick) won't work because you are making the transition immediately, without waiting for the next frame in which the queued actions would be executed (as described above).
Solution for the Problem
To play the sound before the transition, you have to wait for the next frame, and you can do something like:
runAction(sndButtonClick, completion: {
self.view?.presentScene(nextScene)
})
or
let block = SKAction.runBlock({
self.view?.presentScene(nextScene)
})
runAction(SKAction.sequence([sndButtonClick, block]))
Preventing Leaks:
Consider using a capture list inside a block which captures self, to avoid possible strong reference cycles when needed, like this:
let block = SKAction.runBlock({
[unowned self] in
//use self here
})
In this particular case of yours, it should be safe to go without a capture list because the scene doesn't have a strong reference to the block. Only the block has a strong reference to the scene, but after the block is executed, because nothing retains it (no strong references to it), it will be released, and thus the scene can be released correctly. But if the block were declared as a property, or the action which executes the block were running infinitely (using the repeatActionForever method to repeat a certain sequence), then you would have a leak for sure.
You should always override the scene's deinit to see what is going on (if it is not called, something is retaining the scene and causing the leak).
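To make the repeat-forever case concrete, here is a hypothetical sketch (Swift 2 era API to match the code above; LeakDemoScene and doSomething are illustrative, not from the question):

    import SpriteKit

    class LeakDemoScene: SKScene {
        func doSomething() { } // placeholder work for the block to call

        override func didMoveToView(view: SKView) {
            // Leaky: the scene retains the repeating action, and the block
            // strongly captures the scene, so neither can ever be released.
            let block = SKAction.runBlock { self.doSomething() }
            runAction(SKAction.repeatActionForever(
                SKAction.sequence([block, SKAction.waitForDuration(1)])))
            // Fix: SKAction.runBlock { [unowned self] in self.doSomething() }
        }

        deinit {
            // With the leaky block above this never prints; with the capture
            // list fix it prints once the scene is no longer presented or referenced.
            print("LeakDemoScene deinit")
        }
    }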

Image memory not being freed

I have an issue in an iPad app where every image (all 1024x768) displayed takes 3 MB of RAM that is never freed when no longer needed, leading to a crash after some time.
I load images using the "contentsOfFile" method, which should (in my understanding) release memory when memory is low and the app doesn't need it anymore, but that just doesn't seem to be the case (even when I simulate a memory warning in the simulator).
I made a really simple test class to illustrate the problem below (Swift 1.2).
When the application loads it creates a view and adds sublayers filled with the images; then when I tap on the screen I expect the 72 MB taken by the uncompressed bitmap data (used by Core Animation to render the layers) to be freed (or at least to be freed when I simulate a memory warning), which it never is.
(Note : I use ARC on both my main project and this test project)
class ViewController: UIViewController {
func tap(gesture : UITapGestureRecognizer)
{
if gesture.state == UIGestureRecognizerState.Ended {
var tmpV = self.view.viewWithTag(1000)
// tried everything to release memory
for subLayer : CALayer in tmpV?.layer.sublayers as! [CALayer!] {
subLayer.contents = nil
}
tmpV?.layer.sublayers.removeAll(keepCapacity: false)
tmpV?.removeFromSuperview()
tmpV = nil
}
}
override func viewDidAppear(animated: Bool) {
super.viewDidAppear(animated)
//add gesture to remove images from view
var gesture = UITapGestureRecognizer(target: self, action: Selector("tap:"))
self.view.addGestureRecognizer(gesture)
//loading images
var imagesArray : [UIImage!]! = self.fillImagesArray()
//adding those images to a view
let tmpV = UIView(frame: CGRect(x: 0, y: 0, width: 1024, height: 768))
for img in imagesArray {
var layer = CALayer()
layer.frame = CGRect(x: 0, y: 0, width: 1024, height: 768)
layer.contents = img.CGImage
tmpV.layer.addSublayer(layer)
}
tmpV.tag = 1000
self.view.addSubview(tmpV)
}
func fillImagesArray() -> [UIImage!]{
var array : [UIImage!] = []
for i in 0 ... 23 {
var image : UIImage?
var filename = NSBundle.mainBundle().pathForResource("image \(i)", ofType: "png")
image = UIImage(contentsOfFile: filename!)
array.append(image)
}
return array
}
override func didReceiveMemoryWarning() {
super.didReceiveMemoryWarning()
}
}
EDIT:
To add a few more details, here is a screenshot taken from Profile/Allocations tool, so as I said I guess that the memory is taken by CA when rendering the layers.
When the images aren't displayed on screen the memory use is still really low, but once they have been displayed they aren't released, even after removal.
I tried drawing the tmpV layer, removing the sublayers, and then assigning the drawn image to tmpV.layer.contents in viewDidAppear; if I do so my image is correct and the memory stays low, but it costs around 1.5 seconds to draw the layer (I need my app to be fast and I can't afford this time).
The only way I found to release the memory was to add the images into a class-level array and empty it in the tap func. But then I'd need to reload the images again if I want to re-create an image, and I'd like to avoid that.
You said:
I load images using "contentOfFile" method which should (in my
understanding) release memory when the memory is low and the app
doesn't need it anymore...
(should be contentsOfFile, with an "s")
That's not quite correct. The other common way of loading images, imageNamed, caches the images it loads in case you need them again.
The contentsOfFile method for loading images does not do any caching. It hands you back an image, and then you need to maintain a strong reference to the image or it will be deallocated. You need to track all strong references to the image and make sure they are all nilled out once you are done with them.
Let's walk through the code you posted and the ownership of the images (and the CGImage data from those images.)
You store your images in an array that's defined in the scope of your fillImagesArray method. That method returns the array, so the caller will take ownership of the array. You're storing the array of images in another local variable, imagesArray, in the scope of viewDidAppear. That array should be released when viewDidAppear returns.
You install the CGImage data from the images into a set of layers which you install as sublayers of a view, tmpV, that you create.
The original UIImage objects should be released and deallocated when viewDidAppear returns. However, the image data is now stored in a bunch of layers.
You then install tmpV as a subview of your view controller's content view.
So when all that is done the image data is owned by your layers. The layers are owned by your view's layer, which is owned by your view, tmpV. That view is owned by your view controller's content view. I don't see any other ownership of the image data. The chain of strong references has a single point, the fact that your tmpV view was added as a subview of your view controller's content view.
So, you should be able to get rid of the whole kit and caboodle simply by removing tmpV from its superview. If that works, the data should be deallocated.
The other work you do of zeroing out the image data from the layers, and removing the layers, should not be needed. In fact, it might cause problems. I would remove all that code. In particular, the call that empties out the array of sublayers looks like a bad idea to me. If you're going to iterate through the array of layers, I would suggest you call removeFromSuperlayer on each one (although you'd need to copy the array of layers before enumerating it so it doesn't mutate while it's being enumerated.)
You should log the code that fetches your tmpV view via tag to make sure it finds the view.
How are you concluding that your memory is not being freed, and where are you measuring it?
The programmer is responsible for all memory usage associated with contentsOfFile. If you have a strong reference to an image object, say in an array, it will not be removed from memory until you remove it from the array and there are no strong references to it.
You might want to consider using NSCache and its associated NSDiscardableContent protocol to control the images.
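A minimal sketch of the NSCache idea (written in current Swift syntax rather than the question's Swift 1.2; the names are illustrative). NSCache holds the strong references for you and can evict its contents when the system needs memory:

    import UIKit

    let imageCache = NSCache<NSString, UIImage>()

    func cachedImage(named name: String) -> UIImage? {
        if let image = imageCache.object(forKey: name as NSString) {
            return image // reuse the already-decoded image
        }
        guard let path = Bundle.main.path(forResource: name, ofType: "png"),
              let image = UIImage(contentsOfFile: path) else { return nil }
        imageCache.setObject(image, forKey: name as NSString)
        return image
    }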
