Does iOS cache images in memory (RAM) by default?

I've been struggling to find out why my application continues to increase in memory as I move throughout the application.
When leaving the view, I check that the view controller is de-initialized, and it is, but the memory that was added while in the view remains. I've used Instruments and it hasn't detected any leaks, and leaving and re-entering the view repeatedly has no further effect on the used memory.
This leads me to believe that iOS by default caches the UIImage into memory, and only frees the memory if the device needs it.
The view that I'm working with is a UICollectionView that shows the user a gallery of pictures that have been uploaded to my server. Currently I have this limited to 10 images per user, but as you can imagine, with quite a few images the memory can increase rather quickly.
Do I need to worry about this memory? Is it default behaviour for the images to stay in memory until the device needs to free some space? I don't want to submit to the App Store and get rejected for poor memory management.
EDIT: It's also worth noting that I am constructing the image using the UIImage(data: NSData) initializer.

iOS does natively cache plenty of data in memory. The underlying support for that is in libcache, which UIKit uses internally, in a way that is inaccessible to you.
During times of "memory pressure", that is, when RAM is low, an event (technically, a knote) is broadcast to all listeners. Your app is listening because the frameworks automatically open a kevent file descriptor and react to it with the well-known didReceiveMemoryWarning.
Remember, there's no swap in iOS (discounting compressed RAM for the scope of this answer), so this happens quite frequently.
Even before didReceiveMemoryWarning is passed to you, libcache uses the malloc feature of "zone pressure relief", which you can see for yourself in <malloc/malloc.h>:
...
    /* Empty out caches in the face of memory pressure. The callback may be NULL. Present in version >= 8. */
    size_t (*pressure_relief)(struct _malloc_zone_t *zone, size_t goal);
} malloc_zone_t;
Thus, images (the main consumers of memory) will be purged if necessary, and therefore shouldn't be of much concern to you. That said, if you know you no longer want a given UI object, you can of course dispose of it explicitly. And if you have any additional resources that cannot be auto-purged in this way, you should handle them in the delegate callback. Because if you don't, Jetsam will jettison your app (i.e. kill it with an untrappable -9), and probably slay a few innocents in your priority band as well.
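For resources like that, here is a minimal sketch of hooking the system's memory-warning notification outside a view controller. ThumbnailStore and its contents are invented for illustration, not an existing API; the only real piece is the UIApplication.didReceiveMemoryWarningNotification observer.

import UIKit

// Sketch only: resources that libcache cannot purge for you must be
// released by your own code when the memory warning arrives.
final class ThumbnailStore {
    private var thumbnails: [String: UIImage] = [:]
    private var observer: NSObjectProtocol?

    init() {
        observer = NotificationCenter.default.addObserver(
            forName: UIApplication.didReceiveMemoryWarningNotification,
            object: nil,
            queue: .main
        ) { [weak self] _ in
            // These can be re-decoded from disk or re-downloaded later.
            self?.thumbnails.removeAll()
        }
    }

    deinit {
        if let observer = observer {
            NotificationCenter.default.removeObserver(observer)
        }
    }
}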

Related

How to implement didReceiveMemoryWarning in Swift?

Whenever I create a new View Controller subclass, Xcode automatically adds the method
override func didReceiveMemoryWarning() {
    super.didReceiveMemoryWarning()
    // Dispose of any resources that can be recreated.
}
Usually I just delete it or ignore it. This is what all the tutorials I have seen do, too. But I assume that since Xcode gives it to me every time, it should be somewhat important, right? What should I be doing here? I assume that disposing of resources means setting them to nil, but what exactly are "resources that can be recreated"?
I have seen these questions:
How to implement didReceiveMemoryWarning?
UIViewController's didReceiveMemoryWarning in ARC environment
iPhone Development - Simulate Memory Warning
But they are all pre-Swift. Although I don't know much about Objective-C, I have heard that the memory management is different. How does that affect what I should do in didReceiveMemoryWarning?
Other notes:
I am fuzzily aware of Automatic Reference Counting and lazy instantiation
The documentation on didReceiveMemoryWarning that I found was rather brief.
Swift
Swift uses ARC just like Objective-C does (see Apple's documentation). The same rules apply for freeing memory: remove all of the strong references to an object and it will be deallocated.
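A tiny sketch of that rule (the Level class here is just a stand-in):

// When the last strong reference goes away, deinit runs and the memory is freed.
class Level {
    deinit { print("Level deallocated") }
}

var currentLevel: Level? = Level()
currentLevel = nil   // prints "Level deallocated"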
How to free memory
I assume that disposing of resources means setting them to nil, but what exactly are "resources that can be recreated"?
"resources that can be recreated" really depends on your application.
Examples
Say you are building a social media app that deals with a lot of pictures. You want a snappy user interface, so you cache the next 20 pictures in memory to make scrolling fast. Those images are always saved on the local file system.
Images can take up a lot of memory
You don't need those images in memory. If the app is low on memory, taking an extra second to load the image from a file is fine.
You could entirely dump the image cache when you receive that memory warning.
This will free up memory that the system needs
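A rough sketch of that cache-dumping idea, assuming an NSCache-backed store. ImageCache and GalleryViewController are made-up names for this example, not UIKit API:

import UIKit

// Hypothetical in-memory cache in front of images that also live on disk.
final class ImageCache {
    static let shared = ImageCache()
    private let cache = NSCache<NSString, UIImage>()

    func image(forKey key: String) -> UIImage? {
        cache.object(forKey: key as NSString)
    }

    func insert(_ image: UIImage, forKey key: String) {
        cache.setObject(image, forKey: key as NSString)
    }

    func removeAll() {
        cache.removeAllObjects()
    }
}

final class GalleryViewController: UICollectionViewController {
    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
        // The originals are still on disk / on the server, so dropping the
        // in-memory copies only costs a reload later, not any data.
        ImageCache.shared.removeAll()
    }
}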
You are creating an amazing game that has a number of different levels.
Loading a level into your fancy game engine takes a while so if the user has enough memory you can load level 3 while they are playing level 2.
Levels take up a lot of memory
You don't NEED the next level in memory. They are nice to have but not essential.
LevelCache.sharedCache().nextLevel = nil frees up all that memory
What shouldn't you deallocate
Never deallocate the stuff that is on screen. I've seen some answers to related questions deallocate the view of the UIViewController. If you remove everything from the screen, you might as well crash (in my opinion).
Examples
If the user has a document open that they are editing, DON'T deallocate it. Users will get exceptionally angry at you if your app deletes their work without it ever being saved. (In fact, you should probably have some emergency-save mechanism for when that happens.)
If your user is writing a post for your fabulous social media app, don't let their work go to waste. Save it and try to restore it when they open the app again. Although it's a lot of work to set up, I love apps that do this.
Note
Most modern devices rarely run out of memory. The system does a pretty good job of killing apps in the background to free up memory for the app running in the foreground.
You have probably seen an app "open" in the App Switcher yet when you tapped on the app it opened to its initial state. The OS killed the app in the background to free up memory. See State Restoration for info on how to avoid this problem.
If your app is getting consistent memory warnings when you aren't doing a huge amount of processing make sure that you aren't leaking memory first.
Detecting memory leaks is outside the scope of this answer. Docs and a tutorial.
When didReceiveMemoryWarning is called, it means your app is using too much memory relative to what the device has available, and you should release any memory held by your view controller that isn't strictly needed, to reduce your app's footprint. If your app's memory use grows past what the device can give it, iOS will kill your app. "Resources that can be recreated" means things you can rebuild again later (from disk, the network, or computation); you don't need to keep them in memory right now, so you can release them when you get didReceiveMemoryWarning.
Here is another related topic with more detail: ios app maximum memory budget

Xcode Instrument : Memory Terms Live Bytes and Overall Bytes (Real Memory) confusion

I am working on a Browser application in which I use a UIWebView for opening web pages. I run the Instruments tool with Memory Monitor. I am totally confused by the terms which are used in Instruments and why they're important. Please explain some of my questions with proper reasons:
Is Live Bytes important for checking memory optimization or memory consumption? Why?
Why would I care about Overall Bytes / Real Memory if it also contains released objects?
When and why are these terms used (Live Bytes/ Overall Bytes/Real Memory)?
Thanks
"Live Bytes" means "memory which has been allocated, but not yet deallocated." It's important because it's the most easily graspable measure of "how much memory your app is using."
"Overall Bytes" means "all memory which has ever been allocated including memory that has been deallocated." This is less useful, but gives you some idea of "heap churn." Churn leads to fragmentation, and heap fragmentation can be a problem (albeit a pretty obscure one these days.)
"Real Memory" is an attempt to distinguish how much physical RAM is in use (as opposed to how many bytes of address space are valid). This is different from "Live Bytes" because "Live Bytes" could include ranges of memory that correspond to memory-mapped files (or shared memory, or window backing stores, or whatever) that are not currently paged into physical RAM. Even if you don't use memory-mapped files or other exotic VM allocation methods, the system frameworks do, and you use them, so this distinction will always have some importance to every process.
EDIT: Since you're clearly concerned about memory use incurred by using UIWebView, let me see if I can shed some light on that:
There is a certain memory "price" to using UIWebView at all (i.e. global caches and the like). These include various global font caches, JavaScript JIT caches, and stuff like that. Most of these are going to behave like singletons: allocated the first time you use them (indirectly by using UIWebView) and never deallocated until the process ends. There are also some variable size global caches (like those that cache web responses; CFURL typically manages these) but those are expected to be managed by the system. The collective "weight" of these things with respect to UIWebView is, as you've seen, non-trivial.
I don't have any knowledge of UIKit or WebKit internals, but I would expect that if you had a discussion with someone who did, their response to the question "Why is my use of UIWebView causing so much memory use?" would be two-pronged:
The first prong would be "this is the price of admission for using UIWebView -- it's basically like running a whole web browser in your process."
The second prong would be "system framework caches are automatically managed by the system," by which they would mean that, for instance, the CFURL caches (one of the things that using UIWebView causes to be created) are managed by the system, so if a memory warning comes in, the system frameworks are responsible for evicting things from those caches to reduce the memory they consume; you have no control over those, and you just have to trust that the system frameworks will do what needs to be done. (That doesn't help you in the case where whatever the system cache managers do isn't aggressive enough for you, but you're not going to get any more control over them, so you need to attack the issue from another angle either way.)
If you're wondering why the memory use doesn't go down once you deallocate your UIWebView, this is your answer. There's a bunch of stuff it's doing behind the scenes that you can't control.
The expectation that allocating, using, and then deallocating a UIWebView is a net-zero operation ignores some non-trivial, inherent and unavoidable side-effects. The existence of such side-effects is not (in and of itself) indicative of a bug in UIWebView. There are side effects like this all over the place. If you were to create a trivial application that did nothing but launch and then terminate after one spin of the run loop, and you set a breakpoint on exit(), and looked at the memory that had been allocated and never freed, there would be thousands of allocations. This is a very common pattern used throughout the system frameworks and in virtually every application.
What does this mean for you? It means that you effectively have two choices: Use UIWebView and pay the "price of admission" in memory consumption, or don't use UIWebView.

How much memory can an iOS app allocate?

I'm trying to get a feel for the amount of memory an iOS app can reliably allocate to help me drive some design decisions. The app is going to involve real time synchronised processed audio and animation.
Other than writing code that loads up the frameworks I'll need then trying to progressively allocate memory until I get warnings, is there any way to determine this kind of thing?
The simulator doesn't let you select a specific hardware model, so I assume I can't even simulate this stuff.
You cannot determine in advance how much memory an app will be allowed to allocate, as far as I know. Always try to use the lowest amount of memory possible for your app.
The memory available to your app depends on many factors, like the number of background processes running, the amount of memory available, the memory used by other apps, the device you are running on, etc.
So it's not a good practice to assume a maximum amount of memory for your app and design accordingly.
Also, try to hold only the memory you actually need, and handle memory issues in the callback methods like didReceiveMemoryWarning. Always assume that you have been allotted the least amount of memory by the OS.
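If you want to see how close you're getting at runtime, one common approach is to read the process's physical footprint via the Mach task_info API; this is only a sketch of that approach, not an official "how much can I allocate" API:

import Darwin

// Rough sketch: query the app's current physical memory footprint.
// phys_footprint is roughly what Jetsam compares against its limits.
func currentMemoryFootprint() -> UInt64? {
    var info = task_vm_info_data_t()
    var count = mach_msg_type_number_t(
        MemoryLayout<task_vm_info_data_t>.size / MemoryLayout<integer_t>.size)
    let kr = withUnsafeMutablePointer(to: &info) { infoPtr in
        infoPtr.withMemoryRebound(to: integer_t.self, capacity: Int(count)) { intPtr in
            task_info(mach_task_self_, task_flavor_t(TASK_VM_INFO), intPtr, &count)
        }
    }
    guard kr == KERN_SUCCESS else { return nil }
    return info.phys_footprint
}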

How to reserve memory for my application and leave a specified amount remaining?

I'm planning an application which will involve loading many pictures at one time and thus requires a large chunk of memory. For example, I might have 50 image objects created at once, taking a total of 1GB of RAM. But when the user goes to load 20 more pictures, I'd like to make sure that amount of memory is already reserved and ready.
Now this part might seem a little backwards from normal. Rather than specifying how much memory my application shall reserve, instead I need to specify how much memory to leave free for other applications, and adjust my application's memory periodically according to this specification. I must say I've never worked with reserving memory at all, and especially won't know how to leave this remaining available memory.
So for example, if the computer has 2048 MB of RAM, and the option is set to leave 50 MB free for other applications, and there is already 10MB of RAM being used by other apps, then it should reserve 2048-50-10 = 1988 MB for my app.
The trouble I foresee is: suppose the user opens another application which requires 1 GB. My app has to catch this and shrink itself.
Does this even sound like a feasible approach? Basically, I need to make sure there is as much memory reserved as possible at any given time, while leaving a decent amount available for other apps. Would it make a significant impact on performance if I do this, or not much at all? I might be loading and unloading images at rapid paces, and I don't want it to reserve/free this memory on demand, I want it to stay reserved.
+1 for Sertac's mention of how SQL Server rides the line, allocating the memory it needs but releasing memory when Windows complains.
Applications can receive Windows' complaints by using CreateMemoryResourceNotification:
hLowMemory := CreateMemoryResourceNotification(LowMemoryResourceNotification);
Applications can use memory resource notification events to scale their memory usage as appropriate. If available memory is low, the application can reduce its working set. If available memory is high, the application can allocate more memory.
Any thread of the calling process can specify the memory resource notification handle in a call to the QueryMemoryResourceNotification function or one of the wait functions. The state of the object is signaled when the specified memory condition exists. This is a system-wide event, so all applications receive the notification when the object is signaled. Note that there is a range of memory availability where neither the LowMemoryResourceNotification nor the HighMemoryResourceNotification object is signaled. In this case, applications should attempt to keep their memory use constant.
But it's also worth mentioning that you might as well allocate the memory that you need. Your operating system has a very sophisticated set of algorithms to swap out the least-used memory when memory pressure is high. You can take advantage of this by simply allocating all the memory that you need. When Windows starts to run low, it will find the pages of memory that you are using least and swap them out to disk. (This is how a well-known reverse proxy works.)
The only thing left is to decide if you want to free some images when Windows says it's running low on RAM. But if you're not using the memory, it is going to be swapped out to disk for you.
It's not realistic to account for other apps. Just ignore them. The system will page things in and out as needed. If you really wanted to do this you'd have to dynamically adapt to other processes as they start and finish. That's really not realistic. What's more it's not practical to inquire of other processes how much memory they need. Leave it all to the system.
Set a budget for your app and make sure you don't exceed it. Keep the most recently used images in memory and when you approach your memory budget throw away the least recently used images to make space.
If you are stressing the available resources then make sure you use FastMM and enable LARGE_ADDRESS_AWARE for your app so that you get 4GB address space when running on a 64 bit OS.

Memory-mapped files and low-memory scenarios

How does the iOS platform handle memory-mapped files during low-memory scenarios? By low-memory scenarios, I mean when the OS sends the UIApplicationDidReceiveMemoryWarningNotification notification to all observers in the application.
Our files are mapped into memory using +[NSData dataWithContentsOfMappedFile:], the documentation for which states:
A mapped file uses virtual memory techniques to avoid copying pages of the file into memory until they are actually needed.
Does this mean that the OS will also unmap the pages when they're no longer in use? Is it possible to mark pages as being no longer in use? This data is read-only, if that changes the scenario. How about if we were to use mmap() directly? Would this be preferable?
Memory-mapped files copy data from disk into memory a page at a time. Unused pages are free to be swapped out, the same as any other virtual memory, unless they have been wired into physical memory using mlock(2). Memory mapping leaves the determination of what to copy from disk to memory and when to the OS.
Dropping from the Foundation level to the BSD level to use mmap is unlikely to make much difference, beyond making code that has to interface with other Foundation code somewhat more awkward.
(This is not an answer, but it may be useful information.)
From an @ID_AA_Carmack tweet:
"@ID_AA_Carmack are iOS memory mapped files automatically unmapped in low memory conditions? (using +[NSData dataWithContentsOfMappedFile]?)"
ID_AA_Carmack replied:
"@KhrobEdmonds yes, that is one of the great benefits of using mapped files on iOS. I use mmap(), though."
I'm not sure whether that is true or not...
From my experiments, NSData does not respond to memory warnings. I tested by creating a memory-mapped NSData, accessing parts of the file so that it would be loaded into memory, and finally sending memory warnings. There was no decrease in memory usage after the memory warning. Nothing in the documentation says that a memory warning will cause NSData to reduce real memory usage in low-memory situations, so it leads me to believe that it does not respond to memory warnings. For comparison, the NSCache documentation says that it will try to play nice with respect to memory usage, and I have been told it responds to the low-memory warnings the system raises.
Also, in my simple tests on an iPod Touch (4th gen) I was able to map about 600 MB of file data into virtual memory using +[NSData dataWithContentsOfMappedFile:]. Next I started to access pages via the bytes property on the NSData instance. As I did this, real memory usage started to grow, but it stopped growing at around 30 MB. So the way it is implemented seems to cap how much real memory is used.
In short, if you want to reduce the memory usage of NSData objects, the best bet is to make sure they are completely released, and not to rely on anything the system automagically does on your behalf.
If iOS is like any other Unix -- and I would bet money it is in this regard -- pages in an mmap() region are not "swapped out"; they are simply dropped (if they are clean) or are written to the underlying file and then dropped (if they are dirty). This process is called "evicting" the page.
Since your memory map is read-only, the pages will always be clean.
The kernel will decide which pages to evict when physical memory gets tight.
You can give the kernel hints about which pages you would prefer it keep/evict using posix_madvise(). In particular, POSIX_MADV_DONTNEED tells the kernel to feel free to evict the pages; or as you say, "mark pages as being no longer in use".
It should be pretty simple to write some test programs to see whether iOS honors the "don't need" hint. Since it is derived from BSD, I bet it will.
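A quick sketch of such a test in Swift, assuming plain mmap() plus posix_madvise(); the path is a placeholder and error handling is minimal:

import Darwin

// Map a large read-only file and then hint that its pages are no longer needed.
let path = "/path/to/large-read-only-file.bin"   // placeholder
let fd = open(path, O_RDONLY)
precondition(fd >= 0, "open failed")

var st = stat()
precondition(fstat(fd, &st) == 0, "fstat failed")
let length = Int(st.st_size)

// Pages are faulted in lazily on first access and stay clean, so the kernel
// can evict them without writing anything back.
let base = mmap(nil, length, PROT_READ, MAP_PRIVATE, fd, 0)
precondition(base != UnsafeMutableRawPointer(bitPattern: -1), "mmap failed") // MAP_FAILED

// ... read from the mapping here, then watch memory under pressure ...

// Hint that these pages are no longer needed; the kernel is free to drop them
// and re-read from disk later if they are touched again.
posix_madvise(base, length, POSIX_MADV_DONTNEED)

munmap(base, length)
close(fd)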
Standard virtual memory techniques for file-backed memory says that the OS is free to throw away pages whenever it wants because it can always get them again later. I have not used iOS, but this has been the behavior of virtual memory on many other operating systems for a long time.
The simplest way to test it is to map several large files into memory, read through them to guarantee that it pages them into memory, and see if you can force a low memory situation. If you can't, then the OS must have unmapped the pages once it decided that they were no longer in use.
The dataWithContentsOfMappedFile: method has been deprecated since iOS 5.
Use mmap() instead, as you will avoid these situations.
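If you'd rather stay at the Foundation level, a sketch using Data's mapped reading options (the path below is a placeholder) would be:

import Foundation

// .mappedIfSafe maps the file only when it is safe to do so (e.g. not on a
// removable volume); .alwaysMapped maps it unconditionally.
let url = URL(fileURLWithPath: "/path/to/large-read-only-file.bin")
do {
    let data = try Data(contentsOf: url, options: .mappedIfSafe)
    // Pages are faulted in lazily as `data` is read and, being clean,
    // can be evicted by the kernel under memory pressure.
    print(data.count)
} catch {
    print("Failed to map file: \(error)")
}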
