Why does mmap fail on iOS?

I'm trying to use mmap to read and play audio files on iOS. It works fine for files up to about 400MB, but when I try a 500MB file, I get an ENOMEM error.
const char *path = [[[NSBundle mainBundle] pathForResource: @"test500MB" ofType: @"wav"] cStringUsingEncoding: [NSString defaultCStringEncoding]];
FILE *f = fopen( path, "rb" );
fseek( f, 0, SEEK_END );
int len = (int)ftell( f );
fseek( f, 0, SEEK_SET );
void *raw = mmap( 0, len, PROT_READ, MAP_SHARED, fileno( f ), 0 );
if ( raw == MAP_FAILED ) {
    printf( "MAP_FAILED. errno=%d\n", errno ); // Here it says 12, which is ENOMEM.
}
Why?
I'd be happy with an answer like "700MB is the virtual memory limit, but sometimes the address space is fragmented, so you DO get 700MB but in smaller chunks". (This is just speculation, I still need an answer)
The Apple doc page about virtual memory says:
Although OS X supports a backing store, iOS does not. In iPhone applications, read-only data that is already on the disk (such as code pages) is simply removed from memory and reloaded from disk as needed.
which seems to confirm that mmap should work for blocks larger than the physical memory but still doesn't explain why I'm hitting such a low limit.
Update
This answer is interesting, but 500MB is well below the 700MB limit it mentions.
This discussion mentions contiguous memory. So memory fragmentation could be a real issue?
I'm using iPod Touch 4th generation which has 256MB physical memory.
The point of my research is to see if there's a better way of managing memory when loading read-only data from files than "keep allocating until you get a memory warning". mmap seemed like a nice way to solve this...
Update 2
I expect mmap to work perfectly with the new 64-bit version of iOS. Will test once I get my hands on a 64-bit device.

After further investigation and reading this excellent blog post by John Carmack, here are my conclusions:
700MB is the limit for virtual memory on iOS (as of 2012, 32-bit iOS)
It may or may not be available in a single block; this depends on device state and app behaviour
Therefore, to reliably mmap 700MB worth of file data, it is necessary to break it into smaller chunks (a sketch follows below).
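As an illustration, here is a minimal sketch of chunk-wise mapping; the 64MB chunk size and the function name are illustrative, not from the original post. Each chunk gets its own mapping, so the kernel only needs to find smaller contiguous runs of address space rather than one 700MB block:
#include <sys/mman.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>

#define CHUNK_SIZE (64 * 1024 * 1024) /* must stay a multiple of the page size */

int map_in_chunks( const char *path )
{
    int fd = open( path, O_RDONLY );
    if ( fd < 0 ) return -1;
    struct stat st;
    if ( fstat( fd, &st ) != 0 ) { close( fd ); return -1; }
    for ( off_t off = 0; off < st.st_size; off += CHUNK_SIZE ) {
        size_t len = (size_t)( st.st_size - off < CHUNK_SIZE ? st.st_size - off : CHUNK_SIZE );
        void *chunk = mmap( NULL, len, PROT_READ, MAP_SHARED, fd, off );
        if ( chunk == MAP_FAILED ) { close( fd ); return -1; }
        /* ... read from chunk[0 .. len) ... */
        munmap( chunk, len ); /* drop each mapping as soon as it's consumed */
    }
    close( fd );
    return 0;
}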

I don't have an answer for you, but I did test your code on my iPhone 5 running 6.0.1, and mmap succeeded on a 700MB ISO file. So I would start by looking at other factors and assume mmap is working properly. Perhaps the error you're getting back is not really due to memory, or perhaps memory on the device itself is somehow exhausted to the point where mmap fails; try rebooting the device. Depending on your version of iOS, I also wonder whether your seek to the end of the file might be causing mmap to try to map the entire file in; you might try cleaning that up and using stat instead to determine the file size, or closing and then re-opening the file descriptor before mapping. These are all just ideas; if I could reproduce your error, I'd be glad to help fix it.
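A minimal sketch of that suggested cleanup, using fstat(2) to determine the file size instead of seeking (error handling trimmed; path is the same variable as in the question):
#include <sys/stat.h>
#include <sys/mman.h>
#include <fcntl.h>
#include <stdio.h>
#include <errno.h>

int fd = open( path, O_RDONLY );
struct stat st;
if ( fd >= 0 && fstat( fd, &st ) == 0 ) {
    void *raw = mmap( NULL, (size_t)st.st_size, PROT_READ, MAP_SHARED, fd, 0 );
    if ( raw == MAP_FAILED ) {
        printf( "MAP_FAILED. errno=%d\n", errno );
    }
}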

Use NSData and don't touch mmap directly here.
To get the advantages of faulting reads, use NSDataReadingMapped.
NSData also frees its bytes in low-memory situations.
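A minimal sketch of that approach, assuming path is an NSString here (note that on current SDKs the mapping hint is spelled NSDataReadingMappedIfSafe; NSDataReadingMapped is the older name for the same option):
#import <Foundation/Foundation.h>

NSError *error = nil;
NSData *data = [NSData dataWithContentsOfFile: path
                                      options: NSDataReadingMappedIfSafe
                                        error: &error];
if ( data == nil ) {
    NSLog( @"read failed: %@", error );
}
// data.bytes now points at pages that are faulted in from disk on demand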

Normally the amount of physical memory available has nothing to do with whether or not you are able to mmap a file; this is, after all, VIRTUAL memory we are talking about. The problem on iOS is that, at least according to the iOS App Programming Guide, the virtual memory manager only swaps out read-only sections. In theory that would mean you are not only constrained by the amount of available address space, but also by the amount of available RAM, if you are mapping with anything other than PROT_READ.
See http://developer.apple.com/library/ios/#documentation/iphone/conceptual/iphoneosprogrammingguide/TheiOSEnvironment/TheiOSEnvironment.html
Nevertheless, it may well be that the problem you are having is a lack of contiguous space large enough for your mapping in the virtual address space. So far as I can find, Apple does not publish the upper memory limit of a user-mode process. Typically the upper region of the address space is reserved for the kernel, so you may have well under the full 32-bit address space to work with in user mode.
What you can't see without dumping a memory map in the debugger is that there are many shared libraries (dylibs) loaded into the process address space (I counted over 100 in a simple sample application from Apple). Each of these is also mmap'd in, and each fragments the available address space.
In gdb you should be able to dump the memory mappings with 'info proc mappings'. Unfortunately in lldb the only equivalent I've been able to find is 'image list', which only shows shared library mappings, not data mmap mappings.
Using the debugger in this way, you should be able to determine whether the address space has a contiguous block large enough for the data you are trying to map, though it may take some work to discover the upper limit (Apple should publish this!).

Related

Virtual memory: does the operating system always load the whole file into physical memory?

I'm studying how virtual memory works and I'm not sure what happens if I load a big file (smaller than the physical memory, though) with fread() and similar.
As far as I understand, the operating system might not allocate the entire corresponding physical memory. Instead, it could wait until a page fault is triggered as my program reads a specific portion of the file (a portion not yet mapped to physical memory).
This is basically the behavior of a memory mapped file. So, if my assumptions are correct, what is the benefit of using system calls like mmap()? Just to avoid the usual for-loop dance when reading with fread(), maybe?
read() and fread() will read the amount you specified into the buffer you provide. mmap() is a separate interface into the kernel file cache. Where the two intersect is that the kernel will most likely first read the file into cache buffers, then copy select bits of those cache buffers into your user buffer.
This double copy is often necessary because your program doesn't provide the alignment and blocking size the underlying device requires, and if the data requires transformation (decryption, decompression), the kernel needs a place to do that work.
This kernel cache is kept coherent with the file, so system wide reads and writes go through it.
If you mmap the file, you may be able to avoid the double copy; but you have to deal with changes to the file appearing unannounced.
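A minimal sketch of the contrast, assuming a file descriptor fd and a mapping length len (these names are mine, not from the answer):
#include <unistd.h>
#include <sys/mman.h>

/* read(): the kernel copies from its page cache into your buffer */
char buf[4096];
ssize_t n = read( fd, buf, sizeof buf );

/* mmap(): your pointer aliases the page cache, so there is no second copy */
const char *p = mmap( NULL, len, PROT_READ, MAP_SHARED, fd, 0 );
/* touching p[i] faults the corresponding page in directly from the cache */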

How to know when it's not safe to open a Realm on iOS

From the current list of "Realm Limitations":
Any single Realm file cannot be larger than the amount of memory your application would be allowed to map in iOS.
Does this mean that if I check ProcessInfo.processInfo.physicalMemory and it is smaller than FileManager.default.attributesOfItem(atPath:realmPath)[FileAttributeKey.size] (plus a variable amount to account for fragmentation etc), I should not try to open the Realm?
If the Realm file is too big for mmap to map the file, you should get a Swift error. So all you really need to do is to try opening the Realm and catch any Realm.Error.addressSpaceExhausted errors.
The bigger problem is what to do once you know the file is too big. Our compaction on launch feature requires that the file be openable first, which rules it out (and is why we recommend that compact on launch be used to pre-empt this issue). We're working on ways to mitigate this problem.
mmap shouldn't depend upon the amount of free physical RAM you have (although some amount of RAM is required to map the file), nor is the limit that iOS imposes anywhere near the theoretical maximum. Finally, virtual memory limits operate on a per-process basis, meaning that the size of a Realm file you can open depends both on what other files have been mapped by that process and by how much memory that process is using for other things.

Why can't a process have contiguous memory addresses in physical memory?

According to the Microsoft documentation at the following link:
https://msdn.microsoft.com/en-us/library/windows/hardware/hh439648%28v=vs.85%29.aspx
A program can use a contiguous range of virtual addresses to access a large memory buffer that is not contiguous in physical memory.
So the question is: why can't a process have contiguous memory in physical memory?
There's also another question about the documentation's picture demonstrating virtual memory for user and system space: is the system virtual address space unique across the whole of memory, while each process has its own virtual address space?
Thanks.
At first, when a process is loaded into memory, the OS can optimize by loading the process's pages contiguously into physical memory. But the pages can't always stay contiguous, because of swapping in and out: other processes and other data occupy space in memory, so when some of a process's pages become less used, they are swapped back to the hard drive, and when they are needed again they are not guaranteed to be loaded back to the same spot, because another process's page may be lying there. You should read about virtual memory to gain a good understanding of all of this.
Your question is simple: you've asked why we can have a large contiguous buffer in virtual memory but not in physical memory. That's because we are limited by the hardware. If every process could have as large a contiguous buffer as it wanted in physical memory, manufacturers would have to build machines with something like 1024GB of RAM; instead we get by with 8GB. Virtual memory exists to satisfy our needs while making the hardware much more efficient.
Hope it helps!

How much SRAM will I use on my ARM board?

I am developing for the Arduino Due, which has 96k SRAM and 512k flash memory for code. If I have a program that will compile to, say, 50k, how much SRAM will I use when I run the code? Will I use 50k immediately, or only the memory used by the functions I call? Is there a way to measure this memory usage before I upload the sketch to the Arduino?
You can run
arm-none-eabi-size bin.elf
Where:
bin.elf is the generated binary (look it up in the compile log)
arm-none-eabi-size is a tool included with Arduino for ARM which tells you the memory distribution of your binary. It can be found inside the Arduino directory. On my Mac, this is /Applications/Arduino.app/Contents/Resources/Java/hardware/tools/g++_arm_none_eabi/bin
This command will output:
text data bss dec hex filename
9648 0 1188 10836 2a54 /var/folders/jz/ylfb9j0s76xb57xrkb605djm0000gn/T/build2004175178561973401.tmp/sketch_oct24a.cpp.elf
data + bss is RAM, text is program memory.
Very important: this doesn't account for memory allocated at runtime (on the stack or heap); it only covers the RAM for static and global variables. There are other techniques to check RAM usage dynamically, like the one sketched below, but it will depend on the runtime and linker of the compiler suite you are using.
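As an illustration, a common runtime check on the ARM-based Due is to measure the gap between the end of the heap and the current stack position. This is a sketch assuming the newlib sbrk()-based runtime used by the Due core, not part of the original answer:
extern char *sbrk( int incr ); /* provided by the Due's newlib-based runtime */

int freeRam( void )
{
    char top;                /* a local variable, so it lives on the stack */
    return &top - sbrk( 0 ); /* gap between the stack and the end of the heap */
}
Calling freeRam() at interesting points in a sketch shows how much headroom remains between the heap and the stack.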
Your whole program is loaded onto the Arduino, so at least 50k of flash memory will be used. Then, when running the code, you will allocate some variables, some on the stack and some global, which will take some memory too, but in SRAM.
I am not sure if there is a way to measure the memory required exactly, but you can get a rough estimate based on the number and types of variables allocated in the code. Remember, global variables take up space the entire time the code is running on the Arduino, while local variables (the ones declared within a pair of {..}) remain in memory only until the closing '}', that is, for the scope of the variables. Also remember, the compiled 50k code you mention is just the code portion; it does not include your variables, not even the global ones. The code is stored in flash memory and the variables are stored in SRAM; the variables start taking memory only at runtime.
Also, I'm curious to know: how are you calculating that your code uses 50k of memory?
Here is a little library to output the available RAM memory.
I used it a lot when my program was crashing with no bug in the code. It turned out that I was running out of RAM.
So it's very handy!
Available Memory Library
Hope it helps! :)

Memory-mapped files and low-memory scenarios

How does the iOS platform handle memory-mapped files during low-memory scenarios? By low-memory scenarios, I mean when the OS sends the UIApplicationDidReceiveMemoryWarningNotification notification to all observers in the application.
Our files are mapped into memory using +[NSData dataWithContentsOfMappedFile:], the documentation for which states:
A mapped file uses virtual memory techniques to avoid copying pages of the file into memory until they are actually needed.
Does this mean that the OS will also unmap the pages when they're no longer in use? Is it possible to mark pages as being no longer in use? This data is read-only, if that changes the scenario. How about if we were to use mmap() directly? Would this be preferable?
Memory-mapped files copy data from disk into memory a page at a time. Unused pages are free to be swapped out, the same as any other virtual memory, unless they have been wired into physical memory using mlock(2). Memory mapping leaves the determination of what to copy from disk to memory and when to the OS.
Dropping from the Foundation level to the BSD level to use mmap is unlikely to make much difference, beyond making code that has to interface with other Foundation code somewhat more awkward.
(This is not an answer, but it would be useful information.)
From an @ID_AA_Carmack tweet:
@ID_AA_Carmack are iOS memory mapped files automatically unmapped in low memory conditions? (using +[NSData dataWithContentsOfMappedFile]?)
@ID_AA_Carmack replied:
@KhrobEdmonds yes, that is one of the great benefits of using mapped files on iOS. I use mmap(), though.
I'm not sure whether that is true or not...
From my experiments, NSData does not respond to memory warnings. I tested by creating a memory-mapped NSData, accessing parts of the file so that they would be loaded into memory, and finally sending memory warnings. There was no decrease in memory usage after the memory warning. Nothing in the documentation says that a memory warning will cause NSData to reduce its real memory usage in low-memory situations, which leads me to believe it does not respond to memory warnings. By contrast, the NSCache documentation says it tries to play nice with respect to memory usage, and I have been told it responds to the low-memory warnings the system raises.
Also, in my simple tests on an iPod Touch (4th gen), I was able to map about 600 megs of file data into virtual memory using +[NSData dataWithContentsOfMappedFile:]. Next I started to access pages via the bytes property on the NSData instance. As I did this, real memory usage started to grow, but it stopped growing at around 30 megs. So the way it is implemented seems to cap how much real memory is used.
In short, if you want to reduce the memory usage of NSData objects, the best bet is to make sure they are completely released, and not to rely on anything the system automagically does on your behalf.
If iOS is like any other Unix -- and I would bet money it is in this regard -- pages in an mmap() region are not "swapped out"; they are simply dropped (if they are clean) or are written to the underlying file and then dropped (if they are dirty). This process is called "evicting" the page.
Since your memory map is read-only, the pages will always be clean.
The kernel will decide which pages to evict when physical memory gets tight.
You can give the kernel hints about which pages you would prefer it keep/evict using posix_madvise(). In particular, POSIX_MADV_DONTNEED tells the kernel to feel free to evict the pages; or as you say, "mark pages as being no longer in use".
It should be pretty simple to write some test programs to see whether iOS honors the "don't need" hint. Since it is derived from BSD, I bet it will.
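It's also possible to hand that hint to the kernel directly. A minimal sketch, assuming an existing read-only mapping p of length len:
#include <sys/mman.h>

/* Tell the kernel it is free to evict these pages; this is a hint,
   not a command, and clean read-only pages can simply be dropped. */
int rc = posix_madvise( p, len, POSIX_MADV_DONTNEED );
if ( rc != 0 ) {
    /* posix_madvise returns an error number directly instead of setting errno */
}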
Standard virtual memory techniques for file-backed memory says that the OS is free to throw away pages whenever it wants because it can always get them again later. I have not used iOS, but this has been the behavior of virtual memory on many other operating systems for a long time.
The simplest way to test it is to map several large files into memory, read through them to guarantee that it pages them into memory, and see if you can force a low memory situation. If you can't, then the OS must have unmapped the pages once it decided that they were no longer in use.
The dataWithContentsOfMappedFile: method is now deprecated, as of iOS 5.
Use mmap directly instead, and you will avoid these situations.
