I have an issue on iOS when trying to allocate more than 140 MB with the File plugin 1.0.1.
I have 10 GB free on the device, but a QUOTA_EXCEEDED_ERR is thrown.
Here is the code:
var requestBytes = 150 * 1024 * 1024;
window.requestFileSystem(LocalFileSystem.PERSISTENT, requestBytes, function (fs) {
    // success callback
}, function (e) {
    // error callback
});
I see that the free space calculated in the requestFileSystem method of CDVFile.m always comes out to about 144 MB.
Any idea? How is the free space calculated? Are there some limits for iOS apps?
Note that on Android I don't have any issues.
It was a bug.
I opened an issue, and the fix has already been merged into the master branch:
https://issues.apache.org/jira/browse/CB-6872
I am using react-native for iOS. My project has the following warning:
Possible EventEmitter memory leak detected. 11 error listeners added. Use emitter.setMaxListeners() to increase limit.
I do not use DeviceEventEmitter directly, but I do use the Keyboard component.
Are you using Flux by any chance? If not, please provide the npm link to the component you are using.
One of your Stores is exceeding what EventEmitter allows by default. Just do this:
var AppDispatcher = require('../Dispatcher/Dispatcher');
var EventEmitter = require('events').EventEmitter;

/* By default, a maximum of 10 listeners can be registered for any single
   event; exceeding that triggers the "possible EventEmitter memory leak
   detected" warning. Raise the limit globally: */
EventEmitter.prototype._maxListeners = 100;
I'm developing an iOS app and when I run it on my devices I got lots of the following warnings:
MyApp(2138,0x104338000) malloc: *** can't protect(0x3) region for postlude guard page at 0x104950000
They don't stop execution, but they look scary and are probably related to occasional crashes of my app. I googled and only found two pages on the entire web about it, and neither of them helps. I wonder if anyone here knows how to fix this?
Edit: here is the product scheme I used:
The error you're seeing comes from Apple's malloc implementation and is due to vm_protect failing when trying to modify the memory protection of the guard pages that have been added to your memory allocations.
So it sounds like you've enabled debugmalloc's MallocGuardEdges flag (I didn't think debugmalloc was available on iOS devices).
The 0x3 = VM_PROT_READ | VM_PROT_WRITE in the message is saying that vm_protect failed to make the page read-write which means that this is happening in response to a free.
The only documented return codes for vm_protect are KERN_PROTECTION_FAILURE and KERN_INVALID_ADDRESS so at this point I can only guess what happened. Making a page read-write seems like a modest request, for a valid page you wouldn't expect KERN_PROTECTION_FAILURE, which leaves KERN_INVALID_ADDRESS, meaning that perhaps your page at 0x104950000 is invalid.
Which would imply a memory stomping bug.
The issue is a year old, but we ran into the same problem and found this thread. We were able to simplify and reproduce it in the latest Xcode 7.3 on Mac with the following piece of C code:
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    const int s = 100, n = 5000;
    int i;
    void *p = malloc(s);
    /* Grow the block step by step... */
    for (i = 2; i <= n; i++)
        p = realloc(p, i * s);
    /* ...then shrink it again, reporting whenever the pointer moves. */
    for (i = n - 1; i > 0; i--)
    {
        void *newp = realloc(p, i * s);
        if (newp != p)
            printf("realloc(p,%d * %d = %d) changes pointer from %p to %p\n",
                   i, s, i * s, p, newp);
        p = newp;
    }
    free(p);
    return 0;
}
This will trigger the malloc_printf() breakpoint in the 2nd for loop (when the reallocations shrink memory) and print:
malloc: *** can't protect(0x3) region for postlude guard page at 0x48ed000
It appears (setting a breakpoint on malloc_printf()) that this happens exactly on the first time that realloc() changes the returned pointer, the total output of above program is:
realloc(p,1249 * 100 = 124900) changes pointer from 0x48b0000 to 0x5000000
realloc(p,2 * 100 = 200) changes pointer from 0x5000000 to 0x240cc60
Playing a bit with combinations of the block size s and number of iterations n, it happens at least for 10/50000, 100/5000, 200/5000, ...; it seems to occur when the allocated memory i * s shrinks to around 124000 bytes. Other combinations like 1/200000 don't trigger malloc_printf().
Given the simplicity of this code snippet, we believe that this is a bug in Apple's malloc debug implementation.... or the message is supposed to be some informative (internal) message rather than trying to signal a real memory issue.
(A version of) the source code for Apple's malloc implementation can be found at http://www.opensource.apple.com/source/Libc/Libc-391.4.2/gen/scalable_malloc.c?txt. We are considering raising this with the Apple Developer Centre...
So, in short, the answer is that it might very well not be a memory stomping bug in your code, but instead an issue in the malloc debug code itself, in which case you can just ignore the message.
I'm playing heaps of videos at the same time with AVPlayer. To reduce loading times, I'm storing the corresponding views in a NSCache.
This works fine until reaching a certain number of videos, from which the videos simply stop playing, or even appearing.
There's no error, log or memory warning. In particular, I'm listening to UIApplicationDidReceiveMemoryWarningNotification to clear the cache but this is never received.
If I remove the cache, all the videos play at expense of worse performance.
This makes me suspect that AVPlayer is using memory from a different process (which one?). And when that memory reaches a certain limit, new players cease to work.
Is this correct?
If so, is there a way to be notified when this magic limit is reached to take the appropriate measures (e.g., clear the cache) to ensure playback of other media?
Good news and bad news - good is you can probably fix the problem, bad is it takes work and is somewhat complex.
Root Problem
The reason you don't get notified early is that iOS does not find out that your app has exceeded its memory budget until it's almost too late, and then it immediately kills it. The problem has to do with the way iOS (and OS X) manage the file system cache. Normally, when files get opened, as you read the data, the file data gets transferred into a buffer in the Unified Buffer Cache (a term you can google for more info) - I'll call it UBC from now on.
So suppose you have 10 open files, and you have read every file to the end, but have not closed the files. Well, all that data is sitting in the UBC. Now, if you close the files, the buffers are all freed. And technically, the OS can purge these buffers too - only it seems that by the time it realizes memory is tight, it chooses to blow the app away first (and there may be valid reasons for it to do this). So imagine that your app is showing videos, and the videos get loaded through the file system: the number of free buffers starts dropping. At some point iOS notices this, tracks down whom most of them belong to (your app), and wham, kills your app ASAP.
I hit this problem myself in an open source project I support, PhotoScrollerNetwork. Users started complaining that their app was getting terminated by the system, like yours, without any notification. I tried in vain to monitor the UBC (there are APIs on OS X to do so, but not on iOS). In the end I found a solution using a heuristic: monitor all your memory usage including the UBC, and don't exceed 50% of the total available iOS memory pool.
So (you might ask) - what is the Apple approved way to solve this problem? Well, there is none. How do I know that? Because I had a half hour long discussion at WWDC 2012 with the Director of Core iOS in one of the labs (after getting ping ponged around by others who had no idea what I was talking about). In the end, after I explained the above heuristic, he told me directly that the solution was probably as good as any he could think of. Without an API to directly monitor the UBC, you can only approximate its usage and adjust accordingly.
But, you say, I'm using NSCache - why doesn't the system account for the AVPlayer memory there? The reason is undoubtedly the UBC - an AVPlayer instance probably only consumes a few thousand K of memory itself; it's the open file backing the video that is not accounted for by iOS.
Possible Solutions
1) If you can load the videos directly into a NSData object, and keep that in the NSCache, you can most likely totally avoid the UBC issues mentioned above. [I don't know enough about the AV system to know if you can do this.] In this case the system should be capable of purging memory when it needs to.
2) Continue using your original code, but add memory management to it. That is, when you create an AVPlayer instance, account for the size of the video in bytes, and keep a running tally of all this memory. When you approach 50% of total device free memory, start purging old AVPlayers.
Code
For completeness, I've provided the relevant code from PhotoScrollerNetwork below. If you want more details you can peruse the project - however, it's quite complex, so expect to spend some time (it's doing JPEG decoding on the fly for massive images and writing tiles to the file system as the decode proceeds).
// Data Structure
typedef struct {
    size_t freeMemory;
    size_t usedMemory;
    size_t totlMemory;
    size_t resident_size;
    size_t virtual_size;
} freeMemory;
Early on in your app:
// ubc_threshold_ratio defaults to 0.5f
// Take a big chunk of either free memory or all memory
freeMemory fm = [self freeMemory:@"Initialize"];
float freeThresh = (float)fm.freeMemory * ubc_threshold_ratio;
float totalThresh = (float)fm.totlMemory * ubc_threshold_ratio;
size_t ubc_threshold = lrintf(MAX(freeThresh, totalThresh));
size_t ubc_usage = 0;
// Method on some class to monitor the memory pool
- (freeMemory)freeMemory:(NSString *)msg
{
    // http://stackoverflow.com/questions/5012886
    mach_port_t host_port;
    mach_msg_type_number_t host_size;
    vm_size_t pagesize;

    freeMemory fm = { 0, 0, 0, 0, 0 };

    host_port = mach_host_self();
    host_size = sizeof(vm_statistics_data_t) / sizeof(integer_t);
    host_page_size(host_port, &pagesize);

    vm_statistics_data_t vm_stat;

    if (host_statistics(host_port, HOST_VM_INFO, (host_info_t)&vm_stat, &host_size) != KERN_SUCCESS) {
        LOG(@"Failed to fetch vm statistics");
    } else {
        /* Stats in bytes */
        natural_t mem_used = (vm_stat.active_count +
                              vm_stat.inactive_count +
                              vm_stat.wire_count) * pagesize;
        natural_t mem_free = vm_stat.free_count * pagesize;
        natural_t mem_total = mem_used + mem_free;

        fm.freeMemory = (size_t)mem_free;
        fm.usedMemory = (size_t)mem_used;
        fm.totlMemory = (size_t)mem_total;

        struct task_basic_info info;
        if (dump_memory_usage(&info)) {
            fm.resident_size = (size_t)info.resident_size;
            fm.virtual_size  = (size_t)info.virtual_size;
        }

#if MEMORY_DEBUGGING == 1
        LOG(@"%@: "
            "total: %u "
            "used: %u "
            "FREE: %u "
            "  [resident=%u virtual=%u]",
            msg,
            (unsigned int)mem_total,
            (unsigned int)mem_used,
            (unsigned int)mem_free,
            (unsigned int)fm.resident_size,
            (unsigned int)fm.virtual_size
        );
#endif
    }
    return fm;
}
When you open a video, add its size to ubc_usage, and when you close one, decrement it. When you want to open a new video, test ubc_usage against ubc_threshold, and if it exceeds that value, close something first.
PS: you can try calling that freeMemory method at other times to see, but in my case it hardly changes at all when files get opened - the system seems to consider the whole UBC as "free", since it could purge it if it needed to (I guess).
If you're throwing all of these videos in a NSCache, you have to be prepared for the cache to throw away items when it feels like they are consuming too much memory. From the NSCache documentation:
The NSCache class incorporates various auto-removal policies, which
ensure that it does not use too much of the system’s memory. The
system automatically carries out these policies if memory is needed by
other applications. When invoked, these policies remove some items
from the cache, minimizing its memory footprint.
Check to see if you're getting nils back from the cache, and if you are, you'll have to reconstruct your objects.
Edit:
It is also worth mentioning that objc.io #7 advises against storing large objects in a NSCache:
The eviction method of NSCache is non-deterministic and not
documented. It’s not a good idea to put in super-large objects like
images that might fill up your cache faster than it can evict itself.
Is it possible to write an app that uses, say, 200MB? My iPad has 1GB, but I get
didReceiveMemoryWarning
after using 20 or 30MB, and shortly after that my app is killed. (I am the foreground app, so I don't really see why I have to get this warning - why doesn't the OS close the background apps instead? But whatever.) I am taking no action in didReceiveMemoryWarning (just logging it and calling super) - is that why I am killed? Or are there other possible reasons?
So I understand I am supposed to free up memory when I get the warning, but I don't want to! (Let's assume my app REALLY does need 200MB to operate.)
If I did free up some memory when I get the warning (how much?), would my app then not be killed? And could I then carry on and use up MORE memory? If so, I could create some "balloon" memory just so I can free it when warned, and then at least my app survives. This seems insane though.
Or is it basically impossible to have an iPad app that uses more than a few tens of MB?
I recently had this problem. It basically comes down to the speed at which you allocate memory. If you try to grab a lot of memory up front, then iOS will terminate you for using too much memory and not responding to memory warnings. iOS memory handling is ridiculous, really. The worst thing is that my problems only arose AFTER I'd released the app on the App Store. It took me ages to track down what the problem was :(
The way I managed to handle this was to allocate the RAM I needed at startup (64MB) slowly, holding off whenever I received memory warnings. I created my own ViewController that displays an animated splash screen while I'm initialising the memory. In viewDidLoad I do the following (Meg is a simple inline function that multiplies by 1024 * 1024):
mAllocBlockSize = Meg( 2 );
mAllocBlock = (char*)malloc( mAllocBlockSize );
//[mpProgressLabel setText: @"Initialising Memory: 1MB"];
mpInitTimer = [NSTimer scheduledTimerWithTimeInterval: 0.5f target: self selector: @selector( AllocMemory ) userInfo: nil repeats: YES];
In my AllocMemory selector I do this:
- (void) AllocMemory
{
    if ( self.view == nil )
        return;

    if ( mMemoryWarningCounter == 0 )
    {
        if ( mAllocBlockSize < Meg( 64 ) )
        {
            mAllocBlockSize *= 2;
            mAllocBlock = (char*)realloc( mAllocBlock, mAllocBlockSize );
            ZeroMemory( mAllocBlock, mAllocBlockSize );
            if ( mAllocBlockSize == Meg( 64 ) )
            {
                mMemoryWarningCounter = 8;
            }
        }
        else
        {
            free( mAllocBlock );
            // Initialise main app here.
        }
    }
    else
    {
        mMemoryWarningCounter--;
    }
}
And to handle the memory warnings I do as follows:
- (void)didReceiveMemoryWarning
{
    [super didReceiveMemoryWarning];
    mMemoryWarningCounter += 4;
}
Also note the ZeroMemory step. When I didn't have it there, I would allocate 64MB and still get booted. I assume that touching the memory fully commits it to my app, so zeroing the memory was necessary to eliminate the memory-warning and eviction problems I was suffering.
Something is not right. Any app can use about 1/3 to 1/2 of the total physical RAM on any device under most any version of iOS from 6 to 8 without being jetsammed (killed).
I can write a simple app that instantly takes 400MB on a 1GB device, and it's not killed - unless iOS can't terminate other services fast enough.
iOS 8 is more forgiving than 6 or 7, as many of the launch daemons now have a jetsam priority flag which decides the order in which things get killed, as well as a memory limit, so if a daemon exceeds that high-water mark it is killed. iOS should let any app keep using memory until all the lower-priority jetsam services have been killed off. There's also another setting for launch daemons that terminates them when memory is under pressure - regardless of jetsam priority.
Once all that's left are services with a higher priority than a user app, that's when the app gets jetsammed/killed - though it should get memory warnings before it gets slammed.
Programming on the iPad Air 2 is MUCH easier: 2GB of RAM, 300-400MB free after boot, and iOS will back down to using 700MB, allowing 1.3GB for an app... And I bet the iPhone 7 and Mini 4s will have 2GB. That will let us see PlayStation 3 (or better) games for iOS, IF AND ONLY IF users will pay the price of a normal PS3 game ($20-80). Most people will complain, but most spend more on these free-to-play apps with $4.99-$129.99 IAPs - absurd (Apple should limit IAPs to $29.99).
Gone are the days of the 10% rule (where your app should use no more than 10% of system RAM).
Look at the more hardcore, major iOS games... They use 300-400MB on 1GB devices and won't run on 512MB devices.
So if you are being killed at 30MB, something really is not right.
I'm using sqlite in my app via the FMDB wrapper.
Memory usage in my app sits at 2.25 MB before a call to VACUUM:
[myFmdb executeUpdate: @"VACUUM;" ];
Afterwards it's at 5.8 MB, and I can't seem to reclaim the memory. Post-vacuum, the Instruments/Allocations tool shows tons of sqlite3MemMalloc calls with live bytes, each allocating 1.5 K.
Short of closing the database and reopening it (an option), how can I clean this up?
Edit: closing and reopening the database connection does clear up the memory. This is my solution unless someone can shed some further insight to this.
I posted this question on the sqlite-users list and got a response that suggested reducing the cache size for sqlite. This is done by executing the following statement (adjusting the size value as desired):
pragma cache_size = 100
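The same pragma can be issued through any SQLite binding - FMDB's executeUpdate: included. As a quick illustration using Python's built-in sqlite3 module against an in-memory database (purely to show the pragma's effect, not the FMDB API):

```python
import sqlite3

# Open a throwaway in-memory database and cap its page cache at
# 100 pages instead of SQLite's (much larger) default.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA cache_size = 100")

# Reading the pragma back confirms the new limit.
size = conn.execute("PRAGMA cache_size").fetchone()[0]
print(size)  # prints 100
conn.close()
```

Note that cache_size is per-connection, so in FMDB you would execute it once right after opening the database.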
EDIT: here's another nifty trick for releasing SQLite memory. Be sure to #define SQLITE_ENABLE_MEMORY_MANAGEMENT.
Documented here: http://www.sqlite.org/c3ref/release_memory.html
int bytesReleased = sqlite3_release_memory( 0x7fffffff );
NSLog( @"sqlite freed %d bytes", bytesReleased );