Memory leak (?) after SQLite+FMDB VACUUM command - iOS

I'm using sqlite in my app via the FMDB wrapper.
Memory usage in my app sits at 2.25 MB before a call to VACUUM:
[myFmdb executeUpdate:@"VACUUM;"];
Afterwards it's at 5.8 MB, and I can't seem to reclaim the memory. Post-vacuum, the Instruments/Allocations tool shows tons of sqlite3MemMalloc calls with live bytes, each allocating 1.5 KB.
Short of closing the database and reopening it (an option), how can I clean this up?
Edit: closing and reopening the database connection does clear up the memory. This is my solution unless someone can shed some further insight to this.

I posted this question on the sqlite-users list and got a response that suggested reducing the cache size for sqlite. This is done by executing the following statement (adjusting the size value as desired):
PRAGMA cache_size = 100;
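With FMDB that would presumably look like this (reusing the same myFmdb handle as above; 100 pages is just a starting point to tune):
// Shrink SQLite's page cache so fewer pages stay resident after the VACUUM.
[myFmdb executeUpdate:@"PRAGMA cache_size = 100;"];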
EDIT: here's another nifty trick for releasing SQLite memory. Be sure to #define SQLITE_ENABLE_MEMORY_MANAGEMENT.
Documented here: http://www.sqlite.org/c3ref/release_memory.html
int bytesReleased = sqlite3_release_memory(0x7fffffff);
NSLog(@"sqlite freed %d bytes", bytesReleased);

Related

iPhone memory management for a deallocated password (Malloc Scribble in production? Fill deallocated memory with zeroes?)

I'm doing some research on how the iPhone manages the heap and stack, but it's very difficult to find a good source of information about this. I'm trying to trace how a password is kept in memory, even after the NSString is deallocated.
As far as I can tell, the iPhone will not clear the memory content (write zeros or garbage) once the retain count under ARC drops to 0. So the string with the password will live in memory until that memory location is overwritten.
There's a debug option in Xcode, Malloc Scribble, for debugging memory problems, which fills deallocated memory with 0x55. By enabling this option (and disabling Zombies), and after taking a memory dump of the simulator (using gcore), I can check whether the content has been replaced in memory with 0x55.
I wonder if this is something that can be done with App Store builds (filling deallocated memory with garbage data), whether my assumption that the iPhone will not do that by default is correct, and whether there's any better option to handle sensitive data in memory and clear it after use (mutable data, maybe, so I can overwrite that memory position?).
I don't think that there's something that can be done on the build settings level. You can, however, apply some sort of memory scrubbing yourself by zeroing the memory (use memset with the pointer to your string).
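A minimal sketch of that idea, assuming the secret is held in an NSMutableData (whose buffer we own and may overwrite) rather than an NSString:
// Hold the secret in a mutable buffer we control.
NSMutableData *secret = [[@"s3cret" dataUsingEncoding:NSUTF8StringEncoding] mutableCopy];

// ... use secret.bytes / secret.length ...

// Scrub the plaintext before the allocation is returned to the heap.
// (memset_s, where available, cannot be optimized away the way memset can.)
memset(secret.mutableBytes, 0, secret.length);
secret = nil; // ARC releases the now-zeroed buffer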
As @Artal said, memset can be used to overwrite a memory location. I found this framework, "iMAS Secure Memory", that can be useful to handle this:
The "iMAS Secure Memory" framework provides a set of tools for
securing, clearing, and validating memory regions and individual
variables. It allows an object to have its data sections overwritten
in memory either with an encrypted version or null bytes
They have a method that should be useful for clearing a memory location:
// Return NO if wipe failed
extern inline BOOL wipe(NSObject* obj) {
    NSLog(@"Object pointer: %p", obj);
    if (handleType(obj, @"", &wipeWrapper) == YES) {
        if (getSize(obj) > 0) {
            NSLog(@"WIPE OBJ");
            memset(getStart(obj), 0, getSize(obj));
        }
        else
            NSLog(@"WIPE: Unsupported Object.");
    }
    return YES;
}
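Usage would presumably be as simple as this (hypothetical; passwordField is a made-up source, and it assumes the framework headers are imported):
NSString *password = [self.passwordField text]; // hypothetical source of the secret
// Attempt to zero the object's backing storage in place.
if (!wipe(password)) {
    NSLog(@"wipe failed for %p", password);
}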

Where is my AVPlayer's memory, and how do I get it back?

I'm playing heaps of videos at the same time with AVPlayer. To reduce loading times, I'm storing the corresponding views in an NSCache.
This works fine until I reach a certain number of videos, after which the videos simply stop playing, or even appearing.
There's no error, log or memory warning. In particular, I'm listening to UIApplicationDidReceiveMemoryWarningNotification to clear the cache but this is never received.
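For reference, this is roughly how I register for it (simplified; videoViewCache is just what I call my cache property):
[[NSNotificationCenter defaultCenter] addObserverForName:UIApplicationDidReceiveMemoryWarningNotification
                                                  object:nil
                                                   queue:[NSOperationQueue mainQueue]
                                              usingBlock:^(NSNotification *note) {
                                                  // This never fires before the videos stop working.
                                                  [self.videoViewCache removeAllObjects];
                                              }];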
If I remove the cache, all the videos play, at the expense of worse performance.
This makes me suspect that AVPlayer is using memory from a different process (which one?). And when that memory reaches a certain limit, new players cease to work.
Is this correct?
If so, is there a way to be notified when this magic limit is reached to take the appropriate measures (e.g., clear the cache) to ensure playback of other media?
Good news and bad news - good is you can probably fix the problem, bad is it takes work and is somewhat complex.
Root Problem
The reason you don't get notified early is that iOS does not find out that your app has exceeded its memory budget until it's almost too late; then it immediately kills it. The problem has to do with the way iOS (and OS X) manage the file system cache. Normally, when files get opened and you read the data, the file data gets transferred into a buffer in the Unified Buffer Cache (a term you can google for more info) - I'll call it the UBC from now on.
So suppose you have 10 open files, and you have read every file to the end, but have not closed the files. Well, all that data is sitting in the UBC. Now, if you close the files, the buffers are all freed. And technically, the OS can purge these buffers too - only it seems that by the time it realizes memory is tight, it chooses to blow the app away first (and there may be valid reasons for it to do this). So imagine that your app is showing videos, and the videos get loaded through the file system: the number of free buffers starts dropping. At some point iOS notices this, tracks down whom most of them belong to (your app), and wham, kills your app ASAP.
I hit this problem myself in an open source project I support, PhotoScrollerNetwork. Users started complaining that their app was getting terminated by the system, like yours, without any notification. I tried in vain to monitor the UBC (there are APIs on OS X to do so, but not on iOS). In the end I found a solution using a heuristic: monitor all your memory usage including the UBC, and don't exceed 50% of the total available iOS memory pool.
So (you might ask) - what is the Apple-approved way to solve this problem? Well, there is none. How do I know that? Because I had a half-hour-long discussion at WWDC 2012 with the Director of Core iOS in one of the labs (after getting ping-ponged around by others who had no idea what I was talking about). In the end, after I explained the above heuristic, he told me directly that the solution was probably as good as any he could think of. Without an API to directly monitor the UBC, you can only approximate its usage and adjust accordingly.
But, you say, I'm using NSCache - why doesn't the system account for the AVPlayer memory there? The reason is undoubtedly the UBC - an AVPlayer instance probably only consumes a few thousand KB of memory itself; it's the open file backing the video that is not accounted for by iOS.
Possible Solutions
1) If you can load the videos directly into an NSData object, and keep that in the NSCache, you can most likely totally avoid the UBC issues mentioned above. [I don't know enough about the AV system to know if you can do this.] In this case the system should be capable of purging memory when it needs to.
2) Continue using your original code, but add memory management to it. That is, when you create an AVPlayer instance, account for the size of the video in bytes, and keep a running tally of all this memory. When you approach 50% of total device free memory, start purging old AVPlayers.
Code
For completeness, I've provided the relevant code from PhotoScrollerNetwork below. If you want more details you can peruse the project - however it's quite complex, so expect to spend some time (it's doing JPEG decoding on the fly for massive images and writing tiles to the file system as the decode proceeds).
// Data Structure
typedef struct {
    size_t freeMemory;
    size_t usedMemory;
    size_t totlMemory;
    size_t resident_size;
    size_t virtual_size;
} freeMemory;
Early on in your app:
// ubc_threshold_ratio defaults to 0.5f
// Take a big chunk of either free memory or all memory
freeMemory fm = [self freeMemory:@"Initialize"];
float freeThresh = (float)fm.freeMemory * ubc_threshold_ratio;
float totalThresh = (float)fm.totlMemory * ubc_threshold_ratio;
size_t ubc_threshold = lrintf(MAX(freeThresh, totalThresh));
size_t ubc_usage = 0;
// Method on some class to monitor the memory pool
- (freeMemory)freeMemory:(NSString *)msg
{
    // http://stackoverflow.com/questions/5012886
    mach_port_t host_port;
    mach_msg_type_number_t host_size;
    vm_size_t pagesize;

    freeMemory fm = { 0, 0, 0, 0, 0 };

    host_port = mach_host_self();
    host_size = sizeof(vm_statistics_data_t) / sizeof(integer_t);
    host_page_size(host_port, &pagesize);

    vm_statistics_data_t vm_stat;
    if (host_statistics(host_port, HOST_VM_INFO, (host_info_t)&vm_stat, &host_size) != KERN_SUCCESS) {
        LOG(@"Failed to fetch vm statistics");
    } else {
        /* Stats in bytes */
        natural_t mem_used = (vm_stat.active_count +
                              vm_stat.inactive_count +
                              vm_stat.wire_count) * pagesize;
        natural_t mem_free = vm_stat.free_count * pagesize;
        natural_t mem_total = mem_used + mem_free;

        fm.freeMemory = (size_t)mem_free;
        fm.usedMemory = (size_t)mem_used;
        fm.totlMemory = (size_t)mem_total;

        struct task_basic_info info;
        if (dump_memory_usage(&info)) {
            fm.resident_size = (size_t)info.resident_size;
            fm.virtual_size  = (size_t)info.virtual_size;
        }

#if MEMORY_DEBUGGING == 1
        LOG(@"%@: "
            "total: %u "
            "used: %u "
            "FREE: %u "
            " [resident=%u virtual=%u]",
            msg,
            (unsigned int)mem_total,
            (unsigned int)mem_used,
            (unsigned int)mem_free,
            (unsigned int)fm.resident_size,
            (unsigned int)fm.virtual_size);
#endif
    }
    return fm;
}
When you open a video, add its size to ubc_usage, and when you close one, decrement it. When you want to open a new video, test ubc_usage against ubc_threshold, and if it exceeds the value, you have to close something first.
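In sketch form, the bookkeeping might look like this (ubc_usage and ubc_threshold are the variables from the snippet above; videoPath and the purge helpers are placeholders):
// Hypothetical sketch: make room before opening a new video.
size_t videoSize = (size_t)[[[[NSFileManager defaultManager] attributesOfItemAtPath:videoPath error:NULL] objectForKey:NSFileSize] unsignedLongLongValue];
while (ubc_usage + videoSize > ubc_threshold && [self canPurgeAPlayer]) {
    ubc_usage -= [self purgeOldestPlayerAndReturnItsFileSize]; // placeholder helper
}
ubc_usage += videoSize; // account for the file we are about to open
// ... and subtract the same amount when the player is torn down.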
PS: you can try calling that freeMemory method at other times and see, but in my case it hardly changes at all when files get opened - the system seems to consider the whole UBC as "free", since it could purge it if it needed to (I guess).
If you're throwing all of these videos into an NSCache, you have to be prepared for the cache to throw away items when it feels like they are consuming too much memory. From the NSCache documentation:
The NSCache class incorporates various auto-removal policies, which
ensure that it does not use too much of the system’s memory. The
system automatically carries out these policies if memory is needed by
other applications. When invoked, these policies remove some items
from the cache, minimizing its memory footprint.
Check to see if you're getting nils back from the cache, and if you are, you'll have to reconstruct your objects.
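In sketch form, that check might look like this (playerCache and the rebuild step are illustrative, not the asker's actual code):
AVPlayer *player = [self.playerCache objectForKey:videoURL];
if (player == nil) {
    // NSCache evicted the entry under memory pressure; rebuild and re-insert it.
    player = [AVPlayer playerWithURL:videoURL];
    [self.playerCache setObject:player forKey:videoURL];
}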
Edit:
It is also worth mentioning that objc.io #7 advises against storing large objects in an NSCache:
The eviction method of NSCache is non-deterministic and not
documented. It’s not a good idea to put in super-large objects like
images that might fill up your cache faster than it can evict itself.

Why do I get memory warnings with only 7 MB of memory allocated?

I am running my iOS app on an iPod touch device and I get memory warnings even though the total allocation peak is only 7 MB, as shown below (this happens when the Game Scene is pushed):
What I find strange is that:
the left peak (at time 0.00) corresponds to 20 MB of memory allocated (Introduction Scene) and despite this DOES NOT give any memory warning.
the central peak (at time 35.00) corresponds to roughly 7 MB of memory allocated (the Game Scene is being pushed) and DOES give a memory warning.
I do not understand why I get those warnings if the total memory is only 7 MB. Is this normal? How can I avoid this?
Looking at the allocation density we can see the following schema, which (to me) does not show much difference between the moment when the Intro Scene is being pushed (0.00) and the moment when the Game Scene is being pushed (35.00). Since the density peaks are similar, I would assume that the memory warnings are due to something else that I am not able to spot.
EDIT:
I have been following a suggestion to use Activity Monitor instead, but unfortunately my app crashes when loading the Game Scene with only 30 MB of memory allocated. Here is the Activity Monitor report.
Looking at the report I can see a total real memory usage sum of about 105 MB. Given that this should refer to RAM, and that my model should have 256 MB of RAM, this should not cause app crashes or memory problems.
I ran the Leaks instrument and it does not show any leaks in my app. I also killed all the other apps.
However, analyzing the report, I see an astonishing 167 MB of virtual memory associated with my app. Is this normal? What does that value mean? Can this be the reason for the crash? How can I detect which areas of my code are responsible for this?
My iPod is a 4th generation model with 6.4 GB of storage capacity and only 290 MB of it free. I am not sure if this somehow affects virtual memory paging performance.
EDIT 2: I have also looked more at SpringBoard, and its virtual memory usage is 180 MB. Is this normal? I found some questions/answers that seem to suggest that SpringBoard is responsible for autoreleasing objects (it should be the process for managing the screen and home button, but I am not sure if it also has to do with memory management). Is this correct?
Another note: I am using ARC. However, I am not sure this has much to do with the issue, as there are no apparent memory leaks, and Xcode should add the release/retain calls to the compiled binary.
EDIT 3: As said before, I am using ARC and Cocos2d (2.0). I have been playing around with Activity Monitor. I found out that if I remove the GameCenter authentication mechanism then Activity Monitor runs fine (new doubt: has anyone else had a similar issue? Is the GameCenter authentication view being retained somewhere?). However, I noticed that every time I navigate back and forth among the various scenes prior to the GameScene (Initial Scene -> Character Selection -> Planet Selection -> Character Selection -> Planet Selection -> etc. -> Character Selection...) the REAL MEMORY usage increases. After a while I start to get memory warnings and the app gets killed by iOS. Now the question is:
-> am I replacing the scenes in the correct way? I call the following from the various scenes:
[[CCDirector sharedDirector] replaceScene: [MainMenuScene scene]];
I have Cocos2d 2.0 as a static library, and the code of replaceScene is this:
-(void) replaceScene:(CCScene *)scene
{
    NSAssert(scene != nil, @"Argument must be non-nil");

    NSUInteger index = [scenesStack_ count];

    sendCleanupToScene_ = YES;
    [scenesStack_ replaceObjectAtIndex:index - 1 withObject:scene];
    nextScene_ = scene; // nextScene_ is a weak ref
}
I wonder if somehow the scene does not get deallocated properly. I verified that the cleanup method is being called; however, I also added a CCLOG call in the CCLayer dealloc method and rebuilt the static library. The result is that the dealloc method doesn't seem to be called.
Is this normal? :D
I found that other people have had similar issues. I am wondering if it has to do with retain cycles and blocks capturing self. I really need to spend some time studying this unless, given EDIT 3, anyone can already tell me what I am doing wrong :-)
Memory is shared across all apps and processes running on iOS, so other apps can use a lot of memory and cause your app to receive memory warnings too. You'll keep receiving memory warnings as long as memory stays tight.
To understand what actually happens with memory in your app you should:
Profile your app with Leaks (ARC does not guarantee that you have no leaks, e.g. the self-capturing issue).
Use heapshot analysis (briefly described here: http://bentrengrove.com/blog/2013/4/26/heapshot-analysis).
And check out this post about memory and virtual memory in iOS: http://liam.flookes.com/wp/2012/05/03/finding-ios-memory/
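As an illustration of the self-capturing issue from the first point, a typical leak and its fix look like this (property and method names are made up):
// Retain cycle: self retains the block, and the block captures self strongly.
self.completionHandler = ^{
    [self reloadData];
};

// Fixed: the block only holds a weak reference to self.
__weak typeof(self) weakSelf = self;
self.completionHandler = ^{
    [weakSelf reloadData];
};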
I solved this by adding a print of the process's effective memory usage to the console. In this way I could get a precise measurement of the real memory used by the app's process. Using Instruments proved to be imprecise, as the real memory used did not match the one shown in Instruments.
This code can be used to get the effective memory usage:
#import <mach/mach.h> // needed for task_info, mach_task_self and mach_error_string

-(vm_size_t)report_memory
{
    struct task_basic_info info;
    mach_msg_type_number_t size = sizeof(info);
    kern_return_t kerr = task_info(mach_task_self(),
                                   TASK_BASIC_INFO,
                                   (task_info_t)&info,
                                   &size);
    if (kerr != KERN_SUCCESS) {
        NSLog(@"Error with task_info(): %s", mach_error_string(kerr));
    }
    return info.resident_size; // in bytes
}
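It can then be printed like this, for example:
NSLog(@"Resident size: %.2f MB", [self report_memory] / 1024.0 / 1024.0);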

Core Data Import - Not releasing memory

My question is about Core Data and memory not being released. I am doing a sync process, importing data from a web service that returns JSON. I load the data to import into memory, loop through it, and create NSManagedObjects. The imported data needs to create objects that have relationships to other objects; in total there are around 11,000. But to isolate the problem, I am currently only creating the items of the first and second level, leaving the relationships out; those are 9043 objects.
I started checking the amount of memory used, because the app was crashing at the end of the process (with the full data set). The first memory check is after loading the JSON into memory, so the measurement only takes into consideration the creation and insertion of the objects into Core Data. What I use to check the memory used is this code (source):
-(void) get_free_memory {
    struct task_basic_info info;
    mach_msg_type_number_t size = sizeof(info);
    kern_return_t kerr = task_info(mach_task_self(),
                                   TASK_BASIC_INFO,
                                   (task_info_t)&info,
                                   &size);
    if (kerr == KERN_SUCCESS) {
        NSLog(@"Memory in use (in MB): %f", (float)(info.resident_size / 1024.0) / 1024.0);
    } else {
        NSLog(@"Error with task_info(): %s", mach_error_string(kerr));
    }
}
My setup:
1 Persistent Store Coordinator
1 Main ManagedObjectContext (MMC) (NSMainQueueConcurrencyType used to read (only reading) the data in the app)
1 Background ManagedObjectContext (BMC) (NSPrivateQueueConcurrencyType, undoManager is set to nil, used to import the data)
The BMC is independent of the MMC; the BMC is not a child context of the MMC, and they do not share any parent context. I don't need the BMC to notify the MMC of changes, so the BMC only needs to create/update/delete the data.
Platform:
iPad 2 and 3
iOS; I have tested with the deployment target set to 5.1 and 6.1. There is no difference.
Xcode 4.6.2
ARC
Problem:
While importing the data, the used memory doesn't stop increasing, and iOS doesn't seem able to drain the memory even after the end of the process. If the data sample grows, this leads to memory warnings and then to the app being killed.
Research:
Apple documentation
Efficiently importing Data
Reducing Memory Overhead
Good recap of the points to keep in mind when importing data into Core Data (Stack Overflow)
Tests done and analysis of the memory release. He seems to have the same problem as I do, and he sent an Apple bug report, with no response yet from Apple. (Source)
Importing and displaying large data sets (Source)
Indicates the best way to import large amount of data. Although he mentions:
"I can import millions of records in a stable 3MB of memory without
calling -reset."
This makes me think this might somehow be possible? (Source)
Tests:
Data Sample: creating a total of 9043 objects.
Turned off the creation of relationships, as the documentation says they are "expensive"
No fetching is being done
Code:
- (void)processItems {
    [self.context performBlock:^{
        for (int i = 0; i < [self.downloadedRecords count];) {
            @autoreleasepool
            {
                [self get_free_memory]; // prints current memory used
                for (NSUInteger j = 0; j < batchSize && i < [self.downloadedRecords count]; j++, i++)
                {
                    NSDictionary *record = [self.downloadedRecords objectAtIndex:i];
                    Item *item = [self createItem];
                    objectsCount++;
                    // fills in the item object with data from the record; no relationship creation is happening
                    [self updateItem:item WithRecord:record];
                    // creates the subitems and fills them with data from the record; relationship creation is turned off
                    [self processSubitemsWithItem:item AndRecord:record];
                }
                // Context save is done before draining the autorelease pool, as specified in research 5)
                [self.context save:nil];
                // Fault all the created items
                for (NSManagedObject *object in [self.context registeredObjects]) {
                    [self.context refreshObject:object mergeChanges:NO];
                }
                // Double-tap the previous action by resetting the context
                [self.context reset];
            }
        }
    }];
    [self check_memory]; // performs a repeated selector to [self get_free_memory] to view the memory after the sync
}
Measurement:
It goes from 16.97 MB to 30 MB; after the sync it goes down to 28 MB. Repeating the get_free_memory call every 5 seconds keeps the memory at 28 MB.
Other tests without any luck:
recreating the persistent store as indicated in research 2) has no effect
tested letting the thread wait a bit to see if the memory recovers, as in research 4)
setting context to nil after the whole process
doing the whole process without saving the context at any point (therefore losing the info). That actually resulted in less memory being used, leaving it at 20 MB. But it still doesn't decrease, and... I need the info stored :)
Maybe I am missing something, but I have really tested a lot, and after following the guidelines I would expect to see the memory decrease again. I have run the Allocations instrument to check heap growth, and this seems fine too. Also, no memory leaks.
I am running out of ideas to test/adjust... I would really appreciate it if anyone could suggest what else I could test, or point out what I am doing wrong. Or maybe it is just like that, how it is supposed to work... which I doubt...
Thanks for any help.
EDIT
I have used Instruments to profile the memory usage with the Activity Monitor template; the result shown in "Real Memory Usage" is the same as the one printed to the console by get_free_memory, and the memory still never seems to get released.
OK, this is quite embarrassing... Zombies were enabled in the scheme: they were turned off under Arguments, but under Diagnostics "Enable Zombie Objects" was checked...
Turning this off maintains the memory stable.
Thanks to those who read through the question and tried to solve it!
It seems to me the key takeaway of your favorite source ("3MB, millions of records") is the batching that is mentioned (besides disabling the undo manager, which is also recommended by Apple and very important).
I think the important thing here is that this batching has to apply to the @autoreleasepool as well.
It's insufficient to drain the autorelease pool every 1000
iterations. You need to actually save the MOC, then drain the pool.
In your code, try putting a second @autoreleasepool into the second for loop. Then adjust your batch size to fine-tune.
I have made tests with more than 500,000 records on an original iPad 1. The size of the JSON string alone was close to 40 MB. Still, it all works without crashes, and some tuning even leads to acceptable speed. In my tests, I could claim up to approx. 70 MB of memory on an original iPad.
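Applied to the processItems loop from the question, the shape would be roughly this (a sketch; importRecord: stands in for the item and subitem creation calls):
[self.context performBlock:^{
    NSUInteger total = [self.downloadedRecords count];
    for (NSUInteger i = 0; i < total; ) {
        @autoreleasepool { // outer pool: one batch
            for (NSUInteger j = 0; j < batchSize && i < total; j++, i++) {
                @autoreleasepool { // inner pool: one record
                    [self importRecord:[self.downloadedRecords objectAtIndex:i]];
                }
            }
            [self.context save:nil]; // save the MOC first...
            [self.context reset];    // ...then let the pools drain
        }
    }
}];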

Huge memory consumption while parsing JSON and creating NSManagedObjects

I'm parsing a JSON file of about 53 MB on an iPad. The parsing is working fine; I'm using YAJLParser, which is a SAX-style parser, and have set it up like this:
NSData *data = [NSData dataWithContentsOfFile:path options:NSDataReadingMappedAlways|NSDataReadingUncached error:&parseError];
YAJLParser *parser = [[YAJLParser alloc] init];
parser.delegate = self;
[parser parse:data];
Everything worked fine until now, but the JSON file became bigger and I'm suddenly experiencing memory warnings on the iPad 2. It receives 4 memory warnings and then just crashes. On the iPad 3 it works flawlessly without any memory warnings.
I have started profiling it with Instruments and found a lot of CFNumber allocations (I stopped Instruments after a couple of minutes; I had previously let it run until the crash, and the CFNumber allocations were at about 60 MB or more).
After opening the CFNumber detail, it showed a huge list of allocations. [Two Instruments screenshots omitted.]
So what am I doing wrong? And what does that number (e.g., 72.8% in the last screenshot) stand for? I'm using ARC, so I'm not doing any release or retain calls myself.
Thanks for your help.
Cheers
EDIT: I have already asked the question about how to parse such huge files here: iPad - Parsing an extremely huge json - File (between 50 and 100 mb)
So the parsing itself seems to be fine.
See Apple's Core Data documentation on Efficiently Importing Data, particularly "Reducing Peak Memory Footprint".
You will need to make sure you don't have too many new entities in memory at once, which involves saving and resetting your context at regular intervals while you parse the data, as well as using autorelease pools well.
The general pseudocode would be something like this:
while (there is new data) {
    @autoreleasepool {
        importAnItem();
        if (we have imported more than 100 items) {
            [context save:...];
            [context reset];
        }
    }
}
So basically, put an autorelease pool around your main loop or parsing code. Count how many NSManagedObject instances you have created, and periodically save and reset the managed object context to flush these out of memory. This should keep your memory footprint down. The number 100 is arbitrary and you might want to experiment with different values.
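Fleshed out, a sketch of that could look like the following (hasMoreData, importAnItemFrom: and the parser object are placeholders for your parsing callbacks):
NSUInteger importCount = 0;
while ([parser hasMoreData]) {            // placeholder loop condition
    @autoreleasepool {
        [self importAnItemFrom:parser];   // creates one NSManagedObject
        if (++importCount % 100 == 0) {   // batch of 100, as above
            NSError *error = nil;
            if (![context save:&error]) {
                NSLog(@"Save failed: %@", error);
            }
            [context reset];              // flush the imported objects from memory
        }
    }
}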
Because you are saving the context for each batch, you may want to import into a temporary copy of your store in case something goes wrong and leaves you with a partial import. When everything is finished you can overwrite the original store.
Try using [self.managedObjectContext refreshObject:obj mergeChanges:NO] after a certain number of insert operations. This will turn the NSManagedObjects back into faults and free up some memory.
Apple Docs on provided methods
