I'm parsing a roughly 53 MB JSON file on an iPad. The parsing works fine; I'm using YAJLParser, which is a SAX parser, and have set it up like this:
NSData *data = [NSData dataWithContentsOfFile:path options:NSDataReadingMappedAlways|NSDataReadingUncached error:&parseError];
YAJLParser *parser = [[YAJLParser alloc] init];
parser.delegate = self;
[parser parse:data];
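For context, the parse is then driven through SAX-style delegate callbacks roughly like these (method names as in the yajl-objc project; the bodies here are placeholders):

- (void)parserDidStartDictionary:(YAJLParser *)parser { /* push a container */ }
- (void)parserDidEndDictionary:(YAJLParser *)parser { /* pop it */ }
- (void)parserDidStartArray:(YAJLParser *)parser { /* push a container */ }
- (void)parserDidEndArray:(YAJLParser *)parser { /* pop it */ }
- (void)parser:(YAJLParser *)parser didMapKey:(NSString *)key { /* remember the current key */ }
- (void)parser:(YAJLParser *)parser didAdd:(id)value { /* handle one parsed value */ }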
Everything worked fine until now, but the JSON file has grown, and now I'm suddenly getting memory warnings on the iPad 2. It receives four memory warnings and then just crashes. On the iPad 3 it works flawlessly, without any memory warnings.
I started profiling with Instruments and found a lot of CFNumber allocations (I stopped Instruments after a couple of minutes; when I previously let it run until the crash, the CFNumber total was at about 60 MB or more).
Opening the CFNumber detail showed a huge list of allocations (Instruments screenshots omitted).
So what am I doing wrong? And what does that percentage (e.g. the 72.8% in the last screenshot) stand for? I'm using ARC, so I'm not doing any release or retain or whatever myself.
Thanks for your help.
Cheers
EDIT: I have already asked about how to parse such huge files here: iPad - Parsing an extremely huge json - File (between 50 and 100 mb)
So the parsing itself seems to be fine.
See Apple's Core Data documentation on Efficiently Importing Data, particularly "Reducing Peak Memory Footprint".
You will need to make sure you don't have too many new entities in memory at once, which involves saving and resetting your context at regular intervals while you parse the data, as well as using autorelease pools well.
The general pseudocode would be something like this:

while (there is new data) {
    @autoreleasepool {
        importAnItem();
        if (we have imported more than 100 items) {
            [context save:...];
            [context reset];
        }
    }
}
So basically, put an autorelease pool around your main loop or parsing code. Count how many NSManagedObject instances you have created, and periodically save and reset the managed object context to flush these out of memory. This should keep your memory footprint down. The number 100 is arbitrary and you might want to experiment with different values.
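A more concrete sketch of the same pattern (hasMoreData and importAnItem are placeholders for your own parsing code; error handling elided):

NSUInteger imported = 0;
while ([self hasMoreData]) {            // placeholder loop condition
    @autoreleasepool {
        [self importAnItem];            // creates one NSManagedObject from the parsed data
        if (++imported % 100 == 0) {    // the batch size of 100 is arbitrary
            [context save:NULL];        // push the batch to the store
            [context reset];            // drop the objects from memory
        }
    }
}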
Because you are saving the context for each batch, you may want to import into a temporary copy of your store in case something goes wrong and leaves you with a partial import. When everything is finished you can overwrite the original store.
Try using [self.managedObjectContext refreshObject:obj mergeChanges:NO] after a certain number of insert operations. This turns the NSManagedObjects back into faults and frees up some memory.
Apple Docs on provided methods
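For example, a hedged sketch of refaulting everything the context currently knows about (insertCount and the batch size of 500 are illustrative):

if (++insertCount % 500 == 0) {
    for (NSManagedObject *obj in [self.managedObjectContext registeredObjects]) {
        // turn the object back into a fault, releasing its row data
        [self.managedObjectContext refreshObject:obj mergeChanges:NO];
    }
}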
I am retrieving a tremendous amount of data from a server, and I have to quickly initialize and deallocate an NSMutableDictionary.
I would use Core Data, but requirements say I have to use one allocated object for fast storage of incoming JSON data, save it to NSUserDefaults, and completely remove it. This happens multiple times in a background process over a period of time.
Tests were on a live iPhone 6S (not a simulator)
I first tested a completely bare "Single View Application"; as the memory gauge showed (screenshot omitted), a bare-bones "Single View Application" consumes about 9 MB of memory.
I then ran a test on an NSMutableDictionary. The test was to initialize it with data, remove all objects, and watch the RAM recover.
The dictionary stored 10,000 tiny values (in viewDidLoad, for testing purposes):

_largeDictionary = [[NSMutableDictionary alloc] initWithDictionary:@{
    @"Item00000" : [NSNumber numberWithInt:0],
    ...
    @"Item10000" : [NSNumber numberWithInt:10000]}];
After initializing, I waited two seconds, then removed all objects and set the NSMutableDictionary to nil.
dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(2.0 * NSEC_PER_SEC)), dispatch_get_main_queue(), ^{
    [_largeDictionary removeAllObjects];
    _largeDictionary = nil;
});
But the memory gauge did not show the NSMutableDictionary being freed (screenshot omitted).
Does anyone have a solution to this problem? I thought this was taken care of when Apple deprecated the autorelease method. My live environment quickly initializes relatively large chunks of data per key-value pair, then deallocates them and prepares for the next chunk. The next chunk may be stored in a different NSMutableDictionary; that's why it's necessary for me to completely remove the first NSMutableDictionary from memory. I have seen many answers pertaining to this question, but none more recent than 2013.
Update: this @property is (nonatomic, strong).
Thank you in advance.
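One pattern that may help (a sketch, not from the original post): wrap each chunk's lifetime in an explicit @autoreleasepool, so that temporaries created while filling and saving the dictionary are drained at a known point. The defaults key here is illustrative.

@autoreleasepool {
    NSMutableDictionary *chunk = [[NSMutableDictionary alloc] init];
    // ... fill chunk with the incoming JSON key-value pairs ...
    [[NSUserDefaults standardUserDefaults] setObject:chunk forKey:@"latestChunk"];
    chunk = nil; // the dictionary is released here; temporaries drain when the pool exits
}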
I am using ARC for my iOS project and am using a library called SSKeychain to access/save keychain items. I expect my app to access keychain items once every 10 seconds or so at peak load (to fetch an API security token), so I wanted to test how this library handles frequent calls. I made this loop to simulate an insane number of calls and noticed that it bleeds a significant amount of memory (~75 MB) when run on an iPhone (not the simulator):
- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions {
    dispatch_async(dispatch_get_main_queue(), ^{
        NSUInteger beginMemory = available_memory();
        for (int i = 0; i < 10000; ++i) {
            @autoreleasepool {
                NSError *error2 = nil;
                SSKeychainQuery *query2 = [[SSKeychainQuery alloc] init];
                query2.service = @"Eko";
                query2.account = @"loginPINForAccountID-2";
                query2.password = nil;
                [query2 fetch:&error2];
            }
        }
        NSUInteger endMemory = available_memory();
        NSLog(@"Started with %lu, ended with %lu, used %lu",
              (unsigned long)beginMemory, (unsigned long)endMemory,
              (unsigned long)(endMemory - beginMemory));
    });
    return YES;
}
static NSUInteger available_memory(void) {
    // Requires #import <mach/mach.h>
    NSUInteger result = 0;
    struct task_basic_info info;
    mach_msg_type_number_t size = sizeof(info);
    if (task_info(mach_task_self(), TASK_BASIC_INFO, (task_info_t)&info, &size) == KERN_SUCCESS) {
        result = info.resident_size;
    }
    return result;
}
I am using SSKeychain, which can be found here. This test bleeds about 75 MB of memory regardless of whether anything is actually stored in the keychain.
Any ideas what is happening? Is my testing methodology flawed?
I ran your code under the Leaks instrument, and this is what I saw on the Allocations track (screenshot omitted): a lot of memory is allocated during the loop and then released, which is exactly what you would expect.
Looking at the allocation detail (screenshot omitted), you see:
Persistent bytes on the heap of 2.36 MB - this is the memory actually used by the app "now" (i.e. after the loop, with the app idling).
Persistent objects of 8,646 - again, the number of objects allocated "now".
Transient objects of 663,288 - the total number of objects that have been created on the heap over the application's lifetime. You can see from the difference between transient and persistent that most have been released.
Total bytes of 58.70 MB - the total amount of memory that has been allocated during execution. Not the total memory in use, but the total of all allocations, regardless of whether those allocations have since been freed.
The difference between the light and dark pink bar also shows the difference between the current 'active' memory use and the total use.
You can also see from the Leak Checks track that there are no leaks detected.
So, in summary, your code uses a lot of transient memory, as you would expect from a tight loop, but you wouldn't see this memory use in the normal course of your application's execution, where the keychain is accessed a few times every second or minute or whatever.
Now, I would imagine that, having gone to the effort of growing the heap to support all of those objects, iOS isn't going to release that now-freed heap memory back to the system straight away; your app may need a large heap again later. That is why your code reports that a lot of memory is in use, and why you should be wary of trying to build your own instrumentation rather than using the tools available.
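If you do roll your own measurement anyway, note that resident_size includes heap pages that have been freed but not yet returned to the OS. phys_footprint from TASK_VM_INFO is closer to what the memory-warning machinery actually tracks; a hedged sketch (assumes #import <mach/mach.h>, and that the SDK's task_vm_info includes phys_footprint):

static int64_t memory_footprint(void) {
    // phys_footprint counts dirty + compressed memory, closer to jetsam accounting
    task_vm_info_data_t info;
    mach_msg_type_number_t count = TASK_VM_INFO_COUNT;
    kern_return_t kerr = task_info(mach_task_self(), TASK_VM_INFO, (task_info_t)&info, &count);
    return (kerr == KERN_SUCCESS) ? (int64_t)info.phys_footprint : -1;
}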
You should use Instruments to figure out where/what is causing a leak. It's a very good tool to know how to use.
This article is a little dated, but you should get the basic gist.
Ray Wenderlich - Instruments
Going off of Paulw11's comment, I stumbled across this, from the NSAutoreleasePool Class Reference:

The Application Kit creates an autorelease pool on the main thread at the beginning of every cycle of the event loop, and drains it at the end, thereby releasing any autoreleased objects generated while processing an event.
So when you check with Instruments, make sure the event loop has had time to finish. Maybe all you need to do is let the program keep running, then pause the debugger and check Instruments again.
I have an app that needs to take screenshots and save them as files. I'm using ARC, so I'm not releasing variables manually, and it seems my code has some serious leaks.
Here is what I'm running:
- (BOOL)saveNow:(NSString *)filePath {
    UIImage *image = [self.view getImage];
    NSData *imageData = UIImagePNGRepresentation(image);
    return [imageData writeToFile:filePath atomically:YES];
}
Where getImage is a method of a category on UIView:
- (UIImage *)getImage {
    UIGraphicsBeginImageContextWithOptions(self.bounds.size, NO, [[UIScreen mainScreen] scale]);
    [[self layer] renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return viewImage;
}
When running this on a non-retina iPad, the creation of a UIImage object fills the memory with an additional 1 MB, NSData adds a further 4 MB, and as I run this many times, this memory is not released! On a retina iPad each call to saveNow: costs ~17 MB, which causes the device to run out of memory after a few runs.
A little extra info: I'm running this code in a loop that iterates over 300 times in total (small changes are made to the view on each iteration, and I need a screenshot of each for review purposes). If I reduce the number of iterations so the device doesn't run out of memory, I can see that the memory is released once the method containing the loop returns. However, this is not ideal, and I would have expected that moving the memory-heavy code into its own method (saveNow:) would make a difference, but it doesn't. How can I force these objects to be released as soon as they are no longer needed, instead of waiting for the parent method to return? Hopefully without having to disable ARC on the entire project.
Edit: I tried using @autoreleasepool like this:

@autoreleasepool {
    [self saveNow:filePath];
}

The results are better but not perfect. It releases about 4 MB of memory when the block completes, but another 1 MB is still stuck until the containing method returns. So it's an 80% improvement (yay!), but I'm aiming for 100% :) I'll read up more on @autoreleasepool, as I haven't used it before.
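For what it's worth, a sketch of putting the pool around each iteration of the outer loop instead of around a single call (updateViewForIteration: and filePathForIteration: are hypothetical helpers standing in for the per-iteration view changes and file path):

for (NSUInteger i = 0; i < 300; i++) {
    @autoreleasepool {
        [self updateViewForIteration:i];              // hypothetical: applies the small view changes
        [self saveNow:[self filePathForIteration:i]]; // hypothetical path helper
    }
}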
I'll make my comment on @autoreleasepool a legitimate answer, to help you out.
Apple suggests using @autoreleasepool where memory is a concern. The following paragraph is taken from the Core Data documentation, but I believe it applies in this situation as well:
In common with many other situations, when you use Core Data to import a data file it is important to remember "normal rules" of Cocoa application development apply. If you import a data file that you have to parse in some way, it is likely you will create a large number of temporary objects. These can take up a lot of memory and lead to paging. Just as you would with a non-Core Data application, you can use local autorelease pool blocks to put a bound on how many additional objects reside in memory. For more about the interaction between Core Data and memory management, see "Reducing Memory Overhead."
Basically, @autoreleasepool creates a scope: any temporary objects autoreleased inside it are released as soon as the block exits, instead of waiting for the enclosing event-loop pool to drain.
You're expecting the memory to be released completely, which might not be the case with Apple's frameworks; there might be some caching going on behind the curtains (this is just an idea). That is why the remaining 1 MB may be OK. However, just to be safe, I would recommend increasing the iteration count and seeing what happens.
As you mentioned in your comment, your loop is big and nested, so there might be something else going on. Try stripping out all extra operations and see what happens.
Hope this helps, Cheers!
My question is about Core Data and memory not being released. I am running a sync process that imports data from a web service which returns JSON. I load the data to import into memory, loop through it, and create NSManagedObjects. The imported data needs to create objects that have relationships to other objects; in total there are around 11,000 of them. To isolate the problem, I am currently creating only the items of the first and second levels, leaving the relationships out; that is 9,043 objects.
I started checking the amount of memory used because the app was crashing at the end of the process (with the full data set). The first memory check happens after loading the JSON into memory, so the measurement only covers the creation and insertion of the objects into Core Data. The code I use to check the memory is this (source):
- (void)get_free_memory {
    struct task_basic_info info;
    mach_msg_type_number_t size = sizeof(info);
    kern_return_t kerr = task_info(mach_task_self(),
                                   TASK_BASIC_INFO,
                                   (task_info_t)&info,
                                   &size);
    if (kerr == KERN_SUCCESS) {
        NSLog(@"Memory in use (in MB): %f", (float)(info.resident_size / 1024.0) / 1024.0);
    } else {
        NSLog(@"Error with task_info(): %s", mach_error_string(kerr));
    }
}
My setup:
1 Persistent Store Coordinator
1 Main ManagedObjectContext (MMC) (NSMainQueueConcurrencyType, used only to read the data in the app)
1 Background ManagedObjectContext (BMC) (NSPrivateQueueConcurrencyType, undoManager set to nil, used to import the data)
The BMC is independent of the MMC: BMC is not a child context of MMC, and they do not share a parent context. I don't need BMC to notify MMC of changes; BMC only needs to create/update/delete the data.
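As a sketch, the setup described above might look like this (variable names illustrative):

NSManagedObjectContext *mainContext =
    [[NSManagedObjectContext alloc] initWithConcurrencyType:NSMainQueueConcurrencyType];
mainContext.persistentStoreCoordinator = coordinator; // MMC: only reads data in the app

NSManagedObjectContext *backgroundContext =
    [[NSManagedObjectContext alloc] initWithConcurrencyType:NSPrivateQueueConcurrencyType];
backgroundContext.persistentStoreCoordinator = coordinator; // same PSC; not a child of the MMC
backgroundContext.undoManager = nil; // BMC: used for the import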
Platform:
iPad 2 and 3
iOS: tested with deployment targets 5.1 and 6.1; there is no difference
Xcode 4.6.2
ARC
Problem:
While importing the data, the memory used never stops increasing, and iOS doesn't seem able to release it even after the process ends. If the data sample grows, this leads to memory warnings and then to the app being killed.
Research:
Apple documentation
Efficiently importing Data
Reducing Memory Overhead
Good recap of the points to have in mind when importing data to Core Data (Stackoverflow)
Tests done and analysis of the memory release. He seems to have the same problem as I do, and he sent Apple a bug report, with no response from Apple yet. (Source)
Importing and displaying large data sets (Source)
Indicates the best way to import large amounts of data. Although he mentions:

"I can import millions of records in a stable 3MB of memory without calling -reset."

This makes me think this might somehow be possible? (Source)
Tests:
Data sample: creating a total of 9,043 objects.
Turned off the creation of relationships, as the documentation says they are "expensive"
No fetching is being done
Code:
- (void)processItems {
    [self.context performBlock:^{
        for (int i = 0; i < [self.downloadedRecords count];) {
            @autoreleasepool {
                [self get_free_memory]; // prints current memory used
                for (NSUInteger j = 0; j < batchSize && i < [self.downloadedRecords count]; j++, i++) {
                    NSDictionary *record = [self.downloadedRecords objectAtIndex:i];
                    Item *item = [self createItem];
                    objectsCount++;
                    // fills in the item object with data from the record; no relationship creation is happening
                    [self updateItem:item WithRecord:record];
                    // creates the subitems and fills them in with data from the record; relationship creation is turned off
                    [self processSubitemsWithItem:item AndRecord:record];
                }
                // Context save is done before draining the autoreleasepool, as specified in research 5)
                [self.context save:nil];
                // Fault all the created items
                for (NSManagedObject *object in [self.context registeredObjects]) {
                    [self.context refreshObject:object mergeChanges:NO];
                }
                // Double-tap the previous action by resetting the context
                [self.context reset];
            }
        }
    }];
    [self check_memory]; // repeatedly performs [self get_free_memory] to watch the memory after the sync
}
Measurement:
It goes from 16.97 MB to 30 MB; after the sync it goes down to 28 MB. Repeating the get_free_memory call every 5 seconds shows the memory staying at 28 MB.
Other tests without any luck:
recreating the persistent store as indicated in research 2) has no effect
tested letting the thread wait a bit, to see if the memory recovers, as in example 4)
setting the context to nil after the whole process
doing the whole process without saving the context at any point (therefore losing the info). That actually resulted in a lower memory footprint, 20 MB, but it still doesn't decrease and... I need the info stored :)
Maybe I am missing something, but I have really tested a lot, and after following the guidelines I would expect to see the memory decrease again. I have run the Allocations instrument to check heap growth, and that seems fine too. Also, no memory leaks.
I am running out of ideas to test/adjust... I would really appreciate it if anyone could help me with ideas of what else I could test, or point out what I am doing wrong. Or maybe it is just like that, how it is supposed to work... which I doubt...
Thanks for any help.
EDIT
I have used Instruments to profile the memory usage with the Activity Monitor template, and the "Real Memory Usage" it shows is the same as the one printed in the console by get_free_memory. The memory still never seems to get released.
OK, this is quite embarrassing... Zombies were enabled in the scheme: they were turned off under Arguments, but under Diagnostics "Enable Zombie Objects" was checked...
Turning this off keeps the memory stable.
Thanks to those who read through the question and tried to solve it!
It seems to me the key takeaway of your favorite source ("3MB, millions of records") is the batching that is mentioned (besides disabling the undo manager, which is also recommended by Apple and very important).
I think the important thing here is that this batching has to apply to the @autoreleasepool as well.
It's insufficient to drain the autorelease pool every 1000 iterations. You need to actually save the MOC, then drain the pool.
In your code, try putting a second @autoreleasepool into the second for loop, as sketched below. Then adjust your batch size to fine-tune.
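Something like this, based on the processItems code above (only the inner loop shown):

for (NSUInteger j = 0; j < batchSize && i < [self.downloadedRecords count]; j++, i++) {
    @autoreleasepool { // inner pool: temporaries from each record are drained per iteration
        NSDictionary *record = [self.downloadedRecords objectAtIndex:i];
        Item *item = [self createItem];
        [self updateItem:item WithRecord:record];
        [self processSubitemsWithItem:item AndRecord:record];
    }
}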
I have made tests with more than 500,000 records on an original iPad 1. The JSON string alone was close to 40 MB. Still, it all works without crashes, and some tuning even leads to acceptable speed. In my tests, the import claimed up to approx. 70 MB of memory on the original iPad.
I'm developing an application with Adobe AIR 3 for iOS and am frequently getting low-memory errors.
After the iOS 5 update, the OS started killing my app after a few low-memory warnings.
But the thing is, the profiler says the app uses 4 to 9 MB of memory.
There are a lot of bitmap copy operations around, and the app sometimes instantiates new bitmaps from embedded bitmaps.
I have highly optimized everything and looked for leaks etc.
I watch the profiler for memory status, and the GC seems to clear everything. Everything looks perfect, but the app continues to get low-memory errors and gets killed by the OS.
Is there anything wrong with the code below? My assumption is that this ClassReference never gets freed from memory, even though the profiler says the memory is cleared.
I used the clone method to pass by value instead of by reference, so I guess the GC can collect that local variable. I tried with and without clone; nothing changes.
If the code below runs 10-15 times with different tile IDs the app crashes, but with the same IDs it keeps working.
Is there anyone who is familiar with this kind of thing?
tmp is a BitmapData:

if (isMoving)
{
    tmp = getProxyImage(x, y); // low-resolution tile image
}
else
{
    strTmp = "main_TILE" + getTileID(x, y);
    var ClassReference:Class = getDefinitionByName(strTmp) as Class; // full-resolution tile image // something wrong here
    tmp = new ClassReference().bitmapData.clone(); // something wrong here
    ClassReference = null;
}
return tmp.clone();
Thanks for reading. I hope someone has a solution for this.
You are creating three copies of your BitmapData with this. They will likely get garbage-collected eventually, but you probably run out of memory before that happens.
(Here I assume you have embedded your BitmapData using the [Embed] tag.)
var ClassReference:Class = getDefinitionByName(strTmp) as Class;
// allocates no new memory; the class reference already exists

tmp = new ClassReference().bitmapData.clone();
// creates a new BitmapAsset from the class reference, including its BitmapData;
// cloning that BitmapData then gives you a second copy

ClassReference = null;
// not really necessary since ClassReference goes out of scope anyway, but no harm done

return tmp.clone();
// makes a third copy of your second copy and returns it
I would recommend this (assuming you need a unique BitmapData for each tile):
var ClassReference:Class = getDefinitionByName(strTmp) as Class;
return new ClassReference().bitmapData.clone();
If you don't need unique BitmapDatas, keep static properties holding the BitmapDatas on some class and reuse the same instances everywhere. That will minimize memory usage; a sketch follows.
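For illustration, a minimal sketch of that static-cache idea in ActionScript 3 (the class and method names are made up for this example):

package
{
    import flash.display.BitmapData;
    import flash.utils.getDefinitionByName;

    public class TileCache
    {
        private static var _cache:Object = {};

        // Returns the shared BitmapData for an embedded asset class name,
        // creating it on first use and reusing it afterwards (no clones).
        public static function bitmapDataFor(className:String):BitmapData
        {
            if (_cache[className] == null)
            {
                var ClassReference:Class = getDefinitionByName(className) as Class;
                _cache[className] = new ClassReference().bitmapData;
            }
            return _cache[className];
        }
    }
}

The tile lookup then becomes return TileCache.bitmapDataFor(strTmp); with no clone() at all.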