Saving two different NSManagedObjectContexts causes infinite memory allocation - iOS

I'm getting a weird issue. I have the following set-up:
Model.xcdatamodeld
ModelBackup.xcdatamodeld
Each of these has its own NSManagedObjectContext, NSPersistentStoreCoordinator, and NSManagedObjectModel.
I can read and write to each of these successfully on its own. However, if I try to read/write to Model at the same time I'm reading/writing to ModelBackup, I get infinite memory allocation. In the simulator, the CPU spikes to 200% and memory grows by around 80-100 MB/second, eventually crashing when it passes 2.0 GB. This happens when I do a context save on both of these NSManagedObjectContexts. I can read/access them fine.
Anyone know why I'm unable to write to both of these?
I have Model with ConcurrencyType:NSMainQueueConcurrencyType and ModelBackup with ConcurrencyType:NSPrivateQueueConcurrencyType.
The idea behind having two different xcdatamodeld files is a workaround/alternative to the parent-child Core Data pattern. Our app does massive updates to the data model, so I want a background store to perform those updates and then, when the app launches the next time, switch to the new, updated SQLite file.

First, you can do massive updates with parent/child as long as you pay attention to how much you are writing to disk at any one save.
Second, you can have two NSPersistentStoreCoordinator instances pointed at the same sqlite file and avoid using two files.
Third, what does your code look like for the creation of these contexts? Are you using the second context only within -performBlock: calls?
Fourth, what happens when you stop in the middle of that memory allocation? What does your stack look like?
For your immediate problem, you have an infinite loop. If I had to guess, I would guess that you are using NSManagedObjectContextDidSaveNotification, probably registered with a nil object, and each save is kicking off a save to the other context, causing a loop.
Seeing the code would help solve your immediate issue. However I suspect there is an easier solution to your problem.
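If that guess is right, the usual fix is to observe the notification for one specific context and only merge in the handler, never save. A minimal Swift sketch under that assumption (the context names here are hypothetical, not the asker's code):

```swift
import CoreData

let mainContext = NSManagedObjectContext(concurrencyType: .mainQueueConcurrencyType)
let backupContext = NSManagedObjectContext(concurrencyType: .privateQueueConcurrencyType)

// Observe ONLY the backup context instead of passing object: nil.
let observerToken = NotificationCenter.default.addObserver(
    forName: .NSManagedObjectContextDidSave,
    object: backupContext,
    queue: nil
) { note in
    mainContext.perform {
        // Merge, but do not call save() here; saving in response to a save
        // notification is exactly what produces the ping-pong loop.
        mainContext.mergeChanges(fromContextDidSave: note)
    }
}
```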

Related

Avoiding fatal crashes

I have been debugging an app for a while and am ready to upload it to the App Store. However, I still get occasional crashes that are hard to duplicate or debug, for example when pushing buttons in weird sequences. If I write these sequences down, I can debug them, but inevitably a crash happens when I haven't written the sequence down, and these can be difficult to replicate.
I know I need to just call it a day and leave future bug fixes for the next release. I have removed all the abort() statements I had in testing. However, occasionally, rather than getting a crash that lets you just close the app and reopen it, I get one that makes it impossible to open the app without reinstalling. For example, this just happened, with a
'NSGenericException', reason: '*** Collection <__NSCFSet: 0x174a41b90> was mutated while being enumerated.'
This resulted from switching VCs during a background sync to the cloud.
I can live with crashes where you just need to reopen but not with ones that render the app unusable. Are there any guidelines for types of crashes to help you focus on the really fatal ones?
Or is there anything you can do to keep crashes from bricking the app?
You should just fix this problem. Since you have this crashlog with that error message, you know which method can raise the problem, so you've got a good head start on fixing it, even if the problem manifests itself inconsistently and/or in a variety of ways.
The occasional crash may not seem like too much of an inconvenience to you, but it is the quickest way to 1-star reviews and unhappy customers. I suspect you will quickly regret distributing an app with known, easily reproduced crashes.
But, a few observations:
It sounds like your background update process is mutating your model objects used by the main thread.
If possible, simply do not change any of your model objects on the background thread. Instead, populate a local variable and, when you're ready, dispatch both the model update and the UI refresh to the main thread (see the sketch after these observations).
If you cannot do this for some reason, you have to synchronize all model updates with some mechanism such as locks, a GCD serial queue, a reader-writer model, etc. This is a slightly more complicated approach, but it can be done.
I would advise temporarily editing your target's scheme and turning on the Thread Sanitizer.
It may help you identify and more easily reproduce these sorts of problems, and the more easily you can reproduce the problem, the more easily you will be able to fix it.
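For the first observation above, here is a minimal Swift sketch of the "mutate on the main thread only" approach; the class, property, and method names are assumptions for illustration, not the asker's code:

```swift
import UIKit

final class ItemListViewController: UITableViewController {
    // Placeholder model; the real types in the asker's app are unknown.
    private var items: [String] = []

    func refreshFromCloud() {
        DispatchQueue.global(qos: .utility).async {
            // Build results in a local value; touch no shared model objects here.
            let freshItems = self.downloadItems()
            DispatchQueue.main.async {
                // Mutate the model and refresh the UI together on the main queue,
                // so nothing is enumerated while another thread mutates it.
                self.items = freshItems
                self.tableView.reloadData()
            }
        }
    }

    private func downloadItems() -> [String] {
        // Stand-in for the real background sync work.
        return ["example"]
    }
}
```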
You say:
Or is there anything you can do to keep crashes from bricking app?
It sounds like the "save" operation is somehow leaving the results in persistent storage in an internally inconsistent state. Any one of the following would help, though I'd suggest you do all three if possible:
At the risk of repeating myself, fix the crash (it's always better to eliminate the source of the problem than to try to program around its manifestations);
Depending upon how you're saving your results, you may be able to employ an atomic save operation, so that if it fails halfway, it won't leave things in an inconsistent state; we can't advise how you should do that without seeing a code snippet illustrating how you're saving the results, but it's often an option (see the sketch after this list);
Make sure that, if the "load" process that reads persistent storage can fail, it does so gracefully rather than crashing. See if you can get the app into this state where it fails during start-up, and then carefully debug what's going on in the start-up process that causes it to fail with this particular set of data in persistent storage. In the Devices window, there is an option to download the data container associated with an app, so you can carefully diagnose what's going on.
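On the second point, if the results happen to be written out as a single file rather than through Core Data (an assumption; the asker's persistence code isn't shown), an atomic write is enough to avoid a half-written file:

```swift
import Foundation

// .atomic writes to a temporary file first and swaps it into place, so a crash
// mid-save cannot leave a partially written file behind.
func saveResults(_ data: Data, to url: URL) throws {
    try data.write(to: url, options: .atomic)
}
```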

CoreData (Swift) memory issue while inserting thousands of records

My application is written in Swift (latest version), and it has a fairly complex database structure.
I'm importing records on first launch, as the app must support offline access; it can have millions of records.
I'm saving records into an entity that has relationships with around 14-15 other entities (one-to-one and one-to-many).
My application throws a memory warning and gets terminated after around 1000 thousand records. I profiled for leaks, but the app shows none; it just takes a long time.
I have tried creating a singleton context-manager class, and also tried creating a local context while inserting each chunk of records.
For now, I'm fetching 50 records at a time from the web API and saving my context after updating my entities.
I have tried autoreleasepool, but with no success.
Please suggest what I should do.
Thank you
Ashwin
I can advise you to watch this video. It is very informative and explains a lot of useful things about Core Data:
https://developer.apple.com/videos/play/wwdc2013/211/
Are you using the fetchBatchSize property?
https://developer.apple.com/reference/coredata/nsfetchrequest/1506558-fetchbatchsize
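A minimal Swift sketch of fetchBatchSize; "Record" is a placeholder entity name, and the context is whatever NSManagedObjectContext the app already has:

```swift
import CoreData

func fetchRecords(in context: NSManagedObjectContext) throws -> [NSManagedObject] {
    let request = NSFetchRequest<NSManagedObject>(entityName: "Record")
    request.fetchBatchSize = 50   // materialize rows 50 at a time instead of all at once
    return try context.fetch(request)
}
```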
If you are processing large numbers of Core Data objects in a loop, you need to periodically save the context so that Core Data can turn modified objects back into faults instead of keeping them in memory. How often you need to save, and when, depends on your application and the code you are using to process, which it would be helpful to see in your question. You'll need to experiment yourself to find a balance between speed and memory use.
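A rough Swift sketch of that periodic-save pattern; the entity name, attribute key, and batch size are placeholders, not the asker's model:

```swift
import CoreData

func importRecords(_ payloads: [[String: Any]], into context: NSManagedObjectContext) throws {
    let saveEvery = 50
    for (index, payload) in payloads.enumerated() {
        autoreleasepool {
            let record = NSEntityDescription.insertNewObject(forEntityName: "Record", into: context)
            record.setValue(payload["name"], forKey: "name")
        }
        if (index + 1) % saveEvery == 0 {
            try context.save()
            context.reset()   // release the in-memory objects imported so far
        }
    }
    try context.save()        // flush the final partial batch
}
```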
Use the Allocations instrument and you will see where your memory is going. You're not leaking memory; you're just using too much of it.
Disable zombie objects for your project: edit your scheme in Xcode and, on the Run action's Diagnostics tab, uncheck Zombie Objects.

Saving Core Data Context before Crashing

For example, if we hit "Stop" in Xcode, it will close the app, mimicking crash behaviour.
But if my Core Data Context hasn't been saved, when I go back, the data won't be there.
Is there any workaround for this?
Should I save the context every time a big operation is finished?
Thanks.
Based on my experience, you should decide on the right granularity when you use the Core Data save mechanism.
IMHO (others may have different opinions) there is no standard to follow. My rule of thumb is to take two different aspects into consideration: the user and performance.
In the first case, you should save whenever the user performs a critical operation, e.g. the user has entered a lot of values in a form and will expect not to have to enter them again. Regarding the second aspect, save operations can impact the performance of your app: if you frequently write changes to disk, the app will be less responsive. On the other hand, keeping too many objects in memory can lead to memory warnings (which cause Core Data to take specific actions).
A trade-off could be to use background operations to save changes, or to take advantage of the newer Core Data APIs; a sketch follows below. Obviously, the previous rules still apply.
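As a rough illustration of that background-save trade-off, here is a Swift sketch using the newer NSPersistentContainer API; the container, entity, and attribute names are assumptions, not the asker's code:

```swift
import CoreData

func saveAfterCriticalOperation(container: NSPersistentContainer, titles: [String]) {
    // performBackgroundTask hands us a private-queue context, so the save
    // happens off the main thread and doesn't block the UI.
    container.performBackgroundTask { context in
        for title in titles {
            let item = NSEntityDescription.insertNewObject(forEntityName: "Item", into: context)
            item.setValue(title, forKey: "title")
        }
        do {
            try context.save()
        } catch {
            print("Background save failed: \(error)")
        }
    }
}
```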

Fetch related Core Data objects in background? (to prevent UI freeze)

I am currently using a method where I run a fetch request in the background to obtain object IDs, and then instantiate the objects with -existingObjectWithID:error:.
The problem is that these objects have to-many relationships to a large number of objects, and the UI freezes for a while when those objects are accessed. (They are accessed all at once.)
I am guessing that the related objects are faults. I am trying to figure out a way to preload them in the background. Is there a solution to this problem?
Do you know for sure that it is your main thread that is causing the slowdown? (It sure sounds like it.) I'd use Instruments and the Time Profiler to be sure; there is also a way to turn on SQL debugging/timing.
If it is your main thread, there are fantastic WWDC videos (take a look at 2010 too, not just 2011) on how to optimize Core Data.
Try the setRelationshipKeyPathsForPrefetching: method on NSFetchRequest. Pass in an array of keys that represent relationships that should be fetched rather than faulted.
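A short Swift sketch of that prefetching; "Album" and "tracks" are placeholder entity/relationship names, and the context is whatever context the asker fetches on:

```swift
import CoreData

func fetchAlbumsWithTracks(in context: NSManagedObjectContext) throws -> [NSManagedObject] {
    let request = NSFetchRequest<NSManagedObject>(entityName: "Album")
    request.relationshipKeyPathsForPrefetching = ["tracks"]   // fetch the relationship up front instead of faulting later
    return try context.fetch(request)
}
```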
Core Data is not thread safe, so for a background thread you should have a separate managed object context.
Typically, Core Data doesn't take a lot of time to load, but if you are storing blobs (like image data), it can hurt performance. You should use an NSFetchedResultsController with the batch size you want to set; it is much faster, so you probably won't need to worry about background fetching.
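A minimal Swift sketch of the "separate context for the background thread" point; the coordinator is assumed to be the app's existing NSPersistentStoreCoordinator, and all access to the context then goes through perform:

```swift
import CoreData

func makeBackgroundContext(coordinator: NSPersistentStoreCoordinator) -> NSManagedObjectContext {
    let context = NSManagedObjectContext(concurrencyType: .privateQueueConcurrencyType)
    context.persistentStoreCoordinator = coordinator
    return context
}

// Usage: never touch the context directly from another queue; wrap access in perform.
// backgroundContext.perform {
//     // fetch object IDs or prefetch relationships here, off the main thread
// }
```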

Performance of NSManagedObjectContext save degrades dramatically

I am having issues with a Core Data-based iOS app when it tries to build the initial DB from data sent from the server. Basically, the server sends down 1 MB chunks of objects (about 3,000 per chunk), and the iOS client deserializes them and writes them to disk.
What I'm seeing is that everything goes pretty well for about the first 8 chunks (out of 44), then performance drops off dramatically and each chunk starts taking longer and longer, as in the image below. Pretty much all the time is consumed in [NSManagedObjectContext save], as you can see in the Instruments profiling data, but it also appears that the app is no longer running at 100% CPU for some reason, as if it's waiting on disk I/O or something.
A few important facts about how I'm doing this:
Each chunk is processed in its own NSManagedObjectContext with its own NSAutoreleasePool, so there is no object build-up in a non-flushed context between processing of chunks.
There is no NSUndoManager set on any of the contexts.
There is no mergeChangesFromContextDidSaveNotification: going on (i.e. the chunk contexts aren't pushing their changes into a "master" context)
I'm using a SQLite-based datastore on iOS 4.3.
The records being written do have indexes on them.
The entire sync job is processed on a single GCD background thread (i.e. dispatch_queue_create() and dispatch_async()).
I have no idea why the performance suddenly drops off like that or what can be done to address it. I have poked around and read the following, but nothing has jumped out at me yet:
http://cocoawithlove.com/2008/03/testing-core-data-with-very-big.html
Does the performance of saving a ManagedObjectContext depend on the number of contained (unchanged) objects?
Any ideas or pointers for making this app scale up to 100,000 records in the database would be much appreciated.
Edit - extra stats
This Instruments graph shows the same simulation as above (on an iPad 2), but includes the disk activity stats, and you can see pretty plainly that all of the "not running at 100% CPU" time seems to be taken up with writing to disk.
I also ran the same sync attempt on the iOS simulator. Overall memory usage is more or less constant for each chunk, except for a dictionary containing object IDs that grows slightly over time (but these are not Core Data objects or anything that would affect saves; they are just NSNumbers). This dictionary is a small amount of memory compared to the total heap, so the problem is not running out of memory.
What is interesting about this test is that the CoreData Save instrument reports that the successive saves take roughly the same amount of time, which obviously conflicts with the CPU profiling information from the first set of results. It seems like CoreData thinks it is taking the same amount of time to push changes to the DB, but the DB itself (i.e. SQLite) suddenly takes a lot longer to actually stream those changes to disk.
I know this is an old issue, so this is probably no longer relevant for you, but it may be to someone else.
I've seen performance issues seeding a Core Data database over iCloud and discovered that if you have inverse relationships on the data model, performance can be hurt incredibly badly. The way iCloud transaction logging has been implemented, it actually seems to be an inevitable problem. Each transaction sent to iCloud (have a look at them on developer.icloud.com - they're just zipped-up plists) records every relationship affected by a change. Unlike when you modify one end of a relationship in Core Data and it takes care of the inverse end, the Core Data transaction log ends up recording the changes at BOTH ends rather than working them out.
So if you have a one-to-many relationship and you create another record that will end up hanging off the 'many' end, the record at the 'one' end will also be updated to reflect the fact that a new record now hangs off it. If your architecture has a 'type' object that lots of 'data' objects hang off, then every time you add a new data object, a transaction is written for the type object as well. But here's the kicker: because the iCloud Core Data transactions record the ENTIRE state of edited entities, not just the changes, EVERY relationship already recorded against that object is also added to the log, not just the one for the new subordinate record. This can quickly spiral out of control: the amount of data written grows as the number of relationships between entities grows, and it ends up taking longer and longer to save batches.
I've answered a similar question before on the Apple dev forums, which might be useful, as I never seem to be able to describe this succinctly.
The easiest option to improve seeding performance if this scenario is what is impacting you is to switch inverse relationships off, but this isn't always an option.
More information about your implementation would help. For example, do you run this on the main thread or on background threads? However, I have seen this behavior before. When performing extensive batch operations with Core Data, it can slow down if memory is not managed properly. Have you checked memory usage? Have you checked for leaks? Another thing to try is to make sure you are using NSAutoreleasePool correctly, if needed; draining the pool periodically may help performance.
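A small Swift sketch of draining a pool per chunk; processChunk(_:) is a placeholder for the asker's deserialize-and-save code:

```swift
import Foundation

// Placeholder for deserializing one 1 MB chunk and saving it in its own context.
func processChunk(_ chunk: Data) {
    // ...
}

func importAllChunks(_ chunks: [Data]) {
    for chunk in chunks {
        autoreleasepool {
            processChunk(chunk)   // temporaries from this pass are released here
        }
    }
}
```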
