Core Data Concurrency - iOS

I would like some suggestions for making Core Data operations concurrent in my project. The project has been in development for two years, so it contains many implementations that could be optimized using newer Objective-C and Core Data features. Mainly, I am looking to optimize the Core Data operations.
Currently most data operations use the main managed object context. Recently I implemented a feature that downloads a large set of data after login and inserts it into the database through Core Data. This was supposed to execute in parallel with the rest of the application. I have now realized that the Core Data code is running on the main thread, because the application UI blocks during the Core Data operation. I read many blogs and learned there are two strategies for achieving Core Data concurrency: multiple contexts kept in sync via save notifications, and parent/child managed object contexts.
I tried the parent/child strategy, since Apple does not recommend the notification approach. But now I am getting random crashes on executeFetchRequest: with the exception "Collection was mutated while being enumerated". The exception started after I implemented the parent/child strategy. Can anyone help me solve this issue?

Yeah, I know there are not many blogs that describe efficient use of Core Data in a project, but luckily I found one that addresses your problem directly: https://medium.com/soundwave-stories/core-data-cffe22efe716#.3wcpw1ijo
Your exception occurs because you are mutating the database while one of its collections is being enumerated somewhere else. To avoid it, enumerate a copy instead of the live collection. If you fetched the data into an array or dictionary, take a copy first:
NSDictionary *myDict = [coreDataDictionary copy];
Now perform any operation on this copy of the array or dictionary that you fetched from the database; it won't throw the exception.
Hope this helps you.

Try this:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    // DATA PROCESSING
    dispatch_async(dispatch_get_main_queue(), ^{
        // UPDATE UI
    });
});
Note that any NSManagedObjectContext work inside the background block must still happen on the context's own queue: use a private-queue context and its performBlock: rather than touching a main-queue context from a global queue.

You should use a completion block in your code; there are tutorials and explanations of the pattern available.
It allows you not to freeze the application UI even while the download is unfinished.
Execution of the code continues even if the code inside the block isn't finished yet, and a callback fires inside the block when the download is over.
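As a sketch of the completion-block approach (the method name, context property, and import step here are hypothetical, not code from the question):

```objc
// Hypothetical importer: downloads and imports data on a private-queue
// context, then calls back on the main queue when everything is done.
- (void)importDataWithCompletion:(void (^)(NSError *error))completion
{
    NSManagedObjectContext *background =
        [[NSManagedObjectContext alloc]
            initWithConcurrencyType:NSPrivateQueueConcurrencyType];
    background.persistentStoreCoordinator = self.persistentStoreCoordinator;

    [background performBlock:^{
        // ... parse the downloaded payload and insert managed objects here ...
        NSError *saveError = nil;
        [background save:&saveError];

        // Return to the main queue before touching UI or invoking the callback.
        dispatch_async(dispatch_get_main_queue(), ^{
            if (completion) completion(saveError);
        });
    }];
}
```

The caller invokes [self importDataWithCompletion:^(NSError *error){ ... }] and updates the UI inside the callback; the main thread stays free while the download and import run.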

Use this Core Data stack to minimize UI locks when importing large datasets:
One main thread MOC with its own PSC.
One background MOC with its own PSC.
Merge changes into main thread MOC on background MOC's save notifications.
Yes, you can (and should) use two independent PSCs (NSPersistentStoreCoordinator) pointing to the same .sqlite file. This reduces locking to just the SQLite locks, avoiding PSC-level locks, so total UI locking time becomes [SQLite write lock] + [main thread MOC read].
You can use the background MOC with NSConfinementConcurrencyType on a background thread, or even better within an NSOperation; I found it very convenient to process data and feed it to Core Data on the same thread.
Import in batches. Choose batch size empirically. Reset the background MOC after each save.
When processing really large datasets, with hundreds of thousands of objects, do not use refreshObject:mergeChanges: with main thread MOC on every save. It is slow and eventually will consume all of the available memory. Reload your FRCs instead.
And about "Collection was mutated while being enumerated": to-many relationships in Core Data are mutable sets, so you have to make a copy, or better, sort them into an NSArray, before iterating.
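A minimal sketch of such a two-coordinator stack, using a private-queue background context for simplicity (store URL, model setup, and variable names are illustrative, not the answerer's code):

```objc
// Two independent coordinators pointing at the same SQLite file, so the
// main and background contexts contend only at the SQLite level.
NSManagedObjectModel *model = [NSManagedObjectModel mergedModelFromBundles:nil];
NSURL *docs = [[[NSFileManager defaultManager]
    URLsForDirectory:NSDocumentDirectory inDomains:NSUserDomainMask] lastObject];
NSURL *storeURL = [docs URLByAppendingPathComponent:@"Store.sqlite"];

NSPersistentStoreCoordinator *mainPSC =
    [[NSPersistentStoreCoordinator alloc] initWithManagedObjectModel:model];
[mainPSC addPersistentStoreWithType:NSSQLiteStoreType configuration:nil
                                URL:storeURL options:nil error:NULL];

NSPersistentStoreCoordinator *backgroundPSC =
    [[NSPersistentStoreCoordinator alloc] initWithManagedObjectModel:model];
[backgroundPSC addPersistentStoreWithType:NSSQLiteStoreType configuration:nil
                                      URL:storeURL options:nil error:NULL];

NSManagedObjectContext *mainMOC = [[NSManagedObjectContext alloc]
    initWithConcurrencyType:NSMainQueueConcurrencyType];
mainMOC.persistentStoreCoordinator = mainPSC;

NSManagedObjectContext *backgroundMOC = [[NSManagedObjectContext alloc]
    initWithConcurrencyType:NSPrivateQueueConcurrencyType];
backgroundMOC.persistentStoreCoordinator = backgroundPSC;

// Merge background saves into the main context on the main queue.
[[NSNotificationCenter defaultCenter]
    addObserverForName:NSManagedObjectContextDidSaveNotification
                object:backgroundMOC
                 queue:[NSOperationQueue mainQueue]
            usingBlock:^(NSNotification *note) {
    [mainMOC mergeChangesFromContextDidSaveNotification:note];
}];
```

Because both coordinators open the same store file, object IDs from the background stack resolve in the main stack, which is what makes the did-save merge possible.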

Related

Why not use a private context for all Core Data operations?

In my iPhone app I insert a lot of data after login by means of Core Data. Initially I showed a loader while the data was being inserted, so blocking the UI did not matter; now I have removed the loader and moved all the insert operations to a background thread by changing the NSManagedObjectContext concurrency type to NSPrivateQueueConcurrencyType for some insertions, to relieve the UI of the heavy insertion work.
I am wondering what the downside would be if I used this same private-queue context, and not NSMainQueueConcurrencyType, for all operations. Is it recommended?
You SHOULD use NSPrivateQueueConcurrencyType for all of your contexts. NSFetchedResultsController, for example, works fine with a private queue context as long as you observe all of the rules for using queue confinement (i.e. fetches must be performed through the queue, as must faults, etc.). There is a bug with NSFetchedResultsController caching when using private queue contexts, but caching covers only a limited number of use cases.
If/when you are using data from Core Data to update UI elements, you will still have to access the UI from the main queue. For example, this would be accessing a property to update a label:
[[object managedObjectContext] performBlock:^{
    NSString *text = [object someProperty];
    [[NSOperationQueue mainQueue] addOperationWithBlock:^{
        [[self someLabel] setText:text];
    }];
}];
There are many advantages to using private queue confinement. The only downside is that you have to include code like the above, and that small cost is far outweighed by the benefits over performing Core Data work on the main queue.
There is no downside as long as all operations that involve Core Data are performed on this context.
Perhaps a difference, but not really a downside, is that you have to perform all your data operations asynchronously.
By the way, Core Data manages its own background queue for such a context, and as long as you perform all the operations through performBlock: you will be fine.
What is NSManagedObjectContext's performBlock: used for?
The real detail you should tell us is what type of operations you are running.
If you don't touch the UI, it's fine to perform operations on a different thread. For example, say you are importing a lot of JSON data from a back end; without a background context, the app could freeze.
NSMainQueueConcurrencyType creates a context associated with the main dispatch queue and thus the main thread. You use such a context for objects that are required to run on the main thread, usually UI elements. So you need it, for example, when you deal with an NSFetchedResultsController.
Anyway, my personal advice is to profile the application and use threaded contexts where you find bottlenecks. Core Data can become quite complex, so stay simple whenever possible.
I would say the downside is that you may not use the NSManagedObjects you get hold of inside -performBlock: outside of it. You have to transport individual property values out of the block to pass them to your UI elements, because you may not touch the UI directly from inside -performBlock:.
Also, a context of NSMainQueueConcurrencyType does not have a background queue. What -performBlock: does is queue the block for execution in a later run-loop cycle. So while it appears that execution continues from the statement, the block will still block the main thread once it starts executing.

Core Data multithreading performance

I am developing an application that uses Core Data for internal storage. This application has the following functionalities:
Synchronize data with a server by downloading and parsing a large XML file, then save the entries with Core Data.
Allow the user to make (large) fetches and CRUD operations.
I have read through a lot of documentation saying there are several patterns to follow in order to apply multithreading with Core Data:
Nested contexts: this pattern seems to have many performance issues (child contexts block their ancestors when making fetches).
Use one main thread context and background worker contexts.
Use a single context (main thread context) and apply multithreading with GCD.
I tried the three approaches mentioned and realized that the last two work fine. However, I am not sure whether these approaches are correct in terms of performance.
Is there a well-known, performant pattern for building a robust application that implements the mentioned functionalities?
rokridi,
In my Twitter iOS apps, Retweever and #chat, I use a simple two-MOC model. All database insertions and deletions take place on a private concurrent insertionMOC. The main MOC merges through -save: notifications from the insertionMOC and, during merge processing, emits a custom UI-update notification. This lets me work in a staged fashion: all tweets coming into the app are processed in the background and are presented to the UI when everything is done.
If you download the apps, #chat's engine has been modernized and is more efficient and more isolated from the main thread than Retweever's engine.
Anon,
Andrew
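That staged flow might be sketched like this (the notification name, handler, and context properties are hypothetical, not the apps' actual code):

```objc
// The background insertion context saves; the main context merges the
// changes, then emits one custom notification so the UI refreshes once
// per merge rather than once per inserted object.
static NSString * const kUIUpdateNotification = @"UIUpdateNotification";

- (void)insertionContextDidSave:(NSNotification *)note
{
    [self.mainContext performBlock:^{
        [self.mainContext mergeChangesFromContextDidSaveNotification:note];
        [[NSNotificationCenter defaultCenter]
            postNotificationName:kUIUpdateNotification object:self];
    }];
}
```

View controllers listen for kUIUpdateNotification and reload only when a whole batch has landed in the main context.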
Apple recommends using a separate context for each thread:
"The pattern recommended for concurrent programming with Core Data is thread confinement: each thread must have its own entirely private managed object context. There are two possible ways to adopt the pattern: Create a separate managed object context for each thread and share a single persistent store coordinator. This is the typically-recommended approach. Create a separate managed object context and persistent store coordinator for each thread. This approach provides for greater concurrency at the expense of greater complexity (particularly if you need to communicate changes between different contexts) and increased memory usage."
See the Apple documentation.
As per the Apple documentation, use thread confinement to support concurrency: create one managed object context per thread. It will make your life easy. This applies when you are parsing large data in the background while also fetching data on the main thread to display in the UI.
About the merging issue, there are some best practices:
Never pass managed objects between threads. Pass object IDs to the other thread if necessary and access the objects from that thread. For example, when you save data while parsing XML, save it on the current thread's MOC, collect the object IDs, pass them to the UI thread, and re-fetch the objects there.
You can also register for the did-save notification; when one MOC changes you will be notified, and the userInfo dictionary will contain the updated objects, which you can pass to the merge-changes method call.
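A sketch of the objectID hand-off between contexts (context names are illustrative; the parsing step is elided):

```objc
[backgroundMOC performBlock:^{
    // ... parse the XML and insert managed objects here ...

    // Capture the inserted objects before saving; after the save their
    // objectIDs are permanent and safe to hand across threads.
    NSArray *inserted = [[backgroundMOC insertedObjects] allObjects];
    [backgroundMOC save:NULL];
    NSArray *objectIDs = [inserted valueForKey:@"objectID"];

    // Re-materialize the objects in the main-queue context.
    [mainMOC performBlock:^{
        for (NSManagedObjectID *objectID in objectIDs) {
            NSManagedObject *object = [mainMOC objectWithID:objectID];
            // ... update the UI from this object's properties ...
        }
    }];
}];
```

Only NSManagedObjectID instances cross the thread boundary; each context fetches its own copy of every object, which is exactly the confinement rule the answer describes.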

Pitfalls of using two persistent store coordinators for efficient background updates

I am searching for the best possible way to update a fairly large core-data based dataset in the background, with as little effect on the application UI (main thread) as possible.
There's some good material available on this topic including:
Session 211 from WWDC 2013 (Core Data Performance Optimization and Debugging, from around 25:30 onwards)
Importing Large Data Sets from objc.io
Common Background Practices from objc.io (Core Data in the Background)
Backstage with Nested Managed Object Contexts
Based on my research and personal experience, the best option available is to effectively use two separate Core Data stacks that only share data at the database (SQLite) level. This means we need two separate NSPersistentStoreCoordinators, each having its own NSManagedObjectContext. With write-ahead logging enabled on the database (the default from iOS 7 onwards), the need for locking can be avoided in almost all cases (except when there are two or more simultaneous writes, which is not likely in my scenario).
In order to do efficient background updates and conserve memory, one also needs to process data in batches and periodically save the background context, so the dirty objects get stored to the database and flushed from memory. One can use the NSManagedObjectContextDidSaveNotification generated at this point to merge the background changes into the main context, but in general you don't want to update your UI immediately after each batch has been saved. You want to wait until the background job is completely done and then refresh the UI (recommended in both the WWDC session and the objc.io articles). This effectively means that the application's main context remains out of sync with the database for a certain period.
All this leads me to my main question: what can go wrong if I change the database in this manner without immediately telling the main context to merge changes? I'm assuming it's not all sunshine and roses.
One specific scenario I have in mind is: what happens if a fault needs to be fulfilled for an object loaded in the main context, after the background operation has in the meantime deleted that object from the database? Can this, for instance, happen in an NSFetchedResultsController-based table view that uses a batchSize to fetch objects incrementally into memory? I.e., an object that has not yet been fully fetched gets deleted, but then we scroll up to a point where the object needs to get loaded. Is this a potential problem? Can other things go wrong? I'd appreciate any input on this matter.
Great question!
I.e., an object that has not yet been fully fetched gets deleted, but then we scroll up to a point where the object needs to get loaded. Is this a potential problem?

Unfortunately it'll cause problems. The following exception will be thrown:
Terminating app due to uncaught exception 'NSObjectInaccessibleException', reason: 'CoreData could not fulfill a fault for '0xc544570 <x-coredata://(...)>'
This blog post (section titled "How to do concurrency with Core Data?") might be somewhat helpful, but it doesn't exhaust this topic. I'm struggling with the same problems in an app I'm working on right now and would love to read a write-up about it.
Based on your question, comments, and my own experience, it seems the larger problem you are trying to solve is:
1. Using an NSFetchedResultsController on the main thread with thread confinement
2. Importing a large data set, which will insert, update, or delete managed objects in a context.
3. The import causes large merge notifications to be processed by the main thread to update the UI.
4. The large merge has several possible effects:
- The UI gets slow, or too busy to be usable. This may be because you are using beginUpdates/endUpdates to update a table view in your NSFetchedResultsControllerDelegate, and you have a LOT of animations queuing up because of the large merge.
- Users can run into "Could not fulfill fault" as they try to access a faulted object which has been removed from the store. The managed object context thinks it exists, but when it goes to the store to fulfill the fault, the object has already been deleted. If you are using reloadData to update a table view in your NSFetchedResultsControllerDelegate, you are more likely to see this happen than when using beginUpdates/endUpdates.
The approach you are trying to use to solve the above issues is:
- Create two NSPersistentStoreCoordinators, each attached to the same NSPersistentStore or at least the same NSPersistentStore SQLite store file URL.
- Your import occurs on NSManagedObjectContext 1, attached to NSPersistentStoreCoordinator 1, and executing on some other thread(s). Your NSFetchedResultsController is using NSManagedObjectContext 2, attached to NSPersistentStoreCoordinator 2, running on the main thread.
- You are moving the changes from NSManagedObjectContext 1 to 2
You will run into a few problems with this approach.
- An NSPersistentStoreCoordinator's job is to mediate between its attached NSManagedObjectContexts and its attached stores. In the multiple-coordinator-context scenario you are describing, changes made through NSManagedObjectContext 1 that alter the SQLite file will not be seen by NSPersistentStoreCoordinator 2 and its context. Coordinator 2 does not know that context 1 changed the file, and you will get "Could not fulfill fault" and other exciting exceptions.
- You will still, at some point, have to put the changed NSManagedObjects from the import into NSManagedObjectContext 2. If these changes are large, you will still have UI problems and the UI will be out of sync with the store, potentially leading to "Could not fulfill fault".
- In general, because NSManagedObjectContext 2 is not using the same NSPersistentStoreCoordinator as NSManagedObjectContext 1, you are going to have problems with things being out of sync. This isn't how these things are intended to be used together. If you import and save in NSManagedObjectContext 1, NSManagedObjectContext 2 is immediately in a state not consistent with the store.
Those are SOME of the things that could go wrong with this approach. Most of these problems will become visible when firing a fault, because that accesses the store. You can read more about how this process works in the Core Data Programming Guide, while the Incremental Store Programming Guide describes the process in more detail. The SQLite store follows the same process that an incremental store implementation does.
Again, the use case you are describing - getting a ton of new data, executing find-or-create on the data to create or update managed objects, and deleting "stale" objects that may in fact be the majority of the store - is something I have dealt with every day for several years, seeing all of the same problems you are. There are solutions, even for imports that change 60,000 complex objects at a time, and even using thread confinement, but that is outside the scope of your question.
(Hint: Parent-Child contexts don't need merge notifications).
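The hint about parent-child contexts can be sketched as follows: a child's -save: pushes its changes directly into its parent in memory, so no did-save merge notification is required. (Context names here are illustrative; the coordinator is assumed to exist.)

```objc
// Main-queue context attached to the (pre-existing) coordinator.
NSManagedObjectContext *mainMOC = [[NSManagedObjectContext alloc]
    initWithConcurrencyType:NSMainQueueConcurrencyType];
mainMOC.persistentStoreCoordinator = coordinator;

// Private-queue import context that is a *child* of the main context.
NSManagedObjectContext *importMOC = [[NSManagedObjectContext alloc]
    initWithConcurrencyType:NSPrivateQueueConcurrencyType];
importMOC.parentContext = mainMOC;

[importMOC performBlock:^{
    // ... import a batch of objects ...
    [importMOC save:NULL];      // pushes changes up into mainMOC (in memory)
    [mainMOC performBlock:^{
        [mainMOC save:NULL];    // persists them through the coordinator
    }];
}];
```

Because the child saves into its parent rather than into the store, the main context sees the changes without any notification merging; the trade-off, as noted earlier in this document, is that child saves can block their ancestors.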
Two persistent store coordinators (PSCs) are certainly the way to go with large datasets. File locking is faster than the locking within Core Data.
There's no reason you couldn't use the background PSC to create thread-confined NSManagedObjectContexts, one for each operation you run in the background. However, instead of letting Core Data manage the queueing, you now need to create NSOperationQueues and/or threads to manage the operations yourself. NSManagedObjectContexts are cheap to create. You can hang onto a context for the lifetime of a single operation or thread, build up as many changes as you want, and wait until the end to commit them and merge them to the main thread however you decide. Even if you have some main-thread writes, you can still, at crucial points in an operation's lifetime, re-fetch and merge back into your thread's context.
It's also important to know that if you're working on large sets of data, you don't need to worry about merging contexts as long as you aren't touching the same objects. For example, if you have class A and class B, with two separate operations/threads working on them and no direct relationship between them, you do not have to merge the contexts when one changes; you can keep rolling with the changes. The only major need for merging background contexts in this fashion is when directly related objects are faulting, and even then it is better to prevent the situation through some sort of serialization, whether with NSOperationQueue or something else. So feel free to work away on different objects in the background; just be careful about their relationships.
I've worked on large-scale Core Data projects and this pattern has worked very well for me.
Indeed, this is the best Core Data scenario you can work with: almost no main-UI staleness, and easy background management of your data. When you want to tell the main context (and maybe a currently running NSFetchedResultsController) about changes, you listen for save notifications from the background context like this:
[[NSNotificationCenter defaultCenter]
    addObserver:self
       selector:@selector(reloadFetchedResults:)
           name:NSManagedObjectContextDidSaveNotification
         object:backgroundObjectContext];
Then you can merge the changes, but wait for the main thread context to catch them before saving. When the NSManagedObjectContextDidSaveNotification arrives, the main context has not yet absorbed the changes, so performBlockAndWait: is mandatory: the main context merges the changes first, and then the NSFetchedResultsController updates its values correctly.
- (void)reloadFetchedResults:(NSNotification *)notification
{
    NSManagedObjectContext *moc = [notification object];
    if ([moc isEqual:backgroundObjectContext])
    {
        // Delete the fetched-results cache if the save contained deletions
        if ([[notification.userInfo objectForKey:NSDeletedObjectsKey] count]) {
            [NSFetchedResultsController deleteCacheWithName:nil];
        }
        // Block the background save and merge the changes before it returns
        [managedObjectContext performBlockAndWait:^{
            [managedObjectContext
                mergeChangesFromContextDidSaveNotification:notification];
        }];
    }
}
There is a pitfall no one has noticed: you can get the save notification before the background context has actually saved the object you want to merge. To avoid a faster main context asking for an object that the background context has not finished saving, you should (you really should) call obtainPermanentIDsForObjects:error: before any background save. Then it is safe to call mergeChangesFromContextDidSaveNotification:, and the merge receives valid permanent IDs.
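A minimal sketch of that precaution in the background context's save path (the context name is illustrative):

```objc
[backgroundObjectContext performBlock:^{
    // Promote temporary IDs to permanent ones *before* saving, so the
    // main context never sees a temporary ID via the merge notification.
    NSArray *inserted = [[backgroundObjectContext insertedObjects] allObjects];
    NSError *error = nil;
    if (![backgroundObjectContext obtainPermanentIDsForObjects:inserted
                                                         error:&error]) {
        NSLog(@"Could not obtain permanent IDs: %@", error);
    }
    if (![backgroundObjectContext save:&error]) {
        NSLog(@"Background save failed: %@", error);
    }
}];
```

With permanent IDs assigned up front, any object the main context resolves out of the did-save notification points at a row that actually exists in the store.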

Multiple Managed Object Context

I have seen people use many managed object contexts, but aside from when using the undo manager, what is the real reason for using multiple NSManagedObjectContexts? Why can it be useful to use more than one? Could you please show a few examples?
Managed object contexts are not thread-safe, so if you ever need to do any kind of background work with your Core Data objects (e.g. a long-running import/export without blocking the main UI), you will want to do it on a background thread.
In these cases you will need to create a new managed object context on the background thread, run your Core Data operation there, and then notify the main context of your changes.
You can find an example of how this could work here: Core Data and threads / Grand Central Dispatch

Core Data executeFetchRequest slow

As the title states, executeFetchRequest on Core Data is slow "some times", and it can even block the UI.
I have a suspicion it is because another thread is saving stuff into Core Data, which prevents me from executing the fetch.
I can't save data on a background thread and then execute the fetch, since I would risk getting outdated data, right?
How would I resolve this?
This page is a wonderful explanation of how to improve the design of your core data stack.
http://www.cocoanetics.com/2012/07/multi-context-coredata/
Essentially, the gist of it is that you have a background context (NSPrivateQueueConcurrencyType) that interacts with your persistent store coordinator. This means all of the expensive disk-writing operations take place in the background, leaving your main thread unblocked. You then have your main NSManagedObjectContext that handles most of the Core Data interactions. Lastly, whenever you are importing lots of new records or doing a lot of processing, you can create a child context and set its parent to be the main context. That way, when you save the child, the changes are pushed up to the main context; later, the main context saves and the background context writes the changes to disk.
Personally, I feel like this is an extremely elegant solution and I adopted it in one of my apps and it has been working exceptionally well.
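As a hedged outline of that three-tier stack (not the linked article's exact code; context names and the coordinator are illustrative):

```objc
// 1. Private writer context owns the coordinator and does the disk I/O.
NSManagedObjectContext *writerMOC = [[NSManagedObjectContext alloc]
    initWithConcurrencyType:NSPrivateQueueConcurrencyType];
writerMOC.persistentStoreCoordinator = coordinator;

// 2. Main-queue context for UI work is a child of the writer.
NSManagedObjectContext *mainMOC = [[NSManagedObjectContext alloc]
    initWithConcurrencyType:NSMainQueueConcurrencyType];
mainMOC.parentContext = writerMOC;

// 3. Temporary private-queue import context is a child of the main context.
NSManagedObjectContext *importMOC = [[NSManagedObjectContext alloc]
    initWithConcurrencyType:NSPrivateQueueConcurrencyType];
importMOC.parentContext = mainMOC;

[importMOC performBlock:^{
    // ... import records in this scratch context ...
    [importMOC save:NULL];          // push changes up into mainMOC
    [mainMOC performBlock:^{
        [mainMOC save:NULL];        // push changes up into writerMOC
        [writerMOC performBlock:^{
            [writerMOC save:NULL];  // write to disk off the main thread
        }];
    }];
}];
```

Each save only crosses one level of the hierarchy, so the main thread does an in-memory save while the slow SQLite write happens on the writer's private queue.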
