I have read several tutorials that recommend using two (or more) NSManagedObjectContexts when implementing Core Data, so as not to block the UI on the main queue. I am a little confused, however, because some recommend making the child of the persistent store coordinator a context of type mainQueueConcurrencyType and giving it its own child of type privateQueueConcurrencyType, while others suggest the opposite.
I would personally think the best setup for using two contexts would be persistent store coordinator -> privateQueueConcurrencyType -> mainQueueConcurrencyType, saving only to the private context and reading only from the main context. My understanding of the benefits of this setup is that saves on the private context don't have to go through the main context, and that reads on the main context will always include the changes made on the private context.
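Concretely, the stack I have in mind would look something like this (just a sketch; the persistent store coordinator is assumed to already be configured as coordinator):

    // Sketch of the proposed stack: coordinator -> private writer context -> main UI context.
    let writerContext = NSManagedObjectContext(concurrencyType: .privateQueueConcurrencyType)
    writerContext.persistentStoreCoordinator = coordinator // assumed to exist

    let mainContext = NSManagedObjectContext(concurrencyType: .mainQueueConcurrencyType)
    mainContext.parent = writerContext // the UI reads here; saves funnel through the private parent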
I know that many apps require a unique solution for which this setup might not work, but as a general best practice, does this make sense?
Edit:
Some people have pointed out that this setup isn't necessary since the introduction of NSPersistentContainer. The reason I am asking is that I've inherited a huge project at work that uses a pre-iOS-10 setup, and it's experiencing issues.
I am open to rewriting our Core Data stack using NSPersistentContainer, but I wouldn't be comfortable spending the time on it unless I could find an example of how it should be set up with respect to our use cases ahead of time.
Here are the steps that most of our main use cases follow:
1) The user edits an object (e.g. adds a photo/text to an abstract object).
2) An object (a sync task) is created to encapsulate an API call that updates the edited object on the server. Sync tasks are saved to Core Data in a queue and fire one after the other, and only when an internet connection is available (thus allowing offline editing).
3) The edited object is also immediately saved to Core Data and returned to the user so that the UI reflects the updates.
With NSPersistentContainer, would doing all the writing in performBackgroundTask and all the viewing on viewContext suffice for the above use cases?
Since iOS 10 you don't need to worry about any of this; just use the contexts NSPersistentContainer provides for you.
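For the use cases above, that boils down to something like the sketch below (the model name "Model" and the "SyncTask" entity are placeholders, and error handling is omitted):

    import CoreData

    let container = NSPersistentContainer(name: "Model") // placeholder model name
    container.loadPersistentStores { _, error in
        if let error = error { fatalError("Failed to load store: \(error)") }
    }
    // Let the UI context pick up background saves automatically.
    container.viewContext.automaticallyMergesChangesFromParent = true

    // Writes (the edited object and its sync task) go through a background context:
    container.performBackgroundTask { context in
        // ... create or update managed objects here ...
        try? context.save()
    }

    // Reads for the UI stay on viewContext (main queue):
    let request = NSFetchRequest<NSManagedObject>(entityName: "SyncTask") // hypothetical entity
    let pendingTasks = try? container.viewContext.fetch(request)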
Related
Background story
I am developing a big iOS app. The app works under specific assumptions, the main one being that it should work offline with internal storage that is a snapshot of the last synchronized state of the data saved on the server. I decided to use Core Data to handle this storage. Every time the app launches I check whether a WiFi connection is available and then try to synchronize the storage with the server. The synchronization can take about 3 minutes because of the size of the data.
The synchronization process consists of several stages and in each of them I:
fetch some data from the server (XML)
deserialize it
save it in Core Data
Problem
The synchronization process can be interrupted for several reasons (internet connection, server down, user leaving the application, etc.). This may leave the data out of sync.
Let's assume that the synchronization process has 5 stages and it breaks after the third. The result is that 3/5 of the data has been updated in the internal storage and the rest is out of sync. I can't allow that, because the data are strongly connected to each other (business logic).
Goal
I don't know if it is possible, but I'm thinking about the following solution. At the start of the synchronization process I would like to create a snapshot (some kind of copy) of the current state of Core Data and work on it during the synchronization. When the synchronization completes successfully, this snapshot would overwrite the current Core Data state. If the synchronization is interrupted, the snapshot can simply be discarded. That way my internal storage stays consistent.
Questions
How do I create a Core Data snapshot?
How do I work with a Core Data snapshot?
How do I overwrite the Core Data state with the snapshot?
Thanks in advance for any help. Code examples, if possible, would be appreciated.
EDIT 1
The size of the data is too big to handle with multiple Core Data contexts. During synchronization I save the current context multiple times to free memory. If I do not, the application crashes with a memory error.
I think it should be resolved with multiple NSPersistentStoreCoordinators, using for example this method: link. Unfortunately, I don't know how to implement this.
You should do exactly what you said. Just create a class (let's call it SyncBuffer) with "load", "sync" and "save" methods.
The "load" method should read all entities from Core Data and store them in class variables.
The "sync" method should perform the whole synchronisation using the class variables.
Finally, the "save" method should save all values from the class variables back to Core Data; here you can even remove all data from Core Data and save brand new values from the SyncBuffer.
A Core Data stack is composed, at its core, of three components: a context (NSManagedObjectContext), a model (NSManagedObjectModel) and the store coordinator (NSPersistentStoreCoordinator/NSPersistentStore).
What you want is two different contexts that share the same model but use two different stores. The stores themselves will be of the same type (i.e. an SQLite db) but use different source files.
At this page you can see some documentation about the stack:
https://developer.apple.com/library/archive/documentation/Cocoa/Conceptual/CoreData/InitializingtheCoreDataStack.html#//apple_ref/doc/uid/TP40001075-CH4-SW1
NSPersistentContainer is a convenience class to initialise the Core Data stack.
Take the example of the initialisation of an NSPersistentContainer from that link: you can use exactly the same code to initialise it twice, with the only difference being that the two NSPersistentContainers use different .sqlite files. For instance, you could have two properties in your app delegate, called managedObjectContextForUI and managedObjectContextForSyncing, that load different .sqlite files. In your program you then use the context from one store to get the current data to show to the user, and the context backed by the other .sqlite file for the sync operations. When the sync operations are finally done, you can swap the two files and, after clearing and reloading the NSPersistentContainer (this might be tricky, because you will want to invalidate and reload all managed objects: you are switching to an entirely new context), show the newly synced data to the user and start syncing again on a new .sqlite file.
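A minimal sketch of that idea, assuming the model is called "Model" and using placeholder file names:

    import CoreData

    // Two containers over the same model, backed by different .sqlite files.
    func makeContainer(storeFileName: String) -> NSPersistentContainer {
        let container = NSPersistentContainer(name: "Model") // placeholder model name
        let url = NSPersistentContainer.defaultDirectoryURL().appendingPathComponent(storeFileName)
        container.persistentStoreDescriptions = [NSPersistentStoreDescription(url: url)]
        container.loadPersistentStores { _, error in
            if let error = error { fatalError("Failed to load \(storeFileName): \(error)") }
        }
        return container
    }

    let uiContainer = makeContainer(storeFileName: "Current.sqlite")   // what the UI reads
    let syncContainer = makeContainer(storeFileName: "Syncing.sqlite") // what the sync writes
    // When a sync finishes successfully, swap the files on disk and rebuild the containers.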
The way I understand the problem is that you wish to be able to download a large "object graph". It is, however, so large that it cannot be loaded into memory at once, so you would have to break it into chunks and then merge it locally into Core Data.
If that is the case, I think it's not trivial. I am not sure I can think of a direct solution without understanding the object relations, and even then it may be really overwhelming.
An overly simplistic solution may be to generate the sqlite file on the backend and then download it in chunks; it seems ugly, but it serves to separate the business logic from the sync, i.e. the sqlite file becomes the transport layer. So, I think the essence of the solution would be to find a way to physically represent the data you are syncing in a format that allows splitting it into chunks and that can afterwards be merged into an sqlite file (if you insist on using Core Data).
Please also note that, as far as I know, Amazon (https://aws.amazon.com/appsync/) and Realm (https://realm.io/blog/introducing-realm-mobile-platform/) provide background sync of your local database, but those are paid services and you would have to be careful not to get locked in (don't depend on their libs in your model layer; have a translation layer instead).
I haven't ever found an answer to this when reading developer documentation.
When using main and private queue contexts in Core Data, is it a good strategy to use global NSPrivateQueueConcurrencyType and NSMainQueueConcurrencyType contexts that I can access across my app for its entire lifetime?
Or, should I be creating a new instance each time I need to use a NSManagedObjectContext?
I have used this documentation but it doesn't answer the question.
In most cases the current best practice is to start with NSPersistentContainer. Its methods point to good practices for dealing with managed object contexts.
NSPersistentContainer has a viewContext property, which uses main queue concurrency. As its name implies, it's good for use directly with the UI, on the main queue. Use this context for those cases; don't create new main queue contexts.
It also has a couple of ways to do background work on private queues, via newBackgroundContext() and performBackgroundTask. In most cases you can use either of these when you need to do background work, and not bother keeping a reference to a long-lived background context. One caveat is that since they use separate background queues, it's possible for one background context to be executing at the same time as another. If that seems possible in your case, you might want to hold onto a background context to avoid that possibility. Otherwise your background contexts might need to merge changes made on other background contexts, which can get ugly fast.
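For example, something along these lines (just a sketch; container is assumed to be your NSPersistentContainer and the payload shape is only an example):

    // Keep one long-lived background context so background work is serialized on a single queue.
    let backgroundContext = container.newBackgroundContext()

    func performImport(_ payloads: [[String: Any]]) {
        backgroundContext.perform {
            // ... insert/update managed objects from `payloads` ...
            try? backgroundContext.save()
        }
    }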
There are exceptions to all of the above, but this is a good starting point. If this doesn't fit with your app for some reason, come back with another question detailing why.
I would recommend writing on a different context and then merging the changes back.
As a good practice I can recommend the setup MagicalRecord uses.
Specifically, it uses a default context as a child of a root saving context. All writes go into a new context and then get merged into the root context.
This way the default context can be used on the main thread and gets proper update notifications, e.g. for use with an NSFetchedResultsController.
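In plain Core Data terms that stack looks roughly like this (an approximation only; MagicalRecord wires this up and handles the merging into the default context for you, and coordinator is assumed to exist):

    // Root saving context: private queue, talks to the persistent store coordinator.
    let rootSavingContext = NSManagedObjectContext(concurrencyType: .privateQueueConcurrencyType)
    rootSavingContext.persistentStoreCoordinator = coordinator

    // Default context: main queue, child of the root, used by the UI / NSFetchedResultsController.
    let defaultContext = NSManagedObjectContext(concurrencyType: .mainQueueConcurrencyType)
    defaultContext.parent = rootSavingContext

    // Each write happens on its own short-lived private child context.
    let writeContext = NSManagedObjectContext(concurrencyType: .privateQueueConcurrencyType)
    writeContext.parent = rootSavingContext
    writeContext.perform {
        // ... make changes ...
        try? writeContext.save() // pushes the changes up into the root saving context
        rootSavingContext.perform {
            try? rootSavingContext.save() // persists to disk; MagicalRecord then updates the default context
        }
    }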
I've been looking for posts related to this scenario, but I don't have a clear idea of how I should manage it: I have a context that could hold several (maybe quite a lot of) managed objects that the application may be using to perform operations, or that the user may even be editing, and meanwhile I can receive updates to those objects from a service. Updating the objects while the user is editing them, or while the app is using them for operations and calculations, could be a problem, as could saving the context for a received update. I somehow need to "block" the objects being used when I concurrently need to save the updates I receive.
I hope I'm explaining the scenario clearly... how could/should I manage it?
What you want to do is handle server updates on a child context as defined in the latest Core Data Programming Guide. Then set the merge policy on your main queue context to whatever makes sense for your business logic.
From there you let Core Data handle the merges. That is one of the primary features of Core Data.
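As a sketch, assuming an NSPersistentContainer-style stack (the policy choices here are only examples; pick whatever matches your rules):

    let viewContext = container.viewContext
    viewContext.automaticallyMergesChangesFromParent = true
    // e.g. let in-memory (user) edits win over incoming store values on the main context:
    viewContext.mergePolicy = NSMergeByPropertyObjectTrumpMergePolicy

    container.performBackgroundTask { context in
        // Server updates land here; let store (or object) values win as you prefer:
        context.mergePolicy = NSMergeByPropertyStoreTrumpMergePolicy
        // ... apply server changes ...
        try? context.save()
    }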
We have been trying to debug a Core Data multiple-context/threading issue wherein merging a Core Data save notification into our main thread NSManagedObjectContext is sporadically crashing the app. This is crashing ~2% of our app sessions and we are at a loss as to how to solve this. We would really appreciate any guidance or general advice on what could possibly cause this crash.
We have a Core Data setup that looks like this:
N.B. This is the default Core Data stack in Magical Record v2.3 created from [MagicalRecord setupAutoMigratingCoreDataStack]
This is the scenario where our app is crashing:
HTTP request returns JSON
JSON is parsed into NSManagedObjects (Some new entities, some updated entities) on Root Saving Context
Root Saving Context saves to persistent store
NSManagedObjectContextDidSaveNotification is broadcast by Core Data. The default context on the main queue observes this and calls mergeChangesFromContextDidSaveNotification: with the NSDictionary of changes on the main thread.
It crashes when objectID is sent to an invalid object (most likely an NSManagedObject that has been deallocated).
This occurs inside the private implementation of NSManagedObjectContext's mergeChangesFromContextDidSaveNotification:, so it is impossible for us to see what has actually gone wrong; all we can tell at this point is that an object which should exist does not.
This only happens on a small percentage of Core Data saves, which suggests it may not be a fundamental flaw in our Core Data → API stack. Moreover, there is no indication that the size or type of the changes (insertions/updates/deletions) in the context has any impact on the likelihood of the crash.
The documentation of NSManagedObjectContextDidSaveNotification says that:
"You can pass the notification object to mergeChangesFromContextDidSaveNotification: on another thread, however you must not use the managed object in the user info dictionary directly on another thread. For more details, see Concurrency with Core Data in Core Data Programming Guide."
Maybe this is the issue? I would make sure the objects you get from the notification are merged into the default context on the same thread the notification was posted from by the root context.
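One common, thread-safe way to do that merge is to wrap it in the default context's own perform block (a sketch; rootSavingContext and defaultContext are assumed to already exist):

    // Merge the root context's saves into the default (main queue) context on its own queue.
    let observer = NotificationCenter.default.addObserver(
        forName: .NSManagedObjectContextDidSave,
        object: rootSavingContext,
        queue: nil
    ) { notification in
        defaultContext.perform {
            defaultContext.mergeChanges(fromContextDidSave: notification)
        }
    }
    // Keep `observer` around and remove it when the stack is torn down.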
It's been some time now since this question was posted and after rediscovering it I'd like to answer my own question for the sake of others who find this thread.
In my case I had migrated a large code base from sibling NSManagedObjectContexts updated via NSManagedObjectContextDidSaveNotifications. However, the problem did not really have anything to do with this, even though it did expose the issue.
The real cause was older parts of the code, set up by previous engineers, that had set up KVO on NSManagedObjects and their properties. It transpired that KVO on Core Data entities is in fact a very, very bad idea.
More accurately, it appeared that this happened when KVO was set up on entities and either the object, or the target of a relationship on the object, was deleted from the NSPersistentStore. This second condition did not seem to be the only cause of the issue, but it was definitely a very prominent one in my situation.
Lessons learnt:
Use a fetched results controller when you need one. KVO is not a convenient shortcut, and you shouldn't put off migrating dodgy Core Data KVO code to NSFetchedResultsControllers or another sensible alternative; the procrastination will just hurt you.
Multi-threaded Core Data is a difficult but very worthwhile skill to become an expert in. Knowing your Core Data stack and the nuances and limitations of Core Data multithreading is absolutely worth all the mental anguish.
One possibility is that your persistent store has become corrupted and is in an inconsistent state. If this happens an error code is generated which Magical Record does not necessarily deal with. This can be the source of a number of difficult-to-repeat apparently-random crashes related to Magical Record (and may or may not be considered a Magical Record bug).
It's worth reading the Magical Record issues threads here (same issue) and here (different issue, but could be similar cause). When I hit these problems I managed to make some temporary patch fixes following various hints in those threads, but ultimately I decided to remove my dependency on Magical Record, and I have had no problems since then.
I am developing an application that uses Core Data for internal storage. This application has the following functionalities :
Synchronize data with a server by downloading and parsing a large XML file, then saving the entries with Core Data.
Allow the user to run (large) fetches and CRUD operations.
I have read through a lot of documentation showing that there are several patterns to follow in order to use multithreading with Core Data:
Nested contexts: this pattern seems to have many performance issues (child contexts block their ancestors when making fetches).
Use one main thread context and background worker contexts.
Use a single context (main thread context) and apply multithreading with GCD.
I tried the three approaches mentioned and realized that the last two work fine. However, I am not sure whether these approaches are correct in terms of performance.
Is there a well-known, performant pattern to apply in order to build a robust application that implements the mentioned functionalities?
rokridi,
In my Twitter iOS apps, Retweever and #chat, I use a simple two-MOC model. All database insertions and deletions take place on a private concurrent insertionMOC. The main MOC merges changes via -save: notifications from the insertionMOC and, during merge processing, emits a custom UI update notification. This lets me work in a staged fashion. All tweets coming into the app are processed in the background and presented to the UI when everything is done.
If you download the apps, #chat's engine has been modernized and is more efficient and more isolated from the main thread than Retweever's engine.
Anon,
Andrew
Apple recommends using a separate context for each thread.
"The pattern recommended for concurrent programming with Core Data is thread confinement: each thread must have its own entirely private managed object context. There are two possible ways to adopt the pattern: Create a separate managed object context for each thread and share a single persistent store coordinator. This is the typically-recommended approach. Create a separate managed object context and persistent store coordinator for each thread. This approach provides for greater concurrency at the expense of greater complexity (particularly if you need to communicate changes between different contexts) and increased memory usage."
See the Apple documentation.
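In modern terms (queue confinement rather than raw threads), the first option looks roughly like this (a sketch; the shared coordinator is assumed to exist):

    // One context per queue, all sharing a single persistent store coordinator.
    let importContext = NSManagedObjectContext(concurrencyType: .privateQueueConcurrencyType)
    importContext.persistentStoreCoordinator = coordinator

    importContext.perform {
        // All work with this context stays confined to its own private queue.
        // ... import, parse and save here ...
        try? importContext.save()
    }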
As per the Apple documentation, use thread confinement to support concurrency:
Create one managed object context per thread. It will make your life easy. This applies when you are parsing large data in the background while also fetching data on the main thread to display in the UI.
Regarding the merging issue, there are a couple of good ways to handle it.
Never pass objects between threads. Pass object IDs to the other thread if necessary and access the objects from that thread. For example, when you are saving data parsed from XML, save it on the current thread's MOC, get the object IDs, and pass those to the UI thread; on the UI thread, re-fetch the objects (see the sketch after these points).
You can also register for the did-save notification; when one MOC changes, you will be notified with a user info dictionary containing the updated objects, which you can pass to the merge-changes call on the other context.
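A minimal sketch of the object-ID approach, assuming a background parsingContext and a main-queue mainContext already exist:

    // Save on the background context, hand only object IDs to the main thread, re-fetch there.
    parsingContext.perform {
        // ... parse the XML and insert managed objects into parsingContext ...
        let newObjects = Array(parsingContext.insertedObjects) // keep references before saving
        try? parsingContext.save()                             // saving makes their IDs permanent
        let ids = newObjects.map { $0.objectID }
        DispatchQueue.main.async {
            let uiObjects = ids.map { mainContext.object(with: $0) } // safe to use on the main thread
            // ... update the UI with uiObjects ...
        }
    }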