I am working on an iOS application in which Core Data was already implemented, so I couldn't learn the Core Data implementation from scratch, but I have been able to work with Core Data while adding new features. I still have many doubts about Core Data, and I couldn't find a clear explanation in any blog.
1) Question 1 - I have set up the application's architecture so that it has a web service controller class, a web service helper class, a DatabaseManager class, UIViewController classes, and model objects as part of Core Data.
The web service controller makes the connection to the service with NSURLConnection and handles the related functionality. Once the response is received from the web service, it gives a callback to the web service helper class using blocks.
The web service helper class acts as an intermediary for making web service calls between the UIViewControllers and the web service controller. When the web service helper gets the callback from the web service controller, it sends the response back to the UIViewController, again using blocks.
My question is: what should the flow be for storing the web service response in Core Data and then updating the UI? I would like to know the best practice. Should I save the data to Core Data, then retrieve it and display it in the UI? Saving the data will take time if it is large. Should the Core Data save and the UI update happen synchronously?
2) Question 2 - I have read about Core Data concurrency in many blogs, but I am still not clear about it.
As I understand it, in order to achieve concurrency we have to create two managed object contexts, one with NSMainQueueConcurrencyType and the other with NSPrivateQueueConcurrencyType. All save and update operations then have to be executed on the private MOC (NSPrivateQueueConcurrencyType), and reads can be executed on the main MOC (NSMainQueueConcurrencyType). How does this relate to performBlock? (A rough sketch of the setup I mean follows the list of questions below.)
3) Question 3 - Since we can create multiple MOCs, should they be of NSConfinementConcurrencyType, with performBlock executed on each of them for concurrency?
4) Question 4 - What is the difference between the approaches to concurrency described in Question 2 and Question 3?
5) Question 5 - Suppose I am reading a record with Core Data and, due to concurrency, the same record has to be updated at the same time. How can this situation be handled? What I know is that I have to use a merge policy, but I am not sure how to implement it, since I am not clear about the cases above.
6) Question 6 - In an application, how many managed object contexts of type NSMainQueueConcurrencyType, NSConfinementConcurrencyType, and NSPrivateQueueConcurrencyType can be created?
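For reference, the two-context setup I mean looks roughly like this (the names and coordinator wiring are placeholders; performBlock in Objective-C corresponds to perform(_:) in Swift):

```swift
import CoreData

// Assumed: `coordinator` is an already-configured NSPersistentStoreCoordinator.
func makeContexts(coordinator: NSPersistentStoreCoordinator)
    -> (main: NSManagedObjectContext, background: NSManagedObjectContext) {

    // Main-queue context: used for reads that feed the UI.
    let mainMOC = NSManagedObjectContext(concurrencyType: .mainQueueConcurrencyType)
    mainMOC.persistentStoreCoordinator = coordinator

    // Private-queue context: used for saves/updates off the main thread.
    let privateMOC = NSManagedObjectContext(concurrencyType: .privateQueueConcurrencyType)
    privateMOC.persistentStoreCoordinator = coordinator

    return (mainMOC, privateMOC)
}

// perform(_:) (performBlock in Objective-C) runs the closure on the context's own
// queue, which is how the queue-based concurrency types keep access to it safe.
func saveInBackground(on privateMOC: NSManagedObjectContext) {
    privateMOC.perform {
        // ... insert or update managed objects here ...
        do { try privateMOC.save() } catch { print("Save failed: \(error)") }
    }
}
```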
Can anyone answer the above questions?
Thanks in advance.
This really should be several separate questions. I will attempt to answer the architecture question, and perhaps touch on some of the others.
The return path from the web service should not reach any view controllers directly. The point where your service helper has parsed the response and validated it is where you want to save to core data. This task should be handed off to another class.
From the view controller side, you want to use NSFetchedResultsController (FRC) instances to know when the model has changed. You can set up an FRC to watch any number of objects, including a single object.
FRCs were intended for table views, and there are numerous examples available on how to use them for that purpose. If you have a view where you are editing a single object and you use the web service to save updates, for example, you can have an FRC that is watching the edited object. When the save is complete, the FRC will trigger and you can update the UI to indicate success, or whatever.
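As a rough sketch (Swift, with a made-up Note entity and attribute names standing in for your model), an FRC watching a single edited object could look like this:

```swift
import CoreData
import UIKit

// Hypothetical entity; in a real project this subclass is generated from the model.
final class Note: NSManagedObject {
    @NSManaged var identifier: String
    @NSManaged var title: String
}

final class NoteDetailViewController: UIViewController, NSFetchedResultsControllerDelegate {
    var context: NSManagedObjectContext!   // main-queue context
    var noteID: String!                    // identifier of the object being edited

    private lazy var frc: NSFetchedResultsController<Note> = {
        let request = NSFetchRequest<Note>(entityName: "Note")
        request.predicate = NSPredicate(format: "identifier == %@", noteID)
        request.sortDescriptors = [NSSortDescriptor(key: "identifier", ascending: true)]
        let frc = NSFetchedResultsController(fetchRequest: request,
                                             managedObjectContext: context,
                                             sectionNameKeyPath: nil,
                                             cacheName: nil)
        frc.delegate = self
        return frc
    }()

    override func viewDidLoad() {
        super.viewDidLoad()
        try? frc.performFetch()
    }

    // Fires whenever the watched object changes, e.g. after a background save merges in.
    func controllerDidChangeContent(_ controller: NSFetchedResultsController<NSFetchRequestResult>) {
        // Refresh the UI, show a "saved" indicator, etc.
    }
}
```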
Core Data
Core Data concurrency is not trivial, as you've discovered. I've had the best experience with the following setup:
A read-only context with NSMainQueueConcurrencyType. This is the initial context that is tied to the persistent store. This context remains for the entire session.
An NSOperationQueue with a concurrency of 1. Operations on this queue clone the main (read-only) context with a concurrency type of NSConfinementConcurrencyType, and are connected to the same store. Only these cloned contexts are allowed to save. These contexts are discarded when the operation is complete.
A merge handler that will merge changes into the main context.
Operations execute on background threads and are synchronous with respect to each other. This makes merges simple. Cloned contexts are set up with a merge policy of NSMergeByPropertyObjectTrumpMergePolicy, and the main context with NSMergeByPropertyStoreTrumpMergePolicy.
View controllers, and other main-thread activities, use the main context, which always exists.
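As a rough sketch of this kind of stack (names are assumed, and since the confinement type has been deprecated since iOS 9, the per-operation contexts below use a private queue instead, which keeps the same shape):

```swift
import CoreData

final class CoreDataStack {
    let coordinator: NSPersistentStoreCoordinator

    // Read-only main-queue context used by view controllers for the whole session.
    let mainContext: NSManagedObjectContext

    // Serial queue: write operations run one at a time, which keeps merges simple.
    let writeQueue: OperationQueue = {
        let queue = OperationQueue()
        queue.maxConcurrentOperationCount = 1
        return queue
    }()

    init(coordinator: NSPersistentStoreCoordinator) {
        self.coordinator = coordinator

        mainContext = NSManagedObjectContext(concurrencyType: .mainQueueConcurrencyType)
        mainContext.persistentStoreCoordinator = coordinator
        mainContext.mergePolicy = NSMergeByPropertyStoreTrumpMergePolicy

        // Merge handler: fold every background save back into the main context.
        let main = mainContext
        NotificationCenter.default.addObserver(forName: .NSManagedObjectContextDidSave,
                                               object: nil, queue: nil) { note in
            guard (note.object as? NSManagedObjectContext) !== main else { return }
            main.perform { main.mergeChanges(fromContextDidSave: note) }
        }
    }

    // Each write operation gets its own short-lived context tied to the same store.
    func performWrite(_ changes: @escaping (NSManagedObjectContext) -> Void) {
        let coordinator = self.coordinator
        writeQueue.addOperation {
            let writer = NSManagedObjectContext(concurrencyType: .privateQueueConcurrencyType)
            writer.persistentStoreCoordinator = coordinator
            writer.mergePolicy = NSMergeByPropertyObjectTrumpMergePolicy
            writer.performAndWait {
                changes(writer)
                do { try writer.save() } catch { print("Background save failed: \(error)") }
            }
        }
    }
}
```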
There are lots of other setups, including multiple, writeable siblings, parent-child relationships, etc. I recommend picking something simple, because you don't want to be fighting Core Data and threading issues at the same time.
I recommend watching this video by Paul Goracke. The inspiration for my preferred stack was taken directly from Paul's presentation.
Related
I have read several tutorials that recommend using two (or more) NSManagedObjectContexts when implementing Core Data, so as not to block the UI on the main queue. I am a little confused, however, because some recommend making the context attached to the persistent store coordinator of type mainQueueConcurrencyType and then giving it its own child context of type privateQueueConcurrencyType, while others suggest the opposite.
I would personally think the best setup for using two contexts would be persistent store coordinator -> privateQueueConcurrencyType -> mainQueueConcurrencyType, saving only to the private context and reading only from the main context. My understanding of the benefits of this setup is that saves on the private context don't have to go through the main context, and reads on the main context will always include the changes made on the private context.
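In code, the setup I have in mind would look roughly like this (Swift; assuming an already-configured coordinator):

```swift
import CoreData

// Assumed: `coordinator` is an NSPersistentStoreCoordinator that already has its store added.
func makeStack(coordinator: NSPersistentStoreCoordinator)
    -> (writer: NSManagedObjectContext, main: NSManagedObjectContext) {

    // Private-queue context attached directly to the coordinator; all saves go through it.
    let writer = NSManagedObjectContext(concurrencyType: .privateQueueConcurrencyType)
    writer.persistentStoreCoordinator = coordinator

    // Main-queue context as a child of the writer; used only for fetching/display.
    let main = NSManagedObjectContext(concurrencyType: .mainQueueConcurrencyType)
    main.parent = writer

    return (writer, main)
}
```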
I know that many apps require a unique solution that this setup might not work for, but as a general good practice, does this make sense?
Edit:
Some people have pointed out that this setup isn't necessary since the introduction of NSPersistentContainer. The reason I am asking about it is that I've inherited a huge project at work that uses a pre-iOS-10 setup, and it's experiencing issues.
I am open to rewriting our Core Data stack using NSPersistentContainer, but I wouldn't be comfortable spending the time on it unless I could first find an example of how it should be set up with respect to our use cases.
Here are the steps that most of our main use cases follow:
1) User edits an object (e.g. adds a photo/text to an abstract object).
2) An object (sync task) is created to encapsulate an API call to update the edited object on the server. Sync tasks are saved to core data in a queue to fire one after the other, and only when internet is available (thus allowing offline editing).
3) The edited object is also immediately saved to core data and then returned to the user so that the UI reflects its updates.
With NSPersistentContainer, would having all the writing done in performBackgroundTask, and all the viewing done on viewContext suffice for our needs for the above use cases?
Since iOS 10 you don't need to worry about any of this; just use the contexts NSPersistentContainer provides for you.
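For example, a minimal sketch (the model name "Model" is a placeholder):

```swift
import CoreData

// Assumed model name "Model"; adjust to your .xcdatamodeld.
let container: NSPersistentContainer = {
    let container = NSPersistentContainer(name: "Model")
    container.loadPersistentStores { _, error in
        if let error = error { fatalError("Failed to load store: \(error)") }
    }
    // Have the UI-facing context pick up background saves automatically.
    container.viewContext.automaticallyMergesChangesFromParent = true
    return container
}()

// Writes: each call gets a fresh background context on a private queue.
func saveEdit(_ apply: @escaping (NSManagedObjectContext) -> Void) {
    container.performBackgroundTask { context in
        apply(context)
        do { try context.save() } catch { print("Save failed: \(error)") }
    }
}

// Reads for the UI stay on the main-queue viewContext
// (e.g. via an NSFetchedResultsController).
let uiContext = container.viewContext
```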
So I have a flow in which a user takes a photo, adds info, and uploads it to my database. My question is: how should I store that data so it is accessible through all my controllers, and so that when the user taps the upload button, the final object is sent to the server to be added to the database? Would I use Core Data? Or something like a struct? I just want to make sure I am doing this correctly.
This is an opinion-oriented answer, and it is influenced by a developer's familiarity and comfort with the underlying concepts. So although I don't consider it a definitive answer, here is my opinion.
Should I use core data, so it is accessible through all my controllers?
Absolutely not! You don't need Core Data just to create a shared data source used by multiple view controllers simultaneously. You can create a singleton data source object and have all the view controllers access it.
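For example, a minimal sketch of such a singleton (all the type and property names are made up for illustration):

```swift
import UIKit

// Hypothetical in-memory model for the upload flow.
struct PhotoUpload {
    var image: UIImage?
    var caption: String = ""
    var location: String = ""
}

// Singleton data source shared by all view controllers in the flow.
final class UploadDraftStore {
    static let shared = UploadDraftStore()
    private init() {}

    var draft = PhotoUpload()

    func reset() { draft = PhotoUpload() }
}

// Any view controller can read or mutate the same draft:
//   UploadDraftStore.shared.draft.caption = "Sunset"
// and the final screen sends UploadDraftStore.shared.draft to the server.
```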
But then, Core Data is not just a shared data source, is it?
Core Data is a persistent data store, whereas your structs are not.
Assume the user takes a picture and quits the app before it gets uploaded, or you want to provide offline capability, where the user can take a picture without internet access, queue it for upload, and have your app send it to the server whenever connectivity returns. If you use structs and keep the data source in memory, everything the user did is lost when the app quits, and the user will not appreciate that. If you use Core Data, the data lives in the SQLite file, and you can access it whenever you need it, even if the user quits the app in between :)
NSManagedObjectContext provides performBlock and performBlockAndWait to synchronize access to Core Data in a multithreaded environment; with a plain array of structs you have to write that synchronization yourself.
There is no point in reinventing the wheel, is there? We all know data types like Array are not thread safe :) Neither is a managed object context, but from iOS 5 onwards it provides the handy performBlock and performBlockAndWait methods, which make a developer's life much easier when dealing with a shared data source (managed objects) in a multithreaded environment.
A managed object context provides notifications about changes as they happen, and it works like a charm with NSFetchedResultsController, providing a mechanism to constantly monitor and update the data source.
I don't think it's a big deal, but to achieve the same thing with an array you would have to use KVO. And because KVO won't work with plain Swift objects, you would have to rely on didSet property observers and manually post a notification to all the view controllers whenever the data source changes. Not such an elegant solution, is it? :)
Scalability and robustness:
Finally, how many records you are dealing with also matters. I was part of a company that uploads and restores thousands of images to/from users' devices. In a scenario where you are dealing with thousands of images, maintaining an array is always a pain and costly in terms of memory footprint, because the entire array is loaded all the time. NSFetchedResultsController, on the other hand, works on a page-fault mechanism: it loads data efficiently, only when needed.
Scalability is just a matter of adding new fields to the managed object entity, and robustness is directly proportional to your skill in dealing with Core Data, I believe.
Pinch of advice:
No matter whether you use an array of structs or Core Data, always store images in the local file system and keep a relative reference to the local path in your data source. Holding an entire image in memory is a really bad idea :D
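For example, a small sketch of that approach (names assumed):

```swift
import UIKit

// Writes a JPEG into the Documents directory and returns the *relative* file name
// to store in your data source (Core Data attribute or struct field).
func storeImage(_ image: UIImage) throws -> String {
    let fileName = UUID().uuidString + ".jpg"
    let documents = try FileManager.default.url(for: .documentDirectory,
                                                in: .userDomainMask,
                                                appropriateFor: nil,
                                                create: true)
    guard let data = image.jpegData(compressionQuality: 0.8) else {
        throw NSError(domain: "ImageStore", code: 1, userInfo: nil)
    }
    try data.write(to: documents.appendingPathComponent(fileName))
    return fileName
}

// To display it later, resolve the relative name against Documents again.
func loadImage(named fileName: String) -> UIImage? {
    guard let documents = try? FileManager.default.url(for: .documentDirectory,
                                                       in: .userDomainMask,
                                                       appropriateFor: nil,
                                                       create: false) else { return nil }
    return UIImage(contentsOfFile: documents.appendingPathComponent(fileName).path)
}
```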
Hope it helps.
I've been looking for posts related to this scenario, but I don't have a clear idea of how I should manage it. I have a context that could contain several (maybe quite a lot of) managed objects that the application may be using to perform operations, or that the user may even be editing, and meanwhile I can receive updates to the information in those objects from a service. Updating those objects while the user is editing them, or while the app is using them for operations and calculations, could be a problem, and so could saving the context for the received update. I need some way to "block" the objects being used when I concurrently need to save the updates I receive.
I hope I'm explaining the scenario clearly... how could/should I manage it?
What you want to do is handle server updates on a child context as defined in the latest Core Data Programming Guide. Then set the merge policy on your main queue context to whatever makes sense for your business logic.
From there you let Core Data handle the merges. That is one of the primary features of Core Data.
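A minimal sketch of that shape, assuming an NSPersistentContainer-based stack (the container's background contexts stand in for the separate update context, and the policy choice here is only an example of "whatever makes sense for your business logic"):

```swift
import CoreData

// Assumed: `container` is your app's NSPersistentContainer (or an equivalent stack).
func configure(_ container: NSPersistentContainer) {
    // Have the UI context pick up background saves automatically, and pick the
    // conflict policy that matches your business rules; this example lets
    // in-memory (user-edited) values win.
    container.viewContext.automaticallyMergesChangesFromParent = true
    container.viewContext.mergePolicy = NSMergeByPropertyObjectTrumpMergePolicy
}

// Server updates are applied and saved on a background context;
// Core Data then merges them into the view context for you.
func applyServerUpdate(_ payload: [String: Any], using container: NSPersistentContainer) {
    container.performBackgroundTask { context in
        // The freshly applied server values (in memory) win over what is on disk.
        context.mergePolicy = NSMergeByPropertyObjectTrumpMergePolicy
        // ... find or create the managed objects and copy values from `payload` ...
        do { try context.save() } catch { print("Update save failed: \(error)") }
    }
}
```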
I'm working on a program which repeatedly needs to fetch new data, parse it and store it using Core Data. One of the problems is that the data is split up over multiple web service requests and so the parsing needs to be split up in various parts before the final object is assembled. All the parsing also needs to happen in the background.
I thought about creating a new NSManagedObjectContext per request, but then the problem is that I have to find a way to pass my objects from one context to the other and that seems quite tricky to me, considering it can easily take 10 parsing steps until the object is complete.
So now I thought about using a single NSManagedObjectContext initialised with a NSPrivateQueueConcurrencyType. It seems to work fine, except that sometimes I will receive an EXC_BAD_ACCESS in one step of the flow. So my question is, am I on the right path here? I know that I can nest multiple performBlock calls and that core data will take care of the threading. But can I also use multiple non-nested performBlock calls spread over time (which is what I'm doing), as long as they are all running on the same NSManagedObjectContext?
I implemented it like this, and it turns out it works fine.
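Roughly, the shape is this (a Swift sketch with placeholder names, not the exact code); the important part is that the managed objects are only ever touched inside perform blocks on the one context:

```swift
import CoreData

// One private-queue context shared by the whole parsing pipeline.
// Assumed: `coordinator` is your configured persistent store coordinator.
func makeParseContext(coordinator: NSPersistentStoreCoordinator) -> NSManagedObjectContext {
    let moc = NSManagedObjectContext(concurrencyType: .privateQueueConcurrencyType)
    moc.persistentStoreCoordinator = coordinator
    return moc
}

// Each web service response schedules its own, non-nested perform block.
// The blocks run one after another on the context's private queue, so the
// partially built objects are only ever touched on that queue.
func handleResponse(step: Int, payload: Data, in parseContext: NSManagedObjectContext) {
    parseContext.perform {
        // ... parse `payload` and update the partially built objects ...
        if step == 10 {   // assumed final step of the flow
            do { try parseContext.save() } catch { print("Save failed: \(error)") }
        }
    }
}
```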
I am developing an application that uses Core Data for internal storage. This application has the following functionality:
Synchronize data with a server by downloading and parsing a large XML file, then save the entries with Core Data.
Allow the user to perform fetches (potentially large ones) and CRUD operations.
I have read in a lot of documentation that there are several patterns to follow in order to use multithreading with Core Data:
Nested contexts: this pattern seems to have many performance issues (child contexts block their ancestors when making fetches).
Use one main thread context and background worker contexts.
Use a single context (main thread context) and apply multithreading with GCD.
I tried the three approaches mentioned above and realized that the last two work fine. However, I am not sure whether these approaches are correct in terms of performance.
Is there a well-known, performant pattern to apply in order to build a robust application that implements the functionality described above?
rokridi,
In my Twitter iOS apps, Retweever and #chat, I use a simple two-MOC model. All database insertions and deletions take place on a private concurrent insertionMOC. The main MOC merges changes via -save: notifications from the insertionMOC, and during merge processing it emits a custom UI-update notification. This lets me work in a staged fashion: all tweets coming into the app are processed in the background and are presented to the UI when everything is done.
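The shape of that wiring, as a rough sketch (not the actual engine code; the notification name is made up):

```swift
import CoreData

extension Notification.Name {
    // Custom UI-update notification; the name is assumed for illustration.
    static let tweetsDidUpdate = Notification.Name("TweetsDidUpdate")
}

// insertionMOC (.privateQueueConcurrencyType) does all insertions/deletions;
// mainMOC (.mainQueueConcurrencyType) drives the UI. Both sit on the same coordinator.
func wireMerge(mainMOC: NSManagedObjectContext, insertionMOC: NSManagedObjectContext) {
    NotificationCenter.default.addObserver(forName: .NSManagedObjectContextDidSave,
                                           object: insertionMOC, queue: nil) { note in
        mainMOC.perform {
            // Fold the background save into the main context...
            mainMOC.mergeChanges(fromContextDidSave: note)
            // ...then tell the UI layer that new data is ready.
            NotificationCenter.default.post(name: .tweetsDidUpdate, object: nil)
        }
    }
}
```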
If you download the apps, #chat's engine has been modernized and is more efficient and more isolated from the main thread than Retweever's engine.
Anon,
Andrew
Apple recommends using a separate context for each thread.
The pattern recommended for concurrent programming with Core Data is thread confinement: each thread must have its own entirely private managed object context. There are two possible ways to adopt the pattern: create a separate managed object context for each thread and share a single persistent store coordinator (this is the typically-recommended approach), or create a separate managed object context and persistent store coordinator for each thread (this approach provides for greater concurrency at the expense of greater complexity, particularly if you need to communicate changes between different contexts, and increased memory usage).
See the Apple documentation.
As per the Apple documentation, use thread confinement to support concurrency.
Creating one managed object context per thread will make your life easier. This applies when you are parsing large data in the background while also fetching data on the main thread to display in the UI.
Regarding merging, there are some recommended approaches:
Never pass managed objects between threads; if necessary, pass object IDs to the other thread and access the objects from there. For example, when you are saving data while parsing XML, save the objects on the current thread's MOC, get their object IDs, pass those to the UI thread, and re-fetch the objects there.
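A small sketch of that hand-off (names assumed):

```swift
import CoreData

// On the background (parsing) context: save, then hand only the IDs to the UI.
func importItems(xmlItems: [[String: Any]],
                 background: NSManagedObjectContext,
                 main: NSManagedObjectContext,
                 completion: @escaping ([NSManagedObject]) -> Void) {
    background.perform {
        // ... create/update managed objects from `xmlItems` ...
        try? background.save()   // objects get permanent IDs once saved

        // In a real implementation, collect the IDs of the objects you just created;
        // registeredObjects is used here only to keep the sketch short.
        let ids = background.registeredObjects.map { $0.objectID }

        // Back on the main context (main thread): re-materialise the objects by ID.
        main.perform {
            let objects = ids.compactMap { try? main.existingObject(with: $0) }
            completion(objects)
        }
    }
}
```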
You can also register for the context-did-save notification. When one MOC changes, you will be notified, and the notification's userInfo dictionary contains the updated objects, which you can pass to the merge-changes method on the other context.