When to update my persistent storage (Core Data)? - iOS

I have an NSObject that I manipulate quite a lot as the user changes different things. It seems a bit crazy to update Core Data every single time there is a change; it would require a lot of coding in all these different places just for one little change.
When should I update Core Data if I want my stuff to persist? Is it a bad idea to only update it before the app closes?
Thanks

When should I update Core Data if I want my stuff to persist?
Basically, you save whenever you feel that the changes are significant enough and should be saved.
Is it a bad idea to only update it before the app closes?
If the changes are important and you don't want them to get lost, what do you think happens if the app crashes or is terminated, the battery dies, etc.? Well, all the changes get lost if you haven't saved them.
It seems a bit crazy to update Core Data every single time there is a change.
Well, the application cannot magically know which changes are important to you and should be saved and which are not. That's how it generally works for everything.
So basically, that's the guideline; the rest is up to you.
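To make that concrete, here is a minimal sketch (Swift, assuming a typical NSManagedObjectContext is in scope) of a helper that saves only when there are unsaved changes, so you can call it cheaply at whatever points you decide are significant:

    import CoreData

    // Saves the context only if something actually changed, so calling it
    // often is cheap; when nothing changed it is a no-op.
    func saveIfNeeded(_ context: NSManagedObjectContext) {
        guard context.hasChanges else { return }
        do {
            try context.save()
        } catch {
            // In a real app, handle the error properly instead of logging.
            print("Core Data save failed: \(error)")
        }
    }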

Related

NSAsynchronousFetchResult after the app DidEnterBackground

I have a client app with Core Data as its back end. It's simple enough, with two entities.
UPDATE: Using CloudKit as the sync service. I'm not really sure what is going on there, except that I can query and get results in case things don't work automatically. The problem is, as I noticed with most third-party sync-service providers, 95% of the time they all work. It's when I test with more than a few devices / simultaneous calls that some undesired change comes in.
This question is more about iOS and coredata than the actual syncing architecture.
There are times when there is definite sync data loss. I really can't tell when and how; that I'm still figuring out. Sometimes the initial sync takes a long time (if there's existing data) and the user might close the app (some people double-press the home button and actually close apps!).
But no matter what I do, sometimes I miss an object, sometimes an attribute.
So I saw NSAsynchronousFetchRequest, and I thought it might give me a way to check whether all the local data (Core Data) is okay, to see if there's anything missing.
Perhaps I could use a simple predicate to see if some managedObject.title == nil and fetch its identifier, collect those faulty objects, and request the data for them from the truth server? Is this a good use of NSAsynchronousFetchRequest?
If yes, when during the lifetime of the app would this be good?
I'm thinking maybe after applicationDidEnterBackground would be a good time? Then, if I do get data back, I will need a good way to manage Core Data in the background!
If not, well... I really don't know what to do then.
I'm trying to actually do this and will update with my results.
UPDATE: Question updated to reflect the use of CloudKit
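For reference, a rough sketch of the idea being described (Swift; the entity and attribute names Item, title, and identifier are assumptions, not from the question). The asynchronous fetch collects the identifiers of objects whose title is nil so they can be re-requested from the truth server:

    import CoreData

    // Assumed entity "Item" with optional "title" and "identifier" attributes.
    func findFaultyItems(in context: NSManagedObjectContext) {
        let request = NSFetchRequest<NSManagedObject>(entityName: "Item")
        request.predicate = NSPredicate(format: "title == nil")

        let asyncFetch = NSAsynchronousFetchRequest(fetchRequest: request) { result in
            // The completion block runs once the fetch finishes.
            let faulty = result.finalResult ?? []
            let identifiers = faulty.compactMap { $0.value(forKey: "identifier") as? String }
            // Ask the truth server for fresh copies of these objects (not shown).
            print("Objects needing repair: \(identifiers)")
        }

        do {
            _ = try context.execute(asyncFetch)
        } catch {
            print("Async fetch failed: \(error)")
        }
    }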

CoreData - when do I save?

I understand how to use Core Data, but I'm confused about when it's best to save the data. When the user presses the home button? On every interaction, in case the app crashes?
The reason why saving data is a separate call is so that you can batch multiple smaller changes that comprise a larger operation and save all at once, rather than saving at each step along the way.
You should save the data after each atomic operation, and never have committed data sitting only in memory for any significant period of time.
Each time the user commits a change to the data, they will expect the data to be there the next time they run the app, so it's your job to make sure it's there.
After your user submits a change to the data, your app is likely going to be waiting for the user to do something else anyway, so save the data while the user decides what to do next.
If you wait to save data in applicationDidEnterBackground, there is no guarantee that it will ever be called.
Obviously, not all data is critical, for example, data that a user has entered on a form, but hasn't submitted, is not critical. However, any submitted data is critical.
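As a hedged illustration of the batching point above (Swift, with a hypothetical Order entity and attribute names): several small edits comprise one atomic operation, but only one save is issued at the end.

    import CoreData

    // Hypothetical: mark an order as shipped. Several small edits make up
    // the logical operation, but there is only one save for all of them.
    func markShipped(_ order: NSManagedObject, in context: NSManagedObjectContext) {
        order.setValue("shipped", forKey: "status")
        order.setValue(Date(), forKey: "shippedAt")
        do {
            try context.save()   // one save for the whole operation
        } catch {
            print("Save failed: \(error)")
        }
    }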
I don't think it's a good idea to save on every interaction (honestly, your app shouldn't be able to crash on "every" interaction ;) ).
I only save in my app in
    - (void)applicationDidEnterBackground:(UIApplication *)application
In fact, you are right regarding crashes. But what if invalid data causes the crash? Then, in the worst case, you will reload that data and it will crash every time.
But to be honest, that is just an educated guess - I think it depends on how sensitive your data/app is.
edit: This answer provides a similar opinion: Saving Core Data Context before Crashing
But there is a really good point I missed:
you should save whenever the user performs critical operations
If you save in the background, you can do it very often without harming UX much. Remember, though, that you will probably need to update your UI, and that will have its impact (the merge after a save to main will be done on the main thread).
Keep your saves small (a small number of objects) so as not to stall the main thread; see the sketch below.
It is very dependent on your Core Data stack architecture.
You will want to save at critical moments, like entering the background, or on important user data/demands.
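For example, with a modern NSPersistentContainer stack (an assumption; the answer does not describe the stack), a small batch can be edited and saved on a background context, so only the merge back into the UI touches the main thread:

    import CoreData

    // Assumes container.viewContext.automaticallyMergesChangesFromParent
    // is set to true, so the merge back into the main context happens
    // automatically (on the main thread, as noted above).
    func saveSmallBatch(using container: NSPersistentContainer) {
        container.performBackgroundTask { context in
            // Keep the batch small so the eventual merge stays cheap.
            let item = NSEntityDescription.insertNewObject(forEntityName: "Item",
                                                           into: context)
            item.setValue("example", forKey: "title")
            do {
                try context.save()   // saved off the main thread
            } catch {
                print("Background save failed: \(error)")
            }
        }
    }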

Saving Core Data Context before Crashing

For example, if we hit "Stop" in Xcode, it will close the app, mimicking crash behaviour.
But if my Core Data Context hasn't been saved, when I go back, the data won't be there.
Are there any workaround for this?
Should I save the context every time a big operation is finished?
Thanks.
Based on my experience, you should decide the right granularity when you use the Core Data save mechanism.
IMHO (others may have different opinions) there is no standard to follow. My rule of thumb takes two different aspects into consideration: the user and performance.
In the first case, you should save whenever the user performs critical operations, e.g. the user has entered a lot of values in a form and hence will expect not to have to enter them again. Regarding the second aspect, save operations can impact the performance of your app. If you frequently write changes to disk, the app will be less responsive. On the contrary, keeping too many objects in memory can lead to memory warnings (which will cause Core Data to take specific actions).
A tradeoff could be using background operations to save changes, or taking advantage of the newer Core Data APIs. Obviously, the previous rules still remain valid.
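One shape that tradeoff can take is nested contexts (a sketch under assumed names; this may or may not be the API the answer had in mind): heavy edits happen on a private-queue child, and only the final push to the store runs on the parent.

    import CoreData

    // Assumed setup: a private-queue child of a main-queue context.
    func saveInBackground(mainContext: NSManagedObjectContext) {
        let child = NSManagedObjectContext(concurrencyType: .privateQueueConcurrencyType)
        child.parent = mainContext

        child.perform {
            // ...heavy edits on the child, off the main queue (elided)...
            do {
                try child.save()              // pushes changes up to mainContext
                mainContext.perform {
                    try? mainContext.save()   // writes them to the store
                }
            } catch {
                print("Child save failed: \(error)")
            }
        }
    }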

CoreData between app updates, signal a default-data refresh

When dealing with Core Data, I've run into a few problems I'm trying to nip in the bud to future-proof the system out of the gate. The simple fact of the matter is that I've never done anything like this before (worked with Core Data, that is). While I've managed to figure out how to work with it in the app, I need to know a decent practice for signaling an app between versions that default data needs to be refreshed on first launch.
So right now, in my AppDelegate, I set up my managed object context and perform a fetch request to see if there are any records at all in a particular table/entity. I only want this to happen on first launch, so I'm not constantly rewriting the contents of the DB on every launch. Anyway, it then uses object models to handle inserting data into the entities in question (there are a few).
Now, for this version of the app, it's going into the store without an API (that's a far-future thing), but between versions released to the App Store we may have to update specific information within the entities (for example: prices); again, I only want this refresh to happen on app launch. Also, the schema MIGHT change; I'm not sure if or when, but I'd like to make sure this can accommodate that just in case.
I figured versioning the Core Data model ("Add Model Version") would do the trick, setting the new model version as the active version, but when I launch the app in the simulator, nothing happens, which tells me that the data inside is being retained.
Any help towards what I should do to accommodate this would be appreciated. Thank you!
You should find the Core Data Model Versioning and Data Migration guide useful:
https://developer.apple.com/library/mac/#documentation/Cocoa/Conceptual/CoreDataVersioning/Articles/Introduction.html
You'll also probably find Method for import initial data with coredata useful.
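A common pattern for the "signal a refresh" part (a sketch with hypothetical names, not taken from either link) is to store a seed-data version in NSUserDefaults and compare it at launch, re-importing the defaults only when the app ships a newer seed:

    import Foundation

    // Hypothetical seed-data versioning: bump shippedSeedVersion in any
    // release that ships updated default data (e.g. new prices).
    let seedVersionKey = "InstalledSeedDataVersion"
    let shippedSeedVersion = 2

    func refreshSeedDataIfNeeded() {
        let defaults = UserDefaults.standard
        if defaults.integer(forKey: seedVersionKey) < shippedSeedVersion {
            importDefaultData()   // upsert shipped records into Core Data
            defaults.set(shippedSeedVersion, forKey: seedVersionKey)
        }
    }

    // Placeholder for the actual import; update existing rows in place
    // rather than blindly re-inserting, so user data survives.
    func importDefaultData() { /* elided */ }

Schema changes are a separate concern, handled by the model versioning and migration described in the guide above.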

Storing Data In Memory: Session vs Cache vs Static

A bit of backstory: I am working on a web application that requires quite a bit of time to prep / crunch data before giving it to the user to edit / manipulate. The data request takes ~15-20 secs to complete and a couple of secs to process. Once there, the user can manipulate values on the fly. Any manipulation of values will require the data to be reprocessed completely.
Update: To avoid confusion, I am only making the data call 1 time (the 15 sec hit) and then wanting to keep the results in memory so that I will not have to call it again until the user is 100% done working with it. So, the first pull will take a while, but, using Ajax, I am going to hit the in-memory data to constantly update and keep the response time to around 2 secs or so (I hope).
In order to make this efficient, I am moving the initial data into memory and using Ajax calls back to the server so that I can reduce the processing time needed to handle recalculation of this user's updates.
Here is my question: with performance in mind, what would be the best way to store this data, assuming that only one user will be working w/ the data at any given moment?
Also, the user could potentially be working in this process for a few hours. When the user is working w/ the data, I will need some kind of failsafe to save the user's current data (either in a db or in a serialized binary file) should their session be interrupted in some way. In other words, I will need a solution that has an appropriate hook to allow me to dump out the memory object's data in the case that the user gets disconnected / distracted for too long.
So far, here are my musings:
Session State - Pros: Locked to one user. Has the Session End event, which will meet my failsafe requirements. Cons: Slowest perf of my current options. It is sometimes tricky to ensure the Session End event fires properly.
Caching - Pros: Good Perf. Has access to dependencies which could be a bonus later down the line but not really useful in current scope. Cons: No easy failsafe step other than a write based on time intervals. Global in scope - will have to ensure that users do not collide w/ each other's work.
Static - Pros: Best perf. Easiest to maintain, as I can directly leverage my current class structures. Cons: No easy failsafe step other than a write based on time intervals. Global in scope - will have to ensure that users do not collide w/ each other's work.
Does anyone have any suggestions / comments on which option I should choose?
Thanks!
Update: Forgot to mention, I am using VB.Net, Asp.Net, and Sql Server 2005 to perform this task.
I'll vote for secret option #4: use the database for this. If you're talking about a 20+ second turnaround time on the data, you are not going to gain anything by trying to do this in-memory, given the limitations of the options you presented. You might as well set this up in the database (give it a table of its own, or even a separate database if the requirements are that large).
I'd go with the caching method for storing the data across page loads. You can name the cache entry you store the data in to avoid conflicts.
For tracking user-made changes, I'd go with a more old-school approach: append to a text file each time the user makes a change and then sweep that file at intervals to save changes back to DB. If you name the files based on the user/account or some other session-unique indicator then there's no issue with conflict and the app (or some other support app, which might be a better idea in general) can sweep through all such files and update the DB even if the session is over.
The first part of this can be adjusted to stagger the writes more: save changes to Session, then write those to the file at intervals, then sweep the file at larger intervals. You can tune it for performance and choose what level of possible user-change loss is acceptable.
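The journal-and-sweep pattern itself is language-agnostic; purely as an illustrative sketch (written in Swift for consistency with the blocks above, with hypothetical names), it amounts to an append-only file that a periodic sweep drains back into the database:

    import Foundation

    // Hypothetical append-only change journal, one line per user edit.
    let journalURL = URL(fileURLWithPath: NSTemporaryDirectory() + "session-changes.journal")

    // Append each change immediately, so nothing is lost with the session.
    func append(change: String) {
        let line = Data((change + "\n").utf8)
        if let handle = try? FileHandle(forWritingTo: journalURL) {
            handle.seekToEndOfFile()
            handle.write(line)
            handle.closeFile()
        } else {
            try? line.write(to: journalURL)   // first write creates the file
        }
    }

    // A periodic sweep replays the journal into the database, then clears it.
    func sweep() {
        guard let text = try? String(contentsOf: journalURL, encoding: .utf8) else { return }
        for entry in text.split(separator: "\n") {
            print("replay into DB:", entry)   // actual DB replay elided
        }
        try? FileManager.default.removeItem(at: journalURL)
    }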
Use the Session, but don't rely on it.
Simply, let the user "name" the dataset, and make a point of actively persisting it for the user, either automatically, or through something as simple as a "save" button.
You cannot rely on the session simply because it is (typically) tied to the user's browser instance. If they accidentally close the browser (click the X button, their PC crashes, etc.), then they lose all of their work, which would be nasty.
Once the user has that kind of control over the "persistent" state of the data, you can rely on the Session to keep it in memory and leverage that as a cache.
I think you've pretty much just answered your question with the pros/cons. But if you are looking for some peer validation, my vote is for the Session. Although the performance is slower (do you know by how much slower?), your processing is going to take a long time regardless. Do you think the user will know the difference between 15 seconds and 17 seconds? Both are "forever" in web terms, so go with the one that seems easiest to implement.
Perhaps a bit off topic: I'd recommend putting those long processing calls in asynchronous pages (not to be confused with AJAX's asynchronous).
Take a look at this article and ping me back if it doesn't make sense.
http://msdn.microsoft.com/en-us/magazine/cc163725.aspx
I suggest to create a copy of the data in a new database table (let's call it EDIT) as you send the initial results to the user. If performance is an issue, do this in a background thread.
As the user edits the data, update the table (also in a background thread if performance becomes an issue). If you have to use threads, you must make sure that the first thread is finished before you start updating the rows.
This allows a user to walk away, come back, even restart the browser and commit whenever she feels satisfied with the result.
One possible alternative to what the others mentioned is to store the data on the client.
Assuming the dataset is not too large and the code that manipulates it can run client side, you could store the data as an XML data island or JSON object. The data could then be manipulated/processed entirely client side with no round trips to the server. If you need to persist the data back to the server, the end result could be posted via AJAX or a standard postback.
If this does not work with your requirements I'd go with just storing it on the SQL server as the other comment suggested.
