Rollback synchronization using RestKit - iOS

I am building an app with synchronization support. Since the sync requires more than one request, there is a risk that errors leave the persisted data inconsistent (e.g. if some but not all of the requests fail).
If anything goes wrong during the synchronization I want to rollback all changes so the synchronization is performed fully or not at all.
It seems RestKit saves the managed object context when it fetches data, which means I have no way of using an NSUndoManager to handle undo/rollbacks. What is the preferred way to do this? One option would be to back up the object store file (SQLite) and restore it if the synchronization fails, but that does not seem very "pure".

Related

Realm database locking?

On sync I overwrite all my local data with the server's data. For this I first call realm.delete(realm.objects(MyObj)) for all my objects. Then I save the response's objects with realm.add(obj, update: false). Everything is in a single transaction. The payload can take a while to process but it's not big enough to justify implementing pagination.
Can the user use the app normally during this process? Can they store new items that are deleted in the clearing part of the transaction, or that would trigger an error or be overwritten during the adding part? If yes how can I avoid this?
Realm uses a multiversion concurrency control (MVCC) algorithm: locks ensure exclusive write access, while other threads can keep reading previous versions of the data. We have an article on our blog which explains how that works in more depth.
Be aware that what you attempt to solve here is a non-trivial challenge.
Can they store new items that are deleted in the clearing part of the transaction, or that would trigger an error or be overwritten during the adding part?
While the background transaction is in progress, other write transactions would be blocked. If you do these writes from the main thread, you would block the main thread. If you do them from background threads, they would queue up and be executed after your sync transaction is completed.
The objects which are deleted at the beginning would become inaccessible (which you can check via invalidated), because write transactions always operate on the latest version. If your objects have consistent primary keys across your sync operations, you can use those to re-fetch them and redo all modifications on the fresh instances. But note that you need to store the primary keys (and all other object data) in memory before beginning the write transaction, which implies an implicit refresh.
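To make the re-fetch idea concrete, here is a rough Objective-C sketch (the question's code is Swift, but the idea is the same); MyObj's properties and the helper name are invented for illustration:

#import <Realm/Realm.h>

// Illustrative model; the question's MyObj presumably looks something like this.
@interface MyObj : RLMObject
@property NSString *uid;
@property NSString *name;
@end

@implementation MyObj
+ (NSString *)primaryKey { return @"uid"; }
@end

// Re-fetch a fresh instance by primary key inside a new write transaction,
// because instances held from before the sync transaction may be invalidated.
static void ApplyLocalEdit(NSString *uid, NSString *newName) {
    RLMRealm *realm = [RLMRealm defaultRealm];
    [realm transactionWithBlock:^{
        MyObj *fresh = [MyObj objectInRealm:realm forPrimaryKey:uid];
        if (fresh) {
            fresh.name = newName;   // safe: this is the latest version
        }
        // If the sync deleted the object and did not re-add it, `fresh` is nil;
        // decide here whether to recreate it or drop the pending edit.
    }];
}

This write transaction will simply queue up behind the sync transaction, which is exactly the blocking behaviour described above.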

Core Data with REST API [duplicate]

Hey, I'm working on the model layer for our app here.
Some of the requirements are like this:
It should work on iPhone OS 3.0+.
The source of our data is a RESTful Rails application.
We should cache the data locally using Core Data.
The client code (our UI controllers) should have as little knowledge about any network stuff as possible and should query/update the model with the Core Data API.
I've checked out the WWDC10 Session 117 on Building a Server-driven User Experience, spent some time checking out the Objective Resource, Core Resource, and RestfulCoreData frameworks.
The Objective Resource framework doesn't talk to Core Data on its own and is merely a REST client implementation. The Core Resource and RestfulCoreData all assume you talk to Core Data in your code and they solve all the nuts and bolts in the background on the model layer.
All looks okay so far, and initially I thought either Core Resource or RestfulCoreData would cover all of the above requirements, but... there are a couple of things neither of them seems to solve correctly:
The main thread should not be blocked while saving local updates to the server.
If the saving operation fails the error should be propagated to the UI and no changes should be saved to the local Core Data storage.
Core Resource happens to issue all of its requests to the server when you call - (BOOL)save:(NSError **)error on your managed object context and is therefore able to provide a correct NSError instance if the underlying requests to the server fail somehow. But it blocks the calling thread until the save operation finishes. FAIL.
RestfulCoreData keeps your -save: calls intact and doesn't introduce any additional waiting time for the client thread. It merely watches for the NSManagedObjectContextDidSaveNotification and then issues the corresponding requests to the server in the notification handler. But this way the -save: call always completes successfully (well, given Core Data is okay with the saved changes) and the client code that actually called it has no way of knowing the save might have failed to propagate to the server because of some 404 or 421 or whatever server-side error occurred. Even worse, the local storage ends up with the updated data, but the server never knows about the changes. FAIL.
So, I'm looking for a possible solution / common practices in dealing with all these problems:
I don't want the calling thread to block on each -save: call while the network requests happen.
I want to somehow get notifications in the UI that some sync operation went wrong.
I want the actual Core Data save to fail as well if the server requests fail.
Any ideas?
You should really take a look at RestKit (http://restkit.org) for this use case. It is designed to solve the problems of modeling and syncing remote JSON resources to a local Core Data backed cache. It supports an offline mode for working entirely from the cache when there is no network available. All syncing occurs on a background thread (network access, payload parsing, and managed object context merging) and there is a rich set of delegate methods so you can tell what is going on.
There are three basic components:
The UI Action and persisting the change to CoreData
Persisting that change up to the server
Refreshing the UI with the response of the server
An NSOperation + NSOperationQueue will help keep the network requests orderly. A delegate protocol will help your UI classes understand what state the network requests are in, something like:
@protocol NetworkOperationDelegate
- (void)operation:(NSOperation *)op willSendRequest:(NSURLRequest *)request forChangedEntityWithId:(NSManagedObjectID *)entity;
- (void)operation:(NSOperation *)op didSuccessfullySendRequest:(NSURLRequest *)request forChangedEntityWithId:(NSManagedObjectID *)entity;
- (void)operation:(NSOperation *)op encounteredAnError:(NSError *)error afterSendingRequest:(NSURLRequest *)request forChangedEntityWithId:(NSManagedObjectID *)entity;
@end
The protocol format will of course depend on your specific use case but essentially what you're creating is a mechanism by which changes can be "pushed" up to your server.
Next there's the UI loop to consider: to keep your code clean it would be nice to call save: and have the changes automatically pushed up to the server. You can observe NSManagedObjectContextDidSaveNotification for this.
- (void)managedObjectContextDidSave:(NSNotification *)saveNotification {
    NSSet *inserted = [[saveNotification userInfo] objectForKey:NSInsertedObjectsKey];
    for (NSManagedObject *obj in inserted) {
        // create a new NSOperation for this entity which will invoke the appropriate REST API
        // and add it to the operation queue
    }
    // do the same thing for the deleted (NSDeletedObjectsKey) and updated (NSUpdatedObjectsKey) objects
}
The computational overhead of creating the network operations should be rather low; however, if it creates a noticeable lag in the UI, you could simply grab the entity IDs out of the save notification and create the operations on a background thread.
If your REST API supports batching, you could even send the entire array across at once and then notify your UI that multiple entities were synchronized.
The only issue I foresee, and for which there is no "real" solution, is that the user will not want to wait for their changes to be pushed to the server before being allowed to make more changes. The only good paradigm I have come across is to let the user keep editing objects and batch their edits together when appropriate, i.e. you do not push on every save notification.
This becomes a sync problem and not an easy one to solve. Here's what I'd do: in your iPhone UI use one context, and then use another context (on another thread) to download the data from your web service. Once it's all there, go through the sync/importing processes recommended in the links below and then refresh your UI after everything has imported properly. If things go bad while accessing the network, just roll back the changes in the non-UI context (a rough sketch of this two-context setup follows the links). It's a bunch of work, but I think it's the best way to approach it.
Core Data: Efficiently Importing Data
Core Data: Change Management
Core Data: Multi-Threading with Core Data
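Since the question targets iPhone OS 3.0 (so no parent/child contexts), a rough, untested sketch of the two-context approach using thread confinement might look like this; downloadAndImportIntoContext:error: stands in for your own download and import code:

- (void)startSyncWithCoordinator:(NSPersistentStoreCoordinator *)coordinator {
    [self performSelectorInBackground:@selector(runSyncWithCoordinator:) withObject:coordinator];
}

- (void)runSyncWithCoordinator:(NSPersistentStoreCoordinator *)coordinator {
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];   // pre-ARC era

    // Second, thread-confined context that never touches the UI context.
    NSManagedObjectContext *syncContext = [[NSManagedObjectContext alloc] init];
    [syncContext setPersistentStoreCoordinator:coordinator];

    NSError *error = nil;
    BOOL imported = [self downloadAndImportIntoContext:syncContext error:&error]; // placeholder

    if (imported && [syncContext save:&error]) {
        // On success the UI context merges the changes by observing
        // NSManagedObjectContextDidSaveNotification on the main thread.
    } else {
        // On failure nothing reaches the persistent store; just discard the work.
        [syncContext rollback];
        // Report `error` back to the main thread however you prefer.
    }

    [syncContext release];
    [pool drain];
}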
You need a callback function that runs on the other thread (the one where the actual server interaction happens) and then puts the result code/error info into a semi-global data structure that is periodically checked by the UI thread. Make sure that the writing of the number that serves as the flag is atomic, or you are going to have a race condition: say your error response is 32 bytes, then you need an int (which should have atomic access), and you keep that int in the off/false/not-ready state until the larger data block has been written, and only then write "true" to flip the switch, so to speak.
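To illustrate the "write the payload first, then atomically flip the flag" idea, here is a small sketch using the OSAtomic primitives of that era; the struct layout and names are invented:

#import <libkern/OSAtomic.h>
#include <string.h>

typedef struct {
    char errorMessage[32];    // written first, on the network thread
    volatile int32_t ready;   // flipped last; the UI thread polls this
} SyncResult;

static SyncResult gSyncResult;

// Network thread: fill in the payload, then publish it with a memory barrier.
static void PublishResult(const char *message) {
    strncpy(gSyncResult.errorMessage, message, sizeof(gSyncResult.errorMessage) - 1);
    OSAtomicCompareAndSwap32Barrier(0, 1, &gSyncResult.ready);
}

// UI thread (e.g. from a periodic timer): only read the payload once this returns YES.
static BOOL SyncResultIsReady(void) {
    return gSyncResult.ready != 0;
}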
For the correlated saving on the client side you have to either just keep that data and not save it until you get an OK from the server, or make sure that you have some kind of rollback option - say, a way to delete if the server call failed.
Beware that it's never going to be 100% safe unless you do a full 2-phase commit procedure (a client save or delete can still fail after the OK signal from the server), but that's going to cost you 2 trips to the server at the very least (and might cost you 4 if your sole rollback option is delete).
Ideally, you'd do the whole blocking version of the operation on a separate thread, but you'd need iPhone OS 4.0 for that.

Restkit and Core Data rollback

In my app I'm using RestKit v0.23.3 and I need to call 4 web services sequentially. I'm doing this without any problem, following various tutorials that can be found on the web.
My problem is that I need to be sure to download all 4 services, or the data can be inconsistent.
Now my question: can I do a rollback on Core Data if one of the web services fails during the download/mapping operations? Or is there a way to disable the "auto save to persistent store" feature that RestKit has and save "manually" only when the last web service has ended?
Thanks in advance for your help.
If this is really required then I'd be tempted to use a disk-based solution - i.e. before you start any potentially destructive / corrupting operation, ensure everything is saved and make a copy of the data store on disk (note that multiple files may need to be copied, so it's best to use an API like migratePersistentStore:toURL:options:withType:error:). Now, if you have a problem, you can tear the Core Data stack down, restore from disk, and then recreate the stack. This is safer and more reliable than trying to prevent saving or using an undo manager, because the load process runs across multiple threads and you really do need the saves to happen.
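Something along these lines (an untested sketch; backupURL and the helper name are placeholders) can produce the on-disk copy:

// Make a consistent copy of the store before syncing. Using a throwaway
// coordinator plus migratePersistentStore: handles the -wal/-shm side files,
// unlike copying the .sqlite file by hand.
- (BOOL)backupStoreAtURL:(NSURL *)storeURL
                   toURL:(NSURL *)backupURL
                   model:(NSManagedObjectModel *)model
                   error:(NSError **)error {
    NSPersistentStoreCoordinator *scratch =
        [[NSPersistentStoreCoordinator alloc] initWithManagedObjectModel:model];
    NSPersistentStore *store = [scratch addPersistentStoreWithType:NSSQLiteStoreType
                                                      configuration:nil
                                                                URL:storeURL
                                                            options:nil
                                                              error:error];
    if (!store) return NO;
    // Copies the store (and its journal files) to backupURL.
    return [scratch migratePersistentStore:store
                                      toURL:backupURL
                                    options:nil
                                   withType:NSSQLiteStoreType
                                      error:error] != nil;
}

On failure you would tear down the live stack, copy the backup files back over the originals, and rebuild the stack from the restored files.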

How does NSIncrementalStore work when there is no internet connection?

The title is the real question. But some advice + sample code on how to cope within NSIncrementalStore subclasses when there is no internet connection would be great.
I was thinking of caching to a SQLite persistent store, so when an internet connection was unavailable I'd use a managed object context that used the SQLite store instead of the incremental store. I'm not exactly sure how saving to the SQLite store would work to stay in sync with the web service (Parse in this case).
I suggest you rethink your architecture at a somewhat higher level.
Don't get down into the weeds of persistent stores and subclasses. Instead, think about the transactions you need to save.
Each modification you've made to the local store requires some sort of interaction with the web service. But those interactions are going to be messy: they'll fail, the server will be busy, the network will come and go.
If you think about this in two parts, the local data update and the remote data update, you'll have much more hair remaining at the end of your project.
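One hedged way to make the two-part split concrete is to record each local modification as its own pending-change record, save it locally right away, and let a worker replay the queue against the web service when the network allows. The PendingChange entity and its attributes below are invented for the sketch:

#import <CoreData/CoreData.h>

// Assumes a "PendingChange" entity with attributes: action, payload, createdAt.
- (void)drainPendingChangesInContext:(NSManagedObjectContext *)context {
    NSFetchRequest *request = [NSFetchRequest fetchRequestWithEntityName:@"PendingChange"];
    request.sortDescriptors = @[ [NSSortDescriptor sortDescriptorWithKey:@"createdAt" ascending:YES] ];

    NSError *error = nil;
    NSArray *pending = [context executeFetchRequest:request error:&error];
    for (NSManagedObject *change in pending) {
        // 1. Send [change valueForKey:@"payload"] to the web service (Parse here).
        // 2. On success, delete the record and save the context.
        // 3. On failure (offline, 5xx, ...), stop and retry later; the local
        //    data stays consistent either way because it was already saved.
    }
}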

Core Data delay when switching NSPersistentStore files

I'm developing an app with Core Data that periodically downloads all the data from a webservice. Since the download can fail or be cancelled by the user, I want to be able to roll back to the previous state. I tried undoing the NSManagedObjectContext, but that seemed a bit slow (I have tens of thousands of entities). What I'm doing right now is making a backup of the persistent store file, downloading the data, and, if the download fails, replacing the store file with the backup. This seems to work correctly, except there seems to be a delay before I can fetch entities from the store: if I go immediately after the download to a UITableView that uses an NSFetchedResultsController, I find it empty. If I wait some seconds, everything is OK.
So my question is: has anyone had this kind of delays too? Is there something that can be done to avoid this problem, something that forces everything to be ready, even if it blocks the thread?
I haven't used this setup, but I think the delay you are seeing is probably caused by Core Data having to clear all of its caching. If you use a cache with the fetched results controller, it will have to test and then delete its existing cache.
I think the best thing to do is to tear down your Core Data stack and reboot it from scratch. That includes recreating a fresh fetched results controller.
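A rough sketch of what "reboot the stack" might look like after you've swapped the store file back; the property names and the cache name are illustrative:

- (void)rebootCoreDataStackWithModel:(NSManagedObjectModel *)model
                            storeURL:(NSURL *)storeURL {
    // Throw away every object that references the old store.
    self.fetchedResultsController = nil;
    self.managedObjectContext = nil;
    self.persistentStoreCoordinator = nil;

    // Rebuild from the (restored) file.
    NSPersistentStoreCoordinator *psc =
        [[NSPersistentStoreCoordinator alloc] initWithManagedObjectModel:model];
    NSError *error = nil;
    [psc addPersistentStoreWithType:NSSQLiteStoreType
                      configuration:nil
                                URL:storeURL
                            options:nil
                              error:&error];

    NSManagedObjectContext *context =
        [[NSManagedObjectContext alloc] initWithConcurrencyType:NSMainQueueConcurrencyType];
    context.persistentStoreCoordinator = psc;

    self.persistentStoreCoordinator = psc;
    self.managedObjectContext = context;

    // Recreate the fetched results controller against the new context and,
    // if you use an on-disk FRC cache, clear it first.
    [NSFetchedResultsController deleteCacheWithName:@"MyCache"];
    // ... rebuild self.fetchedResultsController and call performFetch: ...
}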
