iOS App Offline and synchronization

I am trying to build offline synchronization capability into my iOS app and would like feedback/advice from the community on the strategy and best practices to follow. The app details are as follows:
The app shows a digital catalog to users and allows them to perform actions like creating and placing orders, among others.
Currently the app only works when online, and we have APIs for all actions like viewing the catalog, creating/placing orders which return JSON data.
We would like to provide offline/synchronization capability to users, through which users can view the catalog and create/place orders while offline, and when they come online the order details will be synchronized and updated to our server.
We would also like to pull the latest data from the server, and have the app keep itself up to date in case of catalog changes or order changes that happened at the Server while the app was offline.
Can you help me come up with the best design and approach for handling this kind of functionality?

I did something similar at the beginning of this year. After reading about NSOperationQueue and NSOperation I took a straightforward approach:
Whenever an object is changed/added/deleted in my local database, I add a new "sync" operation to the queue and do not care whether the app is online or offline (a reachability observer either suspends the queue or resumes it, and I re-queue an operation if an error occurs, e.g. the network is lost during a sync). The operation itself reads/writes the database and does the networking. My view controllers use an NSFetchedResultsController (with delegate = self) to get callbacks on changes. In some cases I needed some extra local data (it is about counting objects), for which I used NSManagedObjectContextObjectsDidChangeNotification.
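A minimal sketch of this pattern might look like the following; SyncCoordinator, pushChange and the queue name are illustrative, and NWPathMonitor stands in for the reachability observer mentioned above:

import Foundation
import Network  // NWPathMonitor for reachability (iOS 12+)

// Hypothetical sketch of the pattern described above: every local change
// enqueues a sync operation; a reachability observer suspends/resumes the queue.
final class SyncCoordinator {
    private let queue: OperationQueue = {
        let q = OperationQueue()
        q.maxConcurrentOperationCount = 1   // serialize syncs
        q.name = "com.example.sync"         // placeholder name
        return q
    }()
    private let monitor = NWPathMonitor()

    init() {
        // Suspend the queue while offline, resume it when the network returns.
        monitor.pathUpdateHandler = { [weak self] path in
            self?.queue.isSuspended = (path.status != .satisfied)
        }
        monitor.start(queue: DispatchQueue(label: "reachability"))
    }

    /// Called whenever an object is created/changed/deleted locally.
    func enqueueSync(for objectID: String) {
        queue.addOperation {
            // Read the pending change from the local store, push it to the API,
            // and write the server's response back. On a network error, re-enqueue.
            do {
                try self.pushChange(objectID: objectID)
            } catch {
                self.enqueueSync(for: objectID)   // naive re-queue on failure
            }
        }
    }

    private func pushChange(objectID: String) throws {
        // Placeholder for the actual database read + network call.
    }
}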
Furthermore, I used a multi-context Core Data setup, which seemed quite reasonable (I only have two contexts).
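A rough sketch of such a two-context setup, using a modern NSPersistentContainer as a stand-in for the parent/child configuration that was common back then (the model name is illustrative):

import CoreData

// Two contexts: a main-queue context for the UI (driving NSFetchedResultsController)
// and a background context for the sync operations.
let container = NSPersistentContainer(name: "Catalog")   // model name is illustrative
container.loadPersistentStores { _, error in
    if let error = error { fatalError("Store failed to load: \(error)") }
}

let uiContext = container.viewContext
uiContext.automaticallyMergesChangesFromParent = true     // UI picks up background saves

let syncContext = container.newBackgroundContext()
syncContext.perform {
    // Sync operations read/write here, then save; the UI context merges the changes
    // and the NSFetchedResultsController delegate callbacks fire as described above.
    try? syncContext.save()
}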
To get notified about changes from your server while the app is in the background, iOS 7 introduced silent (content-available) push notifications and background fetch, which can wake the app so it can pull the latest data.
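As a simplified illustration (SyncCoordinator is the hypothetical type sketched earlier, and the remote-notification background mode must be enabled), a silent push handler could kick off a sync like this:

import UIKit

// Minimal sketch: a silent push (aps: {"content-available": 1}) wakes the app
// in the background so it can pull the latest catalog/order changes.
class AppDelegate: UIResponder, UIApplicationDelegate {
    let sync = SyncCoordinator()   // hypothetical coordinator from the sketch above

    func application(_ application: UIApplication,
                     didReceiveRemoteNotification userInfo: [AnyHashable: Any],
                     fetchCompletionHandler completionHandler: @escaping (UIBackgroundFetchResult) -> Void) {
        // Trigger a pull of server-side changes, then report the result.
        // (Simplified: in practice you would call the completion handler
        // only after the sync has finished.)
        sync.enqueueSync(for: "catalog")   // placeholder trigger
        completionHandler(.newData)
    }
}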
On the server side, you should read up on the approach you want to take, e.g. Data Synchronization by Dan Grover or Developing Android REST Client Applications (of course there are many more good articles out there).

Caution: you might be disappointed if you expect an easy solution. Your requirement is not unusual, but the solution may become more complex than you expect, depending on the "business rules" and other reasonable requirements. If you intelligently restrict your requirements, you may find a solution you can implement yourself; otherwise you may consider using a commercial product.
I could imagine that if you design the business logic such that it takes the offline state into account and exposes it explicitly, you may find a solution you can implement yourself with moderate effort. What I mean by this is, for example, that when a user creates an order, it is initially in a "not committed" state. The order will only be committed when there is access to the server and the server gives the "OK" that this order can actually be placed by this user. The server may also deny the order, sending a corresponding message to the user.
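As an illustration only (the type and case names are hypothetical), making that state explicit in the model could look like this:

import Foundation

// Illustrative sketch: the offline state is explicit in the business model.
// An order is created locally as .draft and only becomes .committed once the
// server has accepted it; the server may also reject it with a reason.
enum OrderState: String, Codable {
    case draft       // created offline, not yet sent
    case pending     // sent to the server, awaiting confirmation
    case committed   // server accepted the order
    case rejected    // server denied the order
}

struct Order: Codable {
    let id: UUID
    var items: [String]          // simplified: product identifiers
    var state: OrderState = .draft
    var rejectionReason: String? // filled in when the server denies the order
}

The UI can then present draft and pending orders distinctly and make clear that an order is only confirmed once the device is back online and the server has accepted it.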
There are probably quite a few subtle issues that may arise due to the requirement of eventual consistency.
See also this question, which contains pointers to commercial products; their websites also give valuable information about the complexity of the problem and how it can be solved.

Related

Syncing of memory and database objects upon changes in objects in memory

I am currently implementing a web application in .NET Core (C#) using Entity Framework. While working on the project I encountered quite a few challenges, but I will start with the ones I think are most important. My questions are as follows:
Instead of frequently loading data from the database, I keep a set of static objects that mirror the data in the database. However, it is tedious and error prone to ensure that any changes, i.e. adding/deleting/modifying objects, are saved to the database in real time. Is there a good example or advice I can refer to in order to improve my approach?
Another thing is that the values of some objects' properties change on the fly according to the values of other objects' properties, something like a spreadsheet where a cell's value changes automatically when the cell its formula refers to changes. I do not have a solution for this yet and would appreciate any example I can refer to. This also adds another layer of complexity to syncing the in-memory objects to the database.
At the moment I am unsure whether there is a better approach. I would appreciate any help. Thanks!
Basically, you're facing a problem called eventual consistency: something changes, and two or more systems need to reflect that change. The problem is that both changes need to be applied for the operation to be considered successful, and if either one fails, you need to know.
In your case, I would use Azure Service Bus. You can create queues and put messages on a queue, and an Azure Function would handle these queue messages. You would create two queues: one for database updates and one for the in-memory update (changing this to a cache service may be something to think about). The advantage of these queues is that you can easily drop messages onto them from anywhere. Because you mentioned the object is going to evolve, you may need to update these objects either in the database or in memory (cache).
Once you've done that, I'd create a topic, with two subscriptions. One forwarding messages to Queue 1, and the other to Queue 2. This will solve your primary problem. In case an object changes, just send it to the topic. Both changes (database and memory) will be executed automagically.
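Stripped of the Azure specifics, the fan-out idea is simply one topic delivering every message to each subscription. A conceptual sketch (not the Azure Service Bus SDK; all names are illustrative):

// Conceptual sketch of the topic/subscription fan-out described above:
// one published change is delivered to every registered handler,
// e.g. "update the database" and "update the cache".
struct ObjectChanged {
    let objectID: String
    let payload: [String: String]
}

final class Topic {
    private var subscriptions: [(ObjectChanged) -> Void] = []

    func subscribe(_ handler: @escaping (ObjectChanged) -> Void) {
        subscriptions.append(handler)
    }

    func publish(_ message: ObjectChanged) {
        // Each subscription receives its own copy of the message.
        subscriptions.forEach { $0(message) }
    }
}

// Usage: both consumers see every change, so the two stores stay in step.
let topic = Topic()
topic.subscribe { msg in /* persist msg.payload to the database */ }
topic.subscribe { msg in /* refresh the in-memory cache entry   */ }
topic.publish(ObjectChanged(objectID: "42", payload: ["price": "9.99"]))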
The only problem now is that you mentioned you wanted to update the database in real time; with this scenario, you will have to give that up.
Also, make sure you have proper alerts in place for the queues, so that if you miss a message or your functions don't handle it well, you'll receive an alert and can check and correct the errors.
I totally agree with #nineedm's answer, but there are also other solutions.
If you introduce a cache, you will always face the cache invalidation problem: you have to mark the cache as invalid when the data changes. Sometimes this is easy, depending on the nature of the cached data and how often it changes.
If you have just a single application, MemoryCache can be enough with properly specified expiration options.
If there is a cluster, you have to look at distributed cache solutions, for example Redis. There is an MS article about that: Distributed caching in ASP.NET Core.

FCM device groups vs topics

I'm developing an iOS app that sends notifications to individual groups of users. The number of users per group will most likely be on the order of 1-7, but it can exceed that; while the app doesn't generally set a limit, I hardly see it exceeding 20.
Currently I've set it up with the topics approach and it works like it should. I understand this approach is optimized for throughput rather than latency, as opposed to device groups.
Nearing completion of my app, I'm considering changing to device groups. However, I don't see many advantages, especially considering the substantial complexity that comes along with them.
Notification delivery is fast enough at the moment. As long as delivery time doesn't suddenly increase by a lot, it's perfectly fine.
How secure are topics compared with device groups?
The app does allow the user to use more than one device, but I don't see that happening often - realistically quite seldom. However if that were to happen, device groups would handle it better. Still, I think it's an acceptable compromise to stick with topics.
For device groups to work, I have to create a new collection server-side to manage device registration tokens and their updates, pair it with my existing data structure, and implement several HTTP requests. I also need to query for the notification_key every time I want to send a notification, instead of sending to the more obvious ID I now use for topics.
I've read through other questions on SO, but wanted to get some fresh thoughts on this. My opinion is to stay with topics unless convinced otherwise.
I'm using both of these delivery methods, and yes, topics are far easier to manage, but that comes at the cost of security. If your groups are public in nature, you should be fine with topics. If they're meant to handle more sensitive/private information, you should probably go with device groups or individual tokens. The reason is that topics are effectively public-facing: any installation of your app can subscribe to a topic by name, without any per-user authorization.
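To make that concrete, here is a minimal sketch using the FirebaseMessaging iOS SDK (the helper and the topic naming scheme are illustrative) showing how trivially a client subscribes itself to a topic:

import FirebaseMessaging

// Any installation of the app that knows (or guesses) a topic name can
// subscribe to it; there is no per-user authorization on the topic itself.
func joinGroupTopic(_ groupID: String) {            // illustrative helper
    Messaging.messaging().subscribe(toTopic: "group-\(groupID)") { error in
        if let error = error {
            print("Subscription failed: \(error)")
        } else {
            print("Now receiving everything sent to group-\(groupID)")
        }
    }
}

Device groups and individual tokens, by contrast, are addressed with registration tokens that only your server stores, so the server stays in control of who receives each message.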

How to handle SAP Kapsel Offline app OData conflicts properly?

I built an app that can store OData offline using the SAP Kapsel plugins.
It is more or less the same as what is generated by SAP Web IDE, or similar to the apps in this example: https://blogs.sap.com/2017/01/24/getting-started-with-kapsel-part-10-offline-odatasp13/
Now I am at the point of checking the error resolution options. I created a sync conflict (changing data on the server after the offline database had been stored, then changing something in the app and starting a flush).
As mentioned in the documentation, I can see the error in the ErrorArchive and can also see some details. But what I am missing is the "current" data in the database on the server.
In the error details I can just see the data on the device but not the data changed on the server.
For example:
The device loads some names into the offline store
The device goes offline
User A changes some of these names
User B changes one of these names directly online
User A comes back online and starts a sync
User A is now informed about the entity that was changed, BUT:
not about the content user B entered;
I just see the "offline" data.
Is there a way to see both the "current" and the "offline" data in a kind of compare view?
Please also note that the server communication is done by the Kapsel plugin and not with normal AJAX calls. That could be an alternative, but I am wondering whether there is a smarter way supported by the API.
Meanwhile I figured out how to load the online data (manually).
This can be done by switching the HTTP handler back to the normal one:
sap.OData.removeHttpClient();
sap.OData.applyHttpClient();
Anyhow, this does not look like a proper solution, and I also have an issue with the conflict log itself: it must be deleted before any refresh can be applied.
I could not find any proper documentation for that. ETag handling is also hardly described in the SAPUI5 and SAP Kapsel documentation.
This question is a really tricky one, due to its implications. I understand that you are simulating a synchronization error due to concurrent modification, and want to know if there is a way for the client to obtain the "current" server state in order to give the user a means to compare the local and server state.
First, let me give you the short answer: No, there is no way for the client to see the current server state "for reference" via the Offline APIs when there are synchronization errors. Doing an online query as outlined above might work, but it certainly is a bad idea.
Now for the longer answer, which explains why this is not necessarily a defect and why I said there are quite some implications to the answer.
Types of Synchronization Errors
We distinguish a number of synchronization errors, and in this context, we are clearly dealing with business-related issues. There are two subtypes here: Those that the user can correct, e.g. validation errors, and those that are issues in the business process itself.
If the user violates the input range, e.g. by putting a negative price for a product, the server would reply with the corresponding message: "-1 is not a valid input value for 'Price'". You, as a developer, can display such messages to the user from the error archive, and the ensuing fix is indeed a very easy one.
Now when we talk about concurrent modification, things get really, really nasty. In fact, I like to say that in this case there is an issue with the business process, because on one hand we allow data to get out of sync, and on the other hand the process allows multiple users to manipulate the same piece of information. How all relevant users should be notified and synchronize is no longer just a technical detail, but in fact a new business process. There is just no way to generically decide how to handle this case. In most cases, it would involve back-office experts who need to decide how the changes should be merged.
A Better Solution
Angstrom pointed out that there is no way to manipulate ETags on the client side, and you should in fact not even think about it. ETags work like version numbers in optimistic locking scenarios, and changing the ETag basically means "Just overwrite what's on the server". This is a no-go in serious scenarios.
An acceptable workaround would be the following:
Make sure the server returns verbose error messages so that the user can see what happened and what caused the conflict.
If that does not help, refresh the data. This will get you an updated ETag, and merge the local changes into the "current" server state, but only locally. "Merging" really means that local changes always overwrite remote changes.
The user now has another opportunity to review the data and can submit it again.
A Good Solution
Better is not necessarily good, so here is what you should really do: never let concurrent modification happen, because it is really expensive to handle. This implies that it is not the developer who should address this issue; the business needs to change the process.
The right question to ask is: "When you replicate data in a distributed system, why do you allow it to be modified concurrently at all?" Typically stakeholders will not like this kind of question, and the appropriate reaction is to work out a conflict resolution process together with them. Only then will they realize how expensive fixing that kind of desynchronization is, and more often than not they will see that adjusting the process is far cheaper than insisting on yet another back-office process to fix the issues it causes. Even if they insist that there is a need for concurrent modification, they will now understand that it is not your task to sort this out and that they need to invest in a conflict resolution process.
TL;DR
There is no way to compare the client state with the current server state on the client, but you can do a refresh to retain the local changes and get an updated ETag. The real solution, however, is to rework the business process, because this is no longer a purely technical issue.
The default solution is that SMP or HCPms detects errors via ETags. On the client side there is no API to manipulate ETags in case of conflicts. A potential way to implement a kind of diff view on the device would work like this:
Show the errors
Cache the errors (maybe only in memory?)
Delete the errors
Refresh the database
Build a diff view from the current data and the cached errors
The idea with
sap.OData.removeHttpClient();
sap.OData.applyHttpClient();
could also work but could be very tricky and may introduce side effects.
Maybe some requests are triggered against the "wrong" backend.

Has Apple fixed the CoreData + iCloud sync issues?

There has been a lot of discussion lately about the issues with iCloud and Core Data and how Apple's APIs are currently broken in iOS 5 and possibly iOS 6.
Is it possible, given the current state of Apple's Core Data API, to reliably sync across multiple devices using iCloud?
If so, how would you do this? If not, please recommend an alternative approach.
This blog post will lead you to a chain of recent articles about the travails of developers attempting this approach.
From my own understanding and experience, I believe it is doable, but don't buy into the idea that you will get anything "for free". Depending on your data model, you may be better off syncing your whole persistent store as a document rather than using the documented core data / iCloud approach.
You may have better luck if you're already comfortable with Core Data. Just be sure you think through how to handle several important cases.
One is what to do if the user signs out of their iCloud account. When this happens, the local ubiquitous persistent store is deleted. If it makes sense for the user to still have access to their data, you'll need to manage a copy in local storage, and then manage resynchronizing when they sign back in.
Another is that changes can apparently be quite slow to propagate by default, so you may want to consider an alternative mechanism, such as the key value store, to quickly propagate sufficient information to avoid a bad user experience.
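If you go the key-value-store route, a minimal sketch (the key name is illustrative) could push a small "last changed" marker so other devices learn quickly that new data is on its way:

import Foundation

// Minimal sketch: use the iCloud key-value store to propagate a small
// "something changed" marker faster than the Core Data/iCloud transaction logs.
let kvStore = NSUbiquitousKeyValueStore.default

// Writer side: record the timestamp of the last local change.
kvStore.set(Date().timeIntervalSince1970, forKey: "lastChangeTimestamp")
kvStore.synchronize()   // request an upload soon; not a blocking sync

// Reader side: react when another device has pushed a newer marker.
NotificationCenter.default.addObserver(
    forName: NSUbiquitousKeyValueStore.didChangeExternallyNotification,
    object: kvStore,
    queue: .main
) { _ in
    let remoteStamp = kvStore.double(forKey: "lastChangeTimestamp")
    // Compare with the local state and, e.g., show a "new data available" hint
    // while the heavier Core Data import catches up.
    print("Remote change marker: \(remoteStamp)")
}

The marker itself carries no real data; it just lets you show a hint or trigger a fetch while the slower Core Data synchronization catches up.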
Conflict management is perhaps the most challenging (depending on your model). While the framework provides a mechanism to inform you of conflicts, you are on your own for providing a mechanism to resolve them, and there are reports that the conflict notifications may not be reliable (see linked articles), which seems strongly linked to the lag in updating.
In short, if you go into this understanding that the actual support is pretty bare bones and that you'll need to code very defensively, you may have a chance. There aren't any good recipes out there, so if you do make it work, please come back and tell us what works!
It depends on what you want to do. There are two types of Core Data-iCloud integration, as described here: http://developer.apple.com/library/ios/#releasenotes/DataManagement/RN-iCloudCoreData/_index.html
There are broadly speaking two types of Core Data-based application that integrate with iCloud:
Library-style applications, where the application usually has a single persistent store, and data from the store is used throughout the application.
Examples of this style of application are Music and Photos.
Document-based applications, where different documents may be opened at different times during the lifetime of the application.
Examples of this style of application are Keynote and Numbers.
If you're using the library-type, this article is the first of a series that goes into a lot of the problems that will come up: http://mentalfaculty.tumblr.com/post/23163747823/under-the-sheets-with-icloud-and-core-data-the-basics.
You can also check out sessions 218 (for document-based apps) or 227 (for library-style apps) from this year's WWDC.
As of iOS 7, the best solution is probably the Ensembles framework: https://github.com/drewmccormack/ensembles
Additionally, there is a promising project which will essentially allow you to do the same thing using a different cloud service.
Here is a link to the repository: https://github.com/nothirst/TICoreDataSync
Project description:
TICoreDataSync is a collection of classes to enable synchronization via the Cloud (including Dropbox) of Core Data-based applications (including document-based apps) between any number of clients running under Mac OS X or iOS. It's designed to be easy to extend if you need to synchronize via an option that isn't already supported.
Reasons why iCloud is not currently reliable:
"Sometimes, iCloud simply fails to move data from one computer to another."
"Corrupted baselines are [a] common obstacle.... There is no recovery from a corrupted baseline, short of digging in to the innards of your local iCloud storage and scraping everything out, and there is no visible indication that corruption has occurred — syncing simply stops."
"Sometimes, when initializing the iCloud application subsystem, it will simply return an opaque internal error. When it fails, there’s no option to recover — all you can do is try again (and again…) until it finally works."
"[W]hen you turn off the “Documents & Data” syncing option in the iCloud system preferences, the iCloud system deletes all of your locally stored iCloud data[.]"
When you sign out of iCloud, the system moves your iCloud data to a location outside of your application’s sandbox container, and the application can no longer use it.
"In some circumstances (and we haven’t been able to figure out which, yet), iCloud actually changes the object class of an item when synchronizing it. Loosely described, the object class determines the type of the object in the database[.]"
"In some cases (again, not all the time), iCloud may do one of the following:
Owner relationships in an item’s data will point to the wrong owner;
Owner items get lost in synchronization and never appear on computers other than the one on which they were created. (This leads to the item never appearing in the UI on any other machine.) When this happens, bogus relationships get created between blob items and an arbitrary unrelated owner."
"Sometimes (without any apparent consistency or repeatability), the associated data for an object (for example, the PDF data for a PDF item, or the web archive data for a Web Archive item) would simply fail to show up on the destination machine. Sometimes it would arrive later (much later — minutes or hours)."
Quoted and paraphrased from these sources:
http://www.imore.com/debug-12-icloud-core-data-sync
http://rms2.tumblr.com/post/46505165521/the-gathering-storm-our-travails-with-icloud-sync
Note: I have seen one article where the author mentions getting it to work for iOS 6+, but they don't provide any examples: http://zaal.tumblr.com/post/46718877130/why-you-want-to-use-core-data-icloud-sync-if-only-it
As a reference, here are Apple's docs on iCloud + Core Data:
http://developer.apple.com/library/ios/#releasenotes/DataManagement/RN-iCloudCoreData/
http://developer.apple.com/library/ios/#documentation/General/Conceptual/iCloudDesignGuide/Chapters/DesignForCoreDataIniCloud.html
http://developer.apple.com/library/ios/#documentation/cocoa/Conceptual/CoreDataVersioning/vmCloud/vmCloud.html
And here is an example app:
http://developer.apple.com/library/ios/#DOCUMENTATION/General/Conceptual/iCloud101/Introduction/Introduction.html
The Apple developer tutorial on using the iCloud API to manipulate documents might be a good place to start.
Your Third iOS App introduces you to the iCloud document storage APIs. You use these APIs to store and manipulate files in a user’s iCloud storage.

Resources