Azure offline data sync performance - iOS

We are considering using Azure offline data sync for our app, which usually has very sporadic connectivity (in most cases users sync their data once a day). The thing is that the mobile app needs to hold a lot of data (tens of thousands of products). Currently we have our own sync solution, which works fine with SQLite.
My question is, do you have any experience or thoughts about performance of Azure offline data sync? Will it be able to handle really large datasets?
Thank you

Azure Mobile Services is the cloud version of the popular Microsoft Sync Framework. It is a lightweight JSON API that tracks changes between the local and remote data stores. It transfers only changed rows, so data traffic is minimal. But the very first sync, when you have a huge dataset, can be a problem.
You can overcome this by carefully designing your database structure. The Azure SDK provides an API to sync table by table, which gives you enough flexibility to choose what to sync and what not to.
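The incremental ("delta") sync behavior described above can be sketched like this. This is a concept sketch in Python, not Azure SDK code; the `updated_at` watermark, row shape, and table name are assumptions for illustration:

```python
# Concept sketch of incremental sync: only rows changed since the last sync
# watermark are transferred, and each table is pulled separately.
def pull_table(remote_rows, local_store, table, last_sync):
    """Merge remote rows changed after last_sync into the local store."""
    changed = [r for r in remote_rows if r["updated_at"] > last_sync]
    for row in changed:
        local_store.setdefault(table, {})[row["id"]] = row
    # New watermark: newest change we have seen, or keep the old one.
    return max([r["updated_at"] for r in changed], default=last_sync)

# The first sync transfers everything; later syncs transfer only deltas.
remote = [
    {"id": 1, "updated_at": 10, "name": "widget"},
    {"id": 2, "updated_at": 20, "name": "gadget"},
]
local = {}
watermark = pull_table(remote, local, "products", last_sync=0)   # full pull
remote[0]["updated_at"] = 30                                     # row 1 changes
watermark = pull_table(remote, local, "products", last_sync=watermark)  # delta pull
```

Because each table is pulled with its own call and watermark, you can defer the heavy product tables and sync small, critical tables first on that expensive initial sync.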

Related

Electron app gets slower when there are thousands of records in IndexedDB

I have always wondered whether IndexedDB was ever built to support thousands of records (100K). I have developed a desktop app using Electron that uses PouchDB and holds almost 100k records, and the app often becomes slow. The desktop app also syncs the data to and from a remote CouchDB.
To verify whether this was caused by the number of records, I deleted many of them, and the app performed better than before.
My question is: isn't IndexedDB capable of handling this much data?

Handling streaming data from a mobile app (via POST)

At some point a dedicated IoT device and app may be created, but for now I'm working with an iPhone app that doesn't fully address the requirements but is still helpful.
The app can stream its data via POST. I have a PHP file set up that captures the data and writes it out to a CSV file.
The data is a time series with several columns, sent as a POST every second for about 10 minutes in total.
Instead of writing to a CSV, the data needs to be persisted to a database.
What I'm unsure about...
Since this is just a proof of concept it may not be an issue until later, but can the high frequency of new connections and inserts be expensive? I'm assuming that a new connection is needed for each POST. For now I have no way of authenticating the device, so I'm assuming I can use a local account for all known devices.
Is there a better way of handling the data than running a web server with a PHP script that grabs it? I was thinking of Kafka plus a database connector to persist the data, but I have no way of configuring the mobile app to tell it what it needs to do to send data to the server; communication is not two-way. Otherwise, my experience with POST requests is limited to typical web form inputs.
Anyone able to give some guidance?
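On the cost of one-insert-per-POST: a common mitigation is to buffer incoming rows and flush them to the database in batches over a single long-lived connection. A minimal sketch with SQLite (the table layout and batch size are assumptions, and `handle_post` stands in for whatever your server calls per request):

```python
import sqlite3

# Buffer incoming time-series rows and flush them in batches over one
# connection, instead of opening a connection and inserting per POST.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (ts INTEGER, value REAL)")

BATCH_SIZE = 100
buffer = []

def flush():
    conn.executemany("INSERT INTO readings VALUES (?, ?)", buffer)
    conn.commit()
    buffer.clear()

def handle_post(ts, value):
    """Called once per incoming POST; flushes when the buffer fills up."""
    buffer.append((ts, value))
    if len(buffer) >= BATCH_SIZE:
        flush()

# Simulate ~10 minutes of one-reading-per-second POSTs.
for t in range(600):
    handle_post(t, t * 0.5)
flush()  # flush any remainder
count = conn.execute("SELECT COUNT(*) FROM readings").fetchone()[0]
```

The trade-off is durability: rows sitting in the buffer are lost if the process dies, so for a proof of concept a small batch size (or a time-based flush) is a reasonable compromise.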

Best technique for saving and syncing binary data offline in iOS?

I am working on an app that collects user data including photos. It's mandated that this app should work in offline mode - meaning that the user can complete surveys and take photos without an internet connection and that data should sync back to a remote database. How is this generally handled? Do I create a local database with Core Data and write an additional layer to manage saving/reading from a server? Are there any frameworks that help facilitate that syncing?
I have also been looking into backend services such as Firebase that include iOS SDKs that appear to handle a lot of the heavy lifting of offline support, but it does not appear to support offline syncing of image files through the Firebase Storage SDK.
Can anyone recommend the least painful way to handle this?
Couchbase Mobile / Couchbase Lite is probably the best solution I've come across so far.
It allows offline data storage including binary data, and online syncing with a CouchDB compatible server. It works best with their Couchbase Server / Sync Gateway combination, but if you don't need to use filtered replication or 'channels' (e.g. for syncing data specific to a single user with a shared database), you can use Cloudant which saves you having to set up your own server.
It's also available across most platforms.
Generally, for images, it is best to use NSFileManager and save them in either the documents directory or the caches directory, depending on the types of images you are storing. Core Data and Firebase are better suited to structured data than to images, although they do support arbitrary binary storage.
You can also try SDWebImage which has a lot of features around loading and storing images.
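The file-on-disk plus metadata-in-database split described above can be sketched as follows. This is a concept sketch in Python rather than any specific iOS framework's API; the schema and the `synced` flag are assumptions:

```python
import os
import sqlite3
import tempfile

# Store the binary image on disk and keep only its path + sync state in the
# database; a background job later uploads rows where synced = 0.
photos_dir = tempfile.mkdtemp()
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE photos (id INTEGER PRIMARY KEY, path TEXT, synced INTEGER)")

def save_photo_offline(photo_id, data):
    """Persist the photo locally and record it as pending upload."""
    path = os.path.join(photos_dir, f"{photo_id}.jpg")
    with open(path, "wb") as f:
        f.write(data)
    db.execute("INSERT INTO photos (id, path, synced) VALUES (?, ?, 0)",
               (photo_id, path))

def sync_pending(upload):
    """Upload every unsynced photo, then mark it as synced."""
    rows = db.execute("SELECT id, path FROM photos WHERE synced = 0").fetchall()
    for photo_id, path in rows:
        with open(path, "rb") as f:
            upload(photo_id, f.read())
        db.execute("UPDATE photos SET synced = 1 WHERE id = ?", (photo_id,))

save_photo_offline(1, b"\xff\xd8fake-jpeg-bytes")
uploaded = []
sync_pending(lambda pid, data: uploaded.append(pid))  # stand-in for a real upload
pending = db.execute("SELECT COUNT(*) FROM photos WHERE synced = 0").fetchone()[0]
```

The same shape maps onto Core Data (entity with a file path and a sync flag) or onto Couchbase Lite attachments, which handle the upload step for you.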

NoSQL in iOS application

The backend database of the iOS application is NoSQL (MongoDB). We have to store data offline when the user is not connected to the internet and sync it later when they are online.
I have gone through the Couchbase iOS tutorial.
What is the best way to store data locally and sync with the server later?
Kindly provide architectural input for the above.
I have experimented with multiple ways of doing this, and Core Data is really unbeatable. All the backend work is handled by Apple, with none of the 'hard stuff' done by you. It's quite easy to implement an SQL database app using the SQLite C API; cloud synchronisation, however, is another story altogether. Core Data can be used with iCloud, which stores the data and syncs only when the device is connected to the internet.

AWS DynamoDB client best practice (MVC app)

I'm working to port some data access to DynamoDB in a high-traffic app. A bit of background: the app collects a very high volume of data, and some specific tables were causing performance issues in a traditional DB. So with a bit of redesign and some changes to the data layout, we have been able to make them fit the DynamoDB niche nicely.
My question is around the use/creation of the client object. The SDK docs suggest it is better to create one client and share it amongst multiple threads, so in my repository implementation I have the client defined as a lazy singleton. This means it will be created once and all requests will share the same client (currently around 4000 requests per minute, but likely to grow massively as we come out of beta and start promoting the product).
Does anyone have any experience of making the AWS SDK scale?
Thanks
Sam
In some SDKs, when you create one client and share it across multiple threads, only one thread can use the client at a time.
On the other hand, creating a separate client for every thread will definitely slow things down.
So I would suggest a middle approach here:
Maximize the HTTP connection pool size, so that more concurrent connections are allowed.
Then keep sharing the client object as you do now.
Batch operations can also be used with the .NET AWS SDK:
http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/BatchOperationsORM.html
