How is data loaded from a remote database stored locally (in memory) - iOS

What is the best way that data loaded from a remote database can be stored locally on iOS? (You don't need to provide any code, I just want to know the best way conceptually.)
For instance, take Twitter for iOS as an example. When it loads the tweets, does it just pull the tweet data from the remote database and store them in a local database on the iPhone? Or would it be better if the data is just stored locally as an array of objects or something similar?
See, I'm figuring that to be able to use/display the data from the remote database (for instance, in a dynamic table view), the data would have to be stored as objects anyway, so maybe they should just be stored as objects in an array. However, when researching this, I saw a lot of articles about local databases, so I was thinking maybe it's more efficient to load the remote data as a table, store it in a local database, and display the data directly from the local table, or something similar.
Which one would require more overhead: storing the data as an array of Tweet objects or as a local database of tweets?
What do you think would be the best way of storing remote data locally (in memory) (for an app that loads data similar to how Twitter for iOS does)?
I suppose this raises a prerequisite question: when data from a remote database is downloaded, is it usually delivered as a database table (result set) and therefore stored as one?
Thanks!

While it's very easy to put the fetched data right into an array and use it from there, you would likely benefit from using a local database for two main reasons: scalability and persistence.
If you are hoping to download and display a large amount of data, it may be unsafe to try to store it in memory all at once. It would be more scalable to download whatever data you need, store it in a local database, and then fetch only the relevant objects you need to display.
If you download the data and only store it in an array, that data will have to be re-fetched from the remote database and re-parsed on next load of your app/view controller/etc before anything can be displayed. Instead, create a local database in which to store the downloaded data, allowing it to be readily available to display on next load while new data is fetched from your remote source.
While there is some initial overhead in creating your database, the scalability and persistence it provides are more than enough to warrant it. Storing the state of a remote database in a local database is an extremely common practice.
However, if you do not mind the user having to wait for this data to be fetched on every load, or the amount of data being fetched is relatively small, or the data is incredibly simple, you will save time and effort by skipping the local database.
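As a rough illustration of that download-then-persist flow (a sketch, not a drop-in implementation): it assumes an existing Core Data stack with a `Tweet` entity that has `id`, `text` and `createdAt` attributes; the `TweetDTO` type and the endpoint URL are placeholders.

```swift
import CoreData

struct TweetDTO: Decodable {
    let id: Int64
    let text: String
    let createdAt: Date
}

func refreshTweets(context: NSManagedObjectContext) {
    let url = URL(string: "https://api.example.com/tweets")!   // placeholder endpoint

    URLSession.shared.dataTask(with: url) { data, _, error in
        guard let data = data, error == nil else { return }

        let decoder = JSONDecoder()
        decoder.dateDecodingStrategy = .iso8601
        guard let dtos = try? decoder.decode([TweetDTO].self, from: data) else { return }

        context.perform {
            // Persist each downloaded tweet in the local store
            // (de-duplication/upsert is omitted for brevity).
            for dto in dtos {
                let tweet = NSEntityDescription.insertNewObject(forEntityName: "Tweet",
                                                                into: context)
                tweet.setValue(dto.id, forKey: "id")
                tweet.setValue(dto.text, forKey: "text")
                tweet.setValue(dto.createdAt, forKey: "createdAt")
            }
            try? context.save()
        }
    }.resume()
}
```

The table view would then read from the local store (for example via an NSFetchedResultsController) rather than from the transient array of decoded objects, so previously downloaded data is available immediately on the next launch.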

Related

Mobile application data management

My question centers on one single point - data management in a mobile application. I have created a mobile application where the data comes from a server. The data includes both text and images. These are the steps I am following:
First launch:
1. Get server data.
2. Save server data in the SQLite database.
3. Show SQLite data.
Next launches:
1. Show SQLite data.
2. Get server data in the background.
3. Delete previous SQLite data.
4. Save new server data in the SQLite database.
5. Show SQLite data.
I have a couple of questions on these steps:
1. Is this the right approach? Another option would be to show data from the server every time, but that would not display the data on screen immediately (depending on internet speed).
2. I also thought of comparing the SQLite data with the new server data, but faced a big challenge: the new server data might contain new records or deleted records. Also, I could not find an appropriate way to compare each database field with the JSON data.
So what is the best approach to compare local SQLite data with new server data?
3. Each time I delete the SQLite data, insert the new data, and then refresh the screen (which has a UITableView), it blinks for a second, which is noticeable. How can I avoid this if steps 3, 4 and 5 are followed?
4. How should I handle data updates when I come back to the screen, or when the application becomes active? I am well aware of NSOperationQueue and GCD. But what if I go back and forth between screens again and again? There will be a number of NSOperations piling up in the queue.
Synchronising server data is a challenge. I've done it before, and if you can spend time on it I'd say it's the best solution.
You may need creation and modification dates on both server and local objects, to compare them - this will let you decide which objects to add, update and delete.
If the server sends you only the recently updated objects you can save a lot of traffic and improve performance (but deleted objects will be harder to detect).
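To make the modification-date comparison concrete, here is a minimal sketch of the add/update/delete decision. The `Record` type and the dictionary-backed local store are placeholders (a real app would back this with SQLite or Core Data), and it assumes the server returns a full snapshot, so ids missing from the response can be treated as deletions:

```swift
import Foundation

struct Record {
    let id: String
    var payload: String
    var modifiedAt: Date
}

// Merge a full server snapshot into the local copy using modification dates.
func merge(serverRecords: [Record], into localRecords: inout [String: Record]) {
    let serverIDs = Set(serverRecords.map { $0.id })

    for server in serverRecords {
        if let local = localRecords[server.id] {
            // Existing record: keep whichever copy is newer.
            if server.modifiedAt > local.modifiedAt {
                localRecords[server.id] = server
            }
        } else {
            // New record the device has never seen.
            localRecords[server.id] = server
        }
    }

    // Records that exist locally but are missing from the snapshot were
    // deleted on the server (this only works with a *full* snapshot).
    let deletedIDs = localRecords.keys.filter { !serverIDs.contains($0) }
    for id in deletedIDs {
        localRecords.removeValue(forKey: id)
    }
}
```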
If the data is only changed in the server it's easier, when the app can change the data too it becomes more complicated (but it seems that it's not your case). It also depends on how complex the database is, of course.
If you don't want to invest time in doing this, just fetching all the data every time works too, even if it is not ideal! Instead of showing the old data and then blinking to the new data, you can just make the user wait 2-3 seconds when entering the screen while you get the new data. Or you can fetch the data only when the app starts, so by the time you reach that view controller it is already there.
It's a complex problem that everyone faces at some point, so I'm curious to see what other people will suggest :)
This is a good question.
I personally think downloading data, storing it locally and later trying to sync is a dangerous scenario. It is easy to introduce bugs and master <-> slave issues (which data should be the master, what happens if multiple devices are used, etc.).
I think something like this could be a working approach:
1. I would look at possibilities to lazy load the data from the server on demand. That is, when a user opens a view that should display data, load that specific data as that view is created. This ensures the data is always in sync.
2. Tackling the need to reload data from the server for every view could be done by simply storing the downloaded data as objects in memory (not using SQLite). The view tries to load the needed data through your cache manager, which serves it from memory if available. If it is not in memory, simply get the data from your server and add it to your memory cache.
The memory cache could be a home-made data manager wrapping a Dictionary stored on your AppDelegate, or some global singleton that wraps the cache management/storage and data loading (see the sketch below).
3. With lazy-loaded data and a memory cache, you need to make sure any updates (changes, new records, deleted records) update your in-memory data model, and push those changes to the server as soon as possible. Depending on data size etc. you could force the user to wait, or do it as a background process.
4. To ensure the data stays in sync, you should periodically invalidate (delete) the local memory records in the cache and thereby force data updates from the server. The best approach would probably be to keep a last-updated timestamp for each record in the memory cache, so the periodic invalidator only deletes "old" records from the memory cache (once again, not from the server).
To save the server from unnecessary load, the data should still be loaded on demand when the user needs it in a view, and not as part of the cache invalidation.
5. Depending on the data size, you might also need to cap the memory cache. It could be as simple as: once xx records are stored, start deleting the oldest objects from the memory cache (not from the server, only locally on the device).
6. If data sync is absolutely critical, you might want to refresh the memory cache for a record just before you allow the user to change it. E.g. when the user taps "Edit" or similar, grab the latest data from the server for that record. This makes sure the user does not update a record using outdated data and thereby accidentally overwrite changes made remotely or on another device.
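A very small sketch of the singleton cache manager from points 2 and 4 might look like the following. The `Item` type, the five-minute `maxAge` and `fetchFromServer` are placeholders, and thread safety is left out for brevity:

```swift
import Foundation

struct Item {
    let id: String
    let name: String
}

final class ItemCache {
    static let shared = ItemCache()
    private init() {}

    private var entries: [String: (value: Item, updatedAt: Date)] = [:]
    private let maxAge: TimeInterval = 5 * 60   // invalidate entries after 5 minutes

    /// Serve from memory when fresh, otherwise lazy-load from the server.
    func item(id: String, completion: @escaping (Item?) -> Void) {
        if let entry = entries[id], Date().timeIntervalSince(entry.updatedAt) < maxAge {
            completion(entry.value)                 // served from the memory cache
            return
        }
        fetchFromServer(id: id) { [weak self] item in
            if let item = item {
                self?.entries[id] = (item, Date())  // refresh the cache entry
            }
            completion(item)
        }
    }

    /// Periodic invalidation: drop only the records that have gone stale.
    func purgeStaleEntries() {
        let now = Date()
        entries = entries.filter { now.timeIntervalSince($0.value.updatedAt) < maxAge }
    }

    private func fetchFromServer(id: String, completion: @escaping (Item?) -> Void) {
        // Placeholder: perform the real network request here.
        completion(nil)
    }
}
```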
--
That's my take on it. I do not believe there is a single "right way" to do this, but this is what I would try.
Hope this will help with some ideas and inspiration.
How about this:
1. If data exists in SQLite, load it into an "in-memory" copy and show it.
2. In the background, load new server data.
3. Delete the old SQLite data if it exists (note that the in-memory copy remains).
4. Save the new server data to SQLite.
5. Load the new SQLite data into the "in-memory" copy and show it.
If no data was found in step 1, display a "loading" screen to the user during step 2.
I'm making the assumption that the data from SQLite is small enough to keep a copy in memory to show in your UITableView (the UITableView would always show data from the in-memory copy).
It may be possible to combine steps 4 and 5 if the data is small enough to hold two copies in memory at the same time (you would create a new in-memory copy and swap with the visible copy when complete).
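A compact sketch of that flow in a table view controller, assuming the data set is small enough to hold in memory; `loadFromSQLite`, `replaceSQLiteData` and `fetchFromServer` are placeholders for your storage and networking layers:

```swift
import UIKit

final class FeedViewController: UITableViewController {

    private var items: [String] = []            // the in-memory copy the table shows

    override func viewDidLoad() {
        super.viewDidLoad()
        tableView.register(UITableViewCell.self, forCellReuseIdentifier: "Cell")

        items = loadFromSQLite()                // step 1: show cached data immediately
        tableView.reloadData()

        fetchFromServer { [weak self] fresh in  // step 2: refresh in the background
            guard let self = self else { return }
            self.replaceSQLiteData(with: fresh) // steps 3 + 4: replace the persisted copy
            DispatchQueue.main.async {
                self.items = fresh              // step 5: swap the visible copy once
                self.tableView.reloadData()
            }
        }
    }

    override func tableView(_ tableView: UITableView,
                            numberOfRowsInSection section: Int) -> Int {
        items.count
    }

    override func tableView(_ tableView: UITableView,
                            cellForRowAt indexPath: IndexPath) -> UITableViewCell {
        let cell = tableView.dequeueReusableCell(withIdentifier: "Cell", for: indexPath)
        cell.textLabel?.text = items[indexPath.row]
        return cell
    }

    // MARK: - Placeholders for the storage and networking layers

    private func loadFromSQLite() -> [String] { [] }
    private func replaceSQLiteData(with items: [String]) {}
    private func fetchFromServer(completion: @escaping ([String]) -> Void) {
        DispatchQueue.global().async { completion([]) }
    }
}
```

Because the visible copy is only swapped after the new data has been persisted, the table never shows an empty state in between, which also avoids the blinking mentioned in the question.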
Note:
I don't talk about error handling here, but I would suggest that you don't delete the SQLite data until you have new data to replace it with.
This approach also eliminates the need to determine if this is the first launch or not. The logic always remains the same which should make it a little easier to implement.
Hope this is useful.
You can do the same thing more efficiently with Multiversion Concurrency Control (MVCC), which keeps a counter (a sort of very simple "timestamp") for every data record that is bumped whenever the record changes. This means you only need to fetch the records that were updated after the last sync call, which cuts out a lot of redundant data and bandwidth.
Source: MultiVersion Concurrency Control
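As a rough sketch of that idea (the endpoint, the `since` query parameter and the `VersionedRecord` type are assumptions, not a real API), the client only asks for records whose version counter is higher than the last one it has seen:

```swift
import Foundation

struct VersionedRecord: Decodable {
    let id: String
    let version: Int64        // bumped by the server on every change
    let payload: String
}

final class DeltaSyncClient {
    private(set) var lastSyncedVersion =
        Int64(UserDefaults.standard.integer(forKey: "lastSyncedVersion"))

    func sync(completion: @escaping ([VersionedRecord]) -> Void) {
        var components = URLComponents(string: "https://api.example.com/records")!
        components.queryItems = [URLQueryItem(name: "since",
                                              value: String(lastSyncedVersion))]

        URLSession.shared.dataTask(with: components.url!) { [weak self] data, _, _ in
            guard let self = self, let data = data,
                  let changed = try? JSONDecoder().decode([VersionedRecord].self, from: data)
            else { return }

            // Only records updated after the last sync come back,
            // which keeps the payload (and bandwidth) small.
            if let maxVersion = changed.map(\.version).max() {
                self.lastSyncedVersion = maxVersion
                UserDefaults.standard.set(Int(maxVersion), forKey: "lastSyncedVersion")
            }
            completion(changed)
        }.resume()
    }
}
```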

How many records can I store in Core Data?

The first time, I make a network call to a SQL server to get table data with 14 fields, mostly image blobs. The table has over two hundred thousand records.
Can we store those two hundred thousand records in a local database on the device using Core Data?
What is the best way to store the images: as local files / in the DB, or should we load the images remotely?
The app should also work offline.
Please suggest the best possible way to meet the above requirements.
Storing 200k records in Core Data is not a problem in itself, as long as you do the initial import of those records correctly. Make sure you implement your update-or-insert properly, otherwise your users will have to wait time proportional to N^2. Apple suggests a nice implementation for this: https://developer.apple.com/library/mac/documentation/cocoa/conceptual/coredata/articles/cdimporting.html
Then once you have the local data, you will probably need to fine-tune the batch size of your fetch requests; that's a good idea even if you don't have 200k records.
As for the images, never ever store them in Core Data as binary blobs. Always store them as normal files on disk and store their path in Core Data to access them later on.
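A condensed sketch of that find-or-create import pattern is below. It assumes an `Item` entity with a unique `remoteID` and a `title` attribute; the `IncomingItem` type and the attribute names are made up for the example:

```swift
import CoreData

struct IncomingItem {
    let remoteID: Int64
    let title: String
}

func importItems(_ incoming: [IncomingItem], into context: NSManagedObjectContext) throws {
    // Sort the incoming data and fetch the matching local objects in the same
    // order, so both lists can be walked in a single pass (no per-record fetches).
    let sorted = incoming.sorted { $0.remoteID < $1.remoteID }
    let ids = sorted.map { $0.remoteID }

    let request = NSFetchRequest<NSManagedObject>(entityName: "Item")
    request.predicate = NSPredicate(format: "remoteID IN %@", ids)
    request.sortDescriptors = [NSSortDescriptor(key: "remoteID", ascending: true)]
    let existing = try context.fetch(request)

    var existingIterator = existing.makeIterator()
    var current = existingIterator.next()

    for item in sorted {
        if let object = current,
           object.value(forKey: "remoteID") as? Int64 == item.remoteID {
            // Matching local record: update it in place.
            object.setValue(item.title, forKey: "title")
            current = existingIterator.next()
        } else {
            // No local record yet: insert a new one.
            let object = NSEntityDescription.insertNewObject(forEntityName: "Item",
                                                             into: context)
            object.setValue(item.remoteID, forKey: "remoteID")
            object.setValue(item.title, forKey: "title")
        }
    }
    try context.save()
}
```

For the images, the same loop would write each blob to a file on disk and store only the file path (not the data) in the entity, as recommended above.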

Using Core Data to store large numbers of objects

I am somewhat new to Core Data and have a general question.
In my current project, users can access data reported by various sensors in each county of my state. Each sensor is represented in a table view which gathers its data from a web service call. Calling the web service could take some time since this app may be used in rural areas with slow wireless connectivity. Furthermore, users will typically only need data from one or two of the state's 55 counties. Each county could have anywhere from 15 to 500 items returned by the web service. Since the sensor names and locations change rarely, I would like the app to cache the data from the web service call to make gathering the list of sensors locations faster (and offer a refresh button for cases where something has changed). The app already uses Core Data to store bookmarked sensor locations, so it is already set up in the app.
My issue is whether to use Core Data to cache the list of sensors, or to use a SQLite data store. Since there is already a data model in place, I could simply add another entity to the model. However, I am concerned about whether this would introduce unnecessary overhead, or maybe none at all.
Being new to Core Data, it appears that all that is really happening is that objects are serialized and their properties added as fields in a SQLite DB managed by Core Data. If this is the case, it seems there really would not be any overhead from using the Core Data store already in place.
Can anyone help clear this up for me? Thanks!
it appears that all that is really happening is that objects are serialized and their properties added as fields in a SQLite DB managed by Core Data
You are right about that. Core Data does a lot more, but that's the basic functionality (if you tell it to use a SQLite store, which is what most people do).
As for the number of records you want to store in Core Data, that shouldn't be a problem. I'm working on a Core Data App right now that also stores over 20,000 records in Core Data and I still get very fast fetch times e.g. for auto completion while typing.
Core Data definitely adds some overhead, but if you only have a few entities and relationships and are not creating/modifying objects in more than one context, it is negligible.
Being new to Core Data, it appears that all that is really happening is that objects are serialized and their properties added as fields in a SQLite DB managed by Core Data. If this is the case, it seems there really would not be any overhead from using the Core Data store already in place.
That's not always the case. Core Data hides its storage implementation from the developer. It is sometimes a SQL database, but in other cases it can be a different kind of data store. If you need a comprehensive guide to Core Data, I recommend this objc.io article.
As @CouchDeveloper noted, Core Data is a disk I/O- and CPU-bound process. If you notice performance hits, move the work to a background thread (yes - this is a pretty big headache), but it will always be faster than the average network.
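For example, a minimal sketch of doing such work off the main thread with an NSPersistentContainer; the `Sensor` entity and the dictionary keys are assumptions based on the question:

```swift
import CoreData

func importInBackground(using container: NSPersistentContainer,
                        records: [[String: Any]]) {
    container.performBackgroundTask { context in
        // This closure runs on a private queue, so parsing and inserting
        // do not block the main thread (and the UI).
        for record in records {
            let sensor = NSEntityDescription.insertNewObject(forEntityName: "Sensor",
                                                             into: context)
            sensor.setValue(record["name"], forKey: "name")
        }
        try? context.save()
        // Changes reach the UI's viewContext automatically if
        // container.viewContext.automaticallyMergesChangesFromParent = true.
    }
}
```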

Local vs. Remote SproutCore queries

What’s the difference between SC.Query.local and SC.Query.remote? Why do both kinds of queries get sent to my data source's fetch method?
The words “local” and “remote” have nothing to do with where the data comes from – all your data probably comes from your server, and at any rate SC.Query doesn’t care. What the words mean is where the decisions are made. Local queries get fulfilled (and are updated live!) locally by SproutCore itself; remote queries rely on your server to make the decisions.
The basic fundamental difference then is: “Can/should/does my local store have local copies of all of the records required to fulfill this request?” If yes, use a local query; if no, use a remote query.
For example, in a document editing application, if the query is “get all of the logged-in user’s documents”, then the decision must be made by someone with access to “all documents across all users” – which the client should clearly not have, for reasons of performance and security! This should therefore be a remote query. But once all of the user’s documents are loaded locally, then another query such as “All of the user’s documents which have been edited in the last three days” can (and should) be executed locally.
Similarly, if a query is looking across a small number of records, it makes sense for the client to load them all and search locally; if it’s a search across millions of records, which the client can’t be expected to load and search, then the remote server will have to be relied upon for fulfillment.
The store will send both remote and local queries to your data source’s fetch method, where you can tell the difference via query.get(‘isRemote’). But the store expects the data source to do different things in each case.
It is your data source’s responsibility to fulfill remote queries. You’ll probably make a server call, then load records (if needed) via store.loadRecords(recordTypes, hashes, *ids*) (which returns an array of store keys), then use store.dataSourceDidFetchQuery(query, storeKeys) to fulfill the query and specify results.
On the other hand, with local queries, the store is essentially making a courtesy call – it’s letting your data source know that this kind of information has been requested, so you can decide if you’d like to load some data from the server. The store will happily fulfill the query all by itself, whether or not the data source takes any action. If you do decide to load some data from the server, you just need to let the store know with store.dataSourceDidFetchQuery(query) – without the store keys this time, because the store is making its own decisions about what records satisfy the query.
Of course, there are other ways to implement this all. SC.Store is set up (somewhat controversially) to aggressively request information from its data source, so it’s possible to run an application using nothing but local queries, or by triggering most server requests with toMany relationships (which run through dataSource.retrieveRecord). My personal favorite approach is to do all loading of data with remote queries in my application’s state chart, and then populate my controllers and relationships with nothing but local queries.
See the Records guide, the SCQL documentation at the top of the SC.Query docs, and the SC.Store / SC.DataSource documentation for a ton more information.

iOS app with remote server - I don't need data to persist on app, should I still use CoreData?

Design question:
My app talks to a server; JSON data is sent and received.
The data on the server is always changing, and I want users to see the most current data, not stored/cached data. So I require the user to be logged in to use the app, and I don't care about persisting data in the app.
Should I still use Core Data and map the JSON onto it?
Or can I just create custom model classes, map the JSON to their properties, and have NSArray properties that point to their child objects, etc.?
Which is better?
Thanks
If you don't want to persist data, I personally think Core Data would be overkill for this application.
Core Data is really for local persistence. If the data were not changing so often and you didn't want the user to have to fetch updated data every time they visited the page, then you would load the JSON and store it locally using Core Data.
Use plain old Objective-C objects for now. It's not hard to switch to Core Data in the future, but once you've done so it gets a lot harder to change your schema.
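In Swift terms, the plain-model-object approach is just value types decoded straight from the JSON and kept in arrays in memory; the field names below are made up:

```swift
import Foundation

struct User: Decodable {
    let id: Int
    let name: String
    let posts: [Post]     // child objects nested directly in the model
}

struct Post: Decodable {
    let id: Int
    let body: String
}

// Parse the server response into plain model objects held in memory.
func parseUsers(from data: Data) throws -> [User] {
    try JSONDecoder().decode([User].self, from: data)
}
```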
That depends on what your needs are.
If you need the app to work offline, you need to store your information somehow in the client.
In order to save on network usage, you could store the data locally, then query the server to see whether it has an updated answer -- for example by sending a timestamp to the server and having it return a 304 Not Modified if the entity hasn't changed.
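A sketch of that conditional request, assuming your server honours the standard If-Modified-Since header (the endpoint is a placeholder):

```swift
import Foundation

func fetchIfModified(since lastFetched: Date,
                     cached: Data,
                     completion: @escaping (Data) -> Void) {
    var request = URLRequest(url: URL(string: "https://api.example.com/feed")!)

    // Format the timestamp as an HTTP date and send it with the request.
    let formatter = DateFormatter()
    formatter.dateFormat = "EEE, dd MMM yyyy HH:mm:ss 'GMT'"
    formatter.locale = Locale(identifier: "en_US_POSIX")
    formatter.timeZone = TimeZone(identifier: "GMT")
    request.setValue(formatter.string(from: lastFetched),
                     forHTTPHeaderField: "If-Modified-Since")

    URLSession.shared.dataTask(with: request) { data, response, _ in
        if let http = response as? HTTPURLResponse, http.statusCode == 304 {
            completion(cached)      // nothing changed: reuse the local copy
        } else if let data = data {
            completion(data)        // changed: use the fresh payload
        }
    }.resume()
}
```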
Generally, it depends on how much time you have to put into the app and what your specific requirements are, but as a rule I would optimise for as little bandwidth usage as possible: that not only reduces potential data costs, it also means the answers will be available more quickly to your users (when online and unchanged) and will also be available offline.
If you do not wish to store data locally at all,

Resources