Local vs. Remote SproutCore queries - sproutcore

What’s the difference between SC.Query.local and SC.Query.remote? Why do both kinds of queries get sent to my data source's fetch method?

The words “local” and “remote” have nothing to do with where the data comes from – all your data probably comes from your server, and at any rate SC.Query doesn’t care. What the words mean is where the decisions are made. Local queries get fulfilled (and are updated live!) locally by SproutCore itself; remote queries rely on your server to make the decisions.
The fundamental difference, then, is: “Can/should/does my local store have local copies of all of the records required to fulfill this request?” If yes, use a local query; if no, use a remote query.
For example, in a document editing application, if the query is “get all of the logged-in user’s documents”, then the decision must be made by someone with access to “all documents across all users” – which the client should clearly not have, for reasons of performance and security! This should therefore be a remote query. But once all of the user’s documents are loaded locally, then another query such as “All of the user’s documents which have been edited in the last three days” can (and should) be executed locally.
Similarly, if a query is looking across a small number of records, it makes sense for the client to load them all and search locally; if it’s a search across millions of records, which the client can’t be expected to load and search, then the remote server will have to be relied upon for fulfillment.
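In code, that split looks roughly like this (a minimal sketch: MyApp.Document, its editedAt attribute, and the timestamp format are assumptions for illustration, not part of any particular app):

// Remote query: the server decides which documents belong to the user.
var allDocsQuery = SC.Query.remote(MyApp.Document);
var allDocs = MyApp.store.find(allDocsQuery);

// Local query: once the documents are in the store, SproutCore evaluates
// the condition itself and keeps the result array updated live.
// Assumes editedAt is stored as an epoch-millisecond timestamp.
var threeDaysAgo = Date.now() - 3 * 24 * 60 * 60 * 1000;
var recentDocsQuery = SC.Query.local(MyApp.Document, {
  conditions: 'editedAt > {cutoff}',
  parameters: { cutoff: threeDaysAgo }
});
var recentDocs = MyApp.store.find(recentDocsQuery);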
The store will send both remote and local queries to your data source’s fetch method, where you can tell the difference via query.get('isRemote'). But the store expects the data source to do different things in each case.
It is your data source’s responsibility to fulfill remote queries. You’ll probably make a server call, then load records (if needed) via store.loadRecords(recordTypes, hashes, ids) (which returns an array of store keys), then use store.dataSourceDidFetchQuery(query, storeKeys) to fulfill the query and specify results.
On the other hand, with local queries, the store is essentially making a courtesy call – it’s letting your data source know that this kind of information has been requested, so you can decide if you’d like to load some data from the server. The store will happily fulfill the query all by itself, whether or not the data source takes any action. If you do decide to load some data from the server, you just need to let the store know with store.dataSourceDidFetchQuery(query) – without the store keys this time, because the store is making its own decisions about what records satisfy the query.
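To make that concrete, here is a minimal sketch of a data source’s fetch method handling both cases. The URL, record type, and callback name are made up for illustration; fetch, isRemote, loadRecords and dataSourceDidFetchQuery are the hooks described above (see the SC.DataSource and SC.Store docs for the full contracts).

MyApp.DocumentsDataSource = SC.DataSource.extend({

  fetch: function(store, query) {
    if (query.get('isRemote')) {
      // Remote query: ask the server which records match, then resolve
      // the query ourselves in the callback below.
      SC.Request.getUrl('/api/documents').json()
        .notify(this, 'didFetchDocuments', store, query)
        .send();
      return YES;
    }

    // Local query: the store computes the result itself. We could use
    // this courtesy call to lazily load data, but no action is required.
    return YES;
  },

  didFetchDocuments: function(response, store, query) {
    if (SC.ok(response)) {
      // Load the raw hashes, then hand the resulting store keys back so
      // the store knows exactly which records satisfy the remote query.
      var storeKeys = store.loadRecords(MyApp.Document, response.get('body'));
      store.dataSourceDidFetchQuery(query, storeKeys);
    } else {
      store.dataSourceDidErrorQuery(query, response);
    }
  }

});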
Of course, there are other ways to implement this all. SC.Store is set up (somewhat controversially) to aggressively request information from its data source, so it’s possible to run an application using nothing but local queries, or by triggering most server requests with toMany relationships (which run through dataSource.retrieveRecord). My personal favorite approach is to do all loading of data with remote queries in my application’s state chart, and then populate my controllers and relationships with nothing but local queries.
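As a sketch of that last pattern (the controller here is invented for illustration): the remote query is issued once to pull data in, and the UI binds only to a live local query that updates as records arrive.

// Somewhere in a statechart action: trigger the server load.
MyApp.store.find(SC.Query.remote(MyApp.Document));

// Controller content is a live local query; it updates automatically
// as records stream into the store.
MyApp.documentsController = SC.ArrayController.create({
  content: MyApp.store.find(SC.Query.local(MyApp.Document))
});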
See the Records guide, the SCQL documentation at the top of the SC.Query docs, and the SC.Store / SC.DataSource documentation for a ton more information.

Related

How to connect JBoss BRMS (6.4.0.GA) to any database

I have a SQL Server database with a Person table, and I want to load a list of these people from the database into an ArrayList or List in the BRMS to apply the rules. How can I do this?
The best practice is to delegate the data retrieval logic to the caller.
The pattern should be:
Retrieve the data from a DB or whatever
Fill in the data in the Working Memory
Fire the rules
Collect the results
Depending on the application you can use the results to update a DB
The BRMS has the ability to retrieve data in the rule logic, but it should be considered a bad practice, or something to do only when no other option is available (a really rare case). Otherwise, BRMS performance will be terrible and the overall code really hard to maintain.

When fetching data using an api, is it best to store that data on another database, or is it best to keep fetching that data whenever you need it? [duplicate]

I'm using the TMDB API in order to fetch information such as film titles and release years, but am wondering whether I need to create an extra database in order to store all this information locally, rather than keep having to use the API to get the info. For example, should I create a film model and call:
film.title
and by doing so access a local database with the title stored on it, or do I call:
Tmdb::Movie.detail(550).title
and by doing so make another call to the API?
Having dealt with a large Rails application that made service calls to about a dozen other applications, caching is your best bet. The problem with the database solution is keeping it up to date. The problem with making the calls every time is that it's too slow. There is a middle ground. For this you want Ruby on Rails Low Level Caching:
Sometimes you need to cache a particular value or query result instead of caching view fragments. Rails' caching mechanism works great for storing any kind of information.
The most efficient way to implement low-level caching is using the Rails.cache.fetch method. This method does both reading and writing to the cache. When passed only a single argument, the key is fetched and the value from the cache is returned. If a block is passed, the result of the block will be cached to the given key and the result is returned.
An example that is pertinent to your use case:
class TmdbService
  # Cache TMDB movie details for four hours so repeated lookups
  # don't hit the external API every time.
  def self.movie_details(id)
    Rails.cache.fetch("tmdb.movie.details.#{id}", expires_in: 4.hours) do
      Tmdb::Movie.detail id
    end
  end
end
You can then configure your Rails application to use memcached or the database for the cache; it doesn't matter which. The point is you want this cached data to expire at some point to ensure you are getting up-to-date information.
This is a big decision to make. If the amount of data you get through the API is not huge, you can store all of it in your database. This way you will get the data much faster, and your application will work even when the API is down.
If the amount of data is huge and you don't have the resources to store it all, you should at least store the most important data in your database as a cache.
If you do not store any data on your own, you are dependent on the source of the data, and it can have downtime.
The problem with storing data on your side is that when the data changes you need to synchronize. Even then it is still good to store data on your side as a cache, so you get results faster, and to synchronize the data periodically.
Calls to a local database are way faster than calls to external APIs. I would expect a local database to return within a few milliseconds, whereas an API will probably take hundreds of milliseconds. And local calls are less likely to be affected by network issues or downtime.
Therefore I would always cache the result of an API call in a local database and occasionally update the local version with a newer version from the API.
But in the end it depends on your requirements: Do you need real-time data, or is a cached version okay? How often do you need that data, and how often is it updated? How fast is the API, and is latency an issue? Does the API have a rate limit (a maximum number of requests per time period)?

The best way to handle erratic data on iOS

I am working on an application where I have a connection to a database. The database contains from 300MB to 4GB worth of data, as each customer has their own database. The issue I am having is in gathering the data: because of the potential database size, just downloading and storing the information locally isn't possible. The data can get quite complex and can vary. For example:
A customer has a Job and they want to search for that job from the app.
I then fetch a list of jobs matching the search criteria.
The customer sees the job they want to view and I start the gathering process.
This job can potentially touch many tables, sometimes repeatedly.
There is the jobs table, and a relational table to map the job to a person. Then there is another table that contains non-customer relational information, then there are calendar events associated with the job, which in turn can involve different people. Then there are emails attached to the job, which in turn can bring in additional people and events.
So I have a working model that gathers all of this information. The problem I have is that I cannot figure out a great method of signaling to my view that the data is completely downloaded. My initial thought was to use the NotificationCenter to message when certain parts of the task were finished, allowing the core Job object to notify the view when everything was complete.
I know this is a pretty generalized question, but I'm honestly stumped as to how to take an unknown number of table results and translate that into a notice that my app can actually use.
My initial recommendation would be Core Data. It's designed for this kind of problem. No, I'm not saying to download the entire database into Core Data. I'm saying to use Core Data to manage your object model, because that's what it's good at.
As you receive data from the server, compose it into NSManagedObjects and stick them in the data store. On the UI side, create an NSFetchedResultsController to keep you informed as the data updates asynchronously. You don't necessarily need to persist this store. You could just keep it in memory and throw it away whenever you're done with the query, but keeping it on disk could be a nice caching solution. Again, don't think of Core Data as "a local database." Think of it as a model persistence engine that you can query for objects.
One advantage of this model is that you can provide the best available data to the user as it becomes available. But say you really don't want to show the information until it's all available. That's fine, too. Just let the network side keep updating its context, and then only save it when everything's complete. That way NSFetchedResultsController gets a single atomic update. The nice thing about Core Data is that it has these concepts built in, so you can adjust your update strategy without requiring a massive redesign.
The Notification Center will work great for this.
Post the notification at logical points in your data load to trigger a UI update for your users.

How is data loaded from a remote database stored locally (in memory)

What is the best way that data loaded from a remote database can be stored locally on iOS? (You don't need to provide any code, I just want to know the best way conceptually.)
For instance, take Twitter for iOS as an example. When it loads the tweets, does it just pull the tweet data from the remote database and store them in a local database on the iPhone? Or would it be better if the data is just stored locally as an array of objects or something similar?
See, I'm figuring that to be able to use/display the data from the remote database (for instance, in a dynamic table view), the data would have to be stored as objects anyway, so maybe they should just be stored as objects in an array. However, when researching this, I did see a lot of articles about local databases, so I was thinking maybe it's more efficient to load the remote data as a table, store it in a local database, and use data directly from the local table to display it, or something similar.
Which one would require more overhead: storing the data as an array of Tweet objects or as a local database of tweets?
What do you think would be the best way of storing remote data locally (in memory) for an app that loads data similarly to Twitter for iOS?
I suppose this raises a prerequisite question: when data from a remote database is downloaded, is it usually loaded as a database table (result set) and therefore stored as one?
Thanks!
While it's very easy to put the fetched data right into your array and use it from there, it is likely that you would benefit from using a local database, for two main reasons: scalability and persistence.
If you are hoping to download and display a large amount of data, it may be unsafe to try to store it in memory all at once. It would be more scalable to download whatever data you need, store it in a local database, and then fetch only the relevant objects you need to display.
If you download the data and only store it in an array, that data will have to be re-fetched from the remote database and re-parsed on next load of your app/view controller/etc before anything can be displayed. Instead, create a local database in which to store the downloaded data, allowing it to be readily available to display on next load while new data is fetched from your remote source.
While there is some initial overhead incurred in creating your database, the scalability and persistence it provides are more than enough to warrant it. Storing the state of a remote database in a local database is an extremely common practice.
However, if you do not mind the user having to wait for this data to be fetched on every load, or the amount of data being fetched is relatively low, or the data is incredibly simple, you will save time and effort by skipping the local database.

iOS app with remote server - I don't need data to persist on app, should I still use CoreData?

Design question:
My app talks to a server. JSON data is being sent/received.
Data on the server is always changing, and I want users to see the most current data, not stored/cached data. So I require a user to be logged in in order to use the app, and I don't care about persisting data in the app.
Should I still use Core Data and map the JSON to it?
Or can I just create custom model classes, map the JSON to their properties, and have NSArray properties which point to their child objects, etc.?
Which is better?
Thanks
If you don't want to persist data, I personally think Core Data would be overkill for this application.
Core Data is really for local persistence. If the data were not changing so often and you didn't want users to have to fetch updated data every time they visited the page, then you would load the JSON and store it locally using Core Data.
Use plain old objective-c objects for now. It's not hard to switch to Core Data in future, but once you've done so it gets a lot harder to change your schema.
That depends on what your needs are.
If you need the app to work offline, you need to store your information somehow in the client.
In order to save on network usage, you could store locally, then query the server to see if it had an updated answer -- you could do this by sending a time stamp to the server and return a 304 Not Modified if the entity hasn't changed.
Generally, it depends on how much time you have to put into the app and what your specific requirements are, but as a general rule I would optimise for as low bandwidth usage as possible, as that not only reduces potential data costs, but also means the answers will be available to your users more quickly (when they are online and the data has not changed) and also available offline.
If you do not wish to store data locally at all, plain model classes populated from the JSON will do the job.
