Blackberry | Keeping local persistent storage up to date with remote database

I'm developing a BlackBerry application to remotely access an external customer database.
Selected employees can change customer entries via a web interface accessible on our intranet.
I don't want the BlackBerry to contact the database on every request, so I built in a local storage that holds the top 50 customers selected by the BlackBerry user.
What's the best practice for keeping both sets of records in sync? I thought about creating a hash code of each record to reduce the amount of data to transfer (and thus the energy needed to transmit it). Can anyone here tell me what they do to reduce the number of requests a mobile device has to make?
Thanks,
rAyt

In a couple of different situations I've added a created/modified timestamp to each record. On a successful sync with the server, you note the last server time and store it on the client, and on the next sync you only fetch the records (if any) that have changed since then. This reduces the amount of data transferred, but you may still have to deal with records that were changed on both the client and the server since the last sync.
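A minimal sketch of that bookkeeping, including the changed-on-both-sides case the answer mentions (the CustomerRecord shape and field names are made up for illustration; on a BlackBerry the same logic would live in Java ME code):

```swift
import Foundation

// Hypothetical record shape: the server stamps modifiedAt on every change,
// the device stamps locallyModifiedAt when the user edits a record offline.
struct CustomerRecord {
    let id: Int
    var modifiedAt: Date
    var locallyModifiedAt: Date?
}

// Given the server time noted at the last successful sync, split the
// server's delta into plain updates and records that also changed locally
// (which need some form of conflict resolution).
func splitDelta(since lastServerSync: Date,
                serverRecords: [CustomerRecord],
                localRecords: [Int: CustomerRecord]) -> (updates: [CustomerRecord],
                                                         conflicts: [CustomerRecord]) {
    var updates: [CustomerRecord] = []
    var conflicts: [CustomerRecord] = []
    for record in serverRecords where record.modifiedAt > lastServerSync {
        if let local = localRecords[record.id],
           let localChange = local.locallyModifiedAt,
           localChange > lastServerSync {
            conflicts.append(record)   // changed on both sides since the last sync
        } else {
            updates.append(record)     // safe to overwrite the local copy
        }
    }
    return (updates, conflicts)
}
```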

Related

Access Split Database Problem with Simultaneous Users

I have a split database that was created in Access 2016 and has some users using Access 2016 and others using Access 365.
It works fine when only one person is using it. When two people access it at the same time, sometimes it will generate a copy of the back-end file for one of the users, so that user's data is not saved to the networked back-end file. The problem usually occurs with the second user to open the front end, but not always.
It gives a message like 'unable to sync ***_be.accdb' and generates a copy. It doesn't matter whether the user is on 2016 or 365.
Another symptom: when this occurs, if the user whose file was copied opens one of the forms, they see the screen (form) of the person who is correctly linked to the networked back-end file.
When I monitor the networked back-end file, sometimes it updates quickly with changes and other times it takes a while. Basically, it's not consistent in how it allows access and transfers data to the back-end file from user to user.
I've tried one user on VPN and the other on the network, both users on the network, and both users on VPN, with no obvious difference.
Has anyone run into this?

What is the "scope" of a CKServerChangeToken?

As described in https://developer.apple.com/reference/cloudkit/ckserverchangetoken, the CloudKit servers return a change token as part of the CKFetchRecordZoneChangesOperation callback response. For what set of subsequent record fetches should I include the given change token in my fetch calls?
Only fetches to the zone we fetched from?
Or would it apply to any fetches to the db that that zone is in? Or perhaps the whole container that the db is in?
what about app extensions? (App extensions have the same iCloud user as the main app, but have a different "user" as returned by fetchUserRecordIDWithCompletionHandler:, at least in my testing) Would it be appropriate to supply a change token from the main app in a fetch call from, say, a Messages app extension? I assume not, but would love to have a documented official answer.
I, too, found the scope of CKServerChangeToken a little unclear. However, after reviewing the documentation, it appears that both CKFetchDatabaseChangesOperation and CKFetchRecordZoneChangesOperation provide and manage their own server change tokens.
This is particularly useful if you decide to follow the CloudKit workflow Dave Browning outlines in his 2017 WWDC talk when fetching changes (around the 8 minute mark).
The recommended approach is to:
1) Fetch changes for a database using CKFetchDatabaseChangesOperation. Upon receiving the updated token via changeTokenUpdatedBlock, persist it locally. This token is 'scoped' to either the private or shared CKDatabase the operation was added to. The public database doesn't offer change tokens.
2) If you receive zone IDs via the recordZoneWithIDChangedBlock in the previous operation, this indicates there are zones with changes you can fetch using CKFetchRecordZoneChangesOperation. This operation takes its own unique server change token via its rather cumbersome initializer parameter, CKFetchRecordZoneChangesOperation.ZoneConfiguration. This token is 'scoped' to that particular CKRecordZone. So, again, when receiving an updated token via recordZoneChangeTokensUpdatedBlock, it needs persisting locally (perhaps with a key that relates to its CKRecordZone.ID).
The benefit here is that it probably minimises the number of network calls. Fetching database changes first prevents making calls for each record zone if the database doesn't report any changed zone IDs.
Here's a code sample from the CloudKit team which runs through this workflow. Admittedly, a few of the APIs have since changed and the comments don't explicitly make the 'scope' of the server change tokens clear.
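For reference, a rough sketch of that two-step workflow, using the block-based callbacks named above; the UserDefaults persistence and the loadToken/saveToken helpers are just illustrative, and error handling, deletions and conflict resolution are left out:

```swift
import CloudKit

// Illustrative persistence helpers (UserDefaults is an assumption here;
// any local store works).
func loadToken(forKey key: String) -> CKServerChangeToken? {
    guard let data = UserDefaults.standard.data(forKey: key) else { return nil }
    return try? NSKeyedUnarchiver.unarchivedObject(ofClass: CKServerChangeToken.self, from: data)
}

func saveToken(_ token: CKServerChangeToken, forKey key: String) {
    if let data = try? NSKeyedArchiver.archivedData(withRootObject: token, requiringSecureCoding: true) {
        UserDefaults.standard.set(data, forKey: key)
    }
}

func fetchChanges(in database: CKDatabase) {
    var changedZoneIDs: [CKRecordZone.ID] = []

    // 1) Database-level changes; this token is scoped to `database`.
    let dbOperation = CKFetchDatabaseChangesOperation(
        previousServerChangeToken: loadToken(forKey: "databaseChangeToken"))
    dbOperation.fetchAllChanges = true

    dbOperation.recordZoneWithIDChangedBlock = { zoneID in
        changedZoneIDs.append(zoneID)
    }
    dbOperation.changeTokenUpdatedBlock = { token in
        saveToken(token, forKey: "databaseChangeToken")
    }
    dbOperation.fetchDatabaseChangesCompletionBlock = { token, _, error in
        if let token = token { saveToken(token, forKey: "databaseChangeToken") }
        guard error == nil, !changedZoneIDs.isEmpty else { return }

        // 2) Zone-level changes; each token is scoped to its CKRecordZone.
        var configurations: [CKRecordZone.ID: CKFetchRecordZoneChangesOperation.ZoneConfiguration] = [:]
        for zoneID in changedZoneIDs {
            let config = CKFetchRecordZoneChangesOperation.ZoneConfiguration()
            config.previousServerChangeToken = loadToken(forKey: "zoneChangeToken-\(zoneID.zoneName)")
            configurations[zoneID] = config
        }

        let zoneOperation = CKFetchRecordZoneChangesOperation(
            recordZoneIDs: changedZoneIDs,
            configurationsByRecordZoneID: configurations)
        zoneOperation.recordChangedBlock = { record in
            // Apply the changed record to local storage here.
        }
        zoneOperation.recordZoneChangeTokensUpdatedBlock = { zoneID, token, _ in
            if let token = token { saveToken(token, forKey: "zoneChangeToken-\(zoneID.zoneName)") }
        }
        database.add(zoneOperation)
    }

    database.add(dbOperation)
}
```

Keying each zone token by its zone name mirrors the per-zone scoping described above.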

Using database offline, then updating when new connection established with iPhone

I have been asked to make my app available offline, which means storing the data collected via the API for use when no connection is available. The problem is that when a new connection is made, my local data may be out of date. Also, any changes made while offline will need to be pushed to the server.
I'm aware that there are ways of syncing databases so that when a new connection is made the data is automatically updated in both directions. However, after browsing Google I haven't found a definitive way of doing this.
Can anyone help point me in the right direction?
There should be a field, such as a timestamp, to indicate the last synced time. Whenever the connection is online, fetch the data, validate it against that timestamp, and update the offline storage.
In the same way, when you make changes while offline, you can set a bool flag indicating whether the data has been synced, and sync it once you are back online.
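A rough sketch of that approach, assuming a hypothetical /customers endpoint that accepts a since query parameter for the pull and a PUT per record for the push; the Customer model and field names are made up:

```swift
import Foundation

// Hypothetical record shape: the server sets modifiedAt on every change,
// the client clears isSynced for edits made while offline.
struct Customer: Codable {
    let id: Int
    var name: String
    var modifiedAt: Date
    var isSynced: Bool
}

final class SyncManager {
    private(set) var lastSyncedAt: Date?
    private var localStore: [Int: Customer] = [:]

    // Call this whenever connectivity is regained.
    func sync(using baseURL: URL) async throws {
        let encoder = JSONEncoder()
        let decoder = JSONDecoder()
        encoder.dateEncodingStrategy = .iso8601
        decoder.dateDecodingStrategy = .iso8601

        // 1) Push local edits that were made while offline.
        for var customer in localStore.values.filter({ !$0.isSynced }) {
            var request = URLRequest(url: baseURL.appendingPathComponent("customers/\(customer.id)"))
            request.httpMethod = "PUT"
            request.setValue("application/json", forHTTPHeaderField: "Content-Type")
            request.httpBody = try encoder.encode(customer)
            _ = try await URLSession.shared.data(for: request)
            customer.isSynced = true
            localStore[customer.id] = customer
        }

        // 2) Pull only the records changed since the last successful sync.
        var components = URLComponents(url: baseURL.appendingPathComponent("customers"),
                                       resolvingAgainstBaseURL: false)!
        if let last = lastSyncedAt {
            components.queryItems = [URLQueryItem(name: "since",
                                                  value: ISO8601DateFormatter().string(from: last))]
        }
        let (data, _) = try await URLSession.shared.data(from: components.url!)
        for customer in try decoder.decode([Customer].self, from: data) {
            localStore[customer.id] = customer
        }
        lastSyncedAt = Date()
    }
}
```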

How to properly handle asynchronous database replication?

I'm considering using Amazon RDS with read replicas to scale our database.
Some of our controllers in our web application are read/write, some of them are read-only. We already have an automated way of identifying which controllers are read-only, so my first approach would have been to open a connection to the master when requesting a read/write controller, and a connection to a read replica when requesting a read-only controller.
In theory, that sounds good. But then I stumbled upon the concept of replication lag, which basically means that a replica can be several seconds behind the master.
Let's imagine the following use case then:
The browser posts to /create-account, which is read/write, thus connecting to the master
The account is created, transaction committed, and the browser gets redirected to /member-area
The browser opens /member-area, which is read-only, thus connecting to a replica. If the replica is even slightly behind the master, the user account might not exist yet on the replica, thus resulting in an error.
How do you realistically use read replicas in your application, to avoid these potential issues?
I worked with an application which used pseudo-vertical partitioning. Since only a handful of the data was time-sensitive, the application usually fetched from the slaves and only went to the master in selected cases.
As an example: when the user updated their password, the application would always ask the master during authentication. When changing non-time-sensitive data (like user preferences), it would display a success dialog along with a note that it might take a while until everything is updated.
Some other ideas which might or might not work depending on environment:
After an update, compute the entity's checksum, store it in the application cache, and when fetching the data always check it against that checksum (see the sketch after this list)
Use browser storage or a cookie for storing the delta, ensuring the user always sees the latest version
Add an "up-to-date" flag and invalidate it synchronously on every slave node before/after an update
Whatever solution you choose, keep in mind it's subject to the CAP theorem.
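A hedged sketch of the checksum idea from the list above; readFromReplica, readFromMaster and the in-memory cache are placeholders for whatever data access and caching layer the application already has:

```swift
import Foundation
import CryptoKit

// After a write, remember the checksum of what we wrote; on reads, fall
// back to the master if the replica's copy doesn't match it yet.
var checksumCache: [String: String] = [:]   // entity ID -> expected checksum

func checksum(of payload: Data) -> String {
    SHA256.hash(data: payload).map { String(format: "%02x", $0) }.joined()
}

func write(entityID: String, payload: Data, toMaster writeToMaster: (Data) -> Void) {
    writeToMaster(payload)
    checksumCache[entityID] = checksum(of: payload)
}

func read(entityID: String,
          fromReplica readFromReplica: (String) -> Data,
          fromMaster readFromMaster: (String) -> Data) -> Data {
    let replicaCopy = readFromReplica(entityID)
    // The replica hasn't caught up with our own write yet: go to the master.
    if let expected = checksumCache[entityID], checksum(of: replicaCopy) != expected {
        return readFromMaster(entityID)
    }
    return replicaCopy
}
```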
This is a hard problem, and there are lots of potential solutions. One potential solution is to look at what Facebook did:
TL;DR - read requests get routed to the read-only copy, but if you do a write, then for the next 20 seconds all your reads go to the writable master.
The other main problem we had to address was that only our master databases in California could accept write operations. This fact meant we needed to avoid serving pages that did database writes from Virginia because each one would have to cross the country to our master databases in California. Fortunately, our most frequently accessed pages (home page, profiles, photo pages) don't do any writes under normal operation. The problem thus boiled down to, when a user makes a request for a page, how do we decide if it is "safe" to send to Virginia or if it must be routed to California?

This question turned out to have a relatively straightforward answer. One of the first servers a user request to Facebook hits is called a load balancer; this machine's primary responsibility is picking a web server to handle the request but it also serves a number of other purposes: protecting against denial of service attacks and multiplexing user connections to name a few. This load balancer has the capability to run in Layer 7 mode where it can examine the URI a user is requesting and make routing decisions based on that information. This feature meant it was easy to tell the load balancer about our "safe" pages and it could decide whether to send the request to Virginia or California based on the page name and the user's location.

There is another wrinkle to this problem, however. Let's say you go to editprofile.php to change your hometown. This page isn't marked as safe so it gets routed to California and you make the change. Then you go to view your profile and, since it is a safe page, we send you to Virginia. Because of the replication lag we mentioned earlier, however, you might not see the change you just made! This experience is very confusing for a user and also leads to double posting. We got around this concern by setting a cookie in your browser with the current time whenever you write something to our databases. The load balancer also looks for that cookie and, if it notices that you wrote something within 20 seconds, will unconditionally send you to California. Then when 20 seconds have passed and we're certain the data has replicated to Virginia, we'll allow you to go back for safe pages.
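Reduced to its bare decision logic, the rule described above might look something like the sketch below (in Facebook's case it runs inside a Layer 7 load balancer; the cookie handling and names here are only illustrative):

```swift
import Foundation

enum Datacenter { case master, replica }

// The 20-second window is the figure quoted above; in practice it should
// match however long replication is expected to take.
let pinToMasterWindow: TimeInterval = 20

// Value to store in the "last write" cookie whenever the user writes.
func lastWriteCookieValue(now: Date = Date()) -> String {
    String(now.timeIntervalSince1970)
}

// Decide where to send a request, given whether the page is read-only
// ("safe") and the last-write cookie, if any.
func route(requestIsReadOnly: Bool,
           lastWriteCookie: String?,
           now: Date = Date()) -> Datacenter {
    guard requestIsReadOnly else { return .master }      // writes always go to the master
    if let raw = lastWriteCookie, let lastWrite = TimeInterval(raw),
       now.timeIntervalSince1970 - lastWrite < pinToMasterWindow {
        return .master                                   // recent write: the replica may still lag
    }
    return .replica                                      // safe to serve from the replica
}
```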

REST and JSON for transaction data - iPad

I am searching for a good solution to manage data updates through REST and JSON.
The client is an iPad and I want that client to stay up to date.
The problem is that the amount of data is very large and may change often.
Let's assume I have customer data in the backend. The client iPad should sync this data with the backend system. But customer data may be changed or deleted at any time. Furthermore, the number of customers is >1000.
I do not really want the client to connect to http://www.example.com/customers/ and then have to send a request for each of the 1000 customers in that list...
Any ideas to solve such a problem nicely?
