Firebase Commit/Rollback for complex writes - iOS

I'm writing a financial app with Firebase, and for a receipt to be submitted, a number of other objects also need to be updated. For the data to be valid, all updates need to complete successfully. If any one of the writes fails, all updates must be rolled back.
For example:
If the user submits a receipt, the receipt object must be updated, as well as an invoice object and other general ledger objects.
If the update started but the user lost their internet connection halfway through, all changes should be rolled back.
What's the best way to achieve this in Firebase?

First, let's chat for a minute about why someone might want to do commit/rollback on multiple data paths...
Do you need this?
Generally, you do not need this if:
you are not writing with high concurrency (hundreds of write ops per minute to the SAME record by DIFFERENT users)
your dependencies are straightforward (B depends on A, and C depends on A, but A does not depend on B or C)
your data can be merged into a single path
Developers tend to be a bit too worried about orphaned records appearing in their data.
The chance of a web socket failing between one write and the next is trivial, somewhere on the order of the chance of collisions between timestamp-based IDs. That's not to say it's impossible, but it's generally low consequence and highly unlikely, and shouldn't be your primary concern.
Also, orphans are extremely easy to clean up with a script or even just by typing a few lines of code into the JS console. So again, they tend to be very low consequence.
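For instance, a cleanup pass might look something like this (a sketch against the legacy JS SDK; the invoices/receipts paths and the receiptId field are hypothetical, not from the question):
var fb = new Firebase(URL);
// one-off cleanup: remove invoice records whose parent receipt is missing
fb.child('invoices').once('value', function(snap) {
  snap.forEach(function(invoice) {
    fb.child('receipts/' + invoice.val().receiptId).once('value', function(parent) {
      if( !parent.exists() ) invoice.ref().remove(); // orphan: no parent receipt
    });
  });
});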
What can you do instead of this?
Put all the data that must be written atomically into a single path. Then you can write it as a single set or a transaction if necessary.
Or, in the case where one record is the primary and the others depend on it, simply write the primary first, then write the others in the callback. Add security rules to enforce this, so that the primary record must always exist before the others are allowed to write.
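For example, a minimal sketch of that primary-first pattern (the receipts/invoices/ledger paths are placeholders, and the receiptId/receiptData variables are assumed to be in scope):
var fb = new Firebase(URL);
// write the primary record first; only write dependents once it succeeds
fb.child('receipts/' + receiptId).set(receiptData, function(err) {
  if( err ) return console.error(err); // nothing else was written yet
  fb.child('invoices/' + invoiceId).set(invoiceData);
  fb.child('ledger/' + entryId).set(ledgerData);
});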
If you are denormalizing data simply to make it easy and fast to iterate (e.g. to obtain a list of names for users), then index that data in a separate path.
You can then keep the complete data record in a single path and the names, emails, etc. in a fast, query/sort-friendly list.
When is this useful?
This is an appropriate tool to use if you have a denormalized set of records that:
cannot practically be merged into a single path
have complex dependencies (A depends on C, C depends on B, and B depends on A)
are written with high concurrency (i.e. possibly hundreds of write ops per minute to the SAME record by DIFFERENT users)
How do you do this?
The idea is to use update counters to ensure all paths stay at the same revision.
1) Create an update counter which is incremented using transactions:
function updateCounter(counterRef, next) {
  counterRef.transaction(function(current_value) {
    // increment the counter, initializing it to 1 if it does not exist yet
    return (current_value || 0) + 1;
  }, function(err, committed, ss) {
    if( err ) console.error(err);
    else if( committed ) next(ss.val());
  }, false); // false: don't raise events for intermediate transaction states
}
2) Give it some security rules
"counters": {
"$counter": {
".read": true,
".write": "newData.isNumber() && ( (!data.exists() && newData.val() === 1) || newData.val() === data.val() + 1 )"
}
},
3) Give your records security rules to enforce the update_counter
"$atomic_path": {
".read": true,
// .validate allows these records to be deleted, use .write to prevent deletions
".validate": "newData.hasChildren(['update_counter', 'update_key']) && root.child('counters/'+newData.child('update_key').val()).val() === newData.child('update_counter').val()",
"update_counter": {
".validate": "newData.isNumber()"
},
"update_key": {
".validate": "newData.isString()"
}
}
4) Write the data with the update_counter
Since you have security rules in place, records can only be written successfully if the counter has not moved. If it has moved, the records have been overwritten by a concurrent change, so they no longer matter (they are no longer the latest and greatest).
var fb = new Firebase(URL);
updateCounter(fb.child('counters/myKey'), function(newCounter) {
  var data = { foo: 'bar', update_counter: newCounter, update_key: 'myKey' };
  fb.child('pathA').set(data);
  fb.child('pathB').set(/* some other data, also carrying update_counter and update_key */);
  // depending on your use case, you may want transactions here
  // to check data state before write, but they aren't strictly necessary
});
5) Rollbacks
Rollbacks are a bit more involved, but can be built off this principle (a rough sketch follows the list):
store the old values before calling set
monitor each set op for failures
on any failure, set the records that did commit back to their old values, but keep the new counter
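Here's that flow in outline, reusing updateCounter from step 1 (setWithRollback and the paths/newData shape are made-up names for illustration, not a Firebase API):
function setWithRollback(fb, paths, newData, counterKey) {
  updateCounter(fb.child('counters/' + counterKey), function(newCounter) {
    var oldValues = {}, pending = paths.length, failed = false;
    paths.forEach(function(path) {
      // 1) store the old value before calling set
      fb.child(path).once('value', function(ss) {
        oldValues[path] = ss.val();
        var data = newData[path];
        data.update_counter = newCounter;
        data.update_key = counterKey;
        // 2) monitor each set op for failures
        fb.child(path).set(data, function(err) {
          if( err ) failed = true;
          if( --pending === 0 && failed ) rollback();
        });
      });
    });
    function rollback() {
      // 3) put committed records back, but keep the new counter values
      paths.forEach(function(path) {
        var old = oldValues[path];
        if( old ) { old.update_counter = newCounter; old.update_key = counterKey; }
        fb.child(path).set(old);
      });
    }
  });
}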
A pre-built library
I wrote up a lib today that does this and put it on GitHub. Feel free to use it, but please make sure you aren't overcomplicating your life; read "Do you need this?" above first.

Related

Realm Swift: Question about Query-based public database

I've seen throughout the documentation that Query-based Sync is deprecated, so I'm wondering how I should go about my situation:
In my app (using Realm Cloud), I have a list of User objects with some information about each user, like their username. Upon user login (using Firebase), I need to check the whole User database to see if their username is unique. If I make this common realm use Full Sync, then all the users would synchronize and cache the whole database on each change, right? How can I prevent that, if I only want users to get a list of other users' information at a certain point, without caching or re-synchronizing anything?
I know it's a possible duplicate of this question, but things have probably changed in four years.
The new MongoDB Realm gives you access to server level functions. This feature would allow you to query the list of existing users (for example) for a specific user name and return true if found or false if not (there are other options as well).
Check out the Functions documentation; there are examples of how to call a function from macOS/iOS in the Call a Function section.
I don't know your use case or what your objects look like, but an example function to calculate a sum would look something like this. This sums the first two elements in the array and returns the result:
your_realm_app.functions.sum([1, 2]) { sum, error in
    if let err = error {
        print(err.localizedDescription)
        return
    }
    // sum is an optional AnyBSON value, so match through the optional
    if case let .double(x)? = sum {
        print(x)
    }
}
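For the username-uniqueness case itself, the server-side function might look roughly like this (Realm functions are written in JavaScript; the service, database, collection, and field names below are assumptions, not from the question):
// hypothetical Realm server function "isUsernameTaken"
exports = async function(username) {
  // "mongodb-atlas" is the default Atlas service name; adjust to your setup
  const users = context.services.get("mongodb-atlas").db("mydb").collection("User");
  const existing = await users.findOne({ username: username });
  return existing !== null; // true if some user already has this username
};
You would call it from Swift the same way as sum above, just with your function's name and arguments.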

How to optimize performance of Results change listeners in Realm (Swift) with a deep hierarchy?

We're using Realm (the Swift binding, currently at version 3.12.0) and have been since the earliest days of our project. In some early versions, before 1.0, Realm provided change listeners for Results without actually giving changeSets.
We used this a lot in order to find out whether a specific Results list had changed.
Later Realm replaced this API with changeSet-providing methods. We had to switch, and we are now misusing this API just to find out whether anything in a specific list changed (inserts, deletions, modifications).
Together with RxSwift we wrote our own implementation of Results change listening, which looks like this:
public var observable: Observable<Base> {
    return Observable.create { observer in
        let token = self.base.observe { changes in
            if case .update = changes {
                observer.onNext(self.base)
            }
        }
        observer.onNext(self.base)
        return Disposables.create(with: {
            observer.onCompleted()
            token.invalidate()
        })
    }
}
When we now want to have consecutive updates on a list we subscribe like so:
someRealm.objects(SomeObject.self).filter(<some filter>).rx.observable
.subscribe(<subscription code that gets called on every update>)
//dispose code missing
We wrote the extension on RealmCollection so that we can subscribe to List type as well.
The concept is equal to RxRealm's approach.
So now in our App we have a lot of filtered lists/results that we are subscribing to.
As the data grows, we notice significant performance losses when it comes to seeing a change visually after writing something into the DB.
For example:
Let's say we have a Car Realm Object class with some properties and some 1-to-n and some 1-to-1 relationships. One of the properties is a Bool, namely isDriving.
Now we have a lot of cars stored in the DB and a bunch of change listeners with different filters listening to changes of the cars collection (collection observers listening for changeSets in order to find out whether the list was changed).
If I take one car from some list and set its isDriving property from false to true (important: we do writes in the background), ideally the change listener fires quickly and I get the correct response to my write on the main thread nearly immediately.
Added with edit on 2019-06-19:
Let's make the scenario still a little more real:
Let's change something down the hierarchy, say the tire manufacturer's name. Let's say a Car has a List<Tire>, a Tire has a Manufacturer, and a Manufacturer has a name.
Now we're still listening to Results collection changes with some more or less complex filters applied.
Then we're changing the name of a Manufacturer which is connected to one of the tires, which is connected to one of the cars in that filtered list.
Can this still be fast?
Obviously, as the results/lists that change listeners are attached to get longer, Realm's internal change detection takes longer to calculate the differences and fires later.
So after a write we see the changes, in the worst case, much later.
In our case this is not acceptable. So we are thinking through different scenarios.
One scenario would be to stop using .observe on lists/results and switch to Realm.observe, which fires every time anything in the Realm changes. That's not ideal, but it is fast because the change-calculation process is skipped.
My question is: What can I do to solve this whole dilemma and make our app fast again?
The crucial thing is the threading. We're always writing in the background due to our design, so the writes themselves should be very fast, but then those changes need to synchronize to the other threads where Realms are open.
In my understanding, that happens after the change detection for all Results has run through; is that right?
So when I read on another thread, the data is only fresh after the thread sync, which happens after all notifications were sent out. But I'm currently not sure whether the sync happens before that (which would be better); I haven't tested it yet.

Breeze: When child entities have been deleted by someone else, they still appear after reloading the parent

We have a Breeze client solution in which we show parent entities with lists of their children. We do hard deletes on some child entities. When the user is the one doing the deletes there is no problem, but when someone else does, there seems to be no way to invalidate the children already loaded in cache. We do a new query with the parent, expanding to the children, but Breeze attaches all the other children it has already heard of, even if the database did not return them.
My question: shouldn't Breeze realize we are loading through expand and thus completely remove all children from cache before loading back the results from the db? How else can we accomplish this if that is not the case?
Thank you
Yup, that's a really good point.
Deletion is simply a horrible complication to every data management effort. This is true whether you use Breeze or not. It causes heartache up and down the line, which is why I recommend soft deletes instead of hard deletes.
But you don't care what I think ... so I will continue.
Let me be straight about this: there is no easy way for you to implement a cache cleanup scheme properly. I'm going to describe how we might do it (with some details neglected, I'm sure) and you'll see why it is difficult and, in perverse cases, fruitless.
Of course the most efficient, brute-force approach is to blow away the cache before querying. You might as well not have caching if you do that, but I thought I'd mention it.
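In Breeze that amounts to something like this (a sketch; clear is a real EntityManager method, but whether it suits your app is another question):
// brute force: empty the entire cache, then re-query everything
manager.clear();
breeze.EntityQuery.from('TodoLists').expand('TodoItems')
  .using(manager).execute()
  .then(function(data) { /* repaint the UI from data.results */ });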
The "Detached" entity problem
Before I continue, remember that the technique I just mentioned, and indeed every possible solution, is useless if your UI (or anything else) is holding references to the entities that you want to remove.
Oh, you'll remove them from cache alright. But whatever is holding references to them will continue to reference an entity object that is now in a "Detached" state: a ghost. Making sure that doesn't happen is your responsibility; Breeze can't know, and couldn't do anything about it if it did.
Second attempt
A second, less blunt approach (suggested by Jay) is to
apply the query to the cache first
iterate over the results and for each one
detach every child entity along the "expand" paths.
detach that top-level entity
Now when the query succeeds, you have a clear road for it to fill the cache.
Here is a simple example of the code as it relates to a query of TodoLists and their TodoItems:
var query = breeze.EntityQuery.from('TodoLists').expand('TodoItems');
var inCache = manager.executeQueryLocally(query);
inCache.slice().forEach(function(e) {
    inCache = inCache.concat(e.TodoItems);
});
inCache.slice().forEach(function(e) {
    manager.detachEntity(e);
});
There are at least four problems with this approach:
Every queried entity is a ghost. If your UI is displaying any of the queried entities, it will be displaying ghosts. This is true even when the entity was not touched on the server at all (99% of the time). Too bad. You have to repaint the entire page.
You may be able to do that, but in many respects this technique is almost as impractical as the first. It means that every view is in a potentially invalid state after any query takes place anywhere.
Detaching an entity has side-effects. All other entities that depend on the one you detached are instantly (a) changed and (b) orphaned. There is no easy recovery from this, as explained in the "orphans" section below.
This technique wipes out all pending changes among the entities that you are querying. We'll see how to deal with that shortly.
If the query fails for some reason (lost connection?), you've got nothing to show. Unless you remember what you removed ... in which case you could put those entities back in cache.
Why mention a technique that may have limited practical value? Because it is a step along the way to approach #3, which could work.
Attempt #3 - this might actually work
The approach I'm about to describe is often referred to as "Mark and Sweep".
Run the query locally and calculate the inCache list of entities as just described. This time, do not remove those entities from cache. We WILL remove the entities that remain in this list after the query succeeds ... but not just yet.
If the query's MergeOption is "PreserveChanges" (which it is by default), remove every entity from the inCache list (not from the manager's cache!) that has pending changes. We do this because such entities must stay in cache no matter what the state of the entity on the server. That's what "PreserveChanges" means.
We could have done this in our second approach to avoid removing entities with unsaved changes.
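For example (a sketch; checking entityAspect.entityState is the standard Breeze way to detect pending changes):
// keep only unchanged entities as removal candidates; entities with
// pending changes must stay in cache under "PreserveChanges"
inCache = inCache.filter(function(e) {
  return e.entityAspect.entityState.isUnchanged();
});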
Subscribe to the EntityManager.entityChanged event. In your handler, remove the "entity that changed" from the inCache list because the fact that this entity was returned by the query and merged into the cache tells you it still exists on the server. Here is some code for that:
var handlerId = manager.entityChanged.subscribe(trackQueryResults);
function trackQueryResults(changeArgs) {
    var action = changeArgs.entityAction;
    if (action === breeze.EntityAction.AttachOnQuery ||
        action === breeze.EntityAction.MergeOnQuery) {
        var ix = inCache.indexOf(changeArgs.entity);
        if (ix > -1) {
            inCache.splice(ix, 1);
        }
    }
}
If the query fails, forget all of this.
If the query succeeds:
unsubscribe: manager.entityChanged.unsubscribe(handlerId);
subscribe with orphan detection handler
var handlerId = manager.entityChanged.subscribe(orphanDetector);
function orphanDetector(changeArgs) {
    var action = changeArgs.entityAction;
    if (action === breeze.EntityAction.PropertyChange) {
        var orphan = changeArgs.entity;
        // do something about this orphan
    }
}
detach every entity that remains in the inCache list.
inCache.slice().forEach(function(e) {
    manager.detachEntity(e);
});
unsubscribe the orphan detection handler
Orphan Detector?
Detaching an entity can have side-effects. Suppose we have Products and every product has a Color. Some other user hates "red". She deletes some of the red products and changes the rest to "blue". Then she deletes the "red" Color.
You know nothing about this and innocently re-query the Colors. The "red" color is gone and your cleanup process detaches it from cache. Instantly every Product in cache is modified. Breeze doesn't know what the new Color should be so it sets the FK, Product.colorId, to zero for every formerly "red" product.
There is no Color with id=0 so all of these products are in an invalid state (violating referential integrity constraint). They have no Color parent. They are orphans.
Two questions: how do you know this happened to you, and what do you do?
Detection
Breeze updates the affected products when you detach the "red" color.
You could listen for a PropertyChanged event raised during the detach process. That's what I did in my code sample. In theory (and I think "in fact"), the only thing that could trigger the PropertyChanged event during the detach process is the "orphan" side-effect.
What do you do?
leave the orphan in an invalid, modified state?
revert to the equally invalid former colorId for the deleted "red" color?
refresh the orphan to get its new color state (or discover that it was deleted)?
There is no good answer. You have your pick of evils with the first two options. I'd probably go with the second, as it seems least disruptive. This would leave the products in the "Unchanged" state, pointing to a non-existent Color.
It's not much worse than when you query for the latest products and one of them refers to a new Color ("banana") that you don't have in cache.
The "refresh" option seems technically the best, but it is unwieldy. It could easily cascade into a long chain of asynchronous queries that could take a long time to finish.
The perfect solution escapes our grasp.
What about the ghosts?
Oh right ... your UI could still be displaying the (fewer) entities that you detached because you believe they were deleted on the server. You've got to remove these "ghosts" from the UI.
I'm sure you can figure out how to remove them. But you have to learn what they are first.
You could iterate over every entity that you are displaying and see if it is in a "Detached" state. YUCK!
Better I think if the cleanup mechanism published a (custom?) event with the list of entities you detached during cleanup ... and that list is inCache. Your subscriber(s) then know which entities have to be removed from the display ... and can respond appropriately.
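A sketch of that idea (hand-rolled pub/sub for your own app; onEntitiesDetached and publishDetached are made-up names, not a Breeze API):
var detachListeners = [];
function onEntitiesDetached(fn) { detachListeners.push(fn); }
function publishDetached(detached) {
  detachListeners.forEach(function(fn) { fn(detached); });
}
// ... at the end of cleanup, after detaching:
publishDetached(inCache); // subscribers remove these entities from their views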
Whew! I'm sure I've forgotten something. But now you understand the dimensions of the problem.
What about server notification?
That has real possibilities. If you can arrange for the server to notify the client when any entity has been deleted, that information can be shared across your UI and you can take steps to remove the deadwood.
It's a valid point, but for now we don't ever remove entities from the local cache as a result of a query. But this is a reasonable request, so please add it to the Breeze User Voice: https://breezejs.uservoice.com/forums/173093-breeze-feature-suggestions
In the meantime, you can always create a method that removes the related entities from the cache before the query executes and have the query (with expand) add them back.
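A minimal sketch of that workaround, reusing the TodoList/TodoItems example from above (assumes plain property access rather than Knockout observables; detachEntity is a real EntityManager method):
// detach the cached children of one parent, then re-query with expand
// so the server result becomes the authoritative child list
function refreshChildren(manager, todoList) {
  todoList.TodoItems.slice().forEach(function(item) {
    manager.detachEntity(item);
  });
  return breeze.EntityQuery.from('TodoLists')
    .where('id', '==', todoList.id)
    .expand('TodoItems')
    .using(manager).execute();
}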

Locking before save with fixed concurrencymode

I'm learning about concurrency in conjunction with EF 4.0 and have a question about the locking pattern used.
Say I configure a Fixed concurrency mode on a version-number property.
Now say I fetch a record (entity) from the database (context) and edit some property. The version gets incremented, and SaveChanges is called on the context. If the version in the database still matches the version of the original record (entity), the save continues; otherwise EF throws an OptimisticConcurrencyException.
Now, my point of interest is the following: between the check of the versions and the actual save there is always a period of time, however small. So in theory someone else could have updated the record between the comparison and the save, possibly corrupting the data.
How does this get solved? It feels as if the problem just gets pushed forward.
There is no period of time between checking the version and updating the record, because the database command looks like:
UPDATE SomeTable
SET SomeColumn = 'SomeValue'
WHERE Id = @Id AND Version = @OldVersion
SELECT @@ROWCOUNT
The check and the update are one atomic operation. ROWCOUNT returns 0 if no record with Id = @Id and Version = @OldVersion exists, and that zero is translated into the exception.
This can be (and probably is) solved using locking hints.
For SQL Server, EF can query (SELECT) from the database WITH (UPDLOCK).
This tells the Database Engine that you want to read one or several records and that nobody else may change those records until you perform an update thereafter.
If you want to see this for yourself, check out the Sql Server Profiler which will show you the queries in real-time.
Hope that helps.
CAVEAT: I can't say for sure that this is the way EF handles this scenario, because I haven't checked myself; but certainly if you were going to do it yourself, this is one way to do it.

Why is Entity framework loading data from the db when I set a property?

I have two tables (there are more in the database, but only two are involved here):
Account and AccountStatus; an account can have an AccountStatus (active, inactive, etc.).
I create a new Account and set a couple of properties but when I reach this code:
1. var status = db.AccountStatuses.SingleOrDefault(s => s.ID == (long)AccountStatusEnum.Active);
2. account.AccountStatus = status;
3. db.Accounts.AddObject(account);
The first line executes fine, but when I reach the second line it takes a REALLY long time, and when I step into the code it seems that every single account is loaded from the database.
I don't see why it should even want to load all the accounts?
We use Entity Framework 4 and Poco and we have lazy loading enabled.
Any suggestions?
Cheers
/Jimmy
You have to be careful which constructs you use to fetch data, as some will pull in the whole set and filter afterward. (Aside: the long delay may be the database being created and seeded; if there isn't one already, that will occur the first time you touch it, likely with a query of some sort. Also remember that when you retrieve a whole dataset, you may in actuality only have what amounts to a compiled query that won't be evaluated until you interact with it.)
Try this form instead and see if you have the same issue:
var status = db.AccountStatuses.Where(s => s.ID == (long)AccountStatusEnum.Active);
