Core Data iCloud transaction logs - iOS

I'm testing Core Data and iCloud with UIManagedDocument and ubiquity options (NSPersistentStoreUbiquitousContentNameKey and NSPersistentStoreUbiquitousContentURLKey).
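For reference, the setup looks roughly like this (a rough sketch only, not my exact code; the document name, store name, and log-folder path are placeholders):

    import UIKit
    import CoreData

    // A UIManagedDocument stored locally, with its persistent store configured
    // to use iCloud ubiquity (transaction-log based syncing).
    let documentsURL = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
    let document = UIManagedDocument(fileURL: documentsURL.appendingPathComponent("MyDocument"))

    if let ubiquityURL = FileManager.default.url(forUbiquityContainerIdentifier: nil) {
        document.persistentStoreOptions = [
            NSPersistentStoreUbiquitousContentNameKey: "MyAppStore",
            NSPersistentStoreUbiquitousContentURLKey: ubiquityURL.appendingPathComponent("TransactionLogs")
        ]
    }

    // openWithCompletionHandler; this is the call that sometimes takes minutes
    // while the store is rebuilt from the transaction logs.
    document.open { success in
        print("document opened: \(success)")
    }
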
Everything is working OK. My devices get synced without problems and in a reasonable time. The DB is small (below 100K).
As I said, I'm testing the app and making a lot of changes to the DB, and as a result a lot of transaction logs are generated. The problem I have is that if I delete and reinstall the app on one of the devices used for testing (without deleting the iCloud data), the app takes a very long time to open the document. openWithCompletionHandler takes minutes and sometimes never finishes. If I turn on debugging (-com.apple.coredata.ubiquity.logLevel 3) I can see that there is a long wait, after which the DB is reconstructed from the transaction logs.
If I remove the iCloud data and reinsert the data on the first device, the second one syncs without problems. Because of that, I think the reason for the delay is the high number of transaction logs (20-30 while testing, as I can see on developer.icloud.com).
According to "Managing Core Data iCloud Transaction Logs", Core Data will handle this automatically, but I can't see any deletion. Perhaps that just needs more time.
My questions are: Do transaction logs ever get consolidated? Can I force the consolidation of logs? Is there another recommended option?
I only store the subset of essential information needed for syncing in the iCloud Core Data file. I have another local file with the full DB, so I can reconstruct the iCloud DB without any major loss of information. Perhaps I could delete the iCloud DB when I detect a pile-up of logs and re-create it. Do you think this is a good option?
Thank you for helping.

Do transaction logs ever get consolidated?
That is how it's supposed to work.
Can I force the consolidation of logs?
No. There is no API that directly affects the existence of transaction logs. The iCloud system will consolidate them at some point, but there's no documentation regarding when that happens, and you can't force it.
Is there another recommended option?
You can limit the number of transaction logs indirectly, by saving changes less frequently. A transaction log corresponds to a save in Core Data. It may not make much of a difference because, honestly, 20-30 transaction logs is not very many. You might be able to reduce the number of log files, but you'll still have the same amount of data in them.
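To illustrate "save changes less frequently" concretely, here's a minimal sketch (the ChangeCoalescer name and the 5-second delay are made up for the example) that batches edits so a burst of changes produces a single save, and therefore a single transaction log:

    import Foundation
    import CoreData

    final class ChangeCoalescer {
        private let context: NSManagedObjectContext
        private let saveDelay: TimeInterval
        private var pendingSave: DispatchWorkItem?

        init(context: NSManagedObjectContext, saveDelay: TimeInterval = 5) {
            self.context = context
            self.saveDelay = saveDelay
        }

        // Call after every edit. The actual save (and therefore the transaction
        // log) only happens once edits stop arriving for saveDelay seconds.
        func scheduleSave() {
            pendingSave?.cancel()
            let work = DispatchWorkItem { [weak self] in
                guard let context = self?.context, context.hasChanges else { return }
                try? context.save()
            }
            pendingSave = work
            DispatchQueue.main.asyncAfter(deadline: .now() + saveDelay, execute: work)
        }
    }

Each call to scheduleSave() cancels the previous pending save, so only the last edit in a burst actually writes to the store.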
Transaction logs aren't really your problem. As you observed, there's a long wait before iCloud starts running through the transaction logs. During that delay, iCloud is communicating with Apple's servers and downloading the transaction logs. Some of this is affected by network speed and latency, and the rest is just the way iCloud is.

Related

How to delete a CKRecord if not yet finished creating due to bad network connection?

I am currently working on an iPhone app which utilizes CloudKit for syncing the app data between the user's devices. I use CoreData as my local cache to make sure that the app stays usable when the device is offline.
The Problem
During development I came across an issue concerning the sync behavior when the device is offline. Let me explain with an example:
The user creates a Person (one entity I'm dealing with)
The person is saved to the local cache → CoreData
A CKRecord is created and set up to match the data of the locally cached entity
To save the CKRecord to CloudKit, a CKModifyRecordsOperation is created and set up with all the completion blocks and properties needed
The CKModifyRecordsOperation is added to the database
If the device has a working network connection the CKRecord is created as desired. But when the operation fails due to a bad network connection the CKRecord is not created.
Let's assume the device stays offline and the user decides to delete the person again. This is no problem for the data locally cached on the device. But because the locally cached object has no CKRecordID associated with it yet, no CKModifyRecordsOperation can be created to delete the CKRecord in the cloud.
Now the device establishes a network connection and is online again. So now the CKModifyRecordsOperation to create the Person-Entity is executed. This results in the local cache and the cloud being out of sync.
I thought of fixing this issue by keeping track of pending operations concerning a Person-Entity: if the person gets deleted, the pending operations get cancelled.
Unfortunately I could not get this working, so I would appreciate some advice on whether I'm on the right track!
Thank you!
Try adjusting the .qualityOfService attribute on the operation. Per Apple Docs at https://developer.apple.com/library/content/documentation/Performance/Conceptual/EnergyGuide-iOS/PrioritizeWorkWithQoS.html#//apple_ref/doc/uid/TP40015243-CH39:
User-interactive: Work that is interacting with the user, such as operating on the main thread, refreshing the user interface, or performing animations. If the work doesn't happen quickly, the user interface may appear frozen. Focuses on responsiveness and performance. Work is virtually instantaneous.
User-initiated: Work that the user has initiated and requires immediate results, such as opening a saved document or performing an action when the user clicks something in the user interface. The work is required in order to continue user interaction. Focuses on responsiveness and performance. Work is nearly instantaneous, such as a few seconds or less.
Utility: Work that may take some time to complete and doesn't require an immediate result, such as downloading or importing data. Utility tasks typically have a progress bar that is visible to the user. Focuses on providing a balance between responsiveness, performance, and energy efficiency. Work takes a few seconds to a few minutes.
Background: Work that operates in the background and isn't visible to the user, such as indexing, synchronizing, and backups. Focuses on energy efficiency. Work takes significant time, such as minutes or hours.
The page also says the default is between User-initiated and Utility.
According to this discussion on the Apple dev forums (https://forums.developer.apple.com/thread/20047), users reported not receiving errors for queries that failed while offline; like you're seeing, those queries appear to have been persisted and retried when the connection was restored. Users in that thread also reported that changing the QoS parameter to User-initiated caused an error to be returned immediately when the operation couldn't be completed due to no network.
It also appears that when the operation is being persisted, the operation's longLivedOperationWasPersistedBlock will be called.
So, option 1: try adjusting the QoS value to a "higher" (more urgent) value on the operation, which should cause errors to be returned immediately rather than the operation being queued for later.
Option 2: try adding the longLivedOperationWasPersistedBlock. If it fires, you could try canceling the operation in that block, and displaying a "no network" error to the user.
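Put together, a sketch of both options might look something like this (the Person record type, the name field, and the choice to cancel in the persisted block are just illustrative):

    import CloudKit

    let record = CKRecord(recordType: "Person")
    record["name"] = "Jane Doe" as CKRecordValue

    let operation = CKModifyRecordsOperation(recordsToSave: [record], recordIDsToDelete: nil)

    // Option 1: a more urgent QoS should make the operation fail fast with a
    // network error instead of being silently queued for a later retry.
    operation.qualityOfService = .userInitiated

    // Option 2: if CloudKit persists the operation anyway, this block fires;
    // you could cancel here and surface a "no network" message to the user.
    operation.longLivedOperationWasPersistedBlock = { [weak operation] in
        operation?.cancel()
    }

    operation.modifyRecordsCompletionBlock = { savedRecords, deletedRecordIDs, error in
        if let error = error {
            print("CloudKit save failed: \(error)")
        }
    }

    CKContainer.default().privateCloudDatabase.add(operation)
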

Unique Realm container objects

I implemented real-time sync following Realm's Tasks demo app.
There, a dummy container object is used to hold a List of the models.
The demo app doesn't seem to support offline usage.
I wondered what happens, given this setup, when I start the app on both an online and an offline device and then bring the offline device online.
My initial expectation was that I'd end up with 2 containers (which would be an invalid state), but when I tested, surprisingly there was only 1 container at the end.
But sometimes I get 2 containers and haven't been able to identify what causes this.
The question then is, how exactly does this work? I assume the reason the container is normally not duplicated when I sync the offline device for the first time is that it's handled as the same object, maybe because it doesn't have a primary key or something? But then why is it sometimes duplicated? And what would be the best practice here? Do I maybe have to use a primary key, or check after connecting whether there's duplication and, if so, do a manual merge of the containers?
At the moment, Realm Tasks merely checks if the default Realm is empty before it tries to add a new base list container object. If the synchronization process hasn't completed by the time this check occurs, it's reasonable that a second container would be created. When testing the app on a local network, this usually isn't a problem since the download speeds are so fast, but we definitely should test this a bit more thoroughly.
Adding a primary key will definitely help since it means that if a second list is created locally, it will get merged with the version that comes down from the server.
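A sketch of what that might look like (the class and property names here are invented for the example, not taken from Realm Tasks):

    import RealmSwift

    class TaskListContainer: Object {
        // A fixed primary key means every device creates "the same" container,
        // so Realm merges the copies instead of keeping two of them.
        @objc dynamic var id = "container"
        let lists = List<TaskList>()

        override static func primaryKey() -> String? {
            return "id"
        }
    }

    class TaskList: Object {
        @objc dynamic var name = ""
    }
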
We've recently been focusing on the 'on-boarding' process that happens when a second device connects to a user's Realm Mobile Platform account, via the new progress notification system. A more logical approach would be to wait for the synchronization to complete the initial download after logging in, and only then check for the presence of the objects. Once the documentation is complete, we'll most likely be revamping how Realm Tasks handles this.
The demo app (as well as the Realm Mobile Platform) does support offline, but only after the user has logged in for the first time (which is when these container objects are initially generated). After that time, the apps can be used offline, and any changes done in that interim are synchronized the next time it comes online.
We're planning on building an 'anonymous user' feature, where a user can start using the app straight away (even offline), and any changes they made before logging in are then transferred to their user account once they do so.

Firebase cache when still online, iOS

In the iOS Firebase SDK, if I perform a .ChildAdded query, for example, and then later perform the same query again, will the query be served from the local cache or will it hit the Firebase servers again?
In general: the Firebase client tries to minimize the number of times it downloads data. But it also tries to minimize the amount of memory/disk space it uses.
The exact behavior depends on many things, such as whether another listener has remained active on that location and whether you're using disk persistence. If you have two listeners for the same (or overlapping) data, updates will only be downloaded once. But if you remove the last listener for a location, the data for that location is removed from the (memory and/or disk) cache.
Without seeing a complete piece of code, it's hard to tell what will happen in your case.
Alternatively, you can check for yourself by enabling Firebase's logging with [Firebase setLoggingEnabled:YES];
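With the newer Swift API for the Firebase Realtime Database, a quick way to try this (the "messages" path is just a placeholder) would be something like:

    import Firebase

    FirebaseApp.configure()

    // Persistence must be enabled before the database is otherwise used; with it
    // on, previously synced data may be served from the local cache while the
    // server is still consulted for fresh data.
    Database.database().isPersistenceEnabled = true

    // Logging shows in the console whether data came from the cache or the network.
    Database.setLoggingEnabled(true)

    let ref = Database.database().reference(withPath: "messages")
    ref.observe(.childAdded) { snapshot in
        print("child added: \(snapshot.key)")
    }
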

iCloud Core Data Reliability & Timing

I have been attempting to implement iCloud with my Core Data based small business apps. I've been using a GitHub project called Ubiquity Store Manager (USM) as well as more generic Apple example code. It almost seems to work, but there are 2 major issues that I can't seem to consistently address:
Timing - When the context is saved to the ubiquity container, it is beyond your control to determine when it is uploaded to iCloud. If two transactions are saved less than 3-5 seconds apart, they will often be uploaded to iCloud in the reverse of the chronological order in which they were entered/saved. For example: with trans1 at 8:01:01 and trans2 at 8:01:04, trans2 will often upload and download onto other devices BEFORE trans1. If these are simple records like appointments or contacts, that's probably not a big deal. With parent-child related records it's a very big deal, as the child records arrive before the parents and are effectively "lost" in iCloud. I have tried a timer between transactions; a 5-7 second delay will eliminate the problem, but is there a better way to handle this?
Reliability - When testing on 2 devices after a pause of as little as 2 minutes, if 2 successive transactions are saved, frequently the first transaction will not be displayed on the 2nd device. If a "wake up" transaction is created prior to the entry of the real transaction, then the reliability can be restored. Again, this is a kludgy solution; does anyone have a better way to handle this?
Key-value iCloud transactions are almost instantaneous, error-free and bulletproof. How can this be achieved using Core Data, or is Core Data just not appropriate for complex (multiple-relationship) business transactions?
Thanks for any help or ideas!

Bootstrapping data at application startup with Simperium

As someone who experienced the pain of iCloud while trying to prototype iCloud-enabling one of our CoreData apps, I find Simperium very promising, but I'm interested in seeing how it handles some of the sharp edges.
One issue I came across was how to gracefully handle bootstrapping data when the application starts up. The first time a user launches our app, we will load some default data into our CoreData database. If a user launches the app first on the iPhone and then later on the iPad, they will end up getting the bootstrap data duplicated on both devices because of syncing. With iCloud, the solution was to hook into the iCloud merge process.
How would I handle this with Simperium?
There are at least a couple ways to do this.
You can hardcode the simperiumKey for each seeded object. For example, in a notes app, if every new user gets a welcome note, you can locally create that note with the simperiumKey of welcomeNote. This will ensure that only one welcome note will ever exist in that user's account (on any device). With this approach, there can be some redundant data transfer, so it's best if there's not a large amount of seeded data. On the other hand, this approach is good if you want data to be immediately available to new users even if they're offline when they first launch your app.
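As a rough sketch of the first approach, assuming a Core Data entity named Note that is managed by Simperium (so it has a writable simperiumKey attribute) and a content field, both invented for this example:

    import CoreData

    func seedWelcomeNoteIfNeeded(in context: NSManagedObjectContext) {
        // Only create the seed object if it isn't already present locally.
        let request = NSFetchRequest<NSManagedObject>(entityName: "Note")
        request.predicate = NSPredicate(format: "simperiumKey == %@", "welcomeNote")
        guard let count = try? context.count(for: request), count == 0 else { return }

        let note = NSEntityDescription.insertNewObject(forEntityName: "Note", into: context)
        // The hardcoded key is what guarantees only one welcome note ever exists
        // in the user's account, no matter how many devices create it locally.
        note.setValue("welcomeNote", forKey: "simperiumKey")
        note.setValue("Welcome to the app!", forKey: "content")
        try? context.save()
    }
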
With Simperium, you also have the option to use a server process. You can seed new user accounts with data by using a Python or Ruby listener that runs some code when accounts are created. This is a good approach if there's a large amount of data, but has the disadvantage that users need to be online before the seeded data will transfer (and of course the transfer itself will take some time).
There are subtleties with these approaches. With the first approach, using the welcomeNote example, if your user deletes the welcomeNote and subsequently reinstalls your app in the future, the welcomeNote will get resurrected (but never duplicated) because it's being created locally. This is often acceptable. With the second approach, the welcomeNote would be seeded once and only once, so it will never get resurrected even if your app is reinstalled.
