In the iOS Firebase SDK, if I perform a .ChildAdded query, for example, and then later perform the same query again, will the query be served from the local cache, or will it hit the Firebase servers again?
In general: the Firebase client tries to minimize the number of times it downloads data. But it also tries to minimize the amount of memory/disk space it uses.
The exact behavior depends on many things, such as whether another listener has remained active on that location and whether you're using disk persistence. If you have two listeners for the same (or overlapping) data, updates will only be downloaded once. But if you remove the last listener for a location, the data for that location is removed from the (memory and/or disk) cache.
Without seeing a complete piece of code, it's hard to tell what will happen in your case.
Alternatively: you can check for yourself by enabling Firebase's logging with `[FIRDatabase setLoggingEnabled:YES]`.
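For example, a minimal sketch of enabling logging before attaching a listener (the `items` path is made up for illustration) — the console output will then show whether a listen is served from the cache or goes out to the server:

```objc
@import FirebaseDatabase;

// Enable verbose logging before creating any database references.
[FIRDatabase setLoggingEnabled:YES];

// Attach the .ChildAdded observer and watch the log output to see
// whether events come from the local cache or from the server.
FIRDatabaseReference *ref = [[FIRDatabase database] referenceWithPath:@"items"];
[ref observeEventType:FIRDataEventTypeChildAdded
            withBlock:^(FIRDataSnapshot *snapshot) {
    NSLog(@"child added: %@", snapshot.key);
}];
```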
Related
In my application there are several requests that are made during execution and one that is made at the end. I would like the one that runs at the end to be sure to arrive, so I would like to enable offline functionality on this last request but not sure about the others. Is this possible and if so how to do it?
Thanks for your attention
There is no way to control offline availability on a granular level in the Firebase Realtime Database API. Offline disk caching is either on or off, and when it's on Firebase caches all recent data it receives, and all pending writes.
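To illustrate the all-or-nothing nature of that switch, a short sketch (the `finalRequests` path is a made-up example); persistence must be set before any other database call, and with it on, all pending writes — including the final one in the question — are persisted to disk and retried on the next launch:

```objc
@import FirebaseDatabase;

// Persistence is a single global switch; set it before any other
// database calls. There is no per-write or per-query opt-in.
[FIRDatabase database].persistenceEnabled = YES;

// You can at least keep one important location synced so its data
// stays fresh in the cache even without an active observer:
[[[FIRDatabase database] referenceWithPath:@"finalRequests"] keepSynced:YES];
```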
My use-case for Firebase is slightly different than most. We do not use FB exclusively for our back-end. We have a large MariaDB server dealing with relations and all data.
Our goal with FB is to allow clients on iOS devices to have their specific data available. We need to load the data once and then listen for changes to this particular data. Here is a rough overview of how it works:
The main ViewController is loaded
Firebase is initialized and we listen for FIRDataEventTypeChildAdded. (Persistence is enabled)
Firebase loads all matching records. We then loop through and store them locally in the internal SQLite DB.
In the normal userflow, we push other ViewControllers on the screen. The problem is, once the main ViewController is loaded, FIRDataEventTypeChildAdded fires again for each record.
Questions:
When FIRDataEventTypeChildAdded fires again, is it loading the data from its internal cache (Persistence?) or is it re-downloading everything from the Firebase server? I've used Network Link Conditioner to completely cut the internet connection, and when I do, it does not fire the FIRDataEventTypeChildAdded at all, but as soon as the net comes back, it fires FIRDataEventTypeChildAdded for every single record.
How can I achieve the above where we load all records on login and then only listen for changes to those records? I am already using orderBy and startingAt so if the answer involves one of the above, I cannot add another "hasDownloaded=yes" filter.
Thanks in advance.
A Firebase reference listener connects to the server once, and stays connected until that query is turned off. As long as the reference being listened to is in memory, there is only one connection made to the database. Once this connection happens, all data will come through as child added data again.
The issue here is not so much with Firebase but that your app is continuously re-adding listeners to a reference, so the data is re-downloaded from the network every time.
So to your first question: yes, it is re-downloading from the network. To your second: you just need to make sure the Firebase query never leaves memory. This can be done by making your query globally scoped, or simply by not turning off the query when the view controller exits scope (then you need to make sure not to re-add multiple queries on subsequent loads).
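One hedged sketch of that approach: hold the reference and observer handle in a singleton so the listener is attached exactly once for the app's lifetime (the `SyncManager` name and `records` path are invented for illustration):

```objc
@import FirebaseDatabase;

@interface SyncManager : NSObject
@property (nonatomic, strong) FIRDatabaseReference *ref;
@property (nonatomic, assign) FIRDatabaseHandle handle;
+ (instancetype)shared;
- (void)startListeningIfNeeded;
@end

@implementation SyncManager

+ (instancetype)shared {
    static SyncManager *instance;
    static dispatch_once_t once;
    dispatch_once(&once, ^{ instance = [SyncManager new]; });
    return instance;
}

- (void)startListeningIfNeeded {
    if (self.ref != nil) { return; }  // guard against re-adding the observer
    self.ref = [[FIRDatabase database] referenceWithPath:@"records"];
    self.handle = [self.ref observeEventType:FIRDataEventTypeChildAdded
                                   withBlock:^(FIRDataSnapshot *snapshot) {
        // Write the snapshot into the local SQLite store here.
    }];
}

@end
```

Calling `[[SyncManager shared] startListeningIfNeeded]` from the main view controller is then safe to repeat: subsequent calls are no-ops, so pushing and popping view controllers no longer re-triggers a full download.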
When turning on persistence for Firebase in iOS what actually happens to my observers and how they behave on a technical level?
I have a good idea how it should work on a high-level based on this https://firebase.google.com/docs/database/ios/offline-capabilities - Firebase essentially keeps a cached copy of the data you can access whilst offline.
What I don't understand is how many times my observers should fire and with what information.
Does Firebase always trigger my observers once with any cached data first (or null if there isn't any data), followed by the server data?
Or does it only send the cached data if it exists, followed by the server data?
Is there any difference between observeSingleEventOfType and a continuous observer's behaviour when in persistence mode?
In our app with persistence enabled, I have noticed:
Firebase just sending the server data
Firebase sending the cached data if it exists then the server data.
Firebase sending the cached data and null if it doesn't exist followed by the server data.
It would be good to clear this up so we know which should be the normal behaviour :)
It's actually pretty simple. When you attach an observer (whether using observeEventType or observeSingleEventOfType), Firebase will:
1. Immediately raise events with any complete cached data.
2. Request updated data from the server and, when it arrives, raise new events if the data is different than what was cached.
There are a couple subtleties that fall out of this though:
We'll only raise events with cached data if it is complete. This means:
If we have no cached data (you haven't observed this location before), we will not raise events with null or similar. You won't get any events until we get data from the server.
If you have partial data for this location (e.g. you observed /foo/bar previously but now you're observing /foo), you will get ChildAdded events for complete children (e.g. /foo/bar), but you won't get a Value event (e.g. for /foo) until we've gotten complete data from the server for the location you're observing.
If you're using observeSingleEventOfType, you're explicitly asking for only a single event, so if you have cached data, step 1 will happen but step 2 will not, which may not be what you want (you'll never see the latest server data).
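Assuming persistence is enabled and the location has been observed before, the difference can be sketched like this (the `settings` path is arbitrary):

```objc
@import FirebaseDatabase;

FIRDatabaseReference *ref = [[FIRDatabase database] referenceWithPath:@"settings"];

// Continuous observer: fires once with complete cached data (if any),
// then fires again whenever the server data differs from the cache.
[ref observeEventType:FIRDataEventTypeValue
            withBlock:^(FIRDataSnapshot *snapshot) {
    NSLog(@"value (cache first, then server updates): %@", snapshot.value);
}];

// Single event: with cached data present, this may deliver only the
// cached value and never the fresher server value.
[ref observeSingleEventOfType:FIRDataEventTypeValue
                    withBlock:^(FIRDataSnapshot *snapshot) {
    NSLog(@"single value (possibly stale cache): %@", snapshot.value);
}];
```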
Hope this helps!
I'd like some advice on how to implement the following in Objective C. The actual application in related to an AB testing framework I'm working on, but that context shouldn't matter too much.
I have an iOS application, and when a certain event happens I'd like to send a log message to an HTTP service endpoint.
I don't want to send the message every time the event happens. Instead I'd prefer to aggregate them, and when it gets to some (configurable) number, I'd like to send them off async.
I'm thinking to wrap up a static NSMutableArray in a class with an add method. That method can check to see if we have reached the configurable max number, if we have, aggregate and send async.
Does objective-c offer any better constructs to store this data? Perhaps one that helps handle concurrency issues? Maybe some kind of message queue?
I've also seen some solutions with dispatching that I'm still trying to get my head around (I'm new).
If the log messages are important, keeping them in memory (array) might not suffice. If the app quits or crashes the NSArray will not persist on subsequent execution.
Instead, you should save them to a database with a 'sync' flag. You can trigger the sync module on every insert to check whether the entries with the sync flag set to false have reached a threshold; if so, trigger the upload and set the sync flag to true for all uploaded records, or simply delete the synced records. This also helps you separate your logging module and syncing module so that both work independently.
You will find plenty of material online on syncing an SQLite DB or Core Data — simply google "iOS database sync". If your requirements are not very complex, and you don't mind using third-party or open source code, it is always better to go for a readily available solution instead of reinventing the wheel.
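For the in-memory half of the question, here is a minimal sketch of a thread-safe aggregator using a serial dispatch queue instead of manual locking (class and method names are made up; as the answer above says, persist the batch to a database first if the messages are important):

```objc
@import Foundation;

@interface LogBuffer : NSObject
- (void)add:(NSDictionary *)message;
@end

@implementation LogBuffer {
    NSMutableArray *_messages;
    dispatch_queue_t _queue;  // serial queue serialises all access to _messages
    NSUInteger _maxCount;     // configurable flush threshold
}

- (instancetype)init {
    if ((self = [super init])) {
        _messages = [NSMutableArray array];
        _queue = dispatch_queue_create("com.example.logbuffer", DISPATCH_QUEUE_SERIAL);
        _maxCount = 20;
    }
    return self;
}

- (void)add:(NSDictionary *)message {
    // All mutation happens on the serial queue, so no locks are needed.
    dispatch_async(_queue, ^{
        [self->_messages addObject:message];
        if (self->_messages.count >= self->_maxCount) {
            NSArray *batch = [self->_messages copy];
            [self->_messages removeAllObjects];
            [self flushToServer:batch];
        }
    });
}

- (void)flushToServer:(NSArray *)batch {
    // Serialize the batch and POST it with NSURLSession (omitted).
    // On failure, re-queue the batch or write it to disk so it isn't lost.
}
@end
```

The serial queue is the "better construct" the question asks about: `add:` returns immediately, and GCD guarantees the blocks run one at a time, which sidesteps the concurrency issues of a shared static NSMutableArray.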
I'm testing Core Data and iCloud with UIManagedDocument and ubiquity options (NSPersistentStoreUbiquitousContentNameKey and NSPersistentStoreUbiquitousContentURLKey).
Everything is working OK. My devices get synced without problems and in a reasonable time. The DB is small (below 100K).
As I said, I'm testing the app and making a lot of changes to the DB, and as a result a lot of transaction logs are generated. The problem I have is that if I delete and reinstall the app on one of the devices used for testing (without deleting iCloud data), the app takes a very long time to open the document. openWithCompletionHandler takes minutes, sometimes never finishing. If I turn on debugging (-com.apple.coredata.ubiquity.logLevel 3) I can see that there is a long wait, after which the DB is reconstructed from transaction logs.
If I remove the iCloud data and re-insert the data on the first device, the second one syncs without problems. Because of that, I think the reason for the delay is the high number of transaction logs (20-30 while testing, as I can see on developer.icloud.com).
According to the "Managing Core Data iCloud Transaction Logs" documentation, Core Data will handle this automatically, but I can't see any deletion happening. Perhaps that just needs more time.
My questions are: Do transaction logs ever get consolidated? Can I force the consolidation of logs? Is there another recommended option?
I only store the subset of essential information needed for syncing in iCloud Core Data file. I have another local file with full DB, so I can reconstruct the iCloud DB without any major loss of information. Perhaps I could delete iCloud DB when I detect a bunch of logs and re-create it. Do you think this is a good option ?
Thank you for helping.
Do transaction logs ever get consolidated?
That is how it's supposed to work.
Can I force the consolidation of logs?
No. There is no API that directly affects the existence of transaction logs. The iCloud system will consolidate them at some point, but there's no documentation regarding when that happens, and you can't force it.
Another recommended option?
You can limit the number of transaction logs indirectly: save changes less frequently. A transaction log corresponds to saving changes in Core Data. It may not make much of a difference because, honestly, 20-30 transaction logs is not very many. You might be able to reduce the number of log files, but you'll still have the same amount of data in them.
Transaction logs aren't really your problem. As you observed, there's a long wait before iCloud starts running through the transaction logs. During that delay, iCloud is communicating with Apple's servers and downloading the transaction logs. Some of this is affected by network speed and latency, and the rest is just the way iCloud is.
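If you do want to save less frequently, one way to sketch it is to coalesce saves with a short delay so that several edits share one Core Data save (and hence one transaction log). The `context` property and the 5-second delay here are illustrative assumptions:

```objc
@import CoreData;

// Call scheduleSave after each edit; only the last call within the
// delay window actually triggers a save, batching rapid edits into
// a single transaction log.
- (void)scheduleSave {
    [NSObject cancelPreviousPerformRequestsWithTarget:self
                                             selector:@selector(saveNow)
                                               object:nil];
    [self performSelector:@selector(saveNow) withObject:nil afterDelay:5.0];
}

- (void)saveNow {
    NSError *error = nil;
    if (![self.context save:&error]) {  // self.context: your NSManagedObjectContext
        NSLog(@"save failed: %@", error);
    }
}
```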