Firebase .observeSingleEvent(of:with:) method is retrieving cached/old data [duplicate] - ios

This question already has answers here:
Firebase Offline Capabilities and addListenerForSingleValueEvent
(4 answers)
Closed 4 years ago.
[Disclaimer] I have personally posted and answered this question after struggling with it myself, having noticed that many people still run into the same issue.
Context
I am developing an iOS mobile application and - for this particular project - decided to use the Firebase Realtime Database as my backend infrastructure.
Problem
When querying data at a specific node using the .observeSingleEvent(of:with:) method, I always find myself retrieving either cached or old data rather than the newly updated data on the server.
In some cases, calling the method twice in a row retrieves the desired server data.
Attempts
Used .keepSynced(true) on the relevant node which, according to the Firebase documentation:
automatically downloads the data at these locations and keeps it in sync even if the reference has no active listeners
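For reference, enabling this looks roughly like the following minimal sketch (the posts path and the postsRef name are placeholders, not from the original question):

import FirebaseDatabase

// Optionally pair this with on-disk persistence:
// Database.database().isPersistenceEnabled = true

// Keep the data at this location synchronized in the local cache,
// even while no other observers are attached to it.
let postsRef = Database.database().reference().child("posts")
postsRef.keepSynced(true)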

Overview
Going through the documentation, you will notice that there are two primary ways of querying data from the Firebase Realtime Database in your iOS application:
The .observe(_:with:) method which, according to the Firebase Documentation, continuously listens for changes at a particular node and triggers the callback every time the data at that node changes.
This method is triggered once when the listener is attached and again every time the data, including any children, changes. The event callback is passed a snapshot containing all data at that location, including child data. If there is no data, the snapshot will return false when you call exists() and nil when you read its value property.
The .observeSingleEvent(of:with:) method which, according to the Firebase Documentation, is called exactly once.
In some cases you may want a callback to be called once and then immediately removed, such as when initializing a UI element that you don't expect to change. You can use the observeSingleEventOfType method to simplify this scenario, [in which] the event callback [is triggered] once and then does not trigger again.
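In code, the two look roughly like this (a minimal sketch; myRef and its path are placeholders):

import FirebaseDatabase

let myRef = Database.database().reference().child("some/path")

// Continuous listener: fires once when attached, then again on every change.
myRef.observe(.value) { snapshot in
    print("Current value:", snapshot.value ?? "nil")
}

// One-shot listener: fires exactly once and is then removed automatically.
myRef.observeSingleEvent(of: .value) { snapshot in
    print("Value at read time:", snapshot.value ?? "nil")
}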
Problem
After going through the different possible methods of querying your data, you've realized that the .observeSingleEvent(of:with:) method better suits your current database-reading needs. However, implementing it in your application keeps retrieving cached and old data no matter how many times you modify your database. You've called .keepSynced(true) on the relevant database reference, to no avail. Yet when you opt for the .observe(_:with:) method instead, everything works perfectly fine.
So what might be the issue?
Solution
The most likely cause of this problem is invalid database security rules. Incorrect rules can prevent you from retrieving your desired data and from keeping your Realtime Database synchronized.
Let's assume you are trying to synchronize the myRef database reference. You need to set rules that allow reading from this reference - something along the lines of ".read": true.
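Purely as an illustration, a rules snippet allowing public reads at that node might look like the following (myRef is the placeholder node name from above; in a real app you would scope reads with auth conditions instead of a blanket true):

{
  "rules": {
    "myRef": {
      ".read": true
    }
  }
}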
[Warning] Please be careful with these database security rules. Incorrect rules can lead to seriously undesirable behavior, such as strangers reading from and/or writing to your database. A good video on how to set sound security rules is The key to firebase security - Google I/O 2016

Related

Some questions about keepSynced(true) on a limited query reference

I have a Firebase-backed app with the following database structure:
posts
  /uid
    /postId
Originally, I'd load data from the posts/uid node using ObserveEventOfType with .childAdded. This would frequently (~5 times a day) load stale data for all users of my app simultaneously. When attempting to update the data by making a new post, Firebase would still return stale data.
As a result, I decided to try keepSynced. Now, if my reference looked like this:
reference = Database.database().reference().child("posts").child(uid)
keepSynced would load all of the data at that node, which could amount to very large downloads if there are many children in that node. So, I decided to change the reference/query to:
reference = Database.database().reference().child("posts").child(uid).queryLimited(toLast: 25)
When turning keepSynced on for this query, it successfully syncs the last 25 children of the node. However, I am still facing the issue of receiving stale data rather frequently. So here are my questions:
When adding the keepSynced mode on the limited query, does it only sync from the initial node you added it to, or does it always just sync the 25 latest children under that node?
Where is the best place to add the keepSynced(true) line in code? Before we load the reference, in viewWillAppear, or inside of the actual download callback?
Similarly, where is the best place to use keepSynced(false)?
Do the keepSynced listeners delete when the app fades into the background?
Why does keepSynced sometimes not account for child updates?
I currently use keepSynced(true) inside of the function I use to load posts which is called on viewDidLoad.
Thanks in advance.
As its name implies, keepSynced(true) keeps whatever query or reference you call it on synchronized in the local cache. It quite literally just attaches an empty observer to that query/reference. So for your Database.database().reference().child("posts").child(uid).queryLimited(toLast: 25) query, it will sync the last 25 child nodes and keep synchronizing those (removing previous ones as new ones are added).
The Firebase Realtime Database caching mechanism works most reliably if you repeatedly listen for the exact same data. Specifically, a .value listener attached to Database.database().reference().child("posts").child(uid) may not see data that was cached through Database.database().reference().child("posts").child(uid).queryLimited(toLast: 25). This is because the Firebase client guarantees never to fire events for partial updates, and in this example it can't guarantee that it has all the data under the first reference.
For your questions:
See above...
It's most common to add them in viewWillAppear.
I'm not sure why you'd want to call keepSynced(false), so I can't recommend anything there.
Not sure if this is what you mean, but keepSynced(true) is not persisted between runs of the app. So you have to call keepSynced(true) each time your app/view starts.
See above...
In general you seem to be trying to work around the way the API works by calling it in different ways. I typically don't see great results from that. If you want your app to behave differently than the API does, consider creating a custom wrapper and caching the data there.
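As a rough sketch of the pattern described above - keeping the limited query synced from viewWillAppear and observing that exact same query (class, property, and path names are placeholders, not part of the original answer):

import UIKit
import FirebaseDatabase

final class PostsViewController: UITableViewController {
    // Placeholder: the current user's id, as in the question.
    var uid: String = ""

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)

        // Keep the same limited query synced that we also observe below,
        // so the cache and the listener cover identical data.
        let query = Database.database().reference()
            .child("posts")
            .child(uid)
            .queryLimited(toLast: 25)
        query.keepSynced(true)

        query.observe(.value) { snapshot in
            // Rebuild the local posts array from snapshot.children
            // and reload the table view here.
        }
    }
}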

Persist offline changes separately from original data in Core Data

I'm in the middle of adding an "offline mode" feature to an app I'm currently working on. Basically the idea is that users should be able to make changes to the data - for example, edit the description of an item - without being connected to the internet, and the changes should survive between app launches.
Each change would normally result in an API request when working online, but the situation is different in offline mode.
Right now this is implemented by storing all data coming from the API in a Core Data database that acts as a cache. Entities that can be edited by the user have, in addition to their normal attributes, the following flags:
locallyCreated - whether the object was created offline
locallyDeleted - whether the object was deleted offline
locallyUpdated - whether the object was updated offline
This makes it possible to look for new/deleted/updated objects and send corresponding API requests when doing sync.
This worked well for creating and deleting objects. However, one disadvantage I found with this approach is that when new data is retrieved from the API, all local changes (i.e. attributes of objects marked as locally updated) are lost, which means they have to be stored separately somehow.
What would be the best way to approach this problem?
Since you have your locallyUpdated key, the obvious answer is to modify your code that imports server changes, so that it doesn't overwrite changes to any object marked as changed. One way or another you need to avoid overwriting those changes, and you're already keeping a record of which objects have changes, so you already have the tools for a basic solution.
But you'll soon run into the complexity of syncing data. What if the local object has changes on one key, but the incoming data from the server has changes on a different key? You can't resolve that just by knowing that the local copy has changed somehow. Maybe you decide that the server always wins, or that the local copy always wins. Those are easy, if they make sense for your app. If you need to merge changes though, you have some work ahead of you. You would need to record not only a Boolean value indicating that changes were made, but also a list of which keys had changed. This can get complicated, but it's the nature of data syncing.
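A minimal sketch of the simpler "don't overwrite locally changed objects" approach (the Item entity, objId attribute, and JSON keys here are assumptions for illustration, not from the question):

import CoreData

func applyServerItem(_ serverData: [String: Any], in context: NSManagedObjectContext) throws {
    guard let objId = serverData["id"] as? String else { return }

    // Look up the cached object by its server identifier.
    let request: NSFetchRequest<Item> = Item.fetchRequest()
    request.predicate = NSPredicate(format: "objId == %@", objId)
    request.fetchLimit = 1

    let item = try context.fetch(request).first ?? Item(context: context)
    item.objId = objId

    // Skip objects with pending offline edits so the import
    // does not clobber them; they get pushed on the next sync.
    guard item.locallyUpdated == false else { return }

    item.itemDescription = serverData["description"] as? String
    // ... copy the remaining server attributes ...
}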

Async logging of a static queue in Objective-C

I'd like some advice on how to implement the following in Objective-C. The actual application is related to an A/B testing framework I'm working on, but that context shouldn't matter too much.
I have an iOS application, and when a certain event happens I'd like to send a log message to an HTTP service endpoint.
I don't want to send the message every time the event happens. Instead I'd prefer to aggregate them, and when it gets to some (configurable) number, I'd like to send them off async.
I'm thinking of wrapping a static NSMutableArray in a class with an add method. That method can check whether we have reached the configurable maximum number and, if we have, aggregate the messages and send them off asynchronously.
Does Objective-C offer any better constructs to store this data? Perhaps one that helps handle concurrency issues? Maybe some kind of message queue?
I've also seen some solutions with dispatching that I'm still trying to get my head around (I'm new).
If the log messages are important, keeping them in memory (in an array) might not suffice. If the app quits or crashes, the array will not persist into the next execution.
Instead, you should save them to a database with a 'synced' flag. You can trigger the sync module on every insert to check whether the number of entries with the flag set to false has reached the threshold, then trigger the upload and set the flag to true for all uploaded records (or simply delete the synced records). This also separates your logging module and your syncing module so that both work independently.
You will find plenty of material online on syncing an SQLite database or Core Data; simply search for iOS database sync. If your requirements are not very complex, and you don't mind using third-party or open-source code, it is usually better to go for a readily available solution instead of reinventing the wheel.
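A rough sketch of that shape, shown in Swift for brevity (the same GCD and persistence ideas carry over directly to Objective-C; LogBuffer, the threshold, and the upload stub are all made up for illustration):

import Foundation

final class LogBuffer {
    private let queue = DispatchQueue(label: "log.buffer")  // serializes all access
    private var pending: [String] = []
    private let threshold: Int

    init(threshold: Int = 20) {
        self.threshold = threshold
    }

    func add(_ message: String) {
        queue.async {
            self.pending.append(message)
            // In a real app, also persist the message (SQLite/Core Data)
            // with a "synced" flag so it survives quits and crashes.
            if self.pending.count >= self.threshold {
                let batch = self.pending
                self.pending.removeAll()
                self.upload(batch)
            }
        }
    }

    private func upload(_ batch: [String]) {
        // Send the aggregated batch to the HTTP endpoint asynchronously;
        // mark the records synced (or delete them) only on success.
    }
}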

How is data typically reloaded/displayed on a device when only a tiny amount of data changes after updating the database? (Conceptually)

Here's an example of what I am talking about:
Take Twitter for iOS. Whenever you tweet, the tweet is sent to the database, and then it is also displayed on your device as part of the list of tweets.
How is the list of tweets that you see on your device updated after just sending one tweet? Here are some possible ways I thought it could be done, but what I'm asking is which one is the best method:
The whole list of recent Tweets is re-downloaded from the remote Twitter server after sending a tweet (I highly doubt this, as this would take a relatively long time, when it really is just appending one Tweet to the array of Tweets displayed)
The local array that holds the Tweet objects is updated separately from the database (For example, it updates the database, and then updates its array with the same data you sent to the database, and never downloads the Tweet you just sent since you don't need to, because you already have it locally, since you composed it)
Is Core Data capable of updating the remote data server AND the array all in one (or relatively few) step(s)? (Sorry if this is the obvious answer and if it sounds like I didn't look into it, but I did read about Core Data and started a tutorial. It's just that there is so much content that it would take me a whole day or two just to figure out if it's appropriate for my application.)
Is there an alternative way of managing this?
Also, if its one of the latter two ideas above, are you able to update the table view cells by just updating the local array and reloading the cells from that array without loading your one tweet from the database? I'm just curious about what would be the most efficient way of doing this.
So again, my main question reworded is: how do you keep the data you sent to a remote database and the local data (stored in a mutable array) in sync whenever you make a tiny single update (such as sending a tweet), without having to reload all of the data from the database when other content (i.e. other tweets) is already loaded?
(I am aware that no one except Twitter's developers knows exactly how Twitter actually does it, but I'm just using this Twitter functionality as an example. The same concept could be applied to any similar app.)
(Also, this is a conceptual question about dataflow, so I don't need to see any code, but suggestions to use different technologies like Core Data, or just updating an array will be appreciated.)
(I've been looking into this, and all the different ways of doing it, and it is becoming very time consuming, so I figured to ask you guys who have experience. Additionally, this could help someone else who has similar questions.)
(Sorry if it looks like I'm asking a bunch of questions, but I'm basically asking the same question in different ways, and offering possible solutions.)
Any insight is appreciated!
Immutable messages like tweets are actually quite easy to handle -- server side, and in your app.
When you send a tweet from your client to the server, you also update your "main context" (see Managed Object Context), which in turn sends notifications to your controller (see NSFetchedResultsController), which in turn updates your table view according to your local model residing in the managed object context.
Updating from the server is just merging the local tweets with the new ones added in the meantime.
Since there is no mutable tweet, synchronization is really no big deal. As mentioned in the comment, if there were mutable tweets (or any kind of mutable messages), the synchronization would become much more complex.
Core Data will NOT automatically update a remote server. But there are solutions for "viewing" a remote database through Core Data - see NSIncrementalStore and related third-party libraries (e.g. AFIncrementalStore).
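A condensed sketch of that flow (Tweet, text, and createdAt are placeholder Core Data names; the NSFetchedResultsController is assumed to already be driving the table view):

import CoreData

func didSendTweet(text: String, in context: NSManagedObjectContext) {
    // 1. Fire off the API request to the server (network code omitted).

    // 2. Insert the tweet you just composed into the main managed object
    //    context; the NSFetchedResultsController observing that context
    //    inserts the new row into the table view, with no re-download.
    let tweet = Tweet(context: context)
    tweet.text = text
    tweet.createdAt = Date()

    try? context.save()
}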
This is ridiculously trivial. You update your local database and send off the remote update at the same time.
You use the remote response to mark your local record as synced, or you try updating again later.

Optimal way of syncing Core Data with server-side data?

I have what I would presume is a very common situation, but as I'm new to iOS programming, I'm not sure of the optimal way to code it.
Synopsis:
I have data on a server which can be retrieved by the iPhone app via a REST service. On the server side, the data consists of objects with a foreign key (an integer id number).
I'm storing the data retrieved via REST in Core Data. The managed objects have an "objId" attribute so that I can uniquely identify the managed objects in the rest of my code.
My app must always reflect the server data.
On subsequent requests made to the server:
some objects may not be returned because they have been deleted on the server - in which case I need to delete the corresponding objects from Core Data, so that I reflect the state of the server correctly.
some objects have attributes which have changed, therefore the corresponding managed objects need updating with the new data.
my solution - and question to you
To get things going in my app, I made the easiest solution of deleting all objects in Core Data, then adding all new objects in, created with the latest server side data.
I don't think this is the best way to approach it :) As I progress with my app, I now want to link up my table view with NSFetchedResultsController, and have realised that my approach of deleting everything and re-adding it is not going to work any more.
What is the tried and trusted way of syncing Core Data with server side data?
Do I need to make a fetch request for each object id I get back from the server, and then update the object with the new data?
And then go through all of the objects in core data and see which ones have not been updated, and delete those?
Is that the best way to do it? It just seems a little expensive to do a fetch for each object in Core Data, that's all.
Pseudo code is fine for any answers :)
thanks in advance!
Well, consider your download. First, you should be doing this in a background thread (if not, there are lots of SO posts that talk about how to do that).
I would suggest that you implement what makes sense first, and then, after you can get valid performance data from running Instruments, consider performance optimization. Of course, use some common sense on "easy" performance stuff (your design can take care of the big ones easily enough).
Anyway, get your data from the online resource, and then, for each object fetched, use the "unique object id" to fetch the object from Core Data. You know there is only one object with that ID, so you can set fetchLimit to 1 on your fetch request. You can also configure your "object id" attribute to be indexed in the database. This way, you get the fastest search from the underlying database, and it knows to stop looking once it finds your one object. This should be pretty snappy.
Now you have your object. Change any attributes necessary. Save, rinse, and repeat.
Furthermore, for several reasons, you may want to know when objects were last updated. I'd suggest adding a timestamp to each object and setting it to the current time whenever the object is changed. This will also help in deleting objects. Since your online database does not tell you which objects were deleted, you must have some way of knowing that an item is "old and no longer needed."
An easy way to do this is to remember the time you started your update. After processing all objects from the download, you then have a way to find all the objects that were deleted from the online database: basically, any object with a "last update" timestamp earlier than the time you began the update should be removed (since it was not added or modified in the last update). You can also index the database on this field, which will make finding those objects faster - though unless your database is huge, I'd wait to see what Instruments has to say about that one.
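Putting the above together, a sketch of one sync pass (ServerObject, its attributes, and the JSON keys are assumptions for illustration; objId mirrors the question):

import CoreData

func sync(serverObjects: [[String: Any]], in context: NSManagedObjectContext) throws {
    let syncStart = Date()

    for serverData in serverObjects {
        guard let objId = serverData["id"] as? Int64 else { continue }

        // Fetch the single matching object by its indexed objId attribute.
        let request: NSFetchRequest<ServerObject> = ServerObject.fetchRequest()
        request.predicate = NSPredicate(format: "objId == %lld", objId)
        request.fetchLimit = 1

        let object = try context.fetch(request).first ?? ServerObject(context: context)
        object.objId = objId
        object.name = serverData["name"] as? String
        object.lastUpdated = syncStart  // touched during this pass
    }

    // Anything not touched during this pass no longer exists on the server.
    let staleRequest: NSFetchRequest<ServerObject> = ServerObject.fetchRequest()
    staleRequest.predicate = NSPredicate(format: "lastUpdated < %@", syncStart as NSDate)
    for stale in try context.fetch(staleRequest) {
        context.delete(stale)
    }

    try context.save()
}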

Resources