Only read new events from Firebase Database - iOS

I was halfway done with implementing Core Data in my iOS app when I realized that Firebase has offline capabilities that would pretty much mimic what I was trying to accomplish the whole time.
My database is structured as follows:
- Users
  - user1
  - user2
- Groups
  - group1
    - members
      - user1
    - events
      - event1_By_Auto_Key
      - event2_By_Auto_Key
I wanted to locally store all the events a user has already fetched so that I wouldn't have to read all of them every single time I need to get a group's events. Now that I'm planning to stick with Firebase's offline capabilities instead of using Core Data, I have a question about how to efficiently read events from the database.
As seen from my database's structure the events are stored using the childByAutoId().setValue(data) method, meaning the keys are unknown when inserted. So my console for a given group might look like this:
My question is: how can I only read the new events from a group? The reason I was implementing Core Data was so that I could cache already fetched events, but I'm not sure how I can make sure that I don't re-read data.

There are a few strategies you could use. Since the generated IDs always sort lexicographically after any existing ones, you can use startAt() on your query with the newest key you already have; you just need to skip the record that matches that last ID. Alternatively, if you keep a timestamp in each event, you can use orderByChild() with the last timestamp plus one millisecond, so you don't get back any records you already have. It would be something like:
function getNewEvents(group, arrayOfExistingIds) {
  // Push IDs sort lexicographically in creation order, so the newest key
  // we already have is the last one after sorting.
  let lastId = arrayOfExistingIds.sort().pop();
  admin.database().ref('/Groups/' + group + '/events')
    .orderByKey()
    .startAt(lastId)
    .on('child_added', function (snap) {
      // startAt() is inclusive, so skip the record we already have.
      if (snap.key === lastId) return;
      console.log('New record: ' + snap.key);
    });
}
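And on the iOS client, the timestamp variant could look roughly like this in Swift. This is just a sketch; it assumes each event stores a millisecond timestamp under a child named timestamp, and that the path matches the structure above:
import FirebaseDatabase

// Observe only events newer than the newest one already cached locally.
// lastTimestamp is the millisecond timestamp of the newest cached event.
func observeNewEvents(inGroup group: String, after lastTimestamp: Double) {
    Database.database().reference(withPath: "Groups/\(group)/events")
        .queryOrdered(byChild: "timestamp")
        .queryStarting(atValue: lastTimestamp + 1) // skip anything we already have
        .observe(.childAdded) { snapshot in
            print("New event: \(snapshot.key)")
        }
}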

Firebase gives you 10MB of persistent storage to cache recently fetched records. In normal scenarios 10MB is enough space.
You just need to enable offline capabilities (disk persistence).
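On iOS that is a one-line setting; a minimal sketch in Swift (the path is just an example based on the structure in the question):
import FirebaseDatabase

// Enable disk persistence so already-fetched events survive app restarts.
// This must be set before any other Database calls.
Database.database().isPersistenceEnabled = true

// Optionally keep a group's events fresh even while nothing is observing them.
Database.database().reference(withPath: "Groups/group1/events").keepSynced(true)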

Related

Firebase: Maintain a Username Directory

I am creating a social app and want to track whether a username already exists or not. The username list is supposed to grow in the future, and the way I was doing it now was a key-value pair of <string, boolean> like this:
name1: true,
name2: true
All of the above data was to be stored in a single document, and whenever I want to see if a user exists I would call this document and check accordingly. But here's the problem: Firestore's max document size is 1MB, and as the users grow this can become problematic. So I wanted to know from Firebase experts what's the best way to solve this use case in Firestore or the Realtime Database, though since I need to query existence maybe the Realtime DB won't suit that well.
Note that I don't want any of Firestore's querying capabilities; I only need to check whether an entry exists in the record or not, and if not, just add it.
The Realtime Database doesn't have a 1MB limit (since it has no concept of a document, and everything is just a tree of JSON), so I'd typically use that for the index of user names.
Checking whether a name exists is pretty simple there too, and in JavaScript would look something like:
const usernames = firebase.database().ref('usernames');
usernames.child('name1').once('value').then((snapshot) => {
  if (snapshot.exists()) {
    ...
  }
});
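If you do this from an iOS app, the equivalent Swift check would look something like the sketch below (the node and username are just examples). Wrapping the check-and-set in a transaction would make the claim atomic, if that matters for your use case:
import FirebaseDatabase

let usernames = Database.database().reference(withPath: "usernames")

// Read just this one child; exists() tells us whether the name is taken.
usernames.child("name1").observeSingleEvent(of: .value) { snapshot in
    if snapshot.exists() {
        // name1 is already taken
    } else {
        // claim the name
        usernames.child("name1").setValue(true)
    }
}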

Realm Swift: Question about Query-based public database

I’ve seen all around the documentation that Query-based Sync is deprecated, so I’m wondering how I should go about my situation:
In my app (using Realm Cloud), I have a list of User objects with some information about each user, like their username. Upon user login (using Firebase), I need to check the whole User database to see if their username is unique. If I make this common realm using Full Sync, then all the users would synchronize and cache the whole database for each change right? How can I prevent that, if I only want the users to get a list of other users’ information at a certain point, without caching or re-synchronizing anything?
I know it's a possible duplicate of this question, but things have probably changed in four years.
The new MongoDB Realm gives you access to server-level functions. This feature would allow you to query the list of existing users (for example) for a specific username and return true if found or false if not (there are other options as well).
Check out the Functions documentation; there are some examples of how to call a function from macOS/iOS in the Call a Function section.
I don't know the use case or what your objects look like, but an example function to calculate a sum would look something like this. It sums the two numbers passed in the array and returns the result:
your_realm_app.functions.sum([1, 2]) { result, error in
    if let err = error {
        print(err.localizedDescription)
        return
    }
    // The result comes back as an optional AnyBSON value; unwrap the double case
    if case let .double(x)? = result {
        print(x)
    }
}

Multi-threading with core data and API requests

Intro
I've read a lot of tutorials and articles on Core Data concurrency, but I'm having an issue that is not often covered, or not covered in a real-world way, that I'm hoping someone can help with. I've checked the related questions on SO and none that I can find give an answer to this particular question.
Background
We have an existing application which fetches data from an API (on a background thread) and then saves the returned records into Core Data. We also need to display these records in the application at that point.
So the process we currently go through is to:
Make a network request for data (background)
Parse the data and map the objects to NSManagedObjects and save (background)
In the completion handler (main thread) we fetch records from core data with the same order and limit that we requested from the API.
Most tutorials on core data concurrency follow this pattern of saving in one thread and then fetching in another, but most of them give examples like:
NSArray *listOfPeople = ...;
[NSManagedObjectHelper saveDataInBackgroundWithContext:^(NSManagedObjectContext *localContext) {
    for (NSDictionary *personInfo in listOfPeople)
    {
        PersonEntity *person = [PersonEntity createInContext:localContext];
        [person setValuesForKeysWithDictionary:personInfo];
    }
} completion:^{
    self.people = [PersonEntity findAll];
}];
Source
So regardless of the number of records you get back, you just fetch all content. This works for small datasets, but I want to be more efficient. I've read many times not to read/write data across threads, so fetching afterwards gets around this issue, but I don't want to fetch everything, I just want the new records.
My Problem
So, for my real-world example: I want to make a request to my API for the latest information (maybe anything older than my oldest record in Core Data) and save it, then I need the exact data returned from the API in the main thread, ready for display.
So my question is: when I reach my completion handler, how do I know what to fetch, or what the API returned? A couple of methods I've considered so far:
After saving each record, store the ID in a temporary array and then perform some fetch where id IN array_of_ids (see the sketch after this list).
If I am asking for the latest records, I could just use the count of records returned, then use an order by and limit in my request to the latest x records.
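A rough sketch of that first idea in Swift (entity and attribute names are illustrative, not from the code above): collect the IDs returned by the API while saving on the background context, then fetch exactly those rows on the main thread.
import CoreData

// Fetch only the records whose ids were returned by the API in this batch.
func fetchPeople(withIDs apiIDs: [String], in context: NSManagedObjectContext) throws -> [PersonEntity] {
    let request = NSFetchRequest<PersonEntity>(entityName: "PersonEntity")
    request.predicate = NSPredicate(format: "id IN %@", apiIDs)
    request.sortDescriptors = [NSSortDescriptor(key: "id", ascending: true)]
    return try context.fetch(request)
}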
My Question
I realize that the above could be answering my own question, but I want to know if there is a better way, or whether one of those methods is much better to use than the other. I just have this feeling that I am missing something.
Thanks
EDIT:
Neither answer below actually addresses the question. This is about fetching and saving data in the background and then using the returned data in the main thread. I know it's not a good idea to pass data between threads, so the common way around this is to fetch from Core Data after inserting. I want to work out the most efficient way to do that.
Have you checked NSFetchedResultsController? Instead of fetching the presented objects into an array, you would use a fetched results controller in a similar fashion. Through NSFetchedResultsControllerDelegate you would be notified about all the changes performed in the background (rows added, removed, changed), and no manual tracking would be needed.
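A minimal sketch of that approach in Swift (entity, attribute and class names are illustrative):
import CoreData
import UIKit

final class PeopleViewController: UITableViewController, NSFetchedResultsControllerDelegate {
    var viewContext: NSManagedObjectContext!   // the app's main-queue context

    // The controller watches the main-queue context; background saves that are
    // merged into it are reported through the delegate, so no manual tracking
    // of which records the API returned is needed.
    private lazy var resultsController: NSFetchedResultsController<PersonEntity> = {
        let request = NSFetchRequest<PersonEntity>(entityName: "PersonEntity")
        request.sortDescriptors = [NSSortDescriptor(key: "createdAt", ascending: false)]
        let controller = NSFetchedResultsController(fetchRequest: request,
                                                    managedObjectContext: viewContext,
                                                    sectionNameKeyPath: nil,
                                                    cacheName: nil)
        controller.delegate = self
        return controller
    }()

    override func viewDidLoad() {
        super.viewDidLoad()
        try? resultsController.performFetch()
        tableView.reloadData()
    }

    // Called on the main thread whenever rows are inserted, removed or changed.
    func controllerDidChangeContent(_ controller: NSFetchedResultsController<NSFetchRequestResult>) {
        tableView.reloadData()
    }
}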
I feel you are missing the case of two simultaneous API calls. Both storing IDs and counting created entities won't work for that case. Consider adding a timestamp property to each PersonEntity.
Assuming that your intention is to display recently updated persons, the calculation of the oldest timestamp to display can look like this:
@property NSDate *lastViewRefreshTime;
@property NSDate *oldestEntityToDisplay;

(...)

if (self.lastViewRefreshTime.timeIntervalSinceNow < -3) {
    self.oldestEntityToDisplay = self.lastViewRefreshTime;
}
self.lastViewRefreshTime = [NSDate date];
[self displayPersonsAddedAfter:self.oldestEntityToDisplay];
Now, if two API responses return within a period shorter than 3 seconds, their data will be displayed together.

How do I filter Purchase Order query in QBXML to only return records that are not fully received?

When doing a PurchaseOrderQuery in QBXML I am trying to get QuickBooks to only return purchase orders that are not yet processed (i.e. "IsFullyReceived" == false). The response object contains the IsFullyReceived flag, but the query object doesn't seem to have a filter for it??
This means I have to get every single Purchase Order whether or not it's received, then do the filtering logic in my application - which slows down Web Connector transactions.
Any ideas?
Thanks!
You can't.
The response object contains the IsFullyReceived flag, but the query object doesn't seem to have a filter for it??
Correct, there is no filter for it.
You can see this in the docs:
https://developer-static.intuit.com/qbSDK-current/Common/newOSR/index.html
This means I have to get every single Purchase Order whether or not it's received, then do the filtering logic in my application - which slows down Web Connector transactions.
Yep, probably.
Any ideas?
Try querying for only Purchase Orders changed or modified (ModifiedDateRangeFilter) since the last time you synced.
Or, instead of pulling every single PO, keep track of a list of POs that you think may not have been received yet, and then only query for those specific POs based on RefNumber.
Or, watch the ItemReceipt and BillPayment objects, and use that to implement logic about which POs may have been recently filled, since BillPayment and ItemReceipt objects should get created as the PO is fulfilled/received.

CloudKit: Preventing Duplicate Records

I am working through an app that pulls data from an external web service into a private CloudKit database. The app is a single-user app; however, I am running into a race condition that I am not sure how to avoid.
Every record in my external data has a unique identifier that I map to my CKRecord instances. The general app startup flow is:
Fetch current CKRecords for the relevant record type.
Fetch external records.
For every external record, if it doesn't exist in CloudKit, create it via batch create (modification operation).
Now, the issue is that if this process is kicked off on two of a user's devices simultaneously, since both the CloudKit and external fetches are async, there is a strong possibility that I'll get duplicate records.
I know I can use zones to atomically commit all of my CKRecord instances, but I don't think that solves my issue, because if all of these fetches happen at essentially the same time, the save is not really the issue.
My questions are:
Does anyone know of a way to "lock" the private database for writes across all of a user's devices?
Alternatively, is there a way to enforce uniqueness on any CKRecord field?
Or, is there a way to use a custom value as the primary key? In that case I could use my external ID as the CK record ID and allow the system to prevent duplicates itself.
Thanks for the help in advance!
Answers:
No, you cannot lock the private database
CloudKit already enforces and assumes uniqueness of your record IDs
You can make the record ID anything you like (in the non-zone part of it).
Explanation:
Regarding your issue of duplication: if you are the one creating the record IDs (from the external records you mentioned, for example) then at worst, in a race condition, you would have one record overwrite the other with the same data. I do not think that is an issue for the extreme case where two devices kick off this process at the same time. Basically, your logic of first fetching existing records and then modifying them seems sound to me.
Code:
//employeeID is a unique ID to identify an employee
let employeeID = "001"
//Remember the recordID needs to be unique within the same database.
//Assuming you have different record types, it is better to prefix the record name with the record type so that it is unique
let recordName = "Employee-\(employeeID)"
//If you are using a custom zone
let customZoneID = CKRecordZoneID(zoneName: "SomeCustomZone", ownerName: CKCurrentUserDefaultName)
let recordIDInCustomZone = CKRecordID(recordName: recordName, zoneID: customZoneID)
//If you are using the default zone
let recordIDInDefaultZone = CKRecordID(recordName: recordName)
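As a sketch of how the race then resolves (an illustration, not part of the original answer): when two devices create a record with the same fixed record ID, the default save policy will not clobber whichever copy got there first, so the slower device sees a serverRecordChanged error that it can treat as "record already exists" rather than a duplicate.
import CloudKit

let database = CKContainer.default().privateCloudDatabase
let record = CKRecord(recordType: "Employee", recordID: recordIDInCustomZone)

database.save(record) { savedRecord, error in
    if let ckError = error as? CKError, ckError.code == .serverRecordChanged {
        // The other device won the race; a record with this ID already exists.
        return
    }
    // Handle other errors or use savedRecord here.
}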
I had a similar issue with duplicates being downloaded when I tried to read in a database of more than 100 records; the solution is found in Apple's Atlas example, which uses a Boolean to check whether the last process finished before it launches the next. You'll find a block much like this...
@synchronized (self)
{
    // Quickly returns if another loadNextBatch is running or we have the oldest post
    if (self.isLoadingBatch || self.haveOldestPost) return;
    else self.isLoadingBatch = YES;
}
Incidentally, here is the code to create your own record key:
CKRecordID *customID = [[CKRecordID alloc] initWithRecordName:[globalEOConfirmed returnEOKey:i]];
newrecord = [[CKRecord alloc] initWithRecordType:@"Blah" recordID:customID];
