I am developing a chat application where I can receive a large number of messages at once, which freezes the app. The following is my socket receiver:
func receiveNewDirectMessages() {
    self.socket?.on(EventListnerKeys.message.rawValue, callback: { (arrAckData, ack) in
        print_debug(arrAckData)
        guard let dictMsg = arrAckData.first as? JSONDictionary else { return }
        guard let data = dictMsg[ApiKey.data] as? JSONDictionary else { return }
        guard let chatData = data[ApiKey.data] as? JSONDictionary else { return }
        guard let messageId = chatData[ApiKey._id] as? String, let chatId = chatData[ApiKey.chatId] as? String else { return }
        if MessageModel.getMessageModel(msgId: messageId) != nil { return }
        let isChatScreen = self.isChatScreen
        let localMsgId = "\(arc4random())\(Date().timeIntervalSince1970)"
        if let senderInfo = data[ApiKey.senderInfo] as? JSONDictionary, let userId = senderInfo[ApiKey.userId] as? String, userId != User.getUserId() {
            _ = AppUser.writeAppUserModelWith(userData: senderInfo)
        }
        let msgModel = MessageModel.saveMessageData(msgData: chatData, localMsgId: localMsgId, msgStatus: 2, seenByMe: false)
        let chatModel = ChatModel.saveInboxData(localChatId: msgModel.localChatId, inboxData: chatData)
        if isChatScreen {
            self.emitMessageStatus(msgId: messageId, chatId: chatId, socketService: .messageStatus, status: .delivered)
            self.emitMessageStatus(msgId: messageId, chatId: chatId, socketService: .messageStatus, status: .seen)
        } else {
            ChatModel.updateUnreadCount(localChatId: chatModel.localChatId, incrementBy: 1)
            self.emitMessageStatus(msgId: messageId, chatId: chatId, socketService: .messageStatus, status: .delivered)
        }
        TabController.shared.updateChatBadgeCount()
    })
}
What's happening above:
1. Receiving all the undelivered messages one by one in this socket listener
2. Fetching the message data
3. Saving the received sender's info to the Realm DB
4. Saving the message model to the Realm DB
5. Saving/updating the chat thread in the Realm DB
6. Emitting an acknowledgement for the received message
7. Updating the chat badge count on the tab bar
Below is my emitter for acknowledging the message delivery.
func emitMessageStatus(msgId: String, chatId: String, socketService: SocketService, status: MessageStatusAction) {
    // Create the message data packet to be sent to the socket server
    var msgDataPacket = [String: Any]()
    msgDataPacket[ApiKey.type] = socketService.type
    msgDataPacket[ApiKey.actionType] = socketService.listenerType
    msgDataPacket[ApiKey.data] = [
        ApiKey.messageId: msgId,
        ApiKey.chatId: chatId,
        ApiKey.userId: User.getUserId(),
        ApiKey.statusAction: status.rawValue
    ]
    // Send the message data packet to the socket server & wait for the acknowledgement
    self.emit(with: EventListnerKeys.socketService.rawValue, msgDataPacket) { (arrAckData) in
        print_debug(arrAckData)
        guard let dictMsg = arrAckData.first as? JSONDictionary else { return }
        if let msgData = dictMsg[ApiKey.data] as? [String: Any] {
            // Update delivered/seen status here
            if let msgId = msgData[ApiKey.messageId] as? String, let actionType = msgData[ApiKey.statusAction] as? String, let msgStatusAction = MessageStatusAction(rawValue: actionType) {
                switch msgStatusAction {
                case .delivered:
                    if let deliveredTo = msgData[ApiKey.deliveredTo] as? [[String: Any]] {
                        _ = MessageModel.updateMsgDelivery(msgId: msgId, deliveredTo: deliveredTo)
                    }
                case .seen:
                    if let seenBy = msgData[ApiKey.seenBy] as? [[String: Any]] {
                        _ = MessageModel.updateMsgSeen(msgId: msgId, seenBy: seenBy)
                    }
                case .pin:
                    MessageModel.clearPinnedMessages(chatId: chatId)
                    if let pinTime = msgData[ApiKey.pinTime] as? Double {
                        MessageModel.updatePinnedStatus(msgId: msgId, isPinned: true, pinTime: pinTime)
                    }
                case .unPin:
                    if let pinTime = msgData[ApiKey.pinTime] as? Double {
                        MessageModel.updatePinnedStatus(msgId: msgId, isPinned: false, pinTime: pinTime)
                    }
                case .delete:
                    MessageModel.deleteMessage(msgId: msgId)
                case .ackMsgStatus, .like, .unlike:
                    break
                }
            }
        }
    }
}
What's happening above:
Encapsulating all the information needed to acknowledge the event
Updating the Realm DB once the acknowledgement is delivered
Now, I'm not able to define a proper threading policy here. What should run on a background thread and what should run on the main thread? I tried a few approaches, but they lead to random crashes or packet losses.
Can anyone please point me in the right direction? I will be highly grateful.
Try to use a background thread for data processing and other non-UI work.
Reduce the number of times you update the UI.
Instead of processing messages one by one, use a debounce-like approach: store incoming messages, then update the UI with n new messages at once. Instead of updating the UI / saving to the DB 100 times for 100 messages, you do it once for 100 messages. In more detail: for every new message, append it to an array and call the debouncer. The debouncer delays a function call, and each new call pushes the pending call back until the delay has fully elapsed. So after, say, 200 ms with no new messages, the update function (the callback the debouncer wraps) is invoked, and you update the UI/DB with the n stored messages. See the sketch after these suggestions.
You can also group messages by time, e.g. by hour, and update with a delay between each group: when the debouncer fires, group the messages by time, then update the DB/UI group by group. Staggering the updates (update group 1, then group 2 100 ms later, and so on) keeps the UI from freezing.
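Below is a minimal sketch of the debouncer idea, assuming it lives next to the socket listener; the names Debouncer, pendingMessages, and flushPendingMessages() are placeholders, not part of the original code.

import Foundation

/// Minimal debouncer: each call cancels the previously scheduled work item and
/// schedules a new one, so the action runs only after calls stop arriving for `delay`.
final class Debouncer {
    private let delay: TimeInterval
    private let queue: DispatchQueue
    private var workItem: DispatchWorkItem?

    init(delay: TimeInterval, queue: DispatchQueue = .main) {
        self.delay = delay
        self.queue = queue
    }

    func call(_ action: @escaping () -> Void) {
        workItem?.cancel()
        let item = DispatchWorkItem(block: action)
        workItem = item
        queue.asyncAfter(deadline: .now() + delay, execute: item)
    }
}

// Usage sketch inside the socket listener:
// pendingMessages.append(chatData)        // buffer instead of saving immediately
// debouncer.call {
//     self.flushPendingMessages()         // one Realm write + one UI reload for the whole batch
// }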
Related
I have some code that reads data from Firebase on a custom loading screen, and I only want to segue once all of the data in the collection has been read (I know beforehand that there won't be more than 10 or 15 entries, and I'm checking that the user has an internet connection). I have a loading animation that is started by calling activityIndicatorView.startAnimating() and stopped by calling activityIndicatorView.stopAnimating(). I'm not sure where to place these calls, or the performSegue call, in relation to the data retrieval function. Any help is appreciated!
let db = Firestore.firestore()
db.collection("Packages").getDocuments { (snapshot, error) in
    if error != nil {
        // DB error
    } else {
        for doc in snapshot!.documents {
            self.packageIDS.append(doc.documentID)
            self.packageNames.append(doc.get("title") as! String)
            self.packageIMGIDS.append(doc.get("imgID") as! String)
            self.packageRadii.append(doc.get("radius") as! String)
        }
    }
}
You don't need to know the progress of the read as such, just when it starts and when it is complete, so that you can start and stop your activity view.
The read starts when you call getDocuments.
The read is complete after the for loop in the getDocuments completion closure.
So:
let db = Firestore.firestore()

activityIndicatorView.startAnimating()

db.collection("Packages").getDocuments { (snapshot, error) in
    if error != nil {
        // DB error
    } else {
        for doc in snapshot!.documents {
            self.packageIDS.append(doc.documentID)
            self.packageNames.append(doc.get("title") as! String)
            self.packageIMGIDS.append(doc.get("imgID") as! String)
            self.packageRadii.append(doc.get("radius") as! String)
        }
    }
    DispatchQueue.main.async {
        // The read is complete: stop the spinner (and perform the segue here).
        self.activityIndicatorView.stopAnimating()
    }
}
As a matter of style, having multiple arrays of associated data is a bit of a code smell. Rather, you should create a struct with the relevant properties and build a single array of instances of that struct.
You should also avoid force unwrapping.
struct PackageInfo {
    let id: String
    let name: String
    let imageId: String
    let radius: String
}

...

var packages: [PackageInfo] = []

...

db.collection("Packages").getDocuments { (snapshot, error) in
    if error != nil {
        // DB error
    } else if let documents = snapshot?.documents {
        self.packages = documents.compactMap { doc in
            if let title = doc.get("title") as? String,
               let imageId = doc.get("imgID") as? String,
               let radius = doc.get("radius") as? String {
                return PackageInfo(id: doc.documentID, name: title, imageId: imageId, radius: radius)
            } else {
                return nil
            }
        }
    }
}
There is no progress reporting within a single read operation, either it's pending or it's completed.
If you want more granular reporting, you can implement pagination yourself so that you know how many items you've already read. To show progress against the total, you will also need to track the total count yourself; a sketch follows below.
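A rough sketch of that idea, assuming FirebaseFirestore is imported, a page size of 5, and a hypothetical expectedTotal count that you track yourself (for example in a counter document):

func fetchPackagesPage(after lastDocument: DocumentSnapshot?, loaded: Int, expectedTotal: Int) {
    var query: Query = Firestore.firestore().collection("Packages").limit(to: 5)
    if let lastDocument = lastDocument {
        query = query.start(afterDocument: lastDocument)
    }
    query.getDocuments { snapshot, error in
        guard let snapshot = snapshot, error == nil else { return }
        let loadedSoFar = loaded + snapshot.documents.count
        print("Loaded \(loadedSoFar) of \(expectedTotal)")   // granular progress report
        if snapshot.documents.count < 5 {
            // Last page: stop the activity indicator / perform the segue here.
        } else {
            fetchPackagesPage(after: snapshot.documents.last, loaded: loadedSoFar, expectedTotal: expectedTotal)
        }
    }
}

// Kick off the first page:
// fetchPackagesPage(after: nil, loaded: 0, expectedTotal: expectedTotal)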
In a scenario where the product is already in the cart and the user enters their credit card number and the OTP, the process can take a few seconds or even minutes, during which anything could happen, e.g. an admin deletes that product. If the payment succeeds, a batch of operations is triggered: update the product's stock level, add the order to the user's purchase history, etc. I use a batch write for this. However, when the product is no longer in the database, the batch re-creates the product with only a stock level field. Could you please recommend a way to prevent this error? Is a batch write right for this operation?
// Update item stock level
var newVar = String()
var newIds = [String: Any]()
for (key, value) in itemsDict {
    let newValue = value as! [String: Any]
    let newId = newValue[kITEMID] as! String
    newVar = newValue[kVARIATIONKEY] as! String
    let newQty = newValue[kQUANTITY] as! Int
    for i in 0..<self.allItems.count {
        let newVars = self.allItems[i].variations!
        let newKey = self.allItems[i].varKey!
        let filteredVars = newVars.filter({ $0.key == String(newKey) })
        for (_, value) in filteredVars {
            guard let resultNew = value as? [String: Any] else { return }
            let stock = resultNew[kSTOCK] as! Int
            let newStock = stock - newQty
            let anyDict = [newVar: [kSTOCK: newStock]] as [String: Any]
            updateStock = [newId: [kVARIATIONS: anyDict]] as [String: Any]
            newIds.updateValue(updateStock, forKey: key)
        }
    }
}

let batch = Firestore.firestore().batch()
for (_, value) in newIds {
    let newValue = value as! [String: Any]
    for (k1, v1) in newValue {
        let newV1 = v1 as! [String: Any]
        // update stock level
        batch.setData(newV1, forDocument: FirebaseReference(.Items).document(k1), merge: true)
    }
}

// add to user's purchase history
batch.setData(purchaseHistory, forDocument: ref2!, merge: true)

// add to all orders list
batch.setData(newAllOrders, forDocument: ref3!, merge: true)

if oneTimeUse {
    if let newVoucherId = voucherId {
        let withValues = [kCLAIMEDBY: [MUser.currentUser()!.objectId]]
        ref4 = FirebaseReference(.Voucher).document(newVoucherId)
        // update voucher
        batch.setData(withValues, forDocument: ref4!, merge: true)
    }
}

self.showLoadingIndicator()

batch.commit { (err) in
    if let err = err {
        print("There's an error with your order, please try again \(err.localizedDescription)")
    } else {
        print("Successfully committed batch")
        self.finalProcess(transactionId, paymentOption) {
        }
    }
}
I'd recommend using transactions for the operations you are making; quoting the documentation:
"A transaction consists of any number of get() operations followed by any number of write operations such as set(), update(), or delete(). In the case of a concurrent edit, Cloud Firestore runs the entire transaction again. For example, if a transaction reads documents and another client modifies any of those documents, Cloud Firestore retries the transaction. This feature ensures that the transaction runs on up-to-date and consistent data."
This would help you prevent the scenario where the product is no longer available in stock, and you can display/send a message to the user who was interested in it. See the sketch below.
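For illustration, here is a hedged sketch of that approach, not your exact data model: itemRef, kSTOCK, and quantityOrdered are stand-ins for your own references. The transaction reads the product first and aborts if it no longer exists, instead of silently re-creating the document the way setData(..., merge: true) does.

let db = Firestore.firestore()
let itemRef = db.collection("Items").document(itemId)   // hypothetical reference

db.runTransaction({ (transaction, errorPointer) -> Any? in
    let itemSnapshot: DocumentSnapshot
    do {
        itemSnapshot = try transaction.getDocument(itemRef)
    } catch let fetchError as NSError {
        errorPointer?.pointee = fetchError
        return nil
    }

    // Abort if the admin deleted the product while the user was paying.
    guard itemSnapshot.exists, let stock = itemSnapshot.data()?[kSTOCK] as? Int else {
        errorPointer?.pointee = NSError(domain: "AppErrorDomain", code: -1, userInfo: [
            NSLocalizedDescriptionKey: "This product is no longer available."
        ])
        return nil
    }

    // updateData fails for a missing document, unlike setData(..., merge: true).
    transaction.updateData([kSTOCK: stock - quantityOrdered], forDocument: itemRef)
    return nil
}) { (_, error) in
    if let error = error {
        print("Order failed: \(error.localizedDescription)")
    } else {
        print("Stock updated successfully")
    }
}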
I am using the code below to retrieve the messages in a chat application ordered by timestamp, but they are not being retrieved in timestamp order. How should I make sure that the messages are retrieved in the order of the timestamp?
I am using the Firestore database and Swift (iOS) for this application.
Below are the relevant code parts.
timestamp saved in database
let timestamp = Int(NSDate().timeIntervalSince1970)
Code to retrieve messages
let ref = Firestore.firestore().collection("messages").order(by: "timestamp", descending: true)
ref.addSnapshotListener { (snapshot, error) in
    snapshot?.documentChanges.forEach({ (diff) in
        let messageId = diff.document.documentID
        let messageRef = Firestore.firestore().collection("messages").document(messageId)
        messageRef.getDocument(completion: { (document, error) in
            guard let dictionary = document?.data() as? [String: Any] else { return }
            let message = Message(dictionary: dictionary)
            print("we fetched this message \(message.text)")
            self.messages.append(message)
            DispatchQueue.main.async {
                self.collectionView.reloadData()
                let indexPath = IndexPath(item: self.messages.count - 1, section: 0)
                self.collectionView.scrollToItem(at: indexPath, at: .bottom, animated: true)
            }
        })
    })
}
Perhaps an oversight, but what's happening here is that the code gets the data you want in descending order by timestamp, and then gets that same data again; the second fetch is unordered because it is retrieved asynchronously, and that is what gets added to the array.
func doubleGettingData() {
let ref = Firestore.firestore()....
Gets data -> ref.addSnapshotListener { (snapshot, error) in
snapshot?.documentChanges.forEach({ (diff) in
Gets data again -> messageRef.getDocument(completion
To add a bit more context: the 'outside' function shown in the question is in fact getting the documents in the correct order. However, when those same documents are fetched again, they are returned from Firebase in whatever order the calls complete, because Firebase calls are asynchronous. This can be proven by removing all of the code except the two calls. Here's an example Firestore structure
message_0
    timestamp: 2
message_1
    timestamp: 0
message_2
    timestamp: 1
and when some print statement are added, here's what's happening
outside func gets: message_0 //timestamp 2
outside func gets: message_2 //timestamp 1
outside func gets: message_1 //timestamp 0
inside func returns: message_1 //timestamp 0
inside func returns: message_2 //timestamp 1
inside func returns: message_0 //timestamp 2
I would make a couple of changes...
Here's my Message class and the array to store the messages in
class Message {
    var text = ""
    var timestamp = ""

    convenience init(withSnap: QueryDocumentSnapshot) {
        self.init()
        self.text = withSnap.get("text") as? String ?? "No message"
        self.timestamp = withSnap.get("timestamp") as? String ?? "No Timestamp"
    }
}

var messages = [Message]()
and then the code to read the messages, descending by timestamp and store them in the array. Note
The first query snapshot contains added events for all existing
documents that match the query
func readMessages() {
    let ref = Firestore.firestore().collection("messages").order(by: "timestamp", descending: true)
    ref.addSnapshotListener { querySnapshot, error in
        guard let snapshot = querySnapshot else {
            print("Error fetching snapshots: \(error!)")
            return
        }
        snapshot.documentChanges.forEach { diff in
            if diff.type == .added {
                let snap = diff.document
                let aMessage = Message(withSnap: snap)
                self.messages.append(aMessage)
            }
            if diff.type == .modified {
                let docId = diff.document.documentID
                // update the message with this documentID in the array
            }
            if diff.type == .removed {
                let docId = diff.document.documentID
                // remove the message with this documentID from the array
            }
        }
    }
}
This code will also watch for changes and deletions in messages and pass that event to your app when they occur.
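If it helps, one possible way to fill in the .modified and .removed branches is sketched below; it assumes Message also stores the document ID (e.g. a docId property set from withSnap.documentID in the initializer), which the class above does not have yet.

if diff.type == .modified {
    let docId = diff.document.documentID
    if let index = self.messages.firstIndex(where: { $0.docId == docId }) {
        self.messages[index] = Message(withSnap: diff.document)   // replace the stale message
    }
}
if diff.type == .removed {
    let docId = diff.document.documentID
    self.messages.removeAll { $0.docId == docId }                 // drop the deleted message
}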
I am receiving up to four push notifications for each event I am subscribed to. I have gone through everything related to my CloudKit subscriptions and notification registry and I am convinced this is an Apple problem. I have instead turned my attention toward correctly processing the notifications no matter how many I receive. Here is a simplified version of what I am doing:
func recievePrivatePush(_ pushInfo: [String: NSObject], completion: @escaping () -> Void) {
    let notification = CKNotification(fromRemoteNotificationDictionary: pushInfo)
    let alertBody = notification.alertBody
    if let queryNotification = notification as? CKQueryNotification {
        let recordID = queryNotification.recordID
        guard let body = queryNotification.alertBody else {
            return
        }
        if recordID != nil {
            switch body {
            case "Notification Type":
                let id = queryNotification.recordID
                switch queryNotification.queryNotificationReason {
                case .recordCreated:
                    DataCoordinatorInterface.sharedInstance.fetchDataItem(id!.recordName, completion: {
                        //
                    })
                default:
                    break
                }
            default:
                break
            }
        }
    }
}
The fetching code looks something like this:
func fetchDataItem(_ id: String, completion: @escaping () -> Void) {
    if entityExistsInCoreData(id) { return }
    let db = CKContainer.default().privateCloudDatabase
    let recordID = CKRecordID(recordName: id)
    db.fetch(withRecordID: recordID) { (record, error) in
        if let topic = record {
            // Here I create and save the object to Core Data.
        }
        completion()
    }
}
All of my code works; the problem is that when I receive multiple notifications, multiple fetch requests are started before the first Core Data entity is created, resulting in redundant Core Data objects.
What I would like to do is find a way to add the fetch requests to a serial queue so they are processed one at a time. I can put my request calls in a serial queue, but the callbacks always run asynchronously, so multiple fetch requests are still made before the first data object is persisted.
I have tried using semaphores and dispatch groups with a pattern that looks like this:
let semaphore = DispatchSemaphore(value: 1)

func recievePrivatePush(_ pushInfo: [String: NSObject], completion: @escaping () -> Void) {
    _ = semaphore.wait(timeout: .distantFuture)
    let notification = CKNotification(fromRemoteNotificationDictionary: pushInfo)
    let alertBody = notification.alertBody
    if let queryNotification = notification as? CKQueryNotification {
        let recordID = queryNotification.recordID
        guard let body = queryNotification.alertBody else {
            return
        }
        if recordID != nil {
            switch body {
            case "Notification Type":
                let id = queryNotification.recordID
                switch queryNotification.queryNotificationReason {
                case .recordCreated:
                    DataCoordinatorInterface.sharedInstance.fetchDataItem(id!.recordName, completion: {
                        semaphore.signal()
                    })
                default:
                    break
                }
            default:
                break
            }
        }
    }
}
Once the above function is called for the second time and semaphore.wait is called, execution of the first network request pauses, resulting in a frozen app.
Again, what I would like to accomplish is adding the asynchronous network requests to a queue so that they are made only one at a time, i.e. the first network call completes before the second request is started.
Carl,
Perhaps you'll find your solution with dispatch groups; here are a few key expressions to look into.
let group = DispatchGroup()
group.enter()
... code ...
group.leave()
group.wait()
I use them to limit the number of HTTP requests I send out in a batch and to wait for the responses. Perhaps you could use them together with the suggestion in my comment. Watch this video too; it covers dispatch groups and, I think, more.
https://developer.apple.com/videos/play/wwdc2016/720/
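As a sketch of how the pieces could fit together (the names serialQueue and enqueueFetch are mine, not from the question): process each notification on a background serial queue and let a DispatchGroup block that queue, never the main thread, until the CloudKit fetch has persisted its record, so the fetches run strictly one at a time.

let serialQueue = DispatchQueue(label: "com.example.pushProcessing")

func enqueueFetch(recordName: String) {
    serialQueue.async {
        let group = DispatchGroup()
        group.enter()
        DataCoordinatorInterface.sharedInstance.fetchDataItem(recordName) {
            group.leave()   // signalled from the CloudKit fetch completion
        }
        // Blocks only the background serial queue until the fetch above has
        // saved its object, so the next notification is processed afterwards.
        group.wait()
    }
}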
These simple classes helped me solve the problem.
class PushQueue {
    internal var pushArray: [String] = [String]()
    internal let pushQueue = DispatchQueue(label: "com.example.pushNotifications")

    public func addPush(_ push: CKPush) {
        pushQueue.sync {
            if pushArray.contains(push.id) {
                return
            } else {
                pushArray.append(push.id)
                processNotification(push: push)
            }
        }
    }

    internal func processNotification(push: CKPush) {
        PushInterface.sharedInstance.recievePrivatePush(push.userInfo as! [String: NSObject])
    }
}

class CKPush: Equatable {
    init(userInfo: [AnyHashable: Any]) {
        let ck = userInfo["ck"] as? NSDictionary
        let id = ck?["nid"] as? String
        self.id = id!
        self.userInfo = userInfo
    }

    var id: String
    var userInfo: [AnyHashable: Any]

    public static func == (lhs: CKPush, rhs: CKPush) -> Bool {
        return lhs.id == rhs.id
    }
}
Please ignore the sloppy force unwraps. They need to be cleaned up.
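For what it's worth, one way the force unwraps might be cleaned up (a sketch, not the original code) is a failable initializer that simply drops malformed payloads:

class CKPush: Equatable {
    let id: String
    let userInfo: [AnyHashable: Any]

    init?(userInfo: [AnyHashable: Any]) {
        // Drop the notification instead of crashing when the payload is malformed.
        guard let ck = userInfo["ck"] as? [String: Any],
              let id = ck["nid"] as? String else {
            return nil
        }
        self.id = id
        self.userInfo = userInfo
    }

    static func == (lhs: CKPush, rhs: CKPush) -> Bool {
        return lhs.id == rhs.id
    }
}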
I have an application using RealmSwift on iOS that stores a small amount of data. The data is updated via CloudKit push notifications every ~5 minutes or so.
It works, except that my Realm file grows continuously until the application no longer has enough memory to launch.
I managed to work around it slightly by using the writeCopy function to compact the Realm at launch. This mostly works if the app is stopped semi-frequently so that the compaction has a chance to run, but if the app keeps updating the data from push notifications in the background for a couple of days, the database ends up too large and compacting it takes ages, or it just crashes trying.
I tried reading the file-size section in the FAQs, but I don't think I fully understand the rules that cause the size to grow. It mentions keeping "old realms" open: does that mean instances of Realm, e.g. "let realm = try! Realm()", or does it really mean any open object?
If it's the latter, that is annoying, as I use notifications on result sets while in the background to determine when to update a complication on a companion watch app.
I really like Realm and I want to stick with it, but I feel that my somewhat unusual use case (a long-running app that frequently updates in the background) might mean that I cannot use it.
Edit:
Background updates are happening as a result of the 'didReceiveRemoteNotification' app delegate callback, like this:
func application(_ application: UIApplication, didReceiveRemoteNotification userInfo: [AnyHashable: Any], fetchCompletionHandler completionHandler: @escaping (UIBackgroundFetchResult) -> Void) {
    let dict = userInfo as! [String: NSObject]
    let notification = CKNotification(fromRemoteNotificationDictionary: dict)
    if notification.notificationType == .query {
        let notification = notification as! CKQueryNotification
        if let recordID = notification.recordID {
            let requestOperation = CKFetchRecordsOperation(recordIDs: [recordID])
            requestOperation.fetchRecordsCompletionBlock = { records, error in
                if let error = error {
                    os_log("Error fetching record: %@", log: Logger.default, type: .error, String(describing: error))
                } else {
                    autoreleasepool {
                        let realm = try! Realm()
                        try! realm.write {
                            records?.forEach { (key, record) in
                                switch record.recordType {
                                case "Freeway":
                                    let dict = RealmFreeway.dictionary(from: record)
                                    realm.create(RealmFreeway.self, value: dict, update: true)
                                case "Segment":
                                    let dict = RealmSegment.dictionary(from: record)
                                    realm.create(RealmSegment.self, value: dict, update: true)
                                default:
                                    os_log("Unknown record type: %@", log: Logger.default, type: .error, record.recordType)
                                }
                            }
                        }
                    }
                }
            }
            CKContainer.default().publicCloudDatabase.add(requestOperation)
        }
    }
    completionHandler(.newData)
}
And the watch complication is updated via RxSwift like so:
Observable.combineLatest(watchViewModel.freewayName.asObservable(),
                         watchViewModel.startName.asObservable(),
                         watchViewModel.endName.asObservable(),
                         watchViewModel.travelTime.asObservable(),
                         watchViewModel.color.asObservable(),
                         watchViewModel.direction.asObservable()) {
    freewayName, startName, endName, travelTime, color, direction -> [String: Any] in
    guard let freewayName = freewayName,
          let startName = startName,
          let endName = endName,
          let direction = direction else {
        return [String: Any]()
    }
    let watchData: [String: Any] = [
        "freewayName": freewayName,
        "startName": startName,
        "endName": endName,
        "travelTime": travelTime,
        "color": color.htmlRGBColor,
        "direction": direction,
        "transfers": session.remainingComplicationUserInfoTransfers
    ]
    return watchData
}
.filter { $0.keys.count > 0 }
.throttle(2.0, scheduler: MainScheduler.instance)
.subscribe(onNext: { watchData in
    let MIN_DURATION = 24.0 * 60.0 * 60.0 / 50.0 // 50 guaranteed updates per day...
    var timeSinceLastUpdate = MIN_DURATION
    let now = Date()
    if let lastUpdated = self.lastComplicationUpdate {
        timeSinceLastUpdate = now.timeIntervalSince(lastUpdated)
    }
    // Should we use a complication update or an application context update?
    let complicationUpdate = timeSinceLastUpdate >= MIN_DURATION
    // Send the data via the appropriate method.
    if complicationUpdate {
        session.transferCurrentComplicationUserInfo(watchData)
    } else {
        try? session.updateApplicationContext(watchData)
    }
    self.lastComplicationUpdate = now
})
.addDisposableTo(bag)