I'm using Multipeer Connectivity to synchronize the movement of nodes in a 3D SceneKit environment between several devices connected through Wi-Fi.
The node movement is calculated on the master device and sent to the slave devices, which then set the node to the received position. The movement is sent from the renderer loop of the master's SceneView, through a data stream opened with each slave and managed with the MultipeerConnectivity framework.
The video below shows the result and the issue: the ball jitters on the slave (right) because of a regular pause every 0.6-0.7 seconds in the reception of the data through the stream. The counter in the upper left shows that no packet is lost, and there is no issue with the integrity of the received data.
This very regular pause on the slaves does not occur in the simulator, only on real devices, regardless of the device (iPhone or iPad, old or recent).
Is there a way to find out what causes this regular pause on the slave devices?
Would it make sense to have the input stream on the slaves handled on a dedicated thread/run loop instead of the main thread's run loop?
Below is the implementation.
Master Multipeer Connectivity initialization
PeerID = MCPeerID(displayName: "Master (" + UIDevice.current.name + ")")
MPCSession = MCSession(peer: PeerID, securityIdentity: nil, encryptionPreference: .none)
MPCSession.delegate = self
ServiceAdvertiser = MCNearbyServiceAdvertiser(peer: PeerID, discoveryInfo: nil, serviceType: "ARMvt")
ServiceAdvertiser.delegate = self
ServiceAdvertiser.startAdvertisingPeer()
Slave Multipeer Connectivity initialization
PeerID = MCPeerID(displayName: "Slave (" + UIDevice.current.name + ")")
MPCSession = MCSession(peer: PeerID, securityIdentity: nil, encryptionPreference: .none)
MPCSession.delegate = self
ServiceBrowser = MCNearbyServiceBrowser(peer: PeerID, serviceType: "ARMvt")
ServiceBrowser.delegate = self
ServiceBrowser.startBrowsingForPeers()
Function that sends the data, called in the renderer function
func MPCSendData(VCRef: GameViewController, DataToSend: Dictionary<String, Any>, ViaStream: Bool = false)
{
var DataFilledToSend = DataToSend
var DataConverted = try! NSKeyedArchiver.archivedData(withRootObject: DataFilledToSend, requiringSecureCoding: true)
var TailleData: Int = 0
var NewTailleData: Int = 0
if ViaStream // Through the stream
{
// Filling the data to get a constant-size packet
// kSizeDataPack is set to 2048. The bigger it is, the worse the jittering.
VCRef.Compteur = VCRef.Compteur + 1
VCRef.Message.SetText(Text: String(VCRef.Compteur))
DataFilledToSend[eTypeData.Compteur.rawValue] = VCRef.Compteur
DataFilledToSend[eTypeData.FillingData.rawValue] = "A"
TailleData = DataConverted.count
DataFilledToSend[eTypeData.FillingData.rawValue] = String(repeating: "A", count: kSizeDataPack - TailleData)
DataConverted = try! NSKeyedArchiver.archivedData(withRootObject: DataFilledToSend, requiringSecureCoding: false)
NewTailleData = DataConverted.count
DataFilledToSend[eTypeData.FillingData.rawValue] = String(repeating: "A", count: kSizeDataPack - TailleData - (NewTailleData - kSizeDataPack))
DataConverted = try! NSKeyedArchiver.archivedData(withRootObject: DataFilledToSend, requiringSecureCoding: false)
if VCRef.OutStream!.hasSpaceAvailable
{
let bytesWritten = DataConverted.withUnsafeBytes { VCRef.OutStream!.write($0, maxLength: DataConverted.count) }
if bytesWritten == -1 { print("Error writing to stream") }
} else { print("No space in stream") }
}
else // Not through the stream
{
let Peer = VCRef.MPCSession.connectedPeers.first!
try! VCRef.MPCSession.send(DataConverted, toPeers: [Peer], with: .reliable)
}
}
Function that is called when data is received through the stream on the slave
func stream(_ aStream: Stream, handle eventCode: Stream.Event)
{
DispatchQueue.main.async
{
switch(eventCode)
{
case Stream.Event.hasBytesAvailable:
let InputStream = aStream as! InputStream
var Buffer = [UInt8](repeating: 0, count: kSizeDataPack)
let NumberBytes = InputStream.read(&Buffer, maxLength: kSizeDataPack)
let DataString = NSData(bytes: &Buffer, length: NumberBytes)
if let _ = NSKeyedUnarchiver.unarchiveObject(with: DataString as Data) as? [String:Any] //deserializing the NSData
{
ProcessMPCDataReceived(VCRef: self, RawData: DataString as Data)
}
case Stream.Event.hasSpaceAvailable:
break
case Stream.Event.errorOccurred:
print("ErrorOccurred: \(String(describing: aStream.streamError?.localizedDescription))")
default:
break
}
}
}
Function that processes the data received
func ProcessMPCDataReceived(VCRef: GameViewController, RawData: Data)
{
let DataReceived: Dictionary = (try! NSKeyedUnarchiver.unarchiveTopLevelObjectWithData(RawData) as! [String : Any])
switch DataReceived[eTypeData.EventType.rawValue] as! String
{
case eTypeEvent.SetMovement.rawValue:
VCRef.CurrentMovement = eTypeMovement(rawValue: DataReceived[eTypeData.Movement.rawValue] as! String)!
case eTypeEvent.SetPosition.rawValue:
VCRef.Ball.position = DataReceived[eTypeData.Position.rawValue] as! SCNVector3
default:
break
}
}
It looks like you're dispatching work on the main thread. While I wouldn't expect unpacking data to cause these regular pauses by itself, it's possible that you're running into a data-processing bottleneck. In other words, the time it takes to receive, unpack, and reposition nodes (which also incurs implicit SceneKit transactions) may be just enough to slow the device down. Your code is compact enough that you could dispatch the entire thing to a queue, and I recommend trying that to see whether you get new behavior. Try DispatchQueue.global(), or better, make your own with DispatchQueue(label: "StreamReceiver", qos: .userInteractive). I think the async dispatch is perfectly fine here.
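A rough sketch of that suggestion (the queue name is only an example; kSizeDataPack and ProcessMPCDataReceived come from your code, and whether reading the stream off its scheduled run loop thread is acceptable for your setup is an assumption to verify):
let streamQueue = DispatchQueue(label: "StreamReceiver", qos: .userInteractive)

func stream(_ aStream: Stream, handle eventCode: Stream.Event)
{
    streamQueue.async
    {
        guard eventCode == .hasBytesAvailable, let input = aStream as? InputStream else { return }
        // Read and unpack off the main thread
        var buffer = [UInt8](repeating: 0, count: kSizeDataPack)
        let count = input.read(&buffer, maxLength: kSizeDataPack)
        guard count > 0 else { return }
        let data = Data(bytes: buffer, count: count)
        // Only the SceneKit update has to hop back to the main thread
        DispatchQueue.main.async { ProcessMPCDataReceived(VCRef: self, RawData: data) }
    }
}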
EDIT: Actually, looking at it more, I think this may be related to SceneKit transactions. It looks like you're not really 'pausing', but decelerating. I mentioned the implicit transaction: when you position the node, explicitly begin an SCNTransaction, set its animation duration, and commit it. A snippet I've been using is:
func sceneTransaction(_ duration: Int? = nil, _ operation: () -> Void) {
    SCNTransaction.begin()
    SCNTransaction.animationDuration = duration.map { CFTimeInterval($0) } ?? SCNTransaction.animationDuration
    operation()
    SCNTransaction.commit()
}
Try calling this around your repositioning code, or just put the transaction begin/animationDuration/commit calls around your block. Good luck!
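For example, the repositioning in ProcessMPCDataReceived could be wrapped like this (newPosition is just a placeholder for the unpacked SCNVector3):
sceneTransaction(0) {
    // 0 disables the implicit animation for this change; pick whatever duration suits your update rate
    VCRef.Ball.position = newPosition
}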
EDIT 2: Ok, one more thing. If it makes sense for your use case, make sure you've stopped browsing and advertising for peers. It's an expensive bit of networking, and it may be bogging down the entire subsystem.
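For example, a rough sketch (ServiceAdvertiser and ServiceBrowser are the properties from your question; where exactly to call these depends on your connection flow):
// On the master, once the expected peers are connected
// (e.g. in the MCSessionDelegate session(_:peer:didChange:) callback):
ServiceAdvertiser.stopAdvertisingPeer()
// On each slave, likewise:
ServiceBrowser.stopBrowsingForPeers()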
I have also encountered this issue, which seems to depend on the environment in which the device is located.
Here are the results of my attempts: with two devices connected via Multipeer Connectivity, the sender sends data 60 times per second and the receiver logs each packet. Sometimes there is a slight pause of 0.1-0.2 seconds every second, but sometimes it works fine.
In the end, I replaced the Multipeer Connectivity framework with the Core Bluetooth framework, which does not seem to have this issue.
Related
I am currently writing an analytics system.
Currently, it caches Events in RAM.
It writes the data to the filesystem as JSON via NSUserDefaults (iOS) and SharedPreferences (Android) when the app closes.
This data is read back when the app opens.
It also sends the events every N seconds, or when the number of cached events reaches 20.
When sending succeeds, it deletes the events that were sent from RAM.
This has some obvious flaws: when the app crashes, up to N seconds of data is lost. When the server cannot be reached (because the server is down, for example) and the app crashes, even more data is lost.
My question here is: How can I improve the "safety" of my data and prevent massive data loss when the server is down or not reachable?
Here is my current code (unimportant parts removed)
import Foundation
class BackendTrackingHandler : TrackingHandler {
static let KEY_CACHE_EVENT = "TrackingCache"
private static let SEND_INTERVAL:TimeInterval = 10
var cachedEvents: [TrackingEvent] = []
var temporaryCachedEvents: [TrackingEvent] = []
var prefix: String
var endpoint: String
var timer : Timer?
//whether we currently wait for a response
var isSending: Bool = false
override init() {
//init
readCachedEventsFromDisk()
timer = Timer.scheduledTimer(timeInterval: BackendTrackingHandler.SEND_INTERVAL, target: self, selector: #selector(send), userInfo: nil, repeats: true)
}
override func trackEvent(_ event: TrackingEvent) {
cachedEvents.append(event)
if((cachedEvents.count) >= 20) {
send()
}
}
@objc func send() {
if((cachedEvents.count) < 1) {
return
}
if(isSending) {
return
}
isSending = true
let enc = JSONEncoder()
enc.outputFormatting = .prettyPrinted
let data = try! enc.encode(cachedEvents)
// Constructing the request here
let session = URLSession.shared
//while the request is on the way, we can trigger new events. Make a temporary copy
temporaryCachedEvents = cachedEvents
let taskID = UIApplication.shared.beginBackgroundTask()
let task = session.dataTask(with: request) { (data: Data?, response: URLResponse?, error: Error?) -> Void in
if(error != nil)
{
self.isSending = false
UIApplication.shared.endBackgroundTask(taskID)
}else {
let httpResponse = response as! HTTPURLResponse
if(httpResponse.statusCode >= 200 && httpResponse.statusCode <= 299) {
//success, data was sent
//remove all events we already sent
self.cachedEvents = self.cachedEvents.filter{!self.temporaryCachedEvents.contains($0)}
self.isSending = false
UIApplication.shared.endBackgroundTask(taskID)
}else {
self.isSending = false
UIApplication.shared.endBackgroundTask(taskID)
}
}
}
task.resume()
}
func readCachedEventsFromDisk() {
let dec = JSONDecoder()
guard let data = UserDefaults.standard.data(forKey: BackendTrackingHandler.KEY_CACHE_EVENT) else {
cachedEvents = []
return
}
do {
cachedEvents = try dec.decode([TrackingEvent].self, from: data)
} catch {
cachedEvents = []
}
}
func writeCachedEventsToDisk() {
let enc = JSONEncoder()
let data = try! enc.encode(cachedEvents)
UserDefaults.standard.set(data, forKey: BackendTrackingHandler.KEY_CACHE_EVENT)
}
override func onApplicationBecomeActive() {
}
override func onApplicationBecomeInactive() {
let taskID = UIApplication.shared.beginBackgroundTask()
writeCachedEventsToDisk()
UIApplication.shared.endBackgroundTask(taskID)
}
}
Edit:
TrackingEvent is a struct that is shared among multiple TrackingHandlers. There is an additional FirebaseTrackingHandler, which is meant to operate side by side with our own analytics system.
I think the easiest way is to write a property wrapper for cachedEvents so that it accesses UserDefaults directly; the operation doesn't seem heavy enough to be a concern.
A second way: you could simply save the cache to UserDefaults every N seconds/minutes or so if you care a lot about performance, though that wouldn't make your system bulletproof.
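A minimal sketch of the property-wrapper idea, assuming TrackingEvent is Codable (UserDefaultsBacked is just an illustrative name, not an existing API):
import Foundation

@propertyWrapper
struct UserDefaultsBacked<Value: Codable> {
    let key: String
    let defaultValue: Value

    var wrappedValue: Value {
        get {
            // Every read goes straight to UserDefaults
            guard let data = UserDefaults.standard.data(forKey: key),
                  let value = try? JSONDecoder().decode(Value.self, from: data) else {
                return defaultValue
            }
            return value
        }
        set {
            // Every write is persisted immediately, so a crash loses at most the current event
            if let data = try? JSONEncoder().encode(newValue) {
                UserDefaults.standard.set(data, forKey: key)
            }
        }
    }
}

// Usage in the handler, replacing the plain cachedEvents property:
// @UserDefaultsBacked(key: BackendTrackingHandler.KEY_CACHE_EVENT, defaultValue: [])
// var cachedEvents: [TrackingEvent]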
I am now facing some challenges using a Core Bluetooth L2CAP channel. In order to better understand how things work, I have taken the L2CapDemo project (master branch) (https://github.com/paulw11/L2CapDemo) from GitHub and experimented with it. Here is what I have done, along with one question.
I have replaced the sendTextTapped function with this one:
@IBAction func sendTextTapped(_ sender: UIButton) {
guard let ostream = self.channel?.outputStream else {
return
}
var lngStr = "1234567890"
for _ in 1...10 {lngStr = lngStr + lngStr}
let data = lngStr.data(using: .utf8)!
let bytesWritten = data.withUnsafeBytes { ostream.write($0, maxLength: data.count) }
print("bytesWritten = \(bytesWritten)")
print("WR = \(bytesWritten) / \(data.count)")
}
And the execution result is:
bytesWritten = 8192
WR = 8192 / 10240
That allows me to see what happens in the case where bytesWritten < data.count.
In other words, all the bytes cannot be sent over in one chunk.
Now comes the question. The problem is that I see nothing; the leftover bytes seem to just be ignored.
I want to know what to do if I do not want to ignore those bytes. What is the way to take care of the rest of the bytes? There will be cases where we need to transfer tens of thousands or even hundreds of thousands of bytes.
You simply need to make a note of how many bytes were sent, remove those from the data instance, and then, when you get a delegate callback indicating space is available in the output stream, send some more.
For example, you could add a couple of properties to hold the queued data and a serial dispatch queue to ensure thread-safe access to that queue:
private var queueQueue = DispatchQueue(label: "queue queue", qos: .userInitiated, attributes: [], autoreleaseFrequency: .workItem, target: nil)
private var outputData = Data()
Now, in the sendTextTapped function you can just add the new data to the output queue:
@IBAction func sendTextTapped(_ sender: UIButton) {
var lngStr = "1234567890"
for _ in 1...10 {lngStr = lngStr + lngStr}
let data = lngStr.data(using: .utf8)!
self.queue(data:data)
}
The queue(data:) function adds the data to the outputData object in a thread-safe manner and calls send():
private func queue(data: Data) {
queueQueue.sync {
self.outputData.append(data)
}
self.send()
}
send() ensures that the stream is connected, that there is data to send, and that there is space available in the output stream. If all is OK, it sends as many bytes as it can. The sent bytes are then removed from outputData (again in a thread-safe manner).
private func send() {
guard let ostream = self.channel?.outputStream, !self.outputData.isEmpty, ostream.hasSpaceAvailable else{
return
}
let bytesWritten = outputData.withUnsafeBytes { ostream.write($0, maxLength: self.outputData.count) }
print("bytesWritten = \(bytesWritten)")
queueQueue.sync {
if bytesWritten < outputData.count {
outputData = outputData.advanced(by: bytesWritten)
} else {
outputData.removeAll()
}
}
}
The final change is to call send() in response to a .hasSpaceAvailable stream event:
func stream(_ aStream: Stream, handle eventCode: Stream.Event) {
switch eventCode {
case Stream.Event.openCompleted:
print("Stream is open")
case Stream.Event.endEncountered:
print("End Encountered")
case Stream.Event.hasBytesAvailable:
print("Bytes are available")
case Stream.Event.hasSpaceAvailable:
print("Space is available")
self.send()
case Stream.Event.errorOccurred:
print("Stream error")
default:
print("Unknown stream event")
}
}
You can see the modified code in the largedata branch of the example
I am developing a chat application where I can receive a number of messages at a time, which leads to the app freezing. The following is my socket receiver:
func receiveNewDirectMessages() {
self.socket?.on(EventListnerKeys.message.rawValue, callback: { (arrAckData, ack) in
print_debug(arrAckData)
guard let dictMsg = arrAckData.first as? JSONDictionary else { return }
guard let data = dictMsg[ApiKey.data] as? JSONDictionary else { return }
guard let chatData = data[ApiKey.data] as? JSONDictionary else { return }
guard let messageId = chatData[ApiKey._id] as? String , let chatId = chatData[ApiKey.chatId] as? String else { return }
if MessageModel.getMessageModel(msgId: messageId) != nil { return }
let isChatScreen = self.isChatScreen
let localMsgId = "\(arc4random())\(Date().timeIntervalSince1970)"
if let senderInfo = data[ApiKey.senderInfo] as? JSONDictionary, let userId = senderInfo[ApiKey.userId] as? String, userId != User.getUserId() {
_ = AppUser.writeAppUserModelWith(userData: senderInfo)
}
let msgModel = MessageModel.saveMessageData(msgData: chatData, localMsgId: localMsgId, msgStatus: 2, seenByMe: false)
let chatModel = ChatModel.saveInboxData(localChatId: msgModel.localChatId, inboxData: chatData)
if isChatScreen {
self.emitMessageStatus(msgId: messageId, chatId: chatId, socketService: .messageStatus, status: .delivered)
self.emitMessageStatus(msgId: messageId, chatId: chatId, socketService: .messageStatus, status: .seen)
} else {
ChatModel.updateUnreadCount(localChatId: chatModel.localChatId, incrementBy: 1)
self.emitMessageStatus(msgId: messageId, chatId: chatId, socketService: .messageStatus, status: .delivered)
}
TabController.shared.updateChatBadgeCount()
})
}
What's happening above:
1. Receiving all the undelivered messages one by one in this socket listener
2. Fetching the message data
3. Saving the received sender's info to the Realm DB
4. Saving the message model to the Realm DB
5. Saving/updating the chat thread in the Realm DB
6. Emitting an acknowledgement for the received message
7. Updating the chat badge count on the tab bar
Below is my emitter for acknowledging the message delivery.
func emitMessageStatus(msgId: String, chatId: String, socketService: SocketService, status: MessageStatusAction) {
// Create Message data packet to be sent to socket server
var msgDataPacket = [String: Any]()
msgDataPacket[ApiKey.type] = socketService.type
msgDataPacket[ApiKey.actionType] = socketService.listenerType
msgDataPacket[ApiKey.data] = [
ApiKey.messageId: msgId,
ApiKey.chatId: chatId,
ApiKey.userId: User.getUserId(),
ApiKey.statusAction: status.rawValue
]
// send the messsage data packet to socket server & wait for the acknowledgement
self.emit(with: EventListnerKeys.socketService.rawValue, msgDataPacket) { (arrAckData) in
print_debug(arrAckData)
guard let dictMsg = arrAckData.first as? JSONDictionary else { return }
if let msgData = dictMsg[ApiKey.data] as? [String: Any] {
// Update delivered Seen Status here
if let msgId = msgData[ApiKey.messageId] as? String, let actionType = msgData[ApiKey.statusAction] as? String, let msgStatusAction = MessageStatusAction(rawValue: actionType) {
switch msgStatusAction {
case .delivered:
if let deliveredTo = msgData[ApiKey.deliveredTo] as? [[String: Any]] {
_ = MessageModel.updateMsgDelivery(msgId: msgId, deliveredTo: deliveredTo)
}
case .seen:
if let seenBy = msgData[ApiKey.seenBy] as? [[String: Any]] {
_ = MessageModel.updateMsgSeen(msgId: msgId, seenBy: seenBy)
}
case .pin:
MessageModel.clearPinnedMessages(chatId: chatId)
if let pinTime = msgData[ApiKey.pinTime] as? Double {
MessageModel.updatePinnedStatus(msgId: msgId, isPinned: true, pinTime: pinTime)
}
case .unPin:
if let pinTime = msgData[ApiKey.pinTime] as? Double {
MessageModel.updatePinnedStatus(msgId: msgId, isPinned: false, pinTime: pinTime)
}
case .delete:
MessageModel.deleteMessage(msgId: msgId)
case .ackMsgStatus, .like, .unlike:
break
}
}
}
}
}
What's happening above:
Encapsulating all the related information to acknowledge the event
Updating the Realm DB after the acknowledgement is delivered
Now, I'm not able to devise a proper threading policy here: what should run on a background thread and what should run on the main thread? I tried, but it leads to random crashes or packet losses.
Can anyone please guide me on this topic? I would be highly grateful.
Try to use a background thread for data processing / non-UI work.
Reduce the number of UI updates.
Instead of processing messages one by one, use a debounce-like approach: store new messages, then update the UI with n new messages at once. So instead of updating the UI / saving to the DB 100 times for 100 messages, you do it once for 100 messages. In more detail: with every new message, add it to an array and call the debouncer. The debouncer delays a function call, and every time it is called again it postpones the pending call until the delay has elapsed. So after e.g. 200 ms with no new message, the update function (the callback the debouncer handles) is called, and you update the UI/DB with the n stored messages (see the sketch after this list).
You can also group messages by time, for example by hour, and then update with a delay between each group. You can do this when the debouncer fires: group the messages by time, then update the DB/UI group by group. Use something like setTimeout (DispatchQueue.asyncAfter on iOS): update group 1, then 100 ms later update group 2, and so on, so the UI won't freeze.
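A minimal sketch of that debounce idea (the Debouncer class and the pendingMessages/flushPendingMessages names are illustrative, not from the question): new messages are buffered, and the expensive UI/DB work only runs once no message has arrived for the chosen delay.
import Foundation

final class Debouncer {
    private let delay: TimeInterval
    private let queue: DispatchQueue
    private var pendingWork: DispatchWorkItem?

    init(delay: TimeInterval, queue: DispatchQueue = .main) {
        self.delay = delay
        self.queue = queue
    }

    // Each call cancels the previously scheduled work and reschedules it,
    // so `action` only runs after `delay` seconds of silence.
    func call(_ action: @escaping () -> Void) {
        pendingWork?.cancel()
        let work = DispatchWorkItem(block: action)
        pendingWork = work
        queue.asyncAfter(deadline: .now() + delay, execute: work)
    }
}

// Usage sketch inside the socket callback:
// let debouncer = Debouncer(delay: 0.2)
// ...
// pendingMessages.append(chatData)               // buffer instead of processing immediately
// debouncer.call { self.flushPendingMessages() } // runs once messages stop arriving for 200 ms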
I'm building a multiplayer game based on the MPC framework for data communication between the players' devices (and SpriteKit for the game itself).
The logic is that one of the devices is the master: it does all the game logic and calculations, and sends the positions of the various sprites to the slave devices, which just have to update the sprite positions.
The master opens a stream to each slave and sends the various data through these streams. The data is stored in dictionaries of type (String, Any).
The issue is twofold:
little by little, a difference appears between the number of times the master sends data and the number of times a slave receives data
the more data the master sends, the more the slave receives empty/partial data
I have put various checks and controls in place; no errors are thrown by the send or receive functions, just lost data or empty/partial data.
Below are some relevant parts of the code. Let me know if more is needed.
Thx
J.
Stream initialization between master and slaves
Executed by the master
func InitMPCStream()
{
for Peer in Session.connectedPeers
{
if Peer.displayName != gMasterPeerName
{
let IndexPeer = gtListPlayerMulti.index(where: { $0.Name == Peer.displayName } )
try! gtListPlayerMulti[IndexPeer!].Stream = Session.startStream(withName: "Stream " + Peer.displayName, toPeer: Peer)
gtListPlayerMulti[IndexPeer!].Stream!.delegate = self
gtListPlayerMulti[IndexPeer!].Stream!.schedule(in: RunLoop.main, forMode:RunLoopMode.defaultRunLoopMode)
gtListPlayerMulti[IndexPeer!].Stream!.open()
}
}
}
Executed by the slaves
func session(_ session: MCSession, didReceive InStream: InputStream, withName StreamName: String, fromPeer PeerID: MCPeerID)
{
InputStream = InStream
InputStream.delegate = self
InputStream.schedule(in: RunLoop.main, forMode: RunLoopMode.defaultRunLoopMode)
InputStream.open()
}
Function that sends the data to the slaves
This function is executed on the master side in the update loop of the SKScene, several times per update, so very often.
The logic is that for each update cycle several events occur (between 10 and 100). Each event generates a call to this function. The quantity of data sent is not very large: a dictionary of (String, Any) with 3 to 10 entries, where Any can be a string, a number (Int, CGFloat...), or a bool.
func SendStreamData(IDPeer: Int = 0, DataToSend: Dictionary<String, Any>)
{
let DataConverted = NSKeyedArchiver.archivedData(withRootObject: DataToSend)
if IDPeer == 0
{
for Peer in Session.connectedPeers
{
let IndexPeer = gtListPlayerMulti.index(where: { $0.Name == Peer.displayName } )
if gtListPlayerMulti[IndexPeer!].Stream!.hasSpaceAvailable
{
let bytesWritten = DataConverted.withUnsafeBytes { gtListPlayerMulti[IndexPeer!].Stream!.write($0, maxLength: DataConverted.count) }
if bytesWritten == -1 { print("Error writing to stream") } else { gSendingNumber = gSendingNumber + 1 }
} else { print("No space in stream") }
}
}
else
{
let IndexPeer = gtListPlayerMulti.index(where: { $0.ID == IDPeer } )
let bytesWritten = DataConverted.withUnsafeBytes { gtListPlayerMulti[IndexPeer!].Stream!.write($0, maxLength: DataConverted.count) }
}
}
Function called when data is received on the slave side
func stream(_ aStream: Stream, handle eventCode: Stream.Event)
{
switch(eventCode)
{
case Stream.Event.hasBytesAvailable:
gSendingNumber = gSendingNumber + 1
let InputStream = aStream as! InputStream
var Buffer = [UInt8](repeating: 0, count: 1024)
let NumberBytes = InputStream.read(&Buffer, maxLength:1024)
let DataString = NSData(bytes: &Buffer, length: NumberBytes)
if let Message = NSKeyedUnarchiver.unarchiveObject(with: DataString as Data) as? [String:Any] //deserializing the NSData
{ ProcessMPCDataReceived(VCMain: self, PeerID: PeerID, RawData: DataString as Data) }
else { print("Empty Data") }
case Stream.Event.hasSpaceAvailable:
break
case Stream.Event.errorOccurred:
print("ErrorOccurred: \(aStream.streamError?.localizedDescription)")
default:
break
}
}
The issue here is in the NSKeyedUnarchiver.unarchiveObject line, which returns nil because DataString does not conform to the expected dictionary.
I have an application using RealmSwift on iOS that stores a small amount of data. The data is updated via CloudKit push notifications every ~5 minutes or so.
It works, except that my Realm file grows continuously until the application no longer has enough memory to launch.
I managed to partially work around it by using the "writeCopy" function to compact the Realm at launch. This mostly works if the app is stopped semi-frequently so the compaction has a chance to run, but if that does not happen and the app keeps updating the data from push notifications in the background for a couple of days, the database ends up so large that compacting it takes ages, or the app just crashes trying.
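Roughly, the compact-at-launch workaround looks like this (a simplified sketch, not my exact code; the file names are illustrative, and Realm's shouldCompactOnLaunch configuration option is another way to get the same effect):
import RealmSwift

// Write a compacted copy at launch, then swap it in before any other Realm instance is opened.
func compactRealmAtLaunch() {
    let fileURL = Realm.Configuration.defaultConfiguration.fileURL!
    let compactedURL = fileURL.appendingPathExtension("compact")
    try? FileManager.default.removeItem(at: compactedURL)   // clean up a leftover from a previous run
    autoreleasepool {
        if let realm = try? Realm() {
            try? realm.writeCopy(toFile: compactedURL)       // the copy is written compacted
        }
    }
    if FileManager.default.fileExists(atPath: compactedURL.path) {
        try? FileManager.default.removeItem(at: fileURL)
        try? FileManager.default.moveItem(at: compactedURL, to: fileURL)
    }
}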
I tried reading the "file size" section in the FAQ, but I don't think I fully understand the rules that cause the size to grow. It mentions keeping "old realms" open. Does that mean instances of Realm, e.g. "let realm = try! Realm()", or does it really mean any open object?
If it's the latter, that is annoying, as I use notifications on result sets while in the background to determine when to update a complication on a companion watch app.
I really like Realm and I want to stick with it, but I feel like my somewhat unusual use case (a long-running app that frequently updates in the background) might mean that I cannot use it.
Edit:
Background updates are happening as a result of the 'didReceiveRemoteNotification' app delegate callback, like this:
func application(_ application: UIApplication, didReceiveRemoteNotification userInfo: [AnyHashable : Any], fetchCompletionHandler completionHandler: @escaping (UIBackgroundFetchResult) -> Void) {
let dict = userInfo as! [String: NSObject]
let notification = CKNotification(fromRemoteNotificationDictionary: dict)
if notification.notificationType == .query {
let notification = notification as! CKQueryNotification
if let recordID = notification.recordID {
let requestOperation = CKFetchRecordsOperation(recordIDs: [recordID])
requestOperation.fetchRecordsCompletionBlock = { records, error in
if let error = error {
os_log("Error fetching record: %#", log: Logger.default, type: .error, String(describing: error))
} else {
autoreleasepool {
let realm = try! Realm()
try! realm.write {
records?.forEach { (key, record) in
switch record.recordType {
case "Freeway":
let dict = RealmFreeway.dictionary(from: record)
realm.create(RealmFreeway.self, value: dict, update: true)
case "Segment":
let dict = RealmSegment.dictionary(from: record)
realm.create(RealmSegment.self, value: dict, update: true)
default:
os_log("Unknown record type: %#", log: Logger.default, type: .error, record.recordType)
}
}
}
}
}
}
CKContainer.default().publicCloudDatabase.add(requestOperation)
}
}
completionHandler(.newData)
}
And the watch complication is updated via RxSwift like so:
Observable.combineLatest(watchViewModel.freewayName.asObservable(), watchViewModel.startName.asObservable(), watchViewModel.endName.asObservable(), watchViewModel.travelTime.asObservable(), watchViewModel.color.asObservable(), watchViewModel.direction.asObservable()) {
freewayName, startName, endName, travelTime, color, direction -> [String:Any] in
guard let freewayName = freewayName,
let startName = startName,
let endName = endName,
let direction = direction else {
return [String:Any]()
}
let watchData:[String:Any] = [
"freewayName": freewayName,
"startName": startName,
"endName": endName,
"travelTime": travelTime,
"color": color.htmlRGBColor,
"direction": direction,
"transfers": session.remainingComplicationUserInfoTransfers
]
return watchData
}
.filter { $0.keys.count > 0 }
.throttle(2.0, scheduler: MainScheduler.instance)
.subscribe( onNext: { watchData in
let MIN_DURATION = 24.0 * 60.0 * 60.0 / 50.0 // 50 guaranteed updates per day...
var timeSinceLastUpdate = MIN_DURATION
let now = Date()
if let lastUpdated = self.lastComplicationUpdate {
timeSinceLastUpdate = now.timeIntervalSince(lastUpdated)
}
// Should we use a complication update or an application context update?
let complicationUpdate = timeSinceLastUpdate >= MIN_DURATION
// Send the data via the appropriate method.
if complicationUpdate {
session.transferCurrentComplicationUserInfo(watchData)
} else {
try? session.updateApplicationContext(watchData)
}
self.lastComplicationUpdate = now
})
.addDisposableTo(bag)