CKQueryOperation right after CKModifyRecordsOperation

In my app I need to fetch all records in a custom zone (~12) shortly after writing them to the private database. The operations are all synchronized via dependencies. The data written to the cloud via CKModifyRecordsOperation is there, as seen in the dashboard and verified by correct results in the completion handler. My problem is that CKQueryOperation doesn't return the records just written. If I somehow delay the call to CKQueryOperation, then it works. It almost sounds like there's some kind of latency between writing and reading.
I've reviewed all the documentation, and other than the operation-based dependency mechanism I see no way of synchronizing reads and writes.
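For reference, the dependency setup in question looks roughly like this (a simplified sketch; the record type, zone name, and field name are placeholders):

```swift
import CloudKit

let database = CKContainer.default().privateCloudDatabase
let zoneID = CKRecordZone.ID(zoneName: "MyZone", ownerName: CKCurrentUserDefaultName)

// Records being written (placeholder record type and field).
let recordsToSave: [CKRecord] = (0..<12).map { i in
    let record = CKRecord(recordType: "Item",
                          recordID: CKRecord.ID(recordName: "item-\(i)", zoneID: zoneID))
    record["name"] = "Item \(i)" as NSString
    return record
}

// 1. Write.
let saveOp = CKModifyRecordsOperation(recordsToSave: recordsToSave, recordIDsToDelete: nil)
saveOp.modifyRecordsCompletionBlock = { saved, _, error in
    print("saved \(saved?.count ?? 0) records, error: \(String(describing: error))")
}

// 2. Read everything back from the same zone.
let queryOp = CKQueryOperation(query: CKQuery(recordType: "Item",
                                              predicate: NSPredicate(value: true)))
queryOp.zoneID = zoneID
queryOp.recordFetchedBlock = { record in print("fetched \(record.recordID.recordName)") }
queryOp.queryCompletionBlock = { _, error in
    print("query finished, error: \(String(describing: error))")
}

// The dependency guarantees client-side ordering only; it does not guarantee
// that the server's query index has caught up with the write.
queryOp.addDependency(saveOp)

database.add(saveOp)
database.add(queryOp)
```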
What am I missing?
Please help!
Ramon.
Edit:
Hello, I found more evidence that there's some undetermined latency when using CloudKit. One thread on SO suggested stitching records to avoid the latency problem. The "stitching" technique was definitely possible in my case, so I rewrote my code to take advantage of it. That basically bypasses the latency altogether by avoiding the need to load all records.
Here's the link: Stitching Records
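As I understand the technique, a single "index" record with a well-known ID stores CKRecord.Reference values pointing at the item records, so everything can be fetched by record ID instead of by query; record-ID fetches do not depend on the query index. A rough sketch, with placeholder names:

```swift
import CloudKit

let database = CKContainer.default().privateCloudDatabase
let zoneID = CKRecordZone.ID(zoneName: "MyZone", ownerName: CKCurrentUserDefaultName)

// A single "index" record with a well-known ID holds references to every item.
let indexID = CKRecord.ID(recordName: "itemIndex", zoneID: zoneID)

// 1. Fetch the index record by ID (no CKQuery, so no query-index latency).
database.fetch(withRecordID: indexID) { indexRecord, error in
    guard let indexRecord = indexRecord,
          let references = indexRecord["items"] as? [CKRecord.Reference] else {
        print("index fetch failed: \(String(describing: error))")
        return
    }

    // 2. Fetch the referenced item records, again by ID.
    let fetchOp = CKFetchRecordsOperation(recordIDs: references.map { $0.recordID })
    fetchOp.fetchRecordsCompletionBlock = { recordsByID, error in
        print("fetched \(recordsByID?.count ?? 0) items, error: \(String(describing: error))")
    }
    database.add(fetchOp)
}
```

When writing, the index record and the item records would be saved together in the same CKModifyRecordsOperation, so the references never point at records that don't exist yet.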

Related

Syncing of memory and database objects upon changes in objects in memory

I am currently implementing a web application in .NET Core (C#) using Entity Framework. While working on the project I encountered quite a few challenges, but I will start with the ones I think are most important. My questions are as follows:
Instead of frequently loading data from the database, I keep a set of static objects that mirror the data in the database. However, it is tedious and error prone to ensure that any changes, i.e., adding/deleting/modifying objects, are saved to the database in real time. Is there any good example or advice I can refer to in order to improve my approach?
Another thing is that the values of some objects' properties change on the fly according to the values of other objects' properties, something like a spreadsheet where a cell's value changes automatically when the value in the cell that its formula refers to changes. I do not have a solution for this yet, so I'd appreciate any example I can refer to. This will also add another layer of complexity to syncing the changes of the in-memory objects to the database.
At the moment I am unsure if there is a better approach. I'd appreciate any help. Thanks!
Basically, you're facing a problem that's called eventual consistency: something changes, and two or more systems need to reflect that change at the same time. The problem here is that both changes need to be applied in order to consider the operation successful; if either one fails, you need to know.
In your case, I would use Azure Service Bus. You can create queues and put messages on a queue, and an Azure Function would handle these queue messages. You would create two queues: one for database updates, and one for the in-memory update (I think changing this to a cache service may be something to consider). The advantage of these queues is that you can easily drop messages on them from anywhere. Because you mentioned the objects are going to evolve, you may need to update them either in the database or in memory (cache).
Once you've done that, I'd create a topic with two subscriptions: one forwarding messages to Queue 1, and the other to Queue 2. This will solve your primary problem. Whenever an object changes, just send it to the topic, and both changes (database and memory) will be executed automagically.
The only problem you have now is that you mentioned you wanted to update the database in real time. With this scenario, you're going to have to give that up.
Also, make sure you have proper alerts in place for the queues, so that if you miss a message, or your functions don't handle it well enough, you'll receive an alert and can check and correct the errors.
I totally agree with #nineedm's answer, but there are also other solutions.
If you introduce a cache, you will always face the cache invalidation problem: you have to mark the cache as invalid when the data changes. Sometimes this is easy, depending on the nature of the cached data and how often it changes.
If you have just a single application, MemoryCache can be enough with properly specified expiration options.
If there is a cluster, you have to look at distributed cache solutions, for example Redis. There is an MS article about that: Distributed caching in ASP.NET Core.

Azure Durable Function getting slower and slower over time

My Azure Durable Function (Runtime V3) gets an average of 3M events per day. After it has been running for two or three weeks, it gets slower and slower. When I remove the two storage tables (History & Instances) used by the Durable Functions framework, it recovers and works as expected. I host my function app on the Consumption plan. Inside my function app I also use Durable Entities, and in my code I use sub-orchestrators for the fan-out mechanism.
Is this problem expected under a heavy workload? Do I need to clear those storage tables from time to time, or do I need to delete the state of completed entities inside my Durable Entity function?
Someone, please help me.
Yes, you should perform periodic clean-ups yourself by calling the PurgeInstanceHistoryAsync method. See a similar post on how to do this: https://stackoverflow.com/a/60894392
Also review any loops or Monitor patterns that you may have in your code.
Any looping logic (like foreach, for, or while loops) will replay from the initial startup state. Whilst the Durable Functions replay architecture is very efficient at doing this, the code we write may not be optimised for repetitive iterations.
The Durable Monitor pattern is almost an anti-pattern. The concept is OK, but it is easily misinterpreted and is open to abuse. It is designed for a low-frequency loop that polls an endpoint for a set number of iterations, or up until a finite time, or until the state of the endpoint being monitored changes. That state change is the trigger to perform the rest of the operation.
It is NOT an example of how to use general or high-frequency looping structures in Durable Functions.
It is NOT an example of how to implement a traditional HTTP endpoint monitor in an infinite-loop (while(true)) style, perhaps to record changes into a data store over time.
If your durable function logic has an iterator that may involve many iterations, consider migrating the iteration step to a sub-orchestration that uses the Eternal Orchestration pattern

Difference between CKQueryOperation and Perform(Fetch...)

I'm new to working with CloudKit and database fetching. I've looked at the CKDatabaseOperation calls, and I'm trying to understand the real difference between adding an operation to a database and using the "normal" convenience function calls on that database, given that they both produce more or less the same results.
Why would adding an operation be more desirable over a function call and in what situations?
Thanks for helping me understand this. I'm trying to learn as much as I can about Swift.
Overview:
In CloudKit most of the tasks have 2 ways of doing things:
Convenience APIs (functions with completion handlers)
Operations
1. Convenience APIs
Advantages:
As the name implies, they are convenient to use
Disadvantages:
Usually require more server requests.
Can't build dependencies.
2. Operations:
Advantages:
More configurable, with more options.
Require fewer server requests (better for your server request quota).
It is built using Operation, so you get all the capabilities of Operation like dependencies (you will need them in a real app)
Disadvantages:
It is not as convenient to use; you need to create the operation yourself. It takes a little more time to code, but it is well worth it.
Example 1 (Fetch):
If you use CKDatabase.fetch, you would need to specify the record IDs that you want to fetch.
If you use CKQueryOperation, you can query based on field values.
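A minimal sketch of the two approaches side by side (the record type, field name, and record name below are placeholder assumptions, and the queried field would need to be marked queryable in the CloudKit schema):

```swift
import CloudKit

let database = CKContainer.default().privateCloudDatabase

// Convenience API: you must already know the record's ID.
let recordID = CKRecord.ID(recordName: "someKnownRecordName")
database.fetch(withRecordID: recordID) { record, error in
    print("fetched by ID: \(String(describing: record)), error: \(String(describing: error))")
}

// Operation: query by field values instead of by ID.
let predicate = NSPredicate(format: "category == %@", "books")
let queryOp = CKQueryOperation(query: CKQuery(recordType: "Item", predicate: predicate))
queryOp.recordFetchedBlock = { record in
    print("matched: \(record.recordID.recordName)")
}
queryOp.queryCompletionBlock = { cursor, error in
    // A non-nil cursor means there are more results to page through.
    print("query done, more results: \(cursor != nil), error: \(String(describing: error))")
}
database.add(queryOp)
```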
Example 2 (Save / Update):
If you use CKDatabase.save, you can save one record with each function call, and each call results in a separate server request. If you want to save 200 records, you would have to run it in a loop and make 200 server requests, which is not very efficient. CloudKit also has a limit on the number of server requests you can make per second, so you would exhaust your quota very quickly.
If you use CKModifyRecordsOperation, you can save all 200 records at once* by passing them as an array, so you make far fewer server requests.
*Note: The server imposes a limit on the number of records it can save in 1 request but it is definitely better than creating a separate request to save each record.
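A minimal sketch of the batched save described above (the record type and field name are placeholder assumptions):

```swift
import CloudKit

let database = CKContainer.default().privateCloudDatabase

// Build the records to save (placeholder record type and field).
let records: [CKRecord] = (0..<200).map { i in
    let record = CKRecord(recordType: "Item")
    record["name"] = "Item \(i)" as NSString
    return record
}

// One operation, one server round trip (subject to the per-request limit),
// instead of 200 individual CKDatabase.save calls.
let saveOp = CKModifyRecordsOperation(recordsToSave: records, recordIDsToDelete: nil)
saveOp.modifyRecordsCompletionBlock = { saved, _, error in
    print("saved \(saved?.count ?? 0) records, error: \(String(describing: error))")
}
database.add(saveOp)
```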
Reference:
https://developer.apple.com/library/content/documentation/DataManagement/Conceptual/CloudKitQuickStart/Introduction/Introduction.html#//apple_ref/doc/uid/TP40014987-CH1-SW1
Watch WWDC CloudKit videos
It might also help to watch the WWDC videos about Operation (earlier referred to as NSOperation).

CoreData (Swift) memory issue while inserting thousands of records

My application is written in Swift (latest version) and has a fairly complex database structure.
I'm importing records when the app launches for the first time, as the app must support offline information; the app can have millions of records.
I'm saving records into entities that have relationships with around 14-15 other entities (one-to-one and one-to-many).
My application throws a memory warning and gets terminated after around 1000 thousand records. I tried profiling for leaks, but the app works fine then; however, it takes a long time.
I have tried creating a singleton context-manager class, and also tried creating a local context variable while inserting each chunk of records.
For now, I'm fetching 50 records at a time from the web API and saving my context after updating my entities.
I have tried with autoreleasepool, but no success.
Please suggest what I should do.
Thank you
Ashwin
I can advise you to watch this video. It is very inspiring and explains a lot of useful things about Core Data:
https://developer.apple.com/videos/play/wwdc2013/211/
Are you using fetchBatchSize property?
https://developer.apple.com/reference/coredata/nsfetchrequest/1506558-fetchbatchsize
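For illustration, a minimal sketch of setting it (the entity name is a placeholder; 50 matches the chunk size mentioned in the question):

```swift
import CoreData

// Placeholder entity name ("Item").
func fetchItems(in context: NSManagedObjectContext) throws -> [NSManagedObject] {
    let request = NSFetchRequest<NSManagedObject>(entityName: "Item")
    request.fetchBatchSize = 50   // objects are materialised from the store 50 at a time
    return try context.fetch(request)
}
```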
If you are processing large amounts of Core Data objects in a loop, then you need to periodically save the context so that core data can turn modified objects back into faults instead of keeping them in memory. How often you need to save and when depends on your application and the code you are using to process, which it would be helpful to see in your question. You'll need to experiment yourself to find a balance between speed and memory use.
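As a rough illustration of that advice (the entity name, attribute key, and batch shape are placeholder assumptions, not the asker's actual model):

```swift
import CoreData

// Hypothetical import loop: save and reset the context after each chunk so
// inserted objects can be released instead of accumulating in memory.
func importRecords(_ batches: [[[String: Any]]], into context: NSManagedObjectContext) throws {
    for batch in batches {                      // e.g. 50 records per batch from the web API
        try autoreleasepool {
            for json in batch {
                let item = NSEntityDescription.insertNewObject(forEntityName: "Item", into: context)
                item.setValue(json["name"], forKey: "name")
            }
            try context.save()                  // push changes to the store...
            context.reset()                     // ...and let the objects be released again
        }
    }
}
```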
Use the allocations instrument and you will see where your memory is going. You're not leaking memory, you're just using too much of it.
Also make sure zombie objects are disabled for your project: when they are enabled, deallocated objects are kept around for debugging, which increases memory use. You can check this in Xcode under Edit Scheme > Run > Diagnostics > Zombie Objects.

Async logging of a static queue in Objective-C

I'd like some advice on how to implement the following in Objective-C. The actual application is related to an A/B testing framework I'm working on, but that context shouldn't matter too much.
I have an iOS application, and when a certain event happens I'd like to send a log message to an HTTP service endpoint.
I don't want to send the message every time the event happens. Instead I'd prefer to aggregate the messages and, when they reach some (configurable) number, send them off asynchronously.
I'm thinking of wrapping a static NSMutableArray in a class with an add method. That method can check whether we have reached the configurable maximum and, if we have, aggregate and send asynchronously.
Does Objective-C offer any better constructs to store this data? Perhaps one that helps handle concurrency issues? Maybe some kind of message queue?
I've also seen some solutions with dispatching that I'm still trying to get my head around (I'm new).
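To make the idea concrete, here is roughly the shape I have in mind, sketched in Swift for brevity (the same structure maps to Objective-C with a dispatch_queue_t and an NSMutableArray); the queue label, threshold, endpoint, and payload format are placeholders:

```swift
import Foundation

final class EventLogger {
    static let shared = EventLogger()

    private let queue = DispatchQueue(label: "com.example.eventlogger") // serial queue guards the buffer
    private var buffer: [[String: Any]] = []
    private let flushThreshold = 20
    private let endpoint = URL(string: "https://example.com/log")!

    func log(_ event: [String: Any]) {
        queue.async {
            self.buffer.append(event)
            if self.buffer.count >= self.flushThreshold {
                let batch = self.buffer
                self.buffer.removeAll()
                self.send(batch)
            }
        }
    }

    private func send(_ batch: [[String: Any]]) {
        guard let body = try? JSONSerialization.data(withJSONObject: batch) else { return }
        var request = URLRequest(url: endpoint)
        request.httpMethod = "POST"
        request.setValue("application/json", forHTTPHeaderField: "Content-Type")
        URLSession.shared.uploadTask(with: request, from: body).resume()
    }
}
```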
If the log messages are important, keeping them in memory (in an array) might not suffice: if the app quits or crashes, the array will not persist to the next execution.
Instead, you should save them to a database with a 'sync' flag. You can trigger the sync module on every insert to check whether the entries with the sync flag set to false have reached a threshold, then trigger the upload and either set the sync flag to true for all uploaded records or simply delete the synced records. This also helps you separate your logging module and syncing module, so both of them work independently.
You will find a lot of help online for syncing an SQLite DB or Core Data; simply google "iOS database sync". If your requirements are not very complex, and you don't mind using third-party or open-source code, it is always better to go for a readily available solution instead of reinventing the wheel.
