Question:
Does deleting an NSManagedObject have to be done within a context.perform / context.performAndWait block?
Or is it safe to delete the object outside the block?
Code:
func delete(something: NSManagedObject, context: NSManagedObjectContext) {
    context.performAndWait { // Is context.perform / context.performAndWait required to delete an object?
        context.delete(something)
    }
}
My thoughts:
Since this code is called from different threads (both background and main), it seemed better to use context.perform / context.performAndWait.
The context might have been created with a specific concurrency type (main or private queue).
The context's concurrency type would need to match the thread (main or background) on which the code is executed.
The block would ensure the work runs correctly even when it is invoked from a thread that does not match the context's queue.
In my personal experience, use performAndWait when you need to wait until the operation is done. Either way, both methods run the block on the context's own queue.
From Documentation:
perform(_:) and performAndWait(_:) ensure the block operations are executed on the queue specified for the context. The perform(_:) method returns immediately and the context executes the block methods on its own thread. With the performAndWait(_:) method, the context still executes the block methods on its own thread, but the method doesn't return until the block is executed.
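To make the difference concrete, here is a minimal sketch of my own (not from the question; it assumes the object being deleted was fetched from the same context) showing both variants deleting and saving on the context's own queue:

import CoreData

func deleteAsync(_ object: NSManagedObject, in context: NSManagedObjectContext) {
    // perform(_:) returns immediately; the block runs later on the context's queue.
    context.perform {
        context.delete(object)
        try? context.save()
    }
}

func deleteAndWait(_ object: NSManagedObject, in context: NSManagedObjectContext) {
    // performAndWait(_:) blocks the caller until the block has finished on the context's queue.
    context.performAndWait {
        context.delete(object)
        try? context.save()
    }
}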
The Apple documentation describes perform(_:) as asynchronous, while the "...AndWait" variant is synchronous. The "...AndWait" variant is also what you should use to catch errors thrown inside the perform block...
moc.performBlock {
    for jsonObject in jsonArray {
        // ... your actions for each jsonObject ...
    }
    moc.performBlockAndWait {
        do {
            try moc.save()
        } catch {
            fatalError("Failure to save context: \(error)")
        }
    }
}
It is better to do the work inside the block, in case you have different or unused values; in most cases ARC (memory management) will take care of it.
You should also read here:
Core Data background context best practice
Related
I've been using the CloudKitShare sample code found here as a sample to help me write code for my app. I want to use performWriterBlock and performReaderBlockAndWait as found in BaseLocalCache using a completionHandler without violating the purposes of the design of the code, which focuses on being thread-safe. I include the code from CloudKitShare below that is pertinent to my question. I include the comments that explain the code. I wrote comments to identify which code is mine.
I would like to be able to use an escaping completionHandler if possible. Does using an escaping completionHandler still comply with principles of thread-safe code, or does it in any way violate the purpose of the design of this sample code to be thread-safe? If I use an escaping completionHandler, I would need to consider when the completionHandler actually runs relative to other code outside of the scope of the actual perform function that uses the BaseLocalCache perform block. I would for one thing need to be aware of what other code runs in my project between the time the method executes and the time operationQueue in BaseLocalCache actually executes the block of code and thus the completionHandler.
class BaseLocalCache {
    // A CloudKit task can be a single operation (CKDatabaseOperation)
    // or multiple operations that you chain together.
    // Provide an operation queue to get more flexibility on CloudKit operation management.
    //
    lazy var operationQueue: OperationQueue = OperationQueue()

    // This sample ...
    //
    // This sample uses this dispatch queue to implement the following logic:
    // - It serializes writer blocks.
    // - The reader block can be concurrent, but it needs to wait for the enqueued writer blocks to complete.
    //
    // To achieve that, this sample uses the following pattern:
    // - Use a concurrent queue, cacheQueue.
    // - Use cacheQueue.async(flags: .barrier) {} to execute writer blocks.
    // - Use cacheQueue.sync() {} to execute reader blocks. The queue is concurrent,
    //   so reader blocks can be concurrent, unless any writer blocks are in the way.
    // Note that writer blocks block the reader, so they need to be as small as possible.
    //
    private lazy var cacheQueue: DispatchQueue = {
        return DispatchQueue(label: "LocalCache", attributes: .concurrent)
    }()

    func performWriterBlock(_ writerBlock: @escaping () -> Void) {
        cacheQueue.async(flags: .barrier) {
            writerBlock()
        }
    }

    func performReaderBlockAndWait<T>(_ readerBlock: () -> T) -> T {
        return cacheQueue.sync {
            return readerBlock()
        }
    }
}
final class TopicLocalCache: BaseLocalCache {
    private var serverChangeToken: CKServerChangeToken?

    func setServerChangeToken(newToken: CKServerChangeToken?) {
        performWriterBlock { self.serverChangeToken = newToken }
    }

    func getServerChangeToken() -> CKServerChangeToken? {
        return performReaderBlockAndWait { return self.serverChangeToken }
    }

    // Trial: How to use an escaping completionHandler with performWriterBlock?
    func setServerChangeToken(newToken: CKServerChangeToken?,
                              completionHandler: @escaping (Result<Void, Error>) -> Void) {
        performWriterBlock {
            self.serverChangeToken = newToken
            completionHandler(.success(Void()))
        }
    }

    // Trial: How to use an escaping completionHandler with performReaderBlockAndWait?
    func getServerChangeToken(completionHandler: (Result<CKServerChangeToken, Error>) -> Void) {
        performReaderBlockAndWait {
            if let serverChangeToken = self.serverChangeToken {
                completionHandler(.success(serverChangeToken))
            } else {
                completionHandler(.failure(NSError(domain: "nil CKServerChangeToken", code: 0)))
            }
        }
    }
}
You asked:
Does using an escaping completionHandler still comply with principles of thread-safe code, or does it in any way violate the purpose of the design of this sample code to be thread-safe?
An escaping completion handler does not violate thread-safety.
That having been said, it does not ensure thread-safety, either. Thread-safety is solely a question of whether you ever access some shared resource from one thread while mutating it from another.
If I use an escaping completionHandler, I would need to consider when the completionHandler actually runs relative to other code outside of the scope of the actual perform function that uses the BaseLocalCache perform block.
Yes, you need to be aware that the escaping completion handler is called asynchronously (i.e., later). That is less of a thread-safety concern than a general understanding of the application flow. It is only a question of what you might be doing in that closure.
IMHO, the more important observation is that the completion handler is called on the cacheQueue used internally by BaseLocalCache. So, the caller needs to be aware that the closure is not called on the caller’s current queue, but on cacheQueue.
It should be noted that elsewhere in that project, they employ another common pattern, where the completion handler is dispatched back to a particular queue, e.g., the main queue.
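As a rough sketch of what that pattern could look like here (this method is hypothetical, not part of the sample), the writer block would still mutate state on cacheQueue but hand the completion handler back to the main queue:

// Hypothetical variant: state is still mutated on cacheQueue,
// but the completion handler is dispatched back to the main queue,
// so the caller never runs code on the cache's internal queue.
func setServerChangeToken(newToken: CKServerChangeToken?,
                          completionHandler: @escaping () -> Void) {
    performWriterBlock {
        self.serverChangeToken = newToken
        DispatchQueue.main.async {
            completionHandler()
        }
    }
}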
Bottom line, thread-safety is not a question of whether a closure is escaping or not, but rather (a) from what thread does the method call the closure; and (b) what the supplied closure actually does:
Do you interact with the UI? Then you will want to ensure that you dispatch that back to the main queue.
Do you interact with your own properties? Then you will want to make sure you synchronize all of your access to them, whether with actors, by relying on the main queue, by using your own serial queues, or with a reader-writer pattern like in the example you shared with us.
If you are ever unsure about your code's thread-safety, you might consider temporarily turning on TSan, as described in Diagnosing Memory, Thread, and Crash Issues Early.
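For the actor option mentioned above, here is a minimal sketch (assuming Swift 5.5+ concurrency; the type name is purely illustrative):

import CloudKit

// An actor serializes all access to its state, so no explicit queue or barrier is needed.
actor ChangeTokenStore {
    private var serverChangeToken: CKServerChangeToken?

    func set(_ token: CKServerChangeToken?) {
        serverChangeToken = token
    }

    func token() -> CKServerChangeToken? {
        serverChangeToken
    }
}

// Usage (from an async context):
// let store = ChangeTokenStore()
// await store.set(newToken)
// let current = await store.token()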
Let me show a simplified example of the problem I'm struggling with:
class CarService {
    func getCars() -> Single<[Car]> {
        return Single.create { observer in
            // Here we're using a thread that was defined in subscribeOn().
            someCallbackToAPI { cars in
                // Here we're using the main thread, because of the someCallbackToAPI implementation.
                observer(.success(cars))
            }
            return Disposables.create()
        }
    }
}

class CarRepository {
    func syncCars() -> Completable {
        return CarService().getCars()
            .flatMapCompletable { cars in
                // Here we're using the main thread, but we want some background thread.
                saveCars(cars)
            }
    }
}

class CarViewController {
    func loadCar() {
        CarRepository().syncCars()
            .subscribeOn(someBackgroundScheduler)
            .observeOn(MainScheduler.instance)
            .subscribe()
    }
}
From the bottom: CarViewController wants to sync all the cars from some external API. It defines which thread should be used for the sync with subscribeOn - we don't want to block the UI thread. Unfortunately, underneath, CarService has to use an external library method (someCallbackToAPI) that always returns its result on the main thread. The problem is that after receiving the result, all the methods below it, e.g. saveCars, are called on that same main thread. saveCars may block the UI thread because it saves data to a database. Of course I could add an observeOn between CarService().getCars() and flatMapCompletable, but I want the CarRepository to be dumb and know nothing about threads. It is the CarViewController's responsibility to define the working thread.
So my question is: is there a way to get hold of the scheduler passed in the subscribeOn method and switch back to that scheduler after receiving the result from someCallbackToApi?
The short answer is no.
As you surmise, the problem is that your someCallbackToAPI is routing to the main thread, which is not what you wanted, and there's nothing you can do about that short of rewriting someCallbackToAPI. If you are using Alamofire or Moya, I think they have alternative methods that won't call the closure on the main thread, but I'm not sure. URLSession does not switch to the main thread, so one idea would be to use it instead.
If you want saveCars to happen on a background thread, you will have to use observeOn to push the computation back onto a background thread from main. The only thing subscribeOn will do is call someCallbackToAPI(_:) on a background thread; it cannot dictate which thread that function will call its closure on.
So something like:
func syncCars() -> Completable {
    return CarService().getCars()
        .observeOn(someBackgroundScheduler)
        .flatMapCompletable { cars in
            // Now this will be on the background thread.
            saveCars(cars)
        }
}
As a final note, an empty subscribe is a code smell. Any time you find yourself calling .subscribe() for anything other than testing purposes, you are likely doing something wrong.
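For instance, a hedged sketch of a subscription that actually handles the result (it assumes a disposeBag property on the view controller, which is not shown in the question):

CarRepository().syncCars()
    .subscribeOn(someBackgroundScheduler)
    .observeOn(MainScheduler.instance)
    .subscribe(
        onCompleted: { /* refresh the table or collection view here */ },
        onError: { error in print("Sync failed: \(error)") }
    )
    .disposed(by: disposeBag)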
I'm struggling a little bit trying to create an application for my own education purposes using Swift.
Right now I have the following (desired) order of execution:
TabView
FirstViewController - TableView
Check into CoreData
If data exists, update an array using a closure
If data doesn't exist, download it using Alamofire from the API and store it in Core Data
SecondViewController - CollectionView
Checks whether the image data exists in Core Data; if it does, it loads it from there, otherwise it downloads it.
What I'm struggling with most is knowing whether the code after a closure is executed only after the closure ends (synchronously), or whether it might be executed before or while the closure is running.
For example:
FirstViewController
var response: [DDGCharacter] = []

// coreData is an instance of such a class
coreData.load(onFinish: { response in // Custom method in another class
    print("Finished loading")
    self.response = response
})

print("Executed after loading data from Core Data")

// If no data is saved, download from API
if response.count == 0 {
    // Download from API
}
I have done the above test with the same result in 10 runs getting:
Finished loading
Executed after loading data from Core Data
In all 10 runs; but that might just be because load doesn't take long to complete and therefore only appears to be synchronous while it's not.
So my question is, is it always going to execute in that order, independent of the amount of data, or might it change? I've done some debugging as well, and both of them are executed on the main thread. I just want to be sure that my suppositions are correct.
As requested in the comments, here's the implementation done in the load() method:
func load(onFinish: ([DDGCharacter]) -> ()) {
    var characters: [DDGCharacter] = []
    guard let appDelegate = UIApplication.shared.delegate as? AppDelegate else {
        return
    }
    let managedContext = appDelegate.persistentContainer.viewContext
    let fetchRequest = NSFetchRequest<NSManagedObject>(entityName: "DDGCharacter")
    do {
        characters = try managedContext.fetch(fetchRequest) as! [DDGCharacter]
    } catch let error as NSError {
        print("Could not fetch. \(error), \(error.userInfo)")
    }
    onFinish(characters)
}
Your implementation of load(onFinish:) is very surprising and over-complicated. Luckily, though, that helps demonstrate the point you were asking about.
A closure is executed when something calls it. So in your case, onFinish is called at the end of the method, which makes it synchronous. Nothing about being "a closure" makes anything asynchronous. It's just the same as calling a function. It is completely normal to call a closure multiple times (map does this for instance). Or it might never be called. Or it might be called asynchronously. Fundamentally, it's just like passing a function.
When I say "it's slightly different than an anonymous function," I'm just referring to the "close" part of "closure." A closure "closes over" the current environment. That means it captures variables in the local scope that are referenced inside the closure. This is slightly different than a function (though it's more about syntax than anything really deep; functions can actually become closures in some cases).
The better implementation would just return the array in this case.
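For example, a sketch of that simpler synchronous shape (same fetch as the original, assuming DDGCharacter is an NSManagedObject subclass):

func load() -> [DDGCharacter] {
    guard let appDelegate = UIApplication.shared.delegate as? AppDelegate else {
        return []
    }
    let managedContext = appDelegate.persistentContainer.viewContext
    let fetchRequest = NSFetchRequest<DDGCharacter>(entityName: "DDGCharacter")
    do {
        return try managedContext.fetch(fetchRequest)
    } catch let error as NSError {
        print("Could not fetch. \(error), \(error.userInfo)")
        return []
    }
}

// The caller then reads naturally, top to bottom:
// let response = coreData.load()
// if response.isEmpty { /* download from the API */ }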
I'm new to CoreData and I'm trying to create a simple application.
Assume I have a function:
func saveEntry(entry: Entry) {
    let moc = NSManagedObjectContext(concurrencyType: .PrivateQueueConcurrencyType)
    moc.parentContext = savingContext
    moc.performBlockAndWait {
        // find if MOC has entry
        // if not => create
        // else => update
        // saving logic here
    }
}
It can introduce a problem: if I call saveEntry from two threads, passing the same entry, it will be duplicated. So I've added a serial queue to my DB adapter and I'm doing it in the following manner:
func saveEntry(entry: Entry) {
    dispatch_sync(serialDbQueue) { // (1)
        let moc = NSManagedObjectContext(concurrencyType: .PrivateQueueConcurrencyType)
        moc.parentContext = savingContext
        moc.performBlockAndWait { // (2)
            // find if MOC has entry
            // if not => create
            // else => update
            // saving logic here
        }
    }
}
And it works fine, until I add another interface function:
func saveEntries(entries: [Entry]) {
    dispatch_sync(serialDbQueue) { // (3)
        let moc = NSManagedObjectContext(concurrencyType: .PrivateQueueConcurrencyType)
        moc.parentContext = savingContext
        moc.performBlockAndWait {
            entries.forEach { saveEntry($0) }
        }
    }
}
And now I have a deadlock: (1) is called on serialDbQueue and waits until saving finishes; (2) is called on the context's private queue and waits for (3); and (3) is waiting for (1).
So what is the correct way to synchronize this access? As far as I understand, it's not safe to keep one MOC and perform saves on it, for the reasons described here: http://saulmora.com/coredata/magicalrecord/2013/09/15/why-contextforcurrentthread-doesn-t-work-in-magicalrecord.html
I would try to implement this with a single NSManagedObjectContext as the control mechanism. Each context maintains a serial operation queue so multiple threads can call performBlock: or performBlockAndWait: without any danger of concurrent access (though you must be cautious of the context's data changing between the time the block is enqueued and when it eventually executes). As long as all work within the context is being done on the correct queue (via performBlock) there's no inherent danger in enqueuing work from multiple threads.
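As a rough sketch of that approach (Swift 2-era API to match the question; sharedContext is an assumed, long-lived private-queue context, not something from your code):

func saveEntry(entry: Entry) {
    sharedContext.performBlock {
        // find-or-create `entry` in sharedContext, then save
        do { try self.sharedContext.save() }
        catch { print("Save failed: \(error)") }
    }
}

func saveEntries(entries: [Entry]) {
    sharedContext.performBlock {
        for entry in entries {
            // find-or-create each entry here, on the context's own queue;
            // there is no nested dispatch_sync, so no deadlock
        }
        do { try self.sharedContext.save() }
        catch { print("Save failed: \(error)") }
    }
}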
There are of course some complications to consider and I can't offer real suggestions without knowing much more about your app.
What object will be responsible for creating this context and how will it be made available to every object which needs it?
With a shared context it becomes difficult to know when work on that context is "finished" (its operation queue is empty), if that represents a meaningful state in your app.
With a shared context it is more difficult to abandon changes should you want to discard unsaved modifications in the event of an error (you'll need to actually revert those changes rather than simply discard the context without saving).
I have an array.
var array: [customType] = [] // pseudo code

func Generate_New_Array() {
    // initialization of generatedNewArray
    array = generatedNewArray
    for (index, element) in array.enumerate() {
        async_process({
            Update_Data_From_Web(&array[index])
        })
    }
}

func Update_Data_From_Web(inout object: customType) {
    download_process {
        object = downloadedData
    }
}
The question is: what should I do if I call Generate_New_Array before Update_Data_From_Web has finished for each of the elements? They will store values back to an index that no longer exists in the array. How do I avoid problems with that?
You have a couple of options:
Make the Generate_New_Array process cancelable, and then cancel the old one before starting the new one.
Make Generate_New_Array serial, so that a subsequent call to this method waits for the prior calls to finish first. For example, you could enqueue the work as an operation on a serial queue (see the sketch after this list).
Regardless of which approach you adopt, if this is multithreaded code, make sure you synchronize your interaction with the model object (via GCD queues or locks or whatever).
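Here is a hedged sketch combining both ideas, serializing the bookkeeping on a queue and dropping results that belong to an outdated array (all names are illustrative; updateFromWeb stands in for your real download):

import Foundation

final class ArrayUpdater {
    // A serial queue guards both the array and the generation counter.
    private let queue = DispatchQueue(label: "array.updates")
    private var generation = 0
    private var array: [String] = []

    func generateNewArray(_ newArray: [String]) {
        queue.async {
            self.generation += 1
            let current = self.generation
            self.array = newArray
            for index in newArray.indices {
                self.updateFromWeb(newArray[index]) { updated in
                    self.queue.async {
                        // Ignore results that belong to an older array.
                        guard current == self.generation,
                              self.array.indices.contains(index) else { return }
                        self.array[index] = updated
                    }
                }
            }
        }
    }

    // Placeholder for the real asynchronous download.
    private func updateFromWeb(_ value: String, completion: @escaping (String) -> Void) {
        DispatchQueue.global().async { completion(value + " (updated)") }
    }
}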