For example, in this case:

@NSManaged public var myProperty: SomeType

func someFunc() {
    concurrentNSManagedObjectContext.perform {
        doSomething(with: self.myProperty)
    }
}

And what are the best practices, if so?
Using @NSManaged has no effect on thread safety. It helps Core Data manage the property, but it doesn't do anything to keep the property safe across threads or anything else.
Using perform as you are is good if the doSomething function relates to Core Data, because perform and performAndWait are how Core Data manages thread safety. Accessing the property inside perform or performAndWait is safe; accessing it outside of those functions is usually not safe.
The only exception to the above is that if your managed object context uses main-queue concurrency (for example if you're using NSPersistentContainer and the context is the viewContext property) and you're sure that your code is running on the main queue, then you don't need to use perform or performAndWait. It's not bad to use them in that case but it's not necessary either.
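A minimal sketch of both cases, assuming container is an NSPersistentContainer, doSomething(with:) is the function from the question, and item / mainQueueItem are managed objects that were fetched on the background context and on viewContext respectively:

// Background context: only touch managed objects inside perform/performAndWait.
let backgroundContext = container.newBackgroundContext()
backgroundContext.perform {
    doSomething(with: item.myProperty)   // safe: runs on the context's private queue
}

// Main-queue context: if you are certain you're on the main queue,
// direct access to viewContext objects is allowed.
DispatchQueue.main.async {
    doSomething(with: mainQueueItem.myProperty)   // mainQueueItem belongs to container.viewContext
}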
"Thread safety" at least in Apple Platforms thus far resolves to two things; reading, and writing. NSManaged object currently does not protect against a mix of the two. func doSomething(...) is going to throw an exception if some other thread is reading myProperty at time of mutation. Mutation can be done on a serial thread, however thread safety is more complicated than that. someFunc() is called by the first caller; if that caller is coming from a concurrent thread there is no guarantee of the order. If for example myProperty is of type Int and I call someFunc concurrent from 200 threads, the last thread may be 0 and someFunc may be += 1. Now if I make the func serial my end output is 1. So the short answer is it depends on what you're trying to accomplish. generally you are safe on a serial queue, but concurrent takes a lot of planning.
When multiple perform() calls are invoked on the same NSManagedObjectContext object, will they be executed one by one in the order they are invoked? I think this is true because the document says
Core Data uses thread (or serialized queue) confinement to protect
managed objects and managed object contexts (see Core Data Programming
Guide).
which suggests that a managed object context and its thread have a 1:1 mapping and that all perform() calls are serial. But it worries me that I can't find any explicit discussion of this, not even in Apple's documentation.
In my App, I set up a CoreData stack with NSPersistentContainer and create a dedicated background context for modifying managed objects. It could occur that when a perform() call is invoked, the previous perform() call hasn't finished yet. So it's critical that they are executed one by one in this case. That's why I'd like to confirm my understanding above.
Note: I understand perform() is asynchronous, but that's from the caller's perspective. What I'm asking about is from the callee's perspective.
Yes, the multiple perform calls will be queued up and executed in the same order they were submitted.
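A minimal sketch, assuming a background context created from an NSPersistentContainer named container; the blocks print in submission order because the context's queue is serial:

let context = container.newBackgroundContext()

for i in 1...3 {
    context.perform {
        // Each enqueued block runs to completion before the next one starts.
        print("perform block \(i)")
    }
}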
I am getting race conditions in my code when I run the TSan tool. The same code is accessed from different queues and threads at the same time, which is why I can't just use a serial queue or a barrier: a queue only serializes the accesses that go through that one queue, not accesses coming from other queues.
I used objc_sync_enter(object) / objc_sync_exit(object) and locks (NSLock() or NSRecursiveLock()) to protect the shared resource, but these are also not working.
When I use the @synchronized() directive in Objective-C to protect the shared resource, it works as expected and I don't get race conditions in that block of code.
So, what is the alternative for protecting data in Swift, given that we can't use the @synchronized() directive in the Swift language?
I don't understand "I can not use Serial queues or barrier as Queue will block only single queues accessing the shared resource not the other queues." Using a queue is the standard solution to this problem.
class MultiAccess {
    private var _property: String = ""
    private let queue = DispatchQueue(label: "MultiAccess")

    var property: String {
        get {
            var result: String!
            queue.sync {
                result = self._property
            }
            return result
        }
        set {
            queue.async {
                self._property = newValue
            }
        }
    }
}
With this construction, access to property is atomic and thread-safe without the caller having to do anything special. Note that this intentionally uses a single queue for the class, not a queue per-property. As a rule, you want a relatively small number of queues doing a lot of work, not a large number of queues doing a little work. If you find that you're accessing a mutable object from lots of different threads, you need to rethink your system (probably reducing the number of threads). There's no simple pattern that will make that work efficiently and safely without you having to think about your specific use case carefully.
But this construction is useful for problems where system frameworks may call you back on random threads with minimal contention. It is simple to implement and fairly easy to use correctly. If you have a more complex problem, you will have to design a solution for that problem.
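A quick usage sketch (the surrounding calling code is hypothetical):

let shared = MultiAccess()

DispatchQueue.global().async {
    shared.property = "written from a background thread"
}

// Reads from any thread always return a consistent value.
print(shared.property)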
Edit: I haven't thought about this answer in a long time, but Brennan's comments brought it back to my attention. Because of the bug I had in the original code, my original answer was ok, but if you fixed the bug it was bad. (If you want to see my original code that used a barrier, look in the edit history, I don't want to put it here because people will copy it.) I've changed it to use a standard serial queue rather than a concurrent queue.
Don't create concurrent queues without careful thought about how threads will be generated. If you are going to have many simultaneous accesses, you're going to create a lot of threads, which is bad. If you're not going to have many simultaneous accesses, then you don't need a concurrent queue. GCD's talks make promises about thread management that it doesn't actually live up to; you definitely can get thread explosion (as Brennan mentions).
I use NSPersistentContainer as a dependency in my classes. I find this approach quite useful, but there is a dilemma: I don't know on which thread my methods will be called. I found a very simple solution for this:
extension NSPersistentContainer {
    func getContext() -> NSManagedObjectContext {
        if Thread.isMainThread {
            return viewContext
        } else {
            return newBackgroundContext()
        }
    }
}
It looks wonderful, but I still have a doubt: are there any pitfalls? And if it works properly, why on earth does Core Data confuse us with its contexts?
It's OK as long as you can live with its inherent limitations, i.e.
When you're on the main queue, you always want the viewContext, never any other one.
When you're not on the main queue, you always want to create a new, independent context.
Some drawbacks that come to mind:
If you call a method that has an async completion handler, that handler might be called on a different queue. If you use this method, you might get a different context than when you made the call. Is that OK? It depends what you're doing in the handler.
Changes on one background context are not automatically available in other background contexts, so you run the risk of having multiple contexts with conflicting changes.
The method suggests a potential carelessness about which context you're using. It's important to be aware of which context you're using, because managed objects fetched on one can't be used with another. If your code just says, hey give me some context, but doesn't track the contexts properly, you increase the chance of crashes from accidentally crossing contexts.
If your non-main-queue requirements match the above, you're probably better off using the performBackgroundTask(_:) method on NSPersistentContainer. You're not adding anything to that method here.
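For reference, a minimal sketch of that alternative, assuming the container is named container and Item is a hypothetical entity with a name attribute:

container.performBackgroundTask { context in
    // The block runs on the private queue of a freshly created background context.
    let item = Item(context: context)
    item.name = "example"
    do {
        try context.save()
    } catch {
        // handle the save error
    }
}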
[W]hy on earth does Core Data confuse us with its contexts?
Managed object contexts are a fundamental part of how Core Data works. Keeping track of them is therefore a fundamental part of having an app that doesn't corrupt its data or crash.
Can I have a single private managed object context that is accessed by multiple NSOperations?
I have two options:
1. Have a managed object context per NSOperation, i.e. if there are 100 NSOperations, 100 contexts will be created.
2. Have a single context and multiple NSOperations, i.e. a single context with 100 NSOperations accessing it.
Which would be the better option?
The correct solution is option 1. Create an operation queue with a concurrency count of 1 and do all your writing through that queue. This avoids write conflicts, which can lead to losing information. If you need to access information from the main thread, you should use the global main-thread context (in NSPersistentContainer it is called viewContext).
If this turns out to be too slow, you should investigate the work you are doing. Generally each operation should be pretty quick, so if you find that they are not, you might be doing something wrong (a common issue is doing a fetch for each imported object instead of one large fetch). Another solution is to split large tasks into several smaller tasks (for example, when importing a large amount of data). You can also set different priorities, giving higher priority to actions the user initiated.
You shouldn't be afraid of creating contexts. They are not that expensive.
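A minimal sketch of option 1 as described above, assuming an NSPersistentContainer named container and a hypothetical ImportOperation that does its work in the context it is given:

let writeQueue = OperationQueue()
writeQueue.maxConcurrentOperationCount = 1   // writes run one at a time

for batch in batches {   // `batches` is a hypothetical collection of work to import
    // Option 1: each operation gets its own background context.
    let context = container.newBackgroundContext()
    writeQueue.addOperation(ImportOperation(batch: batch, context: context))
}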
According to what you said in the comments (you don't actually write lots of data in a single operation, you just fetch an object), I suggest using a single MOC.
The usual reason to have multiple MOCs is to read, update, and save lots of data independently of the main context and of any other contexts. In such a flow you would be able to save those objects concurrently from different contexts.
But that's not your case, if I understood correctly.
For a single fetch, one private context is enough. I believe there wouldn't be much overhead in creating many contexts either, but why do extra work?
1. So, you create a private MOC:

let privateContext = NSManagedObjectContext(concurrencyType: .privateQueueConcurrencyType)

2. Create each operation and pass the MOC to it:

let operation = MyOperation(context: privateContext)

3. In the operation, make synchronous calls to the private MOC with this function; that way you avoid any concurrency problem with the single MOC:

func performAndWait(_ block: @escaping () -> Swift.Void)
For example:

var myObject: Object?
privateContext.performAndWait {
    // `Object` and the fetch request are placeholders from the original example
    myObject = try? privateContext.fetch(...).first
}
// do what you need with myObject
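A minimal sketch of the operation used in the steps above; the body of main() is an assumption about what such an operation might do:

import CoreData

final class MyOperation: Operation {
    private let context: NSManagedObjectContext

    init(context: NSManagedObjectContext) {
        self.context = context
        super.init()
    }

    override func main() {
        // performAndWait serializes access to the single shared private context,
        // so many operations can use it without racing each other.
        context.performAndWait {
            // fetch or modify objects here, then save if needed
            try? context.save()
        }
    }
}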
I'm using GCD to add thread-safety to a class.
Some public methods of my class are called by other public methods in the class. However, this leads to the re-entrant locking problem: if I protect the appropriate publicly visible methods with synchronous GCD blocks (in some cases), the re-use means sometimes I'll be trying to run another synchronous block on the current queue, which leads to deadlock.
What's the most elegant solution? An obvious approach is to have internal versions of the appropriate methods, without any GCD blocks, and external public versions of the methods that wrap calls to the internal methods in GCD blocks. This doesn't quite feel right to me.
Here are a few thoughts:
See if you can't use immutable objects (there is a small sketch after these thoughts). This means that whenever a method would modify the object, it actually returns a new object with the modified state. The blocks would then go on and use this new object. This is simple and requires no special care, but it is not always possible.
See if your public methods can't use private methods that carry around state. Since each block would carry around its own internal state then you are also thread safe.
If you had some example use case, it might lead to more ideas...
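As a minimal sketch of the immutable-object idea from the first thought above (the Settings type is hypothetical):

import Foundation

struct Settings {
    let timeout: TimeInterval

    // "Mutation" returns a new value; the original is never changed,
    // so it can be read from any thread safely.
    func withTimeout(_ newValue: TimeInterval) -> Settings {
        Settings(timeout: newValue)
    }
}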
I've used a pattern like this very successfully in our C++ classes that use dispatch queues for synchronization:
class Foo
{
public:
    void doSomething() {
        dispatch_sync(fQueue, ^{ doSomething_sync(); });
    }

private:
    void doSomething_sync() { ... }

private:
    dispatch_queue_t fQueue;
};
The general convention here is that from within any given _sync method in a class, you only call other _sync methods, never their non-_sync public counterparts.