I have found something like this in the performBlockAndWait documentation:
This method may safely be called reentrantly.
My question is whether this means it can never cause a deadlock, e.g. when I invoke it like this on a single context:
NSManagedObjectContext *context = ...
[context performBlockAndWait:^{
    // ... some stuff
    [context performBlockAndWait:^{
    }];
}];
You can try it yourself with a small code snippet ;)
And it's true: it won't deadlock.
I suspect the internal implementation uses a queue-specific token to identify the queue on which the code is currently executing (see dispatch_queue_set_specific and dispatch_queue_get_specific).
If it determines that the code is already executing on the context's own private queue or on a child queue, it bypasses submitting the block synchronously - which would cause a deadlock - and instead executes it directly.
A possible implementation may look like this:
func executeSyncSafe(f: () -> ()) {
    if isSynchronized() {
        f()
    } else {
        dispatch_sync(syncQueue, f)
    }
}

func isSynchronized() -> Bool {
    let context = UnsafeMutablePointer<Void>(Unmanaged<dispatch_queue_t>.passUnretained(syncQueue).toOpaque())
    return dispatch_get_specific(&queueIDKey) == context
}
And the queue might be created like this:
private var queueIDKey = 0 // global
init() {
    syncQueue = dispatch_queue_create("com.example.sync-queue", // label is illustrative
        dispatch_queue_attr_make_with_qos_class(DISPATCH_QUEUE_SERIAL,
            QOS_CLASS_USER_INTERACTIVE, 0))
    let context = UnsafeMutablePointer<Void>(Unmanaged<dispatch_queue_t>.passUnretained(syncQueue).toOpaque())
    dispatch_queue_set_specific(syncQueue, &queueIDKey, context, nil)
}
dispatch_queue_set_specific associates a token (here context, which is simply the pointer value of the queue) with a certain key for that queue. Later, you can try to retrieve that token for any queue by specifying the key, and check whether the current queue is the same queue or a child queue. If it is, bypass dispatching to the queue and instead call the function f directly.
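For comparison, here is a minimal sketch of the same technique in modern Swift using DispatchSpecificKey. The class name and queue label are purely illustrative; this is not Core Data's actual implementation:

import Dispatch

final class SafeSyncQueue {
    private let syncQueue = DispatchQueue(label: "safe.sync.queue")
    private let queueKey = DispatchSpecificKey<Void>()

    init() {
        // Tag the queue so code running on it can be recognized later.
        syncQueue.setSpecific(key: queueKey, value: ())
    }

    /// Runs f on syncQueue, calling it directly if we are already there.
    func executeSyncSafe(_ f: () -> Void) {
        if DispatchQueue.getSpecific(key: queueKey) != nil {
            // Already on syncQueue (or a queue targeting it):
            // a sync dispatch here would deadlock, so call directly.
            f()
        } else {
            syncQueue.sync(execute: f)
        }
    }
}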
I have created a simple singleton in Swift 3:
class MySingleton {
    private var myName: String = "" // needs a default value, or init() won't compile
    private init() {}
    static let shared = MySingleton()

    func setName(_ name: String) {
        myName = name
    }

    func getName() -> String {
        return myName
    }
}
Since I made init() private and declared the shared instance as a static let, I think initialization is thread safe. But what about the getter and setter functions for myName - are they thread safe?
You are correct that the accessors you've written are not thread safe. In Swift, the simplest (read: safest) way to achieve thread safety at the moment is to use a Grand Central Dispatch queue as a locking mechanism, and the easiest variant to reason about is a basic serial queue.
class MySingleton {
    static let shared = MySingleton()

    // Serial dispatch queue
    private let lockQueue = DispatchQueue(label: "MySingleton.lockQueue")
    private var _name: String

    var name: String {
        get {
            return lockQueue.sync {
                return _name
            }
        }
        set {
            lockQueue.sync {
                _name = newValue
            }
        }
    }

    private init() {
        _name = "initial name"
    }
}
Using a serial dispatch queue will guarantee first in, first out execution as well as achieving a "lock" on the data. That is, the data cannot be read while it is being changed. In this approach, we use sync to execute the actual reads and writes of data, which means the caller will always be forced to wait its turn, similar to other locking primitives.
Note: This isn't the most performant approach, but it is simple to read and understand. It is a good general purpose solution to avoid race conditions but isn't meant to provide synchronization for parallel algorithm development.
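As a quick illustration (a sketch assuming the MySingleton above), hammering the property from many threads at once stays race-free, because every access funnels through lockQueue:

// Hypothetical stress test: interleaved reads and writes from many
// threads; the serial lockQueue serializes every access.
DispatchQueue.concurrentPerform(iterations: 100) { i in
    MySingleton.shared.name = "name \(i)"
    _ = MySingleton.shared.name
}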
Sources:
https://mikeash.com/pyblog/friday-qa-2015-02-06-locks-thread-safety-and-swift.html
What is the Swift equivalent to Objective-C's "#synchronized"?
A slightly different way to do it (and this is from an Xcode 9 Playground) is to use a concurrent queue rather than a serial queue.
final class MySingleton {
    static let shared = MySingleton()

    private let nameQueue = DispatchQueue(label: "name.accessor", qos: .default, attributes: .concurrent)
    private var _name = "Initial name"

    private init() {}

    var name: String {
        get {
            var name = ""
            nameQueue.sync {
                name = _name
            }
            return name
        }
        set {
            nameQueue.async(flags: .barrier) {
                self._name = newValue
            }
        }
    }
}
Using a concurrent queue means that multiple reads from multiple threads don't block each other. Since a get doesn't mutate anything, the value can be read concurrently, because...
We are setting the new values using a .barrier async dispatch. The block can be performed asynchronously because there is no need for the caller to wait for the value to be set. The block will not run until all the other blocks ahead of it in the concurrent queue have completed, so pending reads are not affected while this setter is waiting to run. The barrier means that when it starts running, no other blocks will run: it effectively turns the queue into a serial queue for the duration of the setter. No further reads can be made until this block completes. When it has completed, the new value has been set, and any getters added after this setter can again run concurrently.
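The same reader-writer idea generalizes beyond a singleton. Here is a hedged sketch of a small generic wrapper built on the identical sync-read/barrier-write pattern; the name Protected is purely illustrative:

import Dispatch

final class Protected<T> {
    private let queue = DispatchQueue(label: "protected.value", attributes: .concurrent)
    private var _value: T

    init(_ value: T) {
        _value = value
    }

    var value: T {
        get {
            // Concurrent reads: many getters may run at the same time.
            return queue.sync { _value }
        }
        set {
            // Barrier write: waits for in-flight reads, then runs alone.
            queue.async(flags: .barrier) { self._value = newValue }
        }
    }
}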
I'm in urgent need of advice; I've run out of ideas. I'm stuck on a Core Data concurrency issue. To debug it I use -com.apple.CoreData.ConcurrencyDebug, and this is what I have:
Stack:
Thread 3 Queue: coredata(serial)
0 +[NSManagedObjectContext Multithreading_Violation_AllThatIsLeftToUsIsHonor]:
CoreData`-[NSManagedObjectContext executeFetchRequest:error:]:
1 -[NSManagedObjectContext executeFetchRequest:error:]:
2 NSManagedObjectContext.fetch (__ObjC.NSFetchRequest) throws -> Swift.Array:
3 AppDelegate.(fetchRequest NSFetchRequest) -> [A]).(closure #1)
I get into AppDelegate::fetchRequest from here:
let messageRequest: NSFetchRequest<ZMessage> = ZMessage.fetchRequest();
messageRequest.sortDescriptors = [NSSortDescriptor(key: "id", ascending: false)];
let messageArray: Array<ZMessage> = self.fetchRequest(messageRequest);
I perform all Core Data work on a serial queue (self.queueContainer):
public func fetchRequest<T>(_ request: NSFetchRequest<T>) -> Array<T>
{
    var retval: Array<T> = Array<T>();
    self.queueContainer.sync {
        do {
            retval = try self.persistentContainer.viewContext.fetch(request);
        } catch {
            let nserror = error as NSError;
            fatalError("[CoreData] Unresolved fetch error \(nserror), \(nserror.userInfo)");
        }
    }
    return retval;
}
This is what I found useful.
Below are some of the rules that must be followed if you do not want your app that uses Core Data to crash or corrupt the database:
An NSManagedObjectContext should be used only on the queue that is associated with it.
If initialized with .PrivateQueueConcurrencyType, a private, internal queue is created that is associated with the object. This queue can be accessed by instance methods .performBlockAndWait (for sync ops) and .performBlock (for async ops)
If initialized with .MainQueueConcurrencyType, the object can be used only on the main queue. The same instance methods (performBlock and performBlockAndWait) can be used here as well.
An NSManagedObject should not be used outside the thread in which it is initialized
Now I'm digging around, but honestly I can't be sure that my managed object context (MOC) is associated with the right queue.
From the manual:
...A consequence of this is that a context assumes the default owner is the thread or queue that allocated it—this is determined by the thread that calls its init method.
In the AppDelegate I do not use the MOC directly; instead I instantiate an NSPersistentContainer, which owns the MOC. Just in case, I'm doing this on the same serial queue as well.
public lazy var persistentContainer: NSPersistentContainer =
{
    self.queueContainer.sync {
        let container = NSPersistentContainer(name: "Joker")
        container.loadPersistentStores(completionHandler: { (storeDescription, error) in
            if let error = error as NSError? {
                fatalError("Unresolved error \(error), \(error.userInfo)")
            }
        })
        return container
    }
}()
Thanks in advance.
I am not a Swift coder, but what is queueContainer?
You should not do the threading yourself; you should use the NSManagedObjectContext block methods, as in your own quotation:
The same instance methods (performBlock and performBlockAndWait) can be used here as well.
managedObjectContext.performBlock {
    // do your Core Data work here
}
Whatever managedObjectContext you are using, call that context's block method and do your work inside the block.
Look at the documentation here for examples on how to do this correctly.
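For the asker's Swift 3 / NSPersistentContainer setup, that advice might look like the sketch below. ZMessage comes from the question; everything else is illustrative, and the snippet is assumed to live where persistentContainer is in scope (the AppDelegate):

// Instead of wrapping viewContext in a hand-rolled serial queue, let the
// container hand you a background context already bound to its own queue.
persistentContainer.performBackgroundTask { context in
    let request: NSFetchRequest<ZMessage> = ZMessage.fetchRequest()
    request.sortDescriptors = [NSSortDescriptor(key: "id", ascending: false)]
    do {
        let messages = try context.fetch(request)
        print("fetched \(messages.count) messages")
    } catch {
        print("[CoreData] fetch error: \(error)")
    }
}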
Also to avoid crashes and threading errors:
NSManagedObject instances are not intended to be passed between queues. Doing so can result in corruption of the data and termination of the application. When it is necessary to hand off a managed object reference from one queue to another, it must be done through NSManagedObjectID instances.
You retrieve the managed object ID of a managed object by calling the objectID method on the NSManagedObject instance.
I'm struggling with the following situation; please bear with me, as I've tried to explain this as clearly as possible:
I have a class CommunityOperation which is a subclass of GroupOperation. CommunityOperation is created and added to an NSOperationQueue.
CommunityOperation in turn runs a whole bunch of NSOperations, which are also subclasses of GroupOperation and in turn run NSOperations within them.
I've added dependencies for the GroupOperations in the CommunityOperation class.
The issue is that if a nested operation fails, I need to cancel ALL NSOperations in the CommunityOperation class, but I don't have access to the operation queue that CommunityOperation was added to, so I can't call its cancelAllOperations method.
Can anyone help me out and explain how I can call this method, cancel all operations in this class (and therefore all nested operations), and display an error message to the user?
Thanks in advance
Here's some sample code to help explain my issue:
CommunityOperation gets put on an operationQueue from another class (not included)
public class CommunityOperation: GroupOperation {
    var updateOperation: UpdateOperation!
    var menuOperation: MenuOperation!
    var coreDataSaveOperation: CoreDataSaveOperation!
    var hasErrors = false

    public init() {
        super.init(operations: [])

        updateOperation = UpdateOperation()
        menuOperation = MenuOperation()
        coreDataSaveOperation = CoreDataSaveOperation()
        coreDataSaveOperation.addDependencies([updateOperation, menuOperation])

        self.addOperation(updateOperation)
        self.addOperation(menuOperation)
        self.addOperation(coreDataSaveOperation)
    }
}
MenuOperation class, which has nested Operations in it as well:
class UpdateMenuOperation: GroupOperation {
    let downloadGroupsOperation: DownloadGroupsOperation
    let downloadMembersOperation: DownloadMembersOperation

    init() {
        downloadGroupsOperation = DownloadGroupsOperation()
        downloadMembersOperation = DownloadMembersOperation()
        super.init(operations: [downloadGroupsOperation,
                                downloadMembersOperation])
    }
}
DownloadGroupOperation class is again a subclass of GroupOperation. It has 2 operations - the first to download data and the second to parse the data:
class DownloadTopGroupsOperation: GroupOperation {
    let downloadOperation: DownloadOperation
    let importOperation: ImportOperation

    init() {
        downloadOperation = DownloadOperation()
        importOperation = ImportOperation()
        importOperation.addDependency(downloadOperation)
        importOperation.addCondition(NoCancelledDependencies())
        super.init(operations: [downloadOperation, importOperation])
    }
}
And finally (whew) the DownloadOperation class uses an NSURLSession and the method downloadTaskWithURL; it's in this method's completion handler that I want to call cancelAllOperations on the main operation queue if an error is returned:
class DownloadOperation: GroupOperation {
    init() {
        super.init(operations: [])

        if self.cancelled {
            return
        }

        let task = session.downloadTaskWithURL(url) { [weak self] url, response, error in
            self?.downloadFinished(url, response: response as? NSHTTPURLResponse, error: error)
        }
    }

    func downloadFinished(url: NSURL?, response: NSHTTPURLResponse?, error: NSError?) {
        if error != nil {
            // *cancel all operations on the queue here*
        }
    }
}
It should work a bit differently: I'd check isCancelled at the end of each NSOperation's execution. If an operation was cancelled, then cancel its enclosing GroupOperation, and so on up the chain; in the end your CommunityOperation gets cancelled as well.
Here is the rough implementation of the proposed solution:
extension GroupOperation {
    func addCancellationObservers() {
        self.operations.forEach() {
            $0.willCancelObservers.append() { [unowned self] operation, errors in
                self.cancel() // cancel the group operation; this forces it to cancel all child operations
            }
        }
    }
}
Then call addCancellationObservers from the init method of each group operation you have, for example as in the sketch below.
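Against the question's CommunityOperation, the wiring might look like this (a sketch):

public init() {
    super.init(operations: [])

    updateOperation = UpdateOperation()
    menuOperation = MenuOperation()
    coreDataSaveOperation = CoreDataSaveOperation()
    coreDataSaveOperation.addDependencies([updateOperation, menuOperation])

    addOperation(updateOperation)
    addOperation(menuOperation)
    addOperation(coreDataSaveOperation)

    // If any child operation is cancelled, cancel the whole group,
    // which in turn cancels its remaining children.
    addCancellationObservers()
}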
If you're using Apple's sample code (or https://github.com/danthorpe/Operations, which is the evolution of that project), you can sort this out by attaching a condition to the operations that have dependencies.
Here is the init of your top-level GroupOperation:
init() {
    updateOperation = UpdateOperation()
    menuOperation = MenuOperation()
    coreDataSaveOperation = CoreDataSaveOperation()
    coreDataSaveOperation.addDependencies([updateOperation, menuOperation])

    // Attach a condition to ensure that all dependencies succeeded
    coreDataSaveOperation.addCondition(NoFailedDependenciesCondition())

    super.init(operations: [updateOperation, menuOperation, coreDataSaveOperation])
}
To explain what is happening here... NSOperation has no concept of "failure". The operations always "finish" but whether they finished successfully or failed does not affect how NSOperation dependencies work.
In other words, an operation will become ready when all of its dependencies finish, regardless of whether those dependencies succeeded. This is because "success" and "failure" is something that the subclass must define. Operation (the NSOperation subclass) defines success by having finished without any errors.
To deal with this, add a condition requiring that no dependencies have failed. In Operations this condition was renamed to make it clearer, but the concept exists in the Apple sample code too.
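Conceptually, such a condition just inspects the operation's dependencies when it is evaluated. A much-simplified sketch of the idea (not the actual API of either library) could be:

import Foundation

// Simplified illustration only: stands in for NoFailedDependenciesCondition.
// Plain (NS)Operation has no failure state, so cancellation is used here
// as the stand-in for "failed".
struct NoFailedDependencies {
    func evaluate(for operation: Operation) -> Bool {
        // Satisfied only if no dependency was cancelled.
        return !operation.dependencies.contains { $0.isCancelled }
    }
}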
I'd like to implement method chaining in my Swift code, similar to Alamofire's methods. For example, I want to use my function like this:
getListForID(12).Success {
    // Success block
}.Failure {
    // Failure block
}
How would I create the function getListForID?
To expand on the great points @dasblinkenlight and @Sulthan have made, here's a small example of how you could get your request function to take success and failure closures, with the convenient syntax you want.
First, you'll have to define a new class to represent the 'result handler'. This is what your success and failure functions will pass around, allowing you to add multiple trailing closures to make up your completion block logic. You'll want it to look something like this:
class ResultHandler {
    typealias SuccessClosure = RequestHandler.Output -> Void
    typealias FailureClosure = Void -> Void

    // the success and failure callback arrays
    private var _successes = [SuccessClosure]()
    private var _failures = [FailureClosure]()

    /// Invoke all the stored callbacks with a given callback result
    func invokeCallbacks(result: RequestHandler.Result) {
        switch result {
        case .Success(let output): _successes.forEach { $0(output) }
        case .Failure: _failures.forEach { $0() }
        }
    }

    // remove all callbacks - could call this from within invokeCallbacks,
    // depending on the re-usability of the class
    func removeAllCallbacks() {
        _successes.removeAll()
        _failures.removeAll()
    }

    /// appends a new success callback to the result handler's successes array
    func success(closure: SuccessClosure) -> Self {
        _successes.append(closure)
        return self
    }

    /// appends a new failure callback to the result handler's failures array
    func failure(closure: FailureClosure) -> Self {
        _failures.append(closure)
        return self
    }
}
This will allow you to define multiple success or failure closures to be executed on completion. If you don't actually need the capacity for multiple closures, you can simplify the class by stripping out the arrays and just keeping track of the last added success and failure completion blocks instead (see the sketch below).
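For instance, a stripped-down variant keeping only the most recently added callbacks might look like this. It's a hedged sketch in modern Swift using the standard library's Result; the names are illustrative:

final class SingleResultHandler<Output> {
    private var onSuccess: ((Output) -> Void)?
    private var onFailure: (() -> Void)?

    @discardableResult
    func success(_ closure: @escaping (Output) -> Void) -> Self {
        onSuccess = closure // later calls replace earlier ones
        return self
    }

    @discardableResult
    func failure(_ closure: @escaping () -> Void) -> Self {
        onFailure = closure
        return self
    }

    func invoke(with result: Result<Output, Error>) {
        switch result {
        case .success(let output): onSuccess?(output)
        case .failure: onFailure?()
        }
    }
}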
Now all you have to do is define a function that generates a new ResultHandler instance and then does a given asynchronous request, with the invokeCallbacks method being invoked upon completion:
func doRequest(input: Input) -> ResultHandler {
    let resultHandler = ResultHandler()
    doSomethingAsynchronous(resultHandler.invokeCallbacks)
    return resultHandler
}
Now you can call it like this:
doRequest(input).success { result in
    print("success, with:", result)
}.failure {
    print("fail :(")
}
The only thing to note is that your doSomethingAsynchronous function will have to dispatch its completion block back to the main thread to ensure thread safety, for example as in the sketch below.
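Concretely, the hop back to the main queue could look like this. It's a sketch: performRequest is a hypothetical stand-in for the actual blocking work, and RequestHandler.Result is the type assumed by the ResultHandler above:

func doSomethingAsynchronous(_ completion: @escaping (RequestHandler.Result) -> Void) {
    DispatchQueue.global().async {
        let result = performRequest() // hypothetical: the actual blocking work
        // Hop back to the main queue so every stored success/failure
        // closure fires on the main thread.
        DispatchQueue.main.async {
            completion(result)
        }
    }
}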
Full project (with added example on usage): https://github.com/hamishknight/Callback-Closure-Chaining
In order to understand what is going on, it would help to rewrite your code without the "convenience" syntax, which lets you omit parentheses when a closure is the last parameter of a function:
getListForID(12)
.Success( { /* Success block */ } )
.Failure( { /* Failure block */ } )
This makes the structure of the code behind this API more clear:
The return value of getListForID must be an object
The object must have two functions called Success and Failure*
Both Success and Failure need to take a single parameter of closure type
Both Success and Failure need to return self
* The object could have only a Success function and return a different object with a single Failure function, but then you wouldn't be able to reorder the Success and Failure handlers, or drop the Success handler altogether.
Why am I deadlocking?
- (void)foo
{
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        [self foo];
    });
    // whatever...
}
I expect foo to be executed twice on first call.
Neither of the existing answers is quite accurate (one is dead wrong, the other is a bit misleading and misses some critical details). First, let's go right to the source:
void
dispatch_once_f(dispatch_once_t *val, void *ctxt, dispatch_function_t func)
{
    struct _dispatch_once_waiter_s * volatile *vval =
            (struct _dispatch_once_waiter_s**)val;
    struct _dispatch_once_waiter_s dow = { NULL, 0 };
    struct _dispatch_once_waiter_s *tail, *tmp;
    _dispatch_thread_semaphore_t sema;

    if (dispatch_atomic_cmpxchg(vval, NULL, &dow)) {
        dispatch_atomic_acquire_barrier();
        _dispatch_client_callout(ctxt, func);
        dispatch_atomic_maximally_synchronizing_barrier();
        //dispatch_atomic_release_barrier(); // assumed contained in above
        tmp = dispatch_atomic_xchg(vval, DISPATCH_ONCE_DONE);
        tail = &dow;
        while (tail != tmp) {
            while (!tmp->dow_next) {
                _dispatch_hardware_pause();
            }
            sema = tmp->dow_sema;
            tmp = (struct _dispatch_once_waiter_s*)tmp->dow_next;
            _dispatch_thread_semaphore_signal(sema);
        }
    } else {
        dow.dow_sema = _dispatch_get_thread_semaphore();
        for (;;) {
            tmp = *vval;
            if (tmp == DISPATCH_ONCE_DONE) {
                break;
            }
            dispatch_atomic_store_barrier();
            if (dispatch_atomic_cmpxchg(vval, tmp, &dow)) {
                dow.dow_next = tmp;
                _dispatch_thread_semaphore_wait(dow.dow_sema);
            }
        }
        _dispatch_put_thread_semaphore(dow.dow_sema);
    }
}
So what really happens is, contrary to the other answers, the onceToken is changed from its initial state of NULL to point to an address on the stack of the first caller, &dow (call this caller 1). This happens before the block is called. If more callers arrive before the block is completed, they get added to a linked list of waiters, the head of which is contained in onceToken until the block completes (call them callers 2..N). After being added to this list, callers 2..N wait on a semaphore for caller 1 to complete execution of the block, at which point caller 1 will walk the linked list, signaling the semaphore once for each caller 2..N.

At the beginning of that walk, onceToken is changed again, to DISPATCH_ONCE_DONE (which is conveniently defined to be a value that could never be a valid pointer, and therefore could never be the head of a linked list of blocked callers). Changing it to DISPATCH_ONCE_DONE is what makes it cheap for subsequent callers (for the rest of the lifetime of the process) to check the completed state.
So in your case, what's happening is this:
The first time you call -foo, onceToken is NULL (guaranteed, because statics are always initialized to 0) and gets atomically changed to become the head of the linked list of waiters.
When you call -foo recursively from inside the block, your thread is considered to be "a second caller" and a waiter structure, which exists in this new, lower stack frame, is added to the list and then you go to wait on the semaphore.
The problem here is that this semaphore will never be signaled because in order for it to be signaled, your block would have to finish executing (in the higher stack frame), which now can't happen due to a deadlock.
So, in short, yes, you're deadlocked, and the practical takeaway here is, "don't try to call recursively into a dispatch_once block." But the problem is most definitely NOT "infinite recursion", and the flag is most definitely not only changed after the block completes execution -- changing it before the block executes is exactly how it knows to make callers 2..N wait for caller 1 to finish.
You could alter the code a little so that the recursive call happens outside the block and there's no deadlock, something like this (note the __block qualifier, so the block can modify the flag):
- (void)foo
{
    static dispatch_once_t onceToken;
    __block BOOL shouldRunTwice = NO; // __block so the block can write to it
    dispatch_once(&onceToken, ^{
        shouldRunTwice = YES;
    });
    if (shouldRunTwice) {
        [self foo];
    }
    // whatever...
}