Can I just call mainMOC.reset() directly, or should I nest it like this:
mainMOC.performBlockAndWait({
mainMOC.reset()
})
I want to perform it from an arbitrary thread.
Any call to a context must be made on the queue associated with that context. If you are calling reset, it must be from the queue associated with that context. Coming from an arbitrary thread, call it inside the block, as in your snippet.
You can test this, and other questions like it, by turning on the Core Data concurrency debugging flag (the -com.apple.CoreData.ConcurrencyDebug 1 launch argument). It will let you know if you are violating the confinement constraints.
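As a minimal sketch (assuming mainMOC is a main-queue NSManagedObjectContext, and using the modern performAndWait spelling of the performBlockAndWait API from your snippet), a reset helper that is safe to call from any thread looks like this:

import CoreData

// Safe to call from any thread: the block always runs on the queue
// associated with the context, never on the caller's thread directly.
func resetMainContext(_ mainMOC: NSManagedObjectContext) {
    mainMOC.performAndWait {
        mainMOC.reset()
    }
}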
Related
In iOS, we have GCD and Operation to handle concurrent programming.
Looking into GCD we have QoS classes, and they're simple and straightforward. This question is about why DispatchQueue.main.async is commonly used to run tasks asynchronously on the main thread.
When we update something in the UI we usually use that function, to prevent the application from becoming unresponsive.
That makes me wonder: is code written inside a UIViewController usually executed on the main thread?
But I also know that callbacks and completion handlers usually execute without my specifying which thread they are on, and the UI has never had a problem with that. So are they on a background thread?
How does Swift handle this, and what thread am I writing on by default if I don't specify anything?
Since there is more than one question here, let's answer them one by one.
why DispatchQueue.main.async is commonly used to run tasks asynchronously on the main thread.
Before giving a direct answer, make sure you are not confusing two separate distinctions:
Serial <===> Concurrent.
Sync <===> Async.
Keep in mind that DispatchQueue.main is a serial queue. Using sync or async has nothing to do with whether a queue is serial or concurrent; it only describes how the task is submitted to the queue. Thus DispatchQueue.main.async means:
returns control to the current queue right after task has been sent to
be performed on the different queue. It doesn't wait until the task is
finished. It doesn't block the queue.
cited from https://stackoverflow.com/a/44324968/5501940 (I'd recommend checking it).
In other words, async means: submit this work to the main queue and return immediately, without waiting for it to finish. That's why what you said:
When we update something in the UI we usually use that function, to prevent the application from becoming unresponsive.
is sensible; using sync instead of async would block the calling thread until the main queue had finished the work (and would deadlock if called from the main thread itself).
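A minimal sketch of that pattern (a hypothetical ItemsViewController with a loadItems() helper): the heavy work runs on a background queue, and only the UI update is sent back to the main queue with async, so neither thread is blocked waiting for the other.

import UIKit

final class ItemsViewController: UITableViewController {
    private var items: [String] = []

    // Stand-in for slow work (parsing, sorting, networking, ...).
    private func loadItems() -> [String] {
        return (0..<1_000).map(String.init)
    }

    func refresh() {
        DispatchQueue.global(qos: .userInitiated).async {
            let items = self.loadItems()          // off the main thread
            DispatchQueue.main.async {
                // UI updates must happen on the main queue.
                self.items = items
                self.tableView.reloadData()
            }
        }
    }
}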
That makes me wonder: is code written inside a UIViewController usually executed on the main thread?
First of all: by default, if you don't specify which queue should execute a chunk of code, it runs on the main thread; UIKit calls your view controller's lifecycle methods, actions, and delegate callbacks on the main thread. That said, the question is a bit imprecise, because inside a UIViewController you can still explicitly dispatch work to other queues.
But I also know that callbacks and completion handlers usually execute without my specifying which thread they are on, and the UI has never had a problem with that. So are they on a background thread?
"Callbacks and completion handlers usually execute without specifying which thread they are on": no! Someone has to specify it, either you or the API that calls your handler. A good real-world illustration is the Main Thread Checker, which exists precisely to catch UI work performed on the wrong thread.
I believe there is something you are missing here: when dealing with a built-in framework method that takes a completion handler, you can't see whether it wraps the call to your handler in DispatchQueue.main.async. So if you can update the UI inside that completion handler without dispatching to the main queue yourself, you should assume the framework does it for you; it doesn't mean it isn't implemented somewhere.
Another real-world example, Alamofire! When calling
Alamofire.request("https://httpbin.org/get").responseJSON { response in
    // Alamofire calls this response handler on the main queue by default,
    // so it is safe to update the UI here without dispatching yourself.
}
That's why you can update the UI there without facing any "hanging" issue on the main thread; it doesn't mean the dispatching isn't happening, it means Alamofire handles it for you so you don't have to worry about it.
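By contrast, here is a case where the framework does not hop to the main queue for you: URLSession.shared.dataTask calls its completion handler on a background queue owned by the session, so you have to dispatch the UI work yourself (minimal sketch, reusing the httpbin URL from above):

import Foundation

let url = URL(string: "https://httpbin.org/get")!
URLSession.shared.dataTask(with: url) { data, response, error in
    // This closure runs on a background queue, so any UI work
    // must be dispatched back to the main queue explicitly.
    DispatchQueue.main.async {
        print("update the UI here with \(data?.count ?? 0) bytes")
    }
}.resume()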
Now that dispatch_get_current_queue is deprecated in iOS 6, how do I use dispatch_after to execute something in the current queue?
The various links in the comments don't say "it's better not to do it." They say you can't do it. You must either pass the queue you want or dispatch to a known queue. Dispatch queues don't have the concept of "current." Blocks often feed from one queue to another (called "targeting"). By the time you're actually running, the "current" queue is not really meaningful, and relying on it can (and historically did) lead to dead-lock. dispatch_get_current_queue() was never meant for dispatching; it was a debugging method. That's why it was removed (since people treated it as if it meant something meaningful).
If you need that kind of higher-level book-keeping, use an NSOperationQueue which tracks its original queue (and has a simpler queuing model that makes "original queue" much more meaningful).
There are several approaches used in UIKit that are appropriate:
Pass the call-back dispatch_queue as a parameter (this is probably the most common approach in new APIs). See [NSURLConnection setDelegateQueue:] or addObserverForName:object:queue:usingBlock: for examples. Notice that NSURLConnection expects an NSOperationQueue, not a dispatch_queue. Higher-level APIs and all that.
Call back on whatever queue you're on and leave it up to the receiver to deal with it. This is how callbacks have traditionally worked.
Demand that there be a runloop on the calling thread, and schedule your callbacks on the calling runloop. This is how NSURLConnection historically worked before queues.
Always make your callbacks on one of the well-known queues (particularly the main queue) unless told otherwise. I don't know of anywhere that this is done in UIKit, but I've seen it commonly in app code, and is a very easy approach most of the time.
Create a queue manually and dispatch both your calling code and your dispatch_after code onto that. That way you can guarantee that both pieces of code are run from the same queue.
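A minimal Swift sketch of that last option: create a queue you own, and dispatch both the original work and the delayed follow-up onto it, so "the same queue" is explicit rather than inferred (the queue label is a placeholder):

import Foundation

let workQueue = DispatchQueue(label: "com.example.work")   // a queue you own and can reference

workQueue.async {
    // the original work runs here
    workQueue.asyncAfter(deadline: .now() + 2.0) {
        // the delayed follow-up is guaranteed to run on the same known queue
    }
}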
Needing this at all suggests a hack, and you can work around it with another hack:
dispatch_block_t block = ^{
    [self doSomething];
    usleep(delay_in_us);
    [self doSomethingOther];
};
Instead of usleep() you might consider running a run loop for the delay.
I would not recommend this "approach" though. The better way is to have a method that takes a queue and a block as parameters, and executes the block on the specified queue.
And, by the way, there are ways to check, while a block executes, whether it is running on a particular queue (or on any of that queue's target queues), provided you have set up a reference to that queue beforehand: use the functions dispatch_queue_set_specific and dispatch_get_specific.
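A minimal sketch of that technique using the Swift wrappers for dispatch_queue_set_specific / dispatch_get_specific (the queue label is a placeholder):

import Foundation

let queueTag = DispatchSpecificKey<Void>()
let myQueue = DispatchQueue(label: "com.example.myQueue")
myQueue.setSpecific(key: queueTag, value: ())      // tag the queue up front

func isOnMyQueue() -> Bool {
    // Non-nil when the current queue (or one of its target queues) carries the tag.
    return DispatchQueue.getSpecific(key: queueTag) != nil
}

myQueue.async { print(isOnMyQueue()) }   // true
print(isOnMyQueue())                     // false anywhere else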
Following an earlier question on SO, I'm now looking to compare two different Grand Central Dispatch queues to try and determine whether the current code is being run on the main thread or not. My question is simply: is this a valid way of achieving this, or are there some pitfalls of doing this that I haven't considered?
if (dispatch_get_current_queue() != dispatch_get_main_queue()) {
// We are currently on a background queue
} else {
// We are on the main queue
}
Cheers
Comparing the current queue against the main queue is not a valid way to check whether you are running on the main thread.
Use [NSThread isMainThread] or pthread_main_np() to explicitly check whether you are on the main thread if that is what you want to know.
You can be on the main thread without the current queue being the main queue, and you can be on the main queue without the current thread being the main thread (the latter only if dispatch_main() has been called, but still).
In recent releases this is documented explicitly in the CAVEATS section of the dispatch_get_main_queue(3) manpage:
The result of dispatch_get_main_queue() may or may not equal the result of dispatch_get_current_queue() when called on the main thread. Comparing the two is not a valid way to test whether code is executing on the main thread. Foundation/AppKit programs should use [NSThread isMainThread]. POSIX programs may use pthread_main_np(3).
In general you should avoid using queue pointer comparison to influence program logic. Dispatch queues exist in a dependency tree (the target queue hierarchy) and comparing individual leaves in that tree without taking their interdependency into account does not provide sufficient information to make safe decisions.
If you really need program logic based on queue interdependency, use the dispatch_get_specific(3)/dispatch_queue_set_specific(3) APIs which are target-queue aware and much more explicit.
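A minimal Swift sketch of the recommended check, instead of comparing queues:

import Foundation

if Thread.isMainThread {
    // Already on the main thread: safe to touch UIKit directly.
} else {
    DispatchQueue.main.async {
        // Hop over to the main queue for any UI work.
    }
}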
Essentially, I have a set of data in an NSDictionary, but for convenience I'm setting up some NSArrays with the data sorted and filtered in a few different ways. The data will be coming in via different threads (blocks), and I want to make sure there is only one block at a time modifying my data store.
I went through the trouble of setting up a dispatch queue this afternoon, and then randomly stumbled onto a post about @synchronized that made it seem like pretty much exactly what I want to be doing.
So what I have right now is...
// a property on my object
@property (assign) dispatch_queue_t matchSortingQueue;

// in my object's init
_matchSortingQueue = dispatch_queue_create("com.asdf.matchSortingQueue", NULL);

// then later...
- (void)sortArrayIntoLocalStore:(NSArray*)matches
{
    dispatch_async(_matchSortingQueue, ^{
        // do stuff...
    });
}
And my question is, could I just replace all of this with the following?
- (void)sortArrayIntoLocalStore:(NSArray*)matches
{
    @synchronized (self) {
        // do stuff...
    }
}
...And what's the difference between the two anyway? What should I be considering?
Although the functional difference might not matter much to you, it's what you'd expect: if you use @synchronized then the thread you're on is blocked until it can get exclusive execution. If you dispatch to a serial dispatch queue asynchronously then the calling thread can get on with other things and whatever it is you're actually doing will always occur on the same, known queue.
So they're equivalent for ensuring that a third resource is used from only one queue at a time.
Dispatching could be a better idea if, say, you had a resource that is accessed by the user interface from the main queue and you wanted to mutate it. Then your user interface code doesn't need to use @synchronized explicitly, hiding the complexity of your threading scheme within the object quite naturally. Dispatching will also be a better idea if you've got a central actor that can trigger several of these changes on other different actors; that'll allow them to operate concurrently.
Synchronising is more compact and a lot easier to step debug. If what you're doing tends to be two or three lines and you'd need to dispatch it synchronously anyway then it feels like going to the effort of creating a queue isn't worth it — especially when you consider the implicit costs of creating a block and moving it over onto the heap.
In the second case you would block the calling thread until "do stuff" was done. Using queues and dispatch_async you will not block the calling thread. This would be particularly important if you call sortArrayIntoLocalStore from the UI thread.
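For reference, here is a minimal Swift sketch of the same trade-off (the question's Match objects are replaced with String to keep it self-contained). The serial-queue version returns to the caller immediately; the lock version, the closest stand-in for @synchronized(self), blocks the calling thread until it gets exclusive access.

import Foundation

final class MatchStore {
    private var sortedMatches: [String] = []

    // Option 1: a private serial queue. Mutations are serialized on a known
    // queue and the caller is never blocked.
    private let sortingQueue = DispatchQueue(label: "com.asdf.matchSortingQueue")

    func sortArrayIntoLocalStore(_ matches: [String]) {
        sortingQueue.async {
            self.sortedMatches = (self.sortedMatches + matches).sorted()
        }
    }

    // Option 2: a lock. The calling thread blocks until it has exclusive access.
    private let lock = NSLock()

    func sortArrayIntoLocalStoreBlocking(_ matches: [String]) {
        lock.lock()
        defer { lock.unlock() }
        sortedMatches = (sortedMatches + matches).sorted()
    }
}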
Imagine you want to do many things in the background of an iOS application, and you code it properly by creating threads (for example using GCD) to execute this background activity.
Now what if at some point you need to update a variable, but this update can occur on any of the threads you created?
You obviously want to protect that variable, and you can use the @synchronized directive to create the locks for you, but here is the catch (extract from the Apple documentation):
The @synchronized() directive locks a section of code for use by a
single thread. Other threads are blocked until the thread exits the
protected code—that is, when execution continues past the last
statement in the @synchronized() block.
So that means if you synchronize on an object and two threads are writing to it at the same time, even the main thread will block until both threads are done writing their data.
An example of code that will showcase all this:
// Create the background queue
dispatch_queue_t queue = dispatch_queue_create("synchronized_example", NULL);

// Start working on a new thread
dispatch_async(queue, ^
{
    // Synchronize on that shared resource
    @synchronized(sharedResource_)
    {
        // Write things to that resource.
        // If more than one thread accesses this piece of code:
        // all threads (even the main thread) will block until the task is completed.
        [self writeComplexDataOnLocalFile];
    }
});

// The queue won't actually go away until it is empty
dispatch_release(queue);
So the question is fairly simple: how do I overcome this? How can we safely add locks on all the threads EXCEPT the main thread, which, we know, doesn't need to be blocked in this case?
EDIT FOR CLARIFICATION
As some of you commented, it does seem logical (and this is clearly what I thought at first when using @synchronized) that only the threads trying to acquire the lock should block until they are done.
However, when tested in a real situation, this doesn't seem to be the case, and the main thread also seems to suffer from the lock.
I use this mechanism to log things in separate threads so that the UI is not blocked. But when I do intense logging, the UI (main thread) is clearly highly impacted (scrolling is not as smooth).
So there are two options here: either the background tasks are so heavy that even the main thread gets impacted (which I doubt), or @synchronized also blocks the main thread while performing the lock operations (which I'm starting to reconsider).
I'll dig a little further using the Time Profiler.
I believe you are misunderstanding the following sentence that you quote from the Apple documentation:
Other threads are blocked until the thread exits the protected code...
This does not mean that all threads are blocked, it just means that all threads trying to synchronise on the same object (the sharedResource_ in your example) are blocked.
The following quote is taken from Apple's Threading Programming Guide, which makes it clear that only threads that synchronise on the same object are blocked.
The object passed to the #synchronized directive is a unique identifier used to distinguish the protected block. If you execute the preceding method in two different threads, passing a different object for the anObj parameter on each thread, each would take its lock and continue processing without being blocked by the other. If you pass the same object in both cases, however, one of the threads would acquire the lock first and the other would block until the first thread completed the critical section.
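To make that concrete, here is a minimal Swift sketch using objc_sync_enter/objc_sync_exit, the primitives @synchronized compiles down to. The two blocks lock different token objects, so neither waits for the other; only callers that pass the same object contend.

import Foundation

let tokenA = NSObject()
let tokenB = NSObject()

func work(on token: AnyObject, label: String) {
    objc_sync_enter(token)            // same effect as entering @synchronized(token)
    defer { objc_sync_exit(token) }
    print("\(label) holds the lock for its own token")
    Thread.sleep(forTimeInterval: 0.1)
}

// Different tokens: no contention, the two blocks run concurrently.
// With the same token, the second caller would block until the first exits.
DispatchQueue.global().async { work(on: tokenA, label: "thread 1") }
DispatchQueue.global().async { work(on: tokenB, label: "thread 2") }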
Update: If your background threads are impacting the performance of your interface then you might want to consider putting some sleeps into the background threads. This should allow the main thread some time to update the UI.
I realise you are using GCD but, for example, NSThread has a couple of methods that will suspend the thread, e.g. +sleepForTimeInterval:. In GCD you can probably just call sleep().
Alternatively, you might also want to look at lowering the thread priority. Again, NSThread has +setThreadPriority: for this purpose. In GCD, I believe you would just use a low-priority queue for the dispatched blocks.
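In GCD terms, the lower-priority suggestion is just a matter of which global queue you dispatch to; a minimal sketch, with writeComplexDataOnLocalFile() standing in for the logging work from the question:

import Foundation

func writeComplexDataOnLocalFile() {
    // stand-in for the heavy logging work
}

// A .utility (or .background) queue competes far less with the main thread
// than the default-priority queue does.
DispatchQueue.global(qos: .utility).async {
    writeComplexDataOnLocalFile()
}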
I'm not sure if I understood you correctly, but @synchronized doesn't block all threads, only the ones that want to execute the code inside the block. So the solution probably is: don't execute that code on the main thread.
If you simply want to avoid having the main thread acquire the lock, you can do this (and wreak havoc):
dispatch_async(queue, ^
{
    if (![NSThread isMainThread])
    {
        // Synchronize on that shared resource
        @synchronized(sharedResource_)
        {
            // Write things to that resource.
            // If more than one thread accesses this piece of code:
            // all contending threads will block until the task is completed.
            [self writeComplexDataOnLocalFile];
        }
    }
    else
    {
        [self writeComplexDataOnLocalFile];
    }
});