The popular Concurrent-Ruby library has a Concurrent::Event class that I find wonderful. It very neatly encapsulates the idea of, “Some threads need to wait for another thread to finish something before proceeding.”
It only takes three lines of code to use:
One to create the object
One to call .wait to start waiting, and
One to call .set when the thing is ready.
All the locks and booleans you’d need to use to create this out of other concurrency primitives are taken care of for you.
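To see what that buys you, here is a minimal sketch (my own, not concurrent-ruby's actual source) of an event built from a Mutex and a ConditionVariable; the real Concurrent::Event additionally handles timeouts, #reset, and more:

```ruby
# A toy event: a boolean flag guarded by a Mutex, plus a
# ConditionVariable to park and wake waiting threads.
class SimpleEvent
  def initialize
    @mutex = Mutex.new
    @cond  = ConditionVariable.new
    @set   = false
  end

  def set
    @mutex.synchronize do
      @set = true
      @cond.broadcast   # wake every waiting thread
    end
  end

  def wait
    @mutex.synchronize do
      @cond.wait(@mutex) until @set  # loop guards against spurious wakeups
    end
  end
end
```

Note the `until @set` loop: condition variables can wake spuriously, so the flag, not the wakeup, is the source of truth. That, plus making set-then-wait return immediately, is exactly the bookkeeping the library hides.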
To quote some of the documentation, along with a sample usage:
Old school kernel-style event reminiscent of Win32 programming in C++.
When an Event is created it is in the unset state. Threads can choose to
#wait on the event, blocking until released by another thread. When one
thread wants to alert all blocking threads it calls the #set method which
will then wake up all listeners. Once an Event has been set it remains set.
New threads calling #wait will return immediately.
require 'concurrent-ruby'

event = Concurrent::Event.new

t1 = Thread.new do
  puts "t1 is waiting"
  event.wait
  puts "event occurred"
end

t2 = Thread.new do
  puts "t2 calling set"
  event.set
end

[t1, t2].each(&:join)
which prints output like the following
t1 is waiting
t2 calling set
event occurred
(Several different orders are possible because it is multithreaded, but ‘t2 calling set’ always comes out before ‘event occurred’.)
Is there something like this in Swift on iOS?
I think the closest thing to that is the async/await syntax introduced in Swift 5.5. There's no direct equivalent of event.set, but await pauses until something asynchronous finishes. A particularly nice expression of concurrency is async let, which starts work concurrently and then lets you pause to gather up all the results of the async let calls:
async let result1 = // do something asynchronous
async let result2 = // do something else asynchronous at the same time
// ... and keep going...
// now let's gather up the results
return await (result1, result2)
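In the Ruby terms of the original question, async let corresponds roughly to starting a thread and later collecting its result with Thread#value, which is itself a small wait-for-completion idiom:

```ruby
# Rough Ruby analog of `async let`: kick off two computations
# concurrently, keep going, then pause to gather both results.
t1 = Thread.new { 6 * 7 }              # "async let result1 = ..."
t2 = Thread.new { "hello".upcase }     # "async let result2 = ..."
# ...keep going...
result1, result2 = t1.value, t2.value  # "return await (result1, result2)"
```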
You can achieve the result in your example using a Grand Central Dispatch DispatchSemaphore. This is a traditional counting semaphore: each call to signal increments the semaphore's value, and each call to wait decrements it. If the decremented value is less than zero, wait blocks until a corresponding signal brings the count back up.
let semaphore = DispatchSemaphore(value: 0)

let q1 = DispatchQueue(label: "q1", target: .global(qos: .utility))
let q2 = DispatchQueue(label: "q2", target: .global(qos: .utility))

q1.async {
    print("q1 is waiting")
    semaphore.wait()
    print("event occurred")
}

q2.async {
    print("q2 calling signal")
    semaphore.signal()
}
Output:
q1 is waiting
q2 calling signal
event occurred
But this object won't work if you have multiple threads that want to wait: since each call to wait decrements the semaphore, a single signal wakes only one waiter and the other tasks remain blocked.
For that you could use a DispatchGroup. You call enter before you start a task in the group and leave when it is done. You can use wait to block until the group is empty; like your Ruby object, wait will not block if the group is already empty, and multiple threads can wait on the same group.
let group = DispatchGroup()

let q1 = DispatchQueue(label: "q1", target: .global(qos: .utility))
let q2 = DispatchQueue(label: "q2", target: .global(qos: .utility))

q1.async {
    print("q1 is waiting")
    group.wait()
    print("event occurred")
}

group.enter()
q2.async {
    print("q2 calling leave")
    group.leave()
}
Output:
q1 is waiting
q2 calling leave
event occurred
You generally want to avoid blocking threads on iOS if possible, as there is a risk of deadlocks, and if you block the main thread your whole app will become unresponsive. It is more common to use notify to schedule code to execute when the group becomes empty.
I understand that your code is simply a contrived example, but depending on what you actually want to do and your minimum supported iOS requirements, there may be better alternatives.
DispatchGroup to execute code when several asynchronous tasks are complete using notify rather than wait
Combine to process asynchronous events in a pipeline (iOS 13+)
Async/Await (iOS 15+)
Related
Why does the print("2") part never get called in the following code?
I'd think that the inner main.async would push the block into the main loop's queue,
and then RunLoop.run would execute it, but apparently that isn't what happens.
(It prints 1, run, run, run, etc.)
Also, if I remove the outer main.async, and just directly run the code in that block
(still on the main queue, in viewDidLoad of a new single-view app),
then the inner main.async block does get executed (prints 1, run, 2).
Why does this change make such a difference?
var x = -1
DispatchQueue.main.async { // comment out this line for question #2
    print("1")
    x = 1
    DispatchQueue.main.async {
        print("2")
        x = 2
    }
    while x == 1 {
        print("run")
        RunLoop.main.run(mode: .default, before: Date() + 1)
    }
} // comment out this line for question #2
In your first example, the first async blocks the main serial queue until it returns from that outer async call, something that won’t happen while x is 1. That inner GCD async task (that updates x to 2) will never have a chance to run since that serial GCD queue is now blocked in that while loop. The attempt to run on the main run loop does not circumvent the rules/behavior of GCD serial queues. It only drains the run loop of events that have been added to it.
In your second example, you haven’t blocked the GCD main queue, so when you hit run, the dispatched block that updates x to 2 does have a chance to run, letting it proceed.
Bottom line, don’t conflate the GCD main queue and the main run loop. Yes, they both use the main thread, but run loops can’t be used to circumvent the behavior of serial GCD queues.
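To make that concrete, here is a toy model of a serial queue in Ruby (my sketch, not GCD): an array of blocks drained in order by a single loop. The real code spins forever, so the model caps the spin at a fixed number of iterations; the point is that x never becomes 2 while the outer block is still running, because the block that sets it sits behind the outer block in the same queue.

```ruby
x = -1
pending = []  # the fake serial queue's FIFO of blocks
saw_two_during_spin = false

pending << -> {
  x = 1
  pending << -> { x = 2 }  # the inner async: goes to the END of the queue
  # the while loop from the question, capped so the model terminates:
  1000.times do
    saw_two_during_spin = true if x == 2
  end
  # x is still 1 here: the drain loop below is stuck inside THIS block,
  # so the queued block that sets x = 2 never gets a chance to run
}

pending.shift.call while pending.any?
```

Only after the outer block returns does the drain loop reach the inner block and set x to 2; during the spin it never happens, no matter how long you spin.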
import XCTest
@testable import TestWait

class TestWait: XCTestCase {
    func testX() {
        guard Thread.isMainThread else {
            fatalError()
        }

        let exp = expectation(description: "x")
        DispatchQueue.main.async {
            print("block execution")
            exp.fulfill()
        }
        print("before wait")
        wait(for: [exp], timeout: 2)
        print("after wait")
    }
}
Output:
before wait
block execution
after wait
I'm trying to rationalize the sequence of the prints. This is what I think:
the test is run on the main thread
it dispatches a block onto the main queue, but since the dispatch happens from the main thread, the block's execution has to wait until the currently executing code finishes
"before wait" is printed
we wait for the expectation to be fulfilled. This wait sleeps the current thread, i.e. the main thread, for 2 seconds.
So how in the world does wait succeed, even though we still haven't dispatched off of the main thread? I mean, "after wait" isn't printed yet! So we must still be on the main thread, and hence "block execution" should never have a chance to happen.
What is wrong with my explanation? I'm guessing it must be something to do with how wait is implemented.
The wait(for:timeout:) of XCTestCase is not like the GCD group/semaphore wait functions with which you are likely acquainted.
When you call wait(for:timeout:), much like the GCD wait calls, it will not return until the timeout expires or the expectations are resolved. But, in the case of XCTestCase and unlike the GCD variations, inside wait(for:timeout:), it is looping, repeatedly calling run(mode:before:) until the expectations are resolved or it times out. That means that although testX will not proceed until the wait is satisfied, the calls to run(mode:before:) will allow the run loop to continue to process events (including anything dispatched to that queue, including the completion handler closure). Hence no deadlock.
Probably needless to say, this is a feature of XCTestCase but is not a pattern to employ in your own code.
Regardless, for more information about how Run Loops work, see the Threading Programming Guide: Run Loops.
When in doubt, look at the source code!
https://github.com/apple/swift-corelibs-xctest/blob/ab1677255f187ad6eba20f54fc4cf425ff7399d7/Sources/XCTest/Public/Asynchronous/XCTWaiter.swift#L358
The whole waiting code is not simple but the actual wait boils down to:
_ = runLoop.run(mode: .default, before: Date(timeIntervalSinceNow: timeIntervalToRun))
You shouldn't think about waits in terms of threads but in terms of queues. By calling RunLoop.current.run() you basically tell the current code to start executing other items in the queue.
The wait function most likely uses NSRunLoop internally. A run loop doesn't block the main thread the way sleep does: although execution of the testX function doesn't move on, the run loop still accepts events scheduled on the thread and dispatches them for execution.
UPDATE:
This is how I envision the work of a run loop. In a pseudocode:
while (currentDate < dateToStopRunning && someConditionIsTrue())
{
    if (hasEventToDispatch()) // has a scheduled block?
    {
        runTheEvent(); // yes, we have a block, so we run it!
    }
}
The block that you put up for async execution is found inside the hasEventToDispatch() method and executed. It fulfills the expectation, which is checked at the next iteration of the while loop in someConditionIsTrue(), so the while loop exits. testX continues execution and "after wait" is printed.
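The same pseudocode can be made runnable as a toy model in Ruby (my sketch, not XCTest's actual implementation): `scheduled` stands in for blocks dispatched to the main queue, `fulfilled` for the expectation, and the loop is the run-loop pump inside wait(for:timeout:):

```ruby
scheduled = Queue.new   # blocks "dispatched to the main queue"
fulfilled = false       # the XCTest expectation

# Equivalent of DispatchQueue.main.async { exp.fulfill() }:
scheduled << -> { fulfilled = true }

puts "before wait"
# wait(for:timeout:) behaves roughly like this: keep pumping the
# run loop (draining scheduled work) until the expectation is
# fulfilled or the deadline passes.
deadline = Time.now + 2
until fulfilled || Time.now > deadline
  scheduled.pop.call until scheduled.empty?  # hasEventToDispatch? run it!
end
puts "after wait" if fulfilled
```

Running this prints "before wait", then "after wait": the caller of the pump never moves on early, yet the scheduled block still runs, which is why there is no deadlock.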
I'd like to understand, for the case below, whether it's necessary to check if callbackQueue is the current queue.
Please help me work through these scenarios, about what could happen if the current queue is the callback queue:
callbackQueue is main queue.
callbackQueue is concurrent queue.
callbackQueue is serial queue.
- (void)fetchWithCallbackQueue:(dispatch_queue_t)callbackQueue
{
    dispatch_async(callbackQueue, ^{
    });
}
I highly recommend watching these videos, then going through the examples I provided, and then changing the code and playing around with it as much as you can. It took me 3 years to feel fully comfortable with iOS multi-threading, so take your time :D
Watch the first 3 minutes of this RWDevCon video and more if you like.
Also watch 3:45 until 6:15. Though I recommend you watch this video in its entirety.
To summarize the points the videos make in the duration I mentioned:
Threading and concurrency are all about the source queue and the destination queue.
sync vs. async is specifically a matter of the source queue.
Think of the source and destination queues as a highway on which your work gets done.
If you do async, it's like sending a car (which has stuff to deliver) off at an exit and then letting the other cars keep driving down the highway.
If you do sync, it's like sending a car off at an exit and then halting all the other cars on the highway until that car has delivered all its stuff.
Think of a car delivering its stuff as a block of code starting and finishing execution.
What happens for main queue is identical to what happens for serial queue. They're both serial queues.
So if you're already on the main queue and you dispatch asynchronously to the main queue, anything you dispatch will go to the end of that queue.
To show you what I mean: In what order do you think this would print? You can easily test this in Playground:
DispatchQueue.main.async {
    print("1")
    print("2")
    print("3")
    DispatchQueue.main.async {
        DispatchQueue.main.async {
            print("4")
        }
        print("5")
        print("6")
        print("7")
    }
    print("8")
}

DispatchQueue.main.async {
    print("9")
    print("10")
}
It will print:
1
2
3
8
9
10
5
6
7
4
Why?
It's mainly because every time you dispatch to main from main, the block will be placed at the end of the main queue.
Dispatching to main while you're already on the main queue is a subtle, easily overlooked source of many of the tiny delays you see in an app's user interaction.
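A toy model in Ruby (a fake serial queue drained strictly FIFO, mine rather than GCD) reproduces that order deterministically; async here is just "append the block to the end":

```ruby
order = []
pending = []                        # the fake main queue's FIFO of blocks
async = ->(blk) { pending << blk }  # "dispatch async" = append to the end

async.call(-> {
  order << 1 << 2 << 3
  async.call(-> {
    async.call(-> { order << 4 })
    order << 5 << 6 << 7
  })
  order << 8
})
async.call(-> { order << 9 << 10 })

pending.shift.call while pending.any?  # drain strictly in FIFO order
p order  # => [1, 2, 3, 8, 9, 10, 5, 6, 7, 4]
```

Tracing it: the two top-level blocks run first (1, 2, 3, 8, then 9, 10), then the block they enqueued (5, 6, 7), then the block that one enqueued (4). Every nested dispatch just moves work to the back of the line.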
What happens if you dispatch to the same serial queue using sync?
Deadlock! See here
If you dispatch to the same concurrent queue using sync, then you won't have a deadlock, because other blocks on the queue can still run; the calling thread just waits until its own block has executed. I've discussed that below.
Now if you dispatch to a concurrent queue using sync, only the lane your own car is in is blocked until it delivers everything; the other lanes of the highway keep moving. In other words, sync blocks the calling thread, not the whole queue. It's kinda useless to do sync on a concurrent queue, unless you combine it with a .barrier flag (which does block the whole queue) and are trying to solve a read-write problem.
But to just see what happens if you do sync on a concurrent queue:
let queue = DispatchQueue(label: "aConcurrentQueue", attributes: .concurrent)
for i in 0...4 {
    if i == 3 {
        queue.sync {
            someOperation(iteration: UInt32(i))
        }
    } else {
        queue.async {
            someOperation(iteration: UInt32(i))
        }
    }
}

func someOperation(iteration: UInt32) {
    sleep(1)
    print("iteration", iteration)
}
will usually (though not always) log '3' first, or close to first, because sync blocks are run on the calling thread whenever possible. As the docs on sync say:
As a performance optimization, this function executes blocks on the current thread whenever possible
The other iterations happen concurrently. Each time you run the app, the sequence may be different; that's the inherent unpredictability that comes with concurrency. 4 will tend to be completed last and 0 to be finished sooner. So something like this:
iteration 3
iteration 0
iteration 2
iteration 1
iteration 4
If you do async on a concurrent queue, then, assuming you have a limited number of concurrent threads, e.g. 5, up to 5 tasks get executed at once; each newly dispatched task still goes to the end of the queue. It would make sense to do this for logging: you can have multiple logging threads, one logging location events, another logging purchases, etc.
A good playground example would be:
let queue = DispatchQueue(label: "concurrent", attributes: .concurrent)

func delay(seconds: UInt32) {
    queue.async {
        sleep(seconds)
        print(seconds)
    }
}

for i in (1...5).reversed() {
    delay(seconds: UInt32(i))
}
Even though you've dispatched the 5-second task first, this would print
1
2
3
4
5
In your example with dispatch_async (or just async in Swift), it doesn’t matter. The dispatched block will simply be added to the end of the relevant queue and will run asynchronously whenever that queue becomes available.
If, however, you used dispatch_sync (aka sync in Swift), then suddenly problems are introduced if you dispatch from a serial queue back to itself. With a "synchronous" dispatch from a serial queue to itself, the code will "deadlock". (And because the main queue is a serial queue, synchronous dispatches from the main queue to itself manifest the same problem.) The dispatch_sync says "block the current thread until the designated queue finishes running this dispatched code", so obviously if any serial queue dispatches synchronously back to itself, it cannot proceed, because it's blocking the very queue on which you've asked the code to run.
Note that any blocking GCD API, such as dispatch_semaphore_wait and dispatch_group_wait (both known as simply wait in Swift), will suffer this same problem as the synchronous dispatch if you wait on the same thread that the serial queue uses.
But, in your case, dispatching asynchronously with dispatch_async, you shouldn’t have any problems.
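The mechanics can be sketched with a toy serial queue in Ruby (my model, not GCD): a single worker thread drains a job queue, and sync parks the caller until the worker has run the block. Calling sync from the worker's own thread could never finish, so the toy simply raises where real GCD would hang forever:

```ruby
class ToySerialQueue
  def initialize
    @jobs = Queue.new
    @worker = Thread.new do
      loop { @jobs.pop.call }   # one thread drains blocks in FIFO order
    end
  end

  def async(&blk)
    @jobs << blk
  end

  def sync(&blk)
    if Thread.current == @worker
      # Real GCD would block forever here: the worker can't both wait
      # for the block and be the thread that runs it.
      raise "deadlock: sync onto the queue's own thread"
    end
    done = Queue.new
    @jobs << -> { blk.call; done << :ok }
    done.pop                    # block the caller until the worker ran it
  end
end
```

The same check explains why the blocking waits (semaphore, group) have the identical failure mode: they all make one thread wait for work that only that thread could perform.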
I just read long introduction to NSOperationQueues and NSOperation here.
My question is the following. I need to run two operations at the same time. When both of those tasks have finished, I need to make further calculations based on the results of the two finished operations. If one of the operations fails, then the whole operation should also fail. The two operations do not have dependencies and are completely independent of each other, so we can run them in parallel.
How do I wait for these 2 operations to finish and then continue with the calculations? I don't want to block the UI thread. Should I make another NSOperation whose main method creates the two NSOperations, adds them to some local (to that operation) queue, and waits with the waitUntilAllOperationsAreFinished method, then continues with the calculations?
What I don't like about this approach is that I need to create a local queue every time I create a new operation. Can I design it so that I reuse one queue but wait only for the two local operations? I imagine waitUntilAllOperationsAreFinished waits until all tasks are done, so it would block for a long time when many tasks are being performed in parallel. Any design advice? Is creating an NSOperationQueue expensive? Are there better ways to do this on iOS without using NSOperation & NSOperationQueue? I'm targeting iOS 9+ devices.
In Swift 4, you can do it this way:
let group = DispatchGroup()

// Enter the group *before* starting each task, so the group can't
// momentarily reach zero and fire notify prematurely.
group.enter()
taskA(onCompletion: { (_) in
    // leave the group when task A completes
    group.leave()
})

group.enter()
taskB(onCompletion: { (_) in
    group.leave()
})

group.notify(queue: DispatchQueue.main) {
    // do something on task A & B completion
}
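Under the hood, enter/leave/notify is essentially a thread-safe counter that fires its callbacks when it returns to zero. A Ruby sketch of the idea (mine, not GCD's implementation):

```ruby
# Toy dispatch group: enter increments, leave decrements, and
# notify callbacks fire when the count drops back to zero.
class ToyGroup
  def initialize
    @mutex = Mutex.new
    @count = 0
    @callbacks = []
  end

  def enter
    @mutex.synchronize { @count += 1 }
  end

  def leave
    ready = []
    @mutex.synchronize do
      @count -= 1
      if @count.zero?
        ready = @callbacks
        @callbacks = []
      end
    end
    ready.each(&:call)   # fire notify callbacks outside the lock
  end

  def notify(&blk)
    run_now = @mutex.synchronize do
      @count.zero? ? true : (@callbacks << blk; false)
    end
    blk.call if run_now  # group already empty: fire immediately
  end
end
```

This also shows why entering before starting each task matters: if the count ever touches zero between tasks, queued notify callbacks fire right then.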
And there is an excellent tutorial on GCD from raywenderlich.com.
I want to have a better idea of the timing of the completion block from an internet download request, in this case Firebase. The following code example does not do anything, but it illustrates my questions.
Say I have 100 values in keysArray; there would then be 100 async requests to Firebase, and the completion block would be executed 100 times.
func someFunction() {
    for keys in keysArray {
        loadDataFromFirebaseWithKey(completionHandler: { (success, data) in
            print(data)
            // Task A: some lengthy for loop
            for i in 0...10000 {
                print("A")
            }
            // Task B
            for i in 10001...20000 {
                print("B")
            }
        })
        // Task C
        for i in 20001...30000 {
            print("C")
        }
        // Task D
        for i in 30001...40000 {
            print("D")
        }
    }
    // Task E
    for i in 40001...50000 {
        print("E")
    }
    // Task F
    for i in 50001...60000 {
        print("F")
    }
}
The reason I am using such big for loops is to simulate a time-consuming, non-asynchronous process. Here are the cases I was wondering about:
Say the program is halfway through task C: does it finish C and also D before going into the completion block to do A and B?
Say the program is halfway through task E: does it finish E and also F before going into the completion block to do A and B?
If tasks are running concurrently, they may preempt one another at any opportunity, and may genuinely run in parallel, given that all iOS devices since the iPhone 4s have multiple cores. There's no reason any particular for loop will be at any specific point at the time of an interruption.
If Firebase schedules its completion handlers on a serial queue then none of the handlers will overlap with any other.
If Firebase schedules its completion handlers on the main queue, and you're calling it from the main queue, neither its completion handlers nor your calling code will overlap with each other.
So, directly to answer:
yes if Firebase is scheduling completion handlers on the same queue as you called from and that queue is serial — which almost always means 'yes' if everything is main queue linked. Otherwise no.
same answer. There's no special concurrency magic to for loops. They're exactly as usurpable as any other piece of code.