I want to get a better idea of the timing of the completion block from an internet download request, in this case Firebase. The following code example does not do anything useful, but it illustrates my questions.
Say I have 100 values in keysArray; there would then be 100 async requests to Firebase, and the completion block would be executed 100 times:
func someFunction() {
    for key in keysArray {
        loadDataFromFirebaseWithKey(key, completionHandler: { (success, data) in
            print(data)
            // Task A: some lengthy for loop
            for _ in 0...10000 {
                print("A")
            }
            // Task B
            for _ in 10001...20000 {
                print("B")
            }
        })
        // Task C
        for _ in 20001...30000 {
            print("C")
        }
        // Task D
        for _ in 30001...40000 {
            print("D")
        }
    }
    // Task E
    for _ in 40001...50000 {
        print("E")
    }
    // Task F
    for _ in 50001...60000 {
        print("F")
    }
}
The reason I am using such big for loops is to illustrate some time-consuming, non-async processing. Here are the cases I was wondering about:
Say the program is halfway through task C: does it finish C and also D before going into the completion block to do A and B?
Say the program is halfway through task E: does it finish E and also F before going into the completion block to do A and B?
If tasks are running concurrently, they may preempt each other whenever the opportunity arises, and may genuinely run in parallel given that every iOS device since the iPhone 4s has multiple cores. There's no reason that any particular for loop will be at any specific point at the time of interruption.
If Firebase schedules its completion handlers on a serial queue then none of the handlers will overlap with any other.
If Firebase schedules its completion handlers on the main queue, and you're calling it from the main queue, neither its completion handlers nor your calling code will overlap with each other.
So, directly to answer:
Yes, if Firebase schedules its completion handlers on the same queue you called from and that queue is serial, which almost always means 'yes' if everything is tied to the main queue. Otherwise, no.
Same answer. There's no special concurrency magic to for loops; they're exactly as usurpable as any other piece of code.
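If you want to take the guesswork out of it, a common tactic is to check where the callback actually lands and to hop back to the main queue explicitly, so the handler body can never overlap your main-queue code. A minimal sketch, using the question's hypothetical loadDataFromFirebaseWithKey:
loadDataFromFirebaseWithKey(key, completionHandler: { (success, data) in
    // See which thread the SDK delivers its callback on.
    print("callback on main thread? \(Thread.isMainThread)")

    // Make the ordering explicit: run tasks A and B on the main queue so they are
    // serialized with tasks C–F (assuming someFunction() itself runs on the main queue).
    DispatchQueue.main.async {
        print(data)
        // Task A, Task B ...
    }
})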
Related
The popular Concurrent-Ruby library has a Concurrent::Event class that I find wonderful. It very neatly encapsulates the idea of, “Some threads need to wait for another thread to finish something before proceeding.”
It only takes three lines of code to use:
One to create the object
One to call .wait to start waiting, and
One to call .set when the thing is ready.
All the locks and booleans you’d need to use to create this out of other concurrency primitives are taken care of for you.
To quote some of the documentation, along with a sample usage:
Old school kernel-style event reminiscent of Win32 programming in C++.
When an Event is created it is in the unset state. Threads can choose to
#wait on the event, blocking until released by another thread. When one
thread wants to alert all blocking threads it calls the #set method which
will then wake up all listeners. Once an Event has been set it remains set.
New threads calling #wait will return immediately.
require 'concurrent-ruby'

event = Concurrent::Event.new

t1 = Thread.new do
  puts "t1 is waiting"
  event.wait
  puts "event occurred"
end

t2 = Thread.new do
  puts "t2 calling set"
  event.set
end

[t1, t2].each(&:join)
which prints output like the following
t1 is waiting
t2 calling set
event occurred
(Several different orders are possible because it is multithreaded, but ‘t2 calling set’ always comes out before ‘event occurred’.)
Is there something like this in Swift on iOS?
I think the closest thing to that is the new async/await syntax in Swift 5.5. There's no equivalent of event.set, but await waits for something asynchronous to finish. A particularly nice expression of concurrency is async let, which proceeds concurrently but then lets you pause to gather up all the results of the async let calls:
async let result1 = // do something asynchronous
async let result2 = // do something else asynchronous at the same time
// ... and keep going...
// now let's gather up the results
return await (result1, result2)
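For completeness, here is a runnable version of that sketch; fetchFirst() and fetchSecond() are hypothetical async functions standing in for real asynchronous work:
// Hypothetical async functions used only to make the async let pattern compile and run.
func fetchFirst() async -> Int { 1 }
func fetchSecond() async -> Int { 2 }

func gather() async -> (Int, Int) {
    async let result1 = fetchFirst()    // starts immediately
    async let result2 = fetchSecond()   // runs concurrently with fetchFirst()
    return await (result1, result2)     // suspends here until both have finished
}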
You can achieve the result in your example using a Grand Central Dispatch DispatchSemaphore. This is a traditional counting semaphore: each call to signal increments the semaphore, and each call to wait decrements it; if the result is less than zero, the caller blocks until a signal brings the count back up.
let semaphore = DispatchSemaphore(value: 0)

let q1 = DispatchQueue(label: "q1", target: .global(qos: .utility))
let q2 = DispatchQueue(label: "q2", target: .global(qos: .utility))

q1.async {
    print("q1 is waiting")
    semaphore.wait()
    print("event occurred")
}

q2.async {
    print("q2 calling signal")
    semaphore.signal()
}
Output:
q1 is waiting
q2 calling signal
event occurred
But this object won't work if you have multiple threads that want to wait: since each call to wait decrements the semaphore, the other tasks would remain blocked.
For that you could use a DispatchGroup. You call enter before you start a task in the group and leave when it is done. You can use wait to block until the group is empty; like your Ruby object, wait will not block if the group is already empty, and multiple threads can wait on the same group.
let group = DispatchGroup()

let q1 = DispatchQueue(label: "q1", target: .global(qos: .utility))
let q2 = DispatchQueue(label: "q2", target: .global(qos: .utility))

group.enter()   // enter before anyone can wait, so the group isn't already empty

q1.async {
    print("q1 is waiting")
    group.wait()
    print("event occurred")
}

q2.async {
    print("q2 calling leave")
    group.leave()
}
Output:
q1 is waiting
q2 calling leave
event occurred
You generally want to avoid blocking threads on iOS if possible, as there is a risk of deadlocks, and if you block the main thread your whole app will become unresponsive. It is more common to use notify to schedule code to execute when the group becomes empty.
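A minimal sketch of that non-blocking approach, assuming some arbitrary background work:
let group = DispatchGroup()

group.enter()
DispatchQueue.global(qos: .utility).async {
    // ... some asynchronous work ...
    group.leave()
}

// notify runs the closure once every enter() has been balanced by a leave(),
// without blocking any thread while waiting.
group.notify(queue: .main) {
    print("all work in the group has finished")
}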
I understand that your code is simply a contrived example, but depending on what you actually want to do and your minimum supported iOS requirements, there may be better alternatives.
DispatchGroup to execute code when several asynchronous tasks are complete using notify rather than wait
Combine to process asynchronous events in a pipeline (iOS 13+)
Async/Await (iOS 15+)
In the following code, when would queueT (a serial queue) consider “task A” to be completed?
The moment aNetworkRequest switches to another thread?
Or in the doneInAnotherQueue block? (commented // 1)
In other words, when would “task B” be executed?
let queueT = DispatchQueue(label: "com.test.a")

queueT.async { // task A
    aNetworkRequest.doneInAnotherQueue() { // completed in another thread possibly
        // 1
    }
}

queueT.async { // task B
    print("It's my turn")
}
It would be much better if you could explain the mechanism by which a queue considers a task completed.
Thanks in advance.
In short, the first example starts an asynchronous network request, so the async call “finishes” as soon as that network request is submitted (but does not wait for that network request to finish).
I am assuming that the real question is that you want to know when the network request is done. Bottom line, GCD is not well suited for managing dependencies between tasks that are, themselves, asynchronous requests. Dispatching the initiation of a network request to a serial queue is undoubtedly not going to achieve what you want. (And before someone suggests using semaphores or dispatch groups to wait for the asynchronous request to finish, note that this can solve the tactical issue, but it is a pattern to be avoided because it is an inefficient use of resources and, in edge cases, can introduce deadlocks.)
One pattern is to use completion handlers:
func performRequestA(completion: @escaping () -> Void) { // task A
    aNetworkRequest.doneInAnotherQueue() { object in
        ...
        completion()
    }
}
Now, in practice, we would generally use the completion handler with a parameter, perhaps even a Result type:
func performRequestA(completion: @escaping (Result<Foo, Error>) -> Void) { // task A
    aNetworkRequest.doneInAnotherQueue() { result in
        guard ... else {
            completion(.failure(error))
            return
        }
        let foo = ...
        completion(.success(foo))
    }
}
Then you can use the completion handler pattern to process the results, update models, and perhaps initiate subsequent requests that are dependent upon the results of this request. For example:
performRequestA { result in
    switch result {
    case .failure(let error):
        print(error)
    case .success(let foo):
        // update models or initiate next step in the process here
    }
}
If you are really asking how to manage dependencies between asynchronous tasks, there are a number of other, elegant patterns (e.g., Combine, custom asynchronous Operation subclass, the forthcoming async/await pattern contemplated in SE-0296 and SE-0303, etc.). All of these are elegant solutions for managing dependencies between asynchronous tasks, controlling the degree of concurrency, etc.
We probably would need to better understand the nature of your broader needs before we made any specific recommendations. You have asked the question about a single dispatch, but the question probably is best viewed from a broader context of what you are trying to achieve. For example, I'm assuming you are asking because you have multiple asynchronous requests to initiate: Do you really need to make sure that they happen sequentially and lose all the performance benefits of concurrency? Or can you allow them to run concurrently and you just need to know when all of the concurrent requests are done and how to get the results in the correct order? And might you have so many concurrent requests that you might need to constrain the degree of concurrency?
The answers to those questions will probably influence our recommendation of how to best manage your multiple asynchronous requests. But the answer is almost certainly not a GCD queue.
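For illustration only, a sketch of that same dependency expressed with the async/await pattern mentioned above; the async performRequestA/performRequestB stubs are hypothetical, not a real API:
struct Foo { }
struct Bar { }

// Hypothetical async counterparts of the completion-handler functions above.
func performRequestA() async throws -> Foo { Foo() }
func performRequestB(using foo: Foo) async throws -> Bar { Bar() }

func runSequence() async throws {
    let foo = try await performRequestA()            // request B does not start...
    let bar = try await performRequestB(using: foo)  // ...until request A has finished
    print(bar)
}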
You can do a simple check:
let queueT = DispatchQueue(label: "com.test.a")

queueT.async { // task A
    DispatchQueue(label: "com.test2.a").async { // create another queue inside
        for i in 0..<6 {
            print(i)
        }
    }
}

queueT.async { // task B
    for i in 10..<20 {
        print(i)
    }
}
You'll get different output on each run. This means that yes: once the work has been handed off to another queue, the enqueuing task is considered done.
A GCD work item is complete when the closure you pass returns. So for your example, I'm going to rewrite it to make the function calls and parameters more explicit (rather than using trailing closure syntax).
queueT.async(execute: {
    // This is a function call that takes a closure parameter. When this
    // function returns, this closure will continue. Whether that is before or
    // after running the completion handler is an internal detail of doneInAnotherQueue.
    aNetworkRequest.doneInAnotherQueue(closureParameter: { ... })

    // At this point, the closure is complete. What doneInAnotherQueue() does with
    // its closure is its business.
})
Assuming that doneInAnotherQueue() executes its closure parameter "sometime in the future", then your task B will likely run before that closure runs (it may not; it's really a race at that point, but probably). If the doneInAnotherQueue() blocks on its closure before returning, then closureParameter will definitely run before task B.
There is absolutely no magic here. The system has no idea what doneInAnotherQueue does with its parameter. It may never run it. It may run it immediately. It may run it sometime in the future. The system just calls doneInAnotherQueue() and passes it a closure.
I rewrote async in normal "function with parameters" syntax to make it even more clear that async() is just a function, and it takes a closure parameter. It also isn't magic. It's not part of the language. It's just a normal function in the Dispatch framework. All it does is take its parameter, put it on a dispatch queue, and return. It doesn't execute anything. There are just closures that get put on queues, scheduled, and executed.
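To make that concrete, here is a toy, entirely hypothetical stand-in for aNetworkRequest: it just receives the closure and decides for itself when (and on which queue) to run it.
struct FakeNetworkRequest {
    func doneInAnotherQueue(closureParameter: @escaping () -> Void) {
        // Hand the closure to some other queue and return immediately.
        DispatchQueue.global().asyncAfter(deadline: .now() + 1) {
            closureParameter()   // runs long after the enqueuing work item has completed
        }
    }
}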
Swift is in the process of adding structured concurrency, which will add more language-level concurrency features that will allow you to express much more advanced things than the simple primitives provided by GCD.
Your task A returns straight away. Dispatching work to another queue is itself a synchronous operation that returns immediately; it just hands the block off. Think of the block (the trailing closure) after doneInAnotherQueue as just an argument to the doneInAnotherQueue function, no different from passing an Int or a String. You pass that block along and then you return immediately with the closing brace of task A.
In Swift, I sometimes use this kind of pattern:
DispatchQueue.global().async {
    // do stuff in background, concurrent thread
    DispatchQueue.main.sync {
        // update UI
    }
}
The purpose of this pattern is clear: do the time-consuming calculation on a global queue so the UI is not locked, and update the UI on the main thread after the calculation is done.
What if there's nothing to calculate? I just found some logic in my project where
// A
DispatchQueue.main.sync {
    // do something
}
crashes but
// B
DispatchQueue.global().async {
    DispatchQueue.main.sync {
        // do something
    }
}
doesn't crash.
How are they different? And is case B any different from just this?
// C
DispatchQueue.main.async {
    // do something
}
And one more question. I know the main queue is a serial queue, but if I run multiple code blocks via multiple main.async calls, they seem to work like a concurrent queue.
DispatchQueue.main.async {
    // do A
}
DispatchQueue.main.async {
    // do B
}
If the main queue is really a serial queue, how can they run simultaneously? If it is just time slicing, then how is it different from a global concurrent queue, other than that the main thread can update the UI?
x.sync means that the calling thread will pause and wait until the sync block finishes before continuing. So in your example:
DispatchQueue.global().async {
    // yada yada something
    DispatchQueue.main.sync {
        // update UI
    }
    // this will happen only after 'update UI' has finished executing
}
Usually you don't need to sync back to main; async is probably good enough and safer, avoiding deadlocks, unless it is a special case where you need to wait until something finishes on main before continuing with your async task.
As for example A crashing: calling sync targeting the current queue is a deadlock (the calling queue waits for the sync block to finish, but the block never starts because the target queue, which is the same queue, is busy waiting for the sync call to finish), and that's probably why it crashes.
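If you're unsure which thread you're on, one common way to avoid that deadlock is a helper like this (a sketch of my own convenience function, not a system API):
func runOnMain(_ work: () -> Void) {
    if Thread.isMainThread {
        work()                                   // already on main: just run it directly
    } else {
        DispatchQueue.main.sync(execute: work)   // safe: we're not on the main queue
    }
}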
As for scheduling multiple blocks on the main queue with async: they won't run in parallel; they will happen one after another.
Also, don't assume that queue == thread. Work scheduled onto queues may be run on as many threads as the system allows; the main queue is special only in that it always uses the main thread.
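For example, these two blocks never overlap and always finish in the order they were enqueued:
DispatchQueue.main.async { print("A") }
DispatchQueue.main.async { print("B") }
// Always prints A, then B: the main queue dequeues one block at a time.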
I was going through the revisions in the Swift documentation and found the following:
If you need to capture and mutate an in-out parameter, use an explicit local copy, such as in multithreaded code that ensures all mutation has finished before the function returns.
func multithreadedFunction(queue: DispatchQueue, x: inout Int) {
    // Make a local copy and manually copy it back.
    var localX = x
    defer { x = localX }

    // Operate on localX asynchronously, then wait before returning.
    queue.async { someMutatingOperation(&localX) }
    queue.sync {}
}
I had two questions concerning this:
Does calling async and then calling sync block the queue?
Why would you call async in the first place if you wanted to wait? I always thought asynchronous tasks were to return immediately without waiting until the whole code block was executed. Shouldn't one call sync?
EDIT: Added link to the document. BTW, I don't think whether the queue is serial or concurrent is too relevant.
I'm trying to move my app over to MVC. I have a Parse query which I've moved into a function in my model class; the function returns a Bool.
When the button in my ViewController below is pressed, the model function parseQuery should run, return a Bool, and then I need to use that Bool to continue. At the moment, the if statement is executed before the function has completed, so it always detects false.
How can I ensure that the if statement runs only after the function has completed?
@IBAction func showAllExpiredUsers(sender: AnyObject) {
    let success = searchResults.parseQuery()
    if success {
        print("true")
    } else {
        print("false")
    }

    // I have also tried:
    searchResults.parseQuery()
    if searchResults.parseQuery() {
        print("true")
    } else {
        print("false")
    }
}
You have a few options, but the issue is due to asynchronous calls.
Does Parse expose the same function, with a completion block?
If yes, then you place the processing of the Bool inside the completion block, which is called when the async task is completed.
If not, which I doubt, you could create an NSOperationQueue with maxConcurrentOperationCount set to 1 (so it is serial) and dispatch the calls onto the queue with
func addOperationWithBlock(_ block: () -> Void)
The block is executed on the queue. You would need to store the success Bool globally so that you can access it inside the second queued block operation to check the success state.
Update:
I haven't used Parse, but checking the documentation for findObjectsInBackgroundWithBlock (https://parse.com/docs/ios/guide#queries), it takes a completion block where you can process the result and update your Bool.
I'm not sure what you are trying to do. You don't need to have the success state of the query. You can check:
if error == nil {
    // do stuff
} else {
    // an error occurred
    print("error \(error!.localizedDescription)")
}
Check the example.
What you need to understand is threading. The async task provides a completion block because it is asynchronous; it gets dispatched onto another thread for processing. I'm not sure how much you know about threading, but there is something called a thread pool, which is accessed by queues. The thread pool is managed by the OS, which makes sure available threads can be used by queues that need work done. As users interact with an application, this (and all UI work) happens on the main thread.
So whenever some processing is going to interfere with possible interaction or UI updates, it should be dispatched (Grand Central Dispatch) or queued (NSOperationQueue, built on top of GCD) off of the main thread.
Anyway, this is why the findObjectsInBackgroundWithBlock call is dispatched off the main thread: otherwise it would block the main thread until it's done, ruining the experience for the user. Also, if the main thread is blocked for more than 1 minute (last time I checked), the OS's watchdog will kill your process.
So yeah, assigning a Bool to the return of the call gets you the function's return value, which is produced before the completion block has run. The completion block is where you put the code that should run after the work completes. The query gets dispatched onto another thread and starts processing, while the thread that sent this work off continues with the rest of its execution; checking the Bool directly afterwards won't work because the other thread isn't finished. Even if the other thread finished in time, what is connecting the background thread with the main thread?
This is the beauty of blocks (closures): it's a lot cleaner, keeps code compact, and optimizes well. The old way, which is still in use in some older frameworks, is delegates, which separate the calling code from the callback and add a delegate dependency. Blocks are beautiful.
It's also important to note that completion blocks don't always get called on the main thread. In many cases it's up to you to dispatch the work back to the main thread to handle any UI updates that need to be done with the objects available inside the completion block.
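Putting that together, here is a sketch (untested, using the Swift 2-era Parse API, with parseQuery and the class name assumed from the question) of exposing the result through a completion handler instead of a return value:
// In the model class; "ExpiredUsers" is a hypothetical class name.
func parseQuery(completion: (Bool) -> Void) {
    let query = PFQuery(className: "ExpiredUsers")
    query.findObjectsInBackgroundWithBlock { (objects: [PFObject]?, error: NSError?) -> Void in
        // Hop back to the main thread before touching UI or shared state.
        dispatch_async(dispatch_get_main_queue()) {
            completion(error == nil)
        }
    }
}

// Call site in the view controller:
searchResults.parseQuery { success in
    print(success ? "true" : "false")
}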
The query likely takes some time to run and should be run in a background thread with a callback function to handle the response WHEN it completes.
Look at the documentation.
Specifically, looking at the query.findObjectsInBackgroundWithBlock code:
var query = PFQuery(className: "GameScore")
query.whereKey("playerName", equalTo: "Sean Plott")
query.findObjectsInBackgroundWithBlock {
    (objects: [PFObject]?, error: NSError?) -> Void in
    if error == nil {
        // The find succeeded.
        print("Successfully retrieved \(objects!.count) scores.")
        // Do something with the found objects
        if let objects = objects as? [PFObject] {
            for object in objects {
                print(object.objectId)
            }
        }
    } else {
        // Log details of the failure
        print("Error: \(error!) \(error!.userInfo!)")
    }
}
The above code will execute the query and run the code in the block when it gets the results from Parse. This is known as an asynchronous task; for more information, check out this guide.