GCD and Threads - iOS

I want to understand something about GCD and Threads.
I have a for loop in my view controller which asks my model to do some async network request.
So if the loop runs 5 times, the model sends out 5 network requests.
Is it correct to state that 5 threads have been created by my model, given that I'm using NSURLConnection's sendAsynchronousRequest:queue:completionHandler: and the completion handlers will be called on an additional 5 threads?
Now, if I ask my view controller to execute this for loop on a different thread, and in every iteration of the loop the call to the model depends on the previous iteration, would I be creating an "Inception" of threads here?
Basically, I want the subsequent async requests to my server only if the previous thread has completed entirely (By entirely I mean all of its sub threads should have finished executing too.)
I can't even frame the question properly because I'm massively confused myself.
But if anybody could help with anything, that would be helpful.

It is not correct to state that five threads have been created in the general case.
There is no one-to-one mapping between threads and blocks. GCD is an implementation of thread pooling.
A certain number of threads are created according to the optimal setup for that device — the cost of creating and maintaining threads under that release of the OS, the number of processor cores available, the number of threads it already has but which are presently blocked, and any other factors Apple cares to factor in may all be relevant.
GCD will then spread your blocks over those threads. Or it may create new threads. But it won't necessarily.
Beyond that, queues are just ways of establishing the sequencing between blocks. A serial dispatch queue does not necessarily own its own thread, and neither does a concurrent dispatch queue. Nor is there any reason to believe that any two queues don't share threads.
The exact means of picking threads for blocks has changed between versions of the OS; iOS 4, for example, was highly profligate in thread creation in a way that iOS 5 and later definitely are not.
GCD will just try to do whatever is best in the circumstances. Don't waste your time trying to second guess it.

"Basically, I want the subsequent async requests to my server only if the previous thread has completed entirely (By entirely I mean all of its sub threads should have finished executing too.)"
Focusing only on the statement above to avoid confusion: a simple solution would be to create a serial queue and feed it the five iterations. Each iteration makes its network request synchronously (you can use the sendSynchronousRequest:returningResponse:error: method available in NSURLConnection), performs the work that follows the request's completion, and only then does the next iteration start. Since the queue is FIFO, it executes your requests one after another.
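A minimal sketch of that approach (the queue label and the urlsToFetch array are illustrative; note that the NSURLConnection API shown here has since been deprecated in favor of NSURLSession):

```objc
// Private serial queue: blocks run one at a time, in FIFO order.
dispatch_queue_t requestQueue =
    dispatch_queue_create("com.example.requests", DISPATCH_QUEUE_SERIAL);

for (NSURL *url in urlsToFetch) {   // urlsToFetch is an assumed array of 5 URLs
    dispatch_async(requestQueue, ^{
        NSURLRequest *request = [NSURLRequest requestWithURL:url];
        NSURLResponse *response = nil;
        NSError *error = nil;
        // Synchronous on the queue's worker thread -- NOT on the main thread.
        NSData *data = [NSURLConnection sendSynchronousRequest:request
                                             returningResponse:&response
                                                         error:&error];
        // Process `data` here; the next iteration's block won't start
        // until this one returns, because the queue is serial.
    });
}
```

The calling thread is never blocked; the serialization happens entirely on the queue.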

GCD : Think of this as a simple queue that can accept tasks. Tasks are blocks of your code. You can put in as many tasks as you want in a queue (permitting system limits). Queues come in different flavors. Concurrent vs Serial. Main vs Global. High Priority vs Low Priority. A queue is not a thread.
Thread : It is a single line of execution of code in sequence. You can have multiple threads working on your code at the same time. A thread is not a queue.
Once you separate the two entities, things start to become clear.
GCD basically uses the threads in the process to work on tasks. In a serial queue everything is processed in sequence. So you don't need to have synchronization mechanisms in your code, the very nature of serial queue ensures synchronization. If this is a concurrent queue (i.e. 2 or more tasks being processed at the same time, then you need to ensure critical sections of your code are protected with synchronization).
Here is how you queue work to be done.
dispatch_async(_yourDispatchQueue, ^{
    NSLog(@"work queued");
});
The above NSLog will get executed on a background thread at some point in the near future.
If you notice, when we put the request in we use dispatch_async. The other variation is dispatch_sync. The difference between the two is what happens after the request is put on the queue: the async variation moves on immediately, while the sync variation does not return until the block has been executed!
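To make that difference concrete (the queue label here is illustrative):

```objc
dispatch_queue_t queue =
    dispatch_queue_create("com.example.demo", DISPATCH_QUEUE_SERIAL);

dispatch_async(queue, ^{ NSLog(@"async block"); });
NSLog(@"after dispatch_async");   // may print before or after "async block"

dispatch_sync(queue, ^{ NSLog(@"sync block"); });
NSLog(@"after dispatch_sync");    // always prints after "sync block"
```

Because the queue is serial, the sync block also waits for the earlier async block to finish before it runs.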
If you are going to use GCD with NSURLConnection you need to be careful about which thread you start the connection on. Here is an SO link for more info: GCD with NSURLConnection.

Related

Understanding dispatch_queues and synchronous/asynchronous dispatch

I'm an Android engineer trying to port some iOS code that uses 5 SERIAL dispatch queues. I want to make sure I'm thinking about things the right way.
dispatch_sync to a SERIAL queue is basically using the queue as a synchronized queue: only one thread may access it at a time, and the block that gets executed can be thought of as a critical region. It happens immediately on the current thread; it's the equivalent of
get_semaphore()
queue.pop()
do_block()
release_semaphore()
dispatch_async to a serial queue performs the block on another thread and lets the current thread return immediately. However, since it's a serial queue, it promises that only one of these asynchronous blocks is going to execute at a time (the next dispatch_async block will wait until all previously enqueued blocks are finished). That block can also be thought of as a critical region, but it will occur on another thread. So it's the same code as above, but it's handed to a worker thread first.
Am I off in any of that, or did I figure it out correctly?
This feels like an overly complicated way of thinking of it and there are lots of little details of that description that aren't quite right. Specifically, "it happens immediately on the current thread" is not correct.
First, let's step back: The distinction between dispatch_async and dispatch_sync is merely whether the current thread waits for it or not. But when you dispatch something to a serial queue, you should always imagine that it's running on a separate worker thread of GCD's own choosing. Yes, as an optimization, sometimes dispatch_sync will use the current thread, but you are in no way guaranteed of this fact.
Second, when you discuss dispatch_sync, you say something about it running "immediately". But it's by no means assured to be immediate. If a thread does dispatch_sync to some serial queue, then that thread will block until (a) any block currently running on that serial queue finishes; (b) any other queued blocks for that serial queue run and complete; and (c) obviously, the block that the thread itself dispatched runs and completes.
Now, when you use a serial queue for synchronization — say, thread-safe access to some object in memory — that synchronization is often very quick, so the waiting thread will generally be blocked for a negligible amount of time while its dispatched block (and any prior dispatched blocks) finish. But in general, it's misleading to say that it will run immediately. (If it could always run immediately, you wouldn't need a queue to synchronize access.)
Now, your question talks about a "critical region", by which I assume you mean some bit of code that must be synchronized, in order to ensure thread-safety or for some similar reason. When running such synchronized code, the only question re dispatch_sync vs dispatch_async is whether the current thread must wait. A common pattern, for example, is to dispatch_async writes to some model (because there's no need to wait for the model to update before proceeding), but dispatch_sync reads from that model (because you obviously don't want to proceed until the read value is returned).
A further optimization of that sync/async pattern is the reader-writer pattern, where concurrent reads are permissible but concurrent writes are not. Thus, you'll use a concurrent queue, dispatch_barrier_async the writes (achieving serial-like behavior for the writes), but dispatch_sync the reads (enjoying concurrent performance with respect to other read operations).
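A sketch of that reader-writer pattern; _isolationQueue and _items are illustrative instance variables, assumed to be set up once (e.g. in -init) as shown in the leading comment:

```objc
// One-time setup, e.g. in -init:
//   _isolationQueue = dispatch_queue_create("com.example.isolation",
//                                           DISPATCH_QUEUE_CONCURRENT);
//   _items = [NSMutableArray array];

// Reads run concurrently with other reads; the caller waits for the result.
- (id)itemAtIndex:(NSUInteger)index {
    __block id result;
    dispatch_sync(_isolationQueue, ^{
        result = self->_items[index];
    });
    return result;
}

// Writes use a barrier: the block waits for in-flight reads to drain, runs
// alone, then lets readers resume. The caller does not wait for the write.
- (void)addItem:(id)item {
    dispatch_barrier_async(_isolationQueue, ^{
        [self->_items addObject:item];
    });
}
```

The queue must be a private concurrent queue; as noted elsewhere in this page, barriers must not be used on the global queues.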
To pick nits, dispatch_sync doesn't necessarily run the code on the current thread, but if it doesn't, it still blocks the current thread until the task completes. The distinction is only potentially important if you're relying on thread IDs or thread-local storage.
But otherwise, yes, unless I missed something subtle.

Threads and Queues in iOS - not NSThread

I'd like to get a conceptual understanding of serial/concurrent queues and threads in iOS. I have a solid grasp of the queue data structure and how it's used.
Are threads just, in an unofficial sense, an abstraction of a queue? Meaning they're implemented using queue data structures. Each queue is then an actual thread, but they act as queues in that processes are executed in a first in, first out fashion?
That would hold for serial queues, as those DO indeed follow FIFO; concurrent queues, though, are a different ball game. You don't know which tasks are executing when, and even though you have them on one queue, they actually get fired on different threads, wherever there is availability. Does that mean queues can actually contain or refer to multiple threads?
Any help or pointers to resources (not including apple documentation, which I'm currently going through) would be greatly appreciated.
Queues are one of the ways to get your code running on threads in iOS.
A thread is a single path of execution through your code.
The main thread (thread 0) is the one thread that runs for the entire app lifetime. The others, unless they are attached to an NSRunLoop that keeps them alive with something like while(1) { code... }, finish as soon as their code has executed.
Wikipedia should be your friend: https://en.wikipedia.org/wiki/Thread_(computing)

If I want a task to run in the background, how does the "dispatch_get_global_queue" queue work?

When selecting which queue to run dispatch_async on, dispatch_get_global_queue is mentioned a lot. Is this one special background queue that delegates tasks to a certain thread? Is it almost a singleton?
So if I use that queue always for my dispatch_async calls, will that queue get full and have to wait for things to finish before another one can start, or can it assign other tasks to different threads?
I guess I'm a little confused because when I'm choosing the queue for an NSOperation, I can choose the queue for the main thread with [NSOperationQueue mainQueue], which seems synonymous with dispatch_get_main_queue, but I was under the impression background queues for NSOperation had to be individually made instances of NSOperationQueue, yet GCD has a singleton for a background queue? (dispatch_get_global_queue)
Furthermore - silly question but wanted to make sure - if I put a task in a queue, the queue is assigned to one thread, right? If the task is big enough it won't split it up over multiple threads, will it?
When selecting which queue to run dispatch_async on,
dispatch_get_global_queue is mentioned a lot. Is this one special
background queue that delegates tasks to a certain thread?
A certain thread? No. dispatch_get_global_queue retrieves for you a global queue of the requested relative priority. All queues returned by dispatch_get_global_queue are concurrent, and may, at the system's discretion, dispatch work to many different threads. The mechanics of this are an implementation detail that is opaque to you as a consumer of the API.
In practice, and at the risk of oversimplifying it, there is one global queue for each priority level, and at the time of this writing, based on my experience, each of those will at any given time be dispatching work to between 0 and 64 threads.
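Spelled out in code, fetching the global queues looks like this (the per-priority singleton behavior is exactly what dispatch_get_global_queue documents):

```objc
// One global concurrent queue per priority level; you fetch them, never create them.
dispatch_queue_t high = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0);
dispatch_queue_t def  = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_queue_t low  = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0);
dispatch_queue_t bg   = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0);

// Calling the function twice with the same priority returns the same queue.
NSAssert(def == dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0),
         @"global queues behave like per-priority singletons");
```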
Is it almost a singleton?
Strictly speaking, no, but you can think of them as singletons, one per priority level.
So if I use that queue always for my dispatch_async calls, will that
queue get full and have to wait for things to finish before another
one can start, or can it assign other tasks to different threads?
It can get full. Practically speaking, if you are saturating one of the global concurrent queues (i.e. more than 64 background tasks of the same priority in flight at the same time), you probably have a bad design. (See this answer for more details on queue width limits)
I guess I'm a little confused because when I'm choosing the queue for
an NSOperation, I can choose the queue for the main thread with
[NSOperationQueue mainQueue], which seems synonymous with
dispatch_get_main_queue
They are not strictly synonymous. Although NSOperationQueue uses GCD under the hood, there are some important differences. For instance, in a single pass of the main run loop, only one operation enqueued to +[NSOperationQueue mainQueue] will be executed, whereas more than one block submitted to dispatch_get_main_queue might be executed on a single run loop pass. This probably doesn't matter to you, but they are not, strictly speaking, the same thing.
but I was under the impression background
queues for NSOperation had to be individually made instances of
NSOperationQueue, yet GCD has a singleton for a background queue?
(dispatch_get_global_queue)
In short, yes. It sounds like you're conflating GCD and NSOperationQueue. NSOperationQueue is not just a "trivial wrapper" around GCD, it's its own thing. The fact that it's implemented on top of GCD should not really matter to you. NSOperationQueue is a task queue, with an explicitly settable width, that you can create instances of "at will." You can make as many of them as you like. At some point, all instances of NSOperationQueue are, when executing NSOperations, pulling resources from the same pool of system resources as the rest of your process, including GCD, so yes, there are some interactions there, but they are opaque to you.
Furthermore - silly question but wanted to make sure - if I put a task
in a queue, the queue is assigned to one thread, right? If the task is
big enough it won't split it up over multiple threads, will it?
A single task can only ever be executed on a single thread. There's no magical way for the system to "split" a monolithic task into subtasks; that's your job. With regard to your specific wording, the queue isn't "assigned to one thread", the task is. The next task from the queue might be executed on a completely different thread.

Concurrency and synchronous execution

I was reading OReilly's iOS6 Programming Cookbook and am confused about something. Quoting from page 378, chapter 6 "Concurrency":
For any task that doesn’t involve the UI, you can use global concurrent queues in GCD.
These allow either synchronous or asynchronous execution. But synchronous execution
does not mean your program waits for the code to finish before continuing. It
simply means that the concurrent queue will wait until your task has finished before it
continues to the next block of code on the queue. When you put a block object on a
concurrent queue, your own program always continues right away without waiting for
the queue to execute the code. This is because concurrent queues, as their name implies,
run their code on threads other than the main thread.
I bolded the text that intrigues me. I think it is false because as I've just learned today synchronous execution means precisely that the program waits for the code to finish before continuing.
Is this correct or how does it really work?
How is this paragraph wrong? Let us count the ways:
For any task that doesn’t involve the UI, you can use global
concurrent queues in GCD.
This is overly specific and inaccurate. Certain UI-centric tasks, such as loading images, could be done off the main thread. This would be better said as "In most cases, don't interact with UIKit classes except from the main thread," but there are exceptions (for instance, drawing to a UIGraphicsContext is thread-safe as of iOS 4, IIRC, and drawing is a great example of a CPU-intensive task that could be offloaded to a background thread). FWIW, any work unit you can submit to a global concurrent queue you can also submit to a private concurrent queue.
These allow either synchronous or asynchronous execution. But
synchronous execution does not mean your program waits for the code to
finish before continuing. It simply means that the concurrent queue
will wait until your task has finished before it continues to the next
block of code on the queue.
As iWasRobbed speculated, they appear to have conflated sync/async work submission with serial/concurrent queues. Synchronous execution does, by definition, mean that your program waits for the code to return before continuing. Asynchronous execution, by definition, means that your program does not wait. Similarly, serial queues only execute one submitted work unit at a time, executing each in FIFO order. Concurrent queues, private or global, in the general case (more in a second), schedule submitted blocks for execution, in the order in which they were enqueued, on one or more background threads. The number of background threads used is an opaque implementation detail.
When you put a block object on a concurrent queue, your own program
always continues right away without waiting for the queue to execute
the code.
Nope. Not true. Again, they're mixing up sync/async and serial/concurrent. I suspect what they're trying to say is: When you enqueue a block asynchronously, your own program always continues right away without waiting for the queue to execute the code.
This is because concurrent queues, as their name implies, run their code on threads other than the main thread.
This is also not correct. For instance, if you have a private concurrent queue that you are using to act as a reader/writer lock that protects some mutable state, if you dispatch_sync to that queue from the main thread, your code will, in many cases, execute on the main thread.
Overall, this whole paragraph is really pretty horrible and misleading.
EDIT: I mentioned this in a comment on another answer, but it might be helpful to put it here for clarity. The concept of "synchronous vs. asynchronous dispatch" and the concept of "serial vs. concurrent queues" are largely orthogonal. You can dispatch work to any queue (serial or concurrent) in either a synchronous or asynchronous way. The sync/async dichotomy is primarily relevant to the dispatcher (in that it determines whether the dispatching code is blocked until completion of the block or not), whereas the serial/concurrent dichotomy is primarily relevant to the dispatched block (in that it determines whether it potentially executes concurrently with other blocks or not).
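That orthogonality can be spelled out as four legal combinations (queue labels are illustrative):

```objc
dispatch_queue_t serial =
    dispatch_queue_create("com.example.serial", DISPATCH_QUEUE_SERIAL);
dispatch_queue_t concurrent =
    dispatch_queue_create("com.example.concurrent", DISPATCH_QUEUE_CONCURRENT);

dispatch_async(serial,     ^{ /* dispatcher doesn't wait; blocks run one at a time */ });
dispatch_sync(serial,      ^{ /* dispatcher waits; blocks run one at a time        */ });
dispatch_async(concurrent, ^{ /* dispatcher doesn't wait; may run alongside others */ });
dispatch_sync(concurrent,  ^{ /* dispatcher waits; may run alongside others        */ });
```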
I think that bit of text is poorly written, but they are basically explaining the difference between execution on a serial queue and execution on a concurrent queue. A serial queue executes one task at a time (though not necessarily always on the same thread), whereas a concurrent queue can use one or more threads at once.
Serial queues execute one task after the next, in the order in which the tasks were put into the queue. Each task has to wait for the prior task to finish before it can be executed.
In a concurrent queue, tasks can run at the same time as other tasks, since the queue normally uses multiple threads; they are still started in the order they were enqueued, but they can finish in any order. If you use NSOperation, you can also set up dependencies to ensure that certain tasks are executed only after other tasks complete.
More info:
https://developer.apple.com/library/ios/documentation/General/Conceptual/ConcurrencyProgrammingGuide/OperationQueues/OperationQueues.html
The author is Vandad Nahavandipoor, I don't want to affect this guy's sales income, but all his books contain the same mistakes in the concurrency chapters:
http://www.amazon.com/Vandad-Nahavandipoor/e/B004JNSV7I/ref=sr_tc_2_rm?qid=1381231858&sr=8-2-ent
Which is ironic since he's got a 50-page book exactly on this subject.
http://www.amazon.com/Concurrent-Programming-Mac-iOS-Performance/dp/1449305636/ref=la_B004JNSV7I_1_6?s=books&ie=UTF8&qid=1381232139&sr=1-6
People should STOP reading this guy's books.
When you put a block object on a concurrent queue, your own program
always continues right away without waiting for the queue to execute
the code. This is because concurrent queues, as their name implies,
run their code on threads other than the main thread.
I find it confusing, and the only explanation I can think of is that she is talking about who blocks whom. From man dispatch_sync:
Conceptually, dispatch_sync() is a convenient wrapper around
dispatch_async() with the addition of a semaphore to wait for
completion of the block, and a wrapper around the block to signal its
completion.
So execution returns to your code right away, but the next thing dispatch_sync does after queueing the block is wait on a semaphore until the block is executed. Your code blocks because it chooses to.
The other way your code would block is when the queue chooses to run the block using your thread (the one from which you executed dispatch_sync). In that case, your code doesn't regain control until the block has executed, so the check on the semaphore would always find the block already done.
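The man-page description translates almost directly into code. A hypothetical my_dispatch_sync, built from dispatch_async and a semaphore (and ignoring the run-on-the-calling-thread optimization), would look like:

```objc
// Illustrative re-implementation of dispatch_sync's semantics.
void my_dispatch_sync(dispatch_queue_t queue, dispatch_block_t block) {
    dispatch_semaphore_t done = dispatch_semaphore_create(0);
    dispatch_async(queue, ^{
        block();                          // run the caller's block on the queue
        dispatch_semaphore_signal(done);  // signal completion
    });
    // The caller blocks here, by its own choice, until the block has run.
    dispatch_semaphore_wait(done, DISPATCH_TIME_FOREVER);
}
```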
Erica Sadun for sure knows better than me, so maybe I'm missing some nuance here, but this is my understanding.

What is the difference between dispatch_get_global_queue and dispatch_queue_create?

I'm writing a moderately complex iOS program that needs to have multiple threads for some of its longer operations (parsing, connections to the network...etc). However, I'm confused as to what the difference is between dispatch_get_global_queue and dispatch_queue_create.
Which one should I use and could you give me a simple explanation of what the difference is in general? Thanks.
As the documentation describes, a global queue is good for concurrent tasks (i.e. you're going to dispatch various tasks asynchronously and you're perfectly happy if they run concurrently) and if you don't want to encounter the theoretical overhead of creating and destroying your own queue.
The creating of your own queue is very useful if you need a serial queue (i.e. you need the dispatched blocks to be executed one at a time). This can be useful in many scenarios, such as when each task is dependent upon the preceding one or when coordinating interaction with some shared resource from multiple threads.
Less common, but you will also want to create your own queue if you need to use barriers in conjunction with a concurrent queue. In that scenario, create a concurrent queue (i.e. dispatch_queue_create with the DISPATCH_QUEUE_CONCURRENT option) and use the barriers in conjunction with that queue. You should never use barriers on global queues.
My general counsel is if you need a serial queue (or need to use barriers), then create a queue. If you don't, go ahead and use the global queue and bypass the overhead of creating your own.
If you want a concurrent queue, but want to control how many operations can run concurrently, you can also consider using NSOperationQueue which has a maxConcurrentOperationCount property. This can be useful when doing network operations and you don't want too many concurrent requests being submitted to your server.
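A sketch of that NSOperationQueue approach (the cap of 4 and the urlsToFetch array are illustrative):

```objc
// Cap concurrent network operations at 4.
NSOperationQueue *networkQueue = [[NSOperationQueue alloc] init];
networkQueue.maxConcurrentOperationCount = 4;

for (NSURL *url in urlsToFetch) {   // urlsToFetch is an assumed array of URLs
    [networkQueue addOperationWithBlock:^{
        // At most 4 of these blocks run at the same time; the rest wait
        // in the queue until a slot frees up.
        NSData *data = [NSData dataWithContentsOfURL:url];
        // ...process data...
    }];
}
```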
Just posted in a different answer, but here is something I wrote quite a while back:
The best way to conceptualize queues is to first realize that at the very low-level, there are only two types of queues: serial and concurrent.
Serial queues are monogamous, but uncommitted. If you give a bunch of tasks to each serial queue, it will run them one at a time, using only one thread at a time. The uncommitted aspect is that serial queues may switch to a different thread between tasks. Serial queues always wait for a task to finish before going to the next one. Thus tasks are completed in FIFO order. You can make as many serial queues as you need with dispatch_queue_create.
The main queue is a special serial queue. Unlike other serial queues, which are uncommitted, in that they are "dating" many threads but only one at time, the main queue is "married" to the main thread and all tasks are performed on it. Jobs on the main queue need to behave nicely with the runloop so that small operations don't block the UI and other important bits. Like all serial queues, tasks are completed in FIFO order.
If serial queues are monogamous, then concurrent queues are promiscuous. They will submit tasks to any available thread or even make new threads depending on system load. They may perform multiple tasks simultaneously on different threads. It's important that tasks submitted to a global queue are thread-safe and minimize side effects. Tasks are submitted for execution in FIFO order, but order of completion is not guaranteed. As of this writing there are only three concurrent queues, the global ones, and you can't make them; you can only fetch them with dispatch_get_global_queue. (Later OS releases also let you create private concurrent queues with DISPATCH_QUEUE_CONCURRENT, as other answers here note.)
edit: blog post expanding on this answer: http://amattn.com/p/grand_central_dispatch_gcd_summary_syntax_best_practices.html
One returns an existing global queue, the other creates a new one. Instead of using GCD, I would consider using NSOperation and an operation queue. You can find more information about it in this guide. Typically, if you want the operations to execute concurrently, you want to create your own queue and put your operations in it.
