What is good practice for using NSOperation and NSOperationQueue? - ios

I have just started using NSOperation and NSOperationQueue. I really want to know what the best practices are for using them.
Should I create only one queue for the whole app, or should I create one queue for each specific type of task? What is the effect of creating many queues?

It depends on what you're using them for. If you're using them for performance, maxing out the CPU, then there is little point in going much beyond the number of cores you expect to have.
If you're offloading processing just to keep the UI responsive, but you're not particularly concerned about maxing out performance, I'd just use the default background queue for simplicity.
If you're using sequential background queues to trigger actions on external events (network request completion, for example), then it doesn't do any harm to use a bunch, as they won't lead to a lot of threads trying to run concurrently.
There are many instances where you can simplify the design using an operation queue.
NSOperationQueues are built on top of GCD, and AFAIK, there is no performance impact of having loads of queues, as long as they're not trying to execute concurrently.
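If the goal is CPU-bound throughput, a minimal sketch (with a hypothetical queue name and placeholder work) would cap a queue's concurrency at the number of active cores:
NSOperationQueue *processingQueue = [[NSOperationQueue alloc] init];
processingQueue.name = @"com.example.processing"; // hypothetical label
// Cap concurrency at the number of cores currently available to the process
processingQueue.maxConcurrentOperationCount = (NSInteger)[NSProcessInfo processInfo].activeProcessorCount;
[processingQueue addOperationWithBlock:^{
    // CPU-bound work goes here
}];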

Read up on the Executing Operations section of Apple's Concurrency Programming Guide. It depends of course on your implementation and application, but in summary there can only be a certain number of NSOperations running concurrently, and increasing the number of NSOperationQueues won't provide any benefit. So my recommendation would be to opt for a single queue if your design allows it.

Related

How many NSOperationQueues should an app have?

The question is whether an app should have one queue instance for all async operations, or whether several queues should be created.
With one queue it's pretty simple, because all tasks are executed based on their assigned priority. For me that's preferable, since there is no need to write extra code.
In the case of multiple queues at least one of them should be main.
So some sort of queue manager should be implemented that will be able to suspend "sub" queues and allow execution of operations from main queue if needed.
The analogy with a single database connection makes me think that one centralised queue should be used for all async operations.
So what would you recommend? What are the best practices?
After doing some searching and brainstorming I came up with the following solution.
In my application I'm going to use 2 async queues:
one queue (NSOperationQueue) for all background operations (like parsing of downloaded .json files) and another queue (which is already implemented by NSURLSession class) for requests (API requests, downloading images, etc).
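A minimal sketch of that two-queue design, assuming a placeholder URL and parsing step (not from the original question):
NSOperationQueue *parsingQueue = [[NSOperationQueue alloc] init];
parsingQueue.name = @"com.example.json-parsing"; // hypothetical label
parsingQueue.qualityOfService = NSQualityOfServiceBackground;

// NSURLSession already manages its own queue for the network side
NSURL *url = [NSURL URLWithString:@"https://example.com/feed.json"]; // placeholder
[[[NSURLSession sharedSession] dataTaskWithURL:url completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) {
    if (data != nil) {
        [parsingQueue addOperationWithBlock:^{
            // parse the downloaded JSON off the main thread
            NSDictionary *json = [NSJSONSerialization JSONObjectWithData:data options:0 error:nil];
            (void)json; // use the parsed result as needed
        }];
    }
}] resume];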

Swift, number of threads in app

I am making an app for a website. I use JSON to get the data. I want to load all posts in threads (1 post - 1 thread). How many threads can I make? Should I control the number of threads?
With Cocoa you usually don't work with threads directly. Grand Central Dispatch (GCD) is an API that handles this for you. You just have to partition your task into small manageable chunks and dispatch them onto a background queue; the rest is handled for you. You don't need to worry about creating threads, how many are currently running, etc. When you dispatch enough work onto one (or possibly more) queues, the CPU will run at maximum load.
You can also use NSOperationQueue, which has the ability to throttle execution to some extent or cancel currently running tasks (not possible with GCD).
Unless you are doing anything unusual there is no need to use NSThread directly. Use GCD when you just need to perform a simple small task asynchronously. Use NSOperationQueue when you need more control, like cancelling submitted tasks or setting priorities. Its API is also a bit higher level and in Objective-C. GCD is a C-level API, so for example it can't catch Objective-C exceptions. NSOperationQueue uses GCD internally, so both should work equally well.
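As a rough sketch of the difference (the work and the throttle value are placeholders): GCD for a small fire-and-forget task, NSOperationQueue when you need throttling or cancellation.
// GCD: simple asynchronous task
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    // fetch and decode one post here
});

// NSOperationQueue: the same work, but throttled and cancellable
NSOperationQueue *postQueue = [[NSOperationQueue alloc] init];
postQueue.maxConcurrentOperationCount = 4; // arbitrary throttle for illustration
NSBlockOperation *op = [NSBlockOperation blockOperationWithBlock:^{
    // fetch and decode one post here
}];
[postQueue addOperation:op];
// later, if the result is no longer needed:
[op cancel];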

What is the difference between 'thread' and 'queue' in iOS development? [duplicate]

This question already has answers here:
Use of the terms "queues", "multicore", and "threads" in Grand Central Dispatch
I am new to iOS development. Now I am quite confused about the two concepts: "thread" and "queue". All I know is that they both relate to multithreaded programming. Can anyone explain these two concepts and the difference between them for me?
Thanks in advance!
How NSOperationQueue and NSThread Work:
NSThread:
iOS developers have to write code for the work they want to perform, as well as for the creation and management of the threads themselves.
iOS developers have to plan carefully how threads will be used.
iOS developers have to manage possible problems such as thread reuse, locking, etc. themselves.
Threads also consume more memory.
NSOperationQueue:
The NSOperation class is an abstract class which encapsulates the code and data associated with a single task.
Developers need to subclass NSOperation, or use one of the system-defined subclasses, to perform a task.
Add operations to an NSOperationQueue to execute them.
The NSOperationQueue runs its operations on worker threads that it manages; execution order takes operation priorities and readiness into account rather than being strictly the order in which operations were added.
Operation queues handle all of the thread management, ensuring that operations are executed as quickly and efficiently as possible.
An operation queue executes operations either directly by running them on secondary threads or indirectly using GCD (Grand Central Dispatch).
It takes care of all of the memory management and greatly simplifies the process.
If you don't want to use an operation queue, you can also execute an operation manually by calling its start method, though doing so can make your code more complex.
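A minimal sketch of both execution styles, with placeholder work:
NSBlockOperation *operation = [NSBlockOperation blockOperationWithBlock:^{
    // the code and data for a single task live inside the operation
}];

// Preferred: let a queue handle the thread management
NSOperationQueue *queue = [[NSOperationQueue alloc] init];
[queue addOperation:operation];

// Alternative: run the operation yourself by calling start
// (a non-concurrent operation then executes synchronously on the current thread)
// [operation start];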
How To Use NSThread And NSOperationQueue:
NSThread:
Though operation queues are the preferred way to perform tasks concurrently, depending on the application there may still be times when you need to create custom threads.
Threads are still a good way to implement code that must run in real time.
Use threads for specific tasks that cannot be implemented in any other way.
If you need more predictable behavior from code running in the background, threads may still offer a better alternative.
NSOperationQueue:
Use NSOperationQueue when you have more complex operations you want to run concurrently.
NSOperation allows for subclassing, dependencies, priorities, cancellation, and supports a number of other higher-level features (see the sketch below).
NSOperation actually uses GCD under the hood so it is as multi-core, multi-thread capable as GCD.
Now you should be aware of the advantages and disadvantages of NSThread and NSOperationQueue. You can use either of them according to the needs of your application.
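A minimal sketch of dependencies, priorities and cancellation (the operation contents are placeholders):
NSOperationQueue *queue = [[NSOperationQueue alloc] init];

NSBlockOperation *download = [NSBlockOperation blockOperationWithBlock:^{
    // download the data
}];
NSBlockOperation *parse = [NSBlockOperation blockOperationWithBlock:^{
    // parse the downloaded data
}];

// parse will not start until download has finished
[parse addDependency:download];
parse.queuePriority = NSOperationQueuePriorityHigh;

[queue addOperations:@[download, parse] waitUntilFinished:NO];
// later, anything still pending can be cancelled:
// [queue cancelAllOperations];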
Before you read my answer you might want to consider reading this - Migrating away from Threads
I am keeping the discussion theoretical as your question does not have any code samples. Both these constructs are required for increasing app responsiveness & usability.
A message queue is a data structure for holding messages from the time they're sent until the time the receiver retrieves and acts on them. Generally queues are used as a way to 'connect' producers (of data) & consumers (of data).
A thread pool is a pool of threads that do some sort of processing. A thread pool will normally have some sort of thread-safe queue (refer message queue) attached to allow you to queue up jobs to be done. Here the queue would usually be termed 'task-queue'.
So in a way a thread pool could exist at your producer end (generating data) or at your consumer end (processing the data), and the way to 'pass' that data would be through queues. Why the need for this "middleman"?
It decouples the systems. Producers do not know about consumers and vice versa.
The consumers are not bombarded with data if there is a spike in producer data. The queue length would increase, but the consumers are safe.
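A minimal sketch of this decoupling, using a serial dispatch queue as the task queue (the label and the work are illustrative):
dispatch_queue_t taskQueue = dispatch_queue_create("com.example.task-queue", DISPATCH_QUEUE_SERIAL);

// Producer side: enqueue work as data arrives; the producer never blocks on the consumer
dispatch_async(taskQueue, ^{
    // Consumer side: GCD's thread pool drains the queue one item at a time
    // process one piece of data here
});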
Example:
In iOS the main thread, also called the UI thread, is very important because it is in charge of dispatching events to the appropriate widget, and this includes drawing events; basically the UI that the user sees and interacts with.
If you touch a button on screen, the UI thread dispatches the touch event to the app, which in turn sets its pressed state and posts a request to the event queue. The UI thread dequeues the request and notifies the widget to redraw itself.

IOS - Threads are stopped during web service call

I am running an application with multiple threads. My application has a set of queue objects (simply an array). The input for each thread comes from this queue. Each thread gets a queue object as input. I am creating and starting the threads in a simple manner as shown below.
threadOne = [[NSThread alloc] initWithTarget:self
                                    selector:@selector(initThread:)
                                      object:nil];
[threadOne main];
Likewise, I am starting multiple threads in this way, one for each object.
When I run my application in debug mode, during my web service call the currently running thread doesn't wait for the response (my web service call takes approximately 2-3 seconds to generate a response). Here my queue processing gets stopped because of the web service call.
Below is the code for reference.
dispatch_sync(dispatch_get_main_queue(), ^{
    [service genID:self action:@selector(genHandler:) username:username password:password];
});
I want to run multiple threads in parallel in my application. Is there any solution to accomplish this?
You probably have a deadlock because you are calling
dispatch_sync(dispatch_get_main_queue()
from the main thread. You can find an excellent explanation in this answer.
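One way out of the deadlock, sketched from the code in the question (assuming the call really does have to run on the main queue), is to dispatch asynchronously, or to skip the dispatch when you are already on the main thread:
if ([NSThread isMainThread]) {
    [service genID:self action:@selector(genHandler:) username:username password:password];
} else {
    dispatch_async(dispatch_get_main_queue(), ^{
        [service genID:self action:@selector(genHandler:) username:username password:password];
    });
}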
As a side note, I would recommend using NSOperationQueue or Grand Central Dispatch, because Apple does not recommend using NSThread objects directly:
In the past, introducing concurrency to an application required the creation of one or more additional threads. Unfortunately, writing threaded code is challenging. Threads are a low-level tool that must be managed manually. Given that the optimal number of threads for an application can change dynamically based on the current system load and the underlying hardware, implementing a correct threading solution becomes extremely difficult, if not impossible to achieve. In addition, the synchronization mechanisms typically used with threads add complexity and risk to software designs without any guarantees of improved performance.
Both OS X and iOS adopt a more asynchronous approach to the execution of concurrent tasks than is traditionally found in thread-based systems and applications. Rather than creating threads directly, applications need only define specific tasks and then let the system perform them. By letting the system manage the threads, applications gain a level of scalability not possible with raw threads. Application developers also gain a simpler and more efficient programming model.
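As a rough sketch of that approach applied to the question's per-object work (queueObjects and processObject: are hypothetical placeholders for the asker's own code):
NSOperationQueue *workQueue = [[NSOperationQueue alloc] init];
for (id queueObject in queueObjects) {
    [workQueue addOperationWithBlock:^{
        [self processObject:queueObject]; // hypothetical per-object processing
    }];
}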

Using a single shared background thread for iOS data processing?

I have an app where I'm downloading a number of resources from the network, and doing some processing on each one. I don't want this work happening on the main thread, but it's pretty lightweight and low-priority, so all of it can really happen on the same shared work thread. That seems like it'd be a good thing to do, because of the work required to set up & tear down all of these work threads (none of which will live very long, etc.).
Surprisingly, though, there doesn't seem to be a simple way to get all of this work happening on a single, shared thread, rather than spawning a new thread for each task. This is complicated by the large number of paths to achieving concurrency that seem to have cropped up over the years. (Explicit NSThreads, NSOperationQueue, GCD, etc.)
Am I over-estimating the overhead involved in spawning all of these threads? Should I just not sweat it, and use the easier thread-per-task approaches? Use GCD, and assume that it's smarter than I about thread (re)use?
Use GCD — it's the current official recommendation and it's less effort than any of the other solutions. If you explicitly need the things you pass in to occur serially (i.e., as if on a single thread) then you can achieve that, but it's probably smarter just to change, e.g.
[self doCostlyTask];
To:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0), ^()
{
[self doCostlyTask];
dispatch_async(dispatch_get_main_queue(), ^()
{
// most UIKit tasks are permissible only from the main queue or thread,
// so if you want to update the UI as a result of the completed action,
// this is a safe way to proceed
[self costlyTaskIsFinished];
});
});
That essentially tells the OS "do this code with low priority wherever it would be most efficient to do it". The various things you post to any of the global queues may or may not execute on the same thread as each other and as the thread that dispatched them and may or may not occur concurrently. The OS applies the rules it considers optimal.
Exposition:
GCD is Apple's implementation of thread pooling, and they introduced closures (as 'blocks') at the same time to make it usable. So the ^(C-style args){code} syntax is a block/closure. That is, it's code plus the state of any variables (subject to caveats) that the code references. You can store and call blocks yourself with no GCD knowledge or use.
dispatch_async is a GCD function that issues a block to the nominated queue. It executes the block on some thread at some time, applying unspecified internal rules to do so in an optimal fashion. It judges that based on factors such as how many cores you have, how busy each is, its current thinking on power saving (which may depend on the power source), how the power costs for that specific CPU work out, etc.
So as far as the programmer is concerned, blocks make code into something you can pass around as an argument. GCD lets you request that blocks are executed according to the best scheduling the OS can manage. Blocks are very lightweight to create and copy — a lot more so than e.g. NSOperations.
GCD goes beyond the basic asynchronous dispatch in the above example (eg, you can do a parallel for loop and wait for it to finish in a single call) but unless you have specific needs it's probably not all that relevant.
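For reference, the "parallel for loop and wait" mentioned above is dispatch_apply; a minimal sketch with a placeholder iteration count:
dispatch_apply(100, dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^(size_t i) {
    // process item i; iterations may run concurrently
});
// dispatch_apply returns only once all iterations have completed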
Surprisingly, though, there doesn't seem to be a simple way to get all
of this work happening on a single, shared thread, rather than
spawning a new thread for each task.
This is exactly what GCD is for. GCD maintains a pool of threads that can be used for executing arbitrary blocks of code, and it takes care of managing that pool for best results on whatever hardware is at hand. This avoids the cost of constantly creating and destroying threads and also saves you from having to figure out how many processors are available, etc.
Tommy provides the right answer if you really care that only a single thread should be used, but it sounds like you're really just trying to avoid creating one thread per task.
This is complicated by the large number of paths to achieving
concurrency that seem to have cropped up over the years. (Explicit
NSThreads, NSOperationQueue, GCD, etc.)
NSOperationQueue uses GCD, so you can use that if it makes life easier than using GCD directly.
Use GCD, and assume that it's smarter than I about thread (re)use?
Exactly.
I would use NSOperationQueue or GCD and profile. Can't imagine thread overhead will beat out network delays.
NSOperationQueue would let you limit the number of simultaneous operations, if they end up getting too greedy. In fact, you can limit it to one if you need to.
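A minimal sketch of that single-operation limit, which behaves like one shared, serial background worker (the work itself is a placeholder):
NSOperationQueue *sharedWorkQueue = [[NSOperationQueue alloc] init];
sharedWorkQueue.maxConcurrentOperationCount = 1;
[sharedWorkQueue addOperationWithBlock:^{
    // lightweight, low-priority processing for one downloaded resource
}];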

Resources