I am studying the NSOperation object. According to the Apple documentation (Listing 2-5), we can implement a concurrent (asynchronous) NSOperation. The relevant part of the start method is below:
- (void)start {
    // Always check for cancellation before launching the task.
    if ([self isCancelled])
    {
        // Must move the operation to the finished state if it is canceled.
        [self willChangeValueForKey:@"isFinished"];
        finished = YES;
        [self didChangeValueForKey:@"isFinished"];
        return;
    }

    // If the operation is not canceled, begin executing the task.
    [self willChangeValueForKey:@"isExecuting"];
    [NSThread detachNewThreadSelector:@selector(main) toTarget:self withObject:nil];
    executing = YES;
    [self didChangeValueForKey:@"isExecuting"];
}
I see that a new thread is detached to run the main method:
[NSThread detachNewThreadSelector:@selector(main) toTarget:self withObject:nil];
So it seems that NSOperation itself does nothing to provide concurrent execution; the asynchrony is achieved by creating a new thread. So why do we need NSOperation?
You may use a concurrent NSOperation to execute an asynchronous task. We should emphasize the fact that the asynchronous task's main work will execute on a different thread than the one on which its start method was invoked.
So, why is this useful?
Wrapping an asynchronous task in a concurrent NSOperation lets you leverage NSOperationQueue and, furthermore, enables you to set up dependencies between operations.
For example, enqueuing operations into an NSOperationQueue lets you define how many operations execute in parallel. You can also easily cancel the whole queue, that is, all operations that have been enqueued.
An NSOperationQueue is also useful for associating work with certain resources: for example, one queue executes CPU-bound asynchronous tasks, another executes IO-bound tasks, and yet another executes tasks related to Core Data, and so forth. For each queue you can define the maximum number of concurrent operations.
With dependencies you can achieve composition. That means, for example, you can define that Operation C only executes after Operations A and B have finished. With composition you can solve more complex asynchronous problems.
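A minimal sketch of dependencies plus a concurrency limit; the operation names and the queue here are illustrative, not from the documentation:
// Hypothetical operations A, B and C; any NSOperation subclass or block operation works.
NSOperation *operationA = [NSBlockOperation blockOperationWithBlock:^{ /* work A */ }];
NSOperation *operationB = [NSBlockOperation blockOperationWithBlock:^{ /* work B */ }];
NSOperation *operationC = [NSBlockOperation blockOperationWithBlock:^{ /* work C */ }];

// C will not start until both A and B have finished.
[operationC addDependency:operationA];
[operationC addDependency:operationB];

NSOperationQueue *queue = [[NSOperationQueue alloc] init];
queue.maxConcurrentOperationCount = 2;   // at most two operations run in parallel
[queue addOperations:@[operationA, operationB, operationC] waitUntilFinished:NO];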
That's it, IMHO.
I would like to mention that using NSOperations is somewhat cumbersome, clunky and verbose; it requires a lot of boilerplate code, subclasses and such. There are arguably nicer alternatives, though they require third-party libraries.
Related
I'm calling a function on a thread in my project.
[self performSelectorInBackground:@selector(shortVibration) withObject:nil];
It's called in a loop.
I would like the function to be called on its own thread.
I don't want it to be called again while a previous call is still running (this thread call is in a loop... and it is).
So, I don't want to call my thread function again until the last one is done executing.
How can I do this?
don't want it to be called at the same time
That suggests a "serial queue". That could be a dispatch queue or an operation queue. But a serial queue is one that can run only one task at a time.
Or, you can decouple the loop from the repeating vibration and set up a timer to run while your loop progresses which will repeatedly call your vibration routine and then cancel the timer at the end of the loop. You can either use a standard NSTimer and have it dispatch the calls to whatever queue you want, or you can use a GCD timer, which you can schedule on a background queue.
It depends upon the details of how this vibration routine works and the nature of your loop. We'd need more detail (e.g. describe the broader problem and the nature of this "vibrate" routine) to help you further.
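A minimal sketch of the serial-queue approach; the shortVibration method comes from the question, while the queue label and loop count are placeholders:
// A serial dispatch queue: each submitted block runs only after the previous one finishes.
dispatch_queue_t vibrationQueue = dispatch_queue_create("com.example.vibration", DISPATCH_QUEUE_SERIAL);

for (int i = 0; i < 10; i++) {   // however many iterations your loop performs
    dispatch_async(vibrationQueue, ^{
        // Runs off the main thread and never overlaps a previous vibration call.
        [self shortVibration];
    });
}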
Perhaps you should take a look at NSOperationQueue, which lets you run work on queues you create yourself; the operations on such a queue execute off the main thread.
For example:
NSOperationQueue *backgroundQueue = [[NSOperationQueue alloc] init];
backgroundQueue.maxConcurrentOperationCount = 1;
backgroundQueue.name = @"com.foo.bar";

[backgroundQueue addOperationWithBlock:^{
    // Do what you want here; you also have access to properties of your class.
}];
With maxConcurrentOperationCount you can control how many operations execute in parallel. You can also create your own subclass of NSOperation and execute your code there; then you add the operation with [backgroundQueue addOperation:yourOperationSubclassInstance], as sketched below.
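A minimal sketch of such a subclass; the class name and the work inside main are illustrative:
#import <Foundation/Foundation.h>

// A simple, non-concurrent NSOperation subclass: override -main and the queue
// takes care of running it on a background thread.
@interface VibrationOperation : NSOperation
@end

@implementation VibrationOperation

- (void)main {
    if (self.isCancelled) {
        return;   // respect cancellation before doing any work
    }
    // ... perform the actual work here ...
}

@end

// Usage: [backgroundQueue addOperation:[[VibrationOperation alloc] init]];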
I hope this helps you a little. I can't glean more detail from your question; perhaps post some code.
I am new to iPhone development and going through the GCD concepts for multithreading.
'dispatch_queue_t' creates a serial queue, and I have read that a serial queue will only execute one job at a time. GCD is intended to execute multiple tasks simultaneously, so why do serial queues even exist?
For example, I want to do two tasks, Task A and Task B. I create one serial queue for executing both of these tasks, and I am doing this from the main queue. Here is what I am doing:
dispatch_queue_t my_serial_queue = dispatch_queue_create("queue_identifier", 0);

dispatch_sync(my_serial_queue, ^{
    NSLog(@"Task 1");
});

dispatch_sync(my_serial_queue, ^{
    NSLog(@"Task 2");
});
Now, as expected, both tasks execute serially because it is a serial queue, i.e. Task A executes first and Task B only starts after Task A has finished. The log output confirms this.
So my question is: what if I want to execute both tasks simultaneously? If the answer is to create another serial queue for Task B, then the code would be structured like this:
dispatch_queue_t my_serial_queue_1 = dispatch_queue_create("queue_identifier_1", 0);
dispatch_queue_t my_serial_queue_2 = dispatch_queue_create("queue_identifier_2", 0);

dispatch_sync(my_serial_queue_1, ^{
    NSLog(@"Task 1");
});

dispatch_sync(my_serial_queue_2, ^{
    NSLog(@"Task 2");
});
I am getting the same output. The reason is that I am using dispatch_sync instead of dispatch_async. But since I am running the two tasks on different queues, shouldn't they execute simultaneously? If not, why should we create a different queue at all? I might as well have used the same queue with dispatch_async to execute both tasks simultaneously.
I really need an answer to this question, because it will guide me when designing the structure of my multitasking apps in the future.
Your confusion is pretty much entirely because you're using dispatch_sync. dispatch_sync is not a tool for getting concurrent execution; it is a tool for temporarily limiting concurrency for the sake of safety.
Once you're using dispatch_async, you can get concurrency either by having more than one queue, or by using concurrent queues. The purpose of using serial queues in this context is to control which work is done simultaneously.
Consider the following very silly example:
__block void *shared = NULL;

for (;;) {
    dispatch_async(aConcurrentDispatchQueue, ^{
        shared = malloc(128);
        free(shared);
    });
}
This will crash, because eventually two of the concurrently executing blocks will end up freeing the same 'shared' pointer. This is a contrived example, obviously, but in practice nearly all shared mutable state must not be mutated concurrently. Serial queues are your tool for making sure of that.
Summarizing:
When the work in your blocks is truly independent and thread-safe, and purely computational, go ahead and use dispatch_async to a concurrent queue
When you have logically independent tasks that each consist of several related blocks, use a serial queue for each task
When you have work that accesses shared mutable state, do those accesses in dispatch_sync()ed blocks on a serial queue, and the rest of the work on a concurrent queue (see the sketch right after this list)
When you need to do UI-related work, dispatch_async or dispatch_sync to the main queue, depending on whether you need to wait for it to finish or not
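A minimal sketch of the shared-mutable-state point above; the counter, queue label, and placeholder computation are illustrative, not from the original answer:
// Protect shared mutable state with a serial queue; accesses never overlap.
dispatch_queue_t stateQueue = dispatch_queue_create("com.example.shared-state", DISPATCH_QUEUE_SERIAL);
__block NSUInteger counter = 0;

dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    // Heavy, independent work can run concurrently...
    NSUInteger partialResult = 42;   // placeholder computation

    // ...but touch the shared state only through the serial queue.
    dispatch_sync(stateQueue, ^{
        counter += partialResult;
    });
});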
'dispatch_queue_t' doesn't create anything; it is the type of a dispatch queue, which can be either serial or concurrent.
dispatch_queue_create has two parameters. The second parameter decides whether the queue it creates is serial or concurrent. But usually you don't create concurrent queues yourself; you use one of the existing global concurrent queues.
dispatch_sync dispatches a block on a queue and waits until the block is finished. It should be obvious that this very much limits concurrency. You should almost always use dispatch_async.
Serial queues can only execute one block at a time. It should be obvious that this very much limits concurrency. Serial queues are still useful when you need to perform various blocks one after another, and they can execute concurrently with blocks in other queues.
So for maximum use of processors use dispatch_async on a concurrent queue. And there is no need to create more than one concurrent queue. It's concurrent. It can run any number of blocks concurrently.
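Applied to the example in the question, a sketch using the global default-priority concurrent queue might look like this:
// Both tasks are submitted asynchronously to a concurrent queue,
// so they may run at the same time on different threads.
dispatch_queue_t concurrentQueue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

dispatch_async(concurrentQueue, ^{
    NSLog(@"Task 1");
});

dispatch_async(concurrentQueue, ^{
    NSLog(@"Task 2");
});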
Say I want to implement a pattern like so:
a = some array I download from the internet
b = manipulate a somehow (long operation)
c = first object of b
These would obviously need to be called synchronously, which is causing my problems in Objective-C. I've read about NSOperationQueue and GCD, and I don't quite understand them, or which would be appropriate here. Can someone please suggest a solution? I know I can also use performSelector:…waitUntilDone:, but that doesn't seem efficient for larger operations.
So create a serial dispatch queue, dump all the work there (each step in its own block), and in the last block post a method back to yourself on the main queue, telling your control class that the work is done.
This is by far and away the best architecture for many such tasks.
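A minimal sketch of that architecture; the helper methods downloadArray, manipulateArray: and workIsDoneWith: are hypothetical stand-ins for steps a, b and c in the question:
dispatch_queue_t workQueue = dispatch_queue_create("com.example.work", DISPATCH_QUEUE_SERIAL);
__block NSArray *a = nil;
__block NSArray *b = nil;

dispatch_async(workQueue, ^{
    a = [self downloadArray];                 // a = array downloaded from the internet
});
dispatch_async(workQueue, ^{
    b = [self manipulateArray:a];             // b = long-running manipulation of a
});
dispatch_async(workQueue, ^{
    id c = [b firstObject];                   // c = first object of b

    // Last block: post back to the main queue to say the work is done.
    dispatch_async(dispatch_get_main_queue(), ^{
        [self workIsDoneWith:c];
    });
});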
I'm glad your question is answered. A couple of additional observations:
A minor refinement in your terminology, but I'm presuming you want to run these tasks asynchronously (i.e. don't block the main queue and freeze the user interface), but you want these to operate in a serial manner (i.e., where each will wait for the prior step to complete before starting the next task).
The easiest approach, before I dive into serial queues, is to just do these three in a single dispatched task:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    [self doDownloadSynchronously];
    [self manipulateResultOfDownload];
    [self doSomethingWithFirstObject];
});
As you can see, since this is all a single task that runs these three steps one after another, you can dispatch it to any background queue (above, it is dispatched to one of the global background queues); because everything happens within a single dispatched block, the three steps will run sequentially, one after the next.
Note that while it's generally inadvisable to create synchronous network requests, as long as you're doing this on a background queue it's less problematic (though you might want to create an operation-based network request, as discussed further below, if you want the ability to cancel an in-progress network request).
If you really need to dispatch these three tasks separately, then just create your own private serial queue. By default, when you create your own custom dispatch queue, it is a serial queue, e.g.:
dispatch_queue_t queue = dispatch_queue_create("com.company.app.queuename", 0);
You can then schedule these three tasks:
dispatch_async(queue, ^{
    [self doDownloadSynchronously];
});

dispatch_async(queue, ^{
    [self manipulateResultOfDownload];
});

dispatch_async(queue, ^{
    [self doSomethingWithFirstObject];
});
The operation queue approach is just as easy, but by default, operation queues are concurrent, so if we want it to be a serial queue, we have to specify that there are no concurrent operations (i.e. the max concurrent operation count is 1):
NSOperationQueue *queue = [[NSOperationQueue alloc] init];
queue.maxConcurrentOperationCount = 1;

[queue addOperationWithBlock:^{
    [self doDownloadSynchronously];
}];

[queue addOperationWithBlock:^{
    [self manipulateResultOfDownload];
}];

[queue addOperationWithBlock:^{
    [self doSomethingWithFirstObject];
}];
This begs the question of why you might use the operation queue approach over the GCD approach. The main reason is that if you need the ability to cancel operations (e.g. you might want to stop the operations if the user dismisses the view controller that initiated the asynchronous operation), operation queues offer the ability to cancel operations, whereas it's far more cumbersome to do so with GCD tasks.
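If you do need cancellation with the operation queue approach, a rough sketch (using NSBlockOperation so the block can check its own cancellation state; the weak-reference pattern is just one common way to do this):
NSBlockOperation *downloadOperation = [[NSBlockOperation alloc] init];
__weak NSBlockOperation *weakDownloadOperation = downloadOperation;

[downloadOperation addExecutionBlock:^{
    if (weakDownloadOperation.isCancelled) {
        return;   // bail out early if the operation was cancelled before it ran
    }
    [self doDownloadSynchronously];
}];

[queue addOperation:downloadOperation];

// Later, e.g. when the user dismisses the view controller:
[queue cancelAllOperations];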
The only tricky/subtle issue here, in my opinion, is how you want to do that network operation synchronously. You can use the NSURLConnection class method sendSynchronousRequest:returningResponse:error:, or just grab the NSData from the server using dataWithContentsOfURL:. There are limitations to these sorts of synchronous network requests (e.g., you can't cancel the request once it starts), so many of us would use an NSOperation-based network request.
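For illustration only, a synchronous doDownloadSynchronously could be as simple as the following sketch (the URL and the downloadedData property are placeholders, not from the original answer):
- (void)doDownloadSynchronously {
    // Simple synchronous fetch; acceptable on a background queue, but not cancellable.
    NSURL *url = [NSURL URLWithString:@"https://example.com/data.json"];
    NSData *data = [NSData dataWithContentsOfURL:url];
    self.downloadedData = data;   // hypothetical property holding the result
}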
Doing an operation-based network request properly is probably beyond the scope of your question, so I might suggest you consider using AFNetworking to create an operation-based network request, which you could integrate into the serial operation queue approach above, eliminating much of the programming you would otherwise need for your own NSOperation-based network operation.
The main thing to remember is that when you're running this sort of code on a background queue, when you need to do UI updates (or update the model), these must take place back on the main queue, not the background queue. Thus if doing a GCD implementation, you'd do:
dispatch_async(queue, ^{
    [self doSomethingWithFirstObject];

    dispatch_async(dispatch_get_main_queue(), ^{
        // update your UI here
    });
});
The equivalent NSOperationQueue rendition would be:
[queue addOperationWithBlock:^{
    [self doSomethingWithFirstObject];

    [[NSOperationQueue mainQueue] addOperationWithBlock:^{
        // update your UI here
    }];
}];
The essential primer on these issues is the Concurrency Programming Guide.
There are a ton of great WWDC videos on the topic, including the WWDC 2012 videos Asynchronous Design Patterns with Blocks, GCD, and XPC and Building Concurrent User Interfaces on iOS, and the WWDC 2011 videos Blocks and Grand Central Dispatch in Practice and Mastering Grand Central Dispatch.
I would take a look at reactive cocoa (https://github.com/ReactiveCocoa/ReactiveCocoa), which is a great way to chain operations together without blocking. You can read more about it here at NSHipster: http://nshipster.com/reactivecocoa/
This is a two-part question. I hope someone can reply with a complete answer.
NSOperations are powerful objects. They can be of two different types: non-concurrent or concurrent.
The first type runs synchronously. You can take advantage of non-concurrent operations by adding them to an NSOperationQueue. The queue creates a thread (or threads) for you, with the result that the operation runs in a concurrent manner. The only caveat concerns the lifecycle of such an operation: when its main method finishes, it is removed from the queue. This can be a problem when you deal with async APIs.
Now, what about concurrent operations? From the Apple docs:
If you want to implement a concurrent operation—that is, one that runs asynchronously with respect to the calling thread—you must write additional code to start the operation asynchronously. For example, you might spawn a separate thread, call an asynchronous system function, or do anything else to ensure that the start method starts the task and returns immediately and, in all likelihood, before the task is finished.
This is almost clear to me: they run asynchronously, but you must take the appropriate actions to ensure that they do.
What is not clear to me is the following. The doc says:
Note: In OS X v10.6, operation queues ignore the value returned by isConcurrent and always call the start method of your operation from a separate thread.
What does that really mean? What happens if I add a concurrent operation to an NSOperationQueue?
Then, in this post Concurrent Operations, concurrent operations are used to download some HTTP content by means of NSURLConnection (in its async form). Operations are concurrent and included in a specific queue.
UrlDownloaderOperation * operation = [UrlDownloaderOperation urlDownloaderWithUrlString:url];
[_queue addOperation:operation];
Since NSURLConnection requires a run loop, the author shunts the start method onto the main thread (so I suppose that adding the operation to the queue has spawned a different one). In this manner, the main run loop can invoke the delegate methods implemented in the operation.
- (void)start
{
    if (![NSThread isMainThread])
    {
        [self performSelectorOnMainThread:@selector(start) withObject:nil waitUntilDone:NO];
        return;
    }

    [self willChangeValueForKey:@"isExecuting"];
    _isExecuting = YES;
    [self didChangeValueForKey:@"isExecuting"];

    NSURLRequest *request = [NSURLRequest requestWithURL:_url];
    _connection = [[NSURLConnection alloc] initWithRequest:request
                                                  delegate:self];
    if (_connection == nil)
        [self finish];
}
- (BOOL)isConcurrent
{
return YES;
}
// delegate method here...
My question is the following: is this thread safe? The run loop listens for sources, but the invoked methods are called on a background thread. Am I wrong?
Edit
I've run some tests of my own based on the code provided by Dave Dribin (see 1). I've noticed, as you wrote, that the NSURLConnection callbacks are called on the main thread.
OK, but now I'm still quite confused. I'll try to explain my doubts.
Why wrap an async pattern whose callbacks are invoked on the main thread inside a concurrent operation? Shunting the start method to the main thread makes the callbacks execute on the main thread, but then what about queues and operations? Where do I take advantage of the threading mechanisms provided by GCD?
Hope this is clear.
This is kind of a long answer, but the short version is that what you're doing is totally fine and thread safe since you've forced the important part of the operation to run on the main thread.
Your first question was, "What happens if I add a concurrent operation in a NSOperationQueue?" As of iOS 4, NSOperationQueue uses GCD behind the scenes. When your operation reaches the top of the queue, it gets submitted to GCD, which manages a pool of private threads that grows and shrinks dynamically as needed. GCD assigns one of these threads to run the start method of your operation, and guarantees this thread will never be the main thread.
When the start method finishes in a concurrent operation, nothing special happens (which is the point). The queue will allow your operation to run forever until you set isFinished to YES and do the proper KVO willChange/didChange calls, regardless of the calling thread. Typically you'd make a method called finish to do that, which it looks like you have.
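A typical finish method might look like the following sketch; the _isExecuting and _isFinished ivars are assumed to back this operation's isExecuting and isFinished overrides:
- (void)finish
{
    [self willChangeValueForKey:@"isExecuting"];
    [self willChangeValueForKey:@"isFinished"];

    _isExecuting = NO;
    _isFinished = YES;    // assumed ivar backing -isFinished

    [self didChangeValueForKey:@"isExecuting"];
    [self didChangeValueForKey:@"isFinished"];
}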
All this is fine and well, but there are some caveats involved if you need to observe or manipulate the thread on which your operation is running. The important thing to remember is this: don't mess with threads managed by GCD. You can't guarantee they'll live past the current frame of execution, and you definitely can't guarantee that subsequent delegate calls (i.e., from NSURLConnection) will occur on the same thread. In fact, they probably won't.
In your code sample, you've shunted start off to the main thread so you don't need to worry much about background threads (GCD or otherwise). When you create an NSURLConnection it gets scheduled on the current run loop, and all of its delegate methods will get called on that run loop's thread, meaning that starting the connection on the main thread guarantees its delegate callbacks also happen on the main thread. In this sense it's "thread safe" because almost nothing is actually happening on a background thread besides the start of the operation itself, which may actually be an advantage because GCD can immediately reclaim the thread and use it for something else.
Let's imagine what would happen if you didn't force start to run on the main thread and just used the thread given to you by GCD. A run loop can potentially hang forever if its thread disappears, such as when it gets reclaimed by GCD into its private pool. There are some techniques floating around for keeping a thread alive (such as adding an empty NSPort), but they don't apply to threads created by GCD, only to threads you create yourself and whose lifetime you can guarantee.
The danger here is that under light load you actually can get away with running a run loop on a GCD thread and think everything is fine. Once you start running many parallel operations, especially if you need to cancel them midflight, you'll start to see operations that never complete and never deallocate, leaking memory. If you wanted to be completely safe, you'd need to create your own dedicated NSThread and keep the run loop going forever.
In the real world, it's much easier to do what you're doing and just run the connection on the main thread. Managing the connection consumes very little CPU and in most cases won't interfere with your UI, so there's very little to gain by running the connection completely in the background. The main thread's run loop is always running and you don't need to mess with it.
It is possible, however, to run an NSURLConnection entirely in the background using the dedicated-thread method described above. For an example, check out JXHTTP, in particular the classes JXOperation and JXURLConnectionOperation.
In GCD, is there a way to tell if the current queue is concurrent or not?
I'm currently attempting to perform a delayed save on some managed object contexts, but I need to make sure that the queue the code is currently executing on is thread-safe (i.e., a serial queue).
If you actually have to determine whether the queue passed in to you is serial or concurrent, you've almost certainly designed things incorrectly. Typically, an API will hide an internal queue as an implementation detail (in your case, for your shared object contexts) and then enqueue operations against that internal queue in order to achieve thread safety. When your API takes a block and a queue as parameters, the assumption is that the passed-in block can safely be scheduled (async) against the passed-in queue (when, say, an operation completes), and the rest of the code is factored accordingly.
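A minimal sketch of that pattern; the ContextStore class, its saveWithCompletion:callbackQueue: method, and the queue label are all hypothetical names for illustration:
#import <Foundation/Foundation.h>

@interface ContextStore : NSObject
- (void)saveWithCompletion:(void (^)(void))completion callbackQueue:(dispatch_queue_t)callbackQueue;
@end

@implementation ContextStore {
    dispatch_queue_t _internalQueue;   // private serial queue, an implementation detail
}

- (instancetype)init {
    if ((self = [super init])) {
        _internalQueue = dispatch_queue_create("com.example.context-store", DISPATCH_QUEUE_SERIAL);
    }
    return self;
}

- (void)saveWithCompletion:(void (^)(void))completion callbackQueue:(dispatch_queue_t)callbackQueue {
    dispatch_async(_internalQueue, ^{
        // ... perform the (delayed) save here, serialized by the internal queue ...
        dispatch_async(callbackQueue, ^{
            if (completion) completion();
        });
    });
}

@end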
Yes, assuming you're doing the work in an NSOperation subclass:
[myOperation isConcurrent] //or self, if you're actually in the NSOperation
If you need to ensure some operations always execute one at a time, you can create a dedicated operation queue and set its maximum concurrent operation count to 1.
NSOperationQueue * synchronousQueue = [[NSOperationQueue alloc] init];
[synchronousQueue setMaxConcurrentOperationCount:1];
GCD takes some planning ahead. The only other way I can think of is to observe the value isExecuting (or similar) on your NSOperation objects. Check out this reference on that. This solution would be more involved, so I hope the other one works for you.
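A rough sketch of the KVO approach: registering the observer and handling the change notification (remember to remove the observer when you're done):
// Register for changes to the operation's isExecuting property.
[myOperation addObserver:self
              forKeyPath:@"isExecuting"
                 options:NSKeyValueObservingOptionNew
                 context:NULL];

// Handle the change notification somewhere in the observer class:
- (void)observeValueForKeyPath:(NSString *)keyPath
                      ofObject:(id)object
                        change:(NSDictionary *)change
                       context:(void *)context
{
    if ([keyPath isEqualToString:@"isExecuting"]) {
        BOOL nowExecuting = [change[NSKeyValueChangeNewKey] boolValue];
        NSLog(@"Operation %@ executing: %d", object, (int)nowExecuting);
    }
}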