What's the difference between operation and block in NSOperation?

When I use the following code:
let queue = OperationQueue()
let operation = BlockOperation()
for i in 0..<10 {
    operation.addExecutionBlock({
        print("===\(Thread.current)===\(i)")
    })
}
queue.addOperation(operation)
I create an asynchronous queue to execute these operations.
And if I use code like the following:
let queue = OperationQueue()
for i in 0..<10 {
    queue.addOperation {
        print("===\(Thread.current)===\(i)")
    }
}
When I make the queue concurrent, they produce the same result.
But when I set
queue.maxConcurrentOperationCount = 1
to make the queue serial, they are different!
The first one still prints unordered results, just like the concurrent queue, but the second one prints the results in order.
So what's the difference between them? When I want to use NSOperation, which one should I use? Any help much appreciated!

The documentation on OperationQueue tells you about concurrency and the order of execution of the blocks you submit. You should read the OperationQueue reference. Here is a relevant bit:
An operation queue executes its queued operation objects based on their priority and readiness. If all of the queued operation objects have the same priority and are ready to execute when they are put in the queue—that is, their isReady method returns true—they are executed in the order in which they were submitted to the queue. However, you should never rely on queue semantics to ensure a specific execution order of operation objects. Changes in the readiness of an operation can change the resulting execution order. If you need operations to execute in a specific order, use operation-level dependencies as defined by the Operation class.

Please check the official documentation for the addExecutionBlock(_:) function. It just adds the specified block to the receiver's list of blocks to perform in the context of the executing operation.
If you would like the blocks to execute serially, one at a time and in order, here is a code sample:
let queue = OperationQueue()
queue.maxConcurrentOperationCount = 1
for i in 0..<10 {
    let operation = BlockOperation {
        print("===\(Thread.current)===\(i)")
    }
    queue.addOperation(operation)
}
When I want to use NSOperation, which one should I use?
Use the second one.

Just a guess.
In this case:
let queue = OperationQueue()
let operation = BlockOperation()
for i in 0..<10 {
    operation.addExecutionBlock({
        print("===\(Thread.current)===\(i)")
    })
}
queue.addOperation(operation)
Inside the BlockOperation, the execution blocks run asynchronously (concurrently with one another), while the BlockOperation itself is a single synchronous unit of work. So the queue effectively behaves synchronously with just that one item.
That makes queue.addOperation(operation) pointless here; you could call operation.start() instead, because the queue adds nothing.
Use addExecutionBlock() when you want the blocks grouped into a single operation, and addOperation() when you want the queue to schedule each piece of work as its own operation so it can be serialized or run independently.
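If that guess is right, the behaviour in the question can be reproduced with a small sketch like this (Swift 3 naming; the output order is illustrative, not guaranteed):
let queue = OperationQueue()
queue.maxConcurrentOperationCount = 1   // serial at the operation level only

// One operation, many execution blocks: the blocks may still run concurrently
// with each other inside that single operation, so the output is unordered.
let blockOp = BlockOperation()
for i in 0..<10 {
    blockOp.addExecutionBlock { print("single operation, block \(i)") }
}
queue.addOperation(blockOp)

// Ten separate operations: the serial queue runs them one after another,
// so the output comes out in order.
for i in 0..<10 {
    queue.addOperation { print("separate operation \(i)") }
}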

Difference: BlockOperation lets you call addDependency(_:), whereas an OperationQueue only lets you add operations to it. The following code and console output illustrate this:
let opQueue = OperationQueue()
opQueue.addOperation {
    print("operation 1")
}
let operation2 = BlockOperation {
    print("operation 2")
}
let operation3 = BlockOperation {
    print("operation 3")
}
operation2.addDependency(operation3)
opQueue.addOperation(operation2)
opQueue.addOperation(operation3)
Console output:
operation 1
operation 3
operation 2

Related

How does a serial queue/private dispatch queue know when a task is complete?

(Perhaps answered by How does a serial dispatch queue guarantee resource protection? but I don't understand how)
Question
How does GCD know when an asynchronous task (e.g. a network task) is finished? Should I be using dispatch_retain and dispatch_release for this purpose? Update: I cannot call either of these functions with ARC... What should I do?
Details
I am interacting with a 3rd party library that does a lot of network access. I have created a wrapper via a small class that basically offers all the methods I need from the 3rd party class, but wraps the calls in dispatch_async(serialQueue) { () -> Void in (where serialQueue is a member of my wrapper class).
I am trying to ensure that each call to the underlying library finishes before the next begins (somehow that's not already implemented in the library).
The serialisation of work on a serial dispatch queue is at the unit of work that is directly submitted to the queue. Once execution reaches the end of the submitted closure (or it returns) then the next unit of work on the queue can be executed.
Importantly, any other asynchronous tasks that may have been started by the closure may still be running (or may not have even started running yet), but they are not considered.
For example, for the following code:
dispatch_async(serialQueue) {
    print("Start")
    dispatch_async(backgroundQueue) {
        functionThatTakes10Seconds()
        print("10 seconds later")
    }
    print("Done 1st")
}
dispatch_async(serialQueue) {
    print("Start")
    dispatch_async(backgroundQueue) {
        functionThatTakes10Seconds()
        print("10 seconds later")
    }
    print("Done 2nd")
}
The output would be something like:
Start
Done 1st
Start
Done 2nd
10 seconds later
10 seconds later
Note that the first 10 second task hasn't completed before the second serial task is dispatched. Now, compare:
dispatch_async(serialQueue) {
    print("Start")
    dispatch_sync(backgroundQueue) {
        functionThatTakes10Seconds()
        print("10 seconds later")
    }
    print("Done 1st")
}
dispatch_async(serialQueue) {
    print("Start")
    dispatch_sync(backgroundQueue) {
        functionThatTakes10Seconds()
        print("10 seconds later")
    }
    print("Done 2nd")
}
The output would be something like:
Start
10 seconds later
Done 1st
Start
10 seconds later
Done 2nd
Note that this time because the 10 second task was dispatched synchronously the serial queue was blocked and the second task didn't start until the first had completed.
In your case, there is a very good chance that the operations you are wrapping are going to dispatch asynchronous tasks themselves (since that is the nature of network operations), so a serial dispatch queue on its own is not enough.
You can use a DispatchGroup to block your serial dispatch queue.
dispatch_async(serialQueue) {
    let dg = dispatch_group_create()
    dispatch_group_enter(dg)
    print("Start")
    dispatch_async(backgroundQueue) {
        functionThatTakes10Seconds()
        print("10 seconds later")
        dispatch_group_leave(dg)
    }
    dispatch_group_wait(dg, DISPATCH_TIME_FOREVER)
    print("Done")
}
This will output
Start
10 seconds later
Done
The dispatch_group_wait call (dg.wait() in Swift 3) blocks the serial queue until the number of leave calls matches the number of enter calls. If you use this technique, you need to be careful to ensure that every possible completion path of your wrapped operation calls leave. There are also variations of wait that take a timeout parameter.
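For completeness, a minimal sketch of the timeout variant, using the Swift 3 API (functionThatTakes10Seconds is the same placeholder as above):
let dg = DispatchGroup()
dg.enter()
DispatchQueue.global().async {
    functionThatTakes10Seconds()
    dg.leave()
}
// Wait at most 15 seconds instead of blocking indefinitely.
switch dg.wait(timeout: .now() + 15) {
case .success:
    print("Task finished in time")
case .timedOut:
    print("Gave up waiting")
}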
As mentioned before, DispatchGroup is a very good mechanism for that.
You can use it for synchronous tasks:
let group = DispatchGroup()
DispatchQueue.global().async(group: group) {
    syncTask()
}
group.notify(queue: .main) {
    // done
}
It is better to use notify than wait, because wait blocks the current thread; it is only safe to call wait on non-main threads.
You can also use it to perform async tasks:
let group = DispatchGroup()
group.enter()
asyncTask {
    group.leave()
}
group.notify(queue: .main) {
    // done
}
Or you can even perform any number of parallel tasks of any synchronicity:
let group = DispatchGroup()
group.enter()
asyncTask1 {
    group.leave()
}
group.enter() // another way of handling a task with a synchronous API
DispatchQueue.global().async {
    syncTask1()
    group.leave()
}
group.enter()
asyncTask2 {
    group.leave()
}
DispatchQueue.global().async(group: group) {
    syncTask2()
}
group.notify(queue: .main) {
    // runs when all tasks are done
}
It is important to note a few things.
Always check that your asynchronous functions actually call their completion callbacks; third-party libraries sometimes forget to, and the same can happen in your own code when self is captured weakly and nobody checks whether the body ran while self was nil. If you don't check this, you can hang and never get the notify callback.
Remember to perform all of the needed group.enter() and async(group:) calls before you call group.notify. Otherwise you can get a race condition and the notify block can fire before you have actually finished your tasks.
BAD EXAMPLE
let group = DispatchGroup()
DispatchQueue.global().async {
    group.enter()
    syncTask1()
    group.leave()
}
group.notify(queue: .main) {
    // Can run before syncTask1 completes - DON'T DO THIS
}
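For contrast, the corrected version of the same example enters the group before dispatching (or uses async(group:)), so notify cannot fire early:
let group = DispatchGroup()
group.enter()
DispatchQueue.global().async {
    syncTask1()
    group.leave()
}
group.notify(queue: .main) {
    // runs only after syncTask1 has finished
}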
The answer to the question in your question's body:
I am trying to ensure that each call to the underlying library finishes before the next begins
A serial queue does guarantee that the tasks are progressed in the order you add them to the queue.
I do not really understand the question in the title though:
How does a serial queue ... know when a task is complete?
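For the wrapper class described in the question, a hedged sketch that combines the private serial queue with a dispatch group might look like this (Swift 3 API; SerializedWrapper and the work parameter are illustrative names, not part of any library):
import Foundation

final class SerializedWrapper {
    private let serialQueue = DispatchQueue(label: "wrapper.serial")

    // `work` receives a `done` closure and must call it exactly once, on every
    // completion path, when the underlying asynchronous library call has finished.
    func perform(work: @escaping (() -> Void) -> Void) {
        serialQueue.async {
            let group = DispatchGroup()
            group.enter()
            work({
                group.leave()
            })
            group.wait()   // blocks this private serial queue, not the caller
        }
    }
}
Each call submitted through perform then holds the serial queue until its completion fires, which gives the "one library call at a time" behaviour the question asks for.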

How to wait until all NSOperations are finished?

I have the following code:
func testFunc(completion: (Bool) -> Void) {
    let queue = NSOperationQueue()
    queue.maxConcurrentOperationCount = 1
    for i in 1...3 {
        queue.addOperationWithBlock {
            Alamofire.request(.GET, "https://httpbin.org/get").responseJSON { response in
                switch response.result {
                case .Failure:
                    print("error")
                case .Success:
                    print("i = \(i)")
                }
            }
        }
        //queue.addOperationAfterLast(operation)
    }
    queue.waitUntilAllOperationsAreFinished()
    print("finished")
}
and output is:
finished
i = 3
i = 1
i = 2
but I expect the following:
i = 3
i = 1
i = 2
finished
So why doesn't queue.waitUntilAllOperationsAreFinished() wait?
Each operation you've added to the queue finishes immediately, because Alamofire.request simply returns without waiting for the response data.
Furthermore, there is a possibility of deadlock here. Since the responseJSON block is executed on the main queue by default, blocking the main thread by calling waitUntilAllOperationsAreFinished will prevent the completion blocks from ever executing.
First, to fix the deadlock issue, you can tell Alamofire to execute the completion block on a different queue. Second, you can use a dispatch_group_t to group the asynchronous HTTP requests and keep the main thread waiting until all the requests in the group finish executing:
let queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0)
let group = dispatch_group_create()
for i in 1...3 {
    dispatch_group_enter(group)
    Alamofire.request(.GET, "https://httpbin.org/get").responseJSON(queue: queue, options: .AllowFragments) { response in
        print(i)
        dispatch_async(dispatch_get_main_queue()) {
            // The main thread is still blocked. You can update the UI here, but it will take effect only after all HTTP requests have finished.
        }
        dispatch_group_leave(group)
    }
}
dispatch_group_wait(group, DISPATCH_TIME_FOREVER)
print("finished")
I would suggest using KVO to observe when the queue has finished all its tasks, instead of blocking the current thread until all the operations have finished. Or you can use dependencies. Take a look at this SO question.
To check whether all operations have finished, we could use KVO to observe the number of operations in the queue. Unfortunately, both operations and operationCount are currently deprecated.
So it's safer to use the following approach, based on dependencies.
To check that a specific set of operations has finished, use dependencies:
Create a final operation called "finishOperation", then make it depend on all of the other required operations. This way, "finishOperation" will execute only after the operations it depends on have finished. Check this answer for a code sample.
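For illustration, a minimal sketch of that pattern (op1 and op2 are placeholders for your real operations):
let queue = OperationQueue()

let op1 = BlockOperation { print("operation 1") }
let op2 = BlockOperation { print("operation 2") }

// finishOperation runs only after every operation it depends on has finished.
let finishOperation = BlockOperation { print("all required operations finished") }
finishOperation.addDependency(op1)
finishOperation.addDependency(op2)

queue.addOperations([op1, op2, finishOperation], waitUntilFinished: false)
Note that, as with the dispatch-group approach above, this only helps if the operations themselves do not finish before their network work does; for Alamofire requests you would still signal from the response handlers or wrap them in asynchronous operations.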

Alamofire: execute a function after multiple tasks have finished

I want to run multiple Alamofire tasks and then execute a function once they are all over.
I have tried GCD and NSOperationQueue, and both failed.
Please help me solve this.
Here is the pseudo code:
let imgDatas1 = UIImageJPEGRepresentation(UIImage(named: "aar")!, 0.1)
let strUrl1 = "http://www.baidu.com"

let group = dispatch_group_create()
let queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)
//let queue = dispatch_get_main_queue()
dispatch_group_async(group, queue) {
    print("thread 1.1")
    Alamofire.upload(.POST, strUrl1, data: imgDatas1!).responseString(completionHandler: { (dd) in
        NSThread.sleepForTimeInterval(3.0)
        print("thread 1.3")
    })
    print("thread 1.2")
}
dispatch_group_async(group, queue) {
    print("thread 2.1")
    Alamofire.upload(.POST, strUrl1, data: imgDatas1!).responseString(completionHandler: { (dd) in
        NSThread.sleepForTimeInterval(2.0)
        print("thread 2.3")
    })
    print("thread 2.2")
}
dispatch_group_notify(group, queue) {
    print("over")
}

let operationQueue = NSOperationQueue()
let operation1 = NSBlockOperation {
    Alamofire.upload(.POST, strUrl1, data: imgDatas1!).responseString(completionHandler: { (dd) in
        NSThread.sleepForTimeInterval(2.0)
        print("xian 1.2")
    })
    print("xian 1.1")
}
let operation2 = NSBlockOperation {
    Alamofire.upload(.POST, strUrl1, data: imgDatas1!).responseString(completionHandler: { (dd) in
        NSThread.sleepForTimeInterval(3.0)
        print("xian 2.2")
    })
    print("xian 2.1")
}
let operation3 = NSBlockOperation {
    print("xian 3")
}
operation2.addDependency(operation1)
operation3.addDependency(operation2)
operationQueue.addOperation(operation1)
operationQueue.addOperation(operation2)
operationQueue.addOperation(operation3)
The problem with both of these approaches is that you're synchronizing the issuing of the requests, but not the actual response of the request. In your GCD example, you're exiting your dispatch_group_async as soon as the request is issued, although the response has not yet been received. Likewise in your operation queue example, you're completing your block operation as soon as the request is issued, but these operations aren't waiting for the requests to finish.
The simple, if inelegant approach is to just call one in the completion handler of the prior one. If you put them in separate functions, it avoids the unseemly nesting of completion handlers.
If you're looking for a more elegant solution, you solve this with operation queues by wrapping this in an asynchronous NSOperation/Operation subclass, and only trigger the isFinished KVO when the request is done. See Asynchronous Versus Synchronous Operations section of the Operation Reference, or the slightly more detailed (though dated) discussion in Operation Queues: Concurrent Versus Non-concurrent Operations in the Concurrency Programming Guide.
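To make that concrete, here is a hedged sketch of such an asynchronous Operation subclass (Swift 3 naming; AsyncRequestOperation and its work parameter are illustrative, not part of any framework):
import Foundation

final class AsyncRequestOperation: Operation {
    private let work: (() -> Void) -> Void   // receives a `finish` callback
    private var _executing = false
    private var _finished = false

    // `work` must call the callback it is given exactly once, when the wrapped
    // request's completion handler has run.
    init(work: @escaping (() -> Void) -> Void) {
        self.work = work
        super.init()
    }

    override var isAsynchronous: Bool { return true }
    override var isExecuting: Bool { return _executing }
    override var isFinished: Bool { return _finished }

    override func start() {
        if isCancelled {
            willChangeValue(forKey: "isFinished")
            _finished = true
            didChangeValue(forKey: "isFinished")
            return
        }
        willChangeValue(forKey: "isExecuting")
        _executing = true
        didChangeValue(forKey: "isExecuting")
        // The operation stays "executing" until the callback below is invoked.
        work({ [weak self] in self?.finish() })
    }

    private func finish() {
        willChangeValue(forKey: "isExecuting")
        willChangeValue(forKey: "isFinished")
        _executing = false
        _finished = true
        didChangeValue(forKey: "isExecuting")
        didChangeValue(forKey: "isFinished")
    }
}
Used with dependencies or a queue whose maxConcurrentOperationCount is 1, each wrapped request then genuinely finishes before the next one starts; the request itself is issued inside work, and its completion handler calls the supplied callback.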
Another elegant approach is to use promises, something like PromiseKit. It strikes me as a fairly dramatic solution (introduce an entirely new asynchronous pattern), but it does solve this sort of issue well.

NSOperationQueue addOperations waitUntilFinished

Hi, I am building an app using Swift. I need to process notifications in a specific order, so I am trying to use addOperations(_:waitUntilFinished:).
Here is what I did:
let oldify = NSOperation()
oldify.completionBlock = {
    println("oldify")
}
let appendify = NSOperation()
appendify.completionBlock = {
    println("appendify")
}
let nettoyify = NSOperation()
nettoyify.completionBlock = {
    println("nettoyify")
}
NSOperationQueue.mainQueue().maxConcurrentOperationCount = 1
NSOperationQueue.mainQueue().addOperations([oldify, appendify, nettoyify], waitUntilFinished: true)
With this code none of the operations is being executed. When I try this instead:
NSOperationQueue.mainQueue().maxConcurrentOperationCount = 1
NSOperationQueue.mainQueue().addOperation(oldify)
NSOperationQueue.mainQueue().addOperation(appendify)
NSOperationQueue.mainQueue().addOperation(nettoyify)
The operations get executed but not in the right order.
Does anyone know what I'm doing wrong? I am getting confident in Swift but am completely new to NSOperation.
A couple of issues:
You are examining behavior of the completion block handlers. As the completionBlock documentation says:
The exact execution context for your completion block is not guaranteed but is typically a secondary thread. Therefore, you should not use this block to do any work that requires a very specific execution context.
The queue will manage the operations themselves, but not their completion blocks (short of making sure that the operation finishes before its completionBlock is started). So, bottom line, do not make any assumptions about (a) when completion blocks are run, (b) the relation of one operation's completionBlock to other operations or their completionBlock blocks, etc., nor (c) which thread they are performed on.
Operations are generally executed in the order in which they were added to the queue. If you add an array of operations, though, the documentation makes no formal assurances that they are enqueued in the order they appear in that array. You might, therefore, want to add the operations one at a time.
Having said that, the documentation goes on to warn us:
An operation queue executes its queued operation objects based on their priority and readiness. If all of the queued operation objects have the same priority and are ready to execute when they are put in the queue—that is, their isReady method returns YES—they are executed in the order in which they were submitted to the queue. However, you should never rely on queue semantics to ensure a specific execution order of operation objects. Changes in the readiness of an operation can change the resulting execution order. If you need operations to execute in a specific order, use operation-level dependencies as defined by the NSOperation class.
To establish explicit dependencies, you might do something like:
let oldify = NSBlockOperation {
    NSLog("oldify")
}
oldify.completionBlock = {
    NSLog("oldify completion")
}

let appendify = NSBlockOperation {
    NSLog("appendify")
}
appendify.completionBlock = {
    NSLog("appendify completion")
}
appendify.addDependency(oldify)

let nettoyify = NSBlockOperation {
    NSLog("nettoyify")
}
nettoyify.completionBlock = {
    NSLog("nettoyify completion")
}
nettoyify.addDependency(appendify)

let queue = NSOperationQueue()
queue.addOperations([oldify, appendify, nettoyify], waitUntilFinished: false)
BTW, as you'll see above, you should not add operations to the main queue in conjunction with waitUntilFinished. Feel free to add them to a different queue, but don't dispatch from a serial queue back to itself with the waitUntilFinished option.
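If you really do need waitUntilFinished: true semantics, one option (sketched here with the same Swift 2-era API as above, replacing the final addOperations line) is to do the waiting from a background queue rather than the main queue:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)) {
    queue.addOperations([oldify, appendify, nettoyify], waitUntilFinished: true)
    NSLog("all three operations finished")
}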

Why is my NSOperationQueue running on main thread?

I have set up an operation queue:
func initialiseOperationQueue() {
    self.operationQueue = NSOperationQueue()
    self.operationQueue.name = "General queue"
    self.operationQueue.maxConcurrentOperationCount = 2
}
Then I added an operation to my queue
let op = HPSyncDataOperation(type: HPSyncDataOperationType.OnlineRecord, delegate: self, date: self.latestLastUpdateAt)
self.operationQueue.addOperation(op)
It basically uses the Parse framework to asynchronously download some record data. Its implementation looks like the following:
PFCloud.callFunctionInBackground("recordPosts", withParameters: param, block: { (objects: AnyObject!, error: NSError!) -> Void in
    if error == nil {
        let dataObjects = objects as [PFObject]
        //TROUBLE HERE:
        for object in dataObjects {
            object.pinWithName("Received Posts")
        }
        //abcdefg
    }
})
But during execution, when object.pinWithName("Received Posts") runs, it produces this warning:
Warning: A long-running operation is being executed on the main thread.
Shouldn't an operation run on a separate thread? And shouldn't pinWithName, regardless of whether it is sync or async, run on a separate thread as well?
Please help! Why is this?
Your operation will be run on a background thread, but all it's doing is starting another asynchronous process (PFCloud.callFunctionInBackground) which will start another thread. When that other process is complete it calls the completion block on the main thread.
So, in this case your operation and queue are doing basically nothing, and really you should be taking the result of the call to PFCloud.callFunctionInBackground (i.e. objects) and processing that on a background thread if it's likely to be time consuming.
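One hedged way to follow that advice, keeping the question's Swift 2-era Parse calls unchanged, is to hop to a background queue before doing the pinning:
PFCloud.callFunctionInBackground("recordPosts", withParameters: param, block: { (objects: AnyObject!, error: NSError!) -> Void in
    if error == nil {
        let dataObjects = objects as [PFObject]
        // Do the potentially slow pinning off the main thread.
        dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)) {
            for object in dataObjects {
                object.pinWithName("Received Posts")
            }
        }
    }
})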
