Why does this GCD code not work? - ios

Why does the following GCD code not work? All subthreads pause at __ulock_wait, but there is no deadlock.
dispatch_queue_t queue = dispatch_queue_create("test_gcd_queue", DISPATCH_QUEUE_CONCURRENT);
for (int i = 0; i < 10000; i++)
{
    dispatch_async(queue, ^{
        dispatch_sync(queue, ^{
            NSLog(@"---- gcd: %d", i);
        });
    });
    //NSLog(@"---------- async over: %d", i); //With this line, it works.
}
NSLog(@"-------------------- cycle over");

This can't work because the inner dispatch_sync() uses the same queue it runs on. Its block must wait until the last item in the queue has executed. Since the calling code is itself running on that queue, this is a deadlock: the dispatch_sync() waits for the termination of its own surrounding block.
On a concurrent queue you may get the same effect if you start more tasks than the queue has threads. Each loop iteration needs two threads. If at some point during execution all threads are blocked by an asynchronous task at the start of dispatch_sync(), no synchronous task has the chance to start, and thus no asynchronous task has the chance to finish.
The loop in your code very quickly creates a huge number of asynchronous tasks. They clog up the queue because of the startup overhead of every task, so only a few synchronous tasks get the chance to start and let their asynchronous counterparts finish.
If you insert a small delay (say 1 ms) into the outer loop, this clogging should be mitigated or even eliminated.
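A rough sketch of that mitigation, keeping the code from the question and only adding a short sleep in the outer loop (the 1 ms figure is just an example, not a tuned value):
dispatch_queue_t queue = dispatch_queue_create("test_gcd_queue", DISPATCH_QUEUE_CONCURRENT);
for (int i = 0; i < 10000; i++)
{
    dispatch_async(queue, ^{
        dispatch_sync(queue, ^{
            NSLog(@"---- gcd: %d", i);
        });
    });
    [NSThread sleepForTimeInterval:0.001]; // ~1 ms pause so the inner sync blocks can drain
}
NSLog(@"-------------------- cycle over");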

Related

Suspending serial queue

Today I tried the following code:
- (void)suspendTest {
    dispatch_queue_attr_t attr = dispatch_queue_attr_make_with_qos_class(DISPATCH_QUEUE_CONCURRENT, QOS_CLASS_BACKGROUND, 0);
    dispatch_queue_t suspendableQueue = dispatch_queue_create("test", attr);
    for (int i = 0; i <= 10000; i++) {
        dispatch_async(suspendableQueue, ^{
            NSLog(@"%d", i);
        });
        if (i == 5000) {
            dispatch_suspend(suspendableQueue);
        }
    }
    dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(6 * NSEC_PER_SEC)), dispatch_get_main_queue(), ^{
        NSLog(@"Show must go on!");
        dispatch_resume(suspendableQueue);
    });
}
The code starts 10001 tasks, but it should suspend the queue from running new tasks halfway through and resume it after 6 seconds. This code works as expected: 5000 tasks execute, then the queue stops, and after 6 seconds it resumes.
But if I use a serial queue instead of a concurrent queue, the behaviour is not clear to me.
dispatch_queue_attr_t attr = dispatch_queue_attr_make_with_qos_class(DISPATCH_QUEUE_SERIAL, QOS_CLASS_BACKGROUND, 0);
In this case a random number of tasks manages to execute before the suspension, but often this number is close to zero (the suspension happens before any tasks run).
The question is: why does suspending work differently for serial and concurrent queues, and how do I suspend a serial queue properly?
As its name suggests, a serial queue performs its tasks in series, i.e., it only starts the next one after the previous one has completed. The priority class is background, so the queue may not even have started the first task by the time the loop reaches its 5000th iteration and suspends the queue.
From the documentation of dispatch_suspend:
The suspension occurs after completion of any blocks running at the time of the call.
i.e., nowhere does it promise that asynchronously dispatched tasks on the queue would finish, only that any currently running task (block) will not be suspended part-way through. On a serial queue at most one task can be "currently running", whereas on a concurrent queue there is no specified upper limit. Edit: And according to your test with a million tasks, it seems the concurrent queue maintains the conceptual abstraction that it is "completely concurrent", and thus considers all of them "currently running" even if they actually aren't.
To suspend it after the 5000th task, you could trigger this from the 5000th task itself. (Then you probably also want to start the resume-timer from the time it is suspended, otherwise it is theoretically possible it will never resume if the resume happened before it was suspended.)
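A minimal sketch of that idea, reusing the suspendableQueue from the question (triggering both the suspend and the resume timer from the 5000th task itself is just one way to arrange it):
for (int i = 0; i <= 10000; i++) {
    dispatch_async(suspendableQueue, ^{
        NSLog(@"%d", i);
        if (i == 5000) {
            // Suspend from inside the 5000th task; the suspension takes effect
            // once this block has completed.
            dispatch_suspend(suspendableQueue);
            dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(6 * NSEC_PER_SEC)), dispatch_get_main_queue(), ^{
                dispatch_resume(suspendableQueue);
            });
        }
    });
}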
I think the problem is that you are confusing suspend with barrier. suspend stops the queue dead now. barrier stops when everything in the queue before the barrier has executed. So if you put a barrier after the 5000th task, 5000 tasks will execute before we pause at the barrier on the serial queue.
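A sketch of that barrier idea, again using the suspendableQueue from the question (my own illustration, not part of the original answer; on a serial queue a barrier block behaves like an ordinary block, which is still enough here because the queue runs its blocks in order anyway):
for (int i = 0; i < 5000; i++) {
    dispatch_async(suspendableQueue, ^{
        NSLog(@"%d", i);
    });
}
dispatch_barrier_async(suspendableQueue, ^{
    // Runs only after the 5000 blocks above have finished.
    dispatch_suspend(suspendableQueue);
});
for (int i = 5000; i <= 10000; i++) {
    dispatch_async(suspendableQueue, ^{
        NSLog(@"%d", i);
    });
}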

Concurrent Dispatch Queue not getting blocked

I am reading an iOS book which says that "dispatch_sync function blocks the concurrent queue to which the block is submitted i.e it makes the queue wait". Based on that concept I created my own example, which is as follows. The following snippet is written in the viewDidLoad method:
dispatch_queue_t concQueue1 = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

void (^secondBlock)(void) = ^(void)
{
    //Second Block
    for (int count = 0; count < 1000; count++)
    {
        if ([NSThread currentThread] == [NSThread mainThread])
        {
            NSLog(@"2 block main Thread");
        }
        else
        {
            NSLog(@"2 block other THREAD");
        }
    }
};

void (^firstBlock)(void) = ^(void)
{
    //First Block
    for (int count = 0; count < 100; count++)
    {
        if ([NSThread currentThread] == [NSThread mainThread])
        {
            NSLog(@"1 block main Thread");
        }
        else
        {
            NSLog(@"1 block other THREAD");
        }
    }
    dispatch_sync(concQueue1, secondBlock);
};

dispatch_async(concQueue1, firstBlock);

//Making the main thread sleep for some time
[NSThread sleepForTimeInterval:0.1];

dispatch_async(concQueue1, ^(void) {
    //Third Block
    for (int count = 0; count < 1000; count++)
    {
        if ([NSThread currentThread] == [NSThread mainThread])
        {
            NSLog(@"3 block main Thread");
        }
        else
        {
            NSLog(@"3 block other THREAD");
        }
    }
});
I am making the main thread sleep for some time so that the dispatch_sync call in the first block gets executed. The output I am getting is below; I am showing only part of it.
GCDExamples[2459:554259] 2 block other THREAD
.
.
.
GCDExamples[2459:554259] 2 block other THREAD
GCDExamples[2459:554256] 3 block other THREAD
GCDExamples[2459:554256] 3 block other THREAD
GCDExamples[2459:554259] 2 block other THREAD //Point first
GCDExamples[2459:554256] 3 block other THREAD
Some points about the output: the "3 block other THREAD" and "2 block other THREAD" lines shown above are the first occurrences of those output lines.
MY QUESTION:
According to that concept, because of the dispatch_sync call, once the second block starts it should make the queue wait rather than allowing the third block to start. But as shown in the output above, "2 block other THREAD" follows the "3 block other THREAD" statement at "//Point first". This shows that dispatch_sync didn't make the queue wait. How is that possible? Please ask me for any other information if needed.
EDIT 1: I am putting the text of the well-known book here to explain my point. The book is "iOS 7 Programming Cookbook". The text follows:
"For any task that doesn’t involve the UI, you can use global concurrent queues in GCD. These allow either synchronous or asynchronous execution. But synchronous execution does not mean your program waits for the code to finish before continuing. It simply means that the concurrent queue will wait until your task has finished before it continues to the next block of code on the queue. When you put a block object on a concurrent queue, your own program always continues right away without waiting for the queue to execute the code. This is because concurrent queues, as their name implies, run their code on threads other than the main thread."
As the bold text says, the concurrent queue should wait UNTIL my task is finished before continuing with the next block. My block printing "2 block other THREAD" should be allowed to finish before "3 block other THREAD" starts, but that is not the case: "2 block other THREAD" is printed again, intermingled with the "3 block other THREAD" statements, when in fact all of my "2 block other THREAD" output should complete and only then should "3 block other THREAD" follow. Comment if more info is required.
"dispatch_sync function blocks the concurrent queue to which the block is submitted i.e it makes the queue wait"
If that's what the book says, throw it away. That's just wrong.
Synchronous vs. asynchronous is about the caller. When some code calls dispatch_sync(), that code can not proceed until the task being dispatched has completed. The queue is not blocked or forced to wait or anything like that.
By contrast, when code calls dispatch_async(), the task is put on the queue and the caller proceeds to its next step. It does not wait for the task that was dispatched to start, let alone finish.
That's a completely separate issue from whether a queue is concurrent or serial. That distinction belongs to the queues and the tasks they run, but doesn't directly affect the caller. A serial queue will only run one task at a time. If other tasks have been queued, they wait and run in strict sequence.
A concurrent queue can allow multiple tasks to run at the same time, depending on available system resources.
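A small sketch of that distinction (my own example, not from the question):
dispatch_queue_t q = dispatch_queue_create("sync_vs_async_demo", DISPATCH_QUEUE_CONCURRENT);
dispatch_sync(q, ^{
    NSLog(@"runs before the caller's next line; the caller waits here");
});
NSLog(@"prints only after the sync block has finished");
dispatch_async(q, ^{
    NSLog(@"may run before or after the caller's next line");
});
NSLog(@"prints right away; the caller does not wait");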
Update in response to edited question with new quote from the book:
For any task that doesn’t involve the UI, you can use global concurrent queues in GCD. These allow either synchronous or asynchronous execution. But synchronous execution does not mean your program waits for the code to finish before continuing. It simply means that the concurrent queue will wait until your task has finished before it continues to the next block of code on the queue. When you put a block object on a concurrent queue, your own program always continues right away without waiting for the queue to execute the code. This is because concurrent queues, as their name implies, run their code on threads other than the main thread.
This continues to be completely wrong. To take it part by part:
But synchronous execution does not mean your program waits for the code to finish before continuing.
That's exactly what "synchronous execution" does mean. When you submit a task synchronously, the submitting thread waits for the code to finish before continuing.
It simply means that the concurrent queue will wait until your task has finished before it continues to the next block of code on the queue.
No. The whole point of concurrent queues is that they don't wait for one task that is running before starting subsequent tasks. That's what "concurrent" means. A concurrent queue can run multiple tasks concurrently, at the same time, simultaneously.
When you put a block object on a concurrent queue, your own program always continues right away without waiting for the queue to execute the code.
No, this is wrong. It completely depends on what function you use to put that block on the queue. If it uses dispatch_sync(), it waits for the queue to execute the block. If it uses dispatch_async(), it does not wait; it continues right away. This is true whether the queue is serial or concurrent.
This is because concurrent queues, as their name implies, run their code on threads other than the main thread.
Any queue, serial or concurrent, other than the main queue, may run the blocks submitted to it on a background thread. And, if you use dispatch_sync() to submit a block to a concurrent queue from the main thread, it's very possible that the block will execute on the main thread. That's because GCD knows that the main thread isn't doing anything else, because it's blocked inside of the dispatch_sync() call, so it might as well run the block there.
In other words, the type of queue does not dictate which thread the block runs on.
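A quick way to see that in practice (a sketch; run it from the main thread, e.g. in viewDidLoad):
dispatch_queue_t q = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_sync(q, ^{
    // Frequently logs 1: GCD is free to run this block on the blocked main thread.
    NSLog(@"on main thread? %d", [NSThread isMainThread]);
});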
The author of this book simply doesn't know what s/he is talking about.
First, to understand GCD you have to understand the difference between synchronous and asynchronous execution.
Synchronous = executes in the order submitted and blocks the thread/queue it is submitted on. This means that:
The block (of code) only executes when it is its turn in the queue.
The block (of code) will block the queue while it executes, and other (synchronous) blocks of code will wait.
Basically the blocks will execute in FIFO order.
Asynchronous = starts executing immediately regardless of the queue/thread and does not block the queue/thread. It executes even if something else (synchronous or asynchronous) is already executing on the queue.
To understand what went wrong, we will work through the code.
Lines 1-19 - Defined secondBlock
Lines 23-43 - Defined firstBlock
Line 47 - dispatch_async() firstBlock (Remember: asynchronous execution)
Line 41 [firstBlock] - dispatch_sync() secondBlock (Remember: synchronous execution)
Line 43 [firstBlock] - firstBlock exits
Line 50 - Thread sleeps for 0.1 seconds
Line 52 - Define and execute thirdBlock (Remember: asynchronous execution).
The thirdBlock was dispatched asynchronously, so it started executing even though secondBlock was still executing on the queue. To achieve queuing of blocks (of code), use dispatch_sync().
Note
These functions operate relative to the current queue. This means that dispatch_sync() will only be synchronous with respect to that queue; on other threads (such as the main thread) it appears to be asynchronous.

Performing UI updates on main thread synchronously from a concurrent queue

As far as I have understood GCD, UI operations should always be performed on the main thread/main queue asynchronously. But the following code also seems to work without any problem. Can someone please explain why?
I am dispatching 2 blocks synchronously inside a dispatch_async call. One block downloads an image and the other displays it in the view.
dispatch_queue_t concurrentQueue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_async(concurrentQueue, ^{
    __block UIImage *image = nil;
    dispatch_sync(concurrentQueue, ^{
        /* Download the image here */
    });
    dispatch_sync(dispatch_get_main_queue(), ^{
        /* Show the image to the user here on the main queue */
    });
});
The queue is important (it has to be the main queue) but whether the gcd calls are synchronous or asynchronous is irrelevant - that just affects how the rest of your code around the gcd calls is timed. Once a block is running on a queue it doesn't matter how it was scheduled.
Synchronous dispatch can simplify your code (since it won't return until the block is executed) but does come with the risk of locking if you end up waiting for things to finish.
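For comparison, a sketch of the more common asynchronous variant of the same code; only the choice of the main queue matters for the UI update, not whether the dispatch is sync or async:
dispatch_queue_t concurrentQueue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_async(concurrentQueue, ^{
    UIImage *image = nil;  // placeholder for the downloaded image
    /* Download the image here */
    dispatch_async(dispatch_get_main_queue(), ^{
        /* Show the image to the user here on the main queue */
    });
});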

Performing loop iterations serially in Objective-C

I need to create a loop whose iterations are executed on one thread, one after another, serially.
I have tried adding every operation to a queue in a loop with dispatch_sync and my custom serial queue myQueue:
dispatch_queue_t myQueue = dispatch_queue_create("samplequeue", NULL);
void (^myBlock)() = ^{
    //a few seconds long operation
};
for (int i = 0; i < 10; ++i) {
    dispatch_sync(myQueue, myBlock);
}
But it doesn't work.
I have also tried dispatch_apply, but it doesn't work either.
I also tried adding operations to my queue without a loop:
dispatch_sync(myQueue, myBlock);
dispatch_sync(myQueue, myBlock);
dispatch_sync(myQueue, myBlock);
But nothing works... So, why can't I do it?
I need it for memory economy. Every operation takes some memory and after completion saves the result. So, the next operation can reuse this memory.
When I run them manually (tapping a button on the screen every time the previous operation has finished) my app takes a little bit of memory, but when I do it with a loop, they all run together and take a lot of memory.
Can anyone help me with this case? Maybe I should use something like @synchronized(), or NSOperation & NSOperationQueue, or NSLock?
I had a much more complicated answer using barriers, but then I realized:
dispatch_queue_t myQueue = dispatch_queue_create("samplequeue", NULL);
void (^myBlock)() = ^{
    for (int i = 0; i < 10; ++i) {
        //a few seconds long operation
    }
};
dispatch_async(myQueue, myBlock);
This is apparently your real problem:
I need it for memory economy. Every operation takes some memory and after completion saves the result. So, the next operation can reuse this memory. When I run them manually (tapping button on the screen every time when previous operation is finished) my app takes a little bit of memory, but when I do it with loop, they run all together and take a lot of memory.
The problem you describe here sounds like an autorelease pool problem. Each operation allocates some objects and autoreleases them. By default, the autorelease pool is drained (and the objects can be deallocated) at the “end” of the run loop (before the run loop looks for the next event to dispatch). So if you do a lot of operations during a single pass through the run loop, then each operation will allocate and autorelease objects, and none of those objects will be deallocated until all the operations have finished.
You can explicitly create and drain an autorelease pool around each iteration like this:
for (int i = 0; i < 10; ++i) {
    @autoreleasepool {
        // a few seconds long operation
    }
}
You attempted to use dispatch_sync, but a queue doesn't necessarily run a block inside a new autorelease pool. In fact, dispatch_sync tries to run the block immediately on the calling thread when possible. That's what's happening in your case. (A queue is not a thread! Only the "main" queue cares what thread it uses; other queues will run their blocks on any thread.)
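A tiny check of that point (my own sketch, not part of the original answer):
dispatch_queue_t q = dispatch_queue_create("inline_demo", NULL);
NSThread *caller = [NSThread currentThread];
dispatch_sync(q, ^{
    // Often logs 1: dispatch_sync may run the block inline on the calling thread.
    NSLog(@"same thread as the caller? %d", [NSThread currentThread] == caller);
});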
If the operation is really a few seconds long, then you should definitely run it on a background thread, not the main thread. You run a block on a background thread by using dispatch_async. If you want to do something after all the operations complete, queue one last block to do the extra something:
dispatch_queue_t myQueue = dispatch_queue_create("samplequeue", NULL);
for (int i = 0; i < 10; ++i) {
    dispatch_async(myQueue, ^{
        @autoreleasepool {
            //a few seconds long operation
        }
    });
}
dispatch_async(myQueue, ^{
    // code to run after all long operations complete
});
dispatch_queue_release(myQueue);
// Execution continues here on calling thread IMMEDIATELY, while operations
// run on a background thread.
It is too late to answer this question, but recently I faced exactly the same problem and I created a category (NSArray+TaskLoop) on NSArray to perform iterations serially as well as in parallel.
You can download it from here:
https://github.com/SunilSpaceo/DemoTaskLoop
To perform the iterations serially you should use
[array enumerateTaskSequentially:^(.... ];
Put your iteration in the block and call completion(nil) when you are done with that iteration.
Do not forget to call the completion block, otherwise it will not move on to the next iteration.

Delay in executing method after dispatch group operations are complete in iOS

I am calling a method after the tasks in my dispatch group have finished executing. However, there is a significant delay in executing the final method even after all the tasks have completed. Can anyone explain any probable reasons?
dispatch_group_t group = dispatch_group_create();
dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_group_async(group, queue, ^{
    //some code
});
dispatch_group_notify(group, queue, ^{
    [self allTasksDone];
});
What I meant was that the method allTasksDone is executed after some delay even when the operation in the async queue has completed.
How does -allTasksDone work? If it's communicating with the user by updating user interface elements, it needs to run in the main thread's context, or else it'll appear that the UI elements in question are "delayed" -- they won't update until the main run loop happens to make them update.
Try this instead:
dispatch_group_notify(group, dispatch_get_main_queue(), ^{
    [self allTasksDone];
});
As it is, you're running -allTasksDone on the default background queue, which doesn't play nice with AppKit or UIKit.
I suggest an alternative approach although you can most certainly accomplish this using dispatch groups.
// Important note: This does not work with global queues, but you can use target queues to direct your custom queue to one of your global queues if you need priorities.
dispatch_queue_t queue = dispatch_queue_create("com.mycompany.myqueue", DISPATCH_QUEUE_CONCURRENT);
dispatch_async(queue,^{
//some code
}
dispatch_barrier_async(queue,
^{
// this executes when all previously dispatched blocks have finished.
[self allTasksDone];
});
