I need to create a loop whose iterations are executed on one thread, one after another, serially.
I have tried adding every operation to a queue in a loop, using dispatch_sync and my custom serial queue myQueue:
dispatch_queue_t myQueue = dispatch_queue_create("samplequeue", NULL);
void (^myBlock)() = ^{
    // a few seconds long operation
};
for (int i = 0; i < 10; ++i) {
    dispatch_sync(myQueue, myBlock);
}
But it doesn't work.
I have also tried dispatch_apply, but it doesn't work either.
I also tried adding the operations to my queue without a loop:
dispatch_sync(myQueue, myBlock);
dispatch_sync(myQueue, myBlock);
dispatch_sync(myQueue, myBlock);
But nothing works... So, why can't I do it?
I need this for memory economy. Every operation uses some memory and, on completion, saves its result, so the next operation can reuse that memory.
When I run the operations manually (tapping a button on the screen each time the previous operation finishes), my app uses only a little memory, but when I run them in a loop, they all run together and use a lot of memory.
Can anyone help me with this case? Maybe I should use something like @synchronized, or NSOperation & NSOperationQueue, or NSLock?
I had a much more complicated answer using barriers, but then I realized:
dispatch_queue_t myQueue = dispatch_queue_create("samplequeue", NULL);
void (^myBlock)() = ^{
    for (int i = 0; i < 10; ++i) {
        // a few seconds long operation
    }
};
dispatch_async(myQueue, myBlock);
This is apparently your real problem:
I need it for memory economy. Every operation takes some memory and after completion saves the result. So, the next operation can reuse this memory. When I run them manually (tapping button on the screen every time when previous operation is finished) my app takes a little bit of memory, but when I do it with loop, they run all together and take a lot of memory.
The problem you describe here sounds like an autorelease pool problem. Each operation allocates some objects and autoreleases them. By default, the autorelease pool is drained (and the objects can be deallocated) at the “end” of the run loop (before the run loop looks for the next event to dispatch). So if you do a lot of operations during a single pass through the run loop, then each operation will allocate and autorelease objects, and none of those objects will be deallocated until all the operations have finished.
You can explicitly create and drain an autorelease pool around each operation, like this:
for (int i = 0; i < 10; ++i) {
    @autoreleasepool {
        // a few seconds long operation
    }
}
You attempted to use dispatch_sync, but a queue doesn't necessarily run a block inside a new autorelease pool. In fact, dispatch_sync tries to run the block immediately on the calling thread when possible. That's what's happening in your case. (A queue is not a thread! Only the “main” queue cares what thread it uses; other queues will run their blocks on any thread.)
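You can see this for yourself with a minimal sketch (the queue label is arbitrary):
dispatch_queue_t q = dispatch_queue_create("samplequeue", NULL);
NSLog(@"caller thread: %@", [NSThread currentThread]);
dispatch_sync(q, ^{
    // dispatch_sync is allowed to run this block directly on the calling
    // thread, so this usually prints the same thread as the line above.
    NSLog(@"block thread: %@", [NSThread currentThread]);
});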
If the operation is really a few seconds long, then you should definitely run it on a background thread, not the main thread. You run a block on a background thread by using dispatch_async. If you want to do something after all the operations complete, queue one last block to do the extra something:
dispatch_queue_t myQueue = dispatch_queue_create("samplequeue", NULL);
for (int i = 0; i < 10; ++i) {
    dispatch_async(myQueue, ^{
        @autoreleasepool {
            // a few seconds long operation
        }
    });
}
dispatch_async(myQueue, ^{
    // code to run after all long operations complete
});
dispatch_queue_release(myQueue);
// Execution continues here on the calling thread IMMEDIATELY, while the
// operations run on a background thread.
It is too late to answer this question, but recently I faced exactly the same problem, and I created a category (NSArray+TaskLoop) on NSArray to perform iteration serially as well as in parallel.
You can download it from here:
https://github.com/SunilSpaceo/DemoTaskLoop
To perform the iteration serially, you should use
[array enumerateTaskSequentially:^(.... ];
Put your iteration in the block and call
completion(nil) when you are done with that iteration.
Do not forget to call the completion block; otherwise it will not go on to the next iteration.
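A sketch of what that might look like in use; the block's parameters here are hypothetical, so check the repository for the actual signature:
// Hypothetical signature -- the real one is defined by the NSArray+TaskLoop category.
[array enumerateTaskSequentially:^(id object, void (^completion)(NSError *error)) {
    // a few seconds long operation on `object`
    completion(nil); // must be called, or the next iteration never starts
}];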
Why does the following GCD code not work? All subthreads pause at __ulock_wait, but there is no deadlock.
dispatch_queue_t queue = dispatch_queue_create("test_gcd_queue", DISPATCH_QUEUE_CONCURRENT);
for (int i = 0; i < 10000; i++)
{
    dispatch_async(queue, ^{
        dispatch_sync(queue, ^{
            NSLog(@"---- gcd: %d", i);
        });
    });
    //NSLog(@"---------- async over: %d", i); // With this line enabled, it works.
}
NSLog(@"-------------------- cycle over");
This can't work because the inner dispatch_sync() uses the same queue it runs on. Its block must wait until the last item in the queue has executed. Since the current code is itself in the queue, this is a deadlock: the dispatch_sync() waits for the termination of its own surrounding block.
On a concurrent queue you may see the same effect if you start more tasks than there are threads available to the queue. Each loop iteration needs two threads. If at some point during execution all threads are blocked by an asynchronous task at the start of dispatch_sync(), no synchronous task has the chance to start, and thus no asynchronous task has the chance to finish.
The loop in your code very quickly creates a huge number of asynchronous tasks. They clog up the queue because of the startup overhead of every task, so only a few synchronous tasks get the chance to start and let their asynchronous tasks finish.
If you insert a small delay (say, 1 ms) into the outer loop, this clogging should be mitigated or even removed.
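Another way to avoid the pile-up is to give the inner dispatch_sync its own queue, so a task never waits on the queue it is running on. A minimal sketch (the queue labels are illustrative):
dispatch_queue_t outer = dispatch_queue_create("test_gcd_queue", DISPATCH_QUEUE_CONCURRENT);
dispatch_queue_t inner = dispatch_queue_create("test_gcd_inner", NULL); // serial
for (int i = 0; i < 10000; i++)
{
    dispatch_async(outer, ^{
        dispatch_sync(inner, ^{
            // The sync target is a different queue, so this block can never
            // end up waiting on its own surrounding block.
            NSLog(@"---- gcd: %d", i);
        });
    });
}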
The reason for this question is the reactions to another question.
I realized the understanding of the problem was not fully there, nor was the reason for asking it in the first place, so I am trying to boil the other question down to its core.
First, a little preface and some history: I know NSOperation(Queue) existed before GCD, and that it was implemented using threads before dispatch queues existed.
The next thing you need to understand is that, by default (meaning no "waiting" methods being used on operations or operation queues, just a standard addOperation:), an NSOperation's main method is executed asynchronously on the underlying queue of the NSOperationQueue (e.g. via dispatch_async()).
To conclude my preface, I'm questioning the purpose of setting NSOperationQueue.mainQueue.maxConcurrentOperationCount to 1 in this day and age, now that the underlyingQueue is actually the main GCD serial queue (i.e. the return value of dispatch_get_main_queue()).
If NSOperationQueue.mainQueue already executes its operations' main methods serially, why worry about maxConcurrentOperationCount at all?
To see the issue of it being set to 1, please see the example in the referenced question.
It's set to 1 because there's no reason to set it to anything else, and it's probably slightly better to keep it set to 1 for at least three reasons I can think of.
Reason 1
Because NSOperationQueue.mainQueue's underlyingQueue is dispatch_get_main_queue(), which is serial, NSOperationQueue.mainQueue is effectively serial (it could never run more than a single block at a time, even if its maxConcurrentOperationCount were greater than 1).
We can check this by creating our own NSOperationQueue, putting a serial queue in its underlyingQueue target chain, and setting its maxConcurrentOperationCount to a large number.
Create a new project in Xcode using the macOS > Cocoa App template with language Objective-C. Replace the AppDelegate implementation with this:
@implementation AppDelegate {
    dispatch_queue_t concurrentQueue;
    dispatch_queue_t serialQueue;
    NSOperationQueue *operationQueue;
}

- (void)applicationDidFinishLaunching:(NSNotification *)aNotification {
    concurrentQueue = dispatch_queue_create("q", DISPATCH_QUEUE_CONCURRENT);
    serialQueue = dispatch_queue_create("q2", nil);
    operationQueue = [[NSOperationQueue alloc] init];

    // concurrent queue targeting serial queue
    //dispatch_set_target_queue(concurrentQueue, serialQueue);
    //operationQueue.underlyingQueue = concurrentQueue;

    // serial queue targeting concurrent queue
    dispatch_set_target_queue(serialQueue, concurrentQueue);
    operationQueue.underlyingQueue = serialQueue;

    operationQueue.maxConcurrentOperationCount = 100;

    for (int i = 0; i < 100; ++i) {
        NSOperation *operation = [NSBlockOperation blockOperationWithBlock:^{
            NSLog(@"operation %d starting", i);
            sleep(3);
            NSLog(@"operation %d ending", i);
        }];
        [operationQueue addOperation:operation];
    }
}

@end
If you run this, you'll see that operation 1 doesn't start until operation 0 has ended, even though I set operationQueue.maxConcurrentOperationCount to 100. This happens because there is a serial queue in the target chain of operationQueue.underlyingQueue. Thus operationQueue is effectively serial, even though its maxConcurrentOperationCount is not 1.
You can play with the code to try changing the structure of the target chain. You'll find that if there is a serial queue anywhere in that chain, only one operation runs at a time.
But if you set operationQueue.underlyingQueue = concurrentQueue, and do not set concurrentQueue's target to serialQueue, then you'll see that 64 operations run simultaneously. For operationQueue to run operations concurrently, the entire target chain starting with its underlyingQueue must be concurrent.
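For contrast, a minimal variation on the code above (same ivars, no serial queue anywhere in the target chain) that does run operations concurrently:
// Target the underlying queue at the concurrent queue only.
operationQueue.underlyingQueue = concurrentQueue;
operationQueue.maxConcurrentOperationCount = 100;
// Now many operations start at once, bounded by GCD's thread pool
// rather than by a serial queue in the target chain.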
Since the main queue is always serial, NSOperationQueue.mainQueue is effectively always serial.
In fact, if you set NSOperationQueue.mainQueue.maxConcurrentOperationCount to anything but 1, it has no effect. If you print NSOperationQueue.mainQueue.maxConcurrentOperationCount after trying to change it, you'll find that it's still 1. I think it would be even better if the attempt to change it raised an assertion. Silently ignoring attempts to change it is more likely to lead to confusion.
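You can verify this with a couple of lines:
NSOperationQueue.mainQueue.maxConcurrentOperationCount = 100; // silently ignored
NSLog(@"%ld", (long)NSOperationQueue.mainQueue.maxConcurrentOperationCount); // prints 1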
Reason 2
NSOperationQueue submits up to maxConcurrentOperationCount blocks to its underlyingQueue simultaneously. Since the mainQueue.underlyingQueue is serial, only one of those blocks can run at a time. Once those blocks are submitted, it may be too late to use the -[NSOperation cancel] message to cancel the corresponding operations. I'm not sure; this is an implementation detail that I haven't fully explored. Anyway, if it is too late, that is unfortunate as it may lead to a waste of time and battery power.
Reason 3
As mentioned in reason 2, NSOperationQueue submits up to maxConcurrentOperationCount blocks to its underlyingQueue simultaneously. Since mainQueue.underlyingQueue is serial, only one of those blocks can execute at a time. The other blocks, and any other resources the dispatch_queue_t uses to track them, must sit around idle, waiting for their turn to run. This is a waste of resources. Not a big waste, but a waste nonetheless. If mainQueue.maxConcurrentOperationCount is set to 1, it will only submit a single block to its underlyingQueue at a time, thus preventing GCD from allocating resources uselessly.
Today I tried the following code:
- (void)suspendTest {
    dispatch_queue_attr_t attr = dispatch_queue_attr_make_with_qos_class(DISPATCH_QUEUE_CONCURRENT, QOS_CLASS_BACKGROUND, 0);
    dispatch_queue_t suspendableQueue = dispatch_queue_create("test", attr);
    for (int i = 0; i <= 10000; i++) {
        dispatch_async(suspendableQueue, ^{
            NSLog(@"%d", i);
        });
        if (i == 5000) {
            dispatch_suspend(suspendableQueue);
        }
    }
    dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(6 * NSEC_PER_SEC)), dispatch_get_main_queue(), ^{
        NSLog(@"Show must go on!");
        dispatch_resume(suspendableQueue);
    });
}
The code starts 10001 tasks, but it should suspend the queue from running new tasks halfway through, then resume 6 seconds later. And this code works as expected: 5000 tasks execute, then the queue stops, and after 6 seconds it resumes.
But if I use a serial queue instead of a concurrent queue, the behaviour is not clear to me:
dispatch_queue_attr_t attr = dispatch_queue_attr_make_with_qos_class(DISPATCH_QUEUE_SERIAL, QOS_CLASS_BACKGROUND, 0);
In this case a random number of tasks manages to execute before the suspension, but often this number is close to zero (the suspension happens before any tasks run).
The question is: why does suspending work differently for serial and concurrent queues, and how do I suspend a serial queue properly?
As per its name, the serial queue performs the tasks in series, i.e., only starting the next one after the previous one has completed. The priority class is background, so the queue may not even have started the first task by the time the loop reaches the 5000th iteration and suspends the queue.
From the documentation of dispatch_suspend:
The suspension occurs after completion of any blocks running at the time of the call.
i.e., nowhere does it promise that asynchronously dispatched tasks on the queue will finish, only that any currently running task (block) will not be suspended part-way through. On a serial queue at most one task can be "currently running", whereas on a concurrent queue there is no specified upper limit. Edit: and according to your test with a million tasks, it seems the concurrent queue maintains the conceptual abstraction that it is "completely concurrent", and thus considers all of them "currently running" even if they actually aren't.
To suspend it after the 5000th task, you could trigger this from the 5000th task itself. (Then you probably also want to start the resume timer at the moment of suspension; otherwise it is theoretically possible that the queue will never resume, if the resume fires before the suspension happens.)
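A minimal sketch of that idea, reusing suspendableQueue from the question (the 6-second resume is kept from the original code):
for (int i = 0; i <= 10000; i++) {
    dispatch_async(suspendableQueue, ^{
        NSLog(@"%d", i);
        if (i == 5000) {
            // Suspend from inside the 5000th task itself, so exactly
            // tasks 0...5000 run before the queue pauses.
            dispatch_suspend(suspendableQueue);
            // Start the resume timer only now, so the resume always
            // comes after the suspension.
            dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(6 * NSEC_PER_SEC)),
                           dispatch_get_main_queue(), ^{
                dispatch_resume(suspendableQueue);
            });
        }
    });
}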
I think the problem is that you are confusing suspend with barrier. suspend stops the queue dead now. barrier stops when everything in the queue before the barrier has executed. So if you put a barrier after the 5000th task, 5000 tasks will execute before we pause at the barrier on the serial queue.
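If barrier semantics are what you want, one way to realize the "pause at the barrier" idea on the custom queue from the question is to combine a barrier with a suspension (the suspend/resume pairing here is my addition, not part of the original code):
// Submit the first 5001 tasks, then:
dispatch_barrier_async(suspendableQueue, ^{
    // This runs only after everything submitted before it has finished.
    dispatch_suspend(suspendableQueue); // pause until a later dispatch_resume
});
// ...submit the remaining tasks; they wait behind the barrier.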
I need to unzip 10 files in the documents directory; for that I use dispatch_async like this:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
    // unzip 5 files
});
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
    // unzip another 5 files
});
My doubt is: will it do the unzipping concurrently?
If so, while the first 5 files are being unzipped, will the other 5 files be unzipped at the same time?
How can I do it efficiently?
Any help would be appreciated.
It would be better to unzip all the files in one block, because the work is asynchronous anyway; trying to run other background processes while a background operation is already in progress can cause problems.
DISPATCH_QUEUE_PRIORITY_HIGH: items dispatched to the queue will run at high priority, i.e. the queue will be scheduled for execution before any default-priority or low-priority queue.
If you want to run a single independent queued operation and you're not concerned with other concurrent operations, you can use the global concurrent queue:
dispatch_queue_t globalConcurrentQueue =
    dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
It runs asynchronously on a background thread. This is done because parsing data may be a time-consuming task that could block the main thread, which would stop all animations and make the application unresponsive.
Calling dispatch_get_global_queue returns a concurrent background queue (i.e. a queue capable of running more than one queue item at once on background threads) that is managed by the OS. The concurrency of the operations really depends on the number of cores the device has.
Each block you pass to dispatch_async is a queue item, so the code inside the block will run linearly on a background thread when that block is dequeued and run. If, for example, you are unzipping with a for loop like this:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
    for (int i = 0; i < 5; i++) {
        // Unzip here
    }
});
Then the task's running time will be 5 × the single-file unzip time. Batching the files into two sets of 5 could potentially halve the total unzip time.
If you want all 10 files unzipped with max concurrency (ie as much concurrency as the system will allow), then you'd be better off dispatching 10 blocks to the global_queue, as such:
for (int i = 0; i < 10; i++) {
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
        // Unzip here
    });
}
Because global queues are concurrent queues, your second dispatch_async will happen concurrently with the first one. If you do this, you must ensure that:
The unzip class/instance is thread safe;
All model updates must be synchronized; and
The peak memory usage resulting from simultaneous zip operations isn't too high.
If the above conditions aren't satisfied (i.e. if you want to run these zip tasks consecutively, not concurrently), you can create a serial queue for unzipping (with dispatch_queue_create). That way, any unzipping tasks dispatched to that queue will be performed serially on a background queue, preventing them from running concurrently.
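A minimal sketch of that serial-queue approach (the queue label "com.example.unzip" is arbitrary):
dispatch_queue_t unzipQueue = dispatch_queue_create("com.example.unzip", DISPATCH_QUEUE_SERIAL);
for (int i = 0; i < 10; i++) {
    dispatch_async(unzipQueue, ^{
        // Unzip file i here. Tasks on this queue run one at a time,
        // in FIFO order, on a background thread.
    });
}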
I have several threads and once they are all finished working, I need to call a myMergeBlock method exactly once per action. I can't use dispatch_once because I want to be able to call myMergeBlock at a later time.
Some pseudocode looks like this, but it is not yet thread-safe:
BOOL worker1Finished, worker2Finished, worker3Finished;

void (^mergeBlock)(void) = ^{
    if (worker1Finished && worker2Finished && worker3Finished)
        dispatch_async(queue, myMergeBlock); // Must dispatch this only once
};

void (^worker1)(void) = ^{
    ...
    worker1Finished = YES;
    mergeBlock();
};

void (^worker2)(void) = ^{
    ...
    worker2Finished = YES;
    mergeBlock();
};

void (^worker3)(void) = ^{
    ...
    worker3Finished = YES;
    mergeBlock();
};
Also, based on the way the workers are called, I do not call them directly, but instead pass them into a function as arguments.
You want to use dispatch groups. First you create a group, schedule the three workers in the group, then add a notification block to the group.
It should look something like this:
//create dispatch group
dispatch_group_t myWorkGroup = dispatch_group_create();
//get one of the global concurrent queues
dispatch_queue_t myQueue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
//submit your work blocks
dispatch_group_async(myWorkGroup, myQueue, worker1);
dispatch_group_async(myWorkGroup, myQueue, worker2);
dispatch_group_async(myWorkGroup, myQueue, worker3);
//set the mergeBlock to be submitted when all the blocks in the group are completed
dispatch_group_notify(myWorkGroup, myQueue, mergeBlock);
//release the group as you no longer need it
dispatch_release(myWorkGroup);
You could hang on to the group and reuse it later if you prefer. Be sure to schedule the work before the notification; if you schedule the notification first, it will be dispatched immediately.
I haven't tested this code but I do use dispatch_groups in my projects.
This sounds very messy and low-level. Have you looked at operation queues, dispatch groups, and semaphores, as discussed in the Concurrency Programming Guide? I think they may offer simpler solutions to your problem.
If you're targeting Lion or iOS 5 and up, you can use barrier blocks as long as the blocks are dispatched on a non-global, concurrent queue. For example:
dispatch_queue_t customConcurrentQueue = dispatch_queue_create("customQueue", DISPATCH_QUEUE_CONCURRENT);
dispatch_async(customConcurrentQueue, worker1);
dispatch_async(customConcurrentQueue, worker2);
dispatch_async(customConcurrentQueue, worker3);
dispatch_barrier_async(customConcurrentQueue, mergeBlock);
//Some time later, after you're sure all of the blocks have executed.
dispatch_queue_release(customConcurrentQueue);
A barrier block executes after all previously submitted blocks have finished executing, and any blocks submitted after the barrier block will be forced to wait until the barrier block has finished. Again, for reasons which should be obvious, you can't use barrier blocks on the global queues. You must create your own concurrent queue.