I need to unzip 10 files in the Documents directory. For that, I use dispatch_async like this:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
    // unzip first 5 files
});
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
    // unzip another 5 files
});
My doubt is: will it do the unzipping concurrently?
That is, while the first 5 files are being unzipped, will the other 5 files be unzipped at the same time?
How can I do this efficiently?
Any help would be appreciated.
It would be better to unzip all the files in a single asynchronous block. If you start another background operation while one is already running, it can cause problems.
DISPATCH_QUEUE_PRIORITY_HIGH: items dispatched to the queue run at high priority, i.e., the queue is scheduled for execution before any default-priority or low-priority queue.
If you want to run a single independent queued operation and you're not concerned with other concurrent operations, you can use the global concurrent queue:
dispatch_queue_t globalConcurrentQueue =
    dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
It runs asynchronously on a background thread. This matters because parsing data may be a time-consuming task that could block the main thread, which would stop all animations and make the application unresponsive.
Further reading: Grand Central Dispatch, dispatch_async, asynchronous operations, the Apple documentation on the UI and the main thread, global queues, and the dispatch_sync/dispatch_async process.
Calling dispatch_get_global_queue returns a concurrent background queue (i.e., a queue capable of running more than one queue item at once on background threads) that is managed by the OS. The actual concurrency of the operations depends on the number of cores the device has.
Each block you pass to dispatch_async is a single queue item, so the code inside the block runs sequentially on a background thread when that block is dequeued. If, for example, you are unzipping with a for loop, like this:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
    for (int i = 0; i < 5; i++) {
        // Unzip here
    }
});
then the task's running time will be 5 × the time to unzip one file. Batching the files into two sets of 5 could potentially halve the total unzip time.
If you want all 10 files unzipped with maximum concurrency (i.e., as much concurrency as the system will allow), you'd be better off dispatching 10 blocks to the global queue, like this:
for (int i = 0; i < 10; i++) {
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
        // Unzip file i here
    });
}
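As a side note (not from the original answer), GCD's dispatch_apply is built for exactly this fan-out pattern: it submits N iterations to a queue, lets them run concurrently, and blocks the calling thread until all of them have finished.

```objc
// Sketch: unzip 10 files with as much concurrency as the system allows.
// dispatch_apply returns only after every iteration has completed.
dispatch_apply(10, dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^(size_t i) {
    // Unzip file i here
});
```

Because dispatch_apply is synchronous, call it from a background queue if you don't want to block the main thread while the files unzip.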
Because global queues are concurrent queues, your second dispatch_async will happen concurrently with the first one. If you do this, you must ensure that:
The unzip class/instance is thread safe;
All model updates must be synchronized; and
The peak memory usage resulting from simultaneous zip operations isn't too high.
If the above conditions aren't satisfied (i.e. if you want to run these zip tasks consecutively, not concurrently), you can create a serial queue for unzipping (with dispatch_queue_create). That way, any unzipping tasks dispatched to that queue will be performed serially on a background queue, preventing them from running concurrently.
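A minimal sketch of that serial-queue approach; the queue label and the zipPaths array are illustrative placeholders, not names from the question:

```objc
// A private serial queue: blocks run one at a time, in FIFO order,
// on a background thread managed by GCD.
dispatch_queue_t unzipQueue = dispatch_queue_create("com.example.unzip", DISPATCH_QUEUE_SERIAL);
for (NSString *path in zipPaths) {
    dispatch_async(unzipQueue, ^{
        // Unzip the file at `path` here; the next file starts only
        // after this block returns.
    });
}
```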
Related
Why does the following GCD code not work? All subthreads pause at __ulock_wait, but there is no deadlock.
dispatch_queue_t queue = dispatch_queue_create("test_gcd_queue", DISPATCH_QUEUE_CONCURRENT);
for (int i = 0; i < 10000; i++)
{
    dispatch_async(queue, ^{
        dispatch_sync(queue, ^{
            NSLog(@"---- gcd: %d", i);
        });
    });
    //NSLog(@"---------- async over: %d", i); //With this, it works OK.
}
NSLog(@"-------------------- cycle over");
This can't work because the inner dispatch_sync() uses the same queue it runs on. Its block must wait until the items ahead of it in the queue have executed. Since the current code is itself an item on that queue, this is a deadlock: dispatch_sync() waits for the termination of its own surrounding block.
On a concurrent queue you can get the same effect if you start more tasks than there are threads serving the queue. Each loop iteration needs two threads. If at some point during execution all threads are occupied by asynchronous tasks blocked at the start of dispatch_sync(), no synchronous task gets a chance to start, and thus no asynchronous task gets a chance to finish.
The loop in your code very quickly creates a huge number of asynchronous tasks. They clog up the queue because of the startup overhead of each task, so only a few synchronous tasks get the chance to start and let their asynchronous tasks finish.
If you insert a small delay (say, 1 ms) into the outer loop, this clogging should be mitigated or even eliminated.
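One hedged sketch of a fix: target the inner dispatch_sync at a different (serial) queue than the one the block is running on, so the synchronous block never waits on its own queue. The queue labels here are illustrative:

```objc
dispatch_queue_t workQueue = dispatch_queue_create("work", DISPATCH_QUEUE_CONCURRENT);
dispatch_queue_t logQueue  = dispatch_queue_create("log", DISPATCH_QUEUE_SERIAL);
for (int i = 0; i < 10000; i++) {
    dispatch_async(workQueue, ^{
        // Synchronize on a *different* queue: no self-deadlock, and each
        // worker thread is blocked only while one NSLog runs.
        dispatch_sync(logQueue, ^{
            NSLog(@"---- gcd: %d", i);
        });
    });
}
```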
Hello iOS experts, just to clear up a concept: I have a bit of confusion about UI updates and the main thread. Apple requires that all UI-related work be carried out on the main thread. So, to test:
Case 1: I dispatch a task to a global concurrent queue asynchronously. After some processing, I update my UI directly from the concurrent queue (a background thread). This appears to work fine using the code below.
dispatch_queue_t myGlobalQueue;
myGlobalQueue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_async(myGlobalQueue, ^{
    // Some processing
    // Update UI
});
Case 2: Then I tried the Apple-recommended way: dispatch a block to the global concurrent queue asynchronously and, after some processing, update the UI on the main thread using the code below:
dispatch_queue_t myGlobalQueue;
myGlobalQueue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_async(myGlobalQueue, ^{
    // Some processing
    dispatch_async(dispatch_get_main_queue(), ^{
        // UI update
    });
});
In both cases I get the same result. Here are my questions:
Questions: 1. In Case 1, are we still on the main thread rather than a background thread? If so, why does the Apple documentation say:
Concurrent queues (also known as a type of global dispatch queue) execute one or more tasks concurrently, but tasks are still started in
the order in which they were added to the queue. The currently
executing tasks run on distinct threads that are managed by the
dispatch queue. The exact number of tasks executing at any given
point is variable and depends on system conditions.
Now, if we are on the main thread, this contradicts the bold part of the Apple documentation.
2. In Case 1, if we are on a background thread, why does Apple require the main thread for UI updates, given that we can apparently update the UI from a background thread too?
Kindly read my question fully and point out if I am doing something wrong. Your help and time would be greatly appreciated.
To 1)
This simply says that tasks from the same queue can run on distinct threads. It does not say that a task cannot run on a specific thread. (But I really would not expect a task to run on the main thread.)
To 2)
Apple does not say that updating the UI from a different thread will fail in every case, only that it can fail. You shouldn't do it: one day it will fail.
You should read this:
https://en.wikipedia.org/wiki/Necessity_and_sufficiency
Today I tried the following code:
- (void)suspendTest {
    dispatch_queue_attr_t attr = dispatch_queue_attr_make_with_qos_class(DISPATCH_QUEUE_CONCURRENT, QOS_CLASS_BACKGROUND, 0);
    dispatch_queue_t suspendableQueue = dispatch_queue_create("test", attr);
    for (int i = 0; i <= 10000; i++) {
        dispatch_async(suspendableQueue, ^{
            NSLog(@"%d", i);
        });
        if (i == 5000) {
            dispatch_suspend(suspendableQueue);
        }
    }
    dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(6 * NSEC_PER_SEC)), dispatch_get_main_queue(), ^{
        NSLog(@"Show must go on!");
        dispatch_resume(suspendableQueue);
    });
}
The code starts 10001 tasks, but it should suspend the queue from running new tasks halfway through, then resume after 6 seconds. And this code works as expected: 5000 tasks execute, then the queue stops, and after 6 seconds it resumes.
But if I use a serial queue instead of a concurrent queue, the behaviour is not clear to me.
dispatch_queue_attr_t attr = dispatch_queue_attr_make_with_qos_class(DISPATCH_QUEUE_SERIAL, QOS_CLASS_BACKGROUND, 0);
In this case a random number of tasks manage to execute before the suspension, but often this number is close to zero (the suspension happens before any task runs).
The question is: why does suspension work differently for serial and concurrent queues, and how do I suspend a serial queue properly?
As its name suggests, a serial queue performs its tasks in series, i.e., it only starts the next one after the previous one has completed. The priority class is background, so the queue may not even have started the first task by the time the loop reaches the 5000th iteration and suspends the queue.
From the documentation of dispatch_suspend:
The suspension occurs after completion of any blocks running at the time of the call.
i.e., nowhere does it promise that asynchronously dispatched tasks on the queue will finish, only that any currently running block will not be suspended partway through. On a serial queue at most one task can be "currently running", whereas on a concurrent queue there is no specified upper limit. Edit: and according to your test with a million tasks, the concurrent queue appears to maintain the conceptual abstraction that it is "completely concurrent", and thus considers all of them "currently running" even if they actually aren't.
To suspend it after the 5000th task, you could trigger the suspension from the 5000th task itself. (Then you probably also want to start the resume timer at the moment the queue is actually suspended; otherwise it is theoretically possible the queue will never resume, if the resume fires before the suspension happens.)
I think the problem is that you are confusing suspend with barrier. suspend stops the queue dead, right now; a barrier runs only after everything enqueued before it has executed. So if you put a barrier after the 5000th task, 5000 tasks will execute before we pause at the barrier on the serial queue.
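A sketch of triggering the suspension from inside the 5000th task itself, as suggested above, with the resume timer started only once the suspension has actually been requested (attr is the serial-queue attribute from the question):

```objc
dispatch_queue_t queue = dispatch_queue_create("test", attr);
for (int i = 0; i <= 10000; i++) {
    dispatch_async(queue, ^{
        NSLog(@"%d", i);
        if (i == 5000) {
            // The suspension takes effect after this block completes, so on a
            // serial queue exactly 5001 tasks (0...5000) run before the pause.
            dispatch_suspend(queue);
            dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(6 * NSEC_PER_SEC)),
                           dispatch_get_main_queue(), ^{
                dispatch_resume(queue);
            });
        }
    });
}
```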
I need to create a loop whose iterations are executed on one thread, one after another, serially.
I tried adding each operation in a loop with dispatch_sync and my custom serial queue myQueue:
dispatch_queue_t myQueue = dispatch_queue_create("samplequeue", NULL);
void (^myBlock)(void) = ^{
    // a few seconds long operation
};
for (int i = 0; i < 10; ++i) {
    dispatch_sync(myQueue, myBlock);
}
But it doesn't work.
I also tried dispatch_apply, but it doesn't work either.
I also tried adding the operations to my queue without a loop:
dispatch_sync(myQueue, myBlock);
dispatch_sync(myQueue, myBlock);
dispatch_sync(myQueue, myBlock);
But nothing works... So why can't I do it?
I need this for memory economy. Every operation takes some memory and saves its result on completion, so the next operation can reuse that memory.
When I run the operations manually (tapping a button each time the previous operation finishes), my app uses only a little memory; but when I run them in a loop, they all run together and use a lot of memory.
Can anyone help me with this case? Maybe I should use something like @synchronized, or NSOperation and NSOperationQueue, or NSLock?
I had a much more complicated answer using barriers, but then I realized.
dispatch_queue_t myQueue = dispatch_queue_create("samplequeue", NULL);
void (^myBlock)(void) = ^{
    for (int i = 0; i < 10; ++i) {
        // a few seconds long operation
    }
};
dispatch_async(myQueue, myBlock);
This is apparently your real problem:
I need it for memory economy. Every operation takes some memory and after completion saves the result. So, the next operation can reuse this memory. When I run them manually (tapping button on the screen every time when previous operation is finished) my app takes a little bit of memory, but when I do it with loop, they run all together and take a lot of memory.
The problem you describe here sounds like an autorelease pool problem. Each operation allocates some objects and autoreleases them. By default, the autorelease pool is drained (and the objects can be deallocated) at the “end” of the run loop (before the run loop looks for the next event to dispatch). So if you do a lot of operations during a single pass through the run loop, then each operation will allocate and autorelease objects, and none of those objects will be deallocated until all the operations have finished.
You can explicitly drain the pool on each iteration like this:
for (int i = 0; i < 10; ++i) {
    @autoreleasepool {
        // a few seconds long operation
    }
}
You attempted to use dispatch_sync, but a queue doesn't necessarily run a block inside a new autorelease pool. In fact, dispatch_sync tries to run the block immediately on the calling thread when possible, which is what's happening in your case. (A queue is not a thread! Only the main queue cares what thread it uses; other queues will run their blocks on any thread.)
If the operation is really a few seconds long, then you should definitely run it on a background thread, not the main thread. You run a block on a background thread by using dispatch_async. If you want to do something after all the operations complete, queue one last block to do the extra something:
dispatch_queue_t myQueue = dispatch_queue_create("samplequeue", NULL);
for (int i = 0; i < 10; ++i) {
    dispatch_async(myQueue, ^{
        @autoreleasepool {
            // a few seconds long operation
        }
    });
}
dispatch_async(myQueue, ^{
    // code to run after all long operations complete
});
dispatch_release(myQueue); // not needed under ARC with a deployment target of iOS 6+
// Execution continues here on the calling thread IMMEDIATELY, while the
// operations run on a background thread.
It is too late to answer this question, but recently I faced exactly the same problem, and I created a category (NSArray+TaskLoop) on NSArray to perform iteration serially as well as in parallel.
You can download it from here:
https://github.com/SunilSpaceo/DemoTaskLoop
To perform iteration serially you should use
[array enumerateTaskSequentially:^(.... ];
Put your iteration in the block and call
completion(nil) when you are done with that iteration.
Do not forget to call the completion block; otherwise it will not go on to the next iteration.
I have an image loader class which, provided with an NSURL, loads an image from the web and executes a completion block. The code is actually quite simple:
- (void)downloadImageWithURL:(NSString *)URLString completion:(BELoadImageCompletionBlock)completion
{
    dispatch_async(_queue, ^{
    // dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
        UIImage *image = nil;
        NSURL *URL = [NSURL URLWithString:URLString];
        if (URL) {
            image = [UIImage imageWithData:[NSData dataWithContentsOfURL:URL]];
        }
        dispatch_async(dispatch_get_main_queue(), ^{
            completion(image, URLString);
        });
    });
}
When I replace
dispatch_async(_queue, ^{
with the commented-out
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
Images load much faster, which is quite logical (before, images were loaded one at a time; now a bunch of them load simultaneously). My issue is that I have perhaps 50 images, and I call downloadImageWithURL:completion: for all of them. When I use the global queue instead of _queue, my app eventually crashes, and I see 85+ threads. Can the problem be that calling dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0) 50 times in a row makes GCD create too many threads? I thought GCD handled all the threading and kept the number of threads from growing huge, but if that's not the case, is there any way I can influence the number of threads?
The kernel creates additional threads when workunits on existing GCD worker threads for a global concurrent queue are blocked in the kernel for a significant amount of time (as long as there is further work pending on the global queue).
This is necessary so that the application can continue to make progress overall (e.g. the execution of one of the pending blocks may be what allows the blocked threads to become unblocked).
If the reason for worker threads to be blocked in the kernel is IO (e.g., the +[NSData dataWithContentsOfURL:] in this example), the best solution is to replace those calls with an API that performs the IO asynchronously without blocking, e.g., NSURLConnection for networking or dispatch I/O for filesystem IO.
Alternatively you can limit the number of concurrent blocking operations manually, e.g. by using a counting dispatch semaphore.
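A sketch of that semaphore approach, assuming an array URLStrings of image URL strings (the name and the limit of 4 are arbitrary choices, not from the question):

```objc
// Allow at most 4 downloads in flight; the loop blocks before dispatching
// the 5th block until one of the first 4 signals completion, so GCD never
// sees more than 4 simultaneously blocked worker threads from this code.
dispatch_semaphore_t sema = dispatch_semaphore_create(4);
dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
for (NSString *URLString in URLStrings) {
    dispatch_semaphore_wait(sema, DISPATCH_TIME_FOREVER);
    dispatch_async(queue, ^{
        NSData *data = [NSData dataWithContentsOfURL:[NSURL URLWithString:URLString]];
        // ... decode and deliver the image on the main queue ...
        dispatch_semaphore_signal(sema);
    });
}
```

Note that the wait happens on the dispatching thread, so run this loop off the main thread.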
The WWDC 2012 GCD session went over this topic in some detail.
Well from http://developer.apple.com/library/ios/#documentation/General/Conceptual/ConcurrencyProgrammingGuide/OperationQueues/OperationQueues.html
Concurrent queues (also known as a type of global dispatch queue) execute one or more tasks concurrently, but tasks are still started in
the order in which they were added to the queue. The currently
executing tasks run on distinct threads that are managed by the
dispatch queue. The exact number of tasks executing at any given point
is variable and depends on system conditions.
and
Serial queues (also known as private dispatch queues) execute one task
at a time in the order in which they are added to the queue. The
currently executing task runs on a distinct thread (which can vary
from task to task) that is managed by the dispatch queue.
By dispatching all your blocks to the high priority concurrent dispatch queue with
[NSData dataWithContentsOfURL:URL]
which is a synchronous blocking network operation, it looks like the default GCD behaviour will be to spawn a load of threads to execute your blocks ASAP.
You should be dispatching to DISPATCH_QUEUE_PRIORITY_BACKGROUND instead. These tasks are in no way "high priority". Any image processing should be done when there is spare time and nothing is happening on the main thread.
If you want more control over how many of these things happen at once, I recommend looking into NSOperation. You can wrap your blocks in operations using NSBlockOperation and then submit those operations to your own NSOperationQueue. An NSOperationQueue has a maxConcurrentOperationCount property, and as an added benefit, operations can also be cancelled after scheduling if needed.
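A sketch of that NSOperation approach, again assuming an array URLStrings of image URL strings and an arbitrary cap of 4:

```objc
NSOperationQueue *downloadQueue = [[NSOperationQueue alloc] init];
downloadQueue.maxConcurrentOperationCount = 4; // at most 4 downloads at once
for (NSString *URLString in URLStrings) {
    [downloadQueue addOperation:[NSBlockOperation blockOperationWithBlock:^{
        NSData *data = [NSData dataWithContentsOfURL:[NSURL URLWithString:URLString]];
        // ... create the UIImage and dispatch the completion block
        //     back to the main queue ...
    }]];
}
```

Unlike the raw GCD version, the queue itself enforces the concurrency limit, and you can later call [downloadQueue cancelAllOperations] if the images are no longer needed.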
You can use NSOperationQueue, which is supported by NSURLConnection.
And it has the following instance method:
- (void)setMaxConcurrentOperationCount:(NSInteger)count