What is the benefit of using dispatch_sync? (iOS)

Is there a difference between these two implementations?
dispatch_queue_t concurrentQueue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_async(concurrentQueue, ^{
    __block UIImage *image = nil;
    dispatch_sync(concurrentQueue, ^{
        /* Download the image here (sync downloading) */
    });
    dispatch_sync(dispatch_get_main_queue(), ^{
        /* Show the image to the user here on the main queue */
    });
});
and
dispatch_queue_t concurrentQueue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_async(concurrentQueue, ^{
    __block UIImage *image = nil;
    /* Download the image here (sync downloading) */
    dispatch_sync(dispatch_get_main_queue(), ^{
        /* Show the image to the user here on the main queue */
    });
});
In the first snippet I download the image inside a dispatch_sync block, and in the second one I download it without that extra block.
I think I should get a deadlock in the first implementation, because Apple says about dispatch_sync: "Calling this function and targeting the current queue results in deadlock."

"I think I should get a deadlock in the first implementation because Apple says: dispatch_sync -> Calling this function and targeting the current queue results in deadlock"
That is true if the queue is a serial queue. However, in this case you are using a global queue, so a deadlock never happens. The only difference is which thread in the global queue's thread pool performs the "Download the image here" work.
First implementation

main queue            --+---------------------------------+----------+---
           async(global)|                                 ^          |
                        v                       sync(main)|          v
global queue thread1    +---+ BLOCKED +-------------------+ BLOCKED  +---
                sync(global)|         ^
                            v         |
global queue thread2        +---------+

Second implementation

main queue            --+---------------------------------+----------+---
           async(global)|                                 ^          |
                        v                       sync(main)|          v
global queue thread1    +---------------------------------+ BLOCKED  +---
But dispatch_sync is not a good idea in this case. See https://developer.apple.com/library/ios/documentation/General/Conceptual/ConcurrencyProgrammingGuide/OperationQueues/OperationQueues.html:
Important: You should never call the dispatch_sync or dispatch_sync_f function from a task that is executing in the same queue that you are planning to pass to the function. This is particularly important for serial queues, which are guaranteed to deadlock, but should also be avoided for concurrent queues.
... even though I can't find any evidence that dispatch_sync "should also be avoided for concurrent queues" in the libdispatch source: http://opensource.apple.com/source/libdispatch/libdispatch-339.92.1/src/queue.c
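To make the serial-queue case concrete, here is a minimal sketch (the queue label is illustrative): dispatching synchronously onto the serial queue you are already running on can never complete.

dispatch_queue_t serialQueue = dispatch_queue_create("com.example.serial", DISPATCH_QUEUE_SERIAL);
dispatch_async(serialQueue, ^{
    NSLog(@"outer block started");
    dispatch_sync(serialQueue, ^{
        // never runs: the serial queue cannot start this block
        // until the outer block (which is waiting right here) returns
        NSLog(@"inner block");
    });
    NSLog(@"never reached");
});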

dispatch_sync to dispatch_get_main_queue() performs the action immediately, because the block is executed on the main queue. Here we do the UI updates on the main queue, while the lazy loading, asynchronous downloads and so on happen in the background.
This is related to GCD; for more details please refer to the Apple Developers forum.
The main benefit of this pattern is that the other operations can still run concurrently.
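For instance, a minimal sketch of the difference between the two ways of dispatching to the main queue, as seen from a background queue (the log messages are illustrative):

dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    // dispatch_sync returns only after the main queue has executed the block,
    // so these two log lines always appear in this order.
    dispatch_sync(dispatch_get_main_queue(), ^{
        NSLog(@"UI updated (sync)");
    });
    NSLog(@"runs after the UI update");

    // dispatch_async returns immediately; the block runs later on the main queue.
    dispatch_async(dispatch_get_main_queue(), ^{
        NSLog(@"UI updated (async)");
    });
    NSLog(@"may run before the async UI update");
});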

Related

Updating UI element without dispatching to the main queue

I have a dispatch_block_t which is passed to another function, and this block will be called when the function finishes the asynchronous task. But the problem is that I don't know on which thread this block will be called.
I want to update my UI in the main thread, hence I want to use
dispatch_async(dispatch_get_main_queue(), ^{...})
to update my UI. But I am afraid that this will cause a deadlock if a situation like this happens:
dispatch_queue_t queue = dispatch_queue_create("my.label", DISPATCH_QUEUE_SERIAL);
dispatch_async(queue, ^{
    dispatch_async(queue, ^{
        // outer block is waiting for this inner block to complete,
        // inner block won't start before outer block finishes
        // => deadlock
    });
    // this will never be reached
});
Is there a way to prevent the deadlock? Like updating the UI element without using the dispatch queue. Is it possible to create a weak reference to self in order to update the UI?
Try running your example with NSLogs and you'll notice that the deadlock doesn't occur. This is because dispatch_async just submits a block to the queue without waiting for it to finish execution (in contrast to dispatch_sync).
So running this code:
dispatch_queue_t queue = dispatch_queue_create("my.label", DISPATCH_QUEUE_SERIAL);
dispatch_async(queue, ^{
    NSLog(@"1");
    dispatch_async(queue, ^{
        NSLog(@"2");
    });
    NSLog(@"3");
});
Will produce the following log:
Testtt[32153:2250572] 1
Testtt[32153:2250572] 3
Testtt[32153:2250572] 2
Moreover, as far as I'm concerned, using dispatch_async(dispatch_get_main_queue(), ^{...}) here is a commonly used technique which ensures that the consumer gets the result on the main thread (i.e. the consumer doesn't have to 'care' about threading).
Why, though, do you use dispatch_block_t to pass a completion block? In my opinion, it's a bit confusing to use something like that on the consumer side; I would pass an anonymous (non-typedef'd) block or create my own typedef for these simple completion blocks.
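A sketch of that typedef idea; the ImageCompletionHandler name and the loadImageWithCompletion: method are made up for illustration and are assumed to live in whatever class produces the result:

typedef void (^ImageCompletionHandler)(UIImage *image, NSError *error);

- (void)loadImageWithCompletion:(ImageCompletionHandler)completion {
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        UIImage *image = nil;  // ... produce the image on a background queue ...
        // Hand the result back on the main queue so the caller can touch UIKit safely.
        dispatch_async(dispatch_get_main_queue(), ^{
            if (completion) {
                completion(image, nil);
            }
        });
    });
}

The caller can then update UIKit directly inside the completion handler without thinking about threads.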

How to parallelize many (100+) tasks without hitting global GCD limit?

The problem:
When lazy-loading a list of 100+ icons in the background, I hit the GCD thread limit (64 threads), which causes my app to freeze up with a semaphore_wait_trap on the main thread. I want to restructure my threading code to prevent this from happening, while still loading the icons asynchronously to prevent UI blocking.
Context:
My app loads a screen with SVG icons on it. The count typically ranges from 10 to 200. Each icon gets drawn from a local SVG image or a remote SVG image (if it has a custom icon), and is then post-processed to get the final image result.
Because this takes some time, and they aren't vital for the user, I want to load and post-process them in the background, so they would pop in over time. For every icon I use the following:
dispatch_queue_t concurrentQueue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_async(concurrentQueue, ^{
    //code to be executed in the background
    SVGKImage *iconImage = [Settings getIconImage:location];
    dispatch_async(dispatch_get_main_queue(), ^{
        //code to be executed on the main thread when background task is finished
        if (iconImage) {
            [iconImgView setImage:iconImage.UIImage];
        }
    });
});
The getIconImage method handles the initial loading of the base SVG, reading it synchronously with [NSInputStream inputStreamWithFileAtPath:path] if it is local, or with [NSURLConnection sendSynchronousRequest:request returningResponse:&response error:&error] into NSData if it should be loaded remotely. This all happens synchronously.
Then there is some post-processing of recoloring the SVG, before it gets returned and put in the UIImageView on the main thread.
Question:
Is there a way to structure my code to allow for parallelized background loading while preventing the freeze caused by spawning too many threads?
Solution EDIT:
_iconOperationQueue = [[NSOperationQueue alloc] init];
_iconOperationQueue.maxConcurrentOperationCount = 8;
// Code will be executed in the background
[_iconOperationQueue addOperationWithBlock:^{
    // I/O code
    SVGKImage *baseIcon = [Settings getIconBaseSVG:location];
    // CPU-only code
    SVGKImage *iconImage = [Settings getIconImage:location withBaseSVG:baseIcon];
    UIImage *svgImage = iconImage.UIImage; // Converting SVGKImage to UIImage is expensive, so don't do this on the main thread
    [[NSOperationQueue mainQueue] addOperationWithBlock:^{
        // Code to be executed on the main thread when background task is finished
        if (svgImage) {
            [iconImgView setImage:svgImage];
        }
    }];
}];
Instead of directly using GCD with a concurrent queue, use an NSOperationQueue. Set its maxConcurrentOperationCount to something reasonable, like 4 or 8.
If you can, you should also separate I/O from pure computation. Use the width-restricted operation queue for the I/O; for the pure computation you can use an unrestricted operation queue or plain GCD.
The reason is that I/O blocks. GCD detects that the system is idle and spins up another worker thread and starts another task from the queue. That blocks in I/O, too, so it does that some more until it hits its limit. Then, the I/O starts completing and the tasks unblock. Now you have oversubscribed the system resources (i.e. CPU) because there are more tasks in flight than cores and suddenly they are actually using CPU instead of being blocked by I/O.
Pure computation tasks don't provoke this problem because GCD sees that the system is actually busy and doesn't dequeue more tasks until earlier ones have completed.
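A hedged sketch of that split, under the assumption that the asker's loading can be broken into a blocking fetch step and a CPU-only rendering step; fetchIconDataForLocation: and renderIconFromData: are hypothetical stand-ins, not existing Settings methods:

NSOperationQueue *ioQueue = [[NSOperationQueue alloc] init];
ioQueue.maxConcurrentOperationCount = 4;                            // keep blocking I/O narrow

NSOperationQueue *computeQueue = [[NSOperationQueue alloc] init];  // CPU work, unrestricted

[ioQueue addOperationWithBlock:^{
    // Hypothetical helper: only the blocking download/read happens here.
    NSData *svgData = [Settings fetchIconDataForLocation:location];
    [computeQueue addOperationWithBlock:^{
        // Hypothetical helper: parsing/recoloring/rasterizing, CPU only.
        UIImage *rendered = [Settings renderIconFromData:svgData];
        [[NSOperationQueue mainQueue] addOperationWithBlock:^{
            if (rendered) {
                [iconImgView setImage:rendered];
            }
        }];
    }];
}];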
You can stay with GCD by using a semaphore, something like this. Run the whole loop in the background, otherwise waiting on the semaphore will stall the UI:
dispatch_semaphore_t throttleSemaphore = dispatch_semaphore_create(8);
dispatch_queue_t concurrentQueue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
for /* Loop through your images */ {
    dispatch_semaphore_wait(throttleSemaphore, DISPATCH_TIME_FOREVER);
    dispatch_async(concurrentQueue, ^{
        //code to be executed in the background
        SVGKImage *iconImage = [Settings getIconImage:location];
        dispatch_async(dispatch_get_main_queue(), ^{
            //code to be executed on the main thread when background task is finished
            if (iconImage) {
                [iconImgView setImage:iconImage.UIImage];
            }
            dispatch_semaphore_signal(throttleSemaphore);
        });
    });
}
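As noted above, the loop itself has to run off the main thread so that dispatch_semaphore_wait never stalls the UI. A minimal sketch of that wrapping (iconLocations is a placeholder for whatever collection drives the loop):

dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    dispatch_semaphore_t throttleSemaphore = dispatch_semaphore_create(8);
    for (NSString *location in iconLocations) {   // iconLocations: hypothetical collection of icons
        dispatch_semaphore_wait(throttleSemaphore, DISPATCH_TIME_FOREVER);   // blocks a worker thread, not the UI
        dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
            SVGKImage *iconImage = [Settings getIconImage:location];
            dispatch_async(dispatch_get_main_queue(), ^{
                if (iconImage) {
                    [iconImgView setImage:iconImage.UIImage];
                }
                dispatch_semaphore_signal(throttleSemaphore);   // free a slot for the next icon
            });
        });
    }
});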

Performing UI updates on main thread synchronously from a concurrent queue

As far as I have understood GCD, UI operations should always be performed on the main thread/main queue asynchronously. But the following code also seems to work without any problem. Can someone please explain why?
Inside a dispatch_async block I am dispatching 2 blocks synchronously. One block downloads an image and the other displays it in the view.
dispatch_queue_t concurrentQueue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_async(concurrentQueue, ^{
    __block UIImage *image = nil;
    dispatch_sync(concurrentQueue, ^{
        /* Download the image here */
    });
    dispatch_sync(dispatch_get_main_queue(), ^{
        /* Show the image to the user here on the main queue */
    });
});
The queue is important (it has to be the main queue), but whether the GCD calls are synchronous or asynchronous is irrelevant; that only affects how the rest of your code around the GCD calls is timed. Once a block is running on a queue, it doesn't matter how it was scheduled.
Synchronous dispatch can simplify your code (since it won't return until the block is executed) but does come with the risk of locking if you end up waiting for things to finish.
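For example, a minimal sketch of that risk: if the following runs on the main thread, it deadlocks, because dispatch_sync waits on the very queue that is busy executing this call.

// Do not do this from the main thread: dispatch_sync waits for the main queue,
// which is busy executing this very call, so neither side can ever proceed.
dispatch_sync(dispatch_get_main_queue(), ^{
    NSLog(@"never reached when called from the main thread");
});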

Simple GCD Serial Queue example like FIFO using blocks

I read the Apple documentation on how to use serial queues to ensure that tasks execute in a predictable order, but now I am very confused.
Somehow I am able to make things run serially, but I am still not clear about it, so I need a simple serial example for executing my methods one after another.
I divided my functionality into 4 parts and now want them to execute serially:
[self ReadAllImagesFromPhotosLibrary];
[self WriteFewImagestoDirectory];
[self GettingBackAllImagesFromFolder];
[self MoveToNextView];
To follow-up and improve iCoder's answer, you could and should do the following.
dispatch_queue_t serialQueue = dispatch_queue_create("com.unique.name.queue", DISPATCH_QUEUE_SERIAL);
dispatch_async(serialQueue, ^{
    [self ReadAllImagesFromPhotosLibrary];
});
dispatch_async(serialQueue, ^{
    [self WriteFewImagestoDirectory];
});
dispatch_async(serialQueue, ^{
    [self GettingBackAllImagesFromFolder];
});
dispatch_async(serialQueue, ^{
    [self MoveToNextView];
});
Despite the above calls being async, they will be queued and run serially, as DISPATCH_QUEUE_SERIAL dictates. The difference between sync and async is that with sync your code pauses and waits for the block to complete before running the following code, potentially freezing your UI if the execution takes long, whereas with async the code carries on and the block runs asynchronously.
However, the tasks you have put on the DISPATCH_QUEUE_SERIAL queue will still wait for each other and be executed one after the other, in the order they were added, thanks to GCD (Grand Central Dispatch).
dispatch_queue_t serialQueue = dispatch_queue_create("com.unique.name.queue", DISPATCH_QUEUE_SERIAL);
dispatch_async(serialQueue, ^{
    [self ReadAllImagesFromPhotosLibrary];
    dispatch_async(serialQueue, ^{
        [self WriteFewImagestoDirectory];
        dispatch_async(serialQueue, ^{
            [self GettingBackAllImagesFromFolder];
            dispatch_async(serialQueue, ^{
                [self MoveToNextView];
            });
        });
    });
});
I think the above code should work, but make sure the UI operations are executed on the main thread. Hope it helps.
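For example (assuming MoveToNextView is the part that touches UIKit), the last block above could hop back to the main queue, roughly like this sketch:

// ...the first three blocks stay exactly as above...
dispatch_async(serialQueue, ^{
    dispatch_async(dispatch_get_main_queue(), ^{
        [self MoveToNextView];   // assumed to touch UIKit, so it runs on the main queue
    });
});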
You can use NSOperationQueue with maxConcurrentOperationCount set to 1 (or even set dependency for each NSOperation, so it won't start before its dependency is finished).
Here is NSOperationQueue Class Reference.
Also take a look at this question.
I am not aware of an existing API for doing the same with blocks, if there is one.
But the same can be done by defining blocks (representing the operations you want) so that each one points to the next block to run, if any. You can also put the whole chain on a separate queue.
A snippet (pseudocode) for having blocks execute in serial fashion:
BLOCK A (next block reference) {
    -> Do the required task
    -> If (next block reference)
        -> Then call that block
    -> Else
        -> Exit, or have a callback on the main thread
}
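Under ARC, that pseudocode could be expressed roughly like the following sketch, reusing the methods from the question; the ChainedBlock typedef and the block variable names are made up for illustration:

typedef void (^ChainedBlock)(void);

ChainedBlock finish = ^{
    dispatch_async(dispatch_get_main_queue(), ^{
        [self MoveToNextView];              // callback / UI work on the main thread
    });
};
ChainedBlock gatherImages = ^{
    [self GettingBackAllImagesFromFolder];
    finish();                               // call the next block
};
ChainedBlock writeImages = ^{
    [self WriteFewImagestoDirectory];
    gatherImages();
};
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    [self ReadAllImagesFromPhotosLibrary];  // the whole chain runs on a background queue
    writeImages();
});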
Why not try GCD? It guarantees the order of operations and also has sync and async capabilities.
I had some success with a pattern like this in a similar hunt in Swift 3.0 ...
let serialQueue = DispatchQueue.init(label: "com.foo.bar")
serialQueue.sync {self.readAllImagesFromPhotosLibrary()}
serialQueue.sync {self.writeFewImagestoDirectory()}
serialQueue.sync {self.gettingBackAllImagesFromFolder()}
serialQueue.sync {self.moveToNextView()}

Delay in executing method after dispatch group operations are complete in iOS

I am calling a method after the tasks in my dispatch group finish executing. However, there is a significant delay in executing the final method even after all of the tasks have completed. Can anyone explain any probable reasons?
dispatch_group_t group = dispatch_group_create();
dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_group_async(group, queue, ^{
    //some code
});
dispatch_group_notify(group, queue, ^{
    [self allTasksDone];
});
What I meant was that the method allTasksDone is executed after some delay even when the operation in the async queue has completed.
How does -allTasksDone work? If it's communicating with the user by updating user interface elements, it needs to run in the main thread's context, or else the UI elements in question will appear "delayed": they won't update until the main run loop happens to make them update.
Try this instead:
dispatch_group_notify(group, dispatch_get_main_queue(), ^{
    [self allTasksDone];
});
As it is, you're running -allTasksDone on the default background queue, which doesn't play nice with AppKit or UIKit.
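Putting that suggestion together, a self-contained sketch (the block bodies are placeholders):

dispatch_group_t group = dispatch_group_create();
dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

dispatch_group_async(group, queue, ^{
    // some long-running background work
});
dispatch_group_async(group, queue, ^{
    // more background work, running concurrently with the block above
});
dispatch_group_notify(group, dispatch_get_main_queue(), ^{
    [self allTasksDone];   // safe place for UIKit/AppKit updates
});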
I suggest an alternative approach although you can most certainly accomplish this using dispatch groups.
// Important note: This does not work with global queues, but you can use target
// queues to direct your custom queue to one of the global queues if you need priorities.
dispatch_queue_t queue = dispatch_queue_create("com.mycompany.myqueue", DISPATCH_QUEUE_CONCURRENT);
dispatch_async(queue, ^{
    //some code
});
dispatch_barrier_async(queue, ^{
    // this executes when all previously dispatched blocks have finished
    [self allTasksDone];
});
