I have configured a URLSession to fetch data from the network and have set a delegate queue so that all subsequent operations happen on my queue.
However, when using breakpoints to view the Debug navigator, Xcode shows the completion block of the URLSession being invoked on arbitrary threads.
The URLSession is set up as follows:
NSURLSessionConfiguration *configuration = [NSURLSessionConfiguration defaultSessionConfiguration];
self.urlSession = [NSURLSession sessionWithConfiguration:configuration delegate:nil delegateQueue:[MyManager sharedInstance].queue];
...
[[self.urlSession dataTaskWithRequest:urlRequest completionHandler:^(NSData * _Nullable data, NSURLResponse * _Nullable response, NSError * _Nullable error) {
    // ... operations expected to be executed on WEGQueue ...
}] resume];
Below is a screen capture of the processes on my serial queue named WEGQueue before the URLSession has started.
Now I expect the completion block operations to be invoked on the specified delegate queue of the URLSession, i.e. WEGQueue here.
However, using a breakpoint to view the Debug Navigator shows that the block is being processed on an arbitrary queue. See the picture below.
Below is the Debug Navigator with the "View Process by Queue" filter.
This is really weird!
I'm not sure why URLSession is not invoking completion blocks on the specified delegate queue.
And that's not all.
It gets weirder: when I do a po, lldb says that the completion block is (as expected) on WEGQueue. See the picture below.
It's confusing that Xcode's Debug Navigator says the URLSession completion block is being executed on an arbitrary thread while lldb says it is executed, as expected, on the delegate queue WEGQueue.
Did anyone face a similar scenario? Is this just an Xcode GUI bug, or is something really amiss here?
I'd say it is just an Xcode GUI limitation: it does not always label the queue in the "View by Queue" view with the name that you gave it, and it does not always label a thread with the queue that thread is currently handling. As you say, when you do po [NSOperationQueue currentQueue], it does report the correct queue, so you have no indication that it is actually using another queue.
There is no guarantee that the same queue is always handled by the same thread; each event can be handled by a different thread. For a serial queue, the only guarantee is that the events do not run in parallel.
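If you want runtime confirmation rather than trusting the Debug Navigator, you can log both the current operation queue and the current thread from inside the completion handler. A minimal sketch, assuming the same urlSession and urlRequest from the question:
// Sketch: verify which queue and thread the completion handler runs on.
// Assumes [MyManager sharedInstance].queue was passed as the delegateQueue.
[[self.urlSession dataTaskWithRequest:urlRequest completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) {
    // currentQueue is the NSOperationQueue servicing this block (should be WEGQueue).
    NSLog(@"queue: %@", [NSOperationQueue currentQueue].name);
    // The underlying thread is an implementation detail and may differ between callbacks.
    NSLog(@"thread: %@", [NSThread currentThread]);
}] resume];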
Related
I have been trying to figure out when it's okay to "just type in what I need done" and when I need to be specific about what kind of work I'm doing on what kind of thread.
As I understand it, I should only update the UI on the main thread. Does this mean that it's not okay to do something like this? Should I wrap this in a GCD call?
[sessionManager dataTaskWithRequest:aRequest completionHandler:^(NSURLResponse * _Nonnull response, id _Nullable responseObject, NSError * _Nullable error) {
someUILabel.text = #"Hello!"; // Updating my UI
[someTableView reloadData]; // Ask a table view to reload data
}];
That's it for the UI part. Now, let's assume I have an NSMutableArray somewhere in my class. I would be adding objects to or removing objects from this array by, for instance, tapping a UIButton. Then I also have an NSURLSessionDataTask going to a server somewhere to get some data and load it into my NSMutableArray, like so:
[sessionManager dataTaskWithRequest:aRequest completionHandler:^(NSURLResponse * _Nonnull response, id _Nullable responseObject, NSError * _Nullable error) {
myMutableArray = [[responseObject objectForKey:@"results"] mutableCopy];
}];
This is not a UI operation. Does this need to be wrapped in a GCD call to avoid a race-condition crash between my button tap adding an object (i.e. [myMutableArray insertObject:someObj atIndex:4];) while the completion block runs, or are these designed not to clash with each other?
I have left out all error handling to focus on the question at hand.
TLDR: It costs you nothing to call dispatch_async(dispatch_get_main_queue()... inside your completion handler, so just do it.
Long Answer:
Let's look at the documentation, shall we?
completionHandler The completion handler to call when the load request is complete. This handler is executed on the delegate queue.
The delegate queue is the queue you passed in when you created the NSURLSession with sessionWithConfiguration:delegate:delegateQueue:. If that's not how you created this NSURLSession, then I suggest you make no assumptions about what queue the completion handler is called on. If you didn't pass [NSOperationQueue mainQueue] as this parameter, you are on a background queue and you should break out to the main queue before doing anything that is not thread-safe.
So now the question is:
Is it thread-safe to update the UI and talk to the table view? No, you must do those things only on the main queue.
Is it thread-safe to set myMutableArray? No, because you would then be sharing a property, self.myMutableArray, between two threads (the main queue, where you usually talk to this property, and this queue, whatever it is).
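Putting that together, a minimal sketch of the completion handler following this advice might look like the following (the variable names are taken from the question; whether you also serialize access to myMutableArray elsewhere is up to you):
[sessionManager dataTaskWithRequest:aRequest completionHandler:^(NSURLResponse *response, id responseObject, NSError *error) {
    // Hop back to the main queue before touching UI or shared state.
    dispatch_async(dispatch_get_main_queue(), ^{
        someUILabel.text = @"Hello!";   // UI: main thread only
        [someTableView reloadData];     // UI: main thread only
        // Because the button tap also mutates the array on the main thread,
        // doing this mutation on the main queue avoids the race as well.
        myMutableArray = [[responseObject objectForKey:@"results"] mutableCopy];
    });
}];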
I'm using NSURLSession to make multiple asynchronous requests to my server with the following code:
[[session dataTaskWithRequest:request completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) {
...
[self.dict setObject:some_obj forKey:some_key];
}] resume];
Inside the completion block I'm setting key/value pairs on a mutable dictionary.
My question is:
As the requests are asynchronous, can it theoretically happen that my program tries to set key/value pairs on the dictionary at the same time? And if this is possible, what will happen?
1. Does the app crash?
2. Will certain key/value pairs not be set?
3. Or will it work, with one key/value setting waiting for the other to finish?
If 3 is not the case, what can I do to make 3 work?
NSMutableDictionary is not documented as thread-safe, so it almost certainly isn't.
However, the Apple docs on NSURLSession say:
The completion handler to call when the load request is complete. This handler is executed on the delegate queue.
You (may) pass the delegate queue at session creation; the docs say:
An operation queue for scheduling the delegate calls and completion handlers. The queue need not be a serial queue. If nil, the session creates a serial operation queue for performing all delegate method calls and completion handler calls.
So as far as setting the keys goes, if you don't explicitly create the session with a concurrent queue, you should be fine. If you do, then you'll need to synchronize access. The easiest way is an @synchronized block:
@synchronized (self.dict) {
    self.dict[key] = value;
}
Depending on when and where you're reading the values, you may need the synchronized block anyway.
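If the dictionary is also read from elsewhere while these completion handlers are running, a reading sketch under the same @synchronized convention might look like this (some_key is the question's placeholder, not a real key):
// Reads that can overlap with the writes above should take the same lock.
id value;
@synchronized (self.dict) {
    value = self.dict[some_key];
}
// Use value outside the critical section so the lock is held only briefly.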
Is it possible to run multiple background threads to improve performance on iOS? Currently I am using the following code to send, let's say, 50 network requests on a background thread:
dispatch_async(dispatch_get_global_queue( DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^(void){
// send 50 network requests
});
EDIT:
After updating my code to something like this, no performance gain was achieved :( Taken from here:
dispatch_queue_t fetchQ = dispatch_queue_create("Multiple Async Downloader", NULL);
dispatch_group_t fetchGroup = dispatch_group_create();
// This will allow up to 8 parallel downloads.
dispatch_semaphore_t downloadSema = dispatch_semaphore_create(8);
// We start ALL our downloads in parallel throttled by the above semaphore.
for (NSURL *url in urlsArray) {
dispatch_group_async(fetchGroup, fetchQ, ^(void) {
dispatch_semaphore_wait(downloadSema, DISPATCH_TIME_FOREVER);
NSMutableURLRequest *headRequest = [NSMutableURLRequest requestWithURL:url cachePolicy: NSURLRequestUseProtocolCachePolicy timeoutInterval:60.0];
[headRequest setHTTPMethod: @"GET"];
[headRequest addValue: cookieString forHTTPHeaderField: @"Cookie"];
NSOperationQueue *queue = [[[NSOperationQueue alloc] init] autorelease];
[NSURLConnection sendAsynchronousRequest:headRequest
queue:queue // created at class init
completionHandler:^(NSURLResponse *response, NSData *data, NSError *error){
// do something with data or handle error
NSLog(#"request completed");
}];
dispatch_semaphore_signal(downloadSema);
});
}
// Now we wait until ALL our dispatch_group_async are finished.
dispatch_group_wait(fetchGroup, DISPATCH_TIME_FOREVER);
// Update your UI
dispatch_sync(dispatch_get_main_queue(), ^{
//[self updateUIFunction];
});
// Release resources
dispatch_release(fetchGroup);
dispatch_release(downloadSema);
dispatch_release(fetchQ);
Be careful not to confuse threads with queues
A single concurrent queue can operate across multiple threads, and GCD never guarantees which thread your tasks will run on.
The code you currently have will submit 50 network tasks to be run on a background concurrent queue; this much is true.
However, all 50 of these tasks will be executed on the same thread.
GCD basically acts like a giant thread pool, so your block (containing your 50 tasks) will be submitted to the next available thread in the pool. Therefore, if the tasks are synchronous, they will be executed serially: each task will have to wait for the previous one to finish before proceeding. If they are asynchronous tasks, then they will all be dispatched immediately (which begs the question of why you need to use GCD in the first place).
If you want multiple synchronous tasks to run at the same time, then you need a separate dispatch_async for each of your tasks. This way you have a block per task, and therefore they will be dispatched to multiple threads from the thread pool and therefore can run concurrently.
Although you should be careful that you don't submit too many network tasks to operate at the same time (you don't say specifically what they're doing), as that could potentially overload a server, as gnasher says.
You can easily limit the number of concurrent tasks (whether they're synchronous or asynchronous) operating at the same time using a GCD semaphore. For example, this code will limit the number of concurrent operations to 6:
long numberOfConcurrentTasks = 6;
dispatch_semaphore_t semaphore = dispatch_semaphore_create(numberOfConcurrentTasks);
// concurrentQueue can be any concurrent queue, e.g. a global queue.
dispatch_queue_t concurrentQueue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
for (int i = 0; i < 50; i++) {
    dispatch_async(concurrentQueue, ^{
        dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER);
        [self doNetworkTaskWithCompletion:^{
            dispatch_semaphore_signal(semaphore);
            NSLog(@"network task %i done", i);
        }];
    });
}
Edit
The problem with your code is the line:
dispatch_queue_t fetchQ = dispatch_queue_create("Multiple Async Downloader", NULL);
When NULL is passed to the attr parameter, GCD creates a serial queue (it's also a lot more readable if you actually specify the queue type here). You want a concurrent queue. Therefore you want:
dispatch_queue_t fetchQ = dispatch_queue_create("Multiple Async Downloader", DISPATCH_QUEUE_CONCURRENT);
You need to signal your semaphore from within the completion handler of the request instead of at the end of the dispatched block. Because the request is asynchronous, the semaphore currently gets signalled as soon as the request is sent off, which immediately frees a slot for another network task. You want to wait for the network task to return before signalling.
[NSURLConnection sendAsynchronousRequest:headRequest
queue:queue // created at class init
completionHandler:^(NSURLResponse *response, NSData *data, NSError *error){
// do something with data or handle error
NSLog(#"request completed");
dispatch_semaphore_signal(downloadSema);
}];
Edit 2
I just noticed you are updating your UI using dispatch_sync. I see no reason for it to be synchronous, as it will just block the background thread until the main thread has updated the UI. I would use dispatch_async to do this.
Edit 3
As CouchDeveloper points out, the number of concurrent network requests may be capped by the system.
The easiest solution appears to be migrating over to NSURLSession and configuring the maxConcurrentOperationCount property of the NSOperationQueue used. That way you can ditch the semaphores altogether and just dispatch all your network requests on a background queue, using a callback to update the UI on the main thread.
I am not at all familiar with NSURLSession though, I was only answering this from a GCD stand-point.
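For what it's worth, a rough, hedged sketch of that NSURLSession migration might look like the following. Note that HTTPMaximumConnectionsPerHost on NSURLSessionConfiguration is what actually caps simultaneous connections per host, while the delegate queue's maxConcurrentOperationCount only limits how many completion handlers run at once. The urlsArray and commented-out updateUIFunction come from the question; everything else is an assumption:
// Sketch only: one possible NSURLSession setup for throttled parallel downloads.
NSURLSessionConfiguration *config = [NSURLSessionConfiguration defaultSessionConfiguration];
config.HTTPMaximumConnectionsPerHost = 8;      // caps simultaneous connections per host

NSOperationQueue *callbackQueue = [[NSOperationQueue alloc] init];
callbackQueue.maxConcurrentOperationCount = 1; // serialize completion handlers (optional)

NSURLSession *session = [NSURLSession sessionWithConfiguration:config
                                                      delegate:nil
                                                 delegateQueue:callbackQueue];
for (NSURL *url in urlsArray) {
    [[session dataTaskWithURL:url completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) {
        // Handle data or error here, then hop to the main queue for any UI work.
        dispatch_async(dispatch_get_main_queue(), ^{
            // [self updateUIFunction];
        });
    }] resume];
}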
You can send multiple requests, but sending 50 requests in parallel is usually not a good idea. There is a good chance that a server confronted with 50 simultaneous requests will handle the first few and return errors for the rest. It depends on the server, but using a semaphore you can easily limit the number of running requests to anything you like, say four or eight. You need to experiment with the server in question to find out what works reliably and gives you the highest performance.
And there seems to be a bit of confusion here: usually all your network requests run asynchronously. That is, you send the request to the OS (which usually goes very quickly), then nothing happens for a while, then a callback method of yours is called to process the data. Whether you send the requests from the main thread or from a background thread doesn't make much difference.
Processing the results of these requests can be time consuming. You can process the results on a background thread. You can process the results of all requests on the same serial queue, which makes it a lot easier to avoid multithreading problems. That's what I do because it's easy and even in the worst case uses one processor for intensive processing of the results, while the other processor can do UI etc.
If you use synchronous network requests (which is a bad idea), then you need to dispatch each one by itself on a background thread. If you run a loop running 50 synchronous network requests on a background thread, then the second request will wait until the first one is completely finished.
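As an illustration of the serial-processing-queue approach described above, here is a small sketch; the queue label and the processResult: method are placeholders, not part of the original code:
// One serial queue for all result processing keeps the heavy parsing off the
// main thread while avoiding data races between the many completion handlers.
dispatch_queue_t processingQueue = dispatch_queue_create("com.example.result-processing", DISPATCH_QUEUE_SERIAL);

for (NSURL *url in urlsArray) {
    [[session dataTaskWithURL:url completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) {
        if (error != nil) { return; }
        dispatch_async(processingQueue, ^{
            // Time-consuming processing happens here, one result at a time.
            [self processResult:data];
        });
    }] resume];
}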
__block NSHTTPURLResponse *httpResponse;
dispatch_semaphore_t semaphore = dispatch_semaphore_create(0);
NSURLSessionDataTask *task = [session dataTaskWithRequest:request completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) {
    if (!error) {
        httpResponse = (NSHTTPURLResponse *)response;
    }
    dispatch_semaphore_signal(semaphore);
}];
[task resume];
dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER);
Is it safe to read httpResponse after this? The semaphore waits for the block to complete execution. If there was no error, will the assignment be visible immediately, or do I have to synchronise or create a memory barrier outside the block?
Does waiting on the semaphore implicitly perform some synchronisation which makes the __block variable safe to read immediately? If this were done with Thread.join() in Java instead of a semaphore, it would be safe, since join() guarantees a happens-before relationship with the assignment in the "block".
The short answer is yes.
The semaphore essentially forces the waiting thread to stop executing until it receives enough signals to proceed.
The variable you have defined is modified on the other thread before the semaphore lets the waiting thread continue, so the assignment will have safely occurred by the time you read it.
Strictly speaking, this code will block the calling thread (probably the main thread) until the semaphore is signalled. So, short answer: yes, it should work, but it isn't best practice because it blocks the main thread.
Longer answer:
Yes, the semaphore will make sure the __block captured storage isn't accessed until it has been filled in. However, the calling thread will be blocked by the wait until the block has completed. This isn't ideal - normal UI tasks like making sure Activity Indicators spin won't happen.
Best practice would be to have the block notify the main object (potentially using a dispatch_async call to the main queue) once it has completed, and only access the result after that. This is especially true because if your session task fails (e.g. due to a loss of network connectivity), the calling thread will potentially block until the completion handler is called with a timeout error. To the user the app will appear frozen, and they can't really do anything about it except kill the app.
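A minimal sketch of that non-blocking pattern (handleResponse: and handleError: are placeholder method names for whatever your object does with the result):
// Instead of blocking on a semaphore, hand the result back asynchronously.
NSURLSessionDataTask *task = [session dataTaskWithRequest:request completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) {
    dispatch_async(dispatch_get_main_queue(), ^{
        if (error == nil) {
            [self handleResponse:(NSHTTPURLResponse *)response];
        } else {
            [self handleError:error];
        }
    });
}];
[task resume];
// The calling thread continues immediately; no UI freeze while the request is in flight.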
For more information on working with blocks, see:
https://developer.apple.com/library/ios/documentation/Cocoa/Conceptual/ProgrammingWithObjectiveC/WorkingwithBlocks/WorkingwithBlocks.html
For best practice with Data Session tasks in particular:
http://www.raywenderlich.com/51127/nsurlsession-tutorial
It seems dispatch_semaphore_wait is also a memory barrier, so the value can be safely read.
I have a class set up to handle my web API calls. This is done using an NSMutableURLRequest and an NSURLConnection. I initially used connectionWithRequest:delegate:, and that worked well for the most part, except when I depended on the request being truly asynchronous, not just partially executing on the main run loop.
To do this, I thought I would just use the ever-so-convenient sendAsynchronousRequest:queue:completionHandler:, and at first, in all of my unit tests, I thought this worked great. It performed asynchronously, and my semaphores were waited on and signaled correctly; it was great.
Until I tried to re-use this new, modified version of my web service class in my actual app. Part of my app plays a video and uses a repeating NSTimer to update part of the screen based on the current playback time of the video. For some unknown reason, as long as I have executed at least one of these new asynchronous NSURLConnections, both the video playback and the timer stop working.
Here is how I initialize the connection:
[NSURLConnection sendAsynchronousRequest:requestMessage
queue:[[NSOperationQueue alloc] init]
completionHandler:^(NSURLResponse *response, NSData *data, NSError *connectionError)
{
if ( data.length > 0 && connectionError == nil )
{
_webServiceData = [data mutableCopy];
[self performSelector:@selector(connectionDidFinishLoading:) withObject:nil];
}
else if ( connectionError != nil )
{
_webServiceData = [data mutableCopy];
[self performSelector:@selector(webServiceDidFinishExecutingWithError:) withObject:connectionError];
}
}];
Here is how I initialize my repeating timer:
playbackTimeTimer = [NSTimer scheduledTimerWithTimeInterval:1.0 target:self selector:@selector(checkPlaybackTime) userInfo:nil repeats:YES];
And I have absolutely no idea why the asynchronous NSURLConnection is causing aspects of my app that are completely unrelated to stop functioning.
EDIT:
For clarification I have ViewControllerA that performs the web requests to retrieve some data. When that data is successfully retrieved, ViewControllerA automatically segues to ViewControllerB. In ViewControllerB's viewWillAppear is where I set up my movie player and timer.
In the future, do not use semaphores to make the test wait. Use the new XCTestExpectation, which is designed for testing asynchronous processes.
And unlike the traditional semaphore trick, using the test expectation doesn't block the main thread, so if you have completion blocks or delegates that require the main thread, you can do this in conjunction with the test expectation.
You're clearly doing something that isn't working because you're running this on a background queue. Typical problems include:
- Trying to run that timer from a background thread. That won't work unless you use one of the standard timer workarounds (scheduling it on the main run loop, dispatching the creation of the timer back to the main thread, using a dispatch timer, etc.; see the sketch after this list). Having earlier described these alternatives in great detail, it turns out you're initiating the timer from viewWillAppear, so that's all academic.
- Any UI updates (including performing segues, reloading tables, etc.) done off the main thread.
- Class properties updated from a background thread; care should be taken when synchronizing these (they might best be dispatched to the main thread, too).
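As referenced above, a minimal sketch of the dispatch-back-to-the-main-thread timer workaround (only relevant if the timer were being created off the main thread, which, per the edit, it is not):
// If this code could run on a background queue, create the timer on the main
// thread so it is scheduled on the main run loop (which is actually running).
dispatch_async(dispatch_get_main_queue(), ^{
    playbackTimeTimer = [NSTimer scheduledTimerWithTimeInterval:1.0
                                                         target:self
                                                       selector:@selector(checkPlaybackTime)
                                                       userInfo:nil
                                                        repeats:YES];
});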
Anyway, you might remedy this by just telling sendAsynchronousRequest to run its completion block on the main queue:
[NSURLConnection sendAsynchronousRequest:requestMessage
queue:[NSOperationQueue mainQueue]
completionHandler:^(NSURLResponse *response, NSData *data, NSError *connectionError)
...
}];
Then the completion block would run on the main queue, and everything would probably be fine. Or you can manually dispatch the calling of your completion handlers back to the main queue.
But make sure that any UI updates (including performing a segue programmatically) are run on the main thread, either by running the whole completion block on the main thread or by manually dispatching the relevant calls to the main thread.
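For the manual-dispatch variant, a short sketch (the segue identifier "showViewControllerB" and the decision to segue from here are assumptions based on the question's edit, not the original code):
[NSURLConnection sendAsynchronousRequest:requestMessage
                                   queue:[[NSOperationQueue alloc] init]  // callbacks on a background queue
                       completionHandler:^(NSURLResponse *response, NSData *data, NSError *connectionError) {
    if (data.length > 0 && connectionError == nil) {
        _webServiceData = [data mutableCopy];
        // UI work (including programmatic segues) must happen on the main thread.
        dispatch_async(dispatch_get_main_queue(), ^{
            [self performSegueWithIdentifier:@"showViewControllerB" sender:self];
        });
    }
}];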