Objective-C: async block structure vs. async network request + completion block - iOS

I have been using the following structure in my projects for consuming API data:
dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_async(queue, ^{
// SYNCHRONOUS network request
// Data processing
dispatch_async(dispatch_get_main_queue(), ^{
// UI update
});
});
On the other hand, I have seen quite frequently another structure where the network request is asynchronous (i.e. using AFNetworking) and then the data processing and UI update are handled in the completion block (which is not async - I think).
Here is an example of what I am saying:
NSURLRequest *request = [NSURLRequest requestWithURL:url];
AFHTTPRequestOperation *operation = [[AFHTTPRequestOperation alloc] initWithRequest:request];
operation.responseSerializer = [AFJSONResponseSerializer serializer];
[operation setCompletionBlockWithSuccess:^(AFHTTPRequestOperation *operation, id responseObject) {
// Data processing
// UI update
} failure:^(AFHTTPRequestOperation *operation, NSError *error) {
// Error handling
}];
[operation start];
So, my questions are:
In the second structure, the data processing is not being run asynchronously, is it?
Why are we generally encouraged to run async network requests instead of synchronous ones dispatched from async blocks/threads?
Why is the second structure much more widely known and used than the first?
Is there something that I am missing?

In my opinion async calls are much more elegant solutions for networking in any language.
Yes, the data processing is done on the main thread in that example. Parsing strings and creating objects isn't really a CPU-cycle-consuming job. However, let's say you download an image and want to process it: you shouldn't do that in this block (maybe spawn another thread). But image processing is a different concern; it isn't related to the network code, so the network code is working as intended.
With async network calls, you can cancel/pause the request. With sync network calls you cannot do that.
First of all, GCD is relatively new to Objective-C (I am not really sure, but I think it became available with iOS 4; correct me if I am wrong). Before that we were using delegation, which meant a lot of boilerplate code. But with the second approach you can easily manage the networking code.
I hope this helps.

In the second structure the data processing is not being run asynchronously, is it?
No, it's not. It's run on the main thread. But the time-consuming task of fetching data from the remote server has already been done on another thread. Once the data is retrieved, the processing generally does not take much time and can be run on the main thread. In the rare case (I have not come across one) where the data processing itself takes long enough to block the UI, we can use NSOperations or GCD queues to process it.
This is achieved by making use of blocks, which work just like callback functions.
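For that rare heavy-processing case, a minimal sketch (reusing the AFNetworking success block from the question; the comments mark where your own work would go) could look like this:
[operation setCompletionBlockWithSuccess:^(AFHTTPRequestOperation *operation, id responseObject) {
    // Hand the expensive work off to a background queue...
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        // Heavy data processing (e.g. building model objects from responseObject)
        dispatch_async(dispatch_get_main_queue(), ^{
            // ...then hop back to the main queue for the UI update
        });
    });
} failure:^(AFHTTPRequestOperation *operation, NSError *error) {
    // Error handling
}];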
Why the second structure is much more known and spread than the first?
It's easier to implement and also easier to understand once you know the syntax for blocks ;)

Related

Send next request when last one is done

I have 100+ requests. I need to send each new request only when the previous one is done, so the server will not return error code 429.
How can I do this with AFNetworking 3.0?
I'm not very familiar with the specific APIs of AFNetworking, but you could setup:
A mutable array containing all your pending requests,
A method called (e.g.) sendNext that removes the first entry of the array, performs that request asynchronously, and, inside the completion block, calls itself.
Of course, you will need a terminating condition, and that is simply stop when the array becomes empty.
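A minimal sketch of that idea, assuming the manager and the pending-requests array are properties you own (the names below are illustrative, and the GET:parameters:success:failure: call shape follows the AFNetworking usage shown later in this thread):
// self.pendingRequests: NSMutableArray of URL strings still to be sent (hypothetical name)
- (void)sendNext {
    if (self.pendingRequests.count == 0) {
        return; // terminating condition: nothing left to send
    }
    NSString *URLString = self.pendingRequests.firstObject;
    [self.pendingRequests removeObjectAtIndex:0];
    [self.manager GET:URLString
           parameters:nil
              success:^(NSURLSessionDataTask *task, id responseObject) {
                  [self sendNext]; // only fire the next request once this one has finished
              }
              failure:^(NSURLSessionDataTask *task, NSError *error) {
                  [self sendNext]; // keep draining the queue even if one request fails
              }];
}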
There are two approaches that can deal with your problem.
Firstly, create an operation queue and add all requests to the queue. Then create an operation for your new request and make it depend on every request already in the queue. As a result, your new operation (which executes the new request) will be performed after the last request is done.
Secondly, you can use dispatch_barrier_async, which creates a synchronization point on your concurrent queue. That means you would create a concurrent queue to execute your 100+ requests, and a dispatch_barrier_async block on that custom queue would execute the new request.
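A rough sketch of the first (dependency-based) approach; it assumes each request is wrapped in an operation that only finishes when its network call completes (the makeOperationForRequest: helper and the allRequests array below are hypothetical):
NSOperationQueue *queue = [[NSOperationQueue alloc] init];

// The operation that fires the new request once everything else is done.
NSOperation *finalOperation = [NSBlockOperation blockOperationWithBlock:^{
    // send the new request here
}];

for (NSURLRequest *request in allRequests) {
    NSOperation *requestOperation = [self makeOperationForRequest:request]; // hypothetical wrapper
    [finalOperation addDependency:requestOperation];
    [queue addOperation:requestOperation];
}

// finalOperation only becomes ready after all of its dependencies have finished.
[queue addOperation:finalOperation];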
Thanks Sendoa for the link to the GitHub issue where Mattt explains why this functionality is not working anymore. There is a clear reason why this isn't possible with the new NSURLSession structure; Tasks just aren't operations, so the old way of using dependencies or batches of operations won't work.
I've created this solution using a dispatch_group that makes it possible to batch requests using NSURLSession, here is the (pseudo-)code:
// Create a dispatch group
dispatch_group_t group = dispatch_group_create();
for (int i = 0; i < 10; i++) {
// Enter the group for each request we create
dispatch_group_enter(group);
// Fire the request
[self GET:@"endpoint.json"
parameters:nil
success:^(NSURLSessionDataTask *task, id responseObject) {
// Leave the group as soon as the request succeeded
dispatch_group_leave(group);
}
failure:^(NSURLSessionDataTask *task, NSError *error) {
// Leave the group as soon as the request failed
dispatch_group_leave(group);
}];
}
// Here we wait for all the requests to finish
dispatch_group_notify(group, dispatch_get_main_queue(), ^{
// Do whatever you need to do when all requests are finished
});
I want to look into writing something that makes this easier to do and discuss with Mattt whether this is something (when implemented nicely) that could be merged into AFNetworking. In my opinion it would be great to have something like this in the library itself. But I have to check when I have some spare time for that.
This question is a possible duplicate of AFNetworking 3.0 AFHTTPSessionManager using NSOperation. You can follow @Darji's comment for a few calls; for 100+ calls, add these utility classes: https://github.com/robertmryan/AFHTTPSessionOperation/ .
It is a very impractical approach to send 100+ request operations concurrently. If possible, try to reduce that number.

Running multiple background threads iOS

Is it possible to run multiple background threads to improve performance on iOS? Currently I am using the following code to send, let's say, 50 network requests on a background thread, like this:
dispatch_async(dispatch_get_global_queue( DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^(void){
// send 50 network requests
});
EDIT:
After updating my code to something like this, no performance gain was achieved :( Taken from here:
dispatch_queue_t fetchQ = dispatch_queue_create("Multiple Async Downloader", NULL);
dispatch_group_t fetchGroup = dispatch_group_create();
// This will allow up to 8 parallel downloads.
dispatch_semaphore_t downloadSema = dispatch_semaphore_create(8);
// We start ALL our downloads in parallel throttled by the above semaphore.
for (NSURL *url in urlsArray) {
dispatch_group_async(fetchGroup, fetchQ, ^(void) {
dispatch_semaphore_wait(downloadSema, DISPATCH_TIME_FOREVER);
NSMutableURLRequest *headRequest = [NSMutableURLRequest requestWithURL:url cachePolicy: NSURLRequestUseProtocolCachePolicy timeoutInterval:60.0];
[headRequest setHTTPMethod: @"GET"];
[headRequest addValue: cookieString forHTTPHeaderField: @"Cookie"];
NSOperationQueue *queue = [[[NSOperationQueue alloc] init] autorelease];
[NSURLConnection sendAsynchronousRequest:headRequest
queue:queue // created at class init
completionHandler:^(NSURLResponse *response, NSData *data, NSError *error){
// do something with data or handle error
NSLog(#"request completed");
}];
dispatch_semaphore_signal(downloadSema);
});
}
// Now we wait until ALL our dispatch_group_async are finished.
dispatch_group_wait(fetchGroup, DISPATCH_TIME_FOREVER);
// Update your UI
dispatch_sync(dispatch_get_main_queue(), ^{
//[self updateUIFunction];
});
// Release resources
dispatch_release(fetchGroup);
dispatch_release(downloadSema);
dispatch_release(fetchQ);
Be careful not to confuse threads with queues
A single concurrent queue can operate across multiple threads, and GCD never guarantees which thread your tasks will run on.
The code you currently have will submit 50 network tasks to be run on a background concurrent queue, this much is true.
However, all 50 of these tasks will be executed on the same thread.
GCD basically acts like a giant thread pool, so your block (containing your 50 tasks) will be submitted to the next available thread in the pool. Therefore, if the tasks are synchronous, they will be executed serially. This means that each task will have to wait for the previous one to finish before proceeding. If they are asynchronous tasks, then they will all be dispatched immediately (which begs the question of why you need to use GCD in the first place).
If you want multiple synchronous tasks to run at the same time, then you need a separate dispatch_async for each of your tasks. This way you have a block per task, and therefore they will be dispatched to multiple threads from the thread pool and therefore can run concurrently.
Although you should be careful that you don't submit too many network tasks to operate at the same time (you don't say specifically what they're doing) as it could potentially overload a server, as gnasher says.
You can easily limit the number of concurrent tasks (whether they're synchronous or asynchronous) operating at the same time using a GCD semaphore. For example, this code will limit the number of concurrent operations to 6:
long numberOfConcurrentTasks = 6;
dispatch_semaphore_t semaphore = dispatch_semaphore_create(numberOfConcurrentTasks);
for (int i = 0; i < 50; i++) {
dispatch_async(concurrentQueue, ^{
dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER);
[self doNetworkTaskWithCompletion:^{
dispatch_semaphore_signal(semaphore);
NSLog(#"network task %i done", i);
}];
});
}
Edit
The problem with your code is the line:
dispatch_queue_t fetchQ = dispatch_queue_create("Multiple Async Downloader", NULL);
When NULL is passed to the attr parameter, GCD creates a serial queue (it's also a lot more readable if you actually specify the queue type here). You want a concurrent queue. Therefore you want:
dispatch_queue_t fetchQ = dispatch_queue_create("Multiple Async Downloader", DISPATCH_QUEUE_CONCURRENT);
You need to signal your semaphore from within the completion handler of the request instead of at the end of the dispatched block. Because the request is asynchronous, the semaphore otherwise gets signalled as soon as the request has been sent off, thereby allowing another network task to be queued. You want to wait for the network task to return before signalling.
[NSURLConnection sendAsynchronousRequest:headRequest
queue:queue // created at class init
completionHandler:^(NSURLResponse *response, NSData *data, NSError *error){
// do something with data or handle error
NSLog(#"request completed");
dispatch_semaphore_signal(downloadSema);
}];
Edit 2
I just noticed you are updating your UI using a dispatch_sync. I see no reason for it to be synchronous, as it'll just block the background thread until the main thread has updated the UI. I would use a dispatch_async to do this.
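In other words, the UI update at the end of the code above could simply read:
// Update your UI without blocking the background thread
dispatch_async(dispatch_get_main_queue(), ^{
    //[self updateUIFunction];
});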
Edit 3
As CouchDeveloper points out, it is possible that the number of concurrent network requests might be being capped by the system.
The easiest solution appears to be migrating over to NSURLSession and configuring the maxConcurrentOperationCount property of the NSOperationQueue used. That way you can ditch the semaphores altogether and just dispatch all your network requests on a background queue, using a callback to update the UI on the main thread.
I am not at all familiar with NSURLSession though, I was only answering this from a GCD stand-point.
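For what it's worth, here is a hedged sketch of that NSURLSession route; note that HTTPMaximumConnectionsPerHost on NSURLSessionConfiguration is my assumption for the knob that caps simultaneous connections, it is not taken from the answer above:
NSURLSessionConfiguration *config = [NSURLSessionConfiguration defaultSessionConfiguration];
config.HTTPMaximumConnectionsPerHost = 6; // assumed cap on simultaneous connections per host
NSURLSession *session = [NSURLSession sessionWithConfiguration:config];

for (NSURL *url in urlsArray) {
    [[session dataTaskWithURL:url
            completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) {
        // do something with data or handle error (this runs off the main thread)
        dispatch_async(dispatch_get_main_queue(), ^{
            // UI update
        });
    }] resume];
}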
You can send multiple requests, but sending 50 requests in parallel is usually not a good idea. There is a good chance that a server confronted with 50 simultaneous request will handle the first few and return errors for the rest. It depends on the server, but using a semaphore you can easily limit the number of running requests to anything you like, say four or eight. You need to experiment with the server in question to find out what works reliably on that server and gives you the highest performance.
And there seems to be a bit of confusion around: Usually all your network requests will run asynchronously. That is you send the request to the OS (which goes very quick usually), then nothing happens for a while, then a callback method of yours is called, processing the data. Whether you send the requests from the main thread or from a background thread doesn't make much difference.
Processing the results of these requests can be time consuming. You can process the results on a background thread. You can process the results of all requests on the same serial queue, which makes it a lot easier to avoid multithreading problems. That's what I do because it's easy and even in the worst case uses one processor for intensive processing of the results, while the other processor can do UI etc.
If you use synchronous network requests (which is a bad idea), then you need to dispatch each one by itself on a background thread. If you run a loop running 50 synchronous network requests on a background thread, then the second request will wait until the first one is completely finished.
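A minimal sketch of that serial-processing idea (the queue label and the request variable are illustrative):
// One serial queue through which every response is processed, one at a time.
dispatch_queue_t processingQueue =
    dispatch_queue_create("com.example.response-processing", DISPATCH_QUEUE_SERIAL);

[NSURLConnection sendAsynchronousRequest:request
                                   queue:[NSOperationQueue mainQueue]
                       completionHandler:^(NSURLResponse *response, NSData *data, NSError *error) {
    dispatch_async(processingQueue, ^{
        // Time-consuming parsing/processing happens here, off the main thread
        dispatch_async(dispatch_get_main_queue(), ^{
            // UI update with the processed result
        });
    });
}];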

Does AFHTTPRequestOperationManager run requests on the operationQueue?

If I have a manager created and instance-varred and I do:
AFHTTPRequestOperation *operation = [self.manager HTTPRequestOperationWithRequest:request success:mySuccessBlock failure:myFailureBlock];
[operation start];
will this run on the manager's operationQueue? I can't seem to find anything guaranteeing that it will, versus using one of the GET, POST, or PUT methods instead, which I assume will add the operation to the queue.
I see mattt's answer here, but I want to be sure one way or the other.
How do I send requests with AFNetworking 2 in strict sequential order?
What I'm trying to do is queue up requests and have them run synchronously by setting
[self.manager.operationQueue setMaxConcurrentOperationCount:1];
If you want to run an operation on a queue, you have to explicitly add the operation to the queue:
AFHTTPRequestOperation *operation = [self.manager HTTPRequestOperationWithRequest:request success:mySuccessBlock failure:myFailureBlock];
[queue addOperation:operation];
This presumes that you would create a queue:
NSOperationQueue *queue = [[NSOperationQueue alloc] init];
queue.name = @"com.domain.app.networkQueue";
queue.maxConcurrentOperationCount = 1;
If you just start the operations without adding them to a queue (e.g. [operation start]), they'll just run, without honoring any particular queue parameters.
Note, you could also have added your operation to the operation manager's queue (whose maxConcurrentOperationCount you can set, as you alluded to in your question):
[self.manager.operationQueue addOperation:operation];
That saves you from having to create your own queue, but it then presumes that all of your other AFNetworking requests for this manager would also run serially. Personally, I'd leave manager.operationQueue alone and create a dedicated queue for the logic that requires serial operations.
One final note: Using serial network operations imposes a significant performance penalty over concurrent requests. I'll assume from your question that you absolutely need to do it serially, but in general I now design my apps to use concurrent requests wherever possible, as it's a much better UX. If I need a particular operation to be serial (e.g. the login), then I do that with operation dependencies or completion blocks, but the rest of the app is running concurrent network requests wherever possible.
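As an illustration of that last point, a login dependency could be sketched roughly like this (the loginRequest/dataRequest variables and the success/failure blocks are placeholders):
AFHTTPRequestOperation *loginOperation =
    [self.manager HTTPRequestOperationWithRequest:loginRequest
                                           success:loginSuccess
                                           failure:loginFailure];
AFHTTPRequestOperation *dataOperation =
    [self.manager HTTPRequestOperationWithRequest:dataRequest
                                           success:dataSuccess
                                           failure:dataFailure];

// dataOperation will not start until loginOperation has finished,
// while unrelated requests on the queue can still run concurrently.
[dataOperation addDependency:loginOperation];
[self.manager.operationQueue addOperations:@[loginOperation, dataOperation]
                         waitUntilFinished:NO];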

AFNetworking: parsing xml in background

I think I am on the right track, but just wanted to double check here. I recently started using AFNetworking to obtain a large XML file from a database, which I then need to parse (I got that part all figured out). I would like the parsing to happen on a background thread, and then update my UI on the main thread. So I added another dispatch_async block inside the success block of the AFXMLRequestOperation:
self.xmlOperation =
[AFXMLRequestOperation XMLParserRequestOperationWithRequest: request
success: ^(NSURLRequest *request, NSHTTPURLResponse *response, NSXMLParser *XMLParser) {
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
XMLParser.delegate = self;
[XMLParser setShouldProcessNamespaces:YES];
[XMLParser parse];
dispatch_async(dispatch_get_main_queue(), ^{
[self.searchResultViewController didFinishImport];
[[NSManagedObjectContext MR_defaultContext] MR_saveToPersistentStoreAndWait];
});
});
}
failure: ^(NSURLRequest *request, NSHTTPURLResponse *response, NSError *error, NSXMLParser *XMLParser) {
// show error
}];
[self.xmlOperation start];
Is this the proper/correct/preferred way to do this?
This looks pretty good. Two observations, though:
Does any of your code on the main thread access any of the objects actively being updated by your NSXMLParserDelegate methods? If not, you're fine.
But, if you have any code (driving the UI, for example) that is accessing the same objects/collections that the NSXMLParserDelegate methods are updating, then you have to be careful about synchronizing those shared resources. (For more information about synchronizing resources, see the Synchronization section of the Threading Programming Guide and/or the Eliminating Lock Based Code section of the Concurrency Programming Guide.)
Personally, I like to move the NSXMLParserDelegate code into a separate class and instantiate that for the individual request; that way I know that my request and its subsequent parsing can never be a source of synchronization issues. You still need to synchronize the update of the model/store, but you are effectively doing that by performing that final update on the main queue.
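A bare-bones sketch of that separation (the class and property names are just for illustration):
// SearchResultParser.h (hypothetical): owns all NSXMLParserDelegate state for one request
@interface SearchResultParser : NSObject <NSXMLParserDelegate>
@property (nonatomic, strong, readonly) NSArray *results;
@end

// In the success block, each request gets its own parser delegate instance:
SearchResultParser *parserDelegate = [[SearchResultParser alloc] init];
XMLParser.delegate = parserDelegate;
[XMLParser setShouldProcessNamespaces:YES];
[XMLParser parse];
// parserDelegate.results now belongs to this request alone, so the parsing
// can never race with objects the rest of the app is using.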
Does your UI allow you to issue another XML request while the first one is in progress? If not, you're fine.
If the user can initiate a second request while the first is in progress, it opens you up to the (admittedly unlikely) scenario where two concurrent parsing passes could be using the same instance of the delegate object. You could solve this by preventing subsequent requests until the first one finishes (e.g. disable the UI elements that trigger a refresh), by using a serial queue, or by moving the parser into a separate class that you instantiate for every request. Personally, I'd be inclined to make this parse request cancelable and have the issuance of a new request cancel any prior, ongoing ones.
Those are the two concurrency-related issues I see in your code sample. Perhaps neither is, in fact, an issue with your particular implementation. Having said that, the very fact that the answer depends so heavily on the rest of your implementation is, itself, an issue.

MagicalRecord : how to perform background saves

I am building a news application, which basically fetches data from a distant server using AFNetworkOperation (all operations are put in an NSOperationQueue in order to properly manage the synchronisation process and its progress).
Each completion block of each AFNetworkOperation creates/deletes/updates Core Data entities.
At the very end of the synchronisation process, in order to make all changes persistent, I perform a full save with the following lines of code:
dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0);
dispatch_async(queue, ^{
NSLog(#"saveInBackground : starting...");
[[NSManagedObjectContext defaultContext] saveToPersistentStoreWithCompletion:^(BOOL success, NSError *error) {
NSLog(#"saveInBackground : finished!");
}];
});
Unfortunately it always blocks the main thread during my save operation.
I might not be using MagicalRecord properly, so any advice would be welcome.
After digging deeper into MagicalRecord, it seems that my code is working well and does not block the main thread at all.
My issue is not with MagicalRecord, but with the way I should use it in the completion blocks of AFNetworking operations.
I will start a new discussion to provide full details on it.
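For reference, a common MagicalRecord pattern for background saves (assuming MagicalRecord 2.x; this sketch is mine, not the follow-up discussion mentioned above) is to perform the entity changes inside saveWithBlock:completion:, which runs the block on a background context and delivers the completion on the main thread:
[MagicalRecord saveWithBlock:^(NSManagedObjectContext *localContext) {
    // create/update/delete entities against localContext here
} completion:^(BOOL contextDidSave, NSError *error) {
    NSLog(@"saveInBackground : finished!");
}];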
