If I have a manager created and stored in an instance variable, and I do:
AFHTTPRequestOperation *operation = [self.manager HTTPRequestOperationWithRequest:request success:mySuccessBlock failure:myFailureBlock];
[operation start];
will this run on the manager's operationQueue? I can't seem to find anything guaranteeing that it will, versus using one of the GET, POST, or PUT convenience methods instead, which I assume add the operation to the queue.
I see mattt's answer here, but I want to be sure one way or the other.
How send request with AFNetworking 2 in strict sequential order?
What I'm trying to do is queue up requests and have them run one at a time by setting
[self.manager.operationQueue setMaxConcurrentOperationCount:1];
If you want to run an operation on a queue, you have to explicitly add the operation to the queue:
AFHTTPRequestOperation *operation = [self.manager HTTPRequestOperationWithRequest:request success:mySuccessBlock failure:myFailureBlock];
[queue addOperation:operation];
This presumes that you would create a queue:
NSOperationQueue *queue = [[NSOperationQueue alloc] init];
queue.name = #"com.domain.app.networkQueue";
queue.maxConcurrentOperationCount = 1;
If you just start the operations without adding them to a queue (e.g. [operation start]), they'll just run, without honoring any particular queue parameters.
Note, you could also have added your operation to the operation manager's queue (on which you can set the maxConcurrentOperationCount, as you alluded to in your question):
[self.manager.operationQueue addOperation:operation];
That saves you from having to create your own queue. But it then presumes that all of your other AFNetworking requests for this manager would run serially, too. Personally, I'd leave manager.operationQueue alone, and create a dedicated queue for the logic that requires serial operations.
One final note: Using serial network operations imposes a significant performance penalty over concurrent requests. I'll assume from your question that you absolutely need to do it serially, but in general I now design my apps to use concurrent requests wherever possible, as it's a much better UX. If I need a particular operation to be serial (e.g. the login), then I do that with operation dependencies or completion blocks, but the rest of the app is running concurrent network requests wherever possible.
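For instance, here is a minimal sketch of the dependency approach for a login-first flow; loginRequest, fetchRequest, and the success/failure blocks are placeholders, not part of the question:
AFHTTPRequestOperation *loginOperation =
    [self.manager HTTPRequestOperationWithRequest:loginRequest
                                          success:loginSuccessBlock
                                          failure:loginFailureBlock];
AFHTTPRequestOperation *fetchOperation =
    [self.manager HTTPRequestOperationWithRequest:fetchRequest
                                          success:fetchSuccessBlock
                                          failure:fetchFailureBlock];

// The fetch will not start until the login operation has finished;
// everything else on the queue can still run concurrently.
[fetchOperation addDependency:loginOperation];
[self.manager.operationQueue addOperation:loginOperation];
[self.manager.operationQueue addOperation:fetchOperation];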
Related
I have 100+ requests. I need to send a new request only when the last one is done, so the server will not return error code 429.
How can I do this with AFNetworking 3.0?
I'm not very familiar with the specific APIs of AFNetworking, but you could set up:
A mutable array containing all your pending requests,
A method called (e.g.) sendNext() that removes the first entry of the array, performs the request asynchronously, and, inside the completion block, calls itself.
Of course, you will need a terminating condition, and that is simply to stop when the array becomes empty.
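Here is a minimal sketch of that idea with AFNetworking 3.0's AFHTTPSessionManager; the pendingURLStrings array, the sessionManager property, and the retry handling are assumptions, and only the GET:parameters:progress:success:failure: call is the library API:
// self.pendingURLStrings: hypothetical NSMutableArray of URL strings still to request
// self.sessionManager: hypothetical AFHTTPSessionManager property
- (void)sendNext {
    if (self.pendingURLStrings.count == 0) {
        return; // terminating condition: the array is empty
    }
    NSString *URLString = self.pendingURLStrings.firstObject;
    [self.pendingURLStrings removeObjectAtIndex:0];

    __weak typeof(self) weakSelf = self;
    [self.sessionManager GET:URLString
                  parameters:nil
                    progress:nil
                     success:^(NSURLSessionDataTask *task, id responseObject) {
                         // handle responseObject, then fire the next request
                         [weakSelf sendNext];
                     }
                     failure:^(NSURLSessionDataTask *task, NSError *error) {
                         // decide whether to retry or skip, then continue
                         [weakSelf sendNext];
                     }];
}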
There are two approaches that can deal with your problem.
First, create an operation queue and add all requests to it. Then create an operation for your new request and make it depend on every operation already in the queue. As a result, your new operation (which executes the new request) will only be performed after the last existing request is done.
Second, you can use dispatch_barrier_async, which creates a synchronization point on a concurrent queue you own. That means you would create a custom concurrent queue to execute your 100+ requests, and a dispatch_barrier_async block on that queue would execute the new request.
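A rough sketch of the barrier idea on a custom concurrent queue follows. Note that a barrier only orders the submitted blocks themselves, so this behaves as described only if each block waits until its request completes; requests, newRequest, and the performRequestAndWait: helper are hypothetical stand-ins:
dispatch_queue_t requestQueue = dispatch_queue_create("com.example.requests", DISPATCH_QUEUE_CONCURRENT);

for (NSURLRequest *request in requests) {
    dispatch_async(requestQueue, ^{
        // Hypothetical helper that blocks until this request finishes,
        // e.g. by waiting on a semaphore signalled in the completion handler.
        [self performRequestAndWait:request];
    });
}

// The barrier block does not run until every block submitted above has returned,
// and blocks submitted afterwards wait for the barrier to finish.
dispatch_barrier_async(requestQueue, ^{
    [self performRequestAndWait:newRequest];
});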
Thanks Sendoa for the link to the GitHub issue where Mattt explains why this functionality is not working anymore. There is a clear reason why this isn't possible with the new NSURLSession structure; Tasks just aren't operations, so the old way of using dependencies or batches of operations won't work.
I've created this solution using a dispatch_group that makes it possible to batch requests using NSURLSession, here is the (pseudo-)code:
// Create a dispatch group
dispatch_group_t group = dispatch_group_create();

for (int i = 0; i < 10; i++) {
    // Enter the group for each request we create
    dispatch_group_enter(group);

    // Fire the request
    [self GET:@"endpoint.json"
      parameters:nil
         success:^(NSURLSessionDataTask *task, id responseObject) {
             // Leave the group as soon as the request succeeded
             dispatch_group_leave(group);
         }
         failure:^(NSURLSessionDataTask *task, NSError *error) {
             // Leave the group as soon as the request failed
             dispatch_group_leave(group);
         }];
}

// Here we wait for all the requests to finish
dispatch_group_notify(group, dispatch_get_main_queue(), ^{
    // Do whatever you need to do when all requests are finished
});
I want to look into writing something that makes this easier to do, and discuss with Mattt whether this is something that (when implemented nicely) could be merged into AFNetworking. In my opinion it would be great to be able to do something like this with the library itself, but I have to see when I have some spare time for that.
This question is a possible duplicate of AFNetworking 3.0 AFHTTPSessionManager using NSOperation. You can follow @Darji's comment for a few calls; for 100+ calls, add these utility classes: https://github.com/robertmryan/AFHTTPSessionOperation/.
It is a very impractical approach to send 100+ request operations concurrently. If possible, try to reduce that number.
Is it possible to run multiple background threads to improve performance on iOS? Currently I am using the following code to send, let's say, 50 network requests on a background thread:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^(void) {
    // send 50 network requests
});
EDIT:
After updating my code to something like this, no performance gain was achieved :( (taken from here):
dispatch_queue_t fetchQ = dispatch_queue_create("Multiple Async Downloader", NULL);
dispatch_group_t fetchGroup = dispatch_group_create();

// This will allow up to 8 parallel downloads.
dispatch_semaphore_t downloadSema = dispatch_semaphore_create(8);

// We start ALL our downloads in parallel throttled by the above semaphore.
for (NSURL *url in urlsArray) {
    dispatch_group_async(fetchGroup, fetchQ, ^(void) {
        dispatch_semaphore_wait(downloadSema, DISPATCH_TIME_FOREVER);

        NSMutableURLRequest *headRequest = [NSMutableURLRequest requestWithURL:url
                                                                   cachePolicy:NSURLRequestUseProtocolCachePolicy
                                                               timeoutInterval:60.0];
        [headRequest setHTTPMethod:@"GET"];
        [headRequest addValue:cookieString forHTTPHeaderField:@"Cookie"];

        NSOperationQueue *queue = [[[NSOperationQueue alloc] init] autorelease];
        [NSURLConnection sendAsynchronousRequest:headRequest
                                           queue:queue // created at class init
                               completionHandler:^(NSURLResponse *response, NSData *data, NSError *error) {
                                   // do something with data or handle error
                                   NSLog(@"request completed");
                               }];

        dispatch_semaphore_signal(downloadSema);
    });
}

// Now we wait until ALL our dispatch_group_async are finished.
dispatch_group_wait(fetchGroup, DISPATCH_TIME_FOREVER);

// Update your UI
dispatch_sync(dispatch_get_main_queue(), ^{
    //[self updateUIFunction];
});

// Release resources
dispatch_release(fetchGroup);
dispatch_release(downloadSema);
dispatch_release(fetchQ);
Be careful not to confuse threads with queues
A single concurrent queue can operate across multiple threads, and GCD never guarantees which thread your tasks will run on.
The code you currently have submits a single block containing your 50 network tasks to a background concurrent queue; that much is true.
However, all 50 of these tasks will be executed on the same thread.
GCD basically acts like a giant thread pool, so your block (containing your 50 tasks) will be submitted to the next available thread in the pool. Therefore, if the tasks are synchronous, they will be executed serially: each task will have to wait for the previous one to finish before proceeding. If they are asynchronous tasks, then they will all be dispatched immediately (which begs the question of why you need to use GCD in the first place).
If you want multiple synchronous tasks to run at the same time, then you need a separate dispatch_async for each of your tasks. This way you have a block per task, and therefore they will be dispatched to multiple threads from the thread pool and therefore can run concurrently.
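For example, a minimal sketch where each (synchronous) task gets its own block, so GCD is free to run them on several threads at once; performSynchronousTaskAtIndex: is a placeholder for whatever the synchronous work is:
dispatch_queue_t concurrentQueue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

for (int i = 0; i < 50; i++) {
    dispatch_async(concurrentQueue, ^{
        // Each block is dispatched independently, so GCD can run several
        // of them at the same time on different threads from the pool.
        [self performSynchronousTaskAtIndex:i];
    });
}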
Although you should be careful that you don't submit too many network tasks to operate at the same time (you don't say specifically what they're doing) as it could potentially overload a server, as gnasher says.
You can easily limit the number of concurrent tasks (whether they're synchronous or asynchronous) operating at the same time using a GCD semaphore. For example, this code will limit the number of concurrent operations to 6:
long numberOfConcurrentTasks = 6;
dispatch_semaphore_t semaphore = dispatch_semaphore_create(numberOfConcurrentTasks);

for (int i = 0; i < 50; i++) {
    dispatch_async(concurrentQueue, ^{
        dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER);
        [self doNetworkTaskWithCompletion:^{
            dispatch_semaphore_signal(semaphore);
            NSLog(@"network task %i done", i);
        }];
    });
}
Edit
The problem with your code is the line:
dispatch_queue_t fetchQ = dispatch_queue_create("Multiple Async Downloader", NULL);
When NULL is passed to the attr parameter, GCD creates a serial queue (it's also a lot more readable if you actually specify the queue type here). You want a concurrent queue. Therefore you want:
dispatch_queue_t fetchQ = dispatch_queue_create("Multiple Async Downloader", DISPATCH_QUEUE_CONCURRENT);
You need to signal your semaphore from within the completion handler of the request instead of at the end of the dispatched block. Because sendAsynchronousRequest: is asynchronous, the semaphore currently gets signalled as soon as the request has been sent off, which lets another network task start. You want to wait for the network task to return before signalling.
[NSURLConnection sendAsynchronousRequest:headRequest
                                   queue:queue // created at class init
                       completionHandler:^(NSURLResponse *response, NSData *data, NSError *error) {
                           // do something with data or handle error
                           NSLog(@"request completed");
                           dispatch_semaphore_signal(downloadSema);
                       }];
Edit 2
I just noticed you are updating your UI using a dispatch_sync. I see no reason for it to be synchronous, as it'll just block the background thread until the main thread has updated the UI. I would use a dispatch_async to do this.
Edit 3
As CouchDeveloper points out, it is possible that the number of concurrent network requests might be being capped by the system.
The easiest solution appears to be migrating over to NSURLSession and configuring the maxConcurrentOperationCount property of the NSOperationQueue used. That way you can ditch the semaphores altogether and just dispatch all your network requests on a background queue, using a callback to update the UI on the main thread.
I am not at all familiar with NSURLSession, though; I was only answering this from a GCD standpoint.
You can send multiple requests, but sending 50 requests in parallel is usually not a good idea. There is a good chance that a server confronted with 50 simultaneous request will handle the first few and return errors for the rest. It depends on the server, but using a semaphore you can easily limit the number of running requests to anything you like, say four or eight. You need to experiment with the server in question to find out what works reliably on that server and gives you the highest performance.
And there seems to be a bit of confusion here: usually all your network requests will run asynchronously. That is, you send the request to the OS (which usually returns very quickly), then nothing happens for a while, then a callback method of yours is called to process the data. Whether you send the requests from the main thread or from a background thread doesn't make much difference.
Processing the results of these requests can be time consuming. You can process the results on a background thread. You can process the results of all requests on the same serial queue, which makes it a lot easier to avoid multithreading problems. That's what I do because it's easy and even in the worst case uses one processor for intensive processing of the results, while the other processor can do UI etc.
If you use synchronous network requests (which is a bad idea), then you need to dispatch each one by itself on a background thread. If you run a loop running 50 synchronous network requests on a background thread, then the second request will wait until the first one is completely finished.
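To illustrate, here is a sketch where each synchronous request (using the era-appropriate, blocking sendSynchronousRequest:returningResponse:error:) is dispatched in its own block so the requests can overlap; the requests array is an assumption:
for (NSURLRequest *request in requests) {
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        NSURLResponse *response = nil;
        NSError *error = nil;
        // This call blocks the current (background) thread until the request completes.
        NSData *data = [NSURLConnection sendSynchronousRequest:request
                                             returningResponse:&response
                                                         error:&error];
        // Process data / error here, then hop to the main queue for any UI work.
    });
}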
I am using AFNetworking 2.0 and Objective-C to create a class which synchronizes the client database with the database on the server.
Of course, the client shouldn't notice when this happens, so the class has to make the calls asynchronous.
However, some calls in this class are dependent on the results of others calls.
Example:
Object: Mammal, id = 15, new id = 26
Subobject: Zebra, clade_id = 15
The zebra can only update its properties once the mammal is done, because it uses some of the mammal's properties (its id), and if it sends the call too early it results in corrupted data (clade_id = 15 instead of the correct 26).
My question is how one could use AFNetworking to make these calls in order (zebra only after mammal has finished) in a way that the user doesn't notice (i.e. asynchronously).
Thanks for your answers.
You don't need to use synchronous calls.
1) The first way
// create request with URL
NSMutableURLRequest *request = ....;
// create AFHTTPRequestOperation
AFHTTPRequestOperation *operation = [[AFHTTPRequestOperationManager manager] HTTPRequestOperationWithRequest:request success:^{success block} failure:^{failure block}];
// Call operation
[self.operationQueue addOperation:operation];
In the success block, you schedule the second operation.
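A sketch of that chaining might look like this; mammalRequest and the zebraRequestWithCladeId: helper are placeholders for however you build your requests:
AFHTTPRequestOperationManager *manager = [AFHTTPRequestOperationManager manager];

AFHTTPRequestOperation *mammalOperation =
    [manager HTTPRequestOperationWithRequest:mammalRequest
                                     success:^(AFHTTPRequestOperation *operation, id responseObject) {
        // Read the new id (e.g. 26) out of responseObject, build the zebra
        // request from it, and only now enqueue the second call.
        NSURLRequest *zebraRequest = [self zebraRequestWithCladeId:responseObject[@"id"]]; // hypothetical helper
        AFHTTPRequestOperation *zebraOperation =
            [manager HTTPRequestOperationWithRequest:zebraRequest
                                             success:^(AFHTTPRequestOperation *op, id zebraResponse) {
                // zebra updated with the correct clade_id
            }
                                             failure:^(AFHTTPRequestOperation *op, NSError *error) {
                // handle zebra failure
            }];
        [manager.operationQueue addOperation:zebraOperation];
    }
                                     failure:^(AFHTTPRequestOperation *operation, NSError *error) {
        // handle mammal failure
    }];
[manager.operationQueue addOperation:mammalOperation];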
2) The second way
Create both operations at once and set the first operation as a dependency of the second operation (but you need additional code for managing the shared data).
[operation2 addDependency:operation1];
[self.operationQueue addOperation:operation1];
[self.operationQueue addOperation:operation2];
I have been using the following structure in my projects for consuming API data:
dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_async(queue, ^{
    // SYNCHRONOUS network request
    // Data processing
    dispatch_async(dispatch_get_main_queue(), ^{
        // UI update
    });
});
On the other hand, I have seen quite frequently another structure where the network request is asynchronous (i.e. using AFNetworking) and then the data processing and UI update are handled in the completion block (which is not async - I think).
Here is an example of what I am saying:
NSURLRequest *request = [NSURLRequest requestWithURL:url];
AFHTTPRequestOperation *operation = [[AFHTTPRequestOperation alloc] initWithRequest:request];
operation.responseSerializer = [AFJSONResponseSerializer serializer];
[operation setCompletionBlockWithSuccess:^(AFHTTPRequestOperation *operation, id responseObject) {
    // Data processing
    // UI update
} failure:^(AFHTTPRequestOperation *operation, NSError *error) {
    // Error handling
}];
[operation start];
So, my questions are:
In the second structure, the data processing is not being run asynchronously, is it?
Why are we generally encouraged to run async network requests instead of sync ones dispatched from background blocks/threads?
Why is the second structure much better known and more widespread than the first?
Is there something that I am missing?
In my opinion, async calls are a much more elegant solution for networking in any language.
Yes, the data processing is done on the main thread in that example. Parsing strings and creating objects isn't really a CPU-cycle-consuming job. However, let's say you download an image and want to process it: you shouldn't do that in this block (maybe spawn another thread). But image processing is another scope; it isn't related to the network code, so the network code is working as intended.
With async network calls, you can cancel or pause the request; with sync network calls you cannot.
First of all, GCD is relatively new to Objective-C (I am not completely sure, but I believe it became available with iOS 4; correct me if I am wrong). Before that we were using delegation, which meant a lot of boilerplate code. With the second approach you can easily manage the networking code.
I hope this helps.
In the second structure the data processing is not being run asynchronously, is it?
No, it's not; it runs on the main thread. But the time-consuming task of fetching data from the remote server has already been done on another thread. Once the data is retrieved, the data processing generally does not take much time and can run on the main thread. In rare cases (I have not come across any) where the data processing itself takes time and appears to block the UI, you can use NSOperations or GCD queues to process it.
This is achieved by making use of blocks, which act just like callback functions.
Why is the second structure much better known and more widespread than the first?
It's easier to implement and also to understand once you know the syntax for blocks ;)
I have to carry out a series of download and database write operations in my app. I am using NSOperation and NSOperationQueue for this.
This is application scenario:
Fetch all postcodes from a place.
For each postcode, fetch all houses.
For each house, fetch inhabitant details.
As said, I have defined an NSOperation for each task. In the first case (Task 1), I am sending a request to the server to fetch all postcodes. The delegate within the NSOperation receives the data, which is then written to the database. The database operation is defined in a different class; from the NSOperation class I call the write function defined in the database class.
My question is whether the database write will occur on the main thread or on a background thread. As I am calling it from within an NSOperation, I was expecting it to run on the same background thread (not the main thread) as the NSOperation. Can someone please explain this scenario when dealing with NSOperation and NSOperationQueue?
My question is whether the database write will occur on the main thread or on a background thread?
If you create an NSOperationQueue from scratch as in:
NSOperationQueue *myQueue = [[NSOperationQueue alloc] init];
It will be in a background thread:
Operation queues usually provide the threads used to run their operations. In OS X v10.6 and later, operation queues use the libdispatch library (also known as Grand Central Dispatch) to initiate the execution of their operations. As a result, operations are always executed on a separate thread, regardless of whether they are designated as concurrent or non-concurrent operations.
Unless you are using the mainQueue:
NSOperationQueue *mainQueue = [NSOperationQueue mainQueue];
You can also see code like this:
NSOperationQueue *myQueue = [[NSOperationQueue alloc] init];
[myQueue addOperationWithBlock:^{
    // Background work
    [[NSOperationQueue mainQueue] addOperationWithBlock:^{
        // Main thread work (UI usually)
    }];
}];
And the GCD version:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^(void) {
    // Background work
    dispatch_async(dispatch_get_main_queue(), ^(void) {
        // Main thread work (UI usually)
    });
});
NSOperationQueue gives you finer control over what you want to do. You can create dependencies between the two operations (download and save to database). To pass data from one block to the other, you can assume, for example, that an NSData instance will be coming from the server, so:
__block NSData *dataFromServer = nil;

NSBlockOperation *downloadOperation = [[NSBlockOperation alloc] init];
__weak NSBlockOperation *weakDownloadOperation = downloadOperation;

[weakDownloadOperation addExecutionBlock:^{
    // Download your stuff
    // Finally put it on the right place:
    dataFromServer = ....
}];

NSBlockOperation *saveToDataBaseOperation = [[NSBlockOperation alloc] init];
__weak NSBlockOperation *weakSaveToDataBaseOperation = saveToDataBaseOperation;

[weakSaveToDataBaseOperation addExecutionBlock:^{
    // Work with your NSData instance
    // Save your stuff
}];

[saveToDataBaseOperation addDependency:downloadOperation];
[myQueue addOperation:saveToDataBaseOperation];
[myQueue addOperation:downloadOperation];
Edit: Why I am using a __weak reference to the operations can be found here; in a nutshell, it's to avoid retain cycles.
If you want to perform the database write operation on a background thread, you need to create an NSManagedObjectContext for that thread.
You can create the background NSManagedObjectContext in the start method of your relevant NSOperation subclass.
Check the Apple docs for Concurrency with Core Data.
You can also create an NSManagedObjectContext that executes requests in its own background thread by creating it with NSPrivateQueueConcurrencyType and performing the requests inside its performBlock: method.
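A minimal sketch of that private-queue approach, assuming self.persistentStoreCoordinator is already set up:
NSManagedObjectContext *backgroundContext =
    [[NSManagedObjectContext alloc] initWithConcurrencyType:NSPrivateQueueConcurrencyType];
backgroundContext.persistentStoreCoordinator = self.persistentStoreCoordinator;

[backgroundContext performBlock:^{
    // Inserts/updates made here run on the context's own private queue,
    // never on the main thread.
    NSError *error = nil;
    if (![backgroundContext save:&error]) {
        NSLog(@"Background save failed: %@", error);
    }
}];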
From the NSOperationQueue documentation:
In iOS 4 and later, operation queues use Grand Central Dispatch to execute operations. Prior to iOS 4, they create separate threads for non-concurrent operations and launch concurrent operations from the current thread.
So,
[NSOperationQueue mainQueue] // added operations execute on main thread
[NSOperationQueue new] // post-iOS4, guaranteed to be not the main thread
In your case, you might want to create your own "database thread" by subclassing NSThread and sending messages to it with performSelector:onThread:.
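A rough sketch of such a dedicated thread follows; the DatabaseThread class, the writeToDatabase: selector, and the record object are illustrative, and the run-loop/port dance is what keeps the thread alive between messages:
@interface DatabaseThread : NSThread
@end

@implementation DatabaseThread
- (void)main {
    @autoreleasepool {
        // Attach a dummy port so the run loop has a source and keeps running.
        [[NSRunLoop currentRunLoop] addPort:[NSMachPort port] forMode:NSDefaultRunLoopMode];
        while (![self isCancelled]) {
            [[NSRunLoop currentRunLoop] runMode:NSDefaultRunLoopMode
                                     beforeDate:[NSDate distantFuture]];
        }
    }
}
@end

// Usage: start the thread once, then funnel every database write onto it.
DatabaseThread *databaseThread = [[DatabaseThread alloc] init];
[databaseThread start];

[self performSelector:@selector(writeToDatabase:)
             onThread:databaseThread
           withObject:record
        waitUntilDone:NO];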
The execution thread of an NSOperation depends on the NSOperationQueue to which you added the operation. Look at this statement in your code:
[[NSOperationQueue mainQueue] addOperation:yourOperation]; // or any other similar add method of NSOperationQueue class
All this assumes you have not done any further threading inside the main method of the NSOperation, which is where the actual work instructions are (expected to be) written.
However, in the case of concurrent operations, the scenario is different. The queue may spawn a thread for each concurrent operation, although that's not guaranteed; it depends on system resources versus the operations' resource demands at that point. You can control the concurrency of an operation queue via its maxConcurrentOperationCount property.
EDIT -
I found your question interesting and did some analysis/logging myself. I created an NSOperationQueue on the main thread like this:
self.queueSendMessageOperation = [[[NSOperationQueue alloc] init] autorelease];

NSLog(@"Operation queue creation. current thread = %@ \n main thread = %@", [NSThread currentThread], [NSThread mainThread]);

self.queueSendMessageOperation.maxConcurrentOperationCount = 1; // restrict concurrency
Then I went on to create an NSOperation and added it using addOperation:. When I checked the current thread in the main method of this operation,
NSLog(#"Operation obj = %#\n current thread = %# \n main thread = %#", self, [NSThread currentThread], [NSThread mainThread]);
it was not the main thread; the current thread object was not the main thread object.
So, creating a custom queue on the main thread (with no concurrency among its operations) doesn't mean those operations will execute on the main thread itself.
The summary from the docs is that operations are always executed on a separate thread (post-iOS 4, this means GCD underlies operation queues).
It's trivial to check that it is indeed running on a non-main thread:
NSLog(#"main thread? %#", [NSThread isMainThread] ? #"YES" : #"NO");
When running on a background thread, it's trivial to use GCD/libdispatch to run something on the main thread, whether Core Data, user interface, or other code required to run on the main thread:
dispatch_async(dispatch_get_main_queue(), ^{
// this is now running on the main thread
});
If you're doing any non-trivial threading, you should use FMDatabaseQueue.
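For example, a brief sketch of FMDatabaseQueue usage (the database path, table names, and variables are placeholders); the queue serializes all access internally, so it is safe to call from any thread:
FMDatabaseQueue *databaseQueue = [FMDatabaseQueue databaseQueueWithPath:databasePath];

[databaseQueue inDatabase:^(FMDatabase *db) {
    [db executeUpdate:@"INSERT INTO houses (postcode, address) VALUES (?, ?)",
                      postcode, address];
}];

// Several writes can be grouped into a single transaction:
[databaseQueue inTransaction:^(FMDatabase *db, BOOL *rollback) {
    for (NSDictionary *inhabitant in inhabitants) {
        [db executeUpdate:@"INSERT INTO inhabitants (house_id, name) VALUES (?, ?)",
                          inhabitant[@"house_id"], inhabitant[@"name"]];
    }
}];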