I want to run some tasks with completion continuations sequentially.
For example, I want to execute the getSomethingsWithResultWithCompletion method 3 times as serialized tasks (op2 depends on op1, and so on up to op N):
[MFLayer getSomethingsWithResultWithCompletion:^(id _Nullable response) {
    // it will be run on another thread!
    [MFRequestManager retrivesomeDataWithCompletion:^(id _Nullable response1) {
        // it will be run on another thread!
        [MFRequestManager retriveAnothersomeDataWithInfo:response1 WithCompletion:^(id _Nullable response2) {
            NSLog(@"Finished with Result : %@", response2);
        }];
    }];
}];
Problem
If the retrieve methods execute on another thread (for example, sending a request with AFNetworking), serialization breaks and the next task starts before the previous one has finished.
I have tried NSOperationQueue and semaphores, but I still have the problem.
I have implemented something like this with NSOperationQueue and NSOperation; when the operations' implementations run on the same thread, all of the tasks start sequentially and it works fine.
I strongly discourage that approach, but if you run the task on a background thread, you can use GCD semaphores:
dispatch_semaphore_t sema = dispatch_semaphore_create(0);
[MFRequestManager retrivesomeDataWithCompletion:^(id _Nullable response) {
    if (Completion)
        Completion(response);
    dispatch_semaphore_signal(sema);
}];
dispatch_semaphore_wait(sema, DISPATCH_TIME_FOREVER);
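For instance, here is a minimal sketch (using the question's own method names) of how the two retrieve calls could be serialized on a background queue, so the semaphore never blocks the main thread:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    __block id firstResponse = nil;
    dispatch_semaphore_t sema = dispatch_semaphore_create(0);
    [MFRequestManager retrivesomeDataWithCompletion:^(id _Nullable response1) {
        firstResponse = response1;
        dispatch_semaphore_signal(sema);
    }];
    // Block this background thread until the first request has delivered its result.
    dispatch_semaphore_wait(sema, DISPATCH_TIME_FOREVER);

    // Only now start the second request, feeding it the first result.
    [MFRequestManager retriveAnothersomeDataWithInfo:firstResponse WithCompletion:^(id _Nullable response2) {
        NSLog(@"Finished with Result : %@", response2);
    }];
});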
Related
What I'm trying to achieve is to make a network request and wait for it to finish, so that I can decide what the app's next step should be.
Normally I would avoid such a solution, but this is a rare case in which the codebase has a lot of legacy code and we don't have enough time to apply the changes necessary to make things right.
I'm trying to write a simple input-output method with following definition:
- (nullable id<UserPaymentCard>)validCardForLocationWithId:(ObjectId)locationId;
The thing is that in order to perform some validation inside this method I need to make a network request just to receive the necessary information, so I'd like to wait for this request to finish.
First thing that popped in my mind was using dispatch_semaphore_t, so I ended up with something like this:
- (nullable id<UserPaymentCard>)validCardForLocationWithId:(ObjectId)locationId {
    id<LocationsReader> locationsReader = [self locationsReader];
    __block LocationStatus *status = nil;
    dispatch_semaphore_t sema = dispatch_semaphore_create(0);
    [locationsReader fetchLocationProviderStatusFor:locationId completion:^(LocationStatus * _Nonnull locationStatus) {
        status = locationStatus;
        dispatch_semaphore_signal(sema);
    } failure:nil];
    dispatch_semaphore_wait(sema, DISPATCH_TIME_FOREVER);
    return [self.paymentCards firstCardForStatus:status];
}
Everything compiles and runs, but my UI freezes and I never actually receive the semaphore's signal.
So, I started playing with dispatch_group_t with exactly the same results.
It looks like I might have a problem with where the code gets executed, but I don't know how to approach this and get the expected result. When I try wrapping everything in dispatch_async I do stop blocking the main queue, but dispatch_async returns immediately, so I return from this method before the network request finishes.
What am I missing? Can this actually be achieved without while-loop hacks? Or am I trying to fight windmills?
I was able to achieve what I want with the following solution, but it really feels hacky and not like something I'd love to ship in my codebase.
- (nullable id<UserPaymentCard>)validCardForLocationWithId:(ObjectId)locationId {
    id<LocationsReader> locationsReader = [self locationsReader];
    __block LocationStatus *status = nil;
    __block BOOL flag = NO;
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        [locationsReader fetchLocationProviderStatusFor:locationId completion:^(LocationStatus * _Nonnull locationStatus) {
            status = locationStatus;
            flag = YES;
        } failure:nil];
    });
    while (CFRunLoopRunInMode(kCFRunLoopDefaultMode, 0, true) && !flag) {};
    return [self.paymentCards firstCardForStatus:status];
}
I guess fetchLocationProviderStatusFor:completion:failure: calls those callbacks on the main queue. That's why you get a deadlock. It's impossible. We can't time travel yet.
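A quick way to check that assumption (a diagnostic sketch, not a fix) is to log which thread the completion actually arrives on:
[locationsReader fetchLocationProviderStatusFor:locationId completion:^(LocationStatus * _Nonnull locationStatus) {
    // If this prints 1 while the main thread is parked in dispatch_semaphore_wait,
    // the completion can never run and the wait can never return.
    NSLog(@"completion on main thread? %d", [NSThread isMainThread]);
} failure:nil];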
The deprecated NSURLConnection.sendSynchronousRequest API is useful for those instances when you really can't (or just can't be bothered to) do things properly, like this example:
private func pageExists(at url: URL) -> Bool {
    var request = URLRequest(url: url)
    request.httpMethod = "HEAD"
    request.timeoutInterval = 10
    var response: URLResponse?
    _ = try! NSURLConnection.sendSynchronousRequest(request, returning: &response)
    let httpResponse = response as! HTTPURLResponse
    if httpResponse.statusCode != 200 { return false }
    if httpResponse.url != url { return false }
    return true
}
Currently, your method blocks the main thread, which freezes the UI. Your workaround runs, but it would be better to change the method to take a completion block and call it once the asynchronous fetch finishes. Here's example code for that:
- (void)validCardForLocationWithId:(ObjectId)locationId completion:(void (^)(id<UserPaymentCard> _Nullable))completion {
    id<LocationsReader> locationsReader = [self locationsReader];
    __block LocationStatus *status = nil;
    [locationsReader fetchLocationProviderStatusFor:locationId completion:^(LocationStatus * _Nonnull locationStatus) {
        status = locationStatus;
        completion([self.paymentCards firstCardForStatus:status]);
    } failure:nil];
}
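A caller would then consume the result asynchronously. A sketch, hopping back to the main queue before touching any UI in case the reader delivers its completion on another queue:
[self validCardForLocationWithId:locationId completion:^(id<UserPaymentCard> _Nullable card) {
    dispatch_async(dispatch_get_main_queue(), ^{
        if (card) {
            // proceed with the valid card
        } else {
            // no valid card for this location
        }
    });
}];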
I am currently using the following method to send GET API requests. This method works, but I was wondering if there is a faster way. All I really need is to know when all of the deleted mail has been synced. Any tips or suggestions are appreciated.
- (void)syncDeletedMail:(NSArray *)array atIdx:(NSInteger)idx {
    if (idx < array.count) {
        NSNumber *idNumber = array[idx];
        [apiClient deleteMail:idNumber onSuccess:^(id result) {
            [self syncDeletedMail:array atIdx:(idx + 1)];
        } onFailure:^(NSError *error) {
            [self syncDeletedMail:array atIdx:(idx + 1)];
        }];
    } else {
        NSLog(@"finished");
    }
}
Edit: I don't care what order they complete in (not sure whether it matters in terms of speed), as long as all the API requests come back completed.
You can just send all the deleteMail requests at once and use a dispatch_group to know when all of them have finished. Below is the implementation:
- (void)syncDeletedMail:(NSArray *)array {
    dispatch_group_t serviceGroup = dispatch_group_create();
    for (NSNumber *idNumber in array) {
        dispatch_group_enter(serviceGroup);
        [apiClient deleteMail:idNumber onSuccess:^(id result) {
            dispatch_group_leave(serviceGroup);
        } onFailure:^(NSError *error) {
            dispatch_group_leave(serviceGroup);
        }];
    }
    dispatch_group_notify(serviceGroup, dispatch_get_main_queue(), ^{
        NSLog(@"All emails are deleted!");
    });
}
Here all the requests are fired at the same time, so the total time drops from n sequential round trips to roughly one.
Swift version of @Kamran's answer:
let group = DispatchGroup()
for model in self.cellModels {
    group.enter()
    HTTPAPI.call() { (result) in
        // DO YOUR CHANGE
        switch result {
        ...
        }
        group.leave()
    }
}
group.notify(queue: DispatchQueue.main) {
    // UPDATE UI or RELOAD TABLE VIEW etc.
    // self.tableView.reloadData()
}
I suppose your question stems from the fact that you might have a huge number of queued delete requests, not just five or ten of them.
In that case, I'd also consider adding a server-side API call that deletes more than one item at a time, perhaps up to ten or twenty per request. Grouping the mails into batches would also reduce network overhead: a single GET doesn't just send the id of the item you are deleting, it also carries a bunch of headers and other data that get re-sent on every call.
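If such an endpoint existed, the client could chunk the ids and still use a dispatch group for completion tracking. A sketch only, assuming a hypothetical deleteMails:onSuccess:onFailure: call that accepts an array of ids:
dispatch_group_t serviceGroup = dispatch_group_create();
NSUInteger batchSize = 20; // tune to whatever the server accepts
for (NSUInteger start = 0; start < array.count; start += batchSize) {
    NSRange range = NSMakeRange(start, MIN(batchSize, array.count - start));
    NSArray *batch = [array subarrayWithRange:range];
    dispatch_group_enter(serviceGroup);
    // deleteMails:onSuccess:onFailure: is hypothetical; it would delete a whole batch per request.
    [apiClient deleteMails:batch onSuccess:^(id result) {
        dispatch_group_leave(serviceGroup);
    } onFailure:^(NSError *error) {
        dispatch_group_leave(serviceGroup);
    }];
}
dispatch_group_notify(serviceGroup, dispatch_get_main_queue(), ^{
    NSLog(@"All batches are deleted!");
});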
Currently I am using PromiseKit to chain logic like the following:
[NSURLConnection promise:rq1].then(^(id data1) {
    return [NSURLConnection promise:rq2];
}).then(^(id data2) {
    return [NSURLConnection promise:rq3];
}).then(^(id data3) {
    return [self promiseToDoSomeWorkOnData:data3];
}).finally(^{
    [self cleanup];
});
The problem I am facing is that the method I call in the finally clause is itself asynchronous, but I have no way to chain the finally step with the other promises, so any caller of this whole piece of code cannot wait for the finally clause to finish before continuing to the next promise.
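One way I could imagine addressing this (a sketch only, not necessarily the idiomatic PromiseKit answer) is to expose the cleanup as a promise-returning method, say a hypothetical -promiseToCleanup, and chain it with then instead of finally, so the returned promise only resolves once the cleanup has finished. Unlike finally, a plain then only runs on success, so the failure path would need a matching catch handler.
return [NSURLConnection promise:rq1].then(^(id data1) {
    return [NSURLConnection promise:rq2];
}).then(^(id data2) {
    return [NSURLConnection promise:rq3];
}).then(^(id data3) {
    return [self promiseToDoSomeWorkOnData:data3];
}).then(^{
    // -promiseToCleanup is hypothetical: it would wrap the asynchronous cleanup in a promise,
    // so callers chaining off this method wait for the cleanup too.
    return [self promiseToCleanup];
});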
I'm using AFNetworking 2 and would like to use the NSURLSession approach, but I read the GitHub issue where Mattt explains why this doesn't work with batching. So, instead, I'm using AFHTTPRequestOperation instances from a singleton class containing an NSOperationQueue.
I've created a significant number of discrete operations. Each of these operations is called from different areas of the app, but in some parts of the app it's useful to batch them together (think "full refresh"). Here's a method that does this:
- (void)getEverything {
    AFHTTPRequestOperation *ssoA = [SecurityOps authenticateSSO];
    AFHTTPRequestOperation *atSC = [SecurityOps attachSessionCookies];
    [atSC addDependency:ssoA];
    AFHTTPRequestOperation *comL = [CommunityOps communityListOp];
    [comL addDependency:ssoA];
    AFHTTPRequestOperation *comS = [CommunityOps searchCommunityOp:nil :nil];
    [comS addDependency:comL];
    AFHTTPRequestOperation *stu1 = [StudentOps fdpFullOp]; // 3 Ops in Sequence
    [stu1 addDependency:ssoA];
    AFHTTPRequestOperation *stu2 = [StudentOps progressDataOp];
    [stu2 addDependency:ssoA];
    AFHTTPRequestOperation *stu3 = [StudentOps programTitleOp];
    [stu3 addDependency:ssoA];
    AFHTTPRequestOperation *stu4 = [StudentOps graduationDateOp];
    [stu4 addDependency:ssoA];
    NSArray *ops = [AFURLConnectionOperation
        batchOfRequestOperations:@[ssoA, atSC, comL, comS, stu1, stu2, stu3, stu4]
        progressBlock:^(NSUInteger numberOfFinishedOperations, NSUInteger totalNumberOfOperations) {
            NSLog(@"%lu of %lu complete", numberOfFinishedOperations, totalNumberOfOperations);
        } completionBlock:^(NSArray *operations) {
            NSLog(@"All operations in batch complete");
        }];
    [self.Que addOperations:ops waitUntilFinished:NO];
}
This works just fine, with one exception: The "fdpFullOp" actually launches other operations in a sequence. In its completion block, it adds opB to the queue, and then opB adds opC to the queue in its completion block. These additional operations are, of course, not counted in the "batch" (as written above), so this batch completes before opB and opC are done.
Question 1: When adding an op from the completion block of another, can I add it to the "batch" (for overall batch completion tracking)?
One alternative I've tried is to sequence all of the ops in the queue at batch creation (below). This provides accurate batch completion notice. However, as stu1B requires data from stu1A, and stu1C requires data from stu1B, this only works if predecessor operations persist their data somewhere (e.g. NSUserDefaults) that successor operations can get it. This seems a bit "inelegant", but it does work.
- (void)getEverything {
    AFHTTPRequestOperation *ssoA = [SecurityOps authenticateSSO];
    AFHTTPRequestOperation *atSC = [SecurityOps attachSessionCookies];
    [atSC addDependency:ssoA];
    AFHTTPRequestOperation *comL = [CommunityOps communityListOp];
    [comL addDependency:ssoA];
    AFHTTPRequestOperation *comS = [CommunityOps searchCommunityOp:nil :nil];
    [comS addDependency:comL];
    AFHTTPRequestOperation *stu1A = [StudentOps fdpFullOp]; // 1 of 3 op sequence
    [stu1A addDependency:ssoA];
    AFHTTPRequestOperation *stu1B = [StudentOps fdpSessionOp]; // 2 of 3 op sequence
    [stu1B addDependency:stu1A];
    AFHTTPRequestOperation *stu1C = [StudentOps fdpDegreePlanOp]; // 3 of 3 op sequence
    [stu1C addDependency:stu1B];
    AFHTTPRequestOperation *stu2 = [StudentOps progressDataOp];
    [stu2 addDependency:ssoA];
    AFHTTPRequestOperation *stu3 = [StudentOps programTitleOp];
    [stu3 addDependency:ssoA];
    AFHTTPRequestOperation *stu4 = [StudentOps graduationDateOp];
    [stu4 addDependency:ssoA];
    NSArray *ops = [AFURLConnectionOperation
        batchOfRequestOperations:@[ssoA, atSC, comL, comS, stu1A, stu1B, stu1C, stu2, stu3, stu4]
        progressBlock:^(NSUInteger numberOfFinishedOperations, NSUInteger totalNumberOfOperations) {
            NSLog(@"%lu of %lu complete", numberOfFinishedOperations, totalNumberOfOperations);
        } completionBlock:^(NSArray *operations) {
            NSLog(@"All operations in batch complete");
        }];
    [self.Que addOperations:ops waitUntilFinished:NO];
}
Question 2: Is there a better way (other than persisting data in each op and then reading from storage in the successor op) to pass data between dependent operations in a batch?
Finally, it occurs to me that I might be making this entire process more difficult than it should be. I'd love to hear about alternate approaches that still provide an overall concurrent queue, still provide overall batch progress/completion tracking, but also allow inter-op dependency management and data passing. Thanks!
You shouldn't use NSOperation dependencies for this, because the later operations rely on processing done in completionBlock, but NSOperationQueue considers that work a side effect.
According to the docs, completionBlock is "the block to execute after the operation’s main task is completed". In the case of AFHTTPRequestOperation, "the operation’s main task" is "making an HTTP request". The "main task" doesn't include parsing JSON, persisting data, checking HTTP status codes, etc. - that's all handled in completionBlock.
So in your code, if the ssoA operation succeeds in making a network request, but authentication fails, all the later operations will still continue.
Instead, you should just add dependent operations from the completion blocks of the earlier operations.
When adding an op from the completion block of another, can I add it to the "batch" (for overall batch completion tracking)?
You can't, because:
At this point it's too late to construct a batch operation (see the implementation)
It doesn't make sense, because the later operations may not ever get created (for example, if authentication fails)
As an alternative, you could create one NSProgress object, and update it as work progresses to reflect what's been done and what is known to remain. You could use this, for example, to update a UIProgressView.
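A minimal sketch of that idea (the unit counts are illustrative, not a measured weighting):
// One NSProgress for the whole refresh; each finished step adds one unit.
NSProgress *refreshProgress = [NSProgress progressWithTotalUnitCount:10];

// ...in each operation's completion block, once its work (parsing, persisting) succeeds:
refreshProgress.completedUnitCount += 1;

// ...and a progress bar can observe it directly (iOS 9+):
self.progressView.observedProgress = refreshProgress;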
Is there a better way (other than persisting data in each op and then reading from storage in the successor op) to pass data between dependent operations in a batch?
If you add dependent operations from the completion blocks of the earlier operations, then you can just pass local variables around after validating the success conditions.
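For example, a sketch of chaining from a completion block with AFNetworking 2's setCompletionBlockWithSuccess:failure: (the parameterized fdpSessionOpWithData: factory is hypothetical; the point is that responseObject can be handed along as a plain local value):
AFHTTPRequestOperation *stu1A = [StudentOps fdpFullOp];
[stu1A setCompletionBlockWithSuccess:^(AFHTTPRequestOperation *operation, id responseObject) {
    // Validate the response before deciding to continue the sequence.
    if (responseObject != nil) {
        AFHTTPRequestOperation *stu1B = [StudentOps fdpSessionOpWithData:responseObject];
        [self.Que addOperation:stu1B];
    }
} failure:^(AFHTTPRequestOperation *operation, NSError *error) {
    NSLog(@"fdpFullOp failed: %@", error);
}];
[self.Que addOperation:stu1A];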
I am trying to implement code to serialize network requests: the next request should start only after the previous one is done. I also want to subscribe to these requests so I can handle errors. The code looks as follows:
- (RACSignal *)sendRequest:(Request *)request {
    return [[[RACSignal return:nil]
        deliverOn:[RACScheduler scheduler]]
        mapReplace:[self.network sendRequest]]; // A different thread is spawned to execute the request
}
and it is called as:
[[self sendRequest:request] subscribeNext:^(id x) {
    NSLog(@"Request has been sent");
}];
Note that sendRequest can be called from multiple threads in parallel, so the requests need to be queued.
Putting the requests on the same scheduler didn't work: the send happens on another thread, so the next request gets picked up before the previous one is finished.
I also looked at using RACSubject, which can help with buffering the requests, but it is better suited to fire-and-forget.
I was able to achieve the above using the concat operator, so it looks something like this:
- (RACSignal *)sendRequest:(Request *)request subscriber:(id<RACSubscriber>)subscriber {
    return [[[[RACSignal return:nil]
        deliverOn:[RACScheduler scheduler]]
        flattenMap:^RACStream *(id value) {
            return [self.network sendRequest]; // A different thread is spawned to execute the request
        }]
        doNext:^(id x) {
            [subscriber sendNext:x];
        }];
}

[[[self sendRequest:request subscriber:subscriber] concat]
    subscribeNext:^(id x) {
        NSLog(@"Request has been sent");
    }];
It turns out that an NSOperationQueue is unavoidable.
I have made RACSerialCommand to serialize command execution. It has an interface similar to RACCommand, but with a built-in NSOperationQueue that serializes the executions.
Feel free to try it.
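If you want to stay with plain Foundation, roughly the same effect can be sketched with a serial NSOperationQueue in which each block operation parks its background thread until the asynchronous request reports completion (the completion-based sendRequest:completion: below is hypothetical):
NSOperationQueue *requestQueue = [[NSOperationQueue alloc] init];
requestQueue.maxConcurrentOperationCount = 1; // one request at a time

[requestQueue addOperationWithBlock:^{
    dispatch_semaphore_t sema = dispatch_semaphore_create(0);
    // Hypothetical completion-based variant of the network call from the question.
    [self.network sendRequest:request completion:^(id response, NSError *error) {
        dispatch_semaphore_signal(sema);
    }];
    // Hold this operation (and therefore the serial queue) until the request finishes.
    dispatch_semaphore_wait(sema, DISPATCH_TIME_FOREVER);
}];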