I am trying to implement code that serializes network requests: the next request should start only after the previous one has finished. I also want to subscribe to these requests so I can handle errors. The code looks like this:
- (RACSignal *)sendRequest:(Request *)request {
    return [[[RACSignal return:nil]
        deliverOn:[RACScheduler scheduler]]
        mapReplace:[self.network sendRequest]]; // A different thread is spawned to execute the request
}
and it is called as:
[[self sendRequest:request]
    subscribeNext:^(id x) {
        NSLog(@"Request has been sent");
    }];
Note that sendRequest can be called from multiple threads in parallel, so the requests need to be queued.
Putting the requests on the same scheduler didn't work: the send happens on another thread, so the next request gets picked up before the previous one has finished.
I also looked at using RACSubject, which can help with buffering the requests, but it is really only good for fire-and-forget.
I was able to achieve the above using the concat operator, so it now looks something like this:
- (RACSignal *)sendRequest:(Request *)request subscriber:(id<RACSubscriber>)subscriber {
    return [[[[RACSignal return:nil]
        deliverOn:[RACScheduler scheduler]]
        flattenMap:^RACStream *(id value) {
            return [self.network sendRequest]; // A different thread is spawned to execute the request
        }]
        doNext:^(id x) {
            [subscriber sendNext:x];
        }];
}
[[[self sendRequest:request subscriber:subscriber]
    concat]
    subscribeNext:^(id x) {
        NSLog(@"Request has been sent");
    }];
It turns out that an NSOperationQueue is unavoidable.
I have made RACSerialCommand to serialize command execution. It has an interface similar to RACCommand, but with a built-in NSOperationQueue to serialize the executions.
Feel free to try it.
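If you don't want to pull in a separate class, here is a minimal sketch of the same idea: an NSOperationQueue with a width of one, whose block operations block on a semaphore until the asynchronous callback fires. This is only a sketch of the pattern; RequestSerializer, AsyncRequest, and RequestCompletion are hypothetical names, not from the original code.

#import <Foundation/Foundation.h>

// Sketch only: type and class names are assumptions for illustration.
typedef void (^RequestCompletion)(id result, NSError *error);
typedef void (^AsyncRequest)(RequestCompletion completion);

@interface RequestSerializer : NSObject
- (void)enqueue:(AsyncRequest)request;
@end

@implementation RequestSerializer {
    NSOperationQueue *_queue;
}

- (instancetype)init {
    if ((self = [super init])) {
        _queue = [[NSOperationQueue alloc] init];
        _queue.maxConcurrentOperationCount = 1; // only one request in flight at a time
    }
    return self;
}

- (void)enqueue:(AsyncRequest)request {
    [_queue addOperationWithBlock:^{
        // Block this serial queue until the asynchronous request calls back,
        // so the next queued request cannot start before the previous one finishes.
        dispatch_semaphore_t done = dispatch_semaphore_create(0);
        request(^(id result, NSError *error) {
            dispatch_semaphore_signal(done);
        });
        dispatch_semaphore_wait(done, DISPATCH_TIME_FOREVER);
    }];
}

@end

The caller wraps its network send in the AsyncRequest block and calls the passed completion when the response arrives; whatever thread the response comes back on, the next queued request only starts after that completion runs.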
I am currently using the following method to send GET API requests. This method works, but I was wondering if there is a faster way. All I need is to know when all of the deleted mail has been synced. Any tips or suggestions are appreciated.
- (void)syncDeletedMail:(NSArray *)array atIdx:(NSInteger)idx {
    if (idx < array.count) {
        NSInteger idNumber = [array[idx] integerValue];
        [apiClient deleteMail:idNumber onSuccess:^(id result) {
            [self syncDeletedMail:array atIdx:(idx + 1)];
        } onFailure:^(NSError *error) {
            [self syncDeletedMail:array atIdx:(idx + 1)];
        }];
    } else {
        NSLog(@"finished");
    }
}
Edit: I don't care what order they complete in (not sure if it matters in terms of speed), as long as all the API requests come back completed.
You can send all the deleteMail requests at once and use a dispatch group to know when all of them have finished. Below is the implementation:
- (void)syncDeletedMail:(NSArray *)array {
    dispatch_group_t serviceGroup = dispatch_group_create();
    for (NSNumber *idNumber in array)
    {
        dispatch_group_enter(serviceGroup);
        [apiClient deleteMail:idNumber.integerValue onSuccess:^(id result) {
            dispatch_group_leave(serviceGroup);
        } onFailure:^(NSError *error) {
            dispatch_group_leave(serviceGroup);
        }];
    }
    dispatch_group_notify(serviceGroup, dispatch_get_main_queue(), ^{
        NSLog(@"All emails are deleted!");
    });
}
Here all the requests are fired at the same time, so the total time drops from roughly n sequential round trips to the duration of the slowest single request.
Swift version of @Kamran's answer:
let group = DispatchGroup()
for model in self.cellModels {
    group.enter()
    HTTPAPI.call() { (result) in
        // DO YOUR CHANGE
        switch result {
        ...
        }
        group.leave()
    }
}
group.notify(queue: DispatchQueue.main) {
    // UPDATE UI or RELOAD TABLE VIEW etc.
    // self.tableView.reloadData()
}
I suppose you are asking because you might have a huge number of queued delete requests, not just five or ten of them.
In that case, I'd also consider adding a server-side API call that deletes more than one item at a time, perhaps ten or twenty, so you can group the mails into batches. That would also reduce the network overhead you generate: each call sends not only the id of the item being deleted, but also a bunch of headers and other data that is repeated for every single request. A client-side sketch of that idea is below.
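A rough sketch of the batching idea, still using the dispatch group from the answer above. Note that deleteMails: is an assumed batch endpoint and the batch size is a made-up limit; neither exists in the original code.

// Hypothetical batch endpoint: send ids in chunks instead of one request per id.
NSUInteger batchSize = 20; // assumed server-side limit
for (NSUInteger start = 0; start < array.count; start += batchSize) {
    NSRange range = NSMakeRange(start, MIN(batchSize, array.count - start));
    NSArray *chunk = [array subarrayWithRange:range];
    dispatch_group_enter(serviceGroup);
    [apiClient deleteMails:chunk onSuccess:^(id result) {   // deleteMails: is an assumed batch API
        dispatch_group_leave(serviceGroup);
    } onFailure:^(NSError *error) {
        dispatch_group_leave(serviceGroup);
    }];
}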
I have a method that calls a request with a response block inside. I want to stub my response and return fake data. How can this be done?
- (void)method:(NSString *)arg1 {
    NSURLRequest *myRequest = ...........
    [self request:myRequest withCompletion:^(NSDictionary *responseDictionary) {
        // Do something with responseDictionary <--- I want to fake my responseDictionary
    }];
}
- (void)request:(NSURLRequest *)request withCompletion:(void (^)(NSDictionary *responseDictionary))completion {
    // make a request and pass a dictionary to the completion block
    completion(dictionary);
}
If I understand you correctly, you want to mock request:withCompletion: and invoke the passed completion block.
Here is how I have done this in the past. I've adapted the code to your call; I cannot check it for compilation errors, but it should show you how to do it.
id mockMyObj = OCMClassMock(...);
OCMStub([mockMyObj request:[OCMArg any] withCompletion:[OCMArg any]]).andDo(^(NSInvocation *invocation) {
    // Generate the fake results.
    NSDictionary *results = ....
    // Get the block from the invocation (arguments 0 and 1 are self and _cmd, so the block is at index 3).
    void (^__unsafe_unretained completionBlock)(NSDictionary *responseDictionary);
    [invocation getArgument:&completionBlock atIndex:3];
    // Call the block with the fake data.
    completionBlock(results);
});
You will need the __unsafe_unretained qualifier or things will go wrong (I can't remember exactly why right now). You could also combine this with argument captors if you want to verify the passed arguments, such as how the request is set up.
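Since method: calls request:withCompletion: on self, a partial mock of the object under test is usually what intercepts the call in practice. A minimal sketch, assuming the class is named MyClass and using a made-up fake payload; neither is from the original post.

// Sketch only: MyClass and the fake dictionary are assumptions.
MyClass *realObject = [MyClass new];
id partialMock = OCMPartialMock(realObject);

OCMStub([partialMock request:[OCMArg any] withCompletion:[OCMArg any]]).andDo(^(NSInvocation *invocation) {
    void (^__unsafe_unretained completionBlock)(NSDictionary *responseDictionary);
    [invocation getArgument:&completionBlock atIndex:3]; // index 3 is the completion block argument
    completionBlock(@{ @"key" : @"fake value" });        // deliver the fake responseDictionary
});

// Exercising method: now runs its completion handling against the fake dictionary.
[realObject method:@"some argument"];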
Currently I am using PromiseKit to chain some logic, like the following:
[NSURLConnection promise:rq1].then(^(id data1) {
    return [NSURLConnection promise:rq2];
}).then(^(id data2) {
    return [NSURLConnection promise:rq3];
}).then(^(id data3) {
    return [self promiseToDoSomeWorkOnData:data3];
}).finally(^{
    [self cleanup];
});
The problem I am facing is that the method I call in the finally clause is asynchronous, but I have no way to chain the finally method together with the other promises, so any code elsewhere that uses this whole chain cannot wait for the finally clause to finish before continuing to the next promise.
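One possible way around this (a sketch on my part, not from the original post) is to wrap the chain in a method that returns the promise and express the cleanup as a promise chained with then, so callers can keep chaining after it. Here promiseToCleanup and doAllTheWork are hypothetical names; also note that, unlike finally, a then block only runs on the success path, so errors would still need a catch.

// Sketch: promiseToCleanup and doAllTheWork are hypothetical.
- (PMKPromise *)doAllTheWork {
    return [NSURLConnection promise:rq1].then(^(id data1) {
        return [NSURLConnection promise:rq2];
    }).then(^(id data2) {
        return [NSURLConnection promise:rq3];
    }).then(^(id data3) {
        return [self promiseToDoSomeWorkOnData:data3];
    }).then(^{
        // Chained instead of finally, so callers of doAllTheWork also wait for cleanup.
        return [self promiseToCleanup];
    });
}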
I have the following method in my app, for which I need to write unit test cases.
Can anyone suggest how I can test whether the success block or the error block is called?
- (IBAction)loginButtonTapped:(id)sender
{
    void (^SuccessBlock)(id, NSDictionary *) = ^(id response, NSDictionary *headers) {
        [self someMethod];
    };
    void (^ErrorBlock)(id, NSDictionary *, id) = ^(NSError *error, NSDictionary *headers, id response) {
        // some code
    };
    [ServiceClass deleteWebService:@"http://someurl"
                              data:nil
                  withSuccessBlock:SuccessBlock
                    withErrorBlock:ErrorBlock];
}
You have to use expectations, a relatively recently introduced API. They were added to solve exactly the problem you are having right now: verifying that callbacks of asynchronous methods are called.
Note that you also set a timeout, which affects the result of the test (a slow network connection, for example, can make the test fail spuriously, unless checking for slow connections is the point, although there are much better ways to do that).
- (void)testThatCallbackIsCalled {
    // Given
    XCTestExpectation *expectation = [self expectationWithDescription:@"Expecting Callback"];

    // When
    void (^SuccessBlock)(id, NSDictionary *) = ^(id response, NSDictionary *headers) {
        // Then
        [self someMethod];
        [expectation fulfill]; // This tells the test that your expectation was fulfilled, i.e. the callback was called.
    };
    void (^ErrorBlock)(id, NSDictionary *, id) = ^(NSError *error, NSDictionary *headers, id response) {
        // some code
    };
    [ServiceClass deleteWebService:@"http://someurl"
                              data:nil
                  withSuccessBlock:SuccessBlock
                    withErrorBlock:ErrorBlock];

    // Here we set the timeout; play around to find what works best for your case to avoid spurious failures.
    [self waitForExpectationsWithTimeout:2.0 handler:nil];
}
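A small optional variant (my assumption, not part of the original answer): if the error path is taken, failing immediately avoids waiting for the whole timeout to expire.

void (^ErrorBlock)(id, NSDictionary *, id) = ^(NSError *error, NSDictionary *headers, id response) {
    XCTFail(@"Error block was called: %@", error);
    [expectation fulfill]; // fulfill anyway so the test ends now instead of stalling until the timeout
};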
I use the following code to start N requests, where every request consists of two requests that must go hand in hand (I do not care about blocking the UI, because I want the app blocked):
objectManager.operationQueue.maxConcurrentOperationCount = 1;
for (int i = 0; i < n; i++)
{
    [objectManager postObject:reqObj
                         path:@"sync.json"
                   parameters:nil
                      success:^(RKObjectRequestOperation *operation, RKMappingResult *mappingResult) {
                          [operation waitUntilFinished];
                          // Do something and then send the second request
                          [self sendAck];
                      } // end success
                      failure:^(RKObjectRequestOperation *operation, NSError *error) {}
    ];
}
And the second request is very similar:
- (void)sendAck
{
    [objectManager postObject:reqObj
                         path:@"sync.json"
                   parameters:nil
                      success:^(RKObjectRequestOperation *operation, RKMappingResult *mappingResult) {
                          [operation waitUntilFinished];
                      }
                      failure:^(RKObjectRequestOperation *operation, NSError *error) {}
    ];
}
But after checking the server logs I realized that all the "acks", meaning all the second requests, arrive after all of the first requests, and the results are obviously incorrect.
If request i has been started, we must wait for its second request to finish before sending request i+1. That is:
req. i, second req. on i, req. i+1, second req. on i+1, ...
and not
req. i, req. i+1, ..., second req. on i, second req. on i+1
Is my use of the operation queue wrong, or am I missing something?
I never tried this, but a good way to ensure that you are calling the requests in a specific order is to place them in a queue, as described here.
Another approach is to make the calls synchronous; a good way to do that is described here.
The reason for this behavior is how you use the NSOperationQueue:
In the for loop you effectively enqueue N "send" requests. All are executed in order and sequentially.
When the first "send" request finishes, the next "send" request is executed. Since the first "send" request has finished, you enqueue the corresponding "sendAck". That is, it is appended to the tail of the queue, where the other "send" requests are still waiting.
When the second "send" request finishes, the next "send" request is executed, and so on. Since the second "send" request has finished, you enqueue the corresponding "sendAck", and so forth.
When all "send" requests have been executed, the first "sendAck" request gets sent. When it finishes, the next "sendAck" is executed, and so forth, until all "sendAck" requests have eventually been sent.
Using "recursion", i.e. eliminating the for loop and using a global variabile which counts the number of total requests is a better approach, as in this answer of SO