objective-c, possible to queue async NSURLRequests? - ios

I realize this question sounds contradictory. I have several Async requests going out in an application. The situation is that the first async request is an authentication request, and the rest will use an access token returned by the successful authentication request.
The two obvious solutions would be:
run them all synchronously, and risk blocking the UI. (bad choice)
run them async, and put requests 2-N in the completion handler for the first one. (not practical)
The trouble is that the subsequent requests may be handled anywhere in the project, at any time. The failure case would be if the 2nd request were issued immediately after the 1st (authentication) request, before the access token was returned.
My question thus is, is there any way to queue up Async requests, or somehow say not to issue them until the first request returns successfully?
EDIT:
Why (2) is not practical: The first request is an authentication request that happens when the app loads. The 2nd+ requests may occur right away, in which case it would be practical, but they may also occur in a completely separate class or any other part of a large application. I can't essentially put the entire application in the completion handler. Other API requests may occur in other classes, at any time, even 1-2 days later after many other things have happened.
SOLUTION:
//pseudo code using a semaphore lock on the authentication call to block all other calls until the token is received

// at start of auth
_semaphore = dispatch_semaphore_create(0);

// at start of every other API call
if (_accessToken == nil && ![_apiCall isEqualToString:@"auth"]) {
    dispatch_semaphore_wait(_semaphore, DISPATCH_TIME_FOREVER);
}

// at end of auth, once the token has arrived
_accessToken = ...;   // store the token before signaling so waiters never see nil
dispatch_semaphore_signal([[SFApi Instance] semaphore]);
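For completeness, here is a slightly fuller sketch of the same semaphore-gate idea pulled into one place. It is illustrative only: the authenticate, callAPI:completion: and sendRequest:completion: methods are made-up names (only SFApi and the ivars appear in the pseudocode above), and it assumes authenticate runs at app launch before any other call, as described in the question. The extra signal after the wait lets more than one queued caller through, since a single signal wakes only one waiter.
@implementation SFApi

- (void)authenticate {
    _semaphore = dispatch_semaphore_create(0);
    [self sendRequest:@"auth" completion:^(NSString *token) {   // hypothetical networking helper
        _accessToken = token;                    // store the token first
        dispatch_semaphore_signal(_semaphore);   // then open the gate
    }];
}

- (void)callAPI:(NSString *)apiCall completion:(void (^)(NSData *response))completion {
    // hop off the caller's thread so the wait below can never block the UI
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        if (_accessToken == nil && ![apiCall isEqualToString:@"auth"]) {
            dispatch_semaphore_wait(_semaphore, DISPATCH_TIME_FOREVER);
            dispatch_semaphore_signal(_semaphore);   // re-signal so other waiting callers pass too
        }
        [self sendRequest:apiCall completion:completion];
    });
}

@end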

This sounds like a case where you'd want to use NSOperation's dependencies
From apple docs:
Operation Dependencies
Dependencies are a convenient way to execute operations in a specific order. You can add and remove dependencies for an operation using the addDependency: and removeDependency: methods. By default, an operation object that has dependencies is not considered ready until all of its dependent operation objects have finished executing. Once the last dependent operation finishes, however, the operation object becomes ready and able to execute.
Note that in order for this to work, you must subclass NSOperation "properly" with respect to KVO compliance:
The NSOperation class is key-value coding (KVC) and key-value observing (KVO) compliant for several of its properties. As needed, you can observe these properties to control other parts of your application.
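As a hedged illustration (the operation names and the work inside the blocks are placeholders, not from the question), the dependency itself can be set up with plain NSBlockOperations on a shared NSOperationQueue:
NSOperationQueue *queue = [[NSOperationQueue alloc] init];

NSBlockOperation *authOp = [NSBlockOperation blockOperationWithBlock:^{
    // perform the authentication request and store the access token
}];

NSBlockOperation *apiOp = [NSBlockOperation blockOperationWithBlock:^{
    // this will not start until authOp has finished
}];

[apiOp addDependency:authOp];
[queue addOperation:authOp];
[queue addOperation:apiOp];
Keep in mind that if the work inside authOp is itself asynchronous, its block returns before the token arrives and the dependency is satisfied too early; that is exactly why the KVO-compliant subclassing mentioned above matters (or why you would make the auth call synchronously inside the block).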

You can't really have it both ways-- there's no built-in serialization for the NSURLConnection stuff. However, you are probably already funneling all of your API requests through some common class anyway (presumably you're not making raw network calls willy-nilly all over the app).
You'll need to build the infrastructure inside that class that prevents the execution of the later requests until the first request has completed. This suggests some sort of serial dispatch queue that all requests (including the initial auth step) are funneled through. You could do this via dependent NSOperations, as is suggested elsewhere, but it doesn't need to be that explicit. Wrapping the requests in a common set of entry points will allow you to do this any way you want behind the scenes.
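A minimal sketch of that idea, assuming a shared API client object (the accessToken property and the fetchTokenSynchronously / issueRequest:completion: helpers are invented for illustration): every entry point funnels its work through one private serial queue, so nothing issued later can go out until the auth step has stored a token.
// created once, e.g. in the shared client's init
dispatch_queue_t requestQueue = dispatch_queue_create("com.example.api.requests", DISPATCH_QUEUE_SERIAL);

// every API entry point funnels its work through the same serial queue
dispatch_async(requestQueue, ^{
    if (self.accessToken == nil) {
        self.accessToken = [self fetchTokenSynchronously];   // hypothetical blocking auth helper
    }
    // now issue the actual request (whatever the entry point received);
    // it can be fully asynchronous from here on
    [self issueRequest:request completion:completion];       // hypothetical helper
});
Because the serial queue is private rather than the main queue, blocking it while the token is fetched doesn't freeze the UI; it only delays the other API calls, which is the point.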

In cases like this I always find it easiest to write the code synchronously and get it running on the UI thread first, correctly, just for debugging. Then, move the operations to separate threads and make sure you handle concurrency.
In this case the perfect mechanism for concurrency is a semaphore; the authentication operation clears the semaphore when it is done, and all the other operations are blocking on it. Once authentication is done, floodgates are open.
The relevant functions are dispatch_semaphore_create() and dispatch_semaphore_wait() from the Grand Central Dispatch documentation: https://developer.apple.com/library/ios/documentation/Performance/Reference/GCD_libdispatch_Ref/Reference/reference.html#//apple_ref/doc/uid/TP40008079-CH2-SW2
Another excellent solution is to create a queue with a barrier:
A dispatch barrier allows you to create a synchronization point within a concurrent dispatch queue. When it encounters a barrier, a concurrent queue delays the execution of the barrier block (or any further blocks) until all blocks submitted before the barrier finish executing. At that point, the barrier block executes by itself. Upon completion, the queue resumes its normal execution behavior.
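A small sketch of the barrier approach (the queue label and block contents are illustrative): submit the auth work with dispatch_barrier_async to a private concurrent queue, and every request submitted to that same queue afterwards waits until the barrier block has finished.
dispatch_queue_t apiQueue = dispatch_queue_create("com.example.api", DISPATCH_QUEUE_CONCURRENT);

// the authentication call goes in as a barrier
dispatch_barrier_async(apiQueue, ^{
    // perform the auth request synchronously here and store the token
});

// anything submitted after the barrier waits for it to finish, then runs
// concurrently with the other non-barrier blocks
dispatch_async(apiQueue, ^{
    // safe to read the token and issue request #2 here
});
Note that barriers only behave this way on a private concurrent queue you create yourself; on a global queue, dispatch_barrier_async degrades to a plain dispatch_async.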
Looks like you got it running with a semaphore, nicely done!

Use blocks... 2 ways that I do it:
First, a block inside of a block...
[myCommKit getPlayerInfoWithCallback:^(ReturnCode returnCode, NSDictionary *playerInfo) {
    if (playerInfo) {
        // this won't run until the first one has finished
        [myCommKit adjustSomething:thingToAdjust withCallback:^(ReturnCode returnCode, NSDictionary *successCode) {
            if (successCode) {
                // this won't run until both the first and then the second one finished
            }
        }];
    }
}];
// don't be confused.. anything down here will run instantly!!!!
Second way is a method inside of a block
[myCommKit getPlayerInfoWithCallback:^(ReturnCode returnCode, NSDictionary *playerInfo) {
    if (playerInfo) {
        [self doNextThingAlsoUsingBlocks];
    }
}];
Either way, any time I do async communication with my server I use blocks. You have to think differently when writing code that communicates with a server: force things to happen in the order you want, and wait for the success/fail response before doing the next thing. Getting used to blocks is the right way to think about it. It could be 15 seconds between when you start the block and when the callback runs the code inside, and it might never come back at all if the user isn't online or there's a server outage.
Bonus way.. I've also sometimes done things using stages:
switch (serverCommunicationStage) {
    case FIRST_STAGE:
    {
        serverCommunicationStage = FIRST_STAGE_WAITING;
        // either have a block in here or call a method that takes a block
        // (hypothetical async call shown for illustration)
        [self doFirstStageWithCallback:^{
            // in the callback of this async call
            serverCommunicationStage = SECOND_STAGE;
        }];
        break;
    }
    case FIRST_STAGE_WAITING:
    {
        // this just waits for the first step to complete
        break;
    }
    case SECOND_STAGE:
    {
        // either have a block in here or call a method that takes a block
        break;
    }
}
Then in your draw loop or somewhere keep calling this method. Or set up a timer to call it every 2 seconds or whatever makes sense for your application. Just make sure to manage the stages properly. You don't want to accidentally keep calling the request over and over. So make sure to set the stage to waiting before you enter the block for the server call.
I know this might seem like an old-school method, but it works fine.

Related

What is the default thread

In iOS, we have GCD and Operation to handle concurrent programming.
Looking into GCD, we have QoS classes, and they're simple and straightforward. This question is about why DispatchQueue.main.async is commonly used to run tasks asynchronously on the main thread.
When we update something in the UI we usually use that function, to prevent the application from becoming unresponsive.
That makes me think: is code written inside a UIViewController usually executed on the main thread?
But also, callbacks and completion handlers usually execute without specifying what thread they are on, and the UI never has a problem with that! So are they on a background thread?
How does Swift handle this, and what thread am I writing on by default, without specifying anything?
Since there are more than one question here, let's attempt to answer them one by one.
why DispatchQueue.main.async is commonly used to run tasks
asynchronously on the main thread.
Before giving a direct answer, make sure you are not confusing these two distinctions:
Serial <===> Concurrent.
Sync <===> Async.
Keep in mind that DispatchQueue.main is a serial queue. Using sync or async has nothing to do with whether a queue is serial or concurrent; it only describes how the task is submitted to the queue. Thus saying DispatchQueue.main.async means that it:
returns control to the current queue right after task has been sent to
be performed on the different queue. It doesn't wait until the task is
finished. It doesn't block the queue.
cited from: https://stackoverflow.com/a/44324968/5501940 (I'd recommend checking it.)
In other words, async means: hand this task to the main thread and return immediately, without waiting for it to finish. That's what makes what you said:
When we update something in the UI we usually use that function,
to prevent the application from becoming unresponsive.
seem sensible; using sync instead of async would block the caller until the main queue had finished the task.
That makes me think: is code written inside a UIViewController
usually executed on the main thread?
First of all: by default, without specifying which thread should execute a chunk of code, it runs on the main thread. However, your question is a bit unspecific, because inside a UIViewController we can also run code off the main thread by explicitly dispatching it elsewhere.
But also, callbacks and completion handlers usually execute without
specifying what thread they are on, and the UI never has a problem
with that! So are they on a background thread?
"knowing that callback & completionHandler usually execute without specifying on what thread they are in" No! You have to specify it. A good real example for it, actually that's how Main Thread Checker works.
I believe that there is something you are missing here, when dealing when a built-in method from the UIKit -for instance- that returns a completion handler, we can't see that it contains something like DispatchQueue.main.async when calling the completion handler; So, if you didn't execute the code inside its completion handler inside DispatchQueue.main.async so we should assume that it handles it for you! It doesn't mean that it is not implemented somewhere.
Another real-world example, Alamofire! When calling
Alamofire.request("https://httpbin.org/get").responseJSON { response in
    // work that touches the UI can go straight in here; responseJSON
    // delivers its result on the main queue for you
}
That's why you can call it without facing any "hanging" issue on the main thread. It doesn't mean the threading isn't handled; it means they handle it for you so you don't have to worry about it.

Asynchronous swift 3

I need to make an asynchronous call so that the second method is only called after the first one has completed. Both methods are network calls. Something like this:
signIn()
getContacts()
I want to make sure that getContacts only gets called after signIn has completed. FWIW, I can't edit the method signatures because they come from a Google SDK.
This is what I tried:
let queue = DispatchQueue(label: "com.app.queue")
queue.async {
signIn()
getContacts()
}
Async calls, by their nature, do not run to completion then call the next thing. They return immediately, before the task they were asked to complete has even been scheduled.
You need some method to make the second task wait for the first to complete.
NauralInOva gave one good solution: use a pair of NSOperations and make them depend on each other. You could also put the two operations into a serial queue, and the second one wouldn't begin until the first is complete.
However, if those calls trigger an async operation on another thread, they may still return early, and the operation queue may start the second operation (the getContacts() call) without waiting for signIn() to complete.
Another option is to set up the first function to take a callback:
signIn(callback: {
    getContacts()
})
A third option is to design a login object that takes a delegate, and the login object would call a signInComplete method on the delegate once the signIn is complete.
This is such a common thing to do that most networking APIs are built for it out of the box. I'd be shocked if the Google API did not have some facility for handling this.
What Google framework are you using? Can you point to the documentation for it?
You're looking for NSOperation. You can use NSOperation to chain operations together using dependencies. Once one operation completes, it can pass its completion block to the next operation. An example for your use case might be:
// AuthOperation is a subclass of NSOperation
let signInOperation = AuthOperation()
// ContactsOperation is a subclass of NSOperation
let getContactsOperation = ContactsOperation()
getContactsOperation.addDependency(signInOperation)

// assuming an OperationQueue to run them on; it will not start
// getContactsOperation until signInOperation has finished
let operationQueue = OperationQueue()
operationQueue.addOperations([signInOperation, getContactsOperation], waitUntilFinished: false)
Ray Wenderlich has a great tutorial covering NSOperation. The tutorial uses a downloading operation to load images asynchronously with a dependency that will filter the photo upon completion of the network request.
There is also a great sample project by Apple that uses operations with asynchronous network requests.

Are these two ways of dispatching work to the main thread (GCD and NSOperationQueue) equivalent?

I'm curious whether these two ways to dispatch work to the main queue are equivalent, or whether there are some differences:
dispatch_async(dispatch_get_main_queue()) {
    // Do stuff...
}
and
NSOperationQueue.mainQueue().addOperationWithBlock { [weak self] () -> Void in
    // Do stuff..
}
There are differences, but they are somewhat subtle.
Operations enqueued to -[NSOperationQueue mainQueue] get executed one operation per pass of the run loop. This means, among other things, that there will be a "draw" pass between operations.
With dispatch_async(dispatch_get_main_queue(),...) and -[performSelectorOnMainThread:...] all enqueued blocks/selectors are called one after the other without spinning the run loop (i.e. allowing views to draw or anything like that). The runloop will continue after executing all enqueued blocks.
So, with respect to drawing, dispatch_async(dispatch_get_main_queue(),...) and -[performSelectorOnMainThread:...] batch operations into one draw pass, whereas -[NSOperationQueue mainQueue] will draw after each operation.
For a full, in-depth investigation of this, see my answer over here.
At a very basic level, the two are not the same thing.
Yes, the operation queue method will ultimately be scheduled on a GCD queue. But it also gets all the rich benefits of using operation queues, such as an easy way to add dependent operations; state observation; the ability to cancel an operation…
So no, they are not equivalent.
Yes, there are differences between GCD and NSOperation.
GCD is lightweight and can be used to add a flavor of multithreading for things like loading a profile picture, loading a web page, or a network call that is sure to return quickly.
NSOperationQueue:
1. Usually used for heavy network calls, sorting thousands of records, etc.
2. Can add new operations, delete them, and get the current status of any operation.
3. Can add a completion handler.
4. Can get the operation count, etc.
These are added advantages over GCD.
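A brief sketch of those NSOperationQueue conveniences (the block contents are placeholders): dependencies, a per-operation completion block, the operation count and blanket cancellation all come built in.
NSOperationQueue *queue = [[NSOperationQueue alloc] init];

NSBlockOperation *download = [NSBlockOperation blockOperationWithBlock:^{
    // heavy work, e.g. a large download or sorting thousands of records
}];
download.completionBlock = ^{
    // runs after the operation finishes (on an unspecified thread)
};

NSBlockOperation *process = [NSBlockOperation blockOperationWithBlock:^{
    // depends on the download finishing first
}];
[process addDependency:download];

[queue addOperations:@[download, process] waitUntilFinished:NO];

NSUInteger pending = queue.operationCount;   // current state of the queue
// [queue cancelAllOperations];              // and everything can be cancelled in one call
Getting the same behavior from raw GCD would mean hand-rolling that bookkeeping yourself.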

Clarifications needed for concurrent operations, NSOperationQueue and async APIs

This is a two part question. Hope someone could reply with a complete answer.
NSOperations are powerful objects. They can be of two different types: non-concurrent or concurrent.
The first type runs synchronously. You can take advantage of a non-concurrent operation by adding it to an NSOperationQueue. The queue creates a thread (or threads) for you, with the result that the operation runs concurrently. The only caveat regards the lifecycle of such an operation: when its main method finishes, it is removed from the queue. This can be a problem when you deal with async APIs.
Now, what about concurrent operations? From Apple doc
If you want to implement a concurrent operation—that is, one that runs
asynchronously with respect to the calling thread—you must write
additional code to start the operation asynchronously. For example,
you might spawn a separate thread, call an asynchronous system
function, or do anything else to ensure that the start method starts
the task and returns immediately and, in all likelihood, before the
task is finished.
This is almost clear to me: they run asynchronously, but you must take the appropriate actions to ensure that they do.
What it is not clear to me is the following. Doc says:
Note: In OS X v10.6, operation queues ignore the value returned by
isConcurrent and always call the start method of your operation from a
separate thread.
What does it really mean? What happens if I add a concurrent operation to an NSOperationQueue?
Then, in this post Concurrent Operations, concurrent operations are used to download some HTTP content by means of NSURLConnection (in its async form). Operations are concurrent and included in a specific queue.
UrlDownloaderOperation * operation = [UrlDownloaderOperation urlDownloaderWithUrlString:url];
[_queue addOperation:operation];
Since NSURLConnection requires a run loop, the author shunts the start method to the main thread (so I suppose that adding the operation to the queue spawned a different one). In this manner, the main run loop can invoke the delegate methods included in the operation.
- (void)start
{
    if (![NSThread isMainThread])
    {
        [self performSelectorOnMainThread:@selector(start) withObject:nil waitUntilDone:NO];
        return;
    }

    [self willChangeValueForKey:@"isExecuting"];
    _isExecuting = YES;
    [self didChangeValueForKey:@"isExecuting"];

    NSURLRequest *request = [NSURLRequest requestWithURL:_url];
    _connection = [[NSURLConnection alloc] initWithRequest:request
                                                  delegate:self];
    if (_connection == nil)
        [self finish];
}

- (BOOL)isConcurrent
{
    return YES;
}
// delegate method here...
My question is the following. Is this thread safe? The run loop listens for sources but invoked methods are called in a background thread. Am I wrong?
Edit
I've completed some tests on my own based on the code provided by Dave Dribin (see 1). I've noticed, as you wrote, that callbacks of NSURLConnection are called in the main thread.
OK, but now I'm still very confused. I'll try to explain my doubts.
Why wrap an async pattern whose callbacks run on the main thread inside a concurrent operation in the first place? Shunting the start method to the main thread makes the callbacks execute on the main thread, but then what are the queue and the operation buying me? Where do I take advantage of the threading mechanisms provided by GCD?
Hope this is clear.
This is kind of a long answer, but the short version is that what you're doing is totally fine and thread safe since you've forced the important part of the operation to run on the main thread.
Your first question was, "What happens if I add a concurrent operation in a NSOperationQueue?" As of iOS 4, NSOperationQueue uses GCD behind the scenes. When your operation reaches the top of the queue, it gets submitted to GCD, which manages a pool of private threads that grows and shrinks dynamically as needed. GCD assigns one of these threads to run the start method of your operation, and guarantees this thread will never be the main thread.
When the start method finishes in a concurrent operation, nothing special happens (which is the point). The queue will allow your operation to run forever until you set isFinished to YES and do the proper KVO willChange/didChange calls, regardless of the calling thread. Typically you'd make a method called finish to do that, which it looks like you have.
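For reference, a typical finish implementation looks something like the sketch below (the _isExecuting and _isFinished ivars are assumed here; the code in the question only shows _isExecuting being set in start), with isExecuting and isFinished overridden to return them:
- (void)finish
{
    [self willChangeValueForKey:@"isExecuting"];
    [self willChangeValueForKey:@"isFinished"];
    _isExecuting = NO;
    _isFinished = YES;
    [self didChangeValueForKey:@"isExecuting"];
    [self didChangeValueForKey:@"isFinished"];
}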
All this is fine and well, but there are some caveats involved if you need to observe or manipulate the thread on which your operation is running. The important thing to remember is this: don't mess with threads managed by GCD. You can't guarantee they'll live past the current frame of execution, and you definitely can't guarantee that subsequent delegate calls (i.e., from NSURLConnection) will occur on the same thread. In fact, they probably won't.
In your code sample, you've shunted start off to the main thread so you don't need to worry much about background threads (GCD or otherwise). When you create an NSURLConnection it gets scheduled on the current run loop, and all of its delegate methods will get called on that run loop's thread, meaning that starting the connection on the main thread guarantees its delegate callbacks also happen on the main thread. In this sense it's "thread safe" because almost nothing is actually happening on a background thread besides the start of the operation itself, which may actually be an advantage because GCD can immediately reclaim the thread and use it for something else.
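As a hedged variation on the same idea (not from the original answer): NSURLConnection also lets you pick the run loop explicitly, so you could keep start on whatever thread GCD gives you and still have the delegate callbacks delivered on the main run loop. The three calls below are standard NSURLConnection API.
_connection = [[NSURLConnection alloc] initWithRequest:request
                                              delegate:self
                                      startImmediately:NO];
[_connection scheduleInRunLoop:[NSRunLoop mainRunLoop] forMode:NSDefaultRunLoopMode];
[_connection start];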
Let's imagine what would happen if you didn't force start to run on the main thread and just used the thread given to you by GCD. A run loop can potentially hang forever if its thread disappears, such as when it gets reclaimed by GCD into its private pool. There are some techniques floating around for keeping a thread alive (such as adding an empty NSPort), but they don't apply to threads created by GCD, only to threads you create yourself and whose lifetime you can guarantee.
The danger here is that under light load you actually can get away with running a run loop on a GCD thread and think everything is fine. Once you start running many parallel operations, especially if you need to cancel them midflight, you'll start to see operations that never complete and never deallocate, leaking memory. If you wanted to be completely safe, you'd need to create your own dedicated NSThread and keep the run loop going forever.
In the real world, it's much easier to do what you're doing and just run the connection on the main thread. Managing the connection consumes very little CPU and in most cases won't interfere with your UI, so there's very little to gain by running the connection completely in the background. The main thread's run loop is always running and you don't need to mess with it.
It is possible, however, to run an NSURLConnection entirely in the background using the dedicated-thread method described above. For an example, check out JXHTTP, in particular the classes JXOperation and JXURLConnectionOperation.

Recommended Design Patterns for Asynchronous Blocks?

I am working on an iOS app that has a highly asynchronous design. There are circumstances where a single, conceptual "operation" may queue many child blocks that will be both executed asynchronously and receive their responses (calls to remote server) asynchronously. Any one of these child blocks could finish execution in an error state. Should an error occur in any child block, any other child blocks should be cancelled, the error state should be percolated up to the parent, and the parent's error-handling block should be executed.
I am wondering what design patterns and other tips that might be recommended for working within an environment like this?
I am aware of GCD's dispatch_group_async and dispatch_group_wait capabilities. It may be a flaw in this app's design, but I have not had good luck with dispatch_group_async because the group does not seem to be "sticky" to child blocks.
Thanks in advance!
There is a WWDC video (2012) that will probably help you out. It uses a custom NSOperationQueue and places the asynchronous blocks inside NSOperations so you can keep a handle on the blocks and cancel the remaining queued blocks.
An idea would be to have the error handling of the child blocks call a method on the main thread in the class that owns the NSOperationQueue. That class could then cancel the rest appropriately. This way the child blocks only need to know about their own thread and the main thread. Here is a link to the video:
https://developer.apple.com/videos/wwdc/2012/
The video is called "Building Concurrent User Interfaces on iOS". The relevant part is mainly in the second half, but you'll probably want to watch the whole thing as it puts it in context nicely.
EDIT:
If possible, I'd recommend handling the response in an embedded block, which wraps everything together nicely; I think that's what you're after.
//Define an NSBlockOperation, and get a weak reference to it
NSBlockOperation *blockOp = [[NSBlockOperation alloc] init];
__weak NSBlockOperation *weakBlockOp = blockOp;

//Define the block and add it to the NSOperationQueue; when the view controller is popped
//we can call -[NSOperationQueue cancelAllOperations] which will cancel all pending threaded ops
[blockOp addExecutionBlock:^{
    //Once a block is executing, you need manual checks to see whether the cancel flag has been
    //set, otherwise the operation will not actually stop. The check is rather pointless in this
    //example, but if the block contained multiple lines of long-running code it would make sense
    //to do this at safe points.
    if (![weakBlockOp isCancelled]) {
        //Substitute your own code here; possibly use a *synchronous* request to get
        //what you need. This call blocks the thread until the server response completes,
        //so the follow-up block below isn't enqueued until the data (or nil) is back.
        NSData *responseData = [NSData dataWithContentsOfURL:[NSURL URLWithString:urlString]];
        [operationQueue addOperationWithBlock:^{
            if (responseData == nil) {   // treat a missing response as the error case
                dispatch_async(dispatch_get_main_queue(), ^{
                    //Call a selector on the main thread to handle cancelling.
                    //The main thread can then use its handle on the NSOperationQueue
                    //to cancel the rest of the blocks.
                });
            } else {
                //Continue executing relevant code....
            }
        }];
    }
}];
[operationQueue addOperation:blockOp];
One pattern that I have come across since posting this question is using a semaphore to turn what would be an asynchronous operation into a synchronous one. This has been pretty useful. This blog post covers the concept in greater detail:
http://www.g8production.com/post/76942348764/wait-for-blocks-execution-using-a-dispatch-semaphore
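The idea, roughly (the fetchDataWithCompletion: call and the apiClient object are stand-ins, not taken from the linked post): create a semaphore with a count of zero, signal it from the async completion block, and wait on it right after kicking off the work.
dispatch_semaphore_t sema = dispatch_semaphore_create(0);
__block NSData *result = nil;

[apiClient fetchDataWithCompletion:^(NSData *data, NSError *error) {   // hypothetical async call
    result = data;
    dispatch_semaphore_signal(sema);   // unblock the waiting thread
}];

// blocks the current thread until the completion block signals
dispatch_semaphore_wait(sema, DISPATCH_TIME_FOREVER);
// result is now available here
Only do this off the main thread, and make sure the completion block isn't delivered on the very thread you're blocking, or you'll deadlock.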
There are many ways to achieve async behavior in Cocoa: GCD, NSOperationQueue, performSelector:withObject:afterDelay:, creating your own threads. There are appropriate times to use each of these mechanisms; that's too long a discussion for here, but something you mentioned in your post needs addressing.
Should an error occur in any child block, any other child blocks should be cancelled, the error state should be percolated up to the parent, and the parent's error-handling block should be executed.
Blocks can't throw errors up the stack. Period.
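Because of that, error propagation has to be explicit. A common shape for it (sketched here with made-up names, not the poster's API) is to hand each child block a completion callback that carries an NSError back to whatever object owns the parent operation, which can then cancel the siblings:
typedef void (^ChildCompletion)(NSError *error);

- (void)runChildWorkOn:(NSOperationQueue *)queue completion:(ChildCompletion)completion
{
    [queue addOperationWithBlock:^{
        NSError *error = nil;
        // ... do the child work, filling in error on failure ...
        dispatch_async(dispatch_get_main_queue(), ^{
            completion(error);   // the parent checks error and cancels the remaining operations
        });
    }];
}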
