Synchronous calls: calls that send a message and expect a reply (they block the current process).
Asynchronous calls: calls that send a message and return immediately (they don't block the current process).
Though I understand the concepts of sync and async, when it comes to converting these concepts into code, I usually fail.
This link explains how deadlocks can be avoided by choosing async calls to send data to other processes and sync calls to send data to itself.
My question: How does one choose between sync and async calls when building a real-world application in Erlang/OTP?
My rules of thumb:
-> use asynchronous messages by default
-> if your process needs a result and cannot do anything useful in between, you may use a synchronous message (easier to read)
-> if your process needs a result and must not do anything in between, use a synchronous call
When you are deciding what to use, try asking yourself these questions:
Do I need the call's result?
Does it matter to me if the call fails? How will I know about it?
Is the receiving side able to handle async calls at the same speed as they arrive? What happens if it isn't?
One important point about sync/async calls is overload protection. Try to divide your program flow into concurrent entities that slow down proportionally as load increases, so you can estimate how many units of such work you can execute.
As we know, Dart is a single-threaded language. So according to the documentation, we can use Future/Stream to implement an async operation, which sends the time-consuming operation to the event queue.
What confuses me is where the event queue runs. Does it run on the Dart thread? If so, won't it block the app?
Another question: is the event queue a FIFO queue? Suppose I have two operations, one a networking request that takes a minute and the other a click event, and both are sent to the event queue.
Will the click event then be blocked by the networking request, because the queue is FIFO?
So where does the event queue actually run?
Thank you very much!
One thing to note is that asynchronicity and multithreading are two different things. Dart uses Futures and async/await to achieve asynchronicity, but Dart is still inherently a single-threaded language.
The way it works is when a Future is created (either manually or via calling an async method), that process is added to an event queue, as you read. Then, in the middle of all the synchronous execution, whenever there is a lull, the event queue can take priority. It can then go through the processes and figure out if any of the Futures have been completed. If so, the result is passed along to any other asynchronous processes that are waiting on that resource, if any.
This also means that, yes, if your program hangs in the middle of an asynchronous operation (with the easy example of an endless loop via while (true) {}), it will freeze the entire program, including the synchronous code and other asynchronous processes still waiting to resolve (even if the conditions allowing them to resolve have already occurred).
However, in your case, this won't be an issue. If you fire an asynchronous process in the form of a network request followed by another in the form of a "click event" (not sure what you're referring to, but I'll assume it's asynchronous as well), they will both be added to the event queue in that order. But if the click event resolves before the network request, the event queue will merely recognize that the network request Future has not yet resolved and will move on to the click event that has.
As a side note, it's worth noting that Dart does have a multi-threading capability, albeit in a fairly roundabout way. Dart has something called an Isolate, which isn't a thread but a completely separate child program. This means that the Isolate cannot access any of the same data in memory as the root program itself. However, data can be passed between the two using SendPorts and ReceivePorts. This makes using Isolates slightly more complicated than threads, but it also means that, if no memory is shared, it virtually eliminates race conditions based on which thread accesses the memory first.
An app I am working on requires creating a container object on a server and inserting items into that container. I don't want to create the container object until the first item needs to be inserted. However, creating the container object requires some initialization that may take a little time. While that container is still initializing, the user can still send insertion requests that aren't handled because the container isn't ready yet. I have two main questions:
Should this be dealt with on the client or server side?
What is the best practice for dealing with this kind of issue?
Essentially, I need to ensure my initial createContainer data task is complete before any insertItem requests are sent.
Additional Information
An insertItem request is sent by clicking on a corresponding tableViewCell. The first tableViewCell a user clicks on sends a createContainer request that creates a container holding the first item.
For a container holding n items, the request should be sent in the following order:
createContainer(Container(with: item1))
insertItem(item2)
...
insertItem(itemN)
After the first request completes, the remaining n – 1 requests may complete in any order.
My Thoughts
It sounds like I want the createContainer request to be handled synchronously while the insertItem request should be handled asynchronously. I'm not sure if that is the best approach or even how to perform that appropriately, so any guidance would be greatly appreciated.
You can use an NSOperationQueue and multiple NSOperations to implement your desired behavior. An NSOperation instance can be dependent on the completion of another NSOperation instance:
dependencies
An array of the operation objects that must finish executing before the current object can begin executing.
For your example this would mean that the insertItem operations are dependent on the createContainer operation.
When you add all those operations to an NSOperationQueue, your createContainer operation will run first. When it has finished, the other operations will start running, as their dependencies are now satisfied. You can also control how many operations run concurrently using maxConcurrentOperationCount on NSOperationQueue.
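For illustration, here is a minimal sketch of that dependency setup in Swift, using the modern names (OperationQueue/BlockOperation rather than NSOperationQueue/NSBlockOperation); the item values and the block bodies are placeholders, not code from the question:

import Foundation

let queue = OperationQueue()
queue.maxConcurrentOperationCount = 4            // cap how many insertions run at once

let createContainer = BlockOperation {
    // create the container on the server here
}

let insertions = ["item2", "item3", "item4"].map { item -> Operation in
    let insert = BlockOperation {
        // insert `item` into the container here
    }
    insert.addDependency(createContainer)        // won't start until createContainer has finished
    return insert
}

queue.addOperation(createContainer)
queue.addOperations(insertions, waitUntilFinished: false)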
As you will be using an asynchronous API inside your NSOperations, you will need to implement a concurrent operation and handle the state changes yourself. The API Reference explains this in pretty good detail.
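As a rough sketch of what such a concurrent operation can look like in Swift (the class name and the URLSession call are illustrative, and locking around the state flags is omitted for brevity):

import Foundation

final class DownloadOperation: Operation {
    private let url: URL
    private var _executing = false
    private var _finished = false

    init(url: URL) {
        self.url = url
        super.init()
    }

    override var isAsynchronous: Bool { return true }
    override var isExecuting: Bool { return _executing }
    override var isFinished: Bool { return _finished }

    override func start() {
        guard !isCancelled else { finish(); return }
        willChangeValue(forKey: "isExecuting")
        _executing = true
        didChangeValue(forKey: "isExecuting")

        // Kick off the asynchronous work; the operation stays "executing"
        // until the completion handler marks it finished.
        URLSession.shared.dataTask(with: url) { data, _, error in
            // handle data / error here
            self.finish()
        }.resume()
    }

    private func finish() {
        willChangeValue(forKey: "isExecuting")
        willChangeValue(forKey: "isFinished")
        _executing = false
        _finished = true
        didChangeValue(forKey: "isExecuting")
        didChangeValue(forKey: "isFinished")
    }
}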
Check out the API Reference for NSOperation for further information.
There is also a nice NSHipster article on NSOperations.
Adding to the NSOperationQueue answer, it's sometimes difficult to manually manage all the state changes that an NSOperation requires to handle something asynchronous like a network call.
To simplify that, you can use a Swift library called Overdrive. It's an amazing library in which you simply subclass its Task class and write your network code in the run() function. When you're done, you simply call self.finish to finish the task. For example, you can create a simple download task and then just add it to the queue.
You can also add dependencies between tasks, which basically solves your use case.
Hope this helps.
I am in the process of converting a JavaScript-based hybrid app to a native iOS app. When I started developing the app with JavaScript, I was disappointed to find out that if you want to make an HTTP request, you have to do it asynchronously. I tried to get around this in various ways, basically:
var done = false;
$.post(url, data, function() { done = true; });
while (!done) {}  // busy-wait: blocks the single-threaded event loop, so the callback never gets a chance to run
//Continue
But I came to find that this is ugly and just plain bad practice, so I got over it and just did it asynchronously.
So when I started with iOS I was excited with the idea that I might be able to do it synchronously, but again I was disappointed to find that the recommended practices are asynchronous, favoring closures or delegates to handle responses.
My question has two parts:
Why is it such common practice in almost every case for HTTP requests to be made asynchronously instead of synchronously?
Is there a way to make synchronous requests in iOS that isn't ugly or problematic?
Essentially, I've always wanted to be able to do something like:
var response = SubmitHTTPPostRequest(url, data)
Is this not really a thing? I never learned this kind of thing in school, so I apologize if this is a rudimentary question. I've just never understood why this is the way it's typically done.
You need to understand the process from sending a request to getting a response. The request will most likely go through some network adapter, to some server, back to the adapter, and then back to your CPU. There is generally no case where only one processor is involved; in the case I described there are three, but usually there are more. That means true synchronisation, in the sense of doing all the work in one process, is impossible, since multiple processors are involved. The path to synchronisation (as already mentioned) is for your current thread to wait. I cannot agree that this will freeze your UI, but it will freeze your thread (which freezes the UI if it is the main thread). Still, putting the whole process into another thread that waits for the response produces many other issues and questions, such as "should I create a thread for each request?" and "what about memory consumption if responses take too long to return?"...
I can understand that you want this synchronisation so you can do the whole operation in a single method, but in the end that is exactly what makes ugly code. Your method then consists of creating the request, getting the response, processing the response, and processing the received data, all in one place. This might seem like a good idea at the beginning, but when the method becomes too long you will want to refactor it into at least three methods, which by coincidence is exactly what you need to do with an asynchronous request anyway. So to answer your second question: very unlikely; the asynchronous approach looks much less ugly.
What you should do, and what is done in most cases, is create some class that handles your requests and responses, so that from the UI part of your code you only need to make a single call. Let's say you have a table view that displays a list of your friends received from some social network. When you first come to this list, you show an activity indicator view to tell the user the data is loading, then send an asynchronous request to get the friends, not caring when or whether the response will return; when the response is received, you simply remove the activity indicator and reload the table view with the newly received data. I hope you can see this is very elegant code, and by doing it this way you also enable the user to cancel the request by pressing back.
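A hedged sketch of that pattern in Swift, using URLSession (the names friendsURL, activityIndicator, tableView, friends and the Friend model are assumed for illustration):

func loadFriends() {
    activityIndicator.startAnimating()
    URLSession.shared.dataTask(with: friendsURL) { data, _, error in
        // The completion handler runs off the main thread; hop back for UI work.
        DispatchQueue.main.async {
            self.activityIndicator.stopAnimating()
            guard let data = data, error == nil else { return }
            self.friends = (try? JSONDecoder().decode([Friend].self, from: data)) ?? []
            self.tableView.reloadData()
        }
    }.resume()
}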
So the main reason for making requests asynchronous is to avoid blocking threads, because blocking can generate multiple issues, including blocking the main thread, which blocks the UI; and if the main thread is blocked for too long, the application will be killed by iOS (the watchdog). And the reasons for doing it synchronously? In the long term I cannot think of any; you should always break operations into multiple methods and use callbacks.
First of all, you should be very clear about the terms synchronous and asynchronous.
When a synchronous request is sent, the caller has to wait for the request to complete.
An asynchronous request does not wait for completion.
As per a Stack Overflow answer I once read:
When an HttpHandler is called, a thread pool thread is used to run that request and the same thread is used to process the entire request. If that request calls out to a database or another web service or anything else that can take time, the thread pool thread waits. This means thread pool threads spend time waiting on things when they could be used to process other requests.
In contrast, when an HttpAsyncHandler is used, a mechanism exists that allows the request to register a callback and return the thread pool thread to the pool before the request is fully processed. A thread pool thread starts some processing for the request, and once the asynchronous work has been kicked off, the thread that was processing the HTTP request is returned to the pool to process another HTTP request.
Your answers:
1. Because an asynchronous request does not wait for the task to complete: it sends the request, and in the meantime the thread can perform other tasks without waiting. I use ASIHTTPRequest in my iOS app.
2. We can send requests synchronously, but it is not common practice these days.
I'm developing a network-based iOS app that downloads JSON data from the server and processes it. Both the downloading task and the processing task can take a significant time to complete, so I don't want to perform either on the main thread.
I think there are 2 ways to do this:
Perform asynchronous loading using NSURLConnection, and in the connectionDidFinishLoading: delegate method use GCD (say) to do the processing in the background.
Use GCD's dispatch_async (say) to start work in the background, use NSURLConnection's sendSynchronousRequest:returningResponse:error: to download the data synchronously, do the processing of the data, and then dispatch UI updates back to the main thread.
I think the 2nd method would be easier to write and would produce cleaner code, especially if one "download/process data" task involves multiple sequential service calls for the data download. So rather than execution going like:
main (start) -> background (download) -> main (NSURLConnectionDelegate method) -> background (data processing) -> main (UI update)
We would have:
main (start) -> background (download) -> background (data processing) -> main (UI update)
which seems to be cleaner to me.
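For what it's worth, a minimal sketch of method 2 in Swift might look like the following; jsonURL, parse(_:) and updateUI(with:) are assumed names, and the blocking download is shown with Data(contentsOf:) purely for illustration (the synchronous NSURLConnection call would sit in the same place):

DispatchQueue.global(qos: .userInitiated).async {
    guard let data = try? Data(contentsOf: jsonURL) else { return }   // blocking download, off the main thread
    let model = parse(data)                                           // heavy processing, still in the background
    DispatchQueue.main.async {
        updateUI(with: model)                                         // UI update back on the main thread
    }
}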
I found 2 similar questions: Good pattern for Internet requests with Grand Central Dispatch?
And
NSURLConnection and grand central dispatch
And the answers to both seem to suggest using something conceptually similar to method 1.
Is there no proper way to achieve what's described in method 2?
Thanks in advance!
I would not be inclined to pursue option #2. Although it enjoys a certain simplicity, sendSynchronousRequest does not afford progress updates during the download, the ability to cancel the request, or other more complicated scenarios. An NSURLConnectionDataDelegate approach gives you far more control over the network requests.
This question presumes GCD-based patterns, but I think operation queue patterns merit consideration. You can marry the control offered by the NSURLConnectionDataDelegate methods with cancelable operations that encapsulate the network request. When you start to get more sophisticated, you can start employing concurrent requests, but also constrain the degree of concurrency (e.g. not more than five concurrent requests).
I'd suggest taking a look at AFNetworking. Maybe you don't want to use that framework, but I'd still take a look at the operation-queue-based patterns it employs. I'd personally use that pattern over either of the aforementioned GCD approaches.
Does anyone know of any good resources that fully explain how functions and results will fire in an Adobe AIR app where multiple things are happening at once?
As a simple test, I've created a single service that I just keep changing the url of, then issuing a send(). It seems that no matter how many send() calls I put in, all of these get executed before the 'result' eventListener function gets called for the first time.
Is this how it works? i.e. the current function gets fully executed, with the async returns queueing up to be processed after AIR has finished what it's currently doing.
Likewise, if the user does something while all this is going on, I presume their request goes to the back of the queue as well?
All that makes sense, but I'm just wondering if it's documented anywhere.
While I'm at it, is it recommended practice to reuse the same HTTPService in this way, or is it better to create one for each concurrent transaction? Just because it works doesn't mean it's the right thing to do...
I'm not aware of any documentation that explains this, but I can confirm that code blocks get executed before async calls are made, or at least before their result is processed. If it didn't work that way, you would, for instance, not always be able to attach a responder to a token of a service call, because the result might already have been processed.
var token:AsyncToken = myService.someMethod();
token.addResponder(new Responder(resultHandler, faultHandler));
Developers coming from other platforms find this strange as they would expect the assignment of the responder to be too late.
So while I don't have an official explanation about the technical details inside the Flash Player, I can assure that it works this way.
If the user does something while a call is pending, the new request will indeed just be added as a new asynchronous call. Note that we can't really speak of a queue, as there is no guarantee that the response of the first call comes in before the response of the second call. This depends on how much time the actual requests take.
You can perfectly reuse an HTTPService instance.
PS: Based on this, we were able to build the Operation API in Spring ActionScript. It is basically an API that allows you to execute asynchronous processes in a uniform way, without having to worry about the details of the actual async process.
The following code executes an async process and attaches a handler to it. This is also something that puzzles many developers at first, for reasons similar to the asyncToken situation.
var operation:IOperation = doSomeOperation();
operation.addCompleteListener(aCompleteHandler);
operation.addErrorListener(anErrorHandler);