What is the difference between async and streams, and where should we use streams instead of async, in the Dart language?
As described in the official documentation, a stream represents a sequence of data.
Async execution is registering a callback that is called when some other computation completes.
This can be an operating system call like file.readAsString(), or an HTTP request to a server, where the client continues executing (rendering the UI or doing other things), and when the response from the server arrives, your code gets called to process the response.
In Dart you usually get a Future back from such an async call, and you can register a callback on it using .then(/* pass callback here */).
async and await are syntactic sugar so you don't need to clutter your code with .then(...).then(...).
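As a minimal sketch of both styles (fetchGreeting and its delay are made up here, standing in for a real async call such as an HTTP request):

import 'dart:async';

// Hypothetical async operation standing in for file.readAsString() or an
// HTTP request; the name and the delay are made up for illustration.
Future<String> fetchGreeting() =>
    Future.delayed(const Duration(milliseconds: 100), () => 'hello');

void withThen() {
  // Register a callback that runs when the Future completes.
  fetchGreeting().then((value) => print('then: $value'));
}

Future<void> withAwait() async {
  // async/await is sugar over the same Future machinery.
  final value = await fetchGreeting();
  print('await: $value');
}

Future<void> main() async {
  withThen();
  await withAwait();
}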
A stream can be sync or async, but async means something different here than the async explained above.
A stream is similar to a Future in some ways, but the callback can be called more than once if multiple events are emitted, until the sender or receiver closes the stream.
An async execution completes a Future once when it's done, and that's it.
A stream can also be seen as an iterable, like an array, except that the items are pushed instead of pulled.
Another main difference is that many operators are available for streams: to map a stream, to fork and join multiple streams, and much more.
Many of these operators are reminiscent of the methods available on collections like arrays because, as mentioned, a stream has similarities to iterables.
Streams compose well, and with the set of available operators, streams allow a kind of declarative programming that can be quite powerful, where a lot can be achieved with a few streams and operators combined well.
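As a small illustration of that composition (the numbers here are made up):

Future<void> main() async {
  // A stream pushes a sequence of values to the listener.
  final source = Stream<int>.fromIterable([1, 2, 3, 4, 5, 6]);

  // Operators compose: where and map each return a new stream,
  // much like the methods available on collections.
  final evensDoubled = source.where((n) => n.isEven).map((n) => n * 2);

  // Unlike a Future, the body below runs once per event until the
  // stream is done.
  await for (final n in evensDoubled) {
    print('got $n'); // prints 4, 8, 12
  }
}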
Related
I have a system, connected to financial markets, that makes a very heavy use of events.
All the code is structured as a cascade of events with filters, aggregations, etc. in between.
Originally the system was written in C# and then ported to F# (which in retrospect was a great move), and the events in the C# code got replaced by events in F# without giving it much thought.
I have heard about the observer pattern, but I haven't really gone through the topic. And recently, through some random browsing, I read about F#'s MailboxProcessor.
I read this: Difference between Observer Pattern and Event-Driven Approach, and I didn't get it, but apparently over 150 people voted that the answer wasn't too clear either :)
In an article like this: https://hackernoon.com/observer-vs-pub-sub-pattern-50d3b27f838c it seems like the observer pattern is strictly identical to events...
At first glance, they seem to be solving the same kind of problems, just with different interfaces, but that got me thinking about two questions:
Is the mailbox processor really a thing people use? It seems to appear mostly in older documentation, and in the packages I'm using, I haven't come across any that use it.
Regarding the observer pattern, only one package, across the sizeable number we're using, makes internal use of it; everything else just uses basic events.
Are there specific use cases fitting the Observable pattern and the MailboxProcessor? Do they have features that are unique, or are they just syntactic help around events in the end?
As simplified as possible:
Mailbox
This is a minimal implementation of the actor model.
You post messages to a queue, and your loop reads the messages from the queue, one by one. Maybe it posts to another mailbox or it does something with the messages.
Any action can only take place when a message is received.
Posting to the queue is non-blocking, i.e., no back-pressure.
All exceptions are caught and exposed as an event on the mailbox. They are expected to be handled by the actor above it.
Other actor frameworks provide features like supervisors, contracts, failover, etc.
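The answer above describes F#'s MailboxProcessor; this is not its API, but as a rough analogy in Dart (the language from the first answer), the same shape can be sketched with a StreamController playing the role of the queue. The Mailbox class and its names are made up for illustration:

import 'dart:async';

class Mailbox<T> {
  final _queue = StreamController<T>();

  Mailbox(Future<void> Function(T message) handle) {
    // The receive loop: messages are handled strictly one at a time.
    () async {
      await for (final message in _queue.stream) {
        try {
          await handle(message);
        } catch (error) {
          // A real actor framework would surface this to a supervising
          // actor; here it is only printed.
          print('mailbox error: $error');
        }
      }
    }();
  }

  // Posting never blocks the sender, i.e. there is no back-pressure.
  void post(T message) => _queue.add(message);
}

Future<void> main() async {
  final mailbox = Mailbox<String>((msg) async { print('received: $msg'); });
  mailbox.post('hello');
  mailbox.post('world');
  await Future<void>.delayed(const Duration(milliseconds: 10));
}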
Events
Events are a language supported callback mechanism.
It's a simple implementation. You register a callback delegate, and when the event is raised, your delegate is called.
Delegates are called in the order they are added.
Events are blocking and synchronous. If one delegate blocks, the rest are delayed.
Events are about writing code that responds to events, as opposed to what came before, which was polling.
The handler for an event is usually the final end-point for that event, and it usually has side-effects.
Sharing a handler is common. For example, ten buttons might have the same function handling clicks, because the sender of the event is known.
You handle exceptions yourself, typically in the handler code.
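Dart has no built-in event keyword, but as a rough sketch of the mechanism described above (the Event class below is made up for illustration, not any library's API):

class Event<T> {
  final List<void Function(T)> _handlers = [];

  // Register a callback; handlers are kept in the order they were added.
  void add(void Function(T) handler) => _handlers.add(handler);

  // Raising the event is synchronous and blocking: each handler runs to
  // completion before the next one is called.
  void raise(T arg) {
    for (final handler in _handlers) {
      handler(arg); // exceptions propagate to the raiser unless the handler catches them
    }
  }
}

void main() {
  final onClick = Event<String>();
  // Sharing one handler among several senders is common.
  void logClick(String sender) => print('$sender clicked');
  onClick.add(logClick);
  onClick.add((sender) => print('second handler for $sender'));
  onClick.raise('buttonA');
}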
Observables
There's a source (Observable) which you can subscribe to with a sink (Observer). An observable represents a bounded or unbounded stream of values. An unbounded stream (an Observable which never completes) seems similar to an event, but there are several important properties of Observables.
An Observable emits a series of notifications, which follows this contract:
OnNext* (OnError|OnCompleted)?
All notifications are serialized
Notifications may or may not be synchronous. There's no guarantee of back-pressure.
The value of Observables lies in the fact that they are composable.
An observable represents a stream of future notifications; operators act to transform this stream.
This approach is sometimes called complex event processing (CEP).
Exception handling is part of the pipeline, and there are many combinators to deal with it.
You typically never implement an Observer yourself. You use combinators to set up a pipeline which models the behavior you want.
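Dart's Stream (from the first answer) is close in spirit to an Observable, so here is a sketch of the compositional style: the pipeline is assembled from combinators, and errors flow through the pipeline rather than being handled at each call site. This is plain Dart, not Rx, and unlike the Rx contract above, a Dart stream is not terminated by an error event:

Future<void> main() async {
  // A bounded source: a handful of values followed by completion.
  final source = Stream<int>.fromIterable([0, 1, 2, 3, 4, 5]);

  final pipeline = source
      .map((n) => n.isOdd ? throw StateError('odd value $n') : n)
      .handleError((Object e) => print('handled in the pipeline: $e'))
      .map((n) => n * 10);

  // The "observer" is never written by hand; we just consume the pipeline.
  await for (final n in pipeline) {
    print('next: $n'); // prints 0, 20, 40
  }
  print('completed');
}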
There are many usages of the Fuseable interface in the Reactor source code, but I can't find any reference explaining what it is. Could someone explain its purpose?
The Fuseable interface, and the interfaces nested inside it, define the contracts used for stream fusion. Stream fusion is a reactive-streams optimisation.
Without any such optimisation (in "normal" execution if you will), each reactive operator:
Subscribes to a previous operator in the chain
Is notified when the operator it subscribed to emits a result or completes
Performs its operation
Notifies its subscribers
...and then the cycle repeats for all operators. This is fantastic for making sure everything stays non-blocking, but all of those asynchronous calls come with some amount of overhead.
"Stream fusion" (or "operator fusion") significantly reduces this overhead by performing two or more of the operations in one chunk (fusing them together as one unit), passing values between them using a Queue or similar rather than via subscriptions, eliminating this overhead. It's not always possible of course - it can't be done this way if running in parallel, when certain side effects come into play, etc. - but a neat optimisation when it is possible.
A dart:html.WebSocket opens once and closes once. So I'd expect that the onOpen and onClose properties would be Futures, but in reality they are Streams.
Of course, I can use socket.onClose.first to get a Future. But the done property on the dart:io version of WebSocket is a Future as expected, so I'm wondering if I'm missing something.
Why use Streams and not Futures in dart:html?
In a word: transparency. The purpose of dart:html is to provide access to the DOM of the page, and so it mirrors the underlying DOM constructs as closely as possible. This is in contrast with dart:io, where the goal is to provide a convenient server-side API rather than to expose some underlying layer.
Sure, as a consumer of the API you would expect open and close to be fired only once, whereas message would be fired multiple times, but at the root of things, open, close, message, and error are all just events. And in dart:html, DOM events are modeled as streams.
And actually, a WebSocket could very well fire multiple open events (or close events). The following is definitely a contrived example, but consider this snippet of JavaScript:
var socket = new WebSocket('ws://mysite.com');
socket.dispatchEvent(new Event('open'));
socket.dispatchEvent(new Event('open'));
socket.dispatchEvent(new Event('open'));
How would a Dart WebSocket object behave in a situation like this, if onOpen were a Future rather than a Stream? Of course I highly, highly doubt this would ever surface out there in the "real world". But the DOM allows for it, and dart:html should not be making judgment calls trying to determine which situations are likely and which are not. If it's possible according to the spec, dart:html should reflect that. Its role is simply to pass through the behavior - as transparently as possible - and let the consumer of the API decide what cases they need to handle and which they can ignore.
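In practice the mismatch is easy to bridge: if you only care about the first open or close event, .first turns the Stream into a Future. A small sketch (the URL is a placeholder):

import 'dart:html';

Future<void> connect() async {
  final socket = WebSocket('ws://example.com/socket'); // placeholder URL
  // onOpen and onClose are Streams of DOM events; .first gives a Future
  // that completes on the first occurrence.
  await socket.onOpen.first;
  socket.send('hello');
  await socket.onClose.first;
  print('socket closed');
}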
Synchronous calls: calls that send and expect a reply (they block the current process).
Asynchronous calls: calls that send and do not wait (they don't block the current process).
Though I understand the concept of what sync and async are, when it comes to turning these concepts into code, I usually fail.
This link explains how deadlocks can be avoided by choosing async calls to send data to other processes and sync calls to send data to itself.
My question: how can a person choose between sync and async calls when making a real-world application in Erlang/OTP?
My rules of thumb:
-> By default, use asynchronous messages.
-> If your process needs a result and cannot do anything useful in between, you may use a synchronous message (easier to read).
-> If your process needs a result and must not do anything in between, use a synchronous call.
When you are deciding what to use, try asking yourself these questions:
Do I need the call's result?
Is it important to me if the call fails? How do I find out about that?
Is the receiving side capable of handling all the async calls at the same speed as they arrive? What will happen if it isn't?
One important point about sync/async calls is overload protection. Try to divide your program flow into concurrent entities that slow down proportionally as load increases, so you can estimate how many units of such work you can execute.
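The question is about Erlang/OTP, but the same call-versus-cast distinction can be sketched in Dart with isolates. This is not gen_server, and the message shapes below are made up for illustration: a cast just sends a message and moves on, while a call sends the message together with a reply-to port and waits for the answer.

import 'dart:async';
import 'dart:isolate';

void server(SendPort bootstrap) {
  final inbox = ReceivePort();
  bootstrap.send(inbox.sendPort);
  inbox.listen((message) {
    if (message is List && message.length == 2 && message[1] is SendPort) {
      // call-style message: [payload, replyTo]; the sender is waiting.
      (message[1] as SendPort).send('reply to ${message[0]}');
    } else {
      // cast-style message: nobody waits for an answer.
      print('server handled: $message');
    }
  });
}

Future<void> main() async {
  final bootstrap = ReceivePort();
  final isolate = await Isolate.spawn(server, bootstrap.sendPort);
  final serverPort = await bootstrap.first as SendPort;

  // Asynchronous ("cast"): send and continue immediately, no back-pressure.
  serverPort.send('fire and forget');

  // Synchronous ("call"): send, then wait for the reply before continuing.
  final replyPort = ReceivePort();
  serverPort.send(['ping', replyPort.sendPort]);
  print(await replyPort.first); // reply to ping

  replyPort.close();
  bootstrap.close();
  isolate.kill(priority: Isolate.immediate);
}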
My app includes a back-end server, with many transactions that must be carried out in the background. Many of these transactions require many bits of code to run in sequence, one after another.
For example, do a query, use the result to do another query, create a new back-end object, then return a reference to the new object to a view controller object in the foreground so that the UI can be updated.
A more specific scenario would be to carry out a sequence of AJAX calls in order, similar to this question, but in iOS.
This sequence of tasks is really one unified piece of work. I did not find existing facilities in iOS that allowed me to cleanly code this sequence as a "unit of work". Likewise I did not see a way to provide a consistent context for the "unit of work" that would be available across the sequence of async tasks.
I recently had to do some JavaScript and had to learn to use the Promise concept that is common in JS. I realized that I could adapt this idea to iOS and Objective-C. The results are here on GitHub. There is documentation, code, and unit tests.
A Promise should be thought of as a promise to return a result object (id) or an error object (NSError) to a block at a future time. A Promise object is created to represent the asynchronous result. The asynchronous code delivers the result to the Promise and then the Promise schedules and runs a block to handle the result or error.
If you are familiar with Promises on JS, you will recognize the iOS version immediately. If not, check out the Readme and the Reference.
I've used most of the usual suspects, and I have to say that for me, Grand Central Dispatch is the way to go.
Apple obviously cares enough about it to have rewritten a lot of their library code to use completion blocks.
IIRC, Apple has also said that GCD is the preferred way to implement multitasking.
I also remember that some of the previous options have been re-implemented using GCD under the hood, so if you're not already attached to something else, go GCD!
BTW, I used to find writing the block signatures a real pain, but if you just hit return when the placeholder is selected, it fills all that in for you. What could be sweeter than that?