A dart:html.WebSocket opens once and closes once. So I'd expect that the onOpen and onClose properties would be Futures, but in reality they are Streams.
Of course, I can use socket.onClose.first to get a Future. But the done property on the dart:io version of WebSocket is a Future as expected, so I'm wondering if I'm missing something.
Why use Streams and not Futures in dart:html?
In a word: transparency. The purpose of the dart:html package is to provide access to the DOM of the page, and so it mirrors the underlying DOM constructs as closely as possible. This is in contrast with dart:io where the goal is to provide a convenient server-side API rather than exposing some underlying layer.
Sure, as a consumer of the API you would expect open and close to be fired only once, whereas message would be fired multiple times, but at the root of things, open, close, message, and error are all just events. And in dart:html, DOM events are modeled as streams.
And actually, a WebSocket could very well fire multiple open events (or close events). The following is definitely a contrived example, but consider this snippet of JavaScript:
var socket = new WebSocket('ws://mysite.com');
socket.dispatchEvent(new Event('open'));
socket.dispatchEvent(new Event('open'));
socket.dispatchEvent(new Event('open'));
How would a Dart WebSocket object behave in a situation like this, if onOpen were a Future rather than a Stream? Of course I highly, highly doubt this would ever surface out there in the "real world". But the DOM allows for it, and dart:html should not be making judgment calls trying to determine which situations are likely and which are not. If it's possible according to the spec, dart:html should reflect that. Its role is simply to pass through the behavior - as transparently as possible - and let the consumer of the API decide what cases they need to handle and which they can ignore.
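If you do want the one-shot view on the Dart side, the adaptation the question already mentions is all it takes; a minimal sketch (the URL is a placeholder):

import 'dart:html';

void main() {
  final socket = WebSocket('ws://example.com/socket');

  // .first converts the first event of each stream into a Future.
  socket.onOpen.first.then((_) => print('opened'));
  socket.onClose.first.then((_) => print('closed'));
}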
Related
Please note that I am asking about a strictly Dart-only application; this does not concern Flutter in any way. "dartvm" refers to the Dart virtual machine.
As far as I understand, Dart's idea of reactive state is implemented through streams, and the responsibility for handling the lifetime of a stream object is given to the programmer: at runtime you can manipulate the stream as you see fit for your design by adding to it, listening to it, or disposing of it.
My question is this: is it necessary to call the dispose() method of a stream before my application quits? If so, how do I go about accomplishing that? Hooking into the VM state isn't well documented, and using ProcessSignal listeners is not portable. If I don't, does the GC handle this case? What's the best practice here?
Dart streams do not have a dispose method. Therefore you don't need to call it.
But just to give a little more detail ...
Dart streams are many things. Or rather, streams are pretty simple: they're just a way to provide a connection between code which provides events and code which consumes events. After calling listen, the stream object is no longer part of the communication; events and pushback go directly between the event source (possibly a StreamController) and the consumer (a StreamSubscription).
Event providers are many things.
Some events are triggered just by code doing things. There is no need to clean up after those; they're just Dart objects like everything else, they will die with the program, and they can be garbage collected earlier if no live code refers to them.
Some events are triggered by I/O operations on the underlying operating system. Those will usually be cleaned up when the program ends, because they are allocated through the Dart runtime system, and it knows how to stop them again.
It's still a good idea to cancel the subscription as soon as you don't need any more events. That way, you won't keep a file open too long and prevent another part of the program from overwriting it.
Some code might allocate other resources, not managed by the runtime, and you should take extra care to say when that resource is no longer needed.
You'll have to figure that out on a case-by-case basis, by reading the documentation of the stream.
For resources allocated through dart:ffi, you can also use NativeFinalizer to register a dispose function for the resource.
Generally, you should always cancel the subscription if you don't need any more events from a stream. That's the one thing you can do. If nothing else, it allows garbage collection to collect things a little earlier.
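A minimal sketch of that one thing, using a timer-backed stream as a stand-in for any runtime-managed event source:

import 'dart:async';

Future<void> main() async {
  // The periodic stream allocates a timer through the Dart runtime.
  final ticks = Stream<int>.periodic(const Duration(milliseconds: 100), (i) => i);

  final subscription = ticks.listen(print);
  await Future<void>.delayed(const Duration(milliseconds: 350));

  // Cancel as soon as no more events are needed: this stops the underlying
  // timer and lets everything involved be garbage collected earlier.
  await subscription.cancel();
}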
I have a system, connected to financial markets, that makes a very heavy use of events.
All the code is structured as a cascade of events with filters, aggregations, etc in between.
Originally the system was written in C# and then ported to F# (which in retrospect was a great move), and events in the C# code got replaced by events in F# without giving it much thought.
I have heard about the observer pattern, but I haven't really gone through the topic. And recently, through some random browsing, I read about F#'s MailboxProcessor.
I read this: Difference between Observer Pattern and Event-Driven Approach and I didn't get it, but apparently over 150 people also voted that the answer wasn't too clear :)
In an article like this: https://hackernoon.com/observer-vs-pub-sub-pattern-50d3b27f838c it seems like the observer pattern is strictly identical to events...
At first glance, they seem to be solving the same kind of problems, just with different interfaces, but that got me thinking about two questions:
Is the mailbox processor really a thing being used? It seems to appear mostly in older documentation, and in the packages I'm using I haven't come across any that use it.
Regarding the observer pattern, only one package across the sizeable amount we're using makes internal use of it; everything else just uses basic events.
Are there specific use cases fitting the Observable pattern and the MailboxProcessor? Do they have features that are unique, or are they just syntactic help around events in the end?
As simplified as possible:
Mailbox
This is a minimal implementation of the actor model.
You post messages to a queue, and your loop reads the messages from the queue, one by one. Maybe it posts to another mailbox or it does something with the messages.
Any action can only take place when a message is received.
Posting to the queue is non-blocking, i.e., there is no back-pressure.
All exceptions are caught and exposed as an event on the mailbox; they are expected to be handled by the actor above it.
Other actor frameworks provide features like supervisors, contracts, failover, etc.
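For reference, this is roughly the minimal shape of that with F#'s MailboxProcessor; a sketch only, with a made-up message type:

let agent =
    MailboxProcessor<string>.Start(fun inbox ->
        let rec loop () =
            async {
                // Any action only takes place when a message is received.
                let! msg = inbox.Receive()
                printfn "processing %s" msg
                return! loop ()
            }
        loop ())

// Exceptions thrown inside the loop surface through the mailbox's Error event.
agent.Error.Add(fun ex -> printfn "agent failed: %s" ex.Message)

agent.Post "order-1"   // Post is fire-and-forget: non-blocking, no back-pressure.
agent.Post "order-2"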
Events
Events are a language supported callback mechanism.
It's a simple implementation. You register a callback delegate, and when the event is raised, your delegate is called.
Delegates are called in the order they are added.
Events are blocking and synchronous. If one delegate blocks, the rest are delayed.
Events are about writing code to respond to events, as opposed to what came before them, which was polling.
The handler for an event is usually the final end-point for that event, and it usually has side-effects.
Sharing a handler is common. For example, ten buttons might have the same function handling clicks, because the sender of the event is known.
You handle exceptions by yourself, typically in the handler code.
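In F# terms, a minimal sketch of that behavior (the event name and payload are made up):

let priceChanged = Event<decimal>()

// Handlers are plain callbacks, called in the order they were added.
priceChanged.Publish.Add(fun p -> printfn "handler A saw %M" p)
priceChanged.Publish.Add(fun p -> printfn "handler B saw %M" p)

// Triggering is synchronous: it blocks until every handler has returned.
priceChanged.Trigger 101.25m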
Observables
There's a source (Observable) which you can subscribe to with a sink (Observer). An observable represents a bounded or unbounded stream of values. An unbounded stream (an Observable which never completes) seems similar to an event, but there are several important properties to Observables.
An Observable emits a series of notifications, which follows this contract:
OnNext* (OnError | OnCompleted)?
All notifications are serialized
Notifications may or may not be synchronous. There's no guarantee of back-pressure.
The value of Observables lies in the fact that they are composable.
An observable represents a stream of future notifications; operators act to transform this stream.
This approach is sometimes called complex event processing (CEP).
Exception handling is part of the pipeline, and there are many combinators to deal with it.
You typically never implement an Observer yourself. You use combinators to set up a pipeline which models the behavior you want.
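A small sketch using the Observable combinators that ship with FSharp.Core (in a real system you would more likely reach for Rx, but the shape is the same; the values are made up):

let ticks = Event<decimal>()

// Compose a pipeline with combinators instead of writing an Observer by hand.
let subscription =
    ticks.Publish
    |> Observable.filter (fun p -> p > 100m)            // keep only interesting prices
    |> Observable.map (fun p -> sprintf "ALERT %M" p)   // transform each notification
    |> Observable.subscribe (printfn "%s")

ticks.Trigger 99.0m     // filtered out, nothing printed
ticks.Trigger 105.5m    // prints "ALERT 105.5"

subscription.Dispose()  // tear the pipeline down when it is no longer needed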
I want to see if the other side gave up and closed the sink of a StreamChannel, without actually reading the messages yet.
(I'm going to be handing the stream to someone else, so I can't listen() to it, since you're only allowed to listen once per stream.)
[posting for a friend, credit to them for asking the question]
In short, no.
There is no concept of "giving up". If you put events into a non-broadcast stream, they'll stay there until someone listens to the stream (which is why you shouldn't put data there until someone listens; you're just wasting memory).
That includes the done event, and you won't get to the done event without first reading all the preceding events. That's the core abstraction of a stream: a source of events accessed in order; it's not done until it's actually done.
What I think you are looking for is a "side channel" that can communicate information about the stream without going through the stream (that is, out-of-band).
Something like that can surely be built - in about one gazillion different ways, depending on what you want - but it's just not something that a Stream supports by default, nor does a StreamChannel, if I read it correctly (I have never used a StreamChannel myself).
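Just to make the idea concrete, here is one of those gazillion possible shapes; every name in it is made up for the sketch:

import 'dart:async';

/// Pairs a stream with an out-of-band "the producer gave up" signal.
class SourceWithCloseSignal<T> {
  final _events = StreamController<T>();
  final _gaveUp = Completer<void>();

  /// Hand this to whoever will eventually listen.
  Stream<T> get stream => _events.stream;

  /// Completes as soon as the producer closes, even if buffered events
  /// have not been read from [stream] yet.
  Future<void> get onProducerClosed => _gaveUp.future;

  void add(T event) => _events.add(event);

  Future<void> close() {
    if (!_gaveUp.isCompleted) _gaveUp.complete();
    return _events.close();
  }
}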
I'm developing an application in Lazarus that needs to check whether there is a new version of an XML file on every Form_Create.
How can I do this?
I have used the synapse library in the past to do this kind of processing. Basically, include httpsend in your uses clause, and then call httpgetbinary(url, xmlstream) to retrieve a stream containing the resource. I wouldn't do this in the OnCreate though, since it can take some time to pull the resource. You're better served by placing this in another thread that can make a Synchronize call back to the form to enable updates, or set an application flag. This is similar to how the Chrome browser displays updates on the about page: a thread is launched when the form is displayed to check whether there are updates, and when the thread completes it updates the GUI. This allows other tasks to occur (such as a small animation, or the ability for the user to close the dialog).
Synapse is not a visual component library, it is a library of blocking functions that wrap around most of the common internet protocols.
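A rough sketch of that thread-based approach, assuming FPC/Lazarus and the Synapse httpsend unit; the URL and the notification body are placeholders:

uses
  Classes, httpsend; // httpsend comes with the Synapse library

type
  TXmlCheckThread = class(TThread)
  private
    FXml: TMemoryStream;
    procedure NotifyForm; // executed on the main thread via Synchronize
  protected
    procedure Execute; override;
  end;

procedure TXmlCheckThread.Execute;
begin
  FXml := TMemoryStream.Create;
  try
    // Blocking Synapse call: downloads the resource into the stream.
    if HttpGetBinary('http://example.com/data.xml', FXml) then
      Synchronize(@NotifyForm);
  finally
    FXml.Free;
  end;
end;

procedure TXmlCheckThread.NotifyForm;
begin
  // Safe place to update the GUI or set an application flag.
end;

// Started from the form instead of downloading inside OnCreate:
//   TXmlCheckThread.Create(False);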
You'll need to read up on FPC Networking, lNet looks especially useful for this task.
I worked my way through the Prism guidance and think I got a grasp of most of their communication vehicles.
Commanding is very straightforward, so it is clear that the DelegateCommand will be used just to connect the View with its Model.
It is somewhat less clear when it comes to cross-module communication, specifically when to use event aggregation over composite commands.
The practical effect is the same e.g.
You publish an event -> all subscribers receive notice and execute code in response
You execute a composite command -> all registered commands get executed and with it their attached code
Both work along the lines of "fire and forget", that is they don't care about any responses from their subscribers after firing the event/executing the commands.
I have trouble seeing a practical difference in usage although I understand that the implementation of both (under the hood) is very different.
So should we think about what each actually means? An event: is that when something happens (an event occurs), something the user did not directly request, like "web request completed"?
And Command? Does that mean a user clicked something and thus issued a command to our application, requesting a service directly?
Is that it? Or are there other ways to determine when to use one of these communication vehicles over the other? The guidance, although one of the best documentations I have read, gives no specific explanation.
So I hope people involved in/using Prism can help in shedding some light on this.
There are two primary differences between these two.
1. CanExecute for Commands. A Command can say whether or not it is valid for execution by calling Command.RaiseCanExecuteChanged() and having its CanExecute delegate return false. If you consider the case of a "Save All" CompositeCommand compositing several "Save" commands, with one of the commands saying that it can't execute, the Save All button will automatically disable (nice!).
2. EventAggregator is a messaging pattern and Commands are a commanding pattern. Although CompositeCommands are not explicitly a UI pattern, they are implicitly so (generally they are hooked up to an input action, like a Button click). EventAggregator is not this way: any part of the application can effectively raise an EventAggregator event - background processes, ViewModels, etc. It is a brokered avenue for messaging across your application, with support for things like filtering, background thread execution, etc.
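A compressed sketch of both points, using the newer Prism namespaces (older releases live under Microsoft.Practices.* and use CompositePresentationEvent<T> instead of PubSubEvent<T>); every name other than the Prism types themselves is made up:

using Prism.Commands;
using Prism.Events;

public class Order { }

public class OrderFilledEvent : PubSubEvent<Order> { }

public class ShellViewModel
{
    public CompositeCommand SaveAllCommand { get; } = new CompositeCommand();

    public ShellViewModel(IEventAggregator eventAggregator)
    {
        // 1. CanExecute: a "Save All" button bound to SaveAllCommand disables
        //    automatically while any registered child command returns false.
        var saveDocA = new DelegateCommand(() => { /* save A */ }, () => false);
        var saveDocB = new DelegateCommand(() => { /* save B */ }, () => true);
        SaveAllCommand.RegisterCommand(saveDocA);
        SaveAllCommand.RegisterCommand(saveDocB);

        // 2. Messaging: anything holding the aggregator can publish or subscribe,
        //    with optional thread dispatch and filtering.
        eventAggregator.GetEvent<OrderFilledEvent>()
                       .Subscribe(order => { /* react to the fill */ }, ThreadOption.UIThread);

        eventAggregator.GetEvent<OrderFilledEvent>().Publish(new Order());
    }
}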
Hope this helps explain the differences. It's more difficult to say when to use each, but generally I use the rule of thumb that if it's user interaction that raises the event, use a command; for anything else, use EventAggregator.
Hope this helps.
Additionally, there is one more important difference: With the current implementation, an event from the EventAggregator is asynchronous, while the CompositeCommand is synchronous.
If you want to implement something like "notify that event X happened; do something that relies on the event handlers for event X to have executed", you either have to do something like Application.DoEvents() or use CompositeCommands.