EventAggregator vs CompositeCommand - communication

I worked my way through the Prism guidance and think I got a grasp of most of their communication vehicles.
Commanding is very straightforward, so it is clear that the DelegateCommand will be used just to connect the View with its Model.
It is somewhat less clear when it comes to cross-module communication, specifically when to use EventAggregation over CompositeCommands.
The practical effect is the same, e.g.:
You publish an event -> all subscribers receive notice and execute code in response
You execute a composite command -> all registered commands get executed and with it their attached code
Both work along the lines of "fire and forget", that is they don't care about any responses from their subscribers after firing the event/executing the commands.
I have trouble seeing a practical difference in usage although I understand that the implementation of both (under the hood) is very different.
So should we think of what each actually means? An Event: is that when something happens (an event occurs), something the user did not directly request, like "web request completed"?
And a Command? Does that mean a user clicked something and thus issued a command to our application, requesting a service directly?
Is that it? Or are there other ways to determine when to use one of these communication vehicles over the other? The guidance, although some of the best documentation I have read, gives no specific explanation.
So I hope people involved in/using Prism can help in shedding some light on this.

There are two primary differences between these two.
1. CanExecute for Commands. A Command can say whether or not it is valid for execution by calling Command.RaiseCanExecuteChanged() and having its CanExecute delegate return false. If you consider the case of a "Save All" CompositeCommand compositing several "Save" commands, but one of the commands saying that it can't execute, the Save All button will automatically disable (nice!).
2. EventAggregator is a Messaging pattern and Commands are a Commanding pattern. Although CompositeCommands are not explicitly a UI pattern, they are implicitly so (generally they are hooked up to an input action, like a Button click). EventAggregator is not this way - any part of the application can effectively raise an EventAggregator event: background processes, ViewModels, etc. It is a brokered avenue for messaging across your application, with support for things like filtering, background thread execution, etc.
Hope this helps explain the differences. It's more difficult to say when to use each, but generally I use the rule of thumb that if it's user interaction that raises the event, use a command; for anything else, use EventAggregator.

Additionally, there is one more important difference: With the current implementation, an event from the EventAggregator is asynchronous, while the CompositeCommand is synchronous.
If you want to implement something like "notify that event X happened; do something that relies on the event handlers for event X to have executed", you either have to do something like Application.DoEvents() or use CompositeCommands.

Related

Dealing with Application.ProcessMessages

In our software we have issues with asynchronous execution of events. Sometimes one function will "break into" another one with all kinds of weird results.
I'm dealing here with user-input and timers that fire events.
The issue is that rather than executing the code tied to the events one by one, it is executed at the first possible moment that Delphi gives a window for it: Application.ProcessMessages. This causes problems in that sometimes half of function A gets done, then function B "breaks in" and completes, and after that the last half of function A gets done. This can give "surprising" results.
Are there good ways to deal with this?
Things I tried:
- Using a "busy flag"; this has some ups and downs, mostly that everything you do has to know about it.
- Removing every Application.ProcessMessages call where I can. This has pretty good results, but we rely on some 3rd-party components which, I found out, also call Application.ProcessMessages.
Next I was thinking of building some kind of "command queue" where I can receive my events and fire them in a FIFO fashion.
Apart from rebuilding everything we have from the ground up, are there other/better ways to tackle these issues?
The best way is to eliminate the call to Application.ProcessMessages. Most of the time there are other ways to do what Application.ProcessMessages is supposed to do. You'll need to take a closer look at why you need that call, and then find a better solution. For example, you don't need Application.ProcessMessages to update the UI. There are other ways to do that.
If a 3rd-party component is calling Application.ProcessMessages, then contact that vendor and ask them to replace the call with a better-suited function. If this is not an option, you can try workarounds like using that component in a thread (if possible), or creating an invisible modal window and executing the methods of the component inside the ShowModal call. That will at least avoid user-input messages. (The "invisible modal window" is a Form with BorderStyle=bsNone, size=1×1 and 100% transparency.)
Are there good ways to deal with this?
First eliminate all your use of ProcessMessages. As you discovered, it screws up timer event handlers when called from there. Used in other places, it is often subject to race conditions and may hide the real problem. Find out what that problem is and solve it.
But we're relying on some 3rd party components which I found out also fire application.processmessages.
Timer event handlers are supposed to do only short-running work. If you are calling ProcessMessages via a call to a 3rd-party library inside a timer event handler, eliminate that call. There is no other cure, except rewriting the library or calling it in another way.
Apart from rebuilding everything we have from the ground up, are there other/better ways to tackle these issues?
Normally you can do background work in threads as well, provided you follow the rule of not calling any VCL/RTL methods directly. Here it is not possible if the 3rd-party component is calling ProcessMessages.
If you can't alter the 3rd-party component, there is the possibility of posting a message to your form and putting the call in the method that handles this message. With a modern Delphi you could use DelayedAction by @MasonWheeler. But I really recommend you take the "hard" way and fix the 3rd-party lib instead.

OTP behaviours: gen_fsm; gen_event. Practical examples?

I have used both the supervisor and gen_server behaviours, and I can understand the practical uses for both of them. However, I don't really understand the use of the gen_fsm and the gen_event behaviours. Can someone clarify with practical examples?
Thanks in advance
One classic example of an FSM would be a lock with a timeout, which is mentioned in the manual.
Another example, which I implemented in my own experience, would be telephone lines, because a phone has states, like ringing, connected, disconnected etc., and some operations are allowed and some are not allowed in these states.
An example of gen_event is the logging used in https://github.com/basho/lager
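To make the telephone-line idea concrete, here is a rough gen_fsm sketch; the module, state and event names are my own illustration, not taken from the answer above.
-module(phone_line).
-behaviour(gen_fsm).

-export([start_link/0, incoming_call/0, pick_up/0, hang_up/0]).
-export([init/1, idle/2, ringing/2, connected/2,
         handle_event/3, handle_sync_event/4, handle_info/3,
         terminate/3, code_change/4]).

start_link() -> gen_fsm:start_link({local, ?MODULE}, ?MODULE, [], []).

%% API: each call sends an asynchronous event to the FSM.
incoming_call() -> gen_fsm:send_event(?MODULE, incoming_call).
pick_up()       -> gen_fsm:send_event(?MODULE, pick_up).
hang_up()       -> gen_fsm:send_event(?MODULE, hang_up).

init([]) -> {ok, idle, no_data}.

%% One function per state; only the listed events make sense in each state.
idle(incoming_call, Data) -> {next_state, ringing, Data};
idle(_Other, Data)        -> {next_state, idle, Data}.

ringing(pick_up, Data) -> {next_state, connected, Data};
ringing(hang_up, Data) -> {next_state, idle, Data};
ringing(_Other, Data)  -> {next_state, ringing, Data}.

connected(hang_up, Data) -> {next_state, idle, Data};
connected(_Other, Data)  -> {next_state, connected, Data}.

%% Boilerplate callbacks required by the behaviour.
handle_event(_Event, StateName, Data) -> {next_state, StateName, Data}.
handle_sync_event(_Event, _From, StateName, Data) -> {reply, ok, StateName, Data}.
handle_info(_Info, StateName, Data) -> {next_state, StateName, Data}.
terminate(_Reason, _StateName, _Data) -> ok.
code_change(_OldVsn, StateName, Data, _Extra) -> {ok, StateName, Data}.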
gen_fsm is a neat implementation of a finite state machine; you can do roughly the same thing that you do with a gen_server, and in addition easily manage the different states of your application (for example, in a game server: select a level, a table, modify player attributes, play, save, restore...).
gen_event is an easy way to dispatch events: your application sends all events to the gen_event manager knowing nothing about their potential usage, and you dynamically add and delete handlers with different behaviours (log to a file, to a database, display information in a graphical interface...). I have used this to get a graphical view of the process states and communication in my application, and a file log for performance analysis.
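As a rough illustration of that dispatch idea, a minimal gen_event handler might look like the sketch below; the module name, the file-logging behaviour and the usage lines are assumptions of mine, not taken from the answer.
-module(file_log_handler).
-behaviour(gen_event).

-export([init/1, handle_event/2, handle_call/2, handle_info/2,
         terminate/2, code_change/3]).

init(Filename) ->
    {ok, Fd} = file:open(Filename, [append]),
    {ok, Fd}.

%% Every event notified to the manager ends up here, one handler at a time.
handle_event(Event, Fd) ->
    io:format(Fd, "~p~n", [Event]),
    {ok, Fd}.

handle_call(_Request, Fd) -> {ok, ok, Fd}.
handle_info(_Info, Fd) -> {ok, Fd}.
terminate(_Reason, Fd) -> file:close(Fd).
code_change(_OldVsn, Fd, _Extra) -> {ok, Fd}.

%% Usage: the publisher only ever calls gen_event:notify/2.
%%   {ok, _Pid} = gen_event:start_link({local, my_events}),
%%   gen_event:add_handler(my_events, file_log_handler, "events.log"),
%%   gen_event:notify(my_events, {user_logged_in, 42}).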
You can find some good examples here:
"Event handlers" and "Finite State Machines"
gen_fsm:
The gen_fsm behaviour is somewhat similar to gen_server in that it is a specialised version of it. The biggest difference is that rather than handling calls and casts, we're handling synchronous and asynchronous events. Much like our dog and cat examples, each state is represented by a function. Again, we'll go through the callbacks our modules need to implement in order to work.
gen_event:
The gen_event behaviour differs quite a bit from the gen_server and gen_fsm behaviours in that you are never really starting a process. The gen_event behaviour basically runs the process that accepts and calls functions, and you only provide a module with these functions. This is to say, you have nothing to do with regards to event manipulation except give your callback functions in a format that pleases the event manager. All managing is done for free; you only provide what's specific to your application. This is not really surprising given OTP is, again, all about separating what's generic from specific.

Is there a way to log every gui event in Delphi?

The Delphi debugger is great for debugging linear code, where one function calls other functions in a predictable, linear manner, and we can step through the program line by line.
I find the debugger less useful when dealing with event driven gui code, where a single line of code can cause new events to be triggered, which may in turn trigger other events.
In this situation, the 'step through the code' approach doesn't let me see everything that is going on.
The way I usually solve this is to 1) guess which events might be part of the problem, then 2) add breakpoints or logging to each of those events.
The problem is that this approach is haphazard and time consuming.
Is there a switch I can flick in the debugger to say 'log all gui events'? Or is there some code I can add to trap events, something like
procedure GuiEventCalled(ev: TEvent);
begin
  Log(ev);     // log which event fired
  ev.Call();   // then run the original handler
end;
The end result I'm looking for is something like this (for example):
FieldA.KeyDown
FieldA.KeyPress
FieldA.OnChange
FieldA.OnExit
FieldB.OnEnter
This would take all the guesswork out of Delphi gui debugging.
I am using Delphi 2010
[EDIT]
A few answers suggested ways to intercept or log Windows messages. Others then pointed out that not all Delphi Events are Windows messages at all. I think it is these type of "Non Windows Message" Events that I was asking about; Events that are created by Delphi code. [/EDIT]
[EDIT2]
After reading all the information here, I had an idea to use RTTI to dynamically intercept TNotifyEvents and log them to the Event Log in the Debugging window. This includes OnEnter, OnExit, OnChange, OnClick, OnMouseEnter, OnMouseLeave events. After a bit of hacking I got it to work pretty well, at least for my use (it doesn't log Key events, but that could be added).
I've posted the code here
To use
Download the EventInterceptor Unit and add it to your project
Add the EventInterceptor Unit to the Uses clause
Add this line somewhere in your code for each form you want to track.
AddEventInterceptors(MyForm);
Open the debugger window and any events that are called will be logged to the Event Log
[/EDIT2]
Use the "delphieventlogger" Unit I wrote download here. It's only one method call and is very easy to use. It logs all TNotifyEvents (e.g. OnChange, OnEnter, OnExit) to the Delphi Event Log in the debugger window.
No, there's no generalized way to do this, because Delphi doesn't have any sort of "event type" that can be hooked in some way. An event handler is just a method reference, and it gets called like this:
if Assigned(FEventHandler) then
  FEventHandler(Self);
Just a normal method reference call. If you want to log all event handlers, you'll have to insert some call into each of them yourself.
I know it is a little bit expensive, but you can use Automated QA's (now SmartBear) TestRecorder as an extension to TestComplete (if you want this only on your system, TestComplete alone will do). This piece of software will track your GUI actions and store it in a script like language. There is even a unit that can be linked into your exe to make these recordings directly at the user's system. This is especially helpful when some users are not able to explain what they have done to produce an error.
Use WinSight to see the message flow in real time.
If you really want the program to produce a log, then override WndProc and/or intercept the messages in Application.
The TApplication.OnMessage event can be used to catch messages that are posted to the main message queue. That is primarily for OS-issued messages, not internal VCL/RTL messages, which are usually dispatched to WndProc() methods directly. Not all VCL events are message-driven to begin with. There is no single solution to what you are looking for. You would have to use a combination of TApplication.OnMessage, TApplication.HookMainWindow(), WndProc() overrides, SetWindowsHook(), and selective breakpoints/hooks in code.
Borland's WinSight tool is not distributed anymore, but there are plenty of third-party tools readily available that do the same thing as WinSight, such as Microsoft's Spy++, WinSpector, etc., for tracking and logging window messages in real time.
As an alternative, to debug the triggered events use the debugger Step Into (F7) instead of Step Over (F8) commands.
The debugger will stop on any available code line reached during the call.
You can try one of the AOP frameworks for Delphi. MeAOP provides a default logger that you can use. It won't tell you what is going on inside an event handler but it will tell you when an event handler is called and when it returns.

Is the process dictionary appropriate in this case?

I've read several comments here and elsewhere suggesting that Erlang's process dictionary was a bad idea and should die. Normally, as a total Erlang newbie, I'd just avoid it. However, in this situation my other options aren't great.
I have a main dispatcher function that looks something like this:
dispatch(State) ->
    receive
        {cmd1, Params} ->
            NewState = do_cmd1_stuff(Params, State),
            dispatch(NewState);
        {cmd2, Params} ->
            NewState = do_cmd2_stuff(Params, State),
            dispatch(NewState);
        BadMsg ->
            log_error(BadMsg),
            dispatch(State)
    end.
Obviously, my names are more meaningful to me, but that's the gist of it. Deep down in a function called by a function called by a function called by do_cmd2_stuff(), I want to send out messages to all my users telling them about something I've done. In order to do that, I need to get the list of users from the point where I send the messages. The user list doesn't lend itself easily to sticking in the global state, since that's just one data structure representing the only block of data on which I operate.
The way I see it, I have a couple unpleasant options other than using the process dictionary. I can send the user list through all the various levels of functions down to the very bottom one that does the broadcasting. That's unpleasant because it causes all my functions to gain a parameter, whether they really care about it or not.
Alternatively, I could have all the do_cmdN_stuff() functions return a message to send. That's not great either though, since sending the message may not be the last thing I want to do and it clutters up my dispatcher with a bunch of {Msg, NewState} tuples. Furthermore, some of the functions might not have any messages to send some of the time.
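For what it is worth, a sketch of how that second option could look against the dispatcher above; send_all/2 and users_of/1 are hypothetical helpers, not part of the original code.
dispatch(State) ->
    receive
        {cmd1, Params} ->
            {Msgs, NewState} = do_cmd1_stuff(Params, State),
            send_all(Msgs, NewState),
            dispatch(NewState);
        {cmd2, Params} ->
            {Msgs, NewState} = do_cmd2_stuff(Params, State),
            send_all(Msgs, NewState),
            dispatch(NewState);
        BadMsg ->
            log_error(BadMsg),
            dispatch(State)
    end.

%% Broadcast every returned message to every user; Msgs may be [].
send_all(Msgs, State) ->
    Users = users_of(State),                  %% hypothetical accessor
    [User ! Msg || Msg <- Msgs, User <- Users],
    ok.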
Like I said earlier, I'm very new to Erlang. Maybe someone with more experience can point me at a better way. Is there one? Is the process dictionary appropriate in this case?
The general rule is that if you have doubts, you shouldn't use the process dictionary.
If the two options you mentioned aren't good enough (I personally like the one where you return the messages to send) and what you want is some particular piece of code to track users and forward messages to them, maybe what you want to do is have a process holding that info.
Pid ! {forward, Msg}
where Pid will take care of sending everything to a bunch of other processes. Now, you would still need to pass the Pid around, unless you give it a name in some registry to find it. Either with register/2, global or gproc.
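A minimal sketch of that suggestion, using register/2; the module name and message shapes are made up for illustration.
-module(broadcaster).
-export([start/0, add_user/1, forward/1]).

start() ->
    register(?MODULE, spawn(fun() -> loop([]) end)).

%% Any code, however deep in the call stack, only needs the registered name.
add_user(Pid) -> ?MODULE ! {add_user, Pid}, ok.
forward(Msg)  -> ?MODULE ! {forward, Msg}, ok.

loop(Users) ->
    receive
        {add_user, Pid} ->
            loop([Pid | Users]);
        {forward, Msg} ->
            [U ! Msg || U <- Users],    %% fan the message out to every user
            loop(Users)
    end.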
A simple answer would be to nest your global data within a state record, which is then threaded through the system, at least at the top level. This makes it easy to add new fields to the state in the future, not an uncommon occurrence, and allows you to keep your global state data structure untouched. So initially:
-record(state, {users=[],state_data}).
Defining it as a record makes it easy to access and extend when necessary.
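Continuing that sketch, the broadcasting code can then pick the user list out of the record wherever the state is available; broadcast/2 and handle_cmd1/2 below are hypothetical names of mine.
broadcast(Msg, #state{users = Users}) ->
    [User ! Msg || User <- Users],
    ok.

%% Commands update only the part of the record they care about.
do_cmd1_stuff(Params, S = #state{state_data = Data}) ->
    S#state{state_data = handle_cmd1(Params, Data)}.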
As you mentioned, you can always pass the user list as an extra parameter; that's not so bad.
If you don't want to do this just put it in State. You can have a special State just for this part of the calculation that also contains the user list.
Then there always is the possibility of putting it in ETS or in another server process.
What exactly to do is hard to recommend since it depends a lot on your exact application and preferences.
Just choose from the mentioned possibilities as if the process dictionary doesn't exist. Maybe your code needs restructuring if none of the variants look elegant, there always is some better way without the process dictionary.
It's really bad that it is still there, because it's alluring to many beginning Erlang users.
You really should not use the process dictionary. I accept using the dictionary only if:
1. It is a short-lived process.
2. I have full control over the process from spawn to termination, i.e. I use a minimal and well-known set of external modules.
3. I badly need the performance gain. This means avoiding the copying of data that comes with ETS, in cases where dict/gb_trees is too slow (for GC reasons).
Ad 1: this is not your case; you are using it in a server. Ad 2: I don't know if this is your case. Ad 3: this is not your case, because you need a list of recipients, so you gain nothing from the process dictionary being a very fast key/value store. In your case I don't see any reason why you should not include what you need in your State. IMHO State is exactly the right place for it.
It's an interesting question because it involves the fundamentals of functional design.
My opinion:
Try as much as possible to make the function return the messages, then send them. This separates the two different tasks nicely, and separates the purely functional task from the one that causes side effects.
If this isn't possible, pass the receivers as an argument even if it's a bit messy. If the broadcasting function uses that data, it should be given to it explicitly, for clarity and predictability.
Using ETS as Peer Stritzinger suggests is really not any better than the PD; both hide the fact that the broadcasting function uses the receiver list and make it dependent on global data.
I'm not sure about the Erlang way of encapsulating some state in a process, as I GIVE TERRIBLE ADVICE suggests. Is it really any better than ETS or PD?
clutters up my dispatcher with a bunch of {Msg, NewState}
This is my experience also, that you often end up like this. It's not particularly pretty, but functional design seems to encourage this. Could some language feature be introduced to make it more beautiful and natural?
EDIT:
6 years ago I wrote:
Could some language feature be introduced to make it more beautiful and natural?
After learning much more about functional programming, I have realised that examples of this are the state monad and do-notation found in Haskell.
I would consider sending a special message to self() from deep inside the call stack, and handling it at the top-level dispatch method that you've sketched, where the list of users is available.
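A minimal sketch of that idea, with illustrative names only: deep inside do_cmd2_stuff/2 the code just queues a broadcast request for its own process, and the dispatcher handles it like any other message.
notify_users(Msg) ->
    self() ! {broadcast, Msg},    %% same process as the dispatcher loop
    ok.

%% ...and the dispatcher gains one extra clause in its receive:
%%     {broadcast, Msg} ->
%%         [User ! Msg || User <- users_of(State)],   %% users_of/1 is hypothetical
%%         dispatch(State);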

Erlang gen_server vs stateless module

I've recently finished Joe's book and quite enjoyed it.
I have since started coding a soft real-time application with Erlang, and I have to say I am a bit confused by the use of gen_server.
When should I use gen_server instead of a simple stateless module?
I define a stateless module as follow:
- A module that takes its state as a parameter (much like ETS/DETS), as opposed to keeping it internally (like gen_server)
Say for an invoice manager type module, should it initialize and return state which I'd then pass subsequently to it?
SomeState = invoice_manager:init(),
NewState = invoice_manager:add_invoice(SomeState, AnInvoiceFoo).
Suppose I'd need multiple instances of the invoice manager state (say my application manages multiple companies each with their own invoices), should they each have a gen_server with internal state to manage their invoices or would it better fit to simply have the stateless module above?
Where is the line between the two?
(Note the invoice manage example above is just that, an example to illustrate my question)
I don't really think you can make that distinction between what you call a stateless module and gen_server. In both cases there is a recursive receive loop which carries state in at least one argument. This main loop handles requests, does work depending on the requests and, when necessary, sends results back to the requesters. The main loop will most likely handle a number of administrative requests as well, which may not be part of the main API/protocol.
The difference is that gen_server abstracts away the main receive loop and allows the user to write only the actual user code. It will also handle many administrative OTP functions for you. The main difference is that the user code is in another module, which means that you see the passed-through state more easily. Unless you actually manage to write your code in one big receive loop and never call other functions to do the work, there is no real difference.
Which method is better depends very much on what you need. Using gen_server will simplify your code and give you added functionality "for free", but it can be more restrictive. Rolling your own will give you more power, but also more possibilities to screw things up. It is probably a little faster as well. What do you need?
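To make the comparison concrete, here is a rough sketch of the invoice manager from the question written as a gen_server; the module, function and message names are my own assumptions, not taken from any answer.
-module(invoice_manager).
-behaviour(gen_server).

-export([start_link/0, add_invoice/2, invoices/1]).
-export([init/1, handle_call/3, handle_cast/2, handle_info/2,
         terminate/2, code_change/3]).

%% One manager process per company, identified by its Pid.
start_link() -> gen_server:start_link(?MODULE, [], []).

add_invoice(Pid, Invoice) -> gen_server:call(Pid, {add_invoice, Invoice}).
invoices(Pid)             -> gen_server:call(Pid, invoices).

init([]) -> {ok, []}.    %% the internal state is simply a list of invoices

handle_call({add_invoice, Invoice}, _From, Invoices) ->
    {reply, ok, [Invoice | Invoices]};
handle_call(invoices, _From, Invoices) ->
    {reply, Invoices, Invoices}.

handle_cast(_Msg, State) -> {noreply, State}.
handle_info(_Info, State) -> {noreply, State}.
terminate(_Reason, _State) -> ok.
code_change(_OldVsn, State, _Extra) -> {ok, State}.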
It strongly depends on your needs and application design. When you need shared state between processes, you have to use a process to keep that state. Then gen_server, gen_fsm or another gen_* is your friend. You can avoid this design when your application is not concurrent, or when this design doesn't bring you other benefits. For example, breaking your application into processes may lead to a simpler design. In other cases you can sometimes choose a single-process design and use "stateless" modules for performance or the like. A "stateless" module is the best choice for very simple, stateless (purely functional) tasks. gen_server is often the best choice for things that naturally seem like a "process". You must use it when you want to share something between processes (though using processes can be constrained by scalability or concurrency).
Having used both models, I must say that using the provided gen_server helps me stay structured more easily. I guess this is why it is included in the OTP stack of tools: gen_server is a good way to get the repetitive boiler-plate out of the way.
If you have shared state over multiple processes you should probably go with gen_server and if the state is just local to one process a stateless module will do fine.
I suppose your invoices (or whatever they stand for) should be persistent, so they would end up in an ETS/Mnesia table anyway. If this is so, you should create a stateless module where you put your API for accessing the invoice table.
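The stateless counterpart that this answer describes could look roughly like the sketch below, a plain module wrapping an ETS table; the table and function names are assumptions for illustration.
-module(invoice_store).
-export([init/0, add_invoice/2, invoices/1]).

%% The table is owned by whichever process calls init/0 (call it once).
init() ->
    ets:new(invoices, [named_table, bag, public]).

add_invoice(Company, Invoice) ->
    ets:insert(invoices, {Company, Invoice}),
    ok.

invoices(Company) ->
    [Invoice || {_, Invoice} <- ets:lookup(invoices, Company)].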
