Is there any alternative to the Mediator pattern when Colleagues are "idle"? - delphi

I have a project whose activities and functions follow a sequential process most of the time. But sometimes you need to "go back" some steps and rerun the previous functions.
I made a state diagram to see how complex it would be.
The first approach I thought of was applying the State pattern, but the number of states did not seem feasible. Then I "separated" and classified the functions into 6 processes. A class handles each process; I imagined something like this:
TProcessXXX = class(TProcess)
private
  { private attributes, etc. }
public
  { functions and activities }
  procedure DoActivity1;
  procedure DoActivity2;
  { ... }
  function DoActivityN: TResultProcess;
end;
Most of the activities of each Process operate on the same class, which encapsulates the needed data structure. And my intention is that each Process can notify that it has finished, so that another process can then take up the next job.
The design I've come up with is the Mediator pattern: have a class that encapsulates the state diagram and "enables" each process.
To coordinate among themselves, I considered adding methods to communicate with the coordinator/mediator class, including:
function TProcess.RequestPermission: boolean;
procedure TProcess.NotifyFinishOperation(Result: TResultOperation);
I designed the processes with some independence. And to avoid asking permission for every single activity (some sequences would otherwise need redundant checks, asking again and again), I applied a "lock" that enables them all at once:
procedure TProcessXXX.PrepareToWork;
var
  req: boolean;
begin
  req := RequestPermission;
  if req then
  begin
    EnableActivitys;
    Work;
  end;
end;
So far so good. My doubts began with how the Presentation layer invokes operations on the processes.
To get permission to invoke, there is an indirection from the Presentation layer to the Mediator: Presentation -> TProcessXXX -> Mediator
And once permission is obtained, another one for each activity: Presentation -> TProcessXXX -> TDataStructure
When a process receives permission, it captures TDataStructure for itself and holds it until its operation is completed. Meanwhile, the other processes are "idle", and the Presentation layer may keep sending them requests to operate needlessly.
I considered disabling the controls, which is the most straightforward and easy approach. But then I would have to keep enabling and disabling them all the time.
So I ask: what alternatives do you recommend? Is there a pattern that deals with this theme of "idle processes"?
EDIT:
I have studied alternatives such as Strategy and Visitor, but I am not sure they are the best option. And I admit that perhaps I do not master these 3 patterns (Mediator, Strategy, Visitor) 100%.
I forgot to clarify this. My apologies.
I would also add that if I deserve a downvote, at least be kind enough to post a comment explaining why.
EDIT 2:
As recommended, I attached a link to a picture of the state diagram:
In the diagram you can see that there is a choice point. It is designed so that, at startup, it evaluates which state was last reached and continues from that point.
In this diagram I have separated the activities into five processes: configuration, manage set, training, testing and recognition.
And I added one more, a sixth named initializer, whose function is to initialize the data structure with data accessed from a database.
Each Process is a Colleague for the Mediator. The Mediator implements this state diagram and decides which process to grant "permission" to operate.
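To make the intent concrete, a minimal sketch of what I imagine the Mediator could look like (the state names follow the five processes above plus the initializer; the transition logic here is a simplified assumption, not the full diagram):

type
  TProcessState = (psInitializer, psConfiguration, psManageSet,
    psTraining, psTesting, psRecognition);

  TMediator = class
  private
    FCurrentState: TProcessState;
  public
    { A Colleague asks before taking over the shared data structure. }
    function RequestPermission(Requester: TProcessState): Boolean;
    { A Colleague reports completion so the Mediator can advance the state. }
    procedure NotifyFinishOperation(Finished: TProcessState);
  end;

function TMediator.RequestPermission(Requester: TProcessState): Boolean;
begin
  { Only the process matching the current state may work. }
  Result := (Requester = FCurrentState);
end;

procedure TMediator.NotifyFinishOperation(Finished: TProcessState);
begin
  { Advance along the sequential path; the real implementation would also
    handle the "go back" transitions from the state diagram. }
  if Finished < High(TProcessState) then
    FCurrentState := Succ(Finished);
end;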

Your question isn't well formed, so I'm not completely sure what you are trying to achieve. But I assume you want to implement some more complex code-flow control based on certain conditions.
If my assumption is correct, you should check the Decision Tree pattern.
In the Decision Tree pattern you have your work divided into multiple smaller steps (which seems to be what you are already trying to implement).
At the end of each step you check a specific condition and then decide how to proceed based on it (either continue with the next step, repeat the current step, or even jump to a completely different step).
From your SO profile I see that you are a computer engineering student, so I can tell you that the usage of decision trees is usually discussed in great depth during AI development classes, because most complex AI algorithms actually depend on the Decision Tree pattern, as it provides great scalability.
So you might want to check your school books for the section that describes AI development.
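For illustration, a minimal sketch of such a flow in Delphi (the step names and result values are assumptions, just to show the shape of the control loop):

type
  TStep = (stConfigure, stTrain, stTest, stDone);
  TStepResult = (srNextStep, srRepeatStep, srBackToTraining);

function RunStep(Step: TStep): TStepResult;
begin
  { Placeholder: each step does its work and reports how to proceed. }
  Result := srNextStep;
end;

procedure RunFlow;
var
  Step: TStep;
begin
  Step := stConfigure;
  while Step <> stDone do
    case RunStep(Step) of
      srNextStep:       Step := Succ(Step);  // continue with the next step
      srRepeatStep:     ;                    // repeat the current step
      srBackToTraining: Step := stTrain;     // jump to a different step
    end;
end;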

Related

Replicating trees between ACID RDBs using CRDTs

I'm interested in replicating "hierarchies" of data, say, something similar to addresses:
Area
District
Sector
Unit
but you may have different pieces of data associated with each layer, so you may know the area of Sectors but not of Units, and you may know the population of a Unit; basically it's not a homogeneous tree.
I know little about data replication beyond brushing against Brewer's theorem/CAP and some naive intuition about what eventual consistency is.
I'm looking for SIMPLE mechanisms to replicate this data from one ACID RDB into other ACID RDBs. Systemically, the system needs to eventually converge. Obviously each RDB will enforce its own locally consistent view, but any 2 nodes may not match at any given time (except 'eventually').
The simplest way to approach this is to simply store all the data in a single message from some designated leader and distribute it, like an overnight dump-and-load process, but that's too big.
So the next simplest thing (I thought) was: if something inside an area changes, I can export the complete set of data inside that area and load it into the nodes. That's still quite a coarse algorithm.
The next step was: if, say, an 'object' at any level changed, send all the data on the path to that 'object'. I.e., if something in a sector is amended, you would send the data associated with the sector, its parent the district, and its parent the area (with some sort of version stamp, and let's say last update wins). What I wanted was to ensure that any replication 'update' is guaranteed to succeed (so it needs the whole path, which would potentially be created if it didn't exist).
Then I stumbled on CRDTs and thought: ah, I'm reinventing the wheel here. The algorithms are allegedly easy in principle, but tricky to get correct in practice.
Are there standard, accepted patterns for doing this sort of thing?
In my use case the hierarchies are quite shallow, and there is only a single designated leader (at this time). I'm quite attracted to state-based CRDTs, because then I can ignore ordering.
Simplicity is the key requirement.
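As an illustration of the state-based, last-update-wins idea, a minimal sketch of a last-writer-wins register per tree node (the names and the Delphi rendering are just for illustration; real CRDTs also need tie-breaking and some clock discipline):

type
  TLWWRegister = record
    Value: string;
    Stamp: Int64; // version stamp, e.g. a Lamport counter or a timestamp
  end;

{ Merge is commutative, associative and idempotent, so replicas converge
  regardless of the order in which updates arrive. }
function Merge(const A, B: TLWWRegister): TLWWRegister;
begin
  if B.Stamp > A.Stamp then
    Result := B
  else
    Result := A;
end;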
Actually, it appears I've reinvented (in a very crude, naive way) the SHELF algorithm.
I'll write some code, see if I can get it to work, and try to understand what's going on.

State dependent action set in reinforcement learning

How do people deal with problems where the legal actions differ between states? In my case I have about 10 actions in total, and the legal actions are non-overlapping, meaning that in certain types of states the same 3 actions are always legal, and those actions are never legal in the other types of states.
I'm also interested in seeing whether the solutions would differ if the legal actions were overlapping.
For Q-learning (where my network gives me the values of state/action pairs), I was thinking maybe I could just be careful about which Q value to choose when constructing the target value (i.e., instead of choosing the max, I choose the max among the legal actions...).
For policy-gradient methods I'm less sure what the appropriate setup is. Is it okay to just mask the output layer when computing the loss?
There are two closely related works from recent years:
[1] Boutilier, Craig, et al. "Planning and learning with stochastic action sets." arXiv preprint arXiv:1805.02363 (2018).
[2] Chandak, Yash, et al. "Reinforcement Learning When All Actions Are Not Always Available." AAAI. 2020.
Currently this problem does not seem to have a single, universal, straightforward answer. Maybe because it is not that much of an issue?
Your suggestion of choosing the best Q value among the legal actions is actually one of the proposed ways to handle this. For policy-gradient methods you can achieve a similar result by masking out the illegal actions and properly scaling up the probabilities of the remaining actions.
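A sketch of both ideas (illustrative code only; the Q values, probabilities and legality mask are assumed to come from your own network and environment):

{ Q-learning target: take the max only over the legal actions.
  NegInfinity comes from the Math unit. }
function MaxLegalQ(const QValues: array of Double;
  const Legal: array of Boolean): Double;
var
  i: Integer;
begin
  Result := NegInfinity;
  for i := 0 to High(QValues) do
    if Legal[i] and (QValues[i] > Result) then
      Result := QValues[i];
end;

{ Policy gradient: zero out illegal actions and rescale the rest so the
  distribution still sums to 1. }
procedure MaskAndRescale(var Probs: array of Double;
  const Legal: array of Boolean);
var
  i: Integer;
  Total: Double;
begin
  Total := 0;
  for i := 0 to High(Probs) do
    if Legal[i] then
      Total := Total + Probs[i]
    else
      Probs[i] := 0;
  if Total > 0 then
    for i := 0 to High(Probs) do
      Probs[i] := Probs[i] / Total;
end;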
Another approach would be giving negative rewards for choosing an illegal action, or ignoring the choice and making no change in the environment, returning the same reward as before. In one of my personal experiments (a Q-learning method) I chose the latter, and the agent learned what it had to learn, but it was using the illegal actions as a 'no action' action from time to time. That wasn't really a problem for me, but negative rewards would probably eliminate this behaviour.
As you can see, these solutions don't change or differ when the actions are 'overlapping'.
Answering what you've asked in the comments: I don't believe you can train the agent under the described conditions without it learning the legal/illegal action rules. That would require, for example, something like a separate network for each set of legal actions, which doesn't sound like the best idea (especially if there are many possible legal action sets).
But is learning these rules hard?
You have to answer some questions yourself: is the condition that makes an action illegal hard to express/articulate? That is, of course, environment-specific, but I would say it is not that hard to express most of the time, and agents simply learn such rules during training. If it is hard, does your environment provide enough information about the state?
Not sure if I understand your question correctly, but if you mean that in certain states some actions are impossible, then you simply reflect that in the reward function (big negative value). You can even decide to end the episode if it is not clear what state the illegal action would lead to. The agent should then learn that those actions are not desirable in those specific states.
In exploration mode, the agent might still choose to take the illegal actions. However, in exploitation mode it should avoid them.
I recently built a DDQN agent for Connect Four and had to address this. Whenever a column was chosen that was already full of tokens, I set the reward equivalent to losing the game. That was -100 in my case, and it worked well.
In Connect Four, allowing an illegal move (effectively skipping a turn) can in some cases be advantageous for the player. This is why I set the reward equivalent to losing and not to a smaller negative number.
So if you make the negative reward milder than losing, you'll have to consider what the implications of allowing illegal moves during exploration are in your domain.

Omniture: Creating Specific Context Variables

I was wondering if anyone out there can help...
My company works in the travel industry, and one of the products we provide is the option of buying a flight and hotel together.
One of the advantages of this is that a visitor can sometimes save on the hotel if they buy the package together.
What I want to be able to track is the following:
the hotel which has the saving on it (accommodation code), the saving that they will make, and the price of the package that they will pay.
I am new to implementing this, but I have been told by a colleague that I can use a context variable.
Would anyone be able to tell me how I should write this please?
Kind Regards
Yaser
Here is the documentation entry for Context Data Variables.
For example, in the custom code section of the on-page code, within s_doPlugins or via some wrapper function that ultimately makes an s.t() or s.tl() call, you would have:
s.contextData['package.code'] = "accommodation code";
s.contextData['package.savings'] = "savings";
s.contextData['package.price'] = "price";
Then in the interface you can go to processing rules and map them to whatever props or eVars you want.
Having said that... processing rules are pretty basic at the moment and, to be honest, not really worth it IMO. Firstly, you have to get certified (take an exam and pass) to even access processing rules. It's not that big a deal, but IMO it's a pointless hoop to jump through. (Tip: if you are going to go ahead and take this step, be sure to study up on more than just processing rules. Despite the fact that the exam/certification is supposed to be about processing rules, there are several questions that have little to nothing to do with them.)
2nd, context data variables don't show up in reports by themselves. You must assign the values to actual props/eVars/events through processing rules (or get ClientCare to use them in a VISTA rule, which is significantly more powerful than a processing rule, but costs a lot of money).
3rd, processing rules are pretty basic. Seriously, you're limited to simple stuff like straight duping, concatenating values, etc.
4th, processing rules are limited in setting events and won't let you set the products string. IOW, you can set a basic (counter) event, but not a numeric or currency event (an event with a custom value associated with it). I mention this because those price and savings values might be good as numeric or currency events for calculated metrics. And since you can't set such events via processing rules, you'd have to set them in your page code anyway.
The only real benefit is if you're simply looking to dupe the values into a prop/eVar and that prop/eVar varies from report suite to report suite (though FYI, most people try to keep these consistent across report suites anyway, and people rarely repurpose them).
So if you are already consistent across multiple report suites (or only have 1 report suite in the first place), then since you already have to put some code on the site, there's no real incentive to go through context data instead of just popping the values directly.
I guess the overall point is that since the goal is to get the values into actual props, eVars and possibly events, and processing rules fail on a lot of levels, there's no compelling reason not to just pop them directly in the first place.

UnitTesting a class that returns a complex dataset

After months of frustration, and of time spent inserting needles into voodoo dolls of previous developers, I decided that it is better to try to refactor the legacy code.
I have already ordered Michael Feathers' book, I am working through Fowler's Refactoring, and I have made some sample projects with DUnit.
So even if I don't master the subject I feel it is time to act and put some ideas into practice.
Almost 100% of the code I work on has the business logic trapped in the UI; moreover, it is all procedural programming (with a few exceptions). The application started as quick & dirty and continued as such.
Now, writing tests for the whole application would be a meaningless task in my case, but I would like to try to unit-test something that I need to refactor anyway.
One of the complex tasks this one big "TForm business logic class" does is to read DB data, make some computations, and populate a scheduler component. I would like to pull the DB-reading and computation parts out and assign them to a new class. Of course this is just a way to improve the current design; it is not how I would do it starting from scratch. But I'd like to do it because the data returned by this new class is useful in other ways too; for example, I have now been asked to send e-mail notifications of scheduler data.
So to avoid a massive copy and paste operation I need the new class.
Now, the scheduler is populated from a huge dataset (huge in size and in number of fields). Probably a first refactoring step could be obtaining that dataset from the new class. But later I'd rather use a new class (like TSchedulerData, or some other name less bound to the scheduler) to manage the data, so that instead of having a dataset as the result I can have a TSchedulerData object.
Since refactoring happens in small steps, and tests are needed to refactor well, I am a little confused about how to proceed.
The following points are not clear to me:
1) How do I test a complex dataset? Should I run the working application, save one result set to XML, and write a test where I use a TClientDataSet containing that XML data?
2) How much do I have to care about TSchedulerData? I mean, I am not 100% sure I will use TSchedulerData; maybe I will stick with the dataset. Anyway, the thought of creating complex tests that will be discarded in 2 weeks is not appealing to a DUnit newbie. But probably that is how it works; I can't imagine the number of bugs I would face without tests.
Final note: I know some people think rewriting from scratch is a better option, but that is not an option here. "The application is huge, it is sold today, and new features are required today, so as not to go out of business." That is what I have been told; anyway, refactoring can save my life and extend the application's life.
Your eventual goal is to separate the UI, data storage and business logic into distinct layers.
It's very difficult to test a UI with automated testing frameworks, so you'll want to separate as much of the business logic from the UI as possible. This can be accomplished using one of the various Model/View/* patterns. I prefer MVP Passive View, which attempts to make the UI nothing more than an interface. If you're using a dataset, MVP Supervising Controller may be a better fit.
Data storage needs its own suite of tests, but these are different from unit tests (though you can use the same unit-testing framework), and there are usually fewer of them. You can get away with this because most of the heavy lifting is done by third-party data components and a DBMS (in your case, T*Dataset). These are integration tests: basically, making sure your code plays nicely with the vendor's code. They are also needed if you have any stored procedures defined in the DB. They are much slower than unit tests and don't need to be run as often.
The business logic is what you want to test the most: every calculation, loop or branch should have at least one test (more is preferable). In legacy code this logic often touches the UI and the DB directly and does multiple things in a single function. Here, Extract Method is your friend. Good places to extract methods are:
for I := 0 to List.Count - 1 do
begin
  //HERE
end;

if { HERE, if it's a complex condition } then
begin
  //HERE
end
else
begin
  //HERE
end;

Answer := Var1 / Var2 + Var1 * Var3; //HERE
When you come across one of these extraction points
Decide what you want the method signature to look like for your new method: Method name, parameters, return value.
Write a test that calls it and checks the expected outcome (a sketch follows below).
Extract the method.
If all goes well you will have a newly extracted method with at least one passing unit test.
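For step 2, a minimal DUnit sketch, assuming a hypothetical extracted class TScheduleCalc with a method RatePerUnit(Total, Units: Double): Double (both names are made up for illustration):

unit ScheduleCalcTests;

interface

uses
  TestFramework, ScheduleCalc;

type
  TScheduleCalcTests = class(TTestCase)
  published
    procedure TestRatePerUnit;
  end;

implementation

procedure TScheduleCalcTests.TestRatePerUnit;
var
  Calc: TScheduleCalc;
begin
  Calc := TScheduleCalc.Create;
  try
    { 10 units of work over 4 slots should give a rate of 2.5. }
    CheckEquals(2.5, Calc.RatePerUnit(10.0, 4.0), 0.0001);
  finally
    Calc.Free;
  end;
end;

initialization
  RegisterTest(TScheduleCalcTests.Suite);
end.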
Delphi's built-in Extract Method doesn't give you any way to adjust the signature, so if that's your only option you'll have to make do and fix the signature after extraction. You'll also want to make the new method public so your test can access it. Some people balk at making a private utility method public, but at this early stage you have little choice. Once you've made sufficient progress, you'll start to see that some of the utility methods you've extracted belong in their own class (in which case they'd have to be public anyway), while others can be made private/protected and tested indirectly by testing the methods that depend on them.
As your test suite grows you'll want to run them after each change to ensure your latest change hasn't broken something elsewhere.
This topic is much too large to cover completely in an answer. You'll find the vast majority of your questions are covered when that book arrives.
I'd say approach it in focussed baby steps.
Step #1 should always be to get some tests around your area of invasion, the TForm: regression tests, a.k.a. a safety net. In your case, get a sense of what the app is doing. From what I read, it seems to be a data transformer, so spend time understanding all the combinations of input data and the corresponding output schedules (or the most important ones, if all is not feasible). Write them up as tests. Ensure that all tests pass.
Step #2: now attempt your refactorings. Move blocks of code into cohesive classes, etc., all under the safety of the regression net.
Testing complex datasets: testing file dumps should be the last resort, but in this case it seems like a simple option to get started. Maybe you could later make it a first-class domain object, TSchedule, with its own Equals() implementation. Defer design decisions/changes until you have a solid regression test suite around your area of modification.
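A minimal sketch of that file-dump approach with DUnit (the test class, field name, file name and the BuildDataset method are all assumptions; the XML file is a dump you saved once from the working application, and TClientDataSet lives in the DBClient unit):

procedure TSchedulerDataTests.TestDatasetMatchesKnownGoodDump;
var
  Expected: TClientDataSet;
  Actual: TDataSet;
begin
  Expected := TClientDataSet.Create(nil);
  try
    Expected.LoadFromFile('known_good_schedule.xml');
    Actual := FSchedulerData.BuildDataset; // the new, extracted class
    Expected.First;
    Actual.First;
    while not Expected.Eof do
    begin
      { Compare a representative field; a full test would loop over all fields. }
      CheckEquals(Expected.FieldByName('TaskId').AsString,
        Actual.FieldByName('TaskId').AsString);
      Expected.Next;
      Actual.Next;
    end;
    Check(Actual.Eof, 'Actual dataset has extra rows');
  finally
    Expected.Free;
  end;
end;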

How would you refactor this code?

This hypothetical example illustrates several problems I can't seem to get past, even though I keep trying! Suppose the original code is a long event handler, coded in the UI, triggered when a user clicks a cell in a grid. Expressed as pseudocode, it's:
if Condition1=true then
begin
  //loop through every cell in the row,
  //if aCell/headerCellValue > 1 then
  //  color aCell red
end
else if Condition2=true then
begin
  //do some other calculation adding cell and headerCell values, and
  //if some other product > 2 then
  //  color the whole row green
end
else
  //show an error message
I look at this and say "Ah, refactor to the strategy pattern! The code will be easier to understand, easier to debug, and easier to later extend!" I get that.
And I can easily break the code into multiple procedures.
The problem is ultimately scope-related. Assume the pseudocode makes extensive use of grid properties, values displayed in cells, maybe even built-in grid methods. How do you move all that to another unit without referencing the grid component in the UI, which would break all the "rules" about loose coupling that make OOP valuable?
I'm really looking forward to responses. Thanks, as always -- Al C.
Refactoring to put code into a separate routine doesn't necessarily mean decoupling everything. You could just as well refactor each of those cases into a new method belonging to the same class as the event handler you're refactoring. Those methods would have all the same access to the grid component as your current code already has.
You're writing code for that event to do things to that grid on that form. Do you really foresee needing to do those operations in response to some other event? Or perform them on some other grid on some other form? If not, then decoupling everything is just an academic exercise and serves no purpose to your product. It's OK to write application-specific code.
If you want to decouple, then the way to do it is to add parameters to your factored-out routines. If the routines need to work with the grid without knowing exactly which grid it is, then pass the grid in as a parameter:
if Condition1 then
ColorCellsRedAboveRatio(Grid, 1.0)
else if Condition2 then
ColorRowsGreenAboveProduct(Grid, 2)
else
Error;
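A sketch of how one of those factored-out routines might look (the grid layout, with header values in row 0 and the clicked row in Grid.Row, is an assumption taken from the pseudocode; flagging via Objects[] is just one crude way to defer the actual coloring to paint code; assumes Grids and SysUtils in the uses clause):

procedure ColorCellsRedAboveRatio(Grid: TStringGrid; Ratio: Double);
var
  Col: Integer;
  HeaderValue, CellValue: Double;
begin
  for Col := Grid.FixedCols to Grid.ColCount - 1 do
  begin
    HeaderValue := StrToFloatDef(Grid.Cells[Col, 0], 0.0);
    CellValue := StrToFloatDef(Grid.Cells[Col, Grid.Row], 0.0);
    if (HeaderValue <> 0) and (CellValue / HeaderValue > Ratio) then
      Grid.Objects[Col, Grid.Row] := TObject(1); // crude "flag" for the paint code
  end;
  Grid.Invalidate; // let the paint event apply the colors
end;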
First of all, NEVER EVER compare a value to True.
It's
if Condition1 then
begin
end else if Condition2 then
begin
end;
Comparing to True can, in the worst case, fail even if the value is true: a Delphi Boolean can end up holding an ordinal value other than 0 or 1 (for example, via an API call or an unchecked cast), and then "if SomeBool then" succeeds while "if SomeBool = True then" does not, because the latter compares against ordinal 1. There is a good (but German) article, 'Über den Umgang mit Boolean', in the Delphi-PRAXiS.net community forums which shows an example reproducing this strange-seeming behaviour.
Regarding your question directly:
This code would be better placed in a custom paint event. That event gets called for every cell and paints it directly with the correct colors, and it will draw the correct colors even when the grid is repainted. In your click event you would only draw the cells once, and if, for any reason, the grid repaints, your colors would be lost.
Then again, this code is strongly UI- and component-related: if you didn't have this grid on your form, you wouldn't need this code.
What you could do to decouple things a bit is to pass the values retrieved from the grid row to an external unit that does only the calculation and returns a logical result. Your UI code on the form would then take this result, decide how it should be displayed (i.e. what color, etc.), and put the information on the grid.
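A minimal sketch of such a paint handler on a TStringGrid, assuming the cells to highlight were flagged in Objects[] beforehand (as in the earlier sketch):

procedure TForm1.StringGrid1DrawCell(Sender: TObject; ACol, ARow: Integer;
  Rect: TRect; State: TGridDrawState);
var
  Grid: TStringGrid;
begin
  Grid := Sender as TStringGrid;
  if Grid.Objects[ACol, ARow] <> nil then
    Grid.Canvas.Brush.Color := clRed // re-applied on every repaint
  else
    Grid.Canvas.Brush.Color := Grid.Color;
  Grid.Canvas.FillRect(Rect);
  Grid.Canvas.TextOut(Rect.Left + 2, Rect.Top + 2, Grid.Cells[ACol, ARow]);
end;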
You could consider extracting a UI-dependent unit. If the method is long and you want to extract some stuff from the class, it is reasonable to extract a strategy.
As Rob suggested, you could just pass the needed context to the strategy procedures. You could introduce a SheetRenderingStrategy with an appropriate abstract method and appropriate subclasses, e.g. HighlightTheseSpecialCellsStrategy. These classes are still part of the UI, but they probably clarify the intent and improve modularisation.
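A minimal sketch of that hierarchy (the method name and parameters are assumptions):

type
  TSheetRenderingStrategy = class
  public
    procedure Render(Grid: TStringGrid); virtual; abstract;
  end;

  THighlightTheseSpecialCellsStrategy = class(TSheetRenderingStrategy)
  public
    procedure Render(Grid: TStringGrid); override;
  end;

procedure THighlightTheseSpecialCellsStrategy.Render(Grid: TStringGrid);
begin
  { Flag/paint the special cells here, as in the Condition1 branch. }
end;

The event handler would then pick a strategy based on the condition and simply call Strategy.Render(Grid).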
Answering a question about refactoring without context is like answering a question about design without context, because you are changing a design. I usually start out with "reasons to refactor": maybe I have metrics that I'm exceeding (look, a class with 10,000 lines; surely it can be partitioned into several more coherent, more cohesive classes, with less coupling).
So, if you find yourself with a lot of code that has several hundred if...else conditions in event handlers, as I often do, I would forget all about that event handler for a minute and reduce it, as suggested above, to a minimal object-oriented pattern:
if a then
doA(fewer,parameters)
else if b then
doB(is,generally,better)
else if c then
doC;
...
Now, if doA, doB and doC belong together in another object (they share state and modify/control some particular set of fields), then I might move the doA, doB and doC methods into that other object.
In general, however, rather than drilling down case by case into event handlers that do everything, I also find the following Delphi pattern handy:
procedure TForm1.BigGuiControlRightButtonClick(Sender: TObject; ...);
begin
  BigThingController.RightClickMenuHandler(Sender, ...);
end;

procedure TForm1.BigGuiControlDoSomeThing(Sender: TObject);
begin
  BigThingController.DoSomeThing;
end;

procedure TForm1.Print(Sender: TObject);
begin
  DocumentManager.Print(Document);
end;
I like it when my TForm methods are clear and readable. I don't like to see a lot of noise and a lot of error-checking code. I find that applications that have been carefully maintained and debugged over years tend to grow iteratively towards a complete mess of unreadable spaghetti.
If the goal of refactoring is more than just to make the code look pretty, then the refactoring should also have some measurable quality goal: reduce defects, crashes, etc. Sometimes I use refactoring as a time to remove features that are no longer useful or were implemented in a faulty way. So my code is more correct when I'm finished, not just refactored to fit some ideal of how code should be written that doesn't change the quality the user experiences.
I'm a Delphi developer, and I'm goal-oriented, quality-oriented, and pragmatic rather than a stylizer. Other people may differ here.
