Can a transition have two or more actions?
For example:
event[condition]/action1;action2
stateA -------------------------------------------> stateB
Yes.
From Wikipedia:
In UML, a state transition can directly connect any two states. These two states, which may be composite, are designated as the main source and the main target of a transition. Figure 7 shows a simple transition example and explains the state roles in that transition. The UML specification prescribes that taking a state transition involves executing the following actions in the following sequence (see Section 15.3.14 in OMG Unified Modeling Language (OMG UML), Superstructure, Version 2.2):
Evaluate the guard condition associated with the transition and perform the following steps only if the guard evaluates to TRUE.
Exit the source state configuration.
Execute the actions associated with the transition.
Enter the target state configuration.
I have been unable to find succinct wording to define this in the UML specification, but diagrams and further wording in the (well-referenced) Wikipedia article seem to imply that you should use ; as a separator, as in your example.
However, intuitively I would expect a system's state to change after each action has been taken, so (again intuitively) I would recommend minimizing your use of multiple actions per transition. Instead consider adding intermediate states.
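To make that ordering concrete, here is a minimal Python sketch of the prescribed sequence. The State class and fire_transition function are hypothetical, not part of any UML tool's API.

# Hypothetical sketch of the prescribed transition sequence; these names
# are illustrative only.
class State:
    def __init__(self, name):
        self.name = name

    def enter(self):
        print(f"enter {self.name}")

    def exit(self):
        print(f"exit {self.name}")


def fire_transition(source, target, guard, actions):
    """Take a transition in the order the UML specification prescribes."""
    # 1. Evaluate the guard; proceed only if it evaluates to TRUE.
    if not guard():
        return source
    # 2. Exit the source state configuration.
    source.exit()
    # 3. Execute the actions associated with the transition, in order.
    for action in actions:            # e.g. [action1, action2]
        action()
    # 4. Enter the target state configuration.
    target.enter()
    return target


# Usage, mirroring event[condition]/action1;action2 above:
state = fire_transition(State("stateA"), State("stateB"),
                        guard=lambda: True,
                        actions=[lambda: print("action1"),
                                 lambda: print("action2")])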
Related
I need to create a sequence diagram for a project I am currently working on. The sequence in question does contain a forEach loop, which does some action for every available user.
Is there any consensus/convention on how to translate a forEach-loop to a loop-fragment?
I personally used "for each user in users" as condition for the loop fragment.
Not specifically. A loop fragment just has a guard to describe the loop condition.
These conventions tend to carry over from the coding world. C programmers are familiar with for (<init>; <cond>; <inc>) constructs, so you could simply place
[<init>; <cond>; <inc>]
as the guard. Similarly you can place a foreach.
Note that this might go into a level of detail that SDs are not meant for; it becomes a sort of graphical programming. SDs are there to give an overview of collaborating classes, not to serve as detailed coding templates. A foreach is probably acceptable, though.
I'm currently in the process of planning out the structure of a language interpreter and I am realizing that I do not like the idea of exclusively using a Visitor or Listener tree traversal method.
Since both tree traversal methods have their merits, I would ideally like to use a mix of both:
Using a listener makes the most sense when traversing arbitrary language block definitions (function/class definitions, struct/enum-like definitions) especially when they can be nested.
Visitors seem to naturally lend themselves to situations such as expression evaluation, where the context is far more predictable and result values can be returned up the chain.
What is the most "correct" method to switch between the two traversal methods?
So far, my ideas are as follows:
Switch from Listener to Visitor for a portion of a parse tree
Say that, when the listener reaches the node "Foo", I want to handle its children more explicitly using a Visitor. One way I can think of doing this is:
Parse Tree walker calls enterFoo(ctx)
Create an instance of myFooVisitor
Explicitly visit children, store result, etc.
Set ctx.children = [] (or equivalent)
When enterFoo() returns, the parse tree walker sees that there are no more children for this node and does not needlessly walk through all of Foo's children (see the sketch below)
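A minimal sketch of this direction, assuming ANTLR4's Python runtime; FooListener and MyFooVisitor are stand-ins for classes ANTLR4 would generate from your grammar, not real names.

# Sketch only: FooListener and MyFooVisitor stand in for generated classes.
class MyListener(FooListener):
    def enterFoo(self, ctx):
        # Hand this subtree to a visitor for explicit traversal.
        visitor = MyFooVisitor()
        self.foo_result = visitor.visit(ctx)   # visit children, store result
        # Detach the children so the walker has nothing left to descend into.
        # Note this mutates the parse tree, so later passes won't see them.
        ctx.children = []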
Switch from Visitor to Listener for a portion of a parse tree
This is a little more obvious to me. Since tree traversal is explicitly controlled when using Visitors, switching over seems trivial.
visitFoo() gets called
Create an instance of a new parse tree walker and myFooListener
Start the walker using the listener as usual.
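And a sketch of the reverse direction, again with hypothetical generated classes:

from antlr4 import ParseTreeWalker

# Sketch only: FooVisitor and MyFooListener stand in for generated classes.
class MyVisitor(FooVisitor):
    def visitFoo(self, ctx):
        # Walk just this subtree with a listener, then pick up its result.
        listener = MyFooListener()
        walker = ParseTreeWalker()
        walker.walk(listener, ctx)
        return listener.result        # assumes the listener collects a result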
You seem to have the idea that listeners and visitors are modes, like states you can switch between. This is wrong.
Both listener and visitor are classes that allow you to act on rule traversal. The listener does this by being called from the parser during the parse process when a rule is "entered" or "left". There's no parse tree involved.
The visitor, however, uses a parse tree to walk over every node and call methods for them. You can override any of these methods to do the associated work, which doesn't necessarily have to do with the evaluation result; you can use the traversal independently.
ANTLR4 generates method bodies for each of your rules (in both listeners and visitors), which makes it easy for you to implement only those rules you are interested in.
Now that you know that listeners are used during parsing while visitors are used after parsing, it should be obvious that you cannot switch between them. In fact, switching wouldn't help much, since both classes do essentially the same thing (call methods for encountered rules).
I could probably give you more information if your question actually described what you want to achieve, rather than how.
When converting an NFA to a DFA, sometimes states have to be merged, as in the above scenario.
But what does 'combining the states into one' really mean in a real scenario?
And what would the nature of the combination of the above two states be?
The phrase combining the states into one means that you:
Create a new state, which is labelled with all labels from the original states.
The new state gets all output and all conflicting (ambiguous) input transitions from the original states.
Each combination of the original labels may occur in only one new state.
Note: Creating a new state with a single label in the DFA can be seen as a special case of the above.
Naming the new state with the labels of the original states has the advantage that you can refer to it unambiguously later in the construction process.
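As a sketch of the idea (plain Python, no particular library; epsilon-transitions are omitted for brevity), each DFA state can be represented as the frozen set of the NFA labels it combines:

from collections import deque

def nfa_to_dfa(nfa, alphabet, start):
    """Subset construction: each DFA state is a frozenset of NFA states.

    `nfa` maps (state, symbol) -> set of successor states. The names here
    are illustrative, not from any particular library.
    """
    start_set = frozenset([start])
    dfa = {}
    queue = deque([start_set])
    while queue:
        current = queue.popleft()
        if current in dfa:
            continue
        dfa[current] = {}
        for symbol in alphabet:
            # Merge all targets reachable on `symbol` into one DFA state,
            # labelled with every original NFA label it contains.
            merged = frozenset(s for q in current
                                 for s in nfa.get((q, symbol), ()))
            if merged:
                dfa[current][symbol] = merged
                queue.append(merged)
    return dfa

# Example: NFA states 1 and 2, both reachable on 'a' from state 0,
# merge into the single DFA state {1, 2}.
nfa = {(0, 'a'): {1, 2}, (1, 'b'): {1}, (2, 'b'): {2}}
print(nfa_to_dfa(nfa, alphabet='ab', start=0))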
One of the biggest problems with designing a lexical analyzer/parser combination is overzealousness in designing the analyzer. (f)lex isn't designed to have parser logic, which can sometimes interfere with the design of mini-parsers (by means of yy_push_state(), yy_pop_state(), and yy_top_state()).
My goal is to parse a document of the form:
CODE1 this is the text that might appear for a 'CODE' entry
SUBCODE1 the CODE group will have several subcodes, which
may extend onto subsequent lines.
SUBCODE2 however, not all SUBCODEs span multiple lines
SUBCODE3 still, however, there are some SUBCODES that span
not only one or two lines, but any number of lines.
this makes it a challenge to use something like \r\n
as a record delimiter.
CODE2 Moreover, it's not guaranteed that a SUBCODE is the
only way to exit another SUBCODE's scope. There may
be CODE blocks that accomplish this.
In the end, I've decided that this section of the project is better left to the lexical analyzer, since I don't want to create a pattern that matches each line (and identifies continuation records). Part of the reason is that I want the lexical analyzer to have knowledge of the contents of each line, without incorporating its own tokenizing logic. That is to say, if I match ^SUBCODE[ ][ ].{71}\r\n (all records are blocked in 80-character records), I would not be able to harness the power of flex to tokenize the structured data residing in .{71}.
Given these constraints, I'm thinking about doing the following:
Entering a CODE1 state from the <INITIAL> start condition results
in calls to:
yy_push_state(CODE_STATE)
yy_push_state(CODE_CODE1_STATE)
(do something with the contents of the CODE1 state identifier, if such contents exist)
yy_push_state(SUBCODE_STATE) (to tell the analyzer to expect SUBCODE states belonging to the CODE_CODE1_STATE; this is where the analyzer begins to masquerade as a parser)
The <SUBCODE1_STATE> start condition is nested as follows: <CODE_STATE>{ <CODE_CODE1_STATE>{ <SUBCODE_STATE>{ <SUBCODE1_STATE>{ (perform actions based on the matching patterns) } } } }. It also sets the global previous_state variable to yy_top_state(), to wit SUBCODE1_STATE.
Within <SUBCODE1_STATE>'s scope, \r\n calls yy_pop_state(). If a continuation record is present (a pattern at the highest scope, against which all text is matched), yy_push_state(continuation_record_states[previous_state]) is called, bringing us back to the scope in step 2. continuation_record_states[] maps each state to its continuation-record state, which is used by the parser (modelled in the sketch below).
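To make that control flow easier to follow, here is a language-neutral Python model of the proposed stack discipline; push_state/pop_state stand in for yy_push_state()/yy_pop_state(), and the continuation-state name is hypothetical.

# Model of the proposed start-condition stack; nothing here is flex API.
state_stack = []

def push_state(state):            # stands in for yy_push_state()
    state_stack.append(state)

def pop_state():                  # stands in for yy_pop_state()
    return state_stack.pop()

# Maps each record state to the state handling its continuation records
# (the name on the right is hypothetical).
continuation_record_states = {
    "SUBCODE1_STATE": "SUBCODE1_CONTINUATION_STATE",
}

# Step 1: entering CODE1 from <INITIAL>.
push_state("CODE_STATE")
push_state("CODE_CODE1_STATE")
push_state("SUBCODE_STATE")

# Step 2: entering <SUBCODE1_STATE> and remembering it.
push_state("SUBCODE1_STATE")
previous_state = state_stack[-1]  # stands in for yy_top_state()

# Step 3: \r\n pops the state; a continuation record re-enters the
# matching continuation state.
pop_state()
push_state(continuation_record_states[previous_state])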
As you can see, this is quite complicated, which leads me to conclude that I'm massively over-complicating the task.
Questions
For states lacking an extremely clear token signifying the end of their scope, is my proposed solution acceptable?
Given that I want to tokenize the input using flex, is there any way to do so without start conditions?
The biggest problem I'm having is that each record (beginning with the (SUB)CODE prefix) is unique, but the information appearing after the (SUB)CODE prefix is not. Therefore, it almost appears mandatory to have multiple states like this, and the abstract CODE_STATE and SUBCODE_STATE states would act as groupings for each of the concrete SUBCODE[0-9]+_STATE and CODE[0-9]+_STATE states.
I would look at how the OMeta parser handles these things.
According to Wikipedia: functional programming is a programming paradigm that treats computation as the evaluation of mathematical functions and avoids state and mutable data. (emphasis mine).
Is this really true? My personal understanding is that it makes the state more explicit, in the sense that programming is essentially applying functions (transforms) to a given state to get a transformed state. In particular, constructs like monads let you carry the state explicitly through functions. I also don't think that any programming paradigm can avoid state altogether.
So, is the Wikipedia definition right or wrong? And if it is wrong what is a better way to define functional programming?
Edit: I guess a central point in this question is: what is state? Do you understand state to be variables or object attributes (mutable data), or is immutable data also state? To take an example (in F#):
let x = 3
let double n = 2 * n
let y = double x
printfn "%A" y
would you say this snippet contains a state or not?
Edit 2: Thanks to everyone for participating. I now understand the issue to be more of a linguistic discrepancy, with the use of the word state differing from one community to another, as Brian mentions in his comment. In particular, many in the functional programming community (mainly Haskellers) take state to imply some kind of dynamism, like a signal varying with time. Other uses of state, in things like finite state machines, Representational State Transfer, and stateless network protocols, may mean different things.
I think you're just using the term "state" in an unusual way. If you consider adding 1 and 1 to get 2 as being stateful, then you could say functional programming embraces state. But when most people say "state," they mean storing and changing values, such that calling a function might leave things different than before the function was called, and calling the function a second time with the same input might not have the same result.
Essentially, stateless computations are things like this:
The result of 1 + 1
The string consisting of 's' prepended to "pool"
The area of a given rectangle
Stateful computations are things like this:
Increment a counter
Remove an element from the array
Set the width of a rectangle to be twice what it is now
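In Python terms, the contrast might look like this; all names here are illustrative, chosen to mirror the examples above.

# Stateless: the result depends only on the inputs, and calling these
# again with the same arguments always gives the same answer.
def add(a, b):
    return a + b                      # 1 + 1 -> 2

def prepend(ch, s):
    return ch + s                     # 's' + "pool" -> "spool"

def area(width, height):
    return width * height

# Stateful: each call changes something, so repeated calls differ.
counter = 0

def increment():
    global counter
    counter += 1                      # the second call returns a different value
    return counter

def remove_first(items):
    return items.pop(0)               # mutates the list passed in

class Rectangle:
    def __init__(self, width, height):
        self.width = width
        self.height = height

    def double_width(self):
        self.width *= 2               # depends on, and overwrites, current state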
Rather than "avoids state", think of it like this:
It avoids changing state. Hence that word "mutable".
Think in terms of a C# or Java object. Normally you would call a method on the object and you might expect it to modify its internal state as a result of that method call.
With functional programming, you still have data, but it's just passed through each function, creating output that corresponds to the input and the operation.
At least in theory. In reality, not everything you do works functionally, so you frequently end up hiding state to make things work.
Edit:
Of course, hiding state also frequently leads to some spectacular bugs, which is why you should only use functional programming languages for purely functional situations. I've found that the best languages are the ones that are both object-oriented and functional, like Python or C#, giving you the best of both worlds and the freedom to move between the two as necessary.
The Wikipedia definition is right. Functional programming avoids state. Functional programming means that you apply a function to a given input and get a result; however, it is guaranteed that your input will not be modified in any way. It doesn't mean that you cannot have state at all; monads are a perfect example of that.
The Wikipedia definition is correct. It may seem puzzling at first, but if you start working with, say, Haskell, you'll note that you don't have any variables lying around containing values.
State can still be represented, in a sense, using state monads.
At some point, every app must have state (some data that changes); otherwise you would start the app up, it wouldn't be able to do anything, and it would finish.
You can have immutable data structures as part of your state, but you will swap those for other instances of other immutable data structures.
The point of immutability is not that data does not change over time; the point is that you can create pure functions that don't have side effects.
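As a small sketch of that swap in Python (the Rectangle type here is illustrative):

from dataclasses import dataclass, replace

# "Swapping" immutable values rather than mutating them.
@dataclass(frozen=True)
class Rectangle:
    width: int
    height: int

state = Rectangle(width=3, height=4)

# A pure function: no side effects, returns a new instance.
def doubled_width(r: Rectangle) -> Rectangle:
    return replace(r, width=r.width * 2)

state = doubled_width(state)   # the binding changes; the old value did not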