Meaning of combined states in DFA - automata

When converting an NFA to a DFA, sometimes states have to be merged, like in the above scenario.
But what does 'combining the states into one' really mean in a real scenario?
And what would the nature of the combination of the above two states be?

The phrase 'combining the states into one' means that you:
Create a new state, which is labelled with all labels from the original states.
Give the new state all outgoing transitions of the original states; the conflicting (ambiguous) incoming transitions now lead to it.
Ensure each combination of original labels occurs in only one new state.
Note: Creating a new state with a single label in the DFA can be seen as a special case of the above.
Naming the new state with the labels of the original states lets you refer to it unambiguously during the rest of the construction.
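To make this concrete, here is a minimal sketch of the subset construction in Python; the NFA at the bottom is a made-up example, not taken from the question above:
from collections import deque

def nfa_to_dfa(nfa, start, alphabet):
    """nfa: dict mapping (state, symbol) -> set of states."""
    start_set = frozenset([start])
    dfa = {}                      # (frozenset of NFA states, symbol) -> frozenset
    queue = deque([start_set])
    seen = {start_set}
    while queue:
        current = queue.popleft()
        for sym in alphabet:
            # The combined state gets the union of all outgoing transitions
            # of its member states -- this is the "merge" described above.
            target = frozenset(s for q in current for s in nfa.get((q, sym), set()))
            dfa[(current, sym)] = target
            if target not in seen:
                seen.add(target)
                queue.append(target)
    return dfa

# Hypothetical NFA: from q0 on 'a' you can reach q0 or q1 (the ambiguity that
# forces the merge); the resulting DFA state is labelled with the set {q0, q1}.
nfa = {('q0', 'a'): {'q0', 'q1'}, ('q1', 'b'): {'q2'}}
print(nfa_to_dfa(nfa, 'q0', ['a', 'b']))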

Related

Given a DFA with multiple final states, how can I find its complement?

It is not possible to have multiple start states in a DFA, so how can I perform the complement operation on a DFA that has multiple final states?
I tried complementing the language of the given DFA, but how can multiple final states be converted into multiple starting states?
As pointed out in the comments, finding a DFA for the complement of the language of another DFA is quite straightforward: you simply make all accepting states non-accepting, and vice versa. The resulting DFA may not be minimal for the language it accepts, but it is still a DFA.
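As a quick illustration, here is a sketch in Python (the two-state DFA is hypothetical); note the flip only yields the exact complement when the transition function is total, i.e. the DFA is complete:
def run(dfa, start, accepting, word):
    state = start
    for sym in word:
        state = dfa[(state, sym)]
    return state in accepting

# Complete DFA over {a, b} accepting strings that contain at least one 'a'.
dfa = {('s0', 'a'): 's1', ('s0', 'b'): 's0',
       ('s1', 'a'): 's1', ('s1', 'b'): 's1'}
accepting = {'s1'}
complement_accepting = {'s0'}          # every non-accepting state, and vice versa

print(run(dfa, 's0', accepting, 'ba'))             # True: contains an 'a'
print(run(dfa, 's0', complement_accepting, 'ba'))  # False: the complement rejects it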
You might be thinking of how to find a DFA for the reversal of a language of another DFA. This is a little more involved and encounters the issue you suggest: by reversing the direction of all transitions, making the initial state accepting and making accepting states initial, you get an automaton that works, but it has multiple initial states. This is not allowed for DFAs. Happily, we can make an NFA-lambda easily by adding a new state q0' and adding empty/lambda/epsilon transitions from it to the otherwise initial states. Now we have an NFA-lambda for the reversal of the language and we can obtain a DFA by determinizing this using a standard algorithm.
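A rough sketch of that reversal construction, again in Python with a made-up DFA; the final determinization step is omitted since it is the standard subset construction shown earlier:
def reverse_to_nfa(dfa, start, accepting):
    nfa = {}                                        # (state, symbol) -> set of states
    for (src, sym), dst in dfa.items():
        nfa.setdefault((dst, sym), set()).add(src)  # flip the direction of each arrow
    # Empty/lambda/epsilon transitions from the new start state q0' to the old
    # accepting states; the old start state becomes the only accepting state.
    nfa[("q0'", '')] = set(accepting)
    new_accepting = {start}
    return nfa, "q0'", new_accepting

dfa = {('s0', 'a'): 's1', ('s1', 'b'): 's2'}   # accepts only "ab"
print(reverse_to_nfa(dfa, 's0', {'s2'}))       # the NFA-lambda accepts only "ba"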

Merging final states of a DFA

Can we merge or combine all of the final states of a DFA which has more than one final state? Will it produce another equivalent DFA?
So far, I have only figured out that in some cases, merging all the final states of a DFA produces an NFA which may be equivalent to the original DFA.
Thank you.
You may only merge or combine states which are equivalent. States are equivalent if the languages recognized by them are identical. The language recognized by a state is the set of strings which lead from that state to a stop (accepting) state.
Consider the regular expression a+|b. After the sequence aa the DFA is in a stop state, call it s1. It must have an outgoing transition on a to another stop state, which may be itself.
On the other hand, on the input b, the DFA is also in a stop state, call it s2. This state cannot have any outgoing transition that ever leads back to a stop state, because otherwise some string starting with b would be recognized, which is not permitted by a+|b.
Consequently s1 and s2 are not equivalent and cannot be merged.
You noticed correctly, however, that we can always add epsilon transitions from all stop states to a single new stop state. But the result is an NFA, not a DFA anymore.
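Here is the a+|b example spelled out as a small Python sketch (the concrete state names and the trap state are my own choice): s1 and s2 accept different residual languages, so they cannot be merged.
def accepts_from(dfa, accepting, state, word):
    """Does reading word from the given state end in an accepting state?"""
    for sym in word:
        state = dfa[(state, sym)]
    return state in accepting

# Complete DFA for a+|b: q0 is the start state, 'dead' is a trap state.
dfa = {('q0', 'a'): 's1', ('q0', 'b'): 's2',
       ('s1', 'a'): 's1', ('s1', 'b'): 'dead',
       ('s2', 'a'): 'dead', ('s2', 'b'): 'dead',
       ('dead', 'a'): 'dead', ('dead', 'b'): 'dead'}
accepting = {'s1', 's2'}

# From s1 the word "a" still leads to a stop state; from s2 it does not.
print(accepts_from(dfa, accepting, 's1', 'a'))   # True
print(accepts_from(dfa, accepting, 's2', 'a'))   # False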

State chart diagram and multiple actions

Can a transition have two or more actions?
For example:
event[condition]/action1;action2
stateA -------------------------------------------> stateB
Yes.
From Wikipedia:
In UML, a state transition can directly connect any two states. These two states, which may be composite, are designated as the main source and the main target of a transition. Figure 7 shows a simple transition example and explains the state roles in that transition. The UML specification prescribes that taking a state transition involves executing the following actions in the following sequence (see Section 15.3.14 in OMG Unified Modeling Language (OMG UML), Infrastructure Version 2.2):
Evaluate the guard condition associated with the transition and perform the following steps only if the guard evaluates to TRUE.
Exit the source state configuration.
Execute the actions associated with the transition.
Enter the target state configuration.
I have been unable to find succinct wording to define this in the UML specification, but diagrams and further wording in the Wikipedia article (which is well-referenced) seem to imply that you should use ; as a separator, as in your example.
However, intuitively I would expect a system's state to change after each action has been taken, so (again intuitively) I would recommend minimizing your use of multiple actions per transition. Instead consider adding intermediate states.
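For illustration, here is a compact Python sketch of that prescribed sequence, with hypothetical State and Transition classes and the transition's action list executed in order:
class State:
    def __init__(self, name):
        self.name = name
    def enter(self):
        print(f"enter {self.name}")
    def exit(self):
        print(f"exit {self.name}")

class Transition:
    def __init__(self, target, guard, actions):
        self.target, self.guard, self.actions = target, guard, actions

def fire(current, transition, event):
    """Returns the new current state, or the old one if the guard fails."""
    if not transition.guard(event):       # 1. evaluate the guard condition
        return current
    current.exit()                        # 2. exit the source state
    for action in transition.actions:     # 3. execute action1; action2 in order
        action(event)
    transition.target.enter()             # 4. enter the target state
    return transition.target

stateA, stateB = State("stateA"), State("stateB")
t = Transition(stateB,
               guard=lambda e: e == "event",
               actions=[lambda e: print("action1"), lambda e: print("action2")])
print(fire(stateA, t, "event").name)      # exit stateA, action1, action2, enter stateB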

Is it acceptable to store the previous state as a global variable?

One of the biggest problems with designing a lexical analyzer/parser combination is overzealousness in designing the analyzer. (f)lex isn't designed to carry parser logic, which can sometimes interfere with the design of mini-parsers (by means of yy_push_state(), yy_pop_state(), and yy_top_state()).
My goal is to parse a document of the form:
CODE1 this is the text that might appear for a 'CODE' entry
SUBCODE1 the CODE group will have several subcodes, which
may extend onto subsequent lines.
SUBCODE2 however, not all SUBCODEs span multiple lines
SUBCODE3 still, however, there are some SUBCODES that span
not only one or two lines, but any number of lines.
this makes it a challenge to use something like \r\n
as a record delimiter.
CODE2 Moreover, it's not guaranteed that a SUBCODE is the
only way to exit another SUBCODE's scope. There may
be CODE blocks that accomplish this.
In the end, I've decided that this section of the project is better left to the lexical analyzer, since I don't want to create a pattern that matches each line (and identifies continuation records). Part of the reason is that I want the lexical parser to have knowledge of the contents of each line, without incorporating its own tokenizing logic. That is to say, if I match ^SUBCODE[ ][ ].{71}\r\n (all records are blocked in 80-character records) I would not be able to harness the power of flex to tokenize the structured data residing in .{71}.
Given these constraints, I'm thinking about doing the following:
Entering a CODE1 state from the <INITIAL> start condition results
in calls to:
yy_push_state(CODE_STATE)
yy_push_state(CODE_CODE1_STATE)
(do something with the contents of the CODE1 state identifier, if such contents exist)
yy_push_state(SUBCODE_STATE) (to tell the analyzer to expect SUBCODE states belonging to the CODE_CODE1_STATE; this is where the analyzer begins to masquerade as a parser).
The <SUBCODE1_STATE> start condition is nested as follows: <CODE_STATE>{ <CODE_CODE1_STATE>{ <SUBCODE_STATE>{ <SUBCODE1_STATE>{ (perform actions based on the matching patterns) } } } }. It also sets the global previous_state variable to yy_top_state(), to wit SUBCODE1_STATE.
Within <SUBCODE1_STATE>'s scope, \r\n will call yy_pop_state(). If a continuation record is present (which is a pattern at the highest scope against which all text is matched), yy_push_state(continuation_record_states[previous_state]) is called, bringing us back to the scope in 2. continuation_record_states[] maps each state with its continuation record state, which is used by the parser.
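A rough model of that bookkeeping (sketched here in Python rather than flex; all state names are placeholders) looks like this:
state_stack = []
previous_state = None

def yy_push_state(state):
    state_stack.append(state)

def yy_pop_state():
    return state_stack.pop()

def yy_top_state():
    return state_stack[-1]

# Maps each state to the state used for its continuation records.
continuation_record_states = {'SUBCODE1_STATE': 'SUBCODE1_CONTINUATION_STATE'}

def on_subcode1_record():
    global previous_state
    previous_state = yy_top_state()    # remember which scope we were in

def on_end_of_line():
    yy_pop_state()                     # \r\n leaves the SUBCODE scope

def on_continuation_record():
    yy_push_state(continuation_record_states[previous_state])

# Entering CODE1 from <INITIAL>, as in the proposed scheme:
yy_push_state('CODE_STATE')
yy_push_state('CODE_CODE1_STATE')
yy_push_state('SUBCODE_STATE')
yy_push_state('SUBCODE1_STATE')
on_subcode1_record()
on_end_of_line()
on_continuation_record()
print(state_stack)    # back inside the SUBCODE1 continuation scope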
As you can see, this is quite complicated, which leads me to conclude that I'm massively over-complicating the task.
Questions
For states lacking an extremely clear token signifying the end of its scope, is my proposed solution acceptable?
Given that I want to tokenize the input using flex, is there any way to do so without start conditions?
The biggest problem I'm having is that each record (beginning with the (SUB)CODE prefix) is unique, but the information appearing after the (SUB)CODE prefix is not. Therefore, it almost appears mandatory to have multiple states like this, and the abstract CODE_STATE and SUBCODE_STATE states would act as groupings for each of the concrete SUBCODE[0-9]+_STATE and CODE[0-9]+_STATE states.
I would look at how the OMeta parser handles these things.

Does functional programming avoid state?

According to wikipedia: functional programming is a programming paradigm that treats computation as the evaluation of mathematical functions and avoids state and mutable data. (emphasis mine).
Is this really true? My personal understanding is that it makes the state more explicit, in the sense that programming is essentially applying functions (transforms) to a given state to get a transformed state. In particular, constructs like monads let you carry the state explicitly through functions. I also don't think that any programming paradigm can avoid state altogether.
So, is the Wikipedia definition right or wrong? And if it is wrong what is a better way to define functional programming?
Edit: I guess a central point in this question is what is state? Do you understand state to be variables or object attributes (mutable data) or is immutable data also state? To take an example (in F#):
let x = 3
let double n = 2 * n
let y = double x
printfn "%A" y
would you say this snippet contains a state or not?
Edit 2: Thanks to everyone for participating. I now understand the issue to be more of a linguistic discrepancy, with the use of the word state differing from one community to another, as Brian mentions in his comment. In particular, many in the functional programming community (mainly Haskellers) interpret state to carry some sense of dynamism, like a signal varying with time. Other uses of state, in things like finite state machines, Representational State Transfer, and stateless network protocols, may mean different things.
I think you're just using the term "state" in an unusual way. If you consider adding 1 and 1 to get 2 as being stateful, then you could say functional programming embraces state. But when most people say "state," they mean storing and changing values, such that calling a function might leave things different than before the function was called, and calling the function a second time with the same input might not have the same result.
Essentially, stateless computations are things like this:
The result of 1 + 1
The string consisting of 's' prepended to "pool"
The area of a given rectangle
Stateful computations are things like this:
Increment a counter
Remove an element from the array
Set the width of a rectangle to be twice what it is now
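A minimal contrast of the two lists above, written in Python for brevity:
# Stateless: the result depends only on the inputs, and nothing is modified.
def area(width, height):
    return width * height

# Stateful: calling it twice with the "same input" gives different results,
# because it stores and changes a value outside the function.
counter = 0
def increment():
    global counter
    counter += 1
    return counter

print(area(3, 4), area(3, 4))     # 12 12 -- always the same
print(increment(), increment())   # 1 2  -- depends on hidden state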
Rather than 'avoids state', think of it like this:
It avoids changing state. Hence that word 'mutable' in the definition.
Think in terms of a C# or Java object. Normally you would call a method on the object and you might expect it to modify its internal state as a result of that method call.
With functional programming, you still have data, but it's just passed through each function, creating output that corresponds to the input and the operation.
At least in theory. In reality, not everything you do actually works functionally, so you frequently end up hiding state to make things work.
Edit:
Of course, hiding state also frequently leads to some spectacular bugs, which is why you should only use functional programming languages for purely functional situations. I've found that the best languages are the ones that are object oriented and functional, like Python or C#, giving you the best of both worlds and the freedom to move between each as necessary.
The Wikipedia definition is right. Functional programming avoids state. Functional programming means that you apply a function to a given input and get a result. However, it is guaranteed that your input will not be modified in any way. It doesn't mean that you cannot have state at all. Monads are a perfect example of that.
The Wikipedia definition is correct. It may seem puzzling at first, but if you start working with, say, Haskell, you'll notice that you don't have any variables lying around containing values.
The state can still be sort of represented using state monads.
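The underlying idea can be sketched without any monad machinery: thread the state through explicitly, returning a new state alongside each result. A tiny Python illustration (the function is hypothetical):
def next_label(counter):
    """Returns (label, new_counter) rather than incrementing a global."""
    return f"L{counter}", counter + 1

label1, counter = next_label(0)
label2, counter = next_label(counter)
print(label1, label2, counter)   # L0 L1 2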
At some point, all apps must have state (some data that changes). Otherwise, you would start the app up, it wouldn't be able to do anything, and the app would finish.
You can have immutable data structures as part of your state, but you will swap those for other instances of other immutable data structures.
The point of immutability is not that data does not change over time. The point is that you can create pure functions that don't have side effects.
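A small illustration of that swap, in Python: instead of setting the width of a rectangle in place, you build a new value (the tuple stands in for an immutable rectangle; the names are hypothetical).
old_rect = (10, 5)                         # (width, height), immutable
new_rect = (old_rect[0] * 2, old_rect[1])  # width doubled, old_rect untouched
print(old_rect, new_rect)                  # (10, 5) (20, 5)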

Resources