Is a Pushdown Automaton with an Epsilon Transition an NDPA?

Let's suppose we have this PDA:
-> q0 (e, e -> $) --> q1
Where:
q0 is a final and initial state;
e is epsilon (empty); and q1 is another state.
If the automaton were to read the empty word, it could either take the transition to q1 or stay in q0.
So, would this PDA be non-deterministic?
My teacher says it wouldn't, because in reality there is only one path for the automaton to follow: since the word is empty, all symbols have already been consumed in q0, so it would make no transition whatsoever. However, we're not sure he's right. (By the way, he says that for a PDA to recognize a word, it not only needs to be in a final state: all of the word's symbols must also have been consumed.)

For a PDA to be deterministic it must satisfy, among others, the following rule:
If there is an epsilon transition from a state q with a given stack symbol on top, there must be no input-symbol transition from q with that same stack symbol on top.
So in your case, if there is no other rule, the PDA is deterministic: the epsilon move is the only transition available, so there is never a choice between two applicable moves.
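To make that rule concrete, here is a minimal sketch in Python of a determinism check over a PDA transition table. The encoding of transitions as (state, input, stack-top) → set of (state, push-string), and the bottom-of-stack marker Z, are my own choices, not anything standard.

def is_deterministic(delta):
    """delta maps (state, input_symbol, stack_top) to a set of
    (next_state, push_string) pairs; input_symbol None means epsilon."""
    for (q, a, x), targets in delta.items():
        if len(targets) > 1:          # two choices for one configuration
            return False
        if a is None:
            # An epsilon move competes with any input move that has the
            # same state and the same stack top.
            if any(q2 == q and x2 == x and b is not None
                   for (q2, b, x2) in delta):
                return False
    return True

# The automaton from the question: q0 --(eps, eps -> $)--> q1, modelled
# with an explicit bottom-of-stack marker Z standing in for "any top".
delta = {("q0", None, "Z"): {("q1", "$Z")}}
print(is_deterministic(delta))  # True: the epsilon move has no competitor

On this definition, an epsilon transition by itself is harmless; nondeterminism only appears once a second applicable transition exists for the same state and stack top.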

DFA creation and minimization

Task: Draw a DFA that accepts words from the alphabet {0,1} where the last character is not repeated anywhere else in the word.
(For example, the words 0, 1, 00001, 111110, and ε are in the language, while 010, 111, 0010, and 101 are not.)
I think I got the DFA right, but I can't minimize it because I have a trap state which I can't get rid of whatever I do. Any advice or tips?
Your DFA is correct (except the initial state q0 should be accepting since the empty string is in the language). It is also minimal; we can show that the sets of strings that lead to each state are all distinguishable w.r.t. the language according to the Myhill-Nerode indistinguishability relation.
q0: any string in L can be appended to the strings that lead here (the empty string) to get another string in L
q1: appending the string 0 in L to 0 gives 00 not in L, so q1 is distinct from q0.
q2: appending the string 1 in L to 1 gives 11 not in L, so q2 is distinct from q0 and from q1 (since appending 1 to 0 gives 01 which is in L also)
strings that lead to q3 are distinct from q1 in that you cannot append the empty string (i.e., q1 is accepting and q3 isn't), but otherwise identical, and therefore q3 is also distinct from q0-q2
strings that lead to q4 are distinct from q2 in that you cannot append the empty string (i.e., q2 is accepting and q4 isn't), but otherwise identical, and therefore q4 is also distinct from q0-q2; q3 and q4 are distinguishable from each other as well, since appending 01 to 0^n (q3) gives a string in L while appending 01 to 1^n (q4) does not
appending anything but the empty string to strings that lead to q5 leads to a string not in the language, so it is distinct from q0-q4
appending anything to a string that leads to q6 leads to a string not in the language, so it is distinct from q0-q5 (these strings differ from strings that lead to q5 only in that you cannot even append the empty string to them to get a string in L, i.e., q6 is not accepting and q5 is).
Therefore, your DFA is minimal. You can also verify this by running a minimization algorithm and noting that no states get merged.
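If you'd like to check this mechanically, below is a small Python sketch of the table-filling (pairwise-distinguishability) algorithm. The transition table is my reconstruction of the DFA from the argument above (q0 start; q1/q2 after a single 0/1; q3/q4 after repeated 0s/1s; q5 the accepting "done" state; q6 the trap), so verify it against your own drawing before trusting the output.

from itertools import combinations

states = ["q0", "q1", "q2", "q3", "q4", "q5", "q6"]
accepting = {"q0", "q1", "q2", "q5"}
delta = {
    ("q0", "0"): "q1", ("q0", "1"): "q2",
    ("q1", "0"): "q3", ("q1", "1"): "q5",
    ("q2", "0"): "q5", ("q2", "1"): "q4",
    ("q3", "0"): "q3", ("q3", "1"): "q5",
    ("q4", "0"): "q5", ("q4", "1"): "q4",
    ("q5", "0"): "q6", ("q5", "1"): "q6",
    ("q6", "0"): "q6", ("q6", "1"): "q6",
}

# Start with pairs split by acceptance, then propagate to a fixed point:
# a pair is distinguishable if some symbol sends it to a marked pair.
marked = {frozenset(p) for p in combinations(states, 2)
          if (p[0] in accepting) != (p[1] in accepting)}
changed = True
while changed:
    changed = False
    for p, q in combinations(states, 2):
        if frozenset((p, q)) not in marked and any(
                frozenset((delta[p, c], delta[q, c])) in marked for c in "01"):
            marked.add(frozenset((p, q)))
            changed = True

print(f"{len(marked)} of {len(states) * (len(states) - 1) // 2} pairs "
      "are distinguishable")  # 21 of 21: no states can merge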
Note: it depends on your definitions, but it is a normal (and I would say preferable) definition of DFAs that they define all transitions, which means that dead (or, as you call them, trap) states have to be shown. Minimization algorithms generally won't remove them, but you can remove them yourself if you like (though I'd then call the resulting automaton nondeterministic, since its behavior is no longer specified for some transitions).

How does LR parsing parse the word "a b" in this grammar: S -> a b | a T ; T -> a

Suppose I have a grammar
S -> a b | a T
T -> a
Clearly, the grammar generates the language {aa, ab}. But I am confused about how LR parsing parses the word "a b". Naively, it could work like this, reducing the first "a" to T and then getting stuck or having to backtrack:
a b => T b => stuck or backtrack
How would an LR parser know that it should NOT reduce the first "a" to T?
In this case, the parser will never try to reduce the first a to T, because the item T → • a does not appear in the initial state. The initial state has only the items:
S → • a b
S → • a T
and the only action possible in this state is a shift action with the token a. Since a is, in fact, the next input character, we do a shift transition to a state whose itemset is
S → a • b
S → a • T
T → • a
This state has no reduce actions either, and it has two distinct shift actions: one on b and the other one on a. Since the next input is b, that shift action is taken, leading to a state whose itemset is
S → a b •
in which only a reduction is available.
A slightly more interesting case would be the rather similar grammar
S → a b
S → T a
T → a
Here, the itemset for the initial state does include the production for T:
S → • a b
S → • T a
T → • a
It's still the case that the only action available in the initial state is to shift a, but now after doing the shift we find ourselves in a state whose itemset is:
S → a • b
T → a •
and now we have two possible actions: a shift of b and a reduction of T → a. Here, the parser needs to use its ability to look one token into the future (assuming that it's an LR(1) parser).
But to let it do so, we need to do a small adjustment. Grammars are always "augmented" before the parsing automaton is constructed. The augmented grammar adds explicit recognition of the end of input by adding a unique end-of-input character which can also participate in lookahead checks. The augmented grammar is:
S'→ S $
S → a b
S → T a
T → a
In this grammar, we can see that the nonterminal T can only be followed by the symbol a, and this fact is encoded into the state transition tables, where each item in an itemset is actually annotated with a set of possible lookaheads. (During table construction, all items are annotated, but for the computation of legal actions, only the lookaheads for reductions are considered; a reduction is an item whose • is at the end.)
With this modification, the itemset reached after shifting a is:
S → a • b
T → a • [ lookahead: { a } ]
and the lookahead makes it absolutely clear which action should be chosen. If the next input is b, it is shifted. If the next input is a, T is reduced.
That precision is not available with LR(0) parsing, in which lookahead is not used. The modified grammar is not LR(0), precisely for the reason you mention: it cannot decide whether or not to reduce T. This issue comes up pretty frequently, which is why LR(0) is rarely, if ever, useful in practice.
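To see those itemsets fall out mechanically, here is a short Python sketch of the LR(0) closure and goto computations for the original grammar (lookaheads omitted); representing an item as a (lhs, rhs, dot) tuple is just one convenient encoding, not anything canonical.

GRAMMAR = {
    "S'": [("S",)],                 # augmented start production
    "S": [("a", "b"), ("a", "T")],
    "T": [("a",)],
}

def closure(items):
    # Fixed point: add fresh items for every nonterminal just after a dot.
    items = set(items)
    while True:
        new = {(rhs[dot], prod, 0)
               for (lhs, rhs, dot) in items
               if dot < len(rhs) and rhs[dot] in GRAMMAR
               for prod in GRAMMAR[rhs[dot]]}
        if new <= items:
            return items
        items |= new

def goto(items, symbol):
    # Advance the dot over `symbol`, then close the result.
    return closure({(lhs, rhs, dot + 1)
                    for (lhs, rhs, dot) in items
                    if dot < len(rhs) and rhs[dot] == symbol})

def show(items):
    for lhs, rhs, dot in sorted(items):
        print(lhs, "->", " ".join(rhs[:dot] + (".",) + rhs[dot:]))

i0 = closure({("S'", ("S",), 0)})
show(goto(i0, "a"))   # prints S -> a . T, S -> a . b, and T -> . a

Running goto on the initial state with a reproduces exactly the second itemset above, and no state contains a completed item T → a • until a second a has actually been shifted, which is why the first a can never be reduced.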
Given the example input "a b": after reading the a, the parser is in a state with the items:
S -> a . b
S -> a . T
T -> . a
Since none of these items has the dot at the end of the production, the state does not call for any reduce action. The only available action is to shift, so the parser reads the next symbol. No need for lookahead.

How do I expand the item set for this grammar?

I have this grammar
E -> E + i
E -> i
The augmented grammar
E' -> E
E -> E + i
E -> i
Now I try to expand the item set 0
I0)
E' -> .E
+E -> .E + i
+E -> .i
Then, since I have .E in I0, I would expand it again, but then I'd just get the same E rules over and over, and so on. This is my first doubt.
Assuming this is all right, the next item sets are
I0)
E' -> .E
+E -> .E + i
+E -> .i
I1) (I moved the dot from I0; no variables to the right of the dot)
E' -> E.
E -> E. + i
E -> i.
I2) (I moved the dot from I1; no variables to the right of the dot)
E -> E +. i
I3) (I moved the dot from I2; also none)
E -> E + i.
Then I will have this DFA
I0 -(E, i)-> I1 -(+)-> I2 -(i)-> I3
             |                   |
             +-(∅)-> acpt <-(∅)--+
I'm missing something, because E -> E + i must accept i + i + ..., but the DFA doesn't go back to I0, so it seems wrong to me. My guess is that it should have a transition back to I0, but then I don't know what to do with the dot.
What you call the "expansion" of the item set is actually a closure; that's how it's described everywhere I've seen the algorithm presented (at least in textbooks). Like any closure operation, you just keep applying the transformation until you reach a fixed point: once you've included the productions for E, they're included, and expanding .E again adds nothing new.
But the essential point is that you're not building a DFA. You're building a pushdown automaton, and the DFA is just one part of it. The DFA is used for shift operations; when you shift a new terminal (because the current parse stack is not a handle), you do a state transition according to the DFA. But you also push the current state onto the PDA's stack.
The interesting part is what happens when the automaton decides to perform a reduction, which replaces the right-hand side of a production with its left-hand side non-terminal. (The right-hand side at the top of the stack is called a "handle".) To do the reduction, you unwind the stack, popping each right-hand side symbol (and the corresponding DFA state) until you reach the beginning of the production. What that does is rewind the DFA to the state it was in before it shifted the first symbol from the right-hand side. (Note that it is only at this point that you know for sure which production was used.) With the DFA thus reset, you can now shift the non-terminal which was encountered, do the corresponding DFA transition, and continue with the parse.
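Here is a toy shift-reduce driver in Python for your grammar, with the action and goto tables written out by hand; the encoding is mine, so treat it as a sketch rather than a canonical construction. It exists only to show the stack-of-states mechanics just described. Note that it splits your I1 into two states, since "after E" and "after i" are really different itemsets.

# States: 0 = initial, 1 = after E, 2 = after E +, 3 = after E + i,
# 4 = after i (the lone item E -> i .).
ACTION = {
    (0, "i"): ("shift", 4),
    (1, "+"): ("shift", 2),
    (1, "$"): ("accept",),
    (2, "i"): ("shift", 3),
    (3, "+"): ("reduce", "E", 3),   # E -> E + i
    (3, "$"): ("reduce", "E", 3),
    (4, "+"): ("reduce", "E", 1),   # E -> i
    (4, "$"): ("reduce", "E", 1),
}
GOTO = {(0, "E"): 1}

def parse(tokens):
    stack = [0]                       # the PDA stack holds DFA states
    tokens = list(tokens) + ["$"]
    pos = 0
    while True:
        act = ACTION.get((stack[-1], tokens[pos]))
        if act is None:
            return False              # no action: syntax error
        if act[0] == "accept":
            return True
        if act[0] == "shift":
            stack.append(act[1])
            pos += 1
        else:                         # reduce: pop |rhs| states, rewind,
            _, lhs, n = act           # then "shift" the nonterminal
            del stack[len(stack) - n:]
            stack.append(GOTO[stack[-1], lhs])

print(parse(["i"]), parse(["i", "+", "i", "+", "i"]), parse(["i", "+"]))
# True True False

There is no edge back to the initial state in the DFA; the parse of i + i + i returns to state 0 only by popping the stack during each reduction, and then immediately re-enters state 1 through the goto table. That is the "loop" the plain DFA picture is missing.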
The basis for this procedure is the fact that the parser stack is at all times a "viable prefix"; that is, a sequence of symbols which are the prefix of some right sentential form which can be derived from the start symbol. What's interesting about the set of viable prefixes for a context-free grammar is that it is a regular language, and consequently can be recognised by a DFA. The reduction procedure given above precisely represents this recognition procedure when handles are "pruned" (to use Knuth's original vocabulary).
In that sense, it doesn't really matter what procedure is used to determine which handle is to be pruned, as long as it provides a valid answer. You could, for example, fork the parse every time a potential handle is noticed at the top of the stack, and continue in parallel with both forks. With clever stack management, this parallel search can be done in worst-case O(n³) time for any context-free grammar (and this can be reduced if the grammar is not ambiguous). That's a very rough description of Earley parsers.
But in the case of an LR(k) parser, we require that the grammar be unambiguous, and we also require that we can identify a reduction by looking at no more than k more symbols from the input stream, which is an O(1) operation since k is fixed. If at each point in the parse we know whether to reduce or not, and if so which reduction to choose, then the reductions can be implemented as I outlined above. Each reduction can be performed in O(1) time for a fixed grammar (since the maximum size of a right-hand side in a particular grammar is fixed), and since the number of reductions in a parse is linear in the size of the input, the entire parse can be done in linear time.
That was all a bit informal, but I hope it serves as an intuitive explanation. If you're interested in the formal proof, Donald Knuth's original 1965 paper (On the Translation of Languages from Left to Right) is easy to find and highly readable as these things go.

How to construct a PDA for 2n(a) less than or equal to n(b)

So here is where I am stuck: I have to construct a PDA that accepts words from {a,b}* with the condition that 2n(a) is less than or equal to n(b), i.e., at least twice as many bs as as.
Put yourself into the frame of mind of a pushdown automaton. All you know how to do is read input and the stack, and then push/pop and change state based on what you see. If you start reading a string and need to tell whether it's in the language, what can you do?
It seems to me the best we can do starting out is to remember how many as we are reading. Until we see bs, there's no reason to do anything else. That is, simply read as and push them on the stack. The following rules should suffice:
Q    i    s    Q'   s'
---  ---  ---  ---  ---
q0   a    Z    q0   aZ
q0   a    a    q0   aa

(Here Q is the current state, i the input symbol, s the stack top, Q' the next state, and s' the string that replaces s; so the first rule reads an a with Z on top and leaves aZ, i.e., pushes the a.)
Now, what happens when we see a b? If we want at least twice as many bs as as, we can't just cross off one a for each b, since that would guarantee at least as many bs as as, but not at least twice as many.
What if we cross off two as for each b? Well, that guarantees at least half as many bs as as, which is the wrong direction. This suggests crossing off one a for every two bs, which turns out to be correct. The rules:
Q    i    s    Q'   s'
---  ---  ---  ---  ---
q0   b    a    q1   a
q1   b    a    q2   -
q2   b    a    q1   a
Note that we created a new state, q2, which has one rule in common with q0 but not the rules for reading as. Indeed, once we start reading bs, we don't want to allow any more as to be read; leaving those rules out makes the automaton crash on such input and reject the string.
If we have exactly twice as many bs as as, we will end up in state q2 with no more input and an empty stack. If we have additional bs, we need a rule to allow them. It suffices to add a self loop on state q2:
Q    i    s    Q'   s'
---  ---  ---  ---  ---
q2   b    Z    q2   Z
Taken together, these states and transitions should be pretty close to a working PDA for your language. Other options would have been to write a CFG first and then apply an algorithm to convert the CFG to a PDA, for example, a top-down or bottom-up parser.
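To sanity-check the construction, here is a small Python simulation of exactly these rules; the encoding, and the acceptance test (state q2 with only Z left on the stack), are my assumptions from the description above.

# (state, input, stack_top) -> (next_state, string that replaces the top);
# "" means the matched top is simply popped.
DELTA = {
    ("q0", "a", "Z"): ("q0", "aZ"),
    ("q0", "a", "a"): ("q0", "aa"),
    ("q0", "b", "a"): ("q1", "a"),
    ("q1", "b", "a"): ("q2", ""),
    ("q2", "b", "a"): ("q1", "a"),
    ("q2", "b", "Z"): ("q2", "Z"),
}

def accepts(word):
    state, stack = "q0", ["Z"]                # top of stack is stack[-1]
    for ch in word:
        key = (state, ch, stack[-1])
        if key not in DELTA:
            return False                      # no rule: crash and reject
        state, push = DELTA[key]
        stack.pop()
        stack.extend(reversed(push))          # leftmost pushed symbol on top
    return state == "q2" and stack == ["Z"]

for w in ["abb", "aabbbb", "abbbbb", "ab", "aab", "aabbb"]:
    print(w, accepts(w))
# abb, aabbbb, abbbbb are accepted; ab, aab, aabbb are not.

The empty word is a boundary case: it satisfies the inequality but the simulation ends in q0. With the acceptance test used here, making q0 accepting as well would add exactly the empty word, since no transition ever returns to q0 after it has been left.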

Rules to convert a CFG to an NDPA?

I have to define a PDA by using this grammar:
S -> aSb
S -> c
S -> dA
A -> Sd
How do I manage the first rule and the last one?
For the second one I think I have to create another state (the final one) and link S to this new state. For the third one, I think I have to create the state "A" and link it to S on reading "d".
There are algorithms you can use to get a PDA from a CFG: look into top-down and bottom-up parsers, for instance. What I think of as the usual proof that PDAs accept languages generated by CFGs, and vice versa, uses such a construction.
An alternative is to understand the language generated by the grammar, and to design a PDA for it directly. This is less mechanical but has the potential to yield a more concise PDA. If you want to go this route, we can first simplify the grammar by recognizing that the nonterminal A can safely be replaced by the RHS of the only production for it:
S -> aSb
S -> c
S -> dSd // removed A -> Sd and replaced here
How does this grammar work?
You have a c in the middle, from the second production;
you have matching ds on the left and right of the c;
you have as on the left matching bs on the right of the c.
A PDA should work as follows:
Read as and ds until you see a c. Push everything on the stack as you go. When you see a c, go to the next state, but don't push the c.
Read bs and ds, popping as and ds from the stack, until:
The topmost stack symbol doesn't match input; crash.
You run out of input with symbols still on the stack; crash.
You run out of stack symbols with input remaining; crash.
You run out of stack and input simultaneously; accept.
Here's a transition table:
q    s      x    q'   s'
---  -----  ---  ---  --------
q0   a,d,Z  a    q0   aa,ad,aZ
q0   a,d,Z  d    q0   da,dd,dZ
q0   a,d,Z  c    q1   a,d,Z
q1   a      b    q1   -
q1   d      d    q1   -

(Note the column order here: q is the state, s the stack top, x the input symbol, q' the next state, and s' the string that replaces s. A row like the first abbreviates three transitions, one per listed stack top: aa replaces a, ad replaces d, and aZ replaces Z.)
If we accept in q1 by empty stack, these transitions are enough. If we want to accept by final state instead, we could add a transition like f(q1, Z, ε) = (q2, Z) and make q2 accepting; the PDA would transition there nondeterministically, and that branch would crash unless the input were also exhausted.
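And if you prefer the mechanical route mentioned at the start of this answer, here is a sketch of the standard top-down CFG-to-PDA construction applied to the simplified grammar, simulated in Python by exploring the nondeterministic choices; the names and encoding are mine.

GRAMMAR = {
    "S": [("a", "S", "b"), ("c",), ("d", "S", "d")],
}

def accepts(word, start="S"):
    # The PDA stack holds grammar symbols: expand a nonterminal on top by
    # one of its productions (the nondeterministic step), or match a
    # terminal on top against the next input symbol.
    seen = set()   # configurations already explored (all of them failed)

    def run(stack, pos):
        if (stack, pos) in seen:
            return False
        seen.add((stack, pos))
        if not stack:
            return pos == len(word)           # accept: empty stack, no input
        top, rest = stack[0], stack[1:]
        if top in GRAMMAR:                    # nonterminal: try each expansion
            return any(run(prod + rest, pos) for prod in GRAMMAR[top])
        return (pos < len(word) and word[pos] == top
                and run(rest, pos + 1))       # terminal: match and advance

    return run((start,), 0)

for w in ["c", "acb", "dcd", "adcdb", "dacbd", "abc", "dc"]:
    print(w, accepts(w))
# the first five are accepted, the last two rejected

This one-state construction accepts by empty stack, and it handles the arbitrary nesting of as, bs, and ds automatically, which makes it a good cross-check against the hand-built table above.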
