I am confused about how an automaton implements a language. Does the automaton go directly to the next state if there is an ɛ-transition? Suppose I have an automaton consisting of three states a, b, and c (where a is the initial state and c the accepting state) over the alphabet {0,1}. How does the following work?
a----ɛ--->b
b----0--->a
b----1--->c
Is the string "1" accepted? What if we had
a---1--->b----ɛ--->c
? Would the string "1" be accepted?
Does the automaton go directly to the next state if there is an ɛ-transition?
Roughly speaking, yes. An ɛ-transition (in a non-deterministic finite automaton, or NFA for short) is a transition that does not consume any input symbol (0 or 1, in this case). Once you understand that, it's easy (in this case) to derive deterministic finite automata (DFAs, for short) that are equivalent to your NFAs and identify the languages that the latter describe.
Suppose I have an automaton [...] Is the string "1" accepted?
Yes. Here is a nicer diagram (courtesy of LaTeX and TikZ) of your first NFA:
An equivalent DFA would be:
Once you have that, it's easy to see that the language accepted by that NFA is the set of strings that
start with zero or more 0's,
end with exactly one 1.
The string "1", because it starts with zero 0 and ends with one 1, is indeed accepted.
What if we had [...]? Would the string "1" be accepted?
Yes. Here is a nicer diagram of your second NFA:
An equivalent DFA would be:
In fact, it's easy to see that "1" is the only string accepted here.
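To make the mechanics concrete, here is a small Python sketch (the state names match your diagrams; the helper functions are my own, not anything from the answer above) that simulates both NFAs directly, taking the ɛ-closure before and after consuming each symbol, and confirms that "1" is accepted by each:

# Sketch of NFA simulation with ɛ-transitions ('' below stands for ɛ).
# First NFA:  a --ɛ--> b,  b --0--> a,  b --1--> c
# Second NFA: a --1--> b,  b --ɛ--> c
# In both, a is the initial state and c is the accepting state.

def epsilon_closure(states, delta):
    """All states reachable from `states` using only ɛ-transitions."""
    stack, closure = list(states), set(states)
    while stack:
        s = stack.pop()
        for t in delta.get((s, ""), ()):
            if t not in closure:
                closure.add(t)
                stack.append(t)
    return closure

def accepts(delta, start, accepting, word):
    current = epsilon_closure({start}, delta)
    for symbol in word:
        moved = set().union(*(delta.get((s, symbol), set()) for s in current))
        current = epsilon_closure(moved, delta)
    return bool(current & accepting)

nfa1 = {("a", ""): {"b"}, ("b", "0"): {"a"}, ("b", "1"): {"c"}}
nfa2 = {("a", "1"): {"b"}, ("b", ""): {"c"}}

print(accepts(nfa1, "a", {"c"}, "1"))    # True
print(accepts(nfa1, "a", {"c"}, "001"))  # True: 0*1
print(accepts(nfa2, "a", {"c"}, "1"))    # True: "1" is the only accepted string

The ɛ-closure step is exactly the "go directly to the next state" behaviour you asked about: the machine is simultaneously in every state it can reach without consuming input.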
I am stuck on a problem from the famous Dragon Book of compiler design. How do I find all the viable prefixes of the following grammar?
S -> 0S1 | 01
The grammar is actually the language of the regex 0ⁿ1ⁿ.
I presume the set of all viable prefixes can be expressed as a regex too. I came up with the following solution:
0+
0+S
0+S1
0+1
S
(By plus, I mean the number of zeroes ranges from 1 to infinity.)
after reducing the string 000111 with the following steps:
stack    input
         000111
0        00111
00       0111
000      111
0001     11
00S      11
00S1     1
0S       1
0S1      $
S        $
Is my solution correct, or am I missing something?
0ⁿ1ⁿ is not a regular language; regexen don't have variables like n and they cannot enforce an equal number of repetitions of two distinct subsequences. Nonetheless, for any context-free grammar, the set of viable prefixes is a regular language. (A proof of this fact, in some form, appears at the beginning of Part II of Donald Knuth's seminal 1965 paper, On the Translation of Languages from Left to Right, which demonstrated both a test for the LR(k) property and an algorithm for parsing LR(k) grammars in linear time.)
OK, to the actual question. A viable prefix for a grammar is (by definition) the prefix of a sentential form which can appear on the stack during a parse using that grammar. It's called "viable" (which means "still alive" or "could continue growing") precisely because it must be the prefix of some right sentential form whose suffix contains no non-terminal symbol. In other words, there exists a sequence of terminals which can be appended to the viable prefix in order to produce a right-sentential form; the viable prefix can grow.
Knuth shows how to create a DFA which recognizes all viable prefixes, but it's easier to see this DFA if we already have the LR(k) parser produced by an LR(k) algorithm. That parser is a finite-state machine whose alphabet is the set of terminal and non-terminal symbols of the grammar. To get the viable-prefix recognizer, we use exactly the same state machine, but we remove the stack (so that it becomes just a state machine) and the reduce actions, leaving only the shift and goto actions as transitions. All states in the viable-prefix machine are accepting states, since any prefix of a viable prefix is itself a viable prefix.
A key feature of this new automaton is that it cannot extend a prefix with a reduce action (since we removed all the reduce actions). A prefix with a reduce action is a prefix which ends in a handle -- recall that a handle is the right-hand side of some production -- so another definition of a viable prefix is that it is a right-sentential form (that is, a possible step in a derivation) which does not extend beyond the right-most handle.
The grammar you are working with has only two productions, so there are only two possible handles, 01 and 0S1. Note that 10 and 1S cannot be subsequences of any right-sentential form, nor can a right-sentential form contain more than one S. Any right-sentential form must be either a sentence 0ⁿ1ⁿ or a sentential form 0ⁿS1ⁿ, where n > 0. But every handle ends at the first 1 of a sentential form, and so a viable prefix must end at or before the first 1. This produces precisely the possibilities you list, which (together with the empty prefix) we can condense to the regular expression S? | 0*0S?1?.
Chopping off the suffix removed the second n from 0ⁿ1ⁿ, so there is no longer a requirement that the two counts agree, and the language is regular.
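If you want to convince yourself of that regular expression empirically, here is a small brute-force sketch (the bound N and all helper names are mine): it enumerates the right-sentential forms 0ⁿS1ⁿ and 0ⁿ1ⁿ up to a small bound, collects every prefix that does not extend past the handle, and checks the result against the regex.

import itertools
import re

N = 6  # maximum number of 0's to try; purely an illustrative bound

viable = {"", "S"}                      # the empty prefix, plus S from the form S itself
for n in range(1, N + 1):
    with_s = "0" * n + "S" + "1" * n    # right-sentential form 0^n S 1^n; handle 0S1 ends at index n+2
    without = "0" * n + "1" * n         # right-sentential form 0^n 1^n; handle 01 ends at index n+1
    viable.update(with_s[:i] for i in range(n + 3))
    viable.update(without[:i] for i in range(n + 2))

pattern = re.compile(r"(S?|0*0S?1?)\Z")
assert all(pattern.match(p) for p in viable)

# Converse check on short strings: everything the regex accepts is a viable prefix.
for k in range(5):
    for cand in map("".join, itertools.product("01S", repeat=k)):
        if pattern.match(cand):
            assert cand in viable, cand
print("regex and brute force agree")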
Note:
Questions like this and their answers are begging to be rendered using MathJax. StackOverflow, unfortunately, does not provide this extension, which is apparently considered unnecessary for programming. However, there is a site in the StackExchange constellation dedicated to computing science questions, http://cs.stackexchange.com, and another one dedicated to mathematical questions, http://math.stackexchange.com. Formal language theory is part of both computing science and mathematics. Both of those sites permit MathJax, and questions on those sites will not be closed because they are not programming questions. I suggest you take this information into account for questions like this one.
[image: the question's two-state automaton]
I tried this, but it is a DFA. What would an NFA look like?
By most definitions, every DFA is also an NFA. So your automaton is an NFA.
If you really want some nondeterminism, here is another NFA:
              0,1
 -> ((even)) <---> (odd)
       ^
       | 0,1
       v
   (also odd)
As you can see, forcing an automaton to use nondeterminism is not very enlightening.
In most cases, using nondeterminism makes an automaton more compact and simpler to write. But in this case the automaton already uses only two states; it can hardly get smaller.
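If it helps, here is a sketch of the subset construction applied to the three-state NFA above (the state and symbol names are mine); it collapses straight back to a two-state machine, which is another way of seeing that the nondeterminism buys nothing here:

from itertools import chain

nfa = {
    ("even", "0"): {"odd", "also_odd"},
    ("even", "1"): {"odd", "also_odd"},
    ("odd", "0"): {"even"},
    ("odd", "1"): {"even"},
    ("also_odd", "0"): {"even"},
    ("also_odd", "1"): {"even"},
}
start = frozenset({"even"})

def move(states, symbol):
    return frozenset(chain.from_iterable(nfa.get((s, symbol), ()) for s in states))

# Breadth-first subset construction over the reachable subsets only.
dfa, todo = {}, [start]
while todo:
    current = todo.pop()
    if current in dfa:
        continue
    dfa[current] = {symbol: move(current, symbol) for symbol in "01"}
    todo.extend(dfa[current].values())

print(len(dfa))                            # 2: only {even} and {odd, also_odd} are reachable
print([sorted(subset) for subset in dfa])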
I'm implementing the automatic construction of an LALR parse table for no reason at all. There are two flavors of this parser, LALR(0) and LALR(1), where the number signifies the amount of look-ahead.
I have gotten myself confused on what look-ahead means.
If my input stream is 'abc' and I have the following production, would I need 0 look-ahead, or 1?
P :== a E
Same question, but I can't choose the correct P production in advance by only looking at the 'a' in the input.
P :== a b E
| a b F
I have additional confusion in that I don't think the latter P-productions really happen when building an LALR parser generator. The reason is that the grammar is effectively left-factored automatically as we compute the closures.
I was working through this page and was ok until I got to the first/follow section. My issue here is that I don't know why we are calculating these things, so I am having trouble abstracting this in my head.
I almost get the idea that the look-ahead is not related to shifting input, but instead to deciding when to reduce.
I've been reading the Dragon book, but it is about as linear as a Tarantino script. It seems like a great reference for people who already know how to do this.
The first thing you need to do when learning about bottom-up parsing (such as LALR) is to remember that it is completely different from top-down parsing. Top-down parsing starts with a nonterminal, the left-hand-side (LHS) of a production, and guesses which right-hand-side (RHS) to use. Bottom-up parsing, on the other hand, starts by identifying the RHS and then figures out which LHS to select.
To be more specific, a bottom-up parser accumulates incoming tokens into a queue until a right-hand side is at the right-hand end of the queue. Then it reduces that RHS by replacing it with the corresponding LHS, and checks to see whether an appropriate RHS is at the right-hand edge of the modified accumulated input. It keeps on doing that until it decides that no more reductions will take place at that point in the input, and then reads a new token (or, in other words, takes the next input token and shifts it onto the end of the queue.)
This continues until the last token is read and all possible reductions are performed, at which point if what remains is the single non-terminal which is the "start symbol", it accepts the parse.
It is not obligatory for the parser to reduce a RHS just because it appears at the end of the current queue, but it cannot reduce a RHS which is not at the end of the queue. That means that it has to decide whether to reduce or not before it shifts any other token. Since the decision is not always obvious, it may examine one or more tokens which it has not yet read ("lookahead tokens", because it is looking ahead into the input) in order to decide. But it can only look at the next k tokens for some value of k, typically 1.
Here's a very simple example: a comma-separated list.
1. Start -> List
2. List -> ELEMENT
3. List -> List ',' ELEMENT
Let's suppose the input is:
ELEMENT , ELEMENT , ELEMENT
At the beginning, the input queue is empty, and since no RHS is empty the only alternative is to shift:
queue            remaining input               action
---------------- ----------------------------- --------
                 ELEMENT , ELEMENT , ELEMENT   SHIFT
At the next step, the parser decides to reduce using production 2:
ELEMENT          , ELEMENT , ELEMENT           REDUCE 2
Now there is a List at the end of the queue, so the parser could reduce using production 1, but it decides not to based on the fact that it sees a , in the incoming input. This goes on for a while:
List             , ELEMENT , ELEMENT           SHIFT
List ,           ELEMENT , ELEMENT             SHIFT
List , ELEMENT   , ELEMENT                     REDUCE 3
List             , ELEMENT                     SHIFT
List ,           ELEMENT                       SHIFT
List , ELEMENT   --                            REDUCE 3
Now the lookahead token is the "end of input" pseudo-token. This time, it does decide to reduce:
List             --                            REDUCE 1
Start            --                            ACCEPT
and the parse is successful.
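Here is a small sketch of that trace as code (the token names and the hard-coded reduction checks are mine; a real LALR parser drives these decisions from its state machine rather than from pattern matches on the queue). The one lookahead decision from the trace, not reducing List to Start while the lookahead is ',', appears explicitly:

def parse(tokens):
    tokens = list(tokens) + ["$"]          # '$' marks end of input
    queue, i = [], 0
    while True:
        lookahead = tokens[i]
        if queue[-3:] == ["List", ",", "ELEMENT"]:
            queue[-3:] = ["List"]          # production 3: List -> List ',' ELEMENT
        elif queue[-1:] == ["ELEMENT"]:
            queue[-1:] = ["List"]          # production 2: List -> ELEMENT
        elif queue == ["List"] and lookahead == "$":
            queue = ["Start"]              # production 1: Start -> List (only at end of input)
        elif lookahead != "$":
            queue.append(lookahead)        # SHIFT
            i += 1
        else:
            break
        print(queue, tokens[i:])
    return queue == ["Start"]

print(parse(["ELEMENT", ",", "ELEMENT", ",", "ELEMENT"]))   # True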
That still leaves a few questions. To start with, how do we use the FIRST and FOLLOW sets?
As a simple answer, the FOLLOW set of a non-terminal cannot be computed without knowing the FIRST sets for the non-terminals which might follow that non-terminal. And one way we can decide whether or not a reduction should be performed is to see whether the lookahead is in the FOLLOW set for the target non-terminal of the reduction; if not, the reduction certainly should not be performed. That algorithm is sufficient for the simple grammar above, for example: the reduction of Start -> List is not possible with a lookahead of ,, because , is not in FOLLOW(Start). Grammars whose only conflicts can be resolved in this way are SLR grammars (where S stands for "Simple", which it certainly is).
For most grammars, that is not sufficient, and more analysis has to be performed. It is possible that a symbol might be in the FOLLOW set of a non-terminal, but not in the context which led to the current stack configuration. In order to determine that, we need to know more about how we got to the current configuration; the various possible analyses lead to LALR, IELR and canonical LR parsing, amongst other possibilities.
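To make the FOLLOW-set criterion concrete, here is a sketch that computes FOLLOW for the little list grammar above by iterating to a fixed point (the grammar encoding and helper names are mine, and since no rule here derives the empty string, the FIRST computation is deliberately simplified):

grammar = {
    "Start": [["List"]],
    "List":  [["ELEMENT"], ["List", ",", "ELEMENT"]],
}
nonterminals = set(grammar)

def first(symbol, seen=frozenset()):
    # FIRST of a single symbol; no production above is nullable, so this stays simple.
    if symbol not in nonterminals:
        return {symbol}
    out = set()
    for rhs in grammar[symbol]:
        if rhs[0] not in seen:
            out |= first(rhs[0], seen | {symbol})
    return out

follow = {nt: set() for nt in nonterminals}
follow["Start"].add("$")                   # end-of-input follows the start symbol
changed = True
while changed:                             # iterate until nothing new is added
    changed = False
    for lhs, rhss in grammar.items():
        for rhs in rhss:
            for i, sym in enumerate(rhs):
                if sym not in nonterminals:
                    continue
                new = first(rhs[i + 1]) if i + 1 < len(rhs) else follow[lhs]
                if not new <= follow[sym]:
                    follow[sym] |= new
                    changed = True

print(follow)   # ',' ends up in FOLLOW(List) but not in FOLLOW(Start)

Running it shows FOLLOW(Start) = {$} and FOLLOW(List) = {$, ','}, which is exactly why the SLR rule refuses to reduce Start -> List when the lookahead is ','.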
I know aⁿbⁿ for n > 0 is not regular by the pumping lemma, but I would imagine a*b* to be regular, since the runs of a's and b's don't have to be the same length. Is there a proof of it being regular or not?
Answer to your question:
imagine a*b* to be regular ... Is there a proof of it being regular or not?
No need to imagine: the expression a*b* is a regular expression (RE), and regular expressions exist only for regular languages. If a language is not regular, then no regular expression can describe it; and if a language is regular, then we can always represent it by some regular expression.
Yes, a*b* represents a regular language.
Language description: any number of a's followed by any number of b's (where "any number" means zero or more times, including the empty string ^). Some example strings are:
{^, a, b, aab, abbb, aabbb, ...}
The DFA for the RE a*b* is as follows:
    ┌─a─┐       ┌─b─┐
    ▼   │       ▼   │
──►((Q0))──b──►((Q1))
In the figure, `(())` denotes a final state, so both `Q0` and `Q1` are final states.
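Here is a quick simulation of that two-state DFA, just as a sketch (the transition table mirrors the figure; a missing entry acts as an implicit dead state for any a that appears after a b):

DELTA = {
    ("Q0", "a"): "Q0",   # loop on a
    ("Q0", "b"): "Q1",
    ("Q1", "b"): "Q1",   # loop on b
}
FINAL = {"Q0", "Q1"}

def accepts(s):
    state = "Q0"
    for ch in s:
        state = DELTA.get((state, ch))   # None plays the role of a dead state
        if state is None:
            return False
    return state in FINAL

for w in ["", "a", "b", "aab", "abbb", "aabbb", "aba"]:
    print(repr(w), accepts(w))           # everything except "aba" is accepted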
You need to understand the following basic concept:
What, basically, is a regular language? And why is an infinite language like `a*b*` regular, whereas languages like `{ aⁿbⁿ | n > 0 }` are not?
A language (a set of strings) is called a regular language if only a bounded (finite) amount of information needs to be stored at any point in time while processing strings of the language.
So, what is 'bounded' information?
For example, consider a fan's on/off switch. By looking at the switch we can tell whether the fan is on or off (this is bounded, or limited, information). But we cannot tell how many times the fan has been switched on or off in the past! To remember that, we would need a mechanism to store an unbounded amount of information to count "how many times" (like the meter in our cars/bikes).
The language { aⁿbⁿ | n > 0 } is not a regular language because n is unbounded (it can be arbitrarily large). To verify strings of aⁿbⁿ, we need to remember how many a symbols there are, and that requires unbounded storage, because the number of a symbols in a string can be arbitrarily large!
That means an automaton is only capable of processing strings of the language aⁿbⁿ if it has unbounded memory, e.g. a PDA.
a*b*, on the other hand, is regular by its nature, because the only restriction is a bounded one: b's may only come after a's (an a can't come after a b). That is why every string of this language can easily be processed (or recognized) by an automaton with finite memory, and finite automata are exactly the class of automata whose memory is finite. Yes, in a finite automaton, the finite amount of memory is present in the form of states.
(Memory in a finite automaton is present in the form of its states Q, and by definition an automaton of this kind can have only finitely many states; hence finite automata have finite memory. This is the reason the class of automata for regular languages is called finite automata. You can think of a finite automaton as a CPU without external memory, with only a finite set of registers to remember its internal state.)
Finite states ⇒ finite memory ⇒ only languages that need a bounded amount of memory at any point while processing a string can be handled ⇒ such languages are called regular languages.
The absence of external memory is the limitation of finite automata; or, put another way, this limitation of finite automata defines the class of languages called regular languages.
You should also read the other answer, "finiteness of regular language", to learn about the scope of regular languages.
Side note:
The language { aⁿbⁿ | n > 0 } is a subset of a*b*.
Also, a language such as { aⁿbⁿ | 10¹⁰⁰ > n > 0 } is regular: it is a large set, but still regular, because n is bounded, and hence a finite automaton and a regular expression are possible for this language.
You should also read: How to prove a language is regular?
The proof is: ((a*)(b*)) is a well-formed regular expression, hence it matches a regular language. a*b* is a syntactic shortening of the same expression.
Another proof: regular languages are closed under concatenation. a* is a regular language and b* is a regular language, therefore their concatenation, a*b*, is also a regular language.
You can build an automaton for it:
0 ->(a) 1
0 ->(b) 2
1 ->(a) 1
1 ->(b) 2
2 ->(b) 2
2 ->(a) 3
3 ->(a,b) 3
where only 3 is not an accepting state, and prove that the language is a*b*.
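As a quick sanity check (not a substitute for the proof), one can compare that four-state automaton against the regular expression a*b* on every short string; a sketch:

import re
from itertools import product

DELTA = {(0, "a"): 1, (0, "b"): 2, (1, "a"): 1, (1, "b"): 2,
         (2, "a"): 3, (2, "b"): 2, (3, "a"): 3, (3, "b"): 3}
ACCEPTING = {0, 1, 2}                    # only state 3 is non-accepting

def automaton_accepts(word):
    state = 0
    for ch in word:
        state = DELTA[(state, ch)]
    return state in ACCEPTING

pattern = re.compile(r"a*b*\Z")
for n in range(8):                       # all strings over {a, b} up to length 7
    for word in map("".join, product("ab", repeat=n)):
        assert automaton_accepts(word) == bool(pattern.match(word)), word
print("automaton and a*b* agree on all strings up to length 7")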
To prove that a language is regular, it is sufficient to show either:
1) There exists some DFA that recognizes it. In this case, the DFA is trivial.
2) The language can be expressed as a regular expression, as mentioned in another answer. a*b* is a regular expression to recognize this language.
A regular language is a language that can be expressed with a regular expression or by a deterministic or non-deterministic finite automaton (state machine).
A language is a set of strings made up of characters from a specified alphabet (a set of symbols). The regular languages are a subset of the set of all languages.
A closure property is a statement that a certain operation on languages, when applied to languages in a class (e.g., the regular languages), produces a result that is also in that class.
This RE describes the language of strings in which all a's (if any) come before all b's;
in other words, the language of strings not containing the substring "ba".
Note that a particular regular language need not be a subset of a particular context-free language (even though the class of regular languages is a subset of the class of context-free languages). For example, a*b* is regular, comprising all strings made of a run of a's followed by a run of b's; it is not a subset of { aⁿbⁿ } but a superset.
Can you check on this: https://dl.dropbox.com/u/25439537/finite%20automata.png
This is a checked homework, so don't worry. I just want to clarify whether my answer is correct or not, because it is marked by my teacher as incorrect.
My answer is ((a+b)(a+b))*a
The first (a+b) signifies the upper arrows. The second (a+b) signifies the lower arrows. The last 'a' tells us that it should always end in 'a'.
I just want to record evidence from a number of experts so that I can show it to my teacher.
I believe your answer is correct.
Let's consider the whole process in two parts: (1) start at the start state and come back to the start state; and (2) go from the start state to the end state and accept. Obviously, part (1) is a loop.
For (1), starting from the start state, we read either b or a. For b, it's b(a+b) to get back; for a, it's a(a+b). So (1) is b(a+b) + a(a+b), which is (a+b)(a+b).
For (2), it's a.
So, the final result is (loop in (1))* (2) i.e. ( (a+b)(a+b) )* a.
Following the description above, you can also come up with a proof of the equivalence between the two. Proof part (a): every string accepted by the automaton is in the set ((a+b)(a+b))*a; part (b): every string in the set ((a+b)(a+b))*a is accepted by the automaton.
Your answer is wrong, because it doesn't provide for strings starting with b.
The path (start) -> b -> a+b -> a -> (end) is accepted by your finite automaton, but not by your regex. The simplest counterexample to your answer being correct is the regex's rejection of the string "baba".
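For what it's worth, the claim about the regex itself is easy to check mechanically (a sketch; the + in the post's notation is alternation, written | in Python's re syntax):

import re

# ((a+b)(a+b))*a with '+' (choice) spelled '|' for Python's regex engine.
pattern = re.compile(r"(?:(?:a|b)(?:a|b))*a\Z")

for w in ["a", "aba", "baa", "bab", "baba"]:
    print(w, bool(pattern.match(w)))
# ((a+b)(a+b))* only produces even-length strings, so the whole regex matches
# exactly the odd-length strings ending in 'a'; "baba" is rejected.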
By the way, if the teacher gave you that diagram without the "end" state having two concentric circles (to indicate an accept state), it was probably a trick question. Having no accept state means your automaton rejects everything. The best way to describe that would be to just write down {} (the empty set).