Grammar for recursive descent parsing

Is there an easy way to tell whether a simple grammar is suitable for recursive descent? Is eliminating left recursion and left factoring the grammar enough to achieve this?

Not necessarily.
To build a recursive descent parser (without backtracking), you need to eliminate or resolve all predict conflicts. So one definitive test is to see if the grammar is LL(1); LL(1) grammars have no predict conflicts, by definition. Left-factoring and left-recursion elimination are necessary for this task, but they might not be sufficient, since a predict conflict might be hiding behind two competing non-terminals:
list ::= item list'
list' ::= ε
| ';' item list'
item ::= expr1
| expr2
expr1 ::= ID '+' ID
expr2 ::= ID '(' list ')'
The problem with the above (or, at least, one problem) is that when the parser expects an item and sees an ID, it can't know which of expr1 and expr2 to try. (That's a predict conflict: Both non-terminals could be predicted.) In this particular case, it's pretty easy to see how to eliminate that conflict, but it's not really left-factoring since it starts by combining two non-terminals. (And in the full grammar this might be excerpted from, combining the two non-terminals might be much more difficult.)
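For illustration, here is one way the conflict might be resolved (my sketch, with a hypothetical item-tail non-terminal): merge expr1 and expr2 into item, then left-factor the shared ID:
item ::= ID item-tail
item-tail ::= '+' ID
| '(' list ')'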
In the general case, there is no algorithm which can turn an arbitrary grammar into an LL(1) grammar, nor even one which can decide whether the language recognised by that grammar has an LL(1) grammar at all. (However, it's easy to tell whether a given grammar itself is LL(1).) So there's always going to be some art and/or experimentation involved.
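As an aside, testing a grammar for predict conflicts is mechanical. Here is a simplified sketch of such a test in Python (my own illustration, not from the answer; it only looks at the first symbol of each alternative and ignores ε-productions and FOLLOW sets, both of which a real LL(1) test must also handle):
from itertools import combinations

def first(symbol, grammar, seen=None):
    # FIRST set of a single grammar symbol; symbols not defined
    # in the grammar are treated as terminals.
    if symbol not in grammar:
        return {symbol}
    seen = seen or set()
    if symbol in seen:               # guard against (left-)recursive cycles
        return set()
    result = set()
    for production in grammar[symbol]:
        result |= first(production[0], grammar, seen | {symbol})
    return result

def predict_conflicts(grammar):
    # Yield every pair of alternatives for the same non-terminal whose
    # FIRST sets overlap -- i.e. every predict conflict.
    for nt, productions in grammar.items():
        for p, q in combinations(productions, 2):
            if first(p[0], grammar) & first(q[0], grammar):
                yield nt, p, q

# The grammar above, with each production written as a tuple of symbols
# (the ε-production of list' is omitted, since this sketch ignores ε):
grammar = {
    'list':  [('item', "list'")],
    "list'": [(';', 'item', "list'")],
    'item':  [('expr1',), ('expr2',)],
    'expr1': [('ID', '+', 'ID')],
    'expr2': [('ID', '(', 'list', ')')],
}

for nt, p, q in predict_conflicts(grammar):
    print(nt, p, q)    # reports the item: expr1 / expr2 conflict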
I think it's worth adding that you don't really need to eliminate left recursion in a practical recursive descent parser, since you can usually turn it into a while loop instead of recursion. For example, leaving aside the question of the two expr types above, the original grammar in an extended BNF with repetition operators might be something like
list ::= item (';' item)*
Which translates into something like:
def parse_list():
    # list ::= item (';' item)*
    parse_item()
    while peek(';'):      # a ';' in the lookahead predicts another item
        match(';')
        parse_item()
(Error checking and AST building omitted.)

Related

Does Boost Spirit X3 support left recursion?

One of the shortcomings/implementation challenges of recursive-descent parsers is dealing with left recursion, e.g.
<expr> := <expr> '+' <num>
| <num>
The parser needs to parse an expr before it can parse an expr...
Now, Boost::Spirit::X3 generates recursive descent parsers. Does that mean it doesn't support left-recursion, or does it have workarounds for it?
Note: Left recursion can (often? always?) be eliminated from the grammar beforehand (like in the solution to this question), but that's not what I'm asking.
Spirit doesn't rewrite your grammar at all; it runs exactly what you wrote.
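For background (this is about hand-written recursive descent generally, not Spirit's API): translated naively, the left-recursive rule above becomes a procedure that calls itself before consuming any input, so it never terminates; the usual workaround rewrites the rule as iteration. A sketch with hypothetical peek/match helpers:
def parse_expr_naive():
    parse_expr_naive()     # recurses before consuming a token: infinite recursion
    match('+')
    parse_num()

def parse_expr():
    parse_num()            # expr ::= num ('+' num)*
    while peek('+'):
        match('+')
        parse_num()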

Which grammars can be parsed using recursive descent without backtracking?

According to "Recursive descent parser" on Wikipedia, recursive descent without backtracking (a.k.a. predictive parsing) is only possible for LL(k) grammars.
Elsewhere, I have read that the implementation of Lua uses such a parser. However, the language is not LL(k). In fact, Lua is inherently ambiguous: does a = f(g)(h)[i] = 1 mean a = f(g); (h)[i] = 1 or a = f; (g)(h)[i] = 1? This ambiguity is resolved by greediness in the parser (so the above is parsed as the erroneous a = f(g)(h)[i]; = 1).
This example seems to show that predictive parsers can handle grammars which are not LL(k). Is it true they can, in fact, handle a superset of LL(k)? If so, is there a way to find out whether a given grammar is in this superset?
In other words, if I am designing a language which I would like to parse using a predictive parser, do I need to restrict the language to LL(k)? Or is there a looser restriction I can apply?
TL;DR
For a suitable definition of a recursive descent parser, it is absolutely correct that only LL(k) languages can be parsed by recursive descent.
Lua can be parsed with a recursive descent parser precisely because the language is LL(k); that is, an LL(k) grammar exists for Lua. [Note 1]
1. An LL(k) language may have non-LL(k) grammars.
A language is LL(k) if there is an LL(k) grammar which recognizes the language. That doesn't mean that every grammar which recognizes the language is LL(k); there might be any number of non-LL(k) grammars which recognize the language. So the fact that some grammar is not LL(k) says absolutely nothing about the language itself.
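For example (an illustrative sketch), exp ::= exp '+' ID | ID is left-recursive and therefore not LL(k) for any k, but the language it describes has the LL(1) grammar
exp ::= ID exp-tail
exp-tail ::= '+' ID exp-tail
| ε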
2. Many practical programming languages are described with an ambiguous grammar.
In formal language theory, a language is inherently ambiguous only if every grammar for the language is ambiguous. It is probably safe to say that no practical programming language is inherently ambiguous, since practical programming languages are deterministically parsed (somehow). [Note 2].
Because writing a strictly non-ambiguous grammar can be tedious, it is pretty common for the language documentation to provide an ambiguous grammar, along with textual material which indicates how the ambiguities are to be resolved.
For example, many languages (including Lua) are documented with a grammar which does not explicitly include operator precedence, allowing a simple rule for expressions:
exp ::= exp Binop exp | Unop exp | term
That rule is clearly ambiguous, but given a list of operators, their relative precedences and an indication of whether each operator is left- or right-associative, the rule can be mechanically expanded into an unambiguous expression grammar. Indeed, many parser generators allow the user to provide the precedence declarations separately, and perform the mechanical expansion in the course of producing the parser. The resulting parser, it should be noted, is a parser for the disambiguated grammar so the ambiguity of the original grammar does not imply that the parsing algorithm is capable of dealing with ambiguous grammars.
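For instance (a sketch, assuming just two left-associative operators, '+' at lower precedence and '*' at higher precedence), the ambiguous rule above expands into something like
exp ::= exp '+' term | term
term ::= term '*' factor | factor
factor ::= Unop factor | atom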
Another common example of ambiguous reference grammars which can be mechanically disambiguated is the "dangling else" ambiguity found in languages like C (but not in Lua). The grammar:
if-statement ::= "if" '(' exp ')' stmt
| "if" '(' exp ')' stmt "else" stmt
is certainly ambiguous; the intention is that the parse be "greedy". Again, the ambiguity is not inherent. There is a mechanical transformation which produces an unambiguous grammar, something like the following:
matched-statement ::= matched-if-stmt | other-statement
statement ::= matched-statement | unmatched-if-stmt
matched-if-stmt ::= "if" '(' exp ')' matched-statement "else" matched-statement
unmatched-if-stmt ::= "if" '(' exp ')' statement
| "if" '(' exp ')' matched-statement "else" unmatched-if-stmt
It is quite common for parser generators to implicitly perform this transformation. (For an LR parser generator, the transformation is actually implemented by deleting reduce actions if they conflict with a shift action. This is simpler than transforming the grammar, but it has exactly the same effect.)
So Lua (and other programming languages) are not inherently ambiguous, and therefore they can be parsed with parsing algorithms which require unambiguous deterministic grammars. Indeed, it might even be a little surprising that there are languages for which every possible grammar is ambiguous. As is pointed out in the Wikipedia article cited above, the existence of such languages was proven by Rohit Parikh in 1961; a simple example of an inherently ambiguous context-free language is
{a^n b^m c^m d^n | n,m ≥ 0} ∪ {a^n b^n c^m d^m | n,m ≥ 0}.
3. Greedy LL(1) parsing of Lua assignment and function call statements
As with the dangling-else construction above, the disambiguation of Lua statement sequences is performed by allowing only the greedy parse. Intuitively, the procedure is straightforward; it is based on forbidding two consecutive statements (without an intervening semicolon) where the second one starts with a token which might continue the first one.
In practice, it is not really necessary to perform this transformation; it can be done implicitly during the construction of the parser. So I'm not going to bother to generate a complete Lua grammar here. But I trust that the small subset of the Lua grammar here is sufficient to illustrate how the transformation can work.
The following subset (largely based on the reference grammar) exhibits precisely the ambiguity indicated in the OP:
program ::= statement-list
statement-list ::= Ø
| statement-list statement
statement ::= assignment | function-call | block | ';'
block ::= "do" statement-list "end"
assignment ::= var '=' exp
exp ::= prefixexp [Note 3]
prefixexp ::= var | '(' exp ')' | function-call
var ::= Name | prefixexp '[' exp ']'
function-call ::= prefixexp '(' exp ')'
(Note: I'm using Ø to represent the empty string, rather than ε, λ, or %empty.)
The Lua grammar as given is left-recursive, so it is clearly not LL(k) (independent of the ambiguity). Removing the left recursion can be done mechanically; I've done enough of it here to demonstrate that the subset is LL(1). Unfortunately, the transformed grammar does not preserve the structure of the parse tree, which is a classic problem with LL(k) grammars. It is usually simple to reconstruct the correct parse tree during a recursive descent parse, and I'm not going to go into the details.
It is simple to provide an LL(1) version of exp, but the result eliminates the distinction between var (which can be assigned to) and function-call (which cannot):
exp ::= term exp-postfix
exp-postfix ::= Ø
| '[' exp ']' exp-postfix
| '(' exp ')' exp-postfix
term ::= Name | '(' exp ')'
But now we need to recreate the distinction in order to be able to parse both assignment statements and function calls. That's straightforward (but does not promote understanding of the syntax, IMHO):
a-or-fc-statement ::= term a-postfix
a-postfix ::= '=' exp
| ac-postfix
c-postfix ::= Ø
| ac-postfix
ac-postfix ::= '(' exp ')' c-postfix
| '[' exp ']' a-postfix
In order to make the greedy parse unambiguous, we need to ban (from the grammar) any occurrence of S1 S2 where S1 ends with an exp and S2 starts with a '('. In effect, we need to distinguish different types of statement, depending on whether or not the statement starts with a (, and independently, whether or not the statement ends with an exp. (In practice, there are only three types because there are no statements which start with a ( and do not end with an exp. [Note 4])
statement-list ::= Ø
| s1 statement-list
| s2 s2-postfix
| s3 s2-postfix
s2-postfix ::= Ø
| s1 statement-list
| s2 s2-postfix
s1 ::= block | ';'
s2 ::= Name a-postfix
s3 ::= '(' exp ')' a-postfix
4. What is recursive descent parsing, and how can it be modified to incorporate disambiguation?
In the most common usage, a predictive recursive descent parser is an implementation of the LL(k) algorithm in which each non-terminal is mapped to a procedure. Each non-terminal procedure starts by using a table of possible lookahead sequences of length k to decide which alternative production for that non-terminal to use, and then simply "executes" the production symbol by symbol: terminal symbols cause the next input symbol to be discarded if it matches or an error to be reported if it doesn't match; non-terminal symbols cause the non-terminal procedure to be called.
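For concreteness, here is roughly what the procedure for statement (from the grammar subset in section 3) might look like with k = 1, written in the style of the earlier parse_list example; peek, peek_name, match and error are hypothetical helpers:
def parse_statement():
    if peek('do'):                    # FIRST(block) = {'do'}
        parse_block()
    elif peek(';'):                   # the empty statement
        match(';')
    elif peek_name() or peek('('):    # assignment and function-call share these lookaheads,
        parse_a_or_fc_statement()     # so both go through the combined rule from section 3
    else:
        error("expected a statement")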
The tables of lookahead sequences can be constructed using FIRST_k and FOLLOW_k sets. (A production A → ω is mapped to a sequence α of terminals if α ∈ FIRST_k(ω FOLLOW_k(A)).) [Note 5]
With this definition of recursive descent parsing, a recursive descent parser can handle precisely and solely LL(k) languages. [Note 6]
However, the alignment of LL(k) and recursive descent parsers ignores an important aspect of a recursive descent parser, which is that it is, first and foremost, a program normally written in some Turing-complete programming language. If that program is allowed to deviate slightly from the rigid structure described above, it could parse a much larger set of languages, even languages which are not context-free. (See, for example, the C context-sensitivity referenced in Note 2.)
In particular, it is very easy to add a "default" rule to a table mapping lookaheads to productions. This is a very tempting optimization because it considerably reduces the size of the lookahead table. Commonly, the default rule is used for non-terminals whose alternatives include an empty right-hand side, which in the case of an LL(1) grammar would be mapped to any symbol in the FOLLOW set for the non-terminal. In that implementation, the lookahead table only includes lookaheads from the FIRST set, and the parser automatically produces an empty right-hand side, corresponding to an immediate return, for any other symbol. (As with the similar optimisation in LR(k) parsers, this optimization can delay recognition of errors but they are still recognized before an additional token is read.)
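To see the difference, consider a tiny nullable rule A ::= 'x' A | Ø with FOLLOW(A) = {'y'} (a sketch with the same hypothetical helpers as above). A strict LL(1) procedure checks the lookahead against FOLLOW(A) before choosing the empty production; the "default rule" version keeps only the FIRST entry:
def parse_A_strict():
    if peek('x'):
        match('x')
        parse_A_strict()
    elif peek('y'):        # the empty production, only for tokens in FOLLOW(A)
        pass
    else:
        error("unexpected token")

def parse_A_default():
    if peek('x'):          # the lookahead table holds only FIRST entries
        match('x')
        parse_A_default()
    # default rule: any other lookahead selects the empty production; an error,
    # if there is one, surfaces later but before another token is read
If 'x' also appeared in FOLLOW(A), the strict construction would report a predict conflict, while the default-rule version would silently consume the 'x' -- which is exactly the greedy behaviour described below.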
An LL(1) grammar cannot include a nullable non-terminal whose FIRST and FOLLOW sets contain a common element. However, if the recursive descent parser uses the "default rule" optimization, that conflict will never be noticed during the construction of the parser. In effect, ignoring the conflict allows the construction of a "greedy" parser from (certain) non-deterministic grammars.
That's enormously convenient because, as we have seen above, producing unambiguous greedy grammars is a lot of work and does not lead to anything even vaguely resembling a clear exposition of the language. But the modified recursive descent parsing algorithm is not more powerful; it simply parses an equivalent SLL(k) grammar (without actually constructing that grammar).
I do not intend to provide a complete proof of the above assertion, but a first step is to observe that any non-terminal can be rewritten as a disjunction of new non-terminals, each with a single distinct FIRST token, and possibly a new non-terminal with an empty right-hand side. It is then "only" necessary to remove the conflicting terminals from the FOLLOW sets of nullable non-terminals by creating new disjunctions.
Notes
1. Here, I'm talking about the grammar which operates on a tokenized stream, in which comments have been removed and other constructs (such as strings delimited by "long brackets") have been reduced to a single token. Without this transformation, the language would not be LL(k), since comments -- which can be arbitrarily long -- interfere with visibility of the lookahead token. This also allows me to sidestep the question of how long brackets can be recognised with an LL(k) grammar, which is not particularly relevant to this question.
2. There are programming languages which cannot be deterministically parsed by a context-free grammar. The most notorious example is probably Perl, but there is also the well-known C construct (x)*y, which can only be parsed deterministically using information about the symbol x -- whether it is a variable name or a type alias -- and the difficulties of correctly parsing C++ expressions involving templates. (See, for example, the questions Why can't C++ be parsed with a LR(1) parser? and Is C++ context-free or context-sensitive?)
3. For simplicity, I've removed the various literal constants (strings, numbers, booleans, etc.) as well as table constructors and function definitions. These tokens cannot be the target of a function-call, which means that an expression ending with one of these tokens cannot be extended with a parenthesized expression. Removing them simplifies the illustration of disambiguation; the procedure is still possible with the full grammar, but it is even more tedious.
4. With the full grammar, we will need to also consider expressions which cannot be extended with a (, so there will be four distinct options.
5. There are deterministic LL(k) grammars which fail to produce unambiguous parsing tables using this algorithm, which Sippu & Soisalon-Soininen call the Strong LL(k) algorithm. It is possible to augment the algorithm using an additional parsing state, similar to the state in an LR(k) parser. This might be convenient for particular grammars, but it does not change the definition of LL(k) languages. As Sippu & Soisalon-Soininen demonstrate, it is possible to mechanically derive from any LL(k) grammar an SLL(k) grammar which produces exactly the same language. (See Theorem 8.47 in Volume 2.)
6. The recursive descent algorithm is a precise implementation of the canonical stack-based LL(k) parser, in which the parser stack is implicitly constructed during the execution of the parser, using the combination of the current continuation and the stack of activation records.

Difference between: 'Eliminate left-recursion' and 'construct an equivalent unambiguous grammar'

For example:
R → R bar R | R R | R star | (R) | a | b
construct an equivalent unambiguous grammar:
R → S | R bar S
S → T | S T
T → U | T star
U → a | b | (R)
How would one eliminate left recursion for R → R bar R | R R | R star | (R) | a | b?
What's the difference between eliminating left recursion and constructing an equivalent unambiguous grammar?
An unambiguous grammar is one where, for each string in the language, there is exactly one way to derive it from the grammar. In the context of compiler construction, the problem with an ambiguous grammar is that it is not obvious from the grammar what the parse tree for a given input string should be. Some tools solve this with their own rules for resolving ambiguities, while others simply require the grammar to be unambiguous.
A left-recursive grammar is one where the derivation for a given non-terminal can produce that same non-terminal again without first producing a terminal. This leads to infinite loops in recursive-descent-style parsers, but is no problem for shift-reduce parsers.
Note that an unambiguous grammar can still be left-recursive, and a grammar without left recursion can still be ambiguous. Also note that, depending on your tools, you may only need to remove ambiguity but not left recursion, or you may need to remove left recursion but not ambiguity (though an unambiguous grammar is generally preferable).
So the difference is that eliminating left recursion and eliminating ambiguity solve different problems and are necessary in different situations.
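For illustration (a sketch on the grammar above): applying the standard left-recursion elimination to S → T | S T yields
S → T S'
S' → T S' | ε
which recognises the same strings without left recursion. Note that this transformation addresses only the left recursion; resolving ambiguity is a separate task.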

Is this grammar LL(1)?

I have derived the following grammar:
S -> a | aT
T -> b | bR
R -> cb | cbR
I understand that in order for a grammar to be LL(1) it has to be unambiguous and right-recursive. The problem is that I do not fully understand the concept of left-recursive and right-recursive grammars. I do not know whether or not the grammar above is right-recursive. I would really appreciate a simple explanation of left-recursive and right-recursive grammars, and whether my grammar is LL(1).
Many thanks.
This grammar is not LL(1). In an LL(1) parser, it should always be possible to determine which production to use next based on the current nonterminal symbol and the next token of the input.
Let's look at this production, for example:
S → a | aT
Now, suppose that I told you that the current nonterminal symbol is S and the next symbol of input was an a. Could you determine which production to use? Unfortunately, without more context, you couldn't: perhaps you're supposed to use S → a, and perhaps you're supposed to use S → aT. Using similar reasoning, you can see that all the other productions have similar problems.
This doesn't have anything to do with left or right recursion; rather, it's because no two productions for the same nonterminal in an LL(1) grammar can have a nonempty common prefix. In fact, a simple heuristic for checking whether a grammar is not LL(1) is to see if you can find two production rules like this.
Hope this helps!
The grammar has only a single recursive rule: the last one, where R is the symbol on the left and also appears on the right. It is right-recursive because in that rule the reference to R on the right-hand side is the rightmost symbol.
The language is LL(1). We know this because we can easily construct a recursive descent parser that uses no backtracking and at most one token of lookahead.
But such a parser would be based on a slightly modified version of the grammar.
For instance, the two productions S -> a and S -> a T could be merged into a single one that can be expressed in EBNF as S -> a [ T ] (S derives a, followed by an optional T). This rule can be handled by a single parsing function for recognizing S.
The function matches a and then looks for the optional T, which would be indicated by the next input symbol being b.
We can write an LL(1) grammar for this, along these lines:
S -> a T_opt
T_opt -> b R_opt
T_opt -> <empty>
... et cetera
The optionality of T is handled explicitly, by making T (which we rename to T_opt) capable of deriving the empty string, and then condensing to a single rule for S, so that we don't have two phrases that both start with a.
So in summary, the language is LL(1), but the given grammar for it isn't. Since the language is LL(1) it is possible to find another grammar which is LL(1), and that grammar is not far off from the given one.
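To make this concrete, here is a minimal sketch of such a parser in Python (my own illustration, following the modified grammar above; the token handling is deliberately simple):
def parse(tokens):
    # Predictive parser for:  S -> a T_opt
    #                         T_opt -> b R_opt | <empty>
    #                         R_opt -> c b R_opt | <empty>
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def match(expected):
        nonlocal pos
        if peek() != expected:
            raise SyntaxError(f"expected {expected!r}, got {peek()!r}")
        pos += 1

    def parse_S():
        match('a')
        if peek() == 'b':          # FIRST(b R_opt) = {b}; anything else takes the empty branch
            match('b')
            while peek() == 'c':   # the right recursion R_opt -> c b R_opt, written as a loop
                match('c')
                match('b')

    parse_S()
    if peek() is not None:
        raise SyntaxError(f"unexpected trailing input {peek()!r}")
    return True

# Strings in the language: a, ab, abcb, abcbcb, ...
assert parse(list("abcbcb"))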

LR(1) grammar: how to tell? examples for/against?

I'm currently having a look at GNU Bison to parse program code (or, actually, to extend a program that uses Bison for doing that). I understand that Bison can only (or: best) handle LR(1) grammars, i.e. a special form of context-free grammars; and I believe I understand the rules of context-free and LR(1) grammars.
However, somehow I'm lacking a good understanding of the notion of an LR(1) grammar. Assume SQL, for instance. SQL incorporates -- I believe -- a context-free grammar. But is it also an LR(1) grammar? How could I tell? And if not, what would violate the LR(1) rules?
LR(1) means that you can choose the proper rule to reduce by, knowing all the tokens that will be reduced plus one token of lookahead after them. There are no problems with AND in boolean queries or with the BETWEEN operation. The following grammar, for example, sketches both uses of "and" (note that, as written, the binary and_expr rule leaves associativity ambiguous; an LR parser generator resolves that with a precedence/associativity declaration):
expr ::= and_expr | between_expr | variable
and_expr ::= expr "and" expr
between_expr ::= "between" expr "and" expr
variable ::= x
I believe that the whole SQL grammar is even simpler than LR(1). Probably LR(0), or even LL(k).
Some of my customers have created SQL and DB2 parsers using my LALR(1) parser generator and used them successfully for many years. The grammars they sent me are LALR(1) (except for the shift-reduce conflicts, which are resolved the way you would want). For the purists: not strictly LALR(1), but they work fine in practice; no GLR or LR(1) needed. You don't even need the more powerful LR(1), AFAIK.
I think the best way to figure this out is to find an SQL grammar and a good LALR/LR(1) parser generator and see if you get a conflict report. As I remember, an SQL grammar (a little out of date) that is LALR(1) is available in this download: http://lrstar.tech/downloads.html
LRSTAR is an LR(1) parser generator that will give you a conflict report. It's also LR(*) if you cannot resolve the conflicts.
