LALR vs LL parser

I've been using lex/yacc and now I'm trying to switch to ANTLR. The major concern is that ANTLR is an LL(*) parser, unlike yacc, which is LALR. I'm used to thinking bottom-up, and I don't know exactly what the advantage of LL grammars is. People say that LL grammars are easier to understand and more popular these days. But it seems that LR parsers are more powerful, e.g. LL parsers are incapable of dealing with left recursion, although there seem to be some workarounds.
So the question is what is the advantage of LL grammars over LALR? I'd appreciate it if somebody could give me some examples. Links to useful articles would be great, too.
Thanks for your help in advance!
(I see this is a great resource: What advantages do LL parsers have over LR parsers?, but it would've been better with some examples.)

LR parsers are strictly more powerful than LL parsers, and in addition, LALR parsers can run in O(n) like LL parsers. So you won't find any functional advantages of LL over LR.
Thus, the only advantage of LL is that LR state machines are quite a bit more complex and difficult to understand, and LR parsers themselves are not especially intuitive. Automatically generated LL parser code, on the other hand, can be very easy to understand and debug.

The greatest advantage I see to LL parsers is that they are so easy to understand and implement! You can hand-write recursive descent parsers with code that closely matches the grammar.
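For instance, here is a minimal hand-written recursive descent parser in Python (an illustrative sketch with a made-up toy grammar); notice how each rule becomes one function whose body mirrors the shape of the rule:

    # Grammar (hypothetical):
    #   expr -> term ('+' term)*
    #   term -> NUMBER | '(' expr ')'

    def parse_expr(toks, i=0):
        value, i = parse_term(toks, i)
        while i < len(toks) and toks[i] == "+":   # the ('+' term)* loop
            rhs, i = parse_term(toks, i + 1)
            value += rhs
        return value, i

    def parse_term(toks, i):
        if toks[i] == "(":                        # term -> '(' expr ')'
            value, i = parse_expr(toks, i + 1)
            if toks[i] != ")":
                raise SyntaxError("expected ')'")
            return value, i + 1
        if toks[i].isdigit():                     # term -> NUMBER
            return int(toks[i]), i + 1
        raise SyntaxError(f"unexpected token {toks[i]!r}")

    # parse_expr("( 1 + 2 ) + 3".split())  ->  (6, 7)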
LR parsers are generally considered more powerful and also much faster, but there are a few trade-offs that I know of:
LR parsers can only use synthesized attributes; they cannot pass inherited attributes down to sub-rules (see the sketch after this list).
Actions in an LR grammar can cause grammar nondeterminism, but not in LL.
However, you will find that LL(*) parsers are also very powerful.
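To illustrate the first trade-off, here is a Python sketch (hypothetical grammar and names) of an inherited attribute in a recursive descent parser: the declared type flows down into the sub-rule as an ordinary argument, whereas an LR parser's actions only see values synthesized from already-reduced children:

    # Grammar (hypothetical):
    #   decl    -> TYPE id_list
    #   id_list -> ID (',' ID)*

    def parse_decl(tokens):
        typ = tokens[0]                          # e.g. "int"
        return parse_id_list(tokens[1:], typ)    # 'typ' is the inherited attribute

    def parse_id_list(tokens, typ, symbols=None):
        symbols = {} if symbols is None else symbols
        symbols[tokens[0]] = typ                 # each ID receives the inherited type
        if len(tokens) > 2 and tokens[1] == ",":
            parse_id_list(tokens[2:], typ, symbols)
        return symbols

    # parse_decl("int a , b , c".split())  ->  {'a': 'int', 'b': 'int', 'c': 'int'}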

Related

What happens if you directly use LL grammar for an LR parser, after making basic syntactical changes?

Sorry for the amateurish question. I have an LL grammar and I want to write an LR grammar. Can I take the LL grammar, make minimal syntactic changes so that it fits an LR parser, and use it? Is that a bad idea? Are there structural differences between them that don't translate?
All LL(1) grammars are LR(1), so if you had an LR(1) parser generator, you could definitely use your LL(1) grammar, assuming the BNF syntax is that used by the parser generator.
But you probably don't have an LR(1) parser generator, but rather one which can only handle the LALR(1) subset of LR(1) grammars. All the same, you're probably fine: "most" LL(1) grammars are also LALR(1), and it's pretty rare to find a useful LL(1) grammar which is not. (Such grammars exist, but they are unlikely to arise in practice.)
So it's probably possible. But it might not be a good idea.
Top-down parsers can't handle left recursion, and without left recursion you can't write a grammar which represents left-associative operators, which is to say most arithmetic operators. This problem is usually solved in practice by using a right-recursive grammar along with an idiosyncratic evaluation function which corrects the associativity afterwards, as in the sketch below. That's less than ideal. Also, LL grammars created by mechanically removing left recursion tend to be very hard to read.
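Here is a Python sketch of that workaround (toy grammar, hypothetical names): the parser iterates instead of left-recursing, then folds the collected operands left-to-right so that 7 - 2 - 1 means (7 - 2) - 1 = 4 rather than 7 - (2 - 1) = 6:

    # Grammar (hypothetical, LL-friendly): expr -> term ('-' term)*

    def parse_expr(tokens):
        operands = [parse_term(tokens)]
        while tokens and tokens[0] == "-":
            tokens.pop(0)
            operands.append(parse_term(tokens))
        result = operands[0]
        for x in operands[1:]:        # left fold restores left associativity
            result = result - x
        return result

    def parse_term(tokens):
        return int(tokens.pop(0))

    # parse_expr("7 - 2 - 1".split())  ->  4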
So you are probably best off using a grammar designed for LR parsing. But you probably don't have to.

CKY for Parsing Programming Languages

Is it a good idea to use the CKY chart parsing algorithm to parse the syntax of programming languages (knowing that it is mostly used to parse the syntax of natural language)?
CKY can parse any context-free language, but it runs in O(n^3) time, which compares poorly with the alternatives. CKY also requires the grammar to be in Chomsky Normal Form, and the conversion can blow up the size of the grammar, which hurts running time too. It's an okay approach for a quick-and-dirty parser, but you'll run into issues when you try to scale up to larger inputs or more complex grammars.
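For concreteness, here is a compact CKY recognizer in Python (a sketch; the toy CNF grammar below, recognizing a^n b^n, is made up for the example):

    def cky_recognizes(words, binary_rules, unary_rules, start="S"):
        n = len(words)
        # table[i][j] = set of nonterminals that derive words[i:j]
        table = [[set() for _ in range(n + 1)] for _ in range(n + 1)]
        for i, w in enumerate(words):                  # A -> 'terminal'
            for a, t in unary_rules:
                if t == w:
                    table[i][i + 1].add(a)
        for span in range(2, n + 1):                   # A -> B C
            for i in range(n - span + 1):
                j = i + span
                for k in range(i + 1, j):              # every split point
                    for a, (b, c) in binary_rules:
                        if b in table[i][k] and c in table[k][j]:
                            table[i][j].add(a)
        return start in table[0][n]

    # CNF grammar for a^n b^n:  S -> A B | A S1 ; S1 -> S B ; A -> 'a' ; B -> 'b'
    binary = [("S", ("A", "B")), ("S", ("A", "S1")), ("S1", ("S", "B"))]
    unary = [("A", "a"), ("B", "b")]
    # cky_recognizes("a a b b".split(), binary, unary)  ->  True

The triple loop over spans and split points is where the O(n^3) comes from.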
If you're looking for an understandable parsing algorithm that's relatively straightforward to implement, take a look at Parsing Expression Grammars (PEGs). They can recognize a large subset of context-free languages, plus some languages with limited context sensitivity. Once you have a working PEG parser it's easy to add memoization, which gives you a Packrat Parser that runs in linear time. The academic papers on PEGs, Packrat, and this extension to allow left-recursive grammars are all quite understandable.
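The memoization step is small enough to sketch in Python (the toy grammar and names here are made up): caching each (rule, position) pair means every rule runs at most once per input position, which is what bounds a Packrat parser to linear time:

    from functools import lru_cache

    # Grammar (hypothetical):  sum -> num '+' sum / num ;  num -> [0-9]+

    def make_parser(text):
        @lru_cache(maxsize=None)              # the Packrat memo table
        def num(pos):
            end = pos
            while end < len(text) and text[end].isdigit():
                end += 1
            return end if end > pos else None # new position, or None on failure

        @lru_cache(maxsize=None)
        def sum_(pos):
            after = num(pos)                  # first alternative: num '+' sum
            if after is not None and after < len(text) and text[after] == "+":
                rest = sum_(after + 1)
                if rest is not None:
                    return rest
            return num(pos)                   # ordered fallback: bare num
        return sum_

    # make_parser("1+20+3")(0)  ->  6, i.e. all six characters matched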

Linear-time general parser

I've been reading Dick Grune's Parsing Techniques (1st edition) for quite a while now. The book is from the mid-90s, and the author argues that no such parsing method (linear-time general parsing) had been discovered to date:
"we should like to have a linear-time general parsing method.
Unfortunately no such method has been discovered to date." pg 76
Has anyone developed such a method?
No such method has been devised. As far as I can tell, the CYK algorithm remains the general parsing algorithm with the best worst-case performance, O(n^3).
GLR parsers are O(n^3) in the worst case, but offer linear performance where the grammar is deterministic. Many grammars have this property, so in effect you get linear-time parsing in practice.
We've managed to build parsers for many real, complicated languages using a GLR parser, even the famously hard-to-parse C++.
Packrat parsing with full memoisation is guaranteed O(n), but it comes with a relatively large constant factor.
I am developing a parser (JoeSon) written in CoffeeScript, and I believe it is O(n) for most interesting grammars.
I think it is essentially a Packrat parser, but with the ability to bypass the cache for some rules, which I think is necessary to write whitespace-sensitive grammars.
Packrat does not parse all context-free grammars. It has difficulty with counting problems, such as the grammar S = x S x | x (demonstrated below). But these kinds of grammars are also difficult for humans to parse.
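A quick Python sketch of that failure mode: as a CFG, S = x S x | x generates every odd-length string of x's, but a PEG parser commits to the first alternative that succeeds locally and never revisits that choice, so on "xxxxx" it consumes only three characters:

    def S(text, pos):
        # alternative 1: 'x' S 'x'
        if pos < len(text) and text[pos] == "x":
            mid = S(text, pos + 1)
            if mid is not None and mid < len(text) and text[mid] == "x":
                return mid + 1
        # alternative 2: 'x'
        if pos < len(text) and text[pos] == "x":
            return pos + 1
        return None                           # no alternative matched

    # S("xxx", 0)    ->  3   (the whole string matches)
    # S("xxxxx", 0)  ->  3   (only 3 of 5 characters consumed, so a full parse fails)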
https://github.com/jaekwon/joeson/blob/master/joeson_grammar.coffee
https://github.com/jaekwon/joeson/blob/master/joescript_grammar.coffee

How do recursive ascent parsers work?

How do recursive ascent parsers work? I have written a recursive descent parser myself but I don't understand LR parsers all that well. What I found on Wikipedia has only added to my confusion.
Another question is why recursive ascent parsers aren't used more than their table-based counterparts. It seems that recursive ascent parsers have greater performance overall.
The classical Dragon Book explains very well how LR parsers work. There is also Parsing Techniques: A Practical Guide, where you can read about them, if I remember well. The Wikipedia article (at least the introduction) is not right: LR parsers were created by Donald Knuth, who describes them in his 1965 paper "On the Translation of Languages from Left to Right". If you understand Spanish, there is a complete list of books that I posted here. Not all of those books are in Spanish, either.
Before trying to understand how they work, you must understand a few concepts like FIRST sets, FOLLOW sets, and lookahead; a sketch of the FIRST computation follows below. Also, I really recommend understanding the concepts behind LL (top-down) parsers before trying to understand LR (bottom-up) parsers.
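As a concrete illustration of FIRST, here is a Python sketch that computes FIRST sets by fixpoint iteration (the grammar encoding, and the empty string standing in for epsilon, are conventions I chose for the example):

    def first_sets(grammar):
        # grammar: nonterminal -> list of alternatives (lists of symbols);
        # any symbol without a rule is treated as a terminal.
        first = {nt: set() for nt in grammar}
        changed = True
        while changed:
            changed = False
            for nt, alternatives in grammar.items():
                for alt in alternatives:
                    before = len(first[nt])
                    for sym in alt:
                        f = first.get(sym, {sym})   # a terminal's FIRST is itself
                        first[nt] |= f - {""}
                        if "" not in f:             # sym not nullable: stop here
                            break
                    else:
                        first[nt].add("")           # every symbol was nullable
                    if len(first[nt]) != before:
                        changed = True
        return first

    # E -> T Ep ;  Ep -> '+' T Ep | epsilon ;  T -> 'id'
    g = {"E": [["T", "Ep"]], "Ep": [["+", "T", "Ep"], []], "T": [["id"]]}
    # first_sets(g)  ->  {'E': {'id'}, 'Ep': {'+', ''}, 'T': {'id'}}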
There is a family of LR parsers, notably LR(k), SLR(k), and LALR(k), where k is the number of lookahead tokens they need. Yacc supports LALR(1) parsers, but you can make tweaks, not grounded in theory, to make it work with more powerful kinds of grammars.
About performance: it depends on the grammar being analyzed. LR parsers execute in linear time, but how much space they need depends on how many states are built for the final parser.
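To make the "ascent" concrete, here is a hand-written recursive-ascent parser in Python (an illustrative sketch for a toy grammar). Each LR(0) state becomes a function and each shift becomes a call; a reduction by a rule with k right-hand-side symbols returns a count of k - 1 frames still to pop, and the state where the count reaches zero performs the goto:

    # Grammar: E -> E '+' NUM | NUM

    def parse(tokens):
        nt, val, pops, pos = state0(list(tokens) + ["$"], 0)
        assert nt == "$accept"
        return val

    def state0(toks, pos):
        # items: E' -> .E    E -> .E '+' NUM    E -> .NUM
        if not toks[pos].isdigit():
            raise SyntaxError(f"unexpected {toks[pos]!r}")
        res = state2(toks, pos + 1, int(toks[pos]))     # shift NUM
        while True:
            nt, val, pops, pos = res
            if pops:
                return nt, val, pops - 1, pos           # keep unwinding
            if nt == "$accept":
                return nt, val, 0, pos
            res = state1(toks, pos, val)                # goto(0, E) = state 1

    def state1(toks, pos, e):
        # items: E' -> E.    E -> E. '+' NUM
        if toks[pos] == "$":
            return "$accept", e, 0, pos                 # accept
        if toks[pos] != "+":
            raise SyntaxError(f"expected '+' at {pos}")
        nt, val, pops, pos = state3(toks, pos + 1, e)   # shift '+'
        return nt, val, pops - 1, pos

    def state2(toks, pos, n):
        # item: E -> NUM.    reduce; rhs length 1 => pops = 0
        return "E", n, 0, pos

    def state3(toks, pos, e):
        # item: E -> E '+' .NUM
        if not toks[pos].isdigit():
            raise SyntaxError(f"expected a number at {pos}")
        nt, val, pops, pos = state4(toks, pos + 1, e + int(toks[pos]))  # shift NUM
        return nt, val, pops - 1, pos

    def state4(toks, pos, value):
        # item: E -> E '+' NUM.    reduce; rhs length 3 => pops = 2
        return "E", value, 2, pos

    # parse("1 + 2 + 3".split())  ->  6

The explicit LR stack has disappeared into the call stack, which is exactly the trick the name "recursive ascent" refers to.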
I'm personally having a hard time understanding how a function call can be faster -- much less "significantly faster" -- than a table lookup. And I suspect that even "significantly faster" is insignificant when compared to everything else that a lexer/parser has to do (primarily reading and tokenizing the file). I looked at the Wikipedia page, but didn't follow the references; did the author actually profile a complete lexer/parser?
More interesting to me is the decline of table-driven parsers with respect to recursive descent. I come from a C background, where yacc (or equivalent) was the parser generator of choice. When I moved to Java, I found one table-driven implementation (JavaCup), and several recursive descent implementations (JavaCC, ANTLR).
I suspect that the answer is similar to the answer of "why Java instead of C": speed of execution isn't as important as speed of development. As noted in the Wikipedia article, table-driven parsers are pretty much impossible to understand from code (back when I was using them, I could follow their actions but would never have been able to reconstruct the grammar from the parser). Recursive descent, by comparison, is very intuitive (which is no doubt why it predates table-driven by some 20 years).
The Wikipedia article on recursive ascent parsing references what appears to be the original paper on the topic ("Very Fast LR Parsing"). Skimming that paper cleared a few things up for me. Things I noticed:
The paper talks about generating assembly code. I wonder if you can do the same things they do if you're generating C or Java code instead; see sections 4 and 5, "Error recovery" and "Stack overflow checking". (I'm not trying to FUD their technique -- it might work out fine -- just saying that it's something you might want to look into before committing.)
They compare their recursive ascent tool to their own table-driven parser. From the description in their results section, it looks like their table-driven parser is "fully interpreted"; it doesn't require any custom generated code. I wonder if there's a middle ground where the overall structure is still table-driven but you generate custom code for certain actions to speed things up.
The paper referenced by the Wikipedia page:
"Very fast LR parsing" (1986)
http://portal.acm.org/citation.cfm?id=13310.13326
Another paper about using code-generation instead of table-interpretation:
"Very fast YACC-compatible parsers (for very little effort)" (1999)
http://www3.interscience.wiley.com/journal/1773/abstract
Also, note that recursive-descent parsing is not the fastest way to parse LL-grammar-based languages:
Difference between an LL and Recursive Descent parser?

Difference between an LL and Recursive Descent parser?

I've recently been trying to teach myself how parsers (for languages/context-free grammars) work, and most of it seems to make sense, except for one thing. I'm focusing my attention in particular on LL(k) grammars, for which the two main algorithms seem to be the LL parser (using a stack/parse table) and the recursive descent parser (simply using recursion).
As far as I can see, the recursive descent algorithm works on all LL(k) grammars and possibly more, whereas an LL parser works on all LL(k) grammars. A recursive descent parser is clearly much simpler than an LL parser to implement, however (just as an LL one is simpler than an LR one).
So my question is, what are the advantages/problems one might encounter when using either of the algorithms? Why might one ever pick LL over recursive descent, given that it works on the same set of grammars and is trickier to implement?
LL is usually a more efficient parsing technique than recursive-descent. In fact, a naive recursive-descent parser will actually be O(k^n) (where n is the input size) in the worst case. Some techniques such as memoization (which yields a Packrat parser) can improve this as well as extend the class of grammars accepted by the parser, but there is always a space tradeoff. LL parsers are (to my knowledge) always linear time.
On the flip side, you are correct in your intuition that recursive-descent parsers can handle a greater class of grammars than LL. Recursive-descent can handle any grammar which is LL(*) (that is, unlimited lookahead) as well as a small set of ambiguous grammars. This is because recursive-descent is actually a directly-encoded implementation of PEGs, or Parsing Expression Grammars. Specifically, the disjunctive operator (a | b) is not commutative, meaning that a | b does not equal b | a. A recursive-descent parser will try each alternative in order, so if a matches the input, it will succeed even if b would also have matched. This allows classic "longest match" ambiguities like the dangling else problem to be handled simply by ordering the disjunctions correctly, as sketched below.
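Here is a Python sketch of that ordering trick on the dangling else (the token shapes and rule names are made up): the alternative that consumes the else is tried first, so the else binds to the nearest if:

    # PEG-style rule (hypothetical):
    #   stmt <- 'if' 'c' stmt 'else' stmt / 'if' 'c' stmt / 'print'

    def stmt(toks, i):
        if toks[i:i + 2] == ["if", "c"]:          # alt 1: with 'else' (tried first)
            j = stmt(toks, i + 2)
            if j is not None and j < len(toks) and toks[j] == "else":
                k = stmt(toks, j + 1)
                if k is not None:
                    return k
        if toks[i:i + 2] == ["if", "c"]:          # alt 2: without 'else'
            j = stmt(toks, i + 2)
            if j is not None:
                return j
        if i < len(toks) and toks[i] == "print":  # alt 3: a base statement
            return i + 1
        return None

    # stmt("if c if c print else print".split(), 0)  ->  7
    # All seven tokens are consumed: the 'else' attached to the inner 'if'.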
With all of that said, it is possible to implement an LL(k) parser using recursive-descent so that it runs in linear time. This is done by essentially inlining the predict sets so that each parse routine determines the appropriate production for a given input in constant time. Unfortunately, such a technique eliminates an entire class of grammars from being handled. Once we get into predictive parsing, problems like dangling else are no longer solvable with such ease.
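For contrast, here is a Python sketch of that predictive style (a hypothetical toy grammar): the predict sets are inlined as tests on a single lookahead token, each routine commits to a production in constant time, and no alternative is ever retried:

    # Grammar (hypothetical):
    #   stmt -> 'print' expr | 'if' expr stmt | ID '=' expr
    #   expr -> NUMBER

    def parse_stmt(toks, i):
        t = toks[i]
        if t == "print":                  # PREDICT = {'print'}
            return parse_expr(toks, i + 1)
        if t == "if":                     # PREDICT = {'if'}
            i = parse_expr(toks, i + 1)
            return parse_stmt(toks, i)
        if t.isidentifier():              # PREDICT = {ID}; keywords were tested first
            if toks[i + 1] != "=":
                raise SyntaxError("expected '='")
            return parse_expr(toks, i + 2)
        raise SyntaxError(f"no production predicts {t!r}")

    def parse_expr(toks, i):
        if not toks[i].isdigit():
            raise SyntaxError("expected a number")
        return i + 1

    # parse_stmt("if 1 x = 2".split(), 0)  ->  5  (all five tokens consumed)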
As for why LL would be chosen over recursive-descent, it's mainly a question of efficiency and maintainability. Recursive-descent parsers are markedly easier to implement, but they're usually harder to maintain since the grammar they represent does not exist in any declarative form. Most non-trivial parser use-cases employ a parser generator such as ANTLR or Bison. With such tools, it really doesn't matter if the algorithm is directly-encoded recursive-descent or table-driven LL(k).
As a matter of interest, it is also worth looking into recursive-ascent, which is a parsing algorithm directly encoded after the fashion of recursive-descent, but capable of handling any LALR grammar. I would also dig into parser combinators, which are a functional way of composing recursive-descent parsers together.
