LL top-down parser, from CST to AST - parsing

I am currently learning about syntax analysis, and more especially, top-down parsing.
I know the terminology and the difference from bottom-up LR parsers, and since top-down LL parsers are easier to implement by hand, I am looking forward to making my own.
I have seen two kinds of approach:
The recursive-descent one using a collection of recursive functions.
The stack-based and table-driven automaton as shown here on Wikipedia.
I am more interested in the latter, for its power and its elimination of call-stack recursion. However, I don't understand how to build the AST from the implicit parse tree.
This code example of a stack-based automaton shows the parser analyzing the input buffer but only giving a yes/no answer as to whether the syntax has been accepted.
I have heard of stack annotations for building the AST, but I can't figure out how to implement them. Can someone provide a practical implementation of such a technique?

"Top-down" and "bottom-up" are excellent descriptions of the two parsing strategies, because they describe precisely how the syntax tree would be constructed if it were constructed. (You can also think of it as the traversal order over the implicit parse tree but here we're actually interested in real parse trees.)
It seems clear that there is an advantage to bottom-up tree construction. When it is time to add a node to the tree, you already know what its children are. You can construct the node fully-formed in one (functional) action. All the child information is right there waiting for you, so you can add semantic information to the node based on the semantic information of its children, even using the children in an order other than left-to-right.
By contrast, the top-down parser constructs the node without any children, and then needs to add each child in turn to the already constructed node. That's certainly possible, but it's a bit ugly. Also, the incremental nature of the node constructor means that semantic information attached to the node also needs to be computed incrementally, or deferred until the node is fully constructed.
In many ways, this is similar to the difference between evaluating expressions written in Reverse Polish Notation (RPN) and expressions written in (Forward) Polish Notation [Note 1]. RPN was invented precisely to ease evaluation, which is possible with a simple value stack. Forward Polish expressions can be evaluated, obviously: the easiest way is to use a recursive evaluator, but in environments where the call stack cannot be relied upon it is possible to do it using an operator stack, which effectively turns the expression into RPN on the fly.
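To make that last point concrete, here is a rough Python sketch (my own illustration, not part of the original answer) of evaluating a Forward Polish expression left-to-right with an explicit operator stack. Each operator waits on the stack until its operands have been reduced to values, which is exactly the "RPN on the fly" effect:

    import operator

    OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

    def eval_polish(tokens):
        stack = []                        # frames: [operator, operands so far]
        for tok in tokens:
            if tok in OPS:
                stack.append([OPS[tok], []])
                continue
            value = int(tok)
            # A completed value is an operand for the operator below it;
            # a full frame collapses to a value, which may complete the
            # next frame down, and so on.
            while stack:
                stack[-1][1].append(value)
                if len(stack[-1][1]) < 2:
                    break                 # operator still awaits an operand
                op, (a, b) = stack.pop()
                value = op(a, b)
            else:
                return value              # no pending operators: finished
        raise ValueError("incomplete expression")

    print(eval_polish("+ 1 * 2 3".split()))   # prints 7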
So that's probably the mechanism of choice for building syntax trees from top-down parsers as well. We add a "reduction" marker to the end of every right-hand side. Since the marker goes at the end of the right-hand side, it is pushed onto the parser stack first.
We also need a value stack, to record the AST nodes (or semantic values) being constructed.
In the basic algorithm, we now have one more case. We start by popping the top of the parser stack, and then examine this object:
The top of the parser stack was a terminal. If the current input symbol is the same terminal, we remove the input symbol from the input, and push it (or its semantic value) onto the value stack.
The top of the parser stack was a marker. The associated reduction action is triggered, which will create the new AST node by popping an appropriate number of values from the value stack and combining them into a new AST node which is then pushed onto the value stack. (As a special case, the marker action for the augmented start symbol's unique production S' -> S $ causes the parse to be accepted, returning the (only) value in the value stack as the AST.)
The top of the parser stack was a non-terminal. We then identify the appropriate right-hand side using the current input symbol, and push that right-hand side (right-to-left) onto the parser stack.
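Here is a minimal, self-contained Python sketch of the whole technique. The grammar, the tuple-shaped AST nodes and all the names are made up for illustration: a tiny prefix-expression language with productions E -> int and E -> + E E, and a reduction marker (a plain Python callable) at the end of each right-hand side:

    def kind(tok):
        """Map a concrete token to the grammar terminal it stands for."""
        return "int" if tok.isdigit() else tok

    def parse(tokens):
        toks = list(tokens) + ["$"]
        i = 0
        values = []                         # value stack: raw tokens and AST nodes

        def m_num():                        # marker for E -> int
            values.append(("num", int(values.pop())))

        def m_add():                        # marker for E -> + E E
            right, left = values.pop(), values.pop()
            values.pop()                    # discard the '+' token itself
            values.append(("add", left, right))

        # The LL(1) table: (non-terminal, lookahead) -> right-hand side + marker.
        table = {
            ("E", "int"): ["int", m_num],
            ("E", "+"):   ["+", "E", "E", m_add],
        }

        stack = ["$", "E"]                  # parser stack; end of the list is the top
        while stack:
            top = stack.pop()
            if callable(top):               # case: a reduction marker
                top()
            elif top in ("int", "+", "$"):  # case: a terminal
                if kind(toks[i]) != top:
                    raise SyntaxError(f"expected {top}, got {toks[i]!r}")
                if top != "$":
                    values.append(toks[i])
                i += 1                      # matching "$" here is the accept action
            else:                           # case: a non-terminal
                rhs = table.get((top, kind(toks[i])))
                if rhs is None:
                    raise SyntaxError(f"no production for {top} on {toks[i]!r}")
                stack.extend(reversed(rhs)) # marker is last, hence pushed first

        return values.pop()                 # exactly one value remains: the AST

    print(parse("+ 1 + 2 3".split()))
    # -> ('add', ('num', 1), ('add', ('num', 2), ('num', 3)))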

You need to understand the underlying concept: the pushdown automaton. Once you understand how to carry out the computation on paper with a pencil, you will be able to understand the multiple ways of implementing the idea, whether via recursive descent or with an explicit stack. The ideas are the same; with recursive descent you implicitly use the stack that the program uses for execution, where the execution data is combined with the parsing automaton's data.
I suggest you start with the course taught by Ullman (automata) or with Dick Grune's book, which is the one best focused on parsing; look for the 2nd edition.
For LR parsing, the essential thing is to understand the ideas of Earley, from which Don Knuth created the LR method.
For LL parsing, Grune's book is excellent, and Ullman presents the computation on paper and the mathematical background of parsing, which is essential to know if you want to implement your own parsers.
Concerning the AST: it is the output of parsing. A parser will generate a parse tree that is then transformed into an AST, or it can generate and output the AST directly.

Related

Searching/predicting next terminal/non-terminal by CFG/Tree?

I'm looking for an algorithm to help me predict the next token given a string/prefix and a context-free grammar.
The first question is: what is the exact structure representing a CFG? It seems to be a tree, but what type of tree? I'm asking because the leaves are always ordered; is there such a thing as an ordered tree?
Maybe if I know the correct structure I can find an algorithm for bottom-up search!
If it is not exactly a search problem, then the next closest thing it looks like is parsing the prefix string and then generating the next token. How do I do that?
Any ideas?
My currently generated grammar is simple: it has no OR rules (except when I decide to reuse the grammar for new sequences, which I will). It is generated by the Sequitur algorithm and is a so-called SLG (single line grammar). But if I generate it using many sequences, the TOP rule will be, e.g.:
S : S1 z S3 | u S2 .. S5 S1 | S4 S2 .. |... | Sn
S1 : a b
S2 : h u y
...
i.e. a top-heavy SLG: apart from the top rule, no rule has an OR (|).
As a side note, I'm thinking of ways to convert it to a Prolog and/or DCG program, where maybe there is an easier way to do what I want. What do you think?
TL;DR: In the abstract, this is a hard problem. But it can be pretty simple for a given grammar. Everything depends on the nature of the grammar.
The basic algorithm indeed starts by using some parsing algorithm on the prefix. A rough prediction can then be made by attempting to continue the parse with each possible token, retaining only those which do not produce immediate errors.
That will certainly give you a list which includes all of the possible continuations. But the list may also include tokens which cannot appear in a correct input. Indeed, it is possible that the correct list is empty (because the given prefix is not the prefix of any correct input); this will happen if the parsing algorithm is unable to correctly verify whether a token sequence is a possible prefix.
In part, this will depend on the grammar itself. If the grammar is LR(1), for example, then the LR(1) parsing algorithm can precisely identify the continuation set. If the grammar is LR(k) for some k>1, then it is theoretically possible to produce an LR(1) grammar for the same language, but the resulting grammar might be impractically large. Otherwise, you might have to settle for "false positives". That might be acceptable if your goal is to provide tab-completion, but in other circumstances it might not be so useful.
The precise data structure used to perform the internal parse and the exploration of alternatives will depend on the parsing algorithm used. Many parsing algorithms, including the standard LR parsing algorithm whose internal data structure is a simple stack, feature a mutable internal state which is not really suitable for the exploration step; you could adapt such an algorithm by making a copy of the entire internal data structure (that is, the stack) before proceeding with each trial token. Alternatively, you could implement a copy-on-write stack. But the parser stack is not usually very big, so copying it each time is generally feasible. (That's what Bison does to produce expanded error messages with an "expected token" list, and it doesn't seem to trigger unacceptable runtime overhead in practice.)
Alternatively, you could use some variant of CYK chart parsing (or a related chart parser such as the Earley algorithm), whose internal data structures can be implemented in a way which doesn't involve destructive modification. Such algorithms are generally used for grammars which are not LR(1), since they can cope with any CFG, although highly ambiguous grammars can take a long time to parse (proportional to the cube of the input length). As mentioned above, though, you will get false positives from such algorithms.
If false positives are unacceptable, then you could use some kind of heuristic search to attempt to find an input sequence which completes the trial prefix. This can in theory take quite a long time, but for many grammars a breadth-first search can find a completion within a reasonable time, so you could terminate the search after a given maximum time. This will not produce false positives, but the time limit might prevent it from finding the complete set of possible continuations.
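For illustration, here is a toy Python version of that basic algorithm, using a table-driven LL(1) parser for a hypothetical balanced-parentheses grammar (S -> ( S ) S | epsilon). The prediction step simply retries each candidate token against a copy of the parser stack, keeping the ones that don't produce an immediate error:

    TABLE = {
        ("S", "("): ["(", "S", ")", "S"],
        ("S", ")"): [],            # S -> epsilon on lookahead ')'
        ("S", "$"): [],            # S -> epsilon at end of input
    }
    TERMINALS = {"(", ")", "$"}

    def step(stack, tok):
        """Consume one token on a private copy of the stack.
        Returns the new stack, or None on an immediate error."""
        stack = list(stack)
        while stack:
            top = stack.pop()
            if top in TERMINALS:
                return stack if top == tok else None
            rhs = TABLE.get((top, tok))
            if rhs is None:
                return None        # no production: tok is not a continuation
            stack.extend(reversed(rhs))
        return None                # stack exhausted before tok was matched

    def predictions(tokens):
        stack = ["$", "S"]
        for tok in tokens:         # parse the prefix first
            stack = step(stack, tok)
            if stack is None:
                raise SyntaxError(f"prefix fails at {tok!r}")
        # then try every terminal as the next token
        return {t for t in TERMINALS if step(stack, t) is not None}

    print(predictions(["(", "("]))   # {'(', ')'} -- end-of-input '$' is not valid here

Because this toy grammar is LL(1), the predicted set is exact here; for grammars the parser handles less precisely, the same loop yields the over-approximation (with false positives) described above.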

Writing a lexer for a context-sensitive markup language that has recursive structures such as nested lists

I'm working on a reStructuredText transpiler in Rust, and am in need of some advice concerning how lexing should be structured in languages that have recursive structures. For example, lists within lists are possible in rST:
* This is a list item
  * This is a sub list item
* And here we are at the preceding indentation level again.
The default docutils.parsers.rst took the approach of scanning the input one line at a time:
The reStructuredText parser is implemented as a state machine, examining its
input one line at a time.
The state machine mentioned basically operates on a set of states of the form (regex, match_method, next_state). It tries to match the current line to the regex based on the current state and runs match_method while transitioning to the next_state if a match succeeds, doing this until it runs out of lines to scan.
My question then is, is this the best approach to scanning a language such as rST? My approach thus far has been to create a Chars iterator of the source and eat away at the source while trying to match against structures at the current Unicode scalar. This works to some extent when all I'm doing is scanning inline content, but I've now run into the realization that handling recursive body level structures like nested lists is going to be a pain in the butt. It feels like I'm going to need a whole bunch of states with duplicate regexes and related methods in many states for matching against indentations before new lines and such.
Would it be better to simply have an iterator over the lines of the source and match on a per-line basis, and if a line such as
* this is an indented list item
is encountered in State::Body, simply transition to a state such as State::BulletList and start lexing lines based on the rules specified there? The above line could be lexed for example as a sequence
TokenType::Indent, TokenType::Bullet, TokenType::BodyText
Any thoughts on this?
I don't know much about rST. But you say it has "recursive" structures. If that's the case, you can't fully lex the recursive structure using just state machines or regexes or even lexer generators.
But this is the wrong way to think about it. The lexer's job is to identify the atoms of the language. A parser's job is to recognize structure, especially if it is recursive (yes, parsers often build trees recording the recursive structures they found).
So build the lexer ignoring context if you can, and use a parser to pick up the recursive structures if you need them. You can read more about the distinction in my SO answer about Parsers Vs. Lexers https://stackoverflow.com/a/2852716/120163
If you insist on doing all of this in the lexer, you'll need to augment it with a pushdown stack to track the recursive structures. Then what you are building is a sloppy parser disguised as a lexer. (You will probably still want a real parser to process the output of this "lexer".)
Having a pushdown stack is actually useful if the language has different atoms in different contexts, especially if the contexts nest. In this case, what you want is a mode stack that you change as the lexer encounters tokens that indicate a switch from one mode to another. A really useful extension of this idea is to have mode changes select what amounts to different lexers, each of which produces lexemes unique to that mode.
As an example, you might do this to lex a language that contains embedded SQL. We build parsers for JavaScript; our lexer uses a pushdown stack to process the content of regexp literals and to track the nesting of { ... }, [ ... ] and ( ... ). (This arguably has a downside: it rejects versions of JQuery.js that contain malformed regexes [yes, they exist]. JavaScript doesn't care if you define a bad regex literal and never use it, but doing so seems pretty pointless.)
A special case of the stack occurs if you only have to track single "(" ... ")" pairs or the equivalent. In this case you can use a counter to record how many "pushes" or "pops" you would have done on a real stack. If you have two or more pairs of tokens like this, counters don't work.
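To sketch the mode-stack idea in runnable form (a made-up toy, not the JavaScript lexer described above), here is a small Python lexer for a hypothetical language whose strings contain ${...} interpolations. Each mode has its own token set, and particular tokens push or pop modes:

    import re

    # Token rules per mode. A '"' in code enters string mode; '${' inside a
    # string re-enters code mode; the matching closer pops back out.
    RULES = {
        "code": [
            ("NUM",       r"\d+"),
            ("ID",        r"[A-Za-z_]\w*"),
            ("OP",        r"[=+\-*/]"),
            ("STR_BEGIN", r'"'),
            ("RBRACE",    r"\}"),
            ("WS",        r"\s+"),
        ],
        "string": [
            ("INTERP_BEGIN", r"\$\{"),
            ("STR_END",      r'"'),
            ("TEXT",         r'[^"$]+'),   # simplification: no lone '$'
        ],
    }
    PUSH = {"STR_BEGIN": "string", "INTERP_BEGIN": "code"}
    POP = {"STR_END", "RBRACE"}

    def lex(src):
        modes = ["code"]                   # the mode stack
        i = 0
        while i < len(src):
            for name, pattern in RULES[modes[-1]]:
                m = re.match(pattern, src[i:])
                if not m:
                    continue
                if name != "WS":
                    yield name, m.group()
                if name in PUSH:
                    modes.append(PUSH[name])
                elif name in POP and len(modes) > 1:
                    modes.pop()
                i += m.end()
                break
            else:
                raise SyntaxError(f"stuck at {src[i:]!r} in mode {modes[-1]!r}")

    print(list(lex('x = "a${y}b"')))
    # [('ID', 'x'), ('OP', '='), ('STR_BEGIN', '"'), ('TEXT', 'a'),
    #  ('INTERP_BEGIN', '${'), ('ID', 'y'), ('RBRACE', '}'), ('TEXT', 'b'),
    #  ('STR_END', '"')]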

Can this be parsed by a LALR(1) parser?

I am writing a parser in Bison for a language which has the following constructs, among others:
self-dispatch: [identifier arguments]
dispatch: [expression . identifier arguments]
string slicing: expression[expression,expression] - similar to Python.
arguments is a comma-separated list of expressions, which can be empty too. All of the above are expressions on their own, too.
My problem is that I am not sure how to parse both [method [other_method]] and [someString[idx1, idx2].toInt] or if it is possible to do this at all with an LALR(1) parser.
To be more precise, let's take the following example: [a[b]] (call method a with the result of method b). When it reaches the state [a . [b]] (the lookahead is the second [), it won't know whether to reduce a (which has already been reduced to identifier) to expression because something like a[b,c] might follow (which could itself be reduced to expression and continue with the second construct from above) or to keep it identifier (and shift it) because a list of arguments will follow (such as [b] in this case).
Is this shift/reduce conflict due to the way I expressed this grammar or is it not possible to parse all of these constructs with an LALR(1) parser?
And, a more general question, how can one prove that a language is/is not parsable by a particular type of parser?
Assuming your grammar is unambiguous (which the part you describe appears to be) then your best bet is to specify a %glr-parser. Since in most cases, the correct parse will be forced after only a few tokens, the overhead should not be noticeable, and the advantage is that you do not need to complicate either the grammar or the construction of the AST.
The one downside is that bison cannot verify that the grammar is unambiguous -- in general, this is not possible -- and it is not easy to prove. If it turns out that some input is ambiguous, the GLR parser will generate an error, so a good test suite is important.
Proving that the language is not LR(1) would be tricky, and I suspect it would be impossible, because the language probably is recognizable with an LALR(1) parser. (Impossible to tell without seeing the entire grammar, though.) But parsing (outside of CS theory) needs to create a correct parse tree in order to be useful, and the sort of modifications required to produce an LR grammar will also modify the AST, requiring a post-parse fixup. The difficulty in creating a correct AST springs from the difference in precedence between
a[b[c],d]
and
[a[b[c],d]]
In the first (subset) case, b binds to its argument list [c] and the comma has lower precedence; in the end, b[c] and d are sibling children of the slice. In the second case (method invocation), the comma is part of the argument list and binds more tightly than the method application; b, [c] and d are siblings in a method application. But you cannot decide the shape of the parse tree until an arbitrarily long input (since d could be any expression).
That's all a bit hand-wavey, since "precedence" is not formally definable, and there are hacks which could make it possible to adjust the tree. Since the LR property is not really composable, it is not really possible to provide a more rigorous analysis. But regardless, the GLR parser is likely to be the simplest and most robust solution.
One small point for future reference: CFGs are not just a programming tool; they also serve the purpose of clearly communicating the grammar in question. Normally, if you want to describe your language, you are better off using a clear CFG than trying to describe it informally. Of course, meaningful non-terminal names will help, and a few examples never hurt, but the essence of the grammar is in the formal description, and omitting it makes it harder for others to "be helpful".

simplify equations/expressions using Javacc/jjtree

I have created a grammar to read a file of equations and created AST nodes for each rule. My question is: how can I do simplification, or substitute values, on the equations that the parser reads correctly? At which stage: before creating the AST nodes or after?
Please provide me with ideas or tutorials to follow.
Thank you.
I'm assuming your equations are something like simple polynomials over real-valued variables, like X^2+3*Y^2.
You ask for two different solutions to two different problems that start with having an AST for at least one equation:
How to "substitute values" into the equation and compute the resulting value, e.g, for X==3 and Y=2, substitute into the AST for the formula above and compute 3^2+3*2^2 --> 21
How to do simplification: I assume you mean algebraic simplification.
The first problem, substituting values, is fairly easy if you already have the AST. (If not, parse the equation to produce the AST first!) Then all you have to do is walk the AST, replacing every leaf node containing a variable name with the corresponding value, and then doing the arithmetic on any parent node whose children now happen to be numbers; you repeat this until no more nodes can be arithmetically evaluated. Basically, you wire simple arithmetic into a tree evaluation scheme.
Sometimes your evaluation will reduce the tree to a single value, as in the example, and you can print the numeric result. My SO answer shows how to do that in detail. You can easily implement this yourself as a small project, even using JavaCC/JJTree appropriately adapted.
Sometimes the formula will end up in a state where no further arithmetic on it is possible, e.g., 1+x+y with x==0 and nothing known about y; the result of such a substitution/arithmetic-evaluation process will then be 1+y. Unfortunately, you will only have this as an AST, so now you need to print out the resulting AST for the user to see the result. This is harder; see my SO answer on how to prettyprint a tree. It is considerably more work, but if you restrict your trees to just polynomial expressions, you can still do it as a small project. JavaCC will help you with parsing, but provides zero help with prettyprinting.
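As a concrete sketch of that substitute-and-fold walk (in Python for brevity, with hypothetical tuple-shaped nodes standing in for whatever JJTree nodes you generate):

    import operator

    # Node shapes: ("num", k), ("var", name), (op, left, right)
    OPS = {"+": operator.add, "-": operator.sub,
           "*": operator.mul, "^": operator.pow}

    def fold(node, env):
        kind = node[0]
        if kind == "num":
            return node
        if kind == "var":                  # substitute if a value is known
            return ("num", env[node[1]]) if node[1] in env else node
        left, right = fold(node[1], env), fold(node[2], env)
        if left[0] == "num" and right[0] == "num":
            return ("num", OPS[kind](left[1], right[1]))  # do the arithmetic
        return (kind, left, right)         # leave the residual tree in place

    # X^2 + 3*Y^2 with X == 3, Y == 2
    ast = ("+", ("^", ("var", "X"), ("num", 2)),
                ("*", ("num", 3), ("^", ("var", "Y"), ("num", 2))))
    print(fold(ast, {"X": 3, "Y": 2}))     # -> ('num', 21)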
The second problem is much harder, because you must not only accomplish variable substitution and arithmetic evaluation as above, but you also have to somehow encode knowledge of algebraic laws and how to match those laws against complex trees. You might hardwire one or two algebraic laws (e.g., x+0 -> x; y-y -> 0), but hardwiring many laws this way will produce an impossible mess because of how they interact.
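For a taste of what hardwiring just those two laws as a bottom-up rewrite pass might look like (same hypothetical tuple nodes as in the previous sketch); even here the rules are partial (0+x is not covered), which is exactly how the mess begins:

    def simplify(node):
        if node[0] in ("+", "-", "*", "^"):
            node = (node[0], simplify(node[1]), simplify(node[2]))
            if node[0] == "+" and node[2] == ("num", 0):
                return node[1]             # x + 0 -> x
            if node[0] == "-" and node[1] == node[2]:
                return ("num", 0)          # y - y -> 0
        return node

    print(simplify(("+", ("var", "x"), ("-", ("var", "y"), ("var", "y")))))
    # -> ('var', 'x')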
JavaCC might form part of such an answer, but only a small part; the rest of the solution is hard enough that you are better off looking for an alternative rather than trying to build it all on top of JavaCC.
You need a more organized approach for this: a Program Transformation System (PTS). A typical PTS will let you specify a grammar for an arbitrary language (in your case, simply polynomials), will automatically parse instances to ASTs, and can regenerate valid text from those ASTs. A good PTS will let you write source-to-source transformation rules that the PTS applies automatically to the instance AST; in your case you'd write down the algebraic laws as source-to-source rules and then the PTS does all the work.
An example is too long to provide here, but here I describe how to define formulas suitable for early calculus classes, and how to define algebraic rules that simplify such formulas, including applying some classic calculus derivative laws.
With sufficient/significant effort, you can build your own PTS on top of JavaCC/JJTree, but this is likely to take a few man-years. It is easier to get a PTS than to repeat all that work.

How should I construct and walk the AST output of an ANTLR3 grammar?

Documentation and general advice is that the abstract syntax tree should omit tokens that have no meaning. ("Record the meaningful input tokens (and only the meaningful tokens)" - The Definitive ANTLR Reference.) E.g., in a C++ AST, you would omit the braces at the start and end of a class, as they have no meaning and are simply a mechanism for delineating class start and end for parsing purposes. I understand that, for the sake of rapidly and efficiently walking the tree, culling out useless token nodes like that is useful, but in order to colorize code appropriately, I would need that information, even if it doesn't contribute to the meaning of the code. A) Is there any reason why I shouldn't have the AST serve multiple purposes and choose not to omit said tokens?
It seems to me that what the ANTLRWorks interpreter outputs is what I'm looking for. In the ANTLRWorks interpreter, it outputs a tree diagram where, for each rule matched, a node is made, as well as a child node for each token and/or subrule. The parse tree, I suppose it's called.
If manually walking a tree, wouldn't it be more useful to have nodes marking a rule? By having a node marking a rule, with its subrules and tokens as children, a manual walker doesn't need to look ahead several nodes to know the context of the node it's on. Tree grammars seem redundant to me: given a tree of AST nodes, the tree grammar "parses" the nodes all over again in order to produce some other output. B) Given that the parser grammar was responsible for generating correctly formed ASTs and given the inclusion of rule AST nodes, shouldn't a manual walker avoid the redundant AST node pattern matching of a tree grammar?
I fear I'm wildly misunderstanding the tree grammar mechanism's purpose. A tree grammar more or less defines a set of methods that will run through a tree, look for a pattern of nodes that matches a tree grammar rule, and execute some action based on that. I can't form my AST output based on what's neat and tidy for the tree grammar (omitting meaningless tokens for pattern matching speed) yet use the AST for color-coding even the meaningless tokens. I'm writing an IDE as well; I can't write every possible AST node pattern that a plugin author might want to match, nor do I want to require them to use ANTLR to write a tree grammar. In the case of plugin authors walking the tree for their own criteria, rule nodes would be rather useful to avoid the need for pattern matching.
Thoughts? I know this "question" might push the limits of being an SO question, but I'm not sure how else to formulate my inquiries or where else to inquire.
Sion Sheevok wrote:
A) Is there any reason why I shouldn't have the AST serve multiple purposes and choose not to omit said tokens?
No, you might as well keep them in there.
Sion Sheevok wrote:
It seems to me that what the ANTLRWorks interpreter outputs is what I'm looking for. In the ANTLRWorks interpreter, it outputs a tree diagram where, for each rule matched, a node is made, as well as a child node for each token and/or subrule. The parse tree, I suppose it's called.
Correct.
Sion Sheevok wrote:
B) Given that the parser grammar was responsible for generating correctly formed ASTs and given the inclusion of rule AST nodes, shouldn't a manual walker avoid the redundant AST node pattern matching of a tree grammar?
Tree grammars are often used to mix in custom code that evaluates/interprets the input source. If you mix this code into the parser grammar and there's some backtracking going on in the parser, this custom code could be executed more often than it's supposed to be. Walking the tree using a tree grammar is (if done properly) only possible in one way, causing custom code to be executed just once.
But if a separate tree walker/iterator is necessary, there are two camps out there that advocate using tree grammars and others opt for walking the tree manually with a custom iterator. Both camps raise valid points about their preferred way of walking the AST. So there's no clear-cut way to do it in one specific way.
Sion Sheevok wrote:
Thoughts?
Since you're not evaluating/interpreting, you might as well not use a tree grammar.
But to create a parse tree as ANTLRWorks does (which you have no access to, btw), you will need to mix AST rewrite rules into your parser grammar. Here's a Q&A that explains how to do that: How to output the AST built using ANTLR?
Good luck!
