Splitting Bison grammars into smaller grammar files - parsing

I have a Bison grammar file that's 700 lines long and growing; I've tried to divide it up as well as I can but as I add more complexity to my language it gets harder and harder to manage and organize all of the rules. I tried searching for some way to do the bison-equivalent of #include or the like, so that I could at least textually split it up, but came up dry. Is there a way for me to split my grammar into smaller modules without resorting to makefile hacks or the like?

Bison has no facility analogous to C's #include.
Adding a preprocessor step to your bison recipe is not a "makefile hack", imho, but rather an expected use case for make.
It's a bit tricky to use cpp on bison files because they tend to use #define directives in their prologues, but there are other preprocessors. For a simple case, you can just use cat.
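For example, a make recipe along these lines stitches grammar fragments together before invoking bison (a minimal sketch; the fragment file names are hypothetical):
# prologue.y holds the %{ ... %} block and %token/%type declarations,
# rules-*.y hold grammar rules, epilogue.y holds the closing %% section.
parser.y: prologue.y rules-expr.y rules-stmt.y epilogue.y
	cat $^ > $@

parser.tab.c parser.tab.h: parser.y
	bison -d $<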

Instead of splitting up the grammar itself, you can condense it down to mostly a sequence of one-liners by having the rule bodies call functions:
foo : ID bar '{' whatever_list '}' { $$ = ast_node_foo($1, $2, $4); }
;
The details of the syntax-directed translation scheme rest in these functions, which can be split into modules.
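For instance, the tree-builder functions can live in their own translation units, with only a small header visible to the grammar's prologue (a sketch; ast_node_foo and the Node fields are illustrative names, not a fixed API):
/* ast.h -- shared by the grammar file's prologue and the builder modules */
typedef struct Node Node;
struct Node { int kind; char *id; Node *bar; Node *body; };
Node *ast_node_foo(char *id, Node *bar, Node *body);

/* ast_foo.c -- one of several modules implementing the builders */
#include <stdlib.h>
#include "ast.h"

Node *ast_node_foo(char *id, Node *bar, Node *body)
{
  Node *n = malloc(sizeof *n);
  n->kind = 1;                  /* some AST_FOO tag */
  n->id = id;
  n->bar = bar;
  n->body = body;
  return n;
}
The grammar file then stays a thin list of rules, and the builder modules can grow independently.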

btyacc is a yacc variant that supports %include to read from multiple files
%include bar.y /* inserts the contents of bar.y at this point */
btyacc supports all of POSIX yacc, but only a couple of bison extensions.

Related

Is it possible to remove the internal control of lexer by the parser for parsing heredoc in shell?

To deal with heredoc in shell (e.g., bash), the grammar rule will change the variable need_here_doc via push_heredoc().
  | LESS_LESS WORD
      {
        source.dest = 0;
        redir.filename = $2;
        $$ = make_redirection (source, r_reading_until, redir, 0);
        push_heredoc ($$);
      }
http://git.savannah.gnu.org/cgit/bash.git/tree/parse.y#n539
static void
push_heredoc (r)
     REDIRECT *r;
{
  if (need_here_doc >= HEREDOC_MAX)
    {
      last_command_exit_value = EX_BADUSAGE;
      need_here_doc = 0;
      report_syntax_error (_("maximum here-document count exceeded"));
      reset_parser ();
      exit_shell (last_command_exit_value);
    }
  redir_stack[need_here_doc++] = r;
}
http://git.savannah.gnu.org/cgit/bash.git/tree/parse.y#n2794
need_here_doc is used in read_token(), which is called by yylex(). This makes the behavior of yylex() non-autonomous.
Is it normal to design a parser that can change how yylex() behaves?
Is it because the shell language is not LALR(1), so there is no way to avoid changing the behavior of yylex() by the grammar actions?
  if (need_here_doc)
    gather_here_documents ();
http://git.savannah.gnu.org/cgit/bash.git/tree/parse.y#n3285
current_token = read_token (READ);
http://git.savannah.gnu.org/cgit/bash.git/tree/parse.y#n2761
Is it normal to design a parser that can change how yylex() behaves?
Sure. It might not be ideal, but it's extremely common.
The Posix shell syntax is far from the ideal candidate for a flex/bison parser, and about the only thing you can say for the bash implementation using flex and bison is that it demonstrates how flexible those tools can be if pushed to their respective limits. Here-docs are not the only place where "lexical feedback" is necessary.
But even in more disciplined languages, lexical feedback can be useful. Or its alternative: writing partial parsing logic into the lexical scanner in order for it to know when the parse would require a different set of lexical rules.
Possibly the most well-known (or most frequently-commented) lexical feedback is the parsing of C-style cast expressions, which require the lexer to know whether the foo in (foo) is a typename or not. (This is usually implemented by way of a symbol table shared between the parser and the lexer but the precise implementation details are tricky.)
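A stripped-down sketch of that feedback (the table and names here are invented for illustration; a real compiler uses a scoped symbol table that the parser's typedef rules update):
#include <string.h>

enum token_kind { IDENTIFIER, TYPE_NAME };      /* token codes the parser expects */

static const char *typedef_names[256];          /* filled by the parser's actions */
static int n_typedefs;

void parser_saw_typedef(const char *name)       /* called from a grammar action */
{
  typedef_names[n_typedefs++] = name;
}

enum token_kind classify_identifier(const char *text)  /* called from the lexer */
{
  for (int i = 0; i < n_typedefs; i++)
    if (strcmp(typedef_names[i], text) == 0)
      return TYPE_NAME;                         /* so (foo) can be parsed as a cast */
  return IDENTIFIER;
}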
Here are a few other examples, which might be considered relatively benign uses of lexical feedback, although they certainly increase the coupling between lexer and parser.
Python (and Haskell) require the lexical scanner to reformulate leading whitespace into INDENT or DEDENT tokens. But if the line break occurs within parentheses, the whitespace handling is suppressed (including the NEWLINE token itself).
Ecmascript (Javascript) and other languages allow regular expression literals to be written surrounded by /s. But the / could also be a division operator or the first character in a /= mutation operator. The lexical decision depends on the parse context. (This could be guessed by the lexical scanner from the recent token history, which would count as reproducing part of the parsing logic in the lexical scanner.)
Similar to the above, many languages overload < in ways which complicate the logic in the lexical scanner. The use as a template bracket rather than a comparison operator might be dealt with in the scanner -- in C++, for example, it will depend on features like whether the preceding identifier was a template or not -- but that doesn't actually change lexical context. However, the use of an angle bracket to indicate the start of an X/HTML literal (or template) definitely changes lexical context. As with the regex example above, it is necessary to know whether a comparison operator would be syntactically valid at that point.
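As a rough illustration of the "recent token history" guess mentioned for the Ecmascript case, a scanner might decide the meaning of / from the kind of the previous token (a sketch only, and a knowingly imperfect heuristic; real implementations let the parser decide):
/* Decide whether '/' starts a regex literal or is a division operator,     */
/* judging only from the previous token kind. Misclassifies cases such as   */
/* "if (x) /re/.test(s)", which is why it amounts to parsing in the lexer.  */
enum prev_token { PREV_NONE, PREV_IDENTIFIER, PREV_NUMBER, PREV_RPAREN, PREV_OPERATOR };

int slash_starts_regex(enum prev_token last)
{
  switch (last) {
  case PREV_IDENTIFIER:
  case PREV_NUMBER:
  case PREV_RPAREN:          /* e.g. (a + b) / 2 */
    return 0;                /* division or /=   */
  default:
    return 1;                /* e.g. x = /ab+c/; */
  }
}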
Is it because the shell language is not LALR(1), so there is no way to avoid changing the behavior of yylex() by the grammar actions?
The Posix shell syntax is most certainly not LALR(1), or even context-free. But most languages could not be parsed scannerlessly with an LALR(1) parser, and many languages turn out not to have context-free grammars if you take all syntactic considerations into account. (Cf. C-style cast expressions, above.) Perhaps shell is further from the platonic ideal than most. But then, it grew over the years from a kernel intended to be simple to type, rather than formally analysable. (No comment from me about whether this excuse can be extended to Perl, which I don't plan to discuss here.)
What I'd say in general is that languages which embed other languages (regular expressions, HTML fragments, Flex/Bison semantic actions, shell arithmetic expansions, etc., etc.) present challenges for a simplistic parser/scanner model. Despite lots of interesting work and solid experimentation, my sense is that language embedding still lacks a good implementable formal structure. And since most languages do have embedded sublanguages, there is and will continue to be a certain adhockery in their parser implementations. In part, that's what makes this field of study so much fun.

Are the compilers of C++ written using a lexer/parser generator?

(Background: Inspired by Is C++ context-free or context-sensitive?, while I am writing a simple compiler using jflex/cup myself. )
If they are written using a lexer/parser generator, how do we specify the grammar?
Since code like
a b(c);
could be interpreted as either a function declaration or a local variable definition, how could we handle it in the grammar definition file?
Another example could be the token ">>" in the following code:
std::vector<std::vector<int>> foo;
int a = 1000 >> 4;
Thanks
Are the compilers of C++ written using a lexer/parser generator?
It depends. Some are, some aren't.
GCC originally did use GNU bison, but was re-written a couple of years ago with a hand-written parser. If I have understood that correctly, the main reason was that writing the parser by hand gives you more control over the parser state, and specifically, how much "extraneous" data to keep in there, so that you can generate better error messages.
If they are written using a lexer/parser generator, how do we specify the grammar?
This depends on which parser generator you are using.
Since code like
a b(c);
could be interpreted as either a function declaration or a local variable definition, how could we handle it in the grammar definition file?
Some parser generators may be powerful enough to handle this directly.
Some aren't. Some parser generators which aren't powerful enough have a concept of semantic action that allow you to attach code written in an arbitrarily powerful language to parser rules. E.g. yacc allows you to attach C code to rules.
Otherwise, you will have to handle it during semantic analysis.
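With Bison specifically, one option is the GLR mode, which keeps both readings of the ambiguous statement alive and lets you bias the choice with %dprec (a sketch; the nonterminal names are made up, and the declaration-vs-expression distinction still has to be finished during semantic analysis):
%glr-parser
%%
statement
  : declaration       %dprec 2   /* prefer reading "a b(c);" as a declaration */
  | expression ';'    %dprec 1   /* over reading it as a call expression */
  ;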

Combined unparser/parser generator

Is there a parser generator that also implements the inverse direction, i.e. unparsing domain objects (a.k.a. pretty-printing) from the same grammar specification? As far as I know, ANTLR does not support this.
I have implemented a set of Invertible Parser Combinators in Java and Kotlin. A parser is written pretty much in LL(1) style, and it provides a parse method and a print method, where the latter provides the pretty printer.
You can find the project here: https://github.com/searles/parsing
Here is a tutorial: https://github.com/searles/parsing/blob/master/tutorial.md
And here is a parser/pretty printer for mathematical expressions: https://github.com/searles/parsing/blob/master/src/main/java/at/searles/demo/DemoInvert.kt
Take a look at Invertible syntax descriptions: Unifying parsing and pretty printing.
There are several parser generators that include an implementation of an unparser. One of them is the nearley parser generator for context-free grammars.
It is also possible to implement bidirectional transformations of source code using definite clause grammars. In SWI-Prolog, the phrase/2 predicate can convert an input text into a parse tree and vice-versa.
Our DMS Software Reengineering Toolkit does precisely this (and provides a lot of additional support for analyzing/transforming code). It does this by decorating a language grammar with additional attributes, producing what is called an attribute grammar. We use a special DSL to write these rules to make them convenient to write.
It helps to know that DMS produces a tree based directly on the grammar.
Each DMS grammar rule is paired with a so-called "prettyprinting" rule. Each prettyprinting rule describes how to "prettyprint" the syntactic element and sub-elements recognized by its corresponding grammar rule. The prettyprinting process essentially manufactures or combines rectangular boxes of text horizontally or vertically (with optional indentation), with leaves producing unit-height boxes containing the literal value of the leaf (keyword, operator, identifier, constant, etc.).
As an example, one might write the following DMS grammar rule and matching prettyprinting rule:
statement = 'for' '(' assignment ';' assignment ';' conditional_expression ')'
            '{' sequence_of_statements '}' ;
<<PrettyPrinter>>:
   { V(H('for','(',assignment[1],';',assignment[2],';',conditional_expression,')'),
       H('{', I(sequence_of_statements)),
       '}');
   }
This will parse the following:
for ( i=x*2;
i--; i>-2*x ) { a[x]+=3;
b[x]=a[x]-1; }
(using additional grammar rules for statements and expressions) and prettyprint it (using additional prettyprinting rules for those additional grammar rules) as follows:
for (i=x*2;i--;i>-2*x)
{ a[x]+=3;
  b[x]=a[x]-1;
}
DMS also captures comments, attaches them to AST nodes, and regenerates them on output. The implementation is a bit exotic because most parsers don't handle comments, but utilization is easy, even "free"; comments will be automatically inserted in the prettyprinted result in their original places.
DMS can also print in "fidelity" mode. In this form, it tries to preserve the shape of the token (e.g., number radix, identifier character capitalization, which keyword spelling was used) and the column offset (into the line) of each parsed token. This would cause the original text (or something so close that you don't think it is different) to get regenerated.
More details about what prettyprinters must do are provided in my SO answer on Compiling an AST back to source code. DMS addresses all of those topics cleanly.
This capability has been used by DMS on some 40+ real languages, including full IBM COBOL, PL/SQL, Java 1.8, C# 5.0, C (many dialects) and C++14.
By writing a sufficiently interesting set of prettyprinter rules, you can build things like JavaDoc extended to include hyperlinked source code.
It is not possible in general.
What makes a print pretty? A print is pretty if spaces, tabs, or newlines are at the positions that make the output look nice.
But most grammars ignore white space, because in most languages white space is not significant. There are exceptions like Python, but in general the question of whether it is a good idea to use white space as syntax is still controversial. And therefore most grammars do not treat white space as syntax.
And if the abstract syntax tree does not contain white space, because the parser has thrown it away, no generator can use it to pretty print an AST.

Is there a parser generator that can use the Wirth syntax?

ie: http://en.wikipedia.org/wiki/Wirth_syntax_notation
It seems like most use BNF / EBNF ...
The distinction made by the Wikipedia article looks to me like it is splitting hairs. "BNF/EBNF" has long meant writing grammar rules in roughly the following form:
nonterminal = right_hand_side end_rule_marker
As with other silly language differences ("{" in C, begin in Pascal, endif vs. fi) you can get very different-looking but identical meaning by choosing different, er, syntax for end_rule_marker and for what you are allowed to say in the right_hand_side.
Normally people allow literal tokens in (your choice! of) quotes, other nonterminal names, and for EBNF, various "choice" operators typically | or / for alternatives, * or + for "repeat", [ ... ] or ? for optional, etc.
Because people designing language syntax are playing with syntax, they seem to invent their own every time they write some down. (Check the various syntax formalisms in the language standards; none of them are the same). Yes, we'd all be better off if there were one standard way to write this stuff. But we don't do that for C or C++ or C# (not even Microsoft could follow their own standard); why should BNF be any different?
Because people that build parser generators usually use it to parse their own syntax, they can and so easily define their own for each parser generator. I've never seen one that did precisely the "WSN" version at Wikipedia and I doubt I ever will in practice.
Does it matter much? Well, no. What really matters is the power of the parser generator behind the notation. You have to bend most grammars to match the power (well, weakness) of the parser generator; for instance, for LL-style generators, your grammar rules can't have left recursion (WSN allows that according to my reading). People building parser generators also want to make it convenient to express non-parser issues (such as, "how to build tree nodes") and so they also add extra notation for non-parsing issues.
So what really drives the parser generator syntax are weaknesses of the parser generator to handle arbitrary languages, extra goodies deemed valuable for the parser generator, and the mood of the guy implementing it.

Parser generator for inline documentation

To have a general-purpose documentation system that can extract inline documentation of multiple languages, a parser for each language is needed. A parser generator (which actually doesn't have to be that complete or efficient) is thus needed.
http://antlr.org/ is a nice parser generator that already has a number of grammars for popular languages. Are there better alternatives i.e. simpler ones that support generating parsers for even more languages out-of-the-box?
If you're only looking for "partial parsing", then you could use ANTLR's option to partially "lex" a token stream and ignore the rest of the tokens. You can do that by enabling filter=true in a lexer grammar. The lexer then tries to match any token you defined in your grammar, and when it can't match one of them, it advances one single character (and ignores it) and then tries again to match one of your tokens at the next character:
lexer grammar Foo;
options {filter=true;}
StringLiteral
: ...
;
CharLiteral
: ...
;
SingleLineComment
: ...
;
MultiLineComment
: ...
;
When implemented properly, you can get the MultiLineComments (/* ... */) from a Java file quite easily without being afraid of single-line comments and string or char literals messing things up.
Obviously, your source files need to be valid to be able to properly tokenize a file, otherwise you get strange results!
My compiler uses Dypgen. This is a user-extensible GLR parser with lots of enrichments, so it can parse many languages. The bootstrap grammar is EBNF-like (it supports *, + and ? directly in your productions). It is powerful enough to dynamically load extensions, a fact my compiler leverages: the bulk of my programming language has its syntax dynamically loaded at compiler startup.
Dypgen is written in OCaml and generates OCaml code.
There is a C++ GLR parser called Elkhound which is powerful enough to parse most of C++.
However, for your actual requirements, you do not really need to do any serious parsing: a regular expression matching engine is probably good enough. Google's RE2 may be suitable (it provides most PCRE functionality, is a lot faster, and has a C++ interface).
Although this is less accurate, it is good enough because you can demand that inline documentation adhere to some simple formats. Most existing inline docs already do so for just this reason.
Where I work we used to use GOLD Parser. This is a lot simpler than ANTLR and supports multiple languages. We have since moved to ANTLR, however, as we needed to do more complex parsing, which we found ANTLR handled better than GOLD.
