With a (simplified) grammar that recognizes classes of the form
classDeclaration
    : Class className=Identifier
      ( Extends parentClassIdentifier=PackageIdentifier )?
      ( classSpecifier )*
      Semicolon
    ;

classSpecifier
    : ( Abstract | Final )
    | /* ... */
    ;
one could easily create a class and make it both abstract and final
class MyContradiction abstract final;
My grammar contains several of these keywords which don't make sense when combined. How do I check for the combinations that don't make sense? Should I modify the grammar to prevent them, at the cost of convoluted alternatives and checks, or should I instead use a listener or visitor to check in code which flags are set and which combinations are invalid?
I would say that checking for these wrong combinations in code is better, because it gives you the chance to show the user a better error message than a generic 'syntax error'.
To clarify: the code which checks these kinds of semantic constraints and catches the wrong combinations of keywords should indeed be code, but code placed in the parser's semantic actions. That is the best place for it: it lives in the same file as the grammar, yet it doesn't make the grammar itself more complicated.
I've seen parsers whose semantic actions evaluated expressions of the language being parsed, and more; basically, some people embed an interpreter for the parsed language in the grammar file. This is wrong: the parser should produce an AST and nothing more. But checks such as rejecting a bad combination of keywords are, I believe, best done in semantic actions in the grammar file. If they live elsewhere, someone studying your grammar will not see these very syntax-related constraints, and you may simply forget to run the checks after parsing is completed. And, after all, returning from your parser an AST for text which is illegal in your language is a bit dodgy, isn't it?
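For illustration, here is what the listener variant from the question could look like. This is a minimal sketch, assuming ANTLR 4 and a grammar named MyLang, so MyLangBaseListener, MyLangParser, and the accessor methods below are the names ANTLR would generate from the rules shown above:

// Hypothetical generated names: MyLangBaseListener / MyLangParser (grammar "MyLang").
public class ModifierChecker extends MyLangBaseListener {
    @Override
    public void exitClassDeclaration(MyLangParser.ClassDeclarationContext ctx) {
        boolean isAbstract = false;
        boolean isFinal = false;
        // Collect every modifier attached to this class declaration.
        for (MyLangParser.ClassSpecifierContext spec : ctx.classSpecifier()) {
            if (spec.Abstract() != null) isAbstract = true;
            if (spec.Final() != null) isFinal = true;
        }
        if (isAbstract && isFinal) {
            // A targeted message with a source position, rather than a bare 'syntax error'.
            System.err.printf("line %d: class '%s' cannot be both abstract and final%n",
                    ctx.getStart().getLine(), ctx.className.getText());
        }
    }
}

The same check could just as well be written as a semantic action on classDeclaration, as recommended above; the logic is identical, only where it lives differs.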
Related
I'm trying to figure out how best to parse just a subset of a given language with ANTLR. For example, say I want to parse U-SQL. Really, I'm only interested in certain parts of the language, such as query statements; I don't care about parsing its many other features. My current approach has been to design my lexer/parser grammar as follows:
// ...
statement
    : queryStatement
    | undefinedStatement
    ;

// ...

undefinedStatement
    : (.)+?
    ;

// ...

UndefinedToken
    : (.)+?
    ;
The gist is, I add a fall-back parser rule and lexer rule for undefined structures and tokens. I imagine later, when I go to walk the parse tree, I can simply ignore the undefined statements in the tree, and focus on the statements I'm interested in.
This seems like it would work, but is this an optimal strategy? Are there more elegant options available? Thanks in advance!
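To illustrate the plan above: a listener that overrides only the rules of interest skips everything else automatically. A rough sketch (the USql* names are hypothetical, assuming the grammar is called USql):

import java.util.ArrayList;
import java.util.List;

public class QueryCollector extends USqlBaseListener {
    final List<String> queries = new ArrayList<>();

    @Override
    public void exitQueryStatement(USqlParser.QueryStatementContext ctx) {
        queries.add(ctx.getText());
    }

    // There is deliberately no override for undefinedStatement: the tree walker
    // still visits those subtrees, but nothing happens there.
}

Walking would then be the usual ParseTreeWalker.DEFAULT.walk(collector, tree).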
Parsing a subpart of the language is super easy. Usually you have a top-level rule which you call to parse the full input with the entire grammar.
For the subpart, instead call the function that parses only that subrule:
const expression = parser.statement();
I use this approach frequently when I want to parse stored procedures or data types only.
Keep in mind, however, that subrules usually are not terminated with the EOF token (as the top-level rule should be). As a result there is no syntax error if the token stream contains more than the subelement; the parser simply stops once the subrule has matched completely. If that's a problem for you, then add a copy of the subrule you want to parse, give it a dedicated name, and end it with EOF, like this:
dataTypeDefinition: // For external use only. Don't reference this in the normal grammar.
dataType EOF
;
dataType: // type in sql_yacc.yy
    type = (
        ...
Check the MySQL grammar for more details.
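In the Java target the same pattern would look roughly like this (a sketch; the MySQLLexer/MySQLParser names and the sample input are assumptions, not taken from the actual grammar):

import org.antlr.v4.runtime.CharStreams;
import org.antlr.v4.runtime.CommonTokenStream;

public class ParseDataType {
    public static void main(String[] args) {
        MySQLLexer lexer = new MySQLLexer(CharStreams.fromString("DECIMAL(10,2)"));
        MySQLParser parser = new MySQLParser(new CommonTokenStream(lexer));
        // Because dataTypeDefinition ends with EOF, any trailing input causes a
        // syntax error instead of being silently ignored.
        MySQLParser.DataTypeDefinitionContext tree = parser.dataTypeDefinition();
        System.out.println(tree.toStringTree(parser));
    }
}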
This general idea -- to parse the interesting bits of an input and ignore the sea of surrounding tokens -- is usually called "island parsing". There's an example of an island parser in the ANTLR reference book, although I don't know if it is directly applicable.
The tricky part of island parsing is getting the island boundaries right. If you miss a boundary, or recognise as a boundary something which isn't, then your parse will fail disastrously. So you need to understand the input at least well enough to be able to detect where the islands are. In your example, that might mean recognising a SELECT statement, for example. However, you cannot blindly recognise the string of letters SELECT because that string might appear inside a string constant or a comment or some other context in which it was never intended to be recognised as a token at all.
I suspect that if you are going to parse queries, you'll basically need to be able to recognise every token. So the input won't be a sea of uninspected characters; you can view it instead as a sea of recognised but unparsed tokens. In that case, it should be reasonably safe to parse a non-query statement as a keyword followed by arbitrary tokens other than ; and ending with a ;. (But you might need to recognise nested blocks; I don't really know what the possibilities are.)
Most interpreters let you type the following at their console:
>> a = 2
>> a+3
5
>>
My question is: what mechanisms are usually used to handle this syntax? Somehow the parser is able to distinguish between an assignment and an expression even though both can start with a digit or letter; it's only when you retrieve the second token that you know whether you have an assignment. In the past, I've looked ahead two tokens, and if the second token isn't an equals sign I push the tokens back into the lexical stream and assume it's an expression. I suppose one could treat the assignment as an expression, which I think some languages do. I also thought of using left-factoring, but I can't see it working.
e.g.
assignment = variable A
A = '=' expression | empty
Update: I found this Stack Overflow question which addresses the same issue: How to modify parsing grammar to allow assignment and non-assignment statements?
From how you're describing your approach - doing a few tokens of lookahead to decide how to handle things - it sounds like you're trying to write some sort of top-down parser along the lines of an LL(1) or an LL(2) parser, and you're trying to immediately decide whether the expression you're parsing is a variable assignment or an arithmetical expression. There are several ways that you could parse expressions like these quite naturally, and they essentially involve weakening one of those two assumptions.
The first way we could do this would be to switch from using a top-down parser like an LL(1) or LL(2) parser to something else like an LR(0) or SLR(1) parser. Those parsers work bottom-up by reading larger prefixes of the input string before deciding what they're looking at. In your case, a bottom-up parser might work by seeing the variable and thinking "okay, I'm either going to be reading an expression to print or an assignment statement, but with what I've seen so far I can't commit to either," then scanning more tokens to see what comes next. If they see an equals sign, great! It's an assignment statement. If they see something else, great! It's not. The nice part about this is that if you're using a standard bottom-up parsing algorithm like LR(0), SLR(1), LALR(1), or LR(1), you should probably find that the parser generally handles these sorts of issues quite well and no special-casing logic is necessary.
The other option would be to parse the entire expression assuming that = is a legitimate binary operator like any other operation, and then check afterwards whether what you parsed is a legal assignment statement or not. For example, if you use Dijkstra's shunting-yard algorithm to do the parsing, you can recover a parse tree for the overall expression, regardless of whether it's an arithmetical expression or an assignment. You could then walk the parse tree to ask questions like
if the top-level operation is an assignment, is the left-hand side a single variable?
if the top-level operation isn't an assignment, are there nested assignment statements buried in here that we need to get rid of?
In other words, you'd parse a broader class of statements than just the ones that are legal, and then do a postprocessing step to toss out anything that isn't valid.
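A minimal sketch of that post-check, with a hypothetical Node type standing in for whatever tree the shunting-yard pass produces:

// Hypothetical tree node: an operator with two children, or a leaf
// holding an identifier/literal.
final class Node {
    final String op;    // "=", "+", "*", ... ; null for a leaf
    final String text;  // leaf text; null for an operator
    final Node left, right;

    Node(String op, Node left, Node right) {
        this.op = op; this.left = left; this.right = right; this.text = null;
    }

    Node(String text) {
        this.text = text; this.op = null; this.left = null; this.right = null;
    }

    boolean isIdentifier() {
        return op == null && text.matches("[A-Za-z_][A-Za-z0-9_]*");
    }
}

class StatementChecker {
    // Accept either "var = expr" with a plain variable on the left,
    // or a bare expression containing no '=' at all.
    static boolean isValidStatement(Node root) {
        if ("=".equals(root.op)) {
            return root.left.isIdentifier() && hasNoAssignment(root.right);
        }
        return hasNoAssignment(root);
    }

    static boolean hasNoAssignment(Node n) {
        if (n == null) return true;
        return !"=".equals(n.op) && hasNoAssignment(n.left) && hasNoAssignment(n.right);
    }
}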
Suppose I already have a complete YACC grammar, say the C grammar for example. Now I want to create a separate parser for a domain-specific language with a simple grammar, except that it still needs to parse complete C type declarations. I wouldn't like to duplicate the long rules from the original grammar together with their associated handling code; instead, I would like to call into the original parser to handle exactly one rule (let's call it "declarator").
If it were a recursive descent parser, there would be a function for each rule that would be easy to call into. But what about YACC with its implicit stack automaton?
Basically, no. Composing LR grammars is not easy, and bison doesn't offer much help.
But all is not lost. Nothing stops you from including the entire grammar (except the %start declaration), and just using part of it, except for one little detail: bison will complain about useless productions.
If that's a show-stopper for you, then you can use a trick to make it possible to create a grammar with multiple start rules. In fact, you can create a grammar which lets you specify the start symbol every time you call the parser; it doesn't even have to be baked in. Then you can tuck that into a library and use whichever parser you want.
Of course, this also comes at a cost: the cost is that the parser is bigger than it would otherwise need to be. However, it shouldn't be any slower, or at least not much -- there might be some cache effects -- and the extra size is probably insignificant compared to the rest of your compiler.
The hack is described in the bison FAQ in quite a lot of detail, so I'll just do an outline here: for each start production you want to support, you create one extra production which starts with a pseudo-token (that is, a lexical code which will never be generated by the lexer). For example, you might do the following:
%start meta_start
%token START_C START_DSL   /* pseudo-tokens: never produced by normal lexing */
meta_start: START_C c_start | START_DSL dsl_start;
Now you just have to arrange for the lexer to produce the appropriate START token when it first starts up. There are various ways to do that; the FAQ suggests using a global variable, but if you use a re-entrant flex scanner, you can just put the desired start token in the scanner state (along with a flag which is set when the start token has been sent).
I'm writing a lexer/parser for a small subset of C in ANTLR that will be run in a Java environment. I'm new to the world of language grammars, and many of the ANTLR tutorials create an AST (abstract syntax tree). Am I forced to create one, and why?
Creating an AST with ANTLR is incorporated into the grammar. You don't have to do this, but it is a really good tool for more complicated requirements. This is a tutorial on tree construction you can use.
Basically, with ANTLR, when the source is being parsed you have a few options: you can generate code or an AST using rewrite rules in your grammar. An AST is basically an in-memory representation of your source. From there, there's a lot you can do.
There's a lot to ANTLR. If you haven't already, I would recommend getting the book.
I found this answer to the question on jGuru written by Terence Parr, who created ANTLR. I copied this explanation from the site linked here:
Only simple, so-called syntax directed translations can be done with actions within the parser. These kinds of translations can only spit out constructs that are functions of information already seen at that point in the parse. Tree parsers allow you to walk an intermediate form and manipulate that tree, gradually morphing it over several translation phases to a final form that can be easily printed back out as the new translation.
Imagine a simple translation problem where you want to print out an html page whose title is "There are n items" where n is the number of identifiers you found in the input stream. The ids must be printed after the title like this:
<html>
<head>
<title>There are 3 items</title>
</head>
<body>
<ol>
<li>Dog</li>
<li>Cat</li>
<li>Velociraptor</li>
</ol>
</body>
</html>
from input
Dog
Cat
Velociraptor
So with simple actions in your grammar, how can you compute the title? You can't without reading the whole input. OK, so now we know we need an intermediate form. The best intermediate form is usually an AST, I've found, since it records the input structure. In this case it's just a list, but it demonstrates my point.
Ok, now you know that a tree is a good thing for anything but simple translations. Given an AST, how do you get output from it? Imagine simple expression trees. One way is to make the nodes in the tree specific classes like PlusNode, IntNode and so on. Then you just ask each node to print itself out. For the input 3+4 you would have this tree (in ANTLR's child-sibling notation: 3 is the first child of +, and 4 is 3's next sibling):
+
|
3 -- 4
and classes
class PlusNode extends CommonAST {
    public String toString() {
        AST left = getFirstChild();
        AST right = left.getNextSibling();
        return left + " + " + right;
    }
}

class IntNode extends CommonAST {
    public String toString() {
        return getText();
    }
}
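A quick hypothetical usage example (setText and addChild come from the ANTLR 2 runtime's BaseAST, which CommonAST extends):

PlusNode plus = new PlusNode();
IntNode three = new IntNode();
three.setText("3");
IntNode four = new IntNode();
four.setText("4");
plus.addChild(three);      // becomes the first child of "+"
plus.addChild(four);       // appended as the next sibling of "3"
System.out.println(plus);  // prints: 3 + 4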
Given an expression tree, you can translate it back to text with t.toString(). So, what's wrong with this? It seems to work great, right? It does work well here because the example is simple, but I argue that, even for this simple case, tree grammars are more readable and are formalized descriptions of precisely what you hand-coded in PlusNode.toString().
expr returns [String r]
{
    String left = null, right = null;
}
    : #("+" left=expr right=expr) {r = left + " + " + right;}
    | i:INT {r = i.getText();}
    ;
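Invoking the generated tree walker is then just another method call. A sketch, assuming ANTLR 2 generates a tree parser class named ExprWalker from the rule above (RecognitionException handling elided):

ExprWalker walker = new ExprWalker();
String text = walker.expr(ast);  // ast is the root AST the parser produced
System.out.println(text);        // prints: 3 + 4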
Note that the specific class ("heterogeneous AST") approach actually encodes a complete recursive-descent parser for #(+ INT INT) by hand in toString(). As parser generator folks, this should make you cringe. ;)
The main weakness of the heterogeneous AST approach is that it cannot conveniently access context information. In a recursive-descent parser, your context is easily accessed because it can be passed in as a parameter. You also know precisely which rule can invoke which other rule (e.g., is this expression a WHILE condition or an IF condition?) by looking at the grammar. The PlusNode class above exists in a detached, isolated world where it has no idea who will invoke its toString() method. Worse, the programmer cannot tell in which context it will be invoked by reading it.
In summary, adding actions to your input parser works for very straightforward translations where:
the order of output constructs is the same as the input order
all constructs can be generated from information parsed up to the point when you need to spit them out
Beyond this, you will need an intermediate form -- the AST is usually the best form. Using a grammar to describe the structure of the AST is analogous to using a grammar to parse your input text: formalized descriptions in a domain-specific high-level language like ANTLR are better than hand-coded parsers. Actions within a tree grammar have very clear context and can conveniently access information passed from invoking rules. Translations that manipulate the tree over multiple passes are also much easier with a tree grammar.
I think the creation of the AST is optional. The Abstract Syntax Tree is useful for subsequent processing like semantic analysis of the parsed program.
Only you can decide whether you need to create one. If your only objective is syntactic validation, then you don't need to generate one. In JavaCC (similar to ANTLR) there is a utility called JJTree that enables generation of an AST, so I imagine this is optional in ANTLR as well.
There are certainly plenty of docs and how-tos on resolving shift/reduce conflicts. The bison docs suggest the correct solution is usually just to %expect them and deal with it.
When you have things like this:
S: S 'b' S | 't'
You can easily resolve them like this:
S: S 'b' T | T
T: 't'
My question is: Is it better to leave the grammar a touch ambiguous and %expect shift/reduce problems or is it better to try to adjust the grammar to avoid them? I suspect there is a balance and it's based on the needs of the author, but I don't really know.
As I read it, your question is "When is an ambiguous grammar or production rule OK?"
First consider the language you are describing. What would be the implication of allowing an ambiguous production rule into it?
Your example describes a language which might include an expression like: t b t b t b t
Resolved as in your second example, the expression would group as ((( t b t ) b t ) b t ), but in an ambiguous grammar it could also become ( t b ( t b ( t b t ))) or even ( t b t ) b ( t b t ). Which grouping is valid might depend on the language: if the b operator models subtraction, it really shouldn't be ambiguous, but if it models addition, the ambiguity might be OK. It really depends on the language.
The second question to consider is what the resulting grammar source file ends up looking like after the conflicts are resolved. As with other source code, a grammar is meant to be read by humans, and only secondarily by computers. Prefer a notation that gives a clearer explanation of what the parser is trying to do. That is, if the parser is executing some possibly undefined behavior, for example the order of evaluation of a function's arguments in an eager language, then letting the grammar look ambiguous honestly documents that behavior.
You can guide the conflict resolution with operator precedence: declare 'b' as a left- or right-associative operator (%left 'b' or %right 'b' in bison) and you have covered at least that case.
For more complex patterns, as long as the final parser produces the correct result in all cases, the warnings aren't much to worry about. If you can't get it to give the correct result using declarations, though, you will have to rewrite the grammar.
In my compiler course last semester we used bison and built a compiler for a subset of Pascal.
If the language is complex enough, you will have some conflicts, and we found that acceptable as long as you understand why they are there and what it would take to remove them. When a conflict's default resolution already behaved the way we wanted, and removing it would have taken far too much thought and work to be worthwhile (while also complicating the grammar), we left it alone. Just make sure you fully understand each conflict and document it somewhere (even if only for yourself), so that you always know what's going on with it.
Once things get really involved it becomes a cost/benefit analysis, but IMHO fixing a conflict should be considered first: figure out what the work would be (and whether that work breaks something else, or makes something else harder), and go from there. Never pass conflicts off as commonplace.
When I need to prove that a grammar is unambiguous, I tend to write it first as a Parsing Expression Grammar, and then convert it by hand to whatever grammar type the tool set I'm using for the project needs. In my experience, the need for this level of proof is very rare, though, since most shift/reduce conflicts I have come across have been fairly trivial ones to show the correctness of (on the order of your example).