I'm having a minor problem trying to figure out how to resolve a conflict in my CUP parser project. I understand why the error is occurring: VariableDeclStar's first terminal can be ID as well as Type, which brings up the conflict. However, I cannot figure out how to resolve the conflict in a way that preserves Type and Variable as separate states. Any help or tips would be appreciated.
VariableDecl ::= Variable SEMICOLON {::};
Variable ::= Type ID {::};
Type ::= INT {::}
| DOUBLE {::}
| BOOLEAN {::}
| STRING {::}
| Type LEFTBRACKET RIGHTBRACKET {::}
| ID {::};
VariableDeclStar::= VariableDecl VariableDeclStar {::}
| {::};
https://i.gyazo.com/0ac3fbf4ebc2d3968f1c2a78c292bc0d.png
I am using bison (3.0.4) and a lexer to implement a (partial) grammar of the Decaf programming language. I am only implementing what is inside a class.
So, my task is simple: store every production rule (as a string) in the tree, and then just print it out.
For example, if you have the following line of code as an input
class Foo { Foo(int arg1) { some2 a; } }
you (must) get the following output
<ClassDecl> --> class identifier <classBody>
<ClassBody> --> { <VariableDecl>* <ConstructorDecl>* <MethodDecl>* }
<ConstructorDecl> --> identifier ( <ParameterList> ) <Block>
<ParameterList> --> <Parameter> <, Parameter>*
<Parameter> --> <Type> identifier
<Type> --> <SimpleType>
<SimpleType> --> int
<Block> --> { <LocalVariableDecl>* <Statement>* }
<LocalVariableDecl> --> <Type> identifier ;
<Type> --> <SimpleType>
<SimpleType> --> identifier
The first problem (now solved) was that the parser matched a variable declaration instead of a constructor declaration, even though I have no variable declarations in the scope of the class itself (i.e. they appear only inside the constructor's block).
Nevertheless, if I give it class abc { some1 abc; john doe; }, it reports syntax error, unexpected SEMICOLON, expecting LP. So the character at line 19 causes the problem.
Here is the .y file (only the class body rules):
class_decl:
CLASS ID LC class_body RC
;
/* FIXME: Gotta add more grammar here */
class_body: var_declmore constructor_declmore method_declmore
| var_declmore constructor_declmore
| var_declmore method_declmore
| constructor_declmore method_declmore
| method_declmore
| var_declmore
| constructor_declmore
| %empty
;
var_declmore: var_decl
| var_declmore var_decl
;
constructor_declmore: constructor_decl
| constructor_declmore constructor_decl
;
var_decl: type ID SEMICOLON
| type ID error
;
constructor_decl: ID LP parameter_list RP block
| ID error parameter_list RP block
;
Here is the gist to the full .y file.
The essential problem is that constructor_declmore can be empty, and that both var_decl and constructor_decl can start with ID.
That's a problem because before the parser can recognize a constructor_decl, it needs to reduce an (empty) constructor_declmore. But it obviously cannot do that reduction unless it knows that the var_declmore is finished.
So when it sees an ID it has to decide between one of two actions:
Reduce an empty constructor_declmore, thereby deciding that there are no more var_decls; or
Shift the ID in order to start parsing a new var_decl.
In the absence of precedence declarations (which wouldn't help here), bison/yacc always resolves shift/reduce conflicts in favour of the shift action. So in this case, it assumes that Foo is the ID which starts a var_decl, leading to the error message you note.
The reduce/reduce conflict produced by the grammar should also be looked at. It comes from the method_declmore: method_decl rule, which conflicts with the other possible way of creating a method_declmore by starting with an empty method_declmore and then adding a method_decl.
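One way around both conflicts (a sketch only, not a drop-in patch for the full grammar; class_members and class_member are names I made up) is to drop the separate var_declmore / constructor_declmore / method_declmore lists and use a single left-recursive member list. The parser then never has to reduce an empty list before seeing an ID; with one class_member rule it can shift the ID and decide between var_decl and constructor_decl from the next token (another ID versus LP). The ordering <VariableDecl>* <ConstructorDecl>* <MethodDecl>* would have to be enforced in the actions if you still need it:
class_body: class_members
          ;
class_members: %empty
             | class_members class_member
             ;
class_member: var_decl          /* type ID SEMICOLON: decided when ID follows the first ID  */
            | constructor_decl  /* ID LP ... RP block: decided when LP follows the first ID */
            | method_decl       /* as defined in your full .y file */
            ;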
So I have a lexer with a token defined so that it is enabled/disabled based on a boolean property.
I create an input stream and parse some text. My token is called PHRASE_TEXT and should match anything falling within this pattern: '"' ('\\' ~[] |~('\"'|'\\')) '"' {phraseEnabled}?
I tokenize "foo bar" and as expected I get a single token. After setting the property to false on the lexer and calling setInputStream on it with the same text I get "foo , bar" so 2 tokens instead of one. This is also expected behavior.
The problem comes when setting the property to true again. I would expect the same text to tokenize to the whole single token "foo bar", but instead it is tokenized into the 2 tokens from before. Is this a bug on my part? What am I doing wrong here? I tried using new instances of the tokenizer and reusing the same instance, but it doesn't seem to work either way. Thanks in advance.
Edit : Part of my grammar follows below
grammar LuceneQueryParser;
@header { package com.amazon.platformsearch.solr.queryparser.psclassicqueryparser; }
@lexer::members {
public boolean phrases = true;
}
@parser::members {
public boolean phraseQueries = true;
}
mainQ : LPAREN query RPAREN
| query
;
query : not ((AND|OR)? not)* ;
andClause : AND ;
orClause : OR ;
not : NOT? modifier? clause;
clause : qualified
| unqualified
;
unqualified : LBRACK range_in RBRACK
| LCURL range_out RCURL
| truncated
| {phraseQueries}? quoted
| LPAREN query RPAREN
| normal
;
truncated : TERM_TEXT_TRUNCATED;
range_in : (TERM_TEXT|STAR) TO (TERM_TEXT|STAR);
range_out : (TERM_TEXT|STAR) TO (TERM_TEXT|STAR);
qualified : TERM_TEXT COLON unqualified ;
normal : TERM_TEXT;
quoted : PHRASE_TEXT;
modifier : PLUS
| MINUS
;
PHRASE_TEXT : '"' (ESCAPE|~('\"'|'\\'))+ '"' {phrases}?;
TERM_TEXT : (TERM_CHAR|ESCAPE)+;
TERM_CHAR : ~(' ' | '\t' | '\n' | '\r' | '\u3000'
| '\\' | '\'' | '(' | ')' | '[' | ']' | '{' | '}'
| '+' | '-' | '!' | ':' | '~' | '^'
| '*' | '|' | '&' | '?' );
ESCAPE : '\\' ~[];
The problem seems to be that after i set the phrases to false, and then to true again, no more tokens seem to be recognized as PHRASE_TEXT. I know that as a guideline i should define my grammars to be unambiguous but this is basically the way it has to end up looking : tokenizing a string with quotes in 2 different modes, depending on the situation.
I'm going to have to update this with an answer a colleague of mine helpfully pointed out. The generated lexer class has a static DFA[] array shared between all instances of the class. Once the property was set to false instead of the default true, the cached decision DFA was apparently changed for all object instances. A fix for this was to have two separate DFA[] arrays, one for the true and one for the false state of the property I was modifying. I think making that array non-static would be too expensive, and I really can't think of another fix.
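For reference, here is a minimal sketch of that idea under the assumption that the generated lexer follows the usual ANTLR 4 pattern (a static _decisionToDFA shared by every instance). Rather than keeping two static arrays, it gives one lexer instance its own private DFA array and interpreter, so flipping phrases on that instance cannot poison the cache used by the others. LuceneQueryParserLexer is the class the grammar above would generate; everything else is the stock ANTLR 4 runtime:
import org.antlr.v4.runtime.ANTLRInputStream;
import org.antlr.v4.runtime.atn.ATN;
import org.antlr.v4.runtime.atn.LexerATNSimulator;
import org.antlr.v4.runtime.atn.PredictionContextCache;
import org.antlr.v4.runtime.dfa.DFA;

public class IsolatedLexerFactory {
    // Build a lexer whose DFA cache is private to this instance, so changing
    // the 'phrases' flag here does not affect lexers created elsewhere.
    public static LuceneQueryParserLexer create(String input, boolean phrases) {
        LuceneQueryParserLexer lexer =
                new LuceneQueryParserLexer(new ANTLRInputStream(input));
        ATN atn = lexer.getATN();
        DFA[] privateDFA = new DFA[atn.getNumberOfDecisions()];
        for (int i = 0; i < privateDFA.length; i++) {
            privateDFA[i] = new DFA(atn.getDecisionState(i), i);
        }
        // Replace the interpreter that was wired up to the shared static DFA[].
        lexer.setInterpreter(new LexerATNSimulator(
                lexer, atn, privateDFA, new PredictionContextCache()));
        lexer.phrases = phrases;
        return lexer;
    }
}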
I always get an "implicit token definition" warning in my parser rules when coding the examples from ANTLR v4 inside ANTLRWorks 2, for a simple rule like:
type
: 'Integer'
| 'Character'
| 'Real'
| 'String'
| 'Short'
| 'Long'
| 'Double'
| 'Signed'
| 'Unsigned'
| 'Boolean'
| structTag
| enumTag
| declarator
;
Can anybody give me a solution for that warning, or at least a solution for the example above?
Thanks.
The warning is there to inform you that you will have no way in code to know whether your type is an Integer, Character, Real, etc., because you have not assigned named token types to the corresponding tokens. You can resolve this warning by creating named lexer rules for each of your tokens:
INTEGER : 'Integer';
CHARACTER : 'Character';
You do not have to change the type rule after adding these definitions, but you will then be able to check whether a token's type is INTEGER or CHARACTER as part of your parser result handling code.
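As an illustration (the MyLang grammar, listener and class names here are hypothetical, not from the question), once INTEGER and CHARACTER are real lexer rules you can branch on the token type wherever you handle the parse result, for example in a listener:
import org.antlr.v4.runtime.Token;

public class TypeListener extends MyLangBaseListener {
    @Override
    public void exitType(MyLangParser.TypeContext ctx) {
        // The first token of the matched 'type' rule identifies the alternative.
        Token first = ctx.getStart();
        if (first.getType() == MyLangLexer.INTEGER) {
            System.out.println("found an Integer type");
        } else if (first.getType() == MyLangLexer.CHARACTER) {
            System.out.println("found a Character type");
        }
        // The structTag / enumTag / declarator alternatives can be inspected
        // through the generated accessors, e.g. ctx.structTag().
    }
}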
I'm working on an SQL grammar in ANTLR which allows quoted identifiers (table names, field names, etc), as well as quoted literal strings.
The problem is that this grammar seems to always match quoted inputs as "QUOTED_LITERAL", and never as IDs wrapped in quotes.
Here are my results:
input: 'blahblah' result: string_literal as expected.
input: field1 result: column_name as expected
input: table.field1 result: column_spec as expected
input: 'table'.'field1' result: string_literal, MissingTokenException
Below is my simplified grammar for the expression portion of the SQL grammar. If anybody can help identify what is needed to match quoted rules other than the quoted literal, thanks.
grammar test;
expression
:
simpleExpression EOF!
;
simpleExpression
:
column_spec
| literal_value
;
column_spec
:
(table_name '.')? column_name
| ('\''table_name '\'''.')? '\'' column_name '\''
| ('\"'table_name '\"' '.')? '\"' column_name '\"'
;
string_literal: QUOTED_LITERAL ;
boolean_literal: 'TRUE' | 'FALSE' ;
literal_value :
(
string_literal
| boolean_literal
)
;
table_name :ID;
column_name :ID;
QUOTED_LITERAL:
( '\''
( ('\\' '\\') | ('\'' '\'') | ('\\' '\'') | ~('\'') )*
'\'' )
|
( '\"'
( ('\\' '\\') | ('\"' '\"') | ('\\' '\"') | ~('\"') )*
'\"' )
;
ID
:
( 'A'..'Z' | 'a'..'z' ) ( 'A'..'Z' | 'a'..'z' | '_' | '0'..'9'| '::' )*
;
WHITE_SPACE : ( ' '|'\r'|'\t'|'\n' ) {$channel=HIDDEN;} ;
In case anybody is interested, I removed a little bit of the flexibility from the quoted literal strings. Literal strings can only be quoted by single quotes, and identifiers can optionally be quoted by double quotes. As long as the literal quote and the identifier quote are well defined and don't overlap, the grammar is trivial.
This policy makes the grammar much cleaner and doesn't remove the ability to quote identifiers. I use the JDBC DatabaseMetaData method getIdentifierQuoteString to report which quote can be used to wrap identifiers.
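For anyone who wants the shape of that policy, here is a rough sketch (the new rule names are made up, and it assumes the split described above): single quotes always introduce a string literal and double quotes always introduce a quoted identifier, so the lexer never has to guess:
STRING_LITERAL : '\'' ( '\\' . | ~('\\' | '\'') )* '\'' ;   // literals: single quotes only
QUOTED_ID      : '\"' (~'\"')+ '\"' ;                       // identifiers: double quotes only
ID             : ('A'..'Z' | 'a'..'z') ('A'..'Z' | 'a'..'z' | '_' | '0'..'9')* ;

table_name  : ID | QUOTED_ID ;
column_name : ID | QUOTED_ID ;
column_spec : (table_name '.')? column_name ;
string_literal : STRING_LITERAL ;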
This is your classic shift/reduce conflict. (Except that ANTLR does not shift or reduce, since it is not a stack automaton.)
You have the following problem:
When you are in the simpleExpression state, you need to decide which branch to take with one token of lookahead. In the case of ANTLR, since no distinction is made between the lexer and the parser here, that one token is a single character. (You should see a warning from ANTLR about the conflict.)
It gets even better: what is the difference between "Bob Dillan" and "table1"? From the parser's point of view, none. So how do you expect it to distinguish between:
('\"'table_name '\"' '.')? '\"' column_name '\"'
and
( '\"'
( ('\\' '\\') | ('\"' '\"') | ('\\' '\"') | ~('\"') )*
'\"' )
I strongly suggest rewriting the simpleExpression rule to:
simpleExpression:
IDENTIFIER |
IDENTIFIER '.' IDENTIFIER |
QUOTED_LITERAL |
QUOTED_LITERAL '.' QUOTED_LITERAL |
boolean_literal;
And then decide in the action code of simpleExpression what to do. Especially since I am quite sure that you can reference a table with a quoted name; nevertheless, "users" and "Bob Dillan" are syntactically equal.
It also depends on the greater grammar; you may be able to resolve the ambiguity at a higher level.
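To make the "decide in the action code" part concrete, a sketch along these lines could work (ANTLR 3 style actions; the helpers column, stringLiteral and unquote are placeholders you would supply, and the token names follow the rewrite above rather than the original grammar):
simpleExpression returns [Object value]
    : i=IDENTIFIER                          { $value = column($i.text); }
    | t=IDENTIFIER '.' c=IDENTIFIER         { $value = column($t.text + "." + $c.text); }
    | q=QUOTED_LITERAL
      {
        // single-quoted text is a string literal, double-quoted text a quoted identifier
        String s = $q.text;
        $value = s.startsWith("\"") ? column(unquote(s)) : stringLiteral(unquote(s));
      }
    | a=QUOTED_LITERAL '.' b=QUOTED_LITERAL { $value = column(unquote($a.text) + "." + unquote($b.text)); }
    | boolean_literal                       { $value = Boolean.valueOf($boolean_literal.text); }
    ;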
The ANTLR lexer is greedy: when there are two possible token matches, it will match the longest possible one.
When the lexer sees 'some_id', it can match the first quote as just a quote, or the whole thing as a quoted literal. The literal is longer, so that is what matches.
As a side note, you generally do not want lexer rules that can match nothing (like ID) or to use string constants in the parser rules; reference only token names.
What you want to do is something like this.
QUOTE: '\'';
ID: ('a'..'z' | 'A'..'Z')+; // Must have at least one character
QUOTED_LITERAL: QUOTE ( (ID QUOTE) => { $type=QUOTE; } ) | .* QUOTE;
id: ID | QUOTE ID QUOTE;
quoted_literal: QUOTED_LITERAL | QUOTE ID QUOTE;
If the lexer sees something that looks like a quoted id, it cannot tell which to use, so it breaks it up into smaller tokens. In your parser, you use id where you expect a possibly quoted ID, and quoted_literal where you expect a QUOTED_LITERAL.
The syntactic predicate in QUOTED_LITERAL prevents it from matching the full quoted string when the input is ambiguous.
Looking at this, it will fail to correctly parse lines like
'tag' text 'second'
as ' text ' will be parsed as a QUOTED_LITERAL. If that is a valid input, then you would need something like
fragment QUOTED_ID;
QUOTED_LITERAL: QUOTE ( ID {$type=QUOTED_ID;} | .* ) QUOTE;
id: ID | QUOTED_ID;
quoted_literal: QUOTED_LITERAL | QUOTED_ID;
(My example does not cover all the cases in your input, but extending it should be obvious. You also probably need some actions to either generate the correct tokens in your AST or add/remove quotes from the text, depending on what you do after you parse.)
I begin with an otherwise well formed (and well working) grammar for a language. Variables,
binary operators, function calls, lists, loops, conditionals, etc. To this grammar I'd like to add what I'm calling the object construct:
object
: object_name ARROW more_objects
;
more_objects
: object_name
| object_name ARROW more_objects
;
object_name
: IDENTIFIER
;
The point is to be able to access scalars nested in objects. For example:
car->color
monster->weapon->damage
pc->tower->motherboard->socket_type
I'm adding object as a primary_expression:
primary_expression
: id_lookup
| constant_value
| '(' expression ')'
| list_initialization
| function_call
| object
;
Now here's a sample script:
const list = [ 1, 2, 3, 4 ];
for var x in list {
send "foo " + x + "!";
}
send "Done!";
Prior to adding the nonterminal object as a primary_expression everything is sunshine and puppies. Even after I add it, Bison doesn't complain. No shift and/or reduce conflicts reported. And the generated code compiles without a sound. But when I try to run the sample script above, I get told error on line 2: Attempting to use undefined symbol '{' on line 2.
If I change the script to:
var list = 0;
for var x in [ 1, 2, 3, 4 ] {
send "foo " + x + "!";
}
send "Done!";
Then I get error on line 3: Attempting to use undefined symbol '+' on line 3.
Clearly the presence of object in the grammar is messing up how the parser behaves [SOMEhow], and I feel like I'm ignoring a rather simple principle of language theory that would fix this in a jiff, but the fact that there aren't any shift/reduce conflicts has left me bewildered.
Is there a better way (grammatically) to write these rules? What am I missing? Why aren't there any conflicts?
(And here's the full grammar file in case it helps)
UPDATE: To clarify, this language, which compiles into code being run by a virtual machine, is embedded into another system - a game, specifically. It has scalars and lists, and there are no complex data types. When I say I want to add objects to the language, that's actually a misnomer. I am not adding support for user-defined types to my language.
The objects being accessed with the object construct are actually objects from the game which I'm allowing the language processor to access through an intermediate layer which connects the VM to the game engine. This layer is designed to decouple as much as possible the language definition and the virtual machine mechanics from the implementation and details of the game engine.
So when, in my language I write:
player->name
That only gets codified by the compiler. "player" and "name" are not traditional identifiers because they are not added to the symbol table, and nothing is done with them at compile time except to translate the request for the name of the player into 3-address code.
It seems you are making a classic error by using literal strings in the yacc source file. Since you are using a separate lexer, you can only use token names in the yacc source file. More on this here
So I spent a reasonable amount of time picking over the grammar (and the bison output) and can't see what is obviously wrong here. Without having the means to execute it, I can't easily figure out what is going on by experimentation. Therefore, here are some concrete steps I usually go through when debugging grammars. Hopefully you can do any of these you haven't already done and then perhaps post follow-ups (or edit your question) with any results that might be revealing:
Contrive the smallest (in terms of number of tokens) possible working input, and the smallest possible non-working inputs based on the rules you expect to be applied.
Create a copy of the grammar file including only the troublesome rules and as few other supporting rules as you can get away with (i.e. you want a language that only allows construction of sequences consisting of the object and more_objects rules, joined by ARROW). Does this work as you expect?
Does the rule in which it is nested work as you expect? Try replacing object with some other very simple rule (using some tokens not occurring elsewhere) and see if you can include those tokens without breaking everything else.
Run bison with --report=all. Inspect the output to try to trace the rules you've added and the states that they affect. Try removing those rules and repeat the process - what has changed? This is often extremely time consuming and a giant pain, but it's a good last resort. I recommend a pencil and some paper.
Looking at the structure of your error output - '+' is being recognised as an identifier token, and is therefore being looked up as a symbol. It might be worth checking your lexer to see how it is processing identifier tokens. You might just accidentally be grabbing too much. As a further debugging technique, you might consider turning some of those token literals (e.g. '+', '{', etc.) into real tokens so that bison's error reporting can help you out a little more.
EDIT: OK, the more I've dug into it, the more I'm convinced that the lexer is not necessarily working as it should be. I would double-check that the stream of tokens you are getting from yylex() matches your expectations before proceeding any further. In particular, it looks like a bunch of symbols that you consider special (e.g. '+' and '{') are being captured by some of your regular expressions, or at least are being allowed to pass for identifiers.
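To illustrate that last point, here is a hedged sketch of what the flex rules might look like once the special characters are real tokens and the identifier pattern can no longer swallow them (the token names PLUS, LBRACE, RBRACE, IDENTIFIER are illustrative; the real grammar may use different ones):
%{
#include <string.h>
#include "parser.tab.h"   /* token numbers generated by bison */
%}

%%
"+"                      { return PLUS; }
"{"                      { return LBRACE; }
"}"                      { return RBRACE; }
[A-Za-z_][A-Za-z0-9_]*   { yylval.sval = strdup(yytext); return IDENTIFIER; }  /* assumes %union { char *sval; } */
[ \t\r\n]+               ;   /* skip whitespace */
.                        { return yytext[0]; }   /* anything else: let bison report it */
%%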
You don't get shift/reduce conflicts because your rules using object_name and more_objects are right-recursive - rather than the left-recursive rules that Yacc (Bison) handles most naturally.
On classic Yacc, you would find that you can run out of stack space with deep enough nesting of the 'object->name->what->not' notation. Bison extends its stack at runtime, so you have to run out of memory, which is a lot harder these days than it was when machines had a few megabytes of memory (or less).
One result of the right-recursion is that no reductions occur until you read the last of the object names in the chain (or, more accurately, one symbol beyond that). I see that you've used right-recursion with your statement_list rule too - and in a number of other places too.
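For comparison, here is the right-recursive shape next to the left-recursive shape that yacc/bison handles best; the left-recursive form reduces each element as soon as it is complete instead of piling the whole chain on the parser stack (rule names follow the question's grammar):
/* right-recursive: nothing is reduced until the last element has been read */
statement_list : statement statement_list
               | statement
               ;

/* left-recursive: each statement is reduced as soon as it is seen */
statement_list : statement
               | statement_list statement
               ;

/* the same idea applied to the object chain */
object : object_name
       | object ARROW object_name
       ;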
I think your principal problem is that you failed to define a subtree constructor
in your object subgrammar. (EDIT: OP says he left the semantic actions for
object out of his example text. That doesn't change the following answer).
You probably have to lookup up the objects in the order encountered, too.
Maybe you intended:
primary_expression
: constant_value { $$ = $1; }
| '(' expression ')' { $$ = $2; }
| list_initialization { $$ = $1; }
| function_call { $$ = $1; }
| object { $$ = $1; }
;
object
: IDENTIFIER { $$ = LookupVariableOrObject( yytext ); }
| object ARROW IDENTIFIER { $$ = LookupSubobject( $1, yytext ); }
;
I assume that if one encounters an identifier X by itself, your default interpretation
is that it is a variable name. But, if you encounter X -> Y, then even if X
is a variable name, you want the object X with subobject Y.
What LookupVariableOrObject does is look up the leftmost identifier encountered to see if it is a variable
(and return essentially the same value as id_lookup, which must produce an AST node of type AST_VAR),
or see if it is a valid object name and return an AST node marked as an AST_OBJ,
or complain if the identifier isn't either of these.
What LookupSubobject does is check its left operand to ensure it is an AST_OBJ
(or an AST_VAR whose name happens to be the same as that of an object),
and complain if it is not. If it is, it looks up the yytext-child object of
the named AST_OBJ.
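A rough sketch of what those two helpers might look like in the host C code follows; everything here (the AST node layout, the symbol table and game-engine calls) is hypothetical, and only the AST_VAR / AST_OBJ distinction comes from the description above:
#include <stdio.h>
#include <stdlib.h>

typedef enum { AST_VAR, AST_OBJ } NodeKind;

typedef struct AstNode {
    NodeKind kind;
    const char *name;    /* a real implementation would copy yytext */
} AstNode;

/* hypothetical queries against the VM's symbol table and the game-engine layer */
extern int symtab_has_variable(const char *name);
extern int engine_has_object(const char *name);
extern int engine_has_subobject(const char *parent, const char *child);

static AstNode *make_node(NodeKind kind, const char *name) {
    AstNode *n = malloc(sizeof *n);
    n->kind = kind;
    n->name = name;
    return n;
}

AstNode *LookupVariableOrObject(const char *name) {
    if (symtab_has_variable(name))   /* plain variable: same result as id_lookup */
        return make_node(AST_VAR, name);
    if (engine_has_object(name))     /* leftmost identifier of an object chain */
        return make_node(AST_OBJ, name);
    fprintf(stderr, "unknown identifier '%s'\n", name);
    exit(1);
}

AstNode *LookupSubobject(AstNode *left, const char *child) {
    if (left->kind != AST_OBJ && !engine_has_object(left->name)) {
        fprintf(stderr, "'%s' is not an object\n", left->name);
        exit(1);
    }
    if (!engine_has_subobject(left->name, child)) {
        fprintf(stderr, "'%s' has no member '%s'\n", left->name, child);
        exit(1);
    }
    return make_node(AST_OBJ, child);
}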
EDIT: Based on discussion comments in another answer, right-recursion in the OP's original
grammar might be problematic if the OP's semantic checks inspect global lexer state (yytext).
This solution is left-recursive and won't run afoul of that particular trap.
id_lookup
: IDENTIFIER
is formally identical to
object_name
: IDENTIFIER
and object_name would accept everything that id_lookup wouldn't, so assertLookup( yytext ) probably runs on everything that may look like an IDENTIFIER and is not accepted by another rule, just to decide between the two, and then object_name can't accept it because single-token lookahead forbids that.
As for the twilight zone: the two characters that you got errors for are not declared as tokens, which opens the zone of undefined behavior and could trip the parser into treating them as potential identifiers when the grammar gets loose.
I just tried running muscle on Ubuntu 10.04 using bison 2.4.1, and I was able to run both of your examples with no syntax errors. My guess is that you have a bug in your version of bison. Let me know if I'm somehow running your parser wrong. Below is the output from the first example you gave.
./muscle < ./test1.m (this was your first test)
\-statement list
|-declaration (constant)
| |-symbol reference
| | \-list (constant)
| \-list
| |-value
| | \-1
| |-value
| | \-2
| |-value
| | \-3
| \-value
| \-4
|-loop (for-in)
| |-symbol reference
| | \-x (variable)
| |-symbol reference
| | \-list (constant)
| \-statement list
| \-send statement
| \-binary op (addition)
| |-binary op (addition)
| | |-value
| | | \-foo
| | \-symbol reference
| | \-x (variable)
| \-value
| \-!
\-send statement
\-value
\-Done!
+-----+----------+-----------------------+-----------------------+
| 1 | VALUE | 1 | |
| 2 | ELMT | #1 | |
| 3 | VALUE | 2 | |
| 4 | ELMT | #3 | |
| 5 | VALUE | 3 | |
| 6 | ELMT | #5 | |
| 7 | VALUE | 4 | |
| 8 | ELMT | #7 | |
| 9 | LIST | | |
| 10 | CONST | #10 | #9 |
| 11 | ITER_NEW | #11 | #10 |
| 12 | BRA | #14 | |
| 13 | ITER_INC | #11 | |
| 14 | ITER_END | #11 | |
| 15 | BRT | #22 | |
| 16 | VALUE | foo | |
| 17 | ADD | #16 | #11 |
| 18 | VALUE | ! | |
| 19 | ADD | #17 | #18 |
| 20 | SEND | #19 | |
| 21 | BRA | #13 | |
| 22 | VALUE | Done! | |
| 23 | SEND | #22 | |
| 24 | HALT | | |
+-----+----------+-----------------------+-----------------------+
foo 1!
foo 2!
foo 3!
foo 4!
Done!