While parsing the C-like example code below, I have the following issue: it seems as if some tokens, such as identifiers, are ignored by the grammar, causing an inexplicable syntax error.
Parser code:
%{
#include <stdio.h>
#include <stdlib.h>
int yylex();
void yyerror (char const *);
%}
%token T_MAINCLASS T_ID T_PUBLIC T_STATIC T_VOID T_MAIN T_PRINTLN T_INT T_FLOAT T_FOR T_WHILE T_IF T_ELSE T_EQUAL T_SMALLER T_BIGGER T_NOTEQUAL T_NUM T_STRING
%left '(' ')'
%left '+' '-'
%left '*' '/'
%left '{' '}'
%left ';' ','
%left '<' '>'
%%
PROGRAM : T_MAINCLASS T_ID '{' T_PUBLIC T_STATIC T_VOID T_MAIN '(' ')' COMP_STMT '}'
;
COMP_STMT : '{' STMT_LIST '}'
;
STMT_LIST : /* nothing */
| STMT_LIST STMT
;
STMT : ASSIGN_STMT
| FOR_STMT
| WHILE_STMT
| IF_STMT
| COMP_STMT
| DECLARATION
| NULL_STMT
| T_PRINTLN '(' EXPR ')' ';'
;
DECLARATION : TYPE ID_LIST ';'
;
TYPE : T_INT
| T_FLOAT
;
ID_LIST : T_ID ',' ID_LIST
|
;
NULL_STMT : ';'
;
ASSIGN_STMT : ASSIGN_EXPR ';'
;
ASSIGN_EXPR : T_ID '=' EXPR
;
EXPR : ASSIGN_EXPR
| RVAL
;
FOR_STMT : T_FOR '(' OPASSIGN_EXPR ';' OPBOOL_EXPR ';' OPASSIGN_EXPR ')' STMT
;
OPASSIGN_EXPR : /* nothing */
| ASSIGN_EXPR
;
OPBOOL_EXPR : /* nothing */
| BOOL_EXPR
;
WHILE_STMT : T_WHILE '(' BOOL_EXPR ')' STMT
;
IF_STMT : T_IF '(' BOOL_EXPR ')' STMT ELSE_PART
;
ELSE_PART : /* nothing */
| T_ELSE STMT
;
BOOL_EXPR : EXPR C_OP EXPR
;
C_OP : T_EQUAL | '<' | '>' | T_SMALLER | T_BIGGER | T_NOTEQUAL
;
RVAL : RVAL '+' TERM
| RVAL '-' TERM
| TERM
;
TERM : TERM '*' FACTOR
| TERM '/' FACTOR
| FACTOR
;
FACTOR : '(' EXPR ')'
| T_ID
| T_NUM
;
%%
void yyerror (const char * msg)
{
fprintf(stderr, "C-like : %s\n", msg);
exit(1);
}
int main ()
{
if(!yyparse()){
printf("Compiled !!!\n");
}
}
Part of the lexical scanner code:
{Empty}+ { printf("EMPTY ") ; /* nothing */ }
"mainclass" { printf("MAINCLASS ") ; return T_MAINCLASS ; }
"public" { printf("PUBLIC ") ; return T_PUBLIC; }
"static" { printf("STATIC ") ; return T_STATIC ; }
"void" { printf("VOID ") ; return T_VOID ; }
"main" { printf("MAIN ") ; return T_MAIN ; }
"println" { printf("PRINTLN ") ; return T_PRINTLN ; }
"int" { printf("INT ") ; return T_INT ; }
"float" { printf("FLOAT ") ; return T_FLOAT ; }
"for" { printf("FOR ") ; return T_FOR ; }
"while" { printf("WHILE ") ; return T_WHILE ; }
"if" { printf("IF ") ; return T_IF ; }
"else" { printf("ELSE ") ; return T_ELSE ; }
"==" { printf("EQUAL ") ; return T_EQUAL ; }
"<=" { printf("SMALLER ") ; return T_SMALLER ; }
">=" { printf("BIGGER ") ; return T_BIGGER ; }
"!=" { printf("NOTEQUAL ") ; return T_NOTEQUAL ; }
{id} { printf("ID ") ; return T_ID ; }
{num} { printf("NUM ") ; return T_NUM ; }
{string} { printf("STRING ") ; return T_STRING ; }
{punct} { printf("PUNCT ") ; return yytext[0] ; }
<<EOF>> { printf("EOF ") ; return T_EOF; }
. { yyerror("lexical error"); exit(1); }
Example:
mainclass Example {
public static void main ( )
{
int c;
float x, sum, mo;
c=0;
x=3.5;
sum=0.0;
while (c<5)
{
sum=sum+x;
c=c+1;
x=x+1.5;
}
mo=sum/5;
println (mo);
}
}
Running all of this produces the following output:
C-like : syntax error
MAINCLASS EMPTY ID
It seems like the identifier is in the wrong position, although in the grammar we have:
PROGRAM : T_MAINCLASS T_ID '{' T_PUBLIC T_STATIC T_VOID T_MAIN '(' ')' COMP_STMT '}'
Based on the "solution" proposed in the OP's self-answer, it's pretty clear that the original problem was that the generated header used to compile the scanner was not the same as the header generated by bison/yacc from the parser specification.
The generated header includes definitions of all the token types as small integers; in order for the scanner to communicate with the parser, it must identify each token with the correct token type. So the parser generator (bison/yacc) produces a header based on the parser specification (the .y file), and that header must be #included into the generated scanner so that scanner actions can use the symbolic token type names.
If the scanner was compiled with a header file generated from some previous version of the parser specification, it is quite possible that the token numbers no longer correspond with what the parser is expecting.
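For reference, it is bison's -d option that produces that header, and the scanner has to include it. A minimal sketch, assuming the grammar lives in parser.y and the scanner in scanner.l (illustrative names, not the question's files):
bison -d parser.y    # writes parser.tab.c and parser.tab.h
flex scanner.l       # writes lex.yy.c
The generated parser.tab.h essentially just numbers the tokens in declaration order, roughly:
enum yytokentype {
    T_MAINCLASS = 258,
    T_ID = 259,
    T_PUBLIC = 260
    /* ... and so on for the rest of the %token list */
};
and the top of scanner.l pulls those codes in so both sides agree:
%{
#include "parser.tab.h"
%}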
The easiest way to avoid this problem is to use a build system like make, which will automatically recompile the scanner if necessary.
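A minimal Makefile along these lines (again with illustrative file names; recipe lines must start with a tab) regenerates and recompiles the scanner whenever the grammar, and therefore the token header, changes:
clike: parser.tab.c lex.yy.c
	gcc -o clike parser.tab.c lex.yy.c -lfl

parser.tab.c parser.tab.h: parser.y
	bison -d parser.y

lex.yy.c: scanner.l parser.tab.h
	flex scanner.l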
The easiest way to detect this problem is to use bison's built-in trace facility. Enabling tracing requires only a couple of lines of code, and saves you from having to scatter printf statements throughout your scanner and parser. The bison trace will show you exactly what is going on, so not only is it less work than adding printfs, it is also more precise. In particular, it reports every token which is passed to the parser (and, with a little more effort, you can get it to report the semantic values of those tokens as well). So if the parser is getting the wrong token code, you'll see that right away.
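As a sketch of what that looks like with a reasonably recent bison (the directive and the yydebug variable below are standard bison features, not something taken from the question's code), add this to the .y file:
%define parse.trace
and turn the trace on before parsing:
int main (void)
{
    extern int yydebug;   /* provided by bison when tracing is compiled in */
    yydebug = 1;          /* prints every incoming token, shift and reduce to stderr */
    if (!yyparse())
        printf("Compiled !!!\n");
    return 0;
}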
After many potentially helpful changes, the parser worked after changing the order of these tokens.
From
%token T_MAINCLASS T_ID T_PUBLIC T_STATIC T_VOID T_MAIN T_PRINTLN T_INT T_FLOAT T_FOR T_WHILE T_IF T_ELSE T_EQUAL T_SMALLER T_BIGGER T_NOTEQUAL T_NUM T_STRING
to
%token T_MAINCLASS T_PUBLIC T_STATIC T_VOID T_MAIN T_PRINTLN T_INT T_FLOAT T_FOR T_WHILE T_IF T_EQUAL T_ID T_NUM T_SMALLER T_BIGGER T_NOTEQUAL T_ELSE T_STRING
It looked like the parser thought it was reading an else token, while the lexer was actually returning an identifier. Somehow this modification was the solution.
Related
I have to build a compiler that translates the Java language into Python. I'm using the Flex and Bison tools. I created the Flex file and I defined the grammar in Bison for some of the constructs I have to implement (such as arrays, loop handling, class handling, logical-arithmetic operators, etc.).
I'm having trouble understanding how to handle semantic rules. For example, I should handle the semantics for the import statement and variable declarations, add each variable to the symbol table and then handle the translation.
This is the structure of the symbol table in the symboltable.h module:
struct symtable {
    char *scopename;            // key
    struct symtable2 *subtable; // secondary symbol table for this scope
    UT_hash_handle hh;          // makes the structure hashable
};
struct symtable2 {              // secondary symbol structure
    char *name;                 // name of the symbol (key)
    char *element;              // it can be a variable or an array
    char *type;                 // type of the token (int, float, char, bool)
    char *value;                // value assigned to the variable
    int dim;                    // array size; zero in the case of a plain variable
    UT_hash_handle hh;          // makes the structure hashable
};
And this is the add symbol function:
void add_symbol( char *name, char *current_scopename, char *element, char *current_type, char *current_value, int dim, int nr) { //Function to add a new symbol in the symbol table
struct symtable *s;
HASH_FIND_STR(symbols, current_scopename, s);      /* look up the scope by name */
if (s == NULL) {
    s = (struct symtable *)malloc(sizeof *s);
    s->scopename = current_scopename;
    s->subtable = NULL;
    HASH_ADD_KEYPTR(hh, symbols, s->scopename, strlen(s->scopename), s);
}
struct symtable2 *s2;
HASH_FIND_STR(s->subtable, name, s2);              /* look up the symbol inside this scope */
if (s2==NULL) {
s2 = (struct symtable2 *)malloc(sizeof *s2);
s2->name = name;
s2->element = element;
s2->type = current_type;
s2->value = current_value;
s2->dim = dim;
HASH_ADD_KEYPTR(hh,s->subtable,s2->name,strlen(s2->name),s2);
} else {
if (strcmp( s2->type,current_type) == 0){
s2->value =current_value;
} else {
printf("\033[01;31mRiga %i. [FATALE] SEMANTIC ERROR: assignment violates the primitive type of the variable.\033[00m\n", nr);
printf("\n\n\033[01;31mParsing failed.\033[00m\n");
}
}
}
This is part of the Bison file with the grammar that handles the import statement and variable declarations:
%{
#include <stdio.h>
#include <ctype.h>
#include "symboltable.h"
FILE *f_ptr;
%}
%start program
%token NUMBER
%token ID
%token INT
%token FLOAT
%token DOUBLE
%token CHAR
%token IMPORT
%right ASSIGNOP
%left SCOR
%left SCAND
%left EQ NE
%left LT GT LE GE
%left ADD SUB
%left MULT DIV MOD
%right NOT
%left '(' ')' '[' ']'
%%
program
: ImportStatement GlobalVariableDeclarations
;
ImportStatement
: IMPORT LibraryName ';' { delete_file (); f_ptr = open_file (); fprintf(f_ptr, "import array \n"); }
;
LibraryName
: 'java.util.*'
;
GlobalVariableFieldDeclarations
: type GlobalVariableDeclarations ';'
;
GlobalVariableDeclarations
: GlobalVariableDeclaration
| GlobalVariableDeclarations ',' GlobalVariableDeclaration
;
GlobalVariableDeclaration
: VariableName
| VariableName ASSIGNOP VariableInitializer {if (typeChecking($1,$3)== 0) {$1= $3; $$=$1;}}
;
VariableName
: ID {$$ = $1 ;}
;
type
: INT
| CHAR
| FLOAT
| DOUBLE
| BOOLEAN
;
VariableInitializers
: VariableInitializer
| VariableInitializers ',' VariableInitializer
;
VariableInitializer
: ExpressionStatement
;
ExpressionStatement
: VariableName
| NUMBER
| ArithmeticExpression
| RelationalExpression
| BooleanExpression
;
ArithmeticExpression
: ExpressionStatement ADD ExpressionStatement
| ExpressionStatement SUB ExpressionStatement
| ExpressionStatement MULT ExpressionStatement
| ExpressionStatement DIV ExpressionStatement
| ExpressionStatement MOD ExpressionStatement
;
RelationalExpression
: ExpressionStatement GT ExpressionStatement
| ExpressionStatement LT ExpressionStatement
| ExpressionStatement GE ExpressionStatement
| ExpressionStatement LE ExpressionStatement
| ExpressionStatement EQ ExpressionStatement
| ExpressionStatement NE ExpressionStatement
;
BooleanExpression
: ExpressionStatement SCAND ExpressionStatement
| ExpressionStatement SCOR ExpressionStatement
| NOT ExpressionStatement
;
%%
int yyerror (char *s)
{
printf ("%s \n",s);
}
int main (void) {
return yyparse();
}
int typeChecking (variable1, variable2) {
struct symtable2 *s2;
s2=find_symbol (scopename, variable1);
if (s2!=NULL) {
int type1= s2->type;
char element1 = s2->element;
}
else{
printf("\n\n\033[01;31mVariable 1 not defined.\033[00m\n");
return -1;
}
s2=find_symbol (scopename, variable2);
if (s2!=NULL) {
int type2= s2->type;
char element2 = s2->element;
}
else{
printf("\n\n\033[01;31mVariable 2 not defined.\033[00m\n");
return -1;
}
if(element1=='variable' && element2=='variable'){
if (type1 == type2){
return 0;
}
else {
return 1;
}
}
else {
printf("\n\n\033[01;31m Different elements.\033[00m\n");
return -1;
}
}
I am a beginner with Bison's syntax for handling semantics, and I have doubts about the appropriate semantic rules for the following productions:
GlobalVariableFieldDeclarations
: type GlobalVariableDeclarations ';'
;
GlobalVariableDeclarations
: GlobalVariableDeclaration
| GlobalVariableDeclarations ',' GlobalVariableDeclaration
;
GlobalVariableDeclaration
: VariableName
| VariableName ASSIGNOP VariableInitializer {if (typeChecking($1,$3)== 0) {$1= $3; $$=$1;}}
;
VariableName
: ID {$$ = $1 ;}
;
Is it correct to manage the semantics this way for the GlobalVariableDeclaration production? And how can I insert the required parameter values into the symbol table via the add_symbol function? (Or better: how can I obtain the required parameters from the productions so I can pass them to the add_symbol function I implemented?) Forgive me, I am a beginner, and many things about semantics are not yet clear to me. I hope you have the patience to help me; thank you in advance.
You should use Bison to build an AST and then perform semantic analysis on the tree instead of in the grammar. Building an AST allows you to perform analysis on more complex data structures than just the grammar rules you wrote in Bison.
Once you have your AST for the input, you can then write rules for converting that AST into an equivalent Python program.
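A minimal sketch of the idea in C; the node type, the ast_new helper and the %union member below are illustrative, not part of your grammar:
#include <stdlib.h>

/* One node kind per construct you care about; extend this as the grammar grows. */
enum ast_kind { AST_ASSIGN, AST_ADD, AST_ID, AST_NUM };

struct ast {
    enum ast_kind kind;
    char *name;                  /* identifier text, when relevant */
    struct ast *left, *right;
};

static struct ast *ast_new(enum ast_kind kind, struct ast *l, struct ast *r)
{
    struct ast *n = malloc(sizeof *n);
    n->kind = kind;
    n->name = NULL;
    n->left = l;
    n->right = r;
    return n;
}

/* In the .y file, give the relevant nonterminals this type and build nodes
   in the actions instead of type-checking inline:

       %union { struct ast *node; char *str; }
       %type <node> GlobalVariableDeclaration VariableName VariableInitializer

       GlobalVariableDeclaration
           : VariableName ASSIGNOP VariableInitializer
             { $$ = ast_new(AST_ASSIGN, $1, $3); }
           ;

   After yyparse() succeeds, walk the tree: fill the symbol table, run the
   type checks, and finally emit the corresponding Python source. */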
Here is an example of a Bison/Flex compiler for the Decaf language that might give you some ideas: https://github.com/davidcox143/Decaf-Compiler
I have started learning about YACC, and I have worked through a few simple toy examples. But I have never seen a practical example that demonstrates how to build a compiler handling function definitions and calls, arrays and so on, nor has it been easy to find one via a Google search. Can someone please provide an example of how to generate the tree using YACC? C or C++ is fine.
Thanks in advance!
Let's parse this code with yacc.
The file test contains valid C code that we want to parse:
int main (int c, int b) {
int a;
while ( 1 ) {
int d;
}
}
The lex file c.l:
alpha [a-zA-Z]
digit [0-9]
%%
[ \t] ;
[ \n] { yylineno = yylineno + 1;}
int return INT;
float return FLOAT;
char return CHAR;
void return VOID;
double return DOUBLE;
for return FOR;
while return WHILE;
if return IF;
else return ELSE;
printf return PRINTF;
struct return STRUCT;
^"#include ".+ ;
{digit}+ return NUM;
{alpha}({alpha}|{digit})* return ID;
"<=" return LE;
">=" return GE;
"==" return EQ;
"!=" return NE;
">" return GT;
"<" return LT;
"." return DOT;
\/\/.* ;
\/\*(.*\n)*.*\*\/ ;
. return yytext[0];
%%
The file c.y as input to YACC:
%{
#include <stdio.h>
#include <stdlib.h>
extern FILE *fp;
%}
%token INT FLOAT CHAR DOUBLE VOID
%token FOR WHILE
%token IF ELSE PRINTF
%token STRUCT
%token NUM ID
%token INCLUDE
%token DOT
%right '='
%left AND OR
%left '<' '>' LE GE EQ NE LT GT
%%
start: Function
| Declaration
;
/* Declaration block */
Declaration: Type Assignment ';'
| Assignment ';'
| FunctionCall ';'
| ArrayUsage ';'
| Type ArrayUsage ';'
| StructStmt ';'
| error
;
/* Assignment block */
Assignment: ID '=' Assignment
| ID '=' FunctionCall
| ID '=' ArrayUsage
| ArrayUsage '=' Assignment
| ID ',' Assignment
| NUM ',' Assignment
| ID '+' Assignment
| ID '-' Assignment
| ID '*' Assignment
| ID '/' Assignment
| NUM '+' Assignment
| NUM '-' Assignment
| NUM '*' Assignment
| NUM '/' Assignment
| '\'' Assignment '\''
| '(' Assignment ')'
| '-' '(' Assignment ')'
| '-' NUM
| '-' ID
| NUM
| ID
;
/* Function Call Block */
FunctionCall : ID'('')'
| ID'('Assignment')'
;
/* Array Usage */
ArrayUsage : ID'['Assignment']'
;
/* Function block */
Function: Type ID '(' ArgListOpt ')' CompoundStmt
;
ArgListOpt: ArgList
|
;
ArgList: ArgList ',' Arg
| Arg
;
Arg: Type ID
;
CompoundStmt: '{' StmtList '}'
;
StmtList: StmtList Stmt
|
;
Stmt: WhileStmt
| Declaration
| ForStmt
| IfStmt
| PrintFunc
| ';'
;
/* Type Identifier block */
Type: INT
| FLOAT
| CHAR
| DOUBLE
| VOID
;
/* Loop Blocks */
WhileStmt: WHILE '(' Expr ')' Stmt
| WHILE '(' Expr ')' CompoundStmt
;
/* For Block */
ForStmt: FOR '(' Expr ';' Expr ';' Expr ')' Stmt
| FOR '(' Expr ';' Expr ';' Expr ')' CompoundStmt
| FOR '(' Expr ')' Stmt
| FOR '(' Expr ')' CompoundStmt
;
/* IfStmt Block */
IfStmt : IF '(' Expr ')'
Stmt
;
/* Struct Statement */
StructStmt : STRUCT ID '{' Type Assignment '}'
;
/* Print Function */
PrintFunc : PRINTF '(' Expr ')' ';'
;
/*Expression Block*/
Expr:
| Expr LE Expr
| Expr GE Expr
| Expr NE Expr
| Expr EQ Expr
| Expr GT Expr
| Expr LT Expr
| Assignment
| ArrayUsage
;
%%
#include"lex.yy.c"
#include<ctype.h>
int count=0;
int main(int argc, char *argv[])
{
yyin = fopen(argv[1], "r");
if(!yyparse())
printf("\nParsing complete\n");
else
printf("\nParsing failed\n");
fclose(yyin);
return 0;
}
yyerror(char *s) {
printf("%d : %s %s\n", yylineno, s, yytext );
}
A Makefile to put it together. I use flex-lexer and bison but the example will also work with lex and yacc.
miniC: c.l c.y
bison c.y
flex c.l
gcc c.tab.c -ll -ly
Compile and parse the test code:
$ make
bison c.y
flex c.l
gcc c.tab.c -ll -ly
c.tab.c: In function ‘yyparse’:
c.tab.c:1273:16: warning: implicit declaration of function ‘yylex’ [-Wimplicit-function-declaration]
yychar = yylex ();
^
c.tab.c:1402:7: warning: implicit declaration of function ‘yyerror’ [-Wimplicit-function-declaration]
yyerror (YY_("syntax error"));
^
c.y: At top level:
c.y:155:1: warning: return type defaults to ‘int’ [-Wimplicit-int]
yyerror(char *s) {
^
$ ls
a.out c.l CMakeLists.txt c.tab.c c.y lex.yy.c Makefile README.md test
$ ./a.out test
Parsing complete
For reading resources I can recommend the books Modern Compiler Implementation in C by Andrew Appel and the flex/bison book by John Levine.
I have the following problem:
My ANTLR 3 grammar compiles, but my simple test program doesn't work. The grammar is as follows:
grammar Rietse;
options {
k=1;
language=Java;
output=AST;
}
tokens {
COLON = ':' ;
SEMICOLON = ';' ;
OPAREN = '(' ;
CPAREN = ')' ;
COMMA = ',' ;
OCURLY = '{' ;
CCURLY = '}' ;
SINGLEQUOTE = '\'' ;
// operators
BECOMES = '=' ;
PLUS = '+' ;
MINUS = '-' ;
TIMES = '*' ;
DIVIDE = '/' ;
MODULO = '%' ;
EQUALS = '==' ;
LT = '<' ;
LTE = '<=' ;
GT = '>' ;
GTE = '>=' ;
UNEQUALS = '!=' ;
AND = '&&' ;
OR = '||' ;
NOT = '!' ;
// keywords
PROGRAM = 'program' ;
COMPOUND = 'compound' ;
UNARY = 'unary' ;
DECL = 'decl' ;
SDECL = 'sdecl' ;
STATIC = 'static' ;
PRINT = 'print' ;
READ = 'read' ;
IF = 'if' ;
THEN = 'then' ;
ELSE = 'else' ;
DO = 'do' ;
WHILE = 'while' ;
// types
INTEGER = 'int' ;
CHAR = 'char' ;
BOOLEAN = 'boolean' ;
TRUE = 'true' ;
FALSE = 'false' ;
}
@lexer::header {
package Eindopdracht;
}
@header {
package Eindopdracht;
}
// Parser rules
program
: program2 EOF
-> ^(PROGRAM program2)
;
program2
: (declaration* statement)+
;
declaration
: STATIC type IDENTIFIER SEMICOLON -> ^(SDECL type IDENTIFIER)
| type IDENTIFIER SEMICOLON -> ^(DECL type IDENTIFIER)
;
type
: INTEGER
| CHAR
| BOOLEAN
;
statement
: assignment_expr SEMICOLON!
| while_stat SEMICOLON!
| print_stat SEMICOLON!
| if_stat SEMICOLON!
| read_stat SEMICOLON!
;
while_stat
: WHILE^ OPAREN! or_expr CPAREN! OCURLY! statement+ CCURLY! // while (expression) {statement+}
;
print_stat
: PRINT^ OPAREN! or_expr (COMMA! or_expr)* CPAREN! // print(expression)
;
read_stat
: READ^ OPAREN! IDENTIFIER (COMMA! IDENTIFIER)+ CPAREN! // read(expression)
;
if_stat
: IF^ OPAREN! or_expr CPAREN! comp_expr (ELSE! comp_expr)? // if (expression) compound else compound
;
assignment_expr
: or_expr (BECOMES^ or_expr)*
;
or_expr
: and_expr (OR^ and_expr)*
;
and_expr
: compare_expr (AND^ compare_expr)*
;
compare_expr
: plusminus_expr ((LT|LTE|GT|GTE|EQUALS|UNEQUALS)^ plusminus_expr)?
;
plusminus_expr
: timesdivide_expr ((PLUS | MINUS)^ timesdivide_expr)*
;
timesdivide_expr
: unary_expr ((TIMES | DIVIDE | MODULO)^ unary_expr)*
;
unary_expr
: operand
| PLUS operand -> ^(UNARY PLUS operand)
| MINUS operand -> ^(UNARY MINUS operand)
| NOT operand -> ^(UNARY NOT operand)
;
operand
: TRUE
| FALSE
| charliteral
| IDENTIFIER
| NUMBER
| OPAREN! or_expr CPAREN!
;
comp_expr
: OCURLY program2 CCURLY -> ^(COMPOUND program2)
;
// Lexer rules
charliteral
: SINGLEQUOTE! LETTER SINGLEQUOTE!
;
IDENTIFIER
: LETTER (LETTER | DIGIT)*
;
NUMBER
: DIGIT+
;
COMMENT
: '//' .* '\n'
{ $channel=HIDDEN; }
;
WS
: (' ' | '\t' | '\f' | '\r' | '\n')+
{ $channel=HIDDEN; }
;
fragment DIGIT : ('0'..'9') ;
fragment LOWER : ('a'..'z') ;
fragment UPPER : ('A'..'Z') ;
fragment LETTER : LOWER | UPPER ;
// EOF
I then use the following java file to test programs:
package Package;
import java.io.FileInputStream;
import java.io.InputStream;
import org.antlr.runtime.ANTLRInputStream;
import org.antlr.runtime.CommonTokenStream;
import org.antlr.runtime.RecognitionException;
import org.antlr.runtime.tree.BufferedTreeNodeStream;
import org.antlr.runtime.tree.CommonTree;
import org.antlr.runtime.tree.CommonTreeNodeStream;
import org.antlr.runtime.tree.DOTTreeGenerator;
import org.antlr.runtime.tree.TreeNodeStream;
import org.antlr.stringtemplate.StringTemplate;
public class Rietse {
public static void main (String[] args)
{
String inputFile = args[0];
try {
InputStream in = inputFile == null ? System.in : new FileInputStream(inputFile);
RietseLexer lexer = new RietseLexer(new ANTLRInputStream(in));
CommonTokenStream tokens = new CommonTokenStream(lexer);
RietseParser parser = new RietseParser(tokens);
RietseParser.program_return result = parser.program();
} catch (RietseException e) {
System.err.print("ERROR: RietseException thrown by compiler: ");
System.err.println(e.getMessage());
} catch (RecognitionException e) {
System.err.print("ERROR: recognition exception thrown by compiler: ");
System.err.println(e.getMessage());
e.printStackTrace();
} catch (Exception e) {
System.err.print("ERROR: uncaught exception thrown by compiler: ");
System.err.println(e.getMessage());
e.printStackTrace();
}
}
}
And lastly, the test program itself:
print('a');
Now when I run this, I get the following errors:
line 1:7 mismatched input 'a' expecting LETTER
line 1:9 mismatched input ')' expecting LETTER
I have no clue whatsoever what causes this bug. I have tried changing several things but nothing fixed it. Does anyone here know what's wrong with my code and how I can fix it?
Every bit of help is greatly appreciated; thanks in advance.
Greetings,
Rien
Using a rule:
CHARLITERAL
: SINGLEQUOTE (LETTER | DIGIT) SINGLEQUOTE
;
and changing operand to:
operand
: TRUE
| FALSE
| CHARLITERAL
| IDENTIFIER
| NUMBER
| OPAREN! or_expr CPAREN!
;
will fix the problem. It does leave the single quotes in the AST, but that can optionally be fixed by changing the text of the node with the setText(String) method.
Turn charliteral into a lexer rule (rename it to CHARLITERAL). Right now, the string 'a' is tokenized like this: SINGLEQUOTE IDENTIFIER SINGLEQUOTE, so you're getting an IDENTIFIER instead of a LETTER.
I wonder how this code can compile at all given that you're using a fragment (LETTER) from a parser rule.
As a test file, I'm using this piece of code to see whether or not the parser is working well:
/* A test program */
void main(void){
int x;
int y;
int z;
x= 10;
y = 20;
z =x* (x+y);
}
However, I'm getting a syntax error right after the first void, and I don't quite understand why it won't even reach the parameter part. If there are any tips, or if you see something that perhaps I'm not seeing, I'd appreciate it. I know that I have yet to solve the dangling-else problem, but that shouldn't be an issue for this test.
Here's my parser so far:
%{
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
/* external function prototypes */
extern int yylex();
extern int initLex(int , char **);
/* external global variables */
extern int yydebug;
extern int yylineno;
/* function prototypes */
void yyerror(const char *);
/* global variables */
%}
/* YYSTYPE */
/* terminals */
/* Start adding token names here */
/* Your token names must match Project 1 */
/* The file cmparser.tab.h gets generated here */
%token TOK_ERROR
%token TOK_ELSE
%token TOK_IF
%token TOK_RETURN
%token TOK_VOID
%token TOK_INT
%token TOK_WHILE
%token TOK_PLUS
%token TOK_MINUS
%token TOK_MULT
%token TOK_DIV
%token TOK_LT
%token TOK_LE
%token TOK_GT
%token TOK_GE
%token TOK_EQ
%token TOK_NE
%token TOK_ASSIGN
%token TOK_SEMI
%token TOK_COMMA
%token TOK_LPAREN
%token TOK_RPAREN
%token TOK_LSQ
%token TOK_RSQ
%token TOK_LBRACE
%token TOK_RBRACE
%token TOK_NUM
%token TOK_ID
/* associativity and precedence */
/* specify operator precedence (taken care of by the grammar) and associativity here -- uncomment */
//%left
%left '*' '/'
%left '+' '-'
%nonassoc '<' '>' "<=" ">="
%left "==" "!="
%right '='
%nonassoc error
/* Begin your grammar specification here */
%%
Start : /* put your RHS for this rule here */
Declarations
{ printf ("Declaration.\n"); }
; /* note that the rule ends with a semicolon */
Declarations :
/* empty */
{}
| Vardeclaration Declarations
{ printf ("Var-declaration.\n");}
| Fundeclaration Declarations
{ printf ("Fun-declaration.\n");}
;
Vardeclaration :
Typespecifier TOK_ID ';'
{ printf ("Not an array.\n");}
| Typespecifier TOK_ID '[' TOK_NUM ']' ';'
{ printf ("An array.\n");}
;
Typespecifier :
TOK_INT
{ printf ("Type int.\n");}
| TOK_VOID
{ printf("Type void.\n");}
;
Fundeclaration :
Typespecifier TOK_ID '(' Params ')' Compoundstmt
{ printf ("Function Declaration.");}
;
Params :
Paramlist
| TOK_VOID
{ printf ("Void param.\n");}
;
Paramlist :
Paramlist ',' Param
| Param
;
Param :
Typespecifier TOK_ID
| Typespecifier TOK_ID '[' ']'
;
Compoundstmt :
'{' Localdeclaration Statement '}'
;
Localdeclaration :
Vardeclaration Localdeclaration
| /* empty */
;
Statement :
Expressionstmt Statement
| Compoundstmt Statement
| Selectionstmt Statement
| Iterationstmt Statement
| Returnstmt Statement
| /* empty */
;
Expressionstmt :
Expression ';'
| ';'
;
Selectionstmt :
TOK_IF '(' Expression ')' Statement
| TOK_IF '(' Expression ')' Statement TOK_ELSE Statement
;
Iterationstmt :
TOK_WHILE '(' Expression ')' Statement
;
Returnstmt :
TOK_RETURN ';'
| TOK_RETURN Expression ';'
;
Expression :
Var '=' Expression
| SimpleExpression
;
Var :
TOK_ID
| TOK_ID '[' Expression ']'
;
SimpleExpression :
AdditiveExpression Relop AdditiveExpression
| AdditiveExpression
;
Relop :
"<="
| '<'
| '>'
| ">="
| "=="
| "!="
;
AdditiveExpression :
AdditiveExpression Addop Term
| Term
;
Addop :
'+'
| '-'
;
Term :
Term Mulop Factor
| Factor
;
Mulop :
'*'
| '/'
;
Factor :
'(' Expression ')'
| Var
| Call
| TOK_NUM
;
Call :
TOK_ID '(' Args ')'
;
Args :
Arglist
| /* empty */
;
Arglist :
Arglist ',' Expression
| Expression
;
%%
void yyerror (char const *s) {
fprintf (stderr, "Line %d: %s\n", yylineno, s);
}
int main(int argc, char **argv){
initLex(argc,argv);
#ifdef YYLLEXER
while (gettok() !=0) ; //gettok returns 0 on EOF
return;
#else
yyparse();
#endif
}
Your grammar looks OK, so my best guess is that your lexer doesn't recognize the string void as TOK_VOID, but instead returns something else.
Try compiling your parser with -DYYDEBUG and setting yydebug = 1 before calling yyparse to see what it's getting from the lexer.
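Concretely, something along these lines (a sketch that reuses the question's initLex; everything else is standard yacc/bison):
/* build with the debug machinery compiled in, e.g.  gcc -DYYDEBUG=1 cmparser.tab.c ... */
extern int yydebug;

int main(int argc, char **argv)
{
    initLex(argc, argv);
    yydebug = 1;     /* dumps every token received and every parser action to stderr */
    return yyparse();
}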
I am writing a parser for Delphi's .dfm files. The lexer looks like this:
EXP ([Ee][-+]?[0-9]+)
%%
("#"([0-9]{1,5}|"$"[0-9a-fA-F]{1,6})|"'"([^']|'')*"'")+ {
return tkStringLiteral; }
"object" { return tkObjectBegin; }
"end" { return tkObjectEnd; }
"true" { /*yyval.boolean = true;*/ return tkBoolean; }
"false" { /*yyval.boolean = false;*/ return tkBoolean; }
"+" | "." | "(" | ")" | "[" | "]" | "{" | "}" | "<" | ">" | "=" | "," |
":" { return yytext[0]; }
[+-]?[0-9]{1,10} { /*yyval.integer = atoi(yytext);*/ return tkInteger; }
[0-9A-F]+ { return tkHexValue; }
[+-]?[0-9]+"."[0-9]+{EXP}? { /*yyval.real = atof(yytext);*/ return tkReal; }
[a-zA-Z_][0-9A-Z_]* { return tkIdentifier; }
"$"[0-9A-F]+ { /* yyval.integer = atoi(yytext);*/ return tkHexNumber; }
[ \t\r\n] { /* ignore whitespace */ }
. { std::cerr << boost::format("Mystery character %c\n") % *yytext; }
<<EOF>> { yyterminate(); }
%%
and the bison grammar looks like
%token tkInteger
%token tkReal
%token tkIdentifier
%token tkHexValue
%token tkHexNumber
%token tkObjectBegin
%token tkObjectEnd
%token tkBoolean
%token tkStringLiteral
%%
object:
tkObjectBegin tkIdentifier ':' tkIdentifier
property_assignment_list tkObjectEnd
;
property_assignment_list:
property_assignment
| property_assignment_list property_assignment
;
property_assignment:
property '=' value
| object
;
property:
tkIdentifier
| property '.' tkIdentifier
;
value:
atomic_value
| set
| binary_data
| strings
| collection
;
atomic_value:
tkInteger
| tkReal
| tkIdentifier
| tkBoolean
| tkHexNumber
| long_string
;
long_string:
tkStringLiteral
| long_string '+' tkStringLiteral
;
atomic_value_list:
atomic_value
| atomic_value_list ',' atomic_value
;
set:
'[' ']'
| '[' atomic_value_list ']'
;
binary_data:
'{' '}'
| '{' hexa_lines '}'
;
hexa_lines:
tkHexValue
| hexa_lines tkHexValue
;
strings:
'(' ')'
| '(' string_list ')'
;
string_list:
tkStringLiteral
| string_list tkStringLiteral
;
collection:
'<' '>'
| '<' collection_item_list '>'
;
collection_item_list:
collection_item
| collection_item_list collection_item
;
collection_item:
tkIdentifier property_assignment_list tkObjectEnd
;
%%
void yyerror(const char *s, ...) {...}
The problem with this grammar occurs while parsing the binary data. Binary data in the .dfm files is nothing but a sequence of hexadecimal characters which never spans more than 80 characters per line. An example of it is:
Picture.Data = {
055449636F6E0000010001002020000001000800A80800001600000028000000
2000000040000000010008000000000000000000000000000000000000000000
...
FF00000000000000000000000000000000000000000000000000000000000000
00000000FF000000FF000000FF00000000000000000000000000000000000000
00000000}
As you can see, this element lacks any markers, so these lines clash with other lexer rules. In the example above, the first line returns the proper tkHexValue token. The second, however, returns a tkInteger token and the third a tkIdentifier token. So when parsing reaches it, it fails with a syntax error because the grammar expects binary data to be composed only of tkHexValue tokens.
My first workaround was to require integers to have a maximum length (which helped with all but the last line of the binary data). The second was to move the tkHexValue rule above the tkIdentifier rule, but that means I can no longer have identifiers like F0.
I was wondering if there is any way to fix this grammar?
Ok, I solved this one. I needed to define a state so that tkHexValue is only returned while reading binary data. In the preamble part of the lexer I added:
%x BINARY
and modified the following rules:
"{" {BEGIN BINARY; return yytext[0];}
<BINARY>"}" {BEGIN INITIAL; return yytext[0];}
<BINARY>[ \t\r\n] { /* ignore whitespace */ }
And that was all!
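For completeness, the relevant part of the lexer ends up looking roughly like this (a sketch; it assumes the original [0-9A-F]+ rule is the one moved under the BINARY state, so bare hex runs outside the braces no longer shadow identifiers like F0):
%x BINARY
%%
"{"                 { BEGIN BINARY;  return yytext[0]; }
<BINARY>"}"         { BEGIN INITIAL; return yytext[0]; }
<BINARY>[0-9A-F]+   { return tkHexValue; }
<BINARY>[ \t\r\n]   { /* ignore whitespace inside the block */ }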