I'm currently trying to use Rascal to create a small DSL. I tried to modify the Pico example, but I'm currently stuck. The grammar below parses examples like

a = 3, b = 7 begin declare x : natural, field real # cells blubb; x := 5.7 end

perfectly, but the implode function fails with the error message "Cannot find a constructor for PROGRAM". I tried various constructor declarations, but none seemed to fit. Is there a way to see what the expected constructor looks like?
Syntax:
module BlaTest::Syntax
import Prelude;
lexical Identifier = [a-z][a-z0-9]* !>> [a-z0-9];
lexical NaturalConstant = [0-9]+;
lexical IntegerConstant = [\-+]? NaturalConstant;
lexical RealConstant = IntegerConstant "." NaturalConstant;
lexical StringConstant = "\"" ![\"]* "\"";
layout Layout = WhitespaceAndComment* !>> [\ \t\n\r%];
lexical WhitespaceAndComment
= [\ \t\n\r]
| @category="Comment" "%" ![%]+ "%"
| @category="Comment" "%%" ![\n]* $
;
start syntax Program
= program: {ExaOption ","}* exadomain "begin" Declarations decls {Statement ";"}* body "end"
;
syntax Domain = "domain" "{" {ExaOption ","}* exaoptions "}"
;
syntax ExaOption = Identifier id "=" Expression val
;
syntax Declarations
= "declare" {Declaration ","}* decls ";" ;
syntax Declaration
= variable_declaration: Identifier id ":" Type tp
| field_declaration: "field" Type tp "#" FieldLocation fieldLocation Identifier id
;
syntax FieldLocation
= exacell: "cells"
| exanode: "nodes"
;
syntax Type
= natural:"natural"
| exareal: "real"
| string :"string"
;
syntax Statement
= asgStat: Identifier var ":=" Expression val
| ifElseStat: "if" Expression cond "then" {Statement ";"}* thenPart "else" {Statement ";"}* elsePart "fi"
| whileStat: "while" Expression cond "do" {Statement ";"}* body "od"
;
syntax Expression
= id: Identifier name
| stringConstant: StringConstant stringconstant
| naturalConstant: NaturalConstant naturalconstant
| realConstant: RealConstant realconstant
| bracket "(" Expression e ")"
> left conc: Expression lhs "||" Expression rhs
> left ( add: Expression lhs "+" Expression rhs
| sub: Expression lhs "-" Expression rhs
)
;
public start[Program] program(str s) {
  return parse(#start[Program], s);
}

public start[Program] program(str s, loc l) {
  return parse(#start[Program], s, l);
}
Abstract:
module BlaTest::Abstract
public data TYPE = natural() | string() | exareal();
public data FIELDLOCATION = exacell() | exanode();
public alias ExaIdentifier = str;
public data PROGRAM = program(list[OPTION] exadomain, list[DECL] decls, list[STATEMENT] stats);
public data DOMAIN
= domain_declaration(list[OPTION] options)
;
public data OPTION
= exaoption(ExaIdentifier name, EXP exp)
;
public data DECL
= variable_declaration(ExaIdentifier name, TYPE tp)
| field_declaration(TYPE tp, FIELDLOCATION fieldlocation, ExaIdentifier name)
;
public data EXP
= id(ExaIdentifier name)
| naturalConstant(int iVal)
| stringConstant(str sVal)
| realConstant(real rVal)
| add(EXP left, EXP right)
| sub(EXP left, EXP right)
| conc(EXP left, EXP right)
;
public data STATEMENT
= asgStat(ExaIdentifier name, EXP exp)
| ifElseStat(EXP exp, list[STATEMENT] thenpart, list[STATEMENT] elsepart)
| whileStat(EXP exp, list[STATEMENT] body)
;
anno loc TYPE@location;
anno loc PROGRAM@location;
anno loc DECL@location;
anno loc EXP@location;
anno loc STATEMENT@location;
anno loc OPTION@location;
public alias Occurrence = tuple[loc location, ExaIdentifier name, STATEMENT stat];
Load:
module BlaTest::Load
import IO;
import Exception;
import Prelude;
import BlaTest::Syntax;
import BlaTest::Abstract;
import BlaTest::ControlFlow;
import BlaTest::Visualize;
public PROGRAM exaload(str txt) {
  PROGRAM p;
  try {
    p = implode(#PROGRAM, parse(#Program, txt));
  } catch ParseError(loc l): {
    println("Parse error at line <l.begin.line>, column <l.begin.column>");
  }
  return p; // return will fail in case of error
}
public Program exaparse(str txt) {
  Program p;
  try {
    p = parse(#Program, txt);
  } catch ParseError(loc l): {
    println("Parse error at line <l.begin.line>, column <l.begin.column>");
  }
  return p; // return will fail in case of error
}
Thanks a lot,
Chris
Unfortunately the current implode facility depends on a hidden semantic assumption, namely that the non-terminals in the syntax definition have the same name as the types in the data definitions. So if the non-terminal is called "Program", it should not be called "PROGRAM" but "Program" in the data definition.
We are looking for a smoother way of integrating concrete and abstract syntax trees, but for now please decapitalize your data names.
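For this example, that means renaming the abstract types to match the non-terminals. A partial sketch (the remaining types follow the same pattern; field names are free to differ, only type and constructor names must line up):

public data Program = program(list[ExaOption] exadomain, list[Declaration] decls, list[Statement] stats);
public data Declaration
  = variable_declaration(str id, Type tp)
  | field_declaration(Type tp, FieldLocation fieldLocation, str id)
  ;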
Related
I am writing a simple arithmetic expression parser in the Haskell platform's Happy. The Happy tutorial (labeled "Documentation") from the Haskell site implements a similar grammar to what I need. The difference is that I want to include floating point numbers in my expressions and I do not need to define variables (i.e. expressions will not contain "let", "in", "=", or "x" or "y").
When I compile my grammar file with Happy it outputs:
unused terminals: 1
happy: no token type given
CallStack (from HasCallStack):
  error, called at src/AbsSyn.lhs:93:24 in main:AbsSyn
I've searched StackOverflow for questions mentioning "no token type given" and found nothing mentioning this error. I also can't figure out what the "CallStack" trace means. (I'm quite new to Haskell).
I've defined a helper function to tell whether a token is a float:
-- does the lexeme parse as a number?
isNum :: String -> Bool
isNum x = not (null (reads x :: [(Double, String)]))
I've copied the documentation page's grammar file almost exactly, except where I've removed any production rules for variables, "=", or other alphabetic input, and where I've added rules for floating point numbers, i.e.
%token
  int   { TokenInt $$ }
  float { TokenNum $$ }
...
Exp : Expl { Expl $1 }
Expl : Expl '+' Term { Plus $1 $3 }
...
Factor : int { Int $1 }
       | float { Float $1 }
...
data Exp
  = Let String Exp Exp
  | Expl Expl
  deriving Show

data Expl
  = Plus Expl Term
  | Minus Expl Term
  | Term Term
  deriving Show
...
data Token
  = TokenInt Int
  | TokenFloat Float
  | TokenNum Float
  | TokenPlus
...
lexer (c:cs)
  | isSpace c = lexer cs
  | isDigit c = lexNum (c:cs)
lexer ('=':cs) = TokenEq : lexer cs
...
lexNum cs = TokenInt (read num) : lexer rest
  where (num, rest) = span isDigit cs
lexFloat cs = TokenFloat (read num) : lexer rest
  -- span needs a Char predicate, so isNum (a String predicate) cannot be used here
  where (num, rest) = span (\c -> isDigit c || c == '.') cs
That's about it, so far.
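(For reference: Happy reports "no token type given", raised from AbsSyn.lhs exactly as in the call stack above, when the grammar file lacks a %tokentype directive. The documentation's grammar declares one in its header, roughly:)

%name calc
%tokentype { Token }
%error { parseError }

(so it is worth checking that this block survived the edits to the grammar file.)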
I am creating a compiler and am trying to extract line information from the parser. I wish to attach this to the AST node as metadata so that any error at a later point can be reported easily. I was successfully able to extract the line information in the Lexer by using this:
exception LexErr of string
exception ParseErr of string
let error msg start finish =
  Printf.sprintf "(line %d: char %d..%d): %s" start.pos_lnum
    (start.pos_cnum - start.pos_bol) (finish.pos_cnum - finish.pos_bol) msg

let lex_error lexbuf =
  raise (LexErr (error (lexeme lexbuf) (lexeme_start_p lexbuf) (lexeme_end_p lexbuf)))
This generates the line and character numbers for the lexer perfectly when used in this manner:
rule read = parse
(* Lexing tokens *)
| _ { lex_error lexbuf }
For the parser, I am using this method:
exception LexErr of string
exception ParseErr of string
let error msg start finish =
  Printf.sprintf "(line %d: char %d..%d): %s" start.pos_lnum
    (start.pos_cnum - start.pos_bol) (finish.pos_cnum - finish.pos_bol) msg

let parse_error msg nterm =
  raise (ParseErr (error msg (rhs_start_pos nterm) (rhs_end_pos nterm)))
My parser looks like this:
%start <Ast.stmt> program
%%
program:
| s = stmt; EOF { s }
;
stmt:
| TINT; e = expr { Decl(e) }
| e1 = expr; EQUALS; e2 = expr { Assign(e1,e2) }
| error { parse_error "wsorword" 1 }
;
expr:
| i = INT; { Const i }
| x = ID { Var x }
| e1 = expr; b = binop; e2 = expr; { Binop(e1,b,e2) }
;
binop:
| SUM { Sum }
| SUB { Sub }
| MUL { Mul }
| DIV { Div }
;
On running this, if a parser error is detected, it throws the invalid_argument "Index out of bounds" exception. This is raised on the raise (ParseErr (error msg (rhs_start_pos nterm) (rhs_end_pos nterm))) line. I would ultimately like to create an AST node which contains this parser line information as its metadata, but I can't get past this exception. I'm not sure whether my method of implementation is wrong or whether I'm making some other mistake. Would love some help on this.
The function rhs_start_pos cannot be used with menhir parsers; in this case, you should use $symbolstartpos or $startpos.
Similarly, e = expr is not valid with ocamlyacc.
Thus, I am not sure which parser generator you are trying to use.
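With menhir, positions are available as keywords inside semantic actions. A minimal sketch, assuming the Ast.stmt constructors are extended to carry a location:

stmt:
  | TINT; e = expr
    { let loc = ($startpos, $endpos) in
      (* $startpos and $endpos are Lexing.position values spanning this production *)
      Decl (e, loc) }
;

$symbolstartpos differs from $startpos only in how leading empty (epsilon-producing) symbols are treated.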
I've been using regexes to go through a pile of Verilog files and pull out certain statements. Currently, regexes are fine for this; however, I'm starting to get to the point where a real parser is going to be needed in order to deal with nested structures, so I'm investigating ocamllex/ocamlyacc. I'd like to first duplicate what I've got in my regex implementation and then slowly add more to the grammar.
Right now I'm mainly interested in pulling out module declarations and instantiations. To keep this question a bit more brief, let's look at module declarations only.
In Verilog a module declaration looks like:
module modname ( ...other statements ) endmodule;
My current regex implementation simply checks that there is a module declared with a particular name ( checking against a list of names that I'm interested in - I don't need to find all module declarations just ones with certain names). So basically, I get each line of the Verilog file I want to parse and do a match like this (pseudo-OCaml with Pythonish and Rubyish elements ):
foreach file in list_of_files:
    let found_mods = Hashtbl.create 17;
    open file
    foreach line in file:
        foreach modname in modlist:
            let mod_patt = Str.regexp ("module"^space^"+"^modname^"\\("^space^"+\\|(\\)") in
            try
                Str.search_forward (mod_patt) line 0
                found_mods[file] = modname; (* map filename to modname *)
            with Not_found -> ()
That works great. The module declaration can occur anywhere in the Verilog file; I just want to find out whether the file contains that particular declaration, and I don't care about what else may be in that file.
My first attempt at converting this over to ocamllex/ocamlyacc:
verLexer.mll:
rule lex = parse
| [' ' '\n' '\t'] { lex lexbuf }
| ['0'-'9']+ as s { INT(int_of_string s) }
| '(' { LPAREN }
| ')' { RPAREN }
| "module" { MODULE }
| ['A'-'Z''a'-'z''0'-'9''_']+ as s { IDENT(s) }
| eof { EOF }
verParser.mly:
%{ type expr = Module of expr | Ident of string | Int of int %}
%token <int> INT
%token <string> IDENT
%token LPAREN RPAREN MODULE EOF
%start expr1
%type <expr> expr1
%%
expr:
| MODULE IDENT LPAREN { Module( Ident $2) };
expr1:
| expr EOF { $1 };
Then trying it out in the REPL:
# #use "verLexer.ml" ;;
# #use "verParser.ml" ;;
# expr1 lex (Lexing.from_string "module foo (" ) ;;
- : expr = Module (Ident "foo")
That's great, it works!
However, a real Verilog file will have more than a module declaration in it:
# expr1 lex (Lexing.from_string "//comment\nmodule foo ( \nstuff" ) ;;
Exception: Failure "lexing: empty token".
I don't really care about what appears before or after that module definition; is there a way to use just that part of the grammar to determine that the Verilog file contains the 'module foo (' statement? Yes, I realize that regexes are working fine for this, but as stated above, I plan to grow this grammar slowly and add more elements to it, and regexes will start to break down.
EDIT: I added a match any char to the lex rule:
| _ { lex lexbuf }
Thinking that it would skip any characters that weren't matched so far, but that didn't seem to work:
# expr1 lex (Lexing.from_string "fof\n module foo (\n" ) ;;
Exception: Parsing.Parse_error.
First, a minute of advertisement: instead of ocamlyacc you should consider using François Pottier's Menhir, which is like an upgraded yacc, better in all aspects (more readable grammars, more powerful constructs, easier to debug...) while still very similar. It can of course be used in combination with ocamllex.
Your expr1 rule only allows the input to begin and end with an expr. You should enlarge it to allow "stuff" before or after the expr. Something like:
junk:
| (* empty *)
| junk LPAREN
| junk RPAREN
| junk INT
| junk IDENT
expr1:
| junk expr junk EOF
Note that this grammar does not allow the module token to appear in the junk section. Doing so would be a bit problematic, as it would make the grammar ambiguous (the structure you're looking for could be embedded either in expr or junk). If you could have a module token occurring outside the form you're looking for, you should consider changing the lexer to capture the whole "module ident (" structure of interest in a single token, so that it can be atomically matched by the grammar. In the long term, however, having finer-grained tokens is probably better.
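A rough sketch of that coarse-token idea in ocamllex (the MODULE_DECL token is invented here for illustration and would need to be declared in the parser):

rule lex = parse
  (* capture the whole "module ident (" structure in a single token *)
  | "module" [' ' '\t' '\n']+ (['A'-'Z' 'a'-'z' '0'-'9' '_']+ as name)
    [' ' '\t' '\n']* '(' { MODULE_DECL name }
  | _ { lex lexbuf }
  | eof { EOF }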
As suggested by @gasche, I tried menhir and am already getting much better results. I changed verLexer.mll to:
{
open VerParser
}
rule lex = parse
| [' ' '\n' '\t'] { lex lexbuf }
| ['0'-'9']+ as s { INT(int_of_string s) }
| '(' { LPAREN }
| ')' { RPAREN }
| "module" { MODULE }
| ['A'-'Z''a'-'z''0'-'9''_']+ as s { IDENT(s) }
| _ as c { lex lexbuf }
| eof { EOF }
And changed verParser.mly to:
%{ type expr = Module of expr | Ident of string | Int of int
   | Lparen | Rparen | Junk %}
%token <int> INT
%token <string> IDENT
%token LPAREN RPAREN MODULE EOF
%start expr1
%type <expr> expr1
%%
expr:
| MODULE IDENT LPAREN { Module( Ident $2) };
junk:
| LPAREN { }
| RPAREN { }
| INT { }
| IDENT { } ;
expr1:
| junk* expr junk* EOF { $2 };
The key here is that menhir allows a rule to be parameterized with '*', as in the line above where 'junk*' means match junk zero or more times; ocamlyacc doesn't seem to allow that.
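(In fact, junk* is menhir shorthand for the standard-library parameterized rule list(junk), so the last rule could equivalently be written as:)

expr1:
| list(junk) expr list(junk) EOF { $2 };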
Now when I tried it in the REPL I get:
# #use "verParser.ml" ;;
# #use "verLexer.ml" ;;
# expr1 lex (Lexing.from_string "module foo ( " ) ;;
- : expr = Module (Ident "foo")
# expr1 lex (Lexing.from_string "some module foo ( " ) ;;
- : expr = Module (Ident "foo")
# expr1 lex (Lexing.from_string "some module foo (\nbar " ) ;;
- : expr = Module (Ident "foo")
# expr1 lex (Lexing.from_string "some module foo (\n//comment " ) ;;
- : expr = Module (Ident "foo")
# expr1 lex (Lexing.from_string "some module fot foo (\n//comment " ) ;;
Exception: Error.
# expr1 lex (Lexing.from_string "some module foo (\n//comment " ) ;;
Which seems to work exactly as I want it to.
I'm looking for a bibtex grammar in ANTLR to use in a hobby project. I don't want to spend my time for writing ANTLR grammar (this may take some time for me because it will involve a learning curve). So I'd appreciate for any pointers.
Note: I've found bibtex grammars for bison and yacc but couldn't find any for antlr.
Edit: As Bart pointed out, I don't need to parse the preambles and TeX in the quoted strings.
Here's a (very) rudimentary BibTex grammar that emits an AST (as opposed to a simple parse tree):
grammar BibTex;
options {
output=AST;
ASTLabelType=CommonTree;
}
tokens {
BIBTEXFILE;
TYPE;
STRING;
PREAMBLE;
COMMENT;
TAG;
CONCAT;
}
//////////////////////////////// Parser rules ////////////////////////////////
parse
: (entry (Comma? entry)* Comma?)? EOF -> ^(BIBTEXFILE entry*)
;
entry
: Type Name Comma tags CloseBrace -> ^(TYPE Name tags)
| StringType Name Assign QuotedContent CloseBrace -> ^(STRING Name QuotedContent)
| PreambleType content CloseBrace -> ^(PREAMBLE content)
| CommentType -> ^(COMMENT CommentType)
;
tags
: (tag (Comma tag)* Comma?)? -> tag*
;
tag
: Name Assign content -> ^(TAG Name content)
;
content
: concatable (Concat concatable)* -> ^(CONCAT concatable+)
| Number
| BracedContent
;
concatable
: QuotedContent
| Name
;
//////////////////////////////// Lexer rules ////////////////////////////////
Assign
: '='
;
Concat
: '#'
;
Comma
: ','
;
CloseBrace
: '}'
;
QuotedContent
: '"' (~('\\' | '{' | '}' | '"') | '\\' . | BracedContent)* '"'
;
BracedContent
: '{' (~('\\' | '{' | '}') | '\\' . | BracedContent)* '}'
;
StringType
: '@' ('s'|'S') ('t'|'T') ('r'|'R') ('i'|'I') ('n'|'N') ('g'|'G') SP? '{'
;
PreambleType
: '@' ('p'|'P') ('r'|'R') ('e'|'E') ('a'|'A') ('m'|'M') ('b'|'B') ('l'|'L') ('e'|'E') SP? '{'
;
CommentType
: '@' ('c'|'C') ('o'|'O') ('m'|'M') ('m'|'M') ('e'|'E') ('n'|'N') ('t'|'T') SP? BracedContent
| '%' ~('\r' | '\n')*
;
Type
: '@' Letter+ SP? '{'
;
Number
: Digit+
;
Name
: Letter (Letter | Digit | ':' | '-')*
;
Spaces
: SP {skip();}
;
//////////////////////////////// Lexer fragments ////////////////////////////////
fragment Letter
: 'a'..'z'
| 'A'..'Z'
;
fragment Digit
: '0'..'9'
;
fragment SP
: (' ' | '\t' | '\r' | '\n' | '\f')+
;
(if you don't want the AST, remove all -> and everything to the right of it and remove both the options{...} and tokens{...} blocks)
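For example, stripped of its rewrite rules, the entry rule above reduces to:

entry
  : Type Name Comma tags CloseBrace
  | StringType Name Assign QuotedContent CloseBrace
  | PreambleType content CloseBrace
  | CommentType
  ;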
The grammar can be tested with the following class:
import org.antlr.runtime.*;
import org.antlr.runtime.tree.*;
import org.antlr.stringtemplate.*;

public class Main {
    public static void main(String[] args) throws Exception {
        // parse the file 'test.bib'
        BibTexLexer lexer = new BibTexLexer(new ANTLRFileStream("test.bib"));
        BibTexParser parser = new BibTexParser(new CommonTokenStream(lexer));
        // you can use the following tree in your code
        // see: http://www.antlr.org/api/Java/classorg_1_1antlr_1_1runtime_1_1tree_1_1_common_tree.html
        CommonTree tree = (CommonTree)parser.parse().getTree();
        // print a DOT tree of our AST
        DOTTreeGenerator gen = new DOTTreeGenerator();
        StringTemplate st = gen.toDOT(tree);
        System.out.println(st);
    }
}
and the following example Bib-input (file: test.bib):
@PREAMBLE{
  "\newcommand{\noopsort}[1]{} "
  # "\newcommand{\singleletter}[1]{#1} "
}

@string {
  me = "Bart Kiers"
}

@ComMENt{some comments here}
% or some comments here
@article{mrx05,
  auTHor = me # "Mr. X",
  Title = {Something Great},
  publisher = "nob" # "ody",
  YEAR = 2005,
  x = {{Bib}\TeX},
  y = "{Bib}\TeX",
  z = "{Bib}" # "\TeX",
},
@misc{ patashnik-bibtexing,
  author = "Oren Patashnik",
  title = "BIBTEXing",
  year = "1988"
} % no comma here

@techreport{presstudy2002,
  author = "Dr. Diessen, van R. J. and Drs. Steenbergen, J. F.",
  title = "Long {T}erm {P}reservation {S}tudy of the {DNEP} {P}roject",
  institution = "IBM, National Library of the Netherlands",
  year = "2002",
  month = "December",
}
Run the demo
If you now generate a parser & lexer from the grammar:
java -cp antlr-3.3.jar org.antlr.Tool BibTex.g
and compile all .java source files:
javac -cp antlr-3.3.jar *.java
and finally run the Main class:
*nix/MacOS
java -cp .:antlr-3.3.jar Main
Windows
java -cp .;antlr-3.3.jar Main
You'll see some output on your console which corresponds to the following AST:
(AST image generated with graphviz-dev.appspot.com)
To emphasize: I did not properly test the grammar! I wrote it a while back and never really used it in any project.
I'm parsing CoCo/R grammars in a utility to automate CoCo -> ANTLR translation. The core ANTLR grammar is:
rule : ID '=' expression '.' ;
expression
: term ('|' term)*
-> ^( OR_EXPR term term* )
;
term
: (factor (factor)*)? ;
factor
: symbol
| '(' expression ')'
-> ^( GROUPED_EXPR expression )
| '[' expression ']'
-> ^( OPTIONAL_EXPR expression)
| '{' expression '}'
-> ^( SEQUENCE_EXPR expression)
;
symbol
: IF_ACTION
| ID (ATTRIBUTES)?
| STRINGLITERAL
;
My problem is with constructions such as these:
CS = { ExternAliasDirective }
{ UsingDirective }
EOF .
CS results in an AST with an OR_EXPR node although no '|' character actually appears. I'm sure this is due to the definition of expression, but I cannot see any other way to write the rules.
I did experiment with this to resolve the ambiguity.
// explicitly test for the presence of an '|' character
expression
@init { bool ored = false; }
  : term { ored = (input.LT(1).Type == OR); } (OR term)*
    -> {ored}? ^(OR_EXPR term term*)
    -> ^(LIST term term*)
  ;
It works but the hack reinforces my conviction that something fundamental is wrong.
Any tips much appreciated.
Your rule:
expression
: term ('|' term)*
-> ^( OR_EXPR term term* )
;
always causes the rewrite rule to create a tree with a root of type OR_EXPR. You can create "sub rewrite rules" like this:
expression
: (term -> REWRITE_RULE_X) ('|' term -> ^(REWRITE_RULE_Y))*
;
And to resolve the ambiguity in your grammar, it's easiest to enable global backtracking which can be done in the options { ... } section of your grammar.
A quick demo:
grammar CocoR;
options {
output=AST;
backtrack=true;
}
tokens {
RULE;
GROUP;
SEQUENCE;
OPTIONAL;
OR;
ATOMS;
}
parse
: rule EOF -> rule
;
rule
: ID '=' expr* '.' -> ^(RULE ID expr*)
;
expr
: (a=atoms -> $a) ('|' b=atoms -> ^(OR $expr $b))*
;
atoms
: atom+ -> ^(ATOMS atom+)
;
atom
: ID
| '(' expr ')' -> ^(GROUP expr)
| '{' expr '}' -> ^(SEQUENCE expr)
| '[' expr ']' -> ^(OPTIONAL expr)
;
ID
: ('a'..'z' | 'A'..'Z') ('a'..'z' | 'A'..'Z' | '0'..'9')*
;
Space
: (' ' | '\t' | '\r' | '\n') {skip();}
;
with input:
CS = { ExternAliasDirective }
{ UsingDirective }
EOF .
produces the AST:
and the input:
foo = a | b ({c} | d [e f]) .
produces:
The class to test this:
import org.antlr.runtime.*;
import org.antlr.runtime.tree.*;
import org.antlr.stringtemplate.*;

public class Main {
    public static void main(String[] args) throws Exception {
        /*
        String source =
            "CS = { ExternAliasDirective } \n" +
            "{ UsingDirective } \n" +
            "EOF . ";
        */
        String source = "foo = a | b ({c} | d [e f]) .";
        ANTLRStringStream in = new ANTLRStringStream(source);
        CocoRLexer lexer = new CocoRLexer(in);
        CommonTokenStream tokens = new CommonTokenStream(lexer);
        CocoRParser parser = new CocoRParser(tokens);
        CocoRParser.parse_return returnValue = parser.parse();
        CommonTree tree = (CommonTree)returnValue.getTree();
        DOTTreeGenerator gen = new DOTTreeGenerator();
        StringTemplate st = gen.toDOT(tree);
        System.out.println(st);
    }
}
and with the output this class produces, I used the following website to create the AST-images: http://graph.gafol.net/
HTH
EDIT
To account for epsilon (empty string) in your OR expressions, you might try something (quickly tested!) like this:
expr
  : (a=atoms -> $a) ( '|' b=atoms -> ^(OR $expr $b)
                    | '|' -> ^(OR $expr NOTHING)
                    )*
  ;
which parses the source:
foo = a | b | .
into the following AST:
The production for expression explicitly says that it can only return an OR_EXPR node. You can try something like:
expression
  : term
  | term ('|' term)+ -> ^(OR_EXPR term term*)
  ;
Further down, you could use:
term
: factor*;