I am able to parse the following valid SQL expression:
(select 1 limit 1) union all select 1 union all (select 2);
However, there seems to be an ambiguity in it, which I've been having a lot of trouble resolving. Here is a working version of the program (I've obviously cut down the statements just to create a minimal reproducible question) --
parser grammar DBParser;
options { tokenVocab = DBLexer;}
root
: selectStatement SEMI? EOF
;
selectStatement
: (selectStatementWithParens|selectClause|setOperation)
limitClause?
;
selectClause
: SELECT NUMBER
;
limitClause
: LIMIT NUMBER
;
selectStatementWithParens
: OPEN_PAREN selectStatement CLOSE_PAREN
;
setOperation:
(selectClause | selectStatementWithParens)
(setOperand (selectClause | selectStatementWithParens))*
;
setOperand
: UNION ALL?
;
lexer grammar DBLexer;
options { caseInsensitive=true; }
SELECT : 'SELECT'; // SELECT *...
LIMIT : 'LIMIT'; // ORDER BY x LIMIT 20
ALL : 'ALL'; // SELECT ALL vs. SELECT DISTINCT; WHERE ALL (...); UNION ALL...
UNION : 'UNION'; // Set operation
SEMI : ';'; // Statement terminator
OPEN_PAREN : '('; // Function calls, object declarations
CLOSE_PAREN : ')';
NUMBER
: [0-9]+
;
WHITESPACE
: [ \t\r\n] -> skip
;
Where is the ambiguity coming from, and what could be a possible way to solve this?
Update: I'm not sure if this is the solution, but the following seems to have eliminated that ambiguity:
selectStatement:
withClause?
(selectStatementWithParens|selectClause)
(setOperand (selectClause|selectStatementWithParens))*
orderClause?
(limitClause offsetClause?)?
;
In other words -- making it so that the setOperand doesn't re-start with the select.
Your setOperation rule can match a single selectClause or selectStatementWithParens (because you use the * cardinality for the second half of the rule, so zero instances of the second half still match the rule). This means that a selectClause can match the selectClause rule in selectStatement, or it could be used to construct a setOperation (which is the other alternative in your ambiguity).
If you change setOperation to use + cardinality for the second half of the rule, you resolve the ambiguity.
setOperation
: (selectClause | selectStatementWithParens)
(setOperand (selectClause | selectStatementWithParens))+
;
This also seems logical, that you'd only want to consider something a setOperation if there's a setOperand involved.
That explains and corrects the ambiguity, but still leaves you with a "max k" of 7.
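The difference the + cardinality makes can be sketched outside ANTLR with a small hand-rolled parser. This is plain Python, not generated code, and the tuple-based tree shape is purely illustrative: a parse only becomes a set operation when at least one UNION operand has actually been consumed.

```python
import re

# Hand-rolled sketch (plain Python, NOT ANTLR output) of the disambiguated
# grammar: a parse only becomes a set operation when at least one UNION
# operand is consumed, mirroring the `+` cardinality fix.

TOKEN = re.compile(r"(select|union|all|limit|\(|\)|;|\d+)", re.I)

def tokenize(sql):
    return [t.lower() for t in TOKEN.findall(sql)]

def parse_operand(toks):
    # selectClause | selectStatementWithParens
    if toks[0] == "(":
        toks.pop(0)
        node = parse_statement(toks)
        assert toks.pop(0) == ")"
        return node
    assert toks.pop(0) == "select"
    return ("select", toks.pop(0))

def parse_statement(toks):
    operands = [parse_operand(toks)]
    while toks and toks[0] == "union":          # setOperand ...
        toks.pop(0)
        if toks and toks[0] == "all":
            toks.pop(0)
        operands.append(parse_operand(toks))
    # only a setOperation if a setOperand actually occurred (the `+` fix)
    node = ("union", operands) if len(operands) > 1 else operands[0]
    if toks and toks[0] == "limit":             # limitClause?
        toks.pop(0)
        node = ("limit", node, toks.pop(0))
    return node

tree = parse_statement(
    tokenize("(select 1 limit 1) union all select 1 union all (select 2);"))
```

Because a lone selectClause never reaches the "union" branch, there is exactly one way to parse it, which is the same property the + cardinality gives the grammar.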
I'm parsing a script language that defines two types of statements; control statements and non control statements. Non control statements are always ended with ';', while control statements may end with ';' or EOL ('\n'). A part of the grammar looks like this:
script
: statement* EOF
;
statement
: control_statement
| no_control_statement
;
control_statement
: if_then_control_statement
;
if_then_control_statement
: IF expression THEN end_control_statment
( statement ) *
( ELSEIF expression THEN end_control_statment ( statement )* )*
( ELSE end_control_statment ( statement )* )?
END IF end_control_statment
;
no_control_statement
: sleep_statement
;
sleep_statement
: SLEEP expression END_STATEMENT
;
end_control_statment
: END_STATEMENT
| EOL
;
END_STATEMENT
: ';'
;
ANY_SPACE
: ( LINE_SPACE | EOL ) -> channel(HIDDEN)
;
EOL
: [\n\r]+
;
LINE_SPACE
: [ \t]+
;
In all other aspects of the script language, I never care about EOL so I use the normal lexer rules to hide white space.
This works fine in all cases except those where I need an EOL to terminate a control statement: with the grammar above, every EOL is hidden and cannot be used in the control statement rules.
Is there a way to change my grammar so that I can skip all EOLs except the ones needed to terminate parts of my control statements?
Found one way to handle this.
The idea is to divert EOL into one hidden channel and the other stuff I don't want to see (like spaces and comments) into another hidden channel. Then I use some code to backtrack through the tokens when an EOL is supposed to show up and examine the channels of the previous tokens (since they have already been consumed). If I find something on the EOL channel before I run into something from the ordinary channel, then it is OK.
It looks like this:
Changed the lexer rules:
#lexer::members {
public static int EOL_CHANNEL = 1;
public static int OTHER_CHANNEL = 2;
}
...
EOL
: '\r'? '\n' -> channel(EOL_CHANNEL)
;
LINE_SPACE
: [ \t]+ -> channel(OTHER_CHANNEL)
;
I also diverted all other HIDDEN channels (comments) to the OTHER_CHANNEL.
Then I changed the rule end_control_statment:
end_control_statment
: END_STATEMENT
| { isEOLPrevious() }?
;
and added
#parser::members {
public static int EOL_CHANNEL = 1;
public static int OTHER_CHANNEL = 2;
boolean isEOLPrevious()
{
int idx = getCurrentToken().getTokenIndex();
int ch;
do
{
ch = getTokenStream().get(--idx).getChannel();
}
while (ch == OTHER_CHANNEL);
// Channel 1 is only carrying EOL, no need to check token itself
return (ch == EOL_CHANNEL);
}
}
One could stick to the ordinary hidden channel, but then you would need to track both channels and tokens while backtracking, so this is maybe a bit easier...
Hope this helps someone else dealing with these kinds of issues...
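The backtracking in isEOLPrevious() can be prototyped in plain Python. The Token class and channel constants below are stand-ins for the ANTLR runtime types, not real API: walk backwards from the current token, skip everything on OTHER_CHANNEL, and succeed only if the first token found on another channel sits on EOL_CHANNEL.

```python
# Stand-in channel constants (mirroring the grammar's members block).
DEFAULT_CHANNEL = 0
EOL_CHANNEL = 1
OTHER_CHANNEL = 2

class Token:
    """Minimal stand-in for an ANTLR token: just text and a channel."""
    def __init__(self, text, channel):
        self.text = text
        self.channel = channel

def is_eol_previous(tokens, current_index):
    """Scan backwards from the token before current_index, skipping
    OTHER_CHANNEL tokens; True if the first non-skipped token is an EOL."""
    idx = current_index
    while True:
        idx -= 1
        ch = tokens[idx].channel
        if ch != OTHER_CHANNEL:
            break
    # EOL_CHANNEL only ever carries EOL tokens, so the channel check suffices
    return ch == EOL_CHANNEL

# "IF x THEN\n  SLEEP" -- the EOL after THEN terminates the control header,
# even though a run of spaces (OTHER_CHANNEL) sits between EOL and SLEEP.
stream = [Token("THEN", DEFAULT_CHANNEL),
          Token("\n", EOL_CHANNEL),
          Token("  ", OTHER_CHANNEL),
          Token("SLEEP", DEFAULT_CHANNEL)]
```

The key design point carried over from the answer: because only EOL tokens are ever routed to EOL_CHANNEL, the predicate never has to inspect token text, only channels.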
I'm trying to build an MVS JCL recognizer using ANTLR 4. The general endeavour is going reasonably well, but I am having trouble handling the MVS equivalent of *nix "here docs" (inline files). I cannot use lexer modes to flip-flop between JCL and here-doc content, so I am looking for alternatives that I might use at parser level.
IBM MVS allows the use of "instream datasets", similar to *nix here-docs.
Example:
This defines a three-line inline file, terminated by the characters "ZZ" and accessible to a referencing program using the label "ANYNAME":
//ANYNAME DD *,SYMBOLS=(JCLONLY,FILEREF),DLM=ZZ
HEREDOC TEXT 1
HEREDOC TEXT 2
HEREDOC TEXT 3
ZZ
//NEXTFILE DD ...stuff...
ANYNAME is a handle by which a program can access the here-doc content.
DD * is mandatory and informs MVS that a here-doc follows.
SYMBOLS=(JCLONLY,FILEREF) is optional detail relating to how the here-doc is handled.
DLM=ZZ is also optional and defines the here-doc terminator (default terminator = /*).
I need to be able, at parser level, to process the //ANYNAME... line (I have that bit), then to read the here-doc content until I find the (possibly non-default) here-doc terminator. In a sense, this looks like a lexer-modes opportunity, but at this point I am working within the parser and I do not have a fixed terminator to work with.
I need guidance on how to switch modes to handle my here-doc, then switch back again to continue processing my JCL.
A hugely abridged version of my grammar follows (the actual grammar, so far, is about 2,200 lines and is incomplete).
Thanks for any insights. I appreciate your help, comments and suggestions.
/* the ddstmt parser rule should be considered the main entry point. It handles (at least):
//ANYNAME DD *,SYMBOLS=(JCLONLY,FILEREF),DLM=ZZ
and // DD *,DLM=ZZ
and //ANYNAME DD *,SYMBOLS=EXECSYS
and //ANYNAME DD *
I need to be able process the above line as JCL then read the here-doc content...
"HEREDOC TEXT 1"
"HEREDOC TEXT 2"
"HEREDOC TEXT 3"
as either a single token or a series of tokens, then, after reading the here-doc
delimiter...
"ZZ"
, go back to processing regular JCL again.
*/
/* lexer rules: */
LINECOMMENT3 : SLASH SLASH STAR ;
DSLASH : SLASH SLASH ;
INSTREAMTERMINATE : SLASH STAR ;
SLASH : '/' ;
STAR : '*' ;
OPAREN : '(' ;
CPAREN : ')' ;
COMMA : ',' ;
KWDD : 'DD' ;
KWDLM : 'DLM' ;
KWSYMBOLS : 'SYMBOLS' ;
KWDATA : 'DATA' ;
SYMBOLSTARGET : 'JCLONLY'|'EXECSYS'|'CNVTSYS' ;
EQ : '=' ;
APOST : '\'' ;
fragment
SPC : ' ' ;
SPCS : SPC+ ;
NL : ('\r'? '\n') ;
UNQUOTEDTEXT : (APOST APOST|~[=\'\"\r\n\t,/() ])+ ;
/* parser rules: */
label : unquotedtext
;
separator : SPCS
;
/* handle crazy JCL comment rules - start */
partcomment : SPCS partcommenttext NL
;
partcommenttext : ((~NL+?)?)
;
linecomment : LINECOMMENT3 linecommenttext NL
;
linecommenttext : ((~NL+?)?)
;
postcommaeol : ( (partcomment|NL) linecomment* DSLASH SPCS )?
;
poststmteol : ( (partcomment|NL) linecomment* )?
;
/* handle crazy JCL comment rules - end */
ddstmt : DSLASH (label|) separator KWDD separator dddecl
;
dddecl : ...
| ddinstreamdecl
| ...
;
ddinstreamdecl : (STAR|KWDATA) poststmteol ddinstreamopts
;
ddinstreamopts : ( COMMA postcommaeol ddinstreamopt poststmteol )*
;
ddinstreamopt : ( ddinstreamdelim
| symbolsdecl
)
;
ddinstreamdelim : KWDLM EQ unquotedtext
;
symbolsdecl : KWSYMBOLS EQ symbolsdef
;
symbolsdef : OPAREN symbolstarget ( COMMA symbolsloggingdd )? CPAREN
| symbolstarget
;
symbolstarget : SYMBOLSTARGET
;
symbolsloggingdd : unquotedtext
;
unquotedtext : UNQUOTEDTEXT
;
Your lexer needs to be able to tokenize the entire document prior to the beginning of the parsing operation. Any attempt to control the lexer from within the parser is a recipe for endless nightmares down the road. The following fragments of a PHP Lexer show how predicates can be used in combination with lexer modes to detect the end of a string with a user-defined delimiter. The key part is recording the start delimiter, and then checking tokens which start at the beginning of the line against it.
PHP_NOWDOC_START
: '<<<\'' PHP_IDENTIFIER '\'' {_input.La(1) == '\r' || _input.La(1) == '\n'}?
-> pushMode(PhpNowDoc)
;
mode PhpNowDoc;
PhpNowDoc_NEWLINE : NEWLINE -> type(NEWLINE);
PHP_NOWDOC_END
: {_input.La(-1) == '\n'}?
PHP_IDENTIFIER ';'?
{CheckHeredocEnd(_input.La(1), Text);}?
-> popMode
;
PHP_NOWDOC_TEXT
: ~[\r\n]+
;
The identifier is actually recorded in a custom override of NextToken() (shown here for a C# target):
public override IToken NextToken()
{
IToken token = base.NextToken();
switch (token.Type)
{
case PHP_NOWDOC_START:
// <<<'identifier'
_heredocIdentifier = token.Text.Substring(3).Trim('\'');
break;
case PHP_NOWDOC_END:
_heredocIdentifier = null;
break;
default:
break;
}
return token;
}
private bool CheckHeredocEnd(int la1, string text)
{
// identifier
// - or -
// identifier;
bool semi = text[text.Length - 1] == ';';
string identifier = semi ? text.Substring(0, text.Length - 1) : text;
return string.Equals(identifier, HeredocIdentifier, StringComparison.Ordinal);
}
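Detached from ANTLR, the delimiter check reduces to: remember the delimiter announced by the start line, then treat a subsequent line as the terminator only when it matches that delimiter (optionally followed by ';', as in the PHP case above). A plain-Python sketch of that idea, with illustrative names:

```python
def split_heredoc(lines, delimiter):
    """Return (body_lines, remaining_lines) for a user-delimited here-doc.

    A line terminates the here-doc when, after stripping the newline and an
    optional trailing ';', it equals the recorded delimiter -- the same test
    CheckHeredocEnd performs against the stored identifier.
    """
    body = []
    for i, line in enumerate(lines):
        text = line.rstrip("\n")
        candidate = text[:-1] if text.endswith(";") else text
        if candidate == delimiter:
            return body, lines[i + 1:]
        body.append(text)
    raise ValueError("unterminated here-doc")

# DLM=ZZ example from the JCL question: body ends at the "ZZ" line.
body, rest = split_heredoc(
    ["HEREDOC TEXT 1\n", "HEREDOC TEXT 2\n", "ZZ\n", "//NEXTFILE DD ...\n"],
    "ZZ")
```

The same function covers the default terminator by passing "/*" as the delimiter.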
Following is my ANTLR 3 grammar. I want to strip off the content inside HTML tags.
The problem arises when I have the operators < and > inside the tag.
How can this be handled?
grammar T;
options {
output=AST;
}
tokens {
ROOT;
}
parse
: text+ ;
text
: (tag)=> tag !
| SPACE !
| outsidetag
;
SPACE
: (' ' | '\t' | '\r' | '\n')+ ;
tag
: OPEN INSIDETAG CLOSE ;
CLOSE : '>' ;
OPEN : '<' ;
INSIDETAG
: ~(CLOSE|OPEN)+ ;
outsidetag
: ~(SPACE) ;
First, you don't need to check for OPEN in your INSIDETAG rule, since there is no harm in skipping it there; in fact, you want it that way. Additionally, combine tag and INSIDETAG and make it greedy, so it tries to consume anything until the last CLOSE token, skipping any intermediate ones:
tag: options { greedy = true; }: OPEN ~CLOSE* CLOSE;
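The effect of greedy = true can be illustrated with ordinary regular expressions (Python's re module here, purely as an analogy): a greedy <.*> runs to the last '>' on the line, while a non-greedy <.*?> stops at the first one.

```python
import re

# Greedy vs. non-greedy matching, illustrating why the greedy tag rule
# swallows intermediate '>' characters up to the last one.
text = "a < b <i>and</i> c > d"

greedy = re.search(r"<.*>", text).group()   # extends to the LAST '>'
lazy = re.search(r"<.*?>", text).group()    # stops at the FIRST '>'
```

With the greedy form the whole bracketed region "< b <i>and</i> c >" is consumed as one match, which is exactly what the combined tag rule needs.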
I would like to be able to parse a non-empty, one-or-many element, comma-delimited (and optionally parenthesized) list using flex/bison parse rules.
some e.g. of parseable lists :
1
1,2
(1,2)
(3)
3,4,5
(3,4,5,6)
etc.
I am using the following rules to parse the list (the final result is the parse element 'topLevelList'), but they do not seem to give the desired result when parsing (I get a syntax error when supplying a valid list). Any suggestions on how I might set this up?
cList : ELEMENT
{
...
}
| cList COMMA ELEMENT
{
...
}
;
topLevelList : LPAREN cList RPAREN
{
...
}
| cList
{
...
}
;
This sounds simple. Tell me if I missed something or if my example doesn't work.
RvalCommaList:
RvalCommaListLoop
| '(' RvalCommaListLoop ')'
RvalCommaListLoop:
Rval
| RvalCommaListLoop ',' Rval
Rval: INT_LITERAL | WHATEVER
However, if you accept rvals as well as this list, you'll have a conflict: a regular rval is indistinguishable from a single-item list. In that case you can use the version below, which either requires the '(' ')' around the list or requires at least two items before it counts as a list:
RvalCommaList2:
Rval ',' RvalCommaListLoop
| '(' RvalCommaListLoop ')'
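A quick way to sanity-check these rules is to mirror them in a recursive-descent sketch. This is plain Python rather than flex/bison, with integer elements assumed and a trivial tokenizer standing in for the scanner:

```python
import re

# Recursive-descent mirror (plain Python, not flex/bison) of RvalCommaList:
# a non-empty, comma-separated, optionally parenthesized list of integers.

def tokenize(src):
    return re.findall(r"\d+|[(),]", src)

def parse_loop(toks):
    items = [int(toks.pop(0))]              # Rval
    while toks and toks[0] == ",":          # RvalCommaListLoop ',' Rval
        toks.pop(0)
        items.append(int(toks.pop(0)))
    return items

def parse_list(src):
    toks = tokenize(src)
    if toks and toks[0] == "(":             # '(' RvalCommaListLoop ')'
        toks.pop(0)
        items = parse_loop(toks)
        if not toks or toks.pop(0) != ")":
            raise SyntaxError("expected ')'")
    else:                                   # bare RvalCommaListLoop
        items = parse_loop(toks)
    if toks:
        raise SyntaxError("trailing input: %r" % toks)
    return items
```

Note how the left-recursive bison rule RvalCommaListLoop: Rval | RvalCommaListLoop ',' Rval becomes a simple loop here, since recursive descent cannot use left recursion directly.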
I too want to know how to do this. Thinking about it briefly, one way to achieve it would be to use a linked list of the form:
struct list;
struct list {
void *item;
struct list *next;
};
struct list *make_list(void *item, struct list *next);
and using the rule:
{ $$ = make_list( $1, $2); }
This solution is very similar in design to:
Using bison to parse list of elements
The hard bit is figuring out how to handle lists in the scheme of a (I presume) binary AST.
%start input
%%
input:
%empty
| integer_list
;
integer_list
: integer_loop
| '(' integer_loop ')'
;
integer_loop
: INTEGER
| integer_loop COMMA INTEGER
;
%%
I am trying to write an ANTLR grammar for the PHP serialize() format, and everything seems to work fine, except for strings. The problem is that the format of serialized strings is :
s:6:"length";
In terms of regexes, a rule like s:(\d+):".{\1}"; would describe this format if backreferences were allowed in the repetition count (but they are not).
But I cannot find a way to express this for either a lexer or parser grammar: the whole idea is to make the number of characters read depend on a backreference describing the number of characters to read, as in Fortran Hollerith constants (i.e. 6HLength), not on a string delimiter.
This example from the ANTLR grammar for Fortran seems to point the way, but I don't see how. Note that my target language is Python, while most of the doc and examples are for Java:
// numeral literal
ICON {int counter=0;} :
/* other alternatives */
// hollerith
'h' ({counter>0}? NOTNL {counter--;})* {counter==0}?
{
$setType(HOLLERITH);
String str = $getText;
str = str.replaceFirst("([0-9])+h", "");
$setText(str);
}
/* more alternatives */
;
Since input like s:3:"a"b"; is valid, you can't define a String token in your lexer, unless the first and last double quote are always the start and end of your string. But I guess this is not the case.
So, you'll need a lexer rule like this:
SString
: 's:' Int ':"' ( . )* '";'
;
In other words: match a s:, then an integer value followed by :", then zero or more characters that can be anything, ending with ";. But you need to tell the lexer to stop consuming once it has matched as many characters as Int indicates. You can do that by mixing some plain code into your grammar; you embed plain code by wrapping it inside { and }. So first convert the value the Int token holds into an integer variable called chars:
SString
: 's:' Int {chars = int($Int.text)} ':"' ( . )* '";'
;
Now embed some code inside the ( . )* loop to stop it consuming as soon as chars is counted down to zero:
SString
: 's:' Int {chars = int($Int.text)} ':"' ( {if chars == 0: break} . {chars = chars-1} )* '";'
;
and that's it.
A little demo grammar:
grammar Test;
options {
language=Python;
}
parse
: (SString {print 'parsed: [\%s]' \% $SString.text})+ EOF
;
SString
: 's:' Int {chars = int($Int.text)} ':"' ( {if chars == 0: break} . {chars = chars-1} )* '";'
;
Int
: '0'..'9'+
;
(note that you need to escape the % signs inside your grammar!)
And a test script:
import antlr3
from TestLexer import TestLexer
from TestParser import TestParser
input = 's:6:"length";s:1:""";s:0:"";s:3:"end";'
char_stream = antlr3.ANTLRStringStream(input)
lexer = TestLexer(char_stream)
tokens = antlr3.CommonTokenStream(lexer)
parser = TestParser(tokens)
parser.parse()
which produces the following output:
parsed: [s:6:"length";]
parsed: [s:1:""";]
parsed: [s:0:"";]
parsed: [s:3:"end";]
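For comparison, the same counted-read technique can be written in plain Python with no ANTLR at all (names below are illustrative): read the length field, then take exactly that many characters, so embedded '"' characters never confuse the scanner.

```python
import re

# Header of a PHP-serialized string: s:<length>:"
HEADER = re.compile(r's:(\d+):"')

def read_serialized_strings(data):
    """Parse a run of s:<n>:"...": entries by counting characters,
    not by scanning for a closing quote."""
    out, pos = [], 0
    while pos < len(data):
        m = HEADER.match(data, pos)
        if not m:
            raise ValueError("bad input at offset %d" % pos)
        n = int(m.group(1))
        start = m.end()
        value = data[start:start + n]        # exactly n chars, no delimiter scan
        if data[start + n:start + n + 2] != '";':
            raise ValueError("length does not match content")
        out.append(value)
        pos = start + n + 2
    return out
```

This handles the same tricky inputs as the grammar above, including s:1:"""; where the payload itself is a double quote.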