Why use the fragment modifier here?

I've seen fragment used quite frequently within lexer rules, but I'm not quite sure what its use is, or why it can't just be removed. For example, in the following rule:
NUMBER
: DECIMAL ([Ee] [+-]?[0-9]+)?
;
fragment DECIMAL
: [0-9]+ ('.' [0-9]*)? | '.' [0-9]+
;
When I remove the fragment keyword I still get the same parse tree. So what exactly is the use of fragment, or is it mainly annotative?
As another example from this tutorial page:
Fragments are reusable parts of lexer rules which cannot match on their own - they need to be referenced from a lexer rule.
INTEGER: DIGIT+
| '0' [Xx] HEX_DIGIT+
;
fragment DIGIT: [0-9];
fragment HEX_DIGIT: [0-9A-Fa-f];
I see no difference between this version and the same rules written without the fragment keyword.
Could someone please explain why these would be useful then?

The fragment declaration prevents the part from being recognized as a token. That might not be necessary very often, but it can definitely save you from hard-to-find bugs.
Let's take the second example in your post, without the fragment modifiers:
expression: INTEGER ;
INTEGER: DIGIT+
| '0' [Xx] HEX_DIGIT+
;
DIGIT: [0-9];
HEX_DIGIT: [0-9A-Fa-f];
Now, we decide that we want to add variables to the grammar:
expression: INTEGER | IDENTIFIER ;
INTEGER: DIGIT+
| '0' [Xx] HEX_DIGIT+
;
DIGIT: [0-9];
HEX_DIGIT: [0-9A-Fa-f];
IDENTIFIER: LETTER (LETTER | DIGIT)* ;
LETTER: [A-Za-z] ;
Do you see the bug?
The parser won't handle the input a, although it has no trouble with ax or i. That's because the tokeniser will interpret a as a HEX_DIGIT, not an IDENTIFIER.
Of course, I could have prevented that by putting HEX_DIGIT after IDENTIFIER, but that's more thinking about lexer rule ordering than I really want to do. I'd like the implementation details of IDENTIFIER and INTEGER to not interfere with each other, thank you very much.
Correctly flagging non-token fragments like LETTER, DIGIT and HEX_DIGIT saves me from having to think about whether a fragment might somehow manage to hijack a token definition somewhere else in the file.
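For reference, this is roughly what the fragment-marked version looks like; only INTEGER and IDENTIFIER can become tokens now, so a lone a has nowhere to go but IDENTIFIER:
expression: INTEGER | IDENTIFIER ;
INTEGER: DIGIT+
| '0' [Xx] HEX_DIGIT+
;
fragment DIGIT: [0-9];
fragment HEX_DIGIT: [0-9A-Fa-f];
IDENTIFIER: LETTER (LETTER | DIGIT)* ;
fragment LETTER: [A-Za-z] ;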
Here's a possibly more pernicious example, based on your first example:
NUMBER : DECIMAL EXPONENT? ;
EXPONENT: [Ee] [+-]? [0-9]+ ;
DECIMAL : [0-9]+ ('.' [0-9]*)? | '.' [0-9]+ ;
Once I add expressions to that grammar, I'll find that f+17 is fine, but e+17 is a syntax error. Why? Because it is recognised as an EXPONENT, rather than being parsed as an expression. No reordering of lexical rules will prevent that. But adding the fragment modifiers does the trick.
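In other words, something along these lines, where EXPONENT and DECIMAL can no longer be emitted as tokens in their own right:
NUMBER : DECIMAL EXPONENT? ;
fragment EXPONENT: [Ee] [+-]? [0-9]+ ;
fragment DECIMAL : [0-9]+ ('.' [0-9]*)? | '.' [0-9]+ ;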

Related

Does antlr automatically discard whitespace?

I've written the following arithmetic grammar:
grammar Calc;
program
: expressions
;
expressions
: expression (NEWLINE expression)*
;
expression
: '(' expression ')' // parenExpression has highest precedence
| expression MULDIV expression // then multDivExpression
| expression ADDSUB expression // then addSubExpression
| OPERAND // finally the operand itself
;
MULDIV
: [*/]
;
ADDSUB
: [-+]
;
// 12 or .12 or 2. or 2.38
OPERAND
: [0-9]+ ('.' [0-9]*)?
| '.' [0-9]+
;
NEWLINE
: '\n'
;
And I've noticed that regardless of how I space the tokens I get the same result, for example:
1+2
2+3
Or:
1 +2
2+3
Still gives me the same thing. Also, I've noticed that adding the following rule does nothing for me:
WS
: [ \r\n\t]+ -> skip
;
Which makes me wonder whether skipping whitespace is the default behavior of antlr4?
ANTLR4-based parsers have the ability to skip over single unwanted or missing tokens and continue parsing if possible (which is the case here). There is no default that ignores whitespace: you always have to specify a whitespace rule which either skips it or puts it on a hidden channel.
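For example, either of these rules handles whitespace (the name WS is just a convention): the first discards it entirely, the second keeps it but hides it from the parser:
WS : [ \t\r\n]+ -> skip ;
WS : [ \t\r\n]+ -> channel(HIDDEN) ; // alternative to -> skip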

ANTLR4: How can I recognize words from an alphabet?

I am new to Antl4. I have an antlr grammar file that consists of something similar to:
consonant : 'b' | 'c' | 'd' | 'f' ;
vowel : 'a' | 'e' | 'i' ;
connector : ':' | '-' ;
cseq : (consonant)+ ;
vseq : (vowel)+ ;
prefix : cseq vseq ;
word : (cseq vseq | cseq)+ ;
From my understanding, even though these lines are at the bottom of a file, they're still considered rules. My parse tree captures each individual letter instead of treating them as lexical items - or words. How can I change these rules into lexer statements?
A couple of things to keep in mind.
Parser rules are rules beginning with a lowercase letter.
Lexer rules are those whose name begins with an uppercase character (a fairly common convention is to make them all uppercase).
If you put a literal character in a parser rule (all of your rules are parser rules, as they begin with lowercase characters), ANTLR will synthesize a token rule for those characters.
Since it appears that you want a word to be a lexical item (i.e. Token), you could do something along the lines of:
fragment CONSONANT : 'b' | 'c' | 'd' | 'f' ;
fragment VOWEL : 'a' | 'e' | 'i' ;
CONNECTOR : ':' | '-' ; // not sure what you intend for this
fragment CSEQ: CONSONANT+ ;
fragment VSEQ : VOWEL+ ;
PREFIX : CSEQ VSEQ ; // not sure what you intend for this
WORD : (CSEQ VSEQ | CSEQ)+ ;
(That's making quite a few assumptions about your intention.)
Main point: if you want WORDs to be single tokens, they need to be defined as lexer rules.
If you want to compose rules for Lexer rules, you can define fragment rules. These rules can be used to compose Lexer rules, but will not, themselves, be recognized as tokens.
With the changes here, you should be able to use WORD in a parser rule, and have all the characters that make up your WORD in a single Token.
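For example, a parser rule along these lines (the rule name and its shape are only a guess at your intent) would then receive each word as a single token:
sentence : WORD (CONNECTOR WORD)* ;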

ANTLR grammar not working as expected. What am I doing wrong?

I have this grammar below for implementing an IN operator taking a list of numbers or strings.
grammar listFilterExpr;
listFilterExpr: entityIdNumberListFilter | entityIdStringListFilter;
entityIdNumberProperty
: 'a.Id'
| 'c.Id'
| 'e.Id'
;
entityIdStringProperty
: 'f.phone'
;
listOperator
: '$in:'
;
entityIdNumberListFilter
: entityIdNumberProperty listOperator numberList
;
entityIdStringListFilter
: entityIdStringProperty listOperator stringList
;
numberList: '[' ID (',' ID)* ']';
fragment ID: [1-9][0-9]*;
stringList: '[' STRING (',' STRING)* ']';
STRING
: '"'(ESC | SAFECODEPOINT)*'"'
;
fragment ESC
: '\\' (["\\/bfnrt] | UNICODE)
;
fragment SAFECODEPOINT
: ~ ["\\\u0000-\u001F]
;
If I try to parse the following input:
c.Id $in: [1,1]
Then I get the following error in the parser:
mismatched input '1' expecting ID
Please help me to correct this grammar.
Update
I found the following rule further up in the huge grammar file of my project that might be matching '1' before the input gets a chance to match ID:
NUMBER
: '-'? INT ('.' [0-9] +)?
;
fragment INT
: '0' | [1-9] [0-9]*
;
But if I put my ID rule before NUMBER, then other things fail, because input that should match NUMBER now matches ID instead.
What should I do?
As mentioned by rici: ID should not be a fragment. Fragments can only be used by other lexer rules; they will never become a token of their own (and can therefore not be used in parser rules).
Just remove the fragment keyword from it: ID: [1-9][0-9]*;
Note that you'll also have to account for spaces. You probably want to skip them:
SPACES : [ \t\r\n]+ -> skip;
...
mismatched input '1' expecting ID
...
This looks like there's another lexer rule, besides ID, that also matches the input 1 and is defined before ID. In that case, have a look at this Q&A: ANTLR 4.5 - Mismatched Input 'x' expecting 'x'
EDIT
Because you have the rules ordered like this:
NUMBER
: '-'? INT ('.' [0-9] +)?
;
fragment INT
: '0' | [1-9] [0-9]*
;
ID
: [1-9][0-9]*
;
the lexer will never create an ID token (only NUMBER tokens will be created). This is just how ANTLR works: when two or more lexer rules match the same number of characters, the one defined first "wins".
In the first place, I think it's odd to have an ID rule that matches only digits, but if that's the language you're parsing, OK. In your case, you could do something like this:
id : POS_NUMBER;
number : POS_NUMBER | NEG_NUMBER;
POS_NUMBER : INT ('.' [0-9] +)?;
NEG_NUMBER : '-' POS_NUMBER;
fragment INT
: '0' | [1-9] [0-9]*
;
and then instead of ID, use id in your parser rules. As well as using number instead of the NUMBER you're using now.
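Applied to the list rule in your grammar, that substitution would look something like this:
numberList: '[' id (',' id)* ']';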

Antlr grammar, implicit token definition in parser rule

A weird thing is going on. I defined the grammar and this is an excerpt.
name
: Letter
| Digit name
| Letter name
;
numeral
: Digit
| Digit numeral
;
fragment
Digit
: [0-9]
;
fragment
Letter
: [a-zA-Z]
;
So why does it show warnings for just the two lines (Letter and Digit name) where I referenced a fragment, while the others below are completely fine?
Lexer rules you mark as fragments can only be used by other lexer rules, not by parser rules. Fragment rules never become a token of their own.
Be sure you understand the difference: What does "fragment" mean in ANTLR?
EDIT
Also, I now see that you're doing too much in the parser. The rules name and numeral should really be lexer rules:
Name
: ( Digit | Letter)* Letter
;
Numeral
: Digit+
;
in which case you don't need to account for a Space rule in any of your parser rules (this is about your last question which was just removed).
Just in case you are using an older version of antlr:
[0-9]
and
[a-zA-Z]
are not valid character set expressions in older versions of ANTLR.
replace them with
'0'..'9'
and
('a'..'z' | 'A'..'Z')
and your issues should go away.
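That is, the two fragments rewritten with the older range syntax:
fragment
Digit
: '0'..'9'
;
fragment
Letter
: ( 'a'..'z' | 'A'..'Z' )
;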

Dealing with overloaded symbols in ambiguous grammars in ANTLR4

I am trying to write a parser for a dialect of Answer Set Programming (ASP) which, in terms of grammar, looks like Prolog with some extensions.
One extension, for instance, is expansion, meaning that fact(1..3). is expanded into fact(1). fact(2). fact(3).. Notice that the language understands INT and FLOAT numbers and also uses . as a statement terminator.
In some cases the parser fails to distinguish between integers, floats, expansions and separators because, I reckon, the language is genuinely ambiguous. In those cases I have to explicitly separate the tokens with whitespace. Any Prolog or ASP parser, however, deals correctly with such productions. I read that ANTLR4 can disambiguate problematic productions on its own, but it probably needs some help, and I don't know how to provide it! ;-) I read something like here and here, but apparently they did not help me.
Could somebody please tell me what to do to overcome this ambiguity?
Please notice that I cannot change the language because it is quite standard.
In order to simplify the experts' work, I created a minimal working example that follows.
grammar Test;
program:
statement* ;
statement: // DOT is the statement terminator
range DOT |
intNum DOT |
floatNum DOT ;
intNum: // not needed, but helps in TestRig
INT;
floatNum: // not needed, but helps in TestRig
FLOAT;
range: // defines an expansion
INT DOTS INT ;
DOTS: '..';
DOT: '.';
FLOAT: DIGIT+ '.' DIGIT* | '.' DIGIT+ ;
INT: DIGIT+ ;
WS: [ \t\r\n]+ -> skip ;
fragment NONZERO : [1-9] ;
fragment DIGIT : [0] | NONZERO ;
I use the following input:
1 .
1. .
1.5 .
.5 .
1 .. 5 .
1.
1..
1.5.
.5.
1..5.
And I get the following errors, for input which other tools instead parse correctly:
line 8:0 extraneous input '1.' expecting '.'
line 11:2 extraneous input '.5' expecting '.'
Many thanks in advance!
Before your DOTS rule, add a separate rule for the statement-terminating dot and disambiguate the DOTS rule (and change your other rules to use TERMINAL, as sketched at the end of this answer):
TERMINAL: DOT { isTerminal(1) }? ;
DOTS: DOT DOT { !isTerminal(2) }? ;
DOT: '.';
where the predicate method simply looks ahead on the _input character stream to see if, at the current token position, the next character is whitespace. Put something like this in a @members block in your grammar:
public boolean isTerminal(int la) {
    // check whether the character (1 + la) positions past the token's first character is whitespace
    int offset = _tokenStartCharIndex + 1 + la;
    String s = _input.getText(Interval.of(offset, offset));
    if (Character.isWhitespace(s.charAt(0))) {
        return true;
    }
    return false;
}
May have to do a bit more work if whitespace is valid between a DOTS and the trailing INT.
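With those token rules in place, the statement rule from the question would reference TERMINAL instead of DOT, along the lines of:
statement: // TERMINAL is the statement terminator
range TERMINAL |
intNum TERMINAL |
floatNum TERMINAL ;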
I recommend shifting the work to the parser.
If the lexer can't decide whether 1..2 is 1. .2 or 1 .. 2, leave it up to the parser.
Maybe there is a context in which it can be interpreted as the first alternative and another context in which it may be interpreted as the second alternative.
Btw: 1..2. could be interpreted as 1 .. 2 . (range) or as 1. . 2 . (floatNum, intNum). How do you want to deal with this?
The following grammar should parse everything. But note that . . is treated as dots, and 1 . 23 is treated as a floatNum! You can check for these cases while parsing or after parsing (depending on whether they should influence the parsing or not).
grammar Test;
program:
statement* ;
statement: // DOT is the statement terminator
range DOT |
intNum DOT |
floatNum DOT ;
intNum: // not needed, but helps in TestRig
INT;
floatNum:
INT DOT INT? | DOT INT ;
range: // defines an expansion
INT dots INT ;
dots : DOT DOT;
DOT: '.';
INT: DIGIT+ ;
WS: [ \t\r\n]+ -> skip ;
fragment NONZERO : [1-9] ;
fragment DIGIT : [0] | NONZERO ;
Prolog does not accept 1. as a float. This feature makes your grammar significantly more ambiguous, so maybe try removing that feature.
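Concretely, removing that feature means requiring at least one digit after the decimal point, something like:
FLOAT: DIGIT+ '.' DIGIT+ | '.' DIGIT+ ;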
