I am having some difficulties understanding the specific difference between Lexical Grammar and Syntactic Grammar in the ECMAScript 2017 specification.
Excerpts from ECMAScript 2017
5.1.2 The Lexical and RegExp Grammars
A lexical grammar for ECMAScript is given in clause 11. This grammar
has as its terminal symbols Unicode code points that conform to the
rules for SourceCharacter defined in 10.1. It defines a set of
productions, starting from the goal symbol InputElementDiv,
InputElementTemplateTail, InputElementRegExp, or
InputElementRegExpOrTemplateTail, that describe how sequences of such
code points are translated into a sequence of input elements.
Input elements other than white space and comments form the terminal
symbols for the syntactic grammar for ECMAScript and are called
ECMAScript tokens. These tokens are the reserved words, identifiers,
literals, and punctuators of the ECMAScript language.
5.1.4 The Syntactic Grammar
When a stream of code points is to be parsed as an ECMAScript Script
or Module, it is first converted to a stream of input elements by
repeated application of the lexical grammar; this stream of input
elements is then parsed by a single application of the syntactic
grammar.
Questions
Lexical grammar
Here it says the terminal symbols are Unicode code points (individual characters)
It also says it produces input elements (aka. tokens)
How are these reconcilable? Either the terminal symbols are tokens, and thus it produces tokens. Or, the terminal symbols are individual code points, and that's what it produces.
Syntactic grammar
I have the same questions on this grammar as on the lexical grammar
It seems to say that the terminal symbols here are tokens
So by applying the syntactic grammar rules, valid tokens are produced, which in turn can be sent to the parser? Or does this grammar accept tokens as input and then test the overall stream of tokens for validity?
My Best Guess
Lexing phase
Input: Code points (source code)
Output: Valid tokens (lexeme type + value), produced by applying the lexical grammar productions
Parsing phase
Input: Tokens
Output: A decision whether the tokens together form a valid stream (i.e. that the source code as a whole is a valid Script / Module), made by applying the syntactic grammar productions (a CFG)
I think you are confused about what "terminal symbol" means. In fact, terminal symbols are the inputs of the parser, not its output (which is a parse tree - including the degenerate case of a list).
On the other hand, a production rule does indeed have terminal symbols as its output and the goal symbol as its input - it's backwards, which is where the term "terminal" comes from. A non-terminal can be expanded (in different ways; that's what the rules describe) into a sequence of terminal symbols.
Example:
Language:
S -> T | S '_' T
T -> D | T D
D -> '0' | '1' | '2' | … | '9'
String:
12_45
Production:
S            // start: the goal
= S '_' T
= T '_' T
= T D '_' T
= T '2_' T
= D '2_' T
= '12_' T
= '12_' T D
= '12_' T '5'
= '12_' D '5'
= '12_45'    // end: the terminals
Parse tree:
S
  S
    T
      T
        D
          '1'
      D
        '2'
  '_'
  T
    T
      D
        '4'
    D
      '5'
Parser output (generating a sequence of items from top-level Ts):
'12'
'45'
So
The lexing phase has code points as inputs and tokens as outputs. The code points are the terminal symbols of the lexical grammar.
The syntactic phase has tokens as inputs and programs as outputs. The tokens are the terminal symbols of the syntactic grammar.
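Purely as an illustration (nothing here comes from the spec, and all names are made up), below is a minimal sketch in C of that division of labour for the toy grammar above: the lexer's terminal symbols are individual characters and its output is a stream of tokens, while the parser's terminal symbols are those tokens and its output here is just an accept/reject decision.

#include <ctype.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Toy sketch of the two phases for the grammar above:
   digit groups separated by '_' (e.g. "12_45"). */

typedef enum { TOK_NUMBER, TOK_UNDERSCORE, TOK_END } TokKind;

typedef struct {
    TokKind kind;
    char text[32];
} Token;

/* Lexing phase: the terminal symbols are individual characters;
   the output is a stream of tokens (input elements). */
static void lex(const char *src, Token *out, size_t max) {
    size_t n = 0;
    while (*src && n < max) {
        if (isdigit((unsigned char)*src)) {
            size_t len = 0;
            while (isdigit((unsigned char)src[len])) len++;   /* longest match */
            out[n].kind = TOK_NUMBER;
            snprintf(out[n].text, sizeof out[n].text, "%.*s", (int)len, src);
            src += len;
        } else if (*src == '_') {
            out[n].kind = TOK_UNDERSCORE;
            strcpy(out[n].text, "_");
            src++;
        } else {
            fprintf(stderr, "lexical error at '%c'\n", *src);
            exit(1);
        }
        n++;
    }
    out[n].kind = TOK_END;
}

/* Parsing phase: the terminal symbols are the tokens produced above;
   the output here is only "valid"/"invalid" (a real parser would
   build a parse tree). */
static int parse(const Token *t) {
    if (t->kind != TOK_NUMBER) return 0;        /* S must start with a T */
    t++;
    while (t->kind == TOK_UNDERSCORE) {         /* S -> S '_' T, iterated */
        t++;
        if (t->kind != TOK_NUMBER) return 0;
        t++;
    }
    return t->kind == TOK_END;
}

int main(void) {
    Token toks[16];
    lex("12_45", toks, 15);
    printf("%s\n", parse(toks) ? "valid" : "invalid");   /* prints "valid" */
    return 0;
}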
Your "best guess" is correct to a first approximation. The main correction is to change "tokens" to "input elements". That is, the lexical level produces input elements (only some of which are designated 'tokens'), and the syntactic level takes input elements as input.
The syntactic level can almost ignore input elements that aren't tokens, except that Automatic Semicolon Insertion rules require it to pay attention to line-terminators in whitespace and comments.
Your "How are these reconcilable?" questions seems to stem from a misunderstanding of either "terminal symbol" or "produces", but it's not clear to me which.
Related
I'm trying to create an ANTLR grammar to parse sequences of keys that optionally have a repeat count. For example, (a b c r5) means "repeat keys a, b, and c five times."
I have the grammar working for KEYS : ('a'..'z'|'A'..'Z').
But when I try to add digit keys KEYS : ('a'..'z'|'A'..'Z'|'0'..'9') with an input expression like (a 5 r5), the parse fails on the middle 5 because it can't tell if the 5 is an INTEGER or a KEY. (Or so I think; the error messages are difficult to interpret "NoViableAltException").
I have tried these grammatical forms, which work ('r' means "repeatcount"):
repeat : '(' LETTERKEYS INTEGER ')' - works for a-zA-Z
repeat : '(' LETTERKEYS 'r' INTEGER ')'; - works for a-zA-Z
But I fail with
repeat : '(' LETTERSandDIGITKEYS INTEGER ')' - fails on '(a 5 r5)'
repeat : '(' LETTERSandDIGITKEYS 'r' INTEGER ')'; - fails on '(a 5 r5)'
Maybe the grammar can't do the recognition; maybe I need to recognize all of the 5's in the same way (as KEYS or DIGITS or INTEGERS) and, in the parse tree visitor, interpret the middle DIGIT instances as keys and the last set of DIGITS as an INTEGER count?
Is it possible to define a grammar that allows me to repeat digit keys as well as letter keys so that expressions like (a 5 123 r5) will be recognized correctly? (That is, "repeat keys a,5,1,2,3 five times.") I'm not tied to that specific syntax, although it would be nice to use something similar.
Thank you.
the parse fails on the middle 5 because it can't tell if the 5 is an INTEGER or a KEY.
If you have defined the following rules:
INTEGER : [0-9]+;
KEY : [a-zA-Z0-9];
then a single digit, like 5 in your example, will always become an INTEGER token. Even if
the parser is trying to match a KEY token, the 5 will become an INTEGER. There is nothing
you can do about that: this is the way ANTLR's lexer works. The lexer works in the following way:
try to consume as many characters as possible (the longest match wins)
if 2 or more rules match the same characters (like INTEGER and KEY in case of 5), let the rule defined first "win"
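Conceptually (this is not ANTLR's actual code, just a made-up sketch of the same decision procedure), those two steps look roughly like this in C:

#include <ctype.h>
#include <stdio.h>

/* Each "lexer rule" reports how many characters it can match
   at the start of the input. */
static int match_integer(const char *s) {           /* INTEGER : [0-9]+ ; */
    int n = 0;
    while (isdigit((unsigned char)s[n])) n++;
    return n;
}

static int match_key(const char *s) {               /* KEY : [a-zA-Z0-9] ; */
    return isalnum((unsigned char)s[0]) ? 1 : 0;
}

int main(void) {
    const char *input = "5";
    /* Rules in definition order: INTEGER first, then KEY. */
    int (*rules[])(const char *) = { match_integer, match_key };
    const char *names[] = { "INTEGER", "KEY" };

    int best = -1, best_len = 0;
    for (int i = 0; i < 2; i++) {
        int len = rules[i](input);
        /* Longest match wins; on a tie the earlier-defined rule wins,
           so the winner is only replaced by a strictly longer match. */
        if (len > best_len) { best = i; best_len = len; }
    }
    printf("'%s' is lexed as %s\n", input, best >= 0 ? names[best] : "<no match>");
    return 0;
}

For the input 5 both rules match one character, so INTEGER wins on definition order, which is exactly why the parser never sees a KEY token there.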
If you want a 5 to be an INTEGER, but sometimes a KEY, do something like this instead:
key : KEY | SINGLE_DIGIT | R;
integer : INTEGER | SINGLE_DIGIT;
repeat : R integer;
SINGLE_DIGIT : [0-9];
INTEGER : [0-9]+;
R : 'r';
KEY : [a-zA-Z];
and in your parser rules, you use key and integer instead of KEY and INTEGER.
You can split your grammar into two parts: one to be the lexer grammar, one to be the parser grammar. The lexer grammar splits the input characters into tokens. The parser grammar uses the stream of tokens to parse and build a syntax tree. I work on Tunnel Grammar Studio (TGS), which can generate parsers from these two ABNF (RFC 5234)-like grammars:
key = 'a'-'z' / 'A'-'Z' / '0'-'9'
repeater = 'r' 1*('0'-'9')
That is the lexer grammar; it has two rules. Each character that is not processed by the lexer grammar is converted to a token made from the character itself. That means a is a key, r11 is a repeater, and ( for example will be passed to the parser as a token (.
document = *ws repeat
repeat = '(' *ws *({key} *ws) [{repeater} *ws] ')' *ws
ws = ' ' / %x9 / %xA / %xD
This is the parser grammar, which has 3 rules. The document rule accepts (recognizes) white space first, then one repeat rule. The repeat rule starts with the opening scope, followed by any number of white space characters. After that comes a list of keys, possibly separated by white space; after all the keys there is an optional repeater token followed by optional white space, the closing scope, and again optional white space. The white space is space, tab, carriage return and line feed, in that order.
The runtime of this parsing is linear for both the lexer and the parser, because both grammars are LL(1). In the TGS online laboratory you can run these grammars for the input (a 5 r5) and see the generic parse tree they produce.
If you want to have more complex keys, then you may use this:
key = 1*('a'-'z' / 'A'-'Z' / '0'-'9')
In this case however, the key and repeater lexer rules will both recognize the sequence r7, but because the repeater rule is defined later it will take precedence (i.e. it overwrites the previous rule). In other words, r7 will be a repeater token, and the parsing will still be linear. You will get a warning from TGS if your lexer rules overwrite one another.
Despite my limited knowledge about compiling/parsing I dared to build a small recursive-descent parser for OData $filter expressions. The parser only needs to check the expression for correctness and output a corresponding condition in SQL. As input and output have almost the same tokens and structure, this was fairly straightforward, and my implementation does 90% of what I want.
But now I got stuck with parentheses, which appear in separate rules for logical and arithmetic expressions. The full OData grammar in ABNF is here, a condensed version of the rules involved is this:
boolCommonExpr = ( boolMethodCallExpr
/ notExpr
/ commonExpr [ eqExpr / neExpr / ltExpr / ... ]
/ boolParenExpr
) [ andExpr / orExpr ]
commonExpr = ( primitiveLiteral
/ firstMemberExpr ; = identifier
/ methodCallExpr
/ parenExpr
) [ addExpr / subExpr / mulExpr / divExpr / modExpr ]
boolParenExpr = "(" boolCommonExpr ")"
parenExpr = "(" commonExpr ")"
How does this grammar match a simple expression like (1 eq 2)? From what I can see all ( are consumed by the rule parenExpr inside commonExpr, i.e. they must also close after commonExpr to not cause an error and boolParenExpr never gets hit. I suppose my experience / intuition on reading such a grammar is just insufficient to get it. A comment in the ABNF says: "Note that boolCommonExpr is also a commonExpr". Maybe that's part of the mystery?
Obviously an opening ( alone won't tell me where it's going to close: after the current commonExpr expression or further away after boolCommonExpr. My lexer has a list of all tokens ahead (a URL is a very short input). I was thinking to use that to find out what type of ( I have. Good idea?
I'd rather have restrictions in input or a little hack than switching to a generally more powerful parser model. For a simple expression translation like this I also want to avoid compiler tools.
Edit 1: Extension after answer by rici - Is grammar rewrite correct?
Actually I started out with the example for recursive-descent parsers given on Wikipedia. Then I thought it better to adapt to the official grammar given by the OData standard, to be more "conformant". But with the advice from rici (and the comment from "Internal Server Error") to rewrite the grammar, I would tend to go back to the more comprehensible structure provided on Wikipedia.
Adapted to the boolean expression for the OData $filter this could maybe look like this:
boolSequence = boolExpr {("and"|"or") boolExpr} .
boolExpr = ["not"] expression ("eq"|"ne"|"lt"|"gt"|"le"|"ge") expression .
expression = term {("add"|"sub") term} .
term = factor {("mul"|"div"|"mod") factor} .
factor = IDENT | methodCall | LITERAL | "(" boolSequence ")" .
methodCall = METHODNAME "(" [ expression {"," expression} ] ")" .
Does the above make sense in general for boolean expressions, is it mostly equivalent to the original structure above and digestible for a recursive descent parser?
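For what it's worth, that structure maps one-to-one onto mutually recursive functions. Below is a hypothetical recursive-descent skeleton in C (the token kinds, the canned token stream and the peek/accept/expect helpers are all invented for the example; a real implementation would plug in the OData lexer):

#include <stdio.h>
#include <stdlib.h>

/* Token kinds and a canned token stream for "Price mul 2 gt 100". */
typedef enum {
    TOK_AND, TOK_OR, TOK_NOT,
    TOK_EQ, TOK_NE, TOK_LT, TOK_GT, TOK_LE, TOK_GE,
    TOK_ADD, TOK_SUB, TOK_MUL, TOK_DIV, TOK_MOD,
    TOK_LPAREN, TOK_RPAREN, TOK_COMMA,
    TOK_IDENT, TOK_METHODNAME, TOK_LITERAL, TOK_END
} TokKind;

static const TokKind toks[] = {
    TOK_IDENT, TOK_MUL, TOK_LITERAL, TOK_GT, TOK_LITERAL, TOK_END
};
static int pos = 0;

static TokKind peek(void)        { return toks[pos]; }
static int accept_tok(TokKind k) { return peek() == k ? (pos++, 1) : 0; }
static void die(const char *msg) { fprintf(stderr, "syntax error: %s\n", msg); exit(1); }
static void expect(TokKind k)    { if (!accept_tok(k)) die("unexpected token"); }

static void boolSequence(void); static void boolExpr(void);
static void expression(void);   static void term(void);
static void factor(void);       static void methodCall(void);

/* boolSequence = boolExpr {("and"|"or") boolExpr} . */
static void boolSequence(void) {
    boolExpr();
    while (accept_tok(TOK_AND) || accept_tok(TOK_OR))
        boolExpr();
}

/* boolExpr = ["not"] expression ("eq"|"ne"|"lt"|"gt"|"le"|"ge") expression . */
static void boolExpr(void) {
    accept_tok(TOK_NOT);
    expression();
    if (!(accept_tok(TOK_EQ) || accept_tok(TOK_NE) || accept_tok(TOK_LT) ||
          accept_tok(TOK_GT) || accept_tok(TOK_LE) || accept_tok(TOK_GE)))
        die("comparison operator expected");
    expression();
}

/* expression = term {("add"|"sub") term} . */
static void expression(void) {
    term();
    while (accept_tok(TOK_ADD) || accept_tok(TOK_SUB))
        term();
}

/* term = factor {("mul"|"div"|"mod") factor} . */
static void term(void) {
    factor();
    while (accept_tok(TOK_MUL) || accept_tok(TOK_DIV) || accept_tok(TOK_MOD))
        factor();
}

/* factor = IDENT | methodCall | LITERAL | "(" boolSequence ")" . */
static void factor(void) {
    if (accept_tok(TOK_LPAREN)) {
        boolSequence();
        expect(TOK_RPAREN);
    } else if (peek() == TOK_METHODNAME) {
        methodCall();
    } else if (!accept_tok(TOK_IDENT) && !accept_tok(TOK_LITERAL)) {
        die("operand expected");
    }
}

/* methodCall = METHODNAME "(" [ expression {"," expression} ] ")" . */
static void methodCall(void) {
    expect(TOK_METHODNAME);
    expect(TOK_LPAREN);
    if (peek() != TOK_RPAREN) {
        expression();
        while (accept_tok(TOK_COMMA))
            expression();
    }
    expect(TOK_RPAREN);
}

int main(void) {
    boolSequence();                      /* parse the canned token stream */
    if (peek() != TOK_END) die("trailing input");
    puts("parsed OK");
    return 0;
}

Each rule becomes one function and each {...} repetition becomes a while loop, which is what makes this shape of grammar straightforward for a hand-written recursive-descent parser.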
@rici: Thanks for your detailed remarks on type checking. The new grammar should resolve your concerns about precedence in arithmetic expressions.
For all three terminals (UPPERCASE in the grammar above) my lexer supplies a type (string, number, datetime or boolean). Non-terminals return the type they produce. With this I managed quite nicely to do type checking on the fly in my current implementation, including decent error messages. Hopefully this will also work for the new grammar.
Edit 2: Return to original OData grammar
The differentiation between a "logical" and an "arithmetic" ( is not a trivial one. To solve the problem even N. Wirth uses a dodgy workaround to keep the grammar of Pascal simple. As a consequence, in Pascal an extra pair of () is mandatory around and and or expressions. Neither intuitive nor OData conformant :-(. The best read about the "() difficulty" I found is in Let's Build a Compiler (Part VI). Other languages seem to go to great lengths in the grammar to solve the problem. As I don't have experience with grammar construction, I stopped rolling my own.
I ended up implementing the original OData grammar. Before I run the parser I go over all tokens backwards to figure out which ( belong to a logical/arithmetic expression. Not a problem for the potential length of a URL.
Personally, I'd just modify the grammar so that it has only one type of expression and therefore one type of parenthesis. I'm not convinced that the OData grammar is actually correct; it is certainly not usable in an LL(1) (or recursive descent) parser for exactly the reason you mention.
Specifically, if the goal is boolCommonExpr, there are two productions which can match the ( lookahead token:
boolCommonExpr = ( …
/ commonExpr [ eqExpr / neExpr / … ]
/ boolParenExpr
/ …
) …
commonExpr = ( …
/ parenExpr
/ …
) …
For the most part, this is a misguided attempt to make the grammar detect a type violation. (If in fact it is a type violation.) It's misguided because it is doomed to failure if there are boolean variables, which there apparently are in this environment. Since there is no syntactic clue as to the type of a variable, the parser is not capable of deciding whether particular expressions are well-formed or not, so there is a good argument for not trying at all, particularly if it creates parsing headaches. A better solution is to first parse the expression into an AST of some form, and then do another pass over the AST to check that each operator has operands of the correct type (and possibly insert explicit cast operators if that is necessary).
Aside from any other advantage, doing the type check in a separate pass lets you produce much better error messages. If you make (some) type violations syntax errors, then you may leave the user puzzled about why their expression was rejected; in contrast, if you notice that a comparison operation is being used as an operand to multiply (and if your language's semantics don't allow an automatic conversion from True/False to 1/0), then you can produce a well-targeted error message ("comparisons cannot be used as the operand of an arithmetic operator", for example).
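A rough sketch of that two-pass approach in C (all node kinds, names and messages are invented here; nothing is OData-specific): the parser builds an AST without worrying about types, and a separate check pass walks the tree and produces the targeted message.

#include <stdio.h>
#include <stdlib.h>

typedef enum { TY_NUMBER, TY_BOOL } Type;
typedef enum { N_NUM, N_BOOLVAR, N_ADD, N_EQ, N_AND } NodeKind;

typedef struct Node {
    NodeKind kind;
    struct Node *lhs, *rhs;
} Node;

/* Second pass: compute the type of every node, complaining when an
   operator gets operands of the wrong type. */
static Type check(const Node *n) {
    switch (n->kind) {
    case N_NUM:     return TY_NUMBER;
    case N_BOOLVAR: return TY_BOOL;
    case N_ADD:                               /* number add number -> number */
        if (check(n->lhs) != TY_NUMBER || check(n->rhs) != TY_NUMBER) {
            fprintf(stderr, "comparisons or booleans cannot be used as "
                            "the operand of an arithmetic operator\n");
            exit(1);
        }
        return TY_NUMBER;
    case N_EQ:                                /* operands must agree -> bool */
        if (check(n->lhs) != check(n->rhs)) {
            fprintf(stderr, "operands of 'eq' must have the same type\n");
            exit(1);
        }
        return TY_BOOL;
    case N_AND:                               /* bool and bool -> bool */
        if (check(n->lhs) != TY_BOOL || check(n->rhs) != TY_BOOL) {
            fprintf(stderr, "'and' needs boolean operands\n");
            exit(1);
        }
        return TY_BOOL;
    }
    return TY_BOOL;   /* not reached */
}

int main(void) {
    /* (1 eq 2) add 3 : parses fine with a single expression kind,
       but the second pass rejects it with a targeted message. */
    Node one   = { N_NUM, 0, 0 }, two = { N_NUM, 0, 0 }, three = { N_NUM, 0, 0 };
    Node eq    = { N_EQ, &one, &two };
    Node addop = { N_ADD, &eq, &three };
    check(&addop);
    return 0;
}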
One possible reason to put different operators (but not parentheses) into different grammatical variables is to express grammatical precedence. That consideration might encourage you to rewrite the grammar with explicit precedence. (As written, the grammar assumes that all arithmetic operators have the same precedence, which would presumably lead to 2 + 3 * a being parsed as (2 + 3) * a, which might be a huge surprise.) Alternatively, you might use some simple precedence aware subparser for expressions.
If you want to test your ABNF grammar for determinism (i.e. whether it is LL(1)), you can use Tunnel Grammar Studio (TGS). I have tested the full grammar, and there are plenty of conflicts, not only these scopes. If you are able to extract the relevant rules, you can use the desktop version of TGS to visualize the conflicts (the online version checker gives a textual result only). If the rules are not too many, the demo may help you to create an LL(1) grammar from your rules.
If you extract all the rules you need and add them to your question, I can run it for you and tell you whether it is LL(1). Note that the grammar is not exactly in ABNF meta syntax, because case sensitivity is indicated with ' for case-sensitive strings. ABNF (RFC 5234) is by definition case insensitive; RFC 7405 defines sensitivity with %s and %i (sensitive and insensitive) prefixes before the actual string. The default case (without a prefix) still means insensitive. This means that you have to replace these invalid '...' strings with %s"..." before testing in TGS.
TGS is a project I work on.
I've been writing a parser with flex and bison for a few weeks now and have ground to a halt on account of a double recursion, the definitions of which are similar for the first few rules. Bison always chooses the wrong path at one particular stage and crashes because the grammar doesn't fit. The bison code looks a little like this:
set :
TOKEN_ /* token */
QString
QString
Integer /* number of descrs (see below) */
M_op /*'M' optional*/
alts;
and
alts :
alt | alts alt ;
alt :
QString
pName_op /* empty | TOKEN1 QString */
deVal_op /* empty | TOKEN2 Integer */
descrs
;
and
descrs :
descr | descrs descr ;
descr :
QString
QString_op /* optional qstring */
Integer
D_op /* optional 'D' */
Bison stays in the descrs recursion and never exits it to progress to the next alt. The integer that is read in in the initial block, however, tells us how many instances of descr are going to come. So my question is this:
Is there a way of preparing bison for a specific number of instances of the recursion so that it can exit this recursion and enter the recursion "above"? I can access this integer in the C code, but I'm not aware of syntax for said move, something like a descrs : {for (int i=0;i<n;++i){descr}} (I'm aware that probably looks ridiculous)
Failing this, is there any other way around this problem?
Any input would be much appreciated. Thanks in advance.
A context-free grammar cannot be contingent on semantic information. Yet, that is precisely what you are seeking: you wish the value of a numeric token to be taken into account in the syntax of an expression.
As a request, that's not unreasonable or immoral; it's simply outside of the reach of context-free grammars. And bison is intended to create parsers for context-free grammars. So it's simply not the correct tool for this problem.
Having said that, it is possible to use bison in this manner, if you are using a reasonably recent version of bison which includes support for GLR grammars. Bison's GLR support includes the option of using semantic predicates to control the parse. (See the bison manual for details.) A solution based on that mechanism is possible, and probably not too complicated.
Much easier -- if the grammar allows for it -- would be to use a top-down parser. Parsing a number and then that number of descrs would be trivial in a recursive-descent parser, for example.
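As a toy illustration of that point (the input format and the names below are invented, not the grammar from the question), a hand-written top-down parser can read the announced count and simply loop, which is precisely the step a context-free grammar cannot express:

#include <ctype.h>
#include <stdio.h>
#include <stdlib.h>

static const char *p = "3 foo bar baz";     /* "3" announces how many items follow */

static void skip_ws(void) { while (*p == ' ') p++; }

static long parse_count(void) {
    char *end;
    long n = strtol(p, &end, 10);
    if (end == p) { fprintf(stderr, "number expected\n"); exit(1); }
    p = end;
    return n;
}

static void parse_descr(void) {             /* stands in for the real descr rule */
    skip_ws();
    if (!isalpha((unsigned char)*p)) { fprintf(stderr, "descr expected\n"); exit(1); }
    while (isalpha((unsigned char)*p)) p++;
}

int main(void) {
    long n = parse_count();                 /* read the count first...        */
    for (long i = 0; i < n; i++)            /* ...then parse exactly n descrs */
        parse_descr();
    skip_ws();
    if (*p) { fprintf(stderr, "unexpected trailing input\n"); return 1; }
    printf("read %ld descrs\n", n);
    return 0;
}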
The liberal use of FOO_op non-terminals in the grammar suggests that top-down parsing would not be problematic, but it is impossible to say for sure without seeing the entire grammar. Artificial non-terminals (like FOO_op) often cause shift-reduce conflicts in LR(1) languages, because they force an immediate shift/reduce decision to be made. In an LR(1) language, a production of the form: A → ω B? χ
would normally be rendered as the pair of productions A → ω B χ; A → ω χ, rather than the substitution Bop → B | ε; A → ω Bop χ, in order to avoid creating conflicts with other productions of the form C → ω ζ where FIRST(ζ) ∩ FIRST(B ∪ ω) ≠ ∅.
When parsing math expressions, would it be better to treat invisible multiplication (e.g. ab, meaning a times b, or (a-b)c, or (a-b)(c+d), etc.) at the level of the lexer or of the parser?
Implicit multiplication is a grammatical construct. Lexing is purely about recognizing the individual symbols. The fact that two adjacent expressions should be multiplied is not a lexical notion, as the lexer does not know about "expressions". The parser does.
If the lexer were responsible, you'd have to add lots of rules relating to adjacent tokens. For instance, insert a × token between two IDENTIFIERs, or an IDENTIFIER and a NUMBER, or a NUMBER and an IDENTIFIER, or between ) and IDENTIFIER, or IDENTIFIER and (... except uh oh, IDENTIFIER ( could be a function call, so maybe I need to look up IDENTIFIER in the symbol table to see if it's a function name...
What a mess!
The parser, on the other hand, can do this with a single grammar rule.
E → E '×' E
| E E
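In a hand-written recursive-descent parser the same idea is a one-line check at the multiplication level: if the next token can start another factor, multiply. A small self-contained sketch in C (toy conventions only, not from the answer above: single letters are variables, all given the value 2 so the example runs):

#include <ctype.h>
#include <stdio.h>

static const char *p;

static double expr(void);

static double factor(void) {
    if (*p == '(') {                 /* '(' expr ')' */
        p++;
        double v = expr();
        if (*p == ')') p++;
        return v;
    }
    if (isdigit((unsigned char)*p)) {             /* number literal */
        double v = 0;
        while (isdigit((unsigned char)*p)) v = v * 10 + (*p++ - '0');
        return v;
    }
    p++;                             /* single-letter variable, value 2 here */
    return 2.0;
}

static int starts_factor(void) {
    return *p == '(' || isalnum((unsigned char)*p);
}

static double term(void) {
    double v = factor();
    for (;;) {
        if (*p == '*') { p++; v *= factor(); }        /* explicit multiplication */
        else if (starts_factor()) { v *= factor(); }  /* implicit multiplication */
        else return v;
    }
}

static double expr(void) {
    double v = term();
    while (*p == '+' || *p == '-') {
        char op = *p++;
        double r = term();
        v = (op == '+') ? v + r : v - r;
    }
    return v;
}

int main(void) {
    p = "(a-b)(c+d)";               /* a,b,c,d all read as 2 -> 0*4 = 0 */
    printf("%g\n", expr());
    p = "3a+2";                     /* 3*2 + 2 = 8 */
    printf("%g\n", expr());
    return 0;
}

The only change relative to an ordinary expression parser is the extra starts_factor() test in term(), which is the parser-level equivalent of the E E rule above.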
Below is a Bison grammar which illustrates my problem. The actual grammar that I'm using is more complicated.
%glr-parser
%%
s : e | p '=' s;
p : fp | p ',' fp;
fp : 'x';
e : te | e ';' te;
te : fe | te ',' fe;
fe : 'x';
Some examples of input would be:
x
x = x
x,x = x,x
x,x = x;x
x,x,x = x,x;x,x
x = x,x = x;x
What I'm after is for the x's on the left side of an '=' to be parsed differently than those on the right. However, the set of legal "expressions" which may appear on the right of an '='-sign is larger than those on the left (because of the ';').
Bison prints the message (input file was test.y):
test.y: conflicts: 1 reduce/reduce.
There must be some way around this problem. In C, you have a similar situation. The program below passes through gcc with no errors.
int main(void) {
int x;
int *px;
x;
*px;
*px = x = 1;
}
In this case, the 'px' and 'x' get treated differently depending on whether they appear to the left or right of an '='-sign.
You're using %glr-parser, so there's no need to "fix" the reduce/reduce conflict. Bison just tells you there is one, so that you know your grammar might be ambiguous and you might need to add ambiguity resolution with %dprec or %merge directives. But in your case, the grammar is not ambiguous, so you don't need to do anything.
A conflict is NOT an error; it's just an indication that your grammar is not LALR(1).
The reduce-reduce conflict in your grammar comes from the context:
... = ... x ,
At this point, the parser has to decide whether x is an fe or an fp, and it cannot know with one symbol of lookahead. Indeed, it cannot know with any finite lookahead; you could have any number of repetitions of x , following that point without encountering a =, ; or the end of the input, any of which would reveal the answer.
This is not quite the same as the C issue, which can be resolved with single symbol lookahead. However, the C example is a classic illustration of why SLR(1) grammars are less powerful than LALR(1) grammars -- it's used for that purpose in the dragon book -- and a similarly problematic grammar is an example of the difference between LALR(1) and LR(1); it can be found in the bison manual (here):
def: param_spec return_spec ',';
param_spec: type | name_list ':' type;
return_spec: type | name ':' type;
type: "id";
name: "id";
name_list: name | name ',' name_list;
(The bison manual explains how to resolve this issue for LALR(1) grammars, although using a GLR grammar is always a possibility.)
The key to resolving such conflicts without using a GLR grammar is to avoid forcing the parser to make premature decisions.
For example, it is traditional to distinguish syntactically between lvalues and rvalues, and some languages continue to do so. C and C++ do not, however; and this turns out to be an extremely powerful feature in C++ because it allows the definition of functions which can act as lvalues.
In C, I think it's just to simplify the grammar a bit: the C grammar allows the result of any unary operator to appear on the left hand side of an assignment operator, but unary operators are actually a mix of lvalues (*v, v[expr]) and rvalues (sizeof v, f(expr)). The grammar could have distinguished between the two kinds of unary operators, but it could not resolve the actual restriction, which is that only modifiable lvalues may appear on the left side of an assignment operator.
C++ allows an arbitrary expression to appear on the left-hand side of an assignment operator (although some need to be parenthesized); consequently, the following is totally legal:
(predicate(x) ? *some_pointer : some_variable) = 42;
In your case, you could resolve the conflict syntactically by replacing te with p, since both non-terminals produce the same set of derivations. That's probably not a general solution, unless it is really the case in your full grammar that left-side expressions are a strict subset of right-side expressions. In a full grammar, you might end up with three types of expression (left-only, right-only, common), which could considerably complicate the grammar, and leaving the resolution to semantic analysis might prove to be easier (and even, as in the case of C++, surprisingly useful).