Grammar: Precedence of grammar alternatives - parsing

This is a very basic question about grammar alternatives. If you have the following alternative:
MyAlternative: 'a' | .;
MyAlternative2: 'a' | 'b';
Would 'a' have higher priority than the '.' and than 'b'?
I understand that this may also depend on the behaviour of the parser generated from this syntax, but in purely theoretical grammar terms: could you imagine these rules being matched in parallel, i.e. tested against 'a' and '.' at the same time, with the highest-priority match selected? Or are 'a' and '.' ambiguous due to the lack of precedence in grammars?

The answer depends primarily on the tool you are using and on that tool's semantics. As written, this is not a context-free grammar in canonical form, and you'd need to produce one to get a theoretical answer, because only in that way can you pin down the intended semantics.
Since the question is tagged antlr, I'm going to guess that this is part of an Antlr lexical definition in which . is a wildcard character. In that case, 'a' | . means exactly the same thing as . by itself.
Since MyAlternative matches everything that MyAlternative2 matches, and since MyAlternative comes first in the Antlr lexical definition, MyAlternative2 can never match anything. Any single character will be matched by MyAlternative (unless there is some other lexical rule which matches a longer sequence of input characters).
If you put the definition of MyAlternative2 first in the grammar file, then a or b would be matched as MyAlternative2, while any other character would be matched as MyAlternative.
The question of precedence within alternatives is meaningless. It doesn't matter whether MyAlternative considers the match of an a to be a match of a or a match of .. It is, in the end, a match of MyAlternative, and that symbol can only have one associated action.
Between lexical rules, there is a precedence relationship: The first one wins. (More accurately, as far as I know, Antlr obeys the usual convention that the longest match wins; between two rules which both match the same longest sequence, the first one in the grammar file wins.) That is not in any way influenced by alternative bars in the rules themselves.
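To make the tie-breaking concrete, here is a toy maximal-munch tokenizer in Python (my own sketch; Antlr's real lexer is not implemented like this) that applies the two rules above in file order:

import re

# The two lexical rules, in file order. A tie in match length is broken
# in favour of the rule that appears first, just as in Antlr.
RULES = [
    ('MyAlternative',  re.compile(r"a|.")),   # 'a' | .   (same as just .)
    ('MyAlternative2', re.compile(r"a|b")),   # 'a' | 'b'
]

def tokenize(text):
    pos, tokens = 0, []
    while pos < len(text):
        best = None  # (match length, rule name)
        for name, pattern in RULES:
            m = pattern.match(text, pos)
            # replace 'best' only on a strictly longer match,
            # so the first rule wins ties
            if m and (best is None or len(m.group()) > best[0]):
                best = (len(m.group()), name)
        if best is None:
            raise SyntaxError(f"no rule matches at position {pos}")
        length, name = best
        tokens.append((name, text[pos:pos + length]))
        pos += length
    return tokens

print(tokenize("abx"))
# [('MyAlternative', 'a'), ('MyAlternative', 'b'), ('MyAlternative', 'x')]
# MyAlternative2 never wins; swap the two entries in RULES and 'a' and 'b'
# come out as MyAlternative2 instead, exactly as described above.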

Related

Are the following grammars LL(1)?

S -> Abc|aAcb
A -> b|c|ε
I think the first one is LL(1)
S -> aAS|b
A -> a|bSA
But the problem is the second one. There's no conflict problem, but I think the right recursion may be an issue.
I'm not sure about these problems.
The first grammar is not LL(1) because of several conflicts. For example, for the input bc an LL parser needs 2 tokens of look-ahead to parse it:
enter in rule S
enter in rule A
recognize character b
load the next token c
exit rule A
another b is expected, but the current token is c
go back into A
move one token backwards, again to b
exit rule A without recognizing anything (because of the epsilon)
recognize the b character after the reference to rule A that was "jumped" without the use of any token
load the next token c
recognize c
success
You have a similar case in the second alternative of S for an input acb. The grammar is not ambiguous, because in the end there is only one possible syntax tree. It's not LL(1); it is in fact LL(2).
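If you want to check the first grammar mechanically, here is a sketch of the textbook FIRST/FOLLOW construction in Python (my own code, not taken from any tool); it builds the LL(1) table and prints the collisions:

# Grammar 1: S -> A b c | a A c b ;  A -> b | c | epsilon
# Upper-case symbols are non-terminals; [] is the epsilon alternative.
GRAMMAR = {
    'S': [['A', 'b', 'c'], ['a', 'A', 'c', 'b']],
    'A': [['b'], ['c'], []],
}
EPS = ''  # stands for the empty string

FIRST = {nt: set() for nt in GRAMMAR}

def first_of(seq):
    # FIRST set of a sequence of grammar symbols
    out = set()
    for sym in seq:
        if sym in GRAMMAR:
            out |= FIRST[sym] - {EPS}
            if EPS not in FIRST[sym]:
                return out
        else:
            out.add(sym)
            return out
    return out | {EPS}  # every symbol in seq can derive epsilon

changed = True
while changed:  # fixpoint iteration for FIRST
    changed = False
    for nt, alts in GRAMMAR.items():
        for alt in alts:
            new = first_of(alt) - FIRST[nt]
            if new:
                FIRST[nt] |= new
                changed = True

FOLLOW = {nt: set() for nt in GRAMMAR}
FOLLOW['S'].add('$')  # end-of-input marker
changed = True
while changed:  # fixpoint iteration for FOLLOW
    changed = False
    for nt, alts in GRAMMAR.items():
        for alt in alts:
            for i, sym in enumerate(alt):
                if sym in GRAMMAR:
                    rest = first_of(alt[i + 1:])
                    new = (rest - {EPS}) | (FOLLOW[nt] if EPS in rest else set())
                    if new - FOLLOW[sym]:
                        FOLLOW[sym] |= new
                        changed = True

# Build the LL(1) table; two alternatives landing in one cell is a conflict.
for nt, alts in GRAMMAR.items():
    table = {}
    for alt in alts:
        f = first_of(alt)
        lookaheads = (f - {EPS}) | (FOLLOW[nt] if EPS in f else set())
        for tok in lookaheads:
            if tok in table:
                print(f"LL(1) conflict in {nt} on {tok!r}: {table[tok]} vs {alt}")
            table[tok] = alt
# Prints conflicts for A on 'b' and on 'c': the epsilon alternative
# collides with A -> b and with A -> c.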
The second grammar is deterministic - there is only one way to parse any input that is valid according to the grammar. This means that it can be parsed by an LL(1) parser.
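To see that concretely, here is a minimal recursive-descent parser for the second grammar (again my own sketch, not generated code); every decision is made by peeking at a single character, which is exactly the LL(1) property:

# Recursive-descent parser for:  S -> a A S | b   and   A -> a | b S A
def parse(text):
    pos = 0

    def peek():
        return text[pos] if pos < len(text) else None

    def eat(ch):
        nonlocal pos
        if peek() != ch:
            raise SyntaxError(f"expected {ch!r} at position {pos}")
        pos += 1

    def S():
        if peek() == 'a':   # FIRST(a A S) = {a}
            eat('a'); A(); S()
        else:               # FIRST(b) = {b}
            eat('b')

    def A():
        if peek() == 'a':   # FIRST(a) = {a}
            eat('a')
        else:               # FIRST(b S A) = {b}
            eat('b'); S(); A()

    S()
    if pos != len(text):
        raise SyntaxError("unconsumed input")
    return True

print(parse("aab"))    # S -> a A S with A -> a and the final S -> b
print(parse("abbab"))  # S -> a A S with A -> b S A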
I have made a tool (Tunnel Grammar Studio) that detects conflicts in nondeterministic grammars and generates parsers. This grammar in an ABNF-like (RFC 5234) syntax is:
S = 'a' A S / 'b'
A = 'a' / 'b' S A
The right recursion by itself does not create ambiguities inside the grammar. One way to get a right-recursion ambiguity is to have some dangling element, as in this grammar:
S = 'c' / 'a' S 0*1 'b'
You can read it as: rule S recognizes the character c, or the character a followed by rule S itself, optionally (zero or one time) followed by the character b.
This grammar has a right-recursion-related ambiguity because of the dangling b character: for an input aacb there is more than one way to parse it. Recognize the first a in S; enter into S again; recognize the second a; enter into S once more and recognize c; exit S one time; then there are two choices:
case one) recognize the b character and exit S two times, or case two) exit S one time first and then recognize the b character. (The original answer shows the two resulting trees as screenshots from the visual debugger of TGS.)
This grammar is thus ambiguous (and therefore not LL(1)) because more than one syntax tree can be generated for some valid inputs. For this input there are only two possible trees, but for the input aaacb there are three (one b and three places to attach it), just as there are three trees for aaacbb (three possible places for the two b characters: two of them hold a b and one remains empty). For the input aaacbbb there is of course only one possible syntax tree, but a grammar is defined to be ambiguous if there is at least one input for which there is more than one possible syntax tree.
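You can verify those tree counts with a few lines of brute force (my own sketch; it has nothing to do with how TGS works):

# Count the parse trees of an input for  S = 'c' / 'a' S 0*1 'b'
def trees(s):
    if s == 'c':
        return 1             # S -> 'c'
    if not s.startswith('a'):
        return 0
    n = trees(s[1:])         # S -> 'a' S      (no 'b' attached at this level)
    if len(s) > 1 and s.endswith('b'):
        n += trees(s[1:-1])  # S -> 'a' S 'b'  (a 'b' attached at this level)
    return n

for w in ['aacb', 'aaacb', 'aaacbb', 'aaacbbb']:
    print(w, trees(w))   # prints 2, 3, 3 and 1, matching the counts above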

Is there any formal explanation why a lexer rule defined first is not visible to a parser rule defined later?

The initial title of this question was: Why does my lexer rule not work until I change it to a parser rule? The contents below relate to that question; I then found new information and changed the title.
My Antlr grammar (only the Spaces rule and its use is important).
Because my input comes from an OCR source there can be multiple whitespace characters, but on the other hand I need to recognize the spaces, because they have meaning for the text structure.
For this reason in my grammar I defined
Spaces: Space (Space Space?)?;
but this throws an error - the whitespace is not recognized.
So when I replace it with a parser rule (lowercase!) in my grammar
spaces: Space (Space Space?)?;
the error seems to be solved (subsequent errors appear - not part of this question).
So why is the error solved then in this concrete case when using a parser rule instead of a lexer rule?
And in general - when to use a lexer rule and when a parser rule?
Thank you, guys!
A single space is being recognized as a Space and not as a Spaces, since it matches both lexical rules and Space comes first in the grammar file. (You can see that token type 1 is being recognized; Spaces would be type 9 by my count.)
Antlr uses the common "maximal munch" lexical strategy, in which the token recognized corresponds to the longest possible match, with ties between two patterns that match the same longest sequence broken by their order in the file. When you put Spaces first in the file, it wins the tie. If you make it a parser rule instead of a lexical rule, then it gets applied after the unambiguous lexical rule for Space.
Do you really only want to allow up to 3 spaces? Otherwise, you could just ditch Space and define Spaces as ' '+.

ANTLR4 '+' operation

I'm using ANTLR4 for a class I'm taking right now and I seem to understand most of it, but I can't figure out what '+' does. All I can tell is that it's usually after a set of characters in brackets.
The plus is one of the BNF operators in ANTLR that let you specify the cardinality of an expression. There are 3 of them: plus, star (a.k.a. the Kleene operator) and question mark. The meanings are easy to remember:
Question mark stands for: zero or one
Plus stands for: one or more
Star stands for: zero or more
Such an operator applies to the expression that directly precedes it, e.g. ab+ (one a and one or more b), [AB]? (zero or one of either A or B) or a (b | c | d)* (a followed by zero or more occurrences of either b, c or d).
ANTLR4 also uses a special construct to denote ungreedy matches. The syntax is one of the BNF operators followed by a question mark (+?, *?, ??). This is useful when you have an introducing match, then arbitrary content, and then a closing token. Take for instance a string (quote, any characters, quote). With a greedy match, ANTLR4 would match multiple strings as one (running on until the final quote in the input). An ungreedy match, however, only matches up to the first end token it finds (here the quote char).
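The same greedy/ungreedy distinction exists in regular expressions, so it is easy to demonstrate outside of a grammar (a Python sketch; the corresponding ungreedy ANTLR4 string rule would use .*? between the quotes):

import re

text = '"first" and "second"'

# Greedy: .* runs on to the last quote, swallowing both strings as one.
print(re.findall(r'"(.*)"', text))   # ['first" and "second']

# Ungreedy: .*? stops at the first closing quote.
print(re.findall(r'"(.*?)"', text))  # ['first', 'second']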
Side note: I don't know what ?? could be useful for, since it matches at most a single entry, so greediness doesn't play a role there.
Actually, these operators are not part of traditional BNF, but rather of the Extended Backus-Naur Form (EBNF). They are one of the reasons it's easier (or even possible) to express certain grammars in EBNF than in old-school BNF, which lacks many of these operators.

Is it logical for several parser reduction functions to get called along the way of reducing one expression?

I wrote a parser that analyzes code and reduces it (pretty much) as a lex/yacc-generated parser would.
I am wondering about one aspect of the matter. If I have a set of rules such as the following:
unary: IDENTIFIER
| IDENTIFIER '(' expr_list ')'
The first rule, with just an IDENTIFIER, can be reduced as soon as an identifier is found. However, the second rule can only be reduced if the input also includes a valid list of expressions written between parentheses.
How is a parser expected to work in a case like this one?
If I reduce the first identifier immediately, I can keep the result and throw it away if I realize that the second rule does match. If the second rule does not match, then I can return the result of the early reduction.
This also means that both reduction functions are going to be called if the second rule applies.
Are we instead expected to hold off on the early reduction and apply it only if the second, longer rule does not apply?
For those interested, I put a more complete version of my parser grammar in this answer: https://codereview.stackexchange.com/questions/41769/numeric-expression-parser-calculator/41786#41786
Bottom-up parsers (like those produced by bison and yacc) do not reduce until they reach the end of a production. They don't have to guess which reduction they will use until they need it. So it's really not an issue to have two productions with the same prefix. In this sense, the algorithm is radically different from a top-down algorithm, as used in recursive-descent parsing, for example.
In order for the fragment which you provide to be parseable by an LALR(1) parser-generator -- that is, a bottom-up parser with the ability to examine only one (1) token beyond the end of a production -- the grammar must be such that there is no place in which unary might be followed by (. As long as that is true, the fact that the parser can see a ( is sufficient to prevent the unit reduction unary: IDENTIFIER from happening in a context in which it should reduce with the other unary production.
(That's a bit of an oversimplification, but I don't think that it would be correct to reproduce a standard text on LALR parsing here on SO.)
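Here is a toy illustration of that "don't commit until you must" behaviour (my own sketch, nothing like a real LALR engine; it only works because, as explained above, unary can never be followed by a ( in such a grammar):

# After shifting IDENTIFIER, one token of lookahead decides which
# 'unary' production applies; no early reduction is ever performed.
def parse_unary(tokens, pos=0):
    assert tokens[pos] == 'IDENTIFIER'
    pos += 1
    lookahead = tokens[pos] if pos < len(tokens) else '$'
    if lookahead == '(':
        pos += 1
        while tokens[pos] != ')':  # stand-in for parsing expr_list
            pos += 1
        return "unary: IDENTIFIER '(' expr_list ')'", pos + 1
    # any other lookahead: safe to reduce unary: IDENTIFIER immediately
    return 'unary: IDENTIFIER', pos

print(parse_unary(['IDENTIFIER', '+', 'IDENTIFIER']))
# ('unary: IDENTIFIER', 1)
print(parse_unary(['IDENTIFIER', '(', 'expr', ')']))
# ("unary: IDENTIFIER '(' expr_list ')'", 4)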

Is there such thing as a left-associative prefix operator or right-associative postfix operator?

This page says "Prefix operators are usually right-associative, and postfix operators left-associative" (emphasis mine).
Are there real examples of left-associative prefix operators, or right-associative postfix operators? If not, what would a hypothetical one look like, and how would it be parsed?
It's not particularly easy to make the concepts of "left-associative" and "right-associative" precise, since they don't directly correspond to any clear grammatical feature. Still, I'll try.
Despite the lack of math layout, I tried to insert an explanation of precedence relations here, and it's the best I can do, so I won't repeat it. The basic idea is that given an operator grammar (i.e., a grammar in which no production has two consecutive non-terminals without an intervening terminal), it is possible to define precedence relations ⋖, ≐, and ⋗ between grammar symbols, and then this relation can be extended to terminals.
Put simply, if a and b are two terminals, a ⋖ b holds if there is some production in which a is followed by a non-terminal which has a derivation (possibly not immediate) in which the first terminal is b. a ⋗ b holds if there is some production in which b follows a non-terminal which has a derivation in which the last terminal is a. And a ≐ b holds if there is some production in which a and b are either consecutive or are separated by a single non-terminal. The use of symbols which look like arithmetic comparisons is unfortunate, because none of the usual arithmetic laws apply. It is not necessary (in fact, it is rare) for a ≐ a to be true; a ≐ b does not imply b ≐ a and it may be the case that both (or neither) of a ⋖ b and a ⋗ b are true.
An operator grammar is an operator-precedence grammar iff given any two terminals a and b, at most one of a ⋖ b, a ≐ b and a ⋗ b holds.
If a grammar is an operator-precedence grammar, it may be possible to find an assignment of integers to terminals which make the precedence relationships more or less correspond to integer comparisons. Precise correspondence is rarely possible, because of the rarity of a ≐ a. However, it is often possible to find two functions, f(t) and g(t) such that a ⋖ b is true if f(a) < g(b) and a ⋗ b is true if f(a) > g(b). (We don't worry about only if, because it may be the case that no relation holds between a and b, and often a ≐ b is handled with a different mechanism: indeed, it means something radically different.)
%left and %right (the yacc/bison/lemon/... declarations) construct the functions f and g. The way they do it is pretty simple. If OP (an operator) is "left-associative", that means that expr1 OP expr2 OP expr3 must be parsed as <expr1 OP expr2> OP expr3, in which case OP ⋗ OP (which you can see from the derivation). Similarly, if ROP is "right-associative", then expr1 ROP expr2 ROP expr3 must be parsed as expr1 ROP <expr2 ROP expr3>, in which case ROP ⋖ ROP.
Since f and g are separate functions, this is fine: a left-associative operator will have f(OP) > g(OP) while a right-associative operator will have f(ROP) < g(ROP). This can easily be implemented by using two consecutive integers for each precedence level and assigning them to f and g in turn if the operator is right-associative, and to g and f in turn if it's left-associative. (This procedure will guarantee that f(T) is never equal to g(T). In the usual expression grammar, the only ≐ relationships are between open and close bracket-type-symbols, and these are not usually ambiguous, so in a yacc-derivative grammar it's not necessary to assign them precedence values at all. In a Floyd parser, they would be marked as ≐.)
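In code, that construction is roughly the following (a sketch; yacc's internals differ, but the f/g bookkeeping really is this simple):

# Build precedence functions f and g from %left/%right declarations,
# listed from lowest to highest precedence.
def make_fg(declarations):
    f, g = {}, {}
    level = 0
    for assoc, ops in declarations:
        lo, hi = level, level + 1  # two consecutive integers per level
        for op in ops:
            if assoc == 'left':    # f(op) > g(op), i.e. op ⋗ op
                f[op], g[op] = hi, lo
            else:                  # f(op) < g(op), i.e. op ⋖ op
                f[op], g[op] = lo, hi
        level += 2
    return f, g

f, g = make_fg([('left', ['+', '-']),
                ('left', ['*', '/']),
                ('right', ['**'])])
print(f['+'] > g['+'])    # True: '+' is left-associative ('+' ⋗ '+')
print(f['**'] < g['**'])  # True: '**' is right-associative ('**' ⋖ '**')
print(f['+'] < g['*'])    # True: '+' ⋖ '*', so '*' binds more tightly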
Now, what about prefix and postfix operators? Prefix operators are always found in a production of the form [1]:
non-terminal-1: PREFIX non-terminal-2;
There is no non-terminal preceding PREFIX so it is not possible for anything to be ⋗ PREFIX (because the definition of a ⋗ b requires that there be a non-terminal preceding b). So if PREFIX is associative at all, it must be right-associative. Similarly, postfix operators correspond to:
non-terminal-3: non-terminal-4 POSTFIX;
and thus POSTFIX, if it is associative at all, must be left-associative.
Operators may be either semantically or syntactically non-associative (in the sense that applying the operator to the result of an application of the same operator is undefined or ill-formed). For example, in C++, ++ ++ a is semantically incorrect (unless operator++() has been redefined for a in some way), but it is accepted by the grammar (in case operator++() has been redefined). On the other hand, new new T is not syntactically correct. So new is syntactically non-associative.
[1] In Floyd grammars, all non-terminals are coalesced into a single non-terminal type, usually expression. However, the definition of precedence-relations doesn't require this, so I've used different place-holders for the different non-terminal types.
There could be in principle. Consider for example the prefix unary plus and minus operators: suppose + is the identity operation and - negates a numeric value.
They are "usually" right-associative, meaning that +-1 is equivalent to +(-1), the result is minus one.
Suppose they were left-associative, then the expression +-1 would be equivalent to (+-)1.
The language would therefore have to give a meaning to the sub-expression +-. Languages "usually" don't need this to have a meaning and don't give it one, but you can probably imagine a functional language in which the result of applying the identity operator to the negation operator is an operator/function that has exactly the same effect as the negation operator. Then the result of the full expression would again be -1 for this example.
Indeed, if the result of juxtaposing functions/operators is defined to be a function/operator with the same effect as applying both in right-to-left order, then it makes no difference to the result of the expression which way you associate them. Those are just two different ways of defining that (f g)(x) == f(g(x)). If your language defines +- to mean something other than -, though, then the direction of associativity would matter (and I suspect the language would be very difficult to read for someone used to the "usual" languages...)
On the other hand, if the language doesn't allow juxtaposing operators/functions then prefix operators must be right-associative to allow the expression +-1. Disallowing juxtaposition is another way of saying that (+-) has no meaning.
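In code, the juxtaposition-as-composition idea looks like this (a Python sketch standing in for the hypothetical functional language):

# If (+-) is *defined* as the composition of '+' and '-', the
# left- and right-associative readings of +-1 give the same result.
identity = lambda x: x           # prefix '+'
negate = lambda x: -x            # prefix '-'
compose = lambda f, g: lambda x: f(g(x))

print(identity(negate(1)))           # right-associative reading +(-1): -1
print(compose(identity, negate)(1))  # left-associative reading (+-)1: -1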
I'm not aware of such a thing in a real language (e.g., one that's been used by at least a dozen people). I suspect the "usually" was merely because proving a negative is next to impossible, so it's easier to avoid arguments over trivia by not making an absolute statement.
As to how you'd theoretically do such a thing, there seem to be two possibilities. Given two prefix operators # and @ that you were going to treat as left-associative, you could parse #@a as equivalent to @(#(a)), applying the left-most operator first. At least to me, this seems like a truly dreadful idea--theoretically possible, but a language nobody should wish on even their worst enemy.
The other possibility is that #@a would be parsed as (#@)a. In this case, we'd basically compose # and @ into a single operator, which would then be applied to a.
In most typical languages, this probably wouldn't be terribly interesting (it would have essentially the same meaning as if they were right-associative). On the other hand, I can imagine a language oriented to multi-threaded programming that decreed that application of a single operator is always atomic--and when you compose two operators into a single one with the left-associative parse, the resulting fused operator is still a single, atomic operation, whereas just applying them successively wouldn't (necessarily) be.
Honestly, even that's kind of a stretch, but I can at least imagine it as a possibility.
I hate to shoot down a question that I myself asked, but having looked at the two other answers, would it be wrong to suggest that I've inadvertently asked a subjective question, and that in fact the interpretation of left-associative prefixes and right-associative postfixes is simply undefined?
Remembering that even notation as pervasive as expressions is built upon a handful of conventions, if there's an edge case that the conventions never took into account, then maybe, until some standards committee decides on a definition, it's better to simply pretend it doesn't exist.
I do not remember any left-associative prefix operators or right-associative postfix ones. But I can imagine that both could easily exist. They are not common because the natural way people read operators is: the one closest to the operand applies first.
An easy example from the C#/C++ languages:
~-3 equals 2, but
-~3 equals 4
This is because those prefix operators are right-associative: for ~-3 it means that the - operator is applied first, and then the ~ operator is applied to its result, so the whole expression is equal to 2.
Hypothetically, if those operators were left-associative, then for ~-3 the left-most operator ~ would be applied first, and after that - to its result, so the whole expression would be equal to 4.
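Python's unary operators behave the same way as in C#/C++ here, so you can check both expressions directly:

print(~-3)  # 2: '-' is applied first, then '~' to its result
print(-~3)  # 4: '~' is applied first, then '-' to its result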
[EDIT] Answering Steve Jessop:
Steve said that the meaning of "left-associativity" is that +-1 is equivalent to (+-)1.
I do not agree with this, and think it is totally wrong. To better understand left-associativity, consider the following example:
Suppose I have a hypothetical programming language with two left-associative prefix operators:
# - multiplies its operand by 3
@ - adds 7 to its operand
Then the construction #@5 in my language will be equal to (5*3)+7 == 22
If my language were right-associative (as most usual languages are), I would get (5+7)*3 == 36
Please let me know if you have any questions.
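Under this reading (the left-most prefix operator applies first), the difference is easy to simulate. This sketch uses the hypothetical # and @ operators defined above:

# '#' multiplies by 3, '@' adds 7; evaluate a chain of these prefix
# operators applied to a number, under either associativity convention.
OPS = {'#': lambda x: x * 3, '@': lambda x: x + 7}

def evaluate(expr, assoc):
    ops = expr.rstrip('0123456789')  # the leading operator characters
    value = int(expr[len(ops):])     # the numeric operand
    if assoc == 'right':
        ops = reversed(ops)  # right: the operator nearest the operand first
    for op in ops:           # left: the left-most operator first
        value = OPS[op](value)
    return value

print(evaluate('#@5', 'left'))   # (5*3)+7 == 22
print(evaluate('#@5', 'right'))  # (5+7)*3 == 36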
Hypothetical example. A language has prefix operator # and postfix operator # with the same precedence. An expression #x# would be equal to (#x)# if both operators are left-associative and to #(x#) if both operators are right-associative.
