I got confused about how to handle unary operators when evaluating a postfix expression.
I searched for it but did not find any relevant details.
Related
I am currently looking through the first page of the documentation for Lua and noticed that every assignment appears as var ::= Name; however, I could not find any reference to the syntax of ::= itself. The documentation goes over the structure of an assignment but glosses over these symbols. What I want to know is whether every assignment requires the :: before the actual assignment operator and, if so, why it is structured this way and not just a plain =.
What you're seeing is not Lua code, but a fragment of the grammar of the Lua language, as defined in Backus-Naur Form. The ::= operator is part of BNF.
It's the definition operator typically used in formal grammars - it reads "is defined as".
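For example, the production var ::= Name just says "a var consists of a Name"; in actual Lua source you would write an ordinary assignment such as x = 10, with a plain =.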
I'm following along with Bob Nystrom's great book "Crafting Interpreters".
Please let me know if this question is too specific for this site - I've been trying for hours but couldn't figure this out on my own :)
In the chapter Compiling Expressions, in the function unary(), parsePrecedence(Precedence) is called with PREC_UNARY instead of PREC_UNARY + 1.
The book explains this is in order to enable "nesting" of unary operators. E.g.: --1.
However, in parsePrecedence(Precedence) no precedence level is checked before parsing prefix operators - it is checked only before infix ones. And unary is a prefix parser.
So passing PREC_UNARY or PREC_UNARY + 1 to parsePrecedence(Precedence) doesn't seem to make a difference. What am I missing?
The simple answer is that you are right: with this particular grammar, there is no difference because no binary (or postfix) operator has precedence PREC_UNARY, and the test that will be used is ≤.
All the same, the conventional answer is to use PREC_UNARY because unary prefix operators are (necessarily) right associative. This convention comes from the case of binary operators, where you need to use the operator's precedence plus one for left associative operators (the normal case) and the operator's precedence itself for right-associative operators (exponentiation and assignment, for example). (Assignment is actually somewhat more complicated, but I personally think the solution proposed by Bob Nystrom is more complicated than would have been necessary.)
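To make that concrete, here is a minimal Pratt-style sketch (in Python rather than the book's C; the token handling and precedence levels are simplified assumptions, not clox code). The precedence argument is only consulted in the infix loop, which is why PREC_UNARY and PREC_UNARY + 1 behave the same for this grammar, while recursing at the operator's own precedence is what makes prefix operators right-associative by convention.

# Minimal Pratt-parser sketch (Python; simplified, not the book's clox code).
PREC_NONE, PREC_TERM, PREC_FACTOR, PREC_UNARY, PREC_PRIMARY = range(5)

# Infix (binary) operator precedences; no binary operator sits at PREC_UNARY.
INFIX_PREC = {'+': PREC_TERM, '*': PREC_FACTOR}

class Parser:
    def __init__(self, tokens):
        self.tokens = tokens
        self.pos = 0

    def peek(self):
        return self.tokens[self.pos] if self.pos < len(self.tokens) else None

    def advance(self):
        tok = self.peek()
        self.pos += 1
        return tok

    def parse_precedence(self, precedence):
        tok = self.advance()
        # Prefix position: note that `precedence` is NOT checked here.
        if tok == '-':
            # Recursing at PREC_UNARY (not PREC_UNARY + 1) lets another
            # leading '-' be consumed by the recursive call: right-associative.
            left = ('neg', self.parse_precedence(PREC_UNARY))
        else:
            left = ('num', float(tok))
        # Infix loop: the only place `precedence` is consulted (with <=).
        while self.peek() in INFIX_PREC and precedence <= INFIX_PREC[self.peek()]:
            op = self.advance()
            # Left-associative binary operators parse the right-hand side
            # one level tighter than their own precedence.
            right = self.parse_precedence(INFIX_PREC[op] + 1)
            left = (op, left, right)
        return left

print(Parser(['-', '-', '1', '+', '2']).parse_precedence(PREC_NONE + 1))
# -> ('+', ('neg', ('neg', ('num', 1.0))), ('num', 2.0))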
Another conventional answer derives from the possibility of using a bottom-up operator precedence parser (Dijkstra's "shunting yard") instead of the top-down Pratt parser. Fully exploring bottom-up parsing goes well beyond the scope of this question; suffice it to say that the same principle applies with respect to associativity.
Context
I've recently run into an issue in a parser I'm writing that I couldn't solve by myself.
This parser is a component of a compiler I'm building, and the question concerns the expression parsing needed when parsing a programming language.
My parser uses recursive descent to parse expressions.
The problem
I parse expressions using ordinary grammar rules. I've eliminated left recursion in all my rules, but there is one syntactic "ambiguity" that my parser simply can't handle, and it involves generics.
comparison → addition ( ( ">" | ">=" | "<" | "<=" ) addition )* ;
is the rule I use for parsing comparison nodes in the expression
On the other hand I decided to parse generic expressions this way:
generic → primary ( "<" arguments ">" ) ;
where
arguments → expression ( "," expression )* ;
Now, because generic expressions have higher precedence (they are language constructs rather than mathematical expressions), this creates a scenario where the generic parser attempts to parse expressions when it shouldn't.
For example, in a<2 it will parse "a" as a primary of identifier type, immediately afterwards find the syntax for a generic type, try to parse it, and fail because it can't find the closing >.
What is the solution to such a scenario? Especially since in languages like C++ generics can also contain expressions; if I'm not mistaken, arr<1<2> might be legal syntax.
Is this a special edge case, or does it require a modification to the syntax definition that I'm not aware of?
Thank you
This particular case could be solved with backtracking or unbounded lookahead. As you said, the parser will eventually fail when interpreting this as a generic, so when that happens, you can go back and parse it as a relational operator instead. The lookahead variant would be to look ahead when seeing a < to check whether the < is followed by comma-separated type names and a > and only go into the generic rule if that is the case.
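As a hedged illustration of the backtracking variant, here is a small Python sketch (the token and node representations are invented for this example, and primary() stands in for your full primary/expression rules): after a primary, the parser tentatively consumes a generic argument list, and if that attempt fails it rewinds to the saved position so the < is parsed as a comparison instead.

# Speculative parse of a generic argument list, with backtracking on failure.
class ParseError(Exception):
    pass

class Parser:
    def __init__(self, tokens):
        self.tokens = tokens          # e.g. ['a', '<', 2, 'EOF']
        self.pos = 0

    def peek(self):
        return self.tokens[self.pos]

    def expect(self, tok):
        if self.peek() != tok:
            raise ParseError(f"expected {tok!r}, got {self.peek()!r}")
        self.pos += 1

    def primary(self):
        tok = self.peek()
        if tok not in ('<', '<=', '>', '>=', ',', 'EOF'):
            self.pos += 1
            return tok
        raise ParseError(f"unexpected token {tok!r}")

    def generic_or_primary(self):
        node = self.primary()
        if self.peek() == '<':
            saved = self.pos                  # remember where the '<' starts
            try:
                self.expect('<')
                args = [self.primary()]       # stand-in for a full argument rule
                while self.peek() == ',':
                    self.expect(',')
                    args.append(self.primary())
                self.expect('>')
                return ('generic', node, args)
            except ParseError:
                self.pos = saved              # backtrack: not a generic after all
        return node

    def comparison(self):
        node = self.generic_or_primary()
        while self.peek() in ('<', '<=', '>', '>='):
            op = self.peek()
            self.pos += 1
            node = (op, node, self.generic_or_primary())
        return node

print(Parser(['a', '<', 2, 'EOF']).comparison())          # ('<', 'a', 2)
print(Parser(['a', '<', 'T', '>', 'EOF']).comparison())   # ('generic', 'a', ['T'])

The unbounded-lookahead variant is the same idea without building nodes: scan ahead from the < to check for a plausible argument list followed by a closing > before committing to the generic rule.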
However that approach no longer works if both interpretations are syntactically valid (meaning the syntax actually is ambiguous). One example of that would be x<y>z, which could either be a declaration of a variable z of type x<y> or two comparisons. This example is somewhat unproblematic since the latter meaning is almost never the intended one, so it's okay to always interpret it as the former (this happens in C# for example).
Now if we allow expressions, it becomes more complicated. For x<y>z it's easy enough to say that this should never be interpreted as two comparisons, as it makes no sense to compare the result of a comparison with something else (in many languages using relational operators on Booleans is a type error anyway). But for something like a<b<c>() there are two interpretations that might both be valid: either a is a generic function called with the generic argument b<c, or b is a generic function with the generic argument c (and a is compared to the result of calling that function). At this point it is no longer possible to resolve that ambiguity with syntactic rules alone:
In order to support this, you'll need to either check whether the given primary refers to a generic function and make different parsing decisions based on that or have your parser generate multiple trees in case of ambiguities and then select the correct one in a later phase. The former option means that your parser needs to keep track of which generic functions are currently defined (and in scope) and then only go into the generic rule if the given primary is the name of one of those functions. Note that this becomes a lot more complicated if you allow functions to be defined after they are used.
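A minimal sketch of the first option, assuming the parser carries a set with the names of generic functions currently in scope (the set and its contents below are placeholders; a real parser would populate it from declarations as scopes open and close):

# Sketch: decide between "generic call" and "comparison" from a symbol table.
# `generic_functions_in_scope` is a placeholder for information the parser
# would normally gather from declarations as it enters and leaves scopes.
generic_functions_in_scope = {'identity', 'make_pair'}

def should_try_generic(primary_name, next_token):
    # Only enter the generic rule when the name is known to be a generic
    # function; otherwise '<' is parsed as the comparison operator.
    return next_token == '<' and primary_name in generic_functions_in_scope

print(should_try_generic('identity', '<'))  # True  -> parse a generic argument list
print(should_try_generic('a', '<'))         # False -> parse '<' as a comparison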
So, in summary, supporting expressions as generic arguments requires you either to keep track of which functions are in scope while parsing and use that information to make your parsing decisions (meaning your parser is context sensitive) or to generate multiple possible ASTs. Without expressions you can keep it context free and unambiguous, but it will require backtracking or arbitrary lookahead (meaning it's LL(*)).
Since neither of those are ideal, some languages change the syntax for calling generic functions with explicit type parameters to make it LL(1). For example:
Java puts the generic argument list of a method before the method name, i.e. obj.<T>foo() instead of obj.foo<T>().
Rust requires :: before the generic argument list: foo::<T>() instead of foo<T>().
Scala uses square brackets for generics and for nothing else (array subscripts use parentheses): foo[T]() instead of foo<T>().
What are the advantages of postfix expressions over prefix expressions? What disadvantages of prefix expressions can be removed by using postfix expressions?
You can see here
Postfix and prefix notations have similar complexity; postfix is slightly easier to evaluate in simple circumstances, as the operators really are evaluated strictly left to right.
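To make the left-to-right claim concrete, and to tie it back to the unary-operator question at the top, here is a small postfix evaluator sketch in Python. It assumes unary minus was emitted as a distinct token, here 'neg' (the token name is an arbitrary choice for this example), so the evaluator knows to pop one operand for it instead of two; that is one common way to handle unary operators in postfix.

# Postfix evaluation sketch: a single left-to-right pass over the tokens.
# Unary minus is assumed to be the distinct token 'neg', so the evaluator
# pops one operand for it, while '+', '-', '*', '/' pop two.
def eval_postfix(tokens):
    stack = []
    binary = {
        '+': lambda a, b: a + b,
        '-': lambda a, b: a - b,
        '*': lambda a, b: a * b,
        '/': lambda a, b: a / b,
    }
    for tok in tokens:
        if tok == 'neg':                 # unary operator: one operand
            stack.append(-stack.pop())
        elif tok in binary:              # binary operator: two operands
            b = stack.pop()
            a = stack.pop()
            stack.append(binary[tok](a, b))
        else:                            # operand
            stack.append(float(tok))
    return stack.pop()

# "3 + -4 * 2" in postfix with an explicit unary-minus token:
print(eval_postfix(['3', '4', 'neg', '2', '*', '+']))   # -5.0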
For more about infix, prefix and postfix expressions you can read this.
I have a grammar which has the following productions:
S -> if e then S else S | while e do S | begin L end | s
L -> S ; L | S
I am supposed to construct the operator precedence parsing table for the above, but I'm a little confused about how to decide the precedence of the various terminals here. Until now we only worked with ordinary operators (like +, *, (, id, etc.). But how do I decide it in this case? I googled for how to parse an if-else grammar using an operator precedence parser, but couldn't find any link explaining it. I actually need to design the error-correcting routines for parsing this grammar using an operator precedence parser and an SLR parser. Any help will be appreciated (a question from the compiler design book by Aho and Ullman)!
Thanks in advance!!
Answering my own question for people who want to learn: read this PDF. It presents a method for doing operator precedence parsing that covers general operators.