Pseudo Code for converting infix to postfix - stack

I'm struggling to get the pseudocode for this. So far I have:
Scan the string left to right, and for each character:
If it is an operand, append it to the output string
Else if it is an operator, push it onto the stack
....
What I'm struggling with is how to handle parentheses.

Have you tried these links yet?
http://www.geocities.com/e_i_search/premshree/web-include/pub/infix-postfix/index.htm
http://code.activestate.com/recipes/228915-infixpostfix/

A '(' goes onto the stack; then, when you get to ')', you pop from the stack until you find the matching '('.
Wikipedia has a more detailed description of the algorithm, supporting functions as well as operators.

Scan the input string from left to right, character by character.
If the character is an operand, append it to the output.
If the character is an operator and the operator stack is empty, push it onto the operator stack.
If the operator stack is not empty, there are the following possibilities.
If the scanned operator has higher precedence than the operator on top of the stack, push it onto the operator stack.
If the scanned operator has lower or equal precedence, pop operators off the stack and append them to the output
until the operator on top of the stack has lower precedence than the scanned operator (or the stack is empty),
then push the scanned operator. Never pop a '(' in this step, whatever the precedence of the scanned operator.
If the character is an opening round bracket '(', push it onto the operator stack.
If the character is a closing round bracket ')', pop operators off the stack and append them to the output
until an opening bracket '(' is found, then discard both brackets.
When the whole string has been scanned, pop all remaining operators off the stack and append them to the output.
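
If it helps, here is a minimal Python sketch of those rules. It is only a sketch: it assumes single-character tokens, only the binary operators + - * / ^ (with ^ right-associative), and no unary operators.

PRECEDENCE = {'+': 1, '-': 1, '*': 2, '/': 2, '^': 3}
RIGHT_ASSOC = {'^'}

def to_postfix(expr):
    output = []   # the output, built as a list of tokens
    stack = []    # the operator stack
    for ch in expr:
        if ch.isspace():
            continue
        if ch.isalnum():                      # operand: straight to the output
            output.append(ch)
        elif ch == '(':                       # '(' always goes onto the stack
            stack.append(ch)
        elif ch == ')':                       # pop until the matching '('
            while stack and stack[-1] != '(':
                output.append(stack.pop())
            if not stack:
                raise ValueError("mismatched ')'")
            stack.pop()                       # discard the '(' itself
        elif ch in PRECEDENCE:                # operator: pop higher/equal precedence first
            while (stack and stack[-1] != '('
                   and (PRECEDENCE[stack[-1]] > PRECEDENCE[ch]
                        or (PRECEDENCE[stack[-1]] == PRECEDENCE[ch]
                            and ch not in RIGHT_ASSOC))):
                output.append(stack.pop())
            stack.append(ch)
        else:
            raise ValueError("unexpected character " + repr(ch))
    while stack:                              # flush whatever operators remain
        if stack[-1] == '(':
            raise ValueError("mismatched '('")
        output.append(stack.pop())
    return ' '.join(output)

print(to_postfix('A-B+C*(D/E)'))   # A B - C D E / * +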

I am a bit rusty at this, but when you encounter a '(', you push it onto the stack because it has the highest precedence. A ')' does not go onto the stack: when you encounter it, you pop operators off the stack into the output until you reach the matching '(', and then discard both parentheses.

Related

ANTLR: Why is this grammar rule for tuples not LL(1)?

I have the following grammar rules defined to cover tuples of the form: (a), (a,), (a,b), (a,b,) and so on. However, antlr3 gives the warning:
"Decision can match input such as "COMMA" using multiple alternatives: 1, 2
I believe this means that my grammar is not LL(1). This caught me by surprise as, based on my extremely limited understanding of this topic, the parser would only need to look one token ahead from (COMMA)? to ')' in order to know which comma it was on.
Also based on the discussion I found here I am further confused: Amend JSON - based grammar to allow for trailing comma
And their source code here: https://github.com/doctrine/annotations/blob/1.13.x/lib/Doctrine/Common/Annotations/DocParser.php#L1307
Is this because of the kind of parser that antlr is trying to generate and not because my grammar isn't LL(1)? Any insight would be appreciated.
options {k=1; backtrack=no;}
tuple : '(' IDENT (COMMA IDENT)* (COMMA)? ')';
DIGIT : '0'..'9' ;
LOWER : 'a'..'z' ;
UPPER : 'A'..'Z' ;
IDENT : (LOWER | UPPER | '_') (LOWER | UPPER | '_' | DIGIT)* ;
edit: changed typo in tuple: ... from (IDENT)? to (COMMA)?
Note:
The question has been edited since this answer was written. In the original, the grammar had the line:
tuple : '(' IDENT (COMMA IDENT)* (IDENT)? ')';
and that's what this answer is referring to.
That grammar works without warnings, but it doesn't describe the language you intend to parse. It accepts, for example, (a, b c) but fails to accept (a, b,).
My best guess is that you actually used something like the grammars in the links you provide, in which the final optional element is a comma, not an identifier:
tuple : '(' IDENT (COMMA IDENT)* (COMMA)? ')';
That does give the warning you indicate, and it won't match (a,) (for example), because, as the warning says, the second alternative has been disabled.
LL(1) as a property of formal grammars only applies to grammars with fixed right-hand sides, as opposed to the "Extended" BNF used by many top-down parser generators, including Antlr, in which a right-hand side can be a set of possibilities. It's possible to expand EBNF using additional non-terminals for each subrule (although there is not necessarily a canonical expansion, and expansions might differ in their parsing category). But, informally, we could extend the concept of LL(k) by saying that in every EBNF right-hand side, at every point where there is more than one alternative, the parser must be able to predict the appropriate alternative looking only at the next k tokens.
You're right that the grammar you provide is LL(1) in that sense. When the parser has just seen IDENT, it has three clear alternatives, each marked by a different lookahead token:
COMMA ↠ predict another repetition of (COMMA IDENT).
IDENT ↠ predict (IDENT).
')' ↠ predict an empty (IDENT)?.
But in the correct grammar (with my modification above), IDENT is a syntax error and COMMA could be either another repetition of ( COMMA IDENT ), or it could be the COMMA in ( COMMA )?.
You could change k=1 to k=2, thereby allowing the parser to examine the next two tokens, and if you did so it would compile with no warnings. In effect, that grammar is LL(2).
You could make an LL(1) grammar by left-factoring the expansion of the EBNF, but it's not going to be as pretty (or as easy for a reader to understand). So if you have a parser generator which can cope with the grammar as written, you might as well not worry about it.
But, for what it's worth, here's a possible solution:
tuple : '(' idents ')' ;
idents : IDENT ( COMMA ( idents )? )? ;
Untested because I don't have a working Antlr3 installation, but it at least compiles the grammar without warnings. Sorry if there is a problem.
It would probably be better to use tuple : '(' (idents)? ')'; in order to allow empty tuples. Also, there's no obvious reason to insist on COMMA instead of just using ',', assuming that '(' and ')' work as expected on Antlr3.
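
To see why the left-factored rule needs only one token of lookahead, here is a hand-written recursive-descent sketch of it in Python (not ANTLR); the function and token names are invented for illustration.

def parse_tuple(tokens):
    # Implements:  tuple  : '(' idents ')' ;
    #              idents : IDENT ( COMMA ( idents )? )? ;
    # tokens is a list of strings, e.g. ['(', 'a', ',', 'b', ',', ')'].
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None   # exactly 1 token of lookahead

    def take(expected=None):
        nonlocal pos
        tok = peek()
        if expected is not None and tok != expected:
            raise SyntaxError("expected " + repr(expected) + ", got " + repr(tok))
        pos += 1
        return tok

    def idents():
        if peek() in ('(', ')', ',', None):
            raise SyntaxError("expected identifier, got " + repr(peek()))
        take()                        # the IDENT
        if peek() == ',':             # one token decides: stop, or take the COMMA
            take(',')
            if peek() != ')':         # one token decides: trailing comma, or more idents
                idents()

    take('(')
    idents()
    take(')')
    if peek() is not None:
        raise SyntaxError("trailing input")

parse_tuple(['(', 'a', ',', 'b', ',', ')'])   # (a,b,) -- accepted
parse_tuple(['(', 'a', ')'])                  # (a)    -- accepted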

How is the conditional operator parsed?

So, the cppreference claims:
The expression in the middle of the conditional operator (between ? and :) is parsed as if parenthesized: its precedence relative to ?: is ignored.
However, it appears to me that the part of the expression after the ':' operator is also parsed as if it were between parentheses. I've tried to implement the ternary operator in my programming language (and you can see the results of parsing expressions here), and my parser pretends that the part of the expression after ':' is also parenthesized. For example, for the expression (1?1:0?2:0)-1, the interpreter for my programming language outputs 0, and this appears to be compatible with C. For instance, the C program:
#include <stdio.h>
int main() {
    printf("%d\n", (1?1:0?2:0)-1);
}
Outputs 0.
Had I programmed the parser of my language so that, when parsing the ternary operator, it simply took the first already-parsed node after ':' as the third operand of '?:', it would output the same as ((1?1:0)?2:0)-1, that is, 1.
My question is whether this (pretending that the expression after the ':' is parenthesized) would always be compatible with C.
"Pretends that it is parenthesised" is some kind of description of operator parenthesis. But of course that has to be interpreted relative to precedence relations (including associativity). So in a-b*c and a*b-c, the subtraction effectively acts as though its arguments are parenthesised, only the left-hand argument is treated that way in a-b-c and it is the comparison operator which causes grouping in a<b-c and a-b<c.
I'm sure you know all that since your parser seems to work for all these cases, but I say that because the ternary operator is right-associative and of lower precedence than any other operator [Note 1]. That means that the pseudo-parentheses imposed by operator precedence surround the right-hand argument (regardless of its dominating operator, since all operators have higher precedence), and also the left-hand argument unless its dominating operator is another conditional operator. But that wouldn't be the case in C, where the comma operator has lower precedence and would not be enclosed by the imaginary parentheses following the :.
It's important to understand what is meant by the precedence of a complex operator. In effect, to compute the precedence relations we first collapse the operator to a simple ?: which includes the enclosed (second) argument. This is not "as if the expression were parenthesized", because it is parenthesized. It is parenthesized between ? and :, which in this context are syntactically parenthetic.
In this sense, it is very similar to the usual analysis of the subscript operator as a postfix operator, although the brackets of the subscript operator enclose a second argument. The precedence of the subscript operator is logically what would result from considering it to be a single [], abstracting away the expression contained inside. This is also the same as the function call operator. That happens to be written with parentheses, but the precise symbols are not important: it is possible to imagine an alternative language in which function calls are written with different symbols, perhaps { and }. That wouldn't affect the grammar at all.
It might seem odd to think of ? and : to be "parenthetic", since they don't look parenthetic. But a parser doesn't see the shapes of the symbols. It is satisfied by being told that a ( is closed by a ) and, in this case, that a ? is closed by a :. [Note 2]
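
As a concrete illustration (a Python precedence-climbing sketch with an invented operator set, not the asker's actual parser), the middle operand of ?: is parsed as a complete expression, exactly as if it were parenthesised by ? and :, while the right-hand operand is parsed at the low precedence of ?: itself:

import re

BINARY = {'<': 1, '+': 2, '-': 2, '*': 3, '/': 3}   # higher number binds tighter
TERNARY_PREC = 0                                    # ?: is lowest and right-associative

def tokenize(s):
    return re.findall(r'\d+|[-+*/<?:()]', s)

def parse(tokens):
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def take():
        nonlocal pos
        pos += 1
        return tokens[pos - 1]

    def primary():
        if peek() == '(':
            take()
            node = expression(0)
            assert take() == ')'
            return node
        return int(take())

    def expression(min_prec):
        node = primary()
        while True:
            tok = peek()
            if tok in BINARY and BINARY[tok] >= min_prec:
                take()
                node = (tok, node, expression(BINARY[tok] + 1))   # left-associative
            elif tok == '?' and TERNARY_PREC >= min_prec:
                take()
                middle = expression(0)            # "as if parenthesised": a full expression
                assert take() == ':'
                right = expression(TERNARY_PREC)  # right operand at ?:'s own low precedence
                node = ('?:', node, middle, right)
            else:
                return node

    return expression(0)

print(parse(tokenize('(1?1:0?2:0)-1')))
# ('-', ('?:', 1, 1, ('?:', 0, 2, 0)), 1) -- groups like ((1 ? 1 : (0 ? 2 : 0)) - 1), as in C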
Having said all that, I tried your compiler on the conditional expression
d = 0 ? 0 : n / d
It parses this expression correctly, but the compiled code computes n / d before verifying whether d = 0 is true. That's not the way the conditional operator should work; in this case, it will lead to an unexpected divide by 0 exception. The conditional operator must first evaluate its left-hand argument, and then evaluate exactly one of the other two expressions.
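
Continuing the Python sketch from above (the node shapes are the ones that sketch produces; a real compiler would emit a branch instead), the evaluator has to look at the condition first and then touch exactly one of the two branches:

def evaluate(node):
    if isinstance(node, int):
        return node
    op, *args = node
    if op == '?:':
        cond, if_true, if_false = args
        # Evaluate the condition, then exactly one branch -- never both.
        return evaluate(if_true) if evaluate(cond) else evaluate(if_false)
    a, b = evaluate(args[0]), evaluate(args[1])
    return {'+': a + b, '-': a - b, '*': a * b, '/': a // b, '<': int(a < b)}[op]

print(evaluate(parse(tokenize('(1?1:0?2:0)-1'))))   # 0, matching C
print(evaluate(parse(tokenize('1?0:5/0'))))         # 0; the division is never evaluated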
Notes:
In C, this is not quite correct. The comma operator has lower precedence, and there is a more complex interaction with assignment operators, which logically have the same precedence and are also right-associative.
In C-like languages those symbols are not used for any other purpose, so it's OK to just regard them as strange-looking parentheses and leave it at that. But as the case of the function-call operator shows (or, for that matter, the unary - operator), it is sometimes possible to reuse operator symbols for more than one purpose.
As a curiosity, it is not strictly necessary that open and close parentheses be different symbols, as long as they are not used for any other purpose. So, for example, if | is not used as an operator symbol (as it is in C), then you could use | a | to mean the absolute value of a without creating any ambiguities.
A precise analysis of the circumstances in which symbol reuse leads to actual ambiguities is beyond the scope of this answer.

Clarification regarding conversion to postfix

So the expression A - B + C * (D / E) converts to A B - C D E / * +.
I thought that when converting infix to postfix you keep operators on the stack until you see an operator with lower precedence after them, or until the end of the expression, and only then write them all out.
But the minus sign has the same precedence as the plus, so why is the minus sign written to the output instead of being kept on the stack?
I think there's something wrong with my understanding of the conversion method. Please provide an explanation, preferably with a step-by-step solution. I am really confused about what happens to the minus sign, as I understand the conversion differently. Thank you so much.
The Shunting yard algorithm rules state:
if the token is an operator, then:
while ((there is a function at the top of the operator stack)
or (there is an operator at the top of the operator stack with greater precedence)
or (the operator at the top of the operator stack has equal precedence and is left associative))
and (the operator at the top of the operator stack is not a left parenthesis):
pop operators from the operator stack onto the output queue.
Note the second or condition: the operator at the top of the operator stack has equal precedence and is left associative.
The + and - operators have equal precedence, and are left-associative. Therefore, you would remove the - operator when you see the +.
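To make that concrete, here is a step-by-step trace of the conversion of A - B + C * (D / E) (the top of the operator stack is at the right):

token   action                                      output             stack
A       operand: append to output                   A
-       operator, stack empty: push                 A                  -
B       operand: append to output                   A B                -
+       '-' has equal precedence and is left-
        associative: pop '-', then push '+'         A B -              +
C       operand: append to output                   A B - C            +
*       higher precedence than '+': push            A B - C            + *
(       push                                        A B - C            + * (
D       operand: append to output                   A B - C D          + * (
/       top of stack is '(': push                   A B - C D          + * ( /
E       operand: append to output                   A B - C D E        + * ( /
)       pop until '(': pop '/', discard '('         A B - C D E /      + *
end     pop the rest: '*', then '+'                 A B - C D E / * +

The '-' leaves the stack at the '+' step precisely because of the equal-precedence, left-associative rule quoted above.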
See also https://www.geeksforgeeks.org/operator-precedence-and-associativity-in-c/. Although that's specific to C, most languages use the same precedence and associativity for common operators.

Shift/Reduce conflict in CUP

I'm trying to write a parser for a javascript-ish language with JFlex and Cup, but I'm having some issues with those deadly shift/reduce problems and reduce/reduce problems.
I have searched thoroughly and have found tons of examples, but I'm not able to extrapolate from them to my grammar. My understanding so far is that these conflicts arise because the parser cannot decide which path to take, since what it has seen so far does not distinguish them.
My grammar is the following one:
start with INPUT;
INPUT::= PROGRAM;
PROGRAM::= FUNCTION NEWLINE PROGRAM
| NEWLINE PROGRAM;
FUNCTION ::= function OPTIONAL id p_izq ARG p_der NEWLINE l_izq NEWLINE BODY l_der;
OPTIONAL ::=
| TYPE;
TYPE::= integer
| boolean;
ARG ::=
| TYPE id MORE_ARGS;
MORE_ARGS ::=
| colon TYPE id MORE_ARGS;
NEWLINE ::= salto NEWLINE
| ;
BODY ::= ;
I'm getting several conflicts, but these two are representative examples:
Warning : *** Shift/Reduce conflict found in state #5
between NEWLINE ::= (*)
and NEWLINE ::= (*) salto NEWLINE
under symbol salto
Resolved in favor of shifting.
Warning : *** Shift/Reduce conflict found in state #0
between NEWLINE ::= (*)
and FUNCTION ::= (*) function OPTIONAL id p_izq ARG p_der NEWLINE l_izq NEWLINE BODY l_der
under symbol function
Resolved in favor of shifting.
PS: The grammar is far more complex, but I think that if I see how these shift/reduce problems are solved, I'll be able to fix the rest.
Thanks for your answers.
PROGRAM is useless (in the technical sense). That is, it cannot produce any sentence because in
PROGRAM::= FUNCTION NEWLINE PROGRAM
| NEWLINE PROGRAM;
both productions for PROGRAM are recursive. For a non-terminal to be useful, it needs to be able to eventually produce some string of terminals, and for it to do that, it must have at least one non-recursive production; otherwise, the recursion can never terminate. I'm surprised CUP didn't mention this to you. Or maybe it did, and you chose to ignore the warning.
That's a problem -- useless non-terminals really cannot ever match anything, so they will eventually result in a parse error -- but it's not the parsing conflict you're reporting. The conflicts come from another feature of the same production, which is related to the fact that you can't divide by 0.
The thing about nothing is that any number of nothings are still nothing. So if you had plenty of nothings and someone came along and asked you exactly how many nothings you had, you'd have a bit of a problem, because to get "plenty" back from "0 * plenty", you'd have to compute "0 / 0", and that's not a well-defined value. (If you had plenty of two's, and someone asked you how many two's you had, that wouldn't be a problem: suppose the plenty of two's worked out to 40; you could easily compute that 40 / 2 = 20, which works out perfectly because 20 * 2 = 40.)
So here we don't have arithmetic, we have strings of symbols. And unfortunately the string containing no symbols is really invisible, like 0 was for all those millennia until some mathematician noticed the value of being able to write nothing.
Where this is all coming around to is that you have
PROGRAM::= ... something ...
| NEWLINE PROGRAM;
But NEWLINE is allowed to produce nothing.
NEWLINE ::= salto NEWLINE
| ;
So the second recursive production of PROGRAM might add nothing. And it might add nothing plenty of times, because the production is recursive. But the parser needs to be deterministic: it needs to know exactly how many nothings are present, so that it can reduce each nothing to a NEWLINE non-terminal and then reduce a new PROGRAM non-terminal. And it really doesn't know how many nothings to add.
In short, optional nothings and repeated nothings are ambiguous. If you are going to insert a nothing into your language, you need to make sure that there are a fixed finite number of nothings, because that's the only way the parser can unambiguously dissect nothingness.
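To see the problem concretely, look at two leftmost derivations of a sentential form beginning with salto (in the second one, an invisible, empty NEWLINE is reduced first):

PROGRAM ⇒ NEWLINE PROGRAM ⇒ salto NEWLINE PROGRAM ⇒ salto PROGRAM ⇒ ...
PROGRAM ⇒ NEWLINE PROGRAM ⇒ PROGRAM ⇒ NEWLINE PROGRAM ⇒ salto NEWLINE PROGRAM ⇒ salto PROGRAM ⇒ ...

Both consume the same salto, and you can insert as many empty NEWLINEs as you like in front of it, so the parser has no way to decide how many times to reduce the empty production.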
Now, the only point of that particular recursion (as far as I can see) is to allow empty newline-terminated statements (which are visible because of the newline, but do nothing). You could get the same effect by changing the recursion so that it cannot derive nothing:
PROGRAM ::= ...
| salto PROGRAM;
Although it's not relevant to your current problem, I feel obliged to mention that CUP is an LALR parser generator, and all that stuff you might have learned or read on the internet about recursive descent parsers not being able to handle left recursion does not apply. (I deleted a rant about the way parsing techniques are taught, so you'll have to try to recover it from the hints I've left behind.) Bottom-up parser generators like CUP and yacc/bison love left recursion. They can handle right recursion, too, of course, but reluctantly, because they need to waste stack space for every recursion other than left recursion. So there is no need to warp your grammar in order to deal with the deficiency; just write the grammar naturally and be happy. (So you rarely if ever need non-terminals representing "the rest of" something.)
P.S.: "Plenty of nothing" is a culturally specific reference to a song from the 1935 opera Porgy and Bess.

Handling extra operators in Shunting-yard

Given an input like this: 3+4+
The algorithm turns it into 3 4 + +.
I can find the error when it's time to execute the postfix expression.
But, is it possible to spot this during the conversion?
(The Wikipedia article and the other articles I've read do not handle this case.)
Thank you
Valid expressions can be validated with a regular expression, aside from parenthesis mismatching. (Mismatched parentheses will be caught by the shunting-yard algorithm as indicated in the wikipedia page, so I'm ignoring those.)
The regular expression is as follows:
PRE* OP POST* (INF PRE* OP POST*)*
where:
PRE is a prefix operator or (
POST is a postfix operator or )
INF is an infix operator
OP is an operand (a literal or a variable name)
The Wikipedia page you linked includes function calls but not array subscripting; if you want to handle these cases, then:
PRE is a prefix operator or (
POST is a postfix operator or ) or ]
INF is an infix operator or ( or ,
OP is an operand, including function names
One thing to note in the above is that PRE and INF are in disjoint contexts; that is, if PRE is valid, then INF is not, and vice versa. This implies that using the same symbol for both a prefix operator and an infix operator is unambiguous. Also, PRE and POST are in disjoint contexts, so you can use the same symbol for prefix and postfix operators. However, you cannot use the same symbol for postfix and infix operators. As examples, consider the C/C++ operators:
- prefix or infix
+ prefix or infix
-- prefix or postfix
++ prefix or postfix
I implicitly used this feature above to allow '(' to be used both as an expression grouper (effectively prefix) and as a function call (infix, because it comes between the function name and the argument list; the operator is "call").
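
For what it's worth, here is one way to run that check in Python before (or instead of) the conversion. It is only a sketch: tokens are single characters, the operator sets are placeholders you would adapt to your own grammar, and parenthesis balancing is still left to the shunting-yard algorithm, as noted above.

import re

def classify(token):
    # Map each token to its class; extend these sets for your own operators.
    if token.isalnum():   return 'O'   # OP: operand
    if token == '(':      return 'P'   # PRE: '(' (add prefix operators here)
    if token == ')':      return 'Q'   # POST: ')' (add postfix operators here)
    if token in '+-*/^':  return 'I'   # INF: infix operator
    raise ValueError("unknown token " + repr(token))

# PRE* OP POST* ( INF PRE* OP POST* )*
VALID = re.compile(r'P*OQ*(IP*OQ*)*$')

def well_formed(expr):
    classes = ''.join(classify(ch) for ch in expr if not ch.isspace())
    return VALID.match(classes) is not None

print(well_formed('3+4'))       # True
print(well_formed('3+4+'))      # False -- caught without evaluating anything
print(well_formed('(3+4)*2'))   # True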
It's most common to implement that regular expression as a state machine, because there are only two states:
+-------+              +-------+
| State |-----OP------>| State |
|   1   |<----INF------|   2   |
|       |---+          |       |---+
+-------+   |          +-------+   |
    ^      PRE             ^      POST
    |       |              |       |
    +-------+              +-------+
We could call State 1 "want operand" and State 2 "have operand". A simple implementation would just split the shunting-yard algorithm as presented in Wikipedia into two loops, something like this (if you don't like goto, it can be eliminated, but it really is the clearest way to present a state machine):
want_operand:
  read a token. If there are no more tokens, announce an error and stop.
  if the token is a prefix operator or an '(':
    mark it as prefix and push it onto the operator stack
    goto want_operand
  if the token is an operand (a literal or a variable name):
    add it to the output queue
    goto have_operand
  if the token is anything else, announce an error and stop. (**)

have_operand:
  read a token
  if there are no more tokens:
    pop all operators off the stack, adding each one to the output queue.
    if a '(' is found on the stack, announce an error and stop;
    otherwise the conversion is complete.
  if the token is a postfix operator:
    mark it as postfix and add it to the output queue
    goto have_operand
  if the token is a ')':
    while the top of the stack is not '(':
      pop an operator off the stack and add it to the output queue
      if the stack becomes empty, announce an error and stop.
    if the '(' is marked infix, add a "call" operator to the output queue (*)
    pop the '(' off the top of the stack
    goto have_operand
  if the token is a ',':
    while the top of the stack is not '(':
      pop an operator off the stack and add it to the output queue
      if the stack becomes empty, announce an error and stop.
    goto want_operand
  if the token is an infix operator:
    (see the Wikipedia entry for precedence handling)
    (normally, all prefix operators are considered to have higher precedence
     than infix operators)
    goto want_operand
  otherwise, the token is an operand; announce an error and stop.
(*) The procedure as described above does not deal gracefully with parameter lists;
when the postfix expression is being evaluated and a "call" operator is found, it's
not clear how many arguments need to be evaluated. It might be that function names
are clearly identifiable, so that the evaluator can just attach arguments to the
"call" until a function name is found. But a cleaner solution is for the "," handler
to increment the argument count of the "call" operator (that is, the open
parenthesis marked as "infix"). The ")" handler also needs to increment the
argument count.
(**) The state machine as presented does not correctly handle function calls with 0
arguments. This call will show up as a ")" read in the want_operand state and with
a "call" operator on top of the stack. If the "call" operator is marked with an
argument count, as above, then the argument count must be 0 when the ")" is read,
and in this case, unlike the have_operand case, the argument count must not be
incremented.
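
If you only want the error detection and not the full conversion, the two-state check above fits in a few lines of Python. This is just a sketch, again assuming single-character tokens and only the operators listed; it reports the 3+4+ error as soon as the input ends:

def check_expression(tokens):
    # Raise ValueError as soon as the token stream cannot be a valid expression.
    PREFIX, POSTFIX, INFIX = set('('), set(')'), set('+-*/^')
    depth = 0                      # parenthesis nesting, so '(' and ')' stay balanced
    state = 'want_operand'
    for tok in tokens:
        if state == 'want_operand':
            if tok in PREFIX:
                depth += 1
            elif tok.isalnum():
                state = 'have_operand'
            else:
                raise ValueError("expected an operand, got " + repr(tok))
        else:  # have_operand
            if tok in POSTFIX:
                depth -= 1
                if depth < 0:
                    raise ValueError("unbalanced ')'")
            elif tok in INFIX:
                state = 'want_operand'
            else:
                raise ValueError("expected an operator or ')', got " + repr(tok))
    if state == 'want_operand':
        raise ValueError("expression ends with a dangling operator")
    if depth != 0:
        raise ValueError("unbalanced '('")

check_expression('3+4')                  # passes silently
try:
    check_expression('3+4+')
except ValueError as error:
    print(error)                         # expression ends with a dangling operator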
