Substitute a factor by a ratio in a Maxima expression

I have this expression in Maxima:
a^2/sqrt((a*cos(φ))^2+(b*sin(φ))^2)
Can I have Maxima rewrite it in terms of a factor times a/b?

v:a^2/sqrt((a*cos(φ))^2+(b*sin(φ))^2);
sqrt(trigsimp(ratsubst(k,a/b,v)^2));
Squaring first removes the square root so that ratsubst can recognize the ratio a/b; trigsimp then simplifies using sin(φ)^2 + cos(φ)^2 = 1, and the outer sqrt restores the original shape, now in terms of k.


Algorithm to remove redundant parentheses from a boolean expression

I have a boolean expression in prefix notation. Let's say it is or and A B or or C D E. When I convert it to infix notation I end up with
((A and B) or ((C or D) or E)). I want to reduce it to (A and B) or C or D or E. Should I reduce the infix notation, or is it actually easier to get the reduced expression from the prefix notation? What algorithms should I use?
Parentheses can be removed in the expression X % (X1 ? X2 ? ... ? Xn) % X(n+1), where each Xi is a parenthesized expression or a boolean value and "?" and "%" are operators, if and only if every "?" operator has precedence higher than or equal to the "%" operator.
For infix notation you would find the innermost expression, check whether its parentheses can be removed, save the result, process the parent expression, and continue until all parenthesis checks are done.
This turns into a mapping problem. Postfix notation makes parenthesis elimination easy, and translation between prefix, infix, and postfix notations is trivial.
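The precedence rule above can be sketched directly on the prefix form. This is a minimal illustration, not a full algorithm: operands are single letters, operators are only "and"/"or", and since both are associative it also drops the parentheses around (A and B), which are redundant under the stated rule. For non-associative operators you would compare with `<=` on the right child instead.

```python
PREC = {"or": 1, "and": 2}

def to_infix(tokens):
    """Convert a prefix boolean expression to infix with minimal
    parentheses. Returns (text, precedence of the top operator)."""
    tok = tokens.pop(0)
    if tok not in PREC:                # an operand such as A, B, ...
        return tok, 3                  # operands never need parentheses
    p = PREC[tok]
    left, lp = to_infix(tokens)
    right, rp = to_infix(tokens)
    # "and"/"or" are associative, so a child keeps its parentheses only
    # when its operator binds less tightly than the parent's
    if lp < p:
        left = f"({left})"
    if rp < p:
        right = f"({right})"
    return f"{left} {tok} {right}", p

print(to_infix("or and A B or or C D E".split())[0])
```

Running this on the question's example prints A and B or C or D or E.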

Left recursion parsing

Description:
While reading the book Compiler Design in C I came across the following description of a context-free grammar:
a grammar that recognizes a list of one or more statements, each of
which is an arithmetic expression followed by a semicolon. Statements are made up of a
series of semicolon-delimited expressions, each comprising a series of numbers
separated either by asterisks (for multiplication) or plus signs (for addition).
And here is the grammar:
1. statements ::= expression;
2. | expression; statements
3. expression ::= expression + term
4. | term
5. term ::= term * factor
6. | factor
7. factor ::= number
8. | (expression)
The book states that this recursive grammar has a major problem: in several productions, the non-terminal on the left-hand side also appears at the start of its own right-hand side, as in Production 3 (this property is called left recursion), and certain parsers, such as a recursive-descent parser, can't handle left-recursive productions. They just loop forever.
You can understand the problem by considering how the parser decides to apply a particular production when it is replacing a non-terminal that has more than one right-hand side. The simple case is evident in Productions 7 and 8. The parser can choose which production to apply when it's expanding a factor by looking at the next input symbol. If this symbol is a number, then the compiler applies Production 7 and replaces the factor with a number. If the next input symbol is an open parenthesis, the parser
would use Production 8. The choice between Productions 5 and 6 cannot be made in this way, however. In the case of Production 6, the right-hand side of term starts with a factor which, in turn, starts with either a number or a left parenthesis. Consequently, the
parser would like to apply Production 6 when a term is being replaced and the next input symbol is a number or left parenthesis. Production 5 (the other right-hand side) starts with a term, which can start with a factor, which can start with a number or left parenthesis, and these are the same symbols that were used to choose Production 6.
Question:
That second quote from the book got me completely lost, so let me use the example statement 5 + (7*4) + 14:
What's the difference between factor and term? using the same example
Why can't a recursive-descent parser handle left-recursion productions? (Explain second quote).
What's the difference between factor and term? using the same example
I am not using the same example, as it wouldn't give you a clear picture of what you're unsure about.
Given,
term ::= term * factor | factor
factor ::= number | (expression)
Now, suppose I ask you to find the factors and terms in the expression 2*3*4.
Multiplication being left-associative, it will be evaluated as:
(2*3)*4
As you can see, here (2*3) is the term and 4 (a number) is the factor. You can extend this approach to any depth to get an idea of what a term is.
As per the given grammar, if there is a multiplication chain in the expression, then the part to the left of the last factor is a term, which in turn splits into another term and another single factor, and so on. This is how such expressions are derived.
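The left-leaning grouping described above can be made concrete with a tiny sketch. This is not the book's parser, just an iterative reading of the rule term ::= term * factor | factor that shows 2*3*4 grouping as ((2*3)*4):

```python
def parse_term(tokens):
    """Parse  term ::= term * factor | factor  iteratively, folding
    to the left so that 2*3*4 comes out as ((2*3)*4)."""
    tokens = list(tokens)
    tree = tokens.pop(0)               # the first factor (a number)
    while tokens and tokens[0] == "*":
        tokens.pop(0)                  # consume '*'
        tree = (tree, "*", tokens.pop(0))  # previous tree becomes the term
    return tree

print(parse_term(["2", "*", "3", "*", "4"]))
```

The printed tree, (('2', '*', '3'), '*', '4'), is exactly the (2*3)*4 grouping: at each step the tree built so far plays the role of the term, and the next number is the factor.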
Why can't a recursive-descent parser handle left-recursion productions? (Explain second quote).
Your second quote is quite clear in its essence. A recursive descent parser is a kind of top-down parser built from a set of mutually recursive procedures (or a non-recursive equivalent), where each procedure usually implements one of the productions of the grammar.
It is said so because a recursive descent parser will clearly go into an infinite loop if a non-terminal keeps expanding into itself.
Even with backtracking the problem remains: when we try to expand a non-terminal, we may eventually find ourselves again trying to expand the same non-terminal without having consumed any input. Consider
A-> Ab
Here, while expanding, the non-terminal A keeps expanding into
A -> Ab -> Abb -> Abbb -> ... an infinite regress on A.
Hence, left-recursive productions must be eliminated before using a recursive-descent parser.
The rule term matches the string "1*3"; the rule factor does not (though it would match "(1*3)"). In essence each rule represents one level of precedence: expression contains the operators with the lowest precedence, term the second lowest, and factor the highest. If you're in term and you want to use an operator with lower precedence, you need to add parentheses.
If you implement a recursive descent parser using recursive functions, a rule like a ::= b "*" c | d might be implemented like this:
// Takes the entire input string and the index at which we currently are
// Returns the index after the rule was matched or throws an exception
// if the rule failed
parse_a(input, index) {
try {
after_b = parse_b(input, index)
after_star = parse_string("*", input, after_b)
after_c = parse_c(input, after_star)
return after_c
} catch(ParseFailure) {
// If one of the rules b, "*" or c did not match, try d instead
return parse_d(input, index)
}
}
Something like this would work fine (in practice you might not actually want to use recursive functions, but the approach you'd use instead would still behave similarly). Now, let's consider the left-recursive rule a ::= a "*" b | c instead:
parse_a(input, index) {
try {
after_a = parse_a(input, index)
after_star = parse_string("*", input, after_a)
after_b = parse_b(input, after_star)
return after_b
} catch(ParseFailure) {
// If one of the rules a, "*" or b did not match, try c instead
return parse_c(input, index)
}
}
Now the first thing that the function parse_a does is to call itself again at the same index. This recursive call will again call itself. And this will continue ad infinitum, or rather until the stack overflows and the whole program comes crashing down. If we use a more efficient approach instead of recursive functions, we'll actually get an infinite loop rather than a stack overflow. Either way we don't get the result we want.
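The crash is easy to reproduce in Python with a stripped-down, hypothetical parse_a that keeps only the left-recursive call; Python raises RecursionError where C would overflow the stack:

```python
import sys

def parse_a(text, index):
    # First step: call ourselves at the SAME index, before any input
    # has been consumed -- this can never terminate.
    after_a = parse_a(text, index)
    return after_a                     # never reached

sys.setrecursionlimit(500)             # keep the inevitable crash quick
try:
    parse_a("1*2*3", 0)
except RecursionError:
    print("left recursion exhausted the stack")
```

Nothing about the input matters here: the failure happens before a single token is consumed, which is exactly why lookahead cannot save a left-recursive recursive-descent parser.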

Getting symbolic values of bit vectors in Z3

I want to use Z3 for reasoning over bit vectors. In addition to the satisfiability decision, I also want the symbolic representations of the bit vectors so that I can apply my own computations on them as needed. For example:
Let,
X[3:0], Y[3:0], Z[4:0] are declared as bit vectors without initializing any value
print X[3:0]
X[3:0] <- X[3:0] >> 1 (logical shift)
print X[3:0]
Z[4:0] <- X[3:0] + Y[3:0]
print Z[4:0]
.......
Desired output (something symbolic like this):
> 2. [x3 x2 x1 x0]
> 4. [0 x3 x2 x1]
> 6. [s4 s3 s2 s1 s0]
Is it possible to have this using Z3?
In general this is not possible. After simplification of the formula, Z3 employs a bit-blaster (translation to Boolean variables) and runs a SAT solver, which (usually) returns exactly one assignment to all the Boolean variables (and thus, after translation, to the bit-vector variables).
The tactic that Z3 employs for QF_BV formulas can be seen here. For some (simple) formulas it may be sufficient in your case to extract the formula after bit-blasting; the Strategies Tutorial describes how to construct and apply such tactics.
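Outside of Z3, the kind of per-bit bookkeeping the question describes can be sketched in plain Python. This is only an illustration of the desired symbolic view (all names here are made up), not something Z3's solver will hand back:

```python
def fresh(name, width):
    """A 'symbolic bit vector': just a list of bit names, MSB first."""
    return [f"{name}{i}" for i in range(width - 1, -1, -1)]

def lshr(bits, n=1):
    """Logical shift right: constant '0' bits come in from the left."""
    return bits if n == 0 else ["0"] * n + bits[:-n]

def add(xs, ys, name="s"):
    # The inputs are uninterpreted, so each bit of the (n+1)-bit sum is
    # represented by a fresh symbol rather than a computed value.
    return fresh(name, max(len(xs), len(ys)) + 1)

X = fresh("x", 4)
print(X)            # ['x3', 'x2', 'x1', 'x0']
X = lshr(X)
print(X)            # ['0', 'x3', 'x2', 'x1']
Z = add(X, fresh("y", 4))
print(Z)            # ['s4', 's3', 's2', 's1', 's0']
```

This reproduces the desired output from the question; to relate the fresh sum symbols back to the inputs you would still need the bit-blasted formula from Z3's tactic.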

How does an LALR(1) grammar differentiate between a variable and a function call?

Given the following input:
int x = y;
and
int x = y();
Is there any way for an LALR(1) grammar to avoid a shift/reduce conflict? The conflict is deciding whether to reduce at y or to shift and continue toward (.
(This is assuming that a variable name can be any set of alphanumeric characters, and function call is any set of alphanumeric characters following by parentheses.)
It's not a shift-reduce conflict unless it is possible for an identifier to be immediately followed by a ( without being a function call. That's not normally the case, although in C-derived languages there is the problem of differentiating cast expressions (type)(value) from parenthesized function calls (function)(argument).
If your grammar does not exhibit that particular C weirdness, then the LALR(1) parser can decide between shifting and reducing based on the one token of lookahead: if the lookahead token is a (, then it shifts the identifier; otherwise, it can reduce.
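As a toy illustration (not a real LALR table, just the lookahead decision in isolation, with hypothetical token lists):

```python
def classify(tokens, i):
    """tokens[i] is an identifier; the single token of lookahead
    decides whether the parser shifts (call) or reduces (variable)."""
    lookahead = tokens[i + 1] if i + 1 < len(tokens) else None
    return "shift: function call" if lookahead == "(" else "reduce: variable"

print(classify(["int", "x", "=", "y", ";"], 3))
print(classify(["int", "x", "=", "y", "(", ")", ";"], 3))
```

For int x = y; this prints "reduce: variable", and for int x = y(); it prints "shift: function call": one token of lookahead is all the distinction needs.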

BNF grammar for left-associative operators

I have the following EBNF grammar for simple arithmetic expressions with left-associative operators:
expression:
term {+ term}
term:
factor {* factor}
factor:
number
( expression )
How can I convert this into a BNF grammar without changing the operator associativity? The following BNF grammar does not work for me, because now the operators have become right-associative:
expression:
term
term + expression
term:
factor
factor * term
factor:
number
( expression )
Wikipedia says:
Several solutions are:
rewrite the grammar to be left recursive, or
rewrite the grammar with more nonterminals to force the correct precedence/associativity, or
if using YACC or Bison, there are operator declarations, %left, %right and %nonassoc, which tell the parser generator which associativity to force.
But it does not say how to rewrite the grammar, and I don't use any parsing tools like YACC or Bison, just simple recursive descent. Is what I'm asking for even possible?
expression
: term
| expression + term;
Just that simple. You will, of course, need an LR parser of some description to recognize a left-recursive grammar directly. With recursive descent, recognizing such grammars is still possible, but not as simple as for right-recursive ones: you replace the left recursion with a loop.
Expression ParseExpr() {
Expression term = ParseTerm();
while(next_token_is_plus()) {
consume_token();
Term next = ParseTerm();
term = PlusExpression(term, next);
}
return term;
}
This pseudocode should recognize a left-recursive grammar in that style.
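Here is that pseudocode translated to runnable Python, with '-' substituted for '+' so the left-to-right grouping is visible in the numeric result (with '+', both groupings yield the same sum); the helper names from the pseudocode are replaced by a plain token list:

```python
def parse_expr(tokens):
    """Left-associative parse of  expression ::= term | expression - term
    using a loop instead of left recursion."""
    tokens = list(tokens)
    value = int(tokens.pop(0))           # ParseTerm(): a bare number here
    while tokens and tokens[0] == "-":
        tokens.pop(0)                    # consume the operator
        value -= int(tokens.pop(0))      # fold to the left
    return value

print(parse_expr("10 - 3 - 2".split()))  # (10 - 3) - 2 = 5
```

A right-associative parse of the same input would compute 10 - (3 - 2) = 9, so the loop really does preserve the left associativity of the original grammar.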
What Puppy suggests can also be expressed by the following grammar:
expression: term opt_add
opt_add: '+' term opt_add
| /* empty */
term: factor opt_mul
opt_mul: '*' factor opt_mul
| /* empty */
factor: number
| '(' expression ')'
