The first time an assumption is made, a summary is output:
(%i1) assume(a>0);
(%o1) [a > 0]
But if the same assumption is given again, the output is just "redundant" instead of the summary:
(%i2) assume(a >0);
(%o2) [redundant]
I would like a second execution to give exactly the same output as the first one. Otherwise the TeXmacs session becomes cluttered with "redundant"s that are only distracting.
Calling forget first does that, but it is cumbersome to write a > 0 twice:
(%i6) forget(a > 0);
(%o6) [a > 0]
(%i7) assume(a >0);
(%o7) [a > 0]
or even
(%i5) forget(a > 0)$ assume(a >0);
(%o6) [a > 0]
Suppressing the output is an acceptable workaround:
(%i4) assume(a >0)$
But the summary is nicely readable in TeXmacs.
Is there a way to get it, even on redundant calls?
We are trying an alternation optimization strategy to solve Lyapunov problems.
We break down our decision variables into two sets, Set 1 and Set 2.
We were perplexed by how, after obtaining a solution for Set 1 and plugging those solved variables into the optimization over Set 2, the transferred values could be infeasible.
The constraints that fail are those due to the SOS coefficient matching equality constraints.
Below we print, for each failure, the constraint that failed and the value of our initial guess. The initial guess is off by only a very small amount relative to each constraint.
LinearEqualityConstraint
(2 * Symmetric(97,40) + 2 * Symmetric(96,41)) == 9.50028
[9.50027496]
LinearEqualityConstraint
(2 * Symmetric(97,47) + 2 * Symmetric(96,48)) == 234.465
[234.4647013]
LinearEqualityConstraint
(2 * Symmetric(97,54) + 2 * Symmetric(96,55)) == -234.463
[-234.46336504]
LinearEqualityConstraint
(2 * Symmetric(97,61) + 2 * Symmetric(96,62)) == 12.7962
[12.79618825]
LinearEqualityConstraint
(2 * Symmetric(97,68) + 2 * Symmetric(96,69)) == -12.7964
[-12.79637068]
LinearEqualityConstraint
(2 * Symmetric(97,75) + 2 * Symmetric(96,76)) == -51.4061
[-51.40605828]
LinearEqualityConstraint
(2 * Symmetric(97,81) + 2 * Symmetric(96,82)) == 51.406
[51.40604213]
LinearEqualityConstraint
(2 * Symmetric(97,86) + 2 * Symmetric(96,87)) == 192.794
[192.79430158]
LinearEqualityConstraint
(2 * Symmetric(97,90) + 2 * Symmetric(96,91)) == -141.924
[-141.92366183]
LinearEqualityConstraint
(2 * Symmetric(97,93) + 2 * Symmetric(96,94)) == -37.6674
[-37.66740401]
InitialGuess V_sos and
Our guess for what's happening is:
When you extract the solution from one optimization using result.GetSolution(var), you lose some precision.
Or, when you set the previous solution using prog.SetInitialGuess(np_array) you lose some precision.
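For reference, here is a minimal, self-contained sketch of the kind of hand-off described above; the toy programs, constraints, and variable names are purely illustrative, not our actual Lyapunov code:

from pydrake.solvers import MathematicalProgram, Solve

# Toy "Set 1" program standing in for the first alternation step.
prog1 = MathematicalProgram()
set1 = prog1.NewContinuousVariables(2, "set1")
prog1.AddLinearConstraint(set1[0] + set1[1] == 1)
prog1.AddQuadraticCost(set1[0]**2 + set1[1]**2)
result1 = Solve(prog1)

# The extracted values are finite-precision floats, so equality constraints
# re-imposed on them in the next program only hold up to roundoff.
set1_solution = result1.GetSolution(set1)

# Toy "Set 2" program that reuses the Set 1 values as its initial guess.
prog2 = MathematicalProgram()
set2 = prog2.NewContinuousVariables(2, "set2")
prog2.SetInitialGuess(set2, set1_solution)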
What's the solution here? Should we just keep feeding the solution back in even though it says infeasible?
This is a partial cookbook I use when debugging SOS problems, especially when working with Lyapunov problems:
Choose the right monomial basis
The main idea is to remove the 0-th order monomial 1 from the monomial basis of the sos polynomial. Here is a quick explanation:
The mathematical problem is
    find λ(x)
    s.t. −Vdot − λ(x) * (ρ − V) is sos
         λ(x) is sos
Namely, you want to prove that V ≤ ρ ⇒ Vdot ≤ 0.
So first I would suggest rewriting your dynamics to make sure that 0 is the goal state (you can always shift your state).
Second, you can see that since x = 0 is the equilibrium point, both V(0) = 0 and Vdot(0) = 0 (because x = 0 is the global minimum of V(x), hence ∂V/∂x = 0 at x = 0, indicating Vdot(0) = 0). Now your sos polynomial p(x) = −Vdot − λ(x) * (ρ − V) must satisfy p(0) = -λ(0) * ρ. But p(0) >= 0 since p is sos, while λ(0) >= 0 and ρ > 0 imply -λ(0) * ρ <= 0, so we know λ(0) = 0.
Lemma
If a sos polynomial s(x) satisfies s(0) = 0, then it admits an sos decomposition whose monomial basis does not contain the 0-th order monomial (namely 1).
Proof
Remember that s(x) is a sos polynomial, namely
s(x) = m(x)ᵀQm(x)
where m(x) contains the monomial basis, and Q is a psd matrix. Now let's decompose the monomial basis m(x) into two parts, the 0-th order monomial 1 and the remaining monomials mbar(x). For example, if m(x) = [x1, x2, 1], then mbar(x) = [x1, x2]. We also decompose the psd matrix Q accordingly
s(x) = [mbar(x)]ᵀ [Q11   Q10] [mbar(x)]
       [   1   ]  [Q10ᵀ  Q00] [   1   ]
Since s(0) = Q00 = 0 and Q is psd, we also know that Q10 = 0, so now we can use the smaller psd matrix Q11 rather than Q. Equivalently we write s(x) = mbar(x)ᵀ * Q11 * mbar(x), where mbar(x) is a monomial basis that doesn't contain the 0-th order monomial. QED.
So why is removing the 0-th order monomial from the monomial basis and using the smaller psd matrix Q11 a good idea when your sos polynomial s(x) satisfies s(0) = 0? The reason is that if the monomial basis contains 1, then your psd matrix Q has to be on the boundary of the psd cone, namely your SDP problem doesn't have a strict interior. This can lead to a violation of Slater's condition, which in turn breaks strong duality. For example, if s(x) = x², then by including the 0-th order monomial it is written as
x² = [x]ᵀ [1 0] [x]
     [1]  [0 0] [1]
And you see that the Gram matrix [[1, 0], [0, 0]] is on the boundary of the psd cone (with one eigenvalue equal to 0). But if you remove 1 from the monomial basis, then its Gram matrix is just Q11 = 1, strictly in the interior of the psd cone.
In Drake, after removing 1 from the monomial basis, you can create your sos polynomial λ(x) as
lambda_poly, lambda_gram = prog.NewSosPolynomial(monomial_basis)
where monomial_basis doesn't contain the 0-th order monomial.
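For instance, here is a small pydrake sketch of building such a basis; the indeterminates and degree are placeholders, and filtering out the constant monomial with a list comprehension is just one way to do it, not a dedicated Drake API:

import numpy as np
from pydrake.solvers import MathematicalProgram
from pydrake import symbolic as sym

prog = MathematicalProgram()
x = prog.NewIndeterminates(2, "x")

# Full monomial basis up to degree 2, then drop the 0-th order monomial 1.
full_basis = sym.MonomialBasis(sym.Variables(x), 2)
monomial_basis = np.array([m for m in full_basis if m.total_degree() > 0])

# sos polynomial lambda(x) whose Gram matrix is over the reduced basis.
lambda_poly, lambda_gram = prog.NewSosPolynomial(monomial_basis)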
Backoff during bilinear alternation
This is a typical problem in bilinear alternation. The issue is that when you solve a conic optimization problem with an objective function, the optimal solution always occurs at the boundary of the cone, namely it is very close to being infeasible. Then when you fix some variables to this solution at the cone boundary, in the next iteration the problem is very likely infeasible due to numerical roundoff error.
A typical solution is that after solving the optimization problem over variable Set 1 with an objective, you "back off" a little bit by solving a feasibility problem over Set 1. This new solution is often strictly feasible (namely it lies in the strict interior of the cone); you then pass this strictly feasible Set 1 solution to the next iteration and search over Set 2.
More concretely, suppose at one iteration you solve the following optimization problem
    min  c'*x
    s.t. constraint_on_x
and denote the optimal cost as p. Now solve a new feasibility problem
    find x
    s.t. c'*x <= p + epsilon
         constraint_on_x
where epsilon can be a small positive number. This new solution will be used in the next iteration to search for a different set of variables.
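Here is a minimal, self-contained sketch of this backoff step in pydrake; the toy LP, variable names, and epsilon value are illustrative only:

from pydrake.solvers import MathematicalProgram, Solve

def build_constraints():
    # Toy feasible set standing in for "constraint_on_x".
    prog = MathematicalProgram()
    x = prog.NewContinuousVariables(2, "x")
    prog.AddLinearConstraint(x[0] + x[1] >= 1)
    prog.AddBoundingBoxConstraint(0, 10, x)
    return prog, x

# Step 1: solve with the objective; the optimum sits on the constraint boundary.
prog, x = build_constraints()
prog.AddLinearCost(x[0] + 2 * x[1])
result = Solve(prog)
p = result.get_optimal_cost()

# Step 2: backoff -- same constraints, no objective, only require the cost to
# be within epsilon of the previous optimum.
epsilon = 1e-3
prog2, x2 = build_constraints()
prog2.AddLinearConstraint(x2[0] + 2 * x2[1] <= p + epsilon)
backoff_result = Solve(prog2)
print(backoff_result.GetSolution(x2))  # strictly feasible point to pass on

Whether the feasibility solve actually returns an interior point depends on the solver; interior-point solvers, as typically used for SDPs, tend to.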
You can check whether your solution is on the boundary of the positive semidefinite cone by checking the eigenvalues of your psd matrix. Here is the pseudo-code
import numpy as np

for binding in prog.positive_semidefinite_constraints():
    # Reshape the flat solution vector back into the square psd matrix.
    rows = binding.evaluator().matrix_rows()
    psd_sol = result.GetSolution(binding.variables()).reshape((rows, rows))
    print(f"minimal eigenvalue {np.linalg.eigvalsh(psd_sol).min()}")
You should see that before doing this "backoff" some of the minimal eigenvalues are almost 0. After the "backoff" the minimal eigenvalues get larger.
Maxima does not seem to come up with an analytic solution to this equation which includes the error function. The independent variable here is "p" and the dependent variable to be solved for is "x".
(%i3) solveexplicit:true$ ratprint:false$ fpprintprec:6$
(%i4) eqn: (sqrt(%pi)*(25*2^(3/2)*p-25*sqrt(2))*erf(1/(25*2^(3/2)*x))*x+1)/(25*p) = 0.04;
(%i5) solve (eqn, x);
(%o5) []
(%i6) eqn, [p=2,x=0.00532014],numer;
(%o6) 0.04=0.04
Any help or pointing in the right direction is appreciated.
As far as I know, Maxima can't solve equations containing erf. You can get a numerical result via find_root:
(%i5) find_root (eqn, x, 0.001, 0.999), p=2;
(%o5) 0.005320136894034347
As for symbolic solutions, I worked with the equation a little bit. One can get it into the form erf(something/x)*x = otherstuff, or equivalently erf(y) = somethingelse*y where y = something/x and somethingelse = otherstuff/something if I'm not mistaken. I don't know anything in particular about equations of that form, but perhaps you can find something.
Yes, solve can only do polynomials. I used the series expansion for small values of x and the accuracy is good enough.
(%i11) seriesE: 1$
termE: erf(x)$
for p: 1 unless p > 3 do
(termE: diff (termE, x)/p,
seriesE: seriesE + subst (x=0, termE)*x^p)$
seriesE;
(%o11) -(2*x^3)/(3*sqrt(%pi))+(2*x)/sqrt(%pi)+1
However, the "Expression longer than allowed by the configuration setting!"
I'm trying to get Maxima to perform some "abstract" Taylor series expansions, and I'm running into a simplification issue. A prototype of the problem might be the finite-difference analog of the gradient,
g(x1,dx1) := (f(x1+dx1) - f(x1))/dx1; /* dx1 is small */
taylor(g(x1,dx1), [dx1], [0], 0);
for which Maxima returns the expected first-derivative term.
So far so good. But now try the finite-difference analog of the second derivative (Hessian),
h(x1,dx1) := (f(x1+dx1) - 2*f(x1) + f(x1-dx1))/dx1^2;
taylor(h(x1,dx1), dx1, 0, 0);
for which I get a much messier, unsimplified expression,
which is not nearly as helpful.
A prototype of the "real" problem I want to solve is to compute the low-order errors of the finite-difference approximation to ∂^2 f/(∂x1 ∂x2),
(f(x1+dx1, x2+dx2) - f(x1+dx1, x2) - f(x1, x2+dx2) + f(x1, x2))/(dx1*dx2)
and to collect the terms up to second order (which involves up to 4th derivatives of f). Without reasonably effective simplification, I suspect it will be easier to do by hand than by computer algebra, so I am wondering what can be done to coax Maxima into doing the simplification for me.
Consider this example. It uses Barton Willis' pdiff package. I simplified notation a bit: moved center to [0, 0] and introduced notation for partial derivatives.
(%i1) load("pdiff") $
(%i2) matchdeclare([n, m], integerp) $
(%i3) tellsimpafter(f(0, 0), 'f00) $
(%i4) tellsimpafter(pderivop(f,n,m)(0,0), concat('f, n, m)) $
(%i5) e: (f(dx, dy) - f(dx, -dy) - f(-dx, dy) + f(-dx, -dy))/(4*dx*dy)$
(%i6) taylor(e, [dx, dy], [0, 0], 3);
(%o6)/T/ f11 + (f31*dx^2 + f13*dy^2)/6 + ...
Let's say we have a term like 1/4 * x/sqrt(2) * x^2 / 2; in Maxima.
As an output (without further modification) it gives x^3/2^(7/2).
How can I force the output format to be like x^3/(8*sqrt(2)) with usage of square roots whenever possible?
(%i1) sq2: " "(sqrt(2))$
(%i2) matchdeclare(n, lambda([n], oddp(n) and n#1))$
(%i3) defrule(r_sq2, 2^(n/2), sq2*2^((n-1)/2)) $
(%i4) e: 1/4 * x/sqrt(2) * x^2 / 2;
(%o4) x^3/2^(7/2)
(%i5) apply1(e, r_sq2);
(%o5) (sqrt(2))*x^3/16
A rule can help to insert sqrt(2). In the example I use a "null" function to prevent simplification. You can also consider box and rembox functions or leave sq2 undefined.
What am I doing wrong in this code?
atvalue(y(x),[x=0],1)$
desolve(diff(y(x),x)=y(x),y(x));
plot2d(y(x),[x,-6,6]);
Output:
plot2d: expression evaluates to non-numeric value everywhere in plotting range.
plot2d: nothing to plot
false
I want to plot y(x) which is obtained from a differential equation.
In Maxima y(x) = ... is an equation, and y(x) := ... is a function, and those two things are different. Try this:
atvalue (y(x), [x=0], 1)$
desolve (diff(y(x),x)=y(x), y(x));
define (y(x), rhs(%));
plot2d (y(x), [x, -6, 6]);
Here define(y(x), ...) is a different way to define a function. define evaluates the function body rhs(%), yielding %e^x, whereas := quotes it (which is not what you want here).
The reason is that the result you see after desolve does not mean y is defined as a function of x; in fact you get the same error if you replace y(x) with f(x) (or any other unknown function) in plot2d. See the difference:
(%i9) atvalue(y(x),[x=0],1)$
(%i10) desolve(diff(y(x),x)=y(x),y(x));
(%o10) y(x) = %e^x
(%i11) y(x);
(%o11) y(x)
(%i12) y(x):=%e^x;
(%o12) y(x) := %e^x
(%i13) y(x);
(%o13) %e^x
I don't know if there's a way to “transform” the equation (the result) into a function definition automatically. If I find a way, I will complete the answer.