In Maxima, how to check whether a series converges? - maxima

I would like to create a function that returns true precisely when a series diverges to infinity.
Unfortunately, Maxima seems to throw an error when a series diverges. I loaded the package simplify_sum and hoped that simplify_sum(sum(1/n,n,1,inf)) would return inf, but instead it returns
sum: sum is divergent.
#0: simplify_sum(expr='sum(1/n,n,1,inf))
-- an error. To debug this try: debugmode(true);
How can I check whether a series diverges to infinity?

I realized this cannot be done in general, as it is an undecidable problem.
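Still, if all you need is to detect programmatically that simplify_sum signalled its "sum is divergent" error, you can trap the error with errcatch. A minimal sketch (only a heuristic, not a convergence test: the hypothetical helper diverges_p simply equates "simplify_sum raised an error" with "divergent"):
load(simplify_sum)$
errormsg: false$                          /* optional: suppress the printed error text */
diverges_p(e) := emptyp(errcatch(simplify_sum(e)));
diverges_p('sum(1/n, n, 1, inf));         /* true: the harmonic series triggers the error */
diverges_p('sum(1/n^2, n, 1, inf));       /* false: simplify_sum returns %pi^2/6 */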

Related

SOS polynomial with Drake and Mosek yields error code 1501

I am trying to solve a MathematicalProgram with an SOSPolynomial. I am running Drake in C++ compiled from source with Mosek.
The MathematicalProgram contains a quadratic cost function and some equality constraints, which works fine when calling Solve() without adding the SOS polynomials. When looking at result.get_solver_id(), I find: "Equality constrained QP", as expected.
However, upon calling Solve(), after adding an SOS polynomial through prog.NewSosPolynomial({t}, degree) (with t being a decision variable) the program returns that a solution could not be found. When looking at the value found in result_.get_solution_result(), I find solution_status = false and rescode = 1501.
Looking up the code in the Mosek documentation, rescode = 1501 means: "The problem contains nonlinear terms conic constraints. The requested operation cannot be applied to this type of problem.". However, by checking the value of result.get_solver_id() before adding the SOS polynomial, it is clear that there are no other nonlinear constraints in the problem.
Am I missing something here, or is this a bug?
Interesting. The standard form for a semi-definite program (which results from our SOS constraint) only accepts linear objectives, not quadratic objectives. This does not result in any loss of generality, because you can use a slack variable. Can you try the following:
Right now you have something like
min x'Qx
s.t. Ax=b, p(t) is SOS.
Can you write it instead as
min a
s.t. Ax=b, p(t) is SOS, x'Qx <= a
but add the x'Qx <= a using AddLorentzConeConstraint? (Note: it looks like you might actually use x'Qx <= a^2 and a >= 0.)
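For what it's worth, a sketch of why that last note works (assuming Q is positive semidefinite, so it factors as Q = L'L): x'Qx <= a^2 together with a >= 0 is the same as ||L*x|| <= a, and that is exactly a Lorentz (second-order) cone constraint on the stacked vector (a, L*x), which the conic solver can handle alongside the semidefinite constraint coming from the SOS condition.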

Z3 getting last valid model

I am using the Z3 C++ API to find a satisfiable formula that is minimal with respect to some boolean variables (let us call them b0,...,bn) being true.
I have a formula that includes boolean variables b0,...,bn and I want to find some satisfiable formula where I have the least number of b0,...,bn set to true.
I do this by initially finding a subset of b0,...,bn that can be assigned to true and satisfy my formula, and I incrementally ask the solver to find smaller subsets (i.e. where one of these boolean variables is flipped to false).
I find my local minimum when I cannot find a smaller subset, i.e. when I get an unsat result from Z3. At this point, I would like to access the last valid model.
Is that possible? Does Z3 modify the model when a call to "check" is unsat?
If so, how can I do this using the C++ api?
Many thanks in advance,
You can retrieve a model if the solver returns "sat". The model refers to the state of the solver, so if you add assertions, the state changes and models are no longer valid until you check satisfiability and it returns sat.
So you can retrieve a model every time the solver returns sat, and then discard all but the last model.
As Nikolaj mentioned, you need to keep track of models after each call that results in sat and return the last one when you get an unsat if you follow the strategy you outlined.
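A minimal sketch of that bookkeeping with the C++ API (the shrink callback is a placeholder for however you assert that a strictly smaller subset of b0,...,bn is true; it is not part of the Z3 API):
#include <optional>
#include <functional>
#include "z3++.h"

// Returns the last model found before the solver goes unsat
// (empty if the very first check is already unsat).
std::optional<z3::model> last_sat_model(z3::solver& s,
    const std::function<z3::expr(const z3::model&)>& shrink) {
  std::optional<z3::model> last;
  while (s.check() == z3::sat) {
    last = s.get_model();   // copy it now; later add/check calls will not preserve it
    s.add(shrink(*last));   // constrain the next round to a smaller true-subset
  }
  return last;
}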
However, there might be another alternative that avoids repeated calls altogether. Instead of a satisfaction problem, you can cast your problem as an optimization one. You mentioned you have control variables b0, b1, .. bn such that you want to minimize the number of them getting set to true for a satisfying model. Create a metric that counts the number of ones in these variables. Something like:
metric = (if b0 then 1 else 0)
+ (if b1 then 1 else 0)
+ ...
+ (if bn then 1 else 0)
Then use Z3's optimization routines to minimize metric. I believe this will provide you with the solution you are looking for in one call only.
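In C++ that could look roughly like the following (my_formula and the number of b variables are placeholders; soft constraints via opt.add_soft, as in the tutorial section linked below, are an equivalent alternative):
#include "z3++.h"

int main() {
  z3::context c;
  z3::optimize opt(c);
  z3::expr b0 = c.bool_const("b0");
  z3::expr b1 = c.bool_const("b1");               // ... and so on up to bn
  // opt.add(my_formula);                         // placeholder: your original constraints
  z3::expr one = c.int_val(1), zero = c.int_val(0);
  z3::expr metric = z3::ite(b0, one, zero)
                  + z3::ite(b1, one, zero);       // + ... one term per b variable
  opt.minimize(metric);
  if (opt.check() == z3::sat) {
    z3::model m = opt.get_model();                // a model with the fewest b's set to true
  }
  return 0;
}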
Some helpful references:
Here's the Z3 optimization tutorial: http://rise4fun.com/z3opt/tutorialcontent/guide.
This example, in particular, talks about soft-constraints, and might well be applicable in your case as well: http://rise4fun.com/z3opt/tutorialcontent/guide#h25.
Here's the C++ API reference for the optimizer: http://z3prover.github.io/api/html/classz3_1_1optimize.html.

Non-Evaluation of Numerical Expression in Maxima

I start with a simple Maxima question, the answer to which may provide the answer to the actual problem I'm grappling with.
Related Simple Question:
How can I get Maxima to calculate:
bfloat((1+%i)^0.3);
Might there be an option variable that can be set so that this evaluates to a complex number?
Actual Question:
In evaluating approximations for numerical time integration for finite element methods, I'm using spectral analysis, which requires the calculation of the eigenvalues of a 4 x 4 matrix. This matrix "cav" is also calculated within Maxima, using some of Maxima's algebra capabilities, but substituting numerical values, so that the matrix is entirely numerical, i.e. it contains no variables. I've calculated the eigenvalues with Mathematica and it returns 4 real eigenvalues. However, Maxima produces horrendously complicated expressions for this case, which it apparently does not "know" how to simplify, even numerically as "bigfloat". Perhaps this problem arises because Maxima first approximates the matrix "cav" by rational numbers (i.e. fractions) and then tries to solve the problem fully exactly, instead of simply using numerical "bigfloat" computations throughout. Is there a way I can change this?
Note that if you only change the input value of gzv to, say, 0.5, it works fine and returns numerical values of complex eigenvalues.
I include the code below. Note that all of the code up until "cav:subst(vs,ca)$" is just for the definition of the matrix cav and seems to work fine. It is in the few statements thereafter that it fails to calculate numerical values for the eigenvalues.
v1:v0+ (1-gg)*a0+gg*a1$
d1:d0+v0+(1/2-gb)*a0+gb*a1$
obf:a1+(1+ga)*(w^2*d1 + 2*gz*w*(d1-d0)) -
ga *(w^2*d0 + 2*gz*w*(d0-g0))$
obf:expand(obf)$
cd:subst([a1=1,d0=0,v0=0,a0=0,g0=0],obf)$
fd:subst([a1=0,d0=1,v0=0,a0=0,g0=0],obf)$
fv:subst([a1=0,d0=0,v0=1,a0=0,g0=0],obf)$
fa:subst([a1=0,d0=0,v0=0,a0=1,g0=0],obf)$
fg:subst([a1=0,d0=0,v0=0,a0=0,g0=1],obf)$
f:[fd,fv,fa,fg]$
cad1:expand(cd*[1,1,1/2-gb,0] - gb*f)$
cad2:expand(cd*[0,1,1-gg,0] - gg*f)$
cad3:expand(-f)$
cad4:[cd,0,0,0]$
cad:matrix(cad1,cad2,cad3,cad4)$
gav:-0.05$
ggv:1/2-gav$
gbv:(ggv+1/2)^2/4$
gzv:1.1$
dt:0.01$
wv:bfloat(dt*2*%pi)$
vs:[ga=gav,gg=ggv,gb=gbv,gz=gzv,w=wv]$
cav:subst(vs,ca)$
cav:bfloat(cav)$
evam:eigenvalues(cav)$
evam:bfloat(evam)$
eva:evam[1]$
The main problem here is that Maxima tries pretty hard to make computations exact, and it's hard to tell it to ease up and allow inexact results.
Is there a mistake in the code you posted above? You have cav:subst(vs,ca) but ca is not defined. Is that supposed to be cav:subst(vs,cad) ?
For the short problem, usually rectform can simplify complex expressions to something more usable:
(%i58) rectform (bfloat((1+%i)^0.3));
`rat' replaced 1.0B0 by 1/1 = 1.0B0
(%o58) 2.59023849130283b-1 %i + 1.078911979230303b0
About the long problem, if fixed-precision (i.e. ordinary floats, not bigfloats) is acceptable to you, then you can use the LAPACK function dgeev to compute eigenvalues and/or eigenvectors.
(%i51) load (lapack);
<bunch of messages here>
(%o51) /usr/share/maxima/5.39.0/share/lapack/lapack.mac
(%i52) dgeev (cav);
(%o52) [[- 0.02759949957202372, 0.06804641655485913, 0.997993508502892, 0.928429191717788], false, false]
If you really need variable precision, I don't know what to try. In principle it's possible to rework the LAPACK code to work with variable-precision floats, but that's a substantial task and I'm not sure about the details.
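For example, assuming ca was indeed meant to be cad, something along these lines should give machine-precision eigenvalues (a sketch, untested against your full script):
load(lapack)$
cav : float(subst(vs, cad))$   /* float() so the matrix holds machine floats, not bigfloats */
dgeev(cav);                    /* returns [eigenvalues, false, false] */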

How to fix NaN or Inf when we use cross entropy with Theano?

We have to compute:
y*log(y_compute) + (1-y)*log(1-y_compute)
so when y_compute becomes exactly 1.0 or 0.0, this problem shows up. What should I do to avoid it?
Your expression y_compute may contain an exponential, e.g. coming from theano.tensor.nnet.sigmoid? In that case, it should usually never reach exactly 0 or 1, and you can then just use your expression or theano.tensor.nnet.crossentropy_categorical_1hot directly.
If for whatever reason you have exact 0 and 1, another way is to clip the input to the crossentropy. Try e.g. replacing y_compute with theano.tensor.clip(y_compute, 0.001, 0.999), knowing that this will restrict the range of the logarithm.
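A minimal sketch of the clipping approach (the variable names and the epsilon value are illustrative, not anything prescribed by Theano):
import theano.tensor as T

y = T.matrix('y')                      # targets in {0, 1}
y_compute = T.matrix('y_compute')      # predicted probabilities
eps = 1e-7                             # how far to stay away from exact 0 and 1
y_clipped = T.clip(y_compute, eps, 1 - eps)
loss = -(y * T.log(y_clipped) + (1 - y) * T.log(1 - y_clipped)).mean()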

Why does this code cause the machine to crash?

I am trying to run this code but it keeps crashing:
log10(x):=log(x)/log(10);
char(x):=floor(log10(x))+1;
mantissa(x):=x/10**char(x);
chop(x,d):=(10**char(x))*(floor(mantissa(x)*(10**d))/(10**d));
rnd(x,d):=chop(x+5*10**(char(x)-d-1),d);
d:5;
a:10;
Ibwd:[[30,rnd(integrate((x**60)/(1+10*x^2),x,0,1),d)]];
for n from 30 thru 1 step -1 do Ibwd:append([[n-1,rnd(1/(2*n-1)-a*last(first(Ibwd)),d)]],Ibwd);
Maxima crashes when it evaluates the last line. Any ideas why it may happen?
Thank you so much.
The problem is that the difference becomes negative and your rounding function dies horribly with a negative argument. To find this out, I changed your loop to:
for n from 30 thru 1 step -1 do
block([],
print (1/(2*n-1)-a*last(first(Ibwd))),
print (a*last(first(Ibwd))),
Ibwd: append([[n-1,rnd(1/(2*n-1)-a*last(first(Ibwd)),d)]],Ibwd),
print (Ibwd));
The last difference printed before everything fails miserably is -316539/6125000. So now try
rnd(-1,3)
and see the same problem. This all stems from the fact that you're taking the log of a negative number, which Maxima interprets as a complex number by analytic continuation. Maxima doesn't evaluate this until it absolutely has to and, somewhere in the evaluation code, something's dying horribly.
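To see concretely where the complex number sneaks in, look at char applied to a negative argument: with the definitions from the question, char(-1) expands to floor(log(-1)/log(10)) + 1, and Maxima simplifies log(-1) to %i*%pi, so floor (and everything built on top of it in chop and rnd) ends up being handed a complex argument.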
I don't know the "fix" for your specific example, since I'm not exactly sure what you're trying to do, but hopefully this gives you enough info to find it yourself.
If you want to deconstruct a floating point number, let's first make sure that it is a bigfloat,
say z: bfloat(34.1)
You can access the parts of a bigfloat by using lisp, and you can also access the mantissa length in bits by ?fpprec.
Thus ?second(z)*2^(?third(z)-?fpprec) gives you :
4799148352916685/140737488355328
and bfloat(%) gives you :
3.41b1.
If you want the mantissa of z as an integer, look at ?second(z)
Now I am not sure what it is that you are trying to accomplish in base 10, but Maxima does not do internal arithmetic in base 10.
If you want more bits or fewer, you can set fpprec, which is linked to ?fpprec. fpprec is the "approximate base 10" precision. Thus fpprec is initially 16, and ?fpprec is correspondingly 56. You can easily change them both, e.g. fpprec:100 corresponds to ?fpprec of 335.
If you are diddling around with float representations, you might benefit from knowing that you can look at the underlying Lisp form of any object by typing, for example,
?print(z)
which prints the internal form using the Lisp print function.
You can also trace any function, your own or a system function, with trace.
For example you could consider doing this:
trace(append,rnd,integrate);
If you want to use machine floats, I suggest you use, for the last line,
for n from 30 thru 1 step -1 do
    Ibwd:append([[n-1,rnd(1/(2.0*n-1.0)-a*last(first(Ibwd)),d)]],Ibwd);
Note the decimal points. But even that is not quite enough, because integration inserts exact structures like atan(10). Trying to round these things, or compute log of them, is probably not what you want to do. I suspect that Maxima is unhappy because log is given some messy expression that turns out to be negative, even though it initially thought otherwise. It hands the number to the Lisp log program, which is perfectly happy to return an appropriate Common Lisp complex number object. Unfortunately, most of Maxima was written BEFORE LISP HAD COMPLEX NUMBERS.
Thus the result (log -0.5)= #C(-0.6931472 3.1415927) is entirely unexpected to the rest of Maxima. Maxima has its own form for complex numbers, e.g. 3+4*%i.
In particular, the Maxima display program predates the common lisp complex number format and does not know what to do with it.
The error (stack overflow !!!) is from the display program trying to display a common lisp complex number.
How to fix all this? Well, you could try changing your program so it computes what you really want, in which case it probably won't trigger this error. Maxima's display program should be fixed, too. Also, I suspect there is something unfortunate in simplification of logs of numbers that are negative but not obviously so.
This is probably waaay too much information for the original poster, but maybe the paragraph above will help out and also possibly improve Maxima in one or more places.
It appears that your program triggers an error in Maxima's simplification (algebraic identities) code. We are investigating and I hope we have a bug fix soon.
In the meantime, here is an idea. Looks like the bug is triggered by rnd(x, d) when x < 0. I guess rnd is supposed to round x to d digits. To handle x < 0, try this:
rnd(x, d) := if x < 0 then -rnd1(-x, d) else rnd1(x, d);
rnd1(x, d) := (... put the present definition of rnd here ...);
When I do that, the loop runs to completion and Ibwd is a list of values, but I don't know what values to expect.
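For reference, the patched pair written out in full is just the rnd definition from the question renamed to rnd1, plus the sign-handling wrapper:
rnd1(x, d) := chop(x + 5*10**(char(x)-d-1), d);
rnd(x, d)  := if x < 0 then -rnd1(-x, d) else rnd1(x, d);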
