Maxima solve returns no solutions

I defined a function like this:
f(x):=(4*x^4+7*x^3+(-3)*x)/(2*x^2+5)
And then assign the derivative to df like this:
df(x):=''(diff(f(x), x))
Maxima then prints this as the calculated derivative:
df(x):=(16*x^3+21*x^2-3)/(2*x^2+5)-(4*x*(4*x^4+7*x^3-3*x))/(2*x^2+5)^2
Then I try to solve the derivative for df(x)=0 to find stationary points of f:
solve(df(x)=0, x);
But instead of solutions, Maxima gives me this:
[0=16*x^5+14*x^4+80*x^3+111*x^2-15]
This suggests that there are no solutions. But if I plot the function df, it crosses the x-axis three times, so clearly there are three points where df(x)=0. Why can't Maxima find them? Am I doing something wrong?

Setting df(x) = 0 reduces to a quintic (i.e., degree 5) polynomial equation, and a general quintic has no solution in terms of radicals. There are solvable quintics, although I suspect Maxima cannot determine whether a given quintic is solvable or not. For more about the general theory of quintics, take a look at https://en.wikipedia.org/wiki/Quintic_function#Finding_roots_of_a_quintic_equation .
I think a workable approach is to look for numerical approximations. Take a look at the Maxima functions realroots and allroots.
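For example (a sketch; p is just a name here for the polynomial that solve returned):

p: 16*x^5+14*x^4+80*x^3+111*x^2-15$
allroots(p = 0);   /* numeric approximations of all five roots */
realroots(p);      /* rational isolating approximations of the real roots only */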

Related

How does multicollinearity affect the model?

I took 4 features, all identical (X1 = X2 = X3 = X4), and the target is Y = X1.
I am wondering how multicollinearity affects the coefficients of the model. I trained an sklearn linear regression model with this data, and it seems to have no effect on the coefficients. Please help me understand this.
To understand the problem with multicollinearity, we first need to understand what a slope is. The slope is how much y changes for a unit change in x while the rest of the features are held constant. Suppose you want to predict y with two features:
y = m1*x1 + m2*x2 + b (ideal equation of a line)
If there is multicollinearity in the above equation and we try to change x1, then x2 will also change, because the two are correlated. This makes it impossible to attribute the change in y to x1 or x2 individually, so the fitted coefficients become unstable and hard to interpret, even though the prediction of y itself may remain accurate.
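Here is a small sketch of that experiment (the data is illustrative). With four perfectly duplicated columns the least-squares system has infinitely many solutions; the solver picks one of them (typically the minimum-norm split), so the printed coefficients are one arbitrary but valid choice:

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
x1 = rng.normal(size=(100, 1))
X = np.hstack([x1, x1, x1, x1])  # X1 = X2 = X3 = X4: perfect multicollinearity
y = x1.ravel()                   # Y = X1

model = LinearRegression().fit(X, y)
print(model.coef_)        # one of infinitely many splits summing to 1, e.g. [0.25 0.25 0.25 0.25]
print(model.score(X, y))  # R^2 = 1.0: the predictions themselves are unaffected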

Obtaining representation of SMT formula as SAT formula

I've come up with an SMT formula in Z3 which outputs one solution to a constraint-solving problem using only BitVectors and IntVectors of fixed length. The logic I use for the IntVectors is only simple Presburger arithmetic (constraints of the form x[i] - x[i+1] <= z or x[i] - x[i+1] >= z for some x and z). I also take the sum of all the bits in the bitvector (NOT its binary value) and constrain that sum to lie within a range [a, b].
This works perfectly. The only problem is that, since Z3 always takes the easiest path it can find towards determining satisfiability, I always get the same answer back, whereas in my domain I'd like to find a variety of substantially different solutions (I know for a fact that multiple, very different solutions exist). I'd like to use this nifty tool I found, https://bitbucket.org/kuldeepmeel/weightgen, which lets you uniformly sample a constrained space of possibilities using SAT. To use it, though, I need to convert my SMT formula into a SAT formula.
Do you know of any resources that would help me learn how to encode Presburger arithmetic and the bit-sum of a bitvector as a SAT instance? Alternatively, do you know of any SMT solver which, as an intermediate step, outputs a readable description of the problem as a SAT instance?
Many thanks!
[Edited to reflect the fact that I really do need the uniform sampling feature.]
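One way to get a SAT-level encoding out of z3py itself is to bit-blast the goal and dump DIMACS. A sketch under assumptions not in the original thread: the popcount construction is illustrative, bounded IntVectors would be modeled as bitvectors first, and Goal.dimacs() is taken from current Z3 releases:

from z3 import *

# Toy stand-in for the bit-counting constraint: an 8-bit vector whose
# popcount must lie in [a, b].
n, a, b = 8, 2, 5
x = BitVec('x', n)

# Popcount: widen each extracted bit to 4 bits so the running sum fits.
bits = [ZeroExt(3, Extract(i, i, x)) for i in range(n)]
popcount = bits[0]
for bit in bits[1:]:
    popcount = popcount + bit

g = Goal()
g.add(ULE(BitVecVal(a, 4), popcount), ULE(popcount, BitVecVal(b, 4)))

# Bit-blast to propositional logic, convert to CNF, and print DIMACS,
# which SAT-level tools such as weightgen can consume.
t = Then('simplify', 'bit-blast', 'tseitin-cnf')
print(t(g)[0].dimacs())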

COMSOL Schrödinger Equation

I want to solve the Schrödinger equation in COMSOL with some specified boundary conditions. As an ODE, the Schrödinger equation reads (in 1D):
af''(x) + b(x)f(x) = Ef(x),
where E is an unknown constant that will be determined by the boundary conditions.
I am not used to working with COMSOL, so I don't know whether it is possible to solve this problem there. So far, all the templates for solving differential equations contain some generic form where you have to specify the value of the constant in front of each term. This does not work for the eigenvalue problem above, where E is unknown. Does anyone know how to specify the differential equation as an eigenvalue equation, where E is unknown?
I have been conducting research in the area of quantum dots and rings for over 7 years and here is what I would do.
Choose the PDE interfaces under the Mathematical models. Then pick the Coefficient Form PDE (c). Then choose Eigenvalue from the Preset Studies.
Set e, f, alpha, beta, and gamma to zero.
The coefficients a and c can be set once you know the scale of the system.
When you run the simulation, it will give you the value of lambda, which is the eigenvalue and corresponds to E.
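For reference, here is how the coefficients map onto the ODE above (a sketch from memory of the Coefficient Form PDE interface; double-check the sign convention for lambda in your COMSOL version). With e, f, alpha, beta, and gamma set to zero and the default d_a = 1, the eigenvalue study solves, in 1D:

-lambda*f(x) - c*f''(x) + a*f(x) = 0,   i.e.   -c*f''(x) + a*f(x) = lambda*f(x)

so choosing c equal to minus the ODE's a, and COMSOL's a coefficient equal to b(x), reproduces a*f''(x) + b(x)*f(x) = E*f(x) with lambda = E.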

How to check if gradient descent with multiple variables converged correctly?

In linear regression with 1 variable I can clearly see on a plot whether the prediction line properly fits the training data: I plot the single variable against the output and draw the prediction line based on the found values of Theta 0 and Theta 1.
But how can I check the validity of gradient descent results when it is implemented on multiple variables/features? For example, if the number of features is 4 or 5, how do I check that it works correctly and that the found values of all the thetas are valid? Do I have to rely only on the cost function plotted against the number of iterations carried out?
Gradient descent converges to a local minimum, meaning that the gradient there should be zero and the Hessian (the matrix of second derivatives) should be positive semidefinite. Checking these two conditions will tell you whether the algorithm has converged.
We can think of gradient descent as something that solves the problem f'(x) = 0, where f' denotes the gradient of f. As far as I know, the standard approach for checking convergence of this problem is to compute the discrepancy at each iteration and see whether it converges to 0.
That is, check whether ||f'(x)|| (or its square) converges to 0.
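As a concrete sketch for multivariate linear regression (the toy data and tolerance are illustrative):

import numpy as np

def gradient(theta, X, y):
    # gradient of the mean-squared-error cost J(theta) = ||X @ theta - y||^2 / (2m)
    return X.T @ (X @ theta - y) / len(y)

X = np.array([[1.0, 2.0], [1.0, 3.0], [1.0, 4.0]])   # toy design matrix (bias + 1 feature)
y = np.array([3.0, 4.0, 5.0])
theta = np.array([1.0, 1.0])                         # the exact solution for this toy data
print(np.linalg.norm(gradient(theta, X, y)) < 1e-6)  # True: the gradient-norm check passes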
There are some things you can try.
1) Check whether your cost/energy function has stopped improving as the iterations progress. Use something like abs(E_after - E_before) < 0.00001*E_before, i.e. check whether the relative difference is very small.
2) Check whether your variables have stopped changing. You can adopt a strategy very similar to the one above for this.
There is actually no perfect way to fully make sure that your function has converged, but the things mentioned above are what people usually try.
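Both tests in one sketch (the names and tolerance are illustrative):

import numpy as np

def cost(theta, X, y):
    r = X @ theta - y
    return r @ r / (2 * len(y))

def has_converged(theta_before, theta_after, X, y, tol=1e-5):
    E_before = cost(theta_before, X, y)
    E_after = cost(theta_after, X, y)
    cost_stalled = abs(E_after - E_before) < tol * abs(E_before)  # check 1
    vars_stalled = np.linalg.norm(theta_after - theta_before) < tol * np.linalg.norm(theta_before)  # check 2
    return cost_stalled and vars_stalled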
Good luck!

Z3: Function Expansion and Encoding in QBVF

I am trying to encode formulas with functions in Z3 and I have an encoding problem. Consider the following example:
f(x) = x + 42
g(x1, x2) = f(x1) / f(x2)
h(x1, x2) = g(x1, x2) % g(x2, x1)
k(x1, x2, x3) = h(x1, x2) - h(x2, x3)
sat( k(y1, y2, y3) == 42 && k(y3, y2, y1) == 42 * 2 && ... )
I would like my encoding to be both efficient (no expression duplication) and allow Z3 to re-use lemmas about functions across subproblems. Here is what I have tried so far:
Inline the functions for every free variable instantiation y1, y2, etc. This introduces duplication and performance is not as good as I hoped for.
Assert the function declarations with universal quantifiers. This works for very specific examples - from the solving times it seems that Z3 can (?) re-use results from previous queries that involve the same functions. However, solving times vary greatly and in many cases (1) turns out to be faster.
Use function definitions (i.e., quantifiers + the MACRO_FINDER option). If my understanding of the documentation is correct, this should expand the functions and thus should be close to (1). However, in terms of performance the results were a bit surprising (">" means faster):
For problems where (1) > (2) I get: (1) > (3) > (2)
For problems where (2) > (1) I get: (2) > (1) = (3)
I have also tried tweaking the MBQI option (and others) with most of the above. However, it is not clear what the best combination is. I am using Z3 4.0.
The question is: What is the "right" way to encode the problem? Note that I only have interpreted functions (I do not really need UF). Could I use this fact for a more efficient encoding and avoid function expansion?
Thanks
I think there's no clear answer to this question. Some techniques work better for one type of benchmarks and other techniques work better for others. For the QBVF benchmarks we've looked at so far, we found macros give us the best combination of small benchmark size and small solving times, but this may not apply in this case.
Your understanding of the documentation is correct, the macro finder will identify quantifiers that look like function definitions and replace all other calls to that function with its definition. It's possible that not all of your macros are picked up or that you are using quasi-macros which aren't detected correctly, either of which could go towards explaining why the performance is sometimes worse than your (1). How much is the difference in the case that (1) > (3)? A little overhead is to be expected, but vast variations in runtime are probably due to some macros being malformed or not being detected.
In general, there is no "right" way to encode these problems. Function expansion cannot always be avoided. The trade-off is essentially between expanding eagerly (1, 3) and doing it lazily (2). It may be that there is a correlation of the type SAT (1, 3 faster) and UNSAT (2 faster), but this is not guaranteed to be the case either.
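To make the macro-style encoding concrete, here is a z3py sketch of (2)/(3): definition-style quantifiers that the macro finder can expand. The smt.macro_finder parameter name is taken from current Z3 releases (Z3 4.0 spelled it MACRO_FINDER), and the functions mirror only the f and g of the example:

from z3 import *

set_param('smt.macro_finder', True)  # let Z3 expand definition-style quantifiers

S = BitVecSort(32)
f = Function('f', S, S)
g = Function('g', S, S, S)

x, x1, x2 = Consts('x x1 x2', S)
definitions = [
    ForAll([x], f(x) == x + 42),
    ForAll([x1, x2], g(x1, x2) == UDiv(f(x1), f(x2))),
]

y1, y2 = Consts('y1 y2', S)
s = Solver()
s.add(definitions)
s.add(g(y1, y2) == 42)

r = s.check()
print(r)
if r == sat:
    print(s.model())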
