I'm trying to get maxima to perform some "abstract" Taylor series expansions, and I'm running into a simplification issue. A prototype of the problem might be the finite-difference analog of the gradient,
g(x1,dx1) := (f(x1+dx1) - f(x1))/dx1; /* dx1 is small */
taylor(g(x1,dx1), [dx1], [0], 0);
for which maxima returns
So far so good. But now try the finite-difference analog of the second derivative (Hessian),
h(x1,dx1) := (f(x1+dx1) - 2*f(x1) + f(x1-dx1))/dx1^2;
taylor(h(x1,dx1), dx1, 0, 0);
for which I get
which is not nearly as helpful.
A prototype of the "real" problem I want to solve is to compute the low-order errors of the finite-difference approximation to ∂^2 f/(∂x1 ∂x2),
(f(x1+dx1, x2+dx2) - f(x1+dx1, x2) - f(x1, x2+dx2) + f(x1, x2))/(dx1*dx2)
and to collect the terms up to second order (which involves up to 4th derivatives of f). Without reasonably effective simplification I suspect it will be easier to do by hand than by computer algebra, so I am wondering what can be done to coax maxima into doing the simplification for me.
Consider this example. It uses Barton Willis' pdiff package. I
simplified notation a bit: moved center to [0, 0] and introduced
notation for partial derivatives.
(%i1) load("pdiff") $
(%i2) matchdeclare([n, m], integerp) $
(%i3) tellsimpafter(f(0, 0), 'f00) $
(%i4) tellsimpafter(pderivop(f,n,m)(0,0), concat('f, n, m)) $
(%i5) e: (f(dx, dy) - f(dx, -dy) - f(-dx, dy) + f(-dx, -dy))/(4*dx*dy)$
(%i6) taylor(e, [dx, dy], [0, 0], 3);
(%o6)/T/ f11 + (f31*dx^2 + f13*dy^2)/6 + ...
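The same rules should make the one-dimensional second difference from the question collapse as well. A rough, untested sketch (the function name u and the match variable k are mine, chosen so they don't clash with the rules for f above):

matchdeclare(k, integerp) $
tellsimpafter(u(0), 'u0) $
tellsimpafter(pderivop(u, k)(0), concat('u, k)) $
h: (u(dx) - 2*u(0) + u(-dx))/dx^2 $
taylor(h, dx, 0, 2);
/* expected: u2 + u4*dx^2/12 + ..., i.e. the second derivative plus
   a leading error term of order dx^2 */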
I'm trying to obtain a numerical solution to the following integral:
[image: the integral of exp(%i*(1 + %i*y))/(1 + %i*y + 1/(1 + %i*y)) with respect to y from 0 to 6]
The correct answer is -0.324 + 0.382i but as seen below I am not getting a numerical answer and would appreciate help with the Maxima syntax.
[image: the Maxima input and the unevaluated integral it returned]
Perhaps related to why I am not getting a numerical output are two specific questions:
I read that e and i in Maxima need to be preceded by % in input but should these also appear as %e and %i as seen in the Maxima output?
Why is dy missing at the end of the integral in the Maxima output?
Thank you!
Looks to me like your input is okay; however, the function to compute numerical approximations to integrals is named quad_qags. (There are actually several related functions. See ?? quad_ for more info.) Also, a wrinkle here is that the integrand is a complex-valued function (of a real variable), and quad_qags can only work on real-valued integrands, so we'll have to work around it. Here's how I would arrange it.
myintegrand: exp(%i*(1 + %i*y))/(1 + %i*y + 1/(1 + %i*y));
result_realpart: quad_qags (realpart (myintegrand), y, 0, 6);
result_imagpart: quad_qags (imagpart (myintegrand), y, 0, 6);
result: result_realpart[1] + %i*result_imagpart[1];
I get 0.3243496676292901*%i + 0.3820529930785175 as the final result. That's a little different from what you said; maybe a minus sign went missing? or there's a missing or extra factor of %i?
A quick approximation
0.1 * lsum (x, x, float (rectform (makelist (ev (myintegrand, y = k/10), k, 0, 60))));
seems to show the result from quad_qags is reasonable.
Maxima does not seem to come up with an analytic solution to this equation, which includes the error function. The independent variable here is "p", and the dependent variable to be solved for is "x".
(%i3) solveexplicit:true$ ratprint:false$ fpprintprec:6$
(%i4) eqn: (sqrt(%pi)*(25*2^(3/2)*p-25*sqrt(2))*erf(1/(25*2^(3/2)*x))*x+1)/(25*p) = 0.04;
(%i5) solve (eqn, x);
(%o5) []
(%i6) eqn, [p=2,x=0.00532014],numer;
(%o6) 0.04=0.04
Any help or pointing in the right direction is appreciated.
As far as I know, Maxima can't solve equations containing erf. You can get a numerical result via find_root:
(%i5) find_root (eqn, x, 0.001, 0.999), p=2;
(%o5) 0.005320136894034347
As for symbolic solutions, I worked with the equation a little bit. One can get it into the form erf(something/x)*x = otherstuff, or equivalently erf(y) = somethingelse*y where y = something/x and somethingelse = otherstuff/something if I'm not mistaken. I don't know anything in particular about equations of that form, but perhaps you can find something.
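For example, with p = 2 the reduced form can also be attacked numerically. A rough sketch (the reduction to the constant c below is my own algebra, so double-check it before relying on it):

/* erf(y) = c*y with y = 1/(25*2^(3/2)*x); c worked out by hand from the equation */
c: subst(p = 2, 2*(p - 1)/(sqrt(%pi)*(2*p - 1)));
yroot: find_root(erf(y) - c*y, y, 0.1, 10);   /* lower bound 0.1 skips the trivial root y = 0 */
x_from_y: float(1/(25*2^(3/2)*yroot));
/* should reproduce roughly the same x as the direct find_root call above */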
Yes, solve can only do polynomials. I used the series expansion for small values of x, and the accuracy is good enough.
(%i11) seriesE: 1$
termE: erf(x)$
for p: 1 unless p > 3 do
(termE: diff (termE, x)/p,
seriesE: seriesE + subst (x=0, termE)*x^p)$
seriesE;
(%o11) -(2*x^3)/(3*sqrt(%pi))+(2*x)/sqrt(%pi)+1
However, I ran into the "Expression longer than allowed by the configuration setting!" message.
Let's say we have a term like 1/4 * x/sqrt(2) * x^2 / 2; in Maxima.
As an output (without further modification) it gives x^3/2^(7/2).
How can I force the output format to be like x^3/(8*sqrt(2)) with usage of square roots whenever possible?
(%i1) sq2: " "(sqrt(2))$
(%i2) matchdeclare(n, lambda([n], oddp(n) and n#1))$
(%i3) defrule(r_sq2, 2^(n/2), sq2*2^((n-1)/2)) $
(%i4) e: 1/4 * x/sqrt(2) * x^2 / 2;
(%o4) x^3/2^(7/2)
(%i5) apply1(e, r_sq2);
(%o5) (sqrt(2))*x^3/16
A rule can help to insert sqrt(2). In the example I use a "null" function to prevent simplification. You can also consider box and rembox functions or leave sq2 undefined.
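A rough sketch of the box variant mentioned above, reusing the same matchdeclare(n, ...) as before (the names sq2b and r_sq2b are mine):

sq2b: box(sqrt(2)) $
defrule(r_sq2b, 2^(n/2), sq2b*2^((n-1)/2)) $
apply1(e, r_sq2b);   /* displays a boxed sqrt(2) times x^3/16 */
/* rembox(%) strips the box again, but the expression is then likely
   to resimplify back to the 2^(7/2) form */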
What am I doing wrong in this code?
atvalue(y(x),[x=0],1)$
desolve(diff(y(x),x)=y(x),y(x));
plot2d(y(x),[x,-6,6]);
Output:
plot2d: expression evaluates to non-numeric value everywhere in plotting range.
plot2d: nothing to plot
false
I want to plot y(x) which is obtained from a differential equation.
In Maxima y(x) = ... is an equation, and y(x) := ... is a function, and those two things are different. Try this:
atvalue (y(x), [x=0], 1)$
desolve (diff(y(x),x)=y(x), y(x));
define (y(x), rhs(%));
plot2d (y(x), [x, -6, 6]);
Here define(y(x), ...) is a different way to define a function. define evaluates the function body rhs(%) to yield exp(x) but := quotes it (not what you want).
The reason is that the result you see after the desolve does not mean y is defined as a function of x; in fact you obtain the same error if you replace y(x) with f(x) (or any other unknown function) in plot2d. See the difference:
(%i9) atvalue(y(x),[x=0],1)$
(%i10) desolve(diff(y(x),x)=y(x),y(x));
(%o10) y(x) = %e^x
(%i11) y(x);
(%o11) y(x)
(%i12) y(x):=%e^x;
(%o12) y(x) := %e^x
(%i13) y(x);
(%o13) %e^x
I don't know if there's a way to “transform” the equation (the result) into a function definition automatically. If I find a way, I will complete the answer.
If I have three points that create an angle, what would be the best way to determine if a fourth point resides within the angle created by the previous three?
Currently, I determine the angle of the line from the origin point to each of the three points, and then check whether the test angle is between the two other angles, but I'm trying to figure out if there's a better way to do it. The function is run tens of thousands of times per update, and I'm hoping there's a better way to achieve what I'm trying to do.
Let's say you have angle DEF (E is the "pointy" part), ED is the left ray and EF is the right ray.
        * D (Dx, Dy)
       /
      /       * P (Px, Py)
     /
    /
   *---------------*
   E (Ex, Ey)       F (Fx, Fy)
Step 1. Build line equation for line ED in the classic Al * x + Bl * y + Cl = 0 form, i.e. simply calculate
Al = Dy - Ey // l - for "left"
Bl = -(Dx - Ex)
Cl = -(Al * Ex + Bl * Ey)
(Pay attention to the subtraction order.)
Step 2. Build line equation for line FE (reversed direction) in the classic Ar * x + Br * y + Cr = 0 form, i.e. simply calculate
Ar = Ey - Fy // r - for "right"
Br = -(Ex - Fx)
Cr = -(Ar * Ex + Br * Ey)
(Pay attention to the subtraction order.)
Step 3. For your test point P calculate the expressions
Sl = Al * Px + Bl * Py + Cl
Sr = Ar * Px + Br * Py + Cr
Your point lies inside the angle if and only if both Sl and Sr are positive. If one of them is positive and the other is zero, your point lies on the corresponding side ray.
That's it.
Note 1: For this method to work correctly, it is important to make sure that the left and right rays of the angle are indeed the left and right rays. I.e. if you think about ED and EF as clock hands, the direction from D to F should be clockwise. If that is not guaranteed for your input, then some adjustments are necessary. For example, it can be done as an additional step of the algorithm, inserted between steps 2 and 3:
Step 2.5. Calculate the value of Al * Fx + Bl * Fy + Cl. If this value is negative, invert signs of all ABC coefficients:
Al = -Al, Bl = -Bl, Cl = -Cl
Ar = -Ar, Br = -Br, Cr = -Cr
Note 2: The above calculations are made under assumption that we are working in a coordinate system with X axis pointing to the right and Y axis pointing to the top. If one of your coordinate axes is flipped, you have to invert the signs of all six ABC coefficients. Note, BTW, that if you perform the test described in step 2.5 above, it will take care of everything automatically. If you are not performing step 2.5 then you have to take the axis direction into account from the very beginning.
As you can see, this is a precise integer method (no floating-point calculations, no divisions). The price of that is the danger of overflow. Use appropriately sized types for the multiplications.
This method has no special cases with regard to line orientations or the value of the actual non-reflex angle: it works immediately for acute, obtuse, zero and straight angles. It can easily be used with reflex angles as well (just perform a complementary test).
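Putting steps 1, 2, 2.5 and 3 together, here is a rough sketch of the whole test. It is written in Maxima only because that is the notation used elsewhere on this page; in a game loop you would code the same arithmetic in your host language. The function name in_angle and the sample points are mine:

/* E is the vertex; D and F are points on the two rays; P is the test point.
   All points are [x, y] lists. */
in_angle(E, D, F, P) := block(
  [Al, Bl, Cl, Ar, Br, Cr, Sl, Sr],
  Al : D[2] - E[2],                      /* step 1: line through E and D */
  Bl : -(D[1] - E[1]),
  Cl : -(Al*E[1] + Bl*E[2]),
  Ar : E[2] - F[2],                      /* step 2: line through F and E (reversed) */
  Br : -(E[1] - F[1]),
  Cr : -(Ar*E[1] + Br*E[2]),
  /* step 2.5: if F is on the negative side of line ED, flip all six coefficients */
  if Al*F[1] + Bl*F[2] + Cl < 0 then
    (Al : -Al, Bl : -Bl, Cl : -Cl, Ar : -Ar, Br : -Br, Cr : -Cr),
  Sl : Al*P[1] + Bl*P[2] + Cl,           /* step 3 */
  Sr : Ar*P[1] + Br*P[2] + Cr,
  Sl > 0 and Sr > 0);

in_angle([0, 0], [2, 5], [6, 0], [3, 1]);    /* expected: true  */
in_angle([0, 0], [2, 5], [6, 0], [-1, 2]);   /* expected: false */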
P.S. The four possible combinations of +/- signs for Sl and Sr correspond to four sectors, into which the plane is divided by lines ED and EF.
              * D
             /
   (-,+)    /    (+,+)
           /
   -------*------------* F
         /  E
  (-,-) /     (+,-)
       /
By using this method you can perform the full "which sector the point falls into" test. For an angle smaller than 180 you just happen to be interested in only one of those sectors: (+, +). If at some point you'll need to adapt this method for reflex angles as well (angles greater than 180), you will have to test for three sectors instead of one: (+,+), (-,+), (+,-).
Call your origin point O and the other two points A and B, so that your angle is AOB. Now consider your test point and call it C.
Now consider that we can get a vector equation of C by taking some multiple of vector OA and some multiple of OB. Explicitly
C = K1 x OA + K2 x OB
for some K1,K2 that we need to calculate. Set O to the origin by subtracting it (vectorially) from all other points. If coordinates of A are (a1,a2), B = (b1,b2) and C = (c1,c2) we have in matrix terms
[ a1 b1 ] [ K1 ] = [ c1 ]
[ a2 b2 ] [ K2 ] = [ c2 ]
So we can solve for K1 and K2 using the inverse of the matrix to give
1 / (a1b2 - b1a2) [ b2 -b1 ] [ c1 ] = [ K1 ]
                  [ -a2 a1 ] [ c2 ] = [ K2 ]
which reduces to
K1 = (b2c1 - b1c2)/(a1b2 - b1a2)
K2 = (-a2c1 + a1c2)/(a1b2 - b1a2)
Now IF the point C lies within your angle, the multiples of the vectors OA and OB will BOTH be positive. If C lies 'under' OB, then we need a negative amount of OA to get to it, and similarly for the other direction. So your condition is satisfied when both K1 and K2 are greater than (or equal to) zero. You must take care in the case where a1b2 = b1a2, as this corresponds to a singular matrix and division by zero. Geometrically it means that OA and OB are parallel, and hence there is no solution. The algebra above probably needs checking for any slight typo, but the methodology is correct. It may be long-winded, but you can get it all directly from the point coordinates, and it saves you calculating inverse trig functions to get the angles.
The above applies to angles < 180 degrees, so if your angle is greater than 180 degrees, you should check instead for
!(K1 >= 0 && K2 >= 0)
as this is the exterior of the sector that is less than 180 degrees. Remember that for 0 and 180 degrees you will have a divide-by-zero error, which must be checked for (ensure a1b2 - b1a2 != 0).
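As a quick numeric sanity check of these formulas, here is a rough Maxima sketch (the function name in_angle_k and the sample points are mine; a real implementation would live in your application's language):

/* O is the vertex, A and B define the angle, C is the test point; all [x, y] lists. */
in_angle_k(O, A, B, C) := block(
  [a1, a2, b1, b2, c1, c2, det, K1, K2],
  [a1, a2] : A - O,                 /* translate so O becomes the origin */
  [b1, b2] : B - O,
  [c1, c2] : C - O,
  det : a1*b2 - b1*a2,
  if det = 0 then error("OA and OB are parallel (or zero); no unique K1, K2"),
  K1 : (b2*c1 - b1*c2)/det,
  K2 : (a1*c2 - a2*c1)/det,
  K1 >= 0 and K2 >= 0);

in_angle_k([0, 0], [2, 5], [6, 0], [3, 1]);    /* expected: true  */
in_angle_k([0, 0], [2, 5], [6, 0], [-1, 2]);   /* expected: false */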
Yes, I meant the smallest angle in my comment above. Look at this thread for an extensive discussion on cheap ways to find the measure of the angle between two vectors. I have used the lookup-table approach on many occasions with great success.
Triangle O B C has to be positively oriented, and so does triangle O C A. To calculate the orientation, just use the shoelace formula. Both values have to be positive.
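In Maxima notation, a minimal sketch of that orientation test (the function name orient and the sample points are mine):

/* twice the signed area of triangle P Q R: positive when P, Q, R are counterclockwise */
orient(P, Q, R) := (Q[1] - P[1])*(R[2] - P[2]) - (Q[2] - P[2])*(R[1] - P[1]);

/* C lies inside angle AOB when both triangles O B C and O C A are positively oriented,
   e.g. with O = [0, 0], A = [2, 5], B = [6, 0], C = [3, 1]: */
is(orient([0, 0], [6, 0], [3, 1]) > 0 and orient([0, 0], [3, 1], [2, 5]) > 0);   /* true */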