Vector substitution - maxima

I have a huge dynamic system in wxMaxima, and I need to do some vector substitution, but it ends up with some crazy results. This is what I need to do:
forces:[
F1=[x1,y1,z1],
F2=[x2,y2,z2]
];
equations:[F3=-F2];
subst(forces,subst(equations,F1+F3));
The result I'm seeking is just a simple [x1-x2, y1-y2, z1-z2], but instead I get: [[x1-x2,x1-y2,x1-z2],[y1-x2,y1-y2,y1-z2],[z1-x2,z1-y2,z1-z2]]
Any suggestions?

OK, that is pretty puzzling, although I see now what's going on.
subst is serial (one by one) substitution, so subst([F1 = ..., F2 = ...], ...) is equivalent to subst(F2 = ..., subst(F1 = ..., ...)). That is, substitute for F1 first and then substitute F2 into the result of that.
However the result of subst(F1 = [x1, y1, z1], F1 - F2) is [x1 - F2, y1 - F2, z1 - F2]. You can see now what's going to happen if you substitute F2 into that -- you'll get the messy nested list result.
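You can watch it happen step by step:
subst (F1 = [x1, y1, z1], F1 - F2);
[x1 - F2, y1 - F2, z1 - F2]
subst (F2 = [x2, y2, z2], %);
[[x1 - x2, x1 - y2, x1 - z2], [y1 - x2, y1 - y2, y1 - z2], [z1 - x2, z1 - y2, z1 - z2]]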
I think if you try psubst (parallel substitution) you'll get the expected result.
(%i2) forces:[
F1=[x1,y1,z1],
F2=[x2,y2,z2]
];
(%o2) [F1 = [x1, y1, z1], F2 = [x2, y2, z2]]
(%i3) equations:[F3=-F2];
(%o3) [F3 = - F2]
(%i4) subst(equations, F1 + F3);
(%o4) F1 - F2
(%i5) psubst (forces, %o4);
(%o5) [x1 - x2, y1 - y2, z1 - z2]
(%i6) psubst(forces, subst(equations, F1 + F3));
(%o6) [x1 - x2, y1 - y2, z1 - z2]


λ vector in Tensor CP Decomposition with Alternating Least Squares

I am trying to understand the procedure of tensor CP decomposition with alternating least squares, based on this paper.
Page 464 states that "It is often useful to assume that the columns of A, B, and C are normalized to length one with the weights absorbed into the vector λ".
In addition, on page 471, line 7 of the pseudocode reads "normalize columns of A(n) (storing norms as λ)".
I don't understand what values will be stored in the vector λ and in the matrix Λ.
What I understand is that we normalize every column of the factor matrices and store the norms in a new vector λ.
For example, for a 3x3x3 tensor with rank = 3, we will have three 3x3 factors A, B, and C, and after normalizing every column of all these matrices to unit length, I will end up with 9 norms. Will these norms be the values of the diagonal matrix Λ?
Am I missing something?
Thank you
I'm not familiar with the paper, but it looks like this is probably what they mean:
Suppose we're looking at the 3x3 case, since that's easy to draw. We have fixed, non-normalized matrices $A, B, C$, and we want column-normalized matrices $a_r, b_r, c_r$ and some matrix $\lambda$ such that $A B C = \lambda a_r b_r c_r$:
$$\lambda A =
\begin{pmatrix} a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3 \end{pmatrix}
\begin{pmatrix} x_1 & x_2 & x_3 \\ y_1 & y_2 & y_3 \\ z_1 & z_2 & z_3 \end{pmatrix}
= \begin{pmatrix}
a_1 x_1 + b_1 y_1 + c_1 z_1 & a_1 x_2 + b_1 y_2 + c_1 z_2 & \cdots \\
a_2 x_1 + b_2 y_1 + c_2 z_1 & \cdots & \cdots \\
a_3 x_1 + b_3 y_1 + c_3 z_1 & \cdots & \cdots
\end{pmatrix}$$
Then solve the following system of linear equations for $(a, b, c)$ to enforce the column normalization of $\lambda A$:
$$\begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} = \begin{pmatrix}
(a_1 + a_2 + a_3)\, x_1 + (b_1 + b_2 + b_3)\, y_1 + (c_1 + c_2 + c_3)\, z_1 \\
(a_1 + a_2 + a_3)\, x_2 + (b_1 + b_2 + b_3)\, y_2 + (c_1 + c_2 + c_3)\, z_2 \\
(a_1 + a_2 + a_3)\, x_3 + (b_1 + b_2 + b_3)\, y_3 + (c_1 + c_2 + c_3)\, z_3
\end{pmatrix}$$
We've technically got too many free variables here, but since we only care about the sums, e.g. $a_1 + a_2 + a_3$, and not about any individual $a_i$, we'll just enforce the constraint that $\lambda$ be a diagonal matrix, at which point we have:
$$\begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} = \begin{pmatrix}
a_1 x_1 + b_2 y_1 + c_3 z_1 \\
a_1 x_2 + b_2 y_2 + c_3 z_2 \\
a_1 x_3 + b_2 y_3 + c_3 z_3
\end{pmatrix}$$
We plug that into a linear equation solver to get values for $a_1, b_2, c_3$. We then do the same thing for $B$ and $C$ and multiply all the lambdas together (taking advantage of the associativity of matrix multiplication to pull them all to the left), at which point $A B C = \lambda a_r b_r c_r$ and all of the $a_r, b_r, c_r$ matrices have normalized columns.
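For what it's worth, here is a small Maxima sketch (my own illustration, not code from the paper) of the bookkeeping in "normalize columns, storing norms as λ" for a single factor matrix; the column norms go into lam, and multiplying back by the diagonal matrix of norms recovers the original matrix:
(%i1) A : matrix ([3, 0], [4, 5])$
(%i2) lam : makelist (sqrt (sum (A[i, j]^2, i, 1, 2)), j, 1, 2);
(%o2) [5, 5]
(%i3) An : A . diag_matrix (1/lam[1], 1/lam[2])$  /* columns of An now have unit length */
(%i4) is (An . diag_matrix (lam[1], lam[2]) = A); /* the weights were absorbed into lam */
(%o4) true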

The output of a shallow multilayer network, with linear activation function in each of the neurons, is linear in the weights -> FALSE?

Can anyone prove to me why the above sentence is FALSE?
I have the feeling that, with all units being linear, I can always write W(Linear) = Y for any W, but the teacher says it's FALSE and quoted a student who said:
The output y of said shallow network is y = sum(v) * sum(wTx + b), so we have the product of the weights v and w inside the output. So the output is nonlinear with respect to the weights.
Could anyone be more precise/analytical?
Consider the following neural network:
x1 -- h1
    \/    \
    /\     y
x2 -- h2  /
Corresponding to the following equations:
h1 = w11 x1 + w12 x2
h2 = w21 x1 + w22 x2
y = v1 h1 + v2 h2
Here the weights are w11, w12, w21, w22, v1, v2.
Combining the three equations, we get:
y = v1 w11 x1 + v1 w12 x2 + v2 w21 x1 + v2 w22 x2
Hence we can say that y is a bilinear function of (w11, w12, w21, w22) and (v1, v2), but we cannot say that y is a linear function of (w11, w12, w21, w22, v1, v2).
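A quick way to see the failure of linearity: scale every weight by t, and the output scales by t^2 rather than t. Here is a small symbolic check in Maxima (my own sketch, not from the course):
(%i1) y : v1*(w11*x1 + w12*x2) + v2*(w21*x1 + w22*x2)$
(%i2) scaled : subst ([v1 = t*v1, v2 = t*v2, w11 = t*w11,
                       w12 = t*w12, w21 = t*w21, w22 = t*w22], y)$
(%i3) ratsimp (scaled - t^2 * y);  /* zero: y(t*weights) = t^2 * y(weights) */
(%o3) 0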

Nonzero vector in quantifier

I want to verify a formula of the form:
Exists p . ForAll x != 0 . f(x, p) > 0
An implementation (that isn't working) is the following:
def f0(x0, x1, x, y):
    return x1 ** 2 * y + x0 ** 2 * x

s = Solver()
x0, x1 = Reals('x0 x1')
p0, p1 = Reals('p0 p1')
s.add(Exists([p0, p1],
             ForAll([x0, x1],
                    f0(x0, x1, p0, p1) > 0)))
#s.add(Or(x0 != 0, x1 != 0))
while s.check() == sat:
    m = s.model()
    m.evaluate(x0, model_completion=True)
    m.evaluate(x1, model_completion=True)
    m.evaluate(p0, model_completion=True)
    m.evaluate(p1, model_completion=True)
    print m
    s.add(Or(x0 != m[x0], x1 != m[x1]))
The formula isn't satisfied.
With f0() >= 0, the only output is (0, 0).
I want to have f0() > 0 and to constrain (x0, x1) != (0, 0).
Something I'd expect is p0, p1 = 1, 1 or 2, 2, for instance, but I don't know how to exclude (0, 0) from the possible values of (x0, x1).
Following up on Levent's reply (below): during the first check, Z3 uses a custom decision procedure that works with the quantifiers. In incremental mode it falls back to something that isn't a decision procedure. To force the one-shot solver, try the following:
from z3 import *

def f0(x0, x1, x, y):
    return x1 * x1 * y + x0 * x0 * x

p0, p1 = Reals('p0 p1')
x0, x1 = Reals('x0 x1')
fmls = [ForAll([x0, x1], Implies(Or(x0 != 0, x1 != 0), f0(x0, x1, p0, p1) > 0))]
while True:
    s = Solver()
    s.add(fmls)
    res = s.check()
    print res
    if res == sat:
        m = s.model()
        print m
        fmls += [Or(p0 != m[p0], p1 != m[p1])]
    else:
        print "giving up"
        break
You'd simply write that as an implication inside the quantification. I think you're also mixing up some of the variables in there. The following seems to capture your intent:
from z3 import *

def f0(x0, x1, x, y):
    return x1 * x1 * y + x0 * x0 * x

s = Solver()
p0, p1 = Reals('p0 p1')
x0, x1 = Reals('x0 x1')
s.add(ForAll([x0, x1], Implies(Or(x0 != 0, x1 != 0), f0(x0, x1, p0, p1) > 0)))
while True:
    res = s.check()
    print res
    if res == sat:
        m = s.model()
        print m
        s.add(Or(p0 != m[p0], p1 != m[p1]))
    else:
        print "giving up"
        break
Of course, z3 isn't guaranteed to find you any solutions; though it seems to manage one:
$ python a.py
sat
[p1 = 1, p0 = 1]
unknown
giving up
Once you use quantifiers all bets are off, as the logic becomes semi-decidable. Z3 is doing a good job here and returning one solution, and then it's giving up. I don't think you can expect anything better, unless you use some custom decision procedures.

linsolve in Maxima does not work for these equations

I am quite new to Maxima. I want to get the solution for W using these equations:
e1: A*W + B*Y = I$
e2: C*W + D*Y = 0$
linsolve ([e1, e2], [W]);
But linsolve just returns [].
The example in the manual works:
(%i1) e1: x + z = y$
(%i2) e2: 2*a*x - y = 2*a^2$
(%i3) e3: y - 2*z = 2$
(%i4) linsolve ([e1, e2, e3], [x, y, z]);
(%o4) [x = a + 1, y = 2 a, z = a - 1]
That means the equations cannot be solved for just the variables you requested. You have to solve with respect to both variables:
linsolve([e1,e2],[W,Y]);
          D I            C I
[W = - ---------, Y = ---------]
       B C - A D      B C - A D
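If you only need the expression for W, you can pick it out of the solution list afterwards, for example:
sol : linsolve ([e1, e2], [W, Y])$
rhs (first (sol));
         D I
    - ---------
      B C - A D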
You can also solve for W in each of your equations separately. For example:
linsolve ([e1],[W]);
       B Y - I
[W = - -------]
          A

maxima: use function as function argument

Like the title says, I want to use a function as a function argument.
Intuitively, I tried something like:
a(t,c) := t+c;
b(R_11, R_12, R_13, d_1x, d_1y, d_1z) := R_11*d_1x + R_12*d_1y + R_13*d_1z;
f( a(t,c), b(R_11, R_12, R_13, d_1x, d_1y, d_1z), %lambda ) := a(t,c) +
%lambda * b(R_11, R_12, R_13, d_1x, d_1y, d_1z);
But Maxima complained: "define: in definition of f, found bad argument".
My goal is to simplify my equations to get a better overview. When I differentiate, as in
diff( f(...), R_11 )
the result for this example should be the partial derivative of b with respect to R_11:
f' = b_R11(...)
Is there a way to do such things, or is this an odd approach and there is maybe a better way?
You can declare that b depends on some arguments and then diff will construct formal derivatives of b.
(%i1) depends (b, [R1, R2]);
(%o1) [b(R1, R2)]
(%i2) depends (a, t);
(%o2) [a(t)]
(%i3) f(t, R1, R2) := a(t) + b(R1, R2);
(%o3) f(t, R1, R2) := a(t) + b(R1, R2)
(%i4) diff (f(t, R1, R2), R1);
                         d
(%o4)                   --- (b(R1, R2))
                        dR1
(%i5) diff (f(t, R1, R2), t);
                         d
(%o5)                    -- (a(t))
                         dt
But that only works as long as b is undefined. Once b is defined, diff will go ahead and call b and differentiate whatever it returns.
(%i8) b(R1, R2) := 2*R1 + 3*R2;
(%o8) b(R1, R2) := 2 R1 + 3 R2
(%i9) diff (f(t, R1, R2), R1);
(%o9) 2
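Incidentally, the "found bad argument" error itself comes from putting function calls such as a(t,c) in the parameter list of f; the parameters of a function definition must be plain symbols. If you do want a fully defined f, spell out the underlying variables, e.g. (a sketch reusing the names from the question):
a (t, c) := t + c$
b (R_11, R_12, R_13, d_1x, d_1y, d_1z) := R_11*d_1x + R_12*d_1y + R_13*d_1z$
f (t, c, R_11, R_12, R_13, d_1x, d_1y, d_1z, %lambda) :=
    a (t, c) + %lambda * b (R_11, R_12, R_13, d_1x, d_1y, d_1z)$
diff (f (t, c, R_11, R_12, R_13, d_1x, d_1y, d_1z, %lambda), R_11);
which returns %lambda*d_1x, the expected partial derivative of the %lambda*b term.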
