Is there a matrix_element_inv in Maxima? - maxima

In Maxima, we have matrix_element_add, matrix_element_mult and matrix_element_transpose.
Is there a matrix_element_inv, and if not, how could I make one?

If you want to invert a matrix, first remember that not every matrix can be inverted, so make sure yours actually has an inverse.
In Maxima the operator for matrix multiplication is the dot, so A . A is the matrix product of A with itself; as an exponent this is written A^^2, not A^2.
The ordinary ^ operator applies to each element of the matrix separately, so if you want to invert every element individually:
(%i1) A: matrix ([17, 3], [-8, 11]);
      [ 17   3  ]
(%o1) [         ]
      [ - 8  11 ]
(%i9) A^-1;
      [ 1    1  ]
      [ --   -  ]
      [ 17   3  ]
(%o9) [         ]
      [   1  1  ]
      [ - -  -- ]
      [   8  11 ]
Then, to get the actual matrix inverse:
(%i2) B: A^^-1;
      [  11      3  ]
      [ ---   - --- ]
      [ 211     211 ]
(%o2) [             ]
      [   8     17  ]
      [ ---     --- ]
      [ 211     211 ]
(%i4) B.A;
      [ 1  0 ]
(%o4) [      ]
      [ 0  1 ]
(%i5) A.B;
      [ 1  0 ]
(%o5) [      ]
      [ 0  1 ]
Be sure that your matrix is invertible, otherwise A^^-1 fails:
(%i6) Bad: matrix ([2, 3], [4, 6]);
      [ 2  3 ]
(%o6) [      ]
      [ 4  6 ]
(%i7) Bad^^-1;
expt: undefined: 0 to a negative exponent.
-- an error. To debug this try: debugmode(true);
(%i8) newdet(Bad);
(%o8)/R/ 0
Now you should read this section of the manual carefully:
http://maxima.sourceforge.net/docs/manual/maxima_23.html
especially the part about matrix_element_add. Only matrix_element_add, matrix_element_mult and matrix_element_transpose exist, so there is no matrix_element_inv, but you can redefine these with lambda functions. For example, to get the transpose with every element inverted:
(%i10) matrix_element_transpose: lambda ([x], x^-1)$
(%i11) transpose(A);
       [ 1     1 ]
       [ --  - - ]
       [ 17    8 ]
(%o11) [         ]
       [ 1   1   ]
       [ -   --  ]
       [ 3   11  ]
Hope this helps.

Related

WebGL: Converting JSON IFS 3D Model Data to Float32Arrays

I have a project I'm working on that involves rendering 3D models in WebGL, GitHub here. In pulling together several different resources, I've found two different formats for the model data: one with JSON entries like so:
var houseIFS =
{
    "vertices": [
        [ 2, -1, 2 ],
        [ 2, -1, -2 ],
        [ 2, 1, -2 ],
        [ 2, 1, 2 ],
        [ 1.5, 1.5, 0 ],
        [ -1.5, 1.5, 0 ],
        [ -2, -1, 2 ],
        [ -2, 1, 2 ],
        [ -2, 1, -2 ],
        [ -2, -1, -2 ]
    ],
    "faces": [
        [ 0, 1, 2, 3 ],
        [ 3, 2, 4 ],
        [ 7, 3, 4, 5 ],
        [ 2, 8, 5, 4 ],
        [ 5, 8, 7 ],
        [ 0, 3, 7, 6 ],
        [ 0, 6, 9, 1 ],
        [ 2, 1, 9, 8 ],
        [ 6, 7, 8, 9 ]
    ],
    "normals": [
        [ 1, 0, 0 ],
        [ 0.7071, 0.7071, 0 ],
        [ 0, 0.9701, 0.2425 ],
        [ 0, 0.9701, -0.2425 ],
        [ -0.7071, 0.7071, 0 ],
        [ 0, 0, 1 ],
        [ 0, -1, 0 ],
        [ 0, 0, -1 ],
        [ -1, 0, 0 ]
    ],
    "faceColors": [
        [ 1, .8, .8 ],
        [ .7, .7, 1 ],
        [ 0, 0, 1 ],
        [ 0, 0, .7 ],
        [ .7, .7, 1 ],
        [ 1, 0, 0 ],
        [ .4, .4, .4 ],
        [ 1, 0, 0 ],
        [ 1, .8, .8 ]
    ]
};
and another with more primitive return types:
/** The return value of each function is an object, model, with properties:
*
* model.vertexPositions -- the vertex coordinates;
* model.vertexNormals -- the normal vectors;
* model.vertexTextureCoords -- the texture coordinates;
* model.indices -- the face indices.
*
* The first three properties are of type Float32Array, while
* model.indices is of type Uint16Array.
*/
I tried to create a method to convert the data from the "modern" version to the "primitive":
function convertPoly(model) {
    return {
        vertexPositions: new Float32Array(model.vertices),
        vertexNormals: new Float32Array(model.normals),
        vertexTextureCoords: new Float32Array(model.faces),
        indices: new Uint16Array(model.faces)
    }
}
but I don't think this is correct, and nothing is rendered when I try to draw the model. How can I compute the indices from the vertices or faces? I guess I don't really understand what the indices actually represent or how they work (are they the triangle vertices of the faces?).
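No answer is recorded here, but two things stand out. First, new Float32Array(model.vertices) does not flatten an array of [x, y, z] triples; each nested array is coerced to NaN, which is one reason nothing renders. Second, the indices of an indexed triangle mesh are triangle corners, three per triangle, pointing into the flattened vertex array, so every face with more than three vertices has to be split into triangles first (for convex faces a simple triangle fan works). Below is a minimal sketch of that conversion, written in Python only to illustrate how the indices are built; a WebGL project would do the same thing in JavaScript.

# Sketch: convert an IFS-style model (polygonal faces) into flat arrays
# suitable for indexed triangle rendering. Python is used only to show
# the algorithm; the real conversion would live in the JavaScript code.
house_ifs = {
    "vertices": [[2, -1, 2], [2, -1, -2], [2, 1, -2], [2, 1, 2],
                 [1.5, 1.5, 0], [-1.5, 1.5, 0], [-2, -1, 2],
                 [-2, 1, 2], [-2, 1, -2], [-2, -1, -2]],
    "faces": [[0, 1, 2, 3], [3, 2, 4], [7, 3, 4, 5], [2, 8, 5, 4],
              [5, 8, 7], [0, 3, 7, 6], [0, 6, 9, 1], [2, 1, 9, 8],
              [6, 7, 8, 9]],
}

def convert_poly(model):
    # Flatten [[x, y, z], ...] into [x0, y0, z0, x1, y1, z1, ...]
    vertex_positions = [c for v in model["vertices"] for c in v]
    # Triangulate each polygonal face as a fan around its first vertex:
    # a quad [a, b, c, d] becomes the triangles (a, b, c) and (a, c, d).
    indices = []
    for face in model["faces"]:
        for i in range(1, len(face) - 1):
            indices.extend([face[0], face[i], face[i + 1]])
    return {"vertexPositions": vertex_positions, "indices": indices}

print(convert_poly(house_ifs)["indices"][:12])  # [0, 1, 2, 0, 2, 3, 3, 2, 4, 7, 3, 4]

Note that normals and faceColors in this model are per face, not per vertex, so for flat shading each face's vertices typically have to be duplicated so that every triangle corner can carry its face's normal and color; the sketch above ignores that and only shows how the positions and indices are built.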

Is there an easy way to extract tensor by viewing the elements as indices?

The input tensor is as below:
input =
[[ 0 0 1 2]
[ 0 3 4 5]
[ 0 6 7 8]
[ 1 9 10 11]
[ 1 12 13 14]
[ 1 15 16 17]
[ 1 18 19 20]
[ 1 21 22 23]
[ 1 24 25 26]
[ 1 27 28 29]
[ 1 30 31 32]
[ 2 33 34 35]
[ 2 36 37 38]
[ 2 39 40 41]]
I want to extract the rows block-wise according to the first element of each row (0, 1, 2). Can anyone help me with this? Thanks!
An off-the-shelf function would be great.
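No answer is recorded here, but assuming the data can be handled as a NumPy array, a boolean mask on the first column gives the block-wise split; in TensorFlow the closest off-the-shelf operation would be tf.dynamic_partition, using the first column as the partition ids. A minimal NumPy sketch:

import numpy as np

# The example data: the first column is the block id.
x = np.array([[0, 0, 1, 2], [0, 3, 4, 5], [0, 6, 7, 8],
              [1, 9, 10, 11], [1, 12, 13, 14], [1, 15, 16, 17],
              [1, 18, 19, 20], [1, 21, 22, 23], [1, 24, 25, 26],
              [1, 27, 28, 29], [1, 30, 31, 32],
              [2, 33, 34, 35], [2, 36, 37, 38], [2, 39, 40, 41]])

# Group the rows by the value in their first column.
blocks = {key: x[x[:, 0] == key] for key in np.unique(x[:, 0])}

print(blocks[0])   # the three rows whose first element is 0
print(blocks[2])   # the three rows whose first element is 2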

5 Nearest Neighbors Using KD tree

I want to find the 5 nearest neighbors among the red points (T-SNE2) for each of the blue points (T-SNE1). I wrote this code just to find out the right way to do it, but I am not sure whether it is right or wrong.
import numpy as np
from sklearn.neighbors import KDTree   # leaf_size is a scikit-learn KDTree argument

X = np.random.random((10, 2))  # 10 points in 2 dimensions
Y = np.random.random((10, 2))  # 10 points in 2 dimensions
NNlist = []
treex = KDTree(X, leaf_size=2)
for i in range(len(Y)):
    dist, ind = treex.query([Y[i]], k=5)
    NNlist.append(ind[0][0])
    print(ind)    # indices of the 5 closest neighbors
    print(dist)
    print("the nearest index is:", ind[0][0], "with distance:", dist[0][0], "for Y", i)
print(NNlist)
output
[[9 5 4 6 0]]
[[ 0.21261486 0.32859024 0.41598597 0.42960146 0.43793039]]
the nearest index is: 9 with distance: 0.212614862956 for Y 0
[[0 3 2 6 1]]
[[ 0.10907128 0.11378059 0.13984741 0.18000197 0.27475481]]
the nearest index is: 0 with distance: 0.109071275144 for Y 1
[[8 2 3 0 1]]
[[ 0.21621245 0.30543878 0.40668179 0.4370689 0.49372232]]
the nearest index is: 8 with distance: 0.216212445449 for Y 2
[[8 3 2 6 0]]
[[ 0.16648482 0.2989508 0.40967709 0.42511931 0.46589575]]
the nearest index is: 8 with distance: 0.166484820786 for Y 3
[[1 2 5 0 4]]
[[ 0.15331281 0.25121761 0.29305736 0.30173474 0.44291615]]
the nearest index is: 1 with distance: 0.153312811422 for Y 4
[[2 3 8 0 6]]
[[ 0.20441037 0.20917797 0.25121628 0.2903253 0.33914051]]
the nearest index is: 2 with distance: 0.204410367254 for Y 5
[[2 1 0 3 5]]
[[ 0.08400022 0.1484925 0.17356156 0.32387147 0.33789602]]
the nearest index is: 2 with distance: 0.0840002184199 for Y 6
[[8 2 3 7 0]]
[[ 0.2149891 0.40584999 0.50054235 0.53307269 0.5389266 ]]
the nearest index is: 8 with distance: 0.21498909502 for Y 7
[[1 0 2 5 9]]
[[ 0.07265268 0.11687068 0.19065327 0.20004392 0.30269591]]
the nearest index is: 1 with distance: 0.0726526838766 for Y 8
[[5 9 4 1 0]]
[[ 0.21563204 0.25067242 0.29904262 0.36745386 0.39634179]]
the nearest index is: 5 with distance: 0.21563203953 for Y 9
[9, 0, 8, 8, 1, 2, 2, 8, 1, 5]
import numpy as np
from scipy.spatial import KDTree

X = np.random.random((10, 2))  # 10 points in 2 dimensions
Y = np.random.random((10, 2))  # 10 points in 2 dimensions
NNlist = []
for i in range(len(X)):
    # Build a tree over Y plus the query point itself, then ask for 6 neighbors
    # and skip the first result (the query point, at distance 0).
    treey = KDTree(np.concatenate([Y.tolist(), np.expand_dims(X[i], axis=0)], axis=0))
    dist, ind = treey.query([X[i]], k=6)
    print('index', ind)        # indices of the 6 closest points; the first is X[i] itself
    print('distance', dist)
    print('5 nearest neighbors')
    for j in ind[0][1:]:
        print(Y[j])
    print()
and you can get output like this:
index [[10 5 8 9 1 2]]
distance [[ 0. 0.3393312 0.38565112 0.40120109 0.44200758 0.47675255]]
5 nearest neighbors
[ 0.6298789 0.18283264]
[ 0.42952574 0.83918788]
[ 0.26258905 0.4115705 ]
[ 0.61789523 0.96261285]
[ 0.92417172 0.13276541]
index [[10 1 3 8 4 9]]
distance [[ 0. 0.09176157 0.18219064 0.21845335 0.28876942 0.60082231]]
5 nearest neighbors
[ 0.61789523 0.96261285]
[ 0.51031835 0.99761715]
[ 0.42952574 0.83918788]
[ 0.3744326 0.97577322]
[ 0.26258905 0.4115705 ]
index [[10 7 0 9 5 6]]
distance [[ 0. 0.15771386 0.2751765 0.3457175 0.49918935 0.70597498]]
5 nearest neighbors
[ 0.19803817 0.23495888]
[ 0.41293849 0.05585981]
[ 0.26258905 0.4115705 ]
[ 0.6298789 0.18283264]
[ 0.04527532 0.78806495]
index [[10 0 5 7 9 2]]
distance [[ 0. 0.09269963 0.20597988 0.24505542 0.31104979 0.49743673]]
5 nearest neighbors
[ 0.41293849 0.05585981]
[ 0.6298789 0.18283264]
[ 0.19803817 0.23495888]
[ 0.26258905 0.4115705 ]
[ 0.92417172 0.13276541]
index [[10 9 5 7 0 8]]
distance [[ 0. 0.20406876 0.26125464 0.30645317 0.33369641 0.45509834]]
5 nearest neighbors
[ 0.26258905 0.4115705 ]
[ 0.6298789 0.18283264]
[ 0.19803817 0.23495888]
[ 0.41293849 0.05585981]
[ 0.42952574 0.83918788]
index [[10 5 2 0 7 9]]
distance [[ 0. 0.13641503 0.17524716 0.34224271 0.56393988 0.56893897]]
5 nearest neighbors
[ 0.6298789 0.18283264]
[ 0.92417172 0.13276541]
[ 0.41293849 0.05585981]
[ 0.19803817 0.23495888]
[ 0.26258905 0.4115705 ]
index [[10 7 9 0 5 6]]
distance [[ 0. 0.04152391 0.22807566 0.25709252 0.43421854 0.61332497]]
5 nearest neighbors
[ 0.19803817 0.23495888]
[ 0.26258905 0.4115705 ]
[ 0.41293849 0.05585981]
[ 0.6298789 0.18283264]
[ 0.04527532 0.78806495]
index [[10 5 1 2 8 3]]
distance [[ 0. 0.40641681 0.43652515 0.44861766 0.45186271 0.51705369]]
5 nearest neighbors
[ 0.6298789 0.18283264]
[ 0.61789523 0.96261285]
[ 0.92417172 0.13276541]
[ 0.42952574 0.83918788]
[ 0.51031835 0.99761715]
index [[10 6 9 7 8 4]]
distance [[ 0. 0.17568369 0.2841519 0.40184611 0.43110847 0.47835169]]
5 nearest neighbors
[ 0.04527532 0.78806495]
[ 0.26258905 0.4115705 ]
[ 0.19803817 0.23495888]
[ 0.42952574 0.83918788]
[ 0.3744326 0.97577322]
index [[10 9 7 5 0 8]]
distance [[ 0. 0.11723769 0.2275565 0.32111803 0.32446146 0.4643181 ]]
5 nearest neighbors
[ 0.26258905 0.4115705 ]
[ 0.19803817 0.23495888]
[ 0.6298789 0.18283264]
[ 0.41293849 0.05585981]
[ 0.42952574 0.83918788]
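As a side note (not part of the original answer), if the goal is just the 5 nearest red points for every blue point, it is enough to build one tree over the red set and query all blue points at once with k=5; the query points never have to be inserted into the tree. A minimal sketch with scipy:

import numpy as np
from scipy.spatial import KDTree

blue = np.random.random((10, 2))   # query points (T-SNE1)
red = np.random.random((10, 2))    # search set (T-SNE2)

tree = KDTree(red)                 # build the tree over the red points once
dist, ind = tree.query(blue, k=5)  # 5 nearest red neighbors of every blue point

# dist[i] and ind[i] hold the distances and the indices (into red)
# of the 5 nearest red points for blue point i.
print(ind)
print(dist)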

Maxima: turn eigenvectors into a matrix

The second part of the output of the eigenvectors function in Maxima is a list of the eigenvectors which correspond to the eigenvalues of the first part.
E.g.:
[[[1,-1/4],[1,1]],[[[1,2/3]],[[1,-1]]]]
(1,2/3) is the eigenvector of eigenvalue 1, and (1,-1) is the eigenvector of eigenvalue (-1/4).
How can I turn these vectors into a matrix (in this case it would be equivalent to matrix([1,1],[2/3,-1]))?
Thanks
(%i1) display2d: false $
(%i2) r: [[[1,-1/4],[1,1]],[[[1,2/3]],[[1,-1]]]] $
(%i3) s: second(r) $
(%i4) s: map('first, s) $
(%i5) s: apply('maplist, cons("[", s)) $
(%i6) s: apply('matrix, s);
(%o6) matrix([1,1],[2/3,-1])
Here's an attempt. Notice I've extracted the pieces via multiple assignment first, so that it's easy to remember what the pieces mean.
(%i1) foo : [[[1,-1/4],[1,1]],[[[1,2/3]],[[1,-1]]]] $
(%i2) [[vals, mults], vecs] : foo;
              1                  2
(%o2) [[[1, - -], [1, 1]], [[[1, -]], [[1, - 1]]]]
              4                  3
(%i3) vals;
            1
(%o3) [1, - -]
            4
(%i4) mults;
(%o4) [1, 1]
(%i5) vecs;
            2
(%o5) [[[1, -]], [[1, - 1]]]
            3
(%i6) apply (append, vecs);
           2
(%o6) [[1, -], [1, - 1]]
           3
(%i7) apply (matrix, apply (append, vecs));
      [    2   ]
      [ 1  -   ]
(%o7) [    3   ]
      [        ]
      [ 1  - 1 ]
(%i8) transpose (%);
      [ 1  1   ]
      [        ]
(%o8) [ 2      ]
      [ -  - 1 ]
      [ 3      ]
Not sure if that will work when the number of eigenvectors is different from the number of eigenvalues, or in other special cases, but I hope this gives you something to go on.

TensorFlow perceptron gives unexplainable output

I'm new to TF. I took the perceptron code from this MNIST tutorial (actually, it's not necessary to follow the link): https://github.com/aymericdamien/TensorFlow-Examples/blob/master/examples/3_NeuralNetworks/multilayer_perceptron.py
I wanted to remake that perceptron into a perceptron with one layer and a linear activation function, i.e. the simplest possible form: output = w2*(w1*x + b1) + b2. But this is what I get:
Data:
X_train: array([[ 10.],
[ 10.],
[ 11.],
[ 6.],
[ 8.],
[ 9.],
[ 22.],
[ 14.],
[ 6.],
[ 8.],
[ 11.],
[ 9.],
[ 13.],
[ 7.],
[ 13.],
[ 7.],
[ 13.],
[ 11.]])
y_train: array([[ 44.5825],
[ 53.99 ],
[ 52.4475],
[ 37.6 ],
[ 38.6125],
[ 39.5875],
[ 43.07 ],
[ 74.8575],
[ 34.185 ],
[ 38.61 ],
[ 34.8175],
[ 36.61 ],
[ 34.0675],
[ 37.67 ],
[ 49.725 ],
[ 79.4775],
[ 50.41 ],
[ 51.26 ]])
X_test: array([[ 6.],
[ 14.],
[ 14.],
[ 12.],
[ 13.],
[ 13.]])
y_test: array([[ 55.75 ],
[ 33.035 ],
[ 38.3275],
[ 39.2825],
[ 50.7325],
[ 45.2575]])
Parameters:
learning_rate = 1
training_epochs = 1
display_step = 1 #maintaining variable
x = tf.placeholder("float", [None, 1])
y = tf.placeholder("float", [None, 1])
Perceptron model:
def multilayer_perceptron(x, weights, biases, output_0):
    layer_1 = tf.add(tf.matmul(x, weights['h1']), biases['b1'])
    out_layer = tf.add(tf.matmul(layer_1, weights['out']), biases['out'])
    output_o = out_layer  # This variable is just needed to print the result in the session
    return out_layer

output_0 = tf.Variable(tf.random_normal([1, n_classes]))
weights = {
    'h1': tf.Variable(tf.random_normal([n_input, n_hidden_1])),
    'out': tf.Variable(tf.random_normal([n_hidden_2, n_classes]))}
biases = {
    'b1': tf.Variable(tf.random_normal([n_hidden_1])),
    'out': tf.Variable(tf.random_normal([n_classes]))}
Let's build the graph:
prediction = multilayer_perceptron(x, weights, biases, output)
cost = tf.reduce_mean(tf.square(prediction-y)) #MSE
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost) #Gives the smallest cost
init = tf.initialize_all_variables()
Finally, let's run the session:
with tf.Session() as Sess:
    Sess.run(init)
    for epoch in range(training_epochs):
        avg_cost = 0.
        number_of_bathces = len(X_train)/batch_size
        _, c = Sess.run([optimizer, cost], feed_dict = {x: X_train, y: y_train})
        avg_cost += c/len(X_train)
        print(Sess.run(output_0))
        if epoch % display_step == 0:
            print("Epoch:", '%02d' % (epoch+1), "cost =", "{:.9f}".format(avg_cost))
    print("Optimization finished")
    correct_prediction = tf.equal(tf.arg_max(prediction, 1), tf.arg_max(y, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
    print("Accuracy:", accuracy.eval({x: X_test, y: y_test}))
And now, we get the output:
[[ 0.77995574]]
Epoch: 01 cost = 262.544189453
Optimization finished
Accuracy: 1.0
The most confusing thing is the output (the first number)! It should be somewhere in the range [30, 50]! Please explain to me where I went wrong.
Your code is notably messy, so I've removed a lot of redundant pieces. Two things in particular explain the confusing numbers: output_0 is a separate, never-trained random variable (inside the function you assign to output_o, a different name), so printing it just shows its random initial value rather than the prediction; and tf.arg_max over a single output column is always 0 for both prediction and y, so the "accuracy" of 1.0 is meaningless for this regression problem. Here is the cleaned-up version:
from __future__ import print_function
import numpy as np
import tensorflow as tf

X_train = np.array([[ 10.], [ 10.], [ 11.], [ 6.], [ 8.], [ 9.], [ 22.], [ 14.], [ 6.], [ 8.], [ 11.], [ 9.], [ 13.], [ 7.], [ 13.], [ 7.], [ 13.], [ 11.]])
y_train = np.array([[ 44.5825], [ 53.99 ], [ 52.4475], [ 37.6 ], [ 38.6125], [ 39.5875], [ 43.07 ], [ 74.8575], [ 34.185 ], [ 38.61 ], [ 34.8175], [ 36.61 ], [ 34.0675], [ 37.67 ], [ 49.725 ], [ 79.4775], [ 50.41 ], [ 51.26 ]])
X_test = np.array([[ 6.], [ 14.], [ 14.], [ 12.], [ 13.], [ 13.]])
y_test = np.array([[ 55.75 ], [ 33.035 ], [ 38.3275], [ 39.2825], [ 50.7325], [ 45.2575]])

learning_rate = 0.05
training_epochs = 10
n_classes = 1
n_hidden_1 = 5
n_hidden_2 = 5
n_input = 1

x = tf.placeholder(tf.float32, [None, 1])
y = tf.placeholder(tf.float32, [None, 1])

def multilayer_perceptron(x, weights, biases):
    layer_1 = tf.add(tf.matmul(x, weights['h1']), biases['b1'])
    out_layer = tf.add(tf.matmul(layer_1, weights['out']), biases['out'])
    return out_layer

weights = {
    'h1': tf.Variable(tf.random_normal([n_input, n_hidden_1])),
    'out': tf.Variable(tf.random_normal([n_hidden_2, n_classes]))}
biases = {
    'b1': tf.Variable(tf.random_normal([n_hidden_1])),
    'out': tf.Variable(tf.random_normal([n_classes]))}

prediction = multilayer_perceptron(x, weights, biases)
cost = tf.reduce_mean(tf.square(prediction - y))  # MSE
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)  # Gives the smallest cost
init = tf.initialize_all_variables()

with tf.Session() as sess:
    sess.run(init)
    for epoch in range(training_epochs):
        _, c = sess.run([optimizer, cost], feed_dict={x: X_train, y: y_train})
        print("Epoch:", '%02d' % (epoch+1), "cost =", "{:.9f}".format(c))
    print("Optimization finished")
    print(sess.run(prediction, feed_dict={x: X_test, y: y_test}))
It seems to work now. I've got the following results:
Epoch: 01 cost = 1323.519653320
Epoch: 02 cost = 926.386840820
Epoch: 03 cost = 628.072326660
Epoch: 04 cost = 431.689270020
Epoch: 05 cost = 343.259063721
Epoch: 06 cost = 355.978668213
Epoch: 07 cost = 430.280548096
Epoch: 08 cost = 501.149414062
Epoch: 09 cost = 527.575683594
Epoch: 10 cost = 507.708007812
Optimization finished
[[ 30.79703712]
[ 69.70319366]
[ 69.70319366]
[ 59.97665405]
[ 64.83992004]
[ 64.83992004]]
Results may vary due to random initialization of weights.
A couple of tips:
Use a smaller learning rate.
Train over several epochs to see the dynamics.
