Multivariate polynomial approximation of a function in Maxima

I have a long symbolic function in Maxima, say
fn(x,y):=<<some long equation using x and y>>
I would like to compute a polynomial approximation of this function, say
fn_poly(x,y)
over a known range of x and y, with a maximum error e.
I know there is functionality for this in Maxima, e.g. plsquares, but it needs a matrix as input, and I only have the function fn(x,y). I don't know how to generate such a matrix from my function; genmatrix creates a matrix that plsquares cannot use.
Is this possible in Maxima?

Make a list of lists of [X, Y, f(X,Y)] sample points and transform it into a matrix.
load(plsquares);
f(x,y):=x^2+y^3;
mat:makelist(makelist([X,Y,f(X,Y)],X,1,10,2),Y,1,10,2);
-> [[[1,1,2],[3,1,10],[5,1,26],[7,1,50],[9,1,82]],[[1,3,28],[3,3,36],[5,3,52],[7,3,76],[9,3,108]],[[1,5,126],[3,5,134],[5,5,150],[7,5,174],[9,5,206]],[[1,7,344],[3,7,352],[5,7,368],[7,7,392],[9,7,424]],[[1,9,730],[3,9,738],[5,9,754],[7,9,778],[9,9,810]]]
mat2:[];
for i:1 thru length(mat) do mat2:append(mat2,mat[i]);
mat3:funmake('matrix,mat2);
-> matrix([1,1,2],[3,1,10],[5,1,26],[7,1,50],[9,1,82],[1,3,28],[3,3,36],[5,3,52],[7,3,76],[9,3,108],[1,5,126],[3,5,134],[5,5,150],[7,5,174],[9,5,206],[1,7,344],[3,7,352],[5,7,368],[7,7,392],[9,7,424],[1,9,730],[3,9,738],[5,9,754],[7,9,778],[9,9,810])
ZZ:rhs(plsquares(mat3,[X,Y,Z],Z,3,3));
-> Determination Coefficient for Z = 1.0
-> Y^3+X^2

Plotting piecewise function with Fourier series in wxMaxima

I'd like to plot the following piecewise function in wxMaxima, for given values of the constants: a Fourier series for t >= t_0, and the constant A_0 for t < t_0.
Here's my current input in wxMaxima:
a_1(t):=A_0+sum(A_n*cos(n*ω*(t-t_0))+B_n*sin(n*ω*(t-t_0)), n, 1, N);
a_2(t):=A_0;
a(t):=if(is(t>=t_0)) then a_1(t) else a_2(t);
N=2$
ω=31.416$
t_0=-0.1614$
A_0=0$
A_1=0.227$
B_1=0$
A_2=0.413$
B_2=0$
plot2d([a(t)], [t,0,0.5])$
Unfortunately, it doesn't work: I get the error "expression evaluates to non-numeric value everywhere in plotting range". What can I do to make it work? Is it possible to plot this function in wxMaxima?
UPDATE: It works with the modifications suggested by Robert Dodier: the constants are assigned with : instead of =, the coefficients are indexed variables A[n] and B[n] so the sum can look them up, and the if condition is written without is():
a_1(t):=A[0]+sum(A[n]*cos(n*ω*(t-t_0))+B[n]*sin(n*ω*(t-t_0)), n, 1, N);
a_2(t):=A[0];
a(t):=if t>=t_0 then a_1(t) else a_2(t);
N:2$
ω:31.416$
t_0:-0.1614$
A[0]:0$
A[1]:0.227$
B[1]:0$
A[2]:0.413$
B[2]:0$
wxplot2d([a(t)], [t,0,0.5], [ylabel,"a"])$

Getting the error "dtw() got an unexpected keyword argument 'dist'" while calculating dtw of 2 voice samples

I am getting the error "dtw() got an unexpected keyword argument 'dist'" while I'm trying to calculate the dtw of 2 wav files. I can't figure out why or what to do to fix it. I am attaching the code below.
import librosa
import librosa.display
y1, sr1 = librosa.load('sample_data/Abir_Arshad_22.wav')
y2, sr2 = librosa.load('sample_data/Abir_Arshad_22.wav')
%pylab inline
subplot(1, 2, 1)
mfcc1 = librosa.feature.mfcc(y1, sr1)
librosa.display.specshow(mfcc1)
subplot(1, 2, 2)
mfcc2 = librosa.feature.mfcc(y2, sr2)
librosa.display.specshow(mfcc2)
from dtw import dtw
from numpy.linalg import norm
dist, cost, acc_cost, path = dtw(mfcc1.T, mfcc2.T, dist=lambda x, y: norm(x - y, ord=1))
print ('Normalized distance between the two sounds:', dist)
The error occurs in the second-to-last line.
The error message is straightforward. Let's read the docs of the method you are calling:
https://dynamictimewarping.github.io/py-api/html/api/dtw.dtw.html#dtw.dtw
The dtw function has the following parameters:
x – query vector or local cost matrix
y – reference vector, unused if x is given as a cost matrix
dist_method – pointwise (local) distance function to use
step_pattern – a stepPattern object describing the local warping steps allowed with their cost (see stepPattern())
window_type – windowing function. Character: “none”, “itakura”, “sakoechiba”, “slantedband”, or a function (see details)
open_begin, open_end – perform open-ended alignments
keep_internals – preserve the cumulative cost matrix, inputs, and other internal structures
distance_only – only compute distance (no backtrack, faster)
You are trying to pass an argument named dist, and that argument simply does not exist.
Removing that argument resolves the immediate error, for example:
dist, cost, acc_cost, path = dtw(mfcc1.T, mfcc2.T)
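If you still want an L1 (Manhattan) pointwise distance like in your original lambda, the parameter list above suggests dist_method is the place for it. Here is a minimal, hedged sketch assuming the dtw-python package from the linked docs; note that its dtw() returns a single alignment object rather than a tuple, so the distance is read from an attribute:
from dtw import dtw  # dtw-python package (the one from the linked docs)

# "cityblock" is the L1 norm, standing in for the original lambda;
# keep_internals retains the cost matrices in case you need them later.
alignment = dtw(mfcc1.T, mfcc2.T, dist_method="cityblock", keep_internals=True)
print("Normalized distance between the two sounds:", alignment.normalizedDistance)
This reuses mfcc1 and mfcc2 from the snippet above and is only a sketch of how the documented parameters fit together, not a drop-in replacement verified against your data.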

Compute chi-square distance in Python

I'm using the KNN model from sklearn (documentation:
https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsClassifier.html) to train a model for image classification. As you can see in the documentation, there is no built-in option to use the chi-square distance as the metric of KNeighborsClassifier, but there is an option to pass a callable, so I can pass a function I wrote myself to compute the chi-square distance. So I tried to write my own function.
I know that for two images A and B, the chi-square distance is given by this formula:
chi2(A, B) = 0.5 * sum_i ((A_i - B_i)^2 / (A_i + B_i))
My task is to solve this without for loops, because they take too long; I need a vectorized solution. The data passed to the chi-square function are images represented as NumPy arrays, so I can do elementwise math on them without loops: for example, for A=[1,2,3] and B=[3,4,5], A+B simply gives [4,6,8]. I need to compute the chi-square distance the same way.
Anyway, when I tried, for example, this function:
def chi2(A, B):
    # compute the chi-squared distance using the above formula
    chi = 0.5 * (((A - B) ** 2) / (A + B))
    return chi
to compute the chi-square, I get an error, and if I try other similar functions, for example this code:
def chi2_distance(A, B):
    # compute the chi-squared distance using the above formula
    chi = 0.5 * np.sum([((a - b) ** 2) / (a + b)
                        for (a, b) in zip(A, B)])
    return chi
I get warnings:
RuntimeWarning: invalid value encountered in double_scalars
  for (a, b) in zip(A, B)])
and the program runs practically forever.
Any suggestions for efficient code to compute the chi-square distance (as I said, without loops)?
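For what it's worth, a minimal vectorized sketch of the formula above; the eps guard is an addition of mine (not part of the original formula) to avoid dividing by zero when a pixel or bin is zero in both images, and it assumes the callable metric should return a single scalar per pair of flattened image vectors:
import numpy as np

def chi2_distance(A, B, eps=1e-10):
    # 0.5 * sum((A - B)^2 / (A + B)), computed elementwise with NumPy, no loops;
    # eps is an assumed guard against division by zero, not part of the formula
    A = np.asarray(A, dtype=float)
    B = np.asarray(B, dtype=float)
    return 0.5 * np.sum((A - B) ** 2 / (A + B + eps))
Something along these lines should be usable as metric=chi2_distance in KNeighborsClassifier, since the callable receives two 1-D arrays and returns a scalar distance.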

Nonlinear (non-polynomial) cost function with DirectCollocation in Drake

I am trying to formulate a trajectory optimization problem for a glider, where I want to maximize the average horizontal velocity. I have formulated the system as a Drake system, and the state vector consists of the position and velocity.
Currently, I have something like the following:
dircol = DirectCollocation(
    plant,
    context,
    num_time_samples=N,
    minimum_timestep=min_dt,
    maximum_timestep=max_dt,
)
... # other constraints etc
horisontal_pos = dircol.state()[0:2] # Only (x,y)
time = dircol.time()
dircol.AddFinalCost(-w.T.dot(horisontal_pos) / time)
where AddFinalCost() should replace all instances of state() and time() with the final values, as far as I understand from the documentation. min_dt is non-zero and w is a vector of linear weights.
However, I am getting the following error message
Expression (...) is not a polynomial. ParseCost does not support non-polynomial expression.
which makes me think that there is no way of adding the type of cost function that I am looking for. Is there anything that I am missing?
Thank you in advance!
When calling AddFinalCost(e) with e being a symbolic expression, we can only handle it when e is a polynomial function of the state (more precisely, either a quadratic function or a linear function). Hence the error you see complaining that the cost is not polynomial.
You could add the cost like this
def average_speed(v):
    x = v[0]
    time_steps = v[1:]
    return x / np.sum(time_steps)

h_vars = [dircol.timestep(i) for i in range(N-1)]
dircol.AddCost(average_speed, vars=[dircol.state(N-1)[0]] + h_vars)
which uses the function average_speed to evaluate the average speed. You can find an example of doing this in https://github.com/RobotLocomotion/drake/blob/e5f3c3e5f7927ef675066d97d3afac55d3481305/bindings/pydrake/solvers/test/mathematicalprogram_test.py#L590
First, the cost function should be a scalar, but you pass the vector-valued horisontal_pos / time, which has two entries, position_x / dt and position_y / dt; that is, the cost is a vector. You should instead provide a scalar-valued cost.
Second, it is unclear to me why you divide by time in the final cost. As far as I understand it, you want the final position to be close to the origin, so something like position_x² + position_y². The code can look like
dircol.AddFinalCost(horisontal_pos[0]**2 + horisontal_pos[1]**2)

How tf.gradients works in TensorFlow

Given that I have a linear model like the following, I would like to get the gradient vector with respect to W and b.
# tf Graph Input
X = tf.placeholder("float")
Y = tf.placeholder("float")
# Set model weights
W = tf.Variable(rng.randn(), name="weight")
b = tf.Variable(rng.randn(), name="bias")
# Construct a linear model
pred = tf.add(tf.mul(X, W), b)
# Mean squared error
cost = tf.reduce_sum(tf.pow(pred-Y, 2))/(2*n_samples)
However, if I try something like this, where cost is a function cost(x,y,w,b) and I only want the gradients with respect to w and b:
grads = tf.gradients(cost, tf.all_variables())
My placeholders will also be included (X and Y).
Even if I do get a gradient over [x,y,w,b], how do I know which element of the gradient belongs to which parameter, since it is just a list with no names indicating which parameter each derivative was taken with respect to?
In this question I'm using parts of this code and I build on this question.
Quoting the docs for tf.gradients
Constructs symbolic partial derivatives of sum of ys w.r.t. x in xs.
So, this should work:
dc_dw, dc_db = tf.gradients(cost, [W, b])
Here, tf.gradients() returns the gradient of cost wrt each tensor in the second argument as a list in the same order.
Read tf.gradients for more information.
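For completeness, a minimal sketch of evaluating those gradients in TF 1.x-style graph mode, as in the question. It reuses X, Y, W, b, and cost from the snippet above; x_batch and y_batch are hypothetical NumPy arrays standing in for your training data:
dc_dw, dc_db = tf.gradients(cost, [W, b])  # returned in the same order as [W, b]

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # x_batch, y_batch: hypothetical training arrays, not defined in the question
    grad_w, grad_b = sess.run([dc_dw, dc_db],
                              feed_dict={X: x_batch, Y: y_batch})
    print("dcost/dW:", grad_w, " dcost/db:", grad_b)
Because the returned list follows the order of the second argument, grad_w is unambiguously the derivative with respect to W and grad_b the derivative with respect to b.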
