Maxima wrongly says a matrix is non-invertible

The matrix below is clearly invertible modulo 3, but Maxima returns false when I try to obtain its inverse.
0 1
1 1
What is wrong here?
M: matrix([0,1],[1,1]);
zn_invert_by_lu(M, 3);
According to the documentation:
zn_invert_by_lu (matrix, p)
Uses the technique of LU-decomposition to compute the modular inverse of matrix
over (Z/pZ). p must be a prime and matrix invertible. zn_invert_by_lu returns
false if matrix is not invertible.
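For what it's worth, the matrix really is invertible: det(M) = -1 ≡ 2 (mod 3). One way to double-check this and to obtain the inverse without zn_invert_by_lu is the adjugate formula M^-1 = det(M)^-1 * adj(M) (mod p); a minimal sketch using the core functions determinant, adjoint, inv_mod, and matrixmap:
M : matrix([0,1],[1,1])$
p : 3$
d : mod(determinant(M), p);   /* 2, nonzero, so M is invertible mod 3 */
Minv : matrixmap(lambda([e], mod(e, p)), inv_mod(d, p) * adjoint(M));
/* -> matrix([2,1],[1,0]); mod(M . Minv, p) gives the identity */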

Related

Plotting piecewise function with Fourier series in wxMaxima

I'd like to plot the following piecewise function, defined by a Fourier series, in wxMaxima for given values of the constants.
Here's my current input in wxMaxima:
a_1(t):=A_0+sum(A_n*cos(n*ω*(t-t_0))+B_n*sin(n*ω*(t-t_0)), n, 1, N);
a_2(t):=A_0;
a(t):=if(is(t>=t_0)) then a_1(t) else a_2(t);
N=2$
ω=31.416$
t_0=-0.1614$
A_0=0$
A_1=0.227$
B_1=0$
A_2=0.413$
B_2=0$
plot2d([a(t)], [t,0,0.5])$
Unfortunately, it doesn't work: I get an "expression evaluates to non-numeric value everywhere in plotting range" error. What can I do to make it work? Is it possible to plot this function in wxMaxima?
UPDATE: It works with the modifications suggested by Robert Dodier:
a_1(t):=A[0]+sum(A[n]*cos(n*ω*(t-t_0))+B[n]*sin(n*ω*(t-t_0)), n, 1, N);
a_2(t):=A[0];
a(t):=if t>=t_0 then a_1(t) else a_2(t);
N:2$
ω:31.416$
t_0:-0.1614$
A[0]:0$
A[1]:0.227$
B[1]:0$
A[2]:0.413$
B[2]:0$
wxplot2d([a(t)], [t,0,0.5], [ylabel,"a"])$
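The key differences, as I understand them: in Maxima, : assigns a value while = merely builds an equation (so N=2$ never bound N); subscripted names such as A[n] can be indexed by the summation variable inside sum, whereas A_1 and A_n are unrelated symbols; and the if condition works without wrapping it in is(). A minimal illustration:
N = 2;        /* an equation object; N stays unbound */
N : 2;        /* an assignment; N now evaluates to 2 */
A[1] : 0.227$ /* subscripted value, reachable as A[n] when n runs in sum */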

Multivariate polynomial approximation of a function in Maxima

I have a long symbolic function in Maxima, say
fn(x,y):=<<some long equation using x and y>>
I would like to compute a polynomial approximation of this function, say
fn_poly(x,y)
within a known range of x and y and with a maximum error e.
I know there is functionality for this in Maxima, e.g. plsquares, but it needs a matrix as input and I only have the function fn(x,y). I don't know how to generate this matrix from my function; genmatrix creates a matrix that plsquares cannot use.
Is this possible in Maxima?
Make a list of lists and transform it into a matrix.
load(plsquares);
f(x,y):=x^2+y^3;
mat:makelist(makelist([X,Y,f(X,Y)],X,1,10,2),Y,1,10,2);
-> [[[1,1,2],[3,1,10],[5,1,26],[7,1,50],[9,1,82]],[[1,3,28],[3,3,36],[5,3,52],[7,3,76],[9,3,108]],[[1,5,126],[3,5,134],[5,5,150],[7,5,174],[9,5,206]],[[1,7,344],[3,7,352],[5,7,368],[7,7,392],[9,7,424]],[[1,9,730],[3,9,738],[5,9,754],[7,9,778],[9,9,810]]]
mat2:[];
for i:1 thru length(mat) do mat2:append(mat2,mat[i]);
mat3:funmake('matrix,mat2);
-> matrix([1,1,2],[3,1,10],[5,1,26],[7,1,50],[9,1,82],[1,3,28],[3,3,36],[5,3,52],[7,3,76],[9,3,108],[1,5,126],[3,5,134],[5,5,150],[7,5,174],[9,5,206],[1,7,344],[3,7,352],[5,7,368],[7,7,392],[9,7,424],[1,9,730],[3,9,738],[5,9,754],[7,9,778],[9,9,810])
ZZ:rhs(plsquares(mat3,[X,Y,Z],Z,3,3));
-> Determination Coefficient for Z = 1.0
-> Y^3+X^2
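The flattening loop and the funmake call can also be written more compactly; a sketch that should behave the same (apply spreads the row lists as arguments to append and matrix):
load(plsquares)$
f(x,y) := x^2 + y^3$
rows : apply(append, makelist(makelist([X, Y, f(X,Y)], X, 1, 10, 2), Y, 1, 10, 2))$
mat3 : apply('matrix, rows)$
ZZ : rhs(plsquares(mat3, [X,Y,Z], Z, 3, 3));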

The number of parameters in Gaussian mixture model

I have D-dimensional data and a mixture with K components.
How many parameters are there if I use a model with full covariance matrices?
And how many if I use diagonal covariance matrices?
Answer by xyLe_ at CrossValidated
https://stats.stackexchange.com/a/229321/127305
Simply do the math.
For each Gaussian you have:
1. A symmetric full D×D covariance matrix, giving (D*D - D)/2 + D parameters ((D*D - D)/2 off-diagonal elements plus D diagonal elements)
2. A D-dimensional mean vector, giving D parameters
3. A mixing weight, giving another parameter
This results in Df = (D*D - D)/2 + 2D + 1 parameters for each Gaussian.
With K components, you have K*Df parameters in total.
In the diagonal case the covariance matrix contributes only D parameters, because of the absence of off-diagonal elements, thus yielding Df = 2D + 1.
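As a quick sanity check (my own numbers, not part of the quoted answer): with D = 3 and K = 2, the full-covariance model has Df = (9 - 3)/2 + 2*3 + 1 = 10 parameters per component, so 20 in total, while the diagonal model has Df = 2*3 + 1 = 7 per component, so 14 in total.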

Why is the norm of the translation obtained from recoverPose always 1?

I am using OpenCV to estimate the relative pose of a stereo pair. I do so by calculating the essential matrix, decomposing it, and then performing a cheirality check. The latter steps are wrapped up in OpenCV's recoverPose API.
Mat E, R, t, mask;
E = findEssentialMat(points1, points2, focal, pp, RANSAC, 0.999, 1.0, mask);
recoverPose(E, points1, points2, R, t, focal, pp, mask);
The focal length I give to both methods is greater than 1. My question is, why is the norm of the translation vector I get back always equal to 1?
The reason for this can be found in the documentation for decomposeEssentialMat.
By decomposing E, you can only get the direction of the translation, so the function returns unit t.
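A one-line way to see why (my summary, not from the documentation): the essential matrix satisfies E = [t]× R and is itself only estimated up to an arbitrary nonzero scale, since λE satisfies the same epipolar constraint x2' E x1 = 0. Scaling E corresponds to scaling t, so only the direction of t is observable, and recoverPose conventionally returns it as a unit vector; metric scale must come from additional information, such as a known baseline.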

Logistic Regression using Gradient Descent with Octave

I've gone through a few of Professor Andrew Ng's machine learning courses and viewed the transcript for logistic regression using Newton's method. However, when implementing logistic regression using gradient descent I face a certain issue.
The graph generated is not convex.
My code goes as follows. I am using the vectorized implementation of the equations.
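For reference, these are the standard vectorized formulas being implemented (my transcription):
h = g(x*theta) = 1 ./ (1 + exp(-x*theta))
J(theta) = (1/m) * (-y' * log(h) - (1 - y)' * log(1 - h))
grad = (1/m) * x' * (h - y)
theta := theta - alpha * grad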
%1. Load the data from your desktop into Octave's memory.
x = load('ex4x.dat');
y = load('ex4y.dat');
%2. Add a column x0 of ones to the matrix.
m = length(y);
x = [ones(m,1), x];
alpha = 0.1;
max_iter = 100;
g = inline('1.0 ./ (1.0 + exp(-z))');  % sigmoid function
theta = zeros(size(x(1,:)))';  % theta must be 3x1 so it can multiply the m-by-3 matrix x
j = zeros(max_iter,1);         % stores the cost function J(theta) at each iteration
for num_iter = 1:max_iter
    % The hypothesis is recomputed inside the loop because theta changes at every iteration.
    z = x * theta;
    h = g(z);  % the inline sigmoid defined above takes effect here
    j(num_iter) = (1/m) * (-y' * log(h) - (1 - y)' * log(1 - h));  % vectorized cost function J(theta)
    j
    grad = (1/m) * x' * (h - y);    % gradient of the cost
    theta = theta - alpha .* grad;  % gradient descent update
    theta
end
The code per se doesn't give any error, but it does not produce a proper convex graph.
I would be glad if anybody could point out the mistake or share insight into what's causing the problem.
Thanks.
Two things you need to look into:
1. Machine learning involves learning patterns from data. If your files ex4x.dat and ex4y.dat are randomly generated, they won't have patterns that you can learn.
2. You have used variable names like g, h, i, j, which make debugging difficult. Since it's a very small program, it might be a better idea to rewrite it.
Here's my code (it uses Newton's method rather than plain gradient descent) that gives the convex plot:
clc; clear; close all;
load q1x.dat;
load q1y.dat;
X = [ones(size(q1x,1),1) q1x];
Y = q1y;
m = size(X,1);
n = size(X,2) - 1;
% initialize
theta = zeros(n+1,1);
thetaold = ones(n+1,1);
while ((theta - thetaold)' * (theta - thetaold)) > 0.0000001
    % calculate the gradient of the log-likelihood
    dellltheta = zeros(n+1,1);
    for j = 1:n+1
        for i = 1:m
            dellltheta(j,1) = dellltheta(j,1) + (Y(i,1) - 1/(1 + exp(-theta'*X(i,:)'))) * X(i,j);
        end
    end
    % calculate the Hessian
    H = zeros(n+1, n+1);
    for j = 1:n+1
        for k = 1:n+1
            for i = 1:m
                H(j,k) = H(j,k) - (1/(1 + exp(-theta'*X(i,:)'))) * (1 - 1/(1 + exp(-theta'*X(i,:)'))) * X(i,j) * X(i,k);
            end
        end
    end
    thetaold = theta;
    theta = theta - inv(H) * dellltheta;  % Newton's method update
    (theta - thetaold)' * (theta - thetaold)
end
I get the following error values over the iterations:
2.8553
0.6596
0.1532
0.0057
5.9152e-06
6.1469e-12
Plotted over the iterations, these errors drop to essentially zero within six steps. (Plot not reproduced here.)
