The problem is extremely simple: there are just 5 samples. But gradient descent converges extremely slowly, taking a couple of million iterations. Why? Is there a mistake in my algorithm?
P.S. The Julia code is below:
X = [
1.0 34.6237 78.0247;
1.0 30.2867 43.895;
1.0 35.8474 72.9022;
1.0 60.1826 86.3086;
1.0 79.0327 75.3444
]
Y = [0 0 0 1 1]'
sigmoid(z) = 1 / (1 + exp(-z))
# Cost function.
function costJ(Theta, X, Y)
m = length(Y)
H = sigmoid.(X * Theta)  # predictions; equivalent to (Theta' * X')'
sum(-Y' * log.(H) - (1 .- Y)' * log.(1 .- H)) / m
end
# Gradient.
function gradient(Theta, X, Y)
m = length(Y)
H = sigmoid.(X * Theta)  # predictions
X' * (H - Y) / m
end
# Gradient Descent.
function gradientDescent(X, Y, Theta, alpha, nIterations)
m = length(Y)
jHistory = Vector{Float64}(undef, nIterations)
for i = 1:nIterations
jHistory[i] = costJ(Theta, X, Y)
Theta = Theta - alpha * gradient(Theta, X, Y)
end
Theta, jHistory
end
gradientDescent(X, Y, [0 0 0]', 0.0001, 1000)
I think @colinefang's comment may be the right diagnosis. Try plotting jHistory: does it always decrease?
Another thing you can do is add a simple linesearch on each iteration to make sure the cost always decreases, something like:
function linesearch(g, X, Y, Theta; alpha=1.0)
init_cost = costJ(Theta, X, Y)
while costJ(Theta - alpha*g, X, Y) > init_cost
alpha = alpha / 2.0 # or divide by some other constant >1
end
return alpha
end
Then modify the gradient descent function slightly to search over alpha on each iteration:
for i = 1:nIterations
g = gradient(Theta, X, Y)
alpha = linesearch(g,X,Y,Theta)
Theta = Theta - alpha * g
end
There are various performance enhancements you can make to the above code. I just wanted to show you the flavor.
I tried implementing my own linear regression model in Octave with some sample data, but the theta I get does not seem to be correct; it does not match the theta provided by the normal equation, which gives the correct values. Running my model (with different alpha and iterations) on the data from Andrew Ng's machine learning course does give the proper theta for the hypothesis. I have tweaked alpha and the number of iterations so that the cost function decreases. Plotting the cost against iterations shows that the cost decreases and plateaus, but not at a low enough value. Can somebody help me understand why this is happening and what I can do to fix it?
Here is the data (The first column is the x values, and the second column is the y values):
20,48
40,55.1
60,56.3
80,61.2
100,68
Here is the graph of the data and of the lines fitted by gradient descent (GD) and by the normal equation (NE).
Code for the main script:
clear; close all; clc;
%loading the data
data = load("data1.txt");
X = data(:,1);
y = data(:,2);
%Plotting the data
figure
plot(X,y, 'xr', 'markersize', 7);
xlabel("Mass in kg");
ylabel("Length in cm");
X = [ones(length(y),1), X];
theta = ones(2, 1);
alpha = 0.000001; num_iter = 4000;
%Running gradientDescent
[opt_theta, J_history] = gradientDescent(X, y, theta, alpha, num_iter);
%Running Normal equation
opt_theta_norm = pinv(X' * X) * X' * y;
%Plotting the hypothesis for GD and NE
hold on
plot(X(:,2), X * opt_theta);
plot(X(:,2), X * opt_theta_norm, 'g-.', "markersize",10);
legend("Data", "GD", "NE");
hold off
%Plotting values of previous J with each iteration
figure
plot(1:numel(J_history), J_history);
xlabel("iterations"); ylabel("J");
The gradient descent function:
function [theta, J_history] = gradientDescent (X, y, theta, alpha, num_iter)
m = length(y);
J_history = zeros(num_iter,1);
for iter = 1:num_iter
theta = theta - (alpha / m) * (X' * (X * theta - y));
J_history(iter) = computeCost(X, y, theta);
endfor
endfunction
Function for computing cost:
function J = computeCost (X, y, theta)
J = 0;
m = length(y);
errors = X * theta - y;
J = sum(errors .^ 2) / (2 * m);
endfunction
Try alpha = 0.0001 and num_iter = 400000. This will solve your problem!
The problem with your code is that the learning rate is way too small, which slows down convergence. You are also not giving the algorithm enough time to converge: 4000 iterations is very few given such a small learning rate.
In summary, the problem is a learning rate that is too small combined with too few iterations.
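In the main script above, the fix amounts to changing just these two values (same variables as in the question):
alpha = 0.0001; num_iter = 400000;
[opt_theta, J_history] = gradientDescent(X, y, theta, alpha, num_iter);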
I want to ask how this equation, the gradient descent update rule for linear regression,
theta_j := theta_j - alpha * (1/m) * sum_i ( h(x^(i)) - y^(i) ) * x_j^(i)
can be written in Octave this way:
predictions = X * theta;
delta = (1/m) * X' * (predictions - y);
theta = theta - alpha * delta;
I don't understand where the transpose comes from, or how the equation was converted to this form.
The scalar product X·Y is mathematically sum_i (x_i * y_i), and it can be written as X' * Y in Octave when X and Y are column vectors.
There are other ways to write a scalar product in Octave; cf.
https://octave.sourceforge.io/octave/function/dot.html
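For instance, all of the following compute the same scalar product (a small sketch with made-up values):
X = [1; 2; 3];
Y = [4; 5; 6];
X' * Y        % 32
sum(X .* Y)   % 32
dot(X, Y)     % 32, the built-in function from the link above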
The question seems to be, given an example where:
X = randn(m, k); % m 'input' horizontal-vectors of dimensionality k
y = randn(m, n); % m 'target' horizontal-vectors of dimensionality n
theta = randn(k, n); % a (right) transformation from k to n dimensional
% horizontal-vectors
h = X * theta; % creates m rows of n-dimensional horizontal vectors
how is it that the following code
delta = zeros(k, n);
for j = 1 : k % iterating over all dimensions of the input
    for l = 1 : n % iterating over all dimensions of the output
        for i = 1 : m % iterating over all observations for that j,l pair
            delta(j, l) += (1/m) * (h(i, l) - y(i, l)) * X(i, j);
        end
        theta(j, l) = theta(j, l) - alpha * delta(j, l);
    end
end
can be vectorised as:
h = X * theta ;
delta = (1/ m) * X' * (h - y);
theta = theta - alpha * delta;
To confirm such a vectorised formulation makes sense, it always helps to note (e.g. below each line) the dimensions of the objects involved in the matrix / vectorised operations:
h = X * theta ;
% [m, n] [m, k] [k, n]
delta = (1/ m) * X' * (h - y);
% [k, n] [1, 1] [k, m] [m, n]
theta = theta - alpha * delta;
% [k, n] [k,n] [1, 1] [k, n]
Hopefully now it will become more obvious that they are equivalent.
W.r.t. the X' * D calculation (where D = predictions - y), you can see that:
- Multiplying the 1st row of X' by the 1st column of D is exactly summing, for k=1 and n=1, over all m observations, and the result lands at position [k=1, n=1] of the output matrix.
- Moving along the columns of D while still multiplying by the 1st row of X' simply moves along the n dimensions of D, placing each result accordingly in the output.
- Similarly, moving along the rows of X' moves along the k dimensions, performing the same process for every n in D and placing the results accordingly, until you have covered all rows of X' and all columns of D.
If you follow the logic above, you will see that the summations involved are exactly the same as in the for loop formulation, but we managed to avoid using a for loop and use matrix operations instead.
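If you want to convince yourself numerically, here is a small sketch (with made-up dimensions and random data) comparing the loop form of the gradient to the vectorized form:
m = 5; k = 3; n = 2;
X = randn(m, k); y = randn(m, n); theta = randn(k, n);
h = X * theta;
% loop form
delta_loop = zeros(k, n);
for j = 1:k
    for l = 1:n
        for i = 1:m
            delta_loop(j, l) += (1/m) * (h(i, l) - y(i, l)) * X(i, j);
        end
    end
end
% vectorized form
delta_vec = (1/m) * X' * (h - y);
max(abs(delta_loop(:) - delta_vec(:)))   % effectively zero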
I need help completing this function; I am getting an error while trying to compute derJ:
error: X(0,_): subscripts must be either integers 1 to (2^63)-1 or logicals
My code:
function [theta, J_history] = gradientDescent (X, y, theta, alpha, num_iters)
m = length (y); % number of training examples
J_history = zeros (num_iters, 1);
for iter = 1 : num_iters
predictions = X * theta; % hypothesis
% derivative term for cost function
derJ = (1 / m) * sum ( (predictions - y) * X(iter-1, 2) );
% updating theta values
theta = theta - (alpha * derJ);
J_history(iter) = computeCost (X, y, theta);
end
end
Your code states X(iter - 1, 2), but in your for loop iter starts from 1.
Therefore, in the very first iteration, X(iter - 1, 2) evaluates to X(0, 2), and 0 is not a valid index in MATLAB/Octave.
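Beyond the indexing error, note that derJ should involve every row and feature column of X, not a single element selected by the loop counter. A minimal sketch of a corrected, vectorized loop body (keeping the variable names from the question; one possible fix, not the only one):
for iter = 1 : num_iters
    predictions = X * theta;                    % hypothesis, m x 1
    derJ = (1 / m) * (X' * (predictions - y));  % gradient, one entry per theta
    theta = theta - alpha * derJ;               % simultaneous update
    J_history(iter) = computeCost(X, y, theta);
end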
I obtained the following learning curve when plotting the error cost against the number of training examples (in 100s in the graph) for the training and cross validation sets. Can someone please tell me if this learning curve is ever possible? I am under the impression that the cross validation error should decrease as the number of training examples increases.
Learning Curve. Note that the x axis denotes the number of training examples in 100s.
EDIT:
This is the code I use to calculate the 9 values for plotting the learning curves.
X is the 2D matrix of the training set examples. It is of dimensions m x (n+1). y is of dimensions m x 1, and each element has value 1 or 0.
for j=1:9
disp(j)
[theta,J] = trainClassifier(X(1:(j*100),:),y(1:(j*100)),lambda);
[error_train(j), grad] = costprediciton_train(theta , X(1:(j*100),:), y(1:(j*100)));
[error_cv(j), grad] = costfunction_test2(theta , Xcv(1:(j*100),:),ycv(1:(j*100)));
end
The code I use for finding the optimal value of Theta from the training set:
% Train the classifer. Return theta
function [optTheta, J] = trainClassifier(X,y,lambda)
[m,n]=size(X);
initialTheta = zeros(n, 1);
options=optimset('GradObj','on','MaxIter',100);
[optTheta, J, Exit_flag] = fminunc(@(t)(regularizedCostFunction(t, X, y, lambda)), initialTheta, options);
end
%regularized cost
function [J, grad] = regularizedCostFunction(theta, X, y,lambda)
[m,n]=size(X);
h=sigmoid( X * theta);
temp1 = -1 * (y .* log(h));
temp2 = (1 - y) .* log(1 - h);
thetaT = theta;
thetaT(1) = 0;
correction = sum(thetaT .^ 2) * (lambda / (2 * m));
J = sum(temp1 - temp2) / m + correction;
grad = (X' * (h - y)) * (1/m) + thetaT * (lambda / m);
end
The code I use for calculating the error cost for the prediction of results on the training set (the code for the error cost of the CV set is similar):
Theta is of dimensions (n+1) x 1 and consists of the coefficients of the features in the hypothesis function.
function [J,grad] = costprediciton_train(theta , X, y)
[m,n]=size(X);
h=sigmoid(X * theta);
temp1 = y .* log(h);
temp2 = (1-y) .* log(1- h);
J = -sum (temp1 + temp2)/m;
t=h-y;
grad=(X'*t)*(1/m);
end
function [J,grad] = costfunction_test2(theta , X, y)
m= length(y);
h=sigmoid(X*theta);
temp1 = y .* log(h);
temp2 = (1-y) .* log(1- h);
J = -sum (temp1 + temp2)/m ;
grad = (X' * (h - y)) * (1/m) ;
end
The Sigmoid function:
function g = sigmoid(z)
g= zeros(size(z));
den=1 + exp(-1*z);
g = 1 ./ den;
end
I implemented a gradient descent algorithm to minimize a cost function, in order to obtain a hypothesis for determining whether an image has good quality. I did this in Octave. The idea is loosely based on the algorithm from the machine learning class by Andrew Ng.
I have 880 values in "y", ranging from 0.5 to ~12, and 880 values from 50 to 300 in "X" that should predict the image's quality.
Sadly, the algorithm seems to fail: after some iterations the values for theta get so small that theta0 and theta1 become "NaN", and my linear regression curve has strange values...
here is the code for the gradient descent algorithm:
(theta = zeros(2, 1); alpha = 0.01; iterations = 1500)
function [theta, J_history] = gradientDescent(X, y, theta, alpha, num_iters)
m = length(y); % number of training examples
J_history = zeros(num_iters, 1);
for iter = 1:num_iters
tmp_j1 = 0;
for i = 1:m,
    tmp_j1 = tmp_j1 + ((theta(1,1) + theta(2,1)*X(i,2)) - y(i));
end
tmp_j2 = 0;
for i = 1:m,
    tmp_j2 = tmp_j2 + (((theta(1,1) + theta(2,1)*X(i,2)) - y(i)) * X(i,2));
end
tmp1 = theta(1,1) - (alpha * ((1/m) * tmp_j1));
tmp2 = theta(2,1) - (alpha * ((1/m) * tmp_j2));
theta(1,1) = tmp1;
theta(2,1) = tmp2;
% ============================================================
% Save the cost J in every iteration
J_history(iter) = computeCost(X, y, theta);
end
end
And here is the computation of the cost function:
function J = computeCost(X, y, theta) %
m = length(y); % number of training examples
J = 0;
tmp=0;
for i=1:m,
tmp = tmp+ (theta (1,1) + theta (2,1)*X(i,2) - y(i))^2; %differenzberechnung
end
J= (1/(2*m)) * tmp
end
If you are wondering how the seemingly complex-looking for loop can be vectorized and crammed into a single one-line expression, then please read on.
theta = theta - (alpha/m) * (X' * (X * theta - y))
Given below is a detailed explanation of how we arrive at this vectorized expression using the gradient descent algorithm.
This is the gradient descent algorithm to fine-tune the value of θ (repeat until convergence, updating every θj simultaneously):
θj := θj - (α/m) * Σi=1..m ( h(x^(i)) - y^(i) ) * xj^(i)
Assume that the following values of X, y and θ are given:
m = number of training examples
n = number of features + 1
Here
m = 5 (training examples)
n = 4 (features+1)
X = m x n matrix
y = m x 1 column vector
θ = n x 1 column vector
xi is the ith training example
xj is the jth feature in a given training example
Further,
h(x) = ([X] * [θ]) (m x 1 matrix of predicted values for our training set)
h(x)-y = ([X] * [θ] - [y]) (m x 1 matrix of Errors in our predictions)
The whole objective of machine learning is to minimize errors in predictions. Based on the above, our errors matrix E is an m x 1 column vector:
E = [X] * [θ] - [y]
To calculate the new value of θj, we take a summation of all the errors (m rows) multiplied by the jth feature value of the training set X. That is, take all the values in E, individually multiply each of them by the jth feature of the corresponding training example, and add them all together. This will help us get the new (and hopefully better) value of θj. Repeat this process for all j, i.e. for the number of features. In matrix form, this can be written as:
θj := θj - (α/m) * Σi=1..m ( E_i * xj^(i) )
This can be simplified as:
[θ] := [θ] - (α/m) * ( [E]' * [X] )'
[E]' * [X] will give us a row vector, since E' is a 1 x m matrix and X is an m x n matrix. But we are interested in getting a column matrix; hence we transpose the resultant matrix.
More succinctly, it can be written as:
θ := θ - (α/m) * ( E' * X )'
Since (A * B)' = (B' * A'), and A'' = A, we can also write the above as:
θ := θ - (α/m) * ( X' * E )
This is the original expression we started out with:
theta = theta - (alpha/m) * (X' * (X * theta - y))
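A quick numeric check of the transpose identity used above, on made-up random data (a sketch, not part of the original derivation):
m = 5; n = 4;
X = randn(m, n); y = randn(m, 1); theta = randn(n, 1);
E = X * theta - y;       % errors, m x 1
lhs = (E' * X)';         % row-vector form, then transposed
rhs = X' * E;            % succinct form
max(abs(lhs - rhs))      % 0, up to floating point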
I vectorized the theta update... maybe it can help somebody:
theta = theta - (alpha/m * (X * theta-y)' * X)';
I think that your computeCost function is wrong.
I attended Ng's class last year and I have the following (vectorized) implementation:
m = length(y);
J = 0;
predictions = X * theta;
sqrErrors = (predictions-y).^2;
J = 1/(2*m) * sum(sqrErrors);
The rest of the implementation seems fine to me, although you could vectorize it as well:
theta_1 = theta(1) - alpha * (1/m) * sum((X*theta-y).*X(:,1));
theta_2 = theta(2) - alpha * (1/m) * sum((X*theta-y).*X(:,2));
Afterwards you correctly set the temporary thetas (here called theta_1 and theta_2) back to the "real" theta.
Generally it is more useful to vectorize instead of writing loops; vectorized code is less annoying to read and to debug.
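For completeness, the two per-parameter updates above collapse into a single fully vectorized line (the same expression derived in another answer here):
theta = theta - alpha * (1/m) * (X' * (X*theta - y));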
If you are OK with using a least-squares cost function, then you could try using the normal equation instead of gradient descent. It's much simpler (only one line) and computationally faster.
Here is the normal equation:
http://mathworld.wolfram.com/NormalEquation.html
And in octave form:
theta = pinv(X' * X) * X' * y
Here is a tutorial that explains how to use the normal equation: http://www.lauradhamilton.com/tutorial-linear-regression-with-octave
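As a quick sanity check on made-up data (a sketch; the values are illustrative):
X = [ones(5,1), (1:5)'];         % bias column plus one feature
y = 3 + 2*(1:5)';                % points on the exact line y = 3 + 2x
theta = pinv(X' * X) * X' * y    % recovers approximately [3; 2]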
While not scalable like a vectorized version, a loop-based computation of gradient descent should generate the same results. In the example above, the most probable cause of gradient descent failing to compute the correct theta is the value of alpha.
With a verified set of cost and gradient descent functions, and a set of data similar to the one described in the question, theta ends up with NaN values after just a few iterations if alpha = 0.01. However, with alpha = 0.000001, gradient descent works as expected, even after 100 iterations.
Using only vectors, here is a compact implementation of linear regression with gradient descent in Mathematica:
Theta = {0, 0};
alpha = 0.0001;
iteration = 1500;
Jhist = Table[0, {i, iteration}];
Table[
  Theta = Theta - alpha * Dot[Transpose[X], (Dot[X, Theta] - Y)]/m;
  Jhist[[k]] = Total[(Dot[X, Theta] - Y)^2]/(2*m);
  Theta,
  {k, iteration}
];
Note: of course one assumes that X is an n x 2 matrix, with X[[All, 1]] containing only 1s, and that m is the number of samples.
This should work:
theta(1,1) = theta(1,1) - (alpha*(1/m))*((X*theta - y)'* X(:,1) );
theta(2,1) = theta(2,1) - (alpha*(1/m))*((X*theta - y)'* X(:,2) );
It's cleaner this way, and vectorized as well:
predictions = X * theta;
errorsVector = predictions - y;
theta = theta - (alpha/m) * (X' * errorsVector);
If you remember the first PDF file on gradient descent from the machine learning course, you need to take care with the learning rate. Here is the note from the mentioned PDF:
Implementation Note: If your learning rate is too large, J(theta) can diverge and 'blow up', resulting in values which are too large for computer calculations. In these situations, Octave/MATLAB will tend to return NaNs. NaN stands for 'not a number' and is often caused by undefined operations that involve -infinity and +infinity.
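A small sketch (with made-up data, loosely shaped like the question's: feature values in the hundreds) showing how a too-large alpha makes J(theta) blow up while a tiny alpha converges:
% illustrative data, not the asker's actual 880 samples
X = [ones(5,1), [50; 100; 150; 250; 300]];
y = [0.5; 2; 4; 8; 12];
m = length(y);
for alpha = [0.01, 0.000001]
    theta = zeros(2,1);
    for iter = 1:100
        theta = theta - (alpha/m) * (X' * (X*theta - y));
    end
    J = sum((X*theta - y).^2) / (2*m);
    printf("alpha = %g -> J = %g\n", alpha, J);  % the large alpha prints Inf/NaN
end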