Linear Regression - Implementing Feature Scaling - machine-learning

I was trying to implement Linear Regression in Octave 5.1.0 on a data set relating the GRE score to the probability of Admission.
The data set looks like this:
337 0.92
324 0.76
316 0.72
322 0.8
. . .
My main Program.m file looks like this:
% read the data
data = load('Admission_Predict.txt');
% initiate variables
x = data(:,1);
y = data(:,2);
m = length(y);
theta = zeros(2,1);
alpha = 0.01;
iters = 1500;
J_hist = zeros(iters,1);
% plot data
subplot(1,2,1);
plot(x,y,'rx','MarkerSize', 10);
title('training data');
% compute cost function
x = [ones(m,1), (data(:,1) ./ 300)]; % feature scaling
J = computeCost(x,y,theta);
% run gradient descent
[theta, J_hist] = gradientDescent(x,y,theta,alpha,iters);
hold on;
subplot(1,2,1);
plot((x(:,2) .* 300), (x*theta),'-');
xlabel('GRE score');
ylabel('Probability');
hold off;
subplot (1,2,2);
plot(1:iters, J_hist, '-b');
xlabel('no: of iteration');
ylabel('Cost function');
computeCost.m looks like this:
function J = computeCost(x, y, theta)
  m = length(y);                      % number of training examples
  h = x * theta;                      % vector of predictions
  J = (1/(2*m)) * sum((h - y) .^ 2);  % mean squared error cost
endfunction
and gradientDescent.m looks like this:
function [theta, J_hist] = gradientDescent(x, y, theta, alpha, iters)
  m = length(y);
  J_hist = zeros(iters, 1);
  for i = 1:iters
    diff = (x*theta - y);                          % prediction errors
    theta = theta - (alpha * (1/m)) * (x' * diff); % simultaneous update of both thetas
    J_hist(i) = computeCost(x, y, theta);          % track cost per iteration
  endfor
endfunction
The plotted graphs then look like this,
which, as you can see, doesn't feel right even though my cost function seems to be minimized.
Can someone please tell me if this is right? If not, what am I doing wrong?

The easiest way to check whether your implementation is correct is to compare it with a validated implementation of linear regression. I suggest using an alternative implementation approach, like the one suggested here, and then comparing your results. If the fits match, then this is the best linear fit to your data; if they don't match, then there may be something wrong in your implementation.
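For example, a minimal sketch of such a comparison using the normal equation (reusing the variables from Program.m above; the normal equation gives the closed-form least-squares solution, so its line is a reference your gradient-descent fit should match):
theta_ne = pinv(x' * x) * x' * y;  % x already holds the ones column and the scaled feature
subplot(1,2,1);
hold on;
plot((x(:,2) .* 300), x * theta_ne, 'g--');  % reference line from the normal equation
hold off;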

Related

Gradient function not able to find optimal theta but normal equation does

I tried implementing my own linear regression model in Octave with some sample data, but the theta does not seem to be correct and does not match the theta provided by the normal equation, which gives the correct values. Running my model (with different alpha and iterations) on the data from Andrew Ng's machine learning course, however, gives the proper theta for the hypothesis. I have tweaked alpha and the iterations so that the cost function decreases. This is the image of the cost function against iterations. As you can see, the cost decreases and plateaus, but not to a low enough cost. Can somebody help me understand why this is happening and what I can do to fix it?
Here is the data (The first column is the x values, and the second column is the y values):
20,48
40,55.1
60,56.3
80,61.2
100,68
Here is the graph of the data and the equations plotted by gradient descent (GD) and by the normal equation (NE).
Code for the main script:
clear; close all; clc;
%loading the data
data = load("data1.txt");
X = data(:,1);
y = data(:,2);
%Plotting the data
figure
plot(X,y, 'xr', 'markersize', 7);
xlabel("Mass in kg");
ylabel("Length in cm");
X = [ones(length(y),1), X];
theta = ones(2, 1);
alpha = 0.000001; num_iter = 4000;
%Running gradientDescent
[opt_theta, J_history] = gradientDescent(X, y, theta, alpha, num_iter);
%Running Normal equation
opt_theta_norm = pinv(X' * X) * X' * y;
%Plotting the hypothesis for GD and NE
hold on
plot(X(:,2), X * opt_theta);
plot(X(:,2), X * opt_theta_norm, 'g-.', "markersize",10);
legend("Data", "GD", "NE");
hold off
%Plotting values of previous J with each iteration
figure
plot(1:numel(J_history), J_history);
xlabel("iterations"); ylabel("J");
Function for finding gradientDescent:
function [theta, J_history] = gradientDescent (X, y, theta, alpha, num_iter)
  m = length(y);
  J_history = zeros(num_iter, 1);
  for iter = 1:num_iter
    theta = theta - (alpha / m) * (X' * (X * theta - y));
    J_history(iter) = computeCost(X, y, theta);
  endfor
endfunction
Function for computing cost:
function J = computeCost (X, y, theta)
  m = length(y);
  errors = X * theta - y;
  J = sum(errors .^ 2) / (2 * m);
endfunction
Try alpha = 0.0001 and num_iter = 400000. This will solve your problem!
Now, the problem with your code is that the learning rate is way too small, which slows down convergence. You are also not giving it enough time to converge by limiting the training iterations to 4000, which is far too few for such a small learning rate.
Summarising, the problem is: too small a learning rate combined with too few iterations.
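As a quick check, something like this (a sketch reusing the script's variables and functions from the question) should now produce nearly identical thetas from both methods:
alpha = 0.0001; num_iter = 400000;
[opt_theta, J_history] = gradientDescent(X, y, theta, alpha, num_iter);
opt_theta_norm = pinv(X' * X) * X' * y;
disp([opt_theta opt_theta_norm]); % the two columns should now nearly match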

Gradient Descent produces incorrect Thetas in octave

I'm trying out a prediction algorithm using polynomial regression of the form h(x) = theta0 + theta1 * x1 + theta2 * x2, where x2=x1^2
I'm calculating the thetas with two methods, to compare the results: Normal Equation vs. Gradient Descent. Then I plot the regression line for both methods for scores from 65 to 100, to see how each fits my data.
When calculating thetas using Normal Equation, all seems to be working as expected. In the graph below, "x" is the actual scores, and "o" is the predicted scores.
However, when calculating thetas using Gradient Descent, the resulting regression line does not fit my data. It looks like this:
While minimizing my Cost Function, I'm plotting the Gradient Descent iterations over J, to confirm that values converge. This seems to be working correctly:
Here's my code:
function [theta_normalEq, theta_gradientDesc] = a1_LinearRegression()
clear();
% suppose you want to fit a model of the form h(x) = theta0 + theta1 * x1 + theta2 * x2
% where x1 is the midterm score and x2 is (midterm score)^2
midTerm = [89; 72; 94; 69]; % mid-term Exam scores
midTerm2 = midTerm .^ 2; % same as above but with each element squared (the dot means "element-wise"; without it, ^2 would mean matrix multiplication)
X = [midTerm midTerm2]; % concatenate the two vectors into a single matrix
y = [96; 74; 87; 78]; %final Exam scores
% Method A:
% calculate theta (bias for each independent variable) using Normal Equation
% This works in some cases only (see comments in corresponding function below)
theta_normalEq = normalEquation(X, y);
% Method B:
% Use Gradient Descent
theta_gradientDesc = gradientDescent(X, y, 1.3, 60);
% plot regression line to see visually how it fits with our data
plotRegressionLine(midTerm, y, X, theta_gradientDesc);
% clear unneeded variables for a tidy output window
clear ('midTerm', 'midTerm2');
endfunction
% plots a regression line to see visually how it fits with our data
function plotRegressionLine(midTerm, y, X, theta)
% Our X matrix is n-long, but our theta is n+1 (remember we are modeling h(x) = theta0 + theta1 * x1 + theta2 * x2)
% Therefore we will introduce an X0 and set it to x0 = 1 for all values of i, so that we can do matrix operations with theta and X.
% This makes the two vectors 'theta' and x(i) match each other element-wise (that is, have the same number of elements: n+1).
X0 = ones(rows(X),1);
X = [X0 X]; % concatenation; X had 2 columns, now it has 3. The very first column now consists of 'ones'
clear ('X0'); % just clears the variable
% with our thetas calculated, we can now plug them in our original model to make predictions
% model form: h(x) = theta0 + theta1 * x1 + theta2 * x2
% vectorized version: h(x) = X * theta
y_predicted = X * theta;
% let's also calculate the points for all possible scores, to draw a regression line
scoreMin = 65;
scoreMax = 100;
step = 0.1;
scores = (scoreMin: step: scoreMax)';
scoresX = [ones(rows(scores),1) scores scores.^2];
scoresY_predicted = scoresX * theta;
% plot
figure 2;
clf;
hold on;
plot(midTerm, y, "x"); % draws our actual data points
plot(midTerm, y_predicted, "or"); % draws our predicted data points
plot(scores, scoresY_predicted, "r"); % draws our calculated regression line
hold off;
endfunction
% Performs gradient descent to learn theta. Updates theta by taking num_iters gradient steps with learning rate alpha
%
% X = matrix of independent variables (e.g., size of house, number of bedrooms, number of bathrooms, etc)
% y = vector of dependent variables (e.g., cost of house)
% alpha = the rate of learning
% num_iters = number of iterations to try finding the optimum theta
%
% Start by trying out a random alpha, like 0.1 or 1.
% If alpha is too small, it will take too long to minimize J and see values converging (too many iterations)
% If alpha is too large, we will overshoot the function minimum and values will start increasing again
% Ideally we want as large an alpha as possible, so we discover the function minimum in as few iterations as possible, without overshooting the minimum
%
% We also want a number of iterations that is enough, but not too many. Depending on our problem and data, this can be from 30 to 300 to 3000 to 3 million, or more.
% In practice, we plot J against the number of iterations as we go along the loop, to discover experimentally the optimal values for 'alpha' and 'num_iters'
% The graph we are looking for looks like a hockey stick of decreasing values that flattens out horizontally. When J no longer decreases (the flat horizontal part), we have converged.
%
function theta = gradientDescent(X, y, alpha, num_iters)
% NORMALIZE FEATURES
% We can speed up gradient descent by having each of our input values in roughly the same range
% This is because θ will descend quickly on small ranges and slowly on large ranges, and so will oscillate inefficiently down to the optimum when the variables are very uneven.
% The way to prevent this is to modify the ranges of our input variables so that they are all roughly the same
% zscore() normalizes each feature (each column) independently, which is what we want: (value - mean of values for that column) / standard deviation of that column
X = zscore(X);
y = zscore(y);
% Our X matrix is n-long, but our theta is n+1 (remember we are modeling h(x) = theta0 + theta1 * x1 + theta2 * x2)
% Therefore we will introduce an X0 and set it to x0 = 1 for all values of i, so that we can do matrix operations with theta and X.
% This makes the two vectors 'theta' and x(i) match each other element-wise (that is, have the same number of elements: n+1).
X0 = ones(rows(X),1);
X = [X0 X]; % concatenation; X had 2 columns, now it has 3. The very first column now consists of 'ones'
clear ('X0'); % just clears the variable
% number of training examples
m = length(y);
% save the cost J in every iteration in order to plot J vs. num_iters and check for convergence
J_history = zeros(num_iters, 1);
% We start with a random set of thetas.
% Gradient Descent improves them at each iteration until values converge
% NOTE: do not use randomMatrix() to initialize. Rather, hard code random values so that they are identical at each run attempt,
% to help us experiment with different sets of 'alpha' & 'num_iters' until we discover their optimal values.
%theta = randomMatrix(columns(X), 1, 0, 1);
theta = [0;0;0];
for iter = 1:num_iters
h = X * theta;
stderr = h - y;
theta = theta - (alpha/m) * X' * stderr;
J_history(iter) = computeCost(X, y, theta);
endfor
% plot J vs. num_iters and check for convergence
xAxis = 1:1:num_iters; % create vector from 1 to num_iters with step 1
figure 1;
clf;
plot(xAxis, J_history);
endfunction
% These two functions give identical results, but maybe one runs faster than the other
function J = computeCost(X, y, theta)
m = length(y); % number of training examples
J = 1/(2*m) * sum( ( X*theta - y) .^ 2);
endfunction
%
function J = computeCostVectorized(X, y, theta)
m = length(y); % number of training examples
J = 1/(2*m) * (X*theta - y)' * (X*theta - y);
endfunction
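% (Aside, illustrative: both cost versions should agree on any input;
% e.g. with Xc = [ones(3,1) (1:3)' ((1:3)').^2], yc = [1;2;3], t = [0.5;1;2],
% computeCost(Xc, yc, t) and computeCostVectorized(Xc, yc, t) return the
% same J. The names Xc, yc, t are just example values, not from this script.)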
% alternative way of finding the optimum theta without iteration and without having to try different alphas (rate of learning)
% however this method can be slow in situations with a lot of features + large training set combos
% There is no need to do feature scaling with the normal equation!!!
%
% WARNING:
% X'* X may be noninvertible. The common causes are:
% > Redundant features, where two features are very closely related (i.e. they are linearly dependent)
% > Too many features (e.g. m ≤ n). In this case, delete some features or use "regularization" (to be explained in a later lesson)
%
% Solutions to the above problems include deleting a feature that is linearly dependent with another or deleting one or more features when there are too many features
function theta = normalEquation(X, y)
% Our X matrix is n-long, but our theta is n+1 (remember we are modeling h(x) = theta0 + theta1 * x1 + theta2 * x2)
% Therefore we will introduce an X0 and set it to x0 = 1 for all values of i, so that we can do matrix operations with theta and X.
% This makes the two vectors 'theta' and x(i) match each other element-wise (that is, have the same number of elements: n+1).
X0 = ones(rows(X),1);
X = [X0 X]; % concatenation; X had 2 columns, now it has 3. The very first column now consists of 'ones'
clear ('X0'); % just clears the variable
Xt = X';
theta = pinv(Xt * X) * Xt * y;
endfunction
% returns a random matrix of the specified size
% if you don't care to specify mean and variance, just use 0 and 1 respectively (or just call 'randn(rows, columns)' directly)
function retVal = randomMatrix(rows, columns, mean, variance)
retVal = mean + sqrt(variance)*(randn(rows,columns));
endfunction

Which is the correct implementation of regularization in octave?

I'm currently taking Andrew Ng's machine learning course, and I try to implement things as I learn so as not to forget them; I just finished regularization (chapter 7). I know that theta 0 is updated normally, separately from the other parameters. However, I am not sure which of these is the correct implementation.
Implementation 1: in my gradient function, after computing the regularization vector, change theta 0 part to 0 so when it is added to the total, it is as if theta 0 was never regularized.
Implementation 2: store theta in a temp variable _theta and update it with a reg_step of 0 (so it's as if there's no regularization); store the new theta 0 in a temp variable t1; then update the original theta with my desired reg_step and replace its theta 0 with t1 (the value from the non-regularized update).
Below is my code for the first implementation; it's not meant to be advanced, I'm just practicing.
I'm using Octave, which is 1-indexed, so theta(1) is theta(0):
function ret = gradient(X, Y, theta, reg_step)
  H = theta' * X;              % predictions for every example
  dif = H - Y;                 % prediction errors
  mul = dif .* X;
  total = sum(mul, 2);         % summed error terms, one per parameter
  m = size(Y, 1);              % number of training examples
  regular = (reg_step / m) * theta;
  regular(1) = 0;              % zero the theta(1) entry so theta 0 is never regularized
  ret = (total / m) + regular;
endfunction
Thanks in advance.
A slight tweak to the first implementation worked for me.
First, calculate regularization for every theta. Then go on to perform the gradient step, and afterwards you can manually change the first entry of the gradients matrix to ignore regularization for theta_0.
% Calculate regularization
regularization = (reg_step / m) * theta;
% Gradient Step
gradients = (1 / m) * (X' * (predictions - y)) + regularization;
% Ignore regularization in theta_0
gradients(1) = (1 / m) * (X(:, 1)' * (predictions - y));
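Put together, the whole gradient function might look like this (a sketch; the name regularizedGradient and its signature are illustrative, not from the course):
function gradients = regularizedGradient(X, y, theta, reg_step)
  m = length(y);
  predictions = X * theta;                  % hypothesis h(x) = X * theta
  % regularize every theta first...
  regularization = (reg_step / m) * theta;
  gradients = (1 / m) * (X' * (predictions - y)) + regularization;
  % ...then recompute the theta_0 entry without regularization
  gradients(1) = (1 / m) * (X(:, 1)' * (predictions - y));
endfunction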

Batch gradient descent for polynomial regression

I am trying to move on from simple single-variable linear gradient descent to something more advanced: the best polynomial fit for a set of points. I created a simple Octave test script which allows me to visually place points in a 2D space and then start the gradient descent algorithm and watch it gradually approach the best fit.
Unfortunately, it doesn't work as well as it did with simple single-variable linear regression: the results I get (when I get them) are inconsistent with the polynomial I expect!
Here is the code:
dim=5;
h = figure();
axis([-dim dim -dim dim]);
hold on
index = 1;
data = zeros(1,2);
while (1)
  [x, y, b] = ginput(1);
  if (length(b) == 0)
    break;
  endif
  plot(x, y, "b+");
  data(index, :) = [x y];
  index++;
endwhile
y = data(:, 2);
m = length(y);
X = data(:, 1);
X = [ones(m, 1), data(:,1), data(:,1).^2, data(:,1).^3 ];
theta = zeros(4, 1);
iterations = 100;
alpha = 0.001;
J = zeros(1,iterations);
for iter = 1:iterations
  theta -= ((1/m) * ((X * theta) - y)' * X)' * alpha;
  plot(-dim:0.01:dim, theta(1) + (-dim:0.01:dim).*theta(2) + (-dim:0.01:dim).^2.*theta(3) + (-dim:0.01:dim).^3.*theta(4), "g-");
  J(iter) = sum((1/m) * ((X * theta) - y)' * X);
end
plot(-dim:0.01:dim, theta(1) + (-dim:0.01:dim).*theta(2) + (-dim:0.01:dim).^2.*theta(3) + (-dim:0.01:dim).^3.*theta(4), "r-");
figure()
plot(1:iter, J);
I continuously get wrong results, even though it would seem that J is minimized correctly. I checked the plotting function against the normal equation (which works correctly, of course), and although I believe the error lies somewhere in the theta update, I cannot figure out what it is.
I implemented your code and it seems to be just fine. The reason you do not get the results you want is that linear regression, or polynomial regression in your case, suffers from local minima when you try to minimize the objective function: the algorithm gets trapped in a local minimum during execution. I ran your code with a different step (alpha) and saw that with a smaller step it fits the data better, but it still gets trapped in a local minimum.
Choosing a random initialization point for the thetas, I get trapped in a different local minimum each time. If you are lucky, you will find better initial points for theta and fit the data better. I think there are some algorithms that find the best initial points.
Below I attach the results for random initial points and the results with Matlab's polyfit.
In the above plot, read "Linear Regression" as "Polynomial Regression"; the label is a typo.
If you look at the plot more closely, you will see that by chance (using rand()) I chose some initial points that led me to the best data fit compared with the other initial points; I am showing that with a pointer.
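For reference, the polyfit comparison can be reproduced with something like this (a sketch reusing data, y and dim from the question; note that polyfit takes the raw x values, not the expanded X matrix):
x_raw = data(:, 1);
p = polyfit(x_raw, y, 3);        % reference cubic fit (coefficients in descending powers)
xs = -dim:0.01:dim;
plot(xs, polyval(p, xs), "b--"); % compare with the gradient-descent curve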

gradient descent seems to fail

I implemented a gradient descent algorithm to minimize a cost function, in order to obtain a hypothesis for determining whether an image has good quality. I did that in Octave. The idea is somehow based on the algorithm from the machine learning class by Andrew Ng.
Therefore I have 880 values "y" that contain values from 0.5 to ~12, and 880 values from 50 to 300 in "X" that should predict the image's quality.
Sadly the algorithm seems to fail: after some iterations the value for theta is so small that theta0 and theta1 become "NaN", and my linear regression curve has strange values...
here is the code for the gradient descent algorithm:
(theta = zeros(2,1), alpha = 0.01, iterations = 1500)
function [theta, J_history] = gradientDescent(X, y, theta, alpha, num_iters)
  m = length(y); % number of training examples
  J_history = zeros(num_iters, 1);
  for iter = 1:num_iters
    % sum of errors over all training examples (for theta0)
    tmp_j1 = 0;
    for i = 1:m
      tmp_j1 = tmp_j1 + ((theta(1,1) + theta(2,1)*X(i,2)) - y(i));
    end
    % sum of errors weighted by the feature value (for theta1)
    tmp_j2 = 0;
    for i = 1:m
      tmp_j2 = tmp_j2 + (((theta(1,1) + theta(2,1)*X(i,2)) - y(i)) * X(i,2));
    end
    % simultaneous update: compute both new values before assigning
    tmp1 = theta(1,1) - (alpha * (1/m) * tmp_j1);
    tmp2 = theta(2,1) - (alpha * (1/m) * tmp_j2);
    theta(1,1) = tmp1;
    theta(2,1) = tmp2;
    % ============================================================
    % Save the cost J in every iteration
    J_history(iter) = computeCost(X, y, theta);
  end
end
And here is the computation for the cost function:
function J = computeCost(X, y, theta)
  m = length(y); % number of training examples
  J = 0;
  tmp = 0;
  for i = 1:m
    tmp = tmp + (theta(1,1) + theta(2,1)*X(i,2) - y(i))^2; % squared error for example i
  end
  J = (1/(2*m)) * tmp;
end
If you are wondering how the seemingly complex-looking for loop can be vectorized and crammed into a single one-line expression, then please read on. The vectorized form is:
theta = theta - (alpha/m) * (X' * (X * theta - y))
Given below is a detailed explanation of how we arrive at this vectorized expression using the gradient descent algorithm.
This is the gradient descent algorithm to fine-tune the value of θ (repeat until convergence, updating every θj simultaneously):
θj := θj - (alpha/m) * Σ(i=1..m) (h(x^i) - y^i) * x_j^i
Assume that the following values of X, y and θ are given:
m = number of training examples
n = number of features + 1
Here
m = 5 (training examples)
n = 4 (features+1)
X = m x n matrix
y = m x 1 vector matrix
θ = n x 1 vector matrix
x^i is the ith training example
x_j is the jth feature in a given training example
Further,
h(x) = ([X] * [θ]) (m x 1 matrix of predicted values for our training set)
h(x)-y = ([X] * [θ] - [y]) (m x 1 matrix of Errors in our predictions)
The whole objective of machine learning is to minimize the errors in predictions. Based on the above corollary, our error matrix is an m x 1 column vector, as follows:
E = [X] * [θ] - [y]
To calculate the new value of θj, we have to take a summation of all the errors (m rows) multiplied by the jth feature value of the training set X. That is, take all the values in E, individually multiply each by the jth feature of the corresponding training example, and add them all together. This will help us in getting the new (and hopefully better) value of θj. Repeat this process for all j, i.e. for the number of features. In matrix form, this can be written as:
θj := θj - (alpha/m) * Σ(i=1..m) (E^i * x_j^i)
This can be simplified as:
[θ] := [θ] - (alpha/m) * ([E]' * [X])'
[E]' x [X] will give us a row vector, since E' is a 1 x m matrix and X is an m x n matrix. But we are interested in getting a column vector, hence we transpose the resultant matrix.
More succinctly, it can be written as:
θ := θ - (alpha/m) * (E' * X)'
Since (A * B)' = (B' * A'), and A'' = A, we can also write the above as:
θ := θ - (alpha/m) * (X' * E)
This is the original expression we started out with:
theta = theta - (alpha/m) * (X' * (X * theta - y))
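If you want to convince yourself of that last identity numerically, here is a small illustrative check in Octave (random sizes and values, nothing from the answer itself):
m = 5; n = 4;
X = rand(m, n); E = rand(m, 1);
% (E' * X)' and X' * E are the same n x 1 column vector
assert(norm((E' * X)' - (X' * E)) < 1e-12);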
I vectorized the theta update; it may help somebody:
theta = theta - (alpha/m * (X * theta-y)' * X)';
I think that your computeCost function is wrong.
I attended Ng's class last year and I have the following implementation (vectorized):
m = length(y);
J = 0;
predictions = X * theta;
sqrErrors = (predictions-y).^2;
J = 1/(2*m) * sum(sqrErrors);
The rest of the implementation seems fine to me, although you could vectorize it as well:
theta_1 = theta(1) - alpha * (1/m) * sum((X*theta-y).*X(:,1));
theta_2 = theta(2) - alpha * (1/m) * sum((X*theta-y).*X(:,2));
Afterwards, you correctly set the temporary thetas (here called theta_1 and theta_2) back to the "real" theta.
Generally it is more useful to vectorize than to loop; vectorized code is less annoying to read and to debug.
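For example, the whole update loop collapses to a few lines (a sketch; the function name gradientDescentVec is illustrative):
function [theta, J_history] = gradientDescentVec(X, y, theta, alpha, num_iters)
  m = length(y);
  J_history = zeros(num_iters, 1);
  for iter = 1:num_iters
    theta = theta - alpha * (1/m) * (X' * (X*theta - y)); % update all thetas at once
    J_history(iter) = 1/(2*m) * sum((X*theta - y).^2);    % vectorized cost
  endfor
endfunction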
If you are OK with using a least-squares cost function, then you could try using the normal equation instead of gradient descent. It's much simpler -- only one line -- and computationally faster.
Here is the normal equation:
http://mathworld.wolfram.com/NormalEquation.html
And in octave form:
theta = pinv(X' * X) * X' * y
Here is a tutorial that explains how to use the normal equation: http://www.lauradhamilton.com/tutorial-linear-regression-with-octave
While not scalable like a vectorized version, a loop-based computation of gradient descent should generate the same results. In the example above, the most probable cause of gradient descent failing to compute the correct theta is the value of alpha.
With a verified set of cost and gradient descent functions, and a set of data similar to the one described in the question, theta ends up with NaN values after just a few iterations if alpha = 0.01. However, when set to alpha = 0.000001, gradient descent works as expected, even after 100 iterations.
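One way to discover a workable alpha experimentally is a small sweep (a sketch that reuses the question's gradientDescent and assumes X and y are already loaded):
for alpha = [0.01 0.001 0.0001 0.00001 0.000001]
  [~, J_history] = gradientDescent(X, y, zeros(2, 1), alpha, 100);
  printf("alpha = %g -> final J = %g\n", alpha, J_history(end));
endfor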
Using only vectors, here is a compact implementation of linear regression with gradient descent in Mathematica:
Theta = {0, 0};
alpha = 0.0001;
iteration = 1500;
Jhist = Table[0, {i, iteration}];
Table[
  Theta = Theta - alpha * Dot[Transpose[X], (Dot[X, Theta] - Y)]/m;
  Jhist[[k]] = Total[(Dot[X, Theta] - Y)^2]/(2*m);
  Theta,
  {k, iteration}]
Note: of course this assumes that X is an n x 2 matrix whose first column, X[[All, 1]], contains only 1s.
This should work:
theta(1,1) = theta(1,1) - (alpha*(1/m))*((X*theta - y)'* X(:,1) );
theta(2,1) = theta(2,1) - (alpha*(1/m))*((X*theta - y)'* X(:,2) );
It's cleaner this way, and vectorized too:
predictions = X * theta;
errorsVector = predictions - y;
theta = theta - (alpha/m) * (X' * errorsVector);
If you remember the first PDF file for Gradient Descent from the machine learning course, you would take care with the learning rate. Here is the note from the mentioned PDF:
Implementation Note: If your learning rate is too large, J(theta) can diverge and 'blow up', resulting in values which are too large for computer calculations. In these situations, Octave/MATLAB will tend to return NaNs. NaN stands for 'not a number' and is often caused by undefined operations that involve -infinity and +infinity.
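A cheap safeguard against this is to watch the cost inside the update loop and stop as soon as it turns into NaN or stops decreasing (an illustrative sketch, reusing the variable names from the question's gradientDescent):
for iter = 1:num_iters
  theta = theta - (alpha/m) * (X' * (X * theta - y));
  J_history(iter) = computeCost(X, y, theta);
  % bail out instead of silently producing NaNs
  if (isnan(J_history(iter)) || (iter > 1 && J_history(iter) > J_history(iter-1)))
    warning("J is diverging at iteration %d; try a smaller alpha", iter);
    break;
  endif
endfor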
