ML Coursera submission (week 2): Feature Normalization - machine-learning

I have written the following code for the section "Feature Normalization".
Here X is the feature matrix (m x n), such that
m = number of examples
n = number of features
Code
mu = mean(X);
sigma = std(X);
m = size(X,1);
% Subtracting the mean from each row
for i = 1:m
X_norm(i,:) = X(i,:)-mu;
end;
% Dividing the STD from each row
for i = 1:m
X_norm(i,:) = X(i,:)./sigma;
end;
But on submitting it to the server for Andrew Ng's class, I don't get any confirmation of whether it's wrong or correct.
==
== Part Name | Score | Feedback
== --------- | ----- | --------
== Warm-up Exercise | 10 / 10 | Nice work!
== Computing Cost (for One Variable) | 40 / 40 | Nice work!
== Gradient Descent (for One Variable) | 50 / 50 | Nice work!
== Feature Normalization | 0 / 0 |
== Computing Cost (for Multiple Variables) | 0 / 0 |
== Gradient Descent (for Multiple Variables) | 0 / 0 |
== Normal Equations | 0 / 0 |
== --------------------------------
== | 100 / 100 |
Is this a bug in the web frontend presentation layer or my code?

When submit() does not give you any points, it means your answer is not correct.
This usually means that either you have not implemented it yet or there is a mistake in your implementation.
From what I can see, your indices are not correct. However, in order not to violate the code of conduct of this course, you should ask your question in the Coursera forum (without posting your code).
There are also tutorials with each programming exercise. Those are usually very helpful and guide you through the entire exercise.

You need to iterate for EACH feature
m = size(X,1);
What you are actually getting with m is the number of ROWS (examples), but you want the number of COLUMNS (features).
solution:
m = size(X,2);

Try this, it worked for me. Also notice the mistake you're making: the second loop divides each row of X by sigma without the mean having been subtracted first (it overwrites the result of the first loop).
Combine both steps and do it with less code, like this:
% Subtracting the mean and Dividing the STD from each row:
for i = 1:m
X_norm(i,:) = (X(i,:) - mu) ./ sigma;
end;

At the end of the class, the final correct answer was given as featureNormalize.m:
function [X_norm, mu, sigma] = featureNormalize(X)
%description: Normalizes the features in X
% FEATURENORMALIZE(X) returns a normalized version of X where
% the mean value of each feature is 0 and the standard deviation
% is 1. This is often a good preprocessing step to do when
% working with learning algorithms.
X_norm = X;
mu = zeros(1, size(X, 2));
sigma = zeros(1, size(X, 2));
% Instructions: First, for each feature dimension, compute the mean
% of the feature and subtract it from the dataset,
% storing the mean value in mu. Next, compute the
% standard deviation of each feature and divide
% each feature by its standard deviation, storing
% the standard deviation in sigma.
%
% Note that X is a matrix where each column is a
% feature and each row is an example. You need
% to perform the normalization separately for
% each feature.
mu = mean(X);               % 1 x n row vector of per-feature (column) means
sigma = std(X);             % 1 x n row vector of per-feature standard deviations
X_norm = (X - mu)./sigma;   % implicit broadcasting (Octave / MATLAB R2016b+)
end
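For comparison outside Octave/MATLAB, here is a minimal NumPy sketch of the same per-feature normalization (not part of the course starter code; X is assumed to be an m x n array):
import numpy as np

def feature_normalize(X):
    # Column-wise statistics: one mean and one standard deviation per feature
    mu = X.mean(axis=0)
    sigma = X.std(axis=0, ddof=1)   # ddof=1 matches MATLAB/Octave std()
    X_norm = (X - mu) / sigma       # broadcasting applies them per column
    return X_norm, mu, sigma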
If you're taking this class and feel the urge to copy and paste, you're in a grey-area on academic honesty. You're supposed to figure it out from first principles, not google it and regurgitate the answer.

Related

Dealing with NaN (missing) values for logistic regression - best practices?

I am working with a dataset of patient information and trying to calculate the propensity score from the data using MATLAB. After removing features with many missing values, I am still left with several missing (NaN) values.
I get errors due to these missing values, as my cost function and gradient vector become NaN when I try to perform logistic regression using the following MATLAB code (from Andrew Ng's Coursera Machine Learning class):
[m, n] = size(X);
X = [ones(m, 1) X];
initial_theta = ones(n+1, 1);
[cost, grad] = costFunction(initial_theta, X, y);
options = optimset('GradObj', 'on', 'MaxIter', 400);
[theta, cost] = ...
fminunc(@(t)(costFunction(t, X, y)), initial_theta, options);
Note: sigmoid and costFunction are working functions I created for overall ease of use.
The calculations can be performed smoothly if I replace all NaN values with 1 or 0. However, I am not sure if that is the best way to deal with this issue, and I was also wondering what replacement value I should pick (in general) to get the best results when performing logistic regression with missing data. Are there any benefits/drawbacks to using a particular number (0 or 1 or something else) to replace the missing values in my data?
Note: I have also normalized all feature values to be in the range of 0-1.
Any insight on this issue will be highly appreciated. Thank you
As pointed out earlier, this is a generic problem people deal with regardless of the programming platform. It is called "missing data imputation".
Forcing all missing values to a particular number certainly has drawbacks. Depending on the distribution of your data it can be drastic, for example setting all missing values to 1 in sparse binary data that contains more zeroes than ones.
Fortunately, MATLAB has a function called knnimpute that estimates a missing data point by its closest neighbor.
From my experience, I have often found knnimpute useful. However, it may fall short when there are too many missing entries, as in your data; the neighbors of a missing entry may themselves be incomplete, leading to inaccurate estimates. Below is a workaround I came up with; it begins by imputing the least incomplete columns, (optionally) imposing a safe predefined distance cutoff for the neighbors. I hope this helps.
function data = dnnimpute(data,distCutoff,option,distMetric)
% data = dnnimpute(data,distCutoff,option,distMetric)
%
% Distance-based nearest neighbor imputation that imposes a distance
% cutoff to determine nearest neighbors, i.e., avoids those samples
% that are more distant than the distCutoff argument.
%
% Imputes missing data coded by "NaN", starting from the covariates
% (columns) with the least number of missing data. Then it continues by
% including more (complete) covariates in the calculation of pair-wise
% distances.
%
% option,
% 'median' - Median of the nearest neighboring values
% 'weighted' - Weighted average of the nearest neighboring values
% 'mean' - Unweighted average of the nearest neighboring values (default)
%
% distMetric,
% 'euclidean' - Euclidean distance (default)
% 'seuclidean' - Standardized Euclidean distance. Each coordinate
% difference between rows in X is scaled by dividing
% by the corresponding element of the standard
% deviation S=NANSTD(X). To specify another value for
% S, use D=pdist(X,'seuclidean',S).
% 'cityblock' - City Block distance
% 'minkowski' - Minkowski distance. The default exponent is 2. To
% specify a different exponent, use
% D = pdist(X,'minkowski',P), where the exponent P is
% a scalar positive value.
% 'chebychev' - Chebychev distance (maximum coordinate difference)
% 'mahalanobis' - Mahalanobis distance, using the sample covariance
% of X as computed by NANCOV. To compute the distance
% with a different covariance, use
% D = pdist(X,'mahalanobis',C), where the matrix C
% is symmetric and positive definite.
% 'cosine' - One minus the cosine of the included angle
% between observations (treated as vectors)
% 'correlation' - One minus the sample linear correlation between
% observations (treated as sequences of values).
% 'spearman' - One minus the sample Spearman's rank correlation
% between observations (treated as sequences of values).
% 'hamming' - Hamming distance, percentage of coordinates
% that differ
% 'jaccard' - One minus the Jaccard coefficient, the
% percentage of nonzero coordinates that differ
% function - A distance function specified using @, for
% example @DISTFUN.
%
if nargin < 3
option = 'mean';
end
if nargin < 4
distMetric = 'euclidean';
end
nanVals = isnan(data);
nanValsPerCov = sum(nanVals,1);
noNansCov = nanValsPerCov == 0;
if isempty(find(noNansCov, 1))
[~,leastNans] = min(nanValsPerCov);
noNansCov(leastNans) = true;
first = data(nanVals(:,noNansCov),:);
nanRows = find(nanVals(:,noNansCov)==true); i = 1;
for row = first'
data(nanRows(i),noNansCov) = mean(row(~isnan(row)));
i = i+1;
end
end
nSamples = size(data,1);
if nargin < 2
dataNoNans = data(:,noNansCov);
distances = pdist(dataNoNans);
distCutoff = min(distances);
end
[stdCovMissDat,idxCovMissDat] = sort(nanValsPerCov,'ascend');
imputeCols = idxCovMissDat(stdCovMissDat>0);
% Impute starting from the cols (covariates) with the least number of
% missing data.
for c = reshape(imputeCols,1,length(imputeCols))
imputeRows = 1:nSamples;
imputeRows = imputeRows(nanVals(:,c));
for r = reshape(imputeRows,1,length(imputeRows))
% Calculate distances
distR = inf(nSamples,1);
%
noNansCov_r = find(isnan(data(r,:))==0);
noNansCov_r = noNansCov_r(sum(isnan(data(nanVals(:,c)'==false,~isnan(data(r,:)))),1)==0);
%
for i = find(nanVals(:,c)'==false)
distR(i) = pdist([data(r,noNansCov_r); data(i,noNansCov_r)],distMetric);
end
tmp = min(distR(distR>0));
% Impute the missing data at sample r of covariate c
switch option
case 'weighted'
data(r,c) = (1./distR(distR<=max(distCutoff,tmp)))' * data(distR<=max(distCutoff,tmp),c) / sum(1./distR(distR<=max(distCutoff,tmp)));
case 'median'
data(r,c) = median(data(distR<=max(distCutoff,tmp),c),1);
case 'mean'
data(r,c) = mean(data(distR<=max(distCutoff,tmp),c),1);
end
% The missing data in sample r is imputed. Update the sample
% indices of c which are imputed.
nanVals(r,c) = false;
end
fprintf('%u/%u of the covariates are imputed.\n',find(c==imputeCols),length(imputeCols));
end
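If you are working in Python rather than MATLAB, scikit-learn ships a comparable nearest-neighbor imputer; here is a minimal sketch on made-up data (it does not reproduce the distance-cutoff logic above):
import numpy as np
from sklearn.impute import KNNImputer

X = np.array([[1.0, 2.0, np.nan],
              [3.0, np.nan, 6.0],
              [7.0, 8.0, 9.0],
              [np.nan, 5.0, 4.0]])

# Each missing entry is filled from the k nearest rows (NaN-aware distances)
imputer = KNNImputer(n_neighbors=2, weights="distance")
X_imputed = imputer.fit_transform(X)
print(X_imputed)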
To deal with missing data you can use one of the following three options:
If there are not many instances with missing values, you can simply delete those instances.
If you have many features and can afford to lose some information, delete the entire feature that has missing values.
The third method is to fill in some value (mean, median) in place of the missing value. You can calculate the mean of that feature over the rest of the training examples and fill all of its missing values with that mean. This works out pretty well because the mean stays within the distribution of your data.
Note: when you replace the missing values with the mean, calculate the mean using only the training set. Also, store that value and use it to replace the missing values in the test set as well.
If you use 0 or 1 to replace all the missing values, the data may get skewed, so it is better to replace the missing values with an average of the other values.
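Here is a minimal Python sketch of that third option (mean imputation computed on the training set and reused on the test set); the arrays are made up for illustration:
import numpy as np

X_train = np.array([[1.0, np.nan], [2.0, 4.0], [3.0, 5.0]])
X_test  = np.array([[np.nan, 6.0], [4.0, np.nan]])

# Per-feature means computed on the training set only, ignoring NaNs
train_means = np.nanmean(X_train, axis=0)

# Replace NaNs in both sets with the stored training means
X_train_filled = np.where(np.isnan(X_train), train_means, X_train)
X_test_filled  = np.where(np.isnan(X_test),  train_means, X_test)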

Counting or integrating multivariate Gaussian probabilities on the opposite side of a non-linear decision line

So I have something that looks like the following: two multivariate Gaussian class distributions separated by a non-linear decision line.
However, I am having real trouble integrating the data on the other side of this decision line to get my errors.
In general, given the analytic form of the decision boundary, you could compute the integrals exactly. However, why not use Monte Carlo, which is fast, simple and generic (it will work for any distributions and decision boundaries)? All you have to do is repeatedly sample from your Gaussians, check whether each sampled point is on the correct side (N_c) or the incorrect side (N_i), and in the limit you will get your integrals from
INTEGRAL_of_distributions_being_on_correct_side ~ N_c / (N_c + N_i)
INTEGRAL_of_distributions_being_on_incorrect_side ~ N_i / (N_c + N_i)
thus in pseudo code:
N_c = 0
N_i = 0
for i=1 to N do
y ~ P({-, +}) # sample distribution
x ~ P(X|y) # sample point from given class
if side_of_decision(x) == y then
N_c += 1
else
N_i += 1
end
end
return N_c, N_i
In your case P({-, +}) is probably just 50-50 chance and P(X|-) and P(X|+) are your two Gaussians.
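A runnable Python version of this pseudocode, assuming two 2-D Gaussians with equal priors and a made-up decision function (replace side_of_decision with your actual boundary):
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical class-conditional Gaussians P(X|-) and P(X|+)
mean_neg, cov_neg = np.array([0.0, 0.0]), np.eye(2)
mean_pos, cov_pos = np.array([2.0, 2.0]), np.eye(2)

def side_of_decision(x):
    # Stand-in for your actual decision boundary: + if above the curve y = x^2 / 4
    return +1 if x[1] > x[0] ** 2 / 4 else -1

N, N_c, N_i = 100_000, 0, 0
for _ in range(N):
    y = rng.choice([-1, +1])                      # sample the class (50-50 prior)
    if y == -1:
        x = rng.multivariate_normal(mean_neg, cov_neg)
    else:
        x = rng.multivariate_normal(mean_pos, cov_pos)
    if side_of_decision(x) == y:                  # correct side of the boundary?
        N_c += 1
    else:
        N_i += 1

print("P(correct) ~", N_c / (N_c + N_i))
print("P(error)   ~", N_i / (N_c + N_i))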

What are some specific examples of Ensemble Learning?

What are some concrete real life examples which can be solved using Boosting/Bagging algorithms? Code snippets would be greatly appreciated.
Ensembles are used to fight overfitting / improve generalization, or to fight specific weaknesses / exploit the strengths of different classifiers. They can be applied to any classification task.
I used ensembles in my master's thesis. The code is on GitHub.
Example 1
For example, think of a binary problem where you have to tell if a data point is of class A or B. This could be an image, and you have to decide whether there is (A) a dog or (B) a cat in it. Now you have two classifiers (1) and (2) (e.g. two neural networks, but trained in different ways; or one SVM and a decision tree, or ...). They make the following errors:
(1): Predicted
T | A B
R ------------
U A | 90% 10%
E B | 50% 50%
(2): Predicted
T | A B
R ------------
U A | 60% 40%
E B | 40% 60%
You could, for example, combine them into an ensemble by first using (1). If it predicts B, then you use (2); otherwise you stick with the prediction A.
Now, what would be the expected error, (falsely) assuming both are independent?
If the true class is A, then we predict with 90% the true result. In 10% of the cases we predict B and use the second classifier. This one gets it right in 60% of the cases. This means if we have A, we predict A in 0.9 + 0.1*0.6 = 0.96 = 96% of the cases.
If the true class is B, the first classifier predicts B in 50% of the cases. But then the second classifier also has to get it right, so we are only correct in 0.5*0.6 = 0.3 = 30% of the cases.
So in this simple example we made the situation better for one class, but worse for the other.
Example 2
Now, let's say we have 3 classifiers with
Predicted
T | A B
R ------------
U A | 60% 40%
E B | 40% 60%
each, but the classifications are independent. What do you get when you make a majority vote?
If you have class A, the probability that at least two say it is class A is
0.6 * 0.6 * 0.6 + 0.6 * 0.6 * 0.4 + 0.6 * 0.4 * 0.6 + 0.4 * 0.6 * 0.6
= 1*0.6^3 + 3*(0.6^2 * 0.4^1)
= (3 nCr 3) * 0.6^3 + (3 nCr 2) * (0.6^2 * 0.4^1)
= 0.648
The same goes for the other class. So we improved the classifier to
Predicted
T | A B
R ------------
U A | 65% 35%
E B | 35% 65%
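As a quick check of the 0.648 figure (and of the binomial structure of the majority vote), in Python:
from math import comb

p_correct = comb(3, 3) * 0.6**3 + comb(3, 2) * 0.6**2 * 0.4
print(p_correct)  # 0.648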
Code
See sklearn's page on ensembles for code.
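For instance, here is a minimal scikit-learn sketch of hard majority voting over three different classifiers (toy data, just to show the API):
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=500, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("rf", RandomForestClassifier(random_state=0)),
                ("nb", GaussianNB())],
    voting="hard",  # plain majority vote, as in Example 2 above
)
ensemble.fit(X_train, y_train)
print(ensemble.score(X_test, y_test))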
The best-known example of ensemble learning is the random forest.
Ensembling is the art of combining a diverse set of learners (individual models) to improve the stability and predictive power of the model.
Ensemble Learning Techniques:
Bagging : Bagging trains similar learners on small sample populations and then takes the mean of all the predictions. In generalized bagging, you can use different learners on different populations.
Boosting : Boosting is an iterative technique which adjusts the weight of an observation based on the previous classification. If an observation was classified incorrectly, it tries to increase the weight of this observation, and vice versa. Boosting in general decreases the bias error and builds strong predictive models.
Stacking : This is a very interesting way of combining models. Here we use a learner to combine the output of different learners. This can lead to a decrease in either bias or variance error, depending on the combining learner we use (see the sketch after the reference below).
for more reference:
Basics of Ensemble Learning Explained
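As an illustration of stacking, here is a minimal scikit-learn sketch in which a logistic regression combines the outputs of two base learners (toy data again):
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, random_state=0)

stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("svc", SVC(probability=True, random_state=0))],
    final_estimator=LogisticRegression(),  # the "combining learner"
)
stack.fit(X, y)
print(stack.score(X, y))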
Here's Python-based pseudocode for basic ensemble learning (x_test, y_test, model_input and the weight files are assumed to exist already):
# 3 trained ML/DL models -> first_model, second_model, third_model
from typing import List
import numpy
from scipy.stats import mode
from tensorflow.keras.layers import Average
from tensorflow.keras.models import Model

all_models = [first_model, second_model, third_model]
first_model.load_weights(first_weight_file)
second_model.load_weights(second_weight_file)
third_model.load_weights(third_weight_file)

def ensemble_average(models: List[Model]):  # averaging
    outputs = [model.outputs[0] for model in models]
    y = Average()(outputs)                  # average the output tensors
    ensemble = Model(model_input, y, name='ensemble_average')
    pred = ensemble.predict(x_test, batch_size=32)
    pred = numpy.argmax(pred, axis=1)
    E = numpy.sum(numpy.not_equal(pred, y_test)) / y_test.shape[0]
    return E

def ensemble_vote(models: List[Model]):  # max-voting
    pred = []
    yhats = [model.predict(x_test) for model in models]
    yhats = numpy.argmax(numpy.array(yhats), axis=2)
    for i in range(len(x_test)):
        m = mode([yhats[0][i], yhats[1][i], yhats[2][i]])
        pred = numpy.append(pred, m[0])
    E = numpy.sum(numpy.not_equal(pred, y_test)) / y_test.shape[0]
    return E

# Error calculation
E1 = ensemble_average(all_models)
E2 = ensemble_vote(all_models)

Erlang: calculating Pi to X decimal places

I have been given this question to work on. I'm struggling to get my head around the recursion. Some breakdown of the question would be very helpful.
Given that Pi can be estimated using the function 4 * (1 – 1/3 + 1/5 – 1/7 + …) with more terms giving greater accuracy, write a function that calculates Pi to an accuracy of 5 decimal places.
I have some example code; however, I really don't understand where and why the variables are entered like this. A breakdown of this code, and of why it is not accurate, would be appreciated.
-module (pi).
-export ([pi/0]).
pi() -> 4 * pi(0,1,1).
pi(T,M,D) ->
A = 1 / D,
if
A > 0.00001 -> pi(T+(M*A), M*-1, D+2);
true -> T
end.
The formula comes from the evaluation of tan(pi/4), which is equal to 1. Taking the inverse:
pi/4 = arctan(1)
so
pi = 4 * arctan(1).
Using the Taylor series expansion:
arctan(x) = x - x^3/3 + ... + (-1)^n * x^(2n+1)/(2n+1) + o(x^(2n+1))
so when x = 1 you get your formula:
pi = 4 * (1 – 1/3 + 1/5 – 1/7 + …)
The problem is to find an approximation of pi with an accuracy of 0.00001 (5 decimals). Looking at the formula, you can notice that
at each step (1/3, 1/5,...) the new term to add:
is smaller than the previous one,
has the opposite sign.
This means that each term is an upper bound on the error (the o(x^(2n+1)) term) between the real value of pi/4 and the evaluation up to this term.
So it can be used to stop the recursion at a level where it is guaranteed that the approximation is better than this term. To be precise, the program
you propose multiplies the final result of the recursion by 4, so the error is no longer guaranteed to be smaller than the last term.
looking at the code:
pi() -> 4 * pi(0,1,1).
% T = 0 is the initial estimation
% M = 1 is the sign
% D = 1 initial value of the term's denominator in the Taylor series
pi(T,M,D) ->
A = 1 / D,
% evaluate the term value
if
A > 0.00001 -> pi(T+(M*A), M*-1, D+2);
% if the precision is not reached, call the pi function with:
% the new series evaluation (the previous one + sign * term): T+(M*A)
% new inverted sign: M*-1
% new index: D+2
true -> T
% if the precision is reached, give the result T
end.
To be sure that you have reached the right accuracy, I propose to replace A > 0.00001 by A > 0.0000025 (= 0.00001/4)
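To see the effect of this tighter stopping criterion, here is a small Python sketch of the same series (just to check the accuracy argument, not part of the Erlang exercise):
import math

def leibniz_pi(tol=0.0000025):
    # Sum 1 - 1/3 + 1/5 - ... until the next term is no longer bigger than tol
    total, sign, d = 0.0, 1, 1
    term = 1.0 / d
    while term > tol:
        total += sign * term
        sign = -sign
        d += 2
        term = 1.0 / d
    return 4 * total

print(leibniz_pi(), math.pi)   # agree to about 5 decimal places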
I can't find any error in this code, but I can't test it right now, anyway:
T is probably "total", M is "multiplier", and D is "divisor".
At every step you:
check (the 'if' is somewhat similar to a switch/case in C/C++/Java) whether the next term (A = 1/D) is bigger than 0.00001. If not, you can stop the recursion, you've got the 5 decimal places you were looking for. So "if true (default case) -> return T".
If it is bigger, you multiply A by M, add it to the total, then multiply M by -1, add 2 to D, and repeat (so you get the next term, add it again, and so on).
pi(T,M,D) ->
A = 1 / D,
if
A > 0.00001 -> pi(T+(M*A), M*-1, D+2);
true -> T
end.
I don't know Erlang myself, but from the looks of it you are checking whether 1/D is < 0.00001, when in reality you should be checking 4 * 1/D, because that 4 is going to be multiplied through. For example, if 1/D were 0.000003 you would stop the function, but your total would actually have changed by 0.000012. Hope this helps.

How does fitEllipse work in OpenCV?

I am working with OpenCV and I need to understand how the function fitEllipse works exactly. I looked at the code (https://github.com/Itseez/opencv/blob/master/modules/imgproc/src/shapedescr.cpp) and I know it uses least squares to determine the likely ellipse. I also looked at the paper given in the documentation (Andrew W. Fitzgibbon, R.B. Fisher. A Buyer's Guide to Conic Fitting. Proc. 5th British Machine Vision Conference, Birmingham, pp. 513-522, 1995).
But I cannot understand the algorithm exactly. For example, why does it need to solve the least-squares problem 3 times? Why is bd initialized to 10000 before the first SVD (I guess it is just an arbitrary value for the initialization, but why can this value be arbitrary)? Why do the values in Ad need to be negative before the first SVD?
Thank you!
Here is MATLAB code, it might help:
function [Q,a]=fit_ellipse_fitzgibbon(data)
% function [Q,a]=fit_ellipse_fitzgibbon(data)
%
% Ellipse specific fit, according to:
%
% Direct Least Square Fitting of Ellipses,
% A. Fitzgibbon, M. Pilu and R. Fisher. PAMI 1996
%
%
% See Also:
% FIT_ELLIPSE_LS
% FIT_ELLIPSE_HALIR
[m,n] = size(data);
assert((m==2||m==3)&&n>5);
x = data(1,:)';
y = data(2,:)';
D = [x.^2 x.*y y.^2 x y ones(size(x))]; % design matrix
S = D'*D; % scatter matrix
C(6,6)=0; C(1,3)=-2; C(2,2)=1; C(3,1)=-2; % constraints matrix
% solve the generalized eigensystem
[V,D] = eig(S, C);
% find the only negative eigenvalue
[n_r, n_c] = find(D<0 & ~isinf(D));
if isempty(n_c),
warning('Error getting the ellipse parameters, will do LS');
[Q,a] = fit_ellipse_ls(data); %
return;
end
% the parameters
a = V(:, n_c);
[A B C D E F] = deal(a(1),a(2),a(3),a(4),a(5),a(6)); % deal is slow!
Q = [A B/2 D/2; B/2 C E/2; D/2 E/2 F];
end % fit_ellipse_fitzgibbon
The Fitzgibbon solution has some numerical instability, though. See the work of Halir for a solution to this.
It is essentially a least-squares solution, but specifically designed so that it will produce a valid ellipse, not just any conic.
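For readers more comfortable in Python, here is a rough NumPy/SciPy transliteration of the MATLAB snippet above (a sketch only; it mirrors that code's sign convention and is not the exact OpenCV implementation):
import numpy as np
from scipy.linalg import eig

def fit_ellipse_fitzgibbon(x, y):
    # Design matrix for the conic A*x^2 + B*x*y + C*y^2 + D*x + E*y + F = 0
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    D = np.column_stack([x**2, x*y, y**2, x, y, np.ones_like(x)])
    S = D.T @ D                                  # scatter matrix
    C = np.zeros((6, 6))                         # ellipse-specific constraint matrix
    C[0, 2] = C[2, 0] = -2.0
    C[1, 1] = 1.0
    # Generalized eigenproblem S*a = lambda*C*a
    w, V = eig(S, C)
    w = np.real(w)
    # Keep the eigenvector of the single negative, finite eigenvalue
    idx = np.flatnonzero(np.isfinite(w) & (w < 0))
    if idx.size == 0:
        raise ValueError("no ellipse-specific solution found; fall back to plain LS")
    a = np.real(V[:, idx[0]])
    A, B, Cc, Dd, E, F = a
    Q = np.array([[A, B/2, Dd/2], [B/2, Cc, E/2], [Dd/2, E/2, F]])
    return Q, a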
