Meaning of Error Term e - machine-learning

I was reading the book "Introduction to Statistical Learning". The book says that:
More generally, suppose that we observe a quantitative response Y and a set of predictor variables X1, X2, ..., Xn.
We assume that there is some relationship between Y and X (X1, X2, ... Xn) which can be written in the very general form as:
Y = f(X) + e
Here, f is some fixed but unknown function of X and e is a random error term which is independent of X and has mean zero.
I want to know: what does it mean for e to have zero mean?

It means that e, treated as a random variable, has an expected value of 0. In other words, if you compute the average of these errors, then as the sample size grows to infinity that average converges to zero.
In more practical terms it means that the noise does not systematically shift your f(x): if you observe some "positive" noise, there was exactly the same probability of observing "negative" noise of the same strength. Notice that if you had e with mean m, this would mean that
E[f(x) + e] = E[f(x)] + E[e] = E[f(x)] + m
so for every single point x you would expect to observe the value f(x) + m instead of just f(x). It would therefore be the same as modelling
g(x) + e'
where
g(x) = f(x) + m
and e' is now zero-mean random noise. So the whole statistical setting is still valid for noise with a non-zero mean, but then the task that your ML method is solving is to model "g" rather than "f".
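To make this concrete, here is a small numpy sketch (the linear f, the noise scale, and the value of m below are made-up values for illustration): fitting a straight line to data whose noise has mean m recovers g(x) = f(x) + m, with the offset m absorbed into the intercept.

import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x = rng.uniform(-3, 3, n)

f = lambda t: 2.0 * t + 1.0             # hypothetical "true" f
m = 0.5                                 # non-zero noise mean
e = rng.normal(loc=m, scale=1.0, size=n)

y = f(x) + e

slope, intercept = np.polyfit(x, y, deg=1)
print(slope, intercept)                 # roughly 2.0 and 1.0 + m: the fit recovers g, not f
print(e.mean())                         # the sample mean of the noise approaches m as n grows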

Let's say for illustration that your errors are normally distributed, since in introductory settings we often make that assumption. If you're willing to accept that, then another way of thinking about zero-mean error is to say that your outcome variable Y is itself a random variable distributed as N(f(X), sigma^2). In other words, the outcome is like a random draw from a probability distribution centered at f(X). Note that if you have a different X for each observed Y, then the value of f(X) changes, and so the normal distribution that generates each observed outcome Y changes too. Yet all the observations are tied together by that basic rule (f) about how the features (i.e. your X data) got assigned to the distributions that generated your outcomes.
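To illustrate this view, here is a short numpy sketch (the choice f(x) = sin(x) and sigma = 0.3 are made up): each observed Y is one draw from a normal distribution centered at f(X), and averaging many draws at a fixed x recovers f(x).

import numpy as np

rng = np.random.default_rng(0)

f = np.sin                              # a hypothetical fixed but unknown f
sigma = 0.3

X = rng.uniform(0, 2 * np.pi, 5)        # five different feature values
Y = rng.normal(loc=f(X), scale=sigma)   # each Y is one draw from N(f(X), sigma^2)
print(Y - f(X))                         # zero-mean deviations around the respective centers

x0 = 1.0
draws = rng.normal(loc=f(x0), scale=sigma, size=100_000)
print(draws.mean(), f(x0))              # the empirical mean is close to f(x0)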

Related

Trying to understand expected value in Linear Regression

I'm having trouble understanding a lecture slide in my school's machine learning course
Why is the expected value of Y equal to f(X)? What does that mean?
My understanding is that X and Y are vectors and f(X) outputs a vector Y in which each individual value y_i corresponds to f(x_i), where x_i is the value in X at index i. But now we are taking the expected value of Y, which is going to be a single value, so how is that equal to f(X)?
X, Y (uppercase) are vectors
x_i,y_i (lowercase with subscript) are scalars at index i in X,Y
There is a lot of confusion here, so let's start with some definitions.
Definitions
Expectation operator E[.]: takes a random variable as input and gives a scalar/vector as output. Say Y is a normally distributed random variable with mean Mu and variance Sigma^2, usually stated as
Y ~ N( Mu , Sigma^2 ); then E[Y] = Mu.
Function f(.): Takes a scalar/vector (not a random variable) and gives a scalar/vector. In this context it is an affine function, that is f(X) = a*X + b where a and b are fixed constants.
What's Going On
Now you can view linear regression from two angles.
Stats View
One angle assumes that your response variable, Y, is a normally distributed random variable because
Y = a*X + b + epsilon
where
epsilon ~ N( 0 , sigma^2 )
and X follows some other distribution. We don't really care how X is distributed and treat it as given. In that case the conditional distribution is
Y|X ~ N( a*X + b , sigma^2 )
Notice that here a, b and also X are numbers; there is no randomness associated with them.
Maths View
The other view is the maths view, where I assume that there is a function f(.) governing the real-life process: if in real life I observe X, then f(X) should be the output. Of course this is not exactly the case, and the deviations are attributed to various causes such as gauge (measurement) error, etc. The claim is that this function is linear (affine, to be precise):
f(X) = a*X + b
Synthesis
Now how do we combine these? Well, as follows:
E[Y|X] = a*X + b = f(X)
About your question: first, I would challenge the premise and say it should be Y|X, not Y by itself.
Second, there are tons of possible ontological discussions over what each term here represents in real life. X, Y (uppercase) could be vectors. X, Y (uppercase) could also be random variables. A sample of these random variables might be stored in vectors, and both would be represented with uppercase letters (the best way is to use different fonts for each). In this case, your sample becomes your data. Discussions about the general view of the model and its relevance to real life should be made at the random-variable level. Discussions about how to infer the parameters and how linear regression algorithms work should be made at the matrix and vector level. There could be other discussions where you need to care about both.
I hope this somewhat disorganized answer helps. In general, if you want to learn this kind of material, be sure you know what kind of mathematical objects and operators you are dealing with, what they take as input, and how they relate to real life.
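As a quick numerical illustration of the synthesis (and of why the conditioning on X matters), here is a numpy sketch with made-up values of a, b and sigma:

import numpy as np

rng = np.random.default_rng(0)
a, b, sigma = 2.0, 1.0, 0.5             # hypothetical fixed constants

# Conditional expectation: fix X = x0 and average many draws of Y.
x0 = 3.0
y_given_x0 = a * x0 + b + rng.normal(0, sigma, 1_000_000)
print(y_given_x0.mean())                # close to a*x0 + b = 7.0, i.e. E[Y|X=x0] = f(x0)

# Marginal expectation: let X vary as well; E[Y] is no longer f of any particular x.
X = rng.uniform(0, 10, 1_000_000)
Y = a * X + b + rng.normal(0, sigma, X.size)
print(Y.mean())                         # close to a*E[X] + b = 11.0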

Fourier Transform Scaling the magnitude

I have a set of signals S1, S2, ..., SN, for which I am numerically computing the Fourier transforms F1, F2, ..., FN, where the Si and Fi are C++ vectors (my computations are in C++).
My computation objective is as follows:
compute the product F = F1*F2*...*FN
inverse Fourier transform F to get S.
What I observe numerically is that when I try to compute the product, the numbers either become too small (underflow) or too large (overflow).
I thought of scaling each Fi by some ai so that the overflow or underflow is avoided.
With the scaling my step 1 above will become
F' = (F1/a1)*(F2/a2)*...*(FN/aN) = F'1*F'2*...*F'N
And when I inverse transform, my final S' will differ from S only by a scale factor. The structural form of S will not change; by this I mean only the normalization of S is different.
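The rationale can be checked numerically; here is a small Python/numpy sketch (the signals and the scale factors ai below are arbitrary made-up values, used only to illustrate the linearity argument):

import numpy as np

rng = np.random.default_rng(0)
signals = [rng.standard_normal(64) for _ in range(4)]    # made-up S1..S4
spectra = [np.fft.fft(s) for s in signals]               # F1..F4

S = np.fft.ifft(np.prod(spectra, axis=0))                # unscaled result

scales = [2.0, 10.0, 0.5, 7.0]                           # arbitrary a1..a4
scaled_spectra = [F / a for F, a in zip(spectra, scales)]
S_prime = np.fft.ifft(np.prod(scaled_spectra, axis=0))

# S' differs from S only by the overall factor prod(ai):
print(np.allclose(S_prime * np.prod(scales), S))         # True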
My questions are:
Is my rationale correct?
If my rationale is correct, then given a C++ vector Fi, how can I choose a good scale ai to scale the Fi up or down?
Many thanks in advance.
You can map the range of each Fi vector into a new interval, say [a, b], so that the minimum value of Fi becomes a and the maximum becomes b. You can rescale the vector using this equation:
Fnew[i] = (b-a)/(max-min) * (F[i] - min) + a,
where min is the minimum value of F and max is the maximum value of F.
The only problem now is choosing a and b. I would choose [1,2], because the values are small (so you won't get overflow easily), but not smaller than 0.
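A minimal Python sketch of this rescaling formula, applied to a real-valued vector for simplicity (the sample values and the default [1, 2] interval are just illustrative):

import numpy as np

def rescale(F, a=1.0, b=2.0):
    # Map the values of F linearly so that min(F) -> a and max(F) -> b.
    lo, hi = F.min(), F.max()
    return (b - a) / (hi - lo) * (F - lo) + a

F = np.array([3e-12, 7e-9, 2e-6, 5e-3])   # made-up badly scaled magnitudes
print(rescale(F))                          # all values now lie in [1, 2]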

Gradient from non-trainable weights function

I'm trying to implement a self-written loss function. My pipeline is as follows
x -> {constant computation} = x_feature -> machine learning training -> y_feature -> {constant computation} = y_produced
These "constant computations" are necessary to bring out the differences between the desired o/p and produced o/p.
So if I take the L2 norm of the y_produced and y_original, how should I incorporate this loss in the original loss.
Please Note that y_produced has a different dimension than y_feature.
As long as you are using differentiable operations there is no difference between "constant transformations" and "learnable" ones. There is really no such distinction; look even at the linear layer of a neural net:
f(x) = sigmoid( W * x + b )
Is it constant or learnable? W and b are trained, but the sigmoid is not, yet the gradient flows the same way regardless of whether something is a variable or not. In particular, the gradient with respect to x is the same for
g(x) = sigmoid( A * x + c )
where A and c are constants.
The only problem you will encounter is with non-differentiable operations, such as argmax, sorting, indexing, sampling, etc. These operations do not have a well-defined gradient, so you cannot directly use first-order optimisers with them. As long as you stick with differentiable ones, the problem you describe does not really exist: there is no difference between "constant transformations" and any other transformations, regardless of changes in size etc.
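To see this in action, here is a hedged sketch using TensorFlow 2's GradientTape (the shapes, the fixed matrices A and B, and the L2-style loss are all made up for illustration): gradients flow through the constant pre- and post-processing just as they do through the trainable layer, even though y_produced has a different dimension than y_feature.

import tensorflow as tf

tf.random.set_seed(0)

# Constant (non-trainable) transformations on both ends of the pipeline:
A = tf.random.normal([4, 3])                # x -> x_feature
B = tf.random.normal([2, 5])                # y_feature -> y_produced (changes the dimension)

# Trainable part in the middle:
W = tf.Variable(tf.random.normal([3, 2]))
b = tf.Variable(tf.zeros([2]))

x = tf.random.normal([8, 4])                # a batch of 8 hypothetical inputs
y_original = tf.random.normal([8, 5])       # hypothetical targets in the "produced" space

with tf.GradientTape() as tape:
    x_feature = tf.matmul(x, A)                            # constant computation
    y_feature = tf.sigmoid(tf.matmul(x_feature, W) + b)    # learnable layer
    y_produced = tf.matmul(y_feature, B)                   # constant computation
    loss = tf.reduce_mean(tf.square(y_produced - y_original))

grads = tape.gradient(loss, [W, b])
print([g.shape for g in grads])             # gradients exist for the trainable W and b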

Machine Learning: Why xW+b instead of Wx+b?

I started to learn machine learning. Now I am trying to play around with TensorFlow.
Often I see examples like this:
pred = tf.add(tf.mul(X, W), b)
I also saw such a line in a plain numpy implementation. Why is x*W+b always used instead of W*x+b? Is there an advantage to multiplying the matrices in this way? I see that it is possible (if X, W and b are transposed), but I do not see an advantage. In school, in maths class, we always used Wx+b.
Thank you very much
This is the reason:
By default w is a vector of weights and in maths a vector is considered a column, not a row.
X is a collection of data: an n x d matrix, where n is the number of data points and d the number of features (uppercase X is an n x d matrix; lowercase x is a single data point, a 1 x d matrix).
To multiply the two correctly, applying each weight to its corresponding feature, you must use X*w + b:
With X*w you multiply every feature by its corresponding weight, and by adding b you add the bias term to every prediction.
If you multiply w*X you are trying to multiply a weight vector of length d by an n x d matrix; the inner dimensions do not match, so it makes no sense.
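A quick numpy sketch of the shapes involved (the sizes n = 5 and d = 3 are made up):

import numpy as np

n, d = 5, 3                        # hypothetical: 5 samples, 3 features
X = np.random.randn(n, d)          # data matrix, one row per sample
w = np.random.randn(d)             # weight vector, one weight per feature
b = 0.1                            # bias

pred = X @ w + b                   # shape (n,): one prediction per sample
print(pred.shape)                  # (5,)

# np.dot(w, X) would contract w's d entries against X's n rows,
# which raises a ValueError unless n happens to equal d.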
I was also confused by this. I guess it may be a matter of dimensions. For an n x m matrix W and an n-dimensional vector x, xW+b can easily be viewed as mapping an n-dimensional feature to an m-dimensional feature, i.e. you can think of W as an n-dimension -> m-dimension operation, whereas Wx+b (x must now be an m-dimensional vector) becomes an m-dimension -> n-dimension operation, which looks less natural in my opinion. :D

Are there any machine learning regression algorithms that can train on ordinal data?

I have a function f(x): R^n --> R (sorry, is there a way to do LaTeX here?), and I want to build a machine learning algorithm that estimates f(x) for any input point x, based on a bunch of sample xs in a training data set. If I know the value of f(x) for every x in the training data, this should be simple - just do a regression, or take the weighted average of nearby points, or whatever.
However, this isn't what my training data looks like. Rather, I have a bunch of pairs of points (x, y), and I know the value of f(x) - f(y) for each pair, but I don't know the absolute values of f(x) for any particular x. It seems like there ought to be a way to use this data to find an approximation to f(x), but I haven't found anything after some Googling; there are papers like this but they seem to assume that the training data comes in the form of a set of discrete labels for each entity, rather than having labels over pairs of entities.
This is just making something up, but could I try kernel density estimation over f'(x), and then do integration to get f(x)? Or is that crazy, or is there a known better technique?
You could assume that f is linear, which would simplify things - if f is linear we know that:
f(x-y) = f(x) - f(y)
For example, suppose you assume f(x) = <w, x>, making w the parameter you want to learn. What would the squared loss for a sample pair (x, y) with known difference d look like?
loss((x,y), d) = (f(x)-f(y) - d)^2
= (<w,x> - <w,y> - d)^2
= (<w, x-y> - d)^2
= (<w, z> - d)^2 // where z:=x-y
Which is simply the squared loss for z=x-y
Practically, you would need to construct z=x-y for each pair and then learn f using linear regression over inputs z and outputs d.
This model might be too weak for your needs, but it's probably the first thing you should try. Otherwise, as soon as you step away from the linearity assumption, you'd likely arrive at a difficult non-convex optimization problem.
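A rough sketch of that construction in Python/numpy (the synthetic data and the true weight vector are made up; np.linalg.lstsq plays the role of the linear regression):

import numpy as np

rng = np.random.default_rng(0)
n_pairs, n_features = 500, 4
w_true = rng.standard_normal(n_features)          # hypothetical true weights

X1 = rng.standard_normal((n_pairs, n_features))   # the "x" of each pair
X2 = rng.standard_normal((n_pairs, n_features))   # the "y" of each pair
d = (X1 - X2) @ w_true + 0.01 * rng.standard_normal(n_pairs)   # observed f(x) - f(y)

Z = X1 - X2                                       # z = x - y for every pair
w_hat, *_ = np.linalg.lstsq(Z, d, rcond=None)     # linear regression of d on z
print(np.allclose(w_hat, w_true, atol=0.05))      # True: w is recovered up to noise

# Note: an additive constant c in f(x) = <w, x> + c cancels in f(x) - f(y),
# so only w, not the absolute level of f, can be identified from the pairs.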
I don't see a way to get absolute results. Any constant in your function (f(x) = g(x) + c) will disappear, in the same way a constant disappears when you differentiate and cannot be recovered by integrating.
