In machine learning, suppose we have a GDA (Gaussian Discriminant Analysis) model for classification.
If y can take the values 0 or 1, and x is a vector of n features (n x 1 dimensional),
what does p(x|y=0) or p(x|y=1) signify for a particular training example?
x is actually a vector... how is the conditional probability defined in this case?
Any help would be much appreciated.
Say that X0 is the set of vectors x that mapped to output 0, and X1 is the set of vectors x that mapped to output 1. Take the mean of each set's vectors, and, similarly, approximate the covariance.
Now build two multivariate normal distributions, with these means and covariances, respectively.
Once you have these distributions, simply plug the vector you want into the PDF to obtain its density. Note that since the distributions are continuous, the probability you asked about is 0 in general.
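For concreteness, here is a minimal sketch of that recipe (assuming numpy and scipy, with small synthetic arrays standing in for X0 and X1); the frozen multivariate_normal objects play the roles of p(x|y=0) and p(x|y=1):

import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
X0 = rng.normal(loc=0.0, scale=1.0, size=(100, 3))    # training vectors that mapped to output 0
X1 = rng.normal(loc=2.0, scale=1.0, size=(100, 3))    # training vectors that mapped to output 1

mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)           # per-class means
S0 = np.cov(X0, rowvar=False)                         # per-class covariance estimates
S1 = np.cov(X1, rowvar=False)

p_x_given_y0 = multivariate_normal(mean=mu0, cov=S0)  # model of p(x | y = 0)
p_x_given_y1 = multivariate_normal(mean=mu1, cov=S1)  # model of p(x | y = 1)

x = X0[0]                                             # any n-dimensional vector
print(p_x_given_y0.pdf(x), p_x_given_y1.pdf(x))       # densities, not probabilities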
What is the best approach to normalize/standardize features that have no theoretical maximum value?
For example, a stock price that has always stayed between $0 and $1000 could still go higher, so what is the correct approach?
I thought about training a model with a higher assumed maximum (e.g. 2000), but it doesn't feel right, because no data would be available for the 1000-2000 range, and I think this would introduce bias.
TL;DR: use z-scores, maybe take log, maybe take inverse logit, maybe don't normalize at all.
If you wish to normalize safely, use a monotonic mapping, e.g.:
To map (0, inf) into (-inf, inf), you can use y = log(x)
To map (-inf, inf) into (0, 1), you can use y = 1 / (1 + exp(-x)) (inverse logit)
To map (0, inf) into (0, 1), you can use y = x / (1 + x) (inverse logit after log)
If you don't care about bounds, use a linear mapping: y=(x - m) / s, where m is the mean of your feature, and s is its standard deviation. This is called standard scaling, or sometimes z-scoring.
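As a small illustration (a numpy sketch with made-up positive values), the mappings above and standard scaling look like this:

import numpy as np

x = np.array([1.0, 10.0, 100.0, 1000.0])      # e.g. stock prices: positive, no theoretical maximum

log_x   = np.log(x)                           # (0, inf)    -> (-inf, inf)
ilogit  = 1.0 / (1.0 + np.exp(-x))            # (-inf, inf) -> (0, 1), inverse logit
ratio_x = x / (1.0 + x)                       # (0, inf)    -> (0, 1), inverse logit after log

z = (x - x.mean()) / x.std()                  # standard scaling (z-scores)
print(log_x, ratio_x, z)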
The question you should be asking yourself is: why normalize at all? What are you going to do with your data? Use it as an input feature? Or use it as a target to predict?
For an input feature, leaving it non-normalized is OK, unless you apply regularization to the model coefficients (like Ridge or Lasso), which works best if all the features are on the same scale (that is, after standard scaling).
For a target feature, leaving it non-normalized is sometimes also OK.
Additive models (like linear regression or gradient boosting) sometimes work better with symmetric distributions. Distributions of stock values (and monetary values in general) are often skewed to the right, so taking the log makes them more convenient to work with.
Finally, if you predict your feature with a neural net whose output has a sigmoid activation function, that output is inherently bounded. In this case, you might wish the target to be bounded as well. To achieve this, you may use x / (1 + x) as the target: if x is always positive, this value will always be between 0 and 1, just like the output of the neural net.
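One detail not spelled out above: the mapping y = x / (1 + x) is invertible, so a bounded prediction can be mapped back to the original scale via x = y / (1 - y). A tiny numpy sketch:

import numpy as np

x = np.array([0.5, 2.0, 50.0])     # positive, unbounded target
y = x / (1.0 + x)                  # bounded target in (0, 1), matching a sigmoid output
x_back = y / (1.0 - y)             # invert the mapping to recover the original scale
print(np.allclose(x, x_back))      # True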
This is a very basic question, but I could not find enough reasons to convince myself. Why must logistic regression use multiplication instead of addition for the likelihood function l(w)?
Your question is more general than just joint likelihood for logistic regression. You're asking why we multiply probabilities instead of add them to represent a joint probability distribution. Two notes:
This applies when we assume the random variables are independent. Otherwise we need to calculate conditional probabilities using the chain rule of probability. You can look at Wikipedia for more information.
We multiply because that's how the joint distribution is defined. Here is a simple example:
Say we have two probability distributions:
X = 1, 2, 3, each with probability 1/3
Y = 0 or 1, each with probability 1/2
We want to calculate the joint likelihood function, L(X=x, Y=y), which is the probability that X takes the value x and Y takes the value y.
For example, L(X=1,Y=0) = P(X=1) * P(Y=0) = 1/6. It wouldn't make sense to write P(X=1) + P(Y=0) = 1/3 + 1/2 = 5/6.
Now it's true that in maximum likelihood estimation we only care about those values of some parameter, theta, that maximize the likelihood function. In this case, we know that if theta maximizes L(X=x,Y=y), then the same theta also maximizes log L(X=x,Y=y). This is where you may have seen addition of probabilities come into play.
Hence we can take the log: log P(X=x,Y=y) = log P(X=x) + log P(Y=y).
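The same toy example in a few lines of Python, just to make the multiply-then-take-log point concrete:

import math

p_x = {1: 1/3, 2: 1/3, 3: 1/3}                      # P(X = x)
p_y = {0: 1/2, 1: 1/2}                              # P(Y = y)

joint = p_x[1] * p_y[0]                             # P(X=1, Y=0) = 1/6 under independence
log_joint = math.log(p_x[1]) + math.log(p_y[0])     # addition only appears on the log scale

print(joint)                                        # 0.1666...
print(math.isclose(math.log(joint), log_joint))     # True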
In short
This could be summarized as "joint probabilities represent an AND". When X and Y are independent, P(X AND Y) = P(X,Y) = P(X)P(Y). Not to be confused with P(X OR Y) = P(X) + P(Y) - P(X,Y).
Let me know if this helps.
I started to learn machine learning. Now I am trying to play around with TensorFlow.
Often I see examples like this:
pred = tf.add(tf.mul(X, W), b)
I also saw such a line in a plain numpy implementation. Why is x*W+b always used instead of W*x+b? Is there an advantage to multiplying the matrices in this way? I see that it is possible (if X, W and b are transposed), but I do not see an advantage. In math class at school we always used Wx+b.
Thank you very much
This is the reason:
By default w is a vector of weights, and in maths a vector is considered a column, not a row.
X is a collection of data: an n x d matrix, where n is the number of samples and d the number of features (upper-case X is the full n x d matrix, lower-case x a single sample, i.e. a 1 x d matrix).
To multiply them correctly, so that each feature is paired with its corresponding weight, you must use X*w + b:
With X*w you multiply every feature by its corresponding weight, and by adding b you add the bias term to every prediction.
If you multiply w*X you are multiplying a (1 x d) by an (n x d), which makes no sense: the inner dimensions do not match.
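A small numpy sketch of the shapes involved (the numbers are arbitrary):

import numpy as np

n, d = 5, 3
X = np.random.randn(n, d)      # n samples, d features
w = np.random.randn(d)         # one weight per feature (a column vector in math notation)
b = 0.5                        # bias term

pred = X @ w + b               # shape (n,): one prediction per sample
print(pred.shape)              # (5,)
# w @ X would try to multiply shapes (d,) x (n, d), which raises a dimension error here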
I'm also confused by this. I guess it is a matter of dimensions. For an n x m matrix W and an n-dimensional vector x, using xW + b can easily be viewed as mapping an n-dimensional feature to an m-dimensional one, i.e., you can easily think of W as an n-dimension -> m-dimension operation, whereas Wx + b (x must now be an m-dimensional vector) becomes an m-dimension -> n-dimension operation, which looks less comfortable in my opinion. :D
I am reading the theory of SVMs. What I understand about the kernel trick is that, if we have data that is not linearly separable in the original n dimensions, we use a kernel to map the data to a higher-dimensional space where it becomes linearly separable (we have to choose the right kernel depending on the data set, etc.). However, when I watched Andrew Ng's Kernel SVM video, what I understood is that we can map the original data into a smaller space, which confuses me! Any explanation?
Could you explain, with a concrete example, how the RBF kernel maps each original data sample x1 = (x11, x12, x13, ..., x1n) to a higher-dimensional space (with dimension m) as X1 = (X11, X12, X13, ..., X1m)? Also, what I understand is that the kernel computes the inner product of the transformed data (so there is another transformation before the RBF, which means the RBF implicitly transforms the data to a higher space, but how?).
Another thing: the kernel is a function k(x, x1): (R^n)^2 -> R with k(x, x1) = g(x).g(x1), where g is a transformation function. How is g defined in the case of the RBF kernel?
Suppose we are on the test set. What I understand is that x is the sample to be classified and x1 is a support vector (because only the support vectors are used to compute the hyperplane). In the case of the RBF kernel,
k(x, x1) = exp(-||x - x1||^2 / (2*sigma^2)), so where is the transformation?
Last question: granting that the RBF kernel maps to a higher dimension m, is it possible to exhibit this m? I want to see the theoretical reality.
I want to implement an SVM with the RBF kernel. What is m here and how do I choose it? How is the kernel trick implemented in practice?
Could you explain, with a concrete example, how the RBF kernel maps each original data sample x1 = (x11, x12, x13, ..., x1n) to a higher-dimensional space (with dimension m) as X1 = (X11, X12, X13, ..., X1m)? Also, what I understand is that the kernel computes the inner product of the transformed data (so there is another transformation before the RBF, which means the RBF implicitly transforms the data to a higher space, but how?).
Exactly as you said: the kernel is an inner product in the projected space, not the projection itself. The whole trick is that you never actually transform your data, because doing so would be computationally too expensive.
Another thing: the kernel is a function k(x, x1): (R^n)^2 -> R with k(x, x1) = g(x).g(x1), where g is a transformation function. How is g defined in the case of the RBF kernel?
For the RBF kernel, g is actually a mapping from R^n into the space of continuous functions (L2), and each point is mapped to an unnormalized Gaussian density with mean x and variance sigma^2. Thus (up to some normalizing constant A that we will drop)
g(x) = N(x, sigma^2)[z] / A # notice this is not a number but a function of z!
and now the inner product in this space of functions is the integral of the product over the whole domain, thus
K(x, y) = <g(x), g(y)>
= INT_{R^n} N(x, sigma^2)[z] N(y, sigma^2)[z] / A^2 dz
= B exp(-||x-y||^2 / (2*sigma^2))
where B is a constant factor (normalization) depending solely on sigma^2, so we can drop it (as scaling does not really matter here) for computational simplicity.
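If you want to see this numerically, here is a rough 1D sketch (my own check, not part of the derivation): approximate the inner product of two Gaussian bumps by a numerical integral and observe that, after dividing by K(x, x), it decays as a Gaussian in the distance between x and y, which is exactly the RBF form (the precise bandwidth constant differs from sigma by a fixed factor, which gets absorbed into the kernel width):

import numpy as np

sigma = 1.0
z = np.linspace(-20.0, 20.0, 200001)            # dense grid standing in for the whole domain

def g(x):
    # unnormalized Gaussian bump centred at x with variance sigma^2, as a function of z
    return np.exp(-(z - x) ** 2 / (2 * sigma ** 2))

def K(x, y):
    # inner product in function space ~ numerical integral of g(x)[z] * g(y)[z] over z
    return np.sum(g(x) * g(y)) * (z[1] - z[0])

for d in [0.0, 0.5, 1.0, 2.0]:
    print(d, K(0.0, d) / K(0.0, 0.0))           # decays as a Gaussian in the distance d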
Suppose we are on the test set. What I understand is that x is the sample to be classified and x1 is a support vector (because only the support vectors are used to compute the hyperplane). In the case of the RBF kernel, k(x, x1) = exp(-||x - x1||^2 / (2*sigma^2)), so where is the transformation?
As said before, the transformation is never used explicitly. You simply show that the inner product of your hyperplane with the transformed point can again be expressed as inner products with the support vectors, so you never transform anything; you just use kernels:
<w, g(x)> = < SUM_{i=1}^N alpha_i y_i g(sv_i), g(x)>
= SUM_{i=1}^N alpha_i y_i <g(sv_i), g(x)>
= SUM_{i=1}^N alpha_i y_i K(sv_i, x)
where sv_i is the i-th support vector, alpha_i is the per-sample weight (Lagrange multiplier) found during the optimization process, and y_i is the label of the i-th support vector.
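Spelled out as code, the test-time decision value is just a sum of kernel calls over the support vectors. A toy sketch with made-up support vectors, multipliers and labels (the bias term b is the usual SVM intercept, which the formula above omits):

import numpy as np

def rbf_kernel(a, b, sigma=1.0):
    # K(a, b) = exp(-||a - b||^2 / (2 * sigma^2))
    return np.exp(-np.sum((a - b) ** 2) / (2 * sigma ** 2))

def decision_value(x, support_vectors, alphas, labels, b=0.0, sigma=1.0):
    # <w, g(x)> + b expressed purely through kernel calls on the support vectors
    return sum(a_i * y_i * rbf_kernel(sv_i, x, sigma)
               for a_i, y_i, sv_i in zip(alphas, labels, support_vectors)) + b

svs = [np.array([0.0, 1.0]), np.array([1.0, 0.0])]   # made-up support vectors
alphas, labels = [0.7, 0.7], [1, -1]                 # made-up multipliers and labels
print(np.sign(decision_value(np.array([0.2, 0.9]), svs, alphas, labels)))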
Last question: granting that the RBF kernel maps to a higher dimension m, is it possible to exhibit this m? I want to see the theoretical reality.
In this case m is infinite, as your new space is the space of continuous functions on the domain R^n -> R; a single vector (function) is thus defined by a continuum of values (the cardinality of the set of real numbers), one per possible input coming from R^n (it is a simple set-theoretic result that R^n, for any positive n, has the cardinality of the continuum). Thus in terms of pure mathematics, m = |R|, which in set theory is the so-called Beth_1 (https://en.wikipedia.org/wiki/Beth_number).
I want to implement an SVM with the RBF kernel. What is m here and how do I choose it? How is the kernel trick implemented in practice?
You do not choose m; it is determined by the kernel itself. Implementing the kernel trick in practice requires expressing all your optimization routines in a form where training points are used solely inside inner products, and then replacing those inner products with kernel calls. This is far too complex to describe in a single SO answer.
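In practice you would typically rely on a library rather than implement this yourself. A minimal sketch with scikit-learn (an assumption on my part, not mentioned in the question), where m is never chosen explicitly, only the kernel and its width parameter gamma:

from sklearn.datasets import make_circles
from sklearn.svm import SVC

X, y = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=0)  # not linearly separable in 2D
clf = SVC(kernel="rbf", gamma=1.0, C=1.0).fit(X, y)      # gamma plays the role of 1 / (2 * sigma^2)
print(clf.score(X, y))                                   # training accuracy
print(clf.support_vectors_.shape)                        # the support vectors the decision relies on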
I previously asked for an explanation of linearly separable data. Still reading Mitchell's Machine Learning book, I have some trouble understanding why exactly the perceptron rule only works for linearly separable data?
Mitchell defines a perceptron as follows:
That is, the output y is 1 if the weighted sum of the inputs exceeds some threshold, and -1 otherwise.
Now, the problem is to determine a weight vector that causes the perceptron to produce the correct output (1 or -1) for each of the given training examples. One way of achieving this is through the perceptron rule:
One way to learn an acceptable weight vector is to begin with random weights, then iteratively apply the perceptron to each training example, modifying the perceptron weights whenever it misclassifies an example. This process is repeated, iterating through the training examples as many times as needed until the perceptron classifies all training examples correctly. Weights are modified at each step according to the perceptron training rule, which revises the weight wi associated with input xi according to the rule:
So, my question is: Why does this only work with linearly separable data? Thanks.
Because the dot product of w and x is a linear combination of the x's, so you are, in fact, splitting your data into 2 classes using a hyperplane a_1 x_1 + … + a_n x_n > 0.
Consider a 2D example: X = (x, y) and W = (a, b); then X * W = a*x + b*y. sgn returns 1 if its argument is greater than 0, so for class #1 you have a*x + b*y > 0, which is equivalent to y > -(a/b)*x (assuming b != 0). That equation is linear, and it divides the 2D plane into 2 parts.
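To see the rule and the claim together, here is a minimal numpy sketch of the perceptron training rule on data that is linearly separable by construction (eta is the learning rate, t the target in {-1, +1}, o the perceptron's output):

import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 2))
X = X[np.abs(X[:, 0] + 2.0 * X[:, 1]) > 0.5]       # keep a clear margin around the separating line
t = np.where(X[:, 0] + 2.0 * X[:, 1] > 0, 1, -1)   # labels separable by the line x + 2y = 0

w = np.zeros(2)                                    # start from (here: zero) weights
eta = 0.1                                          # learning rate
for _ in range(100):                               # iterate until nothing is misclassified
    errors = 0
    for x_i, t_i in zip(X, t):
        o = 1 if w @ x_i > 0 else -1               # sgn of the weighted sum
        if o != t_i:                               # modify weights only on a misclassification
            w += eta * (t_i - o) * x_i             # perceptron rule: w_i <- w_i + eta * (t - o) * x_i
            errors += 1
    if errors == 0:
        break

print(w, errors)                                   # a separating weight vector, 0 remaining errors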