Difference between Generative/Discriminative and Parametric/Nonparametric Algorithms/Models

Here on SO I found the following explanation of generative and discriminative algorithms:
"A generative algorithm models how the data was generated in order to categorize a signal. It asks the question: based on my generation assumptions, which category is most likely to generate this signal?
A discriminative algorithm does not care about how the data was generated, it simply categorizes a given signal."
And here is the definition for parametric and nonparametric algorithms
"Parametric: data are drawn from a probability distribution of specific form up to unknown parameters.
Nonparametric: data are drawn from a certain unspecified probability distribution.
"
So, essentially, can we say that generative and parametric algorithms assume an underlying model, whereas discriminative and nonparametric algorithms don't assume any model?
Thanks.

Say you have inputs X (probably a vector) and output Y (probably univariate). Your goal is to predict Y given X.
A generative method uses a model of the joint probability p(X,Y) to determine P(Y|X). It is thus possible given a generative model with known parameters to sample jointly from the distribution p(X,Y) to produce new samples of both input X and output Y (note they are distributed according to the assumed, not true, distribution if you do this). Contrast this to discriminative approaches which only have a model of the form p(Y|X). Thus provided with input X they can sample Y; however, they cannot sample new X.
Both assume a model. However, discriminative approaches assume only a model of how Y depends on X, not a model of X itself. Generative approaches model both. Thus, given a fixed number of parameters, you might argue (and many have) that it's easier to spend them on modelling the thing you care about, p(Y|X), than on the distribution of X, since you'll always be provided with the X for which you wish to know Y.
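To make the distinction concrete, here is a minimal numpy sketch (my own illustration, not from the references below; the logistic weights are made up): the generative model estimates p(Y) and p(X|Y) and can therefore sample new (X, Y) pairs, while the discriminative model only gives p(Y|X).

    import numpy as np

    rng = np.random.default_rng(0)

    # Simulated 1-D training data: two classes with different means.
    n = 500
    y = rng.integers(0, 2, size=n)
    x = rng.normal(loc=np.where(y == 1, 2.0, 0.0), scale=1.0)

    # Generative model: estimate p(y) and p(x | y) as class-conditional Gaussians.
    prior = np.array([np.mean(y == 0), np.mean(y == 1)])
    mu = np.array([x[y == 0].mean(), x[y == 1].mean()])
    sd = np.array([x[y == 0].std(), x[y == 1].std()])

    def normal_pdf(v, m, s):
        return np.exp(-0.5 * ((v - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))

    def generative_posterior(x_new):
        # p(y | x) via Bayes' rule from the joint p(x, y) = p(x | y) p(y).
        joint = prior * normal_pdf(x_new, mu, sd)
        return joint / joint.sum()

    # Because the joint is modelled, we can also sample a brand-new (x, y) pair:
    y_new = rng.choice([0, 1], p=prior)
    x_new = rng.normal(mu[y_new], sd[y_new])

    # Discriminative model: only the conditional p(y = 1 | x), e.g. logistic regression.
    def discriminative_posterior(x_query, w0=-1.0, w1=1.0):  # weights are hypothetical
        return 1.0 / (1.0 + np.exp(-(w0 + w1 * x_query)))    # no way to sample new x from this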
Useful references: this (very short) paper by Tom Minka. This seminal paper by Andrew Ng and Michael Jordan.
The distinction between parametric and non-parametric models is probably going to be harder to grasp until you have more stats experience. A parametric model has a fixed, finite number of parameters regardless of how many data points are observed. Most probability distributions are parametric: consider a variable z which is the height of people, assumed to be normally distributed. As you observe more people, your estimates of the parameters \mu and \sigma, the mean and standard deviation of z, become more accurate, but you still only have two parameters.
In contrast, the number of parameters in a non-parametric model can grow with the amount of data. Consider an induced distribution over people's heights which places a normal distribution over each observed sample, with mean given by the measurement and a fixed standard deviation. The marginal distribution over new heights is then a mixture of normal distributions, and the number of mixture components increases with each new data point. This is a non-parametric model of people's height. This specific example is called a kernel density estimator. Popular (but more complicated) non-parametric models include Gaussian Processes for regression and Dirichlet Processes.
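A minimal numpy sketch of that kernel density estimator (my own illustration; the heights and bandwidth are made up) makes the "growing number of parameters" point explicit: every observation contributes its own Gaussian component.

    import numpy as np

    rng = np.random.default_rng(1)
    heights = rng.normal(170, 10, size=50)  # hypothetical observed heights (cm)
    bandwidth = 2.5                         # fixed standard deviation of each kernel

    def kde(z, data, h):
        # Mixture of N(data_i, h^2) components -- one component per data point.
        comps = np.exp(-0.5 * ((z - data[:, None]) / h) ** 2) / (h * np.sqrt(2 * np.pi))
        return comps.mean(axis=0)

    grid = np.linspace(140, 200, 200)
    density = kde(grid, heights, bandwidth)
    # Observing one more person adds one more mixture component: the model grows with the data.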
A pretty good tutorial on non-parametrics can be found here, which constructs the Chinese Restaurant Process as the limit of a finite mixture model.

I don't think you can say that. E.g. linear regression is a discriminative algorithm - you make an assumption about P(Y|X) and then estimate the parameters directly from the data, without making any assumption about P(X) or P(X|Y), as you would in the case of generative models. But at the same time, any inference based on linear regression, including inference about the properties of the parameters, is a parametric estimation, since there is an assumption about the behaviour of the unobserved errors.

Here I'm only talking about parametric/non-parametric. Generative/discriminative is a separate concept.
A non-parametric model means you don't make any assumptions about the distribution of your data. For example, in the real world, data will not 100% follow theoretical distributions like the Gaussian, beta, Poisson, Weibull, etc. Those distributions were developed to meet our need to model data.
On the other hand, parametric models try to completely explain our data using parameters. In practice this is often preferred, because it makes it easier to define how the model should behave in different circumstances (for example, we already know the derivatives/gradients of the model, what happens when we set the rate too high/too low in a Poisson model, etc.).

Related

Non-linear models GAM, MARS assumptions

Do models such as MARS and GAM assume homoscedastic, i.i.d. errors? There seems to be disagreement in the literature about certain assumptions. It looks like MARS is more robust than GAM, but this is not clearly stated in the original papers.
If normality is an issue, should one use transformed data (Box-Cox or Yeo-Johnson) for the regression?
GAMs don't assume conditional normality; the "G" stands for generalised and indicates these models build off of the generalized linear model framework, which traditionally models the data as draws from a distribution in the exponential family.
If you fit a GAM with a Gaussian conditional distribution, then that model would assume conditional normality, but the general class of GAMs does not as one can choose an appropriate distribution for the response.
The Gaussian GAM also assumes the observations are conditionally homoscedastic. Other response distributions imply mean-variance relationships; e.g. with a Poisson response distribution, the variance equals the mean and hence larger counts are assumed to have higher variance.
GAMs do assume that the observations are i.i.d.; GEEs, GLMMs, and GAMMs are extensions that relax the assumption of independence.
MARS was originally fitted via OLS, so it would pick up some of the assumptions of the general linear model, but typically one uses some form of cross-validation to assess the model fit. As long as the cross-validation scheme reflects the properties of the data, the classical assumptions of the linear model don't really apply, because you're not relying on the theory to do inference.
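As a hedged illustration of the response-family point (a sketch assuming the pygam package is available; the simulated data are made up), the smooth structure stays the same while the distributional assumption changes with the chosen family:

    import numpy as np
    from pygam import LinearGAM, PoissonGAM, s

    rng = np.random.default_rng(2)
    X = rng.uniform(0, 10, size=(200, 1))
    y_gauss = np.sin(X[:, 0]) + rng.normal(0, 0.3, size=200)    # roughly homoscedastic response
    y_count = rng.poisson(np.exp(1.0 + 0.5 * np.sin(X[:, 0])))  # variance tied to the mean

    gam_gauss = LinearGAM(s(0)).fit(X, y_gauss)  # assumes conditional normality and homoscedasticity
    gam_pois = PoissonGAM(s(0)).fit(X, y_count)  # assumes the Poisson mean-variance relationship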

Linear Regression: Is there a difference in the model between using ML instead MSE?

We know we need 4 things for building a machine learning algorithm:
A Dataset
A Model
A cost function
An optimization procedure
Taking the example of linear regression (y = m*x + q), we have two common ways of finding the best parameters: using maximum likelihood (ML) or mean squared error (MSE) as the cost function.
When using ML, we hypothesize that the data are Gaussian-distributed.
Is this assumption also part of the model?
If it's not, why? Is it part of the cost function?
I can't see the "edge" of the model in this case.
Is this assumption also part of the model?
Yes, it is. The idea behind different loss functions derives from the nature of the problem and, consequently, the nature of the model.
MSE by definition is the mean of the squared errors (the error being the difference between the true y and the predicted y), which in turn will be high if the data are not Gaussian-like distributed. Just imagine a few extreme values among the data: what happens to the line's slope and, consequently, to the residual error?
It is worth mentioning the assumptions of Linear Regression:
Linear relationship
Multivariate normality
No or little multicollinearity
No auto-correlation
Homoscedasticity
If it's not, why? Is it part of the cost function?
As far as I have seen, the assumption is not directly related to the cost function itself; rather, as mentioned above, it is related to the model itself.
For example, the idea behind the Support Vector Machine is separation of classes: finding a line/hyperplane (in multidimensional space) that separates the classes, so its cost function is the hinge loss, which aims for a "maximum-margin" classification.
On the other hand, Logistic Regression uses Log-Loss (related to cross-entropy) because the model is binary and works on the probability of the output (0 or 1). And the list goes on...
The assumption that the data are Gaussian-distributed is part of the model in the sense that, for Gaussian-distributed data, the minimal mean squared error also yields the maximum likelihood solution for the data given the model parameters. (Common proof, you can look it up if you are interested.)
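For reference, a short version of that proof: assume y_i = w^\top x_i + \epsilon_i with \epsilon_i \sim N(0, \sigma^2). The log-likelihood of the data is then

\log p(y \mid X, w) = -\frac{n}{2}\log(2\pi\sigma^2) - \frac{1}{2\sigma^2}\sum_{i=1}^{n}\left(y_i - w^\top x_i\right)^2.

Only the second term depends on w, so maximizing the likelihood over w is exactly the same as minimizing \sum_i (y_i - w^\top x_i)^2, i.e. the (mean) squared error.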
So you could say that the Gaussian distribution assumption justifies the choice of least squares as the loss function.

Estimating cluster assignment using Stan

I am trying to fit a finite Gaussian mixture model with unknown means and covariances using Stan. I am aware that, since HMC can't be applied to sample from discrete distributions, the marginalization technique is usually used to infer mixture parameters in Stan. However, for my application, I need the data cluster assignments. What is the best way to infer them in Stan? Suggestions will be appreciated.
Chapter 13 of the Stan User Manual discusses this in some detail. In short, you can calculate (in the generated quantities block) the posterior probability that an observation falls in each of a finite number of categories and then use that vector of probabilities to draw a category realization from the categorical distribution.
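As a rough post-processing sketch in Python (my own illustration; it assumes you have already extracted posterior draws of the mixture weights, means and standard deviations of a 1-D K-component mixture, and the variable names are mine), this is the same computation the generated quantities block would perform:

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(3)

    def sample_assignments(y, weights, mus, sigmas):
        """For one posterior draw (weights, mus, sigmas), draw a cluster label for each observation."""
        # log p(z_i = k) + log p(y_i | z_i = k), for every observation i and component k
        log_w = np.log(weights)[None, :] + norm.logpdf(y[:, None], mus[None, :], sigmas[None, :])
        # Normalize to posterior membership probabilities p(z_i = k | y_i, theta).
        probs = np.exp(log_w - log_w.max(axis=1, keepdims=True))
        probs /= probs.sum(axis=1, keepdims=True)
        # Draw one categorical realization per observation.
        return np.array([rng.choice(len(weights), p=p) for p in probs])

    # Repeating this for every posterior draw gives posterior samples of the assignments themselves.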

Creating synthetic dataset with known SVM parameters

I want to create a synthetic dataset consisting of 2 classes and 3 features for testing a hyperparameter optimization technique for an SVM classifier with an RBF kernel. The hyperparameters are gamma and C (the cost).
I have created my current 3D synthetic dataset as follows:
I have created 10 base points for each class by sampling from a multivariate normal distribution with mean (1,0,0) and (0,1,0), respectively, and unit variance.
I have added more points to each class by picking a base point at random and then sampling a new point from a normal distribution with mean equal to the chosen base point and variance I/5.
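In code, the construction described above could look roughly like this (a numpy sketch; the number of extra points per class is an arbitrary choice of mine):

    import numpy as np

    rng = np.random.default_rng(4)
    class_means = {0: np.array([1.0, 0.0, 0.0]), 1: np.array([0.0, 1.0, 0.0])}

    X, y = [], []
    for label, mean in class_means.items():
        base = rng.multivariate_normal(mean, np.eye(3), size=10)      # 10 base points, unit variance
        for _ in range(100):                                          # extra points per class
            centre = base[rng.integers(len(base))]                    # pick a base point at random
            X.append(rng.multivariate_normal(centre, np.eye(3) / 5))  # covariance I/5 around it
            y.append(label)
    X, y = np.array(X), np.array(y)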
It would be very cool if I could determine the best C and gamma from the dataset (before running the SVM), so that I can check whether my optimization technique finds the best parameters in the end.
Is there a possibility to calculate the best gamma and C parameter from the synthetic dataset described above?
Or else is there a way to create a synthetic dataset where the best gamma and C parameters are known?
Very interesting question, but the answer is no. It is completely data specific; even knowing the distributions exactly, unless you have an infinite sample it is mathematically impossible to prove the best C/gamma, as SVM in the end is a purely point-based method (as opposed to a density-estimation-based one). A typical comparison is done in a different scenario: you take real data and fit the hyperparameters using other techniques, like Gaussian processes (Bayesian optimization), which generate a baseline (and will probably get to the optimal C and gamma too, or at least really close to them). In the end, looking for the best C and gamma is not a complex problem, so simply run a good technique (like Bayesian optimization) for a longer time and you will get your optima to compare against. Furthermore, remember that the task of hyperparameter optimization is not to find a particular C and gamma; it is to find hyperparameters yielding the best results, and in fact, even for SVM, there might be many sets of "optimal" C and gamma, all yielding the same results (in terms of your finite dataset) despite being very far away from each other.
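To make that suggestion concrete, here is a rough baseline sketch with scikit-learn (a plain cross-validated grid search rather than Bayesian optimization; the parameter ranges are illustrative, and X, y are the synthetic data from the sketch above):

    from sklearn.model_selection import GridSearchCV
    from sklearn.svm import SVC

    param_grid = {
        "C": [0.01, 0.1, 1, 10, 100, 1000],
        "gamma": [0.001, 0.01, 0.1, 1, 10],
    }
    search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
    search.fit(X, y)  # X, y: the synthetic dataset generated earlier
    print(search.best_params_, search.best_score_)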

Are these different definitions of Likelihood functions In Machine Learning equivalent?

Okay, I have a lot of confusion regarding the way likelihood functions are defined in the context of different machine learning algorithms. For the context of this discussion, I will reference Andrew Ng's CS229 lecture notes.
Here is my understanding thus far.
In the context of classification, we have two different types of algorithms: discriminative and generative. The goal in both cases is to determine the posterior probability, that is p(C_k|x;w), where w is the parameter vector, x is the feature vector and C_k is the kth class. The approaches differ: in the discriminative case we try to solve for the posterior probability directly given x, while in the generative case we determine the class-conditional distributions p(x|C_k) and the class priors p(C_k), and use Bayes' theorem to determine p(C_k|x;w).
From my understanding, Bayes' theorem takes the form p(parameters|data) = p(data|parameters)p(parameters)/p(data), where the likelihood function is p(data|parameters), the posterior is p(parameters|data) and the prior is p(parameters).
Now in the context of linear regression, we have the likelihood function:
p(y|X;w), where y is the vector of target values and X is the design matrix.
This makes sense according to how we defined the likelihood function above.
Now moving over to classification, the likelihood is still defined as p(y|X;w). Will the likelihood always be defined like this?
The posterior probability we want is p(y_i|x;w) for each class, which is very weird since this is apparently the likelihood function as well.
When reading through a text, it just seems the likelihood is always defined in different ways, which confuses me profusely. Is there a difference in how the likelihood function should be interpreted for regression vs classification, or, say, generative vs discriminative? E.g. the way the likelihood is defined in Gaussian discriminant analysis looks very different.
If anyone can recommend resources that go over this in detail, I would appreciate it.
A quick answer is that the likelihood function is a function proportional to the probability of seeing the data conditional on all the parameters in your model. As you said, in linear regression it is p(y|X,w), where w is your vector of regression coefficients and X is your design matrix.
In a classification context, your likelihood would be proportional to P(y|X,w) where y is your vector of observed class labels. You do not have a y_i for each class, because your training data was observed to be in one particular class. Given your model specification and your model parameters, for each observed data point you should be able to calculate the probability of seeing the observed class. This is your likelihood.
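Written out for the two cases discussed here (standard forms; \sigma(\cdot) denotes the logistic function in the classification case, while \sigma^2 is the noise variance in the regression case):

Regression: p(y \mid X, w) = \prod_{i=1}^{n} N\!\left(y_i \mid w^\top x_i, \sigma^2\right)

Binary classification: p(y \mid X, w) = \prod_{i=1}^{n} \sigma(w^\top x_i)^{y_i} \bigl(1 - \sigma(w^\top x_i)\bigr)^{1 - y_i}

In both cases the likelihood is the probability of the observed targets given the inputs and the parameters; only the assumed conditional distribution changes.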
The posterior predictive distribution, p(y_new_i|X,y), is the probability you want in paragraph 4. This is distinct from the likelihood because it is the probability for some unobserved case, rather than the likelihood, which relates to your training data. Note that I removed w because typically you would want to marginalize over it rather than condition on it: there is still uncertainty in the estimate after training your model, and you would want your predictions to account for that rather than condition on one particular value.
As an aside, the goal of all classification methods is not to find a posterior distribution, only Bayesian methods are really concerned with a posterior and those methods are necessarily generative. There are plenty of non-Bayesian methods and plenty of non-probabilistic discriminative models out there.
Any function proportional to p(a|b) where a is fixed is a likelihood function for b. Note that p(a|b) might be called something else, depending on what's interesting at the moment. For example, p(a|b) can also be called the posterior for a given b. The names don't really matter.
