When we train a decision tree classifier on a training set, we get a tree model, and that model can be converted to rules and incorporated into Java code.
Now, if I train on the same training set using Naive Bayes, in what form is the model? And how can I incorporate that model into my Java code?
If no model results from the training, then what is the difference between Naive Bayes and a lazy learner (e.g. kNN)?
Thanks in advance.
Naive Bayes constructs estimates of the conditional probabilities P(f_1,...,f_n|C_j), where the f_i are features and the C_j are classes. Using Bayes' rule together with estimates of the priors P(C_j) and the evidence P(f_i), these can be turned into x = P(C_j|f_1,...,f_n), which can be roughly read as "given the features f_i, I think they describe an object of class C_j, and my certainty is x". In fact, NB assumes the features are conditionally independent given the class, so it actually works with simple per-feature probabilities of the form x = P(f_i|C_j), i.e. "given f_i, I think it is C_j with probability x".
So the form of the model is set of probabilities:
Conditional probabilities P(f_i|C_j) for each feature f_i and each class C_j
Priors P(C_j) for each class
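To make this concrete, here is a minimal sketch (in Python rather than Java, but the two probability tables translate directly) of what such a model amounts to once training is done: stored priors and conditionals, plus a few lines that apply Bayes' rule under the independence assumption. All of the numbers, class names and feature names are made up.

    import math

    # Hypothetical two-class, three-feature (binary) model: these two tables
    # ARE the trained Naive Bayes model.
    priors = {"spam": 0.4, "ham": 0.6}                       # P(C_j)
    cond = {                                                 # P(f_i = present | C_j)
        "spam": {"viagra": 0.80, "meeting": 0.10, "hello": 0.30},
        "ham":  {"viagra": 0.01, "meeting": 0.50, "hello": 0.60},
    }

    def classify(present_features):
        """Return the class maximizing P(C_j) * prod_i P(f_i|C_j), in log space."""
        best_class, best_score = None, float("-inf")
        for c, prior in priors.items():
            score = math.log(prior)
            for f, p in cond[c].items():
                # Bernoulli-style treatment: a feature is either present or absent.
                score += math.log(p if f in present_features else 1.0 - p)
            if score > best_score:
                best_class, best_score = c, score
        return best_class

    print(classify({"viagra"}))   # -> "spam"
    print(classify({"meeting"}))  # -> "ham"

Since the model is just these tables, "incorporating it into code" means exporting the numbers and re-implementing the dozen lines of scoring logic in whatever language you like.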
kNN, on the other hand, is something completely different. It is not really a "learned model" in the strict sense, as you don't tune any parameters. It is rather a classification algorithm which, given a training set and a number k, simply answers the question "for a given point x, what is the majority class among the k nearest points in the training set?".
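For contrast, a minimal kNN sketch under the same framing: nothing is estimated, the "model" is just the stored training set together with a distance function and the choice of k. The points and labels are made up.

    import math
    from collections import Counter

    # The entire "model": the raw training points and their labels.
    train = [((1.0, 1.0), "A"), ((1.2, 0.8), "A"),
             ((5.0, 5.0), "B"), ((5.5, 4.5), "B")]

    def knn_predict(x, k=3):
        # Euclidean distance here; for abstract objects you would have to
        # define your own distance measure (see the discussion below).
        nearest = sorted(train, key=lambda point: math.dist(x, point[0]))[:k]
        votes = Counter(label for _, label in nearest)
        return votes.most_common(1)[0][0]

    print(knn_predict((1.1, 0.9)))  # -> "A"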
The main difference is in the input data. Naive Bayes works on objects that are "observations", so you simply need some features which are either present in the classified object or absent. It does not matter whether it is a color, an object in a photo, a word in a sentence or an abstract concept in a highly complex topological object. kNN, in contrast, is a distance-based classifier, so it requires objects between which you can measure a distance. To classify abstract objects you therefore first have to come up with some metric or distance measure that describes their similarity, and the result will be highly dependent on that definition. Naive Bayes, on the other hand, is a simple probabilistic model which does not use the concept of distance at all. It treats every feature in the same way: it is either there or it isn't, end of story (of course this can be generalised to continuous variables with a given density function, but that is not the point here).
Naive Bayes constructs/estimates the probability distribution from which your training samples were generated.
Now, given one such distribution per output class, you take a test sample and assign it to the class whose distribution has the highest probability of generating that sample.
In short, you take the test sample, run it through all of the probability distributions (one per class) and calculate, for each of them, the probability of generating that sample.
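As a rough illustration of that "run it through all the distributions" step, here is a sketch that models a single numeric feature with one Gaussian per class (all parameters invented) and picks the class whose distribution, weighted by its prior, assigns the sample the highest density.

    from statistics import NormalDist

    # Invented per-class models of a single feature (say, temperature).
    class_priors = {"cold": 0.5, "hot": 0.5}
    class_dists = {"cold": NormalDist(mu=5.0, sigma=3.0),
                   "hot":  NormalDist(mu=28.0, sigma=4.0)}

    def predict(temperature):
        # Pick the class whose estimated distribution (times its prior) is
        # most likely to have generated this observation.
        return max(class_priors,
                   key=lambda c: class_priors[c] * class_dists[c].pdf(temperature))

    print(predict(8.0))   # -> "cold"
    print(predict(25.0))  # -> "hot"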
So, as I understand it, to implement unsupervised Naive Bayes we assign a random probability to each class for each instance and then run it through the normal Naive Bayes algorithm. I understand that, through each iteration, the random estimates get better, but I can't for the life of me figure out exactly how that works.
Anyone care to shed some light on the matter?
The variant of Naive Bayes for unsupervised learning that I've seen is basically an application of a Gaussian mixture model (GMM), fitted with Expectation Maximization (EM), to determine the clusters in the data.
In this setting, it is assumed that the data can be classified, but the classes are hidden. The problem is to determine the most probable classes by fitting a Gaussian distribution per class. The Naive Bayes assumption defines the particular probabilistic model to use, in which the attributes are conditionally independent given the class.
From the paper "Unsupervised naive Bayes for data clustering with mixtures of truncated exponentials" by Jose A. Gamez:
From the previous setting, probabilistic model-based clustering is modeled as a mixture of models (see e.g. (Duda et al., 2001)), where the states of the hidden class variable correspond to the components of the mixture (the number of clusters), and the multinomial distribution is used to model discrete variables while the Gaussian distribution is used to model numeric variables. In this way we move to a problem of learning from unlabeled data and usually the EM algorithm (Dempster et al., 1977) is used to carry out the learning task when the graphical structure is fixed and structural EM (Friedman, 1998) when the graphical structure also has to be discovered (Pena et al., 2000). In this paper we focus on the simplest model with fixed structure, the so-called Naive Bayes structure (fig. 1) where the class is the only root variable and all the attributes are conditionally independent given the class.
See also this discussion on CV.SE.
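For a concrete, hedged illustration of that setup: a Gaussian mixture with diagonal covariances has exactly the Naive Bayes structure quoted above (attributes independent given the hidden class) and is fitted with EM from unlabeled data. A minimal sketch with scikit-learn and synthetic data:

    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    # Two hidden "classes" generating 2-D points; the labels are never shown
    # to the model.
    X = np.vstack([rng.normal([0, 0], 1.0, size=(100, 2)),
                   rng.normal([5, 5], 1.0, size=(100, 2))])

    # covariance_type="diag" = conditional independence of attributes given
    # the hidden class, i.e. the Naive Bayes structure.
    gmm = GaussianMixture(n_components=2, covariance_type="diag", random_state=0)
    gmm.fit(X)                       # EM: alternate soft assignments and re-estimation
    print(gmm.means_)                # roughly (0, 0) and (5, 5), in some order
    print(gmm.predict_proba(X[:3]))  # soft class memberships for the first points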
Generative and discriminative models seem to learn the joint probability distribution P(x,y) and the conditional distribution P(y|x), respectively. But at a fundamental level I fail to convince myself what it means to say that a probability distribution is learnt.
It means that your model is either functioning as an estimator for the distribution from which your training samples were drawn, or is utilizing that estimator to perform some other prediction.
To give a trivial example, consider a set of observations {x[1], ..., x[N]}, and say you want to fit a Gaussian estimator to it. From these samples, the natural parameter estimates for that Gaussian are the sample mean and variance of the data:
Mean = 1/N * (x[1] + ... + x[N])
Variance = 1/(N-1) * ((x[1] - Mean)^2 + ... + (x[N] - Mean)^2)
(Strictly speaking, the maximum-likelihood estimate of the variance divides by N rather than N-1; the difference is negligible for large N.)
Now you have a model capable of generating new samples from (an estimate of) the distribution your training sample was drawn from.
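A minimal sketch of those two formulas, plus the "generate new samples from the estimate" step, using only the standard library (the observations are made up):

    import random
    import statistics

    x = [2.1, 1.9, 2.4, 2.0, 1.8, 2.2]        # the observations x[1..N]

    mean = statistics.mean(x)                  # 1/N * sum of the x[i]
    variance = statistics.variance(x)          # 1/(N-1) * sum of squared deviations
    std = variance ** 0.5

    # Draw new samples from the fitted estimate of the generating distribution.
    new_samples = [random.gauss(mean, std) for _ in range(5)]
    print(mean, variance, new_samples)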
Getting a little more sophisticated, you could consider something like a Gaussian mixture model. This similarly infers the best-fitting parameters of a model given your data, except that this time the model is composed of multiple Gaussians. As a result, given some test data, you can probabilistically assign a class to each sample, based on the relative contribution of each Gaussian component to the probability density at the point of observation. This of course rests on the fundamental assumption of machine learning: that your training and test data are drawn from the same distribution (something you ought to check).
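A small sketch of that soft assignment, assuming the two component Gaussians and their mixing weights have already been fitted somehow; the "responsibility" of each component at a point x is its weighted density divided by the total density there:

    from statistics import NormalDist

    # (mixing weight, fitted Gaussian) per component -- invented parameters.
    components = {"class_0": (0.6, NormalDist(mu=0.0, sigma=1.0)),
                  "class_1": (0.4, NormalDist(mu=4.0, sigma=1.0))}

    def responsibilities(x):
        # Relative contribution of each component to the density at x.
        dens = {c: w * d.pdf(x) for c, (w, d) in components.items()}
        total = sum(dens.values())
        return {c: v / total for c, v in dens.items()}

    print(responsibilities(0.5))  # mostly class_0
    print(responsibilities(3.0))  # mostly class_1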
Okay, I have a lot of confusion regarding the way likelihood functions are defined in the context of different machine learning algorithms. For the context of this discussion, I will reference Andrew Ng's CS229 lecture notes.
Here is my understanding thus far.
In the context of classification, we have two different types of algorithms: discriminative and generative. The goal in both cases is to determine the posterior probability p(C_k|x;w), where w is the parameter vector, x is the feature vector and C_k is the k-th class. The approaches differ in that a discriminative model tries to model the posterior probability directly given x, while a generative model estimates the class-conditional distributions p(x|C_k) and the class priors p(C_k) and uses Bayes' theorem to obtain p(C_k|x;w).
From my understanding, Bayes' theorem takes the form p(parameters|data) = p(data|parameters) p(parameters) / p(data), where the likelihood function is p(data|parameters), the posterior is p(parameters|data) and the prior is p(parameters).
Now, in the context of linear regression, we have the likelihood function p(y|X;w), where y is the vector of target values and X is the design matrix.
This makes sense according to how we defined the likelihood function above.
Now, moving over to classification, the likelihood is still defined as p(y|X;w). Will the likelihood always be defined this way?
The posterior probability we want is p(y_i|x;w) for each class, which is very weird, since this is apparently the likelihood function as well.
When reading through texts, it just seems the likelihood is defined in different ways each time, which confuses me profusely. Is there a difference in how the likelihood function should be interpreted for regression vs. classification, or for generative vs. discriminative models? For example, the way the likelihood is defined in Gaussian discriminant analysis looks very different.
If anyone can recommend resources that go over this in detail, I would appreciate it.
A quick answer is that the likelihood function is a function proportional to the probability of seeing the data conditional on all the parameters in your model. As you said, in linear regression it is p(y|X,w), where w is your vector of regression coefficients and X is your design matrix.
In a classification context, your likelihood would be proportional to P(y|X,w), where y is your vector of observed class labels. You do not have a y_i for each class, because each training instance was observed to be in exactly one class. Given your model specification and your model parameters, for each observed data point you should be able to calculate the probability of seeing the observed class. This is your likelihood.
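As a concrete, if somewhat arbitrary, illustration: the sketch below fits a discriminative model (scikit-learn's LogisticRegression, used here purely as an example) and then computes the log-likelihood of the training labels, i.e. the summed log-probability of seeing each observed class under the fitted model. The data are made up.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    X = np.array([[0.1], [0.4], [0.35], [0.8], [0.9], [0.7]])
    y = np.array([0, 0, 0, 1, 1, 1])       # one observed label per point

    clf = LogisticRegression().fit(X, y)
    probs = clf.predict_proba(X)           # P(class | x, w) for both classes

    # Pick out, for each point, the probability of the class actually observed.
    log_likelihood = np.sum(np.log(probs[np.arange(len(y)), y]))
    print(log_likelihood)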
The posterior predictive distribution, p(y_new|X,y), is the probability you want in your fourth paragraph. It is distinct from the likelihood because it is the probability of some unobserved case, whereas the likelihood relates to your training data. Note that I removed w: typically you would want to marginalize over it rather than condition on it, because there is still uncertainty in the estimate after training your model, and you would want your predictions to account for that uncertainty rather than condition on one particular value.
As an aside, not all classification methods aim to find a posterior distribution. Only Bayesian methods are really concerned with a posterior, and those methods are necessarily generative. There are plenty of non-Bayesian methods and plenty of non-probabilistic discriminative models out there.
Any function proportional to p(a|b), where a is fixed, is a likelihood function for b. Note that p(a|b) might be called something else, depending on what is of interest at the moment. For example, p(a|b) can also be called the posterior for a given b. The names don't really matter.
Here is an example with a step-by-step procedure for making the system learn and classify input data.
It classifies the given 5 dataset domains correctly. However, it also classifies input that consists only of stop words, e.g.:
Input : docs_new = ['God is love', 'what is where']
Output :
'God is love' => soc.religion.christian
'what is where' => soc.religion.christian
Here, 'what is where' should not be classified, as it contains only stop words. How does scikit-learn handle this scenario?
I am not sure which classifier you are using, but let's assume you use a Naive Bayes classifier.
In this case, the sample is labeled as the class for which the posterior probability is maximum given a particular pattern of words.
And the posterior probability is calculated as
posterior = likelihood x prior
(Note that the evidence term is dropped since it is constant.) Additionally, additive smoothing is used to avoid scenarios where the likelihood is zero.
Anyway, if your input text contains only stop words, the likelihood is constant across all classes and the posterior probability is entirely determined by your prior probability. So what basically happens is that a Naive Bayes classifier (if the priors were estimated from the training data) will assign the class label that occurs most often in the training data.
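A minimal sketch of that prior-only behaviour, with made-up training texts: once stop words are removed, 'what is where' has no features left, so scikit-learn's MultinomialNB falls back on the class priors and predicts the majority class from the training data.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    train_texts = ["God is love", "Jesus and faith", "OpenGL renders graphics"]
    train_labels = ["soc.religion.christian", "soc.religion.christian",
                    "comp.graphics"]

    vec = CountVectorizer(stop_words="english")
    X_train = vec.fit_transform(train_texts)
    clf = MultinomialNB().fit(X_train, train_labels)

    X_new = vec.transform(["what is where"])  # every token is a stop word
    print(X_new.nnz)                          # 0 -> no features at all
    print(clf.predict(X_new))                 # majority class: soc.religion.christian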
A classifier always predicts one of the classes that it saw during its training phase, by definition. I don't know what you did to produce the classifier, but most likely it's just predicting the majority class for any sample without interesting features; that's what naive Bayes, linear SVMs and other typical text classifiers do.
Standard text classification uses TfidfVectorizer to transform text into tokens and then into feature vectors that serve as input to the classifier.
One of its init parameters is stop_words; with stop_words='english' the vectorizer will produce no features at all for the sentence 'what is where'.
Stop words are matched lexically against every input token using a built-in English stop word list, which you can examine here: https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/feature_extraction/stop_words.py
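A quick check of that claim (the exact behaviour of the stop word list may vary slightly between scikit-learn versions):

    from sklearn.feature_extraction.text import ENGLISH_STOP_WORDS, TfidfVectorizer

    print(all(w in ENGLISH_STOP_WORDS for w in ["what", "is", "where"]))  # True

    vec = TfidfVectorizer(stop_words="english")
    vec.fit(["God is love", "what is where"])
    print(sorted(vec.vocabulary_))               # only ['god', 'love'] survive
    print(vec.transform(["what is where"]).nnz)  # 0 non-zero features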
I'm trying to solve a text classification problem for academic purposes. I need to classify tweets into labels like "cloud", "cold", "dry", "hot", "humid", "hurricane", "ice", "rain", "snow", "storms", "wind" and "other". Each tweet in the training data has probabilities against all the labels. Say the message "Can already tell it's going to be a tough scoring day. It's as windy right now as it was yesterday afternoon." has a 21% chance of being "hot" and a 79% chance of being "wind". I have worked on classification problems that predict whether a tweet is wind or hot or something else, but in this problem each training example has probabilities against all the labels. I have previously used the Mahout Naive Bayes classifier, which takes a single label per text to build its model. How can I feed these per-label probabilities into a classifier?
In a probabilistic setting, these probabilities reflect uncertainty about the class label of each training instance. This affects parameter learning in your classifier.
There's a natural way to incorporate this: in Naive Bayes, for instance, when estimating the parameters of your model, instead of each word getting a count of one for the class to which the document belongs, it gets a fractional count equal to the document's probability of belonging to that class. Thus documents with a high probability of belonging to a class contribute more to that class's parameters. The situation is exactly equivalent to learning a mixture-of-multinomials model with EM, where the probabilities you have play the role of the membership/indicator variables for your instances.
Alternatively, if your classifier were a neural net with a softmax output, then instead of the target output being a vector with a single 1 and lots of zeros, the target output would simply be the probability vector you're supplied with.
I don't, unfortunately, know of any standard implementations that would allow you to incorporate these ideas.
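That said, the fractional-count idea is easy to hand-roll. Here is a sketch with made-up tweets and label probabilities that accumulates soft word and document counts per class; these soft counts then plug into the usual smoothed Naive Bayes estimates in place of the ordinary integer counts.

    from collections import Counter, defaultdict

    labels = ["hot", "wind"]
    # (tokenized tweet, probability per label) -- made-up data
    docs = [
        (["windy", "tough", "day"], {"hot": 0.21, "wind": 0.79}),
        (["scorching", "hot", "afternoon"], {"hot": 0.90, "wind": 0.10}),
    ]

    class_totals = defaultdict(float)              # soft document counts per class
    word_counts = {c: Counter() for c in labels}   # soft word counts per class

    for words, probs in docs:
        for c in labels:
            class_totals[c] += probs[c]
            for w in words:
                word_counts[c][w] += probs[c]

    print(dict(class_totals))
    print({c: dict(word_counts[c]) for c in labels})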
If you want an off-the-shelf solution, you could use a learner that supports multiclass classification and instance weights. Say you have k classes with probabilities p_1, ..., p_k. For each input instance, create k new training instances with identical features and labels 1, ..., k, and assign them weights p_1, ..., p_k respectively.
Vowpal Wabbit is one such learner that supports multiclass classification with instance weights.
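A minimal sketch of that expansion, using scikit-learn's sample_weight purely as a stand-in for any learner that accepts instance weights (the feature values and label probabilities are made up):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    X = np.array([[0.2, 0.9],                 # one row of features per original instance
                  [0.8, 0.1]])
    label_probs = np.array([[0.21, 0.79],     # P(hot), P(wind) for each instance
                            [0.90, 0.10]])

    # Duplicate each instance once per class and weight each copy by its probability.
    X_expanded = np.repeat(X, label_probs.shape[1], axis=0)
    y_expanded = np.tile([0, 1], X.shape[0])  # 0 = hot, 1 = wind
    weights = label_probs.ravel()

    clf = LogisticRegression().fit(X_expanded, y_expanded, sample_weight=weights)
    print(clf.predict_proba(X))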