Understanding Naive Bayes - machine-learning

I've been looking around and can't seem to find an answer to this question:
If I train Naive Bayes as a classifier on some data and then re-use that same training data as the test data, shouldn't I get 100% classification success? Thanks for reading!
Edit: It seems I've stimulated a discussion above my level of understanding. As such, I don't feel it is up to me to take the role of 'accepting' an answer. However, I am grateful for your input and will read all the answers.

Actually, despite being the accepted answer, flyingmeatball's answer is (at least partially) wrong in this particular case. It describes a related phenomenon, but clearly not the crucial one for the situation given.
What you describe is a case where you expect your model to have 100% training accuracy, and it does not. This has nothing to do with the data "not expressing the phenomenon enough" - that would explain a high generalization error, not a training one.
Less than 100% training accuracy might mean that the data itself is too noisy to model (which is what flyingmeatball suggests), but on the training set this is the case if and only if there are two exactly identical points with different labels. If that is not the case (and it probably is not), the actual "problem" is that the model you selected has some internal bias. In simple terms, think of it as assumptions about the data, or even constraints, which the model will not abandon even if the data clearly does not follow them. In particular, Naive Bayes makes two such assumptions (a sketch illustrating both follows below):
Features are independent, meaning that there is no correlation, no important link between the label and more than a single feature at a time. If your features are wind and temperature, Naive Bayes will assume that it can make good decisions based on temperature alone, for example assuming that "a good temperature is around 20 degrees", and the same for wind alone, for example "at most 10 km/h". It will fail to find a relation based on both values, like "it is important for the temperature minus the wind to be at least 30", or anything similar.
It assumes a particular distribution of values for each feature - usually multinomial or Gaussian. These are nice families of distributions, but many features do not follow them. For example, if your feature is "the time at which people buy at my grocery store" (say you treat it as a continuous variable, measured down to the microsecond), you will notice that you have two "peak hours", one in the morning and one in the evening. Naive Bayes will do a horrible job fitting a single Gaussian to this, which will peak around noon instead! Again, a wrong assumption leads to wrong decisions.
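A minimal sketch of both failure modes, assuming scikit-learn's GaussianNB (the data is invented for illustration):

import numpy as np
from sklearn.naive_bayes import GaussianNB

# Assumption 1 (independence): XOR-style data, where the label depends only on
# the *interaction* of the two features. Per feature, both classes look identical
# (mean 0.5, variance 0.25), so even the training accuracy is stuck at chance level.
X_xor = np.array([[0, 0], [0, 1], [1, 0], [1, 1]] * 25)
y_xor = np.logical_xor(X_xor[:, 0], X_xor[:, 1]).astype(int)
print(GaussianNB().fit(X_xor, y_xor).score(X_xor, y_xor))   # 0.5

# Assumption 2 (distribution): a bimodal "time of purchase" feature.
# A single Gaussian, as GaussianNB would fit per class, puts its mean at noon.
rng = np.random.RandomState(0)
hours = np.concatenate([rng.normal(8, 1, 500),     # morning peak
                        rng.normal(19, 1, 500)])   # evening peak
print(hours.mean())   # ~13.5 - a "peak" where almost nobody actually buys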
Why do we make such assumptions, then? For many reasons, but one of them is that we care about generalization, not training performance; consequently the assumptions are a way of preventing the model from overfitting too heavily, at the cost of "underfitting" the training set. They also help in dealing with noise, simplify optimization, and do lots of other wonderful things :-)
Hope this helps.

No. Not all of the variability in the data may be explained by the features you have selected. Imagine you are classifying whether it is a good day to play tennis. Your features are temperature, wind, and precipitation. Those may be good descriptors, but you didn't train on whether there is a parade in town! On parade days the tennis court is blocked off, so even though your features should do a good job of explaining the known data, there are outliers that the features don't capture.
Generally, there is randomness in data that you will not be able to capture 100%.
Updated per comments below
The question is whether training and testing on the same dataset will be 100% accurate, which I think we both agree it will not be (they didn't ask what the assumptions of NB were). Here is a sample dataset demonstrating the scenario above:
import pandas as pd
from sklearn.naive_bayes import GaussianNB

gnb = GaussianNB()
# Rows 1 and 5 are identical ([1, 1, 0]) but carry different labels (1 vs 0),
# so no classifier can get both right.
df = pd.DataFrame([[1, 1, 0], [1, 0, 0], [0, 0, 1], [1, 0, 1], [1, 1, 0]],
                  columns=['hot', 'windy', 'rainy'])
targets = [1, 1, 0, 0, 0]
preds = gnb.fit(df, targets).predict(df)
print(preds)
# [1 1 0 0 1]
Notice that the first case and the last case are identical, but the classifier misses the prediction for the last case. This is because the data at hand does not always perfectly determine the accompanying classification. There are many other assumptions in NB which could also describe cases where it fails (which you excellently pointed out below), but my goal was just to come up with a quick demonstration that they would hopefully understand and that would answer the question.

Related

Is a decision tree with a perfect attribute considered overfit?

I have a 6-dimensional training dataset containing a perfect numeric attribute that separates all the training examples: if TIME < 200 the example belongs to class1; if TIME >= 200 it belongs to class2. J48 creates a tree with only one level, with this attribute as the only node.
However, the test dataset does not follow this hypothesis and all the examples are misclassified. I'm having trouble figuring out whether this case is considered overfitting or not. I would say it is not, as the dataset is that simple, but as far as I understand the definition of overfitting, it implies a high fit to the training data, and that is what I have. Any help?
Usually a great training score with a bad testing score means overfitting. But this assumes the data is IID, and you are clearly violating that assumption: your training data is completely different from the testing data (there is a clear rule in the training data which has no meaning for the testing data). In other words, your train/test split is incorrect, or your whole problem does not satisfy the basic assumptions of statistical ML. Of course we often fit models without valid assumptions about the data; in your case the most natural approach is to drop the feature that violates the assumption the most - the one used to construct the node. This kind of "expert decision" should be made prior to building any classifier: you have to think about "what is different in the test scenario as compared to the training one" and remove whatever shows this difference - otherwise your data collection is heavily skewed, and statistical methods will fail.
Yes, it is overfitting. The first rule in creating a training set is to make it look as much like any other set as possible. Your training set is clearly different from any other: it has the answer embedded within it, while your test set doesn't. Any learning algorithm will likely find the correlation to the answer and use it, and, just like the J48 algorithm, will regard the other variables as noise. The software equivalent of Clever Hans.
You can overcome this either by removing the variable or by training on a set drawn randomly from the entire available data. However, since you know there is a subset with an embedded major hint, you should remove the hint (a sketch of both fixes follows at the end of this answer).
You're lucky. At times these hints can be quite subtle, and you won't discover them until you start applying the model to future data.
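A minimal sketch of those two fixes, assuming scikit-learn and pandas (the column names and values are invented):

import pandas as pd
from sklearn.model_selection import train_test_split

# 'TIME' stands in for the leaky attribute; the other columns are hypothetical.
df = pd.DataFrame({'TIME': [150, 180, 210, 260],
                   'f1':   [0.2, 0.5, 0.1, 0.9],
                   'label': [1, 1, 2, 2]})
X = df.drop(columns=['TIME', 'label'])   # fix 1: remove the embedded hint
y = df['label']
# fix 2: shuffle before splitting, so train and test come from the same pool
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0)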

Different performance by different ML classifiers, what can I deduce?

I have used an ML approach in my research using Python's scikit-learn. I found that SVM and logistic regression classifiers work best (e.g. 85% accuracy), decision trees work markedly worse (65%), and Naive Bayes works worse still (40%).
I will write up the conclusion stating the obvious - that some ML classifiers worked better than others by a large margin - but what else can I say about my learning task or data structure based on these observations?
Edit:
The dataset involved 500,000 rows and 15 features, but some of the features are various combinations of substrings of certain text, so it naturally expands to tens of thousands of columns as a sparse matrix. I am using people's names to predict some binary class (e.g. gender), and I feature-engineer a lot from the name entity, like the length of the name, the substrings of the name, etc.
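(For illustration only, a minimal sketch of how such substring features might be built with scikit-learn; the names and the choice of character n-grams are invented, not the actual pipeline:)

from sklearn.feature_extraction.text import CountVectorizer

names = ['Maria', 'John', 'Elena', 'Mark']   # made-up examples
genders = [1, 0, 1, 0]

# Character n-grams of length 2-3 play the role of "substrings of the name";
# this is how a handful of raw features can blow up into a wide sparse matrix.
vec = CountVectorizer(analyzer='char', ngram_range=(2, 3))
X = vec.fit_transform(names)
print(X.shape)   # (4, number of distinct substrings seen)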
I recommend you visit this awesome map on choosing the right estimator by the scikit-learn team: http://scikit-learn.org/stable/tutorial/machine_learning_map
As describing the specifics of your own case would be an enormous task (I totally understand why you didn't do it!), I encourage you to ask yourself several questions, and the map on 'choosing the right estimator' is a good start.
Literally, go to the 'start' node in the map and follow the path:
Is my number of samples > 50?
And so on. In the end you will land at some estimator; see whether your results match the map's recommendation (i.e. did I end up at an SVM, which gives me better results?). If so, go deeper into the documentation and ask yourself why that classifier performs better on text data, or whatever insight you get.
As I said, we don't know the specifics of your data, but you should be able to ask such questions yourself: what type of data do I have (text, binary, ...), how many samples, how many classes to predict, ... Ideally your data will give you some hints about the context of your problem, and therefore why some estimators perform better than others.
But yes, your question is really too broad to grasp in a single answer (and especially without knowing the type of problem you are dealing with). You could also check whether any of those approaches is more inclined to overfit, for example.
The list of recommendations could be endless, which is why I encourage you to start by defining the type of problem you are dealing with and your data (in addition to the number of samples: is it normalized? Is it sparse? Are you representing text as a sparse matrix? Are your inputs floats from 0.11 to 0.99?).
Anyway, if you want to share some specifics about your data, we might be able to answer more precisely. Hope this helped a little bit ;)

Predictive features with high presence in one class

I am doing a logistic regression to predict the outcome of a binary variable, say whether a journal paper gets accepted or not. The predictors (independent variables) are all the phrases used in these papers (unigrams, bigrams, trigrams). One of these phrases has a skewed presence in the 'accepted' class: including this phrase gives me a classifier with very high accuracy (more than 90%), while removing it drops accuracy to about 70%.
My more general (naive) machine learning question is:
Is it advisable to remove such skewed features when doing classification?
Is there a method to check skewed presence for every feature and then decide whether to keep it in the model or not?
If I understand correctly, you ask whether some feature should be removed because it is a good predictor (it makes your classifier work better). The answer is short and simple: do not remove it. In fact, the whole point is to find exactly such features.
The only reason to remove such a feature would be that the phenomenon occurs only in the training set and not in real data. In that case, though, you have bad data - data which does not represent the underlying density - and you should gather better data or "clean" the current data so that its characteristics are analogous to the "real" ones.
Based on your comments, it sounds like the feature in your documents that's highly predictive of the class is a near-tautology: "paper accepted on" correlates with accepted papers because at least some of the papers in your database were scraped from already-accepted papers and have been annotated by the authors as such.
To me, this sounds like a useless feature for predicting whether a paper will be accepted, because (I'd imagine) you're trying to predict paper acceptance before the actual acceptance has been issued! In that case, none of the papers you'd like to test your algorithm on will be annotated with "paper accepted on." So I'd remove it.
You also asked about how to determine whether a feature correlates strongly with one class. There are three things that come to mind for this problem.
First, you could just compute a basic frequency count for each feature in your dataset and compare those values across classes. This is probably not super informative, but it's easy.
Second, since you're using a log-linear model, you can train the model on your training dataset and then rank each feature by its weight in the logistic regression parameter vector. Features with a large positive weight are indicative of one class, while features with a large negative weight are strongly indicative of the other (see the sketch after this list).
Finally, just for the sake of completeness, I'll point out that you might also want to look into feature selection. There are many ways of selecting relevant features for a machine learning algorithm, but I think one of the most intuitive from your perspective might be greedy feature elimination. In this approach, you train a classifier using all N features, and measure its accuracy on a held-out validation set. Then you train N new models, each with N-1 features, such that each model eliminates one of the N features, and measure the resulting drop in accuracy. The feature with the biggest drop was probably strongly predictive of the class, while features whose removal makes no measurable difference can probably be omitted from your final model. As larsmans correctly points out in the comments below, this doesn't scale well at all, but it can be a useful method sometimes.
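A minimal sketch of the second approach (assuming a recent scikit-learn for get_feature_names_out; the corpus and labels are invented):

import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

docs = ['paper accepted on march first',
        'we propose a novel parsing method',
        'paper accepted on june revision',
        'results were not significant']
labels = [1, 0, 1, 0]   # 1 = accepted, 0 = rejected (invented)

vec = CountVectorizer(ngram_range=(1, 3))
X = vec.fit_transform(docs)
clf = LogisticRegression().fit(X, labels)

# Rank phrases by learned weight: large positive -> indicative of class 1,
# large negative -> indicative of class 0.
features = np.array(vec.get_feature_names_out())
order = np.argsort(clf.coef_[0])
print(features[order[-5:]])   # strongest 'accepted' indicators
print(features[order[:5]])    # strongest 'rejected' indicators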

Document Classification using Naive Bayes classifier

I am building a document classifier in Mahout using the simple Naive Bayes algorithm. Currently, 98% of the data (documents) I have is of class A and only 2% is of class B. My question is: since there is such a wide gap between the share of class A and class B documents, would the classifier still be able to train accurately?
What I'm thinking of doing is ignoring a whole bunch of class A documents and "manipulating" the dataset I have so that the composition of the documents isn't so lopsided. The dataset I'd end up with would then be 30% class B and 70% class A. But are there any repercussions of doing that I am not aware of?
A lot of this comes down to how good "accuracy" is as a measure of performance, and that depends on your problem. If misclassifying "A" as "B" is just as bad/OK as misclassifying "B" as "A", then there is little reason to do anything other than mark everything as "A", since you know that will reliably get you 98% accuracy (so long as that unbalanced distribution is representative of the true distribution).
Without knowing your problem (and whether accuracy is the measure you should use), the best answer I can give is "it depends on the dataset". It is possible you could get past 99% accuracy with standard Naive Bayes, though it may be unlikely. For Naive Bayes in particular, one thing you can do is disable the use of priors (the prior is essentially the proportion of each class). This has the effect of pretending that every class is equally likely to occur, even though the model parameters will have been learned from uneven amounts of data.
Your proposed solution is common practice and sometimes works well. Another practice is to create fake data for the smaller class (how depends on your data; for text documents I'm not aware of any particularly good way). Yet another practice is to increase the weights of the data points in the under-represented classes.
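The question is about Mahout, but as a rough sketch of two of these knobs - disabling the prior and re-weighting the rare class - in scikit-learn terms, on invented count data:

import numpy as np
from sklearn.naive_bayes import MultinomialNB

rng = np.random.RandomState(0)
X = rng.poisson(1.0, size=(100, 20))        # fake term-count matrix
y = np.array([0] * 98 + [1] * 2)            # 98% class A, 2% class B

# Knob 1: ignore the learned class priors (pretend both classes are equally likely).
nb_uniform = MultinomialNB(fit_prior=False).fit(X, y)

# Knob 2: up-weight the rare class instead of throwing away majority documents.
weights = np.where(y == 1, 49.0, 1.0)       # 98:2 ratio -> weight 49 on class B
nb_weighted = MultinomialNB().fit(X, y, sample_weight=weights)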
You can search for "imbalanced classification" and find a lot more information about these types of problems (they are one of the harder ones).
If accuracy is not actually a good measure for your problem, you can search for more information about "cost sensitive classification" which should be helpful.
You should not necessarily downsample class A to reduce its instances. Several methods are available for efficient learning from imbalanced datasets, such as majority undersampling (exactly what you proposed), minority oversampling, SMOTE, etc. Here is an empirical comparison of these methods: http://machinelearning.org/proceedings/icml2007/papers/62.pdf
Alternatively, you may define a custom cost matrix for the classifier. In other words, assuming B is the positive class, you may define cost(false positive) < cost(false negative). In this case, the classifier's output will be biased towards the positive class. Here is a very helpful tutorial: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.164.4418&rep=rep1&type=pdf
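One way to realize such a cost matrix with any probabilistic classifier is to move the decision threshold. A minimal sketch (the data and the costs are invented): predict B whenever the expected cost of calling it A exceeds the expected cost of calling it B.

import numpy as np
from sklearn.naive_bayes import MultinomialNB

rng = np.random.RandomState(0)
X = rng.poisson(1.0, size=(100, 20))   # fake term counts
y = np.array([0] * 98 + [1] * 2)       # B = positive class = 1

clf = MultinomialNB().fit(X, y)
cost_fp, cost_fn = 1.0, 10.0           # invented: a missed B is 10x worse
p_pos = clf.predict_proba(X)[:, 1]
# Predict positive when p * cost_FN > (1 - p) * cost_FP,
# instead of the default threshold p > 0.5.
preds = (p_pos * cost_fn > (1 - p_pos) * cost_fp).astype(int)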

Text categorization using Naive Bayes

I am working on a text categorization machine learning problem using Naive Bayes, with each word as a feature. I have been able to implement it and I am getting good accuracy.
Is it possible for me to use tuples of words as features?
For example, if there are two classes, politics and sports, the word government might appear in both of them. However, in politics I can have the tuple (government, democracy), whereas in sports I can have the tuple (government, sportsman). So if a new politics article comes in, the probability of the tuple (government, democracy) is higher than that of the tuple (government, sportsman).
I am asking because, by doing this while still considering single words as features too, am I violating the independence assumption of Naive Bayes?
Also, I am thinking of adding weights to features. For example, a 3-tuple feature will have less weight than a 4-tuple feature.
Theoretically, do these two approaches change the independence assumptions of the Naive Bayes classifier? Also, I have not started with the approach I mentioned yet, but will it improve accuracy? I think the accuracy might not improve, but the amount of training data required to reach the same accuracy should be smaller.
Even without adding bigrams, real documents already violate the independence assumption. Conditioned on a document containing Obama, President is much more likely to appear. Nonetheless, Naive Bayes still does a decent job at classification, even if the probability estimates it gives are hopelessly off. So I recommend you go ahead and add more complex features to your classifier and see whether they improve accuracy.
If you get the same accuracy with less data, that is basically equivalent to getting better accuracy with the same amount of data.
On the other hand, using simpler, more common features works better as you decrease the amount of data. If you try to fit too many parameters to too little data, you tend to overfit badly.
But the bottom line is to try it and see.
No, from a theoretical viewpoint you are not changing the independence assumption. You are simply creating a modified (or new) sample space. In general, once you start using higher n-grams as events in your sample space, data sparsity becomes a problem. I think using tuples will lead to the same issue: you will probably need more training data, not less. You will probably also have to give a little more thought to the type of smoothing you use - simple Laplace smoothing may not be ideal.
The most important point, I think, is this: whatever classifier you are using, the features are highly dependent on the domain (and sometimes even the dataset). For example, if you are classifying the sentiment of texts based on movie reviews, using only unigrams may seem counterintuitive, but they perform better than using only adjectives. On the other hand, for Twitter datasets a combination of unigrams and bigrams was found to be good, but higher n-grams were not useful. Based on such reports (ref. Pang and Lee, Opinion Mining and Sentiment Analysis), I think using longer tuples will show similar results, since, after all, tuples of words are simply points in a higher-dimensional space. The basic algorithm behaves the same way.
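To make the tuple idea concrete, here is a minimal scikit-learn sketch mixing unigram and contiguous-bigram features - the usual approximation of such word tuples - with Laplace smoothing; the documents are invented:

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

docs = ['the government passed a democracy bill',
        'the sportsman thanked government officials',
        'voters debate democracy and government',
        'the sportsman won the government cup']
labels = ['politics', 'sports', 'politics', 'sports']

# ngram_range=(1, 2) keeps single words as features *and* adds adjacent pairs,
# e.g. 'government democracy' vs 'government sportsman'; alpha=1.0 is Laplace smoothing.
model = make_pipeline(CountVectorizer(ngram_range=(1, 2)),
                      MultinomialNB(alpha=1.0))
model.fit(docs, labels)
print(model.predict(['government announces a democracy reform']))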
