Oversampled train set and test set - machine learning classification - machine-learning

Let's say that I have oversampled my training set after splitting, then selected the features of interest based on an analysis of the training set.
After this, do I use the oversampled training set together with the testing set to determine the classification performance (accuracy, precision, F1 measure, etc.), or do I use only the testing set?

(Not really a programming question but it's important enough to be clarified imho)
To measure performance reliably you must use the original test set, without any resampling.
This is one of the reasons why the train/test split should always be done first: the test set should be kept "fresh". Resampling the test set would be like cheating, because it makes the problem easier to solve.
Note: in general resampling rarely works, especially with text.
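A minimal sketch of that workflow in scikit-learn (the data, the estimator, and the class balance are placeholders; sklearn.utils.resample is just one way to oversample):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.utils import resample
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report

# Placeholder data: roughly 10% positives to mimic an imbalanced problem
rng = np.random.RandomState(0)
X = rng.randn(500, 10)
y = (rng.rand(500) < 0.1).astype(int)

# 1) Split FIRST so the test set stays untouched
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

# 2) Oversample the minority class in the TRAINING set only
minority = y_train == 1
X_min_up, y_min_up = resample(
    X_train[minority], y_train[minority],
    replace=True, n_samples=int((~minority).sum()), random_state=0)
X_train_bal = np.vstack([X_train[~minority], X_min_up])
y_train_bal = np.concatenate([y_train[~minority], y_min_up])

# 3) Fit on the oversampled training data
clf = LogisticRegression(max_iter=1000).fit(X_train_bal, y_train_bal)

# 4) Evaluate on the original, untouched test set
print(classification_report(y_test, clf.predict(X_test)))
```

The report at the end is computed purely on data the resampling never touched, and that is the number to report.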

Related

Model selection for classification with random train/test sets

I'm working with an extremely unbalanced and heterogeneous multiclass (K = 16) database for research, with a small N ~= 250. For some labels the database has a sufficient amount of examples for supervised machine learning, but for others I have almost none. I'm also not in a position to expand my database for a number of reasons.
As a first approach I divided my database into training (80%) and test (20%) sets in a stratified way. On top of that, I applied several classification algorithms that provide some results. I applied this procedure over 500 stratified train/test sets (as each stratified sampling takes individuals randomly within each stratum), hoping to select an algorithm (model) that performed acceptably.
Because of my database, the performance on the test set varies greatly depending on which specific examples end up in the training set. I'm dealing with runs with accuracy as high as 82% (high for my application) and runs as low as 40%. The median over all runs is around 67% accuracy.
When facing this situation, I'm unsure what the standard procedure is (if there is any) for selecting the best-performing model. My rationale is that the highest-accuracy model may generalize better because the specific examples selected for its training set are richer, so the test set is classified better. However, I'm fully aware of the possibility that the test set is composed of "simpler" cases that are easier to classify, or that the training set comprises all the hard-to-classify cases.
Is there any standard procedure for selecting the best-performing model, considering that the distribution of examples in my train/test sets causes the results to vary greatly? Am I making a conceptual mistake somewhere? Do practitioners usually select the best-performing model without any further exploration?
I don't like the idea of using the mean/median accuracy, as obviously some models generalize better than others, but I'm by no means an expert in the field.
Confusion matrix of the predicted label on the test set of one of the best cases:
Confusion matrix of the predicted label on the test set of one of the worst cases:
They both use the same algorithm and parameters.
Good Accuracy =/= Good Model
I first want to point out that good accuracy on your test set need not mean a good model in general! In your case this mainly has to do with your extremely skewed distribution of samples.
Especially when doing a stratified split with one class dominating the data, you will likely get good results by simply predicting this one class over and over again.
A good way to see if this is happening is to look at a confusion matrix of your predictions.
If many examples of the other classes end up being predicted as one dominant class, that is an indicator of a bad model. I would argue that in your case it will generally be very hard to find a good model unless you actively try to balance your classes more during training.
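For instance, a quick way to inspect this with scikit-learn (the label arrays below are placeholders standing in for your test labels and your model's predictions):

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Placeholder arrays standing in for your test labels and model predictions
y_test = np.array([0, 0, 0, 0, 1, 1, 2, 2])
y_pred = np.array([0, 0, 0, 0, 0, 1, 0, 2])

cm = confusion_matrix(y_test, y_pred)
print(cm)
# Rows are true classes, columns are predicted classes. A single column that
# collects counts from many different rows means many classes are being
# predicted as that one (dominant) class; that is the pattern to watch out for.
```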
Use the power of Ensembles
Another idea is indeed to use ensembling over multiple models (in your case resulting from different splits), since an ensemble is generally expected to generalize better.
Even if you sacrifice some accuracy on paper, I would bet that the confusion matrix of an ensemble is likely to look much better than that of a single "high accuracy" model. Especially if you disregard the models that perform extremely poorly (making sure, again, that the poor performance reflects an actually bad model and not just an unlucky split), I can see this generalizing very well.
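A rough sketch of such an ensemble, assuming you hold out one fixed test set and majority-vote over models trained on different stratified subsamples (the data, the estimator, and the number of splits are placeholders; with very small classes the stratified splitting may need adjusting):

```python
import numpy as np
from sklearn.model_selection import train_test_split, StratifiedShuffleSplit
from sklearn.tree import DecisionTreeClassifier

# Placeholder data; use your own X, y
rng = np.random.RandomState(0)
X = rng.randn(250, 6)
y = rng.randint(0, 4, size=250)

# Keep one fixed test set aside for the final comparison
X_dev, X_test, y_dev, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

# Train the same model on several stratified 80% subsamples of the remaining data
sss = StratifiedShuffleSplit(n_splits=15, test_size=0.2, random_state=0)
all_preds = []
for train_idx, _ in sss.split(X_dev, y_dev):
    model = DecisionTreeClassifier(random_state=0).fit(X_dev[train_idx], y_dev[train_idx])
    all_preds.append(model.predict(X_test))

# Majority vote over the individual models' predictions
all_preds = np.array(all_preds)  # shape (n_models, n_test)
vote = np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, all_preds)
print("ensemble accuracy:", (vote == y_test).mean())
```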
Try k-fold Cross-Validation
Another common technique is k-fold cross-validation. Instead of performing your evaluation on a single 80/20 split, you divide your data into k equally large sets and then always train on k-1 of them while evaluating on the remaining one. You then not only get a feeling for whether your split was reasonable (k-fold CV implementations, like the one from sklearn, usually report the results for every fold), but you also get an overall score averaged over all folds.
Note that 5-fold CV amounts to splitting the data into five 20% sets, so essentially what you are doing now, plus the shuffling.
CV is also a good way to deal with little training data, in settings where you have imbalanced classes, or where you generally want to make sure your model actually performs well.
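A minimal sketch with sklearn's cross_val_score and a stratified splitter (the data and the estimator are placeholders):

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Placeholder data; use your own X, y
rng = np.random.RandomState(0)
X = rng.randn(250, 6)
y = rng.randint(0, 4, size=250)

# Stratified 5-fold CV: each fold plays the role of the 20% test set once
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(DecisionTreeClassifier(random_state=0), X, y, cv=cv)

print("per-fold accuracy:", np.round(scores, 3))
print("mean / std:", scores.mean(), scores.std())
```

The spread of the per-fold scores gives you the same kind of variability information as the repeated random splits, at a fraction of the cost.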

In Machine Learning, is it okay to add the development set to the training set after development?

Usually we train our models on the training set, evaluate them on the development set, make some changes, train and evaluate again, etc. (the development phase), and in the end evaluate once on the test set.
Assume we have little training data. Then, it could make sense to use training AND development set after the development phase. One could estimate hyperparameters as usual and in the end (the final training) add the dev set to the training set, train the model with the previously estimated hyperparameters and evaluate it once on the test set.
Would this be "cheating" in any way? Do people do this, or do they usually leave out the dev set from any training?
I don't think it's cheating in any way. If it improves your model against real-world data and your unseen test data, it should be OK. There are reasons why a training/dev/test split is recommended, but if you have such a small training data set, I believe it's a valid strategy. In any case, it's hard to give a definitive answer without knowing more details, such as the nature of the data and the task you would like to accomplish. Another approach you might like to look at is data augmentation.
I'd recommend the following course which covers training/dev/test set distribution, among other things:
https://www.coursera.org/learn/machine-learning-projects
Once you have decided on the hyperparameters using the dev set, you can use train + dev to perform the training again. This method is used quite often.
For example, with the GridSearchCV method in sklearn, if you use refit=True, the training is performed again after the hyperparameter search is done. I.e. with cv=4 and refit=True, each candidate configuration is trained 4 times during the search, and the best configuration is then trained one more time on the complete training set.
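A small sketch of that behaviour (the estimator, the parameter grid, and the data are placeholders):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.svm import SVC

# Placeholder data; use your own X, y
X, y = make_classification(n_samples=300, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# cv=4 splits the training data into internal train/validation folds;
# refit=True retrains the best configuration on the whole training set at the end.
search = GridSearchCV(SVC(), param_grid={"C": [0.1, 1, 10]}, cv=4, refit=True)
search.fit(X_train, y_train)

# best_estimator_ has been refit on train + validation data;
# the untouched test set is used exactly once, at the very end.
print("best params:", search.best_params_)
print("test accuracy:", search.score(X_test, y_test))
```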

would we ever compute the cost J(θ) on the *test* set?

I'm pretty sure that the answer is no, but wanted to confirm...
When training a neural network or other learning algorithm, we will compute the cost function J(θ) as an expression of how well our algorithm fits the training data (higher values mean it fits the data less well). When training our algorithm, we generally expect to see J(θ) go down with each iteration of gradient descent.
But I'm just curious, would there ever be a value to computing J(θ) against our test data?
I think the answer is no, because since we only evaluate our test data once, we would only get one value of J(θ), and I think that it is meaningless except when compared with other values.
Your question touches on a very common ambiguity regarding the terminology: the one between the validation and the test sets (the Wikipedia entry and this Cross Validated post may be helpful in resolving this).
So, assuming that you indeed refer to the test set proper and not the validation one, then:
1. You are right in that this set is only used once, just at the end of the whole modeling process.
2. You are, in general, not right in assuming that we don't compute the cost J(θ) on this set.
Elaborating on (2): in fact, the only usefulness of the test set is exactly for evaluating our final model, in a set that has not been used at all in the various stages of the fitting process (notice that the validation set has been used indirectly, i.e. for model selection); and in order to evaluate it, we obviously have to compute the cost.
I think that a possible source of confusion is that you may have in mind only classification settings (although you don't specify this in your question); true, in this case, we are usually interested in the model performance regarding a business metric (e.g. accuracy), and not regarding the optimization cost J(θ) itself. But in regression settings it may very well be the case that the optimization cost and the business metric are one and the same thing (e.g. RMSE, MSE, MAE etc). And, as I hope is clear, in such settings computing the cost in the test set is by no means meaningless, despite the fact that we don't compare it with other values (it provides an "absolute" performance metric for our final model).
You may find these answers of mine useful regarding the distinction between loss & accuracy; quoting from them:
Loss and accuracy are different things; roughly speaking, the accuracy is what we are actually interested in from a business perspective, while the loss is the objective function that the learning algorithms (optimizers) are trying to minimize from a mathematical perspective. Even more roughly speaking, you can think of the loss as the "translation" of the business objective (accuracy) to the mathematical domain, a translation which is necessary in classification problems (in regression ones, usually the loss and the business objective are the same, or at least can be the same in principle, e.g. the RMSE)...
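As a small illustration of computing both on the test set (placeholder data and model; log loss stands in for the optimization cost J(θ) and accuracy for the business metric):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import log_loss, accuracy_score

# Placeholder data and model; use your own
X, y = make_classification(n_samples=400, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The optimization cost (here: log loss) evaluated once on the test set...
test_loss = log_loss(y_test, model.predict_proba(X_test))
# ...alongside the business metric (here: accuracy)
test_acc = accuracy_score(y_test, model.predict(X_test))

print(f"test loss: {test_loss:.4f}   test accuracy: {test_acc:.4f}")
```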

Is a decision tree with a perfect attribute considered overfitting?

I have a 6-dimensional training dataset where there is a perfect numeric attribute which separates all the training examples this way: if TIME < 200 then the example belongs to class1, if TIME >= 200 then the example belongs to class2. J48 creates a tree with only 1 level and this attribute as the only node.
However, the test dataset does not follow this hypothesis and all the examples are misclassified. I'm having trouble figuring out whether this case is considered overfitting or not. I would say it is not, as the dataset is that simple, but as far as I understood the definition of overfitting, it implies a high fit to the training data, and this is what I have. Any help?
Usually a great training score and bad test performance means overfitting. But this assumes the data are IID, and you are clearly violating this assumption: your training data is completely different from the testing data (there is a clear rule in the training data which has no meaning for the test data). In other words, your train/test split is incorrect, or your whole problem does not follow the basic assumptions of where to use statistical ML. Of course, we often fit models without valid assumptions about the data; in your case the most natural approach is to drop the feature that violates the assumption the most, i.e. the one used to construct the node. This kind of "expert decision" should be made prior to building any classifier: you have to think about what is different in the test scenario compared to the training one and remove the things that show this difference. Otherwise you have a heavy skew in your data collection, and statistical methods will fail.
Yes, it is overfitting. The first rule in creating a training set is to make it look as much like any other set as possible. Your training set is clearly different from any other: it has the answer embedded within it while your test set doesn't. Any learning algorithm will likely find the correlation to the answer and use it and, just like the J48 algorithm, will regard the other variables as noise. The software equivalent of Clever Hans.
You can overcome this by either removing the variable or by training on a set drawn randomly from the entire available set. However, since you know that there is a subset with an embedded major hint, you should remove the hint.
You're lucky. At times these hints can be quite subtle which you won't discover until you start applying the model to future data.
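A toy sketch of the first option, assuming the data sits in a pandas DataFrame with a 'label' column (only TIME comes from the question; every other column name and value is made up):

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Toy stand-in for the 6-dimensional dataset; only TIME is from the question,
# everything else (column names, values) is a placeholder
df = pd.DataFrame({
    "TIME":  [50, 120, 180, 210, 250, 300],
    "f1":    [1.0, 0.5, 0.3, 0.9, 0.2, 0.7],
    "f2":    [3, 1, 4, 1, 5, 9],
    "label": ["class1", "class1", "class1", "class2", "class2", "class2"],
})

# Remove the leaking attribute before fitting anything
X = df.drop(columns=["TIME", "label"])
y = df["label"]

# (The alternative is to pool all available data and draw the train/test split
# randomly, so neither side carries the artificial TIME rule on its own.)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.33, random_state=0)

clf = DecisionTreeClassifier().fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```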

cross validating a train set where the class variable has a different distribution than the actual population

(noob in ML, be patient)
I want to test the performance of my scikit-learn SVMLinear classifier. My train-set has a different class distribution than the actual population, but my test-set is representative and distributed like the actual population.
I noticed that there's a class-weight parameter, and I want to try giving my classifier the actual population distribution, and see if it helps it perform better.
However, as my train-set distribution is different, so will be my validation set, right? So should I expect an improvement on the validation set, or must I use my test-set to see the improvement? And if so, isn't it against the rules to calibrate using the test-set, which would lead to burning the test-set or overfitting?
I've thought about bootstrap re-sampling of my train-set: making it distribute the same as the general population, and only then training and validating my model. Is this a good solution?
Thanks!
It seems that you have some good ideas which are mostly worth trying. The answers mostly depend on the application and the size of your train/test set.
It is against the rules to calibrate based on the test set and then again use the whole test set for evaluation. However, if your test set is large enough, you can always divide it into two sets: a validation set and an actual test set. Then, your final evaluation will be based on a smaller test set, which might still be acceptable depending on the application.
For your training set, which you believe has a different class distribution than the actual population, there are several things worth trying. Usually the most acceptable approach is to use a classifier that can handle these differences (usually with fewer parameters to avoid over-fitting); a small code sketch of this follows the list. There is a whole topic of classification and regression on skewed datasets that you can look through. Other than the choice of classifier, provided that you did not derive the actual population distribution from your test set, the methods below might help too:
1- One of them is (as you said) bootstrap re-sampling, in case your training set is large enough for that.
2- Another approach can be generating more training samples by adding some noise to the current samples of the training set. For example if you are classifying images of birds, you can randomly make images darker or brighter, or randomly move them a few pixels to the sides or up and down (select values randomly in a small enough range). This way, you can add to the training set in a way to get the desired distribution.
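As a rough sketch of the "classifier that can handle these differences" idea, combined with the class_weight parameter mentioned in the question: one way to pass your belief about the population to LinearSVC is a class_weight dict. How you map population proportions to weights is a modeling choice, and the data and numbers below are placeholders.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

# Placeholder data: training distribution skewed relative to the "real" population
X, y = make_classification(n_samples=600, weights=[0.7, 0.3], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# class_weight can encode what you believe about the population; the
# 0.9 / 0.1 figures below are made-up placeholders, and 'balanced' is the
# built-in alternative that weights inversely to training-set frequencies.
clf = LinearSVC(class_weight={0: 0.9, 1: 0.1}, max_iter=5000)
clf.fit(X_train, y_train)

# Evaluate on the representative test set, which is kept out of any calibration
print("test accuracy:", clf.score(X_test, y_test))
```

If you do tune class_weight, tune it against a validation split carved out of the training data (or the split-off part of the test set suggested above), not against the final test set.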

Resources