I have run some prediction models, e.g. Logistic Regression, SVM, decision trees, etc., on a dataset. When I add more dimensions (predictors), the accuracy of all models drops. How can I interpret this?
Usually it means that the features you are adding are either unimportant or strongly correlated with features you already have. Your model may then pick up a "random" signal in the training set from these features and fail to apply it to the test data, because it is not a real pattern.
However, the interpretation of this kind of problem is very model dependent. Linear models do not behave the same way as decision trees (for example, they are more sensitive to correlated features), so it is surprising that they would all react the same way. Please provide more detail if you can.
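As a quick illustration of the effect, here is a sketch (scikit-learn on synthetic data; all sizes and names are arbitrary) that adds pure-noise predictors to a logistic regression and compares cross-validated accuracy:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic data: 5 genuinely informative predictors.
X, y = make_classification(n_samples=200, n_features=5, n_informative=5,
                           n_redundant=0, random_state=0)

# Add 50 pure-noise predictors on top of the informative ones.
rng = np.random.default_rng(0)
X_noisy = np.hstack([X, rng.normal(size=(200, 50))])

print(cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean())
print(cross_val_score(LogisticRegression(max_iter=1000), X_noisy, y, cv=5).mean())
```

The run with the extra noise columns will typically score noticeably lower, even though the informative features are unchanged.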
I'm studying the difference between GLM models (OLS, Logistic Regression, Zero-Inflated, etc.), which are deterministic, since we can infer the parameters exactly, and some CART-based models (Random Forest, LightGBM, CatBoost, etc.) that are based on stochastic prediction.
What I've heard is that for stochastic models we should split the data into train and test sets to avoid over-fitting, something that supposedly does not happen with deterministic models, because they use linear programming to find the best parameters.
I'd like to start some discussion about it.
My opinion is that this is true. Deterministic models are just equations being solved, so they should not over-fit the data at all, which sets them apart from stochastic models that rely on randomness to make predictions.
But what I found was every course saying to split every dataset, regardless of whether the model is deterministic or not.
There is confusion over multiple concepts in your question.
Should one use train/test set splits for deterministic models? If you are training a model for prediction, absolutely! The important thing to remember is that a prediction model needs to generalize to data other than the data used for training. This is evaluated using the test set. Even if a model is being learned simply as a means to explore the data, this is still recommended as a way to verify that one isn't just overfitting to the noise.
The second point of confusion is the idea that splitting into train and test sets avoids overfitting. This is not true per se. The separation is there so that one can use the test set to verify whether the model is overfitting. If the performance on the train and test sets differs dramatically, then the model is likely overfitting and needs to be simplified, regularized, or otherwise constrained.
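In practice the check is just a comparison of the two scores; a minimal sketch (scikit-learn, synthetic data, everything here is illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

model = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
print("train:", model.score(X_tr, y_tr))  # an unpruned tree is usually near 1.0 here
print("test: ", model.score(X_te, y_te))  # a large gap between the two signals overfitting
```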
The other point pertains to what constitutes a stochastic model. All of the CART models that you mention are actually deterministic in the sense that, once you train them, they always yield exactly the same output for the same input. The stochasticity you may have been referring to is that the training uses random initializations, which may result in quite different final models. If this is a concern (because of local optima, for example), then use multiple initializations (a.k.a. multiple restarts, or Monte Carlo runs) to resolve it.
Finally, you mentioned that deterministic models don't need this split because they cannot overfit. This is not true. Consider an SVM classifier with a Gaussian kernel of sufficiently small bandwidth. If solved to optimality, the training is deterministic and will most assuredly overfit the training data.
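To make that concrete, here is a sketch (scikit-learn, synthetic data) of a deterministic RBF SVM driven into overfitting by choosing a very small bandwidth, i.e. a very large gamma:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Very large gamma = very small kernel bandwidth; high C = almost no regularization.
svm = SVC(kernel="rbf", gamma=1000, C=1e6).fit(X_tr, y_tr)
print("train:", svm.score(X_tr, y_tr))  # essentially perfect on the training data
print("test: ", svm.score(X_te, y_te))  # typically far worse: a deterministic fit that overfits
```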
Normalization, e.g. z-scoring, is a common preprocessing step in machine learning.
I am analyzing a dataset and use ensemble methods like Random Forests or the XGBOOST framework.
Now I compare models using:
1. non-normalized features
2. z-scored features
Using cross-validation I observe in both cases that the training error decreases as the max_depth parameter increases.
In the first case the test error also decreases and saturates at a certain MAE.
For the z-scored features, however, the test error does not decrease at all.
In this question: https://datascience.stackexchange.com/questions/16225/would-you-recommend-feature-normalization-when-using-boosting-trees it was discussed that normalization is not necessary for tree-based methods. But the example above shows that it has a severe effect.
So I have a few questions regarding this:
Does it imply that overfitting with ensemble based methods is possible even when the test error decreases?
Should normalization like z-scoring always be common practice when working with ensemble methods?
Is it possible that normalization methods decrease the model performance?
Thanks!
It is not easy to see what is going on in the absence of any code or data.
Normalisation may or may not be helpful depending on the particular data and how the normalisation step is applied.
Tree-based methods ought to be robust enough to handle the raw data.
In your cross-validation, is your code doing the normalisation separately for each fold?
Doing a single normalisation prior to cross-validation may lead to significant leakage.
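One way to rule that out (assuming scikit-learn-style tooling; the estimator and data below are placeholders) is to put the scaler inside a pipeline so it is re-fitted on the training portion of each fold:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_regression(n_samples=500, n_features=20, noise=10, random_state=0)

# The scaler is fitted on the training portion of each fold only,
# so no statistics from the held-out fold leak into the model.
pipe = make_pipeline(StandardScaler(),
                     RandomForestRegressor(max_depth=4, random_state=0))
scores = cross_val_score(pipe, X, y, cv=5, scoring="neg_mean_absolute_error")
print("MAE:", -scores.mean())
```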
With very high values of depth you will have a much more complex model that will fit the training data well but will fail to generalise to new data.
I tend to prefer max depths from 2 to 5.
If I can't get a reasonable model I turn my efforts to feature engineering rather than trying to tweak the hyperparameters too much.
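As a sketch of what I mean by keeping the depth small and tuning it rather than letting the trees grow (the grid and estimator are just the ones I would start with):

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV

X, y = make_regression(n_samples=500, n_features=20, noise=10, random_state=0)

# Keep the trees shallow and let cross-validation pick the depth.
search = GridSearchCV(RandomForestRegressor(random_state=0),
                      param_grid={"max_depth": [2, 3, 4, 5]},
                      cv=5, scoring="neg_mean_absolute_error")
search.fit(X, y)
print(search.best_params_, "MAE:", -search.best_score_)
```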
I've been studying neural networks for a bit and recently learned about the dropout training algorithm. There are excellent papers out there to understand how it works, including the ones from the authors.
So I built a neural network with dropout training (it was fairly easy), but I'm a bit confused about how to perform model selection. From what I understand, it looks like dropout is a method to be used when training the final model obtained through model selection.
As for the test part, papers always talk about using the complete network with halved weights, but they do not mention how to use it in the training/validation part (at least the ones I read).
I was thinking about using the network without dropout for the model selection part. Say that this leads me to find that the net performs well with N neurons. Then, for the final training (the one I use to train the network for the test part), I use 2N neurons with dropout probability p = 0.5. That ensures I have exactly N neurons active on average, thus using the network at the right capacity most of the time.
Is this a correct approach?
By the way, I'm aware that dropout might not be the best choice with small datasets. The project I'm working on has academic purposes, so I don't really need the best model for the data, as long as I stick to good machine learning practice.
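In code, what I have in mind for the final model is roughly this (a PyTorch sketch; the layer sizes, N and p are only illustrative):

```python
import torch.nn as nn

N = 64  # the "right capacity" found during model selection (illustrative)

# Final model: 2N hidden units with p = 0.5, so about N units are active on average.
model = nn.Sequential(
    nn.Linear(20, 2 * N),  # 20 input features, chosen arbitrarily for the sketch
    nn.ReLU(),
    nn.Dropout(p=0.5),     # PyTorch uses inverted dropout: activations are scaled during
    nn.Linear(2 * N, 1),   # training, so no manual weight halving is needed at test time
)

model.train()  # dropout active while training
# ... training loop ...
model.eval()   # dropout disabled: the complete network is used for prediction
```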
First of all, model selection and the training of a particular model are completely different issues. For model selection, you usually need data that are completely independent of both the training set used to build the model and the test set used to estimate its performance. So if you're doing, for example, a cross-validation, you would need an inner cross-validation to do the model selection and an outer cross-validation to estimate the performance of the whole procedure on unseen data.
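A sketch of what that nested setup looks like in practice (scikit-learn is used only for illustration; the estimator and parameter grid are placeholders):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=20, random_state=0)

# Inner CV: chooses the hyperparameters (model selection).
inner = GridSearchCV(SVC(), param_grid={"C": [0.1, 1, 10]}, cv=3)

# Outer CV: estimates how well the whole selection procedure generalizes.
outer_scores = cross_val_score(inner, X, y, cv=5)
print(outer_scores.mean())
```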
To see why, consider the following thought experiment (shamelessly stolen from this paper). You have a model that makes a completely random prediction. It has a number of parameters that you can set, but they have no effect. If you try different parameter settings long enough, you'll eventually get a model that performs better than all the others simply because you're sampling from a random distribution. If you use the same data for all of these models, this is the model you will choose. If you have a separate test set, it will quickly tell you that there is no real effect, because the parameter setting that achieves good results during the model-building phase is not better on the separate set.
Now, back to neural networks with dropout. You didn't refer to any particular paper; I'm assuming that you mean Srivastava et al., "Dropout: A Simple Way to Prevent Neural Networks from Overfitting". I'm not an expert on the subject, but the method seems to me to be similar to what's used in random forests or bagging to mitigate the flaws an individual learner may exhibit by applying it repeatedly in slightly different contexts. If I understood the method correctly, what you essentially end up with is an average over several possible models, very similar to random forests.
This is a way to make an individual model better, but not for model selection. The dropout is a way of adjusting the learned weights for a single neural network model.
To do model selection on this, you would need to train and test neural networks with different parameters and then evaluate those on completely different sets of data, as described in the paper I've referenced above.
I am doing a logistic regression to predict the outcome of a binary variable, say whether a journal paper gets accepted or not. The predictors (independent variables) are all the phrases used in these papers (unigrams, bigrams, trigrams). One of these phrases has a skewed presence in the 'accepted' class. Including this phrase gives me a classifier with very high accuracy (more than 90%), while removing it drops the accuracy to about 70%.
My more general (naive) machine learning questions are:
Is it advisable to remove such skewed features when doing classification?
Is there a method to check skewed presence for every feature and then decide whether to keep it in the model or not?
If I understand correctly, you are asking whether some feature should be removed because it is a good predictor (it makes your classifier work better). The answer is short and simple: do not remove it. In fact, the whole point is to find exactly such features.
The only reason to remove such a feature would be that the phenomenon only occurs in the training set and not in real data. But in that case you have the wrong data, data which does not represent the underlying distribution, and you should gather better data or "clean" the current data so that it has characteristics analogous to the "real" data.
Based on your comments, it sounds like the feature in your documents that's highly predictive of the class is a near-tautology: "paper accepted on" correlates with accepted papers because at least some of the papers in your database were scraped from already-accepted papers and have been annotated by the authors as such.
To me, this sounds like a useless feature for trying to predict whether a paper will be accepted, because (I'd imagine) you're trying to predict paper acceptance before the actual acceptance has been issued! In such a case, none of the papers you'd like to test your algorithm on will be annotated with "paper accepted on." So, I'd remove it.
You also asked about how to determine whether a feature correlates strongly with one class. There are three things that come to mind for this problem.
First, you could just compute a basic frequency count for each feature in your dataset and compare those values across classes. This is probably not super informative, but it's easy.
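For instance, with a document-term matrix in a pandas DataFrame (the column layout below is entirely made up), the per-class frequencies are a one-liner:

```python
import pandas as pd

# Hypothetical layout: one row per paper, binary phrase indicators plus the label.
df = pd.DataFrame({"paper accepted on": [1, 1, 0, 0],
                   "novel method":      [1, 0, 1, 0],
                   "accepted":          [1, 1, 0, 0]})

freq = df.groupby("accepted").mean()              # per-class frequency of each phrase
print((freq.loc[1] - freq.loc[0]).sort_values())  # phrases most skewed towards the accepted class
```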
Second, since you're using a log-linear model, you can train your model on your training dataset, and then rank each feature in your model by its weight in the logistic regression parameter vector. Features with high positive weight are indicative of one class, while features with large negative weight are strongly indicative of the other.
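A sketch of that ranking with scikit-learn (the documents and labels are toy placeholders):

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

docs = ["paper accepted on march first", "we propose a novel method",
        "results were inconclusive overall"]
labels = [1, 1, 0]  # 1 = accepted, 0 = rejected (toy data)

vec = CountVectorizer(ngram_range=(1, 3))
X = vec.fit_transform(docs)
clf = LogisticRegression().fit(X, labels)

features = np.array(vec.get_feature_names_out())
order = np.argsort(clf.coef_.ravel())
print("most indicative of rejection: ", features[order[:5]])
print("most indicative of acceptance:", features[order[-5:]])
```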
Finally, just for the sake of completeness, I'll point out that you might also want to look into feature selection. There are many ways of selecting relevant features for a machine learning algorithm, but I think one of the most intuitive from your perspective might be greedy feature elimination. In such an approach, you train a classifier using all N features in your model, and measure the accuracy on some held-out validation set. Then, train N new models, each with N-1 features, such that each model eliminates one of the N features, and measure the resulting drop in accuracy. The feature with the biggest drop was probably strongly predictive of the class, while features that have no measurable difference can probably be omitted from your final model. As larsmans points out correctly in the comments below, this doesn't scale well at all, but it can be a useful method sometimes.
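And a deliberately naive sketch of the greedy elimination idea, refitting once per held-out feature on a validation split (synthetic data stands in for your document features):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=300, n_features=10, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

base = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).score(X_val, y_val)
for j in range(X.shape[1]):
    keep = [k for k in range(X.shape[1]) if k != j]
    acc = LogisticRegression(max_iter=1000).fit(X_tr[:, keep], y_tr).score(X_val[:, keep], y_val)
    print(f"without feature {j}: accuracy change {acc - base:+.3f}")  # big drop => strongly predictive
```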
I have 20 attributes and one target feature. All the attributes are binary (present or not present) and the target feature is multinomial (5 classes).
But for each instance, apart from the presence of some attributes, I also have information about how much effect (on a scale of 1-5) each present attribute had on the target feature.
How do I make use of this extra information to build a classification model that gives better predictions for the test instances?
Why not just use the effect scores as the feature values, instead of a binary presence indicator? You can code the lack of presence as a 0 on the continuous scale.
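A minimal sketch of that encoding (the array contents are made up):

```python
import numpy as np

presence = np.array([[1, 0, 1],
                     [0, 1, 1]])   # binary attributes (present / not present)
effect   = np.array([[3, 0, 5],
                     [0, 2, 1]])   # stated effect on a 1-5 scale, 0 where absent

X = presence * effect              # absent attributes stay 0 on the continuous scale
```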
EDIT:
The classifier you choose will learn optimal weights on the features during training to separate the classes... thus I don't believe there's anything better you can do if you do not have access to the weights at test time. Essentially, a linear classifier learns a rule of the form:
c_i = sgn(w . x_i)
You're saying you have access to weights, but without an example of what the data look like, and an explanation of where the weights come from, I'd have to say I don't see how you'd use them (or even why you'd want to: is standard classification with binary features not working well enough?)
This clearly depends on the actual algorithms that you are using.
For decision trees, the information is useless; they are designed to learn for themselves how much effect each attribute has.
Similarly, support vector machines will learn the best linear split, so any kind of manual weighting will disappear, since the SVM already learns this automatically.
However, if you are doing nearest-neighbour (NN) classification, just scale the attributes as desired, to emphasize differences in the influential attributes.
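If NN here means nearest-neighbour classification, such manual scaling could look roughly like this (a sketch; the data and the per-attribute scale are invented):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(100, 20)).astype(float)  # 20 binary attributes (toy data)
y = rng.integers(0, 5, size=100)                       # 5-class target (toy data)
scale = rng.integers(1, 6, size=20)                    # one effect score per attribute, 1-5 (invented)

knn = KNeighborsClassifier().fit(X * scale, y)  # influential attributes now dominate the distance
# At prediction time, apply the same scaling to the new instances as well.
```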
Sorry, you need to look at other algorithms yourself. There are just too many.
Use the knowledge as a prior over the feature weights. You can then compute the posterior estimate from the data and use that as the final model.
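As a sketch of what that could look like for a binary version of the problem (everything here is an assumption: a Gaussian prior whose means are derived from the stated effect scores, collapsed to one score per attribute, and a MAP estimate in place of a full posterior):

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_posterior(w, X, y, mu, tau2):
    z = X @ w
    nll = np.sum(np.logaddexp(0, z) - y * z)  # Bernoulli negative log-likelihood
    nlp = np.sum((w - mu) ** 2) / (2 * tau2)  # Gaussian prior centered at mu
    return nll + nlp

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 20)).astype(float)  # 20 binary attributes (toy data)
y = rng.integers(0, 2, size=200)                       # binary target (sketch; the real task is 5-class)
effect = rng.integers(1, 6, size=20)                   # one stated effect score per attribute, 1-5

mu = (effect - effect.mean()) / effect.std()           # prior means derived from the effect scores
res = minimize(neg_log_posterior, x0=np.zeros(20), args=(X, y, mu, 1.0))
w_map = res.x                                          # MAP weights combining prior knowledge and data
```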