Performance metrics after downsampling - machine-learning

I am working on a binary classification problem with an imbalanced dataset. I have decided to downsample the majority class and I’m wondering what the best approach is when calculating performance metrics on a model that has been trained on a downsampled dataset.
I noticed that the sklearn.metrics.precision_score and sklearn.metrics.recall_score functions have a sample_weight parameter. Is the purpose of this parameter to supply a weight for the downsampled class relative to the ratio by which I downsampled?
For example, if I had 1,000,000 samples for the negative class and I decided to downsample to 100,000, would I set sample_weight equal to 1,000,000 / 100,000 = 10 for the negative class?
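Assuming that is indeed the intent, a minimal sketch of passing per-sample weights to these metrics might look like this (the labels and predictions below are made up purely for illustration):

```python
import numpy as np
from sklearn.metrics import precision_score, recall_score

# Hypothetical test labels and predictions (0 = negative, 1 = positive)
y_true = np.array([0, 0, 0, 0, 1, 1, 0, 1, 0, 1])
y_pred = np.array([0, 1, 0, 0, 1, 0, 0, 1, 1, 1])

# Weight each negative sample by the downsampling ratio (1,000,000 / 100,000 = 10)
weights = np.where(y_true == 0, 10.0, 1.0)

prec = precision_score(y_true, y_pred, sample_weight=weights)
rec = recall_score(y_true, y_pred, sample_weight=weights)
```

Note that the weights only change precision here: recall is computed over the true positives, which all carry weight 1 in this scheme.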

Related

Naive Bayes accuracy increasing as the alpha value increases

I'm using naive Bayes for text classification and I have 100k records, of which 88k are positive-class records and 12k are negative-class records. I converted sentences to unigrams and bigrams using CountVectorizer, took 50 alpha values in the range [0, 10], and drew the plot.
With Laplace additive smoothing, if I keep increasing the alpha value, accuracy on the cross-validation dataset also keeps increasing. My question is: is this trend expected or not?
If you keep increasing the alpha value, the naive Bayes model will become biased towards the class with more records and turn into a dumb model (underfitting), so choosing a small alpha value is a good idea.
Because you have 88k positive points and 12k negative points, you have an imbalanced dataset.
You can add more negative points to balance the dataset: clone or replicate your negative points, which is called upsampling. After that, your dataset is balanced and you can apply naive Bayes with alpha and it will work properly. Your model will no longer be a dumb model; earlier your model was dumb, which is why increasing alpha increased your accuracy.
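A minimal sketch of this kind of upsampling with sklearn.utils.resample (the class sizes below are scaled-down stand-ins for the 88k/12k split, and the feature matrix is synthetic):

```python
import numpy as np
from sklearn.utils import resample

rng = np.random.RandomState(0)
# Hypothetical imbalanced data: 880 positive rows, 120 negative rows
X_pos = rng.rand(880, 5)
X_neg = rng.rand(120, 5)

# Upsample the negative class by sampling its rows with replacement
X_neg_up = resample(X_neg, replace=True, n_samples=880, random_state=0)

# Stack into a balanced 880/880 training set
X_balanced = np.vstack([X_pos, X_neg_up])
y_balanced = np.concatenate([np.ones(880), np.zeros(880)])
```

With replacement sampling, the upsampled negatives are exact duplicates of existing rows, which is precisely the "clone or replicate" idea described above.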

Scaling data with large range in Machine learning preprocessing

I am very new to Machine Learning.
I am trying to apply ML to data containing nearly 50 features. Some features range from 0 to 1,000,000 and some range from 0 to 100 or even less than that. When I use feature scaling with MinMaxScaler for the range (0, 1), I think the features with a large range get scaled down to very small values, and this might keep me from getting good predictions.
I would like to know if there is an efficient way to do scaling so that all the features are scaled appropriately.
I also tried StandardScaler but accuracy did not improve.
Also, can I use one scaling function for some features and another for the remaining features?
Thanks in advance!
Feature scaling, or data normalization, is an important part of training a machine learning model. It is generally recommended that the same scaling approach is used for all features. If the scales for different features are wildly different, this can have a knock-on effect on your ability to learn (depending on what methods you're using to do it). By ensuring standardized feature values, all features are implicitly weighted equally in their representation.
Two common methods of normalization are:
Rescaling (also known as min-max normalization):
x' = (x - min(x)) / (max(x) - min(x))
where x is an original value, and x' is the normalized value. For example, suppose that we have the students' weight data, and the students' weights span [160 pounds, 200 pounds]. To rescale this data, we first subtract 160 from each student's weight and divide the result by 40 (the difference between the maximum and minimum weights).
Mean normalization:
x' = (x - mean(x)) / (max(x) - min(x))
where x is an original value, and x' is the normalized value.
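Both formulas can be sketched in a few lines of NumPy, using the student-weight example above (the individual weight values are made up, but they span [160, 200] as in the text):

```python
import numpy as np

# Hypothetical student weights spanning [160, 200] pounds
x = np.array([160.0, 170.0, 185.0, 200.0])

# Min-max rescaling: x' = (x - min) / (max - min), mapped onto [0, 1]
x_minmax = (x - x.min()) / (x.max() - x.min())

# Mean normalization: x' = (x - mean) / (max - min), centered around 0
x_meannorm = (x - x.mean()) / (x.max() - x.min())
```

After min-max rescaling, the lightest student maps to 0 and the heaviest to 1; after mean normalization, the values sum to zero.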

Future-proofing feature scaling in machine learning?

I have a question about how feature scaling works after training a model.
Let's say a neural network model predicts the height of a tree by training on outside temperature.
The lowest outside temperature in my training data is 60F and the max is 100F. I scale the temperature between 0 and 1 and train the model. I save the model for future predictions. Two months later, I want to predict on some new data. But this time the min and max temperatures in my test data are -20F and 50F, respectively.
How does the trained model deal with this? The range I imposed the scaling on in the training set to generate my trained model does not match the test data range.
What would prevent me from hard-coding a range to scale to that I know the data will always be within, say from -50F to 130F? The problem I see here is if I have a model with many features. If I impose a different hard scale to each feature, using feature scaling is essentially pointless, is it not?
Different scales won't work. Your model trains on one scale and learns that scale; if you change the scale, your model will still assume the old scale and make badly shifted predictions.
Training again would overwrite what was learned before.
So, yes, hard-code your scaling (preferably applied directly to your data, not inside the model).
And for a quality result, train with all the data you can gather.
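In scikit-learn terms, the usual workflow matching this advice is to fit the scaler once on the training data and reuse that same fitted scaler for all later predictions, never refitting it on new data. A minimal sketch, using the temperatures from the question:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

# Training temperatures span [60, 100] F, as in the question
train_temps = np.array([[60.0], [75.0], [100.0]])

scaler = MinMaxScaler()
scaler.fit(train_temps)  # learns min=60, max=100 from the training data
# In practice you would persist the fitted scaler alongside the model,
# e.g. with joblib.dump(scaler, "scaler.pkl")

# Two months later: transform NEW data with the SAME fitted scaler.
# Out-of-range temperatures simply fall outside [0, 1].
new_temps = np.array([[-20.0], [50.0]])
scaled = scaler.transform(new_temps)  # -20F -> -2.0, 50F -> -0.25
```

Whether values outside [0, 1] are acceptable then depends on the model; this is exactly why the answer suggests hard-coding a wider known range (or gathering training data that covers it) up front.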

Accuracy below 50% for binary classification

I am training a Naive Bayes classifier on a balanced dataset with equal number of positive and negative examples. At test time I am computing the accuracy in turn for the examples in the positive class, negative class, and the subsets which make up the negative class. However, for some subsets of the negative class I get accuracy values lower than 50%, i.e. random guessing. I am wondering, should I worry about these results being much lower than 50%? Thank you!
It's impossible to fully answer this question without specific details, so here instead are guidelines:
If you have a dataset with equal amounts of classes, then random guessing would give you 50% accuracy on average.
To be clear, are you certain your model has learned something on your training dataset? Is the training dataset accuracy higher than 50%? If yes, continue reading.
Assuming that your validation set is large enough to rule out statistical fluctuations, then lower than 50% accuracy suggests that something is indeed wrong with your model.
For example, are your classes accidentally switched somehow in the validation dataset? Notice that if you instead used 1 - model.predict(x), your accuracy would be above 50%.
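This sanity check can be sketched with made-up labels: for binary 0/1 labels, the accuracy of the flipped predictions is exactly one minus the original accuracy, so accuracy far below 50% means the flipped model would be far above it.

```python
import numpy as np

# Hypothetical labels and predictions scoring well below chance
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([0, 1, 0, 1, 1, 0, 0, 1])

acc = (y_pred == y_true).mean()                # 0.25, well below 50%
acc_flipped = ((1 - y_pred) == y_true).mean()  # flipping gives 0.75
```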

How can I make Weka classify the smaller class, with a 2:1 class imbalance?

How can I make Weka predict the smaller class? I have a dataset where the positive class is 35% of the data and the negative class is 65%. I want Weka to predict the positive class, but in some cases the resulting model predicts all instances to be the negative class. Regardless, it keeps classifying the negative (larger) class. How can I force it to classify the positive (smaller) class?
One simple solution is to adjust your training set to be more balanced (50% positive, 50% negative) to encourage classification for both cases. I would guess that more of your cases are negative in the problem space, and therefore you would need to find some way to ensure that the negative cases still represent the problem well.
Since the ratio of positive to negative is 1:2, you could also try duplicating the positive cases in the training set to make it 2:2 and see how that goes.
Use stratified sampling (e.g. train on a 50%/50% sample) or class weights/class priors. It would help greatly if you told us which specific classifier you are using; Weka seems to have at least 50.
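The class-weight idea can be sketched in scikit-learn terms (an assumption on my part, since the question is about Weka, but the concept carries over; the data below is synthetic):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.RandomState(0)
# Synthetic data with roughly the 35%/65% imbalance from the question
X = rng.randn(1000, 4)
y = (rng.rand(1000) < 0.35).astype(int)  # ~35% positive

# Penalize misclassifying the positive (minority) class twice as much
clf = LogisticRegression(class_weight={0: 1, 1: 2})
clf.fit(X, y)

# "balanced" infers weights inversely proportional to class frequencies
clf_bal = LogisticRegression(class_weight="balanced")
clf_bal.fit(X, y)
```

This is the same trade-off the cost-matrix answer below the fold expresses in Weka: errors on the minority class are made more expensive than errors on the majority class.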
Is the penalty for Type I errors equal to the penalty for Type II errors?
This is a special case of the receiver operating characteristic (ROC) curve.
If the penalties are not equal, experiment with the cutoff value and the AUC.
You probably also want to read the sister site CrossValidated for statistics.
Use CostSensitiveClassifier, which is available under the "meta" classifiers.
You will need to set its "classifier" to your J48 and, importantly, change the cost matrix to something like [(0, 1), (2, 0)]. This tells J48 that misclassification of a positive instance is twice as costly as misclassification of a negative instance. Of course, you should adjust the cost matrix according to your business values.
