I am developing a model using Random Forest in R. The data has 2000 obs x 20 features. The target class that I am trying to classify has 6 levels. All the variables are categorical in nature.
The target is skewed towards one class, which constitutes over 65% of the observations. The remaining 35% is distributed amongst the other target classes. The distribution is as follows:
Class A: 0.660185185
Class B: 0.002314815
Class C: 0.0027777
Class D: 0.0722222
Class E: 0.181944444
Class F: 0.013425926
Class G: 0.067129630
I am trying to use ROSE or SMOTE to balance the data set, but I am getting an error that they only work on binary classification problems.
Is there a library available in R to balance multiclass data sets? Right now the accuracy of the model is quite low (around 64%). I am hoping that balancing the data set might improve the accuracy.
Any help in this matter will be appreciated.
cheers
-Nitin
Related
I am working on a classification problem with very imbalanced classes. I have 3 classes in my dataset: classes 0, 1 and 2. Class 0 is 11% of the training set, class 1 is 13% and class 2 is 75%.
I used a random forest classifier and got 76% accuracy. But I discovered that 93% of this accuracy comes from class 2 (the majority class). Here is the crosstab I got.
The results I would like to have:
fewer false negatives for classes 0 and 1, and/or fewer false positives for classes 0 and 1
What I found on the internet to solve the problem, and what I've tried:
using class_weight='balanced' or a customized class_weight (1/11% for class 0, 1/13% for class 1, 1/75% for class 2), but it doesn't change anything (the accuracy and the crosstab are still the same). Do you have an interpretation/explanation of this?
since I know accuracy is not the best metric in this context, I used other metrics: precision_macro, precision_weighted, f1_macro and f1_weighted, and I implemented the area under the precision vs. recall curve for each class and used the average as a metric.
Here's my code (feedback welcome):
from sklearn.preprocessing import label_binarize
from sklearn.metrics import average_precision_score, make_scorer

def pr_auc_score(y_true, y_pred):
    # Binarize the labels so average_precision_score handles the multiclass case
    y = label_binarize(y_true, classes=[0, 1, 2])
    return average_precision_score(y, y_pred)

pr_auc = make_scorer(pr_auc_score, greater_is_better=True, needs_proba=True)
and here's a plot of the precision vs recall curves.
Alas, for all these metrics, the crosstab remains the same; they seem to have no effect.
I also tuned the parameters of boosting algorithms (XGBoost and AdaBoost), with accuracy as the metric, and again the results did not improve. I don't understand this, because boosting algorithms are supposed to handle imbalanced data.
Finally, I used another model (BalancedRandomForestClassifier), with accuracy as the metric. The results are good, as we can see in this crosstab. I am happy to have such results, but I notice that when I change the metric for this model, there is again no change in the results...
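(For reference, a minimal sketch of how that model is typically set up, assuming the imbalanced-learn package and the same X_train/y_train used for the other models:)

from imblearn.ensemble import BalancedRandomForestClassifier

# Each tree is grown on a bootstrap sample that is randomly under-sampled
# so that all classes are represented roughly equally
brf = BalancedRandomForestClassifier(n_estimators=200, random_state=0)
brf.fit(X_train, y_train)
pred = brf.predict(X_test)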
So I'm really interested in knowing why using class_weight, changing the metric, or using boosting algorithms doesn't lead to better results...
As you have figured out, you have encountered the "accuracy paradox":
Say you have a classifier with an accuracy of 98%; that would be amazing, right? It might be, but if your data consists of 98% class 0 and 2% class 1, you can obtain 98% accuracy simply by assigning all values to class 0, which is indeed a bad classifier.
So, what should we do? We need a measure which is invariant to the distribution of the data - enter ROC curves.
ROC curves are invariant to the distribution of the data and are thus a great tool to visualize classification performance for a classifier whether or not the data is imbalanced. But they only work for a two-class problem (you can extend them to multiclass by creating a one-vs-rest or one-vs-one ROC curve).
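As an illustration, a one-vs-rest ROC-AUC can be computed in a few lines (a sketch assuming scikit-learn, a fitted classifier clf with predict_proba, and hold-out data X_test/y_test):

from sklearn.metrics import roc_auc_score

# Probability estimates for each of the three classes, shape (n_samples, 3)
proba = clf.predict_proba(X_test)

# One-vs-rest ROC-AUC, macro-averaged so each class counts equally,
# regardless of how many samples it has
auc_ovr = roc_auc_score(y_test, proba, multi_class='ovr', average='macro')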
The F-score might be a bit more "tricky" to use than the ROC-AUC, since it is a trade-off between precision and recall and you need to set the beta variable (which is often 1, giving the F1 score).
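For example, a per-class or macro-averaged F-beta score looks like this (a sketch assuming scikit-learn and hard predictions y_pred):

from sklearn.metrics import fbeta_score

# beta > 1 weighs recall more than precision; beta = 1 gives the usual F1 score
f2_per_class = fbeta_score(y_test, y_pred, beta=2, average=None)    # one score per class
f2_macro = fbeta_score(y_test, y_pred, beta=2, average='macro')     # unweighted mean over classes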
You write: "fewer false negatives for class 0 and 1 OR/AND fewer false positives for class 0 and 1". Remember that all algorithms work by either minimizing or maximizing something - often we minimize a loss function of some sort. For a random forest, let's say we want to minimize the following function L:
L = (w0 + w1 + w2)/n
where wi is the number of samples of class i classified as not class i, i.e. if w0 = 13 we have misclassified 13 samples from class 0, and n is the total number of samples.
It is clear that when class 0 makes up most of the data, an easy way to get a small L is to classify most of the samples as class 0. We can overcome this by instead adding a weight to each class, e.g.
L = (b0*w0 + b1*w1 + b2*w2)/n
As an example, say b0 = 1, b1 = 5, b2 = 10. Now you can see that we cannot just assign most of the data to class 0 without being punished by the weights, i.e. we are much more conservative about assigning samples to class 0, since assigning a class-1 sample to class 0 now gives us 5 times as much loss as before. This is exactly how the weights in (most of) the classifiers work: they assign a penalty/weight to each class, often inversely proportional to its ratio (i.e. if class 0 makes up 80% and class 1 makes up 20% of the data, then b0 = 1 and b1 = 4), but you can often specify the weights yourself. If you find that the classifier still generates too many false negatives for a class, then increase the penalty for that class.
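In scikit-learn this is exactly what the class_weight argument does; a minimal sketch with the made-up weights from above, assuming training arrays X_train/y_train:

from sklearn.ensemble import RandomForestClassifier

# Misclassifying a class-1 sample now costs 5x as much as a class-0 sample,
# and a class-2 sample 10x as much (the b0, b1, b2 from the loss above)
clf = RandomForestClassifier(n_estimators=200,
                             class_weight={0: 1, 1: 5, 2: 10},
                             random_state=0)
clf.fit(X_train, y_train)

# class_weight='balanced' derives such weights automatically,
# inversely proportional to each class's frequency in y_train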
Unfortunately "there is no such thing as a free lunch", i.e. which metric to use is a problem-, data- and usage-specific choice.
On a side note - "random forest" might actually be bad by design when you don't have much data, due to how the splits are calculated (let me know if you want to know why; it's rather easy to see when using e.g. Gini as the splitting criterion). Since you have only provided us with the ratio for each class and not the actual counts, I cannot tell.
I need to build a classification model for protein sequences using machine learning techniques. Each observation can be classified as either a 0 or a 1. However, I noticed that my training set contains a total of 170,000 observations, of which only 5,000 are labeled as 1. Therefore, I wish to downsample the number of observations labeled as 0 to 5,000.
One of the features I am currently using in the model is the length of the sequence. How can I downsample the data for class 0 while making sure the distribution of length_sequence remains similar to the one in class 1?
Here is the histogram of length_sequence for class 1:
Here is the histogram of length_sequence for class 0:
You can see that in both cases the lengths go from 2 to 255 characters. However, class 0 has many more observations, and they also tend to be significantly longer than the ones seen in class 1.
How can I downsample class 0 and make the new histogram look similar to the one for class 1?
I am trying to do stratified downsampling with scikit-learn, but I'm stuck.
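One possible approach, sketched here under the assumption that the data sits in a pandas DataFrame df with a length_sequence column and a 0/1 label column (the bin count of 20 is arbitrary), is to bin the lengths on the class-1 distribution and then draw the same number of class-0 rows per bin:

import numpy as np
import pandas as pd

def downsample_matching(df, feature="length_sequence", label="label",
                        n_bins=20, seed=0):
    """Downsample class 0 so that the feature roughly follows the class-1
    distribution: bin the feature, then draw from class 0 in each bin
    as many rows as class 1 has there."""
    rng = np.random.default_rng(seed)
    pos = df[df[label] == 1]
    neg = df[df[label] == 0]

    # Bin edges taken from the class-1 values
    edges = np.histogram_bin_edges(pos[feature], bins=n_bins)
    pos_bins = np.digitize(pos[feature], edges)
    neg_bins = np.digitize(neg[feature], edges)

    keep_idx = []
    for b, target in zip(*np.unique(pos_bins, return_counts=True)):
        candidates = neg.index[neg_bins == b]
        take = min(target, len(candidates))
        keep_idx.extend(rng.choice(candidates, size=take, replace=False))

    return pd.concat([pos, neg.loc[keep_idx]])

The result keeps all 5,000 class-1 rows plus roughly 5,000 class-0 rows whose length histogram mirrors that of class 1; bins where class 0 has too few candidates simply keep everything they have.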
I have a classification problem with 10 features and I have to predict 1 or 0. When I train an SVC model with a train-test split, all the predicted values for the test portion of the data come out as 0. The data has the following 0/1 counts:
0: 1875
1: 1463
The code to train the model is given below:
from sklearn.svm import SVC
model = SVC()
model.fit(X_train, y_train)
pred= model.predict(X_test)
from sklearn.metrics import accuracy_score
accuracy_score(y_test, pred)
Why does it predict 0 for all the cases?
The model predicts the more frequent class, even though the dataset is not very imbalanced. It is very likely that the class cannot be predicted from the features as they are right now.
You may try normalizing the features.
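A minimal sketch of that suggestion (assuming scikit-learn and the X_train/y_train/X_test from your code; an RBF-kernel SVC is quite sensitive to feature scales):

from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Standardize each feature to zero mean and unit variance before the SVM
model = make_pipeline(StandardScaler(), SVC())
model.fit(X_train, y_train)
pred = model.predict(X_test)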
Another thing you might want to try is to have a look at how correlated the features are with each other. Having highly correlated features might also prevent the model from converging.
Also, you might have chosen the wrong features.
For a classification problem, it is always good to run a dummy classifier as a starting point. This will give you an idea of how good your model can be.
You can use this code:
from sklearn.dummy import DummyClassifier
from sklearn.metrics import accuracy_score

# Baseline that always predicts the most frequent class seen during training
dummy_classifier = DummyClassifier(strategy="most_frequent")
dummy_classifier.fit(X_train, y_train)
pred_dum = dummy_classifier.predict(X_test)
accuracy_score(y_test, pred_dum)
This will give you the accuracy you get if you always predict the most frequent class. If this is, for example, 100%, it would mean that you only have one class in your dataset; 80% means that 80% of your data belongs to one class.
As a first step you can adjust your SVC:
model = SVC(C=1.0, kernel='rbf', random_state=42)
C: float, optional (default=1.0). Penalty parameter C of the error term.
kernel: Specifies the kernel type to be used in the algorithm. It must be one of 'linear', 'poly', 'rbf'.
This can give you a starting point.
On top of that, you should also run a prediction on your training data, to compare and see whether you are over- or underfitting.
trainpred = model.predict(X_train)
accuracy_score(y_train, trainpred)
While using Keras, particularly for a U-net, I am only aware of specifying the model parameters in the following manner:
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=[mean_iou])
Now I can set the loss to whatever I define it to be. However, this loss function is applied evenly to all classes. How do I make it so that mispredictions for certain classes are weighted more heavily than others?
For example, let's say I have the following classes in each image.
Class A, B, and C. Now, class A and B account for about 45% of the entire image and class C only accounts for about 10% of the entire image. However, I care much more about having high prediction for class C.
In this situation, the loss functions don't do such a good job, since the class imbalance absorbs the loss of class C. Hence, I would like to figure out a way to weight the loss of one class higher than the others.
I am also open to other suggestions to solving this problem - like for instance, perhaps having two separate networks?
EDIT: Here is a follow up to this question that will be required to implement the answer that has been accepted by this post.
You can assign weights to each class manually; in Keras the class_weight mapping is passed to fit rather than compile. For example:
class_weight = {0: 0.2, 1: 0.3, 2: 0.25, 3: 0.25}
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=[mean_iou])
model.fit(X_train, y_train, class_weight=class_weight)  # X_train/y_train stand in for your images and masks
or you can use this scikit library function
There are also many examples on the web; did none of them work for you?
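For a segmentation network where the weighting has to act per pixel, a common alternative is a custom weighted loss. Here is a minimal sketch, assuming one-hot encoded masks, a softmax output layer and the mean_iou metric from the question; the weight values are arbitrary placeholders:

from tensorflow.keras import backend as K

def weighted_categorical_crossentropy(weights):
    # Categorical crossentropy with a fixed weight per class
    w = K.constant(weights)
    def loss(y_true, y_pred):
        # Clip predictions to avoid log(0)
        y_pred = K.clip(y_pred, K.epsilon(), 1.0 - K.epsilon())
        # Per-pixel crossentropy, scaled class-wise before summing
        return -K.sum(y_true * K.log(y_pred) * w, axis=-1)
    return loss

# Weigh errors on class C (index 2) ten times more than on classes A and B
model.compile(optimizer='adam',
              loss=weighted_categorical_crossentropy([1.0, 1.0, 10.0]),
              metrics=[mean_iou])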
Let's assume that we have a few data points that can be used as the training set. Each row consists of, say, 4 columns (features) that take boolean values. The 5th column expresses the class and also takes boolean values. Here is an example (the values are almost random):
1,1,1,0,1
0,1,1,0,1
1,1,0,0,1
0,0,0,0,0
1,0,0,1,0
0,0,0,0,0
Now, what I want to do is build a model such that for any given input (new line) the system does not return the class itself (as in a regular classification problem) but instead the probability that this particular input belongs to class 0 or class 1. How can I do that? What's more, how can I generate a confidence interval or an error rate associated with that computation?
Not all classification algorithms return probabilities, because not all of them have an underlying probabilistic model. For example, a classification tree is just a set of rules that you follow to assign each new input to a particular class.
An example of a classification algorithm that does have an underlying probabilistic model is logistic regression. In this algorithm, the probability that a particular input x belongs to the positive class is
prob = 1 / (1 + exp( -theta * x ))
where theta is a vector of coefficients with the same number of dimensions as x. Generally to move from probabilities to classifications, you simply threshold, e.g.
if prob < 0.5
return 0;
else
return 1;
end
Other classification algorithms may have probabilistic interpretations, for example random forests are essentially a voting algorithm with multiple classification trees. If 80% of the trees vote for class 1 and 20% vote for class 2, then you could output an 80% probability of being in class 1. But this is a side effect of how the model works, rather than an explicit underlying probability model.
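For illustration, here is a minimal sketch (assuming scikit-learn) that fits both kinds of model on the toy rows above and returns class probabilities via predict_proba instead of hard labels:

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

# The toy rows from the question: first four columns are features, last is the class
data = np.array([
    [1, 1, 1, 0, 1],
    [0, 1, 1, 0, 1],
    [1, 1, 0, 0, 1],
    [0, 0, 0, 0, 0],
    [1, 0, 0, 1, 0],
    [0, 0, 0, 0, 0],
])
X, y = data[:, :4], data[:, 4]

# Logistic regression: probabilities come from the model 1 / (1 + exp(-theta * x))
log_reg = LogisticRegression().fit(X, y)
print(log_reg.predict_proba([[1, 0, 1, 0]]))   # [[P(class 0), P(class 1)]]

# Random forest: the "probability" is roughly the fraction of trees voting for each class
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(forest.predict_proba([[1, 0, 1, 0]]))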