Why is F1 measure effective for evaluating multiclass classifiers?

I was looking for a good error metric for multiclass classifiers, and many people say that F1 measure is usually used.
But given that predictions of multiclass classifiers are one-hot vectors, doesn't it mean there are no true positives when the prediction is wrong?
What I mean is: when the prediction is correct, e.g. prediction [0, 0, 1, 0] against ground truth [0, 0, 1, 0], every element is a true negative except for the single '1', which is a true positive, so the precision is just 1.
And when the prediction is incorrect, e.g. prediction [0, 1, 0, 0] against ground truth [0, 0, 1, 0], there are no true positives, so the precision is 0.
I would understand that F1 is a powerful metric for multilabel classification, since there can be more than one '1' in the vector, but applying F1 to multiclass classification seems a bit weird to me. Isn't it the same as just accuracy?
Or does it mean that F1 score per class should be used?

I'd suggest having a look at Wikipedia, in particular the section "Extension to multi-class classification".
A good explanation of how to apply F1 to multiclass classifiers can be found on Coursera.
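In short, F1 is computed per class and then averaged across classes. A minimal sketch with scikit-learn (the labels below are made up) shows the common averaging modes; note that micro-averaged F1 on a single-label multiclass problem reduces to plain accuracy, while macro-averaged F1 does not:

from sklearn.metrics import f1_score

y_true = [0, 1, 2, 2, 1, 0]
y_pred = [0, 2, 2, 2, 0, 0]

print(f1_score(y_true, y_pred, average=None))     # one F1 score per class
print(f1_score(y_true, y_pred, average="macro"))  # unweighted mean of per-class F1
print(f1_score(y_true, y_pred, average="micro"))  # global TP/FP/FN; equals accuracy here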

Related

Is the loss_function='MultiClass' in catboost the same as log loss if I am doing a multiclass classification problem?

I am making a multiclass prediction model using catboost. The final solution should have minimum Logloss error, but Logloss is not present in catboost; they have something called 'MultiClass' as the loss function. Are they both the same? If not, how can I measure the accuracy of the catboost model in terms of Logloss?
Are they both the same? Effectively, yes...
The catboost documentation describes the calculation of the 'MultiClass' loss as what is generally considered Multinomial/Multiclass Cross-Entropy Loss. That is, effectively, a log softmax applied to the classifier output 'a' to produce values that can be interpreted as probabilities, followed by the Negative Log-Likelihood Loss (NLLLoss), wiki1 & wiki2.
Their documentation also describes the calculation of 'Logloss', which is again NLLLoss, but applied to 'p', which they describe as the result of applying the sigmoid function to the classifier output. Since NLLLoss is reworked for the binary problem, only a single class probability is calculated, using 'p' and '1-p' for the two classes. In this special (binary) case, sigmoid and softmax are equivalent.
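To make the 'MultiClass' calculation concrete, here is a minimal numpy sketch of the quantity being described (an illustration, not catboost's actual implementation):

import numpy as np

def multiclass_logloss(logits, target):
    # Softmax over the raw classifier outputs 'a' (stabilized by
    # subtracting the max), then the negative log-likelihood of the
    # true class index -- multinomial cross-entropy for one sample.
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return -np.log(probs[target])

print(multiclass_logloss(np.array([0.5, 1.2, 2.1]), target=2))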
How can I measure the accuracy of the catboost model in terms of Logloss?
They describe a method to produce the desired metrics on given data.
Be careful not to confuse the loss/objective function ('loss_function') with the evaluation metric ('eval_metric'); in this instance, however, the same function can be used for both, as listed in their supported metrics.
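For example, a minimal sketch of how the two parameters are passed (assuming the standard CatBoostClassifier constructor; the iteration count is made up):

from catboost import CatBoostClassifier

model = CatBoostClassifier(
    loss_function="MultiClass",  # objective optimized during training
    eval_metric="MultiClass",    # metric reported on the evaluation set
    iterations=200,              # made-up value for illustration
)
# model.fit(X_train, y_train, eval_set=(X_val, y_val))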
Hope this helps!
Log loss is not a loss function but a metric to measure the performance of a classification model where the prediction is a probability value between 0 and 1.
Learn more here.

How do I identify which evaluation metric I should use for a classification problem in machine learning?

Which evaluation metric should I use for a classification problem? On what factors should I decide?
1. Accuracy
2. F1 Score
3. AUC ROC Score
4. Log Loss
Accuracy is a great metric when you are working with a balanced dataset. It's the number of correct predictions over the total number of predictions.
F1 Score is a great metric when you want to maximize both the precision and the recall of the prediction; it's also well suited to imbalanced datasets.
AUC ROC Score measures how well the model ranks positive examples above negative ones across all classification thresholds. I really like using this evaluation metric; it works well for both balanced and imbalanced datasets.
Log Loss is the logarithmic loss of the prediction, based on the cross-entropy between the predicted probabilities and the true labels. I have never used this metric myself.
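For reference, all four can be computed with scikit-learn; a minimal sketch with made-up binary labels and probabilities:

from sklearn.metrics import accuracy_score, f1_score, roc_auc_score, log_loss

y_true = [0, 1, 1, 0, 1]
y_prob = [0.2, 0.9, 0.6, 0.4, 0.3]               # predicted probability of class 1
y_pred = [1 if p >= 0.5 else 0 for p in y_prob]  # hard labels at a 0.5 threshold

print(accuracy_score(y_true, y_pred))
print(f1_score(y_true, y_pred))
print(roc_auc_score(y_true, y_prob))  # needs probabilities, not hard labels
print(log_loss(y_true, y_prob))       # likewise computed from probabilities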

Need help choosing loss function

I have used resnet50 to solve a multi-class classification problem. The model outputs probabilities for each class. Which loss function should I choose for my model?
After choosing binary cross entropy: [screenshot of training metrics]
After choosing categorical cross entropy: [screenshot of training metrics]
The above results are for the same model with just different loss functions. This model is supposed to classify images into 26 classes, so categorical cross entropy should work.
Also, in the first case the accuracy is about 96% but the losses are very high. Why?
Edit 2:
Model architecture: [model summary screenshot]
You definitely need to use categorical_crossentropy for a multi-class classification problem. binary_crossentropy reduces your problem to 26 independent binary decisions, one per output element, which changes both the loss being optimized and the way accuracy is computed.
I would say that the reason you are seeing high accuracy in the first (and to some extent the second) case is that you are overfitting. The first dense layer you are adding contains 8 million parameters (to see that, run model.summary()), and you only have 70k images to train it with, over 8 epochs. This architectural choice is very demanding both in computing power and in data. You are also using a very basic optimizer (SGD); try a more powerful one such as Adam.
Finally, I am a bit surprised by your choice of a 'sigmoid' activation function in the output layer. Why not the more classic 'softmax'?
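A minimal sketch of the suggested combination (the classification head and layer sizes here are made up, not the asker's actual architecture):

from tensorflow.keras import layers, models

# Hypothetical 26-class head on top of resnet50 feature maps of shape (7, 7, 2048):
model = models.Sequential([
    layers.GlobalAveragePooling2D(input_shape=(7, 7, 2048)),
    layers.Dense(256, activation="relu"),
    layers.Dense(26, activation="softmax"),  # softmax output, not sigmoid
])
model.compile(
    optimizer="adam",                 # instead of plain SGD
    loss="categorical_crossentropy",  # multi-class loss for one-hot labels
    metrics=["accuracy"],
)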
For a multi-class classification problem you use the categorical_crossentropy loss, as what it does is match the ground-truth probability distribution with the one predicted by the model.
This is exactly what is used for multi-class classification; you have a misconception if you think you can't use this loss.
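As a toy illustration of what that matching means, per sample (the numbers are made up):

import numpy as np

y_true = np.array([0.0, 1.0, 0.0])       # one-hot ground-truth distribution
y_pred = np.array([0.1, 0.7, 0.2])       # model's predicted distribution
loss = -np.sum(y_true * np.log(y_pred))  # cross-entropy between the two
print(loss)                              # -log(0.7), about 0.357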

Computing "evidence" probability with Naive Bayes classification

I just coded a Naive Bayes classifier for text classification that is giving me expected results. My features are words, and my classes are text classes. I've coded a multinomial Naive Bayes classifier.
However, I would prefer my classifier to output real percentage values ...
To do so, I've got to compute the evidence probability as explained in this wikipedia page.
I've got no problem computing the prior and the conditional probabilities. However, I do not know how to compute the evidence probability P(X), and the few documents that discuss it are not very clear.
I've tried :
P(X) as the product of P(Xi), where Xi is a feature (basically, the product of each feature's frequency within the pool).
P(X) as the sum over all classes Ck of P(Ck) * (the product of P(Xi|Ck)).
None of these solutions give me correct percentages ...
Do you know how to compute the evidence probability in my case?
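For reference, the second formula is the law of total probability, which is the standard way to normalize Naive Bayes posteriors; a minimal sketch with made-up numbers:

import math

# Made-up model: two classes, likelihoods for two word features.
priors = {"spam": 0.4, "ham": 0.6}       # P(Ck)
likelihoods = {                          # P(Xi|Ck)
    "spam": {"free": 0.30, "meeting": 0.05},
    "ham":  {"free": 0.02, "meeting": 0.20},
}
words = ["free", "meeting"]              # observed features X

# Joint P(X, Ck) = P(Ck) * product of P(Xi|Ck)
joint = {c: priors[c] * math.prod(likelihoods[c][w] for w in words)
         for c in priors}
evidence = sum(joint.values())           # P(X), summed over all classes
posterior = {c: joint[c] / evidence for c in joint}
print(posterior)                         # probabilities summing to 1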

Loss function for OneVsRestClassifier

I have a OneVsRestClassifier (scikit-learn) which has been trained.
clf = OneVsRestClassifier(LogisticRegression(C=1.2, penalty='l1')).fit(X_train, y_train)
I want to find out the loss for my test data. I used the log_loss function, but it does not seem to work because I have multiple classes as outputs for each test case. What do I do?
The classification problem that you are referring to is known as a Multi-Label Classification problem. You have made a good decision in using the OneVsRestClassifier for this purpose. By default, the score method uses subset accuracy, which is a very harsh metric since it requires you to guess the entire set of labels correctly.
Some other loss functions, provided by scikit-learn, that you can use are as follows:
Hamming Loss - This measures the hamming distance between your predicted labels and the true labels. This is an intuitive formula for understanding the hamming distance.
Jaccard Similarity Coefficient Score - This measures the Jaccard similarity between your predicted labels and the true labels.
Precision, Recall and F-Measures - In the case of multi-label classification, the notions of Precision, Recall and F-Measures can be applied to each class independently. The following guide explains how to combine them across all labels in multi-label classification. (A short sketch of all three follows this list.)
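A minimal sketch computing these with scikit-learn (using the current function names; the label matrices are made up):

import numpy as np
from sklearn.metrics import hamming_loss, jaccard_score, f1_score

# Made-up multi-label indicator matrices: 3 samples, 4 labels.
y_true = np.array([[1, 0, 1, 0],
                   [0, 1, 0, 0],
                   [1, 1, 0, 1]])
y_pred = np.array([[1, 0, 0, 0],
                   [0, 1, 0, 1],
                   [1, 1, 0, 1]])

print(hamming_loss(y_true, y_pred))                      # fraction of mispredicted labels
print(jaccard_score(y_true, y_pred, average="samples"))  # per-sample Jaccard, averaged
print(f1_score(y_true, y_pred, average="macro"))         # per-label F1, averaged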
If you also need to rank the labels, as is done in multi-label ranking problems, there are other more advanced techniques available in scikit-learn which are very well documented with examples here. If you are dealing with this kind of problem, let me know in the comments and I will explain each of these metrics in more detail.
Hope this helps!
