RandomForestRegressor model evaluation? - machine-learning

I am new to machine learning and trying to understand which evaluation metrics are correct and suitable for a RandomForestRegressor. I have listed the regression metrics below and understand the concepts.
I am not sure which of these metrics I can use for the RandomForestRegressor's evaluation. Can I always use r2_score after prediction?
I am using the sklearn package.
Regression metrics
See the Regression metrics section of the user guide for further details.
metrics.explained_variance_score(y_true, y_pred) Explained variance regression score function
metrics.max_error(y_true, y_pred) max_error metric calculates the maximum residual error.
metrics.mean_absolute_error(y_true, y_pred) Mean absolute error regression loss
metrics.mean_squared_error(y_true, y_pred[, …]) Mean squared error regression loss
metrics.mean_squared_log_error(y_true, y_pred) Mean squared logarithmic error regression loss
metrics.median_absolute_error(y_true, y_pred) Median absolute error regression loss
metrics.r2_score(y_true, y_pred[, …]) R^2 (coefficient of determination) regression score function.
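For concreteness, here is a minimal sketch of how these metrics can be computed for a RandomForestRegressor; the dataset and hyperparameters are placeholders, not taken from the question.

```python
# Minimal sketch: evaluating a RandomForestRegressor with several of the
# metrics listed above (the dataset and hyperparameters are placeholders).
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import (mean_absolute_error, mean_squared_error,
                             median_absolute_error, r2_score)
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=500, n_features=10, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
y_pred = model.predict(X_test)

print("MAE  :", mean_absolute_error(y_test, y_pred))
print("MSE  :", mean_squared_error(y_test, y_pred))
print("MedAE:", median_absolute_error(y_test, y_pred))
print("R^2  :", r2_score(y_test, y_pred))  # same value as model.score(X_test, y_test)
```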

Related

How do I identify which evaluation metric to use for a classification problem in machine learning?

Which evaluation metric should I use for a classification problem? On what factors should I decide?
1. Accuracy
2. F1 Score
3. AUC ROC Score
4. Log Loss
Accuracy is a great metric when you are working with a balanced dataset. It is the number of correct predictions divided by the total number of predictions.
F1 score is a great metric when you care about both the precision and the recall of the predictions; it is also well suited to imbalanced datasets.
ROC AUC score measures how well the model ranks positive examples above negative ones across all classification thresholds. I really like using this evaluation metric; it works well for both balanced and imbalanced datasets.
Log loss is the logarithmic loss of the predictions, based on the cross-entropy between the predicted probabilities and the true labels. I have never used this metric myself.
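For reference, a minimal sketch of how these four metrics are computed with sklearn; the data and classifier below are synthetic and purely illustrative:

```python
# Minimal sketch: computing the four metrics above with sklearn
# (synthetic, imbalanced data; purely illustrative).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score, log_loss
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
y_pred = clf.predict(X_test)               # hard labels for accuracy / F1
y_prob = clf.predict_proba(X_test)[:, 1]   # probabilities for ROC AUC / log loss

print("Accuracy:", accuracy_score(y_test, y_pred))
print("F1      :", f1_score(y_test, y_pred))
print("ROC AUC :", roc_auc_score(y_test, y_prob))
print("Log loss:", log_loss(y_test, y_prob))
```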

Weak Learners of Gradient Boosting Tree for Classification/ Multiclass Classification

I am a beginner in the machine learning field and I want to learn how to do multiclass classification with Gradient Boosting Trees (GBT). I have read some articles about GBT, but only for regression problems, and I could not find a proper explanation of GBT for multiclass classification. I also checked GBT in the scikit-learn machine learning library. The implementation is GradientBoostingClassifier, which uses regression trees as the weak learners for multiclass classification.
GB builds an additive model in a forward stage-wise fashion; it allows for the optimization of arbitrary differentiable loss functions. In each stage n_classes_ regression trees are fit on the negative gradient of the binomial or multinomial deviance loss function. Binary classification is a special case where only a single regression tree is induced.
Source : http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.GradientBoostingClassifier.html#sklearn.ensemble.GradientBoostingClassifier
The thing is, why do we use regression trees as the weak learners in GBT instead of classification trees? It would be very helpful if someone could explain why regression trees are used rather than classification trees, and how a regression tree can do the classification. Thank you.
You are interpreting 'regression' too literally here (as numeric prediction), which is not the case; remember, classification is handled with logistic regression. See, for example, the entry for loss in the documentation page you have linked:
loss : {‘deviance’, ‘exponential’}, optional (default=’deviance’)
loss function to be optimized. ‘deviance’ refers to deviance (= logistic regression) for classification with probabilistic outputs. For loss ‘exponential’ gradient boosting recovers the AdaBoost algorithm.
So, a 'classification tree' is just a regression tree with loss='deviance'...
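A quick way to verify this yourself (assuming a reasonably recent scikit-learn) is to inspect the fitted ensemble: estimators_ holds one regression tree per class per boosting stage, even though the task is classification.

```python
# Sketch: inspecting the weak learners of a fitted GradientBoostingClassifier.
# For a 3-class problem, each boosting stage contains 3 regression trees.
from sklearn.datasets import load_iris
from sklearn.ensemble import GradientBoostingClassifier

X, y = load_iris(return_X_y=True)
clf = GradientBoostingClassifier(n_estimators=5, random_state=0).fit(X, y)

print(clf.estimators_.shape)        # (5, 3): n_boosting_stages x n_classes
print(type(clf.estimators_[0, 0]))  # a DecisionTreeRegressor, not a classifier
```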

Comparing MSE loss and cross-entropy loss in terms of convergence

For a very simple classification problem where I have a target vector [0,0,0,....0] and a prediction vector [0,0.1,0.2,....1], would cross-entropy loss or MSE loss converge better/faster?
When I plot them, it seems to me that MSE loss has a lower error margin. Why would that be?
I see the same thing, for example, when the target is [1,1,1,1....1] (plots not reproduced here).
As a complement to the accepted answer, I will answer the following questions:
What is the interpretation of MSE loss and cross-entropy loss from a probability perspective?
Why is cross entropy used for classification and MSE used for linear regression?
TL;DR: Use MSE loss if the (random) target variable comes from a Gaussian distribution, and categorical cross-entropy loss if the (random) target variable comes from a multinomial distribution.
MSE (Mean squared error)
One of the assumptions of linear regression is multivariate normality. From this it follows that the target variable is normally distributed (more on the assumptions of linear regression can be found here and here).
The Gaussian distribution (normal distribution) with mean $\mu$ and variance $\sigma^2$ is given by
$$\mathcal{N}(x \mid \mu, \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\!\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)$$
Often in machine learning we deal with a distribution with mean 0 and variance 1 (or we transform our data to have mean 0 and variance 1). In this case the normal distribution becomes
$$\mathcal{N}(x \mid 0, 1) = \frac{1}{\sqrt{2\pi}} \exp\!\left(-\frac{x^2}{2}\right)$$
This is called the standard normal distribution.
For a normal-distribution model with weight parameter $\mathbf{w}$ and precision (inverse variance) parameter $\beta$, the probability of observing a single target $t$ given input $\mathbf{x}$ is expressed by the following equation
$$p(t \mid \mathbf{x}, \mathbf{w}, \beta) = \mathcal{N}\!\left(t \mid y(\mathbf{x}, \mathbf{w}), \beta^{-1}\right),$$
where $y(\mathbf{x}, \mathbf{w})$ is the mean of the distribution and is calculated by the model as
$$y(\mathbf{x}, \mathbf{w}) = \mathbf{w}^{\top}\mathbf{x}.$$
Now the probability of the target vector $\mathbf{t} = (t_1, \ldots, t_N)$ given the inputs $\mathbf{x}_1, \ldots, \mathbf{x}_N$ can be expressed by
$$p(\mathbf{t} \mid \mathbf{X}, \mathbf{w}, \beta) = \prod_{n=1}^{N} \mathcal{N}\!\left(t_n \mid y(\mathbf{x}_n, \mathbf{w}), \beta^{-1}\right).$$
Taking the natural logarithm of the left and right terms yields
$$\ln p(\mathbf{t} \mid \mathbf{X}, \mathbf{w}, \beta) = -\frac{\beta}{2}\sum_{n=1}^{N}\bigl(y(\mathbf{x}_n, \mathbf{w}) - t_n\bigr)^2 + \frac{N}{2}\ln\beta - \frac{N}{2}\ln(2\pi),$$
where the left-hand side is the log-likelihood of the normal model. Often training a model involves optimizing the likelihood function with respect to $\mathbf{w}$. The maximum likelihood solution for the parameter $\mathbf{w}$ is given by (constant terms with respect to $\mathbf{w}$ can be omitted)
$$\mathbf{w}_{\mathrm{ML}} = \arg\min_{\mathbf{w}} \sum_{n=1}^{N}\bigl(y(\mathbf{x}_n, \mathbf{w}) - t_n\bigr)^2.$$
For training the model, omitting the constants does not affect convergence.
The sum $\sum_{n=1}^{N}\bigl(y(\mathbf{x}_n, \mathbf{w}) - t_n\bigr)^2$ is called the squared error, and taking the mean yields the mean squared error:
$$\mathrm{MSE} = \frac{1}{N}\sum_{n=1}^{N}\bigl(y(\mathbf{x}_n, \mathbf{w}) - t_n\bigr)^2.$$
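As a small numerical sanity check of the last step (the numbers below are made up for illustration), the Gaussian negative log-likelihood differs from $\beta/2$ times the squared error only by a constant that does not depend on the predictions:

```python
# Sketch: the Gaussian negative log-likelihood equals (beta/2) * squared error
# plus a constant that does not depend on the predictions, so minimizing it is
# the same as minimizing the (mean) squared error. Numbers are made up.
import numpy as np

rng = np.random.default_rng(0)
t = rng.normal(size=100)                  # targets t_n
y = t + rng.normal(scale=0.5, size=100)   # model predictions y(x_n, w)
beta = 4.0                                # precision (inverse variance)
N = len(t)

nll = -np.sum(-0.5 * beta * (y - t) ** 2 + 0.5 * np.log(beta) - 0.5 * np.log(2 * np.pi))
sse = np.sum((y - t) ** 2)
const = -N * (0.5 * np.log(beta) - 0.5 * np.log(2 * np.pi))

print(np.isclose(nll, 0.5 * beta * sse + const))  # True
```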
Cross entropy
Before going into the more general cross-entropy function, I will explain a specific type of cross entropy: binary cross entropy.
Binary cross entropy
The assumption behind binary cross entropy is that the target variable is drawn from a Bernoulli distribution. According to Wikipedia,
Bernoulli distribution is the discrete probability distribution of a random variable which
takes the value 1 with probability p and the value 0
with probability q=1-p
The probability mass function of a Bernoulli random variable is given by
$$P(X = x) = \begin{cases} p & \text{if } x = 1 \\ 1 - p & \text{if } x = 0, \end{cases}$$
where $x \in \{0, 1\}$ and $p$ is the probability of success. This can be simply written as
$$P(X = x) = p^{x}(1 - p)^{1 - x}.$$
Taking the negative natural logarithm of both sides yields
$$-\ln P(X = x) = -\bigl[x \ln p + (1 - x)\ln(1 - p)\bigr];$$
this is called the binary cross entropy.
Categorical cross entropy
The generalization of the cross entropy follows the general case where the random variable is multivariate (i.e. drawn from a multinomial distribution) with the following probability distribution:
$$P(X_1 = x_1, \ldots, X_K = x_K) = \prod_{i=1}^{K} p_i^{x_i}.$$
Taking the negative natural logarithm of both sides yields the categorical cross-entropy loss:
$$-\ln P(X_1 = x_1, \ldots, X_K = x_K) = -\sum_{i=1}^{K} x_i \ln p_i.$$
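For completeness, a small check with made-up numbers that these formulas match sklearn's log_loss:

```python
# Sketch: checking the binary and categorical cross-entropy formulas above
# against sklearn's log_loss (made-up probabilities).
import numpy as np
from sklearn.metrics import log_loss

# binary case: one sample with true label 1 and predicted p = 0.8
x, p = 1, 0.8
bce = -(x * np.log(p) + (1 - x) * np.log(1 - p))
print(np.isclose(bce, log_loss([1], [[0.2, 0.8]], labels=[0, 1])))   # True

# categorical case: a one-hot target and a 3-class prediction
x_vec = np.array([0, 1, 0])
p_vec = np.array([0.1, 0.7, 0.2])
cce = -np.sum(x_vec * np.log(p_vec))
print(np.isclose(cce, log_loss([1], [p_vec], labels=[0, 1, 2])))     # True
```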
You sound a little confused...
Comparing the values of MSE & cross-entropy loss and saying that one is lower than the other is like comparing apples to oranges
MSE is for regression problems, while cross-entropy loss is for classification ones; these contexts are mutually exclusive, hence comparing the numerical values of their corresponding loss measures makes no sense
When your prediction vector is like [0,0.1,0.2,....1] (i.e. with non-integer components), as you say, the problem is a regression (and not a classification) one; in classification settings, we usually use one-hot encoded target vectors, where only one component is 1 and the rest are 0
A target vector of [1,1,1,1....1] could be the case either in a regression setting, or in a multi-label multi-class classification, i.e. where the output may belong to more than one class simultaneously
On top of these, your plot choice, with the percentage (?) of predictions in the horizontal axis, is puzzling - I have never seen such plots in ML diagnostics, and I am not quite sure what exactly they represent or why they can be useful...
If you like a detailed discussion of the cross-entropy loss & accuracy in classification settings, you may have a look at this answer of mine.
I tend to disagree with the previously given answers. The point is that the cross-entropy and MSE losses come from the same place.
Modern neural networks learn their parameters using maximum likelihood estimation (MLE). The maximum likelihood estimator is the argmax, over the parameter space, of the product of the model probabilities of the training examples. If we apply a log transformation and scale by the number of training examples, we obtain an expectation with respect to the empirical distribution defined by the training data.
Furthermore, assuming different output distributions, e.g. Gaussian or Bernoulli, yields either the MSE loss or the negative log-likelihood of the sigmoid function (i.e. the cross-entropy).
For further reading:
Ian Goodfellow "Deep Learning"
A simple answer to your first question:
For a very simple classification problem ... would cross-entropy loss converge better/faster or would MSE loss?
is that the MSE loss, when combined with a sigmoid activation, results in a non-convex cost function with multiple local minima, whereas the cross-entropy cost of logistic regression remains convex. This is explained by Prof. Andrew Ng in his lecture:
Lecture 6.4 — Logistic Regression | Cost Function — [ Machine Learning | Andrew Ng]
I imagine the same applies to multiclass classification with softmax activation.
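To make the non-convexity claim concrete, here is a small numerical sketch I am adding for illustration (a single-example toy setup, not part of the lecture): the curvature of the MSE-on-sigmoid loss changes sign, while the cross-entropy curvature does not.

```python
# Sketch: a single example with x = 1 and label y = 1.
# MSE on a sigmoid output, (sigmoid(w) - 1)^2, is non-convex in w: its
# curvature changes sign. Cross-entropy, -log(sigmoid(w)), stays convex.
import numpy as np

w = np.linspace(-6, 6, 2001)
s = 1.0 / (1.0 + np.exp(-w))   # sigmoid(w), the model's prediction for x = 1

mse = (s - 1.0) ** 2
ce = -np.log(s)

# discrete second differences approximate the curvature along the grid
d2_mse = np.diff(mse, 2)
d2_ce = np.diff(ce, 2)

print("MSE curvature changes sign:", d2_mse.min() < 0 < d2_mse.max())  # True  -> non-convex
print("CE curvature changes sign: ", d2_ce.min() < 0 < d2_ce.max())    # False -> convex here
```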

Tensorflow: Output probabilities from sigmoid cross entropy loss

I have a CNN for a multilabel classification problem, and as the loss function I use tf.nn.sigmoid_cross_entropy_with_logits.
From the cross-entropy equation I would expect the output to be probabilities for each class, but instead I get floats in (-∞, ∞).
After some googling I found that, due to an internal normalizing operation, each row of logits is interpretable as a probability before being fed into the equation.
I'm confused about how I can actually output the posterior probabilities instead of these floats, in order to draw a ROC curve.
tf.sigmoid(logits) gives you the probabilities.
You can see in the documentation of tf.nn.sigmoid_cross_entropy_with_logits that tf.sigmoid is the function that normalizes the logits to probabilities.
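A minimal sketch of the distinction, written for TensorFlow 2.x eager mode with made-up values:

```python
# Sketch (TensorFlow 2.x, eager mode): the loss consumes raw logits, while
# tf.sigmoid(logits) gives the per-class probabilities needed for a ROC curve.
import tensorflow as tf

logits = tf.constant([[2.0, -1.0, 0.5]])   # raw network outputs, in (-inf, inf)
labels = tf.constant([[1.0, 0.0, 1.0]])    # multilabel ground truth

loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=labels, logits=logits)
probs = tf.sigmoid(logits)                 # probabilities in (0, 1), one per class

print(loss.numpy())    # per-class losses
print(probs.numpy())   # e.g. [[0.88 0.27 0.62]]
```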

Bootstrap aggregation (bagging) of logistic regression classifiers

So I'm taking N bootstrap samples and training N logistic regression classifiers on these samples. Each classifier gives me a probability of belonging to the positive class, and I then average these N probabilities to get a final prediction.
My question is: if I took the N sets of regression coefficients, averaged them, and used the averaged coefficients in a single logistic regression classifier, taking its output probability as the final prediction, would that be the same as averaging the N probabilities as described in the previous paragraph?
The answer is no, because the logistic function is non-linear: the average of 1/(1+exp(-a)) and 1/(1+exp(-b)) is not, in general, equal to 1/(1+exp(-(a+b)/2)).
But the inverse of the logistic function, the log-odds, is linear in the coefficients (g(x) in this wiki page). So if you average the log-odds rather than the probabilities, that is equivalent to averaging the corresponding coefficients (beta0 and beta1 in the page) in your bagging procedure.
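A tiny numerical sketch of the difference, with coefficients invented purely for illustration:

```python
# Sketch: averaging the predicted probabilities vs. predicting with averaged
# coefficients gives different results (coefficients invented for illustration).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = 2.0                            # a single feature value
coefs = np.array([[0.5, -1.0],     # (intercept, slope) of classifier 1
                  [-2.0, 3.0]])    # (intercept, slope) of classifier 2

log_odds = coefs[:, 0] + coefs[:, 1] * x       # [-1.5, 4.0]

p_avg_probs = sigmoid(log_odds).mean()         # 1) average of the probabilities
b0, b1 = coefs.mean(axis=0)
p_avg_coefs = sigmoid(b0 + b1 * x)             # 2) probability from averaged coefficients

print(p_avg_probs, p_avg_coefs)                # ~0.58 vs ~0.78 -- not equal
print(np.isclose(sigmoid(log_odds.mean()), p_avg_coefs))  # True: averaging the log-odds
                                                          # matches averaging the coefficients
```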
