Use RMSE in Gradient Boosting - machine-learning

Is it possible to use the RMSE loss function in XGBoost or in sklearn's GradientBoostingRegressor as is done in CatBoost?

Yes, you can use RMSE with XGBoost: it is the default eval_metric for regression objectives, while error is the default for classification.
Reference: https://xgboost.readthedocs.io/en/latest/parameter.html?highlight=rmse
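A minimal sketch of both options (assuming a reasonably recent xgboost where eval_metric can be passed to the constructor, and scikit-learn >= 1.0 where the loss is named squared_error; older releases use reg:linear and ls instead):

import numpy as np
import xgboost as xgb
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.RandomState(0)
X = rng.rand(200, 5)
y = X @ rng.rand(5) + 0.1 * rng.randn(200)

# XGBoost: train on squared error and report RMSE as the eval metric
xgb_model = xgb.XGBRegressor(objective="reg:squarederror", eval_metric="rmse")
xgb_model.fit(X, y)

# sklearn: GradientBoostingRegressor minimizes squared error, which has the
# same optimum as RMSE; compute RMSE explicitly for evaluation
gbr = GradientBoostingRegressor(loss="squared_error")
gbr.fit(X, y)
print("sklearn GBR train RMSE:", np.sqrt(mean_squared_error(y, gbr.predict(X))))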

Related

What is pixel-wise softmax loss?

What is the pixel-wise softmax loss? In my understanding it is just a cross-entropy loss, but I could not find the formula. Can someone help me? PyTorch code would be even better.
You can read all about it here (there is also a link to the source code there).
As you already observed, the "softmax loss" is basically a cross-entropy loss whose computation combines the softmax function and the loss itself for numerical stability and efficiency.
In your example the loss is computed for a pixel-wise prediction, so you have a per-pixel prediction, a per-pixel target and a per-pixel loss term.
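Since PyTorch code was requested, here is a minimal sketch of the pixel-wise case: nn.CrossEntropyLoss accepts logits of shape (N, C, H, W) and a target of per-pixel class indices of shape (N, H, W), and averages the per-pixel cross-entropy terms by default.

import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()          # combines log-softmax and NLL

logits = torch.randn(2, 3, 4, 4)           # N=2 images, C=3 classes, 4x4 pixels
target = torch.randint(0, 3, (2, 4, 4))    # one class index per pixel
loss = criterion(logits, target)           # mean over all pixels
print(loss.item())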

RandomForestRegressor model evaluation?

I am new to machine learning and am trying to understand the correct and suitable evaluation for a RandomForestRegressor. I have listed the regression metrics below and understand these concepts.
I am not sure which metrics I can use for evaluating a RandomForestRegressor. Can I always use r2_score after prediction?
I am using the sklearn packages.
Regression metrics (see the Regression metrics section of the sklearn user guide for further details):
metrics.explained_variance_score(y_true, y_pred): explained variance regression score function
metrics.max_error(y_true, y_pred): maximum residual error
metrics.mean_absolute_error(y_true, y_pred): mean absolute error regression loss
metrics.mean_squared_error(y_true, y_pred[, …]): mean squared error regression loss
metrics.mean_squared_log_error(y_true, y_pred): mean squared logarithmic error regression loss
metrics.median_absolute_error(y_true, y_pred): median absolute error regression loss
metrics.r2_score(y_true, y_pred[, …]): R^2 (coefficient of determination) regression score function
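For what it's worth, a minimal sketch of evaluating a RandomForestRegressor with several of these metrics (synthetic data for illustration; r2_score is a reasonable default, but an error metric in the target's own units such as MAE or RMSE is usually reported alongside it):

import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=500, n_features=8, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
y_pred = model.predict(X_test)

# Evaluate on held-out data, not on the training set
print("R^2 :", r2_score(y_test, y_pred))
print("MAE :", mean_absolute_error(y_test, y_pred))
print("RMSE:", np.sqrt(mean_squared_error(y_test, y_pred)))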

Tensorflow: Output probabilities from sigmoid cross entropy loss

I have a CNN for a multilabel classification problem, and as a loss function I use tf.nn.sigmoid_cross_entropy_with_logits.
From the cross-entropy equation I would expect the output to be probabilities for each class, but instead I get floats in (-∞, ∞).
After some googling I found that, due to an internal normalizing operation, each row of logits is interpretable as a probability before being fed to the equation.
I'm confused about how I can actually output the posterior probabilities instead of unbounded floats in order to draw an ROC curve.
tf.sigmoid(logits) gives you the probabilities.
You can see in the documentation of tf.nn.sigmoid_cross_entropy_with_logits that tf.sigmoid is the function that normalizes the logits to probabilities.
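A short sketch (written for TF 2.x eager mode; the logits and labels below are made up for illustration), showing the sigmoid applied per label and the resulting probabilities fed to an ROC curve:

import numpy as np
import tensorflow as tf
from sklearn.metrics import roc_curve

# Hypothetical network outputs: one row per example, one column per label
logits = tf.constant([[2.3, -1.1, 0.4],
                      [-0.7, 1.8, -2.5],
                      [0.9, 0.2, 1.4]])

# The loss consumes raw logits; apply the sigmoid yourself to get
# independent per-label probabilities in (0, 1)
probs = tf.sigmoid(logits).numpy()

# ROC for the first label, given hypothetical ground-truth labels
y_true = np.array([1, 0, 1])
fpr, tpr, thresholds = roc_curve(y_true, probs[:, 0])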

Why stochastic gradient descent does not support non-linear SVM

I have read that SGD supports a linear SVM but not a non-linear SVM. Why is that? I was looking at the cost function of a non-linear SVM; it does have a summation sign at the beginning.
Please read about Mercer's theorem; maybe it will shed some light!
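To expand a little: SGD optimizes the primal objective of a linear SVM, a finite sum of hinge-loss terms plus a regularizer over an explicit weight vector. A kernel (non-linear) SVM has no such explicit weight vector; its decision function is expressed through kernel evaluations against the training points, which is why plain SGD does not apply directly. A common workaround, sketched below with scikit-learn, is to approximate the kernel with an explicit feature map and run a linear SVM with SGD on top of it:

from sklearn.datasets import make_moons
from sklearn.kernel_approximation import RBFSampler
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import make_pipeline

X, y = make_moons(n_samples=500, noise=0.2, random_state=0)

# SGDClassifier(loss="hinge") is a linear SVM trained with SGD.
# RBFSampler builds an explicit approximate RBF feature map, so the
# linear SVM in that space behaves like a non-linear (kernel) SVM.
model = make_pipeline(RBFSampler(gamma=1.0, random_state=0),
                      SGDClassifier(loss="hinge", max_iter=1000))
model.fit(X, y)
print("train accuracy:", model.score(X, y))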

Is Kullback–Leibler divergence already implemented in TensorFlow?

I am working with TensorFlow and using neural networks to solve a multi-label classification problem. I was using softmax cross-entropy as my loss function:
#Softmax loss
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))
Now I think I should use a KL divergence loss function instead, but I didn't find it in TensorFlow. Can anybody help me use a KL divergence loss instead of the softmax loss?
Here you go:
tf.contrib.distributions.kl(distribution_1, distribution_2)
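Note that tf.contrib was removed in TensorFlow 2.x. A minimal sketch with the current Keras loss (assuming y_true and y_pred are probability distributions over the classes):

import tensorflow as tf

kl = tf.keras.losses.KLDivergence()

# Target distributions (rows sum to 1) and model probabilities
y_true = tf.constant([[0.0, 1.0, 0.0],
                      [0.5, 0.5, 0.0]])
y_pred = tf.nn.softmax(tf.constant([[1.2, 3.4, -0.5],
                                    [0.3, 0.1, -1.0]]))

loss = kl(y_true, y_pred)   # mean KL divergence over the batch
print(loss.numpy())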
