XGBoost: Is it possible to predict multiple labels and calculate their MAPE? - machine-learning

As far as I know, XGBoost supports multi-class prediction with objective functions such as softmax.
In my case, I'd like it to output several labels (float numbers) and minimize their MAPE. Is that viable? What should I do to make it happen? (For instance, how do I construct a DMatrix with multiple labels in the first place?)

import numpy
import xgboost as xgb

data = numpy.array([[1, 2, 3], [3, 4, 5]])
label = numpy.array([[0.2, 0.1], [0.3, 0.4]])  # two float labels per row
dtrain = xgb.DMatrix(data, label=label)
param = {'gamma': 2.0, 'nthread': 8, 'max_depth': 15, 'eta': 0.000000003, 'silent': 1,
         'objective': 'multi:softprob', 'eval_metric': 'auc', 'num_class': 105}
num_round = 10  # e.g.
bst = xgb.train(param, dtrain, num_round)
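One possible approach (a sketch, assuming scikit-learn >= 0.24 and the xgboost sklearn wrapper are available) is to fit one regressor per target via MultiOutputRegressor and compute MAPE over all target columns:
import numpy as np
import xgboost as xgb
from sklearn.multioutput import MultiOutputRegressor
from sklearn.metrics import mean_absolute_percentage_error

X = np.array([[1, 2, 3], [3, 4, 5], [5, 6, 7], [7, 8, 9]])
y = np.array([[0.2, 0.1], [0.3, 0.4], [0.5, 0.6], [0.7, 0.8]])  # several float labels per row
model = MultiOutputRegressor(xgb.XGBRegressor(objective='reg:squarederror', n_estimators=50))
model.fit(X, y)
pred = model.predict(X)
print(mean_absolute_percentage_error(y, pred))  # averaged over the target columns
This sidesteps the multi-label DMatrix question: each target gets its own booster, and MAPE is evaluated on the stacked predictions.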

Related

Use of 'is_unbalance' parameter in LightGBM

I am trying to use the 'is_unbalance' parameter when training my model for a binary classification problem where the positive class is approximately 3%. If I set 'is_unbalance', I observe that the binary log loss drops in the first iteration but then keeps increasing. I notice this behavior only if I enable 'is_unbalance'; otherwise, there is a steady drop in log loss. I'd appreciate your help on this. Thanks.
When you do not balance the classes on such an unbalanced dataset, the objective value will obviously keep dropping - the model will likely end up assigning every prediction to the majority class while reporting a fantastic objective value.
Balancing the classes is necessary, but that doesn't mean you should stop at is_unbalance - you can use scale_pos_weight, a customized metric, or per-sample weights, like the following:
# weight each class inversely to its frequency (the rarest class gets weight 1.0)
weights = y_train.value_counts(normalize=True).min() / y_train.value_counts(normalize=True)
train_weights = y_train.map(weights).values  # per-sample weight looked up from the class weights
train_data = lgb.Dataset(X_train, label=y_train, weight=train_weights)
Tuning other hyperparameters should also help with the increasing log loss.
When you set is_unbalance: true, the algorithm will try to automatically balance the weight of the dominated label, using the pos/neg ratio of the training set.
If you want to change scale_pos_weight instead (it defaults to 1, which assumes the positive and negative labels are equally frequent), you can use the following formula (based on this issue in the LightGBM repository) to set it correctly for an unbalanced dataset:
scale_pos_weight = number of negative samples / number of positive samples
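A minimal sketch of setting it (assuming y_train holds 0/1 labels and X_train the features):
neg, pos = (y_train == 0).sum(), (y_train == 1).sum()
params = {'objective': 'binary', 'scale_pos_weight': neg / pos}  # up-weights the positive (minority) class
booster = lgb.train(params, lgb.Dataset(X_train, label=y_train))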

Large number of dummy variables in deep learning

I am trying to train a deep learning model for a regression problem. I have 2,000 significant categorical inputs, each of which has 3 categories. If I convert them to dummy variables, I will have 6,000 dummy variables as input to the deep learning model, which makes optimization very hard since the inputs are not zero-centered. Also, the variance in each dummy variable is small, so 6,000 dummy variables will have a hard time explaining the variance in the output. Should I z-score the dummy variables to help optimization? Is there a better way to deal with these 2,000 categorical inputs?
You should use embeddings, which translate large sparse vectors into a lower-dimensional space that preserves semantic relationships, so each categorical feature gets a dense vector representation.
Here is example code using TensorFlow (Keras):
import numpy as np
import tensorflow as tf

# one embedding per categorical column; input_dim is the number of distinct categories
unique_amount_1 = len(np.unique(col1))
input_1 = tf.keras.layers.Input(shape=(1,), name='input_1')
embedding_1 = tf.keras.layers.Embedding(unique_amount_1, 50, trainable=True)(input_1)
col1_embedding = tf.keras.layers.Flatten()(embedding_1)

unique_amount_2 = len(np.unique(col2))
input_2 = tf.keras.layers.Input(shape=(1,), name='input_2')
embedding_2 = tf.keras.layers.Embedding(unique_amount_2, 50, trainable=True)(input_2)
col2_embedding = tf.keras.layers.Flatten()(embedding_2)

combined = tf.keras.layers.concatenate([col1_embedding, col2_embedding])
result = tf.keras.layers.Dense(1)(combined)  # single regression output
model = tf.keras.Model(inputs=[input_1, input_2], outputs=result)
where 50 is the size of the embedding vector.
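A minimal usage sketch (assuming col1 and col2 are integer-encoded arrays with values in [0, unique_amount - 1] and y is the regression target):
model.compile(optimizer='adam', loss='mse')
model.fit([col1, col2], y, epochs=10, batch_size=32)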

TensorFlow: Why does tf.one_hot give the best performance on tf.uint8 dtypes?

I am rather puzzled that there is a huge variance (a 5% difference in accuracy) in the performance of the same model (keeping all other factors the same) when I simply place the conversion of my labels' dtype (from tf.uint8) after tf.one_hot, meaning that the tf.one_hot function processes uint8 integers instead.
For example
...
labels = tf.cast(labels, tf.int64)
labels = tf.one_hot(labels, depth=12)
In comparison to
...
labels = tf.one_hot(labels, depth=12)
labels = tf.cast(labels, tf.int64)
the latter has better performance.
Is there a preferred dtype when using tf.one_hot?

How to handle gradients when training two sub-graphs simultaneously

The general idea I am trying to realize is a seq2seq model (taken from the translate.py example in the models repository, based on the seq2seq class). This trains well.
Furthermore, I am using the hidden state of the RNN after all the encoding is done, right before decoding starts (I call it the "hidden state at end of encoding"). I feed this hidden state into a further sub-graph which I call "prices" (see below). The training gradients of this sub-graph backpropagate not only through this additional sub-graph but also back into the encoder part of the RNN (which is what I want and need).
The plan is to add more such sub-graphs to the hidden state at end of encoding, as I want to analyze the input phrases in a variety of ways.
Now, during training, when I evaluate and train both sub-graphs (encoder+prices AND encoder+decoder) at the same time, the net does NOT converge. However, if I execute the training in the following way (pseudo-code):
if global_step % 10 == 0:
    execute_the_price_training_code
else:
    execute_the_decoder_training_code
then I am not training both sub-graphs simultaneously, and it does converge - but the encoder+decoder part converges MUCH more slowly than if I ONLY train that part and never train the prices sub-graph.
My question is: I should be able to train both sub-graphs simultaneously, but probably I have to rescale the gradients flowing back into the hidden state at end of encoding, where gradients arrive from the prices sub-graph AND from the decoder sub-graph. How should this rescaling be done? I didn't find any papers describing such an undertaking, but maybe I am searching with the wrong keywords.
Here is the training part of the code.
This is the (almost original) training-op preparation:
if not forward_only:
    self.gradient_norms = []
    self.updates = []
    opt = tf.train.AdadeltaOptimizer(self.learning_rate)
    for bucket_id in xrange(len(buckets)):
        tf.scalar_summary("seq2seq loss", self.losses[bucket_id])
        gradients = tf.gradients(self.losses[bucket_id], var_list_seq2seq)
        clipped_gradients, norm = tf.clip_by_global_norm(gradients, max_gradient_norm)
        self.gradient_norms.append(norm)
        self.updates.append(opt.apply_gradients(zip(clipped_gradients, var_list_seq2seq),
                                                global_step=self.global_step))
Now, additionally, I am running a second sub-graph that takes the hidden state at end of encoding as input:
with tf.name_scope('prices') as scope:
    # First layer
    W_price_first_layer = tf.Variable(tf.random_normal([num_layers * size, self.prices_hidden_layer_size], stddev=0.35),
                                      name="W_price_first_layer")
    B_price_first_layer = tf.Variable(tf.zeros([self.prices_hidden_layer_size]), name="B_price_first_layer")
    self.output_price_first_layer = tf.add(tf.matmul(self.hidden_state, W_price_first_layer), B_price_first_layer)
    self.activation_price_first_layer = tf.nn.sigmoid(self.output_price_first_layer)
    # self.activation_price_first_layer = tf.nn.relu(self.output_price_first_layer)

    # Second layer to softmax (price ranges)
    W_price = tf.Variable(tf.random_normal([self.prices_hidden_layer_size, self.prices_bit_size], stddev=0.35), name="W_price")
    W_price_t = tf.transpose(W_price)
    B_price = tf.Variable(tf.zeros([self.prices_bit_size]), name="B_price")
    self.output_price_second_layer = tf.add(tf.matmul(self.activation_price_first_layer, W_price), B_price)
    self.price_prediction = tf.nn.softmax(self.output_price_second_layer)
    self.label_price = tf.placeholder(tf.int32, shape=[self.batch_size], name="price_label")

    # Remember the prices trainables
    var_list_prices = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, "prices")
    var_list_all = tf.trainable_variables()

    # Backprop
    self.loss_price = tf.nn.sparse_softmax_cross_entropy_with_logits(self.output_price_second_layer, self.label_price)
    self.loss_price_scalar = tf.reduce_mean(self.loss_price)
    self.optimizer_price = tf.train.AdadeltaOptimizer(self.learning_rate_prices)
    self.training_op_price = self.optimizer_price.minimize(self.loss_price, var_list=var_list_all)
Thanks a bunch.
I expect that running two optimizers simultaneously will lead to inconsistent gradient updates on the common variables, and this might be causing your training not to converge.
Instead, if you add the scalar loss from each sub-network to the losses collection (e.g. via tf.contrib.losses.add_loss() or tf.add_to_collection(tf.GraphKeys.LOSSES, ...)), you can use tf.contrib.losses.get_total_loss() to get a single loss value that can be passed to a single standard TensorFlow tf.train.Optimizer subclass. TensorFlow will then derive the appropriate back-prop computation for your split network.
The get_total_loss() method simply computes an unweighted sum of the values that have been added to the losses collection. I'm not familiar with the literature on how, or whether, you should scale these values, but you can use any arbitrary (differentiable) TensorFlow expression to combine the losses and pass the result to a single optimizer.
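For example, a minimal sketch of that suggestion (TF 1.x graph mode, matching the question's code; seq2seq_loss_scalar is a placeholder name for the scalar seq2seq loss):
tf.add_to_collection(tf.GraphKeys.LOSSES, seq2seq_loss_scalar)   # hypothetical scalar seq2seq loss
tf.add_to_collection(tf.GraphKeys.LOSSES, self.loss_price_scalar)
total_loss = tf.add_n(tf.get_collection(tf.GraphKeys.LOSSES))  # one combined loss
# a single optimizer keeps the gradient updates on the shared encoder variables consistent
train_op = tf.train.AdadeltaOptimizer(self.learning_rate).minimize(total_loss)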

Parameter selection and k-fold cross-validation

I have one dataset and need to do cross-validation on it, for example 10-fold cross-validation, over the entire dataset. I would like to use a radial basis function (RBF) kernel with parameter selection (an RBF kernel has two parameters: C and gamma). Usually, people select the hyperparameters of an SVM using a dev set and then apply the best hyperparameters found on the dev set to the test set for evaluation. However, in my case, the original dataset is partitioned into 10 subsets. Sequentially, one subset is tested using the classifier trained on the remaining 9 subsets. Obviously, we do not have fixed training and test data. How should I do hyperparameter selection in this case?
Is your data partitioned into exactly those 10 partitions for a specific reason? If not, you could concatenate/shuffle them together again and then do regular (repeated) cross-validation to perform a parameter grid search. For example, using 10 partitions and 10 repeats gives a total of 100 training and evaluation sets. These are used to train and evaluate every parameter set, so you get 100 results per parameter set you try, and the average performance per parameter set can then be computed from those 100 results.
This process is built into most ML tools already, as in this short example in R using the caret library:
library(caret)
library(lattice)
library(doMC)
registerDoMC(3)

model <- train(x = iris[, 1:4],
               y = iris[, 5],
               method = 'svmRadial',
               preProcess = c('center', 'scale'),
               tuneGrid = expand.grid(C = 3**(-3:3), sigma = 3**(-3:3)),  # all permutations of these parameters get evaluated
               trControl = trainControl(method = 'repeatedcv',
                                        number = 10,
                                        repeats = 10,
                                        returnResamp = 'all',  # store results of all parameter sets on all partitions and repeats
                                        allowParallel = T))

# performance of the different parameter sets (e.g. average and standard deviation of performance)
print(model$results)
# visualization of the above
levelplot(x = Accuracy ~ C * sigma, data = model$results, col.regions = gray(100:0/100), scales = list(log = 3))
# results of all parameter sets over all partitions and repeats; the metrics above are calculated from these
str(model$resample)
Once you have evaluated a grid of hyperparameters, you can choose a reasonable parameter set ("model selection"), e.g. a model that performs well while still being reasonably simple.
BTW: I would recommend repeated cross-validation over plain cross-validation if possible (possibly using more than 10 repeats, but the details depend on your problem); and, as @christian-cerri already recommended, having an additional, unseen test set that is used to estimate the performance of your final model on new data is a good idea.
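A rough scikit-learn equivalent of the caret example above (a sketch; X and y stand in for your features and labels):
from sklearn.model_selection import GridSearchCV, RepeatedStratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

param_grid = {'svc__C': [3.0**k for k in range(-3, 4)],        # grid of C values
              'svc__gamma': [3.0**k for k in range(-3, 4)]}    # grid of kernel widths
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=10, random_state=0)
search = GridSearchCV(make_pipeline(StandardScaler(), SVC(kernel='rbf')),
                      param_grid, cv=cv, scoring='accuracy')
search.fit(X, y)
print(search.best_params_, search.best_score_)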
