Why do we want to scale outputs when using dropout?

From the dropout paper:
"The idea is to use a single neural net at test time without dropout.
The weights of this network are scaled-down versions of the trained
weights. If a unit is retained with probability p during training, the
outgoing weights of that unit are multiplied by p at test time as
shown in Figure 2. This ensures that for any hidden unit the expected
output (under the distribution used to drop units at training time) is
the same as the actual output at test time."
Why do we want to preserve the expected output? If we use ReLU activations, linear scaling of weights or activations results in linear scaling of network outputs and does not have any effect on the classification accuracy.
What am I missing?

To be precise, we want to preserve not the "expected output" but the expected value of the output. That is, we want to make up for the difference between the training phase (when the values of some nodes are dropped) and the test phase by preserving the mean (expected) values of the outputs.
In the case of ReLU activations this scaling indeed leads to linear scaling of the outputs (when they are positive), but why do you think it doesn't affect the final accuracy of a classification model? At the end, we usually apply either a softmax or a sigmoid, which are non-linear and do depend on this scaling.
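A quick numpy illustration of both points (a toy sketch, not from the original thread): the expectation argument, and the fact that softmax is not invariant to rescaling its inputs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Expectation: a unit with activation a, kept with probability p,
# contributes E[mask * a] = p * a on average at training time.
a, p = 2.0, 0.5
masks = rng.random(100_000) < p
print((masks * a).mean())          # ~1.0, i.e. p * a

# Softmax is not scale-invariant: scaling the logits changes the
# output distribution, so a global rescaling does affect the model.
def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

logits = np.array([2.0, 1.0, 0.0])
print(softmax(logits))             # [0.665, 0.245, 0.090]
print(softmax(0.5 * logits))       # [0.506, 0.307, 0.186] -- different
```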

Related

Numerically stable cross entropy loss calculation for Mixture of Experts model in Pytorch

I am stuck with a supposedly simple problem. I have a mixture of experts system, consisting of multiple neural networks for classification, whose mixture weights are determined by the data as well. For the posterior probability of label y, given data x and K different expert networks, we have:
p(y|x) = sum_{k=1}^{K} p(z_k|x) * p(y|z_k,x)
In this scheme, p(y|z_k,x) are the expert networks' posterior probabilities, i.e. softmax functions applied to the network outputs, and p(z_k|x) are the mixture weights.
My problem is the following. Usually, in Pytorch we feed the outputs of the last layer (logits) into the cross entropy loss function. Pytorch handles the numerical stability issues with the logsumexp trick (How is log_softmax() implemented to compute its value (and gradient) with better speed and numerical stability?). In my case here, however, my model output is not the logits but the probabilities directly, due to the nature of the mixture model. Taking the logarithm of the mixture probabilities and feeding it into the NLL loss crashes after a couple of iterations, since some probabilities quickly become very close to 0 and underflow/overflow issues start to appear. The calculation would be numerically very unstable.
In this particular case, what would be the correct way to calculate CE (or NLL) loss, without losing the numerical stability?
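One standard fix, sketched below (not from the original thread: it assumes you can get the pre-softmax logits of each expert and of the gating network, and the helper name is made up), is to stay in log space throughout and combine the mixture components with torch.logsumexp.

```python
import torch
import torch.nn.functional as F

def stable_moe_nll(expert_logits, gate_logits, targets):
    """Numerically stable NLL for a mixture of experts.

    expert_logits: (B, K, C) raw (pre-softmax) outputs of the K experts
    gate_logits:   (B, K)    raw (pre-softmax) outputs of the gating net
    targets:       (B,)      class indices

    log p(y|x) = logsumexp_k [ log p(z_k|x) + log p(y|z_k,x) ]
    """
    log_gate = F.log_softmax(gate_logits, dim=1)          # (B, K)
    log_experts = F.log_softmax(expert_logits, dim=2)     # (B, K, C)
    # log-probability each expert assigns to the true class
    idx = targets.view(-1, 1, 1).expand(-1, expert_logits.size(1), 1)
    log_p_y = log_experts.gather(2, idx).squeeze(2)       # (B, K)
    log_mix = torch.logsumexp(log_gate + log_p_y, dim=1)  # (B,)
    return -log_mix.mean()

# usage sketch with random tensors standing in for network outputs
B, K, C = 4, 3, 5
expert_logits = torch.randn(B, K, C, requires_grad=True)
gate_logits = torch.randn(B, K, requires_grad=True)
loss = stable_moe_nll(expert_logits, gate_logits, torch.randint(0, C, (B,)))
loss.backward()   # gradients flow without ever materializing probabilities
```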

Is the loss function='Multiclass' in catboost same as log loss if I am doing a multiclassification problem?

I am making a multiclass prediction model using CatBoost. The final solution should have minimum Logloss error, but Logloss is not present in CatBoost; they have something called 'MultiClass' as the loss function. Are they the same? If not, how can I measure the accuracy of the CatBoost model in terms of Logloss?
Are they the same? Effectively, yes...
The CatBoost documentation describes the calculation of 'MultiClass' loss as what is generally known as multinomial/multiclass cross-entropy loss. That is, effectively, a log softmax applied to the classifier output 'a' to produce values that can be interpreted as probabilities, followed by negative log likelihood loss (NLLLoss), wiki1 & wiki2.
Their documentation also describes the calculation of 'LogLoss', which again is NLLLoss, but applied to 'p', which they describe here as the result of applying the sigmoid function to the classifier output. Since NLLLoss is reworked for the binary problem, only a single class probability is calculated, using 'p' and '1-p' for each class. And in this special (binary) case, the use of sigmoid and softmax is equivalent.
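The binary equivalence is easy to check numerically (a toy sketch): a two-class softmax over the logits [z, 0] produces exactly [sigmoid(z), 1 - sigmoid(z)].

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

z = 1.3
print(softmax(np.array([z, 0.0])))      # [0.7858, 0.2142]
print(sigmoid(z), 1.0 - sigmoid(z))     # 0.7858  0.2142
```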
How can I measure the CatBoost model in terms of Logloss?
They describe a method to produce desired metrics on given data.
Be careful not to confuse the loss/objective function 'loss_function' with the evaluation metric 'eval_metric'; in this instance, however, the same function can be used for both, as listed in their supported metrics.
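As a concrete sketch (the dataset here is synthetic; parameter names follow the standard CatBoost Python API), you can train with 'MultiClass' and score the predicted probabilities with an independent log-loss implementation such as scikit-learn's:

```python
from catboost import CatBoostClassifier
from sklearn.datasets import make_classification
from sklearn.metrics import log_loss
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_classes=3,
                           n_informative=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = CatBoostClassifier(loss_function='MultiClass',
                           eval_metric='MultiClass',
                           iterations=200, verbose=0)
model.fit(X_tr, y_tr)

# predict_proba gives per-class probabilities; sklearn's log_loss is the
# multiclass cross entropy, i.e. the quantity 'MultiClass' optimizes.
print(log_loss(y_te, model.predict_proba(X_te)))
```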
Hope this helps!
Log loss is not a loss function but a metric to measure the performance of a classification model where the prediction is a probability value between 0 and 1.
Learn more here.

Neural Network Custom Binary Prediction

I am trying to design a neural network that makes a custom binary prediction.
Normally to do binary prediction, I would use a softmax as my last layer, and then my loss could be the difference between the prediction I made and the true binary value.
However, what if I don't want to use a softmax layer? Instead, I output a real-valued number and check whether some condition on this number is true. In a really simple case, I check whether this number is positive: if it is, I predict 1, else I predict 0. Let's say I want all the numbers to be positive, so the true predictions should all be 1, and I want to train this network so that it outputs all positive numbers. I am confused as to how to formulate a loss function for this problem so that I am able to backpropagate and train the network.
Does anyone have an idea how to create this kind of network?
I am confused as to how to formulate a loss function for this problem, so that I am able to backpropagate and train the network.
Here's how you should approach it. Effectively, you need to transform the labels into positive and negative target values (say +1 and -1) and solve the resulting regression problem. The loss function can be a simple L1 or L2 loss. The network will try to learn to output a prediction close to the training target, which you can afterwards interpret by checking whether it is closer to one target or the other, i.e. positive or negative. You can even go ahead and make some targets larger (e.g. +2 or +10) to emphasize that those examples are very important. Example code: linear regression in tensorflow.
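Here is a minimal sketch of the same idea in PyTorch (the example linked above uses TensorFlow; all names below are illustrative): regress onto -1/+1 targets with an L2 loss, then threshold the output at zero to recover the binary prediction.

```python
import torch
import torch.nn as nn

# toy data: 2-D inputs, labels 0/1 mapped to regression targets -1/+1
X = torch.randn(256, 2)
y = (X[:, 0] + X[:, 1] > 0).float()
targets = 2.0 * y - 1.0                       # 0/1 -> -1/+1

model = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()                        # simple L2 loss

for _ in range(200):
    opt.zero_grad()
    out = model(X).squeeze(1)
    loss = loss_fn(out, targets)
    loss.backward()
    opt.step()

# prediction rule: positive output -> class 1, else class 0
preds = (model(X).squeeze(1) > 0).float()
print((preds == y).float().mean())            # training accuracy
```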
However, I simply have to warn you that this approach has serious drawbacks; see for instance this question. One outlier in the training data can easily skew your predictions. Classification with softmax + cross-entropy loss is more stable, which is why it is almost always the better choice.

Neural Networks normalizing output data

I have training data for a neural network along with the expected outputs. Each input is a 10-dimensional vector and has one expected output. I have normalised the training data using a Gaussian (z-score), but I don't know how to normalise the outputs, since the output has only a single dimension. Any ideas?
Example:
Raw input vector: -128.91, 71.076, -100.75, 4.2475, -98.811, 77.219, 4.4096, -15.382, -6.1477, -361.18
Normalised input vector: -0.6049, 1.0412, -0.3731, 0.4912, -0.3571, 1.0918, 0.4925, 0.3296, 0.4056, -2.5168
The raw expected output for the above input is 1183.6 but I don't know how to normalise that. Should I normalise the expected output as part of the input vector?
From the looks of your problem, you are trying to implement some sort of regression algorithm. For regression problems you don't normally normalize the outputs. For the training data you provide for a regression system, the expected output should be within the range you're expecting, or simply whatever data you have for the expected outputs.
Therefore, you can normalize the training inputs to allow the training to go faster, but you typically don't normalize the target outputs. When it comes to testing time or providing new inputs, make sure you normalize the data in the same way that you did during training. Specifically, use exactly the same normalization parameters for any test inputs into the network as you used during training.
One important remark: you normalized the elements of a single input vector, i.e. across its 10 dimensions. With a one-dimensional output, that approach indeed leaves you nothing to normalize the output with.
The correct way is to take a complete batch of training data, say N input (and output) vectors, and normalize each dimension (variable) individually, using the N samples. Thus, even for a one-dimensional output, you have N samples to compute normalization statistics from, and the vector space of your input is not distorted.
Normalizing the output dimension is usually required when the scales of the output variables are significantly different. After training, you should use the same normalization parameters (e.g., for z-score, the "mean" and "std") as you obtained from the training data. This puts new (unseen) data into the same scale space as in training.
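A small numpy sketch of this recipe (variable names are illustrative): compute per-dimension statistics over the N training samples, reuse them at test time, and undo the target scaling on predictions if you normalized the outputs.

```python
import numpy as np

rng = np.random.default_rng(0)
X_train = rng.normal(10, 50, size=(100, 10))   # N=100 samples, 10 dims
y_train = rng.normal(1000, 300, size=100)      # 1-D targets, large scale

# statistics per dimension, computed over the N samples (axis=0)
x_mean, x_std = X_train.mean(axis=0), X_train.std(axis=0)
y_mean, y_std = y_train.mean(), y_train.std()

X_train_n = (X_train - x_mean) / x_std
y_train_n = (y_train - y_mean) / y_std          # optional for regression

# at test time, reuse the *training* statistics, never recompute them
X_test = rng.normal(10, 50, size=(5, 10))
X_test_n = (X_test - x_mean) / x_std

# if the net was trained on normalized targets, map predictions back
y_pred_n = np.zeros(5)                          # stand-in for net outputs
y_pred = y_pred_n * y_std + y_mean              # back to the original scale
```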

Is the Keras implementation of dropout correct?

The Keras implementation of dropout references this paper.
The following excerpt is from that paper:
The idea is to use a single neural net at test time without dropout.
The weights of this network are scaled-down versions of the trained
weights. If a unit is retained with probability p during training, the
outgoing weights of that unit are multiplied by p at test time as
shown in Figure 2.
The Keras documentation mentions that dropout is only used at train time, and the following line from the Dropout implementation
x = K.in_train_phase(K.dropout(x, level=self.p), x)
seems to indicate that indeed outputs from layers are simply passed along during test time.
Further, I cannot find code which scales down the weights after training is complete as the paper suggests. My understanding is this scaling step is fundamentally necessary to make dropout work, since it is equivalent to taking the expected output of intermediate layers in an ensemble of "subnetworks." Without it, the computation can no longer be considered sampling from this ensemble of "subnetworks."
My question, then, is where is this scaling effect of dropout implemented in Keras, if at all?
Update 1: Ok, so Keras uses inverted dropout, though it is called dropout in the Keras documentation and code. The link http://cs231n.github.io/neural-networks-2/#reg doesn't seem to indicate that the two are equivalent. Nor does the answer at https://stats.stackexchange.com/questions/205932/dropout-scaling-the-activation-versus-inverting-the-dropout. I can see that they do similar things, but I have yet to see anyone say they are exactly the same. I think they are not.
So a new question: Are dropout and inverted dropout equivalent? To be clear, I'm looking for mathematical justification for saying they are or aren't.
Yes, it is implemented properly. Since dropout was invented, people have also improved it from an implementation point of view. Keras uses one of these techniques. It's called inverted dropout and you may read about it here.
UPDATE:
To be honest, in the strict mathematical sense these two approaches are not equivalent. In the inverted case you multiply every hidden activation by the reciprocal of the dropout parameter. But because differentiation is linear, this is equivalent to multiplying the whole gradient by the same factor. To overcome this difference you would have to set a different learning rate. From this point of view the two approaches differ. But from a practical point of view they are equivalent, because:
If you use a method which sets the learning rate automatically (like RMSProp or Adagrad), it will make almost no change to the algorithm.
If you use a method where you set the learning rate manually, you must take into account the stochastic nature of dropout: because some neurons are turned off during the training phase (which does not happen during the test/evaluation phase), you must rescale your learning rate to compensate. Probability theory gives the best rescaling factor: the reciprocal of the dropout parameter, which makes the expected length of the loss-function gradient the same in both the training and the test/evaluation phases.
Of course, both points above are about the inverted dropout technique.
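A toy numpy sketch of the two variants (not Keras's actual code): both keep the expected activation at test time consistent with training; they differ only in when the scaling happens.

```python
import numpy as np

rng = np.random.default_rng(0)
a = np.array([1.0, 2.0, 3.0])    # activations of one layer
p = 0.8                          # keep probability
trials = 100_000

# vanilla dropout: no scaling at train time, multiply by p at test time
vanilla_train = np.mean([a * (rng.random(3) < p) for _ in range(trials)], axis=0)
vanilla_test = a * p

# inverted dropout: scale by 1/p at train time, leave test time untouched
inverted_train = np.mean([a * (rng.random(3) < p) / p for _ in range(trials)], axis=0)
inverted_test = a

print(vanilla_train, vanilla_test)    # both ~= p * a
print(inverted_train, inverted_test)  # both ~= a
```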
Excerpted from the original Dropout paper (Section 10):
In this paper, we described dropout as a method where we retain units with probability p at training time and scale down the weights by multiplying them by a factor of p at test time. Another way to achieve the same effect is to scale up the retained activations by multiplying by 1/p at training time and not modifying the weights at test time. These methods are equivalent with appropriate scaling of the learning rate and weight initializations at each layer.
Note, though, that while Keras's dropout layer is implemented using inverted dropout, the rate parameter is the opposite of keep_rate.
keras.layers.Dropout(rate, noise_shape=None, seed=None)
Dropout consists in randomly setting a fraction rate of input units to
0 at each update during training time, which helps prevent
overfitting.
That is, rate sets the fraction of units to drop, not the fraction to keep, which is what you might expect with inverted dropout.
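A quick check of the rate semantics (using the current tf.keras API, which may differ in detail from the Keras version discussed above):

```python
import tensorflow as tf

layer = tf.keras.layers.Dropout(rate=0.2)   # drops 20% of units, keeps 80%
x = tf.ones((1, 10000))

train_out = layer(x, training=True)
print(float(tf.reduce_mean(train_out)))   # ~1.0: survivors scaled by 1/0.8
print(float(tf.reduce_max(train_out)))    # 1.25, i.e. 1/(1 - rate)

test_out = layer(x, training=False)
print(float(tf.reduce_mean(test_out)))    # exactly 1.0: identity at test time
```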
Keras Dropout
