Loss of Multi-Output Model in Pytorch - machine-learning

I have a multi-output model in PyTorch. I train both outputs with the same loss function and combine the two losses before backpropagating, but when one output's loss decreases the other's increases, and vice versa. How can I fix this?
def forward(self, x):
    # neural network arch. forward pass
    x = self.dense1(x)
    x1 = self.dense2(x)
    x2 = self.dense2(x)
    x1 = F.log_softmax(x1, dim=1)
    x2 = F.log_softmax(x2, dim=1)
    return x1, x2

out1, out2 = model(data)
loss1 = NLLL(out1, target1)
loss2 = NLLL(out2, target2)
loss = loss1 + loss2
loss.backward()
When loss1 decreases, loss2 increases, and when loss2 decreases, loss1 increases. How can I fix this?
Can an operator other than '+' be used to combine the losses, or should I put weights on the different losses?

If you have two different loss functions and you have finished the forward passes for both of them, it is reasonable to call
(loss1 + loss2).backward()
This is computationally efficient, because the backward pass runs only once over the shared graph.
What you want is for your model to learn how to minimize the combined loss.
Some code is missing from your example, but you should have an nn.Module, probably a custom module, whose parameters are the ones being updated to lower the loss.
Summing the losses as loss = loss1 + loss2 is a sound approach.
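If the two heads keep pulling against each other, a common first step is to weight the losses before summing them, so the combined objective reflects how much you care about each task. Below is a minimal sketch; the weights w1 and w2 are hypothetical hyperparameters (not from your post) that you would tune on a validation set:

import torch.nn as nn

criterion = nn.NLLLoss()   # matches the NLL losses used in the question
w1, w2 = 1.0, 0.5          # hypothetical task weights; tune on validation data

# model, data, target1, target2 are assumed to be defined as in the question
out1, out2 = model(data)
loss1 = criterion(out1, target1)
loss2 = criterion(out2, target2)

# A weighted sum is still a single scalar, so one backward() call suffices;
# '+' remains the combining operator, the weights just rebalance the tasks.
loss = w1 * loss1 + w2 * loss2
loss.backward()

If neither task should dominate, you can also normalize the weights so they sum to 1, or adjust them during training based on the two validation losses.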

Related

Discriminator's loss stuck at value = 1 while training conditional GAN

I am training a conditional GAN that generates image time series (similar to video prediction). I built a conditional GAN based on this paper. However, several problems occurred while I was training the cGAN.
Problems with training the cGAN:
The discriminator's loss is stuck at one.
The generator's loss does not seem to be affected by the discriminator, no matter how I adjust the hyperparameters related to the discriminator.
Training loss of the discriminator:
D_loss = (fake_D_loss + true_D_loss) / 2
fake_D_loss = Hinge_loss(D(G(x, z)))
true_D_loss = Hinge_loss(D(x, y))
The margin of the hinge loss is 1.
Training loss of the generator:
GD_loss = -torch.mean(D(G(x, z)))
G_loss = weighted MAE
(Figures: gradient flow of the discriminator and of the generator.)
Several settings of the cGAN:
The output layer of the discriminator is a linear sum.
The discriminator is trained twice per epoch, while the generator is trained only once.
The numbers of neurons in the generator and discriminator are exactly the same as in the paper.
I replaced ReLU (the original setting) with LeakyReLU to avoid NaNs.
I added a gradient norm to avoid the vanishing gradient problem.
Other hyperparameters are listed as follows:

Hyperparameter             | Paper | Mine
---------------------------|-------|-----
number of input images     | 4     | 4
number of predicted images | 18    | 10
batch size                 | 16    | 16
opt_g, opt_d               | Adam  | Adam
lr_g                       | 5e-5  | 5e-5
lr_d                       | 2e-4  | 2e-4
The loss function I use for the discriminator:
def HingeLoss(pred, validity, margin=1.):
    if validity:
        loss = F.relu(margin - pred)
    else:
        loss = F.relu(margin + pred)
    return loss.mean()
The loss function for examining the validity of the predicted image from the generator:
def HingeLossG(pred):
    return -torch.mean(pred)
I use the pytorch_lightning Trainer to train the model. The training code I wrote is as follows.
def training_step(self, batch, batch_idx, optimizer_idx):
    x, y = batch
    x.requires_grad = True
    if self.n_sample > 1:
        pred = [self(x) for _ in range(self.n_sample)]
        pred = torch.mean(torch.stack(pred, dim=0), dim=0)
    else:
        pred = self(x)

    ##### TRAIN DISCRIMINATOR #####
    if optimizer_idx == 1:
        true_D_loss = self.discriminator_loss(self.discriminator(x, y), True)
        fake_D_loss = self.discriminator_loss(self.discriminator(x, pred.detach()), False)
        D_loss = (fake_D_loss + true_D_loss) / 2
        return D_loss

    ##### TRAIN GENERATOR #####
    if optimizer_idx == 0:
        G_loss = self.generator_loss(pred, y)
        GD_loss = self.generator_d_loss(self.discriminator(x, pred.detach()))
        train_G_loss = G_loss + GD_loss
        return train_G_loss
I have several guesses as to why these problems may occur:
Since the original model predicts 18 frames rather than 10 (my version), the number of neurons in the original generator may be too large for my case (predicting 10 frames), leading to an overly powerful generator that breaks the balance of training. However, I've tried lowering the generator's learning rate to 1e-5 (originally 5e-5) and increasing the number of discriminator updates to 3 to 5 per generator update, and the generator's loss curve did not change much.
(Figure: various results of training the cGAN.)
I have also adjusted the weights of the generator's loss, but the same problems still occur.
The architecture code of this model: https://github.com/hyungting/DGMR-pytorch

Variational autoencoder (VAE) predicts 1 constant value

I'm currently training a VAE model.
The images in question are microstructure rock images (like these).
I defined a compound loss function as the sum of two terms:
MSE, as my images are grayscale but not binary.
KL divergence.
I was getting NaN values for the loss function, but figured out that a way around this is to use a weighted sum of the two losses. I chose to weight the MSE by the image size (256x256), so it becomes
MSE = MSE x 256 x 256
and to weight the KL divergence by a factor of 0.1.
The NaN problem was solved, but when predicting, my model outputs just one value for the whole image: the output is an array of 256*256 values that are all the same, e.g. 0.502.
Model specs:
10-layer encoder / decoder
Latent vector space of dimension 5
SGD optimizer at lr=0.001
The loss value during training drops from a number in the billions to around 3000 by the 2nd epoch and then fluctuates around that.
Accuracy during training or validation is below 0.001; I've read this metric is irrelevant anyway when it comes to VAEs.
Here is how I sample from the latent distribution:
sample = Lambda(get_sample_from_dist, output_shape=(latent_dim,), name='sample')([mu, log_sigma])

def get_sample_from_dist(args):
    mean_vec, std_dev_vec = args
    eta_vec = K.random_normal(shape=(K.shape(mean_vec)[0], K.int_shape(mean_vec)[1]), mean=0, stddev=1)
    return mean_vec + K.exp(std_dev_vec) * eta_vec
and here is how the encoder generates mu and log_sigma (x is the output of the last encoder layer):
mu = Dense(latent_dim, name='latent_mu')(x)
log_sigma = Dense(latent_dim, name='latent_sigma')(x)
and here is my loss:
def vae_loss_func(inputs, outputs, mu, log_sigma):
    x1 = K.flatten(inputs)
    x2 = K.flatten(outputs)
    reconstruction_loss = losses.mse(x1, x2) * 256**2
    kl_loss = -0.5 * 0.1 * K.sum(1 + log_sigma - K.square(mu) - K.square(K.exp(log_sigma)), axis=-1)
    vae_loss = K.mean(reconstruction_loss + kl_loss)
    return vae_loss
Any thoughts on where things are going wrong?
I tried different weighting factors in the loss function and using strided and dropout layers, but none of these worked. I'm expecting the generated image to vary in pixel value and eventually capture the rock structure.

Generalized dice loss for multi-class segmentation: keras implementation

I just implemented the generalised dice loss (a multi-class version of the dice loss) in Keras, as described in ref:
(my targets are defined as: (batch_size, image_dim1, image_dim2, image_dim3, nb_of_classes))
def generalized_dice_loss_w(y_true, y_pred):
    # Compute weights: "the contribution of each label is corrected by the inverse of its volume"
    Ncl = y_pred.shape[-1]
    w = np.zeros((Ncl,))
    for l in range(0, Ncl):
        w[l] = np.sum(np.asarray(y_true[:, :, :, :, l] == 1, np.int8))
    w = 1 / (w**2 + 0.00001)

    # Compute gen dice coef:
    numerator = y_true * y_pred
    numerator = w * K.sum(numerator, (0, 1, 2, 3))
    numerator = K.sum(numerator)
    denominator = y_true + y_pred
    denominator = w * K.sum(denominator, (0, 1, 2, 3))
    denominator = K.sum(denominator)
    gen_dice_coef = numerator / denominator
    return 1 - 2 * gen_dice_coef
But something must be wrong. I'm working with 3D images that I have to segment into 4 classes (1 background class and 3 object classes; I have an imbalanced dataset). First odd thing: while my training loss and accuracy improve during training (and converge really fast), my validation loss/accuracy are constant through epochs (see image). Second, when predicting on test data, only the background class is predicted: I get a constant volume.
I used the exact same data and script but with categorical cross-entropy loss and got plausible results (object classes are segmented). Which means something is wrong with my implementation. Any idea what it could be?
Plus, I believe it would be useful to the Keras community to have a generalised dice loss implementation, as it seems to be used in most recent semantic segmentation tasks (at least in the medical imaging community).
PS: it seems odd to me how the weights are defined; I get values around 10^-10. Has anyone else tried to implement this? I also tested my function without the weights but get the same problems.
I think the problem here is your weights. Imagine you are trying to solve a multi-class segmentation problem, but in each image only a few classes are ever present. A toy example of this (and the one which led me to this problem) is to create a segmentation dataset from MNIST in the following way.
x = a 28x28 image and y = 28x28x11, where each pixel is classified as background if it is below a normalised grayscale value of 0.4, and otherwise is classified as the digit which is the original class of x. So if you see a picture of the number one, you will have a bunch of pixels classified as one, and the background.
Now in this dataset you will only ever have two classes present in any image. This means that, following your dice loss, 9 of the weights will be
1./(0. + eps) = large
and so for every image we are strongly penalising all 9 non-present classes. An evidently strong local minimum the network wants to find in this situation is to predict everything as the background class.
We do want to penalise any incorrectly predicted classes which are not in the image, but not so strongly. So we just need to modify the weights. This is how I did it:
def gen_dice(y_true, y_pred, eps=1e-6):
    """both tensors are [b, h, w, classes] and y_pred is in logit form"""
    # [b, h, w, classes]
    pred_tensor = tf.nn.softmax(y_pred)
    y_true_shape = tf.shape(y_true)

    # [b, h*w, classes]
    y_true = tf.reshape(y_true, [-1, y_true_shape[1]*y_true_shape[2], y_true_shape[3]])
    y_pred = tf.reshape(pred_tensor, [-1, y_true_shape[1]*y_true_shape[2], y_true_shape[3]])

    # [b, classes]
    # count how many of each class are present in
    # each image; if there are zero, then assign
    # them a fixed weight of eps
    counts = tf.reduce_sum(y_true, axis=1)
    weights = 1. / (counts ** 2)
    weights = tf.where(tf.math.is_finite(weights), weights, eps)

    multed = tf.reduce_sum(y_true * y_pred, axis=1)
    summed = tf.reduce_sum(y_true + y_pred, axis=1)

    # [b]
    numerators = tf.reduce_sum(weights * multed, axis=-1)
    denom = tf.reduce_sum(weights * summed, axis=-1)
    dices = 1. - 2. * numerators / denom
    dices = tf.where(tf.math.is_finite(dices), dices, tf.zeros_like(dices))
    return tf.reduce_mean(dices)
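For completeness, here is a hedged usage sketch (my addition, not part of the original answer). It assumes a Keras model whose final layer emits raw logits of shape [batch, height, width, classes] and one-hot targets of the same shape; the layer sizes and class count below are illustrative only.

import tensorflow as tf

# Hypothetical segmentation model; the last layer emits logits, since
# gen_dice applies tf.nn.softmax itself.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu",
                           input_shape=(128, 128, 1)),
    tf.keras.layers.Conv2D(11, 1, padding="same"),  # 11 classes, raw logits
])

# Pass the function itself as the loss; Keras calls it as loss(y_true, y_pred).
model.compile(optimizer="adam", loss=gen_dice)
# model.fit(x_train, y_train_onehot, ...)  # y_train_onehot has shape [b, 128, 128, 11]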

TensorFlow binary classification task bad accuracy, but SciKit-Learn GBM works well

I used the following TensorFlow implementation for a binary classification task and got really bad accuracy. However, when I trained the same dataset with a sklearn.ensemble.GradientBoostingClassifier without any tuning, the result was pretty good. When I took a close look at the neural network's out-of-sample predictions, I realized most of the predictions were the positive class.
              precision    recall  f1-score   support

           0       0.01      1.00      0.02         8
           1       1.00      0.37      0.55      1630

 avg / total       1.00      0.38      0.54      1638
The implementation of the 2-layer fully connected network:
import math

batch_size = 200
feature_size = len(train_features.columns)

graph = tf.Graph()
with graph.as_default():
    # Input data. For the training data, we use a placeholder that will be fed
    # at run time with a training minibatch.
    tf_train_dataset = tf.placeholder(tf.float32, shape=(batch_size, feature_size))
    tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
    tf_valid_dataset = tf.constant(valid_dataset)
    tf_test_dataset = tf.constant(test_dataset)

    # Variables.
    weights1 = tf.Variable(tf.truncated_normal([feature_size, 512]))
    biases1 = tf.Variable(tf.zeros([512]))
    weights2 = tf.Variable(tf.truncated_normal([512, 512], stddev=0.005))
    biases2 = tf.Variable(tf.zeros([512]))
    weights = tf.Variable(tf.truncated_normal([512, num_labels], stddev=0.005))
    biases = tf.Variable(tf.zeros([num_labels]))

    hidden_layer1 = tf.nn.relu(tf.matmul(tf_train_dataset, weights1) + biases1)
    hidden_layer2 = tf.nn.relu(tf.matmul(hidden_layer1, weights2) + biases2)
    logits = tf.matmul(hidden_layer2, weights) + biases
    loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels))

    # Optimizer.
    optimizer = tf.train.AdamOptimizer(0.0005).minimize(loss)

    # Predictions for the training, validation, and test data.
    train_prediction = tf.nn.softmax(logits)
    valid_hidden_layer1 = tf.nn.relu(tf.matmul(tf_valid_dataset, weights1) + biases1)
    valid_hidden_layer2 = tf.nn.relu(tf.matmul(valid_hidden_layer1, weights2) + biases2)
    valid_prediction = tf.nn.softmax(tf.matmul(valid_hidden_layer2, weights) + biases)
    test_hidden_layer1 = tf.nn.relu(tf.matmul(tf_test_dataset, weights1) + biases1)
    test_hidden_layer2 = tf.nn.relu(tf.matmul(test_hidden_layer1, weights2) + biases2)
    test_prediction = tf.nn.softmax(tf.matmul(test_hidden_layer2, weights) + biases)
Any suggestion on how to debug this?
The sklearn GradientBoostingClassifier is a different algorithm from a neural network. It is based on regression trees, which require less fine-tuning to give good performance than neural networks do. This is the trade-off when using neural networks: if you want performance better than alternative algorithms like random forests and SVMs, you need to tune the hyperparameters.
As far as that goes, the first thing you should do is initialize the biases of your ReLU units to a nonzero value. This helps prevent them from entering a regime where they 'die' and give 0 output and 0 gradient forever. You should also try different learning rates; a learning rate that is too high will cause the algorithm not to learn properly, and one that is too low will waste resources.
You should also experiment with the number of neurons and layers. I see you have 512 neurons in each hidden layer, and this might be too much unless your problem is that high-dimensional and you have enough data. What are your training and test/cross-validation errors like? You should keep track of these while you train. If you're getting low training error but high validation error, then you should cut down on the number of neurons because you are overfitting. You could also try having just one hidden layer and see if that helps.
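As a rough sketch of the bias-initialization suggestion (my addition; it uses the same TF1-style API as the question, and the 0.1 constant and 0.005 scale are illustrative values, not prescriptions), these lines would replace the corresponding definitions inside the with graph.as_default(): block:

import tensorflow as tf  # TF1-style API; tf.compat.v1 in TensorFlow 2

# Initialize ReLU biases to a small positive constant instead of zero,
# so the units start in the active region and are less likely to "die".
biases1 = tf.Variable(tf.constant(0.1, shape=[512]))
biases2 = tf.Variable(tf.constant(0.1, shape=[512]))

# The first weight matrix currently uses truncated_normal's default
# stddev of 1.0, much larger than the 0.005 used for the other layers;
# a smaller scale keeps the initial logits and gradients reasonable.
weights1 = tf.Variable(tf.truncated_normal([feature_size, 512], stddev=0.005))

# Sweep the learning rate (e.g. 1e-2, 1e-3, 1e-4) in separate runs and
# compare training vs. validation error to pick one.
learning_rate = 1e-3
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(loss)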

Understanding softmax classifier

I am trying to understand a simple implementation of a Softmax classifier from this link - CS231n - Convolutional Neural Networks for Visual Recognition. Here they implemented a simple softmax classifier. In the example in the link, there are 300 random points in a 2D space, each with an associated label. The softmax classifier learns which point belongs to which class.
Here is the full code of the softmax classifier. Or you can see the link I have provided.
# initialize parameters randomly
W = 0.01 * np.random.randn(D,K)
b = np.zeros((1,K))

# some hyperparameters
step_size = 1e-0
reg = 1e-3 # regularization strength

# gradient descent loop
num_examples = X.shape[0]
for i in xrange(200):

  # evaluate class scores, [N x K]
  scores = np.dot(X, W) + b

  # compute the class probabilities
  exp_scores = np.exp(scores)
  probs = exp_scores / np.sum(exp_scores, axis=1, keepdims=True) # [N x K]

  # compute the loss: average cross-entropy loss and regularization
  corect_logprobs = -np.log(probs[range(num_examples),y])
  data_loss = np.sum(corect_logprobs)/num_examples
  reg_loss = 0.5*reg*np.sum(W*W)
  loss = data_loss + reg_loss
  if i % 10 == 0:
    print "iteration %d: loss %f" % (i, loss)

  # compute the gradient on scores
  dscores = probs
  dscores[range(num_examples),y] -= 1
  dscores /= num_examples

  # backpropagate the gradient to the parameters (W,b)
  dW = np.dot(X.T, dscores)
  db = np.sum(dscores, axis=0, keepdims=True)
  dW += reg*W # regularization gradient

  # perform a parameter update
  W += -step_size * dW
  b += -step_size * db
I can't understand how they computed the gradient here. I assume that they computed the gradient in these lines:
dW = np.dot(X.T, dscores)
db = np.sum(dscores, axis=0, keepdims=True)
dW += reg*W # regularization gradient
But how? I mean, why is the gradient dW equal to np.dot(X.T, dscores)? And why is the gradient db equal to np.sum(dscores, axis=0, keepdims=True)? So how did they compute the gradients for the weights and bias? And why did they compute the regularization gradient?
I am just starting to learn about convolutional neural networks and deep learning, and I heard that CS231n - Convolutional Neural Networks for Visual Recognition is a good starting place for that. I did not know where to place deep-learning-related posts, so I placed them on Stack Overflow. If there is a better place to post questions related to deep learning, please let me know.
The gradients start being computed here:
# compute the gradient on scores
dscores = probs
dscores[range(num_examples),y] -= 1
dscores /= num_examples
First, this sets dscores equal to the probabilities computed by the softmax function. Then, in the second line, it subtracts 1 from the probabilities computed for the correct classes, and in the third line it divides by the number of training samples.
Why does it subtract 1? Because, ideally, you want the probability of the correct label to be 1. So it subtracts what the model should predict from what it actually predicts: if it predicts something close to 1 for the correct class, the difference will be a small negative number (close to zero), so the gradient will be small, because you're close to a solution. Otherwise, the difference will be a large negative number (far from zero), so the gradient will be bigger, and you'll take larger steps towards the solution.
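To make that concrete (this derivation is my addition, not part of the original answer): for one sample with correct class y and scores s,

p_j = \frac{e^{s_j}}{\sum_k e^{s_k}}, \qquad L = -\log p_y, \qquad \frac{\partial L}{\partial s_j} = p_j - \mathbb{1}[j = y]

so subtracting 1 at the correct class, and then dividing by num_examples to average over the batch, is exactly the analytic gradient of the cross-entropy loss with respect to the scores.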
The score function is simply w*x + b. Its derivative with respect to w is x, which is why dW is the dot product between X (transposed) and the gradient of the scores / output layer.
The derivative of w*x + b with respect to b is 1, which is why you simply sum dscores over the examples when backpropagating.
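If it helps to convince yourself, here is a small self-contained numerical check (my addition, not from the original answer) that compares the analytic gradient dW = X.T.dot(dscores) against a finite-difference estimate on random data:

import numpy as np

np.random.seed(0)
N, D, K = 5, 3, 4                      # samples, features, classes
X = np.random.randn(N, D)
y = np.random.randint(0, K, size=N)
W = 0.01 * np.random.randn(D, K)
b = np.zeros((1, K))

def loss_fn(W, b):
    scores = X.dot(W) + b
    exp_scores = np.exp(scores - scores.max(axis=1, keepdims=True))
    probs = exp_scores / exp_scores.sum(axis=1, keepdims=True)
    return -np.log(probs[range(N), y]).mean()

# analytic gradient, exactly as in the snippet above (without regularization)
scores = X.dot(W) + b
exp_scores = np.exp(scores - scores.max(axis=1, keepdims=True))
probs = exp_scores / exp_scores.sum(axis=1, keepdims=True)
dscores = probs.copy()
dscores[range(N), y] -= 1
dscores /= N
dW = X.T.dot(dscores)
db = dscores.sum(axis=0, keepdims=True)

# finite-difference estimate for one entry of W
h = 1e-5
W_pert = W.copy()
W_pert[0, 0] += h
num_grad = (loss_fn(W_pert, b) - loss_fn(W, b)) / h
print(dW[0, 0], num_grad)   # the two numbers agree to several decimals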
Gradient Descent
Backpropagation reduces the cost J of the entire system (here, the softmax classifier), and the problem is to optimize the weight parameter W so as to minimize that cost. Provided the cost function J = f(W) is convex, the gradient descent update W = W - α * f'(W) will arrive at the Wmin which minimizes J. The hyperparameter α is called the learning rate, which we need to tune as well, but not in this answer.
Y should be read as J in the diagram. Imagine you are on the surface of a landscape whose shape is defined by J = f(W) and you need to reach the point Wmin. There is no gravity, so you do not know which way is toward the bottom, but you know the function and your coordinates. How do you know which way to go? You can find the direction from the derivative f'(W) and move to a new coordinate with W = W - α * f'(W). By repeating this, you get closer and closer to the point Wmin.
Backpropagation at the Affine Layer
At a node where a multiply or dot operation happens (affine), the function is J = f(W) = X * W. Suppose there are m fixed two-dimensional coordinates represented as X. How can we find the hyperplane which minimizes J = f(W) = X * W, and its vector W?
Since the derivative of X * W with respect to W is X, we can get closer to the optimal W by repeating the gradient descent update W += -α * X, provided α is appropriate.
Chain Rule
When there are layers after the affine layer, such as the softmax layer and the log-loss layer in the softmax classifier, we can calculate the gradient with the chain rule. In the diagram, replace sigmoid with softmax.
As stated in Computing the Analytic Gradient with Backpropagation on the CS231n page, the gradient contribution from the softmax layer and the log-loss layer is the dscores part. See the Note section below too.
By applying that gradient to the affine layer via the chain rule, the code below is derived, where α is replaced with step_size. In reality, step_size needs to be tuned as well.
dW = np.dot(X.T, dscores)
W += -step_size * dW
The bias gradient is derived by applying the chain rule to the bias b with the gradients (dscores) from the later layers.
db = np.sum(dscores, axis=0, keepdims=True)
Regularization
As stated in the Regularization section of the CS231n page, the cost function (objective) is adjusted by adding a regularization term, which is reg_loss in the code. It is there to reduce overfitting. The intuition, in my understanding, is that if specific features cause overfitting, we can suppress them by inflating the cost with their weight parameters W, because gradient descent will then work to reduce the cost contributions from those weights. Since we do not know which ones, we use all of W. The reason for 0.5 * W*W is that it gives the simple derivative W.
reg_loss = 0.5*reg*np.sum(W*W)
The gradient contribution reg*W comes from the derivative of reg_loss. reg is a hyperparameter to be tuned in real training.
d(reg_loss)/dW = 0.5 * reg * 2 * W = reg * W
It is added to the gradient from the layers after the affine layer.
dW += reg*W # regularization gradient
The derivation of the gradient from the cost including the regularization term is omitted on the CS231n page referenced in the post, probably because it is common practice to just add the gradient of the regularization, but this can be confusing for learners. See Coursera Machine Learning, Week 3, Cost Function by Andrew Ng for the regularization.
Note
The bias parameter b can be absorbed into the weights by appending a constant input X0 = 1 to X, which is why the bias is sometimes omitted.
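In matrix form (my illustration, not part of the original note):

\begin{bmatrix} X & \mathbf{1} \end{bmatrix} \begin{bmatrix} W \\ b \end{bmatrix} = XW + b

so stacking a column of ones onto X and a row b onto W reproduces the affine score X*W + b without a separate bias term.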
