I am trying to perform a backward pass through my network, but I don't want to update the network's weights when I do the backward pass.
output = net:forward(input)
err = criterion:forward(output, label)
df_do = criterion:backward(output, label)
net:backward(input, df_do)
I'm assuming this can be done using either of these two methods:
accGradParameters(input, gradOutput, scale)
accUpdateGradParameters(input, gradOutput, learningRate)
Can I do this using the optim package in Torch?
Related
I am looking for a way to re-initialize a layer's weights in an existing Keras pre-trained model.
I am using Python with Keras and need to use transfer learning.
I use the following code to load the pre-trained Keras models:
from keras.applications import vgg16, inception_v3, resnet50, mobilenet
vgg_model = vgg16.VGG16(weights='imagenet')
I read that when using a dataset that is very different from the original dataset, it might be beneficial to create new layers on top of the lower-level features that we have in the trained net.
I found how to allow fine-tuning of parameters, and now I am looking for a way to reset a selected layer so it can be retrained. I know I can create a new model, use layers up to n-1 as input and add a new layer n on top, but I am looking for a way to reset the parameters of an existing layer in an existing model.
For whatever reason you may want to re-initialize the weights of a single layer k, here is a general way to do it:
from keras.applications import vgg16
from keras import backend as K
vgg_model = vgg16.VGG16(weights='imagenet')
sess = K.get_session()
initial_weights = vgg_model.get_weights()
from keras.initializers import glorot_uniform # Or your initializer of choice
k = 30 # say for the 30th weight array (note: get_weights() returns a flat list of kernel and bias arrays, not one entry per layer)
new_weights = [glorot_uniform()(initial_weights[i].shape).eval(session=sess) if i==k else initial_weights[i] for i in range(len(initial_weights))]
vgg_model.set_weights(new_weights)
You can easily verify that initial_weights[k]==new_weights[k] returns an array of False, while initial_weights[i]==new_weights[i] for any other i returns an array of True.
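For instance, a quick sanity check with NumPy (a minimal sketch, reusing the initial_weights, new_weights and k variables from the snippet above) might look like this:
import numpy as np
# Only the k-th weight array should have changed after re-initialization.
assert not np.array_equal(initial_weights[k], new_weights[k])
assert all(np.array_equal(initial_weights[i], new_weights[i])
           for i in range(len(initial_weights)) if i != k)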
Here is a toy model. I print the model parameters before calling backward exactly once, then print the model parameters again. The parameters are unchanged. If I add the line model:updateParameters(<learning_rate>) after calling backward, I see the parameters update.
But in the example code I've seen, for example https://github.com/torch/demos/blob/master/train-a-digit-classifier/train-on-mnist.lua, no one actually calls updateParameters. Also, it doesn't look like optim.sgd, optim.adam, or nn.StochasticGradient ever call updateParameters either. What am I missing here? How do the parameters get updated automatically? If I must call updateParameters, why do no examples do that?
require 'nn'
require 'optim'
local model = nn.Sequential()
model:add(nn.Linear(4, 1, false))
local params, grads = model:getParameters()
local criterion = nn.MSECriterion()
local inputs = torch.randn(1, 4)
local labels = torch.Tensor{1}
print(params)
model:zeroGradParameters()
local output = model:forward(inputs)
local loss = criterion:forward(output, labels)
local dfdw = criterion:backward(output, labels)
model:backward(inputs, dfdw)
-- With the line below uncommented, the parameters are updated:
-- model:updateParameters(1000)
print(params)
backward() is not supposed to change the parameters; it merely computes the derivatives of the error function with respect to all of the parameters of the network.
In general, training is the following sequence of steps:
repeat
  local output = model:forward(input) --see what model predicts
  local loss = criterion:forward(output, answer) --see how wrong it is
  local loss_grad = criterion:backward(output, answer) --see where it is the most wrong
  model:backward(input,loss_grad) --see how much each particular parameter of network is responsible for error
  model:updateParameters(learningRate) --fix the parameters based on their wrongness
  model:zeroGradParameters() --network parameters are different now, so old gradients are of no use now
until is_user_satisfied()
updateParameters implements the simplest optimization algorithm here (plain gradient descent).
If so inclined, you may use your own function instead. In theory, you could loop explicitly over the network's storages to update their values.
In practice, you usually call getParameters():
local model_parameters,model_parameters_gradient=model:getParameters()
which yields flattened tensors containing all of the parameter values and all of the gradients. These tensors are views into the network, so changes to them affect the network.
You may not know which point in the network corresponds to which value, but most optimizers do not care about that.
The typical optim.sgd usage (as in the demo) is as follows:
optim.sgd(
function_to_return_error_and_its_gradients,
model_parameters,
optimizer_special_settings)
The specifics are covered in the demo, but what matters here is that the optimizer receives model_parameters as an argument, which gives it write access to the network. It is not explicitly stated in the documentation, but the source code shows that the optimizer changes the values of its input tensor (note also that it returns the same tensor it received).
Recently I have learned about Generative Adversarial Networks.
For training the Generator, I am somewhat confused about how it learns. Here is an implementation of a GAN:
# train generator
z = Variable(xp.random.uniform(-1, 1, (batchsize, nz), dtype=np.float32))
x = gen(z)
yl = dis(x)
L_gen = F.softmax_cross_entropy(yl, Variable(xp.zeros(batchsize, dtype=np.int32)))
L_dis = F.softmax_cross_entropy(yl, Variable(xp.ones(batchsize, dtype=np.int32)))
# train discriminator
x2 = Variable(cuda.to_gpu(x2))
yl2 = dis(x2)
L_dis += F.softmax_cross_entropy(yl2, Variable(xp.zeros(batchsize, dtype=np.int32)))
#print "forward done"
o_gen.zero_grads()
L_gen.backward()
o_gen.update()
o_dis.zero_grads()
L_dis.backward()
o_dis.update()
So it computes a loss for the Generator as it is mentioned in the paper.
However, it calls the Generator backward function based on the Discriminator output. The discriminator output is just a number (not an array).
But we know that, in general, to train a network we compute a loss function at the last layer (a loss between the last layer's output and the real output) and then compute the gradients. So for example, if the output is 64*64, we compare it with a 64*64 image, compute the loss, and do the backpropagation.
However, in the code that I see for Generative Adversarial Networks, the Generator's loss is computed from the Discriminator output (which is just a number), and then backpropagation is called for the Generator. The Generator's last layer is, for example, 64*64 pixels, but the Discriminator loss is 1*1 (which is different from the usual networks). So I do not understand how it causes the Generator to be learned and trained.
I thought that if we attached the two networks (the Generator and the Discriminator), called backpropagation, but only updated the Generator's parameters, it would make sense and should work. But what I see in the code is totally different.
So I am asking: how is this possible?
Thanks
You say 'However, it calls the Generator backward function based on the Discriminator output. The discriminator output is just a number (not an array)', but the loss is always a scalar value. When we compute the mean squared error of two images, it is also a scalar value.
L_adversarial = E[log(D(x))] + E[log(1−D(G(z)))]
where x is drawn from the real data distribution
and z is drawn from the latent distribution, which is transformed by the Generator
Coming back to your actual question: the Discriminator network has a sigmoid activation function in its last layer, which means it outputs values in the range [0,1]. The Discriminator tries to maximize this objective by maximizing both terms in the sum. The maximum value of the first term is 0 and occurs when D(x) is 1; the maximum value of the second term is also 0 and occurs when 1−D(G(z)) is 1, which means D(G(z)) is 0. So the Discriminator performs binary classification by maximizing this objective: it tries to output 1 when it is fed x (real data) and 0 when it is fed G(z) (generated fake data).
The Generator, on the other hand, tries to minimize this objective; in other words, it tries to fool the Discriminator by generating fake samples that are similar to real samples. With time, both the Generator and the Discriminator get better and better. This is the intuition behind GANs.
Here is the same idea in PyTorch:
bce_loss = nn.BCELoss() #bce_loss = -ylog(y_hat)-(1-y)log(1-y_hat)[similar to L_adversarial]
Discriminator = ..... #some network
Generator = ..... #some network
optimizer_generator = ....... #some optimizer for generator network
optimizer_discriminator = ....... #some optimizer for discriminator network
z = ...... #some latent data distribution that is transformed by the generator
real = ..... #real data distribution
#####################
#Update Discriminator
#####################
fake = Generator(z)
fake_prediction = Discriminator(fake.detach()) #detach so this update does not backpropagate into the Generator
real_prediction = Discriminator(real)
optimizer_discriminator.zero_grad() #clear old gradients before accumulating new ones
discriminator_loss = bce_loss(fake_prediction,torch.zeros(batch_size))+bce_loss(real_prediction,torch.ones(batch_size))
discriminator_loss.backward()
optimizer_discriminator.step()
#################
#Update Generator
#################
fake = Generator(z)
fake_prediction = Discriminator(fake)
optimizer_generator.zero_grad() #clear old gradients before accumulating new ones
generator_loss = bce_loss(fake_prediction,torch.ones(batch_size))
generator_loss.backward()
optimizer_generator.step()
The general idea I am trying to realize is a seq2seq model (taken from the translate.py example in the models, based on the seq2seq class). This trains well.
Furthermore, I am using the hidden state of the RNN after all the encoding is done, right before decoding starts (I call it the "hidden state at end of encoding"). I feed this hidden state at end of encoding into a further sub-graph, which I call "prices" (see below). The training gradients of this sub-graph backpropagate not only through this additional sub-graph, but also back into the encoder part of the RNN (which is what I want and need).
The plan is to add more such sub-graphs to the hidden state at end of encoding, as I want to analyze the input phrases in a variety of ways.
Now, during training, when I evaluate and train both sub-graphs (encoder+prices AND encoder+decoder) at the same time, the net does NOT converge. However, if I execute the training in the following way (pseudo-code):
if global_step % 10 == 0:
    execute-the-price-training_code
else:
    execute-the-decoder-training_code
This way I am not training both sub-graphs simultaneously. Now it does converge, but the encoder+decoder part converges MUCH more slowly than if I ONLY train this part and never train the prices sub-graph.
My question is: I should be able to train both sub-graphs simultaneously, but I probably have to rescale the gradients flowing back into the hidden state at end of encoding, since there we get gradients from both the prices sub-graph AND the decoder sub-graph. How should this rescaling be done? I didn't find any papers describing such an undertaking, but maybe I am searching with the wrong keywords.
Here is the training-part of the code:
This is the (almost original) training-op-preparation:
if not forward_only:
  self.gradient_norms = []
  self.updates = []
  opt = tf.train.AdadeltaOptimizer(self.learning_rate)
  for bucket_id in xrange(len(buckets)):
    tf.scalar_summary("seq2seq loss", self.losses[bucket_id])
    gradients = tf.gradients(self.losses[bucket_id], var_list_seq2seq)
    clipped_gradients, norm = tf.clip_by_global_norm(gradients, max_gradient_norm)
    self.gradient_norms.append(norm)
    self.updates.append(opt.apply_gradients(zip(clipped_gradients, var_list_seq2seq), global_step=self.global_step))
Now, additionally, I am running a second sub-graph that takes the hidden state at end of encoding as input:
with tf.name_scope('prices') as scope:
  #First layer
  W_price_first_layer = tf.Variable(tf.random_normal([num_layers*size, self.prices_hidden_layer_size], stddev=0.35), name="W_price_first_layer")
  B_price_first_layer = tf.Variable(tf.zeros([self.prices_hidden_layer_size]), name="B_price_first_layer")
  self.output_price_first_layer = tf.add(tf.matmul(self.hidden_state, W_price_first_layer), B_price_first_layer)
  self.activation_price_first_layer = tf.nn.sigmoid(self.output_price_first_layer)
  #self.activation_price_first_layer = tf.nn.Relu(self.output_price_first_layer)
  #Second layer to softmax (price ranges)
  W_price = tf.Variable(tf.random_normal([self.prices_hidden_layer_size, self.prices_bit_size], stddev=0.35), name="W_price")
  W_price_t = tf.transpose(W_price)
  B_price = tf.Variable(tf.zeros([self.prices_bit_size]), name="B_price")
  self.output_price_second_layer = tf.add(tf.matmul(self.activation_price_first_layer, W_price),B_price)
  self.price_prediction = tf.nn.softmax(self.output_price_second_layer)
  self.label_price = tf.placeholder(tf.int32, shape=[self.batch_size], name="price_label")
  #Remember the prices trainables
  var_list_prices = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, "prices")
  var_list_all = tf.trainable_variables()
  #Backprop
  self.loss_price = tf.nn.sparse_softmax_cross_entropy_with_logits(self.output_price_second_layer, self.label_price)
  self.loss_price_scalar = tf.reduce_mean(self.loss_price)
  self.optimizer_price = tf.train.AdadeltaOptimizer(self.learning_rate_prices)
  self.training_op_price = self.optimizer_price.minimize(self.loss_price, var_list=var_list_all)
Thx a bunch
I expect that running two optimizers simultaneously will lead to inconsistent gradient updates on the common variables, and this might be causing your training not to converge.
Instead, if you add the scalar loss from each sub-network to the "losses collection" (e.g. via tf.contrib.losses.add_loss() or tf.add_to_collection(tf.GraphKeys.LOSSES, ...)), you can use tf.contrib.losses.get_total_loss() to get a single loss value that can be passed to a single standard TensorFlow tf.train.Optimizer subclass. TensorFlow will derive the appropriate back-prop computation for your split network.
The get_total_loss() method simply computes an unweighted sum of the values that have been added to the losses collection. I'm not familiar with the literature on how or if you should scale these values, but you can use any arbitrary (differentiable) TensorFlow expression to combine the losses and pass the result to a single optimizer.
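As a minimal sketch of that last option (not your exact code; seq2seq_loss, price_loss and price_loss_weight are placeholders for your own loss tensors and weighting), you could combine the two losses into one scalar and reuse the clip-and-apply pattern from your question with a single optimizer:
# Combine both scalar losses; the weight is a knob you would have to tune.
total_loss = seq2seq_loss + price_loss_weight * price_loss

opt = tf.train.AdadeltaOptimizer(learning_rate)
gradients = tf.gradients(total_loss, tf.trainable_variables())
clipped_gradients, norm = tf.clip_by_global_norm(gradients, max_gradient_norm)
train_op = opt.apply_gradients(zip(clipped_gradients, tf.trainable_variables()),
                               global_step=global_step)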
I am trying to implement a neural network with multiple layers. I am trying to understand whether what I have done is correct and, if not, how to debug it. I define my neural network in the following manner (I initialise the lookup table layer with some previously learned embeddings):
lookupTableLayer = nn.LookupTable(vector:size()[1], d)
for i=1,vector:size()[1] do
lookupTableLayer.weight[i] = vector[i]
end
mlp=nn.Sequential();
mlp:add(lookupTableLayer)
mlp:add(nn.TemporalConvolution(d,H,K,dw))
mlp:add(nn.Tanh())
mlp:add(nn.Max(1))
mlp:add(nn.Tanh())
mlp:add(nn.Linear(H,d))
Now, to train the network, I loop through every training example and for every example I call gradUpdate() which has this code (this is straight from the examples):
function gradUpdate(mlp, x, indexY, learningRate)
local pred = mlp:forward(x)
local gradCriterion = findGrad(pred, indexY)
mlp:zeroGradParameters()
mlp:backward(x, gradCriterion)
mlp:updateParameters(learningRate)
end
The findGrad function is just an implementation of the WARP loss, which returns the gradient w.r.t. the output. I am wondering if this is all I need? I assume this will backpropagate and update the parameters of all the layers. To check this, I trained the network and saved the model. Then I loaded the model and did:
{load saved mlp after training}
lookuptable = mlp:findModules('nn.LookupTable')[1]
Now, I checked vector[1] and lookuptable.weight[1], and they were the same. I can't understand why the weights in the lookup table layer did not get updated. What am I missing here?
Looking forward to your replies!