Could anyone please explain the code below:
import torch
import torch.nn as nn
input = torch.randn(5, 3, 10)
h0 = torch.randn(2, 3, 20)
c0 = torch.randn(2, 3, 20)
rnn = nn.LSTM(10,20,2)
output, (hn, cn) = rnn(input, (h0, c0))
print(input)
While calling rnn(input, (h0, c0)) we passed h0 and c0 inside parentheses. What is that supposed to mean? If (h0, c0) represents a single value, then what is that value, and what is the third argument being passed here?
However, in the line rnn = nn.LSTM(10, 20, 2) we pass the arguments to LSTM without any extra parentheses.
Can anyone explain how this function call works?
The assignment rnn = nn.LSTM(10, 20, 2) instantiates a new nn.Module using the nn.LSTM class. Its first three arguments are input_size (here 10), hidden_size (here 20), and num_layers (here 2).
On the other hand, rnn(input, (h0, c0)) corresponds to actually calling the class instance, i.e. running __call__, which is roughly equivalent to the forward function of that module. The __call__ method of nn.LSTM takes two parameters: input (shaped (sequence_length, batch_size, input_size)) and a tuple of two tensors (h_0, c_0) (both shaped (num_layers, batch_size, hidden_size) in the basic use case of nn.LSTM).
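For instance, here is the original snippet again with the resulting shapes annotated (a minimal sketch; the shapes follow the (sequence_length, batch_size, ...) convention described above):

import torch
import torch.nn as nn

rnn = nn.LSTM(10, 20, 2)                 # input_size=10, hidden_size=20, num_layers=2
input = torch.randn(5, 3, 10)            # (sequence_length=5, batch_size=3, input_size=10)
h0 = torch.randn(2, 3, 20)               # (num_layers=2, batch_size=3, hidden_size=20)
c0 = torch.randn(2, 3, 20)               # same shape as h0
output, (hn, cn) = rnn(input, (h0, c0))  # second argument is one tuple holding both states
print(output.shape)  # torch.Size([5, 3, 20])
print(hn.shape)      # torch.Size([2, 3, 20])
print(cn.shape)      # torch.Size([2, 3, 20])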
Please refer to the PyTorch documentation whenever using built-in modules; there you will find the exact definition of the parameter list (the arguments used to initialize the class instance) as well as the input/output specifications (for when you run inference with that module).
You might be confused by the notation, so here's a small example that could help:
tuple as input:

def fn1(x, p):
    a, b = p  # unpack input
    return a*x + b

>>> fn1(2, (3, 1))
7
tuple as output:

def fn2(x):
    return x, (3*x, x**2)  # the output is actually a tuple of an int and a tuple

>>> fn2(2)
(2, (6, 4))
>>> x, (a, b) = fn2(2)  # unpacking
>>> x, a, b
(2, 6, 4)
I have float32 data that I am trying to get ResNet18 to work with. I am using the ResNet model in torchvision (with PyTorch Lightning) and modified it to use single-channel (grayscale) data like so:
import torch
import torch.nn as nn
import pytorch_lightning as pl
from torchvision.models import resnet18

class ResNetMSTAR(pl.LightningModule):
    def __init__(self):
        super().__init__()
        # define model and loss
        self.model = resnet18(num_classes=3)
        self.model.conv1 = nn.Conv2d(1, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
        self.loss = nn.CrossEntropyLoss()

    # @auto_move_data: this decorator automatically handles moving your tensors to GPU if required
    def forward(self, x):
        return self.model(x)

    def training_step(self, batch, batch_no):
        # implement single training step
        x, y = batch
        logits = self(x)
        loss = self.loss(logits, y)
        return loss

    def configure_optimizers(self):
        # choose your optimizer
        return torch.optim.RMSprop(self.parameters(), lr=0.005)
When I try to run this model I am getting the following error:
File "/usr/local/lib64/python3.6/site-packages/torch/nn/functional.py", line 2824, in cross_entropy
return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
RuntimeError: Expected object of scalar type Long but got scalar type Float for argument #2 'target' in call to _thnn_nll_loss_forward
Is there anything that I can do differently to keep this error from happening?
The problem is that the y you are feeding to your cross-entropy loss is not a LongTensor but a FloatTensor. CrossEntropyLoss expects a LongTensor for the target, and raises this error otherwise.
This is an ugly fix:
x, y = batch
y = y.long()
But what I recommend is to go to where the dataset is defined and make sure you are generating long targets; that way you won't reproduce this error if you later change how your training loop works.
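For example, a minimal sketch of what that could look like inside a custom Dataset (the names MSTARDataset, images, and labels are hypothetical, just to illustrate where the conversion belongs):

import torch
from torch.utils.data import Dataset

class MSTARDataset(Dataset):  # hypothetical dataset, for illustration only
    def __init__(self, images, labels):
        self.images = images  # e.g. float32 arrays of shape (1, H, W)
        self.labels = labels  # integer class indices

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        x = torch.as_tensor(self.images[idx], dtype=torch.float32)
        y = torch.as_tensor(self.labels[idx], dtype=torch.long)  # long target for CrossEntropyLoss
        return x, y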
I'm using Julia's Flux library to learn about neural networks. According to the documentation for train! (where train! takes arguments (loss, params, data, opt)):
For each datapoint d in data, compute the gradient of loss with respect to params through backpropagation and call the optimizer opt.
(see source for train!: https://github.com/FluxML/Flux.jl/blob/master/src/optimise/train.jl)
For a conventional NN based on Dense -- let's say with a one-dimensional input and output, i.e. with one feature -- this is easy to understand. Each element in data is a pair of single numbers, an independent sample of 1-d input/output values. train! does forward- and backpropagation on each pair of 1-d samples one at a time. In the process, the loss function is evaluated on each sample. (Do I have this right?)
My question is: how does this extend to a recurrent NN? Take the case of an RNN with 1-d (i.e. one feature) input and output. It seems like there's some ambiguity in how to structure the input and output data, and the results change based on the structure. As one example:
x = [[1], [2], [3]]
y = [4, 5, 6]
data = zip(x, y)
m = RNN(1, 1)
opt = Descent()
loss(x, y) = sum((Flux.stack(m.(x), 1) .- y) .^ 2)
train!(loss, params(m), data, opt)
(loss function taken from: https://github.com/FluxML/Flux.jl/blob/master/docs/src/models/recurrence.md)
In this example, when train! loops through each sample (for d in data), each value of d is a pair of single values from x and y, e.g. ([1], 4). loss is evaluated based on these single values. This is the same as in the Dense case.
On the other hand, consider:
x = [[[1], [2], [3]]]
y = [[4, 5, 6]]
m = RNN(1, 1)
opt = Descent()
loss(x, y) = sum((Flux.stack(m.(x), 1) .- y) .^ 2)
train!(loss, params(m), zip(x, y), opt)
Note that the only difference here is that x and y are nested in an extra pair of square brackets. As a result there's only one d in data, and it's a pair of sequences: ([[1], [2], [3]], [4, 5, 6]). loss can be evaluated on this version of d, and it returns a 1-d value, as required for training. But the value returned by loss is different than in any of the three results from the previous case, so the training process turns out differently.
The point is that both structures are valid in the sense that loss and train! handle them without error. Conceptually, I can make an argument for both structures being correct. But the results are different, and I assume that only one way is right. In other words, for training an RNN, should each d in data be a whole sequence, or a single element from a sequence?
After looking at the following gist and doing some basic tests, I am trying to create a NER system using an LSTM in Keras. I am using a generator and calling fit_generator.
Here is my basic keras model:
model = Sequential([
    Embedding(input_dim=max_features, output_dim=embedding_size, input_length=maxlen, mask_zero=True),
    Bidirectional(LSTM(hidden_size, return_sequences=True)),
    TimeDistributed(Dense(out_size)),
    Activation('softmax')
])
model.compile(loss='binary_crossentropy', optimizer='adam')
My input dimensions seem right:
>>> generator = generate()
>>> i,t = next(generator)
>>> print( "Inputs: {}".format(model.input_shape))
>>> print( "Outputs: {}".format(model.output_shape))
>>> print( "Actual input: {}".format(i.shape))
Inputs: (None, 3949)
Outputs: (None, 3949, 1)
Actual input: (45, 3949)
However when I call:
model.fit_generator(generator, steps_per_epoch=STEPS_PER_EPOCH, epochs=EPOCHS)
I seem to get the following error:
ValueError:
Error when checking target:
expected activation_1 to have 3 dimensions,
but got array with shape (45, 3949)
I have seen a few other examples of similar issues, which leads me to believe I need to Flatten() my inputs before the Activation(), but if I do so I get the following error:
Layer flatten_1 does not support masking,
but was passed an input_mask:
Tensor("embedding_37/NotEqual:0", shape=(?, 3949), dtype=bool)
As per previous questions, my generator is functionally equivalent to:
import numpy as np

def generate():
    maxlen = 3949
    while True:
        inputs = np.random.randint(55604, size=maxlen)
        targets = np.random.randint(2, size=maxlen)
        yield inputs, targets
I am not assuming that I need to Flatten and I am open to additional suggestions.
You either need to return only the last element of the sequence (return_sequences=False):
model = Sequential([
    Embedding(input_dim=max_features, output_dim=embedding_size, input_length=maxlen, mask_zero=True),
    Bidirectional(LSTM(hidden_size)),
    Dense(out_size),
    Activation('softmax')
])
Or remove the masking (mask_zero=False) to be able to use Flatten:
model = Sequential([
    Embedding(input_dim=max_features, output_dim=embedding_size, input_length=maxlen),
    Bidirectional(LSTM(hidden_size, return_sequences=True)),
    TimeDistributed(Dense(out_size)),
    Flatten(),
    Activation('softmax')
])
*Be careful that the output will be out_size x maxlen.
And I think you want the first option.
Edit 1: Looking at the example diagram, it makes a prediction at every timestep, so the softmax activation also needs to be TimeDistributed. The target dimension should be (None, maxlen, out_size):
model = Sequential([
    Embedding(input_dim=max_features, output_dim=embedding_size, input_length=maxlen, mask_zero=True),
    Bidirectional(LSTM(hidden_size, return_sequences=True)),
    TimeDistributed(Dense(out_size)),
    TimeDistributed(Activation('softmax'))
])
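Note also that the error ("expected activation_1 to have 3 dimensions, but got array with shape (45, 3949)") is about the target shape: with a TimeDistributed output the targets need a trailing feature axis so they match (None, maxlen, out_size). A minimal sketch of the idea, assuming binary labels and out_size equal to 1, as in the generator above:

import numpy as np

targets = np.random.randint(2, size=(45, 3949))   # (batch, maxlen), the shape in the error
targets_3d = np.expand_dims(targets, -1)          # (batch, maxlen, 1), matching (None, maxlen, out_size)
print(targets_3d.shape)                           # (45, 3949, 1)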
I am trying to create a list based on my neural network outputs and use it in TensorFlow as a loss function.
Assume that results is a tensor of shape [1, batch_size] output by a neural network. I check whether each value of this tensor is in a specific range passed in as a placeholder called valid_range; if it is, I append 1 to a list, and if it is not, I append -1. The goal is to force all predictions of the network into the range, so the correct predictions form a tensor of all 1s, which I call correct_predictions.
values_list = []
for j in range(batch_size):
    a = results[0, j] >= valid_range[0]
    b = results[0, j] <= valid_range[1]
    c = tf.logical_and(a, b)
    if (c == 1):
        values_list.append(1)
    else:
        values_list.append(-1.)
values_list_tensor = tf.convert_to_tensor(values_list)

correct_predictions = tf.ones([batch_size, ], tf.float32)
Now, I want to use this as a loss function in my network, so that I can force all the predictions to be in the specified range. I try to train like this:
loss = tf.reduce_mean(tf.squared_difference(values_list_tensor, correct_predictions))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
gradients, variables = zip(*optimizer.compute_gradients(loss))
gradients, _ = tf.clip_by_global_norm(gradients, gradient_clip_threshold)
optimize = optimizer.apply_gradients(zip(gradients, variables))
This, however, has a problem and throws an error on the last optimize line, saying:
ValueError: No gradients provided for any variable: ['<tensorflow.python.training.optimizer._RefVariableProcessor object at 0x7f0245d4afd0>',
'<tensorflow.python.training.optimizer._RefVariableProcessor object at 0x7f0245d66050>'
...
I tried to debug this in TensorBoard, and I noticed that the list I am creating does not appear in the graph, so basically the x part of the loss function is not part of the network itself. Is there some way to accurately create a list based on the predictions of a neural network and use it in the loss function in TensorFlow to train the network?
Please help, I have been stuck on this for a few days now.
Edit:
Following what was suggested in the comments, I decided to use an l2 loss function, multiplying it by the binary vector I had from before, values_list_tensor. The binary vector now has values 1 and 0 instead of 1 and -1. This way, when the prediction is in the range the loss is 0; otherwise it is the normal l2 loss. As I am unable to see the values of the tensors, I am not sure whether this is correct. However, I can view the final loss and it is always 0, so something is wrong here. I am unsure whether the multiplication is being done correctly and whether values_list_tensor is calculated accurately. Can someone help and tell me what could be wrong?
loss = tf.reduce_mean(tf.nn.l2_loss(tf.matmul(tf.transpose(tf.expand_dims(values_list_tensor, 1)), tf.expand_dims(results[0, :], 1))))
Thanks
To answer the question in the comment: one way to write a piecewise function is using tf.cond. For example, here is a function that returns 0 on [-1, 1] and x everywhere else:
import tensorflow as tf

sess = tf.InteractiveSession()
x = tf.placeholder(tf.float32)
y = tf.cond(tf.logical_or(tf.greater(x, 1.0), tf.less(x, -1.0)), lambda: x, lambda: 0.0)
y.eval({x: 1.5})  # prints 1.5
y.eval({x: 0.5})  # prints 0.0
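tf.cond switches on a single scalar predicate; for a whole batch of values the element-wise analogue is tf.where. A minimal sketch of the same piecewise function applied per element (assuming TF 1.x and reusing the InteractiveSession above):

x_batch = tf.placeholder(tf.float32, [None])
y_batch = tf.where(tf.logical_or(tf.greater(x_batch, 1.0), tf.less(x_batch, -1.0)),
                   x_batch, tf.zeros_like(x_batch))
y_batch.eval({x_batch: [1.5, 0.5, -2.0]})  # [ 1.5  0.  -2. ]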
I am using z3py. My question is, how do I retrieve the bounds of an Extract node? I thought Extract would be a function with arity three, but it isn't:
>>> x = BitVecVal(3, 32)
>>> e = Extract(15, 0, x)
>>> e.decl()
Extract
>>> e.decl().arity()
1
>>> e2 = Extract(7, 0, x)
>>> e2.decl()
Extract
>>> e.decl() == e2.decl()
False
Each Extract operation is typed (apparently) by the first two arguments (I infer this because the decls aren't equal).
If I'm given a BitVecRef which is an Extract operation, how can I tell the bounds of the operation? So for Extract(i, j, x) I want a function that gives me back i and j.
The bounds are encoded as "parameters" together with the term. These parameters don't get passed as regular arguments. The Python API does not expose access to parameters, but the C API does, and you can call those functions from Python (it is just a little more work).
The function you need is Z3_get_decl_int_parameter.
Here is a sample using the function: http://rise4fun.com/Z3Py/Rsl8
from z3 import *

x = BitVec('x', 32)
t = Extract(10, 5, x)
f = t.decl()
print(Z3_get_decl_int_parameter(t.ctx.ref(), f.ast, 0))  # prints 10 (the high index)
print(Z3_get_decl_int_parameter(t.ctx.ref(), f.ast, 1))  # prints 5 (the low index)
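If you want this wrapped up, here is a small hypothetical helper (not part of the z3py API, just a convenience around the same call) that returns the pair (i, j) for an Extract term:

def extract_bounds(e):
    # assumes e is a BitVecRef whose top-level operation is Extract
    d = e.decl()
    high = Z3_get_decl_int_parameter(e.ctx.ref(), d.ast, 0)
    low = Z3_get_decl_int_parameter(e.ctx.ref(), d.ast, 1)
    return high, low

print(extract_bounds(t))  # (10, 5)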