In keras' documentation there is no information regarding how dropout is actually implemented for LSTM layers.
However, there is a link to the paper "A Theoretically Grounded Application of Dropout in Recurrent Neural Networks", which led me to believe that dropout is implemented as described in said paper.
That is, for each time-step in the time-series the layer is processing, the same dropout mask is used.
Looking at the source code, it seems to me that LSTMCell.call gets called iteratively, once for every time-step in the time-series, and generates a new dropout mask each time it is called.
My question is:
Either I misinterpreted keras' code, or the reference to the paper in keras' documentation is misleading. Which is it?
The paper and the code are consistent. You have understood the paper correctly but misread the code slightly.
There is a check before the dropout mask is initialised: self._dropout_mask is None.
So LSTMCell.call does get called iteratively, once for every time-step in the time-series, but a new dropout mask is generated only on the first call; every subsequent step reuses the same mask, exactly as the paper describes.
if 0 < self.dropout < 1 and self._dropout_mask is None:
    self._dropout_mask = _generate_dropout_mask(
        K.ones_like(inputs),
        self.dropout,
        training=training,
        count=4)
if (0 < self.recurrent_dropout < 1 and
        self._recurrent_dropout_mask is None):
    self._recurrent_dropout_mask = _generate_dropout_mask(
        K.ones_like(states[0]),
        self.recurrent_dropout,
        training=training,
        count=4)
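In case it helps, here is a minimal, framework-free sketch of that "generate once, reuse every step" pattern (plain NumPy; the names are illustrative, not Keras internals):

import numpy as np

def run_sequence(inputs, rate, seed=0):
    """Toy recurrent loop over inputs of shape (timesteps, features)."""
    rng = np.random.default_rng(seed)
    dropout_mask = None                 # analogue of self._dropout_mask
    state = np.zeros(inputs.shape[1])
    for x_t in inputs:                  # one iteration per time-step, like LSTMCell.call
        if dropout_mask is None:        # only the first call generates a mask
            dropout_mask = (rng.random(x_t.shape) >= rate) / (1.0 - rate)
        state = np.tanh(x_t * dropout_mask + state)   # the same mask at every step
    return state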
Hope that clarifies your doubt.
Related
I think this is a comprehension issue, but I would appreciate any help.
I'm trying to learn how to use PyTorch for autoencoding. The nn.Linear module takes two parameters,
nn.Linear(input_size, hidden_size)
When reshaping a tensor to its minimum meaningful representation, as one would in autoencoding, it makes sense that the hidden_size would be smaller. However, the PyTorch tutorial has a layer specifying identical input_size and hidden_size:
class NeuralNetwork(nn.Module):
    def __init__(self):
        super(NeuralNetwork, self).__init__()
        self.flatten = nn.Flatten()
        self.linear_relu_stack = nn.Sequential(
            nn.Linear(28*28, 512),
            nn.ReLU(),
            nn.Linear(512, 512),
            nn.ReLU(),
            nn.Linear(512, 10),
        )
I guess my question is, what is the purpose of having the same input and hidden size? Wouldn't this just return an identical tensor?
I suspect that this is just a requirement after calling the nn.ReLU() activation function.
As Wikipedia puts it:
An autoencoder is a type of artificial neural network used to learn efficient codings of unlabeled data. The encoding is validated and refined by attempting to regenerate the input from the encoding.
In other words, the idea of an autoencoder is to learn an identity function. This identity is learned only for the kind of inputs it is trained on (e.g. inputs without anomalies). From this, two points follow:
The input has the same dimensions as the output
Autoencoders are (generally) built to learn the essential features of the input
Because of point (1), the autoencoder consists of a series of layers (e.g. a stack of nn.Linear() or nn.Conv() layers) whose final output has the same dimensions as the input.
Because of point (2), you generally have an Encoder that compresses the information (as in your code snippet, from 28x28 down to 10) and a Decoder that decompresses it again (10 -> 28x28). Across implementations of this architecture, the latent-space dimensionality (10) is usually much smaller than the input (28x28). Now that the end goal of the Encoder is clear, you can see that intermediate layers along the way may keep a wider representation (nn.Linear(28*28, 512), then nn.Linear(512, 512)); that extra width disappears once the stack produces the final output (10).
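For concreteness, here is a minimal sketch of the full architecture described above: the 28*28 -> 512 -> 10 encoder from your snippet, mirrored by a hypothetical 10 -> 512 -> 28*28 decoder (the widths are just the ones from your code, not a prescription):

import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: compress the 28*28 input down to a 10-dimensional latent code
        self.encoder = nn.Sequential(
            nn.Flatten(),
            nn.Linear(28*28, 512),
            nn.ReLU(),
            nn.Linear(512, 512),
            nn.ReLU(),
            nn.Linear(512, 10),
        )
        # Decoder: reconstruct the 28*28 input from the latent code
        self.decoder = nn.Sequential(
            nn.Linear(10, 512),
            nn.ReLU(),
            nn.Linear(512, 28*28),
        )

    def forward(self, x):
        latent = self.encoder(x)        # point (2): compression
        return self.decoder(latent)     # point (1): output dims match the input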
Note that because the model in your question includes a nonlinearity after the linear layer, the model will not learn an identity transform between the input and output. In the specific case of the relu nonlinearity, the model could learn an identity transform if all of the input values were positive, but in general this won't be the case.
I find it a little easier to imagine the issue with an even smaller model consisting of Linear --> Sigmoid --> Linear. In that case, the input is mapped through the first matrix transform and then "squashed" into the space [0, 1] as the "hidden" layer representation. The next ("output") layer would need to take this squashed view of the input and come up with some way of "unsquashing" it back into the original. But an affine output layer cannot exactly invert the sigmoid, so the model will have to learn some other, non-identity transforms for the two matrices.
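A tiny version of that Linear --> Sigmoid --> Linear model, just to make the shapes concrete (the widths here are arbitrary):

import torch.nn as nn

# The hidden representation is squashed into (0, 1) by the sigmoid, and the final
# affine layer has no way to exactly undo that squashing, so an exact identity
# cannot be learned in general.
tiny_model = nn.Sequential(
    nn.Linear(784, 32),
    nn.Sigmoid(),
    nn.Linear(32, 784),
)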
There are some neat visualizations of this concept on Chris Olah's blog that are well worth a look.
I am training an unsupervised NN model and for some reason, after exactly one epoch (80 steps), the model stops learning.
Do you have any idea why it might happen and what should I do to prevent it?
This is more info about my NN:
I have a deep NN that tries to solve an optimization problem. My loss function is custom: it is the objective function of the optimization problem.
So if my optimization problem is min f(x), then in my DNN the loss is f(x). I have 64 inputs, 64 outputs, and 3 layers in between:
self.l1 = nn.Linear(input_size, hidden_size)
self.relu1 = nn.LeakyReLU()
self.BN1 = nn.BatchNorm1d(hidden_size)
and the last layer is:
self.l5 = nn.Linear(hidden_size, output_size)
self.tan5 = nn.Tanh()
self.BN5 = nn.BatchNorm1d(output_size)
to scale my network's output.
With more layers and nodes (doubled: 8 layers with 200 nodes each), I can get a little more progress toward a lower error, but again after 100 steps the training error becomes flat!
The symptom is that the training loss stops improving relatively early. Assuming your problem is learnable at all, there are many possible reasons for this behavior. The most relevant are the following:
Improper preprocessing of the input: neural networks prefer inputs with zero mean. E.g., if the input is all positive, it restricts the weights to be updated in the same direction, which may not be desirable (https://youtu.be/gYpoJMlgyXA).
Therefore, you may want to subtract the mean from all the images (e.g., subtract 127.5 from each of the 3 channels). Scaling to unit standard deviation in each channel may also be helpful.
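A minimal sketch of that preprocessing (the dummy array here stands in for your image batch):

import numpy as np

# Dummy batch of 8-bit RGB images, shape (N, H, W, channels); replace with your data
x = np.random.randint(0, 256, size=(16, 32, 32, 3)).astype(np.float32)

x -= 127.5                       # subtract the midpoint so each channel is roughly zero-mean
x /= x.std(axis=(0, 1, 2))       # per-channel scaling to ~unit standard deviation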
Capacity of the network: the network is not complicated or deep enough for the task.
This is very easy to check: train the network on just a few images (say, 3 to 10). The network should be able to overfit the data and drive the loss to almost 0. If that is not the case, you may have to add more layers, e.g. use more than one Dense layer.
Another good idea is to use pre-trained weights (see Applications in the Keras documentation). You may adjust the Dense layers at the top to fit your problem.
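A sketch of that overfitting sanity check in Keras (model, x_train and y_train are assumed to be your compiled model and data; the slice size and epoch count are arbitrary):

# Take a handful of samples and try to drive the training loss to ~0
x_small, y_small = x_train[:8], y_train[:8]
history = model.fit(x_small, y_small, epochs=200, verbose=0)
print("final training loss:", history.history["loss"][-1])
# If the loss does not approach 0, the network probably lacks capacity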
Improper weight initialization: improper weight initialization can prevent the network from converging (https://youtu.be/gYpoJMlgyXA, the same video as before).
For the ReLU activation, you may want to use He initialization instead of the default Glorot initialization. I find that this may be necessary sometimes, but not always.
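In Keras this is a per-layer argument; a quick sketch assuming the tf.keras API (with standalone Keras the import would be from keras import layers):

from tensorflow.keras import layers

# The default kernel_initializer is Glorot (glorot_uniform); for ReLU layers you can
# switch to He initialization instead:
dense = layers.Dense(128, activation="relu", kernel_initializer="he_normal")
conv = layers.Conv2D(32, (3, 3), activation="relu", kernel_initializer="he_normal")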
Lastly, you can use debugging tools for Keras such as keras-vis, keplr-io, and deep-viz-keras. They are very useful for opening up the black box of convolutional networks.
I faced the same problem, and here is what worked for me:
After going through a blog post, I determined that my problem resulted from the encoding of my labels. Originally I had them as one-hot encodings which looked like [[0, 1], [1, 0], [1, 0]], while in the blog post they were in the format [0 1 0 0 1]. Changing my labels to this format and using binary cross-entropy got my model to work properly. Thanks to Ngoc Anh Huynh and rafaelvalle!
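For reference, the conversion from one-hot rows to a flat label vector is a one-liner (a sketch using the example labels above):

import numpy as np

y_onehot = np.array([[0, 1], [1, 0], [1, 0]])
y_flat = y_onehot.argmax(axis=1)    # -> array([1, 0, 0]), one label per sample
# y_flat can then be used with a single sigmoid output and binary cross-entropy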
I have implemented an autoencoder using Keras that takes 112*112*3 neurons as input and 100 neurons as the compressed/encoded state. I want to find which of these 100 neurons learn the important features. So far I have calculated the eigenvalues (e) and eigenvectors (v) using the steps below, and I found that roughly the first 30 values of e are greater than 0. Does that mean the first 30 modes are the important ones? Is there any other method that could find the important neurons?
Thanks in advance.
x_enc = enc_model.predict(x_train, batch_size=BATCH_SIZE) # shape (3156,100)
x_mean = np.mean(x_enc, axis=0) # shape (100,)
x_stds = np.std(x_enc, axis=0) # shape (100,)
x_cov = np.cov((x_enc - x_mean).T) # shape (100,100)
e, v = np.linalg.eig(x_cov) # shape (100,) and (100,100) respectively
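Continuing from the snippet above, one way to read those eigenvalues is as the variance explained by each principal direction of the 100-dimensional codes; note that this ranks directions (linear combinations of neurons), not individual neurons:

# Covariance matrices are symmetric, so the eigenvalues should be (numerically) real
e_sorted = np.sort(e.real)[::-1]                  # descending order
explained_ratio = e_sorted / e_sorted.sum()
cumulative = np.cumsum(explained_ratio)
print("directions needed for 95% of the variance:", np.argmax(cumulative >= 0.95) + 1)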
I don't know whether the approach you are using will actually give you useful results, since how the network learns and what exactly it learns are not known. I suggest you use a different kind of autoencoder, one that automatically learns disentangled representations of the data in a latent space; that way you can be sure that all the parameters you find are actually contributing to the representation of your data. Check this article.
I'm trying to define a custom loss function for Caffe using a Python layer, but I can't work out what the required output is.
Let's say the loss for the layer is defined as L = sum(F(xi, yi)) / batch_size, where L is the loss function to be minimized (i.e. top[0]), x is the network output (bottom[0]), y is the ground-truth label (i.e. bottom[1]), and xi, yi are the i-th samples in a batch.
The widely known example with EuclideanLossLayer (https://github.com/BVLC/caffe/blob/master/examples/pycaffe/layers/pyloss.py) shows that the backward pass in this case must return bottom[0].diff[i] = dL(x,y)/dxi. Another reference I've found shows the same: Implement Bhattacharyya loss function using python layer Caffe
But in other examples I have seen that it should also be multiplied by top[0].diff.
So which is correct: bottom[0].diff[i] = dL/dxi, or bottom[0].diff[i] = dL/dxi * top[0].diff[i]?
Each loss layer may have a loss_weight: indicating the "importance" of this specific loss (in case the net has several loss layers). Caffe implements this weight as top[0].diff, to be multiplied by the gradients.
Let's back off to basic principles: the purpose of back-propagation is to adjust the layer weights according to the ground-truth feedback. The most basic parts of this are "how far off is my current guess?" and "how hard should I yank the change lever?" These are formalized as top.diff and learning_rate, respectively.
At a micro level, the ground truth for each layer is that top feedback, so top.diff is the local avatar of "how far off ...". Thus, at some point, you need to include top[0].diff as a primary factor in your adjustment computation.
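As a rough sketch only, modeled on the pyloss.py example linked above but with the loss-weight factor from top[0].diff added in the backward pass (the class name and loss are illustrative, not a required Caffe API beyond caffe.Layer):

import caffe
import numpy as np

class WeightedEuclideanLossLayer(caffe.Layer):
    """Toy loss: L = sum_i ||x_i - y_i||^2 / (2 * batch_size)."""

    def setup(self, bottom, top):
        if len(bottom) != 2:
            raise Exception("Need two bottoms: predictions and labels.")

    def reshape(self, bottom, top):
        self.diff = np.zeros_like(bottom[0].data, dtype=np.float32)
        top[0].reshape(1)                      # the loss is a scalar

    def forward(self, bottom, top):
        self.diff[...] = bottom[0].data - bottom[1].data
        top[0].data[...] = np.sum(self.diff ** 2) / bottom[0].num / 2.0

    def backward(self, top, propagate_down, bottom):
        if propagate_down[0]:
            # dL/dx_i, scaled by the loss weight that Caffe stores in top[0].diff
            bottom[0].diff[...] = self.diff / bottom[0].num * top[0].diff[0]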
I know this isn't a complete, direct answer -- but I hope it continues to help even after you solve the immediate problem.
I am using the pylearn2 library to design a CNN. I want to use Leaky ReLUs as the activation function in one layer. Is there any possible way to do this with pylearn2? Do I have to write a custom function for it, or does pylearn2 have built-in functions for that? If so, how do I write the custom code? Can anyone please help me out here?
The ConvElemwise super-class is a generic convolutional elementwise layer. Among its subclasses, ConvRectifiedLinear is a convolutional rectified linear layer that uses the RectifierConvNonlinearity class.
In its apply() method:
p = linear_response * (linear_response > 0.) + self.left_slope * \
    linear_response * (linear_response < 0.)
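So a leaky ReLU amounts to giving that nonlinearity a non-zero left_slope. A tentative sketch of how the layer might be assembled (this assumes, based on the self.left_slope reference above, that RectifierConvNonlinearity takes a left_slope argument; the ConvElemwise constructor arguments are from memory, so double-check them against your pylearn2 version):

from pylearn2.models.mlp import ConvElemwise, RectifierConvNonlinearity

# Hypothetical leaky-ReLU conv layer: keep a small slope for negative responses
leaky_relu = RectifierConvNonlinearity(left_slope=0.01)

conv_layer = ConvElemwise(
    layer_name='conv1',
    output_channels=64,
    kernel_shape=[3, 3],
    nonlinearity=leaky_relu,
    irange=0.01,
)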
As this gentle review points out:
... Maxout neuron (introduced recently by Goodfellow et al.) that generalizes the ReLU and its leaky version.
Examples are MaxoutLocalC01B or MaxoutConvC01B.
The reason for the lack of an answer on pylearn2-user may be that pylearn2 is mostly written by researchers at the LISA lab and, thus, the threshold for point 13 in the FAQ may be high.