Can you make an LSTM forget context manually? - machine-learning

I am very new to machine learning and was wondering if it’s possible to manually empty an LSTM’s short-term memory. Say, for instance, I wanted to train an LSTM on the sentence
“Jack and Jill went up the,” but decided not to end the sentence. If I then wanted to train it on “Humpty Dumpty sat on a wall,” how would I prevent it from immediately predicting the word “hill”? I’m using Keras.

Relevant: When does keras reset an LSTM state?
You should not need to reset context if your LSTM is stateless (stateful=False, which I think makes more sense for your case), or if you train the LSTM on those two sentences by putting both in the same batch.
If you really do need to, use model.reset_states().
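For concreteness, here is a minimal sketch of what that looks like with a stateful LSTM in Keras (the vocabulary size, layer sizes, and batch shape are made-up placeholders, not part of the original question):

from tensorflow import keras
from tensorflow.keras import layers

# A stateful LSTM carries its hidden/cell state across batches,
# so it has to be cleared manually between unrelated sequences.
model = keras.Sequential([
    layers.LSTM(32, stateful=True, batch_input_shape=(1, None, 50)),
    layers.Dense(50, activation="softmax"),
])

# ... train on "Jack and Jill went up the" ...
model.reset_states()  # wipe the carried-over short-term memory
# ... train on "Humpty Dumpty sat on a wall" ...

With a stateless LSTM (the default), Keras already resets the state after each batch, so nothing extra is needed.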

Related

In what circumstances might using biases in a neural network not be beneficial?

I am currently looking through Michael Nielsen's ebook Neural Networks and Deep Learning and have run the code found at the end of chapter 1, which trains a neural network to recognize hand-written digits (with a slight modification so that the backpropagation over a mini-batch is matrix-based).
However, having run this code and achieving a classification accuracy of just under 94%, I decided to remove the use of biases from the network. After re-training the modified network, I found no difference in classification accuracy!
NB: The output layer of this network contains ten neurons; if the ith of these neurons has the highest activation then the input is classified as being the digit i.
This got me wondering why it is necessary to use biases in a neural network rather than just weights, and what differentiates a task where biases improve a network's performance from a task where they do not.
My code can be found here: https://github.com/pipthagoras/neural-network-1
Biases are used to account for the fact that your underlying data might not be centered. This is easier to see in the case of a linear regression.
If you do a regression without an intercept (or bias), you force the underlying model to pass through the origin, which results in a poor model if the underlying data is not centered (for example, if the true generating process is Y=3000). If, on the other hand, your data is centered or close to centered, then eliminating the bias is fine, since you won't introduce a term that is, in fact, independent of your predictive variable (it's like selecting a simpler model, which will tend to generalize better PROVIDED that it actually reflects the underlying data).
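To make this concrete, here is a small, hypothetical scikit-learn sketch (the offset of 3000 mirrors the example above, and the data is synthetic):

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
x = rng.normal(size=(200, 1))                      # centered predictor
y = 3000 + 5 * x[:, 0] + rng.normal(size=200)      # non-centered target

with_bias = LinearRegression(fit_intercept=True).fit(x, y)
no_bias = LinearRegression(fit_intercept=False).fit(x, y)

print(with_bias.intercept_, with_bias.coef_)       # intercept near 3000, slope near 5
print(with_bias.score(x, y), no_bias.score(x, y))  # good R^2 vs. a strongly negative R^2

Forcing the fit through the origin leaves every prediction off by roughly 3000, which is exactly the error the bias term would have absorbed.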

how to add more outputs to neural networks?

Of course, no one making a neural network for image recognition and classification can make room for all possible image outputs. So if I make a neural network that takes an array as input and tells me whether the image is a bird or not a bird, can I add more outputs for more image classes after I finish training the first network, or will that make the learning vanish?
In other words, I keep the number of inputs fixed and start with 1 output, then add 1 more output, and 1 more after that. Is that workable or not?
Retrain
If you can spend the resources, it would be a good idea to re-train your network (or, to be more specific, train something from scratch). But read the approaches below for cases where you might achieve something better (or at least less costly).
Transfer-learning
But if you are using one of the huge popular NNs which take weeks to train on very costly hardware, there might be a way, touching on the idea of transfer learning.
There are at least two different approaches then:
Using the pretrained NN as feature-extractor
Here you remove the final dense layers and just use the trained NN to extract features from your images. Then you can build an arbitrary new classifier on your new dataset, which maps OLD-NN-OUTPUT = FEATURES-INPUT -> classes (a new softmax NN, an SVM/kernel SVM, or anything else). This is quite robust if we assume that your pretrained NN is of high quality and your new class is not too different from the learned ones.
In general, this approach is favorable if your new class and dataset are small and similar to the original ones.
If the new data is not that similar, one might instead use features from an earlier layer (which are more generic).
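A rough sketch of this feature-extractor idea in Keras (the choice of MobileNetV2, the input size, and the three output classes are placeholder assumptions, not a prescription):

from tensorflow import keras

# Pretrained network with its classification head removed; it now outputs a pooled feature vector.
base = keras.applications.MobileNetV2(include_top=False, pooling="avg",
                                      input_shape=(224, 224, 3), weights="imagenet")
base.trainable = False  # used purely as a fixed feature extractor

# New classifier mapping FEATURES -> classes (here a small softmax head; an SVM would work too).
clf = keras.Sequential([
    keras.Input(shape=base.output_shape[1:]),
    keras.layers.Dense(3, activation="softmax"),
])
clf.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# features = base.predict(new_images)    # new_images / new_labels are placeholders for your data
# clf.fit(features, new_labels, epochs=5)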
Continuing training
Here you would continue training the weights of your original NN, probably keeping the first layers fixed (maybe even all but the final dense ones). As above, the general idea is that we assume a good NN to be very generic in its first layers (= extracting features) and more specific in the last ones.
This approach is favorable if you have a lot of data for your new class. Depending on the similarity, you might either continue retraining all weights or, if the new data is quite similar, fix some layer weights (the first ones).
There might be technical issues in achieving this (such as different image-size inputs and other constraints), so it needs some work if constraints of the original NN are broken. It is also important to tune the hyper-parameters for this learning phase (the learning rate should probably be lower!).
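Continuing the sketch above, fine-tuning in Keras could look roughly like this (how many layers to freeze and the learning rate are illustrative assumptions, not a recipe):

from tensorflow import keras

base = keras.applications.MobileNetV2(include_top=False, pooling="avg",
                                      input_shape=(224, 224, 3), weights="imagenet")

# Freeze the early, generic layers; keep only the last block trainable.
for layer in base.layers[:-20]:
    layer.trainable = False

model = keras.Sequential([
    base,
    keras.layers.Dense(3, activation="softmax"),  # new output head including the added class
])

# A low learning rate helps avoid destroying the pretrained weights.
model.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-5),
              loss="sparse_categorical_crossentropy")
# model.fit(new_images, new_labels, epochs=3)   # placeholders for your new dataset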

How does Fine-tuning Word Embeddings work?

I've been reading some NLP-with-deep-learning papers and found that fine-tuning seems to be a simple yet confusing concept. The same question has been asked here, but it is still not quite clear to me.
Fine-tuning pre-trained word embeddings into task-specific word embeddings, as mentioned in papers like Y. Kim, “Convolutional Neural Networks for Sentence Classification,” and K. S. Tai, R. Socher, and C. D. Manning, “Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks,” gets only a brief mention without any details.
My question is:
Word embeddings generated with word2vec or GloVe as pretrained word vectors are used as input features (X) for downstream tasks like parsing or sentiment analysis; that is, those input vectors are plugged into a new neural network model for some specific task, and while training this new model we somehow end up with updated, task-specific word embeddings.
But as far as I know, during training what back-propagation does is update the weights (W) of the model; it does not change the input features (X). So how exactly do the original word embeddings get fine-tuned, and where do these fine-tuned vectors come from?
Yes, if you feed the embedding vectors as your input, you can't fine-tune the embeddings (at least not easily). However, all the frameworks provide some sort of EmbeddingLayer that takes as input an integer, the class ordinal of the word/character/other input token, and performs an embedding lookup. Such an embedding layer is very similar to a fully connected layer fed a one-hot encoded class, but it is far more efficient, as it only needs to fetch/change one row of the matrix on both the forward and backward passes. More importantly, it allows the weights of the embedding to be learned.
So the classic way would be to feed the actual classes to the network instead of embeddings, and to prepend the entire network with an embedding layer that is initialized with word2vec/GloVe vectors and continues learning the weights. It might also be reasonable to freeze them for several iterations at the beginning, until the rest of the network starts doing something reasonable with them, before you start fine-tuning them.
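In Keras, for example, this might look roughly like the following (the vocabulary size, dimensions, task head, and the embedding_matrix built from word2vec/GloVe rows are placeholders):

import numpy as np
from tensorflow import keras

vocab_size, embed_dim = 20000, 300
embedding_matrix = np.random.rand(vocab_size, embed_dim)  # placeholder: rows taken from word2vec/GloVe

model = keras.Sequential([
    # Input is a sequence of integer word indices; the layer stores the embedding matrix as weights.
    keras.layers.Embedding(vocab_size, embed_dim,
                           embeddings_initializer=keras.initializers.Constant(embedding_matrix),
                           trainable=True),   # set trainable=False to freeze the vectors at first
    keras.layers.LSTM(64),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
# model.fit(padded_word_indices, labels)   # indices go in, not embedding vectors

After training, the fine-tuned vectors are simply the rows of the Embedding layer's weight matrix (model.layers[0].get_weights()[0]).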
One-hot encoding is the basis for constructing the initial layer for embeddings; once the network is trained, the one-hot encoding essentially serves as a table lookup. In the fine-tuning step you can select data for specific words and specify which variables are to be fine-tuned when you define the optimizer, using something like this:
# collect only the embedding weights so this optimizer updates nothing else
embedding_variables = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope="embedding_variables/kernel")
ft_optimizer = tf.train.AdamOptimizer(learning_rate=0.001, name='FineTune')
ft_op = ft_optimizer.minimize(mean_loss, var_list=embedding_variables)  # gradients only reach the listed variables
where "embedding_variables/kernel" is the name of the next layer after one-hot encoding.

Advantages of RNN over DNN in prediction

I am going to work on a problem that needs to be addressed with either an RNN or a deep neural net. In general, the problem is predicting financial values. Since I am given a sequence of financial data as input, I thought an RNN would be better. On the other hand, I think that if I can fit the data into some fixed structure, I can train a DNN much more easily, because the training phase is easier for a DNN than for an RNN. For example, I could take the last month's info, keep 30 inputs, and predict the 31st day with a DNN.
I don't understand the advantage of RNNs over DNNs from this perspective. My first question is about the proper usage of an RNN or a DNN for this problem.
My second question is somewhat basic. While training an RNN, isn't it possible for the network to get "confused"? I mean, consider the input 10101111, where the inputs are single digits (0 or 1) grouped into 2-step sequences (1-0, 1-0, 1-1, 1-1). Here, after a 1 comes a 0 several times, and then at the end, after a 1 comes a 1. While training, wouldn't this become a major problem? That is, why doesn't the system get confused while training on this sequence?
I think your question is phrased a bit problematically. First, DNNs are a class of architectures: a Convolutional Neural Network differs greatly from a Deep Belief Network or a simple deep MLP. There are feed-forward architectures (e.g. TDNN) fit for time-series prediction, but it depends on whether you are more interested in research or just in solving your problem.
Second, RNNs are as "deep" as it gets. Consider the most basic RNN, the Elman network: during training with backpropagation through time (BPTT) it is unfolded in time, backpropagating over T timesteps. Since this backpropagation is done not only vertically, as in a standard DNN, but also horizontally over T-1 context layers, the past activations of the hidden layers from T-1 timesteps before the present are actually taken into account for the activation at the current timestep. An illustration of an unfolded net (source) might help in understanding what I just wrote.
This is what makes RNNs so powerful for time-series prediction (and should answer both your questions). If you have more questions, read about Elman networks; LSTMs etc. will only confuse you. Understanding Elman networks and BPTT is the foundation needed to understand any other RNN.
And one last thing to look out for: the vanishing gradient problem. While it is tempting to say "let's make T = infinity and give our RNN as much memory as possible," it doesn't work. There are many ways of working around this problem; LSTMs are quite popular at the moment, and there are even some proper LSTM implementations around nowadays. But it is important to know that a basic Elman network can really struggle with T = 30.
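To get a feel for why T = 30 can already be a problem, here is a tiny numeric illustration (not an Elman implementation, just repeated multiplication by the same recurrent weight matrix, which is essentially what BPTT does when the nonlinearities are ignored; the sizes and scale are arbitrary):

import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.2, size=(10, 10))  # small recurrent weights, spectral radius well below 1

grad = np.ones(10)  # stand-in for the error signal at the last timestep
for t in range(30):
    grad = W.T @ grad  # one step of backpropagation through time
    if t in (0, 9, 29):
        print(f"after {t + 1:2d} steps the gradient norm is {np.linalg.norm(grad):.2e}")

The gradient norm shrinks by orders of magnitude within a few dozen steps, so the influence of inputs 30 timesteps back is essentially invisible to the weight updates.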
As you answered yourself, RNNs are for sequences. If the data has a sequential nature (a time series), then it is preferable to use such a model over a DNN and other "static" models. The main reason is that an RNN can model the process that generates each sequence; for example, given the sequences
0011100
0111000
0001110
an RNN will be able to build a model along the lines of "after seeing a '1' I will see two more," and correctly build a prediction when seeing
0000001**** -> 0000001110
At the same time, for a DNN (and other non-sequential models) there is no relation between these three sequences; in fact, the only thing they have in common is that "there is a 1 in the fourth position, so I guess it is always like that."
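As a toy illustration of this point (with only three training strings it may take many epochs to learn the rule reliably, so treat it as a sketch of the setup rather than a benchmark), one could train a small recurrent model to predict the next symbol on exactly such 0/1 strings:

import numpy as np
from tensorflow import keras

# Sequences in which every 1 is followed by exactly two more 1s, as in the example above.
seqs = np.array([
    [0, 0, 1, 1, 1, 0, 0],
    [0, 1, 1, 1, 0, 0, 0],
    [0, 0, 0, 1, 1, 1, 0],
], dtype="float32")

x = seqs[:, :-1, None]   # prefix at each step, shape (3, 6, 1)
y = seqs[:, 1:, None]    # next symbol at each step, shape (3, 6, 1)

model = keras.Sequential([
    keras.layers.LSTM(8, return_sequences=True, input_shape=(None, 1)),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(x, y, epochs=500, verbose=0)

# After seeing a 1, the model should assign high probability to the next symbol being 1.
print(model.predict(np.array([[[0], [0], [0], [0], [0], [0], [1]]]))[0, -1, 0])

A feed-forward model fed the same rows as independent fixed-size vectors has no mechanism to carry this rule from one position to the next.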
Regarding the second question: why won't it get confused? Because it models sequences, because it has memory. It makes its decisions based on everything that was observed before, and assuming that your signal has any kind of regularity, there is always some event in the past that differentiates between two possible paths of the signal. Once again, such phenomena are much better addressed by RNNs than by non-recurrent models. See, for example, natural language and the enormous progress made by LSTM-based models in recent years.

Model selection with dropout training neural network

I've been studying neural networks for a bit and recently learned about the dropout training algorithm. There are excellent papers out there to understand how it works, including the ones from the authors.
So I built a neural network with dropout training (it was fairly easy), but I'm a bit confused about how to perform model selection. From what I understand, it looks like dropout is a method to be used when training the final model obtained through model selection.
As for the test part, papers always talk about using the complete network with halved weights, but they do not mention how to use it in the training/validation part (at least the ones I read).
I was thinking about using the network without dropout for the model selection part. Say that leads me to find that the net performs well with N neurons. Then, for the final training (the one I use to train the network for the test part), I use 2N neurons with dropout probability p=0.5. That ensures I have exactly N neurons active on average, thus using the network at the right capacity most of the time.
Is this a correct approach?
By the way, I'm aware that dropout might not be the best choice with small datasets. The project I'm working on has academic purposes, so it's not strictly necessary that I use the best model for the data, as long as I stick to good machine learning practices.
First of all, model selection and the training of a particular model are completely different issues. For model selection, you would usually need a data set that is completely independent of both the training set used to build the model and the test set used to estimate its performance. So if you're doing, for example, cross-validation, you would need an inner cross-validation (to train the candidate models and compare their performance) and an outer cross-validation to estimate the performance of the selected model.
To see why, consider the following thought experiment (shamelessly stolen from this paper). You have a model that makes a completely random prediction. It has a number of parameters that you can set, but they have no effect. If you try different parameter settings long enough, you'll eventually get a model that performs better than all the others, simply because you're sampling from a random distribution. If you're using the same data to evaluate all of these models, this is the model you will choose. If you have a separate test set, it will quickly tell you that there is no real effect, because the parameter setting that achieves good results during the model-building phase is no better on the separate set.
Now, back to neural networks with dropout. You didn't refer to any particular paper; I'm assuming you mean Srivastava et al., "Dropout: A Simple Way to Prevent Neural Networks from Overfitting". I'm not an expert on the subject, but the method seems to me similar to what's used in random forests or bagging: mitigating the flaws an individual learner may exhibit by applying it repeatedly in slightly different contexts. If I understood the method correctly, what you essentially end up with is an average over several possible models, very similar to random forests.
This is a way to make an individual model better, but not a method for model selection. Dropout is a way of adjusting the learned weights of a single neural network model.
To do model selection on top of this, you would need to train and test neural networks with different parameters and then evaluate them on completely separate sets of data, as described in the paper I referenced above.
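A minimal sketch of that separation, wrapping a Keras network with dropout in scikit-learn splitters (the architecture, the candidate settings, and the random placeholder data are all assumptions for illustration):

import numpy as np
from sklearn.model_selection import KFold, train_test_split
from tensorflow import keras

def build_net(n_hidden, p_drop, n_features):
    model = keras.Sequential([
        keras.Input(shape=(n_features,)),
        keras.layers.Dense(n_hidden, activation="relu"),
        keras.layers.Dropout(p_drop),   # dropout is part of every candidate model
        keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

X = np.random.rand(500, 20)                 # placeholder data
y = np.random.randint(0, 2, 500)

# Held-out test set: touched only once, for the final performance estimate.
X_dev, X_test, y_dev, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Inner cross-validation on the development data: used only to pick the configuration.
best_score, best_cfg = -np.inf, None
for n_hidden in (32, 64):
    for p_drop in (0.2, 0.5):
        scores = []
        for tr, va in KFold(n_splits=3, shuffle=True, random_state=0).split(X_dev):
            m = build_net(n_hidden, p_drop, X.shape[1])
            m.fit(X_dev[tr], y_dev[tr], epochs=20, verbose=0)
            scores.append(m.evaluate(X_dev[va], y_dev[va], verbose=0)[1])
        if np.mean(scores) > best_score:
            best_score, best_cfg = np.mean(scores), (n_hidden, p_drop)

# Retrain the selected configuration on all development data, then report the test score once.
final = build_net(*best_cfg, X.shape[1])
final.fit(X_dev, y_dev, epochs=20, verbose=0)
print(best_cfg, final.evaluate(X_test, y_test, verbose=0)[1])

The key point is that the data used to compare the candidate configurations never overlaps with the data used for the final performance figure.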
