I have a problem that requires an LSTM many-to-one architecture, i.e. the network takes in 19 image frames and then gives a single output.
Each image frame has size (128,128,3).
I have been trying for days but could not find the answer: what should the input_shape for the LSTM be?
Since each frame has size 128*128*3, I believe the number of units in the input layer should be 49152. Currently the code looks like this:
from keras.models import Sequential
from keras.layers import LSTM, Dense
from keras.optimizers import SGD

timesteps = 19
data_dim = 128 * 128 * 3  # 49152
model = Sequential()
model.add(LSTM(data_dim, input_shape=(timesteps, data_dim)))
model.add(Dense(10))

lr = 0.01  # learning rate; the original snippet used lr without defining it
momentum = 0.6
decay = 0.0005
nesterov = True
optimizer = SGD(lr=lr, momentum=momentum, decay=decay, nesterov=nesterov)
model.compile(loss='categorical_crossentropy', optimizer=optimizer)
This code gives a memory error after compilation.
Is this due to an error in input_shape or the other LSTM parameters, or is it due to my computer's hardware?
It's certainly your architecture. Try fewer LSTM units; data_dim is far too many.
Considering that your input size is also data_dim, this results in more than 4*49152*49152 = 9,663,676,416 weights (not counting biases).
model.add(LSTM(less_units, input_shape=(timesteps, data_dim)))
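For reference, a quick sketch of the full parameter count, using the standard LSTM formula (this arithmetic is mine, not from the question):
units = input_dim = 128 * 128 * 3              # 49152
# Four gates, each with an input kernel, a recurrent kernel and a bias:
params = 4 * units * (input_dim + units + 1)
print(f"{params:,}")                           # 19,327,549,440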
For a sequence of images, a better approach is to reduce each frame with a convolutional layer before the LSTM:
model.add(TimeDistributed(Conv2D(output_filters, kernel_size, ...), input_shape=(timesteps, x, y, channels)))
model.add(TimeDistributed(Flatten()))
model.add(LSTM(output_dim, ...))
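A minimal end-to-end sketch of that idea; the filter count, pool size, and 10-class head are assumptions for illustration:
from keras.models import Sequential
from keras.layers import TimeDistributed, Conv2D, MaxPooling2D, Flatten, LSTM, Dense

timesteps, x, y, channels = 19, 128, 128, 3
model = Sequential()
model.add(TimeDistributed(Conv2D(16, (3, 3), activation='relu'),
                          input_shape=(timesteps, x, y, channels)))
model.add(TimeDistributed(MaxPooling2D((4, 4))))   # shrink each frame before the LSTM
model.add(TimeDistributed(Flatten()))
model.add(LSTM(64))                                # many-to-one: one vector per sequence
model.add(Dense(10, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='sgd')
model.summary()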
I created randomly generated data (using numpy, in the range 30 to 60) of about 12000 points, to serve as an artificial time series spanning more than a year.
Now I am trying to fit an LSTM model to those data points and forecast from it.
Here is the LSTM model I applied. The data is a single series, so n_features = 1; n_steps_in and n_steps_out are parameters of the sequence-generation function for the time series, and I set both to 5. For the activation functions I tried every combination: both relu, both tanh, and the first tanh with the second relu (as shown here).
import keras
from keras.models import Sequential
from keras.layers import LSTM, RepeatVector, TimeDistributed, Dense

n_steps_in, n_steps_out = 5, 5  # both set to 5, as described above
X, y = split_sequences(data, n_steps_in, n_steps_out)
n_features = X.shape[2]

model = Sequential()
model.add(LSTM(200, activation='tanh', input_shape=(n_steps_in, n_features)))
model.add(RepeatVector(n_steps_out))
model.add(LSTM(200, activation='relu', return_sequences=True))
model.add(TimeDistributed(Dense(n_features)))

opt = keras.optimizers.Adam(learning_rate=0.05)
model.compile(optimizer=opt, loss='mse')
model.fit(X, y, epochs=n, batch_size=10, verbose=1,  # n = number of epochs (not defined in the original)
          workers=4, use_multiprocessing=True, initial_epoch=0)
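split_sequences is not shown in the question; a plausible sketch, assuming the common multi-step windowing pattern:
import numpy as np

def split_sequences(data, n_steps_in, n_steps_out):
    # Slice a 1-D series into overlapping (input window, output window) pairs.
    X, y = [], []
    for i in range(len(data) - n_steps_in - n_steps_out + 1):
        X.append(data[i:i + n_steps_in])
        y.append(data[i + n_steps_in:i + n_steps_in + n_steps_out])
    # The trailing feature axis makes n_features = 1, as the question states.
    return np.array(X)[..., np.newaxis], np.array(y)[..., np.newaxis]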
I also tried smoothing the data points, since they are randomly distributed (within the predefined boundaries), and then applied the model to the smoothed data, but I am still getting similar results.
For example, this image shows both the smoothed training data and the forecast from the model:
plt.plot(Training_data, 'g')
plt.plot(Pred_Forecasts, 'r')
Every time, the models give straight lines as predictions.
This is understandable, since the data is a set of random numbers, so the model tends to settle on a mean value between the upper and lower limits of the data. But is there still any way to generate a somewhat realistic-looking forecast?
P.S. 1: I have also tried different models like Prophet, SARIMA, and ARIMA. I think I need a way to increase the variance of the prediction, but I have been unable to find one.
P.S. 2: Sorry for the long question; I am new to deep learning, so I tried to explain as much as possible.
It makes intuitive sense to me that the label's dimension should match the dimension of the neural network's last layer. However, in some experiments with PyTorch, a mismatched setup somehow still works.
Code:
import torch
import torch.nn as nn

X = torch.tensor([[1], [2], [3], [4]], dtype=torch.float32)  # training input
Y = torch.tensor([[2], [4], [6], [8]], dtype=torch.float32)  # training labels
model = nn.Linear(1, 3)
criterion = nn.MSELoss()
learning_rate = 0.01
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
for epoch in range(10):
    y_pred = model(X)
    loss = criterion(y_pred, Y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
In the above code, model = nn.Linear(1,3) is used instead of model = nn.Linear(1,1). As a result, while Y.shape is (4,1), y_pred.shape is (4,3).
The code runs, with a warning: "Using a target size that is different to the input size will likely lead to incorrect results due to broadcasting."
I got the following output when I executed model(torch.tensor([10], dtype=torch.float32)):
tensor([20.0089, 19.6121, 19.1967], grad_fn=<AddBackward0>)
All three outputs seem correct to me. But how is the loss calculated when the sizes of the tensors differ?
Should we ever use a target size that differs from the input size? Is there any benefit to this?
Assuming you are working with batch_size=4, you are using a target with 1 component vs. 3 for your predicted tensor. You don't see the intermediate results when computing the loss with the default nn.MSELoss; the reduction='none' option will let you inspect them:
>>> criterion = nn.MSELoss(reduction='none')
>>> y = torch.rand(2, 1)
>>> y_hat = torch.rand(2, 3)
>>> criterion(y_hat, y).shape
torch.Size([2, 3])
Considering this, you can conclude that the target y, being too small, has been broadcasted to the predicted tensor y_hat. Essentially, in your example, you will get the same result (without the warning) as:
>>> y_repeat = y.repeat(1, 3)
>>> criterion(y_hat, y_repeat)
This means that, for each batch element, you are L2-optimizing all three components against a single value: MSE(y_hat[0,0], y[0]), MSE(y_hat[0,1], y[0]), and MSE(y_hat[0,2], y[0]); the same goes for y[1].
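A quick check of this equivalence (my own verification sketch, not from the question):
import torch
import torch.nn as nn

torch.manual_seed(0)
y = torch.rand(2, 1)
y_hat = torch.rand(2, 3)

loss_implicit = nn.MSELoss()(y_hat, y)               # relies on broadcasting (warns)
loss_explicit = nn.MSELoss()(y_hat, y.repeat(1, 3))  # explicit repeat, no warning
print(torch.allclose(loss_implicit, loss_explicit))  # True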
The warning is there to make sure you're conscious of this broadcast operation. Maybe this is what you're looking to do; in that case, you should broadcast the target tensor yourself. Otherwise, it doesn't make sense to do so.
I ran into problems when trying to compile a network with one recurrent layer. It seems there is some issue with the dimensionality of the first layer, and thus with my understanding of how RNN layers work in Keras.
My code sample is:
from keras.models import Sequential
from keras.layers import Dense, SimpleRNN

model = Sequential()
model.add(Dense(8, input_dim=2, activation="tanh", use_bias=False))
model.add(SimpleRNN(2, activation="tanh", use_bias=False))
model.add(Dense(1, activation="tanh", use_bias=False))
The error is
ValueError: Input 0 is incompatible with layer simple_rnn_1: expected ndim=3, found ndim=2
This error is returned regardless of the input_dim value. What am I missing?
That message means: the input going into the RNN has 2 dimensions, while an RNN layer expects 3 dimensions.
For an RNN layer, you need inputs shaped like (BatchSize, TimeSteps, FeaturesPerStep). These are the 3 dimensions expected.
A Dense layer (in keras 2) can work with either 2 or 3 dimensions. We can see that you're working with 2 because you passed an input_dim instead of passing an input_shape=(Steps,Features).
There are many possible ways to solve this, but the most meaningful and logical would be a case where your input data is a sequence with time steps.
Solution 1 - Your training data is a sequence:
If your training data is a sequence, you shape it like (NumberOfSamples, TimeSteps, Features) and pass it to your model. Make sure you use input_shape=(TimeSteps,Features) in the first layer instead of using input_dim.
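A minimal runnable sketch of Solution 1, with assumed sizes (100 samples, 5 timesteps, 2 features):
import numpy as np
from keras.models import Sequential
from keras.layers import SimpleRNN, Dense

TimeSteps, Features = 5, 2
X = np.random.rand(100, TimeSteps, Features)   # (NumberOfSamples, TimeSteps, Features)
y = np.random.rand(100, 1)

model = Sequential()
model.add(SimpleRNN(2, activation="tanh", input_shape=(TimeSteps, Features), use_bias=False))
model.add(Dense(1, activation="tanh", use_bias=False))
model.compile(optimizer="sgd", loss="mse")
model.fit(X, y, epochs=2, verbose=0)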
Solution 2 - You reshape the output of the first dense layer so it has the additional dimension:
model.add(Reshape((TimeSteps,Features)))
Make sure that the product TimeSteps*Features is equal to 8, the output of your first dense layer.
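A minimal sketch of Solution 2, assuming the 8 Dense outputs are split as 4 timesteps x 2 features:
from keras.models import Sequential
from keras.layers import Dense, Reshape, SimpleRNN

model = Sequential()
model.add(Dense(8, input_dim=2, activation="tanh", use_bias=False))
model.add(Reshape((4, 2)))   # 4 timesteps * 2 features = 8, the Dense output size
model.add(SimpleRNN(2, activation="tanh", use_bias=False))
model.add(Dense(1, activation="tanh", use_bias=False))
model.compile(optimizer="sgd", loss="mse")
model.summary()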
I am attempting to train an ANN on time-series data in Keras. I have three vectors of data that are broken into rolling-window sequences, e.g. for a vector l:
np.array([l[i:i+window_size] for i in range(len(l) - window_size)])
The target vector is similarly windowed, so the neural-net output is a prediction of the target vector for the next window_size time steps. All the data is normalized with a min-max scaler and fed into the neural network with shape=(nb_samples, window_size, 3). Here is a plot of the 3 input vectors.
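For concreteness, a minimal sketch of that preparation; the vector names, lengths, and window_size here are stand-ins, not values from the question:
import numpy as np
from sklearn.preprocessing import MinMaxScaler

window_size = 30                                     # assumed value
a, b, c = (np.random.rand(1000) for _ in range(3))   # stand-ins for the three vectors

series = MinMaxScaler().fit_transform(np.stack([a, b, c], axis=-1))  # (1000, 3), scaled 0-1
X = np.array([series[i:i + window_size]
              for i in range(len(series) - window_size)])
print(X.shape)                                       # (970, 30, 3) = (nb_samples, window_size, 3)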
The only output I've managed to muster from the ANN is the following plot. Target vector in blue, predictions in red (plot is zoomed in to make the prediction pattern legible). Prediction vectors are plotted at window_size intervals so each one of the repeated patterns is one prediction from the net.
I've tried many different model architectures, number of epochs, activation functions, short and fat networks, skinny, tall. This is my current one (it's a little out there).
Conv1D(64, 4, input_shape=(None, 3)) ->
Conv1D(32, 4) ->
Dropout(0.24) ->  # Dropout takes a rate in [0, 1); assuming 0.24 was meant by the original Dropout(24)
LSTM(32) ->
Dense(window_size)
But nothing I try stops the neural net from outputting this repeated pattern. I must be misunderstanding something about time series or LSTMs in Keras. I'm very lost at this point, so any help is greatly appreciated. The full code is at this repository:
https://github.com/jaybutera/dat-toy
I played with your code a little and I think I have a few suggestions for getting you on the right track. The code doesn't seem to match your graphs exactly, but I assume you've tweaked it a bit since then. Anyway, there are two main problems:
The biggest problem is in your data preparation step. You basically have the data shapes backwards, in that you have a single timestep of input for X and a timeseries for Y. Your input shape is (18830, 1, 8), when what you really want is (18830, 30, 8) so that the full 30 timesteps are fed into the LSTM. Otherwise the LSTM is only operating on one timestep and isn't really useful. To fix this, I changed the line in common.py from
X = X.reshape(X.shape[0], 1, X.shape[1])
to
X = windowfy(X, winsize)
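(windowfy lives in the linked repo and isn't shown here; presumably it is a rolling-window helper along these lines:)
import numpy as np

def windowfy(data, winsize):
    # Stack rolling windows of length winsize along a new time axis, turning
    # (n_samples, n_features) into (n_samples - winsize, winsize, n_features).
    return np.array([data[i:i + winsize] for i in range(len(data) - winsize)])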
Similarly, the output data should probably be only 1 value, from what I've gathered of your goals from the plotting function. There are certainly some situations where you want to predict a whole timeseries, but I don't know if that's what you want in this case. I changed Y_train to use fuels instead of fuels_w so that it only had to predict one step of the timeseries.
Training for 100 epochs might be way too much for this simple network architecture. In some cases when I ran it, it looked like there was some overfitting going on. Observing the decrease of loss in the network, it seems like maybe only 3-4 epochs are needed.
Here is the graph of predictions after 3 training epochs with the adjustments I mentioned. It's not a great prediction, but it looks like it's on the right track now at least. Good luck to you!
EDIT: Example predicting multiple output timesteps:
from sklearn import datasets, preprocessing
import numpy as np
from scipy import stats
from keras import models, layers
INPUT_WINDOW = 10
OUTPUT_WINDOW = 5 # Predict 5 steps of the output variable.
# Randomly generate some regression data (not true sequential data; samples are independent).
np.random.seed(11798)
X, y = datasets.make_regression(n_samples=1000, n_features=4, noise=.1)
# Rescale 0-1 and convert into windowed sequences.
X = preprocessing.MinMaxScaler().fit_transform(X)
y = preprocessing.MinMaxScaler().fit_transform(y.reshape(-1, 1))
X = np.array([X[i:i + INPUT_WINDOW] for i in range(len(X) - INPUT_WINDOW)])
y = np.array([y[i:i + OUTPUT_WINDOW] for i in range(INPUT_WINDOW - OUTPUT_WINDOW,
len(y) - OUTPUT_WINDOW)])
print(np.shape(X)) # (990, 10, 4) - Ten timesteps of four features
print(np.shape(y))  # (990, 5, 1) - Five timesteps of one feature
# Construct a simple model predicting output sequences.
m = models.Sequential()
m.add(layers.LSTM(20, activation='relu', return_sequences=True, input_shape=(INPUT_WINDOW, 4)))
m.add(layers.LSTM(20, activation='relu'))
m.add(layers.RepeatVector(OUTPUT_WINDOW))
m.add(layers.LSTM(20, activation='relu', return_sequences=True))
m.add(layers.wrappers.TimeDistributed(layers.Dense(1, activation='sigmoid')))
print(m.summary())
m.compile(optimizer='adam', loss='mse')
m.fit(X[:800], y[:800], batch_size=10, epochs=60) # Train on first 800 sequences.
preds = m.predict(X[800:], batch_size=10) # Predict the remaining sequences.
print('Prediction:\n' + str(preds[0]))
print('Actual:\n' + str(y[800]))
# Correlation should be around r = .98, essentially perfect.
print('Correlation: ' + str(stats.pearsonr(y[800:].flatten(), preds.flatten())[0]))
I tried to build a simple MLP with 2 hidden layers and 3 output classes.
What I have done in the model:
Input images are 120x120 RGB images, flattened to size 3 * 120 * 120 = 43200.
2 hidden layers of size 100.
ReLU activation is used.
The output layer has 3 neurons.
Code
def model(input, weights, biases):
    l_1 = tf.add(tf.matmul(x, weights['h1']), biases['b1'])
    l_1 = tf.nn.relu(l_1)
    l_2 = tf.add(tf.matmul(l_1, weights['h2']), biases['b2'])
    l_2 = tf.nn.relu(l_2)
    out = tf.matmul(l_2, weights['out']) + biases['out']
    return out
Optimizer
pred = model(input_batch, weights, biases)
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))
optimizer = tf.train.GradientDescentOptimizer(rate).minimize(cost)
However, the model does not learn; its accuracy is only equal to that of a random model.
The example followed is this one:
https://github.com/aymericdamien/TensorFlow-Examples/blob/master/examples/3_NeuralNetworks/multilayer_perceptron.py
You have a copy-paste typo in def model: the first argument is named input, while the next line uses x.
Another trick to use when you suspect a model is not being trained is to run it on the same batch again and again. If the implementation is correct and the model is being trained, it will soon learn that batch by heart, yielding 100% accuracy. If it does not, that is an indicator that something is wrong in your implementation.
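For example, a minimal sketch of that sanity check in the question's TF1 style, reusing its pred/cost/optimizer graph; x, y, x_batch, and y_batch are assumed placeholders and one fixed batch:
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(500):
        # Feed the exact same batch every step.
        _, c = sess.run([optimizer, cost], feed_dict={x: x_batch, y: y_batch})
        if step % 100 == 0:
            print('step', step, 'loss', c)
    # A correct implementation should drive the loss toward zero and reach
    # 100% accuracy on this single memorized batch.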