I'm trying to train a small CNN from scratch to classify images of 10 different animal species. The images have different dimensions, but I'd say around 300x300. Anyway, every image is resized to 224x224 before going into the model.
Here is the network I'm training:
# Convolution 1
self.cnn1 = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, stride=1, padding=0)
self.relu1 = nn.ReLU()
# Max pool 1
self.maxpool1 = nn.MaxPool2d(kernel_size=2)
# Convolution 2
self.cnn2 = nn.Conv2d(in_channels=16, out_channels=32, kernel_size=3, stride=1, padding=0)
self.relu2 = nn.ReLU()
# Max pool 2
self.maxpool2 = nn.MaxPool2d(kernel_size=2)
# Fully connected 1
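# 224 -> 222 after cnn1 (3x3, no padding) -> 111 after maxpool1 -> 109 after cnn2 -> 54 after maxpool2, hence 32 * 54 * 54 inputs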
self.fc1 = nn.Linear(32 * 54 * 54, 10)
I'm using an SGD optimizer with a fixed learning rate of 0.005 and weight decay of 0.01, and a cross-entropy loss.
The accuracy of the model is good (around 99% after the 43rd epoch). However:
in some epochs I get 'nan' as the training loss
in some other epochs the accuracy drops significantly (sometimes both happen in the same epoch). However, in the next epoch the accuracy comes back to a normal level.
If I understood correctly, a nan training loss is most often caused by gradient values getting too small (underflow) or too big (overflow). Could this be the case?
Should I try increasing the weight decay to 0.05? Or should I apply gradient clipping to avoid exploding gradients? If so, what would be a reasonable bound? (See the sketch below for what I have in mind.)
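For reference, this is the kind of gradient clipping I have in mind, a minimal sketch assuming the usual backward/step training loop (model is my network instance, and max_norm=1.0 is just a placeholder bound):

# inside the training loop
loss.backward()
# clip the global gradient norm before the optimizer step; 1.0 is an arbitrary placeholder
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
optimizer.step()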
Still I don't understand the second issue.
I'm currently training a VAE model.
The images in question are microstructure rock images (like these).
I defined a compound loss function as the sum of two terms:
MSE, since my images are grayscale but not binary.
KL divergence.
I was getting nan values for the loss function, but figured out that a way around this is to use a weighted sum of the two losses. I chose to weight the MSE by the image size (256x256), so it becomes:
MSE = MSE * 256 * 256
and to weight the KL divergence by a factor of 0.1.
The nan problem was solved, but now the model just predicts one value for the whole image: if I predict an output, it is an array of 256*256 values that are all the same, e.g. 0.502.
Model specs:
10-layer encoder / decoder
Latent vector space of dimension 5
SGD optimizer at lr=0.001
The training loss drops from values in the billions to around 3000 by the 2nd epoch and fluctuates around that
Accuracy during training or validation is below 0.001; I've read this metric is irrelevant anyway when it comes to VAEs
Here is how I sample from the latent vector space:
def get_sample_from_dist(args):
    mean_vec, std_dev_vec = args
    eta_vec = K.random_normal(shape=(K.shape(mean_vec)[0], K.int_shape(mean_vec)[1]), mean=0, stddev=1)
    return mean_vec + K.exp(std_dev_vec) * eta_vec

sample = Lambda(get_sample_from_dist, output_shape=(latent_dim,), name='sample')([mu, log_sigma])
and here is how the encoder generates mu and log_sigma:
# x is the output of the last encoder layer
mu = Dense(latent_dim, name='latent_mu')(x)
log_sigma = Dense(latent_dim, name='latent_sigma')(x)
and here is my loss function:
def vae_loss_func(inputs, outputs, mu, log_sigma):
    x1 = K.flatten(inputs)
    x2 = K.flatten(outputs)
    reconstruction_loss = losses.mse(x1, x2) * 256**2
    kl_loss = -0.5 * 0.1 * K.sum(1 + log_sigma - K.square(mu) - K.square(K.exp(log_sigma)), axis=-1)
    vae_loss = K.mean(reconstruction_loss + kl_loss)
    return vae_loss
Any thoughts where things are going wrong?
I tried different weighting factors in the loss function and using strided and dropout layers; none of these worked. I expect the generated image to vary in pixel value and eventually capture the rock structure.
I am new to PyTorch and I'm trying to build a simple neural net for classification. The problem is that the network doesn't learn at all. I tried various learning rates ranging from 0.3 to 1e-8 and I also tried training it for a longer duration. My data is small, with only 120 training examples, and the batch size is 16. Here is the code:
Define network
model = nn.Sequential(nn.Linear(4999, 1000),
                      nn.ReLU(),
                      nn.Linear(1000, 200),
                      nn.ReLU(),
                      nn.Linear(200, 1),
                      nn.Sigmoid())
Loss and optimizer
import torch.optim as optim
optimizer = optim.SGD(model.parameters(), lr=0.01)
criterion = nn.BCELoss(reduction="mean")
Training
num_epochs = 100
for epoch in range(num_epochs):
    cumulative_loss = 0
    for i, data in enumerate(batch_gen(X_train, y_train, batch_size=16)):
        inputs, labels = data
        inputs = torch.from_numpy(inputs).float()
        labels = torch.from_numpy(labels).float()
        optimizer.zero_grad()
        outputs = model(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        cumulative_loss += loss.item()
        if i % 5 == 0 and i != 0:
            print(f"epoch {epoch} batch {i} => Loss: {cumulative_loss/5}")
print("Finished Training!!")
Any help is appreciated!
The reason your loss doesn't seem to decrease every epoch is that you're not printing it every epoch; you're actually printing it every 5th batch, and the loss does not decrease much per batch.
Try the following; here, the loss will be printed once per epoch.
num_epochs = 100
for epoch in range(num_epochs):
    cumulative_loss = 0
    for i, data in enumerate(batch_gen(X_train, y_train, batch_size=16)):
        inputs, labels = data
        inputs = torch.from_numpy(inputs).float()
        labels = torch.from_numpy(labels).float()
        optimizer.zero_grad()
        outputs = model(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        cumulative_loss += loss.item()
    print(f"epoch {epoch} => Loss: {cumulative_loss}")
print("Finished Training!!")
One reason your loss doesn't decrease could be that your neural net isn't deep enough to learn anything, so try adding more layers.
model = nn.Sequential(nn.Linear(4999, 3000),
                      nn.ReLU(),
                      nn.Linear(3000, 2000),
                      nn.ReLU(),
                      nn.Linear(2000, 1000),
                      nn.ReLU(),
                      nn.Linear(1000, 500),
                      nn.ReLU(),
                      nn.Linear(500, 250),
                      nn.ReLU(),
                      nn.Linear(250, 1),
                      nn.Sigmoid())
Also, I just noticed you're passing data that has very high dimensionality: you have 4999 features/columns and only 120 training examples/rows. Getting a model to converge with so little data is next to impossible, considering how high-dimensional your data is.
I'd suggest you try finding more rows, or perform dimensionality reduction on your input data (like PCA) to reduce the feature space (to maybe 50-100 or fewer features) and then try again, as sketched below. Chances are your model still won't converge, but it's worth a try.
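A minimal sketch of that dimensionality-reduction step, assuming X_train and X_test are NumPy arrays of shape (n_samples, 4999) (X_test is an assumed name, and 50 components is an arbitrary choice):

from sklearn.decomposition import PCA

pca = PCA(n_components=50)                      # placeholder number of components
X_train_reduced = pca.fit_transform(X_train)    # fit the projection on the training data only
X_test_reduced = pca.transform(X_test)          # apply the same projection to the test data

Note that the first nn.Linear layer would then take 50 input features instead of 4999.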
I am trying to predict the hygrothermal response of a wall, given the interior and exterior climate. Based on a literature review, I believe this should be possible with an RNN, but I have not been able to get good accuracy.
The dataset has 12 input features (time-series of exterior and interior climate data) and 10 output features (time-series of hygrothermal response), both containing hourly values for 10 years. This data was created with hygrothermal simulation software; there is no missing data.
Unlike most time-series prediction problems, I want to predict the response over the full length of the input time-series at each time-step, rather than the subsequent values of a time-series (e.g. financial time-series prediction). I have not been able to find similar prediction problems (in similar or other fields), so if you know of one, references are very welcome.
I think this should be possible with RNN, so I am currently using LSTM from Keras. Before training, I preprocess my data the following way:
Discard the first year of data, as the first time steps of the hygrothermal response of the wall are influenced by the initial temperature and relative humidity.
Split into training and testing sets. The training set contains the first 8 years of data; the test set contains the remaining 2 years.
Normalise the training set (zero mean, unit variance) using StandardScaler from sklearn. Normalise the test set analogously using the mean and variance from the training set.
This results in: X_train.shape = (1, 61320, 12), y_train.shape = (1, 61320, 10), X_test.shape = (1, 17520, 12), y_test.shape = (1, 17520, 10)
As these are long time-series, I use a stateful LSTM and cut the time-series as explained here, using the stateful_cut() function (a sketch of the idea is shown below). I only have 1 sample, so batch_size is 1. For T_after_cut I have tried 24 and 120 (24*5); 24 appears to give better results. This results in X_train.shape = (2555, 24, 12), y_train.shape = (2555, 24, 10), X_test.shape = (730, 24, 12), y_test.shape = (730, 24, 10).
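For reference, a minimal sketch of the scaling and cutting steps. This is not the actual stateful_cut() from the linked post; it assumes the arrays are still 2-D (timesteps, features) at this point and that the series length is an exact multiple of T_after_cut (which holds here: 61320 = 2555 * 24):

from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)   # fit the scaler on the training years only
X_test = scaler.transform(X_test)         # reuse the training mean/variance on the test years

T_after_cut, features, targets = 24, 12, 10
X_train = X_train.reshape(-1, T_after_cut, features)   # (2555, 24, 12)
y_train = y_train.reshape(-1, T_after_cut, targets)    # (2555, 24, 10)
X_test = X_test.reshape(-1, T_after_cut, features)     # (730, 24, 12)
y_test = y_test.reshape(-1, T_after_cut, targets)      # (730, 24, 10)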
Next, I build and train the LSTM model as follows:
model = Sequential()
model.add(LSTM(128,
               batch_input_shape=(batch_size, T_after_cut, features),
               return_sequences=True,
               stateful=True,
               ))
model.add(TimeDistributed(Dense(targets)))
model.compile(loss='mean_squared_error', optimizer=Adam())
model.fit(X_train, y_train, epochs=100, batch_size=batch_size, verbose=2, shuffle=False)
Unfortunately, I don't get accurate prediction results, not even for the training set, so the model has high bias.
The prediction results of the LSTM model for all targets
How can I improve my model? I have already tried the following:
Not discarding the first year of the dataset -> no significant difference
Differencing the input feature time-series (subtracting the previous value from the current value) -> slightly worse results
Up to four stacked LSTM layers, all with the same hyperparameters -> no significant difference in results but longer training time
Dropout layer after LSTM layer (though this is usually used to reduce variance and my model has high bias) -> slightly better results, but difference might not be statistically significant
Am I doing something wrong with the stateful LSTM? Do I need to try different RNN models? Should I preprocess the data differently?
Furthermore, training is very slow: about 4 hours for the model above. Hence I am reluctant to do an extensive hyperparameter grid search...
In the end, I managed to solve this the following way:
Using more samples to train instead of only 1 (I used 18 samples to train and 6 to test)
Keep the first year of data, as the output time-series for all samples have the same 'starting point' and the model needs this information to learn
Standardise both input and output features (zero mean, unit variance). I found this improved prediction accuracy and training speed
Use a stateful LSTM as described here, but additionally reset the states after each epoch (see below for code). I used batch_size = 6 and T_after_cut = 1460. If T_after_cut is longer, training is slower; if T_after_cut is shorter, accuracy decreases slightly. If more samples are available, I think using a larger batch_size will be faster.
Use CuDNNLSTM instead of LSTM; this sped up training by about 4x!
I found that more units resulted in higher accuracy and faster convergence (shorter training time). I also found that a GRU is as accurate as the LSTM, though it converges faster for the same number of units.
Monitor validation loss during training and use early stopping
The LSTM model is built and trained as follows:
def define_reset_states_batch(nb_cuts):
    class ResetStatesCallback(Callback):
        def __init__(self):
            self.counter = 0

        def on_batch_begin(self, batch, logs={}):
            # reset states when nb_cuts batches are completed
            if self.counter % nb_cuts == 0:
                self.model.reset_states()
            self.counter += 1

        def on_epoch_end(self, epoch, logs={}):
            # reset states after each epoch
            self.model.reset_states()

    return ResetStatesCallback
model = Sequential()
model.add(layers.CuDNNLSTM(256, batch_input_shape=(batch_size, T_after_cut, features),
                           return_sequences=True,
                           stateful=True))
model.add(layers.TimeDistributed(layers.Dense(targets, activation='linear')))

optimizer = RMSprop(lr=0.002)
model.compile(loss='mean_squared_error', optimizer=optimizer)

earlyStopping = EarlyStopping(monitor='val_loss', min_delta=0.005, patience=15, verbose=1, mode='auto')
ResetStatesCallback = define_reset_states_batch(nb_cuts)
model.fit(X_dev, y_dev, epochs=n_epochs, batch_size=n_batch, verbose=1, shuffle=False,
          validation_data=(X_eval, y_eval), callbacks=[ResetStatesCallback(), earlyStopping])
This gave me very satisfying accuracy (R2 over 0.98):
This figure shows the temperature (left) and relative humidity (right) in the wall over 2 years (data not used in training), prediction in red and true output in black. The residuals show that the error is very small and that the LSTM learns to capture the long-term dependencies to predict the relative humidity.
While training a convolutional neural network following this article, the accuracy on the training set increases too much while the accuracy on the test set levels off.
Below is an example with 6400 training examples, randomly chosen at each epoch (so some examples might have been seen in previous epochs, some might be new), and the same 6400 test examples.
For a bigger dataset (64000 or 100000 training examples), the increase in training accuracy is even more abrupt, reaching 98% by the third epoch.
I also tried using the same 6400 training examples each epoch, just randomly shuffled. As expected, the result is worse.
epoch 3 loss 0.54871 acc 79.01
learning rate 0.1
nr_test_examples 6400
TEST epoch 3 loss 0.60812 acc 68.48
nr_training_examples 6400
tb 91
epoch 4 loss 0.51283 acc 83.52
learning rate 0.1
nr_test_examples 6400
TEST epoch 4 loss 0.60494 acc 68.68
nr_training_examples 6400
tb 91
epoch 5 loss 0.47531 acc 86.91
learning rate 0.05
nr_test_examples 6400
TEST epoch 5 loss 0.59846 acc 68.98
nr_training_examples 6400
tb 91
epoch 6 loss 0.42325 acc 92.17
learning rate 0.05
nr_test_examples 6400
TEST epoch 6 loss 0.60667 acc 68.10
nr_training_examples 6400
tb 91
epoch 7 loss 0.38460 acc 95.84
learning rate 0.05
nr_test_examples 6400
TEST epoch 7 loss 0.59695 acc 69.92
nr_training_examples 6400
tb 91
epoch 8 loss 0.35238 acc 97.58
learning rate 0.05
nr_test_examples 6400
TEST epoch 8 loss 0.60952 acc 68.21
This is my model (I'm using a ReLU activation after each convolution):
conv 5x5 (1, 64)
max-pooling 2x2
dropout
conv 3x3 (64, 128)
max-pooling 2x2
dropout
conv 3x3 (128, 256)
max-pooling 2x2
dropout
conv 3x3 (256, 128)
dropout
fully_connected(18*18*128, 128)
dropout
output(128, 128)
What could be the cause?
I'm using Momentum Optimizer with learning rate decay:
batch = tf.Variable(0, trainable=False)
train_size = 6400
learning_rate = tf.train.exponential_decay(
    0.1,                 # Base learning rate.
    batch * batch_size,  # Current index into the dataset.
    train_size * 5,      # Decay step.
    0.5,                 # Decay rate.
    staircase=True)
# Use simple momentum for the optimization.
optimizer = tf.train.MomentumOptimizer(learning_rate,
                                       0.9).minimize(cost, global_step=batch)
This is very much expected. The problem is called overfitting: your model starts "memorizing" the training examples without actually learning anything useful for the test set. In fact, this is exactly why we use a test set in the first place: with a complex enough model we can always fit the training data perfectly, even if not meaningfully. The test set is what tells us what the model has actually learned.
It's also useful to use a validation set, which is like a test set, but which you use to decide when to stop training: when the validation error stops decreasing, you stop training. Why not use the test set for this? The test set is meant to tell you how well your model would do in the real world. If you start using information from the test set to make choices about your training process, then it's like cheating, and you will be punished by your test error no longer representing your real-world error.
Lastly, convolutional neural networks are notorious for their ability to overfit. It has been shown that conv-nets can reach zero training error even if you shuffle the labels or even feed random pixels. That means there doesn't have to be a real pattern for a conv-net to learn to represent it. This means that you have to regularize a conv-net, i.e. use things like dropout, batch normalization and early stopping (a sketch follows below).
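For illustration only, a small Keras sketch showing those three ingredients together; it is not the asker's TensorFlow model, and the layer sizes, input shape and class count are placeholders:

from tensorflow import keras
from tensorflow.keras import layers

# Placeholder conv block with dropout and batch normalization,
# trained with early stopping on a held-out validation split.
model = keras.Sequential([
    layers.Conv2D(64, 3, activation='relu', input_shape=(144, 144, 1)),
    layers.BatchNormalization(),
    layers.MaxPooling2D(2),
    layers.Dropout(0.25),
    layers.Flatten(),
    layers.Dense(128, activation='relu'),
    layers.Dropout(0.5),
    layers.Dense(10, activation='softmax'),   # placeholder class count
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])

early_stop = keras.callbacks.EarlyStopping(monitor='val_loss', patience=5, restore_best_weights=True)
# model.fit(X_train, y_train, validation_split=0.1, epochs=100, callbacks=[early_stop])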
I'll leave a few links if you want to read more:
Over-fitting, validation, early stopping
https://elitedatascience.com/overfitting-in-machine-learning
Conv-nets fitting random labels:
https://arxiv.org/pdf/1611.03530.pdf
(this paper is a bit advanced, but it's interesting to skim through)
P.S. To actually improve your test accuracy, you will need to change your model or train with data augmentation (a minimal sketch follows). You might want to try transfer learning as well.
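For the data-augmentation suggestion, a minimal Keras sketch (the transform parameters are arbitrary placeholders):

from tensorflow.keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(rotation_range=15,
                             width_shift_range=0.1,
                             height_shift_range=0.1,
                             horizontal_flip=True)
# model.fit(datagen.flow(X_train, y_train, batch_size=64), epochs=50)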
I used the following TensorFlow implementation for a binary classification task and got really bad accuracy. However, when I trained on the same dataset with a sklearn.ensemble.GradientBoostingClassifier without any tuning, the result was pretty good. When I took a closer look at the out-of-sample predictions the neural network made, I realized most of the predictions were the positive class.
             precision    recall  f1-score   support

          0       0.01      1.00      0.02         8
          1       1.00      0.37      0.55      1630

avg / total       1.00      0.38      0.54      1638
The implementation of the 2-layer fully connected network:
import math
import tensorflow as tf

batch_size = 200
feature_size = len(train_features.columns)

graph = tf.Graph()
with graph.as_default():
    # Input data. For the training data, we use a placeholder that will be fed
    # at run time with a training minibatch.
    tf_train_dataset = tf.placeholder(tf.float32, shape=(batch_size, feature_size))
    tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
    tf_valid_dataset = tf.constant(valid_dataset)
    tf_test_dataset = tf.constant(test_dataset)

    # Variables.
    weights1 = tf.Variable(tf.truncated_normal([feature_size, 512]))
    biases1 = tf.Variable(tf.zeros([512]))
    weights2 = tf.Variable(tf.truncated_normal([512, 512], stddev=0.005))
    biases2 = tf.Variable(tf.zeros([512]))
    weights = tf.Variable(tf.truncated_normal([512, num_labels], stddev=0.005))
    biases = tf.Variable(tf.zeros([num_labels]))

    hidden_layer1 = tf.nn.relu(tf.matmul(tf_train_dataset, weights1) + biases1)
    hidden_layer2 = tf.nn.relu(tf.matmul(hidden_layer1, weights2) + biases2)
    logits = tf.matmul(hidden_layer2, weights) + biases
    loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=tf_train_labels, logits=logits))

    # Optimizer.
    optimizer = tf.train.AdamOptimizer(0.0005).minimize(loss)

    # Predictions for the training, validation, and test data.
    train_prediction = tf.nn.softmax(logits)
    valid_hidden_layer1 = tf.nn.relu(tf.matmul(tf_valid_dataset, weights1) + biases1)
    valid_hidden_layer2 = tf.nn.relu(tf.matmul(valid_hidden_layer1, weights2) + biases2)
    valid_prediction = tf.nn.softmax(tf.matmul(valid_hidden_layer2, weights) + biases)
    test_hidden_layer1 = tf.nn.relu(tf.matmul(tf_test_dataset, weights1) + biases1)
    test_hidden_layer2 = tf.nn.relu(tf.matmul(test_hidden_layer1, weights2) + biases2)
    test_prediction = tf.nn.softmax(tf.matmul(test_hidden_layer2, weights) + biases)
Any suggestion on how to debug this?
The sklearn GradientBoostingClassifier is a different algorithm from a neural network. It is based on regression trees, which require less fine-tuning than neural networks to give good performance. This is the trade-off when using neural networks: if you want performance better than alternative algorithms like random forests and SVMs, you need to tune the hyperparameters.
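For reference, the untuned sklearn baseline mentioned in the question is essentially just the following (variable names such as train_labels and test_features are assumptions following the question's train_features):

from sklearn.ensemble import GradientBoostingClassifier

clf = GradientBoostingClassifier()        # default hyperparameters, no tuning
clf.fit(train_features, train_labels)     # train_labels is an assumed name
predictions = clf.predict(test_features)  # test_features is an assumed name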
As far as that goes, the first thing you should do is initialize the biases of your ReLU units to a nonzero value (see the snippet below). This helps prevent them from entering a regime where they 'die' and end up giving 0 output and 0 gradient forever. You should also try different learning rates; a learning rate that is too high will cause the algorithm not to learn properly, and one that is too low will waste resources.
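For example, the zero bias initializations in the graph above could be replaced with a small positive constant (0.1 is a common but arbitrary choice):

biases1 = tf.Variable(tf.constant(0.1, shape=[512]))
biases2 = tf.Variable(tf.constant(0.1, shape=[512]))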
You should also experiment with the number of neurons and layers. I see you have 512 neurons in each hidden layer, and this might be too many unless your problem is that high-dimensional and you have enough data. What are your training and test/cross-validation errors like? You should keep track of these while you train. If you're getting low training error but high validation error, then you should cut down on the number of neurons because you are overfitting. You could also try having just one hidden layer and see if that helps.