PyTorch Neural Net not learning

I am new to PyTorch and I'm trying to build a simple neural net for classification. The problem is that the network doesn't learn at all. I tried various learning rates ranging from 0.3 down to 1e-8, and I also tried training for a longer duration. My data is small, with only 120 training examples, and the batch size is 16. Here is the code:
Define network
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4999, 1000),
                      nn.ReLU(),
                      nn.Linear(1000, 200),
                      nn.ReLU(),
                      nn.Linear(200, 1),
                      nn.Sigmoid())
Loss and optimizer
import torch.optim as optim
optimizer = optim.SGD(model.parameters(), lr=0.01)
criterion = nn.BCELoss(reduction="mean")
Training
num_epochs = 100
for epoch in range(num_epochs):
    cumulative_loss = 0
    for i, data in enumerate(batch_gen(X_train, y_train, batch_size=16)):
        inputs, labels = data
        inputs = torch.from_numpy(inputs).float()
        labels = torch.from_numpy(labels).float()

        optimizer.zero_grad()
        outputs = model(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        cumulative_loss += loss.item()
        if i % 5 == 0 and i != 0:
            print(f"epoch {epoch} batch {i} => Loss: {cumulative_loss/5}")
print("Finished Training!!")
Any help is appreciated!

The reason your loss doesn't seem to decrease every epoch is that you aren't printing it every epoch; you're actually printing it every 5th batch, and the loss does not decrease much from one batch to the next.
Try the following instead, which prints the loss once per epoch:
num_epochs = 100
for epoch in range(num_epochs):
    cumulative_loss = 0
    for i, data in enumerate(batch_gen(X_train, y_train, batch_size=16)):
        inputs, labels = data
        inputs = torch.from_numpy(inputs).float()
        labels = torch.from_numpy(labels).float()

        optimizer.zero_grad()
        outputs = model(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        cumulative_loss += loss.item()
    print(f"epoch {epoch} => Loss: {cumulative_loss}")
print("Finished Training!!")
One reason your loss doesn't decrease could be that your neural net isn't deep enough to learn anything, so try adding more layers:
model = nn.Sequential(nn.Linear(4999, 3000),
                      nn.ReLU(),
                      nn.Linear(3000, 2000),
                      nn.ReLU(),
                      nn.Linear(2000, 1000),
                      nn.ReLU(),
                      nn.Linear(1000, 250),
                      nn.ReLU(),
                      nn.Linear(250, 1),
                      nn.Sigmoid())
Also, I just noticed you're passing data with very high dimensionality: you have 4999 features/columns but only 120 training examples/rows. Getting a model to converge with so little data is next to impossible, especially when the data is this high-dimensional.
I'd suggest you either find more rows or perform dimensionality reduction on your input data (PCA, for example) to shrink the feature space to maybe 50-100 features or fewer, and then try again. Chances are your model still won't converge, but it's worth a try.
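If you go the PCA route, here is a minimal sketch with scikit-learn (X_train and X_test are assumed names for your NumPy arrays with 4999 columns; they're not from your post):

from sklearn.decomposition import PCA

pca = PCA(n_components=50)                    # keep the 50 strongest components
X_train_red = pca.fit_transform(X_train)      # fit the projection on training data only
X_test_red = pca.transform(X_test)            # reuse the same projection on held-out data
print(pca.explained_variance_ratio_.sum())    # fraction of variance retained

Note that with only 120 rows, PCA can produce at most 120 components, so 50 is feasible.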

Related

How can I implement validation loss in my training loop?

I have a regression model and I have :
input_data = torch.Tensor(features)
target_data = torch.Tensor(target)
The feature values are x_values and y_values. I did not split them into train and test sets. My training loop is shown below; I can see only the training loss, but I would like to add val_loss and compare the two. How can I implement it?
# Training the model
losses = []
for epoch in range(3000):
    # Forward pass
    output = model(input_data)
    # Compute loss
    loss = criterion(output, target_data)
    # Backward pass and update
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    # Print loss
    if epoch % 10 == 0:
        losses.append(float(loss.item()))
        print(f'Epoch {epoch}, Loss: {loss.item()}')
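One common pattern, sketched below under assumed names (train_data, train_target, val_data, val_target, e.g. produced by scikit-learn's train_test_split), is to run a no-grad validation pass each epoch:

losses, val_losses = [], []
for epoch in range(3000):
    # Training pass
    model.train()
    output = model(train_data)
    loss = criterion(output, train_target)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # Validation pass: evaluation mode, no gradient tracking
    model.eval()
    with torch.no_grad():
        val_loss = criterion(model(val_data), val_target)

    if epoch % 10 == 0:
        losses.append(loss.item())
        val_losses.append(val_loss.item())
        print(f'Epoch {epoch}, Loss: {loss.item():.4f}, Val loss: {val_loss.item():.4f}')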

NaN as training loss in a CNN

I'm trying to train a small CNN from scratch to classify images of 10 different animal species. The images have different dimensions, but I'd say around 300x300. Anyway, every image is resized to 224x224 before going into the model.
Here is the network I'm training:
# Convolution 1
self.cnn1 = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, stride=1, padding=0)
self.relu1 = nn.ReLU()
# Max pool 1
self.maxpool1 = nn.MaxPool2d(kernel_size=2)
# Convolution 2
self.cnn2 = nn.Conv2d(in_channels=16, out_channels=32, kernel_size=3, stride=1, padding=0)
self.relu2 = nn.ReLU()
# Max pool 2
self.maxpool2 = nn.MaxPool2d(kernel_size=2)
# Fully connected 1
self.fc1 = nn.Linear(32 * 54 * 54, 10)
I'm using an SGD optimizer with a fixed learning rate of 0.005 and a weight decay of 0.01, and I'm using cross-entropy loss.
The accuracy of the model is good (around 99% after the 43rd epoch). However:
- in some epochs I get NaN as the training loss
- in some other epochs the accuracy drops significantly (sometimes the two happen in the same epoch); in the next epoch, however, the accuracy returns to a normal level
If I understood correctly, a NaN training loss is most often caused by gradient values getting too small (underflow) or too large (overflow). Could this be the case?
Should I try increasing the weight decay to 0.05? Or should I apply gradient clipping to avoid exploding gradients? If so, what would be a reasonable bound?
I still don't understand the second issue.
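For reference, gradient clipping in PyTorch is a one-line addition to the training step. This is only a sketch; max_norm=1.0 is a common starting point, not a value from this thread:

optimizer.zero_grad()
loss = criterion(model(inputs), labels)
loss.backward()
# Rescale gradients so their global norm is at most max_norm
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
optimizer.step()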

How is the training accuracy in Keras determined for every epoch?

I am training a model in Keras as follows:
model.fit(Xtrn, ytrn, batch_size=16, epochs=50, verbose=1, shuffle=True,
          callbacks=[model_checkpoint], validation_data=(Xval, yval))
The fitting output (a training-progress screenshot in the original post, omitted here) shows 8000 training samples per epoch.
As shown in the model.fit call, I have a batch size of 16 and a total of 8000 training samples. So from my understanding, the weights are updated once per batch of 16 samples, which means the update step runs 500 times in a single epoch (8000/16 = 500).
So let's take the training accuracy printed in the output for Epoch 1/50, which in this case is 0.9381. I would like to know how this training accuracy of 0.9381 is derived.
Is it the mean training accuracy, averaged over the 500 batch updates performed in the epoch?
Or is it the best (max) training accuracy out of those 500 updates?
Take a look at the BaseLogger in Keras, which computes a running mean.
For each epoch, the reported accuracy is the sample-weighted average over all batches seen so far in that epoch.
class BaseLogger(Callback):
    """Callback that accumulates epoch averages of metrics.

    This callback is automatically applied to every Keras model.
    """

    def on_epoch_begin(self, epoch, logs=None):
        self.seen = 0
        self.totals = {}

    def on_batch_end(self, batch, logs=None):
        logs = logs or {}
        batch_size = logs.get('size', 0)
        self.seen += batch_size

        for k, v in logs.items():
            if k in self.totals:
                self.totals[k] += v * batch_size
            else:
                self.totals[k] = v * batch_size

    def on_epoch_end(self, epoch, logs=None):
        if logs is not None:
            for k in self.params['metrics']:
                if k in self.totals:
                    # Make value available to next callbacks.
                    logs[k] = self.totals[k] / self.seen
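As a concrete illustration: if an epoch consisted of just two batches of 16 samples with batch accuracies 0.90 and 0.96, the logger would report (0.90*16 + 0.96*16) / 32 = 0.93 as that epoch's accuracy.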

What do the loss and accuracy in Keras training results represent?

I have two classes with 3 images each. I tried this code in Keras.
trainingDataGenerator = ImageDataGenerator()
trainGenerator = trainingDataGenerator.flow_from_directory(
    trainingDataDir,
    target_size=(28, 28),
    batch_size=1,
    seed=7,
    class_mode='binary',
)

FilterSize = (3, 3)
inputShape = (imageWidth, imageHeight, 3)

model = Sequential()
model.add(Conv2D(32, FilterSize, input_shape=inputShape))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Activation('relu'))
model.add(Dense(1))
model.add(Activation('sigmoid'))

model.compile(loss='binary_crossentropy',
              optimizer='rmsprop',
              metrics=['accuracy'])

model.fit_generator(
    trainGenerator,
    steps_per_epoch=3,
    epochs=epochs)
My Output:
When I train this model, I get:
Using TensorFlow backend.
Found 2 images belonging to 2 classes.
Epoch 1/1
3/3 [==============================] - 0s - loss: 5.3142 - acc: 0.6667
My Question:
I wonder how the loss and accuracy (i.e., loss: 5.3142 - acc: 0.6667) are determined, and on what basis. I have not given any validation images against which to validate the model. Are this loss and accuracy computed against the training images themselves?
In short, can we say something like: "This model has an accuracy of X% and a loss of Y, without validation images"?
The training loss and accuracy are calculated not by comparing to validation data but by comparing your network's prediction for a sample x with the label y you provide for that sample in the training set.
You initialize your neural network and (usually) set all weights to random values with a certain deviation. After that you feed the features of your training dataset into the network and let it "guess" the outcome, i.e. the label you have for each sample (since you're doing supervised learning, as in your case).
Then the framework compares that guess with the actual label, calculates the error, and backpropagates it through the network, thereby adjusting and improving the weights.
This works perfectly well without any validation data.
Validation data lets you gauge the quality of your model (loss, accuracy, etc.) by having it predict on unseen data. That gives you the so-called validation loss/accuracy, and with this information you tune your hyperparameters.
As a last step, you use your test data to evaluate the final quality of the training.
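To make the distinction concrete, here is a hedged sketch (the names X_train, y_train, X_val, y_val are assumptions, not from the question). Keras computes training metrics on the training batches themselves and adds validation metrics only when you supply validation data:

# Reports loss/acc computed on the training batches only
history = model.fit(X_train, y_train, epochs=10, batch_size=1)

# Additionally reports val_loss/val_acc, computed on the held-out data
history = model.fit(X_train, y_train, epochs=10, batch_size=1,
                    validation_data=(X_val, y_val))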

What does the 'training loss' mean in machine learning?

I found some sample code on the TensorFlow website, as follows.
input_fn = tf.contrib.learn.io.numpy_input_fn({"x":x_train}, y_train, batch_size=4, num_epochs=1000)
eval_input_fn = tf.contrib.learn.io.numpy_input_fn({"x":x_eval}, y_eval, batch_size=4, num_epochs=1000)
# We can invoke 1000 training steps by invoking the method and passing the
# training data set.
estimator.fit(input_fn=input_fn, steps=1000)
# Here we evaluate how well our model did.
train_loss = estimator.evaluate(input_fn=input_fn)
eval_loss = estimator.evaluate(input_fn=eval_input_fn)
print("train loss: %r"% train_loss)
print("eval loss: %r"% eval_loss)
Would you let me know what the 'training loss' means?
Training loss is the loss computed on the training data. A loss function takes the correct output and the model's output and computes the error between them. That error is then used to adjust the weights, based on how large it was and which elements contributed to it the most.
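As a small, made-up illustration of a loss function (mean squared error here, in NumPy; not from the original answer):

import numpy as np

y_true = np.array([1.0, 2.0, 3.0])       # correct outputs
y_pred = np.array([1.1, 1.9, 3.5])       # model outputs
mse = np.mean((y_true - y_pred) ** 2)    # average squared error
print(mse)                               # 0.09 = (0.01 + 0.01 + 0.25) / 3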
