I found some sample code on the TensorFlow website, as follows:
input_fn = tf.contrib.learn.io.numpy_input_fn({"x":x_train}, y_train, batch_size=4, num_epochs=1000)
eval_input_fn = tf.contrib.learn.io.numpy_input_fn({"x":x_eval}, y_eval, batch_size=4, num_epochs=1000)
# We can invoke 1000 training steps by invoking the method and passing the
# training data set.
estimator.fit(input_fn=input_fn, steps=1000)
# Here we evaluate how well our model did.
train_loss = estimator.evaluate(input_fn=input_fn)
eval_loss = estimator.evaluate(input_fn=eval_input_fn)
print("train loss: %r"% train_loss)
print("eval loss: %r"% eval_loss)
Could you let me know what 'training loss' means?
Training loss is the loss computed on the training data. A loss function takes the correct output and the model's output and computes the error between them. That error is then used to adjust the weights, based on how large it was and which elements contributed to it the most.
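As a minimal illustration (not the estimator's internal implementation), a mean squared error loss over hypothetical correct outputs and model outputs can be computed by hand like this:

import numpy as np

# hypothetical correct outputs and model outputs
y_true = np.array([0.0, -1.0, -2.0, -3.0])
y_pred = np.array([0.1, -0.9, -2.2, -2.8])

# mean squared error: average of the squared differences
mse = np.mean((y_true - y_pred) ** 2)
print(f"training loss (MSE): {mse:.4f}")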
I'm a bit confused about how to accumulate the batch losses to obtain the epoch loss.
Two questions:
Is #1 (see comments below) the correct way to calculate the loss with masks?
Is #2 the correct way to report the epoch loss?
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-5)
criterion = torch.nn.BCEWithLogitsLoss(pos_weight=pos_weight)

for epoch in range(10):
    EPOCH_LOSS = 0.
    for inputs, gt_labels, masks in training_dataloader:
        optimizer.zero_grad()
        outputs = model(inputs)
        # 1: Is this the correct way to calculate batch loss? Do I multiply batch_loss by outputs.shape[0] before adding it to epoch_loss?
        batch_loss = (masks * criterion(outputs, gt_labels.float())).mean()
        EPOCH_LOSS += batch_loss
        batch_loss.backward()
        optimizer.step()
    # 2: Then what do I do here? Do I divide EPOCH_LOSS by len(training_dataloader)?
    print(f'EPOCH LOSS: {EPOCH_LOSS / len(training_dataloader):.3f}')
Your criterion uses the default reduction (reduction='mean', see the docs), so it returns a single scalar and your masking approach won't work. Apply the mask one step earlier (before the loss calculation), like so:
batch_loss = (criterion(outputs*masks, gt_labels.float()*masks)).mean()
OR
batch_loss = (criterion(outputs[masks], gt_labels.float()[masks])).mean()
Without seeing your data, though, it might be in a different format, so you might want to check that this works as expected.
As for your actual question, it depends on how you want to report it. I would simply sum all of the batches' losses and report that, but you can divide by the number of batches if you want to report the average loss per batch in the epoch.
Because this is purely an illustrative property of your model, it doesn't really matter which one you pick, as long as it is consistent between epochs so that it reflects the fact that your model is learning.
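One way to make the per-element masking explicit is to ask the criterion for per-element losses (reduction='none') and average only over the valid elements; a minimal sketch, assuming model, training_dataloader, masks, and pos_weight as in the question:

import torch

criterion = torch.nn.BCEWithLogitsLoss(pos_weight=pos_weight, reduction='none')
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-5)

for epoch in range(10):
    epoch_loss = 0.0
    for inputs, gt_labels, masks in training_dataloader:
        optimizer.zero_grad()
        outputs = model(inputs)
        # per-element losses, zeroed where the mask is 0, averaged over the valid elements only
        elementwise = criterion(outputs, gt_labels.float())
        batch_loss = (elementwise * masks).sum() / masks.float().sum().clamp(min=1.0)
        batch_loss.backward()
        optimizer.step()
        # .item() detaches the scalar so the computation graph is not kept alive across batches
        epoch_loss += batch_loss.item()
    # report the average batch loss for the epoch
    print(f'epoch {epoch}: average batch loss {epoch_loss / len(training_dataloader):.3f}')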
I'm trying to create a neural network for image classification. This is my model summary. I have normalized my dataset and shuffled my data. When I run model.fit, the val_loss is very high, sometimes close to 100, whereas my training loss is less than 0.8.
If you don't normalize the test data, the validation loss will be very high compared to the loss on training data that was normalized. I used a simple MNIST model to demonstrate the importance of normalization.
(x_train, y_train), (x_test, y_test) = mnist.load_data()
# this is to demonstrate the importance of normalizing both training and testing data
x_train, x_test = x_train / 255.0, x_test / 1.
When the test data is not normalized while the training data was, the training loss is 0.0771 whereas the loss on the test data is 13.1599. Please check the complete code here. Thanks!
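For comparison, a minimal sketch (not the linked code) in which both training and test data are normalized, so the two losses stay on the same scale:

import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
# normalize BOTH training and test data to the [0, 1] range
x_train, x_test = x_train / 255.0, x_test / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x_train, y_train, epochs=2)
# with consistent preprocessing, the test loss is comparable to the training loss
model.evaluate(x_test, y_test)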
I have a multiclass classification problem that depends on historical data. I am trying an LSTM with loss='sparse_categorical_crossentropy'. The training accuracy increases and the training loss decreases as expected, but my test accuracy fluctuates wildly.
What am I doing wrong?
Input data:
X = np.reshape(X, (X.shape[0], X.shape[1], 1))
X.shape
(200146, 13, 1)
My model
# fix random seed for reproducibility
seed = 7
np.random.seed(seed)
# define 10-fold cross validation test harness
kfold = StratifiedKFold(n_splits=10, shuffle=False, random_state=seed)
cvscores = []
for train, test in kfold.split(X, y):
    regressor = Sequential()
    # units = the number of LSTM units we want in this first layer -> we want very high dimensionality, so we need a high number
    # return_sequences = True because we are adding another layer after this
    # input shape = the last two dimensions and the indicator
    regressor.add(LSTM(units=50, return_sequences=True, input_shape=(X[train].shape[1], 1)))
    regressor.add(Dropout(0.2))
    # extra LSTM layer
    regressor.add(LSTM(units=50, return_sequences=True))
    regressor.add(Dropout(0.2))
    # 3rd
    regressor.add(LSTM(units=50, return_sequences=True))
    regressor.add(Dropout(0.2))
    # 4th
    regressor.add(LSTM(units=50))
    regressor.add(Dropout(0.2))
    # output layer
    regressor.add(Dense(4, activation='softmax', kernel_regularizer=regularizers.l2(0.001)))
    # compile the RNN
    regressor.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
    # set callback functions to early stop training and save the best model so far
    callbacks = [EarlyStopping(monitor='val_loss', patience=9),
                 ModelCheckpoint(filepath='best_model.h5', monitor='val_loss', save_best_only=True)]
    history = regressor.fit(X[train], y[train], epochs=250, callbacks=callbacks,
                            validation_data=(X[test], y[test]))
    # plot train and validation loss
    pyplot.plot(history.history['loss'])
    pyplot.plot(history.history['val_loss'])
    pyplot.title('model train vs validation loss')
    pyplot.ylabel('loss')
    pyplot.xlabel('epoch')
    pyplot.legend(['train', 'validation'], loc='upper right')
    pyplot.show()
    # evaluate the model
    scores = regressor.evaluate(X[test], y[test], verbose=0)
    print("%s: %.2f%%" % (regressor.metrics_names[1], scores[1]*100))
    cvscores.append(scores[1] * 100)
print("%.2f%% (+/- %.2f%%)" % (np.mean(cvscores), np.std(cvscores)))
Results:
[screenshots of the training output and the train-vs-validation loss plot]
What you are describing here is overfitting. Your model keeps learning the specifics of your training data and doesn't generalize; put another way, it is memorizing the exact features of your training set. This is one of the main problems you have to deal with in deep learning, and there is no single solution: you have to try different architectures, different hyperparameters, and so on.
You can start with a small model that underfits (i.e. both training and validation accuracy are low) and keep increasing its capacity until it overfits. Then you can play around with the optimizer and other hyperparameters.
By a smaller model I mean one with fewer hidden units or fewer layers.
You seem to have too many LSTM layers stacked on top of each other, which eventually leads to overfitting. You should probably decrease the number of layers.
Your model seems to be overfitting, since the training error keeps decreasing while the validation error does not; overall, it fails to generalize.
You should try reducing the model complexity by removing some of the LSTM layers. Also, try varying the batch size; it can reduce the fluctuations in the loss.
You can also consider varying the learning rate.
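As an illustration of the suggestions above (a sketch, not a tuned solution), a smaller architecture could look like this, reusing X[train], y[train], X[test], y[test], regularizers, and the callbacks list from the question:

regressor = Sequential()
# a single recurrent layer with fewer units is a reasonable starting point
regressor.add(LSTM(units=32, input_shape=(X[train].shape[1], 1)))
regressor.add(Dropout(0.2))
regressor.add(Dense(4, activation='softmax', kernel_regularizer=regularizers.l2(0.001)))
regressor.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
# a larger batch size can smooth out fluctuations in the validation loss
history = regressor.fit(X[train], y[train], epochs=250, batch_size=64,
                        callbacks=callbacks, validation_data=(X[test], y[test]))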
I am new to Keras and have learned how to fit and evaluate a model.
After evaluating the model, one can see the actual predictions made by the model.
I am wondering: is it also possible to see the predictions during fitting in Keras? So far I can't find any code that does this.
Since this question doesn't specify "epochs", and since using callbacks may involve extra computation, I don't think it's exactly a duplicate.
With TensorFlow, you can use a custom training loop with eager execution turned on. A simple tutorial for creating a custom training loop: https://www.tensorflow.org/tutorials/eager/custom_training_walkthrough
Basically you will:
# transform your data into a Dataset:
dataset = tf.data.Dataset.from_tensor_slices(
    (x_train, y_train)).shuffle(some_buffer).batch(batchSize)
# the above is buggy in some versions regarding shuffling; you may need to shuffle
# again between each epoch

# create an optimizer
optimizer = tf.keras.optimizers.Adam()

# create an epoch loop:
for e in range(epochs):
    # create a batch loop
    for i, (x, y_true) in enumerate(dataset):
        # create a tape to record actions
        with tf.GradientTape() as tape:
            # take the model's predictions
            y_pred = model(x)
            # calculate loss
            loss = tf.keras.losses.binary_crossentropy(y_true, y_pred)
        # calculate gradients
        gradients = tape.gradient(loss, model.trainable_weights)
        # apply gradients
        optimizer.apply_gradients(zip(gradients, model.trainable_weights))
You can use the y_pred variable for anything you like, including getting its NumPy value with numpy_pred = y_pred.numpy().
The tutorial gives some more details about metrics and a validation loop.
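Alternatively, since the question mentions callbacks: a custom Keras callback can print predictions on a small fixed sample at the end of each epoch, at the cost of the extra prediction pass noted above. This is a sketch (not from the linked tutorial); x_sample is any small array of inputs you choose to monitor:

import tensorflow as tf

class PredictionLogger(tf.keras.callbacks.Callback):
    def __init__(self, x_sample):
        super().__init__()
        self.x_sample = x_sample  # a few inputs to monitor during training

    def on_epoch_end(self, epoch, logs=None):
        # extra forward pass: this is the additional computation mentioned above
        preds = self.model.predict(self.x_sample, verbose=0)
        print(f"epoch {epoch}: predictions on the monitored sample: {preds.ravel()[:5]}")

# usage: model.fit(x_train, y_train, epochs=10, callbacks=[PredictionLogger(x_train[:5])])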
I have two classes with 3 images each. I tried this code in Keras.
trainingDataGenerator = ImageDataGenerator()
trainGenerator = trainingDataGenerator.flow_from_directory(
    trainingDataDir,
    target_size=(28, 28),
    batch_size=1,
    seed=7,
    class_mode='binary',
)

FilterSize = (3, 3)
inputShape = (imageWidth, imageHeight, 3)

model = Sequential()
model.add(Conv2D(32, FilterSize, input_shape=inputShape))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Activation('relu'))
model.add(Dense(1))
model.add(Activation('sigmoid'))

model.compile(loss='binary_crossentropy',
              optimizer='rmsprop',
              metrics=['accuracy'])

model.fit_generator(
    trainGenerator,
    steps_per_epoch=3,
    epochs=epochs)
My Output:
When I train this model, I get this output:
Using TensorFlow backend.
Found 2 images belonging to 2 classes.
Epoch 1/1
3/3 [==============================] - 0s - loss: 5.3142 - acc: 0.6667
My Question:
I wonder how it determines the loss and accuracy, and on what basis (i.e. loss: 5.3142 - acc: 0.6667)? I have not given any validation images with which to validate the model for accuracy and loss. Are this loss and accuracy computed against the input images themselves?
In short, can we say something like: "This model has an accuracy of x% and a loss of y without validation images"?
The training loss and accuracy are calculated not by comparing to validation data but by comparing your neural network's prediction for a sample x with the label y that you provide for that sample in your training set.
You initialize your neural network and (usually) set all weights to random values with a certain deviation. After that you feed the features of your training dataset into the network and let it "guess" the outcome, i.e. the label that you have (if you are doing supervised learning, as in your case).
Then the framework compares that guess with the actual label and calculates the error, which it then backpropagates through your network, thereby adjusting and improving the weights.
This works perfectly well without any validation data.
Validation data lets you see the quality of your model (loss, accuracy, etc.) by having it predict on unseen data. This gives you the so-called validation loss/accuracy, and with that information you tune your hyperparameters.
As a last step, you use your test data to evaluate the final quality of the trained model.
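A minimal sketch of that split in Keras (assuming x_train, y_train, x_val, y_val, x_test, y_test, and a model compiled with metrics=['accuracy'] already exist): the training loss/accuracy come from the training samples themselves, validation_data adds the validation metrics, and evaluate gives the final test score.

# training metrics are computed on the training batches;
# validation metrics are computed on the held-out validation set after each epoch
history = model.fit(x_train, y_train,
                    epochs=10,
                    validation_data=(x_val, y_val))

# final estimate of model quality on data never used for training or tuning
test_loss, test_acc = model.evaluate(x_test, y_test)
print(f"test loss: {test_loss:.4f}, test accuracy: {test_acc:.4f}")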