So I have seen differing implementations of cross-validation.
I'm currently using PyTorch to train a neural network.
My current layout looks like this:
I have 6 discrete Datasets. 5 are used for cross validation.
Network_1 trains on Datasets: 1,2,3,4 computes loss on 5
Network_2 trains on Datasets: 1,2,3,5 computes loss on 4
Network_3 trains on Datasets: 1,2,4,5 computes loss on 3
Network_4 trains on Datasets: 1,3,4,5 computes loss on 2
Network_5 trains on Datasets: 2,3,4,5 computes loss on 1
Then comes epoch 2 and I do the exact same thing again:
Network_1 trains on Datasets: 1,2,3,4 computes loss on 5
Network_2 trains on Datasets: 1,2,3,5 computes loss on 4
Network_3 trains on Datasets: 1,2,4,5 computes loss on 3
Network_4 trains on Datasets: 1,3,4,5 computes loss on 2
Network_5 trains on Datasets: 2,3,4,5 computes loss on 1
For testing on Dataset 6, I should merge the predictions from all 5 networks and take the average prediction score (I still have to do the averaging of the prediction matrices).
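Roughly what I have in mind for the averaging (not implemented yet; this is only a sketch, and networks / x_test are placeholder names for my 5 trained networks and the Dataset 6 inputs):

    import torch

    # average the softmax outputs of the 5 cross-validation networks on the test data
    with torch.no_grad():
        probs = torch.stack([net(x_test).softmax(dim=1) for net in networks])  # shape (5, N, C)
        avg_probs = probs.mean(dim=0)          # element-wise mean of the 5 prediction matrices
        predictions = avg_probs.argmax(dim=1)  # final ensemble prediction per sample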
Have I understood cross-validation correctly? Is this how it's supposed to work? Will this work properly?
I put effort into not testing on data that I have already trained on, but I'm still not sure I've got it right.
Would greatly appreciate the help :)
You can definitely apply cross-validation to neural networks, but because neural networks are computationally demanding models, this is not usually done. To reduce variance, other techniques are ordinarily used with neural networks instead, such as early stopping or dropout.
That being said, I am not sure you're applying it in the right way. Each network should train across all the epochs before validating, so that:
Network_1 trains on Datasets: 1,2,3,4 up to the end of training. Then computes loss on 5
Network_2 trains on Datasets: 1,2,3,5 up to the end of training. Then computes loss on 4
Network_3 trains on Datasets: 1,2,4,5 up to the end of training. Then computes loss on 3
Network_4 trains on Datasets: 1,3,4,5 up to the end of training. Then computes loss on 2
Network_5 trains on Datasets: 2,3,4,5 up to the end of training. Then computes loss on 1
Once each network has been trained up to the end of training (so across all the epochs) and validated on the left-out dataset (called the validation dataset), you can average the scores you obtained.
This score (and indeed this is the real point of cross-validation) gives you a fair evaluation of your model, one that should not drop when you go on to test it on the test set (the one you left out of training from the beginning).
Cross-validation is usually used together with some form of grid search to produce an unbiased evaluation of the different models you want to compare. So if you want, for example, to compare NetworkA and NetworkB, which differ with respect to some parameters, you run cross-validation for NetworkA and cross-validation for NetworkB, and then take the one with the highest cross-validation score as the final model.
As a last step, once you have decided which model is best, you usually retrain it on all the data you have in the training set (i.e. Datasets 1,2,3,4,5 in your case) and test this model on the test set (Dataset 6).
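To make the procedure concrete, here is a minimal sketch of the fold loop, under the assumptions that each of your 5 datasets is a PyTorch Dataset and that make_model, train_one_epoch, evaluate and num_epochs are your own helpers (hypothetical names, not taken from your post):

    from torch.utils.data import ConcatDataset, DataLoader

    datasets = [ds1, ds2, ds3, ds4, ds5]          # the 5 cross-validation datasets
    fold_scores = []

    for i, val_ds in enumerate(datasets):
        # train on the other 4 datasets, keep dataset i only for validation
        train_ds = ConcatDataset([d for j, d in enumerate(datasets) if j != i])
        train_loader = DataLoader(train_ds, batch_size=32, shuffle=True)
        val_loader = DataLoader(val_ds, batch_size=32)

        model = make_model()                      # a fresh network for every fold
        for epoch in range(num_epochs):           # train to the end of training first
            train_one_epoch(model, train_loader)

        fold_scores.append(evaluate(model, val_loader))  # validate once, after training

    cv_score = sum(fold_scores) / len(fold_scores)       # the cross-validation estimate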
I am trying to do transfer learning with the ResNet50V2 model using the triplet loss function. I have kept include_top = False and input shape = (160,160,3) with ImageNet weights. The last 3 layers of my model are shown in the image below, with 6 million trainable parameters.
During the training process, I can see the loss value decreasing from 7.6 to 0.8, but the accuracy does not improve. However, when I replace the model with VGG16 and train its last 3 layers, the accuracy improves from 50% to 90%, with the loss decreasing from 6.0 to 0.5.
Where am I going wrong? Is there anything specific I should look at while training a ResNet model? How should I train the ResNet model?
My neural network training in PyTorch is getting very weird.
I am training on a known dataset that came already split into train and validation sets.
I'm shuffling the data during training and doing data augmentation on the fly.
I have these results:
Train accuracy starts at 80% and increases
Train loss decreases and stays stable
Validation accuracy starts at 30% but increases slowly
Validation loss increases
I have the following graphs to show:
How can you explain that the validation loss increases while the validation accuracy also increases?
How can there be such a big difference in accuracy between the training and validation sets? 90% vs 40%?
Update:
I balanced the dataset.
It is binary classification. It now has 1700 examples from class 1 and 1200 examples from class 2: in total, 600 for validation and 2300 for training.
I still see similar behavior:
Can it be because I froze the weights in part of the network?
Can it be because of the hyperparameters, like the lr?
I found the solution:
I had different data augmentation for the training set and the validation set. Matching them also increased the validation accuracy!
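In case it helps someone else, this is roughly what matching the two pipelines looks like; the torchvision transforms below are placeholders, not my exact ones. The random augmentations stay on the training side only, but the deterministic preprocessing (resize, normalization) is identical for both:

    from torchvision import transforms

    train_tf = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.RandomHorizontalFlip(),   # random augmentation, training only
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])

    val_tf = transforms.Compose([            # same resize and normalization, no randomness
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])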
If the training set is very large in comparison to the validation set, you are more likely to overfit and learn the training data, which would make generalizing the model very difficult. I see your training accuracy is at 0.98 and your validation accuracy increases at a very slow rate, which would imply that you have overfit your training data.
Try reducing the number of samples in your training set to improve how well your model generalizes to unseen data.
Let me answer your 2nd question first. High accuracy on training data and low accuracy on val/test data indicates that the model might not generalize well to real cases; that is what the validation process is all about. You need to fine-tune or even rebuild your model.
With regard to the first question, val loss does not necessarily correspond to val accuracy. Accuracy only checks whether the predicted class is correct, while the loss (the CrossEntropy function, if that is what you are using) measures the difference between the predicted probabilities and the target, so the model can become less confident (higher loss) while still getting the same predictions right (same or even higher accuracy).
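A tiny illustration of how the two can diverge (the numbers here are made up, not from your run): both sets of logits below are classified correctly, so accuracy is identical, but the less confident one has a much higher cross-entropy loss.

    import torch
    import torch.nn.functional as F

    targets = torch.tensor([0, 1])
    confident = torch.tensor([[4.0, 0.0], [0.0, 4.0]])   # correct and confident
    hesitant  = torch.tensor([[0.2, 0.0], [0.0, 0.2]])   # correct but barely

    print((confident.argmax(1) == targets).float().mean().item())  # accuracy 1.0
    print((hesitant.argmax(1) == targets).float().mean().item())   # accuracy 1.0
    print(F.cross_entropy(confident, targets).item())  # ~0.02, low loss
    print(F.cross_entropy(hesitant, targets).item())   # ~0.60, much higher loss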
I am developing a simple autoencoder, and to find the right parameters I use a grid search on a small subset of the dataset. Can the number of epochs found this way also be used for training on the full, larger set? Does the number of epochs depend on the size of the dataset or not? E.g. do I need many more epochs for a large dataset and fewer epochs for a small one?
In general, yes, the number of epochs will change if the dataset is bigger.
The number of epochs should not be decided a priori. You should run the training, monitor the training and validation losses over time, and stop training when the validation loss reaches a plateau or starts increasing. This technique is called "early stopping" and is good practice in machine learning.
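A minimal early-stopping sketch, assuming model, the data loaders, max_epochs, and the helpers train_one_epoch / validation_loss already exist in your code (hypothetical names), with PyTorch-style checkpointing; the patience value is just an example:

    import torch

    best_val, patience, bad_epochs = float("inf"), 10, 0

    for epoch in range(max_epochs):
        train_one_epoch(model, train_loader)
        val_loss = validation_loss(model, val_loader)

        if val_loss < best_val:
            best_val, bad_epochs = val_loss, 0
            torch.save(model.state_dict(), "best.pt")   # keep the best weights so far
        else:
            bad_epochs += 1
            if bad_epochs >= patience:                   # plateau or increase detected
                break                                    # stop training here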
I am working on an image classification problem in Keras.
I am training the model using model.fit_generator for data augmentation.
While training, I am also evaluating on the validation data at every epoch.
Training is done on 90% of the data and Validation is done on 10% of the data. The following is my code:
import numpy as np
from keras.preprocessing.image import ImageDataGenerator
from keras.callbacks import ModelCheckpoint, LearningRateScheduler
from keras.optimizers import SGD

datagen = ImageDataGenerator(
    rotation_range=20,
    zoom_range=0.3)

batch_size = 32
epochs = 30

model_checkpoint = ModelCheckpoint('myweights.hdf5', monitor='val_acc', verbose=1,
                                   save_best_only=True, mode='max')

lr = 0.01
sgd = SGD(lr=lr, decay=1e-6, momentum=0.9, nesterov=False)
model.compile(loss='categorical_crossentropy',
              optimizer=sgd,
              metrics=['accuracy'])

def step_decay(epoch):
    # initialize the base initial learning rate, drop factor, and
    # epochs to drop every
    initAlpha = 0.01
    factor = 1
    dropEvery = 3
    # compute the learning rate for the current epoch
    alpha = initAlpha * (factor ** np.floor((1 + epoch) / dropEvery))
    # return the learning rate
    return float(alpha)

history = model.fit_generator(datagen.flow(xtrain, ytrain, batch_size=batch_size),
                              steps_per_epoch=xtrain.shape[0] // batch_size,
                              callbacks=[LearningRateScheduler(step_decay), model_checkpoint],
                              validation_data=(xvalid, yvalid),
                              epochs=epochs, verbose=1)
However, upon plotting the training accuracy and validation accuracy (as well as the training loss and validation loss), I noticed the validation accuracy is higher than training accuracy (and likewise, validation loss is lower than training loss). Here are my resultant plots after training (please note that validation is referred to as "test" in the plots):
When I do not apply data augmentation, the training accuracy is higher than the validation accuracy. From my understanding, the training accuracy should typically be greater than the validation accuracy. Can anyone give insight into why this is not the case in my situation, where data augmentation is applied?
The following is just a theory, but it is one that you can test!
One possible explanation why your validation accuracy is better than your training accuracy is that the data augmentation you are applying to the training data is making the task significantly harder for the network. (It's not totally clear from your code sample, but it looks like you are applying the augmentation only to your training data, not your validation data.)
To see why this might be the case, imagine you are training a model to recognise whether someone in the picture is smiling or frowning. Most pictures of faces have the face the "right way up" so the model could solve the task by recognising the mouth and measuring if it curves upwards or downwards. If you now augment the data by applying random rotations, the model can no longer focus just on the mouth, as the face could be upside down. In addition to recognising the mouth and measuring its curve, the model now also has to work out the orientation of the face as a whole and compare the two.
In general, applying random transformations to your data is likely to make it harder to classify. This can be a good thing as it makes your model more robust to changes in the input, but it also means that your model gets an easier ride when you test it on non-augmented data.
This explanation might not apply to your model and data, but you can test it in two ways:
If you decrease the range of the augmentation transformations you are using, you should see the training and validation loss get closer together.
If you apply the exact same augmentation transformations to the validation data as you do to the training data, then you should see the validation accuracy drop below the training accuracy, as you expected.
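For the second test, a rough sketch using your own ImageDataGenerator: it simply feeds xvalid/yvalid through the same augmenting generator instead of passing them raw (the rest of your fit_generator call stays as it is):

    val_generator = datagen.flow(xvalid, yvalid, batch_size=batch_size, shuffle=False)

    history = model.fit_generator(datagen.flow(xtrain, ytrain, batch_size=batch_size),
                                  steps_per_epoch=xtrain.shape[0] // batch_size,
                                  callbacks=[LearningRateScheduler(step_decay), model_checkpoint],
                                  validation_data=val_generator,
                                  validation_steps=xvalid.shape[0] // batch_size,
                                  epochs=epochs, verbose=1)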
In many examples, I see train/cross-validation dataset splits being performed by using KFold, StratifiedKFold, or another pre-built dataset splitter. Keras models have a built-in validation_split kwarg that can be used for training.
model.fit(self, x, y, batch_size=32, nb_epoch=10, verbose=1, callbacks=[], validation_split=0.0, validation_data=None, shuffle=True, class_weight=None, sample_weight=None)
(https://keras.io/models/model/)
validation_split: float between 0 and 1: fraction of the training data to be used as validation data. The model will set apart this fraction of the training data, will not train on it, and will evaluate the loss and any model metrics on this data at the end of each epoch.
I am new to the field and the tools, so I only have a rough intuition of what the different splitters offer. Mainly, though, I can't find any information on how Keras' validation_split works. Can someone explain it to me, and say when a separate method is preferable? The built-in kwarg seems to me like the cleanest and easiest way to split off validation data, without having to architect your training loops much differently.
The difference between the two is quite subtle and they can be used in conjunction.
KFold and similar functions in scikit-learn will randomly split your data into k folds. You can then train a model holding out a single fold each time and testing on the held-out fold.
validation_split takes a fraction of your data non-randomly. According to the Keras documentation it will take the fraction from the end of your data, e.g. 0.1 will hold out the final 10% of rows in the input matrix. The purpose of the validation split is to allow you to assess how the model is performing on the training set and a held out set at every epoch in the training period. If the model continues to improve on the training set but not the validation set then it is a clear sign of potential overfitting.
You could theoretically use KFold cross-validation to construct a model while also using validation_split to monitor the performance of each model. At each fold you will be generating a new validation_split from the training data.
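As a sketch of how the two can be combined (assuming x and y are NumPy arrays and build_model is your own model factory; both are placeholder names, and the model is assumed to be compiled with an accuracy metric):

    import numpy as np
    from sklearn.model_selection import KFold

    kf = KFold(n_splits=5, shuffle=True, random_state=0)
    fold_scores = []

    for train_idx, test_idx in kf.split(x):
        model = build_model()                     # a fresh model for every fold
        model.fit(x[train_idx], y[train_idx],
                  epochs=10, batch_size=32,
                  validation_split=0.1)           # monitors a held-out slice every epoch
        loss, acc = model.evaluate(x[test_idx], y[test_idx], verbose=0)
        fold_scores.append(acc)

    print("mean cross-validation accuracy:", np.mean(fold_scores))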