I am using a pre-trained GoogLeNet and fine-tuned it on my dataset to classify 11 classes. The validation dataset gives a "loss3/top1" accuracy of 86.5%, but when I evaluate the performance on my evaluation dataset I only get 77% accuracy. Whatever changes I made in train_val.prototxt, I made the same changes in deploy.prototxt. Is this difference between validation and evaluation accuracy normal, or did I do something wrong?
Any suggestions?
In order to get a fair estimate of your trained model on the validation dataset, you need to set test_iter and the validation batch_size in a meaningful manner.
So, test_iter should be set to:
test_iter = val_data / test_batch_size
where val_data is the size of your validation dataset and test_batch_size is the batch_size value set for the validation (TEST) phase.
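For example, a quick back-of-the-envelope sketch (the numbers below are hypothetical, not from the question):

```python
# Hypothetical numbers: suppose the validation set contains 2200 images and the
# TEST-phase data layer in train_val.prototxt uses batch_size: 50.
val_data = 2200          # size of the validation dataset
test_batch_size = 50     # batch_size set for the TEST (validation) phase

test_iter = val_data // test_batch_size   # value to set in solver.prototxt
print(test_iter)                          # -> 44, so 44 * 50 covers all 2200 samples
```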
What is the difference between the Holdout dataset and the Validation dataset in Machine Learning context?
The validation dataset is used during training to track the performance of your model on "unseen" data. I put "unseen" in quotes because, although the model doesn't directly see the data in the validation set, you optimize the hyper-parameters to decrease the loss on the validation set (since an increasing validation loss indicates over-fitting). By doing so, however, you may over-fit the hyper-parameters to the validation set, so that the loss is low on that specific validation set but worse on any other unseen set. That's why you usually keep a third set, called the test set (or held-out set), which is your truly unseen data, and you test the performance of your model on that test set only once, after training your final model.
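As a minimal sketch of this three-way split, assuming scikit-learn and toy data (the sizes and fractions below are illustrative, not prescriptive):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Toy data standing in for your own dataset.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# First set aside the truly unseen test (held-out) set...
X_trainval, X_test, y_trainval, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# ...then split what remains into training and validation sets.
X_train, X_val, y_train, y_val = train_test_split(
    X_trainval, y_trainval, test_size=0.25, random_state=0)  # 0.25 of 80% = 20%

# Hyper-parameters are tuned against (X_val, y_val);
# (X_test, y_test) is evaluated only once, on the final model.
```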
I'm referring to the validation_split parameter of the fit method in Keras:
validation_split: Float between 0 and 1. Fraction of the training data to be used as validation data. The model will set apart this fraction of the training data, will not train on it, and will evaluate the loss and any model metrics on this data at the end of each epoch. The validation data is selected from the last samples in the x and y data provided, before shuffling.
I noticed that the default value is 0 instead of the conventional 0.2 or 0.33. I can't wrap my head around why they chose 0 as the default, since I thought having no validation set would always cause training to overfit. Am I wrong in that assumption?
A validation set is used to detect overfitting, not having a validation set just means that you cannot detect overfitting. It does not mean that the model will automatically overfit. Remember that validation data is not used at all to train the model, so the model cannot possibly behave differently if validation data is not being used.
That said, having no validation set as the default makes sense, because in the end it is a human who detects overfitting by looking at the learning curves and the difference between training and validation loss. This process cannot (currently) be automated, so a human has to decide a value for the validation split, or just provide validation data directly via the validation_data parameter.
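For instance, a minimal sketch of that manual inspection with Keras (the data, model and epoch count are placeholders, not from the answer):

```python
import numpy as np
import matplotlib.pyplot as plt
from tensorflow import keras

# Placeholder data and a tiny model; shapes and sizes are arbitrary.
x = np.random.rand(1000, 20)
y = np.random.randint(0, 2, size=1000)

model = keras.Sequential([
    keras.layers.Dense(32, activation="relu", input_shape=(20,)),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# validation_split=0.2 holds out the last 20% of the data for validation.
history = model.fit(x, y, epochs=20, validation_split=0.2, verbose=0)

# A human looks at these curves and decides whether the model is overfitting.
plt.plot(history.history["loss"], label="training loss")
plt.plot(history.history["val_loss"], label="validation loss")
plt.legend()
plt.show()
```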
Sometimes you want to define the validation data yourself, and you pass the argument validation_data=(x_val, y_val).
Sometimes you want a K-fold cross validation.
Sometimes you simply don't want validation data.
The system cannot just assume that part of your training data should be used for validation; that would not be a good thing for the user.
As for overfitting, it depends on the model and the data. It's not necessarily true that it will always overfit.
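A minimal sketch of these options (the tiny model, data shapes and epoch counts below are illustrative placeholders):

```python
import numpy as np
from sklearn.model_selection import KFold
from tensorflow import keras

def build_model():
    # Tiny placeholder model; the architecture is arbitrary.
    m = keras.Sequential([
        keras.layers.Dense(16, activation="relu", input_shape=(20,)),
        keras.layers.Dense(1, activation="sigmoid"),
    ])
    m.compile(optimizer="adam", loss="binary_crossentropy")
    return m

x, y = np.random.rand(1000, 20), np.random.randint(0, 2, 1000)
x_train, y_train, x_val, y_val = x[:800], y[:800], x[800:], y[800:]

# Option 1: define the validation data yourself.
build_model().fit(x_train, y_train, epochs=5, validation_data=(x_val, y_val), verbose=0)

# Option 2: K-fold cross validation, with a fresh model per fold.
for train_idx, val_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(x):
    build_model().fit(x[train_idx], y[train_idx], epochs=5,
                      validation_data=(x[val_idx], y[val_idx]), verbose=0)

# Option 3: no validation data at all (the default, validation_split=0).
build_model().fit(x_train, y_train, epochs=5, verbose=0)
```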
My question is a continuation of the one asked by another user: What is the difference between train, validation and test set in neural networks?
Learning is terminated when the minimum MSE is reached, judged by looking at the validation and training set performance (easy to do using the nntool box in Matlab). Then, using the trained network, if the performance on the unseen test set is slightly poorer than on the training set, we have an overfitting problem. I always encounter this case, even though I select the model whose validation and training set performance are nearly the same during learning. So how come the test set performance is worse than the training set performance?
Training data = the data we use to train our model.
Validation data = the data we use to test our model at every epoch, or at run time, so that we can early-stop training manually because of over-fitting or any other problem. Suppose I am running 1000 epochs and at epoch 500 I see that my model gives 90% accuracy on the training data but only 70% on the validation data. Now I can see that my model is over-fitting, so I can stop training manually before the 1000 epochs complete, tune my model further, and then observe its behaviour again.
Testing data = after completing training for all 1000 epochs, I predict on my test data and check the accuracy; say it gives 86%.
My training accuracy is 90%, validation accuracy is 87% and testing accuracy is 86%. These may differ because the data in the validation, training and testing sets are completely different. We have 70% of the samples in the training set, 10% in the validation set and 20% in the testing set. On validation my model predicts 8 images correctly and on testing it predicts 18 images correctly out of 100. This is normal in real-life projects, because the pixels of every image vary from those of other images, so small differences can happen.
Another reason may be that there are more images in the testing set than in the validation set: the more images, the more opportunities for wrong predictions. For example, at 90% accuracy my model predicts 90 out of 100 images correctly, but if I increase the sample to 1000 images it may predict 850, 800 or 900 of them correctly.
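As a side note, the manual stopping described above can also be automated with Keras's EarlyStopping callback; a minimal sketch, assuming random placeholder data in the same 70/10/20 proportions:

```python
import numpy as np
from tensorflow import keras

# Placeholder data in the 70/10/20 proportions described above.
x_train, y_train = np.random.rand(700, 20), np.random.randint(0, 2, 700)
x_val,   y_val   = np.random.rand(100, 20), np.random.randint(0, 2, 100)
x_test,  y_test  = np.random.rand(200, 20), np.random.randint(0, 2, 200)

model = keras.Sequential([
    keras.layers.Dense(32, activation="relu", input_shape=(20,)),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Stop when validation accuracy stops improving, instead of watching the run
# and interrupting it by hand.
early_stop = keras.callbacks.EarlyStopping(monitor="val_accuracy", patience=10,
                                           restore_best_weights=True)
model.fit(x_train, y_train, epochs=1000, validation_data=(x_val, y_val),
          callbacks=[early_stop], verbose=0)

# The test set is evaluated only once, after training has finished.
test_loss, test_acc = model.evaluate(x_test, y_test, verbose=0)
print(test_acc)
```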
I have a simple question that has made me doubt my work all of a sudden.
If I only have a training and a validation set, am I allowed to monitor val_loss while training, or does that add bias to my training? I want to test my accuracy on my validation set at the end of training, but now I am wondering: if I am monitoring that dataset while training, would that be problematic, or not?
Short answer - yes, monitoring validation error and using it as the basis for decisions about the specific set-up of your algorithm adds bias to your algorithm. To elaborate a bit:
1) You fix the hyperparameters of an ML algorithm and then train it on the training set. The resulting model, with that specific set of hyperparameters, overfits to the training set, and you use the validation set to estimate the performance you can expect with these hyperparameters on unseen data.
2) But you obviously want to adjust your hyperparameters to get the best performance. You may run a grid search, or something like it, to find the best hyperparameter settings for this specific algorithm using the validation set. As a result, your hyperparameter settings overfit to the validation set; think of it as some information about the validation set still leaking into your model through the hyperparameters.
3) As a result, you should do the following: split the data into a training set, a validation set and a test set. Use the training set for training and the validation set to make decisions about specific hyperparameters. When you are done (fully done!) with fine-tuning your model, use the test set, which the model has never seen, to get an estimate of the final performance in real use.
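A minimal sketch of this three-step workflow, assuming scikit-learn; the classifier (an SVM) and the grid of C values are arbitrary illustrative choices:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Toy data split 60/20/20 into train, validation and test sets.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

# Step 2: choose hyperparameters by their score on the validation set.
best_C, best_score = None, -1.0
for C in [0.01, 0.1, 1, 10, 100]:
    score = SVC(C=C).fit(X_train, y_train).score(X_val, y_val)
    if score > best_score:
        best_C, best_score = C, score

# Step 3: only once tuning is fully done, estimate final performance on the test set.
final_model = SVC(C=best_C).fit(X_train, y_train)
print("test accuracy:", final_model.score(X_test, y_test))
```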
Would you please guide me how to interpret the following results?
1) loss < validation_loss
2) loss > validation_loss
It seems that the training loss should always be less than the validation loss, but both of these cases happen when training a model.
Really a fundamental question in machine learning.
If validation loss >> training loss you can call it overfitting.
If validation loss > training loss you can call it some overfitting.
If validation loss < training loss you can call it some underfitting.
If validation loss << training loss you can call it underfitting.
Your aim is to make the validation loss as low as possible.
Some overfitting is nearly always a good thing; all that matters in the end is whether the validation loss is as low as you can get it, and this often happens when the training loss is quite a bit lower.
Also check how to prevent overfitting.
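As a rough illustration of these rules of thumb (the ratio thresholds below are arbitrary assumptions, not part of the answer):

```python
def diagnose(train_loss, val_loss, slight=1.1, large=1.5):
    # The thresholds "slight" and "large" are made up purely for illustration.
    if val_loss > large * train_loss:
        return "overfitting"
    if val_loss > slight * train_loss:
        return "some overfitting"
    if train_loss > large * val_loss:
        return "underfitting"
    if train_loss > slight * val_loss:
        return "some underfitting"
    return "roughly balanced"

print(diagnose(train_loss=0.20, val_loss=0.45))  # -> overfitting
print(diagnose(train_loss=0.40, val_loss=0.38))  # -> roughly balanced
```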
In machine learning and deep learning there are basically three cases:
1) Underfitting
This is the only case where loss > validation_loss, but only slightly; if loss is far higher than validation_loss, please post your code and data so that we can have a look.
2) Overfitting
loss << validation_loss
This means that your model is fitting the training data very nicely, but not the validation data at all; in other words, it is not generalizing correctly to unseen data.
3) Perfect fitting
loss == validation_loss
If both values end up roughly the same, and if they are also converging (plot the loss over time), then chances are very high that you are doing it right.
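For instance, a minimal sketch of checking that numerically from per-epoch loss values (the numbers and tolerances are made up for illustration):

```python
import numpy as np

# Made-up per-epoch loss curves; in practice they would come from
# history.history["loss"] and history.history["val_loss"].
loss     = np.array([1.20, 0.80, 0.55, 0.42, 0.38, 0.36, 0.35])
val_loss = np.array([1.10, 0.90, 0.60, 0.45, 0.41, 0.38, 0.37])

# Case 3: both curves have converged (little change over the last epochs)
# and end up roughly equal. The tolerances are arbitrary.
converged     = abs(loss[-1] - loss[-3]) < 0.05 and abs(val_loss[-1] - val_loss[-3]) < 0.05
roughly_equal = abs(loss[-1] - val_loss[-1]) < 0.05
print("looks like a good fit" if converged and roughly_equal else "check for over-/underfitting")
```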
1) Your model performs better on the training data than on the unknown validation data. A bit of overfitting is normal, but larger amounts need to be controlled with regularization techniques like dropout to ensure generalization.
2) Your model performs better on the validation data. This can happen when you use augmentation on the training data, making it harder to predict in comparison to the unmodified validation samples. It can also happen when your training loss is calculated as a moving average over 1 epoch, whereas the validation loss is calculated after the learning phase of the same epoch.
Aurélien Geron made a good Twitter thread about this phenomenon. Summary:
Regularization is typically only applied during training, not validation and testing. For example, if you're using dropout, the model has fewer features available to it during training.
Training loss is measured after each batch, while the validation loss is measured after each epoch, so on average the training loss is measured ½ an epoch earlier. This means that the validation loss has the benefit of extra gradient updates.
The val set can be easier than the training set. For example, data augmentations often distort or occlude parts of the image. This can also happen if you get unlucky during sampling (the val set has too many easy classes, or too many easy examples), or if your val set is too small. Or, the train set leaked into the val set.
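The first point is easy to see directly; a minimal sketch with a lone Keras Dropout layer (the rate and the input are arbitrary):

```python
import numpy as np
from tensorflow import keras

# A single Dropout layer is enough to show that this regularization is only
# active in training mode.
dropout = keras.layers.Dropout(rate=0.5)
x = np.ones((1, 8), dtype="float32")

print(dropout(x, training=True))   # roughly half the values zeroed, the rest scaled up
print(dropout(x, training=False))  # identical to the input: dropout is a no-op at eval time
```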
If your validation loss is less than your training loss, you may not have split the training data correctly: it can indicate that the distributions of the training and validation sets are different, when ideally they should be the same. Moreover, good fit: in the ideal case the training and validation losses both drop and stabilize at certain points, indicating an optimal fit, i.e. a model that neither overfits nor underfits.