The dilemma of overfitting in NN training - machine-learning

My question is a continuation of the one asked by another user: What is the difference between train, validation and test set in neural networks?
Learning is terminated when the minimum MSE is reached, judged from the validation and training set performance (easy to do with the nntool box in Matlab). If, with the trained network, the performance on the unseen test set is somewhat worse than on the training set, we have an overfitting problem. I keep running into this case even though I select the model whose parameters give nearly the same performance on the validation and training sets during learning. So why is the test set performance still worse than the training set performance?

Training data = the data we use to train our model.
Validation data = the data we use to evaluate our model every epoch (at run time), so that we can stop training early if we see over-fitting or some other problem. Suppose I am running 1000 epochs and at epoch 500 I see that my model gives 90% accuracy on the training data but only 70% on the validation data. Now I can see that my model is over-fitting, so I can stop training manually before the 1000 epochs complete, tune my model some more, and then look at the behavior again.
Testing data = after completing training (all 1000 epochs), I predict on my test data and check the accuracy there; say it gives 86%.
My training accuracy is 90%, validation accuracy is 87%, and testing accuracy is 86%. These can differ because the samples in the validation, training, and testing sets are completely different. We have 70% of the samples in the training set, 10% in validation, and 20% in the testing set. On validation my model predicts 8 images correctly, and on testing it predicts 18 images correctly out of 100. This is normal in real-life projects, because the pixels in every image differ from those in every other image, so small differences can happen.
The testing set also contains more images than the validation set, which may be another reason: the more images, the greater the chance of wrong predictions. For example, at 90% accuracy my model predicts 90 out of 100 images correctly, but if I increase the sample to 1000 images it may predict 850, 800, or 900 correctly out of 1000.
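For concreteness, here is a minimal sketch of that workflow, assuming scikit-learn for the 70/10/20 split and a toy Keras model with stand-in data (none of this is from the post): the validation set is watched every epoch and training stops early once validation accuracy stops improving, then the held-out test set is scored once at the end.

```python
# Minimal sketch: 70/10/20 split plus validation-based early stopping.
# The data and the tiny model are stand-ins, not from the post.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from tensorflow import keras

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# 70% train, 10% validation, 20% test
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.30, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=2/3, random_state=0)

model = keras.Sequential([
    keras.Input(shape=(20,)),
    keras.layers.Dense(32, activation='relu'),
    keras.layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# Validation accuracy is monitored every epoch; training stops well before
# 1000 epochs once it stops improving, instead of waiting for overfitting.
stop = keras.callbacks.EarlyStopping(monitor='val_accuracy', patience=20,
                                     restore_best_weights=True)
model.fit(X_train, y_train, validation_data=(X_val, y_val),
          epochs=1000, callbacks=[stop], verbose=0)

print('test accuracy:', model.evaluate(X_test, y_test, verbose=0)[1])
```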

Related

Can I assess my model's performance with LOOCV on the whole dataset?

Let's say I initially split my dataset into training (80%) and test (20%) sets, perform a 10-fold CV on my training set and obtain an average R² of 75%. After that I check the best model's accuracy on the test set and obtain an R² of 74%, which indicates that the model is fairly robust. Now, before deploying it to real applications, I tune it with the whole data. Someone asks me the model's approximate R²; if I say 74% or 75%, I will be ignoring the fact that the model has now been tuned with more data (including the test set). Is it a reasonable approach to perform a leave-one-out CV on the chosen model with the whole data, compare the predicted targets with the real ones, check the R² (let's say it's 80% now), and say that the real-world model will most likely have an R² of 80%? I see no problems with that, but I do not know whether this approach is correct.
It is true that you should train again on the whole data, and it might lead to performance improvements. However, in this context your whole data should not be train + test! It should be just the training dataset, but without any cross-validation. Before, you had 80% for training and were doing 10-fold CV, meaning you were actually training your model on 72% of your complete data (train + test) and keeping 8% for validation. Now you should train on the whole 80% and report your final results, again, on the unseen test set.
If you do LOOCV on train + test, you cannot report your performance on the validation samples, because that is how the model is fine-tuned, and you may well be overfitting to the validation data.
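As a sketch of the workflow this answer recommends (with stand-in data and an arbitrary Ridge model, not anything from the post): do the 10-fold CV on the 80% training split for model selection only, then refit on that whole 80% and report R² on the untouched 20% test set.

```python
# Sketch of the recommended workflow; data and estimator are placeholders.
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score
from sklearn.model_selection import cross_val_score, train_test_split

X, y = make_regression(n_samples=1000, n_features=10, noise=10.0, random_state=0)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20, random_state=0)

model = Ridge()
cv_r2 = cross_val_score(model, X_train, y_train, cv=10, scoring='r2')  # model selection only
print('mean 10-fold CV R^2 on the 80% training split:', cv_r2.mean())

model.fit(X_train, y_train)            # refit on the whole 80%, no cross-validation
print('R^2 to report, on the untouched 20% test set:',
      r2_score(y_test, model.predict(X_test)))
```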

Train Accuracy increases, Train loss is stable, Validation loss Increases, Validation Accuracy is low and increases

My neural network training in PyTorch is getting very weird.
I am training on a known dataset that came already split into train and validation.
I'm shuffling the data during training and doing data augmentation on the fly.
I have these results:
Train accuracy starts at 80% and increases
Train loss decreases and stays stable
Validation accuracy starts at 30% but increases slowly
Validation loss increases
I have graphs showing this behavior.
How can you explain that the validation loss increases while the validation accuracy also increases?
How can there be such a big difference in accuracy between the training and validation sets? 90% vs 40%?
Update:
I balanced the data set.
It is binary classification. It now has 1700 examples from class 1 and 1200 examples from class 2, with 600 in total for validation and 2300 for training.
I still see similar behavior:
Can it be because I froze the weights in part of the network?
Can it be because of hyperparameters like the learning rate?
I found the solution:
I had different data augmentation for the training set and the validation set. Matching them also increased the validation accuracy!
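For readers hitting the same issue, here is a hedged torchvision sketch of what the fix can look like: keep the deterministic preprocessing (resize and normalization) identical between the training and validation pipelines, and confine random augmentation to training. The sizes and normalization statistics below are illustrative, not taken from the post.

```python
# Keep resize + normalization identical for train and validation,
# so validation images match what the network saw during training.
from torchvision import transforms

common = [
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
]

train_tf = transforms.Compose([transforms.RandomHorizontalFlip()] + common)  # random augmentation only here
val_tf   = transforms.Compose(common)                                        # same preprocessing, no randomness
```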
If the training set is very large in comparison to the validation set, you are more likely to overfit and learn the training data, which would make generalizing the model very difficult. I see your training accuracy is at 0.98 and your validation accuracy increases at a very slow rate, which would imply that you have overfit your training data.
Try reducing the number of samples in your training set to improve how well your model generalizes to unseen data.
Let me answer your second question first. High accuracy on training data and low accuracy on validation/test data indicates that the model may not generalize well to real cases; that is what the validation process is all about. You need to fine-tune, or even rebuild, your model.
With regard to the first question, the validation loss does not necessarily track the validation accuracy. Accuracy only depends on which class the model predicts, whereas the loss (CrossEntropy, if that is what you are using) measures the difference between the predicted probabilities and the targets, so the loss can grow even while the predicted classes remain largely correct.
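A toy example (not from the post) makes that decoupling concrete: accuracy only looks at the arg-max class, while cross-entropy also penalizes how confident the wrong predictions are, so the loss can rise while accuracy stays flat.

```python
# Toy illustration: same accuracy, different cross-entropy loss.
import torch
import torch.nn.functional as F

targets = torch.tensor([0, 0, 0, 0])

# Epoch A: one sample misclassified, only mildly confident.
logits_a = torch.tensor([[2.0, 0.0], [2.0, 0.0], [2.0, 0.0], [0.0, 1.0]])
# Epoch B: the same sample misclassified, now very confidently.
logits_b = torch.tensor([[2.0, 0.0], [2.0, 0.0], [2.0, 0.0], [0.0, 6.0]])

for name, logits in [('A', logits_a), ('B', logits_b)]:
    acc = (logits.argmax(dim=1) == targets).float().mean().item()
    loss = F.cross_entropy(logits, targets).item()
    print(name, 'accuracy:', acc, 'loss:', round(loss, 3))
# Accuracy is 0.75 in both cases, but the loss is clearly larger in B.
```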

should I get the same accuracy in the test set and the training set

I am new to machine learning. I have built a model that predicts whether a client will subscribe in the following month or not. I got 73.4% on the training set and 72.8% on the test set. Is that okay, or do I have overfitting?
It's ok.
Overfitting happens when the accuracy on the training set is higher and the accuracy on the test set is lower, by more than a marginal difference.
This is what overfitting looks like.
Train accuracy: 99.4%
Test accuracy: 71.4%
You can, however, increase the accuracy by using different models and feature engineering.
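If it helps, here is a minimal sketch of that check with stand-in scikit-learn data and a placeholder model: compute accuracy on both splits and look at the size of the gap.

```python
# Stand-in data and model, purely to illustrate the train/test gap check.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)

train_acc = clf.score(X_train, y_train)
test_acc = clf.score(X_test, y_test)
print(f'train={train_acc:.3f}  test={test_acc:.3f}  gap={train_acc - test_acc:.3f}')
# A gap like 99.4% vs 71.4% points to overfitting; 73.4% vs 72.8% does not.
```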
We call it over-fitting if the accuracy on the training data is abnormally high (greater than 95%) and the accuracy on the test data is very low (less than 65%).
In your case, both training and testing accuracy are almost the same, so there is no over-fitting.
Try more test data and check whether the accuracy drops. You can also try to improve the model by the following (a sketch combining several of these appears after the list):
Trying different algorithms
Increasing the size of train data
Trying K-fold cross validation
Hyper parameter tuning
Using Regularization methods
Standardizing feature variables
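As a hedged sketch combining several of these suggestions (standardized features, L2 regularization, K-fold cross-validation, and hyperparameter tuning) on stand-in data with an arbitrary logistic-regression model:

```python
# Stand-in data; the pipeline illustrates the suggestions, not the asker's model.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=3000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Standardization + L2-regularized logistic regression in one pipeline.
pipe = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

# 5-fold CV over the regularization strength C (smaller C = stronger penalty).
search = GridSearchCV(pipe, {'logisticregression__C': [0.01, 0.1, 1, 10]}, cv=5)
search.fit(X_train, y_train)

print('best CV accuracy:', search.best_score_)
print('test accuracy:   ', search.score(X_test, y_test))
```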

Train Accuracy is very high, Validation accuracy is very high but the test set accuracy is very low

I have split the dataset (around 28K images) into a 75% train set and a 25% test set. Then I took a random 15% of the train set and a random 15% of the test set to create a validation set. The goal is to classify the images into two categories. The exact image samples can't be shared, but they are similar to the one attached. I'm using this model: VGG19 with ImageNet weights, with the last two layers trainable and 4 dense layers appended. I am also using ImageDataGenerator to augment the images. I trained the model for 30 epochs and found that the training accuracy is 95% and the validation accuracy is 96%, yet when evaluated on the test dataset the accuracy fell enormously, to only 75%.
I have tried regularization and dropout to tackle the overfitting, if that is what it is suffering from. I have also tried one more thing: using the test set as the validation set and then testing the model on that same test set. The results were: train accuracy = 96%, validation accuracy = 96.3%, and test accuracy = 68%. I don't understand what I should do.
First off, when you split the data you need to make sure that the relative size of every class is the same in each of the new datasets. The data can be imbalanced if that is the distribution of your initial data, but all datasets must have the same imbalance after the split.
Now, regarding the split. If you need train, validation and test sets, they must all be independent of each other (no shared samples). This is important if you don't want to cheat yourself with the results you are getting.
In general, in machine learning we start from a training set and a test set. For choosing the best model architecture/hyper-parameters, we further divide the training set to get the validation set (the test set should not be touched).
After determining the best architecture/hyper-parameters for our model, we combine the training and validation sets and train the best-case model from scratch on the combined full training set. Only now do we get to test the results on the test set.
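A minimal scikit-learn sketch of that discipline, with stand-in data and an arbitrary classifier (none of it from the post): stratify each split so class proportions match, keep the test set untouched while selecting the model, and refit on train + validation before the single final test evaluation.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=4000, weights=[0.7, 0.3], random_state=0)

# 75% train / 25% test, preserving the class imbalance in both parts.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

# Carve a validation set out of the training portion only (test set untouched).
X_tr, X_val, y_tr, y_val = train_test_split(
    X_train, y_train, test_size=0.15, stratify=y_train, random_state=0)

candidate = RandomForestClassifier(random_state=0)      # chosen via validation experiments
candidate.fit(X_tr, y_tr)
print('validation accuracy:', candidate.score(X_val, y_val))

# Once the architecture/hyper-parameters are fixed, retrain from scratch on
# train + validation and evaluate once on the held-out test set.
final_model = RandomForestClassifier(random_state=0)
final_model.fit(np.concatenate([X_tr, X_val]), np.concatenate([y_tr, y_val]))
print('test accuracy:', final_model.score(X_test, y_test))
```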
I faced a similar issue in one of my practice projects.
My InceptionV3 model gave a high training accuracy (99%), a high validation accuracy (95%+), but a very low testing accuracy (55%).
The dataset was a subset of the popular Dogs vs. Cats dataset (https://www.kaggle.com/c/dogs-vs-cats/data), made by me, with 15k images split into 3 folders: train, valid, and test in the ratio 60:20:20 (9000, 3000, and 3000 images, each halved into a cats folder and a dogs folder).
The error in my case was actually in my code; it had nothing to do with the model or the data. The model had been defined inside a function, and that was creating a fresh, untrained instance during evaluation. Hence, an untrained model was being evaluated on the test dataset. After correcting the errors in my notebook I got a 96%+ testing accuracy.
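A simplified, runnable illustration of that kind of bug, using a toy Keras model and stand-in data rather than the author's notebook: calling the model-building function again at evaluation time hands you a fresh, randomly initialized network.

```python
# The data and tiny model are stand-ins; only the bug pattern matters here.
from sklearn.datasets import make_classification
from tensorflow import keras

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

def build_model():
    model = keras.Sequential([
        keras.Input(shape=(20,)),
        keras.layers.Dense(32, activation='relu'),
        keras.layers.Dense(1, activation='sigmoid'),
    ])
    model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
    return model

trained = build_model()
trained.fit(X, y, epochs=20, verbose=0)

print('trained instance:  ', trained.evaluate(X, y, verbose=0)[1])        # reasonable accuracy
print('untrained instance:', build_model().evaluate(X, y, verbose=0)[1])  # ~chance level: the bug
```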
Links:
https://colab.research.google.com/drive/1-PO1KJYvXdNC8LbvrdL70oG6QbHg_N-e?usp=sharing&fbclid=IwAR2k9ZCXvX_y_UNWpl4ljs1y0P3budKmlOggVrw6xI7ht0cgm03_VeoKVTI
https://drive.google.com/drive/u/3/folders/1h6jVHasLpbGLtu6Vsnpe1tyGCtR7bw_G?fbclid=IwAR3Xtsbm_EZA3TOebm5EfSvJjUmndHrWXm4Iet2fT3BjE6pPJmnqIwW8KWY
Other probable causes:
The testing set may have a different distribution than the validation set (this can be ruled out by joining all the data, randomizing, and splitting again into train, test, and valid).
Swap valid and test with each other and see whether it has an effect (sometimes one set simply has relatively harder examples).
The training may have somehow overfitted on the validation set (is it possible that during training, at one or more steps, the model giving the best score on the validation set is the one chosen?).
Overlapping images between the sets, or a lack of shuffling.
In the deep learning world, if something seems way too odd to be true, or even way too good to be true, a good guess is that it's probably a bug unless proven otherwise!

Relationship Between Training Set Size and Training Epochs

I'm currently training a convolutional network on the Cifar10 dataset. Let's say I have 50,000 training images and 5,000 validation images.
To start testing my model, let's say I start with just 10,000 images to get an idea of how successful the model will be.
After 40 epochs of training with a batch size of 128, every epoch I'm running my optimizer to minimize the loss 10,000 / 128 ≈ 78 times on a batch of 128 images (SGD).
Now, let's say I found a model that achieves 70% accuracy on the validation set. Satisfied, I move on to train on the full training set.
This time, for every epoch I run the optimizer to minimize the loss 5 × 10,000 / 128 ≈ 391 times.
This makes me think that my accuracy at each epoch should be higher than on the limited set of 10,000. Much to my dismay, the accuracy on the limited training set increases much more quickly. At the end of the 40 epochs with the full training set, my accuracy is 30%.
Thinking the data might be corrupt, I performed limited runs on training images 10-20k, 20-30k, 30-40k, and 40-50k. Surprisingly, each of these runs resulted in an accuracy of ~70%, close to the accuracy for images 0-10k.
This raises two questions:
Why would validation accuracy go down when the data set is larger, when I've confirmed that each segment of data provides decent results on its own?
For a larger training set, would I need to train for more epochs even though each epoch represents a larger number of training steps (391 vs 78)?
It turns out my intuition was right, but my code was not.
Basically, I had been computing my validation accuracy on training data (data used to train the model), not on validation data (data the model had not yet seen).
After correcting this error, the validation accuracy did indeed improve with a bigger training data set, as expected.
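A hedged PyTorch-style sketch of the corrected evaluation (model, train_loader, and val_loader are placeholders, not the author's code): the number reported as "validation accuracy" must be computed over the validation loader, with the model in eval mode and gradients disabled.

```python
import torch

def evaluate(model, loader, device='cpu'):
    """Accuracy of `model` over every batch yielded by `loader`."""
    model.eval()
    correct, total = 0, 0
    with torch.no_grad():
        for inputs, targets in loader:
            inputs, targets = inputs.to(device), targets.to(device)
            preds = model(inputs).argmax(dim=1)
            correct += (preds == targets).sum().item()
            total += targets.size(0)
    return correct / total

# Bug: val_acc = evaluate(model, train_loader)   # measures memorization of the training data
# Fix: val_acc = evaluate(model, val_loader)     # measures generalization to unseen data
```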
