Confusion regarding validation data set in Machine learning? - machine-learning

https://www.youtube.com/watch?v=i_LwzRVP7bg&list=PPSV&ab_channel=freeCodeCamp.org
I was watching the above YouTube video, and in the chapter "Training Model" there were 3 data sets discussed:
1) Training data set
2) Test data set
3) Validation data set
But I am confused about the difference between these 3 types, because in other ML resources I only came across two sets, the training data set and the test data set, whereas here a validation data set is also discussed.
What is a validation data set, is it always necessary to include one, and how is it different from the training data set and the test data set?

Usually, in machine learning, the most discussed data sets are the training and the test sets where a model can learn a distribution (the training set) and evaluate its performance on unseen data (the test set).
In recent years, when enough data is present, a validation set has been introduced between those two sets to help with hyperparameter tuning (finding the best hyperparameters for the model). It is similar to the test set because it is unseen data, but because we use it to tune the hyperparameters, we still need a final test set to see if those hyperparameters generalize well.
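As a rough illustration, here is a minimal sketch of such a three-way split with scikit-learn; the dataset and the split ratios are placeholders, not from the video:

    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split

    X, y = load_iris(return_X_y=True)

    # Carve off a final test set first, then split the remainder into train and validation
    # (here roughly 60% train / 20% validation / 20% test).
    X_rest, X_test, y_rest, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
    X_train, X_val, y_train, y_val = train_test_split(X_rest, y_rest, test_size=0.25, random_state=0)

    # Train on (X_train, y_train), tune hyperparameters against (X_val, y_val),
    # and touch (X_test, y_test) only once, for the final evaluation.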
Hope this helps!

Related

K fold cross validation makes no part of data blind to the model

I have a conceptual question about K fold Cross validation.
In general, we train a model on training data and validate it with test data, and we assume the system is blind to the test data; this is why we can evaluate whether the system really learnt or not.
Now with k-fold, the final model has actually seen (indirectly, though) all of the data, so why is it still valid? It has already seen all the data, and we do not know how it would predict on unseen data.
My question is: given this fact, how do we know this method is valid?
Thanks.
In K-Fold Cross Validation, you actually train K different models. Let's say we are doing 5-fold CV and the size of the dataset is 100 samples. Then, in each fold, we randomly split the data into 80 train samples and 20 test samples. We train on the 80 train samples, then test the trained model on the 20 left-out test samples, compute the accuracy and note it. At the end, we will have 5 different models, and we can average the accuracies of the folds and report this as the average performance of the model.
Coming to your question, you need to think about why we need K-Fold Cross Validation in the first place. The answer is that you need to report the performance of your model. However, if you just train and evaluate your model on a single split, there is a possibility that your model may be biased towards this specific split. For example, a rare case may come up in this split, such as a large domain shift between the train and test sets, which is bad for performance.
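A minimal sketch of this procedure with scikit-learn's KFold; the dataset and classifier below are placeholders, not from the answer:

    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import KFold
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    X, y = load_breast_cancer(return_X_y=True)

    scores = []
    # 5 folds: each iteration trains a fresh model on ~80% of the data and tests on the held-out ~20%.
    for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
        model = make_pipeline(StandardScaler(), LogisticRegression())
        model.fit(X[train_idx], y[train_idx])
        scores.append(accuracy_score(y[test_idx], model.predict(X[test_idx])))

    print("per-fold accuracy:", scores)
    print("average accuracy:", np.mean(scores))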
TL;DR: Think of your 'test data' more like 'validation data', which you hope represents truly unseen test data. Ideally if the model performs well for many different validation datasets it will work well when applied to real life test data which wasn't used in the training-validation process.
This confusion is justified. You are correct.
This is where the terminology training data, validation data and test data can make things clearer. Models are trained on training data. This is data directly seen by the model as it goes through the process of updating its parameters and learning. Validation data is data that we use to validate how well the model has actually learned. It is not directly seen by the model, and we use it to judge things like under- or overfitting. It is assumed that the validation data is a good representation of the test data. Test data is what we will end up applying our model to in the real world; it has never been seen in any way by the model.
Test and validation data are often used interchangeably, with most people just using training and test terminology.
An example:
If you are building a cat detector, you collect images of cats and split these images into training and validation sets. You assume the validation set is an accurate representation of the kinds of cat images people will use your model on in the real world. You train your model on the training data, validate how well it has learned on the validation data, and once you think it has learned well, you deploy the model. People will use it on their own images to detect cats. These images are the true test data, which have never been seen by the model, but hopefully your validation set was a good indicator of how your model will perform on these images.
K-fold cross validation is best used when your validation set may be small, or you are unsure how well it represents test data (e.g. if there are only ginger cats in your validation set, this could lead to your model failing on test data, so you would like to mix up the validation set). By performing k-fold cross validation you can validate your model more times, with different choices of validation set, which hopefully gives a better indication of your model's generalizability.
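A minimal sketch of that idea with scikit-learn's cross_val_score (classifier and dataset are again placeholders): each fold holds out a different slice of the data as the validation set, and the spread of the fold scores hints at how stable the model is.

    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import KFold, cross_val_score

    X, y = load_iris(return_X_y=True)

    # Shuffled 5-fold CV: every sample serves as validation data in exactly one fold.
    cv = KFold(n_splits=5, shuffle=True, random_state=0)
    scores = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=cv)

    print("fold scores:", scores)
    print("mean and spread:", scores.mean(), scores.std())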

Pretraining Deep Learning Model for Weight Initialization

When pretraining a deep learning model (let's say a deep convolutional neural network) in order to achieve good weight initialization, do I use the entire training set without validation (so that I avoid information leakage), or just a subset of the training set?
If you want to fine-tune your network after training it on your dataset, then you can use the same dataset (making sure that the data in the training, test and validation sets do not switch around). What you can also do as 'pre-training' is to download a model that is already trained on a dataset/problem similar to yours and then train it on your dataset. This is known as transfer learning and works well for similar problems, but of course the bigger the gap between the two problems, the more you need to train.
In conclusion: you can use any dataset as long as the validation set remains hidden from the network.
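A minimal sketch of the transfer-learning route, assuming PyTorch/torchvision and an ImageNet-pretrained backbone; the 10-class head is a placeholder:

    import torch.nn as nn
    import torchvision.models as models

    # Start from weights pretrained on ImageNet instead of a random initialization
    # (the `weights` argument requires torchvision >= 0.13).
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

    # Optionally freeze the pretrained backbone so that only the new head is trained at first.
    for param in model.parameters():
        param.requires_grad = False

    # Replace the final layer for the new task (here: 10 classes, a placeholder) and
    # fine-tune on your own data, keeping the validation set hidden from training.
    model.fc = nn.Linear(model.fc.in_features, 10)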
I think it is more useful if we divide the dataset into training, validation and test data. Keeping a completely new test set aside and validating the model only with the validation data is a good choice. The entire training data should be used for training.

How do you do cross validation correctly? [duplicate]

I am trying to understand the process of model evaluation and validation in machine learning. Specifically, in which order and how the training, validation and test sets must be used.
Let's say I have a dataset and I want to use linear regression. I am hesitating among various polynomial degrees (hyper-parameters).
This Wikipedia article seems to imply that the sequence should be:
Split data into training set, validation set and test set
Use the training set to fit the model (find the best parameters: coefficients of the polynomial).
Afterwards, use the validation set to find the best hyper-parameters (in this case, polynomial degree) (the Wikipedia article says: "Successively, the fitted model is used to predict the responses for the observations in a second dataset called the validation dataset")
Finally, use the test set to score the model fitted with the training set.
However, this seems strange to me: how can you fit your model with the training set if you haven't chosen yet your hyper-parameters (polynomial degree in this case)?
I see three alternative approaches; I am not sure whether they would be correct.
First approach
Split data into training set, validation set and test set
For each polynomial degree, fit the model with the training set and give it a score using the validation set.
For the polynomial degree with the best score, fit the model with the training set.
Evaluate with the test set
Second approach
Split data into training set, validation set and test set
For each polynomial degree, use cross-validation only on the validation set to fit and score the model
For the polynomial degree with the best score, fit the model with the training set.
Evaluate with the test set
Third approach
Split data into only two sets: the training/validation set and the test set
For each polynomial degree, use cross-validation only on the training/validation set to fit and score the model
For the polynomial degree with the best score, fit the model with the training/validation set.
Evaluate with the test set
So the question is:
Is the wikipedia article wrong or am I missing something?
Are the three approaches I envisage correct? Which one would be preferable? Would there be another approach better than these three?
The Wikipedia article is not wrong; according to my own experience, this is a frequent point of confusion among newcomers to ML.
There are two separate ways of approaching the problem:
Either you use an explicit validation set to do hyperparameter search & tuning
Or you use cross-validation
So, the standard point is that you always put aside a portion of your data as test set; this is used for no other reason than assessing the performance of your model in the end (i.e. not back-and-forth and multiple assessments, because in that case you are using your test set as a validation set, which is bad practice).
After you have done that, you choose if you will cut another portion of your remaining data to use as a separate validation set, or if you will proceed with cross-validation (in which case, no separate and fixed validation set is required).
So, essentially, both your first and third approaches are valid (and mutually exclusive, i.e. you should choose which one you will go with). The second one, as you describe it (CV only in the validation set?), is certainly not (as said, when you choose to go with CV you don't assign a separate validation set). Apart from a brief mention of cross-validation, what the Wikipedia article actually describes is your first approach.
Questions of which approach is "better" cannot of course be answered at that level of generality; both approaches are indeed valid and are used depending on the circumstances. Very loosely speaking, I would say that in most "traditional" (i.e. non deep learning) ML settings, most people choose to go with cross-validation; but there are cases where this is not practical (most deep learning settings, again loosely speaking), and people go with a separate validation set instead.
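As a rough sketch of the cross-validation route for the polynomial-degree example (scikit-learn assumed; the synthetic data and the degree grid are illustrative, not from the answer):

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score, train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PolynomialFeatures

    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(200, 1))
    y = np.sin(X).ravel() + rng.normal(scale=0.2, size=200)

    # Put the test set aside first; it is only touched once, at the very end.
    X_trainval, X_test, y_trainval, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

    # Cross-validation on the remaining data picks the degree (no fixed validation set needed).
    degrees = [1, 2, 3, 5, 8]
    mean_scores = []
    for d in degrees:
        pipe = make_pipeline(PolynomialFeatures(d), LinearRegression())
        mean_scores.append(cross_val_score(pipe, X_trainval, y_trainval, cv=5).mean())
    best_degree = degrees[int(np.argmax(mean_scores))]

    # Refit with the chosen degree on all of the train+validation data, then evaluate once on the test set.
    final_model = make_pipeline(PolynomialFeatures(best_degree), LinearRegression())
    final_model.fit(X_trainval, y_trainval)
    print("best degree:", best_degree, "test R^2:", final_model.score(X_test, y_test))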
What Wikipedia means is actually your first approach.
1. Split data into training set, validation set and test set
2. Use the training set to fit the model (find the best parameters: coefficients of the polynomial).
That just means that you use your training data to fit a model.
3. Afterwards, use the validation set to find the best hyper-parameters (in this case, polynomial degree) (the Wikipedia article says: "Successively, the fitted model is used to predict the responses for the observations in a second dataset called the validation dataset")
That means that you use the model previously trained on the training set to predict on your validation dataset, to get a score of how well your model performs on unseen data.
You repeat steps 2 and 3 for all hyperparameter combinations you want to look at (in your case, the different polynomial degrees you want to try) to get a score (e.g. accuracy) for every hyperparameter combination.
Finally, use the test set to score the model fitted with the training set.
Why you need the validation set is pretty well explained in this stackexchange question
https://datascience.stackexchange.com/questions/18339/why-use-both-validation-set-and-test-set
In the end, you can use any of your three approaches.
First approach: this is the fastest, because you only train one model for every hyperparameter combination. You also don't need as much data as for the other two.
Second approach: this is the slowest, because for every hyperparameter combination you train k classifiers over the k folds, plus a final one with all your training data to validate it. You also need a lot of data, because you split your data three times and then that first part again into k folds. But here you have the least variance in your results: it is pretty unlikely to get k good classifiers and a good validation result by coincidence (that could happen more easily in the first approach), and cross-validation is also much less likely to overfit.
Third approach: its pros and cons lie in between those of the other two; here overfitting is also less likely.
In the end it will depend on how much data you have and, if you get into more complex models like neural networks, how much time and computing power you have and are willing to spend.
Edit: As desertnaut mentioned, keep in mind that you should use the training and validation sets together as training data for your final evaluation with the test set. Also, you confused the training set with the validation set in your second approach.
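A minimal sketch of this workflow with an explicit validation set, assuming scikit-learn; the synthetic data and the list of degrees are placeholders, not from the answer:

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PolynomialFeatures

    rng = np.random.default_rng(1)
    X = rng.uniform(-3, 3, size=(300, 1))
    y = np.sin(X).ravel() + rng.normal(scale=0.2, size=300)

    # Three-way split: roughly 60% train, 20% validation, 20% test.
    X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.4, random_state=1)
    X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=1)

    # Steps 2 and 3: fit on the training set for each degree, score on the validation set.
    val_scores = {}
    for degree in [1, 2, 3, 5, 8]:
        model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
        model.fit(X_train, y_train)
        val_scores[degree] = model.score(X_val, y_val)
    best_degree = max(val_scores, key=val_scores.get)

    # Final model: refit the chosen degree on train + validation (as the edit above notes),
    # then evaluate exactly once on the test set.
    final_model = make_pipeline(PolynomialFeatures(best_degree), LinearRegression())
    final_model.fit(np.vstack([X_train, X_val]), np.concatenate([y_train, y_val]))
    print("best degree:", best_degree, "test R^2:", final_model.score(X_test, y_test))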

In Machine Learning, is it okay to add the development set to the training set after development?

Usually we train our models on the training set, evaluate them on the development set, make some changes, train and evaluate again, etc. (the development phase), and in the end evaluate once on the test set.
Assume we have little training data. Then, it could make sense to use training AND development set after the development phase. One could estimate hyperparameters as usual and in the end (the final training) add the dev set to the training set, train the model with the previously estimated hyperparameters and evaluate it once on the test set.
Would this be "cheating" in any way? Do people do this, or do they usually leave out the dev set from any training?
I don't think it's cheating in any way. If it improves your model against real-world data and your unseen test data, it should be OK. There are reasons why a training/dev/test split is recommended, but if you have such a small training data set, I believe it's a valid strategy. In any case, it's hard to give a definitive answer without knowing more details, such as the nature of the data and the task you would like to accomplish. Another approach you might like to have a look at is data augmentation.
I'd recommend the following course which covers training/dev/test set distribution, among other things:
https://www.coursera.org/learn/machine-learning-projects
Once you have decided on the hyperparameters using the dev set, you can use train + dev to perform the training again. This method is used quite often.
For example, with the GridSearchCV method in sklearn, if you use refit=True, the training is performed again after the hyperparameter search is done; i.e. if cv=4 and refit=True, each candidate is trained 4 times during the search (once per fold), plus 1 final training run using the complete training set once the best hyperparameters are found.
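A minimal sketch of that pattern; the estimator, dataset and parameter grid are placeholders, only GridSearchCV, cv and refit are from the answer:

    from sklearn.datasets import load_iris
    from sklearn.model_selection import GridSearchCV, train_test_split
    from sklearn.svm import SVC

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

    # 4-fold CV over the grid; refit=True retrains the best candidate on the whole
    # training set (i.e. train + the internal validation folds) once the search is done.
    search = GridSearchCV(SVC(), param_grid={"C": [0.1, 1, 10]}, cv=4, refit=True)
    search.fit(X_train, y_train)

    print(search.best_params_)
    print(search.score(X_test, y_test))  # single final evaluation on the held-out test set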

Is it a good practice to use your full data set for predictions?

I know you're supposed to separate your training data from your testing data, but when you make predictions with your model is it OK to use the entire data set?
I assume separating your training and testing data is valuable for assessing the accuracy and prediction strength of different models, but once you've chosen a model I can't think of any downsides to using the full data set for predictions.
You can use the full data for prediction, but it is better to retain the indexes of the train and test data. Here are the pros and cons:
Pro:
If you retain the indexes of the rows belonging to the train and test data, then you only need to predict once (saving time) to get all results. You can calculate performance indicators (R²/MAE/AUC/F1/precision/recall, etc.) for the train and test data separately after subsetting the actual and predicted values using the train and test set indexes (see the sketch below this list).
Cons:
If you calculate performance indicators for the entire data set (without clearly differentiating train and test using the indexes), then you will get overly optimistic estimates. This happens because, having been trained on the train data, the model gives good results on the train data, which, depending on the train/test split percentage, will give illusory good performance indicator values.
Processing large test data at once may create a memory bulge, which can result in a crash in all-objects-in-memory languages like R.
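A minimal sketch of the index-retaining idea, assuming scikit-learn and a placeholder dataset and model:

    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True)

    # Split the row indexes instead of the data itself, so they can be reused later.
    idx_train, idx_test = train_test_split(np.arange(len(y)), test_size=0.3, random_state=0)

    model = RandomForestClassifier(random_state=0).fit(X[idx_train], y[idx_train])

    # Predict once on the full data set, then evaluate train and test separately by index.
    pred = model.predict(X)
    print("train accuracy:", accuracy_score(y[idx_train], pred[idx_train]))
    print("test accuracy:", accuracy_score(y[idx_test], pred[idx_test]))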
In general, you're right - when you've finished selecting your model and tuning the parameters, you should use all of your data to actually build the model (exception below).
The reason for dividing data into train and test is that, without out-of-bag samples, high-variance algorithms will appear to do better than low-variance ones, almost by definition. Consequently, it's necessary to split data into train and test parts for questions such as:
deciding whether kernel-SVR is better or worse than linear regression, for your data
tuning the parameters of kernel-SVR
However, once these questions are settled, then in general, as long as your data is generated by the same process, the more of it you use the better your predictions will be, and you should use all of it.
An exception is the case where the data is, say, non-stationary. Suppose you're training for the stock market, and you have data from 10 years ago. It is unclear whether the process has stayed the same in the meantime. In this case you might be harming your predictions by including more data.
Yes, there are techniques for doing this, e.g. k-fold cross-validation:
One of the main reasons for using cross-validation instead of using the conventional validation (e.g. partitioning the data set into two sets of 70% for training and 30% for test) is that there is not enough data available to partition it into separate training and test sets without losing significant modelling or testing capability. In these cases, a fair way to properly estimate model prediction performance is to use cross-validation as a powerful general technique.
That said, there may not be a good reason for doing so if you have plenty of data, because it means that the model you're using hasn't actually been tested on real data. You're inferring that it probably will perform well, since models trained using the same methods on less data also performed well. That's not always a safe assumption. Machine learning algorithms can be sensitive in ways you wouldn't expect a priori. Unless you're very starved for data, there's really no reason for it.
