Why does MLflow not need to fit the model after hyperparameter tuning? - machine-learning

I am following the tutorial here: https://docs.databricks.com/_static/notebooks/mlflow/mlflow-end-to-end-example.html
I wonder why, after hyperparameter tuning, it goes directly to register_model with the best_run hyperparameters. Why does it not first fit the model on all of the training (and validation) data with the best_run hyperparameters?
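For concreteness, here is a minimal sketch of the alternative the question describes, i.e. refitting on the combined training and validation data with the best hyperparameters before registering. This is not taken from the linked notebook; the model, data and parameter values are placeholders.

import mlflow
import mlflow.sklearn
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# placeholder data and hyperparameters standing in for the tutorial's search output
X, y = make_classification(n_samples=1000, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=0)
best_params = {"n_estimators": 200, "max_depth": 6}   # e.g. taken from the best tuning run

# refit on train + validation with the best hyperparameters, then register that model
X_full = np.vstack([X_train, X_val])
y_full = np.concatenate([y_train, y_val])
with mlflow.start_run() as run:
    final_model = RandomForestClassifier(**best_params).fit(X_full, y_full)
    mlflow.sklearn.log_model(final_model, "model")
    mlflow.register_model(f"runs:/{run.info.run_id}/model", "my_registered_model")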

Related

Application and Deployment of K-Fold Cross-Validation

K-fold cross-validation is a technique for splitting the data into K folds for training and testing. The goal is to estimate the generalizability of a machine learning model: the model is trained K times, each time on K-1 of the folds, and then tested on the corresponding held-out fold.
Suppose I want to compare a Decision Tree and a Logistic Regression model on some arbitrary dataset with 10 Folds. Suppose after training each model on each of the 10 folds and obtaining the corresponding test accuracies, Logistic Regression has a higher mean accuracy across the test folds, indicating that it is the better model for the dataset.
Now, for application and deployment. Do I retrain the Logistic Regression model on all the data, or do I create an ensemble from the 10 Logistic Regression models that were trained on the K-Folds?
The main goal of CV is to validate that we did not get the numbers by chance. So, I believe you can just use a single model for deployment.
If you are already satisfied with the hyper-parameters and model performance, one option is to train on all the data that you have and deploy that model.
The other, obvious option is to deploy one of the CV models.
About the ensemble option: I believe it should not give significantly better results than a model trained on all the data. Each model trains for the same amount of time, with similar parameters and the same architecture, only on slightly different training data, so the models shouldn't show very different performance. In my experience, ensembling helps when the outputs of the models differ because of the architecture or the input data (like different image sizes).
The models trained during k-fold CV should never be reused. CV is only used for reliably estimating the performance of a model.
As a consequence, the standard approach is to re-train the final model on the full training data after CV.
Note that evaluating different models is akin to hyper-parameter tuning, so in theory the performance of the selected best model should be reevaluated on a fresh test set. But with only two models tested I don't think this is important in your case.
You can find more details about k-fold cross-validation here and there.
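For illustration, here is a minimal scikit-learn sketch of that standard workflow (the dataset and model choice are placeholders): k-fold CV is used only to estimate performance, and the deployed model is re-trained on all of the training data.

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, random_state=0)

# 10-fold CV gives an estimate of generalization performance;
# the 10 fitted models are discarded afterwards.
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=10)
print("estimated accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))

# the model that is actually deployed is re-trained on the full training data
final_model = LogisticRegression(max_iter=1000).fit(X, y)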

Pretraining Deep Learning Model for Weight Initialization

When pretraining a deep learning model (let's say a deep convolutional neural network) in order to achieve good weight initialization, do I use the entire training set without validation (so that I avoid information leakage), or just a subset of the training set?
If you want to fine-tune your network after training it on your dataset, then you can use the same dataset (making sure that the data in the training, test, and validation sets does not switch around). What you can also do as 'pre-training' is to download a model that has already been trained on a dataset/problem similar to yours and then train it on your dataset. This is known as transfer learning and works well for similar problems, but of course the bigger the gap between the two problems, the more you need to train.
In conclusion: you can use any dataset as long as the validation set remains hidden from the network.
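As a hedged illustration of the transfer-learning option described above (the base network, input size and number of classes are made-up choices, not from the question), a Keras sketch might look like this:

import tensorflow as tf

# start from a network pretrained on a similar problem (here: ImageNet weights)
base = tf.keras.applications.ResNet50(weights="imagenet", include_top=False,
                                      input_shape=(224, 224, 3), pooling="avg")
base.trainable = False  # freeze the pretrained weights for the first training phase

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(10, activation="softmax"),  # new head for your own classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# train the new head on your own training set; the validation set is only used
# for monitoring and never for fitting weights, e.g.:
# model.fit(train_ds, validation_data=val_ds, epochs=5)   # train_ds/val_ds are placeholders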
I think it is more useful to divide the dataset into training, validation and test data. Keeping a completely new test set aside and validating the model with only the validation data is a good choice. The entire training data should be used for training.

train, dev and test set advice for implementation and hyperparameter tuning

I have some doubts about the implementation and tuning of parameters and hyperparameters using the classic train, validation and test sets. It would be of great help if somebody could clarify these concepts for me and give me some hints for implementing them in a language like Python.
For example, if I have a neural network, from what I know the parameters (let's consider the number of hidden layers and neurons per layer) can be tuned with the training set. Then, when it comes to the validation set, which is approximately 20% of the dataset, I can tune my hyperparameters with the following algorithm:
Example: Tuning batch size and learning rate:
hyperListB = []
hyperListL = []
# let's suppose both lists have the same dimensions
values = []
for i in range(len(hyperListB)):
    model = fit(train_set, hyperListB[i], hyperListL[i])
    values.append(evaluate(model, validation_set))  # add the score of each run
plot_loss_functions(values)
# select the best set of hyperparameters
model = fit(test_set, selected_hyperparameters)
evaluate(model)
Would this sequence of steps be correct? I have searched through different pages and did not find anything that could help me with this. Please bear in mind that I do not want to use cross-validation or library-based techniques such as GridSearchCV.
Thanks
In a train/validation/test split, the fit method is called on the train data.
Validation data is used for hyperparameter tuning. A set of hyperparameters is selected and the model is trained on the train set. Then this model will be evaluated on the validation set. This is repeated until all permutations of the different hyperparameters have been exhausted.
The best set of hyperparameters are the ones that gave the best result on the validation set. This method is called Grid search.
The test set is used to evaluate the model with the best hyperparameters selected. This gives the final unbiased accuracy and loss.
The fit method will never be called on the validation or test set.
Your example would look like this:
hyperListB = []
hyperListL = []
# let's suppose both lists have the same dimensions
values = []
for hyperB in hyperListB:
    for hyperL in hyperListL:
        model = fit(train_set, hyperB, hyperL)
        values.append(evaluate(model, validation_set))  # add the score of each run
plot_loss_functions(values)
# select the best set of hyperparameters
evaluate(model, test_set)
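For a more concrete version of the same loop without GridSearchCV, here is a hedged scikit-learn sketch; the dataset, the MLPClassifier and the two small grids are placeholders for illustration only.

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, random_state=0)
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)

batch_sizes = [32, 128]
learning_rates = [1e-3, 1e-2]

results = []
for b in batch_sizes:
    for lr in learning_rates:
        model = MLPClassifier(batch_size=b, learning_rate_init=lr,
                              max_iter=200, random_state=0).fit(X_train, y_train)
        results.append(((b, lr), model.score(X_val, y_val)))  # validation score per run

best_params, best_score = max(results, key=lambda r: r[1])
print("best (batch_size, lr):", best_params, "validation accuracy:", best_score)

# the test set is only used once, to evaluate the selected configuration
best_model = MLPClassifier(batch_size=best_params[0], learning_rate_init=best_params[1],
                           max_iter=200, random_state=0).fit(X_train, y_train)
print("test accuracy:", best_model.score(X_test, y_test))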

How to use over-sampled data in cross validation?

I have an imbalanced dataset. I am using SMOTE (Synthetic Minority Oversampling Technique) to perform oversampling. When performing binary classification, I use 10-fold cross-validation on this oversampled dataset.
However, I recently came across this paper, Joint use of over- and under-sampling techniques and cross-validation for the development and assessment of prediction models, which mentions that it is incorrect to use the oversampled dataset during cross-validation, as it leads to overoptimistic performance estimates.
What is the correct approach/procedure for using over-sampled data in cross-validation?
To avoid overoptimistic performance estimates from cross-validation in Weka when using a supervised filter, use FilteredClassifier (in the meta category) and configure it with the filter (e.g. SMOTE) and classifier (e.g. Naive Bayes) that you want to use.
For each cross-validation fold Weka will use only that fold's training data to parameterise the filter.
When you do this with SMOTE you won't see a difference in the number of instances in the Weka results window. What's happening is that Weka is building the model on the SMOTE-applied dataset, but showing the output of evaluating it on the unfiltered training set - which makes sense in terms of understanding the real performance. Try changing the SMOTE filter settings (e.g. the -P setting, which controls how many additional minority-class instances are generated as a percentage of the number in the dataset) and you should see the performance changing, showing you that the filter is actually doing something.
The use of FilteredClassifier is illustrated in this video and these slides from the More Data Mining with Weka online course. In this example the filtering operation is supervised discretisation, not SMOTE, but the same principle applies to any supervised filter.
If you have further questions about the SMOTE technique I suggest asking them on Cross Validated and/or the Weka mailing list.
The correct approach would be to first split the data into multiple folds and then apply the sampling only to the training folds, leaving the validation fold as is; this is how the dataset should be resampled in a K-fold fashion.
If you want to achieve this in Python, there is a library for that:
Link to the library: https://pypi.org/project/k-fold-imblearn/
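Alternatively, here is a minimal sketch of the same idea using imbalanced-learn's Pipeline, which fits SMOTE on each training fold only and leaves the held-out fold untouched (the dataset and classifier are illustrative):

from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

# imbalanced toy dataset standing in for the real data
X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)

pipe = Pipeline([
    ("smote", SMOTE(random_state=0)),           # applied inside each fold, to the training part only
    ("clf", LogisticRegression(max_iter=1000)),
])

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(pipe, X, y, cv=cv, scoring="roc_auc")
print("AUC: %.3f +/- %.3f" % (scores.mean(), scores.std()))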

How to evaluate the performance of different model on one dataset?

I want to evaluate the performance of different models such as SVM, Random Forest, CNN, etc., but I only have one dataset. So I split the dataset into a training set and a testing set, train the different models on the training data, and test them on the testing data.
My question: can I get the real performance of the different models with only one dataset? For example, if I find that the SVM model gets the best result, should I select the SVM as my final classification model?
It's probably a better idea to cross-validate your models, so that each one is evaluated on different test samples and biases are avoided. Also check your models against different evaluation metrics depending on your application type; for instance, use recall, accuracy and AUC for each model if it's a classification problem.
Evaluation results can be pretty deceptive and require extensive validation.
You can plot the ROC curve for all the models. The model with the highest AUC will be the best model.
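As a hedged sketch of the cross-validated, multi-metric comparison suggested above (the dataset and models are placeholders; a CNN is left out because it needs its own training setup):

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_validate
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, random_state=0)

models = {"SVM": SVC(), "RandomForest": RandomForestClassifier()}
scoring = ["accuracy", "recall", "roc_auc"]

# each model is evaluated on the same folds with the same metrics
for name, model in models.items():
    res = cross_validate(model, X, y, cv=5, scoring=scoring)
    print(name, {m: res["test_" + m].mean().round(3) for m in scoring})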
