I have an image dataset for multi-class image classification, split into training and testing images. I trained my model on the training data using an 80-20 train-validation split and saved it as an .h5 file.
Now, I want to predict the classes for test images.
Which of the following options is better, and is that always the case?
1. Use the trained model as it is to predict on the test images.
2. Retrain the saved model on the whole training data (i.e., including the 20% that was used for validation) and then predict on the test images. But in that case there is no validation data, so how can I make sure the model keeps the loss low during training?
If you already properly trained the model, you do not need to retrain it (unless you are doing something specific with transfer learning). The whole purpose of having test data is to use it as a test case to see how well your model does on unseen data.
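A minimal sketch of that first option, assuming a saved Keras model and a test_images array preprocessed the same way as the training data ('my_model.h5' is a hypothetical path):

import numpy as np
from tensorflow.keras.models import load_model

# Load the already-trained model and predict on unseen test images.
model = load_model('my_model.h5')          # hypothetical file name
probs = model.predict(test_images)         # per-class probabilities
pred_classes = np.argmax(probs, axis=1)    # index of the most likely class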
I am using the Weka software to build a classification model, and I am confused about partitioning the training and testing datasets. I divided the whole dataset 60-40: I saved 60% to my hard disk as the training dataset and saved the remaining 40% to another file as the test dataset. The data I am using is imbalanced, so I applied SMOTE to my training dataset. After that, in Weka's Classify tab, I selected the Use training set option under Test options and ran the Random Forest classifier on the training dataset. After getting the result, I chose the Supplied test set option under Test options, loaded my test dataset from disk, and ran the classifier again.
I tried to find a tutorial on how to load a training set and a test set in Weka but could not find one, so I did the above based on my own understanding.
Therefore, I would like to know: is that the right way to perform classification on training and test datasets?
Thank you.
There is no need to evaluate your classifier on the training set (this will be overly optimistic, since the classifier has already seen this data). Just use the Supplied test set option; your classifier will then be trained automatically on the currently loaded dataset before being evaluated on the specified test set.
Instead of manually splitting your data, you could also use the Percentage split test option, with 60% to be used for your training data.
When using filters, you should always wrap the filter (in this case SMOTE) and your classifier (in this case RandomForest) in the FilteredClassifier meta-classifier. That way, you ensure that the training and test set data get transformed correctly. This also avoids the problem of leaking information into the test set that arises when you transform the full dataset with a supervised filter and split it into train/test afterwards. Finally, it documents nicely what preprocessing is being done to your input data, all in a single command-line string.
If you need to apply more than one filter, use the MultiFilter to apply them sequentially.
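A sketch of the FilteredClassifier setup above using the python-weka-wrapper3 bindings (an assumption on my part — the answer describes the Weka GUI and command line; this also assumes the SMOTE package is installed and that train.arff/test.arff are hypothetical files with the class as the last attribute):

import weka.core.jvm as jvm
from weka.core.converters import Loader
from weka.classifiers import FilteredClassifier, Classifier, Evaluation
from weka.filters import Filter

jvm.start(packages=True)  # SMOTE ships as a Weka package

loader = Loader(classname="weka.core.converters.ArffLoader")
train = loader.load_file("train.arff")   # hypothetical file
train.class_is_last()
test = loader.load_file("test.arff")     # hypothetical file
test.class_is_last()

# Wrap SMOTE and RandomForest so SMOTE is fitted on the training data only.
fc = FilteredClassifier()
fc.filter = Filter(classname="weka.filters.supervised.instance.SMOTE")
fc.classifier = Classifier(classname="weka.classifiers.trees.RandomForest")
fc.build_classifier(train)

evaluation = Evaluation(train)
evaluation.test_model(fc, test)
print(evaluation.summary())

jvm.stop()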
When pretraining a deep learning model (let's say a deep convolutional neural network) in order to achieve a good weight initialization, do I use the entire training set without validation (so that I avoid information leakage), or just a subset of the training set?
If you want to fine-tune your network after training it on your dataset, then you can use the same dataset (making sure that the data in the training, test, and validation sets do not switch around). What you can also do as 'pre-training' is to download a model that is already trained on a dataset/problem similar to yours and then train it on your dataset. This is known as transfer learning, and it works well for similar problems, but of course the bigger the gap between the two problems, the more you need to train.
In conclusion: you can use any dataset as long as the validation set remains hidden from the network.
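A minimal transfer-learning sketch in Keras, under the assumptions that a MobileNetV2 base suits the problem and that num_classes is defined elsewhere; freezing the base first and fine-tuning later is one common recipe, not the only one:

import tensorflow as tf

# Start from ImageNet weights, freeze the convolutional base,
# and train only a freshly added classification head.
base = tf.keras.applications.MobileNetV2(include_top=False, pooling='avg',
                                         weights='imagenet')
base.trainable = False  # keep pre-trained features fixed at first

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(num_classes, activation='softmax')  # num_classes assumed
])
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])
# After the head converges, set base.trainable = True and recompile
# with a lower learning rate to fine-tune the whole network.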
I think it is more useful to divide the dataset into training, validation, and test data. Keeping a completely new test set aside and validating the model with only the validation data is a good choice. The entire training split should be used for training.
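A sketch of such a three-way split with scikit-learn (X and y are hypothetical arrays; holding out 20% and then 25% of the remainder yields a 60/20/20 split overall):

from sklearn.model_selection import train_test_split

# Hold out a final test set first, then carve a validation set
# out of the remaining data: 60% train / 20% validation / 20% test.
X_tmp, X_test, y_tmp, y_test = train_test_split(
    X, y, test_size=0.20, stratify=y, random_state=42)
X_train, X_val, y_train, y_val = train_test_split(
    X_tmp, y_tmp, test_size=0.25, stratify=y_tmp, random_state=42)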
I want to build an image classifier. I gathered images from the web and resized them using the PIL library.
Now I want to convert those images into model inputs. What operations do I need to perform on these images? I also converted the images into numpy arrays and stored them in a list named features. What should I do next?
Well, there are a number of decisions to make. One is to partition your images into a training set, a validation set, and generally also a test set. I typically use 10% of the images as a validation set and 10% as a test set.

Next you need to decide how to provide your images to the network. My preference is to use the Keras ImageDataGenerator with flow_from_directory. This requires you to create three directories to store images: I put the test images in a directory called 'test', the validation images in a directory called 'valid', and the training images in a directory called 'train'. Within each of these directories you need to create identically named class directories. For example, if you are trying to classify images of dogs and cats, you would create a 'dogs' sub-directory and a 'cats' sub-directory within the test, train, and valid directories. Be sure to name them identically, because the names of the sub-directories determine the names of your classes. Now populate the class directories with your images; these can be images in standard formats like jpg.

Now create three generators, a train generator, a validation generator, and a test generator, as in
from tensorflow.keras.preprocessing.image import ImageDataGenerator

train_gen = ImageDataGenerator(preprocessing_function=pre_process).flow_from_directory(
    'train', target_size=(height, width), batch_size=train_batch_size,
    seed=rand_seed, class_mode='categorical', color_mode='rgb')
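Do the same for the validation and test generators; a sketch, reusing the assumed variables above (val_batch_size and test_batch_size are hypothetical names), might look like:

# shuffle=False on the test generator keeps predictions aligned
# with the file order for later inspection.
val_gen = ImageDataGenerator(preprocessing_function=pre_process).flow_from_directory(
    'valid', target_size=(height, width), batch_size=val_batch_size,
    class_mode='categorical', color_mode='rgb')
test_gen = ImageDataGenerator(preprocessing_function=pre_process).flow_from_directory(
    'test', target_size=(height, width), batch_size=test_batch_size,
    class_mode='categorical', color_mode='rgb', shuffle=False)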
Documentation for the ImageDataGenerator and flow_from_directory is here. Now you have your images stored and the data generators set up to provide data to your model in batches, based on the batch size.

So now we can get to actually building a model. You can build your own model; however, there are excellent models for image processing available for you to use. This is called transfer learning. I like to use a model called MobileNet. I prefer it because it has a small number of trainable parameters (about 4 million) versus other models which have tens of millions. Keras has this and many other image-processing models; documentation is here. Now you have to modify the final layer of the model to adapt it to your application. MobileNet was trained on the ImageNet dataset, which has 1000 classes. You need to remove this last layer and replace it with a dense layer having as many nodes as you have classes, using the softmax activation function. An example for the case of 2 classes is shown below.
import tensorflow as tf
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam

# Load MobileNet without its 1000-class ImageNet head;
# pooling='avg' appends a global average pooling layer.
mobile = tf.keras.applications.mobilenet.MobileNet(
    include_top=False,
    input_shape=(height, width, 3),
    pooling='avg', weights='imagenet',
    alpha=1.0, depth_multiplier=1)
x = mobile.output                                # pooled feature vector
predictions = Dense(2, activation='softmax')(x)  # one node per class
model = Model(inputs=mobile.input, outputs=predictions)
for layer in model.layers:
    layer.trainable = True                       # fine-tune all layers
model.compile(optimizer=Adam(learning_rate=.001),
              loss='categorical_crossentropy', metrics=['accuracy'])
The last line of code compiles your model using the Adam optimizer with a learning rate of .001. Now we can finally get to training the model. I use the model.fit_generator method as shown below:
data = model.fit_generator(generator=train_gen, validation_data=val_gen,
                           epochs=epochs, initial_epoch=start_epoch,
                           callbacks=callbacks, verbose=1)
Documentation for the above is here. The model will train on your training set and validate on the validation set. For each epoch (training cycle) you will get a printout of the training loss, training accuracy, validation loss, and validation accuracy, so you can monitor how your model is performing. The final step is to run your test set to see how well your model performs on data it was not trained on. To do that, use the code below:
results = model.evaluate(test_gen, verbose=0)
print('Model accuracy on Test Set is {0:7.2f} %'.format(results[1] * 100))
That's about it, but of course there are a lot of details to fill in. If you are new to convolutional neural networks and machine learning, I would recommend an excellent tutorial on YouTube here. There are about 20 sequential tutorials in the playlist. I used this tutorial as a beginner and found it excellent. It covers all the topics you need to become skilled at using CNN classifiers. Good luck!
I have three datasets (train, validation, and test), and I am currently using an XGBoost classifier for a classification task.
I trained the XGBClassifier on the train set and saved it as a pickle file to avoid having to re-train it every time. Once I load the model from the pickle file, I am able to use its predict method, but I don't seem to be able to train this model further on the validation set or any other new dataset.
Note: I do not get any error output; the Jupyter Lab cell looks like it is working perfectly, but my CPU cores are all idle during the cell's operation, so I can see the model isn't being fitted.
Could this be a problem with XGBoost, or is it that pickle-dumped models cannot be fitted again after loading?
I had the exact same question a year ago; you can find the question and answer here.
Though, in this way, you will keep adding "trees" (boosters) to your existing model, using your new data.
It might be better to train a new model on your training + validation data sets.
Whatever you decide to do, you should try both options and evaluate your results to see what fits better for your data.
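A sketch of both options with the xgboost scikit-learn API (X_train/y_train, X_valid/y_valid, and the pickle path are all hypothetical; the xgb_model argument of fit is what continues boosting an existing model):

import pickle
import numpy as np
import xgboost as xgb

with open('xgb_model.pkl', 'rb') as f:   # hypothetical file name
    model = pickle.load(f)

# Option 1: continue boosting -- new trees are added on top of the
# existing ones, fitted to the validation data.
model.fit(X_valid, y_valid, xgb_model=model.get_booster())

# Option 2: train a fresh model on training + validation combined.
X_all = np.concatenate([X_train, X_valid])
y_all = np.concatenate([y_train, y_valid])
new_model = xgb.XGBClassifier().fit(X_all, y_all)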
How do I merge the train, validation, and test sets of MNIST in TensorFlow for batch training? Can anyone help me?
What would be the purpose of using a testing set to train a model? It would then become a training set too.
Sets are named training, validation and testing for a reason.
So you train your model with the training data. Once the model is trained, you validate it over the validation data. You test the performance of the model over the testing data. The training method you use (batch or something else) will NEVER change the fact that training/validation/testing data should never be mixed with one another.
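A minimal sketch of keeping the MNIST splits separate while still batching in TensorFlow (the 10,000-image validation carve-out and batch size of 128 are arbitrary choices, not prescribed by the answer):

import tensorflow as tf

# Load MNIST and keep the roles separate: carve a validation set out
# of the training split rather than touching the test split.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_val, y_val = x_train[-10000:], y_train[-10000:]
x_train, y_train = x_train[:-10000], y_train[:-10000]

train_ds = tf.data.Dataset.from_tensor_slices((x_train, y_train)).shuffle(50000).batch(128)
val_ds = tf.data.Dataset.from_tensor_slices((x_val, y_val)).batch(128)
test_ds = tf.data.Dataset.from_tensor_slices((x_test, y_test)).batch(128)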
If this does not answer your question, then edit your question and specify, because it is rather vague at the present moment.