Using probabilities from one model as features for another - machine-learning

I'd like to use the probability output of one model as features for another model.
For instance, to determine what kind of bird is in a picture, I want to train a CNN and then combine its probability output with other data, like the size and weight of the bird, and feed all of it to an SVM.
Do I need separate training and test sets for extracting these probabilities with the CNN? Should I divide my dataset into folds and extract the probabilities for each different test fold, or can I just train and test on all my data and save the probabilities?

A test set is intended to validate that your classifier reaches its goals, or alternatively to set hyper-parameters. In this case, you're not interested in the output of the CNN itself, as it's just an intermediate layer in the bigger picture; you do, however, still need held-out data to evaluate the combined pipeline on examples neither stage has seen.
Having said that, note that you're not back-propagating the SVM's errors through its inputs into the CNN. That's the consequence of a two-stage model. If you did, you'd be optimizing the CNN for use as input to that particular SVM.
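As a concrete illustration, here is a minimal sketch of the two-stage setup, assuming a trained Keras model `cnn` and arrays `images`, `extra_features` (size, weight, ...) and `labels`; all of these names are illustrative, not part of any particular API.

```python
import numpy as np
from sklearn.svm import SVC

# Stage 1: class probabilities from the CNN's softmax output.
# `cnn`, `images`, `extra_features`, `labels` are assumed to exist.
probs = cnn.predict(images)              # shape: (n_samples, n_classes)

# Stage 2: concatenate the probabilities with the other bird features
# and train the SVM on the combined feature matrix.
svm_features = np.hstack([probs, extra_features])
svm = SVC(probability=True).fit(svm_features, labels)
```

To keep the SVM from seeing over-optimistic probabilities for images the CNN was trained on, the training-set probabilities are best produced out-of-fold, as described in the stacking answer further down.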

Related

Predicting inputs in neural network

Is it possible to predict inputs in a Keras neural network for a particular output?
For example, I have a dataset with 28 inputs and 3 outputs, and I have trained a model in Keras which works fine. Now I want to enter particular values for the outputs and predict which inputs would produce that particular output.
I'm not 100% sure I understand the question correctly, but if you're trying to build a model that predicts inputs from outputs, then you will need to train a second model with the inputs and outputs swapped, so that the outputs become your inputs and your inputs become the outputs. This can be annoying in practice: you might have to build a separate network to predict each of your input variables.
To get around this problem, you can consider autoencoders if you're okay with getting a close approximation of the input. An autoencoder is an unsupervised artificial neural network that learns how to efficiently compress and encode data, and then learns how to reconstruct the data from the reduced encoded representation so that the reconstruction is as close to the original input as possible (you can read more here: https://towardsdatascience.com/auto-encoder-what-is-it-and-what-is-it-used-for-part-1-3e5c6f017726).
Yes, it is definitely possible to predict inputs from the output. In fact, what you're describing is essentially an autoencoder.
Let's say you have a NN trained on MNIST. If you then use the outputs of the classification layer to train the decoder of an autoencoder, you will get a rough indication of the input.
However, this is not the best way to do it. The best way is to treat the latent space as the "output", then feed this output into:
a) a one-layer classification head to give you the predicted output, and
b) the decoder.
This will give you both the predicted output and a reconstruction of the original image.
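A minimal Keras sketch of this architecture, assuming flattened 28x28 MNIST-style inputs; the layer sizes and names are illustrative.

```python
from tensorflow.keras import layers, Model

inputs = layers.Input(shape=(784,))
latent = layers.Dense(32, activation="relu", name="latent")(inputs)

# Head (a): a one-layer classifier on top of the latent space.
class_out = layers.Dense(10, activation="softmax", name="class_out")(latent)

# Head (b): a decoder that reconstructs the input from the latent space.
recon_out = layers.Dense(784, activation="sigmoid", name="recon_out")(latent)

model = Model(inputs, [class_out, recon_out])
model.compile(optimizer="adam",
              loss={"class_out": "sparse_categorical_crossentropy",
                    "recon_out": "binary_crossentropy"})
# model.fit(x_train, {"class_out": y_train, "recon_out": x_train})
```

Training minimizes both losses jointly, so the latent space has to support classification and reconstruction at the same time.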

LDA as the dimension reduction before or after partitioning

I am doing a classification and I have this question about using LDA just for dimension reduction:
Should the LDA be applied to the whole feature matrix, including both train and test data, and the partitioning into train and test sets for classification be done only afterwards (once the dimensionality has been reduced)? Is that correct?
Alternatively, suppose we need to partition the data before applying the LDA. How can the classification on the test data then be done using Matlab's internal classifiers like kNN and SVM?
You should fit the LDA on the training set and afterwards apply it to the test set as well.
The reason is that you want to check how your entire processing chain performs on unseen data. If you fit the LDA on train and test combined, information from the test set leaks into the transformation, and your evaluation no longer reflects performance on genuinely unseen data.
If you also need to determine the number of dimensions, go for a train/test/validation split: determine the optimal number of dimensions on train/test, then build the LDA plus the model on train and test merged and evaluate on the validation set.
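A minimal sketch of this in scikit-learn (the same fit-on-train, apply-to-test workflow carries over to Matlab); the dataset and classifier choice are illustrative.

```python
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

lda = LinearDiscriminantAnalysis(n_components=2)
X_train_r = lda.fit_transform(X_train, y_train)  # fit the LDA on train only
X_test_r = lda.transform(X_test)                 # reuse the fitted transform

knn = KNeighborsClassifier().fit(X_train_r, y_train)
print(knn.score(X_test_r, y_test))               # accuracy on unseen data
```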

How to correctly combine my classifiers?

I have to solve a 2-class classification problem.
I have 2 classifiers that output probabilities. Both of them are neural networks of different architectures.
Those 2 classifiers are trained and saved to 2 files.
Now I want to build a meta-classifier that will take the probabilities as input and learn the weights of those 2 classifiers.
So it will automatically decide how much I should "trust" each of my classifiers.
This model is described here:
http://rasbt.github.io/mlxtend/user_guide/classifier/StackingClassifier/#stackingclassifier
I plan to use the mlxtend library, but it seems that StackingClassifier refits the models.
I do not want to refit because it takes a huge amount of time.
On the other hand, I understand that refitting is necessary to "coordinate" the work of each classifier and "tune" the whole system.
What should I do in this situation?
I won't talk about mlxtend because I haven't worked with it, but I'll tell you the general idea.
You don't have to refit these models to the full training set, but you do have to refit them to parts of it so you can create out-of-fold predictions.
Specifically, split your training data into a few pieces (usually 3 to 10). Keep one piece (i.e. fold) as validation data and train both models on the other folds. Then predict the probabilities for the validation data using both models. Repeat the procedure, treating each fold as the validation set in turn. In the end, you have out-of-fold probabilities for every data point in the training set, as sketched below.
Then you can train a meta-classifier using these probabilities and the ground-truth labels, and use the trained meta-classifier on your new data.
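A minimal sketch of the out-of-fold procedure in scikit-learn. Here two simple classifiers stand in for the two trained neural networks; the variable names, synthetic data, and the 5-fold choice are illustrative.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

X_train, y_train = make_classification(n_samples=300, random_state=0)
model_a = RandomForestClassifier(random_state=0)
model_b = LogisticRegression(max_iter=1000)

# Out-of-fold probabilities: each row is predicted by a model that never
# saw that row during training.
p_a = cross_val_predict(model_a, X_train, y_train, cv=5,
                        method="predict_proba")[:, 1]
p_b = cross_val_predict(model_b, X_train, y_train, cv=5,
                        method="predict_proba")[:, 1]

# Train the meta-classifier on the stacked out-of-fold probabilities.
meta = LogisticRegression().fit(np.column_stack([p_a, p_b]), y_train)

# At prediction time, feed it probabilities from the fully trained base
# models (here refit on all of X_train; in the question they already exist).
model_a.fit(X_train, y_train)
model_b.fit(X_train, y_train)
new_probs = np.column_stack([model_a.predict_proba(X_train[:5])[:, 1],
                             model_b.predict_proba(X_train[:5])[:, 1]])
print(meta.predict_proba(new_probs))
```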

labelling of dataset in machine learning

I have a question about some basic concepts of machine learning. The examples I've seen give only a brief overview. For training the system, a feature vector is given as input. In supervised learning, the dataset is labelled, and I'm confused about the labelling. For example, if I have to distinguish between two types of pictures, I provide a feature vector and, as the output label, 1 for type A and 2 for type B. But what if I want to extract a region of interest (ROI) from a dataset of images? How would I label my data to extract the ROI using an SVM? I hope I've been able to convey my confusion. Thanks in anticipation.
In supervised learning, such as SVMs, the dataset should be composed as follows:
<i-th feature vector><i-th label>
where i goes from 1 to the number of patterns (also called examples or observations) in your training set; each such tuple represents a single record that can be used to train the SVM classifier.
So you basically have a set composed of such tuples, and if you have just 2 labels (a binary classification problem) you can easily use an SVM. The SVM model is trained on the training set and the training labels, and once the training phase has finished you can use another set (called the validation set or test set), which is structured in the same way as the training set, to test the accuracy of your SVM.
In other words the SVM workflow should be structured as follows:
train the SVM using the training set and the training labels
predict the labels for the validation set using the model trained in the previous step
if you know the actual validation labels, you can match the predicted labels against them and check how many were correctly predicted. The ratio between the number of correctly predicted labels and the total number of labels in the validation set is a scalar in [0, 1] and is called the accuracy of your SVM model.
if you're interested in the ROI, you might want to check the trained SVM parameters (mainly the weights and bias) to reconstruct the separation hyperplane
It is also important to know that the training-set records should be correctly labelled a priori: if the training labels are not correct, the SVM will never be able to correctly predict the output for previously unseen patterns. You do not label your data according to the ROI you want to extract; the data must be correctly labelled beforehand. The SVM will have the entire set of type A pictures and the set of type B pictures and will learn the decision boundary that separates them. You do not have to trick the labels: if you do, you're not doing classification and/or machine learning and/or pattern recognition. You're basically tricking the results.
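Here is a minimal sketch of that workflow in scikit-learn, with a synthetic dataset standing in for the labelled feature vectors; all numbers are illustrative.

```python
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Toy stand-in for the (feature vector, label) tuples; labels are 0/1.
X, y = make_classification(n_samples=200, n_features=10, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

svm = SVC(kernel="linear").fit(X_train, y_train)  # train the SVM
y_pred = svm.predict(X_val)                       # predict validation labels
print(accuracy_score(y_val, y_pred))              # accuracy in [0, 1]

# For a linear kernel, the separating hyperplane is w.x + b = 0.
print(svm.coef_, svm.intercept_)                  # weights and bias
```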

percentage output in classification with a learning algorithm

I'm using Weka. I have a training set, and the class of the examples in the training set is boolean.
Given the training set, I want to predict the probability that a new input is true or false. I want to get a number between 0 and 1, not only 0 or 1.
How can I do that? I have seen that the prediction contains only the possible classes.
Thanks in advance.
You can only make the same kind of prediction with the learned classifier; it learns to make the predictions you train it to make. The kind of prediction you want sounds more like regression. That is, you don't want a strict classification, but a continuous value designating the membership probability.
The easiest way to achieve what you want is to replace the booleans in your training set with 0/1 values and learn a regression model. This will give you numbers, although not necessarily only between 0 and 1.
To get real probabilities, you would need to use a classifier that calculates probabilities (such as Naive Bayes) and write some custom code (using the Weka library) to retrieve them. See the javadoc of distributionForInstance, the method that gives you access to the class probabilities.
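For illustration, here is the analogous idea in scikit-learn rather than Weka: a Naive Bayes classifier's predict_proba plays the role of Weka's distributionForInstance, returning class-membership probabilities instead of hard labels. This is a sketch of the concept, not Weka code.

```python
from sklearn.datasets import make_classification
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=100, n_features=5, random_state=0)
nb = GaussianNB().fit(X, y)

print(nb.predict(X[:3]))        # hard 0/1 class predictions
print(nb.predict_proba(X[:3]))  # class probabilities in [0, 1]
```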
