MobileNet Pre-Trained Model - Classification - iOS

I am currently working with a pre-trained MobileNet model that classifies images from a set of 1000 categories. For the purpose of my iOS application, I only need it to recognize/classify one type of object in the scene. How can I train the model so that it only classifies the one object I need, but does it extremely well?
I am new to machine learning and unfamiliar with transfer learning techniques. Would this type of training reduce the model size and make it more efficient at recognizing the one object I need? If so, what resources can teach me how to continue training this pre-trained model for my objective?

Briefly, you want to turn your 1000-way classifier into a binary classifier.
The answer below assumes you have access to the original data, and that you know how to train the original model (that is, you have access to the training script). Here goes:
Assuming you're only interested in a single category C, you want to first map all instances (x, C) of the data to (x, 1) and all other instances (x, not_C) to (x, 0), then train a model on the resulting data (or, continue training the pre-trained model, if the training script also accepts a starting point for the model).
The model would then lose the ability to discern between non-C classes, and hopefully become better at discriminating C vs non-C instances.
Note: A less hacky approach would be to actually restrict the model to output only 0 or 1 and change the objective to a binary softmax. However, that would require some manipulation of the model's architecture, which you can do without.
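Not part of the original answer, but here is a minimal sketch of the relabel-and-fine-tune idea using Keras' MobileNet. The frozen base, the sigmoid head, the learning rate, and the placeholder data (including the made-up TARGET_CLASS id) are all assumptions you would replace with your own setup.

```python
# Hedged sketch: fine-tune a pre-trained MobileNet as a binary "C vs not-C" classifier.
# All hyperparameters and the placeholder data below are illustrative assumptions.
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications import MobileNet
from tensorflow.keras import layers, models

# Pre-trained feature extractor without the 1000-way ImageNet head.
base = MobileNet(weights="imagenet", include_top=False, pooling="avg",
                 input_shape=(224, 224, 3))
base.trainable = False  # start by training only the new head

# New binary head: sigmoid output for C (1) vs not-C (0).
model = models.Sequential([
    base,
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="binary_crossentropy", metrics=["accuracy"])

# Placeholder data for illustration only; real images should go through
# tf.keras.applications.mobilenet.preprocess_input first.
rng = np.random.default_rng(0)
images = rng.random((16, 224, 224, 3)).astype("float32")
class_ids = rng.integers(0, 1000, size=16)

# Relabel: 1 for your target category C, 0 for everything else.
TARGET_CLASS = 207  # hypothetical ImageNet class id of your category C
y_binary = (class_ids == TARGET_CLASS).astype("float32")
model.fit(images, y_binary, epochs=1, batch_size=8, validation_split=0.25)
```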

Related

How to compare two deep learning models' performance?

I'm new to deep learning, so if the question doesn't make sense, please correct me.
In traditional machine learning I know how to compare models and choose one of them as the best using the metrics I choose.
However, in deep learning each model is built with different layers, so how can I control the variables to determine fairly which model is best? Or do people usually not compare models this way?
For example, if I have sequential data, I can use both a CNN and an LSTM model. Should I compare a model with only one CNN layer against one with a single LSTM layer, and only afterwards add more layers or tune the models?
Or can someone just tell me the process for comparing and choosing the best deep learning model using the chosen metrics?
For sequential data, as you mention:
The number of layers has little to do with comparing two models. Beyond a certain number of layers, accuracy starts to decrease because of overfitting.
Comparing one CNN layer with one LSTM layer is not the right approach.
Instead, compare the models on metrics such as accuracy, precision, recall, and F1-score, depending on your application's objective (a small sketch follows below).
For example, if you are working on language translation data, an LSTM would be the better choice because it overcomes the vanishing-gradient problem.
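Not from the original answer, but as a minimal sketch of comparing two already-trained models on the same held-out labels with scikit-learn; the ground truth and predictions below are placeholder arrays standing in for your own test split and model outputs.

```python
# Hedged sketch: compare two models' predictions on the same held-out set
# using the metrics mentioned in the answer.
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# Placeholder ground truth and predictions (illustration only).
y_test = np.array([0, 1, 1, 0, 1, 0, 1, 1])
pred_cnn = np.array([0, 1, 0, 0, 1, 0, 1, 1])
pred_lstm = np.array([0, 1, 1, 0, 0, 0, 1, 1])

def report(name, y_true, y_pred):
    """Print accuracy, precision, recall, and F1 for one set of predictions."""
    acc = accuracy_score(y_true, y_pred)
    prec, rec, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, average="macro", zero_division=0)
    print(f"{name}: accuracy={acc:.3f} precision={prec:.3f} "
          f"recall={rec:.3f} f1={f1:.3f}")

# Evaluating both models on the same held-out labels keeps the comparison fair.
report("CNN", y_test, pred_cnn)
report("LSTM", y_test, pred_lstm)
```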

How to correctly combine my classifiers?

I have to solve a 2-class classification problem.
I have 2 classifiers that output probabilities. Both of them are neural networks with different architectures.
Those 2 classifiers are trained and saved into 2 files.
Now I want to build a meta-classifier that will take the probabilities as input and learn the weights of those 2 classifiers.
So it will automatically decide how much I should "trust" each of my classifiers.
This model is described here:
http://rasbt.github.io/mlxtend/user_guide/classifier/StackingClassifier/#stackingclassifier
I plan to use the mlxtend library, but it seems that StackingClassifier refits the models.
I do not want to refit them because it takes a huge amount of time.
On the other hand, I understand that refitting is necessary to "coordinate" the work of each classifier and "tune" the whole system.
What should I do in this situation?
I won't talk about mlxtend because I haven't worked with it, but I'll tell you the general idea.
You don't have to refit these models to the full training set, but you do have to refit them to parts of it so you can create out-of-fold predictions.
Specifically, split your training data into a few pieces (usually 3 to 10). Keep one piece (i.e. fold) as validation data and train both models on the other folds. Then predict the probabilities for the validation data using both models. Repeat the procedure, treating each fold as the validation set once. In the end, you have predicted probabilities for every data point in the training set.
Then you can train a meta-classifier on these probabilities and the ground-truth labels, and use the trained meta-classifier on your new data (see the sketch below).
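This is not from the original answer, just a minimal sketch of the out-of-fold procedure with scikit-learn; the two simple base models and the logistic-regression meta-classifier are stand-ins for the question's saved neural networks.

```python
# Hedged sketch: build out-of-fold probabilities for two base classifiers and
# train a meta-classifier on them. The base models are simple stand-ins for
# the saved neural networks described in the question.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
base_models = [DecisionTreeClassifier(random_state=0), GaussianNB()]

# One column of out-of-fold probabilities per base model.
oof = np.zeros((len(y), len(base_models)))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for train_idx, val_idx in cv.split(X, y):
    for j, model in enumerate(base_models):
        model.fit(X[train_idx], y[train_idx])  # fit on the other folds
        oof[val_idx, j] = model.predict_proba(X[val_idx])[:, 1]  # held-out fold

# The meta-classifier learns how much to "trust" each base model.
meta = LogisticRegression().fit(oof, y)
print("meta-classifier weights:", meta.coef_)
```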

Caffe's way of representing negative examples on benchmark dataset for binary classification

I would like to know how to define or represent a negative training set if I wanted to train a binary classifier from a pre-trained model, say AlexNet on the ILSVRC12 (ImageNet) dataset. What I am currently thinking of is to take one of the classes which is not related as the negative training set, and the one which is related as the positive one. Is there a better, more elegant way?
The CNNs trained on the ILSVRC data set are already discriminating among 1000 classes of images. Yes, you can use one of those topologies to train a binary classifier, but I suggest that you start with an untrained model and run it through your two chosen classes. If you start with a trained model, you have to unlearn a lot, and your result is still trying to discriminate among 1000 classes: that last FC layer is going to give you trouble.
There are ways to work around the 1000-class problem. If your application already overlaps one or more of the trained classes, then simply add a layer that maps those classes to label "1" and all the others to label "0".
If you're insistent on retaining the trained kernels, then try replacing the final FC layer (1000) with a 2-class FC layer. Then choose your two classes (applicable images vs everything else) and run your training.
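The question is about Caffe, which defines models in prototxt rather than Python; purely as an illustration of the idea of swapping the 1000-way FC head for a 2-way one, here is a hedged PyTorch sketch. The torchvision AlexNet, its classifier layer index, and the training details are assumptions about that library, not the asker's Caffe setup.

```python
# Hedged sketch (PyTorch, not Caffe): keep AlexNet's trained convolutional
# kernels but replace the 1000-way FC head with a 2-class head, as the
# answer suggests. Layer indices refer to torchvision's AlexNet definition.
import torch
import torch.nn as nn
from torchvision import models

model = models.alexnet(pretrained=True)  # newer torchvision: use the weights= argument

# Optionally freeze the convolutional feature extractor so only the head trains.
for p in model.features.parameters():
    p.requires_grad = False

# The last element of model.classifier is the Linear(4096, 1000) ImageNet head;
# swap it for a 2-class layer (applicable images vs everything else).
model.classifier[6] = nn.Linear(4096, 2)

# Train only the parameters that still require gradients, with a softmax loss.
optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

# Dummy batch just to show the shapes; real training would loop over your data.
x = torch.randn(4, 3, 224, 224)
y = torch.tensor([0, 1, 0, 1])
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
```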

Multiclass classification growing number of classes

I am building an intent recognition system using multiclass classification with an SVM.
Currently I only have a small number of classes, limited by the training data. However, in the future I may get data with new classes. I can, of course, put all the data together and retrain the model, but that is time-consuming and inefficient.
My current idea is to do one-against-one classification at the beginning, and when a new class comes in, just train it against all the existing classes, which gives n new classifiers. I am wondering whether there are better methods for this. Thanks!
The most efficient approach would be to focus on one-class classifiers; then you only need to add one new model to the ensemble. Just to compare:
Let us assume that we have K classes and you get 1 new class plus P new points from it, your whole dataset consists of N points (for simplicity, equally distributed among the classes), your training algorithm's complexity is f(N), and, if your classifier supports incremental learning, its incremental complexity is g(P, N).
OVO (one vs one) - to get exact results you need to train K new classifiers, each on about N/K data points, leading to O(K f(P + N/K)); there is no place to use incremental training.
OVA (one vs all) - to get exact results you retrain all classifiers; done in batch fashion this costs O(K f(N + P)), worse than the above. However, if you can train incrementally you only need O(K g(P, N)), which might be better (depending on the classifier).
One-class ensemble - it might seem a bit weird, but Naive Bayes, for example, can be seen as such an approach: you have a generative model of each class-conditional distribution, so the model for each class is independent of the remaining ones. The cost of adding a class is therefore O(f(P)).
This list is obviously not exhaustive, but it should give you a general idea of what to analyze; a small sketch of the one-class/generative option follows.
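Not from the original answer, just a minimal sketch of the generative "one model per class" idea: each class gets an independent density, so adding a new class only means fitting one more model. The diagonal-Gaussian model, the uniform class prior, and the toy intent labels are simplifying assumptions.

```python
# Hedged sketch: a Naive-Bayes-style ensemble where each class is modelled by
# an independent diagonal Gaussian. Adding a new class only requires fitting
# one new per-class model; the existing ones are untouched.
import numpy as np

class ClassDensity:
    """Diagonal-Gaussian model of a single class-conditional distribution."""
    def fit(self, X):
        self.mean = X.mean(axis=0)
        self.var = X.var(axis=0) + 1e-6  # small floor for numerical stability
        return self

    def log_prob(self, X):
        return -0.5 * (np.log(2 * np.pi * self.var)
                       + (X - self.mean) ** 2 / self.var).sum(axis=1)

rng = np.random.default_rng(0)
densities = {}  # label -> fitted ClassDensity

# Initial classes (toy feature vectors standing in for intent features).
for label, center in [("greet", 0.0), ("bye", 3.0)]:
    densities[label] = ClassDensity().fit(center + rng.normal(size=(100, 5)))

# A new class arrives: fit ONE new model, nothing else is retrained.
densities["order_pizza"] = ClassDensity().fit(6.0 + rng.normal(size=(40, 5)))

# Predict by picking the class whose density scores the point highest
# (uniform class prior assumed for simplicity).
x = 6.0 + rng.normal(size=(1, 5))
print(max(densities, key=lambda lbl: densities[lbl].log_prob(x)[0]))
```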

Model selection with dropout training neural network

I've been studying neural networks for a bit and recently learned about the dropout training algorithm. There are excellent papers out there to understand how it works, including the ones from the authors.
So I built a neural network with dropout training (it was fairly easy), but I'm a bit confused about how to perform model selection. From what I understand, it looks like dropout is a method to be used when training the final model obtained through model selection.
As for the test part, papers always talk about using the complete network with halved weights, but they do not mention how to use it in the training/validation part (at least the ones I read).
I was thinking about using the network without dropout for the model selection part. Say that makes me find that the net performs well with N neurons. Then, for the final training (the one I use to train the network for the test part) I use 2N neurons with dropout probability p=0.5. That assures me to have exactly N neurons active on average, thus using the network at the right capacity most of the time.
Is this a correct approach?
By the way, I'm aware of the fact that dropout might not be the best choice with small datasets. The project I'm working on has academic purposes, so it's not really needed that I use the best model for the data, as long as I stick with machine learning good practices.
First of all, model selection and the training of a particular model are completely different issues. For model selection, you would usually need data that is completely independent of both the training set used to build the model and the test set used to estimate its performance. So if you're doing, for example, cross-validation, you would need an inner cross-validation to do the model selection (train and compare the candidate models) and an outer cross-validation to estimate the performance of the selected model.
To see why, consider the following thought experiment (shamelessly stolen from this paper). You have a model that makes a completely random prediction. It has a number of parameters that you can set, but they have no effect. If you try different parameter settings long enough, you'll eventually get a model that performs better than all the others simply because you're sampling from a random distribution. If you're using the same data for all of these models, this is the model you will choose. If you have a separate test set, it will quickly tell you that there is no real effect, because the parameter setting that achieved good results during the model-building phase performs no better on the separate set.
Now, back to neural networks with dropout. You didn't refer to any particular paper; I'm assuming you mean Srivastava et al., "Dropout: A Simple Way to Prevent Neural Networks from Overfitting". I'm not an expert on the subject, but the method seems to me similar to what's used in random forests or bagging: it mitigates the flaws an individual learner may exhibit by applying it repeatedly in slightly different contexts. If I understood the method correctly, essentially what you end up with is an average over several possible models, very similar to random forests.
This is a way to make an individual model better, but not for model selection. The dropout is a way of adjusting the learned weights for a single neural network model.
To do model selection on this, you would need to train and test neural networks with different parameters and then evaluate those on completely different sets of data, as described in the paper I've referenced above.
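As an illustration only (not from the answer), here is a minimal nested cross-validation sketch with scikit-learn. The SVM and its small parameter grid stand in for "neural networks with different parameters", since wiring a dropout network into this loop depends on your framework.

```python
# Hedged sketch: nested cross-validation. The inner loop (GridSearchCV) does the
# model selection; the outer loop estimates performance on data the selection
# never saw. An SVM stands in for the dropout networks in the question.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

# Inner CV: model selection over a small hyperparameter grid.
inner = GridSearchCV(SVC(), param_grid={"C": [0.1, 1, 10]}, cv=3)

# Outer CV: unbiased estimate of the whole select-then-train procedure.
scores = cross_val_score(inner, X, y, cv=5)
print("nested CV accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))
```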
