Add data to retrained Inception net - machine-learning

I did a few experiments with Google's Inception-v3 net from the tutorial (https://www.tensorflow.org/versions/r0.9/how_tos/image_retraining/index.html).
If I have a large enough data set, it works fine. But what about when the data set is relatively small and keeps growing (roughly 10% a day)?
Is there a way to add more data points to the retrained net?
Retraining the whole model each time we get a new data point doesn't seem efficient.

You can think of each day's data as one large batch. TensorFlow trains with SGD, which naturally supports this kind of incremental input.
You can just save your model to disk after you finish each day's training and load yesterday's checkpoint before each day's training begins.
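For example, here is a minimal sketch of that daily save/restore loop using the TF 1.x-style tf.train.Saver API (the checkpoint paths and the stand-in variable are illustrative assumptions, not from the tutorial):

```python
import os
import tensorflow as tf

# Stand-in for the retrained final layer's weights (assumption for the sketch).
w = tf.Variable(tf.zeros([2048, 5]), name="retrain_head")
saver = tf.train.Saver()

os.makedirs("./checkpoints", exist_ok=True)
with tf.Session() as sess:
    ckpt = tf.train.latest_checkpoint("./checkpoints")
    if ckpt:
        saver.restore(sess, ckpt)  # resume from yesterday's model
    else:
        sess.run(tf.global_variables_initializer())
    # ... run today's training steps on the new batch of data ...
    saver.save(sess, "./checkpoints/model")  # persist for tomorrow
```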

There are checkpoints in TensorFlow if you want to pause and resume training. Another option is to train different categories on different layers; it's possible to use the outputs from image retraining as inputs to a further classifier. Better hardware is also worth considering.

Related

Why does Google Vision take so much time to train the model?

I have been learning GCP and how to use Google Vision to train, test, and evaluate a model on an image data set. I have been testing the model with just 60 images, split as shown in the image. The estimated time to train has been 60-90 min. I have been wondering what happens in the backend that takes so much time to train.
As I understand it, if you do not provide enough images, the algorithm will tend to converge to a heuristic solution. This may take more time due to the low statistical significance of the ingested training data, especially per label, given your warning...

Do neural networks get slower at adapting after a lot of training?

I am a beginner in the neural network field and I want to understand a certain statement. A friend said that a neural network gets slower to adapt after you fit a lot of data into it.
Right now, I have just done the Coursera ML course from Andrew Ng. There, I implemented backpropagation. I thought it just adapts the model toward the expected output using different kinds of calculations. It did not look like any training history was used to adapt the model: only the current state of the neurons was checked, and their weights were adapted backwards in combination with regularisation.
Is my assumption correct, or am I wrong? Are there libraries that use historical data in a way that could result in a slowly adapting model after a certain amount of training?
I want to use a simple neural network for reinforcement learning, and I want to get an idea of whether I need to reset my model if the target environment changes for some reason. Otherwise my model would become slower and slower to adapt over time.
Thanks for any links and explanations in advance!
As you said, neural networks adapt by modifying their weights during the backpropagation step. Modifying these weights does not get slower as training goes on, since the number of operations needed to update them always remains the same. The cost of running an example through your model also stays constant, so the network does not slow down according to how many examples it was fed during training.
However, you can decide to change your learning rate during training (generally decreasing it as the epochs go on). Depending on how the learning rate evolves, the weights will be modified in a different manner, generally resulting in smaller changes each epoch.
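To make the learning-rate point concrete, here is a small illustration (assuming tf.keras; the answer itself is framework-agnostic) of a decaying schedule: the weight updates shrink over time even though each backpropagation step costs the same.

```python
import tensorflow as tf

# Multiply the learning rate by 0.9 every 1000 steps.
schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=0.1,
    decay_steps=1000,
    decay_rate=0.9,
)
optimizer = tf.keras.optimizers.SGD(learning_rate=schedule)

for step in [0, 1000, 5000]:
    print(step, float(schedule(step)))  # 0.1, 0.09, ~0.059
```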

Stratified sampling on images

I have a data science problem with around 70k images already labelled across 20 different categories. Some categories have many images whereas others have fewer. This results in an imbalanced data set and poor results (currently 68% accuracy). After some research I found out that I need to sample the images (image mining?) instead of selecting all of them. One such approach could be stratified sampling. The question is how to select images to optimise the training of the model. Is there any command line tool or open source code that I could use on 70k images?
You have imbalanced data, so to deal with that you can simply use a library called imbalanced-learn.
This library was originally focused on implementing SMOTE, but it later added under-sampling and over-sampling techniques as well.
It is also compatible with scikit-learn.
Using this approach will resample the data so that every class has a nearly equal number of instances.
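As a short sketch (the file names and labels below are hypothetical), RandomOverSampler simply duplicates rows of the minority classes, so it even works on arrays of image paths rather than pixel data:

```python
import numpy as np
from imblearn.over_sampling import RandomOverSampler

paths = np.array([["cat_01.jpg"], ["cat_02.jpg"], ["dog_01.jpg"]])
labels = np.array(["cat", "cat", "dog"])

ros = RandomOverSampler(random_state=0)
paths_res, labels_res = ros.fit_resample(paths, labels)
print(labels_res)  # classes now have (nearly) equal counts
```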
Second option:
You can simply pick an equal number of images for each class to form the training data, as sketched below. This may not boost your accuracy due to the lack of proper test data, but your model will surely become more robust and generalized.
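A tiny sketch of that second option (the directory layout, one folder per class, is an assumption):

```python
import os
import random

def balanced_sample(root, per_class, seed=0):
    """Pick the same number of images from every class directory."""
    random.seed(seed)
    sample = {}
    for cls in os.listdir(root):
        files = os.listdir(os.path.join(root, cls))
        sample[cls] = random.sample(files, min(per_class, len(files)))
    return sample
```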

How do you add new categories and training to a pretrained Inception v3 model in TensorFlow?

I'm trying to take a pre-trained model like Inception v3 (trained on the 2012 ImageNet data set) and extend it with several missing categories.
I have TensorFlow built from source with CUDA on Ubuntu 14.04, and examples like transfer learning on flowers work great. However, the flowers example strips away the final layer and removes all 1,000 existing categories, which means it can now identify 5 species of flowers but can no longer identify pandas, for example. https://www.tensorflow.org/versions/r0.8/how_tos/image_retraining/index.html
How can I add the 5 flower categories to the existing 1,000 categories from ImageNet (and add training for those 5 new flower categories) so that I have 1,005 categories that a test image can be classified as? In other words, be able to identify both those pandas and sunflowers?
I understand one option would be to download the entire ImageNet training set and the flowers example set and to train from scratch, but given my current computing power, it would take a very long time, and wouldn't allow me to add, say, 100 more categories down the line.
One idea I had was to set the parameter fine_tune to false when retraining with the 5 flower categories so that the final layer is not stripped: https://github.com/tensorflow/models/blob/master/inception/README.md#how-to-retrain-a-trained-model-on-the-flowers-data , but I'm not sure how to proceed, or whether that would even result in a valid model with 1,005 categories. Thanks for your thoughts.
After much learning and working in deep learning professionally for a few years now, here is a more complete answer:
The best way to add categories to an existing model (e.g. Inception trained on the ImageNet LSVRC 1000-class data set) is to perform transfer learning on a pre-trained model.
If you are just trying to adapt the model to your own data set (e.g. 100 different kinds of automobiles), simply perform retraining/fine-tuning by following the myriad online tutorials for transfer learning, including the official one for TensorFlow.
While the resulting model can potentially perform well, keep in mind that the tutorial's classifier code is highly unoptimized (perhaps intentionally), and you can increase performance several times over when deploying to production or just by improving their code.
However, if you're trying to build a general-purpose classifier that includes the default LSVRC data set (1,000 categories of everyday images) and expands it with your own additional categories, you'll need access to the existing 1,000 LSVRC categories of images and will have to append your own data set to them. You can download the ImageNet data set online, but access is getting spottier as time rolls on. In many cases, the images are also highly outdated (check out the images for computers or phones for a trip down memory lane).
Once you have the LSVRC data set, perform transfer learning as above, but include the 1,000 default categories along with your own images. For your own images, a minimum of 100 appropriate images per category is generally recommended (the more the better), and you can get better results if you enable distortions. Be warned that distortions dramatically increase retraining time, especially without a GPU, because the bottleneck files cannot be reused for each distortion; personally I think this is pretty lame, since there's no reason distortions couldn't also be cached as bottleneck files, but that's a different discussion, and you can add that to your code manually.
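As a hedged sketch of what that looks like today (assuming a tf.keras setup rather than the original TF 0.x retrain script; combined_dataset is a placeholder for the merged LSVRC-plus-custom images), you would replace the 1,000-way head with a 1,005-way one and train it on the combined data:

```python
import tensorflow as tf

# Frozen InceptionV3 base as the feature extractor for transfer learning.
base = tf.keras.applications.InceptionV3(
    include_top=False, pooling="avg", weights="imagenet")
base.trainable = False

num_classes = 1005  # 1,000 LSVRC categories + 5 new flower categories
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(combined_dataset, epochs=...)  # placeholder for the merged data
```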
Using these methods and incorporating error analysis, we've trained general purpose classifiers on 4000+ categories to state-of-the-art accuracy and deployed them on tens of millions of images. We've since moved on to proprietary model design to overcome existing model limitations, but transfer learning is a highly legitimate way to get good results and has even made its way to natural language processing via BERT and other designs.
Hopefully, this helps.
Unfortunately, you cannot add categories to an existing graph; you'll basically have to save a checkpoint and train that graph from that checkpoint onward.

How to train a classifier using video datasets

If I have a video data set of a specific action, how could I use it to train a classifier that could later be used to classify this action?
The question is very generic. In general, there is no foolproof way of training a classifier that will work for everything; it highly depends on the data you are working with.
Here is the 'generic' pipeline (a minimal code sketch follows the list):
extract features from the video
label your features (positive for the action you are looking for; negative otherwise)
split your data into 2 (or 3) sets: one for training, one for testing, and optionally one for validation
train a classifier on the labeled examples (e.g. SVM, Neural Network, Nearest Neighbor ...)
validate the results on the validation data, if that is appropriate for the algorithm
test on data you haven't used for training.
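Here is a minimal sketch of that pipeline using scikit-learn (an assumption; any ML toolkit works). The features are random stand-ins for whatever you extract from the video, e.g. HOG or optical-flow statistics:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))    # 200 clips, 64-dim feature vectors
y = rng.integers(0, 2, size=200)  # 1 = action present, 0 = absent

# Hold out a test set that is never touched during training.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

clf = SVC(kernel="rbf").fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```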
You can start with some machine learning tools here: http://www.cs.waikato.ac.nz/ml/weka/
Make sure you never touch the test data for any purpose other than testing.
Good luck
Almost 10 years later, here's an updated answer.
Set up a camera and collect raw video data.
Save it somewhere in the form of single frames. Do this yourself locally, use a cloud bucket, or use a service like Sieve API. Helpful repo linked here.
Export from Sieve or the cloud bucket to get the data labeled. Do this yourself or use a service like Scale Rapid.
Split your dataset into train, test, and validation.
Train a classifier on the labeled samples. Use transfer learning over some existing model and fine-tune just the last few layers.
Run your model over the test set after each training epoch and save the checkpoint with the best test-set performance (a sketch follows this list).
Evaluate your model at the end using the validation set.
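For the save-the-best-checkpoint step, here is a hedged sketch using tf.keras (an assumption; the answer doesn't prescribe a framework):

```python
import tensorflow as tf

# Overwrite best_model.h5 only when the monitored held-out metric improves.
checkpoint = tf.keras.callbacks.ModelCheckpoint(
    "best_model.h5",
    monitor="val_accuracy",
    save_best_only=True,
)
# model.fit(train_ds, validation_data=test_ds, epochs=10,
#           callbacks=[checkpoint])  # datasets are placeholders
```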
There are many repos that can help you get started: https://github.com/weiaicunzai/awesome-image-classification
The two things that help ensure the best results are 1. high-quality labeled data and 2. a diverse, curated dataset. That's what Sieve can help with!