TensorFlow model zoo?

One of the main advantages of Caffe for me was the possibility of doing transfer learning on freely distributed pretrained models.
Is there a place to get trained models from papers/competitions in TensorFlow format?
If not, is there a way to convert existing Caffe (or other) models into TensorFlow models?

You can likely use the caffe to tensorflow model converter to convert model zoo models. If you try it and report back, it would be great to know. There's a potential issue with converting maxpooling and padding, but it seems to work for many models.

As of now, the official TensorFlow GitHub page has a list of models. You can also use keras.applications, which provides pretrained models.
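For example, a minimal sketch of the keras.applications route (the particular architecture, input size and class count here are just placeholders, not a recommendation):

    # Minimal sketch: load a pretrained ImageNet model from keras.applications
    # and reuse it as a frozen feature extractor for transfer learning.
    import tensorflow as tf
    from tensorflow.keras.applications import MobileNetV2

    base = MobileNetV2(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
    base.trainable = False  # freeze the pretrained weights

    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(10, activation="softmax"),  # 10 classes is just an example
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])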

There are a few ways to do transfer learning with TensorFlow, some of which combine TF-Slim and custom code, but there is also a very nice collection of pretrained TensorFlow models, Tensornets, which contains almost all popular models along with their pretrained weights.

There are different ways to do transfer learning in TensorFlow.
The first option is to write your own save/load logic on top of checkpoints (see the sketch after this list). Be aware, however, that there are incompatibilities between TF 1.0 and TF 2.0 checkpoints.
The other option is to use a framework built on top of TF, for example:
TF Slim (TF Layers): TF-specific
MLmodels: a cross-platform model zoo
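For the first option, a minimal sketch with the TF 2.x tf.train.Checkpoint API (the model architecture and paths are placeholders):

    # Minimal sketch: saving and restoring weights with tf.train.Checkpoint (TF 2.x).
    import tensorflow as tf

    model = tf.keras.Sequential([
        tf.keras.layers.Dense(16, activation="relu", input_shape=(12,)),
        tf.keras.layers.Dense(1),
    ])
    ckpt = tf.train.Checkpoint(model=model)

    # ... train the model ...
    ckpt.save("./checkpoints/ckpt")  # write a checkpoint to disk

    # Later (or in another script), restore the weights before fine-tuning.
    ckpt.restore(tf.train.latest_checkpoint("./checkpoints"))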

Related

Evaluation of generative models like variational autoencoder

I hope everyone is doing well.
I need some help with generative models.
I'm working on a project where the main task is to build a binary classification model. The dataset contains 300,000 samples and 100 features, and there is an imbalance between the two classes: the majority class is much larger than the minority class.
To handle this problem, I'm using a VAE (variational autoencoder).
I started by training the VAE on the minority class, then used the decoder part of the VAE to generate new (fake) samples similar to the minority class, and concatenated this new data with the training set in order to obtain a balanced training set.
My question is: is there any way to evaluate generative models like VAEs, i.e. a way to know whether the generated data is similar to the real data?
I have read that there are metrics to evaluate generated data, like the Inception Score and the Fréchet Inception Distance, but I have only seen them used on image data.
Can I use them on my dataset too?
Thanks in advance.
I believe your data is not image data, since you say there are 100 features. What you can do is check the similarity between the synthesised samples and the original ones (those belonging to the minority class), and keep only the synthesised samples above a certain similarity threshold. Cosine similarity would be useful for this problem.
It would also be very informative to look at a scatter plot of the synthesised samples together with the original ones to see whether they are close to each other; t-SNE would be useful at this point.
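A rough sketch of both checks with scikit-learn; the 0.9 threshold, array names and placeholder data are assumptions you would replace with your own:

    # Sketch: filter synthesised samples by cosine similarity to the real minority
    # class, then visualise both sets with t-SNE. Threshold and data are illustrative.
    import numpy as np
    import matplotlib.pyplot as plt
    from sklearn.metrics.pairwise import cosine_similarity
    from sklearn.manifold import TSNE

    X_real = np.random.rand(200, 100)  # placeholder: real minority-class samples
    X_fake = np.random.rand(150, 100)  # placeholder: samples from the VAE decoder

    sim = cosine_similarity(X_fake, X_real)      # shape (n_fake, n_real)
    keep = sim.max(axis=1) > 0.9                 # each fake sample's closest real match
    X_fake_kept = X_fake[keep]

    # 2-D t-SNE embedding of real and kept synthetic samples
    emb = TSNE(n_components=2).fit_transform(np.vstack([X_real, X_fake_kept]))
    n = len(X_real)
    plt.scatter(emb[:n, 0], emb[:n, 1], s=5, label="real minority")
    plt.scatter(emb[n:, 0], emb[n:, 1], s=5, label="synthesised")
    plt.legend()
    plt.show()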

Using Keras to create a model that can generate new, similar data

I am working with Keras and experimenting with AI and Machine Learning. I have a few projects made already and now I'm looking to replicate a dataset. What direction do I go to learn this? What should I be looking up to begin learning about this model? I just need an expert to point me in the right direction.
To clarify; by replicating a dataset I mean I want to take a series of numbers with an easily distinguishable pattern and then have the AI generate new data that is similar.
There are several ways to generate new data similar to a current dataset, but the most prominent way nowadays is to use a Generative Adversarial Network (GAN). This works by pitting two models against one another. The generator model attempts to generate data, and the discriminator model attempts to tell the difference between real data and generated data. There are plenty of tutorials out there on how to do this, though most of them are probably based on image data.
If you want to generate labels as well, make a conditional GAN.
The only other common method for generating data is a Variational Autoencoder (VAE), but the generated data tend to be lower-quality than what a GAN can generate. I don't know if that holds true for non-image data, though.
You can also use a Conditional Variational Autoencoder (CVAE), which generates new data conditioned on a label.
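For orientation, a very small GAN for tabular data in Keras might be sketched as follows; the layer sizes, optimisers and training loop are illustrative assumptions, not a tuned recipe:

    # Rough sketch of a GAN for tabular data in Keras.
    import numpy as np
    import tensorflow as tf
    from tensorflow.keras import layers

    latent_dim, n_features = 32, 10

    generator = tf.keras.Sequential([
        layers.Dense(64, activation="relu", input_shape=(latent_dim,)),
        layers.Dense(n_features),
    ])
    discriminator = tf.keras.Sequential([
        layers.Dense(64, activation="relu", input_shape=(n_features,)),
        layers.Dense(1, activation="sigmoid"),
    ])
    discriminator.compile(optimizer="adam", loss="binary_crossentropy")

    # The combined model trains the generator to fool the (frozen) discriminator.
    discriminator.trainable = False
    gan = tf.keras.Sequential([generator, discriminator])
    gan.compile(optimizer="adam", loss="binary_crossentropy")

    X_real = np.random.rand(1000, n_features)  # placeholder for your real data

    for step in range(1000):
        z = np.random.normal(size=(64, latent_dim))
        X_fake = generator.predict(z, verbose=0)
        X_batch = X_real[np.random.randint(0, len(X_real), 64)]
        # Train the discriminator on real (label 1) vs generated (label 0) samples.
        discriminator.train_on_batch(X_batch, np.ones((64, 1)))
        discriminator.train_on_batch(X_fake, np.zeros((64, 1)))
        # Train the generator via the combined model, labelling fakes as real.
        gan.train_on_batch(np.random.normal(size=(64, latent_dim)), np.ones((64, 1)))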

How to create an incremental NER training model (appending to an existing model)?

I am training a customized Named Entity Recognition (NER) model using Stanford NLP, but the thing is I want to re-train the model.
Example:
Suppose I trained an xyz model and then test it on some text. If the model detects something wrong, then I (the end user) will correct it and want to re-train (append to) the model on the corrected text.
Stanford doesn't provide a re-training facility, so I shifted towards the spaCy library for Python, where I can retrain the model, i.e. append new entities to the existing model. But after re-training the model with spaCy, it overrides the existing knowledge (the existing training data in it) and only shows results related to the most recent training.
Consider: I trained a model on a TECHNOLOGY tag using 1000 records. After that, say I added one more entity, BOOK_NAME, to the existing trained model. If I then test the model, the spaCy model only detects BOOK_NAME in the text.
Please give a suggestion to tackle my problem statement.
Thanks in advance...!
I think it is a bit late to address this here. The issue you are facing is what is called the 'catastrophic forgetting' problem. You can get around it by mixing in examples of the existing entity types. For instance, spaCy's pretrained model predicts well on well-formed text like a BBC corpus: you can take such a corpus, run the pretrained model over it to create training examples, mix those examples with your new examples, and then train. You should then get better results. This was already mentioned in the spaCy issues.
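As a rough sketch of that pseudo-rehearsal idea with the spaCy 2.x API (the texts, the BOOK_NAME example and the hyper-parameters are placeholders; spaCy 3 expects Example objects instead of text/annotation pairs):

    # Sketch: mix model-generated "rehearsal" examples with new-label examples
    # so the existing entity types are not forgotten.
    import random
    import spacy

    nlp = spacy.load("en_core_web_sm")
    ner = nlp.get_pipe("ner")
    ner.add_label("BOOK_NAME")

    # 1) Rehearsal data: let the pretrained model annotate well-formed text.
    revision_texts = [
        "Apple is looking at buying a U.K. startup for one billion dollars.",
        "The BBC reported the story from London on Tuesday.",
    ]
    revision_data = []
    for doc in nlp.pipe(revision_texts):
        ents = [(e.start_char, e.end_char, e.label_) for e in doc.ents]
        revision_data.append((doc.text, {"entities": ents}))

    # 2) New examples for the new entity type.
    new_data = [("I am reading Deep Work this week.", {"entities": [(13, 22, "BOOK_NAME")]})]

    # 3) Train on the mix of old and new examples.
    train_data = revision_data + new_data
    optimizer = nlp.resume_training()
    for _ in range(10):
        random.shuffle(train_data)
        for text, annotations in train_data:
            nlp.update([text], [annotations], sgd=optimizer, drop=0.35)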

Image classification, narrow domain with custom labels

Let's suppose I would like to classify motorbikes by model.
There are a couple of hundred motorbike models I'm interested in.
I do have tens, sometimes hundreds of pictures of each motorbike model.
Can you please point me to the practical example that demonstrates how to train model on your data and then use it to classify images? It needs to be a deep learning model, not simple logistic regression.
I'm not sure about it, but it seems like I can't use a pre-trained neural net because it has been trained on a wide range of objects like cats, humans, cars etc. It may not be very good at distinguishing the motorbike nuances I'm interested in.
I found a couple of such examples (TensorFlow has one), but sadly, all of them use a pre-trained model. None of them showed how to train it on your own dataset.
In cases like yours you use either transfer learning or fine-tuning. If you have more than a thousand images of motorbikes I would use fine-tuning, and if you have fewer, transfer learning.
Fine-tuning means taking a pre-trained model and replacing the classifier part; the new classifier part, and maybe the last 1-2 layers of the trained model, are then trained on your dataset.
Transfer learning means using a pre-trained model and letting it output features for an input image; you then train a new classifier on those features, maybe an SVM or a logistic regression.
An example for this can be seen here: https://github.com/cpra/dlvc2016/blob/master/lectures/lecture10.pdf. slide 33.
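A small sketch of the feature-extraction route described above, using a keras.applications network as the fixed feature extractor and an SVM as the new classifier (the architecture and placeholder data are assumptions):

    # Sketch: a pretrained CNN produces features, a separate SVM classifies them.
    import numpy as np
    import tensorflow as tf
    from sklearn.svm import SVC

    base = tf.keras.applications.ResNet50(weights="imagenet", include_top=False, pooling="avg")

    X_train = np.random.rand(32, 224, 224, 3)   # placeholder for your motorbike images
    y_train = np.random.randint(0, 5, size=32)  # placeholder for the motorbike model labels

    # Pretrained network as a fixed feature extractor.
    feats = base.predict(tf.keras.applications.resnet50.preprocess_input(X_train), verbose=0)

    clf = SVC(kernel="linear").fit(feats, y_train)  # new classifier trained on the features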
This paper Quick, Draw! Doodle Recognition from a kaggle challenge may be similar enough to what you are doing. The code is on github. You may need some data augmentation if you only have a few hundred images for each category.
What you want is pretty easy. Follow the Darknet YOLO implementation.
Instructions: https://pjreddie.com/darknet/yolo/
Code: https://github.com/pjreddie/darknet
Training YOLO on COCO
You can train YOLO from scratch if you want to play with different training regimes, hyper-parameters, or datasets. Here's how to get it working on the COCO dataset.
Get The COCO Data
To train YOLO you will need all of the COCO data and labels. The script scripts/get_coco_dataset.sh will do this for you. Figure out where you want to put the COCO data and download it, for example:
cp scripts/get_coco_dataset.sh data
cd data
bash get_coco_dataset.sh
Add your own data inside and make sure it is in the same format as the testing samples.
Now you should have all the data and the labels generated for Darknet.
Then call the training script with the pre-trained weights.
Keep in mind that training only on your motorcycle data may not result in good estimates; the results can come out biased, as I read somewhere before.
The rest is all inside the link. Good luck.

Using TensorFlow to do regression on a 13 column data set

I would like to do regression on a 13-column data set. The second column depends on the other 12 columns. All columns contain real-number values.
How can I create a neural network using TensorFlow to do the regression? I have tried going through this tutorial but it is too advanced for me.
Thanks in advance for an MWE.
In that tutorial they are using logistic regression, which is a linear binary classifier. They use the class tf.contrib.learn.LinearClassifier as their model.
If you use the class tf.contrib.learn.LinearRegressor instead, you can do linear regression rather than classification.
On that webpage you have tutorials for other models. If you want to create a neural network there are other tutorials in the left menu, for example:
https://www.tensorflow.org/tutorials/mnist/beginners/
In this repository you have python notebooks with the full code of many different neural networks:
https://github.com/tensorflow/tensorflow/tree/master/tensorflow/examples/udacity
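If you prefer to avoid the (since deprecated) tf.contrib estimators, a minimal regression network with the Keras API could be sketched as follows; the layer sizes and training settings are placeholders:

    # Sketch of a small regression network in Keras for a 13-column dataset:
    # 12 feature columns predict the remaining (second) column.
    import numpy as np
    import tensorflow as tf

    data = np.random.rand(500, 13)     # placeholder for your 13-column data
    y = data[:, 1]                     # the dependent second column
    X = np.delete(data, 1, axis=1)     # the other 12 columns as features

    model = tf.keras.Sequential([
        tf.keras.layers.Dense(32, activation="relu", input_shape=(12,)),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(1),      # single linear output for regression
    ])
    model.compile(optimizer="adam", loss="mse", metrics=["mae"])
    model.fit(X, y, epochs=20, batch_size=32, validation_split=0.2, verbose=0)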
