I've used the built-in TensorFlow tools to fine-tune the last layers of the InceptionV3 model to classify items in a custom dataset, following this tutorial. This generates a set of bottleneck files and a TensorFlow graph (*.pb file).
I'd like to import the *.pb TF graph into Keras, much like you would with an *.hdf5 file that contains the weights of a model. The reason is that there are some tools written in Keras I would like to leverage while using this model.
Is this possible?
As of July 2017, Keras can only import Keras models, not raw TF graphs. This is because Keras keeps some metadata that is not itself stored as part of the TF graph and cannot easily be reconstructed.
I am using TensorFlow 2.4.1. I want to split my data into training, test, and validation sets. I can use sklearn.model_selection for splitting my data, but I want to know if there is a similar API in TensorFlow.
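For reference, the sklearn-based split mentioned above could look like this (a minimal sketch; the arrays and split ratios are placeholders):

import numpy as np
from sklearn.model_selection import train_test_split

# Placeholder data standing in for the real features and labels.
X = np.random.rand(1000, 10)
y = np.random.randint(0, 2, size=1000)

# First carve off a test set, then split the remainder into train/validation.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.25, random_state=42)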
There are many supervised classifier algorithms available in scikit-learn, but I couldn't find any information about their scalability with regard to large datasets. I know that, for instance, support vector machines don't behave well with huge datasets, but what about the others?
Which supervised/semi-supervised classifier algorithms are most suitable for large datasets?
If you are specifically looking for classifiers in sklearn, you can have a look at this link: Scaling Strategies for large datasets.
Generally, the classifiers do incremental learning on your dataset by processing it in mini-batches. Here are some links for reference:
Incremental Learning links
Advanced ML lecture on Incremental Learning
ML on streaming data
Incremental Learning
Microsoft paper on Incremental Learning
You can have a look at these classifiers in scikit-learn for more info (a minimal partial_fit sketch follows this list):
SGD Classifier
Passive Aggressive Classifier
Multinomial Naive Bayes Incremental Learning
Bernoulli Naive Bayes
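For example, out-of-core (incremental) training with SGDClassifier via partial_fit might look like this (a minimal sketch; the mini-batch source is synthetic and stands in for whatever chunked or streaming loader you use):

import numpy as np
from sklearn.linear_model import SGDClassifier

clf = SGDClassifier()           # linear model trained with stochastic gradient descent
classes = np.array([0, 1])      # all classes must be declared on the first partial_fit call

# Pretend each iteration loads one mini-batch from disk or a stream.
for _ in range(100):
    X_batch = np.random.rand(256, 20)
    y_batch = np.random.randint(0, 2, size=256)
    clf.partial_fit(X_batch, y_batch, classes=classes)

print(clf.predict(np.random.rand(5, 20)))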
If your data arrives as a stream, you can have a look at Apache Spark Streaming and jump to MLlib in Apache Spark for more info.
You can also have a look at FeatureHasher for large-scale feature hashing in sklearn.
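A minimal FeatureHasher sketch (the feature dicts are placeholders); the point is that memory stays bounded no matter how large the vocabulary grows:

from sklearn.feature_extraction import FeatureHasher

# Hash dict-like features into a fixed-size sparse matrix.
hasher = FeatureHasher(n_features=2**18, input_type="dict")
X = hasher.transform([{"word": 1, "another": 2}, {"word": 3}])
print(X.shape)  # (2, 262144), a scipy sparse matrix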
By huge datasets, do you mean something like the default "iris" dataset?
It depends on what you want to do with those algorithms, training and fitting, for example.
I'm going to write down the ones I use for big datasets, and they work fine.
from sklearn.model_selection import train_test_split
from sklearn import datasets, svm
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import mean_squared_error
from sklearn.linear_model import LinearRegression
from sklearn.linear_model import SGDRegressor
But of course you need to know what you want to do with them.
Here you can check everything you want to know about these and many more:
http://scikit-learn.org/stable/
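For instance, a minimal sketch of how SGDRegressor from the imports above could be used (synthetic data, just to show the workflow):

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import SGDRegressor
from sklearn.metrics import mean_squared_error

# Synthetic data standing in for a large dataset.
X = np.random.rand(10000, 5)
y = X @ np.array([1.0, -2.0, 0.5, 3.0, 0.0]) + 0.1 * np.random.randn(10000)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

reg = SGDRegressor(max_iter=1000, tol=1e-3)
reg.fit(X_train, y_train)
print(mean_squared_error(y_test, reg.predict(X_test)))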
I am learning to create a learning model using TensorFlow.
I have successfully run the MNIST tutorial and would now like to test the model with my own images. They are same-size images (224x224) sorted into folders by class.
Now I would like to use those images as input for my model, as in the MNIST example. I tried to open the MNIST dataset but it's unreadable; I guess it has been converted into some binary format. From the example, I think the MNIST dataset has a structure like this:
mnist
    test
        images
        labels
    train
        images
        labels
How can I make a dataset look like the MNIST data from my own images files?
Thank you very much!
MNIST is not stored in image format. From the MNIST website (http://yann.lecun.com/exdb/mnist/) you can see that it has a specific binary format which is already close to a tensor or NumPy array, so it can be used in TensorFlow with minimal adjustments. It is essentially a matrix of numbers.
To work with regular images (.jpg for instance), you need to use any Python image-processing library to convert them into an np.array. For example PIL will work, like here:
PIL and numpy
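A minimal sketch of that approach (the file path is just a placeholder):

import numpy as np
from PIL import Image

# Load one 224x224 image and turn it into a float array in [0, 1].
img = Image.open("dataset/train/cat/img_001.jpg").convert("RGB")
arr = np.asarray(img, dtype=np.float32) / 255.0
print(arr.shape)  # (224, 224, 3)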
Another option is to use the built-in functions from TensorFlow to convert your images straight into tensors supported by TensorFlow; check this out:
https://www.tensorflow.org/versions/r0.9/api_docs/python/image.html
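A rough sketch of that route using the current function names (the linked page documents an older release, so the exact names there may differ; the path is a placeholder):

import tensorflow as tf

# Read and decode a single image, then resize it to the model's input size.
raw = tf.io.read_file("dataset/train/cat/img_001.jpg")
img = tf.image.decode_jpeg(raw, channels=3)
img = tf.image.resize(img, [224, 224]) / 255.0
print(img.shape)  # (224, 224, 3)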
I want to finetune a neural net that has been pretrained. However, this model was made in Caffe and I would like to work in Torch.
I have tried loadcaffe, but this does not seem focused on finetuning.
Is there another tool that makes this possible? Or can the Caffe model be converted to a Torch net?
All you need to do is:
use loadcaffe to convert your Caffe pretrained network into a Torch version
(right after, you can save it to disk with torch.save("net.t7", model));
write Torch code to fine-tune it.
One of the main advantages of Caffe for me was the possibility of doing transfer learning on freely distributed pretrained models.
Is there a place to get trained models from papers/competitions in TensorFlow format?
If not, is there a way to convert existing Caffe (or any other) models into TensorFlow models?
You can likely use the Caffe-to-TensorFlow model converter to convert model zoo models. If you try it and report back, it would be great to know. There's a potential issue with converting max pooling and padding, but it seems to work for many models.
As of now, the official TF GitHub page has a list of model collections. You can also use keras.applications, which has pretrained models.
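For example, loading a pretrained backbone from keras.applications for transfer learning could look like this (a minimal sketch; the head layers and class count are placeholders):

import tensorflow as tf

# Pretrained ImageNet backbone without the classification head.
base = tf.keras.applications.InceptionV3(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze the pretrained layers

# Attach a small placeholder head for the new task.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")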
There are a few ways to do transfer learning with TensorFlow, some of which combine TF-Slim and custom code, but there is a very nice collection of pretrained TensorFlow models, Tensornets, which contains almost all popular models and their pretrained weights.
There are different ways to do transfer learning in TensorFlow.
The first is to write your own save/load logic using checkpoints; however, there are incompatibilities between TF 1.0 and TF 2.0 checkpoints (see the sketch after the list below).
The other is to use a framework built on top of TF; some of them are:
TF Layers / TF Slim: TF-specific
MLModels: cross-platform model zoo
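A rough sketch of the checkpoint-based approach in TF 2.x (the model here is an arbitrary placeholder):

import tensorflow as tf

# Placeholder model; any tf.keras model or tf.Module works the same way.
model = tf.keras.applications.MobileNetV2(weights=None)
ckpt = tf.train.Checkpoint(model=model)

# Save the weights, then restore them later before fine-tuning.
save_path = ckpt.save("./ckpts/model")
ckpt.restore(save_path)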