Extract features using a pre-trained (TensorFlow) CNN - machine-learning

Deep Learning has been applied successfully on several large data sets for the classification of a handful of classes (cats, dogs, cars, planes, etc.), with performance beating simpler descriptors like Bags of Features over SIFT, color histograms, etc.
Nevertheless, training such a network requires a lot of data per class and a lot of training time. However, very often one doesn't have enough data, or just wants to get an idea of how well a convolutional neural network might do, before spending time on designing and training such a network and gathering the training data.
In that case, it would be ideal to have a network that is already configured and trained on some benchmark data set used in state-of-the-art publications, and to simply apply it to one's own data set as a feature extractor.
This results in a set of features for each image, which one could feed to a classical classification method like SVMs, logistic regression, neural networks, etc.
In particular, when one does not have enough data to train a CNN, I would expect this to outperform a pipeline where the CNN was trained on only a few samples.
I was looking at the TensorFlow tutorials, but they always seem to have a clear training / testing phase. I couldn't find a pickle file (or similar) with a pre-configured CNN feature extractor.
My questions are: do such pre-trained networks exist, and where can I find them? Alternatively: does this approach make sense? Where could I find a CNN + weights?
EDIT
W.r.t. #john's comment, I tried feeding both 'DecodeJpeg:0' and 'DecodeJpeg/contents:0' and checked the outputs, which are different (:S)
import cv2, requests, numpy
import tensorflow.python.platform
import tensorflow as tf

# Download a test image and decode it to a numpy array with OpenCV.
response = requests.get('https://i.stack.imgur.com/LIW6C.jpg?s=328&g=1')
data = numpy.asarray(bytearray(response.content), dtype=numpy.uint8)
image = cv2.imdecode(data, -1)

# Re-encode the decoded image as a JPEG byte string.
compression_worked, jpeg_data = cv2.imencode('.jpeg', image)
if not compression_worked:
    raise Exception("Failure when compressing image to jpeg format in opencv library")
jpeg_data = jpeg_data.tostring()

# Load the frozen Inception-v3 graph and run the 'pool_3' features
# once on the decoded image and once on the JPEG byte string.
with open('./deep_learning_models/inception-v3/classify_image_graph_def.pb', 'rb') as graph_file:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(graph_file.read())
    tf.import_graph_def(graph_def, name='')

    with tf.Session() as sess:
        softmax_tensor = sess.graph.get_tensor_by_name('pool_3:0')

        arr0 = numpy.squeeze(sess.run(
            softmax_tensor,
            {'DecodeJpeg:0': image}
        ))
        arr1 = numpy.squeeze(sess.run(
            softmax_tensor,
            {'DecodeJpeg/contents:0': jpeg_data}
        ))

        print(numpy.abs(arr0 - arr1).max())
So the max absolute difference is 1.27649, and in general all the elements differ (especially since the average values of arr0 and arr1 themselves lie between 0 and 0.5).
I would also expect that 'DecodeJpeg:0' needs a JPEG string, not a numpy array; why else would the name contain 'Jpeg'? #john: could you state how sure you are about your comment?
So I guess I'm not sure what is what, as I would expect a trained neural network to be deterministic (if perhaps chaotic).

The TensorFlow team recently released a deep CNN trained on the ImageNet dataset. You can download the script that fetches the data (including the model graph and the trained weights) from here. The associated Image Recognition tutorial has more details about the model.
While the current model isn't specifically packaged to be used in a subsequent training step, you could explore modifying the script to reuse parts of the model and the trained weights in your own network.
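To illustrate the feature-extractor idea, here is a minimal sketch (not part of the original answer) that reuses the frozen Inception-v3 graph from the question, pulls out the 2048-dimensional 'pool_3' activations, and trains a classical classifier on them with scikit-learn. It uses the same TensorFlow 1.x API as the question; the graph path is the one from the question, and the JPEG file names and labels are hypothetical placeholders for your own data.
import numpy
import tensorflow as tf
from sklearn.linear_model import LogisticRegression

# Load the frozen Inception-v3 graph (same file as used in the question above).
with open('./deep_learning_models/inception-v3/classify_image_graph_def.pb', 'rb') as graph_file:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(graph_file.read())
    tf.import_graph_def(graph_def, name='')

def extract_features(jpeg_strings):
    # Return one 2048-dimensional 'pool_3' feature vector per JPEG-encoded image.
    features = []
    with tf.Session() as sess:
        pool3 = sess.graph.get_tensor_by_name('pool_3:0')
        for jpeg_data in jpeg_strings:
            feat = sess.run(pool3, {'DecodeJpeg/contents:0': jpeg_data})
            features.append(numpy.squeeze(feat))
    return numpy.array(features)

# Hypothetical placeholders: JPEG byte strings and integer labels for your own data set.
train_jpegs = [open(p, 'rb').read() for p in ['image_0.jpg', 'image_1.jpg']]
train_labels = [0, 1]

clf = LogisticRegression().fit(extract_features(train_jpegs), train_labels)
The same feature matrix could just as well be fed to an SVM or a small neural network, as suggested in the question.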

Related

How to classify images with Variational Autoencoder

I have trained an autoencoder on both labeled images (1200) and unlabeled images (4000), and I have both models saved separately (vae_fake_img and vae_real_img). So I was wondering what to do next. I know Variational Autoencoders are not useful for a classification task, but feature extraction seems like a good try. So here are my attempts:
I labeled my unlabeled data using k-means clustering on the latent space of the labeled images.
My supervisor suggested training the VAE on the unlabeled images, then visualizing the latent space with t-SNE, then k-means clustering, then an MLP for the final prediction.
I want to train a Conditional VAE to create more labeled samples, retrain the VAE, and use the reconstructed (64,64,3) output together with the last three fully connected (FC) layers of the VGG16 architecture for final classification, as done in this paper: Encoder as feature extraction paper.
I have tried so many methods for my thesis, and I really need to achieve high accuracy if I want to get a job at my current internship. So any suggestion or guidance is highly appreciated. I've read so many autoencoder papers, but the architecture for classification is not fully explained (or I'm not understanding it properly). I want to know which part of the VAE holds more information for multiclass classification, as I believe the latent space of the encoder has more useful information than the decoder reconstruction. In short, I want to know which part of the autoencoder gives better features for a final classification.
In the case of autoencoders you don't need labels for reconstructing the input data. So I think these approaches might bring slight improvements:
Use a VAE (Variational Autoencoder) instead of an AE.
Use a Conditional VAE (CVAE), combine all the data, and train the network on all of it.
Consider the batch (labeled vs. unlabeled) as the condition, and use a one-hot encoding of the batch as the condition vector.
Inject the condition into both the encoder and the decoder.
Then the latent space won't have any batch effect, and you can use KNN to assign each unlabeled sample the label of its nearest labeled neighbor.
Alternatively, you can train a simple MLP to classify every sample of your latent space (in this approach you should train the MLP only with labeled data and then apply it to the unlabeled data); see the sketch after this list.
Don't forget batch normalization and dropout layers.
P.S.: the most meaningful layer of an AE is the latent space.
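A minimal sketch of the last two options, assuming a trained (C)VAE encoder whose predict method returns the latent means; the names encoder, x_labeled, y_labeled, and x_unlabeled are hypothetical stand-ins for your own model and data.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

# Hypothetical: `encoder` is your trained (C)VAE encoder and returns latent means.
z_labeled = encoder.predict(x_labeled)      # shape (n_labeled, latent_dim)
z_unlabeled = encoder.predict(x_unlabeled)  # shape (n_unlabeled, latent_dim)

# Option 1: nearest labeled neighbour in latent space for each unlabeled sample.
knn = KNeighborsClassifier(n_neighbors=1).fit(z_labeled, y_labeled)
pseudo_labels = knn.predict(z_unlabeled)

# Option 2: a simple MLP trained only on the labeled latent codes.
mlp = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500).fit(z_labeled, y_labeled)
predictions = mlp.predict(z_unlabeled)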

Finding a suitable CNN architecture for the classification

I want to use a convolutional neural network (CNN) to classify between two classes of images. I built several CNN architectures, but I always get the same result; the network always classifies all cases as belonging to the second class. Therefore, I always get 50% accuracy in leave-one-out cross-validation. The data is balanced in terms of the number of samples per class (16 from the 1st, and 16 from the 2nd). Could you please clarify what this means?
With such a small number of training samples, your CNN model is very likely to overfit the data, giving good training accuracy but poor test accuracy.
Alternatively, your model can be skewed towards predicting the same class at all times.
Below are some of the solutions you can try:
1) As you have commented, if you cannot get any more images, then try creating new images by modifying the ones already available. For example: let's say you have 16 images of a cat (cat is the class). You can crop the cat and paste it onto different backgrounds, vary the brightness and intensity, try rotation and translation operations, etc.
This will help you create a good training set (a minimal augmentation sketch follows this list).
2) Try creating a smaller model (with one or two layers) and check if it improves your accuracy.
3) You can do transfer learning by using a good pre-trained model, as it can learn pretty well compared to building a model from scratch.
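A minimal augmentation sketch for point 1, assuming the 32 images fit in a single numpy array and using Keras' ImageDataGenerator; the array shapes and parameter values are illustrative assumptions, not from the original answer.
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Hypothetical stand-in for the 32 training images, shape (32, height, width, 3), and their labels.
x_train = np.random.rand(32, 128, 128, 3)
y_train = np.array([0] * 16 + [1] * 16)

datagen = ImageDataGenerator(
    rotation_range=20,            # random rotations
    width_shift_range=0.1,        # random horizontal translations
    height_shift_range=0.1,       # random vertical translations
    brightness_range=(0.7, 1.3),  # random brightness changes
    zoom_range=0.1,
    horizontal_flip=True,
)

# Each call yields a freshly augmented batch; model.fit(...) can consume this generator directly.
augmented_batches = datagen.flow(x_train, y_train, batch_size=8)
x_batch, y_batch = next(augmented_batches)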

When should you use pretrained weights when training deep learning models?

I am interested in training a range of image and object detection models and I am wondering what the general rule of when to use pretrained weights of a network like VGG16 is.
For example, it seems obvious that fine-tuning pre-trained VGG16 ImageNet model weights is helpful if you are looking at a subset, i.e. cats and dogs.
However it seems less clear to me whether using these pretrained weights is a good idea if you are training an image classifier with 300 classes with only some of them being subsets of the classes in the pretrained model.
What is the intuition around this?
Lower layers learn features that are not necessarily specific to your application/dataset: corners, edges, simple shapes, etc. So it does not matter whether your data is strictly a subset of the categories that the original network can predict.
Depending on how much data you have available for training, and how similar the data is to the one used in the pretrained network, you can decide to freeze the lower layers and learn only the higher ones, or simply train a classifier on top of your pretrained network.
Check here for a more detailed answer
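As a minimal Keras sketch of the option described above (freeze the lower layers and train the higher ones plus a new classifier head): the 300-class head, the input size, and the number of unfrozen layers are illustrative assumptions.
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models

# Load the VGG16 convolutional base with ImageNet weights, without its classifier head.
base = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))

# Freeze the lower layers; here the last four layers are left trainable.
for layer in base.layers[:-4]:
    layer.trainable = False

# New classifier head for a hypothetical 300-class problem.
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation='relu'),
    layers.Dense(300, activation='softmax'),
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])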

Image classification, narrow domain with custom labels

Let's suppose I would like to classify motorbikes by model.
There are a couple of hundred motorbike models I'm interested in.
I do have tens, sometimes hundreds of pictures of each motorbike model.
Can you please point me to the practical example that demonstrates how to train model on your data and then use it to classify images? It needs to be a deep learning model, not simple logistic regression.
I'm not sure about it, but it seems like I can't use a pre-trained neural net, because it has been trained on a wide range of objects like cats, humans, cars, etc. It may not be good at distinguishing the motorbike nuances I'm interested in.
I found a couple of such examples (TensorFlow has one), but sadly, all of them were using a pre-trained model. None of them had an example of how to train it on your own dataset.
In cases like yours you either use transfer learning or fine-tuning. If you have more than a thousand images of motorbikes I would use fine-tuning, and if you have fewer, transfer learning.
Fine-tuning means using a pre-trained model with a different classifier part. The new classifier part, and maybe the last 1-2 layers of the trained model, are then trained on your dataset.
Transfer learning means using a pre-trained model and letting it output features for an input image. You then use a new classifier based on those features, maybe an SVM or a logistic regression.
An example of this can be seen here: https://github.com/cpra/dlvc2016/blob/master/lectures/lecture10.pdf, slide 33.
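A minimal sketch of the transfer-learning route, assuming Keras' pre-trained VGG16 as a fixed feature extractor and scikit-learn for the classifier; the toy arrays stand in for your motorbike images and model labels.
import numpy as np
from tensorflow.keras.applications import VGG16
from tensorflow.keras.applications.vgg16 import preprocess_input
from sklearn.svm import LinearSVC

# Pre-trained VGG16 as a fixed feature extractor (average-pooled convolutional features).
extractor = VGG16(weights='imagenet', include_top=False, pooling='avg')

def features(images):
    # images: float array of shape (n, 224, 224, 3) with values in [0, 255].
    return extractor.predict(preprocess_input(images.copy()))

# Hypothetical motorbike data: 8 images, 4 motorbike models.
x_train = np.random.rand(8, 224, 224, 3) * 255
y_train = np.array([0, 0, 1, 1, 2, 2, 3, 3])

clf = LinearSVC().fit(features(x_train), y_train)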
This paper Quick, Draw! Doodle Recognition from a kaggle challenge may be similar enough to what you are doing. The code is on github. You may need some data augmentation if you only have a few hundred images for each category.
What you want is pretty easy. Follow the Darknet YOLO implementation.
Instructions: https://pjreddie.com/darknet/yolo/
Code: https://github.com/pjreddie/darknet
Training YOLO on COCO
You can train YOLO from scratch if you want to play with different training regimes, hyper-parameters, or datasets. Here's how to get it working on the COCO dataset.
Get The COCO Data
To train YOLO you will need all of the COCO data and labels. The script scripts/get_coco_dataset.sh will do this for you. Figure out where you want to put the COCO data and download it, for example:
cp scripts/get_coco_dataset.sh data
cd data
bash get_coco_dataset.sh
Add your own data and make sure it is in the same format as the testing samples.
Now you should have all the data and the labels generated for Darknet.
Then call the training script with the pre-trained weights.
Keep in mind that training only on your motorcycle images may not give a good estimator; the result can come out biased (I read that somewhere before).
The rest is all inside the link. Good luck!

Large Scale Image Classifier

I have a large set of plant images labeled with the botanical name. What would be the best algorithm to use to train on this dataset in order to classify an unlabeled photo? The photos are processed so that 100% of the pixels contain the plant (e.g. either close-ups of the leaves or bark), so there are no other objects/empty space/background that the algorithm would have to filter out.
I've already tried generating SIFT features for all the photos and feeding these (feature,label) pairs to a LibLinear SVM, but the accuracy was a miserable 6%.
I also tried feeding this same data to a few Weka classifiers. The accuracy was a little better (25% with Logistic, 18% with IBk), but Weka's not designed for scalability (it loads everything into memory). Since the SIFT feature dataset is several million rows, I could only test Weka with a random 3% slice, so it's probably not representative.
EDIT: Some sample images:
Normally, you would not train on the SIFT features directly. Cluster them (using k-means) and then train on the histogram of cluster membership identifiers (i.e., a k-dimensional vector, which counts, at position i, how many features were assigned to the i-th cluster).
This way, you obtain a single output per image (and a single, k-dimensional, feature vector).
Here's the quasi-code (using mahotas and milk in Python):
import numpy as np
import milk
from mahotas.surf import surf
from milk.unsupervised.kmeans import kmeans, assign_centroids

# First load your data:
images = ...
labels = ...

# Compute local SURF descriptors for every image.
local_features = [surf(im, 6, 4, 2) for im in images]

# Cluster all descriptors into k visual words.
allfeatures = np.concatenate(local_features)
_, centroids = kmeans(allfeatures, k=100)

# Represent each image as a histogram of visual-word assignments.
histograms = []
for ls in local_features:
    hist = assign_centroids(ls, centroids, histogram=True)
    histograms.append(hist)

# Cross-validated classification on the histogram vectors.
cmatrix, _ = milk.nfoldcrossvalidation(histograms, labels)
print("Accuracy:", (100 * cmatrix.trace()) / cmatrix.sum())
This is a fairly hard problem.
You can give the BoW (bag of visual words) model a try.
Basically, you extract SIFT features from all the images, then use k-means to cluster the features into visual words. After that, use the BoW vectors to train your classifiers.
See the Wikipedia article on the bag-of-words model and the papers it references for more details.
You probably need better alignment, and probably not more features. There is no way you can get acceptable performance unless you have correspondences. You need to know what points in one leaf correspond to points on another leaf. This is one of the "holy grail" problems in computer vision.
People have used shape context for this problem. You should probably look at this link; the paper describes the basic system behind Leafsnap.
You can implement the BoW model according to this tutorial: Bag-of-Features Descriptor on SIFT Features with OpenCV. It is a very good tutorial for implementing the BoW model in OpenCV.
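A minimal sketch of that OpenCV pipeline, assuming an OpenCV build with SIFT available and scikit-learn for the final classifier; train_images and train_labels are placeholders for your plant photos, in the same spirit as the quasi-code above.
import cv2
import numpy as np
from sklearn.svm import SVC

sift = cv2.SIFT_create()                 # requires an OpenCV build with SIFT available
bow_trainer = cv2.BOWKMeansTrainer(100)  # 100 visual words

# Placeholders for your grayscale training images and their labels.
train_images = ...
train_labels = ...

# 1) Collect SIFT descriptors from all training images to build the vocabulary.
for img in train_images:
    _, descriptors = sift.detectAndCompute(img, None)
    if descriptors is not None:
        bow_trainer.add(descriptors)
vocabulary = bow_trainer.cluster()

# 2) Describe each image as a histogram over the visual words.
bow_extractor = cv2.BOWImgDescriptorExtractor(sift, cv2.BFMatcher(cv2.NORM_L2))
bow_extractor.setVocabulary(vocabulary)

def bow_histogram(img):
    keypoints = sift.detect(img, None)
    return bow_extractor.compute(img, keypoints)[0]

# 3) Train a classical classifier on the BoW histograms.
X = np.array([bow_histogram(img) for img in train_images])
clf = SVC(kernel='linear').fit(X, train_labels)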
