How to load EMNIST data into TensorFlow

In all the tutorials I've seen for TensorFlow, the MNIST dataset is used. I've understood the modelling, but how do I load this dataset into TensorFlow?
https://www.nist.gov/itl/iad/image-group/emnist-dataset

The EMNIST dataset uses the same binary format as the original MNIST dataset, so you can take the input-pipeline code from any tutorial that uses the original MNIST dataset and point it at the files you get when you download EMNIST.
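For instance, here is a minimal sketch of reading the IDX binary format directly with numpy (the gzipped file name is an assumption; substitute whichever EMNIST split you downloaded):
import gzip
import numpy as np

def read_idx_images(path):
    # IDX image files: 16-byte header (magic, count, rows, cols), then raw pixels
    with gzip.open(path, 'rb') as f:
        header = f.read(16)
        count = int.from_bytes(header[4:8], 'big')
        rows = int.from_bytes(header[8:12], 'big')
        cols = int.from_bytes(header[12:16], 'big')
        data = np.frombuffer(f.read(), dtype=np.uint8)
    return data.reshape(count, rows, cols)

# hypothetical file name from the EMNIST download
images = read_idx_images('emnist-byclass-train-images-idx3-ubyte.gz')
# note: EMNIST images may appear transposed relative to MNIST orientation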

You can load the EMNIST data file in MATLAB format with scipy.io.loadmat(). The array has to be rotated after loading. There is a Jupyter Notebook on GitHub which does EMNIST Digits classification.
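A minimal sketch of that approach, assuming the MATLAB-format file emnist-digits.mat (the file name and field layout are assumptions based on the EMNIST MATLAB release):
from scipy import io as sio
import numpy as np

mat = sio.loadmat('emnist-digits.mat')   # assumed file name
train = mat['dataset']['train'][0, 0]
x_train = train['images'][0, 0]          # shape (n, 784), column-major pixels
y_train = train['labels'][0, 0].ravel()
# undo the rotation: reshape each row to 28x28, then transpose each image
x_train = x_train.reshape(-1, 28, 28).transpose(0, 2, 1)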

You could use the EMNIST package that can be found here: https://pypi.org/project/emnist/
To load the dataset, you first need to decide which of the six different datasets you would like to work with. The details are in this paper: https://arxiv.org/pdf/1702.05373v1.pdf
Let's say we want to use the byclass dataset:
from emnist import extract_training_samples, extract_test_samples

# each call returns an (images, labels) pair of numpy arrays
x_train, y_train = extract_training_samples('byclass')
x_test, y_test = extract_test_samples('byclass')
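Both calls return numpy arrays: the images with shape (num_samples, 28, 28) and the labels as a matching one-dimensional array. The package downloads and caches the raw data the first time it is used.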

Related

How to decode an image so it can be fed into a Keras model?

I have one image that I really want to transform/decode into a tensor. Why? Because I want to feed this tensor into my neural network written in Keras. The question is: how do I transform this image into a tensor of values that doesn't give me an error when feeding the neural net?
So suppose there is a PATH, and this has to be changed into a TENSOR, which can be fed into the Keras neural network.
Thank you very much.
You can use Keras' ImageDataGenerator(), which generates batches of tensor image data. You can then call flow_from_directory() on your ImageDataGenerator() object, which takes a path to the directory where your images are, and generates batches of data from the images themselves. These two videos demonstrate this process with an example:
Image preparation for CNN training with Keras
Create and train a CNN with Keras
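A minimal sketch of that approach (the directory path, image size, and class names are assumptions; adjust them to your own data):
from keras.preprocessing.image import ImageDataGenerator

# assumes data/train/<class_name>/ subdirectories, one folder per class
datagen = ImageDataGenerator(rescale=1. / 255)
train_batches = datagen.flow_from_directory(
    'data/train',               # hypothetical path
    target_size=(224, 224),
    classes=['cats', 'dogs'],   # hypothetical class names
    batch_size=16)

x_batch, y_batch = next(train_batches)  # tensors ready to feed the model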

Calling "fit" multiple times in Keras

I've been working on a CNN over several hundred GBs of images. I've created a training function that bites off 4 GB chunks of these images and calls fit over each of these pieces. I'm worried that I'm only training on the last piece and not the entire dataset.
Effectively, my pseudo-code looks like this:
DS = lazy_load_400GB_Dataset()
for section in DS:
    X_train = section.images
    Y_train = section.classes
    model.fit(X_train, Y_train, batch_size=16, nb_epoch=30)
I know that the API and the Keras forums say that this will train over the entire dataset, but I can't intuitively understand why the network wouldn't just relearn on the last training chunk.
Some help understanding this would be much appreciated.
Best,
Joe
This question was raised at the Keras github repository in Issue #4446: Quick Question: can a model be fit for multiple times? It was closed by François Chollet with the following statement:
Yes, successive calls to fit will incrementally train the model.
So, yes, you can call fit multiple times.
For datasets that do not fit into memory, there is an answer in the Keras documentation FAQ section:
You can do batch training using model.train_on_batch(X, y) and model.test_on_batch(X, y). See the models documentation.
Alternatively, you can write a generator that yields batches of training data and use the method model.fit_generator(data_generator, samples_per_epoch, nb_epoch).
You can see batch training in action in our CIFAR10 example.
So if you want to iterate your dataset the way you are doing, you should probably use model.train_on_batch and take care of the batch sizes and iteration yourself.
One more thing to note: you should make sure the order of the samples you train your model with is shuffled after each epoch. As written, your example code does not appear to shuffle the dataset. You can read a bit more about shuffling here and here.
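A minimal sketch of that pattern, reusing the hypothetical lazy loader from the question and assuming model is already compiled:
import numpy as np

DS = lazy_load_400GB_Dataset()  # hypothetical loader from the question
batch_size = 16
for epoch in range(30):
    for section in DS:
        X, Y = section.images, section.classes
        # shuffle within each chunk so the sample order changes every epoch
        idx = np.random.permutation(len(X))
        X, Y = X[idx], Y[idx]
        for start in range(0, len(X), batch_size):
            model.train_on_batch(X[start:start + batch_size],
                                 Y[start:start + batch_size])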

How to convert (same-size, categorized) images into a dataset for TensorFlow

I am learning to create a learning model using TensorFlow.
I have successfully run the MNIST tutorial, and now I would like to test the model with my own images. They are same-size images (224x224) classified into folders.
Now I would like to use those images as input for my model, as in the MNIST example. I tried to open the MNIST dataset, but it's unreadable; I guess it has been converted into some binary format. From the example, I think the MNIST dataset has a structure something like this:
mnist
  test
    images
    labels
  train
    images
    labels
How can I make a dataset look like the MNIST data from my own images files?
Thank you very much!
MNIST is not stored in image format. From the MNIST web site (http://yann.lecun.com/exdb/mnist/) you can see that it has a specific binary format which is already close to a tensor or numpy array and can be used in TensorFlow with minimal adjustment. It is essentially a matrix of numbers.
To work with ordinary images (.jpg, for instance), you need to use a Python image-processing library to convert them into an np.array. For example, PIL will work, as shown here:
PIL and numpy
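A minimal sketch of that conversion (the file name is an assumption):
from PIL import Image
import numpy as np

# load a JPEG and convert it to a numpy array of shape (height, width, 3)
img = np.array(Image.open('example.jpg'))  # hypothetical file name
print(img.shape, img.dtype)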
Another option is to use the built-in functions from TensorFlow to convert your images straight into tensors supported by TensorFlow; check this out:
https://www.tensorflow.org/versions/r0.9/api_docs/python/image.html
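For example, a minimal sketch using the image ops from that page, in the TF 1.x-era API (the file name is an assumption):
import tensorflow as tf

# read the raw bytes and decode them into a uint8 image tensor
jpeg_bytes = tf.read_file('example.jpg')  # hypothetical file name
image = tf.image.decode_jpeg(jpeg_bytes, channels=3)
# the resulting tensor can then be resized/batched with the other ops on that page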

How to use a pickled file as a dataset for Keras

I have built my own dataset for digit classification, and it worked well with the convolutional network model developed by LISA lab (Here). I want to visualize the weights, and I want to do it through Keras.
The Keras documentation loads the MNIST data like this:
(X_train, y_train), (X_test, y_test) = mnist.load_data()
But I want to load my pickled dataset instead of the default MNIST data. Where does the mnist module for Keras load its dataset from, and how can I pass my own dataset to it instead?
Thanks in advance.
You can get a lot of info about this method by reading the source: https://github.com/fchollet/keras/blob/master/keras/datasets/mnist.py
In this case, the dataset is a pickle file loaded from an Amazon S3 bucket.
You could write a copy of this function and use it yourself to load up a different pickled dataset.
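A minimal sketch of such a function, assuming your pickle already stores ((X_train, y_train), (X_test, y_test)) tuples (the file name and internal layout are assumptions):
import pickle

def load_data(path='my_dataset.pkl'):  # hypothetical file name
    # expects the pickle to contain ((X_train, y_train), (X_test, y_test))
    with open(path, 'rb') as f:
        (X_train, y_train), (X_test, y_test) = pickle.load(f)
    return (X_train, y_train), (X_test, y_test)

(X_train, y_train), (X_test, y_test) = load_data()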

Extract features using pre-trained (Tensorflow) CNN

Deep Learning has been applied successfully on several large data sets for the classification of a handful of classes (cats, dogs, cars, planes, etc), with performances beating simpler descriptors like Bags of Features over SIFT, color histograms, etc.
Nevertheless, training such a network requires a lot of data per class and a lot of training time. Very often, though, one doesn't have enough data, or just wants to get an idea of how well a convolutional neural network might do before spending time on designing and training such a device and gathering the training data.
In this particular case, it might be ideal to have a network configured and trained using some benchmark data set used by the state of the art publications, and to simply apply it to some data set that you might have as a feature extractor.
This results in a set of features for each image, which one could feed to a classical classification method like SVM's, logistic regression, neural networks, etc.
In particular, when one does not have enough data to train the CNN, I would expect this to outperform a pipeline in which the CNN was trained on only a few samples.
I was looking at the tensorflow tutorials, but they always seem to have a clear training / testing phase. I couldn't find a pickle file (or similar) with a pre-configured CNN feature extractor.
My questions are: do such pre-trained networks exist, and where can I find them? Alternatively: does this approach make sense? Where could I find a CNN + weights?
EDIT
W.r.t. @john's comment, I tried using 'DecodeJpeg:0' and 'DecodeJpeg/contents:0' and checked the outputs, which are different (:S)
import cv2, requests, numpy
import tensorflow.python.platform
import tensorflow as tf

# fetch a test image and decode it with OpenCV
response = requests.get('https://i.stack.imgur.com/LIW6C.jpg?s=328&g=1')
data = numpy.asarray(bytearray(response.content), dtype=numpy.uint8)
image = cv2.imdecode(data, -1)

# re-encode the decoded image back into a JPEG byte string
compression_worked, jpeg_data = cv2.imencode('.jpeg', image)
if not compression_worked:
    raise Exception("Failure when compressing image to jpeg format in opencv library")
jpeg_data = jpeg_data.tostring()

# load the pre-trained Inception graph
with open('./deep_learning_models/inception-v3/classify_image_graph_def.pb', 'rb') as graph_file:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(graph_file.read())
    tf.import_graph_def(graph_def, name='')

with tf.Session() as sess:
    # despite the variable name, this grabs the pool_3 feature layer, not the softmax
    softmax_tensor = sess.graph.get_tensor_by_name('pool_3:0')
    arr0 = numpy.squeeze(sess.run(
        softmax_tensor,
        {'DecodeJpeg:0': image}
    ))
    arr1 = numpy.squeeze(sess.run(
        softmax_tensor,
        {'DecodeJpeg/contents:0': jpeg_data}
    ))
    print(numpy.abs(arr0 - arr1).max())
So the max absolute difference is 1.27649, and in general all the elements differ (especially since the average values of arr0 and arr1 themselves lie between 0 and 0.5).
I also would expect that 'DecodeJpeg:0' needs a JPEG string, not a numpy array; why else would the name contain 'Jpeg'? @john: could you state how sure you are about your comment?
So I guess I'm not sure what is what, as I would expect a trained neural network to be deterministic (chaotic at most).
The TensorFlow team recently released a deep CNN trained on the ImageNet dataset. You can download the script that fetches the data (including the model graph and the trained weights) from here. The associated Image Recognition tutorial has more details about the model.
While the current model isn't specifically packaged to be used in a subsequent training step, you could explore modifying the script to reuse parts of the model and the trained weights in your own network.
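For the feature-extractor use case described in the question, a minimal sketch of feeding the extracted 'pool_3:0' activations to a classical classifier (the random arrays below are stand-ins for real features and labels collected by running the graph over your own images):
import numpy as np
from sklearn.svm import SVC

features_train = np.random.rand(100, 2048)   # stand-in for pool_3 activations
labels_train = np.random.randint(0, 2, 100)  # stand-in for class labels

clf = SVC(kernel='linear')
clf.fit(features_train, labels_train)
print(clf.score(features_train, labels_train))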
