Deep Learning - How can I test the MNIST tutorial model on WML? - watson-studio

This is a "Watson Studio" related question. I've done the following Deep-Learning tutorial/experiment assistant, successfully deployed a generated CNN model to WML(WebService). Cool!
Tutorial: Single convolution layer on MNIST data
Experiment Assistant
Next, I'd like to test whether the deployed model can identify my own MNIST-style image, which raises the following questions.
What kind of input file (a pixel image file, perhaps) should I prepare as model input, and how do I call the scoring endpoint with my image? (I saw the Python code snippet on the "Implementation" tab, but it's a JSON example and I'm not sure how to pass a pixel image through it...)
payload_scoring = {"fields": [array_of_feature_columns], "values": [array_of_values_to_be_scored, another_array_of_values_to_be_scored]}
Any advice/suggestions highly welcomed. Thx in advance.

The trained model accepts input data as a 4-dimensional array, i.e. [<batchsize>, 28, 28, 1], where 28 refers to the height and width of the image in pixels and 1 refers to the number of channels. Currently the WML online deployment and scoring service requires the payload data in a format that matches the input format of the model. So, to predict any image with this model, you must:
convert the image to an array of dimension [1, 28, 28, 1] (converting an image to an array is explained in the next section),
pre-process the image data as required by the model, i.e. (a) normalize the data and (b) convert the type to float,
specify the pre-processed data in JSON format with the appropriate keys; this JSON document will be the input payload for the scoring request to the model.
How to convert image to an array?
There are two ways (both using Python code).
a) The Keras Python library includes the MNIST dataset, in which the images are already stored as 28 x 28 arrays. Using the Python code below, we can use this dataset to create the scoring payload.
import numpy as np
from keras.datasets import mnist
(X, y), (X_test, y_test) = mnist.load_data()
score_payload_data = X_test.reshape(X_test.shape[0], X_test.shape[1], X_test.shape[2], 1)
score_payload_data = score_payload_data.astype("float32")/255
score_payload_data = score_payload_data[2].tolist() ## we are choosing the image at index 2 to predict
scoring_payload = {'values': [score_payload_data]}
b) If you have an image of size 28 x 28 pixels, we can create the scoring payload using the code below.
from scipy import misc
img_file_name = "<image file name with full path>"
img = misc.imread(img_file_name)
img_to_predict = img.reshape(img.shape[0], img.shape[1], 1)/255
img_to_predict = img_to_predict.astype("float32").tolist()
scoring_payload = {"values": [img_to_predict]}

Related

How to feed multiple images as input to a Convolutional Neural network

I am pretty new to CNNs. I am planning to build a classifier that takes two images as input and outputs whether they are a "match" or not.
I am not sure where to start or how to feed two images and train the neural network. It would be of great help if you could post some sample code. Please help.
Thank you
You first need to take the two images and put them into an array. So if each image is 26x26, the array shape will be 2x26x26. Now put each of these arrays into your training data array, BUT make sure that you reshape the training data array to (-1, 26, 26, 2) before you train. You can do this by passing numpy.array(your_array).reshape(-1, 26, 26, 2) to your fit function.
Here is an example:
import numpy as np

image1 = np.zeros((26, 26))  # put your first image array here
image2 = np.zeros((26, 26))  # put your other image array here
both_images = [image1, image2]

training_data = []
training_data.append(both_images)  # feel free to add as much training data as you would like

same = 0  # label for this pair of images
labels = [same]

model = create_model()  # make a function to create your model and assign it to a variable
model.fit(np.array(training_data).reshape(-1, 26, 26, 2), np.array(labels), batch_size=32)
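For completeness, here is a hedged sketch of what create_model() could look like for this setup; the layer sizes are placeholders, and the only firm requirements are that the input shape matches (26, 26, 2) and the output is a single match/no-match score.
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

def create_model():
    # Hypothetical architecture - adjust layers and sizes to your problem.
    model = Sequential()
    model.add(Conv2D(32, (3, 3), activation='relu', input_shape=(26, 26, 2)))
    model.add(MaxPooling2D((2, 2)))
    model.add(Flatten())
    model.add(Dense(64, activation='relu'))
    model.add(Dense(1, activation='sigmoid'))  # 1 = match, 0 = no match
    model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
    return model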

Keras error in Dense layer, expected 4 dimensions got array with shape (1024,2) [duplicate]

I'm attempting to train a 3-layer dense neural network using Keras with a GPU-enabled TensorFlow backend.
The dataset I have is 4 million 20x40px images that I placed in directories named after the category they belong to.
Because of the large amount of data, I can't just load it all into RAM and feed it to my model, so I thought Keras's ImageDataGenerator, specifically the flow_from_directory() function, would do the trick. This yields a tuple (x, y) where x is the numpy array of the image and y is the label of the image.
I expected the model to know to access the numpy array to be given as input, so I set up my input shape as (None, 20, 40, 3), where None is the batch size, 20 and 40 are the image dimensions and 3 is the number of channels. This does not work, however, as when I try to train my model I keep getting the error:
ValueError: Error when checking target: expected dense_3 to have 4 dimensions, but got array with shape (1024, 2)
I know the cause is that it is getting the tuple from flow_from_directory, and I guess I could change the input shape to match; however, I fear that this would render my model useless, as I will be using images to make predictions, not a pre-categorized tuple. So my question is: how can I get flow_from_directory to feed the image to my model and only use the tuple to validate its training? Am I misunderstanding something here?
For reference, here is my code:
from keras.models import Model
from keras.layers import *
from keras.preprocessing.image import ImageDataGenerator
from keras.callbacks import TensorBoard
# Prepare the Image Data Generator.
train_datagen = ImageDataGenerator()
test_datagen = ImageDataGenerator()
train_generator = train_datagen.flow_from_directory(
    '/path/to/train_data/',
    target_size=(20, 40),
    batch_size=1024,
)
test_generator = test_datagen.flow_from_directory(
    '/path/to/test_data/',
    target_size=(20, 40),
    batch_size=1024,
)
# Define input tensor.
input_t = Input(shape=(20,40,3))
# Now create the layers and pass the input tensor to it.
hidden_1 = Dense(units=32, activation='relu')(input_t)
hidden_2 = Dense(units=16)(hidden_1)
prediction = Dense(units=1)(hidden_2)
# Now put it all together and create the model.
model = Model(inputs=input_t, outputs=prediction)
model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])
# Prepare Tensorboard callback and start training.
tensorboard = TensorBoard(log_dir='./graph', histogram_freq=0, write_graph=True, write_images=True)
print(test_generator)
model.fit_generator(
    train_generator,
    steps_per_epoch=2000,
    epochs=100,
    validation_data=test_generator,
    validation_steps=800,
    callbacks=[tensorboard]
)
# Save trained model.
model.save('trained_model.h5')
Your input shape is wrong for Dense layers.
Dense layers expect inputs in the shape (None,length).
You'll either need to reshape your inputs so that they become vectors:
imageBatch=imageBatch.reshape((imageBatch.shape[0],20*40*3))
Or use convolutional layers, which expect that type of input shape, (None, nRows, nCols, nChannels), as is standard in TensorFlow.
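As an illustration of the first option, here is a hedged sketch (the layer sizes are placeholders) that puts a Flatten layer in front of the Dense stack, so the generator output does not have to be reshaped by hand; the final layer has 2 units to match the (1024, 2) targets in the error message.
from keras.models import Model
from keras.layers import Input, Flatten, Dense

input_t = Input(shape=(20, 40, 3))
x = Flatten()(input_t)                          # turns (20, 40, 3) into a vector of length 2400
x = Dense(32, activation='relu')(x)
x = Dense(16, activation='relu')(x)
prediction = Dense(2, activation='softmax')(x)  # 2 classes, matching the (1024, 2) targets
model = Model(inputs=input_t, outputs=prediction)
model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])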

TensorFlow 1.2.1 and InceptionV3 to classify an image

I'm trying to create an example using the Keras built into the latest version of TensorFlow from Google. This example should be able to classify a classic image of an elephant. The code looks like this:
# Import a few libraries for use later
from PIL import Image as IMG
from tensorflow.contrib.keras.python.keras.preprocessing import image
from tensorflow.contrib.keras.python.keras.applications.inception_v3 import InceptionV3
from tensorflow.contrib.keras.python.keras.applications.inception_v3 import preprocess_input, decode_predictions
# Get a copy of the Inception model
print('Loading Inception V3...\n')
model = InceptionV3(weights='imagenet', include_top=True)
print ('Inception V3 loaded\n')
# Read the elephant JPG
elephant_img = IMG.open('elephant.jpg')
# Convert the elephant to an array
elephant = image.img_to_array(elephant_img)
elephant = preprocess_input(elephant)
elephant_preds = model.predict(elephant)
print ('Predictions: ', decode_predictions(elephant_preds))
Unfortunately I'm getting an error when trying to evaluate the model with model.predict:
ValueError: Error when checking : expected input_1 to have 4 dimensions, but got array with shape (299, 299, 3)
This code is taken from and based on the excellent coremltools-keras-inception example and will be expanded further once this is figured out.
The reason why this error occurred is that the model always expects a batch of examples, not a single example. This diverges from the common understanding of models as mathematical functions of their inputs. The reasons why the model expects batches are:
Models are computationally designed to work faster on batches in order to speed up training.
There are algorithms which take into account the batch nature of the input (e.g. Batch Normalization or GAN training tricks).
So the four dimensions come from a first dimension, which is the sample/batch dimension, followed by the 3 image dimensions.
Actually I found the answer. Even though the documentation states otherwise, when the top layer is included the model is still set up to take a batch of images. Thus we need to add this before the prediction line:
elephant = numpy.expand_dims(elephant, axis=0)
Then the tensor is in the right shape and everything works correctly. I am still uncertain why the documentation states that the input vector should be (3x299x299) or (299x299x3) when it clearly wants 4 dimensions.
Be careful!
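Putting the two answers together, here is a minimal sketch of the corrected prediction step, reusing the names from the question's code (and assuming the JPEG is resized to 299 x 299 first):
import numpy as np

elephant_img = IMG.open('elephant.jpg').resize((299, 299))
elephant = image.img_to_array(elephant_img)   # shape (299, 299, 3)
elephant = np.expand_dims(elephant, axis=0)   # shape (1, 299, 299, 3) - adds the batch dimension
elephant = preprocess_input(elephant)
elephant_preds = model.predict(elephant)
print('Predictions: ', decode_predictions(elephant_preds))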

How do I obtain the layer names for use in the iOS sample app? (Tensorflow)

I'm very new to Tensorflow, and I'm trying to train something using the inception v3 network for use in an iPhone app. I managed to export my graph as a protocol buffer file, manually remove the dropout nodes (correctly, I hope), and place that .pb file in my iOS project, but now I am receiving the following error:
Running model failed:Not found: FeedInputs: unable to find feed output input
which seems to indicate that my input_layer_name and output_layer_name variables in the iOS app are misconfigured.
I see in various places that it should be Mul and softmax respectively, for inception v3, but these values don't work for me.
My question is: what is a layer (with regards to this context), and how do I find out what mine are?
This is the exact definition of the model that I trained, but I don't see "Mul" or "softmax" present.
This is what I've been able to learn about layers, but it seems to be a different concept, since "Mul" isn't present in that list.
I'm worried that this might be a duplicate of this question but "layers" aren't explained (are they tensors?) and graph.get_operations() seems to be deprecated, or maybe I'm using it wrong.
As MohamedEzz wrote, there are no layers in TensorFlow graphs. There are only operations that can be placed under the same name scope.
Usually the operations of a single layer are placed under the same scope, and applications that are aware of the name scope concept can display them grouped.
One such application is TensorBoard. I believe that using TensorBoard is the easiest way to find node names.
Consider the following example:
import tensorflow as tf
import tensorflow.contrib.slim.nets as nets
input_placeholder = tf.placeholder(tf.float32, shape=(None, 224, 224, 3))
network = nets.inception.inception_v3(input_placeholder)
writer = tf.summary.FileWriter('.', tf.get_default_graph())
writer.close()
It creates a placeholder for input data, then creates the Inception v3 network and saves the event data (with the graph) in the current directory.
Launching TensorBoard in the same directory makes it possible to view the graph structure:
tensorboard --logdir .
TensorBoard prints the UI URL to the console:
Starting TensorBoard 41 on port 6006
(You can navigate to http://192.168.128.73:6006)
In the graph view, locate the node you are interested in and select it to find its name (shown in the upper-left information pane). The original answer included screenshots of the graph with the input and output nodes selected.
Please note that you usually need tensor names, not node names. In most cases it is enough to append :0 to the node name to get the tensor name.
For example, to run the Inception v3 network created above using names from the graph, use the following code (a continuation of the code above):
import numpy as np
data = np.random.randn(1, 224, 224, 3) # just random data
session = tf.InteractiveSession()
session.run(tf.global_variables_initializer())
result = session.run('InceptionV3/Predictions/Softmax:0', feed_dict={'Placeholder:0': data})
# result.shape = (1, 1000)
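If you prefer not to launch TensorBoard, a small alternative sketch (continuing the code above) is to print the operation names directly from the default graph; graph.get_operations() is the same call mentioned in the question and still works in TensorFlow 1.x:
import tensorflow as tf

graph = tf.get_default_graph()
for op in graph.get_operations():
    print(op.name)  # e.g. 'Placeholder', ..., 'InceptionV3/Predictions/Softmax'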
In the core of TensorFlow there are ops (operations) and tensors (n-dimensional arrays). Each op takes tensors and gives back tensors. Layers are just convenience wrappers around a number of ops that represent a neural network layer.
For example, a convolution layer is composed mainly of 3 ops:
conv2d op : this is what slides a kernel over the input tensor and does element-wise multiplication between the kernel and the underlying input window.
bias_add op : adds the biases to the tensor coming out of the conv2d op
activation op : applies an activation function element-wise to the output tensor of the bias_add op
To run a tensorflow model, you provide feeds (inputs) and fetches (desired outputs). These are tensors, or tensor names.
From this line of code Inception_model, it seems that what you need is a tensor named 'predictions' which has the n_class output probabilities.
What you observed (softmax) is the type of the op that produced the predictions tensor.
As for the input tensor name, the inception_model.py code does not show it, since it's an argument to the function. So it depends on what name you have given to that input tensor.
When you create your layers or variables, add the parameter called name:
with tf.name_scope("output"):
    W2 = tf.Variable(tf.truncated_normal([num_filters, num_classes], stddev=0.1), name="W2")
    b2 = tf.Variable(tf.constant(0.1, shape=[num_classes]), name="b2")
    scores = tf.nn.xw_plus_b(h_pool_flat, W2, b2, name="scores")
    pred_y = tf.nn.softmax(scores, name="pred_y")
In this case I can access the final predicted values by using "output/pred_y". If you don't have a name_scope, you can just use "pred_y" to get to the values.
conv = tf.nn.conv1d(word_embeddedings,
                    W1,
                    stride=stride_size,
                    padding="VALID",
                    name="conv")  # will have dimensions [batch_size, out_width, num_filters]; out_width is a function of max_words, filter_size and stride_size
# Apply nonlinearity
h = tf.nn.relu(tf.nn.bias_add(conv, b1), name="relu")
I called the layer "conv" and used it in the next layer. Paste your snippet like I have done here
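To connect this with the earlier answer, here is a hedged, self-contained sketch of fetching a tensor by its scoped name in a session; the placeholder name, layer, and shapes below are hypothetical stand-ins, not the question's actual model.
import numpy as np
import tensorflow as tf

# Hypothetical stand-alone graph just to demonstrate fetching a named tensor.
x = tf.placeholder(tf.float32, shape=(None, 4), name="input_x")
with tf.name_scope("output"):
    logits = tf.layers.dense(x, 3, name="scores")
    pred_y = tf.nn.softmax(logits, name="pred_y")

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    result = sess.run("output/pred_y:0",
                      feed_dict={"input_x:0": np.zeros((1, 4), dtype=np.float32)})
    print(result.shape)  # (1, 3)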

Resizing images in Keras ImageDataGenerator flow methods

The Keras ImageDataGenerator class provides the two flow methods flow(X, y) and flow_from_directory(directory) (https://keras.io/preprocessing/image/).
Why is the parameter
target_size: tuple of integers, default: (256, 256). The dimensions to which all images found will be resized
only provided by flow_from_directory(directory)? And what is the most concise way to add reshaping of images to the preprocessing pipeline using flow(X, y)?
flow_from_directory(directory) generates augmented images from a directory containing an arbitrary collection of images, so the target_size parameter is needed to make all the images the same shape.
flow(X, y), on the other hand, augments images that are already stored in a sequence X, which is just a numpy array and can easily be preprocessed/resized before being passed to flow, so there is no need for a target_size parameter. For resizing I prefer scipy.misc.imresize over PIL.Image.resize or cv2.resize, as it operates directly on numpy image data.
import numpy as np
import scipy.misc

new_shape = (28, 28, 3)
X_train_new = np.empty(shape=(X_train.shape[0],) + new_shape)
for idx in range(X_train.shape[0]):
    X_train_new[idx] = scipy.misc.imresize(X_train[idx], new_shape)
For a large training dataset, performing transformations such as resizing on the entire training data is very memory-consuming. As Keras does in ImageDataGenerator, it's better to do it batch by batch. As far as I know, there are two ways to achieve this other than operating on the whole dataset:
You can use a Lambda layer to create a layer and then feed the original training data to it; the output is the resized images you need.
Here is sample code if you use TensorFlow as the backend of Keras:
import tensorflow as tf

original_dim = (32, 32, 3)
target_size = (64, 64)
inputs = tf.keras.layers.Input(original_dim)
x = tf.keras.layers.Lambda(lambda image: tf.image.resize(image, target_size))(inputs)
As #Retardust mentioned, maybe you can customize your own ImageDataGenerator as well as the preprocessing_function.
For anyone else who wants to do this: the .flow method of ImageDataGenerator does not have a target_size parameter, and we cannot resize an image using the preprocessing_function parameter either, because the documentation states: "The function will run after the image is resized and augmented. The function should take one argument: one image (Numpy tensor with rank 3), and should output a Numpy tensor with the same shape."
So in order to use .flow, you will have to pass resized images only; otherwise use a custom generator that resizes them on the fly.
Here's a sample of a custom generator in Keras (it could also be made using a plain Python generator or any other method):
class Custom_Generator(keras.utils.Sequence):
    def __init__(self, ..., datapath, batch_size, ...):
        ...

    def __len__(self):
        # calculate data length, something like len(train_labels)
        ...

    def load_and_preprocess_function(self, label_names, ...):
        # do something...
        # load data for the batch using label names with whatever library
        ...

    def __getitem__(self, idx):
        batch_y = train_labels[idx:idx + batch_size]
        batch_x = self.load_and_preprocess_function()
        return (batch_x, batch_y)
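As a concrete (hedged) fill-in of that skeleton, the generator below loads images from a list of file paths and resizes them on the fly with skimage; the file-path list and label array are hypothetical placeholders, not part of the original answer.
import numpy as np
import keras
from skimage import io, transform

class ResizeGenerator(keras.utils.Sequence):
    def __init__(self, file_paths, labels, batch_size, target_shape=(28, 28, 3)):
        self.file_paths = file_paths      # hypothetical list of image file paths
        self.labels = labels              # hypothetical array of labels
        self.batch_size = batch_size
        self.target_shape = target_shape

    def __len__(self):
        return int(np.ceil(len(self.file_paths) / float(self.batch_size)))

    def __getitem__(self, idx):
        paths = self.file_paths[idx * self.batch_size:(idx + 1) * self.batch_size]
        batch_y = np.array(self.labels[idx * self.batch_size:(idx + 1) * self.batch_size])
        batch_x = np.array([transform.resize(io.imread(p), self.target_shape) for p in paths])
        return batch_x, batch_y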
Since the scipy.misc.imresize approach shown above is now deprecated, you can resize with skimage instead:
X_data_resized = numpy.asarray([skimage.transform.resize(image, new_shape) for image in X_data])
There is also a (newer) method, flow_from_dataframe(), which accepts a Pandas dataframe with file paths and y data as columns, and it also lets you specify the target size. Just in case your image data is not organized directory-wise!
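For example, a hedged sketch of flow_from_dataframe (the column names and file paths here are placeholders) could look like this:
import pandas as pd
from keras.preprocessing.image import ImageDataGenerator

df = pd.DataFrame({"filename": ["/path/to/img1.png", "/path/to/img2.png"],
                   "class": ["cat", "dog"]})

datagen = ImageDataGenerator(rescale=1. / 255)
generator = datagen.flow_from_dataframe(
    df,
    x_col="filename",
    y_col="class",
    target_size=(64, 64),   # images are resized to this shape on the fly
    batch_size=32,
    class_mode="categorical",
)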
