Keras: Unable to add Dense layer to VGG16 - machine-learning

I am trying to fine-tune the last convolution block of VGG16 (ImageNet pretrained) with a few dense layers added on top. My code is below. I am not able to figure out why I get this error upon execution: Error when checking target: expected sequential_9 to have shape (None, 11) but got array with shape (4, 1). The number of classes in my dataset is 11 and the batch size is 4. Am I somehow mixing these two? Please help.
def finetune(epochs):
    num_classes = 11
    batch_size = 4
    base_model = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
    print('Model loaded.')
    print(base_model.output_shape[1:])
    top_model = Sequential()
    top_model.add(Flatten(input_shape=base_model.output_shape[1:]))
    top_model.add(Dense(512, activation='relu', kernel_regularizer=regularizers.l2(0.01)))
    top_model.add(Dropout(0.25))
    top_model.add(Dense(256, activation='relu', kernel_regularizer=regularizers.l2(0.01)))
    top_model.add(Dropout(0.25))
    top_model.add(Dense(num_classes, activation='softmax'))
    top_model.load_weights('vgg_ft_best.h5')
    # add the model on top of the convolutional base
    #model = Model(inputs= base_model.input, outputs= top_model(base_model.output))
    #base_model.add(top_model)
    #print(base_model.summary())
    new_model = Sequential()
    for l in base_model.layers:
        new_model.add(l)
    # CONCATENATE THE TWO MODELS
    new_model.add(top_model)
    print(new_model.summary())
    # set the first 10 layers (up to the last conv block)
    # to non-trainable (weights will not be updated)
    for layer in new_model.layers[:11]:
        layer.trainable = False
    # prepare data augmentation configuration
    train_datagen = ImageDataGenerator(
        rescale=1. / 255,
        shear_range=0.2,
        zoom_range=0.2,
        horizontal_flip=True)
    train_data_dir = "./images/train"
    validation_data_dir = "./images/validation"
    test_datagen = ImageDataGenerator(rescale=1. / 255)
    train_generator = train_datagen.flow_from_directory(
        train_data_dir,
        target_size=(224, 224),
        batch_size=batch_size,
        class_mode='binary')
    validation_generator = test_datagen.flow_from_directory(
        validation_data_dir,
        target_size=(224, 224),
        batch_size=batch_size,
        class_mode='binary')
    num_train_samples = len(train_generator.filenames)
    num_validation_samples = len(validation_generator.filenames)
    print(num_validation_samples)
    new_model.compile(loss='categorical_crossentropy',
                      optimizer=optimizers.SGD(lr=1e-4, momentum=0.9),
                      metrics=['accuracy'])
    # fine-tune the model
    new_model.fit_generator(
        train_generator,
        steps_per_epoch=int(num_train_samples / batch_size),
        epochs=epochs,
        validation_data=validation_generator,
        validation_steps=int(num_validation_samples / batch_size))

The problem is your data (target = true outputs, true labels, etc.).
Your target has shape (batch, 1), while your model with 11 classes expects (batch, 11).
So the problem lies in your generator: it must output tensors with 11 classes.
For that, see the documentation for flow_from_directory, in particular the following parts:
classes: optional list of class subdirectories (e.g. ['dogs', 'cats']). Default: None. If not provided, the list of classes will be automatically inferred from the subdirectory names/structure under directory, where each subdirectory will be treated as a different class (and the order of the classes, which will map to the label indices, will be alphanumeric). The dictionary containing the mapping from class names to class indices can be obtained via the attribute class_indices.
class_mode: one of "categorical", "binary", "sparse", "input" or None. Default: "categorical". Determines the type of label arrays that are returned: "categorical" will be 2D one-hot encoded labels, "binary" will be 1D binary labels, "sparse" will be 1D integer labels, "input" will be images identical to input images (mainly used to work with autoencoders). If None, no labels are returned (the generator will only yield batches of image data, which is useful to use model.predict_generator(), model.evaluate_generator(), etc.). Please note that in case of class_mode None, the data still needs to reside in a subdirectory of directory for it to work correctly.
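As a quick sanity check, you can print the mapping the generator inferred (using the train_generator from the question's code):

print(train_generator.class_indices)  # maps the 11 subdirectory names to label indices 0..10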
Solution
You need to organize your images into 11 different folders, each folder being a different class.
You need to use class_mode='categorical' to get the (batch, 11) format.
Now, if your classes are not categorical (one image can have two or more classes), then you need to create your own custom generator.
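For the usual single-label case, a minimal sketch of the corrected generator call (directory, image size and batch size taken from the question's finetune function):

train_generator = train_datagen.flow_from_directory(
    train_data_dir,              # 11 class subdirectories under ./images/train
    target_size=(224, 224),
    batch_size=batch_size,
    class_mode='categorical')    # yields one-hot labels of shape (batch, 11)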

Related

1D Convolutions CNN Keras

I'm very new to Keras and I'm trying to implement a CNN using 1D convolutions for binary classification on raw time series data. Each training example has 160 time steps and I have 120 training examples. The training data is of shape (120,160). Here is the code:
X_input = Input((160,1))
X = Conv1D(6, 5, strides=1, name='conv1', kernel_initializer=glorot_uniform(seed=0))(X_input)
X = Activation('relu')(X)
X = MaxPooling1D(2, strides=2)(X)
X = Conv1D(16, 5, strides=1, name='conv2', kernel_initializer=glorot_uniform(seed=0))(X)
X = Activation('relu')(X)
X = MaxPooling1D(2, strides=2)(X)
X = Flatten()(X)
X = Dense(120, activation='relu', name='fc1', kernel_initializer=glorot_uniform(seed=0))(X)
X = Dense(84, activation='relu', name='fc2', kernel_initializer=glorot_uniform(seed=0))(X)
X = Dense(2, activation='sigmoid', name='fc3', kernel_initializer=glorot_uniform(seed=0))(X)
model = Model(inputs=X_input, outputs=X, name='model')
X_train = X_train.reshape(-1,160,1) # shape (120,160,1)
t_train = y_train.reshape(-1,1,1) # shape (120,1,1)
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(X_train, y_train)
The error that I get is expected fc3 to have 2 dimensions, but got array with shape (120, 1, 1).
I tried removing each layer and just leaving the 'conv1' component but I get the error expected conv1 to have shape (156, 6) but got array with shape (1, 1). It seems like my input shape is wrong; however, looking at other examples it seems that this worked for other people.
I think the issue is not your inputs, but rather your targets.
The output of the model is 2-dimensional, but when Keras checks it against the targets, it finds that the targets are in an array with shape (120, 1, 1).
You can try changing the y_train reshape line as follows (fyi, it also seems that you accidentally typed t_train instead of y_train):
y_train = y_train.reshape(-1,1)
Also, it seems that you probably want to use 1 instead of 2 for the last Dense layer (see Difference between Dense(2) and Dense(1) as the final layer of a binary classification CNN?)
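If you go with a single output unit, a minimal sketch of the adjusted tail of the question's model (assuming 0/1 labels; binary_crossentropy is the usual pairing with a single sigmoid unit):

X = Dense(1, activation='sigmoid', name='fc3', kernel_initializer=glorot_uniform(seed=0))(X)  # one unit for binary classification
model = Model(inputs=X_input, outputs=X, name='model')
y_train = y_train.reshape(-1, 1)  # targets of shape (120, 1) to match the 2D output
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(X_train, y_train)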

How to work with big dataset for multi-label image classification in terms of memory and batches

I am working on a dataset of 300K images doing multi-label image classification. So far I took a small dataset of around 7k images, but the code either raises a memory error or my notebook just dies. The code below converts all images to a numpy array at once, which overwhelms my memory when the last line of code is executed. train.csv contains the image filenames and one-hot encoded labels.
The code is like this:
data = pd.read_csv('train.csv')
img_width = 400
img_height = 400
img_vectors = []
for i in range(data.shape[0]):
    path = 'Images/' + data['Id'][i]
    img = image.load_img(path, target_size=(img_width, img_height, 3))
    img = image.img_to_array(img)
    img = img / 255.0
    img_vectors.append(img)
img_vectors = np.array(img_vectors)
Error Message:
MemoryError Traceback (most recent call last)
<ipython-input-13-dd2302ae54e1> in <module>
----> 1 img_vectors = np.array(img_vectors)
MemoryError: Unable to allocate array with shape (7344, 400, 400, 3) and data type float32
I guess I need to process the images in smaller batches to handle the memory issue, to avoid having one array with all the image data at the same time.
On an earlier project I did image classification without multi-label on around 225k images. That code doesn't convert all the image data into one giant array; it rather puts the image data into smaller batches:
#image preparation
if K.image_data_format() == "channels_first":
    input_shape = (3, img_width, img_height)
else:
    input_shape = (img_width, img_height, 3)
train_datagen = ImageDataGenerator(rescale=1./255, horizontal_flip=True)
test_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(train_data_dir, target_size=(img_width, img_height), batch_size=batch_size, class_mode='categorical')
validation_generator = test_datagen.flow_from_directory(validation_data_dir, target_size=(img_width, img_height), batch_size=batch_size, class_mode='categorical')
model = Sequential()
model.add(Conv2D(32, (3,3), input_shape=input_shape))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
...
model.add(Dense(17))
model.add(BatchNormalization(axis=1, momentum=0.6))
model.add(Activation('softmax'))
model.summary()
model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
model.fit_generator(
    train_generator,
    steps_per_epoch=nb_train_samples // batch_size,
    epochs=epochs,
    validation_data=validation_generator,
    validation_steps=nb_validation_samples // batch_size,
    class_weight=class_weight
)
So what I actually need is an approach for handling big image datasets for multi-label image classification without getting into trouble with memory.
Ideally I would work with a csv file containing the image filenames and one-hot encoded labels, in combination with array batches for learning.
Any help or guesses here would be greatly appreciated.
The easiest way to solve the problem you are facing is to write a custom data generator; here is a tutorial that shows how to do this. The idea is that, instead of using flow_from_directory, you create a custom data loader that reads each image from its source path and gives y the corresponding labels. Practically, since your data is stored in a .csv file where each row contains the path to an image and the labels present in that image, your generator can have a __getitem__(self, index) method that reads the image from the path in row number index and returns it along with the target obtained by reading the labels in that row and one-hot encoding them (summing the one-hot vectors when an image has several labels).
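A minimal sketch of such a generator built on keras.utils.Sequence (the class name and the csv layout, with the filename in the first column and one-hot labels in the remaining columns, are assumptions based on the question):

import numpy as np
import pandas as pd
from keras.utils import Sequence
from keras.preprocessing import image

class CsvImageSequence(Sequence):
    # Loads images batch by batch instead of holding all 300K in memory
    def __init__(self, csv_path, img_dir, batch_size=32, target_size=(400, 400)):
        self.df = pd.read_csv(csv_path)
        self.img_dir = img_dir
        self.batch_size = batch_size
        self.target_size = target_size

    def __len__(self):
        # number of batches per epoch
        return int(np.ceil(len(self.df) / self.batch_size))

    def __getitem__(self, index):
        rows = self.df.iloc[index * self.batch_size:(index + 1) * self.batch_size]
        # load and rescale only this batch's images
        X = np.array([image.img_to_array(image.load_img(self.img_dir + fname,
                                                        target_size=self.target_size)) / 255.0
                      for fname in rows.iloc[:, 0]])
        y = rows.iloc[:, 1:].values  # one-hot (multi-label) targets straight from the csv
        return X, y

# usage: model.fit_generator(CsvImageSequence('train.csv', 'Images/'), epochs=epochs)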

Transfer learning with CNTK and pre-trained ONNX model fails

I'm trying to use the ResNet-50 model from the ONNX model zoo and load and train it in CNTK for an image classification task. The first thing that confuses me is that the batch axis (not sure what's the official name for it, dynamic axis?) is set to 1 in this model:
[model input screenshot omitted]
Why is that? Couldn't it simply be [3x224x224]? In this model, for example, the input looks like this:
[model input screenshot omitted]
To load the model and use my own Dense layer, I use the following code:
def create_model(num_classes, input_features, freeze=False):
    base_model = load_model("restnet-50.onnx", format=ModelFormat.ONNX)
    feature_node = find_by_name(base_model, "gpu_0/data_0")
    last_node = find_by_uid(base_model, "Reshape2959")
    substitutions = {
        feature_node: placeholder(name='new_input')
    }
    cloned_layers = last_node.clone(CloneMethod.clone, substitutions)
    cloned_out = cloned_layers(input_features)
    z = Dense(num_classes, activation=softmax, name="prediction")(cloned_out)
    return z
For training I use (shortened):
# datasets = list of classes
feature = input_variable(shape=(1, 3, 224, 224))
label = input_variable(shape=(1,3))
model = create_model(len(datasets), feature)
loss = cross_entropy_with_softmax(model, label)
# some definitions for learner, epochs, ProgressPrinters missing
for epoch in range(epochs):
    loss.train((X_current, y_current), parameter_learners=[learner], callbacks=[progress_printer])
X_current is a single image and y_current the corresponding class label, both encoded as numpy arrays with the following shapes:
X_current.shape
(1, 3, 224, 224)
y_current.shape
(1, 3)
When I try to train the model, I get
"ValueError: ToBatchAxis7504 ToBatchAxisNode operation can only operate on tensor without minibatch data (no layout)"
What's wrong here?

Keras LSTM input features and incorrect dimensional data input

So I'm trying to practice using LSTMs in Keras, and the whole (samples, timesteps, features) 3D input is confusing me.
So I have some stock data, and if the next item in the list is beyond the threshold of 5, which is +-2.50, it buys or sells; if it is within that threshold, it holds. These are my labels: my Y.
For my features, my X, I have a dataframe of [500, 1, 3]: 500 samples, a timestep of 1 (since each data point is a 1-hour increment), and 3 features. But I get this error:
ValueError: Error when checking model input: expected lstm_1_input to have 3 dimensions, but got array with shape (500, 3)
How can I fix this code and what am I doing wrong?
import json
import pandas as pd
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
"""
Sample of JSON file
{"time":"2017-01-02T01:56:14.000Z","usd":8.14},
{"time":"2017-01-02T02:56:14.000Z","usd":8.16},
{"time":"2017-01-02T03:56:15.000Z","usd":8.14},
{"time":"2017-01-02T04:56:16.000Z","usd":8.15}
"""
file = open("E.json", "r", encoding="utf8")
file = json.load(file)
"""
If the price jump of the next item is > or < +-2.50 then append 'BUY' or 'SELL'
If it's in the range of +-2.50 then append 'HOLD'
These are my classifier labels
"""
data = []
for row in range(len(file['data'])):
    row2 = row + 1
    if row2 == len(file['data']):
        break
    else:
        difference = file['data'][row]['usd'] - file['data'][row2]['usd']
        if difference > 2.50:
            data.append((file['data'][row]['usd'], 'SELL'))
        elif difference < -2.50:
            data.append((file['data'][row]['usd'], 'BUY'))
        else:
            data.append((file['data'][row]['usd'], 'HOLD'))
"""
add the price, the time step (which is 1) and the features (which is 3)
"""
frame = pd.DataFrame(data)
features = pd.DataFrame()
# train LSTM
for x in range(500):
    series = pd.Series(data=[500, 1, frame.iloc[x][0]])
    features = features.append(series, ignore_index=True)
labels = frame.iloc[16000:16500][1]
# test
#yt = frame.iloc[16500:16512][0]
#xt = pd.get_dummies(frame.iloc[16500:16512][1])
# create LSTM
model = Sequential()
model.add(LSTM(3, input_shape=features.shape, activation='relu', return_sequences=False))
model.add(Dense(2, activation='relu'))
model.add(Dense(1, activation='relu'))
model.compile(loss='mse', optimizer='adam', metrics=['accuracy'])
model.fit(x=features.as_matrix(), y=labels.as_matrix())
"""
ERROR
Anaconda3\envs\Final\python.exe C:/Users/Def/PycharmProjects/Ether/Main.py
Using Theano backend.
Traceback (most recent call last):
File "C:/Users/Def/PycharmProjects/Ether/Main.py", line 62, in <module>
model.fit(x=features.as_matrix(), y=labels.as_matrix())
File "\Anaconda3\envs\Final\lib\site-packages\keras\models.py", line 845, in fit
initial_epoch=initial_epoch)
File "\Anaconda3\envs\Final\lib\site-packages\keras\engine\training.py", line 1405, in fit
batch_size=batch_size)
File "\Anaconda3\envs\Final\lib\site-packages\keras\engine\training.py", line 1295, in _standardize_user_data
exception_prefix='model input')
File "\Anaconda3\envs\Final\lib\site-packages\keras\engine\training.py", line 121, in _standardize_input_data
str(array.shape))
ValueError: Error when checking model input: expected lstm_1_input to have 3 dimensions, but got array with shape (500, 3)
"""
Thanks.
This is my first post here; I hope it can be useful. I will try to do my best.
First, you need to create a 3-dimensional array to work with input_shape in Keras. You can see this in the Keras documentation, or in a better way:
from keras.models import Sequential
Sequential?
Linear stack of layers.
Arguments
layers: list of layers to add to the model.
# Note
The first layer passed to a Sequential model
should have a defined input shape. What that
means is that it should have received an input_shape
or batch_input_shape argument,
or for some type of layers (recurrent, Dense...)
an input_dim argument.
Example
```python
model = Sequential()
# first layer must have a defined input shape
model.add(Dense(32, input_dim=500))
# afterwards, Keras does automatic shape inference
model.add(Dense(32))
# also possible (equivalent to the above):
model = Sequential()
model.add(Dense(32, input_shape=(500,)))
model.add(Dense(32))
# also possible (equivalent to the above):
model = Sequential()
# here the batch dimension is None,
# which means any batch size will be accepted by the model.
model.add(Dense(32, batch_input_shape=(None, 500)))
model.add(Dense(32))
```
After that, to transform your 2-dimensional arrays into 3 dimensions, check np.newaxis.
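For instance, a minimal sketch of that reshape for the question's data (500 samples, 1 timestep, 3 features); note that input_shape should be (timesteps, features), without the sample dimension:

import numpy as np

X = features.as_matrix()   # shape (500, 3)
X = X[:, np.newaxis, :]    # shape (500, 1, 3): (samples, timesteps, features)

model = Sequential()
model.add(LSTM(3, input_shape=(1, 3), activation='relu', return_sequences=False))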
Useful commands that help you more than you expect:
- Sequential?
- Sequential??
- print(list(dir(Sequential)))
Best

number of input channels does not match corresponding dimension of filter in Keras

I am using Keras to build a model based on ResNet50; the code is shown below:
input_crop = Input(shape=(3, 224, 224))
# extract feature from image crop
resnet = ResNet50(include_top=False, weights='imagenet')
for layer in resnet.layers:  # set resnet as non-trainable
    layer.trainable = False
crop_encoded = resnet(input_crop)
However, I got an error:
'ValueError: number of input channels does not match corresponding dimension of filter, 224 != 3'
how can I fix it?
Such errors are routinely produced due to the different image format used by the Theano & TensorFlow backends for Keras. In your case, the images are obviously in channels_first format (Theano), while most probably you use a TensorFlow backend which needs them in channels_last format.
The MNIST CNN example in Keras provides a nice way to make your code immune to such issues, i.e. working for both Theano & TensorFlow backends - here is an adaptation for your data:
from keras import backend as K
img_rows, img_cols = 224, 224
if K.image_data_format() == 'channels_first':
    input_crop = input_crop.reshape(input_crop.shape[0], 3, img_rows, img_cols)
    input_shape = (3, img_rows, img_cols)
else:
    input_crop = input_crop.reshape(input_crop.shape[0], img_rows, img_cols, 3)
    input_shape = (img_rows, img_cols, 3)

input_crop = Input(shape=input_shape)
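With input_shape set according to the backend, the crop input can be fed to the frozen ResNet as before; a short sketch under the same assumptions:

input_crop = Input(shape=input_shape)  # (224, 224, 3) under TensorFlow's channels_last
resnet = ResNet50(include_top=False, weights='imagenet')
crop_encoded = resnet(input_crop)      # the channel dimension now matches the filters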
