InvalidArgumentError: Incompatible shapes: [15,3] vs. [100,3] - machine-learning

I have a dataset with more than 4000 images and 3 classes, and I'm reusing code for a capsule neural network that was originally written for 10 classes, modified for 3 classes. When I run the model, the following error occurs at the last step of the first epoch (44/45):
Epoch 1/16
44/45 [============================>.] - ETA: 28s - loss: 0.2304 - capsnet_loss: 0.2303 - decoder_loss: 0.2104 - capsnet_accuracy: 0.6598 - decoder_accuracy: 0.5781
InvalidArgumentError: Incompatible shapes: [15,3] vs. [100,3]
[[node gradient_tape/margin_loss/mul/Mul (defined at <ipython-input-22-9d913bd0e1fd>:11) ]] [Op:__inference_train_function_6157]
Function call stack:
train_function
Training code:
m = 100
epochs = 16
# Using EarlyStopping, end training when val_accuracy is not improved for 10 consecutive times
early_stopping = keras.callbacks.EarlyStopping(monitor='val_capsnet_accuracy', mode='max',
                                               patience=2, restore_best_weights=True)
# Using ReduceLROnPlateau, the learning rate is reduced by half when val_accuracy is not improved for 5 consecutive times
lr_scheduler = keras.callbacks.ReduceLROnPlateau(monitor='val_capsnet_accuracy', mode='max', factor=0.5, patience=4)
train_model.compile(optimizer=keras.optimizers.Adam(lr=0.001), loss=[margin_loss, 'mse'],
                    loss_weights=[1., 0.0005], metrics=['accuracy'])
train_model.fit([x_train, y_train], [y_train, x_train], batch_size=m, epochs=epochs,
                validation_data=([x_test, y_test], [y_test, x_test]),
                callbacks=[early_stopping, lr_scheduler])
The model is:
Model: "model"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_1 (InputLayer) [(100, 28, 28, 1)] 0
__________________________________________________________________________________________________
conv2d (Conv2D) (100, 27, 27, 256) 1280 input_1[0][0]
__________________________________________________________________________________________________
max_pooling2d (MaxPooling2D) (100, 27, 27, 256) 0 conv2d[0][0]
__________________________________________________________________________________________________
conv2d_1 (Conv2D) (100, 19, 19, 128) 2654336 max_pooling2d[0][0]
__________________________________________________________________________________________________
conv2d_2 (Conv2D) (100, 6, 6, 128) 1327232 conv2d_1[0][0]
__________________________________________________________________________________________________
reshape (Reshape) (100, 576, 8) 0 conv2d_2[0][0]
__________________________________________________________________________________________________
lambda (Lambda) (100, 576, 8) 0 reshape[0][0]
__________________________________________________________________________________________________
digitcaps (CapsuleLayer) (100, 3, 16) 221184 lambda[0][0]
__________________________________________________________________________________________________
input_2 (InputLayer) [(None, 3)] 0
__________________________________________________________________________________________________
mask (Mask) (100, 48) 0 digitcaps[0][0]
input_2[0][0]
__________________________________________________________________________________________________
capsnet (Length) (100, 3) 0 digitcaps[0][0]
__________________________________________________________________________________________________
decoder (Sequential) (None, 28, 28, 1) 1354000 mask[0][0]
==================================================================================================
Total params: 5,558,032
Trainable params: 5,558,032
Non-trainable params: 0
Input layer, convolutional layers and primary capsule
img_shape = (28, 28, 1)
inp = L.Input(img_shape, 100)
# Adding the first conv1 layer
conv1 = L.Conv2D(filters=256, kernel_size=(2, 2), activation='relu', padding='valid')(inp)
# Adding Maxpooling layer
maxpool1 = L.MaxPooling2D(pool_size=(1, 1))(conv1)
# Adding second convolutional layer
conv2 = L.Conv2D(filters=128, kernel_size=(9, 9), activation='relu', padding='valid')(maxpool1)
# Adding primary cap layer
conv2 = L.Conv2D(filters=8 * 16, kernel_size=(9, 9), strides=2, padding='valid', activation=None)(conv2)
# Adding the squash activation
reshape2 = L.Reshape([-1, 8])(conv2)
squashed_output = L.Lambda(squash)(reshape2)
code source
x_train.shape --> (4415, 28, 28, 1)
y_train.shape --> (4415, 3)
x_test.shape --> (1104, 28, 28, 1)
y_test.shape --> (1104, 3)
My code here

The input layer was built with a fixed batch size of 100, but 4415 training samples do not divide evenly by 100, so the final batch of the epoch contains only 15 samples and its shape no longer matches the fixed batch dimension. Make the training set size an exact multiple of the batch size (for example, trim it to 4400 samples), or pick a batch size that divides the data evenly.
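A minimal sketch of that fix, reusing the variable names from the question (truncating is just one option; shuffling before truncating avoids always dropping the same samples):
m = 100
n_train = (x_train.shape[0] // m) * m  # 4415 -> 4400
n_test = (x_test.shape[0] // m) * m    # 1104 -> 1100
x_train, y_train = x_train[:n_train], y_train[:n_train]
x_test, y_test = x_test[:n_test], y_test[:n_test]
train_model.fit([x_train, y_train], [y_train, x_train], batch_size=m, epochs=epochs,
                validation_data=([x_test, y_test], [y_test, x_test]),
                callbacks=[early_stopping, lr_scheduler])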

Related

How to store the gradients of Alexnet as numpy array (in each iteration) in Python?

I want to store the final gradient vector of a model as a numpy array. Is there an easy and intuitive way to do that using Tensorflow?
I want to store the gradient vectors of AlexNet (in a numpy array) for each iteration, until convergence.
We can do it as shown in the code below:
import tensorflow as tf
import numpy as np

print(tf.__version__)

# Define the input tensor
x = tf.constant([3.0, 6.0, 9.0])

# Define the gradient function
with tf.GradientTape() as g:
    g.watch(x)
    y = x * x
dy_dx = g.gradient(y, x)

# Output gradient tensor
print("Output Gradient Tensor:", dy_dx)

# Convert to array
a = np.asarray(dy_dx)
print("Gradient array:", a)
print("Array shape:", a.shape)
print("Output type:", type(a))
The Output of the code is -
2.1.0
Output Gradient Tensor: tf.Tensor([ 6. 12. 18.], shape=(3,), dtype=float32)
Gradient array: [ 6. 12. 18.]
Array shape: (3,)
Output type: <class 'numpy.ndarray'>
Below is a model that resembles the AlexNet architecture, capturing the gradients for every epoch.
# (1) Importing dependency
import keras
from keras import backend as K
from keras.models import Sequential
from keras.layers import Dense, Activation, Dropout, Flatten, Conv2D, MaxPooling2D
from keras.layers.normalization import BatchNormalization
import numpy as np
np.random.seed(1000)
# (2) Get Data
import tflearn.datasets.oxflower17 as oxflower17
x, y = oxflower17.load_data(one_hot=True)
# (3) Create a sequential model
model = Sequential()
# 1st Convolutional Layer
model.add(Conv2D(filters=96, input_shape=(224,224,3), kernel_size=(11,11), strides=(4,4), padding='valid'))
model.add(Activation('relu'))
# Pooling
model.add(MaxPooling2D(pool_size=(2,2), strides=(2,2), padding='valid'))
# Batch Normalisation before passing it to the next layer
model.add(BatchNormalization())
# 2nd Convolutional Layer
model.add(Conv2D(filters=256, kernel_size=(11,11), strides=(1,1), padding='valid'))
model.add(Activation('relu'))
# Pooling
model.add(MaxPooling2D(pool_size=(2,2), strides=(2,2), padding='valid'))
# Batch Normalisation
model.add(BatchNormalization())
# 3rd Convolutional Layer
model.add(Conv2D(filters=384, kernel_size=(3,3), strides=(1,1), padding='valid'))
model.add(Activation('relu'))
# Batch Normalisation
model.add(BatchNormalization())
# 4th Convolutional Layer
model.add(Conv2D(filters=384, kernel_size=(3,3), strides=(1,1), padding='valid'))
model.add(Activation('relu'))
# Batch Normalisation
model.add(BatchNormalization())
# 5th Convolutional Layer
model.add(Conv2D(filters=256, kernel_size=(3,3), strides=(1,1), padding='valid'))
model.add(Activation('relu'))
# Pooling
model.add(MaxPooling2D(pool_size=(2,2), strides=(2,2), padding='valid'))
# Batch Normalisation
model.add(BatchNormalization())
# Passing it to a dense layer
model.add(Flatten())
# 1st Dense Layer
model.add(Dense(4096, input_shape=(224*224*3,)))
model.add(Activation('relu'))
# Add Dropout to prevent overfitting
model.add(Dropout(0.4))
# Batch Normalisation
model.add(BatchNormalization())
# 2nd Dense Layer
model.add(Dense(4096))
model.add(Activation('relu'))
# Add Dropout
model.add(Dropout(0.4))
# Batch Normalisation
model.add(BatchNormalization())
# 3rd Dense Layer
model.add(Dense(1000))
model.add(Activation('relu'))
# Add Dropout
model.add(Dropout(0.4))
# Batch Normalisation
model.add(BatchNormalization())
# Output Layer
model.add(Dense(17))
model.add(Activation('softmax'))
model.summary()
# (4) Compile
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
# (5) Define Gradient Function
def get_gradient_func(model):
    grads = K.gradients(model.total_loss, model.trainable_weights)
    inputs = model.model._feed_inputs + model.model._feed_targets + model.model._feed_sample_weights
    func = K.function(inputs, grads)
    return func
# (6) Train the model such that gradients are captured for every epoch
epoch_gradient = []
for epoch in range(1, 5):
    model.fit(x, y, batch_size=64, epochs=epoch, initial_epoch=(epoch - 1), verbose=1, validation_split=0.2, shuffle=True)
    get_gradient = get_gradient_func(model)
    grads = get_gradient([x, y, np.ones(len(y))])
    epoch_gradient.append(grads)
# (7) Convert to a 2 dimensional array of (epoch, gradients) type
gradient = np.asarray(epoch_gradient)
print("Total number of epochs run:", epoch)
print("Gradient Array has the shape:", gradient.shape)
Output: gradient is a two-dimensional array of shape (epoch, gradients) that holds the gradients captured after every epoch while retaining the per-layer structure of the network.
Model: "sequential_34"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_115 (Conv2D) (None, 54, 54, 96) 34944
_________________________________________________________________
activation_213 (Activation) (None, 54, 54, 96) 0
_________________________________________________________________
max_pooling2d_83 (MaxPooling (None, 27, 27, 96) 0
_________________________________________________________________
batch_normalization_180 (Bat (None, 27, 27, 96) 384
_________________________________________________________________
conv2d_116 (Conv2D) (None, 17, 17, 256) 2973952
_________________________________________________________________
activation_214 (Activation) (None, 17, 17, 256) 0
_________________________________________________________________
max_pooling2d_84 (MaxPooling (None, 8, 8, 256) 0
_________________________________________________________________
batch_normalization_181 (Bat (None, 8, 8, 256) 1024
_________________________________________________________________
conv2d_117 (Conv2D) (None, 6, 6, 384) 885120
_________________________________________________________________
activation_215 (Activation) (None, 6, 6, 384) 0
_________________________________________________________________
batch_normalization_182 (Bat (None, 6, 6, 384) 1536
_________________________________________________________________
conv2d_118 (Conv2D) (None, 4, 4, 384) 1327488
_________________________________________________________________
activation_216 (Activation) (None, 4, 4, 384) 0
_________________________________________________________________
batch_normalization_183 (Bat (None, 4, 4, 384) 1536
_________________________________________________________________
conv2d_119 (Conv2D) (None, 2, 2, 256) 884992
_________________________________________________________________
activation_217 (Activation) (None, 2, 2, 256) 0
_________________________________________________________________
max_pooling2d_85 (MaxPooling (None, 1, 1, 256) 0
_________________________________________________________________
batch_normalization_184 (Bat (None, 1, 1, 256) 1024
_________________________________________________________________
flatten_34 (Flatten) (None, 256) 0
_________________________________________________________________
dense_99 (Dense) (None, 4096) 1052672
_________________________________________________________________
activation_218 (Activation) (None, 4096) 0
_________________________________________________________________
dropout_66 (Dropout) (None, 4096) 0
_________________________________________________________________
batch_normalization_185 (Bat (None, 4096) 16384
_________________________________________________________________
dense_100 (Dense) (None, 4096) 16781312
_________________________________________________________________
activation_219 (Activation) (None, 4096) 0
_________________________________________________________________
dropout_67 (Dropout) (None, 4096) 0
_________________________________________________________________
batch_normalization_186 (Bat (None, 4096) 16384
_________________________________________________________________
dense_101 (Dense) (None, 1000) 4097000
_________________________________________________________________
activation_220 (Activation) (None, 1000) 0
_________________________________________________________________
dropout_68 (Dropout) (None, 1000) 0
_________________________________________________________________
batch_normalization_187 (Bat (None, 1000) 4000
_________________________________________________________________
dense_102 (Dense) (None, 17) 17017
_________________________________________________________________
activation_221 (Activation) (None, 17) 0
=================================================================
Total params: 28,096,769
Trainable params: 28,075,633
Non-trainable params: 21,136
_________________________________________________________________
Train on 1088 samples, validate on 272 samples
Epoch 1/1
1088/1088 [==============================] - 22s 20ms/step - loss: 3.1251 - acc: 0.2178 - val_loss: 13.0005 - val_acc: 0.1140
Train on 1088 samples, validate on 272 samples
Epoch 2/2
128/1088 [==>...........................] - ETA: 1s - loss: 2.3913 - acc: 0.2656/usr/local/lib/python3.6/dist-packages/keras/engine/sequential.py:111: UserWarning: `Sequential.model` is deprecated. `Sequential` is a subclass of `Model`, you can just use your `Sequential` instance directly.
warnings.warn('`Sequential.model` is deprecated. '
1088/1088 [==============================] - 2s 2ms/step - loss: 2.2318 - acc: 0.3465 - val_loss: 9.6171 - val_acc: 0.1912
Train on 1088 samples, validate on 272 samples
Epoch 3/3
64/1088 [>.............................] - ETA: 1s - loss: 1.5143 - acc: 0.5000/usr/local/lib/python3.6/dist-packages/keras/engine/sequential.py:111: UserWarning: `Sequential.model` is deprecated. `Sequential` is a subclass of `Model`, you can just use your `Sequential` instance directly.
warnings.warn('`Sequential.model` is deprecated. '
1088/1088 [==============================] - 2s 2ms/step - loss: 1.8109 - acc: 0.4320 - val_loss: 4.3375 - val_acc: 0.3162
Train on 1088 samples, validate on 272 samples
Epoch 4/4
64/1088 [>.............................] - ETA: 1s - loss: 1.7827 - acc: 0.4688/usr/local/lib/python3.6/dist-packages/keras/engine/sequential.py:111: UserWarning: `Sequential.model` is deprecated. `Sequential` is a subclass of `Model`, you can just use your `Sequential` instance directly.
warnings.warn('`Sequential.model` is deprecated. '
1088/1088 [==============================] - 2s 2ms/step - loss: 1.5861 - acc: 0.4871 - val_loss: 3.4091 - val_acc: 0.3787
Total number of epochs run: 4
Gradient Array has the shape: (4, 34)
/usr/local/lib/python3.6/dist-packages/keras/engine/sequential.py:111: UserWarning: `Sequential.model` is deprecated. `Sequential` is a subclass of `Model`, you can just use your `Sequential` instance directly.
warnings.warn('`Sequential.model` is deprecated. '
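Note that K.gradients and model.model._feed_inputs rely on older Keras internals. If the same model is built with tf.keras on TensorFlow 2.x, a GradientTape-based snapshot is an alternative; a sketch, assuming model, x and y are the tf.keras equivalents of the objects above (the 64-sample probe batch is an arbitrary choice):
import tensorflow as tf

loss_fn = tf.keras.losses.CategoricalCrossentropy()
epoch_gradient = []
for epoch in range(4):
    model.fit(x, y, batch_size=64, epochs=1, verbose=1, validation_split=0.2, shuffle=True)
    # Snapshot the gradient of the loss on a probe batch after each epoch
    with tf.GradientTape() as tape:
        preds = model(x[:64], training=False)
        loss = loss_fn(y[:64], preds)
    grads = tape.gradient(loss, model.trainable_weights)
    epoch_gradient.append([g.numpy() for g in grads])
print("Gradient snapshots captured:", len(epoch_gradient))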

How to create a sub-model based on fine-tuned VGGNet16

The following network architecture is designed in order to find the similarity between two images.
Initially, I took VGGNet16 and removed the classification head:
vgg_model = VGG16(weights="imagenet", include_top=False,
                  input_tensor=Input(shape=(img_width, img_height, channels)))
Afterward, I set the parameter layer.trainable = False, so that the network will work as a feature extractor.
I passed two different images to the network:
encoded_left = vgg_model(input_left)
encoded_right = vgg_model(input_right)
This will produce two feature vectors. Then for the classification (whether they are similar or not), I used a metric network that consists of 2 convolution layers followed by pooling and 4 fully connected layers.
merge(encoded_left, encoded_right) -> conv-pool -> conv-pool -> reshape -> dense * 4 -> output
Hence, the model looks like:
model = Model(inputs=[left_image, right_image], outputs=output)
After training only the metric network, I unfroze the last convolution block for fine-tuning. Therefore, in the second training phase, the last convolution block is trained along with the metric network.
Now I want to use this fine-tuned network for another purpose. Here is the network summary:
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_1 (InputLayer) (None, 224, 224, 3) 0
__________________________________________________________________________________________________
input_2 (InputLayer) (None, 224, 224, 3) 0
__________________________________________________________________________________________________
vgg16 (Model) (None, 7, 7, 512) 14714688 input_1[0][0]
input_2[0][0]
__________________________________________________________________________________________________
Merged_feature_map (Concatenate (None, 7, 7, 1024) 0 vgg16[1][0]
vgg16[2][0]
__________________________________________________________________________________________________
mnet_conv1 (Conv2D) (None, 7, 7, 1024) 4195328 Merged_feature_map[0][0]
__________________________________________________________________________________________________
batch_normalization_1 (BatchNor (None, 7, 7, 1024) 4096 mnet_conv1[0][0]
__________________________________________________________________________________________________
activation_1 (Activation) (None, 7, 7, 1024) 0 batch_normalization_1[0][0]
__________________________________________________________________________________________________
mnet_pool1 (MaxPooling2D) (None, 3, 3, 1024) 0 activation_1[0][0]
__________________________________________________________________________________________________
mnet_conv2 (Conv2D) (None, 3, 3, 2048) 8390656 mnet_pool1[0][0]
__________________________________________________________________________________________________
batch_normalization_2 (BatchNor (None, 3, 3, 2048) 8192 mnet_conv2[0][0]
__________________________________________________________________________________________________
activation_2 (Activation) (None, 3, 3, 2048) 0 batch_normalization_2[0][0]
__________________________________________________________________________________________________
mnet_pool2 (MaxPooling2D) (None, 1, 1, 2048) 0 activation_2[0][0]
__________________________________________________________________________________________________
reshape_1 (Reshape) (None, 1, 2048) 0 mnet_pool2[0][0]
__________________________________________________________________________________________________
fc1 (Dense) (None, 1, 256) 524544 reshape_1[0][0]
__________________________________________________________________________________________________
batch_normalization_3 (BatchNor (None, 1, 256) 1024 fc1[0][0]
__________________________________________________________________________________________________
activation_3 (Activation) (None, 1, 256) 0 batch_normalization_3[0][0]
__________________________________________________________________________________________________
fc2 (Dense) (None, 1, 128) 32896 activation_3[0][0]
__________________________________________________________________________________________________
batch_normalization_4 (BatchNor (None, 1, 128) 512 fc2[0][0]
__________________________________________________________________________________________________
activation_4 (Activation) (None, 1, 128) 0 batch_normalization_4[0][0]
__________________________________________________________________________________________________
fc3 (Dense) (None, 1, 64) 8256 activation_4[0][0]
__________________________________________________________________________________________________
batch_normalization_5 (BatchNor (None, 1, 64) 256 fc3[0][0]
__________________________________________________________________________________________________
activation_5 (Activation) (None, 1, 64) 0 batch_normalization_5[0][0]
__________________________________________________________________________________________________
fc4 (Dense) (None, 1, 1) 65 activation_5[0][0]
__________________________________________________________________________________________________
batch_normalization_6 (BatchNor (None, 1, 1) 4 fc4[0][0]
__________________________________________________________________________________________________
activation_6 (Activation) (None, 1, 1) 0 batch_normalization_6[0][0]
__________________________________________________________________________________________________
reshape_2 (Reshape) (None, 1) 0 activation_6[0][0]
==================================================================================================
Total params: 27,880,517
Trainable params: 13,158,787
Non-trainable params: 14,721,730
As the last convolution block of VGGNet has already been trained on the custom dataset, I want to cut the network at the layer:
__________________________________________________________________________________________________
vgg16 (Model) (None, 7, 7, 512) 14714688 input_1[0][0]
input_2[0][0]
__________________________________________________________________________________________________
and use this as a powerful feature extractor. For this task, I loaded the fine-tuned model:
model = load_model('model.h5')
then tried to create the new model as:
new_model = Model(Input(shape=(img_width, img_height, channels)), model.layers[2].output)
This results in the following error:
`AttributeError: Layer vgg16 has multiple inbound nodes, hence the notion of "layer output" is ill-defined. Use `get_output_at(node_index)` instead.`
Please advise me where I am going wrong.
I have tried several ways, but the following method works perfectly. Instead of creating the new model as:
model = load_model('model.h5')
new_model = Model(Input(shape=(img_width, img_height, channels)), model.layers[2].output)
I used the following way:
model = load_model('model.h5')
sub_model = Sequential()
for layer in model.get_layer('vgg16').layers:
    sub_model.add(layer)
I hope this will help others.
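For reference, the error message's own suggestion also works with the functional API. A sketch (assuming the same img_width, img_height and channels variables) that reuses the fine-tuned vgg16 sub-model directly:
model = load_model('model.h5')
vgg = model.get_layer('vgg16')  # the fine-tuned VGG16 sub-model; weights are shared
new_input = Input(shape=(img_width, img_height, channels))
features = vgg(new_input)  # calling it on a fresh input creates a new node, avoiding the multi-node ambiguity
feature_extractor = Model(new_input, features)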

Why is my CNN overfitting and how can I fix it?

I am finetuning a 3D-CNN called C3D which was originally trained to classify sports from video clips.
I am freezing the convolution (feature extraction) layers and training the fully connected layers using gifs from GIPHY to classify the gifs for sentiment analysis (positive or negative).
Weights are pre loaded for all layers except the final fully connected layer.
I am using 5000 images (2500 positive, 2500 negative) for training with a 70/30 training/testing split using Keras. I am using the Adam optimizer with a learning rate of 0.0001.
The training accuracy increases and the training loss decreases during training, but very early on the validation accuracy and loss stop improving as the model starts to overfit.
I believe I have enough training data and am using a dropout of 0.5 on both of the fully connected layers, so how can I combat this overfitting?
The model architecture, training code and visualisations of training performance from Keras can be found below.
train_c3d.py
from training.c3d_model import create_c3d_sentiment_model
from ImageSentiment import load_gif_data
import numpy as np
import pathlib
from keras.callbacks import ModelCheckpoint
from keras.optimizers import Adam
def image_generator(files, batch_size):
    """
    Generate batches of images for training instead of loading all images into memory
    :param files:
    :param batch_size:
    :return:
    """
    while True:
        # Select files (paths/indices) for the batch
        batch_paths = np.random.choice(a=files, size=batch_size)
        batch_input = []
        batch_output = []
        # Read in each input, perform preprocessing and get labels
        for input_path in batch_paths:
            input = load_gif_data(input_path)
            if "pos" in input_path:  # if file name contains pos
                output = np.array([1, 0])  # label
            elif "neg" in input_path:  # if file name contains neg
                output = np.array([0, 1])  # label
            batch_input += [input]
            batch_output += [output]
        # Return a tuple of (input, output) to feed the network
        batch_x = np.array(batch_input)
        batch_y = np.array(batch_output)
        yield (batch_x, batch_y)
model = create_c3d_sentiment_model()
print(model.summary())
model.load_weights('models/C3D_Sport1M_weights_keras_2.2.4.h5', by_name=True)

for layer in model.layers[:14]:  # freeze top layers as feature extractor
    layer.trainable = False
for layer in model.layers[14:]:  # fine tune final layers
    layer.trainable = True

train_files = [str(filepath.absolute()) for filepath in pathlib.Path('data/sample_train').glob('**/*')]
val_files = [str(filepath.absolute()) for filepath in pathlib.Path('data/sample_validation').glob('**/*')]

batch_size = 8
train_generator = image_generator(train_files, batch_size)
validation_generator = image_generator(val_files, batch_size)

model.compile(optimizer=Adam(lr=0.0001),
              loss='binary_crossentropy',
              metrics=['accuracy'])

mc = ModelCheckpoint('best_model.h5', monitor='val_loss', mode='min', verbose=1)
history = model.fit_generator(train_generator, validation_data=validation_generator,
                              steps_per_epoch=int(np.ceil(len(train_files) / batch_size)),
                              validation_steps=int(np.ceil(len(val_files) / batch_size)),
                              epochs=5, shuffle=True, callbacks=[mc])
load_gif_data()
def load_gif_data(file_path):
    """
    Load and process gif for input into Keras model
    :param file_path:
    :return: Mean normalised image in BGR format as numpy array
    for more info see -> http://cs231n.github.io/neural-networks-2/
    """
    im = Img(fp=file_path)
    try:
        im.load(limit=16,  # Keras image model only requires 16 frames
                first=True)
    except:
        print("Error loading image: " + file_path)
        return
    im.resize(size=(112, 112))
    im.convert('RGB')
    im.close()
    np_frames = []
    frame_index = 0
    for i in range(16):  # if image is less than 16 frames, repeat the frames until there are 16
        frame = im.frames[frame_index]
        rgb = np.array(frame)
        bgr = rgb[..., ::-1]
        mean = np.mean(bgr, axis=0)
        np_frames.append(bgr - mean)  # C3D model was originally trained on BGR, mean normalised images
        # it is important that unseen images are in the same format
        if frame_index == (len(im.frames) - 1):
            frame_index = 0
        else:
            frame_index = frame_index + 1
    return np.array(np_frames)
model architecture
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv1 (Conv3D) (None, 16, 112, 112, 64) 5248
_________________________________________________________________
pool1 (MaxPooling3D) (None, 16, 56, 56, 64) 0
_________________________________________________________________
conv2 (Conv3D) (None, 16, 56, 56, 128) 221312
_________________________________________________________________
pool2 (MaxPooling3D) (None, 8, 28, 28, 128) 0
_________________________________________________________________
conv3a (Conv3D) (None, 8, 28, 28, 256) 884992
_________________________________________________________________
conv3b (Conv3D) (None, 8, 28, 28, 256) 1769728
_________________________________________________________________
pool3 (MaxPooling3D) (None, 4, 14, 14, 256) 0
_________________________________________________________________
conv4a (Conv3D) (None, 4, 14, 14, 512) 3539456
_________________________________________________________________
conv4b (Conv3D) (None, 4, 14, 14, 512) 7078400
_________________________________________________________________
pool4 (MaxPooling3D) (None, 2, 7, 7, 512) 0
_________________________________________________________________
conv5a (Conv3D) (None, 2, 7, 7, 512) 7078400
_________________________________________________________________
conv5b (Conv3D) (None, 2, 7, 7, 512) 7078400
_________________________________________________________________
zeropad5 (ZeroPadding3D) (None, 2, 8, 8, 512) 0
_________________________________________________________________
pool5 (MaxPooling3D) (None, 1, 4, 4, 512) 0
_________________________________________________________________
flatten_1 (Flatten) (None, 8192) 0
_________________________________________________________________
fc6 (Dense) (None, 4096) 33558528
_________________________________________________________________
dropout_1 (Dropout) (None, 4096) 0
_________________________________________________________________
fc7 (Dense) (None, 4096) 16781312
_________________________________________________________________
dropout_2 (Dropout) (None, 4096) 0
_________________________________________________________________
nfc8 (Dense) (None, 2) 8194
=================================================================
Total params: 78,003,970
Trainable params: 78,003,970
Non-trainable params: 0
_________________________________________________________________
None
training visualisations
I think the error is in the loss function and in the last Dense layer. As shown in the model summary, the last Dense layer is:
nfc8 (Dense) (None, 2)
The output shape is (None, 2), meaning that the layer has 2 units. As you said earlier, you need to classify GIFs as positive or negative.
Classifying GIFs could be framed either as a binary classification problem or as a multiclass classification problem with two classes.
A binary classifier has a single unit in the last Dense layer with a sigmoid activation function, but here the model has 2 units in the last Dense layer.
Hence, the model is set up as a multiclass classifier, yet you have given it the binary_crossentropy loss, which is meant for binary classifiers (with a single unit in the last layer).
So, replacing the loss with categorical_crossentropy should work. Alternatively, edit the last Dense layer to use a single unit with a sigmoid activation and keep binary_crossentropy.
Hope this helps.
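For concreteness, a sketch of both options, reusing the compile call from the training script (which one to pick depends on whether the labels stay one-hot):
# Option 1: keep the 2-unit softmax head and one-hot labels, switch the loss
model.compile(optimizer=Adam(lr=0.0001),
              loss='categorical_crossentropy',
              metrics=['accuracy'])
# Option 2: switch the head to a single sigmoid unit and keep binary_crossentropy
# (labels must then be scalars 0/1 rather than one-hot pairs like [1, 0]);
# the final layer would become Dense(1, activation='sigmoid', name='nfc8')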

Fine tuning model delete previous added layers

I use Keras 2.2.4. I train a model that I want to fine-tune every 30 epochs with new data content (image classification).
Every day I add more images to the classes to feed the model. Every 30 epochs the model is re-trained.
I use two conditions: the first when no model has been trained yet, and the second when a model already exists and I want to fine-tune it with the new content/classes.
model_base = keras.applications.vgg19.VGG19(include_top=False, input_shape=(*IMG_SIZE, 3), weights='imagenet')
output = GlobalAveragePooling2D()(model_base.output)

# If we resume a pretrained model load it
if os.path.isfile(os.path.join(MODEL_PATH, 'weights.h5')):
    print('Using existing weights...')
    base_lr = 0.0001
    model = load_model(os.path.join(MODEL_PATH, 'weights.h5'))
    output = Dense(len(all_character_names), activation='softmax', name='d2')(output)
    model = Model(model_base.input, output)
    for layer in model_base.layers[:-2]:
        layer.trainable = False
else:
    base_lr = 0.001
    output = BatchNormalization()(output)
    output = Dropout(0.5)(output)
    output = Dense(2048, activation='relu', name='d1')(output)
    output = BatchNormalization()(output)
    output = Dropout(0.5)(output)
    output = Dense(len(all_character_names), activation='softmax', name='d2')(output)
    model = Model(model_base.input, output)
    for layer in model_base.layers[:-5]:
        layer.trainable = False

opt = optimizers.Adam(lr=base_lr, decay=base_lr / epochs)
model.compile(optimizer=opt,
              loss='categorical_crossentropy',
              metrics=['accuracy'])
Model summary first time:
...
_________________________________________________________________
block5_conv4 (Conv2D) (None, 14, 14, 512) 2359808
_________________________________________________________________
block5_pool (MaxPooling2D) (None, 7, 7, 512) 0
_________________________________________________________________
global_average_pooling2d_1 ( (None, 512) 0
_________________________________________________________________
batch_normalization_1 (Batch (None, 512) 2048
_________________________________________________________________
dropout_1 (Dropout) (None, 512) 0
_________________________________________________________________
d1 (Dense) (None, 2048) 1050624
_________________________________________________________________
batch_normalization_2 (Batch (None, 2048) 8192
_________________________________________________________________
dropout_2 (Dropout) (None, 2048) 0
_________________________________________________________________
d2 (Dense) (None, 19) 38931
=================================================================
Total params: 21,124,179
Trainable params: 10,533,907
Non-trainable params: 10,590,272
Model summary second time:
...
_________________________________________________________________
block5_conv4 (Conv2D) (None, 14, 14, 512) 2359808
_________________________________________________________________
block5_pool (MaxPooling2D) (None, 7, 7, 512) 0
_________________________________________________________________
global_average_pooling2d_1 ( (None, 512) 0
_________________________________________________________________
d2 (Dense) (None, 19) 9747
=================================================================
Total params: 20,034,131
Trainable params: 2,369,555
Non-trainable params: 17,664,576
Problem: when a model exists and is loaded for fine-tuning, it seems to have lost all the additional layers added the first time (Dense 2048, Dropout, etc.).
Do I need to add these layers again? That seems to make no sense, as it would lose the training information from the first pass.
Note: I may not need to set base_lr, since saving a model should also save the learning rate at the state where it stopped, but I will check this later.
Please note that once you load the model:
model = load_model(os.path.join(MODEL_PATH, 'weights.h5'))
You never actually use it; you immediately overwrite it with
model = Model(model_base.input, output)
where output is defined as an operation on model_base, not on the loaded model.
It seems to me that in the resume branch you simply want to delete the lines after load_model.
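A sketch of what the resume branch could look like instead (this assumes the saved weights.h5 already contains the GlobalAveragePooling/d1/d2 head built by the else branch; build_fresh_model is a hypothetical helper wrapping the original else-branch code):
if os.path.isfile(os.path.join(MODEL_PATH, 'weights.h5')):
    print('Using existing weights...')
    base_lr = 0.0001
    # The saved model already contains the head from the first training pass,
    # so reuse it directly instead of rebuilding it from model_base.
    model = load_model(os.path.join(MODEL_PATH, 'weights.h5'))
    # Re-freeze everything except the head before compiling again
    for layer in model.layers[:-5]:
        layer.trainable = False
else:
    base_lr = 0.001
    model = build_fresh_model()  # hypothetical: the original else-branch code
opt = optimizers.Adam(lr=base_lr, decay=base_lr / epochs)
model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy'])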

Keras - visualize classes on a CNN network

In order to generate DeepDream-like images, I am trying to modify input images by optimizing them against an InceptionV3 network with gradient ascent.
Desired effect: https://github.com/google/deepdream/blob/master/dream.ipynb
(for more info on this, refer to https://research.googleblog.com/2015/06/inceptionism-going-deeper-into-neural.html)
For that matter, I have fine-tuned an Inception network using the transfer learning method and have generated the model: inceptionv3-ft.model
model.summary() prints the following architecture (shortened here due to space limitations):
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_1 (InputLayer) (None, None, None, 3 0
__________________________________________________________________________________________________
conv2d_1 (Conv2D) (None, None, None, 3 864 input_1[0][0]
__________________________________________________________________________________________________
batch_normalization_1 (BatchNor (None, None, None, 3 96 conv2d_1[0][0]
__________________________________________________________________________________________________
activation_1 (Activation) (None, None, None, 3 0 batch_normalization_1[0][0]
__________________________________________________________________________________________________
conv2d_2 (Conv2D) (None, None, None, 3 9216 activation_1[0][0]
__________________________________________________________________________________________________
batch_normalization_2 (BatchNor (None, None, None, 3 96 conv2d_2[0][0]
__________________________________________________________________________________________________
activation_2 (Activation) (None, None, None, 3 0 batch_normalization_2[0][0]
__________________________________________________________________________________________________
conv2d_3 (Conv2D) (None, None, None, 6 18432 activation_2[0][0]
__________________________________________________________________________________________________
batch_normalization_3 (BatchNor (None, None, None, 6 192 conv2d_3[0][0]
__________________________________________________________________________________________________
activation_3 (Activation) (None, None, None, 6 0 batch_normalization_3[0][0]
__________________________________________________________________________________________________
max_pooling2d_1 (MaxPooling2D) (None, None, None, 6 0 activation_3[0][0]
__________________________________________________________________________________________________
conv2d_4 (Conv2D) (None, None, None, 8 5120 max_pooling2d_1[0][0]
__________________________________________________________________________________________________
batch_normalization_4 (BatchNor (None, None, None, 8 240 conv2d_4[0][0]
__________________________________________________________________________________________________
activation_4 (Activation) (None, None, None, 8 0 batch_normalization_4[0][0]
__________________________________________________________________________________________________
conv2d_5 (Conv2D) (None, None, None, 1 138240 activation_4[0][0]
__________________________________________________________________________________________________
batch_normalization_5 (BatchNor (None, None, None, 1 576 conv2d_5[0][0]
__________________________________________________________________________________________________
activation_5 (Activation) (None, None, None, 1 0 batch_normalization_5[0][0]
__________________________________________________________________________________________________
max_pooling2d_2 (MaxPooling2D) (None, None, None, 1 0 activation_5[0][0]
__________________________________________________________________________________________________
conv2d_9 (Conv2D) (None, None, None, 6 12288 max_pooling2d_2[0][0]
__________________________________________________________________________________________________
batch_normalization_9 (BatchNor (None, None, None, 6 192 conv2d_9[0][0]
__________________________________________________________________________________________________
activation_9 (Activation) (None, None, None, 6 0 batch_normalization_9[0][0]
__________________________________________________________________________________________________
conv2d_7 (Conv2D) (None, None, None, 4 9216 max_pooling2d_2[0][0]
__________________________________________________________________________________________________
conv2d_10 (Conv2D) (None, None, None, 9 55296 activation_9[0][0]
__________________________________________________________________________________________________
batch_normalization_7 (BatchNor (None, None, None, 4 144 conv2d_7[0][0]
__________________________________________________________________________________________________
batch_normalization_10 (BatchNo (None, None, None, 9 288 conv2d_10[0][0]
__________________________________________________________________________________________________
activation_7 (Activation) (None, None, None, 4 0 batch_normalization_7[0][0]
__________________________________________________________________________________________________
activation_10 (Activation) (None, None, None, 9 0 batch_normalization_10[0][0]
__________________________________________________________________________________________________
average_pooling2d_1 (AveragePoo (None, None, None, 1 0 max_pooling2d_2[0][0]
__________________________________________________________________________________________________
conv2d_6 (Conv2D) (None, None, None, 6 12288 max_pooling2d_2[0][0]
__________________________________________________________________________________________________
(...)
mixed9_1 (Concatenate) (None, None, None, 7 0 activation_88[0][0]
activation_89[0][0]
__________________________________________________________________________________________________
concatenate_2 (Concatenate) (None, None, None, 7 0 activation_92[0][0]
activation_93[0][0]
__________________________________________________________________________________________________
activation_94 (Activation) (None, None, None, 1 0 batch_normalization_94[0][0]
__________________________________________________________________________________________________
mixed10 (Concatenate) (None, None, None, 2 0 activation_86[0][0]
mixed9_1[0][0]
concatenate_2[0][0]
activation_94[0][0]
__________________________________________________________________________________________________
global_average_pooling2d_1 (Glo (None, 2048) 0 mixed10[0][0]
__________________________________________________________________________________________________
dense_1 (Dense) (None, 1024) 2098176 global_average_pooling2d_1[0][0]
__________________________________________________________________________________________________
dense_2 (Dense) (None, 1) 1025 dense_1[0][0]
==================================================================================================
Total params: 23,901,985
Trainable params: 18,315,137
Non-trainable params: 5,586,848
____________________________________
Now I'm using the following settings and code to try to activate specific high-level layer objects, in order to make full objects emerge on the input image:
settings = {
    'features': {
        'mixed2': 0.,
        'mixed3': 0.,
        'mixed4': 0.,
        'mixed10': 0.,  # highest
    },
}
model = load_model('inceptionv3-ft.model')

# Get the symbolic outputs of each "key" layer (we gave them unique names).
layer_dict = dict([(layer.name, layer) for layer in model.layers])

# Define the loss.
loss = K.variable(0.)
for layer_name in settings['features']:
    # Add the L2 norm of the features of a layer to the loss.
    assert layer_name in layer_dict.keys(), 'Layer ' + layer_name + ' not found in model.'
    coeff = settings['features'][layer_name]
    x = layer_dict[layer_name].output
    print(x)
    # We avoid border artifacts by only involving non-border pixels in the loss.
    scaling = K.prod(K.cast(K.shape(x), 'float32'))
    if K.image_data_format() == 'channels_first':
        loss += coeff * K.sum(K.square(x[:, :, 2: -2, 2: -2])) / scaling
    else:
        loss += coeff * K.sum(K.square(x[:, 2: -2, 2: -2, :])) / scaling

# Compute the gradients of the dream wrt the loss.
grads = K.gradients(loss, dream)[0]
# Normalize gradients.
grads /= K.maximum(K.mean(K.abs(grads)), K.epsilon())
# Set up function to retrieve the value
# of the loss and gradients given an input image.
outputs = [loss, grads]
fetch_loss_and_grads = K.function([dream], outputs)
def eval_loss_and_grads(x):
    outs = fetch_loss_and_grads([x])
    loss_value = outs[0]
    grad_values = outs[1]
    return loss_value, grad_values
def resize_img(img, size):
    img = np.copy(img)
    if K.image_data_format() == 'channels_first':
        factors = (1, 1,
                   float(size[0]) / img.shape[2],
                   float(size[1]) / img.shape[3])
    else:
        factors = (1,
                   float(size[0]) / img.shape[1],
                   float(size[1]) / img.shape[2],
                   1)
    return scipy.ndimage.zoom(img, factors, order=1)
def gradient_ascent(x, iterations, step, max_loss=None):
    for i in range(iterations):
        loss_value, grad_values = eval_loss_and_grads(x)
        if max_loss is not None and loss_value > max_loss:
            break
        print('..Loss value at', i, ':', loss_value)
        x += step * grad_values
    return x

def save_img(img, fname):
    pil_img = deprocess_image(np.copy(img))
    scipy.misc.imsave(fname, pil_img)
"""Process:
- Load the original image.
- Define a number of processing scales (i.e. image shapes),
from smallest to largest.
- Resize the original image to the smallest scale.
- For every scale, starting with the smallest (i.e. current one):
- Run gradient ascent
- Upscale image to the next scale
- Reinject the detail that was lost at upscaling time
- Stop when we are back to the original size.
To obtain the detail lost during upscaling, we simply
take the original image, shrink it down, upscale it,
and compare the result to the (resized) original image.
"""
# Playing with these hyperparameters will also allow you to achieve new effects
step = 0.01 # Gradient ascent step size
num_octave = 3 # Number of scales at which to run gradient ascent
octave_scale = 1.4 # Size ratio between scales
iterations = 20 # Number of ascent steps per scale
max_loss = 10.
img = preprocess_image(base_image_path)
if K.image_data_format() == 'channels_first':
    original_shape = img.shape[2:]
else:
    original_shape = img.shape[1:3]
successive_shapes = [original_shape]
for i in range(1, num_octave):
    shape = tuple([int(dim / (octave_scale ** i)) for dim in original_shape])
    successive_shapes.append(shape)
successive_shapes = successive_shapes[::-1]
original_img = np.copy(img)
shrunk_original_img = resize_img(img, successive_shapes[0])
for shape in successive_shapes:
    print('Processing image shape', shape)
    img = resize_img(img, shape)
    img = gradient_ascent(img,
                          iterations=iterations,
                          step=step,
                          max_loss=max_loss)
    upscaled_shrunk_original_img = resize_img(shrunk_original_img, shape)
    same_size_original = resize_img(original_img, shape)
    lost_detail = same_size_original - upscaled_shrunk_original_img
    img += lost_detail
    shrunk_original_img = resize_img(original_img, shape)
save_img(img, fname=result_prefix + '.png')
But no matter which setting values I tweak, I seem to activate only low-level features, like edges and curves, or, at best, mixed features.
Ideally, the settings should be able to target individual layers down to channels and units, e.g.
Layer4c - Unit 0, but I haven't found any method in the Keras documentation to achieve that:
see this: https://distill.pub/2017/feature-visualization/appendix/googlenet/4c.html
I have learned that the Caffe framework gives you more flexibility, but installing it system-wide is dependency hell.
So, how do I activate individual classes on this network within the Keras framework, or in any framework other than Caffe?
What worked for me was the following:
To avoid installing all the dependencies and Caffe on my machine, I pulled this Docker image with all the deep learning frameworks in it.
Within minutes I had Caffe (as well as Keras, TensorFlow, CUDA, Theano, Lasagne, Torch and OpenCV) installed in a container with a folder shared with my host machine.
I then ran this Caffe script -->
Deep Dream, and voilà.
The models generated by Caffe are more flexible and allow individual classes, as stated above, to be 'printed' onto input images or generated from noise.
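If you want to stay within Keras, one way to move from whole-layer activations toward a single channel (or the class output itself) is to change the loss term. A sketch reusing layer_dict and dream from the code above, where channel_index is a hypothetical choice of the unit to amplify:
# Maximize the mean activation of one channel of a named layer instead of the
# L2 norm of the whole layer (channel_index is a hypothetical pick)
layer_name = 'mixed10'
channel_index = 0
x = layer_dict[layer_name].output
if K.image_data_format() == 'channels_first':
    loss = K.mean(x[:, channel_index, 2:-2, 2:-2])
else:
    loss = K.mean(x[:, 2:-2, 2:-2, channel_index])
# Or target the final output directly (dense_2 has a single unit in this model,
# so there is only one class score to maximize):
# loss = K.mean(layer_dict['dense_2'].output[:, 0])
grads = K.gradients(loss, dream)[0]
grads /= K.maximum(K.mean(K.abs(grads)), K.epsilon())
fetch_loss_and_grads = K.function([dream], [loss, grads])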
