Validation loss that doesn't decrease in Keras image classification - image-processing

I'm trying to fine-tune my VGG19 model with a set of images for classification.
I have 18 classes with 6,000 well-curated images each.
Using Keras 2.2.4
Model:
import keras
from keras import optimizers
from keras.layers import Flatten, BatchNormalization, Dropout, Dense
from keras.models import Model

INIT_LR = 0.00001
BATCH_SIZE = 128
IMG_SIZE = (256, 256)
epochs = 150

model_base = keras.applications.vgg19.VGG19(include_top=False, input_shape=(*IMG_SIZE, 3), weights='imagenet')
output = Flatten()(model_base.output)
output = BatchNormalization()(output)
output = Dropout(0.5)(output)
output = Dense(64, activation='relu')(output)
output = BatchNormalization()(output)
output = Dropout(0.5)(output)
output = Dense(len(all_character_names), activation='softmax')(output)  # all_character_names defined elsewhere
model = Model(model_base.input, output)

# Freeze everything except the last 10 layers of the VGG19 base
for layer in model_base.layers[:-10]:
    layer.trainable = False

opt = optimizers.Adam(lr=INIT_LR, decay=INIT_LR / epochs)
model.compile(optimizer=opt,
              loss='categorical_crossentropy',
              metrics=['accuracy', 'top_k_categorical_accuracy'])
Data augmentation:
image_datagen = ImageDataGenerator(
    rotation_range=15,
    width_shift_range=0.15,
    height_shift_range=0.15,
    # rescale=1./255,
    shear_range=0.15,
    zoom_range=0.15,
    channel_shift_range=1,
    vertical_flip=True,
    horizontal_flip=True)
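Note that with rescale commented out, this generator feeds raw 0-255 pixels to a network whose ImageNet weights expect VGG-style channel-mean subtraction. A minimal sketch of wiring in Keras's own VGG19 preprocessing, assuming the rest of the pipeline stays unchanged:

from keras.preprocessing.image import ImageDataGenerator
from keras.applications.vgg19 import preprocess_input

image_datagen = ImageDataGenerator(
    preprocessing_function=preprocess_input,  # channel-mean subtraction as VGG19 expects
    rotation_range=15,
    width_shift_range=0.15,
    height_shift_range=0.15,
    shear_range=0.15,
    zoom_range=0.15,
    channel_shift_range=1,
    vertical_flip=True,
    horizontal_flip=True)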
Model training:
import math

validation_steps = math.ceil(data_generator.validation_samples / BATCH_SIZE)  # fit_generator expects integer step counts
steps_per_epoch = math.ceil(data_generator.train_samples / BATCH_SIZE)
model.fit_generator(
    generator,
    steps_per_epoch=steps_per_epoch,
    epochs=epochs,
    validation_data=validation_data,
    validation_steps=validation_steps
)
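The generator, validation_data, and data_generator objects are not defined in the snippet above; a plausible reconstruction with flow_from_directory, assuming the preprocessing shown earlier and a train/<class>/... and val/<class>/... directory layout (train_dir and val_dir are hypothetical names, not from the question):

val_datagen = ImageDataGenerator(preprocessing_function=preprocess_input)  # no augmentation for validation

generator = image_datagen.flow_from_directory(
    train_dir,
    target_size=IMG_SIZE,
    batch_size=BATCH_SIZE,
    class_mode='categorical')

validation_data = val_datagen.flow_from_directory(
    val_dir,
    target_size=IMG_SIZE,
    batch_size=BATCH_SIZE,
    class_mode='categorical',
    shuffle=False)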
Model summary:
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) (None, 256, 256, 3) 0
_________________________________________________________________
block1_conv1 (Conv2D) (None, 256, 256, 64) 1792
_________________________________________________________________
block1_conv2 (Conv2D) (None, 256, 256, 64) 36928
_________________________________________________________________
block1_pool (MaxPooling2D) (None, 128, 128, 64) 0
_________________________________________________________________
block2_conv1 (Conv2D) (None, 128, 128, 128) 73856
_________________________________________________________________
block2_conv2 (Conv2D) (None, 128, 128, 128) 147584
_________________________________________________________________
block2_pool (MaxPooling2D) (None, 64, 64, 128) 0
_________________________________________________________________
block3_conv1 (Conv2D) (None, 64, 64, 256) 295168
_________________________________________________________________
block3_conv2 (Conv2D) (None, 64, 64, 256) 590080
_________________________________________________________________
block3_conv3 (Conv2D) (None, 64, 64, 256) 590080
_________________________________________________________________
block3_conv4 (Conv2D) (None, 64, 64, 256) 590080
_________________________________________________________________
block3_pool (MaxPooling2D) (None, 32, 32, 256) 0
_________________________________________________________________
block4_conv1 (Conv2D) (None, 32, 32, 512) 1180160
_________________________________________________________________
block4_conv2 (Conv2D) (None, 32, 32, 512) 2359808
_________________________________________________________________
block4_conv3 (Conv2D) (None, 32, 32, 512) 2359808
_________________________________________________________________
block4_conv4 (Conv2D) (None, 32, 32, 512) 2359808
_________________________________________________________________
block4_pool (MaxPooling2D) (None, 16, 16, 512) 0
_________________________________________________________________
block5_conv1 (Conv2D) (None, 16, 16, 512) 2359808
_________________________________________________________________
block5_conv2 (Conv2D) (None, 16, 16, 512) 2359808
_________________________________________________________________
block5_conv3 (Conv2D) (None, 16, 16, 512) 2359808
_________________________________________________________________
block5_conv4 (Conv2D) (None, 16, 16, 512) 2359808
_________________________________________________________________
block5_pool (MaxPooling2D) (None, 8, 8, 512) 0
_________________________________________________________________
flatten_1 (Flatten) (None, 32768) 0
_________________________________________________________________
batch_normalization_1 (Batch (None, 32768) 131072
_________________________________________________________________
dropout_1 (Dropout) (None, 32768) 0
_________________________________________________________________
dense_1 (Dense) (None, 64) 2097216
_________________________________________________________________
batch_normalization_2 (Batch (None, 64) 256
_________________________________________________________________
dropout_2 (Dropout) (None, 64) 0
_________________________________________________________________
dense_2 (Dense) (None, 19) 1235
=================================================================
Total params: 22,254,163
Trainable params: 19,862,931
Non-trainable params: 2,391,232
_________________________________________________________________
<keras.engine.input_layer.InputLayer object at 0x00000224568D0D68> False
<keras.layers.convolutional.Conv2D object at 0x00000224568D0F60> False
<keras.layers.convolutional.Conv2D object at 0x00000224568F0438> False
<keras.layers.pooling.MaxPooling2D object at 0x00000224570A5860> False
<keras.layers.convolutional.Conv2D object at 0x00000224570A58D0> False
<keras.layers.convolutional.Conv2D object at 0x00000224574196D8> False
<keras.layers.pooling.MaxPooling2D object at 0x0000022457524048> False
<keras.layers.convolutional.Conv2D object at 0x0000022457524D30> False
<keras.layers.convolutional.Conv2D object at 0x0000022457053160> False
<keras.layers.convolutional.Conv2D object at 0x00000224572E15C0> False
<keras.layers.convolutional.Conv2D object at 0x000002245707B080> False
<keras.layers.pooling.MaxPooling2D object at 0x0000022457088400> False
<keras.layers.convolutional.Conv2D object at 0x0000022457088E10> True
<keras.layers.convolutional.Conv2D object at 0x00000224575DB240> True
<keras.layers.convolutional.Conv2D object at 0x000002245747A320> True
<keras.layers.convolutional.Conv2D object at 0x0000022457486160> True
<keras.layers.pooling.MaxPooling2D object at 0x00000224574924E0> True
<keras.layers.convolutional.Conv2D object at 0x0000022457492D68> True
<keras.layers.convolutional.Conv2D object at 0x00000224574AD320> True
<keras.layers.convolutional.Conv2D object at 0x00000224574C6400> True
<keras.layers.convolutional.Conv2D object at 0x00000224574D2240> True
<keras.layers.pooling.MaxPooling2D object at 0x00000224574DAF98> True
<keras.layers.core.Flatten object at 0x00000224574EA080> True
<keras.layers.normalization.BatchNormalization object at 0x00000224574F82B0> True
<keras.layers.core.Dropout object at 0x000002247134BA58> True
<keras.layers.core.Dense object at 0x000002247136A7B8> True
<keras.layers.normalization.BatchNormalization object at 0x0000022471324438> True
<keras.layers.core.Dropout object at 0x00000224713249B0> True
<keras.layers.core.Dense object at 0x00000224713BF7F0> True
Batch size: 128
LR: 1e-05
The doomed graph (loss plot not reproduced here; the validation loss fails to decrease):
Tries:
Tried several LRs.
Tried without training the last 10 and last 5 layers; it is worse, simply not converging.
Tried several batch sizes; 128 gives the best results.
Also tried ResNet50, but it does not converge at all (even with the last 3 layers trainable).
Tried VGG16 without much luck.
I add about 2,000 new images each day, trying to reach around 20,000 images per class, as I think that is where my problem lies.

In the lower layers, the network has learned low-level features such as edges and contours; it is in the higher layers that these features are combined. In your case you need much finer features, such as hair color or person size, so one thing you can try is fine-tuning only the last few layers (from blocks 4-5). You can also use different learning rates: very low for the VGG blocks and a little higher for the completely new dense layers; see the sketch below. For the implementation, this thread will be helpful.
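A minimal sketch of that setup, reusing the model_base, model, and optimizers names from the question. Keras 2.2.4 has no built-in per-layer learning rates, so a common approximation is to train in two phases: first the new head alone at a normal rate, then blocks 4-5 together with the head at a much lower rate:

# Phase 1 (assumed already done): head trained with the whole base frozen.
# Phase 2: unfreeze only blocks 4-5 and continue at a much lower rate.
for layer in model_base.layers:
    layer.trainable = layer.name.startswith(('block4', 'block5'))

model.compile(optimizer=optimizers.Adam(lr=1e-5),  # low rate protects the pretrained weights
              loss='categorical_crossentropy',
              metrics=['accuracy'])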

Related

Understanding CNN by visualizing class activations using GRAD_CAM

I followed the blog Where CNN is looking? to understand and visualize the class activations in order to predict something. The given example works very well.
I have developed a custom model using autoencoders for image similarity. The model accepts 2 images and predicts the score for similarity. The model has the following layers:
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_1 (InputLayer) (None, 256, 256, 3) 0
__________________________________________________________________________________________________
input_2 (InputLayer) (None, 256, 256, 3) 0
__________________________________________________________________________________________________
encoder (Sequential) (None, 7, 7, 256) 3752704 input_1[0][0]
input_2[0][0]
__________________________________________________________________________________________________
Merged_feature_map (Concatenate (None, 7, 7, 512) 0 encoder[1][0]
encoder[2][0]
__________________________________________________________________________________________________
mnet_conv1 (Conv2D) (None, 7, 7, 1024) 2098176 Merged_feature_map[0][0]
__________________________________________________________________________________________________
batch_normalization_1 (BatchNor (None, 7, 7, 1024) 4096 mnet_conv1[0][0]
__________________________________________________________________________________________________
activation_1 (Activation) (None, 7, 7, 1024) 0 batch_normalization_1[0][0]
__________________________________________________________________________________________________
mnet_pool1 (MaxPooling2D) (None, 3, 3, 1024) 0 activation_1[0][0]
__________________________________________________________________________________________________
mnet_conv2 (Conv2D) (None, 3, 3, 2048) 8390656 mnet_pool1[0][0]
__________________________________________________________________________________________________
batch_normalization_2 (BatchNor (None, 3, 3, 2048) 8192 mnet_conv2[0][0]
__________________________________________________________________________________________________
activation_2 (Activation) (None, 3, 3, 2048) 0 batch_normalization_2[0][0]
__________________________________________________________________________________________________
mnet_pool2 (MaxPooling2D) (None, 1, 1, 2048) 0 activation_2[0][0]
__________________________________________________________________________________________________
reshape_1 (Reshape) (None, 1, 2048) 0 mnet_pool2[0][0]
__________________________________________________________________________________________________
fc1 (Dense) (None, 1, 256) 524544 reshape_1[0][0]
__________________________________________________________________________________________________
batch_normalization_3 (BatchNor (None, 1, 256) 1024 fc1[0][0]
__________________________________________________________________________________________________
activation_3 (Activation) (None, 1, 256) 0 batch_normalization_3[0][0]
__________________________________________________________________________________________________
dropout_1 (Dropout) (None, 1, 256) 0 activation_3[0][0]
__________________________________________________________________________________________________
fc2 (Dense) (None, 1, 128) 32896 dropout_1[0][0]
__________________________________________________________________________________________________
batch_normalization_4 (BatchNor (None, 1, 128) 512 fc2[0][0]
__________________________________________________________________________________________________
activation_4 (Activation) (None, 1, 128) 0 batch_normalization_4[0][0]
__________________________________________________________________________________________________
dropout_2 (Dropout) (None, 1, 128) 0 activation_4[0][0]
__________________________________________________________________________________________________
fc3 (Dense) (None, 1, 64) 8256 dropout_2[0][0]
__________________________________________________________________________________________________
batch_normalization_5 (BatchNor (None, 1, 64) 256 fc3[0][0]
__________________________________________________________________________________________________
activation_5 (Activation) (None, 1, 64) 0 batch_normalization_5[0][0]
__________________________________________________________________________________________________
dropout_3 (Dropout) (None, 1, 64) 0 activation_5[0][0]
__________________________________________________________________________________________________
fc4 (Dense) (None, 1, 1) 65 dropout_3[0][0]
__________________________________________________________________________________________________
batch_normalization_6 (BatchNor (None, 1, 1) 4 fc4[0][0]
__________________________________________________________________________________________________
activation_6 (Activation) (None, 1, 1) 0 batch_normalization_6[0][0]
__________________________________________________________________________________________________
dropout_4 (Dropout) (None, 1, 1) 0 activation_6[0][0]
__________________________________________________________________________________________________
reshape_2 (Reshape) (None, 1) 0 dropout_4[0][0]
==================================================================================================
The encoder layer consists of the following layers:
conv2d_1
batch_normalization_1
activation_1
max_pooling2d_1
conv2d_2
batch_normalization_2
activation_2
max_pooling2d_2
conv2d_3
batch_normalization_3
activation_3
conv2d_4
batch_normalization_4
activation_4
conv2d_5
batch_normalization_5
activation_5
max_pooling2d_3
I want to change my custom network to accept one input instead of two, using the encoder part only, and generate the heatmaps to understand what the encoder part has learned.
The idea is that, in case the network predicts 'not similar', I can generate the heatmaps of the images one by one and compare them.
What I have done is the following:
I have passed the two images to the network and got the prediction as described in the blog:
preds = model.predict([x, y])
class_idx = np.argmax(preds[0])
class_output = model.output[:, class_idx]
Set the last convolutional layer and compute the gradient of the class output value with respect to the feature map.
last_conv_layer = model.get_layer('encoder')
grads = K.gradients(class_output, last_conv_layer.get_output_at(-1))[0]
The output of grads:
Tensor("gradients/Merged_feature_map/concat_grad/Slice_1:0", shape=(?, 7, 7, 256), dtype=float32)
Then I pooled the gradients as described in the blog:
pooled_grads = K.mean(grads, axis=(0, 1, 2))
iterate = K.function([input_img], [pooled_grads, last_conv_layer.get_output_at(-1)[0]])
At this point, checking the inputs and outputs shows the following:
iterate.inputs
[<tf.Tensor 'input_1:0' shape=(?, 256, 256, 3) dtype=float32>]
iterate.outputs
[<tf.Tensor 'Mean:0' shape=(256,) dtype=float32>, <tf.Tensor 'strided_slice_1:0' shape=(7, 7, 256) dtype=float32>]
But I am now getting the error on the following code line:
pooled_grads_value, conv_layer_output_value = iterate([x])
The error is:
You must feed a value for placeholder tensor 'input_2' with dtype float and shape [?,256,256,3]
[[{{node input_2}}]]
It seems to be asking for the second image input, but as seen above, iterate.inputs contains only one input.
Where have I made a mistake? How can I limit it to accept only one image? Or is there a better way to achieve this?
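One way around this, sketched with the variable names from the question: the gradient of class_output still depends on both graph inputs, so the K.function must be given both placeholders and fed both images, even if only one heatmap is used afterwards:

iterate = K.function(model.inputs,
                     [pooled_grads, last_conv_layer.get_output_at(-1)[0]])
pooled_grads_value, conv_layer_output_value = iterate([x, y])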

How to create a sub-model based on fine-tuned VGGNet16

The following network architecture is designed in order to find the similarity between two images.
Initially, I took VGGNet16 and removed the classification head:
vgg_model = VGG16(weights="imagenet", include_top=False,
input_tensor=Input(shape=(img_width, img_height, channels)))
Afterward, I set the parameter layer.trainable = False, so that the network will work as a feature extractor.
I passed two different images to the network:
encoded_left = vgg_model(input_left)
encoded_right = vgg_model(input_right)
This will produce two feature vectors. Then for the classification (whether they are similar or not), I used a metric network that consists of 2 convolution layers followed by pooling and 4 fully connected layers.
merge(encoded_left, encoded_right) -> conv-pool -> conv-pool -> reshape -> dense * 4 -> output
Hence, the model looks like:
model = Model(inputs=[left_image, right_image], outputs=output)
After training only the metric network, I set the last convolution block to trainable in order to fine-tune the convolution layers. Therefore, in the second training phase, the last convolution block is trained along with the metric network.
Now I want to use this fine-tuned network for another purpose. Here is the network summary:
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_1 (InputLayer) (None, 224, 224, 3) 0
__________________________________________________________________________________________________
input_2 (InputLayer) (None, 224, 224, 3) 0
__________________________________________________________________________________________________
vgg16 (Model) (None, 7, 7, 512) 14714688 input_1[0][0]
input_2[0][0]
__________________________________________________________________________________________________
Merged_feature_map (Concatenate (None, 7, 7, 1024) 0 vgg16[1][0]
vgg16[2][0]
__________________________________________________________________________________________________
mnet_conv1 (Conv2D) (None, 7, 7, 1024) 4195328 Merged_feature_map[0][0]
__________________________________________________________________________________________________
batch_normalization_1 (BatchNor (None, 7, 7, 1024) 4096 mnet_conv1[0][0]
__________________________________________________________________________________________________
activation_1 (Activation) (None, 7, 7, 1024) 0 batch_normalization_1[0][0]
__________________________________________________________________________________________________
mnet_pool1 (MaxPooling2D) (None, 3, 3, 1024) 0 activation_1[0][0]
__________________________________________________________________________________________________
mnet_conv2 (Conv2D) (None, 3, 3, 2048) 8390656 mnet_pool1[0][0]
__________________________________________________________________________________________________
batch_normalization_2 (BatchNor (None, 3, 3, 2048) 8192 mnet_conv2[0][0]
__________________________________________________________________________________________________
activation_2 (Activation) (None, 3, 3, 2048) 0 batch_normalization_2[0][0]
__________________________________________________________________________________________________
mnet_pool2 (MaxPooling2D) (None, 1, 1, 2048) 0 activation_2[0][0]
__________________________________________________________________________________________________
reshape_1 (Reshape) (None, 1, 2048) 0 mnet_pool2[0][0]
__________________________________________________________________________________________________
fc1 (Dense) (None, 1, 256) 524544 reshape_1[0][0]
__________________________________________________________________________________________________
batch_normalization_3 (BatchNor (None, 1, 256) 1024 fc1[0][0]
__________________________________________________________________________________________________
activation_3 (Activation) (None, 1, 256) 0 batch_normalization_3[0][0]
__________________________________________________________________________________________________
fc2 (Dense) (None, 1, 128) 32896 activation_3[0][0]
__________________________________________________________________________________________________
batch_normalization_4 (BatchNor (None, 1, 128) 512 fc2[0][0]
__________________________________________________________________________________________________
activation_4 (Activation) (None, 1, 128) 0 batch_normalization_4[0][0]
__________________________________________________________________________________________________
fc3 (Dense) (None, 1, 64) 8256 activation_4[0][0]
__________________________________________________________________________________________________
batch_normalization_5 (BatchNor (None, 1, 64) 256 fc3[0][0]
__________________________________________________________________________________________________
activation_5 (Activation) (None, 1, 64) 0 batch_normalization_5[0][0]
__________________________________________________________________________________________________
fc4 (Dense) (None, 1, 1) 65 activation_5[0][0]
__________________________________________________________________________________________________
batch_normalization_6 (BatchNor (None, 1, 1) 4 fc4[0][0]
__________________________________________________________________________________________________
activation_6 (Activation) (None, 1, 1) 0 batch_normalization_6[0][0]
__________________________________________________________________________________________________
reshape_2 (Reshape) (None, 1) 0 activation_6[0][0]
==================================================================================================
Total params: 27,880,517
Trainable params: 13,158,787
Non-trainable params: 14,721,730
As the last convolution block of VGGNet is already trained on the custom dataset, I want to cut the network at this layer:
__________________________________________________________________________________________________
vgg16 (Model) (None, 7, 7, 512) 14714688 input_1[0][0]
input_2[0][0]
__________________________________________________________________________________________________
and use this as a powerful feature extractor. For this task, I loaded the fine-tuned model:
model = load_model('model.h5')
then tried to create the new model as:
new_model = Model(Input(shape=(img_width, img_height, channels)), model.layers[2].output)
This results in the following error:
`AttributeError: Layer vgg16 has multiple inbound nodes, hence the notion of "layer output" is ill-defined. Use `get_output_at(node_index)` instead.`
Please advise me on where I am going wrong.
I have tried several ways, and the following method works perfectly. Instead of creating the new model as:
model = load_model('model.h5')
new_model = Model(Input(shape=(img_width, img_height, channels)), model.layers[2].output)
I used the following way:
model = load_model('model.h5')
sub_model = Sequential()
for layer in model.get_layer('vgg16').layers:
    sub_model.add(layer)
I hope this will help others.
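An alternative sketch that follows the hint in the error message itself, using an explicit node index instead of copying layers into a new Sequential (node 0 is the vgg16 sub-model's own internal graph, so this keeps the fine-tuned weights):

from keras.models import Model

vgg = model.get_layer('vgg16')
feature_extractor = Model(vgg.get_input_at(0), vgg.get_output_at(0))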

Convolution and recurrent neural network

How can I combine a convolutional neural network and a recurrent neural network for image segmentation?
Summary of my model:
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_34 (InputLayer) (None, 128, 128, 3) 0
__________________________________________________________________________________________________
conv2d_277 (Conv2D) (None, 128, 128, 16) 448 input_34[0][0]
__________________________________________________________________________________________________
max_pooling2d_109 (MaxPooling2D (None, 64, 64, 16) 0 conv2d_277[0][0]
__________________________________________________________________________________________________
conv2d_278 (Conv2D) (None, 64, 64, 32) 4640 max_pooling2d_109[0][0]
__________________________________________________________________________________________________
max_pooling2d_110 (MaxPooling2D (None, 32, 32, 32) 0 conv2d_278[0][0]
__________________________________________________________________________________________________
conv2d_279 (Conv2D) (None, 32, 32, 64) 18496 max_pooling2d_110[0][0]
__________________________________________________________________________________________________
max_pooling2d_111 (MaxPooling2D (None, 16, 16, 64) 0 conv2d_279[0][0]
__________________________________________________________________________________________________
conv2d_280 (Conv2D) (None, 16, 16, 128) 73856 max_pooling2d_111[0][0]
__________________________________________________________________________________________________
max_pooling2d_112 (MaxPooling2D (None, 8, 8, 128) 0 conv2d_280[0][0]
__________________________________________________________________________________________________
conv2d_281 (Conv2D) (None, 8, 8, 256) 295168 max_pooling2d_112[0][0]
__________________________________________________________________________________________________
up_sampling2d_109 (UpSampling2D (None, 16, 16, 256) 0 conv2d_281[0][0]
__________________________________________________________________________________________________
concatenate_109 (Concatenate) (None, 16, 16, 384) 0 up_sampling2d_109[0][0]
conv2d_280[0][0]
__________________________________________________________________________________________________
conv2d_282 (Conv2D) (None, 16, 16, 128) 442496 concatenate_109[0][0]
__________________________________________________________________________________________________
up_sampling2d_110 (UpSampling2D (None, 32, 32, 128) 0 conv2d_282[0][0]
__________________________________________________________________________________________________
concatenate_110 (Concatenate) (None, 32, 32, 192) 0 up_sampling2d_110[0][0]
conv2d_279[0][0]
__________________________________________________________________________________________________
conv2d_283 (Conv2D) (None, 32, 32, 64) 110656 concatenate_110[0][0]
__________________________________________________________________________________________________
up_sampling2d_111 (UpSampling2D (None, 64, 64, 64) 0 conv2d_283[0][0]
__________________________________________________________________________________________________
concatenate_111 (Concatenate) (None, 64, 64, 96) 0 up_sampling2d_111[0][0]
conv2d_278[0][0]
__________________________________________________________________________________________________
conv2d_284 (Conv2D) (None, 64, 64, 32) 27680 concatenate_111[0][0]
__________________________________________________________________________________________________
up_sampling2d_112 (UpSampling2D (None, 128, 128, 32) 0 conv2d_284[0][0]
__________________________________________________________________________________________________
concatenate_112 (Concatenate) (None, 128, 128, 48) 0 up_sampling2d_112[0][0]
conv2d_277[0][0]
__________________________________________________________________________________________________
conv2d_285 (Conv2D) (None, 128, 128, 16) 6928 concatenate_112[0][0]
__________________________________________________________________________________________________
conv2d_286 (Conv2D) (None, 128, 128, 1) 17 conv2d_285[0][0]
__________________________________________________________________________________________________
lambda_49 (Lambda) (16384, 1, 1) 0 conv2d_286[0][0]
__________________________________________________________________________________________________
lstm_28 (LSTM) (16384, 1) 12 lambda_49[0][0]
__________________________________________________________________________________________________
dense_26 (Dense) (16384, 1) 2 lstm_28[0][0]
__________________________________________________________________________________________________
lambda_50 (Lambda) (128, 128, 1) 0 dense_26[0][0]
==================================================================================================
Total params: 980,399
Trainable params: 980,399
Non-trainable params: 0
I want to pass the output of my convolutional neural network as input to a recurrent neural network, but it shows an error on this line: model = keras.models.Model(inputs, outputs)
Error: AttributeError: 'tuple' object has no attribute 'ndim'
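The summary suggests the Lambda layers fold the batch axis into the spatial one (their output shapes start at 16384 rather than None), which is a likely source of that error. A sketch of one way to wire the mask into an LSTM while keeping the batch axis, assuming mask is the (128, 128, 1) output of conv2d_286:

from keras.layers import Reshape, LSTM

x = Reshape((128 * 128, 1))(mask)      # (batch, 16384, 1): one timestep per pixel
x = LSTM(1, return_sequences=True)(x)  # recurrent pass over the pixel sequence (slow for 16384 steps)
outputs = Reshape((128, 128, 1))(x)    # back to an image-shaped mask
model = keras.models.Model(inputs, outputs)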

How to calculate RAM memory needed for training?

How can I calculate the RAM needed for training a Keras model? I want to calculate this because I sometimes encounter an "exceeds system memory" error when training models. Here is my model, for example:
Layer (type) Output Shape Param #
=================================================================
conv2d_1 (Conv2D) (None, 30, 30, 32) 320
_________________________________________________________________
conv2d_2 (Conv2D) (None, 28, 28, 32) 9248
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 14, 14, 32) 0
_________________________________________________________________
dropout_1 (Dropout) (None, 14, 14, 32) 0
_________________________________________________________________
conv2d_3 (Conv2D) (None, 12, 12, 64) 18496
_________________________________________________________________
conv2d_4 (Conv2D) (None, 10, 10, 64) 36928
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 5, 5, 64) 0
_________________________________________________________________
dropout_2 (Dropout) (None, 5, 5, 64) 0
_________________________________________________________________
flatten_1 (Flatten) (None, 1600) 0
_________________________________________________________________
dense_1 (Dense) (None, 128) 204928
_________________________________________________________________
dropout_3 (Dropout) (None, 128) 0
_________________________________________________________________
dense_2 (Dense) (None, 10) 1290
=================================================================
Total params: 271,210
Trainable params: 271,210
Non-trainable params: 0
Assuming each parameter is a 32-bit datatype (single-precision floating point, 4 bytes), your memory usage should be somewhere around (# of params) * 4 B.
In this case: 271,210 * 4 B = 1,084,840 B ≈ 1 MB.
However, there is an important consideration to keep in mind. This assumes a batch size of 1, i.e. you're loading one input at a time. If you are using minibatches (typical batch sizes are 32 or 64), you'll have to multiply that memory calculation by the batch size. If you are using batch gradient descent, you may be using your entire dataset in each batch, in which case your memory requirements could be enormous.
This analysis inspired by: https://datascience.stackexchange.com/questions/17286/cnn-memory-consumption
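A rough estimator in the spirit of this calculation, as a sketch: parameters plus the activations of every layer output, times the batch size, at 4 bytes per float32. It assumes single-output layers and ignores gradients and optimizer state, which roughly double or triple the parameter term:

import numpy as np

def estimate_memory_mb(model, batch_size=32):
    params = model.count_params()
    # Sum the per-sample activation sizes over all layer outputs,
    # skipping the None batch dimension in each output_shape.
    activations = sum(
        np.prod([d for d in layer.output_shape if d is not None])
        for layer in model.layers)
    return 4 * (params + batch_size * activations) / 1024**2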

Keras model.pop() not working

I am running Keras 2.0.6 with Python 3.6.2 and Tensorflow-gpu 1.3.0.
In order to fine-tune the VGG16 model, I run this code after having hand-built a VGG16 architecture and loaded the weights, but before calling compile():
model = self.model
model.pop()
for layer in model.layers: layer.trainable=False
model.add(Dense(num, activation='softmax'))
self.compile()
When I check the graph in TensorBoard I see (top left of the attached picture) dense_3 connected to dropout_2 but dangling by itself, and next to it dense_4, also connected to dropout_2.
TensorBoard model graph
I tried to replace pop() with the pop_layer() code below, as suggested here by joelthchao on May 6, 2016. Unfortunately, the graph displayed in TensorBoard becomes an incomprehensible mess.
def pop_layer(model):
    if not model.outputs:
        raise Exception('Sequential model cannot be popped: model is empty.')
    model.layers.pop()
    if not model.layers:
        model.outputs = []
        model.inbound_nodes = []
        model.outbound_nodes = []
    else:
        model.layers[-1].outbound_nodes = []
        model.outputs = [model.layers[-1].output]
    model.built = False
I know something is not working right because I get low accuracy when running this on the Kaggle cats vs. dogs competition, where I hover around 90% while others running this code (it's adapted from fast.ai) on top of Theano easily get 97%. Perhaps my accuracy problem comes from somewhere else, but I still don't think dense_3 should be dangling there, and I wonder whether it could be the source of the problem.
How can I definitively disconnect and remove dense_3?
See below for model.summary() before and after running the code that prepares for fine-tuning. We don't see dense_3 anymore, but we do see it in the TensorBoard graph.
Before running
Layer (type) Output Shape Param #
=================================================================
lambda_1 (Lambda) (None, 3, 224, 224) 0
_________________________________________________________________
zero_padding2d_1 (ZeroPaddin (None, 3, 226, 226) 0
_________________________________________________________________
conv2d_1 (Conv2D) (None, 64, 224, 224) 1792
_________________________________________________________________
zero_padding2d_2 (ZeroPaddin (None, 64, 226, 226) 0
_________________________________________________________________
conv2d_2 (Conv2D) (None, 64, 224, 224) 36928
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 64, 112, 112) 0
_________________________________________________________________
zero_padding2d_3 (ZeroPaddin (None, 64, 114, 114) 0
_________________________________________________________________
conv2d_3 (Conv2D) (None, 128, 112, 112) 73856
_________________________________________________________________
zero_padding2d_4 (ZeroPaddin (None, 128, 114, 114) 0
_________________________________________________________________
conv2d_4 (Conv2D) (None, 128, 112, 112) 147584
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 128, 56, 56) 0
_________________________________________________________________
zero_padding2d_5 (ZeroPaddin (None, 128, 58, 58) 0
_________________________________________________________________
conv2d_5 (Conv2D) (None, 256, 56, 56) 295168
_________________________________________________________________
zero_padding2d_6 (ZeroPaddin (None, 256, 58, 58) 0
_________________________________________________________________
conv2d_6 (Conv2D) (None, 256, 56, 56) 590080
_________________________________________________________________
zero_padding2d_7 (ZeroPaddin (None, 256, 58, 58) 0
_________________________________________________________________
conv2d_7 (Conv2D) (None, 256, 56, 56) 590080
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 256, 28, 28) 0
_________________________________________________________________
zero_padding2d_8 (ZeroPaddin (None, 256, 30, 30) 0
_________________________________________________________________
conv2d_8 (Conv2D) (None, 512, 28, 28) 1180160
_________________________________________________________________
zero_padding2d_9 (ZeroPaddin (None, 512, 30, 30) 0
_________________________________________________________________
conv2d_9 (Conv2D) (None, 512, 28, 28) 2359808
_________________________________________________________________
zero_padding2d_10 (ZeroPaddi (None, 512, 30, 30) 0
_________________________________________________________________
conv2d_10 (Conv2D) (None, 512, 28, 28) 2359808
_________________________________________________________________
max_pooling2d_4 (MaxPooling2 (None, 512, 14, 14) 0
_________________________________________________________________
zero_padding2d_11 (ZeroPaddi (None, 512, 16, 16) 0
_________________________________________________________________
conv2d_11 (Conv2D) (None, 512, 14, 14) 2359808
_________________________________________________________________
zero_padding2d_12 (ZeroPaddi (None, 512, 16, 16) 0
_________________________________________________________________
conv2d_12 (Conv2D) (None, 512, 14, 14) 2359808
_________________________________________________________________
zero_padding2d_13 (ZeroPaddi (None, 512, 16, 16) 0
_________________________________________________________________
conv2d_13 (Conv2D) (None, 512, 14, 14) 2359808
_________________________________________________________________
max_pooling2d_5 (MaxPooling2 (None, 512, 7, 7) 0
_________________________________________________________________
flatten_1 (Flatten) (None, 25088) 0
_________________________________________________________________
dense_1 (Dense) (None, 4096) 102764544
_________________________________________________________________
dropout_1 (Dropout) (None, 4096) 0
_________________________________________________________________
dense_2 (Dense) (None, 4096) 16781312
_________________________________________________________________
dropout_2 (Dropout) (None, 4096) 0
_________________________________________________________________
dense_3 (Dense) (None, 1000) 4097000
=================================================================
Total params: 138,357,544
Trainable params: 138,357,544
Non-trainable params: 0
After running
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
lambda_1 (Lambda) (None, 3, 224, 224) 0
_________________________________________________________________
zero_padding2d_1 (ZeroPaddin (None, 3, 226, 226) 0
_________________________________________________________________
conv2d_1 (Conv2D) (None, 64, 224, 224) 1792
_________________________________________________________________
zero_padding2d_2 (ZeroPaddin (None, 64, 226, 226) 0
_________________________________________________________________
conv2d_2 (Conv2D) (None, 64, 224, 224) 36928
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 64, 112, 112) 0
_________________________________________________________________
zero_padding2d_3 (ZeroPaddin (None, 64, 114, 114) 0
_________________________________________________________________
conv2d_3 (Conv2D) (None, 128, 112, 112) 73856
_________________________________________________________________
zero_padding2d_4 (ZeroPaddin (None, 128, 114, 114) 0
_________________________________________________________________
conv2d_4 (Conv2D) (None, 128, 112, 112) 147584
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 128, 56, 56) 0
_________________________________________________________________
zero_padding2d_5 (ZeroPaddin (None, 128, 58, 58) 0
_________________________________________________________________
conv2d_5 (Conv2D) (None, 256, 56, 56) 295168
_________________________________________________________________
zero_padding2d_6 (ZeroPaddin (None, 256, 58, 58) 0
_________________________________________________________________
conv2d_6 (Conv2D) (None, 256, 56, 56) 590080
_________________________________________________________________
zero_padding2d_7 (ZeroPaddin (None, 256, 58, 58) 0
_________________________________________________________________
conv2d_7 (Conv2D) (None, 256, 56, 56) 590080
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 256, 28, 28) 0
_________________________________________________________________
zero_padding2d_8 (ZeroPaddin (None, 256, 30, 30) 0
_________________________________________________________________
conv2d_8 (Conv2D) (None, 512, 28, 28) 1180160
_________________________________________________________________
zero_padding2d_9 (ZeroPaddin (None, 512, 30, 30) 0
_________________________________________________________________
conv2d_9 (Conv2D) (None, 512, 28, 28) 2359808
_________________________________________________________________
zero_padding2d_10 (ZeroPaddi (None, 512, 30, 30) 0
_________________________________________________________________
conv2d_10 (Conv2D) (None, 512, 28, 28) 2359808
_________________________________________________________________
max_pooling2d_4 (MaxPooling2 (None, 512, 14, 14) 0
_________________________________________________________________
zero_padding2d_11 (ZeroPaddi (None, 512, 16, 16) 0
_________________________________________________________________
conv2d_11 (Conv2D) (None, 512, 14, 14) 2359808
_________________________________________________________________
zero_padding2d_12 (ZeroPaddi (None, 512, 16, 16) 0
_________________________________________________________________
conv2d_12 (Conv2D) (None, 512, 14, 14) 2359808
_________________________________________________________________
zero_padding2d_13 (ZeroPaddi (None, 512, 16, 16) 0
_________________________________________________________________
conv2d_13 (Conv2D) (None, 512, 14, 14) 2359808
_________________________________________________________________
max_pooling2d_5 (MaxPooling2 (None, 512, 7, 7) 0
_________________________________________________________________
flatten_1 (Flatten) (None, 25088) 0
_________________________________________________________________
dense_1 (Dense) (None, 4096) 102764544
_________________________________________________________________
dropout_1 (Dropout) (None, 4096) 0
_________________________________________________________________
dense_2 (Dense) (None, 4096) 16781312
_________________________________________________________________
dropout_2 (Dropout) (None, 4096) 0
_________________________________________________________________
dense_4 (Dense) (None, 2) 8194
=================================================================
Total params: 134,268,738
Trainable params: 8,194
Non-trainable params: 134,260,544
I believe this is a problem with the implementation of layers.pop() in Keras when using the TensorFlow backend. For now, here is a work-around that removes the last layer by name:
name_last_layer = str(model1.layers[-1])
model2 = Sequential()
for layer in model1.layers:
    if str(layer) != name_last_layer:
        model2.add(layer)
Where model1 is your original model and model2 is the same model without its last layer. In this example I've made model2 a Sequential model, but you can of course change this.
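An alternative sketch that avoids mutating the Sequential layer list at all: rebuild a functional model that ends one layer earlier, then attach the new head (reusing the num variable from the question's snippet):

from keras.models import Model
from keras.layers import Dense

truncated = Model(model1.input, model1.layers[-2].output)  # drop the old dense_3 head
for layer in truncated.layers:
    layer.trainable = False
new_output = Dense(num, activation='softmax')(truncated.output)
model2 = Model(truncated.input, new_output)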
