How to create a sub-model based on fine-tuned VGGNet16 - machine-learning

The following network architecture is designed to find the similarity between two images.
Initially, I took VGGNet16 and removed the classification head:
vgg_model = VGG16(weights="imagenet", include_top=False,
input_tensor=Input(shape=(img_width, img_height, channels)))
Afterward, I set layer.trainable = False for every VGG16 layer, so that the network works as a pure feature extractor.
I passed two different images to the network:
encoded_left = vgg_model(input_left)
encoded_right = vgg_model(input_right)
This produces two feature maps. Then, for the classification (whether they are similar or not), I used a metric network that consists of two convolution layers, each followed by pooling, and four fully connected layers.
merge(encoded_left, encoded_right) -> conv-pool -> conv-pool -> reshape -> dense * 4 -> output
Hence, the model looks like:
model = Model(inputs=[left_image, right_image], outputs=output)
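For reference, here is a rough sketch of the whole construction as I understand it (my own reconstruction, not the exact code; the metric-network layer sizes come from the summary below, while the activations are guesses and the BatchNormalization layers are omitted for brevity):
from keras.applications.vgg16 import VGG16
from keras.layers import Input, Conv2D, MaxPooling2D, Concatenate, Reshape, Dense
from keras.models import Model

img_width, img_height, channels = 224, 224, 3

vgg_model = VGG16(weights="imagenet", include_top=False,
                  input_tensor=Input(shape=(img_width, img_height, channels)))
for layer in vgg_model.layers:
    layer.trainable = False  # phase 1: VGG16 acts as a frozen feature extractor

left_image = Input(shape=(img_width, img_height, channels))
right_image = Input(shape=(img_width, img_height, channels))
encoded_left = vgg_model(left_image)
encoded_right = vgg_model(right_image)

x = Concatenate(name='Merged_feature_map')([encoded_left, encoded_right])
x = Conv2D(1024, 3, padding='same', activation='relu', name='mnet_conv1')(x)
x = MaxPooling2D(name='mnet_pool1')(x)
x = Conv2D(2048, 3, padding='same', activation='relu', name='mnet_conv2')(x)
x = MaxPooling2D(name='mnet_pool2')(x)
x = Reshape((1, 2048))(x)
x = Dense(256, activation='relu', name='fc1')(x)
x = Dense(128, activation='relu', name='fc2')(x)
x = Dense(64, activation='relu', name='fc3')(x)
x = Dense(1, activation='sigmoid', name='fc4')(x)
output = Reshape((1,))(x)

model = Model(inputs=[left_image, right_image], outputs=output)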
After training only the metric network, I set the last convolution block of VGG16 to be trainable in order to fine-tune it. Therefore, in the second training phase, the last convolution block is trained along with the metric network.
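(A minimal sketch of that second phase, assuming the stock VGG16 layer names; this is my assumption of how it could be done, not the original code:)
for layer in vgg_model.layers:
    layer.trainable = layer.name.startswith('block5')  # unfreeze only the last conv block
# Re-compile so the change in trainable flags takes effect.
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])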
Now I want to use this fine-tuned network for another purpose. Here is the network summary:
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_1 (InputLayer) (None, 224, 224, 3) 0
__________________________________________________________________________________________________
input_2 (InputLayer) (None, 224, 224, 3) 0
__________________________________________________________________________________________________
vgg16 (Model) (None, 7, 7, 512) 14714688 input_1[0][0]
input_2[0][0]
__________________________________________________________________________________________________
Merged_feature_map (Concatenate (None, 7, 7, 1024) 0 vgg16[1][0]
vgg16[2][0]
__________________________________________________________________________________________________
mnet_conv1 (Conv2D) (None, 7, 7, 1024) 4195328 Merged_feature_map[0][0]
__________________________________________________________________________________________________
batch_normalization_1 (BatchNor (None, 7, 7, 1024) 4096 mnet_conv1[0][0]
__________________________________________________________________________________________________
activation_1 (Activation) (None, 7, 7, 1024) 0 batch_normalization_1[0][0]
__________________________________________________________________________________________________
mnet_pool1 (MaxPooling2D) (None, 3, 3, 1024) 0 activation_1[0][0]
__________________________________________________________________________________________________
mnet_conv2 (Conv2D) (None, 3, 3, 2048) 8390656 mnet_pool1[0][0]
__________________________________________________________________________________________________
batch_normalization_2 (BatchNor (None, 3, 3, 2048) 8192 mnet_conv2[0][0]
__________________________________________________________________________________________________
activation_2 (Activation) (None, 3, 3, 2048) 0 batch_normalization_2[0][0]
__________________________________________________________________________________________________
mnet_pool2 (MaxPooling2D) (None, 1, 1, 2048) 0 activation_2[0][0]
__________________________________________________________________________________________________
reshape_1 (Reshape) (None, 1, 2048) 0 mnet_pool2[0][0]
__________________________________________________________________________________________________
fc1 (Dense) (None, 1, 256) 524544 reshape_1[0][0]
__________________________________________________________________________________________________
batch_normalization_3 (BatchNor (None, 1, 256) 1024 fc1[0][0]
__________________________________________________________________________________________________
activation_3 (Activation) (None, 1, 256) 0 batch_normalization_3[0][0]
__________________________________________________________________________________________________
fc2 (Dense) (None, 1, 128) 32896 activation_3[0][0]
__________________________________________________________________________________________________
batch_normalization_4 (BatchNor (None, 1, 128) 512 fc2[0][0]
__________________________________________________________________________________________________
activation_4 (Activation) (None, 1, 128) 0 batch_normalization_4[0][0]
__________________________________________________________________________________________________
fc3 (Dense) (None, 1, 64) 8256 activation_4[0][0]
__________________________________________________________________________________________________
batch_normalization_5 (BatchNor (None, 1, 64) 256 fc3[0][0]
__________________________________________________________________________________________________
activation_5 (Activation) (None, 1, 64) 0 batch_normalization_5[0][0]
__________________________________________________________________________________________________
fc4 (Dense) (None, 1, 1) 65 activation_5[0][0]
__________________________________________________________________________________________________
batch_normalization_6 (BatchNor (None, 1, 1) 4 fc4[0][0]
__________________________________________________________________________________________________
activation_6 (Activation) (None, 1, 1) 0 batch_normalization_6[0][0]
__________________________________________________________________________________________________
reshape_2 (Reshape) (None, 1) 0 activation_6[0][0]
==================================================================================================
Total params: 27,880,517
Trainable params: 13,158,787
Non-trainable params: 14,721,730
As the last convolution block of VGG16 has already been fine-tuned on the custom dataset, I want to cut the network at the layer:
__________________________________________________________________________________________________
vgg16 (Model) (None, 7, 7, 512) 14714688 input_1[0][0]
input_2[0][0]
__________________________________________________________________________________________________
and use this as a powerful feature extractor. For this task, I loaded the fine-tuned model:
model = load_model('model.h5')
then tried to create the new model as:
new_model = Model(Input(shape=(img_width, img_height, channels)), model.layers[2].output)
This results in the following error:
`AttributeError: Layer vgg16 has multiple inbound nodes, hence the notion of "layer output" is ill-defined. Use `get_output_at(node_index)` instead.`
Please advise me where I am going wrong.

I have tried several ways, but the following method works perfectly. Instead of creating the new model as:
model = load_model('model.h5')
new_model = Model(Input(shape=(img_width, img_height, channels)), model.layers[2].output)
I used the following way:
from keras.models import Sequential, load_model

model = load_model('model.h5')
sub_model = Sequential()
for layer in model.get_layer('vgg16').layers:
    sub_model.add(layer)
I hope this will help others.
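As a side note, another approach that should also work (an untested sketch on my part, not part of the original answer) is to call the fine-tuned vgg16 sub-model on a fresh input tensor, which side-steps the multiple-inbound-nodes problem; img_width, img_height and channels are assumed to be defined as in the question:
from keras.layers import Input
from keras.models import Model, load_model

model = load_model('model.h5')
vgg = model.get_layer('vgg16')                      # the fine-tuned, shared VGG16 sub-model
new_input = Input(shape=(img_width, img_height, channels))
features = vgg(new_input)                           # creates a single, unambiguous inbound node
feature_extractor = Model(new_input, features)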

Related

Understanding CNN by visualizing class activations using GRAD_CAM

I followed the blog "Where CNN is looking?" to understand and visualize the class activations for a given prediction. The given example works very well.
I have developed a custom model using autoencoders for image similarity. The model accepts two images and predicts a similarity score. The model has the following layers:
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_1 (InputLayer) (None, 256, 256, 3) 0
__________________________________________________________________________________________________
input_2 (InputLayer) (None, 256, 256, 3) 0
__________________________________________________________________________________________________
encoder (Sequential) (None, 7, 7, 256) 3752704 input_1[0][0]
input_2[0][0]
__________________________________________________________________________________________________
Merged_feature_map (Concatenate (None, 7, 7, 512) 0 encoder[1][0]
encoder[2][0]
__________________________________________________________________________________________________
mnet_conv1 (Conv2D) (None, 7, 7, 1024) 2098176 Merged_feature_map[0][0]
__________________________________________________________________________________________________
batch_normalization_1 (BatchNor (None, 7, 7, 1024) 4096 mnet_conv1[0][0]
__________________________________________________________________________________________________
activation_1 (Activation) (None, 7, 7, 1024) 0 batch_normalization_1[0][0]
__________________________________________________________________________________________________
mnet_pool1 (MaxPooling2D) (None, 3, 3, 1024) 0 activation_1[0][0]
__________________________________________________________________________________________________
mnet_conv2 (Conv2D) (None, 3, 3, 2048) 8390656 mnet_pool1[0][0]
__________________________________________________________________________________________________
batch_normalization_2 (BatchNor (None, 3, 3, 2048) 8192 mnet_conv2[0][0]
__________________________________________________________________________________________________
activation_2 (Activation) (None, 3, 3, 2048) 0 batch_normalization_2[0][0]
__________________________________________________________________________________________________
mnet_pool2 (MaxPooling2D) (None, 1, 1, 2048) 0 activation_2[0][0]
__________________________________________________________________________________________________
reshape_1 (Reshape) (None, 1, 2048) 0 mnet_pool2[0][0]
__________________________________________________________________________________________________
fc1 (Dense) (None, 1, 256) 524544 reshape_1[0][0]
__________________________________________________________________________________________________
batch_normalization_3 (BatchNor (None, 1, 256) 1024 fc1[0][0]
__________________________________________________________________________________________________
activation_3 (Activation) (None, 1, 256) 0 batch_normalization_3[0][0]
__________________________________________________________________________________________________
dropout_1 (Dropout) (None, 1, 256) 0 activation_3[0][0]
__________________________________________________________________________________________________
fc2 (Dense) (None, 1, 128) 32896 dropout_1[0][0]
__________________________________________________________________________________________________
batch_normalization_4 (BatchNor (None, 1, 128) 512 fc2[0][0]
__________________________________________________________________________________________________
activation_4 (Activation) (None, 1, 128) 0 batch_normalization_4[0][0]
__________________________________________________________________________________________________
dropout_2 (Dropout) (None, 1, 128) 0 activation_4[0][0]
__________________________________________________________________________________________________
fc3 (Dense) (None, 1, 64) 8256 dropout_2[0][0]
__________________________________________________________________________________________________
batch_normalization_5 (BatchNor (None, 1, 64) 256 fc3[0][0]
__________________________________________________________________________________________________
activation_5 (Activation) (None, 1, 64) 0 batch_normalization_5[0][0]
__________________________________________________________________________________________________
dropout_3 (Dropout) (None, 1, 64) 0 activation_5[0][0]
__________________________________________________________________________________________________
fc4 (Dense) (None, 1, 1) 65 dropout_3[0][0]
__________________________________________________________________________________________________
batch_normalization_6 (BatchNor (None, 1, 1) 4 fc4[0][0]
__________________________________________________________________________________________________
activation_6 (Activation) (None, 1, 1) 0 batch_normalization_6[0][0]
__________________________________________________________________________________________________
dropout_4 (Dropout) (None, 1, 1) 0 activation_6[0][0]
__________________________________________________________________________________________________
reshape_2 (Reshape) (None, 1) 0 dropout_4[0][0]
==================================================================================================
The encoder layer consists of the following layers:
conv2d_1
batch_normalization_1
activation_1
max_pooling2d_1
conv2d_2
batch_normalization_2
activation_2
max_pooling2d_2
conv2d_3
batch_normalization_3
activation_3
conv2d_4
batch_normalization_4
activation_4
conv2d_5
batch_normalization_5
activation_5
max_pooling2d_3
I want to change my custom network to accept one input instead of two, using the encoder part only, and generate the heatmaps to understand what the encoder part has learned.
The idea is that, in case the network predicts 'not similar', I can generate the heatmaps of the images one by one and compare them.
What I have done is the following:
I have passed the two images to the network and got the prediction as described in the blog:
preds = model.predict([x, y])
class_idx = np.argmax(preds[0])
class_output = model.output[:, class_idx]
Then I selected the last convolutional layer and computed the gradient of the class output value with respect to its feature map.
last_conv_layer = model.get_layer('encoder')
grads = K.gradients(class_output, last_conv_layer.get_output_at(-1))[0]
The output of grads:
Tensor("gradients/Merged_feature_map/concat_grad/Slice_1:0", shape=(?, 7, 7, 256), dtype=float32)
Then I pooled the gradients as described in the blog:
pooled_grads = K.mean(grads, axis=(0, 1, 2))
iterate = K.function([input_img], [pooled_grads, last_conv_layer.get_output_at(-1)[0]])
At this point, when I checked the inputs and outputs, it showed the following:
iterate.inputs
[<tf.Tensor 'input_1:0' shape=(?, 256, 256, 3) dtype=float32>]
iterate.outputs
[<tf.Tensor 'Mean:0' shape=(256,) dtype=float32>, <tf.Tensor 'strided_slice_1:0' shape=(7, 7, 256) dtype=float32>]
But I am now getting an error on the following line of code:
pooled_grads_value, conv_layer_output_value = iterate([x])
The error is:
You must feed a value for placeholder tensor 'input_2' with dtype float and shape [?,256,256,3]
[[{{node input_2}}]]
It seems that it is asking for the second image input, but as seen above, iterate.inputs contains only one image.
Where have I made a mistake? How can I limit it to accept only one image? Or is there any other, better way to achieve the task?
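For what it is worth, one possible workaround (my own sketch, not something verified in the post): since class_output depends on both branches of the graph, both model inputs can be declared in the K.function and fed together, even if only one branch's activations are inspected; x and y are the two preprocessed images from the prediction step above:
pooled_grads = K.mean(grads, axis=(0, 1, 2))
# Feed both inputs of the two-branch model so every placeholder is satisfied.
iterate = K.function(model.inputs,
                     [pooled_grads, last_conv_layer.get_output_at(-1)[0]])
pooled_grads_value, conv_layer_output_value = iterate([x, y])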

convolution and recurrent neural network

How can I combine a convolutional neural network and a recurrent neural network for image segmentation?
Summary of my model:
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_34 (InputLayer) (None, 128, 128, 3) 0
__________________________________________________________________________________________________
conv2d_277 (Conv2D) (None, 128, 128, 16) 448 input_34[0][0]
__________________________________________________________________________________________________
max_pooling2d_109 (MaxPooling2D (None, 64, 64, 16) 0 conv2d_277[0][0]
__________________________________________________________________________________________________
conv2d_278 (Conv2D) (None, 64, 64, 32) 4640 max_pooling2d_109[0][0]
__________________________________________________________________________________________________
max_pooling2d_110 (MaxPooling2D (None, 32, 32, 32) 0 conv2d_278[0][0]
__________________________________________________________________________________________________
conv2d_279 (Conv2D) (None, 32, 32, 64) 18496 max_pooling2d_110[0][0]
__________________________________________________________________________________________________
max_pooling2d_111 (MaxPooling2D (None, 16, 16, 64) 0 conv2d_279[0][0]
__________________________________________________________________________________________________
conv2d_280 (Conv2D) (None, 16, 16, 128) 73856 max_pooling2d_111[0][0]
__________________________________________________________________________________________________
max_pooling2d_112 (MaxPooling2D (None, 8, 8, 128) 0 conv2d_280[0][0]
__________________________________________________________________________________________________
conv2d_281 (Conv2D) (None, 8, 8, 256) 295168 max_pooling2d_112[0][0]
__________________________________________________________________________________________________
up_sampling2d_109 (UpSampling2D (None, 16, 16, 256) 0 conv2d_281[0][0]
__________________________________________________________________________________________________
concatenate_109 (Concatenate) (None, 16, 16, 384) 0 up_sampling2d_109[0][0]
conv2d_280[0][0]
__________________________________________________________________________________________________
conv2d_282 (Conv2D) (None, 16, 16, 128) 442496 concatenate_109[0][0]
__________________________________________________________________________________________________
up_sampling2d_110 (UpSampling2D (None, 32, 32, 128) 0 conv2d_282[0][0]
__________________________________________________________________________________________________
concatenate_110 (Concatenate) (None, 32, 32, 192) 0 up_sampling2d_110[0][0]
conv2d_279[0][0]
__________________________________________________________________________________________________
conv2d_283 (Conv2D) (None, 32, 32, 64) 110656 concatenate_110[0][0]
__________________________________________________________________________________________________
up_sampling2d_111 (UpSampling2D (None, 64, 64, 64) 0 conv2d_283[0][0]
__________________________________________________________________________________________________
concatenate_111 (Concatenate) (None, 64, 64, 96) 0 up_sampling2d_111[0][0]
conv2d_278[0][0]
__________________________________________________________________________________________________
conv2d_284 (Conv2D) (None, 64, 64, 32) 27680 concatenate_111[0][0]
__________________________________________________________________________________________________
up_sampling2d_112 (UpSampling2D (None, 128, 128, 32) 0 conv2d_284[0][0]
__________________________________________________________________________________________________
concatenate_112 (Concatenate) (None, 128, 128, 48) 0 up_sampling2d_112[0][0]
conv2d_277[0][0]
__________________________________________________________________________________________________
conv2d_285 (Conv2D) (None, 128, 128, 16) 6928 concatenate_112[0][0]
__________________________________________________________________________________________________
conv2d_286 (Conv2D) (None, 128, 128, 1) 17 conv2d_285[0][0]
__________________________________________________________________________________________________
lambda_49 (Lambda) (16384, 1, 1) 0 conv2d_286[0][0]
__________________________________________________________________________________________________
lstm_28 (LSTM) (16384, 1) 12 lambda_49[0][0]
__________________________________________________________________________________________________
dense_26 (Dense) (16384, 1) 2 lstm_28[0][0]
__________________________________________________________________________________________________
lambda_50 (Lambda) (128, 128, 1) 0 dense_26[0][0]
==================================================================================================
Total params: 980,399
Trainable params: 980,399
Non-trainable params: 0
I want to pass the output of my convolutional neural network as an input to the recurrent neural network, but it shows an error on this line: model = keras.models.Model(inputs, outputs)
The error: AttributeError: 'tuple' object has no attribute 'ndim'
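For reference, a minimal sketch of the conv-to-LSTM hand-off (my own illustration, not the poster's code): reshaping the convolutional output into a (timesteps, features) sequence with a Reshape layer keeps the batch dimension intact, rather than folding pixels into the batch axis inside a Lambda. Note that 16,384 timesteps is very long for an LSTM, so this only illustrates the wiring:
from keras.layers import Input, Conv2D, Reshape, LSTM
from keras.models import Model

inputs = Input(shape=(128, 128, 3))
x = Conv2D(16, 3, padding='same', activation='relu')(inputs)  # ... rest of the encoder/decoder ...
x = Conv2D(1, 1, activation='sigmoid')(x)                     # (None, 128, 128, 1)
seq = Reshape((128 * 128, 1))(x)                              # (None, 16384, 1): 16384 timesteps, 1 feature
seq = LSTM(1, return_sequences=True)(seq)                     # (None, 16384, 1)
outputs = Reshape((128, 128, 1))(seq)                         # back to an image-shaped mask
model = Model(inputs, outputs)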

How to calculate RAM memory needed for training?

How can I calculate the RAM needed for training a Keras model? I want to calculate this because I sometimes encounter an "exceeds system memory" error when training models. Here is my model, for example:
Layer (type) Output Shape Param #
=================================================================
conv2d_1 (Conv2D) (None, 30, 30, 32) 320
_________________________________________________________________
conv2d_2 (Conv2D) (None, 28, 28, 32) 9248
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 14, 14, 32) 0
_________________________________________________________________
dropout_1 (Dropout) (None, 14, 14, 32) 0
_________________________________________________________________
conv2d_3 (Conv2D) (None, 12, 12, 64) 18496
_________________________________________________________________
conv2d_4 (Conv2D) (None, 10, 10, 64) 36928
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 5, 5, 64) 0
_________________________________________________________________
dropout_2 (Dropout) (None, 5, 5, 64) 0
_________________________________________________________________
flatten_1 (Flatten) (None, 1600) 0
_________________________________________________________________
dense_1 (Dense) (None, 128) 204928
_________________________________________________________________
dropout_3 (Dropout) (None, 128) 0
_________________________________________________________________
dense_2 (Dense) (None, 10) 1290
=================================================================
Total params: 271,210
Trainable params: 271,210
Non-trainable params: 0
Assuming each parameter is a 32-bit datatype (single-precision floating point, 4 bytes), the memory usage for the weights should be somewhere around (# of params) × 4 B.
In this case: 271,210 × 4 B = 1,084,840 B ≈ 1 MB.
However, there is an important consideration to keep in mind: this assumes a batch size of 1, i.e. you are loading one input at a time. If you are using minibatches (typical batch sizes are 32 or 64), the memory needed for the input batch and the intermediate activations scales with the batch size, so you have to multiply that part of the calculation accordingly. If you are using batch gradient descent, then you may be using your entire dataset in each batch, and your memory requirements could be enormous.
This analysis was inspired by: https://datascience.stackexchange.com/questions/17286/cnn-memory-consumption
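A rough sketch of that back-of-the-envelope calculation in code (my own estimate, assuming the model object is called model; it counts parameters and per-sample layer activations at 4 bytes each and ignores gradients, optimizer state and framework overhead):
import numpy as np

def estimate_memory_mb(model, batch_size=1, bytes_per_value=4):
    # Parameters are stored once, independent of the batch size.
    param_bytes = model.count_params() * bytes_per_value
    # Activations are stored per sample in the batch.
    activation_values = sum(
        np.prod([d for d in layer.output_shape if d is not None])
        for layer in model.layers)
    activation_bytes = activation_values * bytes_per_value * batch_size
    return (param_bytes + activation_bytes) / (1024.0 ** 2)

print('approx. %.1f MB' % estimate_memory_mb(model, batch_size=64))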

Validation loss that doesn't decrease in Keras image classification

I'm trying to fine-tune my VGG19 model with a bunch of images for classification.
I have 18 classes with 6,000 well-curated images in each.
Using Keras 2.2.4.
Model:
INIT_LR = 0.00001
BATCH_SIZE = 128
IMG_SIZE = (256, 256)
epochs = 150
model_base = keras.applications.vgg19.VGG19(include_top=False, input_shape=(*IMG_SIZE, 3), weights='imagenet')
output = Flatten()(model_base.output)
output = BatchNormalization()(output)
output = Dropout(0.5)(output)
output = Dense(64, activation='relu')(output)
output = BatchNormalization()(output)
output = Dropout(0.5)(output)
output = Dense(len(all_character_names), activation='softmax')(output)
model = Model(model_base.input, output)
for layer in model_base.layers[:-10]:
    layer.trainable = False
opt = optimizers.Adam(lr=INIT_LR, decay=INIT_LR / epochs)
model.compile(optimizer=opt,
loss='categorical_crossentropy',
metrics=['accuracy', 'top_k_categorical_accuracy'])
Data augmentation:
image_datagen = ImageDataGenerator(
rotation_range=15,
width_shift_range=.15,
height_shift_range=.15,
#rescale=1./255,
shear_range=0.15,
zoom_range=0.15,
channel_shift_range=1,
vertical_flip=True,
horizontal_flip=True)
Model train:
validation_steps = data_generator.validation_samples/BATCH_SIZE
steps_per_epoch = data_generator.train_samples/BATCH_SIZE
model.fit_generator(
generator,
steps_per_epoch=steps_per_epoch,
epochs=epochs,
validation_data=validation_data,
validation_steps=validation_steps
)
Model summary:
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) (None, 256, 256, 3) 0
_________________________________________________________________
block1_conv1 (Conv2D) (None, 256, 256, 64) 1792
_________________________________________________________________
block1_conv2 (Conv2D) (None, 256, 256, 64) 36928
_________________________________________________________________
block1_pool (MaxPooling2D) (None, 128, 128, 64) 0
_________________________________________________________________
block2_conv1 (Conv2D) (None, 128, 128, 128) 73856
_________________________________________________________________
block2_conv2 (Conv2D) (None, 128, 128, 128) 147584
_________________________________________________________________
block2_pool (MaxPooling2D) (None, 64, 64, 128) 0
_________________________________________________________________
block3_conv1 (Conv2D) (None, 64, 64, 256) 295168
_________________________________________________________________
block3_conv2 (Conv2D) (None, 64, 64, 256) 590080
_________________________________________________________________
block3_conv3 (Conv2D) (None, 64, 64, 256) 590080
_________________________________________________________________
block3_conv4 (Conv2D) (None, 64, 64, 256) 590080
_________________________________________________________________
block3_pool (MaxPooling2D) (None, 32, 32, 256) 0
_________________________________________________________________
block4_conv1 (Conv2D) (None, 32, 32, 512) 1180160
_________________________________________________________________
block4_conv2 (Conv2D) (None, 32, 32, 512) 2359808
_________________________________________________________________
block4_conv3 (Conv2D) (None, 32, 32, 512) 2359808
_________________________________________________________________
block4_conv4 (Conv2D) (None, 32, 32, 512) 2359808
_________________________________________________________________
block4_pool (MaxPooling2D) (None, 16, 16, 512) 0
_________________________________________________________________
block5_conv1 (Conv2D) (None, 16, 16, 512) 2359808
_________________________________________________________________
block5_conv2 (Conv2D) (None, 16, 16, 512) 2359808
_________________________________________________________________
block5_conv3 (Conv2D) (None, 16, 16, 512) 2359808
_________________________________________________________________
block5_conv4 (Conv2D) (None, 16, 16, 512) 2359808
_________________________________________________________________
block5_pool (MaxPooling2D) (None, 8, 8, 512) 0
_________________________________________________________________
flatten_1 (Flatten) (None, 32768) 0
_________________________________________________________________
batch_normalization_1 (Batch (None, 32768) 131072
_________________________________________________________________
dropout_1 (Dropout) (None, 32768) 0
_________________________________________________________________
dense_1 (Dense) (None, 64) 2097216
_________________________________________________________________
batch_normalization_2 (Batch (None, 64) 256
_________________________________________________________________
dropout_2 (Dropout) (None, 64) 0
_________________________________________________________________
dense_2 (Dense) (None, 19) 1235
=================================================================
Total params: 22,254,163
Trainable params: 19,862,931
Non-trainable params: 2,391,232
_________________________________________________________________
<keras.engine.input_layer.InputLayer object at 0x00000224568D0D68> False
<keras.layers.convolutional.Conv2D object at 0x00000224568D0F60> False
<keras.layers.convolutional.Conv2D object at 0x00000224568F0438> False
<keras.layers.pooling.MaxPooling2D object at 0x00000224570A5860> False
<keras.layers.convolutional.Conv2D object at 0x00000224570A58D0> False
<keras.layers.convolutional.Conv2D object at 0x00000224574196D8> False
<keras.layers.pooling.MaxPooling2D object at 0x0000022457524048> False
<keras.layers.convolutional.Conv2D object at 0x0000022457524D30> False
<keras.layers.convolutional.Conv2D object at 0x0000022457053160> False
<keras.layers.convolutional.Conv2D object at 0x00000224572E15C0> False
<keras.layers.convolutional.Conv2D object at 0x000002245707B080> False
<keras.layers.pooling.MaxPooling2D object at 0x0000022457088400> False
<keras.layers.convolutional.Conv2D object at 0x0000022457088E10> True
<keras.layers.convolutional.Conv2D object at 0x00000224575DB240> True
<keras.layers.convolutional.Conv2D object at 0x000002245747A320> True
<keras.layers.convolutional.Conv2D object at 0x0000022457486160> True
<keras.layers.pooling.MaxPooling2D object at 0x00000224574924E0> True
<keras.layers.convolutional.Conv2D object at 0x0000022457492D68> True
<keras.layers.convolutional.Conv2D object at 0x00000224574AD320> True
<keras.layers.convolutional.Conv2D object at 0x00000224574C6400> True
<keras.layers.convolutional.Conv2D object at 0x00000224574D2240> True
<keras.layers.pooling.MaxPooling2D object at 0x00000224574DAF98> True
<keras.layers.core.Flatten object at 0x00000224574EA080> True
<keras.layers.normalization.BatchNormalization object at 0x00000224574F82B0> True
<keras.layers.core.Dropout object at 0x000002247134BA58> True
<keras.layers.core.Dense object at 0x000002247136A7B8> True
<keras.layers.normalization.BatchNormalization object at 0x0000022471324438> True
<keras.layers.core.Dropout object at 0x00000224713249B0> True
<keras.layers.core.Dense object at 0x00000224713BF7F0> True
Batch size: 128
LR: 1e-05
The doomed graph (training/validation curves not reproduced here):
Tries:
Tried several LRs.
Tried without training the last 10 or 5 layers; it is worse, simply not converging.
Tried several batch sizes; 128 gives the best results.
Also tried ResNet50, but it did not converge at all (even with the last 3 layers trainable).
Tried VGG16 with not much luck.
I add about 2,000 new images each day to try to reach around 20,000 images per class, as I think this is where my problem lies.
In the lower layers, the network has learned low-level features like edges and contours; it is in the higher layers that these features are combined. In your case, you need much finer features, such as hair color or person size. One thing you can try is fine-tuning from the last few layers (blocks 4-5). You can also use different learning rates: very low for the VGG blocks and a little higher for the completely new dense layers. For the implementation, this thread will be helpful.
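A minimal sketch of the "fine-tune from blocks 4-5" suggestion, reusing the model_base, model and INIT_LR objects from the question (the per-layer learning-rate part is not shown here, since stock Keras needs the linked thread's approach for that):
from keras import optimizers

# Unfreeze only blocks 4 and 5 of the VGG19 base; the new head layers
# (not part of model_base) stay trainable as before.
for layer in model_base.layers:
    layer.trainable = layer.name.startswith(('block4', 'block5'))

# Recompile with a lower learning rate for fine-tuning.
model.compile(optimizer=optimizers.Adam(lr=INIT_LR / 10),
              loss='categorical_crossentropy',
              metrics=['accuracy', 'top_k_categorical_accuracy'])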

Keras - visualize classes on a CNN network

In order to generate Google DeepDream-like images, I am trying to modify input images by optimizing an InceptionV3 network with gradient ascent.
Desired effect: https://github.com/google/deepdream/blob/master/dream.ipynb
(For more info on this, refer to https://research.googleblog.com/2015/06/inceptionism-going-deeper-into-neural.html.)
For that matter, I have fine-tuned an Inception network using the transfer learning method, and have generated the model: inceptionv3-ft.model
model.summary() prints the following architecture (shortened here due to space limitations):
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_1 (InputLayer) (None, None, None, 3 0
__________________________________________________________________________________________________
conv2d_1 (Conv2D) (None, None, None, 3 864 input_1[0][0]
__________________________________________________________________________________________________
batch_normalization_1 (BatchNor (None, None, None, 3 96 conv2d_1[0][0]
__________________________________________________________________________________________________
activation_1 (Activation) (None, None, None, 3 0 batch_normalization_1[0][0]
__________________________________________________________________________________________________
conv2d_2 (Conv2D) (None, None, None, 3 9216 activation_1[0][0]
__________________________________________________________________________________________________
batch_normalization_2 (BatchNor (None, None, None, 3 96 conv2d_2[0][0]
__________________________________________________________________________________________________
activation_2 (Activation) (None, None, None, 3 0 batch_normalization_2[0][0]
__________________________________________________________________________________________________
conv2d_3 (Conv2D) (None, None, None, 6 18432 activation_2[0][0]
__________________________________________________________________________________________________
batch_normalization_3 (BatchNor (None, None, None, 6 192 conv2d_3[0][0]
__________________________________________________________________________________________________
activation_3 (Activation) (None, None, None, 6 0 batch_normalization_3[0][0]
__________________________________________________________________________________________________
max_pooling2d_1 (MaxPooling2D) (None, None, None, 6 0 activation_3[0][0]
__________________________________________________________________________________________________
conv2d_4 (Conv2D) (None, None, None, 8 5120 max_pooling2d_1[0][0]
__________________________________________________________________________________________________
batch_normalization_4 (BatchNor (None, None, None, 8 240 conv2d_4[0][0]
__________________________________________________________________________________________________
activation_4 (Activation) (None, None, None, 8 0 batch_normalization_4[0][0]
__________________________________________________________________________________________________
conv2d_5 (Conv2D) (None, None, None, 1 138240 activation_4[0][0]
__________________________________________________________________________________________________
batch_normalization_5 (BatchNor (None, None, None, 1 576 conv2d_5[0][0]
__________________________________________________________________________________________________
activation_5 (Activation) (None, None, None, 1 0 batch_normalization_5[0][0]
__________________________________________________________________________________________________
max_pooling2d_2 (MaxPooling2D) (None, None, None, 1 0 activation_5[0][0]
__________________________________________________________________________________________________
conv2d_9 (Conv2D) (None, None, None, 6 12288 max_pooling2d_2[0][0]
__________________________________________________________________________________________________
batch_normalization_9 (BatchNor (None, None, None, 6 192 conv2d_9[0][0]
__________________________________________________________________________________________________
activation_9 (Activation) (None, None, None, 6 0 batch_normalization_9[0][0]
__________________________________________________________________________________________________
conv2d_7 (Conv2D) (None, None, None, 4 9216 max_pooling2d_2[0][0]
__________________________________________________________________________________________________
conv2d_10 (Conv2D) (None, None, None, 9 55296 activation_9[0][0]
__________________________________________________________________________________________________
batch_normalization_7 (BatchNor (None, None, None, 4 144 conv2d_7[0][0]
__________________________________________________________________________________________________
batch_normalization_10 (BatchNo (None, None, None, 9 288 conv2d_10[0][0]
__________________________________________________________________________________________________
activation_7 (Activation) (None, None, None, 4 0 batch_normalization_7[0][0]
__________________________________________________________________________________________________
activation_10 (Activation) (None, None, None, 9 0 batch_normalization_10[0][0]
__________________________________________________________________________________________________
average_pooling2d_1 (AveragePoo (None, None, None, 1 0 max_pooling2d_2[0][0]
__________________________________________________________________________________________________
conv2d_6 (Conv2D) (None, None, None, 6 12288 max_pooling2d_2[0][0]
__________________________________________________________________________________________________
(...)
mixed9_1 (Concatenate) (None, None, None, 7 0 activation_88[0][0]
activation_89[0][0]
__________________________________________________________________________________________________
concatenate_2 (Concatenate) (None, None, None, 7 0 activation_92[0][0]
activation_93[0][0]
__________________________________________________________________________________________________
activation_94 (Activation) (None, None, None, 1 0 batch_normalization_94[0][0]
__________________________________________________________________________________________________
mixed10 (Concatenate) (None, None, None, 2 0 activation_86[0][0]
mixed9_1[0][0]
concatenate_2[0][0]
activation_94[0][0]
__________________________________________________________________________________________________
global_average_pooling2d_1 (Glo (None, 2048) 0 mixed10[0][0]
__________________________________________________________________________________________________
dense_1 (Dense) (None, 1024) 2098176 global_average_pooling2d_1[0][0]
__________________________________________________________________________________________________
dense_2 (Dense) (None, 1) 1025 dense_1[0][0]
==================================================================================================
Total params: 23,901,985
Trainable params: 18,315,137
Non-trainable params: 5,586,848
____________________________________
Now I'm using the following settings and code to try to tweak and activate specific high-layer features in order to make full objects emerge on the input image:
import numpy as np
import scipy.ndimage
import scipy.misc
from keras import backend as K
from keras.models import load_model

settings = {
    'features': {
        'mixed2': 0.,
        'mixed3': 0.,
        'mixed4': 0.,
        'mixed10': 0.,  # highest
    },
}
model = load_model('inceptionv3-ft.model')
# 'dream' is the symbolic input image tensor; it is referenced by the gradient
# code below (this definition is missing from the snippet as posted).
dream = model.input
# Get the symbolic outputs of each "key" layer (we gave them unique names).
layer_dict = dict([(layer.name, layer) for layer in model.layers])
# Define the loss.
loss = K.variable(0.)
for layer_name in settings['features']:
    # Add the L2 norm of the features of a layer to the loss.
    assert layer_name in layer_dict.keys(), 'Layer ' + layer_name + ' not found in model.'
    coeff = settings['features'][layer_name]
    x = layer_dict[layer_name].output
    print(x)
    # We avoid border artifacts by only involving non-border pixels in the loss.
    scaling = K.prod(K.cast(K.shape(x), 'float32'))
    if K.image_data_format() == 'channels_first':
        loss += coeff * K.sum(K.square(x[:, :, 2: -2, 2: -2])) / scaling
    else:
        loss += coeff * K.sum(K.square(x[:, 2: -2, 2: -2, :])) / scaling
# Compute the gradients of the dream wrt the loss.
grads = K.gradients(loss, dream)[0]
# Normalize gradients.
grads /= K.maximum(K.mean(K.abs(grads)), K.epsilon())
# Set up function to retrieve the value
# of the loss and gradients given an input image.
outputs = [loss, grads]
fetch_loss_and_grads = K.function([dream], outputs)

def eval_loss_and_grads(x):
    outs = fetch_loss_and_grads([x])
    loss_value = outs[0]
    grad_values = outs[1]
    return loss_value, grad_values
def resize_img(img, size):
    img = np.copy(img)
    if K.image_data_format() == 'channels_first':
        factors = (1, 1,
                   float(size[0]) / img.shape[2],
                   float(size[1]) / img.shape[3])
    else:
        factors = (1,
                   float(size[0]) / img.shape[1],
                   float(size[1]) / img.shape[2],
                   1)
    return scipy.ndimage.zoom(img, factors, order=1)
def gradient_ascent(x, iterations, step, max_loss=None):
    for i in range(iterations):
        loss_value, grad_values = eval_loss_and_grads(x)
        if max_loss is not None and loss_value > max_loss:
            break
        print('..Loss value at', i, ':', loss_value)
        x += step * grad_values
    return x

def save_img(img, fname):
    pil_img = deprocess_image(np.copy(img))
    scipy.misc.imsave(fname, pil_img)
"""Process:
- Load the original image.
- Define a number of processing scales (i.e. image shapes),
from smallest to largest.
- Resize the original image to the smallest scale.
- For every scale, starting with the smallest (i.e. current one):
- Run gradient ascent
- Upscale image to the next scale
- Reinject the detail that was lost at upscaling time
- Stop when we are back to the original size.
To obtain the detail lost during upscaling, we simply
take the original image, shrink it down, upscale it,
and compare the result to the (resized) original image.
"""
# NOTE: preprocess_image, deprocess_image, base_image_path and result_prefix are
# helpers/variables from the Keras deep-dream example, assumed to be defined earlier.
# Playing with these hyperparameters will also allow you to achieve new effects.
step = 0.01          # Gradient ascent step size
num_octave = 3       # Number of scales at which to run gradient ascent
octave_scale = 1.4   # Size ratio between scales
iterations = 20      # Number of ascent steps per scale
max_loss = 10.
img = preprocess_image(base_image_path)
if K.image_data_format() == 'channels_first':
    original_shape = img.shape[2:]
else:
    original_shape = img.shape[1:3]
successive_shapes = [original_shape]
for i in range(1, num_octave):
    shape = tuple([int(dim / (octave_scale ** i)) for dim in original_shape])
    successive_shapes.append(shape)
successive_shapes = successive_shapes[::-1]
original_img = np.copy(img)
shrunk_original_img = resize_img(img, successive_shapes[0])
for shape in successive_shapes:
    print('Processing image shape', shape)
    img = resize_img(img, shape)
    img = gradient_ascent(img,
                          iterations=iterations,
                          step=step,
                          max_loss=max_loss)
    upscaled_shrunk_original_img = resize_img(shrunk_original_img, shape)
    same_size_original = resize_img(original_img, shape)
    lost_detail = same_size_original - upscaled_shrunk_original_img
    img += lost_detail
    shrunk_original_img = resize_img(original_img, shape)
save_img(img, fname=result_prefix + '.png')
But no matter which setting values I tweak, I seem to activate only low-level features, like edges and curves, or, at best, mixed features.
Ideally, the settings should be able to access individual layers down to channels and units, i.e.
Layer4c - Unit 0, but I haven't found any method in the Keras documentation to achieve that:
see this: https://distill.pub/2017/feature-visualization/appendix/googlenet/4c.html
I have learned that using the Caffe framework gives you more flexibility, but installing it system-wide is dependency hell.
So, how do I activate individual classes on this network within the Keras framework, or in any framework other than Caffe?
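(A hedged side note on the Keras code above, not part of the original post: a single channel of a layer can be targeted in the loss instead of the whole activation. Channel 42 of 'mixed4' is just a made-up example; the rest of the gradient-ascent code stays the same.)
x = layer_dict['mixed4'].output
channel = 42  # hypothetical channel index
if K.image_data_format() == 'channels_first':
    loss = K.mean(x[:, channel, 2: -2, 2: -2])
else:
    loss = K.mean(x[:, 2: -2, 2: -2, channel])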
What worked for me was the following:
To avoid installing all the dependencies and Caffe on my machine, I pulled this Docker image with all the deep learning frameworks in it.
Within minutes I had Caffe (as well as Keras, TensorFlow, CUDA, Theano, Lasagne, Torch and OpenCV) installed in a container with a shared folder on my host machine.
I then ran this Caffe script -->
Deep Dream, and voilà.
The models generated by Caffe are more flexible for this purpose and allow classes, as stated above, to be 'printed' onto input images or generated from noise.
