Related
I'm new to machine learning and I'm trying to train a model.
I'm using this official Keras example as a guide to set up my dataset and feed it into the model: https://www.tensorflow.org/api_docs/python/tf/keras/utils/Sequence
From the training data I have sliding windows created for a single column, and the labels are a binary classification (1 or 0).
This is the model creation code:
import math
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation, Dropout

n = 200
hidden_units = n
dense_model = Sequential()
dense_model.add(Dense(hidden_units, input_shape=([200,1])))
dense_model.add(Activation('relu'))
dense_model.add(Dropout(dropout))
print(hidden_units)
while hidden_units > 2:
    hidden_units = math.ceil(hidden_units/2)
    dense_model.add(Dense(hidden_units))
    dense_model.add(Activation('relu'))
    dense_model.add(Dropout(dropout))
    print(hidden_units)
dense_model.add(Dense(units = 1, activation='sigmoid'))
This is the function I'm using to compile the model:
def compile_and_fit(model, window, epochs, patience=2):
    early_stopping = tf.keras.callbacks.EarlyStopping(monitor='val_loss',
                                                      patience=patience,
                                                      mode='min')
    model.compile(loss='binary_crossentropy',
                  optimizer='adam',
                  metrics=['accuracy'])
    history = model.fit(window.train, epochs=epochs)
    return history
This is the model training:
break_batchs = find_gaps(df_train, 'date_diff', diff_int_value)
for keys, values in break_batchs.items():
    dense_window = WindowGenerator(data=df_train['price_var'],
                                   data_validation=df_validation['price_var'],
                                   data_test=df_test['price_var'],
                                   input_width=n,
                                   shift=m,
                                   start_index=values[0],
                                   end_index=values[1],
                                   class_labels=y_buy,
                                   class_labels_train=y_buy_train,
                                   class_labels_test=y_buy_test,
                                   label_width=1,
                                   label_columns=None,
                                   classification=True,
                                   batch_size=batch_size,
                                   seed=None)
    history = compile_and_fit(dense_model, dense_window)
and these are the shapes of the batches:
(TensorSpec(shape=(None, 200, 1), dtype=tf.float32, name=None), TensorSpec(shape=(None, 1, 1), dtype=tf.float64, name=None))
The problem (I guess) is that, judging from the model summary, the Dense layers are being applied along the last dimension when they should be working on the second one:
dense_model.summary()
Model: "sequential_21"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
|Model is being applied here
|
v
dense_232 (Dense) (None, 200, 200) 400
_________________________________________________________________
|When it should be applied here
|
v
activation_225 (Activation) (None, 200, 200) 0
_________________________________________________________________
dropout_211 (Dropout) (None, 200, 200) 0
_________________________________________________________________
dense_233 (Dense) (None, 200, 100) 20100
_________________________________________________________________
activation_226 (Activation) (None, 200, 100) 0
_________________________________________________________________
dropout_212 (Dropout) (None, 200, 100) 0
_________________________________________________________________
dense_234 (Dense) (None, 200, 50) 5050
_________________________________________________________________
activation_227 (Activation) (None, 200, 50) 0
_________________________________________________________________
dropout_213 (Dropout) (None, 200, 50) 0
_________________________________________________________________
dense_235 (Dense) (None, 200, 25) 1275
_________________________________________________________________
activation_228 (Activation) (None, 200, 25) 0
_________________________________________________________________
dropout_214 (Dropout) (None, 200, 25) 0
_________________________________________________________________
dense_236 (Dense) (None, 200, 13) 338
_________________________________________________________________
activation_229 (Activation) (None, 200, 13) 0
_________________________________________________________________
dropout_215 (Dropout) (None, 200, 13) 0
_________________________________________________________________
dense_237 (Dense) (None, 200, 7) 98
_________________________________________________________________
activation_230 (Activation) (None, 200, 7) 0
_________________________________________________________________
dropout_216 (Dropout) (None, 200, 7) 0
_________________________________________________________________
dense_238 (Dense) (None, 200, 4) 32
_________________________________________________________________
activation_231 (Activation) (None, 200, 4) 0
_________________________________________________________________
dropout_217 (Dropout) (None, 200, 4) 0
_________________________________________________________________
dense_239 (Dense) (None, 200, 2) 10
_________________________________________________________________
activation_232 (Activation) (None, 200, 2) 0
_________________________________________________________________
dropout_218 (Dropout) (None, 200, 2) 0
_________________________________________________________________
dense_240 (Dense) (None, 200, 1) 3
=================================================================
Total params: 27,306
Trainable params: 27,306
Non-trainable params: 0
_________________________________________________________________
Because of that I'm getting ValueError: logits and labels must have the same shape ((None, 200, 1) vs (None, 1, 1)).
How can I tell Keras to apply the layers along the second dimension and not the last one?
EDIT
This is what I understand is happening; is this right? How can I fix it?
Edit 2
I tried to modify as suggested, using:
dense_model.add(Dense(hidden_units, input_shape=(None,200,1)))
but I'm getting the following warning:
WARNING:tensorflow:Model was constructed with shape (None, None, 200, 1) for input KerasTensor(type_spec=TensorSpec(shape=(None, None, 200, 1), dtype=tf.float32, name='dense_315_input'), name='dense_315_input', description="created by layer 'dense_315_input'"), but it was called on an input with incompatible shape (None, 200, 1, 1).
The extra dimension you are pointing at comes from the input shape you declared in your first layer; a Dense layer expects inputs of shape (batch_size, input_dim) and is only applied to the last axis (as can be seen here):
dense_model.add(Dense(hidden_units, input_shape=([200,1])))
With input_shape=[200, 1] each sample has shape (200, 1), so every Dense layer is applied to each of the 200 window steps separately and the model outputs 200 values per sample, while the target label you are comparing against only has one value.
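For reference, a minimal sketch of one way to make the output shape match the labels, assuming the window really is 200 values of a single column: flatten the (200, 1) window before the Dense stack so the final sigmoid produces one value per sample (variable names mirror the question; the dropout rate here is an assumed placeholder, and the labels coming out of the generator may also need to be reshaped from (None, 1, 1) to (None, 1)):
import math
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation, Dropout, Flatten

hidden_units = 200
dropout = 0.2  # assumed placeholder value

dense_model = Sequential()
# Collapse the (200, 1) window into a single 200-feature vector
dense_model.add(Flatten(input_shape=(200, 1)))
dense_model.add(Dense(hidden_units))
dense_model.add(Activation('relu'))
dense_model.add(Dropout(dropout))
while hidden_units > 2:
    hidden_units = math.ceil(hidden_units / 2)
    dense_model.add(Dense(hidden_units))
    dense_model.add(Activation('relu'))
    dense_model.add(Dropout(dropout))
dense_model.add(Dense(units=1, activation='sigmoid'))

dense_model.summary()  # the final output shape is now (None, 1)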
I followed the blog Where CNN is looking? to understand and visualize the class activations behind a prediction. The given example works very well.
I have developed a custom model using autoencoders for image similarity. The model accepts 2 images and predicts the score for similarity. The model has the following layers:
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_1 (InputLayer) (None, 256, 256, 3) 0
__________________________________________________________________________________________________
input_2 (InputLayer) (None, 256, 256, 3) 0
__________________________________________________________________________________________________
encoder (Sequential) (None, 7, 7, 256) 3752704 input_1[0][0]
input_2[0][0]
__________________________________________________________________________________________________
Merged_feature_map (Concatenate (None, 7, 7, 512) 0 encoder[1][0]
encoder[2][0]
__________________________________________________________________________________________________
mnet_conv1 (Conv2D) (None, 7, 7, 1024) 2098176 Merged_feature_map[0][0]
__________________________________________________________________________________________________
batch_normalization_1 (BatchNor (None, 7, 7, 1024) 4096 mnet_conv1[0][0]
__________________________________________________________________________________________________
activation_1 (Activation) (None, 7, 7, 1024) 0 batch_normalization_1[0][0]
__________________________________________________________________________________________________
mnet_pool1 (MaxPooling2D) (None, 3, 3, 1024) 0 activation_1[0][0]
__________________________________________________________________________________________________
mnet_conv2 (Conv2D) (None, 3, 3, 2048) 8390656 mnet_pool1[0][0]
__________________________________________________________________________________________________
batch_normalization_2 (BatchNor (None, 3, 3, 2048) 8192 mnet_conv2[0][0]
__________________________________________________________________________________________________
activation_2 (Activation) (None, 3, 3, 2048) 0 batch_normalization_2[0][0]
__________________________________________________________________________________________________
mnet_pool2 (MaxPooling2D) (None, 1, 1, 2048) 0 activation_2[0][0]
__________________________________________________________________________________________________
reshape_1 (Reshape) (None, 1, 2048) 0 mnet_pool2[0][0]
__________________________________________________________________________________________________
fc1 (Dense) (None, 1, 256) 524544 reshape_1[0][0]
__________________________________________________________________________________________________
batch_normalization_3 (BatchNor (None, 1, 256) 1024 fc1[0][0]
__________________________________________________________________________________________________
activation_3 (Activation) (None, 1, 256) 0 batch_normalization_3[0][0]
__________________________________________________________________________________________________
dropout_1 (Dropout) (None, 1, 256) 0 activation_3[0][0]
__________________________________________________________________________________________________
fc2 (Dense) (None, 1, 128) 32896 dropout_1[0][0]
__________________________________________________________________________________________________
batch_normalization_4 (BatchNor (None, 1, 128) 512 fc2[0][0]
__________________________________________________________________________________________________
activation_4 (Activation) (None, 1, 128) 0 batch_normalization_4[0][0]
__________________________________________________________________________________________________
dropout_2 (Dropout) (None, 1, 128) 0 activation_4[0][0]
__________________________________________________________________________________________________
fc3 (Dense) (None, 1, 64) 8256 dropout_2[0][0]
__________________________________________________________________________________________________
batch_normalization_5 (BatchNor (None, 1, 64) 256 fc3[0][0]
__________________________________________________________________________________________________
activation_5 (Activation) (None, 1, 64) 0 batch_normalization_5[0][0]
__________________________________________________________________________________________________
dropout_3 (Dropout) (None, 1, 64) 0 activation_5[0][0]
__________________________________________________________________________________________________
fc4 (Dense) (None, 1, 1) 65 dropout_3[0][0]
__________________________________________________________________________________________________
batch_normalization_6 (BatchNor (None, 1, 1) 4 fc4[0][0]
__________________________________________________________________________________________________
activation_6 (Activation) (None, 1, 1) 0 batch_normalization_6[0][0]
__________________________________________________________________________________________________
dropout_4 (Dropout) (None, 1, 1) 0 activation_6[0][0]
__________________________________________________________________________________________________
reshape_2 (Reshape) (None, 1) 0 dropout_4[0][0]
==================================================================================================
The encoder layer consists of the following layers:
conv2d_1
batch_normalization_1
activation_1
max_pooling2d_1
conv2d_2
batch_normalization_2
activation_2
max_pooling2d_2
conv2d_3
batch_normalization_3
activation_3
conv2d_4
batch_normalization_4
activation_4
conv2d_5
batch_normalization_5
activation_5
max_pooling2d_3
I want to change my custom network to accept one input instead of two, using only the encoder part, and generate the heatmaps to understand what the encoder part has learned.
The idea is that, in case the network predicts 'not similar', I can generate the heatmaps of the images one by one and compare them.
What I have done is the following:
I have passed the two images to the network and got the prediction as described in the blog:
preds = model.predict([x, y])
class_idx = np.argmax(preds[0])
class_output = model.output[:, class_idx]
Then I selected the last convolutional layer and computed the gradient of the class output value with respect to the feature map:
last_conv_layer = model.get_layer('encoder')
grads = K.gradients(class_output, last_conv_layer.get_output_at(-1))[0]
The output of grads:
Tensor("gradients/Merged_feature_map/concat_grad/Slice_1:0", shape=(?, 7, 7, 256), dtype=float32)
Then I pooled the gradients as described in the blog:
pooled_grads = K.mean(grads, axis=(0, 1, 2))
iterate = K.function([input_img], [pooled_grads, last_conv_layer.get_output_at(-1)[0]])
At this point, when I checked the inputs and outputs, it showed the following:
iterate.inputs
[<tf.Tensor 'input_1:0' shape=(?, 256, 256, 3) dtype=float32>]
iterate.outputs
[<tf.Tensor 'Mean:0' shape=(256,) dtype=float32>, <tf.Tensor 'strided_slice_1:0' shape=(7, 7, 256) dtype=float32>]
But I am now getting the error on the following code line:
pooled_grads_value, conv_layer_output_value = iterate([x])
The error is:
You must feed a value for placeholder tensor 'input_2' with dtype float and shape [?,256,256,3]
[[{{node input_2}}]]
It seems that it is asking for the second image input, but as seen above, 'iterate.inputs' contains only one image.
Where have I made a mistake? How can I limit it to accept only one image? Or is there any other way to achieve the task in a better way?
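One possible way past this error, sketched here under the assumption that model, pooled_grads, last_conv_layer, x and y are the objects defined above: the gradient graph was built from the two-input model, so the backend function has to be fed every placeholder, just as model.predict([x, y]) was.
from keras import backend as K

# Build the function over both model inputs instead of only input_1.
iterate = K.function(model.inputs,   # [input_1, input_2]
                     [pooled_grads, last_conv_layer.get_output_at(-1)[0]])

# Feed both images, exactly as for model.predict; the pooled gradients and the
# conv-layer output are then computed as described in the blog.
pooled_grads_value, conv_layer_output_value = iterate([x, y])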
The following network architecture is designed in order to find the similarity between two images.
Initially, I took VGGNet16 and removed the classification head:
vgg_model = VGG16(weights="imagenet", include_top=False,
input_tensor=Input(shape=(img_width, img_height, channels)))
Afterward, I set the parameter layer.trainable = False, so that the network will work as a feature extractor.
I passed two different images to the network:
encoded_left = vgg_model(input_left)
encoded_right = vgg_model(input_right)
This will produce two feature vectors. Then for the classification (whether they are similar or not), I used a metric network that consists of 2 convolution layers followed by pooling and 4 fully connected layers.
merge(encoded_left, encoded_right) -> conv-pool -> conv-pool -> reshape -> dense * 4 -> output
Hence, the model looks like:
model = Model(inputs=[left_image, right_image], outputs=output)
After training only the metric network, I set the last convolution block as trainable in order to fine-tune it. Therefore, in the second training phase, the last convolution block is trained along with the metric network.
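A rough sketch of how this two-input setup might be wired up (batch-normalisation layers are omitted and the metric-network details are simplified; the authoritative structure is the summary below). The key point is that the same vgg_model object is called on both inputs, sharing its weights:
from keras.applications.vgg16 import VGG16
from keras.layers import Input, Conv2D, MaxPooling2D, Reshape, Dense, Concatenate
from keras.models import Model

input_left = Input(shape=(img_width, img_height, channels))
input_right = Input(shape=(img_width, img_height, channels))

vgg_model = VGG16(weights="imagenet", include_top=False,
                  input_tensor=Input(shape=(img_width, img_height, channels)))
for layer in vgg_model.layers:
    layer.trainable = False   # frozen for the first training phase

# The same sub-model is applied to both inputs (shared weights).
encoded_left = vgg_model(input_left)      # (None, 7, 7, 512)
encoded_right = vgg_model(input_right)    # (None, 7, 7, 512)

x = Concatenate(name='Merged_feature_map')([encoded_left, encoded_right])
x = MaxPooling2D()(Conv2D(1024, (2, 2), padding='same', activation='relu')(x))
x = MaxPooling2D()(Conv2D(2048, (2, 2), padding='same', activation='relu')(x))
x = Reshape((1, 2048))(x)
for units in (256, 128, 64):
    x = Dense(units, activation='relu')(x)
output = Reshape((1,))(Dense(1, activation='sigmoid')(x))

model = Model(inputs=[input_left, input_right], outputs=output)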
Now I want to use this fine-tuned network for another purpose. Here is the network summary:
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_1 (InputLayer) (None, 224, 224, 3) 0
__________________________________________________________________________________________________
input_2 (InputLayer) (None, 224, 224, 3) 0
__________________________________________________________________________________________________
vgg16 (Model) (None, 7, 7, 512) 14714688 input_1[0][0]
input_2[0][0]
__________________________________________________________________________________________________
Merged_feature_map (Concatenate (None, 7, 7, 1024) 0 vgg16[1][0]
vgg16[2][0]
__________________________________________________________________________________________________
mnet_conv1 (Conv2D) (None, 7, 7, 1024) 4195328 Merged_feature_map[0][0]
__________________________________________________________________________________________________
batch_normalization_1 (BatchNor (None, 7, 7, 1024) 4096 mnet_conv1[0][0]
__________________________________________________________________________________________________
activation_1 (Activation) (None, 7, 7, 1024) 0 batch_normalization_1[0][0]
__________________________________________________________________________________________________
mnet_pool1 (MaxPooling2D) (None, 3, 3, 1024) 0 activation_1[0][0]
__________________________________________________________________________________________________
mnet_conv2 (Conv2D) (None, 3, 3, 2048) 8390656 mnet_pool1[0][0]
__________________________________________________________________________________________________
batch_normalization_2 (BatchNor (None, 3, 3, 2048) 8192 mnet_conv2[0][0]
__________________________________________________________________________________________________
activation_2 (Activation) (None, 3, 3, 2048) 0 batch_normalization_2[0][0]
__________________________________________________________________________________________________
mnet_pool2 (MaxPooling2D) (None, 1, 1, 2048) 0 activation_2[0][0]
__________________________________________________________________________________________________
reshape_1 (Reshape) (None, 1, 2048) 0 mnet_pool2[0][0]
__________________________________________________________________________________________________
fc1 (Dense) (None, 1, 256) 524544 reshape_1[0][0]
__________________________________________________________________________________________________
batch_normalization_3 (BatchNor (None, 1, 256) 1024 fc1[0][0]
__________________________________________________________________________________________________
activation_3 (Activation) (None, 1, 256) 0 batch_normalization_3[0][0]
__________________________________________________________________________________________________
fc2 (Dense) (None, 1, 128) 32896 activation_3[0][0]
__________________________________________________________________________________________________
batch_normalization_4 (BatchNor (None, 1, 128) 512 fc2[0][0]
__________________________________________________________________________________________________
activation_4 (Activation) (None, 1, 128) 0 batch_normalization_4[0][0]
__________________________________________________________________________________________________
fc3 (Dense) (None, 1, 64) 8256 activation_4[0][0]
__________________________________________________________________________________________________
batch_normalization_5 (BatchNor (None, 1, 64) 256 fc3[0][0]
__________________________________________________________________________________________________
activation_5 (Activation) (None, 1, 64) 0 batch_normalization_5[0][0]
__________________________________________________________________________________________________
fc4 (Dense) (None, 1, 1) 65 activation_5[0][0]
__________________________________________________________________________________________________
batch_normalization_6 (BatchNor (None, 1, 1) 4 fc4[0][0]
__________________________________________________________________________________________________
activation_6 (Activation) (None, 1, 1) 0 batch_normalization_6[0][0]
__________________________________________________________________________________________________
reshape_2 (Reshape) (None, 1) 0 activation_6[0][0]
==================================================================================================
Total params: 27,880,517
Trainable params: 13,158,787
Non-trainable params: 14,721,730
As the last convolution block of VGGNet is already trained on the custom dataset, I want to cut the network at this layer:
__________________________________________________________________________________________________
vgg16 (Model) (None, 7, 7, 512) 14714688 input_1[0][0]
input_2[0][0]
__________________________________________________________________________________________________
and use this as a powerful feature extractor. For this task, I loaded the fine-tuned model:
model = load_model('model.h5')
then tried to create the new model as:
new_model = Model(Input(shape=(img_width, img_height, channels)), model.layers[2].output)
This results in the following error:
`AttributeError: Layer vgg16 has multiple inbound nodes, hence the notion of "layer output" is ill-defined. Use `get_output_at(node_index)` instead.`
Please advise me where I am going wrong.
I have tried several ways, but the following method works perfectly. Instead of creating the new model as:
model = load_model('model.h5')
new_model = Model(Input(shape=(img_width, img_height, channels)), model.layers[2].output)
I used the following way:
model = load_model('model.h5')
sub_model = Sequential()
for layer in model.get_layer('vgg16').layers:
    sub_model.add(layer)
I hope this will help others.
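Another option, sketched here under the assumption that the fine-tuned model uses the layer name 'vgg16' shown in the summary above: the nested sub-model can be used directly as a feature extractor, or a new single-input model can be built by picking one of its inbound nodes explicitly, which is what the error message suggests with get_output_at.
from keras.models import Model, load_model

model = load_model('model.h5')
vgg = model.get_layer('vgg16')   # the shared sub-model, fine-tuned weights included

# Option 1: vgg is itself a Model, so it can be used directly, e.g.
# features = vgg.predict(images)          # -> (n, 7, 7, 512)

# Option 2: build a new single-input model from the node created for input_1.
new_model = Model(inputs=model.inputs[0],
                  outputs=vgg.get_output_at(1))   # node 1 = the call on input_1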
I'm trying to fine-tune my VGG19 model with a bunch of images for classification.
I have 18 classes with 6,000 well-curated images in each.
Using Keras 2.2.4.
Model:
import keras
from keras import optimizers
from keras.layers import Flatten, BatchNormalization, Dropout, Dense
from keras.models import Model

INIT_LR = 0.00001
BATCH_SIZE = 128
IMG_SIZE = (256, 256)
epochs = 150
model_base = keras.applications.vgg19.VGG19(include_top=False, input_shape=(*IMG_SIZE, 3), weights='imagenet')
output = Flatten()(model_base.output)
output = BatchNormalization()(output)
output = Dropout(0.5)(output)
output = Dense(64, activation='relu')(output)
output = BatchNormalization()(output)
output = Dropout(0.5)(output)
output = Dense(len(all_character_names), activation='softmax')(output)
model = Model(model_base.input, output)
for layer in model_base.layers[:-10]:
    layer.trainable = False
opt = optimizers.Adam(lr=INIT_LR, decay=INIT_LR / epochs)
model.compile(optimizer=opt,
              loss='categorical_crossentropy',
              metrics=['accuracy', 'top_k_categorical_accuracy'])
Data augmentation:
image_datagen = ImageDataGenerator(
rotation_range=15,
width_shift_range=.15,
height_shift_range=.15,
#rescale=1./255,
shear_range=0.15,
zoom_range=0.15,
channel_shift_range=1,
vertical_flip=True,
horizontal_flip=True)
Model train:
validation_steps = data_generator.validation_samples/BATCH_SIZE
steps_per_epoch = data_generator.train_samples/BATCH_SIZE
model.fit_generator(
generator,
steps_per_epoch=steps_per_epoch,
epochs=epochs,
validation_data=validation_data,
validation_steps=validation_steps
)
Model summary:
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) (None, 256, 256, 3) 0
_________________________________________________________________
block1_conv1 (Conv2D) (None, 256, 256, 64) 1792
_________________________________________________________________
block1_conv2 (Conv2D) (None, 256, 256, 64) 36928
_________________________________________________________________
block1_pool (MaxPooling2D) (None, 128, 128, 64) 0
_________________________________________________________________
block2_conv1 (Conv2D) (None, 128, 128, 128) 73856
_________________________________________________________________
block2_conv2 (Conv2D) (None, 128, 128, 128) 147584
_________________________________________________________________
block2_pool (MaxPooling2D) (None, 64, 64, 128) 0
_________________________________________________________________
block3_conv1 (Conv2D) (None, 64, 64, 256) 295168
_________________________________________________________________
block3_conv2 (Conv2D) (None, 64, 64, 256) 590080
_________________________________________________________________
block3_conv3 (Conv2D) (None, 64, 64, 256) 590080
_________________________________________________________________
block3_conv4 (Conv2D) (None, 64, 64, 256) 590080
_________________________________________________________________
block3_pool (MaxPooling2D) (None, 32, 32, 256) 0
_________________________________________________________________
block4_conv1 (Conv2D) (None, 32, 32, 512) 1180160
_________________________________________________________________
block4_conv2 (Conv2D) (None, 32, 32, 512) 2359808
_________________________________________________________________
block4_conv3 (Conv2D) (None, 32, 32, 512) 2359808
_________________________________________________________________
block4_conv4 (Conv2D) (None, 32, 32, 512) 2359808
_________________________________________________________________
block4_pool (MaxPooling2D) (None, 16, 16, 512) 0
_________________________________________________________________
block5_conv1 (Conv2D) (None, 16, 16, 512) 2359808
_________________________________________________________________
block5_conv2 (Conv2D) (None, 16, 16, 512) 2359808
_________________________________________________________________
block5_conv3 (Conv2D) (None, 16, 16, 512) 2359808
_________________________________________________________________
block5_conv4 (Conv2D) (None, 16, 16, 512) 2359808
_________________________________________________________________
block5_pool (MaxPooling2D) (None, 8, 8, 512) 0
_________________________________________________________________
flatten_1 (Flatten) (None, 32768) 0
_________________________________________________________________
batch_normalization_1 (Batch (None, 32768) 131072
_________________________________________________________________
dropout_1 (Dropout) (None, 32768) 0
_________________________________________________________________
dense_1 (Dense) (None, 64) 2097216
_________________________________________________________________
batch_normalization_2 (Batch (None, 64) 256
_________________________________________________________________
dropout_2 (Dropout) (None, 64) 0
_________________________________________________________________
dense_2 (Dense) (None, 19) 1235
=================================================================
Total params: 22,254,163
Trainable params: 19,862,931
Non-trainable params: 2,391,232
_________________________________________________________________
<keras.engine.input_layer.InputLayer object at 0x00000224568D0D68> False
<keras.layers.convolutional.Conv2D object at 0x00000224568D0F60> False
<keras.layers.convolutional.Conv2D object at 0x00000224568F0438> False
<keras.layers.pooling.MaxPooling2D object at 0x00000224570A5860> False
<keras.layers.convolutional.Conv2D object at 0x00000224570A58D0> False
<keras.layers.convolutional.Conv2D object at 0x00000224574196D8> False
<keras.layers.pooling.MaxPooling2D object at 0x0000022457524048> False
<keras.layers.convolutional.Conv2D object at 0x0000022457524D30> False
<keras.layers.convolutional.Conv2D object at 0x0000022457053160> False
<keras.layers.convolutional.Conv2D object at 0x00000224572E15C0> False
<keras.layers.convolutional.Conv2D object at 0x000002245707B080> False
<keras.layers.pooling.MaxPooling2D object at 0x0000022457088400> False
<keras.layers.convolutional.Conv2D object at 0x0000022457088E10> True
<keras.layers.convolutional.Conv2D object at 0x00000224575DB240> True
<keras.layers.convolutional.Conv2D object at 0x000002245747A320> True
<keras.layers.convolutional.Conv2D object at 0x0000022457486160> True
<keras.layers.pooling.MaxPooling2D object at 0x00000224574924E0> True
<keras.layers.convolutional.Conv2D object at 0x0000022457492D68> True
<keras.layers.convolutional.Conv2D object at 0x00000224574AD320> True
<keras.layers.convolutional.Conv2D object at 0x00000224574C6400> True
<keras.layers.convolutional.Conv2D object at 0x00000224574D2240> True
<keras.layers.pooling.MaxPooling2D object at 0x00000224574DAF98> True
<keras.layers.core.Flatten object at 0x00000224574EA080> True
<keras.layers.normalization.BatchNormalization object at 0x00000224574F82B0> True
<keras.layers.core.Dropout object at 0x000002247134BA58> True
<keras.layers.core.Dense object at 0x000002247136A7B8> True
<keras.layers.normalization.BatchNormalization object at 0x0000022471324438> True
<keras.layers.core.Dropout object at 0x00000224713249B0> True
<keras.layers.core.Dense object at 0x00000224713BF7F0> True
batchsize:128
LR:1e-05
The doomed graph:
Tries:
Tried several LRs.
Tried without training the last 10 or 5 layers; it is worse, simply not converging.
Tried several batch sizes; 128 gives the best results.
Also tried ResNet50, but it does not converge at all (even with only the last 3 layers trainable).
Tried VGG16 without much luck.
I add about 2,000 new images each day to try to reach around 20,000 images per class, as I think this is where my problem is.
In the lower layers, the network has learned low-level features like edges and contours; it is in the higher layers that these features are combined. So in your case, you need much finer features, such as hair color or person size. One thing you can try is to fine-tune only from the last few layers (blocks 4-5). You can also use different learning rates: a very low one for the VGG blocks and a slightly higher one for the completely new dense layers. For the implementation, this thread will be helpful.
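A minimal sketch of the first suggestion, assuming the model_base and model objects from the question and the standard keras.applications VGG19 layer names; the exact cut-off block and learning rate are things to experiment with:
from keras import optimizers

# Freeze everything up to (but not including) block5; train only the last
# convolution block together with the new dense head.
set_trainable = False
for layer in model_base.layers:
    if layer.name == 'block5_conv1':
        set_trainable = True
    layer.trainable = set_trainable

# Keep the learning rate very low so the pre-trained block is only nudged.
model.compile(optimizer=optimizers.Adam(lr=1e-5),
              loss='categorical_crossentropy',
              metrics=['accuracy'])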
I am running Keras 2.0.6 with Python 3.6.2 and Tensorflow-gpu 1.3.0.
In order to do fine-tuning on the VGG16 model, I run this code after having hand-built a VGG16 architecture and loaded the weights, but I have not called compile() yet:
model = self.model
model.pop()
for layer in model.layers: layer.trainable=False
model.add(Dense(num, activation='softmax'))
self.compile()
And when I check the graph in TensorBoard, I see (check the top left of the attached picture) dense_3 connected to dropout_2 but dangling by itself. Then next to it I see dense_4, also connected to dropout_2.
Tensorboard model graph
I tried to replace pop() with the pop_layer() code below, as suggested here by joelthchao on May 6, 2016. Unfortunately, the graph displayed in TensorBoard becomes an incomprehensible mess.
def pop_layer(model):
    if not model.outputs:
        raise Exception('Sequential model cannot be popped: model is empty.')
    model.layers.pop()
    if not model.layers:
        model.outputs = []
        model.inbound_nodes = []
        model.outbound_nodes = []
    else:
        model.layers[-1].outbound_nodes = []
        model.outputs = [model.layers[-1].output]
    model.built = False
I know something is not working right because I get low accuracy when running this on the Kaggle cats vs. dogs competition, where I hover around 90%, whilst others running this code (it's adapted from fast.ai) on top of Theano easily get 97%. Perhaps my accuracy problem comes from somewhere else, but I still don't think dense_3 should be dangling there, and I am wondering if this could be the source of my accuracy problem.
How can I definitely disconnect and remove dense_3?
See below for model.summary() before and after running the code to prepare for fine-tuning. We don't see dense_3 anymore, but we do see it in the TensorBoard graph.
Before Running
Layer (type) Output Shape Param #
=================================================================
lambda_1 (Lambda) (None, 3, 224, 224) 0
_________________________________________________________________
zero_padding2d_1 (ZeroPaddin (None, 3, 226, 226) 0
_________________________________________________________________
conv2d_1 (Conv2D) (None, 64, 224, 224) 1792
_________________________________________________________________
zero_padding2d_2 (ZeroPaddin (None, 64, 226, 226) 0
_________________________________________________________________
conv2d_2 (Conv2D) (None, 64, 224, 224) 36928
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 64, 112, 112) 0
_________________________________________________________________
zero_padding2d_3 (ZeroPaddin (None, 64, 114, 114) 0
_________________________________________________________________
conv2d_3 (Conv2D) (None, 128, 112, 112) 73856
_________________________________________________________________
zero_padding2d_4 (ZeroPaddin (None, 128, 114, 114) 0
_________________________________________________________________
conv2d_4 (Conv2D) (None, 128, 112, 112) 147584
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 128, 56, 56) 0
_________________________________________________________________
zero_padding2d_5 (ZeroPaddin (None, 128, 58, 58) 0
_________________________________________________________________
conv2d_5 (Conv2D) (None, 256, 56, 56) 295168
_________________________________________________________________
zero_padding2d_6 (ZeroPaddin (None, 256, 58, 58) 0
_________________________________________________________________
conv2d_6 (Conv2D) (None, 256, 56, 56) 590080
_________________________________________________________________
zero_padding2d_7 (ZeroPaddin (None, 256, 58, 58) 0
_________________________________________________________________
conv2d_7 (Conv2D) (None, 256, 56, 56) 590080
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 256, 28, 28) 0
_________________________________________________________________
zero_padding2d_8 (ZeroPaddin (None, 256, 30, 30) 0
_________________________________________________________________
conv2d_8 (Conv2D) (None, 512, 28, 28) 1180160
_________________________________________________________________
zero_padding2d_9 (ZeroPaddin (None, 512, 30, 30) 0
_________________________________________________________________
conv2d_9 (Conv2D) (None, 512, 28, 28) 2359808
_________________________________________________________________
zero_padding2d_10 (ZeroPaddi (None, 512, 30, 30) 0
_________________________________________________________________
conv2d_10 (Conv2D) (None, 512, 28, 28) 2359808
_________________________________________________________________
max_pooling2d_4 (MaxPooling2 (None, 512, 14, 14) 0
_________________________________________________________________
zero_padding2d_11 (ZeroPaddi (None, 512, 16, 16) 0
_________________________________________________________________
conv2d_11 (Conv2D) (None, 512, 14, 14) 2359808
_________________________________________________________________
zero_padding2d_12 (ZeroPaddi (None, 512, 16, 16) 0
_________________________________________________________________
conv2d_12 (Conv2D) (None, 512, 14, 14) 2359808
_________________________________________________________________
zero_padding2d_13 (ZeroPaddi (None, 512, 16, 16) 0
_________________________________________________________________
conv2d_13 (Conv2D) (None, 512, 14, 14) 2359808
_________________________________________________________________
max_pooling2d_5 (MaxPooling2 (None, 512, 7, 7) 0
_________________________________________________________________
flatten_1 (Flatten) (None, 25088) 0
_________________________________________________________________
dense_1 (Dense) (None, 4096) 102764544
_________________________________________________________________
dropout_1 (Dropout) (None, 4096) 0
_________________________________________________________________
dense_2 (Dense) (None, 4096) 16781312
_________________________________________________________________
dropout_2 (Dropout) (None, 4096) 0
_________________________________________________________________
dense_3 (Dense) (None, 1000) 4097000
=================================================================
Total params: 138,357,544
Trainable params: 138,357,544
Non-trainable params: 0
After running
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
lambda_1 (Lambda) (None, 3, 224, 224) 0
_________________________________________________________________
zero_padding2d_1 (ZeroPaddin (None, 3, 226, 226) 0
_________________________________________________________________
conv2d_1 (Conv2D) (None, 64, 224, 224) 1792
_________________________________________________________________
zero_padding2d_2 (ZeroPaddin (None, 64, 226, 226) 0
_________________________________________________________________
conv2d_2 (Conv2D) (None, 64, 224, 224) 36928
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 64, 112, 112) 0
_________________________________________________________________
zero_padding2d_3 (ZeroPaddin (None, 64, 114, 114) 0
_________________________________________________________________
conv2d_3 (Conv2D) (None, 128, 112, 112) 73856
_________________________________________________________________
zero_padding2d_4 (ZeroPaddin (None, 128, 114, 114) 0
_________________________________________________________________
conv2d_4 (Conv2D) (None, 128, 112, 112) 147584
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 128, 56, 56) 0
_________________________________________________________________
zero_padding2d_5 (ZeroPaddin (None, 128, 58, 58) 0
_________________________________________________________________
conv2d_5 (Conv2D) (None, 256, 56, 56) 295168
_________________________________________________________________
zero_padding2d_6 (ZeroPaddin (None, 256, 58, 58) 0
_________________________________________________________________
conv2d_6 (Conv2D) (None, 256, 56, 56) 590080
_________________________________________________________________
zero_padding2d_7 (ZeroPaddin (None, 256, 58, 58) 0
_________________________________________________________________
conv2d_7 (Conv2D) (None, 256, 56, 56) 590080
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 256, 28, 28) 0
_________________________________________________________________
zero_padding2d_8 (ZeroPaddin (None, 256, 30, 30) 0
_________________________________________________________________
conv2d_8 (Conv2D) (None, 512, 28, 28) 1180160
_________________________________________________________________
zero_padding2d_9 (ZeroPaddin (None, 512, 30, 30) 0
_________________________________________________________________
conv2d_9 (Conv2D) (None, 512, 28, 28) 2359808
_________________________________________________________________
zero_padding2d_10 (ZeroPaddi (None, 512, 30, 30) 0
_________________________________________________________________
conv2d_10 (Conv2D) (None, 512, 28, 28) 2359808
_________________________________________________________________
max_pooling2d_4 (MaxPooling2 (None, 512, 14, 14) 0
_________________________________________________________________
zero_padding2d_11 (ZeroPaddi (None, 512, 16, 16) 0
_________________________________________________________________
conv2d_11 (Conv2D) (None, 512, 14, 14) 2359808
_________________________________________________________________
zero_padding2d_12 (ZeroPaddi (None, 512, 16, 16) 0
_________________________________________________________________
conv2d_12 (Conv2D) (None, 512, 14, 14) 2359808
_________________________________________________________________
zero_padding2d_13 (ZeroPaddi (None, 512, 16, 16) 0
_________________________________________________________________
conv2d_13 (Conv2D) (None, 512, 14, 14) 2359808
_________________________________________________________________
max_pooling2d_5 (MaxPooling2 (None, 512, 7, 7) 0
_________________________________________________________________
flatten_1 (Flatten) (None, 25088) 0
_________________________________________________________________
dense_1 (Dense) (None, 4096) 102764544
_________________________________________________________________
dropout_1 (Dropout) (None, 4096) 0
_________________________________________________________________
dense_2 (Dense) (None, 4096) 16781312
_________________________________________________________________
dropout_2 (Dropout) (None, 4096) 0
_________________________________________________________________
dense_4 (Dense) (None, 2) 8194
=================================================================
Total params: 134,268,738
Trainable params: 8,194
Non-trainable params: 134,260,544
I believe this is a problem with the implementation of layers.pop() in Keras when using the TensorFlow backend. For now, here is a work-around that removes the last layer by name:
name_last_layer = str(model1.layers[-1])
model2 = Sequential()
for layer in model1.layers:
    if str(layer) != name_last_layer:
        model2.add(layer)
Where model1 is your original model and model2 is the same model without the last layer. In this example I've made model2 a Sequential model, but you can of course change this.
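An alternative sketch, under the assumption that model1 is the original hand-built VGG16 (still ending in the 1000-way dense_3) and num is the number of new classes as in the question: rebuild the head with the functional API from the output of the layer just before the old classifier, so the unused dense_3 never enters the new graph at all.
from keras.layers import Dense
from keras.models import Model

# Freeze the pre-trained layers, as in the original fine-tuning code.
for layer in model1.layers:
    layer.trainable = False

penultimate = model1.layers[-2].output            # the dropout_2 output
new_output = Dense(num, activation='softmax')(penultimate)

model2 = Model(inputs=model1.input, outputs=new_output)
model2.compile(optimizer='adam',
               loss='categorical_crossentropy',
               metrics=['accuracy'])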