I'm trying to fit data with the following shapes to the pretrained Keras VGG16 model.
Image input shape is (32383, 96, 96, 3)
Label shape is (32383, 17)
and I got this error:
expected block5_pool to have 4 dimensions, but got array with shape (32383, 17)
at this line
model.fit(x = X_train, y= Y_train, validation_data=(X_valid, Y_valid),
batch_size=64,verbose=2, epochs=epochs,callbacks=callbacks,shuffle=True)
Here's how I define my model
model = VGG16(include_top=False, weights='imagenet', input_tensor=None, input_shape=(96,96,3),classes=17)
How did the max-pooling layer give me a 2D tensor rather than a 4D tensor? I'm using the original model from keras.applications.vgg16. How can I fix this error?
Your problem comes from VGG16(include_top=False, ...): this loads only the convolutional part of VGG. That is why Keras complains that it got a 2-dimensional target instead of a 4-dimensional one (the 4 dimensions come from the fact that the convolutional output has shape (nb_of_examples, width, height, channels)). To overcome this, either set include_top=True or add additional layers which squash the convolutional output down to 2D (e.g. Flatten, GlobalMaxPooling2D or GlobalAveragePooling2D followed by a set of Dense layers, including a final Dense layer of size 17 with a softmax activation).
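For example, a minimal sketch of adding such a head on top of the convolutional base (the hidden-layer size here is an illustrative assumption, not taken from the question):
from keras.applications.vgg16 import VGG16
from keras.layers import GlobalAveragePooling2D, Dense
from keras.models import Model

# Convolutional base only; its output is 4D: (batch, height, width, channels)
base = VGG16(include_top=False, weights='imagenet', input_shape=(96, 96, 3))
x = GlobalAveragePooling2D()(base.output)   # squash to 2D: (batch, channels)
x = Dense(256, activation='relu')(x)        # illustrative hidden layer
out = Dense(17, activation='softmax')(x)    # 17 classes to match the label shape
model = Model(inputs=base.input, outputs=out)
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])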
Can someone please explain the inputs, outputs, and working of the layer mentioned below?
model.add(Embedding(total_words, 64, input_length=max_sequence_len-1))
total_words = 263
max_sequence_len=11
Is 64 the number of dimensions?
And why is the output of this layer (None, 10, 64)?
Shouldn't it be a 64-dimensional vector for each word, i.e. (None, 263, 64)?
You can find all the information about the Embedding Layer of Tensorflow Here.
The first two parameters are input_dimension and output_dimension.
The input dimension basically represents the vocabulary size of your model. You can find this out by using the word_index attribute of the Tokenizer object.
The output dimension is the size of each embedding vector, which is the last dimension of the input fed to the next layer.
The output of the Embedding layer is of the form (batch_size, input_length, output_dim). Since you specified the input_length parameter, your layer's input is of the form (batch, input_length). That's why the output is of the form (None, 10, 64).
Hope that clears up your doubt ☺️
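A quick sketch to confirm the shape, using the values from the question:
from keras.models import Sequential
from keras.layers import Embedding

total_words = 263
max_sequence_len = 11
model = Sequential()
model.add(Embedding(total_words, 64, input_length=max_sequence_len - 1))
print(model.output_shape)   # (None, 10, 64): 10 timesteps, each mapped to a 64-dim vector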
In the Embedding layer, the first argument is the input dimension (typically the vocabulary size, which can be large). The second argument is the output dimension, i.e. the dimensionality of the reduced embedding vector. The third argument is the sequence length. In essence, an Embedding layer is simply learning a lookup table of shape (input dim, output dim), and the weights of this layer have that shape. The output of the layer, however, is of shape (batch size, sequence length, output dim): one dimensionality-reduced embedding vector for each element in the input sequence. The (263, 64) shape you were expecting is actually the shape of the weights of the Embedding layer, not of its output.
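A short sketch illustrating the difference between the weight shape and the output shape (reusing the numbers from the question):
from keras.models import Sequential
from keras.layers import Embedding

model = Sequential()
model.add(Embedding(263, 64, input_length=10))
print(model.layers[0].get_weights()[0].shape)   # (263, 64): the learned lookup table
print(model.output_shape)                       # (None, 10, 64): one 64-dim vector per timestep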
I'm doing binary classification with CNNs and the data is imbalanced: the ratio of positive to negative medical images is 0.4 : 0.6. So I want to use SMOTE to oversample the positive medical images before training.
However, the data is 4-dimensional, with shape (761, 64, 64, 3), which causes the error
Found array with dim 4. Estimator expected <= 2
So, I reshape my train_data:
X_res, y_res = smote.fit_sample(X_train.reshape(X_train.shape[0], -1), y_train.ravel())
And it works fine. Before feed it to CNNs, I reshape it back by:
X_res = X_res.reshape(X_res.shape[0], 64, 64, 3)
Now, I'm not sure whether this is a correct way to oversample, and whether the reshape operation will change the images' structure.
I had a similar issue. I used the reshape function to reshape the images (basically flattening each image):
X_train.shape
# (8000, 250, 250, 3)
ReX_train = X_train.reshape(8000, 250 * 250 * 3)
ReX_train.shape
# (8000, 187500)
smt = SMOTE()
# note: newer imbalanced-learn versions use fit_resample instead of fit_sample
Xs_train, ys_train = smt.fit_sample(ReX_train, y_train)
Although this approach is painfully slow, it helped to improve the performance.
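One step the snippet above leaves out is reshaping the oversampled data back to image form before feeding it to the CNN; a minimal sketch, assuming the same 250x250x3 images:
Xs_train = Xs_train.reshape(-1, 250, 250, 3)   # back to (n_samples, height, width, channels)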
As soon as you flatten an image you are losing localized information; this is one of the reasons why convolutions are used in image-based machine learning.
A tensor of shape 8000x250x250x3 has an inherent meaning: 8000 image samples, each of width 250 and height 250, all with 3 channels. When you reshape it to 8000x(250*250*3) it is just a bunch of numbers, and unless you use some kind of sequence network, training on that is a bad idea.
Oversampling is bad for image data; do image augmentations instead (multi-crop, introducing noise such as Gaussian blur, rotations, translations, etc.).
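For instance, a minimal augmentation sketch with Keras' ImageDataGenerator (the specific parameter values are illustrative assumptions):
from keras.preprocessing.image import ImageDataGenerator

# X_train: (n_samples, 64, 64, 3) images, y_train: labels, as in the question's setup
datagen = ImageDataGenerator(
    rotation_range=15,        # random rotations
    width_shift_range=0.1,    # random horizontal shifts
    height_shift_range=0.1,   # random vertical shifts
    zoom_range=0.1,           # random zooms (crop-like effect)
    horizontal_flip=True)
train_gen = datagen.flow(X_train, y_train, batch_size=32)
# model.fit(train_gen, epochs=10)   # or model.fit_generator(train_gen, ...) on older Keras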
First, flatten the images.
Apply SMOTE on the flattened image data and its labels.
Reshape the flattened images back to RGB images.
from imblearn.over_sampling import SMOTE

sm = SMOTE(random_state=42)
train_rows = len(X_train)
X_train = X_train.reshape(train_rows, -1)
# X_train.shape is now (80, 30000)
X_train, y_train = sm.fit_resample(X_train, y_train)
X_train = X_train.reshape(-1, 100, 100, 3)
# X_train.shape is now (>80, 100, 100, 3)
I am working on a classification problem using a CNN where my input image size is 64x64, and I want to use a pretrained model such as VGG16, a model pretrained on COCO, or any other. But the problem is that the input image size of the pretrained model is 224x224. How do I resolve this issue? Is there a data augmentation way to handle the input image size?
If I resize my input images to 224x224, there is a very high chance the images will get blurred, and that may impact the training. Please correct me if I am wrong.
Another question is related to the pretrained model. If I am using transfer learning, how many layers do I generally have to freeze in the pretrained model, considering that my classification task is very different from the pretrained model's classes? I guess the first few layers can be frozen, since they capture the edges, curves, etc. that are common to all images.
But the problem is input image size of pretrained model is 224X224.
I assume you work with Keras/TensorFlow (it's the same for other DL frameworks). According to the docs for Keras Applications:
input_shape: optional shape tuple, only to be specified if include_top is False (otherwise the input shape has to be (224, 224, 3) (with 'channels_last' data format) or (3, 224, 224) (with 'channels_first' data format)). It should have exactly 3 input channels, and width and height should be no smaller than 48. E.g. (200, 200, 3) would be one valid value.
So there are two options to solve your issue:
Resize your input images to 224x224 with an existing library and use the VGG classifier [include_top=True].
Train your own classifier on top of the VGG convolutional base. As mentioned in the Keras documentation above, if your image size is different from 224x224 you should train your own classifier [include_top=False]. You can do this easily with something like:
inp = keras.layers.Input(shape=(64, 64, 3), name='image_input')
vgg_model = VGG19(weights='imagenet', include_top=False)
vgg_model.trainable = False
x = vgg_model(inp)                            # run the frozen convolutional base on the input
x = keras.layers.Flatten(name='flatten')(x)
x = keras.layers.Dense(512, activation='relu', name='fc1')(x)
x = keras.layers.Dense(512, activation='relu', name='fc2')(x)
x = keras.layers.Dense(10, activation='softmax', name='predictions')(x)
new_model = keras.models.Model(inputs=inp, outputs=x)
new_model.compile(optimizer='adam', loss='categorical_crossentropy',
                  metrics=['accuracy'])
If I am using transfer learning, how many layers do I generally have to freeze in the pretrained model?
It really depends on what your new task is, how many training examples you have, what your pretrained model is, and lots of other things. If I were you, I would first throw away only the pretrained model's classifier. Then, if that did not work, I would retrain some of the convolutional layers as well, step by step, until I got good performance.
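As a rough sketch of freezing only the early layers (the cut-off point chosen here is an illustrative assumption):
from keras.applications.vgg16 import VGG16

base = VGG16(include_top=False, weights='imagenet', input_shape=(64, 64, 3))
# Freeze everything, then unfreeze the last convolutional block for fine-tuning
for layer in base.layers:
    layer.trainable = False
for layer in base.layers[-4:]:   # block5 conv layers + pooling (illustrative choice)
    layer.trainable = True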
The following code works for me for image size 128*128*3:
from tensorflow.keras.applications import VGG16
from tensorflow.keras.models import Model, Sequential
from tensorflow.keras.layers import Flatten

vgg_model = VGG16(include_top=False, weights='imagenet')
print(vgg_model.summary())

# Get the config dictionary for VGG16 and change its input shape
vgg_config = vgg_model.get_config()
vgg_config["layers"][0]["config"]["batch_input_shape"] = (None, 128, 128, 3)
# Note: from_config rebuilds the architecture only; copy the pretrained weights
# with vgg_updated.set_weights(vgg_model.get_weights()) if you need them
vgg_updated = Model.from_config(vgg_config)
vgg_updated.trainable = False

model = Sequential()
# Add the VGG convolutional base model
model.add(vgg_updated)
# A Flatten layer must be added before any Dense classifier layers
model.add(Flatten())

vgg_updated.summary()
model.summary()
The Keras tutorial gives the following code example (with comments):
# apply a convolution 1d of length 3 to a sequence with 10 timesteps,
# with 64 output filters
model = Sequential()
model.add(Convolution1D(64, 3, border_mode='same', input_shape=(10, 32)))
# now model.output_shape == (None, 10, 64)
I am confused about the output size. Shouldn't it create 10 timesteps with a depth of 64 and a width of 32 (stride defaults to 1, no padding)? So (10, 32, 64) instead of (None, 10, 64)?
In k-dimensional convolution you have filters which preserve the structure of the first k dimensions and squash the information from all other dimensions by convolving them with the filter weights. So basically every filter in your network has shape (3 x 32), and all information from the last dimension (the one with size 32) is squashed to a single real number, while the first dimension is preserved. This is why you get a shape like this.
You can imagine a similar situation in the 2-D case with a colour image. Your input then has a 3-dimensional structure (picture_length, picture_width, colour). When you apply a 2-D convolution with respect to the first two dimensions, all information about colour is squashed by your filter and is not preserved as a separate dimension in the output structure. The same happens here.
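A small sketch to confirm the shapes (written against the current Keras API, where border_mode is now padding):
from keras.models import Sequential
from keras.layers import Conv1D

model = Sequential()
model.add(Conv1D(64, 3, padding='same', input_shape=(10, 32)))
print(model.output_shape)                       # (None, 10, 64)
print(model.layers[0].get_weights()[0].shape)   # (3, 32, 64): each of the 64 filters spans 3 timesteps x 32 channels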
I have a classical y_train which is composed of 0s (negative) and 1s (positive) in a one-dimensional shape. I want to train a TensorFlow model, but I have to initialize the y placeholder with the number of classes I want. So in this text classification case, since I want the model to predict negative or positive, that means 2 classes? But how do I convert my y_train to fit the output that I'm looking for? Thanks for your time!
"ValueError: Cannot feed value of shape (25000, 1) for Tensor u'Placeholder_5:0', which has shape (Dimension(None), Dimension(2))"
It appears your y_train contains the label values themselves, whereas the y_train needed by the model requires label probabilities. In your case, since there are only two labels, you can convert that to label probabilities as follows :
y_train = tf.concat([1 - y_train, y_train], axis=1)  # written tf.concat(1, [...]) in very old TensorFlow versions
If you have more labels, have a look at sparse_to_dense to convert them to probabilities.
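As an alternative (not from the original answer), the same conversion can be done with a one-hot encoding via tf.one_hot; a minimal sketch:
import numpy as np
import tensorflow as tf

y_train = np.array([[0], [1], [1], [0]])           # labels of shape (n, 1)
y_onehot = tf.one_hot(y_train.ravel(), depth=2)    # shape (n, 2): [1, 0] for class 0, [0, 1] for class 1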