Reshape y_train for binary text classification in Tensorflow

I have a classical y_train composed of 0 (negative) and 1 (positive) labels in a one-dimensional shape. I want to train a TensorFlow model, but I have to initialize the y placeholder with the number of classes I want. In this text classification case I want the model to predict a negative or positive value, so 2 classes? But how do I convert my y_train to fit the output I'm looking for? Thanks for your time!
"ValueError: Cannot feed value of shape (25000, 1) for Tensor u'Placeholder_5:0', which has shape (Dimension(None), Dimension(2))"

It appears your y_train contains the label values themselves, whereas the model's y placeholder expects per-class label probabilities. In your case, since there are only two labels, you can convert it to label probabilities as follows:
y_train = tf.concat(1, [1 - y_train, y_train])
If you have more labels, have a look at sparse_to_dense to convert them to probabilities.
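If you just need a NumPy array to feed the placeholder, here is a minimal sketch of the same idea (assuming y_train is an integer 0/1 array of shape (25000, 1), as the error message suggests); np.eye builds the two-column one-hot rows that the (None, 2) placeholder expects:
import numpy as np
y_train = y_train.reshape(-1).astype(int)   # (25000,) integer labels
y_train_2col = np.eye(2)[y_train]           # (25000, 2): 0 -> [1, 0], 1 -> [0, 1]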

Related

What is the default batch size of pytorch SGD?

What does pytorch SGD do if I feed the whole data and do not specify the batch size? I don't see any "stochastic" or "randomness" in this case.
For example, in the following simple code, I feed the whole data (x,y) into a model.
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
for epoch in range(5):
    y_pred = model(x_data)
    loss = criterion(y_pred, y_data)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
Suppose there are 100 data pairs (x,y), i.e. x_data and y_data each has 100 elements.
Question: It seems to me that all 100 gradients are calculated before one update of the parameters. The size of a "mini-batch" is 100, not 1. So there is no randomness, am I right? At first I thought SGD meant randomly choosing 1 data point, calculating its gradient, and using it as an approximation of the true gradient from all the data.
The SGD optimizer in PyTorch is just gradient descent. The stochastic part comes from how you usually pass a random subset of your data through the network at a time (i.e. a mini-batch or batch). The code you posted passes the entire dataset through on each epoch before doing backprop and stepping the optimizer, so you're really just doing regular gradient descent.
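As a minimal sketch of where the stochastic part usually comes from (x_data, y_data, model and criterion are assumed to be defined as in the question), wrap the data in a DataLoader with shuffle=True so each optimizer step sees a random mini-batch:
import torch
from torch.utils.data import TensorDataset, DataLoader

loader = DataLoader(TensorDataset(x_data, y_data), batch_size=10, shuffle=True)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
for epoch in range(5):
    for x_batch, y_batch in loader:   # 10 random mini-batches of size 10 per epoch
        y_pred = model(x_batch)
        loss = criterion(y_pred, y_batch)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()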

Use SMOTE to oversample image data

I'm doing binary classification with CNNs and the data is imbalanced: the ratio of positive to negative medical images is 0.4 : 0.6. So I want to use SMOTE to oversample the positive medical images before training.
However, the data is 4-dimensional, (761, 64, 64, 3), which causes the error
Found array with dim 4. Estimator expected <= 2
So, I reshape my train_data:
X_res, y_res = smote.fit_sample(X_train.reshape(X_train.shape[0], -1), y_train.ravel())
And it works fine. Before feeding it to the CNN, I reshape it back with:
X_res = X_res.reshape(X_res.shape[0], 64, 64, 3)
Now I'm not sure whether this is a correct way to oversample, and whether the reshape operation changes the images' structure.
I had a similar issue. I used the reshape function to flatten the images:
from imblearn.over_sampling import SMOTE

X_train.shape                                     # (8000, 250, 250, 3)
ReX_train = X_train.reshape(8000, 250 * 250 * 3)  # flatten each image
ReX_train.shape                                   # (8000, 187500)
smt = SMOTE()
Xs_train, ys_train = smt.fit_sample(ReX_train, y_train)
This approach is painfully slow, but it helped to improve performance.
As soon as you flatten an image you are losing localized information; this is one of the reasons why convolutions are used in image-based machine learning.
8000x250x250x3 has an inherent meaning: 8000 image samples, each of width 250 and height 250, with 3 channels. Once you reshape it to 8000x(250*250*3) it is just a bunch of numbers, and unless you use some kind of sequence network to learn from it, that is bad.
Oversampling is a poor fit for image data; you can instead do image augmentations (20-crop, adding noise such as Gaussian blur, rotations, translations, etc.), as sketched below.
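Here is a minimal sketch of that kind of augmentation with Keras' ImageDataGenerator (assuming X_train is the (761, 64, 64, 3) image array and y_train the matching labels from the question):
from keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(
    rotation_range=15,       # small random rotations
    width_shift_range=0.1,   # random horizontal shifts
    height_shift_range=0.1,  # random vertical shifts
    horizontal_flip=True)    # random left-right flips
train_iter = datagen.flow(X_train, y_train, batch_size=32)
# e.g. model.fit_generator(train_iter, steps_per_epoch=len(X_train) // 32, epochs=10)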
First, flatten the images.
Apply SMOTE to the flattened image data and its labels.
Reshape the flattened images back to RGB images.
from imblearn.over_sampling import SMOTE

sm = SMOTE(random_state=42)
train_rows = len(X_train)
X_train = X_train.reshape(train_rows, -1)   # (80, 30000)
X_train, y_train = sm.fit_resample(X_train, y_train)
X_train = X_train.reshape(-1, 100, 100, 3)  # (>80, 100, 100, 3)

Use pretrained model with different input shape and class model

I am working on a classification problem using a CNN where my input image size is 64x64, and I want to use a pretrained model such as VGG16, a model pretrained on COCO, or any other. But the problem is that the input image size of the pretrained model is 224x224. How do I sort out this issue? Is there any data augmentation approach for the input image size?
If I resize my input image to 224x224, there is a very high chance the image will get blurred, and that may impact the training. Please correct me if I am wrong.
Another question is related to the pretrained model. If I am using transfer learning, generally how many layers do I have to freeze from the pretrained model, considering that my classification task is very different from the pretrained model's classes? I guess the first few layers can be frozen to capture the edges, curves, etc. of the images, which are common to all images.
But the problem is that the input image size of the pretrained model is 224x224.
I assume you work with Keras/TensorFlow (it is the same for other DL frameworks). According to the Keras Applications docs:
input_shape: optional shape tuple, only to be specified if include_top is False (otherwise the input shape has to be (224, 224, 3) (with 'channels_last' data format) or (3, 224, 224) (with 'channels_first' data format)). It should have exactly 3 input channels, and width and height should be no smaller than 48. E.g. (200, 200, 3) would be one valid value.
So there are two options to solve your issue:
Resize your input image to 224x224 with an existing library and use the VGG classifier [include_top=True].
Train your own classifier on top of the VGG model. As mentioned in the Keras documentation above, if your image size is different from 224x224 you should train your own classifier [include_top=False]. You can do this easily with something like:
import keras
from keras.applications.vgg19 import VGG19

inp = keras.layers.Input(shape=(64, 64, 3), name='image_input')
vgg_model = VGG19(weights='imagenet', include_top=False)
vgg_model.trainable = False
x = vgg_model(inp)                          # run the frozen VGG base on the new input
x = keras.layers.Flatten(name='flatten')(x)
x = keras.layers.Dense(512, activation='relu', name='fc1')(x)
x = keras.layers.Dense(512, activation='relu', name='fc2')(x)
x = keras.layers.Dense(10, activation='softmax', name='predictions')(x)
new_model = keras.models.Model(inputs=inp, outputs=x)
new_model.compile(optimizer='adam', loss='categorical_crossentropy',
                  metrics=['accuracy'])
If I am using transfer learning then generally how many layers do I have to freeze from the pretrained model?
It really depends on what your new task is, how many training examples you have, what your pretrained model is, and lots of other things. If I were you, I would first throw away only the pretrained model's classifier. Then, if that did not work, I would also remove some convolutional layers, step by step, until I got good performance.
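Since the question also asks how many layers to freeze, here is a minimal sketch of freezing the base step by step (assuming vgg_model is the pretrained base from the code above; layer names such as 'block5_conv1' are the standard ones in Keras' VGG models):
for layer in vgg_model.layers:
    layer.trainable = False                # start with everything frozen
for layer in vgg_model.layers:
    if layer.name.startswith('block5'):    # then unfreeze only the last conv block
        layer.trainable = True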
The following code works for me for image size 128x128x3:
from keras.applications.vgg16 import VGG16
from keras.models import Model, Sequential
from keras.layers import Flatten

vgg_model = VGG16(include_top=False, weights='imagenet')
vgg_model.summary()

# Get the config dictionary for VGG16 and change its input shape
vgg_config = vgg_model.get_config()
vgg_config["layers"][0]["config"]["batch_input_shape"] = (None, 128, 128, 3)

# Rebuild the model from the edited config and copy the pretrained weights over
vgg_updated = Model.from_config(vgg_config)
vgg_updated.set_weights(vgg_model.get_weights())
vgg_updated.trainable = False

model = Sequential()
# Add the VGG convolutional base model
model.add(vgg_updated)
# A Flatten layer must be added before any Dense classifier layers
model.add(Flatten())

vgg_updated.summary()
model.summary()
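As an alternative sketch, when include_top=False Keras also accepts a custom input_shape directly (per the documentation quoted earlier), which avoids editing the config by hand:
from keras.applications.vgg16 import VGG16

vgg_base = VGG16(include_top=False, weights='imagenet', input_shape=(128, 128, 3))
vgg_base.trainable = False   # freeze the convolutional base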

keras vgg 16 shape error

I'm trying to fit data with the following shape to the pretrained Keras VGG16 model.
image input shape is (32383, 96, 96, 3)
label shape is (32383, 17)
and I got this error
expected block5_pool to have 4 dimensions, but got array with shape (32383, 17)
at this line
model.fit(x=X_train, y=Y_train, validation_data=(X_valid, Y_valid),
          batch_size=64, verbose=2, epochs=epochs, callbacks=callbacks, shuffle=True)
Here's how I define my model:
model = VGG16(include_top=False, weights='imagenet', input_tensor=None, input_shape=(96,96,3),classes=17)
How did max pooling give me a 2-D tensor and not a 4-D tensor? I'm using the original model from keras.applications.vgg16. How can I fix this error?
Your problem comes from VGG16(include_top=False, ...), as this loads only the convolutional part of VGG. That is why Keras complains that it got a 2-dimensional target instead of a 4-dimensional one (the 4 dimensions come from the fact that the convolutional output has shape (nb_of_examples, width, height, channels)). To overcome this you need to either set include_top=True or add additional layers which squash the convolutional part to a 2-D one (e.g. using Flatten, GlobalMaxPooling2D or GlobalAveragePooling2D plus a set of Dense layers, including a final Dense of size 17 with a softmax activation function).
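A minimal sketch of the second option (assuming 17 classes and 96x96x3 inputs as in the question): squash the 4-dimensional convolutional output and put a softmax classifier on top.
from keras.applications.vgg16 import VGG16
from keras.layers import GlobalAveragePooling2D, Dense
from keras.models import Model

base = VGG16(include_top=False, weights='imagenet', input_shape=(96, 96, 3))
x = GlobalAveragePooling2D()(base.output)   # (None, 512): squashes width and height
out = Dense(17, activation='softmax')(x)    # matches the (32383, 17) labels
model = Model(inputs=base.input, outputs=out)
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])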

Output dimensions of convolutional layer with Keras

The Keras tutorial gives the following code example (with comments):
# apply a convolution 1d of length 3 to a sequence with 10 timesteps,
# with 64 output filters
model = Sequential()
model.add(Convolution1D(64, 3, border_mode='same', input_shape=(10, 32)))
# now model.output_shape == (None, 10, 64)
I am confused about the output size. Shouldn't it create 10 timesteps with a depth of 64 and a width of 32 (stride defaults to 1, no padding)? So (10, 32, 64) instead of (None, 10, 64)?
In a k-dimensional convolution you have filters which preserve the structure of the first k dimensions and squash the information from all the other dimensions by convolving them with the filter weights. So every filter in your network has dimension (3 x 32), and all the information from the last dimension (the one of size 32) is squashed into a single real number per filter, while the first dimension is preserved. That is why you get this shape.
You can imagine a similar situation in the 2-D case with a colour image. Your input then has a 3-dimensional structure (picture_length, picture_width, colour). When you apply a 2-D convolution with respect to the first two dimensions, all the information about colours is squashed by your filter and is not preserved as a separate axis in the output structure. The same happens here.
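A quick sketch to check this (using the modern tf.keras names Conv1D and padding, which correspond to Convolution1D and border_mode in the older API quoted above):
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv1D(64, 3, padding='same', input_shape=(10, 32))])
print(model.output_shape)   # (None, 10, 64): 10 timesteps preserved, 64 filters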
