Run Multiple Keras Models in a Cluster Like OAR2

I want to build an AutoML-style algorithm, but I don't know how to train multiple Keras models simultaneously on a cluster like OAR2.
Assume I have two different Keras models like this:
model1 = Sequential()
model1.add(Conv2D(32, kernel_size=(3, 3),
                  activation='relu',
                  input_shape=input_shape))
model1.add(Conv2D(64, (3, 3), activation='relu'))
model1.add(MaxPooling2D(pool_size=(2, 2)))
model1.add(Dropout(0.25))
model1.add(Flatten())
model1.add(Dense(128, activation='relu'))
model1.add(Dropout(0.5))
model1.add(Dense(num_classes, activation='softmax'))
model2 = Sequential()
model2.add(Conv2D(32, kernel_size=(3, 3),
                  activation='relu',
                  input_shape=input_shape))
model2.add(Conv2D(128, (3, 3), activation='selu'))
model2.add(MaxPooling2D(pool_size=(2, 2)))
model2.add(Dropout(0.25))
model2.add(Flatten())
model2.add(Dense(128, activation='relu'))
model2.add(Dropout(0.5))
model2.add(Dense(num_classes, activation='softmax'))
How could I train these two models simultaneously on a cluster?

Related

convolution layer as an output layer for classification problem

I am trying to build a deep learning model for classifying the CIFAR-10 dataset of 10 classes. Now I want a convolution layer as my output layer, and this layer (filters=10) should take its input from the Flatten layer and predict my class.
My model code:
num_class = 10
model = Sequential()
model.add(Conv2D(32, (3, 3), padding='same',
                 input_shape=x_train.shape[1:]))
model.add(Activation('relu'))
model.add(Conv2D(32, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Conv2D(64, (3, 3), padding='same'))
model.add(Activation('relu'))
model.add(Conv2D(64, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(num_classes))
model.add(Conv2D(10, (3,3)))
model.add(Activation('softmax'))
but it is giving me this error:
Input 0 of layer conv2d_34 is incompatible with the layer: expected ndim=4, found ndim=2. Full shape received: [None, 6272]
How do I achieve this?
You are using a Flatten layer before a convolutional layer. Flatten makes the tensor output 2-D, but Conv2D needs 4-D input. Just comment out the Flatten line and everything will work fine.
Also, you have no classification module in your model; you need a Dense layer with the number of classes as the last layer.
#model.add(Flatten()) # comment this line
model.add(Dropout(0.5))
model.add(Conv2D(10,(3,3)))
model.add(Flatten())
model.add(Dense(num_class)) # num_class is the number of classes in your dataset
model.add(Activation('softmax'))
You can use a convolution layer as the final output with some kind of global pooling. For example, the following model uses GlobalAveragePooling2D.
model = Sequential()
model.add(Conv2D(32, (3, 3), padding='same',
                 input_shape=x_train.shape[1:]))
model.add(Activation('relu'))
model.add(Conv2D(32, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Conv2D(64, (3, 3), padding='same'))
model.add(Activation('relu'))
model.add(Conv2D(64, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Conv2D(10, (3,3)))
model.add(GlobalAveragePooling2D())
model.add(Activation('softmax'))
model.summary()
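To see why GlobalAveragePooling2D lets a convolutional layer act as the classifier head: it averages each channel over the spatial dimensions, collapsing an (height, width, channels) feature map to a vector of length channels, which the softmax can then score. A minimal pure-Python illustration of that collapse (toy data, no Keras required):

```python
def global_average_pool(feature_map):
    """Average each channel of an H x W x C feature map over H and W,
    returning a length-C vector -- what GlobalAveragePooling2D does
    for a single example."""
    height = len(feature_map)
    width = len(feature_map[0])
    channels = len(feature_map[0][0])
    pooled = [0.0] * channels
    for row in feature_map:
        for pixel in row:
            for c in range(channels):
                pooled[c] += pixel[c]
    return [total / (height * width) for total in pooled]

# A toy 2x2 feature map with 10 channels, where channel c holds the value c
# everywhere, so the pooled vector is simply [0.0, 1.0, ..., 9.0].
fmap = [[[float(c) for c in range(10)] for _ in range(2)] for _ in range(2)]
print(global_average_pool(fmap))
```

So with filters=10 in the last Conv2D, the pooled output is one score per class, matching the 10-way softmax.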

How to get a good binary classification deep neural model when negative examples dominate the dataset?

I wanted to do binary image classification using the CIFAR-10 dataset, modified so that class 0 becomes class True (1) and all other classes become class False (0). Now there are only two classes in my dataset: True (1) and False (0).
While training with the following Keras model (TensorFlow as backend) I am getting almost 99% accuracy.
But at test time I find that all False examples are predicted as False, and all True examples are also predicted as False, which still gives 99% accuracy.
But I do not want all True examples predicted as False; I was expecting them to be predicted as True.
How can I resolve this problem?
model = Sequential()
model.add(Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)))
model.add(Conv2D(32, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(2, activation='softmax'))
model.compile(loss='sparse_categorical_crossentropy', optimizer='sgd', metrics=['accuracy'])
output=model.fit(x_train, y_train, batch_size=32, epochs=10)
You have a few options here:
Get more data with the True label. However, in most scenarios this is not easily possible.
Use only a small amount of the data labeled False. Maybe that is enough to train your model?
Use class weights in the loss function during training. In Keras you can do this with the class_weight option of fit. The class True should get a higher weight than the class False in your example.
As mentioned in the comments, class imbalance is a big problem in the ML field. These are just a few very simple things you could try.
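As a sketch of the class_weight option: one common heuristic (an assumption here, not the only valid choice) is to weight each class inversely to its frequency, so the rare True class contributes as much to the loss as the common False class. The weight computation itself needs no Keras:

```python
from collections import Counter

def inverse_frequency_weights(labels):
    """Weight each class by total / (num_classes * count), so rarer
    classes get proportionally larger weights."""
    counts = Counter(labels)
    total = len(labels)
    return {cls: total / (len(counts) * n) for cls, n in counts.items()}

# CIFAR-10 collapsed to one-vs-rest: roughly 10% of labels are True (1).
y_train = [1] * 5000 + [0] * 45000
class_weight = inverse_frequency_weights(y_train)
print(class_weight)  # True gets weight 5.0, False roughly 0.56

# Then pass the weights to the original training call:
# model.fit(x_train, y_train, batch_size=32, epochs=10,
#           class_weight=class_weight)
```

With these weights, misclassifying a True example costs about nine times as much as misclassifying a False one, which discourages the degenerate "predict everything False" solution.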

Implementation of AlexNet in Keras on cifar-10 gives poor accuracy

I tried implementing AlexNet as explained in this video. Pardon me if I have implemented it wrong; this is my implementation of it in Keras.
Edit: the CIFAR-10 ImageDataGenerator:
cifar_generator = ImageDataGenerator()
cifar_data = cifar_generator.flow_from_directory('datasets/cifar-10/train',
                                                 batch_size=32,
                                                 target_size=input_size,
                                                 class_mode='categorical')
The Model described in Keras:
model = Sequential()
model.add(Convolution2D(filters=96, kernel_size=(11, 11), input_shape=(227, 227, 3), strides=4, activation='relu'))
model.add(MaxPool2D(pool_size=(3, 3), strides=2))
model.add(Convolution2D(filters=256, kernel_size=(5, 5), strides=1, padding='same', activation='relu'))
model.add(MaxPool2D(pool_size=(3, 3), strides=2))
model.add(Convolution2D(filters=384, kernel_size=(3, 3), strides=1, padding='same', activation='relu'))
model.add(Convolution2D(filters=384, kernel_size=(3, 3), strides=1, padding='same', activation='relu'))
model.add(Convolution2D(filters=256, kernel_size=(3, 3), strides=1, padding='same', activation='relu'))
model.add(MaxPool2D(pool_size=(3, 3), strides=2))
model.add(Flatten())
model.add(Dense(units=4096))
model.add(Dense(units=4096))
model.add(Dense(units=10, activation='softmax'))
model.compile(optimizer = 'adam', loss = 'categorical_crossentropy', metrics = ['accuracy'])
I have used an ImageDataGenerator to train this network on the CIFAR-10 dataset. However, I am only able to get an accuracy of about 0.20. I cannot figure out what I am doing wrong.
For starters, you need to extend the relu activation to your two intermediate Dense layers too; as they are now:
model.add(Dense(units=4096))
model.add(Dense(units=4096))
i.e. with linear activation (the default), it can be shown that they are each equivalent to a single linear unit (Andrew Ng devotes a whole lecture in the first course of his DL specialization to explaining this). Change them to:
model.add(Dense(units=4096, activation='relu'))
model.add(Dense(units=4096, activation='relu'))
Check the SO thread Why must a nonlinear activation function be used in a backpropagation neural network?, as well as the AlexNet implementations here and here to confirm this.
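The collapse of stacked linear layers can be checked directly: composing two Dense layers with no activation is just multiplying by two weight matrices in a row, which equals multiplying once by their product, so the second layer adds no expressive power. A small pure-Python check with toy weights (no Keras needed):

```python
def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

# Two "Dense" layers without activation are just weight matrices W1, W2
# (biases omitted for brevity; they fold into a single bias the same way).
W1 = [[1.0, 2.0], [3.0, 4.0]]
W2 = [[0.5, -1.0], [2.0, 0.0]]
x = [[1.0], [2.0]]  # a column-vector input

# Applying the layers one after the other...
stacked = matmul(W2, matmul(W1, x))
# ...gives the same result as applying the single merged matrix W2 @ W1:
merged = matmul(matmul(W2, W1), x)
print(stacked == merged)  # True
```

This is why the nonlinearity between Dense layers matters: without it, the two 4096-unit layers reduce to one linear map.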

Image classifier with Keras not converging

I am trying to build an image classifier with Keras (TensorFlow as backend). The objective is to separate memes from other images.
I am using a structure of convolutional layers plus fully connected layers, with max pooling and dropout.
The code is as follows:
model = Sequential()
model.add(Conv2D(64, (3,3), activation='relu', input_shape=conv_input_shape))
model.add(Conv2D(64, (3,3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.5))
model.add(Conv2D(128, (3, 3), activation='relu'))
model.add(Conv2D(128, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.5))
model.add(Flatten())
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(2, activation='softmax'))
sgd = SGD(lr=0.1, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])
The input is a matrix of shape (n, 100, 100, 3): n RGB images with resolution 100 x 100. The output labels are [1, 0] for a meme and [0, 1] otherwise.
However, when I train the model, the loss never decreases from the first iteration.
Is there anything off in the code?
I am thinking that a meme is actually not that different from other images in many ways, except that some of them have captions together with some other features.
What are some better architectures for solving a problem like this?

How to load only specific weights on Keras

I have a trained model whose weights I've exported, and I want to partially load them into another model.
My model is built in Keras with TensorFlow as the backend.
Right now I'm doing it as follows:
model = Sequential()
model.add(Conv2D(32, (3, 3), input_shape=input_shape, trainable=False))
model.add(Activation('relu', trainable=False))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(32, (3, 3), trainable=False))
model.add(Activation('relu', trainable=False))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(64, (3, 3), trainable=True))
model.add(Activation('relu', trainable=True))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(64))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(1))
model.add(Activation('sigmoid'))
model.compile(loss='binary_crossentropy',
              optimizer='rmsprop',
              metrics=['accuracy'])
model.load_weights("image_500.h5")
model.pop()
model.pop()
model.pop()
model.pop()
model.pop()
model.pop()
model.add(Conv2D(1, (6, 6),strides=(1, 1), trainable=True))
model.add(Activation('relu', trainable=True))
model.compile(loss='binary_crossentropy',
              optimizer='rmsprop',
              metrics=['accuracy'])
I'm sure this is a terrible way to do it, although it works.
How do I load just the first 9 layers?
If your first 9 layers are consistently named between the original trained model and the new model, you can call model.load_weights() with by_name=True. This updates weights only in those layers of the new model that have an identically named layer in the original trained model.
The name of the layer can be specified with the name keyword, for example:
model.add(Dense(8, activation='relu',name='dens_1'))
Note that this call:
weights_list = model.get_weights()
returns a flat list of all the weight arrays in the model as NumPy arrays, not grouped per layer, so indexing it by layer number does not line up with layers. A safer way is to copy weights layer by layer from the fully loaded original model (call it source_model) into the new one:
for old_layer, new_layer in zip(source_model.layers[:9], model.layers[:9]):
    new_layer.set_weights(old_layer.get_weights())
where .layers is a flattened list of the layers comprising a model. This reloads the weights of the first 9 layers only.
More information is available here:
https://keras.io/layers/about-keras-layers/
https://keras.io/models/about-keras-models/
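The by_name=True behaviour can be pictured as a dictionary lookup: saved weights are stored under layer names, and only names present in both models are copied, while unmatched layers keep their fresh weights. A minimal pure-Python mimic of that matching rule (illustrative only, not the actual Keras implementation; the layer names are made up):

```python
def load_weights_by_name(saved, model_layers):
    """Mimic load_weights(..., by_name=True): copy saved weights into
    the new model wherever the layer name matches; layers without a
    saved counterpart keep their current weights.
    `saved` maps layer name -> weights; `model_layers` maps the new
    model's layer names -> current weights."""
    loaded = dict(model_layers)
    for name in model_layers:
        if name in saved:
            loaded[name] = saved[name]
    return loaded

# Toy weights: the new model shares two layer names with the saved file
# and adds one new head layer of its own.
saved = {"conv2d_1": [1, 2], "conv2d_2": [3, 4], "dense_1": [5, 6]}
new_model = {"conv2d_1": [0, 0], "conv2d_2": [0, 0], "conv_final": [0, 0]}
result = load_weights_by_name(saved, new_model)
print(result)
```

This is why consistent naming matters: rename a layer and its saved weights are silently skipped rather than loaded.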
