model.add( Dropout(0.25)) SyntaxError: invalid syntax in CNN - machine-learning

I am working on a CNN-based project, but when I build the CNN model I get an error at model.add(Dropout(0.25)) on line 14. The previous model.add(Dropout(0.25)) on line 9 did not give an error.
Can anyone tell me what the problem is? Why does only this line give an error?
model = Sequential()
model.add(Conv2D(32 , kernel_size=(3,3), acitvation ='relu' , padding='same' , input_shape = (28,28,1)))
model.add(BatchNormalization())
model.add(Conv2D(32,kernel_size=(3,3),activation='relu' , padding='same'))
model.add(BatchNormalization())
model.add(MaxPool2D(pool_size=(2,2) ,strides=2))
model.add(Dropout(0.25))
model.add(Conv2D(64,kernel_size=(3,3),activation='relu' , padding='same'))
model.add(BatchNormalization())
model.add(MaxPool2D(pool_size=(2,2),strides=2,padding='valid')
model.add( Dropout(0.25))
model.add(Flatten())
model.add(Dense(512,activation='relu'))
model.add(BatchNormalization())
model.add(Dropout(0.25))
model.add(Dense(1024,activation='relu'))
model.add(BatchNormalization())
model.add(Dropout(0.5))
model.add(Dense(10,activation='softmax'))
and the error message is
File "<ipython-input-53-e1c5cf3b08b4>", line 14
model.add( Dropout(0.25))
^
SyntaxError: invalid syntax

You forgot a closing parenthesis on the line above. Fix that and you should be good to go, I believe.

You forgot the closing parenthesis on the line above: just add one more ) at the end of line 13, the MaxPool2D call. Note there is also a typo on the first Conv2D line, acitvation instead of activation, which will raise a TypeError once the syntax error is fixed. The corrected code:
model = Sequential()
model.add(Conv2D(32, kernel_size=(3,3), activation='relu', padding='same', input_shape=(28,28,1)))
model.add(BatchNormalization())
model.add(Conv2D(32,kernel_size=(3,3),activation='relu' , padding='same'))
model.add(BatchNormalization())
model.add(MaxPool2D(pool_size=(2,2) ,strides=2))
model.add(Dropout(0.25))
model.add(Conv2D(64,kernel_size=(3,3),activation='relu' , padding='same'))
model.add(BatchNormalization())
model.add(MaxPool2D(pool_size=(2,2),strides=2,padding='valid'))
model.add( Dropout(0.25))
model.add(Flatten())
model.add(Dense(512,activation='relu'))
model.add(BatchNormalization())
model.add(Dropout(0.25))
model.add(Dense(1024,activation='relu'))
model.add(BatchNormalization())
model.add(Dropout(0.5))
model.add(Dense(10,activation='softmax'))
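As a side note, here is why Python flagged line 14 when the real mistake is on line 13: inside an unclosed parenthesis, the parser keeps reading the next line as part of the same expression and only fails there. A minimal illustration, reported as plain "invalid syntax" on Python versions before 3.10 (newer versions instead say the '(' was never closed):
x = max(1, 2   # missing closing parenthesis
y = 3          # SyntaxError: invalid syntax is reported on this line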

Related

perform LSTM model with more than 3D image representation

I am working on image classification with 10 classes.
Each image is represented as a set of 75 sequences, each sequence as a set of 42 visual words, and each word is encoded against a visual vocabulary of size 200. So each image is represented as a tensor of shape (75, 42, 200).
I want to use an LSTM network to model this image representation, using this code:
model = Sequential()
model.add(LSTM(128, activation='relu', input_shape=(75, 42, 200)))
model.add(Dense(10, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()
I get this error message:
ValueError Traceback (most recent call last)
<ipython-input-30-da9ec53d6d59> in <module>
1 model = Sequential()
----> 2 model.add(LSTM(128, activation='relu', input_shape=(75,42,200))) #number_of_hidden_units=128
3 model.add(Dense(10, activation='softmax')) #since number of output classes is 10
4 model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
5 model.summary()
2 frames
/usr/local/lib/python3.7/dist-packages/keras/engine/input_spec.py in assert_input_compatibility(input_spec, inputs, layer_name)
212 ndim = shape.rank
213 if ndim != spec.ndim:
--> 214 raise ValueError(f'Input {input_index} of layer "{layer_name}" '
215 'is incompatible with the layer: '
216 f'expected ndim={spec.ndim}, found ndim={ndim}. '
ValueError: Input 0 of layer "lstm_1" is incompatible with the layer: expected ndim=3, found ndim=4. Full shape received: (None, 75, 42, 200)
What is wrong? Please help.
Thank you.
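One way to make the shapes line up, sketched below under the assumption that the 42 words of each sequence can be flattened into a single feature vector: an LSTM expects 3D input (batch, timesteps, features), so reshape each image from (75, 42, 200) to (75, 42*200) and treat the 75 sequences as timesteps.
from keras.models import Sequential
from keras.layers import LSTM, Dense, Reshape

model = Sequential()
# (75, 42, 200) -> (75, 8400): one feature vector per sequence/timestep
model.add(Reshape((75, 42 * 200), input_shape=(75, 42, 200)))
model.add(LSTM(128, activation='relu'))
model.add(Dense(10, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])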

Negative weights and biases in siamese keras model

I am trying to train a siamese model in Keras. I use a really simple encoder with only convnets to encode a 32x32 RGB picture into a feature vector. The encoder encodes two pictures A and B; then an MLP compares the two vectors and computes a score between 0 and 1, which should be high if A and B are of the same class and low if they are not.
I used relu as the activation function on all layers, but the model only learned to encode everything into a zero vector. I switched to tanh and saw that a lot of the weights and biases, and also the entries in the feature vector, are negative. So I now understand why everything was zero with relu. But how come I get negative values? The input is positive, the output as well, and the y-values are 0 or 1. I think there is something wrong with my model.
It doesn't perform very well either; it only gets to around 60% accuracy.
Here is my model:
def model():
    initializer = keras.initializers.random_uniform(minval=0.0001, maxval=0.001)
    enc = Sequential()
    enc.add(Conv2D(32, (3, 3), padding='same', activation='tanh', kernel_initializer=initializer))
    enc.add(Conv2D(32, (3, 3), padding='same', strides=(2, 2), activation='tanh', kernel_initializer=initializer))
    enc.add(Conv2D(32, (3, 3), padding='same', activation='tanh', kernel_initializer=initializer))
    enc.add(Conv2D(16, (3, 3), padding='same', strides=(2, 2), activation='tanh', kernel_initializer=initializer))
    enc.add(Conv2D(32, (3, 3), padding='same', activation='tanh', kernel_initializer=initializer))
    enc.add(Conv2D(4, (3, 3), padding='same', strides=(2, 2), activation='tanh', kernel_initializer=initializer))
    enc.add(Flatten())
    input1 = Input((32, 32, 3))
    # enc.build((1, 32, 32, 3))
    # enc.summary()
    input2 = Input((32, 32, 3))
    enc1 = enc(input1)
    enc2 = enc(input2)
    twin = concatenate([enc1, enc2])
    twin = Dense(64, activation='tanh', kernel_initializer=initializer)(twin)
    twin = Dense(32, activation='tanh', kernel_initializer=initializer)(twin)
    twin = Dense(1, activation='sigmoid', kernel_initializer=initializer)(twin)
    twin = Model(inputs=[input1, input2], outputs=twin)
    twin.summary()
    twin.compile(optimizer=adam(0.0001), loss='binary_crossentropy', metrics=["acc"])
    return twin
Edit: I found out it was all good; just my data was bad. I had only a tenth as many samples of one class as of the others. Oversampling didn't help, so I removed the class from the dataset for now, and it's working. I might add the class back in with augmented copies as additional samples and see how it goes.
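For the augmented copies, something along these lines could work; a sketch, where minority_imgs is a hypothetical array of shape (n, 32, 32, 3) holding the under-represented class:
import numpy as np
from keras.preprocessing.image import ImageDataGenerator

# random shifts/rotations/flips to synthesize extra minority-class samples
datagen = ImageDataGenerator(rotation_range=15, width_shift_range=0.1,
                             height_shift_range=0.1, horizontal_flip=True)
flow = datagen.flow(minority_imgs, batch_size=len(minority_imgs), shuffle=False)
# nine augmented copies of every minority sample
augmented = np.concatenate([next(flow) for _ in range(9)], axis=0)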

How to use custom metrics in Keras model while using Grid Search CV?

I want to use R2 (coefficient of determination) as the metric in my Keras model, and I have already defined a function for it (coeff_determination). As a metric, this function works well without grid search, but with GridSearchCV it gives an error like "The model is not configured to compute the accuracy. You should pass metrics=["accuracy"] to the model.compile() method". The code is given below.
def create_model():
    # CNN Architecture - Model 7
    model = Sequential()
    model.add(Convolution1D(filters=10, kernel_size=12, activation="relu", kernel_initializer="glorot_uniform", input_shape=(X_train.shape[1],1)))
    model.add(MaxPooling1D(pool_size=4, strides=2))
    model.add(BatchNormalization())
    model.add(Convolution1D(filters=16, kernel_size=12, activation='relu', kernel_initializer="glorot_uniform"))
    model.add(MaxPooling1D(pool_size=3, strides=2))
    model.add(BatchNormalization())
    model.add(Convolution1D(filters=22, kernel_size=12, activation='relu', kernel_initializer="glorot_uniform"))
    model.add(MaxPooling1D(pool_size=3, strides=2))
    model.add(BatchNormalization())
    model.add(Convolution1D(filters=28, kernel_size=12, activation='relu', kernel_initializer="glorot_uniform"))
    model.add(MaxPooling1D(pool_size=4, strides=2))
    model.add(BatchNormalization())
    model.add(Convolution1D(filters=34, kernel_size=12, activation='relu', kernel_initializer="glorot_uniform"))
    model.add(MaxPooling1D(pool_size=3, strides=2))
    model.add(BatchNormalization())
    model.add(Convolution1D(filters=40, kernel_size=12, activation='relu', kernel_initializer="glorot_uniform"))
    model.add(MaxPooling1D(pool_size=3, strides=2))
    model.add(BatchNormalization())
    model.add(Flatten())
    #model.add(Dropout(0.35))
    model.add(Dense(130, activation='relu'))
    #model.add(Dropout(0.35))
    model.add(Dense(130, activation='relu'))
    model.add(Dense(1, activation='linear'))
    history = History()
    model.compile(loss='mean_squared_error', optimizer=Adam(lr=0.0001), metrics=[coeff_determination])
    #model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=400, batch_size=30, callbacks=[history])
    return model

def coeff_determination(y_true, y_pred):
    SS_res = K.sum(K.square(y_true - y_pred))
    SS_tot = K.sum(K.square(y_true - K.mean(y_true)))
    return (1 - SS_res / (SS_tot + K.epsilon()))
# to reproduce the same results next time
seed = 7
np.random.seed(seed)
# Creating Keras model with Scikit learn wrap-up
model = KerasClassifier(build_fn=create_model, verbose=0)
# define the grid search parameters
batch_size = [20,30,40,80]
epochs = [100,200,300,400]
# Using make scorer to convert metric r_2 to a scorer
my_scorer = make_scorer(r2_score, greater_is_better=True)
# passing dictionaries of parameters to the GridSearchCV
param_grid = dict(batch_size=batch_size, epochs=epochs)
grid = GridSearchCV(estimator=model, scoring=my_scorer, param_grid=param_grid, n_jobs=1, cv=3)
grid_result = grid.fit(X_train, y_train)
# summarizing the results
print("Best: %f using %s" % (grid_result.best_score_, grid_result.best_params_))
means = grid_result.cv_results_['mean_test_score']
stds = grid_result.cv_results_['std_test_score']
params = grid_result.cv_results_['params']
for mean, stdev, param in zip(means, stds, params):
    print("%f (%f) with: %r" % (mean, stdev, param))
I think you need to feed your custom scoring function as the scoring param to GridSearchCV; otherwise it falls back to the estimator's default score method, which here is accuracy.
From Documentation:
scoring: str, callable, list/tuple or dict, default=None
A single str (see The scoring parameter: defining model evaluation rules) or a callable (see Defining your scoring strategy from metric functions) to evaluate the predictions on the test set.
For evaluating multiple metrics, either give a list of (unique) strings or a dict with names as keys and callables as values.
NOTE that when using custom scorers, each scorer should return a single value. Metric functions returning a list/array of values can be wrapped into multiple scorers that return one value each.
See Specifying multiple metrics for evaluation for an example.
If None, the estimator’s score method is used.
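For completeness, a sketch of the full wiring; r2_numpy here is illustrative, a plain-numpy counterpart of coeff_determination for scoring the held-out folds:
from sklearn.metrics import make_scorer
from sklearn.model_selection import GridSearchCV

def r2_numpy(y_true, y_pred):
    # same formula as coeff_determination, on numpy arrays
    ss_res = ((y_true - y_pred) ** 2).sum()
    ss_tot = ((y_true - y_true.mean()) ** 2).sum()
    return 1 - ss_res / (ss_tot + 1e-7)

my_scorer = make_scorer(r2_numpy, greater_is_better=True)
grid = GridSearchCV(estimator=model, scoring=my_scorer,
                    param_grid=param_grid, n_jobs=1, cv=3)
Also, since the network ends in a single linear unit trained with mean squared error, the KerasRegressor wrapper is likely a better fit than KerasClassifier, whose default score method is exactly what raises the accuracy error.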

I have this error "input 0 is incompatible with layer lstm expected ndim=3 found ndim=5"

I am very new to this field. I searched on the internet but could not find a solution. I am hoping for help from people who work in this area.
My model
def load_VGG16_model():
    base_model = VGG16(weights='imagenet', include_top=False, input_shape=(256,256,3))
    print("Model loaded..!")
    return base_model
Summary of the model
load_VGG16_model().summary()
Adding Layers
def action_model(shape=(30, 256, 256, 3), nbout=len(classes)):
    convnet = load_VGG16_model()
    model = Sequential()
    model.add(TimeDistributed(convnet, input_shape=shape))
    model.add(LSTM(30, return_sequences=True, input_shape=(30, 512)))  # the error shows this line.
    top_model.add(Dense(4096, activation='relu', W_regularizer=l2(0.1)))
    top_model.add(Dropout(0.5))
    top_model.add(Dense(4096, activation='relu', W_regularizer=l2(0.1)))
    top_model.add(Dropout(0.5))
    model.add(Dense(nbout, activation='softmax'))
    return model
model.add(LSTM(30,return_sequences=True,input_shape=(30,512))) ==> the error shows this line.
Your problem is similar to this one: Building CNN + LSTM in Keras for a regression problem. What are proper shapes?
Using a Reshape layer before the LSTM should work fine for you (note that the top_model lines from your code also need to become model.add calls):
def action_model(shape=(256, 256, 3), nbout=len(classes)):
    convnet = load_VGG16_model()
    model = Sequential()
    model.add(convnet)
    model.add(tf.keras.layers.Reshape((8*8, 512)))  # shape comes from the last output of convnet
    model.add(LSTM(30, return_sequences=True, input_shape=(8*8, 512)))
    model.add(Dense(4096, activation='relu', kernel_regularizer=l2(0.1)))  # kernel_regularizer is the Keras 2 name for W_regularizer
    model.add(Dropout(0.5))
    model.add(Dense(4096, activation='relu', kernel_regularizer=l2(0.1)))
    model.add(Dropout(0.5))
    model.add(Dense(nbout, activation='softmax'))
    return model
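The 8*8 comes from VGG16 downsampling 256x256 inputs by a factor of 32, so the encoder output is (8, 8, 512). Note that this variant feeds one image's spatial grid to the LSTM as the sequence. To keep the original 30-frame clips as the time axis instead, one option (a sketch, untested against your data) is to pool each frame's feature map inside TimeDistributed:
def action_model(shape=(30, 256, 256, 3), nbout=len(classes)):
    convnet = load_VGG16_model()
    model = Sequential()
    model.add(TimeDistributed(convnet, input_shape=shape))  # -> (30, 8, 8, 512)
    model.add(TimeDistributed(GlobalAveragePooling2D()))    # -> (30, 512)
    model.add(LSTM(30))                                     # 30 units, last step only
    model.add(Dense(nbout, activation='softmax'))
    return model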

How to check the learning rate with train_on_batch [Keras]

I am using Keras on Python 2.
Does anyone know how to check and modify the learning rate for the Adam optimizer? Here is my neural network, with my own optimizer defined. When training on batches with model.train_on_batch(...) I have no way to track the learning rate. Thanks for your help.
def CNN_model():
    # Create model
    model = Sequential()
    model.add(Conv2D(12, (5, 5), input_shape=(1, 256, 256), activation='elu'))
    model.add(MaxPooling2D(pool_size=(3, 3)))
    model.add(Conv2D(12, (5, 5), activation='elu'))
    model.add(MaxPooling2D(pool_size=(4, 4)))
    model.add(Conv2D(12, (3, 3), activation='elu'))
    model.add(MaxPooling2D(pool_size=(3, 3)))
    model.add(Flatten())
    model.add(Dropout(0.3))
    model.add(Dense(128, activation='elu'))
    model.add(Dropout(0.3))
    model.add(Dense(32, activation='elu'))
    model.add(Dense(2, activation='softmax'))
    # Compile model
    my_optimizer = Adam(lr=0.001, decay=0.05)
    model.compile(loss='categorical_crossentropy', optimizer=my_optimizer, metrics=['accuracy'])
    return model
You can do it in several ways. The simplest thing in my mind is to do it through callbacks
from keras.callbacks import Callback
from keras import backend as K

class showLR(Callback):
    def on_epoch_begin(self, epoch, logs=None):
        lr = float(K.get_value(self.model.optimizer.lr))
        print " epoch={:02d}, lr={:.5f}".format(epoch, lr)  # Python 2 print, per the question
You can also use the ReduceLROnPlateau callback: add it to your callbacks list, then pass that list to your training call.
from keras.callbacks import ModelCheckpoint, ReduceLROnPlateau
callbacks = [ReduceLROnPlateau(monitor='val_acc',
                               patience=5,
                               verbose=1,
                               factor=0.5,
                               min_lr=0.00001)]
model = CNN_model()
model.fit(x_train, y_train, batch_size=batch_size,
          epochs=epochs,
          validation_data=(x_valid, y_valid),
          callbacks=callbacks)
