Dimensionality error while using CNN conv1d model - machine-learning

I have a dataset where x_train has shape (34650, 10, 1), y_train has shape (34650,), x_test has shape (17067, 10, 1), and y_test has shape (17067,).
I am building a simple CNN model:
input_layer = Input(shape=(10, 1))
conv2 = Conv1D(filters=64,
               kernel_size=3,
               strides=1,
               activation='relu')(input_layer)
pool1 = MaxPooling1D(pool_size=1)(conv2)
drop1 = Dropout(0.5)(pool1)
pool2 = MaxPooling1D(pool_size=1)(drop1)
conv3 = Conv1D(filters=64,
               kernel_size=3,
               strides=1,
               activation='relu')(pool2)
drop2 = Dropout(0.5)(conv3)
conv4 = Conv1D(filters=64,
               kernel_size=3,
               strides=1,
               activation='relu')(drop2)
pool3 = MaxPooling1D(pool_size=1)(conv4)
conv5 = Conv1D(filters=64,
               kernel_size=3,
               strides=1,
               activation='relu')(pool3)
output_layer = Dense(1, activation='sigmoid')(conv5)
model_2 = Model(inputs=input_layer, outputs=output_layer)
But when I try to fit the model
model_2.compile(loss='mse', optimizer='adam')
model_2 = model_2.fit(x_train, y_train,
                      batch_size=128,
                      epochs=2,
                      verbose=1,
                      validation_data=(x_test, y_test))
I am getting this error
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-177-aee9b3241a20> in <module>()
4 epochs=2,
5 verbose=1,
----> 6 validation_data=(x_test, y_test))
/usr/local/lib/python3.6/dist-packages/keras/engine/training_utils.py in standardize_input_data(data, names, shapes, check_batch_axis, exception_prefix)
133 ': expected ' + names[i] + ' to have ' +
134 str(len(shape)) + ' dimensions, but got array '
--> 135 'with shape ' + str(data_shape))
136 if not check_batch_axis:
137 data_shape = data_shape[1:]
ValueError: Error when checking target: expected dense_14 to have 3 dimensions, but got array with shape (34650, 1)
The shapes of x_train and x_test are already 3-dimensional, so why is it showing this error?

This is because your input is 3D but your target is 2D. Each Conv1D/MaxPooling1D layer outputs a 3D tensor of shape (batch, steps, filters), and a Dense layer applied to a 3D tensor also produces a 3D output, so the final layer expects 3D targets while y_train only provides (34650, 1). Nothing inside your network collapses the output from 3D to 2D. To do this you can use global pooling or flattening. Below is an example:
import numpy as np
from keras.layers import Input, Conv1D, MaxPooling1D, Dropout, GlobalMaxPool1D, Dense
from keras.models import Model

n_sample = 100
X = np.random.uniform(0, 1, (n_sample, 10, 1))
y = np.random.randint(0, 2, n_sample)

input_layer = Input(shape=(10, 1))
conv2 = Conv1D(filters=64,
               kernel_size=3,
               strides=1,
               activation='relu')(input_layer)
pool1 = MaxPooling1D(pool_size=1)(conv2)
drop1 = Dropout(0.5)(pool1)
pool2 = MaxPooling1D(pool_size=1)(drop1)
conv3 = Conv1D(filters=64,
               kernel_size=3,
               strides=1,
               activation='relu')(pool2)
drop2 = Dropout(0.5)(conv3)
conv4 = Conv1D(filters=64,
               kernel_size=3,
               strides=1,
               activation='relu')(drop2)
pool3 = MaxPooling1D(pool_size=1)(conv4)
conv5 = Conv1D(filters=64,
               kernel_size=3,
               strides=1,
               activation='relu')(pool3)
x = GlobalMaxPool1D()(conv5)  # =====> from 3D to 2D (GlobalAveragePooling1D or Flatten also work)
output_layer = Dense(1, activation='sigmoid')(x)

model_2 = Model(inputs=input_layer, outputs=output_layer)
model_2.compile('adam', 'binary_crossentropy')
model_2.fit(X, y, epochs=3)
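Note that the example also swaps the loss: with a sigmoid output and 0/1 targets, binary_crossentropy is a better match than the original mse. If you want to see exactly where the tensor drops from 3D to 2D, a quick shape dump (a minimal check, not part of the original code) makes it obvious:
for layer in model_2.layers:
    print(layer.name, layer.output_shape)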

Related

Working with customised Alexnet for face recognition

I am currently trying to fit my customised CNN model (AlexNet) with an input shape of (224, 224, 1), since my images are 224 x 224 and black and white.
This is where I load the data and get the dataset sizes: the number of samples, the number of features, the height and width of the images, and finally the number of classes:
lfw_people = fetch_lfw_people(min_faces_per_person = 70, resize = 2.39)
n_samples, h, w = lfw_people.images.shape
X = lfw_people.data
n_features = X.shape[1]
y = lfw_people.target
target_names = lfw_people.target_names
n_classes = target_names.shape[0]
After that, I split the data using train_test_split, reshaped the flat vectors back to images, and reduced the height to the same length as the width, which makes each image 224 x 224 (a sketch of that step follows the class counts below). The counts of y_train and y_test are:
y_train Count: Counter({3: 384, 1: 176, 6: 108, 2: 94, 4: 84, 0: 64, 5: 56})
y_test Count: Counter({3: 146, 1: 60, 6: 36, 2: 27, 4: 25, 5: 15, 0: 13})
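Since the question omits the code for this step, here is a minimal sketch of what the split-and-reshape might look like (my reconstruction; it assumes h >= 224 and w == 224 after the resize, and the test_size value is illustrative):
import numpy as np
from sklearn.model_selection import train_test_split

X_img = X.reshape(n_samples, h, w)   # flat feature vectors back to 2-D images
X_img = X_img[:, :224, :224]         # crop the height down to the 224-pixel width
X_img = X_img[..., np.newaxis]       # add the grayscale channel -> (n, 224, 224, 1)
X_train, X_test, y_train, y_test = train_test_split(X_img, y, test_size=0.25, random_state=42)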
Then I convert both y_train and y_test to categorical with 7 classes:
y_train = to_categorical(
    y_train,
    num_classes = len(set(y)),
    dtype = 'uint8'
)
y_test = to_categorical(
    y_test,
    num_classes = len(set(y)),
    dtype = 'uint8'
)
Here is the code for my model, which has 8 layers in total:
model = Sequential()
# 1st Convolutional Layer
model.add(Conv2D(filters = 96, input_shape = (224, 224, 1),
                 kernel_size = (11, 11), strides = (4, 4),
                 padding = 'valid'))
model.add(Activation('relu'))
# Max-Pooling
model.add(MaxPooling2D(pool_size = (2, 2),
                       strides = (2, 2), padding = 'valid'))
# Batch Normalisation
model.add(BatchNormalization())
# 2nd Convolutional Layer
model.add(Conv2D(filters = 256, kernel_size = (11, 11),
                 strides = (1, 1), padding = 'valid'))
model.add(Activation('relu'))
# Max-Pooling
model.add(Activation('relu'))
# Batch Normalisation
model.add(BatchNormalization())
# 3rd Convolutional Layer
model.add(Conv2D(filters = 384, kernel_size = (3, 3),
                 strides = (1, 1), padding = 'valid'))
model.add(Activation('relu'))
# Batch Normalisation
model.add(BatchNormalization())
# 4th Convolutional Layer
model.add(Conv2D(filters = 384, kernel_size = (3, 3),
                 strides = (1, 1), padding = 'valid'))
model.add(Activation('relu'))
# Batch Normalisation
model.add(BatchNormalization())
# 5th Convolutional Layer
model.add(Conv2D(filters = 256, kernel_size = (3, 3),
                 strides = (1, 1), padding = 'valid'))
model.add(Activation('relu'))
# Max-Pooling
model.add(MaxPooling2D(pool_size = (2, 2), strides = (2, 2),
                       padding = 'valid'))
# Batch Normalisation
model.add(BatchNormalization())
# Flattening
model.add(Flatten())
# 1st Dense Layer
model.add(Dense(4096, input_shape = (224*224*1, )))
model.add(Activation('relu'))
# Add Dropout to prevent overfitting
model.add(Dropout(0.4))
# Batch Normalisation
model.add(BatchNormalization())
# 2nd Dense Layer
model.add(Dense(4096))
model.add(Activation('relu'))
# Add Dropout
model.add(Dropout(0.4))
# Batch Normalisation
model.add(BatchNormalization())
# Output softmax layer
model.add(Dense(7))
model.add(Activation('softmax'))
This is my data augmentation setup:
# Data Augmentation
datagen = ImageDataGenerator(
    featurewise_center = True,             # set input mean to 0 over the dataset
    samplewise_center = True,              # set each sample mean to 0
    featurewise_std_normalization = True,  # divide inputs by the std of the dataset
    samplewise_std_normalization = True,   # divide each input by its own std
    zca_whitening = False,                 # dimension reduction
    rotation_range = 20,                   # randomly rotate images by up to 20 degrees
    zoom_range = 0.1,                      # randomly zoom images by up to 10%
    width_shift_range = 0.2,               # randomly shift images horizontally by up to 20%
    height_shift_range = 0.2,              # randomly shift images vertically by up to 20%
    horizontal_flip = True,                # randomly flip images horizontally
    # vertical_flip = 0.2                  # random vertical flips
    vertical_flip = 0.8
)
history = model.fit(datagen.flow(X_train, y_train, batch_size = batch_size),
                    epochs = 100, validation_data = (X_test, y_test),
                    steps_per_epoch = X_train.shape[0] // batch_size,
                    verbose = 0)
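One caveat worth noting (my observation, not part of the original question): featurewise_center and featurewise_std_normalization only take effect after calling datagen.fit on the training data, and the validation set passed straight into fit() bypasses the generator, so it never receives the same preprocessing. A hedged sketch of the missing pieces:
datagen.fit(X_train)  # compute the dataset mean/std used by the featurewise options
# standardize() applies the normalization in place, so pass a copy of the validation images
X_test_std = datagen.standardize(X_test.astype('float32').copy())
history = model.fit(datagen.flow(X_train, y_train, batch_size = batch_size),
                    epochs = 100, validation_data = (X_test_std, y_test),
                    steps_per_epoch = X_train.shape[0] // batch_size,
                    verbose = 0)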
After running the model, I got a validation accuracy of 0.45, and my confusion matrix below shows that it keeps predicting class 3:
  |   0    1    2    3    4    5    6
--+----------------------------------
0 |   0    0    0   13    0    0    0
1 |   0    0    0   60    0    0    0
2 |   0    0    0   27    0    0    0
3 |   0    0    0  146    0    0    0
4 |   0    0    0   25    0    0    0
5 |   0    0    0   15    0    0    0
6 |   0    0    0   36    0    0    0
So how can I make it predict classes other than 3?
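The class counts above are heavily imbalanced (384 training samples for class 3 versus 56 for class 5), which is a common reason for a model collapsing onto the majority class. One standard mitigation, sketched here as a possible direction rather than a guaranteed fix, is to pass per-class weights to fit(); y_train_int is a hypothetical name for the integer training labels before to_categorical:
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

# y_train_int: integer training labels, before one-hot encoding (hypothetical name)
weights = compute_class_weight('balanced', classes=np.unique(y_train_int), y=y_train_int)
class_weight = dict(enumerate(weights))
history = model.fit(datagen.flow(X_train, y_train, batch_size = batch_size),
                    epochs = 100, validation_data = (X_test, y_test),
                    class_weight = class_weight,
                    steps_per_epoch = X_train.shape[0] // batch_size,
                    verbose = 0)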

Combination of CNN and LSTM for time series data

I'm trying to run a combination of a CNN (Convolutional Neural Network) and an LSTM (Long Short-Term Memory) network, and I couldn't find a reshaping of the data that fits both. I thought an LSTM needs [samples, timesteps, features], but that doesn't work here as input.
I'm receiving an error:
ValueError: Negative dimension size caused by subtracting 3 from 1 for '{{node conv1d/conv1d/Conv2D}} = Conv2D[T=DT_FLOAT, data_format="NHWC", dilations=[1, 1, 1, 1], explicit_paddings=[], padding="VALID", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true](conv1d/conv1d/Reshape, conv1d/conv1d/ExpandDims_1)' with input shapes: [?,1,1,24], [1,3,24,64].
The data is taken from:
https://www.kaggle.com/berkeleyearth/climate-change-earth-surface-temperature-data
It has the following shape:

date          LandAverageTemperature
1750-01-01    1.2
...
The full code is:
import pandas as pd
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (Conv1D, MaxPooling1D, Flatten, RepeatVector,
                                     LSTM, TimeDistributed, Dense)

def preprocessing(data, n_in=1, n_out=1):
    from sklearn.model_selection import train_test_split

    def series_to_supervised(df, n_in=1, n_out=1, dropnan=True):
        cols = list()
        # input sequence (t-n, ... t-1)
        for i in range(n_in, 0, -1):
            cols.append(df.shift(i))
        # forecast sequence (t, t+1, ... t+n)
        for i in range(0, n_out):
            cols.append(df.shift(-i))
        # put it all together
        agg = pd.concat(cols, axis=1)
        # drop rows with NaN values
        if dropnan:
            agg.dropna(inplace=True)
        return agg.values

    land_temp = pd.DataFrame(data['LandAverageTemperature'].values)
    ma_vals = data['LandAverageTemperature'].expanding(min_periods=12).mean()
    ma_vals_inter = ma_vals.interpolate(limit_direction='both')
    df = series_to_supervised(ma_vals_inter, n_in=n_in, n_out=n_out, dropnan=True)
    df = pd.DataFrame(df)
    X, y = df.iloc[:, :-n_out], df.iloc[:, -n_out:]
    percent = 0.8
    if n_out == 1:
        y = y.iloc[:, 0]
    lim = int(percent * X.shape[0])
    X_train, X_test, y_train, y_test = X[:lim], X[lim:], y[:lim], y[lim:]  # train_test_split(X, y, test_size=0.4, random_state=0)
    return X_train, X_test, y_train, y_test

def lstm_cnn2(X_train, y_train, config, n_in, n_out=1, batch_size=1, epochs=1000, verbose=0, n_features=1):
    input_y = y_train.values.reshape(y_train.shape[0], 1)
    n_timesteps, n_features, n_outputs = X_train.shape[0], X_train.shape[1], input_y.shape[1]
    # reshape output into [samples, timesteps, features]
    input_x = X_train.values.reshape((X_train.shape[0], 1, n_features))
    # define model
    model = Sequential()
    model.add(Conv1D(64, 3, activation='relu', input_shape=(n_timesteps, 1, n_features)))
    model.add(Conv1D(64, 3, activation='relu'))
    model.add(MaxPooling1D())
    model.add(Flatten())
    model.add(RepeatVector(n_outputs))
    model.add(LSTM(200, activation='relu', return_sequences=True))
    model.add(TimeDistributed(Dense(100, activation='relu')))
    model.add(TimeDistributed(Dense(1)))
    model.compile(loss='mse', optimizer='adam')
    # fit network
    model.fit(input_x, input_y, epochs=epochs, batch_size=batch_size, verbose=verbose)
    return model

if __name__ == '__main__':
    file_location = './GlobalTemperatures.csv'
    data = pd.read_csv(file_location)
    data['dt'] = pd.to_datetime(data['dt'])
    n_out = 1
    n_in = 12 * 2
    X_train, X_test, y_train, y_test = preprocessing(data, n_in, n_out)
    config = 128, 64, 32, 3 * 48, 48, 24, 100, 20  # lstm model configuration
    verbose, epochs, batch_size = 0, 1, 16
    model_lstm = lstm_cnn2(X_train, y_train, config, n_in, batch_size=batch_size)
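For reference, the negative-dimension error follows from the reshape inside lstm_cnn2: input_x has shape (samples, 1, n_features), so Conv1D sees a single timestep and a kernel_size of 3 cannot fit (note also that n_timesteps is set to X_train.shape[0], the sample count, which should not appear in input_shape at all). A minimal sketch of a reshape that instead treats the n_in lag values as timesteps with one feature each, matching the [samples, timesteps, features] convention mentioned above (a sketch under those assumptions, not a tested fix):
# lags as timesteps: (samples, n_in, 1) instead of (samples, 1, n_in)
input_x = X_train.values.reshape((X_train.shape[0], X_train.shape[1], 1))
input_y = y_train.values.reshape((y_train.shape[0], 1, 1))  # targets as [samples, timesteps, features] too

model = Sequential()
model.add(Conv1D(64, 3, activation='relu', input_shape=(X_train.shape[1], 1)))
model.add(Conv1D(64, 3, activation='relu'))
model.add(MaxPooling1D())
model.add(Flatten())
model.add(RepeatVector(1))
model.add(LSTM(200, activation='relu', return_sequences=True))
model.add(TimeDistributed(Dense(100, activation='relu')))
model.add(TimeDistributed(Dense(1)))
model.compile(loss='mse', optimizer='adam')
model.fit(input_x, input_y, epochs=1, batch_size=16, verbose=0)
With n_in = 24 timesteps, two kernel-3 convolutions still leave 20 steps, so every dimension stays positive.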

Mel Spectrogram feature extraction to CNN

This question is in line with the question posted here, but with a slight nuance for the CNN. Using the feature extraction definition:
max_pad_len = 174
n_mels = 128

def extract_features(file_name):
    try:
        audio, sample_rate = librosa.core.load(file_name, res_type='kaiser_fast')
        mely = librosa.feature.melspectrogram(y=audio, sr=sample_rate, n_mels=n_mels)
        #pad_width = max_pad_len - mely.shape[1]
        #mely = np.pad(mely, pad_width=((0, 0), (0, pad_width)), mode='constant')
    except Exception as e:
        print("Error encountered while parsing file: ", file_name)
        return None
    return mely
How do you go about getting the correct num_rows, num_columns, and num_channels to use for the train and test data?
And in constructing the CNN model, how do you determine the correct input shape?
model = Sequential()
model.add(Conv2D(filters=16, kernel_size=2, input_shape=(num_rows, num_columns, num_channels), activation='relu'))
model.add(MaxPooling2D(pool_size=2))
model.add(Dropout(0.2))
I don't know if it is exactly your problem, but I also had to use a mel spectrogram as input to a CNN.
Short answer:
input_shape = (x_train.shape[1], x_train.shape[2], 1)
x_train = x_train.reshape(x_train.shape[0], x_train.shape[1], x_train.shape[2], 1)
or
x_train = x_train.reshape(x_train.shape[0], x_train.shape[1], x_train.shape[2], 1)
input_shape = x_train.shape[1:]
Long answer:
In my case I have a DataFrame with speaker_id and mel spectrograms (previously calculated with librosa).
Keras CNN models expect images with a width, a height, and a number of color channels (1 for grayscale, 3 for RGB).
The mel spectrograms given by librosa are image-like arrays with only a width and a height, so you need a reshape to add the channel dimension.
Define the input and expected output
# It looks stupid, but this way I could convert the pandas.Series to a np.array
x = np.array(list(df.mel))
y = df.speaker_id
print('X shape:', x.shape)
X shape: (2204, 128, 24)
2204 mel spectrograms, each 128x24.
Split in train-test
x_train, x_test, y_train, y_test = train_test_split(x, y)
print(f'Train: {len(x_train)}', f'Test: {len(x_test)}')
Train: 1653 Test: 551
Reshape to add the extra dimension
x_train = x_train.reshape(x_train.shape[0], x_train.shape[1], x_train.shape[2], 1)
x_test = x_test.reshape(x_test.shape[0], x_test.shape[1], x_test.shape[2], 1)
print('Shapes:', x_train.shape, x_test.shape)
Shapes: (1653, 128, 24, 1) (551, 128, 24, 1)
Set input_shape
# The input shape is independent of the amount of inputs
input_shape = x_train.shape[1:]
print('Input shape:', input_shape)
Input shape: (128, 24, 1)
Put it into the model
model = Sequential()
model.add(Conv2D(32, (3, 3), activation='relu', input_shape=input_shape))
model.add(MaxPooling2D())
# More layers...
model.compile(optimizer='adam',loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),metrics=['accuracy'])
Run model
model.fit(x_train, y_train, epochs=20, validation_data=(x_test, y_test))
Hope this is helpful.
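One follow-up on the reshape above: it assumes every spectrogram has the same time dimension (24 frames in this case). If your clips vary in length, the padding that is commented out in the question's extract_features becomes necessary so that the arrays can be stacked; re-enabling it would look like this (assuming no clip exceeds max_pad_len frames):
pad_width = max_pad_len - mely.shape[1]
mely = np.pad(mely, pad_width=((0, 0), (0, pad_width)), mode='constant')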

CNN incompatible

My data has the following shapes:
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, random_state=0)
print(X_train.shape, X_test.shape, Y_train.shape, Y_test.shape)
(942, 32, 32, 1) (236, 32, 32, 1) (942, 3, 3) (236, 3, 3)
And whenever I try to run my CNN I get the following error:
from tensorflow.keras import layers
from tensorflow.keras import Model
img_input = layers.Input(shape=(32, 32, 1))
x = layers.Conv2D(16, (3,3), activation='relu', strides = 1, padding = 'same')(img_input)
x = layers.Conv2D(32, (3,3), activation='relu', strides = 2)(x)
x = layers.Conv2D(128, (3,3), activation='relu', strides = 2)(x)
x = layers.MaxPool2D(pool_size=2)(x)
x = layers.Conv2D(3, 3, activation='linear', strides = 2)(x)
output = layers.Flatten()(x)
model = Model(img_input, output)
model.summary()
model.compile(loss='mean_squared_error',optimizer= 'adam', metrics=['mse'])
history = model.fit(X_train,Y_train,validation_data=(X_test, Y_test), epochs = 100,verbose=1)
Error:
InvalidArgumentError: Incompatible shapes: [32,3] vs. [32,3,3]
[[node BroadcastGradientArgs_2 (defined at /usr/local/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py:1751) ]] [Op:__inference_distributed_function_7567]
Function call stack:
distributed_function
What am I missing here?
You don't handle the dimensionality inside your network properly. First, expand the dimensions of your y to get it into the format (n_sample, 3, 3, 1). Then adjust the network: I removed the flatten and the max pooling and adjusted the last conv layer so that its output matches the target:
import numpy as np
from tensorflow.keras.layers import Input, Conv2D
from tensorflow.keras.models import Model

# create dummy data
n_sample = 10
X = np.random.uniform(0, 1, (n_sample, 32, 32, 1))
y = np.random.uniform(0, 1, (n_sample, 3, 3))

# expand y dim
y = y[..., np.newaxis]
print(X.shape, y.shape)

img_input = Input(shape=(32, 32, 1))
x = Conv2D(16, (3,3), activation='relu', strides = 1, padding = 'same')(img_input)
x = Conv2D(32, (3,3), activation='relu', strides = 2)(x)
x = Conv2D(128, (3,3), activation='relu', strides = 2)(x)
x = Conv2D(1, (3,3), activation='linear', strides = 2)(x)

model = Model(img_input, x)
model.summary()
model.compile(loss='mean_squared_error', optimizer= 'adam', metrics=['mse'])
model.fit(X, y, epochs=3)
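As a sanity check on the fix: with valid padding, the three strided convolutions shrink the spatial size 32 → 15 → 7 → 3, so the last layer outputs (batch, 3, 3, 1), which matches the expanded targets exactly and resolves the incompatible-shapes error.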

ValueError: Found input variables with inconsistent numbers of samples: [4162, 3]

I am facing an error when plotting my confusion matrix. I pass the test labels and my predicted labels to the confusion matrix function, but it raises a ValueError complaining about an inconsistent number of samples.
Shape of My data is below.
Training Data Shape (4162, 224, 224, 3)
Training Data Labels Shape (4162, 5)
Testing Data Shape (3921, 224, 224, 3)
Testing Data Labels Shape (3921, 5)
The predicted label is a bit ugly because I only ran 2 epochs; I just wanted to plot the confusion matrix first, so that's why.
predictingimage = "D:/compCarsThesisData/data/image/78/3/2010/0ba8d018cdc994.jpg" #67/1698/2010/6805eb92ac6c70.jpg"
predictImageRead = mpg.imread(predictingimage)
resizingImage = cv2.cv2.resize(predictImageRead,(224,224))
reshapedFinalImage = np.expand_dims(resizingImage, axis=0)
npimage = np.asarray(reshapedFinalImage)
m = model.predict(npimage)
print(m)
[array([[0.02502811, 0.01959323, 0.6556284 , 0.26472655, 0.03502375]],
dtype=float32), array([[5.8234303e-04, 3.1917400e-04, 9.4957882e-01, 1.8873921e-02,
3.0645736e-02]], dtype=float32), array([[0.02581117, 0.04752538, 0.81816435, 0.04812173, 0.06037736]],
dtype=float32)]
cm = confusion_matrix(train_labels_Encode,m)
plt.imshow(cm)
plt.show()
ERROR
Traceback (most recent call last):
File "d:/ThesisWork/seriouswork/Inception_SVM_CompCarsGoogleNetArchitecture.py", line 299, in <module>
cm = confusion_matrix(train_labels_hotEncode,n)
File "C:\Users\zeele\Miniconda3\lib\site-packages\sklearn\metrics\classification.py", line 253, in confusion_matrix
y_type, y_true, y_pred = _check_targets(y_true, y_pred)
File "C:\Users\zeele\Miniconda3\lib\site-packages\sklearn\metrics\classification.py", line 71, in _check_targets
check_consistent_length(y_true, y_pred)
File "C:\Users\zeele\Miniconda3\lib\site-packages\sklearn\utils\validation.py", line 235, in check_consistent_length
" samples: %r" % [int(l) for l in lengths])
ValueError: Found input variables with inconsistent numbers of samples: [4162, 3]
Classifier Code:
X_train = np.load('D:/Inception_preprocessed_data_Labels_2004/Top5/TrainingData_Top5.npy')#('D:/ThesisWork/S_224_Training_data.npy')#training_images
X_test = np.load('D:/Inception_preprocessed_data_Labels_2004/Top5/TrainingLabels_Top5.npy')#('D:/ThesisWork/S_224_Training_labels.npy')#training_labels
y_train = np.load('D:/Inception_preprocessed_data_Labels_2004/Top5/TestingData_Top5.npy')#('D:/ThesisWork/S_224_Testing_data.npy')#testing_images
y_test = np.load('D:/Inception_preprocessed_data_Labels_2004/Top5/TestingLabels_Top5.npy')#('D:/ThesisWork/S_224_Testing_labels.npy')#testing_labels
print(X_test)
le = preprocessing.LabelEncoder()
le.fit(X_test)
transform_trainLabels = le.transform(X_test)
print(transform_trainLabels)
print(le.inverse_transform(transform_trainLabels))
train_labels_hotEncode = np_utils.to_categorical(transform_trainLabels,len(set(transform_trainLabels)))
shuffle(X_train)
shuffle(train_labels_hotEncode)
le2 = preprocessing.LabelEncoder()
le2.fit(y_test)
transform_testLabels = le2.transform(y_test)
test_labels_hotEncode = np_utils.to_categorical(transform_testLabels,len(set(transform_testLabels)))
print(test_labels_hotEncode.shape)
shuffle(y_train)
shuffle(test_labels_hotEncode)
# print(train_labels_hotEncode[3000])
# exit()
# X_train = np.asarray(X_train / 255.0)
# y_train = np.asarray(y_train / 255.0)
# print("X_Training" ,X_train.shape, X_train)
# print("X_TEST", X_test.shape)
# print("Y_train", y_train.shape)
# print("y_test", y_test.shape)
# exit()
# plt.imshow(X_train[1])
# print(X_test)
# plt.imshow(y_train[1])
# print(y_test)
# plt.show()
print("Trainig Data Shape",X_train.shape)
print("Training Data Labels Shape",train_labels_hotEncode.shape)
print("Testing Data Shape", y_train.shape)
print("Testing Data Labels Shape", test_labels_hotEncode.shape)
# X_train = np.array(X_train).astype(np.float32)
# y_train = np.array(y_train).astype(np.float32)
def inception_module(image,
                     filters_1x1,
                     filters_3x3_reduce,
                     filter_3x3,
                     filters_5x5_reduce,
                     filters_5x5,
                     filters_pool_proj,
                     name=None):
    conv_1x1 = Conv2D(filters_1x1, (1,1), padding='same', activation='relu', kernel_initializer=kernel_init, bias_initializer=bias_init)(image)
    conv_3x3 = Conv2D(filters_3x3_reduce, (1,1), padding='same', activation='relu', kernel_initializer=kernel_init, bias_initializer=bias_init)(image)
    conv_3x3 = Conv2D(filter_3x3, (3,3), padding='same', activation='relu', kernel_initializer=kernel_init, bias_initializer=bias_init)(conv_3x3)
    conv_5x5 = Conv2D(filters_5x5_reduce, (1,1), padding='same', activation='relu', kernel_initializer=kernel_init, bias_initializer=bias_init)(image)
    conv_5x5 = Conv2D(filters_5x5, (3,3), padding='same', activation='relu', kernel_initializer=kernel_init, bias_initializer=bias_init)(conv_5x5)
    pool_proj = MaxPool2D((3,3), strides=(1,1), padding='same')(image)
    pool_proj = Conv2D(filters_pool_proj, (1,1), padding='same', activation='relu', kernel_initializer=kernel_init, bias_initializer=bias_init)(pool_proj)
    output = concatenate([conv_1x1, conv_3x3, conv_5x5, pool_proj], axis=3, name=name)
    return output
kernel_init = keras.initializers.glorot_uniform()
bias_init = keras.initializers.Constant(value=0.2)
# IMG_SIZE = 64
input_layer = Input(shape=(224,224,3))
image = Conv2D(64,(7,7),padding='same', strides=(2,2), activation='relu', name='conv_1_7x7/2', kernel_initializer=kernel_init, bias_initializer=bias_init)(input_layer)
image = MaxPool2D((3,3), padding='same', strides=(2,2), name='max_pool_1_3x3/2')(image)
image = Conv2D(64, (1,1), padding='same', strides=(1,1), activation='relu', name='conv_2a_3x3/1' )(image)
image = Conv2D(192, (3,3), padding='same', strides=(1,1), activation='relu', name='conv_2b_3x3/1')(image)
image = MaxPool2D((3,3), padding='same', strides=(2,2), name='max_pool_2_3x3/2')(image)
image = inception_module(image, filters_1x1=64, filters_3x3_reduce=96, filter_3x3=128,
                         filters_5x5_reduce=16, filters_5x5=32, filters_pool_proj=32,
                         name='inception_3a')
image = inception_module(image, filters_1x1=128, filters_3x3_reduce=128, filter_3x3=192,
                         filters_5x5_reduce=32, filters_5x5=96, filters_pool_proj=64,
                         name='inception_3b')
image = MaxPool2D((3,3), padding='same', strides=(2,2), name='max_pool_3_3x3/2')(image)
image = inception_module(image, filters_1x1=192, filters_3x3_reduce=96, filter_3x3=208,
                         filters_5x5_reduce=16, filters_5x5=48, filters_pool_proj=64,
                         name='inception_4a')
image1 = AveragePooling2D((5,5), strides=3)(image)
image1 = Conv2D(128, (1,1), padding='same', activation='relu')(image1)
image1 = Flatten()(image1)
image1 = Dense(1024, activation='relu')(image1)
image1 = Dropout(0.7)(image1)
image1 = Dense(5, activation='softmax', name='auxilliary_output_1')(image1)
image = inception_module(image, filters_1x1=160, filters_3x3_reduce=112, filter_3x3=224,
                         filters_5x5_reduce=24, filters_5x5=64, filters_pool_proj=64,
                         name='inception_4b')
image = inception_module(image, filters_1x1=128, filters_3x3_reduce=128, filter_3x3=256,
                         filters_5x5_reduce=24, filters_5x5=64, filters_pool_proj=64,
                         name='inception_4c')
image = inception_module(image, filters_1x1=112, filters_3x3_reduce=144, filter_3x3=288,
                         filters_5x5_reduce=32, filters_5x5=64, filters_pool_proj=64,
                         name='inception_4d')
image2 = AveragePooling2D((5,5), strides=3)(image)
image2 = Conv2D(128, (1,1), padding='same', activation='relu')(image2)
image2 = Flatten()(image2)
image2 = Dense(1024, activation='relu')(image2)
image2 = Dropout(0.7)(image2)  # Changed from 0.7
image2 = Dense(5, activation='softmax', name='auxilliary_output_2')(image2)
image = inception_module(image, filters_1x1=256, filters_3x3_reduce=160, filter_3x3=320,
                         filters_5x5_reduce=32, filters_5x5=128, filters_pool_proj=128,
                         name='inception_4e')
image = MaxPool2D((3,3), padding='same', strides=(2,2), name='max_pool_4_3x3/2')(image)
image = inception_module(image, filters_1x1=256, filters_3x3_reduce=160, filter_3x3=320,
                         filters_5x5_reduce=32, filters_5x5=128, filters_pool_proj=128,
                         name='inception_5a')
image = inception_module(image, filters_1x1=384, filters_3x3_reduce=192, filter_3x3=384,
                         filters_5x5_reduce=48, filters_5x5=128, filters_pool_proj=128,
                         name='inception_5b')
image = GlobalAveragePooling2D(name='avg_pool_5_3x3/1')(image)
image = Dropout(0.7)(image)
image = Dense(5, activation='softmax', name='output')(image)
model = Model(input_layer, [image,image1,image2], name='inception_v1')
model.summary()
epochs = 2
initial_lrate = 0.001 # Changed From 0.01
def decay(epoch, steps=100):
    initial_lrate = 0.01
    drop = 0.96
    epochs_drop = 8
    lrate = initial_lrate * math.pow(drop, math.floor((1+epoch)/epochs_drop))
    return lrate
sgd = keras.optimizers.SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
# nadam = keras.optimizers.Nadam(lr= 0.002, beta_1=0.9, beta_2=0.999, epsilon=None)
# keras
lr_sc = LearningRateScheduler(decay)
# rms = keras.optimizers.RMSprop(lr = initial_lrate, rho=0.9, epsilon=1e-08, decay=0.0)
# ad = keras.optimizers.adam(lr=initial_lrate)
model.compile(loss=['categorical_crossentropy', 'categorical_crossentropy','categorical_crossentropy'],loss_weights=[1,0.3,0.3], optimizer='sgd', metrics=['accuracy'])
# loss = 'categorical_crossentropy', 'categorical_crossentropy','categorical_crossentropy'
history = model.fit(X_train, [train_labels_hotEncode,train_labels_hotEncode,train_labels_hotEncode], validation_split=0.3,shuffle=True,epochs=epochs, batch_size= 32, callbacks=[lr_sc]) # batch size changed from 256 or 64 to 16(y_train,[y_test,y_test,y_test])
# validation_data=(y_train,[test_labels_hotEncode,test_labels_hotEncode,test_labels_hotEncode]), validation_data= (X_train, [train_labels_hotEncode,train_labels_hotEncode,train_labels_hotEncode]),
print(history.history.keys())
plt.plot(history.history['output_acc'])
plt.plot(history.history['val_output_acc'])
plt.title('Model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'],loc = 'upper left')
plt.show()
# predictingimage = "D:/compCarsThesisData/data/image/78/3/2010/0ba8d018cdc994.jpg" #67/1698/2010/6805eb92ac6c70.jpg"
predictImageRead = X_train
# resizingImage = cv2.cv2.resize(predictImageRead,(224,224))
# reshapedFinalImage = np.expand_dims(predictImageRead, axis=0)
# print(reshapedFinalImage.shape)
# npimage = np.array(reshapedFinalImage)
m = model.predict(predictImageRead)
print(m)
print(predictImageRead.shape)
print(train_labels_hotEncode)
# print(m.shape)
plt.imshow(predictImageRead[1])
plt.show()
# n = np.argmax(m,axis=-1)
# n = np.array(m)
print(confusion_matrix(X_test,m[0]))
cm = confusion_matrix(X_test,m[0])
plt.imshow(cm)
plt.show()
Please guide me through this.
Thanks!
If you want to compute a confusion matrix for your training data, you have to make your model predict all of your training examples, roughly like this:
m = model.predict(train_data)  # train_data should have the shape (4162, 224, 224, 3)
m will then cover all 4162 samples, and you can plot the confusion matrix like this:
cm = confusion_matrix(train_labels_Encode, m)
plt.imshow(cm)
plt.show()
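One wrinkle to add (based on the model definition in the question): this model was built with three outputs (the main softmax plus two auxiliary classifiers), so predict returns a list of three arrays, and confusion_matrix expects integer class labels rather than one-hot vectors or probability rows. A hedged sketch of the full conversion:
import numpy as np
from sklearn.metrics import confusion_matrix

preds = model.predict(X_train)                      # a list of 3 arrays, one per output head
y_pred = np.argmax(preds[0], axis=1)                # main output -> predicted class indices
y_true = np.argmax(train_labels_hotEncode, axis=1)  # one-hot labels -> true class indices
cm = confusion_matrix(y_true, y_pred)               # both now have 4162 samples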
