Keras target dimensions mismatch

I'm attempting a single-label classification problem with num_classes = 73.
Here's my simplified Keras model:
num_classes = 73
batch_size = 4
train_data_list = [training_file_names list here..]
validation_data_list = [ validation_file_names list here..]
training_generator = DataGenerator(train_data_list, batch_size, num_classes)
validation_generator = DataGenerator(validation_data_list, batch_size, num_classes)
model = Sequential()
model.add(Conv1D(32, 3, strides=1, input_shape=(15,120), activation="relu"))
model.add(Conv1D(16, 3, strides=1, activation="relu"))
model.add(Flatten())
model.add(Dense(num_classes, activation='softmax'))
sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss="categorical_crossentropy",optimizer=sgd,metrics=['accuracy'])
model.fit_generator(generator=training_generator, epochs=100,
                    validation_data=validation_generator)
Here's my DataGenerator's __getitem__ method:
def __getitem__(self, index):
    X = np.zeros((self.batch_size, 15, 120))
    y = np.zeros((self.batch_size, 1, self.n_classes))
    for i in range(self.batch_size):
        X_row = some_method_that_gives_X_of_15x120_dim()
        target = some_method_that_gives_target()
        one_hot = keras.utils.to_categorical(target, num_classes=self.n_classes)
        X[i] = X_row
        y[i] = one_hot
    return X, y
Since my X values are correctly returned with dimension (batch_size, 15, 120), I am not showing them here. My issue is with the y value returned.
The y returned from this generator has a shape of (batch_size, 1, 73), a one-hot encoded label over the 73 classes, which I think is the correct shape to return.
However, Keras gives the following error for the last layer:
ValueError: Error when checking target: expected dense_1 to have 2
dimensions, but got array with shape (4, 1, 73)
Since the batch size is 4, I think the target batch should also be 3-dimensional, (4, 1, 73). Why then is Keras expecting the target for the last layer to have 2 dimensions?

Your model summary shows that the output layer should have only 2 dimensions, (None, 73):
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv1d_7 (Conv1D) (None, 13, 32) 11552
_________________________________________________________________
conv1d_8 (Conv1D) (None, 11, 16) 1552
_________________________________________________________________
flatten_5 (Flatten) (None, 176) 0
_________________________________________________________________
dense_4 (Dense) (None, 73) 12921
=================================================================
Total params: 26,025
Trainable params: 26,025
Non-trainable params: 0
_________________________________________________________________
Since the dimension of your target is (batch_size, 1, 73), you can simply change it to (batch_size, 73) for your model to run.
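A minimal sketch of that change inside the generator, assuming (as above) that some_method_that_gives_target() returns an integer class index; keras.utils.to_categorical already returns a flat vector of shape (n_classes,) for a scalar target, so the middle axis can simply be dropped:
def __getitem__(self, index):
    X = np.zeros((self.batch_size, 15, 120))
    y = np.zeros((self.batch_size, self.n_classes))  # 2-D targets: (batch_size, 73)
    for i in range(self.batch_size):
        X[i] = some_method_that_gives_X_of_15x120_dim()
        target = some_method_that_gives_target()
        # shape (n_classes,), matching the Dense(num_classes) output
        y[i] = keras.utils.to_categorical(target, num_classes=self.n_classes)
    return X, y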

Related

Keras autoencoder model to detect anomalies in text

I am trying to create an autoencoder that is capable of finding anomalies in text sequences:
X_train_pada_seq.shape
(28840, 999)
I want to use an Embedding layer. Here is my model:
encoder_inputs = Input(shape=(max_len_str, ))
encoder_emb = Embedding(input_dim=len(word_index)+1, output_dim=20, input_length=laenge_pads)(encoder_inputs)
encoder_LSTM_1 = Bidirectional(LSTM(400, activation='relu', return_sequences=True))(encoder_emb)
encoder_drop = Dropout(0.2)(encoder_LSTM_1)
encoder_LSTM_2 = Bidirectional(GRU(200, activation='relu', return_sequences=False, name = 'bottleneck'))(encoder_drop)
decoder_repeated = RepeatVector(200)(encoder_LSTM_2)
decoder_LSTM = Bidirectional(LSTM(400, activation='relu', return_sequences=True))(decoder_repeated)
decoder_drop = Dropout(0.2)(decoder_LSTM)
decoder_output = TimeDistributed(Dense(999, activation='softmax'))(decoder_drop)
autoencoder = Model(encoder_inputs, decoder_output)
autoencoder.compile(optimizer=keras.optimizers.Adam(learning_rate=0.0001), loss='categorical_crossentropy', metrics=['accuracy'])
autoencoder.summary()
Model: "model_7"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_10 (InputLayer) [(None, 999)] 0
_________________________________________________________________
embedding_19 (Embedding) (None, 999, 20) 159660
_________________________________________________________________
bidirectional (Bidirectional (None, 999, 800) 1347200
_________________________________________________________________
dropout (Dropout) (None, 999, 800) 0
_________________________________________________________________
bidirectional_1 (Bidirection (None, 400) 1202400
_________________________________________________________________
repeat_vector (RepeatVector) (None, 200, 400) 0
_________________________________________________________________
bidirectional_2 (Bidirection (None, 200, 800) 2563200
_________________________________________________________________
dropout_1 (Dropout) (None, 200, 800) 0
_________________________________________________________________
time_distributed_6 (TimeDist (None, 200, 999) 800199
=================================================================
Total params: 6,072,659
Trainable params: 6,072,659
Non-trainable params: 0
But when training the model:
history = autoencoder.fit(X_train_pada_seq, X_train_pada_seq, epochs=10, batch_size=64,
validation_data=(X_test_pada_seq, X_test_pada_seq))
I get an error:
ValueError: Shapes (None, 999) and (None, 200, 999) are incompatible
How can I rework the model to fix the error?
I've seen your code snippet, and it seems that your model's output needs to match your target shape, which is (None, 999), but your output shape is (None, 200, 999).
You need to make the model's output shape match the target shape.
Try using tf.reduce_mean with axis=1 (it averages over the sequence dimension):
decoder_drop = Dropout(0.2)(decoder_LSTM)
decoder_time = TimeDistributed(Dense(999, activation='softmax'))(decoder_drop)
decoder_output = tf.math.reduce_mean(decoder_time, axis=1)
This should let you fit the model.
Your last (output) layer should have this shape:
batch_size x 999 x vocab_size  # 999 words, one distribution over the vocabulary per word
Currently the output of your model is
batch_size x 200 x 999
which is incorrect: the RepeatVector(200) should repeat the bottleneck 999 times, once per timestep.
Use sparse categorical cross-entropy as the loss function, since your targets are integer-encoded sequences.
Then it will work.
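A sketch of that version of the decoder, assuming the targets stay as the integer-encoded sequences of shape (None, 999) and the vocabulary size is len(word_index) + 1, as in the Embedding layer:
decoder_repeated = RepeatVector(999)(encoder_LSTM_2)  # one copy of the bottleneck per timestep
decoder_LSTM = Bidirectional(LSTM(400, activation='relu', return_sequences=True))(decoder_repeated)
decoder_drop = Dropout(0.2)(decoder_LSTM)
# a softmax over the vocabulary at every timestep -> output shape (None, 999, vocab_size)
decoder_output = TimeDistributed(Dense(len(word_index) + 1, activation='softmax'))(decoder_drop)
autoencoder = Model(encoder_inputs, decoder_output)
autoencoder.compile(optimizer=keras.optimizers.Adam(learning_rate=0.0001),
                    loss='sparse_categorical_crossentropy', metrics=['accuracy'])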

How to do custom Keras layer matrix multiplication

Layers:
Input shape (None,75)
Hidden layer 1 - shape is (75,3)
Hidden layer 2 - shape is (3,1)
For the last layer, the output must be calculated as (H21*w1)*(H22*w2)*(H23*w3), where H21, H22, H23 are the outputs of hidden layer 2, and w1, w2, w3 are constant weights that are not trainable. How do I write a Lambda function for this?
def product(X):
    return X[0] * X[1]
keras_model = Sequential()
keras_model.add(Dense(75,
input_dim=75,activation='tanh',name="layer1" ))
keras_model.add(Dense(3 ,activation='tanh',name="layer2" ))
keras_model.add(Dense(1,name="layer3"))
cross1=keras_model.add(Lambda(lambda x:product,output_shape=(1,1)))([layer2,layer3])
print(cross1)
NameError: name 'layer2' is not defined
Use the functional API model
inputs = Input((75,)) #shape (batch, 75)
output1 = Dense(75, activation='tanh',name="layer1" )(inputs) #shape (batch, 75)
output2 = Dense(3 ,activation='tanh',name="layer2" )(output1) #shape (batch, 3)
output3 = Dense(1,name="layer3")(output2) #shape (batch, 1)
cross1 = Lambda(lambda x: x[0] * x[1])([output2, output3]) #shape (batch, 3)
model = Model(inputs, cross1)
Please notice that the shapes are totally different from what you expect.
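A quick way to see this (illustrative only; any random batch will do) is to print the output shape, since (batch, 3) * (batch, 1) broadcasts to (batch, 3):
import numpy as np
print(model.predict(np.random.rand(2, 75)).shape)  # -> (2, 3), not (1, 1)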
I suggest doing this via a custom layer instead of a Lambda layer. Why? A custom layer gives you more freedom to do things, and it is also more transparent in terms of inspecting your desired weights. More precisely, if you go through a Lambda layer, the constant weights will not be saved as part of the model, but they will be if you use a custom layer.
Here is an example
from keras import backend as K
from keras.layers import *
from keras.models import *
import numpy as np
class MyLayer(Layer):
    # see https://keras.io/layers/writing-your-own-keras-layers/
    def __init__(self,
                 w_vec=None,
                 allow_training=False,
                 **kwargs):
        self._w_vec = w_vec
        assert allow_training or (w_vec is not None), \
            "ERROR: non-trainable w_vec must be initialized"
        self.allow_training = allow_training
        super().__init__(**kwargs)
        return

    def build(self, input_shape):
        batch_size, num_feats = input_shape
        self.w_vec = self.add_weight(shape=(1, num_feats),
                                     name='w_vec',
                                     initializer='uniform',  # <- use your own preferred initializer
                                     trainable=self.allow_training)
        if self._w_vec is not None:
            # predefined w_vec
            assert self._w_vec.shape[1] == num_feats, \
                "ERROR: initial w_vec shape mismatches the input shape"
            # set it to the weight
            self.set_weights([self._w_vec])  # <- set weights to the supplied one
        super().build(input_shape)
        return

    def call(self, x):
        # Given:
        #   x = [H21, H22, H23]
        #   w_vec = [w1, w2, w3]
        # Step 1: elem_prod = [H21*w1, H22*w2, H23*w3]
        elem_prod = x * self.w_vec
        # Step 2: ret = (H21*w1) * (H22*w2) * (H23*w3)
        ret = K.prod(elem_prod, axis=-1, keepdims=True)
        return ret

    def compute_output_shape(self, input_shape):
        return (input_shape[0], 1)

def make_test_cases(w_vec=None, allow_training=False):
    x = Input(shape=(75,))
    y = Dense(75, activation='tanh', name='fc1')(x)
    y = Dense(3, activation='tanh', name='fc2')(y)
    y = MyLayer(w_vec, allow_training, name='core')(y)
    y = Dense(1, name='fc3')(y)
    net = Model(inputs=x, outputs=y,
                name='{}-{}'.format('randomInit' if w_vec is None else 'assignInit',
                                    'trainable' if allow_training else 'nontrainable'))
    print(net.name)
    print(net.layers[-2].get_weights()[0])
    print(net.summary())
    return net
You may run the following test cases to see the differences (pay attention to the first and last lines of each printout, which give you the initial values and the number of non-trainable parameters, respectively).
a. Constant weights, non-trainable
m1 = make_test_cases(w_vec=np.arange(3).reshape([1,3]), allow_training=False)
will give you
assignInit-nontrainable [[0. 1. 2.]]
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_4 (InputLayer) (None, 75) 0
_________________________________________________________________
fc1 (Dense) (None, 75) 5700
_________________________________________________________________
fc2 (Dense) (None, 3) 228
_________________________________________________________________
core (MyLayer) (None, 1) 3
_________________________________________________________________
fc3 (Dense) (None, 1) 2
=================================================================
Total params: 5,933
Trainable params: 5,930
Non-trainable params: 3
_________________________________________________________________
b. Constant weights, trainable
m2 = make_test_cases(w_vec=np.arange(3).reshape([1,3]), allow_training=True)
will give you
assignInit-trainable [[0. 1. 2.]]
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_5 (InputLayer) (None, 75) 0
_________________________________________________________________
fc1 (Dense) (None, 75) 5700
_________________________________________________________________
fc2 (Dense) (None, 3) 228
_________________________________________________________________
core (MyLayer) (None, 1) 3
_________________________________________________________________
fc3 (Dense) (None, 1) 2
=================================================================
Total params: 5,933
Trainable params: 5,933
Non-trainable params: 0
_________________________________________________________________
c. Random weights, trainable
m3 = make_test_cases(w_vec=None, allow_training=True)
will give you
randomInit-trainable [[ 0.02650297 -0.02010062 -0.03771694]]
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_6 (InputLayer) (None, 75) 0
_________________________________________________________________
fc1 (Dense) (None, 75) 5700
_________________________________________________________________
fc2 (Dense) (None, 3) 228
_________________________________________________________________
core (MyLayer) (None, 1) 3
_________________________________________________________________
fc3 (Dense) (None, 1) 2
=================================================================
Total params: 5,933
Trainable params: 5,933
Non-trainable params: 0
_________________________________________________________________
Final remark
It is unclear in advance which case will work better for your problem, but trying all three sounds like a good plan.
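Any of the three trains the usual way; a quick smoke test on random data (hypothetical inputs, just to confirm the wiring and watch w_vec move in the trainable cases):
m2 = make_test_cases(w_vec=np.arange(3).reshape([1, 3]), allow_training=True)
m2.compile(optimizer='sgd', loss='mse')
x_fake = np.random.rand(32, 75)  # hypothetical data
y_fake = np.random.rand(32, 1)
m2.fit(x_fake, y_fake, epochs=2, batch_size=8)
print(m2.get_layer('core').get_weights()[0])  # no longer exactly [[0. 1. 2.]]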

Why is my CNN overfitting and how can I fix it?

I am fine-tuning a 3D CNN called C3D, which was originally trained to classify sports from video clips.
I am freezing the convolutional (feature extraction) layers and training the fully connected layers, using GIFs from GIPHY to classify the clips by sentiment (positive or negative).
Weights are pre-loaded for all layers except the final fully connected layer.
I am using 5000 images (2500 positive, 2500 negative) for training, with a 70/30 training/testing split, using Keras. I am using the Adam optimizer with a learning rate of 0.0001.
The training accuracy increases and the training loss decreases during training, but very early on the validation accuracy and loss stop improving as the model starts to overfit.
I believe I have enough training data and am using a dropout of 0.5 on both of the fully connected layers, so how can I combat this overfitting?
The model architecture, training code, and visualisations of training performance from Keras can be found below.
train_c3d.py
from training.c3d_model import create_c3d_sentiment_model
from ImageSentiment import load_gif_data
import numpy as np
import pathlib
from keras.callbacks import ModelCheckpoint
from keras.optimizers import Adam
def image_generator(files, batch_size):
    """
    Generate batches of images for training instead of loading all images into memory
    :param files:
    :param batch_size:
    :return:
    """
    while True:
        # Select files (paths/indices) for the batch
        batch_paths = np.random.choice(a=files, size=batch_size)
        batch_input = []
        batch_output = []
        # Read in each input, perform preprocessing and get labels
        for input_path in batch_paths:
            input = load_gif_data(input_path)
            if "pos" in input_path:    # if file name contains pos
                output = np.array([1, 0])  # label
            elif "neg" in input_path:  # if file name contains neg
                output = np.array([0, 1])  # label
            batch_input += [input]
            batch_output += [output]
        # Return a tuple of (input, output) to feed the network
        batch_x = np.array(batch_input)
        batch_y = np.array(batch_output)
        yield (batch_x, batch_y)
model = create_c3d_sentiment_model()
print(model.summary())
model.load_weights('models/C3D_Sport1M_weights_keras_2.2.4.h5', by_name=True)
for layer in model.layers[:14]:  # freeze top layers as feature extractor
    layer.trainable = False
for layer in model.layers[14:]:  # fine-tune final layers
    layer.trainable = True
train_files = [str(filepath.absolute()) for filepath in pathlib.Path('data/sample_train').glob('**/*')]
val_files = [str(filepath.absolute()) for filepath in pathlib.Path('data/sample_validation').glob('**/*')]
batch_size = 8
train_generator = image_generator(train_files, batch_size)
validation_generator = image_generator(val_files, batch_size)
model.compile(optimizer=Adam(lr=0.0001),
              loss='binary_crossentropy',
              metrics=['accuracy'])
mc = ModelCheckpoint('best_model.h5', monitor='val_loss', mode='min', verbose=1)
history = model.fit_generator(train_generator, validation_data=validation_generator,
                              steps_per_epoch=int(np.ceil(len(train_files) / batch_size)),
                              validation_steps=int(np.ceil(len(val_files) / batch_size)),
                              epochs=5, shuffle=True, callbacks=[mc])
load_gif_data()
def load_gif_data(file_path):
    """
    Load and process gif for input into Keras model
    :param file_path:
    :return: Mean normalised image in BGR format as numpy array
    for more info see -> http://cs231n.github.io/neural-networks-2/
    """
    im = Img(fp=file_path)
    try:
        im.load(limit=16,  # Keras image model only requires 16 frames
                first=True)
    except:
        print("Error loading image: " + file_path)
        return
    im.resize(size=(112, 112))
    im.convert('RGB')
    im.close()
    np_frames = []
    frame_index = 0
    for i in range(16):  # if image is less than 16 frames, repeat the frames until there are 16
        frame = im.frames[frame_index]
        rgb = np.array(frame)
        bgr = rgb[..., ::-1]
        mean = np.mean(bgr, axis=0)
        np_frames.append(bgr - mean)  # C3D model was originally trained on BGR, mean normalised images
                                      # it is important that unseen images are in the same format
        if frame_index == (len(im.frames) - 1):
            frame_index = 0
        else:
            frame_index = frame_index + 1
    return np.array(np_frames)
model architecture
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv1 (Conv3D) (None, 16, 112, 112, 64) 5248
_________________________________________________________________
pool1 (MaxPooling3D) (None, 16, 56, 56, 64) 0
_________________________________________________________________
conv2 (Conv3D) (None, 16, 56, 56, 128) 221312
_________________________________________________________________
pool2 (MaxPooling3D) (None, 8, 28, 28, 128) 0
_________________________________________________________________
conv3a (Conv3D) (None, 8, 28, 28, 256) 884992
_________________________________________________________________
conv3b (Conv3D) (None, 8, 28, 28, 256) 1769728
_________________________________________________________________
pool3 (MaxPooling3D) (None, 4, 14, 14, 256) 0
_________________________________________________________________
conv4a (Conv3D) (None, 4, 14, 14, 512) 3539456
_________________________________________________________________
conv4b (Conv3D) (None, 4, 14, 14, 512) 7078400
_________________________________________________________________
pool4 (MaxPooling3D) (None, 2, 7, 7, 512) 0
_________________________________________________________________
conv5a (Conv3D) (None, 2, 7, 7, 512) 7078400
_________________________________________________________________
conv5b (Conv3D) (None, 2, 7, 7, 512) 7078400
_________________________________________________________________
zeropad5 (ZeroPadding3D) (None, 2, 8, 8, 512) 0
_________________________________________________________________
pool5 (MaxPooling3D) (None, 1, 4, 4, 512) 0
_________________________________________________________________
flatten_1 (Flatten) (None, 8192) 0
_________________________________________________________________
fc6 (Dense) (None, 4096) 33558528
_________________________________________________________________
dropout_1 (Dropout) (None, 4096) 0
_________________________________________________________________
fc7 (Dense) (None, 4096) 16781312
_________________________________________________________________
dropout_2 (Dropout) (None, 4096) 0
_________________________________________________________________
nfc8 (Dense) (None, 2) 8194
=================================================================
Total params: 78,003,970
Trainable params: 78,003,970
Non-trainable params: 0
_________________________________________________________________
None
training visualisations
I think the error is in the loss function and in the last Dense layer. As shown in the model summary, the last Dense layer is
nfc8 (Dense) (None, 2)
The output shape is (None, 2), meaning that the layer has 2 units. As you said earlier, you need to classify GIFs as positive or negative.
Classifying GIFs could be framed as a binary classification problem or as a multiclass classification problem (with two classes).
Binary classification has only 1 unit in the last Dense layer, with a sigmoid activation function. But here the model has 2 units in the last Dense layer.
Hence, the model is a multiclass classifier, yet you have given it a loss of binary_crossentropy, which is meant for binary classifiers (with a single unit in the last layer).
So replacing the loss with categorical_crossentropy should work. Alternatively, edit the last Dense layer and change the number of units and the activation function.
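Sketches of both options, assuming the rest of the training script stays as above:
# Option 1: keep the 2-unit softmax output and match the loss to it
model.compile(optimizer=Adam(lr=0.0001),
              loss='categorical_crossentropy',
              metrics=['accuracy'])
# Option 2: make it a true binary classifier instead. In
# create_c3d_sentiment_model, the final layer would become
#     model.add(Dense(1, activation='sigmoid', name='nfc8'))
# keep loss='binary_crossentropy', and have image_generator yield scalar
# labels (1 for "pos", 0 for "neg") rather than the two-element vectors above.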
Hope this helps.

Issue with Keras input array when implementing Convolutional Neural Network

I'm trying to implement a convolutional neural network within Keras using a TF backend for image segmentation of 111 images of size 141 x 166. When I run the code below, I get the error message:
Error when checking target: expected dense_36 to have 2 dimensions, but got array with shape (88, 141, 166, 1)
My X_train variable has shape (88, 141, 166, 1), as does my y_train variable. My X_test variable has shape (23, 141, 166, 1), as does my y_test variable, as produced by sklearn's train_test_split.
I'm not sure what the error message means with regard to dense_36. I have tried using the Flatten() function before fitting the model, but it says I have ndim = 2 and it cannot be flattened.
# set input
batch_size = 111
num_epochs = 50
img_rows = 141
img_cols = 166
input_shape = (img_rows, img_cols, 1)
num_classes = img_rows*img_cols
# split training and test data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 4)
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
# CNN itself
model = Sequential()
model.add(Conv2D(32, kernel_size=(3,3), activation='relu',
input_shape=input_shape))
model.add(Conv2D(64, (3,3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(0.5))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax'))
# compile CNN
model.compile(loss=keras.losses.categorical_crossentropy,
optimizer=keras.optimizers.Adadelta(), metrics=['accuracy'])
# fit CNN
model.fit(X_train, y_train, batch_size=batch_size, epochs=num_epochs,
verbose=1, validation_data=(X_test, y_test))
My model summary is:
Layer (type) Output Shape Param #
=================================================================
conv2d_35 (Conv2D) (None, 139, 164, 32) 320
_________________________________________________________________
conv2d_36 (Conv2D) (None, 137, 162, 64) 18496
_________________________________________________________________
max_pooling2d_18 (MaxPooling (None, 68, 81, 64) 0
_________________________________________________________________
dropout_35 (Dropout) (None, 68, 81, 64) 0
_________________________________________________________________
flatten_28 (Flatten) (None, 352512) 0
_________________________________________________________________
dense_33 (Dense) (None, 128) 45121664
_________________________________________________________________
dropout_36 (Dropout) (None, 128) 0
_________________________________________________________________
dense_34 (Dense) (None, 2) 258
_________________________________________________________________
Total params: 45,140,738
Trainable params: 45,140,738
Non-trainable params: 0
_________________________________________________________________
None

Cannot create keras model while using flow_from_directory

I am trying to create a model to fit data from the cifar-10 dataset. I have a working convolutional neural network from an example, but when I try to create a multilayer perceptron I keep getting a shape mismatch problem.
#https://gist.github.com/fchollet/0830affa1f7f19fd47b06d4cf89ed44d
#https://blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.layers import Activation, Dropout, Flatten, Dense
from keras import backend as K
from keras.optimizers import RMSprop
# dimensions of our images.
img_width, img_height = 32, 32
train_data_dir = 'pro-data/train'
validation_data_dir = 'pro-data/test'
nb_train_samples = 2000
nb_validation_samples = 800
epochs = 50
batch_size = 16
if K.image_data_format() == 'channels_first':
    input_shape = (3, img_width, img_height)
else:
    input_shape = (img_width, img_height, 3)
model = Sequential()
model.add(Dense(512, activation='relu', input_shape=input_shape))
model.add(Dropout(0.2))
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(10, activation='softmax'))
model.summary()
model.compile(loss='categorical_crossentropy',
optimizer=RMSprop(),
metrics=['accuracy'])
# this is the augmentation configuration we will use for training
train_datagen = ImageDataGenerator(
rescale=1./255,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True)
# this is the augmentation configuration we will use for testing:
# only rescaling
test_datagen = ImageDataGenerator()
train_generator = train_datagen.flow_from_directory(
train_data_dir,
target_size=(img_width, img_height),
batch_size=batch_size,
class_mode='binary')
validation_generator = test_datagen.flow_from_directory(
validation_data_dir,
target_size=(img_width, img_height),
batch_size=batch_size,
class_mode='binary')
model.fit_generator(
train_generator,
steps_per_epoch=nb_train_samples // batch_size,
epochs=epochs,
validation_data=validation_generator,
validation_steps=nb_validation_samples // batch_size)
score = model.evaluate_generator(validation_generator, 1000)
print("Accuracy = ", score[1])
The error I am getting is this:
ValueError: Error when checking target: expected dense_3 to have 4 dimensions, but got array with shape (16, 1)
But if I change the input_shape for the input layer to an incorrect value, "(784,)", I get this error:
ValueError: Error when checking input: expected dense_1_input to have 2 dimensions, but got array with shape (16, 32, 32, 3)
This is where I got a working cnn model using flow_from_directory:
https://gist.github.com/fchollet/0830affa1f7f19fd47b06d4cf89ed44d
In case anyone is curious, I am getting an accuracy of only 10% for cifar10 using the convolutional neural network model. Pretty poor, I think.
According to your model, the model summary is
dense_1 (Dense) (None, 32, 32, 512) 2048
dropout_1 (Dropout) (None, 32, 32, 512) 0
dense_2 (Dense) (None, 32, 32, 512) 262656
dropout_2 (Dropout) (None, 32, 32, 512) 0
dense_3 (Dense) (None, 32, 32, 10) 5130
Total params: 269,834
Trainable params: 269,834
Non-trainable params: 0
Your output shape is (32, 32, 10), but for the cifar-10 dataset you want to classify into 10 labels, i.e. an output of shape (10,).
Try adding
model.add(Flatten())
before your last dense layer.
Now your model summary is
Layer (type) Output Shape Param #
dense_1 (Dense) (None, 32, 32, 512) 2048
dropout_1 (Dropout) (None, 32, 32, 512) 0
dense_2 (Dense) (None, 32, 32, 512) 262656
dropout_2 (Dropout) (None, 32, 32, 512) 0
flatten_1 (Flatten) (None, 524288) 0
dense_3 (Dense) (None, 10) 5242890
Total params: 5,507,594
Trainable params: 5,507,594
Non-trainable params: 0
Also, you've used only Dense and Dropout layers in your model. To get better accuracy you should look into the various CNN architectures, which combine convolutional and max-pooling layers ahead of the dense ones. One more thing to check: since you compile with categorical_crossentropy, the generators likely need class_mode='categorical' rather than 'binary', so that the labels are one-hot vectors of length 10 instead of the (16, 1) array in your error.
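For reference, a minimal sketch of such an architecture for 32x32x3 inputs (illustrative only, not a tuned model):
from keras.layers import Conv2D, MaxPooling2D
model = Sequential()
model.add(Conv2D(32, (3, 3), activation='relu', input_shape=input_shape))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(10, activation='softmax'))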
