How to decrease a 3D matrix to a 2D matrix using Keras? - machine-learning

I have built a Keras ConvLSTM neural network, and I want to predict one frame ahead based on a sequence of 10 time steps:
from keras.models import Sequential
from keras.layers.convolutional import Conv3D
from keras.layers.convolutional_recurrent import ConvLSTM2D
from keras.layers.normalization import BatchNormalization
import numpy as np
import pylab as plt
from keras import layers
# We create a layer which takes as input movies of shape
# (n_frames, width, height, channels) and returns a movie
# of identical shape.
model = Sequential()
model.add(ConvLSTM2D(filters=40, kernel_size=(3, 3),
                     input_shape=(None, 64, 64, 1),
                     padding='same', return_sequences=True))
model.add(BatchNormalization())
model.add(ConvLSTM2D(filters=40, kernel_size=(3, 3),
                     padding='same', return_sequences=True))
model.add(BatchNormalization())
model.add(ConvLSTM2D(filters=40, kernel_size=(3, 3),
                     padding='same', return_sequences=True))
model.add(BatchNormalization())
model.add(ConvLSTM2D(filters=40, kernel_size=(3, 3),
                     padding='same', return_sequences=True))
model.add(BatchNormalization())
model.add(Conv3D(filters=1, kernel_size=(3, 3, 3),
                 activation='sigmoid',
                 padding='same', data_format='channels_last'))
model.compile(loss='binary_crossentropy', optimizer='adadelta')
I train the model:
data_train_x = data_4[0:20, 0:10, :, :, :]
data_train_y = data_4[0:20, 10:11, :, :, :]
model.fit(data_train_x, data_train_y, batch_size=10, epochs=1,
          validation_split=0.05)
and I test the model:
test_x = np.reshape(data_test_x[2, :, :, :, :], [1, 10, 64, 64, 1])
next_frame = model.predict(test_x, batch_size=1, verbose=1, steps=None)
but the problem is that next_frame has shape (1, 10, 64, 64, 1), while I want it to have shape (1, 1, 64, 64, 1).
And this is the output of model.summary():
_________________________________________________________________
Layer (type)                 Output Shape               Param #
=================================================================
conv_lst_m2d_1 (ConvLSTM2D)  (None, None, 64, 64, 40)   59200
_________________________________________________________________
batch_normalization_1 (Batch (None, None, 64, 64, 40)   160
_________________________________________________________________
conv_lst_m2d_2 (ConvLSTM2D)  (None, None, 64, 64, 40)   115360
_________________________________________________________________
batch_normalization_2 (Batch (None, None, 64, 64, 40)   160
_________________________________________________________________
conv_lst_m2d_3 (ConvLSTM2D)  (None, None, 64, 64, 40)   115360
_________________________________________________________________
batch_normalization_3 (Batch (None, None, 64, 64, 40)   160
_________________________________________________________________
conv_lst_m2d_4 (ConvLSTM2D)  (None, None, 64, 64, 40)   115360
_________________________________________________________________
batch_normalization_4 (Batch (None, None, 64, 64, 40)   160
_________________________________________________________________
conv3d_1 (Conv3D)            (None, None, 64, 64, 1)    1081
=================================================================
Total params: 407,001
Trainable params: 406,681
Non-trainable params: 320
So what layer should I add to reduce the output to 1 frame instead of 10 frames?

This is expected given the 3D convolution in the final layer. For example, if you apply a Conv2D with 1 filter and padding='same' to a 3-dimensional tensor, it produces a 2D output of the same height and width (the filter implicitly spans the full depth axis).
The same is true for Conv3D on a 4-dimensional tensor: the filter implicitly spans the channel axis, resulting in a 3D tensor with the same (sequence index, height, width) as the input.
It sounds like what you want is a pooling step of some kind after your Conv3D layer that flattens across the sequence dimension, such as AveragePooling3D with a pooling tuple of (10, 1, 1) to average across the first non-batch dimension (modified according to your specific network needs).
Alternatively, suppose you want to "pool" along the sequence dimension by taking only the final sequence element (instead of averaging or max-pooling across the sequence). You could then set the final ConvLSTM2D layer to return_sequences=False, followed by a 2D convolution in the final step, but this means your final convolution won't aggregate across a sequence of predicted frames. Whether that is a good idea is probably application-specific.
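Here is a minimal sketch of that second approach (hypothetical filter counts, assuming the same 64x64 single-channel frames as above); because the last ConvLSTM2D returns only its final state, the time dimension is dropped entirely:
from keras.models import Sequential
from keras.layers.convolutional import Conv2D
from keras.layers.convolutional_recurrent import ConvLSTM2D

model = Sequential()
model.add(ConvLSTM2D(filters=40, kernel_size=(3, 3),
                     input_shape=(None, 64, 64, 1),
                     padding='same', return_sequences=True))
model.add(ConvLSTM2D(filters=40, kernel_size=(3, 3),
                     padding='same', return_sequences=False))  # emit only the last state
model.add(Conv2D(filters=1, kernel_size=(3, 3),
                 activation='sigmoid', padding='same'))
# predict() now returns shape (batch, 64, 64, 1); note there is no explicit
# time axis, so targets would need reshaping from (batch, 1, 64, 64, 1).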
Just to confirm the first approach, I added:
model.add(layers.AveragePooling3D(pool_size=(10, 1, 1), padding='same'))
just after the Conv3D layer, and then made toy data:
x = np.random.rand(1, 10, 64, 64, 1)
and then:
In [22]: z = model.predict(x)
In [23]: z.shape
Out[23]: (1, 1, 64, 64, 1)
Note that the pool size along the first non-batch dimension must be the maximum possible sequence length, so that the final output shape is always (1, 1, ...).

As an alternative to ely's Conv2D and AveragePooling3D solutions, you can keep return_sequences=True on the last ConvLSTM2D layer but change the padding of the Conv3D layer to 'valid' and set its kernel_size to (n_observations - k_steps_to_predict + 1, 1, 1). This lets you control the time dimension (number of frames) of the output, and it works for any direct k-step-ahead prediction, assuming the number of observations is fixed.
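To sketch this concretely (assuming the 10-frame inputs and 1-step-ahead prediction from the question, so the time kernel is 10 - 1 + 1 = 10):
model.add(ConvLSTM2D(filters=40, kernel_size=(3, 3),
                     padding='same', return_sequences=True))
model.add(Conv3D(filters=1, kernel_size=(10, 1, 1),
                 activation='sigmoid', padding='valid',
                 data_format='channels_last'))
# 'valid' padding shrinks the time axis: (batch, 10, 64, 64, 40) in,
# (batch, 10 - 10 + 1, 64, 64, 1) = (batch, 1, 64, 64, 1) out.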

Related

Add two layers after a hidden layer keras

I am trying to get a neural network model like this:
      input
        |
      hidden
      /    \
  hidden   output2
     |
  output1
Here is a simple example in code:
model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3),
                 activation='relu',
                 input_shape=input_shape))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())  # from here I would like to add a new neural network
model.add(Dense(128, activation='relu'))
model.add(Dense(num_classes, activation='softmax'))
How can I get the expected model?
Sorry if this is a stupid question; I am a complete beginner in artificial intelligence.
You can use the Keras functional API instead of the Sequential API to get this done, as shown below:
from keras.models import Model
from keras.layers import Input
from keras.layers import Dense
from keras.layers import Flatten
from keras.layers.convolutional import Conv2D
from keras.layers.pooling import MaxPooling2D
num_classes = 10
inp = Input(shape=input_shape)
conv1 = Conv2D(32, kernel_size=(3, 3), activation='relu')(inp)
conv2 = Conv2D(64, (3, 3), activation='relu')(conv1)
max_pool = MaxPooling2D(pool_size=(2, 2))(conv2)
flat = Flatten()(max_pool)
hidden1 = Dense(128, activation='relu')(flat)
output1 = Dense(num_classes, activation='softmax')(hidden1)
hidden2 = Dense(10, activation='relu')(flat)  # specify the number of hidden units
output2 = Dense(3, activation='softmax')(hidden2)  # specify the number of classes
model = Model(inputs=inp, outputs=[output1, output2])
Your network then looks like this:
Model: "model_1"
__________________________________________________________________________________________________
Layer (type)                    Output Shape          Param #     Connected to
==================================================================================================
input_7 (InputLayer)            (None, 64, 256, 256)  0
__________________________________________________________________________________________________
conv2d_10 (Conv2D)              (None, 62, 254, 32)   73760       input_7[0][0]
__________________________________________________________________________________________________
conv2d_11 (Conv2D)              (None, 60, 252, 64)   18496       conv2d_10[0][0]
__________________________________________________________________________________________________
max_pooling2d_4 (MaxPooling2D)  (None, 30, 126, 64)   0           conv2d_11[0][0]
__________________________________________________________________________________________________
flatten_4 (Flatten)             (None, 241920)        0           max_pooling2d_4[0][0]
__________________________________________________________________________________________________
dense_6 (Dense)                 (None, 128)           30965888    flatten_4[0][0]
__________________________________________________________________________________________________
dense_8 (Dense)                 (None, 10)            2419210     flatten_4[0][0]
__________________________________________________________________________________________________
dense_7 (Dense)                 (None, 10)            1290        dense_6[0][0]
__________________________________________________________________________________________________
dense_9 (Dense)                 (None, 3)             33          dense_8[0][0]
==================================================================================================
Total params: 33,478,677
Trainable params: 33,478,677
Non-trainable params: 0
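To train this two-output model, you pass one target array per output. A minimal sketch (hypothetical loss choices and loss weights; y1_train and y2_train are target arrays matching output1 and output2):
model.compile(optimizer='adam',
              loss=['categorical_crossentropy', 'categorical_crossentropy'],
              loss_weights=[1.0, 1.0])
model.fit(x_train, [y1_train, y2_train], epochs=10, batch_size=32)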

Keras target dimensions mismatch

I am attempting a single-label classification problem with num_classes = 73.
Here's my simplified Keras model:
num_classes = 73
batch_size = 4
train_data_list = [training_file_names list here..]
validation_data_list = [ validation_file_names list here..]
training_generator = DataGenerator(train_data_list, batch_size, num_classes)
validation_generator = DataGenerator(validation_data_list, batch_size, num_classes)
model = Sequential()
model.add(Conv1D(32, 3, strides=1, input_shape=(15, 120), activation="relu"))
model.add(Conv1D(16, 3, strides=1, activation="relu"))
model.add(Flatten())
model.add(Dense(num_classes, activation='softmax'))
sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss="categorical_crossentropy", optimizer=sgd, metrics=['accuracy'])
model.fit_generator(generator=training_generator, epochs=100,
                    validation_data=validation_generator)
Here's my DataGenerator's __get_item__ method:
def __get_item__(self):
    X = np.zeros((self.batch_size, 15, 120))
    y = np.zeros((self.batch_size, 1, self.n_classes))
    for i in range(self.batch_size):
        X_row = some_method_that_gives_X_of_15x20_dim()
        target = some_method_that_gives_target()
        one_hot = keras.utils.to_categorical(target, num_classes=self.n_classes)
        X[i] = X_row
        y[i] = one_hot
    return X, y
Since my X values are correctly returned with dimension (batch_size, 15, 120), I am not showing it here. My issue is with the y value returned.
y returned from this generator method has a shape of (batch_size, 1, 73), as one-hot encoded labels for the 73 classes, which I think is the correct shape to return.
However Keras gives the following error for the last layer:
ValueError: Error when checking target: expected dense_1 to have 2
dimensions, but got array with shape (4, 1, 73)
Since the batch size is 4, I think the target batch should also be 3-dimensional: (4, 1, 73). Why, then, is Keras expecting the target for the last layer to be 2-dimensional?
Your model's summary shows that the output layer has only 2 dimensions, (None, 73):
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
conv1d_7 (Conv1D)            (None, 13, 32)            11552
_________________________________________________________________
conv1d_8 (Conv1D)            (None, 11, 16)            1552
_________________________________________________________________
flatten_5 (Flatten)          (None, 176)               0
_________________________________________________________________
dense_4 (Dense)              (None, 73)                12921
=================================================================
Total params: 26,025
Trainable params: 26,025
Non-trainable params: 0
_________________________________________________________________
Since the dimension of your target is (batch_size, 1, 73), just change it to (batch_size, 73) for your model to run.
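A minimal sketch of that fix inside the generator (reusing the same hypothetical helper names from the question): allocate a 2-dimensional target array so each batch has shape (batch_size, n_classes):
def __get_item__(self):
    X = np.zeros((self.batch_size, 15, 120))
    y = np.zeros((self.batch_size, self.n_classes))  # 2-D target, no middle axis
    for i in range(self.batch_size):
        X[i] = some_method_that_gives_X_of_15x20_dim()
        target = some_method_that_gives_target()
        y[i] = keras.utils.to_categorical(target, num_classes=self.n_classes)
    return X, y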

Why is my CNN overfitting and how can I fix it?

I am fine-tuning a 3D CNN called C3D, which was originally trained to classify sports from video clips.
I am freezing the convolutional (feature extraction) layers and training the fully connected layers, using GIFs from GIPHY to classify the GIFs for sentiment analysis (positive or negative).
Weights are pre-loaded for all layers except the final fully connected layer.
I am using 5000 images (2500 positive, 2500 negative) for training, with a 70/30 training/testing split, using Keras. I am using the Adam optimizer with a learning rate of 0.0001.
The training accuracy increases and the training loss decreases during training, but very early on the validation accuracy and loss stop improving as the model starts to overfit.
I believe I have enough training data and am using a dropout of 0.5 on both of the fully connected layers, so how can I combat this overfitting?
The model architecture, training code and visualisations of training performance from Keras can be found below.
train_c3d.py
from training.c3d_model import create_c3d_sentiment_model
from ImageSentiment import load_gif_data
import numpy as np
import pathlib
from keras.callbacks import ModelCheckpoint
from keras.optimizers import Adam

def image_generator(files, batch_size):
    """
    Generate batches of images for training instead of loading all images into memory
    :param files:
    :param batch_size:
    :return:
    """
    while True:
        # Select files (paths/indices) for the batch
        batch_paths = np.random.choice(a=files, size=batch_size)
        batch_input = []
        batch_output = []
        # Read in each input, perform preprocessing and get labels
        for input_path in batch_paths:
            input = load_gif_data(input_path)
            if "pos" in input_path:  # if file name contains pos
                output = np.array([1, 0])  # label
            elif "neg" in input_path:  # if file name contains neg
                output = np.array([0, 1])  # label
            batch_input += [input]
            batch_output += [output]
        # Return a tuple of (input, output) to feed the network
        batch_x = np.array(batch_input)
        batch_y = np.array(batch_output)
        yield (batch_x, batch_y)

model = create_c3d_sentiment_model()
print(model.summary())
model.load_weights('models/C3D_Sport1M_weights_keras_2.2.4.h5', by_name=True)

for layer in model.layers[:14]:  # freeze top layers as feature extractor
    layer.trainable = False
for layer in model.layers[14:]:  # fine tune final layers
    layer.trainable = True

train_files = [str(filepath.absolute()) for filepath in pathlib.Path('data/sample_train').glob('**/*')]
val_files = [str(filepath.absolute()) for filepath in pathlib.Path('data/sample_validation').glob('**/*')]

batch_size = 8
train_generator = image_generator(train_files, batch_size)
validation_generator = image_generator(val_files, batch_size)

model.compile(optimizer=Adam(lr=0.0001),
              loss='binary_crossentropy',
              metrics=['accuracy'])
mc = ModelCheckpoint('best_model.h5', monitor='val_loss', mode='min', verbose=1)
history = model.fit_generator(train_generator, validation_data=validation_generator,
                              steps_per_epoch=int(np.ceil(len(train_files) / batch_size)),
                              validation_steps=int(np.ceil(len(val_files) / batch_size)),
                              epochs=5, shuffle=True, callbacks=[mc])
load_gif_data()
def load_gif_data(file_path):
    """
    Load and process gif for input into Keras model
    :param file_path:
    :return: Mean normalised image in BGR format as numpy array
    for more info see -> http://cs231n.github.io/neural-networks-2/
    """
    im = Img(fp=file_path)
    try:
        im.load(limit=16,  # Keras image model only requires 16 frames
                first=True)
    except:
        print("Error loading image: " + file_path)
        return
    im.resize(size=(112, 112))
    im.convert('RGB')
    im.close()
    np_frames = []
    frame_index = 0
    for i in range(16):  # if image is less than 16 frames, repeat the frames until there are 16
        frame = im.frames[frame_index]
        rgb = np.array(frame)
        bgr = rgb[..., ::-1]
        mean = np.mean(bgr, axis=0)
        np_frames.append(bgr - mean)  # C3D model was originally trained on BGR, mean normalised images
        # it is important that unseen images are in the same format
        if frame_index == (len(im.frames) - 1):
            frame_index = 0
        else:
            frame_index = frame_index + 1
    return np.array(np_frames)
model architecture
_________________________________________________________________
Layer (type)                 Output Shape                Param #
=================================================================
conv1 (Conv3D)               (None, 16, 112, 112, 64)    5248
_________________________________________________________________
pool1 (MaxPooling3D)         (None, 16, 56, 56, 64)      0
_________________________________________________________________
conv2 (Conv3D)               (None, 16, 56, 56, 128)     221312
_________________________________________________________________
pool2 (MaxPooling3D)         (None, 8, 28, 28, 128)      0
_________________________________________________________________
conv3a (Conv3D)              (None, 8, 28, 28, 256)      884992
_________________________________________________________________
conv3b (Conv3D)              (None, 8, 28, 28, 256)      1769728
_________________________________________________________________
pool3 (MaxPooling3D)         (None, 4, 14, 14, 256)      0
_________________________________________________________________
conv4a (Conv3D)              (None, 4, 14, 14, 512)      3539456
_________________________________________________________________
conv4b (Conv3D)              (None, 4, 14, 14, 512)      7078400
_________________________________________________________________
pool4 (MaxPooling3D)         (None, 2, 7, 7, 512)        0
_________________________________________________________________
conv5a (Conv3D)              (None, 2, 7, 7, 512)        7078400
_________________________________________________________________
conv5b (Conv3D)              (None, 2, 7, 7, 512)        7078400
_________________________________________________________________
zeropad5 (ZeroPadding3D)     (None, 2, 8, 8, 512)        0
_________________________________________________________________
pool5 (MaxPooling3D)         (None, 1, 4, 4, 512)        0
_________________________________________________________________
flatten_1 (Flatten)          (None, 8192)                0
_________________________________________________________________
fc6 (Dense)                  (None, 4096)                33558528
_________________________________________________________________
dropout_1 (Dropout)          (None, 4096)                0
_________________________________________________________________
fc7 (Dense)                  (None, 4096)                16781312
_________________________________________________________________
dropout_2 (Dropout)          (None, 4096)                0
_________________________________________________________________
nfc8 (Dense)                 (None, 2)                   8194
=================================================================
Total params: 78,003,970
Trainable params: 78,003,970
Non-trainable params: 0
_________________________________________________________________
None
training visualisations
I think the error is in the loss function and in the last Dense layer. As shown in the model summary, the last Dense layer is:
nfc8 (Dense) (None, 2)
The output shape is (None, 2), meaning the layer has 2 units. As you said earlier, you need to classify GIFs as positive or negative.
Classifying GIFs could be a binary classification problem or a multiclass classification problem (with two classes).
Binary classification has only 1 unit in the last Dense layer, with a sigmoid activation function. But here the model has 2 units in the last Dense layer.
Hence, the model is a multiclass classifier, but you have given it a loss function of binary_crossentropy, which is meant for binary classifiers (with a single unit in the last layer).
So replacing the loss with categorical_crossentropy should work. Or edit the last Dense layer and change the number of units and the activation function.
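A minimal sketch of the first option, reusing the compile call from the question (the alternative would be to make the last layer Dense(1, activation='sigmoid') and switch to scalar 0/1 labels):
model.compile(optimizer=Adam(lr=0.0001),
              loss='categorical_crossentropy',  # matches the 2-unit output
              metrics=['accuracy'])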
Hope this helps.

ValueError: Input 0 is incompatible with layer conv1d_1: expected ndim=3, found ndim=4

I am building a prediction model for sequence data using the Conv1D layer provided by Keras. This is how I did it:
model = Sequential()
model.add(Conv1D(60, 32, strides=1, activation='relu', padding='causal', input_shape=(None, 64, 1)))
model.add(Conv1D(80, 10, strides=1, activation='relu', padding='causal'))
model.add(Dropout(0.25))
model.add(Conv1D(100, 5, strides=1, activation='relu', padding='causal'))
model.add(MaxPooling1D(1))
model.add(Dropout(0.25))
model.add(Dense(300, activation='relu'))
model.add(Dense(1, activation='relu'))
print(model.summary())
However, I get the following error:
Traceback (most recent call last):
  File "processing_2a_1.py", line 96, in <module>
    model.add(Conv1D(60, 32, strides=1, activation='relu', padding='causal', input_shape=(None, 64, 1)))
  File "build/bdist.linux-x86_64/egg/keras/models.py", line 442, in add
  File "build/bdist.linux-x86_64/egg/keras/engine/topology.py", line 558, in __call__
  File "build/bdist.linux-x86_64/egg/keras/engine/topology.py", line 457, in assert_input_compatibility
ValueError: Input 0 is incompatible with layer conv1d_1: expected ndim=3, found ndim=4
The training and validation data shapes are as follows:
('X_train shape ', (1496000, 64, 1))
('Y_train shape ', (1496000, 1))
('X_val shape ', (374000, 64, 1))
('Y_val shape ', (374000, 1))
I think the input_shape in the first layer was not set up correctly. How should I set it up?
Update: After using input_shape=(64, 1), I got the following error message, even though the model summary runs through:
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
conv1d_1 (Conv1D)            (None, 64, 60)            1980
_________________________________________________________________
conv1d_2 (Conv1D)            (None, 64, 80)            48080
_________________________________________________________________
dropout_1 (Dropout)          (None, 64, 80)            0
_________________________________________________________________
conv1d_3 (Conv1D)            (None, 64, 100)           40100
_________________________________________________________________
max_pooling1d_1 (MaxPooling1 (None, 64, 100)           0
_________________________________________________________________
dropout_2 (Dropout)          (None, 64, 100)           0
_________________________________________________________________
dense_1 (Dense)              (None, 64, 300)           30300
_________________________________________________________________
dense_2 (Dense)              (None, 64, 1)             301
=================================================================
Total params: 120,761
Trainable params: 120,761
Non-trainable params: 0
_________________________________________________________________
None
Traceback (most recent call last):
  File "processing_2a_1.py", line 125, in <module>
    history = model.fit(X_train, Y_train, batch_size=batch_size, validation_data=(X_val, Y_val), epochs=nr_of_epochs, verbose=2)
  File "build/bdist.linux-x86_64/egg/keras/models.py", line 871, in fit
  File "build/bdist.linux-x86_64/egg/keras/engine/training.py", line 1524, in fit
  File "build/bdist.linux-x86_64/egg/keras/engine/training.py", line 1382, in _standardize_user_data
  File "build/bdist.linux-x86_64/egg/keras/engine/training.py", line 132, in _standardize_input_data
ValueError: Error when checking target: expected dense_2 to have 3 dimensions, but got array with shape (1496000, 1)
You should either change input_shape to
input_shape=(64, 1)
... or use batch_input_shape:
batch_input_shape=(None, 64, 1)
This discussion explains the difference between the two in Keras in detail.
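A minimal sketch of the distinction (hypothetical toy models): input_shape gives the per-sample shape only, while batch_input_shape includes the batch dimension as its first entry (None meaning any batch size):
from keras.models import Sequential
from keras.layers import Conv1D

m1 = Sequential()
m1.add(Conv1D(60, 32, activation='relu', padding='causal',
              input_shape=(64, 1)))              # per-sample shape only
m2 = Sequential()
m2.add(Conv1D(60, 32, activation='relu', padding='causal',
              batch_input_shape=(None, 64, 1)))  # batch dimension included
# Both accept training arrays of shape (n_samples, 64, 1).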
I had the same issue. I found that expanding the dimensions of the input data with tf.expand_dims fixed it:
x = tf.expand_dims(x, axis=-1)
In my case I wanted to use Conv2D on a single 20x32 feature map, and did:
print(kws_x_train.shape)  # (8000, 20, 32)
model = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(16, (3, 8), input_shape=(20, 32)),
])
...
model.fit(kws_x_train, kws_y_train, epochs=15)
which gives expected ndim=4, found ndim=3. Full shape received: [None, 20, 32]. You need to tell Conv2D that there is only 1 feature map, and add an extra dimension to the input tensor. This worked:
kws_x_train2 = kws_x_train.reshape(kws_x_train.shape + (1,))
print(kws_x_train2.shape)  # (8000, 20, 32, 1)
model = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(16, (3, 8), input_shape=(20, 32, 1)),
])
...
model.fit(kws_x_train2, kws_y_train, epochs=15)

Cannot create keras model while using flow_from_directory

I am trying to create a model to fit data from the CIFAR-10 dataset. I have a working convolutional neural network from an example, but when I try to create a multi-layer perceptron I keep getting a shape mismatch problem.
#https://gist.github.com/fchollet/0830affa1f7f19fd47b06d4cf89ed44d
#https://blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.layers import Activation, Dropout, Flatten, Dense
from keras import backend as K
from keras.optimizers import RMSprop
# dimensions of our images.
img_width, img_height = 32, 32
train_data_dir = 'pro-data/train'
validation_data_dir = 'pro-data/test'
nb_train_samples = 2000
nb_validation_samples = 800
epochs = 50
batch_size = 16
if K.image_data_format() == 'channels_first':
    input_shape = (3, img_width, img_height)
else:
    input_shape = (img_width, img_height, 3)

model = Sequential()
model.add(Dense(512, activation='relu', input_shape=input_shape))
model.add(Dropout(0.2))
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(10, activation='softmax'))
model.summary()
model.compile(loss='categorical_crossentropy',
              optimizer=RMSprop(),
              metrics=['accuracy'])

# this is the augmentation configuration we will use for training
train_datagen = ImageDataGenerator(
    rescale=1./255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True)

# this is the augmentation configuration we will use for testing:
# only rescaling
test_datagen = ImageDataGenerator()

train_generator = train_datagen.flow_from_directory(
    train_data_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='binary')

validation_generator = test_datagen.flow_from_directory(
    validation_data_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='binary')

model.fit_generator(
    train_generator,
    steps_per_epoch=nb_train_samples // batch_size,
    epochs=epochs,
    validation_data=validation_generator,
    validation_steps=nb_validation_samples // batch_size)

score = model.evaluate_generator(validation_generator, 1000)
print("Accuracy = ", score[1])
The error I am getting is this:
ValueError: Error when checking target: expected dense_3 to have 4 dimensions, but got array with shape (16, 1)
But if I change the input_shape of the input layer to an incorrect value, "(784,)", I get this error:
ValueError: Error when checking input: expected dense_1_input to have 2 dimensions, but got array with shape (16, 32, 32, 3)
This is where I got a working cnn model using flow_from_directory:
https://gist.github.com/fchollet/0830affa1f7f19fd47b06d4cf89ed44d
In case anyone is curious, I am getting an accuracy of only 10% on CIFAR-10 with the convolutional neural network model, which is pretty poor I think.
According to your model, the model summary is:
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
dense_1 (Dense)              (None, 32, 32, 512)       2048
_________________________________________________________________
dropout_1 (Dropout)          (None, 32, 32, 512)       0
_________________________________________________________________
dense_2 (Dense)              (None, 32, 32, 512)       262656
_________________________________________________________________
dropout_2 (Dropout)          (None, 32, 32, 512)       0
_________________________________________________________________
dense_3 (Dense)              (None, 32, 32, 10)        5130
=================================================================
Total params: 269,834
Trainable params: 269,834
Non-trainable params: 0
Your output shape is (32, 32, 10), but for the CIFAR-10 dataset you want to classify into 10 labels, i.e. an output of shape (10,).
Try adding
model.add(Flatten())
before your last Dense layer.
Now your model summary is:
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
dense_1 (Dense)              (None, 32, 32, 512)       2048
_________________________________________________________________
dropout_1 (Dropout)          (None, 32, 32, 512)       0
_________________________________________________________________
dense_2 (Dense)              (None, 32, 32, 512)       262656
_________________________________________________________________
dropout_2 (Dropout)          (None, 32, 32, 512)       0
_________________________________________________________________
flatten_1 (Flatten)          (None, 524288)            0
_________________________________________________________________
dense_3 (Dense)              (None, 10)                5242890
=================================================================
Total params: 5,507,594
Trainable params: 5,507,594
Non-trainable params: 0
Also, you have only used Dense and Dropout layers in your model. For better accuracy, look up the common CNN architectures, which combine convolutional, max-pooling and dense layers.
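For example, a minimal sketch of such a CNN for the 32x32x3 CIFAR-10 images (hypothetical filter counts, not a tuned architecture):
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout
from keras.optimizers import RMSprop

model = Sequential()
model.add(Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(10, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer=RMSprop(),
              metrics=['accuracy'])
# Note: with categorical_crossentropy, flow_from_directory should use
# class_mode='categorical' rather than 'binary'.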
