I'm trying to use an MLP for classification. Here is what the model looks like.
import keras
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.utils import np_utils
model = Sequential()
model.add(Dense(256, activation='relu', input_dim=400))
model.add(Dropout(0.5))
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(number_of_classes, activation='softmax'))
model.compile(loss='categorical_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
X_train = input_data
y_train = np_utils.to_categorical(encoded_labels, number_of_classes)
history = model.fit(X_train, y_train, validation_split=0.2, nb_epoch=10, verbose=1)
But when I train my model, I see that the training accuracy keeps improving while the validation accuracy barely moves and stays very low, and the validation loss stays high.
Using TensorFlow backend.
Train on 41827 samples, validate on 10457 samples
Epoch 1/10
41827/41827 [==============================] - 7s - loss: 2.5783 - acc: 0.3853 - val_loss: 14.2315 - val_acc: 0.0031
Epoch 2/10
41827/41827 [==============================] - 6s - loss: 1.0356 - acc: 0.7011 - val_loss: 14.8957 - val_acc: 0.0153
Epoch 3/10
41827/41827 [==============================] - 6s - loss: 0.7935 - acc: 0.7691 - val_loss: 15.2258 - val_acc: 0.0154
Epoch 4/10
41827/41827 [==============================] - 6s - loss: 0.6734 - acc: 0.8013 - val_loss: 15.4279 - val_acc: 0.0153
Epoch 5/10
41827/41827 [==============================] - 6s - loss: 0.6188 - acc: 0.8185 - val_loss: 15.4588 - val_acc: 0.0165
Epoch 6/10
41827/41827 [==============================] - 6s - loss: 0.5847 - acc: 0.8269 - val_loss: 15.5796 - val_acc: 0.0176
Epoch 7/10
41827/41827 [==============================] - 6s - loss: 0.5488 - acc: 0.8395 - val_loss: 15.6464 - val_acc: 0.0154
Epoch 8/10
41827/41827 [==============================] - 6s - loss: 0.5398 - acc: 0.8418 - val_loss: 15.6705 - val_acc: 0.0164
Epoch 9/10
41827/41827 [==============================] - 6s - loss: 0.5287 - acc: 0.8439 - val_loss: 15.7259 - val_acc: 0.0163
Epoch 10/10
41827/41827 [==============================] - 6s - loss: 0.4923 - acc: 0.8547 - val_loss: 15.7484 - val_acc: 0.0187
Is the problem related to my training data, or is something wrong with my training setup?
Your model seems to be strongly overfitting. It probably has something to do with the data, but you could try lowering your learning rate first, just in case:
from keras.optimizers import Adam

model.compile(loss='categorical_crossentropy',
              optimizer=Adam(lr=0.0001),
              metrics=['accuracy'])
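Since the validation accuracy stays near zero while the training accuracy climbs, it is also worth checking how the validation set is built: Keras' validation_split takes the last 20% of the arrays exactly as you pass them, without shuffling, so if input_data is ordered (for example grouped by class) the validation set may contain classes the model rarely sees during training. A minimal sketch of shuffling before fitting, assuming input_data and encoded_labels are NumPy arrays:

import numpy as np

# shuffle samples and labels together so the last 20% taken by
# validation_split is a representative mix of classes
perm = np.random.permutation(len(input_data))
X_train = input_data[perm]
y_train = np_utils.to_categorical(encoded_labels[perm], number_of_classes)

history = model.fit(X_train, y_train, validation_split=0.2,
                    nb_epoch=10, verbose=1)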
I'm building a neural network to classify doublets of 100*80 images into two classes.
My accuracy is capped at around 88% no matter what I try to do (add convolutional layers, dropouts...).
I've investigated the issue and found from the confusion matrix that my model only ever predicts one of the two classes. I have no idea how this is possible and was wondering if anyone could help me.
Here is some of the code (I've used a really simple model architecture here):
X_train, X_test, y_train, y_test = train_test_split(X,y, test_size = 0.2, shuffle = True)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape = (100,80,2)))
model.add(keras.layers.Dense(5, activation = 'relu'))
model.add(keras.layers.Dense(1, activation = 'sigmoid'))
model.compile(optimizer='adam',
              loss=tf.keras.losses.BinaryCrossentropy(from_logits=False),
              metrics=['accuracy'])
model.fit(X_train, y_train, epochs =10, batch_size= 200, validation_data = (X_test, y_test))
Output for training:
Epoch 1/10
167/167 [==============================] - 6s 31ms/step - loss: 0.6633 - accuracy: 0.8707 - val_loss: 0.6345 - val_accuracy: 0.8813
Epoch 2/10
167/167 [==============================] - 2s 13ms/step - loss: 0.6087 - accuracy: 0.8827 - val_loss: 0.5848 - val_accuracy: 0.8813
Epoch 3/10
167/167 [==============================] - 2s 13ms/step - loss: 0.5630 - accuracy: 0.8828 - val_loss: 0.5435 - val_accuracy: 0.8813
Epoch 4/10
167/167 [==============================] - 2s 13ms/step - loss: 0.5249 - accuracy: 0.8828 - val_loss: 0.5090 - val_accuracy: 0.8813
Epoch 5/10
167/167 [==============================] - 2s 12ms/step - loss: 0.4931 - accuracy: 0.8828 - val_loss: 0.4805 - val_accuracy: 0.8813
Epoch 6/10
167/167 [==============================] - 2s 13ms/step - loss: 0.4663 - accuracy: 0.8828 - val_loss: 0.4567 - val_accuracy: 0.8813
Epoch 7/10
167/167 [==============================] - 2s 14ms/step - loss: 0.4424 - accuracy: 0.8832 - val_loss: 0.4363 - val_accuracy: 0.8813
Epoch 8/10
167/167 [==============================] - 3s 17ms/step - loss: 0.4198 - accuracy: 0.8848 - val_loss: 0.4190 - val_accuracy: 0.8816
Epoch 9/10
167/167 [==============================] - 2s 15ms/step - loss: 0.3982 - accuracy: 0.8887 - val_loss: 0.4040 - val_accuracy: 0.8816
Epoch 10/10
167/167 [==============================] - 3s 15ms/step - loss: 0.3784 - accuracy: 0.8942 - val_loss: 0.3911 - val_accuracy: 0.8821
Out[85]:
<keras.callbacks.History at 0x7fe3ce8dedd0>
loss, accuracies = model1.evaluate(X_test, y_test)
261/261 [==============================] - 1s 2ms/step - loss: 0.3263 - accuracy: 0.8813
y_pred = model1.predict(X_test)
y_pred = (y_pred > 0.5)
confusion_matrix((y_test > 0.5), y_pred )
array([[ 0, 990],
[ 0, 7353]])
First, check how imbalanced your data is.
If, for example, your dataset contains 10 samples, of which 9 are class A and 1 is class B, your model can maximize its accuracy by simply always predicting class A - it would still get 90% accuracy.
What you actually want is to penalize mistakes on the under-represented class - i.e. class B - much more heavily.
So if your data is indeed imbalanced, you can try changing the metric from ['accuracy'] to ['matthews_correlation'],
e.g.
model.compile(optimizer='adam',
              loss=tf.keras.losses.BinaryCrossentropy(from_logits=False),
              metrics=['matthews_correlation'])
Which, as explained at the beginning, will make poor performance on the under-represented class visible instead of hiding it behind a high overall accuracy.
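Two caveats: 'matthews_correlation' is not a built-in Keras metric string in most versions (TensorFlow Addons ships tfa.metrics.MatthewsCorrelationCoefficient, or you can write a custom metric), and a metric only changes what is reported, not the loss being optimized. To actually make mistakes on the rare class cost more during training, one common option is to pass class weights to fit(). A minimal sketch, assuming y_train holds the 0/1 labels as a NumPy array:

import numpy as np
from sklearn.utils.class_weight import compute_class_weight

# weight each class inversely to its frequency
classes = np.unique(y_train)
weights = compute_class_weight(class_weight='balanced', classes=classes, y=y_train)
class_weight = {int(c): w for c, w in zip(classes, weights)}

# Keras scales each sample's loss by its class weight,
# so errors on the minority class count for more
model.fit(X_train, y_train, epochs=10, batch_size=200,
          validation_data=(X_test, y_test), class_weight=class_weight)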
I am training a multitarget classification model with keras. My architecture is:
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model
imput_ = Input(shape=(X_train.shape[1],))
x = Dense(50, activation="relu")(imput_)
x = Dense(n_targets, activation="sigmoid", name="output")(x)
model = Model(imput_, x)
model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])
Then I fit my model like this:
model.fit(X_train, y_train.toarray(), validation_data=(X_test, y_test.toarray()), epochs=5)
The fitting loss shows this:
Epoch 1/5
36/36 [==============================] - 1s 10ms/step - loss: 0.5161 - accuracy: 0.0614 - val_loss: 0.3365 - val_accuracy: 0.1434
Epoch 2/5
36/36 [==============================] - 0s 6ms/step - loss: 0.2761 - accuracy: 0.2930 - val_loss: 0.2429 - val_accuracy: 0.4560
Epoch 3/5
36/36 [==============================] - 0s 5ms/step - loss: 0.2255 - accuracy: 0.4435 - val_loss: 0.2187 - val_accuracy: 0.5130
Epoch 4/5
36/36 [==============================] - 0s 5ms/step - loss: 0.2037 - accuracy: 0.4800 - val_loss: 0.2040 - val_accuracy: 0.5199
Epoch 5/5
36/36 [==============================] - 0s 5ms/step - loss: 0.1876 - accuracy: 0.4996 - val_loss: 0.1929 - val_accuracy: 0.5250
<keras.callbacks.History at 0x7fe0a549ee10>
But then if I run:
from sklearn.metrics import accuracy_score
accuracy_score(np.round(model.predict(X_test)), y_test.toarray())
I got the following score:
0.07772020725388601
Shouldn't the score be equal to the val accuracy score in the last epoch?
With that loss and activation function, the highest predicted probability for a sample might not exceed 0.5, and np.round then turns it into 0.
Try:
y_pred = np.argmax(model.predict(X_test), axis=1)
accuracy_score(y_test, y_pred)
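Note that sklearn's accuracy_score needs the true labels in the same format as y_pred; if y_test here is one-hot encoded (it is converted with .toarray() above), you have to take the argmax on that side as well. A minimal sketch under that assumption:

import numpy as np
from sklearn.metrics import accuracy_score

# convert both the predictions and the one-hot targets to class indices
y_pred = np.argmax(model.predict(X_test), axis=1)
y_true = np.argmax(y_test.toarray(), axis=1)
print(accuracy_score(y_true, y_pred))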
I am doing image classification with a CNN.
The following is my model:
model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(200, 200, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.Flatten())
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(4, activation='softmax'))
model.compile(optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy'])
history = model.fit(x_train,y_train,epochs=16,batch_size=64,validation_data=(x_val,y_val))
The epoch results look like this:
Epoch 1/16
416/416 [==============================] - 832s 2s/step - loss: 0.7742 - accuracy: 0.8689 - val_loss: 0.5149 - val_accuracy: 0.8451
Epoch 2/16
416/416 [==============================] - 825s 2s/step - loss: 0.5608 - accuracy: 0.8585 - val_loss: 0.3776 - val_accuracy: 0.8808
Epoch 3/16
416/416 [==============================] - 775s 2s/step - loss: 0.1926 - accuracy: 0.9338 - val_loss: 0.3328 - val_accuracy: 0.9066
Epoch 4/16
416/416 [==============================] - 587s 1s/step - loss: 0.0984 - accuracy: 0.9650 - val_loss: 0.3163 - val_accuracy: 0.9388
Epoch 5/16
416/416 [==============================] - 578s 1s/step - loss: 0.0606 - accuracy: 0.9798 - val_loss: 0.3584 - val_accuracy: 0.9357
Epoch 6/16
416/416 [==============================] - 511s 1s/step - loss: 0.0457 - accuracy: 0.9860 - val_loss: 0.5067 - val_accuracy: 0.9360
Epoch 7/16
416/416 [==============================] - 476s 1s/step - loss: 0.3649 - accuracy: 0.8912 - val_loss: 0.4446 - val_accuracy: 0.8645
Epoch 8/16
416/416 [==============================] - 476s 1s/step - loss: 0.3108 - accuracy: 0.9006 - val_loss: 0.6096 - val_accuracy: 0.8681
Epoch 9/16
416/416 [==============================] - 477s 1s/step - loss: 0.2397 - accuracy: 0.9158 - val_loss: 0.4061 - val_accuracy: 0.9042
Epoch 10/16
416/416 [==============================] - 502s 1s/step - loss: 0.1334 - accuracy: 0.9532 - val_loss: 0.3673 - val_accuracy: 0.9281
Epoch 11/16
416/416 [==============================] - 478s 1s/step - loss: 0.2787 - accuracy: 0.9184 - val_loss: 0.6745 - val_accuracy: 0.9039
Epoch 12/16
416/416 [==============================] - 481s 1s/step - loss: 0.7476 - accuracy: 0.8649 - val_loss: 0.4643 - val_accuracy: 0.8777
Epoch 13/16
416/416 [==============================] - 488s 1s/step - loss: 0.2187 - accuracy: 0.9271 - val_loss: 0.3347 - val_accuracy: 0.9102
Epoch 14/16
416/416 [==============================] - 483s 1s/step - loss: 4.0347 - accuracy: 0.9171 - val_loss: 0.6267 - val_accuracy: 0.7980
Epoch 15/16
416/416 [==============================] - 476s 1s/step - loss: 0.5838 - accuracy: 0.8095 - val_loss: 0.4481 - val_accuracy: 0.8663
Epoch 16/16
416/416 [==============================] - 492s 1s/step - loss: 0.4916 - accuracy: 0.8520 - val_loss: 1.0406 - val_accuracy: 0.6113
My first question: model.fit keeps the weights from the last epoch, but my last epoch is not the best one (epoch 4/16 is the best, based on the minimum val_loss).
So how can I get a model with the parameters from epoch 4/16?
Note: I have saved the model.
I realize that if I add ModelCheckpoint to model.fit, the weights at the minimum val_loss can be saved. However, since the code takes a long time to run, is it possible to extract the minimum-val_loss weights directly from the model I already saved, without running the training again?
My second question is that I do not understand how ModelCheckpoint works, since my understanding was that ModelCheckpoint stops at the best epoch.
If I have a ModelCheckpoint like the one below:
mc = ModelCheckpoint('best_model.h5', monitor='val_loss', mode='min', save_best_only=True)
If I train for 16 epochs and the minimum val_loss happens at epoch 4/16, does ModelCheckpoint stop training at epoch 4/16 and save the parameters? But then epochs 5 to 16 never run, so how would it know that epoch 4 is the best? Or does the code still run all 16 epochs and just save the best one (epoch 4)?
Thanks!!
ModelCheckpoint does not stop the training. After each epoch it compares the new result with the current best one and keeps whichever is better (see the documentation and source code for the callback); after training finishes, you only need to reload the saved file to get the best weights.
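A minimal sketch of that workflow, reusing the fit call from the question (imports assume tf.keras; the file name and callback settings are the ones shown above):

from tensorflow.keras.callbacks import ModelCheckpoint
from tensorflow.keras.models import load_model

# overwrite 'best_model.h5' only when val_loss improves
mc = ModelCheckpoint('best_model.h5', monitor='val_loss', mode='min', save_best_only=True)

# all 16 epochs still run; the file just keeps the best epoch's weights (epoch 4 in this run)
history = model.fit(x_train, y_train, epochs=16, batch_size=64,
                    validation_data=(x_val, y_val), callbacks=[mc])

# load the best checkpoint instead of using the final-epoch weights
best_model = load_model('best_model.h5')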
I am trying to classify a set of images within two categories: left and right.
I built a CNN using Keras, my classifier seems to work well:
I have 1,939 images used for training (50% left, 50% right)
I have 648 images used for validation (50% left, 50% right)
All images are 115x45, in greyscale
acc is increasing up to 99.53%
val_acc is increasing up to 98.38%
Both loss and val_loss are converging close to 0
Keras' verbose output looks normal to me:
60/60 [==============================] - 6s 98ms/step - loss: 0.6295 - acc: 0.6393 - val_loss: 0.4877 - val_acc: 0.7641
Epoch 2/32
60/60 [==============================] - 5s 78ms/step - loss: 0.4825 - acc: 0.7734 - val_loss: 0.3403 - val_acc: 0.8799
Epoch 3/32
60/60 [==============================] - 5s 77ms/step - loss: 0.3258 - acc: 0.8663 - val_loss: 0.2314 - val_acc: 0.9042
Epoch 4/32
60/60 [==============================] - 5s 83ms/step - loss: 0.2498 - acc: 0.8942 - val_loss: 0.2329 - val_acc: 0.9042
Epoch 5/32
60/60 [==============================] - 5s 76ms/step - loss: 0.2408 - acc: 0.9002 - val_loss: 0.1426 - val_acc: 0.9432
Epoch 6/32
60/60 [==============================] - 5s 80ms/step - loss: 0.1968 - acc: 0.9260 - val_loss: 0.1484 - val_acc: 0.9367
Epoch 7/32
60/60 [==============================] - 5s 77ms/step - loss: 0.1621 - acc: 0.9319 - val_loss: 0.1141 - val_acc: 0.9578
Epoch 8/32
60/60 [==============================] - 5s 81ms/step - loss: 0.1600 - acc: 0.9361 - val_loss: 0.1229 - val_acc: 0.9513
Epoch 9/32
60/60 [==============================] - 4s 70ms/step - loss: 0.1358 - acc: 0.9462 - val_loss: 0.0884 - val_acc: 0.9692
Epoch 10/32
60/60 [==============================] - 4s 74ms/step - loss: 0.1193 - acc: 0.9542 - val_loss: 0.1232 - val_acc: 0.9529
Epoch 11/32
60/60 [==============================] - 5s 79ms/step - loss: 0.1075 - acc: 0.9595 - val_loss: 0.0865 - val_acc: 0.9724
Epoch 12/32
60/60 [==============================] - 4s 73ms/step - loss: 0.1209 - acc: 0.9531 - val_loss: 0.1067 - val_acc: 0.9497
Epoch 13/32
60/60 [==============================] - 4s 73ms/step - loss: 0.1135 - acc: 0.9609 - val_loss: 0.0860 - val_acc: 0.9838
Epoch 14/32
60/60 [==============================] - 4s 70ms/step - loss: 0.0869 - acc: 0.9682 - val_loss: 0.0907 - val_acc: 0.9675
Epoch 15/32
60/60 [==============================] - 4s 71ms/step - loss: 0.0960 - acc: 0.9637 - val_loss: 0.0996 - val_acc: 0.9643
Epoch 16/32
60/60 [==============================] - 4s 73ms/step - loss: 0.0951 - acc: 0.9625 - val_loss: 0.1223 - val_acc: 0.9481
Epoch 17/32
60/60 [==============================] - 4s 70ms/step - loss: 0.0685 - acc: 0.9729 - val_loss: 0.1220 - val_acc: 0.9513
Epoch 18/32
60/60 [==============================] - 4s 73ms/step - loss: 0.0791 - acc: 0.9715 - val_loss: 0.0959 - val_acc: 0.9692
Epoch 19/32
60/60 [==============================] - 4s 71ms/step - loss: 0.0595 - acc: 0.9802 - val_loss: 0.0648 - val_acc: 0.9773
Epoch 20/32
60/60 [==============================] - 4s 71ms/step - loss: 0.0486 - acc: 0.9844 - val_loss: 0.0691 - val_acc: 0.9838
Epoch 21/32
60/60 [==============================] - 4s 70ms/step - loss: 0.0499 - acc: 0.9812 - val_loss: 0.1166 - val_acc: 0.9627
Epoch 22/32
60/60 [==============================] - 4s 71ms/step - loss: 0.0481 - acc: 0.9844 - val_loss: 0.0875 - val_acc: 0.9734
Epoch 23/32
60/60 [==============================] - 4s 70ms/step - loss: 0.0533 - acc: 0.9814 - val_loss: 0.1094 - val_acc: 0.9724
Epoch 24/32
60/60 [==============================] - 4s 70ms/step - loss: 0.0487 - acc: 0.9812 - val_loss: 0.0722 - val_acc: 0.9740
Epoch 25/32
60/60 [==============================] - 4s 72ms/step - loss: 0.0441 - acc: 0.9828 - val_loss: 0.0992 - val_acc: 0.9773
Epoch 26/32
60/60 [==============================] - 4s 71ms/step - loss: 0.0667 - acc: 0.9726 - val_loss: 0.0964 - val_acc: 0.9643
Epoch 27/32
60/60 [==============================] - 4s 73ms/step - loss: 0.0436 - acc: 0.9835 - val_loss: 0.0771 - val_acc: 0.9708
Epoch 28/32
60/60 [==============================] - 4s 71ms/step - loss: 0.0322 - acc: 0.9896 - val_loss: 0.0872 - val_acc: 0.9756
Epoch 29/32
60/60 [==============================] - 5s 80ms/step - loss: 0.0294 - acc: 0.9943 - val_loss: 0.1414 - val_acc: 0.9578
Epoch 30/32
60/60 [==============================] - 5s 76ms/step - loss: 0.0348 - acc: 0.9870 - val_loss: 0.1102 - val_acc: 0.9659
Epoch 31/32
60/60 [==============================] - 5s 76ms/step - loss: 0.0306 - acc: 0.9922 - val_loss: 0.0794 - val_acc: 0.9659
Epoch 32/32
60/60 [==============================] - 5s 76ms/step - loss: 0.0152 - acc: 0.9953 - val_loss: 0.1051 - val_acc: 0.9724
Model: "sequential_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_1 (Conv2D) (None, 113, 43, 32) 896
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 56, 21, 32) 0
_________________________________________________________________
conv2d_2 (Conv2D) (None, 54, 19, 32) 9248
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 27, 9, 32) 0
_________________________________________________________________
flatten_1 (Flatten) (None, 7776) 0
_________________________________________________________________
dense_1 (Dense) (None, 128) 995456
_________________________________________________________________
dense_2 (Dense) (None, 1) 129
=================================================================
Total params: 1,005,729
Trainable params: 1,005,729
Non-trainable params: 0
So everything looks great, but when I tried to predict the category of 2,000 samples I got very strange results, with an accuracy < 70%.
At first I thought this sample might be biased, so I tried, instead, to predict the images in the validation dataset.
I should have a 98.38% accuracy, and a perfect 50-50 split, but instead, once again I got:
170 images predicted right, instead of 324, with an accuracy of 98.8%
478 images predicted left, instead of 324, with an accuracy of 67.3%
Average accuracy: 75.69% and not 98.38%
I guess something is wrong either in my CNN or my prediction script.
CNN classifier code:
from keras.models import Sequential
from keras.layers import Conv2D
from keras.layers import MaxPooling2D
from keras.layers import Flatten
from keras.layers import Dense
from PIL import ImageFile
ImageFile.LOAD_TRUNCATED_IMAGES = True
# Init CNN
classifier = Sequential()
# Step 1 - Convolution
classifier.add(Conv2D(32, (3, 3), input_shape = (115, 45, 3), activation = 'relu'))
# Step 2 - Pooling
classifier.add(MaxPooling2D(pool_size = (2, 2)))
# Adding a second convolutional layer
classifier.add(Conv2D(32, (3, 3), activation = 'relu'))
classifier.add(MaxPooling2D(pool_size = (2, 2)))
# Step 3 - Flattening
classifier.add(Flatten())
# Step 4 - Full connection
classifier.add(Dense(units = 128, activation = 'relu'))
classifier.add(Dense(units = 1, activation = 'sigmoid'))
# Compiling the CNN
classifier.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])
# Part 2 - Fitting the CNN to the images
from keras.preprocessing.image import ImageDataGenerator
import numpy
train_datagen = ImageDataGenerator(rescale = 1./255, shear_range = 0.2, zoom_range = 0.2, horizontal_flip = False)
test_datagen = ImageDataGenerator(rescale = 1./255)
training_set = train_datagen.flow_from_directory('./dataset/training_set',
                                                 target_size = (115, 45),
                                                 batch_size = 32,
                                                 class_mode = 'binary')
test_set = test_datagen.flow_from_directory('./dataset/test_set',
                                            target_size = (115, 45),
                                            batch_size = 32,
                                            class_mode = 'binary')
classifier.fit_generator(training_set,
                         steps_per_epoch = 1939/32, # total samples / batch size
                         epochs = 32,
                         validation_data = test_set,
                         validation_steps = 648/32)
# Save the classifier
classifier.evaluate_generator(generator=test_set)
classifier.summary()
classifier.save('./classifier.h5')
Prediction code:
from keras.models import Sequential
from keras.layers import Conv2D
from keras.layers import MaxPooling2D
from keras.layers import Flatten
from keras.layers import Dense
from keras.models import load_model
from keras.preprocessing.image import ImageDataGenerator
import os
import numpy as np
from keras.preprocessing import image
from shutil import copyfile
classifier = load_model('./classifier.h5')
folder = './small/'
files = os.listdir(folder)
pleft = 0
pright = 0
for f in files:
    test_image = image.load_img(folder+f, target_size = (115, 45))
    test_image = image.img_to_array(test_image)
    test_image = np.expand_dims(test_image, axis = 0)
    result = classifier.predict(test_image)
    #print training_set.class_indices
    if result[0][0] == 1:
        pright=pright+1
        prediction = 'right'
        copyfile(folder+'../'+f, '/found_right/'+f)
    else:
        prediction = 'left'
        copyfile(folder+'../'+f, '/found_left/'+f)
        pleft=pleft+1
ptot = pleft + pright
print 'Left = '+str(pleft)+' ('+str(pleft / (ptot / 100))+'%)'
print 'Right = '+str(pright)
print 'Total = '+str(ptot)
Output:
Left = 478 (79%)
Right = 170
Total = 648
Your help will be much appreciated.
I resolved this issue by doing two things:
As @Matias Valdenegro suggested, I had to rescale the image values before predicting; I added test_image /= 255. before calling predict().
As my val_loss was still a bit high, I also added an EarlyStopping callback and two Dropout() layers before my Dense layers.
My prediction results are now consistent with the ones obtained during training/validation. A sketch of both changes is below.
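A minimal sketch of the two fixes applied to the script above (the Dropout rates, the EarlyStopping patience, and the 0.5 decision threshold are illustrative choices, not values taken from the original code):

from keras.callbacks import EarlyStopping
from keras.layers import Dropout

# training side: regularize the dense head and stop once val_loss stops improving
classifier.add(Flatten())
classifier.add(Dropout(0.5))
classifier.add(Dense(units = 128, activation = 'relu'))
classifier.add(Dropout(0.5))
classifier.add(Dense(units = 1, activation = 'sigmoid'))
early_stop = EarlyStopping(monitor='val_loss', patience=3)
# ...then pass callbacks=[early_stop] to classifier.fit_generator(...)

# prediction side: apply the same 1./255 rescaling that ImageDataGenerator used during training
test_image = image.load_img(folder + f, target_size=(115, 45))
test_image = image.img_to_array(test_image)
test_image = np.expand_dims(test_image, axis=0)
test_image /= 255.
result = classifier.predict(test_image)
prediction = 'right' if result[0][0] >= 0.5 else 'left'  # threshold the sigmoid output at 0.5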
I am trying to train a model to solve multi-class classification problem.
I've got a problem: the training accuracy and validation accuracy don't change at all over the epochs. Like this:
Train on 4642 samples, validate on 516 samples
Epoch 1/100
- 1s - loss: 1.7986 - acc: 0.4649 - val_loss: 1.7664 - val_acc: 0.4942
Epoch 2/100
- 1s - loss: 1.6998 - acc: 0.5017 - val_loss: 1.7035 - val_acc: 0.4942
Epoch 3/100
- 1s - loss: 1.6956 - acc: 0.5022 - val_loss: 1.7000 - val_acc: 0.4942
Epoch 4/100
- 1s - loss: 1.6900 - acc: 0.5022 - val_loss: 1.6954 - val_acc: 0.4942
Epoch 5/100
- 1s - loss: 1.6931 - acc: 0.5017 - val_loss: 1.7058 - val_acc: 0.4942
...
Epoch 98/100
- 1s - loss: 1.6842 - acc: 0.5022 - val_loss: 1.6995 - val_acc: 0.4942
Epoch 99/100
- 1s - loss: 1.6844 - acc: 0.5022 - val_loss: 1.6977 - val_acc: 0.4942
Epoch 100/100
- 1s - loss: 1.6838 - acc: 0.5022 - val_loss: 1.6934 - val_acc: 0.4942
My code with keras:
y_train = to_categorical(y_train, num_classes=11)
X_train, X_test, Y_train, Y_test = train_test_split(x_train, y_train,
                                                    test_size=0.1, random_state=42)
model = Sequential()
model.add(Dense(64, init='normal', activation='relu', input_dim=160))
model.add(Dropout(0.3))
model.add(Dense(32, init='normal', activation='relu'))
model.add(BatchNormalization())
model.add(Dense(11, init='normal', activation='softmax'))
model.summary()
print("[INFO] compiling model...")
model.compile(optimizer=keras.optimizers.Adam(lr=0.01, beta_1=0.9,
                                              beta_2=0.999, epsilon=None, decay=0.0, amsgrad=False),
              loss='categorical_crossentropy',
              metrics=['accuracy'])
print("[INFO] training network...")
model.fit(X_train, Y_train, epochs=100, batch_size=32, verbose=2, validation_data = (X_test, Y_test))
Please help me. Thank you!
I had a similar problem once. For me, it turned out that making sure I didn't have too many missing values in x_train (filling them with a value representing "unknown", or with the median), dropping columns that really didn't help (all had the same value), and normalizing the x_train data helped.
An example from my data/model:
# load data
x_main = pd.read_csv("glioma DB X.csv")
y_main = pd.read_csv("glioma DB Y.csv")
# fill with median (will have to improve later, not done yet)
fill_median =['Surgery_SBRT','df','Dose','Ki67','KPS','BMI','tumor_size']
x_main[fill_median] = x_main[fill_median].fillna(x_main[fill_median].median())
x_main['Neurofc'] = x_main['Neurofc'].fillna(2)
x_main['comorbid'] = x_main['comorbid'].fillna(int(x_main['comorbid'].median()))
# drop surgery
x_main = x_main.drop(['Surgery'], axis=1)
# normalize all x
x_main_normalized = x_main.apply(lambda x: (x-np.mean(x))/(np.std(x)+1e-10))