Transfer Learning - Val_loss strange behaviour - machine-learning

I am trying to use transfer learning with MobileNetV2 from keras.applications in Python.
My images belong to 4 classes, with 8000, 7000, 8000 and 8000 images in the first, second, third and last class respectively. My images are grayscale and resized from 1024x1024 to 128x128.
I removed the classification dense layers from MobileNetV2 and added my own dense layers:
Layer (type)                                          Output Shape    Param #
global_average_pooling2d_1 (GlobalAveragePooling2D)   (None, 1280)    0
dense_1 (Dense)                                       (None, 4)       5124
dropout_1 (Dropout)                                   (None, 4)       0
dense_2 (Dense)                                       (None, 4)       20
dense_3 (Dense)                                       (None, 4)       20
Total params: 2,263,148
Trainable params: 5,164
Non-trainable params: 2,257,984
As you can see, I added 2 dense layers with dropout as a regularizer (plus the final softmax layer).
Furthermore, I used the following settings:
opt = optimizers.SGD(lr=0.001, decay=4e-5, momentum=0.9)
model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy'])
batch_size = 32
My training results are very weird:
Epoch
1 loss: 1.3378 - acc: 0.3028 - val_loss: 1.4629 - val_acc: 0.2702
2 loss: 1.2807 - acc: 0.3351 - val_loss: 1.3297 - val_acc: 0.3208
3 loss: 1.2641 - acc: 0.3486 - val_loss: 1.4428 - val_acc: 0.3707
4 loss: 1.2178 - acc: 0.3916 - val_loss: 1.4231 - val_acc: 0.3758
5 loss: 1.2100 - acc: 0.3909 - val_loss: 1.4009 - val_acc: 0.3625
6 loss: 1.1979 - acc: 0.3976 - val_loss: 1.5025 - val_acc: 0.3116
7 loss: 1.1943 - acc: 0.3988 - val_loss: 1.4510 - val_acc: 0.2872
8 loss: 1.1926 - acc: 0.3965 - val_loss: 1.5162 - val_acc: 0.3072
9 loss: 1.1888 - acc: 0.4004 - val_loss: 1.5659 - val_acc: 0.3304
10 loss: 1.1906 - acc: 0.3969 - val_loss: 1.5655 - val_acc: 0.3260
11 loss: 1.1864 - acc: 0.3999 - val_loss: 1.6286 - val_acc: 0.2967
(...)
To summarize: the training loss no longer decreases and is still very high, and the model also overfits.
You may ask why I added only 2 dense layers with 4 neurons each. In the beginning I tried different configurations (e.g. 128 and 64 neurons, and also different regularizers); then overfitting was a huge problem, i.e. training accuracy was almost 1 while the test loss was still far from 0.
I am a little bit confused about what is going on, since something is tremendously wrong here.
Fine-tuning attempts:
Different numbers of neurons in the dense layers of the classification part, varying from 1024 down to 4.
Different learning rates (0.01, 0.001, 0.0001).
Different batch sizes (16, 32, 64).
Different L1 regularizers with 0.001, 0.0001.
Results:
Always huge overfitting
from keras.applications import MobileNetV2
from keras.layers import GlobalAveragePooling2D, Dense, Dropout
from keras.models import Model
from keras import optimizers

base_model = MobileNetV2(input_shape=(128, 128, 3), weights='imagenet', include_top=False)

# define classifier head
x = base_model.output
x = GlobalAveragePooling2D()(x)
x = Dense(4, activation='relu')(x)
x = Dropout(0.8)(x)
x = Dense(4, activation='relu')(x)
preds = Dense(4, activation='softmax')(x)  # final layer with softmax activation

model = Model(inputs=base_model.input, outputs=preds)

# freeze everything except the newly added classifier layers
for layer in model.layers[:-4]:
    layer.trainable = False

opt = optimizers.SGD(lr=0.001, decay=4e-5, momentum=0.9)
model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy'])

batch_size = 32
EPOCHS = int(trainY.size / batch_size)

H = model.fit(trainX, trainY, validation_data=(testX, testY), epochs=EPOCHS, batch_size=batch_size)
The result should be that there is no overfitting and val_loss is close to 0. I know that from some papers working on similar image sets.
UPDATE:
Here are some pictures of val_loss, train_loss and accuracy:
2 dense layers with 16 and 8 neurons, lr=0.001 with decay 1e-6, batch size = 25

Here, you used
x = Dropout(0.8)(x)
which means dropping 80% of the units, but I assume you want to drop only 20%,
so replace it with x = Dropout(0.2)(x).
Also, please go through the Keras documentation for Dropout if needed.
An extract from the above documentation:
keras.layers.Dropout(rate, noise_shape=None, seed=None)
rate: float between 0 and 1. Fraction of the input units to drop.
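Applied to the classifier head from the question, the suggested change would look roughly like this (a minimal sketch reusing the layer structure from the question):

x = base_model.output
x = GlobalAveragePooling2D()(x)
x = Dense(4, activation='relu')(x)
x = Dropout(0.2)(x)  # drop 20% of the units instead of 80%
x = Dense(4, activation='relu')(x)
preds = Dense(4, activation='softmax')(x)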

I am not sure what caused the error above, but I know how to fix it. I completely trained the pretrained network (together with one dense layer with 4 neurons and softmax). The results are more than satisfying.
I also tested VGG16, where I trained only the dense output layer, and it worked fine.
It seems that MobileNetV2 learns features that are undesirable for my data set. My data sets are radar images, which look very artificial (Choi-Williams distributions of 'LPI' signals). On the other hand, these images are very easy (they are basically just edges in a grayscale image), so it is still unknown to me why model-based transfer learning doesn't work for MobileNetV2.
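For reference, a minimal sketch of what "completely training the pretrained network" described above could look like, assuming the same MobileNetV2 backbone and optimizer settings as in the question (the exact hyperparameters used for the full training are not given in the post):

from keras.applications import MobileNetV2
from keras.layers import GlobalAveragePooling2D, Dense
from keras.models import Model
from keras import optimizers

base_model = MobileNetV2(input_shape=(128, 128, 3), weights='imagenet', include_top=False)
x = GlobalAveragePooling2D()(base_model.output)
preds = Dense(4, activation='softmax')(x)  # single dense layer with 4 neurons and softmax
model = Model(inputs=base_model.input, outputs=preds)

# every layer, including the pretrained backbone, stays trainable
for layer in model.layers:
    layer.trainable = True

opt = optimizers.SGD(lr=0.001, decay=4e-5, momentum=0.9)
model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy'])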

This could be the result of your dropout rate being too high. You do not show your data generators, so I can't tell if there is an issue there, but I suspect that you need to compile using
loss='sparse_categorical_crossentropy'
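Which of the two losses is correct depends on how the labels are encoded: categorical_crossentropy expects one-hot vectors, while sparse_categorical_crossentropy expects integer class indices. A quick comparison:

# labels encoded like [0, 0, 1, 0]  -> categorical_crossentropy
model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy'])

# labels encoded like 2             -> sparse_categorical_crossentropy
model.compile(optimizer=opt, loss='sparse_categorical_crossentropy', metrics=['accuracy'])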

Related

Why am I getting a high loss value for neural network regression

I have data in the following format, consisting of 80 instances. I need to predict two parameters: latency and accuracy.
   No  Model    Technique    Latency  Accuracy
0   1  Net      Repartition  31308.4      0.99
1   2  Net      Connection   30338.2      0.79
2   3  MobiNet  Repartition  20360.1      0.89
import pandas as pd
from keras.models import Sequential
from keras.layers import Dense

# 'data' is the DataFrame described above (loading code not shown)
predictors = data.drop(['Latency', 'Accuracy'], axis=1)
target = data[['Latency', 'Accuracy']]
predictors_cat_converted = pd.get_dummies(predictors, prefix=['Model', 'Technique'])
pre_norms = (predictors_cat_converted-predictors_cat_converted.mean()/predictors_cat_converted.std())

def regression():
    model = Sequential()
    model.add(Dense(50, activation='relu', input_shape=(n_cols,)))
    model.add(Dense(50, activation='relu'))  # hidden layer
    model.add(Dense(2))  # output
    model.compile(optimizer='adam', loss='mean_squared_error')
    return model

model = regression()
model.fit(pre_norms, target, validation_split=.3, epochs=100, verbose=1)
The output shows very high loss values:
Epoch 1/100
2/2 [==============================] - 1s 275ms/step - loss: 256321162.6667 - val_loss: 262150224.0000
Epoch 2/100
2/2 [==============================] - 0s 23ms/step - loss: 246612645.3333 - val_loss: 262146176.0000
Epoch 3/100
2/2 [==============================] - 0s 22ms/step - loss: 251778928.0000 - val_loss: 262142000.0000
Epoch 4/100
2/2 [==============================] - 0s 26ms/step - loss: 252470826.6667 - val_loss: 262137664.0000
Epoch 5/100
2/2 [==============================] - 0s 25ms/step - loss: 255799392.0000 - val_loss: 262133200.0000
Epoch 6/100
You have very little data: just 2 feature columns, 80 rows and 2 target variables. All you can do is:
Add more data.
Normalize your data and then feed it to the neural network.
If the neural network doesn't give good accuracy, try Random Forest or XGBoost.
I also want to add one thing: your neural network architecture is wrong. A Dense layer with 2 outputs and a softmax activation isn't going to give you good results here. You have to use TensorFlow's Functional API and build a 1-input, 2-output neural network architecture, as sketched below.
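A minimal sketch of such a 1-input, 2-output model (names like latency_out and accuracy_out are illustrative; n_cols is the number of input columns, as in the question):

from keras.layers import Input, Dense
from keras.models import Model

inputs = Input(shape=(n_cols,))
x = Dense(50, activation='relu')(inputs)
x = Dense(50, activation='relu')(x)
latency_out = Dense(1, name='latency')(x)    # regression head for Latency
accuracy_out = Dense(1, name='accuracy')(x)  # regression head for Accuracy

model = Model(inputs=inputs, outputs=[latency_out, accuracy_out])
model.compile(optimizer='adam', loss='mean_squared_error')
# model.fit(pre_norms, [target['Latency'], target['Accuracy']], validation_split=.3, epochs=100)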
One of your target variables reaches quite large values. As shown in the excerpt of your data, "Latency" reaches values around 30,000 and 20,000.
Evidently, if your model makes very wrong predictions in the beginning, e.g. if it predicts "1" for your Latency, the MSE will be extremely high.
You could normalize your targets as you did with your inputs to make it easier for your network to learn them. Your MSE, and hence your loss, should then be much smaller.
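A sketch of what that could look like with the variables from the question (predictions would then need to be scaled back to the original units):

# scale both targets to zero mean / unit variance before fitting
target_norm = (target - target.mean()) / target.std()
model.fit(pre_norms, target_norm, validation_split=.3, epochs=100, verbose=1)

# to map predictions back to the original scale:
# preds = model.predict(pre_norms) * target.std().values + target.mean().values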

Transfer learning only works with trainable set to false

I have two models initialized like this
from tensorflow import keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Flatten, Dense
from tensorflow.keras.optimizers import Adam

vgg19 = keras.applications.vgg19.VGG19(
    weights='imagenet',
    include_top=False,
    input_shape=(img_height, img_width, img_channels))

for layer in vgg19.layers:
    layer.trainable = False

model = Sequential(layers=vgg19.layers)
model.add(Flatten())
model.add(Dense(1024, activation='relu'))
model.add(Dense(512, activation='relu'))
model.add(Dense(10, activation='softmax'))

opt = Adam(learning_rate=0.001, beta_1=0.9)
model.compile(
    loss='categorical_crossentropy',
    optimizer=opt,
    metrics=['accuracy'])
and
vgg19_2 = keras.applications.vgg19.VGG19(
    weights='imagenet',
    include_top=False,
    input_shape=(img_height, img_width, img_channels))

model2 = Sequential(layers=vgg19_2.layers)
model2.add(Dense(1024, activation='relu'))
model2.add(Dense(512, activation='relu'))
model2.add(Dense(10, activation='softmax'))

opt = Adam(learning_rate=0.001, beta_1=0.9)
model2.compile(
    loss='categorical_crossentropy',
    optimizer=opt,
    metrics=['accuracy'])
In other words, the only difference is that the second model doesn't set the vgg19 layers' trainable parameter to False. Unfortunately, the model with trainable set to True does not learn the data.
When I use model.fit I get
Trainable set to false:
Epoch 1/51
2500/2500 [==============================] - 49s 20ms/step - loss: 1.4319 - accuracy: 0.5466 - val_loss: 1.3951 - val_accuracy: 0.5693
Epoch 2/51
2500/2500 [==============================] - 47s 19ms/step - loss: 1.1508 - accuracy: 0.6009 - val_loss: 0.7832 - val_accuracy: 0.6023
Epoch 3/51
2500/2500 [==============================] - 48s 19ms/step - loss: 1.0816 - accuracy: 0.6256 - val_loss: 0.6782 - val_accuracy: 0.6153
Epoch 4/51
2500/2500 [==============================] - 47s 19ms/step - loss: 1.0396 - accuracy: 0.6450 - val_loss: 1.3045 - val_accuracy: 0.6103
The model trains to about 65% accuracy within a few epochs. However, using model2, which should be able to make even better predictions (since there are more trainable parameters), I get:
Epoch 1/5
2500/2500 [==============================] - 226s 90ms/step - loss: 2.3028 - accuracy: 0.0980 - val_loss: 2.3038 - val_accuracy: 0.1008
Epoch 2/5
2500/2500 [==============================] - 311s 124ms/step - loss: 2.3029 - accuracy: 0.0980 - val_loss: 2.2988 - val_accuracy: 0.1017
Epoch 3/5
2500/2500 [==============================] - 306s 123ms/step - loss: 2.3029 - accuracy: 0.0980 - val_loss: 2.3052 - val_accuracy: 0.0997
Epoch 4/5
2500/2500 [==============================] - 321s 129ms/step - loss: 2.3029 - accuracy: 0.0972 - val_loss: 2.3028 - val_accuracy: 0.0997
Epoch 5/5
2500/2500 [==============================] - 300s 120ms/step - loss: 2.3028 - accuracy: 0.0988 - val_loss: 2.3027 - val_accuracy: 0.1007
When I then try to compute weight gradients on my data, I get only zeros. I understand that it may take a long time to train such a big neural net like VGG to the optimum, but considering that the calculated gradients for the last 3 layers should be very similar in both cases, why is the accuracy so low? Training for more time gives no improvement.
Try this:
Train the first model, which sets trainable to False. You don't have to train it to saturation, so I would start with your 5 epochs.
Go back and set trainable to True for all the vgg19 parameters. Then, per the documentation, you can rebuild and recompile the model to have these changes take effect.
Continue training on the rebuilt model, which now has all parameters available for tuning.
It is very common in transfer learning to completely freeze the transferred layers in order to preserve them. In the early stages of training your additional layers don't know what to do. That means a noisy gradient by the time it gets to the transferred layers, which will quickly "detune" them away from their previously well-tuned weights.
Putting it all together into some code, it would look something like this.
# Original code. Transfer VGG and freeze the weights.
vgg19 = keras.applications.vgg19.VGG19(
    weights='imagenet',
    include_top=False,
    input_shape=(img_height, img_width, img_channels))
for layer in vgg19.layers:
    layer.trainable = False

model = Sequential(layers=vgg19.layers)
model.add(Flatten())
model.add(Dense(1024, activation='relu'))
model.add(Dense(512, activation='relu'))
model.add(Dense(10, activation='softmax'))

opt = Adam(learning_rate=0.001, beta_1=0.9)
model.compile(
    loss='categorical_crossentropy',
    optimizer=opt,
    metrics=['accuracy'])

model.fit()

# New second stage: unfreeze and continue training.
for layer in vgg19.layers:
    layer.trainable = True

full_model = Sequential(layers=model.layers)
full_model.compile(
    loss='categorical_crossentropy',
    optimizer=opt,
    metrics=['accuracy'])

full_model.fit()
You may want to tune the learning rate for the fine-tuning stage. It's not essential at the start, just something to keep in mind.
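For example, one common choice (a hypothetical sketch using the names from the code above) is to recompile the unfrozen model with a much smaller learning rate before continuing:

fine_tune_opt = Adam(learning_rate=1e-5, beta_1=0.9)  # much smaller than the 1e-3 used for the head
full_model.compile(
    loss='categorical_crossentropy',
    optimizer=fine_tune_opt,
    metrics=['accuracy'])
full_model.fit()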
A third option is to use discriminative learning rates, as introduced by Jeremy Howard and Sebastian Ruder in the ULMFiT paper. The idea is that, in Transfer Learning, you usually want the later layers to learn faster than the earlier, transferred layers. So you actually set the learning rates to be different for different sets of layers. The fastai library has a PyTorch implementation that works by dividing the model into "layer groups" and allowing different parameters for each.
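Core Keras has no built-in equivalent of fastai's layer groups, but the idea can be sketched with two optimizers and a custom training step. This is a rough, hypothetical sketch assuming tf.keras and that all layers have been set trainable; variable names follow the code above:

import tensorflow as tf

slow_opt = tf.keras.optimizers.Adam(learning_rate=1e-5)  # transferred VGG layers
fast_opt = tf.keras.optimizers.Adam(learning_rate=1e-3)  # new classification head
loss_fn = tf.keras.losses.CategoricalCrossentropy()

# split the trainable weights into two "layer groups"
base_vars = [v for l in vgg19.layers for v in l.trainable_weights]
head_vars = [v for l in model.layers[len(vgg19.layers):] for v in l.trainable_weights]

@tf.function
def train_step(x, y):
    with tf.GradientTape() as tape:
        loss = loss_fn(y, model(x, training=True))
    grads = tape.gradient(loss, base_vars + head_vars)
    slow_opt.apply_gradients(zip(grads[:len(base_vars)], base_vars))
    fast_opt.apply_gradients(zip(grads[len(base_vars):], head_vars))
    return loss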

Validation loss decreases and validation accuracy decreases in CNN classification

I'm training a classifier on 2 classes (spawned fish or not, from images of scales). The dataset is unbalanced: only 5% of the scales are spawned.
I haven't checked how many spawned fish are in each of the train/validation/test sets, but there are 9073 images, split 70/15/15%. Then I observe in epoch 2 that val_loss decreases while val_acc also decreases. How is that possible?
Im using Keras. The network is EfficientNetB4 from github.com/qubvel.
1600/1600 [==============================] - 1557s 973ms/step - loss: 1.3353 - acc: 0.6474 - val_loss: 0.8055 - val_acc: 0.7046
Epoch 00001: val_loss improved from inf to 0.80548, saving model to ./checkpoints_missing_loss2/salmon_scale_inception.001-0.81.hdf5
Epoch 2/150
1600/1600 [==============================] - 1508s 943ms/step - loss: 0.8013 - acc: 0.7084 - val_loss: 0.6816 - val_acc: 0.6973
Epoch 00002: val_loss improved from 0.80548 to 0.68164, saving model to ./checkpoints_missing_loss2/salmon_scale_inception.002-0.68.hdf5
Edit: here is another example - only 1010 images, but it's balanced - 50/50.
Epoch 5/150
1600/1600 [==============================] - 1562s 976ms/step - loss: 0.0219 - acc: 0.9933 - val_loss: 0.2639 - val_acc: 0.9605
Epoch 00005: val_loss improved from 0.28715 to 0.26390, saving model to ./checkpoints_missing_loss2/salmon_scale_inception.005-0.26.hdf5
Epoch 6/150
1600/1600 [==============================] - 1565s 978ms/step - loss: 0.0059 - acc: 0.9982 - val_loss: 0.4140 - val_acc: 0.9276
Epoch 00006: val_loss did not improve from 0.26390
Epoch 7/150
1600/1600 [==============================] - 1561s 976ms/step - loss: 0.0180 - acc: 0.9941 - val_loss: 0.2379 - val_acc: 0.9276
and val_loss decreases as well as val_acc.
If you have such an unbalanced dataset, the model first classifies everything as the majority class which gets relatively high accuracy, but all probability is distributed to the majority class. The reason is that the final bias can be learned very quickly because the back-propagation path is very short.
In the later stages of the training, the model basically finds reasons not to classify the input with the majority class. At this point, the model starts to make mistakes, the accuracy goes down, but the probability is more evenly distributed, so from the loss perspective, the error is smaller.
With such an imbalanced dataset, I would rather track F-measure instead of accuracy.
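A quick way to track it after each run, using scikit-learn with hypothetical names for the validation arrays (X_val, y_val as integer labels) and assuming a two-unit softmax output; with a single sigmoid output you would threshold at 0.5 instead of taking the argmax:

from sklearn.metrics import f1_score, classification_report
import numpy as np

val_probs = model.predict(X_val)
val_preds = np.argmax(val_probs, axis=1)
print("F1:", f1_score(y_val, val_preds))
print(classification_report(y_val, val_preds))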

Keras validation accuracy is 0, and stays constant throughout the training

I am doing a time series analysis using TensorFlow/Keras in Python.
The overall LSTM model looks like this:
from time import time
import keras

model = keras.models.Sequential()
model.add(keras.layers.LSTM(25, input_shape = (1,1), activation = 'relu', dropout = 0.2, return_sequences = False))
model.add(keras.layers.Dense(1))
model.compile(optimizer = 'adam', loss = 'mean_squared_error', metrics=['acc'])
tensorboard = keras.callbacks.TensorBoard(log_dir="logs/{}".format(time()))
es = keras.callbacks.EarlyStopping(monitor='val_acc', mode='max', verbose=1, patience=50)
mc = keras.callbacks.ModelCheckpoint('/home/sukriti/best_model.h5', monitor='val_loss', mode='min', save_best_only=True)
history = model.fit(trainX_3d, trainY_1d, epochs=50, batch_size=10, verbose=2, validation_data = (testX_3d, testY_1d), callbacks=[mc, es, tensorboard])
I am having the following outcome,
Train on 14015 samples, validate on 3503 samples
Epoch 1/50
- 3s - loss: 0.0222 - acc: 7.1352e-05 - val_loss: 0.0064 - val_acc: 0.0000e+00
Epoch 2/50
- 2s - loss: 0.0120 - acc: 7.1352e-05 - val_loss: 0.0054 - val_acc: 0.0000e+00
Epoch 3/50
- 2s - loss: 0.0108 - acc: 7.1352e-05 - val_loss: 0.0047 - val_acc: 0.0000e+00
Now the val_acc remains unchanged. Is this normal?
What does it signify?
As signified by loss = 'mean_squared_error', you are in a regression setting, where accuracy is meaningless (it is meaningful only in classification problems).
Unfortunately, Keras will not "protect" you in such a case, insisting on computing and reporting back an "accuracy", despite the fact that it is meaningless and inappropriate for your problem - see my answer in What function defines accuracy in Keras when the loss is mean squared error (MSE)?
You should simply remove metrics=['acc'] from your model compilation, and don't bother - in regression settings, MSE itself can (and usually does) serve also as the performance metric.
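Concretely, the compile call (and the EarlyStopping monitor, which currently watches val_acc) would change to something like this, optionally tracking MAE as an additional regression metric:

model.compile(optimizer='adam', loss='mean_squared_error', metrics=['mae'])
es = keras.callbacks.EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50)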
In my case, I had a validation accuracy of 0.0000e+00 throughout training (using Keras and the CNTK-GPU backend) when my batch size was 64 but there were only 120 samples in my validation set (divided into three classes). After I changed the batch size to 60, I got normal accuracy values.
It will not improve by changing the batch size or the metrics. I had the same problem, but when I shuffled my training and validation data sets, the 0.0000e+00 went away.

ResNet: 100% accuracy during training, but 33% prediction accuracy with the same data

I am new to machine learning and deep learning, and for learning purposes I tried to play with ResNet. I tried to overfit on a small dataset (3 different images) and see if I can get almost 0 loss and 1.0 accuracy - and I did.
The problem is that predictions on the training images (i.e. the same 3 images used for training) are not correct.
Training Images
Image labels
[1,0,0], [0,1,0], [0,0,1]
My Python code:
import os
import numpy as np
from PIL import Image
from keras.applications.resnet50 import ResNet50

# loading 3 images and resizing them
imgs = np.array([np.array(Image.open("./Images/train/" + fname)
                          .resize((197, 197), Image.ANTIALIAS)) for fname in
                 os.listdir("./Images/train/")]).reshape(-1, 197, 197, 1)

# creating labels
y = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]])

# create resnet model
model = ResNet50(input_shape=(197, 197, 1), classes=3, weights=None)

# compile & fit model
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['acc'])
model.fit(imgs, y, epochs=5, shuffle=True)

# predict on training data
print(model.predict(imgs))
The model does overfit the data:
3/3 [==============================] - 22s - loss: 1.3229 - acc: 0.0000e+00
Epoch 2/5
3/3 [==============================] - 0s - loss: 0.1474 - acc: 1.0000
Epoch 3/5
3/3 [==============================] - 0s - loss: 0.0057 - acc: 1.0000
Epoch 4/5
3/3 [==============================] - 0s - loss: 0.0107 - acc: 1.0000
Epoch 5/5
3/3 [==============================] - 0s - loss: 1.3815e-04 - acc: 1.0000
but predictions are:
[[ 1.05677405e-08 9.99999642e-01 3.95520459e-07]
[ 1.11955103e-08 9.99999642e-01 4.14905685e-07]
[ 1.02637095e-07 9.99997497e-01 2.43751242e-06]]
which means that all images got the label [0,1,0].
Why? And how can that happen?
It's because of the batch normalization layers.
In training phase, the batch is normalized w.r.t. its mean and variance. However, in testing phase, the batch is normalized w.r.t. the moving average of previously observed mean and variance.
Now this is a problem when the number of observed batches is small (e.g., 5 in your example) because in the BatchNormalization layer, by default moving_mean is initialized to be 0 and moving_variance is initialized to be 1.
Given also that the default momentum is 0.99, you'll need to update the moving averages quite a lot of times before they converge to the "real" mean and variance.
That's why the prediction is wrong in the early stage, but is correct after 1000 epochs.
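To put numbers on that: with the update moving_mean = momentum * moving_mean + (1 - momentum) * batch_mean and momentum = 0.99, starting from 0 the moving mean has only covered a fraction 1 - 0.99^n of the true batch mean after n updates:

m = 0.99
for n in (5, 100, 1000):
    print(n, 1 - m ** n)
# 5    -> ~0.049   (about 5% of the way there)
# 100  -> ~0.63
# 1000 -> ~0.99996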
You can verify it by forcing the BatchNormalization layers to operate in "training mode".
During training, the accuracy is 1 and the loss is close to zero:
model.fit(imgs,y,epochs=5,shuffle=True)
Epoch 1/5
3/3 [==============================] - 19s 6s/step - loss: 1.4624 - acc: 0.3333
Epoch 2/5
3/3 [==============================] - 0s 63ms/step - loss: 0.6051 - acc: 0.6667
Epoch 3/5
3/3 [==============================] - 0s 57ms/step - loss: 0.2168 - acc: 1.0000
Epoch 4/5
3/3 [==============================] - 0s 56ms/step - loss: 1.1921e-07 - acc: 1.0000
Epoch 5/5
3/3 [==============================] - 0s 53ms/step - loss: 1.1921e-07 - acc: 1.0000
Now if we evaluate the model, we'll observe high loss and low accuracy because after 5 updates, the moving averages are still pretty close to the initial values:
model.evaluate(imgs,y)
3/3 [==============================] - 3s 890ms/step
[10.745396614074707, 0.3333333432674408]
However, if we manually specify the "learning phase" variable and let the BatchNormalization layers use the "real" batch mean and variance, the result becomes the same as what's observed in fit().
sample_weights = np.ones(3)
learning_phase = 1 # 1 means "training"
ins = [imgs, y, sample_weights, learning_phase]
model.test_function(ins)
[1.192093e-07, 1.0]
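If you are on TF2 / tf.keras, a roughly equivalent check is to call the model with training=True, which makes the BatchNormalization layers use the current batch statistics instead of the moving averages:

preds_training_mode = model(imgs, training=True)  # matches the behaviour seen during fit()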
It's also possible to verify it by changing the momentum to a smaller value.
For example, by adding momentum=0.01 to all the batch norm layers in ResNet50, the prediction after 20 epochs is:
model.predict(imgs)
array([[ 1.00000000e+00, 1.34882026e-08, 3.92139575e-22],
[ 0.00000000e+00, 1.00000000e+00, 0.00000000e+00],
[ 8.70998792e-06, 5.31159838e-10, 9.99991298e-01]], dtype=float32)
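A rough sketch of how the momentum could be lowered on an already-built model (note: depending on the Keras version, you may need to rebuild the model from its config for the new value to take effect):

from keras.layers import BatchNormalization

for layer in model.layers:
    if isinstance(layer, BatchNormalization):
        layer.momentum = 0.01  # moving averages now track the batch statistics much faster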
ResNet50V2 (the 2nd version) has much higher accuracy than ResNet50 in predicting a given image, such as the classic Egyptian cat.
Predicted: [[('n02124075', 'Egyptian_cat', 0.8233388), ('n02123159', 'tiger_cat', 0.103765756), ('n02123045', 'tabby', 0.07267675), ('n03958227', 'plastic_bag', 3.6531426e-05), ('n02127052', 'lynx', 3.647774e-05)]]
Compared with EfficientNet (90% accuracy), ResNet50/101/152 predict quite a bad result (15~50% accuracy) when adopting the given weights provided by François Chollet. It is not related to the weights, but rather to the inherent complexity of the above models. In other words, it is necessary to re-train the above models to predict a given image, whereas EfficientNet does not need such training to predict an image.
For instance, when given a classic cat image, it shows the final results as follows.
1. Using decode_predictions
from keras.applications.imagenet_utils import decode_predictions
Predicted: [[('n01930112', 'nematode', 0.122968934), ('n03041632', 'cleaver', 0.04236396), ('n03838899', 'oboe', 0.03846453), ('n02783161', 'ballpoint', 0.027445247), ('n04270147', 'spatula', 0.024508419)]]
2. Using cv2
import cv2
import numpy as np

img = cv2.resize(cv2.imread('/home/mike/Documents/keras_resnet_common/images/cat.jpg'), (224, 224)).astype(np.float32)
# Remove the per-channel ImageNet training mean (BGR order)
img[:, :, 0] -= 103.939
img[:, :, 1] -= 116.779
img[:, :, 2] -= 123.68
Predicted: [[('n04065272', 'recreational_vehicle', 0.46529356), ('n01819313', 'sulphur-crested_cockatoo', 0.31684962), ('n04074963', 'remote_control', 0.051597465), ('n02111889', 'Samoyed', 0.040776145), ('n04548362', 'wallet', 0.029898684)]]
Therefore, ResNet50/101/152 models are not suitable for predicting an image without training, even when provided with the weights. But users can see their value after 100~1000 epochs of training for prediction, because that helps obtain a better moving average. If users want an easy prediction, EfficientNet is a good choice with the given weights.
It seems that predicting with a batch of images does not work correctly in Keras. It is better to run prediction on each image individually and then calculate the accuracy manually.
As an example, the following code uses individual image prediction instead of batch prediction.
import os
from PIL import Image
import keras
import numpy

###
# I am not including code to load models or train model
###

print("Prediction result:")
dir = "/path/to/test/images"
files = os.listdir(dir)
correct = 0
total = 0

# dictionary mapping each class index to its label
classes = {
    0: 'This is Cat',
    1: 'This is Dog',
}

for file_name in files:
    total += 1
    image = Image.open(dir + "/" + file_name).convert('RGB')
    image = image.resize((100, 100))
    image = numpy.expand_dims(image, axis=0)
    image = numpy.array(image)
    image = image / 255
    pred = model.predict_classes([image])[0]
    sign = classes[pred]
    # compare case-insensitively so 'This is Cat' matches 'cat'
    if ("cat" in file_name.lower()) and ("cat" in sign.lower()):
        print(correct, ". ", file_name, sign)
        correct += 1
    elif ("dog" in file_name.lower()) and ("dog" in sign.lower()):
        print(correct, ". ", file_name, sign)
        correct += 1

print("accuracy: ", (correct / total))
What basically happens is that keras fit(), i.e. your
model.fit()
loses precision while producing the best fit. As the precision is lost, the model's fit gives problems and varied results. keras fit() only produces a good fit, not the required precision.
