I have this simple model (5 input channels), and I expect it to learn to return the second channel:
import keras
import numpy as np
import keras.backend as K

data = np.random.normal(size=(1000, 5))

model = keras.models.Sequential()
model.add(keras.layers.Dense(10, activation='linear', input_shape=(5,)))
model.add(keras.layers.Dense(1, activation='linear'))

def loss(x, y):
    return K.mean(K.square(x - y))

model.compile('adam', loss)
model.fit(data, data[:, 1], epochs=100)
It works great and I get perfect zero loss.
Then I tweak it a little bit: I add an extra channel to the output and decide that I don't care about the second one. I change it to this:
import keras
import numpy as np
import keras.backend as K

data = np.random.normal(size=(1000, 5))

model = keras.models.Sequential()
model.add(keras.layers.Dense(10, activation='linear', input_shape=(5,)))
model.add(keras.layers.Dense(2, activation='linear'))

def loss(x, y):
    return K.mean(K.square(x - y[:, 0]))

model.compile('adam', loss)
model.fit(data, data[:, 1], epochs=100)
And now it is impossible to train. It seems crazy to me. Does anyone know what is happening?
PS: This example might seem stupid but for a more complex problem I need to compute a custom loss and I reduced the problem to this simple example.
Thank you for your help
After hours of struggling I finally have a fix (and a potential explanation).
The problem in this example (and the only difference between the two snippets) is the index selection. Even though indexing like this appears to be supported by TensorFlow, it does not behave correctly here (and the problematic snippet fails outright under the Theano backend). Even though the loss itself is computed correctly, the derivative seems to be wrong, which misleads the optimizer; that is why the network does not train. A hacky but perfectly working solution I found is to replace
y[:, 0]
by
tensorflow.matmul(y, [[1.0], [0.0]])
I did not try it, but it should also be fine with keras.backend.dot if you are looking for something backend-agnostic. Be careful to put floats and not integers in the weight matrix, otherwise it will not type-check.
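For completeness, here is a minimal sketch of the workaround applied to the whole example (my own rewrite, assuming the TensorFlow backend; in the Keras loss signature the first argument is the target and the second is the model output):

import keras
import numpy as np
import tensorflow as tf
import keras.backend as K

data = np.random.normal(size=(1000, 5))

model = keras.models.Sequential()
model.add(keras.layers.Dense(10, activation='linear', input_shape=(5,)))
model.add(keras.layers.Dense(2, activation='linear'))

def loss(y_true, y_pred):
    # select the first output column with a matrix product instead of y_pred[:, 0]
    first_col = tf.matmul(y_pred, [[1.0], [0.0]])  # shape (batch, 1)
    return K.mean(K.square(y_true - first_col))

model.compile('adam', loss)
model.fit(data, data[:, 1], epochs=100)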
Hope it will help someone else.
I know this question has been asked before, but I have tried all of their solutions and nothing is working for me.
My Problem:
I am running a CNN to classify some images, a typical task, nothing too crazy. I compile my model as follows:
model.compile(optimizer = keras.optimizers.Adam(learning_rate = exp_learning_rate),
              loss = tf.keras.losses.SparseCategoricalCrossentropy(),
              metrics = ['accuracy'])
I fit this on my training dataset, and evaluated on my validation dataset as follows:
history = model.fit(train_dataset, validation_data = validation_dataset, epochs = 5)
And then I evaluated on a separate test set as follows:
model.evaluate(test_dataset)
Which resulted in this:
4/4 [==============================] - 30s 7s/step - loss: 1.7180 - accuracy: 0.8627
However, when I run:
model.predict(test_dataset)
I have the following confusion matrix output:
This clearly isn't the 86% accuracy that .evaluate reports; in fact, it's 35.39% accuracy. To make sure it wasn't an issue with my testing dataset, I had the model predict on my training and validation datasets and still got a similar percentage (~30%), despite the training and validation accuracy during fitting going up to 96% and 87%, respectively.
Question:
I don't know why .predict and .evaluate output different results. What's happening there? It seems like when I call .predict, it isn't using any of the weights trained during fitting (in fact, given that there are 3 classes, this output is no better than blindly guessing each label). Are the weights from fitting not being carried over to prediction? My loss function is correct (I label-encoded my data as TensorFlow expects for sparse_categorical_crossentropy), and when I pass 'accuracy', it just takes the accuracy corresponding to my loss function. All of this should be consistent. So why is there such a discrepancy between the results of .evaluate and .predict, and which one should I trust?
My Attempts to Fix My Issue:
I thought maybe the sparse categorical cross entropy wasn't right, so I one-hot encoded my target labels and used the categorical_crossentropy loss instead. I still have the EXACT same issue as above.
Concerns:
If .evaluate is incorrect, doesn't that mean my training and validation accuracy during fitting are inaccurate as well? Don't those use the same machinery as .evaluate? If that's the case, what can I trust? The loss isn't a good indication of whether the model is doing well, because it is well known that minimal loss does not imply good accuracy (although the converse is usually true, depending on what standard of "good" we're using). How do I gauge my model's effectiveness if my accuracy metrics aren't correct? I don't really know what to look at anymore, because I have no other way to tell whether my model is learning. If someone could help me understand what is happening, I would appreciate it so much. I'm so frustrated.
Edit: (10-28-2021: 12:26 AM)
Ok, so I'll provide some more code to really troubleshoot this.
I originally preprocessed my data as such:
image_size = (256, 256)
batch_size = 16

train_ds = keras.preprocessing.image_dataset_from_directory(
    directory = image_directory,
    label_mode = 'categorical',
    shuffle = True,
    validation_split = 0.2,
    subset = 'training',
    seed = 24,
    batch_size = batch_size
)

val_ds = keras.preprocessing.image_dataset_from_directory(
    directory = image_directory,
    label_mode = 'categorical',
    shuffle = True,
    validation_split = 0.2,
    subset = 'validation',
    seed = 24,
    batch_size = batch_size
)
Where image_directory is a string with the path containing my images. As the documentation describes, the image_dataset_from_directory method returns a tf.data.Dataset object containing batches of the respective (training, validation) data.
I imported the VGG16 architecture to do my classification so I called the respective preprocessing function for VGG16 as follows:
preprocess_input = tf.keras.applications.vgg16.preprocess_input
train_ds = train_ds.map(lambda x, y: (preprocess_input(x), y))
val_ds = val_ds.map(lambda x, y: (preprocess_input(x), y))
This transformed the images into something that was suitable as input for VGG16. Then, in my last processing steps, I did the following validation/test split:
val_batches = tf.data.experimental.cardinality(val_ds)
test_dataset = val_ds.take(val_batches // 3)
validation_dataset = val_ds.skip(val_batches // 3)
Then I proceeded to cache and prefetch my data:
AUTOTUNE = tf.data.AUTOTUNE
train_dataset = train_ds.prefetch(buffer_size = AUTOTUNE)
validation_dataset = validation_dataset.prefetch(buffer_size = AUTOTUNE)
test_dataset = test_dataset.prefetch(buffer_size = AUTOTUNE)
The Problem:
The problem occurs in the method above. I'm still not sure whether .evaluate is a true indicator of accuracy for my model, but I realized that .evaluate and .predict always coincide when my neural network is a keras.Sequential() model. However (correct me if I'm wrong), what I suspect is that VGG16, when imported from the keras.applications API, is actually NOT a keras.Sequential() model, and therefore the .predict and .evaluate results do not coincide when I feed my data straight into the model. (I was going to post this as an answer, but I don't have sufficient knowledge or research to confirm that any of what I said is correct; someone please chime in, because I like learning about things I know little to nothing about. An edit this is for now.)
In the end, I worked around my problem by calling ImageDataGenerator() instead of image_dataset_from_directory() as follows:
train_datagen = ImageDataGenerator(
    preprocessing_function = preprocess_input,
    width_shift_range = 0.2,
    height_shift_range = 0.2,
    shear_range = 0.2,
    zoom_range = 0.2,
    horizontal_flip = True
)

val_datagen = ImageDataGenerator(
    preprocessing_function = preprocess_input
)

train_ds = train_datagen.flow_from_directory(
    train_image_directory,
    target_size = (224, 224),
    batch_size = 16,
    seed = 24,
    shuffle = True,
    classes = ['class1', 'class2', 'class3'],
    class_mode = 'categorical'
)

test_ds = val_datagen.flow_from_directory(
    test_image_directory,
    target_size = (224, 224),
    batch_size = 16,
    seed = 24,
    shuffle = False,
    classes = ['class1', 'class2', 'class3'],
    class_mode = 'categorical'
)
(NOTE: I got this based off the following link from tensorflow's documentation: https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/image/ImageDataGenerator#flow_from_directory)
This completes all the preprocessing for me. Then, when I call model.evaluate(test_ds), it returns the exact same result as when I do model.predict_generator(test_ds). After some minor processing of the prediction output, I use the following code for my confusion matrix:
Y_pred = model.predict(test_ds)
y_pred = np.argmax(Y_pred, axis=1)
cf = confusion_matrix(test_ds.classes, y_pred)
sns.heatmap(cf, annot = True, xticklabels = class_names,
            yticklabels = class_names)
plt.title('Performance of Model on Testing Set')
This eliminates the discrepancy in the confusion matrix and the result of model.evaluate(test_ds).
The Takeaway:
If you're loading images into a classification model and your loss and accuracy match, but you're getting a discrepancy between your predictions and your loss and accuracy, try preprocessing in every way possible. I usually preprocess my images using the image_dataset_from_directory() method for all my keras.Sequential() models; however, for the VGG16 model, which I suspect is not a Sequential() model, using ImageDataGenerator(...).flow_from_directory(...) produced input in the right format for the model to generate predictions that are consistent with the performance metrics.
TL;DR: I didn't answer any of my original questions, but I found a workaround. Sorry if this is spam in any way. As is the nature of most Stack Overflow posts, I hope my turmoil over the last few hours helps someone far in the future.
I had the same problem, and even with ImageDataGenerator the odd behaviour remained. But I think the problem is the shuffle flag of the validation set. You changed it from this:
val_ds = keras.preprocessing.image_dataset_from_directory(
    directory = image_directory,
    label_mode = 'categorical',
    shuffle = True,
    validation_split = 0.2,
    subset = 'validation',
    seed = 24,
    batch_size = batch_size
)
To here:
test_ds = val_datagen.flow_from_directory(
    test_image_directory,
    target_size = (224, 224),
    batch_size = 16,
    seed = 24,
    shuffle = False,
    classes = ['class1', 'class2', 'class3'],
    class_mode = 'categorical'
)
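That matters because with shuffle=True the dataset is re-shuffled on every pass, so predictions gathered from one pass no longer line up with labels (such as test_ds.classes) read in another order, and the confusion matrix looks close to random even though .evaluate is fine. One way around it, regardless of the shuffle setting, is to collect labels and predictions in the same pass; here is a minimal sketch assuming the model, test_dataset and label_mode='categorical' from the question:

import numpy as np
from sklearn.metrics import confusion_matrix

y_true, y_pred = [], []
for images, labels in test_dataset:              # one pass; each batch keeps images and labels paired
    probs = model.predict(images, verbose=0)
    y_pred.extend(np.argmax(probs, axis=1))
    y_true.extend(np.argmax(labels.numpy(), axis=1))  # labels are one-hot because label_mode='categorical'

print(confusion_matrix(y_true, y_pred))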
It makes intuitive sense to me that the label's dimension should be the same as the neural network's last layer's dimension. However, with some experiments using PyTorch, it turns out that it somehow works.
Code:
import torch
import torch.nn as nn

X = torch.tensor([[1], [2], [3], [4]], dtype=torch.float32)  # training input
Y = torch.tensor([[2], [4], [6], [8]], dtype=torch.float32)  # training label

model = nn.Linear(1, 3)
criterion = nn.MSELoss()

learning_rate = 0.01
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)

for epoch in range(10):
    optimizer.zero_grad()
    y_pred = model(X)
    loss = criterion(y_pred, Y)
    loss.backward()
    optimizer.step()
In the above code, model = nn.Linear(1,3) is used instead of model = nn.Linear(1,1). As a result, while Y.shape is (4,1), y_pred.shape is (4,3).
The code works with a warning saying that "Using a target size that is different to the input size will likely lead to incorrect results due to broadcasting. "
I got the following output when I executed model(torch.tensor([10], dtype=torch.float32)):
tensor([20.0089, 19.6121, 19.1967], grad_fn=<AddBackward0>)
All three outputs seem correct to me. But how is the loss calculated if the sizes of the data are different?
Should we in any case use a target size that is different to the input size? Is there a benefit for this?
Assuming you are working with batch_size=4, you are using a target with 1 component versus 3 for your predicted tensor. You don't actually see the intermediate results when computing the loss with nn.MSELoss; using the reduction='none' option will allow you to do so:
>>> criterion = nn.MSELoss(reduction='none')
>>> y = torch.rand(2,1)
>>> y_hat = torch.rand(2,3)
>>> criterion(y_hat, y).shape
torch.Size([2, 3])
Considering this, you can conclude that the target y, being too small, has been broadcasted to the predicted tensor y_hat. Essentially, in your example, you will get the same result (without the warning) as:
>>> y_repeat = y.repeat(1, 3)
>>> criterion(y_hat, y_repeat)
This means that, for each element of the batch, you are L2-optimizing all of its components against a single target value: MSE(y_hat[0,0], y[0]), MSE(y_hat[0,1], y[0]), and MSE(y_hat[0,2], y[0]); the same goes for the other batch elements.
The warning is there to make sure you're conscious of this broadcast operation. Maybe this is what you're looking to do; in that case, you should broadcast the target tensor yourself. Otherwise, it doesn't make sense to do it.
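A quick sketch to confirm the equivalence (the shapes are invented for the example; the broadcast loss and the explicitly expanded one give the same value):

import torch
import torch.nn as nn

criterion = nn.MSELoss()
y = torch.rand(4, 1)
y_hat = torch.rand(4, 3)

loss_broadcast = criterion(y_hat, y)                  # emits the broadcasting warning
loss_explicit = criterion(y_hat, y.expand_as(y_hat))  # no warning, same value

print(torch.allclose(loss_broadcast, loss_explicit))  # True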
This is something that has been bothering me for a while about XOR and MLPs; it may be basic (if so, apologies in advance), but I would like to know.
There are many approaches to solving XOR with MLP, but generally they look like this:
from sklearn.neural_network import MLPClassifier

X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 0]

model = MLPClassifier(
    activation='relu', max_iter=1000, hidden_layer_sizes=(4, 2))
Now to fit the model:
model.fit(X, y)
And, guess what?
print('score:', model.score(X, y))
outputs a perfect
score: 1.0
But what is being predicted and scored? In the case of XOR we have a dataset which, by definition(!) has four rows, two features and one binary label. There is no standard X_train, y_train, X_test, y_test to work with. By definition, again, there is no unseen data for the model to digest.
The prediction takes place in the form of
model.predict(X)
which is exactly the same X that training was performed on.
So doesn't the model just spit back the y it was trained on? How do we know the model "learned" anything?
EDIT: Just to try to clarify what baffles me - the features have 2 and only 2 unique values; the 2 unique values have 4 and only 4 possible combinations. The right label for each possible combination is already present in the label column. So what is there for the model to "learn" when fit() is called? And how is this "learning" performed? How can the model ever be "wrong" when it has access to the "right" answer for each possible combination of inputs?
Again, sorry for what is probably a very basic question.
The key thing is that the XOR problem was proposed to demonstrate how some models can learn non-linear problems and some models can't.
So when a model gets 1.0 accuracy on the dataset you mentioned, that is notable, since it has learned a non-linear problem. The fact that it has learned the training data is enough for us to know that it can [potentially] learn non-linear problems. Notice that if this weren't the case, your model would get a low accuracy like 0.5, since it divides the 2D space into two sub-spaces by a line.
To understand this better, let's look at a case where a model can't learn the data under the same circumstances:
import tensorflow as tf
import numpy as np

# X, y are the four XOR rows and labels from the question
X = np.array(X)
y = tf.keras.utils.to_categorical(np.array(y))  # one-hot, to match the 2-unit output and categorical_crossentropy

model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Dense(2, activation='relu'))
model.compile(optimizer=tf.train.GradientDescentOptimizer(learning_rate=0.1),
              loss='categorical_crossentropy', metrics=['categorical_accuracy'])
model.fit(X, y, epochs=100)
_, acc = model.evaluate(X, y)
print('acc = ' + str(acc))
which gives:
acc = 0.5
As you can see, this model can't even classify the data it has already seen. The reason is that this data is not linearly separable, and our model can only learn linear decision boundaries (here is a link to understand the non-linearity of the XOR problem better). As soon as we add another layer to our network, it will be able to solve this problem:
import tensorflow as tf
import numpy as np

# same X, y as above (the four XOR rows, with y one-hot encoded)
X = np.array(X)
y = tf.keras.utils.to_categorical(np.array(y))

model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Dense(1, activation='relu'))
model.add(tf.keras.layers.Dense(2, activation='relu'))

tb_callback = tf.keras.callbacks.TensorBoard(log_dir='./test/', write_graph=True)

model.compile(optimizer=tf.train.GradientDescentOptimizer(learning_rate=0.1),
              loss='categorical_crossentropy', metrics=['categorical_accuracy'])
model.fit(X, y, epochs=5, callbacks=[tb_callback, ])
_, acc = model.evaluate(X, y)
print('acc = ' + str(acc))
which gives:
acc = 1.0
By adding only one neuron our model learned to do what it couldn't learn in 100 epochs with 1 layer (even though it had already seen the data).
So to sum up, it is correct that our dataset is so small that the network can easily memorize it, but the XOR problem is important because it means there are networks that can't memorize this data no matter what.
Having said that, however, there are variants of the XOR problem with proper train and test sets. Here is one (the plot is slightly different):
import numpy as np
import matplotlib.pyplot as plt

x1 = np.concatenate([np.random.uniform(0, 100, 100), np.random.uniform(-100, 0, 100)])
y1 = np.concatenate([np.random.uniform(-100, 0, 100), np.random.uniform(0, 100, 100)])
x2 = np.concatenate([np.random.uniform(0, 100, 100), np.random.uniform(-100, 0, 100)])
y2 = np.concatenate([np.random.uniform(0, 100, 100), np.random.uniform(-100, 0, 100)])

plt.scatter(x1, y1, c='red')
plt.scatter(x2, y2, c='blue')
plt.show()
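Building on that snippet, here is a rough sketch of turning it into a proper train/test setup with the MLPClassifier from the question (the class assignment, split and layer sizes below are my own choices for illustration):

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# red points (x1, y1) form one class, blue points (x2, y2) the other
X_all = np.concatenate([np.column_stack([x1, y1]), np.column_stack([x2, y2])])
y_all = np.concatenate([np.zeros(len(x1)), np.ones(len(x2))])

X_train, X_test, y_train, y_test = train_test_split(X_all, y_all, test_size=0.3, random_state=0)
clf = MLPClassifier(activation='relu', max_iter=2000, hidden_layer_sizes=(8, 4))
clf.fit(X_train, y_train)
print('test score:', clf.score(X_test, y_test))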
hope that helped ;))
So I made a CNN that classifies two types of birds, and it worked fine. After that I tried adding one more type, but I got weird results. I already posted this on AI Stack Exchange, but they said it's better to ask it here, so I am providing a link to that post:
https://ai.stackexchange.com/q/11444/23452
Here is the model code:
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, Activation, Flatten, Conv2D, MaxPooling2D
from tensorflow.keras.callbacks import TensorBoard
import pickle
import time as time
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction = 0.333)
sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))
pickle_in = open("C:/Users/Recep/Desktop/programlar/python/X.pickle","rb")
X = pickle.load(pickle_in)
pickle_in = open("C:/Users/Recep/Desktop/programlar/python/Y.pickle","rb")
Y = pickle.load(pickle_in)
X = X/255.0
node_size = 64
model_name = "agi_vs_golden-{}".format(time.time())
tensorboard = TensorBoard(log_dir='C:/Users/Recep/Desktop/programlar/python/logs/{}'.format(model_name))
file_writer = tf.summary.FileWriter('C:/Users/Recep/Desktop/programlar/python/logs/{}'.format(model_name), sess.graph)
model = Sequential()
model.add(Conv2D(node_size,(3,3),input_shape = X.shape[1:]))
#idk what that shape does except that and validation i have no problem
model.add(Activation("relu"))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Conv2D(node_size,(3,3)))
model.add(Activation("relu"))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Flatten())
model.add(Dense(node_size))
model.add(Activation("relu"))
model.add(Dense(1))
model.add(Activation("sigmoid"))
model.compile(loss="binary_crossentropy",optimizer="adam",metrics=["accuracy"])
model.fit(X,Y,batch_size=25,epochs=8,validation_split=0.1,callbacks=[tensorboard])
# idk what the validation is and how its used but dont think it caused the problem
model.save("agi_vs_gouldian.model")
By the way, as I said in the comments of my original post, I think maybe the network isn't trained enough or I don't have enough data, so I tried increasing the number of epochs. That kind of got around the problem, but the part I'm curious about is what happened when I had fewer epochs?
Can anyone help me?
I am giving the tensor board graphs below.
BTW, is my data array RGB?
And how can I get rid of this local maximum of 70%?
And since I'm a beginner at this, I don't really know how validation works, but I saw that the validation graphs stay the same in the first training run that I had issues with.
You are trying to classify three bird types with a sigmoid output. Sigmoid is good for binary classification; try a softmax activation layer instead and see how it goes. I suggest replacing
model.add(Dense(1))
model.add(Activation("sigmoid"))
with
model.add(Dense(3, activation='softmax'))
where 3 is the number of bird types you want to classify.
Have a look here at a very good tutorial on using softmax as the output activation for multi-class classification:
https://machinelearningmastery.com/multi-class-classification-tutorial-keras-deep-learning-library/
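One thing the replacement above leaves implicit: moving to a 3-way softmax also means the loss has to change from binary_crossentropy. Here is a minimal sketch of the changed tail of the model, assuming Y holds integer class ids 0, 1, 2 (use categorical_crossentropy instead if you one-hot encode Y):

model.add(Dense(3, activation='softmax'))

# sparse_categorical_crossentropy assumes integer labels; use 'categorical_crossentropy' for one-hot labels
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam", metrics=["accuracy"])
model.fit(X, Y, batch_size=25, epochs=8, validation_split=0.1, callbacks=[tensorboard])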
I've tried to train a 2-layer neural network on a simple linear interpolation of a discrete function. I've tried lots of different learning rates as well as different activation functions, and it seems like nothing is being learned!
I’ve literally spent the last 6 hours trying to debug the following code, but it seems like there’s no bug! What's the explanation?
from torch.utils.data import Dataset
import os
import torch
import numpy as np
import torch.nn as nn
import torch.optim as optim
import random

LOW_X = 255
MID_X = 40000
HIGH_X = 200000
LOW_Y = torch.Tensor([0, 0, 1])
MID_Y = torch.Tensor([0.2, 0.5, 0.3])
HIGH_Y = torch.Tensor([1, 0, 0])
BATCH_SIZE = 4

def x_to_tensor(x):
    if x <= MID_X:
        return LOW_Y + (x - LOW_X) * (MID_Y - LOW_Y) / (MID_X - LOW_X)
    if x <= HIGH_X:
        return MID_Y + (x - MID_X) * (HIGH_Y - MID_Y) / (HIGH_X - MID_X)
    return HIGH_Y

class XYDataset(Dataset):
    LENGTH = 10000

    def __len__(self):
        return self.LENGTH

    def __getitem__(self, idx):
        x = random.randint(LOW_X, HIGH_X)
        y = x_to_tensor(x)
        return x, y

class Interpolate(nn.Module):
    def __init__(self, num_outputs, hidden_size=10):
        super(Interpolate, self).__init__()
        self.hidden_size = hidden_size
        self.x_to_hidden = nn.Linear(1, hidden_size)
        self.hidden_to_out = nn.Linear(hidden_size, num_outputs)
        self.activation = nn.Tanh()  # I have tried Sigmoid and Relu activations as well
        self.softmax = torch.nn.Softmax(dim=1)

    def forward(self, x):
        out = self.x_to_hidden(x)
        out = self.activation(out)
        out = self.hidden_to_out(out)
        out = self.softmax(out)
        return out

dataset = XYDataset()
trainloader = torch.utils.data.DataLoader(dataset, batch_size=BATCH_SIZE,
                                          shuffle=True, num_workers=4)
criterion = nn.MSELoss()

def train_net(net, epochs=10, lr=5.137871216190041e-05, l2_regularization=2.181622809797563e-12):
    optimizer = optim.Adam(net.parameters(), lr=lr, weight_decay=l2_regularization)
    net.train(True)
    running_loss = 0.0
    for epoch in range(epochs):
        for i, data in enumerate(trainloader):
            inputs, targets = data
            inputs, targets = torch.FloatTensor(inputs.float()).view(-1, 1), torch.FloatTensor(targets.float())
            optimizer.zero_grad()
            outputs = net(inputs)
            loss = criterion(outputs, targets)
            loss.backward()
            optimizer.step()
            running_loss += loss.item()
            if (len(trainloader) * epoch + i) % 200 == 199:
                running_loss = running_loss / (200 * BATCH_SIZE)
                print('[%d,%5d] loss: %.6f ' % (epoch + 1, i + 1, running_loss))
                running_loss = 0.0

for i in range(-11, 3):
    net = Interpolate(num_outputs=3)
    train_net(net, lr=10**i, epochs=1)
    print('for learning rate {} net output on low x is {}'.format(i, net(torch.Tensor([255]).view(-1, 1))))
Although your problem is quite simple, it is poorly scaled: x ranges from 255 to 200K. This poor scaling leads to numerical instability and overall makes the training process unnecessarily unstable.
To overcome this technical issue, you simply need to scale your inputs to [-1, 1] (or [0, 1]) range.
Note that this scaling is quite ubiquitous in deep-learning: images are scaled to [-1, 1] range (see, e.g., torchvision.transforms.Normalize).
To understand better the importance of scaled responses, you can look into the mathematical analysis done in this paper.
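For instance, here is a minimal sketch of such scaling for the code in the question (the helper is my own; the constants come from the question, and the commented lines show where it would slot into train_net):

import torch

LOW_X, HIGH_X = 255, 200000  # constants from the question

def scale_x(x):
    # map raw x in [LOW_X, HIGH_X] to the [0, 1] range
    return (x - LOW_X) / (HIGH_X - LOW_X)

# inside train_net, scale the batch before the forward pass:
#     inputs = scale_x(inputs.float()).view(-1, 1)
#     outputs = net(inputs)
# and apply the same scaling at evaluation time:
#     net(scale_x(torch.Tensor([255.0])).view(-1, 1))

print(scale_x(torch.Tensor([255.0, 40000.0, 200000.0])))  # roughly tensor([0.0000, 0.1990, 1.0000])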
You can perform a simple interpolation with a NN, however you have to consider the following.
I would recommend the following settings:
For the activation function: for a simple interpolation, an identity activation function can turn the NN into a linear regressor, which may generalize well. However, you should consider the Rectified Linear Unit (ReLU) for big data and Logistic/Tanh for regular-sized data as other options.
In the case of large amounts of data, I would select an iterative optimizer for the weights, such as plain gradient descent or Adam. On the other hand, if you have little data, I would use a quasi-Newton method such as L-BFGS, since you will get a good approximation of the weights in reasonably low computational time (a small sketch follows after this list).
Vary the number of neurons in each layer and the number of layers, performing batch learning, to seek better approximations.
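As a rough illustration of the L-BFGS suggestion for small datasets, here is a sketch using scikit-learn's MLPRegressor on made-up 1-D data (not the PyTorch model from the question; the data and hyperparameters are invented for the example):

import numpy as np
from sklearn.neural_network import MLPRegressor

# small synthetic interpolation problem: y = 2x + 1 with a little noise
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 1))
y = 2 * X.ravel() + 1 + rng.normal(scale=0.01, size=200)

reg = MLPRegressor(hidden_layer_sizes=(10,), activation='tanh',
                   solver='lbfgs', max_iter=2000)
reg.fit(X, y)
print(reg.predict([[0.5]]))  # should be close to 2.0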