TypeError for image-generation GAN model - image-processing

I am trying to visualize the final images from the generator of a GAN image generator; everything else worked perfectly.
When trying to visualize the images after training the model, I get a TypeError.
This is what I was using for image plotting:
imgs = test_model.predict(tf.random.normal((16, 128, 1)))
This line ran fine, but the one below gave me an error:
fig, ax = plt.subplots(ncols=4, nrows=4, figsize=(20, 20))
for r in range(4):
    for c in range(4):
        ax[r][c].imshow(imgs[(r+1)*(c+1)-1])
This is the error:
TypeError: Invalid shape (28, 28, 1) for image data
Can anyone tell me how to fix this? I checked for every error I could think of but couldn't find the cause. What am I missing?
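For what it's worth, matplotlib's imshow accepts (M, N) or (M, N, 3)/(M, N, 4) arrays but not a trailing single channel, which is what this TypeError complains about. A minimal sketch of the plotting loop, assuming imgs comes back with shape (16, 28, 28, 1), that squeezes that axis and indexes each of the 16 images exactly once:

import numpy as np
import matplotlib.pyplot as plt

fig, ax = plt.subplots(ncols=4, nrows=4, figsize=(20, 20))
for r in range(4):
    for c in range(4):
        # drop the trailing channel axis: (28, 28, 1) -> (28, 28)
        ax[r][c].imshow(np.squeeze(imgs[r * 4 + c]), cmap='gray')
plt.show()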

Related

SageMaker Serverless Inference Preprocessing Question

I recently saw that there is a serverless version of SageMaker and wanted to use it for a personal project (my first time using SageMaker). I followed the guide below to deploy my model, only modifying some preprocessing steps (steps I also performed when running predictions locally and on Lambda).
import io
import json
import numpy as np
from PIL import Image

def input_handler(data, context):
    if context.request_content_type == 'application/x-image':
        image_as_bytes = io.BytesIO(data.read())
        image = Image.open(image_as_bytes)
        image = image.convert('RGB')
        image = image.resize((150, 150))
        instance = np.array(image, dtype='f')
        instance = instance / 255
        instance = np.expand_dims(image, axis=0)
        payload = json.dumps({"instances": instance.tolist()})
        return payload
    else:
        # _return_error is defined elsewhere in the inference script from the guide
        _return_error(415, 'Unsupported content type "{}"'.format(context.request_content_type or 'Unknown'))
with open(file_name, 'rb') as f:
    image_data = f.read()

response = runtime.invoke_endpoint(EndpointName=endpoint_name,
                                   ContentType='application/x-image',
                                   Body=image_data)
When invoking the endpoint via the runtime (as in the guide), I always get the same prediction for images from different classes.
I hope my explanation is OK, as this is the first time I am asking a question. I'm not sure what I am missing, but any help is appreciated.
https://github.com/shashankprasanna/sagemaker-video-examples/tree/master/sagemaker-serverless-inference
I tried running predictions locally (same model, same preprocessing steps) via the predict API and it worked as expected. These are the CloudWatch logs (which I removed from the code above so it's not cluttered): Opened image for inference -> Image mode set to: RGB -> Resized to: (150, 150) -> Converted image to: float32 -> Normalized array: [[[0.7529412 0.7019608 0.67058825] ... -> Expanded to: (1, 150, 150, 3) -> payload: {"instances": [[[[192, 179, 171], [187, 174, 166], [188, 175, 167], [195, 182, 174] ...
I'm not sure if it is related, but shouldn't the list in the payload be the same as the normalized NumPy array?
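One detail that might explain the mismatch (just a guess based on the logs above): np.expand_dims(image, axis=0) expands the original PIL image rather than the normalized instance, so the payload would carry the raw 0-255 pixel values even though the normalized array was logged. A sketch of the relevant handler lines with that one change:

instance = np.array(image, dtype='f')
instance = instance / 255
# expand the normalized array, not the original PIL image
instance = np.expand_dims(instance, axis=0)
payload = json.dumps({"instances": instance.tolist()})
return payload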

Running Detectron2 inference in Caffe2

I have a Detectron2 .pth model that I converted successfully to Caffe2 .pb via the Detectron2 tools functionality located here: https://github.com/facebookresearch/detectron2/blob/master/tools/caffe2_converter.py
As recommended, I used the --run-eval flag to confirm the results while converting, and they are very similar to the original Detectron2 results.
To run inference on a new image using the resulting model.pb and model_init.pb files, I used the functionality located here:
https://github.com/facebookresearch/detectron2/blob/master/detectron2/export/api.py (mostly)
https://github.com/facebookresearch/detectron2/blob/master/detectron2/export/caffe2_inference.py
However, the inference results are not even close. Can anybody suggest reasons why this might happen? The Detectron2 repo says all preprocessing is done in the Caffe2 scripts, but am I missing something?
Here is my inference code:
caffe2_model = Caffe2Model.load_protobuf(input_directory)
img = cv2.imread(input_image)
image = torch.as_tensor(img.astype("float32").transpose(2, 0, 1))
data = {'image': image, 'height': image.shape[1], 'width': image.shape[2]}
output = caffe2_model([data])
Your input_image dimensions should be a multiple of 32, so you probably need to resize your input img. Something like:
caffe2_model = Caffe2Model.load_protobuf(input_directory)
img = cv2.imread(input_image)
img = cv2.resize(img, (64, 64))
image = torch.as_tensor(img.astype("float32").transpose(2, 0, 1))
data = {'image': image, 'height': image.shape[1], 'width': image.shape[2]}
output = caffe2_model([data])
See the class detectron2.export.Caffe2Tracer in the documentation: https://detectron2.readthedocs.io/en/latest/modules/export.html#detectron2.export.Caffe2Tracer
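If a fixed 64x64 resize loses too much detail for detection, another option (a sketch building on the snippet above, reusing caffe2_model and input_image; the snap_to_multiple_of_32 helper is mine, not part of the Detectron2 API) is to round the original size down to the nearest multiples of 32:

import cv2
import torch

def snap_to_multiple_of_32(size):
    # round down to the nearest multiple of 32, but never below 32
    return max(32, (size // 32) * 32)

img = cv2.imread(input_image)
h, w = img.shape[:2]
# cv2.resize takes (width, height)
img = cv2.resize(img, (snap_to_multiple_of_32(w), snap_to_multiple_of_32(h)))
image = torch.as_tensor(img.astype("float32").transpose(2, 0, 1))
data = {'image': image, 'height': image.shape[1], 'width': image.shape[2]}
output = caffe2_model([data])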

Error when checking input: expected conv2d_27_input to have 4 dimensions, but got array with shape (55000, 28, 28)

Hi, I'm trying to use a CNN on the Fashion-MNIST data.
There are 55,000 training images of 28x28 pixels in grayscale, so I used a 2D CNN.
Here is my code:
from functools import partial
import keras

fashion_mnist = keras.datasets.fashion_mnist
(xtrain, ytrain), (xtest, ytest) = fashion_mnist.load_data()
xvalid, xtrain = xtrain[:5000] / 255.0, xtrain[5000:] / 255.0
yvalid, ytrain = ytrain[:5000], ytrain[5000:]

defaultcon = partial(keras.layers.Conv2D, kernel_size=3, activation='relu', padding="SAME")

model = keras.models.Sequential([
    defaultcon(filters=64, kernel_size=7, input_shape=[28, 28, 1]),
    keras.layers.MaxPooling2D(pool_size=2),
    defaultcon(filters=128),
    defaultcon(filters=128),
    keras.layers.MaxPooling2D(pool_size=2),
    defaultcon(filters=256),
    defaultcon(filters=256),
    keras.layers.MaxPooling2D(pool_size=2),
    keras.layers.Flatten(),
    keras.layers.Dense(units=128, activation='relu'),
    keras.layers.Dropout(0.5),
    keras.layers.Dense(units=64, activation='relu'),
    keras.layers.Dropout(0.5),
    keras.layers.Dense(units=10, activation='softmax'),
])

model.compile(optimizer='sgd', loss="sparse_categorical_crossentropy", metrics=["accuracy"])
history = model.fit(xtrain, ytrain, epochs=30, validation_data=(xvalid, yvalid))
but I get: Error when checking input: expected conv2d_27_input to have 4 dimensions, but got array with shape (55000, 28, 28)
Why does it expect 4D input?
In the input line:
defaultcon(filters=64, kernel_size=7, input_shape=[28,28,1])
you declared the per-sample shape (28, 28, 1), so for a task with m samples the model expects data with the dimension (m, 28, 28, 1), which is 4D.
Your inputs, however, are in the shape (m, 28, 28), where m is the number of samples. The input_shape itself is fine; what is missing is an explicit channel axis on the data, so add one (for example with np.expand_dims or a reshape to (-1, 28, 28, 1)) before calling fit, and hopefully you will be all set.
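A minimal sketch of that reshape, assuming xtrain and xvalid are the arrays from the question (xtest would need the same treatment before evaluation):

import numpy as np

# load_data() returns (m, 28, 28); the Conv2D stack expects (m, 28, 28, 1)
xtrain = xtrain[..., np.newaxis]
xvalid = xvalid[..., np.newaxis]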

Keras: ValueError: No data provided for "input_1". Need data for each key

I am using the Keras functional API with input images of dimension (224, 224, 3). I have the following model built with the functional API, although a similar problem seems to arise with sequential models:
from keras.layers import Input, Dense
from keras.models import Model as KerasModel

input = Input(shape=(224, 224, 3,))
shared_layers = Dense(16)(input)
model = KerasModel(input=input, output=shared_layers)
model.compile(loss='binary_crossentropy', optimizer='sgd', metrics=['accuracy'])
I am calling model.fit_generator where my generator has
yield ({'input_1': image}, {'output': classification})
image is the input (224, 224, 3) image and classification is in {-1,1}.
On fitting the model, I get an error
ValueError: No data provided for "dense_1". Need data for each key in: ['dense_1']
One strange thing is that if I switch the input_1 target of the dict to dense_1, the error switches to missing an input for input_1, but goes back to missing dense_1 if both keys are in the data generator.
This happens whether I call fit_generator or get batches from the generator and call train_on_batch.
Does anyone know what's going on? From what I can tell, this should be the same as given in the documentation although with a different input size.
Full traceback:
Traceback (most recent call last):
  File "pymask.py", line 303, in <module>
    main(sys.argv)
  File "pymask.py", line 285, in main
    keras.callbacks.ProgbarLogger()
  File "/home/danielunderwood/virtualenvs/keras/lib/python3.6/site-packages/keras/engine/training.py", line 1557, in fit_generator
    class_weight=class_weight)
  File "/home/danielunderwood/virtualenvs/keras/lib/python3.6/site-packages/keras/engine/training.py", line 1314, in train_on_batch
    check_batch_axis=True)
  File "/home/danielunderwood/virtualenvs/keras/lib/python3.6/site-packages/keras/engine/training.py", line 1029, in _standardize_user_data
    exception_prefix='model input')
  File "/home/danielunderwood/virtualenvs/keras/lib/python3.6/site-packages/keras/engine/training.py", line 52, in standardize_input_data
    str(names))
ValueError: No data provided for "input_1". Need data for each key in: ['input_1']
I encountered this error in 3 cases (in R):
The input data does not have the same dimension as was declared in the first layer
The input data includes missing values
The input data is not a matrix (for example, a data frame)
Please check all of the above.
Maybe this code in R can help:
library(keras)
#The network should identify the rule that a row sum greater than 1.5 should yield an output of 1
my_x=matrix(data=runif(30000), nrow=10000, ncol=3)
my_y=ifelse(rowSums(my_x)>1.5,1,0)
my_y=to_categorical(my_y, 2)
model = keras_model_sequential()
layer_dense(model,units = 2000, activation = "relu", input_shape = c(3))
layer_dropout(model,rate = 0.4)
layer_dense(model,units = 50, activation = "relu")
layer_dropout(model,rate = 0.3)
layer_dense(model,units = 2, activation = "softmax")
compile(model,loss = "categorical_crossentropy",optimizer = optimizer_rmsprop(),metrics = c("accuracy"))
history <- fit(model, my_x, my_y, epochs = 5, batch_size = 128, validation_split = 0.2)
evaluate(model,my_x, my_y,verbose = 0)
predict_classes(model,my_x)
I have encountered this issue as well, and none of the above-mentioned answers worked. According to the Keras documentation, you can pass the arguments either as a dictionary, like this:
model.fit({'main_input': headline_data, 'aux_input': additional_data},
{'main_output': labels, 'aux_output': labels},
epochs=50, batch_size=32)
or as a list like that:
model.fit([headline_data, additional_data], [labels, labels],
epochs=50, batch_size=32)
The dictionary version didn't work for me with keras version 2.0.9. I have used the list version as a workaround for now.
This was due to me misunderstanding how Keras outputs work. The targets in the data dictionary have to be keyed by the name of the layer passed as the output argument to Model; I had assumed that an 'output' key would automatically map to that layer. So in the generator line
yield ({'input_1': image}, {'output': classification})
replace output with dense_1, and it will work.
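In other words, assuming the output layer really is named dense_1 as in the traceback, the generator would yield:

yield ({'input_1': image}, {'dense_1': classification})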

Tensorflow Image Shape Error

I have trained a classifier and now want to pass a single image through it.
I'm using the Keras library with TensorFlow as the backend.
I'm getting an error I can't seem to get past:
img_path = '/path/to/my/image.jpg'
import numpy as np
from keras.preprocessing import image
x = image.load_img(img_path, target_size=(250, 250))
x = image.img_to_array(x)
x = np.expand_dims(x, axis=0)
preds = model.predict(x)
Do I need to reshape my data to have None as the first dimension? I'm confused about why TensorFlow would expect None as the first dimension.
Error when checking : expected convolution2d_input_1 to have shape (None, 250, 250, 3) but got array with shape (1, 3, 250, 250)
I'm wondering if there is an issue with the architecture of my trained model?
Edit: if I call model.summary(), it gives convolution2d_input_1 as...
Edit: I did play around with the suggestion below but used numpy to transpose instead of tf - I still seem to be hitting the same issue!
None matches any number. Usually, when you pass data to a model, it is expected to be a tensor of dimensions None x data_size, meaning the first dimension can be anything and denotes the batch size. In your case, the problem is that you pass 250 x 250 x 3, and it is expected 3 x 250 x 250. Try:
x = image.load_img(img_path, target_size=(250, 250))
x_trans = tf.transpose(x, perm=[2, 0, 1])
x_expanded = np.expand_dims(x_trans, axis=0)
preds = model.predict(x_expanded)
OK, so using feedback from Sygi I think I have half solved it.
The error was actually telling me I needed to pass in my dimensions as [1, 250, 250, 3], so that was an easy fix; I must say I'm not sure why TF is expecting the dimensions in this order, as looking at the docs it doesn't seem right, so more research is required here.
Moving ahead, I'm not sure transpose is the way to go, because if I use a different input image the dimensions may not be in the same order, meaning the transpose wouldn't work properly.
Instead of transpose I'll probably try calling x_reshape = img.reshape((1, 250, 250, 3)), depending on what I find out about dimension order when reshaping for TF.
Thanks for the hints Sygi :)
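For what it's worth, reshape and transpose are not interchangeable here: reshape only relabels the existing memory layout, while transpose actually reorders the axes. Since the error shows the array arriving as (1, 3, 250, 250), img_to_array is presumably producing a channels-first array (the Keras image_data_format setting controls this). A sketch under that assumption, reusing img_path and model from the question:

import numpy as np
from keras.preprocessing import image

x = image.load_img(img_path, target_size=(250, 250))
x = image.img_to_array(x)
if x.shape[0] == 3:                  # channels-first array, e.g. (3, 250, 250)
    x = np.transpose(x, (1, 2, 0))   # reorder to channels-last -> (250, 250, 3)
x = np.expand_dims(x, axis=0)        # add the batch axis -> (1, 250, 250, 3)
preds = model.predict(x)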
