SageMaker Serverless Inference Preprocessing Question

I've recently seen that there is a serverless version of SageMaker and I wanted to use it for a personal project (my first time using SageMaker). I used the guide below to try to deploy my model, only modifying some preprocessing steps (steps which I also did when making predictions locally and on Lambda).
def input_handler(data, context):
    if context.request_content_type == 'application/x-image':
        image_as_bytes = io.BytesIO(data.read())
        image = Image.open(image_as_bytes)
        image = image.convert('RGB')
        image = image.resize((150, 150))
        instance = np.array(image, dtype='f')
        instance = instance / 255
        instance = np.expand_dims(image, axis=0)
        payload = json.dumps({"instances": instance.tolist()})
        return payload
    else:
        _return_error(415, 'Unsupported content type "{}"'.format(
            context.request_content_type or 'Unknown'))

with open(file_name, 'rb') as f:
    image_data = f.read()

response = runtime.invoke_endpoint(EndpointName=endpoint_name,
                                   ContentType='application/x-image',
                                   Body=image_data)
When invoking the endpoint via the runtime (as in the guide), I always get the same prediction, regardless of which class the image comes from.
I hope my explanation is OK, as this is the first time I am asking a question. I'm not sure what I am missing, but any help is appreciated.
https://github.com/shashankprasanna/sagemaker-video-examples/tree/master/sagemaker-serverless-inference
I tried doing predictions locally (same model, same preprocessing steps) via the predict API and it worked as expected. These are the logs from CloudWatch (I removed the logging statements from the code above so it is not cluttered):
Opened image for inference
Image mode set to: RGB
Resized to: (150, 150)
Converted image to: float32
Normalized array: [[[0.7529412 0.7019608 0.67058825] ...
Expanded to: (1, 150, 150, 3)
payload: {"instances": [[[[192, 179, 171], [187, 174, 166], [188, 175, 167], [195, 182, 174] ...
Not sure if it has something to do with it, but shouldn't the list be the same as the normalized NumPy array?
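To check that observation, here is a minimal local sketch that applies the same preprocessing steps and then serializes the normalized array, so the result can be compared against the payload logged by the endpoint (file_name is the same test image sent to the endpoint):
import io
import json
import numpy as np
from PIL import Image

# Reproduce the preprocessing locally on the same test image.
with open(file_name, 'rb') as f:
    image = Image.open(io.BytesIO(f.read())).convert('RGB').resize((150, 150))

instance = np.array(image, dtype='f') / 255   # normalized float array
local_payload = json.dumps({"instances": np.expand_dims(instance, axis=0).tolist()})

# Compare against the "payload:" line from CloudWatch; a payload built from the
# normalized array should start with values in [0, 1], not 0-255 integers.
print(local_payload[:120])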

Related

Coiled: Use local file to train XGBoost classifier

I want to train an XGBoost classifier with Coiled and Dask.
The problem is that my training data is really big and is stored in an HDF5 file (via h5py) on my computer. Is there a way to upload the h5py file directly to the workers?
To show my problem I created an example. For this example, I create some random data and store it in an h5py file so you can see what my data looks like. In my real use case, the data has 7245346 features and 2157 samples.
import coiled
import h5py
import numpy as np
import dask.array as da
from dask.distributed import Client
import xgboost as xgb

input_path = "test.h5"

# create some random data
n_features = 500
n_samples = 200
X = np.random.randint(0, 3, size=[n_samples, n_features])
y = np.random.randint(0, 5, size=[n_samples])
with h5py.File(input_path, mode='w') as file:
    file.create_dataset('X', data=X)
    file.create_dataset('y', data=y)

rows_per_chunk = 100

coiled.create_software_environment(
    name="xgboost-on-coiled",
    pip=["coiled", "h5py", "dask", "xgboost"])

with coiled.Cluster(
        name="xgboost-cluster",
        n_workers=2,
        worker_cpu=8,
        worker_memory="16GiB",
        software="xgboost-on-coiled") as cluster:
    with Client(cluster) as client:
        file = h5py.File(input_path, mode='r')
        n_features = file["X"].shape[1]
        X = da.from_array(file["X"], chunks=(rows_per_chunk, n_features))
        X = X.rechunk(chunks=(rows_per_chunk, n_features))
        X = X.astype("int8")
        X = X.persist()
        y = da.from_array(file["y"], chunks=rows_per_chunk)
        n_class = np.unique(y.compute()).size
        y = y.astype("int8")
        y = y.persist()
        dtrain = xgb.dask.DaskDMatrix(
            client,
            X,
            y,
            feature_names=['%i' % i for i in range(n_features)])
        model_params = {
            'objective': 'multi:softprob',
            'eval_metric': 'mlogloss',
            'num_class': n_class}
        # train the model on the cluster
        output = xgb.dask.train(
            client,
            params=model_params,
            dtrain=dtrain)
        booster = output["booster"]
The error message:
FileNotFoundError: [Errno 2] Unable to open file (unable to open file: name = 'test.h5', errno = 2, error message = 'No such file or directory', flags = 0, o_flags = 0)
For smaller amounts of data, I can load the data directly into RAM, but for more data this no longer works. Just so you know what I am talking about:
input_path = "test.h5"

n_features = 500
n_samples = 200
X = np.random.randint(0, 3, size=[n_samples, n_features])
y = np.random.randint(0, 5, size=[n_samples])
with h5py.File(input_path, mode='w') as file:
    file.create_dataset('X', data=X)
    file.create_dataset('y', data=y)

rows_per_chunk = 100

coiled.create_software_environment(
    name="xgboost-on-coiled",
    pip=["coiled", "h5py", "dask", "xgboost"])

with coiled.Cluster(
        name="xgboost-cluster",
        n_workers=2,
        worker_cpu=8,
        worker_memory="16GiB",
        software="xgboost-on-coiled") as cluster:
    with Client(cluster) as client:
        file = h5py.File(input_path, mode='r')
        n_features = file["X"].shape[1]
        X = file["X"][:]
        X = da.from_array(X, chunks=(rows_per_chunk, n_features))
        y = file["y"][:]
        n_class = np.unique(y).size
        y = da.from_array(y, chunks=rows_per_chunk)
        dtrain = xgb.dask.DaskDMatrix(
            client,
            X,
            y,
            feature_names=['%i' % i for i in range(n_features)])
        model_params = {
            'objective': 'multi:softprob',
            'eval_metric': 'mlogloss',
            'num_class': n_class}
        # train model
        output = xgb.dask.train(
            client,
            params=model_params,
            dtrain=dtrain)
        booster = output["booster"]
If this code is used with large amounts of data, no error message is displayed. In this case, simply nothing happens. I do not see the data being uploaded.
I have tried so many things and nothing has worked. I would be very grateful if you have some advice for me on how to do this.
(Just in case you are wondering why I am trying to train a model on 7 million features: I want to get the feature importance for feature selection)
Is there a way to upload the h5py file directly to the workers?
When using Coiled, the recommended way is to upload the data to an AWS S3 bucket (or similar) and read it directly from there. This is because Coiled provisions Dask clusters in the cloud, and there is a cost to moving data (e.g., from your local machine to the cloud). It's more efficient to have your data in the cloud and, if possible, in the same AWS region. Also see the Coiled documentation: How do I access my data from Coiled?.
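For illustration, here is a minimal sketch of that pattern under a few assumptions: a hypothetical bucket named my-bucket in the same region as the cluster, the s3fs and zarr packages added to the software environment, and the data copied from HDF5 into Zarr (a format whose chunks each worker can read from S3 directly):
import dask.array as da
import h5py
import s3fs

# Hypothetical S3 location; adjust to your own bucket/prefix.
store = s3fs.S3Map("my-bucket/test.zarr", s3=s3fs.S3FileSystem())

rows_per_chunk = 100

# One-off, local step: stream the HDF5 datasets chunk by chunk into Zarr on S3.
with h5py.File("test.h5", mode="r") as f:
    n_features = f["X"].shape[1]
    da.from_array(f["X"], chunks=(rows_per_chunk, n_features)).to_zarr(
        store, component="X", overwrite=True)
    da.from_array(f["y"], chunks=rows_per_chunk).to_zarr(
        store, component="y", overwrite=True)

# Inside the `with Client(cluster) as client:` block, the workers then read
# their own chunks straight from S3 instead of from your local disk:
X = da.from_zarr(store, component="X")
y = da.from_zarr(store, component="y")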

Running Detectron2 inference in Caffe2

I have a Detectron2 .pth model that I converted successfully to Caffe2 .pb via the Detectron2 tools functionality located here: https://github.com/facebookresearch/detectron2/blob/master/tools/caffe2_converter.py
As recommended, I used the --run-eval flag to confirm the results while converting, and they are very similar to the original Detectron2 results.
To run inference on a new image using the resulting model.pb and model_init.pb files, I used the functionality located here:
https://github.com/facebookresearch/detectron2/blob/master/detectron2/export/api.py (mostly)
https://github.com/facebookresearch/detectron2/blob/master/detectron2/export/caffe2_inference.py
However, the inference results are not even close. Can anybody suggest reasons why this might happen? The Detectron2 repo says all preprocessing is done in the Caffe2 scripts, but am I missing something?
Here is my inference code:
import cv2
import torch
from detectron2.export import Caffe2Model

caffe2_model = Caffe2Model.load_protobuf(input_directory)
img = cv2.imread(input_image)
image = torch.as_tensor(img.astype("float32").transpose(2, 0, 1))
data = {'image': image, 'height': image.shape[1], 'width': image.shape[2]}
output = caffe2_model([data])
Your input_image dimensions should be a multiple of 32, so you probably need to resize your input image. Something like:
caffe2_model = Caffe2Model.load_protobuf(input_directory)
img = cv2.imread(input_image)
img = cv2.resize(img, (64, 64))
image = torch.as_tensor(img.astype("float32").transpose(2, 0, 1))
data = {'image': image, 'height': image.shape[1], 'width': image.shape[2]}
output = caffe2_model([data])
See the class detectron2.export.Caffe2Tracer in the docs: https://detectron2.readthedocs.io/en/latest/modules/export.html#detectron2.export.Caffe2Tracer
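For reference, a rough sketch of how Caffe2Tracer is typically used for the conversion (cfg, torch_model, sample_inputs, and output_directory are placeholders; argument details may vary across detectron2 versions, see the linked docs):
from detectron2.export import Caffe2Tracer

# cfg: the detectron2 config used for training; torch_model: the loaded .pth model;
# sample_inputs: a list of input dicts like the one built for inference above.
tracer = Caffe2Tracer(cfg, torch_model, sample_inputs)
caffe2_model = tracer.export_caffe2()         # returns a Caffe2Model
caffe2_model.save_protobuf(output_directory)  # writes the protobufs read back by load_protobuf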

'transpose expects a vector of size 5. But input(1) is a vector of size 3' when making inference POST request to tensorflow serving model

I have trained a model and deployed it to tensorflow-serving for inference.
I am getting this error when making a request:
<Response [400]>
{'error': 'transpose expects a vector of size 5. But input(1) is a vector of size 3\n\t [[{{node bidirectional_1/transpose}} = Transpose[T=DT_FLOAT, Tperm=DT_INT32, _class=["loc:#bidirectional_1/TensorArrayUnstack/TensorArrayScatter/TensorArrayScatterV3"], _output_shapes=[[50,?,512]], _device="/job:localhost/replica:0/task:0/device:CPU:0"](embedding_1/embedding_lookup, Attention/transpose/perm)]]'}
The notable difference between this model and the first one I deployed (which worked without issue) is that it contains a custom Keras layer, whereas my successful attempt contained only standard Keras layers.
This is how I am testing the POST request to my tf-serving model:
with open("CNN_last_test_set.pkl", "rb") as fp:
x_arr_test, y_test = pickle.load(fp)
out = x_arr_test[:1, :]
out = out.tolist()
payload = {
"instances": [{'input': [out]}]
}
r = requests.post('http://localhost:9000/v1/models/prod_mod:predict', json=payload)
pred = json.loads(r.content.decode('utf-8'))
To create the tensorflow model object to use with tf-serving I am using this function:
import os

import tensorflow as tf
import keras
from keras import backend as K

def export_model_custom_layer(filename, export_path_base):
    # set the mode to test time.
    K.set_learning_phase(0)
    model = keras.models.load_model(filename, custom_objects={"Attention": Attention})
    sess = K.get_session()
    # set the path to save the model and the model version
    export_version = 1
    export_path = os.path.join(
        tf.compat.as_bytes(export_path_base),
        tf.compat.as_bytes(str(export_version)))
    tf.saved_model.simple_save(
        sess,
        export_path,
        inputs={'input': model.input},
        outputs={t.name.split(':')[0]: t for t in model.outputs},
        legacy_init_op=tf.tables_initializer())
where I've defined my custom layer as a custom object. In order for this to work, I've added this function to my custom layer:
def get_config(self):
    config = {
        'name': "Attention"
    }
    base_config = super(Attention, self).get_config()
    return dict(list(base_config.items()) + list(config.items()))
When I predict with the model via standard Keras model.predict(), using the same data format the tf-serving model receives, it works as intended:
class Attention(Layer):
    ...

with open("CNN_last_test_set.pkl", "rb") as fp:
    x_arr_test, y_test = pickle.load(fp)

model = keras.models.load_model(r"Data/modelCNN.model", custom_objects={"Attention": Attention})
out = x_arr_test[:1, :]
test1 = out.shape
out = out.tolist()
test = model.predict([out])

>> print(test)
>> [[0.21351092]]
This leads me to believe that the issue is happening either when I export the model from Keras to the .pb file, or in the way the model is being run in the Docker container.
I am not sure what to make of this error but I'm assuming that this is related to my custom layer object considering that it worked with my previous model that only contained standard Keras layers.
Any help is greatly appreciated, thanks!
EDIT: I solved the issue. The problem was that my input data had two more dimensions than necessary. I realized this when I removed the brackets from around the variable "out": the error changed from 'transpose expects a vector of size 5' to 'transpose expects a vector of size 4'. So I reshaped my "out" variable from (1, 50) to (50,), removed the brackets, and the problem resolved itself.
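For reference, a minimal sketch of the corrected request body implied by that fix (same variable names as in the snippet above):
# Drop the extra batch dimension and the extra brackets around `out`.
out = x_arr_test[0]          # shape (50,) instead of (1, 50)
payload = {
    "instances": [{'input': out.tolist()}]
}
r = requests.post('http://localhost:9000/v1/models/prod_mod:predict', json=payload)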

Keras: ValueError: No data provided for "input_1". Need data for each key

I am using the keras functional API with input images of dimension (224, 224, 3). I have the following model using the functional API, although a similar problem seems to arise with sequential models:
from keras.layers import Input, Dense
from keras.models import Model as KerasModel

input = Input(shape=(224, 224, 3,))
shared_layers = Dense(16)(input)
model = KerasModel(input=input, output=shared_layers)
model.compile(loss='binary_crossentropy', optimizer='sgd', metrics=['accuracy'])
I am calling model.fit_generator where my generator has
yield ({'input_1': image}, {'output': classification})
image is the input (224, 224, 3) image and classification is in {-1,1}.
On fitting the model, I get an error
ValueError: No data provided for "dense_1". Need data for each key in: ['dense_1']
One strange thing is that if I switch the input_1 target of the dict to dense_1, the error switches to missing an input for input_1, but goes back to missing dense_1 if both keys are in the data generator.
This happens whether I call fit_generator or get batches from the generator and call train_on_batch.
Does anyone know what's going on? From what I can tell, this should be the same as given in the documentation although with a different input size.
Full traceback:
Traceback (most recent call last):
  File "pymask.py", line 303, in <module>
    main(sys.argv)
  File "pymask.py", line 285, in main
    keras.callbacks.ProgbarLogger()
  File "/home/danielunderwood/virtualenvs/keras/lib/python3.6/site-packages/keras/engine/training.py", line 1557, in fit_generator
    class_weight=class_weight)
  File "/home/danielunderwood/virtualenvs/keras/lib/python3.6/site-packages/keras/engine/training.py", line 1314, in train_on_batch
    check_batch_axis=True)
  File "/home/danielunderwood/virtualenvs/keras/lib/python3.6/site-packages/keras/engine/training.py", line 1029, in _standardize_user_data
    exception_prefix='model input')
  File "/home/danielunderwood/virtualenvs/keras/lib/python3.6/site-packages/keras/engine/training.py", line 52, in standardize_input_data
    str(names))
ValueError: No data provided for "input_1". Need data for each key in: ['input_1']
I encountered this error in 3 cases (in R):
The input data does not have the same dimension as was declared in the first layer
The input data includes missing values
The input data is not a matrix (for example, a data frame)
Please check all of the above.
Maybe this code in R can help:
library(keras)
#The network should identify the rule that a row sum greater than 1.5 should yield an output of 1
my_x=matrix(data=runif(30000), nrow=10000, ncol=3)
my_y=ifelse(rowSums(my_x)>1.5,1,0)
my_y=to_categorical(my_y, 2)
model = keras_model_sequential()
layer_dense(model,units = 2000, activation = "relu", input_shape = c(3))
layer_dropout(model,rate = 0.4)
layer_dense(model,units = 50, activation = "relu")
layer_dropout(model,rate = 0.3)
layer_dense(model,units = 2, activation = "softmax")
compile(model,loss = "categorical_crossentropy",optimizer = optimizer_rmsprop(),metrics = c("accuracy"))
history <- fit(model, my_x, my_y, epochs = 5, batch_size = 128, validation_split = 0.2)
evaluate(model,my_x, my_y,verbose = 0)
predict_classes(model,my_x)
I have encountered this issue as well, and none of the above-mentioned answers worked. According to the Keras documentation, you can pass the arguments either as a dictionary, like this:
model.fit({'main_input': headline_data, 'aux_input': additional_data},
          {'main_output': labels, 'aux_output': labels},
          epochs=50, batch_size=32)
or as a list, like this:
model.fit([headline_data, additional_data], [labels, labels],
          epochs=50, batch_size=32)
The dictionary version didn't work for me with keras version 2.0.9. I have used the list version as a workaround for now.
This was due to me misunderstanding how Keras outputs work. The target data must be keyed by the name of the layer passed as the output argument to Model; I had wrongly assumed that an 'output' key in the data dictionary would automatically be routed to that layer.
yield ({'input_1': image}, {'output': classification})
Replace output with dense_1.
It will work.
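A minimal sketch of the corrected generator output, assuming the output layer is reported as dense_1 by model.summary() (the layer name may differ in your model):
# Key the targets by the model's actual output layer name ('dense_1' here),
# not by the literal string 'output'.
yield ({'input_1': image}, {'dense_1': classification})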

Tensorflow Image Shape Error

I have trained a classifier and I now want to pass any single image through.
I'm using the keras library with Tensorflow as the backend.
I'm getting an error I can't seem to get past
img_path = '/path/to/my/image.jpg'
import numpy as np
from keras.preprocessing import image
x = image.load_img(img_path, target_size=(250, 250))
x = image.img_to_array(x)
x = np.expand_dims(x, axis=0)
preds = model.predict(x)
Do I need to reshape my data to have None as the first dimension? I'm confused as to why TensorFlow would expect None as the first dimension.
Error when checking : expected convolution2d_input_1 to have shape (None, 250, 250, 3) but got array with shape (1, 3, 250, 250)
I'm wondering if there has been an issue with the architecture of my trained model?
Edit: if I call model.summary(), it gives convolution2d_input_1 as...
Edit: I did play around with the suggestion below, but used NumPy to transpose instead of tf, and I still seem to be hitting the same issue!
None matches any number. Usually, when you pass data to a model, you are expected to pass a tensor of dimensions None x data_size, meaning the first dimension can be anything and denotes the batch size. In your case, the problem is that you pass 250 x 250 x 3, and 3 x 250 x 250 is expected. Try:
x = image.load_img(img_path, target_size=(250, 250))
x_trans = tf.transpose(x, perm=[2, 0, 1])
x_expanded = np.expand_dims(x_trans, axis=0)
preds = model.predict(x_expanded)
OK, so using feedback from Sygi I think I have half solved it.
The error was actually telling me I needed to pass in my dimensions as [1, 250, 250, 3], so that was an easy fix. I must say I'm not sure why TF expects the dimensions in this order, as looking at the docs it doesn't seem right, so more research is required here.
Moving ahead, I'm not sure transpose is the way to go, as if I use a different input image the dimensions may not be in the same order, meaning the transpose doesn't work properly.
Instead of transpose I'll probably try calling x_reshape = img.reshape((1, 250, 250, 3)), depending on what I find out about dimension order when reshaping for TF.
Thanks for the hints, Sygi :)
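For what it's worth, a minimal sketch of that last step, assuming x is the (3, 250, 250) channels-first array produced by image.img_to_array in the question and model is the trained classifier. Note that reshape only changes the shape without reordering elements, so a NumPy transpose is usually the safer way to get to channels-last before adding the batch dimension:
import numpy as np

# x: array of shape (3, 250, 250), channels first
x = np.transpose(x, (1, 2, 0))   # reorder to channels last: (250, 250, 3)
x = np.expand_dims(x, axis=0)    # add the batch dimension: (1, 250, 250, 3)
preds = model.predict(x)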
