Vertex AI - Deployment failed - Docker

I'm trying to deploy my custom-trained model using a custom container, i.e. create an endpoint from a model that I created.
I'm doing the same thing with AI Platform (same model and container) and it works fine there.
On the first try I deployed the model successfully, but ever since, whenever I try to create an endpoint it says "deploying" for over an hour and then fails with the following error:
google.api_core.exceptions.FailedPrecondition: 400 Error: model server never became ready. Please validate that your model file or container configuration are valid. Model server logs can be found at (link)
The log shows the following:
* Running on all addresses (0.0.0.0)
WARNING: This is a development server. Do not use it in a production deployment.
* Running on http://127.0.0.1:8080
[05/Jul/2022 12:00:37] "GET /v1/endpoints/1/deployedModels/2025850174177280000 HTTP/1.1" 404 -
[05/Jul/2022 12:00:38] "GET /v1/endpoints/1/deployedModels/2025850174177280000 HTTP/1.1" 404 -
The last of these lines repeats until the deployment ultimately fails.
My flask app is as follows:
import base64
import os.path
import pickle
from typing import Dict, Any

from flask import Flask, request, jsonify
from streamliner.models.general_model import GeneralModel


class Predictor:
    def __init__(self, model: GeneralModel):
        self._model = model

    def predict(self, instance: str) -> Dict[str, Any]:
        decoded_pickle = base64.b64decode(instance)
        features_df = pickle.loads(decoded_pickle)
        prediction = self._model.predict(features_df).tolist()
        return {"prediction": prediction}


app = Flask(__name__)

with open('./model.pkl', 'rb') as model_file:
    model = pickle.load(model_file)
predictor = Predictor(model=model)


@app.route("/predict", methods=['POST'])
def predict() -> Any:
    if request.method == "POST":
        instance = request.get_json()
        instance = instance['instances'][0]
        predictions = predictor.predict(instance)
        return jsonify(predictions)


@app.route("/health")
def health() -> str:
    return "ok"


if __name__ == '__main__':
    port = int(os.environ.get("PORT", 8080))
    app.run(host='0.0.0.0', port=port)
The deployment code I run through Python is irrelevant here, because the problem persists when I deploy through GCP's UI.
The model creation code is as follows:
def upload_model(self):
    model = {
        "name": self.model_name_on_platform,
        "display_name": self.model_name_on_platform,
        "version_aliases": ["default", self.run_id],
        "container_spec": {
            "image_uri": f'{REGION}-docker.pkg.dev/{GCP_PROJECT_ID}/{self.repository_name}/{self.run_id}',
            "predict_route": "/predict",
            "health_route": "/health",
        },
    }
    parent = self.model_service_client.common_location_path(project=GCP_PROJECT_ID, location=REGION)
    model_path = self.model_service_client.model_path(project=GCP_PROJECT_ID,
                                                      location=REGION,
                                                      model=self.model_name_on_platform)
    upload_model_request_specifications = {'parent': parent, 'model': model,
                                           'model_id': self.model_name_on_platform}
    try:
        print("trying to get model")
        self.get_model(model_path=model_path)
    except NotFound:
        print("didn't find model, creating a new one")
    else:
        print("found an existing model, creating a new version under it")
        upload_model_request_specifications['parent_model'] = model_path

    upload_model_request = model_service.UploadModelRequest(upload_model_request_specifications)
    response = self.model_service_client.upload_model(request=upload_model_request, timeout=1800)
    print("Long running operation:", response.operation.name)
    upload_model_response = response.result(timeout=1800)
    print("upload_model_response:", upload_model_response)
My problem is very close to this one, with the difference that I do have a health check.
Why would it work on the first deployment and fail ever since? Why would it work on AI Platform but fail on Vertex AI?

This issue could be due to several different reasons:
Validate the container configuration port; it should use port 8080. This configuration is important because Vertex AI sends liveness checks, health checks, and prediction requests to this port on the container. You can see this document about containers, and this other one about custom containers.
Another possible reason is quota limits, which may need to be increased. You can verify this, and request an increase, using this document.
In the health and predict routes, use the MODEL_NAME you are using, like this example (a fuller sketch follows these suggestions):
"predict_route": "/v1/models/MODEL_NAME:predict",
"health_route": "/v1/models/MODEL_NAME",
Validate that the account you are using has enough permissions to read your project's GCS bucket.
Validate the model location; it should be the correct path.
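For illustration, here is a minimal sketch of a model payload with an explicit port and MODEL_NAME-style routes, modeled on the container_spec from the question; the region, project, repository, image, and MODEL_NAME values are placeholders rather than values from the original post:
REGION = "us-central1"          # placeholder
GCP_PROJECT_ID = "my-project"   # placeholder

model = {
    "name": "my-model",
    "display_name": "my-model",
    "container_spec": {
        "image_uri": f"{REGION}-docker.pkg.dev/{GCP_PROJECT_ID}/my-repo/my-image",
        # Vertex AI sends health checks and prediction requests to this port.
        "ports": [{"container_port": 8080}],
        "predict_route": "/v1/models/MODEL_NAME:predict",
        "health_route": "/v1/models/MODEL_NAME",
    },
}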
If none of the suggestions above work, you will need to contact GCP Support by creating a Support Case; it's impossible for the community to troubleshoot this further without using internal GCP resources.

In case you haven't found a solution yet, you can try out custom prediction routines. They are really helpful, as they strip away the need to write the server part of the code and let you focus solely on the logic of your ML model and any pre- or post-processing. Here is a link to help you out: https://codelabs.developers.google.com/vertex-cpr-sklearn#0. Hope this helps.
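To make this concrete, here is a minimal sketch of such a routine, modeled on the scikit-learn predictor from that codelab (the Predictor base class and prediction_utils helper come from the google-cloud-aiplatform SDK); the class name and the model.pkl artifact name are assumptions for illustration:
import pickle
import numpy as np
from google.cloud.aiplatform.prediction.predictor import Predictor
from google.cloud.aiplatform.utils import prediction_utils

class CprPredictor(Predictor):
    def load(self, artifacts_uri: str) -> None:
        # Copy the model artifacts from GCS into the container and load the model.
        prediction_utils.download_model_artifacts(artifacts_uri)
        with open("model.pkl", "rb") as f:  # assumed artifact name
            self._model = pickle.load(f)

    def preprocess(self, prediction_input: dict) -> np.ndarray:
        # The serving layer parses the request body; pull out the instances.
        return np.asarray(prediction_input["instances"])

    def predict(self, instances: np.ndarray):
        return self._model.predict(instances)

    def postprocess(self, prediction_results: np.ndarray) -> dict:
        # The serving layer handles the HTTP routes and health checks,
        # so only the model logic lives here.
        return {"predictions": prediction_results.tolist()}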

Related

str() is not usable anymore to get true value of a Text tfx.data_types.RuntimeParameter during pipeline execution

How can I get a string as the true value of a tfx.orchestration.data_types.RuntimeParameter during pipeline execution?
Hi,
I'm defining a runtime parameter like data_root = tfx.orchestration.data_types.RuntimeParameter(name='data-root', ptype=str) for a base path, from which I define many subfolders for various components, e.g. str(data_root)+'/model' for the model serving path in tfx.components.Pusher().
It was working like a charm before I moved to tfx==1.12.0: str(data_root) now produces a JSON dump instead of the value.
To overcome that, I tried to define a runtime parameter for the model path, like model_root = tfx.orchestration.data_types.RuntimeParameter(name='model-root', ptype=str), and then feed the Pusher component the way I saw in many tutorials:
pusher = Pusher(model=trainer.outputs['model'],
                model_blessing=evaluator.outputs['blessing'],
                push_destination=tfx.proto.PushDestination(
                    filesystem=tfx.proto.PushDestination.Filesystem(base_directory=model_root)))
but I get a TypeError saying that tfx.proto.PushDestination.Filesystem does not accept a RuntimeParameter.
This completely breaks the existing setup, as I receive those parameters from an external client for each Kubeflow run.
Thanks a lot for any help.
I was able to fix it.
First of all, the docstring is not clear about which parameters of Pusher can be a RuntimeParameter and which cannot.
I finally went to the __init__ definition of the Pusher component and saw that only the push_destination parameter can be a RuntimeParameter:
def __init__(
    self,
    model: Optional[types.BaseChannel] = None,
    model_blessing: Optional[types.BaseChannel] = None,
    infra_blessing: Optional[types.BaseChannel] = None,
    push_destination: Optional[Union[pusher_pb2.PushDestination,
                                     data_types.RuntimeParameter]] = None,
    custom_config: Optional[Dict[str, Any]] = None,
    custom_executor_spec: Optional[executor_spec.ExecutorSpec] = None):
Then I defined the component accordingly, using my RuntimeParameter:
model_root = tfx.orchestration.data_types.RuntimeParameter(name='model-serving-location', ptype=str)

pusher = Pusher(model=trainer.outputs['model'],
                model_blessing=evaluator.outputs['blessing'],
                push_destination=model_root)
Since the push_destination parameter is supposed to be the proto message tfx.proto.pusher_pb2.PushDestination, you then have to respect the associated schema when instantiating and running a pipeline execution, meaning the parameter value should look like:
{'type': 'model-serving-location', 'value': '{"filesystem": {"base_directory": "path/to/model/serving/for/the/run"}}'}
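As a sketch of how that value can be supplied at run time (assuming a Kubeflow Pipelines v1 client and an already-compiled pipeline package; the host, file name, and run argument name are illustrative):
import json
import kfp

client = kfp.Client(host='https://<your-kfp-endpoint>')  # illustrative host

# Serialize the PushDestination proto schema as a JSON string.
push_destination_value = json.dumps(
    {"filesystem": {"base_directory": "path/to/model/serving/for/the/run"}}
)

client.create_run_from_pipeline_package(
    pipeline_file='pipeline.json',  # compiled pipeline package (illustrative)
    arguments={'model-serving-location': push_destination_value},
)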
Regards

Unable to run a Rasa agent inside a Rasa Core server

I am trying to load and run a Rasa model inside my NLU server in Rasa 3; however, after loading the model with Agent, I am unable to perform inference with it.
@DefaultV1Recipe.register(
    [DefaultV1Recipe.ComponentType.INTENT_CLASSIFIER], is_trainable=False
)
class MyCustomComponent(GraphComponent, EntityExtractorMixin):
    def __init__(self, config):
        model_path = "model_path"
        self.model = Agent.load(model_path=model_path)

    def process(self, messages):
        for message in messages:
            result = self.model.parse_message(message.get("text"))
            message.set(
                "my_field",
                result.get("intent"),
                add_to_output=True,
            )
        return messages
Every time the parse_message method executes, it returns a coroutine, and I am not sure how to extract the results from it.
And if I try to go via asyncio.get_running_loop() and the loop.run_until_complete method, I get the following error:
asyncio.run() cannot be called from a running event loop
Any ideas on how this problem can be solved?
Thanks!
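As a general illustration of that error (not Rasa-specific, and assuming the coroutine does not depend on the loop that is already running), one common pattern is to run the coroutine to completion on a separate thread that owns its own event loop:
import asyncio
import concurrent.futures

def run_coroutine_blocking(coro, timeout=30):
    """Drive a coroutine from synchronous code, even if a loop is running here."""
    try:
        asyncio.get_running_loop()
    except RuntimeError:
        # No event loop is running in this thread, so asyncio.run() is safe.
        return asyncio.run(coro)
    # A loop is already running in this thread; run the coroutine in a worker
    # thread that creates its own event loop, and block on the result here.
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        return pool.submit(asyncio.run, coro).result(timeout=timeout)

# Hypothetical usage inside process():
# result = run_coroutine_blocking(self.model.parse_message(message.get("text")))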

Tensorflow federated : How to map the remote-worker with remote datasets in iterative_process.next?

I would like to point federated_train_data to remote client data, as shown in the code below. Is this possible? How?
If not, what further implementation is required for me to try this out? Kindly point me to the relevant code.
factory = tff.framework.create_executor_factory(make_remote_executor)
context = tff.framework.ExecutionContext(factory)
tff.framework.set_default_context(context)

state = iterative_process.initialize()
state, metrics = iterative_process.next(state, federated_train_data)

def make_remote_executor(inferred_cardinalities):
    """Make remote executor."""

    def create_worker_stack(ex):
        ex = tff.framework.ThreadDelegatingExecutor(ex)
        return tff.framework.ReferenceResolvingExecutor(ex)

    client_ex = []
    num_clients = inferred_cardinalities.get(tff.CLIENTS, None)
    if num_clients:
        print('Inferred that there are {} clients'.format(num_clients))
    else:
        print('No CLIENTS placement provided')

    for _ in range(num_clients or 0):
        channel = grpc.insecure_channel('{}:{}'.format(FLAGS.host, FLAGS.port))
        remote_ex = tff.framework.RemoteExecutor(channel, rpc_mode='STREAMING')
        worker_stack = create_worker_stack(remote_ex)
        client_ex.append(worker_stack)

    federating_strategy_factory = tff.framework.FederatedResolvingStrategy.factory(
        {
            tff.SERVER: create_worker_stack(tff.framework.EagerTFExecutor()),
            tff.CLIENTS: client_ex,
        })
    unplaced_ex = create_worker_stack(tff.framework.EagerTFExecutor())
    federating_ex = tff.framework.FederatingExecutor(federating_strategy_factory,
                                                     unplaced_ex)
    return tff.framework.ReferenceResolvingExecutor(federating_ex)
This is from https://github.com/tensorflow/federated/blob/master/tensorflow_federated/python/examples/remote_execution/remote_executor_example.py
In the linked example, you can see that the client data comes from a per-client tf.data.Dataset generated by the make_federated_data function.
Client data can be supplied in the form of a serializable tf.data.Dataset or, depending on how you're defining your iterative process, you can tff.federated_map some input data (such as client IDs) to datasets using TensorFlow.
Note that RemoteExecutors are not designed to run against data "on clients", that is, on the remote executor itself. They could perhaps be used this way using TensorFlow code to read data from the remote executor's filesystem into a dataset, but in general this is not a supported use-case. The recommended way to handle client data is to have a TensorFlow computation that can generate a tf.data.Dataset representing the client data based on a client ID or other input to the client's TensorFlow computation.
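As a rough sketch of that recommended pattern (assuming per-client TFRecord files laid out under a directory named by the client ID; the path layout and parsing are illustrative rather than taken from the linked example):
import tensorflow as tf
import tensorflow_federated as tff

@tff.tf_computation(tf.string)
def dataset_for_client(client_id):
    # Build a tf.data.Dataset for one client from files keyed by its ID.
    pattern = tf.strings.join(['/data/clients/', client_id, '/*.tfrecord'])
    files = tf.data.Dataset.list_files(pattern)
    return files.interleave(tf.data.TFRecordDataset)

# Inside a tff.federated_computation, client IDs placed at CLIENTS can then be
# mapped to per-client datasets:
#   client_datasets = tff.federated_map(dataset_for_client, client_ids)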

How to find the concurrent.future input arguments for a Dask distributed function call

I'm using Dask to distribute work to a cluster. I'm creating a cluster and calling .submit() to submit a function to the scheduler. It returns a Future object. I'm trying to figure out how to obtain the input arguments to that future object once it has completed.
For example:
from dask.distributed import Client
from dask_yarn import YarnCluster

def somefunc(a, b, c, ..., n):
    # do something
    return

cluster = YarnCluster.from_specification(spec)
client = Client(cluster)
future = client.submit(somefunc, arg1, arg2, ..., argn)
# ^^^ how do I obtain the input arguments for this future object?
# `future.args` doesn't work
Futures don't hold onto their inputs. You can do this yourself though.
futures = {}
future = client.submit(func, *args)
futures[future] = args
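A small usage sketch of that bookkeeping (assuming client, func, and a list of argument tuples named arg_list are defined elsewhere):
from dask.distributed import as_completed

futures = {}
for args in arg_list:                   # arg_list is illustrative
    future = client.submit(func, *args)
    futures[future] = args

for future in as_completed(futures):
    # Recover the inputs stored at submission time alongside each result.
    print(futures[future], future.result())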
A future only knows the key by which it is uniquely known on the scheduler. At the time of submission, if it has dependencies, these are transiently found and sent to the scheduler, but no copy is kept locally.
The pattern you are after sounds more like delayed, which keeps hold of its graph, and indeed client.compute(delayed_thing) returns a future.
d = delayed(somefunc)(a, b, c)
future = client.compute(d)
dict(d.dask) # graph of things needed by d
You could communicate directly with the scheduler to find the dependencies of some key, which will in general also be keys, and so reverse-engineer the graph, but that does not sound like a great path, so I won't try to describe it here.

Retrained inception_v3 model deployed in Cloud ML Engine always outputs the same predictions

I followed the TensorFlow For Poets codelab for transfer learning using inception_v3. It generates retrained_graph.pb and retrained_labels.txt files, which can be used to make predictions locally (running label_image.py).
Then, I wanted to deploy this model to Cloud ML Engine, so that I could make online predictions. For that, I had to export the retrained_graph.pb to SavedModel format. I managed to do it by following the indications in this answer from Google's @rhaertel80 and this python file from the Flowers Cloud ML Engine Tutorial. Here is my code:
import tensorflow as tf
from tensorflow.contrib import layers

from tensorflow.python.saved_model import builder as saved_model_builder
from tensorflow.python.saved_model import signature_constants
from tensorflow.python.saved_model import signature_def_utils
from tensorflow.python.saved_model import tag_constants
from tensorflow.python.saved_model import utils as saved_model_utils

export_dir = '../tf_files/saved7'
retrained_graph = '../tf_files/retrained_graph2.pb'
label_count = 5

def build_signature(inputs, outputs):
    signature_inputs = {key: saved_model_utils.build_tensor_info(tensor) for key, tensor in inputs.items()}
    signature_outputs = {key: saved_model_utils.build_tensor_info(tensor) for key, tensor in outputs.items()}

    signature_def = signature_def_utils.build_signature_def(
        signature_inputs,
        signature_outputs,
        signature_constants.PREDICT_METHOD_NAME
    )

    return signature_def

class GraphReferences(object):
    def __init__(self):
        self.examples = None
        self.train = None
        self.global_step = None
        self.metric_updates = []
        self.metric_values = []
        self.keys = None
        self.predictions = []
        self.input_jpeg = None

class Model(object):
    def __init__(self, label_count):
        self.label_count = label_count

    def build_image_str_tensor(self):
        image_str_tensor = tf.placeholder(tf.string, shape=[None])

        def decode_and_resize(image_str_tensor):
            return image_str_tensor

        image = tf.map_fn(
            decode_and_resize,
            image_str_tensor,
            back_prop=False,
            dtype=tf.string
        )

        return image_str_tensor

    def build_prediction_graph(self, g):
        tensors = GraphReferences()
        tensors.examples = tf.placeholder(tf.string, name='input', shape=(None,))
        tensors.input_jpeg = self.build_image_str_tensor()

        keys_placeholder = tf.placeholder(tf.string, shape=[None])
        inputs = {
            'key': keys_placeholder,
            'image_bytes': tensors.input_jpeg
        }

        keys = tf.identity(keys_placeholder)
        outputs = {
            'key': keys,
            'prediction': g.get_tensor_by_name('final_result:0')
        }

        return inputs, outputs

    def export(self, output_dir):
        with tf.Session(graph=tf.Graph()) as sess:
            with tf.gfile.GFile(retrained_graph, "rb") as f:
                graph_def = tf.GraphDef()
                graph_def.ParseFromString(f.read())
                tf.import_graph_def(graph_def, name="")
                g = tf.get_default_graph()
                inputs, outputs = self.build_prediction_graph(g)

            signature_def = build_signature(inputs=inputs, outputs=outputs)
            signature_def_map = {
                signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY: signature_def
            }

            builder = saved_model_builder.SavedModelBuilder(output_dir)
            builder.add_meta_graph_and_variables(
                sess,
                tags=[tag_constants.SERVING],
                signature_def_map=signature_def_map
            )
            builder.save()

model = Model(label_count)
model.export(export_dir)
This code generates a saved_model.pb file, which I then used to create the Cloud ML Engine model. I can get predictions from this model using gcloud ml-engine predict --model my_model_name --json-instances request.json, where the contents of request.json are:
{ "key": "0", "image_bytes": { "b64": "jpeg_image_base64_encoded" } }
However, no matter which JPEG I encode in the request, I always get the exact same wrong predictions (see the prediction output screenshot).
I guess the problem is in the way the Cloud ML prediction API passes the base64-encoded image bytes to the input tensor "DecodeJpeg/contents:0" of inception_v3 (the build_image_str_tensor() method in the previous code). Any clue on how I can solve this issue and have my locally retrained model serve correct predictions on Cloud ML Engine?
(Just to make it clear, the problem is not in retrained_graph.pb, as it makes correct predictions when I run it locally; nor is it in request.json, because the same request file worked without problems when following the Flowers Cloud ML Engine Tutorial linked above.)
First, a general warning. The TensorFlow for Poets codelab was not written in a way that is very amenable to production serving (partly manifested by the workarounds you are having to implement). You would normally export a prediction-specific graph that doesn't contain all of the extra training ops. So while we can try and hack something together that works, extra work may be needed to productionize this graph.
The approach of your code appears to be to import one graph, add some placeholders, and then export the result. This is generally fine. However, in the code shown in the question, you are adding input placeholders without actually connecting them to anything in the imported graph. You end up with a graph containing multiple disconnected subgraphs, something like (excuse the crude diagram):
image_str_tensor [input=image_bytes] -> <nothing>
keys_placeholder [input=key] -> identity [output=key]
inception_subgraph -> final_graph [output=prediction]
By inception_subgraph I mean all of the ops that you are importing.
So image_bytes is effectively a no-op and is ignored; key gets passed through; and prediction contains the result of running the inception_subgraph; since it's not using the input you are passing, it returns the same result every time (though I admit I actually expected an error here).
To address this problem, we would need to connect the placeholder you've created to the one that already exists in inception_subgraph to create a graph more or less like this:
image_str_tensor [input=image_bytes] -> inception_subgraph -> final_graph [output=prediction]
keys_placeholder [input=key] -> identity [output=key]
Note that image_str_tensor is going to be a batch of images, as required by the prediction service, but the inception graph's input is actually a single image. In the interest of simplicity, we're going to address this in a hacky way: we'll assume we'll be sending images one-by-one. If we ever send more than one image per request, we'll get errors. Also, batch prediction will never work.
The main change you need is the import statement, which connects the placeholder we've added to the existing input in the graph (you'll also see the code for changing the shape of the input):
Putting it all together, we get something like:
import tensorflow as tf
from tensorflow.contrib import layers

from tensorflow.python.saved_model import builder as saved_model_builder
from tensorflow.python.saved_model import signature_constants
from tensorflow.python.saved_model import signature_def_utils
from tensorflow.python.saved_model import tag_constants
from tensorflow.python.saved_model import utils as saved_model_utils

export_dir = '../tf_files/saved7'
retrained_graph = '../tf_files/retrained_graph2.pb'
label_count = 5

class Model(object):
    def __init__(self, label_count):
        self.label_count = label_count

    def build_prediction_graph(self, g):
        # Note: export() below builds inputs/outputs directly and does not call this method.
        inputs = {
            'key': keys_placeholder,
            'image_bytes': tensors.input_jpeg
        }

        keys = tf.identity(keys_placeholder)
        outputs = {
            'key': keys,
            'prediction': g.get_tensor_by_name('final_result:0')
        }

        return inputs, outputs

    def export(self, output_dir):
        with tf.Session(graph=tf.Graph()) as sess:
            # This will be our input that accepts a batch of inputs
            image_bytes = tf.placeholder(tf.string, name='input', shape=(None,))
            # Force it to be a single input; will raise an error if we send a batch.
            coerced = tf.squeeze(image_bytes)
            # When we import the graph, we'll connect `coerced` to `DecodeJPGInput:0`
            input_map = {'DecodeJPGInput:0': coerced}

            with tf.gfile.GFile(retrained_graph, "rb") as f:
                graph_def = tf.GraphDef()
                graph_def.ParseFromString(f.read())
                tf.import_graph_def(graph_def, input_map=input_map, name="")

            keys_placeholder = tf.placeholder(tf.string, shape=[None])

            inputs = {'image_bytes': image_bytes, 'key': keys_placeholder}

            keys = tf.identity(keys_placeholder)
            outputs = {
                'key': keys,
                'prediction': tf.get_default_graph().get_tensor_by_name('final_result:0')
            }

            tf.saved_model.simple_save(sess, output_dir, inputs, outputs)

model = Model(label_count)
model.export(export_dir)
I believe that your error is quite simple to solve:
{ "key": "0", "image_bytes": { "b64": "jpeg_image_base64_encoded" } }
You used " to specify what, I believe, is a string. By doing that, your program is reading jpeg_image_base64_encoded instead of the actual value of the variable.
That's why you get always the same prediction.
For anyone working on deploying TensorFlow image-based models on Google Cloud ML, in particular anyone trying to get the base64 encoding working for images (as discussed in this question), I'd recommend also having a look at the following repo that I put together. I spent a lot of time working through the deployment process and was only able to find partial information across the web and on Stack Overflow. This repo has a full working version of deploying a TensorFlow tf.keras model onto Google Cloud ML, and I think it will help people who are facing the same challenges I faced. Here's the GitHub link:
https://github.com/mhwilder/tf-keras-gcloud-deployment.
The repo covers the following topics:
Training a fully convolutional tf.keras model locally (mostly just to have a model for testing the next parts)
Example code for exporting models that work with the Cloud ML Engine
Three model versions that accept different JSON input types (1. An image converted to a simple list string, 2. An image converted to a base64 encoded string, and 3. A URL that points to an image in a Google Storage bucket)
Instructions and references for general Google Cloud Platform setup
Code for preparing the input JSON files for the 3 different input types
Google Cloud ML model and version creation instructions from the console
Examples using the Google Cloud SDK to call predict on the models
