When I try to run this script in a conda environment, I get an error saying that no checkpoint file was found at inception-v3:
-> bazel-bin/tensorflow_serving/example/inception_export --checkpoint_dir=inception-v3 --export_dir=inception-export
Could anybody please help me with this issue? I would be thankful.
As specified in the documentation of inception_export, you need to provide the directory where your model checkpoint lives. Do you have a valid inception-v3 checkpoint in tensorflow_serving/example/inception-v3/?
If not, you should download a model checkpoint to use.
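If you are not sure whether that directory actually contains a usable checkpoint, you can sanity-check it from Python. This is only a minimal sketch using the standard tf.train checkpoint utilities (TF 1.x-style API), with the inception-v3 directory name taken from the command above; it does not claim to mirror how inception_export checks internally:

import tensorflow as tf

checkpoint_dir = "inception-v3"  # the directory passed as --checkpoint_dir

# get_checkpoint_state looks for the 'checkpoint' metadata file in the
# directory and returns None when no checkpoint is registered there.
ckpt = tf.train.get_checkpoint_state(checkpoint_dir)
if ckpt and ckpt.model_checkpoint_path:
    print("Found checkpoint:", ckpt.model_checkpoint_path)
else:
    print("No checkpoint found in", checkpoint_dir)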
I am trying to deploy the example Kedro starter project (pandas-iris).
I successfully ran it locally (kedro run) and then, having installed kedro-docker, initialized the Docker setup, built the image, and pushed it to my registry.
Unfortunately, both kedro docker run and docker run myDockerID/iris_image generate the same error:
DataSetError: Failed while loading data from data set
CSVDataSet(filepath=/home/kedro/data/01_raw/iris.csv, load_args={},
protocol=file, save_args={'index': False}).
[Errno 2] No such file or directory: '/home/kedro/data/01_raw/iris.csv'
It looks like the data catalog wasn't copied to the image/container.
I would appreciate your help,
Many thanks :)
Andy
If the data catalog wasn't copied, then you probably wouldn't get that path at all. Does the data actually live there?
Problem solved: I had to comment out the data entry in the .dockerignore file. The original kedro-docker setup keeps the data folder ignored.
#mediumnok: thank you for the comment, there was no problem with the path :)
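For reference, the fix in the .dockerignore that kedro-docker generates looks roughly like this (a sketch; the exact file contents depend on your kedro-docker version):

# before: the data folder is excluded from the Docker build context
data

# after: comment the entry out so data/ (including data/01_raw/iris.csv) is copied into the image
# data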
I am trying to set up AI habitat and habitat challenge and came across this issue when I was trying to run the DD-PPO training script indicated here: https://github.com/facebookresearch/habitat-challenge#pointnavobjectnav-baselines-and-dd-ppo-training-starter-code
I have downloaded the Gibson dataset following the above instructions and extracted it to the folder habitat-challenge/habitat-challenge-data/data/scene_datasets/gibson/ as indicated. I downloaded the 1.5 GB Habitat challenge dataset, and it contained .glb and .navmesh files. However, when I tried running the DD-PPO script with
sh habitat_baselines/rl/ddppo/single_node.sh from the habitat-lab directory, it gave me errors saying that there are no .scn files in the habitat-challenge/habitat-challenge-data/data/scene_datasets/gibson/ directory.
Does anyone know how I can resolve this issue or where to find these .scn files?
Apparently this is only a warning and does not affect the pointnav or objectnav challenges.
https://github.com/facebookresearch/habitat-challenge/issues/87#issuecomment-920540129
I am trying in Amazon Sagemaker to deploy an existing Scikit-Learn model. So a model that wasn't trained on SageMaker, but locally on my machine.
On my local (windows) machine I've saved my model as model.joblib and tarred the model to model.tar.gz.
Next, I've uploaded this model to my S3 bucket ('my_bucket') in the following path s3://my_bucket/models/model.tar.gz. I can see the tar file in S3.
But when I try to deploy the model, it keeps giving the error message "Failed to extract model data archive".
The .tar.gz is generated on my local machine by running 'tar -czf model.tar.gz model.joblib' in a PowerShell command window.
The code for uploading to S3
import boto3

s3 = boto3.client("s3",
                  region_name='eu-central-1',
                  aws_access_key_id=AWS_KEY_ID,
                  aws_secret_access_key=AWS_SECRET)
s3.upload_file(Filename='model.tar.gz', Bucket=my_bucket, Key='models/model.tar.gz')
The code for creating the estimator and deploying:
import boto3
from sagemaker.sklearn.estimator import SKLearnModel
...
model_data = 's3://my_bucket/models/model.tar.gz'
sklearn_model = SKLearnModel(model_data=model_data,
                             role=role,
                             entry_point="my-script.py",
                             framework_version="0.23-1")
predictor = sklearn_model.deploy(instance_type="ml.t2.medium", initial_instance_count=1)
The error message:
UnexpectedStatusException: Error hosting endpoint
sagemaker-scikit-learn-2021-01-24-17-24-42-204: Failed. Reason: Failed
to extract model data archive for container "container_1" from URL
"s3://my_bucket/models/model.tar.gz". Please ensure that the object
located at the URL is a valid tar.gz archive
Is there a way to see why the archive is invalid?
I had a similar issue as well, along with a similar fix to Bas's (per the comment above).
I found I wasn't actually having issues with the .tar.gz step; this command works fine:
tar -czf <filename> ./<directory-with-files>
but rather with the uploading step.
Manually uploading to S3 should take care of this; however, if you're doing this step programmatically, you might need to double-check the steps taken. Bas appears to have had filename issues; mine were around using boto properly. Here's some code that works (Python only here, but watch for similar issues with other libraries):
import boto3

bucket = 'bucket-name'
key = 'directory-inside-bucket/model.tar.gz'  # the full object key, including the filename
file = 'model.tar.gz'  # local path of the .tar.gz
s3_client = boto3.client('s3')
s3_client.upload_file(file, bucket, key)
Docs: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/s3.html#S3.Client.upload_file
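To answer the question of whether there is a way to see why the archive is invalid: one option is to download the object back from S3 and try to open it with Python's tarfile module, which raises an error if the file is not a valid gzip-compressed tar. A minimal sketch, reusing the (hypothetical) bucket and key names from the snippet above:

import tarfile
import boto3

s3_client = boto3.client('s3')
s3_client.download_file('bucket-name', 'directory-inside-bucket/model.tar.gz', 'check.tar.gz')

# tarfile.open raises tarfile.ReadError if this is not a valid tar.gz archive.
with tarfile.open('check.tar.gz', 'r:gz') as tar:
    print(tar.getnames())  # should list model.joblib at the top level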
I am using Google Cloud ML Engine to do local prediction by running:
gcloud ml-engine local predict --model-dir=$MODEL_DIR --json-instances $INPUT_FILE --framework $FRAMEWORK
Assume:
MODEL_DIR="gs://<bucket>/model.joblib"
FRAMEWORK="SCIKIT_LEARN"
the input file input.json is on my hard disk (d:\predict)
How should I specify INPUT_FILE?
I have manually uploaded the input file to my GCS bucket, but get this error:
ERROR: (gcloud.ml-engine.local.predict) Unable to read file [gs://<bucket>/input.json]: [Errno 2] No such file or directory: 'gs://<bucket>/input.json
Where shall I place the input file?
Shall I keep it on the local disk (e.g. d:\predict\input.json) or in the bucket?
And what format should it be in?
You are setting MODEL_DIR incorrectly; there is no need to add "model.joblib", as it will be detected automatically. MODEL_DIR should contain the path (including folders if necessary) to the directory where the file "model.joblib" is. As good practice, it's common to have a bucket containing it. The command (for your case) should look like this:
MODEL_DIR="gs://<bucket>/"
INPUT_FILE="input.json"
FRAMEWORK="SCIKIT_LEARN"
and your bucket should contain "model.joblib". For INPUT_FILE, it should contain the path to "input.json" relative to where you are running the command, including the ".json" file itself (i.e., if the ".json" is under another folder, INPUT_FILE should be "<folder>/input.json").
Here is the documentation for testing models [1].
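Regarding the format question from the original post: the file passed to --json-instances should contain one JSON instance per line (newline-delimited JSON), kept on your local disk as described above. A minimal sketch that writes such a file; the feature values are hypothetical placeholders for whatever your scikit-learn model expects:

import json

# Hypothetical feature vectors; replace with the inputs your model was trained on.
instances = [
    [5.1, 3.5, 1.4, 0.2],
    [6.2, 2.9, 4.3, 1.3],
]

# Each line of input.json must be exactly one JSON instance.
with open("input.json", "w") as f:
    for instance in instances:
        f.write(json.dumps(instance) + "\n")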
I did all the steps according to the official tutorial: https://hyperledger-fabric.readthedocs.io/en/release-1.0/getting_started.html
But it still does not work. The errors are displayed below:
My environment variables are set like this:
(the red part is Go and the platform-specific binaries.)
By the way, I checked the first error. It said: "open /opt/gopath/src/xxx...: permission denied". But there is no such path:
I found this directory is set in first-network/base/docker-compose-base.yaml:
It makes me very confused. Can anyone help me?
Thanks very much!
I just solved this problem by getting help from another place :)
I checked the ownership of the files under the fabric-samples folder and found that all the files belonged to root.
So I took ownership for my user: sudo chown -R [my user] *
Then it works!