I am trying to do TensorFlow Serving with a REST API using Docker. I was following the examples from https://www.tensorflow.org/tfx/serving/docker and https://towardsdatascience.com/serving-keras-models-locally-using-tensorflow-serving-tf-2-x-8bb8474c304e. I've created a simple MNIST digit classifier. My model's export path:
MODEL_DIR = 'digit_mnist/model_serving/'
version = 1
export_path = os.path.join(MODEL_DIR, str(version))
then saved the model with this command:
tf.keras.models.save_model(model,
                           export_path,
                           overwrite=True,
                           include_optimizer=True,
                           save_format=None,
                           signatures=None,
                           options=None)
when I run:
sudo docker run -p 8501:8501 --mount type=bind,source=/artificial-neural-network/tensorflow_nn/digit_mnist/model_serving/1/,target=/models/model_serving -e MODEL_NAME=dmc -t tensorflow/serving
I get this error:
docker: Error response from daemon: invalid mount config for type "bind": bind source path does not exist: /artificial-neural-networks/tensorflow_nn/digit_mnist/model_serving/1/.
My file structure goes like this:
(venv) artificial_neural_networks/
    __init__.py
    pytorch_nn/
    tensorflow_nn/
        __init__.py
        digit_mnist/
            model_serving/
                1/
                    assets
                    variables/
                        variables.data-00000-of-00002
                        variables.data-00001-of-00002
                        variables.index
                    saved_model.pb
            __init__.py
            mnist.py
Where am I going wrong? I'm on my second day of trying to solve this problem, so any help would be appreciated.
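Two things in the docker run line are worth double-checking. First, the host path appears in several variants (hyphenated singular in the command, hyphenated plural in the error, underscores in the file tree), and a bind mount requires the source to exist exactly as typed. Second, tensorflow/serving by default serves from /models/${MODEL_NAME}, so the target should match MODEL_NAME=dmc and point at the directory that contains the version folder 1/, not at 1/ itself. A hedged sketch of a corrected invocation (the host path below is an assumption taken from the error message; verify it on your machine):

```shell
# Mount the model_serving/ directory (which contains 1/) rather than 1/ itself,
# and make the container target /models/dmc match MODEL_NAME=dmc.
sudo docker run -p 8501:8501 \
  --mount type=bind,source=/artificial-neural-networks/tensorflow_nn/digit_mnist/model_serving/,target=/models/dmc \
  -e MODEL_NAME=dmc \
  -t tensorflow/serving
```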
Related
I am currently working with TensorFlow Serving, and while running a command I encountered an error.
Step 1:- I pulled the tensorflow/serving image using
docker pull tensorflow/serving
Step 2:- I made a project where I saved the TF model in a directory:
C:/Code/potato-disease:
Step 3:- After running the command:
docker run -t --rm -p 8505:8505 -v C:/Code/potato-disease:/potato-disease tensorflow/serving --rest_api_port=8505 --model_config_file=/potato-disease/models.config
Error:-
Failed to start server. Error: Invalid argument: Expected model potatoes_model to have an absolute path or URI; got base_path()=C:/Code/potato-disease/saved_models
2022-03-16 03:21:46.161233: I tensorflow_serving/core/basic_manager.cc:279] Unload all remaining servables in the manager.
My models.config file
model_config_list {
  config {
    name: 'potatoes_model'
    base_path: 'C:/Code/potato-disease/saved_models'
    model_platform: 'tensorflow'
    model_version_policy: {all: {}}
  }
}
You should edit your models.config and set base_path to /potato-disease/saved_models. It is the process inside the container that interprets this path, so it must be the absolute path as seen from the container's environment, not from the Windows host: the -v flag mounts C:/Code/potato-disease at /potato-disease inside the container.
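Concretely, the edited models.config would look like this (same file as in the question, with only base_path changed to the container-side path):

```
model_config_list {
  config {
    name: 'potatoes_model'
    base_path: '/potato-disease/saved_models'
    model_platform: 'tensorflow'
    model_version_policy: {all: {}}
  }
}
```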
I realize podman-compose is still under development. I'm going to replace my Docker stack with Podman once I replace Debian with CentOS 8 on my PowerEdge server as part of my homelab. Right now I'm just testing out and playing around with Podman on my Fedora machine.
OS: Fedora 32
KERNEL: 5.6.12-300.fc32.x86_64
PODMAN: 1.9.2
PODMAN-COMPOSE: 0.1.5
PROBLEM: podman-compose is failing and I'm unable to ascertain the reason why.
Here's my docker-compose.yml:
version: "2.1"
services:
  deluge:
    image: linuxserver/deluge
    container_name: deluge
    network_mode: host
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=America/New_York
      # - UMASK_SET=022 #optional
      # - DELUGE_LOGLEVEL=error #optional
    volumes:
      - /home/mike/test/deluge:/config
      - /home/mike/Downloads:/downloads
    restart: unless-stopped
When I run podman-compose up, here is the output:
[mike@localhost test]$ podman-compose up
podman pod create --name=test --share net
ce389be26589efe4433db15d875844b2047ea655c43dc84dbe49f69ffabe867e
0
podman create --name=deluge --pod=test -l io.podman.compose.config-hash=123 -l io.podman.compose.project=test -l io.podman.compose.version=0.0.1 -l com.docker.compose.container-number=1 -l com.docker.compose.service=deluge --network host -e PUID=1000 -e PGID=1000 -e TZ=America/New_York --mount type=bind,source=/home/mike/test/deluge,destination=/config --mount type=bind,source=/home/mike/Downloads,destination=/downloads --add-host deluge:127.0.0.1 --add-host deluge:127.0.0.1 linuxserver/deluge
Trying to pull registry.fedoraproject.org/linuxserver/deluge...
manifest unknown: manifest unknown
Trying to pull registry.access.redhat.com/linuxserver/deluge...
name unknown: Repo not found
Trying to pull registry.centos.org/linuxserver/deluge...
manifest unknown: manifest unknown
Trying to pull docker.io/linuxserver/deluge...
Getting image source signatures
Copying blob a54f3db92256 done
Copying blob c114dc480980 done
Copying blob d0d29aaded3d done
Copying blob fa1dff0a3a53 done
Copying blob 5076df76a29a done
Copying blob a40b999f3c1e done
Copying config 31fddfa799 done
Writing manifest to image destination
Storing signatures
Error: error checking path "/home/mike/test/deluge": stat /home/mike/test/deluge: no such file or directory
125
podman start -a deluge
Error: unable to find container deluge: no container with name or ID deluge found: no such container
125
Then finally, when I quit via Ctrl-C:
^CTraceback (most recent call last):
File "/home/mike/.local/bin/podman-compose", line 8, in <module>
sys.exit(main())
File "/home/mike/.local/lib/python3.8/site-packages/podman_compose.py", line 1093, in main
podman_compose.run()
File "/home/mike/.local/lib/python3.8/site-packages/podman_compose.py", line 625, in run
cmd(self, args)
File "/home/mike/.local/lib/python3.8/site-packages/podman_compose.py", line 782, in wrapped
return func(*args, **kw)
File "/home/mike/.local/lib/python3.8/site-packages/podman_compose.py", line 914, in compose_up
thread.join(timeout=1.0)
File "/usr/lib64/python3.8/threading.py", line 1005, in join
if not self._started.is_set():
File "/usr/lib64/python3.8/threading.py", line 513, in is_set
def is_set(self):
KeyboardInterrupt
I'm not experienced enough to read through this and figure out what the problem is, so I'm hoping to learn from you all.
Thanks!
There is an error in your path:
volumes:
  - /home/mike/test/deluge:/config
The error says /home/mike/test/deluge: no such file or directory, so check that the folder actually exists at that path on the host.
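In practice, podman-compose translates the volumes: entries into --mount type=bind (visible in the podman create line in the output above), and a bind mount requires the source directory to already exist on the host. A minimal sketch of the fix, using the paths from the compose file:

```shell
# Create the bind-mount source directories referenced in docker-compose.yml;
# the "stat ... no such file or directory" error is about the host side.
mkdir -p /home/mike/test/deluge /home/mike/Downloads
ls -ld /home/mike/test/deluge /home/mike/Downloads
```

After creating the directories, podman-compose up should get past the mount error.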
Situation and Problem
I am trying to follow this guide on how to make your own LinuxKit with Docker for Mac, where you can add kernel modules that are usually not present in Docker images.
After a lot of reading and testing I am failing to do the simplest (one would think) test case in the repository:
linuxkit/test/cases/020_kernel/011_kmod_4.9.x/
https://github.com/linuxkit/linuxkit/tree/master/test/cases/020_kernel/011_kmod_4.9.x
Checking the container for the Linux kernel version and config:
... host$ docker run -it --rm -v /:/host -v $(pwd):/macos alpine:latest
Unable to find image 'alpine:latest' locally
latest: Pulling from library/alpine
bdf0201b3a05: Pull complete
Digest: sha256:28ef97b8686a0b5399129e9b763d5b7e5ff03576aa5580d6f4182a49c5fe1913
Status: Downloaded newer image for alpine:latest
/ # uname -a
Linux 029b8e5ada75 4.9.125-linuxkit #1 SMP Fri Sep 7 08:20:28 UTC 2018 x86_64 Linux
/ # cp /host/proc/config.gz /macos/
/ # exit
I went back in the GitHub history to find the hash for my local LinuxKit kernel version and modified the Dockerfile of that example (basically, I used the old one).
So far so good. The problem is that if I try to do anything related to kernel modules (modinfo, modprobe, depmod, insmod), I get errors like these:
modinfo: can't open '/lib/modules/4.9.125-linuxkit/modules.dep': No such file or directory
This is because that path simply does not exist in the container (there is not even a modules folder). The same is true if I check -- as above -- plain alpine:latest, so no magic seems to happen in that Dockerfile.
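A hedged sketch of one possible workaround, assuming you have the .ko file at hand (hello_world.ko as in the session further down): busybox's depmod can generate the missing modules.dep for a hand-made module tree, which is what modinfo and modprobe are looking for.

```shell
# modinfo/modprobe read /lib/modules/$(uname -r)/modules.dep, which alpine
# images do not ship. Create the tree for the running kernel and index it.
KVER="$(uname -r)"                 # e.g. 4.9.125-linuxkit
mkdir -p "/lib/modules/$KVER"
cp hello_world.ko "/lib/modules/$KVER/"
depmod -a "$KVER"                  # writes /lib/modules/$KVER/modules.dep
modprobe hello_world               # should now resolve the module by name
```

This only indexes your own out-of-tree module; it does not bring in the kernel's full module set.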
Question
Now I am completely puzzled and left stranded on what to do, hence my question ...
How to do the hello_world example from linuxkit/linuxkit ?
Additional notes
The docs of the linuxkit-repository do not mention anything about that problem:
https://github.com/linuxkit/linuxkit/blob/master/docs/kernels.md#compiling-external-kernel-modules
For easy testing I am using
docker-compose
# build with
#   docker-compose build
version: '3'
services:
  linux-builder:
    image: my_linux_kit
    build:
      context: .
      dockerfile: my_linux_kit.dockerfile
      # args:
      #   buildno: 1
    privileged: true
And I even tricked it (by inserting the module by hand) into not showing any errors, while still not doing what I suppose the code should do:
... host$: docker exec -it 7a33fad37914 sh
/ # ls
bin dev hello_world.ko lib mnt root sbin sys usr
check.sh etc home media proc run srv tmp var
/ # /bin/busybox insmod hello_world.ko
/ #
I am working with Rasa NLU. I want to train a language model in Portuguese and have it running inside a container. I can train on the language dataset but I am not able to get it to run.
I've created an image from the official rasa_nlu one, running with the spaCy Portuguese pipeline, and placed it in a container on Docker.
I am able to use the rasa_nlu.train command to train the language model without problems, or at least that is what it seems.
When I try to run it using the data that I trained, I get an error message complaining about missing parameters in the command that I used.
Here is the docker-compose service that I try to use when running the container:
rasa_nlu:
  image: rasa_nlu_pt
  volumes:
    - ./models/rasa_nlu:/app/models
  command:
    - start
    - --path
    - /app/models
and it gives the following error message:
usage: run.py [-h] -d CORE [-u NLU] [-v] [-vv] [--quiet] [-p PORT]
[--auth_token AUTH_TOKEN] [--cors [CORS [CORS ...]]]
[--enable_api] [-o LOG_FILE] [--credentials CREDENTIALS]
[-c CONNECTOR] [--endpoints ENDPOINTS] [--jwt_secret JWT_SECRET]
[--jwt_method JWT_METHOD]
run.py: error: the following arguments are required: -d/--core
The same happens if I run it without other containers:
$ docker run -v $(pwd):/app/project -v $(pwd)/models/rasa_nlu:/app/models -p 5000:5000 rasa_nlu_pt start --path app/models
usage: run.py [-h] -d CORE [-u NLU] [-v] [-vv] [--quiet] [-p PORT]
[--auth_token AUTH_TOKEN] [--cors [CORS [CORS ...]]]
[--enable_api] [-o LOG_FILE] [--credentials CREDENTIALS]
[-c CONNECTOR] [--endpoints ENDPOINTS] [--jwt_secret JWT_SECRET]
[--jwt_method JWT_METHOD]
run.py: error: the following arguments are required: -d/--core
I used the same command to run the service with the English spaCy pipeline provided by Rasa and it worked as it should, but now it gives this error message. What other information am I missing?
Depending on which pipeline you are using for your NLU, you should use rasa/nlu:tensorflow-latest or rasa/nlu:spacy-latest and not rasa/nlu:latest. This will solve the problem.
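Applied to the compose service from the question, that would mean basing it on the spaCy tag (rasa_nlu_pt is the custom image name from the question; rebuild it FROM the matching upstream tag, or use the upstream image directly):

```yaml
rasa_nlu:
  image: rasa/nlu:spacy-latest   # instead of rasa/nlu:latest (or rebuild rasa_nlu_pt from this tag)
  volumes:
    - ./models/rasa_nlu:/app/models
  command:
    - start
    - --path
    - /app/models
```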
I am using Docker 1.13 Community Edition on a CentOS 7 x64 machine. When I was following a Docker Compose sample from the official Docker tutorial, everything was fine until I added these lines to the docker-compose.yml file:
volumes:
  - .:/code
After adding it, I faced the following error:
can't open file 'app.py': [Errno 13] Permission denied
It seems that the problem is due to an SELinux limit. Using this post I ran the following command:
su -c "setenforce 0"
to solve the problem temporarily, but running this command:
chcon -Rt svirt_sandbox_file_t /path/to/volume
couldn't help me.
Finally, I found the correct rule to add to SELinux:
# ausearch -c 'python' --raw | audit2allow -M my-python
# semodule -i my-python.pp
I found it when I opened the SELinux Alert Browser and clicked the 'Details' button on the row related to this error. Here is the more detailed information from SELinux:
SELinux is preventing /usr/local/bin/python3.4 from read access on the
file app.py.
***** Plugin catchall (100. confidence) suggests **************************
If you believe that python3.4 should be allowed read access on the
app.py file by default. Then you should report this as a bug. You can
generate a local policy module to allow this access. Do allow this
access for now by executing:
ausearch -c 'python' --raw | audit2allow -M my-python
semodule -i my-python.pp
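As an alternative to generating a custom policy module, Docker can relabel the volume content itself via the z (shared) or Z (private) volume options, which apply an SELinux label that containers are allowed to read. A sketch of the same volume entry with a shared label:

```yaml
volumes:
  - .:/code:z   # :z relabels the host directory so containers may access it
```

Note that relabeling changes the SELinux context of the host files, so avoid using it on system directories.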