I am working on a container to run TensorFlow Serving.
Here is my Dockerfile:
FROM tensorflow/serving:latest
WORKDIR /
COPY models.config /models/models.config
COPY models/mnist /models/mnist
Here is my models.config for a simple mnist model:
model_config_list {
  config: {
    name: "mnist",
    base_path: "/models/mnist"
    model_platform: "tensorflow"
    model_version_policy {
      specific {
        versions: 1646266834
      }
    }
    version_labels {
      key: 'stable'
      value: 1646266834
    }
  }
}
The models directory is set up as follows:
$ ls -Rls models
total 0
0 drwxr-xr-x 3 david staff 96 Mar 2 16:21 mnist
models/mnist:
total 0
0 drwxr-xr-x 6 david staff 192 Mar 2 16:21 1646266834
models/mnist/1646266834:
total 304
0 drwxr-xr-x 2 david staff 64 Mar 2 16:21 assets
32 -rw-r--r-- 1 david staff 15873 Mar 2 16:20 keras_metadata.pb
272 -rw-r--r-- 1 david staff 138167 Mar 2 16:20 saved_model.pb
0 drwxr-xr-x 4 david staff 128 Mar 2 16:21 variables
models/mnist/1646266834/assets:
total 0
models/mnist/1646266834/variables:
total 1424
1416 -rw-r--r-- 1 david staff 722959 Mar 2 16:20 variables.data-00000-of-00001
8 -rw-r--r-- 1 david staff 2262 Mar 2 16:20 variables.index
The problem is that when I build and run my container, I receive an error.
$ docker build -t example.com/example-tf-serving:1.0 .
$ docker run -it -p 8500:8500 -p 8501:8501 --name example-tf-serving --rm example.com/example-tf-serving:1.0
The error (Not found: /models/model) is as follows:
2022-03-03 00:48:06.242923: I tensorflow_serving/model_servers/server.cc:89] Building single TensorFlow model file config: model_name: model model_base_path: /models/model
2022-03-03 00:48:06.243215: I tensorflow_serving/model_servers/server_core.cc:465] Adding/updating models.
2022-03-03 00:48:06.243254: I tensorflow_serving/model_servers/server_core.cc:591] (Re-)adding model: model
2022-03-03 00:48:06.243899: E tensorflow_serving/sources/storage_path/file_system_storage_path_source.cc:365] FileSystemStoragePathSource encountered a filesystem access error: Could not find base path /models/model for servable model with error Not found: /models/model not found
How do I fix my Dockerfile so that the above command will work?
The quick and easy way below will not work for me, so I cannot accept it as a solution:
docker run --name=the_name -p 9000:9000 -it -v "/path_to_the_model_in_computer:/path_to_model_in_docker" tensorflow/serving:1.15.0 --model_name=MODEL_NAME --port=9000
https://www.tensorflow.org/tfx/serving/docker
Optional environment variable MODEL_NAME (defaults to model)
Optional environment variable MODEL_BASE_PATH (defaults to /models)
You are using the default values of these environment variables, so TensorFlow Serving is trying to find a model in /models/model. Your model is at a different path inside the container, so /models/model not found is accurate.
Setting the MODEL_NAME environment variable should solve the problem:
$ docker run -it -p 8500:8500 -p 8501:8501 \
--name example-tf-serving \
-e MODEL_NAME=mnist \
--rm example.com/example-tf-serving:1.0
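Once the container is up, you can sanity-check that the model is reachable over the REST port (a quick check, assuming the port mappings above; the model status endpoint is part of the TensorFlow Serving REST API):
$ curl http://localhost:8501/v1/models/mnist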
For multiple models, see https://www.tensorflow.org/tfx/serving/serving_config#model_server_configuration:
The easiest way to serve a model is to provide the --model_name and --model_base_path flags (or setting the MODEL_NAME environment variable if using Docker). However, if you would like to serve multiple models, or configure options like polling frequency for new versions, you may do so by writing a Model Server config file.
You may provide this configuration file using the --model_config_file flag and instruct Tensorflow Serving to periodically poll for updated versions of this configuration file at the specified path by setting the --model_config_file_poll_wait_seconds flag.
See docker doc: https://www.tensorflow.org/tfx/serving/docker#passing_additional_arguments
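For example, the config file can also be passed as extra arguments at run time instead of being baked into the image (a sketch reusing the image and paths from the question; the 60-second poll interval is an arbitrary choice):
$ docker run -it -p 8500:8500 -p 8501:8501 --rm \
    example.com/example-tf-serving:1.0 \
    --model_config_file=/models/models.config \
    --model_config_file_poll_wait_seconds=60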
You need to set CMD in the Dockerfile (so you don't need to specify it at run time, because the requirement is to use only the Dockerfile), e.g.:
FROM tensorflow/serving:latest
WORKDIR /
COPY models.config /models/models.config
COPY models/mnist /models/mnist
CMD ["--model_config_file=/models/models.config"]
I am trying to run a single-node Elasticsearch instance on an HPC cluster. To do this, I am converting the Elasticsearch Docker container into a Singularity container. When I launch the container itself, I get the following error:
$ singularity exec --overlay overlay.img elastic.sif /usr/share/elasticsearch/bin/elasticsearch
Could not create auto-configuration directory
Exception in thread "main" java.lang.RuntimeException: starting java failed with [1]
output:
[0.000s][error][logging] Error opening log file 'logs/gc.log': Permission denied
[0.000s][error][logging] Initialization of output 'file=logs/gc.log' using options 'filecount=32,filesize=64m' failed.
error:
Invalid -Xlog option '-Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m', see error log for details.
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
at org.elasticsearch.server.cli.JvmOption.flagsFinal(JvmOption.java:113)
at org.elasticsearch.server.cli.JvmOption.findFinalOptions(JvmOption.java:80)
at org.elasticsearch.server.cli.MachineDependentHeap.determineHeapSettings(MachineDependentHeap.java:59)
at org.elasticsearch.server.cli.JvmOptionsParser.jvmOptions(JvmOptionsParser.java:132)
at org.elasticsearch.server.cli.JvmOptionsParser.determineJvmOptions(JvmOptionsParser.java:90)
at org.elasticsearch.server.cli.ServerProcess.createProcess(ServerProcess.java:211)
at org.elasticsearch.server.cli.ServerProcess.start(ServerProcess.java:106)
at org.elasticsearch.server.cli.ServerProcess.start(ServerProcess.java:89)
at org.elasticsearch.server.cli.ServerCli.startServer(ServerCli.java:213)
at org.elasticsearch.server.cli.ServerCli.execute(ServerCli.java:90)
at org.elasticsearch.common.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:54)
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:85)
at org.elasticsearch.cli.Command.main(Command.java:50)
at org.elasticsearch.launcher.CliToolLauncher.main(CliToolLauncher.java:64)
If I understand correctly, Elasticsearch is trying to create a log file in /var/log/elasticsearch but does not have the correct permissions. So I created the following recipe to create the folders and set the permissions such that any process can write into the log directory. My recipe is the following:
Bootstrap: docker
From: elasticsearch:8.3.1

%files
    elasticsearch.yml /usr/share/elasticsearch/config/

%post
    mkdir -p /var/log/elasticsearch
    chown -R elasticsearch:elasticsearch /var/log/elasticsearch
    chmod -R 777 /var/log/elasticsearch
    mkdir -p /var/data/elasticsearch
    chown -R elasticsearch:elasticsearch /var/data/elasticsearch
    chmod -R 777 /var/data/elasticsearch
The elasticsearch.yml file has the following content:
cluster.name: "docker-cluster"
network.host: 0.0.0.0
discovery.type: single-node
ingest.geoip.downloader.enabled: false
After building this recipe, the directory /var/log/elasticsearch seems to be created correctly:
$ singularity exec elastic.sif ls -alh /var/log/
total 569K
drwxr-xr-x 4 root root 162 Jul 8 14:43 .
drwxr-xr-x 12 root root 172 Jul 8 14:43 ..
-rw-r--r-- 1 root root 7.7K Jun 29 17:29 alternatives.log
drwxr-xr-x 2 root root 69 Jun 29 17:29 apt
-rw-r--r-- 1 root root 58K May 31 11:43 bootstrap.log
-rw-rw---- 1 root utmp 0 May 31 11:43 btmp
-rw-r--r-- 1 root root 187K Jun 29 17:30 dpkg.log
drwxrwxrwx 2 elasticsearch elasticsearch 3 Jul 8 14:43 elasticsearch
-rw-r--r-- 1 root root 32K Jun 29 17:30 faillog
-rw-rw-r-- 1 root utmp 286K Jun 29 17:30 lastlog
-rw-rw-r-- 1 root utmp 0 May 31 11:43 wtmp
But when I launch the container, I get the permission denied error listed above.
What is missing here? What permissions is Elasticsearch expecting?
The following workaround seems to be working for me now:
When launching the Singularity container, the elasticsearch process is executed inside the container with the same UID as my own UID (the user launching the Singularity container with singularity exec). The Elasticsearch container, however, is configured to run elasticsearch as a separate elasticsearch user that exists inside the container. The issue is that Singularity (unlike Docker) runs every process inside the container with my own UID rather than the elasticsearch UID, resulting in the error above.
To work around this, I created a base Ubuntu Singularity image and then installed Elasticsearch into the container following these installation instructions (https://www.elastic.co/guide/en/elasticsearch/reference/current/targz.html). Because the installation was performed with my system user and UID, the entire Elasticsearch installation belongs to my system user rather than a separate elasticsearch user. I can then launch the Elasticsearch service inside the container.
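For reference, a minimal sketch of that tar.gz installation (the version matches the 8.3.1 image used above; the download URL follows the pattern from the linked instructions):
$ wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-8.3.1-linux-x86_64.tar.gz
$ tar -xzf elasticsearch-8.3.1-linux-x86_64.tar.gz
$ elasticsearch-8.3.1/bin/elasticsearch
Because these steps run under my own UID, every file in the installation is owned by that UID, which matches the UID Singularity uses at run time.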
In the /app folder, I have a Google Cloud credentials file, ocviam-dev-201b2db30d36.json:
> docker exec -ti 9ac6de5f24586320ac41c6e2de9895f03ca87874745257642214852e755a4a99 bash
root@9ac6de5f2458:/# cd /app/
root@9ac6de5f2458:/app# ll
total 252
drwxr-xr-x 2 root root 4096 Apr 20 13:42 ./
drwxr-xr-x 1 root root 4096 Apr 20 13:44 ../
-rwxr-xr-x 1 root root 10875 Apr 20 12:29 function.R*
-rwxr-xr-x 1 root root 284 Apr 20 09:40 gfs_data_temp12042022.csv*
-rwxr-xr-x 1 root root 183311 Apr 12 07:33 gfs_data_temp_FULL.csv*
-rwxr-xr-x 1 root root 2331 Mar 22 15:49 ocviam-dev-201b2db30d36.json*
-rwxr-xr-x 1 root root 3355 Apr 20 12:34 plumber.R*
-rwxr-xr-x 1 root root 33467 Apr 13 09:39 .RData*
-rwxr-xr-x 1 root root 405 Apr 13 09:39 .Rhistory*
But when I run docker run with the -e option, passing the GOOGLE_APPLICATION_CREDENTIALS variable with the /app/ocviam-dev-201b2db30d36.json value, I get an error:
docker run --rm -it -p 80:8000 app1 -e GOOGLE_APPLICATION_CREDENTIALS=ocviam-dev-201b2db30d36.json
>>
R version 4.1.3 (2022-03-10) -- "One Push-Up"
Copyright (C) 2022 The R Foundation for Statistical Computing
Platform: x86_64-pc-linux-gnu (64-bit)
R is free software and comes with ABSOLUTELY NO WARRANTY.
You are welcome to redistribute it under certain conditions.
Type 'license()' or 'licence()' for distribution details.
Natural language support but running in an English locale
R is a collaborative project with many contributors.
Type 'contributors()' for more information and
'citation()' on how to cite R or R packages in publications.
Type 'demo()' for some demos, 'help()' for on-line help, or
'help.start()' for an HTML browser interface to help.
Type 'q()' to quit R.
> pr <- plumber::plumb(rev(commandArgs())[1]); args <- list(host = '0.0.0.0', port = 8000); if (packageVersion('plumber') >= '1.0.0') { pr$setDocs(TRUE) } else { args$swagger <- TRUE }; do.call(pr$run, args)
Error in plumber::plumb(rev(commandArgs())[1]) :
File does not exist: GOOGLE_APPLICATION_CREDENTIALS=ocviam-dev-201b2db30d36.json
Execution halted
(base) PS C:\Chantiers\repos\gcp-gfs-weather> docker run --rm -it -p 80:8000 app1 -e GOOGLE_APPLICATION_CREDENTIALS=/app/ocviam-dev-201b2db30d36.json
>>
R version 4.1.3 (2022-03-10) -- "One Push-Up"
Copyright (C) 2022 The R Foundation for Statistical Computing
Platform: x86_64-pc-linux-gnu (64-bit)
R is free software and comes with ABSOLUTELY NO WARRANTY.
You are welcome to redistribute it under certain conditions.
Type 'license()' or 'licence()' for distribution details.
Natural language support but running in an English locale
R is a collaborative project with many contributors.
Type 'contributors()' for more information and
'citation()' on how to cite R or R packages in publications.
Type 'demo()' for some demos, 'help()' for on-line help, or
'help.start()' for an HTML browser interface to help.
Type 'q()' to quit R.
> pr <- plumber::plumb(rev(commandArgs())[1]); args <- list(host = '0.0.0.0', port = 8000); if (packageVersion('plumber') >= '1.0.0') { pr$setDocs(TRUE) } else { args$swagger <- TRUE }; do.call(pr$run, args)
Error in plumber::plumb(rev(commandArgs())[1]) :
File does not exist: GOOGLE_APPLICATION_CREDENTIALS=/app/ocviam-dev-201b2db30d36.json
Execution halted
Is this a problem of syntax?
Thanks.
The -e option in docker run is for passing environment variables, as -e MYVAR1 or --env MYVAR2=foo, and it must come before the image name: anything placed after app1 is passed as an argument to the container's command, which is why plumber tried to open the literal string as a file. If you need to pass a file that has the environment variables stored in it, create a file (say "env.list") with contents like this:
Variable1=Value1
Variable2=Value2
After that, you can run docker run --env-file env.list, which will set all the environment variables mentioned in the file.
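Based on the commands in the question, moving the option before the image name should be enough (a sketch; only the option placement changes):
docker run --rm -it -p 80:8000 -e GOOGLE_APPLICATION_CREDENTIALS=/app/ocviam-dev-201b2db30d36.json app1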
If your original intention was to authenticate to your Google Cloud Project using the JSON, you should do so using gcloud auth:
gcloud auth activate-service-account [ACCOUNT] --key-file=KEY_FILE --project=PROJECT_NAME
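For example (the service account and project here are placeholders, not values from the question):
gcloud auth activate-service-account my-sa@my-project.iam.gserviceaccount.com --key-file=/app/ocviam-dev-201b2db30d36.json --project=my-project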
This question relates to this repository with the most relevant Travis job here.
The repository is for a static site built from Jupyter notebooks. The notebooks are converted using build/build.py, which, for each post, builds a Docker image, starts a corresponding container with the post's notebook directory mounted, and uses nbconvert to convert the notebook to Markdown. One step of nbconvert's conversion involves creating a supporting file directory. This fails on Travis due to a permission issue.
In attempting to debug this problem, I found that the ownership and permissions of the repo are the same on my local machine and Travis (with my username switched for travis) before running Docker. Despite this, inside the mounted volume of the Docker container, the ownerships are different:
Local:
drwxrwxr-x 3 jovyan 1000 4096 Dec 10 19:56 .
drwsrwsr-x 1 jovyan users 4096 Dec 3 21:51 ..
-rw-rw-r-- 1 jovyan 1000 105 Dec 7 09:57 Dockerfile
drwxr-xr-x 2 jovyan 1000 4096 Dec 10 12:09 .ipynb_checkpoints
-rw-r--r-- 1 jovyan 1000 154229 Dec 10 12:28 post.ipynb
Travis:
drwxrwxr-x 2 2000 2000 4096 Dec 10 19:58 .
drwsrwsr-x 1 jovyan users 4096 Nov 8 16:37 ..
-rw-rw-r-- 1 2000 2000 101 Dec 10 19:58 Dockerfile
-rw-rw-r-- 1 2000 2000 35271 Dec 10 19:58 post.ipynb
Both my local machine and Travis are running Ubuntu 20.04, have the same version of Docker, and all other tools come from Conda, so they should behave the same. I am struggling to understand where this difference in ownership comes from.
Try running Docker again with this command, so the UID outside the container is propagated inside:
docker run -u `id -u`
Alternatively, as pointed out by @anemyte:
docker run -u $(id -u)
This should make new files created inside the container owned by the same UID as outside (which shows up as "jovyan" in your local listing).
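For example, combined with the volume mount that build/build.py performs (the image name and paths here are placeholders):
docker run -u $(id -u):$(id -g) -v "$PWD/posts/my-post":/home/jovyan/work my-notebook-image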
If you can anticipate which mount points will exist, you could also pre-create them so that the ownership of the files inside is also correct:
docker run -v /path/on/host:/path/in/container ...
If you set the permissions of your local path (/path/on/host) to 777, that will also be propagated to the mount point: no permission error will be thrown regardless of the user Docker uses to create those files.
After that, you'll be free to restore permissions, if needed.
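A short sketch of that sequence (paths are the placeholders from above):
$ mkdir -p /path/on/host
$ chmod 777 /path/on/host
$ docker run -v /path/on/host:/path/in/container ...
$ chmod 755 /path/on/host    # restore stricter permissions once the files exist, if needed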
I've been looking into setting up a data volume for a Docker container that I'm running on my server. The container is from this FreePBX image https://hub.docker.com/r/jmar71n/freepbx/
Basically, I want persistent data so I don't lose my VoIP extensions and settings if Docker stops. I've tried many guides, both here on Stack Overflow and in the Docker man pages, but I just can't quite get it to work.
Can anyone help me with what commands I need to run in order to attach a volume to the FreePBX image I linked above?
You can do this by running the container with the -v option and mapping to a host directory; you just need to know where the container stores its data.
Looking at the Dockerfile for that image, I'm assuming that the data you're interested in is stored in MySQL. In the MySQL config, the data directory the container uses is /var/lib/mysql.
So you can start your container like this, mapping the MySQL data directory to /docker/pbx-data on your host:
> docker run -d -t -v /docker/pbx-data:/var/lib/mysql jmar71n/freepbx
20b45b8fb2eec63db3f4dcab05f89624ef7cb1ff067cae258e0f8a910762fb1a
Use docker inspect to confirm that the mount is mapped as expected:
> docker inspect --format '{{json .Mounts}}' 20b
[{"Source":"/docker/pbx-data",
"Destination":"/var/lib/mysql",
"Mode":"","RW":true,"Propagation":"rprivate"}]
When the container runs, it bootstraps the database, so on the host you'll be able to see the contents of the MySQL data directory the container is using:
> ls -l /docker/pbx-data
total 28684
-rw-r----- 1 103 root 2062 Sep 21 09:30 20b45b8fb2ee.err
-rw-rw---- 1 103 messagebus 18874368 Sep 21 09:30 ibdata1
-rw-rw---- 1 103 messagebus 5242880 Sep 21 09:30 ib_logfile0
-rw-rw---- 1 103 messagebus 5242880 Sep 21 09:30 ib_logfile1
drwx------ 2 103 root 4096 Sep 21 09:30 mysql
drwx------ 2 103 messagebus 4096 Sep 21 09:30 performance_schema
If you kill the container and run another one with the same volume mapping, it will have all the data files from the previous container, and your app state should be preserved.
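To verify, you can replace the container and re-attach the same volume (reusing the ID from above; the new container should come up with the existing data):
> docker rm -f 20b
> docker run -d -t -v /docker/pbx-data:/var/lib/mysql jmar71n/freepbx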
I'm not familiar with FreePBX, but if there is state being stored in other directories, you can find the locations in config and map them to the host in the same way, with multiple -v options.
Hi Elton Stoneman and user3608260!
Yes, you are assuming correctly: that data (records, users, configs, etc.) is saved in MySQL.
But in Asterisk, all configuration is saved in '.conf' files and the like.
In this case, the files user3608260 is looking for are stored in '/etc/asterisk/*'.
Your answer is perfect with one more mapping: -v /local_to_save:/etc/asterisk
The final docker command:
docker run -d -t -v /docker/pbx-data:/var/lib/mysql -v /docker/pbx-asterisk:/etc/asterisk jmar71n/freepbx
[Assuming /docker/pbx-asterisk is a host directory.]
This question is a minimal failing version of this other one:
How to get contents generated by a docker container on the local fileystem
I have the following files:
./test
-rw-r--r-- 1 miqueladell staff 114 Jan 21 15:24 Dockerfile
-rw-r--r-- 1 miqueladell staff 90 Jan 21 15:23 docker-compose.yml
drwxr-xr-x 3 miqueladell staff 102 Jan 21 15:25 html
./test/html:
-rw-r--r-- 1 miqueladell staff 0 Jan 21 15:22 file_from_local_filesystem
Dockerfile
FROM php:7.0.2-apache
RUN touch /var/www/html/file_generated_inside_the_container
VOLUME /var/www/html/
docker-compose.yml
test:
image: test
volumes:
- ./html:/var/www/html/
After running a container built from the image defined in the Dockerfile, what I want to end up with is:
./html
-- file_from_local_filesystem
-- file_generated_inside_the_container
Instead, I get the following:
build the image
$ docker build --no-cache -t test .
Sending build context to Docker daemon 4.096 kB
Step 1 : FROM php:7.0.2-apache
---> 2f16964f48ba
Step 2 : RUN touch /var/www/html/file_generated_inside_the_container
---> Running in b957cc9d7345
---> 5579d3a2d3b2
Removing intermediate container b957cc9d7345
Step 3 : VOLUME /var/www/html/
---> Running in 6722ddba76cc
---> 4408967d2a98
Removing intermediate container 6722ddba76cc
Successfully built 4408967d2a98
run a container with previous image
$ docker-compose up -d
Creating test_test_1
list files on the local machine filesystem
$ ls -al html
total 0
drwxr-xr-x 3 miqueladell staff 102 Jan 21 15:25 .
drwxr-xr-x 5 miqueladell staff 170 Jan 21 14:20 ..
-rw-r--r-- 1 miqueladell staff 0 Jan 21 15:22 file_from_local_filesystem
list files from the container
$ docker exec -i -t test_test_1 ls -alR /var/www/html
/var/www/html:
total 4
drwxr-xr-x 1 1000 staff 102 Jan 21 14:25 .
drwxr-xr-x 4 root root 4096 Jan 7 18:05 ..
-rw-r--r-- 1 1000 staff 0 Jan 21 14:22 file_from_local_filesystem
The volume from the local filesystem gets mounted onto the container filesystem, replacing its contents.
This is contrary to what I understood from the section "Permissions and Ownership" of this guide: Understanding volumes.
How can I get the desired output?
Thanks
EDIT: As the accepted answer says, I did not understand volumes when asking the question. Volumes, as mount points, replace the container's content with the content of the local filesystem directory that is mounted.
The solution I needed was to use an ENTRYPOINT to run the necessary commands to initialize the contents of the mounted volume once the container is running.
The code that originated the question can be seen working here:
https://github.com/MiquelAdell/composed_wordpress/tree/1.0.0
This is from the guide you pointed to:
This won’t happen if you specify a host directory for the volume
Volumes shared from other containers or from the host filesystem replace the corresponding directories in the container.
If you need to add files to a volume, you should do it after you start the container. You can use an ENTRYPOINT, for example, that touches the files and then runs your main process.
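A minimal sketch of that approach, using the file names from the question (the script itself is an assumption; apache2-foreground is the usual main process of the php:apache images):
entrypoint.sh:
#!/bin/sh
# Seed the mounted volume, then hand off to the main process.
touch /var/www/html/file_generated_inside_the_container
exec apache2-foreground
And in the Dockerfile, replace the bare touch with:
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]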
Yep, pretty sure it should be the full path:
docker-compose.yml
test:
image: test
volumes:
- ./html:/var/www/html/
./html should be /path/to/html
Edit
Output after changing to full path and running test.sh:
$ docker exec -ti dockervolumetest_test_1 bash
root@c0bd7a722b63:/var/www/html# ls -la
total 8
drwxr-xr-x 2 1000 adm 4096 Jan 21 15:19 .
drwxr-xr-x 3 root root 4096 Jan 7 18:05 ..
-rw-r--r-- 1 1000 adm 0 Jan 21 15:19 file_from_local_filesystem
Edit 2
Sorry, I misunderstood the entire premise of the question :)
So you're trying to get file_generated_inside_the_container (which is created inside your Docker image only) mounted to some location on your host machine, like a "reverse mount".
This isn't possible with any Docker command, but if all you're after is access to your VOLUME's files on the host, you can find them under the Docker root directory (normally /var/lib/docker). To find the exact location of the files, you can use docker inspect [container_id], or, in the latest versions, the Docker API.
See cpuguy's answer in this GitHub issue for more details: https://github.com/docker/docker/issues/12853#issuecomment-123953258
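For example, to print where Docker recorded a container's mounts (reusing the container name from the question):
$ docker inspect --format '{{json .Mounts}}' test_test_1
For named or anonymous volumes, the Source field in that output points into the Docker root directory mentioned above.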