I have tried to install TF Serving in a Docker container several times.
However, I still can't build it without errors.
I followed the installation steps on the official site, but I still hit compile errors during the build.
I suspect there is some flaw in the Dockerfile I built.
I have attached a screenshot of the error.
I found a Dockerfile online that meets my needs:
https://github.com/posutsai/DeepLearning_Docker/blob/master/Tensorflow-serving/Dockerfile-Tensorflow-serving-gpu
As of now (Oct 2019), official Docker images of TF Serving for both CPU and GPU are available at https://hub.docker.com/r/tensorflow/serving.
To set up TF Serving on Docker, just pull the image and start the container.
Pull the official image
docker pull tensorflow/serving:latest
Start the container
docker run -p 8500:8500 -p 8501:8501 --mount type=bind,source=/path/to/model/dir,target=/models/inception --name tfserve -e MODEL_NAME=inception -t tensorflow/serving:latest
To use a GPU, pull the GPU-specific image and pass the appropriate parameters in the docker command
docker pull tensorflow/serving:latest-gpu
docker run -p 8500:8500 -p 8501:8501 --mount type=bind,source=/path/to/model/dir,target=/models/inception --name tfserve_gpu -e MODEL_NAME=inception --gpus all -t tensorflow/serving:latest-gpu --per_process_gpu_memory_fraction=0.001
Please note:
The --gpus all flag allocates all available GPUs (in case you have multiple in your machine) to the Docker container.
Use --gpus device=<index> (e.g. --gpus device=0 for the first GPU) if you need to restrict usage to a particular device.
The --per_process_gpu_memory_fraction flag restricts GPU memory usage by the TF Serving container. Set its value according to your program's needs.
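Once the container is up, a quick sanity check (assuming the defaults above, with the REST API on port 8501 and the model named inception) is to query TF Serving's REST endpoints.
Check the model status:
curl http://localhost:8501/v1/models/inception
Send a prediction request (the exact "instances" payload depends on your model's input signature):
curl -d '{"instances": [...]}' -X POST http://localhost:8501/v1/models/inception:predict
Port 8500 in the commands above is the gRPC endpoint, 8501 the REST endpoint.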
I am very new to Docker and Jupyter notebooks. I pulled the image, and it directed me to the relevant Jupyter notebook. The problem is that, whatever plots I make in the notebook, I cannot find the resulting files on my system. A file named settings.cmnd should be created on my system. I am using Windows 10 Home. I am using the following command:
docker run -it -v "//c/Users/AB/project":"//c/program files/Docker Toolbox" -p 8888:8888/tcp CONTAINER NAME
It runs fine, as I am able to access the Jupyter notebook, but the file is still missing on my system.
The folder in which I want to save the file is project.
Kindly help.
I did not find an image called electronioncollider/pythiatutorial, so I'm assuming you meant electronioncollider/pythia-eic-tutorial.
The default working directory for that image is /code, so the command on Windows should look like:
docker run --rm -v //c/Users/AB/project://code -p 8888:8888 electronioncollider/pythia-eic-tutorial:latest
The working directory can be changed with -w, so the following should work as well:
docker run --rm -w //whatever -v //c/Users/AB/project://whatever -p 8888:8888 electronioncollider/pythia-eic-tutorial:latest
Edit:
The electronioncollider/pythia-eic-tutorial:latest image has only one variant, built for linux/amd64. This means it is meant to run on 64-bit Linux on a machine with an Intel or AMD processor.
You're not running it on Windows, but on a Linux VM that runs on your Windows host. Docker can access C:\Users\AB\project because it's mounted inside the VM as /c/Users/AB/project (although most likely it's actually C:\Users that's mounted as /c/Users). Therein lies the problem: the Windows and Linux permission models are incompatible, so the Windows directory is mounted with fixed permissions that allow all Linux users access. Docker then mounts that directory inside the container with the same permissions. Unfortunately, Jupyter wants some of the files it creates to have a very specific set of permissions (for security reasons). Since the permissions are fixed to a single value, Jupyter cannot change them and breaks.
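One way to see this for yourself (assuming the image lets you override its entrypoint) is to list the mounted directory from inside a throwaway container:
docker run --rm --entrypoint ls -v //c/Users/AB/project://code electronioncollider/pythia-eic-tutorial:latest -ln /code
Every entry will typically show the same fixed owner and mode, no matter what chmod is attempted, which is exactly what trips Jupyter up.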
There are two possible solutions:
Get inside whatever VM Docker is running in, change to a directory that is not mounted from Windows, and run the container from there using the command from the tutorial/README:
docker run --rm -u `id -u $USER` -v $PWD:$PWD -w $PWD -p 8888:8888 electronioncollider/pythia-eic-tutorial:latest
and the files will appear in the directory that the command is run from.
Use the modified image I created:
docker run --rm -v //c/Users/AB/project://code -p 8888:8888 forinil/pythia-eic-tutorial:latest
You can find the image on Docker Hub here. The source code is available on GitHub here.
Edit:
Due to changes in my version of the image the proper command for it would be:
docker run -it --rm -v //c/Users/AB/project://code --entrypoint rivet forinil/pythia-eic-tutorial
I released a new version, so if you run docker pull forinil/pythia-eic-tutorial:latest, you'll be able to use both the command above and:
docker run -it --rm -v //c/Users/AB/project://code forinil/pythia-eic-tutorial rivet
That being said, I did not receive any permission errors while testing either the old or the new version of the image.
I hope you understand that due to how Docker Toolbox works, you won't be able to use aliases the way the tutorial says you would on Linux.
For one thing, you'll only have access to files inside the directory C:\Users\AB\project; for another, file paths inside the container will differ from those outside it, e.g. the file C:\Users\AB\project\notebooks\pythiaRivet.ipynb will be available inside the container as /code/notebooks/pythiaRivet.ipynb.
Note on asking questions:
You've been banned from asking questions because your questions are low quality. Please read the guidelines before asking any more.
My goal is to run arbitrary GUI applications from Docker container using host Xserver.
I tried http://wiki.ros.org/docker/Tutorials/GUI#The_simple_way - Step 1
I would run the docker image using docker run --gpus all -it -p "8888:8888" -v "/home/gillian/Documents/deeplearning/:/deeplearning/" --env=DISPLAY=$DISPLAY --env=QT_X11_NO_MITSHM=1 --volume=/tmp/.X11-unix:/tmp/.X11-unix:rw pytorch
But when I tried to run xlogo or xclock from within the container, it would always fail with Error: Can't open display: :0
After spending the night trying to fix it, I tried adding --net=host as an argument to docker run. Then I could run xclock and xlogo, and they would display on my screen without any issues.
Why?
What can I do to run the docker image without sacrificing network isolation (i.e. without --net=host)?
I am running Kubuntu 20.04
I'm getting possibly incorrect behavior and a bad error message when I run an image and a linked container is not found:
# this works:
> docker run --rm -d --name natsserver nats
> docker run --rm -it --name hello-world --link natsserver hello-world
# now stop natsserver again...
> docker stop natsserver
When I run hello-world again with the same command, I don't understand the first part of the error handling - why does docker try to pull?
> docker run --rm -it --name hello-world --link natsserver hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
Digest: sha256:b8ba256769a0ac28dd126d584e0a2011cd2877f3f76e093a7ae560f2a5301c00
Status: Image is up to date for hello-world:latest
docker: Error response from daemon: could not get container for natsserver: No such container: natsserver.
See 'docker run --help'.
And things get even worse if I try to run an image I have built locally:
> docker build -t nats-logger .
[...]
Successfully tagged nats-logger:latest
> docker run --rm -it --name nats-logger --link=natsserver nats-logger
Unable to find image 'nats-logger:latest' locally
docker: Error response from daemon: pull access denied for nats-logger, repository does not exist or may require 'docker login': denied: requested access to the resource is denied.
See 'docker run --help'.
So my questions are:
a) Is docker allowed to try to pull in this case, or is this bad behavior?
b) Is this really a bad error message, or did I miss something?
P.S.: I'm running Docker version 19.03.2, build 6a30dfc on Windows 10.
Is docker allowed to try to pull in this case
Docker will pull the image if it is not available on the machine.
Unable to find image 'hello-world:latest' locally
This warning message is not due to linking; it appears because hello-world:latest does not exist among your local images. When you run docker run, Docker first looks for the image locally and pulls it from the remote registry if it is not found.
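As a quick check, you can list what is already stored locally before running; if the tag shows up here, docker run will use it without pulling:
docker image ls hello-world
docker image ls nats-logger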
First of all, it is better to use docker-compose with user-defined networks instead of legacy container links (a minimal compose sketch follows the warning below).
You cannot link to a container that is not running. Verify the natsserver container with docker ps; if it is running, you can link to it:
docker run --rm -it --name hello-world --link natsserver:my_natserver_host hello-world
Once it is up, you can check the linking:
docker inspect hello-world | grep -A 1 Links
Legacy container links
Warning: The --link flag is a legacy feature of Docker. It may eventually be removed. Unless you absolutely need to continue using it, we recommend that you use user-defined networks to facilitate communication between two containers instead of using --link. One feature that user-defined networks do not support that you can do with --link is sharing environment variables between containers. However, you can use other mechanisms such as volumes to share environment variables between containers in a more controlled way.
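As a minimal sketch of the docker-compose approach mentioned above (the service names and the NATS_URL variable are only illustrative; adapt the build context and environment to your app), both services join the same compose network and reach each other by service name, so no --link is needed:
version: "3"
services:
  natsserver:
    image: nats
  nats-logger:
    build: .
    depends_on:
      - natsserver
    environment:
      - NATS_URL=nats://natsserver:4222  # 4222 is the default NATS client port
Start everything with docker-compose up; inside the nats-logger container, the hostname natsserver then resolves to the NATS container.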
Simply try docker login.
Check whether your image name exists on Docker Hub.
Check that your docker build command is correct: docker build -t image-name .
Review the correctness of your Dockerfile.
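You can also confirm that the build actually produced a local tag before running it; if nothing shows up here, docker run falls back to pulling from a registry, which is where the "pull access denied" message comes from:
docker build -t nats-logger .
docker images nats-logger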
I'd like to dockerize my StrongLoop LoopBack based Node server and start using Process Manager (PM) to keep it running.
I've been using RancherOS on AWS which rocks.
I copied (but didn't add anything to) the following Dockerfile as a template for my own Dockerfile:
https://hub.docker.com/r/strongloop/strong-pm/~/dockerfile/
I then:
docker build -t somename .
(Dockerfile is in .)
It now appears in:
docker images
But when I try to start it, it exits right away:
docker run --detach --restart=no --publish 8701:8701 --publish 3001:3001 --publish 3002:3002 --publish 3003:3003 somename
AND if I run the strong-pm image (after opening the ports on AWS), it works as described above with strongloop/strong-pm, but not with somename
(I can browse aws-instance:8701/explorer)
Also, these instructions to deploy my app https://strongloop.com/strongblog/run-create-node-js-process-manager-docker-images/ require:
slc deploy http://docker-host:8701/
but Rancher doesn't come with npm (or curl) installed, and when I bash into the VM, slc isn't installed either, so it seems like slc needs to run "outside" the VM:
docker exec -it fb94ddab6baa bash
If you're still reading, nice. I think what I'm trying to do is add a Dockerfile to my git repo that will deploy my app server (including pulling code from my repos) on any Docker box.
The workflow for the strongloop/strong-pm docker image assumes you are deploying to it from a workstation. The footprint for npm install -g strongloop is significantly larger than strong-pm alone, which is why the docker image has only strong-pm installed in it.
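So the usual pattern is roughly: keep the container running only strong-pm, and drive deployments from a workstation (or a CI machine) where the full StrongLoop tooling is installed, along these lines.
Install the tooling on the workstation, not inside the container:
npm install -g strongloop
Package the app from its source directory:
slc build
Deploy to the process manager running in the container:
slc deploy http://docker-host:8701/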
I read this article http://blog.docker.io/2013/09/docker-can-now-run-within-docker/ and I want to share images between my "host" docker and "child" docker. But when I run
sudo docker run -v /var/lib/docker:/var/lib/docker -privileged -t -i jpetazzo/dind
I can't connect to the "child" Docker from the dind container.
root@5a0cbdc2b7df:/# docker version
Client version: 0.8.1
Go version (client): go1.2
Git commit (client): a1598d1
2014/03/13 18:37:49 Can't connect to docker daemon. Is 'docker -d' running on this host?
How can I share my local images between host and child docker?
You shouldn't do that! Docker assumes that it has exclusive access to /var/lib/docker, and if you (or another Docker instance) meddle with this directory, it could have unexpected results.
There are multiple solutions, depending on what you want to achieve.
If you want to be able to run Docker commands from within a container, but don't need a separate daemon, then you can share the Docker control socket with this container, e.g.:
docker run -v /var/run/docker.sock:/var/run/docker.sock \
-v /usr/bin/docker:/usr/bin/docker \
-t -i ubuntu bash
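From inside that container, the docker CLI talks to the host's daemon over the shared socket, so for example:
docker ps
will list the containers running on the host (including the one you are currently in).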
If you really want to run a different Docker daemon (e.g. because you're hacking on Docker and/or want to run a different version), but want to access the same images, maybe you could run a private registry in a container, and use that registry to easily share images between Docker-in-the-Host and Docker-in-the-Container.
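A rough sketch of that registry approach (the image name my-image and port 5000 are just examples, and how the second daemon reaches the registry depends on your networking):
docker run -d -p 5000:5000 --name registry registry:2
docker tag my-image localhost:5000/my-image
docker push localhost:5000/my-image
The other Docker daemon can then docker pull the image back, using whatever address it can reach the registry on.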
Don't hesitate to give more details about your use-case so we can tell you the most appropriate solution!