I've built a docker image with a python script that works with two different commands. The first one creates a file that is used when executing the second one.
As far as I know, I must use a Docker volume to store data between executions, so I created a volume with:
docker volume create myvol
To then use it when running the container
$ docker run myimg fit -v myvol:/data
model.h5 stored at /data
But then, when executing the other command, it seems that the directory /data inside the container is empty...
$ docker run predict -v myvol:/data
Error: /data/model.h5 not found
Am I missing something?
The docker command line is order sensitive. The syntax is:
docker $args_to_docker run $args_to_run $image_name $override_to_cmd
In your command you pass the -v option after the image name, so it becomes part of the command run inside the container:
$ docker run myimg fit -v myvol:/data
model.h5 stored at /data
That runs the command fit -v myvol:/data inside the container.
The solution is to change the order so that -v is an option to docker run and defines a volume:
$ docker run -v myvol:/data myimg fit
$ docker run -v myvol:/data myimg predict
Make sure you pass the -v or --mount argument to the docker run command itself. That ensures the data is really stored outside of the container, so nothing is lost between runs.
See: https://docs.docker.com/storage/volumes/ for details.
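If you want to double-check the volume contents between the two steps, any small image will do; a quick sketch (busybox is not part of the original question, it is just used here as a tiny image with ls):
$ docker run --rm -v myvol:/data busybox ls -l /data
model.h5 should be listed here after the fit step and before you run the predict step.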
When running these commands, remember that the -v option always takes the form <path_to_directory_on_host>:<path_to_directory_in_container>. The host part must be an absolute path (a bare name is treated as a named volume, and relative paths are not accepted by -v). So first create a directory on your host called data, move model.h5 into that directory, then mount it with the -v switch.
So if I had my data directory at C:\data on a Windows host, I would use:
docker run -v C:\data:/data <img_name>
If I were on Unix and my data directory were /usr/data, and I wanted it mounted at /data in the container, I would use:
docker run -v /usr/data:/data <img_name>
The .sh file I am working with is:
docker run -d --rm -it --gpus '"device=0,1,2,3"' --ipc=host -v $HOME/Folder:/Folder tr_xl_container nohup python /Path/file.py -p/Path/ |& tee $HOME/Path/log.txt
I am confused about the -v and everything after that. Specifically, the -v $HOME/Folder:/Folder tr_xl_container section and -p/Path/. If someone could help break down what those commands mean, or point me to a reference that does, that would be very much appreciated. I checked the Docker documentation and Linux command-line documentation and did not come up with anything too helpful.
A docker run command is split into 3 parts:
docker options
the image to run
a command for the container
In your case -d --rm -it --gpus '"device=0,1,2,3"' --ipc=host -v $HOME/Folder:/Folder are docker options.
tr_xl_container is the image name.
nohup python /Path/file.py -p/Path/ is the command sent to the container.
The last part, |& tee $HOME/Path/log.txt isn't run in the container, but takes the output from the docker run command and saves it in $HOME/Path/log.txt.
As for -v $HOME/Folder:/Folder, it's a volume mapping or more precisely, a bind mount. It creates a directory in the container with the path /Folder that is linked to the directory $Home/Folder on the host machine. That makes files in the host directory visible inside the container and if the container does anything with files in the /Folder directory, those changes will be visible in the host directory.
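As a quick illustration of that two-way visibility (a minimal sketch; the ~/demo directory and file names are made up, and busybox is used only as a small image with a shell):
$ mkdir -p ~/demo && echo "from host" > ~/demo/host.txt
$ docker run --rm -v $HOME/demo:/demo busybox sh -c 'cat /demo/host.txt; echo "from container" > /demo/container.txt'
$ cat ~/demo/container.txt
The last cat reads a file that was written inside the container but is visible on the host.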
The command after the image name is for the container, and it's up to the container what to do with it. From the look of it, it runs a Python program stored at /Path/file.py in the image, but to be sure you'll need to know what the image does.
I am developing a Docker image. I started with a base image and was working inside it interactively, using bash. I installed a bunch of stuff, and the install (which included compiling a lot of code) took over 20 minutes, so to save my work, I used:
$ docker commit 0f08ac958391 myproject:wip
Now when I try to run the image:
$ docker run --rm -it myproject:wip
docker: Error response from daemon: cannot mount volume over existing file, file exists /var/lib/docker/overlay2/95aa9a9ea7cc0b1ba302adbd287e4d7059ee4fbe64183404df3bc65df332ee63/merged/run/host-services/ssh-auth.sock.
What is going on? How do I fix this?
Note about related/duplicate questions: while there are other questions about this error message, none of the answers directly explain why the error happens in this situation or what to do about it. In fact, most of the questions have no answers at all.
When I ran the base image, I included a mount for the SSH agent socket:
$ docker run --rm -it -v /run/host-services/ssh-auth.sock:/run/host-services/ssh-auth.sock myproject:dev /bin/bash
This bind mounts a file from the host (actually the Docker daemon VM) to a file in the Docker container. When I committed the running container, the resulting image contained the file /run/host-services/ssh-auth.sock. The image also contained an empty (anonymous) volume reference to /run/host-services/ssh-auth.sock. This means that when I ran
$ docker run --rm -it myproject:wip
It was equivalent to running
$ docker run -v /run/host-services/ssh-auth.sock --rm -it myproject:wip
Unfortunately, what that command does is create an anonymous volume and mount it at /run/host-services/ssh-auth.sock in the container. That works whether or not the container already has a directory at that path; what makes it fail is the target path being occupied by a regular file, because Docker will not mount a volume over a file.
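You can see the volume that the commit recorded by inspecting the image's configuration (a quick sketch; the exact output may differ):
$ docker inspect --format '{{json .Config.Volumes}}' myproject:wip
This should print something like {"/run/host-services/ssh-auth.sock":{}}.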
The solution is to explicitly provide a mapping from a host file to the target volume. Any host file will do, but in my case it is best to use the original. So this works:
docker run --rm -it -v /run/host-services/ssh-auth.sock:/run/host-services/ssh-auth.sock myproject:wip
This is what I tried:
Dockerfile:
ENTRYPOINT go test ./tests -v .>/outputs/report.txt
Command line:
docker run test -v /outputs:/outputs
I expect that the newly generated report.txt will be available on the host in the same directory. What am I missing here?
I think you almost had it.
Try mapping the volume before the image name.
Instead: docker run test -v /outputs:/outputs
Use: docker run -v /outputs:/outputs test
This command binds your local /outputs to /outputs in the container. And remember: everything after the image name is passed as the command to the container.
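Putting it together, a minimal sketch of the corrected flow (assuming the image is tagged test and you can create /outputs on the host):
$ mkdir -p /outputs
$ docker run --rm -v /outputs:/outputs test
$ cat /outputs/report.txt
With the bind mount in place, the redirection in the ENTRYPOINT writes report.txt into the container's /outputs, which is the same directory as /outputs on the host.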
For more information see: Docker run command docs and Docker volume docs
Having some issues with this and need a little help (guidance). The data I want to back up is located here:
/var/lib/docker/volumes/eb5294b586d6537c965bde61d02da06d49ff77467afdc55ec1441413fe5fb128/_data
So I need to create a backup of this volume data and transfer it to another host. This page https://docs.docker.com/engine/tutorials/dockervolumes/#creating-and-mounting-a-data-volume-container says to use the following command:
$ docker run --rm --volumes-from dbstore -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /dbdata
What is the "ubuntu" for? I am running on Ubuntu but in the explanation of the command, it doesn't say anything about it. Is it something that is optional?
What about the /backup directory, where is that created? In the Home directory of the logged in user?
Thanks,
Don
What is the "ubuntu" for?
It's the base image that contains the tar utility.
I am running on Ubuntu but in the explanation of the command, it doesn't say anything about it. Is it something that is optional?
You can use any Docker image, as long as it has the tar command. It is not optional: you must provide the image that the container will run. (In your case you are running a one-off container just to execute tar.)
What about the /backup directory, where is that created? In the Home directory of the logged in user?
That is a mapped volume:
-v $(pwd):/backup
$(pwd) is your current working directory (the one in which you run the docker command), mapped to the /backup directory in the container, where tar is instructed to write your backup file (backup.tar). So that file will appear in the directory from which you ran the docker command.
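To move the data to the other host, copy backup.tar over and unpack it into a volume there with the same trick; a sketch following the same docs page (the container name dbstore2 on the target host is an assumption):
$ docker run --rm --volumes-from dbstore2 -v $(pwd):/backup ubuntu bash -c "cd /dbdata && tar xvf /backup/backup.tar --strip 1"
This starts a throwaway ubuntu container, mounts the target container's volumes plus your current directory, and extracts the archive into /dbdata.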
The "ubuntu" in the context of the command "docker run ..." stands for the name of the image. It seems like the image being built is the standard ubuntu image.
See this example:
docker run -it ubuntu
https://hub.docker.com/_/ubuntu/
When running Docker, you can mount files and directories using the --volume option. E.g.:
docker run --volume "$(pwd)/local":/remote myimage
I'm running a docker image that defines VOLUMEs in the Dockerfile. I need to access a config file that happens to be inside one of the defined volumes. I'd like to have that file "synced" to the host so that I can edit it. I know I could run docker exec ..., but I hope to avoid that overhead for editing just one file. I found out that the volumes created by the VOLUME lines are stored in /var/lib/docker/volumes/<HASH>/_data.
Using docker inspect I was able to find the directory that is mounted:
docker inspect gitlab-runner | grep -B 1 '"Destination": "/etc/gitlab-runner"' | head -n 1 | cut -d '"' -f 4
Output:
/var/lib/docker/volumes/9c233c085c36380c6c33035222c16e5d061368c5060cc81dda2a9a713a2b2b3b/_data
So the question is:
Is there a way to re-mount volumes defined in an image? Or to get at the directory more easily than with my one-liner above?
EDIT (after comments by zeppelin): I've tried rebinding the volume, with no success:
$ mkdir etc
$ docker run -d --name test1 gitlab/gitlab-runner
$ docker run -d --name test2 -v ~/etc:/etc/gitlab-runner gitlab/gitlab-runner
$ docker exec test1 ls /etc/gitlab-runner/
certs
config.toml
$ docker exec test2 ls /etc/gitlab-runner/
# empty. no files
$ ls etc
# also empty
docker inspect shows correctly that the volume is bound to ~/etc, but the files inside the container at /etc/gitlab-runner/ seem lost.
$ docker run -d --name test1 gitlab/gitlab-runner
$ docker run -d --name test2 -v ~/etc:/etc/gitlab-runner gitlab/gitlab-runner
You've got two different volume types there. One I call an anonymous volume (a very long uuid visible when you run docker volume ls). The second is a host volume or bind mount that maps a directory on the host directly into the container. So the two containers you spun up are looking at different places.
Anonymous volumes and named volumes (docker run -d -v mydata:/etc/gitlab-runner gitlab/gitlab-runner) get initialized to the contents of the image at that directory location. This initialization only happens when the volume is empty and is mounted into a new container. Host volumes, as you've seen, only get the contents of the host filesystem, even if it's empty at that location.
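As a quick illustration of that initialization behavior (a sketch; the volume and container names here are made up):
$ docker volume create runnerdata
$ docker run -d --name test3 -v runnerdata:/etc/gitlab-runner gitlab/gitlab-runner
$ docker run --rm -v runnerdata:/vol busybox ls /vol
The last command lists the contents of the named volume, which was initialized from the image's /etc/gitlab-runner (plus anything the container has written since).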
With that background, the short answer to your question is no, you cannot mount a file inside the container back out to your host. But you can copy the file out with several methods, assuming you don't overlay the source of the file with a host volume mount. With a running container, there's the docker cp command. Personally, I like:
docker run --rm -v ~/etc:/target gitlab/gitlab-runner \
cp -av /etc/gitlab-runner/. /target/.
If you have a named volume with data you want to copy in or out, you can use any image with the tools you need to do the copy:
docker run --rm -v mydata:/source -v ~/etc:/target busybox \
cp -av /source/. /target/.
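For comparison, the docker cp route mentioned above works directly against a container (a sketch using the test1 container from the question):
$ docker cp test1:/etc/gitlab-runner/config.toml ~/etc/config.toml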
Try to avoid modifying data inside a container from the host directly. It is much nicer to wrap your task into another container that you start with the --volumes-from option, when that is possible in your case.
Not sure I understood your problem; anyway, as for the documentation you mention,
The VOLUME instruction creates a mount point with the specified name
and marks it as holding externally mounted volumes from native host or
other containers. [...] The docker run command initializes the newly
created volume with any data that exists at the specified location
within the base image.
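For reference, the example Dockerfile from that part of the documentation looks roughly like this (reproduced as a sketch; check the docs for the exact version):
FROM ubuntu
RUN mkdir /myvol
RUN echo "hello world" > /myvol/greeting
VOLUME /myvol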
So, following the example Dockerfile above, after having built the image
docker build -t mytest .
and having the container running
docker run -d -ti --name mytestcontainer mytest /bin/bash
you can access it from the container itself, e.g.
docker exec -ti mytestcontainer ls -l /myvol/greeting
docker exec -ti mytestcontainer cat /myvol/greeting
Hope it helps.