I have a Docker image which has a VOLUME ["/log"].
While running the container I am mounting a folder present on the host.
I want all the logs written to VOLUME ["/log"] to be available on the host.
docker run --name=test -v ${pwd}/hostlogfolder:/log dockerimage:1
The logs are not getting written to hostlogfolder on the host,
but they are available inside the container at /log:
docker exec -it test bash
cd /log
What is the correct way to mount the folder?
You are almost there, the command needs a small correction:
docker run --name=test -v $(pwd)/hostlogfolder:/log dockerimage:1
Note that the brackets are different:
wrong: ${pwd}
right: $(pwd)
Once running, you can verify the mounted volumes using:
docker inspect <container id> (you can get the container id using docker ps)
Check the Mounts section of the command output.
"Mounts": [
{
"Source": "<host path>",
"Destination": "<container path>",
"Mode": "",
"RW": true
}]
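If you prefer a one-liner, here is a sketch of the same check using the format option (the container name test comes from the run command above):
docker inspect --format '{{ json .Mounts }}' test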
If you want to create an image with predefined settings such as a volume, you have to declare it in the Dockerfile. If you don't need it baked into the image, you can simply pass the volume when you create the container.
I suggest reading the documentation about creating volumes and the other directives; it should be very useful for you:
docker run -d -P --name web -v /webapp training/webapp python app.py
https://docs.docker.com/engine/userguide/containers/dockervolumes/
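To tie this back to the question, here is a minimal sketch of the Dockerfile route; the tag logdemo and the file app.log are names chosen purely for illustration:
# Dockerfile (sketch)
FROM busybox
RUN mkdir -p /log
VOLUME ["/log"]
# write one log line and stay alive so the container keeps running
CMD ["sh", "-c", "date >> /log/app.log && sleep 3600"]
Build and run it with the host folder mounted over /log:
docker build -t logdemo .
docker run -d --name logtest -v $(pwd)/hostlogfolder:/log logdemo
cat hostlogfolder/app.log   # the line written inside the container shows up on the host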
Related
I have the following Dockerfile:
RUN touch /root/testing
VOLUME ["/root"]
After I build and inspect the image, under Config I see:
"Volumes": {
"/root": {}
},
After I run /bin/bash in a container and inspect it, I see:
"Mounts": [
{
"Type": "volume",
"Name": "fc1dc25de37d6d7593a21443cd2bef74a0a6a4e3276b8353199054404665c398",
"Source": "/var/lib/docker/volumes/fc1dc25de37d6d7593a21443cd2bef74a0a6a4e3276b8353199054404665c398/_data",
"Destination": "/root",
"Driver": "local",
"Mode": "",
"RW": true,
"Propagation": ""
}
],
When I start a container it creates a local volume and mounts it on /root. It also copies the contents of /root into the local volume.
If I run the following on the host, I can see the testing file in it:
ls /var/lib/docker/volumes/fc1dc25de37d6d7593a21443cd2bef74a0a6a4e3276b8353199054404665c398/_data
testing
But the local volume is destroyed immediately after the container is killed.
So what is the purpose of a local volume? If I kill the container by mistake but still want the data my container created on the local volume, that's not possible, since the local volume is deleted along with it.
I wanted to try named volumes.
I created
docker volume create test
Then, in my Dockerfile:
RUN touch /root/testing
VOLUME [{"Name":"test","Destination":"/root","external":"true"}]
OR
VOLUME [ "Name:{"Destination":"/root","external":"true"}"]
When I try to build I get:
Error response from daemon: when using JSON array syntax, arrays must be comprised of strings only
So the only option left is to mount the volume from the command line rather than in the Dockerfile:
docker run --rm -it --mount source=test,destination=/root archlinux/test /bin/bash
[root@7c7001221c14 /]# ls /root
testing
Now I check the test volume's contents:
$ docker run --rm -it --mount source=test,destination=/tmp/myvolume archlinux/base ls /tmp/myvolume
testing
Here, since the test volume was completely empty, the contents of /root from the image (i.e. the testing file) were copied into the test volume when I ran docker run --rm -it --mount source=test,destination=/root archlinux/test /bin/bash.
But what if the test volume is not empty before I run docker run --rm -it --mount source=test,destination=/root archlinux/test /bin/bash? I.e.:
sudo sh -c 'rm -rf /var/lib/docker/volumes/test/_data/*'
sudo mkdir /var/lib/docker/volumes/test/_data/hellophp
and then do
docker run --rm -it --mount source=test,destination=/root archlinux/test /bin/bash
[root@7c7001221c14 /]# ls /root
hellophp
So my observations are:
- VOLUME ["/path/in/container/"] only creates anonymous local volumes; we can't use named volumes here.
- If I want to use named volumes, then:
a) create a named volume
docker volume create test
b) mount the named volume onto the container path
--mount source=test,destination=/path/in/container
Most important observation:
IF the named volume is empty (no files in it), then after running
docker run --rm -it --mount source=test,destination=/path/in/container IMAGENAME CMD
it will copy the contents of /path/in/container into the test volume and then mount the test volume at /path/in/container.
ELSE (i.e. the named volume already has some files in it), then after running
docker run --rm -it --mount source=test,destination=/path/in/container IMAGENAME CMD
it will not change the test volume by copying files from /path/in/container into it before mounting.
It will simply mount the test volume at /path/in/container, so any files existing at /path/in/container in the image will not be available in the container.
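A quick way to reproduce that copy-only-when-empty behaviour, using busybox and a throwaway volume named demo (both choices are mine, just for illustration):
docker volume create demo
# first use: the volume is empty, so the image's /etc is copied into it
docker run --rm --mount source=demo,destination=/etc busybox ls /etc
# the copied files are now visible wherever the volume is mounted
docker run --rm -v demo:/data busybox ls /data
docker volume rm demo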
If you are running a database in docker you can mount a local directory directly into your container using the -v option on the run command.
docker run -d \
-v <local path>:<container path>:z \
..
..
<your image>
The actual storage will be persistent on your local filesystem, and accessible in the container when the container is running.
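For example, a sketch with the official postgres image (the image, its data path, and the host folder name are assumptions for illustration; adjust them to your database):
docker run -d \
  --name mydb \
  -v $(pwd)/pgdata:/var/lib/postgresql/data:z \
  postgres:15
The :z suffix relabels the directory for SELinux hosts; drop it if you are not on an SELinux system.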
Also read this
https://docs.docker.com/storage/volumes/
I put a security file, id_rsa, into the Docker volume larrycai-vol, and I'm trying to mount it into the container as a file.
$ docker volume inspect larrycai-vol
[
    {
        "CreatedAt": "2018-05-18T06:02:24Z",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/larrycai-vol/_data",
        "Name": "larrycai-vol",
        "Options": {},
        "Scope": "local"
    }
]
Is there a command like the one below? (It doesn't work right now.)
docker run -it -v larrycai-vol/id_rsa:/root/.ssh/id_rsa own_ubuntu
I know it works if I mount the file from the Docker host:
docker run -it -v $PWD/.ssh/id_rsa:/root/.ssh/id_rsa own_ubuntu
Please let me add a couple of considerations beyond the proposed solution, to avoid further misunderstandings.
Docker allows mounting host directories (from a directory on your host to a path in the container, e.g. -v $PWD/.ssh:/root/.ssh; this replaces the complete .ssh folder at the destination, so it's possible but not recommended).
Docker allows mounting named volumes (from a named volume to a path in the container, e.g. -v larrycai-vol:/root/.ssh; this also replaces the complete .ssh folder at the destination, so again possible but not recommended).
Docker allows mounting single files from the host into a container (e.g. -v $PWD/.ssh/id_rsa:/root/.ssh/id_rsa, your second example).
Docker allows mounting single files from a named volume into a container (e.g. -v /var/lib/docker/volumes/YOUR_NAMED_VOLUME_NAME/_data/file:/dest_dir/file, which is what you're trying to do).
Your mistake is that you're telling Docker to mount id_rsa from a larrycai-vol directory, not from the Docker named volume.
In other words, the three following commands are equivalent:
docker run -it -v ./larrycai-vol/id_rsa:/root/.ssh/id_rsa own_ubuntu
docker run -it -v $PWD/larrycai-vol/id_rsa:/root/.ssh/id_rsa own_ubuntu
docker run -it -v larrycai-vol/id_rsa:/root/.ssh/id_rsa own_ubuntu
So, if the larrycai-vol directory (not the named volume, but a directory on your host) doesn't exist, the command doesn't do what you want.
In short, to do what you're trying, you have to bind mount the directory where the named volume stores its data:
docker run -it -v /var/lib/docker/volumes/larrycai-vol/_data/id_rsa:/root/.ssh/id_rsa own_ubuntu
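Before running that, you can double-check that the file really is inside the named volume by asking Docker for its mount point; a small sketch (the _data path normally requires root):
docker volume inspect -f '{{ .Mountpoint }}' larrycai-vol
sudo ls /var/lib/docker/volumes/larrycai-vol/_data/id_rsa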
I want the /home/moodle folder on the host to be the same as the /var/www/html folder in the container.
I tried running this command:
sudo docker run -d -P --name moodle --link DB:DB -p 8080:80 -v /home/moodle:/var/www/html jhardison/moodle
It adds this to docker inspect:
"Mounts": [
{
"Type": "bind",
"Source": "/home/moodle",
"Destination": "/var/www/html",
"Mode": "",
"RW": true,
"Propagation": ""
},
But the /home/moodle folder on the host is empty; it is not the same as /var/www/html in the container.
Here is an extract from the Docker documentation:
docker run -d -P --name web -v /src/webapp:/webapp training/webapp python app.py
This command mounts the host directory, /src/webapp, into the
container at /webapp. If the path /webapp already exists inside the
container’s image, the /src/webapp mount overlays but does not remove
the pre-existing content. Once the mount is removed, the content is
accessible again. This is consistent with the expected behavior of the
mount command.
Mounting an empty folder will overlay whatever contents your container had at that path. To share data from within a container, you could mount your folder onto some empty directory and let your application copy data into it, or run bash commands inside your container to do the same, as described in another thread:
docker exec -it mycontainer /bin/bash
Or you could use docker cp as described in another Stack Overflow thread:
docker cp <containerId>:/file/path/within/container /host/path/target
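For the container in this question, a sketch of the docker cp route (the container name moodle is taken from the run command above):
docker cp moodle:/var/www/html/. /home/moodle/
Once /home/moodle is populated this way, re-running the container with the -v /home/moodle:/var/www/html bind mount shows the same content on both sides.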
If you mount a host directory onto an image directory that previously existed, the content of the image directory is not removed, but while the mount is in place what you see in the container at that path is the content of your host directory.
Take a look at the Docker docs for further information: https://docs.docker.com/engine/tutorials/dockervolumes/#mount-a-host-directory-as-a-data-volume
I have a Dockerfile that looks like this. How can I access this volume from the host? I checked the volumes folder where Docker is installed.
FROM busybox
MAINTAINER Erik Kaareng-sunde <esu@enonic.com>
RUN mkdir -p /enonic-xp/home
RUN adduser -h /enonic-xp/ -H -u 1337 -D -s /bin/sh enonic-xp
RUN chown -R enonic-xp /enonic-xp/
VOLUME /enonic-xp/home
ADD logo.txt /logo.txt
CMD cat /logo.txt
ls
$ docker volume ls
DRIVER VOLUME NAME
local b4e99290fd4d5f7a3fe700ae9b616c2e66b1f758c497662415cdb47905427719
I would like to be able to cd into that volume.
inspect
docker volume inspect b4e99290fd4d5f7a3fe700ae9b616c2e66b1f758c497662415cdb47905427719
[
    {
        "Driver": "local",
        "Labels": null,
        "Mountpoint": "/var/lib/docker/volumes/b4e99290fd4d5f7a3fe700ae9b616c2e66b1f758c497662415cdb47905427719/_data",
        "Name": "b4e99290fd4d5f7a3fe700ae9b616c2e66b1f758c497662415cdb47905427719",
        "Options": {},
        "Scope": "local"
    }
]
After looking at a lot of posts, I finally found one that addresses the question asked here:
Getting path and accessing persistent volumes in Docker for Mac
Note: this works only for Mac.
The path for the tty may also be present here:
~/Library/Containers/com.docker.docker/Data/vm/*/tty
Instead of doing it within the Dockerfile, you can simply mount with docker run -v /path/in/host:/path/in/container image-name ...
docker volume ls lists all volumes, and docker volume inspect lets you inspect a volume. If you can't find your volume with docker volume ls, try docker inspect on your container and check for the info there.
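A sketch of narrowing that down with the format option, using the volume ID shown in the question (the _data path needs root on Linux):
docker volume inspect -f '{{ .Mountpoint }}' b4e99290fd4d5f7a3fe700ae9b616c2e66b1f758c497662415cdb47905427719
sudo ls /var/lib/docker/volumes/b4e99290fd4d5f7a3fe700ae9b616c2e66b1f758c497662415cdb47905427719/_data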
When running Docker, you can mount files and directories using the --volume option, with the host path first and the container path second. E.g.:
docker run --volume $(pwd)/local:/remote myimage
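Applied to the Dockerfile in the question, a sketch (assuming the image was tagged enonic-xp, a name chosen only for illustration):
docker build -t enonic-xp .
docker run -v $(pwd)/xp-home:/enonic-xp/home enonic-xp
# anything the container writes to /enonic-xp/home now lands in ./xp-home on the host
ls ./xp-home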
I'm running a Docker image that defines VOLUMEs in the Dockerfile. I need to access a config file that happens to be inside one of the defined volumes. I'd like to have that file "synced" on the host so that I can edit it. I know I could run docker exec ..., but I hope to avoid that overhead just to edit one file. I found out that the volumes created by the VOLUME lines are stored in /var/lib/docker/volumes/<HASH>/_data.
Using docker inspect I was able to find the directory that is mounted:
docker inspect gitlab-runner | grep -B 1 '"Destination": "/etc/gitlab-runner"' | head -n 1 | cut -d '"' -f 4
Output:
/var/lib/docker/volumes/9c233c085c36380c6c33035222c16e5d061368c5060cc81dda2a9a713a2b2b3b/_data
So the question is:
Is there a way to re-mount volumes defined in an image, or to somehow get at the directory more easily than with my one-liner above?
EDIT: after comments by zeppelin, I've tried rebinding the volume, with no success:
$ mkdir etc
$ docker run -d --name test1 gitlab/gitlab-runner
$ docker run -d --name test2 -v ~/etc:/etc/gitlab-runner gitlab/gitlab-runner
$ docker exec test1 ls /etc/gitlab-runner/
certs
config.toml
$ docker exec test2 ls /etc/gitlab-runner/
# empty. no files
$ ls etc
# also empty
docker inspect shows correctly that the volume is bound to ~/etc, but the files inside the container at /etc/gitlab-runner/ seem lost.
$ docker run -d --name test1 gitlab/gitlab-runner
$ docker run -d --name test2 -v ~/etc:/etc/gitlab-runner gitlab/gitlab-runner
You've got two different volume types there. One I call an anonymous volume (a very long uuid visible when you run docker volume ls). The second is a host volume or bind mount that maps a directory on the host directly into the container. So each container you spun up is looking at a different place.
Anonymous volumes and named volumes (docker run -d -v mydata:/etc/gitlab-runner gitlab/gitlab-runner) get initialized to the contents of the image at that directory location. This initialization only happens when the volume is empty and is mounted into a new container. Host volumes, as you've seen, only get the contents of the host filesystem, even if it's empty at that location.
With that background, the short answer to your question is no, you cannot mount a file inside the container back out to your host. But you can copy the file out with several methods, assuming you don't overlay the source of the file with a host volume mount. With a running container, there's the docker cp command. Personally, I like:
docker run --rm -v ~/etc:/target gitlab/gitlab-runner \
cp -av /etc/gitlab-runner/. /target/.
If you have a named volume with data you want to copy in or out, you can use any image with the tools you need to do the copy:
docker run --rm -v mydata:/source -v ~/etc:/target busybox \
cp -av /source/. /target/.
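If the container is still running, the docker cp command mentioned above also works for a single file; a sketch, assuming the container is named gitlab-runner as in the inspect one-liner earlier:
docker cp gitlab-runner:/etc/gitlab-runner/config.toml .
# edit config.toml locally, then put it back
docker cp ./config.toml gitlab-runner:/etc/gitlab-runner/config.toml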
Try to avoid modifying data inside a container from the host directly; it is much nicer to wrap your task into another container that you then start with the --volumes-from option, when that is possible in your case.
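A sketch of that --volumes-from variant for the test1 container from the question (the ~/etc target directory is reused from the question's own commands):
docker run --rm --volumes-from test1 -v ~/etc:/target busybox \
    cp -av /etc/gitlab-runner/. /target/.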
I'm not sure I understood your problem; anyway, as for the documentation you mention:
The VOLUME instruction creates a mount point with the specified name
and marks it as holding externally mounted volumes from native host or
other containers. [...] The docker run command initializes the newly
created volume with any data that exists at the specified location
within the base image.
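For reference, the example Dockerfile from that documentation looks roughly like this (reproduced here as a sketch; check the Dockerfile reference for the exact version):
FROM ubuntu
RUN mkdir /myvol
RUN echo "hello world" > /myvol/greeting
VOLUME /myvol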
So, following the example Dockerfile, after having built the image
docker build -t mytest .
and having the container running
docker run -d -ti --name mytestcontainer mytest /bin/bash
you can access it from the container itself, e.g.
docker exec -ti mytestcontainer ls -l /myvol/greeting
docker exec -ti mytestcontainer cat /myvol/greeting
Hope it helps.