How to set options for volumes specified in Dockerfile - docker

I can set volume options when creating a volume:
$ docker volume create --driver local \
--opt type=tmpfs \
--opt device=tmpfs \
--opt o=size=100m,uid=1000 \
foo
or when I run a container with a --mount flag:
$ docker run \
--mount 'type=volume,src=<VOLUME-NAME>,dst=<CONTAINER-PATH>,volume-driver=local,volume-opt=type=nfs,volume-opt=device=<nfs-server>:<nfs-path>,"volume-opt=o=addr=<nfs-address>,vers=4,soft,timeo=180,bg,tcp,rw"'
<IMAGE>
But how do I set options for volumes created in a Dockerfile?
FROM ubuntu
VOLUME /myvol
Looking at the docs, I can only see a flag for setting a volume driver:
--volume-driver Optional volume driver for the container

In general, if there are "options" for things you might specify in a Dockerfile, you can't set them there. For a VOLUME you can't specify any specific host path, named volume, or device; for an EXPOSEd port you can't specify that it be published on a specific host interface; and so on.
In most cases I'd suggest avoiding a Dockerfile VOLUME declaration, since it mostly has only confusing side effects (notably, preventing any later RUN command from modifying that directory). You will always need to use a docker run -v or similar option to mount a named volume into the container, and that doesn't need a matching VOLUME in the image.
If you do docker run -v to explicitly mount something on a directory declared as a VOLUME, that mount replaces the implicitly created anonymous volume.
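For example, with the image above (VOLUME /myvol), a run along these lines uses the named volume on that path instead of creating an anonymous one; the names myimage and mydata are placeholders:
# build the image from the Dockerfile above, then mount a named volume over /myvol
$ docker build -t myimage .
$ docker volume create mydata
$ docker run -v mydata:/myvol myimage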

Related

Docker run --volume keeps creating random volumes and not using the one specified

Docker keeps creating random volumes instead of using the one I specify when running docker run....
I'll start out with no volumes.
$ docker volume ls
DRIVER VOLUME NAME
I'll create one
docker volume create myvol
It'll get created
$ docker volume ls
DRIVER VOLUME NAME
local myvol
I'll start a container using the volume
$ docker run -d \
--name myapp \
--publish 1337:1337 \
--volume myvol:/my-work-dir/.tmp \
foo/bar:tag
I'll go and check my volumes again and I have the one I created and a new one.
$ docker volume ls
DRIVER VOLUME NAME
local 9f7ffe30c24821c8c2cf71b4228a1ec7bc3ad6320c05451e42661a4e3c2c0fb7
local myvol
Why isn't myvol being used? Why is a new volume being created?
This happens when the image you are using defines a VOLUME in the Dockerfile to a container path that you do not define as a volume in your run command. Docker creates the guid for the volume name when you have a volume without a source, aka an anonymous volume. You can use docker image inspect on the image to see the volumes defined in that image. If you inspect the container (docker container inspect), you'll see that your volume is being used, it's just that there's a second anonymous volume to a different path also being used.
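To see both sides of this, inspect commands along these lines should work (image and container names taken from the question above):
# volumes declared in the image
$ docker image inspect --format '{{json .Config.Volumes}}' foo/bar:tag
# mounts actually attached to the container: myvol plus the anonymous one
$ docker container inspect --format '{{json .Mounts}}' myapp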
How to work with volumes:
To create a volume:
docker volume create my-vol
To use this volume:
docker run -d --name devtest -v my-vol:/app nginx:latest

Dockerfile VOLUME command, data is lost when mounted as bindmount

I created a Dockerfile and ran the container with a bind mount; the contents are lost (no content).
FROM alpine:3.8
RUN mkdir /myvol
RUN echo "hello world" > /myvol/greeting
VOLUME /myvol
docker run -it --name volDemo2 -v $(pwd)/myvol:/myvol voldemo sh
I am expecting "myvol" under $(pwd) to contain "greeting", which is not the case.
root@default:/home/docker# docker run -it --name volDemo2 -v $(pwd)/myvol:/myvol voldemo sh
/ # cd myvol/
/myvol # ls
/myvol #
However, the same works fine if it is mounted the following way
docker#default:~$ docker run -it --name volDemo1 -v myvol:/myvol voldemo sh
/ # cd myvol/
/myvol # ls
1.txt greeting
/myvol # exit
Is this the expected behavior, that the VOLUME instruction will only work with volumes and not bind mounts?
This is how bind mounts work. They mount one folder at another path on the filesystem. All access to the target path gets mapped directly back to the source directory.
What docker provides for a named volume (your second example) is an initialization step when that named volume is empty on container creation. Docker will copy all files, directories, and metadata like file owner and permissions, from the image filesystem into the named volume before the container is started. This only happens with named volumes and not host mounts or tmpfs volumes. And it only happens when the named volume is empty, so it will not be updated as you change the image.
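A quick way to observe that initialization, assuming the image from the question is built with the tag voldemo:
# mount an empty named volume over a path that has content in the image
$ docker volume create init-demo
$ docker run --rm -v init-demo:/myvol voldemo ls /myvol
# should list "greeting", copied from the image because the volume was empty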
You can make a named volume that mounts other directories on the host by passing additional options, giving you something between a host mount and default named volume, since they are both implemented with a bind mount. Three different examples of that are shown below:
# create the volume in advance
$ docker volume create --driver local \
--opt type=none \
--opt device=/home/user/test \
--opt o=bind \
test_vol
# create on the fly with --mount
$ docker run -it --rm \
--mount type=volume,dst=/container/path,volume-driver=local,volume-opt=type=none,volume-opt=o=bind,volume-opt=device=/home/user/test \
foo
# inside a docker-compose file
...
volumes:
  bind-test:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /home/user/test
...
If your goal is to keep things simple for other users of the image, and potentially update the volume with new versions of the image, then you'll want to do this as part of your entrypoint script. I do this in my volume caching scripts included in my base image. You copy the volume directory to a safe location inside the image, and then on container startup, the entrypoint script will copy the files into the volume.
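As a rough sketch of that pattern (the /myvol-dist path and script name are made up for illustration, not the actual scripts referenced above): the Dockerfile would save a copy of the directory, e.g. RUN cp -a /myvol /myvol-dist before the VOLUME instruction, and the entrypoint would restore it on startup:
#!/bin/sh
# entrypoint.sh: refresh the mounted volume from the copy kept inside the image
cp -a /myvol-dist/. /myvol/
# then hand off to the container's main command
exec "$@"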

Docker ADD does not copy the contents of a subfolder

While building my dev environment (Linux Mint 18.3) I had to create a Dockerfile with the instructions:
FROM centos:7
ENV CATALINA_HOME /opt/tomcat
ADD apache-tomcat-8.5.5 ${CATALINA_HOME}
where the apache-tomcat-8.5.5 is a folder with all the files to deploy tomcat-8.5.5 that can be obtained from:
https://archive.apache.org/dist/tomcat/tomcat-8/v8.5.5/bin/apache-tomcat-8.5.5.tar.gz
ADD copies all the files except the content at ~/apache-tomcat-8.5.5/webapps/. Has anyone had a similar problem? I even changed the permissions to be RW by anyone, yet the problem persists.
I am running this docker using
docker run -v /opt/webapi/tomcat/webapps:/opt/tomcat/webapps -v /opt/webapi/tomcat/logs:/opt/tomcat/logs .....
Could this sharing be deleting the contents in webapps? If this is the case, how can we avoid it? I do not have any .dockerignore file.
I am running this docker using
docker run -v /opt/webapi/tomcat/webapps:/opt/tomcat/webapps -v /opt/webapi/tomcat/logs:/opt/tomcat/logs .....
You overwrite the webapps directory with the host mount to /opt/webapi/tomcat/webapps. Only files in that directory on the host will be visible inside that container. The files are most likely being copied into the image, but with that volume mount the files inside the image cannot be seen.
If you do not want to replace the directory inside the container with this host directory, then do not create a volume mount. If you want to overwrite the host directory with the contents from the image, you can do this in an entrypoint, or switch to a named volume. A named volume is initialized when it is empty, and you can point it back to any folder on the host by passing options to the volume driver. Here are several examples of a named bind mount:
# create the volume in advance
$ docker volume create --driver local \
--opt type=none \
--opt device=/home/user/test \
--opt o=bind \
test_vol
# create on the fly with --mount
$ docker run -it --rm \
--mount type=volume,dst=/container/path,volume-driver=local,volume-opt=type=none,volume-opt=o=bind,volume-opt=device=/home/user/test \
foo
# inside a docker-compose file
...
volumes:
  bind-test:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /home/user/test
...
For copying the files inside an entrypoint, see the save-volume and load-volume scripts in my docker-base repo. This would give you the flexibility to always overwrite the contents on the host with the saved values in the image. Though if you do this, consider whether you really need a volume, and how you plan to avoid data loss by overwriting user changes.
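If a one-time copy is enough, another option (separate from the entrypoint scripts mentioned above) is to copy the directory out of the image onto the host before starting the container; the paths assume the Dockerfile and run command from the question:
# create a stopped container from the image and copy webapps out to the host path
$ id=$(docker create <IMAGE>)
$ docker cp "$id":/opt/tomcat/webapps/. /opt/webapi/tomcat/webapps/
$ docker rm "$id"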

How to copy files to host-mounted directory in docker container

I'm trying to copy configuration files to the jenkins/jenkins image with a host-mounted directory.
part of my Dockerfile:
FROM jenkins/jenkins
COPY file.txt /var/jenkins_home/
I tried to use a volume like this:
-v volume_name:/var/jenkins_home
In this case I do see "file.txt" in Jenkins, but if I use:
-v /folder:/var/jenkins_home
I do not see file.txt in Jenkins at all. So what am I missing here?
Per your question:
... if I use:
-v /folder:/var/jenkins_home
I do not see file.txt in Jenkins at all. So what am I missing here?
Host volumes, sometimes referred to as bind mounts because of their underlying implementation, do not initialize the volume from the image content. Only named volumes provide initialization support from the docker engine. However, it is possible to point a named volume to a bind mount with a different syntax. Here are several examples of different ways to do that:
# create the volume in advance
$ docker volume create --driver local \
--opt type=none \
--opt device=/home/user/test \
--opt o=bind \
test_vol
# create on the fly with --mount
$ docker run -it --rm \
--mount type=volume,dst=/container/path,volume-driver=local,volume-opt=type=none,volume-opt=o=bind,volume-opt=device=/home/user/test \
foo
# inside a docker-compose file
...
volumes:
  bind-test:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /home/user/test
...
In your case, you could do:
docker run -it --rm \
--mount type=volume,dst=/var/jenkins_home,volume-driver=local,volume-opt=type=none,volume-opt=o=bind,volume-opt=device=/folder \
...
To answer what you are actually trying to do:
That said, the Jenkins image defines a volume at /var/jenkins_home, which blocks your ability to extend the image with a RUN command that changes that folder. COPY and ADD just happen to work because they do not create a temporary container. As a workaround, Jenkins developers use /usr/share/jenkins/ref/ inside the image as a source to initialize the /var/jenkins_home directory. So your Dockerfile should copy your desired files there instead:
FROM jenkins/jenkins
COPY file.txt /usr/share/jenkins/ref/
Welcome to SO.
In your first scenario you're telling Docker to create a volume (https://docs.docker.com/storage/volumes/) and mount it on /var/jenkins_home.
Docker pre-populates the volume with the data that already exists in the image. If the volume already existed, it will be reused.
You can check your volumes by executing:
docker volume ls
In your second scenario you're not seeing the file because you're bind mounting (https://docs.docker.com/storage/bind-mounts/) a directory from your host (local machine / vm) to the container. All the files that you see under /var/jenkins_home will be the same as in your host directory /folder.
This happens at runtime (when the container is created). If you want some default files in your Docker image, you add them at build time using the COPY or ADD instructions, like you're doing; these files are copied into the image when you build it. But if at runtime you bind mount over the directory or the file, you are basically replacing them.
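Put next to the commands from the question, that means roughly the following (the image name is a placeholder):
# named volume: initialized from the image, so file.txt copied at build time is visible
$ docker run -d -v volume_name:/var/jenkins_home <your-image>
# bind mount: /folder hides /var/jenkins_home from the image, so file.txt is not visible
$ docker run -d -v /folder:/var/jenkins_home <your-image>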

Docker Volume empty after it's created

I have a simple Dockerfile
FROM alpine
RUN apk add --no-cache lsyncd
CMD ["ls", "-al", "/etc/lsyncd"]
I build the image and it works fine if I run it like:
docker run -i -t -P <NAME_OF_THE_IMAGE>
and I get the folder listing with the expected files, since the CMD does that.
If I run it like:
docker run -i -t -P -v /docker/dcm/tst:/etc/lsyncd <NAME_OF_THE_IMAGE>
It creates the "/docker/dcm/tst" folder, which is empty, and the ls command also returns nothing.
If my understanding is correct, if the folder "/docker/dcm/tst" on the local machine does not exist, a new one will be created and the contents of the folder "/etc/lsyncd" will be copied into the new folder.
Is my understanding correct? What could be causing the issue that I'm seeing?
uname -a
Linux docker 4.9.53-5.ph2-esx #1-photon SMP Thu Oct 26 02:44:24 UTC 2017 x86_64 Intel(R) Xeon(R) CPU E5-2620 v3 @ 2.40GHz GenuineIntel GNU/Linux
Named volumes and host volumes behave differently. A host volume, aka bind mount, maps the directory into the container exactly as it exists on the host. There is no initialization process.
The named volumes support initializing the contents of the volume when that volume is empty on container startup. It will be initialized to the contents of the image at the selected location, including file/directory uid/gid and permissions.
To get a named volume with a host directory, you can define a named volume that is a bind mount using one of the below options:
# create the volume in advance
$ docker volume create --driver local \
--opt type=none \
--opt device=/home/user/test \
--opt o=bind \
test_vol
# create on the fly with --mount
$ docker run -it --rm \
--mount type=volume,dst=/container/path,volume-driver=local,volume-opt=type=none,volume-opt=o=bind,volume-opt=device=/home/user/test \
foo
# inside a docker-compose file
...
volumes:
  bind-test:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /home/user/test
There is one other behavior change that I know of with named volumes that point to a bind mount: docker will not create the host directory if it doesn't exist. Instead, container creation will fail with a volume creation error.
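So with the examples above, the host directory has to exist before the container starts, e.g.:
# create the bind target first, otherwise container creation fails
$ mkdir -p /home/user/test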
A bind mount means that a file or directory on the host machine is mounted into a container.
Regarding mounting into a non-empty directory on the container, the docs say the following:
If you bind-mount into a non-empty directory on the container, the directory’s existing contents are obscured by the bind mount.
For details, refer to the bind mounts documentation (https://docs.docker.com/storage/bind-mounts/).
It means that a bind mount just obscures whatever was in the container dir.
I think you have confused this with a volume, which copies the contents of the container dir into the volume when it is created for the first time. However, you are using a bind mount, and it doesn't work like that.
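Applied to the question, the two run commands behave differently; the volume name dcm_tst is just an example:
# named volume: the empty volume is initialized from /etc/lsyncd in the image, so ls shows the files
$ docker run -i -t -P -v dcm_tst:/etc/lsyncd <NAME_OF_THE_IMAGE>
# bind mount: the empty host directory hides the image contents, so ls shows nothing
$ docker run -i -t -P -v /docker/dcm/tst:/etc/lsyncd <NAME_OF_THE_IMAGE>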
