Dockerfile VOLUME instruction: data is lost when mounted as a bind mount - docker

I created a Dockerfile and ran the container with a bind mount, and the contents are lost (no content):
FROM alpine:3.8
RUN mkdir /myvol
RUN echo "hello world" > /myvol/greeting
VOLUME /myvol
docker run -it --name volDemo2 -v $(pwd)/myvol:/myvol voldemo sh
I am expecting "myvol" under $(pwd) to contain "greeting", which is not the case:
root@default:/home/docker# docker run -it --name volDemo2 -v $(pwd)/myvol:/myvol voldemo sh
/ # cd myvol/
/myvol # ls
/myvol #
However, the same image works fine if it is mounted the following way:
docker@default:~$ docker run -it --name volDemo1 -v myvol:/myvol voldemo sh
/ # cd myvol/
/myvol # ls
1.txt greeting
/myvol # exit
Is this the expected behavior? Does the VOLUME instruction work only with named volumes and not bind mounts?

This is how bind mounts work. They mount one folder at another path on the filesystem. All access to the target path gets mapped directly back to the source directory.
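To illustrate, the same effect can be reproduced outside of Docker with a plain bind mount (the /tmp paths below are only an example, not from the question):
# anything already at the target path is hidden while the bind mount is in place
mkdir -p /tmp/src /tmp/dst
touch /tmp/dst/existing-file
sudo mount --bind /tmp/src /tmp/dst
ls /tmp/dst        # shows the (empty) contents of /tmp/src, not existing-file
sudo umount /tmp/dst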
What docker provides for a named volume (your second example) is an initialization step when that named volume is empty on container creation. Docker will copy all files, directories, and metadata such as file owner and permissions from the image filesystem into the named volume before the container is started. This only happens with named volumes, not host mounts or tmpfs mounts. And this only happens when the named volume is empty, so it will not be updated as you change the image.
You can make a named volume that mounts another directory on the host by passing additional options, giving you something between a host mount and a default named volume, since both are ultimately implemented with a bind mount. Three different ways to do that are shown below:
# create the volume in advance
$ docker volume create --driver local \
    --opt type=none \
    --opt device=/home/user/test \
    --opt o=bind \
    test_vol

# create on the fly with --mount
$ docker run -it --rm \
    --mount type=volume,dst=/container/path,volume-driver=local,volume-opt=type=none,volume-opt=o=bind,volume-opt=device=/home/user/test \
    foo

# inside a docker-compose file
...
volumes:
  bind-test:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /home/user/test
...
If your goal is to keep things simple for other users of the image, and potentially update the volume with new versions of the image, then you'll want to do this as part of your entrypoint script. I do this in my volume caching scripts included in my base image. You copy the volume directory to a safe location inside the image, and then on container startup, the entrypoint script will copy the files into the volume.
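A minimal sketch of that approach for the image in the question (the /opt/myvol-default path and the entrypoint.sh name are placeholders of mine, not the actual scripts from that base image):
FROM alpine:3.8
RUN mkdir /myvol && echo "hello world" > /myvol/greeting
# keep a pristine copy inside the image for the entrypoint to restore from
RUN cp -a /myvol /opt/myvol-default
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
VOLUME /myvol
ENTRYPOINT ["/entrypoint.sh"]
CMD ["sh"]
And the entrypoint.sh it copies in:
#!/bin/sh
# seed the (possibly bind-mounted) /myvol from the saved copy if it is empty
if [ -z "$(ls -A /myvol 2>/dev/null)" ]; then
  cp -a /opt/myvol-default/. /myvol/
fi
# hand off to the requested command (sh by default)
exec "$@"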

Related

Docker ADD does not copy the contents of a subfolder

While building my dev environment (Linux Mint 18.3) I had to create a Dockerfile which has the instructions:
FROM centos:7
ENV CATALINA_HOME /opt/tomcat
ADD apache-tomcat-8.5.5 ${CATALINA_HOME}
where the apache-tomcat-8.5.5 is a folder with all the files to deploy tomcat-8.5.5 that can be obtained from:
https://archive.apache.org/dist/tomcat/tomcat-8/v8.5.5/bin/apache-tomcat-8.5.5.tar.gz
ADD copies all the files except the content of ~/apache-tomcat-8.5.5/webapps/. Has anyone had a similar problem? I even changed the permissions to be readable and writable by anyone, yet the problem persists.
I am running this container using
docker run -v /opt/webapi/tomcat/webapps:/opt/tomcat/webapps -v /opt/webapi/tomcat/logs:/opt/tomcat/logs .....
Could this sharing be deleting the contents of webapps? If this is the case, how can I avoid it? I do not have a .dockerignore file.
I am running this container using
docker run -v /opt/webapi/tomcat/webapps:/opt/tomcat/webapps -v /opt/webapi/tomcat/logs:/opt/tomcat/logs .....
You overwrite the webapps directory with the host mount to /opt/webapi/tomcat/webapps. Only files in that directory on the host will be visible inside that container. The files are most likely being copied into the image, but with that volume mount the files inside the image cannot be seen.
If you do not want to replace the directory inside the container with this host directory, then do not create a volume mount. If you want to overwrite the host directory with the contents from the image, you can do this in an entrypoint, or switch to a named volume. A named volume is initialized when it is empty, and you can point it back to any folder on the host by passing options to the volume driver. Here are several examples of a named volume backed by a bind mount:
# create the volume in advance
$ docker volume create --driver local \
    --opt type=none \
    --opt device=/home/user/test \
    --opt o=bind \
    test_vol

# create on the fly with --mount
$ docker run -it --rm \
    --mount type=volume,dst=/container/path,volume-driver=local,volume-opt=type=none,volume-opt=o=bind,volume-opt=device=/home/user/test \
    foo

# inside a docker-compose file
...
volumes:
  bind-test:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /home/user/test
...
For copying the files inside an entrypoint, see the save-volume and load-volume scripts in my docker-base repo. This would give you the flexibility to always overwrite the contents on the host with the saved values in the image. Though if you do this, consider whether you really need a volume, and how you plan to avoid data loss by overwriting user changes.
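As a rough sketch of that always-overwrite idea for the Tomcat image above (the /opt/webapps-default path and the entrypoint.sh name are placeholders of mine, not the actual save-volume/load-volume scripts):
FROM centos:7
ENV CATALINA_HOME /opt/tomcat
ADD apache-tomcat-8.5.5 ${CATALINA_HOME}
# save a pristine copy of webapps inside the image
RUN cp -a ${CATALINA_HOME}/webapps /opt/webapps-default
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["/opt/tomcat/bin/catalina.sh", "run"]
With an entrypoint.sh along these lines:
#!/bin/sh
# always replace the mounted webapps with the copy saved in the image,
# then run whatever command was passed (catalina.sh by default)
cp -a /opt/webapps-default/. "${CATALINA_HOME}"/webapps/
exec "$@"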

How to copy files to host-mounted directory in docker container

I'm trying to copy configuration files into the jenkins/jenkins image with a host-mounted directory.
Part of my Dockerfile:
FROM jenkins/jenkins
COPY file.txt /var/jenkins_home/
I tried to use a volume like this:
-v volume_name:/var/jenkins_home
In this case I do see "file.txt" in Jenkins, but if I use:
-v /folder:/var/jenkins_home
I do not see file.txt in Jenkins at all. So what am I missing here?
Per your question:
... if I use:
-v /folder:/var/jenkins_home
I do not see file.txt in Jenkins at all. So what am I missing here?
Host volumes, sometimes referred to as bind mounts because of their underlying implementation, do not initialize the volume from the image content. Only named volumes provide initialization support from the docker engine. However, it is possible to point a named volume at a bind mount with a different syntax. Here are several examples of different ways to do that:
# create the volume in advance
$ docker volume create --driver local \
    --opt type=none \
    --opt device=/home/user/test \
    --opt o=bind \
    test_vol

# create on the fly with --mount
$ docker run -it --rm \
    --mount type=volume,dst=/container/path,volume-driver=local,volume-opt=type=none,volume-opt=o=bind,volume-opt=device=/home/user/test \
    foo

# inside a docker-compose file
...
volumes:
  bind-test:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /home/user/test
...
In your case, you could do:
docker run -it --rm \
--mount type=volume,dst=/var/jenkins_home,volume-driver=local,volume-opt=type=none,volume-opt=o=bind,volume-opt=device=/folder \
...
To answer what you are actually trying to do:
That said, the Jenkins image defines a volume at /var/jenkins_home, which blocks your ability to extend the image with a RUN command that changes that folder. COPY and ADD just happen to work because they do not create a temporary container. As a workaround, the Jenkins developers use /usr/share/jenkins/ref/ inside the image as a source to initialize the /var/jenkins_home directory. So your Dockerfile should copy your desired files there instead:
FROM jenkins/jenkins
COPY file.txt /usr/share/jenkins/ref/
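For completeness, a sketch of how that pairs with the mounts at runtime (the my-jenkins tag and jenkins_home volume name are placeholders); the Jenkins startup script copies the contents of /usr/share/jenkins/ref/ into /var/jenkins_home, so file.txt shows up with either mount type:
docker build -t my-jenkins .
# with a named volume
docker run -d -p 8080:8080 -v jenkins_home:/var/jenkins_home my-jenkins
# or with the host directory from the question
docker run -d -p 8080:8080 -v /folder:/var/jenkins_home my-jenkins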
Welcome to SO.
In your first scenario you're telling Docker to create a volume (https://docs.docker.com/storage/volumes/) and mount it on /var/jenkins_home. Docker pre-populates the volume with the data that already exists in the Docker image. If the volume already exists, it will be reused.
You can check your volumes by executing:
docker volume ls
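If you want to confirm that the volume was pre-populated, you can also inspect it and list its contents (volume_name is the name from your first scenario):
docker volume inspect volume_name
docker run --rm -v volume_name:/var/jenkins_home busybox ls /var/jenkins_home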
In your second scenario you're not seeing the file because you're bind mounting (https://docs.docker.com/storage/bind-mounts/) a directory from your host (local machine / vm) to the container. All the files that you see under /var/jenkins_home will be the same as in your host directory /folder.
This happens at runtime (when the container is created). If you want to have some default files in your Docker image, you do that at build time with the COPY or ADD instructions, as you're doing; those files are copied into the image when you build it. But if at runtime you specify a bind mount over that directory or file, you are basically replacing them.

Docker Volume empty after it's created

I have a simple Dockerfile
FROM alpine
RUN apk add --no-cache lsyncd
CMD ["ls", "-al", "/etc/lsyncd"]
I build the image and it works fine if I run it like:
docker run -i -t -P <NAME_OF_THE_IMAGE>
and I get the folder listing with the expected files, since the CMD does that.
If I run it like:
docker run -i -t -P -v /docker/dcm/tst:/etc/lsyncd <NAME_OF_THE_IMAGE>
It creates the "/docker/dcm/tst" folder, which is empty, and the ls command also returns nothing.
If my understanding is correct, if the folder "/docker/dcm/tst" on the local machine does not exist, a new one will be created and the contents of the folder "/etc/lsyncd" will be copied into the new folder.
Is my understanding correct? What could be causing the issue that I'm seeing?
uname -a
Linux docker 4.9.53-5.ph2-esx #1-photon SMP Thu Oct 26 02:44:24 UTC 2017 x86_64 Intel(R) Xeon(R) CPU E5-2620 v3 @ 2.40GHz GenuineIntel GNU/Linux
Named volumes and host volumes behave differently. A host volume, aka bind mount, maps the directory into the container exactly as it exists on the host. There is no initialization process.
Named volumes support initializing the contents of the volume when that volume is empty on container startup. The volume will be initialized with the contents of the image at the selected location, including file/directory uid/gid and permissions.
To get a named volume with a host directory, you can define a named volume that is a bind mount using one of the below options:
# create the volume in advance
$ docker volume create --driver local \
    --opt type=none \
    --opt device=/home/user/test \
    --opt o=bind \
    test_vol

# create on the fly with --mount
$ docker run -it --rm \
    --mount type=volume,dst=/container/path,volume-driver=local,volume-opt=type=none,volume-opt=o=bind,volume-opt=device=/home/user/test \
    foo

# inside a docker-compose file
...
volumes:
  bind-test:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /home/user/test
There is one other behavior change that I know of with named volumes that point to a bind mount: docker will not create the host directory if it doesn't exist. Instead, container creation will fail with a volume creation error.
A bind mount means that a file or directory on the host machine is mounted into a container.
Regarding mounting into a non-empty directory on the container, the documentation says the following:
If you bind-mount into a non-empty directory on the container, the directory’s existing contents are obscured by the bind mount.
For details, refer to the bind mounts documentation (https://docs.docker.com/storage/bind-mounts/).
It means that a bind mount will simply obscure the contents of the container directory.
I think you have confused this with a named volume, which copies the contents of the container directory into the volume when it is first created. However, you are using a bind mount, and it doesn't work like that.
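To see the difference side by side with the image from the question (lsyncd_conf is just a placeholder volume name):
# named volume: initialized from the image, so the packaged files appear
docker run -i -t -P -v lsyncd_conf:/etc/lsyncd <NAME_OF_THE_IMAGE>
# bind mount: shows only whatever already exists in /docker/dcm/tst on the host
docker run -i -t -P -v /docker/dcm/tst:/etc/lsyncd <NAME_OF_THE_IMAGE>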

What is the right way to add data to an existing named volume in Docker?

I was using Docker in the old way, with a volume container:
docker run -d --name jenkins-data jenkins:tag echo "data-only container for Jenkins"
But now I changed to the new way by creating a named volume:
docker volume create --name my-jenkins-volume
I bound this new volume to a new Jenkins container.
The only thing I have left is a folder containing the /var/jenkins_home of my previous Jenkins container (obtained by using docker cp).
Now I want to fill my new named volume with the content of that folder.
Can I just copy the content of that folder to /var/lib/jenkins/volume/my-jenkins-volume/_data?
You can certainly copy data directly into /var/lib/docker/volumes/my-jenkins-volume/_data, but by doing this you are:
Relying on physical access to the docker host. This technique won't work if you're interacting with a remote docker api.
Relying on a particular aspect of the volume implementation that could change in the future, breaking any processes you have that rely on it.
I think you are better off relying on things you can accomplish using the docker api, via the command line client. The easiest solution is probably just to use a helper container, something like:
docker run -v my-jenkins-volume:/data --name helper busybox true
docker cp . helper:/data
docker rm helper
You don't need to start a container to add data to an already existing named volume; just create a container and copy the data there:
docker container create --name temp -v my-jenkins-volume:/data busybox
docker cp . temp:/data
docker rm temp
You can reduce the accepted answer to one line, e.g.:
docker run --rm -v `pwd`:/src -v my-jenkins-volume:/data busybox cp -r /src/. /data
Here are the steps for copying the contents of ~/data to a Docker volume named my-vol.
Step 1. Attach the volume to a "temporary" container. For that, run this command in a terminal:
docker run --rm -it --name alpine --mount type=volume,source=my-vol,target=/data alpine
Step 2. Copy the contents of ~/data into my-vol. For that, run these commands in a new terminal window:
cd ~/data
docker cp . alpine:/data
This will copy the contents of ~/data into the my-vol volume. After the copy, exit the temporary container.
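To double-check the result, you can list the volume's contents afterwards (same my-vol name as in the steps above):
docker run --rm -v my-vol:/data alpine ls -la /data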
You can add this Bash function to your .bashrc to copy files to an existing Docker volume without running a container:
# Usage: copy-to-docker-volume SRC_PATH DEST_VOLUME_NAME [DEST_PATH]
copy-to-docker-volume() {
SRC_PATH=$1
DEST_VOLUME_NAME=$2
DEST_PATH="${3:-}"
# create smallest Docker image possible
echo -e 'FROM scratch\nLABEL empty=""' | docker build -t empty -
# create temporary container to be able to mount volume
CONTAINER_ID=$(docker container create -v "${DEST_VOLUME_NAME}":/data empty cmd)
# copy files to volume
docker cp "${SRC_PATH}" "${CONTAINER_ID}":"/data/${DEST_PATH}"
# remove temporary container
docker rm "${CONTAINER_ID}"
}
Example
# create volume as destination
docker volume create my-volume
# create directory to copy
mkdir my-dir
echo "hello file1" > my-dir/my-file-1
# copy directory to volume
copy-to-docker-volume my-dir my-volume
# list directory on volume
docker run --rm -it -v my-volume:/data busybox ls -la /data/my-dir
# show file content on volume
docker run --rm -it -v my-volume:/data busybox cat /data/my-dir/my-file-1
# create another file to copy
echo "hello file2" > my-file-2
# copy file to directory on volume
copy-to-docker-volume my-file-2 my-volume my-dir
# list (updated) directory on volume
docker run --rm -it -v my-volume:/data busybox ls -la /data/my-dir
# check volume content
docker run --rm -it -v my-volume:/data busybox cat /data/my-dir/my-file-2
If you don't want to create a container and you can access the Docker host as a privileged user, you can simply do this (on Linux systems):
docker volume create my_named_volume
sudo cp -rp . /var/lib/docker/volumes/my_named_volume/_data/
Furthermore, this also allows you to access the data while containers are running or stopped.
If you don't want to create a temporary helper container on Windows Docker Desktop (backed by WSL2), copy the files to the location below:
\\wsl$\docker-desktop-data\version-pack-data\community\docker\volumes\my-volume\_data
Here my-volume is the name of your named volume. Browse the above path from the address bar in File Explorer; it is an internal network share created by WSL on Windows.
Note: it might be better to use the Docker API as mentioned by larsks, but I have not faced any issues on Windows.
Similarly, on Linux, files can be copied to:
/var/lib/docker/volumes/my-volume/_data/

Mount volume to host

I am currently using Boot2Docker on Windows. Is it possible to mount root to host?
Say that I'm using an Ubuntu image and I would like to mount / to the host. How can I do so?
I've been looking around and trying:
docker run -v /c/Users/ubuntu:/ --name ubuntu -dt ubuntu
But I ended up with an error:
docker: Error response from daemon: Invalid bind mount spec "/c/Users/ubuntu:/": volumeslash: Invalid specification: destination can't be '/' in '/c/Users/Leon/ubuntu:/'.
If I understand correctly, you are trying to mount the container's root directory as a volume? If that is the case, create a new directory inside the container and expose that one instead.
For example, in the Dockerfile:
RUN mkdir /something
VOLUME /something
As the Docker documentation says, the container directory must always be an absolute path such as /src/docs. The host-dir can either be an absolute path or a name value.
For more information, read https://docs.docker.com/engine/userguide/containers/dockervolumes/#mount-a-host-directory-as-a-data-volume; the section "Mount a host directory as a data volume" should give you a better understanding.
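With an image built from a Dockerfile like that, the run command from the question would then look something like this (my-image is a placeholder tag):
docker run -v /c/Users/ubuntu:/something --name ubuntu -dt my-image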
It's a problem with how you are specifying the path. See this example of mounting local folders to be used by a MongoDB container:
docker run --name container-name -v /Users/SKausha3/mongo/imageservicedb/data:/data -v /Users/SKausha3/mongo/imageservicedb/backup:/backup
c:/Users/SKausha3/mongo/imageservicedb/data is my local folder, but you have to remove 'c:' from the path.
Since you can't mount "/", one option is to add a WORKDIR to your Dockerfile; that way all subsequent commands will be relative to that directory and you won't have to modify anything!
FROM python:latest
WORKDIR /myapp
COPY appfile.py appfile.py
In your docker image, the "appfile.py" file will be at the /myapp/appfile.py location.
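You could then bind mount the host folder onto that working directory instead of / (my-image is a placeholder tag); note that whatever is in the host folder will shadow the appfile.py baked into the image:
docker run -v /c/Users/ubuntu:/myapp -it my-image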
You cannot mount the '/' root directory of the container, but you can mount each of the folders present in the root directory into Docker volumes.
Create the volumes by running these commands one by one, or put them in a bash script:
docker volume create var
docker volume create usr
docker volume create tmp
docker volume create sys
docker volume create srv
docker volume create sbin
docker volume create run
docker volume create root
docker volume create proc
docker volume create opt
docker volume create mnt
docker volume create media
docker volume create libx32
docker volume create lib64
docker volume create lib32
docker volume create lib
docker volume create home
docker volume create etc
docker volume create dev
docker volume create boot
docker volume create bin
Then run this command:
docker run -it -d \
--name=ubuntu-container \
--mount source=var,destination=/var \
--mount source=usr,destination=/usr \
--mount source=tmp,destination=/tmp \
--mount source=sys,destination=/sys \
--mount source=srv,destination=/srv \
--mount source=sbin,destination=/sbin \
--mount source=run,destination=/run \
--mount source=root,destination=/root \
--mount source=opt,destination=/opt \
--mount source=mnt,destination=/mnt \
--mount source=media,destination=/media \
--mount source=libx32,destination=/libx32 \
--mount source=lib64,destination=/lib64 \
--mount source=lib32,destination=/lib32 \
--mount source=lib,destination=/lib \
--mount source=home,destination=/home \
--mount source=etc,destination=/etc \
--mount source=boot,destination=/boot \
--mount source=bin,destination=/bin \
ubuntu:latest
