docker-compose bind mount volume file propagation to host

I've read in a tutorial that when you create a docker-compose.yml file and bind mount volumes, if you don't create the folder on your host, the folder will be created automatically when you run docker-compose up and populated with the contents of the container's folder.
Here is the quote:
Then you should volume bind two folders. /etc/nginx is where all your configuration files are stored, and /etc/ssl/private is where your SSL certificates are stored. It is VERY important that your config folder does NOT exist on your host first time you’re starting the container. When you start your container through docker-compose, it will automatically create the folder and populate it with the contents of the container. If you have created an empty config folder on your host, it will mount that, and the folder inside the container will be empty.
But it doesn't seem to work for me.
Here are a few things I've checked.
I am not running Docker commands as root: I created the docker group on my machine and added my user to it, so I don't need to run sudo docker <command>
I run Ubuntu Server 18.04 LTS
It doesn't matter whether I bind mount the volumes read-only or not
When running docker-compose up it creates the folders, but they are owned by the root user
The created folders owned by the root user are empty
Here is my docker-compose.yml
version: '3'
services:
  nginx:
    image: nginx
    container_name: reverse_nginx
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - "./html:/usr/share/nginx/html"
      - "./conf:/etc/nginx"
      - "./ssl:/etc/ssl/private"
    restart: unless-stopped
And here is what is created:
icare@icare:~/nginx
$ ls -lR
.:
total 16
drwxr-xr-x 2 root root 4096 Mar 20 15:46 conf
-rw-r--r-- 1 icare icare 269 Mar 20 15:27 docker-compose.yml
drwxr-xr-x 2 root root 4096 Mar 20 15:46 html
drwxr-xr-x 2 root root 4096 Mar 20 15:46 ssl
./conf:
total 0
./html:
total 0
./ssl:
total 0
I can't find a solution online, and it seems one person asked on GitHub and solved it the hard way (copying the files out of the container, then bind mounting everything; a sketch of that workaround follows), but I can't help thinking there must be another way. Or maybe the tutorial I'm following is outdated or wrong?
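For reference, a hedged sketch of that "hard way" workaround, using the image and paths from the compose file above (the temporary container name is only an illustration):

# create a stopped container from the image, copy its defaults out, then remove it
docker create --name nginx-tmp nginx
docker cp nginx-tmp:/etc/nginx ./conf
docker cp nginx-tmp:/usr/share/nginx/html ./html
docker rm nginx-tmp

After this one-time copy, the bind mounts in the compose file pick up the populated host folders.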

Related

Docker-compose volumes on MacOS Ventura (13.1) are all empty

I'm running into an issue on macOS Ventura whereby all bind mounts created with docker-compose, where I link a directory on my host machine to one in the container, are empty. I've tested the same scripts on macOS 12.4 and 12.6 and they work as expected, giving me the same directory contents in the container as on the host, so it seems v13 changed some permission.
The docker-compose.yml file:
version: "3"
services:
bash:
image: ubuntu:latest
stdin_open: true
tty: true
volumes:
- ./:/app
command: "/bin/bash"
So this should create a directory in the container called /app and link it to the host directory the compose file is in.
But when I start the container:
❯ docker-compose up --build
[+] Running 1/0
⠿ Container ruby-docker-bash-1 Created 0.0s
Attaching to ruby-docker-bash-1
And when I log in, the /app directory is empty:
❯ docker exec -it ruby-docker-bash-1 /bin/bash
root@9644de175d48:/# cd app/
root@9644de175d48:/app# ls -la
total 4
drwxr-xr-x 2 root root 40 Feb 1 09:59 .
drwxr-xr-x 1 root root 4096 Feb 1 09:39 ..
The total 4 is really odd here, as there are 4 files that are supposed to be there but are not accessible:
root@9644de175d48:/app# cat Gemfile
cat: Gemfile: No such file or directory
These are the directory contents on the host:
❯ ls -la
.rw-r--r-- 2.7k paul 1 Feb 09:31 Dockerfile
.rw-r--r-- 3.9k paul 1 Feb 09:32 Gemfile
.rw-r--r-- 27k paul 1 Feb 09:32 Gemfile.lock
.rw-r--r-- 149 paul 1 Feb 10:00 docker-compose.yml
If anyone has any experience with what might be going wrong or how I can get past this absolute time-sink of an issue, I'd really appreciate it.
Thank you!
I figured it out. I use Colima on macOS, since Docker doesn't provide a macOS VM of its own.
Then I found this comment on a Colima repo issue, https://github.com/abiosoft/colima/issues/500#issuecomment-1343103477, where a user had mentioned they weren't able to sync directories.
To fix volumes on macOS using Colima, I did the following:
colima delete # reset
colima start --mount-type 9p
This doesn't seem to be documented anywhere; I've been through the site and the README.
But I did find this line of code inside of the Colima repo:
validMountTypes := map[string]bool{"9p": true, "sshfs": true}
if util.MacOS13OrNewer() {
    validMountTypes["virtiofs"] = true
}
I'm on macOS 13, so it seems there are issues with virtiofs that the older 9p mount type doesn't have.
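To verify the fix, one could bring the stack back up and list the bind mount again (a minimal check, reusing the container name from the output above):

❯ docker-compose up -d
❯ docker exec -it ruby-docker-bash-1 ls -la /app

With the 9p mount type, the host files should now show up under /app.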

tomcat docker mounted volume becomes empty

I have a docker-compose.yml which looks like this:
version: '3'
services:
  tomcat:
    container_name: tomcat8
    restart: always
    image: tomcat:8-jdk8
    ports:
      - 80:8080
    volumes:
      - /var/docker/myservice/tomcat/data/webapps:/usr/local/tomcat/webapps:Z
I want to mount the tomcat/webapps folder inside the container to the host so that I don't have to enter the docker container to modify the applications.
However, when this container starts up, the /usr/local/tomcat/webapps folder becomes empty. The ROOT/, docs/, examples/, host-manager/, manager/ folders that should have been created when tomcat starts up are all gone.
I originally thought this was because the container does not have permission to write to the volume on the host machine, but I've followed this post's instructions and added a :Z at the end of the volume.
What's wrong with my configuration? Why does /usr/local/tomcat/webapps folder inside the container become empty?
Is there any way to let the data in /usr/local/tomcat/webapps in the container to overwrite the data in /var/docker/myservice/tomcat/data/webapps in the host machine?
For tomcat:8-jdk8, we can see the following:
$ docker inspect tomcat:8-jdk8 | grep Entrypoint
"Entrypoint": null,
"Entrypoint": null,
Also, see tomcat:8-jdk8 Dockerfile:
CMD ["catalina.sh", "run"]
To sum up, the only startup script for the container is catalina.sh, so if we override it like this:
$ docker run -it --rm tomcat:8-jdk8 ls /usr/local/tomcat/webapps
ROOT docs examples host-manager manager
We can see that even though we did not run any startup script like catalina.sh, ROOT, docs, etc. are still in /usr/local/tomcat/webapps.
This means those folders are baked into the image tomcat:8-jdk8, not dynamically generated by catalina.sh. So when you use /var/docker/myservice/tomcat/data/webapps:/usr/local/tomcat/webapps as a bind mount, the empty folder on the host simply hides everything in the container folder /usr/local/tomcat/webapps, which is why you see an empty folder in the container.
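A quick way to reproduce this (a hedged illustration; the empty directory is just an example):

$ mkdir empty
$ docker run --rm -v "$PWD/empty:/usr/local/tomcat/webapps" tomcat:8-jdk8 ls /usr/local/tomcat/webapps

The ls prints nothing: the empty bind mount hides the content baked into the image.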
UPDATE:
Is there any way to let the data in /usr/local/tomcat/webapps in the container to overwrite the data in /var/docker/myservice/tomcat/data/webapps in the host machine?
The closest solution is to use a named volume:
docker-compose.yaml:
version: '3'
services:
  tomcat:
    container_name: tomcat8
    restart: always
    image: tomcat:8-jdk8
    ports:
      - 80:8080
    volumes:
      - my-data:/usr/local/tomcat/webapps:Z
volumes:
  my-data:
Then see the next command list (NOTE: in 99_my-data, 99 is the name of the folder where you store your docker-compose.yaml):
shubuntu1@shubuntu1:~/99$ docker-compose up -d
Creating tomcat8 ... done
shubuntu1@shubuntu1:~/99$ docker volume inspect 99_my-data
[
    {
        "CreatedAt": "2019-07-15T15:09:32+08:00",
        "Driver": "local",
        "Labels": {
            "com.docker.compose.project": "99",
            "com.docker.compose.version": "1.24.0",
            "com.docker.compose.volume": "my-data"
        },
        "Mountpoint": "/var/lib/docker/volumes/99_my-data/_data",
        "Name": "99_my-data",
        "Options": null,
        "Scope": "local"
    }
]
shubuntu1@shubuntu1:~/99$ sudo -s -H
root@shubuntu1:/home/shubuntu1/99# cd /var/lib/docker/volumes/99_my-data/_data
root@shubuntu1:/var/lib/docker/volumes/99_my-data/_data# ls -alh
total 28K
drwxr-xr-x 7 root root 4.0K Jul 15 15:09 .
drwxr-xr-x 3 root root 4.0K Jul 15 15:09 ..
drwxr-xr-x 14 root root 4.0K Jul 15 15:09 docs
drwxr-xr-x 6 root root 4.0K Jul 15 15:09 examples
drwxr-xr-x 5 root root 4.0K Jul 15 15:09 host-manager
drwxr-xr-x 5 root root 4.0K Jul 15 15:09 manager
drwxr-xr-x 3 root root 4.0K Jul 15 15:09 ROOT
This is the closest way to get the contents out to the host.
Another solution: mount /var/docker/myservice/tomcat/data/webapps to a different container folder rather than /usr/local/tomcat/webapps, e.g. /tmp/abc, then customize your CMD to copy the contents of /usr/local/tomcat/webapps to /tmp/abc; your host's /var/docker/myservice/tomcat/data/webapps will then see them too, as sketched below.
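A hedged sketch of that second approach (the no-clobber cp flags and the command override are my assumptions, not spelled out above):

version: '3'
services:
  tomcat:
    container_name: tomcat8
    restart: always
    image: tomcat:8-jdk8
    ports:
      - 80:8080
    volumes:
      - /var/docker/myservice/tomcat/data/webapps:/tmp/abc
    # copy the image's webapps into the bind mount (no-clobber, so host edits
    # survive restarts), then start Tomcat as usual
    command: bash -c "cp -rn /usr/local/tomcat/webapps/. /tmp/abc/ && catalina.sh run"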

Permission denied in mounted volume on Docker with SELinux

I've tried to mount a folder using the following docker compose file (partially reproduced; the rest isn't relevant):
version: '3'
services:
  web:
    build: .
    environment:
      - DEBUG=0
    volumes:
      - /usr/share/nginx/html/assets:/assets:Z
However, aside from cd-ing into the folder /assets in the docker container, I get the following error for other operations in the folder (including chmod and chcon):
ls: cannot open directory '.': Permission denied
The folder UID and GID are 0 (i.e. root) and the UID of the bash in docker is also 0.
However, by removing the Z flag, the docker container is able to read content off the volume, but not write into it.
Here is the output of ls -laZ with the Z flag on:
drwx------. 2 root root system_u:object_r:container_var_run_t:s0 160 Jan 27 15:33 assets
and here is without Z flag:
drwxr-xr-x. 4 root root unconfined_u:object_r:httpd_sys_content_t:s0 72 Jan 23 09:01 assets
It seems that with the Z flag my group and other permissions disappear, but that shouldn't matter because the UID is the same, right?
My question is, how can I get write access to the mounted directory in the docker container?
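One commonly suggested fix, not from the original post, is to relabel the host directory with the container_file_t SELinux type (the type that the :Z option normally applies), so that confined containers can both read and write it:

# relabel recursively; on older systems the equivalent type is svirt_sandbox_file_t
sudo chcon -Rt container_file_t /usr/share/nginx/html/assets

After relabeling, the volume can be mounted without the Z flag, since the label is already container-accessible.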

Permission issues in nexus3 docker container

When I start nexus3 in a docker container I get the following error messages.
$ docker run --rm sonatype/nexus3:3.8.0
Warning: Cannot open log file: ../sonatype-work/nexus3/log/jvm.log
Warning: Forcing option -XX:LogFile=/tmp/jvm.log
Java HotSpot(TM) 64-Bit Server VM warning: Cannot open file ../sonatype-work/nexus3/log/jvm.log due to Permission denied
Unable to update instance pid: Unable to create directory /nexus-data/instances
/nexus-data/log/karaf.log (Permission denied)
Unable to update instance pid: Unable to create directory /nexus-data/instances
It indicates that there is a file permission issue.
I am using Red Hat Enterprise Linux 7.5 as the host machine and the most recent Docker version.
On another machine (Ubuntu) it works fine.
The issue occurs in the persistent volume (/nexus-data). However, I do not mount a specific volume and let Docker use an anonymous one.
If I compare the volumes on both machines I can see the following permissions:
On Red Hat, where it is not working, it belongs to root.
$ docker run --rm sonatype/nexus3:3.8.0 ls -l /nexus-data
total 0
drwxr-xr-x. 2 root root 6 Mar 1 00:07 etc
drwxr-xr-x. 2 root root 6 Mar 1 00:07 log
drwxr-xr-x. 2 root root 6 Mar 1 00:07 tmp
On Ubuntu, where it is working, it belongs to nexus, which is also the default user in the container.
$ docker run --rm sonatype/nexus3:3.8.0 ls -l /nexus-data
total 12
drwxr-xr-x 2 nexus nexus 4096 Mar 1 00:07 etc
drwxr-xr-x 2 nexus nexus 4096 Mar 1 00:07 log
drwxr-xr-x 2 nexus nexus 4096 Mar 1 00:07 tmp
Changing the user with the -u option is not an option.
I could solve it by deleting all local Docker images: docker image prune -a
Afterwards, the image was downloaded again and it worked.
This is strange, because I had also compared the fingerprints of the images and they were identical.
An example of docker-compose for Nexus:
version: "3"
services:
#Nexus
nexus:
image: sonatype/nexus3:3.39.0
expose:
- "8081"
- "8082"
- "8083"
ports:
# UI
- "8081:8081"
# repositories http
- "8082:8082"
- "8083:8083"
# repositories https
#- "8182:8182"
#- "8183:8183"
environment:
- VIRTUAL_PORT=8081
volumes:
- "./nexus/data/nexus-data:/nexus-data"
Set up the volume:
mkdir -p ./nexus/data/nexus-data
sudo chown -R 200 nexus/ # 200 because it's the UID of the nexus user inside the container
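To double-check that UID, one can ask the image itself (a quick hedged verification):

# should report uid=200(nexus) if the comment above is accurate
docker run --rm sonatype/nexus3:3.39.0 id nexus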
Start Nexus
sudo docker-compose up -d
hf
You should give the correct rights to the folder where the persistent volume is located:
chmod -R u+rwx <folder of /nexus-data volumes>
Be careful: this grants read, write, and execute rights to the folder's owner. If you broaden the mode (for example to group or others), you give those rights to more users, so adjust the command if you want more restrictive rights.

Recreate container on stop with docker-compose

I am trying to set up a multi-container service with docker-compose.
Some of the containers need to be restarted from a fresh container (eg. the file system should be like in the image) when they restart.
How can I achieve this?
I've found the restart: always option I can put on my service in the docker-compose.yml file, but that doesn't give me a fresh file system as it uses the same container.
I've also seen the --force-recreate option of docker-compose up, but that doesn't apply, as it only recreates the containers when the command is run.
EDIT:
This is probably not a docker-compose issue, but more of a general Docker question: what is the best way to make sure a container is in a fresh state when it is restarted? By fresh state, I mean a state identical to that of a brand-new container from the same image. By restarted, I mean the docker command docker restart, or docker stop followed by docker start.
In Docker, immutability typically refers to the image layers. They are immutable, and any changes are pushed to a container-specific copy-on-write layer of the filesystem. That container-specific layer lasts for the lifetime of the container. So to have those files not persist, you have two options:
Recreate the container instead of just restart it
Don't write the changes to the container filesystem, and don't write them to any persistent volumes.
You cannot do #1 with a restart policy, by its very definition: a restart policy gives you the same container filesystem, with the application restarted. But if you use Docker's swarm mode, it will recreate containers when they exit, so if you can migrate to swarm mode, you can achieve this result, as the sketch below illustrates.
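As a minimal illustration of that swarm-mode behavior (the image and service name are placeholders):

# one-node swarm for demonstration purposes
docker swarm init
# the default restart condition ("any") replaces the container each time it exits
docker service create --name fresh-demo busybox sleep 10
# watch swarm spin up a brand-new container after each exit
docker service ps fresh-demo

Each replacement task starts from a fresh copy-on-write layer, which is exactly the fresh-state behavior asked about.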
Option #2 looks more difficult than it is. If you aren't writing to the container filesystem, or to a volume, then where? The answer is a tmpfs volume that is only stored in memory and is lost as soon as the container exits. In compose, this is a tmpfs: /data/dir/to/not/persist line. Here's an example on the docker command line.
First, let's create a container with a tmpfs mounted at /data, add some content, and exit the container:
$ docker run -it --tmpfs /data --name no-persist busybox /bin/sh
/ # ls -al /data
total 4
drwxrwxrwt 2 root root 40 Apr 7 21:50 .
drwxr-xr-x 1 root root 4096 Apr 7 21:50 ..
/ # echo 'do not save' >>/data/tmp-data.txt
/ # cat /data/tmp-data.txt
do not save
/ # ls -al /data
total 8
drwxrwxrwt 2 root root 60 Apr 7 21:51 .
drwxr-xr-x 1 root root 4096 Apr 7 21:50 ..
-rw-r--r-- 1 root root 12 Apr 7 21:51 tmp-data.txt
/ # exit
Easy enough; it behaves like a normal container. Let's restart it and check the directory contents:
$ docker restart no-persist
no-persist
$ docker attach no-persist
/ # ls -al /data
total 4
drwxr-xr-x 2 root root 40 Apr 7 21:51 .
drwxr-xr-x 1 root root 4096 Apr 7 21:50 ..
/ # echo 'still do not save' >>/data/do-not-save.txt
/ # ls -al /data
total 8
drwxr-xr-x 2 root root 60 Apr 7 21:52 .
drwxr-xr-x 1 root root 4096 Apr 7 21:50 ..
-rw-r--r-- 1 root root 18 Apr 7 21:52 do-not-save.txt
/ # exit
As you can see, the directory returned empty, and we can add data as needed back to the directory. The only downside of this is the directory will be empty even if you have content in the image at that location. I've tried combinations of named volumes, or using the mount syntax and passing the volume-nocopy option to 0, without luck. So if you need the directory to be initialized, you'll need to do that as part of your container entrypoint/cmd by copying from another location.
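For completeness, the compose equivalent of that tmpfs mount might look like this (a minimal sketch; the image, service name, and command are placeholders):

version: '3'
services:
  no-persist:
    image: busybox
    command: sleep 3600
    tmpfs:
      - /data

Anything written under /data lives only in memory and disappears when the container exits.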
In order not to persist any changes to your containers, it is enough that you don't map any directory from the host into the container.
That way, every time the container runs (with docker run or docker-compose up), it starts with a fresh file system.
docker-compose down also removes the containers, deleting any data.
The best solution I have found so far is for the container itself to clean up when starting or stopping. I solve this by cleaning up at start.
I copy my app files to /srv/template with the docker COPY directive in my Dockerfile, and have something like this in my ENTRYPOINT script:
rm -rf /srv/server/
cp -r /srv/template /srv/server
cd /srv/server
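Fleshed out, such an entrypoint might look like this (a sketch; the shebang and the final exec handoff are my assumptions about the rest of the script):

#!/bin/sh
# reset the app directory from the pristine template baked into the image
rm -rf /srv/server
cp -r /srv/template /srv/server
cd /srv/server
# hand control to the container's main command
exec "$@"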
