Docker Compose volumes on macOS Ventura (13.1) are all empty

I'm running into an issue on macOS Ventura whereby all bind mounts - where I link a directory on my host machine to one in the container - created with docker-compose are empty. I've tested the same scripts on macOS 12.4 and 12.6 and they work as expected, giving me the same directory contents in the container as on the host, so it seems macOS 13 changed something around permissions.
The docker-compose.yml file:
version: "3"
services:
  bash:
    image: ubuntu:latest
    stdin_open: true
    tty: true
    volumes:
      - ./:/app
    command: "/bin/bash"
So this should create a directory in the container called /app and link it to the host directory the compose file is in.
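For reference, the equivalent bind mount without Compose, which is useful for ruling out docker-compose itself, would be something like:
docker run -it --rm -v "$(pwd)":/app ubuntu:latest /bin/bash
# inside the container:
ls -la /app    # should list the host directory's contents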
But when I start the container:
❯ docker-compose up --build
[+] Running 1/0
⠿ Container ruby-docker-bash-1 Created 0.0s
Attaching to ruby-docker-bash-1
And log in, the /app directory is empty:
❯ docker exec -it ruby-docker-bash-1 /bin/bash
root@9644de175d48:/# cd app/
root@9644de175d48:/app# ls -la
total 4
drwxr-xr-x 2 root root 40 Feb 1 09:59 .
drwxr-xr-x 1 root root 4096 Feb 1 09:39 ..
The total 4 is really weird here, as there are supposed to be 4 files there, but they're not accessible:
root@9644de175d48:/app# cat Gemfile
cat: Gemfile: No such file or directory
These are the directory contents on the host:
❯ ls -la
.rw-r--r-- 2.7k paul 1 Feb 09:31 Dockerfile
.rw-r--r-- 3.9k paul 1 Feb 09:32 Gemfile
.rw-r--r-- 27k paul 1 Feb 09:32 Gemfile.lock
.rw-r--r-- 149 paul 1 Feb 10:00 docker-compose.yml
If anyone has any experience with what might be going wrong or how I can get past this absolute time-sink of an issue, I'd really appreciate it.
Thank you!

I figured it out. I use Colima on macOS to run the Docker daemon, since Docker provides no native macOS VM and containers need a Linux VM.
Then I found this comment on a Colima repo issue, https://github.com/abiosoft/colima/issues/500#issuecomment-1343103477, where a user mentioned they weren't able to sync directories.
To fix volumes on macOS using Colima, I did the following:
colima delete # reset (destroys the existing VM and its data)
colima start --mount-type 9p
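To confirm the fix, reusing the compose setup from above:
docker-compose up -d
docker exec -it ruby-docker-bash-1 ls -la /app    # the host files should now be listed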
This doesn't seem to be documented anywhere; I've been through the site and the readme.
But I did find this line of code inside of the Colima repo:
validMountTypes := map[string]bool{"9p": true, "sshfs": true}
if util.MacOS13OrNewer() {
    validMountTypes["virtiofs"] = true
}
I'm on macOS 13, so it seems the issue is with virtiofs and not with the older 9p mount type.
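To make the setting stick without passing the flag each time, recent Colima versions let you edit the profile config before starting (the mountType key below is an assumption based on the config format at the time of writing):
colima start --edit
# in the YAML that opens, set:
#   mountType: 9p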

Related

docker-compose bind mount volume file propagation to host

I've read in this tutorial that when you create a docker-compose.yml file and bind mount volumes, if you don't create the folder on your host, then when running docker-compose up the folder will be automatically created and populated with the contents of the container's folder.
Here is the quote:
Then you should volume bind two folders. /etc/nginx is where all your configuration files are stored, and /etc/ssl/private is where your SSL certificates are stored. It is VERY important that your config folder does NOT exist on your host first time you’re starting the container. When you start your container through docker-compose, it will automatically create the folder and populate it with the contents of the container. If you have created an empty config folder on your host, it will mount that, and the folder inside the container will be empty.
But it doesn't seem to work for me.
Here are a few things I've checked.
My docker is not running as root. I created the docker group on my machine and added my user to it, so I don't need to run sudo docker <command>.
I run Ubuntu Server 18 LTS.
It doesn't matter whether I bind mount the volumes as read-only or not.
When running docker-compose up, it creates the folders, but they are owned by the root user.
The created folders, owned by the root user, are empty.
Here is my docker-compose.yml
version: '3'
services:
  nginx:
    image: nginx
    container_name: reverse_nginx
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - "./html:/usr/share/nginx/html"
      - "./conf:/etc/nginx"
      - "./ssl:/etc/ssl/private"
    restart: unless-stopped
And here is what is created:
icare@icare:~/nginx
$ ls -lR
.:
total 16
drwxr-xr-x 2 root root 4096 Mar 20 15:46 conf
-rw-r--r-- 1 icare icare 269 Mar 20 15:27 docker-compose.yml
drwxr-xr-x 2 root root 4096 Mar 20 15:46 html
drwxr-xr-x 2 root root 4096 Mar 20 15:46 ssl
./conf:
total 0
./html:
total 0
./ssl:
total 0
I can't find a solution online, and it seems a person who asked on GitHub solved it the hard way (copying files out of the container, then bind-mounting everything), but I can't help thinking there is another way. Or maybe the tutorial I'm following is outdated or wrong?
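For what it's worth, the populate-on-first-use behaviour the tutorial describes is what Docker does for named volumes, not for host bind mounts; a minimal sketch of the named-volume variant (the volume name nginx-conf is made up for illustration):
version: '3'
services:
  nginx:
    image: nginx
    volumes:
      # a named volume is seeded from the image's /etc/nginx on first use;
      # a host bind mount like ./conf:/etc/nginx is not
      - nginx-conf:/etc/nginx
volumes:
  nginx-conf: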

How to do the hello_world example from GitHub:linuxkit/linuxkit?

Situation and Problem
I am trying to follow this guide on "how to make your own linuxkit with docker for mac", where you can add some kernel modules usually not present in docker images.
After a lot of reading and testing I am failing to do the simplest (one would think) test case in the repository:
linuxkit/test/cases/020_kernel/011_kmod_4.9.x/
https://github.com/linuxkit/linuxkit/tree/master/test/cases/020_kernel/011_kmod_4.9.x
Checking the container for the Linux kernel version and config:
... host$ docker run -it --rm -v /:/host -v $(pwd):/macos alpine:latest
Unable to find image 'alpine:latest' locally
latest: Pulling from library/alpine
bdf0201b3a05: Pull complete
Digest: sha256:28ef97b8686a0b5399129e9b763d5b7e5ff03576aa5580d6f4182a49c5fe1913
Status: Downloaded newer image for alpine:latest
/ # uname -a
Linux 029b8e5ada75 4.9.125-linuxkit #1 SMP Fri Sep 7 08:20:28 UTC 2018 x86_64 Linux
/ # cp /host/proc/config.gz /macos/
/ # exit
I went back in the GitHub history to find the hash for my local linuxkit kernel version and modified the Dockerfile of that example (or basically used the old one).
So far so good. The problem is, that if I try to do anything related to kernel modules (modinfo, modprobe, depmod, insmod), I will get errors like these:
modinfo: can't open '/lib/modules/4.9.125-linuxkit/modules.dep': No such file or directory
This is because that path simply does not exist in the container (there is not even a modules folder). That is also true if I check - as above - just in alpine:latest. So there doesn't seem to be any magic happening in that Dockerfile.
Question
Now I am completely puzzled and left stranded on what to do, hence my question ...
How to do the hello_world example from linuxkit/linuxkit ?
Additional notes
The docs of the linuxkit-repository do not mention anything about that problem:
https://github.com/linuxkit/linuxkit/blob/master/docs/kernels.md#compiling-external-kernel-modules
For easy testing I am using docker-compose:
# build with
# docker-compose build
version: '3'
services:
  linux-builder:
    image: my_linux_kit
    build:
      context: .
      dockerfile: my_linux_kit.dockerfile
      # args:
      #   buildno: 1
    privileged: true
And I even tricked it (by inserting things by hand) into not showing any errors, but it still isn't doing what I suppose the code should do:
... host$ docker exec -it 7a33fad37914 sh
/ # ls
bin dev hello_world.ko lib mnt root sbin sys usr
check.sh etc home media proc run srv tmp var
/ # /bin/busybox insmod hello_world.ko
/ #
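One thing that might be worth trying (a sketch, and only an assumption, since the LinuxKit VM is not guaranteed to ship a module tree at all) is to bind-mount the host VM's /lib/modules into the container so the module tools can find modules.dep:
docker run -it --rm -v /lib/modules:/lib/modules:ro alpine:latest sh
/ # ls /lib/modules/$(uname -r)    # modules.dep and friends, if the VM provides them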

Permission denied in mounted volume on Docker with SELinux

I've tried to mount a folder using the following docker compose file (partially reproduced; the rest isn't relevant):
version: '3'
services:
  web:
    build: .
    environment:
      - DEBUG=0
    volumes:
      - /usr/share/nginx/html/assets:/assets:Z
However, aside from cd-ing into the folder /assets in the docker container, I get the following error for any other operation in the folder (including chmod and chcon):
ls: cannot open directory '.': Permission denied
The folder UID and GID are 0 (i.e. root) and the UID of the bash in docker is also 0.
However, by removing the Z flag, the docker container is able to read content off the volume, but not write into it.
Here is the output of ls -laZ with the Z flag on:
drwx------. 2 root root system_u:object_r:container_var_run_t:s0 160 Jan 27 15:33 assets
and here is without Z flag:
drwxr-xr-x. 4 root root unconfined_u:object_r:httpd_sys_content_t:s0 72 Jan 23 09:01 assets
It seems that with the Z flag, my group and others permissions disappear, but that shouldn't matter because the UID is the same, right?
My question is, how can I get write access to the mounted directory in the docker container?
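A hedged sketch of one thing to check from the host side (container_file_t is the SELinux type that container-writable content normally carries; the path is taken from the compose file above):
# run on the host, not inside the container
sudo chcon -Rt container_file_t /usr/share/nginx/html/assets
ls -laZ /usr/share/nginx/html/assets    # should now show container_file_t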

Permission issues in nexus3 docker container

When I start nexus3 in a docker container I get the following error messages.
$ docker run --rm sonatype/nexus3:3.8.0
Warning: Cannot open log file: ../sonatype-work/nexus3/log/jvm.log
Warning: Forcing option -XX:LogFile=/tmp/jvm.log
Java HotSpot(TM) 64-Bit Server VM warning: Cannot open file ../sonatype-work/nexus3/log/jvm.log due to Permission denied
Unable to update instance pid: Unable to create directory /nexus-data/instances
/nexus-data/log/karaf.log (Permission denied)
Unable to update instance pid: Unable to create directory /nexus-data/instances
It indicates that there is a file permission issue.
I am using Red Hat Enterprise Linux 7.5 as the host machine and the most recent docker version.
On another machine (ubuntu) it works fine.
The issue occurs in the persistent volume (/nexus-data). However, I do not mount a specific volume and let docker use an anonymous one.
If I compare the volumes on both machines I can see the following permissions:
For Red Hat, where it is not working, it belongs to root.
$ docker run --rm sonatype/nexus3:3.8.0 ls -l /nexus-data
total 0
drwxr-xr-x. 2 root root 6 Mar 1 00:07 etc
drwxr-xr-x. 2 root root 6 Mar 1 00:07 log
drwxr-xr-x. 2 root root 6 Mar 1 00:07 tmp
On Ubuntu, where it is working, it belongs to nexus. nexus is also the default user in the container.
$ docker run --rm sonatype/nexus3:3.8.0 ls -l /nexus-data
total 12
drwxr-xr-x 2 nexus nexus 4096 Mar 1 00:07 etc
drwxr-xr-x 2 nexus nexus 4096 Mar 1 00:07 log
drwxr-xr-x 2 nexus nexus 4096 Mar 1 00:07 tmp
Changing the user with the -u option is not an option.
I could solve it by deleting all local docker images: docker image prune -a
Afterwards it downloaded the image again and it worked.
This is strange because I also compared the fingerprints of the images and they were identical.
An example of docker-compose for Nexus :
version: "3"
services:
  # Nexus
  nexus:
    image: sonatype/nexus3:3.39.0
    expose:
      - "8081"
      - "8082"
      - "8083"
    ports:
      # UI
      - "8081:8081"
      # repositories http
      - "8082:8082"
      - "8083:8083"
      # repositories https
      #- "8182:8182"
      #- "8183:8183"
    environment:
      - VIRTUAL_PORT=8081
    volumes:
      - "./nexus/data/nexus-data:/nexus-data"
Set up the volume:
mkdir -p ./nexus/data/nexus-data
sudo chown -R 200 nexus/ # 200 because it's the UID of the nexus user inside the container
Start Nexus
sudo docker-compose up -d
You should set the correct rights on the folder where the persistent volume is located.
chmod u+rwx -R <folder of /nexus-data volumes>
Be careful: the command above recursively gives the owner read, write, and execute rights on everything in the folder. If you want to give more restricted rights, you should modify the command.
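For example, a slightly more conservative variant (UID 200 as noted above; the capital X applies execute only to directories and files already marked executable):
sudo chown -R 200 ./nexus/data/nexus-data
sudo chmod -R u+rwX ./nexus/data/nexus-data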

Recreate container on stop with docker-compose

I am trying to set up a multi-container service with docker-compose.
Some of the containers need to be restarted from a fresh container (eg. the file system should be like in the image) when they restart.
How can I achieve this?
I've found the restart: always option I can put on my service in the docker-compose.yml file, but that doesn't give me a fresh file system as it uses the same container.
I've also seen the --force-recreate option of docker-compose up, but that doesn't apply as that only recreates the containers when the command is run.
EDIT:
This is probably not a docker-compose issue, but more of a general docker question: what is the best way to make sure a container is in a fresh state when it is restarted? By fresh state, I mean a state identical to that of a brand-new container from the same image; by restarted, I mean docker restart, or docker stop followed by docker start.
In docker, immutability typically refers to the image layers. They are immutable, and any changes are pushed to a container-specific copy-on-write layer of the filesystem. That container-specific layer lasts for the lifetime of the container. So to have those files not persist, you have two options:
Recreate the container instead of just restart it
Don't write the changes to the container filesystem, and don't write them to any persistent volumes.
You cannot do #1 with a restart policy by its very definition. A restart policy gives you the same container filesystem, with the application restarted. But if you use docker's swarm mode, it will recreate containers when they exit, so if you can migrate to swarm mode, you can achieve this result.
Option #2 looks more difficult than it is. If you aren't writing to the container filesystem, or to a volume, then where? The answer is a tmpfs mount that is only stored in memory and is lost as soon as the container exits. In compose, this is a tmpfs: /data/dir/to/not/persist line.
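A minimal compose sketch of that line (the service name and image are placeholders):
version: '3'
services:
  app:
    image: busybox
    command: sleep infinity
    tmpfs:
      - /data    # held in memory; lost when the container stops
Here's an example on the docker command line.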
First, let's create a container with a tmpfs mounted at /data, add some content, and exit the container:
$ docker run -it --tmpfs /data --name no-persist busybox /bin/sh
/ # ls -al /data
total 4
drwxrwxrwt 2 root root 40 Apr 7 21:50 .
drwxr-xr-x 1 root root 4096 Apr 7 21:50 ..
/ # echo 'do not save' >>/data/tmp-data.txt
/ # cat /data/tmp-data.txt
do not save
/ # ls -al /data
total 8
drwxrwxrwt 2 root root 60 Apr 7 21:51 .
drwxr-xr-x 1 root root 4096 Apr 7 21:50 ..
-rw-r--r-- 1 root root 12 Apr 7 21:51 tmp-data.txt
/ # exit
Easy enough; it behaves like a normal container. Let's restart it and check the directory contents:
$ docker restart no-persist
no-persist
$ docker attach no-persist
/ # ls -al /data
total 4
drwxr-xr-x 2 root root 40 Apr 7 21:51 .
drwxr-xr-x 1 root root 4096 Apr 7 21:50 ..
/ # echo 'still do not save' >>/data/do-not-save.txt
/ # ls -al /data
total 8
drwxr-xr-x 2 root root 60 Apr 7 21:52 .
drwxr-xr-x 1 root root 4096 Apr 7 21:50 ..
-rw-r--r-- 1 root root 18 Apr 7 21:52 do-not-save.txt
/ # exit
As you can see, the directory came back empty, and we can add data to it again as needed. The only downside is that the directory will be empty even if you have content in the image at that location. I've tried combinations of named volumes, and using the mount syntax with the volume-nocopy option set to 0, without luck. So if you need the directory to be initialized, you'll need to do that as part of your container entrypoint/cmd by copying from another location.
In order to not persist any changes to your containers, it is enough that you don't map any directory from the host into the container.
That way, every time the container runs (with docker run or docker-compose up), it starts with a fresh filesystem.
docker-compose down also removes the containers, deleting any data.
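So the recreate cycle looks like this (assuming nothing is bind-mounted):
docker-compose down    # removes the containers and their writable layers
docker-compose up -d   # fresh containers from the image layers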
The best solution I have found so far, is for the container itself to make sure to clean up when starting or stopping. I solve this by cleaning up when starting.
I copy my app files to /srv/template with the docker COPY directive in my Dockerfile, and have something like this in my ENTRYPOINT script:
rm -rf /srv/server/
cp -r /srv/template /srv/server
cd /srv/server
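A sketch of how that wiring might look end to end (paths and the exec hand-off are assumptions, not from the original answer):
# Dockerfile (sketch)
FROM ubuntu:latest
COPY ./app /srv/template
COPY entrypoint.sh /usr/local/bin/entrypoint.sh
RUN chmod +x /usr/local/bin/entrypoint.sh
ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]

# entrypoint.sh (sketch)
#!/bin/sh
set -e
rm -rf /srv/server
cp -r /srv/template /srv/server
cd /srv/server
exec "$@"    # hand off to whatever CMD the image or compose file sets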
