Persist data only for selected volumes with docker-compose - docker

I have two Docker images for my documentation files: docs:v1 and docs:v2.
They just contain some files in /docs exposed as a VOLUME:
# docker run --rm docs:v1 cat /docs/doc.txt
Version1
# docker run --rm docs:v2 cat /docs/doc.txt
Version2
And I have my app described in this docker-compose.yml (using docker-compose 1.4):
app:
  image: "busybox"
  command: /bin/sh -c "cat /docs/doc.txt && echo `date` >> /logs/log.txt"
  volumes:
    - "/logs"
  volumes_from:
    - "docs"
docs:
  image: "docs:v1"
So basically my app prints the content of the docs and echoes the current date into a log file. The log file is also in a VOLUME.
=> What I want is just to be able to update the docs to docs:v2, see that it prints "Version2" as expected, and keep the logs intact.
First run:
# docker-compose up
Creating tmp_docs_1...
Creating tmp_app_1...
Attaching to tmp_docs_1, tmp_app_1
app_1 | Version1
...
# docker run --rm --volumes-from tmp_app_1 busybox cat /logs/log.txt
Tue Aug 25 22:09:11 UTC 2015
It works as expected: it prints the Version1 documentation and echoes into the logs.
Next I update the yml file with image: "docs:v2", then restart my app:
# docker-compose up
Recreating tmp_docs_1...
Recreating tmp_app_1...
Attaching to tmp_docs_1, tmp_app_1
app_1 | Version1
...
# docker run --rm --volumes-from tmp_app_1 busybox cat /logs/log.txt
Tue Aug 25 22:09:11 UTC 2015
Tue Aug 25 22:10:26 UTC 2015
The logs have been updated, that's fine, but my doc is still in Version1!
It might be surprising, but that's actually the expected behavior. According to the docker docs: "Changes to a data volume will not be included when you update an image."
Right, but I want to be able to see my updated docs, so let's try to delete the docs container and volume:
# docker-compose rm -v docs
Removing tmp_docs_1... done
# docker-compose up
Creating tmp_docs_1...
Starting tmp_app_1...
Attaching to tmp_docs_1, tmp_app_1
app_1 | Version1
...
No luck... still in Version1. That's because the app container still points to the old Version1 volume. So let's try deleting the app as well (just the app, not the volumes this time):
# docker-compose rm app
Removing tmp_app_1... done
# docker-compose up
Starting tmp_docs_1...
Creating tmp_app_1...
Attaching to tmp_docs_1, tmp_app_1
app_1 | Version2
Version2: it worked! Let's check the logs:
# docker run --rm --volumes-from tmp_app_1 busybox cat /logs/log.txt
Tue Aug 25 22:19:21 UTC 2015
Ach! My old logs are gone.
So here's the question again: how can I update the docs image, see the change in my app, and still keep the logs across restarts?

You will want to map your volumes to your actual filesystem. Right now you are creating volumes inside the container with the VOLUME directive; these volumes bypass the UFS and persist even after the container is deleted, which is why you had to delete the volume to make it behave properly. Map the volumes to an external folder on the host OS instead: then when you upgrade your app, the logs will still be there, and the app gets upgraded.
To be clear about what is occurring:
You create the initial app > volumes initialize > logs are saved.
Upgrade the docs image > the docs volume is not updated ("Changes to a data volume will not be included when you update an image") (expected behavior).
Delete the data volume > upgrade the app > the new docs are there, but the old logs are gone because they were deleted along with the recreated containers.
To resolve this, mount the volumes onto the host OS so the data persists across upgrades but can still be written to.
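For example, a host-mapped version of the question's compose file might look like this (a sketch; ./logs is a hypothetical directory next to the docker-compose.yml):
app:
  image: "busybox"
  command: /bin/sh -c "cat /docs/doc.txt && echo `date` >> /logs/log.txt"
  volumes:
    # host path : container path; the logs now live on the host filesystem
    - "./logs:/logs"
  volumes_from:
    - "docs"
docs:
  image: "docs:v2"
With /logs bound to the host, the docs and app containers can be removed and recreated freely (docker-compose rm docs app && docker-compose up) and the log history survives.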

Related

Why is Loki's Docker Driver Client stopping to log after some time?

I want to send the logs of my Docker containers to Grafana Loki, so I installed Loki's Docker Driver Client and started my containers with it. At first I can see logs, but after some time no more logs appear.
Installation
I installed Loki's Docker Driver Client as a Docker plugin on my Docker Engine (version 20.10.2):
$ docker plugin install grafana/loki-docker-driver:master-54d1d3b --alias loki --grant-all-permissions
I didn't use the tag latest, because of the bug Unable to connect to logging plugin in Swarm
Configuration
I started my Docker containers with Loki's Docker Driver Client as log driver:
$ docker container run \
    --log-driver=loki \
    --log-opt loki-url="$LOKI_URL" \
    --log-opt loki-retries=5 \
    --log-opt loki-batch-size=400 \
    --log-opt max-size="10m" \
    --log-opt max-file=5 \
    --detach \
    --name $CONTAINER_NAME \
    --restart unless-stopped \
    $IMAGE:$TAG
I also added the json-file driver's max-size and max-file options to limit disk space, see Configuring the Docker Driver.
Problem
At first I could see logs in Grafana and on the command line with docker container logs, but after some time no more logs were shown. When I tried to look into the logs on the Docker host, I saw an error:
$ docker container logs 75d4b13eb3e8
error from daemon in stream: Error grabbing logs: error getting log reader: LogDriver.ReadLogs: logger does not exist for 75d4b13eb3e8203b9247ecdeb41fdf495cc8fea7dcfc4775fd8261263b1dcd32
Research
I looked into the directories of the containers (see Where is a log file with logs from a container?), but I couldn't see any log files:
$ sudo ls /var/lib/docker/containers/75d4b13eb3e8203b9247ecdeb41fdf495cc8fea7dcfc4775fd8261263b1dcd32
checkpoints config.v2.json hostconfig.json hostname hosts mounts resolv.conf resolv.conf.hash
I also checked the log path (see Get an instance’s log path), but it was empty:
$ docker inspect --format='{{.LogPath}}' 75d4b13eb3e8
I found the container's logs in the plugin's directory (see Loki log driver not storing logs as files on disk, even with keep-file: true), but the log files no longer change:
$ sudo ls -la /var/lib/docker/plugins/eac33cc9913ca962a189904392e516dd495d6fd52391fb5af4a34af46b281288/rootfs/var/log/docker/75d4b13eb3e8203b9247ecdeb41fdf495cc8fea7dcfc4775fd8261263b1dcd32
total 912
drwxr-xr-x 2 root root 4096 Jan 22 12:59 .
drwxr-xr-x 17 root root 4096 Jan 22 15:46 ..
-rw-r----- 1 root root 923177 Jan 22 13:34 json.log
I looked into Docker daemon's logs (see Read the logs) and found errors and a warning (at the same time logging stopped):
$ sudo journalctl -u docker.service | grep eac33cc9913c
[...]
[...]level=error msg="panic: send on closed channel" plugin=eac33cc9913ca962a189904392e516dd495d6fd52391fb5af4a34af46b281288
[...]level=error plugin=eac33cc9913ca962a189904392e516dd495d6fd52391fb5af4a34af46b281288
[...]level=error msg="goroutine 153 [running]:" plugin=eac33cc9913ca962a189904392e516dd495d6fd52391fb5af4a34af46b281288
[...]level=error msg="main.(*loki).Log(0xc0000c5e00, 0xc0001d81c0, 0xc0000c5e80, 0x0)" plugin=eac33cc9913ca962a189904392e516dd495d6fd52391fb5af4a34af46b281288
[...]level=error msg="\t/src/loki/cmd/docker-driver/loki.go:69 +0x2fb" plugin=eac33cc9913ca962a189904392e516dd495d6fd52391fb5af4a34af46b281288
[...]level=error msg="main.consumeLog(0xc0002c0480)" plugin=eac33cc9913ca962a189904392e516dd495d6fd52391fb5af4a34af46b281288
[...]level=error msg="\t/src/loki/cmd/docker-driver/driver.go:165 +0x4c2" plugin=eac33cc9913ca962a189904392e516dd495d6fd52391fb5af4a34af46b281288
[...]level=error msg="created by main.(*driver).StartLogging" plugin=eac33cc9913ca962a189904392e516dd495d6fd52391fb5af4a34af46b281288
[...]level=error msg="\t/src/loki/cmd/docker-driver/driver.go:116 +0xa75" plugin=eac33cc9913ca962a189904392e516dd495d6fd52391fb5af4a34af46b281288
[...]level=warning msg="Unable to connect to plugin: /run/docker/plugins/eac33cc9913ca962a189904392e516dd495d6fd52391fb5af4a34af46b281288/loki.sock/LogDriver.StopLogging: Post http://%2Frun%2Fdocker%2Fplugins%2Feac33cc9913ca962a189904392e516dd495d6fd52391fb5af4a34af46b281288%2Floki.sock/LogDriver.StopLogging: EOF, retrying in 1s"
[...]
What did I do wrong?
I was experiencing the same issue.
My only differences in configuration are that I'm trialing the latest Enterprise Edition (19.03), as it brings dual logging capability (although this is also supported in the latest CE versions), and that I'm using the latest Loki Docker driver client, now that the GitHub issue previously mentioned has been resolved.
I ended up setting the log-opts properties no-file and keep-file in docker-compose.yml:
logging:
  driver: "loki"
  options:
    loki-url: "http://${LOKI_URL}:3100/loki/api/v1/push"
    loki-batch-size: "400"
    no-file: "false"
    keep-file: "true"
    max-size: "5m"
    max-file: "3"
Since making this change I am receiving logs in Loki and can still use docker container logs and docker service logs on my Docker hosts.
no-file: "false" tells the driver to continue creating logs on disk, and keep-file: "true" tells it to keep the JSON logs when the container is stopped (by default the files are removed).
Note: originally I added these settings to /etc/docker/daemon.json on the host but would still see the "error getting log reader" issue; I had to switch to specifying the log driver per container/Swarm service.
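For a single container started with docker run, the same options can presumably be passed as --log-opt flags, e.g. (a sketch reusing the question's variables):
$ docker container run --detach \
    --log-driver=loki \
    --log-opt loki-url="$LOKI_URL" \
    --log-opt no-file=false \
    --log-opt keep-file=true \
    --log-opt max-size=5m \
    --log-opt max-file=3 \
    $IMAGE:$TAG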
Regarding this issue:
First I could see logs in Grafana and in command line with docker container logs, but after some time no more logs were shown.
In Grafana, select Query type: Range rather than Instant, and you will see all the logs for the selected period of time, if they exist in Loki.

How to do the hello_world example from GitHub:linuxkit/linuxkit?

Situation and Problem
I am trying to follow this guide on "how to make your own linuxkit with docker for mac", where you can add some kernel modules usually not present in docker images.
After a lot of reading and testing, I am failing at the simplest (one would think) test case in the repository:
linuxkit/test/cases/020_kernel/011_kmod_4.9.x/
https://github.com/linuxkit/linuxkit/tree/master/test/cases/020_kernel/011_kmod_4.9.x
Checking the container for the Linux kernel version and config:
... host$ docker run -it --rm -v /:/host -v $(pwd):/macos alpine:latest
Unable to find image 'alpine:latest' locally
latest: Pulling from library/alpine
bdf0201b3a05: Pull complete
Digest: sha256:28ef97b8686a0b5399129e9b763d5b7e5ff03576aa5580d6f4182a49c5fe1913
Status: Downloaded newer image for alpine:latest
/ # / # uname -a
/bin/sh: /: Permission denied
/ #
/ #
/ # uname -a
Linux 029b8e5ada75 4.9.125-linuxkit #1 SMP Fri Sep 7 08:20:28 UTC 2018 x86_64 Linux
/ # cp /host/proc/config.gz /macos/
/ # exit
I went back in the GitHub history to find the hash for my local linuxkit kernel version and modified the Dockerfile of that example (or basically used the old one).
So far so good. The problem is that if I try to do anything related to kernel modules (modinfo, modprobe, depmod, insmod), I get errors like these:
modinfo: can't open '/lib/modules/4.9.125-linuxkit/modules.dep': No such file or directory
This is because that path simply does not exist in the container (there is not even a modules folder). The same is true if I check, as above, plain alpine:latest, so no magic seems to happen in that Dockerfile.
Question
Now I am completely puzzled and left stranded on what to do, hence my question ...
How to do the hello_world example from linuxkit/linuxkit ?
Additional notes
The docs of the linuxkit-repository do not mention anything about that problem:
https://github.com/linuxkit/linuxkit/blob/master/docs/kernels.md#compiling-external-kernel-modules
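That said, the linked test case appears to build the module in a multi-stage Dockerfile that copies the kernel headers out of the matching linuxkit/kernel image; a rough sketch (tags and paths are illustrative and must match the running kernel exactly):
# Stage 1: the kernel image matching `uname -r` ships the build headers
FROM linuxkit/kernel:4.9.125 AS ksrc

# Stage 2: compile the module against those headers
FROM alpine:3.8 AS build
RUN apk add --no-cache build-base
COPY --from=ksrc /kernel-dev.tar /
RUN tar xf /kernel-dev.tar
COPY hello_world.c Makefile /src/
RUN make -C /usr/src/linux-headers-4.9.125-linuxkit M=/src modules

# Stage 3: minimal image carrying only the compiled module
FROM alpine:3.8
COPY --from=build /src/hello_world.ko /
The key point is that plain alpine:latest contains no /lib/modules at all; the headers (and any modules) have to come from the kernel image that matches the running LinuxKit kernel.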
For easy testing I am using docker-compose:
# build with:
#   docker-compose build
version: '3'
services:
  linux-builder:
    image: my_linux_kit
    build:
      context: .
      dockerfile: my_linux_kit.dockerfile
      # args:
      #   buildno: 1
    privileged: true
And I even tricked it (by inserting the file by hand) into not showing any errors, but also not doing what I suppose the code should do:
... host$ docker exec -it 7a33fad37914 sh
/ # ls
bin dev hello_world.ko lib mnt root sbin sys usr
check.sh etc home media proc run srv tmp var
/ # /bin/busybox insmod hello_world.ko
/ #

Redis permission denied while opening dump.rdb

I am using the official redis image with Sidekiq on Docker.
Following are the yml configurations for the redis image:
redis:
  build: .
  dockerfile: Dockerfile-redis
  ports:
    - '6379:6379'
  volumes:
    - 'redis:/var/lib/redis'
sidekiq:
  build: .
  command: bundle exec sidekiq
  links:
    - db
    - redis
  volumes:
    - .:/app
  env_file:
    - .env
Following is the content of my Dockerfile-redis:
FROM redis
COPY redis.conf /usr/local/etc/redis/redis.conf
CMD [ "redis-server", "/usr/local/etc/redis/redis.conf" ]
When I build the images everything works fine, but after some time docker-compose logs shows the following permission error:
redis_1 | 98:C 22 Jan 2019 18:40:10.098 # Failed opening the RDB file dump.rdb (in server root dir /var/lib/redis) for saving: Permission denied
redis_1 | 1:M 22 Jan 2019 18:40:10.203 # Background saving error
I have tried many solutions but I am still getting this error in the logs. Every time, permission is denied for redis to open the dump.rdb file. I have also followed this solution and made the following changes in my Dockerfile-redis to give root permission to redis:
USER root
CMD chown -R root:root /var/lib/redis/
CMD chmod 777 /var/lib/redis/
CMD chmod 777 /var/lib/redis/dump.rdb
I have tried 755 for the dir and 644 for the dbfilename, but it didn't work for me. I also tried the above configuration of Dockerfile-redis with the redis user, but I still get the same permission denied error when opening the dump.rdb file.
I don't know what I am doing wrong here. Please help me with this.
After an hour of inactivity, Redis will try to dump the in-memory db to disk.
Redis from the official redis image tries to write the .rdb file in the container's /data folder, which is rather unfortunate, as it is a root-owned folder and a non-persistent location too (data written there will disappear if your container/pod crashes).
So after an hour of inactivity, if you have run your redis container as a non-root user (e.g. docker run -u 1007 rather than the default docker run -u 0), you will get a nicely detailed error message in your log (see docker logs redis):
1:M 29 Jun 2019 21:11:22.014 * 1 changes in 3600 seconds. Saving...
1:M 29 Jun 2019 21:11:22.015 * Background saving started by pid 499
499:C 29 Jun 2019 21:11:22.015 # Failed opening the RDB file dump.rdb (in server root dir /data) for saving: Permission denied
1:M 29 Jun 2019 21:11:22.115 # Background saving error
So what you need to do is map the container's /data folder to an external location (where the non-root user, here 1007, has write access), e.g.:
docker run --rm -d --name redis -p 6379:6379 -u 1007 -v /tmp:/data redis
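Translated into the question's compose format, that might look like this (a sketch; ./redis-data is a hypothetical host directory that UID 1007 can write to):
redis:
  image: redis
  user: '1007'
  ports:
    - '6379:6379'
  volumes:
    - './redis-data:/data'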
It seems that the official redis image uses an applicative user to run redis-server rather than root (which is a security best practice), regardless of the USER definition. I extracted this from the image's entrypoint shell script:
# allow the container to be started with `--user`
if [ "$1" = 'redis-server' -a "$(id -u)" = '0' ]; then
    find . \! -user redis -exec chown redis '{}' +
    exec gosu redis "$0" "$@"
fi
When mounting a volume into a container, it is owned by the root user and overrides the default directory in the image's layer along with its previous permissions.
It seems that the redis image was never intended to expose the '/var/lib/redis' dir as a volume; instead, the docs offer mounting '/data' for persistence:
If persistence is enabled, data is stored in the VOLUME /data, which can be used with --volumes-from some-volume-container or -v /docker/host/dir:/data (see docs.docker volumes).
For more about Redis Persistence, see http://redis.io/topics/persistence.
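So the minimal change to the question's compose file would presumably be to mount the named volume at /data instead of /var/lib/redis (a sketch):
redis:
  build: .
  dockerfile: Dockerfile-redis
  ports:
    - '6379:6379'
  volumes:
    - 'redis:/data'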
Start the docker container as root, for example:
redis:
  build: .
  dockerfile: Dockerfile-redis
  user: root  # <-- required
  ports:
    - '6379:6379'
  volumes:
    - 'redis:/var/lib/redis'
Also check whether the Redis port on your server is open to the public. An exposed, unauthenticated Redis instance can be tampered with remotely, which can produce exactly this kind of issue and is very strange and difficult to diagnose.

Can I use "mount" inside a Docker Alpine container?

I am Dockerising an old project. A feature in the project pulls in user-specified Git repos, and since the size of a repo could cause the filing system to be overwhelmed, I created a local filing system of a fixed size, and then mounted it. This was intended to prevent the web host from having its file system filled up.
The general approach is this:
IMAGE=filesystem/image.img
MOUNT_POINT=filesystem/mount
# Number of M to set aside for this filing system
SIZE=20
PROJECT_ROOT=`pwd`
# Command used to list current mounts (assumed: /bin/mount with no args)
MOUNTCMD=/bin/mount
# Create the backing file
dd if=/dev/zero of=$IMAGE bs=1M count=$SIZE &> /dev/null
# Format: the -F permits creation even though it's not a "block special device"
mkfs.ext3 -F -q $IMAGE
# Mount if the filing system is not already mounted
$MOUNTCMD | cut -d ' ' -f 3 | grep -q "^${PROJECT_ROOT}/${MOUNT_POINT}$"
if [ $? -ne 0 ]; then
  # -p Create all parent dirs as necessary
  mkdir -p $MOUNT_POINT
  /bin/mount -t ext3 $IMAGE $MOUNT_POINT
fi
This works fine in a local or remote Linux VM. However, I'd like to run this shell code, or something like it, inside a container. Part of the reason is to contain all the fiddly stuff inside a container, so that building a new host machine is kept as simple as possible (in my view, setting up custom mounts and cron-restart rules on the host works against that).
So, this command does not work inside a container ("filesystem" is an on-host Docker volume)
mount -t ext3 filesystem/image.img filesystem/mount
mount: can't setup loop device: No space left on device
It also does not work on a container folder ("filesystem2" is a container directory):
dd if=/dev/zero of=filesystem2/image.img bs=1M count=20
mount -t ext3 filesystem2/image.img filesystem2/mount
mount: can't setup loop device: No space left on device
I wonder whether containers just don't have the right internal machinery to do mounting, and thus whether I should change course. I'd prefer not to spend too much time on this (I'm just moving a project to a Docker-only server) which is why I would like to get mount working if I can.
Other options
If that's not possible, then a size-limited Docker volume, that works with both Docker and Swarm, may be an alternative I'd need to look into. There are conflicting reports on the web as to whether this actually works (see this question).
There is a suggestion here that this is supported in Flocker. However, I am hesitant to use that, as it appears to be abandoned, presumably having been affected by ClusterHQ going bust.
This post indicates I can use --storage-opt size=120G with docker run. However, it does not look like it is supported by docker service create (unless perhaps the option has been renamed).
Update
As per the comment convo, I made some progress; I found that adding --privileged to the docker run enables mounting, at the cost of removing security isolation. A helpful commenter says that it is better to use the more fine-grained control of --cap-add SYS_ADMIN, allowing the container to retain some of its isolation.
However, Docker Swarm has not yet implemented either of these flags, so I can't use this solution. This lengthy feature request suggests to me that this feature is not going to be added in a hurry; it's been pending for two years already.
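For readers not constrained by Swarm, the flag-based variant looks roughly like this (a sketch; depending on the kernel and storage driver, the container may also need access to a loop device):
docker run --rm -it --cap-add SYS_ADMIN \
    -v "$PWD/filesystem:/filesystem" \
    alpine /bin/sh
# then, inside the container:
#   mount -t ext3 /filesystem/image.img /filesystem/mount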
You won't be able to safely do this inside of a container. Docker removes the mount privilege from containers because with it you could mount the host filesystem and escape the container. However, you can do this outside of the container and mount the filesystem into the container as a volume using the default local driver. The size option isn't supported by most filesystems (tmpfs being one of the few exceptions); most of them use the size of the underlying device, which you defined with the image file creation command:
dd if=/dev/zero of=filesystem/image.img bs=1M count=$SIZE
I had trouble getting docker to create the loop device dynamically, so here's the process to create it manually:
$ sudo losetup --find --show ./vol-image.img
/dev/loop0
$ sudo mkfs -t ext3 /dev/loop0
mke2fs 1.43.4 (31-Jan-2017)
Creating filesystem with 10240 1k blocks and 2560 inodes
Filesystem UUID: 25c95fcd-6c78-4b8e-b923-f808517b28df
Superblock backups stored on blocks:
8193
Allocating group tables: done
Writing inode tables: done
Creating journal (1024 blocks): done
Writing superblocks and filesystem accounting information: done
When defining the volume, mount options are passed almost verbatim from the mount command you would run on the command line:
docker volume create --driver local --opt type=ext3 \
--opt device=filesystem/image.img app_vol
docker service create --mount type=volume,src=app_vol,dst=/filesystem/mount ...
or in a single service create command:
docker service create \
--mount type=volume,src=app_vol,dst=/filesystem/mount,volume-driver=local,volume-opt=type=ext3,volume-opt=device=filesystem/image.img ...
With docker run, the command looks like:
$ docker run -it --rm --mount type=volume,dst=/data,src=ext3vol,volume-driver=local,volume-opt=type=ext3,volume-opt=device=/dev/loop0 busybox /bin/sh
/ # ls -al /data
total 17
drwxr-xr-x 3 root root 1024 Sep 19 14:39 .
drwxr-xr-x 1 root root 4096 Sep 19 14:40 ..
drwx------ 2 root root 12288 Sep 19 14:39 lost+found
The only prerequisite is that you create this file and loop device before creating the service, and that this file is accessible wherever the service is scheduled. I would also suggest making all of the paths in these commands fully qualified rather than relative to the current directory. I'm pretty sure there are a few places that relative paths don't work.
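Putting those host-side steps together, a preparation script might look like this (a sketch with fully qualified paths, as suggested; adjust IMG and the size to taste):
#!/bin/sh
set -e
IMG=/var/lib/app/image.img

# create a fixed-size backing file (20 MB) and attach it to a loop device
dd if=/dev/zero of="$IMG" bs=1M count=20
LOOP=$(sudo losetup --find --show "$IMG")
sudo mkfs -t ext3 "$LOOP"

# register it as a size-limited Docker volume backed by the loop device
docker volume create --driver local \
    --opt type=ext3 --opt device="$LOOP" app_vol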
I have found a size-limiting solution I am happy with, and it does not use the Linux mount command at all. I've not implemented it yet, but the tests documented below are satisfying enough. Readers may wish to note the minor warnings at the end.
I had not tried mounting Docker volumes prior to asking this question, since part of my research stumbled on a Stack Overflow poster casting doubt on whether Docker volumes can be made to respect a size limitation. My test indicates that they can, but you may wish to test this on your own platform to ensure it works for you.
Size limit on Docker container
The below commands have been cobbled together from various sources on the web.
To start with, I create a volume like so, with a 20m size limit:
docker volume create \
--driver local \
--opt o=size=20m \
--opt type=tmpfs \
--opt device=tmpfs \
hello-volume
I then create an Alpine Swarm service with a mount on this container:
docker service create \
--mount source=hello-volume,target=/myvol \
alpine \
sleep 10000
We can ensure the container is mounted by getting a shell on the single container in this service:
docker exec -it amazing_feynman.1.lpsgoyv0jrju6fvb8skrybqap sh
/ # ls -l /myvol
total 0
OK, great. So, while remaining in this shell, let's try slowly overwhelming this disk, in 5m increments. We can see that it fails on the fifth try, which is what we would expect:
/ # cd /myvol
/myvol # ls
/myvol # dd if=/dev/zero of=image1 bs=1M count=5
5+0 records in
5+0 records out
/myvol # dd if=/dev/zero of=image2 bs=1M count=5
5+0 records in
5+0 records out
/myvol # ls -l
total 10240
-rw-r--r-- 1 root root 5242880 Sep 16 13:11 image1
-rw-r--r-- 1 root root 5242880 Sep 16 13:12 image2
/myvol # dd if=/dev/zero of=image3 bs=1M count=5
5+0 records in
5+0 records out
/myvol # dd if=/dev/zero of=image4 bs=1M count=5
5+0 records in
5+0 records out
/myvol # ls -l
total 20480
-rw-r--r-- 1 root root 5242880 Sep 16 13:11 image1
-rw-r--r-- 1 root root 5242880 Sep 16 13:12 image2
-rw-r--r-- 1 root root 5242880 Sep 16 13:12 image3
-rw-r--r-- 1 root root 5242880 Sep 16 13:12 image4
/myvol # dd if=/dev/zero of=image5 bs=1M count=5
dd: writing 'image5': No space left on device
1+0 records in
0+0 records out
/myvol #
Finally, let's see if we can get an error by overwhelming the disk in one go, in case the limitation only applies to newly opened file handles in a full disk:
/ # cd /myvol
/myvol # rm *
/myvol # dd if=/dev/zero of=image1 bs=1M count=21
dd: writing 'image1': No space left on device
21+0 records in
20+0 records out
It turns out we can, so that looks pretty robust to me.
Nota bene
The volume is created with a type and a device of "tmpfs", which sounded to me worryingly like a RAM disk. I've successfully checked that the volume remains connected and intact after a system reboot, so it looks good to me, at least for now.
However, I'd say that when it comes to organising your data persistence systems, don't just copy what I have. Make sure the volume is robust enough for your use case before you put it into production, and of course, make sure you include it in your back-up process.
(This is for Docker version 18.06.1-ce, build e68fc7a).
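For a quick check without Swarm, the same volume can presumably be exercised with a plain docker run (a sketch):
docker run --rm -it --mount source=hello-volume,target=/myvol alpine sh
/ # dd if=/dev/zero of=/myvol/big bs=1M count=21   # should fail with "No space left on device", as above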

Permission issues in nexus3 docker container

When I start nexus3 in a docker container I get the following error messages.
$ docker run --rm sonatype/nexus3:3.8.0
Warning: Cannot open log file: ../sonatype-work/nexus3/log/jvm.log
Warning: Forcing option -XX:LogFile=/tmp/jvm.log
Java HotSpot(TM) 64-Bit Server VM warning: Cannot open file ../sonatype-work/nexus3/log/jvm.log due to Permission denied
Unable to update instance pid: Unable to create directory /nexus-data/instances
/nexus-data/log/karaf.log (Permission denied)
Unable to update instance pid: Unable to create directory /nexus-data/instances
It indicates that there is a file permission issue.
I am using Red Hat Enterprise Linux 7.5 as host machine and the most recent docker version.
On another machine (ubuntu) it works fine.
The issue occurs in the persistent volume (/nexus-data). However, I do not mount a specific volume and let Docker use an anonymous one.
If I compare the volumes on both machines I can see the following permissions:
For Red Hat, where it is not working, it belongs to root.
$ docker run --rm sonatype/nexus3:3.8.0 ls -l /nexus-data
total 0
drwxr-xr-x. 2 root root 6 Mar 1 00:07 etc
drwxr-xr-x. 2 root root 6 Mar 1 00:07 log
drwxr-xr-x. 2 root root 6 Mar 1 00:07 tmp
On Ubuntu, where it is working, it belongs to nexus. Nexus is also the default user in the container.
$ docker run --rm sonatype/nexus3:3.8.0 ls -l /nexus-data
total 12
drwxr-xr-x 2 nexus nexus 4096 Mar 1 00:07 etc
drwxr-xr-x 2 nexus nexus 4096 Mar 1 00:07 log
drwxr-xr-x 2 nexus nexus 4096 Mar 1 00:07 tmp
Changing the user with the option -u is not an option.
I could solve it by deleting all local docker images: docker image prune -a
Afterwards it downloaded the image again and it worked.
This is strange because I also compared the fingerprints of the images and they were identical.
An example of docker-compose for Nexus:
version: "3"
services:
  # Nexus
  nexus:
    image: sonatype/nexus3:3.39.0
    expose:
      - "8081"
      - "8082"
      - "8083"
    ports:
      # UI
      - "8081:8081"
      # repositories http
      - "8082:8082"
      - "8083:8083"
      # repositories https
      #- "8182:8182"
      #- "8183:8183"
    environment:
      - VIRTUAL_PORT=8081
    volumes:
      - "./nexus/data/nexus-data:/nexus-data"
Set up the volume:
mkdir -p ./nexus/data/nexus-data
sudo chown -R 200 nexus/ # 200 because it's the UID of the nexus user inside the container
Start Nexus:
sudo docker-compose up -d
You should set the correct permissions on the folder where the persistent volume is located:
chmod -R u+rwx <folder of the /nexus-data volume>
Be careful: the command above recursively gives read, write and execute rights to the owning user on everything under that folder. If you want to grant more restricted (or different) rights, you should modify the command.
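For example, a variant that assigns ownership to the container's nexus user (UID 200, as in the chown above) and grants rights only to that user might be (a sketch):
sudo chown -R 200:200 ./nexus/data/nexus-data
# u+rwX: read/write for the owner; execute (traverse) only on directories
sudo chmod -R u+rwX ./nexus/data/nexus-data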
