When I start Nexus 3 in a Docker container, I get the following error messages:
$ docker run --rm sonatype/nexus3:3.8.0
Warning: Cannot open log file: ../sonatype-work/nexus3/log/jvm.log
Warning: Forcing option -XX:LogFile=/tmp/jvm.log
Java HotSpot(TM) 64-Bit Server VM warning: Cannot open file ../sonatype-work/nexus3/log/jvm.log due to Permission denied
Unable to update instance pid: Unable to create directory /nexus-data/instances
/nexus-data/log/karaf.log (Permission denied)
Unable to update instance pid: Unable to create directory /nexus-data/instances
It indicates that there is a file permission issue.
I am using Red Hat Enterprise Linux 7.5 as the host machine with the most recent Docker version.
On another machine (Ubuntu) it works fine.
The issue occurs in the persistent volume (/nexus-data). However, I do not mount a specific volume and let Docker use an anonymous one.
If I compare the volumes on both machines I can see the following permissions:
On Red Hat, where it is not working, the data belongs to root.
$ docker run --rm sonatype/nexus3:3.8.0 ls -l /nexus-data
total 0
drwxr-xr-x. 2 root root 6 Mar 1 00:07 etc
drwxr-xr-x. 2 root root 6 Mar 1 00:07 log
drwxr-xr-x. 2 root root 6 Mar 1 00:07 tmp
On Ubuntu, where it is working, the data belongs to nexus, which is also the default user in the container.
$ docker run --rm sonatype/nexus3:3.8.0 ls -l /nexus-data
total 12
drwxr-xr-x 2 nexus nexus 4096 Mar 1 00:07 etc
drwxr-xr-x 2 nexus nexus 4096 Mar 1 00:07 log
drwxr-xr-x 2 nexus nexus 4096 Mar 1 00:07 tmp
Changing the user with the -u option is not an option for me.
I could solve it by deleting all local Docker images: docker image prune -a
Afterwards Docker downloaded the image again and it worked.
This is strange, because I had also compared the fingerprints of the images and they were identical.
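For reference, a sketch of another workaround that avoids pruning images is to fix the ownership of a named volume up front by running a one-off container as root (this assumes chown is available in the image and that the nexus user is the image default with UID 200; the volume name nexus-data is just an example):
# Create a named volume and give it to the nexus user before first use
docker volume create nexus-data
docker run --rm -u 0 -v nexus-data:/nexus-data sonatype/nexus3:3.8.0 chown -R nexus:nexus /nexus-data
# Start Nexus against the prepared volume
docker run --rm -v nexus-data:/nexus-data sonatype/nexus3:3.8.0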
An example docker-compose.yml for Nexus:
version: "3"
services:
#Nexus
nexus:
image: sonatype/nexus3:3.39.0
expose:
- "8081"
- "8082"
- "8083"
ports:
# UI
- "8081:8081"
# repositories http
- "8082:8082"
- "8083:8083"
# repositories https
#- "8182:8182"
#- "8183:8183"
environment:
- VIRTUAL_PORT=8081
volumes:
- "./nexus/data/nexus-data:/nexus-data"
Set up the volume:
mkdir -p ./nexus/data/nexus-data
sudo chown -R 200 nexus/ # 200 because it's the UID of the nexus user inside the container
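Optionally, a quick sanity check (assuming the id utility is present in the image):
docker run --rm sonatype/nexus3:3.39.0 id nexus   # should report uid=200(nexus)
ls -ln ./nexus/data/nexus-data                    # the owner should now show up as UID 200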
Start Nexus
sudo docker-compose up -d
You should assign the correct permissions to the folder where the persistent volume is located:
chmod -R u+rwx <folder of /nexus-data volumes>
Be careful: this grants read, write and execute permission recursively to the owning user only. If you use a+rwx instead, it gives those rights to all users, so adjust the command to match the level of access you want.
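Alternatively, instead of loosening permissions, you can hand ownership of the folder to the container's nexus user (UID 200, as in the compose example above); the placeholder path is illustrative:
sudo chown -R 200 <folder of /nexus-data volumes>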
I'm running into an issue with macOS Ventura whereby all bind volumes (where I link a directory on my host machine to one in the container) created with docker-compose are empty. I've tested the same scripts on macOS 12.4 and 12.6 and they work as expected, giving me the same directory contents in the container as on the host, so it seems macOS 13 changed something about permissions.
The docker-compose.yml file:
version: "3"
services:
bash:
image: ubuntu:latest
stdin_open: true
tty: true
volumes:
- ./:/app
command: "/bin/bash"
So this should create a directory in the container called /app and link it to the host directory the compose file is in.
But when I start the container:
❯ docker-compose up --build
[+] Running 1/0
⠿ Container ruby-docker-bash-1 Created 0.0s
Attaching to ruby-docker-bash-1
And login, the /app directory is empty:
❯ docker exec -it ruby-docker-bash-1 /bin/bash
root@9644de175d48:/# cd app/
root@9644de175d48:/app# ls -la
total 4
drwxr-xr-x 2 root root 40 Feb 1 09:59 .
drwxr-xr-x 1 root root 4096 Feb 1 09:39 ..
The total 4 is really weird here, as there are supposed to be 4 files in there, but they are not accessible:
root@9644de175d48:/app# cat Gemfile
cat: Gemfile: No such file or directory
These are the directory contents on the host:
❯ ls -la
.rw-r--r-- 2.7k paul 1 Feb 09:31 Dockerfile
.rw-r--r-- 3.9k paul 1 Feb 09:32 Gemfile
.rw-r--r-- 27k paul 1 Feb 09:32 Gemfile.lock
.rw-r--r-- 149 paul 1 Feb 10:00 docker-compose.yml
If anyone has any experience with what might be going wrong or how I can get past this absolute time-sink of an issue, I'd really appreciate it.
Thank you!
I figured it out. I use Colima on macOS, as there is no macOS VM provided by Docker itself.
Then I found this comment on a Colima repo issue, https://github.com/abiosoft/colima/issues/500#issuecomment-1343103477, where a user mentioned they weren't able to sync directories.
To fix volumes on macOS using Colima, I did the following:
colima delete # reset
colima start --mount-type 9p
This doesn't seem to be documented anywhere; I've been through the site and the README.
But I did find this line of code inside the Colima repo:
validMountTypes := map[string]bool{"9p": true, "sshfs": true}
if util.MacOS13OrNewer() {
    validMountTypes["virtiofs"] = true
}
I'm on macOS 13, so it seems like there are issues with virtiofs but not with the older 9p mount type.
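If you want to confirm which mount type is actually in effect, one option (assuming your Colima version supports colima ssh) is to inspect the mounts inside the VM; the grep pattern is just illustrative:
colima ssh -- mount | grep -E '9p|virtiofs|sshfs'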
I'm working on a build step that handles common deployment tasks in a Docker Swarm Mode cluster. As this is a common problem for us and for others, we've shared this build step as a BitBucket pipe: https://bitbucket.org/matchory/swarm-secret-pipe/
The pipe needs to use the docker command to work with a remote Docker installation. This doesn't work, however, because the docker executable cannot be found when the pipe runs.
The following holds true for our test repository pipeline:
The docker option is set to true:
options:
  docker: true
The docker service is enabled for the build step:
main:
  - step:
      services:
        - docker: true
Docker works fine in the repository pipeline itself, but not within the pipe.
Pipeline log shows the docker path being mounted into the pipe container:
docker container run \
--volume=/opt/atlassian/pipelines/agent/build:/opt/atlassian/pipelines/agent/build \
--volume=/opt/atlassian/pipelines/agent/ssh:/opt/atlassian/pipelines/agent/ssh:ro \
--volume=/usr/local/bin/docker:/usr/local/bin/docker:ro \
--volume=/opt/atlassian/pipelines/agent/build/.bitbucket/pipelines/generated/pipeline/pipes:/opt/atlassian/pipelines/agent/build/.bitbucket/pipelines/generated/pipeline/pipes \
--volume=/opt/atlassian/pipelines/agent/build/.bitbucket/pipelines/generated/pipeline/pipes/matchory/swarm-secret-pipe:/opt/atlassian/pipelines/agent/build/.bitbucket/pipelines/generated/pipeline/pipes/matchory/swarm-secret-pipe \
--workdir=$(pwd) \
--label=org.bitbucket.pipelines.system=true \
radiergummi/swarm-secret-pipe:1.3.7@sha256:baf05b25b38f2a59b044e07f4ad07065de90257a000137a0e1eb71cbe1a438e5
The pipe is pretty standard and uses a recent Alpine image; nothing special in that regard. The PATH is never overwritten. Now for the fun part: If I do ls /usr/local/bin/docker inside the pipe, it shows an empty directory:
ls /usr/local/bin
total 16K
drwxr-xr-x 1 root root 4.0K May 13 13:06 .
drwxr-xr-x 1 root root 4.0K Apr 4 16:06 ..
drwxr-xr-x 2 root root 4.0K Apr 29 09:30 docker
ls /usr/local/bin/docker
total 8K
drwxr-xr-x 2 root root 4.0K Apr 29 09:30 .
drwxr-xr-x 1 root root 4.0K May 13 13:06 ..
ls: /usr/local/bin/docker/docker: No such file or directory
As far as I understand pipelines and Docker, /usr/local/bin/docker should be the docker binary file. Instead, it appears to be an empty directory for some reason.
What is going on here?
I've also looked at other, official pipes. They don't do anything differently, but they seem to be using the docker command just fine (e.g. the Azure pipe).
After talking to BitBucket support, I solved the issue. As it turns out, if the docker context is changed, any docker command is sent straight to the remote docker binary, which (on our servers) lives at a different path than in BitBucket Pipelines!
Because we changed the docker context before using the pipe, the docker instance mounted into the pipe still had the remote context set, but the pipe looked for the docker binary at a different place, so the No such file or directory error was thrown.
TL;DR: Always restore the default docker host/context before passing control to a pipe, e.g.:
script:
  - export DEFAULT_DOCKER_HOST=$DOCKER_HOST
  - unset DOCKER_HOST
  - docker context create remote --docker "host=ssh://${DEPLOY_SSH_USER}@${DEPLOY_SSH_HOST}"
  - docker context use remote
  # do your thing
  - export DOCKER_HOST=$DEFAULT_DOCKER_HOST # <------ restore the default host
  - pipe: matchory/swarm-secret-pipe:1.3.16
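An alternative sketch, assuming the agent still has Docker's built-in default context available, is to switch contexts explicitly instead of juggling DOCKER_HOST:
script:
  - docker context create remote --docker "host=ssh://${DEPLOY_SSH_USER}@${DEPLOY_SSH_HOST}"
  - docker context use remote
  # do your thing
  - docker context use default # hand the built-in context back before the pipe runs
  - pipe: matchory/swarm-secret-pipe:1.3.16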
I've read in this tutorial that when you create a docker-compose.yml file and bind-mount volumes, if you don't create the folder on your host, then when running docker-compose up the folder will be created automatically and populated with the content of the container's folder.
Here is the quote:
Then you should volume bind two folders. /etc/nginx is where all your configuration files are stored, and /etc/ssl/private is where your SSL certificates are stored. It is VERY important that your config folder does NOT exist on your host first time you’re starting the container. When you start your container through docker-compose, it will automatically create the folder and populate it with the contents of the container. If you have created an empty config folder on your host, it will mount that, and the folder inside the container will be empty.
But it doesn't seem to work for me.
Here are a few things I've checked.
My Docker is not running as root. I created the docker group on my machine and added my user to it, so I don't need to run sudo docker <command>
I run Ubuntu Server 18 LTS
It doesn't matter if I try to bind mount the volumes as read-only or not
When running docker-compose up it creates the folders, but they are owned by the root user
The created folder owned by the root user are empty
Here is my docker-compose.yml
version: '3'
services:
  nginx:
    image: nginx
    container_name: reverse_nginx
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - "./html:/usr/share/nginx/html"
      - "./conf:/etc/nginx"
      - "./ssl:/etc/ssl/private"
    restart: unless-stopped
And here is what is created:
icare#icare:~/nginx
$ ls -lR
.:
total 16
drwxr-xr-x 2 root root 4096 Mar 20 15:46 conf
-rw-r--r-- 1 icare icare 269 Mar 20 15:27 docker-compose.yml
drwxr-xr-x 2 root root 4096 Mar 20 15:46 html
drwxr-xr-x 2 root root 4096 Mar 20 15:46 ssl
./conf:
total 0
./html:
total 0
./ssl:
total 0
I can't find a solution online. It seems that a person asked on GitHub and solved it the hard way (copying the files out of the container, then bind-mounting everything), but I can't help thinking that there is another way. Or maybe the tutorial I'm following is outdated or wrong?
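For reference, a minimal sketch of that "hard way" (the container name tmp-nginx is just an example, and this assumes ./conf does not exist yet):
docker create --name tmp-nginx nginx
docker cp tmp-nginx:/etc/nginx ./conf   # copies the image's default configuration into ./conf
docker rm tmp-nginx
Afterwards ./conf can be bind-mounted as in the compose file above.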
I've tried to mount a folder using the following docker compose file (partially reproduced; the rest isn't relevant):
version: '3'
services:
  web:
    build: .
    environment:
      - DEBUG=0
    volumes:
      - /usr/share/nginx/html/assets:/assets:Z
However, aside from cd-ing into the folder /assets in the docker container, I get the following error for other operations in the folder (including chmod and chcon):
ls: cannot open directory '.': Permission denied
The folder UID and GID are 0 (i.e. root) and the UID of the bash in docker is also 0.
However, when I remove the Z flag, the docker container is able to read content from the volume, but not write to it.
Here is the output of ls -laZ with the Z flag on:
drwx------. 2 root root system_u:object_r:container_var_run_t:s0 160 Jan 27 15:33 assets
and here is without Z flag:
drwxr-xr-x. 4 root root unconfined_u:object_r:httpd_sys_content_t:s0 72 Jan 23 09:01 assets
It seems that with the Z flag my group and other permissions disappear, but that should not matter because the UID is the same, right?
My question is, how can I get write access to the mounted directory in the docker container?
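One thing worth trying, shown here only as a sketch (whether it helps depends on your SELinux policy), is the lowercase z option, which applies a shared rather than private label to the host directory:
volumes:
  - /usr/share/nginx/html/assets:/assets:z   # lowercase z: shared SELinux label, accessible to multiple containers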
I am trying to set up a multi-container service with docker-compose.
Some of the containers need to be restarted from a fresh container (e.g. the file system should be like in the image) when they restart.
How can I achieve this?
I've found the restart: always option I can put on my service in the docker-compose.yml file, but that doesn't give me a fresh file system as it uses the same container.
I've also seen the --force-recreate option of docker-compose up, but that doesn't apply, as it only recreates the containers when that command is run.
EDIT:
This is probably not a docker-compose issue, but more of a general docker question: What is the best way to make sure a container is in a fresh state when it is restarted? By fresh state, I mean a state identical to that of a brand new container from the same image. By restarted, I mean the docker command docker restart, or docker stop followed by docker start.
In docker, immutability typically refers to the image layers. They are immutable, and any changes are pushed to a container-specific copy-on-write layer of the filesystem. That container-specific layer lasts for the lifetime of the container. So to have those files not persist, you have two options:
Recreate the container instead of just restarting it
Don't write the changes to the container filesystem, and don't write them to any persistent volumes.
You cannot do #1 with a restart policy, by its very definition. A restart policy gives you the same container filesystem, with the application restarted. But if you use docker's swarm mode, it will recreate containers when they exit, so if you can migrate to swarm mode, you can achieve this result, as in the sketch below.
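A hedged sketch of that swarm-mode route (service and image names are illustrative) is a compose file deployed with docker stack deploy, where the restart policy tells swarm to replace the exited container with a fresh one:
version: "3"
services:
  app:
    image: busybox
    command: /bin/sh -c "echo hello; sleep 30"
    deploy:
      restart_policy:
        condition: any   # swarm recreates the container from the image when it exits
Deploy it with docker swarm init (once), followed by docker stack deploy -c docker-compose.yml mystack.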
Option #2 looks more difficult than it is. If you aren't writing to the container filesystem, or to a volume, then where? The answer is a tmpfs volume that is only stored in memory and is lost as soon as the container exits. In compose, this is a tmpfs: /data/dir/to/not/persist line. Here's an example on the docker command line.
First, let's create a container with a tmpfs mounted at /data, add some content, and exit the container:
$ docker run -it --tmpfs /data --name no-persist busybox /bin/sh
/ # ls -al /data
total 4
drwxrwxrwt 2 root root 40 Apr 7 21:50 .
drwxr-xr-x 1 root root 4096 Apr 7 21:50 ..
/ # echo 'do not save' >>/data/tmp-data.txt
/ # cat /data/tmp-data.txt
do not save
/ # ls -al /data
total 8
drwxrwxrwt 2 root root 60 Apr 7 21:51 .
drwxr-xr-x 1 root root 4096 Apr 7 21:50 ..
-rw-r--r-- 1 root root 12 Apr 7 21:51 tmp-data.txt
/ # exit
Easy enough, it behaves as a normal container, let's restart it and check the directory contents:
$ docker restart no-persist
no-persist
$ docker attach no-persist
/ # ls -al /data
total 4
drwxr-xr-x 2 root root 40 Apr 7 21:51 .
drwxr-xr-x 1 root root 4096 Apr 7 21:50 ..
/ # echo 'still do not save' >>/data/do-not-save.txt
/ # ls -al /data
total 8
drwxr-xr-x 2 root root 60 Apr 7 21:52 .
drwxr-xr-x 1 root root 4096 Apr 7 21:50 ..
-rw-r--r-- 1 root root 18 Apr 7 21:52 do-not-save.txt
/ # exit
As you can see, the directory came back empty, and we can add data to it again as needed. The only downside is that the directory will be empty even if you have content in the image at that location. I've tried combinations of named volumes, and the mount syntax with the volume-nocopy option set to 0, without luck. So if you need the directory to be initialized, you'll need to do that as part of your container entrypoint/cmd by copying from another location.
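For completeness, a minimal compose sketch of the tmpfs line mentioned above (service name, image and paths are illustrative):
version: "3"
services:
  app:
    image: busybox
    command: /bin/sh -c "echo 'do not save' > /data/tmp-data.txt; sleep 30"
    tmpfs:
      - /data   # held in memory only and discarded whenever the container stops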
In order not to persist any changes to your containers, it is enough that you don't map any directory from the host into the container.
That way, every time the container runs (with docker run or docker-compose up), it starts with a fresh file system.
docker-compose down also removes the containers, deleting any data.
The best solution I have found so far, is for the container itself to make sure to clean up when starting or stopping. I solve this by cleaning up when starting.
I copy my app files to /srv/template with the docker COPY directive in my Dockerfile, and have something like this in my ENTRYPOINT script:
rm -rf /srv/server/
cp -r /srv/template /srv/server
cd /srv/server
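A fuller sketch of such an entrypoint, assuming a POSIX shell and that the image's CMD should still run after the cleanup:
#!/bin/sh
set -e
# Restore a pristine working copy from the template baked into the image
rm -rf /srv/server
cp -r /srv/template /srv/server
cd /srv/server
# Hand control over to the container's CMD
exec "$@"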