Docker-compose and named volume permission denied

There is a docker-compose setup that uses an image built from a base Dockerfile for the application.
The Dockerfile looks similar to the one below; some lines are intentionally omitted.
FROM ubuntu:18.04
RUN set -e -x ;\
    apt-get -y update ;\
    apt-get -y upgrade ;
...
USER service
When using this image in docker-compose and adding a named volume to the service, the folder in the named volume is not accessible; access fails with Permission denied. The relevant part of the docker-compose file looks like this:
version: "3.1"
services:
myapp:
image: myappimage
command:
- /myapp
ports:
- 12345:1234
volumes:
- logs-folder:/var/log/myapp
volumes:
logs-folder:
My assumption was that the USER service line is the issue, which I confirmed by setting user: root in the myapp service.
Now, the question: I would like to avoid manually creating the volume and setting permissions; I would like this to be automated using docker-compose.
Is this possible and, if yes, how can it be done?

Yes, there is a trick, though not really in the docker-compose file but in the Dockerfile: you need to create the /var/log/myapp folder and set its permissions before switching to the service user:
FROM ubuntu:18.04
RUN useradd myservice
RUN mkdir /var/log/myapp
RUN chown myservice:myservice /var/log/myapp
...
USER myservice:myservice
Docker Compose will preserve these permissions: when an empty named volume is first mounted, Docker copies the directory's contents and ownership from the image into the volume.
See Docker Compose mounts named volumes as 'root' exclusively
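To verify, a quick check (a sketch, assuming the Compose file and Dockerfile above):
docker-compose run --rm myapp ls -ld /var/log/myapp
# expected: ownership copied from the image into the freshly created volume, e.g.
# drwxr-xr-x 2 myservice myservice 4096 ... /var/log/myapp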

I had a similar issue, but mine was related to a file shared via a volume with a service I was not building from a Dockerfile but pulling. I had shared a shell script used in docker-compose, but when I executed it, it did not have execute permission.
I resolved it by using chmod in the command of docker-compose:
command: -c "chmod a+x ./app/wait-for-it.sh && ./app/wait-for-it.sh -t 150 -h ..."
volumes:
  - ./wait-for-it.sh:/app/wait-for-it.sh

Alternatively, you can change the volume source's permissions on the host to avoid the Permission denied error:
chmod a+x logs-folder

Related

How to run only specific command as root and other commands with default user in docker-compose

This is my Dockerfile.
FROM python:3.8.12-slim-bullseye as prod-env
RUN apt-get update && apt-get install unzip vim -y
COPY requirements.txt /app
RUN pip install -r requirements.txt
USER nobody:nogroup
This is what the docker-compose.yml looks like:
api_server:
  build:
    context: .
    target: prod-env
  image: company/server
  volumes:
    - ./shared/model_server/models:/models
    - ./static/images:/images
  ports:
    - 8200:8200
  command: gunicorn -b 0.0.0.0:8200 --threads "8" --log-level info --reload "server:gunicorn_app(command='start', project='app_server')"
I want to add read, write, and execute permissions on the shared directories, and I also need to run a couple of other commands as root.
So currently I have to execute this command as root every time after the image is built:
docker exec -it -u root api_server_1 bash -c "python copy_stuffs.py; chmod -R a+rwx models; chmod -R a+rwx /images"
Now, I want docker-compose to execute these lines.
But as you can see, the user in docker-compose has to be nobody, as specified by the Dockerfile. So how can I execute root commands from the docker-compose file?
One option I have been considering: install the sudo command in the Dockerfile and use sudo.
Is there a better way?
In docker-compose.yml, create another service using the same image and volumes.
For this new service, override the user with user: root:root and the command with your command to run as root, then add a dependency so this new service runs before the regular working container starts:
api_server:
  build:
    context: .
    target: prod-env
  image: company/server
  volumes:
    - ./shared/model_server/models:/models
    - ./static/images:/images
  ports:
    - 8200:8200
  command: gunicorn -b 0.0.0.0:8200 --threads "8" --log-level info --reload "server:gunicorn_app(command='start', project='app_server')"
  # This makes sure that the startup order is correct and the api_server_decorator service starts first
  depends_on:
    - api_server_decorator
api_server_decorator:
  build:
    context: .
    target: prod-env
  image: company/server
  volumes:
    - ./shared/model_server/models:/models
    - ./static/images:/images
  # No ports needed - it is only a decorator
  # Overriding USER with root:root
  user: "root:root"
  # Overriding the command
  command: python copy_stuffs.py; chmod -R a+rwx models; chmod -R a+rwx /images
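One caveat: Compose passes a command string to the container without invoking a shell, so the ; separators above would not be interpreted unless the image's entrypoint provides one. A shell-wrapped variant (a sketch, assuming sh exists in the image):
command: sh -c "python copy_stuffs.py; chmod -R a+rwx models; chmod -R a+rwx /images"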
There are other possibilities, like changing the Dockerfile to remove the USER restriction. You can then use an entrypoint script that does the privileged work as root and drops to the unprivileged user with su - nobody, or better exec gosu, to retain PID 1 and proper signal handling.
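A minimal sketch of such an entrypoint, assuming gosu is installed in the image, USER is removed from the Dockerfile, and the gunicorn invocation is passed as the container command:
#!/bin/sh
# entrypoint.sh - do root-only setup, then drop privileges.
set -e

# Privileged setup (the container starts as root; paths taken from the question).
python copy_stuffs.py
chmod -R a+rwx /models /images

# Replace this shell with the real command as the unprivileged user;
# exec keeps PID 1 so signals reach the application directly.
exec gosu nobody:nogroup "$@"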
In my eyes, giving a container root rights is quite hacky and dangerous: if you want to, e.g., remove the files written by the container, you need root rights on the host as well.
If you want to allow a container to access files on the host filesystem, just run the container with the appropriate user:
api_server:
  user: my_docker_user:my_docker_group
Then, on the host, give the rights to that group:
sudo chown -R my_docker_user:my_docker_group models
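Note, as a hedge: a user: value by name only works if it exists in the image's /etc/passwd; otherwise numeric IDs are needed (hypothetical values):
api_server:
  user: "1000:1000"  # numeric UID:GID works even without a matching passwd entry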
You should build all of the content you need into the image itself, especially if you have this use case of occasionally needing to run a process to update it (you are not trying to use an isolation tool like Docker to simulate a local development environment). In your Dockerfile, COPY these directories into the image
COPY shared/model_server/models /models
COPY static/images /images
Do not make these directories writeable, and do not make the individual files in the directories executable. The directories will generally be mode 0755 and the files mode 0644, owned by root, and that's fine.
In the Compose setup, do not mount host content over these directories either. You should just have:
services:
  api_server:
    build: .  # use the same image in all environments
    image: company/server
    ports:
      - 8200:8200
    # no volumes:, do not override the image's command:
Now when you want to update the files, you can rebuild the image (without interrupting the running application, without docker exec, and without an alternate user):
docker-compose build api_server
and then do a relatively quick restart, running a new container on the updated image:
docker-compose up -d
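The two steps can also be combined into one command:
docker-compose up -d --build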

docker-compose named volume with one file: ERROR: Cannot create container for service, source is not directory

I am trying to make the binary file /bin/wkhtmltopdf from the wkhtmltopdf container available in the web container. I am trying to achieve this with a named volume.
I have the following docker container setup in my docker-compose.yml:
services:
  web:
    image: php:7.4-apache
    command: sh -c "mkdir -p /usr/local/bin && touch /usr/local/bin/wkhtmltopdf"
    entrypoint: sh -c "exec 'apache2-foreground'"
    volumes:
      - wkhtmltopdfvol:/usr/local/bin/wkhtmltopdf
  wkhtmltopdf:
    image: madnight/docker-alpine-wkhtmltopdf
    command: sh -c "touch /bin/wkhtmltopdf"
    entrypoint: sh -c "tail -f /dev/null" # workaround to keep container running
    volumes:
      - wkhtmltopdfvol:/bin/wkhtmltopdf
volumes:
  wkhtmltopdfvol:
However, I get the following error when running docker-compose up:
ERROR: for wkhtmltopdf Cannot create container for service wkhtmltopdf:
source /var/lib/docker/overlay2/42e7082b8024ae4ebb13a4f0003a9e17bc18b33ef0677431dd002da3c21dde88/merged/bin/wkhtmltopdf is not directory
Does that mean that I can't share a single file between containers through a named volume, only directories? How do I achieve this?
Edit: I also noticed that /usr/local/bin/wkhtmltopdf inside the web container is a directory and not a file as I expected.
It can be tricky to share binaries between containers like this. Volumes probably aren't the mechanism you're looking for.
If you look at the Docker Hub page for the php image you can see that php:7.4-apache is an alias for (currently) php:7.4.15-apache-buster, where "Buster" is the name of a Debian release. You can then search on https://packages.debian.org/ to discover that Debian has a prepackaged wkhtmltopdf package. You can install this using a custom Dockerfile:
FROM php:7.4-apache
RUN apt-get update \
 && DEBIAN_FRONTEND=noninteractive \
    apt-get install --assume-yes --no-install-recommends \
      wkhtmltopdf
# COPY ...
# Base image provides EXPOSE, CMD
Then your docker-compose.yml file needs to build this image:
version: '3.8'
services:
  web:
    build: .
    # no image:, volumes:, or command: override
Just in terms of the mechanics of sharing binaries like this, you can run into trouble where a binary needs a shared library that's not present in the target container. The apt-get install mechanism handles this for you. There are also potential troubles if a container has a different shared-library ecosystem (especially Alpine-based containers), or using host binaries from a different operating system.
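As a hypothetical illustration of the shared-library issue (assuming the containers from the question are running and ldd is present in the image):
# List the libraries the binary links against, inside the source container;
# each one must also exist in the target container for a copied binary to run.
docker-compose exec wkhtmltopdf ldd /bin/wkhtmltopdf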
The Compose file you show mixes several concepts in a way that doesn't really work. A named volume is always a directory, so trying to mount one over the /bin/wkhtmltopdf file in the second container causes the error you see. There's also a dependency question of which container starts first and gets to create the volume. And a container only runs a single command: if you have both entrypoint: and command:, the command is passed as extra arguments to the entrypoint (and if the entrypoint is an sh -c ... invocation, it is effectively ignored).
If you really wanted to try this approach, you should make web: {depends_on: [wkhtmltopdf]} to force the dependency order. The second container should mount the volume somewhere else, it probably shouldn't have an entrypoint:, and it should do something like command: cp -a /bin/wkhtmltopdf /export. (It will exit immediately once this cp finishes, but that shouldn't matter.) The first container can then mount the volume on, say, /usr/local/bin, and not specially set command: or entrypoint:. There will still be a minor race condition (you're not guaranteed the cp command will complete before Apache starts) but it probably wouldn't be a practical problem.
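A sketch of that approach, under the assumptions above (untested; image names and paths taken from the question):
services:
  web:
    image: php:7.4-apache
    depends_on:
      - wkhtmltopdf
    volumes:
      - wkhtmltopdfvol:/usr/local/bin   # mount the volume as a directory
  wkhtmltopdf:
    image: madnight/docker-alpine-wkhtmltopdf
    # exits as soon as the copy finishes; if the image defines its own
    # ENTRYPOINT, it may also need to be overridden
    command: cp -a /bin/wkhtmltopdf /export
    volumes:
      - wkhtmltopdfvol:/export
volumes:
  wkhtmltopdfvol: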

Dockerfile file not found in keycloak docker image

I recently tried to clone our production code in local setup which means this code is running in production.
The Dockerfile looks like:
FROM jboss/keycloak
COPY km.json /opt/jboss
COPY entrypoint.sh /opt/jboss
USER root
RUN chown jboss /opt/jboss/entrypoint.sh && chmod +x /opt/jboss/entrypoint.sh
USER 1000
ENTRYPOINT ["/opt/jboss/entrypoint.sh"]
CMD [""]
I am successfully able to create the docker image, but when I try to run it, I get the error:
Caused by: java.io.FileNotFoundException: km.json (No such file or directory)
Repo structure
km/keycloak-images/km.json
km/keycloak-images/DockerFile
km/keycloak-images/entrypoint.sh
Docker compose file structure
/km/docker-compose.yml
/km/docker-compose.dev.yml
The docker-compose.dev.yml looks like:
version: '3'
# The only service we expose in local dev is the keycloak server
# running an h2 database.
services:
  keycloak:
    build: keycloak-image
    image: dt-keycloak
    environment:
      DB_VENDOR: h2
      KEYCLOAK_USER: admin
      KEYCLOAK_PASSWORD: password
      KEYCLOAK_HOSTNAME: localhost
    ports:
      - 8080:8080
I run the command from /km:
docker-compose -f docker-compose.dev.yml up --build
Basically, it is not able to find the file inside the docker container. To check:
$ docker run --rm -it <containerName> /bin/bash   # run the image and get a shell inside the container
$ cd /opt/jboss                                   # check whether the km.json file is there or not
Edited: basically, the source path in COPY (km.json) is incorrect. Try using an absolute path, or make it relative:
FROM jboss/keycloak
COPY ./km.json /opt/jboss # changed this line
COPY entrypoint.sh /opt/jboss
USER root
RUN chown jboss /opt/jboss/entrypoint.sh && chmod +x /opt/jboss/entrypoint.sh
USER 1000
ENTRYPOINT ["/opt/jboss/entrypoint.sh"]
CMD [""]
Your COPY operation is wrong. If you run from
/km
you probably need to change the COPY to:
COPY keycloak-images/km.json /opt/jboss
If you run on a Mac, try using ADD instead of COPY, since Mac has many issues with COPY.
Try with this compose file:
version: '3'
services:
  keycloak:
    build:
      context: ./keycloak-images
    image: dt-keycloak
    environment:
      DB_VENDOR: h2
      KEYCLOAK_USER: admin
      KEYCLOAK_PASSWORD: password
      KEYCLOAK_HOSTNAME: localhost
    ports:
      - 8080:8080
You have to specify the docker build context so that the files you need to copy are passed to the daemon. Note that you need to adapt this context path when you do not execute docker-compose from the km directory. This is because in your Dockerfile you have specified:
COPY km.json /opt/jboss
COPY entrypoint.sh /opt/jboss
meaning that the build context sent to the Docker daemon must be a directory containing these files.
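For example, with a hypothetical layout where you run docker-compose from the parent directory of km, the context would need to become:
build:
  context: ./km/keycloak-images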

In Docker, how do I detect from inside a container whether a file or a directory is mounted by Docker?

I did docker-compose down / docker container rm and noticed that I lost all my data created in the container. Yes, I forgot to mount my local directory as a volume in the first place. 😭
To prevent this, on startup, I want to warn users that "the data will be non-persistent" if the local volume is not mounted.
Is there a way to detect from inside the container whether a file or a directory was mounted by Docker?
I googled it but couldn't find a good way. And my current workaround is:
FROM alpine:latest
RUN \
  mkdir /data && \
  touch /data/.replaceme
...
ENTRYPOINT /detect-mount-and-start.sh
detect-mount-and-start.sh checks whether /data/.replaceme still exists. If it does, nothing was mounted over /data, so the script warns the user to mount a local volume and exits.
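A minimal sketch of that script, given the layout above (the application command at the end is hypothetical):
#!/bin/sh
# detect-mount-and-start.sh
# The image ships /data/.replaceme; any volume or bind mount over /data hides it.
if [ -e /data/.replaceme ]; then
    >&2 echo "WARNING: /data is not mounted; data created here will NOT persist."
    >&2 echo "Mount a host directory, e.g.: docker run -v \$(pwd)/mydata:/data ..."
    exit 1
fi
exec markdown-editor   # hypothetical application command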
Is there a better way to detect it?
Note (2019/09/12): This container is used not only via docker-compose up but also via docker run --rm. And the local directory name is not fixed; it can be -v $(pwd)/mydata:/data, -v $(pwd)/data_local:/data, etc.
Note (2019/09/15): The situation is: I launched a container of a Markdown editor and created something like 100 .md files. Those files were saved in /data in the root of the container. I should have mounted the volume like -v $(pwd)/data:/data before everything, but I didn't ... and noticed it only after removing the container. My bad, I know.
I don't know if I understand your question, but when you use docker-compose down, it may destroy your data, depending on how you wrote your docker-compose.yml. See:
This will delete your data when you execute down:
version: '2'
services:
  mysqldb:
    image: mysql:5.7
This won't delete your data when you execute down (host bind mount):
version: '2'
services:
  mysqldb:
    image: mysql:5.7
    volumes:
      - ./data:/var/lib/mysql
This won't delete your data either (external named volume):
version: '2'
services:
  mysqldb:
    image: mysql:5.7
    volumes:
      - data-volume:/var/lib/mysql
volumes:
  data-volume:
    external: true
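One note: a volume declared with external: true is not created by Compose; it must already exist before the stack starts:
docker volume create data-volume
docker-compose up -d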
PS: I do not have enough reputation to comment on your question, so I am answering.
The way you are doing it may also work. I was working with a client on a Node.js project in a development environment, and they needed to make sure server.js existed before starting the container, where server.js was expected to come from the mount location. Since I did not find a way to sense a shared Docker volume from inside the container, I came up with this approach.
Dockerfile
FROM alpine
RUN mkdir -p /myapp
COPY . /myapp
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
entrypoint.sh
#!/bin/sh
APP_PATH="/myapp"
files=$(ls /myapp/*.js)
echo "Files in Docker mount location: $files"
if [ -f "$APP_PATH/server.js" ] && [ -f "$APP_PATH/index.js" ]; then
  echo "Starting container with host mount files"
  echo "Starting server.js"
  cd $APP_PATH
  node server.js
else
  >&2 echo "Error: Please mount the host location on the /myapp path of the container, i.e. -v host_node_project:/myapp. Current files: $(ls $APP_PATH)"
  exit 1  # was 'break', which is invalid outside a loop
fi
build and run
docker build -t myapp .
docker run -it --rm --name myapp myapp
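And, hypothetically, the success path with a host directory mounted (the directory name is taken from the script's own error message):
docker run -it --rm -v "$(pwd)/host_node_project:/myapp" --name myapp myapp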
docker-compose stop doesn't destroy your containers.
Then you can use:
docker-compose start

Run sudo commands from inside of docker compose

I am using an AWS EC2 instance and have installed docker and docker-compose on Amazon Linux.
Now I have a docker-compose.yml file which tries to run the command mkdir -p /workspace/.m2/repositories. This command requires sudo, otherwise it gives a permissions error.
I tried adding sudo inside docker-compose, but it gave me an error saying:
sudo: command not found
I can run this command manually and comment it out inside the docker-compose.yml file, but I am interested to know: is there any way to run this command from inside the docker-compose.yml file?
I may have a solution for you: I think you can extend the strongbox image in a custom Dockerfile to solve this issue.
Create a new Dockerfile, like this one:
Dockerfile
FROM strongboxci/alpine:jdk8-mvn-3.5
USER root
RUN mkdir -p /workspace/.m2/repositories
RUN chown jenkins:jenkins /workspace/.m2/repositories
USER jenkins
Then build the image with something like this:
docker build -t mystrongbox:01 .
And finally update the docker-compose.yml file to this:
docker-compose.yml
version: '2'
services:
  strongbox-from-web-core:
    image: mystrongbox:01
    command:
      - /bin/bash
      - -c
      - |
        echo ""
        echo "[NOTICE] This will take at least 2 to 5 minutes to start depending on your machine and connection!"
        echo ""
        echo " Open http://localhost:48080/storages to browse the repository contents."
        echo ""
        sleep 5
        mkdir -p /workspace/.m2/repositories
        mvn clean install -DskipTests -Dmaven.repo.local=/workspace/.m2/repositories
        cd strongbox-web-core
        mvn spring-boot:run -Dmaven.repo.local=/workspace/.m2/repositories
    ports:
      - 48080:48080
    volumes:
      - ./:/workspace
    working_dir: /workspace
Finally try again with:
docker-compose up
Then you will have the directory created in the image already, and ownership set to the jenkins user.
I'm one of the developers at strongbox/strongbox. We're thrilled that someone is trying out our Docker images for development :)
Now this command requires sudo, otherwise it gives permissions error.
What you are experiencing is likely a permission issue. Our Docker images run as user.group = 1000.1000 (which is usually the first user on many distributions). I suspect that your UID/GID is different, which you can check with id -u and id -g. If it's something other than 1000.1000, you would need a workaround:
Create a user & group with IDs 1000.1000:
groupadd -g 1000 jenkins
useradd -u 1000 -g 1000 -s /bin/bash -m jenkins
Chown/chmod the cloned strongbox project like this:
chown -R `id -u`.1001 /path/to/strongbox-project
chmod -R 775 /path/to/strongbox-project
Try docker-compose up again.
This image does not have sudo installed, so you wouldn't be able to execute it. However, you shouldn't need it anyway, because /workspace is mounted from your filesystem (this is the strongbox project) and the build will write /workspace/.m2/repository into the volume.
