I have the following docker-compose file:
version: "3.8"
services:
api:
image: myuser/myimage
volumes:
- static_volume:/app/static
- /home/deployer/config_files/gunicorn.py:/app/gunicorn.py
- /home/deployer/log_files:/app/log_files
env_file:
- /home/deployer/config_files/api.env
expose:
- 8000
nginx:
image: myuser/nginximage
volumes:
- static_volume:/app/static
- /home/deployer/config_files/nginx.conf:/etc/nginx/conf.d/nginx.conf
ports:
- 80:80
depends_on:
- api
volumes:
static_volume:
The api service was built using the following Dockerfile (summarized to reduce size):
FROM python:3.9.1
WORKDIR /app
# Copy code etc into container
COPY ./api/ .
COPY entrypoint.sh .
# create static and log files directories
RUN mkdir static
RUN mkdir log_files
# create non root user, change ownership of all files, switch to this user
RUN adduser --system --group appuser
RUN chown -R appuser:appuser /app
USER appuser
CMD [ "/app/entrypoint.sh" ]
If I remove /home/deployer/log_files:/app/log_files from the compose file, everything works correctly. However, I am trying to use that directory for gunicorn to write its log files into. Including that line results in the following error on docker-compose up:
Error: Error: '/app/log_files/gunicorn_error.log' isn't writable [PermissionError(13, 'Permission denied')]
On the linux host I am running docker-compose up with the user named deployer. Inside the container as per the Dockerfile I created a user called appuser. I'm guessing this is related to the issue but I'm not sure.
Basically all I'm trying to do is to have log files inside the container be accessible outside the container so that they persist even if the server is restarted.
/app/log_files is still owned by the deployer user inside your container, and appuser does not have permission to write to it. As per your comment, /home/deployer/log_files is owned by deployer:deployer with permissions drwxr-xr-x. Since this is a bind mount, /app/log_files inside the container carries exactly the same ownership and permissions.
Even if the deployer user does not exist in your container, the directory will still be owned by the UID of the deployer user (you can check this by running ls from inside the container).
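For example, a quick way to see this numeric ownership from inside the container (a sketch, reusing the api service name from the compose file above):

docker-compose exec api ls -lnd /app/log_files
# -n prints numeric UID/GID; the numbers shown will match the deployer
# user's UID/GID on the host, whatever those happen to be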
You can:
Add world-writable permissions to /home/deployer/log_files:
chmod 777 /home/deployer/log_files
This may present a security risk, though; the other solution is a bit more complex but better.
Retrieve or set the UID of appuser and give ownership of /home/deployer/log_files to this user. For example, in the Dockerfile, create appuser with the specific UID 1500:
RUN adduser --uid 1500 --system --group appuser
And from your host, change the directory owner to this UID:
sudo chown 1500:1500 /home/deployer/log_files
At container runtime, appuser (UID 1500) will then be able to write to this directory
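A hedged end-to-end check of this recipe (service name api from the compose file above; the test file name is made up):

docker-compose build api                                   # rebuild with appuser at UID 1500
sudo chown 1500:1500 /home/deployer/log_files              # hand the host directory to that UID
docker-compose up -d
docker-compose exec api id                                 # expect uid=1500(appuser)
docker-compose exec api touch /app/log_files/write_test    # should now succeed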
More generally, you should ensure /home/deployer/log_files is writable by the user running inside the container, while keeping its access secure if needed.
Related
I have a project with this structure:
app/
  selenium/
    downloads/
.env
docker-compose.yml
docker-compose.yml has 2 services, myApp and selenium, and these volumes are set for my services:
myApp:
  volumes:
    - ./app:/home/app
selenium:
  volumes:
    - ./app/selenium/downloads:/home/seluser/Downloads
When I run docker compose up in this directory, my application and selenium start and I can develop my application. Sometimes I need the selenium container to fetch some content from the web.
When the selenium container downloads a file, the file is stored in ./app/selenium/downloads and I can read it from the myApp container.
Problem
When I changed the selenium default download directory to /home/seluser/Downloads/aaa/bbb, the selenium container can access this directory, which appears in the myApp container as /home/app/selenium/downloads/aaa/bbb, but I can't access the aaa directory in the myApp container.
I can change the permissions of the aaa directory as the root user in the myApp container and solve the problem, but the default download directory can change for every downloaded file.
What's the solution to this problem?
I guess you are running into trouble because the main processes of your two containers run as different users. An easy fix is to modify your Dockerfiles and set the same user id (UID) for both of these users; files and directories generated by one container can then be accessed properly by the user of the other container.
An example Dockerfile fragment that creates a user with a specified UID:
ARG UNAME=testuser
ARG UID=1000
ARG GID=1000
# create a group and user with fixed numeric ids
RUN groupadd -g $GID -o $UNAME
RUN useradd -m -u $UID -g $GID -o -s /bin/bash $UNAME
# run the container's main process as this user
USER $UNAME
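You could then build both images with matching ids, for example (a sketch; the image names and build contexts are my assumptions, and for a pulled image such as selenium you would first extend it with a small Dockerfile like the one above):

# build both images with the same UID/GID so files in the shared volume
# are owned by the same numeric user in both containers
docker build --build-arg UID=1000 --build-arg GID=1000 -t my-app ./app
docker build --build-arg UID=1000 --build-arg GID=1000 -t my-selenium ./selenium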
There is another way using umask to assign proper permissions to the files/directories generated in your selenium container (to grant READ access to all users). But I think it's more complicated, and it may take some time to work out which umask is suitable for you: https://phoenixnap.com/kb/what-is-umask
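As a rough sketch of the umask route (the wrapper file name is hypothetical, and it assumes you can override the image's entrypoint):

#!/bin/sh
# umask-entrypoint.sh: with umask 022, new files get mode 644 and new
# directories 755, so every user can read files and traverse directories
umask 022
exec "$@"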
This is my Dockerfile.
FROM python:3.8.12-slim-bullseye as prod-env
RUN apt-get update && apt-get install unzip vim -y
WORKDIR /app
COPY requirements.txt /app
RUN pip install -r requirements.txt
USER nobody:nogroup
This is what docker-compose.yml looks like:
api_server:
  build:
    context: .
    target: prod-env
  image: company/server
  volumes:
    - ./shared/model_server/models:/models
    - ./static/images:/images
  ports:
    - 8200:8200
  command: gunicorn -b 0.0.0.0:8200 --threads "8" --log-level info --reload "server:gunicorn_app(command='start', project='app_server')"
I want to add read, write, and execute permissions on the shared directories.
I also need to run a couple of other commands as root.
So I have to execute this command as root every time after the image is built:
docker exec -it -u root api_server_1 bash -c "python copy_stuffs.py; chmod -R a+rwx models; chmod -R a+rwx /images"
Now, I want docker-compose to execute these lines.
But as you can see, the user in docker-compose has to be nobody, as specified by the Dockerfile. So how can I execute root commands in the docker-compose file?
An option that I've been considering:
Install the sudo command in the Dockerfile and use sudo
Is there any better way?
In docker-compose.yml, create another service using the same image and volumes.
For this new service, override the user with user: root:root and the command with the command you need to run as root, and add a dependency so that this new service runs before the regular working container starts:
api_server:
  build:
    context: .
    target: prod-env
  image: company/server
  volumes:
    - ./shared/model_server/models:/models
    - ./static/images:/images
  ports:
    - 8200:8200
  command: gunicorn -b 0.0.0.0:8200 --threads "8" --log-level info --reload "server:gunicorn_app(command='start', project='app_server')"
  # This makes sure that the startup order is correct and that the
  # api_server_decorator service starts first
  depends_on:
    - api_server_decorator
api_server_decorator:
  build:
    context: .
    target: prod-env
  image: company/server
  volumes:
    - ./shared/model_server/models:/models
    - ./static/images:/images
  # No ports needed - it is only a decorator
  # Overriding USER with root:root
  user: "root:root"
  # Overriding the command; a shell is needed so the three commands run in sequence
  command: sh -c "python copy_stuffs.py; chmod -R a+rwx /models; chmod -R a+rwx /images"
There are other possibilities, such as removing the USER restriction from the Dockerfile and using an entrypoint script that does the privileged work as root and then drops privileges by running su - nobody, or better, exec gosu, to retain PID 1 and proper signal handling.
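A minimal sketch of that entrypoint approach, assuming gosu is installed in the image (the script names reuse the ones from the question):

#!/bin/sh
# entrypoint.sh: run the privileged setup as root first
python copy_stuffs.py
chmod -R a+rwx /models /images
# then drop to nobody; exec keeps the application as PID 1 for signal handling
exec gosu nobody:nogroup "$@"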
In my eyes, the approach of giving a container root rights is quite hacky and dangerous.
If you want to, e.g., remove the files written by the container, you need root rights on the host as well.
If you want to allow a container to access files on the host filesystem, just run the container with an appropriate user:
api_server:
  user: my_docker_user:my_docker_group
Then, on the host, give the rights to that user and group:
sudo chown -R my_docker_user:my_docker_group models
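Note that the user: value must resolve inside the container: if my_docker_user does not exist in the image's /etc/passwd, numeric ids work too (a sketch):

# on the host, look up the ids once, then put e.g. user: "1000:1000"
# in docker-compose.yml instead of the names
id -u my_docker_user
id -g my_docker_user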
You should build all of the content you need into the image itself, especially if you have this use case of occasionally needing to run a process to update it (you are not trying to use an isolation tool like Docker to simulate a local development environment). In your Dockerfile, COPY these directories into the image
COPY shared/model_server/models /models
COPY static/images /images
Do not make these directories writeable, and do not make the individual files in the directories executable. The directories will generally be mode 0755 and the files mode 0644, owned by root, and that's fine.
In the Compose setup, do not mount host content over these directories either. You should just have:
services:
  api_server:
    build: .  # use the same image in all environments
    image: company/server
    ports:
      - 8200:8200
    # no volumes:, do not override the image's command:
Now when you want to update the files, you can rebuild the image (without interrupting the running application, without docker exec, and without an alternate user)
docker-compose build api_server
and then do a relatively quick restart, running a new container on the updated image
docker-compose up -d
I have a docker-compose setup that uses an image built from a base Dockerfile for the application.
The Dockerfile looks similar to the one below; some lines are omitted for brevity.
FROM ubuntu:18.04
RUN set -e -x ;\
apt-get -y update ;\
apt-get -y upgrade ;
...
USER service
When using this image in docker-compose and adding a named volume to the service, the folder in the named volume is not accessible: accesses fail with the message Permission denied. The relevant part of docker-compose looks like this:
version: "3.1"
services:
myapp:
image: myappimage
command:
- /myapp
ports:
- 12345:1234
volumes:
- logs-folder:/var/log/myapp
volumes:
logs-folder:
My assumption was that the USER service line is the issue, which I confirmed by setting user: root for the myapp service.
Now, the question is this: I would like to avoid manually creating the volume and setting permissions; I would like it to be automated using docker-compose.
Is this possible, and if yes, how can it be done?
Yes, there is a trick. Not really in the docker-compose file, but in the Dockerfile: you need to create the /var/log/myapp folder and set its permissions before switching to the service user:
FROM ubuntu:18.04
RUN useradd myservice
RUN mkdir /var/log/myapp
RUN chown myservice:myservice /var/log/myapp
...
USER myservice:myservice
Docker Compose will preserve the permissions: when an empty named volume is first mounted, Docker initializes it from the image, copying the mount point's contents and ownership into the volume.
See Docker Compose mounts named volumes as 'root' exclusively
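You can verify this with a quick check (a sketch, reusing the myapp service from the compose file above):

# on first use the empty named volume is initialized from the image,
# so the directory should already be owned by myservice
docker-compose run --rm myapp ls -ld /var/log/myapp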
I had a similar issue, but mine was related to a file shared via a volume with a service I was not building from a Dockerfile but pulling. I had shared a shell script that I used in docker-compose, but when I executed it, it did not have execute permission.
I resolved it by using chmod in the command of docker-compose:
command: -c "chmod a+x ./app/wait-for-it.sh && ./app/wait-for-it.sh -t 150 -h ..."
volumes:
  - ./wait-for-it.sh:/app/wait-for-it.sh
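An alternative (my suggestion, not part of the original fix) is to set the execute bit on the host before mounting, since bind mounts carry the host file's permissions into the container:

# one-time fix on the host; the bind-mounted copy keeps the execute bit
chmod a+x ./wait-for-it.sh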
You can change the volume source's permissions to avoid the Permission denied error:
chmod a+x logs-folder
Usually, when I develop an application using a docker container as my development test base, I need a shell in it to manually run composer, phpunit, npm, bower and various development scripts. I spawn one via the following command:
docker exec -ti <container> /bin/sh
But the shell is spawned with root permissions. What I want to achieve is to spawn a shell without root permissions but with a specified user's ones.
How can I do that?
In my case my Dockerfile has the following entries:
FROM php:5.6-fpm-alpine
ARG UID="1000"
ARG GID="1000"
COPY ./entrypoint.sh /usr/local/bin/entrypoint.sh
COPY ./fpm.conf /usr/local/etc/php-fpm.d/zz-docker.conf
RUN chmod +x /usr/local/bin/entrypoint.sh &&\
addgroup -g ${GID} developer &&\
adduser -D -H -S -s /bin/false -G developer -u ${UID} developer
ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]
CMD ["php-fpm"]
I mount a directory of the project I am developing from the host into /var/www/html, preserving the user permissions, so I just need the following docker-compose.yml in order to build it:
version: '2'
services:
  php_dev:
    build:
      context: .
      dockerfile: Dockerfile
      args:
        XDEBUG_HOST: 172.17.0.1
        XDEBUG_PORT: 9021
        UID: 1000
        GID: 1000
    image: pcmagas/php_dev
    links:
      - somedb
    volumes:
      - "$SRC_PATH:/var/www/html:Z"
So, by setting UID and GID to my host user's user id and group id, and with the following config for fpm:
[global]
daemonize = no

[www]
listen = 9000
user = developer
group = developer
I manage to apply any change to my code without worrying about mysterious changes to file ownership. But I want to be able to spawn a shell inside the running php_dev container as the developer user, so any future tool such as composer or npm will run with the appropriate user permissions.
Of course, I guess the same principles apply to other languages as well, for example pip for Python.
In case you need to run the container as a non-root user, you have to add the following line to your Dockerfile:
USER developer
Note that in order to mount a directory through docker-compose.yml, you have to change the ownership of that directory before running docker-compose up, by executing the following command:
chown UID:GID /path/to/folder/on/host
UID and GID should match the UID and GID of the container's user.
This will make the user able to read and write to the mounted volume without any issues.
Read more about the USER directive.
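A hypothetical end-to-end sequence for this setup (the container name php_dev_1 is an assumption; check yours with docker ps):

sudo chown -R 1000:1000 "$SRC_PATH"    # match the host dir to the container user's UID/GID
docker-compose up -d
docker exec -ti php_dev_1 whoami       # expect: developer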
In the end, putting the line:
USER developer
really works: the shell spawned via
docker exec -ti <container> /bin/sh
runs as that user.
I've read quite a few threads on the internet about how to best mount local (project) directories into a Docker container so that the directories are not owned by the root user. Unfortunately, I've not found a precise answer.
I'm building my development stack with this docker-compose.yml (SfDocker) file:
db:
  image: mysql:latest
  ports:
    - "3306:3306"
  environment:
    MYSQL_ROOT_PASSWORD: symfonyrootpass
    MYSQL_DATABASE: symfony
    MYSQL_USER: symfony
    MYSQL_PASSWORD: symfonypass
worker:
  image: symfony/worker-dev
  ports:
    - "8080:80"
  environment:
    XDEBUG_HOST: 192.168.1.194
    XDEBUG_PORT: 9000
    XDEBUG_REMOTE_MODE: req
  links:
    - db
  volumes:
    - "var/nginx/:/var/log/nginx"
    - symfony-code:/var/www/app
Volumes are mounted at runtime, only after the images are built. I've added a new user with RUN groupadd -r luqo33 && useradd -r -g luqo33 luqo33 in the symfony/worker-dev image, but I was not able to chown the mounted volumes so that they are owned by luqo33:www-data. I've tried to do it by:
1. Copying and running an entrypoint.sh with a chown command:
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
The container would start and then shut down for no apparent reason.
2. Executing CMD chown -R luqo33:www-data while starting the containers: this did not work because, at the time of starting the worker-dev container, the volumes seemed not to be mounted yet.
I did not manage to set the ownership of the mounted directories to users other than root. How can I achieve this?
You seem to be a bit confused about how Docker works, especially with regard to entrypoint and cmd scripts.
Any script referenced in an ENTRYPOINT or CMD instruction will be executed by the container at run-time. Once the script finishes, the container will exit. For this reason, you need to both run your chown and start the application in the script.
If the current user is root, a script like the following should work fine to set ownership and start the app:
#!/bin/bash
# give the app user ownership of the mounted volume, then start the app as that user
chown -R luqo33:www-data /var/www/app
exec sudo -u luqo33 start-my-app-in-foreground-script-or-bin
There is a slight problem with sudo creating two processes, however, so you may want to use gosu instead.
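With gosu (assuming it is installed in the image), the script could look like this instead; gosu execs the target program directly, so the app replaces the script as PID 1:

#!/bin/bash
# fix ownership of the mounted volume at start-up, then drop privileges
chown -R luqo33:www-data /var/www/app
exec gosu luqo33 start-my-app-in-foreground-script-or-bin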