I am using docker-compose to create a number of Docker containers. All of the containers have a shared volume.
volumes:
- ${PHP_SERVICES_FOLDER}:/var/www/web
The Docker containers are as follows:
Jenkins (FROM jenkins/jenkins:latest) - writes to the shared volume
Nginx (FROM nginx) - reads from the shared volume and uses the php-fpm container
PHP-FPM (FROM php:7.2-fpm)
With the volume's files having permissions 777, Nginx and PHP-FPM can read, write and execute the files, but as soon as I trigger a build in Jenkins that updates files in the volume, the updated files are no longer accessible to the other containers.
I think the reason it works when the permissions are 777 is that this allows 'other' users full access to the volume.
How can I have Nginx, PHP-FPM and Jenkins use the same user to read, write and execute files in that volume?
You could create a user in each of the Dockerfiles with the same UID, and then grant permissions on the volume to that UID.
For example:
# your dockerfile
RUN groupadd -g 799 appgroup
RUN useradd -u 799 -g appgroup shareduser
USER shareduser
Then you simply need to chown everything in the volume to the newly created UID (in any container or on the host, as UIDs are shared between container and host):
chown -R 799:799 /volume/root/
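For example, run from the host, reusing the ${PHP_SERVICES_FOLDER} path from your compose file (a sketch; it assumes that variable is set in your shell, otherwise substitute the actual path):
sudo chown -R 799:799 "${PHP_SERVICES_FOLDER}"
ls -ln "${PHP_SERVICES_FOLDER}"
The second command just verifies the change: the numeric owner column should now show 799.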
I have a project with this structure:
app
- selenium
- downloads
.env
docker-compose.yml
docker-compose.yml has 2 services, myApp and selenium, and these volumes are set for the services:
myApp:
volumes:
- ./app:/home/app
selenium:
volumes:
- ./app/selenium/downloads:/home/seluser/Downloads
When I run docker compose up in this directory, my application and Selenium start up and I can develop my application. Sometimes I need the Selenium container to fetch some content from the web.
When the Selenium container downloads a file, the file is stored in ./app/selenium/downloads and I can read it from the myApp container.
Problem
When I changed the Selenium default download directory to /home/seluser/Downloads/aaa/bbb, the Selenium container can access that directory, which corresponds to /home/app/selenium/downloads/aaa/bbb, but I can't access the aaa directory in the myApp container.
I can change the permissions of the aaa directory as the root user in the myApp container and solve the problem, but the default download directory can change for every downloaded file.
What's the solution to this problem?
I guess you are running into trouble because the main processes of your two containers run as different users. An easy way is to modify your Dockerfiles and set the same user ID (UID) for both users, so that a file or directory generated by one container can be accessed properly by the user of the other container.
An example Dockerfile snippet to create a user with a specified UID:
ARG UNAME=testuser
ARG UID=1000
ARG GID=1000
RUN groupadd -g $GID -o $UNAME
RUN useradd -m -u $UID -g $GID -o -s /bin/bash $UNAME
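To make those IDs line up with the user on your host, you could build the image with build arguments, something like this sketch (the image tag myimage is just a placeholder):
docker build --build-arg UID=$(id -u) --build-arg GID=$(id -g) -t myimage .
id -u and id -g print your host user's UID and GID, so the user created inside the image ends up with the same numeric IDs as the host user.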
There is another way using umask to assign proper permissions to files/directories generated in your Selenium container (to grant read access to all users). But I think it's more complicated, and it may take you some time to work out which umask is suitable for you: https://phoenixnap.com/kb/what-is-umask
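For completeness, a minimal sketch of the umask approach, assuming you control the entrypoint of the image that creates the files (the script name entrypoint.sh is hypothetical):
#!/bin/sh
# umask 022 makes new files 644 and new directories 755,
# i.e. readable (and, for directories, traversable) by every user
umask 022
exec "$@"
You would then point the image's ENTRYPOINT at this script so every process it launches inherits the umask.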
I have a Java application that saves a CSV file to the root folder of the application. I am trying to create a Docker image of this and run it as a container. However, I want a non-root user with ID 1010, not root, to be able to access this file. I get errors when trying to specify USER 1010 in my Dockerfile:
FROM adoptjdk (placeholder)
COPY ./myapp.jar /app/
USER 1010
WORKDIR /opt
EXPOSE PORTNO
That's just the basics of the Dockerfile; essentially I want user 1010 to be able to access the CSV file that my Java application creates. I am not sure where the CSV file is saved when the application is run through Docker.
If your application is to write to / inside the container, and it is to run as user ID 1010, then just set the filesystem permissions accordingly. I agree that it may not be the recommended setup, but it is not impossible.
So put in your Dockerfile lines like
RUN chmod 777 /
or, for a slightly less permissive approach, check the GID of / with
RUN ls -ln /
RUN chmod 775 /
then ensure your Java application runs with UID 1010 and the GID you saw on the filesystem. In short, such things work the same, be it inside the container or outside.
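A sketch of how that could look in the Dockerfile, reusing the placeholders from the question (the CMD line and the choice of / as working directory are my assumptions, not something stated in the question):
# base image and port are placeholders carried over from the question
FROM adoptjdk
COPY ./myapp.jar /app/
# allow the root group (GID 0) to write to /, where the CSV is saved;
# a numeric USER with no passwd entry runs with GID 0 by default
RUN chmod 775 /
USER 1010
WORKDIR /
EXPOSE PORTNO
CMD ["java", "-jar", "/app/myapp.jar"]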
I have created a Docker container with this command:
docker run -d -p 20001:80 -v /home/me/folder1/:/usr/local/apache2/htdocs/ httpd:2.4
This container contains scripts which create files and directories in the /usr/local/apache2/htdocs/ folder.
I can see these files on the host computer in the /home/me/folder1/ folder.
I have tried to open one of these files because I want to write something to it.
I cannot do that because I do not have write permission on these files; they are owned by the root user.
What can I do to make these files writable by the "me" user? I want that to happen automatically.
Thanks a lot
You have to do
sudo chmod +x nameofscript.sh
With this command, executed by a privileged user, you make the script executable for all users.
I have the following docker-compose file:
version: "3.8"
services:
api:
image: myuser/myimage
volumes:
- static_volume:/app/static
- /home/deployer/config_files/gunicorn.py:/app/gunicorn.py
- /home/deployer/log_files:/app/log_files
env_file:
- /home/deployer/config_files/api.env
expose:
- 8000
nginx:
image: myuser/nginximage
volumes:
- static_volume:/app/static
- /home/deployer/config_files/nginx.conf:/etc/nginx/conf.d/nginx.conf
ports:
- 80:80
depends_on:
- api
volumes:
static_volume:
The api service was built using the following Dockerfile (summarized to reduce size):
FROM python:3.9.1
WORKDIR /app
# Copy code etc into container
COPY ./api/ .
COPY entrypoint.sh .
# create static and log files directories
RUN mkdir static
RUN mkdir log_files
# create non root user, change ownership of all files, switch to this user
RUN adduser --system --group appuser
RUN chown -R appuser:appuser /app
USER appuser
CMD [ "/app/entrypoint.sh" ]
If I remove /home/deployer/log_files:/app/log_files from the compose file, everything works correctly. However, I am trying to use that directory for gunicorn's log files. Including that line results in the following error on docker-compose up:
Error: Error: '/app/log_files/gunicorn_error.log' isn't writable [PermissionError(13, 'Permission denied')]
On the linux host I am running docker-compose up with the user named deployer. Inside the container as per the Dockerfile I created a user called appuser. I'm guessing this is related to the issue but I'm not sure.
Basically all I'm trying to do is to have log files inside the container be accessible outside the container so that they persist even if the server is restarted.
/app/log_files is still owned by the deployer user inside your container, and appuser does not have permission to write to it. As per your comment, it seems /home/deployer/log_files is owned by deployer:deployers with permissions drwxr-xr-x. The permissions will be the same for /app/log_files inside the container, since it is a bind mount.
Even if the deployer user does not exist in your container, the directory will still be owned by the UID of the deployer user (you can check this by running ls from inside the container).
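For example, because the image uses CMD rather than ENTRYPOINT, you can override the command and inspect the numeric owner even if the service fails to start (a sketch; api is the service name from your compose file):
docker-compose run --rm api ls -ln /app/log_files
The -n flag makes ls print the raw UID/GID rather than trying to resolve names that do not exist inside the container.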
You can:
Add world-writable permissions to /home/deployer/log_files, such as:
chmod 777 /home/deployer/log_files
This may present a security risk though; the other solution is a bit more complex, but better.
Retrieve or set the UID of appuser and give ownership of /home/deployer/log_files to that user. For example, in the Dockerfile create appuser with the specific UID 1500:
RUN adduser --uid 1500 --system --group appuser
And from your host, change the directory owner to this UID:
sudo chown 1500:1500 /home/deployer/log_files
At container runtime, appuser (UID 1500) will then be able to write to this directory.
More generally, you should ensure /home/deployer/log_files is writable by the user running inside the container while keeping its access secure if needed.
I have a Dockerfile as shown here.
A script in the entrypoint creates a directory and places a few artifacts in it.
# from base image
FROM ......
RUN mkdir -p /home/myuser
RUN groupadd -g 999 myuser &&\
useradd -r -u 999 -g myuser myuser
ENV HOME=/home/myuser
ENV APP_HOME=/home/myuser/workspace
RUN mkdir $APP_HOME
WORKDIR $APP_HOME
RUN chown -R myuser:myuser $APP_HOME
USER myuser
ENTRYPOINT ......
I start a container from the above image as shown here:
sudo docker run -v ${WORKSPACE}/output:/home/myuser/workspace/output image
I could not get the artifacts on the host machine. ${WORKSPACE}/output is created with permissions drwxr-xr-x.
What is the process to get the container's files onto the host machine?
Additional Info:
My host username is kit
container user is myuser
the container works perfectly fine otherwise - at the time of creating the output file it throws a Permission denied error
I tried giving full permissions (drwxrwxrwx) to ${WORKSPACE}/output; then I could see the output files.
The permission denied error is because you are running the container with UID 999, but trying to write to a host directory that is owned by UID 1000 and only configured to allow writes by that user. You can:
chmod the directory to allow anyone to write (not recommended, but quick and easy)
update your image to match the uid/gid of your user on the host
switch to using a named volume
use an entrypoint to align the container uid/gid to that of a volume mount before starting your app (see the sketch below)
I go into a bit more detail on these in my slides here. There are also some speaker notes in there (I believe either P or S will bring them up).
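As an illustration of the last option, here is a minimal entrypoint sketch. It assumes the container keeps running as root until the entrypoint has done its work (i.e. the USER myuser line is dropped from the Dockerfile), that stat, usermod and groupmod are available in the image, and that a privilege-dropping helper such as gosu is installed - none of which is in the original Dockerfile:
#!/bin/sh
set -e
# align myuser's uid/gid with whoever owns the mounted output directory
TARGET_UID=$(stat -c '%u' /home/myuser/workspace/output)
TARGET_GID=$(stat -c '%g' /home/myuser/workspace/output)
groupmod -o -g "$TARGET_GID" myuser
usermod -o -u "$TARGET_UID" myuser
# drop privileges and hand over to the real command
exec gosu myuser "$@"
With this in place, files written to /home/myuser/workspace/output end up owned by the host user (kit) instead of UID 999.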