Write a file in a Docker container and read it from another container

I have a project like this structure:
app
- selenium
- downloads
.env
docker-compose.yml
docker-compose.yml has 2 services: myApp and selenium, with these volumes set for the services:
myApp:
volumes:
- ./app:/home/app
selenium:
volumes:
- ./app/selenium/downloads:/home/seluser/Downloads
When I run docker compose up in this directory, my application and selenium start up and I can develop my application. Sometimes I need the selenium container to fetch some content from the web.
When the selenium container downloads a file, the file is stored in ./app/selenium/downloads and I can read it from the myApp container.
Problem
When I changed the selenium default download directory to /home/seluser/Downloads/aaa/bbb, the selenium container can access this directory, which corresponds to /home/app/selenium/downloads/aaa/bbb, but I can't access the aaa directory in the myApp container.
I can change the permissions of the aaa directory as the root user in the myApp container and solve this problem, but the default download directory can change for every downloaded file.
What's the solution to this problem?

I guess you are running into trouble because the main processes of your two containers run as different users. An easy way out is to modify your Dockerfiles and give both users the same user ID (UID), so a file or directory created by one container can be accessed properly by the user of the other container.
An example Dockerfile fragment to create a user with a specified UID:
ARG UNAME=testuser
ARG UID=1000
ARG GID=1000
RUN groupadd -g $GID -o $UNAME
RUN useradd -m -u $UID -g $GID -o -s /bin/bash $UNAME
There is another way, using umask, to assign proper permissions to files/directories generated in your selenium container (granting READ access to all users). But I think it's more complicated, and it may take some time to work out which umask is suitable for you: https://phoenixnap.com/kb/what-is-umask
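Before picking a umask for the selenium container, you can experiment with its effect in any host shell. This is just a local sketch, not selenium-specific: with umask 022, newly created files come out world-readable (mode 644), which is what the other container's user needs in order to read the downloads.

```shell
# Local demo: umask 022 clears the write bits for group/other,
# so a newly created file gets 666 & ~022 = 644 (readable by everyone).
tmpdir=$(mktemp -d)
(
  umask 022
  touch "$tmpdir/example"
)
stat -c '%a' "$tmpdir/example"   # prints 644
rm -r "$tmpdir"
```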

Related

Run specific docker container without sudo and without docker group

I want to run a specific docker-compose file without entering the sudo password and without adding the user who runs the command to the docker group, for security reasons.
I thought about using NOPASSWD inside the sudoers file and running a bash script called "bash-dockercompose-up.sh" that simply runs docker-compose up -d.
However, it still needs sudo before docker-compose up -d in order to connect to the docker host.
This is my /etc/sudoers file:
exampleuser ALL=(root) NOPASSWD:/usr/bin/bash-dockercompose-up.sh
OK, I was able to do it by using the official Python SDK library:
https://docs.docker.com/engine/api/sdk/
I created a Python script called service-up.py:
import docker
client = docker.from_env()
container = client.containers.get('id or name here')
container.start()
Then compile it into a binary so that you can set the setuid bit on it, letting a non-root user run it:
pyinstaller service-up.py
Move into the dist folder where the binary is located and run:
sudo chown root:root service-up
sudo chmod 4755 service-up
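To check that the setuid bit was actually applied, inspect the file mode. A sketch on a throwaway file (the chown to root is left out here since it requires sudo; on the real service-up binary the owner would be root):

```shell
# Mode 4755 = setuid bit + rwxr-xr-x; shown on a temp file for illustration.
tmp=$(mktemp)
chmod 4755 "$tmp"
stat -c '%a %A' "$tmp"   # prints: 4755 -rwsr-xr-x
rm "$tmp"
```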

Docker compose permission denied with volume

I have the following docker-compose file:
version: "3.8"
services:
  api:
    image: myuser/myimage
    volumes:
      - static_volume:/app/static
      - /home/deployer/config_files/gunicorn.py:/app/gunicorn.py
      - /home/deployer/log_files:/app/log_files
    env_file:
      - /home/deployer/config_files/api.env
    expose:
      - 8000
  nginx:
    image: myuser/nginximage
    volumes:
      - static_volume:/app/static
      - /home/deployer/config_files/nginx.conf:/etc/nginx/conf.d/nginx.conf
    ports:
      - 80:80
    depends_on:
      - api
volumes:
  static_volume:
The api service was built using the following docker file (summarized to reduce size):
FROM python:3.9.1
WORKDIR /app
# Copy code etc into container
COPY ./api/ .
COPY entrypoint.sh .
# create static and log files directories
RUN mkdir static
RUN mkdir log_files
# create non root user, change ownership of all files, switch to this user
RUN adduser --system --group appuser
RUN chown -R appuser:appuser /app
USER appuser
CMD [ "/app/entrypoint.sh" ]
If I remove /home/deployer/log_files:/app/log_files from the compose file, everything works correctly. However, I am trying to use that log_files directory for gunicorn's log files. Including that line results in the following error on docker-compose up:
Error: Error: '/app/log_files/gunicorn_error.log' isn't writable [PermissionError(13, 'Permission denied')]
On the Linux host I am running docker-compose up as the user named deployer. Inside the container, as per the Dockerfile, I created a user called appuser. I'm guessing this is related to the issue, but I'm not sure.
Basically, all I'm trying to do is make the log files inside the container accessible outside the container, so that they persist even if the server is restarted.
/app/log_files inside your container is still owned by the deployer user, and appuser does not have permission to write to it. As per your comment, /home/deployer/log_files is owned by deployer:deployers with permissions drwxr-xr-x. The permissions will be the same for /app/log_files inside the container, since it is a bind mount.
Even if the deployer user does not exist in your container, the directory will still be owned by the UID of that user (you can check this by running ls -l from inside the container).
You can either:
Add world-writable permissions to /home/deployer/log_files, such as:
chmod 777 /home/deployer/log_files
This may present a security risk, though; the other solution is a bit more complex but better.
Or retrieve or set the UID of appuser and give ownership of /home/deployer/log_files to that UID. For example, in the Dockerfile create appuser with the specific UID 1500:
RUN adduser --uid 1500 --system --group appuser
And from your host change the directory owner to this UID:
sudo chown 1500:1500 /home/deployer/log_files
At container runtime, appuser (UID 1500) will then be able to write to this directory.
More generally, you should ensure /home/deployer/log_files is writable by the user running inside the container, while keeping its access secure if needed.
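The drwxr-xr-x mode mentioned above is exactly what denies the write: only the owning UID gets the w bit. A quick local sketch of reading a directory's mode (the path is illustrative, not the actual log directory):

```shell
# A directory with mode 755 (drwxr-xr-x) is writable only by its owner;
# any other UID inside the container, such as appuser, gets read/traverse only.
d=$(mktemp -d)
chmod 755 "$d"
stat -c '%A' "$d"   # prints: drwxr-xr-x
rm -r "$d"
```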

Docker-compose and named volume permission denied

There is a docker-compose setup that uses an image built from the base Dockerfile below for the application. The Dockerfile looks similar to this; some lines are omitted for brevity.
FROM ubuntu:18.04
RUN set -e -x ;\
apt-get -y update ;\
apt-get -y upgrade ;
...
USER service
When using this image in docker-compose and adding a named volume to the service, the folder in the named volume is not accessible, failing with Permission denied. The relevant part of the docker-compose file looks as below.
version: "3.1"
services:
  myapp:
    image: myappimage
    command:
      - /myapp
    ports:
      - 12345:1234
    volumes:
      - logs-folder:/var/log/myapp
volumes:
  logs-folder:
My assumption was that the USER service line is the issue, which I confirmed by setting user: root on the myapp service.
Now, the question is: I would like to avoid manually creating the volume and setting permissions. I would like it to be automated using docker-compose.
Is this possible and, if yes, how can it be done?
Yes, there is a trick. It is not really in the docker-compose file, but in the Dockerfile: you need to create the /var/log/myapp folder and set its permissions before switching to the service user:
FROM ubuntu:18.04
RUN useradd myservice
RUN mkdir /var/log/myapp
RUN chown myservice:myservice /var/log/myapp
...
USER myservice:myservice
Docker-compose will preserve permissions.
See Docker Compose mounts named volumes as 'root' exclusively
I had a similar issue, but mine was related to a file shared via a volume with a service I was not building from a Dockerfile but pulling. I had shared a shell script that I used in docker-compose, but when I executed it, it did not have permission.
I resolved it by using chmod in the command of docker-compose:
command: -c "chmod a+x ./app/wait-for-it.sh && ./app/wait-for-it.sh -t 150 -h ..."
volumes:
  - ./wait-for-it.sh:/app/wait-for-it.sh
You can also change the volume source's permissions on the host to avoid the Permission denied error:
chmod a+x logs-folder
(Note that a+x on a directory only grants traversal; add read/write bits too if the container's user needs them.)

Shared Volume Docker Permissions

I am using docker-compose to create a number of Docker containers. All of the containers have a shared volume:
volumes:
- ${PHP_SERVICES_FOLDER}:/var/www/web
The Docker containers are as follows:
Jenkins (FROM jenkins/jenkins:latest) - this writes to the shared volume
Nginx (FROM nginx) - this reads from the shared volume and uses the php-fpm container
PHP-FPM (FROM php:7.2-fpm)
With the volume's files having permissions 777, Nginx and PHP can read, write and execute the files, but this breaks as soon as I trigger a build in Jenkins, which updates the files in the volume.
I think the reason it works when the permissions are 777 is because that allows 'other' users full access to the volume.
How can I have Nginx, PHP-FPM and Jenkins use the same user to read, write and execute files in that volume?
You could create a user in each of the Dockerfiles with the same UID, and then give that UID permissions on the volume.
For example:
# your dockerfile
RUN groupadd -g 799 appgroup
RUN useradd -u 799 -g appgroup shareduser
USER shareduser
Then you simply need to chown everything in the volume to the newly created UID (in any container, or on the host, as UIDs are shared between containers and the host):
chown -R 799:799 /volume/root/
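Since ownership is stored as a numeric UID (the name shareduser is just a label for 799), you can verify which UID owns a file with stat. A host-side sketch on a throwaway file:

```shell
# Files are owned by numbers, not names: a container and the host agree on
# the UID even if they map it to different (or no) user names.
tmp=$(mktemp)
file_uid=$(stat -c '%u' "$tmp")
[ "$file_uid" = "$(id -u)" ] && echo "owned by current uid: $file_uid"
rm "$tmp"
```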

Docker Spawn a shell as a user instead of root

Usually, when I develop an application using a Docker container as my development base, I need a shell in it to manually run composer, phpunit, npm, bower and various development scripts. I spawn one via the following command:
docker exec -ti <container-name> /bin/sh
But the shell is spawned with root permissions. What I want to achieve is to spawn a shell not as root but as a specified user.
How can I do that?
In my case my Dockerfile has the following entries:
FROM php:5.6-fpm-alpine
ARG UID="1000"
ARG GID="1000"
COPY ./entrypoint.sh /usr/local/bin/entrypoint.sh
COPY ./fpm.conf /usr/local/etc/php-fpm.d/zz-docker.conf
RUN chmod +x /usr/local/bin/entrypoint.sh &&\
addgroup -g ${GID} developer &&\
adduser -D -H -S -s /bin/false -G developer -u ${UID} developer
ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]
CMD ["php-fpm"]
And I mount a directory of the project I am developing from the host into /var/www/html, preserving the user permissions, so I just need the following docker-compose.yml in order to build it:
version: '2'
services:
  php_dev:
    build:
      context: .
      dockerfile: Dockerfile
      args:
        XDEBUG_HOST: 172.17.0.1
        XDEBUG_PORT: 9021
        UID: 1000
        GID: 1000
    image: pcmagas/php_dev
    links:
      - somedb
    volumes:
      - "$SRC_PATH:/var/www/html:Z"
So, by setting UID and GID to my host user's UID and GID, and with the following config for fpm:
[global]
daemonize = no
[www]
listen = 9000
user = developer
group = developer
I manage to apply any changes to my code without worrying about mysterious changes to file ownership. But I want to be able to spawn a shell inside the running php_dev container as the developer user, so that any future tool such as composer or npm will run with the appropriate user permissions.
Of course, I guess the same principles apply to other languages as well, for example pip for Python.
In case you need to run the container as a non-root user you have to add the following line to your Dockerfile:
USER developer
Note that in order to mount a directory through docker-compose.yml, you have to change the ownership of that directory before running docker-compose up, by executing the following command:
chown UID:GID /path/to/folder/on/host
UID and GID should match the UID and GID of the user inside the container.
This will let that user read and write the mounted volume without any issues.
Read more about the USER directive.
In the end, adding the line:
USER developer
really works: when you spawn a shell via
docker exec -ti <container-name> /bin/sh
it is spawned as the developer user. Alternatively, docker exec accepts a --user flag (docker exec -u developer -ti <container-name> /bin/sh) if you want to pick the user per invocation.
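The UID/GID values passed as build args in the docker-compose.yml above (and used for the chown on the host directory) can be read straight from the host shell. A minimal sketch:

```shell
# Print the current host user's numeric UID and GID, suitable for the
# UID/GID build args in the compose file above.
echo "UID=$(id -u) GID=$(id -g)"
```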
