Usually when I develop an application using a Docker container as my development test base, I need a shell inside it in order to manually run composer, phpunit, npm, bower and various development scripts, via the following command:
docker exec -ti <container_name> /bin/sh
But the shell is spawned with root permissions. What I want to achieve is to spawn a shell not with root permissions but as a specified user.
How can I do that?
In my case my Dockerfile has the following entries:
FROM php:5.6-fpm-alpine
ARG UID="1000"
ARG GID="1000"
COPY ./entrypoint.sh /usr/local/bin/entrypoint.sh
COPY ./fpm.conf /usr/local/etc/php-fpm.d/zz-docker.conf
RUN chmod +x /usr/local/bin/entrypoint.sh &&\
    addgroup -g ${GID} developer &&\
    adduser -D -H -S -s /bin/false -G developer -u ${UID} developer
ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]
CMD ["php-fpm"]
And I mount a directory of the project I develop from the host into /var/www/html, preserving the user permissions, so I just need the following docker-compose.yml in order to build it:
version: '2'
services:
  php_dev:
    build:
      context: .
      dockerfile: Dockerfile
      args:
        XDEBUG_HOST: 172.17.0.1
        XDEBUG_PORT: 9021
        UID: 1000
        GID: 1000
    image: pcmagas/php_dev
    links:
      - somedb
    volumes:
      - "$SRC_PATH:/var/www/html:Z"
So, by setting the UID and GID to my host user's ID and group ID, and with the following config for fpm:
[global]
daemonize = no
[www]
listen = 9000
user = developer
group = developer
I manage to apply any changes to my code without worrying about mysterious changes to file ownership. But I want to be able to spawn a shell inside the running php_dev container as the developer user, so that any future tool such as composer or npm runs with the appropriate user permissions.
Of course, I guess the same principles apply to other languages as well; for example, pip for Python.
In case you need to run the container as a non-root user you have to add the following line to your Dockerfile:
USER developer
Note that in order to mount a directory through docker-compose.yml you have to change the permissions of that directory before running docker-compose up, by executing the following command:
chown UID:GID /path/to/folder/on/host
UID and GID should match the UID and GID of the container's user.
This will let the user read and write the mounted volume without any issues.
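For instance, when the container user mirrors your host user (as in the setup above), you can let the shell fill the IDs in (a sketch; adjust the path to your mounted source directory):
# chown to the current host user's IDs, which the container user mirrors
sudo chown -R "$(id -u):$(id -g)" /path/to/folder/on/host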
Read more about the USER directive.
In the end, adding the line:
USER developer
really works when you spawn a shell via:
docker exec -ti <container_name> /bin/sh
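Alternatively, docker exec accepts a --user (-u) flag, so you can open a shell as the developer user even without rebuilding the image:
# open the shell as the developer user instead of root
docker exec -ti -u developer <container_name> /bin/sh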
Related
I have a project with this structure:
app/
  selenium/
    downloads/
.env
docker-compose.yml
docker-compose.yml has 2 services: myApp and selenium, with these volumes set for my services:
myApp:
  volumes:
    - ./app:/home/app
selenium:
  volumes:
    - ./app/selenium/downloads:/home/seluser/Downloads
When I run docker compose up in this directory, my application and selenium come up and I can develop my application. Sometimes I need the selenium container to fetch some content from the web.
When the selenium container downloads a file, the file is stored in ./app/selenium/downloads and I can read it from the myApp container.
Problem
When I changed the selenium default download directory to /home/seluser/Downloads/aaa/bbb, that directory shows up as /home/app/selenium/downloads/aaa/bbb, but I can't access the aaa directory from the myApp container.
I can change the permissions of the aaa directory as the root user in the myApp container and work around the problem, but the default download directory can change for every downloaded file.
What's the solution to this problem?
I guess you are running into trouble because the main processes of your 2 containers run as different users. An easy way out is to modify both Dockerfiles and set the same user ID (UID) for the two users, so a file/directory generated by one container can be accessed properly by the user of the other container.
An example Dockerfile to create a user with a specified UID:
# any Debian/Ubuntu base image works here
FROM ubuntu:18.04
ARG UNAME=testuser
ARG UID=1000
ARG GID=1000
RUN groupadd -g $GID -o $UNAME
RUN useradd -m -u $UID -g $GID -o -s /bin/bash $UNAME
There is another way: using umask to assign proper permissions to the files/directories generated in your selenium container (to grant READ access to all users). But I think it's more complicated, and it may take some time to work out which umask is suitable for you: https://phoenixnap.com/kb/what-is-umask
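If you do go the umask route, a minimal sketch would be to wrap the selenium image's stock entrypoint with a small script (the /opt/bin/entry_point.sh path is an assumption about the selenium image):
#!/bin/sh
# wrapper entrypoint: relax the umask so new files/dirs are group- and
# world-writable, then hand off to the image's original entrypoint
umask 0000
exec /opt/bin/entry_point.sh "$@"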
I wrote a Docker image which needs to read some variables, so I wrote this in the Dockerfile:
ARG UID=1000
ARG GID=1000
ENV UID=${UID}
ENV GID=${GID}
ENV USER=laravel
RUN useradd -G www-data,root -u $UID -d /home/$USER $USER
RUN mkdir -p /home/$USER/.composer && \
    chown -R $USER:$USER /home/$USER
This code allows me to create a laravel user which has the ID of the user that starts the container.
So a user that pulls this image sets this content in the docker-compose section:
env_file: .env
which contains:
GROUP_ID=1001
USER_ID=1001
For some weird reason that I don't understand, when I exec into the container built from the pulled image, the user laravel is mapped to ID 1000, which is the default value set in the Dockerfile.
Instead, if I test the image using:
build:
  context: ./docker/php
  dockerfile: ./Dockerfile
  args:
    - UID=${GROUP_ID:-1000}
    - GID=${USER_ID:-1000}
I can see the user laravel correctly mapped as 1001. So the questions are the following:
is the UID variable not being read from the env file?
is the default value overwriting the env value?
Thanks in advance for any help
UPDATE:
As suggested, I tried to change the user ID and group ID in the bash script executed by the entrypoint; in the Dockerfile I have this:
ENTRYPOINT ["start.sh"]
then, at the start of start.sh I've added:
usermod -u ${USER_ID} laravel
groupmod -g ${GROUP_ID} laravel
the issue now is:
usermod: user laravel is currently used by process 1
groupmod: Permission denied.
groupmod: cannot lock /etc/group; try again later.
The Docker build phase and the run phase are the key here. The new user is added in the build phase, hence it is important to pass the dynamic values while building the Docker image, e.g.:
docker build --build-arg UID=1001 --build-arg GID=1001 .
or, in the case which you have already used and where it works (i.e. the Docker image is re-created with the expected IDs), in the docker-compose file:
build:
  context: ./docker/php
  dockerfile: ./Dockerfile
  args:
    - UID=${GROUP_ID:-1000}
    - GID=${USER_ID:-1000}
In the run phase, i.e. when starting a container instance of an already built Docker image, passing envs does not overwrite the vars of the build phase. Hence, in your case, you can omit passing the envs when starting the container.
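To verify which ID the user actually got after a rebuild, you can check from inside the container (a sketch; the php service name is an assumption):
docker-compose build --no-cache php
docker-compose up -d php
# should print uid=1001(laravel) once the build args were applied
docker-compose exec php id laravel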
This is my Dockerfile.
FROM python:3.8.12-slim-bullseye as prod-env
RUN apt-get update && apt-get install unzip vim -y
# run subsequent commands (including the pip install below) from /app
WORKDIR /app
COPY requirements.txt /app
RUN pip install -r requirements.txt
USER nobody:nogroup
This is what docker-compose.yml looks like.
api_server:
  build:
    context: .
    target: prod-env
  image: company/server
  volumes:
    - ./shared/model_server/models:/models
    - ./static/images:/images
  ports:
    - 8200:8200
  command: gunicorn -b 0.0.0.0:8200 --threads "8" --log-level info --reload "server:gunicorn_app(command='start', project='app_server')"
I want to add read, write and execute permissions on the shared directories.
And I also need to run a couple of other commands as root.
So I have to execute this command as root every time after the image is built:
docker exec -it -u root api_server_1 bash -c "python copy_stuffs.py; chmod -R a+rwx models; chmod -R a+rwx /images"
Now, I want docker-compose to execute these lines.
But as you can see, the user in docker-compose has to be nobody, as specified by the Dockerfile. So how can I execute root commands via the docker-compose file?
The option that I've been considering:
Install the sudo command from the Dockerfile and use sudo (roughly sketched below).
Is there any better way?
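For reference, the sudo option would look roughly like this in the Dockerfile (a hedged sketch; granting passwordless sudo to nobody is very permissive and arguably defeats the purpose of USER nobody):
# install sudo and let the unprivileged user escalate without a password
RUN apt-get update && apt-get install -y --no-install-recommends sudo \
 && echo "nobody ALL=(ALL) NOPASSWD:ALL" > /etc/sudoers.d/nobody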
In docker-compose.yml, create another service using the same image and volumes.
Override the user with user: root:root and the command with your_command_to_run_as_root for this new service, and add a dependency so that this new service starts before the regular working container.
api_server:
  build:
    context: .
    target: prod-env
  image: company/server
  volumes:
    - ./shared/model_server/models:/models
    - ./static/images:/images
  ports:
    - 8200:8200
  command: gunicorn -b 0.0.0.0:8200 --threads "8" --log-level info --reload "server:gunicorn_app(command='start', project='app_server')"
  # This makes sure that the startup order is correct and the api_server_decorator service starts first
  depends_on:
    - api_server_decorator
api_server_decorator:
  build:
    context: .
    target: prod-env
  image: company/server
  volumes:
    - ./shared/model_server/models:/models
    - ./static/images:/images
  # No ports needed - it is only a decorator
  # Overriding USER with root:root
  user: "root:root"
  # Overriding the command; the shell wrapper is needed so the semicolons work
  command: bash -c "python copy_stuffs.py; chmod -R a+rwx models; chmod -R a+rwx /images"
There are other possibilities, like changing the Dockerfile to remove the USER restriction; then you can use an entrypoint script that does what you want as the privileged root user and then runs su - nobody, or better exec gosu, to retain PID 1 and proper signal handling.
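A rough sketch of that entrypoint approach, assuming gosu has been installed in the image and the USER nobody:nogroup line has been removed so the entrypoint starts as root:
#!/bin/sh
# entrypoint.sh: do the privileged work as root...
set -e
python copy_stuffs.py
chmod -R a+rwx /models /images
# ...then drop to nobody; exec keeps gunicorn as PID 1 for signal handling
exec gosu nobody:nogroup "$@"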
In my eyes, the approach of giving a container root rights is quite hacky and dangerous.
If you want to e.g. remove files written by the container, you need root rights on the host as well.
If you want to allow a container to access files on the host filesystem, just run the container as an appropriate user:
api_server:
  user: my_docker_user:my_docker_group
then, on the host, give the rights to that group:
sudo chown -R my_docker_user:my_docker_group models
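Note that a user or group name in user: must exist inside the image; plain numeric IDs always work, even with no matching passwd entry in the container:
api_server:
  # numeric IDs need no matching passwd entry inside the image
  user: "1000:1000"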
You should build all of the content you need into the image itself, especially if you have this use case of occasionally needing to run a process to update it (you are not trying to use an isolation tool like Docker to simulate a local development environment). In your Dockerfile, COPY these directories into the image:
COPY shared/model_server/models /models
COPY static/images /images
Do not make these directories writeable, and do not make the individual files in the directories executable. The directories will generally be mode 0755 and the files mode 0644, owned by root, and that's fine.
In the Compose setup, do not mount host content over these directories either. You should just have:
services:
  api_server:
    build: .  # use the same image in all environments
    image: company/server
    ports:
      - 8200:8200
    # no volumes:, do not override the image's command:
Now when you want to update the files, you can rebuild the image (without interrupting the running application, without docker exec, and without an alternate user):
docker-compose build api_server
and then do a relatively quick restart, running a new container on the updated image:
docker-compose up -d
There is a docker-compose setup that uses an image built from the base Dockerfile for the application.
The Dockerfile looks similar to the one below. Some lines are omitted for brevity.
FROM ubuntu:18.04
RUN set -e -x ;\
    apt-get -y update ;\
    apt-get -y upgrade ;
...
USER service
When using this image in docker-compose and adding a named volume to the service, the folder in the named volume is not accessible, failing with the message Permission denied. The relevant part of the docker-compose file looks as below.
version: "3.1"
services:
myapp:
image: myappimage
command:
- /myapp
ports:
- 12345:1234
volumes:
- logs-folder:/var/log/myapp
volumes:
logs-folder:
My assumption was that the USER service line is the issue, which I confirmed by setting user: root in the myapp service.
Now, the next question: I would like to avoid manually creating the volume and setting its permissions; I would like this to be automated using docker-compose.
Is this possible, and if yes, how can it be done?
Yes, there is a trick. Not really in the docker-compose file, but in the Dockerfile. You need to create the /var/log/myapp folder and set its permissions before switching to the service user:
FROM ubuntu:18.04
RUN useradd myservice
RUN mkdir /var/log/myapp
RUN chown myservice:myservice /var/log/myapp
...
USER myservice:myservice
Docker Compose will preserve these permissions: when the named volume is first mounted, Docker populates it from the image's content at the mount point, including ownership.
See Docker Compose mounts named volumes as 'root' exclusively
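You can verify the ownership that the named volume picked up from the image (myapp being the service from the compose file above):
# should show myservice as the owner of the log directory
docker-compose exec myapp ls -ld /var/log/myapp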
I had a similar issue, but mine was related to a file shared via a volume with a service I was not building from a Dockerfile, but pulling. I had shared a shell script that I used in docker-compose, but when I executed it, it did not have permission to run.
I resolved it by using chmod in the command of docker-compose:
command: -c "chmod a+x ./app/wait-for-it.sh && ./app/wait-for-it.sh -t 150 -h ..."
volumes:
  - ./wait-for-it.sh:/app/wait-for-it.sh
You can change the volume source's permissions to avoid the Permission denied error:
chmod a+x logs-folder
I am using an AWS EC2 instance and have installed docker and docker-compose on Amazon Linux.
Now I have a docker-compose.yml file which tries to run the command mkdir -p /workspace/.m2/repositories. This command requires sudo, otherwise it gives a permissions error.
I tried adding sudo inside docker-compose but it gave me an error saying:
sudo: command not found
I can run this command manually, and I can comment it out inside the docker-compose.yml file, but I am interested to know: is there any way to run this command from inside the docker-compose.yml file?
I may have a solution for you. I think you can extend the strongbox image in a custom Dockerfile to solve this issue.
Create a new Dockerfile, like this one:
Dockerfile
FROM strongboxci/alpine:jdk8-mvn-3.5
USER root
RUN mkdir -p /workspace/.m2/repositories
RUN chown jenkins:jenkins /workspace/.m2/repositories
USER jenkins
Then build the image with something like this:
docker build -t mystrongbox:01 .
And finally update the docker-compose.yml file to this:
docker-compose.yml
version: '2'
services:
  strongbox-from-web-core:
    image: mystrongbox:01
    command:
      - /bin/bash
      - -c
      - |
        echo ""
        echo "[NOTICE] This will take at least 2 to 5 minutes to start depending on your machine and connection!"
        echo ""
        echo " Open http://localhost:48080/storages to browse the repository contents."
        echo ""
        sleep 5
        mkdir -p /workspace/.m2/repositories
        mvn clean install -DskipTests -Dmaven.repo.local=/workspace/.m2/repositories
        cd strongbox-web-core
        mvn spring-boot:run -Dmaven.repo.local=/workspace/.m2/repositories
    ports:
      - 48080:48080
    volumes:
      - ./:/workspace
    working_dir: /workspace
Finally try again with:
docker-compose up
Then you will have the directory created in the image already, with ownership set to the jenkins user.
I'm one of the developers at strongbox/strongbox. We're thrilled that someone is trying out our Docker images for development :)
Now this command requires sudo, otherwise it gives a permissions error.
What you are experiencing is likely a permission issue. Our Docker images run as user.group = 1000.1000 (which is usually the first user on many distributions). I suspect that your UID/GID is different, which you can check by running id -u and id -g. If it's something other than 1000.1000, you will need to do a workaround:
Create a user & group with IDs 1000.1000:
groupadd -g 1000 jenkins
useradd -u 1000 -g 1000 -s /bin/bash -m jenkins
Chown/chmod the cloned strongbox project like this:
chown -R `id -u`.1000 /path/to/strongbox-project
chmod -R 775 /path/to/strongbox-project
Try docker-compose up again.
This image does not have sudo installed, so you wouldn't be able to execute it. However, you shouldn't need it anyway, because /workspace is mounted from your filesystem (this is the strongbox project) and the build will write to /workspace/.m2/repositories in the volume.