I wrote a Docker image which needs to read some variables, so I put this in the Dockerfile:
ARG UID=1000
ARG GID=1000
ENV UID=${UID}
ENV GID=${GID}
ENV USER=laravel
RUN useradd -G www-data,root -u $UID -d /home/$USER $USER
RUN mkdir -p /home/$USER/.composer && \
    chown -R $USER:$USER /home/$USER
This code allows me to create a laravel user which has the id of the user that starts the container.
So a user that pulls this image sets this content in the docker-compose service:
env_file: .env
which contains:
GROUP_ID=1001
USER_ID=1001
For some weird reason that I don't understand, when I exec into the container with the pulled image, the user laravel is mapped to the id 1000, which is the default value set in the Dockerfile.
Instead, if I test the image using:
build:
  context: ./docker/php
  dockerfile: ./Dockerfile
  args:
    - UID=${USER_ID:-1000}
    - GID=${GROUP_ID:-1000}
I can correctly see the user laravel mapped as 1001. So my questions are:
is the UID variable not being read from the env file?
is the default value overriding the env value?
Thanks in advance for any help
UPDATE:
As suggested, I tried to change the user id and group id in the bash script executed by the entrypoint. In the Dockerfile I have this:
ENTRYPOINT ["start.sh"]
then, at the start of start.sh I've added:
usermod -u ${USER_ID} laravel
groupmod -g ${GROUP_ID} laravel
the issue now is:
usermod: user laravel is currently used by process 1
groupmod: Permission denied.
groupmod: cannot lock /etc/group; try again later.
The Docker build phase and the run phase are the key moments here. The new user is added in the build phase, hence it is important to pass the dynamic values while building the docker image, e.g.:
docker build --build-arg UID=1001 --build-arg GID=1001 .
or, as in the case you have already used and where it works (i.e. the docker image is re-created with the expected IDs), in the docker-compose file:
build:
  context: ./docker/php
  dockerfile: ./Dockerfile
  args:
    - UID=${USER_ID:-1000}
    - GID=${GROUP_ID:-1000}
In the run phase, i.e. when starting a container instance of an already built docker image, passing env vars does not overwrite the values baked in during the build phase. Hence, in your case, you can omit passing those envs when starting the container.
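If you do want the IDs to be adjustable at run time instead, the usermod/groupmod approach from the update can work, but only if the entrypoint still runs as root and drops privileges afterwards; the errors you saw come from running those commands as the laravel user while it already owns process 1. A minimal sketch, assuming gosu (or su-exec on Alpine) is installed in the image and the Dockerfile has no USER directive:
#!/bin/sh
# start.sh - sketch only: runs as root, so usermod/groupmod are allowed
# and the laravel user is not yet used by any process.
groupmod -g "${GROUP_ID:-1000}" laravel
usermod -u "${USER_ID:-1000}" laravel
chown -R laravel:laravel /home/laravel
# exec + gosu hands the command over as PID 1 with proper signal handling
exec gosu laravel "$@"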
I'm new to Dockerisation. My goal is to set up a Ruby container to develop a website.
My first challenge is having a persistent working directory on the local host available in the container, so that the container can be used to develop and run the code, while the host is used to save it and commit to git whenever required.
I thought the first thing to do is to have a local group and user in the container that match the ones on my host.
The first lines of my Dockerfile look like this:
ARG USER_ID
ARG GROUP_ID
FROM ruby
RUN addgroup --gid $GROUP_ID user
RUN adduser --disabled-password --gecos '' --uid $USER_ID --gid $GROUP_ID user
However, when I try to build my image using docker-compose build, with args in the file or on the command line, I still receive the same error: The command '/bin/sh -c addgroup --gid $GROUP_ID user' returned a non-zero code: 1.
Inspecting the output, it seems my arguments are not passed to the build process for some reason, but I don't understand why (if I hard-code the values, I can get past this problem).
My command line looks like env USER_ID=${UID} GROUP_ID=${GID} docker-compose build, while the compose file is as below (first lines):
version: "3.9"
services:
ruby_dev:
build: # "context" and "dockerfile" fields have to be under "build"
context: .
dockerfile: ./Ruby
args:
USER_ID: 1000
GROUP_ID: 1000
# command: ....
Regrettably, other questions and examples I looked at did not point me in the right direction. Any help understanding my error?
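For what it's worth, a likely cause here (an observation of mine, not from the thread) is ARG scoping: an ARG declared before FROM is only in scope for the FROM line itself, so $GROUP_ID is empty in the RUN steps and addgroup fails. Re-declaring the args after FROM fixes that; note also that the compose file above hard-codes 1000, so to pass host values through you would need USER_ID: ${USER_ID} there. A minimal sketch of the corrected Dockerfile:
FROM ruby
# ARGs declared before FROM go out of scope after it;
# re-declare them here so the RUN steps can see them.
ARG USER_ID
ARG GROUP_ID
RUN addgroup --gid $GROUP_ID user
RUN adduser --disabled-password --gecos '' --uid $USER_ID --gid $GROUP_ID user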
This is my Dockerfile.
FROM python:3.8.12-slim-bullseye as prod-env
RUN apt-get update && apt-get install unzip vim -y
COPY requirements.txt /app
RUN pip install -r requirements.txt
USER nobody:nogroup
This is what my docker-compose.yml looks like.
api_server:
  build:
    context: .
    target: prod-env
  image: company/server
  volumes:
    - ./shared/model_server/models:/models
    - ./static/images:/images
  ports:
    - 8200:8200
  command: gunicorn -b 0.0.0.0:8200 --threads "8" --log-level info --reload "server:gunicorn_app(command='start', project='app_server')"
I want to add read, write and execute permissions on the shared directories.
I also need to run a couple of other commands as root.
So I have to execute this command as root every time after the image is built:
docker exec -it -u root api_server_1 bash -c "python copy_stuffs.py; chmod -R a+rwx models; chmod -R a+rwx /images"
Now, I want docker-compose to execute these lines.
But as you can see, the user in docker-compose has to be nobody, as specified by the Dockerfile. So how can I execute root commands via the docker-compose file?
The option I've been considering:
Install the sudo command in the Dockerfile and use sudo
Is there any better way?
In docker-compose.yml, create another service using the same image and volumes.
For this new service, override the user with user: root:root and the command with your command to run as root, then add a dependency so this new service starts before the regular working container.
api_server:
  build:
    context: .
    target: prod-env
  image: company/server
  volumes:
    - ./shared/model_server/models:/models
    - ./static/images:/images
  ports:
    - 8200:8200
  command: gunicorn -b 0.0.0.0:8200 --threads "8" --log-level info --reload "server:gunicorn_app(command='start', project='app_server')"
  # This makes sure the startup order is correct and the
  # api_server_decorator service starts first
  depends_on:
    - api_server_decorator
api_server_decorator:
  build:
    context: .
    target: prod-env
  image: company/server
  volumes:
    - ./shared/model_server/models:/models
    - ./static/images:/images
  # No ports needed - it is only a decorator
  # Overriding USER with root:root
  user: "root:root"
  # Overriding command; sh -c is needed so ';' works as a separator
  command: sh -c "python copy_stuffs.py; chmod -R a+rwx models; chmod -R a+rwx /images"
There are other possibilities, like changing the Dockerfile by removing the USER restriction; then you can use an entrypoint script that does whatever you need as the privileged root user, and afterwards runs su - nobody, or better exec gosu, to retain PID=1 and proper signal handling. A minimal sketch of that entrypoint is below.
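This sketch assumes the USER nobody:nogroup line is removed from the Dockerfile and gosu is installed in the image (both are assumptions, not part of the original setup):
#!/bin/sh
# entrypoint.sh - sketch only: do the root-only setup first...
python copy_stuffs.py
chmod -R a+rwx /models /images
# ...then drop privileges; exec + gosu keeps the app as PID 1
# with proper signal handling.
exec gosu nobody:nogroup "$@"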
In my eyes, the approach of giving a container root rights is quite hacky and dangerous.
If you want to, e.g., remove the files written by the container, you need root rights on the host as well.
If you want to allow a container to access files on the host filesystem, just run the container as an appropriate user:
api_server:
  user: my_docker_user:my_docker_group
then, on the host, give that user and group the rights to the directory:
sudo chown -R my_docker_user:my_docker_group models
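For instance, to line the container user up with your own host account, you can pass the numeric IDs that id reports on the host (the 1000 values below are just an example):
# on the host: id -u -> 1000, id -g -> 1000
api_server:
  user: "1000:1000"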
You should build all of the content you need into the image itself, especially if you have this use case of occasionally needing to run a process to update it (you are not trying to use an isolation tool like Docker to simulate a local development environment). In your Dockerfile, COPY these directories into the image:
COPY shared/model_server/models /models
COPY static/images /images
Do not make these directories writeable, and do not make the individual files in the directories executable. The directories will generally be mode 0755 and the files mode 0644, owned by root, and that's fine.
In the Compose setup, do not mount host content over these directories either. You should just have:
services:
  api_server:
    build: . # use the same image in all environments
    image: company/server
    ports:
      - 8200:8200
    # no volumes:, do not override the image's command:
Now, when you want to update the files, you can rebuild the image (without interrupting the running application, without docker exec, and without an alternate user):
docker-compose build api_server
and then do a relatively quick restart, running a new container on the updated image:
docker-compose up -d
How can I convert the command below to a docker-compose version?
docker build -t xxx --build-arg SSH_PRV_KEY="$(cat ~/.ssh/id_rsa)" .
I tried the block below, but it does not work. Please help. Thanks.
xxx:
  build:
    context: .
    dockerfile: Dockerfile
    args:
      SSH_PRV_KEY: "$(cat ~/.ssh/id_rsa)"
docker-compose doesn't understand shell code like that. You can do it this way:
xxx:
  build:
    context: .
    dockerfile: Dockerfile
    args:
      - SSH_PRV_KEY
Now, before running docker-compose, export your SSH_PRV_KEY env var:
export SSH_PRV_KEY="$(cat ~/.ssh/id_rsa)"
# now run docker-compose up as you normally do
Then SSH_PRV_KEY will have the right value.
Two things you need to consider:
It may not work as expected if your id_rsa has a passphrase.
This SSH_PRV_KEY will actually be visible in the docker metadata, e.g. via docker history or docker image inspect. To get around that, you should look into multi-stage builds: https://docs.docker.com/develop/develop-images/multistage-build/. In your build stage, you use that key to do whatever you need; then, in your final image, you don't declare SSH_PRV_KEY but simply copy the result from the previous stage. A more specific example, where a private key is used to install dependencies:
FROM node AS builder
ARG SSH_PRV_KEY
RUN mkdir -p ~/.ssh && echo "$SSH_PRV_KEY" > ~/.ssh/id_rsa && chmod 600 ~/.ssh/id_rsa
RUN npm install # this may need access to that rsa key

FROM node
# only the build output is copied; the key stays in the discarded stage
COPY --from=builder node_modules node_modules
Notice that in the second stage we don't declare the ARG, and therefore we don't expose it.
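As a side note beyond the original answer: on newer Docker versions with BuildKit, you can avoid putting the key into any layer or build arg at all by mounting the host's ssh agent for a single build step:
# syntax=docker/dockerfile:1
FROM node
# The agent socket is only available during this RUN step;
# nothing is written into any image layer or build arg.
RUN --mount=type=ssh npm install
and then build with docker build --ssh default . (an ssh-agent holding the key must be running on the host).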
According to this:
https://hub.docker.com/_/mysql/
I can set the MySQL root password with:
docker run --name some-mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql:tag
I assumed that MYSQL_ROOT_PASSWORD would be an environment variable that's set using ARG (e.g. Get environment variable value in Dockerfile). However, looking at the Dockerfile (https://github.com/docker-library/mysql/blob/696fc899126ae00771b5d87bdadae836e704ae7d/8.0/Dockerfile), I don't see this ARG.
So, how is this root password being set?
It's actually used in the entrypoint script -
Ref - https://github.com/docker-library/mysql/blob/696fc899126ae00771b5d87bdadae836e704ae7d/8.0/docker-entrypoint.sh
Entrypoint config in Dockerfile -
COPY docker-entrypoint.sh /usr/local/bin/
RUN ln -s usr/local/bin/docker-entrypoint.sh /entrypoint.sh # backwards compat
ENTRYPOINT ["docker-entrypoint.sh"]
Let me clarify a bit about parameters in a Dockerfile.
ARG - only available during the docker image build.
Let's say you want to store the commit hash of your source code in the docker image:
ARG Commit
then you build the docker image:
docker build -t someimage --build-arg Commit=<somehash> .
ENV - values that are available inside docker containers and can also be used as part of RUN commands.
At runtime, you can change ENV variables or add new ones by passing them on the run command line:
docker run -e SOME_VAR=somevar someimage
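To see the difference in action, here is a tiny demo (the image and variable names are made up for illustration):
FROM alpine
ARG BUILD_ONLY=from-arg
ENV RUNTIME_VAR=default
# ARG is visible here, during the build...
RUN echo "build sees: $BUILD_ONLY"
# ...but only ENV survives into the running container.
CMD ["sh", "-c", "echo runtime sees: $RUNTIME_VAR"]
Build and run it with:
docker build -t demo --build-arg BUILD_ONLY=hello .
docker run -e RUNTIME_VAR=overridden demo
The run prints "runtime sees: overridden", while BUILD_ONLY is gone at run time.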
Hope this helps.
Usually, when I develop an application using a docker container as my development test base, I need a shell in it in order to manually run composer, phpunit, npm, bower and various development scripts. I spawn one via the following command:
docker exec -ti <container> /bin/sh
But when the shell is spawned, it is spawned with root permissions. What I want to achieve is to spawn a shell not with root permissions but as a specified user.
How can I do that?
In my case my Dockerfile has the following entries:
FROM php:5.6-fpm-alpine
ARG UID="1000"
ARG GID="1000"
COPY ./entrypoint.sh /usr/local/bin/entrypoint.sh
COPY ./fpm.conf /usr/local/etc/php-fpm.d/zz-docker.conf
RUN chmod +x /usr/local/bin/entrypoint.sh &&\
addgroup -g ${GID} developer &&\
adduser -D -H -S -s /bin/false -G developer -u ${UID} developer
ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]
CMD ["php-fpm"]
And I mount a directory of the project I'm developing from the host into /var/www/html, preserving the user permissions, so I just need the following docker-compose.yml in order to build it:
version: '2'
services:
  php_dev:
    build:
      context: .
      dockerfile: Dockerfile
      args:
        XDEBUG_HOST: 172.17.0.1
        XDEBUG_PORT: 9021
        UID: 1000
        GID: 1000
    image: pcmagas/php_dev
    links:
      - somedb
    volumes:
      - "$SRC_PATH:/var/www/html:Z"
So, by setting the UID and GID to my host user's id and group id, and with the following config for fpm:
[global]
daemonize = no
[www]
listen = 9000
user = developer
group = developer
I manage to run any changes to my code without worrying about mysterious changes to file ownership. But I want to be able to spawn a shell inside the running php_dev container as the developer user, so that any future tool such as composer or npm will run with the appropriate user permissions.
Of course, I guess the same principle applies to other languages as well, for example pip for Python.
In case you need to run the container as a non-root user, you have to add the following line to your Dockerfile:
USER developer
Note that, in order to mount a directory through docker-compose.yml, you have to change the ownership of that directory before running docker-compose up, by executing the following command:
chown UID:GID /path/to/folder/on/host
UID and GID should match the UID and GID of the container's user.
This will make the user able to read and write to the mounted volume without any issues.
Read more about the USER directive.
In the end, putting the line:
USER developer
really works when you spawn a shell via:
docker exec -ti <container> /bin/sh
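Worth adding, beyond the original answer: docker exec can also take the user directly, so you can open a shell as developer without rebuilding the image at all (the container name is a placeholder):
docker exec -ti -u developer <container> /bin/sh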