I have a base container that has an ENTRYPOINT that runs as root:
Base container Dockerfile:
FROM docker.io/opensuse/leap:latest
# Add scripts to be executed during startup
COPY startup /startup
ADD https://example.com/install-ca-cert.sh /startup/startup.d/install-ca-cert-base.sh
RUN chmod +x /startup/* /startup/startup.d/*
# Add Tini
ENV TINI_VERSION v0.18.0
ADD https://github.com/krallin/tini/releases/download/${TINI_VERSION}/tini /tini
RUN chmod +x /tini
ENTRYPOINT ["/tini", "--", "/startup/startup.sh"]
And a derived container that uses gosu to perform a root step down after the startup scripts have been run as root:
Derived container Dockerfile:
# (the name of the base image built from the Dockerfile above is assumed here,
#  and gosu is assumed to already be installed at /usr/local/bin/gosu)
FROM my-base-image
ADD ./gosu-entrypoint.sh /usr/local/bin/gosu-entrypoint.sh
RUN chmod +x /usr/local/bin/gosu-entrypoint.sh
ENTRYPOINT ["/usr/local/bin/gosu-entrypoint.sh"]
CMD ["whoami"]
gosu-entrypoint.sh:
#!/bin/bash
set -e
# Call original entrypoint (as root)
/tini -s /startup/startup.sh
# If GOSU_USER environment variable is set, execute the specified command as that user
if [ -n "$GOSU_USER" ]; then
    useradd --shell /bin/bash --system --user-group --create-home $GOSU_USER
    exec /usr/local/bin/gosu $GOSU_USER "$@"
else
    # else GOSU_USER environment variable is not set, execute the specified command as the default (root) user
    exec "$@"
fi
This all works fine: by setting the GOSU_USER env var and running the container, the startup scripts are executed as root and the CMD is executed as GOSU_USER:
export GOSU_USER=jim
docker run -e GOSU_USER my-derived-container
# outputs "jim"
...
unset GOSU_USER
docker run my-derived-container
# outputs root
However, I am trying to determine whether the above approach (perhaps with modifications) can work with the Kubernetes securityContext runAsUser and runAsGroup directives:
spec:
  securityContext:
    runAsUser: 1000
    runAsGroup: 3000
    fsGroup: 2000
I think these directives are translated into the containerd equivalent of docker run --user=xxx:yyy, so they wouldn't work, since this:
docker run --user $(id -u):$(id -g) my-derived-container
results in a permissions error because the startup scripts are no longer run as root.
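For illustration, a hypothetical guard at the top of gosu-entrypoint.sh (not something I currently have) would make this failure mode explicit whenever the container is started with a non-root UID:
#!/bin/bash
set -e
# Hypothetical guard: the scripts under /startup/startup.d need root, so fail
# fast if runAsUser / --user started this container as a non-root UID.
if [ "$(id -u)" != "0" ]; then
    echo "ERROR: started as UID $(id -u), but /startup/startup.sh requires root" >&2
    exit 1
fi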
I have seen examples of entrypoint.sh scripts that allow the container to be started with the --user flag, but I am not sure whether that is something I can use here: even if the --user flag is provided, I still need the startup scripts to run as root:
https://github.com/docker-library/redis/blob/master/5.0/docker-entrypoint.sh#L11
# allow the container to be started with `--user`
if [ "$1" = 'redis-server' -a "$(id -u)" = '0' ]; then
    find . \! -user redis -exec chown redis '{}' +
    exec gosu redis "$0" "$@"
fi
exec "$@"
Update: Looking again at the above redis example, I'm not sure it does allow the container to be started with --user as it claims; looking at the Dockerfile, redis-server is the CMD passed to the script ($1):
https://github.com/docker-library/redis/blob/master/5.0/Dockerfile#L118
CMD ["redis-server"]
and the redis user is just hardcoded in the above docker-entrypoint.sh.
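Putting the two ideas together, here is a rough sketch of what I have in mind (GOSU_USER comes from my script above, the id -u check from the redis example). Note it simply skips the root-only startup when the container does not start as root, which may or may not be acceptable depending on what the startup scripts do:
#!/bin/bash
set -e
if [ "$(id -u)" = "0" ]; then
    # Started as root: run the root-only startup scripts, then step down.
    /tini -s /startup/startup.sh
    if [ -n "$GOSU_USER" ]; then
        useradd --shell /bin/bash --system --user-group --create-home $GOSU_USER
        exec /usr/local/bin/gosu $GOSU_USER "$@"
    fi
    exec "$@"
fi
# Started as non-root (--user / runAsUser): the root-only startup is skipped.
exec "$@"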
Related
I am trying to launch multiple startup scripts to automate my CI/CD tasks, but I only get the output of the entrypoint.sh. How can I force the other scripts to execute as well?
entrypoint.sh
#!/bin/sh
IFS=$',\n' ## set IFS to break on comma or newline
for host in $HOSTS; do
## mkdir -p "letsencrypt/live/${host}/fullchain.pem"
echo "mkdir -p letsencrypt/live/${host}/fullchain.pem"
done
init-letsencrypt.sh
#!/bin/sh
echo "cool"
xxxxx:~/xx$ docker-compose logs nginx
Attaching to platform_nginx_1
nginx_1 | mkdir -p letsencrypt/live/domain.io/fullchain.pem
nginx_1 | mkdir -p letsencrypt/live/www.domain.io/fullchain.pem
nginx_1 | mkdir -p letsencrypt/live/api.domain.io/fullchain.pem
nginx_1 | mkdir -p letsencrypt/live/app.domain.io/fullchain.pem
FROM nginx:1.19.0-alpine
# Install certbot for letsencrypt certificates
RUN apk add --no-cache certbot
COPY . /etc/nginx/
# Directory needed for Let's Encrypt certificate renewal
RUN mkdir /var/lib/certbot
# Add scripts and auto-renewal scripts
COPY ./bin/entrypoint.sh /entrypoint.sh
COPY ./bin/init-letsencrypt.sh /init-letsencrypt.sh
# Make them executable
RUN chmod +x /entrypoint.sh
RUN chmod +x /init-letsencrypt.sh
# Install certificates and launch
ENTRYPOINT /entrypoint.sh
To run something at build time, you should use RUN.
CMD and ENTRYPOINT are used to launch the main process of your container. A container is "just" a process that is encapsulated in namespaces, basically. A container runs until this process stops or dies. When you specify your entrypoint.sh as an entrypoint, you are saying that the main process of your container is this script. To say it differently: the only goal of this container is to execute this script and then die.
You should use RUN to launch both of your scripts, then CMD or ENTRYPOINT to launch your nginx (most probably ENTRYPOINT, you will get why if you read the docs ;))
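If both scripts really do need to run when the container starts (rather than at build time), another common pattern is to chain them in a single entrypoint and finish by exec'ing nginx, so nginx stays the main process. A rough sketch, reusing the paths from your Dockerfile:
#!/bin/sh
# Hypothetical combined entrypoint: do the one-off setup first,
# then hand PID 1 over to nginx so it stays the container's main process.
# (the HOSTS loop from the current entrypoint.sh could go here)
/init-letsencrypt.sh
exec nginx -g 'daemon off;'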
I am trying to fix some tests we're running on Jenkins with Docker, but the script that the ENTRYPOINT in my Dockerfile points to keeps running as root, even though I set the USER in the Dockerfile. This works fine on my local machine but not when running on our Jenkins box.
I've tried running su within my entrypoint script to make sure that the rest of the script runs as the correct user, but it still runs as root.
So my Dockerfile looks like this:
FROM python:3.6
RUN apt-get update && apt-get install -y gettext libgettextpo-dev
ARG DOCKER_UID # set to 2000 in docker-compose file
ARG ENV=prod
ENV ENV=${ENV}
ARG WORKERS=2
ENV WORKERS=${WORKERS}
RUN useradd -u ${DOCKER_UID} -ms /bin/bash app
RUN chmod -R 777 /home/app
ENV PYTHONUNBUFFERED 1
ADD . /code
WORKDIR /code
RUN chown -R app:app /code
RUN mkdir /platform
RUN chown -R app:app /platform
RUN pip install --upgrade pip
RUN whoami # outputs `root`
USER app
RUN whoami # outputs `app`
RUN .docker/deploy/install_requirements.sh $ENV # runs as `app`
EXPOSE 8000
ENTRYPOINT [".docker/deploy/start.sh", "$ENV"]
and my start.sh looks like:
#!/bin/bash
ENV=$1
echo "USER"
echo `whoami`
echo Running migrations...
python manage.py migrate
mkdir -p static
chmod -R 0755 static
cd /code/
if [ "$ENV" == "performance-dev" ];
then
/home/app/.local/bin/uwsgi --ini .docker/deploy/uwsgi.ini -p 4 --uid app
else
/home/app/.local/bin/uwsgi --ini .docker/deploy/uwsgi.ini --uid app
fi
but the
echo "USER"
echo `whoami`
outputs:
USER
root
which causes commands later in the script to fail, as they're run as the wrong user.
I'd expect the output to be
USER
app
and my understanding is that this issue is typically resolved by setting USER in the Dockerfile, but I do that, and the user does appear to switch while the Dockerfile itself is being built.
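For reference, this is how I checked the effective user from outside the entrypoint (just debugging commands, not part of the build; service is the compose service name from the config below):
docker-compose up -d service
docker-compose exec service whoami   # prints root on the Jenkins box
docker-compose exec service id -u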
Edit
The issue was with my docker-compose configuration. My docker-compose config looks like:
version: '3'
services:
  service:
    user: "${DOCKER_UID}:${DOCKER_UID}"
    build:
      context: .
      dockerfile: .docker/Dockerfile
      args:
        - ENV=prod
        - DOCKER_UID=2000
DOCKER_UID is a variable set on my local machine but not on the Jenkins box, so I set it to 2000 in the override file
The issue I was having, as David Maze pointed out in the comments, was that I was setting the user when actually running the container, via my docker-compose file. I had set the user param to ${DOCKER_UID}, which was never actually set anywhere, so it was defaulting to an empty string. Setting it to 2000 fixed my issue.
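To avoid the silent empty-string substitution on the Jenkins box, I could also set a default before calling docker-compose; an illustrative sketch only (2000 matches the UID used in the build args):
# Make sure DOCKER_UID is defined before docker-compose substitutes it
# into the `user:` field, falling back to the UID baked into the image.
export DOCKER_UID="${DOCKER_UID:-2000}"
docker-compose build
docker-compose up -d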
FROM docker.elastic.co/elasticsearch/elasticsearch:5.5.2
USER root
WORKDIR /usr/share/elasticsearch/
ENV ES_HOSTNAME elasticsearch
ENV ES_PORT 9200
RUN chown elasticsearch:elasticsearch config/elasticsearch.yml
RUN chown -R elasticsearch:elasticsearch data
# install security plugin
RUN bin/elasticsearch-plugin install -b com.floragunn:search-guard-5:5.5.2-16
COPY ./safe-guard/install_demo_configuration.sh plugins/search-guard-5/tools/
COPY ./safe-guard/init-sgadmin.sh plugins/search-guard-5/tools/
RUN chmod +x plugins/search-guard-5/tools/init-sgadmin.sh
ADD ./run.sh .
RUN chmod +x run.sh
RUN chmod +x plugins/search-guard-5/tools/install_demo_configuration.sh
RUN ./plugins/search-guard-5/tools/install_demo_configuration.sh -y
RUN chmod +x sgadmin_demo.sh
RUN yum install tree -y
#RUN curl -k -u admin:admin https://localhost:9200/_searchguard/authinfo
RUN usermod -aG wheel elasticsearch
USER elasticsearch
EXPOSE 9200
#ENTRYPOINT ["nohup", "./run.sh", "&"]
ENTRYPOINT ["/usr/share/elasticsearch/run.sh"]
#CMD ["echo", "hello"]
Once I add either CMD or ENTRYPOINT, the container exits with code 0. run.sh is simply:
#!/bin/bash
exec "$@"
If I comment out ENTRYPOINT or CMD, all is great.
What am I doing wrong?
If you take a look at the official 5.6.9 elasticsearch Dockerfile, you will see the following at the bottom:
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["elasticsearch"]
If you do not know the difference between CMD and ENTRYPOINT, read this answer.
What you're doing is overwriting those two instructions with something else. What you really need is to extend CMD. What I usually do in my images is create an sh script that combines the different things I need, and then point CMD at that script. So, you need to run sgadmin_demo.sh, but you need to wait for elasticsearch first. Create a start.sh script:
#!/bin/bash
# Start elasticsearch in the background, give it time to come up, then run the
# Search Guard demo setup; `wait` keeps elasticsearch as the foreground process.
elasticsearch &
sleep 15
./sgadmin_demo.sh
wait
Now, add your script to your image and run it on CMD:
FROM ...
...
COPY start.sh /tmp/start.sh
CMD ["/tmp/start.sh"]
Now it should be executed once you start a container. Don't forget to build :)
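If the fixed 15-second sleep turns out to be flaky, you can poll the port instead. A sketch only; it assumes the node answers over https on port 9200, as suggested by the commented-out curl line in your Dockerfile:
#!/bin/bash
# Variant of start.sh: wait until elasticsearch answers before running sgadmin.
elasticsearch &
until curl -k -s https://localhost:9200 -o /dev/null; do
    echo "waiting for elasticsearch..."
    sleep 2
done
./sgadmin_demo.sh
wait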
I'm trying to create a multi-stage build in Docker which simply runs a non-root crontab that writes to a volume accessible from outside the container. I have two permission problems: one with external access to the volume, and one with cron:
the first stage in the Dockerfile creates a non-root user image, with an entrypoint and su-exec used to fix permissions on the volume;
the second stage in the same Dockerfile uses the first image to run a crond process which should write to the /backup folder.
The docker-compose.yml file to build the dockerfile:
version: '3.4'
services:
  scrap_service:
    build: .
    container_name: "flight_scrap"
    volumes:
      - /home/rey/Volumes/mongo/backup:/backup
In the first stage of the Dockerfile (1), I try to adapt the answer given by denis bertovic to an Alpine image:
############################################################
# STAGE 1
############################################################
# Create first stage image
FROM gliderlabs/alpine:edge as baseStage
RUN echo http://nl.alpinelinux.org/alpine/edge/testing >> /etc/apk/repositories
RUN apk add --update && apk add -f gnupg ca-certificates curl dpkg su-exec shadow
COPY docker-entrypoint.sh /usr/local/bin/docker-entrypoint.sh
# ADD NON ROOT USER, i hard fix value to 1000, my current id
RUN addgroup scrapy \
&& adduser -h /home/scrapy -u 1000 -S -G scrapy scrapy
ENTRYPOINT ["/usr/local/bin/docker-entrypoint.sh"]
My docker-entrypoint.sh to fix permission is:
#!/usr/bin/env bash
chown -R scrapy .
exec su-exec scrapy "$@"
The second stage (2) runs the cron service to write into the /backup folder mounted as a volume:
############################################################
# STAGE 2
############################################################
FROM baseStage
MAINTAINER rey
ENV TZ=UTC
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
RUN apk add busybox-suid
RUN apk add -f tini bash build-base curl
# CREATE FUTURE VOLUME FOLDER WRITEABLE BY SCRAPY USER
RUN mkdir /backup && chown scrapy:scrapy /backup
# INIT NON ROOT USER CRON CRONTAB
COPY crontab /var/spool/cron/crontabs/scrapy
RUN chmod 0600 /var/spool/cron/crontabs/scrapy
RUN chown scrapy:scrapy /var/spool/cron/crontabs/scrapy
RUN touch /var/log/cron.log
RUN chown scrapy:scrapy /var/log/cron.log
# Switch to user SCRAPY already created in stage 1
WORKDIR /home/scrapy
USER scrapy
# SET TIMEZONE https://serverfault.com/questions/683605/docker-container-time-timezone-will-not-reflect-changes
VOLUME /backup
ENTRYPOINT ["/sbin/tini"]
CMD ["crond", "-f", "-l", "8", "-L", "/var/log/cron.log"]
The crontab file, which should create a test file in the /backup volume folder:
* * * * * touch /backup/testCRON
DEBUG phase:
Logging into my image with bash, it seems the image correctly runs as the scrapy user:
uid=1000(scrapy) gid=1000(scrapy) groups=1000(scrapy)
The crontab -e command also gives the correct information.
But the first error: cron doesn't run correctly; when I cat /var/log/cron.log I get a permission denied error:
crond: crond (busybox 1.27.2) started, log level 8
crond: root: Permission denied
crond: root: Permission denied
I also have a second error when I try to write directly into the /backup folder using the command touch /backup/testFile: the /backup volume folder continues to be accessible only with root permissions, and I don't know why.
crond or cron should be run as root, as described in this answer.
Instead, check out aptible/supercronic, a crontab-compatible job runner designed specifically to run in containers. It will accommodate any user you have created.
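For example, once the supercronic binary is installed in the image, your CMD could boil down to something like this (a sketch only; it reuses the crontab path from your Dockerfile and runs entirely as the scrapy user):
# Hypothetical replacement for the crond CMD: supercronic takes the crontab
# file as an argument and does not need root.
exec supercronic /var/spool/cron/crontabs/scrapy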
This is part of my Dockerfile:
COPY ./startup.sh /root/startup.sh
RUN chmod +x /root/startup.sh
ENTRYPOINT ["/root/startup.sh"]
EXPOSE 3306
CMD ["/usr/bin/mysqld_safe"]
USER jenkins
I have to switch to USER jenkins at the end, and I have to run the container as jenkins.
My question now is: how can I run startup.sh as the root user when the container starts?
Delete the USER jenkins line in your Dockerfile.
Change the user at the end of your entrypoint script (/root/startup.sh)
by adding: su - jenkins (see man su).
Example:
Dockerfile
FROM debian:8
RUN useradd -ms /bin/bash exemple
COPY entrypoint.sh /root/entrypoint.sh
ENTRYPOINT "/root/entrypoint.sh"
entrypoint.sh
#!/bin/bash
echo "I am root" && id
su - exemple
# needed to run the CMD parameters
"$@"
Now you can run
$ docker build -t so-test .
$ docker run --rm -it so-test bash
I am root
uid=0(root) gid=0(root) groups=0(root)
exemple#37b01e316a95:~$ id
uid=1000(exemple) gid=1000(exemple) groups=1000(exemple)
It's just a simple example; you can also use the su -c option to run a command as a different user.
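For instance, a hypothetical variant of entrypoint.sh that runs the CMD arguments as the exemple user (instead of dropping into an interactive shell) could look like this; note that the "$*" quoting only suits simple commands:
#!/bin/bash
# Root-only work happens here first...
echo "I am root" && id
# ...then run the CMD passed to the container as the non-root user.
exec su - exemple -c "$*"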