How can I run ENTRYPOINT as root user? - docker

This is a part of my dockerfile:
COPY ./startup.sh /root/startup.sh
RUN chmod +x /root/startup.sh
ENTRYPOINT ["/root/startup.sh"]
EXPOSE 3306
CMD ["/usr/bin/mysqld_safe"]
USER jenkins
I have to switch to USER jenkins at the end, and I have to run the container as jenkins.
My question now is: how can I run startup.sh as the root user when the container starts?

Delete the USER jenkins line in your Dockerfile.
Change the user at the end of your entrypoint script (/root/startup.sh) by adding su - jenkins (see man su).
Example:
Dockerfile
FROM debian:8
RUN useradd -ms /bin/bash exemple
COPY entrypoint.sh /root/entrypoint.sh
ENTRYPOINT ["/root/entrypoint.sh"]
entrypoint.sh
#!/bin/bash
echo "I am root" && id
su - exemple
# needed to run the CMD parameters
"$@"
Now you can run
$ docker build -t so-test .
$ docker run --rm -it so-test bash
I am root
uid=0(root) gid=0(root) groups=0(root)
exemple@37b01e316a95:~$ id
uid=1000(exemple) gid=1000(exemple) groups=1000(exemple)
This is just a simple example; you can also use the su -c option to run a command as a different user.
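For instance, a minimal sketch (illustrative only, reusing the CMD from the question) of an entrypoint that does its root-only work and then hands the command to the jenkins user with su -c:
#!/bin/bash
# startup.sh runs as root because USER jenkins was removed from the Dockerfile
echo "running startup as $(id -un)"
# run the CMD (e.g. /usr/bin/mysqld_safe) as the jenkins user;
# "$*" joins the CMD arguments into a single command string for su -c
exec su jenkins -c "$*"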

Related

How to create directory in docker image?

I tried mkdir -p but it didn't work.
I have the following Dockerfile:
FROM jenkins/jenkins:2.363-jdk11
ENV PLUGIN_DIR /var/jenkins_home/plugins
RUN echo $PLUGIN_DIR
RUN mkdir -p $PLUGIN_DIR
RUN ls $PLUGIN_DIR
# WORKDIR /var/jenkins_home/plugins # Can't use this, as it changes the permission to root
# which breaks the plugin installation step
# # COPY plugins.txt /usr/share/jenkins/plugins.txt
# # RUN jenkins-plugin-cli -f /usr/share/jenkins/plugins.txt --verbose
#
#
# # disable the setup wizard as we will set up jenkins as code
# ENV JAVA_OPTS -Djenkins.install.runSetupWizard=false
#
# ENV CASC_JENKINS_CONFIG /configs/jcasc.yaml
The build fails!
docker build -t jenkins:test.1 .
Sending build context to Docker daemon 51.2kB
Step 1/5 : FROM jenkins/jenkins:2.363-jdk11
---> 90ff7cc5bfd1
Step 2/5 : ENV PLUGIN_DIR /var/jenkins_home/plugins
---> Using cache
---> 0a158958aab0
Step 3/5 : RUN echo $PLUGIN_DIR
---> Running in ce56ef9146fc
/var/jenkins_home/plugins
Step 4/5 : RUN mkdir -p $PLUGIN_DIR
---> Using cache
---> dbc4e12b9808
Step 5/5 : RUN ls $PLUGIN_DIR
---> Running in 9a0edb027862
I need this because Jenkins deprecated the old plugin installation method. The new CLI installs plugins to /usr/share/jenkins/ref/plugins instead.
Also:
$ docker run -it --rm --entrypoint /bin/bash --name jenkins jenkins:test.1
jenkins@7ad71925f638:/$ ls /var/jenkins_home/
jenkins@7ad71925f638:/$
The official Jenkins image on Docker Hub declares VOLUME /var/jenkins_home, and subsequent changes to that directory (even in derived images) are discarded.
As a workaround, you can execute mkdir in the ENTRYPOINT.
To verify that it's working, you can add a sleep so you can enter the container and check. It works!
FROM jenkins/jenkins:2.363-jdk11
ENV PLUGIN_DIR /var/jenkins_home/plugins
RUN echo $PLUGIN_DIR
USER root
RUN echo "#!/bin/sh \n mkdir -pv $PLUGIN_DIR && sleep inf" > ./mkdir.sh
RUN chmod a+x ./mkdir.sh
USER jenkins
ENTRYPOINT [ "/bin/sh", "-c", "./mkdir.sh"]
After:
docker build . -t <image_name>
docker run -d --name <container_name> <image_name>
docker exec -it <container_name> bash
you will see your directory.
Sources:
https://forums.docker.com/t/simple-mkdir-p-not-working/42179
https://hub.docker.com/_/jenkins
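Separately, since the question notes that the new CLI installs plugins to /usr/share/jenkins/ref/plugins: a possible alternative sketch, based on the commented-out lines in the question (an assumption, not part of the original answer; the official image copies /usr/share/jenkins/ref into /var/jenkins_home at startup, so nothing needs to be created in the volume at build time):
FROM jenkins/jenkins:2.363-jdk11
# install plugins into the ref directory; the entrypoint copies ref/ into
# /var/jenkins_home when the container starts
COPY plugins.txt /usr/share/jenkins/plugins.txt
RUN jenkins-plugin-cli -f /usr/share/jenkins/plugins.txt --verbose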

Using gosu with kubernetes runAsUser security context?

I have a base container that has an ENTRYPOINT that runs as root:
Base container Dockerfile:
FROM docker.io/opensuse/leap:latest
# Add scripts to be executed during startup
COPY startup /startup
ADD https://example.com/install-ca-cert.sh /startup/startup.d/install-ca-cert-base.sh
RUN chmod +x /startup/* /startup/startup.d/*
# Add Tini
ENV TINI_VERSION v0.18.0
ADD https://github.com/krallin/tini/releases/download/${TINI_VERSION}/tini /tini
RUN chmod +x /tini
ENTRYPOINT ["/tini", "--", "/startup/startup.sh"]
And a derived container that uses gosu to perform a root step down after the startup scripts have been run as root:
Derived container Dockerfile:
ADD ./gosu-entrypoint.sh /usr/local/bin/gosu-entrypoint.sh
RUN chmod +x /usr/local/bin/gosu-entrypoint.sh
ENTRYPOINT ["/usr/local/bin/gosu-entrypoint.sh"]
CMD ["whoami"]
gosu-entrypoint.sh:
#!/bin/bash
set -e
# Call original entrypoint (as root)
/tini -s /startup/startup.sh
# If GOSU_USER environment variable is set, execute the specified command as that user
if [ -n "$GOSU_USER" ]; then
useradd --shell /bin/bash --system --user-group --create-home $GOSU_USER
exec /usr/local/bin/gosu $GOSU_USER "$@"
else
# else GOSU_USER environment variable is not set, execute the specified command as the default (root) user
exec "$#"
fi
This all works fine, by setting the GOSU_USER env var and running the container, the startup scripts are executed as root, and the CMD is executed as GOSU_USER:
export GOSU_USER=jim
docker run my-derived-container
# outputs "jim"
...
unset GOSU_USER
docker run my-derived-container
# outputs root
However, I am trying to determine if the above approach (maybe modified) is able to work with the Kubernetes securityContext runAsUser and runAsGroup directives?
spec:
securityContext:
runAsUser: 1000
runAsGroup: 3000
fsGroup: 2000
I think these directives are turned into the containerd equivalent of docker run --user=xxx:yyy, so they wouldn't work, since this:
docker run --user $(id -u):$(id -g) my-derived-container
results in a permissions error due to the startup scripts not being run as root anymore.
I have seen examples of entrypoint.sh scripts that allow the container to be started with the --user flag, but I'm not sure if that's something I can use or not; i.e. even if the --user flag is provided, I still need the startup scripts to run as root:
https://github.com/docker-library/redis/blob/master/5.0/docker-entrypoint.sh#L11
# allow the container to be started with `--user`
if [ "$1" = 'redis-server' -a "$(id -u)" = '0' ]; then
find . \! -user redis -exec chown redis '{}' +
exec gosu redis "$0" "$@"
fi
exec "$#"
Update: Looking again at the above redis example, I'm not sure if it does allow the container to be started with --user as it states; looking at the Dockerfile, redis-server is the CMD passed to the script ($1):
https://github.com/docker-library/redis/blob/master/5.0/Dockerfile#L118
CMD ["redis-server"]
and the redis user is just hardcoded in the above docker-entrypoint.sh.
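For reference, a hedged sketch (an assumption, not a confirmed solution) of how the redis-style root check could be combined with the gosu-entrypoint.sh above; note that under runAsUser the script already starts as the non-root user, so the root-only startup steps would simply be skipped rather than run as root:
#!/bin/bash
set -e
if [ "$(id -u)" = '0' ]; then
    # started as root (no runAsUser / --user): run the root-only startup,
    # then optionally step down to GOSU_USER
    /tini -s /startup/startup.sh
    if [ -n "$GOSU_USER" ]; then
        useradd --shell /bin/bash --system --user-group --create-home "$GOSU_USER"
        exec /usr/local/bin/gosu "$GOSU_USER" "$@"
    fi
fi
# already non-root (e.g. runAsUser), or GOSU_USER not set: just run the command
exec "$@"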

Why am I bounced from the Docker container?

FROM docker.elastic.co/elasticsearch/elasticsearch:5.5.2
USER root
WORKDIR /usr/share/elasticsearch/
ENV ES_HOSTNAME elasticsearch
ENV ES_PORT 9200
RUN chown elasticsearch:elasticsearch config/elasticsearch.yml
RUN chown -R elasticsearch:elasticsearch data
# install security plugin
RUN bin/elasticsearch-plugin install -b com.floragunn:search-guard-5:5.5.2-16
COPY ./safe-guard/install_demo_configuration.sh plugins/search-guard-5/tools/
COPY ./safe-guard/init-sgadmin.sh plugins/search-guard-5/tools/
RUN chmod +x plugins/search-guard-5/tools/init-sgadmin.sh
ADD ./run.sh .
RUN chmod +x run.sh
RUN chmod +x plugins/search-guard-5/tools/install_demo_configuration.sh
RUN ./plugins/search-guard-5/tools/install_demo_configuration.sh -y
RUN chmod +x sgadmin_demo.sh
RUN yum install tree -y
#RUN curl -k -u admin:admin https://localhost:9200/_searchguard/authinfo
RUN usermod -aG wheel elasticsearch
USER elasticsearch
EXPOSE 9200
#ENTRYPOINT ["nohup", "./run.sh", "&"]
ENTRYPOINT ["/usr/share/elasticsearch/run.sh"]
#CMD ["echo", "hello"]
Once I add either CMD or ENTRYPOINT, the container exits with code 0. My run.sh is:
#!/bin/bash
exec "$@"
If I comment ENTRYPOINT or CMD - all is great.
What am I doing wrong?
If you take a look at the official 5.6.9 elasticsearch Dockerfile, you will see the following at the bottom:
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["elasticsearch"]
If you do not know the difference between CMD and ENTRYPOINT, read this answer.
What you're doing is overwriting those two instructions with something else. (Your container exits with code 0 because declaring your own ENTRYPOINT resets the CMD inherited from the base image, so run.sh receives no arguments and exec "$@" has nothing to run.) What you really need is to extend CMD. What I usually do in my images is create a shell script that combines the different things I need, and then point CMD at that script. So, you need to run sgadmin_demo.sh, but you need to wait for elasticsearch first. Create a start.sh script:
#!/bin/bash
# start elasticsearch in the background so the script can continue
elasticsearch &
# give elasticsearch time to come up before running sgadmin
sleep 15
sgadmin_demo.sh
# keep the container alive as long as elasticsearch runs
wait
Now, add your script to your image and run it as the CMD:
FROM ...
...
COPY start.sh /tmp/start.sh
CMD ["/tmp/start.sh"]
Now it should be executed once you start a container. Don't forget to build :)

How to fix permissions for an Alpine image writing files using Cron as non root user into accessible volume

I'm trying to create a multi-stage build in Docker which simply runs a non-root crontab that writes to a volume accessible from outside the container. I have two permission problems: one with external access to the volume and one with cron:
the first stage in the Dockerfile creates a non-root user image with an entrypoint and su-exec, useful to fix permissions on the volume!
the second stage in the same Dockerfile uses the first image to run a crond process, which should write to the /backup folder.
The docker-compose.yml file to build the dockerfile:
version: '3.4'
services:
scrap_service:
build: .
container_name: "flight_scrap"
volumes:
- /home/rey/Volumes/mongo/backup:/backup
In the first stage of the Dockerfile (1), I try to adapt the answer given by Denis Bertovic to an Alpine image
############################################################
# STAGE 1
############################################################
# Create first stage image
FROM gliderlabs/alpine:edge as baseStage
RUN echo http://nl.alpinelinux.org/alpine/edge/testing >> /etc/apk/repositories
RUN apk add --update && apk add -f gnupg ca-certificates curl dpkg su-exec shadow
COPY docker-entrypoint.sh /usr/local/bin/docker-entrypoint.sh
# ADD NON ROOT USER, i hard fix value to 1000, my current id
RUN addgroup scrapy \
&& adduser -h /home/scrapy -u 1000 -S -G scrapy scrapy
ENTRYPOINT ["/usr/local/bin/docker-entrypoint.sh"]
My docker-entrypoint.sh to fix permissions is:
#!/usr/bin/env bash
chown -R scrapy .
exec su-exec scrapy "$@"
The second stage (2) runs the cron service to write into the /backup folder mounted as a volume
############################################################
# STAGE 2
############################################################
FROM baseStage
MAINTAINER rey
ENV TZ=UTC
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
RUN apk add busybox-suid
RUN apk add -f tini bash build-base curl
# CREATE FUTURE VOLUME FOLDER WRITEABLE BY SCRAPY USER
RUN mkdir /backup && chown scrapy:scrapy /backup
# INIT NON ROOT USER CRON CRONTAB
COPY crontab /var/spool/cron/crontabs/scrapy
RUN chmod 0600 /var/spool/cron/crontabs/scrapy
RUN chown scrapy:scrapy /var/spool/cron/crontabs/scrapy
RUN touch /var/log/cron.log
RUN chown scrapy:scrapy /var/log/cron.log
# Switch to user SCRAPY already created in stage 1
WORKDIR /home/scrapy
USER scrapy
# SET TIMEZONE https://serverfault.com/questions/683605/docker-container-time-timezone-will-not-reflect-changes
VOLUME /backup
ENTRYPOINT ["/sbin/tini"]
CMD ["crond", "-f", "-l", "8", "-L", "/var/log/cron.log"]
The crontab file, which should create a test file in the /backup volume folder:
* * * * * touch /backup/testCRON
DEBUG phase:
Logging into the image with bash, it seems the image correctly runs as the scrapy user:
uid=1000(scrapy) gid=1000(scrapy) groups=1000(scrapy)
The crontab -e command also gives the correct information
First error: cron doesn't run correctly; when I cat /var/log/cron.log I get a permission denied error:
crond: crond (busybox 1.27.2) started, log level 8
crond: root: Permission denied
crond: root: Permission denied
I also get a second error when I try to write directly into the /backup folder using the command touch /backup/testFile. The /backup volume folder continues to be accessible only with root permissions; I don't know why.
crond or cron needs to run as root, as described in this answer.
Instead, check out aptible/supercronic, a crontab-compatible job runner designed specifically to run in containers. It will accommodate any user you have created (see the sketch below).
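A minimal sketch of that approach under the question's setup (the release version/URL is an assumption; check the supercronic releases page for a current binary):
FROM alpine:3.18
# hypothetical version: substitute a current release from
# https://github.com/aptible/supercronic/releases
ARG SUPERCRONIC_VERSION=v0.2.29
RUN apk add --no-cache wget ca-certificates \
 && wget -O /usr/local/bin/supercronic \
      "https://github.com/aptible/supercronic/releases/download/${SUPERCRONIC_VERSION}/supercronic-linux-amd64" \
 && chmod +x /usr/local/bin/supercronic
RUN addgroup scrapy \
 && adduser -h /home/scrapy -u 1000 -S -G scrapy scrapy \
 && mkdir /backup && chown scrapy:scrapy /backup
# same crontab as in the question: * * * * * touch /backup/testCRON
COPY crontab /home/scrapy/crontab
USER scrapy
# supercronic runs in the foreground and needs no root or setuid helpers
CMD ["supercronic", "/home/scrapy/crontab"]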

Dockerfile: create ENV variable that a USER can see?

Is there a way to set an ENV variable for a custom USER in a docker file?
I am trying the following:
FROM some_repo/my_base_image
ENV FOO_VAR bar_value
USER webapp
# ... continued (not important)
But my "webapp" user can not see the "FOO_VAR" variable. HOWEVER, my root user CAN.
Any help would be appreciated.
Any user can see the environment variables:
$ cat Dockerfile
FROM debian
ENV foo bar
RUN groupadd -r am && useradd -r -g am am
USER am
$ docker build -t test .
...
$ docker run test bash -c 'echo $foo'
bar
So that's not what the problem is. It may be that your process forked a new environment, but I can't be sure as you haven't shared how you're checking the value.
If you switch user context using su within the Dockerfile's ENTRYPOINT, CMD, or docker exec ... using the form below, you enter a new login shell for the given username, and that shell does not preserve the environment variables provided by ENV in the Dockerfile, docker-compose YAML, or docker run -e ...:
> su - username -c "run a process"
To avoid this behavior simply remove the dash - from the call like so:
> su username -c "run a process"
Your assigned docker environment variables will now persist.
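A small illustration (an assumption, reusing the debian-based test image from the answer above, which sets ENV foo bar and creates user am; -u root overrides the image's USER so su can run without a password):
$ docker run -u root test su - am -c 'echo "foo=$foo"'
foo=
$ docker run -u root test su am -c 'echo "foo=$foo"'
foo=bar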
For future reference, this also holds true within the Dockerfile (and not just for any container's user during run-time):
$ cat Dockerfile
FROM library/debian:9.5
ENV FOO="BAR"
RUN groupadd -r testuser && useradd -r -g testuser testuser
RUN mkdir -p /home/testuser && chown -R testuser /home/testuser
RUN echo "${FOO}" && echo "meh.${FOO}.blah"
USER testuser
RUN echo "${FOO}" && echo "meh.${FOO}.blah" | tee -a ~/test.xt
And docker build:
$ docker build -t test .
Sending build context to Docker daemon 2.048kB
Step 1/7 : FROM library/debian:9.5
---> be2868bebaba
Step 2/7 : ENV FOO="BAR"
---> Running in f2cd5ecca056
Removing intermediate container f2cd5ecca056
---> f6f7b3f26cad
Step 3/7 : RUN groupadd -r testuser && useradd -r -g testuser testuser
---> Running in ab9c0726cc1e
Removing intermediate container ab9c0726cc1e
---> dc9f2a35fb09
Step 4/7 : RUN mkdir -p /home/testuser && chown -R testuser /home/testuser
---> Running in 108b1c03323d
Removing intermediate container 108b1c03323d
---> 4a63e70fc886
Step 5/7 : RUN echo "${FOO}" && echo "meh.${FOO}.blah"
---> Running in 9dcdd6b73e7d
BAR
meh.BAR.blah
Removing intermediate container 9dcdd6b73e7d
---> c33504cadc37
Step 6/7 : USER testuser
---> Running in 596b0588dde6
Removing intermediate container 596b0588dde6
---> 075e2c861021
Step 7/7 : RUN echo "${FOO}" && echo "meh.${FOO}.blah" | tee -a ~/test.xt
---> Running in fb2648d8c120
BAR
meh.BAR.blah
Removing intermediate container fb2648d8c120
---> c7c1c69e200f
Successfully built c7c1c69e200f
Successfully tagged test:latest
(Yet for some reason it doesn't work for me in my own project, when I use the variables as a part of a curl URL target...)
Here's what worked for me after browsing around the web looking for the answer:
In the Dockerfile:
...
RUN apt install sudo -y
ENV MY_VAR="some value"
...
Now inside the container (or in my case, in the script I wrote to run inside it):
sudo -E -u my_user env # <- switch here to whatever command you want to execute
-E stands for --preserve-env, which means the env vars of the root user will be passed to my_user.
Here's my reference:
https://dev.to/pfreitag/passing-environment-variables-with-sudo-1ej6
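A minimal sketch of wiring this into an entrypoint (assuming my_user already exists in the image and sudo is installed as above):
#!/bin/bash
# preserve root's environment (including MY_VAR) while dropping to my_user
exec sudo -E -u my_user "$@"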
