Dockerfile: create ENV variable that a USER can see?

Is there a way to set an ENV variable for a custom USER in a docker file?
I am trying the following:
FROM some_repo/my_base_image
ENV FOO_VAR bar_value
USER webapp
# ... continued (not important)
But my "webapp" user can not see the "FOO_VAR" variable. HOWEVER, my root user CAN.
Any help would be appreciated.

Any user can see the environment variables:
$ cat Dockerfile
FROM debian
ENV foo bar
RUN groupadd -r am && useradd -r -g am am
USER am
$ docker build -t test .
...
$ docker run test bash -c 'echo $foo'
bar
So that's not what the problem is. It may be that your process forked a new environment, but I can't be sure as you haven't shared how you're checking the value.
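For example, you can also force a non-root user at run time and still see the value (a quick check against the same image built above):
$ docker run --user am test sh -c 'echo $foo'
bar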

If you switch user context using su within the Dockerfile's ENTRYPOINT, CMD, or a docker exec ... call using the form below, you start a new login shell for the given username, and that shell does not keep the environment variables you set via ENV in the Dockerfile, a docker-compose YAML, or docker run -e ...:
> su - username -c "run a process"
To avoid this behavior, simply remove the dash - from the call, like so:
> su username -c "run a process"
Your assigned docker environment variables will now persist.
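A minimal sketch of the difference (the image, user name, and variable are assumptions, not from the original post):
FROM debian
ENV FOO=bar
RUN useradd -ms /bin/bash webapp
# Login shell (with the dash): the environment is reset, so FOO prints empty
RUN su - webapp -c 'echo "login shell sees: [$FOO]"'
# Plain su (no dash): the ENV value is preserved and prints bar
RUN su webapp -c 'echo "plain su sees: [$FOO]"'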

For future reference, this also holds true within the Dockerfile (and not just for any container's user during run-time):
$ cat Dockerfile
FROM library/debian:9.5
ENV FOO="BAR"
RUN groupadd -r testuser && useradd -r -g testuser testuser
RUN mkdir -p /home/testuser && chown -R testuser /home/testuser
RUN echo "${FOO}" && echo "meh.${FOO}.blah"
USER testuser
RUN echo "${FOO}" && echo "meh.${FOO}.blah" | tee -a ~/test.xt
And docker build:
$ docker build -t test .
Sending build context to Docker daemon 2.048kB
Step 1/7 : FROM library/debian:9.5
---> be2868bebaba
Step 2/7 : ENV FOO="BAR"
---> Running in f2cd5ecca056
Removing intermediate container f2cd5ecca056
---> f6f7b3f26cad
Step 3/7 : RUN groupadd -r testuser && useradd -r -g testuser testuser
---> Running in ab9c0726cc1e
Removing intermediate container ab9c0726cc1e
---> dc9f2a35fb09
Step 4/7 : RUN mkdir -p /home/testuser && chown -R testuser /home/testuser
---> Running in 108b1c03323d
Removing intermediate container 108b1c03323d
---> 4a63e70fc886
Step 5/7 : RUN echo "${FOO}" && echo "meh.${FOO}.blah"
---> Running in 9dcdd6b73e7d
BAR
meh.BAR.blah
Removing intermediate container 9dcdd6b73e7d
---> c33504cadc37
Step 6/7 : USER testuser
---> Running in 596b0588dde6
Removing intermediate container 596b0588dde6
---> 075e2c861021
Step 7/7 : RUN echo "${FOO}" && echo "meh.${FOO}.blah" | tee -a ~/test.xt
---> Running in fb2648d8c120
BAR
meh.BAR.blah
Removing intermediate container fb2648d8c120
---> c7c1c69e200f
Successfully built c7c1c69e200f
Successfully tagged test:latest
(Yet for some reason it doesn't work for me in my own project, when I use the variables as part of a curl URL target...)

Here's what worked for me after browsing around the web looking for the answer:
In the Dockerfile:
...
RUN apt install sudo -y
ENV MY_VAR="some value"
...
Now inside the container (or, in my case, in the script I wrote to run inside it):
sudo -E -u my_user env # <- switch here to whatever command you want to execute
-E stands for --preserve-env, which means the env vars of the root user will be passed to my_user.
Here's my reference:
https://dev.to/pfreitag/passing-environment-variables-with-sudo-1ej6
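Putting it together, a minimal sketch (the base image, user name, and entrypoint command are assumptions, not from the original post):
FROM debian:bullseye-slim
RUN apt-get update && apt-get install -y sudo \
 && useradd -ms /bin/bash my_user
ENV MY_VAR="some value"
# Stay root until runtime, then drop to my_user while keeping the environment
ENTRYPOINT ["sudo", "-E", "-u", "my_user", "env"]
Running the image and grepping the output for MY_VAR should show the value, confirming it survived the user switch.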

Related

How to create directory in docker image?

I tried mkdir -p but it didn't work.
I have the following Dockerfile:
FROM jenkins/jenkins:2.363-jdk11
ENV PLUGIN_DIR /var/jenkins_home/plugins
RUN echo $PLUGIN_DIR
RUN mkdir -p $PLUGIN_DIR
RUN ls $PLUGIN_DIR
# WORKDIR /var/jenkins_home/plugins # Can't use this, as it changes the permission to root
# which breaks the plugin installation step
# # COPY plugins.txt /usr/share/jenkins/plugins.txt
# # RUN jenkins-plugin-cli -f /usr/share/jenkins/plugins.txt --verbose
#
#
# # disable the setup wizard as we will set up jenkins as code
# ENV JAVA_OPTS -Djenkins.install.runSetupWizard=false
#
# ENV CASC_JENKINS_CONFIG /configs/jcasc.yaml
The build fails!
docker build -t jenkins:test.1 .
Sending build context to Docker daemon 51.2kB
Step 1/5 : FROM jenkins/jenkins:2.363-jdk11
---> 90ff7cc5bfd1
Step 2/5 : ENV PLUGIN_DIR /var/jenkins_home/plugins
---> Using cache
---> 0a158958aab0
Step 3/5 : RUN echo $PLUGIN_DIR
---> Running in ce56ef9146fc
/var/jenkins_home/plugins
Step 4/5 : RUN mkdir -p $PLUGIN_DIR
---> Using cache
---> dbc4e12b9808
Step 5/5 : RUN ls $PLUGIN_DIR
---> Running in 9a0edb027862
I need this because Jenkins deprecated the old plugin installation method. The new CLI installs plugins to /usr/share/jenkins/ref/plugins instead.
Also:
$ docker run -it --rm --entrypoint /bin/bash --name jenkins jenkins:test.1
jenkins@7ad71925f638:/$ ls /var/jenkins_home/
jenkins@7ad71925f638:/$
The official Jenkins image on Docker Hub declares VOLUME /var/jenkins_home, and subsequent changes to that directory (even in derived images) are discarded.
As a workaround, you can execute mkdir from the ENTRYPOINT.
To verify that it's working, you can add a sleep so you can enter the container and check. It works!
FROM jenkins/jenkins:2.363-jdk11
ENV PLUGIN_DIR /var/jenkins_home/plugins
RUN echo $PLUGIN_DIR
USER root
RUN echo "#!/bin/sh \n mkdir -pv $PLUGIN_DIR && sleep inf" > ./mkdir.sh
RUN chmod a+x ./mkdir.sh
USER jenkins
ENTRYPOINT [ "/bin/sh", "-c", "./mkdir.sh"]
After:
docker build . -t <image_name>
docker run -d --name <container_name> <image_name>
docker exec -it <container_name> bash
and you will see your directory
Sources:
https://forums.docker.com/t/simple-mkdir-p-not-working/42179
https://hub.docker.com/_/jenkins

Docker RUN command in exec form not working

I'm building a docker image based on Alpine.
FROM alpine
RUN apk update \
&& apk add lighttpd \
&& rm -rf /var/cache/apk/*
ENV COLOR red
COPY ./index.html /var/www/localhost/htdocs
RUN /bin/ash -c 'echo abcd'
#working
RUN /bin/ash -c "echo $COLOR; sed -i -e 's/red/\$COLOR/g' /var/www/localhost/htdocs/index.html; cat /var/www/localhost/htdocs/index.html;"
#not working
# RUN ["sh", "-c", "echo $COLOR; sed -i -e 's/red/\$COLOR/g' /var/www/localhost/htdocs/index.html; cat /var/www/localhost/htdocs/index.html;"]
CMD ["lighttpd","-D","-f","/etc/lighttpd/lighttpd.conf"]
When I run in shell form it's working fine, but when I run in exec form it's giving
/bin/sh: [sh,: not found
I tried using bin/sh, sh, bin/ash, ash. Same error for all of them.
The shell is responsible for expanding variables, but only variables in double quotes will be expanded.
Your error comes from the stray \ before $COLOR; escaping the $ means the shell never substitutes the value for you. The correct way is:
RUN ["sh", "-c", "echo $COLOR; sed -i -e \"s/red/$COLOR/g\" /var/www/localhost/htdocs/index.html; cat /var/www/localhost/htdocs/index.html;"]
A minimal example to show the effect, FYI:
Dockerfile:
FROM alpine
ENV COLOR rednew
RUN echo "red" > /tmp/index.html
RUN ["sh", "-c", "sed -i -e \"s/red/$COLOR/g\" /tmp/index.html; cat /tmp/index.html;"]
Result:
$ docker build -t abc:1 . --no-cache
Sending build context to Docker daemon 5.632kB
Step 1/4 : FROM alpine
---> 28f6e2705743
Step 2/4 : ENV COLOR rednew
---> Running in 05c43146fab0
Removing intermediate container 05c43146fab0
---> 28ea1434e626
Step 3/4 : RUN echo "red" > /tmp/index.html
---> Running in 2c8fbbc5fd10
Removing intermediate container 2c8fbbc5fd10
---> f884892ad8c4
Step 4/4 : RUN ["sh", "-c", "sed -i -e \"s/red/$COLOR/g\" /tmp/index.html; cat /tmp/index.html;"]
---> Running in 6930b3d03438
rednew
Removing intermediate container 6930b3d03438
---> b770475672cc
Successfully built b770475672cc
Successfully tagged abc:1
I've been using Docker for a few years and I did not know (until your question) that there are shell|exec forms for RUN ;-)
The issue is that your command includes environment variables ($COLOR), and there's no substitution|evaluation with the exec form.
See:
https://docs.docker.com/engine/reference/builder/#run
"Unlike the shell form, the exec form does not invoke a command shell. This means that normal shell processing does not happen"

Why isn't the USER declared in my Dockerfile reflected in the ENTRYPOINT script?

I am trying to fix some tests we're running on Jenkins with Docker, but the script that the ENTRYPOINT in my Dockerfile points to keeps running as root, even though I set the USER in the Dockerfile. This works fine on my local machine but not when running on our Jenkins box.
I've tried running su within my entrypoint script to make sure that the rest of the script runs as the correct user, but it still runs as root.
So my Dockerfile looks like this:
FROM python:3.6
RUN apt-get update && apt-get install -y gettext libgettextpo-dev
ARG DOCKER_UID # set to 2000 in docker-compose file
ARG ENV=prod
ENV ENV=${ENV}
ARG WORKERS=2
ENV WORKERS=${WORKERS}
RUN useradd -u ${DOCKER_UID} -ms /bin/bash app
RUN chmod -R 777 /home/app
ENV PYTHONUNBUFFERED 1
ADD . /code
WORKDIR /code
RUN chown -R app:app /code
RUN mkdir /platform
RUN chown -R app:app /platform
RUN pip install --upgrade pip
RUN whoami # outputs `root`
USER app
RUN whoami # outputs `app`
RUN .docker/deploy/install_requirements.sh $ENV # runs as `app`
EXPOSE 8000
ENTRYPOINT [".docker/deploy/start.sh", "$ENV"]
and my start.sh looks like:
#!/bin/bash
ENV=$1
echo "USER"
echo `whoami`
echo Running migrations...
python manage.py migrate
mkdir -p static
chmod -R 0755 static
cd /code/
if [ "$ENV" == "performance-dev" ];
then
/home/app/.local/bin/uwsgi --ini .docker/deploy/uwsgi.ini -p 4 --uid app
else
/home/app/.local/bin/uwsgi --ini .docker/deploy/uwsgi.ini --uid app
fi
but the
echo "USER"
echo `whoami`
outputs:
USER
root
which causes commands later in the script to fail because they're running as the wrong user.
I'd expect the output to be:
USER
app
My understanding is that this issue is typically resolved by setting USER in the Dockerfile, but I do that, and the user does appear to switch while the Dockerfile itself is being built.
Edit
The issue was with my docker-compose configuration. My docker-compose config looks like:
version: '3'
services:
  service:
    user: "${DOCKER_UID}:${DOCKER_UID}"
    build:
      context: .
      dockerfile: .docker/Dockerfile
      args:
        - ENV=prod
        - DOCKER_UID=2000
DOCKER_UID is a variable set on my local machine but not on the Jenkins box, so I set it to 2000 in the override file
The issue I was having, as David Maze pointed out in the comments, was that I was setting the user when actually building the container, via my docker-compose file. I had set the user param to ${DOCKER_UID}, which was never actually set anywhere, so it was defaulting to an empty string. Setting it to 2000 fixed my issue.
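One hedged sketch of how to avoid the silent empty value, using standard Compose variable substitution to supply a default when DOCKER_UID is not set:
services:
  service:
    # Falls back to 2000 when DOCKER_UID is unset, instead of producing user: ":"
    user: "${DOCKER_UID:-2000}:${DOCKER_UID:-2000}"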

Docker build failed to copy a file

Hi, I am new to Docker and trying to wrap my head around how to clone a private repo from GitHub, and I found an interesting link: issues/6396
I followed one of the posts, and my Dockerfile looks like:
FROM python:2.7 as builder
# Deploy app's code
#RUN set -x
RUN mkdir /code
RUN mkdir /root/.ssh/
RUN ls -l /root/.ssh/
# The GITHUB_SSH_KEY Build Argument must be a path or URL
# If it's a path, it MUST be in the docker build dir, and NOT in .dockerignore!
ARG SSH_PRIVATE_KEY=C:\\Users\\MyUser\\.ssh\\id_rsa
RUN echo "${SSH_PRIVATE_KEY}"
# Set up root user SSH access for GitHub
ADD ${SSH_PRIVATE_KEY} /root/.ssh/id_rsa
RUN ssh -o StrictHostKeyChecking=no -vT git@github.com 2>&1 | grep -i auth
# Test SSH access (this returns false even when successful, but prints results)
RUN git clone git@github.com:***********.git
COPY . /code
WORKDIR /code
ENV PYTHONPATH /datawarehouse_process
# Setup app's virtualenv
RUN set -x \
&& pip install tox \
&& tox -e luigi
WORKDIR /datawarehouse_process
# Finally, remove the $GITHUB_SSH_KEY if it was a file, so it's not in /app!
# It can also be removed from /root/.ssh/id_rsa, but you're probably not going
# to COPY that directory into the runtime image.
RUN rm -vf ${GITHUB_SSH_KEY} /root/.ssh/id*
#FROM python:2.7 as runtime
#COPY --from=builder /code /code
When I run docker build . from the correct location I get this error below. Any clue will be appreciated.
c:\Domain\Project\Docker-Images\datawarehouse_process>docker build .
Sending build context to Docker daemon 281.7MB
Step 1/15 : FROM python:2.7 as builder
---> 43c5f3ee0928
Step 2/15 : RUN mkdir /code
---> Running in 841fadc29641
Removing intermediate container 841fadc29641
---> 69fdbcd34f12
Step 3/15 : RUN mkdir /root/.ssh/
---> Running in 50199b0eb002
Removing intermediate container 50199b0eb002
---> 6dac8b120438
Step 4/15 : RUN ls -l /root/.ssh/
---> Running in e15040402b79
total 0
Removing intermediate container e15040402b79
---> 65519edac99a
Step 5/15 : ARG SSH_PRIVATE_KEY=C:\\Users\\MyUser\\.ssh\\id_rsa
---> Running in 10e0c92eed4f
Removing intermediate container 10e0c92eed4f
---> 707279c92614
Step 6/15 : RUN echo "${SSH_PRIVATE_KEY}"
---> Running in a9f75c224994
C:\Users\MyUser\.ssh\id_rsa
Removing intermediate container a9f75c224994
---> 96e0605d38a9
Step 7/15 : ADD ${SSH_PRIVATE_KEY} /root/.ssh/id_rsa
ADD failed: stat /var/lib/docker/tmp/docker-builder142890167/C:\Users\MyUser\.ssh\id_rsa: no such file or directory
From the Documentation:
ADD obeys the following rules:
The path must be inside the context of the build; you cannot ADD
../something /something, because the first step of a docker build is
to send the context directory (and subdirectories) to the docker
daemon.
You are passing an absolute path to ADD, but you can see from the error:
/var/lib/docker/tmp/docker-builder142890167/C:\Users\MyUser\.ssh\id_rsa: no such file or directory
It is being looked for within the build context. Again from the documentation:
Traditionally, the Dockerfile is called Dockerfile and located in the
root of the context.
So, you need to place the RSA key somewhere in the directory tree whose root is the path you specify in your docker build command. If you are running docker build ., your ARG statement would change to something like:
ARG SSH_PRIVATE_KEY=.ssh/id_rsa
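For example, an assumed layout where the key has been copied into the build context next to the Dockerfile (the paths here are illustrative, not from the original post):
# datawarehouse_process/
# ├── Dockerfile
# └── .ssh/
#     └── id_rsa      <- copied from C:\Users\MyUser\.ssh\id_rsa before the build
# (the key must be inside the build context and must not be excluded by .dockerignore)
ARG SSH_PRIVATE_KEY=.ssh/id_rsa
ADD ${SSH_PRIVATE_KEY} /root/.ssh/id_rsa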

How can I run ENTRYPOINT as root user?

This is a part of my dockerfile:
COPY ./startup.sh /root/startup.sh
RUN chmod +x /root/startup.sh
ENTRYPOINT ["/root/startup.sh"]
EXPOSE 3306
CMD ["/usr/bin/mysqld_safe"]
USER jenkins
I have to switch to USER jenkins at the end, and I have to run the container as jenkins.
My Question is now how can I run the startup.sh as root user when the container starts?
Delete the USER jenkins line in your Dockerfile.
Change the user at the end of your entrypoint script (/root/startup.sh) by adding su - jenkins (see man su).
Example:
Dockerfile
FROM debian:8
RUN useradd -ms /bin/bash exemple
COPY entrypoint.sh /root/entrypoint.sh
ENTRYPOINT "/root/entrypoint.sh"
entrypoint.sh
#!/bin/bash
echo "I am root" && id
su - exemple
# needed to run the CMD parameters
$@
Now you can run
$ docker build -t so-test .
$ docker run --rm -it so-test bash
I am root
uid=0(root) gid=0(root) groups=0(root)
exemple@37b01e316a95:~$ id
uid=1000(exemple) gid=1000(exemple) groups=1000(exemple)
It's just a simple example; you can also use su with the -c option to run a command as a different user.
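A minimal sketch of that variant (the echoed commands are just an illustration):
#!/bin/bash
# Root-only setup happens here...
echo "I am root" && id
# ...then hand off to the unprivileged user; no dash, so ENV variables set in
# the Dockerfile are preserved (see the su - discussion earlier on this page)
exec su exemple -c 'id && echo "now running as $(whoami)"'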
