Docker set ENV based on if-else

I have a situation where I need to set an ENV based on a runtime condition, like this:
RUN if [ "$RUNTIME" = "prod" ]; then VARIABLE="Some Production URL"; else VARIABLE="Some QA URL"; fi
ENV VARIABLE=${VARIABLE}
I've been looking at different solutions but none of them seem to be panning out (for example the basic one, where VARIABLE is lost when RUN exits). What would be an elegant way to achieve this?

It is an unfortunate constraint that you only have this "dev/qa/prod" environment variable. However, it is possible to achieve what you want.
First, you might consider baking your environment-specific configuration into the image for all environments. (Normally I would discourage doing this!)
For example you can COPY three files into your image:
dev-env.sh: contains your dev config in the form:
export ELASTICSEARCH_URL=http://elastic-dev:123
qa-env.sh (similar)
prod-env.sh (similar)
Then you evaluate at run-time (not at build-time) which environment you are in: add an ENTRYPOINT script to your image that sources the correct file, depending on the ENVIRONMENT_NAME variable.
Dockerfile (part):
ENTRYPOINT ["./docker-entrypoint.sh"]
docker-entrypoint.sh (copied into WORKDIR of the image):
#!/bin/bash
set -e
if [ "$ENVIRONMENT_NAME" = "prod" ]; then
source prod-env.sh
fi
# else if qa ... , else if dev ..., else fail
exec "$@"
This script will run when you launch the container, so this approach is not an option if you need the variables to be available in Dockerfile instructions (at build-time).
Another (build-time) workaround is described here and consists of using temporary files to store environment variables across multiple image layers.
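A rough sketch of that idea, assuming RUNTIME is passed in as a build arg (paths and values are illustrative):
ARG RUNTIME=qa
# Write the value into a file in one layer...
RUN if [ "$RUNTIME" = "prod" ]; then echo "Some Production URL" > /tmp/variable; \
    else echo "Some QA URL" > /tmp/variable; fi
# ...and read it back in any later layer that needs it
RUN VARIABLE="$(cat /tmp/variable)" && echo "Using $VARIABLE"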

A literal conditional can be achieved with a multi-stage build and ONBUILD.
ARG mode=prod
FROM alpine as img_prod
ONBUILD ENV ELASTICSEARCH_URL=whatever/for/prod
FROM alpine as img_qa
ONBUILD ENV ELASTICSEARCH_URL=whatever/for/qa
FROM img_${mode}
...
Then you build with docker build --build-arg mode=qa .

Wouldn't passing an env var with docker run be the solution you need? Something like this:
docker run -e YOUR_VARIABLE="Some Production URL" ...

Related

Building image with Dockerfile is not using existing layers?

I'm quite new to Dockerfiles and not sure what's going on here. I'm using Podman and building a new image with a Dockerfile to set some additional permissions, using a base image from Docker Hub. According to the layers (#8) in this base image it seems that the variable SDKMAN_DIR is being set to /home/javaUser/.sdkman.
/bin/sh -c export SDKMAN_DIR="/home/javaUser/.sdkman" && curl -s "https://get.sdkman.io" | bash
However, when I build a new image with the Dockerfile below, start a new container, exec into the container and run echo $SDKMAN_DIR, the result is empty. I was expecting my variant of the image to have the same variables set.
Can anyone tell me what I'm missing here?
bash-5.1$ echo $SDKMAN_DIR
Below you'll find the Dockerfile I'm using.
FROM docker.io/itext/dito-sdk:2.4.5
# Make available to rootless users
USER root
RUN chmod g+x /home/javaUser/.sdkman/bin/sdkman-init.sh
# Become rootless user
USER 2000
EXPOSE 8080
ENTRYPOINT /bin/bash -c "source /home/javaUser/.sdkman/bin/sdkman-init.sh && kotlin /opt/dito/startup-prepare.main.kts && java -jar /opt/dito/dito-sdk-docker.jar server /etc/opt/dito/config.yml"
Thanks in advance.
Edit
Somehow I got it to work by passing in an ENV. Still, I'm not exactly sure why the variable is not available without explicitly defining an ENV variable. Does this have to do with not being root but user 2000 after an exec into the container?
Edit
The variable is not available at runtime because it is declared in a RUN step and is therefore limited to that scope. Thanks to @DavidMaze for clarifying.
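A rough illustration of the difference (just a sketch, not the upstream Dockerfile):
# export inside RUN only lasts for that single RUN step:
RUN export SDKMAN_DIR="/home/javaUser/.sdkman" && echo "$SDKMAN_DIR"  # visible here
RUN echo "$SDKMAN_DIR"                                                # empty again
# ENV persists into later steps, derived images and running containers:
ENV SDKMAN_DIR=/home/javaUser/.sdkman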

How to unset "ENV" in dockerfile?

For certain reasons, I have to set the "http_proxy" and "https_proxy" ENV variables in my Dockerfile. I would now like to unset them, because there are also some build steps that can't be done through the proxy.
# dockerfile
# ... some process
ENV http_proxy=http://...
ENV https_proxy=http://...
# ... some process that needs the proxy to finish
UNSET ENV http_proxy # how do I unset the proxy ENV here?
UNSET ENV https_proxy
# ... some process that can't use the proxy
It depends on what effect you are trying to achieve.
Note that, as a matter of pragmatics (i.e. how developers actually speak), "unsetting a variable" can mean two things: removing it from the environment, or setting the variable to an empty value. Technically, these are two different operations. In practice though I have not run into a case where the software I'm trying to control differentiates between the variable being absent from the environment, and the variable being present in the environment but set to an empty value. I generally can use either method to get the same result.
If you don't care whether the variable is in the layers produced by Docker, but leaving it with a non-empty value causes problems in later build steps.
For this case, you can use ENV VAR_NAME= at the point in your Dockerfile from which you want to unset the variable. Syntactic note: Docker allows two syntaxes for ENV: this ENV VAR=1 is the same as ENV VAR 1. You can separate the variable name from the value with a space or an equal sign. When you want to "unset" a variable by setting it to an empty value you must use the equal sign syntax or you get an error at build time.
So for instance, you could do this:
ENV NOT_SENSITIVE some_value
RUN something
ENV NOT_SENSITIVE=
RUN something_else
When something runs, NOT_SENSITIVE is set to some_value. When something_else runs, NOT_SENSITIVE is set to the empty string.
It is important to note that running unset NOT_SENSITIVE as a shell command will not affect anything other than what executes in that shell. Here's an example:
ENV NOT_SENSITIVE some_value
RUN unset NOT_SENSITIVE && printenv NOT_SENSITIVE || echo "does not exist"
RUN printenv NOT_SENSITIVE
The first RUN will print does not exist: NOT_SENSITIVE is unset when printenv executes, so printenv returns a non-zero exit code, which causes the echo to execute. The second RUN is not affected by the unset in the first RUN; it will print some_value to the screen.
But what if I need to remove the variable from the environment, not just set it to an empty value?
In this case using ENV VAR_NAME= won't work. I don't know of any way to tell Docker "from this point on, you must remove this variable from the environment, not just set it to an empty value".
If you still want to use ENV to set your variable, then you'll have to start each RUN in which you want the variable to be unset with unset VAR_NAME, which will unset it for that specific RUN only.
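For example (the command name is just a placeholder):
RUN unset VAR_NAME && ./step-that-must-not-see-the-variable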
If you want to prevent the variable from being present in the layers produced by Docker.
Suppose that variable contains a secret and the layer could fall into the hands of people who should not have the secret. In this case you CANNOT use ENV to set the variable. A variable set with ENV is baked into the layers to which it applies and cannot be removed from those layers. In particular, (assuming the variable is named SENSITIVE) running
RUN unset SENSITIVE
does not do anything to remove it from the layer. The unset command above only removes SENSITIVE from the shell process that RUN starts. It affects only that shell. It won't affect shells spawned by CMD, ENTRYPOINT, or any command provided through running docker run at the command line.
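You can confirm that the value stays baked into the image metadata (image name assumed):
docker inspect --format '{{.Config.Env}}' the-image
docker history the-image   # ENV instructions show up in the layer history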
In order to prevent the layers from containing the secret, I would use docker build --secret= and RUN --mount=type=secret.... For instance, assuming that I've stored my secret in a file named sensitive, I could have a RUN like this:
RUN --mount=type=secret,id=sensitive,target=/root/sensitive \
    export SENSITIVE=$(cat /root/sensitive) \
    && [... do stuff that requires SENSITIVE ...]
Note that the command given to RUN does not need to end with unset SENSITIVE. Due to the way processes and their environments are managed, setting SENSITIVE in the shell spawned by RUN does not have any effect beyond what that shell itself spawns. Environment changes in this shell won't affect future shells nor will it affect what Docker bakes into the layers it creates.
Then the build can be run with:
$ DOCKER_BUILDKIT=1 docker build --secret id=sensitive,src=path/to/sensitive [...]
The environment for the docker build command needs DOCKER_BUILDKIT=1 to use BuildKit because this method of passing secrets is only available if Docker uses BuildKit to build the images.
If one needs env vars during the image build but they should not persist, just clear them. In the following example, the running container shows empty env vars.
Dockerfile
# set proxy
ARG http_proxy
ARG https_proxy
ARG no_proxy
ENV http_proxy=$http_proxy
ENV https_proxy=$https_proxy
ENV no_proxy=$no_proxy
# ... do stuff that needs the proxy during the build, like apt-get, curl, et al.
# unset proxy
ENV http_proxy=
ENV https_proxy=
ENV no_proxy=
build.sh
docker build -t the-image \
--build-arg http_proxy="$http_proxy" \
--build-arg https_proxy="$https_proxy" \
--build-arg no_proxy="$no_proxy" \
--no-cache \
.
run.sh
docker run --rm -i \
the-image \
sh << COMMANDS
env
COMMANDS
Output
no_proxy=
https_proxy=
http_proxy=
...
According to the Docker docs, you need to use shell commands within a single RUN instead:
FROM alpine
RUN export ADMIN_USER="mark" \
&& echo $ADMIN_USER > ./mark \
&& unset ADMIN_USER
CMD sh
See https://docs.docker.com/develop/develop-images/dockerfile_best-practices/#env for more details.
Short answer:
Try to avoid unnecessary environment variables, so you don't need to unset them.
In case you have to unset them for a single command, you can do the following:
RUN unset http_proxy https_proxy no_proxy \
&& execute_your_command_here
In case you have to unset them for the built image, you can do the following:
FROM ubuntu_with_http_proxy
ENV http_proxy= \
https_proxy= \
no_proxy=
Once environment variables are set using the ENV instruction, we can't really unset them, as detailed here:
Each ENV line creates a new intermediate layer, just like RUN commands. This means that even if you unset the environment variable in a future layer, it still persists in this layer and its value can be dumped.
See: Best practices for writing Dockerfiles
Details:
I prefer to define http_proxy as an argument during build like the following:
FROM ubuntu:20.04
ARG http_proxy=http://host.docker.internal:3128
ARG https_proxy=http://host.docker.internal:3128
ARG no_proxy=.your.domain,localhost,127.0.0.1,.docker.internal
On a corporate proxy we need authentication anyway, so we configure a local proxy server listening on 127.0.0.1:3128, which is accessible as host.docker.internal:3128 from containers. This way it also works on Docker Desktop if we connect to the corporate network over VPN (with the local/home network blocked).
Setting no_proxy is also important to avoid flooding the proxy server.
See the following article for more details on no_proxy related topics:
Can we standardize NO_PROXY?
Sometimes it is also good to read the related documentation:
ENV
ARG
In case we need to configure those environment variables, we can use the following commands:
during build (link):
docker build ... --build-arg http_proxy='http://alternative.proxy:3128/' ...
during runs (link):
docker run ... --env http_proxy='http://alternative.proxy:3128/' ...
Also note that we don't even need to define the proxy-related arguments, since those are already predefined according to the following section:
Dockerfile reference - Predefined ARGs
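That means a Dockerfile like the following works without any ARG lines (values are illustrative):
FROM ubuntu:20.04
# http_proxy/https_proxy/no_proxy are predefined build args
RUN echo "building via ${http_proxy:-no proxy}"
built with:
docker build --build-arg http_proxy='http://host.docker.internal:3128' .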
You can add the lines below in the Dockerfile:
ENV http_proxy ""
ENV https_proxy ""
I found the secret approach didn't work for me, because I needed the env variable to persist in the container when I ran it in interactive mode, but then needed to completely remove the variable in a later build stage for production.
What worked was, in the development stage of the build, appending the environment variable to the /root/.bashrc file:
RUN echo export AWS_PROFILE=role-name >> /root/.bashrc
In the production stage of the build I then removed the last line of /root/.bashrc:
RUN sed -i '$ d' /root/.bashrc

Pass ENV in docker run command

Is there a way to pass a variable in? Let's say in this example I want to pass a list of animals into an entrypoint.sh file using ENV animals="turtle, monkey, goose".
But I want to be able to pass different animals when running the container, for example docker run -t image animals="mouse,rat,kangaroo".
How do you go about passing arguments when running the docker run command?
The goal is to take that variable from the docker run command and insert it into the entrypoint.sh file.
Right now I hard-code that in my Dockerfile. But I want to be able to do this when running the docker run command, so I don't always have to change the Dockerfile.
FROM anapsix/alpine-java:8u121b13_jdk
ENV FILE_NAME="file_to_run.zip"
ENV animals="turtle, monkey, goose"
ADD ${FILE_NAME} .
RUN echo "${FILENAME} ${animals}" > ./entrypoint.sh
CMD [ "/bin/ash", "./entrypoint.sh" ]
It looks like you might be confusing the image build with the container run. If the difference between the two isn't immediately clear, I'd recommend reviewing some other questions and docs like:
In Docker, what's the difference between a container and an image?
https://docs.docker.com/develop/develop-images/dockerfile_best-practices/
RUN echo "${FILENAME} ${animals}" > ./entrypoint.sh
With the above, the variables will be expanded during the image build. The entrypoint.sh will not contain ${FILENAME} ${animals}. Instead, it will contain
file_to_run.zip turtle, monkey, goose
After the build, the docker run command will create a container from that image and run the above script with the environment variables defined but never used, since the script already has the variables expanded. To prevent the expansion, you need to escape the $ or use single quotes, e.g.
RUN echo "\${FILENAME} \${animals}" > ./entrypoint.sh
or
RUN echo '${FILENAME} ${animals}' > ./entrypoint.sh
I would also recommend being explicit with a #!/bin/ash at the top of this script. Then when you run the container, do not override the command with parameters after the image name. Instead, set the environment variables with the -e flag on docker run:
docker run -it -e animals="mouse,rat,kangaroo" image
Simplest way, forward individual variables:
docker run ... --env animals="turtle, monkey, goose" --env FILE_NAME="file_to_run.zip"
Forward several variables using a file:
Or if you need to grab all your environment variables from outside, you can do something like this first:
printenv | grep -E 'animals|FILE_NAME' > my-env
The grep is because Docker doesn't like some variables, e.g. with spaces in them, which you might possibly have in your real environment.
Then use that file in your Docker command:
docker run ... --env-file ./my-env
The latter is also useful if you want to avoid sending environment variables to logs (like for sensitive variables). I use this approach in a CI/CD pipeline that runs some scripts.
Using variables inside Docker:
With either approach, the environment variables actually become available to scripts running inside the container.
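For example, a minimal entrypoint script (just a sketch, using the variable names from the question) that reads them at run time:
#!/bin/ash
# entrypoint.sh - reads the variables at run time, so values passed with
# docker run -e are picked up without rebuilding the image
echo "Processing ${FILE_NAME} for: ${animals}"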
@BMitch's answer has more complete details about how to achieve this in your case, where you have related logic in both build and execution.
Reference
See docs here.

Conditionally set ENV var based on hostname in Dockerfile

How can I set an ENV var in my Dockerfile based on the hostname? I tried this:
RUN if [ hostname = "foo" ]; then ENV BAR "BAZ"; else ENV BAR "BIFF"; fi
But that failed with
ENV: not found
RUN if [ hostname = "foo" ]; then ENV BAR "BAZ"; else ENV BAR "BIFF"; fi
You can't nest docker build instructions: everything after the RUN instruction gets executed inside the container during the build, and Dockerfile instructions like ENV don't exist there. That explains the error you are seeing.
Even if you translated that to proper shell code, BAR would only be active for that single RUN instruction during the build.
Either orchestrate on the host and pass BAR via docker run -e to your container, or add a startup script to the image that sets BAR as needed on container start:
FROM foo
COPY my-start.sh /
CMD ["/my-start.sh"]
First of all, you can't embed Docker build instructions in the shell commands of RUN. The shell runs inside an intermediate container during the build, while Docker build instructions are handled by the build engine; they're different things. Besides, Docker does not support conditional instructions like IF. Docker is about immutable infrastructure: the Dockerfile is the definition of your image, and it's supposed to generate the same image no matter what build context it is in. From a delivery point of view, the image is your deliverable build artifact; if you want to deliver different things, use different Dockerfiles to build different images. If the difference is only about the runtime, consider postponing the env definition to runtime with the -e option of docker run.
The reason why your build is failing has been explained by @shizhz & @Erik Dannenberk.
However, if you really do need that behavior, I suggest you make a little script on the host to do it:
export BAR=`[ "$(hostname)" = "foo" ] && echo "BAZ" || echo "BIFF"`
docker build -t hello/hi - <<EOF
FROM alpine
ENV BAR $BAR
CMD echo $BAR
EOF

Parse a variable with the result of a command in DockerFile

I need to fill a variable in a Dockerfile with the result of a command,
like in bash: var=$(date)
EDIT 1
date is just an example.
In my case I use FROM phusion/baseimage:0.9.17 and I want to use the latest version on each build, so I use this:
curl -v --silent api.github.com/repos/phusion/baseimage-docker/tags 2>&1 | grep -oh 'rel-.*",' | head -1 | sed 's/",//' | sed 's/rel-//' ==> 0.9.17.
but I don't know how to get that result into a variable in the Dockerfile, to end up with something like:
ENV verbaseimage=curl...
FROM phusion/baseimage:$verbaseimage
RESULT
In my use case
FROM phusion/baseimage:latest
But the question remains unresolved for other cases.
I had the same issue and found a way to set an environment variable as the result of a command, by using a RUN command in the Dockerfile.
For example, I need to set SECRET_KEY_BASE for a Rails app just once, without it changing the way it would every time I run:
docker run -e SECRET_KEY_BASE="$(openssl rand -hex 64)"
Instead of that, I write a line like this in the Dockerfile:
RUN bash -l -c 'echo export SECRET_KEY_BASE="$(openssl rand -hex 64)" >> /etc/bash.bashrc'
and my env variable is available from root, even after a bash login.
Or maybe:
RUN /bin/bash -l -c 'echo export SECRET_KEY_BASE="$(openssl rand -hex 64)" > /etc/profile.d/docker_init.sh'
Then the variable is available in CMD and ENTRYPOINT commands.
Docker caches it as a layer and only rebuilds it if you change the lines before it.
You can also try different ways to set environment variables.
The old workaround is mentioned here (issue 2637: Feature request: expand Dockerfile ENV $VARIABLES in WORKDIR):
One workaround that I've used is to have a file in my context called "build-env". What I do is source it and run my desired command in the same RUN step. So for example:
build-env:
VERSION=stable
Dockerfile:
FROM radial/axle-base:latest
ADD build-env /build-env
RUN source build-env && mkdir /$VERSION
RUN ls /
But for date, that might not be as precise as you want.
Other workarounds are in issue 2022 "Dockerfile with variable interpolation".
In docker 1.9 (end of October 2015), you will have "support for build-time environment variables to the 'build' API (PR 9176)" and "Support for passing build-time variables in build context (PR 15182)".
docker build --build-arg=[]: Set build-time variables
You can use ENV instructions in a Dockerfile to define variable values. These values persist in the built image. However, often persistence is not what you want. Users want to specify variables differently depending on which host they build an image on.
A good example is http_proxy or source versions for pulling intermediate files. The ARG instruction lets Dockerfile authors define values that users can set at build-time using the --build-arg flag:
$ docker build --build-arg HTTP_PROXY=http://10.20.30.2:1234 .
This flag allows you to pass the build-time variables that are accessed like regular environment variables in the RUN instruction of the Dockerfile.
Also, these values don't persist in the intermediate or final images like ENV values do.
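A small sketch of how that looks in a Dockerfile (image and values are illustrative):
FROM ubuntu:20.04
ARG HTTP_PROXY
# The build arg is visible to RUN at build time...
RUN echo "building via ${HTTP_PROXY:-no proxy}"
# ...but unlike ENV it does not persist in the resulting image.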
so I want to use the latest version on each build, so I use this
curl -v --silent api.github.com/repos/phusion/baseimage-docker/tags 2>&1 | grep -oh 'rel-.*",' | head -1 | sed 's/",//' | sed 's/rel-//' ==> 0.9.17.
If you want to use the last version of that image, all you need to do is use the tag 'latest' with the FROM directive:
FROM phusion/baseimage:latest
See also "The misunderstood Docker tag: latest": it doesn't always reference the actual latest build, but in this instance, it should work.
If you really want to use the curl|parse option, use it to generate a Dockerfile with the right value (as in a template processed to generate the right file).
Don't try to use it directly in the Dockerfile.
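A rough sketch of that template approach (file names are assumptions):
# Dockerfile.template contains: FROM phusion/baseimage:@BASEIMAGE_VERSION@
ver=$(curl -s https://api.github.com/repos/phusion/baseimage-docker/tags \
      | grep -oh 'rel-.*",' | head -1 | sed 's/",//' | sed 's/rel-//')
sed "s/@BASEIMAGE_VERSION@/$ver/" Dockerfile.template > Dockerfile
docker build .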
I wanted to set an ENV or LABEL variable from a computation in the Dockerfile, e.g. to make some computed installation options visible in docker inspect.
There does not seem to be any way to do that, and this issue suggests that it's a security design choice.
A Dockerfile can set an ENV variable to $X, ${X:-default}, or ${X:+substitute} where that $X must be another ENV or ARG variable.
A single RUN command can set and use shell variables, but that goes away at the end of the RUN command when that container layer shuts down.
A RUN command can write computed data into files, but the Dockerfile still can't get that data into an ENV or LABEL even if the file is ~/.bashrc. (File contents can, of course, be used by code running in the Container.)
The build can at least RUN echo $X to record choices to the build log -- unless that step comes from the build cache, in which case the RUN step doesn't run.
Please do correct me if there's a way out.
Partially connected to the question: if you want to use the result of a command later on, it is possible within a single RUN statement, as follows:
RUN CUR_DIR=`pwd` && \
echo $CUR_DIR
