Dockerfile if else condition with external arguments - docker

I have a Dockerfile:
FROM centos:7
ENV foo=42
Then I build it:
docker build -t my_docker .
and run it:
docker run -it -d my_docker
Is it possible to pass arguments from the command line and use them with if/else in the Dockerfile? I mean something like
FROM centos:7
if (my_arg==42)
{ENV=TRUE}
else:
{ENV=FALSE}
and build with this argument.
docker build -t my_docker . --my_arg=42

It might not look that clean, but you can have your Dockerfile (conditional) as follows:
FROM centos:7
ARG arg
RUN if [[ -z "$arg" ]] ; then echo Argument not provided ; else echo Argument is $arg ; fi
and then build the image as:
docker build -t my_docker . --build-arg arg=45
or
docker build -t my_docker .

There is an interesting alternative to the proposed solutions that works with a single Dockerfile, requires only a single call to docker build per conditional build, and avoids bash.
Solution:
The following Dockerfile solves that problem. Copy-paste it and try it yourself.
ARG my_arg
FROM centos:7 AS base
RUN echo "do stuff with the centos image"
FROM base AS branch-version-1
RUN echo "this is the stage that sets VAR=TRUE"
ENV VAR=TRUE
FROM base AS branch-version-2
RUN echo "this is the stage that sets VAR=FALSE"
ENV VAR=FALSE
FROM branch-version-${my_arg} AS final
RUN echo "VAR is equal to ${VAR}"
Explanation of Dockerfile:
We first get a base image (centos:7 in your case) and put it into its own stage. The base stage should contain things that you want to do before the condition. After that, we have two more stages, representing the branches of our condition: branch-version-1 and branch-version-2. We build both of them. The final stage then chooses one of these stages, based on my_arg. A conditional Dockerfile. There you go.
Output when running:
(I abbreviated this a little...)
my_arg==2
docker build --build-arg my_arg=2 .
Step 1/12 : ARG my_arg
Step 2/12 : ARG ENV
Step 3/12 : FROM centos:7 AS base
Step 4/12 : RUN echo "do stuff with the centos image"
do stuff with the centos image
Step 5/12 : FROM base AS branch-version-1
Step 6/12 : RUN echo "this is the stage that sets VAR=TRUE"
this is the stage that sets VAR=TRUE
Step 7/12 : ENV VAR=TRUE
Step 8/12 : FROM base AS branch-version-2
Step 9/12 : RUN echo "this is the stage that sets VAR=FALSE"
this is the stage that sets VAR=FALSE
Step 10/12 : ENV VAR=FALSE
Step 11/12 : FROM branch-version-${my_arg}
Step 12/12 : RUN echo "VAR is equal to ${VAR}"
VAR is equal to FALSE
my_arg==1
docker build --build-arg my_arg=1 .
...
Step 11/12 : FROM branch-version-${my_arg}
Step 12/12 : RUN echo "VAR is equal to ${VAR}"
VAR is equal to TRUE
Thanks to Tõnis for this amazing idea!

Do not use the build args described in other answers where at all possible. That is an old, messy solution. Docker's target property solves this problem.
Target Example
Dockerfile
FROM foo as base
RUN ...
# Build dev image
FROM base as image-dev
RUN ...
COPY ...
# Build prod image
FROM base as image-prod
RUN ...
COPY ...
docker build --target image-dev -t foo .
docker-compose.yml:
version: '3.4'
services:
  dev:
    build:
      context: .
      dockerfile: Dockerfile
      target: image-dev
Real World
Dockerfiles get complex in the real world. Use buildkit & COPY --from for faster, more maintainable Dockerfiles:
Docker builds every stage above the target, regardless of whether it is inherited or not. Use buildkit to build only the stages the target actually depends on. Docker must be v19+. Hopefully this will be a default feature soon.
Targets may share build stages. Use COPY --from to simplify inheritance.
FROM foo as base
RUN ...
WORKDIR /opt/my-proj
FROM base as npm-ci-dev
# copying the package files on their own means this layer's cache is only invalidated when they change
COPY --chown=www-data:www-data ./package.json /opt/my-proj/package.json
COPY --chown=www-data:www-data ./package-lock.json /opt/my-proj/package-lock.json
RUN npm ci
FROM base as npm-ci-prod
# same cache-invalidation trick, production dependencies only
COPY --chown=www-data:www-data ./package.json /opt/my-proj/package.json
COPY --chown=www-data:www-data ./package-lock.json /opt/my-proj/package-lock.json
RUN npm ci --only=prod
FROM base as proj-files
COPY --chown=www-data:www-data ./ /opt/my-proj
FROM base as image-dev
# Will mount, not copy in dev environment
RUN ...
FROM base as image-ci
COPY --from=npm-ci-dev /opt/my-proj .
COPY --from=proj-files /opt/my-proj .
RUN ...
FROM base as image-stage
COPY --from=npm-ci-prod /opt/my-proj .
COPY --from=proj-files /opt/my-proj .
RUN ...
FROM base as image-prod
COPY --from=npm-ci-prod /opt/my-proj .
COPY --from=proj-files /opt/my-proj .
RUN ...
Enable experimental mode (then restart the Docker daemon).
echo '{"experimental": true}' | sudo tee /etc/docker/daemon.json
Build with buildkit enabled. Buildkit does not embed cache metadata in the image by default; enable it with --build-arg BUILDKIT_INLINE_CACHE=1 so the image can later be used as a cache source.
CI build job.
DOCKER_BUILDKIT=1 \
docker build \
  --build-arg BUILDKIT_INLINE_CACHE=1 \
  --target image-ci \
  -t foo:ci \
  .
Use cache from a pulled image with --cache-from
Prod build job
docker pull foo:ci
docker pull foo:stage
DOCKER_BUILDKIT=1 \
docker build \
  --cache-from foo:ci,foo:stage \
  --target image-prod \
  -t foo:prod \
  .

For some reason most of the answers here didn't help me (maybe it's related to the FROM image in my Dockerfile), so I preferred to create a bash script in my workspace, combined with --build-arg, to handle the if statement during the Docker build by checking whether the argument is empty or not.
Bash script:
#!/bin/bash -x
if test -z "$1" ; then
    echo "The arg is empty"
    ....do something....
else
    echo "The arg is not empty: $1"
    ....do something else....
fi
Dockerfile:
FROM ...
....
ARG arg
COPY bash.sh /tmp/
RUN chmod u+x /tmp/bash.sh && /tmp/bash.sh $arg
....
Docker Build:
docker build --pull -f "Dockerfile" -t $SERVICE_NAME --build-arg arg="yes" .
Remark: This will go to the else (false) in the bash script
docker build --pull -f "Dockerfile" -t $SERVICE_NAME .
Remark: This will go to the if (true)
Edit 1:
After several tries I found the following article and this one,
which helped me to understand 2 things:
1) An ARG before FROM is outside of the build stage (it can only be used in the FROM line unless it is re-declared after FROM).
2) The default shell is /bin/sh, which means the if/else works a little differently in the docker build; for example, you only need one "=" instead of "==" to compare strings.
So you can do this inside the Dockerfile
# default value used when --build-arg is not provided
ARG argname=false
RUN if [ "$argname" = "false" ] ; then echo 'false'; else echo 'true'; fi
and in the docker build:
docker build --pull -f "Dockerfile" --label "service_name=${SERVICE_NAME}" -t $SERVICE_NAME --build-arg argname=true .

Just use the "test" binary directly to do this. You also should use the noop command ":" if you don't want to specify an "else" condition, so docker does not stop with a non zero return value error.
RUN test -z "$YOURVAR" || echo "var is set" && echo "var is not set"
RUN test -z "$YOURVAR" && echo "var is not set" || :
RUN test -z "$YOURVAR" || echo "var is set" && :

The accepted answer may solve the question, but if you want multiline if conditions in the Dockerfile, you can do that by placing \ at the end of each line (similar to how you would in a shell script) and ending each command with ;. You can even define something like set -eux as the first command.
Example:
RUN set -eux; \
    if [ -f /path/to/file ]; then \
        mv /path/to/file /dest; \
    fi; \
    if [ -d /path/to/dir ]; then \
        mv /path/to/dir /dest; \
    fi
In your case:
FROM centos:7
ARG arg
RUN if [ -z "$arg" ] ; then \
echo Argument not provided; \
else \
echo Argument is $arg; \
fi
Then build with:
docker build -t my_docker . --build-arg arg=42

According to the documentation for the docker build command, there is a parameter called --build-arg.
Example usage:
docker build --build-arg HTTP_PROXY=http://10.20.30.2:1234 .
IMO it's what you need :)
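The flag only supplies the value; to consume it inside the Dockerfile you still have to declare it with ARG. A minimal sketch (MY_ARG is an illustrative name):
FROM centos:7
ARG MY_ARG
RUN echo "MY_ARG is ${MY_ARG:-<not provided>}"
and build with:
docker build --build-arg MY_ARG=42 .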

Exactly as others said, a shell script would help.
Just one additional case that is IMHO worth mentioning (for someone else who stumbles upon this looking for an easier case): environment replacement.
Environment variables (declared with the ENV statement) can also be used in certain instructions as variables to be interpreted by the Dockerfile.
The ${variable_name} syntax also supports a few of the standard bash modifiers as specified below:
${variable:-word} indicates that if variable is set then the result will be that value. If variable is not set then word will be the result.
${variable:+word} indicates that if variable is set then word will be the result, otherwise the result is the empty string.
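A minimal sketch of both modifiers driven by a build arg (MODE, APP_MODE and MODE_SET are illustrative names):
FROM centos:7
ARG MODE
# if MODE was not passed, fall back to "production"
ENV APP_MODE=${MODE:-production}
# if MODE was passed and non-empty, set a marker; otherwise it stays empty
ENV MODE_SET=${MODE:+true}
RUN echo "APP_MODE=$APP_MODE MODE_SET=$MODE_SET"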

Using Bash script and Alpine/Centos
Dockerfile
# just change this to centos if you prefer
FROM alpine
ARG MYARG=""
ENV E_MYARG=$MYARG
ADD . /tmp
RUN chmod +x /tmp/script.sh && /tmp/script.sh
script.sh
#!/usr/bin/env sh
if [ -z "$E_MYARG" ]; then
echo "NO PARAM PASSED"
else
echo $E_MYARG
fi
Passing arg:
docker build -t test --build-arg MYARG="this is a test" .
....
Step 5/5 : RUN chmod +x /tmp/script.sh && /tmp/script.sh
---> Running in 10b0e07e33fc
this is a test
Removing intermediate container 10b0e07e33fc
---> f6f085ffb284
Successfully built f6f085ffb284
Without arg:
docker build -t test .
....
Step 5/5 : RUN chmod +x /tmp/script.sh && /tmp/script.sh
---> Running in b89210b0cac0
NO PARAM PASSED
Removing intermediate container b89210b0cac0
....

I had a similar issue, setting a proxy server on a container.
The solution I'm using is an entrypoint script plus another script for environment variable configuration. Using RUN, you ensure the configuration script runs at build time; with ENTRYPOINT, it runs when you start the container.
--build-arg is used on command line to set proxy user and password.
As I need the same environment variables on container startup, I used a file to "persist" it from build to run.
The entrypoint script looks like:
#!/bin/bash
# Load the script of environment variables
. /root/configproxy.sh
# Run the main container command
exec "$#"
configproxy.sh
#!/bin/bash
function start_config {
    read u p < /root/proxy_credentials
    export HTTP_PROXY=http://$u:$p@proxy.com:8080
    export HTTPS_PROXY=https://$u:$p@proxy.com:8080
    /bin/cat <<EOF > /etc/apt/apt.conf
Acquire::http::proxy "http://$u:$p@proxy.com:8080";
Acquire::https::proxy "https://$u:$p@proxy.com:8080";
EOF
}
if [ -s "/root/proxy_credentials" ]
then
    start_config
fi
And in the Dockerfile, configure:
# Base Image
FROM ubuntu:18.04
ARG user
ARG pass
USER root
# -z the length of STRING is zero
# [] are an alias for test command
# if $user is not empty, write credentials file
RUN if [ ! -z "$user" ]; then echo "${user} ${pass}">/root/proxy_credentials ; fi
#copy bash scripts
COPY configproxy.sh /root
COPY startup.sh .
RUN ["/bin/bash", "-c", ". /root/configproxy.sh"]
# Install dependencies and tools
#RUN apt-get update -y && \
# apt-get install -yqq --no-install-recommends \
# vim iputils-ping
ENTRYPOINT ["./startup.sh"]
CMD ["sh", "-c", "bash"]
Build without proxy settings
docker build -t img01 -f Dockerfile .
Build with proxy settings
docker build -t img01 --build-arg user=<USER> --build-arg pass=<PASS> -f Dockerfile .
Take a look here.

You can just add a simple check:
RUN [ -z "$ARG" ] \
&& echo "ARG argument not provided." \
&& exit 1 || exit 0

I saw a lot of possible solutions, but none of them fit the problem I faced today. So I'm taking the time to answer the question with another possible solution that worked for me.
In my case I took advantage of the well-known if [ "$VAR" == "this" ]; then echo "do that"; fi. The caveat is that Docker doesn't like the double equals here (the RUN shell is /bin/sh, which only accepts a single =), so we need to write it as if [ "$VAR" = "this" ]; then echo "do that"; fi.
There is the full example that worked in my case:
FROM node:16
# Let's set args and envs
ARG APP_ENV="dev"
ARG NPM_CMD="install"
ARG USER="nodeuser"
ARG PORT=8080
ENV NPM_CONFIG_PREFIX=/home/node/.npm-global
ENV PATH=$PATH:/home/node/.npm-global/bin
ENV NODE_ENV=${APP_ENV}
# Let's set the starting point
WORKDIR /app
# Let's build a cache
COPY package*.json ./
RUN date \
# If the environment is production or staging, omit dev packages
# If any other environment, install dev packages
&& if [ "$APP_ENV" = "production" ]; then NPM_CMD="ci --omit=dev"; fi \
&& if [ "$APP_ENV" = "staging" ]; then NPM_CMD="ci --omit=dev"; fi \
&& npm ${NPM_CMD} \
&& usermod -d /app -l ${USER} node
# Let's add the App
COPY . .
# Let's expose the App port
EXPOSE ${PORT}
# Let's set the user
USER ${USER}
# Let's set the start App command
CMD [ "node", "server.js" ]
So if the user passes the proper build argument, the docker build command will create an image of the app for production. If not, it will create an image of the app that includes the dev Node.js packages.
To make it work, you can call it like this:
# docker build --build-arg APP_ENV=production -t app-node .

For anyone trying to build a Windows-based image: you need to access the argument with %...% because the shell is cmd.
# escape=`
# Dockerfile for Windows
# ...
ARG SAMPLE_ARG
RUN if %SAMPLE_ARG% == hello_world ( `
echo hehe %SAMPLE_ARG% `
) else ( `
echo haha %SAMPLE_ARG% `
)
# ...
BTW, the ARG declaration must be placed after FROM; otherwise the argument will not be available inside the build stage.

# ARGs before FROM are for the image reference
ARG IMLABEL=xxxx \
    IMVERS=x.x
FROM ${IMLABEL}:${IMVERS}
# ARGs after FROM are for parameters used in the build steps
ARG condition_x
RUN if [ "$condition_x" = "condition-1" ]; then \
        echo "condition-1"; \
    elif [ "$condition_x" = "condition-2" ]; then \
        echo "condition-2"; \
    else \
        echo "condition-others"; \
    fi
docker build --build-arg IMLABEL --build-arg IMVERS --build-arg condition_x -f Dockerfile -t image:version .

Related

Is there a docker build method that supports conditionally copying files from local to image? [duplicate]

I want to build a docker image. During this process, a massive package (150 MB) needs to be copied from the local file system into the image.
I want to check the existence of this file in the local directory first. If it is there, I can COPY it into the image directly. Otherwise I will let docker build download it from the Internet via a URL, which takes a lot of time.
I call this a conditional COPY.
But I don't know how to implement this feature.
A RUN if command cannot do this.
You can run an if condition in RUN, for example:
#!/bin/bash -x
if test -z "$1" ; then
    echo "argument empty"
    ........
else
    echo "Arg not empty: $1"
    ........
fi
Dockerfile:
FROM ...
....
ARG arg
COPY bash.sh /tmp/
RUN chmod u+x /tmp/bash.sh && /tmp/bash.sh $arg
Or you can try this
FROM centos:7
ARG arg
RUN if [ "x$arg" = "x" ] ; then echo Argument not provided ; else echo Argument is $arg ; fi
For COPY you may find this Dockerfile helpful:
#########
# BUILD #
#########
ARG BASE_IMAGE
FROM maven:3.6.3-jdk-11 AS BUILD
RUN mkdir /opt/trunk
RUN mkdir /opt/tmp
WORKDIR /opt/trunk
RUN --mount=target=/root/.m2,type=cache \
    --mount=source=.,target=/opt/trunk,type=bind,rw \
    mvn clean package && cp -r /opt/trunk/out/app.ear /opt/tmp
##################
# Dependencies #
##################
FROM $BASE_IMAGE
ARG IMAGE_TYPE
ENV DEPLOYMENT_LOCATION /opt/wildfly/standalone/deployments/app.ear
ENV TMP_LOCATION /opt/tmp/app.ear
ARG BASE_IMAGE
COPY if [ "$BASE_IMAGE" = "external" ] ; then COPY --from=BUILD $TMP_LOCATION/*.properties $DEPLOYMENT_LOCATION \
; COPY --from=BUILD $TMP_LOCATION/*.xml $DEPLOYMENT_LOCATION \
; COPY standalone.conf /opt/wildfly/bin ; fi
I would personally rely on the built-in caching of docker to do this:
FROM alpine
ADD http://www.example.com/link_to_large_file /some/path
The first build will download the file. After that, every subsequent build will use the layer cache, as long as you don't delete the previous images. You can also externalize the build cache with storage backends.
If the instructions before your ADD change often, you can also use the --link flag for ADD to store this layer independently. See the documentation for ADD --link or, better, COPY --link.
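A minimal sketch of the --link idea, assuming BuildKit and the dockerfile:1 syntax (URL and path are the placeholders from the example above):
# syntax=docker/dockerfile:1
FROM alpine
# --link stores this layer independently of the earlier layers,
# so changing instructions above it does not force a re-download
ADD --link http://www.example.com/link_to_large_file /some/path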

conditional environment definition in Dockerfile [duplicate]

Is it possible to conditionally set an ENV variable in a Dockerfile based on the value of a build ARG?
Ex: something like
ARG BUILDVAR=sad
ENV SOMEVAR=if $BUILDVAR -eq "SO"; then echo "hello"; else echo "world"; fi
Update: current usage based on Mario's answer:
ARG BUILD_ENV=prod
ENV NODE_ENV=production
RUN if [ "${BUILD_ENV}" = "test" ]; then export NODE_ENV=development; fi
However, running with --build-arg BUILD_ENV=test and then going into the container, I still get
docker run -it mycontainer bin/bash
[root@brbqw1231 /]# echo $NODE_ENV
production
Yes, it is possible, but you need to use your build argument as a flag. You can use shell-style parameter expansion to check the condition. Here is a proof-of-concept Dockerfile:
FROM debian:stable
ARG BUILD_DEVELOPMENT
# if --build-arg BUILD_DEVELOPMENT=1, set NODE_ENV to 'development' or set to null otherwise.
ENV NODE_ENV=${BUILD_DEVELOPMENT:+development}
# if NODE_ENV is null, set it to 'production' (or leave as is otherwise).
ENV NODE_ENV=${NODE_ENV:-production}
Testing build:
docker build --rm -t env_prod ./
...
docker run -it env_prod bash
root@2a2c93f80ad3:/# echo $NODE_ENV
production
root@2a2c93f80ad3:/# exit
docker build --rm -t env_dev --build-arg BUILD_DEVELOPMENT=1 ./
...
docker run -it env_dev bash
root@2db6d7931f34:/# echo $NODE_ENV
development
You cannot run bash code in the Dockerfile directly; you have to use the RUN command. So, for example, you can replace ENV with RUN and export the variable in the if, like below:
ARG BUILDVAR=sad
RUN if [ "$BUILDVAR" = "SO" ]; \
    then export SOMEVAR=hello; \
    else export SOMEVAR=world; \
    fi
I didn't try it, but it should work.
Your logic is actually correct.
The problem here is that RUN export ... won't work in a Dockerfile, because an exported variable does not persist across layers.
Each instruction runs in a temporary container that gets committed to an intermediate image, so the exported environment variable is gone by the next instruction.
ENV, on the other hand, as the documentation states:
The environment variables set using ENV will persist when a container is run from the resulting image.
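A minimal sketch illustrating the difference (variable names are illustrative):
FROM debian:stable
# an exported shell variable is gone once the RUN that set it finishes
RUN export FOO=bar
# an ENV value persists into later layers and the final image
ENV BAR=baz
RUN echo "FOO='$FOO' BAR='$BAR'"   # prints FOO='' BAR='baz'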
The only way to do this is during your docker run command when generating the container from your image, and wrap your logic around that:
if [ "${BUILD_ENV}" = "test" ]; then
docker run -e NODE_ENV=development myimage
else
docker run myimage
fi
While you can't set conditional ENV variables, you may be able to accomplish what you are after with the RUN command and a null-coalescing environment variable:
RUN node /var/app/current/index.js --env ${BUILD_ENV:-${NODE_ENV:-"development"}}
If we are talking only about an environment variable, then just set it to production:
ENV NODE_ENV prod
And during container start in development, you may use -e NODE_ENV=dev.
This way the image is always built for production, but the local container is launched in development.
This answer is great if you only need to check whether a build-arg is present and you want to set a default value.
To improve this solution, in case you want to use the data passed by the build-arg, you can do the following:
FROM debian:stable
ARG BUILD_DEVELOPMENT=production
ENV NODE_ENV=$BUILD_DEVELOPMENT
The magic comes from the default value for the ARG.
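For example, passing the arg then flows straight into the ENV (a sketch using the names above):
docker build -t env_dev --build-arg BUILD_DEVELOPMENT=development .
docker build -t env_prod .    # falls back to the default, NODE_ENV=production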
Passing values to Dockerfile and then to entrypoint script
From the command line pass in your required value (TARG)
docker run --env TARG=T1_WS01 -i projects/msbob
Then in your Dockerfile put something like this
Dockerfile:
# if $TARG is not set then "entrypoint" defaults to Q0_WS01
CMD ./entrypoint.sh ${TARG} Q0_WS01
The entrypoint.sh script only reads the first argument
entrypoint.sh:
#!/bin/bash
[ $1 ] || { echo "usage: entrypoint.sh <$TARG>" ; exit ; }
target_env=$1

Trying to pass an argument to the Dockerfile is not working when building an image with a .NET Core app

I have a .NET Core web app and I'm trying to build an image using the following command:
docker build -f "C:\myapp\Dockerfile" --force-rm -t infoeditor --label "com.microsoft.created-by=visual-studio" --label "com.microsoft.visual-studio.project-name=InfoEditor.Web" --build-arg USER=MYUSERNAME --build-arg PAT=MYPASS "C:\myapp\InfoEditor"
In the Dockerfile I have:
FROM mcr.microsoft.com/dotnet/aspnet:3.1 AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
FROM mcr.microsoft.com/dotnet/sdk:3.1 AS build
ARG USER
ARG PAT
RUN echo $PAT
RUN echo $USER
but echo returns $PAT instead of MYPASSWORD. The same happens with $USER.
What am I doing wrong? I also tried putting those ARGs on the first line, before FROM ... same thing.
In a multi-stage build, you need to re-declare the arguments in each stage:
FROM mcr.microsoft.com/dotnet/aspnet:3.1 AS base
ARG USER
RUN echo "1) $USER"
FROM mcr.microsoft.com/dotnet/sdk:3.1 AS build
ARG USER
RUN echo "2) $USER"
And then, to test:
$ docker build --tag temp --build-arg USER=MYUSERNAME .
From the reference:
An ARG instruction goes out of scope at the end of the build stage
where it was defined. To use an arg in multiple stages, each stage
must include the ARG instruction.

Conditional check in Dockerfile

I have a Dockerfile in which I want to copy certain files based on an input environment variable. So far I have tried the following. I am able to verify that my environment variable is passed correctly. During my docker build I get the following error: /bin/sh: COPY: not found
ARG arg=a
RUN if [ "$arg" = "a" ] ; then \
echo arg is $arg; \
COPY test.txt /
else \
echo arg is $arg; \
fi
What you are essentially trying to do here is to have a COPY command inside a RUN command.
Dockerfiles don't have nested commands.
Moreover, a RUN command runs inside an intermediate container built from the image: ARG arg=a creates an intermediate image, then docker spins up a container from it, runs the RUN command inside it, and commits that container as the next intermediate image in the build process.
So COPY is not something that can run inside the container; RUN basically runs a shell command inside the container, and COPY is not a shell command.
AFAICT Dockerfiles don't have any means of doing conditional execution. The best you can do is:
COPY test.txt /
RUN if [ "$arg" = "a" ] ; then \
        echo arg is $arg; \
    else \
        echo arg is $arg; \
        rm -r test.txt; \
    fi
But keep in mind that if test.txt is a 20GB file, the size of your image will still be > 20GB.


Resources