Conditional ENV in Dockerfile - docker

Is it possible to conditionally set an ENV variable in a Dockerfile based on the value of a build ARG?
Ex: something like
ARG BUILDVAR=sad
ENV SOMEVAR=if $BUILDVAR -eq "SO"; then echo "hello"; else echo "world"; fi
Update: current usage based on Mario's answer:
ARG BUILD_ENV=prod
ENV NODE_ENV=production
RUN if [ "${BUILD_ENV}" = "test" ]; then export NODE_ENV=development; fi
However, running with --build-arg BUILD_ENV=test and then going onto the host, I still get
docker run -it mycontainer bin/bash
[root@brbqw1231 /]# echo $NODE_ENV
production

Yes, it is possible, but you need to use your build argument as a flag. You can use the parameter-expansion feature of the shell to check the condition. Here is a proof-of-concept Dockerfile:
FROM debian:stable
ARG BUILD_DEVELOPMENT
# if --build-arg BUILD_DEVELOPMENT=1, set NODE_ENV to 'development' or set to null otherwise.
ENV NODE_ENV=${BUILD_DEVELOPMENT:+development}
# if NODE_ENV is null, set it to 'production' (or leave as is otherwise).
ENV NODE_ENV=${NODE_ENV:-production}
Testing build:
docker build --rm -t env_prod ./
...
docker run -it env_prod bash
root@2a2c93f80ad3:/# echo $NODE_ENV
production
root@2a2c93f80ad3:/# exit
docker build --rm -t env_dev --build-arg BUILD_DEVELOPMENT=1 ./
...
docker run -it env_dev bash
root@2db6d7931f34:/# echo $NODE_ENV
development

You cannot run bash code in the Dockerfile directly; you have to use the RUN command. So, for example, you can change ENV with RUN and export the variable in the if, like below:
ARG BUILDVAR=sad
RUN if [ "$BUILDVAR" = "SO" ]; \
then export SOMEVAR=hello; \
else export SOMEVAR=world; \
fi
I didn't try it, but it should work.

Your logic is actually correct.
The problem here is that RUN export ... won't work in a Dockerfile, because the exported variable doesn't persist into the image.
Each Dockerfile instruction runs in a temporary container used to generate that image layer, so shell-level exports are gone by the next instruction.
ENV, on the other hand, as the documentation states:
The environment variables set using ENV will persist when a container is run from the resulting image.
The only way to do this is at docker run time, when you generate the container from your image, and wrap your logic around that:
if [ "${BUILD_ENV}" = "test" ]; then
docker run -e NODE_ENV=development myimage
else
docker run myimage
fi

While you can't set conditional ENV variables, you may be able to accomplish what you are after with the RUN command and a null-coalescing environment variable:
RUN node /var/app/current/index.js --env ${BUILD_ENV:-${NODE_ENV:-"development"}}

If we are talking only about an environment variable, then just set it to production:
ENV NODE_ENV prod
And during container start in development, you may use -e NODE_ENV=dev.
This way the image is always built for production, but the local container is launched in development.
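For example, with an illustrative image name:
docker build -t myapp .
docker run -e NODE_ENV=dev myapp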

This answer is great if you only need to check whether a build arg is present and want to set a default value.
To improve on it, in case you want to use the data passed by the build arg, you can do the following:
FROM debian:stable
ARG BUILD_DEVELOPMENT=production
ENV NODE_ENV=$BUILD_DEVELOPMENT
The magic comes from the default value for the ARG.
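Usage then looks roughly like this (image tags are illustrative): pass the build arg to get a development image, or omit it to fall back to the ARG default.
docker build -t app_prod .                                            # NODE_ENV=production
docker build --build-arg BUILD_DEVELOPMENT=development -t app_dev .  # NODE_ENV=development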

Passing values to Dockerfile and then to entrypoint script
From the command line, pass in your required value (TARG):
docker run --env TARG=T1_WS01 -i projects/msbob
Then in your Dockerfile put something like this:
Dockerfile:
# if $TARG is not set then "entrypoint" defaults to Q0_WS01
CMD ./entrypoint.sh ${TARG} Q0_WS01
The entrypoint.sh script only reads the first argument:
entrypoint.sh:
#!/bin/bash
[ $1 ] || { echo "usage: entrypoint.sh <$TARG>" ; exit ; }
target_env=$1
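A sketch of how the script might continue after that assignment; the exec line and its command are placeholders, not part of the original answer:
echo "starting with environment ${target_env}"
exec ./run-app --env "${target_env}"   # placeholder for the real container command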

I had a similar issue for setting proxy server on a container.
The solution I'm using is an entrypoint script plus another script for environment variable configuration. Using RUN, you ensure the configuration script runs at build time, and ENTRYPOINT ensures it runs when you start the container.
--build-arg is used on command line to set proxy user and password.
The entrypoint script looks like:
#!/bin/bash
# Load the script of environment variables
. /root/configproxy.sh
# Run the main container command
exec "$#"
configproxy.sh
#!/bin/bash
function start_config {
read u p < /root/proxy_credentials
export HTTP_PROXY=http://$u:$p@proxy.com:8080
export HTTPS_PROXY=https://$u:$p@proxy.com:8080
/bin/cat <<EOF > /etc/apt/apt.conf
Acquire::http::proxy "http://$u:$p@proxy.com:8080";
Acquire::https::proxy "https://$u:$p@proxy.com:8080";
EOF
}
if [ -s "/root/proxy_credentials" ]
then
start_config
fi
And in the Dockerfile, configure:
# Base Image
FROM ubuntu:18.04
ARG user
ARG pass
USER root
# -z the length of STRING is zero
# [] are an alias for test command
# if $user is not empty, write credentials file
RUN if [ ! -z "$user" ]; then echo "${user} ${pass}">/root/proxy_credentials ; fi
#copy bash scripts
COPY configproxy.sh /root
COPY startup.sh .
RUN ["/bin/bash", "-c", ". /root/configproxy.sh"]
# Install dependencies and tools
#RUN apt-get update -y && \
# apt-get install -yqq --no-install-recommends \
# vim iputils-ping
ENTRYPOINT ["./startup.sh"]
CMD ["sh", "-c", "bash"]
Build without proxy settings
docker build -t img01 -f Dockerfile .
Build with proxy settings
docker build -t img01 --build-arg user=<USER> --build-arg pass=<PASS> -f Dockerfile .
Take a look here.

Related

How to override a docker env when it is used by another env?

Is there any way to have an environment variable use another in a Dockerfile, in a way that I can override them at docker run time?
$ cat Dockerfile
FROM docker.ouroath.com:4443/containers/ylinux7-buildtools
ENV VARX foo
ENV VARY ${VARX}bar
CMD env
$ docker build -t envtest .
...
$ docker run envtest
VARY=foobar
VARX=foo
$ docker run -e VARX=123 envtest
VARY=foobar
VARX=123
How can I change only VARX=123 and get VARY=123bar, to implement something like a short-variable pattern?
Everything in a Dockerfile (except the CMD) is fully evaluated and expanded when the image is built. So in this setup, VARY always keeps the value it was given when its ENV statement was executed, even if you change VARX later when the image is run.
You can get around this with an entrypoint wrapper script. For example:
#!/bin/sh
# entrypoint.sh
# Give the variable Y a computed value, if it's not already set.
if [ -z "$VARY" ]; then
export VARY="${VARX}bar"
fi
# Run the main container command.
exec "$#"
# Dockerfile
FROM ubuntu:20.04
ENV VARX=foo
# do not set VARY here
COPY entrypoint.sh /usr/local/bin
ENTRYPOINT ["entrypoint.sh"] # must be JSON-array syntax
CMD env
The ENTRYPOINT runs at container startup, getting passed the CMD as its arguments. It does the first-time setup (here, setting the environment variable) and then its last line runs the CMD (or whatever you override it with when running the container).
docker build -t envtest .
docker run --rm envtest | grep VAR
# VARX=foo
# VARY=foobar
docker run --rm -e VARX=quux envtest | grep VAR
# VARX=quux
# VARY=quuxbar
docker run --rm -e VARY=quux envtest | grep VAR
# VARX=foo
# VARY=quux
docker run --rm envtest sh -c 'echo hello $VARY'
# hello foobar

conditional environment definition in Dockerfile [duplicate]


Passing docker runtime environment variables in docker image

Here's my Dockerfile. I want to override the default environment variables set below with whatever is passed in the docker run command mentioned at the end.
FROM ubuntu:16.04
ADD http://www.nic.funet.fi/pub/mirrors/apache.org/tomcat/tomcat-8/v8.0.48/bin/apache-tomcat-8.0.48.tar.gz /usr/local/
RUN cd /usr/local && tar -zxvf apache-tomcat-8.0.48.tar.gz && rm apache-tomcat-8.0.48.tar.gz
RUN mv /usr/local/apache-tomcat-8.0.48 /usr/local/tomcat
RUN rm -rf /usr/local/tomcat/webapps/*
ENV JAVA_HOME /usr/lib/jvm/java-8-openjdk-amd64
ENV CATALINA_HOME /usr/local/tomcat
ENV CATALINA_BASE /usr/local/tomcat
ENV PATH $PATH:$JAVA_HOME/bin:$CATALINA_HOME/bin
ENV dummy_url defaulturl
ENV database databasedefault
COPY my.war /usr/local/tomcat/webapps/
RUN echo >> /usr/local/tomcat/conf/test.properties
RUN echo dummy_url =$dummy_url >> /usr/local/tomcat/conf/test.properties
RUN echo database =$database >> /usr/local/tomcat/conf/test.properties
ENTRYPOINT ["catalina.sh", "run"]
To run in local :
docker run -p 8080:8080 -e dummy_url=http:google.com -e database=jdbc://mysql allimages/myimage:latest
dummy_url and database do not seem to be getting overridden in the file I am adding them to, test.properties. Any ideas would be greatly appreciated.
I want to override the default environment variables being set below from whatever is passed in the docker run command mentioned in the end
That means overriding an image file (/usr/local/tomcat/conf/test.properties) when running the image as a container (docker run), not when building the image (docker build with its --build-arg option and the ARG Dockerfile entry).
That means you create a local script file which:
modifies /usr/local/tomcat/conf/test.properties
calls catalina.sh run "$@" (see also "Store Bash script arguments $@ in a variable" from "Accessing bash command line args $@ vs $*")
That is:
myscript.sh
#!/bin/bash
# bash, not sh: the script uses arrays below
echo dummy_url=$dummy_url >> /usr/local/tomcat/conf/test.properties
echo database=$database >> /usr/local/tomcat/conf/test.properties
args=("$@")
catalina.sh run "${args[@]}"
You would modify your Dockerfile to COPY that script and call it:
COPY myscript.sh /usr/local/
...
ENTRYPOINT ["/usr/local/myscript.sh"]
Then, and only then, the -e options of docker run would work.
You are confusing what gets executed when building the image and what gets executed when starting the container.
The RUN command inside the dockerfile is executed when building the image, when running docker build ...
RUN echo dummy_url =$dummy_url >> /usr/local/tomcat/conf/test.properties
RUN echo database =$database >> /usr/local/tomcat/conf/test.properties
Thus, when the lines above execute, the file test.properties will contain the default values specified in the Dockerfile.
When you execute docker run -p 8080:8080 -e dummy_url=http:google.com -e database=jdbc://mysql allimages/myimage:latest the ENTRYPOINT ["catalina.sh", "run"] will get executed with
env values dummy_url=http:google.com and database=jdbc://mysql.
You can allow the values in test.properties to be overridden by either:
Moving the echo dummy_url=$dummy_url >> /usr/local/tomcat/conf/test.properties and echo database=$database >> /usr/local/tomcat/conf/test.properties lines to the start of the catalina.sh script, or
Overriding the values when building the image, as such:
ARG dummy_url_arg
ARG database_arg
ENV dummy_url $dummy_url_arg
ENV database $database_arg
COPY my.war /usr/local/tomcat/webapps/
RUN echo >> /usr/local/tomcat/conf/test.properties
RUN echo dummy_url =$dummy_url >> /usr/local/tomcat/conf/test.properties
RUN echo database =$database >> /usr/local/tomcat/conf/test.properties
ENTRYPOINT ["catalina.sh", "run"]
And when building the image, override the values using docker build --build-arg dummy_url_arg=http:google.com --build-arg database_arg=jdbc://mysql -t allimages/myimage:latest .

Dockerfile if else condition with external arguments

I have a Dockerfile:
FROM centos:7
ENV foo=42
then I build it
docker build -t my_docker .
and run it.
docker run -it -d my_docker
Is it possible to pass arguments from the command line and use them with if/else in the Dockerfile? I mean something like
FROM centos:7
if (my_arg==42)
{ENV=TRUE}
else:
{ENV=FALSE}
and build with this argument.
docker build -t my_docker . --my_arg=42
It might not look that clean, but you can have your Dockerfile (conditional) as follows:
FROM centos:7
ARG arg
RUN if [[ -z "$arg" ]] ; then echo Argument not provided ; else echo Argument is $arg ; fi
and then build the image as:
docker build -t my_docker . --build-arg arg=45
or
docker build -t my_docker .
There is an interesting alternative to the proposed solutions that works with a single Dockerfile, requires only a single call to docker build per conditional build, and avoids bash.
Solution:
The following Dockerfile solves that problem. Copy-paste it and try it yourself.
ARG my_arg
FROM centos:7 AS base
RUN echo "do stuff with the centos image"
FROM base AS branch-version-1
RUN echo "this is the stage that sets VAR=TRUE"
ENV VAR=TRUE
FROM base AS branch-version-2
RUN echo "this is the stage that sets VAR=FALSE"
ENV VAR=FALSE
FROM branch-version-${my_arg} AS final
RUN echo "VAR is equal to ${VAR}"
Explanation of Dockerfile:
We first get a base image (centos:7 in your case) and put it into its own stage. The base stage should contain things that you want to do before the condition. After that, we have two more stages, representing the branches of our condition: branch-version-1 and branch-version-2. We build both of them. The final stage then chooses one of these stages, based on my_arg. Conditional Dockerfile. There you go.
Output when running:
(I abbreviated this a little...)
my_arg==2
docker build --build-arg my_arg=2 .
Step 1/12 : ARG my_arg
Step 2/12 : ARG ENV
Step 3/12 : FROM centos:7 AS base
Step 4/12 : RUN echo "do stuff with the centos image"
do stuff with the centos image
Step 5/12 : FROM base AS branch-version-1
Step 6/12 : RUN echo "this is the stage that sets VAR=TRUE"
this is the stage that sets VAR=TRUE
Step 7/12 : ENV VAR=TRUE
Step 8/12 : FROM base AS branch-version-2
Step 9/12 : RUN echo "this is the stage that sets VAR=FALSE"
this is the stage that sets VAR=FALSE
Step 10/12 : ENV VAR=FALSE
Step 11/12 : FROM branch-version-${my_arg}
Step 12/12 : RUN echo "VAR is equal to ${VAR}"
VAR is equal to FALSE
my_arg==1
docker build --build-arg my_arg=1 .
...
Step 11/12 : FROM branch-version-${my_arg}
Step 12/12 : RUN echo "VAR is equal to ${VAR}"
VAR is equal to TRUE
Thanks to Tõnis for this amazing idea!
Do not use the build args described in other answers where at all possible. They are an old, messy solution. Docker's target property solves this issue.
Target Example
Dockerfile
FROM foo as base
RUN ...
# Build dev image
FROM base as image-dev
RUN ...
COPY ...
# Build prod image
FROM base as image-prod
RUN ...
COPY ...
docker build --target image-dev -t foo .
Or, in docker-compose.yml:
version: '3.4'
services:
dev:
build:
context: .
dockerfile: Dockerfile
target: image-dev
Real World
Dockerfiles get complex in the real world. Use buildkit & COPY --from for faster, more maintainable Dockerfiles:
Docker builds every stage above the target, regardless of whether it is inherited or not. Use buildkit to build only inherited stages. Docker must be v19+. Hopefully this will be a default feature soon.
Targets may share build stages. Use COPY --from to simplify inheritance.
FROM foo as base
RUN ...
WORKDIR /opt/my-proj
FROM base as npm-ci-dev
# invalidate cache
COPY --chown=www-data:www-data ./package.json /opt/my-proj/package.json
COPY --chown=www-data:www-data ./package-lock.json /opt/my-proj/package-lock.json
RUN npm ci
FROM base as npm-ci-prod
# invalidate cache
COPY --chown=www-data:www-data ./package.json /opt/my-proj/package.json
COPY --chown=www-data:www-data ./package-lock.json /opt/my-proj/package-lock.json
RUN npm ci --only=prod
FROM base as proj-files
COPY --chown=www-data:www-data ./ /opt/my-proj
FROM base as image-dev
# Will mount, not copy in dev environment
RUN ...
FROM base as image-ci
COPY --from=npm-ci-dev /opt/my-proj .
COPY --from=proj-files /opt/my-proj .
RUN ...
FROM base as image-stage
COPY --from=npm-ci-prod /opt/my-proj .
COPY --from=proj-files /opt/my-proj .
RUN ...
FROM base as image-prod
COPY --from=npm-ci-prod /opt/my-proj .
COPY --from=proj-files /opt/my-proj .
RUN ...
Enable experimental mode.
sudo echo '{"experimental": true}' | sudo tee /etc/docker/daemon.json
Build with buildkit enabled. Buildkit builds without cache by default - enable with --build-arg BUILDKIT_INLINE_CACHE=1
CI build job.
DOCKER_BUILDKIT=1 \
docker build \
--build-arg BUILDKIT_INLINE_CACHE=1 \
--target image-ci \
-t foo:ci \
.
Use cache from a pulled image with --cache-from
Prod build job
docker pull foo:ci
docker pull foo:stage
DOCKER_BUILDKIT=1 \
docker build \
--cache-from foo:ci,foo:stage \
--target image-prod \
-t prod \
.
For some reason most of the answers here didn't help me (maybe it's related to my FROM image in the Dockerfile).
So I preferred to create a bash script in my workspace, combined with --build-arg, to handle the if statement during the Docker build by checking whether the argument is empty or not.
Bash script:
#!/bin/bash -x
if test -z "$1" ; then
echo "The arg is empty"
....do something....
else
echo "The arg is not empty: $1"
....do something else....
fi
Dockerfile:
FROM ...
....
ARG arg
COPY bash.sh /tmp/
RUN chmod u+x /tmp/bash.sh && /tmp/bash.sh $arg
....
Docker Build:
docker build --pull -f "Dockerfile" -t $SERVICE_NAME --build-arg arg="yes" .
Remark: This will go to the else (false) in the bash script
docker build --pull -f "Dockerfile" -t $SERVICE_NAME .
Remark: This will go to the if (true)
Edit 1:
After several tries I found the following article and this one, which helped me understand two things:
1) ARG before FROM is outside of the build.
2) The default shell is /bin/sh, which means that if/else works a little differently during the docker build; for example, you need only one "=" instead of "==" to compare strings.
So you can do this inside the Dockerfile
# default argument when not provided in --build-arg
ARG argname=false
RUN if [ "$argname" = "false" ] ; then echo 'false'; else echo 'true'; fi
and in the docker build:
docker build --pull -f "Dockerfile" --label "service_name=${SERVICE_NAME}" -t $SERVICE_NAME --build-arg argname=true .
Just use the "test" binary directly to do this. You also should use the noop command ":" if you don't want to specify an "else" condition, so docker does not stop with a non zero return value error.
RUN test -z "$YOURVAR" || echo "var is set" && echo "var is not set"
RUN test -z "$YOURVAR" && echo "var is not set" || :
RUN test -z "$YOURVAR" || echo "var is set" && :
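In context, the variable still has to come from somewhere, typically a build arg; a minimal sketch (base image and names are illustrative):
FROM alpine
ARG YOURVAR
RUN test -z "$YOURVAR" && echo "var is not set" || echo "var is set"
and then:
docker build --build-arg YOURVAR=hello .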
The accepted answer may solve the question, but if you want multi-line if conditions in the Dockerfile, you can do that by placing \ at the end of each line (similar to how you would in a shell script) and ending each command with ;. You can even define something like set -eux as the first command.
Example:
RUN set -eux; \
if [ -f /path/to/file ]; then \
mv /path/to/file /dest; \
fi; \
if [ -d /path/to/dir ]; then \
mv /path/to/dir /dest; \
fi
In your case:
FROM centos:7
ARG arg
RUN if [ -z "$arg" ] ; then \
echo Argument not provided; \
else \
echo Argument is $arg; \
fi
Then build with:
docker build -t my_docker . --build-arg arg=42
According to the doc for the docker build command, there is a parameter called --build-arg.
Example usage:
docker build --build-arg HTTP_PROXY=http://10.20.30.2:1234 .
IMO it's what you need :)
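To consume a custom --build-arg inside the Dockerfile, declare it with ARG; a minimal sketch (names are illustrative):
FROM centos:7
ARG MY_SETTING=default
RUN echo "MY_SETTING is ${MY_SETTING}"
docker build --build-arg MY_SETTING=custom .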
Exactly as others have said, a shell script would help.
Just an additional case that IMHO is worth mentioning (for someone else who stumbles upon this looking for a simpler case): environment replacement.
Environment variables (declared with the ENV statement) can also be used in certain instructions as variables to be interpreted by the Dockerfile.
The ${variable_name} syntax also supports a few of the standard bash modifiers as specified below:
${variable:-word} indicates that if variable is set then the result will be that value. If variable is not set then word will be the result.
${variable:+word} indicates that if variable is set then word will be the result, otherwise the result is the empty string.
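A small sketch combining both modifiers, in the spirit of the accepted answer (variable names are illustrative):
FROM alpine
ARG FEATURE
# 'enabled' if the FEATURE build arg was passed, empty otherwise
ENV FEATURE_STATE=${FEATURE:+enabled}
# fall back to 'disabled' when the previous line left it empty
ENV FEATURE_STATE=${FEATURE_STATE:-disabled}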
Using Bash script and Alpine/Centos
Dockerfile
# just change this to centos
FROM alpine
ARG MYARG=""
ENV E_MYARG=$MYARG
ADD . /tmp
RUN chmod +x /tmp/script.sh && /tmp/script.sh
script.sh
#!/usr/bin/env sh
if [ -z "$E_MYARG" ]; then
echo "NO PARAM PASSED"
else
echo $E_MYARG
fi
Passing arg:
docker build -t test --build-arg MYARG="this is a test" .
....
Step 5/5 : RUN chmod +x /tmp/script.sh && /tmp/script.sh
---> Running in 10b0e07e33fc
this is a test
Removing intermediate container 10b0e07e33fc
---> f6f085ffb284
Successfully built f6f085ffb284
Without arg:
docker build -t test .
....
Step 5/5 : RUN chmod +x /tmp/script.sh && /tmp/script.sh
---> Running in b89210b0cac0
NO PARAM PASSED
Removing intermediate container b89210b0cac0
....
You can just add a simple check:
RUN [ -z "$ARG" ] \
&& echo "ARG argument not provided." \
&& exit 1 || exit 0
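For that check to see the value, the argument still has to be declared and passed; a minimal sketch (names are illustrative):
FROM centos:7
ARG ARG
RUN [ -z "$ARG" ] && echo "ARG argument not provided." && exit 1 || exit 0
docker build --build-arg ARG=42 .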
I saw a lot of possible solutions, but none fit the problem I faced today. So I'm taking the time to answer the question with another possible solution that worked for me.
In my case I took advantage of the well-known if [ "$VAR" == "this" ]; then echo "do that"; fi. The caveat is that Docker's build shell doesn't like the double equals in this case (RUN uses /bin/sh by default), so we need to write it as if [ "$VAR" = "this" ]; then echo "do that"; fi.
There is the full example that worked in my case:
FROM node:16
# Let's set args and envs
ARG APP_ENV="dev"
ARG NPM_CMD="install"
ARG USER="nodeuser"
ARG PORT=8080
ENV NPM_CONFIG_PREFIX=/home/node/.npm-global
ENV PATH=$PATH:/home/node/.npm-global/bin
ENV NODE_ENV=${APP_ENV}
# Let's set the starting point
WORKDIR /app
# Let's build a cache
COPY package*.json ./
RUN date \
# If the environment is production or staging, omit dev packages
# If any other environment, install dev packages
&& if [ "$APP_ENV" = "production" ]; then NPM_CMD="ci --omit=dev"; fi \
&& if [ "$APP_ENV" = "staging" ]; then NPM_CMD="ci --omit=dev"; fi \
&& npm ${NPM_CMD} \
&& usermod -d /app -l ${USER} node
# Let's add the App
COPY . .
# Let's expose the App port
EXPOSE ${PORT}
# Let's set the user
USER ${USER}
# Let's set the start App command
CMD [ "node", "server.js" ]
So if the user passes the proper build argument, the docker build command will create an image of the app for production. If not, it will create an image of the app with dev Node.js packages.
To make it work, you can call it like this:
# docker build --build-arg APP_ENV=production -t app-node .
For anyone trying to build a Windows-based image, you need to access the argument with %ARG% syntax for cmd.
# Dockerfile Windows
# ...
ARG SAMPLE_ARG
RUN if %SAMPLE_ARG% == hello_world ( `
echo hehe %SAMPLE_ARG% `
) else ( `
echo haha %SAMPLE_ARG% `
)
# ...
BTW, the ARG declaration must be placed after FROM, otherwise the argument will not be available inside the build stage (ARGs before FROM are only usable by the FROM line itself).
# The ARGs in front of FROM are for the image
ARG IMLABEL=xxxx
ARG IMVERS=x.x
FROM ${IMLABEL}:${IMVERS}
# The ARGs after FROM are for parameters to be used in the script
ARG condition_x
RUN if [ "$condition_x" = "condition-1" ]; then \
echo "condition-1"; \
elif [ "$condition_x" = "condition-2" ]; then \
echo "condition-2"; \
else \
echo "condition-others"; \
fi
docker build --build-arg IMLABEL=<label> --build-arg IMVERS=<version> --build-arg condition_x=<value> -f Dockerfile -t image:version .

Can we pass ENV variables through cmd line while building a docker image through dockerfile?

I am working on a task that involves building a docker image with CentOS as its base using a Dockerfile. One of the steps inside the Dockerfile needs the http_proxy and https_proxy ENV variables to be set in order to work behind the proxy.
As this Dockerfile will be used by multiple teams having different proxies, I want to avoid having to edit the Dockerfile for each team. Instead I am looking for a solution which allows me to pass ENV variables at build time, e.g.,
sudo docker build -e http_proxy=somevalue .
I'm not sure if there is already an option that provides this. Am I missing something?
Images can be built using build arguments (in Docker 1.9+), which work like environment variables.
Here is the method:
FROM php:7.0-fpm
ARG APP_ENV=local
ENV APP_ENV=${APP_ENV}
RUN cd /usr/local/etc/php && ln -sf php.ini-${APP_ENV} php.ini
and then build a production container:
docker build --build-arg APP_ENV=prod .
For your particular problem:
FROM debian
ENV http_proxy=${http_proxy}
and then run:
docker build --build-arg http_proxy=10.11.24.31 .
Note that if you build your containers with docker-compose, you can specify these build-args in the docker-compose.yml file, but not on the command-line. However, you can use variable substitution in the docker-compose.yml file, which uses environment variables.
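A sketch of that pattern in docker-compose.yml; service and argument names are illustrative, and ${APP_ENV:-local} is substituted from the shell environment that runs docker-compose:
version: '3.4'
services:
  app:
    build:
      context: .
      args:
        APP_ENV: ${APP_ENV:-local}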
So I had to hunt this down by trial and error. Many people explain that you can pass ARG -> ENV, but it doesn't always work, because it matters a great deal whether the ARG is defined before or after the FROM tag.
The example below should explain this clearly. My main problem originally was that all of my ARGs were defined prior to FROM, which resulted in all the ENVs always being undefined.
# ARGs prior to the FROM tag are available only to FROM (for a dynamic FROM tag)
ARG NODE_VERSION
FROM node:${NODE_VERSION}-alpine
# ARGs after FROM can be bound/linked to ENVs to make the container's environment dynamic
ARG NPM_AUTH_TOKEN
ARG EMAIL
ARG NPM_REPO
ENV NPM_AUTH_TOKEN=${NPM_AUTH_TOKEN}
ENV EMAIL=${EMAIL}
ENV NPM_REPO=${NPM_REPO}
# for good measure, what do we really have
RUN echo NPM_AUTH_TOKEN: $NPM_AUTH_TOKEN && \
echo EMAIL: $EMAIL && \
echo NPM_REPO: $NPM_REPO && \
echo $HI_5
# remember to change HI_5 every build to break `docker build`'s cache if you want to debug the stdout
..... # rest of whatever you want RUN, CMD, ENTRYPOINT etc..
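A build invocation for the sketch above might look like this; every value is a placeholder:
docker build \
  --build-arg NODE_VERSION=16 \
  --build-arg NPM_AUTH_TOKEN=<token> \
  --build-arg EMAIL=<email> \
  --build-arg NPM_REPO=<registry-url> \
  -t my-node-image .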
I faced the same situation.
According to Sin30's answer, a pretty solution is using the shell:
CMD ["sh", "-c", "cd /usr/local/etc/php && ln -sf php.ini-$APP_ENV php.ini"]
