I'm trying to build an Oracle WebLogic Docker image with custom environment variables:
$ docker build -t 12213-domain --build-arg ADMIN_PORT=8888 --build-arg ADMIN_PASSWORD=wls .
but I get the following warning in the build log:
[Warning] One or more build-args [ADMIN_PASSWORD ADMIN_PORT] were not consumed
Here is the Dockerfile of the image I'm trying to build
#Copyright (c) 2014-2017 Oracle and/or its affiliates. All rights reserved.
#
#Licensed under the Universal Permissive License v 1.0 as shown at http://oss.oracle.com/licenses/upl.
#
# ORACLE DOCKERFILES PROJECT
# --------------------------
# This Dockerfile extends the Oracle WebLogic image by creating a sample domain.
#
# Util scripts are copied into the image enabling users to plug NodeManager
# automatically into the AdminServer running on another container.
#
# HOW TO BUILD THIS IMAGE
# -----------------------
# Put all downloaded files in the same directory as this Dockerfile
# Run:
# $ sudo docker build -t 12213-domain
#
# Pull base image
# ---------------
FROM oracle/weblogic:12.2.1.3-developer
# Maintainer
# ----------
MAINTAINER Monica Riccelli <monica.riccelli@oracle.com>
# WLS Configuration
# ---------------------------
ENV ADMIN_HOST="wlsadmin" \
NM_PORT="5556" \
MS_PORT="8001" \
DEBUG_PORT="8453" \
ORACLE_HOME=/u01/oracle \
SCRIPT_FILE=/u01/oracle/createAndStartWLSDomain.sh \
CONFIG_JVM_ARGS="-Dweblogic.security.SSL.ignoreHostnameVerification=true" \
PATH=$PATH:/u01/oracle/oracle_common/common/bin:/u01/oracle/wlserver/common/bin:/u01/oracle/user_projects/domains/${DOMAIN_NAME:-base_domain}/bin:/u01/oracle
# Domain and Server environment variables
# ------------------------------------------------------------
ENV DOMAIN_NAME="${DOMAIN_NAME:-base_domain}" \
PRE_DOMAIN_HOME=/u01/oracle/user_projects \
ADMIN_PORT="${ADMIN_PORT:-7001}" \
ADMIN_USERNAME="${ADMIN_USERNAME:-weblogic}" \
ADMIN_NAME="${ADMIN_NAME:-AdminServer}" \
MS_NAME="${MS_NAME:-""}" \
NM_NAME="${NM_NAME:-""}" \
ADMIN_PASSWORD="${ADMIN_PASSWORD:-""}" \
CLUSTER_NAME="${CLUSTER_NAME:-DockerCluster}" \
DEBUG_FLAG=true \
PRODUCTION_MODE=dev
# Add files required to build this image
COPY container-scripts/* /u01/oracle/
#Create directory where domain will be written to
USER root
RUN chmod +xw /u01/oracle/*.sh && \
chmod +xw /u01/oracle/*.py && \
mkdir -p $PRE_DOMAIN_HOME && \
chmod a+xr $PRE_DOMAIN_HOME && \
chown -R oracle:oracle $PRE_DOMAIN_HOME
VOLUME $PRE_DOMAIN_HOME
# Expose Node Manager default port, and also default for admin and managed server
EXPOSE $NM_PORT $ADMIN_PORT $MS_PORT $DEBUG_PORT
USER oracle
WORKDIR $ORACLE_HOME
# Define the default command: create and start the WLS domain.
CMD ["/u01/oracle/createAndStartWLSDomain.sh"]
I'm running Docker Toolbox on Windows, and the Docker version is:
$ docker --version
Docker version 18.03.0-ce, build 0520e24302
ARG
ARG <name>[=<default value>]
The ARG instruction defines a variable that users can pass at
build-time to the builder with the docker build command using the
--build-arg <varname>=<value> flag. If a user specifies a build argument that was not defined in the Dockerfile, the build outputs a
warning.
[Warning] One or more build-args [foo] were not consumed.
https://docs.docker.com/engine/reference/builder/#arg
Using ARG variables
You can use an ARG or an ENV instruction to specify variables that are
available to the RUN instruction. Environment variables defined using
the ENV instruction always override an ARG instruction of the same
name. Consider this Dockerfile with an ENV and ARG instruction.
Unlike an ARG instruction, ENV values are always persisted in the
built image. Consider a docker build without the --build-arg flag:
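The Dockerfile the docs refer to is not reproduced in the quote above; a minimal sketch of it, using the CONT_IMG_VER name from the docs, looks like this:
FROM ubuntu
ARG CONT_IMG_VER
ENV CONT_IMG_VER=v1.0.0
RUN echo $CONT_IMG_VER
Built without --build-arg (or even with --build-arg CONT_IMG_VER=v2.0.1), the RUN line prints v1.0.0, because the ENV value overrides the ARG. The docs then switch to ENV CONT_IMG_VER=${CONT_IMG_VER:-v1.0.0} so the build argument can actually take effect while still persisting as an environment variable.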
ARG is only available during the build of a Docker image (RUN etc), not after the image is created and containers are started from it (ENTRYPOINT, CMD). You can use ARG values to set ENV values to work around that.
So you need to do something like this
# Assign any default value to avoid an error; don't worry, your --build-arg flag will override this.
ARG ADMIN_PORT=some_default_value
ENV ADMIN_PORT=${ADMIN_PORT}
https://vsupalov.com/docker-arg-env-variable-guide/
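Applied to the WebLogic Dockerfile from the question, a minimal sketch would declare the two arguments after FROM and before the ENV block that consumes them (the layout below is just an illustration, not the full Oracle Dockerfile):
FROM oracle/weblogic:12.2.1.3-developer
# declare the build args so docker build can consume them
ARG ADMIN_PORT
ARG ADMIN_PASSWORD
# the existing ENV block then picks up the passed values (with its defaults as fallback)
ENV ADMIN_PORT="${ADMIN_PORT:-7001}" \
    ADMIN_PASSWORD="${ADMIN_PASSWORD:-}"
With those two ARG lines in place, the docker build --build-arg ADMIN_PORT=8888 --build-arg ADMIN_PASSWORD=wls call from the question no longer triggers the warning.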
Update:
In simple words: if you pass build args like --build-arg SOME_ARG=some_value to docker build but do not declare the corresponding ARG in the Dockerfile, this warning will be printed.
My Dockerfile that consumes the ARG:
FROM alpine
ARG SOME_ARG="Default_Value"
RUN echo "ARGS is ${SOME_ARG}"
Build command
docker build --no-cache --build-arg SOME_ARG=some_value -t alpine .
This will not print any warning, since the ARG is declared in the Dockerfile.
Now, if we remove the ARG from the Dockerfile and build the image:
FROM alpine
RUN echo "without ARGS dockerfile"
Build command
docker build --no-cache --build-arg SOME_ARG=some_value -t alpine .
Now we will get [Warning] One or more build-args [SOME_ARG] were not consumed, because SOME_ARG is neither declared nor consumed in the Dockerfile.
If you want to do the same with Docker Compose and a Dockerfile:
docker-compose.yml:
version: '3.3'
services:
  somecontb:
    container_name: somecontainer
    hostname: somecontainer
    build:
      args:
        SOME_ARG: Default_Value
      context: .
      dockerfile: DockerFile
    image: someimage
    volumes:
      - somecontainer_volume:/somefile
volumes:
  somecontainer_volume:
DockerFile:
FROM alpine
ARG SOME_ARG
ENV SOME_ENV=$SOME_ARG
Docker commands:
docker-compose down
docker volume rm somecontainer_volume
docker-compose build --no-cache
docker-compose up -d
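To override the default from the command line instead of editing the compose file, docker-compose build also accepts --build-arg, which should take precedence over the value in the compose file (the service name somecontb comes from the compose file above):
docker-compose build --no-cache --build-arg SOME_ARG=some_value somecontb
docker-compose up -d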
Related
I have a multi-stage build where a python script runs in the first stage and uses several env vars.
How do I set these variables in the docker build command?
Here's the Dockerfile:
FROM python:3 AS exporter
RUN mkdir -p /opt/export && pip install mysql-connector-python
ADD --chmod=555 export.py /opt/export
CMD ["python", "/opt/export/export.py"]
FROM nginx
COPY --from=exporter /tmp/gen/* /usr/share/nginx/html
My export.py script reads several env vars, and I have a .env file. If I run a container built from the first stage and pass --env-file, it works, but I can't seem to get it to work in the build stage.
How can I get the env vars to be available when building the first stage?
I don't care if they are saved in the image or not...
It seems you are looking for the ARG instruction. It's only available at build time and won't be available at image runtime. Don't use it for secrets which are not meant to stick around!
# default value if not using --build-arg instruction
ARG GLOBAL_AVAILABLE=iamglobal
FROM python:3 AS exporter
RUN mkdir -p /opt/export && pip install mysql-connector-python
ADD --chmod=555 export.py /opt/export
ARG GLOBAL_AVAILABLE
ENV GLOBAL_AVAILABLE=$GLOBAL_AVAILABLE
# only visible at exporter build stage:
ARG LOCAL_AVAILABLE=aimlocal
# multistage visible:
RUN echo ${GLOBAL_AVAILABLE}
# local stage visible (exporter build stage):
RUN echo ${LOCAL_AVAILABLE}
CMD ["python", "/opt/export/export.py"]
FROM nginx
COPY --from=exporter /tmp/gen/* /usr/share/nginx/html
you can pass custom ARG values by using the --build-arg flag:
docker build -t <image-name>:<tag> --build-arg GLOBAL_AVAILABLE=abc .
the general format to pass multiple args is:
docker build -t <image-name>:<tag> --build-arg <key1>=<value1> --build-arg <key2>=<value2> .
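If the values already live in a .env file, as in the question, one hedged option is to export them into the shell and forward the keys the build needs; this sketch assumes plain KEY=VALUE lines and the exporter-image tag is just a placeholder:
# export everything from .env into the current shell, then forward the keys the build needs
set -a; . ./.env; set +a
docker build -t exporter-image --build-arg GLOBAL_AVAILABLE="$GLOBAL_AVAILABLE" .
Each key you forward this way still needs a matching ARG (and, if the script reads it at build time, an ENV) in the exporter stage.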
some refs:
https://docs.docker.com/engine/reference/builder/
https://blog.bitsrc.io/how-to-pass-environment-info-during-docker-builds-1f7c5566dd0e
https://vsupalov.com/docker-arg-env-variable-guide/
When my Dockerfile was like below, it was working well.
...
RUN pip install git+https://user_name:my_password@github.com/repo_name.git#egg=repo_name==1.0.0
...
But when I changed Dockerfile to the below
...
RUN pip install git+https://user_name:${GITHUB_PASSWORD}@github.com/repo_name.git#egg=repo_name==1.0.0
...
And used the command below, it no longer works:
docker build -t my_repo:tag_name . --build-arg GITHUB_PASSWORD=my_password
You need to add an ARG declaration into the Dockerfile:
FROM ubuntu
ARG PASSWORD
RUN echo ${PASSWORD} > /password
Then build your docker image:
$ docker build -t foo . --build-arg PASSWORD="foobar"
After this, you can check for the existence of the parameter in your docker container:
$ docker run -it foo bash
root@ebeb5b33941e:/# cat /password
foobar
Therefore, add an ARG GITHUB_PASSWORD declaration to your Dockerfile to get it to work.
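Applied to the Dockerfile from the question, a minimal sketch would be:
ARG GITHUB_PASSWORD
RUN pip install git+https://user_name:${GITHUB_PASSWORD}@github.com/repo_name.git#egg=repo_name==1.0.0
Keep in mind the password passed this way is still visible via docker history; the build secrets shown in the next question avoid that.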
Using the docker build command line I can pass in a build secret as follows
docker build \
--secret=id=gradle.properties,src=$HOME/.gradle/gradle.properties \
--build-arg project=template-ms \
.
Then use it in a Dockerfile
# syntax = docker/dockerfile:1.0-experimental
FROM gradle:jdk12 AS build
COPY *.gradle .
RUN --mount=type=secret,target=/home/gradle/gradle.properties,id=gradle.properties gradle dependencies
COPY src/ src/
RUN --mount=type=secret,target=/home/gradle/gradle.properties,id=gradle.properties gradle build
RUN ls -lR build
FROM alpine AS unpacker
ARG project
COPY --from=build /home/gradle/build/libs/${project}.jar /tmp
RUN mkdir -p /opt/ms && unzip -q /tmp/${project}.jar -d /opt/ms && \
mv /opt/ms/BOOT-INF/lib /opt/lib
FROM openjdk:12
EXPOSE 8080
WORKDIR /opt/ms
USER nobody
CMD ["java", "-Xdebug", "-Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=0.0.0.0:8000", "-Dnetworkaddress.cache.ttl=5", "org.springframework.boot.loader.JarLauncher"]
HEALTHCHECK --start-period=600s CMD curl --silent --output /dev/null http://localhost:8080/actuator/health
COPY --from=unpacker /opt/lib /opt/ms/BOOT-INF/lib
COPY --from=unpacker /opt/ms/ /opt/ms/
I want to do a build using docker-compose, but I can't find in the docker-compose.yml reference how to pass the secret.
That way the developer just needs to type in docker-compose up
You can use environment or args to pass variables to container in docker-compose.
args:
  - secret=id=gradle.properties,src=$HOME/.gradle/gradle.properties
environment:
  - secret=id=gradle.properties,src=$HOME/.gradle/gradle.properties
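Those only forward ordinary variables, though, not BuildKit secrets. Newer Compose versions (the Compose specification used by docker compose v2) let you list build secrets directly; a hedged sketch, reusing the gradle.properties id from the Dockerfile above (the app service name is a placeholder):
services:
  app:
    build:
      context: .
      secrets:
        - gradle.properties
secrets:
  gradle.properties:
    file: ${HOME}/.gradle/gradle.properties
Then docker compose build (with BuildKit enabled) makes the secret available to the RUN --mount=type=secret,id=gradle.properties steps; check the build secrets section of the Compose file reference for your version.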
I have dockerfile
FROM centos:7
ENV foo=42
then I build it
docker build -t my_docker .
and run it.
docker run -it -d my_docker
Is it possible to pass arguments from the command line and use them with if/else in the Dockerfile? I mean something like
FROM centos:7
if (my_arg==42)
{ENV=TRUE}
else:
{ENV=FALSE}
and build with this argument.
docker build -t my_docker . --my_arg=42
It might not look that clean, but you can have a conditional in your Dockerfile as follows:
FROM centos:7
ARG arg
RUN if [[ -z "$arg" ]] ; then echo Argument not provided ; else echo Argument is $arg ; fi
and then build the image as:
docker build -t my_docker . --build-arg arg=45
or
docker build -t my_docker .
There is an interesting alternative to the proposed solutions that works with a single Dockerfile, requires only a single call to docker build per conditional build, and avoids bash.
Solution:
The following Dockerfile solves that problem. Copy-paste it and try it yourself.
ARG my_arg
FROM centos:7 AS base
RUN echo "do stuff with the centos image"
FROM base AS branch-version-1
RUN echo "this is the stage that sets VAR=TRUE"
ENV VAR=TRUE
FROM base AS branch-version-2
RUN echo "this is the stage that sets VAR=FALSE"
ENV VAR=FALSE
FROM branch-version-${my_arg} AS final
RUN echo "VAR is equal to ${VAR}"
Explanation of Dockerfile:
We first get a base image (centos:7 in your case) and put it into its own stage. The base stage should contain things that you want to do before the condition. After that, we have two more stages, representing the branches of our condition: branch-version-1 and branch-version-2. We build both of them. The final stage then chooses one of these stages, based on my_arg. A conditional Dockerfile. There you go.
Output when running:
(I abbreviated this a little...)
my_arg==2
docker build --build-arg my_arg=2 .
Step 1/12 : ARG my_arg
Step 2/12 : ARG ENV
Step 3/12 : FROM centos:7 AS base
Step 4/12 : RUN echo "do stuff with the centos image"
do stuff with the centos image
Step 5/12 : FROM base AS branch-version-1
Step 6/12 : RUN echo "this is the stage that sets VAR=TRUE"
this is the stage that sets VAR=TRUE
Step 7/12 : ENV VAR=TRUE
Step 8/12 : FROM base AS branch-version-2
Step 9/12 : RUN echo "this is the stage that sets VAR=FALSE"
this is the stage that sets VAR=FALSE
Step 10/12 : ENV VAR=FALSE
Step 11/12 : FROM branch-version-${my_arg}
Step 12/12 : RUN echo "VAR is equal to ${VAR}"
VAR is equal to FALSE
my_arg==1
docker build --build-arg my_arg=1 .
...
Step 11/12 : FROM branch-version-${my_arg}
Step 12/12 : RUN echo "VAR is equal to ${VAR}"
VAR is equal to TRUE
Thanks to Tõnis for this amazing idea!
Where at all possible, do not use the build args described in other answers. This is an old, messy solution. Docker's target property solves this issue.
Target Example
Dockerfile
FROM foo as base
RUN ...
# Build dev image
FROM base as image-dev
RUN ...
COPY ...
# Build prod image
FROM base as image-prod
RUN ...
COPY ...
docker build --target image-dev -t foo .
version: '3.4'
services:
  dev:
    build:
      context: .
      dockerfile: Dockerfile
      target: image-dev
Real World
Dockerfiles get complex in the real world. Use buildkit & COPY --from for faster, more maintainable Dockerfiles:
Docker builds every stage above the target, regardless of whether it is inherited or not. Use buildkit to build only inherited stages. Docker must be v19+. Hopefully this will be a default feature soon.
Targets may share build stages. Use COPY --from to simplify inheritance.
FROM foo as base
RUN ...
WORKDIR /opt/my-proj
FROM base as npm-ci-dev
# invalidate cache
COPY --chown=www-data:www-data ./package.json /opt/my-proj/package.json
COPY --chown=www-data:www-data ./package-lock.json /opt/my-proj/package-lock.json
RUN npm ci
FROM base as npm-ci-prod
# invalidate cache
COPY --chown=www-data:www-data ./package.json /opt/my-proj/package.json
COPY --chown=www-data:www-data ./package-lock.json /opt/my-proj/package-lock.json
RUN npm ci --only=prod
FROM base as proj-files
COPY --chown=www-data:www-data ./ /opt/my-proj
FROM base as image-dev
# Will mount, not copy in dev environment
RUN ...
FROM base as image-ci
COPY --from=npm-ci-dev /opt/my-proj .
COPY --from=proj-files /opt/my-proj .
RUN ...
FROM base as image-stage
COPY --from=npm-ci-prod /opt/my-proj .
COPY --from=proj-files /opt/my-proj .
RUN ...
FROM base as image-prod
COPY --from=npm-ci-prod /opt/my-proj .
COPY --from=proj-files /opt/my-proj .
RUN ...
Enable experimental mode.
echo '{"experimental": true}' | sudo tee /etc/docker/daemon.json
Build with buildkit enabled. Buildkit does not write inline cache metadata by default; enable it with --build-arg BUILDKIT_INLINE_CACHE=1 so the pushed image can later be used with --cache-from.
CI build job.
DOCKER_BUILDKIT=1 \
docker build \
--build-arg BUILDKIT_INLINE_CACHE=1 \
--target image-ci \
-t foo:ci \
.
Use cache from a pulled image with --cache-from
Prod build job
docker pull foo:ci
docker pull foo:stage
DOCKER_BUILDKIT=1 \
docker build \
--cache-from foo:ci,foo:stage \
--target image-prod \
-t prod \
.
For some reason, most of the answers here didn't help me (maybe it's related to my FROM image in the Dockerfile),
so I preferred to create a bash script in my workspace, combined with --build-arg, to handle the if statement during the Docker build by checking whether the argument is empty or not.
Bash script:
#!/bin/bash -x
if test -z "$1" ; then
echo "The arg is empty"
....do something....
else
echo "The arg is not empty: $1"
....do something else....
fi
Dockerfile:
FROM ...
....
ARG arg
COPY bash.sh /tmp/
RUN chmod u+x /tmp/bash.sh && /tmp/bash.sh $arg
....
Docker Build:
docker build --pull -f "Dockerfile" -t $SERVICE_NAME --build-arg arg="yes" .
Remark: This will go to the else (false) in the bash script
docker build --pull -f "Dockerfile" -t $SERVICE_NAME .
Remark: This will go to the if (true)
Edit 1:
After several tries I have found the following article and this one
which helped me to understand 2 things:
1) ARG before FROM is outside of the build
2) The default shell is /bin/sh, which means the if/else works a little differently in the docker build; for example, you need only one "=" instead of "==" to compare strings.
So you can do this inside the Dockerfile
# default argument when not provided via --build-arg
ARG argname=false
RUN if [ "$argname" = "false" ] ; then echo 'false'; else echo 'true'; fi
and in the docker build:
docker build --pull -f "Dockerfile" --label "service_name=${SERVICE_NAME}" -t $SERVICE_NAME --build-arg argname=true .
Just use the "test" binary directly to do this. You should also use the no-op command ":" if you don't want to specify an "else" condition, so docker does not stop with a non-zero return value error.
RUN test -z "$YOURVAR" && echo "var is not set" || echo "var is set"
RUN test -z "$YOURVAR" && echo "var is not set" || :
RUN test -z "$YOURVAR" || echo "var is set" && :
The accepted answer may solve the question, but if you want multiline if conditions in the Dockerfile, you can do that by placing \ at the end of each line (similar to what you would do in a shell script) and ending each command with ;. You can even define something like set -eux as the first command.
Example:
RUN set -eux; \
if [ -f /path/to/file ]; then \
mv /path/to/file /dest; \
fi; \
if [ -d /path/to/dir ]; then \
mv /path/to/dir /dest; \
fi
In your case:
FROM centos:7
ARG arg
RUN if [ -z "$arg" ] ; then \
echo Argument not provided; \
else \
echo Argument is $arg; \
fi
Then build with:
docker build -t my_docker . --build-arg arg=42
According to the doc for the docker build command, there is a parameter called --build-arg.
Example usage:
docker build --build-arg HTTP_PROXY=http://10.20.30.2:1234 .
IMO it's what you need :)
As others have said, a shell script would help.
Just an additional case that IMHO is worth mentioning (for someone else who stumbles upon this looking for a simpler case): environment replacement.
Environment variables (declared with the ENV statement) can also be used in certain instructions as variables to be interpreted by the Dockerfile.
The ${variable_name} syntax also supports a few of the standard bash modifiers as specified below:
${variable:-word} indicates that if variable is set then the result will be that value. If variable is not set then word will be the result.
${variable:+word} indicates that if variable is set then word will be the result, otherwise the result is the empty string.
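A minimal Dockerfile sketch of both modifiers listed above (the variable names are just illustrative):
FROM alpine
ARG BUILD_MODE
# ':-' -> use "production" when BUILD_MODE is unset or empty
ENV APP_MODE=${BUILD_MODE:-production}
# ':+' -> set the flag only when BUILD_MODE is set
ENV DEBUG_FLAG=${BUILD_MODE:+--debug}
RUN echo "APP_MODE=$APP_MODE DEBUG_FLAG=$DEBUG_FLAG"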
Using Bash script and Alpine/Centos
Dockerfile
# just change this to centos if needed
FROM alpine
ARG MYARG=""
ENV E_MYARG=$MYARG
ADD . /tmp
RUN chmod +x /tmp/script.sh && /tmp/script.sh
script.sh
#!/usr/bin/env sh
if [ -z "$E_MYARG" ]; then
echo "NO PARAM PASSED"
else
echo $E_MYARG
fi
Passing arg:
docker build -t test --build-arg MYARG="this is a test" .
....
Step 5/5 : RUN chmod +x /tmp/script.sh && /tmp/script.sh
---> Running in 10b0e07e33fc
this is a test
Removing intermediate container 10b0e07e33fc
---> f6f085ffb284
Successfully built f6f085ffb284
Without arg:
docker build -t test .
....
Step 5/5 : RUN chmod +x /tmp/script.sh && /tmp/script.sh
---> Running in b89210b0cac0
NO PARAM PASSED
Removing intermediate container b89210b0cac0
....
I had a similar issue setting a proxy server on a container.
The solution I'm using is an entrypoint script plus another script for environment variable configuration. Using RUN, you ensure the configuration script runs at build time, while ENTRYPOINT runs it when you start the container.
--build-arg is used on the command line to set the proxy user and password.
As I need the same environment variables on container startup, I used a file to "persist" them from build to run.
The entrypoint script looks like:
#!/bin/bash
# Load the script of environment variables
. /root/configproxy.sh
# Run the main container command
exec "$@"
configproxy.sh
#!/bin/bash
function start_config {
read u p < /root/proxy_credentials
export HTTP_PROXY=http://$u:$p@proxy.com:8080
export HTTPS_PROXY=https://$u:$p@proxy.com:8080
/bin/cat <<EOF > /etc/apt/apt.conf
Acquire::http::proxy "http://$u:$p@proxy.com:8080";
Acquire::https::proxy "https://$u:$p@proxy.com:8080";
EOF
}
if [ -s "/root/proxy_credentials" ]
then
start_config
fi
And in the Dockerfile, configure:
# Base Image
FROM ubuntu:18.04
ARG user
ARG pass
USER root
# -z the length of STRING is zero
# [] are an alias for test command
# if $user is not empty, write credentials file
RUN if [ ! -z "$user" ]; then echo "${user} ${pass}">/root/proxy_credentials ; fi
#copy bash scripts
COPY configproxy.sh /root
COPY startup.sh .
RUN ["/bin/bash", "-c", ". /root/configproxy.sh"]
# Install dependencies and tools
#RUN apt-get update -y && \
# apt-get install -yqq --no-install-recommends \
# vim iputils-ping
ENTRYPOINT ["./startup.sh"]
CMD ["sh", "-c", "bash"]
Build without proxy settings
docker build -t img01 -f Dockerfile .
Build with proxy settings
docker build -t img01 --build-arg user=<USER> --build-arg pass=<PASS> -f Dockerfile .
Take a look here.
You can just add a simple check:
RUN [ -z "$ARG" ] \
&& echo "ARG argument not provided." \
&& exit 1 || exit 0
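A hedged usage sketch (the ARG declaration and the alpine base are assumptions added here; the guard then fails the build when the argument is missing):
FROM alpine
ARG ARG
RUN [ -z "$ARG" ] \
    && echo "ARG argument not provided." \
    && exit 1 || exit 0
docker build -t guarded .                       # fails: argument not provided
docker build -t guarded --build-arg ARG=42 .    # succeeds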
I saw a lot of possible solutions, but none of them fit the problem I faced today. So I'm taking the time to answer the question with another possible solution that worked for me.
In my case I took advantage of the well-known if [ "$VAR" == "this" ]; then echo "do that"; fi. The caveat is that Docker (I can't explain why) doesn't like the double equals in this case, so we need to write it as if [ "$VAR" = "this" ]; then echo "do that"; fi.
There is the full example that worked in my case:
FROM node:16
# Let's set args and envs
ARG APP_ENV="dev"
ARG NPM_CMD="install"
ARG USER="nodeuser"
ARG PORT=8080
ENV NPM_CONFIG_PREFIX=/home/node/.npm-global
ENV PATH=$PATH:/home/node/.npm-global/bin
ENV NODE_ENV=${APP_ENV}
# Let's set the starting point
WORKDIR /app
# Let's build a cache
COPY package*.json ./
RUN date \
# If the environment is production or staging, omit dev packages
# If any other environment, install dev packages
&& if [ "$APP_ENV" = "production" ]; then NPM_CMD="ci --omit=dev"; fi \
&& if [ "$APP_ENV" = "staging" ]; then NPM_CMD="ci --omit=dev"; fi \
&& npm ${NPM_CMD} \
&& usermod -d /app -l ${USER} node
# Let's add the App
COPY . .
# Let's expose the App port
EXPOSE ${PORT}
# Let's set the user
USER ${USER}
# Let's set the start App command
CMD [ "node", "server.js" ]
So if the user passes the proper build argument, the docker build command will create an image of the app for production. If not, it will create an image of the app with dev Node.js packages.
To make it work, you can call it like this:
# docker build --build-arg APP_ENV=production -t app-node .
For anyone trying to build a Windows-based image, you need to access the argument with %...% for cmd.
# Dockerfile Windows
# ...
ARG SAMPLE_ARG
RUN if %SAMPLE_ARG% == hello_world ( `
echo hehe %SAMPLE_ARG% `
) else ( `
echo haha %SAMPLE_ARG% `
)
# ...
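A hedged usage example (this assumes the Dockerfile also sets the backtick escape directive, # escape=`, as its very first line so the backtick line continuations work; the image tag is a placeholder):
docker build -t winimage --build-arg SAMPLE_ARG=hello_world .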
BTW, ARG declaration must be placed after FROM, otherwise the argument will not be available.
# The ARGs before FROM are for the image
ARG IMLABEL=xxxx
ARG IMVERS=x.x
FROM ${IMLABEL}:${IMVERS}
# The ARGs after FROM are for parameters used in the build steps
ARG condition_x
RUN if [ "$condition_x" = "condition-1" ]; then \
        echo "condition-1"; \
    elif [ "$condition_x" = "condition-2" ]; then \
        echo "condition-2"; \
    else \
        echo "condition-others"; \
    fi
docker build --build-arg IMLABEL=xxxx --build-arg IMVERS=x.x --build-arg condition_x=condition-1 -f Dockerfile -t image:version .
Is it possible to conditionally set an ENV variable in a Dockerfile based on the value of a build ARG?
Ex: something like
ARG BUILDVAR=sad
ENV SOMEVAR=if $BUILDVAR -eq "SO"; then echo "hello"; else echo "world"; fi
Update: current usage based on Mario's answer:
ARG BUILD_ENV=prod
ENV NODE_ENV=production
RUN if [ "${BUILD_ENV}" = "test" ]; then export NODE_ENV=development; fi
However, running with --build-arg BUILD_ENV=test and then going onto the host, I still get
docker run -it mycontainer bin/bash
[root@brbqw1231 /]# echo $NODE_ENV
production
Yes, it is possible, but you need to use your build argument as a flag. You can use the shell's parameter expansion feature to check the condition. Here is a proof-of-concept Dockerfile:
FROM debian:stable
ARG BUILD_DEVELOPMENT
# if --build-arg BUILD_DEVELOPMENT=1, set NODE_ENV to 'development' or set to null otherwise.
ENV NODE_ENV=${BUILD_DEVELOPMENT:+development}
# if NODE_ENV is null, set it to 'production' (or leave as is otherwise).
ENV NODE_ENV=${NODE_ENV:-production}
Testing build:
docker build --rm -t env_prod ./
...
docker run -it env_prod bash
root@2a2c93f80ad3:/# echo $NODE_ENV
production
root@2a2c93f80ad3:/# exit
docker build --rm -t env_dev --build-arg BUILD_DEVELOPMENT=1 ./
...
docker run -it env_dev bash
root@2db6d7931f34:/# echo $NODE_ENV
development
You cannot run bash code directly in the Dockerfile; you have to use the RUN command. So, for example, you can replace ENV with RUN and export the variable in the if, like below:
ARG BUILDVAR=sad
RUN if [ "$BUILDVAR" = "SO" ]; \
then export SOMEVAR=hello; \
else export SOMEVAR=world; \
fi
I didn't try it, but it should work.
Your logic is actually correct.
The problem here is that RUN export ... won't work in a Dockerfile, because the exported variable does not persist beyond that instruction.
Each instruction runs in a temporary container used to generate its image layer, so by the next instruction the exported environment variable no longer exists.
ENV on the other hand as per the documentation states:
The environment variables set using ENV will persist when a container is run from the resulting image.
The only way to do this is at docker run time, when the container is created from your image, by wrapping your logic around that command:
if [ "${BUILD_ENV}" = "test" ]; then
docker run -e NODE_ENV=development myimage
else
docker run myimage
fi
While you can't set conditional ENV variables, you may be able to accomplish what you are after with the RUN command and a null-coalescing environment variable:
RUN node /var/app/current/index.js --env ${BUILD_ENV:-${NODE_ENV:-"development"}}
If we are talking only about an environment variable, then just set it to production:
ENV NODE_ENV prod
And during container start in development, you may use -e NODE_ENV=dev.
This way the image is always built for production, but the local container is launched in development mode.
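For example, reusing the image name from the question above:
docker run -it -e NODE_ENV=dev mycontainer
Inside that container, echo $NODE_ENV then prints dev, while the baked-in default remains prod.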
This answer is great if you only need to check whether a build-arg is present and you want to set a default value.
To improve this solution, in case you want to use the data passed by the build-arg, you can do the following:
FROM debian:stable
ARG BUILD_DEVELOPMENT=production
ENV NODE_ENV=$BUILD_DEVELOPMENT
The magic comes from the default value for the ARG.
Passing values to Dockerfile and then to entrypoint script
From the command line pass in your required value (TARG)
docker run --env TARG=T1_WS01 -i projects/msbob
Then in your Dockerfile put something like this
Dockerfile:
# if $TARG is not set then "entrypoint" defaults to Q0_WS01
CMD ./entrypoint.sh ${TARG} Q0_WS01
The entrypoint.sh script only reads the first argument
entrypoint.sh:
#!/bin/bash
[ "$1" ] || { echo "usage: entrypoint.sh <TARG>" ; exit ; }
target_env=$1