Dockerfile: Inherit environment variables from the shell - docker

When building an image using a Dockerfile, in order to make some env vars available during the build, one has to declare them explicitly in the form
ENV MYVAR=MYVALUE
AFAIK (correct me if I am misguided here), the environment variables exported in the shell from which the docker build command is executed are not passed to the build, i.e. if in my shell I have previously run
export FOO=BAR
having the following declaration in my Dockerfile
ENV FOO=$FOO
and then echoing (still from within the Dockerfile) $FOO will print an empty string.
So if all of the above is correct, my question is whether there is a way for the docker build to inherit the environment of the shell it is called from.

You could define default values with ARG:
ARG build_var=default_value
ENV ENV_VAR=$build_var
and then override at build time:
docker build --build-arg build_var=$HOST_VAR .
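If the variable is already exported in your shell, you can also pass --build-arg with just the name; docker build then propagates the value from the calling environment. A minimal sketch, reusing FOO from the question (myimage is an arbitrary tag):
ARG FOO
ENV FOO=$FOO
and build with:
export FOO=BAR
docker build --build-arg FOO -t myimage .
Note that ARG values are only visible during the build; the ENV line is what persists FOO into containers started from the image.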

You can get the value from your terminal and pass it like this...
$ export test="works"
$ docker run --name your_image_name -e TEST="${test}" -d -P your_image
$ docker exec -it your_image_name /bin/bash
$ env
...
TEST=works

Related

How to set environment variables in Windows Command Prompt so they’re passed in `docker run -e FOO -e …` like with bash export

Is there a Windows equivalent to bash's export? I want to set an environment variable so that it's available to subsequent commands. I don't want to setx a permanent env variable.
For example, this works as expected with Docker for Windows in a PowerShell prompt in Windows Terminal. The FOO environment variable's value is available in the container.
PS C:\Users\Owner> docker run -e FOO=bar centos env
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=05b90a09d7fd
FOO=bar
HOME=/root
But how do I set an env variable in Windows with the equivalent of bash export so that it’s available without setting the value directly on each docker run command?
You can see here that set does not pass the value of FOO to the container.
PS C:\Users\Owner> set FOO=bar
PS C:\Users\Owner> docker run -e FOO centos env
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=6642033b3753
HOME=/root
I know I can create an env file and pass that to docker run but I'm trying to avoid that.
PS: I asked this question here but didn't get an answer:
https://forums.docker.com/t/how-to-set-environment-variables-in-command-prompt-so-theyre-passed-in-docker-run-e-foo-e/106776/4
From this answer, you can set a local environment variable for your shell with PowerShell's $env:VARIABLE_NAME.
If I'm understanding your question correctly, this should work for you if your objective is just to grab the variable name/value from the current terminal session.
PS C:\> $env:FOO = 'BAR'
PS C:\> docker run -it -e FOO alpine:3.9 /bin/sh
/ # env
HOSTNAME=e1ef1d7393b2
SHLVL=1
HOME=/root
TERM=xterm
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
FOO=BAR
PWD=/
/ # exit
PS C:\>
An equivalent of unset would be one of the following (thanks to @mklement0 for the suggestion):
Remove-Item env:\FOO
# suggested by mklement0
$env:FOO = $null
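Either way, the variable is gone from the session; reading it afterwards yields nothing (hypothetical session):
PS C:\> Remove-Item env:\FOO
PS C:\> $env:FOO
PS C:\>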
Create a PowerShell script docker_run.ps1
And inside the script, you can define global variables and then use them in the docker run command written at the end of the script.
This way the command becomes easy to re-use, and easy to edit for any quick changes.
The file would look as below:
$global:ENV1="VALUE1"
$global:ENV2="VALUE2"
docker run -e ENV1=$ENV1 -e ENV2=$ENV2 --name Container_Name Image_Name
And now just run this script from CMD as:
powershell -File docker_run.ps1

Can Docker environment variables be used as a dynamic entrypoint runtime arg?

I'm trying to parameterize my Dockerfile (running Node.js) so that my entrypoint command's args are customizable at docker run time; that way I can maintain one image artifact that can be deployed repeatedly with variations to some runtime args.
I've tried a few different ways, the most basic being
ENV CONFIG_FILE=default.config.js
ENTRYPOINT node ... --config ${CONFIG_FILE}
What I'm finding is that whatever value is set as the default remains in effect even if I use -e to pass in new values, such as
docker run -e CONFIG_FILE=desired.config.js
Another Dockerfile form I've tried is this:
ENTRYPOINT node ... --config ${CONFIG_FILE:-default.config.js}
Here I don't set the environment variable with an ENV directive, but use shell parameter expansion to supply a default value when the variable is unset or null. This gives me the same behavior, though.
Lastly, I tried creating a bash script containing the same entrypoint command, ADDing it to the Docker context and invoking it in my ENTRYPOINT. This also gives the same behavior.
Is what I'm attempting even possible?
EDIT:
Here is a minimal Dockerfile that reproduces this behavior for me:
FROM alpine
ENV CONFIG "no"
ENTRYPOINT echo "CONFIG=${CONFIG}"
Here is the build command:
docker build -f test.Dockerfile -t test .
Here is the run command, which echoes no despite the -e arg:
docker run -t test -e CONFIG=yes
Some additional details: I'm running macOS Sierra with Docker version 18.09.2, build 6247962.
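For the record, the repro above trips over argument ordering rather than over ENV itself (the same pitfall as in the next question): everything after the image name is handed to the container, so -e CONFIG=yes is never parsed by docker run. Moving it before the image name makes the override take effect:
docker run -e CONFIG=yes -t test
CONFIG=yes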

Docker Entrypoint environment variables not printed

I'm new to Docker. All I want is to print an environment variable I pass to docker run via the -e flag. My Dockerfile looks like this:
FROM openjdk:8-jdk-alpine
ENTRYPOINT echo $TEST
I build my image with docker build -t test-docker . and execute it with docker run test-docker -e TEST=bar. It just prints an empty line and exits.
This happens because you run the image with the parameters in the wrong order; it should be:
docker run --rm -e TEST=bar test-docker
Notice the env var is specified before the image name. Everything after the image name is treated as an argument to your container.
Always use --rm when experimenting, to prevent leftover containers from piling up.

Override ENV variable in base docker image

I have a base docker image, call it docker-image, with this Dockerfile:
FROM ubuntu
ENV USER default
CMD ["start-application"]
and a customized docker image based on docker-image:
FROM docker-image
ENV USER username
I want to overwrite the USER environment variable without changing the base image (before the application starts). Is that possible?
If you cannot build another image, as described in "Dockerfile Overriding ENV variable", you can at least modify the variable when starting the container with docker run -e.
See "ENV (environment variables)"
the operator can set any environment variable in the container by using one or more -e flags, even overriding those mentioned above, or already defined by the developer with a Dockerfile ENV
$ docker run -e "deep=purple" -e today --rm alpine env
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=d2219b854598
deep=purple <=============
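Applied to the images above, a sketch using the question's names (env as the command just makes the result visible by overriding CMD):
$ docker run --rm -e USER=someone docker-image env
...
USER=someone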

How to pass command line arguments to a Python script running in Docker

I have a Python file called perf_alarm_checker.py; it requires two command line arguments: python perf_alarm_checker.py -t something -d something. The Dockerfile looks like this:
# Base image
FROM some base image
ADD perf_alarm_checker.py /perf-test/
CMD python perf_alarm_checker.py
How do I pass the two command line arguments, -t and -d, to docker run? I tried docker run -w /perf-test alarm-checker -t something -d something, but it doesn't work.
Use an ENTRYPOINT instead of CMD, and then you can pass command line options to docker run as in your example:
ENTRYPOINT ["python", "perf_alarm_checker.py"]
You cannot use -t and -d as you intend, as those are options for docker run: -t allocates a pseudo-terminal, and -d runs the container detached, in the background.
For setting environment variables in your Dockerfile, use the ENV instruction:
ENV <key>=<value>
See the Dockerfile reference.
Another option is to pass environment variables through docker run:
docker run ... -e "key=value" ...
See the docker run reference.
Those environment variables can then be accessed from the CMD:
CMD python perf_alarm_checker.py -t $ENV1 -d $ENV2
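If you want actual command line arguments rather than environment variables, a common alternative pattern (a sketch, assuming a python:3-alpine base image) is an exec-form ENTRYPOINT plus default arguments in CMD; anything placed after the image name on docker run replaces the CMD defaults and is appended to the ENTRYPOINT:
FROM python:3-alpine
WORKDIR /perf-test
ADD perf_alarm_checker.py /perf-test/
ENTRYPOINT ["python", "perf_alarm_checker.py"]
CMD ["-t", "default-t", "-d", "default-d"]
Then both of these work:
docker run alarm-checker
docker run alarm-checker -t something -d something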
