Can Docker environment variables be used as a dynamic entrypoint runtime arg?

I'm trying to parameterize my Dockerfile (running Node.js) so that my entrypoint command's arguments can be customized at docker run time, letting me maintain one container artifact that can be deployed repeatedly with variations in some runtime args.
I've tried a few different ways, the most basic being
ENV CONFIG_FILE=default.config.js
ENTRYPOINT node ... --config ${CONFIG_FILE}
What I'm finding is that the default value remains in effect in my docker run command even when I use -e to pass in a new value, such as:
docker run -e CONFIG_FILE=desired.config.js
Another Dockerfile form I've tried is this:
ENTRYPOINT node ... --config ${CONFIG_FILE:-default.config.js}
Here I don't set the variable with an ENV directive, but use shell parameter expansion to fall back to a default when the variable is unset or null. This gives me the same behavior, though.
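As an aside, the ${VAR:-default} expansion itself behaves as expected in a plain shell; this snippet runs outside Docker and only demonstrates the expansion:

```shell
# With the variable unset, the fallback is used
unset CONFIG_FILE
echo "--config ${CONFIG_FILE:-default.config.js}"
# → --config default.config.js

# With the variable set, the supplied value wins
CONFIG_FILE=desired.config.js
echo "--config ${CONFIG_FILE:-default.config.js}"
# → --config desired.config.js
```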
Lastly, I tried creating a shell script containing the same entrypoint command, ADDing it to the Docker build context, and invoking it in my ENTRYPOINT. This also gives the same behavior.
Is what I'm attempting even possible?
EDIT:
Here is a minimal Dockerfile that reproduces this behavior for me:
FROM alpine
ENV CONFIG "no"
ENTRYPOINT echo "CONFIG=${CONFIG}"
Here is the build command:
docker build -f test.Dockerfile -t test .
Here is the run command, which echoes no despite the -e arg:
docker run -t test -e CONFIG=yes
Some additional details,
I'm running macOS Sierra with Docker version 18.09.2, build 6247962.

Related

Exported environment variables via docker entrypoint are not shown when logging into the container

Assume a simple Dockerfile
FROM php:8-fpm
ADD entrypoint.sh .
RUN chmod +x entrypoint.sh
ENTRYPOINT ["./entrypoint.sh"]
CMD ["php-fpm"]
In the entry-point script I just export a variable and print the environment
#!/bin/bash
set -e
export FOO=bar
env  # print the environment while the entrypoint is running
exec "$@"
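The exec "$@" pattern can be sketched without Docker: the entrypoint exports a variable and then replaces itself with whatever command it was given, so that command inherits the variable. (The script path below is illustrative.)

```shell
# Write a minimal stand-in for the entrypoint script
cat > /tmp/entrypoint.sh <<'EOF'
#!/bin/sh
set -e
export FOO=bar
exec "$@"
EOF
chmod +x /tmp/entrypoint.sh

# The exec'd command sees the exported variable
/tmp/entrypoint.sh sh -c 'echo "FOO=$FOO"'
# → FOO=bar
```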
Then I build the image as myimage and use it to deploy a stack in docker swarm mode
docker stack deploy -c docker-compose.yml teststack
The docker-compose.yml file used is the following:
version: "3"
services:
  app:
    image: myimage:latest
    environment:
      APP_ENV: production
Now the question: if I check the logs of the app service, I can see (because of the env command in the entrypoint) that the FOO variable is exported
docker service logs teststack_app
teststack_app.1.nbcqgnspn1te@soulatso | PWD=/var/www/html
teststack_app.1.nbcqgnspn1te@soulatso | FOO=bar
teststack_app.1.nbcqgnspn1te@soulatso | HOME=/root
However if I login in the running container and manually run env the FOO variable is not shown
docker container exec -it teststack_app.1.nbcqgnspn1tebirfatqiogmwp bash
root@df9c6d9c5f98:/var/www/html# env   # run env inside the container
PWD=/var/www/html
HOME=/root
# No FOO variable :(
What am I missing here?
A debugging shell you launch via docker exec isn't a child process of the main container process, and doesn't run the entrypoint itself, so it doesn't see the environment variables that are set there.
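A docker-free sketch of the same isolation: a variable exported in one process (standing in for the entrypoint) is invisible to a separately launched process (standing in for the docker exec shell), because the two are not parent and child.

```shell
unset FOO
# "Entrypoint" process: exports FOO and sees it
sh -c 'export FOO=bar; echo "entrypoint shell: FOO=$FOO"'
# → entrypoint shell: FOO=bar

# "docker exec" analog: a sibling process, started separately
sh -c 'echo "exec shell: FOO=${FOO:-unset}"'
# → exec shell: FOO=unset
```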
Depending on what you're trying to do, there are a couple of options to get around this.
If you're just trying to inspect what your image build produced, you can launch a debugging container instead. The command you pass here will override the CMD in the Dockerfile, and when your entrypoint script does something like exec "$#" to run the command it gets passed, it will run this command instead. This lets you inspect things in an environment just after your entrypoint's first-time setup has happened.
docker-compose run app env | grep FOO
docker-compose run app bash
Or, if the only thing your entrypoint script does is set environment variables, you can explicitly invoke it:
docker-compose exec app ./entrypoint.sh bash
It is important that your entrypoint script accept an ordinary command as parameters. If it is a shell script, it should use something like exec "$#" to launch the main container process. If your entrypoint ignores its parameters and launches a fixed command, or if you've set ENTRYPOINT to a language interpreter and CMD to a script name, these debugging techniques will not work.
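The caveat can be seen without Docker: an "entrypoint" that ignores its arguments defeats the command-override trick entirely (the script path and name below are illustrative).

```shell
# An entrypoint that runs a fixed command and discards "$@"
cat > /tmp/fixed-entrypoint.sh <<'EOF'
#!/bin/sh
echo "always the same command"
EOF
chmod +x /tmp/fixed-entrypoint.sh

# The 'env' argument is silently ignored
/tmp/fixed-entrypoint.sh env
# → always the same command
```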

Docker run uses host PATH when chaining commands

I have written an image that bundles utils to run commands using several CLIs. I want to run this as an executable as follows:
docker run my_image cli command
Where CLI is my custom CLI and command is a command to that CLI.
When I build my image I have the following instruction in the Dockerfile:
ENV PATH="/cli/scripts:${PATH}"
The above works if I do not chain commands to the container. If I chain commands it stops working:
docker run my_image cli command && cli anothercommand
Command 'cli' not found, but can be installed with...
Where the first command works and the other fails.
So the logical conclusion is that cli is missing from path. I tried to verify that with:
docker run my_image printenv PATH
This actually outputs the containers PATH, and everything looks alright. So I tried to chain this command too:
docker run my_image printenv PATH && printenv PATH
And sure enough, this outputs first the containers PATH and then the PATH of my system.
What is the reason for this? How do I work around it?
When you type a command into your shell, your local shell processes it first before any command gets run. It sees (reformatted)
docker run my_image cli command \
&& \
cli anothercommand
That is, your host's shell picks up the &&, so the host first runs docker run and then runs cli anothercommand (if the container exited successfully).
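A docker-free way to see the split, using env -i as a stand-in for the container's clean environment (DEMO_VAR is a made-up name for illustration):

```shell
export DEMO_VAR=host-value
# The host shell splits this line at '&&': the first echo runs in the
# emptied environment, the second runs back on the host.
env -i sh -c 'echo "inner: DEMO_VAR=${DEMO_VAR:-unset}"' && echo "outer: DEMO_VAR=${DEMO_VAR:-unset}"
# → inner: DEMO_VAR=unset
# → outer: DEMO_VAR=host-value
```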
You can tell the container to run a shell, and then the container shell will handle things like command chaining, redirections, and environment variables
docker run my_image sh -c 'cli command && cli anothercommand'
If this is more than occasional use, also consider writing this into a shell script
#!/bin/sh
set -e
cli command
cli anothercommand
COPY the script into your Docker image, and then you can docker run my_image cli_commands.sh or some such.

Override ENV variable in base docker image

I have a base docker image, call it docker-image with Dockerfile
FROM ubuntu
ENV USER default
CMD ["start-application"]
a customized docker image, based on docker-image
FROM docker-image
ENV USER username
I want to overwrite USER Environment Variable without changing the base-image, (before the application starts), is that possible?
If you cannot build another image, as described in "Dockerfile Overriding ENV variable", you can at least modify the variable when starting the container with docker run -e.
See "ENV (environment variables)"
the operator can set any environment variable in the container by using one or more -e flags, even overriding those mentioned above, or already defined by the developer with a Dockerfile ENV
$ docker run -e "deep=purple" -e today --rm alpine env
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=d2219b854598
deep=purple <=============
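Applied to the names from the question, the same runtime override can also be written in a compose file; this is a sketch, not run here:

```yaml
services:
  app:
    image: docker-image
    environment:
      USER: username   # overrides ENV USER from the base image at runtime
```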

Dockerfile: Inherit environmental variables from shell

When building an image using a Dockerfile, in order to make some env vars available to the docker build context, one should explicitly declare associations of the form
ENV MYVAR=MYVALUE
AFAIK (correct me if I am misguided here), the environment variables exported in the shell from which the docker build command is executed are not passed to the Docker build context, i.e. if in my shell I have previously
export FOO=BAR
having the following declaration in my Dockerfile
ENV FOO=$FOO
and then echoing (still from within the Dockerfile) $FOO will print an empty string.
So if all of the above is correct, my question is whether there is a way for the docker build context to inherit the environment of the shell it is called from.
You could define default values with ARG:
ARG build_var=default_value
ENV ENV_VAR=$build_var
and then override at build time:
docker build --build-arg build_var=$HOST_VAR .
You can get the value from your terminal and pass it like this...
$ export test="works"
$ docker run --name your_image_name -e TEST="${test}" -d -P your_image
$ docker exec -it your_image_name /bin/bash
$ env
...
TEST=works

how to pass command line arguments to a python script running in docker

I have a Python file called perf_alarm_checker.py that requires two command-line arguments: python perf_alarm_checker.py -t something -d something. The Dockerfile looks like this:
# Base image
FROM some base image
ADD perf_alarm_checker.py /perf-test/
CMD python perf_alarm_checker.py
How do I pass the two command-line arguments, -t and -d, to docker run? I tried docker run -w /perf-test alarm-checker -t something -d something, but it doesn't work.
Use an ENTRYPOINT instead of CMD, and then you can pass command-line arguments to docker run as in your example:
ENTRYPOINT ["python", "perf_alarm_checker.py"]
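A fuller sketch of that Dockerfile (the base image and paths are assumptions, and this is not run here):

```dockerfile
FROM python:3
WORKDIR /perf-test
ADD perf_alarm_checker.py .
# Exec-form ENTRYPOINT: anything after the image name in 'docker run'
# is appended to this command as arguments
ENTRYPOINT ["python", "perf_alarm_checker.py"]
```

With this, docker run alarm-checker -t something -d something appends -t something -d something to the entrypoint command.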
You cannot use -t and -d as you intend, as those are options for docker run itself:
-t allocates a pseudo-terminal.
-d runs the container detached, in the background.
For setting environment variables in your Dockerfile, use the ENV instruction:
ENV <key>=<value>
See the Dockerfile reference.
Another option is to pass environment variables through docker run:
docker run ... -e "key=value" ...
See the docker run reference.
Those environment variables can be accessed from the CMD.
CMD python perf_alarm_checker.py -t $ENV1 -d $ENV2
