Docker Entrypoint environment variables not printed

I'm new to Docker. All I want is to print an environment variable I pass to docker run via the -e flag. My Dockerfile looks like this:
FROM openjdk:8-jdk-alpine
ENTRYPOINT echo $TEST
I build my image with docker build -t test-docker . and execute it with docker run test-docker -e TEST=bar. It just prints an empty line and exits.

This happens because you're running the image with the parameters in the wrong order. It should be:
docker run --rm -e TEST=bar test-docker
Note that the environment variable is specified before the image name: everything after the image name is passed to the container as arguments. Always use --rm when experimenting so that stopped containers don't pile up.
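The rule can be illustrated without Docker. This toy parser is an assumption for illustration only, not docker's real code; it mimics how docker run splits its command line at the image name:

```shell
#!/bin/sh
# Toy sketch of docker run's rule: flags before the image name are options
# for docker run itself; the image name ends option parsing, and everything
# after it becomes the container's arguments.
parse() {
  while [ "$#" -gt 0 ]; do
    case "$1" in
      -e)   echo "run option: -e $2"; shift 2 ;;
      --rm) echo "run option: --rm"; shift ;;
      *)    echo "image: $1"; shift
            echo "container args: $*"
            return ;;
    esac
  done
}

parse --rm -e TEST=bar test-docker   # -e is parsed as a docker run option
parse test-docker -e TEST=bar        # -e is handed to the container instead
```

In the second call the container, not docker run, receives -e TEST=bar, which is exactly why the echo printed an empty line.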

Related

How to `docker cp` ssh key to docker container before its entrypoint is executed

Say I have this right now:
docker run -v /root/.ssh:/root/.ssh:ro my_image
and the ENTRYPOINT for the above image is:
ENTRYPOINT ["echo", "foo"]
instead I want to do something like this:
docker run -d --name c my_image # problem: this will likely exit early :(
docker cp /root/.ssh c:/root/.ssh
docker exec c echo foo
the problem is: how do I keep the container alive so that it waits for me to copy the ssh key into it and then run the echo foo command?
Maybe I can keep it alive by telling it to wait for stdin? But how exactly?
You first need to create the container (docker create prints the new container's ID; --name gives it a memorable one):
docker create --name MY_CREATED_CON my_image
Then copy the files:
docker cp /root/.ssh MY_CREATED_CON:/root/.ssh
Finally, start the container normally:
docker start MY_CREATED_CON

Can Docker environment variables be used as a dynamic entrypoint runtime arg?

I'm trying to parameterize my Dockerfile running Node.js so that my entrypoint command's arguments can be customized at docker run time, letting me maintain one image artifact that can be deployed repeatedly with variations to some runtime args.
I've tried a few different ways, the most basic being
ENV CONFIG_FILE=default.config.js
ENTRYPOINT node ... --config ${CONFIG_FILE}
What I'm finding is that whatever value is defaulted remains in my docker run command even if I'm using -e to pass in new values. Such as
docker run -e CONFIG_FILE=desired.config.js
Another Dockerfile form I've tried is this:
ENTRYPOINT node ... --config ${CONFIG_FILE:-default.config.js}
This doesn't declare the environment variable with an ENV directive; instead it uses shell parameter expansion to fall back to a default value when the variable is unset or null. It gives me the same behavior, though.
Lastly, I tried creating a bash script containing the same entrypoint command, ADDing it to the Docker context, and invoking it in my ENTRYPOINT. This also gives the same behavior.
Is what I'm attempting even possible?
EDIT:
Here is a minimal dockerfile that reproduces this behavior for me:
FROM alpine
ENV CONFIG "no"
ENTRYPOINT echo "CONFIG=${CONFIG}"
Here is the build command:
docker build -f test.Dockerfile -t test .
Here is the run command, which echoes no despite the -e arg:
docker run -t test -e CONFIG=yes
Some additional details,
I'm running macOS Sierra with Docker version 18.09.2, build 6247962.
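For reference, the ${CONFIG_FILE:-default.config.js} fallback used above is ordinary POSIX parameter expansion, and its behavior can be checked in plain sh outside Docker:

```shell
#!/bin/sh
# ${VAR:-default} substitutes the default only when VAR is unset or null.
unset CONFIG_FILE
echo "unset: ${CONFIG_FILE:-default.config.js}"   # falls back to the default

CONFIG_FILE=desired.config.js
echo "set:   ${CONFIG_FILE:-default.config.js}"   # uses the variable's value
```

So if the default value "sticks" inside the container, the variable is not reaching the entrypoint's shell at all.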

Dockerfile: Inherit environmental variables from shell

When building an image using a Dockerfile, in order to make some env vars available to the docker build context, one should explicitly declare associations of the form
ENV MYVAR=MYVALUE
AFAIK (correct me if I am misguided here), environment variables exported in the shell from which the docker build command is executed are not passed to the Docker build context, i.e. if in my shell I have previously run
export FOO=BAR
having the following declaration in my Dockerfile
ENV FOO=$FOO
and then echoing (still from within the Dockerfile) $FOO will print an empty string.
So if all of the above is correct, my question is if there is a way for the docker build context to inherit the environment of the shell being called from.
You could define default values with ARG:
ARG build_var=default_value
ENV ENV_VAR=$build_var
and then override at build time:
docker build --build-arg build_var=$HOST_VAR .
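Put together, a minimal sketch might look like this (the tag argtest and the variable names are illustrative, not from the question):

```dockerfile
FROM alpine
ARG build_var=default_value
ENV ENV_VAR=$build_var
CMD echo "$ENV_VAR"
```

Built with docker build --build-arg build_var="$HOST_VAR" -t argtest . and run with docker run --rm argtest, the container echoes the host's value; without --build-arg it echoes default_value.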
You can get the value from your terminal and pass it like this...
$ export test="works"
$ docker run --name your_image_name -e TEST="${test}" -d -P your_image
$ docker exec -it your_image_name /bin/bash
$ env
...
TEST=works

Arguments for overridden docker entrypoint

The entrypoint of a Docker image can be overridden at run time using --entrypoint in the docker run command. I want to start a script in my image with some arguments at startup. I can get Docker to run the script at startup like this:
docker run -it --rm --entrypoint /my/script/path.sh my-docker-image
How do I pass arguments to my script?
Note that I cannot modify the original dockerfile with which this image was created. Neither do I want to create another docker image with this image as its base.
When your Docker image has an ENTRYPOINT, either via a Dockerfile or provided on the command line with --entrypoint, any arguments on the docker run command line after the image name are passed to the entrypoint script.
So for example, if I have a script like this in myscript.sh:
#!/bin/sh
echo "Here are my arguments: $@"
And I run an image like this:
$ chmod 755 myscript.sh
$ docker run -it --rm -v $PWD/myscript.sh:/myscript.sh \
--entrypoint /myscript.sh alpine one two three
I will see the output:
Here are my arguments: one two three
...and the container will exit, because the entrypoint script didn't arrange to do anything else. You could replace alpine here (which is a minimal docker image) with any other Docker image that has /bin/sh (so, most of them). For example:
$ docker run -it --rm -v $PWD/myscript.sh:/myscript.sh \
--entrypoint /myscript.sh centos one two three
Here are my arguments: one two three
Note that I'm using the -v argument in this example to mount a script on my host into the container, since I didn't want to create a new image for the purposes of this example. You could obviously bake a similar script into your image instead.
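The forwarding itself can be verified without Docker; in this sketch, set -- stands in for the arguments that docker run appends after the image name:

```shell
#!/bin/sh
# Simulate the three arguments docker run would pass to the entrypoint.
set -- one two three
echo "Here are my arguments: $@"
```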
For details, read the ENTRYPOINT docs.

How to pass command line arguments to a Python script running in Docker

I have a Python file called perf_alarm_checker.py that requires two command line arguments: python perf_alarm_checker.py -t something -d something. The Dockerfile looks like this:
# Base image
FROM some base image
ADD perf_alarm_checker.py /perf-test/
CMD python perf_alarm_checker.py
How do I pass the two command line arguments, -t and -d, to docker run? I tried docker run -w /perf-test alarm-checker -t something -d something, but it doesn't work.
Use an ENTRYPOINT instead of CMD; then you can pass command line arguments to docker run as in your example.
ENTRYPOINT ["python", "perf_alarm_checker.py"]
You cannot use -t and -d as you intend, as those are options for docker run:
-t allocates a pseudo-TTY.
-d runs the container detached, in the background.
For setting environment variables in your Dockerfile use the ENV command.
ENV <key>=<value>
See the Dockerfile reference.
Another option is to pass environment variables through docker run:
docker run ... -e "key=value" ...
See the docker run reference.
Those environment variables can then be referenced from a shell-form CMD.
CMD python perf_alarm_checker.py -t $ENV1 -d $ENV2
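A related pattern worth knowing, sketched here as an assumption (the python:3-alpine base image and the default values are illustrative), combines an exec-form ENTRYPOINT with a CMD holding default arguments; anything given after the image name on docker run replaces the CMD wholesale:

```dockerfile
FROM python:3-alpine
ADD perf_alarm_checker.py /perf-test/
WORKDIR /perf-test
ENTRYPOINT ["python", "perf_alarm_checker.py"]
# Default arguments; overridden by anything after the image name.
CMD ["-t", "default-t", "-d", "default-d"]
```

With this, docker run alarm-checker uses the defaults, while docker run alarm-checker -t something -d something overrides them.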