Pass ENV in docker run command - docker

Is there a way to pass a variable into a container? Let's say in this example I want to pass a list of animals into an entrypoint.sh file using ENV animals="turtle, monkey, goose",
but I want to be able to pass different animals when running the container, for example docker run -t image animals="mouse,rat,kangaroo".
How do you go about passing arguments when running the docker run command?
The goal is to take that variable from the docker run command and insert it into that entrypoint.sh file.
Right now I hard-code it in my Dockerfile, but I want to be able to do this when running the docker run command so I don't always have to change the Dockerfile.
FROM anapsix/alpine-java:8u121b13_jdk
ENV FILE_NAME="file_to_run.zip"
ENV animals="turtle, monkey, goose"
ADD ${FILE_NAME} .
RUN echo "${FILE_NAME} ${animals}" > ./entrypoint.sh
CMD [ "/bin/ash", "./entrypoint.sh" ]

It looks like you might be confusing the image build with the container run. If the difference between the two isn't immediately clear, I'd recommend reviewing some other questions and docs like:
In Docker, what's the difference between a container and an image?
https://docs.docker.com/develop/develop-images/dockerfile_best-practices/
RUN echo "${FILE_NAME} ${animals}" > ./entrypoint.sh
With the above, the variables are expanded during the image build. The resulting entrypoint.sh will not contain ${FILE_NAME} ${animals}; instead, it will contain
file_to_run.zip turtle, monkey, goose
After the build, docker run creates a container from that image and runs the above script with the environment variables defined but never used, since the script already has the values baked in. To defer the expansion to run time, escape the $ or use single quotes, e.g.
RUN echo "\${FILE_NAME} \${animals}" > ./entrypoint.sh
or
RUN echo '${FILE_NAME} ${animals}' > ./entrypoint.sh
I would also recommend being explicit and putting a #!/bin/ash shebang at the top of this script. Then, when you run the container, do not override the command with parameters after the image name; instead, set the environment variables with the -e flag:
docker run -it -e animals="mouse,rat,kangaroo" image
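Putting that together, a minimal sketch of how the question's Dockerfile could look with expansion deferred to run time (the echo body is just a stand-in for whatever entrypoint.sh really needs to do):
FROM anapsix/alpine-java:8u121b13_jdk
ENV FILE_NAME="file_to_run.zip"
ENV animals="turtle, monkey, goose"
ADD ${FILE_NAME} .
# single quotes keep ${...} unexpanded at build time, so the script reads them at run time
RUN echo '#!/bin/ash' > ./entrypoint.sh && \
    echo 'echo "${FILE_NAME} ${animals}"' >> ./entrypoint.sh
CMD [ "/bin/ash", "./entrypoint.sh" ]
Running docker run -it -e animals="mouse,rat,kangaroo" image would then print file_to_run.zip mouse,rat,kangaroo, while the ENV defaults still apply when nothing is overridden.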

Simplest way, forward individual variables:
docker run ... --env animals="turtle, monkey, goose" --env FILE_NAME="file_to_run.zip"
Forward several variables using file:
Or if you need to grab all your environment variables from outside, you can do something like this first:
printenv | grep -E 'animals|FILE_NAME' > my-env
The grep is because Docker doesn't like some variables, e.g. with spaces in them, which you might possibly have in your real environment.
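With the example variables above, my-env would end up containing plain KEY=value lines, one per line and unquoted, which is the format --env-file expects:
animals=turtle, monkey, goose
FILE_NAME=file_to_run.zip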
Then use that file in your Docker command:
docker run ... --env-file ./my-env
The latter is also useful if you want to avoid sending environment variables to logs (like for sensitive variables). I use this approach in a CI/CD pipeline that runs some scripts.
Using variables inside Docker:
With either approach, the environment variables become available to scripts running inside the container.
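For example, a script inside the container (purely illustrative) can read them like any other environment variable:
#!/bin/sh
# animals and FILE_NAME arrive via -e or --env-file at docker run time
echo "processing ${FILE_NAME} with animals: ${animals}"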
@BMitch's answer has more complete details about how to achieve this in your case, where you have related logic in both build and execution.
Reference
See docs here.

Related

Delete environmental variable from docker image

I have looked around online and tried the obvious route (explained below) to remove an environmental variable from a docker image.
1 - I create a container from a modified ubuntu image using:
docker run -it --name my_container my_image
2 - I inspect the image and see the two environmental variables that I want to remove using:
docker inspect my_container
which yields:
...
"Env": [
"env_variable_1=abcdef",
"env_variable_2=ghijkl",
"env_variable_3=mnopqr",
...
3 - I exec into the container and remove the environmental variables via:
docker exec -it my_container bash
unset env_variable_1
unset env_variable_2
4 - I check to make sure the specified variables are gone:
docker inspect my_container
which yields:
...
"Env": [
"env_variable_3=mnopqr",
...
5 - I then commit this modified container as an image via:
docker commit my_container my_new_image
6 - And check for the presence of the deleted environmental variables via:
docker run -it --name my_new_container my_new_image
docker inspect my_new_container
which yields (drumroll please):
...
"Env": [
"env_variable_1=abcdef",
"env_variable_2=ghijkl",
"env_variable_3=mnopqr",
...
In other words, the deleted variables are not carried through from the modified container to the new image by docker commit.
What am I missing out on here? Is unset really deleting the variables? Should I use another method to remove these environmental variables or another/modified method to commit the container as an image?
PS: I've confirmed the variables first exist when inside the container via env. I then confirmed they were not active using the same method after using unset my_variable
Thanks for your help!
You need to edit the Dockerfile that built the original image. The Dockerfile ENV directive has a couple of different syntaxes to set variables but none to unset them. docker run -e and the Docker Compose environment: setting can't do this either. This is not an especially common use case.
Depending on what you need, it may be enough to set the variables to an empty value, though this is technically different.
FROM my_image
ENV env_variable_1=""
RUN test -z "$env_variable_1" && echo variable 1 is empty
RUN echo variable 1 is ${env_variable_1:-empty}
RUN echo variable 1 is ${env_variable_1-unset}
# on first build will print out "empty", "empty", and nothing
The big hammer is to use an entrypoint script to unset the variable. The script would look like:
#!/bin/sh
unset env_variable_1 env_variable_2
exec "$@"
It would be paired with a Dockerfile like:
FROM my_image
COPY entrypoint.sh /
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["same", "as", "before"]
docker inspect would still show the variable as set (because it is in the container metadata) but something like ps e that shows the container process's actual environment will show it unset.
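For example, assuming you build the Dockerfile above as my_unset_image (hypothetical tag), you can see the difference between the image metadata and the actual process environment:
docker inspect --format '{{.Config.Env}}' my_unset_image      # still lists env_variable_1
docker run --rm my_unset_image sh -c 'echo "${env_variable_1:-unset}"'   # prints "unset"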
As a general rule you should always use the docker build system to create an image, and never use docker commit. ("A modified Ubuntu image" isn't actually a reproducible recipe for debugging things or asking for help, or for rebuilding it when a critical security patch appears in six months.) docker inspect isn't intrinsically harmful but has an awful lot of useless information; I rarely have reason to use it.
Maybe you can try it this way, as in this answer:
docker exec -it -e env_variable_1 my_container bash
And then commit the container as usual.
I personally was looking to remove all environment variables to get a fresh image, but without losing the contents inside the image.
The problem was that when I reused this image and set those environment variables to new values, they were not changed; the old values were still present.
My solution was to reinitialize the image with docker export and then docker import.
Export
First, spin up a container with the image, then export the container to a tarball
docker export {container_name} > my_image.tar
Import
Import the tarball to a new image
docker import my_image.tar my_image_tag:latest
Doing this will reset the image, meaning only the contents of the container will remain.
All layers, environment variables, entrypoint, and command data will be gone.
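A sketch of that flow (container name, tarball, and tag are placeholders); since the import drops ENTRYPOINT/CMD, you can re-apply whatever you still need with --change:
docker create --name temp_container my_image
docker export temp_container > my_image.tar
docker import --change 'CMD ["/bin/sh"]' my_image.tar my_image_tag:latest
docker rm temp_container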

Docker set ENV based on if-else

I have a situation where I need to set an ENV based on a runtime condition, like this:
RUN if [ "$RUNTIME" = "prod" ]; then VARIABLE="Some Production URL"; else VARIABLE="Some QA URL"; fi
ENV={VARIABLE}
I've been looking at different solutions, but none of them seem to be panning out (for example the basic one, where VARIABLE is lost when RUN exits). What would be an elegant way to achieve this?
It is an unfortunate constraint that you only have this "dev/qa/prod" environment variable. However, it is possible to achieve what you want.
First, you might consider baking your environment-specific configuration into the image for all environments. (Normally I would discourage doing this!)
For example you can COPY three files into your image:
dev-env.sh: contains your dev config in the form:
ELASTICSEARCH_URL=http://elastic-dev:123
qa-env.sh (similar)
prod-env.sh (similar)
Then you evaluate at run time (not at build time) which environment you are in: you add an ENTRYPOINT script to your image which sources the correct file, depending on the ENVIRONMENT_NAME variable.
Dockerfile (part):
ENTRYPOINT ["./docker-entrypoint.sh"]
docker-entrypoint.sh (copied into WORKDIR of the image):
#!/bin/bash
set -e
if [ "$ENVIRONMENT_NAME" = "prod" ]; then
  source prod-env.sh
fi
# else if qa ..., else if dev ..., else fail
exec "$@"
This script will run when you launch the docker container, so this approach is not an option if you need the variables to be available in Dockerfile instructions (at build time).
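A slightly fuller sketch of that branching, covering all three files from above and failing on anything unexpected:
#!/bin/bash
set -e
case "$ENVIRONMENT_NAME" in
  prod) source prod-env.sh ;;
  qa)   source qa-env.sh ;;
  dev)  source dev-env.sh ;;
  *)    echo "Unknown ENVIRONMENT_NAME: '$ENVIRONMENT_NAME'" >&2; exit 1 ;;
esac
exec "$@"
You then pick the environment at run time, e.g. docker run -e ENVIRONMENT_NAME=qa your-image.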
Another (build-time) workaround is described here and consists of using temporary files to store environment variables across multiple image layers.
The literal conditional execution can be achieved with multistage build and ONBUILD.
ARG mode=prod
FROM alpine as img_prod
ONBUILD ENV ELASTICSEARCH_URL=whatever/for/prod
FROM alpine as img_qa
ONBUILD ENV ELASTICSEARCH_URL=whatever/for/qa
FROM img_${mode}
...
Then you build with docker build --build-arg mode=qa .
Wouldn't passing env var with docker run be the solution you need? Something like this:
docker run -e YOUR_VARIABLE="Some Production URL" ...

How to provide user defined argument and value in docker run?

I want to achieve something like
docker run --delay=
I can provide the value for delay using ENTRYPOINT and CMD without passing an argument to docker run, but I could not find a way to do it from docker run.
In short, I want to know how to pass a user-defined argument and value to the docker run command, or do it via the Dockerfile.
You can achieve it using environment variables. There are two ways to set them.
In the Dockerfile -> you can set one as follows. Detailed explanation at https://docs.docker.com/engine/reference/builder/#env
ENV <key>=<value>
In the docker run command -> you can set one using the -e flag. Detailed explanation at https://docs.docker.com/engine/reference/commandline/run/#set-environment-variables--e-env-env-file
docker run -e <key>=<value> <image_name>
There are multiple ways to do that, but I would recommend going with environment variables. Just define the variable when running docker run and use it in your ENTRYPOINT script.
docker run -e DELAY=30 IMAGE [COMMAND] [ARG...]
Afterward use it in your ENTRYPOINT script as:
#!/bin/bash
# Play with $DELAY
echo $DELAY
# Start the root process
exec root_process_command
I hope it helps!
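If you specifically want the docker run ... --delay=30 style from the question, remember that anything placed after the image name is passed to the ENTRYPOINT, so the entrypoint script can parse it itself. A sketch (the flag name, default, and final command are assumptions):
#!/bin/sh
DELAY=0   # default when no --delay= is given
for arg in "$@"; do
  case "$arg" in
    --delay=*) DELAY="${arg#--delay=}" ;;
  esac
done
echo "Sleeping for ${DELAY}s before starting"
sleep "$DELAY"
exec root_process_command   # placeholder for the real root process
With ENTRYPOINT ["/entrypoint.sh"] in the Dockerfile, you would run it as docker run IMAGE --delay=30.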

Conditionally set ENV var based on hostname in Dockerfile

How can I set an ENV var in my Dockerfile based on the hostname? I tried this:
RUN if [ hostname = "foo" ]; then ENV BAR "BAZ"; else ENV BAR "BIFF"; fi
But that failed with
ENV: not found
RUN if [ hostname = "foo" ]; then ENV BAR "BAZ"; else ENV BAR "BIFF"; fi
You can't nest docker build instructions: everything after the RUN instruction gets executed inside the container during the build, where docker build instructions don't exist. That explains the error you are seeing.
Even if you translated that to proper shell code, BAR would only be active for that single RUN instruction during the build.
Either orchestrate on the host and pass BAR via run -e to your container or add a startup script to the image that sets BAR as needed on container start:
FROM foo
COPY my-start.sh /
CMD ["/my-start.sh"]
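A sketch of what my-start.sh could look like (the script must be executable, and the main process at the end is a placeholder):
#!/bin/sh
if [ "$(hostname)" = "foo" ]; then
  export BAR="BAZ"
else
  export BAR="BIFF"
fi
exec my-main-process   # placeholder for the container's real command
Keep in mind that inside a container the hostname defaults to the container ID unless you pass --hostname to docker run.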
First of all, you can't embed Docker build instructions into the shell of RUN: the shell runs inside the intermediate container during the build process, while Docker build instructions are handled by the build engine; they are different things. Besides, Docker does not support conditional instructions like IF. Docker is about immutable infrastructure: the Dockerfile is the definition of your image, and it's supposed to be able to generate the same image no matter what build context it is in. From the delivery point of view, the image is your deliverable build artifact; if you want to deliver different things, use different Dockerfiles to build different images. Otherwise, if the difference is about the runtime, you could consider postponing the env definition to runtime with the -e option of docker run.
The reason why your build is failing has been explained by @shizhz and @Erik Dannenberk.
However, if you really do need that behavior, I suggest you make a little script to do it:
export BAR=`[[ "$(hostname)" = "foo" ]] && echo "BAZ" || echo "BIFF"`
docker build -t hello/hi - <<EOF
FROM alpine
ENV BAR $BAR
CMD echo $BAR
EOF

Parse a variable with the result of a command in Dockerfile

I need to fill a variable in a Dockerfile with the result of a command,
like in bash: var=$(date)
EDIT 1
date is just an example.
In my case I use FROM phusion/baseimage:0.9.17, and I want each build to use the latest version, so I run:
curl -v --silent api.github.com/repos/phusion/baseimage-docker/tags 2>&1 | grep -oh 'rel-.*",' | head -1 | sed 's/",//' | sed 's/rel-//' ==> 0.9.17.
but I don't know how to put that result into a variable in the Dockerfile, to end up with something like:
ENV verbaseimage=curl...
FROM phusion/baseimage:$verbaseimage
RESULT
In my use case
FROM phusion/baseimage:latest
But the question remains unresolved for other cases.
I had the same issue and found a way to set an environment variable as the result of a command by using a RUN instruction in the Dockerfile.
For example, I need to set SECRET_KEY_BASE for a Rails app just once, rather than having it change every time I run:
docker run -e SECRET_KEY_BASE="$(openssl rand -hex 64)"
Instead, I write a line like this in the Dockerfile:
RUN bash -l -c 'echo export SECRET_KEY_BASE="$(openssl rand -hex 64)" >> /etc/bash.bashrc'
and my env variable is available from root, even after bash login.
Or maybe:
RUN /bin/bash -l -c 'echo export SECRET_KEY_BASE="$(openssl rand -hex 64)" > /etc/profile.d/docker_init.sh'
Then the variable is available in CMD and ENTRYPOINT commands.
Docker caches it as a layer and changes it only if you change the lines before it.
You can also try different ways to set an environment variable.
The old workaround is mentioned here (issue 2637: Feature request: expand Dockerfile ENV $VARIABLES in WORKDIR):
One workaround that I've used is to have a file in my context called "build-env". What I do is source it and run my desired command in the same RUN step. So for example:
build-env:
VERSION=stable
Dockerfile:
FROM radial/axle-base:latest
ADD build-env /build-env
RUN source build-env && mkdir /$VERSION
RUN ls /
But for date, that might not be as precise as you want.
Other workarounds are in issue 2022 "Dockerfile with variable interpolation".
In docker 1.9 (end of October 2015), you will have "support for build-time environment variables to the 'build' API (PR 9176)" and "Support for passing build-time variables in build context (PR 15182)".
docker build --build-arg=[]: Set build-time variables
You can use ENV instructions in a Dockerfile to define variable values. These values persist in the built image. However, often persistence is not what you want. Users want to specify variables differently depending on which host they build an image on.
A good example is http_proxy or source versions for pulling intermediate files. The ARG instruction lets Dockerfile authors define values that users can set at build-time using the --build-arg flag:
$ docker build --build-arg HTTP_PROXY=http://10.20.30.2:1234 .
This flag allows you to pass the build-time variables that are accessed like regular environment variables in the RUN instruction of the Dockerfile.
Also, these values don't persist in the intermediate or final images like ENV values do.
so I want each build to use the latest version, so I use this
curl -v --silent api.github.com/repos/phusion/baseimage-docker/tags 2>&1 | grep -oh 'rel-.*",' | head -1 | sed 's/",//' | sed 's/rel-//' ==> 0.9.17.
If you want to use the last version of that image, all you need to do is use the tag 'latest' with the FROM directive:
FROM phusion/baseimage:latest
See also "The misunderstood Docker tag: latest": it doesn't always reference the actual latest build, but in this instance, it should work.
If you really want to use the curl|parse option, use it to generate a Dockerfile with the right value (as in a template processed to generate the right file).
Don't try to use it directly in the Dockerfile.
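On newer Docker (17.05+ allows ARG before FROM), you can also keep the computation on the host and feed only the resulting tag in with --build-arg, reusing the question's pipeline as-is. A sketch:
Dockerfile:
ARG BASEIMAGE_VERSION=latest
FROM phusion/baseimage:${BASEIMAGE_VERSION}
Then build with:
docker build --build-arg BASEIMAGE_VERSION="$(curl -v --silent api.github.com/repos/phusion/baseimage-docker/tags 2>&1 | grep -oh 'rel-.*",' | head -1 | sed 's/",//' | sed 's/rel-//')" .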
I wanted to set an ENV or LABEL variable from a computation in the Dockerfile, e.g. to make some computed installation options visible in docker inspect.
There does not seem to be any way to do that, and this issue suggests that it's a security design choice.
A Dockerfile can set an ENV variable to $X, ${X:-default}, or ${X:+substitute} where that $X must be another ENV or ARG variable.
A single RUN command can set and use shell variables, but that goes away at the end of the RUN command when that container layer shuts down.
A RUN command can write computed data into files, but the Dockerfile still can't get that data into an ENV or LABEL even if the file is ~/.bashrc. (File contents can, of course, be used by code running in the Container.)
The build can at least RUN echo $X to record choices to the build log -- unless that step comes from the build cache, in which case the RUN step doesn't run.
Please do correct me if there's a way out.
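As a small sketch of that file-based route (image tag and path are made up): a RUN step records the computed value in a file, which anything in the container can read back, even though it never becomes an ENV or LABEL:
FROM alpine
RUN date -u > /build-date
docker build -t computed-demo .
docker run --rm computed-demo cat /build-date   # prints the time that layer was built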
Partially connected to the question: if one wants to use the result of some command later on, it is possible within a single RUN statement, as follows:
RUN CUR_DIR=`pwd` && \
    echo $CUR_DIR
