How do I break a long ENV declaration in Dockerfile? - docker

I have a Dockerfile with an ENV declaration for a set of paths to be searched that has, over time, become somewhat comically long:
ENV SPECIAL_PATHS=/foo/bar:/yada/dada:{... ~20 more .. }:/the/end
I cannot see from the documentation what the idiomatic way to break it up is. I could, of course, define pieces in multiple ENV lines and then combine them, but I'd rather not add yet more layers.

You can use \ to break it up over multiple lines.
FROM alpine:3.8
ENV SPECIAL_PATHS \
/foo/bar:\
/yada/yada:\
/the/end
Here's the environment in a container run from the resulting image.
$ docker container run --rm env-test env
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=2fae9abd1eea
SPECIAL_PATHS=/foo/bar:/yada/yada:/the/end
HOME=/root

I would use the backslash character (\) to escape the new line.

Related

Docker is adding single quotes to ENTRYPOINT argument

I am creating a Dockerfile that needs to source a script before a shell is run.
ENTRYPOINT ["/bin/bash", "-rcfile","<(echo '. ./mydir/scripttosource.sh')"]
However, the script isn't sourced as expected.
Combining these parameters on a command line (normal Linux instance, outside of any Docker container), it works properly, for example:
$ /bin/bash -rcfile <(echo '. ./mydir/scripttosource.sh')
So I took a look at what was actually used by the container when it was run.
$ docker ps --format "table {{.ID}} \t {{.Names}} \t {{.Command}}" --no-trunc
CONTAINER ID NAMES COMMAND
70a5f846787075bd9bd55432dc17366268c33c1ab06fb36b23a50f5c3aef19bb happy_cray "/bin/bash -rcfile '<(echo '. ./mydir/scripttosource.sh')'"
Besides the fact that it properly identified the emotional state of Cray computers, Docker seems to be sneaking in undesired single quotes into the third parameter to ENTRYPOINT.
'<(echo '. ./mydir/scripttosource.sh')'
Thus the command actually being executed is:
$ /bin/bash -rcfile '<(echo '. ./mydir/scripttosource.sh')'
Which doesn't work...
Now I realize there are more ways to skin this cat, and I could make this work a different way, but I am curious about the insertion of single quotes into the third argument to ENTRYPOINT. Is there a way to avoid this?
Thank you,
At a super low level, the Unix execve(2) function launches a process by taking a sequence of words, where the first word is the actual command to run and the remaining words are its arguments. When you run a command interactively, the shell breaks it into words, usually at spaces, and then calls an exec-type function to run it. The shell also does other processing like replacing $VARIABLE references or the bash-specific <(subprocess) construct; all of these are at layers above simply "run a process".
The Dockerfile ENTRYPOINT (and also CMD, and less frequently RUN) has two forms. You're using the JSON-array exec form. If you do this, you're telling Docker that you want to run the main container command with exactly these three literal strings as arguments. In particular the <(...) string is passed as a literal argument to bash --rcfile, and nothing actually executes it.
The obvious answer here is to use the string-syntax shell form instead:
ENTRYPOINT /bin/bash -rcfile <(echo '. ./mydir/scripttosource.sh')
Docker wraps this in an invocation of sh -c (or the Dockerfile SHELL). That causes a shell to preprocess the command string, break it into words, and execute it. Assuming the SHELL is bash and not a pure POSIX shell, this will handle the substitution.
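If the image's default /bin/sh is not bash (on many Debian- or Alpine-based images it is dash or BusyBox ash), one option, sketched here on the assumption that bash is installed in the image, is to switch the Dockerfile SHELL before the shell-form ENTRYPOINT:
SHELL ["/bin/bash", "-c"]
ENTRYPOINT /bin/bash -rcfile <(echo '. ./mydir/scripttosource.sh')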
However, there are some downsides to this, most notably that the sh -c invocation "eats" all of the arguments that might be passed in the CMD. If you want your main container process to be anything other than an interactive shell, this won't work.
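To make that concrete, a sketch (the CMD value here is made up):
# with a shell-form ENTRYPOINT like the one above, this CMD (and any arguments
# given after the image name on docker run) never reaches the process
CMD ["./my-server", "--port", "8080"]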
This brings you to the point of trying to find simpler alternatives to doing this. One specific observation is that the substitution here isn't doing anything; <(echo something) will always produce the fixed string something and you can do it without the substitution. If you can avoid the substitution then you don't need the shell either:
ENTRYPOINT ["/bin/bash", "--rcfile", "./mydir/scripttosource.sh"]
Another sensible approach here is to use an entrypoint wrapper script. This uses the ENTRYPOINT to run a shell script that does whatever initialization is needed, then exec "$@" to run the main container command. In particular, if you use the shell . command to set environment variables (equivalent to the bash-specific source) those will "stick" for the main container process.
#!/bin/sh
# entrypoint.sh
# read the file that sets variables
. ./mydir/scripttosource.sh
# run the main container command
exec "$#"
# Dockerfile
COPY entrypoint.sh ./ # may be part of some other COPY
ENTRYPOINT ["./entrypoint.sh"] # must be JSON-array syntax
CMD ???
This should have the same net effect. If you get a debugging shell with docker run --rm -it your-image bash, it will run under the entrypoint wrapper and see the environment variables. You can do other setup in the wrapper script if required. This particular setup also doesn't use any bash-specific options, and might run better under minimal Alpine-based images.
The insertion of single quotes can be avoided by using escape characters in the third argument to ENTRYPOINT.
ENTRYPOINT ["/bin/bash", "-rcfile","$(echo '. ./mydir/scripttosource.sh')"]

How to unset "ENV" in dockerfile?

For certain reasons, I have to set the "http_proxy" and "https_proxy" ENV in my Dockerfile. I would now like to unset them because there are also some parts of the build that can't be done through the proxy.
# dockerfile
# ... some process
ENV http_proxy=http://...
ENV https_proxy=http://...
# ... some process that needs the proxy to finish
UNSET ENV http_proxy # how do I unset the proxy ENV here?
UNSET ENV https_proxy
# ... some process that can't use the proxy
It depends on what effect you are trying to achieve.
Note that, as a matter of pragmatics (i.e. how developers actually speak), "unsetting a variable" can mean two things: removing it from the environment, or setting the variable to an empty value. Technically, these are two different operations. In practice though I have not run into a case where the software I'm trying to control differentiates between the variable being absent from the environment, and the variable being present in the environment but set to an empty value. I generally can use either method to get the same result.
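If you do need to tell the two situations apart, a minimal shell sketch (not Docker-specific) using the POSIX ${VAR+x} test looks like this; the variable name is just an example:
if [ -z "${http_proxy+x}" ]; then
  echo "http_proxy is absent from the environment"
elif [ -z "$http_proxy" ]; then
  echo "http_proxy is present but set to an empty value"
else
  echo "http_proxy is set to: $http_proxy"
fi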
If you don't care whether the variable is in the layers produced by Docker, but leaving it with a non-empty value causes problems in later build steps.
For this case, you can use ENV VAR_NAME= at the point in your Dockerfile from which you want to unset the variable. Syntactic note: Docker allows two syntaxes for ENV: this ENV VAR=1 is the same as ENV VAR 1. You can separate the variable name from the value with a space or an equal sign. When you want to "unset" a variable by setting it to an empty value you must use the equal sign syntax or you get an error at build time.
So for instance, you could do this:
ENV NOT_SENSITIVE some_value
RUN something
ENV NOT_SENSITIVE=
RUN something_else
When something runs, NOT_SENSITIVE is set to some_value. When something_else runs, NOT_SENSITIVE is set to the empty string.
It is important to note that doing unset NOT_SENSITIVE as a shell command will not affect anything other than what executes in that shell. Here's an example:
ENV NOT_SENSITIVE some_value
RUN unset NOT_SENSITIVE && printenv NOT_SENSITIVE || echo "does not exist"
RUN printenv NOT_SENSITIVE
The first RUN will print does not exist because NOT_SENSITIVE is unset when printenv executes and because it is unset printenv returns a non-zero exit code which causes the echo to execute. The second RUN is not affected by the unset in the first RUN. It will print some_value to the screen.
But what if I need to remove the variable from the environment, not just set it to an empty value?
In this case using ENV VAR_NAME= won't work. I don't know of any way to tell Docker "from this point on, you must remove this variable from the environment, not just set it to an empty value".
If you still want to use ENV to set your variable, then you'll have to start each RUN in which you want the variable to be unset with unset VAR_NAME, which will unset it for that specific RUN only.
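For example, a sketch using the proxy variables from the question (the URL being fetched is only illustrative):
ENV http_proxy=http://proxy.example.com:3128
# the unset applies only to the shell started by this single RUN
RUN unset http_proxy https_proxy \
    && curl -fsSL https://internal.example.com/artifact -o /tmp/artifact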
If you want to prevent the variable from being present in the layers produced by Docker.
Suppose that variable contains a secret and the layer could fall into the hands of people who should not have the secret. In this case you CANNOT use ENV to set the variable. A variable set with ENV is baked into the layers to which it applies and cannot be removed from those layers. In particular, (assuming the variable is named SENSITIVE) running
RUN unset SENSITIVE
does not do anything to remove it from the layer. The unset command above only removes SENSITIVE from the shell process that RUN starts. It affects only that shell. It won't affect shells spawned by CMD, ENTRYPOINT, or any command provided through running docker run at the command line.
In order to prevent the layers from containing the secret, I would use docker build --secret= and RUN --mount=type=secret.... For instance, assuming that I've stored my secret in a file named sensitive, I could have a RUN like this:
RUN --mount=type=secret,id=sensitive,target=/root/sensitive \
export SENSITIVE=$(cat /root/sensitive) \
&& [[ ... do stuff that requires SENSITIVE ... ]]
Note that the command given to RUN does not need to end with unset SENSITIVE. Due to the way processes and their environments are managed, setting SENSITIVE in the shell spawned by RUN does not have any effect beyond what that shell itself spawns. Environment changes in this shell won't affect future shells nor will it affect what Docker bakes into the layers it creates.
Then the build can be run with:
$ DOCKER_BUILDKIT=1 docker build --secret id=sensitive,src=path/to/sensitive [...]
The environment for the docker build command needs DOCKER_BUILDKIT=1 to use BuildKit because this method of passing secrets is only available if Docker uses BuildKit to build the images.
If one needs env vars during the image build but they should not persist, just clear them. In the following example, the running container shows empty env vars.
Dockerfile
# set proxy
ARG http_proxy
ARG https_proxy
ARG no_proxy
ENV http_proxy=$http_proxy
ENV https_proxy=$https_proxy
ENV no_proxy=$no_proxy
# ... do stuff that needs the proxy during the build, like apt-get, curl, et al.
# unset proxy
ENV http_proxy=
ENV https_proxy=
ENV no_proxy=
build.sh
docker build -t the-image \
--build-arg http_proxy="$http_proxy" \
--build-arg https_proxy="$https_proxy" \
--build-arg no_proxy="$no_proxy" \
--no-cache \
.
run.sh
docker run --rm -i \
the-image \
sh << COMMANDS
env
COMMANDS
Output
no_proxy=
https_proxy=
http_proxy=
...
According to the Docker docs you need to use a shell command instead:
FROM alpine
RUN export ADMIN_USER="mark" \
&& echo $ADMIN_USER > ./mark \
&& unset ADMIN_USER
CMD sh
See https://docs.docker.com/develop/develop-images/dockerfile_best-practices/#env for more details.
Short-answer:
Try to avoid unnecessary environment variables, so you don't need to unset them.
In case you have to unset for a command you can do the following:
RUN unset http_proxy https_proxy no_proxy \
&& execute_your_command_here
In case you have to unset for the built image you can do the following:
FROM ubuntu_with_http_proxy
ENV http_proxy= \
https_proxy= \
no_proxy=
Once environment variables are set using the ENV instruction, we can't really unset them, as detailed here:
Each ENV line creates a new intermediate layer, just like RUN commands. This means that even if you unset the environment variable in a future layer, it still persists in this layer and its value can be dumped.
See: Best practices for writing Dockerfiles
Details:
I prefer to define http_proxy as an argument during build like the following:
FROM ubuntu:20.04
ARG http_proxy=http://host.docker.internal:3128
ARG https_proxy=http://host.docker.internal:3128
ARG no_proxy=.your.domain,localhost,127.0.0.1,.docker.internal
On a corporate proxy we need authentication anyway, so we need to configure a local proxy server listening on 127.0.0.1:3128, which is accessible as host.docker.internal:3128 from containers. This way it also works on Docker Desktop if we connect to the corporate network over VPN (with the local/home network blocked).
Setting no_proxy is also important to avoid flooding the proxy server.
See the following article for more details on no_proxy related topics:
Can we standardize NO_PROXY?
Sometimes it is also good to read the related documentation:
ENV
ARG
In case we need to configure those environment variables, we can use the following commands:
during build (link):
docker build ... --build-arg http_proxy='http://alternative.proxy:3128/' ...
during runs (link):
docker run ... --env http_proxy='http://alternative.proxy:3128/' ...
Also note that we don't even need to define the proxy-related arguments, since those are already predefined according to the following section:
Dockerfile reference - Predefined ARGs
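In other words (a sketch reusing the proxy values from above), the proxy variables can be passed at build time without any ARG or ENV lines in the Dockerfile, and, per the predefined-ARGs documentation, they don't persist in the image and are left out of docker history:
docker build -t the-image \
  --build-arg http_proxy=http://host.docker.internal:3128 \
  --build-arg https_proxy=http://host.docker.internal:3128 \
  --build-arg no_proxy=.your.domain,localhost,127.0.0.1,.docker.internal \
  .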
You can add the lines below to the Dockerfile:
ENV http_proxy ""
ENV https_proxy ""
I found the secret approach didn't work because I needed the env variable to persist in the container when I ran it in interactive mode but then needed to completely remove the variable for a later stage build for production.
What worked: in the development stage of the build I appended the environment variable to the /root/.bashrc file:
RUN echo export AWS_PROFILE=role-name >> /root/.bashrc
In the production stage of the build I then removed the last line of /root/.bashrc:
RUN sed -i '$ d' /root/.bashrc
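Put together as a multi-stage build, it might look roughly like this (the stage names and base image are illustrative, not from the original setup):
FROM ubuntu:20.04 AS development
# development image: the variable is set for interactive shells
RUN echo "export AWS_PROFILE=role-name" >> /root/.bashrc

FROM development AS production
# production image: drop the last line of .bashrc again
RUN sed -i '$ d' /root/.bashrc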

Pass ENV in docker run command

Is there a way we can pass a variable? Let's say in this example I want to pass a list of animals into an entrypoint.sh file using ENV animals="turtle, monkey, goose".
But I want to be able to pass different animals when running the container, for example: docker run -t image animals="mouse,rat,kangaroo"
How do you go about passing arguments when running the docker run command?
The goal is to take that variable when using the docker run command and insert it into that entrypoint.sh file.
Right now I hard-code that in my Dockerfile, but I want to be able to do this when running the docker run command so I don't always have to change the Dockerfile.
FROM anapsix/alpine-java:8u121b13_jdk
ENV FILE_NAME="file_to_run.zip"
ENV animals="turtle, monkey, goose"
ADD ${FILE_NAME} .
RUN echo "${FILENAME} ${animals}" > ./entrypoint.sh
CMD [ "/bin/ash", "./entrypoint.sh" ]
It looks like you might be confusing the image build with the container run. If the difference between the two isn't immediately clear, I'd recommend reviewing some other questions and docs like:
In Docker, what's the difference between a container and an image?
https://docs.docker.com/develop/develop-images/dockerfile_best-practices/
RUN echo "${FILENAME} ${animals}" > ./entrypoint.sh
With the above, the variables will be expanded during the image build. The entrypoint.sh will not contain ${FILENAME} ${animals}. Instead, it will contain
file_to_run.zip turtle, monkey, goose
After the build, the docker run command will create a container from that image and run the above script with the environment variables defined but never used, since the script already has the variables expanded. To prevent the expansion during the build, you need to escape the $ or use single quotes, e.g.
RUN echo "\${FILENAME} \${animals}" > ./entrypoint.sh
or
RUN echo '${FILENAME} ${animals}' > ./entrypoint.sh
I would also recommend being explicit with a #!/bin/ash at the top of this script. Then when you run the container, do not override the command with parameters after the image name. Instead, set the environment variables with the appropriate flag on docker run:
docker run -it -e animals="mouse,rat,kangaroo" image
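Putting that together, a sketch of a corrected Dockerfile and the script it generates (still using the question's image and variable names, with the FILENAME typo fixed to FILE_NAME) might look like:
FROM anapsix/alpine-java:8u121b13_jdk
ENV FILE_NAME="file_to_run.zip"
ENV animals="turtle, monkey, goose"
ADD ${FILE_NAME} .
# single quotes keep ${FILE_NAME} and ${animals} unexpanded until the container runs
RUN printf '#!/bin/ash\necho "${FILE_NAME} ${animals}"\n' > ./entrypoint.sh \
    && chmod +x ./entrypoint.sh
CMD [ "./entrypoint.sh" ]
Running docker run -e animals="mouse,rat,kangaroo" on the resulting image should then print something like file_to_run.zip mouse,rat,kangaroo.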
Simplest way, forward individual variables:
docker run ... --env animals="turtle, monkey, goose" --env FILE_NAME="file_to_run.zip"
Forward several variables using file:
Or if you need to grab all your environment variables from outside, you can do something like this first:
printenv | grep -E 'animals|FILE_NAME' > my-env
The grep is because Docker doesn't like some variables, e.g. with spaces in them, which you might possibly have in your real environment.
Then use that file in your Docker command:
docker run ... --env-file ./my-env
The latter is also useful if you want to avoid sending environment variables to logs (like for sensitive variables). I use this approach in a CI/CD pipeline that runs some scripts.
Using variables inside Docker:
With either approach, the environment variables actually become available to scripts running inside the container to use.
@BMitch's answer has more complete details about how to achieve this in your case, where you have related logic in both build and execution.
Reference
See docs here.

How do I use Docker environment variable in ENTRYPOINT array?

If I set an environment variable, say ENV ADDRESSEE=world, and I want to use it in the entry point script concatenated into a fixed string like:
ENTRYPOINT ["./greeting", "--message", "Hello, world!"]
with world being the value of the environment variable, how do I do it? I tried using "Hello, $ADDRESSEE" but that doesn't seem to work, as it takes the $ADDRESSEE literally.
You're using the exec form of ENTRYPOINT. Unlike the shell form, the exec form does not invoke a command shell. This means that normal shell processing does not happen. For example, ENTRYPOINT [ "echo", "$HOME" ] will not do variable substitution on $HOME. If you want shell processing then either use the shell form or execute a shell directly, for example: ENTRYPOINT [ "sh", "-c", "echo $HOME" ].
When using the exec form and executing a shell directly, as in the case for the shell form, it is the shell that is doing the environment variable expansion, not docker. (From the Dockerfile reference.)
In your case, I would use the shell form:
ENTRYPOINT ./greeting --message "Hello, $ADDRESSEE!"
After much pain, and great assistance from @vitr et al. above, I decided to try:
standard bash substitution
shell form of ENTRYPOINT (great tip from above)
and that worked.
ENV LISTEN_PORT=""
ENTRYPOINT java -cp "app:app/lib/*" hello.Application --server.port=${LISTEN_PORT:-80}
e.g.
docker run --rm -p 8080:8080 -d --env LISTEN_PORT=8080 my-image
and
docker run --rm -p 8080:80 -d my-image
both set the port correctly in my container
Refs
see https://www.cyberciti.biz/tips/bash-shell-parameter-substitution-2.html
I tried to resolve this with the suggested answer and still ran into some issues...
This was a solution to my problem:
ARG APP_EXE="AppName.exe"
ENV _EXE=${APP_EXE}
# Build a shell script because the ENTRYPOINT command doesn't like using ENV
RUN echo "#!/bin/bash \n mono ${_EXE}" > ./entrypoint.sh
RUN chmod +x ./entrypoint.sh
# Run the generated shell script.
ENTRYPOINT ["./entrypoint.sh"]
Specifically targeting your problem:
RUN echo "#!/bin/bash \n ./greeting --message ${ADDRESSEE}" > ./entrypoint.sh
RUN chmod +x ./entrypoint.sh
ENTRYPOINT ["./entrypoint.sh"]
I SOLVED THIS VERY SIMPLY!
IMPORTANT: The variable which you wish to use in the ENTRYPOINT MUST be ENV type (and not ARG type).
EXAMPLE #1:
ARG APP_NAME=app.jar # $APP_NAME can be ARG or ENV type.
ENV APP_PATH=app-directory/$APP_NAME # $APP_PATH must be ENV type.
ENTRYPOINT java -jar $APP_PATH
This will result with executing:
java -jar app-directory/app.jar
EXAMPLE #2 (YOUR QUESTION):
ARG ADDRESSEE="world" # $ADDRESSEE can be ARG or ENV type.
ENV MESSAGE="Hello, $ADDRESSEE!" # $MESSAGE must be ENV type.
ENTRYPOINT ./greeting --message $MESSAGE
This will result with executing:
./greeting --message Hello, world!
Please verify whether you need quotation marks ("") when assigning string variables.
MY TIP: Use ENV instead of ARG whenever possible to avoid confusion on your part or the SHELL side.
For me, I wanted to store the name of the script in a variable and still use the exec form.
Note: Make sure, the variable you are trying to use is declared an environment variable either from the commandline or via the ENV directive.
Initially I did something like:
ENTRYPOINT [ "${BASE_FOLDER}/scripts/entrypoint.sh" ]
But obviously this didn't work, because with the exec form there is no variable expansion and the first item listed needs to be an actual executable on the PATH. So to fix this, this is what I ended up doing:
ENTRYPOINT [ "/bin/bash", "-c", "exec ${BASE_FOLDER}/scripts/entrypoint.sh \"${#}\"", "--" ]
Note the double quotes are required
What this does is to allow us to take whatever extra args were passed to /bin/bash, and supply those same arguments to our script after the name has been resolved by bash.
man 1 bash
-- A -- signals the end of options and disables further
option processing. Any arguments after the -- are treated
as filenames and arguments. An argument of - is
equivalent to --.
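So with this ENTRYPOINT in place, anything after the image name should land in the script as positional arguments, for example (the image name and flags are illustrative):
docker run --rm my-image --verbose --config /etc/app.conf
# inside entrypoint.sh: "$1" is --verbose, "$2" is --config, "$3" is /etc/app.conf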
In my case worked this way: (for Spring boot app in docker)
ENTRYPOINT java -DidMachine=${IDMACHINE} -jar my-app-name
and passing the params on docker run
docker run --env IDMACHINE=Idmachine -p 8383:8383 my-app-name
I solved the problem using a variation on the "create a custom script" approach above. Like this:
FROM hairyhenderson/figlet
ENV GREETING="Hello"
RUN printf '#!/bin/sh\nfiglet -W ${GREETING} $@\n' > /runme && chmod +x /runme
ENTRYPOINT ["/runme"]
CMD ["World"]
Run like
docker container run -it --rm -e GREETING="G'Day" dockerfornovices/figlet-greeter Alec
If someone wants to pass an ARG or ENV variable to the exec form of ENTRYPOINT, then a temp file created during the image build process might be used.
In my case I had to start the app differently depending on whether the .NET app has been published as self-contained or not.
What I did was create a temp file and use its name in the if statement of my shell script.
Part of my Dockerfile:
ARG SELF_CONTAINED=true #ENV SELF_CONTAINED=true also works
# A file has to be used in place of the variable, as it's impossible to pass a variable to ENTRYPOINT using the exec form. The file name allows checking whether the app is self-contained
RUN touch ${SELF_CONTAINED}.txt
COPY run-dotnet-app.sh .
ENTRYPOINT ["./run-dotnet-app.sh", "MyApp" ]
run-dotnet-app.sh:
#!/bin/sh
FILENAME=$1
if [ -f "true.txt" ]; then
./"${FILENAME}"
else
dotnet "${FILENAME}".dll
fi
Here is what worked for me:
ENTRYPOINT [ "/bin/bash", "-c", "source ~/.bashrc && ./entrypoint.sh ${#}", "--" ]
Now you can supply whatever arguments to the docker run command and still read all environment variables.

Parse a variable with the result of a command in Dockerfile

I need to fill a variable in a Dockerfile with the result of a command,
like in bash: var=$(date)
EDIT 1
date is just an example.
In my case I use FROM phusion/baseimage:0.9.17, and I want each build to use the latest version, so I use this:
curl -v --silent api.github.com/repos/phusion/baseimage-docker/tags 2>&1 | grep -oh 'rel-.*",' | head -1 | sed 's/",//' | sed 's/rel-//' ==> 0.9.17.
but I don't know how to put the result into a variable in the Dockerfile, to end up with something like this:
ENV verbaseimage=curl...
FROM phusion/baseimage:$verbaseimage
RESULT
In my use case
FROM phusion/baseimage:latest
But the question remains unresolved for other cases.
I had the same issue and found a way to set an environment variable as the result of a command by using a RUN command in the Dockerfile.
For example, I need to set SECRET_KEY_BASE for a Rails app just once, without it changing each time as it would if I ran:
docker run -e SECRET_KEY_BASE="$(openssl rand -hex 64)"
Instead of that, I write a line like this in the Dockerfile:
RUN bash -l -c 'echo export SECRET_KEY_BASE="$(openssl rand -hex 64)" >> /etc/bash.bashrc'
and my env variable is available from root, even after bash login.
Or maybe:
RUN /bin/bash -l -c 'echo export SECRET_KEY_BASE="$(openssl rand -hex 64)" > /etc/profile.d/docker_init.sh'
Then the variable is available in CMD and ENTRYPOINT commands.
Docker caches it as a layer, and it changes only if you change some lines before it.
You can also try different ways to set an environment variable.
The old workaround is mentioned here (issue 2637: Feature request: expand Dockerfile ENV $VARIABLES in WORKDIR):
One work around that I've used, is to have a file in my context called "build-env". What I do is source it and run my desired command in the same RUN step. So for example:
build-env:
VERSION=stable
Dockerfile:
FROM radial/axle-base:latest
ADD build-env /build-env
RUN source build-env && mkdir /$VERSION
RUN ls /
But for date, that might not be as precise as you want.
Other workarounds are in issue 2022 "Dockerfile with variable interpolation".
In docker 1.9 (end of October 2015), you will have "support for build-time environment variables to the 'build' API (PR 9176)" and "Support for passing build-time variables in build context (PR 15182)".
docker build --build-arg=[]: Set build-time variables
You can use ENV instructions in a Dockerfile to define variable values. These values persist in the built image. However, often persistence is not what you want. Users want to specify variables differently depending on which host they build an image on.
A good example is http_proxy or source versions for pulling intermediate files. The ARG instruction lets Dockerfile authors define values that users can set at build-time using the --build-arg flag:
$ docker build --build-arg HTTP_PROXY=http://10.20.30.2:1234 .
This flag allows you to pass the build-time variables that are accessed like regular environment variables in the RUN instruction of the Dockerfile.
Also, these values don't persist in the intermediate or final images like ENV values do.
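A minimal sketch of how ARG is declared and then read like an environment variable inside RUN (the variable name here is just an example tied to the question):
FROM phusion/baseimage:latest
ARG VERBASEIMAGE=unknown
RUN echo "built against baseimage version: $VERBASEIMAGE" > /build-info
It could be built with, for example, docker build --build-arg VERBASEIMAGE=0.9.17 . The value is usable in RUN during the build but, unlike ENV, does not persist in the final image.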
so I want each build to use the latest version, so I use this
curl -v --silent api.github.com/repos/phusion/baseimage-docker/tags 2>&1 | grep -oh 'rel-.*",' | head -1 | sed 's/",//' | sed 's/rel-//' ==> 0.9.17.
If you want to use the last version of that image, all you need to do is use the tag 'latest' with the FROM directive:
FROM phusion/baseimage:latest
See also "The misunderstood Docker tag: latest": it doesn't always reference the actual latest build, but in this instance, it should work.
If you really want to use the curl|parse option, use it to generate a Dockerfile with the right value (as in a template processed to generate the right file).
Don't try to use it directly in the Dockerfile.
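A rough sketch of that template route, reusing the curl pipeline from the question (the file names and the placeholder token are made up):
#!/bin/sh
# fetch the latest release tag and bake it into a generated Dockerfile
TAG=$(curl -v --silent api.github.com/repos/phusion/baseimage-docker/tags 2>&1 \
      | grep -oh 'rel-.*",' | head -1 | sed 's/",//' | sed 's/rel-//')
# Dockerfile.template contains the line: FROM phusion/baseimage:__TAG__
sed "s/__TAG__/${TAG}/" Dockerfile.template > Dockerfile
docker build -t my-image .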
I wanted to set an ENV or LABEL variable from a computation in the Dockerfile, e.g. to make some computed installation options visible in docker inspect.
There does not seem to be any way to do that, and this issue suggests that it's a security design choice.
A Dockerfile can set an ENV variable to $X, ${X:-default}, or ${X:+substitute} where that $X must be another ENV or ARG variable.
A single RUN command can set and use shell variables, but that goes away at the end of the RUN command when that container layer shuts down.
A RUN command can write computed data into files, but the Dockerfile still can't get that data into an ENV or LABEL even if the file is ~/.bashrc. (File contents can, of course, be used by code running in the Container.)
The build can at least RUN echo $X to record choices to the build log -- unless that step comes from the build cache, in which case the RUN step doesn't run.
Please do correct me if there's a way out.
Partially connected to the question: if one wants to use the result of some command later on, it is possible within a single RUN statement as follows:
RUN CUR_DIR=`pwd` && \
echo $CUR_DIR
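Applied to the original question, the same single-RUN pattern could look roughly like this; what is done with the value inside that RUN is up to you, since it can't be exported to later instructions:
RUN VERBASEIMAGE=$(curl -v --silent api.github.com/repos/phusion/baseimage-docker/tags 2>&1 \
      | grep -oh 'rel-.*",' | head -1 | sed 's/",//' | sed 's/rel-//') && \
    echo "upstream baseimage version: $VERBASEIMAGE" > /build-info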
