How to use an environment variable from a parent Dockerfile? - docker

I have two Dockerfiles, Dockerfile.A and Dockerfile.B, where Dockerfile.B inherits from Dockerfile.A using the FROM keyword. In Dockerfile.A I set an environment variable (PATH) that I would like to use in Dockerfile.B. Is this possible, and how would I go about doing it?
So far I have tried the following in Dockerfile.A:
RUN export PATH=/my/new/dir:$PATH
ENV PATH=/my/new/dir:$PATH
RUN echo "PATH=/my/new/dir:$PATH" >/etc/profile
And in Dockerfile.B, respectively:
Just used tools on the PATH to see if they were available (they were not)
ENV PATH
RUN source /etc/profile
I realized that every RUN command is executed in its own environment, and that is probably why the ENV keyword exists: to make it possible to set environment variables that persist independently of the RUN commands. But I am not sure what that means for my case.
So how can I do this?

Works as expected for me.
Dockerfile.A
FROM alpine:3.6
ENV TEST=VALUE
Build it.
$ docker build -t imageA .
Dockerfile.B
FROM imageA
CMD echo $TEST
Build it.
$ docker build -t imageB .
Run it.
$ docker run -it imageB
VALUE
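Applied to the PATH case from the question, the same mechanism looks like this (a minimal sketch; imageA is just a placeholder tag for the first build):
Dockerfile.A
FROM alpine:3.6
# the expanded value is stored in the image metadata
ENV PATH=/my/new/dir:$PATH
Dockerfile.B
FROM imageA
# /my/new/dir is already on PATH here, inherited from imageA
RUN echo $PATH
Build Dockerfile.A with -t imageA first, then build Dockerfile.B; the RUN echo during the second build prints the extended PATH.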

Related

how to set an environment variable with pwd in a docker container

I would like to set the LD_LIBRARY_PATH variable based on my working directory using my Dockerfile. I have tried using ENV $PWD/some/subpath, but when I inspect the container later using docker exec mycontainer bash -c "env", it shows up as /some/subpath rather than /my/working/dir/some/subpath. However, I also see that PWD is defined as /my/working/dir/, as I would expect it to be. So why is using $PWD in my Dockerfile not substituting the way I am expecting it to?
From this answer, $PWD is a special environment variable set when running a shell. Unlike RUN commands, ENV commands do not create a shell, so PWD is never set.
To get the value of PWD at build time, you could instead use a build-arg and pass in $PWD in the build command.
You'd do this in your Dockerfile like this:
# dockerfile
ARG working_directory
ENV LD_LIBRARY_PATH=${working_directory}/some/subpath
and build like this:
docker build --build-arg "working_directory=$PWD" .
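To check the result, you can inspect the baked-in value afterwards (a sketch; myimage is a placeholder tag, and the base image is assumed to provide env):
docker build --build-arg "working_directory=$PWD" -t myimage .
docker run --rm myimage env | grep LD_LIBRARY_PATH
The value of $PWD is captured on the host at build time, so the resulting LD_LIBRARY_PATH is fixed in the image and does not change if the container later runs from a different directory.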

Docker - pass env variable to replace Java max memory

I have a Dockerfile as follows.
ENV SPRING_ENV="local"
ENV APP_OPTS "-Xmx8144m"
RUN echo "/usr/lib/jvm/java-1.8-openjdk/bin/java ${APP_OPTS} -Djava.security.egd=file:/dev/./urandom -jar /apps/demo/demo-fe.jar --spring.config.location=file:///apps/demo/conf/ump.properties -Dspring.profiles.active=${SPRING_ENV} &" > /apps/demo/entrypoint.sh
RUN chmod +x /apps/demo/entrypoint.sh
When I build the Dockerfile, I see a file 'entrypoint.sh' containing the java command that I specified in the Dockerfile.
But I want to change the java max memory depending on the environment. So I am running like this.
docker run -it <image_id> sh -e "APP_OPTS=-Xmx9144m" -e "SPRING_ENV=dev"
But when I run it and check entrypoint.sh, I don't see the environment variables replaced. Am I missing something?
Does it replace only on the fly when I actually run the container?
You need to escape the $ in ${APP_OPTS} (i.e., change it to \${APP_OPTS}) -- during docker build, the variable is replaced with the "current" environment variable, which is whatever is in your env output (otherwise empty). Calling docker run ... -e "APP_OPTS=-Xmx9144m" won't do anything at this point, because ${APP_OPTS} was already replaced when the image was built.
Otherwise, you could try saving the entrypoint.sh file and put it in the same folder as your Dockerfile instead of having your Dockerfile create it (and use COPY instead to put it where you want it). That way, the ${APP_OPTS} environment variable won't get replaced during docker build
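For example, the escaped version of that RUN line would look something like this (a sketch of the first option):
RUN echo "/usr/lib/jvm/java-1.8-openjdk/bin/java \${APP_OPTS} -Djava.security.egd=file:/dev/./urandom -jar /apps/demo/demo-fe.jar --spring.config.location=file:///apps/demo/conf/ump.properties -Dspring.profiles.active=\${SPRING_ENV} &" > /apps/demo/entrypoint.sh
The shell that runs during docker build now writes the literal strings ${APP_OPTS} and ${SPRING_ENV} into entrypoint.sh, leaving them to be expanded when the script is executed in the running container.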
The Dockerfile (and its RUN commands) is only executed when you build the image. SPRING_ENV and APP_OPTS are evaluated only once, during the build.
When you run the image, the --env=KEY=VALUE values are passed to the shell (!) running the process defined in the ENTRYPOINT or CMD (which you need but do not have).
You're missing a FROM ... statement near the top of the Dockerfile too.
You will need to define an ENTRYPOINT (the shell form is recommended here) that invokes the java runtime, picks up the environment variables, and runs your code, perhaps like this (not tested):
FROM ???
ENV SPRING_ENV="local"
ENV APP_OPTS "-Xmx8144m"
ENTRYPOINT /usr/lib/jvm/java-1.8-openjdk/bin/java ${APP_OPTS} -Djava.security.egd=file:/dev/./urandom -jar /apps/demo/demo-fe.jar --spring.config.location=file:///apps/demo/conf/ump.properties -Dspring.profiles.active=${SPRING_ENV}
Example:
FROM busybox
ENV DOG=Freddie
ENTRYPOINT echo ${DOG}
Then:
docker build --tag=58208029 --file=./Dockerfile .
docker run -it 58208029:latest
Freddie
docker run -it --env=DOG=Henry 58208029:latest
Henry
HTH!
The entrypoint.sh is being written when you build the image, so that RUN statement won't be executed again when you run the container. So the entrypoint.sh file itself will not be updated.
Another issue is that when you do the docker run, the -e options need to be before the image name and command:
docker run -it -e "APP_OPTS=-Xmx9144m" -e "SPRING_ENV=dev" <image_id> sh
Otherwise those are just being passed as arguments to the entrypoint/command.
Also, in your Dockerfile, you probably want single quotes around your entrypoint script so that it doesn't interpolate the values at build time.
RUN echo '/usr/lib/jvm/java-1.8-openjdk/bin/java ${APP_OPTS} -Djava.security.egd=file:/dev/./urandom -jar /apps/demo/demo-fe.jar --spring.config.location=file:///apps/demo/conf/ump.properties -Dspring.profiles.active=${SPRING_ENV} &' > /apps/demo/entrypoint.sh
Then when you run the container, the entrypoint script should read the variable values at run time from the environment.
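To convince yourself, compare what ends up in the file with what the shell expands at run time (a sketch; <image_id> is a placeholder):
docker run -it -e "APP_OPTS=-Xmx9144m" -e "SPRING_ENV=dev" <image_id> cat /apps/demo/entrypoint.sh
With the single-quoted RUN above, the file still contains the literal ${APP_OPTS} and ${SPRING_ENV}; the values you pass with -e are only substituted when a shell actually executes the script, e.g. via sh /apps/demo/entrypoint.sh.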

Passing multiple classFile as argument to Dockerfile

I have a Dockerfile like this:
FROM java:8
ARG cName
ADD target/jar1.jar p2p.jar
ADD ci/docker_entrypoint.sh .
CMD ["bash", "docker_entrypoint.sh" , "$cName"]
I have a docker_entrypoint.sh which look :
java -cp p2p.jar $1
I have multiple classes to run, and I am providing the class name as an input parameter to the Dockerfile. I am running a couple of commands to build and run the image.
docker build -f Dockerfile -t docker-p2p --build-arg cName=com.HelloWorld .
docker run docker-p2p
after running the second command I am getting below error:
Error: Could not find or load main class $cName
I am new to Docker and I am not able to parameterise my Dockerfile. When I hard-code a class name such as "HelloWorld" in the Dockerfile, it runs well, but when I try to pass it as a parameter, it fails with this error.
You have to distinguish between docker run, CMD and ENTRYPOINT.
For your example you can use an entrypoint and set the parameter via an environment variable.
One simple and easy Dockerfile example could be:
FROM java:8
ENV NAME="John Dow"
ENTRYPOINT ["/bin/bash", "-c", "echo Hello, $NAME"]
with docker build . -t test and docker run -e NAME="test123" test
Also have a look at some further docu: docker-run-vs-cmd-vs-entrypoint.
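Applied to the question's setup, a minimal sketch of the same idea (MAIN_CLASS is a name introduced here for illustration, and the second class name is a placeholder):
FROM java:8
ADD target/jar1.jar /p2p.jar
# default class; can be overridden at run time with -e
ENV MAIN_CLASS=com.HelloWorld
# shell form via bash -c so the variable is expanded when the container starts
ENTRYPOINT ["/bin/bash", "-c", "java -cp /p2p.jar $MAIN_CLASS"]
Build once and select the class per run:
docker build -t docker-p2p .
docker run docker-p2p
docker run -e MAIN_CLASS=com.SomeOtherMainClass docker-p2p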
If you do wind up with a Docker image that can do multiple things, it's a little unusual to create one image per task the way you're describing. You can pass additional command-line parameters in docker run or most other ways to start a container, and you can use that to control what the image does.
For example, you might want to set up your image so that you can run
docker run ... docker-p2p com.HelloWorld
passing the class name as an argument. I'd write an entrypoint script that wrapped this in a java call if appropriate (but passed through non-class names, like docker run ... sh):
#!/bin/sh
set -e
case "$1" of
com.*) exec java "$#" ;;
*) exec "$#" ;;
esac
The corresponding Dockerfile doesn't take any ARGs; it could be
FROM java:8
# I prefer COPY to ADD, unless you explicitly want automatic
# HTTP fetches and/or tar file extraction.
COPY target/jar1.jar /p2p.jar
COPY ci/docker_entrypoint.sh /
# Globally set the class path. (A Docker image only does one thing.)
ENV CLASSPATH /p2p.jar
# Always launch the entrypoint script.
ENTRYPOINT ["/docker_entrypoint.sh"]
# Give a default command, which with our script is a class name.
CMD ["com.HelloWorld"]
If you actually want a container per task, you could create a base image that contains everything up to the ENTRYPOINT line, and then create derived images FROM that base image that just set a different CMD.
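That per-task variant could look something like this (a sketch; the image tags are placeholders):
# Dockerfile.base -- shared layers, no default class
FROM java:8
COPY target/jar1.jar /p2p.jar
COPY ci/docker_entrypoint.sh /
ENV CLASSPATH /p2p.jar
ENTRYPOINT ["/docker_entrypoint.sh"]
# Dockerfile.helloworld -- derived image that only picks the class to run
FROM docker-p2p-base
CMD ["com.HelloWorld"]
Build the base first (docker build -f Dockerfile.base -t docker-p2p-base .), then each derived image (docker build -f Dockerfile.helloworld -t docker-p2p-helloworld .).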

Docker run command in Dockerfile executes only if I don't specify a command on the CLI

Say I have a Dockerfile:
.
.
RUN echo 'source /root/script.sh' >> /etc/bash.bashrc
(The script adds some env variables)
If I:
1) Do this:
docker run -it -v /home/user/script.sh:/root/script.sh image
It takes me to a shell where, if I call "env", I see the variable set by the script.
But if I:
2) Do this:
docker run -it -v /home/user/script.sh:/root/script.sh image env
It prints out the environment and exits, and my variable is missing.
What am I missing? I need the variable to exist even if I specify a command/script like "env" at the end of the docker run command.
When you run a command like
docker run ... image command
Docker directly runs the command you give; it doesn’t launch any kind of shell, and there’s no opportunity for a .bashrc or similar file to be read.
I’d suggest two things here:
If your program does need environment variables set in some form, set them directly using Dockerfile ENV directives (see the sketch after these two suggestions). Don't try to edit .bashrc or /etc/profile or any other shell dotfile; they won't reliably get run.
As much as you can, install things in locations where you don't need to change environment variables at all. For instance, Python supports a "virtual environment" concept that allows an isolated library environment, which requires changing $PATH and similar things; but Docker provides the same isolation on its own, so just install things into the "global" package space.
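A minimal sketch of the first suggestion (the base image, variable name, and value here are placeholders):
FROM alpine:3.6
# stored in the image metadata and visible to every process in the container
ENV MY_VAR=some_value
Then:
docker run -it image env
lists MY_VAR=some_value even though no shell dotfile was ever read.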
If you really can’t manage either of these things, then you can write an entrypoint script that sets environment variables and then launches the container’s command. This might look like
#!/bin/sh
. /root/script.sh
exec "$#"
And then you could include this in your Dockerfile like
...
COPY entrypoint.sh /
ENTRYPOINT ["/entrypoint.sh"]
CMD ["/app/myapp"]
(If you need to use docker exec to get a debugging shell in the container, that won’t be a child process of the entrypoint and won’t get its environment variables.)
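With that entrypoint in place, the original command from the question should now show the variables, because /root/script.sh is sourced before env is exec'd (assuming the script exports its variables):
docker run -it -v /home/user/script.sh:/root/script.sh image env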

Pass ARG to ENTRYPOINT

Say I have this in a Dockerfile:
ARG FOO=1
ENTRYPOINT ["docker.r2g", "run"]
where I build the above with:
docker build -t "$tag" --build-arg FOO="$(date +%s)" .
is there a way to do something like:
ENTRYPOINT ["docker.r2g", "run", ARG FOO] // something like this
I guess the argument could also be passed with docker run instead of during the docker build phase?
You could combine ARG and ENV in your Dockerfile, as I mention in "ARG or ENV, which one to use in this case?"
ARG FOO
ENV FOO=${FOO}
That way, your docker.r2g script can access the ${FOO} environment variable.
I guess the argument could also be passed with docker run instead of during the docker build phase?
That is also possible, if it makes more sense to give FOO a value at runtime:
docker run -e FOO=$(...) ...
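A minimal sketch of the ARG-plus-ENV combination (alpine and echo stand in for your base image and docker.r2g; the tag foo-demo is a placeholder):
FROM alpine:3.6
ARG FOO=1
# bake the build-arg into the image as an environment variable
ENV FOO=${FOO}
# shell form so the variable is expanded when the container starts
ENTRYPOINT echo "FOO is $FOO"
docker build -t foo-demo --build-arg FOO="$(date +%s)" .
docker run --rm foo-demo
docker run --rm -e FOO=override foo-demo
The first run prints the value baked in at build time; the second shows it can still be overridden at run time.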
This simple technique works for me:
FROM node:9
# ...
ENTRYPOINT dkr2g run "$dkr2g_run_args"
then we launch the container with:
docker run \
-e dkr2g_run_args="$run_args" \
--name "$container_name" "$tag_name"
There might be some edge-case issues with spreading an env variable into command-line arguments, but it should work for the most part.
ENTRYPOINT can work either like so:
ENTRYPOINT ["foo", "--bar", "$baz"] # $baz will not be interpreted
or like so:
ENTRYPOINT foo --bar $baz
Not sure why the latter form is not preferred, but environment variable interpolation/interpretation is only possible with it. See: How do I use Docker environment variable in ENTRYPOINT array?
However, a more robust way of passing arguments is to use "$@" instead of an env variable. So what you should do then is override --entrypoint using the docker run command, like so:
docker run --entrypoint="foo" <tag> --bar "$@"
To be sure of the exact syntax for overriding the entrypoint, you will have to look it up, but in general it is a little surprising: you put --entrypoint="foo" before the tag name, and the arguments to the entrypoint after the tag name.
In my case I needed this to be set at build time, meaning I did not have control over the docker run command, and I really struggled because using ARG or ENV directives in the Dockerfile did not work. So below is my solution, and it worked like a charm:
ENTRYPOINT export $(grep -v '^#' .env | xargs -d '\n') \
&& your_command_passing_the_variable ${FOO}
Basically what I did was copy the variables into a file and then export the values in the same bash instance created by the ENTRYPOINT directive. The value is captured and passed correctly to the command. Hopefully, this helps.
Note: If you need to put secrets in that file, do not add the file to the version control system (e.g. git), instead create the file during your pipeline and be sure to clean up any sensitive information.
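For context, a sketch of how the pieces might fit together (the base image, .env contents, and final command are placeholders):
# .env -- created by the pipeline, not committed to version control
FOO=runtime_value
# Dockerfile
FROM ubuntu:18.04
COPY .env .
ENTRYPOINT export $(grep -v '^#' .env | xargs -d '\n') \
    && echo "FOO is ${FOO}"
Note that xargs -d is a GNU option, so this assumes a base image with GNU findutils installed.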
