I have a Dockerfile like this:
FROM java:8
ARG cName
ADD target/jar1.jar p2p.jar
ADD ci/docker_entrypoint.sh .
CMD ["bash", "docker_entrypoint.sh" , "$cName"]
I have a docker_entrypoint.sh which looks like this:
java -cp p2p.jar $1
I have multiple classes to run and I am providing the class name as an input parameter to the Dockerfile. I am running a couple of commands to build and run the Docker image.
docker build -f Dockerfile -t docker-p2p --build-arg cName=com.HelloWorld .
docker run docker-p2p
after running the second command I am getting below error:
Error: Could not find or load main class $cName
I am new to Docker and I am not able to parameterise my Dockerfile. When I hard-code a class name like "HelloWorld" in the Dockerfile, it runs fine, but when I try to pass it as a parameter, it fails with this error.
You have to distinguish between docker run, CMD and ENTRYPOINT.
For your example you can use an ENTRYPOINT and set the parameter via an environment variable.
A simple Dockerfile example could be:
FROM java:8
ENV NAME="John Dow"
ENTRYPOINT ["/bin/bash", "-c", "echo Hello, $NAME"]
with docker build . -t test and docker run -e NAME="test123" test
Also have a look at some further documentation: docker-run-vs-cmd-vs-entrypoint.
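Applied to the question's setup, a rough sketch (reusing the jar and script paths from the original Dockerfile; the variable name CLASS_NAME is my own choice):
FROM java:8
ENV CLASS_NAME=com.HelloWorld
COPY target/jar1.jar /p2p.jar
COPY ci/docker_entrypoint.sh /
ENTRYPOINT ["bash", "/docker_entrypoint.sh"]
with docker_entrypoint.sh reading the environment variable instead of $1:
#!/bin/bash
# CLASS_NAME is set via ENV and can be overridden with docker run -e
exec java -cp /p2p.jar "$CLASS_NAME"
and then docker run -e CLASS_NAME=com.SomeOtherClass docker-p2p to pick a different class at run time.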
If you do wind up with a Docker image that can do multiple things, it's a little unusual to create one image per task the way you're describing. You can pass additional command-line parameters in docker run or most other ways to start a container, and you can use that to control what the image does.
For example, you might want to set up your image so that you can run
docker run ... docker-p2p com.HelloWorld
passing the class name as an argument. I'd write an entrypoint script that wrapped this in a java call if appropriate (but passed through non-class names, like docker run ... sh):
#!/bin/sh
set -e
case "$1" of
com.*) exec java "$#" ;;
*) exec "$#" ;;
esac
The corresponding Dockerfile doesn't take any ARGs; it could be
FROM java:8
# I prefer COPY to ADD, unless you explicitly want automatic
# HTTP fetches and/or tar file extraction.
COPY target/jar1.jar /p2p.jar
COPY ci/docker_entrypoint.sh /
# Globally set the class path. (A Docker image only does one thing.)
ENV CLASSPATH /p2p.jar
# Always launch the entrypoint script.
ENTRYPOINT ["/docker_entrypoint.sh"]
# Give a default command, which with our script is a class name.
CMD ["com.HelloWorld"]
If you actually want a container per task, you could create a base image that contained everything up to the ENTRYPOINT line, and then create derived images FROM that base image that just set a different CMD.
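A rough sketch of that layout, with made-up image names:
# Dockerfile.base: everything up to and including the ENTRYPOINT
FROM java:8
COPY target/jar1.jar /p2p.jar
COPY ci/docker_entrypoint.sh /
ENV CLASSPATH /p2p.jar
ENTRYPOINT ["/docker_entrypoint.sh"]
# Dockerfile.helloworld: a derived image that only changes the default class
FROM p2p-base
CMD ["com.HelloWorld"]
built with something like docker build -f Dockerfile.base -t p2p-base . followed by docker build -f Dockerfile.helloworld -t p2p-helloworld .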
I have a Dockerfile as follows.
ENV SPRING_ENV="local"
ENV APP_OPTS "-Xmx8144m"
RUN echo "/usr/lib/jvm/java-1.8-openjdk/bin/java ${APP_OPTS} -Djava.security.egd=file:/dev/./urandom -jar /apps/demo/demo-fe.jar --spring.config.location=file:///apps/demo/conf/ump.properties -Dspring.profiles.active=${SPRING_ENV} &" > /apps/demo/entrypoint.sh
RUN chmod +x /apps/demo/entrypoint.sh
When I build the image, I see a file 'entrypoint.sh' containing the java command that I specified in the Dockerfile.
But I want to change the java max memory depending on the environment. So I am running like this.
docker run -it <image_id> sh -e "APP_OPTS=-Xmx9144m" -e "SPRING_ENV=dev"
But when I run it and check entrypoint.sh, I don't see the environment variables replaced. Am I missing something?
Does it replace only on the fly when I actually run the container?
You need to escape the $ in ${APP_OPTS} (i.e., change it to \${APP_OPTS}) -- during docker build, the variable is replaced with the "current" environment variable, which is whatever appears in your env output (otherwise null). Calling docker run ... -e "APP_OPTS=-Xmx9144m" won't do anything at this point, because ${APP_OPTS} was already substituted when the image was created.
Otherwise, you could try saving the entrypoint.sh file and put it in the same folder as your Dockerfile instead of having your Dockerfile create it (and use COPY instead to put it where you want it). That way, the ${APP_OPTS} environment variable won't get replaced during docker build
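For that second option, a rough sketch keeping the paths from your Dockerfile: put an entrypoint.sh next to the Dockerfile with the variables left unexpanded,
#!/bin/sh
# APP_OPTS and SPRING_ENV are expanded here, when the container starts
exec /usr/lib/jvm/java-1.8-openjdk/bin/java ${APP_OPTS} -Djava.security.egd=file:/dev/./urandom -jar /apps/demo/demo-fe.jar --spring.config.location=file:///apps/demo/conf/ump.properties -Dspring.profiles.active=${SPRING_ENV}
and copy it in instead of generating it:
COPY entrypoint.sh /apps/demo/entrypoint.sh
RUN chmod +x /apps/demo/entrypoint.sh
ENTRYPOINT ["/apps/demo/entrypoint.sh"]
Since the variables are only referenced inside the script, the shell expands them when the container starts, not during docker build.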
The Dockerfile (and its RUN commands) are only executed when you build the image. SPRING_ENV and APP_OPTS are evaluated only once, during the build.
When you run the image, the --env=KEY=VALUE pairs are passed to the shell (!) running the process defined in the ENTRYPOINT or CMD (which you need but do not have).
You're missing a FROM ... statement near the top of the Dockerfile too.
You will need to define an ENTRYPOINT (I recommend the shell form) that invokes the Java runtime, passes the environment variables and runs your code, perhaps like this (I have not tried it):
FROM ???
ENV SPRING_ENV="local"
ENV APP_OPTS "-Xmx8144m"
ENTRYPOINT /usr/lib/jvm/java-1.8-openjdk/bin/java ${APP_OPTS} -Djava.security.egd=file:/dev/./urandom -jar /apps/demo/demo-fe.jar --spring.config.location=file:///apps/demo/conf/ump.properties -Dspring.profiles.active=${SPRING_ENV}
Example:
FROM busybox
ENV DOG=Freddie
ENTRYPOINT echo ${DOG}
Then:
docker build --tag=58208029 --file=./Dockerfile .
docker run -it 58208029:latest
Freddie
docker run -it --env=DOG=Henry 58208029:latest
Henry
HTH!
The entrypoint.sh is being written when you build the image, so that RUN statement won't be executed again when you run the container. So the entrypoint.sh file itself will not be updated.
Another issue is that when you do the docker run, the -e options need to be before the image name and command:
docker run -it -e "APP_OPTS=-Xmx9144m" -e "SPRING_ENV=dev" <image_id> sh
Otherwise those are just passed as arguments to the entrypoint/command.
Also, in your Dockerfile, you probably want single quotes around your entrypoint script so that it doesn't interpolate the values at build time.
RUN echo '/usr/lib/jvm/java-1.8-openjdk/bin/java ${APP_OPTS} -Djava.security.egd=file:/dev/./urandom -jar /apps/demo/demo-fe.jar --spring.config.location=file:///apps/demo/conf/ump.properties -Dspring.profiles.active=${SPRING_ENV} &' > /apps/demo/entrypoint.sh
Then when you run the container, the entrypoint script should read the variable values at run time from the environment.
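As noted above, the Dockerfile as shown never actually executes the generated script; it also needs an ENTRYPOINT (or CMD), for example (path taken from the question):
ENTRYPOINT ["sh", "/apps/demo/entrypoint.sh"]
With that in place, docker run -it -e "APP_OPTS=-Xmx9144m" -e "SPRING_ENV=dev" <image_id> starts the JVM with the overridden settings.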
I am defining an image in a Dockerfile which has another image as its parent:
FROM parent_org/parent:1.0.0
...
The parent image's documentation mentions an argument (special-arg) that can be passed when running an instance of the container:
docker run parent_org/parent:1.0.0 --special-arg
How can I enable special-arg in my Dockerfile?
TL;DR: you could use the CMD directive by doing something like this:
FROM parent_org/parent:1.0.0
CMD ["--special-arg"]
however note that passing extra flags to docker run as below would overwrite --special-arg (as CMD is intended to specify default arguments):
docker build -t child_org/child .
docker run child_org/child # would imply --special-arg
docker run child_org/child --other-arg # "--other-arg" replaces "--special-arg"
If this is not what you'd like to obtain, you should redefine the ENTRYPOINT as suggested below.
The CMD and ENTRYPOINT directives
To have more insight on CMD as well as on ENTRYPOINT, you can take a look at the table involved in this other SO answer: CMD doesn't run after ENTRYPOINT in Dockerfile.
In your case, you could redefine the ENTRYPOINT in your child image (and if need be, the default CMD) by adapting child_org/child/Dockerfile w.r.t. what was defined in the parent Dockerfile.
Assuming the parent_org/parent/Dockerfile looks like this:
# for example
FROM debian:stable
WORKDIR /usr/src/foo
COPY entrypoint.sh .
RUN chmod a+x entrypoint.sh
ENTRYPOINT ["./entrypoint.sh"]
CMD ["--default-arg"]
You could write a child_org/child/Dockerfile like this:
FROM parent_org/parent:1.0.0
RUN […]
# Redefine the ENTRYPOINT so the --special-arg flag is always passed
ENTRYPOINT ["./entrypoint.sh", "--special-arg"]
# If need be, redefine the list of default arguments,
# as setting ENTRYPOINT resets CMD to an empty value:
CMD ["--default-arg"]
This baffled me at first, too. Run them using the command: declaration. A command and an entrypoint are two different things: the entrypoint runs whatever script or executable your service needs to initialize and start, and that entrypoint script then usually appends whatever you pass in from the command: declaration as further arguments, to alter the behavior of the service.
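Assuming the command: declaration refers to Docker Compose, a rough sketch of the idea (service, image and script names are made up):
services:
  app:
    image: docker-p2p
    entrypoint: ["/docker_entrypoint.sh"]
    command: ["com.HelloWorld"]
The entrypoint script receives com.HelloWorld as its first argument and can append it to whatever it finally launches.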
Say I have this in a Dockerfile:
ARG FOO=1
ENTRYPOINT ["docker.r2g", "run"]
where I build the above with:
docker build -t "$tag" --build-arg FOO="$(date +%s)" .
is there a way to do something like:
ENTRYPOINT ["docker.r2g", "run", ARG FOO] // something like this
I guess the argument could also be passed with docker run instead of during the docker build phase?
You could combine ARG and ENV in your Dockerfile, as I mention in "ARG or ENV, which one to use in this case?"
ARG FOO
ENV FOO=${FOO}
That way, your docker.r2g can access the ${FOO} environment variable.
I guess the argument could also be passed with docker run instead of during the docker build phase?
That is also possible, if it makes more sense to give FOO a value at runtime:
docker run -e FOO=$(...) ...
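Putting the two pieces together, a rough sketch of the Dockerfile (the FROM line is omitted as in the question, and how docker.r2g reads the variable is up to that tool):
ARG FOO=1
ENV FOO=${FOO}
ENTRYPOINT ["docker.r2g", "run"]
Built with docker build -t "$tag" --build-arg FOO="$(date +%s)" ., the value baked in at build time becomes the default, and docker run -e FOO=something "$tag" can still override it at run time.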
This simple technique works for me:
FROM node:9
# ...
ENTRYPOINT dkr2g run "$dkr2g_run_args"
then we launch the container with:
docker run \
-e dkr2g_run_args="$run_args" \
--name "$container_name" "$tag_name"
There might be some edge-case issues with splitting an env variable into command-line arguments, but it should work for the most part.
ENTRYPOINT can work either like so:
ENTRYPOINT ["foo", "--bar", "$baz"] # $baz will not be interpreted
or like so:
ENTRYPOINT foo --bar $baz
Not sure why the latter is not preferred, but env variable interpolation/interpretation is only possible using the latter. See: How do I use Docker environment variable in ENTRYPOINT array?
However, a more robust way of passing arguments is to use "$@" instead of an env variable. So what you should do then is override the entrypoint using the docker run command, like so:
docker run --entrypoint="foo" <tag> --bar "$@"
To get the exact syntax for overriding the entrypoint you should look it up to be sure, but in general it is a bit odd: you have to put --entrypoint="foo" before the tag name, and the arguments to the entrypoint after the tag name.
In my case I needed this to be set at build time, meaning I didn't have control over the docker run command, and using ARG or ENV directives in the Dockerfile didn't work, so I really struggled with it. Below is my solution, and it worked like a charm:
ENTRYPOINT export $(grep -v '^#' .env | xargs -d '\n') \
&& your_command_passing_the_variable ${FOO}
Basically what I did was copy the variables into a file and then export the values in the same bash instance created by the ENTRYPOINT directive. The value is captured and passed correctly to the command. Hopefully, this helps.
Note: If you need to put secrets in that file, do not add the file to the version control system (e.g. git), instead create the file during your pipeline and be sure to clean up any sensitive information.
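For reference, the .env file consumed by that grep | xargs pipeline is just plain KEY=VALUE lines; a made-up example:
# lines starting with # are filtered out by grep -v '^#'
FOO=some-value
SPRING_ENV=dev
Values containing spaces would need extra care, since the unquoted $(...) is split on whitespace by the shell.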
I have a Dockerfile with the following CMD as the last line
CMD ["/usr/local/myapp/bin/startup.sh", "docker"]
Part of a script that is executed against the docker image during startup is as follows
# find directory of cacerts file in DOCKER_JAVA_HOME of container
DOCKER_CACERTS_DIR=$(dirname "$(docker run "$DOCKER_IMAGE_ID" find "$DOCKER_JAVA_HOME" -name cacerts)")
However, this still executes the CMD line from my Dockerfile.
I have found that I can alter this behaviour by changing the line in the script as follows.
# find directory of cacerts file in DOCKER_JAVA_HOME of container
DOCKER_CACERTS_DIR=$(dirname "$(docker run --entrypoint find "$DOCKER_IMAGE_ID" "$DOCKER_JAVA_HOME" -name cacerts)")
However, I didn't think this would be necessary. Is it normal for docker to execute the CMD when overridden in the docker run command? I thought this was supposed to be one of the differences between using CMD and ENTRYPOINT, that you could easily override CMD without using the --entrypoint flag.
In case it's important, this is using docker version 17.03.0-ce
The image being run has an ENTRYPOINT defined somewhere. Probably in the image you are building FROM if there isn't one in your Dockerfile.
When ENTRYPOINT and CMD are defined, Docker will pass the CMD to the ENTRYPOINT as arguments. From there, it's up to the ENTRYPOINT executable to decide what to do.
The arguments could be ignored completely, modified as the entrypoint sees fit, or passed through as the complete command to run. That behaviour is image specific.
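A common shape for such an entrypoint, shown here only as a rough sketch, is to do its own setup and then hand the remaining arguments off as the main process:
#!/bin/sh
set -e
# ... image-specific initialization goes here ...
# then run whatever was supplied as CMD / on the docker run command line
exec "$@"
An entrypoint like this passes the command through; other images inspect or rewrite the arguments instead, which is why the behaviour you see is image specific.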
I have two Dockerfiles Dockerfile.A & Dockerfile.B where Dockerfile.B inherits using the FROM keyword from Dockerfile.A. In Dockerfile.A I set an environment variable that I would like to use in Dockerfile.B (PATH). Is this possible, and how would I go about doing it?
So far I have tried the following in Dockerfile.A:
RUN export PATH=/my/new/dir:$PATH
ENV PATH=/my/new/dir:$PATH
RUN echo "PATH=/my/new/dir:$PATH" >/etc/profile
And in Dockerfile.B, respectively:
Just use tools in the path to see if they were available (they were not)
ENV PATH
RUN source /etc/profile
I realized that every RUN command is executed in its own environment, and that is probably why the ENV keyword exists: to make it possible to set environment variables independently of the RUN commands. But I am not sure what that means for my case.
So how can I do this?
Works as expected for me.
Dockerfile.A
FROM alpine:3.6
ENV TEST=VALUE
Build it.
docker build -t imageA .
Dockerfile.B
FROM imageA
CMD echo $TEST
Build it.
$ docker build -t imageB .
Run it
$ docker run -it imageB
VALUE
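The same mechanism covers the PATH case from the question; a rough sketch:
Dockerfile.A
FROM alpine:3.6
ENV PATH=/my/new/dir:$PATH
Dockerfile.B
FROM imageA
CMD echo $PATH
ENV values are recorded in the image metadata, so derived images and their containers inherit them, whereas RUN export only affects that single build step.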