I've got the following in my Dockerfile:
ENTRYPOINT echo "wtf"
CMD ["wtf wtf"]
When I start it up I get:
nginx_1 | wtf
I would expect the output to be wtf wtf wtf instead of wtf.
Modifying the Dockerfile to:
ENTRYPOINT echo "$@"
CMD ["wtf wtf"]
Results in empty output.
Why aren't the additional commands passed to echo?
You should use the array (exec) form for ENTRYPOINT as well. As stated in the documentation, it is the preferred way of specifying ENTRYPOINT:
ENTRYPOINT ["executable", "param1", "param2"] (exec form, preferred)
If ENTRYPOINT is not in array form, Docker runs it through /bin/sh -c and the arguments from CMD are not appended to it.
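For example, a minimal sketch of the Dockerfile from the question rewritten in exec form:
ENTRYPOINT ["echo", "wtf"]
CMD ["wtf wtf"]
Now the container prints wtf wtf wtf, as originally expected, because the CMD array is appended to the ENTRYPOINT array; overriding it with docker run <image> a b prints wtf a b instead.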
Related
I have a Dockerfile where I start an executable with default arguments like this:
ENTRYPOINT ["executable", "cmd"]
CMD ["--param1=1", "--param2=2"]
This works fine and I can run the container with default arguments:
docker run image_name
or with custom arguments:
docker run image_name --param1=a --param2=2
Now I would like to have a default parameter depend on an environment variable, or fall back to the default value (1), like this:
--param1='${PARAM1:-1}'
I understand that
ENTRYPOINT ["executable", "cmd"]
CMD ["--param1='${PARAM1:-1}'", "--param2=2"]
does not work, since CMD in exec form does not invoke a command shell and therefore cannot substitute environment variables.
But if I use CMD in shell form:
ENTRYPOINT ["executable", "cmd"]
CMD "--param1='${PARAM1:-1}' --param2=2"
I get no such option: -c
So my question is:
How can I achieve environment variable substitution within the default arguments in CMD for my ENTRYPOINT?
One way would be to lose the CMD and wrap all the defaults up in a custom entrypoint. I try to avoid doing this, but sometimes it seems like the cleanest way, and you can be a lot more flexible:
Dockerfile:
COPY my-entrypoint.sh /somewhere/in/path/my-entrypoint
ENTRYPOINT ["my-entrypoint"]
my-entrypoint.sh
#!/bin/sh
ARGS="${#}"
if [ -z "${ARGS}" ]; then
ARGS="--param1=${PARAM1:-1} --param2=2"
fi
executable cmd $ARGS
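Assuming the image is tagged image_name as in the question, the defaults and overrides then resolve roughly like this:
docker run image_name                        # -> executable cmd --param1=1 --param2=2
docker run -e PARAM1=5 image_name            # -> executable cmd --param1=5 --param2=2
docker run image_name --param1=a --param2=b  # -> executable cmd --param1=a --param2=b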
You can't do this the way you describe, for the reasons you've laid out in the question. The ENTRYPOINT and CMD simply get concatenated together to form a single command line, and if either or both of those parts is a string rather than a JSON array it gets automatically converted to sh -c 'the string'.
ENTRYPOINT ["executable", "cmd"]
CMD "--param1='${PARAM1:-1}' --param2=2"
# Equivalently:
ENTRYPOINT ["executable", "cmd", "/bin/sh", "-c", "\"--param1=...\""]
CMD []
There are two techniques I'd suggest to work around this problem, though both require potentially substantial changes in the setup.
In Docker and Kubernetes, it turns out to generally be more convenient to pass options via environment variables than on the command line. This means your application needs to know to look for those variables, and supply some of the defaults you describe here. Some argument-parsing libraries support this out-of-the-box, but not all. Python's standard argparse library, for example, doesn't directly have environment-variable support, but you can still easily support them:
import argparse
import os
parser = argparse.ArgumentParser()
parser.add_argument('--param1', default=os.environ.get('PARAM1', '1'))
args = parser.parse_args()
print(args.param1)
# Uses --param1 option, or else $PARAM1 variable, or else default "1"
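On the Docker side the option then becomes an ordinary environment variable you can set at run time (a sketch; my-image is a placeholder name and the script above is assumed to be the image's CMD):
docker run my-image                 # uses the built-in default "1"
docker run -e PARAM1=5 my-image     # overrides it via the environment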
The other approach I generally recommend is to make CMD a well-formed shell command; don't try to split the command between CMD and ENTRYPOINT. This avoids the problem of Docker inserting the sh -c wrapper in the middle of the line.
# no ENTRYPOINT
CMD executable cmd --param1="${PARAM1:-1}" --param2=2
The ENTRYPOINT pattern that I do find useful is to use a wrapper script to provide defaults and do other first-time setup. If that script is a Bourne shell script and ends with exec "$@", then it will run the CMD as the main container process.
#!/bin/sh
# docker-entrypoint.sh
# In Docker specifically, default $PARAM1 to "docker", not "1".
: ${PARAM1:=docker}
# Run the main container command.
exec "$#"
ENTRYPOINT ["/docker-entrypoint.sh"] # must be a JSON array
CMD executable cmd --param2=2
(There is no requirement to have an ENTRYPOINT. Making ENTRYPOINT be an interpreter and putting the script name in CMD doesn't bring any benefit, and makes it harder to run debugging commands like docker run --rm my-image ls -l /app.)
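With the wrapper-script pattern above, such a debugging command still works: whatever you type after the image name replaces CMD, and the entrypoint hands it to exec "$@" (my-image is a placeholder name):
docker run --rm my-image ls -l /app
# -> /docker-entrypoint.sh sets PARAM1, then runs: exec ls -l /app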
I have a Dockerfile where I use a custom entrypoint.sh. In this script I want to use the ARG value which I pass from docker-compose to the Dockerfile.
The problem is that I don't get the content of the variable in the script; I just get the variable name.
For example:
ARGS ENVIROMENT=production
ENTRYPOINT ["/var/www/html/entrypoint.sh"]
CMD ["${ENVIROMENT}"]
entrypoint.sh
#!/bin/sh
cd /var/www/html
composer update
echo $1;
The echo $1 shows "${ENVIROMENT}" instead of the "production" I expect.
Ouch! You've hit a tricky point of Docker with this question.
But first, let me clarify some points here:
First of all, you have a typo in your example: it's ARG, not ARGS.
ARG allows you to define a build-time variable. That variable is only available while running docker image build, and you can override it with --build-arg. For example:
docker image build --build-arg ENVIROMENT=integration ...
ENV, on the other hand, allows you to define an environment variable which can be used at runtime.
You can find all the info you need in the official documentation for env and arg
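To summarise that difference for the variable in the question:
ARG ENVIROMENT=production   # build-time only; gone once the image is built
ENV ENVIROMENT=production   # stored in the image; visible to ENTRYPOINT/CMD at runtime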
Now, back to the point...
To make it simple:
Do not use both ENTRYPOINT and CMD when you want to pass some environment variable to your entrypoint from your cmd. It's just a pain. Really.
When you want to use an environment variable inside CMD, you have to either use the shell form, or prefix the command with sh -c in the exec form:
CMD ["sh", "-c", "echo ${GREETINGS}"]
#or
CMD echo ${GREETINGS}
Here is a Dockerfile that works with both syntaxes (just uncomment the CMD you want to use):
FROM debian:8
ENV GREETINGS="hello world"
#CMD ["sh", "-c", "echo ${GREETINGS}"]
#CMD echo ${GREETINGS}
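Either variant prints the value at run time, and you can still override it from docker run (the tag greetings-test is just a placeholder):
docker build -t greetings-test .
docker run greetings-test                          # hello world
docker run -e GREETINGS="bonjour" greetings-test   # bonjour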
You can find more detailed info in these issues:
Issue 5509
Issue 34772
I would like to complete Marc abouchacra's answer.
What is still missing is how to use the ARG command.
A possible solution could be:
ARG ENVIRONMENT=production
ENV ENVIRONMENT=$ENVIRONMENT
CMD exec /var/www/html/entrypoint.sh $ENVIRONMENT
The exec is there to make sure your entrypoint.sh runs as PID 1.
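Assuming the Dockerfile also copies the entrypoint.sh from the question, the build and run cycle then looks roughly like this (my-app is a placeholder tag):
docker image build --build-arg ENVIRONMENT=integration -t my-app .
docker run my-app                          # entrypoint.sh receives "integration" as $1
docker run -e ENVIRONMENT=staging my-app   # the runtime value wins: $1 is "staging"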
I saw for example in the Dockerfile of the postgres image (https://github.com/docker-library/postgres/blob/master/10/Dockerfile) that at the end the startup of the container is defined like this:
ENTRYPOINT ["docker-entrypoint.sh"]
EXPOSE 5432
CMD ["postgres"]
If I understand this right, the argument postgres is passed to docker-entrypoint.sh, so $1 in the script becomes postgres and the script is executed.
Now my question is whether I can define my own Dockerfile based on the postgres image (FROM postgres), overwrite the CMD of the base Dockerfile, and have it first execute a command on startup and then run docker-entrypoint.sh with the postgres argument?
Something like this:
FROM postgres
...
CMD <my-command> && ["postgres"]
You can create your own my-entrypoint.sh:
$ cat my-entrypoint.sh
#!/bin/bash
# do what you need here, e.g. <my-command>
# the next command will run the postgres entrypoint with all params
docker-entrypoint.sh "$@"
And your Dockerfile will look as follows:
FROM postgres
# your stuff
ENTRYPOINT ["my-entrypoint.sh"]
EXPOSE 5432
CMD ["postgres"]
Yes, you can do such a thing.
For the CMD instruction in a Dockerfile, you have 3 possible syntaxes.
Extract from
https://docs.docker.com/engine/reference/builder/#cmd
CMD ["executable","param1","param2"] (exec form, this is the preferred form)
CMD ["param1","param2"] (as default parameters to ENTRYPOINT)
CMD command param1 param2 (shell form)
So you can use any of them; for what you want to do, the shell form (the last one) seems well suited.
You can also launch a shell script that does all your stuff
Here's a simple Dockerfile
FROM centos:6.6
ENTRYPOINT ["/bin/bash", "-l", "-c"]
CMD ["echo", "foo"]
Unfortunately it doesn't work. Nothing is echoed when you run the resulting container.
If you comment out the ENTRYPOINT then it works. However, if you set the ENTRYPOINT to /bin/sh -c, then it fails again
FROM centos:6.6
ENTRYPOINT ["/bin/sh", "-c"]
CMD ["echo", "foo"]
I thought that was the default ENTRYPOINT for a container that didn't have one defined; why didn't that work?
Finally, this also works
FROM centos:6.6
ENTRYPOINT ["/bin/bash", "-l", "-c"]
CMD ["echo foo"]
Before I submit an issue, I wanted to see if I'm doing something obviously wrong?
I'm using rvm inside my container which sort of needs a login shell to work right.
Note that the default entry point/cmd for an official centos 6 image is:
no entrypoint
only CMD ["/bin/bash"]
If you are using the -c option, you need to pass one argument (which is the full command): "echo foo".
Not a series of arguments (CMD ["echo", "foo"]).
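Concretely, here is roughly what the container ends up running in each case (the quoting is added for readability):
# CMD ["echo", "foo"]  ->  /bin/bash -l -c 'echo' 'foo'   ('foo' becomes $0, so echo runs with no arguments)
# CMD ["echo foo"]     ->  /bin/bash -l -c 'echo foo'     (prints: foo)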
As stated in the Dockerfile CMD section:
If you use the shell form of the CMD, then the <command> will execute in /bin/sh -c:
FROM ubuntu
CMD echo "This is a test." | wc -
If you want to run your <command> without a shell then you must express the command as a JSON array and give the full path to the executable
Since echo is a built-in command in the bash and C shells, the shell form here is preferable.
Is there a way to execute a command as an argument in a Dockerfile ENTRYPOINT? I am creating an image that should automatically run mpirun for the number of processors, i.e., mpirun -np $(nproc) or mpirun -np $(getconf _NPROCESSORS_ONLN).
The following line works:
ENTRYPOINT ["/tini", "--", "mpirun", "-np", "4"] # works
But I cannot get an adaptive form to work:
ENTRYPOINT ["/tini", "--", "mpirun", "-np", "$(nproc)"] # doesn't work
ENTRYPOINT ["/tini", "--", "mpirun", "-np", "$(getconf _NPROCESSORS_ONLN)"] # doesn't work
Using the backtick `nproc` notation does not work either. Nor can I pass an environment variable to the command.
ENV processors 4
ENTRYPOINT ["/tini", "--", "mpirun", "-np", "$processors"] # doesn't work
Has anyone managed to get this kind of workflow?
Those likely won't work: see issue 4783
ENTRYPOINT and CMD are special, as they get started without a shell (so you can choose your own) and iirc they are escaped too.
Unlike the shell form, the exec form does not invoke a command shell.
This means that normal shell processing does not happen.
For example, ENTRYPOINT [ "echo", "$HOME" ] will not do variable substitution on $HOME.
If you want shell processing then either use the shell form or execute a shell directly, for example: ENTRYPOINT [ "sh", "-c", "echo $HOME" ].
A workaround would be to use a script.
COPY docker-entrypoint.sh /
ENTRYPOINT ["/docker-entrypoint.sh"]
That script, when docker run starts it, does see the environment variables, so variable expansion and command substitution work inside it.
See for example the Dockerfile of vromero/activemq-artemis-docker, which runs the script docker-entrypoint.sh.
In order to allow CMD to run as well, the scripts end with:
exec "$#"
(It will execute whatever parameter comes after, either from the CMD directive, or from docker run parameters)
The OP Gilly adds in the comments:
I use in the Dockerfile:
COPY docker-entrypoint.sh /docker-entrypoint.sh
ENTRYPOINT ["/tini", "--", "/docker-entrypoint.sh"]
And in the entrypoint script:
#!/bin/bash
exec mpirun -np $(nproc) "$@"
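With that in place, anything passed after the image name on docker run ends up behind the -np flag (my-image and my_mpi_app are placeholder names):
docker run my-image ./my_mpi_app --some-flag
# -> mpirun -np <number of cores> ./my_mpi_app --some-flag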
It is because you are using the exec form for your ENTRYPOINT, and variable substitution does not happen in the exec form.
This is the exec form:
ENTRYPOINT ["executable", "param1", "param2"]
this is the shell form:
ENTRYPOINT command param1 param2
From the official documentation:
Unlike the shell form, the exec form does not invoke a command shell. This means that normal shell processing does not happen. For example, ENTRYPOINT [ "echo", "$HOME" ] will not do variable substitution on $HOME