Default Docker entrypoint

I am creating an image from another image that sets a specific entrypoint. However, I want my image to have the default one. How do I reset the ENTRYPOINT?
I tried the following Dockerfile:
FROM some-image
ENTRYPOINT ["/bin/sh", "-c"]
Unfortunately it doesn't behave like the default entrypoint, because it needs the command to be quoted.
docker run myimage ls -l / # "-l /" arguments are ignored
file1 file2 file3 # files in current working directory
docker run myimage "ls -l /" # works correctly
How do I use commands without quoting?

To disable an existing ENTRYPOINT, set an empty array in your Dockerfile:
ENTRYPOINT []
Then the arguments you pass to docker run are executed directly, just as they would be with no ENTRYPOINT at all.
The reason your ENTRYPOINT ["/bin/sh", "-c"] requires quoted strings is that without the quotes, the arguments to ls are being passed to sh instead.
Unquoted, each word is sent to sh as a separate argument:
"/bin/sh", "-c", "ls", "-l", "/"
Quoting allows the complete command to be passed to sh -c as one argument:
"/bin/sh", "-c", "ls -l /"

This isn't really related to Docker. Try running the following:
/bin/sh -c echo foo
/bin/sh -c "echo foo"
The -c option means that /bin/sh only picks up one argument: the command string. So removing the -c from the entrypoint you define should fix it. This is more flexible than resetting the entrypoint; e.g. you can do this to use Software Collections:
ENTRYPOINT ["scl", "enable", "devtoolset-4", "--", "bash"]

Note: beware of ENTRYPOINT [].
As mentioned in moby/moby issue 3465 ("Reset properties inherited from parent image"), Brendon C. notes:
Looks like ENTRYPOINT [] and ENTRYPOINT [""] both invalidate the cache on each build when not using BuildKit.
Simple Dockerfile to demonstrate:
FROM jrottenberg/ffmpeg:4.3-alpine311 as base
ENTRYPOINT []
RUN echo "HERE!"
Steps 2 and 3 will never use cache. This is my workaround:
FROM jrottenberg/ffmpeg:4.3-alpine311 as base
ENTRYPOINT ["/usr/bin/env"]
RUN echo "HERE!"

Related

CMD and ENTRYPOINT with script, same Dockerfile

Trying to run a pod based on an image with this Dockerfile:
...
ENTRYPOINT [ "./mybashscript", ";", "flask" ]
CMD [ "run" ]
I would expect the full command to be ./mybashscript; flask run.
However, in this example, the pod / container executes ./mybashscript but not flask.
I also tried a couple of variations like:
...
ENTRYPOINT [ "/bin/bash", "-c", "./mybashscript && flask" ]
CMD [ "run" ]
Now, flask gets executed but run is ignored.
PS: I am trying to understand why this doesn't work and I am aware that I can fit all into the entrypoint or shove everything inside the bash script, but that is not the point.
In both cases you show here, you use the JSON-array exec form for ENTRYPOINT and CMD. This means no shell is run, except in the second case where you run it explicitly. The two parts are just combined together into a single command.
The first construct runs the script ./mybashscript, which must be executable and have a valid "shebang" line (probably #!/bin/bash). The script is passed three arguments, which you can see in the shell variables $1, $2, and $3: a semicolon ;, flask, and run.
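To make that concrete, a hypothetical ./mybashscript that only prints what it receives would show:
#!/bin/bash
# hypothetical ./mybashscript, for illustration only
echo "got $# arguments: '$1' '$2' '$3'"
With the first ENTRYPOINT/CMD pair above it prints: got 3 arguments: ';' 'flask' 'run'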
The second construct runs /bin/bash -c './mybashscript && flask' run. bash -c takes a single argument, which is ./mybashscript && flask; the remaining argument run is interpreted as a positional parameter, and the command inside bash -c would see it as $0.
The arbitrary split of ENTRYPOINT and CMD you show doesn't really make sense. The only really important difference between the two is that it is easier to change CMD when you run the container, for example by putting it after the image name in a docker run command. It makes sense to put all of the command in the command part, or none of it, but not really to put half of the command in one part and half in another.
My first pass here would be to write:
# no ENTRYPOINT
CMD ./mybashscript && flask run
Since this is the bare-string shell form, Docker inserts an sh -c wrapper for you, so the && has its usual Bourne-shell meaning.
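In argv terms, Docker stores that bare-string CMD roughly as:
CMD ["/bin/sh", "-c", "./mybashscript && flask run"]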
This setup looks like you're trying to run an initialization script before the main container command. There's a reasonably standard pattern of using an ENTRYPOINT for this. Since it gets passed the CMD as parameters, the script can end with exec "$@" to run the CMD (potentially as overridden in the docker run command). The entrypoint script could look like
#!/bin/sh
# entrypoint.sh
./mybashscript
exec "$#"
(If you wrote mybashscript, you could also end it with the exec "$@" line and use that script as the entrypoint.)
In the Dockerfile, set this wrapper script as the ENTRYPOINT, and then whatever the main command is as the CMD.
ENTRYPOINT ["./entrypoint.sh"] # must be a JSON array
CMD ["flask", "run"] # can be either form
If you provide an alternate command, it replaces CMD, and so the exec "$@" line will run that command instead of what's in the Dockerfile, but the ENTRYPOINT wrapper still runs.
# See the environment the wrapper sets up
docker run --rm your-image env
# Double-check the data directory setup
docker run --rm -v $PWD/data:/data your-image ls -l /data
If you really want to use the sh -c form and the split ENTRYPOINT, then the command inside sh -c has to read "$@" to find its positional arguments (the CMD), plus you need to know that the first argument fills $0 and not $1. The form you show would be functional if you wrote
# not really recommended but it would work
ENTRYPOINT ["/bin/sh", "-c", "./mybashscript && flask \"$#\"", "flask"]
CMD ["run"]

How to pass an unknown list of environment variables to a command in Dockerfile

I have a very long and often changing list of environment variables which I need to pass to the same Docker image when starting it. These environment variables are configured in a Rancher environment and will be passed individually as such. They should all be passed to the command that is about to start within the image.
When I had just a few parameters it was possible to pass them while having them explicitly declared in the Dockerfile:
CMD [ "sh", "-c", "node src/server.js --param1=$ENV_PARAM_1" --param2=$ENV_PARAM_2 ... --paramN=$ENV_PARAM_N"" ]
Now this is not possible anymore because the list has grown too long and changes frequently. I also can't build a new image per use case.
I need something like:
CMD [ "sh", "-c", "node src/server.js $PRINT_ALL_MY_PARAMS_HERE" ]
Side note: The command will fail when providing command arguments that are unknown to the command.
Any idea how I could solve this?
You can override CMD when you run the container. Say you've built an image with a default command
CMD node src/server.js
When you go to actually run the container, you can override this with whatever you want
docker run \
-d -p ... \
my/image \
node src/server.js --param1=$ENV_PARAM_1 --param2=$ENV_PARAM_2 ...
As I've written it here the $ENV_PARAM_N will be resolved by the host system's shell, but if a tool is launching the container for you that might not be a problem. If some of the values are from Dockerfile ENV directives you'll need to force the container shell to do the expansion
docker run \
-d -p ... \
-e ENV_PARAM_2=not-in-the-dockerfile \
my/image \
sh -c 'node src/server.js --param1=$ENV_PARAM_1 --param2=$ENV_PARAM_2 ...'
There's also a pattern of using the ENTRYPOINT as the main program to run and using CMD only for additional options.
ENTRYPOINT ["node", "src/server.js"]
CMD []
docker run \
-d -p ... \
my/image \
--param1=$ENV_PARAM_1 --param2=$ENV_PARAM_2 ...
However, note in this case that you cannot ask the container shell to expand things for you. ENTRYPOINT must use the JSON-array syntax, and you can't usefully insert an sh -c anywhere in this command. (sh -c consumes only a single shell "word" as its command, and any other options you write after that will generally get ignored.)
You could use ENTRYPOINT to define the part that should always be there when launching the container and CMD for the part that is overridden by the command given at container launch:
ENTRYPOINT [ "sh", "-c", "node src/server.js"]
CMD ["--param1=$ENV_PARAM_1", "--param2=$ENV_PARAM_2",... "--paramN=$ENV_PARAM_N"]
This way you can have different parameters for each run:
docker run server # Executes ENTRYPOINT + CMD from Dockerfile
docker run server --help # Executes ENTRYPOINT + "--help"

How are CMD and ENTRYPOINT exec forms in a docker file parsed?

I am running Jupyter in a Docker container. The following shell form will run fine:
CMD jupyter lab --ip='0.0.0.0' --port=8888 --no-browser --allow-root /home/notebooks
But the following one in the Dockerfile will not:
ENTRYPOINT ["/bin/sh", "-c"]
CMD ["jupyter", "lab", "--ip='0.0.0.0'", "--port=8888", "--no-browser", "--allow-root", "/home/notebooks"]
The error is:
usage: jupyter [-h] [--version] [--config-dir] [--data-dir] [--runtime-dir] [--paths] [--json] [subcommand]
jupyter: error: one of the arguments --version subcommand --config-dir --data-dir --runtime-dir --paths is required
So obviously /bin/sh -c sees the jupyter argument, but not the following ones.
Interestingly,
CMD ["jupyter", "lab", "--ip='0.0.0.0'", "--port=8888", "--no-browser", "--allow-root", "/home/notebooks"]
will run fine, so it cannot be the number of arguments, or can it?
According to https://docs.docker.com/engine/reference/builder/#cmd, the shell form of CMD executes with /bin/sh -c. So from my point of view I see little difference between the two versions. But the reason must be how the exec forms are evaluated when ENTRYPOINT and CMD are present at the same time.
At a very low level, Linux commands are executed as a series of "words". Typically your shell will take a command line like ls -l "a directory" and break that into three words: ls, -l, and a directory. (Note the space in "a directory": in the shell form it needs to be quoted to stay in the same word.)
The Dockerfile CMD and ENTRYPOINT (and RUN) commands have two forms. In the form you've specified that looks like a JSON array, you are explicitly specifying how the words get broken up. If it doesn't look like a JSON array then the whole thing is taken as a single string, and wrapped in an sh -c command.
# Explicitly spelling out the words
RUN ["ls", "-l", "a directory"]
# Asking Docker to run it via a shell
RUN ls -l 'a directory'
# The same as
RUN ["sh", "-c", "ls -l 'a directory'"]
If you specify both ENTRYPOINT and CMD the two lists of words just get combined together. The important thing for your example is that sh -c takes the single next word and runs it as a shell command; any remaining words can be used as $0, $1, ... positional arguments within that command string.
So in your example, the final thing that gets run is more or less
ENTRYPOINT+CMD ["sh", "-c", "jupyter", ...]
# If the string "jupyter" contained "$1" it would expand to the --ip option
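If you want to see the two halves before they are combined, docker inspect shows them on the built image (the tag myimage is assumed); roughly:
docker inspect --format '{{.Config.Entrypoint}} {{.Config.Cmd}}' myimage
# [/bin/sh -c] [jupyter lab --ip='0.0.0.0' --port=8888 --no-browser --allow-root /home/notebooks]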
The other important corollary to this is that, practically, ENTRYPOINT can't be the bare-string format: when the CMD is appended to it you get
ENTRYPOINT some command
CMD with args
ENTRYPOINT+CMD ["sh", "-c", "some command", "sh", "-c", "with args"]
and by the same rule all of the CMD words get ignored.
In practice you almost never need to explicitly put sh -c or a SHELL declaration in a Dockerfile; use a string-form command instead, or put complex logic into a shell script.
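For the Jupyter image above, that means dropping the sh -c ENTRYPOINT entirely and keeping a single CMD in either form:
# shell form; Docker adds the sh -c wrapper itself
CMD jupyter lab --ip='0.0.0.0' --port=8888 --no-browser --allow-root /home/notebooks
# or exec form, one word per array element
CMD ["jupyter", "lab", "--ip='0.0.0.0'", "--port=8888", "--no-browser", "--allow-root", "/home/notebooks"]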

Add arguments to entrypoint/cmd for different containers

I have this simple node.js image:
FROM node:12
USER root
WORKDIR /app
COPY package.json .
COPY package-lock.json .
RUN npm i --production
COPY . .
ENTRYPOINT node dist/main.js
Ultimately, I just want to be able to pass different arguments to node dist/main.js like so:
docker run -d my-image --foo --bar=3
so that the executable when run is
node dist/main.js --foo --bar=3
I have read about CMD / ENTRYPOINT and I don't know how to do this, anybody know?
I would suggest writing a custom entrypoint script to handle this case.
In general you might find it preferable to use CMD over ENTRYPOINT in most cases. In particular, the debugging shell pattern of
docker run --rm -it myimage sh
is really useful, and using ENTRYPOINT to run your main application breaks this. The entrypoint script pattern I’m about to describe is also really useful in general and it’s easy to drop in if your main container process is described with CMD.
ENTRYPOINT ["/app/entrypoint.sh"]
CMD ["node", "dist/main.js"]
The script itself is an ordinary shell script that gets passed the CMD as command-line arguments. It will typically end with exec "$@" to actually run the CMD as the main container process.
Since the entrypoint script is a shell script, and it gets passed the command from the docker run command line as arguments, you can do dynamic switching on it, and meet both your requirement to just be able to pass additional options to your script and also my requirement to be able to run arbitrary programs instead of the Node application.
#!/bin/sh
if [ "$#" = 0 ]; then
  # no command at all: run the default application
  exec node dist/main.js
else
  case "$1" in
    # options only: append them to the default command
    -*) exec node dist/main.js "$@" ;;
    # a complete alternate command: run it as-is
    *) exec "$@" ;;
  esac
fi
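With that script as the ENTRYPOINT and CMD ["node", "dist/main.js"], the cases from the question behave like this (using the my-image name from the question):
docker run -d my-image # runs node dist/main.js
docker run -d my-image --foo --bar=3 # runs node dist/main.js --foo --bar=3
docker run --rm -it my-image sh # gives you a debugging shell instead of the app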
This seems to work:
ENTRYPOINT ["node", "dist/main.js"]
CMD []
which appears to be equivalent to just:
ENTRYPOINT ["node", "dist/main.js"]
You can't use single quotes in the array (double quotes are necessary), and you have to use the exec (JSON-array) syntax; not sure why, but this style does not work:
ENTRYPOINT node dist/main.js
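The shell form is stored as an sh -c wrapper, so anything you append on docker run becomes a positional parameter of that wrapper shell and never reaches node; roughly:
# ENTRYPOINT node dist/main.js, then: docker run my-image --foo --bar=3
# effective command: /bin/sh -c "node dist/main.js" --foo --bar=3
# --foo becomes $0 of the shell, --bar=3 becomes $1; node sees neither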

Docker CMD weirdness when ENTRYPOINT is a shell script

Here's a simple Dockerfile
FROM centos:6.6
ENTRYPOINT ["/bin/bash", "-l", "-c"]
CMD ["echo", "foo"]
Unfortunately it doesn't work. Nothing is echo'd when you run the resulting container that's built.
If you comment out the ENTRYPOINT then it works. However, if you set the ENTRYPOINT to /bin/sh -c, then it fails again
FROM centos:6.6
ENTRYPOINT ["/bin/sh", "-c"]
CMD ["echo", "foo"]
I thought that was the default ENTRYPOINT for a container that didn't have one defined; why didn't that work?
Finally, this also works
FROM centos:6.6
ENTRYPOINT ["/bin/bash", "-l", "-c"]
CMD ["echo foo"]
Before I submit an issue, I wanted to see if I'm doing something obviously wrong?
I'm using rvm inside my container which sort of needs a login shell to work right.
Note that the default ENTRYPOINT/CMD for the official centos:6 image is:
no entrypoint
only CMD ["/bin/bash"]
If you are using the -c option, you need to pass one argument (which is the full command): "echo foo".
Not a series of arguments (CMD ["echo", "foo"]).
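In argv terms, the failing and the working Dockerfiles run, respectively:
/bin/bash -l -c echo foo # "echo" is the whole command string, "foo" only fills $0; only a blank line is printed
/bin/bash -l -c "echo foo" # prints foo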
As stated in dockerfile CMD section:
If you use the shell form of the CMD, then the <command> will execute in /bin/sh -c:
FROM ubuntu
CMD echo "This is a test." | wc -
If you want to run your <command> without a shell then you must express the command as a JSON array and give the full path to the executable
Since echo is a built-in command in the bash and C shells, the shell form here is preferable.

Resources