Opposite command to ENTRYPOINT - docker

Is there a way to run a command on container stop either using the Dockerfile or in docker-compose.yml using Docker Compose? Essentially I want a command that runs on container exit (opposite to ENTRYPOINT).

Sure, ENTRYPOINT can do that. It receives the CMD as command-line arguments. Usually your ENTRYPOINT script will want to exec "$@" to run the CMD after doing its setup, but if you're willing to take on the responsibilities of being process ID 1, you can run the CMD as a subprocess and do things after it exits.
#!/bin/sh
echo "BEFORE"
# Run the CMD as a subprocess and remember its exit status
"$@"
STATUS=$?
# Anything here runs after the main process exits
echo "AFTER"
exit "$STATUS"
Note that the set of things you can usefully do at termination is pretty limited since your filesystem is about to go away.
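Note also that this only covers the case where the CMD exits on its own. If you want the cleanup to run when the container is stopped with docker stop, the wrapper (which is PID 1) also has to catch and forward SIGTERM itself; a minimal sketch, using the same structure as above:
#!/bin/sh
echo "BEFORE"
# Run the CMD as a background child so we can trap signals ourselves
"$@" &
CHILD=$!
# Forward docker stop's SIGTERM (and Ctrl-C's SIGINT) to the child
trap 'kill -TERM "$CHILD" 2>/dev/null' TERM INT
wait "$CHILD"
STATUS=$?
echo "AFTER"
exit "$STATUS"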
Also note that this requires you to run your "normal" process as CMD, but for reasons like this I tend to think of that as better practice in any case. Your Dockerfile would look something like
...
COPY entrypoint.sh /
ENTRYPOINT ["/entrypoint.sh"]
CMD ["whateverd", "--foreground"]

Related

How do I get a docker container to automatically execute a bash script once it starts up?

I'm stuck trying to achieve the objective described in the title. I've tried various options, the last of which is found in this article. Currently my Dockerfile is as follows:
FROM ubuntu:18.04
EXPOSE 8081
CMD cd /var/www/html/components
CMD "bash myscript start" "-D" "FOREGROUND"
#ENTRYPOINT ["bash", "myscript", "start"]
Neither the CMD..."FOREGROUND" line nor the commented-out ENTRYPOINT line works. However, when I open an interactive shell into the container, cd into the /var/.../components folder, and execute the exact same command to run the script, it works.
What do I need to change?
Once you've copied your .sh file into the image, run it with CMD. This is a snippet:
ADD ./configure.and.run.myapp.sh /tmp/
RUN chmod +x /tmp/configure.and.run.myapp.sh
...
CMD ["sh", "-c", "/tmp/configure.and.run.myapp.sh"]
I see three problems with the Dockerfile you've shown.
There are multiple CMDs. A Docker container only runs one command (and then exits); if you have multiple CMD directives then only the last one has an effect. If you want to change directories, use the WORKDIR directive instead.
Nothing is COPYd into the image. Unless you explicitly COPY your script into the image, it won't be there when you go to run it.
The CMD has too many quotes. In particular, the quotes around "bash myscript start" make it into a single shell word, and so the system looks for an executable program named exactly that, including spaces as part of the filename.
You should be able to correct this to something more like:
FROM ubuntu:18.04
# Instead of `CMD cd`; a short path like /app is very common
WORKDIR /var/www/html/components
# Make sure the application is part of the image
COPY ./ ./
EXPOSE 8081
# If the script is executable and begins with #!/bin/sh then
# you don't need to explicitly say "bash"; you probably do need
# the path if it's not in /usr/local/bin or similar
CMD ./myscript start -D FOREGROUND
(I tend to avoid ENTRYPOINT here, for two main reasons. It's easier to docker run --rm -it your-image bash to get a debugging shell or run other one-off commands without an ENTRYPOINT, especially if the command requires arguments. There's also a useful pattern of using ENTRYPOINT to do first-time setup before running the CMD and this is a little easier to set up if CMD is already the main container command.)

How to use environment variables in CMD for ENTRYPOINT arguments in Dockerfile?

I have a Dockerfile where I start an executable with default arguments like this:
ENTRYPOINT ["executable", "cmd"]
CMD ["--param1=1", "--param2=2"]
This works fine and I can run the container with default arguments:
docker run image_name
or with custom arguments:
docker run image_name --param1=a --param2=2
Now I would like to have a default parameter depend on an environment variable, or fall back to the default value (1), like this:
--param1='${PARAM1:-1}'
I understand that
ENTRYPOINT ["executable", "cmd"]
CMD ["--param1='${PARAM1:-1}'", "--param2=2"]
does not work since CMD is in exec form and does not invoke a command shell and cannot substitute environment variables.
But if I use CMD in shell form:
ENTRYPOINT ["executable", "cmd"]
CMD "--param1='${PARAM1:-1}' --param2=2"
I get no such option: -c
So my question is:
How can I achieve environment variable substitution within the default arguments in CMD for my ENTRYPOINT?
One way would be to lose the CMD and wrap all the defaults up in a custom entrypoint. I try to avoid doing this, but sometimes it seems like the cleanest way, and you can be a lot more flexible:
Dockerfile:
COPY my-entrypoint.sh /somewhere/in/path/my-entrypoint
ENTRYPOINT ["my-entrypoint"]
my-entrypoint.sh:
#!/bin/sh
# Use whatever arguments were passed in, or fall back to defaults
ARGS="$*"
if [ -z "${ARGS}" ]; then
  ARGS="--param1=${PARAM1:-1} --param2=2"
fi
# Word splitting of $ARGS is deliberate here
exec executable cmd $ARGS
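With that in place, the defaults come from the environment at run time (assuming the image is tagged my-image):
docker run my-image                  # executable cmd --param1=1 --param2=2
docker run -e PARAM1=5 my-image      # executable cmd --param1=5 --param2=2
docker run my-image --param1=a       # arguments replace the defaults entirely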
You can't do this the way you describe, for the reasons you've laid out in the question. The ENTRYPOINT and CMD simply get concatenated together to form a single command line, and if either or both of those parts is a string rather than a JSON array it gets automatically converted to sh -c 'the string'.
ENTRYPOINT ["executable", "cmd"]
CMD "--param1='${PARAM1:-1}' --param2=2"
# Equivalently:
ENTRYPOINT ["executable", "cmd", "/bin/sh", "-c", "\"--param1=...\""]
CMD []
There are two techniques I'd suggest to work around this problem, though both require potentially substantial changes in the setup.
In Docker and Kubernetes, it turns out to generally be more convenient to pass options via environment variables than on the command line. This means your application needs to know to look for those variables, and supply some of the defaults you describe here. Some argument-parsing libraries support this out-of-the-box, but not all. Python's standard argparse library, for example, doesn't directly have environment-variable support, but you can still easily support them:
import argparse
import os
parser = argparse.ArgumentParser()
parser.add_argument('--param1', default=os.environ.get('PARAM1', '1'))
args = parser.parse_args()
print(args.param1)
# Uses --param1 option, or else $PARAM1 variable, or else default "1"
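The Dockerfile side then needs no argument plumbing at all; you set the variable when you start the container (my-image is a hypothetical image name):
docker run -e PARAM1=5 my-image    # --param1 defaults to 5
docker run my-image                # falls back to the built-in default of 1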
The other approach I generally recommend is to make CMD a well-formed shell command; don't try to split the command between CMD and ENTRYPOINT. This avoids the problem of Docker inserting the sh -c wrapper in the middle of the line.
# no ENTRYPOINT
CMD executable cmd --param1="${PARAM1:-1}" --param2=2
The ENTRYPOINT pattern that I do find useful is to use a wrapper script to provide defaults and do other first-time setup. If that script is a Bourne shell script and ends with exec "$@", then it will run the CMD as the main container process.
#!/bin/sh
# docker-entrypoint.sh
# In Docker specifically, default $PARAM1 to "docker", not "1".
: "${PARAM1:=docker}"
export PARAM1
# Run the main container command.
exec "$@"

# ENTRYPOINT must be a JSON array for this to work
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD executable cmd --param2=2
(There is no requirement to have an ENTRYPOINT. Making ENTRYPOINT be an interpreter and putting the script name in CMD doesn't bring any benefit, and makes it harder to run debugging commands like docker run --rm my-image ls -l /app.)

Run a command without entry point

How can I run a command against a container and tell docker not to run the entry point? e.g.
docker-compose run foo bash
The above will run the entry point of the foo machine. How to prevent it temporarily without modifying Dockerfile?
docker-compose run --entrypoint=bash foo bash
It'll run a nested bash, a bit useless, but you'll have your prompt.
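Outside of Compose, the plain docker CLI has the same override flag (my-image is a placeholder name):
docker run --rm -it --entrypoint bash my-image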
If you control the image, consider moving the entire default command line into the CMD instruction. Docker concatenates the ENTRYPOINT and CMD together when running a container, so you can do this yourself at build time.
# Bad: prevents operators from running any non-Python command
ENTRYPOINT ["python"]
CMD ["myapp.py"]
# Better: allows overriding command at runtime
CMD ["python", "myapp.py"]
This is technically "modifying the Dockerfile" but it won't change the default operation of your container: if you don't specify entrypoint: or command: in the docker-compose.yml then it will run the exact same command, but it also allows running things like debug shells in the way you're trying to.
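With that change, one-off commands run directly, with no entrypoint in the way; for example (foo as in your Compose file):
docker-compose run --rm foo bash       # debugging shell
docker-compose run --rm foo python -V  # any other one-off command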
I tend to reserve ENTRYPOINT for two cases. There's a common pattern of using an ENTRYPOINT to do some first-time setup (e.g., running database migrations) and then exec "$@" to run whatever was passed as CMD. This preserves the semantics of CMD (your docker-compose run bash will still work, but migrations will happen first). If I'm building a FROM scratch or other "distroless" image where it's actually impossible to run other commands (there isn't a /bin/sh at all), then making the single thing in the image be the ENTRYPOINT makes sense.

Add arguments to entrypoint/cmd for different containers

I have this simple node.js image:
FROM node:12
USER root
WORKDIR /app
COPY package.json .
COPY package-lock.json .
RUN npm i --production
COPY . .
ENTRYPOINT node dist/main.js
Ultimately, I just want to be able to pass different arguments to node dist/main.js like so:
docker run -d my-image --foo --bar=3
so that the command that actually runs is
node dist/main.js --foo --bar=3
I have read about CMD and ENTRYPOINT, but I don't know how to do this. Does anybody know?
I would suggest writing a custom entrypoint script to handle this case.
In general you might find it preferable to use CMD instead of ENTRYPOINT in most cases. In particular, the debugging-shell pattern of
docker run --rm -it myimage sh
is really useful, and using ENTRYPOINT to run your main application breaks this. The entrypoint script pattern I’m about to describe is also really useful in general and it’s easy to drop in if your main container process is described with CMD.
ENTRYPOINT ["/app/entrypoint.sh"]
CMD ["node", "dist/main.js"]
The script itself is an ordinary shell script that gets passed the CMD as command-line arguments. It will typically end with exec "$@" to actually run the CMD as the main container process.
Since the entrypoint script is a shell script, and it gets passed the command from the docker run command line as arguments, you can do dynamic switching on it, and meet both your requirement to just be able to pass additional options to your script and also my requirement to be able to run arbitrary programs instead of the Node application.
#!/bin/sh
if [ "$#" -eq 0 ]; then
  # No command at all: run the default application
  exec node dist/main.js
else
  case "$1" in
    # First word looks like an option: append everything to the default command
    -*) exec node dist/main.js "$@" ;;
    # Otherwise run whatever command was given instead
    *) exec "$@" ;;
  esac
fi
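With those two Dockerfile lines and this script in place, all of the following work (assuming the image is tagged my-image):
docker run -d my-image                  # node dist/main.js
docker run -d my-image --foo --bar=3    # node dist/main.js --foo --bar=3
docker run --rm -it my-image sh         # a debugging shell instead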
This seems to work:
ENTRYPOINT ["node", "dist/main.js"]
CMD []
which appears to be equivalent to just:
ENTRYPOINT ["node", "dist/main.js"]
Note that you can't use single quotes (double quotes are necessary), and you have to use the exec (JSON-array) form rather than the shell form; this style does not work:
ENTRYPOINT node dist/main.js
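That's because Docker wraps the shell form in /bin/sh -c, and anything you append on the docker run command line lands after that inline script, where the shell treats it as positional parameters instead of executing it:
# ENTRYPOINT node dist/main.js effectively becomes:
["/bin/sh", "-c", "node dist/main.js", "--foo", "--bar=3"]
# sh -c only executes the first string; the extra words become $0, $1, ...
# of the inline script, so --foo and --bar=3 are silently ignored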

Execute a script before CMD

As per Docker documentation:
There can only be one CMD instruction in a Dockerfile. If you list more than one CMD then only the last CMD will take effect.
I wish to execute a simple bash script (which processes a Docker environment variable) before the CMD command (which is init in my case).
Is there any way to do this?
Use a custom entrypoint
Make a custom entrypoint which does what you want, and then exec's your CMD at the end.
NOTE: if your image already defines a custom entrypoint, you may need to extend it rather than replace it, or you may change behavior you need.
entrypoint.sh:
#!/bin/sh
## Do whatever you need with env vars here ...
# Hand off to the CMD
exec "$#"
Dockerfile:
COPY entrypoint.sh /entrypoint.sh
RUN chmod 755 /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
Docker will run your entrypoint, using CMD as arguments. If your CMD is init, then:
/entrypoint.sh init
The exec at the end of the entrypoint script takes care of handing off to CMD when the entrypoint is done with what it needed to do.
Why this works
The use of ENTRYPOINT and CMD frequently confuses people new to Docker. In comments, you expressed confusion about it. Here is how it works and why.
The ENTRYPOINT is the initial thing run inside the container. It takes the CMD as an argument list. Therefore, in this example, what is run in the container is this argument list:
# ENTRYPOINT = /entrypoint.sh
# CMD = init
["/entrypoint.sh", "init"]
# or shown in a simpler form:
/entrypoint.sh init
It is not required that an image have an ENTRYPOINT. If you don't define one, Docker has a default: /bin/sh -c.
So with your original situation, no ENTRYPOINT, and using a CMD of init, Docker would have run this:
/bin/sh -c 'init'
^--------^ ^--^
     |       \------- CMD
     \--------------- ENTRYPOINT
In the beginning, Docker offered only CMD, and /bin/sh -c was hard-coded as the ENTRYPOINT (you could not change it). At some point along the way, people had use cases where they had to do more custom things, and Docker exposed ENTRYPOINT so you could change it to anything you want.
In the example I show above, the ENTRYPOINT is replaced with a custom script. (Though it is still ultimately being run by sh, because it starts with #!/bin/sh.)
That ENTRYPOINT takes the CMD as its arguments. At the end of the entrypoint.sh script is exec "$@". Since $@ expands to the list of arguments given to the script, this is turned into
exec "init"
And therefore, when the script is finished, it goes away and is replaced by init as PID 1. (That's what exec does - it replaces the current process with a different command.)
How to include CMD
In the comments, you asked about adding CMD in the Dockerfile. Yes, you can do that.
Dockerfile:
CMD ["init"]
Or if there is more to your command, e.g. arguments like init -a -b, it would look like this:
CMD ["init", "-a", "-b"]
Dan's answer was correct, but I found it rather confusing to implement. For those in the same situation, here are code examples of how I implemented his explanation of the use of ENTRYPOINT instead of CMD.
Here are the last few lines in my Dockerfile:
# Change to the directory where the mergeandlaunch script is located.
WORKDIR /home/connextcms
ENTRYPOINT ["./mergeandlaunch", "node", "keystone.js"]
Here are the contents of the mergeandlaunch bash shell script:
#!/bin/bash
#This script should be edited to execute any merge scripts needed to
#merge plugins and theme files before starting ConnextCMS/KeystoneJS.
echo Running mergeandlaunch script
#Execute merge scripts. Put in path to each merge script you want to run here.
cd ~/theme/rtb4/
./merge-plugin
#Launch KeystoneJS and ConnextCMS
cd ~/myCMS
exec "$#"
Here is how the code gets executed:
The ENTRYPOINT instruction kicks off the mergeandlaunch shell script.
The two arguments 'node' and 'keystone.js' are passed along to the shell script.
At the end of the script, the arguments are passed on to the exec command.
The exec command then launches my node program the same way the CMD instruction would.
Thanks to Dan for his answer.
Although I found I had to do something like this within the Dockerfile:
WORKDIR /
COPY startup.sh /
RUN chmod 755 /startup.sh
ENTRYPOINT sh /startup.sh /usr/sbin/init
NOTE: I named the script startup.sh as opposed to entrypoint.sh
The key here was that I needed to provide 'sh' otherwise I kept getting "no such file..." errors coming out of 'docker logs -f container_name'.
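(That "no such file" error usually means the script is missing its shebang line, isn't marked executable, or was saved with Windows line endings; with #!/bin/sh as the first line and the chmod 755 in place, the exec form ENTRYPOINT ["/startup.sh", "/usr/sbin/init"] should also work.)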
See:
https://github.com/docker/compose/issues/3876
