What is a proper way to run an application with any flag in Docker?
I have tried this:
Dockerfile
# ...
CMD ["-flag_name='value"]
ENTRYPOINT ["./app"]
But my go app cannot see that flag in main.go:
f := flag.String("flag_name", "default_value", "")
And f is always equal to "default_value".
I think in your case only CMD will work. Providing an ENTRYPOINT is helpful when you want some custom logic to prepare the container, or when you want to pass the flag at run time, whereas here you are trying to set the flag at build time.
CMD ["./app","-flag_name=value"]
If you want to provide the flag at run time, then an ENTRYPOINT makes sense:
ENTRYPOINT ["./app"]
then
docker run -it --rm myapp -flag_name=value
BTW, a combination of ENTRYPOINT and CMD should also work:
ENTRYPOINT ["/app/hello"]
CMD ["-word=value"]
Related
I'm stuck trying to achieve the objective described in the title. I tried various options, the last of which is found in this article. Currently my Dockerfile is as follows:
FROM ubuntu:18.04
EXPOSE 8081
CMD cd /var/www/html/components
CMD "bash myscript start" "-D" "FOREGROUND"
#ENTRYPOINT ["bash", "myscript", "start"]
Neither the CMD ... "FOREGROUND" line nor the commented-out ENTRYPOINT line works. However, when I open an interactive shell into the container, cd into the /var/.../components folder and execute the exact same command to run the script, it works.
What do I need to change?
Copy your .sh file into the image, then run it with CMD. This is a snippet:
ADD ./configure.and.run.myapp.sh /tmp/
RUN chmod +x /tmp/configure.and.run.myapp.sh
...
CMD ["sh", "-c", "/tmp/configure.and.run.myapp.sh"]
And here is my full Dockerfile; have a look.
I see three problems with the Dockerfile you've shown.
There are multiple CMDs. A Docker container only runs one command (and then exits); if you have multiple CMD directives then only the last one has an effect. If you want to change directories, use the WORKDIR directive instead.
Nothing is COPYd into the image. Unless you explicitly COPY your script into the image, it won't be there when you go to run it.
The CMD has too many quotes. In particular, the quotes around "bash myscript start" make it into a single shell word, and so the system looks for an executable program named exactly that, including spaces as part of the filename.
You should be able to correct this to something more like:
FROM ubuntu:18.04
# Instead of `CMD cd`; a short path like /app is very common
WORKDIR /var/www/html/components
# Make sure the application is part of the image
COPY ./ ./
EXPOSE 8081
# If the script is executable and begins with #!/bin/sh then
# you don't need to explicitly say "bash"; you probably do need
# the path if it's not in /usr/local/bin or similar
CMD ./myscript start -D FOREGROUND
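A quick sketch of building and running the image (the tag myscript-image is just an example):
docker build -t myscript-image .
docker run --rm -p 8081:8081 myscript-image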
(I tend to avoid ENTRYPOINT here, for two main reasons. It's easier to docker run --rm -it your-image bash to get a debugging shell or run other one-off commands without an ENTRYPOINT, especially if the command requires arguments. There's also a useful pattern of using ENTRYPOINT to do first-time setup before running the CMD and this is a little easier to set up if CMD is already the main container command.)
Currently if I run a container I must specify a new CMD in order to pass args.
I.e. the format is docker run image [CMD] [ARGS]
Is there any way to pass args to the CMD at the end of the Dockerfile without specifying a new CMD when running the container?
The reason this probably works for you is that, with no ENTRYPOINT set, a shell-form CMD is run via /bin/sh -c, and when you give CMD an executable it simply becomes the container's command, so the CMD contents run as you'd expect.
CMD ["executable","param1","param2"] (exec form, this is the preferred form)
CMD ["param1","param2"] (as default parameters to ENTRYPOINT)
There can only be one CMD instruction in a Dockerfile. If you list more than one CMD then only the last CMD will take effect.
Dockerfile docs
This means your main option is to replace the old CMD with an ENTRYPOINT in the new Dockerfile and append the runtime arguments with a new CMD directive, either with permanent default values, values received from environment variables, or a combination of the two.
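A minimal sketch of that layout (the binary name, flag value, and image tag myimage are all placeholders):
ENTRYPOINT ["./app"]
# default arguments; anything given after the image name on docker run replaces them
CMD ["-flag_name=value"]
docker run myimage then runs ./app -flag_name=value, while docker run myimage -flag_name=other runs ./app -flag_name=other.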
Hope this helps.
You can write your CMD statement like this at the end of the Dockerfile:
CMD ["executable","param1","param2"]
source: https://docs.docker.com/engine/reference/builder/
I am defining an image in a Dockerfile which has another image as its parent:
FROM parent_org/parent:1.0.0
...
The parent image's documentation mentions an argument (special-arg) that can be passed when running an instance of the container:
docker run parent_org/parent:1.0.0 --special-arg
How can I enable special-arg in my Dockerfile?
TL;DR: you could use the CMD directive by doing something like this:
FROM parent_org/parent:1.0.0
CMD ["--special-arg"]
however note that passing extra flags to docker run as below would overwrite --special-arg (as CMD is intended to specify default arguments):
docker build -t child_org/child .
docker run child_org/child # would imply --special-arg
docker run child_org/child --other-arg # "--other-arg" replaces "--special-arg"
If this is not what you'd like to obtain, you should redefine the ENTRYPOINT as suggested below.
The CMD and ENTRYPOINT directives
To have more insight on CMD as well as on ENTRYPOINT, you can take a look at the table involved in this other SO answer: CMD doesn't run after ENTRYPOINT in Dockerfile.
In your case, you could redefine the ENTRYPOINT in your child image (and if need be, the default CMD) by adapting child_org/child/Dockerfile w.r.t. what was defined in the parent Dockerfile.
Assuming the parent_org/parent/Dockerfile looks like this:
# for example:
FROM debian:stable
WORKDIR /usr/src/foo
COPY entrypoint.sh .
RUN chmod a+x entrypoint.sh
ENTRYPOINT ["./entrypoint.sh"]
CMD ["--default-arg"]
You could write a child_org/child/Dockerfile like this:
FROM parent_org/parent:1.0.0
RUN […]
# Redefine the ENTRYPOINT so the --special-arg flag is always passed
ENTRYPOINT ["./entrypoint.sh", "--special-arg"]
# If need be, redefine the list of default arguments,
# as setting ENTRYPOINT resets CMD to an empty value:
CMD ["--default-arg"]
This baffled me at first, too. Run them using the command: declaration. A command and an entrypoint are two different things: the entrypoint runs whatever script or executable your service needs to initialize and start, and that entrypoint script then usually appends whatever you pass in from the command: declaration as further arguments, altering the behavior of the service.
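For example, in a Compose file that might look something like this (the service name is made up):
services:
  web:
    image: child_org/child
    # appended to the image's ENTRYPOINT, just like arguments
    # placed after the image name in docker run
    command: ["--special-arg"]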
I am using base image ibmcom/mq which uses ENTRYPOINT to execute its process:
ENTRYPOINT ["mq.sh"]
If in my Dockerfile I use CMD the parent image works fine, but my CMD doesn't seem to be executed. If in my Dockerfile I use ENTRYPOINT my command is running but then the parent ENTRYPOINT doesn't seem to be running.
What am I missing here?
OK. I now understand that if I use CMD it acts as a parameter to the ENTRYPOINT, and if I use ENTRYPOINT it overrides it. I thought this was so only within the same Dockerfile.
As per Docker documentation:
There can only be one CMD instruction in a Dockerfile. If you list more than one CMD then only the last CMD will take effect.
I wish to execute a simple bash script (which processes a Docker environment variable) before the CMD command (which is init in my case).
Is there any way to do this?
Use a custom entrypoint
Make a custom entrypoint which does what you want, and then exec's your CMD at the end.
NOTE: if your image already defines a custom entrypoint, you may need to extend it rather than replace it, or you may change behavior you need.
entrypoint.sh:
#!/bin/sh
## Do whatever you need with env vars here ...
# Hand off to the CMD
exec "$#"
Dockerfile:
COPY entrypoint.sh /entrypoint.sh
RUN chmod 755 /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
Docker will run your entrypoint, using CMD as arguments. If your CMD is init, then:
/entrypoint.sh init
The exec at the end of the entrypoint script takes care of handing off to CMD when the entrypoint is done with what it needed to do.
Why this works
The use of ENTRYPOINT and CMD frequently confuses people new to Docker. In comments, you expressed confusion about it. Here is how it works and why.
The ENTRYPOINT is the initial thing run inside the container. It takes the CMD as an argument list. Therefore, in this example, what is run in the container is this argument list:
# ENTRYPOINT = /entrypoint.sh
# CMD = init
["/entrypoint.sh", "init"]
# or shown in a simpler form:
/entrypoint.sh init
It is not required that an image have an ENTRYPOINT. If you don't define one, Docker has a default: /bin/sh -c.
So with your original situation, no ENTRYPOINT, and using a CMD of init, Docker would have run this:
/bin/sh -c 'init'
^--------^ ^--^
| \------- CMD
\--------------- ENTRYPOINT
In the beginning, Docker offered only CMD, and /bin/sh -c was hard-coded as the ENTRYPOINT (you could not change it). At some point along the way, people had use cases where they had to do more custom things, and Docker exposed ENTRYPOINT so you could change it to anything you want.
In the example I show above, the ENTRYPOINT is replaced with a custom script. (Though it is still ultimately being run by sh, because it starts with #!/bin/sh.)
That ENTRYPOINT takes the CMD as its arguments. At the end of the entrypoint.sh script is exec "$@". Since "$@" expands to the list of arguments given to the script, this is turned into
exec "init"
And therefore, when the script is finished, it goes away and is replaced by init as PID 1. (That's what exec does - it replaces the current process with a different command.)
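If you want to see that replacement for yourself, here is a tiny experiment (the script name show-exec.sh is made up):
#!/bin/sh
# show-exec.sh
echo "shell PID: $$"
exec sleep 30   # replaces this shell; sleep keeps the same PID
echo "never reached"
While it sleeps, ps shows sleep holding the PID the shell printed, with no shell left in front of it; that is exactly why the exec'd init ends up as PID 1 in the container.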
How to include CMD
In the comments, you asked about adding CMD in the Dockerfile. Yes, you can do that.
Dockerfile:
CMD ["init"]
Or if there is more to your command, e.g. arguments like init -a -b, it would look like this:
CMD ["init", "-a", "-b"]
Dan's answer was correct, but I found it rather confusing to implement. For those in the same situation, here are code examples of how I implemented his explanation of the use of ENTRYPOINT instead of CMD.
Here are the last few lines in my Dockerfile:
#change directory where the mergeandlaunch script is located.
WORKDIR /home/connextcms
ENTRYPOINT ["./mergeandlaunch", "node", "keystone.js"]
Here are the contents of the mergeandlaunch bash shell script:
#!/bin/bash
#This script should be edited to execute any merge scripts needed to
#merge plugins and theme files before starting ConnextCMS/KeystoneJS.
echo Running mergeandlaunch script
#Execute merge scripts. Put in path to each merge script you want to run here.
cd ~/theme/rtb4/
./merge-plugin
#Launch KeystoneJS and ConnextCMS
cd ~/myCMS
exec "$#"
Here is how the code gets executed:
The ENTRYPOINT command kicks off the mergeandlaunch shell script
The two arguments 'node' and 'keystone.js' are passed along to the shell script.
At the end of the script, the arguments are passed on to the exec command.
The exec command then launches my node program the same way the Docker CMD command would.
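Concretely, running the image (the tag my-connextcms-image is made up) plays out like this:
docker run --rm my-connextcms-image
# inside the container:
#   ./mergeandlaunch node keystone.js   <- the ENTRYPOINT
#   merge scripts run
#   exec node keystone.js               <- script hands off to node as PID 1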
Thanks to Dan for his answer.
Although I found I had to do something like this within the Dockerfile:
WORKDIR /
COPY startup.sh /
RUN chmod 755 /startup.sh
ENTRYPOINT sh /startup.sh /usr/sbin/init
NOTE: I named the script startup.sh as opposed to entrypoint.sh
The key here was that I needed to provide 'sh', otherwise I kept getting "no such file..." errors coming out of 'docker logs -f container_name'.
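An exec-form equivalent of that line, which keeps Docker from wrapping everything in an extra /bin/sh -c, would be (same file names as above):
ENTRYPOINT ["sh", "/startup.sh", "/usr/sbin/init"]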
See:
https://github.com/docker/compose/issues/3876