I'm using Compose file version 3.5. With this version I've run into an issue with service dependencies at start time. I tried to resolve it as recommended, by copying an external wait script into the image via the Dockerfile, but that led to more issues. For example: the script is present but its execution isn't detected, or it executes but the program doesn't start; docker-compose comes up fine, but swarm mode fails, and so on.
I think I'm not clear on the Docker lifecycle. Let's imagine we have a Dockerfile, a docker-compose.yml, and a docker-swarm.yml. Each of them has a CMD and an ENTRYPOINT instruction.
When I start with docker-compose, I can see that my service waits for the required one (because of the waiting script). When I use swarm mode, I get failures and my service can't start correctly.
Can you please help me understand the lifecycle?
These are the instructions involved:
CMD (Dockerfile)
ENTRYPOINT (Dockerfile)
entrypoint (docker-compose)
command (docker-compose)
entrypoint (docker-swarm)
command (docker-swarm)
Is it possible to get information about the execution order of these instructions in the different scenarios?
There is no "execution order" between an entrypoint and a command, regardless of whether it is defined in your image (Dockerfile) or overridden at runtime (with a compose file or cli argument). There is only one command that docker will run to start your container, and when that command exits, the container exits.
If you only define an entrypoint or a command, docker will run that. If you define both an entrypoint and a command, docker will append the command as an argument to the entrypoint. So if you have:
ENTRYPOINT ["/bin/app", "arg1"]
CMD ["script.sh", "arg2"]
Docker will run your container with the command:
/bin/app arg1 script.sh arg2
meaning that script.sh is passed as a cli argument to /bin/app.
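If you want to check what entrypoint and command an image was built with (and therefore what will be concatenated when the container starts), you can inspect the image; for example:
docker inspect --format '{{.Config.Entrypoint}} {{.Config.Cmd}}' <your-image>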
If you use the shell/string syntax instead of the exec/json syntax, this can get a bit strange since the shell syntax wraps your command with a /bin/sh -c "$string", and more importantly, the -c arg to /bin/sh only takes a single argument. That means:
ENTRYPOINT /bin/app arg1
CMD script.sh arg2
Will run:
/bin/sh -c "/bin/app arg1" /bin/sh -c "script.sh arg2"
which will ultimately run:
/bin/app arg1
The standard workflow to call a command after your entrypoint script runs is to include the following line at the end of the entrypoint.sh script:
exec "$@"
which will run any cli args to the entrypoint script, typically the value of CMD, as the new pid 1.
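Putting that together, a minimal entrypoint script using this pattern could look like the sketch below (the setup step is just a placeholder):
#!/bin/sh
# hypothetical first-time setup
echo "running setup..."
# replace this shell with the CMD (or any cli args), so it runs as pid 1
exec "$@"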
Assume a simple Dockerfile
FROM php-fpm:8
ADD entrypoint.sh .
RUN chmod +x entrypoint.sh
ENTRYPOINT ["./entrypoint.sh"]
CMD ["php-fpm"]
In the entry-point script I just export a variable and print the environment
#!/bin/bash
set -e
export FOO=bar
env   # just print the environment while the entrypoint is running
exec "$@"
Then I build the image as myimage and use it to deploy a stack in docker swarm mode
docker stack deploy -c docker-compose.yml teststack
The docker-compose.yml file used is the following:
app:
  image: myimage:latest
  environment:
    APP_ENV: production
Now the question: if I check the logs of the app service, I can see (because of the env command in the entrypoint) that the FOO variable is exported:
docker service logs teststack_app
teststack_app.1.nbcqgnspn1te@soulatso | PWD=/var/www/html
teststack_app.1.nbcqgnspn1te@soulatso | FOO=bar
teststack_app.1.nbcqgnspn1te@soulatso | HOME=/root
However, if I log in to the running container and manually run env, the FOO variable is not shown:
docker container exec -it teststack_app.1.nbcqgnspn1tebirfatqiogmwp bash
root@df9c6d9c5f98:/var/www/html# env   # run env inside the container
PWD=/var/www/html
HOME=/root
# No FOO variable :(
What am I missing here?
A debugging shell you launch via docker exec isn't a child process of the main container process, and doesn't run the entrypoint itself, so it doesn't see the environment variables that are set there.
Depending on what you're trying to do, there are a couple of options to get around this.
If you're just trying to inspect what your image build produced, you can launch a debugging container instead. The command you pass here will override the CMD in the Dockerfile, and when your entrypoint script does something like exec "$#" to run the command it gets passed, it will run this command instead. This lets you inspect things in an environment just after your entrypoint's first-time setup has happened.
docker-compose run app env | grep FOO
docker-compose run app bash
Or, if the only thing your entrypoint script does is set environment variables, you can explicitly invoke it:
docker-compose exec app ./entrypoint.sh bash
It is important that your entrypoint script accept an ordinary command as parameters. If it is a shell script, it should use something like exec "$#" to launch the main container process. If your entrypoint ignores its parameters and launches a fixed command, or if you've set ENTRYPOINT to a language interpreter and CMD to a script name, these debugging techniques will not work.
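To make the distinction concrete, here is a hedged sketch of the two possible endings for an entrypoint script; only the first works with the debugging techniques above:
# works: hands control to whatever command it was given (normally the CMD)
exec "$@"

# breaks the techniques above: ignores its arguments and always runs a fixed command
exec php-fpm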
I want to run a script during run time and not during image build.
The script runs based on an env variable that I pass when running the container.
Script:
#!/bin/bash
touch $env
Dockerfile:
FROM busybox
ENV env parm
RUN mkdir PRATHAP
ADD apt.sh /PRATHAP
WORKDIR /PRATHAP
RUN chmod 777 apt.sh
CMD sh apt.sh
When I try to run: docker container run -it -e env=test.txt sh
the script does not run;
I just get the sh prompt. If I remove the sh, the container doesn't stay alive. Please help me with how to achieve this.
Your docker run ends with sh, which overrides the CMD in your Dockerfile. To get around this, you need to replicate the original CMD on the command line:
$ docker run -it -e env=test.txt <image:tag> sh -c "sh apt.sh; sh"
Remember that a Docker container runs a single command, and then exits. If you docker run your image without overriding the command, the only thing the container will do is touch a file inside the isolated container filesystem, and then it will promptly exit.
If you need to do some startup-time setup, a useful pattern is to write it into an entrypoint script. When a container starts up, Docker runs whatever you have named as the ENTRYPOINT, passing the CMD as additional parameters (or it just runs CMD if there is no ENTRYPOINT). You can use the special shell command exec "$@" to run the command. So revisiting your script as an entrypoint script:
#!/bin/sh
# ^^ busybox image doesn't have bash (nor does alpine)
# Do the first-time setup
touch "$env"
# Launch the main container process
exec "$#"
In your Dockerfile set this script to be the ENTRYPOINT, and then whatever long-running command you actually want the container to do to be the CMD.
FROM busybox
# WORKDIR also creates the directory
WORKDIR /PRATHAP
# Generally prefer COPY to ADD
COPY apt.sh .
# Not world-writable
RUN chmod 0755 apt.sh
ENV env parm
# ENTRYPOINT must use the JSON-array syntax; no need to name an interpreter,
# since the script is executable and has a #! line
ENTRYPOINT ["./apt.sh"]
# Or whatever the container actually does
CMD sh
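With that layout, building and running the image could look something like this (the tag myapp is hypothetical):
docker build -t myapp .
docker run -it -e env=test.txt myapp           # touches test.txt, then drops into sh
docker run -e env=test.txt myapp ls /PRATHAP   # touches test.txt, then runs ls instead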
I'm trying to parameterize my Dockerfile running Node.js so that my entrypoint command's args are customizable at docker run time, letting me maintain one container artifact that can be deployed repeatedly with variations to some runtime args.
I've tried a few different ways, the most basic being
ENV CONFIG_FILE=default.config.js
ENTRYPOINT node ... --config ${CONFIG_FILE}
What I'm finding is that the default value is still used at docker run time, even if I'm using -e to pass in new values. Such as:
docker run -e CONFIG_FILE=desired.config.js
Another Dockerfile form I've tried is this:
ENTRYPOINT node ... --config ${CONFIG_FILE:-default.config.js}
This doesn't specify the environment variable with an ENV directive, but uses shell parameter expansion to supply a default value if the variable is unset or null. It gives me the same behavior, though.
Lastly, I tried creating a bash script file containing the same entrypoint command, ADDing it to the Docker context, and invoking it in my ENTRYPOINT. This also gives the same behavior.
Is what I'm attempting even possible?
EDIT:
Here is a minimal Dockerfile that reproduces this behavior for me:
FROM alpine
ENV CONFIG "no"
ENTRYPOINT echo "CONFIG=${CONFIG}"
Here is the build command:
docker build -f test.Dockerfile -t test .
Here is the run command, which echoes no despite the -e arg:
docker run -t test -e CONFIG=yes
Some additional details: I'm running OSX Sierra with Docker version 18.09.2, build 6247962.
I want to execute a shell script as the ENTRYPOINT and then get an interactive shell inside the container once the script has finished.
My Dockerfile has the following lines at the end:
WORKDIR artifacts
ENTRYPOINT ./my_shell.sh
When I run it with the following command, it executes the shell script but doesn't drop me into the container.
docker run -it testub /bin/bash
Can someone please let me know if I am missing anything here?
There are two options that control what a container runs when it starts, the entrypoint (ENTRYPOINT) and the command (CMD). They follow the following logic:
If the entrypoint is defined, then it is run with the value for the command included as additional arguments.
If the entrypoint is not defined, then the command is run by itself.
You can override one or both of the values defined in the image. docker run -it --entrypoint /bin/sh testub would run /bin/sh instead of ./my_shell.sh, overriding the entrypoint. And docker run -it testub /bin/bash will override the command, making the container start with ./my_shell.sh /bin/bash.
The quick answer is to run docker run -it --entrypoint /bin/bash testub and from there, kick off your ./my_shell.sh. A better solution is to update ./my_shell.sh to check for any additional parameters and run them with the following at the end of the script:
if [ $# -gt 0 ]; then
exec "$#"
fi
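With that change in place, and assuming the ENTRYPOINT is switched to exec form so the command is actually forwarded to the script as arguments, the original docker run from the question would first run the script and then give you an interactive shell:
# in the Dockerfile: exec form, so cli args reach my_shell.sh
ENTRYPOINT ["./my_shell.sh"]

docker run -it testub /bin/bash
# my_shell.sh does its work, then exec "$@" replaces it with /bin/bash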
The documentation for the run command gives the following syntax:
docker run [OPTIONS] IMAGE[:TAG|@DIGEST] [COMMAND] [ARG...]
however I've found at times that I want to pass a flag to [COMMAND].
For example, I've been working with this image, where the [COMMAND] as specified in the Dockerfile is:
CMD ["/bin/bash", "-c", "/opt/solr/bin/solr -f"]
Is there any way to tack on flags to "/opt/solr/bin/solr -f" so that it's in the form "/opt/solr/bin/solr -f [-MY FLAGS]"?
Do I need to edit the Dockerfile, or is there some built-in functionality for this?
There is a special directive, ENTRYPOINT, which fits your needs. Unlike CMD, it is not replaced by the arguments you pass to docker run; instead, those arguments are appended to it.
For example, you can write
ENTRYPOINT ["python"]
and run it with
docker run <image_name> -c "print(1)"
Note that this only works if you write the command in exec form (via ["...", "..."]); otherwise ENTRYPOINT invokes a shell and the extra args are passed to that shell, not to your program.
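For example, this shell-form variant (a hedged sketch) would not do what you want, because the extra arguments end up after the wrapping /bin/sh -c call instead of being passed to python:
ENTRYPOINT python
# docker run <image_name> -c "print(1)"   -> python starts with no arguments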
More generally, you can combine ENTRYPOINT and CMD
ENTRYPOINT ["ping"]
CMD ["www.google.com"]
Here CMD provides the default args for your ENTRYPOINT. Now you can run either of
docker run <image_name>
docker run <image_name> yandex.ru
and only CMD will be replaced.
A full reference on how ENTRYPOINT and CMD interact can be found here.
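Applied to the Solr image from the question, a hedged sketch of the same pattern might look like this (-m 1g is just an illustrative extra flag):
ENTRYPOINT ["/opt/solr/bin/solr", "-f"]

docker run <image_name> -m 1g    # runs /opt/solr/bin/solr -f -m 1g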
The CMD directive of a Dockerfile is the command that would be run when the container starts if no command was specified in the docker run command.
The main purpose of a CMD is to provide defaults for an executing container.
In your case, just use the docker run command as follows to override the default command specified in the Dockerfile:
docker run makuk66/docker-solr /bin/bash -c "/opt/solr/bin/solr -f [your flags]"