I am new to Docker, so bear with me. I have written a Dockerfile that creates an image with a Java language Spring Boot service within. I am now trying to set up an entry point to start the service in the container. I chose to write an external shell script to be called as the entry point.
This is how the service is set up.
When the project is built, a zip file is produced containing the service jar, all dependency jars, config resources and a bash script used to launch the service. The script takes a number of arguments, validates them and uses them to construct and then execute a "java" command to run the service.
If you were to run this the "non-container-way", you would just unpack the zip file in a directory somewhere and invoke the script, passing the necessary arguments. The script displays "usage" information if required arguments are not present.
In the Docker container case, I'm trying to figure out how to do the same from the ENTRYPOINT in the Dockerfile. I specified my launcher script in the ENTRYPOINT, and it was indeed invoked, although, of course, no arguments were passed, so the script exits with the usage information.
I can't figure out how to pass the arguments that the launcher script expects.
I get the impression that I'm missing an important detail in the usage of ENTRYPOINT.
Below are snippets of the relevant files, to try to illustrate my situation.
Dockerfile:
...
# Copy Route Assessor service archive and unpack.
COPY target/route-assessor.zip .
RUN unzip route-assessor.zip
WORKDIR route-assessor
ENTRYPOINT ["run-route-assessor.sh"]
run-route-assessor.sh:
Rather than include a fairly lengthy script, I'll show the usage statement to give an idea of what this script expects for arguments.
show_usage() {
    echo "Usage: `basename "$0"` <args>"
    echo "    --port=<service port>"
    echo "    --instance=<service instance> [optional, default: 1]"
    echo "    --uuid=<service UUID>"
    echo "    --ssl [optional]"
    echo "    --keystore=<key store path> [required when --ssl specified]"
    echo "    --key-alias=<key alias> [required when --ssl specified]"
    echo "    --apm-host=<Elastic APM server host> [optional]"
    echo "    --apm-port=<Elastic APM server port> [optional]"
}
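For context, the tail of such a launcher script typically execs the constructed java command. A sketch only; the jar name and the Spring Boot property names here are assumptions, not taken from the real script:

# Sketch: PORT and UUID would come from the parsed --port/--uuid arguments
exec java -jar route-assessor.jar \
    --server.port="${PORT}" \
    --service.uuid="${UUID}"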
A container instance was created from the image:
[jo24447@489337-mitll route-assessor]$ docker create --name route-assessor-1 route-assessor
Examples of container start attempts:
[jo24447@489337-mitll route-assessor]$ docker start -ai route-assessor-1
Service port is required.
Usage: run-route-assessor.sh <args>
--port=<service port>
--instance=<service instance> [optional, default: 1]
--uuid=<service UUID>
--ssl [optional]
--keystore=<key store path> [required when --ssl specified]
--key-alias=<key alias> [required when --ssl specified]
--apm-host=<Elastic APM server host> [optional]
--apm-port=<Elastic APM server port> [optional]
[jo24447@489337-mitll route-assessor]$ docker start -ai route-assessor-1 --port=9100
unknown flag: --port
See 'docker start --help'.
If you want to run the container with some default parameters, you need to define those parameters in CMD (in exec form, each argument must be a separate array element):
ENTRYPOINT ["./run-route-assessor.sh"]
CMD ["-cmd1", "value1", "-cmd2", "value2"]
So you can run the container with just:
docker run image-name
If you want to override that default parameter list with no parameters at all, you can do it with:
docker run image-name "''"
However, a better way would be to pass a flag to ./run-route-assessor.sh and let the script decide how to run the Spring Boot service when no parameters are given:
docker run image-name -no-params
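Independent of that, note that anything placed after the image name in docker run (or docker create) is appended to the ENTRYPOINT as arguments, so the script's real options can be passed at creation time; a sketch using the question's image (the UUID value is illustrative):

docker create --name route-assessor-1 route-assessor --port=9100 --uuid=0f61e126-ec29-4ca7-a16a-a5c722b6e32a
docker start -ai route-assessor-1

This also explains the unknown flag: --port error above: docker start accepts no command arguments, so they have to be supplied to docker create (or docker run) instead.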
Related
I am running a Docker container through an ECS task and attempting to override the Docker CMD in the task definition. I do not have control over the Docker container, but by default it has an entrypoint of "/share/script.sh".
This entrypoint script ultimately invokes Chef InSpec (a compliance-checking application) with arguments passed in from $@, like this:
inspec exec linux-baseline $@
When I pass in plaintext arguments by overriding CMD, everything is great. For example, passing in
CMD ["--backend","ssh"]
will result in
inspec exec linux-baseline --backend ssh
being executed.
What I would like to do is pass in a reference to a container environment variable via CMD (let's assume we know it's defined that $STACK=stack-name) - something like:
CMD ["--stack","${STACK}"]
where the executed code would be
inspec exec linux-baseline --stack stack-name
Is there any way to do this?
The best way might be to move this option into your startup script. You can't do this with only CMD syntax.
If you're willing to part with the container-as-command pattern, you can achieve this by not having an ENTRYPOINT and using the string form of CMD:
# Reset ENTRYPOINT to empty
ENTRYPOINT []
CMD /share/script.sh --stack "${STACK}"
This also means you would need to include the script name if you override CMD in a docker run invocation or a Compose command: directive.
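For example, a run-time override would then look something like this (image name hypothetical):

docker run my-image /share/script.sh --stack other-stack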
A similar option is to write your own wrapper script to be the new entrypoint that potentially fills in more options:
#!/bin/sh
# Expand ${STACK} at container start, then pass any runtime arguments through untouched
exec /share/script.sh --stack "${STACK}" "$@"
ENTRYPOINT ["/new-entrypoint.sh"]
Docker never does environment variable expansion natively here. Instead, the CMD directive has two forms:
If you use a JSON array CMD ["--stack", "${STACK}"], there is no interpolation or other processing; the command part is exactly the two words --stack and ${STACK}.
If you use anything else, Docker injects a shell to run the command, and that shell can do environment variable expansion; the command part is exactly the three words sh, -c, and the command string as a single word (quotes, punctuation, braces, and all).
In your case you can't use either form: the first form doesn't do the variable expansion, and the second form includes the words sh and -c that your script won't understand.
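A minimal illustration of the difference, using echo as a stand-in command:

# JSON-array form: no shell, no expansion; the program literally receives ${STACK}
CMD ["echo", "${STACK}"]
# Shell form: Docker runs sh -c 'echo "${STACK}"', so the variable expands at startup
CMD echo "${STACK}"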
Given the following docker run command:
docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  rancher/rancher:latest \
  --acme-domain <YOUR.DNS.NAME>
What is the notation for writing --acme-domain in the docker-compose file? I was not able to find this in the docs. Thanks
Everything after the image name in your docker run command line is the "command", which gets executed either by the shell or by your ENTRYPOINT script. The equivalent docker-compose directive is command. For example:
service:
  image: rancher/rancher:latest
  ports:
    - "80:80"
    - "443:443"
  command: "--acme-domain <YOUR.DNS.NAME>"
...
You can try
docker-compose run service_name --acme-domain <example.com>
Runs a one-time command against a service. For example, the following
command starts the web service and runs bash as its command.
docker-compose run web bash
Commands you use with run start in new containers with configuration
defined by that of the service, including volumes, links, and other
details. However, there are two important differences.
First, the command passed by run overrides the command defined in the
service configuration. For example, if the web service configuration
is started with bash, then docker-compose run web python app.py
overrides it with python app.py.
(from the docker-compose run documentation)
Update:
As mentioned by @larsks, anything passed to command in docker-compose is treated as arguments to the entrypoint; if you look into the Dockerfile, the entrypoint is
exec tini -- rancher --http-listen-port=80 --https-listen-port=443 --audit-log-path=${AUDIT_LOG_PATH} --audit-level=${AUDIT_LEVEL} --audit-log-maxage=${AUDIT_LOG_MAXAGE} --audit-log-maxbackup=${AUDIT_LOG_MAXBACKUP} --audit-log-maxsize=${AUDIT_LOG_MAXSIZE} "${@}"
so you can follow @larsks's answer, or you can try the above without changing anything in docker-compose, as the entrypoint will process "${@}".
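In other words, because the image's entrypoint ends in "${@}", a list-form command: is appended directly to the rancher invocation; a sketch (domain illustrative):

service:
  image: rancher/rancher:latest
  command: ["--acme-domain", "example.com"]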
I am trying to spin up a webapp using docker build. For generating certs, I want to use certbot. However, if I just put
RUN certbot --nginx
I get:
Enter email address (used for urgent renewal and security notices) (Enter 'c' to
cancel): Plugins selected: Authenticator nginx, Installer nginx
An unexpected error occurred:
EOFError.
Is there a way to provide this information in the Dockerfile, or to ignore it?
RUN certbot -n -m ${EMAIL} -d ${DOMAINS} --nginx
My one suggestion is not to do this during docker build, but instead to generate the cert when the container starts up. This is because Let's Encrypt will attempt to connect to your server at the domains you're specifying, which is probably not where you're building the image.
To decrease startup time, though, you'll want to skip bootstrapping dependencies (but you will need them installed). For this purpose, I would invoke certbot once in your Dockerfile (this will ensure its dependencies are properly installed) and then alter the CMD (assuming you're using the nginx image):
Dockerfile:
ARG EMAIL_ARG=defaultemail@example.com
ARG DOMAINS_ARG=example.com
ENV EMAIL=${EMAIL_ARG}
ENV DOMAINS=${DOMAINS_ARG}
RUN certbot --help
...
CMD ["sh", "-c", "certbot --no-bootstrap -n -m ${EMAIL} -d ${DOMAINS} --nginx", "&&", "nginx", "-g", "daemon off;"]
The -n is for non-interactive mode
The --no-bootstrap is to skip the bootstrapping of dependencies (installing python and such)
The -m is to specify the email used for important notifications
The -d is to specify a comma separated list of domains
Using "sh", "-c" will invoke a shell when the command is executed, so you'll get the shell like behavior of replacing your environment variables with their values. Passing the values in to the build as build args doesn't expose them at startup time of your container, which is why they are then being placed into environment variables. The added benefit of them being used from environment variables is you can override these values in different environments (dev, test, stage, prod, etc...).
According to Controlling startup order in Compose, one can control the order in which Docker Compose starts containers by using a "wait-for-it" script. The script wait-for-it.sh expects both a host:port argument and the command that the script should execute when the port is available. The documentation recommends that Docker Compose invoke this script using the entrypoint: option. However, if one uses this option, the container will no longer run its default ENTRYPOINT or CMD, because entrypoint: overrides the default.
How might one provide this default command to wait-for-it.sh so that the script can invoke the default ENTRYPOINT or CMD when the condition for which it waits is satisfied?
In my case, I've implemented a script wait-for-file.sh that polls waiting for a file to exist:
#!/bin/bash
set -e

waitFile="$1"
shift
cmd="$@"

until test -e "$waitFile"
do
    >&2 echo "Waiting for file [$waitFile]."
    sleep 1
done

>&2 echo "Found file [$waitFile]."
exec $cmd
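For reference, invoked by hand the script would look like this, using the base image's default command as the subcommand:

./wait-for-file.sh /var/run/liquibase/done catalina.sh run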
Docker Compose invokes wait-for-file.sh as the entry-point to a slightly custom container derived from tomcat:8-jre8:
platinum-oms:
  image: opes/platinum-oms
  ports:
    - "8080:8080"
  volumes_from:
    - liquibase
  links:
    - postgres:postgres
    - activemq:activemq
  depends_on:
    - liquibase
    - activemq
  entrypoint: /wait-for-file.sh /var/run/liquibase/done
Before it exits successfully, another custom container liquibase creates /var/run/liquibase/done and so platinum-oms effectively waits for container liquibase to complete.
Once container liquibase creates file /var/run/liquibase/done, wait-for-file.sh prints Found file [/var/run/liquibase/done]., but fails to invoke the default command catalina.sh run from the base image tomcat:8-jre8. Why?
Test Scenario
I created a simplified test scenario docker-compose-wait-for-file to demonstrate my problem. Container ubuntu-wait-for-file waits for container ubuntu-create-file to create file /wait/done and then I expect container ubuntu-wait-for-file to invoke the default ubuntu container command /bin/bash, but instead, it exits. Why doesn't it work as I expect?
However, if one uses this option, the container will no longer run its default ENTRYPOINT or CMD command because entrypoint: overrides the default.
That is expected, which is why the wait-for-it is presented as a wrapper script.
It does allow executing a "subcommand", though:
wait-for-it.sh host:port [-s] [-t timeout] [-- command args]
                                               ^^^^^^^^^^^^
The subcommand will be executed regardless if the service is up or not.
If you wish to execute the subcommand only if the service is up, add the --strict argument.
That means the CMD part of your image can be used for your actual container command, as its parameters will be passed as parameters to the ENTRYPOINT command:
entrypoint: wait-for-it.sh host:port --
command: mycmd myargs
This should work... except for docker-compose issue 3140 (mentioned by the OP Derek Mahar in the comments)
entrypoint defined in docker-compose.yml wipes out CMD defined in Dockerfile
That issue suggests (Jan. 2021):
If you have a custom image, you can add a start script to the build, call it inside the Dockerfile, and call it again in the docker-compose file.
That's a way to avoid duplication for more complicated entrypoints.
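Applied to the question's setup, the pairing would look something like this (image, script path, and file path taken from the question; catalina.sh run is the tomcat:8-jre8 default):

platinum-oms:
  image: opes/platinum-oms
  entrypoint: /wait-for-file.sh /var/run/liquibase/done
  command: catalina.sh run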
I am trying to get my head around the COMMAND option in Docker Compose. In my current docker-compose.yml I start the prosody Docker image (https://github.com/prosody/prosody-docker) and I want to create a list of users when the container is actually started.
The documentation of the container states that a user can be made using the environment options LOCAL, DOMAIN, and PASSWORD, but this creates a single user. I need a list of users.
From reading around the internet, it seemed that using the command option I should be able to execute commands in a starting or running container.
xmpp:
  image: prosody/prosody
  command: prosodyctl register testuser localhost testpassword
  ports:
    - "5222:5222"
    - "127.0.0.1:5347:5347"
But this does not seem to work: I checked the running container using docker exec -it <containerid> bash, but the user is not created.
Is it possible to execute a command on a started container using docker-compose, or are there other options?
The COMMAND instruction is exactly the same as what is passed at the end of a docker run command, for example echo "hello world" in:
docker run debian echo "hello world"
The command is interpreted as arguments to the ENTRYPOINT of the image, which in debian's case is /bin/bash. In the case of your image, it gets passed to this script. Looking at that script, your command will just get passed to the shell. I would have expected any command you pass to run successfully, but the container will exit once your command completes. Note that the default command is set in the Dockerfile to CMD ["prosodyctl", "start"] which is presumably a long-running process which starts the server.
I'm not sure how Prosody works (or even what it is), but I think you probably want to either map in a config file which holds your users, or set up a data container to persist your configuration. The first solution would mean adding something like:
volumes:
  - my_prosody_config:/etc/prosody
To the docker-compose file, where my_prosody_config is a directory holding the config files.
The second solution could involve first creating a data container like:
docker run -v /etc/prosody -v /var/log/prosody --name prosody-data prosody-docker echo "Prosody Data Container"
(The echo should complete, leaving you with a stopped container which has volumes set up for the config and logs. Just make sure you don't docker rm this container by accident!)
Then in the docker-compose file add:
volumes_from:
  - prosody-data
Hopefully you can then add users by running docker exec as you did before, then running prosodyctl register at the command line. But this is dependent on how prosody and the image behave.
CMD is directly related to ENTRYPOINT in Docker (see this question for an explanation). So when changing one of them, you also have to check how it affects the other. If you look at the Dockerfile, you will see that the default command is to start prosody through CMD ["prosodyctl", "start"]. entrypoint.sh just passes this command through, as Adrian mentioned. However, your command overrides the default command, so your prosody daemon is never started. Maybe you want to try something like
xmpp:
  image: prosody/prosody
  command: sh -c "prosodyctl register testuser localhost testpassword && prosodyctl start"
  ports:
    - "5222:5222"
    - "127.0.0.1:5347:5347"
instead. More elegant, and more in line with what the creator seems to have intended (judging from the entrypoint.sh script), would be something like
xmpp:
  image: prosody/prosody
  environment:
    - LOCAL=testuser
    - DOMAIN=localhost
    - PASSWORD=testpassword
  ports:
    - "5222:5222"
    - "127.0.0.1:5347:5347"
To answer your final question: no, it is not possible (as of now) to execute commands on a running container via docker-compose. However, you can easily do this with docker:
docker exec -i prosody_container_name prosodyctl register testuser localhost testpassword
where prosody_container_name is the name of your running container (use docker ps to list running containers).