What is the difference between command: in docker-compose.yml and CMD in a Dockerfile?
Which is preferred?
Should CMD be omitted if command: is defined?
command: in docker-compose.yml overrides the CMD in the Dockerfile.
If you control the Dockerfile yourself, put it there; that is the cleanest way.
If you want to test something or need to alter the CMD while developing, overriding it with command: is faster than changing the Dockerfile and rebuilding the image every time.
And if it is a prebuilt image and you don't want to build a derived FROM ... image just to change the CMD, setting command: is also a quick solution.
In the common case, you should have a Dockerfile CMD and not a Compose command:.
command: in the Compose file overrides CMD in the Dockerfile. There are some minor syntactic differences (notably, Compose will never automatically insert a sh -c shell wrapper for you) but they control the same thing in the container metadata.
However, remember that there are other ways to run a container besides Compose. docker run won't read your docker-compose.yml file and so won't see that command: line; it's also not read in tools like Kubernetes. If you build the CMD into the image, it will be honored in all of these places.
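As a sketch (the image name myapp is hypothetical), the CMD baked into the image is the default everywhere, while command: only exists inside Compose:

docker build -t myapp .      # the Dockerfile CMD is stored in the image metadata
docker run --rm myapp        # runs that CMD; no docker-compose.yml is consulted
docker run --rm myapp ls -l  # one-off replacement, the docker run analogue of command: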
The place where you do need a command: override is if you need to launch a non-default main process for a container.
Imagine you're building a Python application. You might have a main Django application and a Celery worker, but these have basically the same source code. So for this setup you might make the image's CMD launch the Django server, and override command: to run a Celery worker off the same image.
# Dockerfile
# ENTRYPOINT is not required
CMD ["./manage.py", "runserver", "0.0.0.0:8080"]

# docker-compose.yml
version: '3.8'
services:
  web:
    build: .
    ports: ['8080:8080']
    # no command:
  worker:
    build: .
    command: celery worker
I am migrating some web apps to be managed via Docker Compose.
It seems docker-compose.yaml has a section for the container entrypoint.
However, my individual Dockerfiles have an ENTRYPOINT themselves... should I remove it from the Dockerfiles? Does the entrypoint in docker-compose override the Docker one?
You usually shouldn't specify entrypoint: or command: in a Compose file. Prefer specifying these in a Dockerfile. The one big exception is if you have a container that can do multiple things (for example, it can be both a Web server and a queue worker, with the same code) and you need to tell it with a command: to do not-the-default thing.
I'd suggest a typical setup like:
# docker-compose.yml
version: '3.8'
services:
  app:
    build: .
    # with neither entrypoint: nor command:

# Dockerfile
FROM ...
WORKDIR /app
COPY ...
RUN ...
# ENTRYPOINT ["./entrypoint-wrapper.sh"]
CMD ["./my_app"]
Compose entrypoint: overrides the Dockerfile ENTRYPOINT and resets the CMD. Compose command: overrides the Dockerfile CMD.
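As a sketch (the service and file names are hypothetical), the two overrides look like this in a Compose file:

services:
  app:
    build: .
    command: ["./my_app", "--debug"]        # replaces the Dockerfile CMD only
  other:
    build: .
    entrypoint: ["./other-entrypoint.sh"]   # replaces ENTRYPOINT and clears the image CMD,
    command: ["./my_app"]                   # so the command usually needs restating too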
In the Dockerfile both ENTRYPOINT and CMD are optional. If your base image already includes a correct command setup (nginx, php:fpm) then you can safely skip both.
It's otherwise somewhat a matter of style whether to use CMD or ENTRYPOINT in your Dockerfile. I prefer CMD for two reasons: it's easier to replace in a docker run ... image-name alternate-command invocation, and there's a pattern of using ENTRYPOINT as a wrapper script that does first-time setup and then launches the CMD with exec "$@". If you have a JSON-array-syntax ENTRYPOINT then you can pass additional command-line arguments to it as docker run ... image-name --option. Both setups are commonplace.
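As a sketch of the difference (the image name my-image is hypothetical):

# CMD-only image: everything after the image name replaces the CMD outright
docker run --rm my-image ./manage.py migrate
# JSON-array ENTRYPOINT: the same position just supplies extra arguments to it
docker run --rm my-image --option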
The thing you shouldn't do is put an interpreter in ENTRYPOINT and a script name in CMD. I only ever see this in Python, but ENTRYPOINT ["python3"] is wrong. On the one hand it is hard to override, in the same way any ENTRYPOINT is; on the other, neither normal command-override form works well (you still have to repeat the script name if you want to run the same script with different options).
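A sketch of that anti-pattern (the script names are hypothetical):

# Anti-pattern: interpreter in ENTRYPOINT, script in CMD
ENTRYPOINT ["python3"]
CMD ["main.py"]
# Running the same script with different options still means repeating the script name:
#   docker run my-image main.py --debug
# and getting a debugging shell means resorting to --entrypoint:
#   docker run --rm -it --entrypoint /bin/sh my-image
# Preferred: CMD ["python3", "main.py"] with no ENTRYPOINT (or a wrapper-script ENTRYPOINT).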
I'm unable to find an easy solution, but probably I'm just searching for the wrong things:
I have a docker-compose.yml which contains a Tomcat service that is built from the contents of the /tomcat folder. In /tomcat there is a Dockerfile, a .war and a server.xml.
The Dockerfile is based on tomcat:9 and copies the server.xml and .war files into the right directories.
If I do docker-compose up, everything runs fine. But I would love to find a way to update the connectors within the server.xml without pruning the image, adjusting the server.xml and starting it again.
It would be perfect to put a $CONNECTOR_CONFIG placeholder in the server.xml and provide a variables.env to docker-compose where the $CONNECTOR_CONFIG variable is set to the desired connector XML.
I know I could adjust the server.xml within the Dockerfile with sed, but that way the image must be pruned every time I want to change something, right?
Is there a way that I can later just edit the variables.env and docker-compose down/up?
Regards,
EdFred
A useful pattern here is to use the image's ENTRYPOINT as a wrapper script that does first-time setup. If that script ends with exec "$@" then it will execute the image's CMD as normal. You can use this to do things like rewrite configuration files based on environment variables.
#!/bin/sh
# docker-entrypoint.sh
# Replace any environment variable references in server.xml.tmpl.
# (Assumes the image has the full GNU tool set.)
envsubst <"$CATALINA_BASE/conf/server.xml.tmpl" >"$CATALINA_BASE/conf/server.xml"
# Run the standard container command.
exec "$@"
Normally in a tomcat image you wouldn't include a CMD since the base image knows how to start Tomcat. The Docker Hub tomcat image page has a mention of it, or you can click through to find the original Dockerfile. You need to know this since specifying an ENTRYPOINT in a derived Dockerfile will reset the CMD.
Your Dockerfile then needs to COPY this script in and set up the ENTRYPOINT and CMD.
# Dockerfile
FROM tomcat:9
COPY myapp.war /usr/local/tomcat/webapps/
COPY server.xml.tmpl /usr/local/tomcat/conf/
COPY docker-entrypoint.sh /usr/local/tomcat/bin/
# ENTRYPOINT _MUST_ be JSON-array form
ENTRYPOINT ["docker-entrypoint.sh"]
# Duplicate from base image
CMD ["catalina.sh", "run"]
You can verify this by hand using a docker run command. Any command you specify after the image name gets run instead of the CMD; but the main container command is still constructed by passing that command as arguments to the alternate ENTRYPOINT and so your wrapper script will run.
docker run --rm \
  -e CONNECTOR_CONFIG=test-connector-config \
  my-image \
  cat /usr/local/tomcat/conf/server.xml
In your final Compose setup, you can include the configuration as an environment: variable.
version: '3.8'
services:
  myapp:
    build: .
    ports: ['8080:8080']
    environment:
      CONNECTOR_CONFIG: ...
envsubst is a GNU tool (it ships with gettext) that replaces $ENVIRONMENT_VARIABLE references in text files. It's very useful for this specific case, but you can do the same work with sed or another text-processing tool, especially if you don't have the GNU tools available (in particular if you have an Alpine-based image).
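A rough sed-based equivalent might look like this (a sketch; it assumes the template contains the literal text $CONNECTOR_CONFIG and that the value contains no characters special to sed, such as | or &):

# Hypothetical substitute for envsubst, e.g. on an Alpine-based image
sed 's|\$CONNECTOR_CONFIG|'"$CONNECTOR_CONFIG"'|g' \
  "$CATALINA_BASE/conf/server.xml.tmpl" \
  > "$CATALINA_BASE/conf/server.xml"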
I was reading Quickstart: Compose and Django when I came across "defining a build in a compose file". Well, I've seen it before, but what I'm curious about here is: what's the purpose of it? I just don't get it.
Why don't we just build the image once (or update it whenever we want) and use it multiple times in different docker-compose files?
Here is the Dockerfile:
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
COPY requirements.txt /code/
RUN pip install -r requirements.txt
COPY . /code/
And here is docker-compose.yml:
version: '3'
services:
  web:
    # <<<<
    # Why not build the image and use it here, like "image: my/django"?
    # <<<<
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
You might say: "well, do as you wish!" The reason I'm asking is that I think there might be some benefits I'm not aware of.
PS:
I mostly use Docker for bringing up some services (DNS, monitoring, etc.); I've never used it for development.
I have already read What is the difference between `docker-compose build` and `docker build`?
There's no technical difference between running docker build yourself and referencing the result with image: in the docker-compose.yml file, and specifying the build: metadata directly in the docker-compose.yml.
The benefits to using docker-compose build to build images are more or less the same as using docker-compose up to run containers. If you have a complex set of -f path/Dockerfile --build-arg ... options, you can write those out in the build: block and not have to write them repeatedly. If you have multiple custom images that need to be built then docker-compose build can build them all in one shot.
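For instance (a hypothetical sketch; the service name, paths, and build argument are illustrative), the equivalent of docker build -f docker/Dockerfile --build-arg APP_VERSION=1.2.3 . can be written once in the Compose file:

services:
  app:
    build:
      context: .
      dockerfile: docker/Dockerfile
      args:
        APP_VERSION: "1.2.3"
    image: my/app   # optional: the tag to give the built image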
In practice you'll frequently be iterating on your containers, which means you will need to run local unit tests, then rebuild images, then relaunch containers. Being able to drive the Docker end of this via docker-compose down; docker-compose up --build is more convenient than remembering all of the individual docker build commands you need to run.
The one place where this doesn't work well is if you have a custom base image. So if you have a my/base image, and your application image is built FROM my/base, you need to explicitly run
docker build -t my/base base
docker build -t my/app app
docker run ... my/app
Compose doesn't help with the multi-level docker-build sequence; you'll have to explicitly docker build the base image.
I want to run M/Monit (https://mmonit.com/) in a docker container and found this Dockerfile: https://github.com/mlebee/docker-mmonit/blob/master/Dockerfile
I'm using it with a simple docker-compose.yml in my test environment:
version: '3'
services:
  mmonit:
    build: .
    ports:
      - "8080:8080"
    #volumes:
    #  - ./db/:/opt/mmonit/db/
It does work, but I want to extend the Dockerfile so that the path /opt/mmonit/db/ is exported as a volume. I'm struggling to implement the following behaviour:
When the volume mapped to /opt/mmonit/db/ is empty (for example on first setup), the files from the install archive should be written to the volume. The db folder is part of the archive.
When the database file /opt/mmonit/db/mmonit.db already exists in the volume, it should not be overwritten in any circumstances.
I do have an idea how to script the required operations / checks in bash, but I'm not even sure if it would be better to replace the ENTRYPOINT with a custom start script or if it should be done by modifying the Dockerfile only.
That's why I ask for the recommended way.
In general the strategy you lay out is the correct path; it's essentially what the standard Docker Hub database images do.
The image you link to is a community image, so you shouldn't feel particularly bound to that image's decisions. Given the lack of any sort of license file in the GitHub repository you may not be able to copy it as-is, but it's also not especially complex.
Docker supports two "halves" of the command to run, the ENTRYPOINT and the CMD. The CMD is easy to override on the docker run command line, and if you have both, Docker combines them together into a single command. So a very typical pattern is to put the actual command to run (mmonit -i) as the CMD, and have the ENTRYPOINT be a wrapper script that does the required setup and then runs exec "$@".
#!/bin/sh
# I am the Docker entrypoint script
# Create the database, but only if it does not already exist:
if ! test -f /opt/mmonit/db/mmonit.db; then
  # copy the saved contents into the (possibly empty, volume-mounted) db directory
  cp -a /opt/mmonit/db_base/. /opt/mmonit/db/
fi
# Replace this script with the CMD
exec "$@"
In your Dockerfile, then, you'd specify both the CMD and ENTRYPOINT:
# ... do all of the installation ...

# Make a backup copy of the preinstalled data
# (assumes the WORKDIR is /opt/mmonit after the installation steps)
RUN cp -a db db_base

# Install the custom entrypoint script
COPY entrypoint.sh /opt/mmonit/bin/
RUN chmod +x /opt/mmonit/bin/entrypoint.sh

# Standard runtime metadata
USER mmonit
EXPOSE 8080

# Important: this must use JSON-array syntax
ENTRYPOINT ["/opt/mmonit/bin/entrypoint.sh"]
# Can be either JSON-array or bare-string syntax
CMD /opt/mmonit/bin/mmonit -i
I would definitely make these kinds of changes in a Dockerfile, either starting FROM that community image or building your own.
I found the following guideline to set up a Jupyter notebook locally:
version: "3"
services:
  datascience-notebook:
    image: jupyter/datascience-notebook
    volumes:
      - /Absolute/Path/To/Where/Your/Notebook/Files/Will/Be/Saved:/home/jovyan/work
    ports:
      - 8888:8888
    container_name: datascience-notebook-container
Now I want to add one more library to this image. The command is conda install -c conda-forge fbprophet. It's explained here how to achieve it with a Dockerfile. However, how can I achieve that using Compose?
You can override the entrypoint in the Compose file. Since this replaces the ENTRYPOINT the image inherits from its Dockerfile (or any ancestor image), you need to make sure you still invoke that original entrypoint command yourself.
The entrypoint of jupyter/base-notebook (the base of the image you are using) is
ENTRYPOINT ["tini", "-g", "--"]
Adding something like the following to the Compose file may do what you want (I haven't tried it). The && only works inside a shell, so the override has to wrap both the install and the original tini entrypoint in one, and because entrypoint: also clears the image's default CMD, that has to be restated as command::
entrypoint:
  - bash
  - -c
  - conda install --yes -c conda-forge fbprophet && exec tini -g -- "$$0" "$$@"
command: start-notebook.sh   # the image's default CMD, restated
But the downside is that the install will run every time the container starts, slowing down container startup. So the recommended way is the Dockerfile-based solution you mention in your question.
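A minimal sketch of that Dockerfile route (untested; it just bakes the library from the question into a derived image and has Compose build it instead of pulling the stock image):

# Dockerfile
FROM jupyter/datascience-notebook
RUN conda install --yes -c conda-forge fbprophet

# docker-compose.yml
version: "3"
services:
  datascience-notebook:
    build: .
    volumes:
      - /Absolute/Path/To/Where/Your/Notebook/Files/Will/Be/Saved:/home/jovyan/work
    ports:
      - 8888:8888
    container_name: datascience-notebook-container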